Author Topic: Test Net for Advanced Users  (Read 266026 times)


Offline xeroc

  • Board Moderator
  • Hero Member
  • *****
  • Posts: 12922
  • ChainSquad GmbH
    • View Profile
    • ChainSquad GmbH
  • BitShares: xeroc
  • GitHub: xeroc
We should stick with 10-second blocks first and release a user-friendly wallet,
and then we can begin the marketing work.
Then we try 5 seconds, then 3 seconds, then 2 seconds, then 1 second.
Every time we make an improvement, we should treat it as a big thing; it's a chance to push the marketing work.

In fact we have made so many great things,
but we give them to the public too easily,
and people don't cherish what they get too easily.


I support this idea  +5%
Me too .. +1

Offline CalabiYau

We should stick with 10-second blocks first and release a user-friendly wallet,
and then we can begin the marketing work.
Then we try 5 seconds, then 3 seconds, then 2 seconds, then 1 second.
Every time we make an improvement, we should treat it as a big thing; it's a chance to push the marketing work.

In fact we have made so many great things,
but we give them to the public too easily,
and people don't cherish what they get too easily.


I support this idea  +5%

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
BM's commit seems to have broken the build of the tests. However, if you just do a make witness_node, that should work. (Or you can use my automatically generated Docker build, which was pushed 10 minutes after the commit. :)

https://hub.docker.com/r/sile16/graphene-witness/

I'm now pushing each commit as a separate tag
Thanks. Running with latest commit now.
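
For anyone who would rather pull that image than build from source, here is a minimal sketch of what that could look like. The port mappings, the data-directory mount point, and the assumption that the image's default command starts witness_node are all my guesses, not something confirmed on the Docker Hub page, so check the tags and the image description before copying this.
Code: [Select]
# rough sketch only -- not tested against this exact image
docker pull sile16/graphene-witness            # pulls :latest; per-commit tags also exist per the post above
docker run -d --name graphene-witness \
  -p 1776:1776 -p 8090:8090 \                  # assumed P2P and RPC ports
  -v "$PWD/test_net:/data" \                   # assumed data-dir mount point inside the container
  sile16/graphene-witness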
BitShares committee member: abit
BitShares witness: in.abit

Offline Stan

  • Hero Member
  • *****
  • Posts: 2908
  • You need to think BIGGER, Pinky...
    • View Profile
    • Cryptonomex
  • BitShares: Stan
In fact we have made so many great things,
but we give them to the public too easily,
and people don't cherish what they get too easily.

Interesting point!

I can see the headlines now...
BitShares Doubles Speed
BitShares Doubles Speed Again
BitShares Doubles Speed Yet Again
and so on...

Why, that's three whole news cycle Announcements right there!

:)
Anything said on these forums does not constitute an intent to create a legal obligation or contract of any kind.   These are merely my opinions which I reserve the right to change at any time.

Offline alt

  • Hero Member
  • *****
  • Posts: 2821
    • View Profile
  • BitShares: baozi
We should stick with 10-second blocks first and release a user-friendly wallet,
and then we can begin the marketing work.
Then we try 5 seconds, then 3 seconds, then 2 seconds, then 1 second.
Every time we make an improvement, we should treat it as a big thing; it's a chance to push the marketing work.

In fact we have made so many great things,
but we give them to the public too easily,
and people don't cherish what they get too easily.

In the interest of not slipping the release date, we are going to fall back to 3- or 5-second blocks using the current P2P code. Then, after we update the P2P code, we can move to 2-second and ultimately 1-second block times.

Ben is in the process of preparing instructions for committee members on how to change the block interval, and we plan to update the test network dynamically to prove that we can do this on a live network.

 +5% +5% step by step
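
Out of curiosity, here is a rough sketch of the kind of cli_wallet call a committee member might use for that change. This is only my guess at the shape of the command: the proposing account, expiration time, and exact parameter set below are placeholders, and Ben's instructions will be the authoritative reference.
Code: [Select]
# sketch only: propose reducing block_interval to 3 seconds via the committee
# (assumes propose_parameter_change is available in this build's cli_wallet;
#  the account name, expiration, and values are placeholders)
unlocked >>> propose_parameter_change "my-committee-account" "2015-09-01T00:00:00" {"block_interval": 3} true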
« Last Edit: August 25, 2015, 01:24:48 pm by alt »

Offline Troglodactyl

  • Hero Member
  • *****
  • Posts: 960
    • View Profile
In the interest of not slipping the release date, we are going to fall back to 3- or 5-second blocks using the current P2P code. Then, after we update the P2P code, we can move to 2-second and ultimately 1-second block times.

Ben is in the process of preparing instructions for committee members on how to change the block interval, and we plan to update the test network dynamically to prove that we can do this on a live network.

Excellent.  As my martial arts instructor always says: "train the correct movement, and speed will come easily later."

iHashFury

  • Guest
I forgot to export BOOST_ROOT=$HOME/tmp/boost_1_57_0

and to point cmake at it with

cmake -DBOOST_ROOT=$HOME/tmp/boost_1_57_0 .

Code: [Select]
#as user
export BOOST_ROOT=$HOME/tmp/boost_1_57_0
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git pull
#git checkout test1
git checkout master
git submodule update --init --recursive
#make clean
#cmake -DCMAKE_BUILD_TYPE=Debug .
cmake -DBOOST_ROOT=$BOOST_ROOT .
make

But I got it built on Ubuntu (change your Boost folder as required).
You could also try make clean before running cmake and make.

I haven't got it working on ARM yet.

Offline sudo

  • Hero Member
  • *****
  • Posts: 2255
    • View Profile
  • BitShares: ags
In the interest of not slipping the release date, we are going to fall back to 3- or 5-second blocks using the current P2P code. Then, after we update the P2P code, we can move to 2-second and ultimately 1-second block times.

Ben is in the process of preparing instructions for committee members on how to change the block interval, and we plan to update the test network dynamically to prove that we can do this on a live network.

 +5% +5% step by step

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
That has always allowed me to restart, but now I can't without this crashing. Here's the output:

Code: [Select]
./witness: line 7:  32471 Segmentation fault      ./witness_node --resync-blockchain -d test_net

I also encountered the segmentation fault problem; most of the time it's due to producing a block while out of sync. See https://github.com/cryptonomex/graphene/issues/261

If the program is crashing, try running it inside gdb:

Code: [Select]
$ gdb
...
(gdb) file ./witness_node
...
(gdb) set args --resync-blockchain -d test_net
...
(gdb) run
...

At any time you can shut the program down by pressing Ctrl+C (which breaks into gdb) and then typing signal SIGINT:
Code: [Select]
(gdb) signal SIGINT

After a crash, dump the backtrace and post it here or on GitHub.
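
For reference, capturing that backtrace once the crash drops you back at the gdb prompt looks roughly like this (these are standard gdb commands; set logging on writes to gdb.txt in the current directory by default):
Code: [Select]
(gdb) set logging on          # save everything below to gdb.txt
(gdb) bt full                 # backtrace of the crashing thread, with local variables
(gdb) thread apply all bt     # backtraces of all threads
(gdb) set logging off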
BitShares committee member: abit
BitShares witness: in.abit

Offline puppies

  • Hero Member
  • *****
  • Posts: 1659
    • View Profile
  • BitShares: puppies
BM's commit seems to have broken the build of the tests. However, if you just do a make witness_node, that should work. (Or you can use my automatically generated Docker build, which was pushed 10 minutes after the commit. :)

https://hub.docker.com/r/sile16/graphene-witness/

I'm now pushing each commit as a separate tag
thanks
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline cryptosile

  • Full Member
  • ***
  • Posts: 56
    • View Profile
BM's commit seems to have broken the build of the tests. However, if you just do a make witness_node, that should work. (Or you can use my automatically generated Docker build, which was pushed 10 minutes after the commit. :)

https://hub.docker.com/r/sile16/graphene-witness/

I'm now pushing each commit as a separate tag

Offline Thom

Confirmed. Same problem here. I'm on Debian 8.1. The last step in my build process was:

Quote
[ 85%] Building CXX object libraries/p2p/CMakeFiles/graphene_p2p.dir/node.cpp.o
Injustice anywhere is a threat to justice everywhere - MLK |  Verbaltech2 Witness Reports: https://bitsharestalk.org/index.php/topic,23902.0.html

Offline puppies

  • Hero Member
  • *****
  • Posts: 1659
    • View Profile
  • BitShares: puppies
Latest master won't build for me on Ubuntu 14.04 (commit cb3c23a17b0ea99816e3c3f35cbcc7b0cbce9f42). I am getting:
Code: [Select]
/usr/include/c++/4.8/bits/shared_ptr.h:614:42:   required from ‘std::shared_ptr<_Tp1> std::make_shared(_Args&& ...) [with _Tp = graphene::p2p::peer_connection; _Args = {std::shared_ptr<graphene::p2p::node>&}]’
/home/user/src/graphene8.24/graphene/libraries/p2p/node.cpp:36:67:   required from here
/usr/include/c++/4.8/ext/new_allocator.h:120:4: error: no matching function for call to ‘graphene::p2p::peer_connection::peer_connection(std::shared_ptr<graphene::p2p::node>&)’
  { ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
    ^
/usr/include/c++/4.8/ext/new_allocator.h:120:4: note: candidate is:
In file included from /home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/node.hpp:5:0,
                 from /home/user/src/graphene8.24/graphene/libraries/p2p/node.cpp:1:
/home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/peer_connection.hpp:55:9: note: graphene::p2p::peer_connection::peer_connection()
   class peer_connection : public message_oriented_connection_delegate,
         ^
/home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/peer_connection.hpp:55:9: note:   candidate expects 0 arguments, 1 provided
make[2]: *** [libraries/p2p/CMakeFiles/graphene_p2p.dir/node.cpp.o] Error 1
make[1]: *** [libraries/p2p/CMakeFiles/graphene_p2p.dir/all] Error 2
make: *** [all] Error 2
user@user-desktop:~/src/graphene8.24/graphene$

Any help would be appreciated. I am going to try to check out a different commit and build that.
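
Something like the following is what I have in mind; the commit hash is a placeholder to fill in from git log, and building only the witness_node target follows the workaround quoted earlier in the thread:
Code: [Select]
# sketch of falling back to an earlier commit; <earlier-commit> is a placeholder
git log --oneline -15                     # pick a commit from before the p2p change
git checkout <earlier-commit>
git submodule update --init --recursive
cmake -DCMAKE_BUILD_TYPE=Release .
make witness_node                         # build only the witness node, per the workaround above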
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline clayop

  • Hero Member
  • *****
  • Posts: 2033
    • View Profile
    • Bitshares Korea
  • BitShares: clayop
Since we now have 10-15 active testers, can we have 10-15 witnesses in the next testnet to avoid centralization issues?
Bitshares Korea - http://www.bitshares.kr
Vote for me and see the Korean BitShares community grow
delegate-clayop

Offline Thom

Latest master should still work with the test net.

@BM - Was that your answer to my post about your optimizations? I'll just build it from scratch (by "master" I presume you don't mean the test1-tagged commit), so I'll comment out the git checkout test1 and see if it picks up your changes. I believe it will, from looking at the commit log in git. I'm just not sure how the test1 tag relates, is all.

I was able to kill the witness with Ctrl+C, save the blockchain folder, and resume producing blocks without a hitch. All of my parameters are now in the config.ini file, and the witness invocation is now only: ./witness_node -d test_net

Removing the object_database folder seems to be required to recover a functional witness after the seg faults happen.
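
Put together, the recovery sequence looks roughly like this. The path to object_database below is my assumption; adjust it to wherever your node actually keeps it under the data directory:
Code: [Select]
# rough recovery sketch after a segfault -- the object_database path is an assumption
rm -rf test_net/object_database     # force the object database to be rebuilt
./witness_node -d test_net          # restart; all other parameters come from config.ini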

Here is my config.ini if anyone is interested:

Code: [Select]
#Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776

# P2P nodes to connect to on startup (may specify multiple times)
seed-node = 45.55.6.216:1776
seed-node = 45.115.36.171:57281
seed-node = 45.55.6.216:37308
seed-node = 104.200.28.117:61705
seed-node = 104.236.51.238:1776
seed-node = 104.156.226.183:60715
seed-node = 104.156.226.183:40479
seed-node = 104.236.255.53:52995
seed-node = 114.92.254.159:62015
seed-node = 114.92.254.159:62015
seed-node = 176.221.43.130:33323
seed-node = 176.9.234.167:34858
seed-node = 176.9.234.167:57727
seed-node = 178.62.88.151:59148
seed-node = 178.62.88.151:41574
seed-node = 188.226.252.109:58843

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =

# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090

# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =

# The TLS certificate file for this server
# server-pem =

# Password for this certificate                                             
# server-pem-password =

# File to read Genesis State from
#genesis-json = aug-14-test-genesis.json
#genesis-json = aug-19-puppies-test-genesis.json
genesis-json = aug-20-test-genesis.json

# JSON file specifying API permissions
# api-access =

# Enable block production, even if the chain is stale.
enable-stale-production = true

# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false

# Allow block production, even if the last block was produced by the same witness.
allow-consecutive = false

# ID of witness controlled by this node (e.g. "1.6.0", quotes are required, may specify multiple times)
#witness-id = "1.6.1530"
witness-id = "1.6.1621"

# Tuple of [PublicKey, WIF private key] (may specify multiple times)
# delegate.verbaltech
private-key = ["GPH<public signing key here>","<private signing key value here>"]

# Account ID to track history for (may specify multiple times)
# track-account =

# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
bucket-size = [15,60,300,3600,86400]

# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000

# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error

# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file

# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr

# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p

« Last Edit: August 24, 2015, 11:38:18 pm by Thom »
Injustice anywhere is a threat to justice everywhere - MLK |  Verbaltech2 Witness Reports: https://bitsharestalk.org/index.php/topic,23902.0.html