Author Topic: Test Net for Advanced Users  (Read 265986 times)


Offline cryptosile

  • Full Member
  • ***
  • Posts: 56
    • View Profile
The engineer in me says make the system work for 1s, but then actually run it at 10s for an order-of-magnitude safety margin.

Offline Troglodactyl

  • Hero Member
  • *****
  • Posts: 960
    • View Profile
Under the new protocol transactions only get sent once (vs. twice under the current protocol), so bandwidth should be lower, and block latencies will be lower because we do not send transaction data with the blocks like we do today.

So assuming 250ms ping times (125ms one way), the new system should support forkless operation with witnesses up to 6 hops apart as long as block production time + ((block validation time + block transmission time) * 6) adds up to less than 250ms.  At 4 hops of separation between witnesses, that allows 500ms for block production time + ((block validation time + block transmission time) * 4).  Does this sound right?
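For concreteness, here is a small C++ sketch of that budget. The 125 ms one-way latency is the assumption above; the production, validation, and transmission figures are made-up placeholders, not measurements:

Code: [Select]
// Hypothetical latency-budget check: a block stays forkless if it reaches the
// farthest witness before the next 1-second slot. Numbers are illustrative only.
#include <cstdio>

int main()
{
    const double block_interval_ms  = 1000.0;
    const double one_way_latency_ms = 125.0;  // half of the assumed 250 ms ping

    const double validation_ms   = 20.0;      // assumed per-hop block validation
    const double transmission_ms = 10.0;      // assumed per-hop transmission
    const double production_ms   = 50.0;      // assumed block production time

    for (int hops = 1; hops <= 6; ++hops)
    {
        double total = production_ms
                     + hops * (one_way_latency_ms + validation_ms + transmission_ms);
        std::printf("%d hops: %4.0f ms total -> %s\n", hops, total,
                    total < block_interval_ms ? "fits in a 1s slot" : "misses the slot");
    }
    return 0;
}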

Is the one-second block time chosen just because it sounds good and is a challenging goal, or would moving to 2-second intervals be significantly detrimental to some particular use case?  1 second seems doable, just wondering.

Offline triox

  • Full Member
  • ***
  • Posts: 170
    • View Profile
  • BitShares: triox
Guys, what am I doing wrong when trying to import_balance?

Code: [Select]
0 exception: unspecified
3030001 tx_missing_active_auth: missing required active authority

Offline clayop

  • Hero Member
  • *****
  • Posts: 2033
    • View Profile
    • Bitshares Korea
  • BitShares: clayop
Under the new protocol transactions only get sent once (vs. twice under the current protocol), so bandwidth should be lower, and block latencies will be lower because we do not send transaction data with the blocks like we do today.

 +5%
Bitshares Korea - http://www.bitshares.kr
Vote for me and see the Korean Bitshares community grow
delegate-clayop

Offline bytemaster

Under the new protocol transactions only get sent once (vs. twice under the current protocol), so bandwidth should be lower, and block latencies will be lower because we do not send transaction data with the blocks like we do today.

For the latest updates check out my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline bytemaster

The network is robust because it is centralized right now.  When the network is under heavy load, network propagation delays cause minority witnesses to miss their slots.

We have been studying this problem and have concluded that the current P2P algorithm is not well suited to 1 second blocks.

Assuming two nodes are across the globe from each other with a 250 ms one-way latency, the following handshake takes 0.75 seconds even with zero data and CPU delays:

notify inventory
request item
receive item

That is barely good enough for two peers to keep in sync if directly connected.  But when two peers are connected through a middleman, we are talking 1.5 seconds, which breaks down with 1-second blocks.
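For concreteness, a quick sketch of that arithmetic, treating 250 ms as the one-way message delay behind the 0.75 second figure (an assumption; nothing here is measured):

Code: [Select]
// Compare the old announce/request/send handshake with a direct push,
// per relay hop, at an assumed 250 ms one-way delay between distant peers.
#include <cstdio>

int main()
{
    const double one_way_s = 0.250;

    // Old protocol: notify inventory -> request item -> receive item = 3 messages per hop.
    // Push protocol: the item itself is sent immediately = 1 message per hop.
    for (int hops = 1; hops <= 2; ++hops)
        std::printf("%d hop(s): announce/request/send = %.2f s, push = %.2f s\n",
                    hops, hops * 3 * one_way_s, hops * one_way_s);
    // Prints 0.75 s vs 0.25 s directly connected, and 1.50 s vs 0.50 s through a middleman.
    return 0;
}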

This is the one part of the code that we have been reusing from BTS 1.0, and it appears it is not up to the task.  So today Ben and I came up with a new, simple protocol that should dramatically improve network performance.

Code: [Select]
# Network Protocol 2

Building a low-latency network requires P2P nodes that have low-latency
connections and a protocol designed to minimize latency. For the purpose
of this document we will assume that two nodes are located on opposite
sides of the globe with a ping time of 250ms.


## Announce, Request, Send Protocol
Under the prior network architecture, transactions and blocks were broadcast
in a manner similar to the Bitcoin protocol: inventory messages notify peers of
transactions and blocks, then peers fetch the transaction or block from one
peer.  After validating the item a node will broadcast an inventory message to
its peers.

Under this model it takes 0.75 seconds for a peer to communicate a transaction
or block to another peer, even if its size were zero and there were no processing overhead.
This level of performance is unacceptable for a network attempting to produce one block
every second.

This prior protocol also sent every transaction twice: initial broadcast, and again as
part of a block. 


## Push Protocol
To minimize latency, each node needs to immediately broadcast the data it receives
to its peers after validating it.  Given that the average transaction size is less than
100 bytes, it is almost as efficient to send the transaction as it is to send
the notice (assuming a 20-byte transaction id).

Each node implements the following protocol:


    onReceiveTransaction( from_peer, transaction )
        if( isKnown( transaction.id() ) )
            return

        markKnown( transaction.id() )

        if( !validate( transaction ) )
           return

        for( peer : peers )
          if( peer != from_peer )
             send( peer, transaction )


    onReceiveBlock( from_peer, block_summary )
        if( isKnown( block_summary ) )
            return

        full_block = reconstructFullBlock( from_peer, block_summary )
        if( !full_block )
            disconnect( from_peer )
            return

        markKnown( block_summary )

        if( !pushBlock( full_block ) )
            disconnect( from_peer )
            return

        for( peer : peers )
           if( peer != from_peer )
             send( peer, block_summary )
             

     onConnect( new_peer, new_peer_head_block_num )
        if( peers.size() >= max_peers )
           send( new_peer, peers )
           disconnect( new_peer )
           return
         
        while( new_peer_head_block_num < our_head_block_num )
           sendFullBlock( new_peer, ++new_peer_head_block_num )

        new_peer.synced = true
        for( peer : peers )
            send( peer, new_peer )
   
     onReceivePeers( from_peer, peers )
        addToPotentialPeers( peers )

     onUpdateConnectionsTimer
        if( peers.size() < desired_peers )
          connect( random_potential_peer )

     onFullBlock( from_peer, full_block )
        if( !pushBlock( full_block ) ) disconnect( from_peer )

     onStartup
        init_potential_peers from config
        start onUpdateConnectionsTimer
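
For readers who prefer compilable code, here is a minimal C++ sketch of the de-duplicating flood at the core of the push protocol above. The Peer and Transaction types and the validate/send helpers are placeholders for illustration, not actual BitShares classes:

Code: [Select]
#include <string>
#include <unordered_set>
#include <vector>

struct Transaction { std::string id; std::string payload; };   // placeholder type
struct Peer        { int socket = -1; };                        // placeholder type

class Node
{
public:
    void on_receive_transaction( Peer& from_peer, const Transaction& tx )
    {
        if( known_ids_.count( tx.id ) )     // already seen: stop the flood here
            return;
        known_ids_.insert( tx.id );         // mark known before relaying

        if( !validate( tx ) )               // never relay invalid data
            return;

        for( Peer& peer : peers_ )
            if( &peer != &from_peer )
                send( peer, tx );           // push the full transaction, no notice round trip
    }

private:
    bool validate( const Transaction& ) { return true; }    // placeholder
    void send( Peer&, const Transaction& ) {}                // placeholder

    std::vector<Peer>               peers_;
    std::unordered_set<std::string> known_ids_;
};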
     


For the latest updates check out my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
Here is my theory:
The current testnet is too centralized.
Because the init node has about 90% of the active witnesses, if it doesn't receive a block produced by another witness (due to a networking issue or the like), most of the time the next slot belongs to the init node itself, so it just produces one, and the next, and the next, and its chain will certainly end up the longest. That's why the network looks 'robust', but it isn't really, and it isn't safe; we could say we are effectively under a 51% attack.
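A quick back-of-the-envelope sketch of that theory; the 90% share is the rough figure quoted above, not a measured value:

Code: [Select]
// If the init node holds ~90% of active witness slots, then after missing a
// foreign block it can usually keep extending its own chain for many slots
// in a row, which then wins as the longest chain. Illustrative numbers only.
#include <cmath>
#include <cstdio>

int main()
{
    const double p = 0.9;   // assumed share of slots held by the init node
    for (int k = 1; k <= 10; ++k)
        std::printf("P(next %2d slots all belong to the init node) = %.2f\n", k, std::pow(p, k));
    std::printf("Expected run of consecutive init-node slots: %.1f\n", p / (1.0 - p));
    return 0;
}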
BitShares committee member: abit
BitShares witness: in.abit

Offline puppies

  • Hero Member
  • *****
  • Posts: 1659
    • View Profile
  • BitShares: puppies
puppies, I have just "flooded" you with some 1 CORE transactions.

On my second round I sent 8,421  :o :o :o :o in 1 CORE increments. This was done brute-force, Hyper Olympics style (https://www.youtube.com/watch?v=8va4YGGA3wE), so I am not convinced I actually submitted that many in one minute.

Anyway, my witness has not died; can you check whether you got something?
Sorry betax, I'm at work and don't have the puppies keys on any VPS.

Don't worry!

History only shows
Code: [Select]
unlocked >>> get_account_history puppies 10
get_account_history puppies 10
2015-08-21T03:05:27 Update Account 'puppies'   (Fee: 20.14453 CORE)
2015-08-21T02:36:35 Update Account 'puppies'   (Fee: 20.14062 CORE)
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE

unlocked >>>
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline clayop

  • Hero Member
  • *****
  • Posts: 2033
    • View Profile
    • Bitshares Korea
  • BitShares: clayop
Here is a video capture about an error I experienced.

I ran 100*10 transfers on my witness node. When my witness had its turn to produce, it failed to produce a block.

http://youtu.be/D1eRD2nIHJk
Bitshares Korea - http://www.bitshares.kr
Vote for me and see the Korean Bitshares community grow
delegate-clayop

Offline mudshark79

  • Full Member
  • ***
  • Posts: 76
    • View Profile
I see 16 nodes connected now and quite a lot of resyncing happening all the time. CPU load never drops under 15% now...  (but I can't say what CPU rating that VPS actually has)  8)

Offline clayop

  • Hero Member
  • *****
  • Posts: 2033
    • View Profile
    • Bitshares Korea
  • BitShares: clayop
Just found some minutes to look at it again (partly for recreational purposes), and now I saw 1.6.1537 filling his block. So I guess it really is that simple: most witnesses do not actually produce blocks yet, or anymore, even though I see a lot of connected nodes at the moment and one would expect them to be manned and producing?

1.6.1537 is me (delegate-clayop). I was resyncing.
Bitshares Korea - http://www.bitshares.kr
Vote for me and see the Korean Bitshares community grow
delegate-clayop

Offline mudshark79

  • Full Member
  • ***
  • Posts: 76
    • View Profile
Somehow I think I lost the private signing key, because dump_private_keys only shows one set, and it belongs to the account itself. I guess I will have to register a new witness... unless there's a way to generate a new signing key pair?

Btw: How do you read this:

Code: [Select]
1522956ms th_a       witness.cpp:240               block_production_loo ] slot: 1 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T16:25:21 now: 2015-08-21T16:25:21
1523956ms th_a       witness.cpp:240               block_production_loo ] slot: 2 scheduled_witness: 1.6.56 scheduled_time: 2015-08-21T16:25:22 now: 2015-08-21T16:25:22
1524185ms th_a       application.cpp:348           handle_block         ] Got block #76718 from network
1524234ms th_a       application.cpp:443           get_item             ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524234ms th_a       application.cpp:451           get_item             ] Serving up block #76718
1524386ms th_a       application.cpp:443           get_item             ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524386ms th_a       application.cpp:451           get_item             ] Serving up block #76718

Did Witness 1.6.1537 produce his block? Why is the init-witness directly "queueing up" or getting another slot then?

I read it as 1.6.1537 missed the slot, and 1.6.56 filled in.

Ben had this problem too.  Will look into this.

Just found some minutes to look at it again (partly for recreational purposes), and now I saw 1.6.1537 filling his block. So I guess it really is that simple: most witnesses do not actually produce blocks yet, or anymore, even though I see a lot of connected nodes at the moment and one would expect them to be manned and producing?

Offline bytemaster

Left it running overnight and found this:

Quote
2121482ms th_a       application.cpp:348           handle_block         ] Got block #43785 from network
2122598ms th_a       application.cpp:348           handle_block         ] Got block #43786 from network
2123581ms th_a       application.cpp:348           handle_block         ] Got block #43787 from network
2124485ms th_a       application.cpp:348           handle_block         ] Got block #43788 from network
2125530ms th_a       application.cpp:348           handle_block         ] Got block #43789 from network
2126480ms th_a       application.cpp:348           handle_block         ] Got block #43790 from network
2131760ms th_a       application.cpp:348           handle_block         ] Got block #43791 from network
2132949ms th_a       application.cpp:348           handle_block         ] Got block #43795 from network
2132949ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2133339ms th_a       application.cpp:348           handle_block         ] Got block #43796 from network
2133340ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2133543ms th_a       application.cpp:348           handle_block         ] Got block #43797 from network
2133543ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2134473ms th_a       application.cpp:348           handle_block         ] Got block #43798 from network
2134474ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2135482ms th_a       application.cpp:348           handle_block         ] Got block #43799 from network
2135482ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2136479ms th_a       application.cpp:348           handle_block         ] Got block #43800 from network
2136479ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2137482ms th_a       application.cpp:348           handle_block         ] Got block #43801 from network
2137482ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
[......snip......]
2160479ms th_a       application.cpp:348           handle_block         ] Got block #43822 from network
2160479ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link.
2161493ms th_a       application.cpp:348           handle_block         ] Got block #43823 from network
2161494ms th_a       application.cpp:370           handle_block         ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
[..snip..]
2162497ms th_a       application.cpp:348           handle_block         ] Got block #43824 from network
2162498ms th_a       application.cpp:370           handle_block         ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
[..etc..]

This is very useful information because it shows that the network code actually skipped some blocks!
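A tiny sketch of why those log lines imply skipped blocks: the received block numbers jump from #43791 to #43795, so anything built on the missing range cannot link to our head. The sample numbers below are copied from the log; the code itself is only illustrative:

Code: [Select]
// Scan received block numbers and report any gaps (blocks that never arrived).
#include <cstdio>
#include <vector>

int main()
{
    const std::vector<int> received = { 43785, 43786, 43787, 43788, 43789, 43790,
                                        43791, 43795, 43796, 43797, 43798 };
    for (std::size_t i = 1; i < received.size(); ++i)
        if (received[i] != received[i - 1] + 1)
            std::printf("gap: blocks %d through %d were never received\n",
                        received[i - 1] + 1, received[i] - 1);
    return 0;
}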
For the latest updates check out my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline bytemaster

Somehow I think I lost the private signing key, because dump_private_keys only shows one set, and it belongs to the account itself. I guess I will have to register a new witness... unless there's a way to generate a new signing key pair?

Btw: How do you read this:

Code: [Select]
1522956ms th_a       witness.cpp:240               block_production_loo ] slot: 1 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T16:25:21 now: 2015-08-21T16:25:21
1523956ms th_a       witness.cpp:240               block_production_loo ] slot: 2 scheduled_witness: 1.6.56 scheduled_time: 2015-08-21T16:25:22 now: 2015-08-21T16:25:22
1524185ms th_a       application.cpp:348           handle_block         ] Got block #76718 from network
1524234ms th_a       application.cpp:443           get_item             ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524234ms th_a       application.cpp:451           get_item             ] Serving up block #76718
1524386ms th_a       application.cpp:443           get_item             ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524386ms th_a       application.cpp:451           get_item             ] Serving up block #76718

Did Witness 1.6.1537 produce his block? Why is the init-witness directly "queueing up" or getting another slot then?

I read it as 1.6.1537 missed the slot, and 1.6.56 filled in.

Ben had this problem too.  Will look into this.
For the latest updates check out my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline betax

  • Hero Member
  • *****
  • Posts: 808
    • View Profile
puppies, I have just "flooded" you with some 1 CORE transactions.

On my second round I sent 8,421  :o :o :o :o in 1 CORE increments. This was done brute-force, Hyper Olympics style (https://www.youtube.com/watch?v=8va4YGGA3wE), so I am not convinced I actually submitted that many in one minute.

Anyway, my witness has not died; can you check whether you got something?

How did you do this? Can you send me some to delegate-clayop?

Physically it is impossible, hence my question. I am down at the moment... I'll try tomorrow.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads