Author Topic: Critical Bug: Blockchain sync jams at block 2821722 -- Losing New Users!


Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
I've been pushing it for a month+.   Nothing's happening. 

Well, that's a general problem with BitShares right now. Nobody has been responsible since bm left, and nobody has an incentive to do it.
Perhaps @dannotestein and/or @emf could help? There is an active worker for blockchain maintenance.
BitShares committee member: abit
BitShares witness: in.abit

Offline pc

  • Hero Member
  • *****
  • Posts: 1530
    • View Profile
    • Bitcoin - Perspektive oder Risiko?
  • BitShares: cyrano
However, I'm generally strongly against snapshots. They're an out-of-band solution, they can be tampered with, and they bypass the whole idea of blockchain tech in the first place: distributed, peer-to-peer validation, not dependence on a single source. The growing use of snapshots is basically a tacit admission that there are problems with a coin's peer base or parameters. And that would apply here exactly.

I agree (except about the "they can be tampered with" part). The snapshot is just a workaround.

Re fast network: I'm on relatively standard DSL, I think 150 KB/sec download. It's over WiFi, maybe that's an issue, but how fast is 'fast enough'?
What exactly are the speed/latency requirements then? Do we really want to go there?

This point in the transport layer needs to be found and fixed -- relax the timings to handle the maximum data size on any average network.

Well, if the block is ~1MB and the timeout is 1 second - you do the maths :-).
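Spelled out with the rough figures from this thread (150 KB/sec line, ~1 MB block; both numbers are quoted above, not measured):

Code: [Select]
$ echo "scale=1; 1024 / 150" | bc    # ~1 MB block over a 150 KB/s line, in seconds
6.8

So the transfer alone takes roughly 7 seconds on that line; a 1-second timeout can never be met there, no matter how healthy the connection is.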

Do we want to go there? Yes and no.
Yes because BitShares claims to be a high-performance blockchain capable of processing thousands of transactions per second. It is an acknowledged fact that nodes will have to scale up their hardware, should we ever get there.

For short bursts in blocksize, however, this applies only to witness nodes. Normal nodes should be able to sync large blocks, at the cost of some delay. As @abit pointed out, the timeout need not apply while syncing.

I've been pushing it for a month+.   Nothing's happening. 

Well, that's a general problem with BitShares right now. Nobody has been responsible since bm left, and nobody has an incentive to do it.
Bitcoin - Perspektive oder Risiko? ISBN 978-3-8442-6568-2 http://bitcoin.quisquis.de

Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
IOW you need a fast network connection for a normal sync.

It's probably much simpler (and faster) to download a snapshot of the blockchain, for example this one: http://seeds.quisquis.de/bts-chain-20160708.tar.bz2

Thanks much for the snapshot link, pc.   I may have to cave in and use that, as I suspect others have.

However, I'm generally strongly against snapshots. They're an out-of-band solution, they can be tampered with, and they bypass the whole idea of blockchain tech in the first place: distributed, peer-to-peer validation, not dependence on a single source. The growing use of snapshots is basically a tacit admission that there are problems with a coin's peer base or parameters. And that would apply here exactly.

Re fast network: I'm on relatively standard DSL, I think 150 KB/sec download. It's over WiFi, maybe that's an issue, but how fast is 'fast enough'?
What exactly are the speed/latency requirements then? Do we really want to go there?

I have not had a network issue like this with any other coin, whether bitcoin-derived or a completely different tech, and I've run dozens or hundreds of coins over the years. So I don't think it's valid to require a 'faster network'.

I think instead the code must be adjusted to handle the timings/delays/latencies introduced by the giant block(s). So far, that's my best guess for the bug here: even though the protocol can handle the large block, some other layer, like the transport layer, is really expecting much smaller blocks, and thus hits a timeout and rejects the block.

This point in the transport layer needs to be found and fixed -- relax the timings to handle the maximum data size on any average network.

===
I would really like to encourage the BTS community/developers to get to the bottom of this and fix it. The bug has been in there since January; it's now August.
How many users have been lost in 8 months?

I've been pushing it for a month+.   Nothing's happening. 

Sure, I could just get the snapshot. But what about all the other new users hitting this? They won't find it. What about the next giant block that jams things up? We'll need a new snapshot then. We could all just forget the blockchain and send snapshots to each other...


Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
Thanks for the feedback, xeroc. Did you compile it yourself from the GitHub source?

Here is my exact sequence to reproduce the bug.   Do you see any errors? I'm running:

boost  1.58.0
g++ (Gentoo 4.9.3 p1.5, pie-0.6.4) 4.9.3
gcc (Gentoo 4.9.3 p1.5, pie-0.6.4) 4.9.3
Linux kernel 3.18.25

EXACT SEQ TO REPRODUCE JAM ON BLOCK 2821722:
Code: [Select]
git clone https://github.com/bitshares/bitshares-2.git bitshares-2.git
cd bitshares-2.git
git submodule update --init --recursive
cmake -DCMAKE_BUILD_TYPE=Release .
make
programs/witness_node/witness_node

These steps are from:
  http://docs.bitshares.eu/bitshares/installation/Build.html
except the line
   git submodule update --init --recursive,
which is not mentioned at that URL but is in the README.md included with the source.

It runs successfully, then stops; the last console output is:

3297741ms th_a       application.cpp:523           handle_block         ] Got block: #2820000 time: 2016-01-20T08:02:27 latency: 17488197670 ms from: bhuz  irreversible: 2819982 (-18)

I've let it wait for hours, even overnight, at this point. 
Subsequent investigation with cli_wallet shows it stuck on block 2821722, as I posted above.


Offline pc

  • Hero Member
  • *****
  • Posts: 1530
    • View Profile
    • Bitcoin - Perspektive oder Risiko?
  • BitShares: cyrano
A couple updates:

If I understand the thread you linked in the OP correctly, the reason is simply a very big block that triggers a timeout during transfer: https://bitsharestalk.org/index.php/topic,21157.msg290784.html#msg290784

IOW you need a fast network connection for a normal sync.

It's probably much simpler (and faster) to download a snapshot of the blockchain, for example this one: http://seeds.quisquis.de/bts-chain-20160708.tar.bz2
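If you go that route, something like the following should do it. (The extraction target is an assumption on my part -- inspect the archive layout first, and stop witness_node before replacing anything.)

Code: [Select]
wget http://seeds.quisquis.de/bts-chain-20160708.tar.bz2
tar tjf bts-chain-20160708.tar.bz2 | head     # check the directory layout first
tar xjf bts-chain-20160708.tar.bz2 -C witness_node_data_dir/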
Bitcoin - Perspektive oder Risiko? ISBN 978-3-8442-6568-2 http://bitcoin.quisquis.de

Offline xeroc

  • Board Moderator
  • Hero Member
  • *****
  • Posts: 12922
  • ChainSquad GmbH
    • View Profile
    • ChainSquad GmbH
  • BitShares: xeroc
  • GitHub: xeroc
This is weird .. I recently resynced an empty node on Ubuntu, compiled exactly as described on docs.bitshares.eu, and it worked great.
Unfortunately, I am not a backend dev and have no clue how to debug something like this .. maybe @wackou or @emf can help

Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
A couple updates:

* I increased the RAM to 8gb on my original test machine, Intel cpu.  Problem recurs.
* I tested on another machine with 8gb ram, AMD cpu.  Problem recurs.

So at this point it seems I've ruled out the local hardware. The remaining possibilities are: a configuration error, a network error, a library/compiler issue, or a bug in the BitShares code/protocol.

Do I need to configure my router or open some ports?
I built with boost-1.58 instead of 1.57. I haven't checked the exact versions of the other lib dependencies, but everything seems to have compiled without error.
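For anyone comparing library versions, this is how I read the installed boost version off the headers (standard header location assumed):

Code: [Select]
$ grep '#define BOOST_LIB_VERSION' /usr/include/boost/version.hpp
#define BOOST_LIB_VERSION "1_58"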

And the witness_node runs great apparently, for thousands of blocks.  It just jams on that particular one, and then is stuck.

Are the developers seeing this thread? xeroc, emf, is that you? Is there somewhere better to post this?

Seriously folks, I've been working on this for over a month, and I don't even have a vested interest, just enthusiasm for good technology. But it's not working, so frustration is going to win out soon. And it seems fairly certain there are other people who've run into this, given up, and walked out the door -- you never heard from them.

I'm happy to work through configuration options, network settings, provide more debugging feedback, whatever you need. I'm running on Gentoo, one of the most advanced Linux platforms. Everything on the system is compiled from source, so I can tell you the exact installed version of any library.

Does anyone else here actually have this working on Linux? Try a fresh blockchain download with the latest git code and see whether it works or whether you reproduce this bug. (Save your old blockchain directory so you can get back to it.)



Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
Thanks for the good info, emf.


Ok, I have some more test results. I kept the wallet.json file (though I don't think I've made any accounts in there yet), moved the old witness_node_data_dir/ out of the way, and started programs/witness_node/witness_node. It remakes the data directory and starts a fresh sync from scratch. Exact commands below.
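(The .old name is just what I picked:)

Code: [Select]
mv witness_node_data_dir witness_node_data_dir.old    # keep the old chain so I can get back to it
programs/witness_node/witness_node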
 
It jammed again.  I don't think it's memory.  At the time it's jammed, free reports:
Code: [Select]
$ free
              total        used        free      shared  buff/cache   available
Mem:        4047180     1657724      626496          24     1762960     2017828
Swap:      33554428      573680    32980748

Incidentally, when the witness_node creates the new config.ini, it puts the lines:
Code: [Select]
   # Tuple of [PublicKey, WIF private key] (may specify multiple times)
   private-key = ["___","___"]
where ___ is key data.  It made the same key as before.  Why is that?  Is this an important secret wallet key I need to keep?

Now, when it's jammed, here's the last output from the witness_node console:
Code: [Select]
393792ms th_a       application.cpp:522           handle_block         ] Got block: #2800000 time: 2016-01-19T15:19:54 latency: 15918246023 ms from: delegate.btsnow  irreversible: 2799982 (-18)
396724ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
396745ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
418312ms th_a       application.cpp:522           handle_block         ] Got block: #2810000 time: 2016-01-19T23:41:06 latency: 15888198543 ms from: spectral  irreversible: 2809982 (-18)
442609ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
443822ms th_a       application.cpp:522           handle_block         ] Got block: #2820000 time: 2016-01-20T08:02:27 latency: 15858143053 ms from: bhuz  irreversible: 2819982 (-18)
621003ms ntp        ntp.cpp:177                   read_loop            ] ntp_delta_time updated to -153824800 us


I also monitored tail -f   on the debug log in    witness_node_data_dir/logs/p2p/p2p.log
The last lines are as follows. Note it repeated about 125 copies of the 4-line message about handling block_message_type, with various block ids, and then just hung. (After a while, debug log output resumed; it seems to be just searching for connections and addresses, and it seems to think anybody it does find is on another fork. I missed the exact lines right after the 125 copies; I can track them down if you need them.)

Code: [Select]
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 2643675 (id:002856db20f48cd66a2c88d5f7fd4ef8c732e005) node.cpp:3025
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] sync: client accpted the block, we now have only 5424633 items left to fetch before we're in sync node.cpp:3063
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Cannot pop first element off peer 81.89.101.133:1776's list, its list is empty node.cpp:3094
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Removed item from 162.213.195.203:60862's list of items being processed, still processing 199 blocks node.cpp:3119
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Leaving send_sync_block_to_node_delegate node.cpp:3172
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 199 blocks in the process of being handled node.cpp:3200
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 555 sync items to consider node.cpp:3230
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] stopping processing sync block backlog because we have 200 blocks in progress node.cpp:3282
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks, 1 processed node.cpp:3291
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1095
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] in send_sync_block_to_node_delegate() node.cpp:3013
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1034
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1084
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 2643676 (id:002856dc079cb2039d62492aae802400bb4c6fb4) node.cpp:3025
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] sync: client accpted the block, we now have only 5424632 items left to fetch before we're in sync node.cpp:3063
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Cannot pop first element off peer 81.89.101.133:1776's list, its list is empty node.cpp:3094
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Removed item from 162.213.195.203:60862's list of items being processed, still processing 199 blocks node.cpp:3119
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Leaving send_sync_block_to_node_delegate node.cpp:3172
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 199 blocks in the process of being handled node.cpp:3200
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 554 sync items to consider node.cpp:3230
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] stopping processing sync block backlog because we have 200 blocks in progress node.cpp:3282
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks, 1 processed node.cpp:3291
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1095
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] in send_sync_block_to_node_delegate() node.cpp:3013
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1034
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1084
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 694dd687ca44fbc8d191f1e7d40517d98fc93c90 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 8fcf6fc6bd88181ac11a0fbf43c4b4c950bab1dc size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 2ee9cf866c5329c4813bdd8cd3e122b23e557811 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197

... REPEATS OVER 100 TIMES, WITH VARIOUS block_message_type ...

2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 3dc26eac040509897074ccf84af9a00e008a0414 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 6a16d2f18d9567d02331a65365788ef6fd42a3c8 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type adfbb9188a0e1edeb31dc30e6243a19251d4254d size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197


Ok, what now?   This is very challenging to debug, because this coin is unique, and I've never run any of the precursors, so I don't even know what "right" behavior looks like.

My expectation is it should just sync, up to the same block number as on OpenLedger. 

One concern is whether I'm on the mainnet or a testnet. I posted all the chain-id info previously; could someone confirm that I'm seeing mainnet?
How can I check/confirm that?
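For reference, this is where I'm reading the value from in cli_wallet (full output at the bottom of this post); someone on a known-good mainnet node could compare their chain_id against mine:

Code: [Select]
info
{
  ...
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8",
  ...
}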
The docs/build-ubuntu.md says to compile with -DCMAKE_BUILD_TYPE=Debug, whereas elsewhere the docs say to use "Release", not "Debug" (an example of documentation inconsistency). I forgot which one I used, but presumably that define only controls code optimization and is not a mainnet/testnet flag.
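Actually, cmake caches that setting, so it can be checked after the fact (run in the build directory, bitshares-2.git here):

Code: [Select]
grep CMAKE_BUILD_TYPE CMakeCache.txt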

Or maybe I'm getting connected to nodes/data from an old fork that breaks with the latest code?

Are there critical config.ini options I need to set?

Hopefully the above logs give you some clue about what's going on.


Offline emf

  • Jr. Member
  • **
  • Posts: 21
    • View Profile
The BitShares 2.0 developers made the design decision that full nodes were expected to be running high-end hardware with good network connectivity. Most other coins (BitShares 0.x included) keep their indexes on disk; BitShares keeps them in memory.

The hardware doesn't have to be all that impressive.. I think you can get away with 4G if not much else is going on on the system, but it's about the minimum I'd bother trying with.  I often run it on a VM with 6G that is running a few other altcoins.  I usually have to shut something down if I need to do a big compile on that VM. 

The protocol actually does have built-in limits on the maximum size of individual transactions and of blocks.  It encourages smaller transactions by charging a per-kilobyte fee for transactions that exceed the minimum size.  The "create asset" operation that seems to be causing you problems was about a megabyte, so clayop paid the base fee for creating an asset plus about 1000 times the per-kb fee.  The committee can vote to change both the fees and the maximum sizes at any time, and they can vote to increase the per-kilobyte fee to make it more expensive if abuse becomes a problem.  I think the max block size right now is about 2M.
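Back-of-the-envelope, with made-up fee numbers just to show the shape of the calculation (the real values live in the committee-controlled fee schedule):

Code: [Select]
# total_fee ~= base_fee + data_kbytes * per_kb_fee   (placeholder values below)
$ echo "50 + 1024 * 10" | bc     # ~1 MB operation is ~1024 KB
10290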
« Last Edit: July 21, 2016, 10:28:48 pm by emf »

Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
Thanks for the comments, everyone, and thanks xeroc

This particular machine has 4G ram and 32G swap.
Is that not enough?   With other coins, the swap file is hardly touched (per "free"), and that's with several other coins running simultaneously.

Example, with 3 coins running, not including Bitshares:
Code: [Select]
$ free
              total        used        free      shared  buff/cache   available
Mem:        4047180     1737252       50668          16     2259260     1205860
Swap:      33554428      716988    32837440
And I've been turning off 1 of the other coins when running BTS.

I'll try again with a sync from scratch, stopping the other coins to give BTS maximum memory, and report back here.


Offline xeroc

  • Board Moderator
  • Hero Member
  • *****
  • Posts: 12922
  • ChainSquad GmbH
    • View Profile
    • ChainSquad GmbH
  • BitShares: xeroc
  • GitHub: xeroc
There are limits in the protocol, and that text is well within them. From what I know, the only issue you may get is when you have too little RAM available. Try creating a swap file and syncing again, or even better, get some more RAM ..
Sorry for the inconvenience

Offline karnal

  • Hero Member
  • *****
  • Posts: 1068
    • View Profile
Pretty fucked up if this checks out ... @xeroc might know something about this?


Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
I posted this in another thread here:
  https://bitsharestalk.org/index.php/topic,21157.msg296184.html#msg296184
See the reports from other users there, and their apparent cause analysis.

I'm elevating this to a new topic here because this is a Critical Showstopper Bug. It's likely the BitShares community has been silently losing prospective users since January.

===
As of July 12 (v2.0.160702) this bug is still present. I'm building from source under Linux.

My blockchain sync is jammed on block 2821722, presumably because, as roadscape found, the next block, 2821723, contains a huge text (the Bible -- typical graffiti-style blockchain vandalism).

This needs to be fixed *immediately*, or a workaround needs to be provided in the top-level documentation.

This seems like an insidious CRITICAL bug, throttling the entire BitShares ecosystem: *no new users*!! Everything seems to be working for the old-timers, who already have the chain, while new users are throwing up their hands and silently leaving (possibly in droves, over the last several months, SINCE JANUARY!!). It's incredibly frustrating to follow the official documentation (which is already scattered, incomplete, and outdated) and run into inexplicable jams like this.

I've been hammering on it for over a week, trying to bring up BitShares as a new user, and only tunneled this far through relatively high expertise and perseverance (found the issue, searched, confirmed the bug, made this account specifically to alert you all, ...).

In a sense, we don't actually have working software or a working blockchain at the moment. It's a closed system: no new users!

Please fix it!!

===

As a further comment on the bug: how did that huge text get into the field in the first place? The protocol should have built-in limits for all these fields, and the rest of the code needs to enforce those limits and handle data right up to those known edges. As the user base expands, you can expect plenty of accidental and malicious error injection like this.

Also, the release process should include a complete source build and blockchain sync from scratch on "typical" hardware. If that's too onerous for every platform, at least cycle through one platform each release.

It would also help to update the official build/sync documentation as part of the release process.

Thanks to everyone for this excellent, advanced cryptocoin, exchange, and ecosystem!


BTS id: miner9r   
(computing expert, new bts user, please send me donations for my informative or helpful postings)

==================
STATS FROM cli_wallet:

about
{
  "client_version": "v2.0.160702",
  "graphene_revision": "3f7bcddd2546b1a054c8d46193db4efa19eab3e3",
  "graphene_revision_age": "10 days ago",
  "fc_revision": "31adee49d91275cc63aa3a47b3a9e3c826baccca",
  "fc_revision_age": "16 weeks ago",
  "compile_date": "compiled on Jul 12 2016 at 11:35:42",
  "boost_version": "1.58",
  "openssl_version": "OpenSSL 1.0.2h  3 May 2016",
  "build": "linux 64-bit"
}


get_dynamic_global_properties
{
  "id": "2.1.0",
  "head_block_number": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "time": "2016-01-20T09:28:42",
  "current_witness": "1.6.38",
  "next_maintenance_time": "2016-01-20T10:00:00",
  "last_budget_time": "2016-01-20T09:00:00",
  "witness_budget": 94350000,
  "accounts_registered_this_interval": 0,
  "recently_missed_count": 0,
  "current_aslot": 2839861,
  "recent_slots_filled": "340282366920938463463374607431768211455",
  "dynamic_flags": 0,
  "last_irreversible_block_num": 2821705
}

info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "25 weeks old",
  "next_maintenance_time": "25 weeks ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8",
  "participation": "100.00000000000000000",
 ...
}