Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Miner9r

Pages: [1]
1
BEOS / Re: BitShares EOS (BEOS) Launches - Get Yours
« on: June 29, 2019, 01:59:18 pm »

So there was a gap of about 2.5 days. Out of the 89-day distribution, that is about 2.8% of the total BEOS rainfall missing. What has happened to those coins?  Are they burned/disappeared/lost, and thus subtracted from the total distribution?

Or will they be back-credited somehow, included in the remaining days, or will the rainfall period be extended?

Has the cause of the lapse been identified?

Bump.  Any update on my questions:  Where did the missing coins go?  Will the rainfall period be extended to compensate?
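For anyone checking the arithmetic above (both figures are my own estimates), the missing fraction works out as:

```python
gap_days = 2.5            # estimated length of the crediting outage
total_days = 89           # total rainfall distribution window
missing_fraction = gap_days / total_days
print(f"missing: {missing_fraction:.1%}")  # ~2.8%
```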


2
BEOS / Re: BitShares EOS (BEOS) Launches - Get Yours
« on: June 25, 2019, 10:42:46 pm »
It has apparently resumed crediting, as of about 3:30am Tuesday June 25 Pacific Time, by my estimation.

So there was a gap of about 2.5 days. Out of the 89-day distribution, that is about 2.8% of the total BEOS rainfall missing. What has happened to those coins?  Are they burned/disappeared/lost, and thus subtracted from the total distribution?

Or will they be back-credited somehow, included in the remaining days, or will the rainfall period be extended?

Has the cause of the lapse been identified?

Incidentally, there was a major system-wide "general internet outage" across many sites, reported as of Monday morning.  It seems plausible that it started earlier in the weekend and was perhaps a factor in this BEOS disruption:
  https://www.yahoo.com/news/no-not-just-half-internet-125115356.html
  "Half of the internet is down, including Google, Amazon, and Reddit"

Is there any news from the discussion group at Telegram https://t.me/officialbeos ?
An account is needed to read over there.


3
BEOS / Re: BitShares EOS (BEOS) Launches - Get Yours
« on: June 24, 2019, 05:21:42 pm »
Any situation update?  As of Monday June 24, around 10 a.m. Pacific Time, I have not received any BEOS or RAM rainfall in the 24+ hours since Sunday.  And my amount over the prior 24 hours, on a Sunday spot check, was way down, so it appears to have stopped crediting some time on Saturday June 22, as noted above.

There are supposed to be about 14 more days of BEOS rainfall, plus the RAM continuing for another year or more.

The block explorer at https://explore.beos.world/blocks currently shows
14363077    Jun 24, 2019, 10:11:40.000 AM    beos.prodo    0

which is the current time, so it seems blocks are being produced, but no rainfall is being credited?

There's been no change or activity to my wallet or network on my end, so far as I know. My network is up.  I restarted the BEOS wallet just to be sure, and it's still frozen at the same BEOS/RAM amount, now for 1.5 days.

4
Quote from: pc
IOW you need a fast network connection for a normal sync.

It's probably much simpler (and faster) to download a snapshot of the blockchain, for example this one: http://seeds.quisquis.de/bts-chain-20160708.tar.bz2

Thanks much for the snapshot link, pc.  I may have to cave in and use it, as I suspect others have.

However, I'm generally strongly against snapshots.  They're an out-of-band solution, they can be tampered with, and they bypass the whole idea of blockchain tech in the first place: distributed, peer-to-peer validation, not dependence on a single source.  The growing use of snapshots is basically a tacit admission that there are problems with a coin's peer base or parameters.  And that applies here exactly.

Re the fast network: I'm on relatively standard DSL, I think 150 KB/s download.  It's over Wi-Fi, so maybe that's an issue, but how fast is "fast enough"?
What are the speed/latency requirements, exactly?  Do we really want to go there?

I have not had a network issue like this with any other coin, whether bitcoin-derived or a completely different technology, and I've run dozens, if not hundreds, of coins for years.  So I don't think it's valid to require a "faster network".

I think instead the code must be adjusted to handle the timings/delays/latencies introduced by the giant block(s).  So far, that's my best guess at the bug: even though the protocol can handle the large block, some other layer, such as a transport layer, really expects much smaller blocks, and thus hits a timeout and rejects the block.

That point in the transport layer needs to be found and fixed: relax the timings to handle the maximum data size on an average network.
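As a rough illustration of why a fixed timeout could trip on one oversized block while normal blocks sail through (assumptions: ~1 MB for the oversized block, based on reports about it; 132 bytes per normal sync-block message, as seen in my p2p logs; ~150 KB/s is my own downlink):

```python
block_size_bytes = 1_000_000   # assumed ~1 MB oversized block
typical_block_bytes = 132      # normal sync-block message size from my p2p logs
link_bytes_per_sec = 150_000   # my ~150 KB/s DSL downlink

big_block_secs = block_size_bytes / link_bytes_per_sec
typical_block_secs = typical_block_bytes / link_bytes_per_sec
print(f"oversized block: {big_block_secs:.1f} s, typical block: {typical_block_secs * 1000:.1f} ms")
```

If the transport timeout is tuned for transfers that complete in a millisecond, a multi-second transfer would look like a dead peer.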

===
I would really like to encourage the BTS community/developers to get to the bottom of this.  The bug has been in there since January; it's now August.
How many users have been lost in eight months?

I've been pushing it for a month+.   Nothing's happening. 

Sure, I could just get the snapshot.  But what about all the other new users hitting this?  They won't find it.  And what about the next giant block that jams things up?  We'd need a new snapshot then.  We could all just forget the blockchain and send snapshots to each other...


5
Thanks for the feedback, xeroc.  Did you compile it yourself from the GitHub source?

Here is my exact sequence to reproduce the bug.   Do you see any errors? I'm running:

boost  1.58.0
g++ (Gentoo 4.9.3 p1.5, pie-0.6.4) 4.9.3
gcc (Gentoo 4.9.3 p1.5, pie-0.6.4) 4.9.3
Linux kernel 3.18.25

EXACT SEQ TO REPRODUCE JAM ON BLOCK 2821722:
Code:
git clone https://github.com/bitshares/bitshares-2.git bitshares-2.git
cd bitshares-2.git
git submodule update --init --recursive
cmake -DCMAKE_BUILD_TYPE=Release .
make
programs/witness_node/witness_node

These steps are from:
  http://docs.bitshares.eu/bitshares/installation/Build.html
except the line
   git submodule update --init --recursive
which is not mentioned at that URL, but is in the README.md included with the source.

It runs successfully, then stops; the last console output is:

3297741ms th_a       application.cpp:523           handle_block         ] Got block: #2820000 time: 2016-01-20T08:02:27 latency: 17488197670 ms from: bhuz  irreversible: 2819982 (-18)

I've let it wait for hours, even overnight, at this point. 
Subsequent investigation with cli_wallet shows it stuck on block 2821722, as I posted above.


6
A couple of updates:

* I increased the RAM to 8 GB on my original test machine (Intel CPU). The problem recurs.
* I tested on another machine with 8 GB RAM (AMD CPU). The problem recurs.

So at this point I seem to have ruled out the local hardware.  The remaining possibilities are: a configuration error, a network error, a library/compiler issue, or a bug in the BitShares code/protocol.

Do I need to configure my router or open some ports?
I built with boost 1.58 instead of 1.57, and I haven't checked the exact versions of the other library dependencies, but everything seems to have compiled without error.

And the witness_node apparently runs great for thousands of blocks.  It just jams on that particular one, and then it's stuck.

Are the developers seeing this thread?  xeroc, emf, is that you?  Is there somewhere better to post this?

Seriously folks, I've been working on this for over a month, and I don't even have a vested interest, just enthusiasm for good technology.  But it's not working, so frustration is going to win out soon.  And it seems fairly certain that other people have run into this, given up, and walked out the door -- you just never heard from them.

I'm happy to work through configuration options and network settings, and to provide more debugging feedback, whatever you need.  I'm running on Gentoo, one of the most advanced Linux platforms: everything on the system is compiled from source, so I can tell you the exact installed version of any library.

Does anyone else here actually have this working on Linux?  Try a fresh blockchain download with the latest git code and see whether it works or you reproduce this bug.  (Save your old blockchain directory so you can get back to it.)



7
Thanks for the good info, emf.


Ok, I have some more test results.  I kept the wallet.json file (though I don't think I've made any accounts in it yet), moved away the old witness_node_data_dir/, and started programs/witness_node/witness_node.  It remakes the data directory and starts a fresh sync from scratch.
 
It jammed again.  I don't think it's memory; at the time of the jam, free reports:
Code:
$ free
              total        used        free      shared  buff/cache   available
Mem:        4047180     1657724      626496          24     1762960     2017828
Swap:      33554428      573680    32980748
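Reading the free output above (values in KiB): the "available" column still shows roughly 2 GB, which supports the conclusion that memory is not the bottleneck here.

```python
available_kib = 2017828                 # "available" column from free, in KiB
available_gib = available_kib / (1024 * 1024)
print(f"{available_gib:.1f} GiB still available")  # ~1.9 GiB
```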

Incidentally, when the witness_node creates the new config.ini, it puts the lines:
Code:
   # Tuple of [PublicKey, WIF private key] (may specify multiple times)
   private-key = ["___","___"]
where ___ is key data.  It generated the same key as before.  Why is that?  Is this an important secret wallet key I need to keep?

Now, here's the last output from the witness_node console at the time of the jam:
Code:
393792ms th_a       application.cpp:522           handle_block         ] Got block: #2800000 time: 2016-01-19T15:19:54 latency: 15918246023 ms from: delegate.btsnow  irreversible: 2799982 (-18)
396724ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
396745ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
418312ms th_a       application.cpp:522           handle_block         ] Got block: #2810000 time: 2016-01-19T23:41:06 latency: 15888198543 ms from: spectral  irreversible: 2809982 (-18)
442609ms th_a       db_market.cpp:149             maybe_cull_small_ord ] applied epsilon logic
443822ms th_a       application.cpp:522           handle_block         ] Got block: #2820000 time: 2016-01-20T08:02:27 latency: 15858143053 ms from: bhuz  irreversible: 2819982 (-18)
621003ms ntp        ntp.cpp:177                   read_loop            ] ntp_delta_time updated to -153824800 us


I also monitored (tail -f) the debug log at witness_node_data_dir/logs/p2p/p2p.log.
The last lines are as follows.  Note that it repeated the 4-line block_message_type handling sequence about 125 times, with varying block ids, and then just hung.  (After a while, debug log output resumed, and it seems to be just searching for connections and addresses, and it seems to think anybody it does find is on another fork.  I missed the exact lines right after the 125 copies; I can track them down if you need them.)

Code:
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 2643675 (id:002856db20f48cd66a2c88d5f7fd4ef8c732e005) node.cpp:3025
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] sync: client accpted the block, we now have only 5424633 items left to fetch before we're in sync node.cpp:3063
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Cannot pop first element off peer 81.89.101.133:1776's list, its list is empty node.cpp:3094
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Removed item from 162.213.195.203:60862's list of items being processed, still processing 199 blocks node.cpp:3119
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Leaving send_sync_block_to_node_delegate node.cpp:3172
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 199 blocks in the process of being handled node.cpp:3200
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 555 sync items to consider node.cpp:3230
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] stopping processing sync block backlog because we have 200 blocks in progress node.cpp:3282
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks, 1 processed node.cpp:3291
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1095
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] in send_sync_block_to_node_delegate() node.cpp:3013
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1034
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1084
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 2643676 (id:002856dc079cb2039d62492aae802400bb4c6fb4) node.cpp:3025
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] sync: client accpted the block, we now have only 5424632 items left to fetch before we're in sync node.cpp:3063
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Cannot pop first element off peer 81.89.101.133:1776's list, its list is empty node.cpp:3094
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Removed item from 162.213.195.203:60862's list of items being processed, still processing 199 blocks node.cpp:3119
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Leaving send_sync_block_to_node_delegate node.cpp:3172
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 199 blocks in the process of being handled node.cpp:3200
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 554 sync items to consider node.cpp:3230
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] stopping processing sync block backlog because we have 200 blocks in progress node.cpp:3282
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks, 1 processed node.cpp:3291
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1095
2016-07-21T21:00:00 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] in send_sync_block_to_node_delegate() node.cpp:3013
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1034
2016-07-21T21:00:00 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1084
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 694dd687ca44fbc8d191f1e7d40517d98fc93c90 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 8fcf6fc6bd88181ac11a0fbf43c4b4c950bab1dc size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 2ee9cf866c5329c4813bdd8cd3e122b23e557811 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197

... REPEATS OVER 100 TIMES, WITH VARIOUS block_message_type ...

2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 3dc26eac040509897074ccf84af9a00e008a0414 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type 6a16d2f18d9567d02331a65365788ef6fd42a3c8 size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
2016-07-21T21:00:00 p2p:message read_loop           on_message ] handling message block_message_type adfbb9188a0e1edeb31dc30e6243a19251d4254d size 132 from peer 162.213.195.203:60862 node.cpp:1757
2016-07-21T21:00:00 p2p:message read_loop process_block_during ] received a sync block from peer 162.213.195.203:60862 node.cpp:3308
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:3194
2016-07-21T21:00:00 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks because we're already processing too many blocks node.cpp:3197
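For what it's worth, the repeating pattern above reads like a simple backlog throttle: the node caps concurrent in-flight sync blocks (apparently at 200) and bails out of the backlog processor whenever the cap is hit. A minimal sketch of that shape, based only on my reading of these log lines, not on the actual node.cpp code:

```python
from collections import deque

MAX_BLOCKS_IN_PROGRESS = 200   # cap implied by the "200 blocks in progress" log line

def process_backlog(backlog: deque, in_progress: set) -> int:
    """Start queued sync blocks until the in-flight cap is hit.

    Returns how many blocks were started on this pass."""
    started = 0
    while backlog and len(in_progress) < MAX_BLOCKS_IN_PROGRESS:
        in_progress.add(backlog.popleft())
        started += 1
    return started

backlog = deque(range(555))            # "555 sync items to consider"
in_progress = set(range(1000, 1199))   # "199 blocks in the process of being handled"
print(process_backlog(backlog, in_progress))  # 1
```

With 199 blocks perpetually in flight, each pass starts exactly one new block, matching the repeated "1 processed" lines; and if the one block that never completes is the oversized one, the whole pipeline stays wedged.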


Ok, what now?   This is very challenging to debug because this coin is unique, and I've never run any of its precursors, so I don't even know what "right" behavior looks like.

My expectation is that it should just sync up to the same block number as on OpenLedger.

One concern is whether I'm on the mainnet or a testnet.  I posted all the chain-id info previously, so could someone confirm that I'm seeing mainnet?
How can I check/confirm that myself?
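One way to check (a sketch, assuming the id printed by `info` is what the node actually uses): compare the reported chain_id against a mainnet id published by an independent source, such as the official docs or a block explorer. The constant below is taken from my own node's output, so it is exactly the value that still needs independent confirmation.

```python
# chain_id my node reports via cli_wallet's `info`; verify this value
# independently (official docs, block explorer) before trusting it.
EXPECTED_MAINNET_CHAIN_ID = (
    "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8"
)

def is_mainnet(reported_chain_id: str) -> bool:
    """Compare a node's reported chain_id against the trusted mainnet id."""
    return reported_chain_id.strip().lower() == EXPECTED_MAINNET_CHAIN_ID

print(is_mainnet(EXPECTED_MAINNET_CHAIN_ID))  # True for my node's reported id
```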
The docs/build-ubuntu.md says to compile with -DCMAKE_BUILD_TYPE=Debug, whereas elsewhere the docs say to use "Release", not "Debug" (an example of the documentation inconsistency).
I forget which one I used, but presumably that define only controls code optimization and isn't a mainnet/testnet flag.

Or maybe I'm connecting to nodes/data from an old fork that breaks with the latest code?

Are there critical config.ini options I need to set?

Hopefully the above logs give you some clue about what's going on.


8
Thanks for the comments, everyone, and xeroc.

This particular machine has 4 GB RAM and 32 GB swap.
Is that not enough?   With other coins, the swap is hardly touched (per "free"), and that's with several other coins running simultaneously.

Example, with 3 coins running, not including Bitshares:
Code:
$ free
              total        used        free      shared  buff/cache   available
Mem:        4047180     1737252       50668          16     2259260     1205860
Swap:      33554428      716988    32837440
And I've been turning off one of the other coins when running BTS.

I'll try again with a sync from scratch, stop the other coins to give BTS maximum memory, and report back here.


9
I posted this in another thread here:
  https://bitsharestalk.org/index.php/topic,21157.msg296184.html#msg296184
which includes reports from other users and their apparent cause analysis.

I'm elevating it to a new topic here because this is a Critical Showstopper Bug.  The BitShares community has likely been silently losing prospective users since January.

===
As of July 12 (v2.0.160702) this bug is still present.  I'm building from source under Linux.

My blockchain sync is jammed on block 2821722, presumably because, as roadscape found, the next block (2821723) contains a huge text (the Bible -- typical blockchain graffiti/vandalism).

This needs to be fixed *immediately*, or a workaround provided in the top-level documentation.

This looks like an insidious CRITICAL bug throttling the entire BitShares ecosystem: *No New Users*!!  Everything seems to be working for the old-timers, who already have the chain, while new users are throwing up their hands and silently leaving (possibly in droves, over the last several months, SINCE JANUARY!!).  It's incredibly frustrating to follow the official documentation (which is already scattered, incomplete, and outdated) and run into inexplicable jams like this.

I've been hammering on it for over a week, trying to bring up BitShares as a new user, and have only tunneled this far with relatively high expertise and perseverance.  (Found the issue, searched, confirmed the bug, made this account specifically to alert you all, ...)

In a sense, we don't actually have working software or a working blockchain at the moment.  It's a closed system: no new users!

Please fix it!!

===

As further comment on the bug: how did that huge text get into the field?  The protocol should have limits built in for all these fields, and the rest of the code needs to enforce and handle them up to those known edges.  As the user base expands, you can expect plenty of accidental and malicious error injection like this.
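To illustrate the kind of limit I mean (a hypothetical sketch of the principle, not the actual BitShares/Graphene validation code; the 2048-byte limit is invented for the example): every variable-length field gets checked against a protocol-defined maximum before a transaction is accepted or relayed.

```python
# Hypothetical field-size validation; MAX_MEMO_BYTES is an invented example
# limit, not the real protocol's value.
MAX_MEMO_BYTES = 2048

class OversizedFieldError(ValueError):
    pass

def validate_memo(memo: bytes) -> None:
    """Reject any memo larger than the protocol-defined maximum."""
    if len(memo) > MAX_MEMO_BYTES:
        raise OversizedFieldError(
            f"memo is {len(memo)} bytes; protocol limit is {MAX_MEMO_BYTES}"
        )

validate_memo(b"regular memo")       # accepted
try:
    validate_memo(b"x" * 1_000_000)  # a 1 MB "bible" payload is rejected up front
except OversizedFieldError as e:
    print(e)
```

With a check like this at the edge, an oversized field never makes it into a block, so downstream layers never have to cope with it.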

Also, the release process should include a complete source build and blockchain sync from scratch on "typical" hardware.   If that's too onerous for every platform, at least cycle through one platform per release.

It would also help to update the official build/sync documentation as part of the release process.

Thanks to everyone for this excellent, advanced cryptocoin, exchange, and ecosystem!


BTS id: miner9r   
(computing expert, new bts user, please send me donations for my informative or helpful postings)

==================
STATS FROM cli_wallet:

about
{
  "client_version": "v2.0.160702",
  "graphene_revision": "3f7bcddd2546b1a054c8d46193db4efa19eab3e3",
  "graphene_revision_age": "10 days ago",
  "fc_revision": "31adee49d91275cc63aa3a47b3a9e3c826baccca",
  "fc_revision_age": "16 weeks ago",
  "compile_date": "compiled on Jul 12 2016 at 11:35:42",
  "boost_version": "1.58",
  "openssl_version": "OpenSSL 1.0.2h  3 May 2016",
  "build": "linux 64-bit"
}


get_dynamic_global_properties
{
  "id": "2.1.0",
  "head_block_number": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "time": "2016-01-20T09:28:42",
  "current_witness": "1.6.38",
  "next_maintenance_time": "2016-01-20T10:00:00",
  "last_budget_time": "2016-01-20T09:00:00",
  "witness_budget": 94350000,
  "accounts_registered_this_interval": 0,
  "recently_missed_count": 0,
  "current_aslot": 2839861,
  "recent_slots_filled": "340282366920938463463374607431768211455",
  "dynamic_flags": 0,
  "last_irreversible_block_num": 2821705
}

info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "25 weeks old",
  "next_maintenance_time": "25 weeks ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8",
  "participation": "100.00000000000000000",
 ...
}



10
Technical Support / Re: is the witness thing running under win?
« on: July 13, 2016, 04:09:59 am »
As of July 12 (v2.0.160702) this bug is still present.  I'm building from source under Linux.

My blockchain sync is jammed on block 2821722, presumably because, as roadscape found above, the next block (2821723) contains a huge text (the Bible -- typical blockchain graffiti/vandalism).

This needs to be fixed *immediately*, or a workaround provided in the top-level documentation.

This seems like a CRITICAL bug throttling the entire BitShares ecosystem: *No New Users*!!  Everything seems to be working for the old-timers, who already have the chain, while new users are throwing up their hands and silently leaving (possibly in droves, over the last several months, SINCE JANUARY!!).  It's incredibly frustrating to follow the official documentation (which is already scattered, incomplete, and outdated) and run into inexplicable jams like this.

I've been hammering on it for over a week, trying to bring up BitShares as a new user, and have only tunneled this far with relatively high expertise and perseverance.

In a sense, we don't actually have working software or a working blockchain at the moment.  It's a closed system: no new users!

Please fix it!!

===

As further comment on the bug: how did that 1 MB text get in there?  The protocol should have limits built in for all these fields, and the rest of the code needs to enforce and handle them up to those known edges.  As the user base expands, you can expect plenty of accidental and malicious error injection like this.

Also, the release process should include a complete source build and blockchain sync from scratch on "typical" hardware.   If that's too onerous for every platform, at least cycle through one platform per release.

It would also help to update the official build/sync documentation as part of the release process.

Thanks to everyone for this excellent, advanced cryptocoin, exchange, and ecosystem!


BTS id:  miner9r   
(computing expert, new bts user, please send me donations for my informative or helpful postings)

==================
STATS FROM cli_wallet:

about
{
  "client_version": "v2.0.160702",
  "graphene_revision": "3f7bcddd2546b1a054c8d46193db4efa19eab3e3",
  "graphene_revision_age": "10 days ago",
  "fc_revision": "31adee49d91275cc63aa3a47b3a9e3c826baccca",
  "fc_revision_age": "16 weeks ago",
  "compile_date": "compiled on Jul 12 2016 at 11:35:42",
  "boost_version": "1.58",
  "openssl_version": "OpenSSL 1.0.2h  3 May 2016",
  "build": "linux 64-bit"
}


get_dynamic_global_properties
{
  "id": "2.1.0",
  "head_block_number": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "time": "2016-01-20T09:28:42",
  "current_witness": "1.6.38",
  "next_maintenance_time": "2016-01-20T10:00:00",
  "last_budget_time": "2016-01-20T09:00:00",
  "witness_budget": 94350000,
  "accounts_registered_this_interval": 0,
  "recently_missed_count": 0,
  "current_aslot": 2839861,
  "recent_slots_filled": "340282366920938463463374607431768211455",
  "dynamic_flags": 0,
  "last_irreversible_block_num": 2821705
}

info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "25 weeks old",
  "next_maintenance_time": "25 weeks ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8",
  "participation": "100.00000000000000000",
 ...
}
