Hmm, I did a fresh checkout of 0.4.5 and it seems I'm stuck on block 284575. Any ideas? This is on Ubuntu 14.
Quote from: GaltReport on July 28, 2014, 12:12:25 am
I didn't know that. That's cold man.

Think about it... let's say the slate for block production is as follows: D1, D2, D3, D4, D5. Let's say D1 is a good honest guy and D2-D5 are playing dirty. D1 produces a block, but as soon as he does, D2 builds a longer chain based on the last block before D1's and marks D1's slot as missed. Then D3 confirms it, and so do D4 and D5. Now they have the longest chain of decent-looking blocks, while good guy D1 has a bad rep.
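The scenario above can be sketched as a toy model. Everything here is illustrative (delegate names, the `run_round` helper, and the bare longest-chain rule are my own simplifications, not actual BitShares toolkit code), but it shows why four colluders beat one honest producer:

```python
def run_round(slate, colluders):
    """Toy model of one production round under the collusion attack.

    Colluders extend only the chain that skips the honest delegate's
    block; everyone else extends the chain that includes it. Under a
    naive longest-chain rule, the longer chain wins and every delegate
    absent from it is recorded as having "missed" their slot.
    """
    honest_chain = []   # chain containing the honest delegate's block
    attack_chain = []   # chain the colluders build, excluding it
    for d in slate:
        if d in colluders:
            attack_chain.append(d)
        else:
            honest_chain.append(d)
    # Longest chain wins: with 4 colluders vs 1 honest producer,
    # the attack chain prevails and D1 is blamed for a missed block.
    if len(attack_chain) > len(honest_chain):
        winner = attack_chain
    else:
        winner = honest_chain
    missed = set(slate) - set(winner)
    return winner, missed

winner, missed = run_round(
    ["D1", "D2", "D3", "D4", "D5"],
    colluders={"D2", "D3", "D4", "D5"},
)
print(winner)   # the colluders' chain
print(missed)   # honest D1, unfairly marked as missed
```

The point of the sketch: from the raw chain alone, D1's "missed" block is indistinguishable from a real miss, which is why cross-referencing who built on what matters.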
I didn't know that. That's cold man.
Directing a portion of my delegate pay to happypatty for the effort put forth on delegate statistics. Thank you.
Quote from: bytemaster on July 27, 2014, 06:48:50 pm
I think the latency metric is very important (only valid if your data collection node stays online the whole time). A good node will have a median latency of 0.

Right now I just connect to the client via JSON-RPC. Any pointers as to where/how I could get that data so I can include it in the analysis? Or do I need to plug into the toolkit source code directly?

Quote
Another metric is "who came before and after me" when I missed a block. Sometimes it is the node that comes after you that "skips" your block and makes it look like you missed it. Because of the shuffle it would appear as if everyone is "randomly" missing a block, when in reality it is everyone who goes before the attacker who always misses a block.

Funny you should mention that: I was just about to start an analysis of missed blocks before each delegate! As we all know, a bad delegate could simulate a missed block for someone else so that they get kicked out. It gets worse: if you have a few delegates, you could target one specific delegate you want out and collaborate to create a bad rep for the good guy. It will be interesting to see how it unfolds as the competition heats up. This is why I thought limiting to 100 delegates is tough: it will produce fierce competition, and some bad behavior will ultimately come out of it. That said, it's also a good thing, because we will analyze it and find ways to strengthen the network. This is uncharted territory.
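For what it's worth, the median-latency metric discussed above is easy to compute once you have per-block observations. A minimal sketch, assuming each observation records the delegate, the block's scheduled timestamp, and the time our always-online collection node received it (the field layout and delegate names here are hypothetical, not toolkit output):

```python
import statistics

# Hypothetical observations: (delegate, block_timestamp, received_at),
# both times in seconds. A real collector would pull these over JSON-RPC.
observations = [
    ("init0",   100.0, 100.0),
    ("init0",   110.0, 110.4),
    ("init0",   120.0, 120.0),
    ("slowdel", 130.0, 131.2),
    ("slowdel", 140.0, 141.9),
]

# Group per-block latency samples by delegate.
latencies = {}
for delegate, produced, received in observations:
    latencies.setdefault(delegate, []).append(received - produced)

# A well-connected delegate's median latency should sit at (or near) 0.
for delegate, samples in latencies.items():
    print(delegate, statistics.median(samples))
```

The median (rather than the mean) is the right summary here because a single network hiccup should not tar an otherwise punctual delegate.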
I think the latency metric is very important (only valid if your data collection node stays online the whole time). A good node will have a median latency of 0.
Another metric is "who came before and after me" when I missed a block. Sometimes it is the node that comes after you that "skips" your block and makes it look like you missed it. Because of the shuffle it would appear as if everyone is "randomly" missing a block, when in reality it is everyone who goes before the attacker who always misses a block.
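One way to hunt for the pattern described above: for every missed slot, record which delegate produced the next block, then see if anyone shows up disproportionately often right after misses. A minimal sketch over made-up round data (the slot orders and producer sets below are invented for illustration):

```python
from collections import Counter

# Each round: (scheduled slot order, delegates who actually produced).
# Made-up data; note C keeps appearing right after B's "missed" slots.
rounds = [
    (["A", "B", "C", "D"], {"A", "C", "D"}),
    (["C", "A", "B", "D"], {"C", "B", "D"}),
    (["D", "B", "C", "A"], {"D", "C", "A"}),
]

after_miss = Counter()
for order, produced in rounds:
    for i, d in enumerate(order):
        if d not in produced:
            # Find the next scheduled delegate who actually produced a
            # block; that delegate is the one who "skipped" d's slot.
            for nxt in order[i + 1:]:
                if nxt in produced:
                    after_miss[(d, nxt)] += 1
                    break

# (missed_delegate, next_producer) pairs, most frequent first.
print(after_miss.most_common())
```

Because the schedule is shuffled each round, a fair network should spread these pairs roughly uniformly; one delegate repeatedly following another's misses is exactly the red flag bytemaster is pointing at.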
wallet_approve_delegate eightbit
happypatty, please be sure to let us know if there are any stats you think we should be tracking directly in the blockchain database that would be useful. For example: https://github.com/BitShares/bitshares_toolkit/issues/580
Sending you 8 PTS from the community fund in recognition of your amazing efforts, happypatty.
https://bitsharestalk.org/index.php?topic=4909.new
Quote from: emski on July 24, 2014, 08:12:33 am
Nice one! What about a site where you can see the realtime graphs?

Here you go: http://x.bitmeat.com/
Nice one! What about a site where you can see the realtime graphs?
If you need beta testers let me know. I have both Windows and Ubuntu environments.
Right now the data I'm collecting has already been collected by the client. You all have it; it's just not visualized very well, and it can easily be verified by looking at the blockchain. However, once I release this, I plan to start gathering more interesting stats: current forks, which delegates have produced the most forks, etc. This stuff is very important, since good guys could be bashed by bad guys who fake missed blocks. We will see how it plays out, and also what the community needs most.
I'm doing this entirely as a hobby and a pet project at the moment.
Very much looking forward to your progress. How may interested parties connect with your efforts? Are you providing your code for peer review at this time? Are you willing to share a small subset of your collected data (perhaps a few rounds)?

Thanks,
Fox
I'll do much more over the weekend.
Here to whet your appetite are just some network health stats: