BitShares Forum
Main => General Discussion => Topic started by: ripplexiaoshan on December 12, 2014, 08:24:13 am
-
(http://s30.postimg.org/z16b1xq9b/111.png)
(http://s1.postimg.org/c5an1wh0d/2222.png)
Will more testing before releasing avoid this in the future?
-
With the limited resources of BitShares development ... this makes it clear why we need DevShares.
-
Good we had that experience before v1 is out... so we are better prepared to avoid situations like that in the future.
PS Please change your post title to something more realistic, we don't need to overreact!
-
What disaster?
-
Good we had that experience before v1 is out... so we are better prepared to avoid situations like that in the future.
PS Please change your post title to something more realistic, we don't need to overreact!
Sorry if I am not using the correct words. Please let me know the most appropriate word I should use, thank you.
I am not overreacting; actually this is the most serious network issue we have ever had since the beginning of BTS.
btc38.com and yunbi.com just announced that due to this issue they have suspended all fund transfers, so a lot of Chinese users are wondering why, while the dev team is sleeping...
Also, the price momentum from the rise of bitCNY has stalled because of the network issue, since people are afraid to transfer their funds.
So, please, please, more tests before releasing. We don't mind waiting, we have enough patience. :)
-
(http://s30.postimg.org/z16b1xq9b/111.png)
(http://s1.postimg.org/c5an1wh0d/2222.png)
Will more testing before releasing avoid this in the future?
I'm just seeing a couple of broken image links?
-
(http://s30.postimg.org/z16b1xq9b/111.png)
(http://s1.postimg.org/c5an1wh0d/2222.png)
Will more testing before releasing avoid this in the future?
I'm just seeing a couple of broken image links?
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
-
Good we had that experience before v1 is out... so we are better prepared to avoid situations like that in the future.
PS Please change your post title to something more realistic, we don't need to overreact!
Sorry if I am not using the correct words. Please let me know the most appropriate word I should use, thank you.
I am not overreacting; actually this is the most serious network issue we have ever had since the beginning of BTS.
btc38.com and yunbi.com just announced that due to this issue they have suspended all fund transfers, so a lot of Chinese users are wondering why, while the dev team is sleeping...
Also, the price momentum from the rise of bitCNY has stalled because of the network issue, since people are afraid to transfer their funds.
So, please, please, more tests before releasing. We don't mind waiting, we have enough patience. :)
In the meantime:
"blockchain_average_delegate_participation": "87.07 %",
PS Recommended title: "Too many blocks missed in the last few hours! Participation rate is dropping too low!"
-
So, please, please, more tests before releasing. We don't mind waiting, we have enough patience. :)
It seems the crashing issues have to do with low memory, which became a problem with the latest update because recent code changes increased its memory demands.
Perhaps more careful testing would have discovered these issues and given delegates more time to upgrade their machines to meet the new demands. Or better yet, perhaps the code could have been optimized to avoid such significantly increased resource demands. Normally I agree that you want to test everything very carefully before pushing it out, but when serious security bugs are discovered that need to be resolved... well, it makes sense to cut some corners. Between a rock and a hard place...
-
Good we had that experience before v1 is out... so we are better prepared to avoid situations like that in the future.
PS Please change your post title to something more realistic, we don't need to overreact!
Sorry if I am not using the correct words. Please let me know the most appropriate word I should use, thank you.
I am not overreacting; actually this is the most serious network issue we have ever had since the beginning of BTS.
btc38.com and yunbi.com just announced that due to this issue they have suspended all fund transfers, so a lot of Chinese users are wondering why, while the dev team is sleeping...
Also, the price momentum from the rise of bitCNY has stalled because of the network issue, since people are afraid to transfer their funds.
So, please, please, more tests before releasing. We don't mind waiting, we have enough patience. :)
In the meantime:
"blockchain_average_delegate_participation": "87.07 %",
PS Recommended title: "Too many blocks missed in the last few hours! Participation rate is dropping too low!"
A few minutes ago, the lowest participation rate was 57%.
http://blockchain.bitsuperlab.com/index.html
It seems bitsuperlab is on a fork?
-
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
Wow, ok. Well I think mine is on v0.4.24.1, although I can't check from here. I don't have a VPS, I have a dedicated server, so upgrading the RAM is difficult with 8h notice :|
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
The "last 10 missed blocks" section on bitsharesblocks is frozen, right?
-
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
It looks like there are 55 delegates on v0.4.25 right now. That is more than the 51 necessary. So in the worst-case scenario, I believe it should still be possible to determine the correct up-to-date consensus chain within a 16-minute period. It just might mean that it takes on average twice as long to get a transaction confirmed. Obviously, getting a larger percentage of delegates to upgrade before the hard fork would be ideal.
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
The "last 10 missed blocks" section on bitsharesblocks is frozen, right?
It appears so!
I've just shut down the clients on Bitsharesblocks to try to upgrade to v0.4.25, there won't be any new data for a little while!
-
I can confirm that v0.4.24.1 isn't missing any blocks for me
-
It's getting scary for sure!
"blockchain_average_delegate_participation": "55.49 %",
are some devs watching?
-
It's getting scary for sure!
"blockchain_average_delegate_participation": "55.49 %",
are some devs watching?
In their time zones they should be sleeping
-
I can confirm that v0.4.24.1 isn't missing any blocks for me
I am not missing blocks with v0.4.25-RC1 either...
...but I have 8 GB RAM on the personal server that hosts my delegate.
-
I can confirm that v0.4.24.1 isn't missing any blocks for me
I am not missing blocks with v0.4.25-RC1 either...
...but I have 8 GB RAM on the personal server that hosts my delegate.
Same here. Currently reindexing 0.4.25 AND running 0.4.24. Doesn't seem to have any issues.
-
I can confirm that v0.4.24.1 isn't missing any blocks for me
I am not missing blocks with v0.4.25-RC1 either...
...but I have 8 GB RAM on the personal server that hosts my delegate.
I'm running 4 GB on v0.4.24.1 and it's been fine.
How can I tell if I'm producing forks?
-
It's getting scary for sure!
"blockchain_average_delegate_participation": "55.49 %",
are some devs watching?
In their time zones they should be sleeping
Delegates that miss blocks (at least those seeing participation under 50% in their wallets)
should downgrade to the last client that worked well for them, or upgrade their RAM to 8 GB ASAP!!!
-
It's getting scary for sure!
"blockchain_average_delegate_participation": "55.49 %",
are some devs watching?
This is actually kind of cool. Seeing what happens at the limits of DPOS. As long as it remains greater than 50%, then in theory the system should work fine (assuming you wait at most 16 minutes to ensure your transaction is in the right fork of the chain). And if it goes below that, you should in theory have enough evidence within 16 minutes to know that your transaction isn't considered valid and you should act as if the network is down (or you are on the minority fork). Of course, I don't know if the client UIs are properly designed to signal this to the user.
-
I can confirm that v0.4.24.1 isn't missing any blocks for me
I am not missing blocks with v0.4.25-RC1 too...
...but I have 8 gb ram on my personal server that host my delegate.
I'm running 4GB on v0.4.24.1 and its been fine.
How can I tell if I'm producing forks?
What is your participation rate when you type "info"?
Also watch "blockchain_head_block_age"; it should go back to zero in less than a minute, I suppose.
-
"blockchain_average_delegate_participation": "46.54 %",
-
It seems that all the delegates running a version older than 0.4.25 are on forks. Approximately 45% of delegates didn't upgrade, and the participation rate is 55% now...
"alert_level": "red",
"estimated_confirmation_seconds": 670,
"participation_rate": 55.191256830601091
-
I'm running 4 GB on v0.4.24.1 and it's been fine.
How can I tell if I'm producing forks?
What is your participation rate when you type "info"?
"blockchain_average_delegate_participation": "45.09 %"
That's the overall average for the entire network, though isn't it?
-
0.4.25 and 0.4.24 clients report different missed blocks.
And it seems the blockchain is different for my 2 instances.
I'll investigate further.
Yes. Even though the upgrade should have been scheduled for the future, it caused forks between 0.4.24 and 0.4.25.
When I upgraded I ended up on a different chain.
-
I'm running 4 GB on v0.4.24.1 and it's been fine.
How can I tell if I'm producing forks?
What is your participation rate when you type "info"?
"blockchain_average_delegate_participation": "45.09 %"
That's the overall average for the entire network, though isn't it?
"blockchain_average_delegate_participation": "44.10 %",
I think it is the main chain :-[
-
I assume we have 3 big forks, the biggest being the one with 44% participation (main chain)... (Am I missing something?)
-
"blockchain_average_delegate_participation": "42.80 %",
running 25 RC2
should we start panicking?
-
I am obviously on a fork, right?
-
I'm running 4 GB on v0.4.24.1 and it's been fine.
How can I tell if I'm producing forks?
What is your participation rate when you type "info"?
"blockchain_average_delegate_participation": "45.09 %"
That's the overall average for the entire network, though isn't it?
"blockchain_average_delegate_participation": "44.10 %",
I think it is the main chain :-[
I don't think it is.
My 0.4.25 instance is saying so:
"participation_rate": 57.06214689265537
-
The wallet I'm using for development (VPS) was at 87% and is now at 57% participation.
-
I'm totally unclear on what the correct course of action is.
I'm reporting 0 missed blocks on my v0.4.24.1 build, no crashes, everything *looks* fine. Am I causing forks here, or what?
-
"blockchain_average_delegate_participation": "42.80 %",
running 25 RC2
should we start panicking?
I assume we are on a fork!
restart your client
http://bitsharesblocks.com/home
-
57+42 = close to 100%. So the smaller fork is starting to win over delegates?
-
But how is it possible to be on a 44% participation fork while at the same time svk is on a chain with 94% participation? :-\ :-\ :-\
-
But how is it possible to be on a 44% participation fork while at the same time svk is on a chain with 94% participation? :-\ :-\ :-\
Bitsharesblock is down for upgrade
-
But how is it possible to be on a 44% participation fork while at the same time svk is on a chain with 94% participation? :-\ :-\ :-\
I'm assuming you have stale data from svk.
-
57+42 = close to 100%. So the smaller fork is starting to win over delegates?
All exchanges should freeze transactions until the fork issue is resolved.
-
Every delegate that didn't upgrade the client yet should do so
https://bitsharestalk.org/index.php?topic=12194.0 (https://bitsharestalk.org/index.php?topic=12194.0)
-
It's getting scary for sure!
"blockchain_average_delegate_participation": "55.49 %",
are some devs watching?
This is actually kind of cool. Seeing what happens at the limits of DPOS. As long as it remains greater than 50%, then in theory the system should work fine (assuming you wait at most 16 minutes to ensure your transaction is in the right fork of the chain). And if it goes below that, you should in theory have enough evidence within 16 minutes to know that your transaction isn't considered valid and you should act as if the network is down (or you are on the minority fork). Of course, I don't know if the client UIs are properly designed to signal this to the user.
In this situation the whole network can actually be considered "down", because the exchanges have stopped fund transfers and most people are too scared to do anything.
-
But how is it possible to be on a 44% participation fork while at the same time svk is on a chain with 94% participation? :-\ :-\ :-\
I'm assuming you have stale data from svk.
Exactly.
The upgrade has finished now, I'm currently reindexing blocks to make sure I get the correct missed blocks etc, it'll be back up shortly.
-
So the fork I was on that had 87% initially is steadily dropping; we're at 56.11% now. It would seem the fork resolution code is fubared. I hope they have enough traces etc. to figure out what happened. :(
Edit: actually now it is 58%. I'm looking at it too often, and the random noise keeps me from seeing whether it is increasing or decreasing.
-
Restarting my client didn't help. I am slowly dropping though, 39 % now.
-
Restarting my client didn't help. I am slowly dropping though, 39 % now.
I'm not doing anything until we get some clear advice on the correct course of action from the powers that be.
-
Bitsharesblocks is back up running v0.4.25. Missed blocks don't seem to be updating correctly however, I'm investigating.
-
Restarting my client didn't help. I am slowly dropping though, 39 % now.
I'm not doing anything until we get some clear advice on the correct course of action from the powers that be.
we need to upgrade!
https://bitsharestalk.org/index.php?topic=12194.msg161167;topicseen#msg161167
-
we need to upgrade!
https://bitsharestalk.org/index.php?topic=12194.msg161167;topicseen#msg161167
It's not clear whether that upgrade is the cause of all these problems, though.
-
0.4.25 and 0.4.24 clients report different missed blocks.
And it seems the blockchain is different for my 2 instances.
I'll investigate further.
Yes. Even though the upgrade should have been scheduled for the future, it caused forks between 0.4.24 and 0.4.25.
When I upgraded I ended up on a different chain.
Yup, this appears to be the case. I'm guessing the changes made to 0.4.25 did not appropriately guard all hard-forking changes with "if (get_head_block_num() >= FORK_25)" conditions, which would effectively make the hard fork happen for each client as soon as it was updated to v0.4.25 and connected to the network.
Certainly code (https://github.com/BitShares/bitshares/blob/6206ed9e45ced5e6314aa89b76d0ee4cbcfccd20/libraries/blockchain/include/bts/blockchain/fork_blocks.hpp) like this
...
#define BTS_V0_4_25_FORK_BLOCK_NUM 99999999
#define BTS_V0_4_26_FORK_BLOCK_NUM 99999999
#define BTS_V0_4_27_FORK_BLOCK_NUM 99999999
#define BTS_CHECK_CANONICAL_SIGNATURE_FORK_BLOCK_NUM BTS_V0_4_25_FORK_BLOCK_NUM
#define FORK_25 1249400 // FIXME
...
doesn't inspire much confidence that the forking code was worked out properly before release.
While delegates on v0.4.25 were a minority, they would appear to be the bad actors on a fork. After enough delegates (at least 51) switched to v0.4.25, everyone else became the minority on the wrong fork. At this point at least 55 delegates have switched, so the hard fork has basically already happened before the intended block (well, not completely; some important code is still not activated until block 1249400). Everyone else not on v0.4.25 should just upgrade as soon as they can so as not to miss too many blocks.
-
Bitsharesblocks is back up running v0.4.25. Missed blocks don't seem to be updating correctly however, I'm investigating.
I found the reason; it was on my backend. Missed blocks are now updating correctly.
-
With 0.4.25 RC2 I seem to be on the fading fork. I am now down to 35 % participation.
FORKED BLOCK FORKING BLOCK ID SIGNING DELEGATE TXN COUNT SIZE TIMESTAMP LATENCY VALID IN CURRENT CHAIN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
1246468
68b7e003ca08d50843faaddcf696cfff1954ede3 moon.delegate.service 0 166 2014-12-12T09:29:10 1155 YES YES
bf7fe0f13b2fb4e8377d595667156780c6345157 init0 1 1807 2014-12-12T09:28:50 1179 NO NO
REASONS FOR INVALID BLOCKS
bf7fe0f13b2fb4e8377d595667156780c6345157: 30007 duplicate_transaction: duplicate transaction
-
With 0.4.25 RC2 I seem to be on the fading fork. I am now down to 35 % participation.
FORKED BLOCK FORKING BLOCK ID SIGNING DELEGATE TXN COUNT SIZE TIMESTAMP LATENCY VALID IN CURRENT CHAIN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
1246468
68b7e003ca08d50843faaddcf696cfff1954ede3 moon.delegate.service 0 166 2014-12-12T09:29:10 1155 YES YES
bf7fe0f13b2fb4e8377d595667156780c6345157 init0 1 1807 2014-12-12T09:28:50 1179 NO NO
REASONS FOR INVALID BLOCKS
bf7fe0f13b2fb4e8377d595667156780c6345157: 30007 duplicate_transaction: duplicate transaction
Upgrade to 0.4.25 and delete the "chain" folder (maybe it's not necessary due to reindexing?)
-
If you are on the list please upgrade to v0.4.25 !!!
https://github.com/BitShares/bitshares/releases
https://github.com/BitShares/bitshares/blob/master/BUILD_UBUNTU.md
1246748
bm.payroll.riverhead
ak
1246745
delegate-alt
1246742
delegate-watchman
c.delegate.charity
www2.minebitshares-com
sun.delegate.service
1246739
delegated-proof-of-steak
1246738
delegate.bitsuperlab
1246734
skyscraperfarms
delegate.charity
1246733
delegate1.john-galt
1246732
delegate.xeroc
1246729
delegate.follow-my-vote
dele-puppy
1246726
dev0.nikolai
1246725
moon.delegate.service
1246724
dev-metaexchange.monsterer
1246720
a.delegate.charity
1246719
delegate-baozi
1246717
fox
1246714
delegate.nathanhourt.com
1246708
delegate.liondani
1246703
marketing.methodx
maqifrnswa
delegate.cgafeng
bitcoiners
1246702
delegate.webber
1246699
ggozzo.skyscraperfarms
www.minebitshares-com
1246692
delegate.baozi
delegate.taolje
1246691
bm.payroll.riverhead
ak
1246688
delegate.btsnow
1246687
delegate1.maqifrnswa
1246682
delegate-alt
1246681
bits
delegate1-galt
wackou-delegate
1246678
delegate-watchman
c.delegate.charity
www2.minebitshares-com
sun.delegate.service
1246675
delegated-proof-of-steak
1246674
spartako2
delegate.bitsuperlab
1246671
skyscraperfarms
delegate.charity
1246670
delegate1.john-galt
1246669
delegate.xeroc
1246666
delegate.follow-my-vote
dele-puppy
1246663
dev0.nikolai
1246662
moon.delegate.service
1246661
dev-metaexchange.monsterer
1246657
a.delegate.charity
1246656
delegate-baozi
1246655
fox
spartako1
1246652
delegate.nathanhourt.com
1246646
delegate.liondani
1246643
delegate-alt
1246639
delegate.baozi
www2.minebitshares-com
1246634
delegate.follow-my-vote
1246631
delegate.cgafeng
marketing.methodx
1246630
c.delegate.charity
ggozzo.skyscraperfarms
dev0.nikolai
1246629
moon.delegate.service
1246628
dev-metaexchange.monsterer
delegate-baozi
1246627
a.delegate.charity
delegate.webber
1246621
delegate.bitsuperlab
delegate.btsnow
wackou-delega
-
It might be useful to have the .dmg and .exe for 0.4.25 on GitHub for those who can't compile themselves.
-
I have just updated my three delegates: spartako, spartako1, spartako2.
I'm on the right fork now:
default (unlocked) >>> getinfo
{
"blockchain_head_block_num": 1246771,
"blockchain_head_block_age": "6 seconds old",
"blockchain_head_block_timestamp": "2014-12-12T10:52:20",
"blockchain_average_delegate_participation": "64.33 %",
"blockchain_confirmation_requirement": 99,
"blockchain_share_supply": "2,498,305,678.95240 BTS",
"blockchain_blocks_left_in_round": 74,
"blockchain_next_round_time": "at least 12 minutes in the future",
"blockchain_next_round_timestamp": "2014-12-12T11:04:40",
"blockchain_random_seed": "95d3ca3374f0abc728e4ff1c9b9094cbd5f58a7e",
"client_data_dir": "/home/spartako/.BitShares",
"client_version": "v0.4.25",
"network_num_connections": 21,
"network_num_connections_max": 200,
"network_chain_downloader_running": false,
"network_chain_downloader_blocks_remaining": null,
"ntp_time": "2014-12-12T10:52:26",
"ntp_time_error": 0.00060700000000000001,
"wallet_open": true,
"wallet_unlocked": true,
"wallet_unlocked_until": "17 weeks in the future",
"wallet_unlocked_until_timestamp": "2015-04-07T04:15:13",
"wallet_last_scanned_block_timestamp": "2014-07-24T09:23:30",
"wallet_scan_progress": "? %",
"wallet_block_production_enabled": true,
"wallet_next_block_production_time": "5 minutes in the future",
"wallet_next_block_production_timestamp": "2014-12-12T10:57:20"
}
The only strange thing is that I get PENDING for every transaction that I make:
2014-12-12T10:24:24 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 688498dc
2014-12-12T10:24:26 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 6cf54335
2014-12-12T10:33:47 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 714201f0
2014-12-12T10:40:19 PENDING spartako1 spartako1 0.00000 BTS publish price feeds 0.10000 BTS 3715b4c5
2014-12-12T10:40:19 PENDING spartako spartako 0.00000 BTS publish price feeds 0.10000 BTS 3ed51614
2014-12-12T10:40:19 PENDING spartako2 spartako2 0.00000 BTS publish price feeds 0.10000 BTS 8e29fc31
Anyway, when it was my delegate's turn these transactions were put into the block that I signed, so they are actually no longer PENDING
-
The only strange thing is that I get PENDING for every transaction that I make:
2014-12-12T10:24:24 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 688498dc
2014-12-12T10:24:26 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 6cf54335
2014-12-12T10:33:47 PENDING spartako spartako 0.00000 BTS publish version v0.4.25 0.10000 BTS 714201f0
2014-12-12T10:40:19 PENDING spartako1 spartako1 0.00000 BTS publish price feeds 0.10000 BTS 3715b4c5
2014-12-12T10:40:19 PENDING spartako spartako 0.00000 BTS publish price feeds 0.10000 BTS 3ed51614
2014-12-12T10:40:19 PENDING spartako2 spartako2 0.00000 BTS publish price feeds 0.10000 BTS 8e29fc31
Anyway, when it was my delegate's turn these transactions were put into the block that I signed, so they are actually no longer PENDING
"blockchain_confirmation_requirement": 99
99 confirms, but a block ain't one :)
-
"blockchain_confirmation_requirement": 99
99 confirms, but a block ain't one :)
Thanks! Now I understand why they are PENDING :)
-
With 0.4.25 RC2 I seem to be on the fading fork. I am now down to 35 % participation.
FORKED BLOCK FORKING BLOCK ID SIGNING DELEGATE TXN COUNT SIZE TIMESTAMP LATENCY VALID IN CURRENT CHAIN
--------------------------------------------------------------------------------------------------------------------------------------------------------------
1246468
68b7e003ca08d50843faaddcf696cfff1954ede3 moon.delegate.service 0 166 2014-12-12T09:29:10 1155 YES YES
bf7fe0f13b2fb4e8377d595667156780c6345157 init0 1 1807 2014-12-12T09:28:50 1179 NO NO
REASONS FOR INVALID BLOCKS
bf7fe0f13b2fb4e8377d595667156780c6345157: 30007 duplicate_transaction: duplicate transaction
Upgrade to 0.4.25 and delete the "chain" folder (maybe it's not necessary due to reindexing?)
It took me about 20 minutes to download a whole new chain. Not finished yet... the higher it gets, the slower it goes.
How much time is needed for re-indexing? And how much for re-downloading?
Estimate, compare, then decide ;)
-
It took me about 20 minutes to download a whole new chain.
How much time is needed for re-indexing?
Estimate, compare, then decide ;)
Depends on network and CPU power. Re-indexing took me about 10 minutes. However, downloading the new chain might still be faster for me as I have a seed node locally.
-
What a total cluster f*ck this entire thing is.
We need to make sure that code is thoroughly reviewed before being pushed to latest.
edit: you could title this whole event: how forum notifications almost brought down the entire network
-
delegate.liondani &
jcalfee1-developer-team.helper.liondani
are successfully upgraded to v0.4.25 and
on the main chain with 70% participation ;)
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
Do you need 4 GB of RAM just to build, or also to run?
-
The network is currently having major issues with forks due to the latest client release, go to the home page on bitsharesblocks to get an idea. Delegate participation rate has been down to 60% but is now back to 87% at least, we've had confirmation times of >200s.
I would do, but I'm in a public library where only ports 80 and 443 are open, so no bitsharesblocks for me :|
I guess it's a good job that not everyone upgraded, due to being asleep/lazy! :)
I'll get around to fixing that eventually! ;) The problem now is the delegates who are still on v0.4.24 or one of the release candidates for v0.4.25. There is a hardfork coming up in about 8 hours as well, and if your VPS server has less than 4gb of ram you might need to switch servers!
Do you need 4 GB of RAM just to build, or also to run?
Only to build; I've successfully upgraded my seed node, which only has 512 MB RAM.
If you build on another computer you'll be fine.
-
My (standby) delegate a.delegate.abit is running on v0.4.25 now..
"blockchain_average_delegate_participation": "68.71 %"
-
riverhead-del-server-1 and backbone.riverhead are on the main chain now, I think. 77.7% participation
-
riverhead-del-server-1 and backbone.riverhead are on the main chain now, I think. 77.7% participation
What about bm.payroll.riverhead?
Isn't that somewhat important?
-
Upgraded to v0.4.25:
delegate1-galt
delegate1.john-galt
-
riverhead-del-server-1 and backbone.riverhead are on the main chain now, I think. 77.7% participation
What about bm.payroll.riverhead?
Isn't that somewhat important?
That is in progress (just waiting on sync). It's on a VPS far, far away so things are a bit slower. I had already mostly upgraded the other two last night but fell asleep at the keys around 2am.
Also, missed blocks for bm.payroll.riverhead come out of my pocket, not his :). BM will not suffer any wage loss due to this issue.
-
riverhead-del-server-1 and backbone.riverhead are on the main chain now, I think. 77.7% participation
What about bm.payroll.riverhead?
Isn't that somewhat important?
That is in progress (just waiting on sync). It's on a VPS far, far away so things are a bit slower. I had already mostly upgraded the other two last night but fell asleep at the keys around 2am.
Also, missed blocks for bm.payroll.riverhead come out of my pocket, not his :). BM will not suffer any wage loss due to this issue.
That is strange...
Are the deal parameters public (the deal between you and BM for the delegate)?
-
riverhead-del-server-1 and backbone.riverhead are on the main chain now, I think. 77.7% participation
What about bm.payroll.riverhead?
Isn't that somewhat important?
Yup, it was in the proposal I posted. I'll find it in a bit, kinda busy right now ;).
That is in progress (just waiting on sync). It's on a VPS far, far away so things are a bit slower. I had already mostly upgraded the other two last night but fell asleep at the keys around 2am.
Also, missed blocks for bm.payroll.riverhead come out of my pocket, not his :). BM will not suffer any wage loss due to this issue.
That is strange...
Are the deal parameters public (the deal between you and BM for the delegate)?
-
My delegates are finally producing again... sorry for the delay... I was sleeping and have just noticed the EARLY!!! hardfork!
I also had to switch to a different machine... I hope the devs can fix the memory leak so that I can switch back to the original machine...
-
riverhead-del-server-1 and backbone.riverhead are on the main chain I think now. 77.7% particiption
What about bm.payroll.riverhead ?
Isn't that somewhat important ?
That is in progress (just waiting on sync). It's on a VPS far, far away, so things are a bit slower. I had already mostly upgraded the other two last night but fell asleep at the keys around 2am.
Also, missed blocks for bm.payroll.riverhead come out of my pocket, not his :). BM will not suffer any wage loss due to this issue.
That is strange...
Are the deal parameters public (the deal between you and BM for the delegate) ?
Yes. It's in here somewhere. I'll find it later. Kinda busy right now ;).
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
What a nice surprise you woke up to! :)
So, is the memory leak on-going, or just at re-indexing time?
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
Oh, and it turns out there's no need to change machines after all; I did so at first, but you can reindex and resync just fine with 512 MB of RAM and a swap file. It's the compile step that needs lots of RAM.
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
What a nice surprise you woke up to! :)
So, is the memory leak on-going, or just at re-indexing time?
the client reindexes just fine .. then comes up completely and freezes with 99% CPU and over 2 GB of RAM consumption .. no matter how often I restart the client
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
Oh, and it turns out there's no need to change machines after all; I did so at first, but you can reindex and resync just fine with 512 MB of RAM and a swap file. It's the compile step that needs lots of RAM.
gonna give it a shot ..
redownloading blockchain on it ..
thanks for the hint ..
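For anyone following along on a small VPS, adding a swap file so reindexing can spill past 512 MB of physical RAM can be sketched as below. The path and size are illustrative assumptions, and every step needs root; the BitShares client itself is not involved here:

```shell
# Create a 2 GB swap file (size is an assumption; adjust to your disk).
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile     # swap files must not be world-readable
sudo mkswap /swapfile        # format the file as swap space
sudo swapon /swapfile        # enable it immediately
# To make it survive reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0
```

With the swap active, reindex/resync should get through on 512 MB of RAM; compiling the client is still better done on a bigger machine.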
-
It will be nice when we don't hard fork as often, and when everything is run for a while first on DevShares to prevent this. :)
-
I think the devs should launch new versions in the morning, in case of a fatal bug.
There are so many transactions now; is that normal?
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
Oh, and it turns out there's no need to change machines after all; I did so at first, but you can reindex and resync just fine with 512 MB of RAM and a swap file. It's the compile step that needs lots of RAM.
Weird. My experience was more like xeroc's. The machine locked up so bad the live delegate was missing blocks.
-
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
How is that fortunate? We already suffered the most brutal part of hard forks unintentionally, at this point we should want block 1249400 to arrive as soon as possible.
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
Oh, and it turns out there's no need to change machines after all; I did so at first, but you can reindex and resync just fine with 512 MB of RAM and a swap file. It's the compile step that needs lots of RAM.
Weird. My experience was more like xeroc's. The machine locked up so bad the live delegate was missing blocks.
I saw some unusual crashes at client start, with high CPU usage just before each crash. Other than that, the update went smoothly.
-
I'm thinking maybe I'm on a fork; "get_info" says that delegate participation is 4.43% and I'm stuck downloading blocks at 1246468. Will I have to re-download the blockchain? If so, how do I ensure that I ultimately end up on the right fork?
Or maybe the answers to this question are too complicated and I should just wait for smarter people to fix everything...
-
my delegates are finally producing again .. sorry for the delay .. was sleeping and just noticed the EARLY!!! hardfork!
also had to switch to a different machine .. I hope the devs can fix the memory leak so that I can switch back to the original machine ..
Fortunately thanks to all the missed blocks the time of the hardfork has been pushed back by over two hours! ;)
Oh, and it turns out there's no need to change machines after all; I did so at first, but you can reindex and resync just fine with 512 MB of RAM and a swap file. It's the compile step that needs lots of RAM.
Weird. My experience was more like xeroc's. The machine locked up so bad the live delegate was missing blocks.
I saw some unusual crashes at client start, with high CPU usage just before each crash. Other than that, the update went smoothly.
Ya, seems to have gone better for some than others for sure. Network was down to 60% for a bit.
-
I'm thinking maybe I'm on a fork; "get_info" says that delegate participation is 4.43% and I'm stuck downloading blocks at 1246468. Will I have to re-download the blockchain? If so, how do I ensure that I ultimately end up on the right fork?
Or maybe the answers to this question are too complicated and I should just wait for smarter people to fix everything...
run with --rebuild-index first, before you try --resync-blockchain as a last resort.
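Assuming the client binary is called `bitshares_client` (an assumption; use whatever your install names it), that escalation path looks like:

```shell
# Step 1: the cheap fix - rebuild the local index from blocks already on disk.
bitshares_client --rebuild-index

# Step 2: last resort - discard local blocks and re-download the whole chain.
# Only do this if --rebuild-index still leaves you stuck on a fork.
bitshares_client --resync-blockchain
```

Resyncing pulls blocks from whatever peers you connect to, so after it finishes, double-check with `get_info` that delegate participation looks healthy before trusting the chain you landed on.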
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
Edit by Vikram: Please read https://bitsharestalk.org/index.php?topic=12207.msg161341#msg161341
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
You don't mean the delegates that run v0.4.25 I suppose...
-
Confused ...
-
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
I *knew* I should have waited for official word.
So, downgrade it is...
-
[URGENT] Delegates Please Revert Upgrade!
https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
https://bitsharestalk.org/index.php?topic=7067.msg161432#msg161432
All community members please help spread the word.
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
Actually, all 0.4.24.1 clients are on a minority fork, as are the 0.4.25-RC1 clients.
The 0.4.25 ones are on another fork.
I don't know about 0.4.25-RC2, though.
okay, I am confused. :-X
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
Downgrading to 0.4.24 from 0.4.25 will likely be problematic due to the LevelDB upgrade that occurred, although at least one member reported success.
0.4.25-RC2 had an issue with wallet backup imports but is sufficient to continue operating the chain until 0.4.26 is released.
Please see details here: https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
Downgrading to 0.4.24 from 0.4.25 will likely be problematic due to the LevelDB upgrade that occurred, although at least one member reported success.
0.4.25-RC2 had an issue with wallet backup imports but is sufficient to continue operating the chain until 0.4.26 is released.
Please see details here: https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
and what about the short covers we made on v0.4.25?
(and, in general, transactions made in the last hours with this version?)
-
are you testing us?
It doesn't make sense to me...
are we not with 82% participation with v0.4.25?
Why downgrade?
At least explain!
PS How can I know you are not kidnapped and you post this because you have a gun pointing on you ?
Participation is high because most delegates have upgraded. Most shareholders have not upgraded and are operating on the minority fork with low participation. To protect the shareholders, we request delegates to switch back to the minority fork. Please see my post here: https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
I am happy to have Toast and Bytemaster confirm (or whatever proof you want) that I am not being coerced.
-
All users who haven't upgraded to the pre-release are on the minority fork.
All delegates should downgrade to the last (Hot Fix) 0.4.24.1 immediately to rejoin the majority of users and start confirming that network.
0.4.25-RC2 had an intractable issue.
Vikram knows the details. We are working on a new upgrade but want to get the network back to a stable condition for users as soon as possible.
All valid transactions have been applied to both networks.
Downgrading to 0.4.24 from 0.4.25 will likely be problematic due to the LevelDB upgrade that occurred, although at least one member reported success.
0.4.25-RC2 had an issue with wallet backup imports but is sufficient to continue operating the chain until 0.4.26 is released.
Please see details here: https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
and what about the short covers we made on v0.4.25?
(and, in general, transactions made in the last hours with this version?)
They may have been applied to the minority fork as they were propagated through the network; it is possible that they were not and will be reverted if they did not get into a block.
-
0.4.26 will force everyone onto the minority fork because 0.4.25 has bad validation logic.
So the sooner we have consensus on a working version (0.4.25-RC2?), the fewer reversed transactions there will be.
Exchanges were alerted to stop deposits/withdrawals a long time ago.
-
The OP needs to be updated to reflect this; most of this thread is about how delegates need to UPGRADE, not DOWNGRADE. Maybe a mod even needs to cross out those posts and add 'PLEASE DOWNGRADE' or something like that...
What a mess.
-
DOWNGRADE all clients running v0.4.25 to v0.4.25-RC2.
check the latest post from the devs
https://bitsharestalk.org/index.php?topic=7067.msg161312#msg161312
-
I am downgrading to v0.4.25-RC2 right now, hope it works.
-
I am downgrading to v0.4.25-RC2 right now, hope it works.
v0.4.26 has already been released.
-
Check your info screen. Mine looked like it was all good, but it had actually stopped at a block six hours old. blockchain_list_delegates was showing the * moving around, but no blocks were being produced... or at least none were showing in my list because of the hung sync. The sync was on the correct chain (verified against drltc's posted hash).
So, stay on your toes.
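A quick way to sanity-check this, using only the console commands already mentioned in the thread (`get_info` and `blockchain_list_delegates`), is to compare the head block's age against the wall clock. The exact field names in the `get_info` output are from memory and may differ between client versions:

```shell
# Inside the bitshares_client console (not a system shell):
# >>> get_info
#   - check the head block number and its age/timestamp:
#     if it is more than a few minutes old, your node is hung or forked,
#     no matter how healthy the rest of the output looks.
# >>> blockchain_list_delegates
#   - the '*' marker moving while your head block number stays frozen
#     is the same symptom seen from the other side.
```

If the head block is stale, restarting with `--rebuild-index` (and `--resync-blockchain` only as a last resort) is the recovery path suggested earlier in the thread.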
-
v0.4.25-RC2: re-downloaded the chain, but there's no way to sync; it hangs a few seconds after every new start...
-
I am downgrading to v0.4.25-RC2 right now, hope it works.
v0.4.26 has already been released.
oh great, thanks I didn't see a notification from the update thread.
-
I am downgrading to v0.4.25-RC2 right now, hope it works.
v0.4.26 has already been released.
oh great, thanks I didn't see a notification from the update thread.
They're still doing internal testing. No official announcement yet.
-
I am downgrading to v0.4.25-RC2 right now, hope it works.
v0.4.26 has already been released.
Good news - so let's try this one 8)