Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - emski

Pages: [1] 2 3 4 5 6 7 8 ... 86
1
Stakeholder Proposals / Re: Proxy: fav - Journal
« on: January 12, 2016, 09:03:56 am »
emski removed from waiting list upon request
Sorry, what's "emski"?

I'm no longer hosting BTS witnesses.

2
General Discussion / Re: Looking for BTS 2.0 Seed Node Operators
« on: November 02, 2015, 03:48:29 pm »
Took me a while, but here's the pull request:

https://github.com/bitshares/bitshares-2/pull/5

This includes everyone who has posted in this thread. I don't think I missed anybody, but if so, please let me know.

I've already sent a pull request for my seed node.
I'll withdraw it as it might conflict with your (more complete) list.

3
Technical Support / Re: [python] failover script
« on: October 30, 2015, 09:25:06 am »

There is no concept of "low risk" when you are dealing with such a system. It either works in all possible cases or it is not secure. Simultaneously signing two chains (even with different signing keys) is an issue. This breaks BM's definition of irreversibility.

My recommendation is to have two synchronized nodes, with the control node allowing only one of them to sign blocks at a time (if they end up on different forks, the control node simply picks which one should be active).

I think 'low risk' is 'too much risk' for the BTS network to take. This is especially true since we are dealing with people's money and the reputation of BitShares.

puppies, is it possible to refine the failover script to include an external 'control node' as recommended by emski?

He already has an external control node. The issue is that both his witness nodes are simultaneously signing blocks (with different signing keys). The control node just updates the signing keys. What I propose is that the control node ensures that only one of the signers is active at any moment.
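A minimal sketch of the policy described here, assuming a simple polling control node: only one of the two witness nodes is ever allowed to hold the active signing key. The helpers is_healthy(), enable_signing() and disable_signing() are placeholders for whatever RPC mechanism the operator actually uses (e.g. updating the witness signing key through the wallet); they are not real BitShares API calls.

Code:
import time

NODES = ["node_a", "node_b"]          # the two synchronized witness nodes
active = "node_a"                      # exactly one node is allowed to sign

def is_healthy(node):
    """Placeholder: return True if this node is synced and not missing blocks."""
    raise NotImplementedError

def enable_signing(node):
    """Placeholder: hand this node the currently published signing key."""
    raise NotImplementedError

def disable_signing(node):
    """Placeholder: make sure this node can no longer produce blocks."""
    raise NotImplementedError

while True:
    if not is_healthy(active):
        standby = next(n for n in NODES if n != active)
        # Stop the failing signer *before* activating the standby, so the two
        # nodes never sign simultaneously, even if they sit on different forks.
        disable_signing(active)
        enable_signing(standby)
        active = standby
    time.sleep(10)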

4
Technical Support / Re: [python] failover script
« on: October 30, 2015, 08:26:51 am »
I've read your post.
I state that allowing two nodes to sign blocks with the same witness account simultaneously should be banned.
@Bytemaster, do you agree?
@puppies, your control node should make sure that only one of the witness nodes is signing blocks at any moment.

There is no concept of "low risk" when you are dealing with such a system. It either works in all possible cases or it is not secure. Simultaneously signing two chains (even with different signing keys) is an issue. This breaks BM's definition of irreversibility.

My recommendation is to have two synchronized nodes, with the control node allowing only one of them to sign blocks at a time (if they end up on different forks, the control node simply picks which one should be active).

This is my opinion. Feel free to do whatever you consider "low enough risk".

5
Technical Support / Re: [python] failover script
« on: October 29, 2015, 09:54:20 pm »
Let me see if I got it right:

1. You are running two witness instances for the same witness account, but with different signing keys.
2. You allow both nodes to sign blocks.
3. At some point in time you want to switch the signing key.

This can work only if the switch signing key transaction is confirmed in BOTH chains AND only one node signs blocks at any moment.

2/3 confirmation is not irreversible if there is an option for double signing.

See my example here (from this thread: https://bitsharestalk.org/index.php/topic,19360.0.html):
No response?

Imagine the following situation:

31 witnesses total.
Automated backup that works like this (run from the secondary node):
1. If the primary node is missing blocks, publish a signing-key-change transaction.
2. Check the latest irreversible block (by BM's definition, one signed by 66% of witnesses, i.e. 21 in total) and verify that the signing key change is irreversible.
3. Start signing blocks with the new key once the change is irreversible.

Let's say the witnesses use the above-mentioned automated backup.
Let's say we have a network split where the witnesses are divided into two groups -> group A (21) / group B (10).
In chain A (with 21 witnesses) we have 10 witnesses missing blocks.
In chain B (with 10 witnesses) we have 21 witnesses missing blocks.

In chain A we have 10 signing-key-change transactions (for all witnesses from group B). When these transactions are confirmed, the backup nodes for group B start signing blocks.

Now imagine witnesses from A begin to lose their connection to the other nodes in A and connect to witnesses in B. Let this happen one witness at a time.
When the first witness (X) "transfers" from A to B, group A will still have more than 66% participation. X's backup node (still connected to group A) will then activate, changing the signing key and starting to sign blocks => maintaining 100% participation in chain A. However, the original X will continue signing blocks together with group B. If this is repeated 11 times (note that this can happen with up to 10 witnesses simultaneously), we'll have:
Fork A with >66% active witnesses; Fork B with >66% active witnesses.

Again, I'm not saying this is likely to happen, but it might be doable if witnesses are able to sign on two chains simultaneously.
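The arithmetic behind the scenario can be checked in a few lines; the numbers (31 witnesses, a 21/10 split, 11 one-at-a-time transfers) are taken from the post above, and the code is only illustrative.

Code:
TOTAL = 31
THRESHOLD = 2 * TOTAL / 3            # ~20.7, i.e. 21 witnesses needed for >2/3

group_a, group_b, double_signers = 21, 10, 0
for _ in range(11):                  # witnesses "transfer" from A to B one at a time
    group_a -= 1                     # the original node ends up producing on fork B...
    group_b += 1
    double_signers += 1              # ...while its backup keeps producing on fork A

fork_a = group_a + double_signers    # 10 + 11 = 21 producers on fork A
fork_b = group_b                     # 21 producers on fork B
print(fork_a > THRESHOLD, fork_b > THRESHOLD)   # True True -> two forks above 2/3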

6
Technical Support / Re: BTS 2.0 USD Feedprice
« on: October 29, 2015, 07:58:57 am »
The reason for this could be that my script takes the median of all enabled exchanges, independent of the actual volume. And the reason for that is that volume can be easily manipulated.

I could add a trigger and let the witness decide whether to use the median or the weighted price.

Volume should always be included in the calculations.
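To illustrate the trade-off being discussed, here is a small sketch comparing a plain median with a volume-weighted price; the exchange names and figures are invented, and this is not the actual feed script.

Code:
quotes = [
    # (exchange, price, volume)
    ("exchange_1", 0.00405, 120000),
    ("exchange_2", 0.00410,  80000),
    ("exchange_3", 0.00520,    500),   # tiny market, price far off
]

def plain_median(qs):
    prices = sorted(p for _, p, _ in qs)
    n = len(prices)
    mid = n // 2
    return prices[mid] if n % 2 else (prices[mid - 1] + prices[mid]) / 2

def volume_weighted(qs):
    total = sum(v for _, _, v in qs)
    return sum(p * v for _, p, v in qs) / total

print(plain_median(quotes))      # 0.00410 -- every exchange counts equally
print(volume_weighted(quotes))   # ~0.00407 -- the illiquid outlier barely matters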

7
Technical Support / Re: [python] failover script
« on: October 29, 2015, 07:23:51 am »

Switch.py will now integrate with 2 remote witness nodes. It will ensure that the signing keys for the specified witness match. If there is a fork and they do not match, switch.py will copy the signing key from the node with higher witness participation to the node with lower witness participation. Documentation and comments are still pretty minimal. I will try to flesh those out when I get a chance.


Can you provide more info and/or an example of this?
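For what it's worth, here is one reading of the behaviour described in the quote, as a sketch rather than the actual switch.py code; get_participation(), get_signing_key() and set_signing_key() are placeholders for whatever RPC calls the real script makes against the two witness nodes.

Code:
def get_participation(node):
    """Placeholder: the witness participation rate this node reports (0-100)."""
    raise NotImplementedError

def get_signing_key(node, witness):
    """Placeholder: the signing key this node has on record for the witness."""
    raise NotImplementedError

def set_signing_key(node, witness, key):
    """Placeholder: overwrite the signing key this node uses for the witness."""
    raise NotImplementedError

def reconcile(node_a, node_b, witness):
    key_a = get_signing_key(node_a, witness)
    key_b = get_signing_key(node_b, witness)
    if key_a == key_b:
        return                                   # the nodes agree, nothing to do
    # Keys differ, so the nodes are presumably on different forks; trust the
    # node that sees higher witness participation and copy its key to the other.
    if get_participation(node_a) >= get_participation(node_b):
        set_signing_key(node_b, witness, key_a)
    else:
        set_signing_key(node_a, witness, key_b)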

8
Congratulations! You've persistently improved these tools from the ground up. Good job!

9
General Discussion / Re: Graphene GUI testing and feedback
« on: October 23, 2015, 05:39:29 am »
svk,
The last GUI (BitShares-light_2.15.288) is so bad in the light wallet... I had to downgrade... Seriously, the ability to actually click on 'update your position' is kind of important - so it kind of has to fit in the window... if you do want to do something with it, that is.


PS
Does anybody have any idea whether the idiotic browser the light client runs inside of has anything that can do Ctrl+/-?
Ctrl+scroll?

10
Technical Support / Re: Network Security Question
« on: October 23, 2015, 05:07:51 am »
I haven't completely figured out how to automate it, but all we really have to do from a witness perspective is ensure that all producing nodes see the same value in get_witness["signing_key"]. If all witnesses see the same value, then even if forked there will be no double signing.

Checking the signing key and setting the signing key are relatively easy. What I need to figure out is the logic of determining what the key should be set to, if and when two different nodes return different values.

Again... relying on the blockchain state to start the backup node is INCORRECT.
EDIT: Because you cannot be sure whether you are on the correct fork.
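The detection half of the check puppies describes could look roughly like the sketch below; query_witness() is a placeholder for the actual API call. As noted above, detecting the mismatch is the easy part; deciding which node's value to trust is the open question.

Code:
def query_witness(node, witness_name):
    """Placeholder: the witness object as reported by this node's API."""
    raise NotImplementedError

def signing_keys_consistent(nodes, witness_name):
    keys = {node: query_witness(node, witness_name)["signing_key"] for node in nodes}
    if len(set(keys.values())) > 1:
        # The nodes disagree, most likely because they are on different forks.
        # Which fork (and hence which key) is the "correct" one cannot be decided
        # from the blockchain state alone.
        print("signing key mismatch:", keys)
        return False
    return True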

11
Technical Support / Re: Network Security Question
« on: October 22, 2015, 10:07:37 pm »
No response?

Imagine the following situation:

31 witnesses total.
Automated backup that works like this (run from the secondary node):
1. If the primary node is missing blocks, publish a signing-key-change transaction.
2. Check the latest irreversible block (by BM's definition, one signed by 66% of witnesses, i.e. 21 in total) and verify that the signing key change is irreversible.
3. Start signing blocks with the new key once the change is irreversible.

Let's say the witnesses use the above-mentioned automated backup.
Let's say we have a network split where the witnesses are divided into two groups -> group A (21) / group B (10).
In chain A (with 21 witnesses) we have 10 witnesses missing blocks.
In chain B (with 10 witnesses) we have 21 witnesses missing blocks.

In chain A we have 10 signing-key-change transactions (for all witnesses from group B). When these transactions are confirmed, the backup nodes for group B start signing blocks.

Now imagine witnesses from A begin to lose their connection to the other nodes in A and connect to witnesses in B. Let this happen one witness at a time.
When the first witness (X) "transfers" from A to B, group A will still have more than 66% participation. X's backup node (still connected to group A) will then activate, changing the signing key and starting to sign blocks => maintaining 100% participation in chain A. However, the original X will continue signing blocks together with group B. If this is repeated 11 times (note that this can happen with up to 10 witnesses simultaneously), we'll have:
Fork A with >66% active witnesses; Fork B with >66% active witnesses.

Again, I'm not saying this is likely to happen, but it might be doable if witnesses are able to sign on two chains simultaneously.


12
General Discussion / Re: BitShares 2 Release Coordination Thread
« on: October 22, 2015, 03:17:34 pm »
Latest tag. Witness can't synchronize; the following cycle keeps repeating:
Code:
916125ms th_a       application.cpp:524           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:81 _push_block

    {"new_block":{"previous":"0003e258da8ba8a4297d95c6397bb52fb21cd799","timestamp":"2015-10-22T14:23:51","witness":"1.6.14","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f4d9de85b26f4466d8e7bf21eeb012672d842645d577550802c72d26af2eca0a426c26f90f9a2e46d3b6d80ce75360470a64b1281ff3c2194695e8c15262bb430","transactions":[]}}
    th_a  db_block.cpp:200 _push_block
916126ms th_a       fork_database.cpp:60          push_block           ] Pushing block to fork database that failed to link: 0003e25aff0e90296a70625feea4d03798a68b4c, 254554
916126ms th_a       fork_database.cpp:61          push_block           ] Head: 254425, 0003e1d91a65ae9a74abb2d2c6c38884f053f784
916126ms th_a       application.cpp:524           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:81 _push_block

    {"new_block":{"previous":"0003e259e3f0a7f055aff18cd643a84729ce0719","timestamp":"2015-10-22T14:23:54","witness":"1.6.22","transaction_merkle_root":"7883d1cad9c917b47f4aaaaa12ca74a21da302b6","extensions":[],"witness_signature":"1f05800866df7ec90d0b61507c4922c3a9ae0e42ec93a22ed27efda360902aed164e087fa0540146801ff37ce913b3ee0975b09595fbcc60a73584fee042e5d6a6","transactions":[{"ref_block_num":57945,"ref_block_prefix":4037538019,"expiration":"2015-10-22T14:24:06","operations":[[1,{"fee":{"amount":1000000,"asset_id":"1.3.0"},"seller":"1.2.23707","amount_to_sell":{"amount":16582298,"asset_id":"1.3.113"},"min_to_receive":{"amount":"6377807400","asset_id":"1.3.0"},"expiration":"2020-10-22T14:23:47","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f14b1089e27b707446d52fad82917bf7679a7801822d4fed9f4b644a6a4da4dae7500b84e1e6963acddaf1ed417d80d18dd321184fe74b50c4ffe66c37bc231a9"],"operation_results":[[1,"1.7.1374"]]}]}}
    th_a  db_block.cpp:200 _push_block
916126ms th_a       fork_database.cpp:60          push_block           ] Pushing block to fork database that failed to link: 0003e25bf57b0358aaaf5f5ce6f2ec9430c94a49, 254555
916126ms th_a       fork_database.cpp:61          push_block           ] Head: 254425, 0003e1d91a65ae9a74abb2d2c6c38884f053f784
916126ms th_a       application.cpp:524           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:81 _push_block

    {"new_block":{"previous":"0003e25aff0e90296a70625feea4d03798a68b4c","timestamp":"2015-10-22T14:23:57","witness":"1.6.20","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f056acfc41a934bfad4aff3258b42bd07c2175aa88a9c2fd790dd61d2ac1e83c61a54e5501401cd07818c5e99df55eadecdf979cc10abf1850a606db7facb5851","transactions":[]}}
    th_a  db_block.cpp:200 _push_block
916127ms th_a       fork_database.cpp:60          push_block           ] Pushing block to fork database that failed to link: 0003e25c186ef2e90aa0b7311bd339451a801d98, 254556
916127ms th_a       fork_database.cpp:61          push_block           ] Head: 254425, 0003e1d91a65ae9a74abb2d2c6c38884f053f784
916127ms th_a       application.cpp:524           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:81 _push_block

    {"new_block":{"previous":"0003e25bf57b0358aaaf5f5ce6f2ec9430c94a49","timestamp":"2015-10-22T14:24:00","witness":"1.6.42","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f52dbec94e4696b8afeb66890878b61b1aec321172de4f4750665472442bec24b1a46c98085535b8a0edd4003a501bfa4b6eb9cb0d1c76435d36214bc542722d4","transactions":[]}}
    th_a  db_block.cpp:200 _push_block
916127ms th_a       fork_database.cpp:60          push_block           ] Pushing block to fork database that failed to link: 0003e25d32a3934e4ee8b80e1913977ba2ab8961, 254557
916127ms th_a       fork_database.cpp:61          push_block           ] Head: 254425, 0003e1d91a65ae9a74abb2d2c6c38884f053f784
916127ms th_a       application.cpp:524           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:81 _push_block

    {"new_block":{"previous":"0003e25c186ef2e90aa0b7311bd339451a801d98","timestamp":"2015-10-22T14:24:06","witness":"1.6.32","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f6bdef4fde43434655d735d2e4a305929f02da613e13e8eff95e516d915d807472d05d4fec1724f0e6887bfcadb7498d14272e20ac2debc4fa181221cf96b3657","transactions":[]}}
    th_a  db_block.cpp:200 _push_block

EDIT: --replay-blockchain doesn't help.
EDIT: It looks like witness 1.6.24 forked and produced on its own fork. The result is that my node is not synchronizing.

13
Technical Support / Re: Network Security Question
« on: October 22, 2015, 07:33:58 am »
The transaction with the new signing key is in ForkA.
That can't really happen .. the IRREVERSIBILITY requires 2/3 of the witnesses to sign the transaction .. hence it cannot be on the minority fork .. and if it was .. it would not have been signed by 2/3 of the witnesses ..

If what you say is true for one witness, then it should be true for all of them.
If you have 17 witnesses and 8 of them sign on two forks => you could have 2 forks, each with at least 12 witnesses participating (more than 2/3 of 17).

I'm not saying it is likely, but if everyone is using a solution like this, it is possible.
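The arithmetic can be checked directly; the split of the 9 remaining witnesses below is just one possible example.

Code:
import math

total = 17
double_signers = 8                        # witnesses signing on both forks
honest = total - double_signers           # 9 witnesses that stay on a single fork
fork_a_honest, fork_b_honest = 4, 5       # one possible split of the honest witnesses

fork_a = double_signers + fork_a_honest   # 12
fork_b = double_signers + fork_b_honest   # 13
needed = math.ceil(2 * total / 3)         # 12 witnesses are at least 2/3 of 17
print(fork_a >= needed, fork_b >= needed) # True True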

14
Technical Support / Re: Network Security Question
« on: October 22, 2015, 06:36:42 am »
In order to change the signing key you need to do a transaction to announce the new signing key .. once this transaction meets the irreversible block offset (30 blocks or so)
you can consider it a checkpoint and no forks are possible any more .. if only one of the machines knows the corresponding key and is THEN activated to produce blocks, you are safe from signing independent forks ..

Suppose the transaction with the new signing key is in ForkA. ForkB could then miss this transaction. Then you have NodeA signing on ForkA and NodeB signing on ForkB with the old signing key. There is NO IRREVERSIBLE block offset in case witnesses double-sign blocks, as you might have 2 valid chains with >50% participation. This was discussed numerous times before...
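For concreteness, a minimal sketch of the 'irreversible block offset' check described in the quote, using placeholder helper names rather than real API calls; as argued in this reply, the check only helps if witnesses cannot sign on two forks at once.

Code:
def get_last_irreversible_block(node):
    """Placeholder: the last irreversible block number as reported by this node."""
    raise NotImplementedError

def key_change_is_irreversible(node, key_change_block_num):
    # Only once the block containing the key-change transaction falls at or below
    # this node's last irreversible block would the backup start producing.
    return key_change_block_num <= get_last_irreversible_block(node)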

15
Technical Support / Network Security Question
« on: October 22, 2015, 06:00:38 am »
We've recently discussed some automated witness backup solutions that could possibly end up signing 2 different forks for a single witness.
In such a case (assuming more than one witness uses the script) we could have multiple forks with >50% participation, which is obviously extremely undesirable.
In BTS 1.0 we had a relatively easy way to check if someone was signing on two forks (and we had a massive number of double signers).

My question to @Bytemaster is: how do we guarantee this doesn't happen?
