Author Topic: 2 x 800 PTS - Generate Unspent Output Set every day at midnight GMT [PAID]


Offline BrownBear

  • Full Member
  • ***
  • Posts: 51
OK, this means data will only be available after 2am GMT.

Offline bytemaster

So rather than using the first day change, use the last day change as the snapshot block for PTS.


Sent from my iPhone using Tapatalk
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline bytemaster

I suspect you are also right. The goal is to sync with AGS, and that means sticking to the timestamp of the block and ignoring order.



Offline bytemaster

Quote from: BrownBear
It would be way easier to implement if we would instead say the first change of day is final, no going back in date. With your approach I have to do a cache for the last 2 hours of blocks. Any chance of changing the definition?

I believe using the first day change may be acceptable and is less ambiguous.   
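
For reference, the "first change of day" reading boils down to something like this minimal Python sketch; the (height, unix_timestamp) block shape is an assumption, not a real PTS client API:

Code: [Select]
# Minimal sketch of the "first change of day is final" rule: a day's
# boundary is the first block carrying a new UTC date, and later blocks
# with earlier timestamps cannot move it back.
from datetime import datetime, timezone

def utc_date(ts):
    return datetime.fromtimestamp(ts, tz=timezone.utc).date()

def first_day_change_blocks(blocks):
    boundaries = {}  # UTC date -> height of the first block with that date
    for height, ts in blocks:  # blocks: (height, unix_timestamp) in chain order
        boundaries.setdefault(utc_date(ts), height)
    return boundaries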

Offline BrownBear

It would be way easier to implement if we would instead say the first change of day is final, no going back in date. With your approach I have to do a cache for the last 2 hours of blocks. Any chance of changing the definition?

Offline bytemaster

Quote from: BrownBear
I found another inconsistency problem. Maybe you already defined a solution for this as well:

Dates in the blockchain can be inconsistent; for example, these are the dates of some early blocks:
block height:  4127 2013-11-06
block height:  4128 2013-11-06
block height:  4129 2013-11-06
block height:  4130 2013-11-07
block height:  4131 2013-11-07
block height:  4132 2013-11-07
block height:  4133 2013-11-07
block height:  4134 2013-11-06
block height:  4135 2013-11-06
block height:  4136 2013-11-06
block height:  4137 2013-11-07
block height:  4138 2013-11-07
block height:  4139 2013-11-07
block height:  4140 2013-11-07
block height:  4141 2013-11-07
block height:  4142 2013-11-07
block height:  4143 2013-11-07
block height:  4144 2013-11-07

This makes it hard to determine when exactly one day ends and another starts (this also seems abusable in general). How to handle it (if possible without looking into the future)?

The official answer to this is that you ignore the order of the blocks and merely look at the timestamp to allocate. No need to draw a line. If the days overlap then they overlap. There is a 2-hour tolerance in the Bitcoin codebase. Anyone donating in that 2-hour window is at the mercy of the miners to determine what day the block gets included in.
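
Concretely, that rule reduces to bucketing by timestamp alone. A minimal Python sketch; the (timestamp, address, amount) donation tuple shape is an assumption, not any real client API:

Code: [Select]
# Sketch of the official rule: a donation belongs to the UTC date of its
# block's timestamp, ignoring block order entirely. Non-monotonic
# timestamps simply make days overlap in block height.
from collections import defaultdict
from datetime import datetime, timezone

def allocate_by_timestamp(donations):
    per_day = defaultdict(lambda: defaultdict(int))
    for ts, address, amount in donations:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
        per_day[day][address] += amount
    return per_day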

Offline BrownBear

I found another inconsistency problem. Maybe you already defined a solution for this as well:

Dates in the blockchain can be inconsistent; for example, these are the dates of some early blocks:
block height:  4127 2013-11-06
block height:  4128 2013-11-06
block height:  4129 2013-11-06
block height:  4130 2013-11-07
block height:  4131 2013-11-07
block height:  4132 2013-11-07
block height:  4133 2013-11-07
block height:  4134 2013-11-06
block height:  4135 2013-11-06
block height:  4136 2013-11-06
block height:  4137 2013-11-07
block height:  4138 2013-11-07
block height:  4139 2013-11-07
block height:  4140 2013-11-07
block height:  4141 2013-11-07
block height:  4142 2013-11-07
block height:  4143 2013-11-07
block height:  4144 2013-11-07

This makes it hard to determine when exactly one day ends and another starts (this also seems abusable in general). How to handle it (if possible without looking into the future)?

Offline bytemaster

vin[0] gets it all, as if all vin[0...n] inputs came from vin[0]. There is one deterministic key per trx that gets the AGS.
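
In sketch form (illustrative Python; the tx dict layout is an assumption, not a real client structure):

Code: [Select]
# Sketch of the vin[0] rule: the whole donated amount is credited to the
# address behind the first input, as if all inputs came from it.
def credit_first_input(tx, donated, balances):
    addr = tx["vin"][0]["address"]
    balances[addr] = balances.get(addr, 0) + donated
    return balances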

Offline BrownBear

Meaning first-come-first-serve? It needs to be defined somehow or we're all going to get different results.

At the moment I have implemented it the way lakerta06 quoted you: vin[0] gets all it spent, vin[1] is next, etc., until the assigned AGS equals the amount of PTS sent to the angel share address.
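
That is, roughly the following Python sketch (the (address, amount_spent) input shape is an assumption):

Code: [Select]
# Sketch of the first-come-first-serve scheme described above: vin[0] is
# credited up to what it spent, then vin[1], and so on, until the donated
# total is exhausted.
def allocate_sequential(inputs, donated):
    credits, remaining = {}, donated
    for address, spent in inputs:  # inputs in vin order
        take = min(spent, remaining)
        if take > 0:
            credits[address] = credits.get(address, 0) + take
            remaining -= take
    return credits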

Offline bytemaster

We assume all addresses belong to the same person.



Offline boombastic

  • Sr. Member
  • ****
  • Posts: 251
    • AngelShares Explorer
Will work with donschoe to sort it out. Thanks for pointing it out.
http://bitshares.dacplay.org/r/boombastic
Support My Witness: mr.agsexplorer
BTC: 1Bb6V1UStz45QMAamjaX8rDjsnQnBpHvE8

Offline BrownBear

How do you spread the coins evenly? (What's the PTS word for satoshis? ^_^)
If you use division you'll inevitably end up with rounding errors:

Code: [Select]
//   example:
//
//   inputs:  A: 10, B: 20, C: 23
//   donation: 15
//   change and fees: 38
//
//   rewards: 0.28301887 * input
//     A: 2.830  =  3
//     B: 5.660  =  6
//     C: 6.509  =  7
//               +-----
//                 16    !!! one coin more than the donation

Do you have a link to an address/tx where it has been spread evenly?

Edit:
This one is the latest multi-input donation I found, it still distributes the coins by what looks like first-come-first-serve:
http://www1.agsexplorer.com/balances/1HkQzFX7g42kDkPk5GziNb1MrdxmQRvWoy
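
For what it's worth, the overshoot in the example above comes from rounding each share independently. A standard largest-remainder allocation always sums exactly; a minimal Python sketch, assuming integer amounts and distinct input addresses:

Code: [Select]
# Largest-remainder allocation: floor every proportional share, then give
# the leftover coins to the largest fractional remainders, so the credits
# sum exactly to the donation. With the example above (inputs 10/20/23,
# donation 15) this yields 3/6/6 instead of the overshooting 3/6/7.
def allocate_largest_remainder(inputs, donated):
    total = sum(amount for _, amount in inputs)
    shares = [(addr, donated * amt // total, donated * amt % total)
              for addr, amt in inputs]
    credits = {addr: floor for addr, floor, _ in shares}
    leftover = donated - sum(credits.values())
    for addr, _, _ in sorted(shares, key=lambda s: -s[2])[:leftover]:
        credits[addr] += 1
    return credits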

Offline boombastic

I am maintaining agsexplorer.com. At the moment we credit AGS shares to each input proportionally, but bytemaster has clarified that the first input address will be credited the AGS. We are in the middle of changing the algorithm, and an update will be released within days. We spent last week solving a data source stability issue and setting up a backup server and data sources. Hopefully agsexplorer.com won't suffer from an inaccurate data source any more.

Quote from: lakerta06
EDIT:
Even worse, agsexplorer seems to do this rather non-deterministically:
http://www1.agsexplorer.com/balances/1KHXpgQLeLgMTZmP5JVss5XX55UUTFunPP
https://blockchain.info/de/tx/d2b64a6d2e3860bfc6f37774bad6e7c4bfbc1ea63716de4bc146188f8e63e61e

In that transaction, address 1Q4isu8WRDn8Withk4GhM4vbeKNdRcq7TH leaves empty handed. To get comparable results, we need a clearly defined way to distribute the AGS.


You are right. This is clearly a bug in the data source that leaves 1Q4isu8WRDn8Withk4GhM4vbeKNdRcq7TH empty handed in the current algorithm. I will investigate and make sure it's not a problem in the new algorithm.

Offline BrownBear

Is the first input reliably the same for everyone? It must be, right? Otherwise the transaction signature would change!?

Offline lakerta06

I'm still not 100% sure about the genesis block, but I think I might know what you want ^_^

A different problem arose, though:
When multiple addresses donate to AGS together, how am I supposed to fairly and accurately calculate which address donated how much? The obvious answer would be to give every address AGS relative to the amount that address spent, but then we get rounding errors.
I've looked at agsexplorer to find out how it's done there and saw that they just remove fees and other outputs in the transaction from the highest staking input address and ignore the resulting unfairness.
Example: http://www1.agsexplorer.com/balances/1GwqVEwMiwEwifRaLnRB14CJQ4rjqaJmvR
200 mBTC were spent; the 17.82203 mBTC of fees and change were simply subtracted from 1EmFGWtWgAF8ZjDHHMZPqRZvSboLjWEY2r's angel shares.
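
In code, that observed behavior amounts to roughly the following (a minimal Python sketch; the input shape is an assumption, and distinct input addresses are assumed):

Code: [Select]
# Sketch of the behavior observed above: every input is credited what it
# spent, then fees and change are subtracted from the highest-value input.
def allocate_fees_to_largest(inputs, fees_and_change):
    credits = dict(inputs)  # inputs: list of (address, amount_spent)
    largest = max(credits, key=credits.get)
    credits[largest] -= fees_and_change
    return credits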

Assuming that all inputs belong to the same person, the unfairness wouldn't matter, but if there was any scenario where two people could donate in the same TX (multisig?), one of them would get less AGS.

Is this how it's supposed to be handled or should a different approach be taken? If so, which?

EDIT:
Even worse, agsexplorer seems to do this rather non-deterministically:
http://www1.agsexplorer.com/balances/1KHXpgQLeLgMTZmP5JVss5XX55UUTFunPP
https://blockchain.info/de/tx/d2b64a6d2e3860bfc6f37774bad6e7c4bfbc1ea63716de4bc146188f8e63e61e

In that transaction, address 1Q4isu8WRDn8Withk4GhM4vbeKNdRcq7TH leaves empty handed. To get comparable results, we need a clearly defined way to distribute the AGS.

I remember bytemaster said somewhere in this forum, "The first address in the tx gets all the AGS", with "first" meaning the first in the blockchain data.