Author Topic: How to get a stable measurement of real world value without relying on a Feed?  (Read 1455 times)


Offline fuzzy

just posted it (among other good consequences)  :):

https://bitsharestalk.org/index.php/topic,21409.msg278321/topicseen.html#msg278321

Cool ideas, but why not just simplify it and make a decentralized gateway so all tokens traded are actually backed by the real thing? BTS collateral could be held as a backstop to protect against black swans (and used to create a derivatives market AFTER the actual asset-backed market exists).

We could do this today with fiat and crypto alike... and perhaps we could make deals with gold repositories for some sort of multisig option to be used with partner exchanges holding real shares. Am I missing something that makes this a bad idea? Perhaps. But it makes sense to me to actually have IOUs that are backed by the real thing in a decentralized fashion... THEN we can use our own exchange's prices for the derivatives market (which only makes sense to come after the "grounded" market grows up).
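
To make the idea a bit more concrete, here is a minimal sketch (hypothetical names and fields, nothing here is a real BitShares structure) of what a collateral-backed IOU on such a gateway might track: the custodied real asset, the partner multisig keys, and a BTS backstop that only covers custodian failure.

Code:
# A minimal sketch, with hypothetical names and fields (not from the thread):
# the primary backing is the custodied real asset; the BTS collateral is only
# a backstop against a custodian failure.

from dataclasses import dataclass

@dataclass
class BackedIOU:
    symbol: str               # e.g. "GATEWAY.GOLD"
    amount: float             # units of the real asset held by the custodian
    custodian_multisig: list  # keys of the partner repository / exchange
    bts_backstop: float       # BTS reserved against a custodian failure

    def backstop_ratio(self, bts_per_asset_unit: float) -> float:
        """Fraction of the IOU's value covered by the BTS backstop alone."""
        return self.bts_backstop / (self.amount * bts_per_asset_unit)

iou = BackedIOU("GATEWAY.GOLD", 10.0, ["key_repo", "key_exchange"], 5000.0)
print(iou.backstop_ratio(bts_per_asset_unit=1000.0))  # 0.5 -> half covered by BTS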
« Last Edit: February 10, 2016, 10:59:24 pm by fuzzy »
WhaleShares==DKP; BitShares is our Community! 
ShareBits and WhaleShares = Love :D

Offline tonyk

Lack of arbitrage is the problem, isn't it? And this 'should' solve it.

Offline bytemaster

How to get a stable measurement of real world value without relying on a price feed.

Proof of Work difficulty has a long lag time relative to price changes and also has efficiency improvements factored in.

So one could use mining as an objective source that is slightly biased... all other things being equal, efficiency gains would cause the value of a unit of proof of work to fall over time.

If there were a way to measure / compensate for efficiency gains in an objective manner, then we would be a step closer. One basic method is to guess what the efficiency gains should be. This gives us something closer to the truth, but still carries error based upon the accuracy of the model.
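A minimal sketch of that "guess the efficiency gains" idea (hypothetical function names and an assumed annual gain parameter, not anything specified in this thread): divide the raw difficulty by a modeled cumulative efficiency factor, so whatever movement remains is closer to a change in real-world value. The residual error is exactly the gap between the assumed gain and the true one.

Code:
# A minimal sketch, assuming a guessed annual efficiency gain (hypothetical names).

def efficiency_factor(days_elapsed: float, assumed_annual_gain: float = 0.5) -> float:
    """Guessed cumulative efficiency gain since a reference date.
    assumed_annual_gain = 0.5 means hardware is assumed to get 50% more
    efficient per year; this is the model, not a measurement."""
    return (1.0 + assumed_annual_gain) ** (days_elapsed / 365.0)

def adjusted_difficulty(raw_difficulty: float, days_elapsed: float) -> float:
    """Difficulty with the modeled efficiency gains divided out."""
    return raw_difficulty / efficiency_factor(days_elapsed)

# Example: difficulty doubles over a year while efficiency was assumed to
# grow 50%, so only ~33% of the rise is attributed to real value.
print(adjusted_difficulty(2.0e12, 365) / adjusted_difficulty(1.0e12, 0))  # ~1.33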

Another means is to have several different proof-of-work systems using different algorithms. Not all algorithms will have efficiency growth spurts at the same time. By using the algorithm whose hash power grows the least at any point in time, we can remove all efficiency gains that are not universal (impacting all proof-of-work variants at the same time).
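A minimal sketch of the multi-algorithm idea (hypothetical data layout, not from the thread): given hash-power growth over the same interval for several algorithms, take the smallest growth ratio as the value-driven component, which filters out any gain that only some algorithms experienced.

Code:
# A minimal sketch: hash-power growth ratios for several PoW algorithms over
# the same interval. Taking the smallest growth filters out efficiency jumps
# that hit only some algorithms (e.g. a new ASIC for one of them); only gains
# common to all variants remain.

def min_growth(hash_power_by_algo: dict) -> float:
    """hash_power_by_algo maps algorithm name -> (previous, current) hash power.
    Returns the smallest growth ratio across all algorithms."""
    return min(current / previous
               for previous, current in hash_power_by_algo.values())

# Example: the SHA-256 network got a big hardware boost, the others did not.
sample = {
    "sha256":   (100.0, 250.0),   # 2.5x growth, mostly new hardware
    "scrypt":   (80.0,  96.0),    # 1.2x growth
    "equihash": (50.0,  57.5),    # 1.15x growth
}
print(min_growth(sample))  # 1.15 -> taken as the value-driven component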

Any other ideas? 

 
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.