Show Posts



Messages - arhag

136
There are disadvantages (you need to "sync" with usd price close time) but it's not nearly as bad as you're implying

I still haven't wrapped my head around the details of how combinatorial prediction markets compensate for the betting asset's price volatility (despite reading Paul's papers), but it still seems clear to me that they are the inferior solution compared to betting with stablecoins directly.

For one, the collateral levels backing BitUSD can be continuously updated over long periods of time as the collateral asset's price drops significantly. I am not sure how this would work in a combinatorial prediction market, but I would imagine there has to be some lower bound on how far the price can drop from its level at the start of the PM before the peg breaks.

Second, there can be considerable delay between the date for which the USD price is reported and the time the payments are actually settled. In Augur, for example, this delay appears to be 2 months. The reporters obviously cannot report a USD price that has not yet happened, so the earliest the winners can access the collateral asset is 2 months after the fair settlement price was determined. In those 2 months the price of the collateral asset could have dropped even further relative to USD. Also, the winners would be forced to dump it for USD as soon as possible after settlement to escape further exposure to the collateral asset's price changes, whereas with BitUSD the holders can take their time incrementally trading fractions of their BitUSD holdings into USD (via the collateral asset if necessary), assuming they even want to bother when they already have the option of just holding it as BitUSD, and without needing to worry as much about market slippage.

137
General Discussion / Re: What's a Whale?
« on: June 23, 2015, 08:46:53 pm »
# of people  -  BTS owned (range)
1      -  400-200 million
2.4    -  200-50 million
9.25   -  50-15 million
37.5   -  15-1 million

Not sure what you are basing that on, but that last category seems incredibly suspect. http://richlist.btsgame.org/

138
The real problem has to do with chain reorganization and replay attacks.

I addressed this very early in Graphene development by suggesting that the full block_id of a recent block (the "reference block") be added to a tx digest [1]. The blockchain currently implements this by having the tx header specify the low 16 bits of a block height; the chain fetches the block hash of the most recent block whose height matches those low 16 bits and includes that hash in the tx digest. As long as the IDs referenced in the tx were assigned in or before the reference block, your tx is invulnerable to a reorg/replay attack -- it will have a different digest, and thus an invalid sig, on any fork which doesn't include the ref block (as long as there are no hash collisions).

[1] This idea was originally tossed around in the pre-launch discussions of the consensus algorithm for BTS 0.x, as TaPoS.

Yup, I'm really happy you guys added this. It has other great security benefits:

https://bitsharestalk.org/index.php/topic,6584.msg87951.html#msg87951
You may think that you can just filter out all of the transactions that change their vote from the null slate to good-delegate-(k+101), but keep the transactions that changed the vote from good-delegate-k to the null slate in the first place. If you could do this, you would end up with all of your evil delegates in the top 101 slots with 2% approval, and the rest of the good delegates at lower slots with either 1% approval or less. But you cannot do this, because you did not have nearly all of your evil delegates simultaneously in the top 101 slots at any point in time.  The first time you would need to filter out one of these problematic transactions, you would be forced to filter out all of the blocks produced by good delegates after that point (otherwise the hash link would be broken).
...
But if this is not satisfying enough, there is another measure that can be taken to be extra cautious. We can bring back TaPoS on top of DPOS. The transactions do not need to reference the previous block; they can reference a block far enough in the past that it is well established. The point is that TaPoS would make it impossible to even include those unvoting transactions in your fake blockchain.

https://bitsharestalk.org/index.php/topic,14618.msg190100.html#msg190100
From your link I take Economic Clustering to mean the following:
Quote
From technical point of view it means that if someone decides to rewrite the history of the blockchain he won't be able to include transactions of those who don't take part in the attack, because every transaction contains the id of one of the recent blocks.

This seems a lot like Transactions-as-Proof-of-Stake (TaPoS). When BitShares was upgraded to DPOS, this feature of binding each transaction not only to the chain ID but also to a more recent block of the chain was removed.

...

Furthermore, I do not see how this Economic Clustering completely solves the Nothing-at-Stake problem as you claim in your post. The attacker producing the fake blockchain can simply remove everyone else's transactions from their fake blockchain and include their own transactions between sockpuppets to make it appear that the fake blockchain is valid. If they are able to trick the user onto that chain, they could then carry out the double-spend attack. The only problem for the attacker is if the victim is recovering an existing account where they have already made outgoing transactions to people that they expect to see in their transaction history (that is something that cannot be faked by the attacker). Even incoming transactions can be faked if the fake blockchain starts far enough in the past such that the parties that sent the victim the funds had not yet registered their account names on the blockchain (assuming the victim had not pinned the BTS public keys of their contacts in a wallet backup of course).

The trick is to know accounts of big market players like Walmart. If you don't see transactions made by Walmart then your branch is not legit.

Hmm, well that isn't going to be exactly automated into the client code, but it is a smart idea for helping a user determine whether the blockchain is fake if they suspect something is wrong (especially if their client warns them using other metrics). Okay, I think there are enough advantages to this that we absolutely should have transactions include a recent block hash in the transaction digest rather than just the chain ID.
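Purely as illustration, here is a minimal sketch of the mechanism described in the quote above. All names are hypothetical, and a toy hash stands in for the real digest, which would be a cryptographic hash like SHA-256 over the serialized transaction:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Toy stand-in for the chain's real cryptographic digest (e.g. SHA-256).
static uint64_t toy_digest(const std::string& bytes) {
    return std::hash<std::string>{}(bytes);
}

// Hypothetical chain state: block height -> block hash (differs per fork).
static std::map<uint32_t, std::string> g_blocks;

struct Transaction {
    uint16_t    ref_block_num;  // low 16 bits of the reference block's height
    std::string operations;     // serialized payload (placeholder)
};

// Find the hash of the most recent block whose height has the given low 16 bits.
static std::string lookup_ref_block_hash(uint16_t low_bits) {
    for (auto it = g_blocks.rbegin(); it != g_blocks.rend(); ++it)
        if (static_cast<uint16_t>(it->first & 0xFFFF) == low_bits)
            return it->second;
    return "";  // no such block: the tx cannot validate on this fork at all
}

// The digest that is signed mixes in the reference block's hash, so the
// signature only verifies on forks that actually contain that block.
static uint64_t signing_digest(const Transaction& tx) {
    return toy_digest(lookup_ref_block_hash(tx.ref_block_num) + tx.operations);
}
```

On a fork missing the reference block, the lookup yields a different (or no) hash, the recomputed digest changes, and the old signature fails to verify, which is exactly the reorg/replay protection described above.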

139
Yes, there is a certain amount of silence regarding 'blacklizard'... I asked the question in the last mumble but the question was quietly ignored by both BM and the host.

I think "Black Lizard" is just the code name for Graphene before it became Graphene. The more important thing beyond the name is how much of the (IMHO) really cool stuff discussed in those docs is actually part of the roadmap for Cryptonomex (meaning they are seriously planning to submit a worker proposal to implement after BitShares 2.0 has launched that they hope stakeholders will approve) and how much of it are ideas floating around among the devs that they haven't actually reached consensus on.

140
I'm interested to hear how the LMSR performs in practice - I've read a report that says although it has bounded loss by design, in practice it nearly always loses.

Considering that the market maker only profits if the outcome turns out contrary to what the market is predicting at the very end, I would expect this to be true. The point of an LMSR is mostly that the initial liquidity provider is not doing it to make a profit but rather altruistically, for the good of getting accurate predictions on the question being asked.

However, if the market maker charges trading fees that go to the initial liquidity provider, then profit becomes very possible. In fact, since the loss is bounded, after some amount of trading occurs, the initial liquidity provider will break even and anything after that is pure profit. So if the initial liquidity provider is not creating the prediction market altruistically, they need to bet on whether their particular PM will generate enough trading volume to make back their investment.

Also, a liquidity-sensitive LMSR is an improvement on the regular LMSR that allows some fraction of the trading fees to go back into the liquidity pool instead. This means that with more trading the liquidity of the market automatically increases (slippage decreases).
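To make the bounded-loss claim concrete, here is a toy LMSR market maker (my own sketch, not anyone's actual code). The cost function is C(q) = b * ln(sum_i exp(q_i/b)), the price of outcome i is exp(q_i/b) / sum_j exp(q_j/b), and the worst-case loss of the liquidity provider for N outcomes is b * ln(N):

```cpp
#include <cmath>
#include <vector>

// Toy LMSR market maker. q[i] = shares of outcome i sold so far.
struct LMSR {
    double b;                // liquidity parameter (bigger b = deeper market)
    std::vector<double> q;   // outstanding shares per outcome

    // C(q) = b * ln( sum_i exp(q_i / b) )
    double cost() const {
        double s = 0.0;
        for (double qi : q) s += std::exp(qi / b);
        return b * std::log(s);
    }

    // Instantaneous price of outcome i (a probability estimate).
    double price(size_t i) const {
        double s = 0.0;
        for (double qj : q) s += std::exp(qj / b);
        return std::exp(q[i] / b) / s;
    }

    // Amount a trader pays to buy `shares` of outcome i: C(q') - C(q).
    double buy(size_t i, double shares) {
        double before = cost();
        q[i] += shares;
        return cost() - before;
    }

    // Worst-case loss of the liquidity provider, no matter how trading goes.
    double max_loss() const {
        return b * std::log(static_cast<double>(q.size()));
    }
};
```

Once cumulative trading fees exceed b * ln(N), the provider can no longer lose money overall, which is the break-even point described above. The liquidity-sensitive variant replaces the constant b with b(q) = alpha * sum_i q_i (with special handling of the initial all-zero state), so feeding a cut of the fees back into the pool effectively raises b and reduces slippage.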

141
First, UIAs should be allowed to elect managers the same way BTS can elect delegates. These managers would of course still have limited powers over what they could do with the UIAs.
This is already in BTS 2.0; can't remember what the position is called, but they are basically UIA delegates.

I remember reading that in the docs as well, but I don't know if those docs (see here and here) were credible, outdated, or a wish list. Can we get confirmation from bytemaster on this?

142
How did bitshares ever let Roger Ver swoop down and poach this project?

The answer to that is easy: Bitcoin maximalism.

The real question is why aren't we able to convince Augur to use our platform rather than the much slower Ethereum (plus we already have BitUSD working while they are still working on eDollar).

Speaking of prediction markets: the use of BitAssets 2.0 for PMs is unsatisfactory in many ways (although it is nice that we get it "for free", so to speak).

First, it is limited to binary outcomes rather than to N disjoint outcomes in general. That is an easy fix: create a PM pool that lets you deposit X BitUSD to receive X BitPM-1, X BitPM-2, ..., X BitPM-N, and require you to burn X BitPM-1, X BitPM-2, ..., X BitPM-N simultaneously in order to withdraw X BitUSD from the pool.
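A minimal sketch of that pool (hypothetical types; real Graphene operations would look different) makes the invariant explicit: the pool's BitUSD balance always equals the outstanding supply of each outcome asset, so the market stays fully backed:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Toy complete-set pool for an N-outcome prediction market.
// Invariant: pool_bitusd == supply of every BitPM-i outcome asset.
struct PMPool {
    uint64_t pool_bitusd = 0;
    std::vector<uint64_t> outcome_supply;  // one entry per outcome

    explicit PMPool(size_t n_outcomes) : outcome_supply(n_outcomes, 0) {}

    // Deposit X BitUSD, mint X of every outcome asset to the depositor.
    void deposit(uint64_t x) {
        pool_bitusd += x;
        for (auto& s : outcome_supply) s += x;
    }

    // Burn X of every outcome asset simultaneously to withdraw X BitUSD.
    void redeem(uint64_t x) {
        for (auto& s : outcome_supply)
            if (s < x) throw std::runtime_error("must burn a full set");
        for (auto& s : outcome_supply) s -= x;
        pool_bitusd -= x;
    }
};
```

After the judge reports the outcome, redemption would instead burn only the winning asset at 1 BitUSD each; since exactly one outcome pays out, the pool remains exactly solvent.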

Second, I like that we can do traditional order-book prediction markets (Ethereum will struggle with that), but a market maker is very important, and I believe it should be possible to have both (is that correct? Yes, it should be possible [1]). So I would love to see BitShares have these BitPM-type assets trading against BitUSD, but also have a liquidity-sensitive LMSR autonomous agent making those markets on behalf of the PM creator (who would fund the initial liquidity, but would also be able to profit from the trading fees collected by the market maker). There wouldn't only be market orders; there would also be limit orders. A limit order might match against the LS-LMSR market maker, but it could also match against other users' limit orders sitting in the order book.

Third, using a multisig judge to provide outcomes of prediction markets is an okay start, but eventually we will need something more decentralized. We should learn from the REP token approach used in Augur. In my opinion, we can generalize this a little by allowing a special UIA type that can be used as a REP token to judge certain prediction markets. Some additional features for UIAs would be a prerequisite.

First, UIAs should be allowed to elect managers the same way BTS can elect delegates. These managers would of course still have limited powers over what they could do with the UIAs.

Second, there should be a built-in mechanism in the blockchain to shift the order of magnitude of balances to the left or right as necessary to keep the max supply, at the desired precision, within a 64-bit number. This means the token can be endlessly inflated without any overflow (instead, balances would lose their starting precision over time, and very small balances would eventually disappear entirely just by sitting there). This mechanism is important because redistribution from non-participating or dishonest REP holders to participating and honest REP holders can be simulated by distributing the automatically inflated supply to the participating and honest REP holders, whether or not the other REP is locked in a contract (see the sketch below).

With these features in place for UIAs, a prediction market could be set up to choose a particular UIA as the oracle providing the outcomes. If the managers of the UIA (who represent the UIA holders) commit the UIA holders to act as an oracle for that PM (an action the managers get compensated for from funds provided by the PM creator), then the PM opens up for trading. Eventually, once the PM expires, the UIA holders are obligated to report on the outcomes of whatever selection of PMs the UIA managers committed them to, and if they don't, they won't receive the newly inflated UIA (thus diluting their ownership). For the actual details of how this reporting process works (including how the consensus outcome is determined and how PCA is used to redistribute the inflated UIA), check out Augur.
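Here is a toy version of the order-of-magnitude shifting I have in mind (all names hypothetical): when inflating the supply would overflow the 64-bit counter's headroom, every balance and the supply are divided by 10 and a global exponent is bumped, preserving relative ownership while the smallest balances gradually round away:

```cpp
#include <cstdint>
#include <vector>

// Toy "shiftable precision" token. True balance = raw_balance * 10^exponent.
struct ShiftableToken {
    std::vector<uint64_t> balances;  // raw per-account balances
    uint64_t supply = 0;             // raw total supply
    int exponent = 0;                // global decimal shift

    static constexpr uint64_t MAX_RAW = uint64_t(1) << 62;  // headroom cap

    // Divide everything by 10 and bump the exponent. Relative ownership is
    // preserved; dust below one raw unit is rounded away.
    void shift_right() {
        for (auto& b : balances) b /= 10;
        supply /= 10;
        ++exponent;
    }

    // Mint `amount` raw units to account `to` (e.g. reporting rewards),
    // shifting first if the supply counter would exceed its headroom.
    // Assumes amount <= MAX_RAW.
    void inflate_to(size_t to, uint64_t amount) {
        while (supply > MAX_RAW - amount) {
            shift_right();
            amount /= 10;  // keep the minted amount at the same true value
        }
        balances[to] += amount;
        supply += amount;
    }
};
```

Because the inflated supply goes only to participating, honest reporters, everyone else is diluted automatically, with no need to touch REP locked away in contracts.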

These are three projects for which I would absolutely vote to fund a credible worker proposal after BitShares 2.0 launches.

[1] Here is a paper I found on integrating a market scoring rule (e.g. LMSR) with conventional limit orders: http://www.seas.upenn.edu/~hoda/HLPV.pdf

143
I am familiar with his digital cash and it is the foundation of our voting architecture.

Has the cryptography behind the voting architecture changed? If I remember correctly, you were at some point planning to use linkable ring signatures to protect voter privacy, were you not? I assume the change is because ring signatures would require very large signatures to provide sufficient privacy (hiding in a large enough group) for voters.

If you are using blinded signatures, what steps are being taken to prevent the signer from creating fake votes to take over the votes of people who sign up for an election but don't bother to vote? Are the blinded signatures using multisig, or better yet threshold sigs (is a blinded threshold sig doable?), to reduce the chance of collusion to create fake votes? And are there economic incentives designed into the voting system to encourage everyone who signed up for the election to cast a vote (even if the vote is to say they refuse to vote)?

For example, the voter could put up some fixed amount of money that goes into a common pool when registering for the election (and getting their blinded token signed). After the sign-up period ended, a new period would open up allowing users to anonymously associate their signed unblinded tokens with a new pseudonymous public key (with which they would sign the ballots they later cast) and to provide a new blinded token to be signed with another set of keys by the token signers. After this second period ended, the election could finally open up to accept ballots, and the voters would also be able to reveal the second signed token (now unblinded) to withdraw the fixed deposit from the common pool. The economic motivation to get their money back means that nearly all of the people who signed up would broadcast their unblinded tokens.

If the number of valid signed unblinded tokens ever exceeded the number of blinded tokens that were signed, everyone would know the signers were manipulating the results and the results of the election could not be trusted. In fact, the signers could put up some amount of funds into an escrow which they would lose if this manipulation were ever detected. That way, even if they didn't care about their reputation, they wouldn't have an economic motivation to create fake signed tokens in order to steal the voters' temporary deposits.
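The key check in this scheme is countable by anyone. The blind-signature cryptography itself is elided here, but a toy tally of the public invariant (hypothetical structure, my own sketch) looks like this:

```cpp
#include <cstdint>
#include <stdexcept>

// Toy bookkeeping for the two-round blinded-token election described above.
// Only the publicly observable counts are tracked; the crypto is elided.
struct ElectionAudit {
    uint64_t blinded_tokens_signed = 0;  // round 1: registrations signed
    uint64_t unblinded_tokens_seen = 0;  // round 2: valid tokens revealed
    uint64_t deposit_pool          = 0;  // per-voter deposits held in escrow
    uint64_t deposit_per_voter;

    explicit ElectionAudit(uint64_t deposit) : deposit_per_voter(deposit) {}

    // Round 1: a voter registers, gets a blinded token signed, posts a deposit.
    void register_voter() {
        ++blinded_tokens_signed;
        deposit_pool += deposit_per_voter;
    }

    // Round 2: a voter reveals a valid unblinded token and reclaims a deposit.
    void reveal_token() {
        ++unblinded_tokens_seen;
        // Public fraud proof: more valid tokens than signatures issued means
        // the signers fabricated votes, so the election is void (and any
        // signer escrow would be slashed).
        if (unblinded_tokens_seen > blinded_tokens_signed)
            throw std::runtime_error("signers minted fake tokens: election void");
        deposit_pool -= deposit_per_voter;
    }
};
```

The deposit refund is what pushes turnout in the reveal phase toward 100%, and that near-complete turnout is what makes the count comparison a meaningful fraud detector.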

144
Bytemaster has spoken to the reduced RAM requirements of the full node, saying in his interview with Adam Levine that only 1GB is needed at the current ecosystem size.

How big will the transaction volume need to get before it outgrows a 4GB RAM system?

I think it has less to do with transaction volume and more to do with the amount of adoption. The nodes need to be able to store the full database (anything that needs to be accessed by the single-threaded transaction processing engine in order for it to do its job, and anything that can potentially be updated as a result of processing the transactions) in RAM. So it scales with the number of accounts, the number of assets, the number of orders in the order books, etc. If done right, I think it should not scale with time, other than the natural growth in accounts, assets, smart contracts, orders, etc. that we expect from user adoption.

I think a fair first attempt at approximating database size over time is to model it as proportional to the number of users. This of course requires all kinds of assumptions about how the average user uses the system (how many accounts they have, how many orders they typically have in the markets, what other features of the network they use and to what extent). It also requires predicting how the number of users will grow over time. But I think the main takeaway from bytemaster's comments was that RAM is unlikely to be the bottleneck anytime soon even with current technology (and of course memory density will continue to increase while getting cheaper). The more likely bottlenecks are bandwidth and CPU clock speed, but even those are things we don't have to worry about until BitShares is much, much bigger and more successful.
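As a back-of-envelope illustration of that first approximation (every constant here is a made-up assumption for illustration, not a measurement):

```cpp
#include <cstdio>

// Crude linear model: RAM ~ fixed overhead + bytes_per_user * users.
// All constants are made-up assumptions, not measurements.
int main() {
    const double fixed_overhead_bytes = 256e6;  // indexes, chain parameters
    const double bytes_per_user       = 2e3;    // accounts, keys, open orders
    for (double users : {1e5, 1e6, 1e7, 1e8}) {
        double gb = (fixed_overhead_bytes + bytes_per_user * users) / 1e9;
        std::printf("%10.0f users -> ~%6.1f GB of state\n", users, gb);
    }
}
```

Under these made-up constants the state stays within a single commodity server until well past ten million users, which is consistent with bytemaster's point that memory is unlikely to be the first bottleneck.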

145
Nice job, Permie!

Although, there is a lot of pre 2.0 information mixed together with 2.0 information.

Examples include:
  • The description of the derivatives on the blockchain (BitAssets/Smartcoins) reflects BitAsset 1.0 and not the newer BitAsset 2.0. For example, there will no longer be any yield/interest in BitAssets 2.0 (unless it is part of the BitAsset definition), although the bond market can make up for that. Also, there will no longer be a 30-day expiration on shorts (and by the way, shorts don't all expire on the same day even in the current BitAsset 1.0 system).
  • Everywhere you refer to witnesses/delegates or just delegates in your post should be replaced by witnesses alone. Delegates are a very different role in BitShares 2.0 (they are not block producers and they are not paid).
  • We are trying to tone down the DAC metaphor (where the C refers to either company or corporation) now in an attempt to reduce legal risk. For convenience, keeping the DAC acronym is fine if the C refers to community instead. Some other people prefer the term DAO (decentralized autonomous organization).

The other thing I take issue with is the argument behind the statement "If the first cryptocurrency fails, what does that say of the prospects of an alternative?". The implication is that because cryptocurrencies can fail, it is too risky to support any of them. But companies fail all the time, and it still makes sense to invest in them because some go on to become wildly successful. It is natural for systems to gain success and eventually fail and be replaced by better systems. This is true (although considerably more difficult) even for systems that depend heavily on network effects. For example, MySpace failed and was replaced by Facebook. That can be used as evidence that Facebook is not invulnerable and can (and almost certainly will) eventually and gradually be replaced by something else. Despite that, Facebook has become tremendously successful, gained an enormous user base, and even those who invested in the Facebook IPO have more than doubled their investment in 3 years. So the fact that an organization's predecessor in the same industry can fail, or did fail, is not in and of itself a reason not to invest in the organization.

146
Somebody needs to bear the burden of the shortfall of the under-collateralised (<100% debt coverage) shorts. It's fine for the longs to bear this, but they should not have to bear any more than this amount. My issue with the proposed approach is that most of the other shorts may well still have enough collateral to cover their full debt, but they are getting a windfall gain because there were one or more shorts who triggered a black swan. This windfall gain, of course, comes at the expense of the longs. It may even be the case that the under-collateralised short is very, very small, but triggers a very large and unnecessary cross transfer between the longs and shorts.

Oh this is a good point. I would like clarification on this as well.

I am afraid the code might currently be set up to settle all margin positions at the swan price, defined as the ratio between the collateral and debt of the least-collateralized margin position at the moment of the black swan. What it should do instead (as you already mentioned, starspirit, but let me be a little more explicit) is settle all margin positions that have sufficient collateral at the feed price (taking the collateral paid for the settlement and putting it into the settlement pool, with the remaining collateral of each of these positions going to its owner), and then, for all other margin positions that do not have sufficient collateral, simply move all of their collateral into the settlement pool and consider their debt paid off. The ratio of the total debt owed by all margin positions when the black swan occurred to the total collateral added to the settlement pool is the new immutable settlement price, which is then used when longs redeem their fraction of the collateral from the settlement pool at their leisure.
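To make the proposed rule concrete, here is a sketch (hypothetical types; I have not checked this against the actual Graphene code): collateralized-enough positions settle at the feed price and keep their excess, undercollateralized ones forfeit everything, and the pool's final collateral-to-debt ratio becomes the immutable redemption rate for longs:

```cpp
#include <vector>

struct MarginPosition {
    double collateral;  // in BTS
    double debt;        // in BitUSD
};

struct SettlementResult {
    double pool_collateral = 0;  // BTS backing the settlement pool
    double total_debt      = 0;  // BitUSD owed to longs
    // Rate at which longs later redeem: BTS per BitUSD. The chain would store
    // the inverse (debt per collateral) as the settlement price; same thing.
    double collateral_per_debt() const { return pool_collateral / total_debt; }
};

// feed_price: BTS per BitUSD at the moment of the black swan.
SettlementResult settle_black_swan(const std::vector<MarginPosition>& positions,
                                   double feed_price) {
    SettlementResult r;
    for (const auto& p : positions) {
        double owed = p.debt * feed_price;     // BTS needed at the feed price
        if (p.collateral >= owed) {
            r.pool_collateral += owed;         // settle at the feed price...
            // ...and the excess (p.collateral - owed) returns to the owner.
        } else {
            r.pool_collateral += p.collateral; // forfeit all collateral
        }
        r.total_debt += p.debt;                // debt is considered paid off
    }
    return r;
}
```

Only the shortfall of the undercollateralized positions is socialized onto the longs; well-collateralized shorts neither gain nor lose beyond settling at the feed price, which addresses starspirit's windfall objection.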

147
General Discussion / Re: Dan's Interview is on 'Let's Talk Bitcoin'
« on: June 21, 2015, 01:15:37 am »
Exactly, which is why it counters the statement made by BM during last Friday's mumble.

Yeah, I agree. I don't think witnesses are really a check on other parties' power, which helps the neutrality argument.

I guess one situation in which they might be is during a hard fork that bypasses the conventional protocol of waiting for stakeholder approval. These would be cases where an immediate security fix was necessary to protect users' funds. In that case one would hope that the witnesses do their best to make sure the changes follow the spirit of the existing blockchain rules and protocol, and also are tested well enough to hopefully not cause even bigger issues. Witnesses could refuse to upgrade to new code by devs until they were convinced it met these standards. There is no conflict with the stakeholders in this case because the stakeholders wouldn't have even had the chance to evaluate the situation and vote in favor of the change yet.

If apathy was so bad that the delegates approved of a hard fork but no disapproval was voiced by shareholders,

Delegates don't have the power to approve hard forks. As far as I understand, delegates can propose changes to parameters of the system (fees, block intervals, etc.), and unless the stakeholders veto, the change goes through in two weeks. But a hard fork requires active support from stakeholders to be approved; it won't just go through by default after some period of time.

148
General Discussion / Re: Dan's Interview is on 'Let's Talk Bitcoin'
« on: June 21, 2015, 12:33:46 am »
BM went on to say in this interview, "All hard forks or changes, due to the consensus protocol itself, must be contingent upon stakeholder approval". If that is literally the case, then any witness that fails to deploy anything (forks or parameter changes) proposed and approved by the shareholders would be acting against the shareholder consensus, and that would be very bad.

Seems like a very effective way for a witness to get fired from their job. If enough stakeholders vote to approve a hard fork and enough witnesses don't upgrade their nodes to the new software, then that must also mean that the stakeholders who voted for the hard fork have enough voting power to vote out the non-complying witnesses. Unless the witness wants to lose their job, I don't see why they wouldn't upgrade.

What is the purpose of the multisig delegate vote if the shareholders must approve parameter changes?

I think the main idea is that action from the stakeholders isn't necessary for change to occur. Rather, action from the stakeholders is necessary to prevent change from occurring. When you consider voter apathy, this is a huge difference. Most changes that are good for the network but aren't important enough to motivate the stakeholders to vote for it can still occur in a timely manner with this system since the elected delegates will take care of it. However, if the delegates propose a change that is clearly not good for the network, stakeholders have two weeks to get motivated and vote to stop the change.

149
General Discussion / Re: Dan's Interview is on 'Let's Talk Bitcoin'
« on: June 20, 2015, 11:28:25 pm »
+5% That was a great interview.


One thing I don't understand - to what does 'Some Other Castle' refer? I've not heard the phrase before!

Not sure. What my mind immediately goes to is this:

[image]

However, I have no idea what that could possibly have to do with this interview or, more generally, BitShares.

150
Namecoin?

What about it?

isn't doing domain registration Namecoin's big thing?

Sure. I don't know when you joined the community, but BitShares always intended to move into domain names. We even had a separate blockchain at one point intended just for that purpose (that has since been "merged" back into BTS).

Anyway, the main advantages BitShares provides are thanks to DPOS and the latest changes from Graphene: the blockchain is much faster (1 second block intervals), scalable (100,000+ trx/s), and cheaper to operate (no PoW). Also, we seem to be more willing (and frankly, more capable, again due to the performance gains) to experiment with different economic models for domain names such as auction systems.
