

Topics - arhag

16
I have a proposal that enhances UIAs in a small way but provides the foundations necessary to implement really powerful things. This includes hierarchical DACs all sharing essentially the same BitAssets (the ones minted into existence in the root BitShares DAC), side DACs that implement new experimental features that we don't (yet) want to implement on the main DAC and that we would rather not implement with less efficient Turing-complete scripts, and other possibilities.

A UIA would have a unique ID, and it would at a minimum define a delay period N (in blocks), a consensus threshold C (as a percentage between 0% and 100%), a panic threshold P (as a percentage between 0% and C%), a false panic tax T (as a percentage between 0% and 100%), and a manager. The manager could be a single address but would generally be a multisig address. The UIA would also have a reserve associated with it. You can think of this reserve as a vault that can store any digital asset in the DAC that can be owned by users/addresses; while anyone can deposit digital assets into the reserve, only the manager can withdraw them.
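
As a concrete sketch, the proposed UIA record might look like the following in Python. All field names and types here are my own invention for illustration; the proposal itself does not specify a data layout:

```python
from dataclasses import dataclass, field

@dataclass
class UIA:
    """Hypothetical record for the proposed user-issued asset."""
    asset_id: int
    delay_blocks: int           # N: blocks a delayed transaction must wait
    consensus_pct: float        # C: threshold (in %) to resolve a panic
    panic_pct: float            # P: threshold (in %) to initiate a panic, P <= C
    false_panic_tax_pct: float  # T: tax (in %) on stake that voted to panic
    manager: str                # single or (typically) multisig address
    reserve: dict = field(default_factory=dict)  # asset_id -> amount in the vault

    def __post_init__(self):
        # Enforce the parameter ranges stated in the proposal.
        if not (0 <= self.panic_pct <= self.consensus_pct <= 100):
            raise ValueError("require 0 <= P <= C <= 100")
        if not (0 <= self.false_panic_tax_pct <= 100):
            raise ValueError("require 0 <= T <= 100")
```

Anyone could deposit into `reserve`, but only `manager` could authorize withdrawals, and only via a delayed transaction.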

Transactions dealing with UIAs would fall into two classes: instant transactions and delayed transactions. Instant transactions have effects that immediately occur in the blockchain. They include: depositing digital assets into the reserve; moving any UIAs that exist outside the reserve; and changing the votes associated with UIAs that exist outside the reserve. Delayed transactions only go into effect after N blocks have passed since the transaction was first submitted to the blockchain, unless a panic is initiated. Only the manager is allowed to submit a delayed transaction. If a panic is initiated, any delayed transactions that have yet to become active are put on hold. If the panic is lifted as a false alarm, the hold is removed and the delayed transactions can become active once N blocks have passed since they were first submitted to the blockchain. If the panic is ratified, all of the delayed transactions that were put on hold become void. Furthermore, while a panic is in effect, no delayed transactions can be submitted to the blockchain. Delayed transactions include: changing the variables N, C, P, T, or even the manager of the UIA; withdrawing digital assets from the reserve; and issuing a certain amount of new UIAs into existence that automatically go into the reserve.
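
The lifecycle of a delayed transaction described above can be summarized as a small state machine. This is a sketch with invented names, not a specification:

```python
from enum import Enum

class TxState(Enum):
    PENDING = "pending"  # submitted, waiting out the N-block delay
    ON_HOLD = "on_hold"  # a panic was initiated before activation
    ACTIVE = "active"    # delay elapsed with no panic
    VOID = "void"        # panic was ratified while on hold

def step_delayed_tx(state, blocks_since_submit, delay_n, panic_active, panic_ratified):
    """One evaluation of a delayed transaction's lifecycle (illustrative only)."""
    if state == TxState.PENDING:
        if panic_active:
            return TxState.ON_HOLD
        if blocks_since_submit >= delay_n:
            return TxState.ACTIVE
        return TxState.PENDING
    if state == TxState.ON_HOLD:
        if panic_ratified:
            return TxState.VOID
        if not panic_active and blocks_since_submit >= delay_n:
            return TxState.ACTIVE  # false alarm lifted: hold removed
        return TxState.ON_HOLD
    return state  # ACTIVE and VOID are terminal
```

Note that a ratified panic voids every held transaction, while a false alarm merely lets the clock resume.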

A panic for a UIA can be initiated if enough UIAs vote in favor of initiating it. If the percentage of the UIA supply existing outside the reserve that is voting in favor of initiating the panic grows above the panic threshold P%, the panic is initiated and a T% fraction of all of the UIAs that voted to initiate the panic is taken by the DAC and locked in escrow. The original owners of the UIAs from which that T% fraction was taken are allowed to change the votes associated with those UIAs locked in escrow, but they are not allowed to move the funds to new owners. If the panic turns out to be a false alarm, the taxed UIAs in escrow will automatically be moved into the reserve and the original owners will lose control of those UIAs. Otherwise, if the panic turns out to be legitimate, the UIAs held in escrow will be returned to their original owners so that they regain full control over them. The DAC decides whether an initiated panic was a false alarm or legitimate by the way UIAs vote during the state of panic. If more than C% of the UIA supply existing outside the reserve votes that the panic is a false alarm, then the panic is lifted and treated as a false alarm. For the panic to be treated as legitimate, however, it requires more than C% of the UIA supply existing outside the reserve to agree not only that the panic is legitimate but also on a new manager to replace the current one. Once more than C% of UIAs are voting in favor of treating the panic as legitimate and voting for a new manager M, the panic is lifted, the taxed UIAs in escrow are returned to their original owners, the delayed transactions on hold are made void, and the old manager of the UIA is replaced with manager M.
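
In code, the threshold checks work out roughly as follows. The function and parameter names are hypothetical, and votes are tallied only over the supply existing outside the reserve, as the mechanics above require:

```python
def panic_initiated(panic_votes, outside_supply, panic_pct):
    """A panic starts once more than P% of the outside supply votes for it."""
    return 100.0 * panic_votes / outside_supply > panic_pct

def escrow_tax(panic_votes, tax_pct):
    """T% of the stake that voted to panic is locked in escrow."""
    return panic_votes * tax_pct / 100.0

def resolve_panic(false_alarm_votes, legit_votes_by_manager, outside_supply, consensus_pct):
    """Resolve a panic: a false alarm needs > C% agreement; a legitimate panic
    needs > C% agreement on the *same* replacement manager."""
    if 100.0 * false_alarm_votes / outside_supply > consensus_pct:
        return ("false_alarm", None)
    for manager, votes in legit_votes_by_manager.items():
        if 100.0 * votes / outside_supply > consensus_pct:
            return ("legitimate", manager)
    return ("unresolved", None)
```

The `resolve_panic` sketch makes the key asymmetry visible: votes for "legitimate" that are split across different candidate managers do not resolve the panic.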


So those are the mechanics. What possibilities do these mechanics allow for? Well, consider the case where the manager is a 51-of-101 multisig. This allows a side DPOS-like DAC to be created that is tied to the main DAC. Everyone validating the side DAC will also be validating the main DAC. The side DAC can have whatever business logic it wants. And its consensus logic will be similar to DPOS but also different in significant ways. First, the block producers can just be the manager. A block cannot be considered valid unless it has 51 of the 101 signatures as defined in the manager for the UIA on the main DAC. The 101 keys of the manager could be the 101 delegates of the side DAC. The delegates can communicate on their own private network to come to a consensus on the state of the next block (which can simply be done by sharing with each other the block hash of the block they are each individually building anyway), and then they can all (or at least 51 of them) sign that block and share the signature with each other so that it is an official block that they can broadcast to the network in time. Although the UIAs are owned on the main DAC, the child DAC can track the votes that are being modified on the child DAC's blockchain under the authority of the UIAs on the main DAC's blockchain (verifying that the public keys match). This allows the child DAC to come to a consensus on who should be the new 101 delegates as soon as possible using whatever voting system they want. It doesn't become official for the child DAC, however, until the change is made on the main DAC. This means that the manager of the UIA on the main DAC needs to submit an instant transaction changing the manager to reflect the delegate changes on the child DAC. If the manager does not do this within a sufficient period of time, the owners of the child DAC (the UIA holders) will initiate a panic and replace the manager through a vote with greater than C% consensus.
This means that the mechanism I described in the paragraphs above allows the delegates of a child DAC to be gradually changed as in DPOS, but it has some other benefits. First, if the delegates of the child DAC are compromised, a hard fork is not necessary because the tools to replace the delegates already exist in the main DAC. The stakeholders of the child DAC can easily come to a consensus on which new delegates replace the old malicious delegates. Second, since these child DAC delegates are also synchronized to be the manager on the main DAC, it allows these delegates to effectively move digital assets between the main DAC and the child DAC (this is somewhat similar to sidechains). A digital asset can be deposited by a user into the UIA's reserve. The delegates (and all other validators) can monitor this on the main DAC and mint a digital asset derivative on the child DAC which is credited to that user (same public keys). The opposite direction can also be done by destroying the digital asset derivative on the child DAC and withdrawing the digital asset from the reserve to the deserving user, but this step has the N-block delay for security reasons. This means that users can use a BitUSD derivative on a child DAC which ties its value back to an actual BitUSD held in the reserves on the main DAC, which ultimately ties its value to BTS. As long as there are enough (P% of UIA) honest users actively monitoring the two chains to make sure the manager/delegates behave according to the rules of the child DAC, then users who deposited their BitAssets into the child DAC can be confident that their BitAssets won't be stolen from them. If the manager breaks the rules, the P% of UIA will vote to initiate the panic and put a hold on any offending delayed transactions.
The UIA holders are motivated to do this (and put T% of their funds at risk) for legitimate panics because, after the panic has ended, they will be rewarded with a certain amount according to the social consensus of the child DAC. The reward could come from some amount of the BitUSD held in the UIA's reserve, allocated for this purpose and paid for by some percentage of the transaction fees on the child DAC. Then, since the users (and UIA holders) of the system will be motivated to resolve the panic and return things to normal operation, users can be confident that if the panic is legitimate then more than C% of UIAs (ignoring >C% stake attacks, of course) will vote to replace the evil manager with a new one and end the panic.

The framework I described above can be used to launch a new experimental DAC with some new features. This can be done without affecting the operation of the main DAC or any of the other existing side DACs. Perhaps we want to test some new market features like a bond market. Perhaps we want to try out features like dominant assurance contracts or smart loans collateralized by digital assets. Sure, a lot of this could be done using Turing-complete scripting, but I worry that this can add a lot of load to the validation of the main DAC. I worry about delegates (and even all full nodes) being forced to run through potentially very long scripts (even though there could be enough "gas" to pay for it) just so they can properly validate that the previous block they are building off of is legitimate. I'm concerned about the synchronous nature of the scripts/contracts in Ethereum and most likely eventually BitShares as well. But with the child DACs, only the people who actually care about what that DAC is doing need to use resources to validate its blockchain. To everyone else it appears as if users are depositing funds into the UIA's reserve and the UIA manager is withdrawing funds from the reserve and sending them to some users. The outsiders do not need to understand the complicated logic behind those digital asset movements, unless they want to, in which case they are free to download that child DAC's blockchain and validate its custom business logic. The child DACs can also communicate via the manager with other child DACs. In this case the receiving child DAC needs to validate the chain of the sending child DAC to make sure the manager has the authority to send the message. An alternative mechanism would be for the manager to use a delayed transaction to broadcast the hash of a message directed to a particular child DAC on the main DAC's blockchain.
The receiving DAC would need to receive the actual contents of that message on its network but off the main chain (likely received from the manager/delegates of the sending DAC) and would only consider the message valid once the delayed transaction went through after the N blocks without becoming void due to a panic. This would allow the block validators of the receiving DAC to just process the message without needing to validate the sending DAC's chain to determine whether it had the authority to send that message. With interchain communication and fund transfers, many things become possible. It is possible to have each of these child DACs house the different functionality powering autonomous software agents and let the software agents communicate with each other and send money back and forth. Ultimately, the UIA holders of the child DAC are responsible for making sure the software agents operate (run by the manager/delegates, of course) according to their published source code. The UIA holders would bother to do this check because ultimately they are the ones receiving the profits from the service of running the software agents (after paying the manager/delegates who are employed to actually run the infrastructure), and the only way they can continue to make a profit is if they continue to receive the revenue paid for by the transaction fees / gas paid by the users who are motivated to have these software agents operating. Also, the framework I have described gives a lot of flexibility in how these software agents / smart contracts can be implemented. The way the gas is calculated can vary from DAC to DAC (or even within a DAC) depending on the business model the DAC wants to focus on. The language or even the VM that must be used to implement these smart contracts can also vary between DACs.

Finally, this framework allows the DAC to scale without reducing the demand for BTS in the root DAC. You can think of each child DAC as a separate bank (perhaps even specializing in particular exchange markets) which has BitAsset derivatives as liabilities to its depositors and the actual BitAssets as its assets held in the reserve on the parent DAC (think of member banks holding USD reserves in the central bank, like the Fed, except with 100% reserve). These BitAsset derivatives are 1-to-1 pegged to the real BitAssets held in the reserve of the parent DAC. The child DACs can even hold some excess BitAsset reserves (collectively owned by the UIA stakeholders of the child DAC) to allow rapid transfers of the BitAsset derivatives between DACs while never going below a 100% reserve. By regularly settling between the DACs using the delayed withdrawal transaction, they can maintain enough excess reserve to allow fast inter-DAC transfers even as the reserves from one DAC-bank are slowly drained over time and moved into another DAC-bank. So from the user's perspective it would typically appear as if they can transfer funds both within a DAC-bank and between DAC-banks instantly. Of course, in the worst-case scenario, the withdrawing users would only have to wait N blocks (which I think can safely be as small as 8640, or 24 hours with 10-second block times) to withdraw any amount out of the DAC-bank. Also, for maximum scalability, child DACs can have their own child DACs using the same mechanism (in this case the grandchild DAC's UIA stake would be held in the child DAC, and the child DAC's UIA stake would be held in the root DAC). There is no reason the depth of the tree has to be limited to 2.
By allowing arbitrarily deep DAC trees, the number of DACs using essentially the same BitAssets can scale to a large number L while keeping the worst-case time to move funds between DACs to approximately (Nbar + 1) * (10 seconds) * log(L), where Nbar is the average value of N for the DACs along the upward branch of the tree, and keeping the worst-case number of transactions (and thus transaction fees) proportional to log(L). Of course, in that case it would probably make more sense to save time by just paying a liquidity provider a small fee to do an atomic cross-chain trade of the sender's BitAsset derivative on the from-chain for the liquidity provider's BitAsset derivative on the to-chain.
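
To put numbers to the formula above, here is a small worked sketch. The 10-second block time comes from the post; the binary branching factor used as the log base is my own assumption for illustration:

```python
import math

def worst_case_transfer_seconds(nbar, num_dacs, block_time_s=10, branching=2):
    """Approximate worst-case time to move funds across the DAC tree:
    (Nbar + 1) * block_time * log_branching(L)."""
    return (nbar + 1) * block_time_s * math.log(num_dacs, branching)

# With Nbar = 8640 (24 hours of 10-second blocks) and L = 4 DACs in a binary
# tree, this gives (8640 + 1) * 10 * 2 = 172,820 seconds, i.e. about two days,
# which is why the post suggests paying a liquidity provider for an atomic
# cross-chain trade instead.
```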

17
I just thought of a particularly tricky situation regarding snapshots of DACs with BitAssets and was hoping to get some reassurance or explanation of how we could handle this hypothetical situation.

Let's say we wanted to at some point create a snapshot of BTSX stake to fork off a BitShares X clone that allowed for dilution. The idea is that at the point of the fork people have equal stake in BTSX and BTSXD (BitShares X with dilution), but then they have the ability to sell one and buy more of the other depending on their preferences. After the fork, the price of BTSX would quickly drop while the price of BTSXD (after being available for trading on exchanges) would quickly rise from zero, until they reach a new initial equilibrium. Let's say the market is initially equally split on which DAC is better, so the price of BTSX suddenly drops to half after the fork. Isn't this a huge black swan risk to the BitAssets on the original chain? It would be even worse if the market consensus favored the new chain more.

Are we supposed to split the BitAssets (both holdings and amounts owed by shorts) between the two DACs? Meaning we actually create two new forks (BTSXD and BTSX'). BTSX' would be the continuation of BTSX but with the adjusted BitAssets. The idea is that after the fork, BTSX becomes completely worthless (and all BitAssets on their chain are also worthless), but people effectively "continue" that chain with the BTSX' chain and its adjusted BitAssets. Also, they have the remainder of their BitAssets on the BTSXD chain. BitAsset holders still keep the same value of BitAssets as before, and they can use cross-chain trading to get the BitAssets over to their preferred chain.

The problem is that we need to decide how to split the BitAssets between the DACs depending on how we expect the value of BTSX to be split into BTSX' and BTSXD. This decision needs to be made before the actual fork, but we won't know the true market valuations until after the fork. Thus, we need some mechanism of estimating BTSX holder consensus of how much they think each future DAC is worth. If the estimation is too inaccurate, we risk a black swan event in one of the forked chains (accuracy of estimation needed is of course dependent on the minimum collateral ratio).
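
A back-of-the-envelope sketch of the split and the resulting risk. All numbers, function names, and the minimum collateral ratio here are illustrative, not values from the proposal:

```python
def split_bitassets(total, est_value_share_prime):
    """Split a BitAsset balance between BTSX' and BTSXD in proportion to the
    estimated post-fork value share of the BTSX' chain (a number in [0, 1])."""
    on_prime = total * est_value_share_prime
    return on_prime, total - on_prime

def short_black_swans(collateral_btsx, owed_bitusd, btsx_usd_price, min_ratio=1.0):
    """A short position is in black-swan territory once its collateral value
    no longer covers min_ratio times the BitUSD it owes."""
    return collateral_btsx * btsx_usd_price < owed_bitusd * min_ratio

# If BTSX halves in value post-fork and the BitAssets are NOT split, a short
# holding 2x collateral (valued at the old price) is left with only 1x coverage:
# short_black_swans(200, 100, 0.5, min_ratio=1.5) -> True
```

This makes the estimation problem concrete: the accuracy required in `est_value_share_prime` depends directly on how much buffer `min_ratio` provides.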

18
Finally some people are asking great technical questions about BitShares on r/Bitcoin rather than just dismissing it or spreading FUD about it. In particular, I am doing my best to answer some really great questions by Natanael_L on this thread: http://www.reddit.com/r/Bitcoin/comments/2grsrt/bitsharesx_impossible_to_understand/.

I don't want people to come in just for moral support. But if people have very good technical knowledge and can do a great job answering the questions, I would really appreciate your help (and it would also be nice to have people to fact check me so I don't accidentally say something incorrect). It would be ideal to have some devs comment, but I don't want to burden them with this when they have far more important things to do.


19
I am concerned about the community support for low delegate pay rates (for example, see this thread: https://bitsharestalk.org/index.php?topic=7553.0).

There is a reason public corporations do not pay out dividends to shareholders until much later in their lifetime. That money can be better used by reinvesting into the company. The growth in the company's stock made possible from the reinvested money (if done well) can provide greater value to shareholders than giving out dividends.

DACs are no different. The income of the DAC is the network fees, which are collected by the delegates. By lowering the pay rate, delegates are essentially paying out dividends to the DAC stakeholders. But in the early stages of DACs like BitShares X, that money can be used more efficiently by reinvesting in the DAC. In the case of DACs, that means the delegates need to transfer the money collected from fees to individuals/organizations/firms that have a plan to improve the DAC and the ecosystem. Transparency is incredibly important so that stakeholders know the appropriate amount of BTSX was transferred from the delegate who received the fees to the organizations who deserve to get them. The stakeholders indirectly get to choose which organizations receive the money by voting for delegates that pay the fees to the desired organizations. Personally, I think Agent86's worker model would be a better way of doing this, but this is the way it has to be done in the current system.

I think it is incredibly important in these early stages to use the vast majority of the network fees to fund further development and marketing. There are so many important things to fund. I would like to see some funds go to I3 so they can hire more people to work on important efforts such as lightweight clients, further improving the user interface, and porting the clients to all relevant platforms (mobile, desktop, web). I would like to see funds go to firms who want to create multisig security companies, chain servers, and even a BitPay-like service that provides (initially) free technical support for merchants who want to set up clients to accept BitUSD. And of course funds should also go to marketing organizations who will spread the news about BitShares, make the mechanics and value of the system very easy to understand, convince merchants to accept BitUSD, and convince users of the benefits of holding BitUSD or other similar BitAssets.

The purpose of this thread is to convince people to not waste our potential. Bitcoin core developers have trouble getting funding even though BTC has over a $6 billion market cap. DPOS can fix this problem because it makes DACs profitable. Let's take that profit and reinvest it to grow the value of BTSX even more. I hope this starts the discussion of how we should spend the profit and which organizations are best able to execute the plans that we envision will grow BTSX value.


20
In DPOS, blocks are produced every 10 seconds, and thus we say the recommended confirmation time is 10 seconds. In reality, since blocks can be missed, the only way of being sure your transaction is in the consensus chain is if the block it is in has been approved by more than 50% of active delegates. By approved, I mean an active delegate signed a block that either contained the transaction or built the chain off a block which contained the transaction. Thus, the worst-case confirmation time is actually around 8.5 minutes (actually, I would say the true worst-case confirmation time could theoretically be unbounded, since the delegate participation rate could be <50% for an indeterminate number of rounds). But in realistic scenarios blocks are rarely missed, so one should not expect to wait more than 10 seconds to have a transaction confirmed with a high degree of probability.

So what I was thinking was that, with just a little alteration to the protocol, we could get the recommended confirmation time below 1 second (I think; it depends on network conditions). The idea is to take every transaction a user wants to broadcast (such as the signed transaction sent by a customer to a merchant) and send it to the next 51 delegates (in block-signing order) so that each of them can make sure the transaction is valid and send back signatures that claim that they will approve that transaction by their next block (meaning include the transaction in the block unless it is already included in a previous block in the chain the delegate is building off of). They also keep that transaction in their working memory so that any new transaction received that would invalidate the prior transaction is considered invalid (a double-spend). If a delegate signs a block which doesn't approve the transaction they said they would approve by that block, the user can submit the signature to the network as proof that the delegate lied and get the delegate automatically fired. The reassurance that delegates will eventually lose their job if they break their promise to approve the transaction should be enough for the user to assume the transaction will go through. This means that as soon as all 51 of those delegates return their signatures for that transaction, the user can consider the transaction to be included in the chain even if it won't actually happen until a little later. Since delegates are going to be online and responsive most of the time, it is unlikely that there will be a delegate in that group of 51 that does not respond extremely quickly. However, if one of the delegates is not responding quickly or at all, then the user can just wait longer (usually 10 seconds but up to 8.5 minutes) to get confirmation of the transaction being included in the blockchain.
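
The promise-gathering step might look like the following sketch. `collect_signature` is a stand-in for whatever network call would actually be used, and all names here are hypothetical:

```python
def preconfirm(tx, next_delegates, collect_signature, quorum=51):
    """Ask the next `quorum` delegates (in block-signing order) to promise to
    approve `tx`. Returns the signed promises, or None if any delegate fails
    to respond, in which case the sender falls back to normal confirmation.
    collect_signature(delegate, tx) -> signature string, or None on timeout."""
    receipts = {}
    for delegate in next_delegates[:quorum]:
        sig = collect_signature(delegate, tx)
        if sig is None:
            return None  # fall back to waiting ~10s (up to ~8.5 min) on-chain
        receipts[delegate] = sig  # kept as proof to fire a delegate who lies
    return receipts
```

The returned receipts are exactly the evidence the paragraph describes: if a delegate later signs a block that omits the promised transaction, the user can publish the delegate's signature to have them fired.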

The other benefit of this approach is that it can be used with lightweight clients. As long as the lightweight clients are able to know who the current active delegates are (more on that later) and their ordering (the delegates' random number that determines the ordering can simply be kept in the block header), then the signatures they get back from the 51 delegates should be enough assurance to the user that they actually received the assets they expected to receive. If the user later goes on the full client on their PC and finds out the transaction was never approved, they can then submit proof to get those delegates fired.

The lightweight clients need to know who the active delegates are in a given round. I think the way to make this work is to require the delegates to include delta updates on the set of top 101 delegates in each block header. If the votes in the block cause delegate A at slot 101 to be replaced by delegate B, the block header for that block should note that A is out and B is in. Obviously, all full clients on the network would require the block header deltas to be consistent with the vote changes in the block, or else the block would be invalid. Lightweight clients can rely on there being enough full clients online monitoring the full blocks and voting out any delegates that lie in the block headers. Under that assumption, the lightweight clients only need the block headers to keep the set of active delegates up to date over time.
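
A lightweight client's update step is then trivial. The delta encoding below is invented for illustration; the actual header format would need to be specified:

```python
def apply_delegate_deltas(active_set, deltas):
    """Update a lightweight client's active-delegate set from block-header
    deltas. Each delta is ('out', delegate) or ('in', delegate)."""
    updated = set(active_set)
    for action, delegate in deltas:
        if action == "out":
            updated.discard(delegate)
        elif action == "in":
            updated.add(delegate)
    return updated
```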

I would also make sure to include two different Merkle tree roots in the block header. The first Merkle tree would have all the transactions in the block, in a well-specified order, as its leaf nodes. The second Merkle tree would have hashes corresponding to each trading asset pair in the decentralized exchange as its leaf nodes (again in a well-specified order). These hashes would be determined by taking the hash of the list of all transactions (in a well-specified order) defining all of the open market orders (up to that block) for the particular asset pair's exchange market. With these two Merkle tree roots in the block headers, any full node (not just delegates) could provide Merkle branches to prove the existence of any transaction in the blockchain and prove to lightweight clients that they have received all open market orders (up to some block) for a given exchange market. This way lightweight clients could get accurate, up-to-date information on the decentralized exchanges and get proof of any unsolicited funds sent to them more than 10 seconds ago (really more than 8.5 minutes ago, to be safe), all without having to burden the delegates to sign any statements. The lightweight (and full) clients would only burden the delegates for signatures on transactions that are sent from one party to another in a "real-time" environment (point-of-sale transactions) where sub-second confirmation times are desired.
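
For reference, computing a Merkle root over an ordered list of transactions looks like this. The hash function and the odd-node handling (this sketch duplicates the last node) are choices the protocol would need to pin down:

```python
import hashlib

def merkle_root(leaves):
    """Compute a Merkle root over an ordered list of byte strings.
    Illustrative only: uses SHA-256 and duplicates the last node on odd levels."""
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

A Merkle branch (the sibling hashes along the path from a leaf to this root) is what a full node would hand a lightweight client as proof of inclusion.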

21
Technical Support / Minor annoyance with build process: move htdocs
« on: August 24, 2014, 12:53:47 am »
This is an incredibly minor complaint meant for the devs who I know are super busy, but then again I think it's a really simple fix.

For some reason CMakeLists is set up to build the web wallet code in the source directory during make buildweb. Shouldn't that go into the build directory instead?

The way I build on my system is to keep the build directory separate from the source directory (the directory I do git pull updates in). But after doing the git pull and git submodule update commands, I can't just do "cmake -DINCLUDE_QT_WALLET=ON ../bitsharesx && make buildweb && make" in a new build directory because make buildweb complains about htdocs already existing. But if the HTML interface changes, then I cannot leave out make buildweb or else I will get the old UI in the Qt client, correct? So, I need to go delete bitsharesx/programs/qt_wallet/htdocs (and possibly bitsharesx/programs/web_wallet/generated and bitsharesx/programs/web_wallet/dist) before running make buildweb. However, if make buildweb were to create and check for the existence of these directories in the build directory rather than the source directory, this inconvenience wouldn't exist.



22
After writing the post at https://bitsharestalk.org/index.php?topic=5033.msg90117#msg90117, I started thinking more about DACs that have the property where the shares in the DAC do not have a lot of value (meaning a low market cap for the DAC), but there are assets on the blockchain that do have a lot of value. It first seemed somewhat counterintuitive to me that the blockchain can contain a total amount of value that is more than the market cap of the DAC, but after thinking about it a little it seems totally obvious now. A fully owned domain on the BitShares DNS DAC has a lot of value that belongs to the domain owner, but not to the shareholders. Someone can do cross-chain trading to trade BTSX for the domain, and the DAC doesn't capture any of that value (well, other than a tiny transaction fee). But at least BitShares DNS extracts a lot of value from the domain auctions. The problem is much worse for user-issued assets on a standalone BitShares Me DAC, or loans on a Lending DAC, and many other DAC examples people can think of. So, in the case of BitShares Me, there can be a lot of very highly valued user-issued assets (based on the trust in the user backing them) that can be traded back and forth, but because the only income source of the DAC is low transaction fees, the market cap of the DAC (which should be related to its net income) is also low. Whenever high-value trade is occurring, there could be an opportunity to make money off double-spend attacks.

So, I think DACs that have this structure are vulnerable to 51% stake attacks. As far as I can see, there could be a rational financial reason for an evil actor to pull off a double-spend attack by buying up 51% of the stake in a very low market cap DAC, since pulling off double-spend (or other blockchain manipulation) attacks on even just a few particular high-value transfers could make it all worth it. For example, if a user wanted to do an atomic cross-chain trade of some amount of BitUSD for a user-issued asset on a separate BitShares Me DAC, and the attacker doing the trade on the BitShares Me DAC actually owned 51% of the stake on the DAC (and thus all of the delegates), the user would be very vulnerable. After the attacker claims the BitUSD and thus reveals the secret that allows the user to claim the asset, he could selectively block the user's transaction claiming the asset for long enough that he could take the asset back as a refund after the timeout (a timeout is necessary in atomic cross-chain trades so that traders don't lose their money if the other party disappears in the middle of a trade). Eventually, the complaints would reach the other 49% of shareholders so they could fork and purge, but by then the attacker may have done this multiple times with many victims (perhaps concurrently) to have made more BitUSD than the cost of buying 51% of the stake in the BitShares Me DAC. A possible answer to this is to not do high-value transfers on low market cap DACs, but I think that may be too limiting sometimes.

What if, instead, the BitShares Me DAC, recognizing that its low market cap made it insecure, decided to give up its sovereignty to another DAC with a larger market cap? To use the corporation analogy for DACs, what if the shareholders of the DAC agreed to select their board of directors not by shareholder vote, but by whoever was on the board of a more successful company? A small DAC could decide to follow the blockchain/network of BitShares X in addition to its own blockchain/network. This would not be a bidirectional relationship (meaning the BitShares X blockchain wouldn't need to know about the small DAC's existence), since otherwise this wouldn't scale. By following the BTSX blockchain, everyone using this small DAC would know the approval rating of all registered delegates on the BTSX blockchain (as determined by BTSX holders). The DAC would have its own ability for accounts to register as interested in acting as delegates. The differences between this and the one on BitShares X, however, are that the fee would be very small, the accounts could take themselves on or off as they like, and only accounts that were also registered as delegates on the BTSX blockchain would be allowed to do this (as determined by having the same Owner Key). Of the subset of BitShares X registered delegates who were also registered as interested in acting as a delegate for the small DAC, the approval rating from the BTSX blockchain would be used to rank them and choose the top 101 as the active delegates of the small DAC.
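
The selection rule for the small DAC's active delegates reduces to a filter-and-rank. Names here are hypothetical; approval ratings would come from the BTSX blockchain:

```python
def select_active_delegates(btsx_approval, small_dac_registered, top_n=101):
    """Choose the small DAC's active delegates: of the accounts registered on
    the small DAC, keep only those also registered as BTSX delegates (present
    in btsx_approval), rank by BTSX approval rating, and take the top N.
    btsx_approval: delegate -> approval rating on the BTSX chain."""
    eligible = [d for d in small_dac_registered if d in btsx_approval]
    eligible.sort(key=lambda d: btsx_approval[d], reverse=True)
    return eligible[:top_n]
```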

The shareholders of the small DAC would not be able to vote out/in the active delegates. Only the shareholders of BitShares X could do that. The delegates' performance and behavior on this DAC could still influence users' decisions on how to vote on BitShares X. If the users (not necessarily just shareholders) of this small DAC collectively have a considerable percentage of the stake of BTSX, then the delegates misbehaving on this DAC would hurt their approval rating on the BTSX blockchain. But, if users of the small DAC only have a tiny percentage of BTSX, their influence is too small to significantly change delegate approval ratings, and so the only way they could get rid of misbehaving delegates on their DAC would be by convincing other BTSX stakeholders that the delegates they voted for are not trustworthy individuals. And technically, the shareholders of the small DAC could always hard fork away from the BitShares X chain if they thought conditions absolutely called for it.

This is the tradeoff that a DAC makes by linking its delegates to a larger parent DAC. It gains protection from attackers buying 51% of its stake with the intention of harming its network; this is done by the shareholders putting their trust in the BTSX shareholders rather than themselves, since they worry that they are small enough to be compromised by an attacker. However, the shareholders also lose the ability to directly deal with misbehaving delegates themselves. The tradeoff is worth it if the shareholders of the DAC believe that enough of the users (not necessarily just shareholders) of the DAC will also collectively be major BTSX holders, and if they can convince the top active delegates on BitShares X to also act as delegates on their small DAC (with appropriate compensation of course).

The shareholders of the DAC would still have some control. In the case of an emergency or hard fork, it is always their stake that matters, not the stake of the greater BTSX community. Also, during regular operation, I propose that the shareholders be able to use their stake to vote on matters specific to the DAC (within the constraints of the hard-coded rules of the DAC). For example, they could vote on the delegate pay rate that every active delegate on that DAC would get paid (which is in some ways more flexible than letting the delegates decide the pay rate as part of their campaign promise, because it would allow the rate to be adjusted down and up over time as needed). I would also want to allow shareholders to vote yes/no on proposals created by the delegates which could make other changes in the operation of the DAC. One of them could be hiring "workers" (to use Agent86's term) who get their own specified salary (which could even be high enough to cause net inflation if the shareholders wanted). These workers would be hired to do all the interesting jobs related to the DAC, like funding development, marketing, and more. None of these proposals could compromise the security of the DAC, since that responsibility is only assigned to the active delegates, which are determined by BTSX shareholders. The only job of the delegates would be to keep the network operating and continuing to build on the blockchain in a way consistent with the rules. For this job, they would be compensated, but there would be no need to pay them much more than the typical computing expenses of running a DPOS delegate node.

In conclusion, shareholders of a low market cap DAC can rely on the trustworthiness of other DAC stakeholders to choose the delegates who run the machinery of the DAC, while they still maintain control of all the important decisions of the DAC. The delegates' job is very simple and is one that they have already proven to be reliable at in other DACs. If the DAC's shareholders can rely on BTSX holders to be concerned with the reliable operation of the DAC (say if they are users of the DAC) and if they can attract the top BTSX delegates with sufficient pay to run their DAC, then they can link their DAC to BTSX to take advantage of the great security benefits that come from the high market cap of BTSX (namely that a 51% attack doesn't provide any financial benefit for an attacker). This provides a lot of flexibility in the design of DACs. DACs that provide users the ability to hold and transfer significant value do not need to overcharge the users for security reasons. They can just take the low but respectable transaction fees as a sustainable source of income. Also, by only linking the DAC to BTSX rather than directly bloating up the BitShares X blockchain/network with lots of different business rules and transaction types, users can safely experiment with different DACs and, most importantly, properly scale out by not cramming everything onto one chain. BitAssets and atomic cross-chain trading still provide the means of communicating the value transferred from one DAC to another DAC without needing the blockchains of each DAC to explicitly communicate with one another.

What are people's thoughts on this idea? Am I overly concerned about the risk of 51% stake attack on low market cap DACs? Am I overstating the utility of low market cap DACs in the first place? And would BTSX shareholders be likely to even care if their chosen delegates are slacking off in DACs other than the ones they have a financial stake in? I think they absolutely would care if the delegates were proven to be malicious (double signing blocks, somehow known to be filtering), but what if they were just lazy and missing a lot of blocks?

23
After the discussion at https://bitsharestalk.org/index.php?topic=6584, I realize it is really important to have a coherent argument to address the POS vs POW debate. The hard part of getting other cryptocurrency fans, who are not already enlightened about POS, over to BitShares is going to be addressing all of their concerns about POS, Nothing-at-Stake, and their belief that POW is necessary for secure consensus. The other difficult challenge that will need to be addressed is convincing the POS believers, and NXTers in particular, that there is an appropriate balance between centralization and decentralization, and that hopefully DPOS has properly struck that balance to be decentralized enough (and with low enough barrier to entry) to be corruption-free and trustless, but centralized enough to be efficient (low transaction fees and fast block production). There is already great discussion on that topic happening at https://bitsharestalk.org/index.php?topic=5564. But this topic is not about the centralization vs decentralization argument but rather the POW vs POS argument.

This is a first draft, and I would appreciate feedback. I hope I didn't make mistakes in understanding some of the details of the technologies, but please correct me if I am wrong. I am also interested in what people think about my arguments about the economics of fake blockchain history attacks for resync periods less than the threshold (and whether 6 months is even an appropriate threshold or not). I really want to try to develop an argument that can address POW supporters' concerns and convince them that POS is the right way to go.

POW vs POS consensus systems

People in the POW (Proof-of-Work) community generally accept the concept of Nothing-at-Stake as a fundamental flaw in POS (Proof-of-Stake) consensus systems that make those systems, in their view, inferior to POW consensus systems. What is Nothing-at-Stake and is it actually a legitimate concern in practical POS systems? I hope that I will be able to convince people that it is not really an issue, and that, on the contrary, POS has many advantages to POW.

All consensus systems require all participants to maintain a consistent view of a shared database as the database is modified over time. In blockchain-based consensus systems, this database is a log of appended blocks of data, in which each block (other than the first block, called the genesis block) contains a hash of the content of the previous block. This forms a cryptographically secure chain of blocks such that modifying any block requires modifying all of the blocks that come after it in the chain. In a blockchain-based consensus system in which all participants are always connected to the network, consensus can be maintained as long as all participants can agree on which block to append next. Temporary network disruptions may cause some short forks which need to be resolved quickly. So, some other mechanism is also needed to resolve these forks. Finally, since all participants cannot be online all of the time, some mechanism is needed for participants to safely resync with the network after some period of time offline.

Deciding which block to append next:

POW systems use a stochastic computational process, called mining, to determine which block everyone agrees to accept and append to the blockchain. It is essentially a cryptographic lottery in which the probability of winning is a function of a specific form of computational power (called hashing power) and the current consensus-accepted difficulty. Anyone can verify that a block won the cryptographic lottery by looking at the block and knowing the current difficulty.
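As a toy illustration of this lottery (the constants and target encoding are made up for the sketch; they are not Bitcoin's actual rules):

```python
import hashlib

# Toy POW lottery: a block header "wins" if its hash, read as an integer,
# falls below a target derived from the consensus difficulty. MAX_TARGET
# is an illustrative constant, not a real network parameter.

MAX_TARGET = 2**248  # hypothetical easiest target

def block_wins_lottery(header_bytes: bytes, difficulty: int) -> bool:
    target = MAX_TARGET // difficulty  # higher difficulty -> smaller target
    h = int.from_bytes(hashlib.sha256(header_bytes).digest(), "big")
    return h < target

def mine(header_prefix: bytes, difficulty: int) -> int:
    """Try nonces until one wins; this brute-force search is the 'work'."""
    nonce = 0
    while not block_wins_lottery(header_prefix + nonce.to_bytes(8, "big"),
                                 difficulty):
        nonce += 1
    return nonce
```

The key property is that winning is expensive (many hash attempts) but verifying a win is a single hash computation.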

The POS system in Peercoin also uses mining to determine which block everyone agrees to accept next. However, in this case the probability of success is a function of hashing power, the current consensus-accepted network-level difficulty, and the coin-age (product of the unspent transaction output value and the elapsed time since the transaction was created) of the stake used for block production. The function has a very strong dependence on the coin-age, such that it is more profitable to buy more coins (or stake) to increase coin-age than to buy more computing power to increase hashing power. Again, anyone can verify that a block won the lottery by looking at the block, knowing the current network-level difficulty, and verifying that the coin-age used to produce the block is legitimate (which requires having a consistent view of the blockchain up to that point).
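A rough sketch of the coin-age idea (the field names and the way the target is scaled are hypothetical simplifications, not Peercoin's exact kernel rules):

```python
# Coin-age per the definition above: unspent output value multiplied by the
# time elapsed since the output was created.

def coin_age(utxo_value: float, created_at: int, now: int) -> float:
    return utxo_value * (now - created_at)

# In a Peercoin-style kernel the effective per-stake target grows with
# coin-age, so a large, old stake needs far fewer hash attempts than raw
# hashing power alone would allow.
def stake_target(base_target: int, age: float) -> int:
    return int(base_target * age)
```

This is why stake, not hardware, dominates the lottery: doubling your coin-age doubles your target, while doubling your hashing power only doubles your attempt rate.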

The POS system in BitShares is called Delegated Proof of Stake (DPOS), because it allows stake owners to delegate the power that their stake provides to other users called delegates. This power is the same power that stake provides in Peercoin and other POS systems: the power to create new blocks. Given a consistent view of the blockchain up to a certain point, anyone can know who the current active delegates are, the order in which these delegates will produce blocks in the current round, and thus which specific delegate is responsible to produce the next block. A cryptographic lottery in the form of mining is not necessary to determine who earns the right to produce the next block. Random values from the non-colluding delegates of the previous round determine the random ordering of the delegates for the next round. Stakeholders are able to vote for delegates using their stake through cryptographically-secure transactions that are stored in the blockchain, and the voting activity by the stakeholders can change the set of active delegates at any time.

Resolving short forks:

Due to the nondeterministic nature of mining as well as network propagation delays, short forks are possible in both POW systems and Peercoin-like POS systems. Network propagation delays may also cause extremely short forks in DPOS as well.

POW systems resolve the forks by agreeing to build on the chain with the most work done (the sum of the difficulty values at each block up to the current head block in the blockchain). If everyone follows this rule, eventually all the nodes will come to a consensus on one particular chain, thus resolving the fork.

Peercoin-like POS systems can resolve forks by building on the chain with the greatest accumulated value of some other metric, such as the total amount of coin-age consumed. Again, as long as everyone follows the same rule, the network will eventually naturally converge to just one of the forks.
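Both rules are instances of the same generic fork choice, which could be sketched like this (chains are modeled as simple lists of block records; the structures are illustrative):

```python
# Generic fork choice: prefer the chain with the greatest accumulated
# per-block metric -- difficulty for POW, coin-age consumed for
# Peercoin-style POS.

def chain_score(chain, metric):
    """Sum a per-block metric over the whole chain."""
    return sum(metric(block) for block in chain)

def best_chain(forks, metric):
    """Pick the fork with the highest accumulated score."""
    return max(forks, key=lambda chain: chain_score(chain, metric))
```

As long as every node applies the same deterministic rule to the same set of visible forks, they converge on the same chain.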

Although DPOS randomizes the order of delegates within a round, the order of the delegates in a given round is known before any of the delegates produce blocks in that round. For this reason, block production order can be considered deterministic. Nevertheless, very small forks could be possible because of network issues. For example, if block N is delayed by the network for too long, the producer of block N+1 might assume that the producer of block N was not available to produce his block at his designated time slot, and so he will instead chain off block N-1. The producer of block N+2 may have seen block N and/or block N+1. If he saw both, he always chooses the one that is supposed to come later in time; on the other hand, if he sees only one or the other, he builds off of the one he saw. Thus, the chain is built with either block N or block N+1 considered missing, but the network is able to quickly get back to a consensus on the true chain because of the deterministic ordering of block producers.
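A toy sketch of this deterministic scheduling and the tie-break rule (the data structures are made up for illustration):

```python
# DPOS toy model: each delegate owns a fixed time slot in the round, so the
# expected producer of any slot is known in advance.

def expected_producer(slot: int, round_order: list) -> str:
    """Deterministic slot -> delegate assignment for the current round."""
    return round_order[slot % len(round_order)]

def choose_parent(candidate_heads):
    """candidate_heads: competing head blocks seen by the next producer.
    Per the rule described above, if several candidates are visible, build
    on the one whose designated slot time is latest."""
    return max(candidate_heads, key=lambda block: block["slot_time"])
```

Because everyone knows the schedule, a momentary fork collapses as soon as the next few producers apply the same rule.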

Resyncing with the network after some period of time offline:

So far, the assumption has been that all participants were well connected to the network and therefore able to easily maintain consensus on the blockchain. Under these assumptions POW does not provide any advantages over POS. But realistically, users cannot always be online. And yet, they need some way of reconnecting with the network and getting up to speed on the current state of the blockchain from where they last left it without allowing an attacker to fool them onto a fake chain. If an attacker is able to fool the user into believing in fake additions to the blockchain since the last block seen by the user's client, the attacker can break consensus and thus damage the value of the network. In particular, the user becomes vulnerable to a double-spend attack by the attacker since they think they are getting tokens of value in exchange for goods/services (due to belief in the fake transaction history) but others on the true chain know that those tokens are worthless and will therefore not accept them in exchange for goods/services.

POW resolves this issue by using the same method used to resolve short forks: pick the chain with the most work done. Attackers have no way of faking the block acceptance criteria. They need to put in the work necessary to match the difficulty requirements at that point in the blockchain. Attackers can create a fake blockchain history by putting in the necessary work, but if they have <50% of the hashing power, their accumulated amount of work will be less than the accumulated work of the true chain. As long as the true chain is made visible to the resyncing user, he can easily pick it over the fake chains.

POS tries to resolve this issue by also making it difficult for attackers to fake the block acceptance criteria. In the case of Peercoin-like POS systems, it needs to be difficult for attackers to get coin-age (which is ultimately dependent on the amount of stake in the attacker's control). In the case of DPOS, it needs to be difficult for the attacker to get control of the delegates. Because of the way delegates work, the attacker would actually need to control nearly all of the 101 delegates to fake the blockchain history (see here and here for details). However, if the attacker controlled more than 50% of the stake, he could vote in all of his own delegates. So all POS systems are ultimately vulnerable if the attacker is able to get the majority of the stake. For a POS system to be secure from fake blockchain history attacks, the majority of the stake in the system needs to be kept away from the control of an attacker during the time a user is offline. However, if an attacker was able to capture only a small minority of the stake while the user was offline, the attacker cannot create a fake blockchain that the user would accept as valid.

POW supporters like to point out that the attacker does not need to control >50% of the stake on a live system; as long as an attacker controls >50% of the stake at any point in time t on the blockchain, that attacker could easily build a fake blockchain from that point forward that would fool a user's client if its last resync point was before time t. For a completely new user synchronizing from the genesis block, this means the attacker only needs to control >50% of the stake at any point in time in the history of the blockchain. This is the meaning behind the Nothing-at-Stake name. The users who owned >50% of the stake in the system in the past may no longer own any stake in the system in the present. While it would be foolish for a present-day >50% stake holder to harm the network, someone who held >50% of the stake in the past but holds nothing at stake in the present has nothing to lose with an attack attempt.

As bad as this may look for POS systems, with more careful analysis, it is clear it is not actually a problem. A user in a POS system will always have a checkpoint in the not-too-distant past. This checkpoint either comes from the last block of the locally-saved, trusted blockchain (or perhaps just the locally-saved hash of the last seen block), or it can be hardcoded into the particular version of the wallet. As long as that checkpoint is in the not-too-distant past, users would not be vulnerable to fake blockchain history attacks in realistic scenarios. If the checkpoint is older than some threshold, then other measures are needed. This threshold can vary depending on the circumstances of the network and the paranoia of the user, but I think a threshold of 6 months is sufficient in most realistic scenarios.
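The checkpoint-freshness rule I am proposing could be as simple as the following (the 6 month threshold is my suggestion, not a protocol constant, and the timestamp handling is illustrative):

```python
import time

# Resync normally only if the locally trusted checkpoint (last seen block,
# saved hash, or the hash hardcoded in the wallet version) is newer than the
# threshold; otherwise fall back to out-of-band verification.

SIX_MONTHS = 6 * 30 * 24 * 3600  # threshold in seconds (approximate)

def safe_to_resync(checkpoint_timestamp: int, now: int = None) -> bool:
    now = int(time.time()) if now is None else now
    return now - checkpoint_timestamp < SIX_MONTHS
```

A client failing this check would warn the user to verify a recent block hash through a trusted channel instead of silently accepting whichever chain it sees.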

Resyncing after being offline for less than 6 months should not be a cause for concern of fake blockchain history attacks. The only way such an attack can successfully work is if the attacker obtains ownership of >50% of the stake existing at some point during that 6 month period. The attacker would like to buy old private keys at very low cost from users who had stake in the system in the 6 month period but now no longer do. They have to no longer have stake in the system otherwise they would be foolish to sell old private keys to someone whose only purpose for buying old keys is clearly to attack the system and thus reduce the value of the seller's existing stake. But the attacker will not be able to find enough private key sellers that match that criteria, because it is extremely unlikely for stakeholders with >50% of the stake to completely exit out of the system within a 6 month period. The attacker is forced to legitimately buy into the system at a high cost if he wants to attack the network. But an attacker who grows his stake over some period of time until it reaches >50% would likely not attack the network while still holding the stake, otherwise they would cause the most damage to themselves. If the attacker is able to begin and finish selling their >50% of stake during that 6 month period, then the attacker has the opportunity to carry out a fake blockchain history attack against the victim who was offline for 6 months. However, the price one pays trading assets depends on how quickly they need to finish the trade. The attacker can take his time building up the stake to not have to overpay in order to incentivize stake holders to sell, but he is forced to sell at a discount to incentivize enough people to buy to quickly sell off his stake before the 6 month deadline. Pulling off this kind of buy-sell cycle is going to cost the attacker a lot of money. 
It is only rational to do this if this one buy-sell cycle provides him with enough opportunity to recover his costs through double-spend attacks. But the only people he can attack are people who were offline for about 6 months. Most people would be resyncing at much higher frequencies than that, which would make them really hard to attack. Trying to sell >50% of the stake in one week would cause a flash crash of the price of the coin (hurting the attacker the most). Also, as a practical matter, the attacker doesn't have any good way of knowing who has been offline during the same time period in which they set up the buy-sell cycle, so they cannot actually target these individuals. So, even if there are a decent number of people out there that the attacker could target to make his money back, it isn't trivial to identify them.

So what about resyncing after being offline for more than 6 months? With the exception of resyncing from a genesis block on a new computer, it would be a very unusual circumstance to be doing this. The vast majority of people would be resyncing on a much more frequent basis. Nevertheless, in these rare cases, users would follow the same procedure that users who are resyncing from a genesis block on a new computer would follow. First, if the user already has an up-to-date blockchain on one computer and they just want to set up their wallet on a new computer, the clients could provide an easy method for the existing trusted client to communicate a hash of a recent block to the new client. Since a user obviously trusts himself and the client he has already been using, he can carry over that trust to the new device. What about a completely new user who has never been part of this network before? Or someone who lost their hard drive (but still has a backup of their private keys) and wants to reinstall the client from scratch on their computer? In these cases, the users would rely on the snapshot hardcoded in the latest version of the client software (which would be <6 months old). A new user needs to download the client software anyway; and, they need to have some way of trusting the software they download. If the attacker was able to provide a fake client with a fake snapshot, they would again be vulnerable to the fake blockchain history attack. But if the attacker was able to provide a fake client, the user would be compromised in so many ways. The fake client could just steal the user's private keys! Or if they are using a hardware wallet, the fake client could present a false view of the blockchain to make the user think he got paid when he didn't.

Ultimately there has to be some trust when it comes to these consensus technologies. Bitcoin users may not worry about fake blockchain histories because of POW, but if their wallet is compromised none of that matters. Therefore, Bitcoin users still need to somehow trust the Bitcoin client software they run on their computers. They can compile from source, but they still need to trust that the source is safe. They can rely on other people to audit the source code and tell them it is safe, but then they are just putting the trust on the auditors. Those auditors could collude together to attack the user. If the user is really geeky, he can audit the source code himself, which would take a very long time.

Similarly, in a POS system, the users also typically rely on either the developers or auditors to tell them that a particular client is safe to use. But that also carries with it the information of the most recent snapshot. If the developers try to change the snapshot hash to carry out a massive fake blockchain attack, auditors who have the legitimate blockchain up to the time of the client upgrade stored locally on their computer can check to see that the snapshot hash does not match any of the blocks on their stored blockchain and sound the alarms. If the user does not want to trust the developer or the auditors, he can audit the source code himself, but he would also need to somehow verify the latest snapshot hash. If he has a stored version of the blockchain up to client upgrade time, he can verify it the same way the auditors did. If he is evaluating this starting from scratch, then he needs to ask people he trusts that have already been on the network for a while whether the hash is correct (and thus whether this program he has on his computer is going to connect him to the thing everyone else is already connected to). This may seem like a lot of work, but it is far less work than the code audit.

Advantages of POS over POW:

The point of all of the above was to show that in realistic scenarios, the cost of a 51% attack is too high to benefit the attacker. It is typically too expensive to get 51% of the hashing power of a high hash rate proof of work coin for the minimal benefits it provides (killing the network and/or difficult to achieve double-spends). And, it is typically too expensive to get 51% of the stake in a high market cap coin for the minimal benefits it provides (killing the network and/or difficult to achieve double-spends). But users trade the guarantee that fake blockchain history attacks are virtually impossible in POW systems for an assurance that fake blockchain history attacks are merely highly improbable in POS systems. If that was the only difference between POW and POS, it would make sense to use POW. However, there are a few very important ways that POS is actually superior to POW.

In a POW system, if the attacker has enough hashing power to attack a POW system, he can also attack any other weaker POW system that uses a similar hashing algorithm. On the other hand, buying up 51% of the stake of a POS system does not give the attacker any advantage for attacking another POS system. On the contrary, it is likely going to consume the attacker's money, leaving him too little money left over to do it again. This is incredibly important when one realizes what would likely happen if someone was foolish enough to try to buy 51% of the stake in a DPOS DAC only to kill it by taking over the delegates and refusing to sign blocks. People can take a snapshot of the failed DAC, identify the unspent transaction outputs which were voting for the corrupt delegates at the time of the DAC failure, and create a new genesis block from that snapshot with those particular unspent transaction outputs made void. A DAC identical to the previous failed one is created using this new genesis block, which takes stake control away from the attacker, leaving the other innocent 49% of stake holders with 100% of the stake of the new DAC. Those who sold to the attacker should be happy because they made a voluntary exchange and got out before the DAC failed; those who did not sell to the attacker are also happy because they doubled their stake in a DAC that will quickly regain its old value, which should hopefully compensate them for the brief outage of the DAC.
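The fork-and-purge recovery could be sketched roughly like this (the UTXO structure is hypothetical; a real snapshot tool would work off the actual chain state):

```python
# Fork-and-purge sketch: build a new genesis allocation from a snapshot of
# the failed DAC, voiding every unspent output that was voting for one of
# the attacker's delegates at the time of failure.

def purge_snapshot(utxos, corrupt_delegates):
    """utxos: list of {"owner": key, "amount": n, "vote": delegate_id}.
    Returns the allocation for the new genesis block, with the attacker's
    stake removed."""
    return [u for u in utxos if u["vote"] not in corrupt_delegates]
```

The attacker's identifying feature is his own votes: to take over the delegates he had to vote his stake for them, which is exactly what marks his outputs for removal.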

The other major benefit of POS over POW is the cost needed to secure the network. In a POW system, the security of the network is directly tied to the cost of mining. If the cost of mining is cheap, an attacker can afford to gain more than 50% of the hashing power. No amount of clever mining algorithms or ASICs will change that relationship. As the electricity cost per gigahash goes down, the difficulty will go up to keep the total cost of mining high enough to secure the network. At some point when growth in the value of the system saturates and it can no longer support large coin inflation to pay miners, as is the case in Bitcoin currently, the cost of securing the network will fall entirely on the transaction fees. POW will either have higher transaction fees than POS systems to secure the network, or if users do not accept transaction fees that are too high, the security of the network will get worse. POS systems do not suffer from these issues because they do not need to waste the money from fees on electricity to secure the network. And in fact, as the system gets bigger (meaning the market cap grows) the POS system becomes even more secure naturally, because it becomes harder for an attacker to acquire >50% of the stake.

24
General Discussion / Using the same name on all DACs
« on: July 31, 2014, 02:35:27 am »
There has been some discussion on the forums about the problem of trying to keep one's account name on all the different DACs that are going to be coming out. New DACs can snapshot the names on an older DAC they split off from (or Keyhotee founder IDs), but I think this is a really inelegant solution that doesn't actually fully address the problem, as I have discussed at https://bitsharestalk.org/index.php?topic=6420.msg85558#msg85558.

This is my proposal on how to deal with this issue. But first, I want to discuss how I understand names in the BitShares ecosystem. BitShares names are like email addresses (I believe bytemaster has already used the email metaphor to explain the naming system). An email address has two parts, the handle and the domain, e.g. james@example.com. The handle james is not universally unique; there can be a james@gmail.com, james@yahoo.com, etc. However, it is unique to a domain, and this is enforced by the mail server located by that domain. The mail server may have an IP address like 10.1.2.3, and a domain name like example.com pointing to that IP address. We have an analogous situation with BitShares: the mail server is now the DAC and the handle is the registered name on that particular DAC. The analogue to the IP address is the information necessary for universally identifying a particular instance of a DAC, such as the hash of the genesis block of a DAC. Regular people shouldn't have to deal with the hash or IP address. Instead, in the case of email, they use domain names. I think that the analogue of the domain name in BitShares is a local alias for a DAC that the client recognizes. For example "btsx" could be used to represent the particular BitShares X DAC that the client is configured to connect to. Meaning my BitShares X account could be referred to on my local machine by the name "arhag@btsx" for example. The same client might be configured to associate "namespaces" to the BitShares Namespaces/DNS/.p2p DAC (toast, which is the official name?) which the client could also connect to, or alternatively the client could send RPC calls to another client running on the user's machine that connects to the BitShares Namespaces network. These aliases are local to the client (not necessary universally recognized domain names, but by convention all clients might use the same names). You can think of them as manually setting hostnames (to continue the email analogy). 
I don't think manual hostnames are a burden in the BitShares ecosystem like they are for the web, because in the case of the web, your one browser can navigate to a web page located at any domain name, or your one email client can send an email to an email address at any domain name, but in the case of BitShares, you effectively need new software to run on your computer for every new DAC (or "domain") you want to be able to access.

So how does all of that deal with the unique name problem? To answer that I need to discuss how owner keys would be created for accounts in this system. By the way, I don't know the exact details of how the BitShares toolkit currently deterministically generates the owner key of an account from the wallet root key. I imagine it is similar to what I am going to discuss, but I am going to describe how I envision it should work. Each account created within a wallet would have a specific local index associated with it. That index (which in my system could be a string) along with the wallet root key determines the owner key of the account (something like WALLET_ROOT_KEY.hardened_child(ACCOUNT_INDEX) => ACCOUNT_OWNER_PRIVATE_KEY). The local index could by default be automatically set to an incrementing counter (increments each time a new account is created). To universally identify an account on any BitShares DAC, one only needs the public key corresponding to the private key that is generated purely from WALLET_ROOT_KEY and ACCOUNT_INDEX.
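To illustrate the derivation I have in mind (this sketch uses a plain HMAC-SHA512 construction purely for illustration; the actual toolkit's derivation scheme may well differ):

```python
import hashlib
import hmac

# Sketch of WALLET_ROOT_KEY.hardened_child(ACCOUNT_INDEX) -> owner key.
# The index may be a string, as proposed above (e.g. "1" by default, or an
# explicit value like "secretagent-2j29d94jd3s0"), so the same root key and
# index always yield the same owner key on every DAC.

def hardened_child(wallet_root_key: bytes, account_index: str) -> bytes:
    digest = hmac.new(wallet_root_key, account_index.encode(),
                      hashlib.sha512).digest()
    return digest[:32]  # 32-byte private key material
```

The important properties are determinism (the same root and index always reproduce the account) and one-wayness (an account derived from an explicit, secret index cannot be found without knowing that index, which is what gives the plausible deniability mentioned below).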

To make this more concrete, imagine a user James. James created a new wallet on BitShares X, and then created three accounts on that wallet. The first two accounts (jim and internetpseudonym) were created with default settings and then registered. This means the account jim was created with an owner key deterministically generated from WALLET_ROOT_KEY using the index 1, and internetpseudonym was generated using index 2. James then created a third account on the same wallet using the explicitly defined index "secretagent-2j29d94jd3s0" and was able to register it on the blockchain with the name agent007. If James now lost his hard drive and the only thing he backed up was WALLET_ROOT_KEY, his client would automatically be able to recover the two registered accounts jim and internetpseudonym, but it would not be able to automatically recover his third registered account agent007 (this provides plausible deniability by the way!). James would have to remember and explicitly enter "secretagent-2j29d94jd3s0" into his client to recover the agent007 account and all of its funds.

James now downloads the client for the brand new BitShares Namespaces DAC. He of course wants to transfer over his accounts from BitShares X. So he imports the BitShares X wallet (or more precisely has the client extract the WALLET_ROOT_KEY and generate a new wallet with that key). The client could also look through all local accounts and their indices in the BitShares X wallet and use that to automatically set up the corresponding accounts in the new DAC. But even if the client did not automatically import the wallet but nevertheless used the same WALLET_ROOT_KEY, it would still generate accounts on the new DAC with the same owner key as the corresponding account on the BitShares X DAC. By corresponding account I mean the account with the same index. So, if James creates a brand new default account on that wallet (so it has the default index 1) it would actually be an account with the same owner key as jim@btsx. And, if James decided to create a new account on that wallet using the explicit index "secretagent-2j29d94jd3s0" on BitShares Namespaces, it would create an account with the same owner key as agent007@btsx. If that account was registered on the BitShares Namespaces DAC under a different name, say jbond, outside parties would be able to associate the name jbond@namespaces with agent007@btsx because they have the same owner key. This is the link that can tie the different names of the same conceptual account on different DACs together.

So, let's say James wants to register the first account (index 1) on the BitShares Namespaces blockchain. Unfortunately, jim is already taken. So he instead registers that account under james. Now james@namespaces is linked with jim@btsx. If Bob wants to transfer ownership of his domain name to James, but only has his BTSX account name (jim), Bob now has a way of finding out what name James uses on BitShares Namespaces without needing to talk to him. A lookup of jim on BitShares X can give the account's owner key. If there is a match for that owner key on BitShares Namespaces, then the corresponding account (james) must be the registered account that James uses. And so Bob can send the domain to james. This entire process can be automated by the clients. Bob should just be able to type jim@btsx in the to field and have his client automatically resolve whom to send the domain name to in the background (or give an error if the user isn't registered on the relevant DACs).
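The lookup Bob's client would perform can be sketched like this. The dictionaries stand in for the on-chain account registries of each DAC, and all names and keys are made up for illustration:

```python
# Hypothetical registries mapping registered name -> owner public key.
btsx_registry = {"jim": "OWNER_KEY_A", "internetpseudonym": "OWNER_KEY_B"}
namespaces_registry = {"james": "OWNER_KEY_A", "bob": "OWNER_KEY_C"}

def resolve_cross_dac(name, source_registry, target_registry):
    """Find the name registered on the target DAC under the same owner
    key as `name` on the source DAC; None if either lookup fails."""
    owner_key = source_registry.get(name)
    if owner_key is None:
        return None  # name not registered on the source DAC
    for target_name, key in target_registry.items():
        if key == owner_key:
            return target_name
    return None  # same owner key not registered on the target DAC

# jim@btsx and james@namespaces share OWNER_KEY_A, so they resolve.
assert resolve_cross_dac("jim", btsx_registry, namespaces_registry) == "james"
# internetpseudonym's key is not registered on Namespaces.
assert resolve_cross_dac("internetpseudonym", btsx_registry, namespaces_registry) is None
```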

Then, there is a final step to make all of this incredibly convenient and to, for example, be able to just give one universally unique name on a business card. We must develop a social consensus around which DAC everyone agrees to identify themselves with. My suggestion is that we use the upcoming BitShares Namespaces DAC for this purpose. I propose that by default every BitShares handle not further qualified with an @dacname should refer to the registered name on the BitShares Namespaces DAC. That means if someone wanted to send money to James in BitShares X but they only knew the BitShares X name (jim), they would have to send the money to "jim@btsx" (the @btsx would need to be explicit). If they just sent the money to "jim", it would instead be sent to jim@namespaces, who may not necessarily be James. The typical way this would be used, however, is for everyone to use the Namespaces DAC registered name within all DAC clients. So, the person sending money to James would learn that his Namespaces handle is james and would simply put "james" in the to field (although the explicit "james@namespaces" would be appropriate as well). If someone else wants to vote for James as a delegate on the Music DAC (on which James has registered the name jimmy@music), they would still put "james" in the to field of the Music DAC client (the system would automatically resolve james -> james@namespaces -> unique owner key -> jimmy@music). If this social consensus is followed, it becomes irrelevant what any of your registered names are except for the name registered on the BitShares Namespaces DAC. That name then becomes your universally unique name.
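The default-resolution rule for handles could be as simple as the following sketch (the function name and the treatment of the default DAC are my own illustration of the proposed convention):

```python
def resolve_handle(handle, default_dac="namespaces"):
    """Split a handle like 'jim@btsx' into (name, dac). Per the proposed
    social consensus, an unqualified handle defaults to the BitShares
    Namespaces DAC."""
    if "@" in handle:
        name, dac = handle.rsplit("@", 1)
        return name, dac
    return handle, default_dac

# An explicit qualifier is honored; a bare name goes to Namespaces.
assert resolve_handle("jim@btsx") == ("jim", "btsx")
assert resolve_handle("james") == ("james", "namespaces")
```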

25
General Discussion / MultiSig with TITAN?
« on: July 22, 2014, 06:45:09 pm »
Does the BitShares toolkit currently have support for multisig transactions? And how does that work with TITAN? It would be great if there was some documentation on it.

Regardless of the current state of support of multisig, I think it would be really great if registered accounts could post their default multisig preferences. I suppose they can include that information today by storing it in the public data JSON, but I would like a protocol where the wallet clients of people sending money by default automatically respect the multisig wishes of the recipient. This would make multisig for the purposes of security far more convenient, since the receiver of money wouldn't have to immediately spend the transaction to themselves just to secure it with multisig.

For example, one could upload 3 public keys to the blockchain under their account name, one of them a primary key and the rest secondary keys. The primary private key would be used to derive the shared SECRET used to decrypt the memo (perhaps a 4th observer key could also do this to track received transactions on the user's behalf). The secondary keys would not be able to derive SECRET (and thus not see the memo or who the transaction was from), but would be necessary to sign the user's spending transactions with a 2 of 3 multisig. One of the secondary public keys would be that of a user-chosen third-party validator that validates the user's spending transactions, and for the other secondary public key, its randomly generated private key would be encrypted and stored offline as a backup (possibly split among friends and family using something like Shamir's Secret Sharing). It should also be possible to change the public keys stored on the blockchain under the registered account at any time, again using multisig.
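A rough sketch of the 2-of-3 validity check (not actual BitShares code; the key names are invented for illustration):

```python
def multisig_valid(signers, registered_keys, threshold=2):
    """A spend is valid if at least `threshold` of the registered public
    keys produced a valid signature. Here `signers` is the set of keys
    that actually signed; signature verification itself is elided."""
    return len(signers & registered_keys) >= threshold

keys = {"primary_pub", "validator_pub", "offline_backup_pub"}
# Normal operation: primary key plus the validator's signature.
assert multisig_valid({"primary_pub", "validator_pub"}, keys)
# Validator unavailable or untrusted: primary plus the offline backup.
assert multisig_valid({"primary_pub", "offline_backup_pub"}, keys)
# A single compromised key cannot spend, and unknown keys don't count.
assert not multisig_valid({"primary_pub"}, keys)
assert not multisig_valid({"primary_pub", "attacker_pub"}, keys)
```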

This kind of functionality would be incredibly useful from a security standpoint. In fact, I think the default behavior of the wallet should be to ask users to set up 2 of 3 multisig. Normal operation for the user would be to use a secure connection to the semi-trusted third-party validator to prove their identity in some way and send the transactions for the validator to sign. The strength of the proof of identity could vary depending on the conditions of the transaction, anything from just a BitShares XT Login to coming into a facility in person to verify using biometrics. If the validator is suspicious of the user's activity, or the accumulated amount of money signed by the validator in the last 48 hours is larger than some threshold, etc., then the validator can ask for a stronger proof of identity as a safeguard against thieves compromising your wallet (hacking, extortion, etc.). If the user no longer trusts the validator, or the validator is refusing to sign legitimate requests, the user can always wait until they can get access to their offline key and together with the primary key sign any transaction they want (most likely to change the validator on their account).
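The validator's threshold policy might look roughly like the following. The 48-hour window comes from the example above; the threshold value, function name, and data layout are invented for illustration:

```python
WINDOW_SECS = 48 * 3600   # the 48-hour window from the example above
THRESHOLD = 10_000        # assumed policy parameter, in some asset unit

def requires_strong_proof(recent_spends, now, amount,
                          window=WINDOW_SECS, threshold=THRESHOLD):
    """Decide whether the validator should demand a stronger proof of
    identity before signing. `recent_spends` is a list of
    (timestamp, amount) pairs the validator has already signed."""
    accumulated = sum(a for t, a in recent_spends if now - t <= window)
    return accumulated + amount > threshold

spends = [(0, 4_000), (10_000, 5_000)]
# 9,000 already signed in-window: a small spend passes quietly...
assert not requires_strong_proof(spends, now=20_000, amount=500)
# ...but one that pushes the total over the threshold triggers scrutiny.
assert requires_strong_proof(spends, now=20_000, amount=2_000)
# Old spends fall out of the window, so the limit effectively resets.
assert not requires_strong_proof(spends, now=WINDOW_SECS + 20_000, amount=2_000)
```

A real validator would of course also factor in behavioral signals, not just an accumulated total, but the principle is the same: routine spends are cheap to authorize, and unusual ones escalate.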

I am interested in what bytemaster has to say with regards to: incorporating this functionality in all BitShares DACs; whether this needs a hard fork to accomplish; how multisig works with TITAN and without compromising user privacy; and, assuming the blockchain and network allows for all of this today, how much of a priority it is to implement this functionality in a user-friendly way in the clients.

26
I initially had some trouble getting the Qt client to compile on Linux, but I finally figured it out. I wrote this detailed guide on the steps I took to get the client to compile on my system (Ubuntu 14.04 x86_64). It might be useful to other people who are having compilation problems, but it's also part of a question I have regarding errors that I am still experiencing with the web wallet (web wallet errors are at the bottom of this post).


The following is the procedure I followed to build on x86_64 Ubuntu 14.04 (3.13.0-27-generic):

Install prerequisite packages from repository (according to BUILD_UBUNTU.md):
Code: [Select]
sudo apt-get update
sudo apt-get install cmake git libreadline-dev uuid-dev g++ libdb++-dev libdb-dev zip libssl-dev openssl build-essential python-dev autotools-dev libicu-dev libbz2-dev libboost-dev libboost-all-dev

Download latest BitShares X code from GitHub:
Code: [Select]
mkdir ~/bitshares
cd ~/bitshares
git clone https://github.com/dacsunlimited/bitsharesx.git
cd bitsharesx
git checkout 0.2.1
git submodule init
git submodule update

Download and install Qt 5.3.1 for Linux 64-bit from http://qt-project.org/downloads:
Code: [Select]
cd ~/bitshares
wget http://download.qt-project.org/official_releases/online_installers/qt-opensource-linux-x64-1.6.0-4-online.run
chmod +x qt-opensource-linux-x64-1.6.0-4-online.run
./qt-opensource-linux-x64-1.6.0-4-online.run
I used the GUI installer to install to ~/bitshares/Qt with the default installation options.

Download and install Node.js v0.10.29 from http://nodejs.org/download:
Code: [Select]
cd ~/bitshares
wget http://nodejs.org/dist/v0.10.29/node-v0.10.29-linux-x64.tar.gz
tar xzf node-v0.10.29-linux-x64.tar.gz
cd node-v0.10.29-linux-x64
export PATH=/home/$USER/bitshares/node-v0.10.29-linux-x64/bin:$PATH

Install lineman and its dependencies in web_wallet folder using npm:
Code: [Select]
cd ~/bitshares/bitsharesx/programs/web_wallet
npm install -g lineman
npm install

Then, following directions from bitsharesx/programs/qt_wallet/README.md,
I configure using CMake:
Code: [Select]
cd ~/bitshares/
mkdir bitsharesx-build
cd bitsharesx-build
export CMAKE_PREFIX_PATH=/home/$USER/bitshares/Qt/5.3/gcc_64/
cmake -DINCLUDE_QT_WALLET=ON ../bitsharesx

However, CMake gave me an error with the following output:
Code: [Select]
-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found version "1.2.8")
-- Configuring BitShares on Linux
-- Using  as BerkeleyDB root
-- Looking for: db_cxx-6.0
-- debug/usr/lib/x86_64-linux-gnu/libdb_cxx.sooptimized/usr/lib/x86_64-linux-gnu/libdb_cxx.so
-- Found BerkeleyDB: /usr/include 
-- Using custom FindBoost.cmake
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   thread
--   date_time
--   system
--   filesystem
--   program_options
--   signals
--   serialization
--   chrono
--   unit_test_framework
--   context
--   locale
-- Using custom FindBoost.cmake
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   coroutine
-- Configuring project fc located in: /home/arhag/bitshares/bitsharesx/libraries/fc
-- Configuring fc to build on Unix/Apple
-- Using custom FindBoost.cmake
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   thread
--   date_time
--   system
--   filesystem
--   program_options
--   signals
--   serialization
--   chrono
--   unit_test_framework
--   context
--   locale
--   iostreams
--   coroutine
-- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libssl.a;/usr/lib/x86_64-linux-gnu/libcrypto.a (found version "1.0.1f")
** for a debug build: cmake -DCMAKE_BUILD_TYPE=Debug ..
-- Finished fc module configuration...
-- Could NOT find Curses (missing:  CURSES_LIBRARY CURSES_INCLUDE_PATH)
-- Found Readline: /usr/include 
-- Using  as BerkeleyDB root
-- Looking for: db_cxx-6.0
-- debug/usr/lib/x86_64-linux-gnu/libdb_cxx.sooptimized/usr/lib/x86_64-linux-gnu/libdb_cxx.so
-- Found BerkeleyDB: /usr/include 
-- Enabling Bitcoin Core Wallet Imports
CMake Error at /home/arhag/bitshares/Qt/5.3/gcc_64/lib/cmake/Qt5Gui/Qt5GuiConfig.cmake:15 (message):
  The imported target "Qt5::Gui" references the file

     "Qt5Gui_EGL_LIBRARY-NOTFOUND"

  but this file does not exist.  Possible reasons include:

  * The file was deleted, renamed, or moved to another location.

  * An install or uninstall procedure did not complete successfully.

  * The installation package was faulty and contained

     "/home/arhag/bitshares/Qt/5.3/gcc_64/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake"

  but not all the files it references.

Call Stack (most recent call first):
  /home/arhag/bitshares/Qt/5.3/gcc_64/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake:31 (_qt5_Gui_check_file_exists)
  /home/arhag/bitshares/Qt/5.3/gcc_64/lib/cmake/Qt5Gui/Qt5GuiConfigExtras.cmake:58 (_qt5gui_find_extra_libs)
  /home/arhag/bitshares/Qt/5.3/gcc_64/lib/cmake/Qt5Gui/Qt5GuiConfig.cmake:143 (include)
  programs/qt_wallet/CMakeLists.txt:16 (find_package)


-- Configuring incomplete, errors occurred!

A little bit of searching later and I figured out I needed another dependency:
Code: [Select]
sudo apt-get install libgl1-mesa-dev libegl1-mesa-dev
And now CMake finishes successfully. It seems Qt's CMake scripts would fail because they couldn't find libGL.so and libEGL.so on my system. They were located at /usr/lib/x86_64-linux-gnu/mesa/libGL.so and /usr/lib/x86_64-linux-gnu/mesa-egl/libEGL.so, but for some reason CMake refuses to search for them there. Installing the above packages created symlinks to the libraries at the proper locations /usr/lib/x86_64-linux-gnu/libGL.so and /usr/lib/x86_64-linux-gnu/libEGL.so. Perhaps the symlinks alone would have been enough, but installing the packages is simpler. So, it seems these two packages should be included in the list of prerequisite packages to initially install.

I then make the web wallet first, and then the rest of the system.
Code: [Select]
cd ~/bitshares/bitsharesx-build
make buildweb
make
The compilation seems to work fine with the exception of some annoying compiler warnings which seem to me to be mostly benign.

Finally, I run the Qt client and everything appears to be working.
Code: [Select]
cd ~/bitshares/bitsharesx-build/programs/qt_wallet
./BitSharesX

At this point, everything is working fine, and there is no need for the user to go further. However, I wanted to explore the other options available, such as using the web wallet through the browser directly and not needing Qt, and that is where I got errors.

I first modified "~/BitShares X/config.json" according to bitsharesx/programs/web_wallet/README.md to contain:
Code: [Select]
{
  "rpc": {
    "rpc_user": "test",
    "rpc_password": "test",
    "rpc_endpoint": "127.0.0.1:0",
    "httpd_endpoint": "127.0.0.1:0",
    "htdocs": "/home/arhag/bitshares/bitsharesx/programs/web_wallet/generated"
  },
...

Then, I run the bitshares_client with the following options:
Code: [Select]
cd ~/bitshares/bitsharesx-build/programs/client
./bitshares_client --data-dir ~/BitShares\ X/ --server --httpport 9989
The client seems to be working fine. I can use all the commands. The only problem seems to be a regularly repeating message saying a peer disconnected me because of an invalid block:
Code: [Select]
Peer <IP address>:<port> disconnected us: You offered us a block that we reject as invalid
I am not sure if this is normal behavior or not. Also, when stopping the client, it crashes with a segmentation fault.

Now, with the bitshares_client running, I also run the web wallet:
Code: [Select]
cd ~/bitshares/bitsharesx/programs/web_wallet
lineman run

I am able to use my web browser to connect to http://localhost:8000/ and see the GUI interface; however, nothing actually works because it gives me an RPC error in the lower left corner about prefixMatchingApiProxy not being defined. The lineman process prints the following error in the terminal:
Code: [Select]
ReferenceError: prefixMatchingApiProxy is not defined
  at /home/arhag/bitshares/bitsharesx/programs/web_wallet/config/server.js:21:57
  at callbacks (/home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/lib/router/index.js:164:37)
  at multipart (/home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/node_modules/connect/lib/middleware/multipart.js:81:27)
  at /home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:57:9
  at urlencoded (/home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/node_modules/connect/lib/middleware/urlencoded.js:46:27)
  at /home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:55:7
  at IncomingMessage.<anonymous> (/home/arhag/bitshares/bitsharesx/programs/web_wallet/node_modules/lineman/node_modules/express/node_modules/connect/lib/middleware/json.js:72:9)
  at IncomingMessage.emit (events.js:92:17)
  at _stream_readable.js:929:16
  at process._tickCallback (node.js:419:13)

I don't know why lineman is giving me this problem. If I comment out the offending console.log function in web_wallet/config/server.js, I don't get that particular error anymore but the web wallet still doesn't work because the HTTP POST requests to /rpc don't get any response back. Have I not set things up correctly? In particular, I am wondering how the web wallet (the lineman process) knows what RPC username and password to use when making the requests.
