Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - thisisausername

1
Muse/SoundDAC / Re: Connection issues
« on: August 02, 2016, 01:07:54 am »
Is there still a working websocket somewhere these days?

ws://128.199.143.47:2018 isn't working for me.

2
Hey all, here's a summary of the October 9th hangout with Bytemaster. A different format than last time, even more my interpretation of things. Still, I think I get across pretty much everything. All text is Bytemaster summarized unless prefixed by "Q:".

Also, I have a request: if my transcript of Sep 25th or my summary of Oct 2nd (check 'em out if you haven't!) has been helpful to you, feel free to throw some spare brownie.pts or BTS my way: tiau or thisisausername on BitShares 2.0. So far I haven't made anything, not even what I would've earned had I not missed two hangouts.



BitShares Dev Hangout: Bytemaster – October 9, 2015 Quick Summary

Testnet has run for a week, survived flooding, stayed generally in sync.  100 TPS achieved; 1000 TPS could be done, but would more likely be an attack at this point. This parameter will be mutable. Found and fixed a number of issues along the way this week.  Tuesday launch schedule is a go.

Subtle issues fixed this week: non-deterministic behavior when iterating over accounts (hash-index lookup order is non-deterministic); switched to ordered indexes instead of hash indexes - slower but deterministic, and can be switched back in the future. Didn't require a hardfork for this fix. Finding these types of bugs is a good sign.

Fixed some bugs involving reads of freed memory. Nothing big enough to cause a hardfork.

The rate of bugs found is decaying exponentially; still, at this point some bugs are to be expected. Bug-free status can only be achieved through the kind of massive testing that requires a release, and the release must also be 'good enough'.

Major bugs have all been fixed. Polish-type bugs are being found now. Potential rounding errors could still be found. Order of magnitude more confidence in BitShares 2.0 than in 1. Funds will be safe if private keys are safe. Bugs can be fixed; an account-balance-affecting bug is very unlikely.

Q: Hardfork bugs v. others?

Non-hardfork: Anything in the UI (most issues are here, at the moment)
Hardfork: Changes the behavior of something the witnesses did in the past, changes the validity of future transactions

Fork: Change in the interpretation of the history of events. Some changes restrict things; things that were possible in the past are no longer possible. For example, if non-deterministic behavior were found we'd have to define that non-deterministic behavior as the expected behavior.

Another type of hardfork: allowing something in the future that wasn't allowed in the past. In this case just need to have all witnesses update in time.

Unplanned hardfork: All witnesses running the same code, but the code isn't deterministic. Eventually some transaction is allowed that shouldn't be (valid for some, but not for others); the chain splits.

Unplanned hardforks are the most dangerous. Bitcoin 2013 blocksize bug was like this.

The Bitcoin bug took a couple of hours to find (a few people managed double spends); there were multiple chains for a bit.

BitShares has the Last Irreversible Block: the last block confirmed by 2/3rds of the witnesses. This block cannot be reverted. Irreversibility is achieved about one minute after a transaction. Much faster than six hours for Bitcoin. It proves all witnesses had consensus at a certain point in time, so even if communication were lost it would be the only point to rebuild from. The Last Irreversible Block protects the network from unforeseen hardforks.
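
A rough illustration of the idea (my sketch, not necessarily Graphene's exact rule): take the block number each witness has most recently confirmed, and the last irreversible block is the highest block that at least two-thirds of witnesses have reached.

Code:
# Illustrative sketch only: compute a "last irreversible block" from the block
# numbers each witness has most recently confirmed. The real Graphene rule may
# differ in details (threshold rounding, tie handling).
import math

def last_irreversible_block(last_confirmed_by_witness):
    """last_confirmed_by_witness: list of block numbers, one per witness."""
    n = len(last_confirmed_by_witness)
    needed = math.ceil(2 * n / 3)          # at least 2/3 of witnesses
    ranked = sorted(last_confirmed_by_witness, reverse=True)
    return ranked[needed - 1]              # highest block confirmed by >= 2/3

# Example: 9 witnesses; 6 of them (2/3) have confirmed block 1042 or later.
print(last_irreversible_block([1045, 1044, 1044, 1043, 1042, 1042, 1041, 1040, 1039]))  # -> 1042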

Q: Minor fork with 35% witnesses last night; resolved soon?

Haven't looked into that fork yet. There was a fork caused by lost Internet connectivity. There was also a recent patch to fix some forking behavior. Bottom line is that such a fork will just require witnesses to be diligent [and patch] and resync back to the main chain.

This won't affect most users using light-wallets (light-wallet operators will have to keep up to date.) Such bugs are likely to be fixed very quickly.

graphene.bitshares.org is the test wallet for the testnet [probably now defunct since the real launch has happened].

Genesis state will be published on October 13th; community, please review this. See the new 0.9.3c RPC commands for how to generate the genesis file, but please review the officially provided one. [This task is also irrelevant now that the release has happened.]

NOTEs (and TESTNOTEs) are getting their own blockchain, not migrated to BitShares 2.0. MUSE is launching soon; the BitShares 0.9.3c wallet will be needed to claim NOTEs.

Q: Value in subjecting code base to 3rd-party review; worker proposal?

More eyes on code is better, probably other things of higher priority at the moment, but up to the community.

Q: When will transaction ordering be separated from block-signing responsibility?  Require hardfork?

Not necessarily [a hardfork]. Witnesses could voluntarily agree to let someone else order for them. Currently witnesses have complete control over the order of transactions included in a block. [If] all transactions had to be hash-ordered or expiration-date ordered, such constraints would let the people who produce transactions retain a bit of control and game the system (mine transaction IDs, set expiration dates opportunistically). Witnesses always have the option to include or not include a transaction, so not allowing reordering within a block doesn't completely fix this trust issue. Witnesses currently implement first-come first-serve ordering. This has lots of benefits, if witnesses are trusted [which they kind of have to be]. Good debate for the community to have on whether these responsibilities should be separated.

OpenLedger is going to be like the graphene.bitshares.org wallet, themed a bit differently and with a few different features (deposit/withdrawal).

Q: "To pay at a store via BitShares PoS wallet QR codes are required, will new light-wallets support this?"

Not on day one, but a great feature.

Q: Licensees for wallet backend?

About a half-dozen licensees for the wallet backend. Not sure; Stan is spearheading this.

Q: Light-weight wallet can query multiple nodes?

Not currently. In the future this could be possible. Recently added ability to change host server. Can point to cloud [or locally, presumably] hosted full-node.

Q: Are accounts in the genesis block sorted as in BitShares 1, with Keyhotee accounts no longer first?

Probably several hours worth of work, don't have the time to do that. Manual edits will be required, will move Keyhotee founders to front manually.

Q: Feed frequency: is it important to publish feeds every 20 minutes, or would each hour suffice?

Depends on how fast the market is moving. Usually every 20 minutes is more frequent than necessary. An hour generally seems too long. To minimize risk to shorts (and thus spreads) price feeds should be updated every time they change more than a fraction of a percent.
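
A minimal sketch of that publishing rule, assuming a hypothetical witness feed script: republish whenever the price has moved more than some small fraction, or whenever a maximum staleness is reached.

Code:
# Hypothetical feed-publishing policy (illustration only, not actual witness
# code): publish when the price moves more than `threshold` relative to the
# last published value, or when `max_age_sec` has elapsed regardless of movement.
import time

def should_publish(current_price, last_published_price, last_published_at,
                   threshold=0.005, max_age_sec=3600):
    moved = abs(current_price - last_published_price) / last_published_price
    stale = (time.time() - last_published_at) > max_age_sec
    return moved > threshold or stale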

Q: How will fee pool markets work?

A fee pool is a pool of BTS available for paying fees. The issuer of the asset says, "I will buy back my asset and give you BTS at the core exchange rate." So a fee paid in USD is converted to BTS: the issuer gets the USD and the network gets BTS from the fee pool to pay the referrer and the network fees. The issuer of the asset must keep the pool funded and set the core exchange rate.

So BitAssets have the committee account as the issuer. It can access and spend all the fees paid in BitUSD, and can sell BitUSD for BTS and put it back into the fee pool.
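
A toy sketch of those fee-pool mechanics (names and numbers are illustrative, not the actual implementation): the network draws BTS from the issuer's pool at the issuer-set core exchange rate, while the issuer accumulates the asset-denominated fees.

Code:
# Toy model of a fee pool (illustrative only). The issuer funds `fee_pool_bts`
# and sets `core_exchange_rate` (BTS per 1 unit of the asset). When a user pays
# a fee in the asset, the network takes BTS from the pool at that rate and the
# issuer accumulates the asset-denominated fees.
class FeePool:
    def __init__(self, fee_pool_bts, core_exchange_rate):
        self.fee_pool_bts = fee_pool_bts
        self.core_exchange_rate = core_exchange_rate
        self.accumulated_asset_fees = 0

    def pay_fee(self, fee_in_asset):
        bts_needed = fee_in_asset * self.core_exchange_rate
        if bts_needed > self.fee_pool_bts:
            raise RuntimeError("fee pool exhausted; issuer must top it up")
        self.fee_pool_bts -= bts_needed
        self.accumulated_asset_fees += fee_in_asset
        return bts_needed  # BTS available to pay network and referrer fees

pool = FeePool(fee_pool_bts=10_000, core_exchange_rate=20)  # 1 asset = 20 BTS (example rate)
print(pool.pay_fee(1.5))  # 30.0 BTS drawn from the pool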

Q: 5% premium for core [bts] exchange rate makes sense?

Yes, to cover risks / costs of market operations. Not necessary though.

Q: Any luck working with exchanges to remove BTS/NOTE pairing?

NOTEs should not be on a centralized exchange [snapshot for MUSE soon]. NOTEs bought on exchange after 13th may or may not be honored, up to MUSE folks.

Q: Negative mining attack on Bitcoin?

Bitcoin is secured by the profit margins of the miners. Wide margins give lots of security. The cost of attack is the size of the margin. Bitcoin has a consensus model, though large mining pools control everything. This is as secure as BitShares with a similar number of witnesses (six or seven). The risk is that someone can buy out the network [pay more than the margin the pools are making]. Bitcoin and proof-of-work are not likely to die. The true believers of proof of work are not likely to switch to other systems.

Actually buying out Bitcoin is a fascinating idea.

Q: How much would it cost to buy out the pools?

100x cheaper to attack than people think. 25 BTC * $250 every 10 minutes, but actually only the margin matters. Mining profitability was assumed to be 5%. Hard to calculate with difficulty rises, electricity cost, new hardware, price fluctuations, et cetera. The block reward halving within a year will cut margins even more.

Current Bitcoin pools may only sell out for some multiple; this is unknown, with different incentives, goals and revenue streams. The value of being in control of the Bitcoin network is unknown. Bitcoin could already have been taken over by people with some unknown profit motive. Paying people high margins to raise the cost of their defection is crazy. Markets with artificially high margins get forced back toward the market rate of return; thus the 5% rate-of-return assumption. Above a 5% return one would expect the Bitcoin mining market to be liquid enough to bring it down to such levels. The other 95% (the non-margin part) does not contribute to the security.
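
Back-of-the-envelope version of the argument, using the figures mentioned above (25 BTC reward, $250/BTC, a block every 10 minutes, 5% assumed margin):

Code:
# Rough arithmetic behind "only the margin matters" (figures from the discussion,
# not authoritative): the gross block reward is often treated as the security
# budget, but the cost to out-bid miners' loyalty is closer to their profit margin.
block_reward_btc = 25
btc_price_usd = 250
blocks_per_day = 24 * 6            # one block every ~10 minutes
margin = 0.05                      # assumed 5% mining profit margin

gross_per_day = block_reward_btc * btc_price_usd * blocks_per_day
gross_per_year = gross_per_day * 365
margin_per_year = gross_per_year * margin

print(f"gross issuance: ${gross_per_day:,.0f}/day, ${gross_per_year:,.0f}/year")
print(f"aggregate miner margin: ${margin_per_year:,.0f}/year")
# gross issuance: $900,000/day, $328,500,000/year
# aggregate miner margin: $16,425,000/year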

Don't have to worry about witness collusion (with BitShares 2.0) thanks to high margins. The cost to buy witnesses out is proportional to what they're getting paid to stay honest. More pay also gives more incentive for witnesses to be vetted. Security is proportional to witness pay. Current witness pay may be $300/month; at Bitcoin's size it would be ~$70,000/month. Even at current dilution (a fraction of Bitcoin's), were BTS the size of Bitcoin it would be more expensive to get enough witnesses to defect than it is to attack Bitcoin.

Community can decide how expensive it wants it to be to buy defection [what level of security it wants.]

Q: What about state actors? Infiltration?

This is always a risk. Any business is at risk from this. The network can adapt and respond; the barrier to entry for creating alternative networks or voting someone out is low, provided the attack is detected. Blockchain technology makes it very difficult to corrupt the election process.

Q: Identabit?

The Identabit snapshot will not occur until exact rules for the snapshot date are established. It could have already happened, or it could be in the future. Doing this to minimize impact on the BitShares price and thus BitAssets. The algorithm will be published as soon as a fair one is found.

Q: Referral system?

The referral system is operational and good to go. The affiliate system is somewhat implemented but not all there; it will be rolled out after the upgrade to 2.0.

3
Excellent work.

I found an inconsistency:
Quote
- No hardforks after Oct 13th expected
- Many things to be added [requiring hardforks], going to be judicious in doing so is a big disruption to the entire network

We actually expect hard forks to happen after 13th oct .. in particular every time we want to upgrade the protocol .. other than that there should not be any hard forks

Yeah, I could've clarified more. Sounds like quarterly hard-fork protocol updates are planned but no black-swan hard-fork events are expected (of course, one never expects a black swan, so this probably doesn't say much). Some of Bytemaster's reassurance, tempered with caution, seems aimed more at market spirits than cold logical analysis.

4
wow.  nice!  thank you very much for doing this!  @tuckfheman, we might have someone who could get paid in brownies for helping you out with these :)

Didn't realize anyone else had this task; I hope I'm not stepping on any toes.

I would be super-appreciative, though. :)

Edit: In case anyone didn't notice, I also did a summary of the Oct 2nd hangout.

5
Hey all, as I mentioned in my last post I haven't been able to make the past few hangouts. There wasn't a transcript of the October 2nd one on Beyond Bitcoin, but I also didn't have copious amounts of time today, so here's a summary of that one. []'s indicate editor notes; keep in mind this is all my interpretation and I could've gotten something wrong (but I don't think I did).


(Code tags were the only way to make the tab spaces big enough for this to be readable. Copy it to notepad or something for an even better experience.)


Code:
- Thanks for testing corner cases.
- Testnet discovered blackswan
- Killed network [hardfork required]
- Wasn't supposed to be able to happen
- Every new block failed for the same reason
- Now fixed
- Accounts can vote for varying numbers of witnesses
- Default behavior was to vote for the minimum number
- Lazy voter turnout (like now) would result in the minimum number of witnesses
- Now, if you don't vote for at least 2 people you defer to those who do for the number of witnesses that you will vote for
- If you wish to abstain you can abstain, whereas before you couldn't
- New, fixed testnet is already up
- Very successful testing
- Network stayed in sync w/ floods
- Margin calls, order matching, forced settling (aside from the black swan) all worked well
- UI has continued to improve
- Permissions added
- Add/remove multisig keys
- Memokey
- Creating/Issuing assets fixed
- Help integrated into wallet
- Content is sparse at the moment
- Confident for Oct 13th launch
- Deploying infrastructure
- Making sure exchanges are prepared
- Xeroc has some docs for them
- They have been following the upgrade
- Statement will be requested
- Safest thing to do is hold funds locally
- Releasing soon:
- Hosted wallet, testnet
- Light-weight wallet, downloadable executable
- Full-node wallet, for mac, executable
- For Windows it will be DIY
- Built version coming ASAP
- Light-node or full-node offer best security
- Working on getting referral program in place
- Only very minor GUI changes from now to launch
- Wallet is looking good working under Graphene
- After Oct 13th still providing updates on a weekly basis
- New GUI release first
- No hardforks after Oct 13th expected
- Many things to be added [requiring hardforks], going to be judicious in doing so is a big disruption to the entire network
- Hardfork to add features once a quarter, unless fatal bug


- Metaexchange, blocktrades (Integrate bridges into the web wallet?)
- Working with blocktrades
- CCEDK


- Close to release, is how to connect to network going to change?
- Livenet will be similar to test
- Need to checkout from BitShares repository (not testnet repo)
- BitShares 1 can stay alive, but Bytemaster sees no point in doing so


- Have BitShares 2.0 witnesses been selected?
- Start with initial witnesses voted in
- Publish seed node
- Start election
- Were unable to migrate votes from BitShares 1
- Thus many default witnesses at first
- They do not plan on running witnesses for the long-term, just the transition


- Why can't delegate votes be moved over?
- Fundamental architectural difference
- In 1.0, balances are not tied to account names
- In 2.0, every balance is tied to an account
- Also 1 has many stale votes, new election would be good


- The risk is high with CNX being the initial witnesses
- We will select people we know [trustworthy people] to be the initial witnesses
- Not necessarily CNX employees


- Connect to backbone nodes exclusively.  DDOS protection.
- Hasn't been implemented, would like to eventually


- Long-range nothing-at-stake attack against initial witnesses
- Public record out there makes it so that you won't trust any other chain, regardless of length, everyone knows what the real chain is
- Vitalik, weak subjectivity
- TaPoS
- Not a worry for any Graphene DPOS chains
- Long-range nothing-at-stake: someone who used to have control over block production can create an alternative chain, isolate a victim and make them think they have been sent real money
- User's account wouldn't exist on their chain
- If it did, your keys wouldn't be the right keys
- See Bytemaster's blog post on long-range nothing at stake
- Based on how rapidly witnesses can be voted in on testnets, 30+ in 24 hours
- Once checkpoint made, (~24 hours), long-range attacks can't be executed
- Checkpoint: universally known good state
- Full-nodes have these


- Eventually community will be able to hire competing actors?
- Bytemaster: Yes.
- Cryptonomex is producing the software (free speech)
- Other people run nodes
- Spreads legal risk by separating these tasks
- Software is open-source, so no trust is needed
- Best regulatory protection comes from not needing to be trusted


- Worker proposals: When workers ask for shares/share dilution, are they setting a date in the future on which their shares will mature, or a 100-day sliding window? [Does the payout come all at once or over some time-frame?]
- Sliding window.


- Pros/cons of allowing workers to bundle vested shares into savings bonds instruments that could be traded to raise capital to fund something; like a sovereign bond, network itself is guarantor. Market will look at trust of whole network and set price for such bonds.
- Bytemaster: Almost possible by transferring account of worker to cash provider and cash provider gives shares immediately
- Like OTC transaction
- Still on network, but finding buyer
- Similar to bond sale (country, corp)
- Bytemaster: Right
- [Being able to trade vested shares from one user to another.] No bid/ask market? Because bonds are bundled together with shares that have different maturity dates?
- Bytemaster: Vesting system is more robust than that
- Not a bunch of different bonds, one for each day
- Each day you're adding to the bond
- Can withdraw after accumulating
- For each coin-year can withdraw 1 coin
- 365 coins, each day can withdraw 1
- Similar to vested shares from merger, adding another restriction, incentivized to wait out the whole period
- Bytemaster: If you keep it all there you get to all of it fastest; if you take out half, it takes twice as long to get the second half
- Shareholders could appoint trustees or workers who bid on this, raise funds based on the trust in the network and sell promissory notes (dilution in the future)
- Network can borrow in the present and not feel the effects of dilution until the future
- Bytemaster: Very complex economic scenario
- Price in present based on sell-pressure now
- If someone plans to hold for 1 year and they swap for a bond, so someone else can sell in the present
- There's still selling in the present
- Just added a bit of guaranteed long-term holding
- Discounted with time-value money, other factors
- How is the sell pressure of a vested share different from the sell pressure of an actual present, existing share?
- Bytemaster: Accounts can be transferred, vesting account balances cannot be directly transferred
- It would be very easy to add this functionality
- Vesting funds aren't fungible though, and there's no market for it
- Could be done off-network


- Mumble / live-stream on Oct 13th, 2015?
- Fuzzy: Will look into it
- Connect Google hangouts, mumble, Skype
- Bytemaster: On 13th we are doing open-heart surgery
- A lot of infrastructure roll-out
- Differentiate day of starting network from grand opening
- Issues may occur
- Have grand opening once everything has been working well for a while
- 13th is like beta release
- UI still needs work
- Bugs will be discovered
- Fuzzy: Now we're going to open-beta, anyone can be a tester
- Bytemaster: Emphasize beta
- There're so many unknown-unknowns
- Only real test is length of time in market
- This is why Bitcoin doesn't change anything
- Few "emergency days" would not be unsurprising [hardforks?]
- More unit-tests than ever before
- Every feature tested at protocol level
- So many variables
- Even in case of hardfork, funds are safe if private keys are safe
- Exchanges will use delayed node that looks back several blocks
- Blackswan problems tend to stop the chain
- Still confident in system
- Audience: Team has good track-record with dealing with such issues


- Fuzzy: Bitcoin is static because they are stable. When will BitShares 2.0 achieve this?
- Bytemaster: Individual call.
- Once every op has been used and every scenario has happened, that would inspire confidence for me
- Others will need more
- 6-months seems like a good time for vetting existing features
- Hardforks may reset this clock if they change these features
- We're likely to see issues in month 1 or not see them at all (electronics failure curve)
- Audience: Simultaneous dev chain? Run features on dev chain for 3-6 months before consideration of forking into main chain?
- Looking at ways of making the blockchain technology at the core robust and unchanging and allowing features to be added without breaking things from a blockchain perspective [separation of concerns].
- The challenge with consensus is that anything that changes ownership of tokens changes everything after it. Butterfly effect.


- Bytemaster: Some smart people have been criticizing our claims of 100,000 transactions per second.
- They are attacking a strawman
- Addressed this on the forum
- "We keep everything in RAM and at 100,00 TPS that's a terabyte of RAM per day that would need to be added - this doesn't scale."
- This would not work.
- Not all transactions are kept in RAM though
- Only the state, account balances
- 99% of transactions are simple transfers and result in no net increase in the [complexity of the state or] RAM usage
- I analyzed this from the perspective of each account having 1 KiB of data, which is a bit generous
- Their pubkey, balances, assets, open orders
- Most accounts use much less
- Used 1 KiB as average
- All account information for 1 billion accounts fits in ~1 TiB of RAM (see the worked arithmetic after this outline)
- Can support 2 TiB today
- By the time we grow to be this big, RAM will not be a barrier we will face
- Sequential processing bottleneck sets transaction rate limit (market operations, transactions that impact the market)
- Everything else can be done in parallel
- No way of getting around this within the market
- Every op affects the order book, can't do things in parallel without parallel markets
- Spreads, more complexity
- Parallel chains, one consensus set: the side-chains approach. Only accelerates some tasks (cores vs. clock-speed CPU analogy - cores only help with parallel tasks)
- Probably max out at a couple thousand transactions per second right now (whole system perspective)
- Have seen a couple hundred TPS on the testnet
- Far in excess of what we're likely to generate in the near future
- If we did we'd be deflationary immediately (more funding, could get needed infrastructure)
- Marketing should be refined to reflect this TPS business


- Minimum requirements for witness VPS?
- Bytemaster: Digital Ocean 1 GiB node has worked for some
- No more than couple hundred megabytes of RAM
- Almost any computer


- Bytemaster summary: Working on deployment issues
- CCEDK
- Exchanges
- Forking codebase
- Setting up for release
- Please test on testnet this week
- Test wallet soon
- Several iterations, dry run of open-ledger wallet
- Have reached out to exchanges
- Get others in contact with Bytemaster
- Xeroc also working on exchange integration
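
As referenced in the RAM-scaling item above, here is the back-of-the-envelope arithmetic (the 1 KiB/account figure is the assumption used in the discussion, not a measurement):

Code:
# Rough arithmetic behind the "1 billion accounts in ~1 TiB of RAM" claim.
# Assumes ~1 KiB of state per account (keys, balances, open orders), which the
# discussion itself calls generous; most accounts use much less.
accounts = 1_000_000_000
bytes_per_account = 1024                      # 1 KiB assumed average

total_bytes = accounts * bytes_per_account
tib = total_bytes / 2**40
print(f"{tib:.2f} TiB of account state")      # ~0.93 TiB, i.e. roughly 1 TiB

# Simple transfers just adjust two existing balances in place, so throughput
# alone doesn't grow this number; only new accounts/orders add state.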

6
Fuzzy: Now that the witnesses are not connected to an individual who needs to campaign for any specific reason – it's now a purely technical role, politically neutral. It seems to me that there's no need for us to worry about anonymous witnesses. Is this the case? Would anonymous witnesses, like someone using a VPN, be more beneficial or would there be downsides?

Bytemaster: I think it's beneficial to have one or two anonymous witnesses. Not enough that they could collude to be a danger, but enough that there's at least somebody who's still an elected witness who isn't taken out along with all the others in a raid, and that person would be able to produce enough blocks to recover the blockchain in a timely manner. Versus having everyone go out, and then, "Who's in charge?" right? [With one or two anonymous witnesses] we don't need to have an entirely new election before we begin [again, after such an attack].

One or two is probably a good idea, but other than that [witnesses] should probably be well-known. And this is another issue I'd like to bring up. People say, "If they're publicly known they can be denial-of-service attacked." Just because the person behind the server is publicly known doesn't mean the server or its IP address is publicly known or even directly on the network. Just because a witness is public doesn't mean the server is public. Just because you can take out an individual that was elected doesn't mean that you can take out their server. It's entirely possible for someone to arrange to have their server set up through several other people, with no ties to them directly, but for which they are responsible and in control. In that particular instance, even if the government raided them, shot them – they still wouldn't know where the server was to shut down the witness. That type of thing is entirely possible and we need to think about those types of solutions versus the naive approach of, "Let's just add more witnesses because that will make it more robust."

At the end of the day, more witnesses means less voter attention on each witness and lower quality per witness, and it comes down to a basic engineering problem: do you have 100 unreliable parts or 3 very reliable parts? The probability of failure is based on how those parts all combine. There's also the ability to coordinate and the speed at which you respond. I bring up all these points because, from a technical perspective, 17 witnesses is more than enough redundancy to protect against technical failure and it's probably sufficient redundancy to protect against government attacks if the witnesses are located in a handful of different jurisdictions.

The cost versus risk needs to be measured. People are very, very bad at estimating probabilities. People buy lottery tickets on the mistaken belief that the probability of winning is greater than it actually is. People avoid flying, and yet drive, because they think that driving is inherently safe compared to flying, when we know that the probabilities of all these things are opposite [our intuitions]. We underestimate the extreme cases and overestimate the lower ones. If we keep those types of things in mind, it explains a lot of the irrationality in people's perception of the risks and the costs. An example is insulation in your home: if you have no insulation you have a very inefficient home, you lose a lot of heat. Put in the first little bit of insulation and it makes a huge difference. But eventually you can spend $1,000,000 adding insulation to your home and it makes no difference whatsoever in the ability of your home to retain heat. It's the same thing with security; eventually there's a point of diminishing returns, where you're adding cost yet getting no benefit. It gets more and more expensive for less and less benefit. That's what we need to keep in mind in all aspects of the system.

As I say these things, I am not arguing for centralization. I want a robust system that's going to serve the purpose of securing life, liberty and property and not be unnecessarily burdened. That's where I'm coming from. That's what I'm trying to achieve. I hope that I'm not losing people who are big fans of decentralization. I am a huge fan of it. But decentralization is a technique, and I don't want to get lost in a technique. I want to stay focused on the why, the goal. Are we achieving the goal? I think that's what we're trying to do with BitShares, and that's what sets BitShares apart from a lot of other systems.

Fuzzy: During my IT courses we'd talk in terms of project management and IT security; you're basically describing the risk assessment matrices taught in class to IT students. You have to find that fine line. You can always overdo security. The question is, "Is it worth it?" There might be some instances where, yes, the benefits outweigh the costs, but others where the opposite is true.

Another question, "What are the best countries in the world for liberty and network speed?" I don't know if you've done any research on this.

Bytemaster: I'm too swamped with technical stuff to do research into all the political stuff. I'm kind of stuck in the United States. I hope someone else will do that research.

I would like to bring up another point: just because you have a product that meets all the technical specifications and [makes] all the proper risk-reward trade-offs to maximize the value of the system doesn't mean that it'll necessarily be the best-selling thing. This is where the debate got interesting. Why do people buy a car with 300 horsepower when speed limits and reckless driving laws mean that you can get by with a car with a 120 horsepower engine and better gas mileage? [That] type of irrational feel-good value is something we should contemplate. That impacts how well we market something. A lot of companies do things with their products that have no technical [or functional] benefit, but they cause the product to sell better. An example of this: in the '50s and '60s, magazines that had inappropriate material on their covers would be wrapped in paper. Some companies realized they could sell more of a legitimate magazine that didn't have that type of stuff if they wrapped their own magazines in similar paper. The paper wasn't providing any purpose other than [making it seem] like it was forbidden; therefore it drove interest and drove sales. There are other situations where companies do stuff that even has a negative impact on performance simply because it sells better.

I don't know the answers to these questions. I think it's a market-research thing. We all have ideas about what would sell better to us, but we have personal biases making it hard to tell what will sell best to the masses and to different target audiences. We know who the loud [and] vocal people are, but do they actually carry any weight? Or do fundamentals matter, like profitability. If you can make the blockchain profitable, is that more enticing than saying, "Well, we're not profitable, but we're super secure." Those are the types of [conversations] we need to have.

I mention all this because the people here on this call are going to be the voters. They're going to have to vote on who to hire as witnesses and committee members and as workers. These are things that you need to think about and consider. My job in these mumble sessions is to help provide perspective and help educate so we can all make better decisions and not just vote out of gut-reflex. The more educated the voters are the better the system will be.

This brings me to another point, we've got a fourth role in the system that hasn't really been talked about much because it's not an explicitly enumerated role. We've got the witnesses, the committee members and the workers – we've talked about those [roles] lots. We also have the proxy voters. In political terms you'd probably call them 'delegates'. These are the people to whom most people have set their account to proxy-vote through.

You can view these as the mining pools of Delegated Proof of Stake. We want as many of those people as possible. They can meet and make decisions. If we had 100 or 150 people that controlled 85% of the indirect vote, they would be able to quickly discuss policy and make smart choices about who all the other players in the system are. We can have as many of those as we want. We don't have to centralize on a handful of witnesses or committee members. Instead we can pick leaders of communities and businesses and break it up as much as we want to get as many votes concentrated into those hands as possible. And have those people decide how much technical redundancy is necessary. In fact, if you have 150 people that collectively control over 51% of those who vote through proxy and something happens to the network, those people can meet in a mumble session, they can discuss what to do, they can produce a new block that's signed by 51% of the voting stakeholders [and that block can] appoint new witnesses [and] then the network can continue. I would really, really like to see a robust set of proxy positions. Of people who decide to take the responsibility of vetting all the people in the technical positions. We should have as many of them as we want. Of course, this is maximally decentralized, everyone can vote with their own stake or vote in a pool [via proxy]. Like mining solo or in a pool with Bitcoin. I think that's what we want. We want more than 5 or 6 pools, we probably want 100 pools.

Fuzzy: These pools would have different dynamics because instead of mining it's voting. So these voting stakes can change quickly. Whereas mining pools don't.

Bytemaster: People say, "If all the mining pools get shutdown, someone else will just start one up." But the time it takes to start a new mining pool is much longer than it takes to point your vote at a new proxy. Mining pools and mining have costs associated with [them], the reason mining pools don't work and are ultimately insecure is, if you shut them all down, and it's not profitable to solo mine, you need a mining pool to be profitable. With DPOS and voting, it's just as profitable to vote solo as it is to vote through a proxy. There's no extra overhead or cost associated with solo voting. This means you can have 100 proxies and not have to worry about profitability concerns. But if you had 100 mining pools, each mining pool would have a very high variance and that would impact profitability.

Fuzzy: Deludo asks, "Does it make sense to pay proxies? Is there going to be [such] a functionality or do you foresee a need for it?"

Bytemaster: I don't think it makes sense to pay them, since they have financial interest in the system and they volunteered to do it. Generally speaking, it doesn't take a whole lot of time – they already have to vote anyway, if they want to vote their own stake; so, just allowing other people to follow them makes good sense.

Crypto: Thomas asks, "Is there a way to make your voting records public if you were to say, 'I would like the job of being a proxy, I'm a member of the community who pays attention'? Would there be a way for everyone to verify, every time that you voted, who you voted for?"

Bytemaster: It's on a blockchain, all votes are public and all stake is public.

Fuzzy: Unless it's the blinded stake, but then the voting doesn't matter, correct?

Bytemaster: Unless you're using confidential transactions, in which case you're not voting.

Crypto: Thanks.

Fuzzy: Collateral bid idea: from what I understand it's just the witnesses that put in the highest collateral [who get voted in by default].

Bytemaster: The idea that's on the table is: if you want to become a witness you post collateral, and anyone who doesn't vote otherwise votes, by default, for the witnesses with the highest collateral. The danger there is due to voter apathy: the highest collateral is going to win. This means you end up with a system that's more similar to how Peercoin or NXT operate, with the proactive voters being the backup plan and having to override all the defaults. It, more or less, means that the system will be ruled by the wealthy rather than ruled by the proactive consensus. I think it's a decent idea as a way of filtering people. And it's entirely possible to put money into a vesting account balance, which basically is your commitment to the network that you're not going to withdraw your funds for the next six months. If someone elects you, they know you're pre-committed. You get voted in based upon your commitment. That's a perfectly legitimate way of campaigning. The only reason for someone to do that is if the financial incentive for being a witness is high enough to justify locking up their funds in order to get the job. Which means they're probably going to do a calculation of, "Alright, how much is it going to cost me to run a node? How much time am I going to have to put in there? And how much capital will I have to tie up?" The end result being, if you require people to tie up capital, you're going to have to pay them more to justify the interest rate on that capital, which is factored into their pay. So you add a cost to being a witness, and it doesn't necessarily give you any additional security because the people voting for them should already be vetting them.

We have a lot of witnesses right now that are very technically competent and very honest but who don't have a lot of money. Most whales don't want to run a witness. The assumption that those with money want to do the dirty work of running a witness is a fallacy. A mistake made by a lot of the other proof-of-stake coins. That's the beauty of delegated proof of stake. You can have a wealthy person back you with their vote and then you can do the job. Getting someone to vote for you is putting something [forth] as collateral, the only difference is you don't have anything to lose other than the vote and your income stream and your reputation. I think people undervalue reputation and the importance of it. If you elect people that actually value their reputation and have a career and a public face – they won't be able to do future business if they harm the network and earn a bad rap. That reputation is on the line when they do this job and it's going to follow them around the rest of their life. That is worth far more than any collateral you could ask them to put up.

One last question from Tuck, "What's the difference between a bridge function and atomic cross-chain transactions?" A bridge means that there is a moment in time in which the bridge could rip you off. [With] atomic cross-chain trading there is no moment at which you can get ripped off. This is sort of getting back to the, well, "How secure do you need to be?" The probability of any particular exchange getting hacked or going down within a given minute is very, very small. But over the course of a year it's pretty high. The reason I think atomic cross-chain transactions are overdoing it is because it's looking at the risk-reward and making something very complicated and difficult to use to reduce that last fraction of a probability that the party that you're using for the bridge is going to turn corrupt and steal your money [during] that fraction of a second [while] you're trusting them.

With a bridge, you send them the money and they send you something else. There's no outstanding debt; it's a real quick transaction. It's sort of like the time between you handing the cashier your dollar and them handing you the drink. There's a moment in time where you don't have the dollar yet still don't have the drink, but are you worried about them stealing from you during that moment of time? No. But if you mailed someone cash and it took a day, the risks are higher. That's why I think bridges are a better value than atomic cross-chain transactions, because atomic cross-chain transactions have a very high cost to reduce a very small risk. 90% of the risk is mitigated simply by reducing your period of exposure to minutes rather than hours or days or months.
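
A rough way to see the exposure-window argument in numbers (purely illustrative; the failure rate below is an invented assumption, not a measured figure): with some small constant chance of the counterparty failing per hour, the chance of being caught in a failure scales with how long you are exposed.

Code:
# Illustrative only: assume some constant hazard rate for the bridge/exchange
# failing (hacked, offline, exit scam). P(failure during exposure) = 1 - exp(-h*t).
import math

failures_per_year = 0.5            # assumed: a 1-in-2 chance of an incident per year
h = failures_per_year / (365 * 24) # per-hour hazard rate

def p_failure(hours_exposed):
    return 1 - math.exp(-h * hours_exposed)

for label, hours in [("2 minutes", 2 / 60), ("1 day", 24), ("1 month", 30 * 24)]:
    print(f"{label:>9}: {p_failure(hours):.6%}")
# 2 minutes: ~0.0002%   1 day: ~0.14%   1 month: ~4.0%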

Fuzzy: Deludo asks, "According to Toast, virtualized smart contracts can be almost as fast as natively implemented ones. What of transaction throughput, settlement speed or cost are affected by the virtualized versus native way of providing smart contract?"

Bytemaster: I can boil it down to one thing. Go to any language shootout and ask whether just-in-time compiled languages are faster or slower than native languages, like C++. In the vast majority of cases native will be faster, but there are some corner cases in which the virtual, just-in-time compiled code can be faster. The bottom line is, from a technology perspective, you can go with a virtualized approach if your virtual machine is designed with just-in-time compilation in mind.

The challenge with all of these systems is to make it deterministic and to make sure that you can meter the costs. It's the metering of the cost that slows down the virtualization approach. Even if you do just-in-time compiling, you still have to count the instructions, you still have to count your time. It might be possible to do some really advanced techniques with preemptive interruption, so you just let it run for a millisecond and then interrupt it. If it's not done, you can discard it; you don't care about counting operations. There are lots of advanced techniques that can be put into the virtualized stuff. But the money and time and complexity involved in building those systems and then ensuring that they are deterministic in their behavior and bug-free is a very high barrier to entry. What that means is today's [metered] virtualized systems have very slow performance because they need to be very methodical and do a lot of extra operations. They're not just doing just-in-time compilation.

With a system like we have in BitShares where all changes are basically approved, it's not just that we have it compiled, it's that we have a process for reviewing every single piece of code. We can analyze the algorithmic complexity in advance and we can estimate the costs of it through benchmarking and set the fees accordingly. If you go to a completely generic system where anyone can submit [and run] code, you have to automate the process of analyzing the algorithmic complexity, of setting the fees and of making sure that nothing bad happens as a result. That's where most of the complexity is. That's where most of the risk is. It's very much like the Apple app store: they look at all the apps and require a certain level of quality before they get on the chain, versus allowing anyone to put an infinite loop on the chain or something on the chain that has bugs in it. Sure, you might pay for it with gas, but you have to pay the costs of tracking gas consumption and doing the metering. My short answer is, in theory, just-in-time compiled can be just as fast as native, but there's extra overhead associated with metering and securing these systems that slows them down.
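
To make the metering point concrete, here's a deliberately tiny, hypothetical stack-machine interpreter (not any real VM) that charges one unit of "gas" per instruction and aborts when the budget runs out; real metered VMs do far more bookkeeping, which is exactly the overhead being described.

Code:
# Minimal illustration of metering: every instruction costs one unit of gas;
# execution halts with an error once the budget is exhausted.
class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    stack, gas = [], gas_limit
    for op, arg in program:
        if gas <= 0:
            raise OutOfGas("budget exhausted; discard this execution")
        gas -= 1                                   # the metering step itself
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)                    # integer ops only: deterministic
        else:
            raise ValueError(f"unknown op {op}")
    return stack, gas

print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None)], gas_limit=10))  # ([5], 7)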

Someone: [Summarization of the above: Metering is keeping track of resource use so that people cannot use more than they've paid for. Determinism requires that any given input will always result in the same output.] What would be the outcome of allowing indeterminism?

Bytemaster: It'd be like a Bitcoin hardfork. It's an unplanned split in the network based upon which nodes went which way. If you have nondeterministic code in a contract, then the nodes that go one way will be on one fork and the nodes that went the other way will be on a different one. If you start combining lots and lots of things you might even shatter it such that there are 100 forks. That's the catastrophic failure that results from not having a deterministic means of validating smart contracts.

Someone: You're saying that creating a system that prevents this indeterminism is difficult?

Bytemaster: Yes. The reason we don't use floating point in blockchains is because it's not deterministic behavior, even with just one machine involved. When you create a virtual machine you're defining everything in terms of integer operations and if statements, which we know will be deterministically evaluated. The more complexity you put into the system, the more opportunities there are for something as stupid as an uninitialized variable - one that is zero 99.99% of the time, but sometimes not - to cause a break in consensus. My point here is that complexity creates more opportunities for nondeterminism. That is the challenge with it. It's not impossible to create a deterministic, just-in-time compiled, highly-performant, metered language; it's just very difficult, time-consuming, and you really don't know if you've got it right.
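
A small illustration of the integer-versus-float point (a sketch, not BitShares code): floating-point results can depend on evaluation order and platform details, while integer fixed-point amounts always agree.

Code:
# Why consensus code sticks to integers: float arithmetic isn't associative,
# so the same logical sum can differ depending on evaluation order, while
# fixed-point integer amounts always match.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))      # False on typical IEEE-754 platforms

# Same values as fixed-point integers with 5 implied decimal places
# (precision chosen here purely for illustration):
x, y, z = 10_000, 20_000, 30_000       # 0.10000, 0.20000, 0.30000
print((x + y) + z == x + (y + z))      # True, regardless of order or platform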

7
Hey all, I haven't been able to make the past few hangouts and couldn't find any transcript of the September 25th one. So, here you go; lightly edited for clarity. Non-Bytemaster content is slightly more heavily edited. []'s indicate editor notes.



Fuzzy: Intro

Bytemaster: It's been another week; another significant step forward in the life of BitShares 2.0 as we march towards releasing on October 13th. For those of you who were here last week we just started a new, and hopefully final, testnet. I'd like to report on how that testnet has gone this week.

So far we have 33 witnesses who have been voted in and we have a total of 100% witness participation, which is actually better than the current BitShares network, which has 96% participation. I'd like to thank all of you testers out there who have helped set up nodes. These are 33 unique servers that have managed to stay in sync despite all the spamming, and even attempts at double-signing blocks were made this week – just trying to mess the network up. We survived the double block signing without a single missed block. With all the fixes that we put in last week we were able to boost the transaction throughput. The testers were able to achieve several blocks with a couple hundred transactions in them. These are three-second blocks. We're doing really well as far as throughput goes, far more than a real network will ever need to process in the short term. I am very happy with the results of this test network and am feeling very good about upgrading on October 13th. If any of you had doubts due to bugs or the problems we've had with the networking code in the past month, those issues appear to have been resolved. We have a relatively stable blockchain, at least as stable as BitShares [0.9.3]. My full witness nodes have basically been running on their own without issue for some days now. The general takeaway from all of this is we're on track for October 13th, the network is extremely stable and that leaves the long pole in the tent: the user interface.

I'd like to give some updates on the user interface. We have a full node downloadable GUI that some of you were able to test. It hasn't been updated with all the changes since earlier in the week but we have the build process and infrastructure in place [such] that we will be putting out another release of a full node graphical wallet that seems to work pretty well. We also are planning a light wallet that you can download that's similar to the full node but instead of connecting to a local witness, it connects to a remote witness. That's sort of an Electrum model. The [light wallet] uses the exact same interface as the website and the full node. One interface, three different ways that you can use it with different levels of security consideration.

Fuzzy: The question I would ask, if you don't mind, for the downloadable light-client: what's the downside to that in terms of security? This seems like a common concern.

Bytemaster: From a security point of view, you are not fetching new, mutable JavaScript from a remote server. That's the biggest improvement to security of the light-node. You're still trusting the server to accurately report the state of the blockchain to you. The worst it could do is lie to you, but there's actually no incentive for them to lie. You can use the exact same server as the hosted wallet or whatnot. Even if they did lie to you, it's entirely possible to construct transactions that go to who you want them to go to and no one else. It's really just a matter of whether or not you trust a remote node. You can pick and choose different remote nodes for use with your wallet and that will give you the middle ground. If you don't trust anyone and want to run your own node, that's the most trustworthy [and secure]. Allowing someone else to run the node while you just run your own GUI, that's the middle ground. Using a hosted wallet is the least secure option [requiring the most trust of third parties], but it's not so bad assuming the server doesn't get hacked and its JavaScript changed to try to steal keys. If the server were hacked you're only vulnerable if you visit the server and log in while it's compromised. Only active users during the time of the attack are [vulnerable]. {Sound cuts out here.} Of the three, I'd say your biggest risk with using a hosted web wallet is that if you clear your browser cache your wallet gets deleted. If you don't have a backup of your wallet and you clear your cache you're SOL [shit out of luck]. That's one of the big motivators for having the light version and the full desktop version: to make sure that you can clear your browser cache without risking your wallet. It seems there are a lot of people who recommend clearing the browser cache, suggesting everything will be fine. That isn't safe if you have $100,000 worth of BitShares floating around in your wallet. It'd be a very sad day.

Fuzzy: When you first set up your wallet and it asks if you want a BitShares brainkey, say yes, write it down, don't lose it, make a copy of it.

Bytemaster: Technically yes, although brainkeys aren't able to recover all the keys your wallet might have in it. If you import keys to a wallet from BitShares 0.9.3 or you change your brainkey – you might have more than one brainkey over time – we've concluded that a brainkey is not a general-purpose approach. It only works for the basic case where you have a new account, you never change your brainkey and you never import keys. We didn't want to design a user interface around that assumption. It's not a safe assumption. We want it to be safe for all users. We will have a brainkey and you'll be able to write it down and [using it] you'll be able to recover keys. But that's not going to be part of the regular work-flow. It's going to be more like a condensed, future-proof backup. All new keys that you generate in your wallet will be derived from the brainkey, which means your existing backups are good, and if you have your brainkey then you can recover those keys. The process that we've set up will require you to save a file and keep that file secure and backed up. We're working on coming up with more automated backup solutions, but for now it'll be your responsibility to back up your wallet and all your keys to a file on disk so it can be imported later. Do not rely on your browser cache.
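
As an illustration of the "keys derived from the brainkey" idea (a hedged sketch; the exact Graphene derivation scheme may differ), a deterministic approach hashes the brainkey together with an increasing sequence number so the same brainkey always regenerates the same series of private keys.

Code:
# Illustrative deterministic key derivation from a brainkey (sketch only; the
# actual wallet's scheme may differ): hashing the brainkey plus a sequence
# number yields a reproducible series of 256-bit private-key seeds.
import hashlib

def derive_private_key_seed(brainkey: str, sequence: int) -> bytes:
    material = f"{brainkey} {sequence}".encode("utf-8")
    return hashlib.sha256(hashlib.sha512(material).digest()).digest()

brainkey = "example words written down on paper"
for i in range(3):
    print(i, derive_private_key_seed(brainkey, i).hex())
# Re-running with the same brainkey reproduces the same seeds, which is what
# makes a written-down brainkey a future-proof backup for derived keys.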

Fuzzy: Joey asks: "Would there be a code or paper wallet functionality that could help with that problem that you can foresee?"

Bytemaster: Older paper wallet functionality is just a matter of generating a public key and private key offline and transferring the public key to a wallet. If you configure the permissions on an account to a public key where the private key is kept cold, never has been on a [ed. networked] computer, then you have cold storage. We don't have any tools in place to generate those keys easily for you. You'd have to use one of the command-line tools offline.

Fuzzy: But they are available for somebody who wants to do that?

Bytemaster: It's easy to create the tools [ed. probably just simple wrappers around already existing API calls,] we just haven't prioritized it.

Bytemaster: Othername asks, "Why doesn't remembering your password for the web wallet suffice in case the cache is deleted?" The reason is the password never goes to the server and the server doesn't store your wallet. Your wallet is also kept locally [in the browser cache] and you're never authenticated to the server.

Future versions of the web wallet might have server-side storage and backup of your wallet file for you. In which case your wallet file would be stored on the server encrypted, meaning the server can't read your keys and they never get your password. They can give you your [wallet] file back to you thus you can restore from the server. This would be a good way to move [wallets] between devices automatically, though it would require server-side infrastructure and we haven't been focused on server-side infrastructure at this point in time.
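
As an illustration of what "stored on the server encrypted, so the server can't read your keys" could look like on the client side (a sketch with assumed parameters, not the actual wallet code), the wallet file is encrypted with a key derived locally from the user's password before anything is uploaded.

Code:
# Illustrative client-side encryption of a wallet backup (not the actual wallet
# implementation). Uses the third-party `cryptography` package; only the
# ciphertext and salt would ever be sent to the server.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_wallet(wallet_bytes: bytes, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)                    # illustrative work factor
    key = base64.urlsafe_b64encode(kdf.derive(password.encode()))
    ciphertext = Fernet(key).encrypt(wallet_bytes)          # authenticated encryption
    return salt, ciphertext                                 # password never leaves the client

salt, blob = encrypt_wallet(b'{"keys": ["..."]}', "correct horse battery staple")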

Othername[?]: Isn't such a web wallet, where if you deleted your cache all your funds are gone, a potential source of bad PR? Might a solution be to just release a local light-client [and full-client]?

Bytemaster: Yes, there's risk there associated with that particular issue. It's not really a problem once you've done a backup. Then you don't have to worry about  your cache being cleared anymore. The benefits of the hosted wallet are that you get free, automatic upgrades as we improve things. Whereas you have to download new versions of the other wallets each time. Adding server-side storage to make the hosted-wallet as reliable as possible, to allow it to function even if you clear your cache, is a desired feature for the future.

Othername[?]: DataSecurityNode just mentioned, "What about an initial forced backup?"

Bytemaster: Our plan is to have a notification on the user-interface that indicates if a backup is required and how long it's been since you last backed-up. It's not there now but we've been engineering the data tracking into the wallet. So we can display a big red warning with a button to backup now.  On every page. Until you do it.

Fuzzy: Every user should be backing things up and we should have an easy process for them to do it.

Someone: We're going to want to educate people about this. Eventually it'll be taught in schools, "You've got to be responsible and that means backing-up your cryptocurrency files."

Bytemaster: I think in the future, when cryptocurrency is successful, it's all going to be managed automatically behind the scenes. You'll be able to recover your password and your cryptocurrency funds with similar difficulty to resetting your password on an existing banking system. Regular people out there are not going to magically change and learn how to do all this stuff. We're still at the early-adopter phase. During the early-adopter phase, yes, we can expect people to learn that stuff. Long-term all of the stuff's going to have to be managed because the risk of a hard-disk failure, network failure, forgetting your password is too great.

It's a greater security risk with cryptocurrency than exists in the current banking system. The probability of you losing your money in the bank is far less than the probability of losing your money with cryptocurrency even though, technically, someone can steal your money or freeze your accounts, but guess what? You forget your password, you lose your wallet, your computer dies: all those things can cause you to lose your funds. The only difference with cryptocurrency is that it's somewhat in your control whereas with the banking system it's not in your control. For the average person out there, their ability to control and be responsible actually means that a cryptocurrency is less secure for them, because they're not able to be responsible. We need to create products and services that cause the average person – who knows themselves well enough to know that they're going to forget their password, they're going to do something stupid with their computer, they're going to misplace their backup – [to be confident that their funds will be safe.]  Very smart people make those types of mistakes and need those types of services. People don't want to think about their money. They just want it to be there and they want to use it and they want to know that they can always get to it. We need to migrate to systems that are that easy to use, that easy to recover, that automatic. Where you're never at risk of getting locked out of your account. I think most people would choose to have the risk of their funds being stolen over the risk of being locked out for doing something stupid.

Fuzzy: I've noticed there are a lot of users who are starting to come in who have been BitShares users for some time but have had trouble with the current wallet. Some have asked, "How do I protect my backup? Is there a best-practice?"

Bytemaster: I've seen lots of people ask those types of questions regarding wallet backups and the transition [to 2.0]. There was a new release [on the] 0.9.3 [branch] this past week. I apologize for the botched [initial] release; I uploaded the wrong file and some people got an old, old 0.4 version. So 0.9.3c is out. It has an updated backup function that exports to a format that's compatible with Graphene and BitShares 2.0. You don't have to do anything with your existing wallet until you want to migrate. When you migrate you need to download and install 0.9.3[c], load up your wallet, export it to a file on disk and then load that file into Graphene/BitShares 2.0. There will be a button and instructions for how to load that file. Once you do that, all your funds will show up in your BitShares 2.0 account.

Someone: In 0.9.3 is the exporting just through the GUI export function or does it require the command line?

Bytemaster: I updated the menu in the menubar to use the new version.

I'd like to switch gears and start talking about some of the raging debates. It started with last Friday's mumble session, when we talked about how much pay a witness should receive. We then went into a broader discussion about how many witnesses we should have. That broke into several different questions: how many are necessary from a technical perspective? How many from a marketing perspective? Those were some very lively discussions. I want to thank everyone who was on the forum participating in them. I'd like to address some related things today, simply because it's good to think about these things. It's good to double-check that we're not losing perspective on what our risks are, what we're trying to defend, and why we're doing what we're doing.

I set out to build free-market solutions for securing life, liberty, and property because I want freedom. BitShares is a tool that allows us to get freedom. The question is: does it serve these needs? Is it robust against the types of attacks it will face? There are many different types of defense mechanisms out there, and each is good against a different type of attack and has different problems. In nature, [for example,] you can have really thick armour, be really fast and agile, or camouflage yourself: three different strategies for securing oneself against an adversary. The adversary that blockchains are typically concerned about is [at worst] a government adversary: big, strong, with almost unlimited funding, and more or less in control of the infrastructure upon which all of society is built. That is a pretty tough adversary to design a system to be robust against.

When we're talking about [the number of] witnesses, it's kind of like asking how much difficulty one needs in a proof-of-work algorithm before it's secure. Do we need to increase the difficulty 10x before Bitcoin is finally secure? How much does it cost to attack the network? In the old days, with just proof of work, people thought, "Well, eventually, proof-of-work will get so difficult that not even the government will be able to do more work than everyone else." I hope everyone here has seen that that's very short-sighted. If you build a reinforced-steel wall and put it right next to an unlocked door, people aren't going to bother going through the steel; they'll use the door.

That gets us to the point of identifying the weakest link. You don't need to build a wall that is significantly stronger than your door or your window, because the adversary is going to attack you at your weakest point. Any money you spend making the walls stronger [without also securing the weaker links] buys you very little security. If we're going to go up against an adversary that likes to control everything in our lives, [especially] our financial lives, and we want to maintain our financial freedom, then we need security that's based on something really difficult for the government to attack. That means, most likely, the security is not coming from technology. It's entirely too easy to filter packets, and entirely too easy to target hosted-wallet providers and public seed nodes. All of those things provide redundancy against technical failure, nuclear war, and everything else the internet was designed to be redundant against, but the security doesn't come from how many block producers you have, how many people actually sign the blocks. That doesn't give you much actual security at all. You could have one person do all the block signing and still have a blockchain that is resistant to double-spend attacks, immutable, and unable to rewrite its history. The reason you have more than one person signing blocks is to avoid censorship and to have a little redundancy in case that person gets taken down. Your funds are secure so long as there is a public record and the copy is widely distributed to as many people as possible. It doesn't matter if, theoretically, the witnesses could create an alternative set of blocks: everyone out there already knows what they have seen, and the weak subjectivity that Vitalik [of Ethereum – https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/] talked about is a major part of the security of these systems.

If you want a system to actually be secure, the only thing that protects you is the adversary lacking the political will to attack you. You have to make attacking your system politically difficult. If you can get political support behind it, then you're safe, even right out in public. And if widespread public opinion turns against you, it doesn't matter if you're a country with nuclear weapons; you'll be invaded and taken down. It doesn't matter who you are or what kind of defenses you have. If public opinion turns against you, the powers that be will take you down.

A fundamental right that we all have is free speech. To the extent that we can frame all of this as free speech and anti-censorship, the message being communicated is just "transfer funds to someone": you're allowed to say that, everyone is allowed to hear it, and everyone is allowed to change their actions based on what they heard. Doing things that keep the system pure and honest and convey a sense of goodness, so that the average person says, "How dare you attack that little kid? That innocent person?", is what secures a system against being taken down by the government. The general public needs to see the system, see that it's good and not harming anyone, and see that it's something they want to use. And [we need to] make attacking any individuals within the system look like a bully picking on a little kid. That is ultimately the only defense any technology has against the mob. Governments and the media are tuned to manipulate the mob, and it is the mob, through its passive consent, that allows governments to get away with genocide. If we want to design systems, they need to be difficult to turn the mob against. The best way to do that is to make the mob depend on and love your system. It's difficult for the government to shut down Twitter or Facebook or the Internet, because everyone has come to love and depend on [such services], and cutting [users] off of those technologies would cause riots in the streets. Therefore the political cost of attacking those [services] is too high. That's how it has to be for all decentralized systems.

There is a threshold where, if you make it easy for them to shut you down – you know, they raid your office and now you're offline – they're probably just going to do it. So you need to make it just difficult enough that it's kind of like pirated music. They can try, but a new site will pop up. Once they realize that it's going to be whack-a-mole, they stop trying. Or once they realize that you just moved to another country and host all your servers in a friendly jurisdiction, they give up. From a technological perspective, we need to be decentralized enough that it is statistically unlikely for over half the nodes to fail at the same time. We need a system that is robust enough that, if that statistically unlikely event does happen, the downtime is as short as possible.
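To put a rough number on "statistically unlikely for over half the nodes to fail at the same time", here is a minimal sketch, assuming purely for illustration that witness outages are independent and equally likely; the witness count and outage rate below are made-up numbers, not BitShares parameters.

from math import comb

def p_majority_down(n_witnesses, p_down):
    """Probability that more than half of n_witnesses are offline at once,
    assuming independent, equal outage probabilities (illustration only)."""
    majority = n_witnesses // 2 + 1
    return sum(comb(n_witnesses, k) * p_down**k * (1 - p_down)**(n_witnesses - k)
               for k in range(majority, n_witnesses + 1))

# e.g. 25 hypothetical witnesses, each independently offline 1% of the time:
print(p_majority_down(25, 0.01))  # vanishingly small

Of course, independence is exactly the assumption a coordinated raid violates, which is the point of the next scenario.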

Let's say the government arranged a simultaneous raid of all witnesses and shut down all block production. Your funds aren't lost. Everyone still knows what the public record was. Interested parties and stakeholders already know people in the community and already have a mechanism to broadcast transactions and vote. All it takes is for someone in the community to stand up and say, "Alright, here's the new chain. Picking up right where we left off. Go." Total downtime is less than a typical bank holiday. If we try to design a system that is free from any and all downtime, that's probably over-designing it. We just need the probability of downtime to be in the 0.01% [range]. Our ability to recover is robust. People are used to the banks being closed every weekend. This desire to over-engineer to the point of perfection, where that last 0.01% of security consumes the vast majority of the cost, is what we need to be careful [about] and watch for.

8
Look, do we believe in BitShares or not? Do we have patience or not? Please, can we allow some organic growth to arrive from all the hard work that is going on...

Indeed.  What we need is organic growth of actual users, trading bitassets on the decentralized exchange.
And not a pyramid scheme of MLM marketers trying to convince people to spend $20 to sign up, so that they can get a cut of it.


A one-level referral plan sounds great to me, with no large signup fee: simply give the referrer a bonus of some portion of the fees generated (from actual trading use of BitShares!) by the users they refer.  Go beyond that and we look like a bunch of people who are really desperate to make a buck off people in whatever way we can.
+5%

9
http://bitshares.org/get-started directs users to download 0.6.1
http://bitshares.org/resources/downloads directs them to 0.6.2

10
Stakeholder Proposals / Re: Developer delegate: dev.bitsharesblocks
« on: March 06, 2015, 11:11:27 pm »
It would be nice if the home page price chart had a button to load the rest of the data.

Speaking of charts, the "BTS TRANSACTION VOLUME", "BTS NUMBER OF TRANSACTIONS" and "NEW ACCOUNTS" charts would be much more useful with at least a 7-day moving average line.
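(For what it's worth, a trailing 7-day moving average is cheap to compute; a minimal sketch, with made-up daily counts just to show the shape of the output:)

def moving_average(values, window=7):
    """Trailing moving average; None until a full window is available."""
    return [None if i + 1 < window
            else sum(values[i + 1 - window:i + 1]) / window
            for i in range(len(values))]

# Hypothetical daily "NEW ACCOUNTS" counts, for illustration only.
daily_counts = [120, 95, 80, 200, 150, 90, 110, 300, 250]
print(moving_average(daily_counts))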

The "TOTAL NUMBER OF ACCOUNTS" chart could use a log scale to make variation in "New unique accounts" visible.  (Also I couldn't find anything on how unique accounts are distinguished from the rest.)

11
General Discussion / Delegates on IRC
« on: January 23, 2015, 10:42:13 pm »
I think we should have more delegates -- especially 100% delegates, who are likely to be able to answer specific questions in depth, as they are experts on various aspects of the BitShares ecosystem -- in #bitshares on irc.freenode.net.  I think the only 100% delegate currently there is indolering (assuming he's been voted in by now).

I realize that IRC doesn't seem to be a popular communication medium for the devs, but simply idling in the channel and checking in even just once a day to answer questions or clear up misunderstandings could be quite valuable at a minimal expense of time.

Thoughts?

12
General Discussion / Re: NuBits is a Ponzi [BLOG POST]
« on: January 13, 2015, 11:28:00 pm »
From reddit, "Because bytemaster ignored all the recent stuff nubits has been working on. It sucks because it makes it clear he's attacking them without being aware of what's going on."

This.  It makes the critique come off as disingenuous.  Same with the NXT post that didn't mention that some of the addresses were special (leased forging, et cetera).

Both posts would've been fine if they had actually been up to date with what those communities are doing to respond to the problems mentioned (hint: their responses are probably still inadequate), but attacking these strawman versions of our competitors just looks really bad and desperate.

13
General Discussion / Re: Invite only community marketing forum.
« on: November 21, 2014, 04:41:36 am »
Hm, I suspect my only use here would be to look at things from outside the crazy hype bubble that people fall into around here, but it's worth a shot...

14
There is no way to reliably determine this with just the information on the blockchain.

The best you can do is ask (and trust) the delegates themselves.

15
General Discussion / Re: October Newsletter - Halloween Edition
« on: November 11, 2014, 07:06:19 am »
I'm still trying to understand the economics of this concept of self-funded growth. In theory, if there were no dilution, and everybody agreed to pay existing BTS in proportion to their stake for the same expense, wouldn't the economic outcome be exactly the same as for dilution, except that the market cap of all shares would be spread over a different supply of shares?

If so, is the real benefit of dilution not an economic one, but a political/administrative one because it provides a more graceful mechanism for an expense to be shared without any enforcement of transactions?

I ask this not out of any negativity toward the power to dilute when appropriate (and I'm sure it has its place), but just wondering how dilution facilitates a higher level of growth.
Yes.

Yes.

Because "more graceful" is an understatement (without forced dilution and with the need for dev funds, the game reduces to a prisoner's dilemma.)
