Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - luckybit

Pages: 1 ... 28 29 30 31 32 33 34 [35] 36 37 38 39 40 41 42 ... 195
511
General Discussion / Can we get Bitshares 2.0 on Slashdot?
« on: September 07, 2015, 06:20:48 am »
Bitshares 2.0 is nearing release, and Slashdot is a kingmaker in technology.

Bitcoin has been on Slashdot many times, and a lot of people discovered Bitcoin through Slashdot.

The referral program makes this interesting because Slashdot mods and the site itself could earn money from Bitshares. I suggest that when Bitshares 2.0 is released, anyone with a prominent Slashdot account find a way to get Bitshares 2.0 featured as a story on Slashdot.



512
Why only two-factor and not multi-factor? If the security of Bitshares is weaker than the security on exchanges, then people will use exchanges.

On the other hand, I also know there is a tight deadline for 2.0, so I suppose it's better to release with two-factor in October than to delay over it. I just think multi-factor should be a priority for future releases.

https://blog.perfectcloud.io/two-factor-vs-multi-factor-authentication-a-stark-difference-2/
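For readers curious what a second factor looks like in code, here is a minimal sketch of RFC 4226 HOTP (the counter-based scheme that TOTP builds on), using only the Python standard library. It illustrates the mechanism in general; it is not anything BitShares ships.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, then
    dynamic truncation to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret
code = hotp(b"12345678901234567890", 0)
```

TOTP is the same function with the counter derived from the current time, and "multi-factor" simply means requiring more than one independent proof (password plus code plus hardware key, for example).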

513
Technical Support / Re: Can Graphene be accelerated by GPU?
« on: September 06, 2015, 12:36:21 pm »
You guys have not read: https://bitshares.org/technology/industrial-performance-and-scalability/
Quote
To achieve this industry-leading performance, BitShares has borrowed lessons learned from the LMAX Exchange, which is able to process 6 million transactions per second. Among these lessons are the following key points:

The Disruptor basically verifies the signatures (which can be parallelized easily, including on GPUs or clusters), and the (single-threaded) DEX engine matches the orders and puts them into a block.
This is ALREADY implemented in BitShares, from what I know.

Of course I read it. I was trying to find a way to improve on it.
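The parallel-signature-checking point above can be sketched like this. A stand-in HMAC "signature" replaces real ECDSA (verifying which is far more expensive), but the fan-out/fan-in shape is the same: each check is independent, so it can be spread over threads, processes, or a GPU kernel while ordering stays single-threaded. This is an illustration, not Graphene code.

```python
import hashlib
import hmac
from concurrent.futures import ThreadPoolExecutor

KEY = b"demo-key"  # stand-in: real chains verify ECDSA public-key signatures

def sign(tx: bytes) -> bytes:
    return hmac.new(KEY, tx, hashlib.sha256).digest()

def verify(item) -> bool:
    tx, sig = item
    return hmac.compare_digest(sign(tx), sig)

txs = [b"tx-%d" % i for i in range(1000)]
signed = [(tx, sign(tx)) for tx in txs]
signed[7] = (signed[7][0], b"\x00" * 32)  # corrupt one signature

# Verification is embarrassingly parallel: fan the independent checks
# out to a worker pool, collect the results in order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(verify, signed))

bad = [i for i, ok in enumerate(results) if not ok]
```

Only the matching of already-verified orders needs to stay on a single thread.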

514
General Discussion / Re: Bitshares price discussion
« on: September 05, 2015, 11:58:07 pm »
Where do you guys see the price in one month's time?
Let's get some predictions going for fun

I'd say about $0.04-$0.05

Sure, but I doubt it will hit that in one month. It takes a while for volume to build to that level.

If Bitshares could get the volume it could get to over $1, but the issue is that there is a limited number of people who have money to buy crypto tokens, and until the crypto community grows you're fighting over a small pie.

515
General Discussion / Re: Bitshares price discussion
« on: September 05, 2015, 11:56:32 pm »
Where do you guys see the price in one month's time?
Let's get some predictions going for fun

$0.01, which would be around a penny.

516
Technical Support / Re: Can Graphene be accelerated by GPU?
« on: September 05, 2015, 11:55:14 pm »
I believe this actually is single-threaded, not parallel. This is the YouTube blurb (my emphasis):

"There are many patterns and frameworks for concurrency and parallelism that are popular today, but is the throughput we need available in a single-threaded model if we just write code optimized to take advantage of how the hardware running our applications work? LMAX, a retail trading firm in the UK, has open sourced a concurrency pattern called the Disruptor, which enables the creation of graphs of dependent components to share data without locks or queues. This presentation will detail how LMAX was able to maximize the performance of their application, and then discuss things learned while porting the library to Scala"

What about Graphene? Is Graphene seeking to be a clone of LMAX or to go beyond LMAX?
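To make the single-threaded ring-buffer idea concrete, here is a toy single-producer/single-consumer ring in the Disruptor spirit, in plain Python. A real Disruptor relies on memory barriers, cache-line padding, and pre-allocated slots; Python's GIL merely stands in for the barriers here, so treat this as a shape sketch, not a performance claim.

```python
import threading

class RingBuffer:
    """Toy SPSC ring: a fixed, power-of-two slot array plus two
    monotonically increasing sequence cursors. No locks on the hot
    path; each cursor is written by exactly one thread."""
    def __init__(self, size: int = 8):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.slots = [None] * size
        self.mask = size - 1
        self.head = 0  # next sequence the producer will write
        self.tail = 0  # next sequence the consumer will read

    def publish(self, event):
        while self.head - self.tail == len(self.slots):
            pass  # busy-spin: buffer full, wait for the consumer
        self.slots[self.head & self.mask] = event
        self.head += 1

    def consume(self):
        while self.tail == self.head:
            pass  # busy-spin: buffer empty, wait for the producer
        event = self.slots[self.tail & self.mask]
        self.tail += 1
        return event

ring = RingBuffer(8)
out = []
consumer = threading.Thread(
    target=lambda: out.extend(ring.consume() for _ in range(100)))
consumer.start()
for i in range(100):
    ring.publish(i)
consumer.join()
```

The key property is that events come out in exactly the order they went in, which is what lets the business logic behind the ring stay single-threaded and deterministic.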


517
Technical Support / Re: Can Graphene be accelerated by GPU?
« on: September 05, 2015, 11:10:34 pm »
Remember that Graphene employs a single-threaded design model. GPUs achieve their results through massive parallelism.

That doesn't mean there can't be uses for them to optimize certain parts of Graphene that don't require a single thread.

518
Technical Support / Re: Can Graphene be accelerated by GPU?
« on: September 05, 2015, 11:05:31 pm »
Quote
Andy Phillips: On the server hardware side we've used HP in the past, but more recently we have re-evaluated the field and have gone with Dell. This was for a variety of reasons, but mainly related to selecting the right CPU. (For us, Sandy Bridge processors from Intel offered a number of very important benefits.)

When it comes down to it, servers from HP or Dell or Sun will all be very similar. The only differences that occasionally arise are if you go to someone like IBM or Cisco UCS who will have some slightly different silicon in the box that allows them to do some unusual things.
Are you CPU based for everything you do or are you using, or considering using, GPUs or FPGAs?
Mike Barker: It's an interesting question. GPUs don't really fall into the matching engine space, they would be more appropriate for the heavy duty floating point calculations needed for more complex risk modelling or algorithmic models. Matching is all about fixed point arithmetic and our risk model isn't really complicated enough to require GPUs. We can do it plenty fast enough using CPUs.

However, FPGAs are a slightly different case, because you can try to do pretty much anything with them. They are not something we've looked at yet, mainly because the development turnaround cycle can be quite slow for them and we push very hard for fast turnaround. However some of our vendors are looking at FPGAs for things like FIX parsing. The sort of activity that is really heavily commoditised is probably where we would look at them, but at the moment we have no plans.

http://www.automatedtrader.net/articles/exchange-views/137319/lmax-exchange-agile-challenge-to-the-status-quo
What about FPGAs?
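The "matching is all about fixed point arithmetic" remark in the quote can be illustrated with a toy matcher that keeps prices as integer ticks, so there is no floating-point rounding anywhere in the book. This is an illustration only; no real engine is this simple.

```python
TICK = 100  # 1 currency unit = 100 ticks, i.e. two decimal places

def to_ticks(price: str) -> int:
    """Parse a decimal price string into integer ticks."""
    units, _, frac = price.partition(".")
    return int(units) * TICK + int(frac.ljust(2, "0")[:2])

bids = []  # resting buy orders: (price_ticks, size), best first

def match_sell(price_ticks: int, size: int) -> int:
    """Fill a sell against resting bids at or above its price.
    Returns the filled size; all arithmetic is on integers."""
    filled = 0
    for i, (bid_price, bid_size) in enumerate(bids):
        if bid_price >= price_ticks and size > filled:
            take = min(bid_size, size - filled)
            filled += take
            bids[i] = (bid_price, bid_size - take)
    bids[:] = [b for b in bids if b[1] > 0]  # drop exhausted orders
    return filled

bids.extend([(to_ticks("10.05"), 3), (to_ticks("10.01"), 5)])
filled = match_sell(to_ticks("10.02"), 4)
```

Because every quantity is an integer, the matching loop is exactly the kind of branch-and-compare workload that fast CPU cores (or, in principle, FPGAs) handle well, and that GPUs gain little from.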


519
Quote
ZeroMQ
· ZeroMQ is a messaging library: 'messaging middleware', 'TCP on steroids', a 'new layer on the networking stack'. It is not a complete messaging system, but a simple messaging library to be used programmatically. It gives the flexibility and performance of a low-level socket interface plus the ease of implementation of a high-level one. It is designed for simplicity.

· Performance: ZeroMQ is orders of magnitude faster than most AMQP messaging systems, as it doesn't have the overhead. It leverages efficient transports such as reliable multicast and makes use of intelligent message batching, minimizing not only protocol overhead but also system calls. You can choose the message encoding format, such as BSON or Protobuf.

· ZeroMQ sockets can connect to multiple endpoints and automatically load-balance messages over them. It is brokerless and thus has no single point of failure.

Bitshares 2.0 might be able to benefit from this.

http://zeromq.org/
https://www.youtube.com/watch?v=f6cNTSJp8Dw
https://www.youtube.com/watch?v=H1rNtRqq1qY
https://www.youtube.com/watch?v=uzQjIuD-ygg
https://www.youtube.com/watch?v=luQkcKhpd0c
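The "intelligent message batching" point above can be sketched as follows. Here `send_batch` is a hypothetical transport hook standing in for the single system call ZeroMQ would make per batch of frames; the sketch just counts how many flushes a stream of small messages costs.

```python
flush_calls = 0  # counts "system-call-like" flushes to the transport

def send_batch(batch):
    """Hypothetical transport hook: one flush per batch, not per message."""
    global flush_calls
    flush_calls += 1

def publish_all(messages, batch_size: int = 64):
    """Amortise per-send overhead by flushing many small messages at once."""
    batch = []
    for m in messages:
        batch.append(m)
        if len(batch) == batch_size:
            send_batch(batch)
            batch = []
    if batch:
        send_batch(batch)  # flush the final partial batch

publish_all([b"quote"] * 1000, batch_size=64)
```

1000 messages at a batch size of 64 cost 16 flushes instead of 1000, which is the essence of why batching cuts protocol overhead and system calls.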

520
General Discussion / ZeroMQ and Ring Buffer (Disruptor) Channels
« on: September 05, 2015, 10:25:00 pm »
LMAX Disruptor and ZeroMQ

Quote
The ZeroMQ binding can be combined with Ring Buffer Channels and the LMAX Disruptor to create low-latency processing engines. Consider the following topology where a consumer receives events from one channel, processes them and subsequently invokes another service that in turn publishes output to a second channel:

Quote
A message is read from a ZeroMQ socket on thread 1 and placed in a ring buffer (Disruptor) slot. The business logic runs on a second thread, which receives the message from the ring buffer (Disruptor). When processing is complete, a business logic component publishes the output event to the second channel, which results in it being placed in an outgoing ring buffer slot. A third thread, responsible for dispatching messages over a ZeroMQ socket, receives the output event and publishes it.

This channel architecture can be applied to a number of use cases which require fast (low-latency) and predictable (limited GC activity) performance.
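As a rough stdlib-only sketch of that three-thread topology: here `queue.Queue` stands in for both the ZeroMQ channels and the ring buffers, purely to show the thread layout. A real build would use pyzmq sockets on threads 1 and 3 and a Disruptor-style ring in between.

```python
import queue
import threading

inbound, outbound = queue.Queue(), queue.Queue()

def receiver(messages):
    """Thread 1: reads events from the input channel into the ring."""
    for m in messages:
        inbound.put(m)
    inbound.put(None)  # sentinel: end of stream

def business_logic():
    """Thread 2: consumes, processes, and re-publishes each event."""
    while (m := inbound.get()) is not None:
        outbound.put(m.upper())  # stand-in for real processing
    outbound.put(None)

results = []

def dispatcher():
    """Thread 3: publishes processed events on the output channel."""
    while (m := outbound.get()) is not None:
        results.append(m)

threads = [threading.Thread(target=receiver, args=(["bid", "ask", "fill"],)),
           threading.Thread(target=business_logic),
           threading.Thread(target=dispatcher)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The three stages never share mutable state except through the channels, which is what gives the topology its low-latency, limited-GC character in the real LMAX/ZeroMQ setting.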
https://fabric3.atlassian.net/wiki/spaces/flyingpdf/pdfpageexport.action?pageId=1540186
http://zeromq.org/whitepapers:market-analysis
http://zeromq.org/
Quote
Introduction

The financial sector lives off messaging technology. On "Wall Street" (the global stock trading business), capacity and latency are everything. Current infrastructure, highly tuned to get million-message-per-second throughputs and sub-millisecond latencies, still fails when trading gets frantic. Huge amounts of money depend on being the first to get data, and the first to trade.

The stock trading business is evolving dramatically. Penny pricing generates more data. New US and EU regulations increase the number of parties involved in financial markets. New algorithmic trading technologies increase the demand for up-to-date stock data and increase the number of orders. While the existing infrastructure can double in capacity or speed every 18 months, traffic is expected to grow by 20 times over the next three years1.

At the same time, prices for messaging technology are steadily rising. Messaging middleware - software that connects applications or pieces of applications in a generalised plug-and-play fashion - is one of the last big-ticket items still not turned into a commodity by the Internet age of cheap software.

Mainframes got much of their power from clever messaging and transaction processing systems like IBM CICS. But today even 1980s-standard middleware - unlike databases, operating systems, compilers, editors, GUIs, and so on - is still not widely available to ordinary developers. The software industry is producing various business applications, pieces of applications, and the tools to make these, in ever greater quantities and at ever lower prices, but the messaging bit is still missing. The lack of a way to connect these applications has become not just unconquered terrain but a serious bottleneck to growth, especially for new start-ups that could in theory compete aggressively with larger, older firms if they were able to cheaply combine existing blocks of software.

This frustration is visible in many markets and has led to the growth of messaging-over-HTTP (SOAP) and other compromises. Architectures like SOAP do work, but they don't solve the two main issues of enterprise-level messaging, namely routing and queuing. Thus businesses that use such technologies cannot scale and cannot compete in really large markets unless they write their own messaging software or buy a commercial product. Various other standardisation attempts were made to commoditise the market: CORBA, JMS, and lately AMQP - CORBA being unsuccessful because of an RPC metaphor that doesn't suit the needs of financial markets, JMS succeeding in the Java world but unable to expand any further, and AMQP still being a big unknown.

The increasing demand, and lack of real competition shows in the financial statements of high-end messaging vendors like Tibco Software Inc: "Total revenue in the first quarter of fiscal year 2007 compared to the same quarter last year increased by $11.0 million or 10%. The increase was comprised of a $7.0 million or 11% increase in service and maintenance revenue and by a $4.0 million or 8% increase in license revenue."2 Tibco customers report that license fees are increasing, year on year.

The market

The global stock trading market is the primary focus of ØMQ because that is where the most emphasis is placed on messaging, the most resources are accumulated, and the most cutting-edge technologies are used.

The main characteristic of the market is hunger for fast delivery. Every millisecond by which a stock quote or trade order is faster than the competing one translates into direct financial profit, so the firms involved are naturally eager for any advantage they can get.

Currently, in the stock-trading business, traffic load is so high and latency so critical that the middleware has to be highly optimised. Latencies are given in microseconds and throughputs in millions of messages per second. In spite of that, trading often experiences problems when message load peaks. Latency can suddenly balloon to seconds (or even tens of seconds), and huge amounts of money can be lost as trades are delayed or fail.3

The situation is getting worse for several reasons:

In 2001, the NYSE and NASDAQ switched from pricing their stocks in 1/16th-dollar units to single-cent units. This so-called "penny pricing" means stock markets produce more data, and this data must be shifted across networks.
Both in the US and the EU, regulators are forcing financial markets to compete more openly and aggressively, in the interests of consumers. For example, US SEC regulatory changes allow new firms to act as intermediaries in the stock trading sector, while the EU's Markets in Financial Instruments Directive (MiFID)4 is expected to increase stock-trading traffic rates in the EU to match the volumes seen in the US after Reg NMS5.
Many new and aggressive firms are entering the market, especially building or using 'algorithmic trading' platforms.
Algorithmic trading executes a large number of low-volume orders, as opposed to the small number of high-volume orders executed by traditional human traders.
So we have increased data flows, to more participants, who are pushing to develop new business models which depend on getting that data rapidly, detecting temporary market anomalies, and responding to them (with trades) before their competitors. A more flexible regulatory environment is opening previously protected markets to new competition. Overall, we see an arms race for bandwidth and latency in which better technology translates directly into more profits.6

Message traffic is expected to grow significantly in the near term - we have heard different figures of up to 30 times over the next three years - and existing systems can only double capacity every 18 months.

There are many attempts to solve this emerging issue. The most dramatic improvements in performance come from replacing the classic central broker with a peer-to-peer architecture in which messages can flow directly across the network with no extra hops. Not all messaging systems can adapt their architecture in this way.

Apart from architecture, the obvious place to optimise messaging is in the "stack", i.e. the layers that separate the application program from the physical network. The software itself is already heavily optimised in most cases, so vendors are shifting to other options, such as:

Optimising network architecture by connectivity providers to get better latencies, including moving message consumers close (in network terms) to the message producers;7,8
Clients moving from consolidated stock quote feeds to direct connectivity to the exchanges;9
Optimising the formats in which data are passed (FIX/FAST10);
Providing full-blown hardware solutions (Tervela, Exegy, etc.);
Replacing the physical transport layer (InfiniBand11, 10-gigabit Ethernet);
Optimising existing networking hardware (TCP offload engines, Intel's I/OAT technology12, etc.);
Modifying the OS to handle messages in real time (various real-time OSs, like Novell's SLERT13);
Modifying the OS to use a more efficient messaging stack (asynchronous I/O, SDP, various zero-copy techniques, etc.);
Using multicast to distribute stock quotes on the client's LAN.
As well as these optimisations, which focus on individual aspects of the messaging stack or architecture, we also see attempts that look at the problem as a whole:

Intel's Low Latency Lab14
Securities Technology Analysis Center (STAC)15
Various measurement & monitoring solutions (Endace etc.)
Highly optimised products with extensive hardware support become very expensive. Only the largest trading firms can afford the full range of products and even for these firms, costs remain a persistent concern. For the smaller firms, many of the solutions are simply not an option.

Opportunities

In this section we look at the opportunities for new high-performance messaging products such as those we are building.

High-performance take-out

The first and most obvious target is any firm using high-end commercial middleware for stock trading, where we can provide a cheaper equivalent. This market is cost-sensitive and in our experience it is willing to absorb change and risk in order to get a compelling price and/or performance advantage over their competitors.

Further, there are many firms who cannot afford these products, but would use them if the cost was set lower. Zipf's Law (usually used for language but also applicable to business sizes) suggests that the number of firms and their size follows an inverse power ratio, so offering a product at 20% the price of the high-cost market leaders should open a market five times as large. (In fact it's probably not this large, because smaller firms will buy or rent trading platforms rather than try to build their own.)

Trading platforms

Trading platforms are software applications that trading firms can buy ready-made, rather than build themselves using messaging middleware. Given the demand for cheaper, faster trading, there is a large market for these platforms. Obviously a firm that builds a trading platform is sensitive to the cost of the messaging it uses and these firms provide a market for our planned products.

Investment banks

Investment banks build their own trading systems and (from our limited experience) like to have control over the technology they use. Standards-based systems are highly attractive here. The calculation is that a standard technology is easier to control, and is served by a larger market of cheaper experts. Any AMQP solution has immediate attraction. Cost is always a driver as well but for firms that do significant development around the messaging, reduction of secondary costs (such as the number and cost of in-house consultants) is an important aspect.

It becomes clear why JPMorganChase was motivated to push and invest in the AMQP process, even taking considerable risks at the time: AMQP enables very large savings on IT expenditure, for messaging licenses, custom development, operational control, and so on. We can deliver a much lower-risk proposal to other investment banks, but with the same kinds of benefits.

Data consolidators

The stock trading world connects many exchanges (NASDAQ, NYSE, etc.) to many clients. Large clients make separate connections to each exchange, but most work via data consolidators, firms like Reuters who provide unified streams from many sources.

Today's consolidators run highly tuned custom messaging software; it is not standards-based and has little scope for getting cheaper and faster. It can get faster, but only at high cost, which punishes those firms that stick with custom messaging and gives an advantage to firms using standards-based messaging, which spreads the costs and leverages far more work on performance.

There is a definite opportunity for opening this market, and allowing new firms to compete as data consolidators, using our high-performance products to carry quotes to clients. New US regulations are opening this market to real competition.

Exchanges

The exchanges (stock exchanges, currency exchanges, commodities, etc.) are heavily impacted by the growth in demand for their services. It seems inevitable that standards at the edges will slowly force their way into the center, and we should be able to follow with product offerings.

Also, new types of trading venues are emerging (ATS's, MTF's and dark pools16) that gradually take still greater share of the market from the traditional exchanges. Given that this trend is quite new and still gaining momentum, we expect to see increasing demand for high-end messaging systems on this market.

Moving the value to different markets

One of the goals of ØMQ is to use the money, resources, and experience accumulated during the low-latency arms race in the stock-trading business to deliver a free, high-end, general-purpose messaging solution to the rest of the IT sector.

Some of the areas where ØMQ may prove useful follow.

Business and institutional messaging

Sending payments, doing business-to-business communication, passing documents within governmental organisations, etc. is the primary market to focus on apart from stock trading. The reason is that this is the field where messaging is traditionally used, with a lot of experienced IT personnel who are aware of messaging and have used it for a long time.

It should also be taken into account that even applications that don't use messaging proper may still be sending 'messages' by different means. Consider an application at place A writing a record to a remote database server and another one at place B reading the record. In fact, a message was sent from A to B, even though the programmer might not be aware of it. Even inter-process and inter-thread communication can be considered messaging. Synchronising different applications by copying files to remote destinations once a day can be considered messaging as well (although a spectacularly high-latency form of it).

Basically, any application made for the financial or institutional sector needs some kind of messaging, and the cost of the implementation varies between 10 and 30 per cent of the total project cost, so using an existing standards-based middleware implementation seems to be a rather good investment.

Although low latency is not a key requirement in this sphere, we expect that growing transaction rates (consider regulations like the EU's SEPA17 and standardisation efforts like TWIST18) will slowly force financial institutions to adopt high-performance messaging solutions, causing the currently small slice of the messaging market addressed by high-performance solutions to grow steadily until it ultimately reaches 100%.

Embedded systems

Embedded systems often have real-time requirements similar to those seen in the stock-trading business. Consider, for example, a device measuring some critical value in a technological process. The data have to be delivered to the unit controlling the process within 1 ms, otherwise the whole process will be spoiled.

Embedded systems don't usually need the throughput provided by stock-trading stacks; however, if latency, reliability, and deterministic delivery times are guaranteed, they can take advantage of such a stack even without using all the bandwidth capacity available.

Multimedia

The same remark about real-time requirements applies to multimedia (streaming audio and video, teleconferencing, etc.). As opposed to embedded systems, latency is not that critical; what is paramount is deterministic delivery time and high throughput.

In the future we may find that the lots-of-small-messages model of stock-trading applications is incompatible with the stream-based multimedia approach. However, we don't believe this is the case. To test the hypothesis, we built a proof-of-concept teleconferencing application over AMQP and have seen it perform smoothly.

Grid computing

Having almost the same requirements as stock trading, grid systems are a natural area in which to employ the ØMQ stack.

Grids are increasingly being used in finance19 and - not surprisingly - in stock trading itself, providing a solution for computationally expensive problems like risk management and algorithmic trading20.

The low-latency bubble

The market for low-latency solutions is very lively and expanding these days. However, some have a feeling that the value of the market is overestimated and that the ongoing low-latency arms race will end with the bubble bursting, similar to the dot-com crash of the early 2000s.

Let's examine possible causes of market breakdown:

There are laws of physics that place a lower bound on latency. Specifically, the speed of light cannot be exceeded, and once messaging hits this limit there won't be much space for competition, and the low-latency arms race will come to an end.
The costs of fast messaging are constantly growing. Once we hit the point where improving latency requires investments exceeding the profits it can possibly yield, the flow of money into the market will end.
Unreasonable spending on low-latency solutions can result in hysteria once the still-growing low-latency market starts shrinking. Hysteria can make the market plummet even below its real value.
Our view of the problems above is as follows:

The speed of light is certainly an ultimate barrier; however, as can be seen with microprocessors, barriers seen as ultimate are quite prone to being crossed over and over again. In the messaging business, for example, we see emerging proximity solutions (handling the speed-of-light problem by placing interdependent applications physically close to one another) and a trend to optimise the software part of the messaging stack, thus removing endpoint latency rather than on-the-wire latency. In fact, we don't believe there are any truly impenetrable barriers to stop the low-latency arms race, at least in the next several years.
Although the cost of low-latency messaging grows steadily, it should be taken into account that technology prices - both hardware and software - are steadily decreasing at the same time. What cost $100 last year costs $50 today. So even in a stable, non-expanding market, where spending on IT stays constant, there will be demand for new solutions to keep pace with new technologies.
Hysteria can happen at any time, and there's no way to prevent it completely. However, as stock-trading messaging is in a way a world unto itself, we expect hysteria to be restricted to this turbulent little market, leaving the rest of the messaging market intact. Thus the main victims will be firms that provide specialised stock-trading solutions rather than general-purpose messaging. Specifically, the ØMQ project, by taking advantage of the resources accumulated in the stock-trading-focused IT market to develop a general-purpose messaging solution, can survive a market breakdown by relying on its presence in different sectors of the messaging market.
Conclusion

The primary focus of ØMQ starts with stock trading because this market has a well-defined and growing demand for high-end solutions, and the options for collaborations and return on investment are plentiful. However, the construction of a cost-efficient, standards-based messaging system that can compete head-on with the best in the world opens doors into many other domains as well.




521
Technical Support / Can Graphene be accelerated by GPU?
« on: September 05, 2015, 10:15:40 pm »
My understanding is that it is based on the LMAX Disruptor?
https://www.youtube.com/watch?v=Qho1QNbXBso

Why can't Graphene be GPU accelerated? Is it intended to be GPU accelerated?

522
General Discussion / Re: I'm going to make all of you rich
« on: September 04, 2015, 05:47:11 pm »
Integrate with Jabber or Pidgin. Jabber is probably easier, but Pidgin is very popular, so a Bitshares Pidgin plugin could really catch fire.

It would be interesting if people could trade BitAssets over IM through bots.

https://www.quora.com/What-is-the-best-C-C++-XMPP-client-library-for-desktop
https://developer.pidgin.im/wiki/Scripting%20and%20Plugins
https://xmpp.org/xmpp-software/libraries/
https://cloud.google.com/appengine/docs/python/xmpp/

If Graphene has a good API then it shouldn't be difficult. Bots would log into the IM network; humans could then email, IM, SMS, or Skype them.
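As a sketch of the bot side, here is a hypothetical command grammar and parser for orders typed over IM ("buy 10 BitUSD @ 1.00"). The command format and the field names are illustrative assumptions; the XMPP transport (a Pidgin plugin or one of the XMPP libraries linked above) would sit in front of it and hand each incoming message to the parser.

```python
import re

# Hypothetical grammar: "<buy|sell> <qty> <asset> @ <price>"
CMD = re.compile(r"(?i)^(buy|sell)\s+(\d+)\s+(\w+)\s*@\s*([\d.]+)$")

def parse_command(text: str):
    """Parse one IM line into an order dict, or None if it isn't an order."""
    m = CMD.match(text.strip())
    if not m:
        return None
    side, qty, asset, price = m.groups()
    return {"side": side.lower(), "qty": int(qty),
            "asset": asset, "price": float(price)}

order = parse_command("buy 10 BitUSD @ 1.00")
```

The bot would then submit the parsed order through whatever trading API the chain exposes and reply with a fill confirmation over the same IM channel.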

523
This Hangout is w/ Rune of MKR

Please ask questions Here:

Follow Twitter: @Beyond_Bitcoin

Retweet This Week's Bonus Hangout [ANN]!

Or better yet--Join us!  For updates on upcoming events, attend, record and report live from our Mumble Server!


**ATTENTION BROWNIE LOVERS**

Ensure you Complete the account information HERE for easy dispersal. 

Who made the artwork for Maker?

524

The Internet was necessary in the context of a Cold War.


What exactly is this a reference to?


We have many trillions actually. The amount of wealth we have which goes untapped is primarily due to the fact that 1) we aren't currently able to sell our unused computation resources, 2) we aren't currently able to sell our unused storage resources, 3) we aren't currently able to sell our unused bandwidth, 4) we aren't currently able to auction our attention

The interesting development is that all of this is changing as we speak. A year from now we will have DACs which take full advantage of micropayments, and once micropayments are operational, combined with the other elements I mentioned, there are easily trillions of dollars of wealth there. So I don't think there is any sort of wealth shortage, just the misdirection and centralization of that wealth - or, in some cases, people don't even recognize that what they have is a form of wealth. Attention is wealth, spare computing resources are wealth, knowledge is wealth; all can be turned into cryptocurrency.

This really amounts to trillions? If sold, to whom exactly? In the context of my statement I suggested that trillions would go towards a whole new physical infrastructure. At what point does it become worth trillions? So far we have only had the biggest player grow to $3 billion, and where all that wealth has gone certainly doesn't appear to be of any major good, especially if you agree with John Underwood's assessment of what Bitcoin is used for.

Computation is a commodity and is immensely valuable. Easily worth trillions when you think about the fact that all businesses and all people rely on it. HPC is immensely valuable as well.

Protein folding? Searching for aliens? Decentralized search engines? All possible.  Google's market cap alone is almost 400 billion. Yes there are trillions of dollars available in untapped resources.

The attention economy? Micropayments? That is completely untapped, it's hundreds of billions or perhaps trillions of dollars of monetization. Attention was enough to give people free TV, to power the entire advertising industry, Google and Facebook are advertising companies. Auctioning your attention gives you the money.

Honestly, it's not easy to calculate exactly how much money, but considering there would be billions of people involved, and considering that the US economy alone is over 10 trillion and the global economy is in the tens of trillions, I would say trillions is reasonable.

That doesn't mean it's a guarantee. During the dot com bubble a lot of people made money and lost money, and many people avoided using the web entirely. I would say what we are talking about here is the birth of a different kind of blockchain web which can decentralize everything, computation, storage, and bandwidth.

The money earned from automation, from attention, from computation, from storage, can be used by each person to pay for bandwidth. So if you easily get, say, $400 a year just for your attention, that is easily enough money to pay for bandwidth for the whole year. 400 times 500 million? $200,000,000,000 a year.

1% of the world economy is around 1 trillion. https://en.wikipedia.org/wiki/World_economy

525
It is my opinion, that any PUBLIC P2P network can easily be censored by ISPs.  If you can join the network, then you can discover the IP and PORT of every publicly accessible node and then block all packets to/from those nodes on those ports.
Furthermore, every website that hosts content (binaries, source, and seed node IPs) can be shut down in a similar manner.
Even MaidSafe and Tor are not able to prevent this kind of censorship. 

https://www.cryptocoinsnews.com/isps-intentionally-blocking-bitcoin/

I don't think censorship is realistic in the long term, but that isn't the issue. The issue is whether they could censor it in a way such that only hardcore hackers can access it. Yes they can, and it would be a lot like what happens in China.

The point is that no, you cannot censor Bitcoin, but you can make it inconvenient for people to use it uncensored. Bitcoin transactions don't have to be transmitted through the Internet, but most people don't have a clue about software-defined radio, don't have a clue about the other ways to use Bitcoin, and just want it on their smartphone.
