This is a fully scalable solution to decentralized transaction processing, which cannot be said of the other projects in the cryptocurrency space.
I think this is something that deserves an entire blog article that gets published on our site. Anyone care to write about the scalability issue?
Today the Bitcoin network is restricted to a sustained rate of 7 tps by artificial limits. These were put in place to stop people from ballooning the size of the block chain before the network and community were ready for it. Once those limits are lifted, the maximum transaction rate will rise significantly.
Bitcoin is currently able (with a couple of simple optimizations that are prototyped but not yet merged) to perform around 8,000 signature verifications per second on a quad-core Intel Core i7-2670QM 2.2 GHz processor. The average transaction has around 2 inputs, so we must halve that rate. This means 4,000 tps is easily achievable CPU-wise with a single fairly mainstream CPU.
Let's assume an average rate of 2,000 tps, roughly VISA's average load. Transactions vary in size from about 0.2 kilobytes to over 1 kilobyte, but the average today is about half a kilobyte.
That means you need to keep up with around 8 megabits/second of transaction data: 2,000 tps × 512 bytes = 1,024,000 bytes/second ≈ 0.98 megabytes/second, which multiplied by 8 bits/byte is about 7.8 megabits/second.
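The arithmetic above can be checked with a short script. This is just a back-of-envelope sketch using the figures assumed in the text (2,000 tps, 512 bytes per transaction):

```python
# Back-of-envelope check of the bandwidth estimate.
# Assumptions from the text: 2,000 tps sustained, 512 bytes per transaction.
TPS = 2_000
TX_BYTES = 512

bytes_per_sec = TPS * TX_BYTES                     # 1,024,000 bytes/s
megabytes_per_sec = bytes_per_sec / (1024 * 1024)  # ~0.98 MB/s
megabits_per_sec = megabytes_per_sec * 8           # ~7.8 Mbit/s

print(f"{megabytes_per_sec:.2f} MB/s, {megabits_per_sec:.1f} Mbit/s")
# -> 0.98 MB/s, 7.8 Mbit/s
```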
This sort of bandwidth is already common for even residential connections today, and is certainly at the low end of what colocation providers would expect to provide you with.
When blocks are solved, the current protocol sends the transactions again, even if a peer has already seen them at broadcast time. Fixing this so that blocks contain just lists of transaction hashes would resolve the issue and make the bandwidth needed for block broadcast negligible. So whilst this optimization isn't fully implemented today, we do not consider block transmission bandwidth here.
At very high transaction rates each block can be over half a gigabyte in size.
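The "half a gigabyte" figure follows directly from the earlier assumptions. A quick sanity check, assuming 2,000 tps, 512 bytes per transaction, and a 10-minute (600 second) block interval:

```python
# Sanity check: block size at the assumed sustained rate.
# Assumptions: 2,000 tps, 512 bytes/tx, 10-minute (600 s) block interval.
TPS = 2_000
TX_BYTES = 512
BLOCK_INTERVAL_S = 600

block_bytes = TPS * TX_BYTES * BLOCK_INTERVAL_S    # 614,400,000 bytes
block_mib = block_bytes / (1024 * 1024)
print(f"{block_mib:.0f} MiB per block")  # ~586 MiB, i.e. over half a gigabyte
```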
It is not required for most fully validating nodes to store the entire chain. In his paper, Satoshi describes "pruning", a way to delete unnecessary data about transactions that are fully spent. This reduces the data a fully validating node must store to roughly the size of the current unspent output set, plus some additional data needed to handle re-orgs. As of October 2012 (block 203258) there have been 7,979,231 transactions, yet the unspent output set is less than 100 MiB, which is small enough to easily fit in RAM on even quite old computers.
Only a small number of archival nodes need to store the full chain going back to the genesis block. These nodes can be used to bootstrap new fully validating nodes from scratch but are otherwise unnecessary.
The primary limiting factor in Bitcoin's performance is disk seeks once the unspent transaction output set stops fitting in memory. It is quite possible that the set will always fit in memory on dedicated server class machines, if hardware advances faster than Bitcoin usage does.
We have the same kind of scalability challenges. However, the Bitcoin evaluation did not consider that, on average, you must send each transaction out more than once, and protocol overhead means the real bandwidth requirement is likely 3x what they state.
So the real requirement to be a full node on the network at scale is likely:
- 100 megabytes/second of bandwidth
- 256 GB of RAM or more
- Terabytes of flash disk storage
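To see how these multipliers compound, here is a parameterized sketch. The 2,000 tps and 512 bytes/tx come from the earlier estimate, the 3x overhead factor is the claim above, and the fan-out (the number of peers each transaction is relayed to) is a hypothetical value, not a figure from the text:

```python
# Sketch of how relaying multiplies the base bandwidth requirement.
# Assumptions: 2,000 tps at 512 bytes/tx (earlier estimate), a 3x
# protocol-overhead multiplier (claimed above), and a hypothetical
# fan-out of 8 peers per transaction.
def full_node_bandwidth_mb_s(tps=2_000, tx_bytes=512, overhead=3, fan_out=8):
    return tps * tx_bytes * overhead * fan_out / (1024 * 1024)

print(f"{full_node_bandwidth_mb_s():.1f} MB/s")  # ~23.4 MB/s
```

Even with these modest assumptions the requirement is already tens of megabytes per second; a larger fan-out or a higher sustained rate pushes it toward the 100 MB/s figure.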
Then we have block propagation delay. They claim a CPU can validate 2,000-4,000 transactions per second, but if you are running at full speed then anyone more than one hop away from the block producer will be unable to validate and relay the block in time, and the result will be many forks. I contend that a CPU must be able to validate a block in no more than 10% of the block interval to avoid forks (assuming 6 degrees of separation). This means their 4,000 tps falls to 400 tps on a standard computer.
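The 4,000 → 400 tps step can be made explicit. Assuming 4,000 tx/s of validation throughput, a 600-second block interval, and a 10% validation budget per hop: a block at sustained rate T holds T × 600 transactions and must validate in 60 seconds, so T × 600 / 4,000 ≤ 60, giving T ≤ 400.

```python
# Verifying the 4,000 -> 400 tps claim.
# Assumptions: 4,000 tx/s validation throughput, 600 s block interval,
# and a 10% validation budget so blocks can relay across ~6 hops.
VALIDATE_TPS = 4_000
BLOCK_INTERVAL_S = 600
BUDGET_FRACTION = 0.10

# T * 600 / 4000 <= 0.10 * 600  =>  T <= 4000 * 0.10 = 400
max_sustained_tps = VALIDATE_TPS * BUDGET_FRACTION
print(f"max sustained rate: {max_sustained_tps:.0f} tps")  # -> 400 tps
```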
Therefore, at these speeds I suspect you will require a machine with 32 or more cores to process transactions at low enough latency to avoid forks.
All of these specs are easily achievable by most small businesses today on a machine/hosting service that likely costs just a few thousand dollars per month to operate.
The scalability issue with Bitcoin is proof-of-work, so all we need to consider is Nxt / Peercoin. With Nxt / Peercoin the average user is unlikely to earn enough from fees to cover the cost of operating a node at this scale, and this is why users need to delegate their authority to others. This allows concentration of capital in a manner that is still decentralized and under the control of the average user.