Author  Topic: Blockchain size projections  (Read 846 times)


Offline Chronos

Blockchain size projections
« on: February 02, 2015, 11:46:03 PM »

We all know the Bitcoin blockchain continues to grow; it's quickly approaching 30 GB already. How does the BTS chain compare, in terms of longer-term extrapolations of current trends? I'm particularly interested in how the 10-second block timing and the on-chain exchange contribute to the total size.
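A rough back-of-envelope sketch of such an extrapolation, for illustration only: the per-block overhead, transaction rate, and transaction size used below are assumed placeholder numbers, not measured BTS figures.

Code:
# Back-of-envelope chain growth estimate; every input is an assumed
# placeholder, not a measured BTS or Bitcoin value.
SECONDS_PER_YEAR = 365 * 24 * 3600

def yearly_growth_bytes(block_interval_s, block_overhead_bytes, tx_per_s, tx_bytes):
    """Estimate raw chain growth per year: fixed per-block overhead
    (this is where 10-second blocks matter) plus transaction volume."""
    blocks_per_year = SECONDS_PER_YEAR / block_interval_s
    txs_per_year = tx_per_s * SECONDS_PER_YEAR
    return blocks_per_year * block_overhead_bytes + txs_per_year * tx_bytes

# Hypothetical numbers: 10-second blocks, ~200 bytes of per-block overhead,
# 0.1 transactions per second on average, ~250 bytes per transaction.
estimate = yearly_growth_bytes(10, 200, 0.1, 250)
print(f"~{estimate / 2**30:.2f} GiB per year under these assumptions")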

Offline vikram

Re: Blockchain size projections
« Reply #1 on: February 03, 2015, 12:04:51 AM »
For the current size, use the "disk_usage" command or just directly measure the databases in your data directory.
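One way to do the direct measurement, as a minimal sketch: walk the data directory and sum file sizes. The data-directory path below is an assumption; substitute whatever your installation actually uses.

Code:
import os

def dir_size(path):
    """Total size in bytes of all files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# Hypothetical data directory; adjust to your setup.
data_dir = os.path.expanduser("~/.BitShares")
if os.path.isdir(data_dir):
    for sub in sorted(os.listdir(data_dir)):
        p = os.path.join(data_dir, sub)
        if os.path.isdir(p):
            print(f"{sub}: {dir_size(p) / 2**20:.1f} MiB")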

Offline Chronos

Re: Blockchain size projections
« Reply #2 on: February 03, 2015, 12:18:41 AM »
Does anyone have a historical graph of size over time?

Offline Methodise

Re: Blockchain size projections
« Reply #3 on: February 03, 2015, 01:13:44 AM »
Projected would be nice, as well.
BTS: methodise

Offline Chronos

Re: Blockchain size projections
« Reply #4 on: February 04, 2015, 04:30:16 PM »
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.

Offline xeroc

Re: Blockchain size projections
« Reply #5 on: February 04, 2015, 05:58:21 PM »
Quote from: Chronos
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.
It is not an issue:
 - only delegates require the full chain .. so 101 ..
 - light wallets are currently under development

No need for everyone to store the full thing unless you WANT to
Give BitShares a try! Use the http://testnet.bitshares.eu provided by http://bitshares.eu powered by ChainSquad GmbH

Offline svk

Re: Blockchain size projections
« Reply #6 on: February 04, 2015, 06:03:24 PM »
I never thought to track this, I could start doing it though. Here's the current output of disk_usage:

Code:
(wallet closed) >>> disk_usage
{
  "blockchain": "841 MiB",
  "dac_state": "1 GiB",
  "logs": "19 MiB",
  "mail_client": "71 KiB",
  "mail_server": null,
  "network_peers": "8 MiB"
}
Worker: dev.bitsharesblocks

Offline Ander

Re: Blockchain size projections
« Reply #7 on: February 04, 2015, 06:57:26 PM »
Quote from: svk
I never thought to track this, I could start doing it though. Here's the current output of disk_usage:

Code:
(wallet closed) >>> disk_usage
{
  "blockchain": "841 MiB",
  "dac_state": "1 GiB",
  "logs": "19 MiB",
  "mail_client": "71 KiB",
  "mail_server": null,
  "network_peers": "8 MiB"
}

So it seems the blockchain is less than 2 GB, and it is thus growing more slowly than Bitcoin's?

What would happen if the transaction rate increased to match Bitcoin's current rate? Would it start growing a lot faster?
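A very rough way to reason about that question is to scale the transaction-driven part of growth by the ratio of transaction rates. Every number below is an illustrative assumption, not a measurement.

Code:
# Naive linear scaling of chain growth with transaction rate; all numbers
# below are illustrative assumptions, not measurements.
observed_chain_mib = 841      # from the disk_usage output quoted above
assumed_bts_tps = 0.05        # assumed current average BTS transactions/sec
assumed_btc_tps = 1.0         # assumed Bitcoin-level transactions/sec

# If growth were dominated by transactions, size would scale roughly
# linearly with the transaction rate:
scaled_mib = observed_chain_mib * (assumed_btc_tps / assumed_bts_tps)
print(f"~{scaled_mib / 1024:.1f} GiB at a Bitcoin-like transaction rate")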
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline Chronos

Re: Blockchain size projections
« Reply #8 on: February 04, 2015, 09:13:08 PM »
Quote from: Chronos
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.
Quote from: xeroc
It is not an issue:
 - only delegates require the full chain .. so 101 ..
 - light wallets are currently under development

No need for everyone to store the full thing unless you WANT to
I hadn't thought that only delegates need to have the entire chain. Good point! Light wallets are very important here.
« Last Edit: February 04, 2015, 09:49:30 PM by Chronos »

Offline matt608

Re: Blockchain size projections
« Reply #9 on: February 04, 2015, 10:04:23 PM »
Quote from: Chronos
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.
Quote from: xeroc
It is not an issue:
 - only delegates require the full chain .. so 101 ..
 - light wallets are currently under development

No need for everyone to store the full thing unless you WANT to

Is that really the case? I thought everyone was downloading the full blockchain. And the light client won't have a market.

Offline bytemaster

Re: Blockchain size projections
« Reply #10 on: February 04, 2015, 10:06:35 PM »
Quote from: Chronos
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.
Quote from: xeroc
It is not an issue:
 - only delegates require the full chain .. so 101 ..
 - light wallets are currently under development

No need for everyone to store the full thing unless you WANT to
Quote from: matt608
Is that really the case? I thought everyone was downloading the full blockchain. And the light client won't have a market.

Light client will have a market.. just not in the first release.
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline vikram

Re: Blockchain size projections
« Reply #11 on: February 04, 2015, 10:26:16 PM »
Quote from: Chronos
Bump. No data? Blockchain size seems to be a potential issue for Bitshares.
Quote from: xeroc
It is not an issue:
 - only delegates require the full chain .. so 101 ..
 - light wallets are currently under development

No need for everyone to store the full thing unless you WANT to
Quote from: matt608
Is that really the case? I thought everyone was downloading the full blockchain. And the light client won't have a market.

xeroc is speaking theoretically; currently almost everyone processes the entire blockchain, but this does not scale. In the future it will mostly be delegates processing the full blockchain while most users use light clients.

Offline gamey

Re: Blockchain size projections
« Reply #12 on: February 04, 2015, 11:02:01 PM »

Has the idea of pruning the blockchain been abandoned?
I speak for myself and only myself.

Offline vikram

Re: Blockchain size projections
« Reply #13 on: February 04, 2015, 11:27:00 PM »

Quote from: gamey
Has the idea of pruning the blockchain been abandoned?

The blocks themselves are not the problem; they are just the inputs to the state machine and can be thrown out. The problem is that the amount of information that defines the full state of the network in our system grows without bound. In theory, if we changed the system to expire all the different kinds of data after some amount of time, we might be able to bound the size, but I am not aware of any work that has been done in this direction.
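To make the expiry idea concrete, here is a minimal toy sketch (not anything implemented in the BitShares code) of a key-value state whose entries carry expiration times and can be pruned, which is one way the state size could be bounded:

Code:
import time

class ExpiringState:
    """Toy key-value state where every entry carries an expiration time."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] < time.time():
            return None
        return entry[0]

    def prune(self):
        """Drop expired entries, so the state stops growing without bound
        once keys are no longer refreshed."""
        now = time.time()
        expired = [k for k, (_, exp) in self._store.items() if exp < now]
        for k in expired:
            del self._store[k]
        return len(expired)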

Offline arhag

Re: Blockchain size projections
« Reply #14 on: February 05, 2015, 12:09:39 AM »
Quote from: vikram
The blocks themselves are not the problem; they are just the inputs to the state machine and can be thrown out. The problem is that the amount of information that defines the full state of the network in our system grows without bound. In theory, if we changed the system to expire all the different kinds of data after some amount of time, we might be able to bound the size, but I am not aware of any work that has been done in this direction.

You still need the full blockchain for a new machine to get to the present state of the database even if the client includes a recent trusted checkpoint.

What about the idea of encoding the running snapshot of the database in such a way that it can deterministically be reduced to a single hash that commits to the entire state of the database, and having the delegates include that hash (and the block height of the head block at the time of the database snapshot the hash refers to) in the block headers as part of their job? The hash would have to be correct for the block to be valid. Once the delegates have finished computing the hash of one recent snapshot (in parallel with regular block-producing operations), they begin computing the hash of the state of the database as it stood immediately after the previous hash was computed. All full nodes (including, of course, the delegate nodes) are able to coordinate on which database snapshot they will be computing the hash of next.
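A minimal sketch of the determinism requirement behind such a state commitment: serialize the key-value state in a canonical, sorted order and hash it. A real design would more likely use an incremental structure such as a Merkle tree (as in the sketch further down), but the point here is that every node derives the same hash from the same state.

Code:
import hashlib
import json

def state_hash(state: dict) -> str:
    """Deterministic hash of a key-value state: every node that serializes
    the same state in the same canonical (sorted) order gets the same digest."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two nodes that built the same state in different insertion orders still
# agree on the hash, so delegates could commit it in a block header.
a = {"balance/alice": 100, "balance/bob": 50}
b = {"balance/bob": 50, "balance/alice": 100}
assert state_hash(a) == state_hash(b)
print(state_hash(a))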

Then it becomes possible for a client with a recent trusted checkpoint to also get the hash of a recent trusted database state. It can ask any node that has a copy of that state (or that has the full blockchain and can regenerate a copy of that state) to provide the full state. The requester computes the hash to make sure it is in fact the true trusted database state and then continues evolving the database from that point using the portion of the blockchain starting from that point. Suppose all clients store the same database state snapshot once every 6 months (the one as of block N, where N is the largest integer less than ((the current block height) - 1577846) with N % 1577846 == 0) and only keep the most recent snapshot to save space. Then a new full client (with a trusted checkpoint no older than 6 months) only needs to download and process 6 to 12 months' worth of the blockchain rather than the entire thing, in addition to downloading and validating the entire database state that existed somewhere between 6 and 12 months in the past. (If it can find a node that also has a database snapshot more recent than 6 months but still older than the trusted checkpoint, it only has to download and process less than 6 months' worth of the blockchain.)
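The 6-month schedule above can be written down directly; 1577846 blocks is roughly half a year of 10-second blocks. A small sketch of that rule:

Code:
SNAPSHOT_INTERVAL = 1577846  # roughly 6 months of 10-second blocks

def snapshot_height(current_height):
    """Largest N with N % SNAPSHOT_INTERVAL == 0 and
    N < current_height - SNAPSHOT_INTERVAL, i.e. the most recent
    snapshot point that is itself at least ~6 months old."""
    limit = current_height - SNAPSHOT_INTERVAL - 1
    return (limit // SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL

# Example: a chain a little over a year old.
print(snapshot_height(3400000))  # -> 1577846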

Furthermore, suppose the database encoding were done in a way that anyone can provide log(K)-sized proofs of the existence of some (key, value) pair in the database (where K is the number of unique keys that hold a value in the database), and the database was designed in a clever way. Then it becomes possible for lightweight clients to easily verify the proof provided by an untrusted third party (say, the light wallet server) about something they want to know about the blockchain as it existed in the most recent snapshot, with only the minimal trust required that the current active delegates will not go against their economic interest to defraud users when it is guaranteed that they will eventually be caught. Ideally the snapshots should happen frequently enough that the most recent one is always less than a minute old. This of course assumes that the light wallet knows who the current active 101 delegates are and has a way to update that set over time (under normal circumstances) in a lightweight manner, without needing to rely on trusted checkpoints built into the client (other than to initialize the belief in the active set of delegates on a new machine or after a long time of being offline). If snapshots happen frequently enough that the period between them is less than 17 minutes (roughly one full round of 101 delegates at 10-second blocks), then ideally all (or >80%) of the delegates should sign the previous block header every block (which creates a verification chain to the most recent snapshot), so that the light wallet can actually have enough trust in the validity of the root hash of an extremely recent database snapshot.
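The log(K)-sized existence proofs described here are essentially Merkle proofs over the snapshot's key-value pairs. A minimal illustrative sketch follows; a real design would also need canonical key-ordering rules, non-existence proofs, and incremental updates.

Code:
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def leaf_hash(key, value):
    return h(b"leaf:" + key.encode() + b"=" + value.encode())

def build_levels(leaves):
    """Build all Merkle tree levels bottom-up; duplicate the last node
    when a level has an odd length."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2 == 1:
            cur = cur + [cur[-1]]
        nxt = [h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)]
        levels.append(nxt)
    return levels

def prove(levels, index):
    """Collect the sibling hashes from leaf to root: O(log K) items."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2 == 0))
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = leaf
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

# Snapshot state sorted by key so every node builds the identical tree.
state = {"balance/alice": "100", "balance/bob": "50", "balance/carol": "7"}
items = sorted(state.items())
leaves = [leaf_hash(k, v) for k, v in items]
levels = build_levels(leaves)
root = levels[-1][0]

# Prove that ("balance/bob", "50") is in the committed snapshot.
idx = [k for k, _ in items].index("balance/bob")
proof = prove(levels, idx)
assert verify(root, leaf_hash("balance/bob", "50"), proof)
print("membership proof verified against root", root.hex())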
« Last Edit: February 05, 2015, 12:20:56 AM by arhag »

 
