Topic: Possible to save a copy of the blockchain to save from having to re-download it?  (Read 718 times)


Offline jz831

  • Jr. Member
  • Posts: 46

Running the 0.6.1 client on OS X here.  Due to crashes, I've had to rebuild the database a few times this week, which takes many hours to do - sometimes more than 12.   :(

So I'm wondering if it's possible to save a known working copy of the blockchain DB to use in case my local copy gets corrupted and the client once again attempts to download it all from block 0 (or whatever).  Something like a checkpoint, so that I can resume downloading the blockchain from block ~1,835,000 (or wherever it was saved).

i.e. Could I make a copy of the ~/Library/Application Support/BitShares/chain folder, and then re-use that the next time I'm asked to rebuild the DB, so I don't have to start from the beginning again?
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline Riverhead

Yes. I have taken a snapshot (tar -cf bts.tar ~/Library/A...../Bitshares) and untar it when things fly off the tracks. Works fine. It may save you some headaches to keep the wallets directory out of the tarball.
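As a sketch of that snapshot/restore flow (the data directory locations and the --exclude of wallets are taken from this thread, not official docs, and the function names are made up for illustration):

```shell
#!/bin/sh
# Sketch of the snapshot approach: tarball the BitShares data directory,
# leaving the wallets folder out so a restore can't clobber a newer wallet.
# Data dir is "~/Library/Application Support/BitShares" on OS X,
# "~/.BitShares" on Linux. Run both with the client stopped.

snapshot_bts() {  # snapshot_bts <data_dir> <out_tar>
    data_dir=$1; out_tar=$2
    tar -cf "$out_tar" --exclude='wallets' \
        -C "$(dirname "$data_dir")" "$(basename "$data_dir")"
}

restore_bts() {   # restore_bts <out_tar> <parent_of_data_dir>
    tar -xf "$1" -C "$2"
}
```

The --exclude='wallets' matches the wallets directory at any depth, so the snapshot stays safe to extract over a profile that has a newer wallet in it.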

Offline jz831

So it's not just the chain folder I'll need to keep a snapshot of?  Your example is the entire BitShares folder (minus the wallet folder, i guess?)  I have over 6G in the logs folder - methinks someone's a chatty Kathy on the log.
« Last Edit: February 20, 2015, 05:09:42 AM by jz831 »

Offline Riverhead

Good question. To be honest, I've never tried backing up this piece or that, since it's usually just a snapshot for when I'm trying something that could mess things up.

Worth some testing :).

Offline Akado

  • Hero Member
  • Posts: 2759
  • BTS: akado
Maybe we could have a constantly updated blockchain file on our BitShares site so people can download it from there? It would be faster than letting the wallet sync from scratch  :) That way, instead of syncing dozens or hundreds of days, the wallet would probably only have to sync the last week or two.
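A sketch of what that could look like on the user's end - note the snapshot URL below is entirely made up, no such file exists on the site today:

```shell
#!/bin/sh
# Hypothetical bootstrap: download a published chain snapshot and unpack it
# into the data directory, so the wallet only syncs the blocks produced
# since the snapshot was taken. The URL is an illustration, not a real
# endpoint.

fetch_chain_snapshot() {  # fetch_chain_snapshot <url> <data_dir>
    url=$1; data_dir=$2
    tmp_tgz=$(mktemp)
    curl -sSL "$url" -o "$tmp_tgz" && tar xzf "$tmp_tgz" -C "$data_dir"
    rc=$?
    rm -f "$tmp_tgz"
    return $rc
}

# Example (hypothetical URL):
# fetch_chain_snapshot "https://bitshares.org/snapshots/chain-latest.tgz" "$HOME/.BitShares"
```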

Offline Thom

I have had pretty good success just making a copy of the chain folder after syncing, for use as a backup - but not always. Of the 6-10 times I've used such a backup, I've had 2 failures, and I am not sure why.

I'm talking 100% about Linux now. Most of the time the GUI will not terminate cleanly. Perhaps that's one factor.

If you're on Linux you can purge the log and lock files with rm -rf .BitShares/chain/LO*  Type it carefully - this could destroy your chain, or worse! I would also strongly advise against running it as root or with sudo!

I think the best strategy is to first move the wallets folder to a backup location (for example walletBackupsFolder), then purge the log & lock files, and lastly tarball the remainder with tar czf dotBitSharesBak.tgz .BitShares  You may then do the reverse by renaming or removing the existing .BitShares folder, then extracting from the backup with tar xzf dotBitSharesBak.tgz. Finally, restore the wallets folder with cp -R walletBackupsFolder/wallets .BitShares/.
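That sequence could be scripted roughly like this (an untested sketch: the folder and tarball names follow the post above, the function names are invented, and it assumes the client is stopped):

```shell
#!/bin/sh
# Sketch of the backup/restore strategy above, relative to $HOME on Linux:
# backup  = 1) move wallets aside, 2) purge log & lock files, 3) tarball the rest;
# restore = set the broken folder aside, extract, copy the wallets back.

backup_dotbitshares() {
    cd "$HOME" || return 1
    mkdir -p walletBackupsFolder
    mv .BitShares/wallets walletBackupsFolder/      # 1) wallets out of harm's way
    rm -rf .BitShares/chain/LO*                     # 2) purge LOG*/LOCK files (careful!)
    tar czf dotBitSharesBak.tgz .BitShares          # 3) tarball the remainder
}

restore_dotbitshares() {
    cd "$HOME" || return 1
    mv .BitShares .BitShares.broken                 # set the bad copy aside
    tar xzf dotBitSharesBak.tgz
    cp -R walletBackupsFolder/wallets .BitShares/   # put the live wallets back
}
```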
Injustice anywhere is a threat to justice everywhere - MLK |  Verbaltech2 Witness Reports: https://bitsharestalk.org/index.php/topic,23902.0.html


Offline vikram

Quote from: jz831
So it's not just the chain folder I'll need to keep a snapshot of?  Your example is the entire BitShares folder (minus the wallet folder, i guess?)  I have over 6G in the logs folder - methinks someone's a chatty Kathy on the log.

How long did it take to build up that many log messages? They should not be that huge. Have you modified your log level at all in "config.json"?

Offline jz831

Quote from: vikram
How long did it take to build up that many log messages? They should not be that huge. Have you modified your log level at all in "config.json"?

That 6G+ logs/p2p folder was built in just 1 day/few hours, after i emptied the contents of it earlier.  I have not modified the config.json at all.  The only thing i've done, besides truncating the logs/p2p folder, was to remove the peers.leveldb folder when I was having trouble connecting to peers, as I read somewhere as a suggestion to fix that issue.

However, unfort. I have already deleted those logfiles (wanted to make a smaller tar backup), and my logs/p2p folder is now just 16M, oops.

as I recall, the p2p.log file was being filled with messages similar to this:
Quote
2015-02-23T20:20:23 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_synopsis took 633198us, longer than our target maximum of 500ms                   node.cpp:329

If it happens again, i'll be sure to do a better job reporting it.   8)

Offline vikram

Quote from: jz831
That 6G+ logs/p2p folder was built in just 1 day/few hours, after i emptied the contents of it earlier.  I have not modified the config.json at all.  The only thing i've done, besides truncating the logs/p2p folder, was to remove the peers.leveldb folder when I was having trouble connecting to peers, as I read somewhere as a suggestion to fix that issue.

However, unfort. I have already deleted those logfiles (wanted to make a smaller tar backup), and my logs/p2p folder is now just 16M, oops.

as I recall, the p2p.log file was being filled with messages similar to this:
Quote
2015-02-23T20:20:23 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_synopsis took 633198us, longer than our target maximum of 500ms                   node.cpp:329

If it happens again, i'll be sure to do a better job reporting it.   8)

Thanks, I've seen occasional reports of this but have not been able to track it down. Seeing an example giant log file would be extremely helpful.

Offline kosh

  • Newbie
  • Posts: 8
I'm able to make backup copies of the blockchain without an issue on Linux.

With no bitshares instance running:

cd ~/.BitShares
cp -r chain chain-backup

Done.

I should note that, at least for me, if I try to start more than one instance of bitshares on the same machine, both instances crash and ~/.BitShares/chain gets deleted. So, I learned to make a backup copy. :)
"The avalanche has already started, it is too late for the pebbles to vote."

Offline Methodise

I'm backing up the chain directory (while bitshares is not running). That can then be hot-swapped into any wallet installation for a less burdensome sync, in my experience.

BTS: methodise

Offline tora62

Hi, just trying to clarify something; I've updated the client twice (0.8.1 > 0.9.0 > 0.9.1) and after every update the chain had to be re-synced.
My chain folder is symlinked to a storage drive (the binaries are on an SSD) and it was in a healthy and consistent state pre-upgrade.

So, the question is: if a re-sync is mandatory after every (major?) upgrade, isn't backing it up a bit pointless*?
(*Not counting cases for mitigating mishap risks, e.g. the '2 instances crash' scenario and such.)

(Btw, don't get me wrong here, I'm always in favour of doing backups of anything (semi-)important, just tryin' to see if I'm missing something.)
Thx!
If at first you don't succeed, call it version 1.0...

 
