Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - FuLl

31
I'm running the PPA binary on a few machines and haven't seen this yet. You said it "still" won't stay running; did it ever work before (a different version, maybe)? Do you have any other PPAs enabled? It looks like it's crashing when trying to connect to a peer; maybe one of the networking libraries is from a different PPA?
Got it to run by typing 'run'. I didn't know I needed to do that.

It's been up for 30 minutes or so now, so I'm thinking you might be right about the PPA issue with libraries or something. It _did_ run back at version 0.4.18 & started crashing at 0.4.19. Back then I was using kernel x-29, & since yesterday I'm on x-37, so that might be why. I'll test that soon.

If I have further issues, I'll just keep using GitHub's version, since it isn't crashing.

Just an s3fs-fuse PPA enabled, by the way.
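If library provenance from a PPA really is the suspect, one generic way to check is to list which shared libraries the binary actually resolves to and map them back to packages. A sketch (the client name is the one from this thread; the dpkg path is hypothetical):

```shell
# ldd prints the shared libraries the binary resolves to at load time;
# dpkg -S maps a library path back to the package (and thus PPA) providing it.
CLIENT=$(command -v bitshares_client || true)
if [ -n "$CLIENT" ]; then
    ldd "$CLIENT"
    # then, for any suspicious library path in the output, e.g.:
    # dpkg -S /usr/lib/libboost_system.so.1.54.0   (hypothetical path)
else
    echo "bitshares_client not on PATH; point CLIENT at the binary"
fi
```

Anything in the ldd output coming from an unexpected prefix, or owned by a package from another PPA, would support the mismatched-libraries theory.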

32
I ran it in the debugger, & it may still be running, as my 'gdb...' command is still showing in htop.

After 10 minutes or so I got the (gdb) prompt in my console:

Code: [Select]
Reading symbols from /home/ubuntu/BitSharesX/bitshares_toolkit/programs/client/bitshares_client...done.
(gdb)

I don't know if that's because it finally crashed or not; as I said, the gdb command is still present in htop.

Not sure what to do next: I don't see the command I'm used to seeing when the daemon is running, 'bitsharesx-cli --daemon', in htop; I never have yet.

I'm not even sure the start-then-crash ever happened, given that I never saw that command appear.

I don't know if it might be necessary for someone else to try to recreate the problem on a VM or something; I'm running Ubuntu 12.04 with kernel 3.13.0-37-generic.

I do know that since I got to the (gdb) prompt, mysql has crashed & won't start again. I'm hesitant to reboot, since the daemon might actually still be running & I ought to wait for it to crash again to debug it properly.
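In the meantime, one low-risk way to check whether the daemon is actually alive, without rebooting, is to search the process table directly rather than eyeballing htop (pgrep ships with procps; the process name is the one discussed above):

```shell
# -f matches against the full command line, so this also catches a client
# started indirectly, e.g. via 'gdb ... bitshares_client'.
pgrep -f bitshares_client || echo "no matching process found"
```

If it prints one or more PIDs, something with that name is still running; an empty result with the fallback message means it's safe to assume the client is down.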

Please advise.

33
No, it's from the Ubuntu ppa.

Give me some time & I'll try the build from git so it can be debugged.

----

I keep the config.json because I was under the impression I needed the rpc configuration set to run as a seed node. Is this not correct?

34
I get this from gdb:

Code: [Select]
Reading symbols from bitshares_client...(no debugging symbols found)...done.
(gdb)

...And I don't see the client process show up in htop.

But it starts if I run the plain 'bitshares_client' command without gdb.
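The "(no debugging symbols found)" message suggests the packaged binary was stripped. A quick generic check for whether a binary carries debug info at all (the path here is hypothetical; point it at the actual client):

```shell
# "not stripped" in file's output and the presence of .debug_* sections in
# readelf's output both indicate the binary carries debug symbols.
BIN=./bitshares_client
if [ -x "$BIN" ]; then
    file "$BIN"
    readelf -S "$BIN" | grep debug || echo "no .debug_* sections: symbols were stripped"
else
    echo "adjust BIN to point at your client binary"
fi
```

A stripped binary will still run under gdb and still produce a backtrace on crash, but only with raw addresses and library frames, which is why a build from source with symbols is more useful for debugging.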

35
2) could you run the client in "gdb" so that you get a backtrace once the client crashes?

What's the right way to do that?

When I do:
Code: [Select]
$ sudo gdb bitshares_client --daemon
...gdb complains about the unrecognized --daemon flag.

And when I do:
Code: [Select]
$ sudo gdb 'bitshares_client --daemon'
...it doesn't start.

Am I starting it wrong maybe?
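From the gdb manual, the usual way to hand a program its own flags is `--args`, which tells gdb to pass everything after the program name to the program instead of parsing it itself. A sketch, assuming stock gdb behavior and the command line from this thread:

```shell
# Non-interactive form: 'run' starts the client, and when it crashes 'bt'
# prints the backtrace; -batch then makes gdb exit instead of prompting.
# Guarded so the snippet is safe to paste even where the client isn't installed.
if command -v gdb >/dev/null && command -v bitshares_client >/dev/null; then
    gdb -batch -ex run -ex bt --args bitshares_client --daemon
else
    echo "gdb or bitshares_client not found on PATH"
fi
```

The interactive equivalent is `gdb --args bitshares_client --daemon`, then `run` at the (gdb) prompt, and `bt` after the crash.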

36
Technical Support / bitsharesx-cli v. 0.4.20+a-0ubuntu1~ppa5 process dies
« on: October 09, 2014, 11:12:00 am »
Hi,

I've upgraded from 0.4.19 to the 0.4.20+a version in the Ubuntu PPA, & it still won't stay running on my box.

I tried deleting everything in the .BitSharesX directory except config.json, to no avail; it'll stay running for a little while, then die.

I see entries in p2p.log that seem ominous & suspect they're the problem, but I'm no expert, so I'm posting here for help.

p2p.log:

Code: [Select]
20141009T090425.968249     ntp:ntp_read_loop          request_now ] resolving... ["pool.ntp.org",123] ntp.cpp:55
20141009T090426.216655     ntp:ntp_read_loop          request_now ] sending request to 108.61.73.243:123 ntp.cpp:59
20141009T090426.225183     ntp:ntp_read_loop            read_loop ] received ntp reply from 108.61.73.243:123 ntp.cpp:120
20141009T090426.225258     ntp:ntp_read_loop            read_loop ] ntp_delta_time updated to -71311 ntp.cpp:145
20141009T090427.205787        th_a:?unnamed?        open_database ] old database version, upgrade and re-sync chain_database.cpp:105
20141009T090430.725502        th_a:?unnamed?                 open ] loading pending trx... chain_database.cpp:1074
20141009T090438.922354   upnp:upnp::map_port           operator() ] No valid UPnP IGDs found upnp.cpp:157
20141009T090443.881835 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090443.933725   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 104.131.35.149:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090443.935878 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090443.928942"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090444.006358   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 54.77.51.177:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090444.006686 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090444.005917"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090444.013204   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 5.101.106.138:1777: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090444.013430 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090444.013086"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090444.017498   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 178.62.157.161:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090444.017694 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090444.016865"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090444.151510   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 61.129.33.213:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090444.151857 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090444.150642"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090444.179158   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 54.79.27.224:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090444.179484 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090444.179038"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141009T090445.557016 p2p:message read_loop on_closing_connectio ] Peer 180.153.142.120:1777 is disconnecting us because: I rejected your connection request (hello message) so I'm disconnecting node.cpp:2501
20141009T090445.610143 p2p:message read_loop            read_loop ] disconnected 0 exception: unspecified
Bad file descriptor
    {"message":"Bad file descriptor"}
    asio  asio.cpp:37 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:182
20141009T090445.610351 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141009T090445.610207"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141009T090445.610453 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141009T090445.610207"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141009T090449.919339 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 89.187.144.203:8764 due to inactivity of at least 5 seconds node.cpp:1193
20141009T090449.919404 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Peer's negotiating status: connecting, bytes sent: 0, bytes received: 0 node.cpp:1197
20141009T090449.919446 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 188.226.195.137:60696 due to inactivity of at least 5 seconds node.cpp:1193
20141009T090449.919471 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Peer's negotiating status: connecting, bytes sent: 0, bytes received: 0 node.cpp:1197
20141009T090449.920938   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 188.226.195.137:60696: 0 exception: unspecified
Operation canceled
    {"message":"Operation canceled"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141009T090449.921141 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141009T090449.920455"},"format":"${message} ","data":{"message":"Operation canceled"}}]}

default.log:

Code: [Select]
20141009T090425.968249     ntp:ntp_read_loop          request_now ] resolving... ["pool.ntp.org",123] ntp.cpp:55
20141009T090426.216655     ntp:ntp_read_loop          request_now ] sending request to 108.61.73.243:123 ntp.cpp:59
20141009T090426.225183     ntp:ntp_read_loop            read_loop ] received ntp reply from 108.61.73.243:123 ntp.cpp:120
20141009T090426.225258     ntp:ntp_read_loop            read_loop ] ntp_delta_time updated to -71311 ntp.cpp:145
20141009T090427.205787        th_a:?unnamed?        open_database ] old database version, upgrade and re-sync chain_database.cpp:105
20141009T090430.725502        th_a:?unnamed?                 open ] loading pending trx... chain_database.cpp:1074
20141009T090438.922354   upnp:upnp::map_port           operator() ] No valid UPnP IGDs found upnp.cpp:157
20141009T090443.881835 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090456.885002 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090509.892257 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090522.892461 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090535.894387 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090548.895160 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090601.897461 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090614.897692 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090627.901372 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090640.903119 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141009T090653.903355 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098
20141009T090706.904391 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098
20141009T090719.924230 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098
20141009T090733.227063 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098

Would someone kindly interpret this for me & provide some guidance?

Thanks,

-F

37
Technical Support / Re: How to have separate wallets with 1 account each
« on: October 07, 2014, 11:58:26 pm »
I got this to work, & for future reference for anyone else who wants to know how, here's how I did it:

-I have the original wallet containing 'Account-A', created when I first installed BitSharesX, backed up to my hard drive, & I created a staging directory elsewhere where I'll keep a current version of each account's backup under directories titled 'Account-A' & 'Account-B'. I did this because the actual backup files have the same name, & I dare not rename them in case they wouldn't be restorable without their exact original names.

-I uninstalled the BitSharesX application, & afterward deleted the BitSharesX directory from Users/Username/AppData/Roaming.

-Then I ran a registry cleaner app for the sake of completeness in case there were registry settings left over from the uninstaller. I don't think this step was necessary, though.

-I then reinstalled the BitSharesX application fresh, & created the new account following the prompts.

-I backed up the newly created wallet containing the new 'Account-B' to its own directory, then copied that backup to my staging area where I keep current versions of each account's wallet.

-I can now 'Import' each separate wallet, each containing a single account, to effectively switch back & forth between them.


One question remains, however:

During the wallet import process, there's a message saying 'This will backup and replace your current wallet!'.

It's unclear to me where this particular backup gets saved, as the following screens don't ask for a file location. I checked the timestamps on the backups I already had after importing back & forth a couple of times, & it didn't appear to overwrite them, so it's not using the 'last used' manual export directory.

I wouldn't know how to use that particular backup if I'd needed to, but it doesn't matter in my case, since I have manual backups exported to my dedicated backup location.

So no worries about that.



Hope documenting this helps someone else.  :)

-F

38
known issues with 0.4.19

https://bitsharestalk.org/index.php?topic=9564.new#new

Bummer, thanks for letting me know.

Good luck & godspeed to the devs working on the fix. :-)

39
Technical Support / bitsharesx-cli v. 0.4.19 process won't stay running
« on: October 02, 2014, 01:24:04 pm »
Hi,

My seed node is down & I need some help to get it back up.

I'm used to the client not running when it's deprecated & there's an update to be installed, but according to aptitude I have the most recent version now, & it won't stay running for more than 5 minutes.

I tried deleting everything but 'config.json' in the /root/.BitSharesX/ directory in case it needed to start over, but it still stops running.

The following is in default.log:

Code: [Select]
20141002T130151.950001     ntp:ntp_read_loop          request_now ] resolving... ["pool.ntp.org",123] ntp.cpp:55
20141002T130152.047217     ntp:ntp_read_loop          request_now ] sending request to 167.88.119.29:123 ntp.cpp:59
20141002T130152.067212     ntp:ntp_read_loop            read_loop ] received ntp reply from 167.88.119.29:123 ntp.cpp:120
20141002T130152.067321     ntp:ntp_read_loop            read_loop ] ntp_delta_time updated to 3158 ntp.cpp:145
20141002T130152.952717        th_a:?unnamed?        open_database ] old database version, upgrade and re-sync chain_database.cpp:105
20141002T130156.280816        th_a:?unnamed?                 open ] loading pending trx... chain_database.cpp:1074
20141002T130204.444662   upnp:upnp::map_port           operator() ] No valid UPnP IGDs found upnp.cpp:157
20141002T130209.410461 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130222.472185 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130235.641888 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130248.851634 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130301.919106 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130314.919330 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130328.013531 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130341.093474 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130354.102885 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130407.911250 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130420.911469 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130433.979763 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130447.064326 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130500.224201 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130513.305901 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130526.306273 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130539.461313 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130552.765883 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130605.818549 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130619.097398 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130632.100851 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098
20141002T130645.114172 th_a:rebroadcast_pending rebroadcast_pending_ ] skip rebroadcast_pending while syncing client.cpp:1098

And this is my p2p.log:

Code: [Select]
20141002T130151.950001     ntp:ntp_read_loop          request_now ] resolving... ["pool.ntp.org",123] ntp.cpp:55
20141002T130152.047217     ntp:ntp_read_loop          request_now ] sending request to 167.88.119.29:123 ntp.cpp:59
20141002T130152.067212     ntp:ntp_read_loop            read_loop ] received ntp reply from 167.88.119.29:123 ntp.cpp:120
20141002T130152.067321     ntp:ntp_read_loop            read_loop ] ntp_delta_time updated to 3158 ntp.cpp:145
20141002T130152.952717        th_a:?unnamed?        open_database ] old database version, upgrade and re-sync chain_database.cpp:105
20141002T130156.280816        th_a:?unnamed?                 open ] loading pending trx... chain_database.cpp:1074
20141002T130204.444662   upnp:upnp::map_port           operator() ] No valid UPnP IGDs found upnp.cpp:157
20141002T130209.410461 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130209.530186   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 188.226.195.137:60696: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141002T130209.531980 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141002T130209.525050"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141002T130209.536157   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 178.62.157.161:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141002T130209.536384 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141002T130209.535021"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141002T130209.537471   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 5.101.106.138:1777: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141002T130209.537755 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141002T130209.534222"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141002T130209.765031   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 61.129.33.213:1776: 0 exception: unspecified
Connection refused
    {"message":"Connection refused"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141002T130209.765255 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141002T130209.764907"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:107
20141002T130210.129586 p2p:message read_loop on_closing_connectio ] Peer 80.240.133.79:1777 is disconnecting us because: I rejected your connection request (hello message) so I'm disconnecting node.cpp:2501
20141002T130210.131960 p2p:message read_loop            read_loop ] disconnected 0 exception: unspecified
Bad file descriptor
    {"message":"Bad file descriptor"}
    asio  asio.cpp:37 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:182
20141002T130210.132229 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130210.132011"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130210.132328 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130210.132011"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130211.486805 p2p:message read_loop on_closing_connectio ] Peer 180.153.142.120:1777 is disconnecting us because: I rejected your connection request (hello message) so I'm disconnecting node.cpp:2501
20141002T130211.487708 p2p:message read_loop            read_loop ] disconnected 0 exception: unspecified
Bad file descriptor
    {"message":"Bad file descriptor"}
    asio  asio.cpp:37 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:182
20141002T130211.487875 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130211.487755"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130211.487971 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130211.487755"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n    {\"message\":\"Bad file descriptor\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130215.439055 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 89.187.144.203:8764 due to inactivity of at least 5 seconds node.cpp:1193
20141002T130215.439123 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Peer's negotiating status: connecting, bytes sent: 0, bytes received: 0 node.cpp:1197
20141002T130215.439169 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 95.85.33.16:8764 due to inactivity of at least 5 seconds node.cpp:1193
20141002T130215.439195 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Peer's negotiating status: connecting, bytes sent: 0, bytes received: 0 node.cpp:1197
20141002T130215.439824   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 89.187.144.203:8764: 0 exception: unspecified
Operation canceled
    {"message":"Operation canceled"}
    asio  asio.cpp:59 error_handler peer_connection.cpp:204
20141002T130215.440073 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"20141002T130215.439370"},"format":"${message} ","data":{"message":"Operation canceled"}}]} peer_connection.cpp:107
20141002T130215.440456   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 95.85.33.16:8764: 0 exception: unspecified
Operation canceled
    {"message":"Operation canceled"}
    asio  asio.cpp:37 operator() peer_connection.cpp:204
20141002T130215.440624 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":37,"method":"operator()","hostname":"","thread_name":"asio","timestamp":"20141002T130215.439472"},"format":"${message} ","data":{"message":"Operation canceled"}}]} peer_connection.cpp:107
20141002T130221.516458 p2p:message read_loop            read_loop ] disconnected 0 exception: unspecified
Connection reset by peer
    {"message":"Connection reset by peer"}
    asio  asio.cpp:37 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:182
20141002T130221.516735 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Disconnecting peer 54.79.27.224:1776 because they didn't respond to my request for sync item ids after 0000000000000000000000000000000000000000 node.cpp:1240
20141002T130221.516986 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130221.516517"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nConnection reset by peer \n    {\"message\":\"Connection reset by peer\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130221.517090 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130221.516517"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nConnection reset by peer \n    {\"message\":\"Connection reset by peer\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130221.517388 p2p:message read_loop            read_loop ] disconnected 0 exception: unspecified
Operation canceled
    {"message":"Operation canceled"}
    asio  asio.cpp:37 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:182
20141002T130221.517624 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130221.517429"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nOperation canceled \n    {\"message\":\"Operation canceled\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130221.517714 p2p:delayed_peer_deletion_task   destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":184,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"20141002T130221.517429"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nOperation canceled \n    {\"message\":\"Operation canceled\"}\n    asio  asio.cpp:37 operator()\n\n    {\"len\":16}\n    p2p  stcp_socket.cpp:94 readsome"}}]} message_oriented_connection.cpp:274
20141002T130222.472185 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130235.641888 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130239.603366 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 721710us, longer than our target maximum of 500ms node.cpp:325
20141002T130239.603439 p2p:message read_loop ~call_statistics_col ] Actual execution took 6us, with a 234391us delay before the delegate thread started executing the method, and a 487313us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130239.957645 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1310667us, longer than our target maximum of 500ms node.cpp:325
20141002T130239.957739 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 469099us delay before the delegate thread started executing the method, and a 841564us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130239.961153 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1078444us, longer than our target maximum of 500ms node.cpp:325
20141002T130239.961193 p2p:message read_loop ~call_statistics_col ] Actual execution took 3us, with a 233396us delay before the delegate thread started executing the method, and a 845045us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130248.851634 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130301.919106 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130303.604058 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_synopsis took 593311us, longer than our target maximum of 500ms node.cpp:325
20141002T130303.604149 p2p:message read_loop ~call_statistics_col ] Actual execution took 8us, with a 228908us delay before the delegate thread started executing the method, and a 364395us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130307.517473 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 598848us, longer than our target maximum of 500ms node.cpp:325
20141002T130307.517548 p2p:message read_loop ~call_statistics_col ] Actual execution took 6us, with a 114313us delay before the delegate thread started executing the method, and a 484529us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130307.993191 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1078129us, longer than our target maximum of 500ms node.cpp:325
20141002T130307.993263 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 117901us delay before the delegate thread started executing the method, and a 960223us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130308.000349 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1086240us, longer than our target maximum of 500ms node.cpp:325
20141002T130308.000403 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 118838us delay before the delegate thread started executing the method, and a 967397us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130308.353182 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1434050us, longer than our target maximum of 500ms node.cpp:325
20141002T130308.353255 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 113869us delay before the delegate thread started executing the method, and a 1320176us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130314.919330 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130328.013531 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130341.093474 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130354.102885 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130407.911250 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130420.911469 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130431.357727 p2p:message read_loop            read_loop ] disconnected 11 eof_exception: End Of File
End of file
    {"message":"End of file"}
    asio  asio.cpp:35 operator()

    {"len":16}
    p2p  stcp_socket.cpp:94 readsome message_oriented_connection.cpp:177
20141002T130433.979763 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130447.064326 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130500.224201 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130513.305901 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130526.306273 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130539.461313 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130543.312503 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 601010us, longer than our target maximum of 500ms node.cpp:325
20141002T130543.312587 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 236479us delay before the delegate thread started executing the method, and a 364527us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130543.549222 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 842417us, longer than our target maximum of 500ms node.cpp:325
20141002T130543.549294 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 235857us delay before the delegate thread started executing the method, and a 606555us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130543.782593 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1079636us, longer than our target maximum of 500ms node.cpp:325
20141002T130543.782664 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 239672us delay before the delegate thread started executing the method, and a 839960us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130543.905264 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::sync_status took 1195870us, longer than our target maximum of 500ms node.cpp:325
20141002T130543.905330 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 233199us delay before the delegate thread started executing the method, and a 962666us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130544.028671 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1323721us, longer than our target maximum of 500ms node.cpp:325
20141002T130544.028742 p2p:message read_loop ~call_statistics_col ] Actual execution took 5us, with a 237665us delay before the delegate thread started executing the method, and a 1086051us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130544.384023 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::sync_status took 1674542us, longer than our target maximum of 500ms node.cpp:325
20141002T130544.384094 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 233125us delay before the delegate thread started executing the method, and a 1441413us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130544.509254 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1800775us, longer than our target maximum of 500ms node.cpp:325
20141002T130544.509326 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 234171us delay before the delegate thread started executing the method, and a 1566600us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130552.765883 th_a:rebroadcast_pending rebroadcast_pending_ ]  rebroadcasting... client.cpp:1102
20141002T130554.936176 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 718539us, longer than our target maximum of 500ms node.cpp:325
20141002T130554.936250 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 112592us delay before the delegate thread started executing the method, and a 605943us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130555.649384 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1432421us, longer than our target maximum of 500ms node.cpp:325
20141002T130555.649458 p2p:message read_loop ~call_statistics_col ] Actual execution took 3us, with a 113233us delay before the delegate thread started executing the method, and a 1319185us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130555.892557 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1677065us, longer than our target maximum of 500ms node.cpp:325
20141002T130555.892629 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 114725us delay before the delegate thread started executing the method, and a 1562336us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130556.133611 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 1917302us, longer than our target maximum of 500ms node.cpp:325
20141002T130556.133682 p2p:message read_loop ~call_statistics_col ] Actual execution took 4us, with a 113931us delay before the delegate thread started executing the method, and a 1803367us delay after it finished before the p2p thread started processing the response node.cpp:330
20141002T130556.259397 p2p:message read_loop ~call_statistics_col ] Call to method node_delegate::get_blockchain_now took 2040895us, longer than our target maximum of 500ms node.cpp:325
20141002T130556.259468 p2p:message read_loop ~call_statistics_col ] Actual execution took 6us, with a 111669us delay before the delegate thread started executing the method, and a 1929220us delay after it finished before the p2p thread started processing the response

Both blockchain.log & rpc.log are empty.

Can someone determine the problem from these logs? I'm not sure what to make of them.

Thanks.

-F

40
Technical Support / How to have separate wallets with 1 account each
« on: September 28, 2014, 02:35:25 pm »
Hi,

Being that backups containing more than 1 account don't restore properly, I was thinking I could copy 'wallet a' directory contents to a staging folder, start the client, & have a new wallet with zero accounts in which to create a new account for 'wallet b'. I wanted to be able to switch back & forth between the 2 to access the different accounts by simply copying the default wallet directory contents in & out of the default wallet folder.
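For reference, the swap-by-copy idea described above looks roughly like this as a shell sketch. All paths and filenames here (`wallets/default`, `wallet.dat`, the staging folder) are hypothetical stand-ins run inside a scratch directory, not the client's actual layout, which is why the sketch is safe to run as-is:

```shell
#!/bin/sh
set -e
# Stand-in paths under a throwaway scratch directory; substitute the
# client's real default wallet directory in practice.
BASE="$(mktemp -d)"
WALLET_DIR="$BASE/wallets/default"   # stand-in for the client's default wallet dir
STAGING="$BASE/staging/wallet_a"     # where wallet A's files get parked

# Pretend wallet A already lives in the default directory.
mkdir -p "$WALLET_DIR"
echo "wallet-a-data" > "$WALLET_DIR/wallet.dat"

# 1. Stash wallet A, then empty the default directory so the client
#    would start with a fresh wallet in which to create account B.
mkdir -p "$STAGING"
cp -a "$WALLET_DIR/." "$STAGING/"
rm -rf "$WALLET_DIR"
mkdir -p "$WALLET_DIR"

# 2. (The client would run here and create wallet B in the empty dir.)

# 3. Later, swap wallet A back in (stash wallet B first if you need it).
cp -a "$STAGING/." "$WALLET_DIR/"
cat "$WALLET_DIR/wallet.dat"   # -> wallet-a-data
```

As the rest of the post notes, this approach apparently fails with the real client (it prompts for an unfamiliar password after the default folder is emptied), which suggests some wallet state lives outside that directory.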

But apparently it doesn't work that way. I emptied the default wallet folder, started the client, & it asked for a password which isn't the original password from 'wallet a', and which I don't know.

How can I set this up the way I'm aiming to? I want more than 1 account, but I don't want to lose the ability to restore from backups.

Thanks,
-F

41
Technical Support / Re: How to profit from Music DAC Snapshot
« on: September 28, 2014, 12:24:46 am »
Good post, thanks.

http://bitshares.org/watch-for-falling-pts/

I hadn't gone all the way to page 3 when I first checked it out, actually I forgot about the blog. Glad to have it back on my radar.

42
Technical Support / Re: How to profit from Music DAC Snapshot
« on: September 28, 2014, 12:09:26 am »
PTS may fall after a snapshot (see Stan's blog post: Watch for Falling PTS), and your gains on investment may only be realized when BTS Music is up and running weeks or months later.

Who's Stan, & where's his blog?

I had the same idea that 'music' might take a while to gain value. So maybe that won't be part of what I do.

Right now the money I have in PTS was what I had in mind to use to incorporate a business I want to start. I might still want to do that, & I'm hoping I can make $100 or so on the PTS spike, if it materializes.

...Always on the lookout for a side job.

43
Technical Support / Re: How to profit from Music DAC Snapshot
« on: September 27, 2014, 11:15:01 pm »
I will not use my rent money for investments... but that is just me.

Yeah, I know it's a bad idea. I was hesitant to even ask, but I would if it were a sure bet at a serious profit.

44
Technical Support / How to profit from Music DAC Snapshot
« on: September 27, 2014, 09:52:01 pm »
Since this is the first snapshot I'll be a part of, I'm not sure what to expect, so I'd like to learn from those of you who've been through this before.

While I'm not particularly interested in holding Songshares, I do see a potentially profitable situation on the horizon with the Snapshot coming up. What I don't know are the historical details necessary to realistically formulate a profitable strategy.

I see that PTS rallied when the Snapshot was announced, & I have, along with apparently many others, spent some BTC on PTS since the announcement. I don't know however, if I'd be better off holding my PTS through the Snapshot so I would receive some Songshares, or if I would make more by selling my PTS just before the Snapshot, assuming its price will be at its highest say, on the 9th of October.

A) Where can I look at pricing data of BTSX, PTS, & DACs being Snapshotted for time periods during previous Snapshots?

B) How exactly are the allotted Songshares redeemed after the Snapshot?

C) Will I have to wait at all between the time of the Snapshot & the time I'm able to have Songshares to sell?

D) When is the price of PTS expected to peak?

E) Is there maybe some counterintuitive dip that happens say, 6 hours before the Snapshot when everyone who's going to buy into it already has?

F) When is the value of the allotted Songshares expected to be high?

G) How much is the value of PTS expected to drop immediately following the Snapshot?

H) Might the value of my new Songshares plus the value of my PTS provide a healthy ROI such that I'll want to sell both sometime after the Snapshot?

I) Do newly Snapshotted/allotted DACs peak right after a Snapshot?

J) How long might I want to hold my Songshares before selling them should I be seeking to recoup what I spent on PTS?

K) Might I be able at some point to sell my Songshares at such a price that I'm able to recoup my PTS investment within a relatively short period of time, while keeping my PTS holdings intact in anticipation of a future Snapshot?

L) What strategies are others employing aside from indefinite buy & hold?

And one last question, I may as well ask because it does cross my mind not knowing...

M) Is the profit potential of this Music DAC Snapshot sure enough that it might be worthwhile for me to allocate my October's rent to it, assuming I'd be able to make enough of a profit that I could sell it, get my money back with enough left over to cover my late fees & then some?


Thanks in advance for your feedback.

-F

45
General Discussion / Re: An open letter to Tonyk
« on: September 27, 2014, 04:24:30 am »
Regarding that company I wanted to start, first I renamed it, then more recently I gave up on it. So none of that I posted about matters anymore. I'll be staying a sole proprietorship until a better way to pivot comes up.

I guess I'm going to have to get used to you ridiculing my efforts.
