Show Posts



Messages - roadscape

31
General Discussion / Re: Cryptofresh API
« on: April 07, 2016, 04:24:53 pm »
That looks like a good way to organize it, thanks @tbone. I may end up breaking it out into pages once I add charts for some of the useful queries I have lying around.

@all: please put any further feedback not related to the API in this thread: https://bitsharestalk.org/index.php/topic,19507.0.html (trying to keep this thread clean so people can subscribe to API change notifications). Thanks!

32
General Discussion / Re: Cryptofresh API
« on: April 07, 2016, 03:25:32 pm »
Thanks for the positive feedback!

@tbone I think this is a good point about the default ops on the charts; I've made the change and it will be in the next release. It looks better too. I've made a note about the 30-day MA and will keep it in mind as I break this chart out into more detailed ones.

@bitacer - I haven't forgotten about your request! But to do it properly we will need to locate/create images for all primary assets so they are all fairly represented and visually balanced. I haven't had time to collect these images; maybe you could help organize such an effort? Ideally 16x16 PNGs for the important assets here: https://cryptofresh.com/assets, as well as a nice default image for the others (it shouldn't be bold, and maybe there could be a different default image to distinguish UIA/PM/MPA).

Excellent job! Waiting for more!

Thanks! Any specific APIs you'd like to see added?

33
General Discussion / Re: Subsidizing Market Liquidity
« on: April 07, 2016, 03:01:07 pm »
Discussion also here: https://github.com/cryptonomex/graphene/issues/643

At this point I'm thinking @abit's solution would indeed work great as long as he can implement this detail:

OK, how about a middle ground - taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the filled orders in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 diff things - either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
for each filled order in [thisTimeIntervalStart, now]:
    T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
    order_total = size of the filled order
####
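
To make the quoted rule concrete, here is a rough Python sketch of it; the 10-minute window and the "credit actual time on the book" behaviour come from the pseudocode above, while the field names (placed_at, filled_at, size, owner) are just hypothetical:

from datetime import datetime, timedelta

SNAPSHOT_INTERVAL = timedelta(minutes=10)   # the "10 min" from the pseudocode above

def credited_time(order, window_start, window_end):
    # Time the order actually sat on the book during this window.
    # `order` is a dict with hypothetical keys: placed_at, filled_at (None if still open), size, owner.
    start = max(order["placed_at"], window_start)
    end = order["filled_at"] or window_end   # still open at window end -> credit up to window_end
    return max(end - start, timedelta(0))

def credit_window(filled_orders, now=None):
    # Credit every order filled inside [window_start, now] for its real time on the book.
    now = now or datetime.utcnow()
    window_start = now - SNAPSHOT_INTERVAL
    return [(o["owner"], o["size"], credited_time(o, window_start, now)) for o in filled_orders]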

So each snapshot really needs to be a window of time (say 15mins) that contains every order that existed during that time PLUS its placement time and/or fill time. Is this what you were thinking @abit? Any idea how partially filled orders are handled with this approach?
My idea is basically that a new snapshot is taken on every new block, so all unfilled orders will be tracked. For partially filled orders, we'll know when they were created, when they were partially filled, and when they disappeared.

Sounds good.. so this data will be streamed to a log file for processing/scoring by external scripts? It would be ideal to have multiple people running the same script and checking to make sure the numbers are in agreement. Your approach sounds like the most accurate way to get the data.
I'll stream data to somewhere on the Internet.

@roadscape, @abit

Hey guys, how soon do you think this can be completed?  @abit, have you been making progress?  Ronny from @ccedk said he would be interested in utilizing this to reward market makers in his UIAs.  And he will soon be launching some high profile assets such as Lisk, Digix, and Synerio.  Not to mention, it is critical for us to start bootstrapping key fiat BitAssets.  What do you guys think?
Yes, some progress.
Check https://data.sparkfun.com/bitshares_usd_price for a feed_price-only stream. The market snapshot stream would be something like that but with much more data, which can be used to calculate scores.
If a formula is decided on, I can also stream scores.

Very good.. Will these market snapshots be saved to log files directly from the node for external parsing/scoring? How will it work?

34
General Discussion / Re: [ANN] New Money project & SOLCERT token
« on: April 05, 2016, 03:42:44 pm »
This is undoubtedly an ambitious project but Solomon is a very driven guy. IMO it will come down to his ability to build up & organize the resources to make it a reality. Seems to be going well so far and I'm looking forward to seeing the next stage :)

35
General Discussion / Re: STEALTH Status Update
« on: April 05, 2016, 03:27:11 pm »
Personally I don't care much for stealth (I do recognize the strategic value), but blinded amounts are kinda cool. My concern would be GUI performance taking a hit like it did with TITAN.

The server-side wallet storage was not part of the proposal, but something we felt was necessary to back up/secure user funds. We do not yet have confidence in the reliability of server-side storage to enable this feature. There is a significant amount of liability associated with offering to host/backup user wallets.  We don't want to be responsible for the loss of funds.

Why not back up wallets directly on the blockchain? Along with a hash of the owner's email for easy lookups later. And maybe some random noise.
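
A toy sketch of what I mean by the lookup key (not a vetted scheme; the "random noise" is a per-wallet salt stored with the backup so the key isn't just a hash of a guessable email address):

import hashlib, os

def wallet_lookup_key(email, salt=None):
    # Toy sketch only. Without the salt, anyone could hash a known email
    # address and check whether that person has a backup on-chain.
    salt = salt or os.urandom(16)
    key = hashlib.sha256(salt + email.strip().lower().encode("utf-8")).hexdigest()
    return key, salt   # store `key` with the backup; the owner keeps (or re-derives) `salt`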

36
Technical Support / Re: help with faucet
« on: April 03, 2016, 06:03:00 pm »
Thanks for the response roadscape.  I am a complete and utter noob at this.  I am not exactly sure what you are asking by "how do I launch the server".  I am following xeroc's instructions here http://docs.bitshares.eu////testnet/7-faucet.html  I would be lying if I said I understood everything that I was typing into the terminal.

From what little I do understand, I think it's mina deploy that should be selecting the proper environment and isn't.  The only other command that looks like it could be starting the server is sudo service nginx start.

I switched over to the bitshares_faucet db and added dele-puppy.com to allowed_domains.  There is no important info in the databases, and I have no problem with wiping them.  I can add someone's SSH key so they can ssh in directly if that would be better than trying to translate through me.

I've not used mina, but yes, it's probably responsible for setting the environment. There are a lot of moving parts here, and in the instructions I see some things that may need to be tweaked. So perhaps the best way to move forward is for me to follow these instructions to set up a faucet and see for myself what's going on. From your web wallet, would you be able to switch the faucet over to my test one temporarily? Or do they need to be on the same domain?

37
Very interesting.. +5%

Sparkle has come a long way :P

38
General Discussion / Re: Subsidizing Market Liquidity
« on: April 02, 2016, 05:49:11 pm »
Discussion also here: https://github.com/cryptonomex/graphene/issues/643

At this point I'm thinking @abit's solution would indeed work great as long as he can implement this detail:

OK, how about a middle ground - taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the filled orders in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 diff things - either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
for each filled order in [thisTimeIntervalStart, now]:
    T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
    order_total = size of the filled order
####

So each snapshot really needs to be a window of time (say 15mins) that contains every order that existed during that time PLUS its placement time and/or fill time. Is this what you were thinking @abit? Any idea how partially filled orders are handled with this approach?
My idea is basically that a new snapshot is taken on every new block, so all unfilled orders will be tracked. For partially filled orders, we'll know when they were created, when they were partially filled, and when they disappeared.

Sounds good.. so this data will be streamed to a log file for processing/scoring by external scripts? It would be ideal to have multiple people running the same script and checking to make sure the numbers are in agreement. Your approach sounds like the most accurate way to get the data.

39
General Discussion / Re: Subsidizing Market Liquidity
« on: March 31, 2016, 12:27:31 am »
Discussion also here: https://github.com/cryptonomex/graphene/issues/643

At this point I'm thinking @abit's solution would indeed work great as long as he can implement this detail:

OK, how about a middle ground - taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the filled orders in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 diff things - either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
for each filled order in [thisTimeIntervalStart, now]:
    T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
    order_total = size of the filled order
####

So each snapshot really needs to be a window of time (say 15mins) that contains every order that existed during that time PLUS its placement time and/or fill time. Is this what you were thinking @abit? Any idea how partially filled orders are handled with this approach?

40
General Discussion / Cryptofresh API
« on: March 30, 2016, 11:38:46 pm »
https://cryptofresh.com/api/docs

Stay tuned to this thread for notifications about API updates and potentially breaking changes.
Also feel free to use it for questions or requests!

41
Technical Support / Re: help with faucet
« on: March 28, 2016, 04:00:02 pm »
Try:

RAILS_ENV=production bundle exec rake db:create db:schema:load

I do that when I set up the faucet.  I do it in the faucet directory though.  In fact the only way I could get the bitshares_faucet db to load as opposed to the bitshares_faucet_dev db is to modify the faucet/config/database.yml file.  For some reason it is loading the development branch, and I don't know what I am doing wrong.

I have a real secret under my develop branch in my secrets.yml file.  Other than that there wouldn't be any security issues from my faucet running the bitshares_faucet_dev db would there?

Please note that it's production/dev/test 'environment' not 'branch'. It's just a bit confusing to read :)

Rails runs in the development environment by default. When you run rake tasks, they're in the dev environment too.

bundle exec rake db:create db:schema:load <-- this sets up a development db
RAILS_ENV=production bundle exec rake db:create db:schema:load <-- this sets up a production db

rails s <-- this will start a server using development config
RAILS_ENV=production rails s <-- server will use production config

The secrets.yml file you don't need to worry about.

But if you're connecting to the development database, you're probably running in the development environment. And that is a security risk because it reveals a lot of information to make debugging easier.

We should get your faucet running in the production environment and connecting to the production database. What command are you using to start the server? And please verify that the "allowed_domains" entry was added to the production db.

Do you have any important information in the databases or can they still be wiped at this point?

42
+5%

This would be a very useful feature when we have more liquid assets, but I think bots are the best way for now. Eventually users may demand something like this.

43
Technical Support / Re: How much witnesses earn?
« on: March 28, 2016, 02:40:47 pm »
Here is a better formula:

[block reward] * (60 / [block interval time]) * 60 * 24 * 30 / [total number of witnesses] = [witness salary for one month]
Almost correct. Due to maintenance intervals, there are currently 3 fewer blocks produced every hour.

Is the '3 blocks' value hardcoded? And maintenance takes nowhere near that long in reality, right?
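
For a rough sense of scale, plugging purely illustrative numbers into the formula (3-second blocks, 1 BTS per block, 25 active witnesses; these are not the actual current chain parameters):

blocks_per_month = (60 / 3) * 60 * 24 * 30            # 864,000 blocks
blocks_per_month -= 3 * 24 * 30                       # ~3 fewer per hour for maintenance -> 861,840
monthly_pay_per_witness = 1 * blocks_per_month / 25   # ~34,474 BTS per witness per month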

44
This week:

Wrapped up changes to the CMC reporting API; trades among 35 assets are being tracked and any new significant trading among them should show up automatically on CMC. It helps make sure we are credited properly for volume on the DEX with minimal delay. Thanks to Ronny for sponsoring this feature!

Updated and cleaned up these reports for the hardfork:
https://cryptofresh.com/workers
https://cryptofresh.com/ballots

Added a balance lookup API and am working on a historical asset data API. As I wrap it up, I'll clean up & document the other cryptofresh API endpoints. Now is a good time to send requests for any APIs you'd like to see! If you need to scrape cryptofresh for something, please let me know.

45
General Discussion / Re: Subsidizing Market Liquidity
« on: March 25, 2016, 11:15:04 pm »
@roadscape I prefer to calculate with full data rather than with sample data. I'm going to write a snapshot plugin.

Data structure of one market pair (for example BTS/USD):
* a calculation period = snapshots
* a snapshot = time stamp, feed price, ask orders, bid orders
* an order = direction (ask/bid), price, volume, owner, create_time

Calculations:
[Note: the algorithm below is based on the feed price, not on "best prices"]
* At the beginning of a calculation period, take a snapshot and set everyone's score = 0.
* Every time (after a new block is produced) a new order is created or cancelled, the feed price is updated, or the end of the calculation period is reached, take a new snapshot.
 -> let LT = last snapshot time
     let T = current snapshot time
     let LFP = feed price of last snapshot
 -> for each order,
     let LV = volume at last snapshot,
     calculate LV' = function1(direction, price, LFP, LV), to filter out unqualified orders
     calculate the new score gained by this order since the last snapshot:
          NOS = function2(direction, LV', LT, T, create_time, price, LFP)
          note: create_time may be needed here to judge whether this order is qualified.
 -> for each account (owner) on each side,
     let LAS = score at last snapshot
     calculate TLV = sum(LV'), total volume of qualified orders at last snapshot
     calculate ELV = function3(TLV), to filter out accounts whose total volume is too low
     calculate TNS = sum(NOS), total new score gained
     calculate ENS = function4(TNS, ELV), to filter out accounts whose total score/volume is too low, and perhaps cap the new score
     calculate AS = LAS + ENS, the final score at this snapshot
* At the end of a calculation period, for each side we have a set of (account, score) pairs, so we can calculate the final rewards.

//Update: I'm going to write a plugin to provide snapshot data, so you guys can use it to do whatever calculations you like.

@abit, that would be great if this could be a plugin for the node. It will be much more efficient and accurate than doing this thru API.
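
To check that I'm reading the algorithm right, here's a rough Python sketch of the per-snapshot bookkeeping. function1 through function4 are placeholder stubs (the real filters and scoring formula are still open), and timestamps are treated as plain Unix times:

def function1(direction, price, feed_price, lv):
    return lv                          # placeholder: e.g. zero out orders priced too far from the feed

def function2(direction, lv_q, last_time, this_time, create_time, price, feed_price):
    return lv_q * (this_time - max(create_time, last_time))   # placeholder: volume x seconds on the book

def function3(total_volume):
    return total_volume                # placeholder: could require a minimum qualified volume

def function4(total_new_score, qualified_volume):
    return total_new_score if qualified_volume > 0 else 0.0   # placeholder cap/filter

def process_snapshot(orders, scores, last_time, this_time, last_feed_price):
    # `orders`: order_id -> dict(direction, price, LV, owner, create_time); LV = volume at last snapshot.
    totals = {}                        # (owner, direction) -> (TLV, TNS)
    for o in orders.values():
        lv_q = function1(o["direction"], o["price"], last_feed_price, o["LV"])
        nos = function2(o["direction"], lv_q, last_time, this_time,
                        o["create_time"], o["price"], last_feed_price)
        tlv, tns = totals.get((o["owner"], o["direction"]), (0.0, 0.0))
        totals[(o["owner"], o["direction"])] = (tlv + lv_q, tns + nos)
    for key, (tlv, tns) in totals.items():
        elv = function3(tlv)
        ens = function4(tns, elv)
        scores[key] = scores.get(key, 0.0) + ens   # AS = LAS + ENS
    return scores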

@roadscape: Many orders will be on the books for much less than 10 minutes.  So I think sampling every 10 minutes will yield somewhat arbitrary results.  On the other hand, I imagine constant monitoring would be too resource intensive?  If so, how about sampling every 1 minute?  I don't think it would yield perfect results, but probably more than good enough, and I'm guessing without being too unreasonably resource intensive.  Thoughts?  @abit, can you comment on this as well?

@tbone It could probably be done every minute. If we have many quickly placed/filled orders at high volumes then it would not make sense to use the sampling approach tho. But I figured most of the orders getting rewarded would be bigger walls that don't move quickly. But there are workarounds; I just wanted to explore the limits of that approach and get feedback from traders.

@tonyk Your proposed changes make sense, but continuous monitoring is much more complex than sampling*. What does it capture that sampling can't? And what if samples were, e.g., 15 mins apart?

@roadscape
OK, how about a middle ground - taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the filled orders in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 diff things - either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
for each filled order in [thisTimeIntervalStart, now]:
    T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
    order_total = size of the filled order
####

My question is, what scenario are you trying to avoid (or create) by doing this?

If your order is on the books for 120 mins, and is *completely* filled at minute 125, you would not get credit for those last 5 minutes (assuming 10-minute snapshot interval). To me this doesn't seem like a problem.

If you expect orders to be on the books for less than 10 minutes at a time, I could see why we would need to be tracking this more detailed order activity.

My original line of thinking was a simple "sharedrop" of points onto order book participants at a regular interval.

My assumptions for MM subsidies:
1) The actual market activity doesn't matter nearly as much as creating a nicely-shaped order book.
2) 'Sampling' the order book every ~10 minutes is at least 95% as accurate as analyzing continuous data.
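
Roughly what I mean by the sharedrop, as a Python sketch (the point pool and the pro-rata split are illustrative; price/spread filters are omitted):

def sharedrop_points(order_book, points_per_side=1000.0):
    # `order_book`: {'bid': [(owner, size), ...], 'ask': [...]}.
    # Run once per sampling interval; each side's pool is split pro-rata by resting size.
    credits = {}
    for side, orders in order_book.items():
        total = sum(size for _, size in orders)
        if not total:
            continue
        for owner, size in orders:
            credits[owner] = credits.get(owner, 0.0) + points_per_side * size / total
    return credits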
Obviously (I thought), I am trying to avoid orders that are on the order book for 30%-99% of the time between samples getting no credit at all. To say nothing of the fact that such orders were filled first, which means they had the best prices of all!
The shorter the time between samples, the less of an issue this becomes, but your original proposal was every 1 hour (way too long in my view). So cutting it to 3-5-7 minutes is one way to do it.
Yet again, the proposed solution will yield 'near perfect'* results even if you sample far less frequently... and the computations involved don't seem to take too many resources.

* It is not perfect, because if you sample once every hour, you should also credit the "placed and subsequently cancelled" orders, meeting all other criteria, in that time frame.

Your middle ground is a good idea; the main issue is that I don't think it's possible to get all this info solely via RPC calls, so it would require extra processing. This is doable, but at that point I'd weigh it against the continuous approach.
However, @abit's solution (processing data straight in the node) is probably the most appropriate way to accomplish this.
