Author Topic: Subsidizing Market Liquidity  (Read 74310 times)


Offline cylonmaker2053

  • Hero Member
  • *****
  • Posts: 1004
  • Saving the world one block at a time
    • View Profile
  • BitShares: cylonmaker2053
@tbone @cylonmaker2053 currently the scoring bonus is linear: 100% bonus @ the midpoint, and 0% bonus at 5% off. This could be scaled to a wider range, and we could also use a curve instead of a line (creating a "long tail") for the bonus.

yes, a wider margin (like 20%+) would be the most important change. linear is fine as long as the margin is wider, but curved with weights trailing off towards the tails would be best.

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
@roadscape I prefer to calculate with full data rather than with sample data. I'm going to write a snapshot plugin.

Data structure of one market pair (for example BTS/USD):
* a calculation period = a series of snapshots
* a snapshot = time stamp, feed price, ask orders, bid orders
* an order = direction (ask/bid), price, volume, owner, create_time

Calculations:
[Note: the algorithm below is based on feed price, but not on "best prices"]
* at the beginning of a calculation period, take a snapshot, set everyone's score = 0
* Every time (after a new block is produced) a new order is created or cancelled, the feed price is updated, or the end of a calculation period is reached, take a new snapshot.
 -> let LT= last snapshot time
     let T = current snapshot time
     let LFP = feed price of last snapshot
 -> for each order,
     let LV=volume of last snapshot,
     calculate LV' = function1 (direction, price, LFP, LV), to filter out unqualified orders
     calculate new score gained by this order after last snapshot
          NOS = function2 (direction,LV', LT, T, create_time, price, LFP)
          note: create_time may be needed here to judge whether this order is qualified.
 -> for each account(owner) on each side,
     let LAS = score on last snapshot
     calculate TLV = sum(LV'), total volume of qualified orders at last snapshot
     calculate ELV = function3(TLV), to filter out unqualified account if total volume is too low
     calculate TNS = sum(NOS), total new score gained
     calculate ENS = function4(TNS,ELV), to filter out too low total score/volume, and perhaps set a cap on new score
     calculate AS = LAS + ENS, final score at this snapshot
* at the end of a calculation period, for each side, we have a set of (account, score), so we can calculate the final rewards.
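The per-snapshot update described above can be sketched as follows. function1 through function4 are deliberately left unspecified in the post, so this sketch takes them as pluggable hooks; the data classes and field names are illustrative only, not the plugin's actual types:

```python
from dataclasses import dataclass

@dataclass
class Order:
    direction: str      # "ask" or "bid"
    price: float
    volume: float
    owner: str
    create_time: float

@dataclass
class Snapshot:
    time: float
    feed_price: float
    orders: list        # list[Order]

def update_scores(prev, curr, scores, function1, function2, function3, function4):
    """One scoring step between two consecutive snapshots.
    scores maps (owner, direction) -> accumulated score (LAS)."""
    LT, T, LFP = prev.time, curr.time, prev.feed_price
    per_account = {}                      # (owner, direction) -> (TLV, TNS)
    for o in prev.orders:
        # LV' = filter out unqualified orders
        LV2 = function1(o.direction, o.price, LFP, o.volume)
        # NOS = new score gained by this order since the last snapshot
        NOS = function2(o.direction, LV2, LT, T, o.create_time, o.price, LFP)
        key = (o.owner, o.direction)
        tlv, tns = per_account.get(key, (0.0, 0.0))
        per_account[key] = (tlv + LV2, tns + NOS)
    for key, (TLV, TNS) in per_account.items():
        ELV = function3(TLV)              # drop accounts whose total volume is too low
        ENS = function4(TNS, ELV)         # filter/cap the newly gained score
        scores[key] = scores.get(key, 0.0) + ENS   # AS = LAS + ENS
    return scores
```

With trivial hook functions (qualify orders within 5% of the feed, score = volume x time on book), a single order of 100 sitting through a 60-second interval gains a score of 6000.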

//Update: I'm going to write a plugin to provide snapshot data, so you guys can use it to do whatever calculations you like.
« Last Edit: March 24, 2016, 12:08:27 am by abit »
BitShares committee member: abit
BitShares witness: in.abit

Offline roadscape

@abit if you think any part of this can be done from within graphene, that's great. I've been looking at it from an API perspective.

@tbone @cylonmaker2053 currently the scoring bonus is linear: 100% bonus @ the midpoint, and 0% bonus at 5% off. This could be scaled to a wider range, and we could also use a curve instead of a line (creating a "long tail") for the bonus.

@tonyk Your proposed changes make sense, but continuous monitoring is much more complex than sampling*. What does it capture that sampling can't? And what if samples were e.g. 15 mins apart?

@roadscape
OK, how about a middle ground: taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the orders filled in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 different things: either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
For each filled order in time [thisTimeIntervalStart, now]
      T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
      order_total = size of the filled order
####

My question is, what scenario are you trying to avoid (or create) by doing this?

If your order is on the books for 120 mins, and is *completely* filled at minute 125, you would not get credit for those last 5 minutes (assuming 10-minute snapshot interval). To me this doesn't seem like a problem.

If you expect orders to be on the books for less than 10 minutes at a time, I could see why we would need to be tracking this more detailed order activity.

My original line of thinking was a simple "sharedrop" of points onto order book participants at a regular interval.

My assumptions for MM subsidies:
1) The actual market activity doesn't matter nearly as much as creating a nicely-shaped order book.
2) 'Sampling' the order book every ~10 minutes is at least 95% as accurate as analyzing continuous data.
http://cryptofresh.com  |  witness: roadscape

Offline cylonmaker2053

  • Hero Member
  • *****
  • Posts: 1004
  • Saving the world one block at a time
    • View Profile
  • BitShares: cylonmaker2053
@cylonmaker2053 20% seems a bit high if we're trying to maintain a tight peg, no? For other markets it might make sense but imho for *stable* coins it should be a tight band.

true, i'd just like to see a tight range based on our markets functioning properly with plenty of natural liquidity, not based purely on subsidies. i like the idea of a weighted subsidy based on distance from the bid/ask midpoint, but extending at least 20% deep into the order book, maybe even further, with the rewards diminishing with distance.

any way we cut it is fine since this should be a measured experiment and we can learn from what works and doesn't.

Offline cylonmaker2053

  • Hero Member
  • *****
  • Posts: 1004
  • Saving the world one block at a time
    • View Profile
  • BitShares: cylonmaker2053
I think it's beneficial to have a deep order book, especially if black swans are a risk.  So we could reward orders deeper in the book, just not nearly as much.  For example, perhaps x reward within 2% of the peg.  Then maybe 1/5x between 2 and 5%.  And then maybe 1/20x between 5 and 20%, or something along those lines.  This way we encourage both a tight peg and a deep order book.   

yes, some weighting scheme by distance to bid/ask midpoint would be a good idea to try out. i agree with the value of a deep order book, but "deep" order book is very relative for us since USD 1,000 or so equivalent for any of the assets is enough to eat through most of the order books. also note that it only takes something like a 20%-30% move in BTS to "break" most of our markets, which is hardly a black swan; that kind of rapid shift in settlement price churns through most open orders pretty fast, especially for those of us trying to keep these markets liquid without bots. wide margins should be part of this scheme to make it useful.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
tonyk Your proposed changes make sense, but continuous monitoring is much more complex than sampling*. What does it capture that sampling can't? And what if samples were e.g. 15 mins apart?

@roadscape
OK, how about a middle ground: taking the snapshot every 10 (20, 30, whatever) minutes BUT also reading the orders filled in that period and using them for the calculation [effectively adding them to the order book as if they were not filled]?
We can do 2 different things: either credit them for the whole time period, or really check when they were placed and filled and credit them with the correct real time they were on the book.

####
thisTimeIntervalStart = now() - 10 min
For each filled order in time [thisTimeIntervalStart, now]
      T = OrderFillTime - max(OrderPlacementTime, thisTimeIntervalStart)
      order_total = size of the filled order
####
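A minimal runnable version of the pseudocode above, using the second variant (credit only the real time the order spent on the book within the interval). The dict keys 'placed', 'filled', and 'size' are assumed names, not actual chain fields:

```python
from datetime import datetime, timedelta

def credit_filled_orders(filled_orders, now, interval=timedelta(minutes=10)):
    """For each order filled in the last interval, compute the size and the
    actual time it sat on the book within [interval_start, now]."""
    interval_start = now - interval
    credits = []
    for o in filled_orders:
        # Only count the portion of the order's lifetime inside this interval
        t_on_book = o["filled"] - max(o["placed"], interval_start)
        credits.append((o["size"], t_on_book))
    return credits
```

For example, an order placed 20 minutes ago and filled 5 minutes ago, against a 10-minute interval, is credited for only the 5 minutes it spent on the book inside the interval.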
---------------------------------------
As much as I do not like using the feed price either... very often currently we have
feed price 100BTS/USD
best bid 107
best ask 112

So the question is - do we really want to give a subsidy to this bid price at 7% above the peg? For me personally the answer is NO.

----------------------------------------
By the way, if we're going to do this in the USD/BTS market, there's no need to subsidize the buy (BTS) side imo, since force settlement already provides liquidity. However, technically we can implement it as a parameter so it can be disabled in some markets, but enabled in other markets.

Very good points and approach. + 1

------------------
@tbone
I finally found a post of yours I agree with on all counts. The only thing about this layered approach is that it adds more complexity. Not impossibly high coding-wise I guess [at worst they will have to run the main code 3 times and check for orders in each interval: P <= 2%; 2% < P <= 5%; 5% < P <= 20%], but let's see what the real coders think.
« Last Edit: March 22, 2016, 07:44:05 pm by tonyk »
Lack of arbitrage is the problem, isn't it. And this 'should' solves it.

Offline tbone

  • Hero Member
  • *****
  • Posts: 632
    • View Profile
  • BitShares: tbone2
@tonyk Your proposed changes make sense, but continuous monitoring is much more complex than sampling*. What does it capture that sampling can't? And what if samples were e.g. 15 mins apart?

* more complex to implement correctly. and if anything goes wrong it may be difficult to "replay" the events because we don't have a historical orderbook API. at any rate, this would have to be an open source script/daemon that multiple people run and cross-check results. and this, too, will be much easier if we deal with snapshots rather than streams of data. not ruling anything out but I'd like to make sure we exhaust the simplest options.

I don't see how we can do meaningful scoring with 1-hour snapshots.  But without a way to replay events, I have no idea what the answer is.

@cylonmaker2053 20% seems a bit high if we're trying to maintain a tight peg, no? For other markets it might make sense but imho for *stable* coins it should be a tight band.

I think it's beneficial to have a deep order book, especially if black swans are a risk.  So we could reward orders deeper in the book, just not nearly as much.  For example, perhaps x reward within 2% of the peg.  Then maybe 1/5x between 2 and 5%.  And then maybe 1/20x between 5 and 20%, or something along those lines.  This way we encourage both a tight peg and a deep order book.   
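The tiered weights sketched above (x within 2%, x/5 between 2% and 5%, x/20 between 5% and 20%) could look like this; the boundaries and ratios are just the illustrative numbers from the post, not settled parameters:

```python
def tier_multiplier(distance, x=1.0):
    """Tiered reward weight by fractional distance from the peg:
    full weight near the peg, sharply discounted weight deeper in the book."""
    if distance <= 0.02:
        return x
    elif distance <= 0.05:
        return x / 5
    elif distance <= 0.20:
        return x / 20
    return 0.0          # beyond 20%: no reward
```

So an order 1% off the peg earns the full weight, one 3% off earns a fifth of it, and one 10% off earns a twentieth, encouraging both a tight peg and a deep book.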
 
Also: should we use the feed price or the center of the spread for P? (My preference would be to not rely on feed price if possible)

I think this has to be the price feed. Otherwise we may reward a tight spread, but around what price?

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
The very basic feature to support this is a plugin to "take snapshots" and provide the required data. Implementing it as a plugin means it won't affect the main functionality of the witness node, and it can be enabled only when needed.

Quote
NB: depending on how filled orders are handled in BTS, a decision on exactly how to do this must be made, BUT:
If any order is filled due to this newly placed order, the filled order(s) should be included in the calculations above for that round of rewards
So you're talking about calculating points *before* a new order is placed, and taking a new snapshot after it is filled. Doable.


By the way, if we're going to do this in the USD/BTS market, there's no need to subsidize the buy (BTS) side imo, since force settlement already provides liquidity. However, technically we can implement it as a parameter so it can be disabled in some markets, but enabled in other markets.
BitShares committee member: abit
BitShares witness: in.abit

Offline roadscape

@tonyk Your proposed changes make sense, but continuous monitoring is much more complex than sampling*. What does it capture that sampling can't? And what if samples were e.g. 15 mins apart?

@cylonmaker2053 20% seems a bit high if we're trying to maintain a tight peg, no? For other markets it might make sense but imho for *stable* coins it should be a tight band.


Also: should we use the feed price or the center of the spread for P? (My preference would be to not rely on feed price if possible)

* more complex to implement, and correctly. and if anything goes wrong it may be difficult to "replay" the events because we don't have historical orderbook API. at any rate, this would have to be an open source script/daemon that multiple people run and cross-check results. and this, too, will be much easier if we deal with snapshots rather than streams of data. not ruling anything out but I'd like to make sure we exhaust the simplest options.
http://cryptofresh.com  |  witness: roadscape

Offline cylonmaker2053

  • Hero Member
  • *****
  • Posts: 1004
  • Saving the world one block at a time
    • View Profile
  • BitShares: cylonmaker2053
This could work, with a few modifications.

*Modified part in italic in the original proposal if not explicitly quoted here
1)
" every hour:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P or (b) less than 60 minutes old
"

becomes:
Every Time a new order is placed in that market:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P
 - ignore the newly placed order

T = time since last snapshot

2)
score = (order_size / side_total) * (1 + distance_bonus) * T

3)
order_total = MIN(order size (ask or bid side), maxOrder bitUSD)
maxOrder = the max order size that qualifies for a bonus
 *[we do not want orders for 50,000 USD existing for small periods of time to take away all the bonuses]; we can start with something like maxOrder = 500-1000 USD

//EDIT
NB: depending on how filled orders are handled in BTS, a decision on exactly how to do this must be made, BUT:
If any order is filled due to this newly placed order, the filled order(s) should be included in the calculations above for that round of rewards

looks like a solid first cut, but i'd recommend expanding the range from midpoint for relevance. 5% from midpoint doesn't capture much of current action, so i'd widen that to something like 20%. yes, orders on the margin matter most, but market depth expanding out from the margin also counts.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
Thanks guys for summing it up. It looks like @tonyk has the only developed idea, but it might take me longer to fully comprehend it than to offer my own. I'll begin with what I think might be the easiest/most accessible way to score market makers and if it makes sense we can meet in the middle:

For each subsidized market, every hour:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P or (b) less than 60 minutes old

Score the remaining (i.e. eligible) orders, and sum up *per side/account*

 - score = (order_size / side_total) * (1 + distance_bonus)
   - order_size = size of the order (ask or bid side).
   - side_total = sum of all eligible orders on the ask or bid side
   - distance_bonus = ((max_distance - distance) / max_distance) (0% off feed = 1 ; 5% off feed = 0)

If we wanted to force balanced market making, then the final score per account is MIN(bid_score, ask_score).

(At this point, we could also discard any scores that are, say, below 25th percentile.)

100% of the reward per hour is split proportionally to the scores in this round. Payouts occur every 7 days.

-------

You get the optimal reward only if your bid/ask is balanced. One side can be smaller and closer to the peg yet still be balanced. Reward is based on your relative ownership of the eligible part of the bid/ask walls. Scalable bonus for how tight your walls are (up to 100% bonus for trading at the center of the market).

This could work, with a few modifications.

*Modified part in italic in the original proposal if not explicitly quoted here
1)
" every hour:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P or (b) less than 60 minutes old
"

becomes:
Every Time a new order is placed in that market:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P
 - ignore the newly placed order

T = time since last snapshot

2)
score = (order_size / side_total) * (1 + distance_bonus) * T

3)
order_total = MIN(order size (ask or bid side), maxOrder bitUSD)
maxOrder = the max order size that qualifies for a bonus
 *[we do not want orders for 50,000 USD existing for small periods of time to take away all the bonuses]; we can start with something like maxOrder = 500-1000 USD

//EDIT
NB: depending on how filled orders are handled in BTS, a decision on exactly how to do this must be made, BUT:
If any order is filled due to this newly placed order, the filled order(s) should be included in the calculations above for that round of rewards
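Putting modifications 2) and 3) together, a hedged sketch of the per-snapshot score: the score is weighted by T (time since the last snapshot) and the order size is capped at maxOrder so very large short-lived orders can't absorb all the bonuses. The exact interaction of order_total and order_size is not pinned down in the post, so the cap is applied directly to the order size here:

```python
def modified_score(order_size, side_total, distance, seconds_since_snapshot,
                   max_distance=0.05, max_order=1000.0):
    """tonyk's modification of roadscape's formula:
    score = (capped_size / side_total) * (1 + distance_bonus) * T.
    max_order = 1000 bitUSD is the suggested starting cap."""
    capped = min(order_size, max_order)
    distance_bonus = (max_distance - distance) / max_distance
    return (capped / side_total) * (1 + distance_bonus) * seconds_since_snapshot
```

E.g. a 2,000-unit order at the exact center of a 10,000-unit side, over a 10-minute gap between snapshots, scores (1000/10000) * 2 * 600 = 120: the cap stops it from doubling its score by doubling its size.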
« Last Edit: March 21, 2016, 08:02:19 pm by tonyk »
Lack of arbitrage is the problem, isn't it. And this 'should' solves it.

Offline roadscape

Thanks guys for summing it up. It looks like @tonyk has the only developed idea, but it might take me longer to fully comprehend it than to offer my own. I'll begin with what I think might be the easiest/most accessible way to score market makers and if it makes sense we can meet in the middle:

For each subsidized market, every hour:

 - Let P = "center point" of the market. (Feed price? Center of spread?)
 - Take a snapshot of the order book
 - Ignore all orders (a) more than 5% away from P or (b) less than 60 minutes old

Score the remaining (i.e. eligible) orders, and sum up *per side/account*

 - score = (order_size / side_total) * (1 + distance_bonus)
   - order_size = size of the order (ask or bid side).
   - side_total = sum of all eligible orders on the ask or bid side
   - distance_bonus = ((max_distance - distance) / max_distance) (0% off feed = 1 ; 5% off feed = 0)

If we wanted to force balanced market making, then the final score per account is MIN(bid_score, ask_score).

(At this point, we could also discard any scores that are, say, below 25th percentile.)

100% of the reward per hour is split proportionally to the scores in this round. Payouts occur every 7 days.

-------

You get the optimal reward only if your bid/ask is balanced. One side can be smaller and closer to the peg yet still be balanced. Reward is based on your relative ownership of the eligible part of the bid/ask walls. Scalable bonus for how tight your walls are (up to 100% bonus for trading at the center of the market).
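One scoring round as described above can be sketched as follows, assuming the eligibility filtering (distance and 60-minute age) has already been applied; all names are illustrative:

```python
def score_round(orders, P, side_totals, max_distance=0.05):
    """Score one hourly snapshot. `orders` is a list of
    (account, side, size, price) tuples; `side_totals` maps
    side -> sum of eligible order sizes on that side."""
    scores = {}  # (account, side) -> score
    for account, side, size, price in orders:
        distance = abs(price - P) / P
        # 0% off feed -> bonus 1 ; max_distance off feed -> bonus 0
        distance_bonus = (max_distance - distance) / max_distance
        s = (size / side_totals[side]) * (1 + distance_bonus)
        scores[(account, side)] = scores.get((account, side), 0.0) + s
    return scores

def final_scores(scores):
    """Optional balanced-market-making rule: per account, take
    MIN(bid_score, ask_score)."""
    accounts = {a for a, _ in scores}
    return {a: min(scores.get((a, "bid"), 0.0), scores.get((a, "ask"), 0.0))
            for a in accounts}
```

An account with a bid right at the center (score 2.0) but an ask at the 5% edge (score 1.0) ends up with a final score of 1.0: the unbalanced side is what counts.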
http://cryptofresh.com  |  witness: roadscape

Offline cylonmaker2053

  • Hero Member
  • *****
  • Posts: 1004
  • Saving the world one block at a time
    • View Profile
  • BitShares: cylonmaker2053
All this brainstorming about liquidity, and I can't even get into my OL wallet to place trades right now. We really need to get a reliable platform working before anything else. Count me out for adding liquidity to markets tonight, which sucks with the big BTS move. i hate not being able to access my positions to update them in the face of this kind of volatility.

Try another client. The downloadable client works well. Also, there is one at https://bitshares.org/wallet and another one hosted by bitcash but I lost the link.

great, thank you @yvv ...the bitshares.org wallet works perfectly.

@cylonmaker2053 If you have difficulty connecting to bitshares.openledger.info API server, go to settings, click on "API connection" and select dele-puppy.com from the list. We don't really depend on OL, and this is good.

Thanks again @yvv ...i've used the puppy API before, but last night i couldn't even get to the point where i could switch connections.

Offline Erlich Bachman

  • Sr. Member
  • ****
  • Posts: 287
  • I'm a pro
    • View Profile
But when will it get implemented?

Tune in again tomorrow, for another exciting edition of....


Beyonde Bitcoyne

10AM yo
You own the network, but who pays for development?

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
I would like to propose a new feature for BTS that CNX will provide free of charge if a hard fork is approved.

We would like to allow any market pair to reward users who provide liquidity in that market. The feature would work as follows:

Every order that is filled after being open on the books for at least 10 minutes earns shares in a reward pool. The shares earned are proportional to the size of the order filled.

Any user *or* worker can contribute funds to the reward pool. These funds can be denominated in any asset specified by the issuer.

At most once per day users may convert their shares in the reward pool to a pro-rata share of the rewards.

The asset issuer has the ability to enable this feature for any market their asset trades in and to specify the asset used to fund the reward pool.

With this feature Open Ledger *could* pay out OBITS to those who provide liquidity in the OPEN.BTC / BTS market.
BTS can vote for a worker to provide liquidity in the BTS / USD and BTS / CNY markets.

It is possible that trades in the BTS / OPEN.BTC market could earn rewards from both BTS and OBITS *if* shareholders voted to subsidize this market.

Assuming we implement this feature in the BTS / USD market, voters approve workers funding this at a rate of 2.5 BTS / sec (50% of allowed dilution), and the internal exchange had $100,000 of daily volume, then users trading on the internal exchange would earn about 1% more than they would get by trading off-chain. If daily volume were $50,000, they would see a 2% profit over doing the same trades off-chain.

The impact of this should be a major influx of new traders who can make more money trading on the internal exchange than the external exchange. This added liquidity will dramatically tighten the USD / BTS peg and give shorters much more confidence.
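The quoted percentages check out as rough arithmetic if we assume a BTS price of roughly $0.0046 (the price is not stated in the post, so this is an inferred assumption):

```python
def daily_subsidy_pct(bts_per_sec, bts_price_usd, daily_volume_usd):
    """Daily subsidy as a percentage of daily volume: worker funding rate
    converted to USD/day, divided by the market's daily USD volume."""
    daily_subsidy_usd = bts_per_sec * 86_400 * bts_price_usd  # 86,400 sec/day
    return 100 * daily_subsidy_usd / daily_volume_usd
```

At 2.5 BTS/sec and ~$0.0046/BTS, the subsidy is roughly $994/day, i.e. about 1% of $100,000 daily volume and about 2% of $50,000.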

This implementation will require 3 new operations on the blockchain:

1. create_liquidity_reward_pool issuer ASSET FUND_ASSET MARKET_ASSET    ie: openledger OBIT OPEN.BTC OPEN.USD
2. fund_liquidity_reward_pool funding_account AMOUNT FUND_ASSET ASSET MARKET_ASSET
3. claim_liquidity_rewards username AMOUNT FUND_ASSET  ASSET MARKET_ASSET

It will also create a new worker type that can direct BTS to any fund where FUND_ASSET is BTS.



Note: CNX reserves the right to retract this offer or request payment for adding this feature. This proposal does not commit CNX to develop the feature if we decide to pursue other options.


Excellent proposal.

I believe the proposal you've cited is outdated. Since then, if I'm not mistaken, bytemaster has shown support for the direction this thread has been going, and has chimed in to state that he believes the reward calculations we'd been discussing should be done off-chain.  Also, one of the things we've concluded on this thread, with some guidance from Nasdaq's liquidity incentive program, is that instead of rewarding trades (which we can't guarantee won't be gamed), we should simply reward liquidity (i.e. placement of orders on the book).  In that case, a share of rewards would not be earned when a fill takes place.  Instead, rewards would be earned based on scores during any given period, with the scores calculated based on how many shares were on the book, for how long, and how close to the price feed.  Hopefully we can build on this and move it toward reality ASAP.
+ 1
Yes, the proposal in its initial form is pretty bad actually... and the people who really care about seeing this working, as in working well in practice, have thrown in their two cents to make it a decent if not great solution. [we still have our own biases about which will work best, but the main point is: there is a much better way to do this, and the better solution is not that much harder to do...]
Lack of arbitrage is the problem, isn't it. And this 'should' solves it.