The potential for front-running exists because a delegate can see the unconfirmed order tx, and has the opportunity to insert a new tx in the block to take advantage of it.
Could this be avoided by simply changing the market-engine so that unconfirmed orders in inventory are only tested/matched against orders that have already been confirmed/written to the blockchain?
If orders in inventory are never matched against each other then won't that eliminate the delegate's ability to front-run the trade?
What are the downsides to this, if any?
I thought about this, and it is an interesting idea. It's not perfect though. Let me explain with an example.
I'll take the scenario from the blog post. Say there are three orders submitted into block 1: 10 coins at $1/coin, 20 coins at $2/coin, and 20 coins at $3/coin. By block 2, everyone can confirm that these orders exist. Now, let's say Alice wants to buy 50 coins with a limit order. She submits a limit order to buy 50 coins with a price limit of $3/coin and funds it with up to $150. The idea is that this transaction will be included in block 3, buy up all three orders for $110, and leave Alice with 50 coins and $40 in change.
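The intended happy path can be sketched in a few lines. This is purely illustrative code (the function name and data layout are my own, not the actual market engine): a buy limit order walking the confirmed asks cheapest-first.

```python
# Hypothetical sketch of matching a buy limit order against a confirmed
# order book (asks as (price, qty) pairs), as in the Alice example above.

def match_buy_limit(asks, want, limit, funds):
    """Fill up to `want` coins from `asks`, never paying more than
    `limit` per coin or more than `funds` in total."""
    bought, spent = 0, 0
    for price, qty in sorted(asks):
        if price > limit or bought == want:
            break
        take = min(qty, want - bought, (funds - spent) // price)
        bought += take
        spent += take * price
    return bought, spent

asks = [(1, 10), (2, 20), (3, 20)]   # the three orders confirmed in block 1
bought, spent = match_buy_limit(asks, want=50, limit=3, funds=150)
print(bought, spent)  # 50 110 -> all 50 coins for $110, leaving $40 of the $150
```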
After Alice submits the transaction to the network, Bob (who, let's say, is producing block 3) sees this transaction and tries to front-run the order. He places the following two orders in block 3 along with Alice's order:
- A limit order to buy 50 coins with a price limit at $3/coin and funded with $150. This transaction is crafted in a way to be processed before Alice's transaction (after all, all market orders need some deterministic order in which they are processed).
- An order to sell 50 coins at a price of $3/coin.
What happens while executing block 3 is that the three orders are matched with Bob's order (so he is at -$110 and +50 coins), then Alice's limit order is placed but there are no orders to match against it. Then Bob's sell order at $3/coin is placed but it does not match against Alice's order because they were included in the same block.
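Under the proposed rule, block 3's execution can be sketched like this (hypothetical code, not the actual market engine): Bob's buy is processed first and drains the confirmed book, so Alice's identical order finds nothing, and Bob's sell cannot cross Alice's buy because both entered in the same block.

```python
# Sketch of executing block 3 under the rule "new orders match only
# against orders confirmed in earlier blocks". Names are illustrative.

def fill_buy(asks, limit, want):
    """Consume `asks` (mutating it) up to `want` coins at or below `limit`.
    Returns (coins_bought, dollars_spent)."""
    bought, spent, i = 0, 0, 0
    while i < len(asks) and bought < want:
        price, qty = asks[i]
        if price > limit:
            break
        take = min(qty, want - bought)
        bought += take
        spent += take * price
        if take == qty:
            i += 1
        else:
            asks[i] = (price, qty - take)
    del asks[:i]  # drop fully consumed entries
    return bought, spent

confirmed_asks = [(1, 10), (2, 20), (3, 20)]  # confirmed in block 1

# Block 3, deterministic order: Bob's buy first, then Alice's buy.
bob_bought, bob_spent = fill_buy(confirmed_asks, limit=3, want=50)
alice_bought, alice_spent = fill_buy(confirmed_asks, limit=3, want=50)
# Bob's sell of 50 @ $3 cannot match Alice's buy: same block, so both sit.

print(bob_bought, bob_spent)      # 50 110 -> Bob: 50 coins for $110
print(alice_bought, alice_spent)  # 0 0   -> Alice: nothing filled
```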
Then block 4 executes. The two orders left over from block 3 that overlap but were never matched should probably execute now. However, if they immediately do, then this doesn't solve the front-running problem at all (Bob would end at a net +$40 and ±0 coins). If they never match, that creates an annoying problem: two orders would sit there with overlapping prices yet refuse to match against each other simply because they happened to be submitted in the same block. If no new orders matched against them, their owners would be forced to cancel the orders and try again.
So what I suggest is that orders submitted into block 3 that were not matched against orders from before block 3 should then match against each other (if possible) in the next block (block 4), but only after any market-order cancel transactions have been executed. Thus, Alice's client could notice that her limit order did not get matched the way she hoped and automatically submit a cancel order. If her cancel order reaches block 4, it is as if her limit order never happened (other than the two transaction fees). Bob, on the other hand, would have spent $110 buying 50 coins with no guarantee that he can soon sell them for more than $110. Alice may or may not be willing to accept Bob's order to sell 50 coins at $3/coin. If she decides to ignore it and hold out for a better deal, Bob is stuck exposed to 50 extra coins he does not want. If the price of those coins goes down, Bob could end up losing money compared to never having attempted the front-run in the first place. So this becomes speculation rather than front-running: there is no guarantee of profit.
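The "cancels first, then same-block leftovers cross" ordering can be sketched like this (assumed rule, illustrative data shapes and names):

```python
# Sketch of the suggested block-4 ordering: cancel transactions execute
# before last block's unmatched orders are allowed to cross each other.

def execute_next_block(cancels, carried_orders):
    """Drop any carried order whose id appears in `cancels`; only the
    survivors would then be eligible to match against each other."""
    live = [o for o in carried_orders if o["id"] not in cancels]
    # ...matching of the surviving carried orders would happen here...
    return live

carried = [
    {"id": "alice-buy", "side": "buy",  "price": 3, "qty": 50},
    {"id": "bob-sell",  "side": "sell", "price": 3, "qty": 50},
]

# Alice's client noticed the bad fill and broadcast a cancel in time:
survivors = execute_next_block({"alice-buy"}, carried)
print(survivors)  # only Bob's sell remains, with nothing to match against
```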
Also, instead of requiring Alice's client to send a new cancel transaction, we could simply have a fill-or-kill order type. Either Alice's limit order gets fully matched against orders that existed prior to block 3, or the order becomes void. Alice would also want the transaction to expire after block 3, so that it either gets included in block 3 or becomes void; this prevents the delegate producing block 3 from delaying her transaction by a block to get his front-running transaction in first. It would also probably be smart to encrypt the transaction for the delegate producing block 3 only, so that as little information as possible about Alice's intent leaks out to the public. If the transaction was killed because someone beat Alice to filling the existing orders, Alice would have to create a new order, perhaps a YGWYAF order that sits in the order book until someone later matches it.
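A fill-or-kill variant could look roughly like this, assuming "fill" means fully matched against orders confirmed before the current block (hypothetical function, not real engine code):

```python
# Hypothetical fill-or-kill check: either the whole order is fillable
# against pre-confirmed asks, or nothing happens at all.

def fill_or_kill(asks, limit, want):
    """Return (coins, cost) for a complete fill, or None if the order
    cannot be fully filled and is therefore void."""
    bought, spent = 0, 0
    for price, qty in sorted(asks):
        if price > limit:
            break
        take = min(qty, want - bought)
        bought += take
        spent += take * price
        if bought == want:
            return bought, spent
    return None  # killed: treated as if the order never existed

book = [(1, 10), (2, 20), (3, 20)]
print(fill_or_kill(book, limit=3, want=50))  # (50, 110): fully fillable
print(fill_or_kill([], limit=3, want=50))    # None: someone got there first
```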
The option of having Alice's client monitor the change in the order book and decide whether to submit a cancel transaction has other merits though. It allows Alice to encode whatever conditions she wants for what a successful "fill" means into the client without the blockchain needing to be aware of it. For example, maybe if the majority of her limit order would be filled but not all of it, that could be considered good enough to Alice and she would prefer not to kill the entire order. Her client can respect those wishes by refusing to submit the cancel order for block 4.
However, the "submit limit order, then immediately submit a cancel order if desired" model has some flaws. If Bob is colluding with the delegate producing block 4, that delegate could simply ignore Alice's cancel order. The delegate order is randomized every round, which helps us. Nevertheless, the colluding delegates can just wait for a round in which two of them produce blocks back-to-back to do their front-running with guaranteed profits. In those rounds, they are free to front-run any market/limit orders that are not of the fill-or-kill type. With two colluding delegates, the probability that a given round has them back-to-back is about 2%.
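As a sanity check on that figure: treating the pair of colluders as one block in the round's ordering gives 2·(N−1)! adjacent arrangements out of N!, i.e. probability 2/N. With N = 101 delegates per round (my assumption about the round size), that is about 1.98%:

```python
# Probability that two specific delegates are adjacent in a uniformly
# random ordering of N delegates: 2 * (N-1)! / N! = 2/N.
# N = 101 is an assumption about the round size.
import random

N = 101
print(2 / N)  # ~0.0198, i.e. about 2%

# Monte Carlo sanity check (illustrative):
random.seed(0)
trials = 20_000
hits = 0
for _ in range(trials):
    order = list(range(N))
    random.shuffle(order)
    hits += abs(order.index(0) - order.index(1)) == 1
print(hits / trials)  # ~0.02
```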
But this is a scenario where statistical analysis by all the live nodes is actually possible, unlike my previous idea of only sending the transaction to the next delegate, which gives that delegate plausible deniability because they can just blame it on someone trying to frame them.

The market/limit orders would still be encrypted for the delegate producing the very next block (and designed to expire after that block, so they are either included in that block or never make it into the blockchain at all). This means that only that delegate (and the submitter of the order) would have the information necessary to do the front-running.

If clients notice that front-running transactions appear in block N, AND that a cancel transaction for the front-run order is broadcast to the network immediately after block N is produced yet does not get included in block N+1, and they see this happen again and again whenever the delegates producing blocks N and N+1 come from some small set, then they can confidently conclude that the delegates in that set are colluding to front-run orders.

The cancel transaction part is important. It is something under the control of the delegate producing block N+1, not the submitter of the order. So an order submitter who front-runs himself to try to frame a delegate and get them fired cannot do so in this model, because it only counts as a negative mark against the delegate if the cancel transaction was propagated into the network after block N, well before the deadline of block N+1, and yet was not included in block N+1.
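The kind of bookkeeping the nodes would do can be sketched as follows (hypothetical data structures, not real node code): log every incident where a timely cancel was excluded from block N+1, keyed by the producer pair, and flag pairs that recur.

```python
# Sketch of the statistical detection described above: count incidents
# where a cancel was broadcast well before block N+1's deadline yet was
# not included, keyed by the (block N, block N+1) delegate pair.
from collections import Counter

incidents = Counter()

def record_suppressed_cancel(producer_n, producer_n1):
    """Called when a timely cancel for a front-run order in block N
    was not included in block N+1."""
    incidents[(producer_n, producer_n1)] += 1

def suspicious_pairs(threshold=3):
    """Pairs seen repeatedly are candidates for collusion; a one-off
    could just be network noise."""
    return [pair for pair, n in incidents.items() if n >= threshold]

for _ in range(4):
    record_suppressed_cancel("del-17", "del-42")
record_suppressed_cancel("del-03", "del-99")  # one-off, likely noise
print(suspicious_pairs())  # [('del-17', 'del-42')]
```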