BitShares Forum

Main => Technical Support => Topic started by: luckybit on September 05, 2015, 10:15:40 pm

Title: Can Graphene be accelerated by GPU?
Post by: luckybit on September 05, 2015, 10:15:40 pm
My understanding is that it is based on the LMAX Disruptor?
https://www.youtube.com/watch?v=Qho1QNbXBso

Why can't Graphene be GPU accelerated? Is it intended to be?
Title: Re: Can Graphene be accelerated by GPU?
Post by: bulletproof on September 05, 2015, 11:00:58 pm
Nice find. I just read as far as 6 million TPS - on CPU, wow.

Seems there is at least one actively maintained C++ port of their Java code: https://github.com/fsaintjacques/disruptor--
Title: Re: Can Graphene be accelerated by GPU?
Post by: luckybit on September 05, 2015, 11:05:31 pm
Quote
Andy Phillips: On the server hardware side we've used HP in the past, but more recently we have re-evaluated the field and have gone with Dell. This was for a variety of reasons, but mainly related to selecting the right CPU. (For us, Sandy Bridge processors from Intel offered a number of very important benefits.)

When it comes down to it, servers from HP or Dell or Sun will all be very similar. The only differences that occasionally arise are if you go to someone like IBM or Cisco UCS who will have some slightly different silicon in the box that allows them to do some unusual things.
Are you CPU based for everything you do or are you using, or considering using, GPUs or FPGAs?
Mike Barker: It's an interesting question. GPUs don't really fall into the matching engine space, they would be more appropriate for the heavy duty floating point calculations needed for more complex risk modelling or algorithmic models. Matching is all about fixed point arithmetic and our risk model isn't really complicated enough to require GPUs. We can do it plenty fast enough using CPUs.

However, FPGAs are a slightly different case, because you can try to do pretty much anything with them. They are not something we've looked at yet, mainly because the development turnaround cycle can be quite slow for them and we push very hard for fast turnaround. However some of our vendors are looking at FPGAs for things like FIX parsing. The sort of activity that is really heavily commoditised is probably where we would look at them, but at the moment we have no plans.

http://www.automatedtrader.net/articles/exchange-views/137319/lmax-exchange-agile-challenge-to-the-status-quo
What about FPGAs?
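
As a side note, to make Barker's point about fixed-point arithmetic concrete, here is a minimal sketch (the Price type is made up for illustration, not LMAX or Graphene code) of how prices stored as scaled 64-bit integers keep order matching on exact integer comparisons:

Code:
#include <cstdint>
#include <iostream>

// Hypothetical fixed-point price: an int64 count of 1/10000ths of the
// quote unit. Matching then needs only integer compares, never floats.
struct Price {
    static constexpr int64_t SCALE = 10'000;
    int64_t raw; // e.g. 1.2345 is stored as 12345
};

// A bid crosses an ask when bid >= ask: an exact integer comparison,
// with none of the rounding surprises of floating point.
bool crosses(Price bid, Price ask) { return bid.raw >= ask.raw; }

int main() {
    Price bid{1 * Price::SCALE + 2'345}; // 1.2345
    Price ask{1 * Price::SCALE + 2'340}; // 1.2340
    std::cout << (crosses(bid, ask) ? "match" : "no match") << '\n';
}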

Title: Re: Can Graphene be accelerated by GPU?
Post by: Fox on September 05, 2015, 11:07:07 pm
Remember that Graphene employs a single-threaded design model. GPUs achieve their results through massive parallelism.
Title: Re: Can Graphene be accelerated by GPU?
Post by: luckybit on September 05, 2015, 11:10:34 pm
Quote
Remember that Graphene employs a single-threaded design model. GPUs achieve their results through massive parallelism.

That doesn't mean there can't be uses for them that optimize certain parts of Graphene which don't require a single thread.
Title: Re: Can Graphene be accelerated by GPU?
Post by: bulletproof on September 05, 2015, 11:16:40 pm
I believe this actually is single-threaded, not parallel. This is the YouTube blurb (my emphasis):

"There are many patterns and frameworks for concurrency and parallelism that are popular today, but is the throughput we need available in a single-threaded model if we just write code optimized to take advantage of how the hardware running our applications work? LMAX, a retail trading firm in the UK, has open sourced a concurrency pattern called the Disruptor, which enables the creation of graphs of dependent components to share data without locks or queues. This presentation will detail how LMAX was able to maximize the performance of their application, and then discuss things learned while porting the library to Scala"
Title: Re: Can Graphene be accelerated by GPU?
Post by: luckybit on September 05, 2015, 11:55:14 pm
Quote
I believe this actually is single-threaded, not parallel. This is the YouTube blurb (my emphasis):

"There are many patterns and frameworks for concurrency and parallelism that are popular today, but is the throughput we need available in a single-threaded model if we just write code optimized to take advantage of how the hardware running our applications work? LMAX, a retail trading firm in the UK, has open sourced a concurrency pattern called the Disruptor, which enables the creation of graphs of dependent components to share data without locks or queues. This presentation will detail how LMAX was able to maximize the performance of their application, and then discuss things learned while porting the library to Scala"

What about Graphene? Is Graphene seeking to be a clone of LMAX, or to go beyond it?

Title: Re: Can Graphene be accelerated by GPU?
Post by: xeroc on September 06, 2015, 08:39:16 am
You guys have not read: https://bitshares.org/technology/industrial-performance-and-scalability/
Quote
To achieve this industry-leading performance, BitShares has borrowed lessons learned from the LMAX Exchange, which is able to process 6 million transactions per second. Among these lessons are the following key points:

The disruptor basically verifies the signatures (which can be parallelized easily, also on a GPU or a cluster), and the (single-threaded) DEX engine matches the orders and puts them into a block.
From what I know, this is ALREADY implemented in BitShares.
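
Roughly, the split looks like this sketch; verify_signature and apply_to_state are hypothetical stand-ins, not the actual Graphene functions. Signatures are checked concurrently, then a single thread applies transactions in order, so determinism is preserved.

Code:
#include <cstddef>
#include <functional>
#include <future>
#include <iostream>
#include <vector>

// Hypothetical transaction type and checks standing in for Graphene's.
struct Transaction { int id; };

bool verify_signature(const Transaction&) { return true; } // stateless: safe to run in parallel
void apply_to_state(const Transaction& tx) {               // mutates state: single thread only
    std::cout << "applied tx " << tx.id << '\n';
}

int main() {
    std::vector<Transaction> block{{1}, {2}, {3}, {4}};

    // Stage 1: verify all signatures concurrently. This is the part that
    // could also be farmed out to a GPU or a cluster.
    std::vector<std::future<bool>> checks;
    for (const auto& tx : block)
        checks.push_back(std::async(std::launch::async, verify_signature, std::cref(tx)));

    // Stage 2: apply transactions in order on one thread, like the
    // single-threaded matching engine. The result is deterministic
    // because verification never touches shared state.
    for (std::size_t i = 0; i < block.size(); ++i)
        if (checks[i].get()) apply_to_state(block[i]);
}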
Title: Re: Can Graphene be accelerated by GPU?
Post by: luckybit on September 06, 2015, 12:36:21 pm
Quote
You guys have not read: https://bitshares.org/technology/industrial-performance-and-scalability/
Quote
To achieve this industry-leading performance, BitShares has borrowed lessons learned from the LMAX Exchange, which is able to process 6 million transactions per second. Among these lessons are the following key points:

The disruptor basically verifies the signatures (which can be parallelized easily, also on a GPU or a cluster), and the (single-threaded) DEX engine matches the orders and puts them into a block.
From what I know, this is ALREADY implemented in BitShares.

Of course I read it. I was trying to find a way to improve on it.
Title: Re: Can Graphene be accelerated by GPU?
Post by: sudo on September 08, 2015, 03:10:15 pm
Hash rescans and checkpoint verification could be GPU accelerated: many parts can be checked at the same time.
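
For illustration, a toy sketch of that idea; toy_hash and the checkpoint table are invented here, and a real node would hash serialized blocks against hard-coded checkpoint hashes. Each check is independent, so they can all run at once (CPU futures in this sketch, but the same shape maps onto a GPU).

Code:
#include <cstddef>
#include <cstdint>
#include <future>
#include <iostream>
#include <vector>

// Toy stand-in: a real node would hash serialized blocks and compare
// against hard-coded checkpoint hashes.
std::uint64_t toy_hash(std::uint64_t block_data) { return block_data * 2654435761u; }

int main() {
    std::vector<std::uint64_t> blocks{10, 20, 30, 40};
    std::vector<std::uint64_t> checkpoints;
    for (auto b : blocks) checkpoints.push_back(toy_hash(b));

    // Each block's hash check is independent of every other, so they can
    // all run at the same time.
    std::vector<std::future<bool>> results;
    for (std::size_t i = 0; i < blocks.size(); ++i)
        results.push_back(std::async(std::launch::async,
            [&blocks, &checkpoints, i] { return toy_hash(blocks[i]) == checkpoints[i]; }));

    bool all_ok = true;
    for (auto& r : results) all_ok = all_ok && r.get();
    std::cout << (all_ok ? "rescan OK" : "checkpoint mismatch") << '\n';
}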