Severe show-stopping ServerTime lag (Not to be mistaken for a UTC issue!!)

Created at 11 Aug 2015, 19:31

olddirtypipster

Joined 18.04.2014

11 Aug 2015, 19:31


Currently I have a test bot attached to cAlgo that records market depth by monitoring the MarketDepth Updated event. At the start of the bot:

 

        private MarketDepth _marketDepth;

        protected override void OnStart()
        {
            _marketDepth = MarketData.GetMarketDepth(Symbol);

            _marketDepth.Updated += MarketDepthOnUpdated;
        }

        private void MarketDepthOnUpdated()
        {
            if (_marketDepth.AskEntries.Count > 0 && _marketDepth.BidEntries.Count > 0)
            {
                CAlgoOrder order = new CAlgoOrder()
                {
                    AskPrices = _marketDepth.AskEntries,
                    BidPrices = _marketDepth.BidEntries,
                    BrokerName = Account.BrokerName,
                    CurrencyPair = Symbol.Code,
                    ServerTime = Server.Time,
                };

                //Do some more work after this
            }
        }

I let cAlgo run for a couple of minutes, and what I find is that, as time progresses, cAlgo's Server.Time (cAlgo.API.Internals.IServer) very quickly lags behind the current time.

For instance, if I started at 17:00:00 and cAlgo's Server.Time reports 17:00:00, then after about 30 minutes of cAlgo running time, cAlgo's Server.Time will report only 17:09:05, while the actual current time is 17:30:00.

cAlgo is now lagging by 20 minutes when posting its latest MarketDepth data!

This is completely unacceptable, as the lag exceeds an entire 10-minute bar. What this means is that if I were trading on anything below a 10-minute chart, say HFT or scalping on a 1-minute chart, then after 30 minutes of use the data I would be receiving would be at least 20 minutes in the past!!!!

This bug renders cAlgo completely useless.

 

 

I could be wrong, but I suspect that cAlgo is currently posting its MarketDepthOnUpdated data in the following very unoptimized manner (or something very similar).

When cAlgo receives real-time, low-latency streamed data from the cServer, it places this data into a pipelined BlockingCollection buffer, and then posts the MarketDepthOnUpdated notifications with a sequential, non-parallelized loop on a First-In-First-Out (FIFO) basis.

As a result, the rate at which cAlgo receives the LP's market data outstrips the rate at which that data is posted to the user. The buffer becomes backlogged with older and older market data, and, because delivery is FIFO, the user sees ever-increasing lag.
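If that hypothesis is right, the lag should grow linearly with runtime. A back-of-the-envelope model (my own illustration with made-up numbers, not cAlgo's actual internals) makes this concrete:

```csharp
using System;

class BacklogLagModel
{
    // FIFO backlog model: updates arrive every arrivalMs, but handing one to
    // the user costs serviceMs. When serviceMs > arrivalMs, the queue grows,
    // and the update being delivered "now" is increasingly old.
    public static double DataLagMs(double arrivalMs, double serviceMs, double wallClockMs)
    {
        if (serviceMs <= arrivalMs)
            return 0;                                // consumer keeps up: no lag
        double delivered = wallClockMs / serviceMs;  // updates delivered so far
        double newestDeliveredBornAt = delivered * arrivalMs;
        return wallClockMs - newestDeliveredBornAt;  // age of the update delivered "now"
    }

    static void Main()
    {
        // 10 ms between updates, 15 ms to deliver each one, 30 minutes of runtime:
        double lag = DataLagMs(10, 15, 30 * 60 * 1000);
        Console.WriteLine(TimeSpan.FromMilliseconds(lag)); // 00:10:00 — ten minutes stale
    }
}
```

With these illustrative rates, the timestamps trail by ten minutes after half an hour, which is the same order of staleness I measured above.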

To make a long story short, cAlgo is terribly unoptimized, and becomes useless for any real scalping or HFT algorithmic trading.

Spotware needs to address and fix this immediately, since it is a major show stopper.

 



@olddirtypipster
Replies

Spotware
12 Aug 2015, 12:12

Dear Trader,

cAlgo invokes all of a cBot's handlers on a single thread. All incoming events are added to a processing queue, and the cBot handles events one by one. If the cBot takes too much time to handle an event, the queue will accumulate more and more events, and that will cause handling of obsolete events. Please try to optimize the code of your cBot, especially the handler of the market depth changed event. If that doesn't help, we can recommend you move your calculations to a separate thread to minimize processing time in the main thread.


@Spotware

olddirtypipster
12 Aug 2015, 14:23

RE:

I appreciate your response.

In which case, my assessment of non-optimized coding for high-speed streams is correct.

Because cAlgo and cTrader (I tested this as well) push incoming events from the processing queue to the user via MarketDepthOnUpdated one by one (sequentially), if the rate of incoming data exceeds the rate at which the user pulls it in (which will be the case for low-latency, HFT-targeting liquidity providers during hyper market activity), then the processing queue becomes more and more backlogged, and the user suffers by receiving market data whose relevance to real time degrades over time.

Previously, I was placing the data as it came in into a BlockingCollection<MarketData> and then using a pipelined sequential foreach loop to pull the data out one by one.

What I discovered was that my BlockingCollection buffer became backlogged very rapidly, as the single foreach loop could not pull and process fast enough to keep up with the rapid incoming data. The problem becomes even more severe if one expects to perform any heavy analysis on the data packet before iterating the loop to fetch the next packet. Very soon you will find yourself operating on data several minutes in the past, as you simply cannot keep up.
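In code, the pipeline I describe looked roughly like this (a reconstruction for illustration; the snapshot type and method names are placeholders, not cAlgo API):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Rough shape of the producer/consumer pipeline described above: the event
// handler adds snapshots, and a single consumer loop pulls them out one by
// one. If Analyze() is slower than the arrival rate, Backlog only grows.
public class DepthPipeline
{
    private readonly BlockingCollection<string> _pipeline = new BlockingCollection<string>();
    private readonly Task _consumer;
    public int Processed;

    public DepthPipeline()
    {
        _consumer = Task.Run(() =>
        {
            // the "pipelined sequential foreach loop" pulling data out one by one
            foreach (var snapshot in _pipeline.GetConsumingEnumerable())
            {
                Analyze(snapshot);
                Interlocked.Increment(ref Processed);
            }
        });
    }

    // Called from the market depth event handler: cheap, returns immediately.
    public void OnDepthUpdated(string snapshot) => _pipeline.Add(snapshot);

    public int Backlog => _pipeline.Count;

    public void Shutdown() { _pipeline.CompleteAdding(); _consumer.Wait(); }

    private void Analyze(string snapshot) { /* stand-in for the heavy analysis */ }
}
```

This keeps the handler itself fast, but it only relocates the backlog into `_pipeline`: unless the consumer keeps up on average, the queue, and the staleness, still grow without bound.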

There are possible solutions, however:

On the user end, a parallelized loop will mitigate the degrading performance.

On the cAlgo developer side, I suggest employing a parallelized loop that dumps several market data orders to its internal processing queue for pickup by the user.

Secondly, instead of employing conventional .NET event messaging, I think it would be best to use the .NET Reactive Extensions: convert your MarketDataEvent message into an Observable stream and continuously stream it to a subscribed user.
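The Observable idea can be sketched with nothing but the BCL's IObservable<T>/IObserver<T> interfaces (the real Reactive Extensions library adds schedulers and LINQ operators on top; all names here are illustrative, not cAlgo API):

```csharp
using System;
using System.Collections.Generic;

// Hand-rolled push stream: the engine pushes each depth update to every
// subscriber instead of raising a conventional .NET event.
public sealed class DepthStream : IObservable<double>
{
    private readonly List<IObserver<double>> _subscribers = new List<IObserver<double>>();

    public IDisposable Subscribe(IObserver<double> observer)
    {
        _subscribers.Add(observer);
        return new Unsubscriber(_subscribers, observer);
    }

    // Called by the engine for each new price / depth change.
    public void Publish(double price)
    {
        foreach (var s in _subscribers.ToArray())
            s.OnNext(price);
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<double>> _subs;
        private readonly IObserver<double> _obs;
        public Unsubscriber(List<IObserver<double>> subs, IObserver<double> obs)
        {
            _subs = subs;
            _obs = obs;
        }
        public void Dispose() => _subs.Remove(_obs);
    }
}

// Trivial consumer that records every price pushed to it.
public sealed class PriceCollector : IObserver<double>
{
    public readonly List<double> Seen = new List<double>();
    public void OnNext(double value) => Seen.Add(value);
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}
```

The client subscribes once, receives pushes until it disposes the subscription, and with the real Rx library could compose filters and projections over the stream without blocking the source.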

Spotware said:

Dear Trader,

cAlgo invokes all of a cBot's handlers on a single thread. All incoming events are added to a processing queue, and the cBot handles events one by one. If the cBot takes too much time to handle an event, the queue will accumulate more and more events, and that will cause handling of obsolete events. Please try to optimize the code of your cBot, especially the handler of the market depth changed event. If that doesn't help, we can recommend you move your calculations to a separate thread to minimize processing time in the main thread.

 


@olddirtypipster

olddirtypipster
12 Aug 2015, 14:28

RE: RE:

Lastly, you could provide the user with a direct connection to the cServer MarketData stream, so that they may bypass the inefficiencies of cAlgo when acquiring market data. I am certain that this was a suggestion offered via a phone conversation before.

Cheers.

olddirtypipster said:

I appreciate your response.

In which case, my assessment of non-optimized coding for high-speed streams is correct.

Because cAlgo and cTrader (I tested this as well) push incoming events from the processing queue to the user via MarketDepthOnUpdated one by one (sequentially), if the rate of incoming data exceeds the rate at which the user pulls it in (which will be the case for low-latency, HFT-targeting liquidity providers during hyper market activity), then the processing queue becomes more and more backlogged, and the user suffers by receiving market data whose relevance to real time degrades over time.

Previously, I was placing the data as it came in into a BlockingCollection<MarketData> and then using a pipelined sequential foreach loop to pull the data out one by one.

What I discovered was that my BlockingCollection buffer became backlogged very rapidly, as the single foreach loop could not pull and process fast enough to keep up with the rapid incoming data. The problem becomes even more severe if one expects to perform any heavy analysis on the data packet before iterating the loop to fetch the next packet. Very soon you will find yourself operating on data several minutes in the past, as you simply cannot keep up.

There are possible solutions, however:

On the user end, a parallelized loop will mitigate the degrading performance.

On the cAlgo developer side, I suggest employing a parallelized loop that dumps several market data orders to its internal processing queue for pickup by the user.

Secondly, instead of employing conventional .NET event messaging, I think it would be best to use the .NET Reactive Extensions: convert your MarketDataEvent message into an Observable stream and continuously stream it to a subscribed user.

Spotware said:

Dear Trader,

cAlgo invokes all of a cBot's handlers on a single thread. All incoming events are added to a processing queue, and the cBot handles events one by one. If the cBot takes too much time to handle an event, the queue will accumulate more and more events, and that will cause handling of obsolete events. Please try to optimize the code of your cBot, especially the handler of the market depth changed event. If that doesn't help, we can recommend you move your calculations to a separate thread to minimize processing time in the main thread.

 

 


@olddirtypipster

Spotware
12 Aug 2015, 16:49

Dear Trader,

Thank you for your detailed feedback. 
Asynchronous event handling is too difficult for most of our users, so we do not plan to change the current message loop approach. We plan to skip old market depth changed events if a new event is already in the queue. We already did this for the OnTick event. We also plan to update market depth in the RefreshData method.

If you would like to work with cServer without cAlgo, please have a look at Connect API. Please note that it doesn't provide market depth yet.


@Spotware

olddirtypipster
12 Aug 2015, 17:17

RE:

Spotware said:

Dear Trader,

Thank you for your detailed feedback. 
Asynchronous event handling is too difficult for most of our users, so we do not plan to change the current message loop approach. We plan to skip old market depth changed events if a new event is already in the queue. We already did this for the OnTick event. We also plan to update market depth in the RefreshData method.

If you would like to work with cServer without cAlgo, please have a look at Connect API. Please note that it doesn't provide market depth yet.

Thank you for your reply.

When you say "We plan to skip old market depth changed events if a new event is already in the queue", are you implying that you plan to implement a fixed circular buffer that will purge old events not yet taken from the buffer, to leave space for newer events? If this is the case, the result would be gaps in the MarketData stream on the user end. If I understood you correctly, then I strongly advise against this.

The aim here is to have COMPLETE MarketDepth, not MarketDepth with gaps as a workaround for unoptimized code. As you know, a good trading platform must adhere to the following:

  • real-time reliability (no missing data, and reflects the current market)
  • adaptability (can adapt to varied trading conditions; slow/fast data in this case)
  • extendibility, scalability and modularity (allows users to interface with their financial data in a way that adheres to the first two conditions)

If you discard (skip) old market depth events in favor of publishing newer ones, the end result will be gaps in the market data during peak times. This violates the first and second principles, and everything goes from bad to worse. It is not a good solution in this case.

My suggestion is that you stream the data continuously, and allow the user full opportunity to take it all in with optimized code methodologies. As it is right now, cAlgo is throttling back.

 

You are correct in saying: "Asynchronous event handling is too difficult for most of our users". WHAT USERS MUST BE AWARE OF, however, is that if, after pulling in any data, their operation on that data takes more than a few milliseconds, they WILL suffer degrading performance due to increasing lag. This is inevitable given cAlgo's current design. Eliminating data to remove the backlog does not solve the problem; it makes the problem worse.
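For users stuck with the current design, one common mitigation is a single-slot "latest value" hand-off: the handler never blocks, the worker always analyses the freshest snapshot, and intermediate updates are dropped deliberately. Note this trades away exactly the completeness I argue for above, in exchange for bounded lag. A sketch (illustrative names, not cAlgo API):

```csharp
using System;
using System.Threading;

// Single-slot "latest value" hand-off: OnUpdate never blocks, and
// ProcessPending always works on the freshest snapshot, silently
// discarding anything that arrived in between.
public class LatestOnlyProcessor
{
    private string _latest;                    // null when nothing is pending
    public int Analysed;

    // Called from the event handler: O(1), never blocks.
    public void OnUpdate(string snapshot) => Interlocked.Exchange(ref _latest, snapshot);

    // Called in a loop on a worker thread.
    public bool ProcessPending()
    {
        string snapshot = Interlocked.Exchange(ref _latest, null);
        if (snapshot == null) return false;    // nothing new since last pass
        Analysed++;                            // stand-in for the heavy analysis
        return true;
    }
}
```

Because the slot holds at most one snapshot, lag is bounded by one analysis cycle, but any statistics that depend on seeing every depth update (the whole point of my bot) are lost.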

The Achilles heel of the Connect API is precisely that it does not support Market Depth. Much of the important information I require is hidden in the statistics buried inside the MarketDepth.

Until you are able to provide a MarketDepth data feed from Connect API, it is of no use to me.


@olddirtypipster

Spotware
12 Aug 2015, 17:49

When you say "We plan to skip old market depth changed events if a new event is already in the queue", are you implying that you plan to implement a fixed circular buffer that will purge old events not yet taken from the buffer, to leave space for newer events? If this is the case, the result would be gaps in the MarketData stream on the user end. If I understood you correctly, then I strongly advise against this.

No, the plan is not to invoke user handlers on obsolete events; API objects will still be updated by every message. For example, we invoke the OnTick handler on the newest tick only, while the MarketSeries object reflects all ticks. We believe this is the best approach for a single-threaded model.


@Spotware

olddirtypipster
12 Aug 2015, 18:17

RE:

Spotware said:

When you say "We plan to skip old market depth changed events if a new event is already in the queue", are you implying that you plan to implement a fixed circular buffer that will purge old events not yet taken from the buffer, to leave space for newer events? If this is the case, the result would be gaps in the MarketData stream on the user end. If I understood you correctly, then I strongly advise against this.

No, the plan is not to invoke user handlers on obsolete events; API objects will still be updated by every message. For example, we invoke the OnTick handler on the newest tick only, while the MarketSeries object reflects all ticks. We believe this is the best approach for a single-threaded model.

Could you then be clearer as to what you define as an 'obsolete' event? In my mind, if a market data event occurred, then it is important and I want to know about it. How can it be obsolete in this case?


@olddirtypipster

olddirtypipster
12 Aug 2015, 18:31

RE: RE:

In reading your last response again, it seems clearer now. Your aim is to post only changes to the orderbook, not the full orderbook. This is similar to how it may be done using the raw FIX protocol (incremental updates versus the full book every time).

What is very important here then, is that you provide a means for the user to know when a bid/ask price is no longer on the orderbook.

For example, the first time you send the book, you might have the following depth:

 

1.55645 @ 100000

1.55643 @ 50000

1.55640 @ 200000

Now, let us say that 1.55643 is no longer on the book. When you next send an incremental update, you need to bear in mind that on the user end I still have price 1.55643 @ 50000 from my last refresh. You need to implement a mechanism that informs the client that 1.55643 has been removed from the orderbook during the next refresh.

What would also be brilliant is if you could also let the client know why this price is no longer on the book (was this contract cancelled, sold out, or what?). The suggestion you made could work in reducing data-throughput overload, but unless you accommodate the fact that the user must now do most of the work in updating their latest orderbook from the most recent cServer publishes, everything is rendered useless.

 

I would be very interested to know how this new feature will work.

 

This makes sense.

olddirtypipster said:

Spotware said:

When you say "We plan to skip old market depth changed events if a new event is already in the queue", are you implying that you plan to implement a fixed circular buffer that will purge old events not yet taken from the buffer, to leave space for newer events? If this is the case, the result would be gaps in the MarketData stream on the user end. If I understood you correctly, then I strongly advise against this.

No, the plan is not to invoke user handlers on obsolete events; API objects will still be updated by every message. For example, we invoke the OnTick handler on the newest tick only, while the MarketSeries object reflects all ticks. We believe this is the best approach for a single-threaded model.

Could you then be clearer as to what you define as an 'obsolete' event? In my mind, if a market data event occurred, then it is important and I want to know about it. How can it be obsolete in this case?

 


@olddirtypipster

olddirtypipster
12 Aug 2015, 19:06

If your aim is to revamp the current way MarketData updates are posted, I would like to suggest a few changes that more closely reflect true FIX protocol when posting incremental orderbook updates.

You can research and validate these provisions at the official www.FIXprotocol.org website. The modifications I would suggest are as follows:

Each price update should have the following codes attached:

An MDUpdateAction that informs the client of the nature of the update that occurred to the most recent price in the incremental update.

A DeleteReason that informs the user of why a price level needs to be deleted from the previous orderbook.

An MDEntryID that assigns a unique integer id to a price that is being added or updated in the newest incremental update. This id is reset when the price is deleted.

An LpID which, for non-aggregated market depth, ties each price level to a unique ID. While this will not reveal the originating liquidity provider, it will allow the client to differentiate between prices and their providers.

The new MarketDepthEntry structure would be as follows:

    public sealed class MarketDepthEntry
    {
        public decimal Price { get; set; }
        public double Volume { get; set; }
        public MDUpdateAction Action { get; set; }
        public DeleteReason Reason { get; set; }
        public int MDEntryID { get; set; }
        public int LpID { get; set; }
    }

where MDUpdateAction and DeleteReason are enums.
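For reference, the two enums could be shaped like this (values loosely modeled on FIX tag 279 MDUpdateAction; the DeleteReason values follow the CANCELED/FILLED idea above rather than the exact FIX tag 285 codes):

```csharp
// Possible shapes for the two enums referenced by the struct above.
public enum MDUpdateAction { New = 0, Update = 1, Delete = 2 }   // FIX tag 279 calls value 1 "Change"

public enum DeleteReason { None = 0, Canceled = 1, Filled = 2 }  // per the suggestion above, not FIX tag 285
```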

Hopefully, this is the direction you plan to take.

 

I'd be very interested to hear what you think.

 

-OldDirty-

 


@olddirtypipster

Spotware
13 Aug 2015, 14:17

Dear Trader,

Thank you for your suggestions. Even if we add more info to the market depth entry, it will not solve your current problem. The problem is that the market depth changed event is invoked more frequently than your cBot can handle. We understand that you would like to see non-aggregated market depth, but currently there are no plans to implement it.


@Spotware

olddirtypipster
13 Aug 2015, 16:35

RE:

Spotware said:

For example we invoke OnTick handler on the newest tick only, while MarketSeries object reflects all ticks. We believe it is the best approach for single thread model.

Your idea of a solution, from what I gather, is to send updates only when the orderbook has changed, with only those changed orders, instead of sending the entire orderbook. This would significantly reduce data-throughput overload and is a viable solution.

However, please note that unless you modify the data packet to include information telling the client which price now needs to be removed from the newly updated orderbook, you will incur other problems.

Currently, you are posting a complete orderbook as an IReadonlyList<MarketDepthEntry> of price depth, where each element contains a price level of the form:

    public sealed class MarketDepthEntry
    {
        public decimal Price { get; set; }
        public double Volume { get; set; }
    }

When you move away from sending a complete orderbook to sending incremental changes, you will need to add at least two additional fields to inform the client of the changes that have taken place between the current and previous orderbook. This information is critical for the client to reconstruct the orderbook for each incremental change. At a minimum, you will require the following two fields:

    public MDUpdateAction Action { get; set; } // informs the client of a NEW, UPDATE or DELETE for this price level, i.e. whether contract volume for this price has appeared, changed, or vanished
    public DeleteReason Reason { get; set; }   // informs the user of the REASON a price level must be DELETED (CANCELED or FILLED)

The other two fields support non-aggregated market depth, as they allow the client to keep track of each individual price. These are:

    public int MDEntryID { get; set; } // a unique id that tags each price level so users may track individual contracts
    public int LpID { get; set; }      // a unique id that tags each price level so clients may track the pseudo-origin of the contract; it will not reveal the actual liquidity provider, but allows the client to differentiate between contract origins

These last two fields are OPTIONAL, and it is at the broker's discretion to fill them or leave them NULL.

The first two fields are critical to the changes you proposed to solve the existing problem.

If the orderbook is sent incrementally instead of in full, packet size is reduced and buffer overrun is ameliorated. It will go a long way toward solving the issues observed.
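On the client side, reconstructing the book from such incremental updates could look like this (a sketch built on the fields proposed above; a real implementation would keep bid and ask sides separately and validate sequence numbers):

```csharp
using System;
using System.Collections.Generic;

public enum MDUpdateAction { New, Update, Delete }

// Client-side book maintenance driven by the proposed Action field:
// New/Update set the volume at a price level, Delete removes the level,
// which is exactly the "tell me when a price is gone" mechanism above.
public class OrderBookSide
{
    private readonly SortedDictionary<decimal, double> _levels =
        new SortedDictionary<decimal, double>();

    public void Apply(MDUpdateAction action, decimal price, double volume)
    {
        if (action == MDUpdateAction.Delete)
            _levels.Remove(price);        // level is no longer on the book
        else
            _levels[price] = volume;      // insert or overwrite the level
    }

    public bool Contains(decimal price) => _levels.ContainsKey(price);
    public int Depth => _levels.Count;
}
```

With this, the stale-level problem described earlier (1.55643 lingering in the client's copy after it left the book) disappears, because the Delete action removes it explicitly.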

holla back.

-OldDirty-


@olddirtypipster

Spotware
13 Aug 2015, 17:16

When you move away from sending a complete orderbook to sending incremental changes...

Currently there are no plans to change the market depth API. We just plan to invoke the market depth changed event only on the latest available data, as we already did for the OnTick event.


@Spotware

olddirtypipster
13 Aug 2015, 18:13

RE:

The issue is not with the cBot. The issue is with the way cAlgo posts updates internally. Consider the critical section of my cBot code:

        protected override void OnStart()
        {
            _marketDepth = MarketData.GetMarketDepth(Symbol);

            _marketDepth.Updated += MarketDepthOnUpdated;
        }

        private void MarketDepthOnUpdated()
        {
            if (_marketDepth.AskEntries.Count > 0 && _marketDepth.BidEntries.Count > 0)
            {
                CAlgoOrder order = new CAlgoOrder()
                {
                    AskPrices = _marketDepth.AskEntries,
                    BidPrices = _marketDepth.BidEntries,
                    BrokerName = Account.BrokerName,
                    CurrencyPair = Symbol.Code,
                    ServerTime = Server.Time,              // Breakpoint placed here for a confirmation test (see below)
                };

                // _clientCommunicationPipeline.Add(order);  // uncomment to add the order to the BlockingCollection pipeline
                // DoWorkDirectly(order);                    // uncomment to do the job directly without using the BlockingCollection
            }
        }

TEST 1: In the first test I carried out, I filled the prices as they came in into a BlockingCollection and used a single-threaded foreach loop to pull individual prices from the orderbook. Lag occurred within the first 10 minutes of use. With a breakpoint placed on the highlighted line, the internal cAlgo Server.Time remained synced with the current time.

TEST 2: I replaced the single-threaded foreach loop with a Parallel.ForEach loop to pull prices from my BlockingCollection. Prices were pulled significantly quicker and the lag was less severe, but still present; it took around 30 minutes to see significant lag. With a breakpoint placed on the highlighted line, the internal cAlgo Server.Time remained synced with the current time.

TEST 3: I did not use the BlockingCollection but instead performed the job directly as the prices came in. Lag was 100% eliminated provided the job was not CPU intensive; if it became too time-consuming, lag persisted.

The first two tests would obviously have led to lag if we could not pull prices out fast enough, but the third test was a dead giveaway that the fault lies solely with cAlgo. What is occurring here is that cAlgo also has its own internal buffer that stores prices before they are posted to the client bot.

As the Updated event that notifies MarketDepthOnUpdated is blocking, if it is blocked for too long by lengthy processing, cAlgo's internal buffer becomes backlogged, just as the buffers in TEST 1 and TEST 2 became backlogged. As a result, lag persists.

To confirm this, I commented out the two lines of code shown above and paused at the breakpoint for a few seconds at a time over the course of several minutes. Sure enough, after doing this a few times I was able to get Server.Time permanently lagged behind the current time!

The only way I could set the time back to normal was to restart the bot.

Now we see that the user faces a permanent dilemma if they need to perform CPU-intensive analysis of the incoming data:

1) If they opt to store the data in a buffer and loop over each item for analysis, the buffer will overrun, and they will lag.

2) If they opt to eliminate their buffer and perform the CPU-intensive analysis immediately as the data comes in, then every operation on a packet blocks further update posts from cAlgo's _marketDepth.Updated event, and cAlgo's internal buffer will overrun, eventually resulting in lag.

Sorry, but cAlgo is at fault here! It was not designed with the following premises in mind:

    real-time reliability (no missing data, and reflects the current market)
    adaptability (can adapt to varied trading conditions; slow/fast data in this case)
    extendibility, scalability and modularity (allows users to interface with their financial data in a way that adheres to the first two conditions)
    
So what are the REAL solutions?

Firstly, reducing data packet size will definitely help; the less immediate analysis that needs to be done, the better.

Secondly, cAlgo's internal thread responsible for posting _marketDepth.Updated NEEDS TO BE PARALLELIZED! It should not be running on a single thread, but should be grabbing as many updates from its own internal buffer as quickly as possible. This will minimize the degree of internal buffer backlog.

Lastly, deprecate the use of events when posting your data. This should instead be a Reactive stream that the user subscribes to. Below is an example of the classic case, and how I converted it to a stream.

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Classic case of using and handling events ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    //MarketDataEngine class inside the main cAlgo/cTrader engine
    public class MarketDataEngine : ThreadObject
    {
        internal MarketDepthEntry[] marketDataInternalBuffer;   //assume this is being filled asynchronously by some TCP/IP connection to cServer

        //this is the event that the cBot must subscribe to in order to get market depth updates
        public delegate void Updated(MarketDepthEntry entry);

        public event Updated MarketDepthNotification;

        //Here, the worker thread posts a market depth notification as rapidly as it can using a for loop (thread safety has been omitted for clarity)
        protected override void WorkerThread(object parameters)
        {
            //the serialized for loop I was talking about. This should really be parallelized.
            for (int tick = 0; tick < marketDataInternalBuffer.Length; tick++)
            {
                if (MarketDepthNotification != null)
                    MarketDepthNotification(marketDataInternalBuffer[tick]);
            }
        }
    }
    
    
    
    
    
    
    //Market depth consumer class inside the cBot
    public class ConsumerClass
    {
        private MarketDataEngine _marketDepth;

        public ConsumerClass()
        {
            _marketDepth = new MarketDataEngine();

            //This is the classic way of subscribing to the event
            _marketDepth.MarketDepthNotification += MarketDepthOnUpdated;
        }

        //This is the classical way of handling notifications in the client consumer class (in this case the cBot).
        //We would like to go to town with this data, but the lag we incur makes this impossible.
        //After a few minutes, we find that the data being sent is no longer current, since the MarketDataEngine
        //is sending us old data backlogged in its internal MarketDepthEntry[] buffer. So sad.
        private void MarketDepthOnUpdated(MarketDepthEntry entry)
        {
            //This blocks future updates, causing lag, as it makes cAlgo's internal buffer backlog while it waits to post the next update.
            CpuIntensiveAnalysisOfData(entry);
        }

        private void CpuIntensiveAnalysisOfData(MarketDepthEntry entry)
        {
            //heavy work here
        }
    }
    

 

 


~~~~~~~~~~~~~~~~~~~~~~ Here is how it is done using Reactive Extensions (this is where the magic begins) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    
    public class ObservableMarketDataEngineStream : ThreadObject
    {
        internal MarketDepthEntry[] marketDataInternalBuffer;   //assume this is being filled asynchronously by some TCP/IP connection to cServer

        //this is still the underlying event; the cBot no longer subscribes to it directly
        public delegate void Updated(MarketDepthEntry entry);

        public event Updated MarketDepthNotification;

        //This is how we convert the event into a NON-BLOCKING continuous stream of data
        public IObservable<MarketDepthEntry> MarketDataStream
        {
            get
            {
                return Observable.FromEvent<Updated, MarketDepthEntry>(
                    h => entry => h(entry),
                    h => MarketDepthNotification += h,
                    h => MarketDepthNotification -= h);
            }
        }

        //Here, the worker thread posts a notification for each buffered entry
        protected override void WorkerThread(object parameters)
        {
            //still should be parallelized as part of the cAlgo optimization strategy
            for (int tick = 0; tick < marketDataInternalBuffer.Length; tick++)
            {
                if (MarketDepthNotification != null)
                    MarketDepthNotification(marketDataInternalBuffer[tick]);
            }
        }
    }
    
    
    
    

    //Market depth consumer class inside the cBot extension
    public class ConsumerClass
    {
        private ObservableMarketDataEngineStream _marketDepth;

        public ConsumerClass()
        {
            _marketDepth = new ObservableMarketDataEngineStream();
            MarketDataStream = _marketDepth.MarketDataStream;
        }

        //The stream is exposed so the cBot can subscribe to it
        public IObservable<MarketDepthEntry> MarketDataStream
        {
            get;
            set;
        }
    }

 

The client can then subscribe to the stream as follows:

    var dataStream = (from stream in MarketDataStream select stream).Subscribe(DoStuffFromThisDelegate);

    void DoStuffFromThisDelegate(MarketDepthEntry entry)
    {
    }

or perform LINQ queries directly on the stream in a non-blocking fashion, like so:

    var filteredDataStream = (from stream in MarketDataStream
                              where stream.Price > 1.54670m && stream.Volume > 50000
                              select stream).Subscribe(DoStuffFromThisDelegate);

 

This is truly magical!

 

There are some important things to note here:

In the first case, because of the way events are posted and handled, operations that take a long time to complete will cause cAlgo/cTrader to block future posts, causing its internal buffer to backlog.

In the modified case, because the events have been converted into reactive streams, blocking issues inside cAlgo's engine are eliminated while streaming.


@olddirtypipster

olddirtypipster
13 Aug 2015, 18:21

RE:

Spotware said:

When you move away from sending a complete orderbook to sending incremental changes...

Currently there are no plans to change market depth API. We just plan to invoke market depth changed event only on latest available data as we already did for OnTick event.

In this case, I can guarantee that the impact of this will be minimal, as you will simply be eliminating repeated prices. This will not meaningfully reduce the amount of data being transmitted, and will most certainly not solve this issue.

 

Look up.

 

See that post above right there? It contains a solution that will work. If you can't or don't want to do it, I can. Just say the word.


@olddirtypipster

olddirtypipster
13 Aug 2015, 18:35

RE:

Spotware said:

When you move away from sending a complete orderbook to sending incremental changes...

Currently there are no plans to change market depth API. We just plan to invoke market depth changed event only on latest available data as we already did for OnTick event.

And another thing...

 

There are times when the market is not very active, that the orderbook display is stationary.

Does this not mean that you are ALREADY only sending updated data as it comes in? Why then say that you will fix the problem by doing something that you are already doing???

This makes no sense at all...


@olddirtypipster

Spotware
14 Aug 2015, 09:18

Dear Trader,

As we already said we plan to skip old market depth changed events if new event is already in the queue. We already did it for OnTick event. We also plan to update market depth in RefreshData method.

Why do you think that the above changes will not help you?


@Spotware

olddirtypipster
14 Aug 2015, 11:18

RE:

Spotware said:

Dear Trader,

As we already said we plan to skip old market depth changed events if new event is already in the queue. We already did it for OnTick event. We also plan to update market depth in RefreshData method.

Why do you think that the above changes will not help you?

I think you will need to be clearer on what you mean by "skip old market depth changes". I suspect that on two occasions, I have misinterpreted your statement for a solution.

1) In the first instance, I assumed that you were going to move away from pushing the full market depth every time, to sending only incremental changes.

2) Then, I assumed that what you meant was that you would only push market depth when the cServer notifies of new depth.

 

If either of these assumptions is correct, please identify which one. If neither is, I look forward to you providing additional clarity on this matter.

 

On the other hand, should the first case be the correct assumption, you will DEFINITELY need to augment your API to make the orderbook useful in any way. I have explained the details of why this must be so in a previous post.

If the second assumption I made is correct, then this is not a solution since this is already being done; the orderbook window does not refresh when the market is inactive, meaning that it only responds when cServer is sending new depth data.


@olddirtypipster

Spotware
14 Aug 2015, 12:30

Dear Trader,

Neither 1) nor 2) is correct. Let's imagine that your cBot takes 5 seconds to handle a market depth changed event, and that market depth changes every 2 seconds.

Current flow:

  • Time: 0s. Market depth changes first time, cBot starts to handle State #1
  • Time: 2s. Market depth changes second time, cBot is still handling State #1
  • Time: 4s. Market depth changes third time, cBot is still handling State #1
  • Time: 5s. cBot finished handling of State #1, cBot starts to handle State #2. State #3 is still in the queue.

How we are going to change the flow:

  • Time: 0s. Market depth changes first time, cBot starts to handle State #1
  • Time: 2s. Market depth changes second time, cBot is still handling State #1
  • Time: 4s. Market depth changes third time, cBot is still handling State #1
  • Time: 5s. cBot finished handling of State #1, cBot starts to handle State #3. State #2 is skipped because it is obsolete.
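
The flow above amounts to a one-slot "keep latest" buffer: while the handler is busy, newer states overwrite the queued one. A minimal sketch of the idea (illustrative only, not cAlgo internals):

```csharp
// Sketch of the "skip obsolete, keep latest" conflation described above.
// While the handler works on State #1, Post(State #2) is overwritten by
// Post(State #3), so TakeLatest() hands the handler State #3 next.
using System.Threading;

public class ConflatingSlot<T> where T : class
{
    private T _pending; // at most one queued state; newer posts replace it

    public void Post(T state)
    {
        Interlocked.Exchange(ref _pending, state);
    }

    public T TakeLatest()
    {
        // Returns the newest queued state (or null) and empties the slot.
        return Interlocked.Exchange(ref _pending, null);
    }
}
```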

@Spotware

olddirtypipster
14 Aug 2015, 13:17

RE:

Spotware said:

Dear Trader,

Neither 1) nor 2) is correct. Let's imagine that your cBot takes 5 seconds to handle a market depth changed event, and that market depth changes every 2 seconds.

Current flow:

  • Time: 0s. Market depth changes first time, cBot starts to handle State #1
  • Time: 2s. Market depth changes second time, cBot is still handling State #1
  • Time: 4s. Market depth changes third time, cBot is still handling State #1
  • Time: 5s. cBot finished handling of State #1, cBot starts to handle State #2. State #3 is still in the queue.

How we are going to change the flow:

  • Time: 0s. Market depth changes first time, cBot starts to handle State #1
  • Time: 2s. Market depth changes second time, cBot is still handling State #1
  • Time: 4s. Market depth changes third time, cBot is still handling State #1
  • Time: 5s. cBot finished handling of State #1, cBot starts to handle State #3. State #2 is skipped because it is obsolete.

This would mean that the user will not have access to the complete data stream. cAlgo will now be discarding packets before they ever reach the user because they are in the past.


@olddirtypipster

olddirtypipster
14 Aug 2015, 13:51

Just because a data packet is in the past does not make it obsolete! Each packet carries with it a statistically important event that needs to be accounted for when building a wholesome picture of the market order flow. You cannot just discard these because they are in the past and the user-end is not keeping up.

This is like saying that because the user's cBot is unoptimized and slow, cAlgo will help it along by deleting some of its data so the user can catch up. If the user-end wants to skip packets because it cannot keep up, that should be left to its discretion.

 

cAlgo needs to GUARANTEE the delivery of ALL data to the user, versus deleting this data because its internal buffer is full, and this is not what your proposed fix will be doing.

 

A mandatory requirement is to guarantee the delivery of ALL data to the user-end by flushing ALL past data to the Subscribed user before the arrival of the next packet.

This can be accomplished by:

 

1) reducing the packet size by sending incremental market depth updates instead of the full orderbook each time.

2) approaching the data delivery algorithm heuristically; that is, the parallelized for loop should be intelligent enough to know when the internal buffer has more than one entry, and flush the entire list out to the user as an Observable<T> stream.

This is entirely possible, but it would require that you replace your blocking event handler with a reactive stream instead. The problem with using a for loop is that you guarantee blocking in the loop when the cBot handler takes too long to return control back to the loop. You do not have this blocking effect when you use asynchronous Observable streams.
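A sketch of point 2), assuming Rx (System.Reactive) and the illustrative MarketDataEntry type used earlier in the thread: when more than one entry has accumulated, the whole backlog is flushed to subscribers in order, instead of stale entries being discarded:

```csharp
// Sketch of point 2): flush the entire backlog out to subscribers, in
// order, rather than discarding "obsolete" states. Illustrative only;
// these are not cAlgo internals.
using System;
using System.Collections.Generic;
using System.Reactive.Subjects;

public class MarketDataEntry
{
    public decimal Price;
    public long Volume;
}

public class BufferedDepthPublisher
{
    private readonly Queue<MarketDataEntry> _backlog = new Queue<MarketDataEntry>();
    private readonly Subject<MarketDataEntry> _out = new Subject<MarketDataEntry>();

    // Consumers subscribe here and receive every entry ever posted.
    public IObservable<MarketDataEntry> Stream { get { return _out; } }

    // Called by the engine loop: if more than one entry has accumulated,
    // the whole list is flushed before the next packet arrives.
    public void Post(MarketDataEntry entry)
    {
        _backlog.Enqueue(entry);
        while (_backlog.Count > 0)
            _out.OnNext(_backlog.Dequeue());
    }
}
```

Delivery of every packet is preserved; bounding the queue (true back pressure) is then a policy the subscriber can choose, rather than one the engine imposes.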

I suggest you review the two videos on reactive streaming below to see where I am coming from. They illustrate how Reactive Extensions (Rx) by Microsoft can help solve the problem you are facing.

https://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Event-Processing  (VERY ILLUMINATING VIDEO!!)

https://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Writing-Queries

I seriously ask that you reconsider the way you will implement this fix. If you proceed with your current method, you will severely hamper the usefulness of this platform.

Holla back.


@olddirtypipster

olddirtypipster
14 Aug 2015, 14:07

Two succinct quotes that highlight the principal concept behind reactive streaming:

"Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure."

"...The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary ... while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, back pressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded."

Done properly, this will solve your problem in one fell swoop.

You can read more at http://www.reactive-streams.org/


@olddirtypipster

Spotware
14 Aug 2015, 14:09

We will think about that. Thank you for your suggestions.


@Spotware

olddirtypipster
14 Aug 2015, 15:26

RE:

Spotware said:

We will think about that. Thank you for your suggestions.

This is a good first step. Please keep us informed and up to date. This is a major bug that needs to be addressed promptly.


@olddirtypipster

taalamu
05 Feb 2016, 20:01

Market Depth

OldDirty

As a former futures trader, where Market Depth information/analysis was readily available, I was appalled to learn that such information was not readily available in Forex.

I could never understand how people in charge of providing liquidity could even think to take positions without knowing what the other big players in the game were doing. That brought me to cTrader. But then I learned that there were issues with that data...

I read with interest your contributions in another thread on the DOM, and I appreciate all the effort you are expending in this direction. As a non-programmer, I cannot contribute in any meaningful way to the current discussion, except to encourage your efforts and to say that I personally would appreciate truly non-aggregated, real-time, complete market depth on my cTrader platform.

I wish you all the success in the world.

 

Thanks Again

T

(Magere Taalamu)

 


@taalamu