Part 7 - Fetching Data and Multithreading

Can you use more than 1 thread to get data via OData? What happens when you do?

We'll take a look at how to use more than one thread to get data from Finance and Operations via OData. As a general rule, you should keep to one thread per interaction. There are plenty of scenarios where OData is a good fit, but once you have to manipulate a large number of records, some other technology or approach should be used. OData is, in general, best at doing 1 thing fast rather than 100 things fast. However, don't take that from me. Let's look at the data. This was designed more as a "can I break stuff" test, but I haven't been able to break anything yet. We can review the data and see if any trends emerge. All code for this can be found here.
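For context, a single OData read of one record looks roughly like the sketch below. This is illustrative Python, not the article's actual code; the environment URL and the bearer token are placeholders, and paging with `$top`/`$skip` is just one way to grab an arbitrary record.

```python
import urllib.request

# Hypothetical environment URL; a real one comes from your F&O instance.
BASE_URL = "https://example.operations.dynamics.com/data"


def build_odata_url(entity: str, top: int = 1, skip: int = 0) -> str:
    """Build an OData query URL fetching `top` records starting at offset `skip`."""
    return f"{BASE_URL}/{entity}?$top={top}&$skip={skip}"


def fetch_one(entity: str, token: str, skip: int = 0) -> bytes:
    """Read a single record; `token` is an OAuth bearer token acquired elsewhere."""
    req = urllib.request.Request(
        build_odata_url(entity, top=1, skip=skip),
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```

Each such request is one full HTTP round trip, which is why OData is better at "1 thing fast" than bulk work.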

The Tests

I set up one basic test for this: get 1 random record from the SalesOrderHeaderV2 entity 100 times. I wrapped this in a class that could be run as a thread, then started stacking up threads in test groups to see what happens. One test fetched 100 records as a single-threaded operation, another fetched with 10 threads (totaling 100*10 reads), another with 25 threads, and so on. All tests were performed on a Development environment using the VHD image running locally. As a callout, I ran the same tests against a UAT environment and started hitting what appeared to be some resource-limiting technology that my development environment either didn't have or didn't have configured.

The Results

To call out how CPU affects the results, we have two sets of results: one using a 6-core CPU VHD and another using a 12-core CPU VHD.


All times are in milliseconds, and there is some loss of precision because of some large numbers, but in general it's easy to see what is happening. As we increase the number of threads, the time it takes to complete the requests for each thread also increases. This is what we would expect, but it's interesting to see how it scales.


Like the last graph, all times are in milliseconds, and we see the same basic shape in the data. As the amount of work goes up, the time it takes to complete that work also goes up.

Side By Side

This is the same data as the other graphs, just placed side by side. You can see the 12-core instance had better overall performance, so we can conclude this is mostly a CPU-constrained issue rather than anything else. For the sake of this test, we were randomly grabbing any one Sales Order header from a pool of around 15 thousand, so by the end of the run it is almost certain that every record was being served from a SQL-level cache.


When running the same tests against a UAT environment, I got some different and unexpected results. First, these tests include network overhead from my development machine to Azure and back (from Detroit, MI to US East 2, to be a little more specific). Additionally, it appears that a UAT environment deployed in a Service Fabric configuration has some additional throttling or flood-control technology getting in the way that my development environment didn't have. This is good to be aware of: what works in a development environment may not work in UAT or Prod. We used the same set of tests, but at 250 and 500 threads all requests started timing out, so there was no data to collect. This could be because of environment sizing or something else; I couldn't determine the exact cause. No throttling had been configured in Finance and Operations itself, so all requests should have been treated equally. You can see the results below. In general, they were much slower than on a development environment, more than likely due to network overhead.
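The UAT timeouts above suggest some form of server-side throttling. One defensive pattern when you suspect throttling (not something the article's tests did) is a retry with exponential backoff; here `do_request` stands in for whatever performs the OData call, and the attempt count and delay are arbitrary illustration values.

```python
import time


def with_backoff(do_request, attempts: int = 4, base_delay: float = 0.5):
    """Call do_request(); on failure, retry with a doubling delay between attempts."""
    for attempt in range(attempts):
        try:
            return do_request()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

Backing off at least gives a throttled environment time to recover, though as the results here show, the better fix is usually to reduce concurrency in the first place.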

TL;DR and Key Takeaways

  • More work means longer working times
  • Largely a CPU constrained throughput problem
  • More CPU means more speed
  • Request processing times scale linearly with thread count
  • Development, UAT and Prod could or will act differently
  • Don't do this

All code can be found at and data can be found at
