Comments (7)

hootnot commented on June 2, 2024

Hi @davidandreoletti

Can you provide some example code regarding the performance issue you have: the instruments you request, the frequency ... the details I need so that I can reproduce what you experience?

Regards,
Feite

davidandreoletti commented on June 2, 2024

@hootnot

the instruments you request, the frequency ... the details I need so that I can reproduce what you experience?

Simply run the OANDA tests here. Each thread can issue at most 2 requests/s [1] (on my single-core virtual machine). My machine is in Asia and is hitting a US server, I believe. There is some network latency involved, but that should not have much impact if the requests were truly asynchronous. FYI: I am aiming for about 10-15 requests/s as a performance target.

[1] Uncomment this line to see the current number of requests/second
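
To make the measurement concrete, here is a minimal sketch of how a requests/second figure could be obtained by timing plain sequential oandapyV20 candle requests (the token and instrument list are placeholders, not the code from the linked tests):

# Minimal sketch: time N sequential candle requests with oandapyV20 and
# print the resulting requests/second. ACCESS_TOKEN is a placeholder.
import time
from oandapyV20 import API
import oandapyV20.endpoints.instruments as instruments

ACCESS_TOKEN = "..."  # placeholder
INSTRUMENTS = ["EUR_USD", "EUR_JPY", "EUR_AUD", "EUR_GBP"]

client = API(access_token=ACCESS_TOKEN, environment="practice")

start = time.time()
for instr in INSTRUMENTS:
    r = instruments.InstrumentsCandles(
        instrument=instr,
        params={"granularity": "M5", "count": 5000},
    )
    client.request(r)  # blocking request
elapsed = time.time() - start
print("%.2f requests/second" % (len(INSTRUMENTS) / elapsed))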

hootnot commented on June 2, 2024

@davidandreoletti

I had some issues running the code you pointed to. I also guess it is your fx-oanda-historical branch that I need from your forked repo.

Nevertheless, I did some experiments with oandapyV20 using https://github.com/ross/requests-futures.
I made some modifications to the example code of candle-data.py to test. It looks promising: the async version completes in about half the time.

I have created a branch async-requests-futures and added the modified candle-data example.

Please clone it and install it locally, maybe in a separate virtualenv to be sure. Check the candle-data-par.py example for the details. I think the simplest thing to do is to isolate the problem by creating a small script that only does the (parallel) requests you want. The candle-data-par.py example is probably a good start, since you make the same kind of requests.
Please let me know if this helps you tackle your problem.

I will see if/how/when to implement async requests in oandapyV20.
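
For readers who just want the shape of the approach, here is a minimal sketch along the same lines using requests-futures against the v20 candles REST endpoint. It is not the actual candle-data-par.py example; the token and parameters are placeholders:

# Sketch only: submit all candle requests up front via requests-futures,
# then block for the responses. Token and parameters are placeholders.
from requests_futures.sessions import FuturesSession

ACCESS_TOKEN = "..."  # placeholder
URL = "https://api-fxpractice.oanda.com/v3/instruments/{}/candles"
HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}
PARAMS = {"granularity": "M5", "count": 5000}
INSTRUMENTS = ["EUR_USD", "EUR_JPY", "EUR_AUD", "EUR_GBP",
               "DE30_EUR", "EUR_ZAR", "US30_USD", "NAS100_USD"]

session = FuturesSession(max_workers=8)

# submit everything first; the worker threads issue the requests in parallel
futures = {i: session.get(URL.format(i), headers=HEADERS, params=PARAMS)
           for i in INSTRUMENTS}

# only now block for the results
for instr, fut in futures.items():
    resp = fut.result()
    print(instr, resp.status_code, len(resp.json().get("candles", [])))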

(adrenv) hootnot@oatr:~/oanda-api-v20/examples $ time python src/candle-data.py --i EUR_USD --i EUR_JPY --i EUR_AUD --i EUR_GBP --i DE30_EUR --i EUR_ZAR --i US30_USD --i NAS100_USD --count 5000 --nice --gr M5 >out2

real	0m20.875s
user	0m4.636s
sys	0m0.540s
(adrenv) hootnot@oatr:~/oanda-api-v20/examples $ time python src/candle-data-par.py --i EUR_USD --i EUR_JPY --i EUR_AUD --i EUR_GBP --i DE30_EUR --i EUR_ZAR --i US30_USD --i NAS100_USD --count 5000 --nice --gr M5 >out

real	0m9.757s
user	0m4.768s
sys	0m0.696s
(adrenv) hootnot@oatr:~/examples $ wc out out2
  440048   760080  7825682 out
  440048   760072  7825650 out2
  880096  1520152 15651332 total

davidandreoletti commented on June 2, 2024

@hootnot I looked a bit more into the infamous Python GIL issue (especially for IO- and CPU-bound work) and how it affects IO/CPU-bound multithreaded applications. Each pure-Python thread must acquire the GIL to run, and a thread waiting for IO automatically releases the GIL, allowing another Python thread to run. The thread owning the GIL runs; every other thread waits.

Basically, for me it is important that each IO-bound thread issues as many async requests as possible (up to a maximum of 15 req/s) before it releases the GIL, i.e. before it starts blocking on the async requests to collect the responses with my_future.results().
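
A quick way to see that behaviour, unrelated to OANDA and using time.sleep() purely as a stand-in for blocking network IO:

# GIL illustration: time.sleep() stands in for blocking network IO.
# A blocked thread releases the GIL, so the threaded version takes
# roughly 1 second instead of 8.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(_):
    time.sleep(1)  # releases the GIL while "waiting for IO"

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(fake_io, range(8)))
print("threaded:   %.1fs" % (time.time() - start))  # ~1s

start = time.time()
for i in range(8):
    fake_io(i)
print("sequential: %.1fs" % (time.time() - start))  # ~8s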

I dug further into the requests-futures package and found that it is basically doing what I am already doing:

  • create a bunch of threads (via a ThreadPool, for example)
  • call session.request() on each thread
    -- request() blocks until the response is available
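
In code, that pattern boils down to something like the following simplified sketch (not the actual requests-futures source; session.get() is used here as a thin wrapper around session.request(), and the URLs are placeholders):

# Simplified sketch of the pattern above: a thread pool whose workers each
# call into the session and block until the response arrives.
from concurrent.futures import ThreadPoolExecutor
import requests

session = requests.Session()

def fetch(url):
    return session.get(url, timeout=10)  # blocks in the worker thread

urls = ["https://httpbin.org/get"] * 4  # placeholder URLs
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(fetch, u) for u in urls]
    print([f.result().status_code for f in futures])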

So at first glance, it seems not to be a good fit.

hootnot commented on June 2, 2024

@davidandreoletti
Yes, the GIL issue. Everyone doing threading runs into it at some point ...

The rate limiting is set to 30 requests/second and is IP-bound. So regardless of the success of the threading solution you choose, this is the hard limit; threading must respect this limit as well.
It seems to me that you can only run into this problem when you retrieve a lot of instruments at a high-frequency granularity. Can you share some parameters?
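
For completeness, a minimal sketch of a thread-safe limiter that keeps all threads combined under such a cap (the class is illustrative, not part of oandapyV20; 30 req/s is the IP-bound limit mentioned above):

# Minimal thread-safe rate limiter: spaces calls so that all threads
# combined stay under a shared requests/second cap. Illustrative only.
import threading
import time

class RateLimiter:
    def __init__(self, max_per_second):
        self._interval = 1.0 / max_per_second
        self._lock = threading.Lock()
        self._next_slot = time.monotonic()

    def acquire(self):
        with self._lock:
            now = time.monotonic()
            wait = self._next_slot - now
            self._next_slot = max(now, self._next_slot) + self._interval
        if wait > 0:
            time.sleep(wait)

limiter = RateLimiter(30)  # the IP-bound 30 req/s limit discussed above

def guarded(fn, *args, **kwargs):
    limiter.acquire()  # every worker thread passes through the same limiter
    return fn(*args, **kwargs)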

Have you taken a look at my other repo, oanda-trading-environment? It reads the price stream and bakes candle records of your choice, and it offers 0MQ pub/sub. You could let it write candle records of your choice into https://redis.io/ and query Redis from your application. Redis also does pub/sub, and it has Python bindings. Maybe oanda-trading-environment can be a solution for your problem.
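
For reference, consuming such candle records from Redis pub/sub with the Python bindings could look roughly like this (the channel name and JSON payload are assumptions, not the actual output format of oanda-trading-environment):

# Rough sketch of a Redis pub/sub consumer; the "candles" channel and the
# JSON payload are assumptions, not oanda-trading-environment's format.
import json
import redis

r = redis.Redis(host="localhost", port=6379)
pubsub = r.pubsub()
pubsub.subscribe("candles")  # hypothetical channel name

for message in pubsub.listen():
    if message["type"] != "message":
        continue  # skip subscribe confirmations
    candle = json.loads(message["data"])
    print(candle)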

davidandreoletti commented on June 2, 2024

@hootnot

About the 30 reqs/s:

The official documentation says:

To provide equal resources to all clients, we recommend limiting both the number of new connections per second, and the number of requests per second made on a persistent connection (see above/below).

For new connections, we recommend you limit this to once per second (1/s). Establishing a new TCP and SSL connection is expensive for both client and server. To allow a better experience, using a persistent connection will allow more requests to be performed on an established connection.

For an established connection, we recommend limiting this to fifteen requests per second (30/s). Please see following section on persistent connections to learn how to maintain a connection once it is established.

As per the documentation, it seems possible to create several established connections (at a rate of 1/s) and to issue up to 30 req/s on each established connection.
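
Under that reading, one could pace the creation of new connections at roughly 1/s and then reuse each established session for its own stream of requests. A rough sketch (endpoint and token are placeholders; note that requests only opens the TCP/SSL connection on the first request of a Session):

# Rough sketch: open a few persistent sessions, paced at ~1 new connection
# per second, then reuse each one. Endpoint and token are placeholders.
import time
import requests

ACCESS_TOKEN = "..."  # placeholder
URL = "https://api-fxpractice.oanda.com/v3/instruments/EUR_USD/candles"
HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}

sessions = []
for _ in range(3):
    s = requests.Session()  # keep-alive gives a persistent connection
    # the first request establishes the TCP/SSL connection, so pace these
    s.get(URL, headers=HEADERS, params={"granularity": "M5", "count": 1})
    sessions.append(s)
    time.sleep(1)  # ~1 new connection per second

# each session can then carry its own stream of requests, paced to stay
# under the per-connection recommendation quoted above
for s in sessions:
    resp = s.get(URL, headers=HEADERS, params={"granularity": "M5", "count": 10})
    print(resp.status_code)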

About 0MQ pub/sub:

I am writing a datareader for pydata/pandas-datareader, and unfortunately Redis/0MQ are out of the picture. Nonetheless, I will keep this good idea in mind :)

hootnot commented on June 2, 2024

@davidandreoletti
It was not clear to me what exactly your goal was; the datareader could also be a spin-off of something else you are trying to accomplish.
Maybe I will look into the async connection during the X-mas period.
