Let's see if caching Elasticsearch helps first, before we go too far down this route. I'm betting that python > memcache > response will be faster than python > HTTP request to ES > response, but let's be sure of that. I'm suggesting a simple caching client for the home page that takes the Elasticsearch response, memcaches it for 5 minutes, and then looks it up on subsequent requests. It should be toggleable on and off for the home page by a waffle flag. Then load test the home page and see if it improves.
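A minimal sketch of the cache-aside wrapper described above. In the real app this would use Django's memcached backend and a django-waffle flag; here a dict-backed cache with a TTL and a plain boolean stand in for both, and all names (`cached_search`, `fetch`, `flag_active`) are hypothetical:

```python
import time

CACHE_TTL = 300  # 5 minutes, as proposed above

_cache = {}  # key -> (expires_at, value); stand-in for memcached


def cache_get(key):
    """Return a cached value, or None if missing or expired."""
    entry = _cache.get(key)
    if entry is None:
        return None
    expires_at, value = entry
    if time.time() >= expires_at:
        del _cache[key]
        return None
    return value


def cache_set(key, value, ttl=CACHE_TTL):
    """Store a value with an expiry time, like memcached's TTL."""
    _cache[key] = (time.time() + ttl, value)


def cached_search(key, fetch, flag_active=True):
    """Return the cached ES response, calling `fetch` only on a miss.

    `flag_active` stands in for the waffle flag: when off, every
    request goes straight to ES, so the cache can be toggled for
    load-testing comparisons.
    """
    if not flag_active:
        return fetch()
    result = cache_get(key)
    if result is None:
        result = fetch()
        cache_set(key, result)
    return result
```

With the flag on, only the first request per 5-minute window hits ES; with it off, behavior is unchanged, which makes the before/after load test straightforward.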
Is this still relevant? Should we still worry about caching ES?
ES was the bottleneck at some point; that's why we wanted to cache it and measure the results. I wrote a little proxy for it, https://github.com/ametaireau/httpcache, that we could use (it simply caches the responses to requests it has already seen). Alternatively we could do this with something like Varnish, which would handle it in a more standard way (real HTTP caching). I'll do some testing on this next week with the load tests we have and see how it goes.
-> alexis because he was interested in working on this. Is this something you've looked at?
Assignee: nobody → alexis+bugs
Priority: -- → P4
I've written an HTTP proxy to handle this (https://github.com/ametaireau/httpcache), but I haven't pushed it to staging or done load testing with it yet. (We gave it a try with :jason but didn't succeed and decided to postpone for now.) Since we have the load-testing infrastructure in place, I can have a look at that a bit later.
Guessing this isn't going to happen.
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → WONTFIX