Closed Bug 583324 Opened 12 years ago Closed 12 years ago

Service Unavailable error on /addon/1865/statistics/versions-day-20070101-20100507.csv


(Graveyard :: Public Pages, defect)

(Reporter: krupa.mozbugs, Assigned: ianbicking)




(Whiteboard: [z])

steps to reproduce:
Go to /addon/1865/statistics/versions-day-20070101-20100507.csv

observed behavior:
A "Service Unavailable" error message is shown.
It's getting an IOError from WSGI - I think it's timing out building the CSV.  A less popular add-on id works fine.

Ian - will this be fixed with your speed improvements?  I can reproduce this taking a long time on my dev copy, but it doesn't time out - I assume WSGI has a limit it's hitting.
Assignee: nobody → ianb
I don't know if it will fix it, but if it's just a timeout then it seems possible.
Target Milestone: 5.11.7 → 5.12
Thanks for the solid test case, but I'm duping this to the perf bug.
Closed: 12 years ago
Resolution: --- → DUPLICATE
Duplicate of bug: 564540
verified :(
I'm having a hard time figuring out where all the time goes; seconds disappear here and there without any apparent code in between my timing points.  But anyway, out of a 20-second response, 12 seconds go to this query (with another 9 or so seconds unaccounted for):

SELECT `update_counts`.`id`, `update_counts`.`addon_id`, `update_counts`.`count`,
       `update_counts`.`date`, `update_counts`.`version`, `update_counts`.`status`,
       `update_counts`.`application`, `update_counts`.`os`, `update_counts`.`locale`
FROM `update_counts`
WHERE (`update_counts`.`date` BETWEEN '2007-01-01' AND '2010-05-07'
       AND `update_counts`.`addon_id` = 1865)
ORDER BY `update_counts`.`date`

The weird thing is that running this query directly through MySQL takes no time at all.  It returns 611 rows, not a huge number.  Maybe a caching overhead?
The built-in MySQL query cache will cache that query very effectively.  If you run the same query twice, it will take "no time at all."  Are you running the same query more than once?  Try changing the add-on id.  Fwiw, that query for me, run twice:

611 rows in set (1 min 1.09 sec)

611 rows in set (0.11 sec)
Hm... well, the response times are consistent (i.e., reloading this URL over and over doesn't make it get fast), so the MySQL query cache has no positive effect on the running application itself.  Changing the add-on id or date range has little effect for me when running MySQL directly, and I can't reproduce anything like a one-minute query time.  One possible problem is that there is no addon_id/date index on update_counts (download_counts, for example, does have such an index).

I thought caching might have an effect, but disabling caching doesn't change times notably.

But I still can't figure out where the time goes.
Let me know if you need anything from me.
I've added a new index to the database to see if it improves performance.  The index I created is:

  create index addon_and_date using btree on update_counts (addon_id, date);

An EXPLAIN confirms the index is used for this query, though performance doesn't seem improved; it actually seems worse (though this could be temporary, perhaps because creating the index cleared any cache on MySQL's side).

It's almost as though the index itself needs to warm up.  The first query I did was 90 seconds.  After running the query a few times, even when changing addon_id, it's under 2 seconds.  The actual request is as slow as ever (consistently about 21 seconds -- I'm connecting to the server directly, so presumably any timeout is in some proxy and I am not encountering it).
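As a side note, the EXPLAIN check described above is easy to reproduce anywhere with a small sketch. This uses SQLite rather than MySQL, with hypothetical toy data, but illustrates the same composite-index idea (addon_id, date):

```python
import sqlite3

# Toy reproduction of the addon_id/date composite index on update_counts.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE update_counts (
    id INTEGER PRIMARY KEY, addon_id INTEGER, date TEXT, count INTEGER)""")
conn.executemany(
    "INSERT INTO update_counts (addon_id, date, count) VALUES (?, ?, ?)",
    [(1865, "2010-01-%02d" % d, d) for d in range(1, 29)],
)
conn.execute("CREATE INDEX addon_and_date ON update_counts (addon_id, date)")

# EXPLAIN QUERY PLAN is SQLite's analogue of MySQL's EXPLAIN: the plan row
# should name the index instead of a full table scan.
plan = conn.execute("""EXPLAIN QUERY PLAN
    SELECT * FROM update_counts
    WHERE addon_id = 1865 AND date BETWEEN '2010-01-01' AND '2010-01-07'
    ORDER BY date""").fetchall()
print(plan)
```

Because the query filters on an exact addon_id and a date range, a (addon_id, date) ordering lets the engine seek directly to the addon's slice and read it in date order, which also satisfies the ORDER BY for free.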
Alright, I ran the query in comment 5 with different add-on IDs on PAMO with the index you added and these are the numbers I got:

2442  - 611 rows in set (8.16 sec)
3006  - 611 rows in set (7.79 sec)
722   - 611 rows in set (10.30 sec)
220   - 611 rows in set (7.83 sec)
10900 - 538 rows in set (6.30 sec)
1843  - 611 rows in set (6.54 sec)
748   - 611 rows in set (8.55 sec)
8879  - 548 rows in set (5.01 sec)
201   - 611 rows in set (21.02 sec)
3456  - 611 rows in set (7.56 sec)

It's safe to assume that this will never be cached since the query is based on dates that change every day.  With your index we're in the ~10 sec range on a bored database, which is a good improvement, but still feels like too long to have in a request.  I imagine these queries could stack up on each other pretty easily with table locks and I don't want that.
I have copied just the 1865 records to my laptop, deleted *everything else* in update_counts, thus eliminating any real MySQL overhead.  The request still takes 15 seconds.  To me this mostly eliminates MySQL as a factor.

I'll be looking at the ORM more closely.  The particular 1865 record has a lot of data (I think PHP-serialized objects); that might be the source of the performance problem.  While some other add-ons are slow, 1865 is slower than many others.
Profiling confirms most of the time is spent in the phpserialize module.
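For reference, this kind of finding can be reproduced with Python's built-in profiler. The sketch below is self-contained: decode_row is a hypothetical toy stand-in for the per-row phpserialize.loads call, and build_csv for the CSV-building view, so that cProfile output shows where the cumulative time goes:

```python
import cProfile
import io
import pstats

def decode_row(blob):
    # Toy stand-in for phpserialize.loads: parse "key=value;key=value" pairs.
    return dict(part.split("=", 1) for part in blob.split(";") if "=" in part)

def build_csv(rows):
    # Stand-in for the statistics view: decode every row, emit CSV lines.
    return "\n".join(",".join(decode_row(b).values()) for b in rows)

rows = ["os=WINNT;locale=en-US", "os=Darwin;locale=de"] * 10000

profiler = cProfile.Profile()
profiler.enable()
build_csv(rows)
profiler.disable()

# Sort by cumulative time, the same view that points at phpserialize here.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
print(report.splitlines()[0])
```

In the real bug, the analogous report would show the phpserialize functions dominating cumulative time, which is what singles out deserialization rather than MySQL as the bottleneck.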
Product: → Graveyard