Bug 426940 - Reduce or eliminate delay in collector to monitor hand-off
Status: RESOLVED FIXED
Product: Socorro
Classification: Server Software
Component: General
Version: Trunk
Hardware: All
OS: All
Importance: -- normal
Target Milestone: 0.5
Assigned To: K Lars Lohn [:lars] [:klohn]
QA Contact: socorro
Reported: 2008-04-03 20:12 PDT by Michael Morgan [:morgamic]
Modified: 2011-12-28 10:40 PST

Description Michael Morgan [:morgamic] 2008-04-03 20:12:35 PDT
Currently, jobs pile up between cron calls, which defeats the instant-processing goal we have for speeding up processing for devs.

A loop would allow pending reports to be instantly added to the jobs queue.
Comment 1 K Lars Lohn [:lars] [:klohn] 2008-04-03 20:24:31 PDT
I must disagree.  cron jobs piling up would increase the likelihood that the
instant-processing goal would be successful.  The more eyes (processes) that
you have looking at the tree, the faster a new dump will be found and queued. 
Adding a loop would ensure that the fastest a new dump could be found is the
sum of the times of each of the monitor's subtasks (moving completed tasks,
managing processors, and walking the tree/queuing jobs).
Comment 2 Michael Morgan [:morgamic] 2008-04-03 20:26:37 PDT
I remember you suggesting this... hrmm. :)

So you've changed your mind.  What would be an acceptable lag time between cron jobs?

Should we change this bug to make the fs checks more efficient to reduce hits to the filesystem?
Comment 3 Michael Morgan [:morgamic] 2008-04-03 20:37:41 PDT
Lars - so based on IRC convo, figure out if this is worth pursuing and if not resolve this WONTFIX.
Comment 4 Michael Morgan [:morgamic] 2008-04-03 23:47:14 PDT
So, more discussion -- the delay in queuing new reports is caused by the need to traverse the directory containing new reports, which involves a lot of overhead.  Lars had some ideas on how to reduce the delay by letting the collector do more, but we're not sure about the tradeoffs of adding a database dependency to the collector...

So we'll have some discussion here about that.  In the meantime we decided the cron should work for bug 422581 so I'm removing this as a blocker for it.
Comment 5 Ted Mielczarek [:ted.mielczarek] 2008-04-04 02:53:27 PDT
I think making the collector do anything more is a bad idea. Having it independent of the database was a design decision. I'd be open to figuring out a smarter way to have the collector indicate that a new report has been collected, but hitting the db just doesn't sound good.
Comment 6 Samuel Sidler (old account; do not CC) 2008-04-04 09:16:37 PDT
For background: Hitting the db as part of collection is a bad idea because if the db goes down, we stop collecting reports. Even though it might take longer, we need to keep the db out of collection.
Comment 7 Michael Morgan [:morgamic] 2008-04-05 02:27:22 PDT
There's a distinction between queuing the reports and accepting them and writing them to disk.  Lars' proposal is a little more complicated; he can explain it better than I can, so I'll just let him comment here on Monday.

We understand that inserting immediately on every submission is not scalable and availability decreases dramatically if the collector is dependent on the database.  It's a core part of the 3-tier design.

Adding that dependency is not what this bug is about; it's about reducing the delay in queuing new reports.  I didn't do a good job of saying that earlier because we were still having some discussion.  :)

One way to do this is to _extend_ the collector to do additional work after a .dump/.json pair is recorded to the fs: create a queue entry on demand that the monitor could immediately pick up.  This could be done via messaging, the db, whatever -- any method.

The idea is to eliminate the situation we have now where a report is submitted but not queued for 10 minutes while it waits for the monitor cron to fire and create new queue entries.

There is no reason why both the monitor cron (or a daemon version of it) and a smarter processor can't co-exist.  Either way the messaging approach makes sense because it's better than the "wait 10 minutes and then queue N items all at once" or the "constantly scan the fs for new stuff" approaches.
Comment 8 K Lars Lohn [:lars] [:klohn] 2008-04-07 13:22:10 PDT
After some research on modpython coupled with information from aravind about our setup with the collector, I believe that my idea originally discussed on IRC with morgamic is impractical.  The thought had been to use producer/consumer queues with a dedicated database writing thread.  

The next thought was about how to get the collector to communicate with the monitor.  The effect that we want is each collector twitters about the dump it has placed in the file system and the monitor subscribes to the RSS feed. This is not to suggest a literal implementation, this is just the effect that we want.  We need some form of communication in which messages persist until consumed by a reader.

Now, after watching the performance of the monitor and the processors today, it is interesting to see that the two instances of processor are actually sitting around idle some of the time.  They've completely eaten through the backlog.  This means that the time interval between runs of monitor is too long.

Before we proceed with trying to invent a new communication system between collector and monitor, I suggest that we just give monitor the capability to loop.  It will run and walk the directory tree constantly.  Since monitor is a singleton -- there is only one -- it seems to me unlikely that it could overload the filesystem.  I'd like to see what sort of response times we can get with this configuration.  I can add some efficiencies to monitor's walking of the directory tree if it still isn't fast enough.  Some of the janitorial work can be spun off onto a separate thread.  There are some other tricks up my sleeve too for increasing performance.
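A minimal sketch of such a looping monitor, assuming placeholder callbacks for the directory walk, job insertion, and janitorial work (none of these names are Socorro's actual API; `max_passes` exists only to keep the sketch testable):

```python
import time

def monitor_loop(scan_dump_tree, queue_job, do_janitorial_work,
                 sleep_seconds=0, max_passes=None):
    # One singleton monitor, looping instead of waiting for cron: each
    # pass does the housekeeping, then walks the tree and queues every
    # new dump it finds.
    passes = 0
    while max_passes is None or passes < max_passes:
        do_janitorial_work()          # move completed tasks, manage processors
        for dump_path in scan_dump_tree():
            queue_job(dump_path)      # schedule the dump with a processor
        passes += 1
        time.sleep(sleep_seconds)     # 0 = loop constantly, as proposed
```

Spinning the janitorial work off onto a separate thread, as suggested above, would shrink each pass to just the tree walk.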

Comment 9 K Lars Lohn [:lars] [:klohn] 2008-04-08 14:14:05 PDT
I've checked in a version of monitor (363) that loops constantly.  This should virtually eliminate the lag between queuing and processing a dump with a priority higher than the default.  Moving on to testing now...
Comment 10 K Lars Lohn [:lars] [:klohn] 2008-04-10 11:39:42 PDT
Here's the latest mad scheme to get as instant a response as we can for priority dump file processing requests.  The idea here is a heavily modified version of a suggestion by aravind.  The scheme centers around a new table 'priorityJobs' and splitting monitor into a multi-threaded script with two threads.

Any absolute timing in the following discussion should be understood to be configurable.

The first thread (known as the standardJob thread) is the standard dump queuing loop: it walks the file system seeking new dumps.  On finding a new dump, it grabs the insertion lock, places the found job in the 'jobs' table, schedules it with a processor, and then releases the insertion lock.  The standardJob thread uses the lock to protect each insertion, so it will acquire and release for each discovered new job.  Once it has finished the walk, it sleeps for two minutes before starting again.

The second thread is for priority jobs.  It scans the new table 'priorityJobs' every five seconds or so.  This table consists of just the uuids of priority job requests.  Values are added to this table via a web interface.  When the priorityJob thread finds values in the 'priorityJobs' table, it grabs the insertion lock, which stops the standardJob thread.  Then it checks to see if any of the priority jobs are already in the queue.  It updates the priority of any found in the queue.  For any jobs not found in the queue, it starts its own file system scan, seeking just the target priority jobs.  On finding them, it queues them with a higher than default priority.  Once done, it releases the insertion lock and goes back to monitoring the 'priorityJobs' table.

What are some situations that could still be slow?
1) the priorityJobs thread must be able to acquire the insertion lock in a reasonable amount of time.  This means that the standardJob thread must use the lock to protect only the smallest amount of code it can.  This maximizes the opportunity for the priorityJob thread to steal the lock.  The priorityJob thread should use the lock to protect its entire search/queue operation, never giving the other thread a chance to grab the lock.  The longest delay would be the amount of time needed to do a database insert and commit.
2) at times of heavy load, the priorityJobs thread's walk of the file system tree may take some time.  If there are twenty thousand dumps in the file system, it's going to take some time to find the priority job hidden among them.
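The locking discipline described above could be sketched roughly as follows, written as a single pass of each thread's loop (every callback is a stand-in for the real table/filesystem operation, not Socorro's actual code):

```python
import threading

insertion_lock = threading.Lock()

def standard_pass(walk_new_dumps, insert_job):
    # standardJob thread: hold the lock only around each individual
    # insert, so the priorityJob thread can steal it between jobs.
    for uuid in walk_new_dumps():
        with insertion_lock:
            insert_job(uuid, priority=0)

def priority_pass(fetch_priority_uuids, job_in_queue, raise_priority,
                  find_and_insert):
    # priorityJob thread: poll the 'priorityJobs' table; if anything is
    # there, hold the lock for the entire search/queue operation so the
    # standardJob thread cannot interleave.
    uuids = fetch_priority_uuids()
    if not uuids:
        return
    with insertion_lock:
        for uuid in uuids:
            if job_in_queue(uuid):
                raise_priority(uuid)      # already queued: bump its priority
            else:
                find_and_insert(uuid)     # scan the fs for just this dump
```

The asymmetry is the whole point: the standard thread's critical section is one insert-and-commit, which bounds how long a priority request can wait for the lock.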

I'm testing on khan today...
Comment 11 K Lars Lohn [:lars] [:klohn] 2008-04-18 19:57:18 PDT
The aforementioned algorithm has been written, tested on khan, tested on stage, and is now deployed in production.  Looking up a job now tags that job for priority handling, and the report is ready in less than sixty seconds.
