Closed
Bug 1393146
(bmo-autohammer)
Opened 8 years ago
Closed 7 years ago
Automate blocking IPs that bugzilla flags as exceeding rate limits
Categories
(bugzilla.mozilla.org :: General, enhancement)
Tracking
RESOLVED
FIXED
People
(Reporter: dylan, Assigned: dylan)
References
Details
In many cases, the bugzilla codebase can identify abusive amounts of requests from the same IP. Currently we log this to syslog with a special tag.
It would be nice if *something* could automatically block those requests.
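The detection the reporter describes (flagging an abusive volume of requests from one IP) is typically a sliding-window counter. The BMO internals aren't shown here, so this is only an illustrative sketch; the threshold and window values are assumptions, not BMO's actual limits.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # hypothetical window
MAX_REQUESTS = 100    # hypothetical threshold

_hits = defaultdict(deque)

def is_abusive(ip, now=None):
    """Record one request from `ip` and return True if it has now
    exceeded MAX_REQUESTS within the last WINDOW_SECONDS."""
    now = time.time() if now is None else now
    q = _hits[ip]
    q.append(now)
    # Drop timestamps that have aged out of the window.
    while q and q[0] <= now - WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

In the scenario described in this bug, a True result would be logged to syslog with a special tag rather than acted on directly.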
Updated•8 years ago
Alias: bmo-autohammer
Comment 6•8 years ago
Luckily the application also has enough information to help guess who is a legitimate user (hi, glandium) and who is a bot.
Updated•8 years ago
Assignee: nobody → dylan
Comment 7•8 years ago
(In reply to Kendall Libby [:fubar] from comment #3)
> worth noting that the other day we blocked the paris office because it's
> external IP was hammering us (and there was no rDNS to make it obvious who
> it was). not the first time and won't be the last time that happens. I don't
> know if there's a useful place we can pull a list of known office IPs from,
> but we should have some mechanism for whitelisting them.
bug 1291311 might be of interest to you.
Comment 9•8 years ago
(In reply to Greg Cox [:gcox] from comment #7)
> (In reply to Kendall Libby [:fubar] from comment #3)
> > worth noting that the other day we blocked the paris office because it's
> > external IP was hammering us (and there was no rDNS to make it obvious who
Strongly agreed. We've had repeated problems with every available blocking method blocking offices and causing outages for staff, some small and some (in the case of banhammer) large.
> bug 1291311 might be of interest to you.
There is also work in bug 1342548 to maintain a consumable list in S3 of IP ranges for whitelisting in banhammer. Using the same data/process would seem to make sense.
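A consumer of such a list would fetch it from S3 and parse it into network objects. The fetch itself is routine; the parsing step is sketched below. The list format (one CIDR per line, `#` comments) is an assumption, since bug 1342548's format isn't specified here.

```python
import ipaddress

def parse_allowlist(text):
    """Parse a consumable IP-range list: one CIDR per line,
    blank lines and '#' comments ignored."""
    nets = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            nets.append(ipaddress.ip_network(line, strict=False))
    return nets
```

Re-fetching and re-parsing the list on a timer would keep the whitelist current as office ranges change, which is the concern raised later in this bug.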
Comment 10•8 years ago
Since that isn't resolved and I want to get this resolved relatively soon, I'm going to build a set of IPs used by all regular users over the last year.
Comment 11•8 years ago
(In reply to Dylan Hardison [:dylan] (he/him) from comment #10)
> Since that isn't resolved and I want to get this resolved relatively soon,
> I'm going to build a set
> of IPs used by all regular users over the last year.
Office addresses change painfully often. Datacenter addresses are going to change with scl3 to mdc1 migrations.
Hopefully changes will happen less often as we migrate to using our own IP space for offices... but some of those addresses in the last year will now be obsolete as offices have closed (Auckland, for example) and new ones have opened (BER3 quite recently).
If you don't have a way to get updates in the short term (I do understand needing a passable solution quickly) then please, please, consider adding that as soon as is reasonably possible. This is something that bites us, in the MOC, on a regular basis and causes outages.
Comment 12•8 years ago
(In reply to Peter Radcliffe [:pir] from comment #11)
> (In reply to Dylan Hardison [:dylan] (he/him) from comment #10)
> > Since that isn't resolved and I want to get this resolved relatively soon,
> > I'm going to build a set
> > of IPs used by all regular users over the last year.
>
> Office addresses change painfully often. Datacenter addresses are going to
> change with scl3 to mdc1 migrations.
>
> Hopefully changes will happen less often as we migrate to using our own IP
> space for offices... but some of those addresses in the last year will now
> be obsolete as offices have closed (Auckland, for example) and new ones have
> opened (BER3 quite recently).
>
> If you don't have a way to get updates in the short term (I do understand
> needing a passable solution quickly) then please, please, consider adding
> that as soon as is reasonably possible. This is something that bites us, in
> the MOC, on a regular basis and causes outages.
I know the IP of every logged-in person that's used BMO in the last year, regardless of whether they're in an office.
It's a lot of IPs, but if I filter by ones that typically make a lot of requests it is quite manageable.
Comment 13•8 years ago
Oh, and it wouldn't be a one time list, but something I can re-generate on some sort of automated schedule.
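The regeneration step described above (collect IPs of logged-in users, keep the frequent ones) can be sketched as a simple aggregation. The record shape and the `min_requests` cutoff are hypothetical; the actual job would read BMO's request logs.

```python
from collections import Counter

def build_allowlist(records, min_requests=10):
    """records: iterable of (ip, is_logged_in) tuples from a request log.
    Returns the set of IPs belonging to logged-in users that made at
    least `min_requests` requests, filtering out one-off addresses."""
    counts = Counter(ip for ip, logged_in in records if logged_in)
    return {ip for ip, n in counts.items() if n >= min_requests}
```

Run on a schedule (cron or similar), this keeps the list current as office and datacenter addresses change, addressing the staleness concern raised in comment 11.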
Comment 14•8 years ago
So when we block an IP, what layer is that?
Do I want to see if this is something mozdef can do by looking at our syslog,
or can I write some code that talks to zeus?
I am fairly confident I'll be able to:
1) accurately detect abusive amounts of traffic
2) ignore such traffic when the IP is in a (potentially large, currently 57k-entry) list.
Flags: needinfo?(klibby)
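Checking each request against a 57k-entry list needs a constant-time membership test. One sketch, assuming the list mixes exact IPs and CIDR ranges (the actual list format isn't stated in this bug): exact addresses go in a hash set, and ranges are matched by truncating the client address at each prefix length present.

```python
import ipaddress

class IPAllowlist:
    """Membership test over exact IPs and CIDR ranges.
    Exact IPs are an O(1) set lookup; ranges cost one lookup per
    distinct prefix length in the list (typically just a few)."""

    def __init__(self, entries):
        self.exact = set()
        self.nets = {}  # prefixlen -> set of network addresses
        for e in entries:
            if "/" in e:
                net = ipaddress.ip_network(e, strict=False)
                self.nets.setdefault(net.prefixlen, set()).add(net.network_address)
            else:
                self.exact.add(ipaddress.ip_address(e))

    def __contains__(self, ip):
        addr = ipaddress.ip_address(ip)
        if addr in self.exact:
            return True
        for plen, addrs in self.nets.items():
            # Mask the address down to this prefix length and look it up.
            net = ipaddress.ip_network((addr, plen), strict=False)
            if net.network_address in addrs:
                return True
        return False
```

At 57k entries this stays well under a millisecond per check, so it can sit in the request path.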
Comment 15•8 years ago
So, neither zeus nor mozdef will be helpful in a couple of months. Writing anything that works with them seems like it might be a waste of time. On the other hand, I'm not sure how they might want to handle this in CloudOps; I don't imagine they would want the app changing security groups.
Zeus does appear to let you modify protection classes via API (SOAP), but permissions are NOT fine-grained, so it's up to webops as to whether or not they would allow us to modify any/all protection classes. It might be possible to have a TrafficScript rule that calls out to a service, but I'd be worried about the performance implications.
Flags: needinfo?(klibby)
Comment 16•8 years ago
Okay, I'll see if I can make httpd bail out early based on... something. I'm pretty sure I'll be able to handle this with ELBs.
Comment 17•8 years ago
BMO is going to start responding to requests that are "excessive" with HTTP 429 (Too Many Requests).
This will only be done for IPs that go over the rate limit repeatedly, and will whitelist (report only)
any IP address that is reasonably determined to not be a scanner.
What I mean to say here is that BMO will not return HTTP 429 lightly.
With that in mind, the load balancer could possibly use this as a signal that a client is being aggressive
and harming "the commons", and take some sort of action?
We can wait on this and see how well the code in bug 1393888 works, but the cost per request (because of apache) is still quite high.
On bugzilla-dev, I can't get much more than 300ms per request.
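The behavior comment 17 describes (429 for repeat offenders, report-only for whitelisted IPs) can be illustrated as request-handling middleware. This is a WSGI-style sketch, not BMO's actual mod_perl code; the `is_abusive`, `is_allowlisted`, and `log` callables are assumed hooks into the pieces discussed earlier in this bug.

```python
def rate_limit_middleware(app, is_abusive, is_allowlisted, log):
    """Wrap a WSGI app: reject abusive clients with 429 Too Many Requests,
    but only log (report-only mode) when the client IP is allowlisted."""
    def wrapped(environ, start_response):
        ip = environ.get("REMOTE_ADDR", "")
        if is_abusive(ip):
            if is_allowlisted(ip):
                # Report-only: note the overage but serve the request.
                log("rate limit exceeded (report only): %s" % ip)
            else:
                start_response("429 Too Many Requests",
                               [("Content-Type", "text/plain"),
                                ("Retry-After", "60")])
                return [b"Too many requests\n"]
        return app(environ, start_response)
    return wrapped
```

A load balancer in front could then treat observed 429s as the "aggressive client" signal the comment suggests, without the app needing to talk to the balancer directly.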
Updated•8 years ago
Flags: needinfo?(rsoderberg)
Flags: needinfo?(klibby)
Updated•8 years ago
Component: Infrastructure → General
QA Contact: mcote
Updated•8 years ago
Flags: needinfo?(rsoderberg)
Flags: needinfo?(klibby)
Updated•7 years ago
Group: mozilla-employee-confidential
Comment 18•7 years ago
This can't go out until we update the whitelist.
Updated•7 years ago
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED