Bug 1381875 (Closed) · Opened 8 years ago · Closed 7 years ago
[traceback] TransportError: Request Entity Too Large
Categories: Socorro :: Infra (task)
Tracking: Not tracked
Status: RESOLVED WORKSFORME
People: willkg (Reporter), miles (Assignee)
From: https://sentry.prod.mozaws.net/operations/socorro-stage/issues/619537/
"""
TransportError: TransportError(413, u'<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor="white">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n')
File "socorro/external/crashstorage_base.py", line 619, in save_raw_and_processed
crash_id
File "socorro/external/statsd/statsd_base.py", line 139, in benchmarker
result = wrapped_attr(*args, **kwargs)
File "socorro/external/es/crashstorage.py", line 377, in save_raw_and_processed
crash_id
File "socorro/external/es/crashstorage.py", line 321, in save_raw_and_processed
crash_id
File "socorro/external/es/crashstorage.py", line 148, in save_raw_and_processed
crash_document=crash_document
File "socorro/database/transaction_executor.py", line 106, in __call__
result = function(connection, *args, **kwargs)
File "socorro/external/es/crashstorage.py", line 200, in _submit_crash_to_elasticsearch
id=crash_id
File "elasticsearch/client/utils.py", line 73, in _wrapped
return func(*args, params=params, **kwargs)
File "elasticsearch/client/__init__.py", line 300, in index
_make_path(index, doc_type, id), params=params, body=body)
File "elasticsearch/transport.py", line 318, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "elasticsearch/connection/http_requests.py", line 89, in perform_request
self._raise_error(response.status_code, raw_data)
File "elasticsearch/connection/base.py", line 124, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
"""
Miles says this is coming from nginx in front of ES. We need to increase the allowed request payload size there.
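To illustrate the failure mode: the crash document is serialized and POSTed to Elasticsearch, and nginx rejects any body larger than its configured limit with a 413 before ES ever sees it. A hypothetical pre-flight check (this helper and its names are not in Socorro's code; the 20 MB limit is assumed from the nginx config mentioned below) would look like:

```python
import json

# Assumed limit: matches nginx's `client_max_body_size 20m;` on the ES nodes.
MAX_BODY_BYTES = 20 * 1024 * 1024

def exceeds_proxy_limit(crash_document, limit=MAX_BODY_BYTES):
    """Return True if the serialized document would trigger nginx's 413.

    Hypothetical helper for illustration only; Socorro's actual
    crashstorage code does not perform this check.
    """
    body = json.dumps(crash_document).encode("utf-8")
    return len(body) > limit

# A typical small crash document fits comfortably under the limit.
print(exceeds_proxy_limit({"crash_id": "abc123", "signature": "OOM"}))  # prints False
```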
Reporter | Updated 8 years ago
Summary: [traceback] → [traceback] TransportError: Request Entity Too Large
Reporter | Comment 1 | 8 years ago
Assigning this to Miles since he said this is infra-related.
Assignee: nobody → miles
Assignee | Comment 2 | 8 years ago
I'm thinking this happens when the processor tries to submit a crash to Elasticsearch: the processor hits a hackily spun-up Ubuntu instance in webeng that proxies to the ES5 cluster in ops prod.
nginx on the actual Elasticsearch nodes has `client_max_body_size 20m;` set, so that is the logical answer. Before prod, we should make this proxy setup more durable, including an ASG/ELB setup with independent monitoring. Yikes.
For the immediate moment, I'm bumping `client_max_body_size` to `20m` on that proxy instance.
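For reference, the relevant directive on the proxy would look something like the sketch below (the actual proxy config isn't attached to this bug; the server block, port, and upstream name are assumptions):

```nginx
# Hypothetical proxy config; upstream name and listen port are assumed.
server {
    listen 9200;

    # Requests with a body larger than this are rejected with
    # "413 Request Entity Too Large" before reaching Elasticsearch.
    client_max_body_size 20m;

    location / {
        proxy_pass http://es5-cluster;
    }
}
```

Note that `client_max_body_size` defaults to 1m in nginx, which would explain 413s from a proxy instance that was stood up without copying the ES nodes' setting.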
Assignee | Comment 3 | 8 years ago
We won't know if this is fixed until we have switched back over to the new 5.3.3 cluster and have not seen this issue crop up in Sentry for a while. If we haven't seen this error recently (in a month or two) we should resolve this.
Reporter | Comment 4 | 8 years ago
Miles: Is there a bug for "make the proxy setup more durable"? If so, what's the bug number? I'm having difficulty finding it.
Flags: needinfo?(miles)
Assignee | Comment 5 | 8 years ago
I've created a bug for this: bug 1392725. I will be working on it this week. My working plan is to cut over stage with a hand-rolled instance, then switch to the new proxy instance once it's ready. The proxy instance being ready blocks prod's ES5 cutover.
Flags: needinfo?(miles)
Reporter | Comment 6 | 7 years ago
Miles: Did this get finished? If not, what's outstanding?
Flags: needinfo?(miles)
Assignee | Comment 7 | 7 years ago
We migrated the entirety of the infra out of webeng and no longer have a proxy setup.
Status: NEW → RESOLVED
Closed: 7 years ago
Flags: needinfo?(miles)
Resolution: --- → WORKSFORME