Bug 1221541 (Closed): spark.log consumes all available disk space
Opened 10 years ago • Closed 10 years ago
Categories: Cloud Services Graveyard :: Metrics: Pipeline (defect)
Tracking: firefox45 affected
Status: RESOLVED FIXED
People
(Reporter: birunthan, Assigned: rvitillo)
Details
On a fresh Spark cluster, /dev/xvda1 has 2.2 GB free. After a long-running analysis, /home/hadoop/spark.log consumes all of that free space, causing subsequent Spark/IPython operations to fail (e.g. opening a notebook does not work).
/dev/xvdb and /dev/xvdc appear to have far more free space, so could we store spark.log on one of them instead?
Updated • 10 years ago
Component: Telemetry Server → Metrics: Pipeline
Product: Webtools → Cloud Services
Version: Trunk → unspecified
Updated • 10 years ago
Assignee: nobody → rvitillo
Comment 1 • 10 years ago
Should be fixed now as the log file is created in /mnt.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
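The fix moved the log file to the larger /mnt volume. The bug does not show how this was done; as a hedged sketch, one standard way to relocate Spark's own log output on a cluster like this is via conf/spark-env.sh, where SPARK_LOG_DIR controls where Spark's launch scripts write logs (the /mnt/spark-logs path below is illustrative, not taken from the bug):

```shell
# conf/spark-env.sh (sketch, path is illustrative):
# point Spark's log directory at the large /mnt volume instead of the
# small root filesystem, so long-running jobs cannot fill /dev/xvda1.
export SPARK_LOG_DIR=/mnt/spark-logs
```

Application logging that goes through log4j (as spark.log likely did) would instead be redirected by changing the file appender's path in conf/log4j.properties.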
Updated•7 years ago
|
Product: Cloud Services → Cloud Services Graveyard