Closed Bug 1221541 Opened 10 years ago Closed 10 years ago

spark.log consumes all available disk space

Categories

(Cloud Services Graveyard :: Metrics: Pipeline, defect)

Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
firefox45 --- affected

People

(Reporter: birunthan, Assigned: rvitillo)

References

Details

On a fresh Spark cluster, /dev/xvda1 has 2.2GB free. After a long-running analysis, /home/hadoop/spark.log consumes all of that free disk space, causing subsequent Spark/IPython operations to fail (e.g. opening a notebook does not work). /dev/xvdb and /dev/xvdc appear to have much more free space, so could we store spark.log there instead?
Blocks: 1222976
Component: Telemetry Server → Metrics: Pipeline
Product: Webtools → Cloud Services
Version: Trunk → unspecified
Assignee: nobody → rvitillo
Should be fixed now, as the log file is created in /mnt.
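
A fix along these lines could be expressed in Spark's log4j.properties. This is a minimal sketch, not the actual patch: the appender layout and the exact target path /mnt/spark.log are assumptions based on the comment above, using standard log4j 1.x property names.

```properties
# Route Spark's root logger to a file on the larger /mnt volume
# instead of the small root volume (/dev/xvda1).
# Illustrative sketch only; the actual fix may differ.
log4j.rootCategory=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/mnt/spark.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```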
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Product: Cloud Services → Cloud Services Graveyard