Weekly Topline Summary aborts on save

Status: RESOLVED FIXED
Product: Data Platform and Tools
Component: Datasets: General
Priority: P1
Severity: blocker
Reported: 5 months ago
Last modified: 4 months ago
Reporter: amiyaguchi
Assignee: amiyaguchi
Attachments: 1
Description (Assignee, 5 months ago)

```
INFO:mozetl.topline.topline_summary:Loading main_summary into memory...
INFO:mozetl.topline.topline_summary:Running the topline summary...
INFO:mozetl.topline.topline_summary:Saving rollup to disk...
INFO:mozetl.topline.topline_summary:Writing topline summary to s3://telemetry-parquet/topline_summary/v1/v1/mode=weekly/report_start=20170625
Traceback (most recent call last):
  File "/mnt/analyses/python_mozetl/run.py", line 1, in <module>
    from mozetl.topline import topline_summary as ts; ts.main()
  File "/mnt/anaconda2/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/mnt/anaconda2/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/mnt/anaconda2/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/mnt/anaconda2/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/mnt/analyses/python_mozetl/mozetl/topline/topline_summary.py", line 298, in main
    save(rollup, bucket, prefix, version, mode, start_date)
  File "/mnt/analyses/python_mozetl/mozetl/topline/topline_summary.py", line 265, in save
    .parquet(location, mode="overwrite")
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 641, in parquet
  File "/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o521.parquet.
: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:149)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply(InsertIntoHadoopFsRelationCommand.scala:115)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:115)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:525)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:488)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:280)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Job 2 cancelled because SparkContext was shut down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:818)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:816)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:816)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1685)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:83)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1604)
	at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1781)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1290)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1780)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:108)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1873)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1886)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1906)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelationCommand.scala:143)
	... 30 more
```
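
For context, a minimal sketch of the `save` step the traceback points at (topline_summary.py, line 265), reconstructed from the logged S3 path and the call signature visible in the trace; the helper body and exact path assembly are assumptions, not the actual mozetl source:

```
import logging

logger = logging.getLogger("mozetl.topline.topline_summary")


def save(rollup, bucket, prefix, version, mode, start_date):
    # Path layout inferred from the log line above; the doubled "v1"
    # in the logged path suggests the version string also appears in
    # the prefix.
    location = "s3://{}/{}/{}/mode={}/report_start={}".format(
        bucket, prefix, version, mode, start_date)
    logger.info("Writing topline summary to {}".format(location))
    # This write launches the Spark job that is cancelled when the
    # SparkContext shuts down, surfacing as the Py4JJavaError above.
    rollup.write.parquet(location, mode="overwrite")
```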
Updated (Assignee, 5 months ago)

Assignee: nobody → amiyaguchi
Blocks: 1329844
Severity: normal → blocker
Points: --- → 1
Priority: -- → P1

Comment 1 (Assignee, 5 months ago)

I've disabled the job on Airflow until I can resolve the issue. This week's data will be generated manually.

Comment 2 (Assignee, 5 months ago)

Created attachment 8884050 [details] [review]
Bug 1377730 - disable document_id deduplication in topline_summary #64

Comment 3 (Assignee, 5 months ago)

This error was caused by document_id deduplication via `drop_duplicates`. Deduplicating on a high-cardinality column forces a full shuffle of main_summary, which is very inefficient in Spark; under that load the SparkContext ends up being shut down mid-write, for reasons that are not entirely clear.

Without document deduplication the job can run on a single machine, but with it enabled the job grinds to a halt. Since the results without deduplication are within a reasonable margin of error (<1%) of the deduplicated values, removing duplicate pings is not strictly necessary at this time.
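
A minimal sketch of the change described in the attached PR, assuming the deduplication is simply gated behind a default-off flag; the helper and flag names are hypothetical, and the real diff is in the attachment above:

```
def maybe_dedupe(main_summary, deduplicate=False):
    """Optionally drop duplicate pings by document_id.

    drop_duplicates over a high-cardinality column like document_id
    forces a full shuffle of main_summary, which is what ground the
    weekly job to a halt, so it is disabled by default.
    """
    if deduplicate:
        return main_summary.drop_duplicates(subset=["document_id"])
    return main_summary
```
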
Updated (Assignee, 4 months ago)

Status: NEW → RESOLVED
Last Resolved: 4 months ago
Resolution: --- → FIXED