Closed Bug 1311284 Opened 8 years ago Closed 8 years ago

ATMO V2: Make Spark job identifier unique

Categories

(Cloud Services Graveyard :: Metrics: Pipeline, defect, P1)

defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: jezdez, Unassigned)

References

Details

Attachments

(1 file)

Spark jobs currently allow reusing the same identifier, which can lead to data loss when multiple Spark jobs share the same notebook filename and identifier: deleting one of the Spark jobs will delete the shared notebook file from S3.
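A minimal sketch of the kind of uniqueness check the fix implies, independent of ATMO's actual code: before accepting a new Spark job, reject an identifier already in use so that two jobs never point at the same S3 notebook key. The function and variable names below are illustrative, not ATMO's API.

```python
def validate_job_identifier(identifier, existing_identifiers):
    """Reject a Spark job identifier that is already taken.

    Two jobs sharing an identifier would also share the notebook
    key on S3, so deleting one job would delete the other's notebook.
    """
    if identifier in existing_identifiers:
        raise ValueError(
            "Spark job identifier %r is already in use; "
            "choose a unique identifier" % identifier
        )
    return identifier
```

For example, `validate_job_identifier("weekly-report", {"daily-report"})` succeeds, while passing an identifier that is already in the set raises `ValueError` instead of silently creating a second job over the same notebook file.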
Blocks: 1248688
Points: --- → 1
Priority: -- → P1
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Product: Cloud Services → Cloud Services Graveyard
