Bug 1126471 (Closed)
Opened 11 years ago · Closed 11 years ago
Create a Heka input plugin for reading data from S3
Categories: Cloud Services Graveyard :: Metrics: Pipeline (defect)
Tracking: not tracked
Status: RESOLVED FIXED
People: Reporter: mreid; Assignee: mreid
References
Details
This is basically the inverse of the S3SplitFileOutput plugin which writes partitioned data to S3:
https://github.com/mozilla-services/data-pipeline/blob/master/heka/plugins/s3splitfile/s3splitfile_output.go
The input should accept a "schema" file that acts as a filter determining which files are read; it should then fetch each matching file and send its contents through the pipeline.
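The schema-as-filter idea above can be sketched as a key-matching function: S3SplitFileOutput writes keys partitioned into one path component per schema dimension, so an input plugin can decide whether to fetch an object by checking each component against the allowed values for that dimension. This is a minimal illustrative sketch in Go, not the plugin's actual schema format; the `Schema` type, dimension names, and example keys are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// Schema lists, per path dimension, the values an S3 key must match.
// An empty slice or a "*" entry accepts any value for that dimension.
// Illustrative only: the real plugin's schema file format may differ.
type Schema struct {
	Dimensions [][]string
}

// matchKey reports whether the key's leading path components match the
// schema's allowed values, dimension by dimension.
func matchKey(key string, schema Schema) bool {
	parts := strings.Split(key, "/")
	if len(parts) < len(schema.Dimensions) {
		return false
	}
	for i, allowed := range schema.Dimensions {
		if len(allowed) == 0 {
			continue // wildcard dimension: accept anything
		}
		ok := false
		for _, v := range allowed {
			if v == "*" || v == parts[i] {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical three-dimension schema: submission type,
	// document type, and channel (any).
	schema := Schema{Dimensions: [][]string{
		{"telemetry"},
		{"saved_session"},
		{"*"},
	}}
	for _, key := range []string{
		"telemetry/saved_session/nightly/2015-01-27.log",
		"telemetry/main/nightly/2015-01-27.log",
	} {
		fmt.Printf("%s -> %v\n", key, matchKey(key, schema))
	}
	// Prints true for the first key and false for the second.
}
```

A real implementation would list the bucket (e.g. via the AWS S3 list-objects API), apply this filter to each key, download the matches, and inject their contents into the Heka pipeline as the bug describes.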
Updated • 11 years ago
Assignee: nobody → mreid
Comment 1 • 11 years ago
Comments from bug triage:
- Functionality is there, cleaned up and made part of the build/repo
Comment 2 • 11 years ago
Initial PR is here:
https://github.com/mozilla-services/data-pipeline/pull/3
Updated • 11 years ago
Status: NEW → ASSIGNED
Updated • 11 years ago
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Updated • 7 years ago
Product: Cloud Services → Cloud Services Graveyard