Closed
Bug 1105554
Opened 11 years ago
Closed 11 years ago
Daily Aggregate Compression in Redshift
Categories
(Content Services Graveyard :: Tiles: Data Processing, defect)
Tracking
(Not tracked)
RESOLVED
INVALID
Iteration:
36.3
People
(Reporter: Mardak, Assigned: tspurway)
References
Details
(Whiteboard: .008)
Copied from https://github.com/tspurway/infernyx/issues/10
Right now we write data to Redshift quite often. This gives us great latency (freshness), but it also increases storage requirements due to low aggregation, which raises the price of the Redshift instance and reduces query performance.
We can introduce a daily job in Infernyx that 'compresses' the data by writing a single row for each unique key per day and cleaning out that day's uncompressed data.
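A minimal sketch of the proposed compression job, using an in-memory sqlite3 database as a stand-in for Redshift. The table and column names (`impression_stats`, `tile_id`, `impressions`, `clicks`) are illustrative assumptions, not the real Tiles schema; the idea is just: aggregate the day's rows per unique key, delete the raw rows, and write back one row per key.

```python
import sqlite3

# Hypothetical schema standing in for the real Redshift table;
# names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE impression_stats "
    "(tile_id INT, day TEXT, impressions INT, clicks INT)"
)
conn.executemany(
    "INSERT INTO impression_stats VALUES (?, ?, ?, ?)",
    [
        (1, "2014-11-26", 10, 1),  # many small writes per key per day...
        (1, "2014-11-26", 5, 0),
        (2, "2014-11-26", 7, 2),
    ],
)

def compress_day(conn, day):
    """Collapse one day's rows into a single aggregate row per unique key,
    then remove the uncompressed rows (the proposed daily job, sketched)."""
    cur = conn.cursor()
    cur.execute(
        "SELECT tile_id, day, SUM(impressions), SUM(clicks) "
        "FROM impression_stats WHERE day = ? GROUP BY tile_id, day",
        (day,),
    )
    aggregated = cur.fetchall()
    # Clean out the existing day's data, then write the compressed rows.
    cur.execute("DELETE FROM impression_stats WHERE day = ?", (day,))
    cur.executemany("INSERT INTO impression_stats VALUES (?, ?, ?, ?)", aggregated)
    conn.commit()
    return aggregated

compress_day(conn, "2014-11-26")
result = conn.execute(
    "SELECT tile_id, impressions, clicks FROM impression_stats ORDER BY tile_id"
).fetchall()
print(result)  # one row per unique key for the day; totals preserved
```

In real Redshift this would likely run as a `CREATE TABLE AS` / `DELETE` / `INSERT` sequence inside a transaction, since Redshift rewards bulk operations over row-at-a-time writes.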
Reporter | Updated 11 years ago
Iteration: 37.1 → 37.2
Assignee | Updated 11 years ago
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → INVALID
Reporter | Updated 11 years ago
Iteration: 37.2 → 36.3
Points: --- → 3
Whiteboard: .? → .008
Reporter | Updated 11 years ago
Assignee: nobody → tspurway