Set up endpoint for receiving mach build metrics

Status: NEW
Product: Data Platform and Tools
Component: Pipeline Ingestion
Priority: P2
Severity: normal
Opened: a year ago
Last modified: 19 days ago

People

(Reporter: RaFromBRC, Assigned: whd)

Tracking

(Depends on: 1 bug, Blocks: 2 bugs)

Details

(Whiteboard: [SvcOps])

(Reporter)

Description

a year ago
The build team wants to gather and process data generated by the Mach tool so they can measure and track build times and other relevant developer metrics and pain points. We can start with a minimum viable setup: an HTTP endpoint to which they can POST their JSON data, which we will route to an S3 bucket from which they can retrieve it. The first iteration doesn't require us to parse or crack open the JSON at all. It's okay for the data to land in S3 as a Heka message stream; I (i.e. rmiller) will work with them to make sure they have the tooling they need to convert the Heka message stream back into the original JSON for their own processing.
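A minimal sketch of the client side described above. The endpoint URL and payload field names are placeholders, not an agreed interface; the point is only that the client POSTs opaque JSON and the server stores it without inspecting it:

```python
import json
import urllib.request

# Hypothetical ingestion endpoint; the real URL would be provisioned by SvcOps.
ENDPOINT = "https://incoming.example.mozilla.org/submit/build-metrics"

def build_request(metrics: dict) -> urllib.request.Request:
    """Wrap a metrics dict in a POST request carrying raw JSON.

    The server does not parse the body in the first iteration, so any
    well-formed JSON document is acceptable.
    """
    body = json.dumps(metrics).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but don't send) a request with an illustrative payload.
req = build_request({"build_time_s": 412.7, "target": "firefox"})
print(req.get_method())                 # POST
print(req.get_header("Content-type"))   # application/json
```

Sending would be a single `urllib.request.urlopen(req)` call; it is omitted here since the endpoint does not exist yet.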

Updated

a year ago
Points: --- → 3
Priority: P2 → P1

Comment 1

a year ago
Scheduled for the next sprint so it can land in Q1.

Comment 2

a year ago
Due to e10s- and ops-related fires, we are unable to address this ticket during this sprint.
Component: Operations: Metrics/Monitoring → Metrics: Pipeline
Priority: P1 → P2

Updated

a year ago
Whiteboard: [SvcOps]

Comment 3

10 months ago
Per IRL chat today, we'd like to revive this request.

Initially, we're looking to do some rapid prototyping on the type of data we send. So we're probably looking for a single endpoint that accepts N different message types with initially no schema validation. The data will almost certainly be JSON.

Once we have the client-side bits implemented and have confidence in the data we're sending, we can formalize a schema and "productionize" the ingestion. How that switchover works, I'm not sure. It probably involves two endpoints (e.g. "stage" vs. "production") or some kind of routing key in the HTTP request. Not sure what options are available.
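The two switchover options mentioned above can be sketched as follows. All names (URLs, header) are illustrative only; nothing here is a decided interface:

```python
# Option 1: separate endpoints per environment.
STAGE_URL = "https://stage-incoming.example.org/submit"
PROD_URL = "https://incoming.example.org/submit"

def endpoint_for(environment: str) -> str:
    """Pick the submission URL based on environment."""
    return PROD_URL if environment == "production" else STAGE_URL

# Option 2: a single endpoint, with a routing key carried in the HTTP
# request headers so the pipeline can route stage vs. production data.
def routing_headers(environment: str) -> dict:
    """Return headers carrying a hypothetical routing key."""
    return {"X-Pipeline-Route": environment}

print(endpoint_for("stage"))
print(routing_headers("production"))
```

Option 1 keeps the client trivial but doubles the infrastructure; option 2 keeps one endpoint but requires the ingestion side to honor the key.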

Updated

a month ago
Component: Metrics: Pipeline → Pipeline Ingestion
Product: Cloud Services → Data Platform and Tools

Comment 4

a month ago
(In reply to Gregory Szorc [:gps] from comment #3)
> Initially, we're looking to do some rapid prototyping on the type of data we
> send. So we're probably looking for a single endpoint that accepts N
> different message types with initially no schema validation. The data will
> almost certainly be JSON.
> 
> Once we have the client-side bits implemented and have confidence in the
> data we're sending, we can formalize a schema and "productionize" the ingestion.
> How that switchover works, I'm not sure. It probably involves 2 endpoints
> (e.g. "stage" vs "production") or some kind of routing key in the HTTP
> request. Not sure what options are available.

Greg: do we have any further information about the format/content of data to be submitted via |mach telemetry| after 9 months?
Flags: needinfo?(gps)

Comment 5

26 days ago
(In reply to Chris Cooper [:coop] from comment #4)
> Greg: do we have any further information about the format/content of data to
> be submitted via |mach telemetry| after 9 months?

No. The best we have is the mach resource-usage.json file that is produced during builds. But that lacks metadata like what mach command was used, CPU count, etc. These are things we'd almost certainly want in mach telemetry.

At some point, someone just needs to hack up a PoC for what to capture. Or we can set up the endpoint to allow ingestion of anything until we get our act together.
Flags: needinfo?(gps)
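A hack-up of the PoC payload comment 5 asks for: resource-usage-style data plus the metadata it says is missing (which mach command was used, CPU count, etc.). Every field name here is hypothetical:

```python
import json
import multiprocessing
import platform
import time

def poc_payload(command: str, duration_s: float) -> dict:
    """Hypothetical mach telemetry payload for prototyping.

    Combines build timing with the environment metadata that
    resource-usage.json currently lacks.
    """
    return {
        "command": command,                        # e.g. "build"
        "cpu_count": multiprocessing.cpu_count(),  # missing from resource-usage.json
        "platform": platform.system(),
        "duration_s": duration_s,
        "submitted_at": int(time.time()),
    }

payload = poc_payload("build", 512.3)
print(json.dumps(payload, indent=2))
```

Since the endpoint accepts anything at first (no schema validation), this shape can evolve freely until the schema is formalized.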

Comment 6

25 days ago
(In reply to Gregory Szorc [:gps] from comment #5)
> (In reply to Chris Cooper [:coop] from comment #4)
> > Greg: do we have any further information about the format/content of data to
> > be submitted via |mach telemetry| after 9 months?
> 
> No. The best we have is the mach resource-usage.json file that is produced
> during builds. But that lacks metadata like what mach command was used, CPU
> count, etc. These are things we'd almost certainly want in mach telemetry.
> 
> At some point, someone just needs to hack up a PoC for what to capture. Or
> we can set up the endpoint to allow ingestion of anything until we get our
> act together.

Ted has signed up to do the PoC as a deliverable this quarter. Laura was quite interested in this when I spoke to her yesterday.

Updated

23 days ago
Blocks: 1362156

Updated

19 days ago
Depends on: 1363160