Bug 1242017 (Closed): Set up endpoint for receiving mach build metrics
Opened 9 years ago; closed 6 years ago
Categories: Data Platform and Tools :: General, defect, P2
Tracking: Not tracked
Status: RESOLVED FIXED
People: Reporter: rmiller; Assignee: whd
References: Blocks 1 open bug
Details: Whiteboard: [DataOps]
The build team wants to gather and process data generated by the mach tool, so they can measure and track build times and other relevant developer metrics and pain points. We can start with a minimum viable setup: provide them an HTTP endpoint to which they can POST their JSON data, which we will route to an S3 bucket from which they can retrieve said data. The first iteration doesn't require us to parse or crack open the JSON at all. It's okay for the data to go into S3 as a Heka message stream; I (i.e. rmiller) will work with them to make sure they have the tooling they need to convert the Heka message stream back into the original JSON for their own processing.
Updated•9 years ago
Points: --- → 3
Priority: P2 → P1
Comment 1•9 years ago
Scheduled for the next sprint so it can make Q1.
Comment 2•9 years ago
Due to e10s- and ops-related fires, we are unable to address this ticket during this sprint.
Component: Operations: Metrics/Monitoring → Metrics: Pipeline
Priority: P1 → P2
Updated•9 years ago
Whiteboard: [SvcOps]
Comment 3•8 years ago
Per IRL chat today, we'd like to revive this request.
Initially, we're looking to do some rapid prototyping on the type of data we send. So we're probably looking for a single endpoint that accepts N different message types with initially no schema validation. The data will almost certainly be JSON.
Once we have the client-side bits implemented and have confidence in the data we're sending, we can formalize a schema and "productionize" the ingestion. How that switchover works, I'm not sure; it probably involves two endpoints (e.g. "stage" vs. "production") or some kind of routing key in the HTTP request. Not sure what options are available.
Updated•8 years ago
Component: Metrics: Pipeline → Pipeline Ingestion
Product: Cloud Services → Data Platform and Tools
Comment 4•8 years ago
(In reply to Gregory Szorc [:gps] from comment #3)
> Initially, we're looking to do some rapid prototyping on the type of data we
> send. So we're probably looking for a single endpoint that accepts N
> different message types with initially no schema validation. The data will
> almost certainly be JSON.
>
> Once we have the client-side bits implemented and have confidence in the
> data we're sending, we can formalize a schema "productionize" the ingestion.
> How that switchover works, I'm not sure. It probably involves 2 endpoints
> (e.g. "stage" vs "production") or some kind of routing key in the HTTP
> request. Not sure what options are available.
Greg: do we have any further information about the format/content of data to be submitted via |mach telemetry| after 9 months?
Flags: needinfo?(gps)
Comment 5•8 years ago
(In reply to Chris Cooper [:coop] from comment #4)
> Greg: do we have any further information about the format/content of data to
> be submitted via |mach telemetry| after 9 months?
No. The best we have is the mach resource-usage.json file that is produced during builds. But that lacks metadata like what mach command was used, CPU count, etc. These are things we'd almost certainly want in mach telemetry.
At some point, someone just needs to hack up a PoC for what to capture. Or we can set up the endpoint to allow ingestion of anything until we get our act together.
Flags: needinfo?(gps)
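For illustration of the PoC payload gps describes in comment 5, here is a minimal Python sketch of what a mach telemetry submission might contain. Every field name below is hypothetical; nothing in this bug defines an actual schema yet.

import json
import multiprocessing
import sys

# Hypothetical payload, combining the metadata gps mentions above
# (which mach command was used, CPU count) with resource-usage data.
# None of these field names are final; they exist only for illustration.
payload = {
    "command": "build",                        # the mach command that ran
    "argv": sys.argv[1:],                      # its arguments
    "cpu_count": multiprocessing.cpu_count(),  # machine metadata
    "duration_seconds": 812.4,                 # wall-clock build time
    "resource_usage": {},                      # contents of resource-usage.json
}
print(json.dumps(payload, indent=2))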
Comment 6•8 years ago
(In reply to Gregory Szorc [:gps] from comment #5)
> (In reply to Chris Cooper [:coop] from comment #4)
> > Greg: do we have any further information about the format/content of data to
> > be submitted via |mach telemetry| after 9 months?
>
> No. The best we have is the mach resource-usage.json file that is produced
> during builds. But that lacks metadata like what mach command was used, CPU
> count, etc. These are things we'd almost certainly want in mach telemetry.
>
> At some point, someone just needs to hack up a PoC for what to capture. Or
> we can set up the endpoint to allow ingestion of anything until we get our
> act together.
Ted has signed up to do the PoC as a deliverable this quarter. Laura was quite interested in this when I spoke to her yesterday.
Updated•8 years ago
Blocks: buildmetrics
Comment 7•7 years ago
Wes: any update on this, or the generic ingestion service in general? Do you have a new ETA?
Flags: needinfo?(whd)
Assignee
Comment 8•7 years ago
I spec'd out the service requirements with :mreid and :jason, which resulted in the creation of bug #1368197 and bug #1368196; I'm adding those as blockers. Unfortunately I've not had time to do the development work on this, as I've been focused on Mission Control. I expect to be able to work on this next, priority permitting, in mid-to-late July; depending on whether the development work has been done by then, it should take a week to a few weeks.
Comment 9•7 years ago
I'm not seeing any updates in the new dependencies... has this been de-prioritized?
Flags: needinfo?(whd)
Assignee
Comment 10•7 years ago
I expect to start working on this within the next two weeks.
Flags: needinfo?(whd)
Comment 11•7 years ago
Any update here? We've punted on this for 2 quarters now.
Flags: needinfo?(whd)
Updated•7 years ago
Whiteboard: [SvcOps] → [DataOps]
Assignee
Comment 12•7 years ago
This infrastructure is (finally) available. The process for adding a namespace isn't documented and will probably change to be more self-service, but the current instructions are to file a PR against https://github.com/mozilla-services/mozilla-pipeline-schemas (similar to https://github.com/mozilla-services/mozilla-pipeline-schemas/pull/104/files) with JSON and Parquet schemas for your data. That example PR unfortunately also contains some extra telemetry-specific diffs; you should only need to add schema files to e.g. schemas/mach and templates/mach for your ping types.
Once that's merged and deployed, you should be able to POST your JSON blobs to e.g. https://incoming.telemetry.mozilla.org/submit/mach/<doctype>/<docversion>/<docid> and have the result accessible in STMO and ATMO.
Flags: needinfo?(whd)
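For illustration of the submission flow whd describes in comment 12, here is a minimal Python sketch of POSTing a ping to the generic ingestion endpoint. The "build" doctype and version "1" are placeholders (no doctypes are defined in this bug), and the third-party requests package is assumed to be installed.

import uuid

import requests  # third-party; pip install requests

NAMESPACE = "mach"   # must match the schemas/<namespace> directory in the PR
DOCTYPE = "build"    # hypothetical ping type
DOCVERSION = "1"     # hypothetical schema version
docid = str(uuid.uuid4())  # a unique document id

url = (
    "https://incoming.telemetry.mozilla.org"
    f"/submit/{NAMESPACE}/{DOCTYPE}/{DOCVERSION}/{docid}"
)
payload = {"command": "build", "cpu_count": 8}  # placeholder JSON blob

# requests serializes the dict to JSON and sets the Content-Type header.
resp = requests.post(url, json=payload)
resp.raise_for_status()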
Comment 13•7 years ago
Greg, are you able to start working out a schema for the Mach submissions? I'm happy to help you with that, just let me know.
Flags: needinfo?(gps)
Comment 14•7 years ago
I talked to mcote about this and we decided to de-prioritize this work for a few weeks until we have a better handle on the hypothesis we want to test before collecting this data.
Comment 15•7 years ago
Per kmoir, I guess there's nothing to do right this moment. But thanks for the offer to help with a schema!
Flags: needinfo?(gps)
Assignee
Comment 17•6 years ago
I'm going to call this fixed: the infrastructure is available, and its use is now documented at https://docs.telemetry.mozilla.org/cookbooks/new_ping.html. The meta bugs associated with this bug are tracking the actual data generation.
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Updated•2 years ago
Component: Pipeline Ingestion → General