Open Bug 1309656 Opened 5 years ago Updated 4 years ago

figure out what to do with Balrog Releases that contain non-whitelisted domains

Categories

(Release Engineering :: Release Requests, defect, P3)

Tracking

(Not tracked)

People

(Reporter: bhearsum, Unassigned)

References

Details

Attachments

(1 file)

As part of bug 1303106, we were trying to update all of the blobs in Balrog to add "product". Many of the older blobs failed to update because the domains they used were not in the whitelist, mostly stage.mozilla.org.

This isn't a scenario we ever envisioned, but we're now in a state where we have Releases that we can't serve updates to, and I think we should fix this one way or another.

Possibilities I see:
1) Change domains to point at current replacements (e.g. s/stage.mozilla.org/download.cdn.mozilla.net/g; see the sketch after this list)
2) Remove fileUrls for Releases we either don't have hosted updates for, or don't think we'll ever serve updates to.
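For option 1, a minimal sketch of the kind of rewrite involved, assuming the blob is already loaded as a dict. The helper and the domain mapping are illustrative only, not existing Balrog code:

  import json

  # Illustrative mapping; the real replacements would need to follow
  # whatever the whitelist actually allows per product.
  DOMAIN_MAP = {"stage.mozilla.org": "download.cdn.mozilla.net"}

  def rewrite_domains(node):
      """Recursively replace retired domains anywhere they appear in a blob."""
      if isinstance(node, str):
          for old, new in DOMAIN_MAP.items():
              node = node.replace(old, new)
          return node
      if isinstance(node, dict):
          return {key: rewrite_domains(value) for key, value in node.items()}
      if isinstance(node, list):
          return [rewrite_domains(value) for value in node]
      return node

  # Usage: fixed_blob = rewrite_domains(json.loads(raw_blob))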

One thing I think we _shouldn't_ do is remove old Releases. There's still value in at least holding onto their MAR metadata, and it's necessary to hang on to some of them for partials to watershed Releases to work.
This is a list of all of the blobs that we couldn't update. A few of these may not be caused by domains - the script didn't spit out the specific error.
Duplicate of this bug: 1299828
Curious that your testing in the local dev environment didn't pick this up, when we have all the domain config in tree. Any idea about that ?

I'd be quite happy if we s/stage\.mozilla\.org/archive.mozilla.org/ and that would fix up a lot of them. There must be some other error for the Firefox nightly blobs, because mozilla-nightly-updates.s3.amazonaws.com is allowed. I'm not sure what to do with the older GMP blobs, maybe we just have to add 'GMP' back into the list for ciscobinary, cdmdownload, etc. Is there any downside to that ? If the Superblob shares that product but doesn't allow fileUrls we're OK ?
(In reply to Nick Thomas [:nthomas] from comment #3)
> Curious that your testing in the local dev environment didn't pick this up,
> when we have all the domain config in tree. Any idea about that ?

Ah, I knew this was going to be an issue already, I just didn't think it would be to such a large extent.

> I'd be quite happy if we s/stage\.mozilla\.org/archive.mozilla.org/ and that
> would fix up a lot of them.

Easy!

> There must be some other error for the Firefox
> nightly blobs, because mozilla-nightly-updates.s3.amazonaws.com is allowed.

Looks like some (probably all) of these are getting HTTP 413 (Request Entity Too Large), which sounds fun...

> I'm not sure what to do with the older GMP blobs, maybe we just have to add
> 'GMP' back into the list for ciscobinary, cdmdownload, etc. Is there any
> downside to that ? If the Superblob shares that product but doesn't allow
> fileUrls we're OK ?

The other implication is that anybody with "GMP" product access would technically be able to create non-SuperBlobs and point GMP rules at them. This would mean that a theoretical person who has GMP access but not access to a specific plugin would be able to create non-SuperBlobs and ship them (vs. only being able to mess with a SuperBlob and GMP rules). In practice, I highly doubt a user like this would ever exist, so this is probably fine.
(In reply to Ben Hearsum (:bhearsum) from comment #4)
> Looks like some (probably all) of these are getting HTTP 413 (Request Entity
> Too Large), which sounds fun...

It looks like aurora blobs are much larger than central, 1.6MB vs 630KB, presumably because there are many more locales there. And larger than releases (750KB) because they put all the information down at the platform+locale level, and there's a lot of duplication. But Firefox-mozilla-aurora-nightly-latest did get a product key added, perhaps by the balrog submitter ? So perhaps there's a way to resubmit a single locale instead of the whole blob. Or bump up client_max_body_size in the nginx config, teach the API to accept compressed payloads, etc.
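On the compressed-payloads idea, the client half could look something like the sketch below. The server side (the admin API honoring Content-Encoding: gzip) is hypothetical - it would have to be added:

  import gzip
  import json

  import requests

  def put_release_compressed(url, blob, auth):
      # Compress the JSON body before sending; the admin API would need
      # matching decompression support, which is the hypothetical part.
      body = gzip.compress(json.dumps(blob).encode("utf-8"))
      headers = {"Content-Type": "application/json", "Content-Encoding": "gzip"}
      return requests.put(url, data=body, headers=headers, auth=auth)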

> The other implication is that anybody with "GMP" product access would
> technically be able to create non-SuperBlobs and point GMP rules at them.
> This would mean that a theoretical person who has GMP access but not access
> to a specific plugin would be able to create non-SuperBlobs and ship them
> (vs. only being able to mess with a SuperBlob and GMP rules). In practice, I
> highly doubt a user like this would ever exist, so this is probably fine.

And if we don't add product to the old blobs, does that create a footgun in the unlikely event we have to point to one again ?
(In reply to Nick Thomas [:nthomas] from comment #5)
> (In reply to Ben Hearsum (:bhearsum) from comment #4)
> > Looks like some (probably all) of these are getting HTTP 413 (Request Entity
> > Too Large), which sounds fun...
> 
> It looks like aurora blobs are much larger than central, 1.6MB vs 630KB,
> presumably because there are many more locales there. And larger than
> releases (750KB) because they put all the information down at the
> platform+locale level, and there's a lot of duplication. But
> Firefox-mozilla-aurora-nightly-latest did get a product key added, perhaps
> by the balrog submitter?

I'm a little stumped by this. I don't see anything in the client- or server-side code that would do this, but it's also not in the fail list that I attached. -latest is bigger than the dated blobs, so I don't see how the script could've updated it. Probably this *did* get done as part of nightly updates, and I'm just reading the code poorly...


> So perhaps there's a way to resubmit a single
> locale instead of the whole blob. Or bump up client_max_body_size in the
> nginx config, teach the API to accept compressed payloads, etc.

Good ideas! Hopefully there *is* a way to submit product without submitting the entire blob. A POST to /api/releases/:name looks promising based on https://github.com/mozilla/balrog/blob/master/auslib/admin/views/releases.py#L269


> > The other implication is that anybody with "GMP" product access would
> > technically be able to create non-SuperBlobs and point GMP rules at them.
> > This would mean that a theoretical person who has GMP access but not access
> > to a specific plugin would be able to create non-SuperBlobs and ship them
> > (vs. only being able to mess with a SuperBlob and GMP rules). In practice, I
> > highly doubt a user like this would ever exist, so this is probably fine.
> 
> And if we don't add product to the old blobs, does that create a footgun in
> the unlikely event we have to point to one again ?

It does indeed; very good point. Probably any case where we have a blob that would fail a whitelist check is some sort of footgun, so adding GMP is probably the right thing to do, unless we want to convert these old blobs to multifile updates. It might be good to have a cronjob or something else that checks for this sort of thing regularly, to help us avoid getting into this state in the future.
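Something like the sketch below could run from cron. It's only a sketch - the flat whitelist here is a stand-in for the real config, which maps domains to products:

  import urllib.parse

  # Stand-in for the real domain whitelist (per-product in Balrog's config).
  WHITELISTED_DOMAINS = {"archive.mozilla.org", "download.cdn.mozilla.net"}

  def offending_urls(node):
      """Recursively yield any URL in a release blob that points at a
      non-whitelisted domain, so a periodic job can flag it."""
      if isinstance(node, str) and node.startswith("http"):
          host = urllib.parse.urlparse(node).hostname or ""
          if host not in WHITELISTED_DOMAINS:
              yield node
      elif isinstance(node, dict):
          for value in node.values():
              yield from offending_urls(value)
      elif isinstance(node, list):
          for value in node:
              yield from offending_urls(value)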
It looks like a POST request to /releases/:name will do an incremental update of the blob. The only caveat is that if we're replacing fileUrls, we must replace it in *full*. .update() is used, so any keys present in the dict we pass to it will fully replace the existing ones (keys not provided in the new dict will be left alone). I tested this locally with one of the Fennec release blobs and it worked well. It should be a pretty easy tweak to Varun's script to do this, so I'm going to take a crack at it today.
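For reference, a rough sketch of what that tweak could look like. The form field names (product, data_version, data) are my assumptions from reading the admin views, not a documented contract:

  import json

  import requests

  def add_product(admin_url, name, product, data_version, auth, file_urls=None):
      # Partial blob: only the keys we want .update() to replace.
      partial = {}
      if file_urls is not None:
          # Per the caveat above, fileUrls must be the complete corrected
          # dict, because .update() replaces the whole key at once.
          partial["fileUrls"] = file_urls
      payload = {
          "product": product,
          "data_version": data_version,
          "data": json.dumps(partial),
      }
      return requests.post("%s/releases/%s" % (admin_url, name), data=payload, auth=auth)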
I did a pass today, and so far have updated all of the blobs except the GMP ones to include product. I had to fix domains as discussed above, with a few caveats and extras:
* There were some "ftp.mozilla.org" references in some Firefox blobs that I replaced with archive.mozilla.org.
* There were some "archive.mozilla.org" references in some Thunderbird blobs that I replaced with download.cdn.mozilla.net (Thunderbird is not allowed to use archive, for some reason).
* Some old manually created blobs (like -whatsnew blobs) had names that didn't match the database column - I fixed all of these.

AFAICT nightly blobs are not getting product set - I updated a whackload of them today. Varun is going to fix the submitter scripts to do that in bug 1303106. I'll need to do another manual pass of updating these once that's landed.

I haven't done anything with the old GMP blobs either. We talked before about adding GMP to the OpenH264/CDM/Widevine domains, but considering we don't currently point at any of the old GMP blobs, perhaps we should just back them up and delete them instead? We're probably better off recreating them later in the unlikely event that we need them.
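If we go the backup-and-delete route, a rough sketch, assuming GET and DELETE are exposed on /releases/:name and that DELETE wants the current data_version like other modifications do:

  import json

  import requests

  def backup_and_delete(admin_url, name, data_version, auth):
      url = "%s/releases/%s" % (admin_url, name)
      # Save a local copy first so the blob can be recreated if ever needed.
      blob = requests.get(url, auth=auth).json()
      with open("%s.json" % name, "w") as f:
          json.dump(blob, f, indent=2)
      resp = requests.delete(url, params={"data_version": data_version}, auth=auth)
      resp.raise_for_status()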
I had a look today, and the only blobs without products are nightlies and releases that were created since November 4th, and the old GMP blobs. Once the submission tools have been updated we can do a final pass on the nightlies/releases, and we still need to figure out what to do with the old GMP ones.
Priority: -- → P3
Bulk change of QA Contact to :jlund, per https://bugzilla.mozilla.org/show_bug.cgi?id=1428483
QA Contact: rail → jlund