Closed Bug 1450309 Opened 3 years ago Closed 4 months ago

Improve WebsiteFilter policy implementation

Categories

(Firefox :: Enterprise Policies, enhancement, P3)


Tracking


VERIFIED FIXED
83 Branch
Tracking Status
firefox-esr78 --- verified
firefox83 --- verified
firefox84 --- verified

People

(Reporter: Felipe, Assigned: mkaply)

References

Details

Attachments

(3 files, 1 obsolete file)

Currently, the implementation of WebsiteFilter uses an http-on-modify-request observer. That has the following disadvantages:

- it can only filter http/https loads
- it uses an observer
- it's implemented in JS

At a minimum, we should try to replace it with an nsIContentPolicy filter. The problem is that we want to return the error code Cr.NS_ERROR_BLOCKED_BY_POLICY in order to display the proper error message, but that will require some changes to the code that calls the content policy filters.

Ideally we would re-implement this filter in C++, maybe as a C++ implementation of nsIContentPolicy, or, even better, called directly from nsChannelClassifier::StartInternal().

Hopefully we can re-use the MatchPattern matcher so we can keep supporting the policy with the same parameter type.

One note is that MatchPattern has a whitelist of protocols that it accepts, which doesn't include data:*, so we should probably add that as a special case.
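The MatchPattern reuse idea above can be sketched roughly as follows. This is an illustrative Python approximation, not the actual C++ code in toolkit/components/extensions/MatchPattern.cpp; the scheme allowlist and the data: handling simply mirror the special case the comment proposes.

```python
import re

# Patterns look like "<scheme>://<host><path>", with "*" as a wildcard.
# "data:" is handled as a special case since it has no host component.
ALLOWED_SCHEMES = {"http", "https", "ws", "wss", "ftp", "file"}

def compile_pattern(pattern):
    if pattern.startswith("data:"):
        # data: URLs aren't in the scheme allowlist; special-case them.
        body = re.escape(pattern[len("data:"):]).replace(r"\*", r".*")
        return re.compile("^data:" + body + "$")
    scheme, sep, rest = pattern.partition("://")
    if not sep:
        raise ValueError("pattern must contain '://'")
    if scheme != "*" and scheme not in ALLOWED_SCHEMES:
        raise ValueError("unsupported scheme: " + scheme)
    host, slash, path = rest.partition("/")
    scheme_re = r"(http|https|ws|wss|ftp)" if scheme == "*" else re.escape(scheme)
    # "*" in the host matches any run of non-slash characters;
    # "*" in the path matches anything.
    host_re = re.escape(host).replace(r"\*", r"[^/]*")
    path_re = re.escape(slash + path).replace(r"\*", r".*")
    return re.compile("^" + scheme_re + "://" + host_re + path_re + "$")

def matches(pattern, url):
    return compile_pattern(pattern).match(url) is not None
```

Note how `file:///c:/*` works in this scheme: the host is empty, so the pattern collapses to scheme plus path.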
nsIContentPolicy has been rewritten. We might be able to do the error code now.
Priority: -- → P3

From a conversation with Ehsan on IRC, we arrived at the best solution being to implement this in Necko, at the same level as where the nsChannelClassifier is called: https://searchfox.org/mozilla-central/rev/465dbfe030dfec7756b9b523029e90d48dd5ecce/netwerk/base/nsBaseChannel.cpp#314

Oh... That explains why the following shows up on about:policies but doesn't have any effect:

"WebsiteFilter": {
  "Block": ["file:///*/*", "file:///D:/*"]
}

Is it a goal of WebsiteFilter to support file protocol URIs, or would some other policy be needed?

Related issue: https://github.com/mozilla/policy-templates/issues/111

Yep, that's why. It is on my list, we just haven't gotten to it.

Ehsan:

I'm concerned this is a little too deep in necko to do this. Every request comes through here. So for instance, if we wanted to block file URLs for local files, how would we know if the request coming through was user initiated versus something Firefox requested? In a nightly build, I see requests for every resource Firefox needs to run.

Flags: needinfo?(ehsan)

Also, I don't even see normal page loads (navigating to a URL) coming through that code.

(In reply to Mike Kaply [:mkaply] from comment #5)

Ehsan:

I'm concerned this is a little too deep in necko to do this. Every request comes through here. So for instance, if we wanted to block file URLs for local files, how would we know if the request coming through was user initiated versus something Firefox requested? In a nightly build, I see requests for every resource Firefox needs to run.

Yup, that code is involved in loading any channel. :-) You probably want to determine a more precise condition that you want to check for, which could mean for example looking at the loading principal of the channel, or its triggering principal, when deciding what was the "cause" of the loading of the channel, and then ignore those with a system principal (if for example your code wants to focus on web pages). I can't suggest something more specific without knowing more about the problem you're trying to solve, but the documentation here should be helpful hopefully. This code runs at the same level of the existing WebsiteFilter code, so any information that you'd have access to there, you could have access to here, and vice versa (and I'd be happy to help figure out how to access something if it's not obvious where to obtain it.)
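A first cut at the principal check Ehsan describes might look like this (illustrative Python sketch; "triggering_principal" is a hypothetical stand-in for the channel's loadInfo fields, not a real Necko attribute name):

```python
def should_filter(load_info):
    # Skip loads whose triggering principal is the system principal:
    # those are browser-internal loads (UI resources, JSMs, etc.), not
    # the web-content navigations the policy is meant to filter.
    # The dict key is a hypothetical stand-in, not a real Necko name.
    return load_info.get("triggering_principal") != "system"
```

The real decision is likely more nuanced (loading principal vs. triggering principal, as noted above), but this is the shape of the "more precise condition".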

(In reply to Mike Kaply [:mkaply] from comment #6)

Also, I don't even see normal page loads (navigating to a URL) coming through that code.

Yeah, that link points to the nsBaseChannel call site, which is used for channel classes which inherit from that class (for example nsFileChannel). For other channels (e.g. nsHttpChannel) you can see how the URL classifier service integrates with nsHttpChannel.

Flags: needinfo?(ehsan)

Ehsan:

So the goal here is to add simple blacklist/whitelist filtering based on a MatchPattern (toolkit/components/extensions/MatchPattern.cpp)

This match pattern is defined in enterprise policies (Services.policies) including protocols like file, etc.

https://searchfox.org/mozilla-central/source/browser/components/enterprisepolicies/Policies.jsm#1474

The whitelist and blacklist work in tandem:

  • Allow access to all URLs except the ones you block—Use the blacklist to prevent users from visiting certain websites, while allowing them access to the rest of the web.
  • Block access to all URLs except the ones you allow—Use the blacklist to block access to all URLs. Then, use the whitelist to allow access to a limited list of URLs.
  • Define exceptions to very restrictive blacklists—Use the blacklist to block access to all URLs. Then, use the whitelist to let users access certain schemes, subdomains of other domains, ports, or specific paths.

If the URL fails the filter check, we cancel the channel with NS_ERROR_BLOCKED_BY_POLICY.

Right now we do this by intercepting the HTTP request, but that prevents blocking of local files.
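The Allow/Block interaction described in the bullets above reduces to a small precedence check: allowlist entries carve exceptions out of the blocklist. A minimal sketch, assuming the allowlist wins as the bullets state (Python's fnmatch is just a stand-in for the real MatchPattern matcher):

```python
from fnmatch import fnmatch

def is_blocked(url, block_patterns, allow_patterns):
    # Allow entries take precedence: they carve exceptions out of even
    # a "block everything" blocklist, covering all three use cases above.
    if any(fnmatch(url, p) for p in allow_patterns):
        return False
    return any(fnmatch(url, p) for p in block_patterns)
```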

(In reply to Mike Kaply [:mkaply] from comment #8)

Ehsan:

So the goal here is to add simple blacklist/whitelist filtering based on a MatchPattern (toolkit/components/extensions/MatchPattern.cpp)

This match pattern is defined in enterprise policies (Services.policies) including protocols like file, etc.

https://searchfox.org/mozilla-central/source/browser/components/enterprisepolicies/Policies.jsm#1474

The whitelist and blacklist work in tandem:

  • Allow access to all URLs except the ones you block—Use the blacklist to prevent users from visiting certain websites, while allowing them access to the rest of the web.
  • Block access to all URLs except the ones you allow—Use the blacklist to block access to all URLs. Then, use the whitelist to allow access to a limited list of URLs.
  • Define exceptions to very restrictive blacklists—Use the blacklist to block access to all URLs. Then, use the whitelist to let users access certain schemes, subdomains of other domains, ports, or specific paths.

This is the part you need a more precise condition for. "All URLs" doesn't mean much when you're looking at all Necko channels, which is what you'll get by hooking into nsBaseChannel. As you've seen, we load all resources, even the ones required by the system, through channels. So you need to decide what exact set of loads you care about (see the previous comment).

BTW here I'm explicitly hoping that you're not asking me to figure this out. :-) I think I can help answer any questions you may have but I currently can't devote enough time here to think about all of the possible use cases for blocking things that you may or may not want to support, and I'm really unfamiliar with the actual use cases that enterprise deployments have around this stuff.

If the URL fails the filter check, we cancel the channel with NS_ERROR_BLOCKED_BY_POLICY.

Right now we do this by intercepting the HTTP request, but that prevents blocking of local files.

Please note that the only change here is that your new approach is looking at more channel types. Your current solution for example, as far as I can tell, still allows blocking the loads of HTTP channels loaded for system purposes (e.g. "http://detectportal.firefox.com" which we use for captive portal detection). It's not clear to me if that is desirable or not...

It just happens that we don't use HTTP channels to load things like resources for browser UI, JSMs and what not, so now you're seeing more of this problem.

BTW here I'm explicitly hoping that you're not asking me to figure this out. :-)

Definitely not. I want to learn and figure it out :).

All this information has been super helpful. I have some ideas on how to do this. Thanks!

No problem! Let me know if I can help further. :-)

So I'm making some progress understanding how to block URLs, but file URLs are baffling me.

Even though file URLs seem to come through nsBaseChannel::ClassifyURI and nsChannelClassifier::StartInternal, they can't actually be cancelled (mChannel->Cancel(NS_ERROR_BLOCKED_BY_POLICY) has no effect).

Is there some other place where I can cancel local file channels? I don't see anything obvious in FileChannel*.cpp

Is it because file URLs are somehow sync versus async maybe?

Calling Cancel() should work whether or not the channel has been opened synchronously. It is indeed how the channel classifier cancels channels when it finds e.g. a phishing channel. Maybe the problem is somewhere else, for example with asynchronously opened channels you need to pause them first if you're going to do some async work before determining whether or not you'd like to call Cancel(), and if you don't you should resume (see how nsChannelClassifier does this.) If that's not the issue perhaps looking at the Necko logs and/or what happens when Cancel() is called under the debugger can tell you more about why it doesn't work.
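The suspend/resume discipline Ehsan describes can be sketched with a toy model (illustrative Python, not Necko code; the Channel class and status strings are hypothetical stand-ins for nsIChannel and its nsresult codes):

```python
import asyncio

class Channel:
    """Hypothetical stand-in for an nsIChannel; names are illustrative."""
    def __init__(self, url):
        self.url = url
        self.suspended = False
        self.status = "OK"

    def suspend(self):
        self.suspended = True

    def resume(self):
        self.suspended = False

    def cancel(self, status):
        self.status = status

async def classify(channel, is_blocked):
    # Pause the channel before doing async work, as nsChannelClassifier
    # does, so no data flows while we decide.
    channel.suspend()
    try:
        if await is_blocked(channel.url):
            channel.cancel("NS_ERROR_BLOCKED_BY_POLICY")
    finally:
        # If we didn't cancel, we must resume or the load hangs forever.
        if channel.status == "OK":
            channel.resume()

async def demo():
    async def is_blocked(url):
        await asyncio.sleep(0)  # stand-in for an async policy lookup
        return url.startswith("file://")

    blocked = Channel("file:///c:/secret.txt")
    allowed = Channel("https://example.com/")
    await classify(blocked, is_blocked)
    await classify(allowed, is_blocked)
    return blocked.status, allowed.status
```

Forgetting the resume in the non-blocked path is exactly the kind of bug that makes Cancel() appear "to have no effect" while the real problem is a stalled load.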

So, looking at this more, I decided to stick with content policy; it's what it's designed for. I was able to accomplish this with a very small change to the content blocker.

Assignee: nobody → mozilla
Status: NEW → ASSIGNED
Pushed by mozilla@kaply.com:
https://hg.mozilla.org/integration/autoland/rev/14c35856cea4
Allow nsIContentPolicy to reject based on enterprise policy. r=ckerschb
https://hg.mozilla.org/integration/autoland/rev/7fae6ea289bd
Switch WebsiteFilter to nsIContentPolicy and allow blocking local files. r=emalysz
Backout by csabou@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/efdf5255c248
Backed out 2 changesets for wpt failures on reporting-navigation.https.html. CLOSED TREE

Can you please confirm that this patch is causing the wpt failures?

I just did a local build without my patch and reporting-navigation.https.html is failing 14 out of 26 tests.

This patch doesn't change any existing behavior.

Flags: needinfo?(mozilla) → needinfo?(csabou)

This was me and I fixed.

Flags: needinfo?(csabou)
Pushed by mozilla@kaply.com:
https://hg.mozilla.org/integration/autoland/rev/e47127323735
Allow nsIContentPolicy to reject based on enterprise policy. r=ckerschb
https://hg.mozilla.org/integration/autoland/rev/011ac3fee004
Switch WebsiteFilter to nsIContentPolicy and allow blocking local files. r=emalysz

Hi Mike, before I pushed the backout I gave it a go on try to confirm the failures came from this patch, since there were build bustages on the tree and test coverage was missing: https://treeherder.mozilla.org/#/jobs?repo=try&resultStatus=success%2Cpending%2Crunning%2Ctestfailed%2Cbusted%2Cexception&revision=1ad7fe7e254264fa6291b9a3161113f6c09d1e9c&searchStr=wpt&selectedTaskRun=JX49quCITT6NNEhfMl4Tvw.0.

Status: ASSIGNED → RESOLVED
Closed: 4 months ago
Resolution: --- → FIXED
Target Milestone: --- → 83 Branch
Blocks: 1559181

Comment on attachment 9177987 [details]
Bug 1450309 - Allow nsIContentPolicy to reject based on enterprise policy. r?ckerschb!

ESR Uplift Approval Request

  • If this is not a sec:{high,crit} bug, please state case for ESR consideration: Allow for implementing local file blocking in policy without using web request.
    Policy related.
  • User impact if declined: Unable to put new file blocking (and multiple followup bugs) on ESR.
  • Fix Landed on Version: 83
  • Risk to taking this patch: Low
  • Why is the change risky/not risky? (and alternatives if risky): Only affects policy
  • String or UUID changes made by this patch: None
Attachment #9177987 - Flags: approval-mozilla-esr78?
Attached patch ESR version of second patch (obsolete) — Splinter Review

[Approval Request Comment]
If this is not a sec:{high,crit} bug, please state case for ESR consideration: Policy update
User impact if declined: Not able to block local files via policy
Fix Landed on Version: 83
Risk to taking this patch (and alternatives if risky): Low - automated tests.

See https://wiki.mozilla.org/Release_Management/ESR_Landing_Process for more info.

ESR specific patch was needed because of bug 1649221

Attachment #9182975 - Flags: approval-mozilla-esr78?

I consider this low risk because we have existing tests for the behavior change and this is using the more common method of blocking (nsIContentPolicy) which is tested and works.

The previous implementation was much more fragile.

Can you share why this is needed on esr78 in more detail? What are the "multiple followup bugs"?

Flags: needinfo?(mozilla)
No longer depends on: 1429178
Blocks: 1465847

We keep the ESR and rapid release in sync with regards to policy so that we can make new policy features available in both. (We don't want them to diverge). In addition, it allows policy documentation and third party documentation (STIG) to only be about one version of Firefox.

So far there are two bugs that depend on this:

Bug 1559181 - Loading chrome pages blocked by about: policies never complete
Bug 1465847 - WebsiteFilter policy does not block view-source

Flags: needinfo?(mozilla)

Do you need any followup on this?

Flags: needinfo?(jcristau)

I don't understand why this is a priority to uplift. Bug 1559181 has been around for a long time and is low severity. Bug 1465847 isn't even fixed in trunk yet, and likewise has been around forever.

Flags: needinfo?(jcristau)

I don't understand why this is a priority to uplift. Bug 1559181 has been around for a long time and is low severity. Bug 1465847 isn't even fixed in trunk yet, and likewise has been around forever.

This is a priority to uplift because this has been an enterprise requested feature for a while that we have finally been able to fix.

As far as the other bugs go, I'm pointing out that when I request uplift for those, they will depend on this.

Flags: qe-verify?

Verification process:

Add a policies.json that uses a file URI:

{
  "policies": {
    "WebsiteFilter": {
      "Block": ["file:///c:/*"]
    }
  }
}

Verify that navigation to file URLs matching that pattern is blocked.

QA Whiteboard: [qa-triaged]
Flags: qe-verify? → qe-verify+
Attachment #9182975 - Flags: approval-mozilla-esr78?
Attached patch Updated patch — Splinter Review

The head.js changes for detecting error pages were not compatible with 78.

This patch just removes that code (since it was just changing to a new API for checking error pages)

Attachment #9182975 - Attachment is obsolete: true
Attachment #9186763 - Flags: approval-mozilla-esr78?

Comment on attachment 9177987 [details]
Bug 1450309 - Allow nsIContentPolicy to reject based on enterprise policy. r?ckerschb!

Approved for 78.5esr.

Attachment #9177987 - Flags: approval-mozilla-esr78? → approval-mozilla-esr78+
Attachment #9186763 - Flags: approval-mozilla-esr78? → approval-mozilla-esr78+

Confirming this issue as verified fixed. Verified using macOS 10.15.7, Windows 10 x64 and Ubuntu 18.04 x64 on the following builds:

  • 83.0 (20201110140026)
  • 78.5esr (20201110001500)
  • 84.0a1 (20201110220109)
Status: RESOLVED → VERIFIED
Flags: qe-verify+