Open Bug 1917434 Opened 12 days ago Updated 6 days ago

Offer downloads of Firefox with "AI Chatbot" feature removed

Categories: Core :: Machine Learning (defect)
Version: Firefox 130
Status: UNCONFIRMED
Reporter: andi.m.mcclure (Unassigned, NeedInfo)
Attachments: 1 file

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0

Steps to reproduce:

Today Firefox updated itself on my system to 130.0. There is now an "AI Chatbot" feature, listed under "Labs". This is unacceptable to me.

The feature is disabled by default, currently. That does not make a difference to me. I consider the "AI" chatbot industry both harmful (because its products lie to you) and unethical (because it works by mass-aggregating the works of other people, who are neither credited nor compensated, accelerating the process whereby copyright protects corporations but does not bind them, and binds individuals but does not protect them). I do not consent to having software designed to connect to these services installed on my computer.

This is a hard requirement for me. I have previously switched away from SwiftKey (my otherwise preferred Android keyboard) to GBoard rather than accept an AI chatbot feature being present in the software. I am currently in the process of switching away from Microsoft Windows to Linux to avoid the AI chatbot feature which will be present if I upgrade from Windows 10 to 11. If Firefox is going to install an AI chatbot trojan along with its web browser, then I will switch away from Firefox as well, to whatever Firefox fork will allow me to physically remove that code from my computer.

It is possible I am just one single angry person. I do not think so. Many people are being harmed by the "AI" companies. Any version of Firefox (whether made by Mozilla or otherwise) which can honestly display the label "No AI" will have an immediate audience.

Actual results:

As above

Expected results:

I see two options. Option one would be for Firefox to have a separate build channel which edits out the AI chatbot feature. I assume this would not be operationally practical for Mozilla¹.

Option two would be the compromise between anti-AI users and pro-AI software vendors previously taken by projects such as:

  • VS Code
  • Android Studio²
  • iTerm 2

That compromise is to refactor the "AI" connectivity features into a plugin that the user can then choose to uninstall. As Firefox already has a plugin architecture, I believe this is the solution that could allow moral objectors to AI to continue using Firefox³ without putting undue strain on your devops.

¹ One thing I am currently trying to figure out is whether the "ESR" channel includes "Labs" features at all. I have seen claims that some experimental features are excluded in ESR. If this is the case, however, it is irrelevant, since I assume Labs exists to trial features you intend to put in the "real" product someday.
² Note: I have not confirmed that uninstalling the "Gemini" plugin actually removes the feature, and I have concerns that the plugin may be getting silently reinstalled during upgrades.
³ I will still feel uncomfortable doing business with Mozilla given that you are promoting the "AI" scam companies at all, but I am willing to make compromises as long as you don't put that (junk) on my personal computer.

The Bugbug bot thinks this bug should belong to the 'Core::Machine Learning' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.

Component: Untriaged → Machine Learning
Product: Firefox → Core

Should removing the chatbot feature also prevent upcoming Firefox features that provide alternative "AI chatbots" with better privacy, etc. that might significantly benefit those who are already using chatbots?

To be clear, is the threshold of your concerns around "AI" anything that's based on large language models even if it's running privately on device and/or trained with acceptable data? We've seen from early usage of the opt-in AI Chatbot feature that people frequently make use of the summarize shortcut, so we can guide those who otherwise would be using a hosted chatbot to use a specialized local model perhaps from bug 1914048 instead. This probably wouldn't be a full chatbot experience as it's focused on summarization for now, but it could be exposed with the same AI Chatbot interfaces we've added so far for this experimental Labs feature.

I would guess many ChatGPT users who have turned on this Firefox AI Chatbot feature have already discovered alternatives even with the early development in 130 supporting localhost and other providers, and hopefully we can continue to guide these chatbot users to better choices by providing user value.

Flags: needinfo?(andi.m.mcclure)

"Ed Lee": "Should removing the chatbot feature also prevent upcoming Firefox features that provide alternative 'AI chatbots' with better privacy, etc. that might significantly benefit those who are already using chatbots?"

I think I was very clear about what I wanted. If the AI Chatbot feature is compiled into the copy of Firefox on my computer, then I do not want a copy of Firefox on my computer.

There are four problems with LLMs/"AI" I see discussed in this thread:

  1. The copyright/plagiarism issue.
  2. The extreme environmental impact of creating and operating the models. (The comment that raised this issue was hidden as "metoo".)
  3. When the models work "correctly", they do harm (because they do not and cannot do the thing they are advertised to do).
  4. "Privacy", raised by Ed Lee (and there are indeed serious privacy/user safety issues with the cloud-service versions).

Ed, you are proposing a hypothetical "AI chatbot" which averts issue four. However issues 1, 2, and 3, which are the three of highest concern to me, would remain unabated. For purposes of a productive discussion let's go further and imagine issue 1, which could technically be abated, were also resolved in some hypothetical future chatbot system. (Issues 2 and 3 are inherent to the technology and will not be resolved.)

Even if your entirely hypothetical "alternative AI chatbot" solving two of the four serious problems existed, issues 2 and 3 would still be unresolved. Moreover, I do not believe such a hypothetical "alternative" chatbot would be important. LLM "AI" is a highly centralized space dominated by one or two players. There is a smattering of "open source" LLMs (not really open source, often derived from one of the big open source LLMs, but never mind that), and some of these open source LLMs do avoid, or have the potential to avoid, some subset of the problems the big-corp LLMs have. But in practice all these "open source" LLMs do is serve as marketing for the closed-source LLMs. By existing, they help normalize the most harmful versions of the technology, because people can gesture to the side and go "look, three people are doing a less harmful version".

The problem is not any one particular LLM product; the problem is the product category itself and the economic forces worming broken, harmful versions of "AI" into places it should not go. By putting a privileged LLM interface into the browser, Mozilla is helping promote that economic force, and if I use Firefox 130+ with the chatbot feature, then I will be too. I will choose not to do that.

"we can guide those who otherwise would be using a hosted chatbot to use a specialized local model perhaps from bug 1914048 instead…"
"Bug 1914048: Develop a Localized Summarization Model"

In this situation, rather than guiding people to cloud services which harm people by inaccurately summarizing web pages, Mozilla itself will be performing the harm by inaccurately summarizing web pages. At that point I might not be comfortable using anything you make, even if I do have the ability to opt out of installing the AI garbage on my personal computer. So to me this last proposal only makes the problem worse.

This feature should not be part of the idiom of a web browser. If you believe the fundamental idiom of a web browser should be altered to include it, then I do not want to be using your software, and frankly I kinda don't want your users looking at my websites (because what if they're doing it through the AI-garbage filter?). I am not the only person who feels this way, and browsers are not quite as hard to create or to replace as you may think. I am giving you an opportunity to create a path where I can continue being a Firefox user. If some people want to replace their web browsers with this new idiom you're imagining, I suggest you create some separate download for that and let the rest of us continue using a web browser.

Flags: needinfo?(andi.m.mcclure)

(In reply to Ed Lee :Mardak from comment #5)

> Should removing the chatbot feature also prevent upcoming Firefox features that provide alternative "AI chatbots" with better privacy, etc. that might significantly benefit those who are already using chatbots?
>
> To be clear, is the threshold of your concerns around "AI" anything that's based on large language models even if it's running privately on device and/or trained with acceptable data? We've seen from early usage of the opt-in AI Chatbot feature that people frequently make use of the summarize shortcut, so we can guide those who otherwise would be using a hosted chatbot to use a specialized local model perhaps from bug 1914048 instead. This probably wouldn't be a full chatbot experience as it's focused on summarization for now, but it could be exposed with the same AI Chatbot interfaces we've added so far for this experimental Labs feature.
>
> I would guess many ChatGPT users who have turned on this Firefox AI Chatbot feature have already discovered alternatives even with the early development in 130 supporting localhost and other providers, and hopefully we can continue to guide these chatbot users to better choices by providing user value.

Moving this to an addon/extension/plugin would allow users who actually like and use it (however many or few there may be) to continue doing so, especially if the version migrator checked that preference and automatically downloaded the updated add-on if (and only if) it was enabled. Alternatively, the browser could ship a skeleton for the addon that it would then update to the real version. Then, updates to the extension could be handled by the automatic addon update system without forcing it on anyone who removed it.
I see little reason this has to be baked into the core of the browser when things like pdf.js (much more critical to using the web) were provided by extensions for a very long time, and stuff like OpenH264 and Widevine are still plugins.

A mozconfig option to avoid it at build-time would also be acceptable, but might be a bigger maintenance headache than simply making it a modular extension.
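To illustrate what such a build-time switch might look like, here is a minimal mozconfig sketch. Note that `--disable-genai` is NOT a real configure flag today; it is a hypothetical name shown only to make the ask concrete (`ac_add_options` and `mk_add_options` are the standard mozconfig directives):

```shell
# Hypothetical mozconfig for a chatbot-free Firefox build.
# NOTE: --disable-genai does not currently exist; it stands in for
# a flag that would exclude browser/components/genai from the build.
ac_add_options --disable-genai

# Ordinary optimized-build options, for context.
ac_add_options --enable-optimize
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-opt
```

Such a flag would put the burden on people building from source, which is why the extension route is probably the friendlier compromise for ordinary users.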

For the sake of being constructive, and to avoid getting this issue closed with prejudice as WONTFIX, I'd say maybe we should just be asking: "is moving this to a plugin or addon an acceptable compromise that allows the user more freedom of choice, without getting the evil eye from the management that (presumably) asked us to work on this?"

"For the sake of being constructive… I'd say maybe we should just be asking, 'is moving this to a plugin or addon an acceptable compromise that allows the user more freedom of choice'"

For clarity, this is my specific ask as the filer of this bug. (I hope that was clear in the original post.)

The main code for the chatbot is in browser/components/genai and local inference is in toolkit/components/ml, so you could remove those from your build. This would likely break features like image-to-alt-text or potentially translations and other upcoming uses of local inference of non-LLM specialized models like the summarize example, but that could be a reasonable decision for those who don't want those.

Local summarization with a relatively simple model definitely wouldn't be able to replace hosted chatbots, but my hope is that even something with lower correctness can still provide value such as helping users discover/re-discover content they might have otherwise overlooked and never get back to. There's a lot of great content on the web that Firefox can help users find, e.g., someone reading this bug about local inference might be interested in an introduction to transformers.js or WebGPU, or Firefox can help make content more accessible, e.g., explaining complex topics using concepts and language tailored to each user, and all this can happen as an additive experience with the original content for those who choose this.

Your earlier suggestion of ESR is also interesting in that we do support enterprise policies such as bug 1911826 to remove Labs, so something could specifically keep chatbot disabled if that helps.
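For admins who do go the enterprise-policy route, a minimal policies.json sketch might look like the following. This uses the real `Preferences` policy mechanism, but the pref name `browser.ml.chat.enabled` is an assumption about what gates the Labs chatbot feature in 130 and should be verified against your Firefox version:

```json
{
  "policies": {
    "Preferences": {
      "browser.ml.chat.enabled": {
        "Value": false,
        "Status": "locked"
      }
    }
  }
}
```

With `"Status": "locked"` the pref cannot be flipped back from about:config, which is closer to what this bug asks for than a mere default, though the code itself would of course still ship in the binary.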

See Also: → 1911826

> Local summarization with a relatively simple model definitely wouldn't be able to replace hosted chatbots, but my hope is that even something with lower correctness can still provide value such as helping users discover/re-discover content they might have otherwise overlooked and never get back to. There's a lot of great content on the web that Firefox can help users find, e.g., someone reading this bug about local inference might be interested in an introduction to transformers.js or WebGPU, or Firefox can help make content more accessible, e.g., explaining complex topics using concepts and language tailored to each user, and all this can happen as an additive experience with the original content for those who choose this.

This is dangerous, and to my earlier comment, works directly against the interest of Firefox users such as myself. In particular, it makes the factual claim (with tacit endorsement) that LLM-based summarizers can accurately provide summaries of complex topics, and that moreover, it can do so in a way that novices can be relatively confident in the correctness of those summaries.

This is not possible, and is not how LLMs work. At the risk of being slightly hyperbolic, it's about the same level of risk as putting a lock symbol on a URL based on heuristics that are wrong almost as often as they are correct — it communicates something factually untrue to the user, and indicates that they are safe to act on the basis of that falsehood.

As a specific and relatively benign example, consider what happens if I ask ChatGPT 4o mini to summarize the Wikipedia entry on quantum superposition:

Quantum superposition is a fundamental principle in quantum mechanics where a quantum system can exist in multiple states simultaneously. Unlike classical systems, which are in one state at a time, a quantum system can be in a combination of states, described by a wave function. This superposition allows the system to exhibit different properties depending on how it is measured.

For example, an electron in an atom can be in a superposition of different energy levels until it is observed. The principle also underlies phenomena such as interference patterns in experiments with particles like electrons or photons. The concept was famously illustrated by Schrödinger's cat thought experiment, where a cat in a box is considered to be both alive and dead until observed. Quantum superposition is key to understanding various quantum phenomena and technologies, such as quantum computing and quantum cryptography.

Having worked in quantum computing for approximately twenty years, I can recognize that this summary is inaccurate in several ways: it is not true that a quantum system can exist in multiple states simultaneously (a claim the original Wikipedia article doesn't even make!); the phrase "exhibit different properties depending on how it is measured" is complete gibberish; and Schrödinger's cat does not illustrate superposition (another claim not made by the original article) but was rather a specific argument Schrödinger made against the principle of superposition. ChatGPT's summary is literally the direct opposite of the truth in that regard.

What these falsehoods have in common, though, is that they're incredibly common misconceptions that do not appear in the original source. Even though Wikipedia more or less gets those right, when used to "summarize," the LLM just plagiarizes and injects random bullshit from elsewhere on the internet. That's relatively harmless when talking about something like quantum superposition, but it's far more harmful when talking about stuff like public health. When asked to summarize Wikipedia's coverage of mask usage in the pandemic, the same model incorrectly refers to the pandemic in the past tense (changing Wikipedia's "have been" to "became"), but more worryingly omits one of the key parts of Wikipedia's own summary paragraph: "Reviews of various kinds of scientific studies have concluded that masking is effective in protecting the individual against COVID-19." The summary returned by ChatGPT 4o instead makes a weaker statement that is then hedged further with "however, there were debates and evolving guidelines about mask use, particularly regarding the balance between public health benefits and personal freedoms."

It gets even worse when considering topics where a significant amount of bias and hate exists. Asking ChatGPT to summarize Wikipedia's coverage of the Cass Review produces dangerous and transphobic disinformation, omitting all of the serious critiques and limitations of the Cass Review highlighted by Wikipedia's article.

This is all in addition to the energy usage, training ethicality, and labor rights issues that I raised in my earlier comments. Put simply, and in strong agreement with mcc, moving from a cloud-gated LLM to a local one doesn't solve the problem; it makes it even worse by making Mozilla and the Firefox browser directly responsible for these harms.

(As a brief addendum: because of how bad the documentation is for these services, I'm not entirely sure whether the ChatGPT bot was loading the page and providing its content to the LLM, or whether the LLM was acting on the URL directly. Regardless, ChatGPT was unable to produce factually correct output in any of the three cases, despite clearly plagiarizing text from Wikipedia.)

(In reply to Ed Lee :Mardak from comment #13)

> The main code for the chatbot is in browser/components/genai and local inference is in toolkit/components/ml, so you could remove those from your build. This would likely break features like image-to-alt-text or potentially translations and other upcoming uses of local inference of non-LLM specialized models like the summarize example, but that could be a reasonable decision for those who don't want those.
>
> Local summarization with a relatively simple model definitely wouldn't be able to replace hosted chatbots, but my hope is that even something with lower correctness can still provide value such as helping users discover/re-discover content they might have otherwise overlooked and never get back to. There's a lot of great content on the web that Firefox can help users find, e.g., someone reading this bug about local inference might be interested in an introduction to transformers.js or WebGPU, or Firefox can help make content more accessible, e.g., explaining complex topics using concepts and language tailored to each user, and all this can happen as an additive experience with the original content for those who choose this.

I think this is the point of contention here: you assume that those opposed to this chatbot feature simply want a different form of generative AI, when that is not the case. We don't want big, privacy-invading LLMs; we don't want small, local inference models; we just want a web browser.

> Your earlier suggestion of ESR is also interesting in that we do support enterprise policies such as bug 1911826 to remove Labs, so something could specifically keep chatbot disabled if that helps.

Again, the far better option would be to remove the functionality from all builds of the browser and move it to an extension if people really want to maintain the functionality.

As you probably already noticed, Mozilla has plans for AI in Firefox to influence the LLM landscape to better match the mission, but I'll check with leadership if Firefox is still the appropriate way to achieve that.

Flags: needinfo?(edilee)