Closed Bug 820555 Opened 12 years ago Closed 7 years ago

add rel="nofollow" to all external links

Categories

(developer.mozilla.org Graveyard :: General, defect)

defect
Not set
normal

Tracking

(Not tracked)

RESOLVED DUPLICATE of bug 1112668

People

(Reporter: groovecoder, Unassigned)

Details

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:20.0) Gecko/20121210 Firefox/20.0
Build ID: 20121210030747

Steps to reproduce:

Add a link to theoatmeal.com to https://developer.mozilla.org/en-US/docs/Project:About


Actual results:

Link is added with class="external"


Expected results:

Link should be added with class="external" rel="nofollow"
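To illustrate, the difference between actual and expected is a single attribute (the `class="external"` comes from MDN's existing link styling; the URL and link text are illustrative):

```html
<!-- Actual: what the wiki currently produces -->
<a href="http://theoatmeal.com/" class="external">The Oatmeal</a>

<!-- Expected: the same link, with nofollow added -->
<a href="http://theoatmeal.com/" class="external" rel="nofollow">The Oatmeal</a>
```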
CC'ing everyone on SEO interest list. Let's use this bug to discuss - do we want to rel="nofollow" all external links in the wiki?

See http://meta.wikimedia.org/wiki/Nofollow#Current_use_on_Wikimedia_projects, http://www.brianbondy.com/blog/id/104/stackoverflow-amongst-nofollow-web-abuse-sites, and http://www.seomoz.org/blog/weigh-in-on-nofollow-abuse-or-ingenious
A little skeptical that this will have a positive effect on our rankings.

I know very little about SEO, but the overall pattern I see is that SEO has much less to do with tricks and much more to do with providing valuable content to users.

The job of a search engine is to help people find what matters to them. The style of an external link matters very little to users, so I would expect it to matter very little to search engines.

Again, not an expert, but that is the pattern I see.
That said, we could discourage spammers by adding rel=nofollow to all links added by editors. Search engines ignore links that use rel=nofollow, so spammers would not waste their time creating links they know will get rel=nofollow anyway.

From what I understand, spam on Wikipedia dramatically decreased after they started using rel=nofollow on links added by editors. See the video below for more information.

https://www.youtube.com/watch?v=EnVEERmbdpo
https://bugzilla.mozilla.org/show_bug.cgi?id=820555#c3 was my main reason for filing. SEO might be the wrong component.
I read the articles in https://bugzilla.mozilla.org/show_bug.cgi?id=820555#c1 as mostly netting against rel=nofollow. Links with nofollow still have value to spammers, because they get some marginal click-through. 

We have plenty of legitimate external links on MDN, to resources that have relevant and valuable information. It seems stingy to deprive them of SEO juice. After all, we have a campaign to get other sites to link to us via site badges. Are we takers but not givers?
For the moment, I'm not for it.

1) We don't have that many spammers right now. The experience of other sites, like Wikipedia, is that it does lower the amount of spam a bit, but robots don't care; they still spam: "hey, by chance, a human may follow the link, …"
2) Sites like Wikipedia or StackOverflow regularly get bad press because of this. Like Janet said, "takers not givers". There were "Don't link to Wikipedia" campaigns a few years ago.
3) I don't believe it will have any significant impact on SEO. It is an urban legend that link juice goes away with external links. And I'm convinced that search engines do follow these links, especially if they detect that all or most external links on a given site are tagged.

4) I also regularly revisit external links on pages: if the info of the linked resources is already included in the page, I just delete it. 

I think it is much more useful to be sure that any external link addition/modification shows prominently in the dashboard so that we are sure not to miss it.

And we can revisit this in the future if the situation changes. (When robots connect using Persona :-) )
Sounds good to me. Thanks for the input!
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → WONTFIX
Moving to General component.
Component: SEO → General
Re-opening this and putting it under our spam tracking bug, since we're getting more spammers now than when comment #6 was made. :)

Sounds like everyone agrees the SEO impact is minimal, so it seems the main value is to help deter spam, and the main risk is if MDN becomes a "taker not a giver" ?

I'm still 50/50 on this after re-reading https://support.google.com/webmasters/answer/96569?hl=en ... anyone else have new input on this?
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
I'm now leaning *towards* using rel=nofollow, especially if we can have a whitelist of trusted sites that don't get rel=nofollow. But we would also need a process for adding sites to the whitelist (e.g., file a bug, which gets reviewed by humans, according to documented criteria). We can still be givers, but within a "network of trust"; gaining trust requires more than creating an account and putting a link into a page.
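As a sketch of this allow-list idea (all names here are hypothetical: the actual trusted-site list and review process would be decided via the bug-filing workflow described above, and a real implementation would live in kuma's link-rendering code):

```python
from urllib.parse import urlparse

# Hypothetical allow-list; real entries would go through human review.
TRUSTED_HOSTS = {"w3.org", "stackoverflow.com"}

def rel_for_link(href, site_host="developer.mozilla.org"):
    """Return the rel value an editor-added link should get, or None.

    Internal/relative links and links to allow-listed hosts keep their
    link juice; everything else gets rel=nofollow.
    """
    host = urlparse(href).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    if not host or host == site_host:
        return None          # internal or relative link: no change
    if host in TRUSTED_HOSTS:
        return None          # allow-listed: let the link juice flow
    return "nofollow"        # untrusted external link
```

For example, `rel_for_link("http://theoatmeal.com/")` would return `"nofollow"`, while a w3.org link or a relative `/en-US/docs/...` link would return `None`.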
+1 The Google article suggests using nofollow on "untrusted content" - i.e., let's define what "trusted content" is and put nofollow on the rest.
(In reply to Luke Crouch [:groovecoder] from comment #11)
> +1 The Google article suggests using nofollow on "untrusted content" - i.e.,
> let's define what "trusted content" is and put nofollow on the rest.

I'm torn on this whole thing. I think it's important to give back by letting the links flow, but we do need to find ways to be less appealing to spammers.

It's pretty clear we could have a whitelist of domains that are generally trusted, such as Stack Overflow, w3.org, and so on. After that is where things get a little fuzzy.

Can we determine whether a link was added or last edited by a trusted user, and use that information when deciding whether or not to add rel=nofollow?

I suspect there are real limitations to what we can do here. I would like to find a way for vouched Mozillians' edits to be whitelisted, and perhaps for links in their profiles to be whitelisted.

If we can't have a really easy way to get trusted domains quickly (actually quickly, not "Mozilla quickly") whitelisted, then I still am sort of leaning against rel=nofollow.
Trusted content implies that we have a trust system to base it on, which we don't currently have. Using Persona or GitHub for authentication only brings identity and not trust. Ergo we can't trust the user's profile data without considerable effort to hook into systems that provide trust (e.g. a web of trust) like implemented on mozillians.org.

I don't understand :sheppy's comment about "giving back by letting the links flow": do you consider it part of MDN's mission to promote registered users' websites via search-engine traffic?
Let's build up our trust system iteratively. I'm starting with a simple "LOW_ACTIVITY_THRESHOLD" value in https://github.com/mozilla/kuma/pull/3084.

:hoosteeno - can you link in the bugs or any other artifacts for the mozillians trust system?
Flags: needinfo?(hoosteeno)
:groovecoder I don't think we should build a trust system before knowing what we consider trustworthy, especially not "iteratively", which reads to me like implementing something before we've thought it through completely. Surely you don't want to say you trust everyone who has made 10 edits?

For the record, and I hate to repeat myself, this is exactly the type of feature creep that has caused the major maintenance burden kuma has become. Think through the design of a trust system before we go down that road.
There's a big difference between having a whitelist of trusted *sites* whose links allow following, and a system for trusted *users* who, among other things, can add links that allow following. While the latter would enable a lot of things, and should be discussed (as groovecoder has started on the mailing lists), the KISS principle points towards the former as a solution for this bug.
+1 to comment 16. For conversation about social trust in MDN, let's use the thread :groovecoder started:

https://groups.google.com/forum/#!topic/mozilla.dev.mdn/0r0ZUP0nA1A

Whitelisted sites do not have to be the subject of a social trust system. They can be established by administrative fiat. That is the simplest way to push this forward: it's discoverable, and reversible if we devise something wonderfully complex later.

Also +1 to :jezdez for calling out the challenges of organically growing trust systems. My blog posts about rebuilding Mozillians' trust systems describe some of the risks that come from that approach: http://hoosteeno.com/2014/07/30/vouched_improvements/

Let's have more conversation about social trust systems in the threads. For this bug, let's work on a list of sites that would be on our whitelist: https://devengage.etherpad.mozilla.org/allow-list-urls-2015-02-19
Flags: needinfo?(hoosteeno)
Status: REOPENED → RESOLVED
Closed: 12 years ago → 7 years ago
Resolution: --- → DUPLICATE
Product: developer.mozilla.org → developer.mozilla.org Graveyard