The State of Nofollow Link Attribute with Patrick Stox @patrickstox #VCBuzz

Most publishers are aware of the rel=”nofollow” link attribute by now. From being an SEO-only concern, it has found wide application across the Internet. But how do you use it correctly?

Let’s discuss!

***Add #VCBuzz chats to your calendar here.

***Please sign in here to follow the chat -> twchat.com/hashtag/vcbuzz

About @patrickstox

Patrick Stox @patrickstox is a Technical SEO for IBM.

Patrick is an organizer for the Raleigh SEO Meetup, the most successful SEO Meetup in the US, and also the Beer & SEO Meetup.

Patrick is a contributing author for Search Engine Land, Marketing Land, and Search Engine Journal and also speaks at search conferences like #SMX and #Pubcon.

Questions we discussed

Q1 How did you become a digital marketer? Please share your career story!

I started as an affiliate, became a developer, then transitioned to marketing (traditional + digital, in-house at a mid-size company), where I found out I really liked SEO. I ran my own search company for a while, worked for a local SEO company, and now I work at IBM.

Who says I stopped? 😉 It’s more work these days but I’d say it almost feels less competitive. There are more people competing, but the skillset required is greater than ever before.

It wasn’t a hard transition, I was bored of working in a dark room by myself. Time management is key I would say.

Q2 So what exactly is nofollow link attribute and how does it work?

Essentially, it tells search engines not to follow the links on a page, or it can be applied to specific links. This stops those links from being crawled from that location and also stops value from passing through them.

But it doesn’t stop the linked pages from being crawled or indexed altogether. They might be listed somewhere else or linked to from elsewhere. It just stops them from being crawled from that single location.
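The two placements look like this in markup (the URL is illustrative):

```html
<!-- Page level: a robots meta tag in <head>; applies to every link on the page -->
<meta name="robots" content="nofollow">

<!-- Link level: the rel attribute on a single anchor -->
<a href="https://example.com/" rel="nofollow">Example</a>
```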

Maybe a bit, but it’s not something I typically worry about. Google says to keep the number of links on a page to a reasonable amount: “a few thousand at most.”

Q3 What are some of the most widespread mistakes publishers need to avoid when using rel=”nofollow”? When should we consider using it?

The biggest mistake is using nofollow at a page level. Not only do bots not crawl the links going out, but they don’t crawl or pass value to your internal links either. I also see people nofollowing all links going out from a website.

You should use nofollow when you can’t trust the people posting such as in user generated content sections, for paid links, or sometimes for crawl prioritization like with login pages.

Nuances: robots.txt doesn’t stop a page from being indexed, just crawled. And if you block a page in robots.txt, a noindex on that page won’t be seen or respected. In my opinion, nofollow should pretty much be used at the link level only.
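The trap described here can be seen in a minimal example (the path is hypothetical):

```
# robots.txt — blocks /private/ from being CRAWLED, not from being indexed
User-agent: *
Disallow: /private/
```

If a page under /private/ also carries `<meta name="robots" content="noindex">`, the crawler never fetches the page, so the noindex is never seen, and the URL can still end up indexed via external links pointing at it.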

I think you just covered both sides, when you’re in control vs when you’re not. That’s a great rule for when you can control it, but when others control it those should likely be nofollowed unless you trust the people adding the content.

Q4 What are other (better? worse?) ways to control how search bots crawl links and assign quality signals to them?

Robots.txt, directives (on-page + header responses), sitemaps, internal and external links, canonical, pagination, JavaScript, alternate versions (mobile/AMP), hreflang, authentication, the URL parameter tool in Google Search Console, etc. It gets complex fast!
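As one example of the header-response directives mentioned above, an X-Robots-Tag HTTP header can carry the same signals as an on-page robots meta tag, which is useful for non-HTML files (the response shown is illustrative):

```
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```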

Yes. Basically you have to pick one or the other. Do you want the page not crawled, or not indexed? Because it’s pretty much impossible to have both unless there are zero links (including internal) pointing to the page. #vcbuzz

— Jenny Halasz (@jennyhalasz) November 20, 2018

Paid links, UGC like forums, spam, Q&A. A lot of the time this is already automated by a CMS, like WordPress comments, but some systems need custom work to keep these links from being followed. Ideally you’d never have to worry about it.
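WordPress, for instance, adds the attribute to comment links automatically; a custom system would need to output the same markup for user-submitted links (the URL is illustrative):

```html
<!-- A link inside user-generated content, rendered with nofollow -->
<a href="https://example.com/submitted-page/" rel="nofollow">commenter’s link</a>
```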

Q5 What are your favorite SEO tools, especially those that help us understand how search crawlers see and use our website?

For crawlers: DeepCrawl, Screaming Frog, ContentKing, Ryte, Sitebulb, Botify, OnCrawl. Some of those also include log file analysis options with somewhat of a search slant, but I’ll also mention Splunk and logz.io.

Some great Google tools include the Index Coverage report inside the new Google Search Console, which classifies different problems on pages. Fetch and Render in GSC, the rich results tester, and the mobile-friendly tester can all be helpful when dealing with JavaScript as well.

VCBee

Viral Content Bee is the free platform for social media sharing helping you get more shares for your high-quality content
