Solving Secure Server Issues for SEO


A secure version of a webpage is easy to identify: it will use the ‘https://’ protocol in its URL. For websites which require protected logins, it’s essential.


Where users provide data that should not be shared, such as payment information, a secure server is required. These pages are deemed secure because their data is encrypted, making it difficult for hackers to read the content even if they do manage to intercept it.

The Difference between HTTP and HTTPS

The former, HTTP, is used in standard web browsing. When an address is typed into a web browser or a result is clicked in the SERPs, the browser requests information from that website’s server. The response is a web page, delivered to your screen.

The HTTPS protocol sees that communication encrypted. HTTP is used in conjunction with TLS/SSL (Transport Layer Security / Secure Sockets Layer) to verify the server’s authenticity and encrypt the exchange. It means that, essentially, no one else can ‘eavesdrop’ on your browsing.

Why Would That Affect SEO?

The main thing HTTPS versions of pages tend to cause is duplicate content issues. This can be a massive problem, especially when the HTTPS and HTTP versions of the same page end up competing against each other in the rankings.

As a result, links can end up being built to both the HTTP and HTTPS versions of a page, splitting the link equity between them and reducing its potential.

SEO Considerations When Creating HTTPS pages

The following must be taken into account when you’re making secure pages on your site.

Only secure pages which need securing

Only ever secure pages which actually need securing. There’s no need to serve the HTTPS protocol on pages that don’t require it. If you’ve already done this, you can sort it out simply with a rewrite rule, as sketched below.
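As a rough sketch, assuming an Apache server with mod_rewrite enabled, a rule in your .htaccess along these lines would 301 HTTPS requests back to HTTP for everything except the pages that genuinely need to stay secure (the /checkout/ and /login/ paths and the example.com domain below are placeholders):

# Sketch only: redirect HTTPS to HTTP, except for paths that must stay secure
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/(checkout|login)/ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]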

Don’t duplicate

The number of client sites I’ve seen where the entire site has simply been duplicated with an HTTPS version makes me want to cry. It’s awful, for obvious duplicate content reasons, and you should NEVER do it.

Consider Login Pages

Login pages can act as a barrier against https pages being indexed, as search engine spiders can’t get through the login page to do any additional crawling.

If you don’t need a login page, read on to find out how to stop secure pages being indexed:

How to Stop HTTPS Pages Being Indexed

As a rule, search engine spiders should not be able to crawl any of your secure pages. A search engine can index secure pages provided it can get to them – and the likelihood is that you want the HTTP versions of your pages to rank, NOT the secure versions.

You can do this by adding a rel="canonical" tag to your HTTPS pages, OR you can add a meta robots tag to your HTTPS pages (if there’s no HTTP version to point a canonical to):

<meta name="robots" content="noindex,nofollow">
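Where an HTTP equivalent does exist, the canonical option means adding a tag like this to the head of the HTTPS page, pointing at the HTTP URL (example.com and the page path here are placeholders):

<link rel="canonical" href="http://www.example.com/page/">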

You can also use rewrite rules if your site runs on an Apache server. In this case, you’ll need to add a rule to your .htaccess file that serves a separate robots file to requests made over HTTPS.

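The exact rule depends on your setup, but as a minimal sketch (assuming mod_rewrite is enabled and the secure robots file is called ‘secure_robots.txt’, the name used below) it would look something like this:

# Sketch only: serve secure_robots.txt in place of robots.txt for HTTPS requests
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^robots\.txt$ secure_robots.txt [L]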

You will need to put this BEFORE any redirects listed in the .htaccess so that it is read in the right order. Then you need to provide the special rules for your HTTPS robots by creating that separate robots file.

Create a robots.txt file with the same file name as the one you specified in the .htaccess (in my case, it’s usually ‘secure_robots.txt’) and fill it as you would a normal robots.txt:

User-agent: *
Disallow: /

^.^

Written by Sarah Chalk

Sarah is an SEO Account Manager at 360i and has a keen interest in all things SEO. She has also written for a number of sites, including Vue cinema’s film blog and a number of tech websites.
