Would a 403 page, a blank page and a page redirect be indexed by search robots?

Problem :


I am wondering if a disallow directive needs to be issued to robots on pages that will generate:




  1. a 403 forbidden error;

  2. a blank page, or;

  3. a page redirect (to log in page for example).



Would not doing so have any impact on SEO of the site?


Solution :

Will they be indexed:




  1. 403 Error – no, most search engines don't index error pages (assuming they're served with the correct HTTP status code)

  2. Blank page – almost certainly not, assuming it is entirely blank (no <title>, nothing).

  3. Page redirect – Generally only the destination of the redirect will be indexed (usually this is the intended result), but it depends on the type of redirect. With a 301, the redirecting page will be de-indexed in favour of the destination page; with a 302, both the redirecting page and the destination page can often continue to be indexed.
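The mapping above can be sketched as a small helper. This is a deliberate simplification: real crawler behaviour depends on more than the bare status code, so treat it as a rough rule of thumb rather than a specification.

```python
def indexing_outcome(status: int) -> str:
    """Rough mapping from HTTP status code to the likely indexing
    result, following the three cases described above."""
    if status == 403:
        return "not indexed"           # error pages are skipped
    if status == 301:
        return "destination indexed"   # redirecting URL is dropped
    if status == 302:
        return "both may be indexed"   # temporary: source may be kept
    if status == 200:
        return "indexable"             # blank 200 pages are the edge case
    return "depends"

print(indexing_outcome(403))  # not indexed
print(indexing_outcome(301))  # destination indexed
```

Note that a blank page served with a 200 still looks "indexable" to this mapping, which is exactly why case (2) is the one that may need an explicit noindex.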



SEO impact



From the above, you should be able to see that, assuming everything is configured correctly, cases (1) and (3) need nothing further to manage their SEO impact.



In the case of (2), we probably don't need to do anything else for a genuinely blank page, though I'd recommend avoiding generating blank pages at all if possible. If not, Disallow is better than nothing, and if you can apply "noindex" directly to the page either via HTML or HTTP header, better still.
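Applying "noindex" via an HTTP header means sending `X-Robots-Tag: noindex` with the response. As a hedged sketch, here is a minimal WSGI application (the name `noindex_app` is my own) that serves a blank page with that header, so crawlers that do fetch it are told not to index it:

```python
def noindex_app(environ, start_response):
    """Serve an empty page with an X-Robots-Tag: noindex header,
    so the blank page cannot be indexed even if it is crawled."""
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("X-Robots-Tag", "noindex"),
    ]
    start_response("200 OK", headers)
    return [b""]
```

The same header can usually be added at the server level instead (e.g. in the web server configuration), which avoids touching application code.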



You should block these directories in robots.txt. This is not so much for ranking purposes, since Google often finds lots of 403s and it is not irregular for it to encounter them, but they will clutter your Webmaster Tools reports, so it is best to block them.



Block Registered User Areas with the Following:



Robots.txt



User-agent: *
Disallow: /user-area-here-change-me/
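You can check that the rule behaves as intended with Python's standard-library `urllib.robotparser`, feeding it the rules directly rather than fetching a live robots.txt (the placeholder path matches the example above; `example.com` stands in for your domain):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Parse the same rules as the robots.txt example above.
rp.parse([
    "User-agent: *",
    "Disallow: /user-area-here-change-me/",
])

# Pages under the disallowed directory are blocked for all crawlers...
print(rp.can_fetch("*", "https://example.com/user-area-here-change-me/login"))  # False
# ...while the rest of the site remains crawlable.
print(rp.can_fetch("*", "https://example.com/public-page"))  # True
```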


Also use noindex, since Google recommends using both. On all registered-area pages, use:



<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">

