Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
-
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls for which our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code.
Is this normal? If so, why? If not, why not?
I am concerned that our server returning an inaccurate status code is interfering with the site being effectively crawled as quickly and as often as it might be if this weren't happening.
Thanks guys!
-
Howdie,
Yes, I believe we got this sorted out. Interestingly, it wasn't any of the suggestions made here causing the 301 status code responses. I posted a thread in Google Webmaster Tools Forum regarding the issue and received a response that I am 99.5% sure is the correct answer.
Here is a link to that thread for future readers' reference: https://productforums.google.com/forum/#!mydiscussions/webmasters/zOCDAVudxNo
I believe the underlying issue has to do with incorrect handling of a redirect for this domain: ccisound.com
I am currently pursuing getting it corrected with our IT Director. Once the remedy is in place, I should know right away if it solves the issue I am seeing in the server logs. I'll post back here once I am 100% certain that was the issue.
Thanks all! This has been an interesting one for me!
-
Hi Dana, have you definitively sorted this out?
-
They are pretty detailed. I'll send you yesterday's in a zip file so you can take a look. I'm certain they have everything needed. Thanks Eric!
-
Right, a DNS manager could do a redirect, but that would not be visible in the web server log. It would only be visible in whatever is managing the DNS.
-
Depends what kind of DNS manager you are using. A redirect via DNS can still be possible.
In my experience, DNS management software can redirect users with 301 or 302 headers depending on what settings you have. If your DNS manager has a security protocol along with redirect rules, it could be causing the issue.
-
The request headers will also show whether, and which, cookies the user has set. It looks like that is how your server determines whether to serve the client the desktop or the mobile version.
-
How detailed are your log files? Can you see the user-agent (browser name)? Maybe you could ask your IT department to log request headers. If that would make the log files too big, they could probably do it only for the 'problem' IPs, or only for requests where the web server returns a 301. I'll take a look if you like; my email is in my profile.
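If they can add the user-agent, even a throwaway script will show whether the 301s line up with a particular user-agent rather than a particular IP. Here's a rough sketch in Python, assuming Apache's "combined" log format (adjust the parsing to whatever format your server actually writes):

```python
import re
from collections import Counter

# Assumes Apache's "combined" log format, which ends with the referer and user-agent.
# Adjust the regex if your server logs a different format.
LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

counts = Counter()
with open("access.log") as f:
    for line in f:
        m = LINE.match(line)
        if m:
            # Tally how often each (IP, user-agent) pair received each status code
            counts[(m.group("ip"), m.group("agent"), m.group("status"))] += 1

for (ip, agent, status), n in counts.most_common(50):
    print("%6d  %s  %-16s  %s" % (n, status, ip, agent))
```

If one user-agent string accounts for all of the 301s, that's your common denominator.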
Best,
-Eric
-
Thanks so much Eric. Yes, I was thinking about the mobile version of our site being related to what I'm seeing too. However, I am unaware that we 301 redirect anything from the main site to the mobile site. In fact, users can actually switch to the mobile site via desktop by clicking "Mobile Site" in the footer and then browse the mobile version of the site via desktop. All of the URLs are identical.
Just out of curiosity I browsed to the mobile version of our site, grabbed a URL and plugged it into "Fetch as Googlebot" in GWT. For all options, including desktop and the three mobile options, a status code of 200 was returned.
-
The problem can't be related to DNS. If the problem was related to DNS, the request would never make it to your server, and you would never see anything related to the request in your log files.
Because you can see it in your log file, it is definitely happening on your own webserver (not some external problem).
The requesting IP is probably not the problem, but it could be if your server automatically adds any IP that requests more than X pages in Y time to a banned list; your server might think it is seeing a DoS (denial of service) attack. But if your server were set up to do this, your IT guys would probably know about it. This isn't something that is normally enabled out of the box; someone would need to intentionally activate behavior like that.
More likely, there is another common denominator besides the requester IP. My guess is that it's the user-agent string (the browser or device the visitor is using).
Taking a quick look at what I think is your site, you have a mobile version available. Google, of course, would be interested in what your site looks like to a mobile browser, and would crawl with a user-agent string that identifies it as a mobile device (a cell phone or a tablet, etc.). If your server sees that request and tries to automatically redirect the browser to the mobile version of the site, then you would have your 301 (which in that case is exactly what you intended, so you're all set!).
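One quick way to test that theory yourself is to request the same URL with a desktop user-agent and with a Googlebot-style mobile user-agent and compare the status codes the server hands back. A rough sketch with Python's requests library; the URL and user-agent strings below are just examples, so swap in a URL from your own logs and check Google's documentation for its current crawler strings:

```python
import requests

# Example URL and user-agent strings; replace the URL with one from your own logs.
# The Googlebot strings are approximations of Google's published crawler user-agents.
URL = "http://www.example.com/some-page.html"
USER_AGENTS = {
    "desktop browser": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.76 Safari/537.36",
    "Googlebot (desktop)": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Googlebot (smartphone)": "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for name, agent in USER_AGENTS.items():
    # allow_redirects=False so we see the 301/302 itself instead of the page it points to
    response = requests.get(URL, headers={"User-Agent": agent}, allow_redirects=False)
    print(name, response.status_code, response.headers.get("Location", ""))
```

If the desktop request gets a 200 and the mobile-style request gets a 301, you've found your answer.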
There are probably a few other cases that could cause a 301 for just some IPs, but this is the only one that comes to mind at the moment.
Good Luck!
-
Here is the response from my IT Director regarding the possibility that this is being done by our DNS manager:
"I do not believe so. Our DNS does translation of human readable names to IP address. It has nothing to do with the status being returned to a browser, and even if it did it could not write to the log file."
Is this accurate? I understand that the DNS cannot write to the log file, but if the DNS can flag a request to receive a certain status code from the server, then this scenario would still be a possibility.
-
According to our IT Director we have no spam filters, no mod_security module, absolutely nothing on our server to prevent it from being crawled by bot, human or spider from any IP address, including black-listed IPs.
To me, other than the obvious (having no security at all is probably not a good idea), that means the 301 status codes are being returned because of a problem with the server setup.
I do have server logs that I'd be willing to share privately with anyone who's willing to take a gander. Don't worry, I won't send you a month's worth. 1-2 days should be plenty.
In the meantime I am going to dive in and take a look further. It's entirely possible that IPs from Google are not the only ones receiving nothing but 301 status codes in response to requests.
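For anyone following along, the check I have in mind is just a per-IP tally of status codes from the raw log, roughly something like this (a sketch only; it assumes the client IP is the first whitespace-separated field and the status code is the ninth, which is how Apache's combined format splits up):

```python
from collections import defaultdict

# Collect every status code returned to each client IP
codes_by_ip = defaultdict(list)
with open("access.log") as f:
    for line in f:
        fields = line.split()
        if len(fields) > 8 and fields[8].isdigit():
            codes_by_ip[fields[0]].append(fields[8])

# List IPs that made a meaningful number of requests and got nothing but 301s
for ip, codes in sorted(codes_by_ip.items(), key=lambda item: -len(item[1])):
    if len(codes) >= 10 and set(codes) == {"301"}:
        print(ip, len(codes))
```

Any IP that shows up with nothing but 301s, Google or not, is one I want to look at more closely.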
-
Thanks William. Good suggestion. I am on it! I'll post back here once I know more.
-
I would not be surprised if this was done by your DNS. If you use a DNS manager, they could possibly redirect certain users or IPs based on patterns of visits.
I suggest finding out more about any server configurations from the admin and seeing who they use as a DNS provider or manager.
-
Excellent thoughts! Yes, they are consistently the same IP addresses every time. There are several producing the same phenomenon, so I looked at this one: 66.249.79.174.
According to what I can find online this is definitely Google and the data center is located in Mountain View, California. We are a USA company, so it seems unlikely that it is a country issue. It could be that this IP (and the others like it) are inadvertently being blocked by a spam filter.
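For anyone else who needs to confirm an IP like this, the check Google recommends is a reverse DNS lookup followed by a forward lookup to make sure the name resolves back to the same IP. A small Python sketch of that:

```python
import socket

def is_real_googlebot(ip):
    """Reverse-DNS the IP, check the hostname is under googlebot.com or google.com,
    then forward-resolve that hostname and confirm it points back to the same IP."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return socket.gethostbyname(hostname) == ip
    except socket.gaierror:
        return False

print(is_real_googlebot("66.249.79.174"))
```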
It doesn't matter the day or time, every time Googlebot attempts to crawl from this IP address our server returns 301 status codes for every request, with no exceptions.
I am thinking I need to request a list of IP addresses being blocked by the server's spam filter. I am not a server administrator...would this be something reasonable for me to ask the people who set it up?
Is returning a 301 status code the best scenario for handling a bot attempting to disguise itself as googlebot? I would think setting the server up to respond with a 304 would be better? (Sorry, that's kind of a follow-up "side" question)
Let me know your thoughts and I'm going to go see if I can find out more about the spam filter.
-
Where are the 301s taking Googlebot on those IP addresses? And are they the same IP addresses every time? Have you narrowed those IP addresses down to any particular datacenter/country? It could be possible there is some configuration with your server that treats IP addresses differently depending on the country... it could also be that the IP addresses getting the 301s are known blacklisted spam IP addresses but are masking themselves as Googlebot so your server's blacklist software is keeping them out. It's really hard to say without looking into the data myself but I'm definitely interested in what you find out.