Using the Google Remove URL Tool to remove https pages
-
I have found a way to get a list of 'some' of my 180,000+ garbage URLs now, and I'm going through the tedious task of submitting them to the URL removal tool one at a time. Between that, my robots.txt file, and the URL Parameters tool, I'm hoping to see some change each week.
I have noticed that when I put URLs starting with https:// into the removal tool, it adds the http:// main URL at the front.
For example, I add to the removal tool:-
https://www.mydomain.com/blah.html?search_garbage_url_addition
On the confirmation page, the URL actually shows as:-
http://www.mydomain.com/https://www.mydomain.com/blah.html?search_garbage_url_addition
I don't want to accidentally remove my main URL or cause problems. Is this how it's supposed to look?
AND PART 2 OF MY QUESTION
If the description shown in the Google SERPs for a page I want removed says the following, should I still go to the trouble of putting in the removal request?
www.domain.com/url.html?xsearch_...
A description for this result is not available because of this site's robots.txt – learn more.
-
Thanks so much for taking the time to respond.
I think I will add the https version to WMT and remove them that way.
I will take a look through the .htaccess file and at creating the ssl robots file. A while back, it seemed that Google was indexing a lot of my site as https, and then it dropped that and went mainly back to http. I will get that sorted out to make it clear.
-
Hi there
I'll start with question 2 as it's a bit easier to answer. Robots.txt blocks the crawling of a page, but not necessarily its indexing - a blocked URL can linger in the results as a bare listing with exactly that "no description" message. If you're seeing that snippet for one of your URLs, Google has not been able to access the page and will stop trying to. Leaving it is usually fine, but if you want the listing out of the SERPs entirely, by all means put in the removal request as well.
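As an aside - if you'd rather have those URLs actively dropped from the index rather than just left uncrawled, a noindex header does that. Below is a minimal sketch, assuming Apache with mod_rewrite and mod_headers enabled; the search_garbage pattern is lifted from your example URL, so swap in whatever your real parameters look like. Bear in mind the URLs have to remain crawlable (i.e. not blocked in robots.txt) for Google to ever see the header.

RewriteEngine On
# Flag any request whose query string contains the garbage parameter
RewriteCond %{QUERY_STRING} search_garbage
RewriteRule ^ - [E=NOINDEX_PAGE:1]
# Send a noindex header on flagged requests (requires mod_headers)
Header set X-Robots-Tag "noindex" env=NOINDEX_PAGE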
For question 1 - GWT is a bit awkward in that it treats the http and https versions of your site as separate webmaster properties. On top of that, when you submit a removal request, it always prefixes whatever you enter with the protocol and domain of the property you're working in, no matter how you type the URL - which is exactly why your https address is being glued onto the front of the http one.
If you add another WMT property for https://www.yourdomain.com, you will be able to manage that version as well, and from there you can remove any URLs under that prefix.
Incidentally, if you want to block all of your HTTPS pages from being crawled, you can do that with a special instruction in your .htaccess file plus a second robots.txt. The idea is to have Googlebot and other bots read a different robots.txt file whenever they request it over HTTPS. To do that, you would first add this to your .htaccess file:
RewriteEngine On
# Only act on requests that arrive over HTTPS...
RewriteCond %{HTTPS} ^on$
# ...and only when robots.txt is the file being requested
RewriteCond %{REQUEST_URI} ^/robots\.txt$
# Serve robots_ssl.txt in its place
RewriteRule ^(.*)$ /robots_ssl.txt [L]

This rule basically says "if robots.txt is requested over https, serve the robots_ssl.txt file instead". You then upload a file called robots_ssl.txt to your root directory. In that file you just add:
User-agent: *
Disallow: /

Now, whenever a bot fetches robots.txt over https, it actually receives robots_ssl.txt, which disallows everything - so it is denied access to every https URL. That would stop all of your https pages from being crawled and, over time, drop them out of the index.
That might be useful to you, but if you go ahead and use it, please take care to back up all your files first in case anything goes wrong - your .htaccess file is very important!
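If it helps, here's a quick sanity check you can run once everything is uploaded (assuming you have curl to hand; the -k flag just skips certificate validation on the https request):

# Plain http should return your normal robots.txt
curl http://www.yourdomain.com/robots.txt

# The https request should transparently come back as robots_ssl.txt
# (User-agent: * / Disallow: /)
curl -k https://www.yourdomain.com/robots.txt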