Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
How do I remove my CDN subdomains from Google search results?
-
A few months ago I moved all my WordPress images to a subdomain. After I purchased a CDN service, I moved those images back to my root domain, and I added a robots.txt with User-agent: * Disallow: / to my CDN domain (shown below).
But now, when I do a site: search on Google, I see that my CDN subdomains have been indexed. I think this will create a duplicate content issue, and I've already been hit by Penguin. How do I remove these results from Google?
Should I add my CDN domain to Webmaster Tools and submit a URL removal request?
The problem is that if I visit cdn.mydomain.com, it shows the same content as www.mydomain.com.
My blog: http://goo.gl/58Utt
Site search result: http://goo.gl/ElNwc
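For reference, this is the exact robots.txt I placed at the root of the CDN subdomain (cdn.mydomain.com stands in for my real CDN hostname):

    User-agent: *
    Disallow: /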
-
Hey,
Glad to hear it worked out!
Mark
-
Update!
Less than 12 hours later, my CDN domain was de-indexed. Thanks, Mark, for your support!
-
Blocking with robots.txt doesn't remove pages or files from the search engines; it only prevents them from crawling the subdomain. If they have already crawled those resources, as they have in your case, robots.txt will just block them from visiting again - it won't remove what's already in the index.
What you should do is verify the subdomain in Webmaster Tools and then remove it via the URL removal tool, as you suggested. This is the way to go about it.
To verify the subdomain, there are multiple options: you can verify via your host, or you can modify the DNS to prove you own the subdomain - it really depends on your setup and what you need to do.
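For example, DNS verification usually comes down to adding a TXT record along these lines (the token here is a placeholder - Google generates the real value for you during verification):

    cdn.mydomain.com.  IN  TXT  "google-site-verification=your-token-here"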
But to remove these results permanently, in combination with your robots.txt block, the URL removal tool is the way to do it.
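As a side note, another way to keep these files out for good is to serve a noindex header from the CDN itself. Here's a sketch for Apache (hypothetical - it assumes mod_headers is enabled, and Google can only see the header if you lift the robots.txt block so the URLs can be recrawled):

    # Serve a noindex signal on every response from the CDN subdomain
    <IfModule mod_headers.c>
        Header set X-Robots-Tag "noindex"
    </IfModule>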
Good luck,
Mark
Related Questions
-
Homepage was removed from Google and got deranked
Hello experts, I have a problem. The main page of my website got severely deranked, and now I am not sure how to get the ranking back. It started when I accidentally canonicalized the main page "https://kv16.dk" to a page that did not exist. Four months later the page got deranked, and you could not see the main page in the search results at all, not even when searching for "kv16.dk". Then we discovered the canonicalization mistake and fixed it, and we were able to get the main page back in the search results when searching for "kv16.dk".
At first, some weeks after we made the correction, the ranking didn't get better. Google Search Console recommended uploading a sitemap, so we did that. However, this sitemap contained a lot of thin-content pages: the WordPress attachment pages, one for every image in an article. More exactly, there were 91 of these attachment pages, while the rest of the site consists of only two pages, the main page and an extra landing page. After that, Google began showing the attachment URLs in some searches. We tried fixing it by redirecting all the attachments to their simple form, e.g. if it was an attachment page for an image, we redirected straight to the image.
Google has not yet removed these attachment pages, so the question is: do you think it will help to remove the attachments via Google Search Console, or will that not help at all? For example, when we search "kv16", an attachment URL named "birksø" is one of the first results.
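For reference, after the fix the canonical element on the main page should simply point at the page itself - something like this in the head:

    <link rel="canonical" href="https://kv16.dk/">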
Technical SEO | Christian_T
-
Hide sitelinks from Google search results
Does anyone have any recommendations on how you can tell Google (hopefully via a URL) not to index a particular page of a website? I have tried hiding certain sitemaps through Yoast SEO (which has worked to a degree), but certain WordPress functionality exposes links without them actually being part of a "sitemap", so those links are harder to hide. I'm having an issue with one of my websites: the sitelinks that Google is suggesting are nowhere near the most popular pages, and I know that you can't make recommendations to Google not to show certain pages through Search Console anymore. Any suggestions are greatly appreciated! Thanks!
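The standard way to keep a single page out of Google's index is a robots meta tag in that page's head, for example:

    <meta name="robots" content="noindex">

Note that the page must stay crawlable (not blocked in robots.txt) for Google to see the tag.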
Technical SEO | MainstreamMktg
-
Google Search Console says 'sitemap is blocked by robots'?
Google Search Console is telling me "Sitemap contains URLs which are blocked by robots.txt." I don't understand why my sitemap is being blocked. My robots.txt looks like this:

    User-Agent: *
    Disallow:
    Sitemap: http://www.website.com/sitemap_index.xml

It's a WordPress site with Yoast SEO installed. Is anyone else having this issue with Google Search Console? Does anyone know how I can fix it?
Technical SEO | Extima-Christian
-
Seeing URL Slugs as search result titles
I've been seeing some search results for my site that look like the first result here, where the URL slug is used as the SERP title: https://drive.google.com/a/fitsmallbusiness.com/file/d/0B37y4RslpuY-a0hQYjlJQ0NxeFJicDF6RVlURFVSNFN0aGhB/view?usp=sharing The article title (and Yoast snippet title) are both "28 Press Release Examples From The Pros", but for some reason I'm seeing "press-release-examples" in the search results. I've seen this for multiple articles, and it comes and goes. I'm aware that Google often rewrites titles in search results, but it seems very strange that it would opt for the bare URL slug here. Thoughts? Has anyone else seen this issue? Any idea what might be causing it? All help much appreciated.
Technical SEO | davidwaring
-
Using the Google Remove URL Tool to remove https pages
I have found a way to get a list of 'some' of my 180,000+ garbage URLs, and I'm going through the tedious task of entering them into the URL removal tool one at a time. Between that, my robots.txt file, and the URL Parameters tool, I'm hoping to see some change each week. I have noticed that when I put URLs starting with https:// into the removal tool, it adds the http:// main URL at the front. For example, I add this to the removal tool:

    https://www.mydomain.com/blah.html?search_garbage_url_addition

On the confirmation page, the URL actually shows as:

    http://www.mydomain.com/https://www.mydomain.com/blah.html?search_garbage_url_addition

I don't want to accidentally remove my main URL or cause problems. Is this the way it should look?

AND PART 2 OF MY QUESTION: if the search description in Google for a page you want removed already says the following in the SERP results, should I still go to the trouble of putting in the removal request?

    www.domain.com/url.html?xsearch_...
    A description for this result is not available because of this site's robots.txt - learn more.
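If the goal is just to stop those parameter URLs from being recrawled, a wildcard Disallow rule is one option Google's crawler understands - a sketch, assuming every garbage URL carries that same query parameter:

    User-agent: *
    Disallow: /*?search_garbage_url_addition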
Technical SEO | sparrowdog
-
How do I get out of a Google bomb?
Hi all, I have a website named bijouxroom.com. I was on the 7th page for the search term "takı" in Google, and on the 2nd page for "online takı". Now I see that in one day my results dropped to the 13th and 10th pages in Google respectively. I had built too much anchor text for "takı" and "online takı". What should I do to regain my positions? Thanks in advance. Regards,
Technical SEO | ozererim
-
Using hyphenated sub-domains or non-hyphenated sub-domains? That is the question! Any takers?
For our corporate business-level domain, we are exploring using a hyphenated sub-domain for a project - something like www.go-figure.extreme.com. From a user perspective it seems cluttered to me. The domain length might also be an issue with the new algorithm big G has launched in the recent past. I know from past experience that hyphenated domains usually take longer to index, as they are used more frequently by spammers and can take longer to get out of the supplementary index. Our company site has over 90 million viewers a year, so our brand is well established and traffic isn't an issue. This is for a corporate-level project and I don't have the answer! Will this work? Does anyone have experience testing this? Any thoughts will help! Thanks, Rob
Technical SEO | RobMay
-
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, they showed around 3,000 pages of duplicate content - almost all of them the catalog search results page. I also did a site: search on Google, and they have most of the results pages in their index too. I think I should just disallow the bots from the /catalogsearch/ subfolder, but I'm not sure if this will have any negative effect.
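A sketch of the robots.txt rule in question (this stops future crawling of those pages, though - as discussed in the thread above - it won't by itself remove URLs that are already indexed):

    User-agent: *
    Disallow: /catalogsearch/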
Technical SEO | JordanJudson