English Google Webmaster Central office-hours hangout

JOHN MUELLER: OK. Welcome everyone to today’s
Google Webmaster Central Office Hours Hangout. My name is John Mueller. I’m a Webmaster Trends Analyst
here at Google in Switzerland, and I try to connect
our engineers together with webmasters, publishers,
SEOs, like all of you guys. We have a bunch of questions
submitted, some of them through one side, some of
them through another side. So maybe if one of you guys
wants to get started first. And then I’ll try to
find the best side to pick some questions up from. MIHAI: Hi, John. Could I ask you a question? JOHN MUELLER: Sure. MIHAI: OK. A couple of Hangouts ago, I
talked about a client of mine that I’ve had for over
a year and a half. I’ve had problems with him
performing on organic results, despite disavowing bad links,
doing a new website basically, doing some nice content
marketing campaigns, which we [INAUDIBLE] links. There has been no
performance benefit for the past
year and a half. I’ve also made a
product forum thread, in which I explain everything. With the people I
talked to, we still couldn’t get to a
solution, I guess. I also sent you a
message on Google+. I don’t know if you got that. I’m just curious. I could send you the thread. JOHN MUELLER: Yeah. If you can post a URL
to the thread in the chat, I can double-check there. MIHAI: OK. There you go. JOHN MUELLER:
Sometimes these things just take a long time
to get processed. That’s something where sometimes
it just needs a longer time to continue working on this,
but I can double-check. MIHAI: All right. I was just curious
if there’s anywhere I should look more
closely, anywhere we should direct our efforts. I hope it’s not the answer
that we’re not quality material, because our
competitors aren’t. Of course, this is a non-English
website we’re talking about, so it’s a local market. And I understand that things
might be moving more slowly, I guess, because there
are fewer people speaking the language than in the English markets. But any information
that could help me understand where we should
focus our efforts better. Obviously, the client is not
that happy that other websites are using non-, well,
black hat tactics, let’s just call them
that, and are succeeding. And we’re trying to use branding
and marketing campaigns, and not doing anything. JOHN MUELLER: Mm-hm. You said you cleaned up
a bunch of links as well? MIHAI: Yes. JOHN MUELLER: And how
did you clean them up? Did you go through
the Disavow file? MIHAI: We took on the client about a year and a half ago. First of all, we
tried to find emails for all the directories and
everything– the contact emails, I mean, that
he had links from. We emailed them, and we got a bunch of links actually removed from those websites. For the others, we just used the Disavow file. We also updated the
Disavow file recently. We also included [INAUDIBLE]
404 now or 500 errors, because we don’t know,
maybe they’ll come back. And I don’t know if that’s
some influence or not. So just to be safe, we
put them there as well. The website was redesigned and
remade using a different CMS, because the old website
wasn’t really Google-friendly. So now we’re using
PrestaShop as the main CMS. And as I said, we have a bunch
of content guides for our users that we promoted and
actually got some nice links. But we haven’t seen any effect. And the link I gave
you in the chat, I actually do a more in-depth analysis and a competitor analysis. And I hope that can help you with more detail. JOHN MUELLER: I’ll take
a look at the thread. But in general, it sounds like
you’ve made a lot of changes. And anytime you make
this many changes, things are going to
take quite a bit of time to kind of settle down again. So on the one hand, I
still see some issues with regards to the links that
the algorithms are picking up on, which might just be a matter
of things still taking time. But that’s something where
I’d double-check to make sure that you’re really covering
everything there as much as you can. And make sure that things there are essentially as good as they can be, so that the next update
of the algorithms kind of picks up on the
positive changes there. So that’s something
that shouldn’t be slower in other languages
and other countries. So it’s not something I’d kind of move in that direction or use as an excuse like that. It’s really just our
algorithms taking the time to kind of refresh
all of this data. And some of that takes
quite a bit of time. I wish we could do some of
these a little bit faster, but it just takes time. And I think to
some extent, you’re probably on the
right track there. Sometimes it makes sense
to set up a separate site and say, OK, I understand
that this website isn’t doing so well. I did a lot of spammy
stuff with this site, or the previous SEOs did. Maybe it’s worth just
setting up a separate site and moving to something
different like that. But I think you’re kind
of on the right track there with that site,
so personally, I wouldn’t say, try to
kind of walk around the problem like that. But continue to work
in that direction. MIHAI: Right. Well, the client is pretty
attached to the domain because it’s his brand. So it’s kind of difficult
to move from that. And we’ve made every
effort possible to try to remove ourselves from
anything bad the other SEO before us did. But we were just looking
for a bit of signal. I hope I’m not
taking too much time. This would lead me
to a second question. I was talking about the
language being different not in the algorithmic
point of view that you made, because I understand that
updates like Penguin and Panda are pretty much language-independent. But from a manual
action point of view, because many of our competitors,
from what we’ve seen, kind of abuse advertorials. And from what I
understand, advertorials are subject to manual
actions, more or less, because it requires a manual
review of the websites to determine if the
article is an advertorial or an advertorial [INAUDIBLE]. And from that point
of view, I think there is a bit of
a language barrier. I think the English
language is much easier to maybe even
algorithmically detect or at least flag an
article for manual review later by an actual person,
rather than in other languages. JOHN MUELLER: It’s tricky. If this is something that you
think we’re totally missing out on, then sending
me that information would be a great idea. Doing spam reports is
something you could do as well. I don’t think this is something,
from an algorithmic point of view, where we’d
say, algorithmically, we have to recognize
this problem and then do a manual review. It’s more something where we
decide to do a manual review, and then we’ll do
a manual review. And we should be able to
pick up on signals like that, like advertorials, because
these are native speakers that do these kind of spam reviews. So it’s not someone who only
understands English looking at a page that they
don’t understand. It’s really a native
speaker who should be able to recognize when
something is an advertorial or when it’s actually
normal organic content. MIHAI: Yes, but
the native speakers get a signal from somewhere
to look at the website, right? From spam reports
or maybe some flags. JOHN MUELLER: Yeah. MIHAI: And regarding spam
reports, in the spam report, you have a website that is
selling links, a website that is buying links
and commentaries. For example, I have
a website that I know is doing a bunch
of spammy tactics and has a bunch of spammy links. Should I submit a report
for every spammy link, or could I just send
a single report? And I would like to
give you an example. For example, I took
one of the websites that I found is quite
abusing advertorials, and I made a Google
sheet with everything I found regarding spammy links. And I submitted it as the
URL of the site that’s selling the links
rather than just– I don’t know if
that’s a good way. For example, you could type– JOHN MUELLER: That’s probably
confusing for the team there. But I think in a
case like that where you have a complicated
case that you essentially want to get to reviewers, that’s
something that’s probably best sent directly to one
of us, so that we can take that whole email with
the link to your document, and pass that on to
the Web Spam team, instead of trying
to kind of fit it into a form that’s not really
that suited for something complicated like that. MIHAI: Oh, OK. Well, I sent you the link to
the spreadsheet in the chat. I don’t know. I hope it’s detailed enough. And let me know if
there’s any way I can do– if this is
a good way to do this for the other websites that
I find or anything like that. OK. Thank you. JOHN MUELLER: Sure. There’s a question in
the chat about Penguin, regarding whether or not
the rollout is complete. And to be honest, I
don’t really know. So I was out at SMX Milan
last week and kind of busy on the weekend,
so I didn’t really catch up on what exactly
is happening there. I imagine it’s probably
about rolled out now, but I don’t have any
absolute information. Sorry. BARRY SCHWARTZ: OK. Thank you. Can I ask one question,
not related to the Penguin? JOHN MUELLER: All right. BARRY SCHWARTZ: So you know how
Google started to fully render a page as a user would see it? JOHN MUELLER: Yes. BARRY SCHWARTZ: So
I’ve seen the reports that when you have
on the website, Click to Expand to show more
content, that Google’s ignoring the content
in that Click to Expand, because the
user doesn’t see it unless they click to expand. Is that an
implementation problem, or is that something new
with a fully render feature? JOHN MUELLER: I
think to some extent, we’ve been doing that
for quite a while now. So I saw your blog
post about that, and I sent the team that
works on this a short email before the Hangout, but I
didn’t hear back from them on time to actually have a
definitive answer for you there. But I think we’ve been doing
something similar for quite a while now, where if we can
recognize that the content is actually hidden,
then we’ll just try to discount it a little bit. So that we kind of see
that it’s still there, but the user doesn’t see it. Therefore, it’s
probably not something that’s critical for this page. So that includes, like,
the Click to Expand. That includes the
tabbed UIs, where you have all kinds of
content hidden away in tabs, those kind of things. So if you want that
content really indexed, I’d make sure it’s
visible for the users when they go to that page. From our point of
view, it’s always a tricky problem when
we send a user to a page where we know this content
is actually hidden. Because the user will
see perhaps the content in the snippet, they’ll
click through to the page, and say, well, I don’t see
where this information is on this page. I feel kind of almost
misled to click on this to actually get in there. So that’s kind of the
problem that we’re seeing. And some of that– I think
we’ve been picking up on that for quite some time
now to kind of discount that information. It might be that we’ve gone
a little bit further now to actively ignore
the information that’s not directly visible. GARY: John, is there
not a better way to deal with that, John? As we spoke about
previously, you don’t let us know
what the keywords are, coming to our website. And if we did, we wouldn’t
have to choose design over development for Google. We could say, OK,
those are the keywords. Let’s show the
panel that we want to be able to show
to our customers and potentially even use
jQuery to scroll them down to the exact area. But we’re unable to do that,
because you hide our keywords. So I know there’s abuses,
but you can’t really remove something just because
people abuse something. You ought to find a better
way around it. JOHN MUELLER: I don’t really
think that’s going to change. I understand that concern. I know that some people were
using it for good reasons. And what you’re
saying there, I think that makes a lot of sense, but
I don’t see that coming back. That keyword data is kind of visible in Webmaster Tools now. You don’t see it
directly on the sessions when people are active. So I think that’s
something– you’ll kind of have to work with that. That’s I guess a little bit
of a different constraint that has happened over the years, where maybe it makes
sense to try to recognize, based on the Webmaster Tools
data, what people are searching for, where you should have
specific pages, where you want to have more general
pages, and work that out. But I don’t see that referrer data coming back on the requests from search. GARY: Maybe it can
come back for people that have some kind
of trust level. JOHN MUELLER: I don’t
see that happening. Sorry. [LAUGHTER] GARY: Thanks, John. JOHN MUELLER: All right. Let me grab some from the Q&A.
And Barry, I’ll get back to you on the Click to Expand stuff
to see if we have something more specific that
I can tell you. BARRY SCHWARTZ: Yeah. Thank you. Just post it on the
Google+, I guess. JOHN MUELLER: Great. ODYSSEAS: Hey, John. Why not use something like
link [? URL ?] for the Click to Expand thing, so that
you guys have the ability to send the users to the page
with a particular section expanded? JOHN MUELLER: I don’t know. [LAUGHS] I’d have to
take a look at that. I know we sometimes
use the anchors when we can recognize
that a page has specific sections on it, where
we can send them directly to that section. But I know that’s a very tricky
problem, because lots of pages use that for very
different means. So some of them use them
for JavaScript navigation, some of them use it for
navigating within the page. And not all elements on the
page are equally relevant. Or sometimes the design
makes it really hard to figure out which
kind of anchor goes to which part
of the content. ODYSSEAS: Right. Maybe something like the
Previous and Next page navigation, and we can give
a page number essentially to each tab. JOHN MUELLER: Yeah. But I guess at that point,
you might as well just create a separate page
with a separate URL, right? ODYSSEAS: Yeah, absolutely. But at least you would know
that it’s part of the same page, it’s not a separate page,
from a link perspective. JOHN MUELLER: Yeah. I don’t know. I have to think about that. Yeah. [LAUGHS] ODYSSEAS: OK. Thank you. JOHN MUELLER: All right. Let’s go through some
of the Q&A things. It seems some people were able
to make it through to the Q&A feature, so let me just
grab some of those quickly. “The number of daily crawled pages in Webmaster Tools is a couple of times higher than the number of subpages the website has. Why? Is that bad?” I’m actually not
really sure what you mean there, [? Lukash. ?]
But essentially, the number of crawled pages depends mostly
on the server capabilities as we can find them, kind
of, like, as an upper limit. If we can recognize that we
can crawl this many pages from your website and we have
that many pages that we want to look at, we’ll try to
crawl that many pages. Another factor is
the number of URLs that we find on your website. So if you have a very
clean URL structure, you might have a lot of pages,
but they’re very unique URLs. So we don’t crawl a
lot of duplication. On the other hand, if you have
a complicated URL structure, then we might find
10 times as many URLs as you actually
have or 100 times. And we’ll try to crawl
all of those versions if we have a chance. So those are kind
of the two sides that I’d be looking at there. One thing I would
do, in your case, is look at your
server logs, find out which URLs were
actually crawling, and make sure that
those are actually URLs that you want
to have crawled. And if that’s the case, then
crawling a lot of pages is probably a good thing. We’re keeping up with all of
the changes on your website. If you recognize that we’re
crawling a lot of URLs that you don’t want
to have crawled, then I’d kind of take
those sample URLs, and go back, and
look at your website, and see where they
came from, and maybe what you can do to prevent those links from being generated like that on your website.
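As a rough illustration of the server-log check described here, the sketch below tallies which URLs Googlebot is requesting. It assumes an Apache-style combined access log named access.log; the file name, log format, and the simple user-agent filter are illustrative assumptions, not anything specified in the Hangout.

```
# Minimal sketch: count how often each URL shows up in Googlebot requests,
# so you can check that the crawled URLs are ones you actually want crawled.
# Assumes an Apache/Nginx "combined" log format; adjust parsing to your setup.
from collections import Counter

crawled = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:          # crude user-agent filter
            continue
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1].split()           # e.g. ["GET", "/page?x=1", "HTTP/1.1"]
        if len(request) >= 2:
            crawled[request[1]] += 1

for url, hits in crawled.most_common(50):
    print(f"{hits:6d}  {url}")
```

If the most-crawled entries turn out to be parameterized duplicates you never intended to expose, that points back at the URL-structure issue mentioned above.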
“Can you or any Googlers share with us some tips and hints
regarding Webmaster Tools for better diagnosis, improving
and interpreting data, or recommend someone
with that information? We want to see SQT Tools case
study, if this is possible, for better understanding
of common problems.” That’s good feedback. Good idea. I can see if I can find
actually maybe someone from the Webmaster Tools’ Team
to join one of these Hangouts. And we can ask him all of
these tough questions directly. And maybe he can show
us some information on what we’re looking at. So that might be an
interesting idea. “I noticed how you used your
magic tool during Hangouts to help identify problems. I’m wondering if
you would consider doing a special Hangout to help
webmasters to identify existing problems with their websites?” Yes, we’ve done a few of these
site clinic type Hangouts before. I think we can
definitely do one again. Why not? “Do no-follow links–” I
think, “do no-follow links pass some signal for
some algorithms?” Essentially, we see these
as links on the pages, but we don’t pass
page rank for them, and we don’t pass
any of our signals through no-follow links. So essentially, these
are links that we know exist on your page. We’ll show them in
Webmaster Tools. But we don’t pass page rank. It’s not that you can
kind of get anything like that from those links. So we might crawl
those links anyway, because maybe we’ll find
another version of those links as well just to also make
sure that we’re not missing any relevant content
on your site. So it’s not a block
like a robots.txt. But on the other hand, it
doesn’t pass page rank, so it’s not going to make
this page anything really strong in search. Because if we can’t
forward any page rank or any other
signals to those pages, they might be indexed, because
we’ve seen them before. But usually, they’re not
very relevant in search. MIHAI: John, can I ask
a question on that? JOHN MUELLER: Sure. MIHAI: So do you use the
no-follow links at all for, like, I don’t
know– so you’re not using them for
passing page rank. But do you use them for
determining or better understanding the relevancy
of a page based on where the no-follow link is coming
from or something like that? So do you use any signals
at all for determining how relevant or anything
about the target page? JOHN MUELLER: I don’t think
we use any signals at all from there. I can’t say absolutely, but at
least from the parts I’ve seen, I don’t think we use any
of the information that’s kind of passed from that link. MIHAI: OK. JOHN MUELLER: “Does
Google respect incorrectly formatted canonical tags?” And then there’s an example
with, like, a broken HTML link rel canonical. If we can parse that tag, we’ll
try to take it into account. If we can’t parse it, then we
can’t take it into account. And we’re pretty much
used to broken HTML, so for a lot of things,
we can recognize kind of the
information in there. But if you know that you
have broken HTML, especially with regards to something
like a canonical tag, a no index, any kind
of directives that you really want Google
to follow, then I’d definitely work to fix that. It’s possible we’ll be able
to pick it up correctly, but it’s not
absolutely guaranteed. GARY: John, Rob messaged me. He couldn’t get
into the Hangout, and he was wondering–
he sent you a message last week regarding
this sort of secretive thing you might be able to
tell him about that’s wrong with his site. And he was kind of hoping that
he would have heard from you. JOHN MUELLER: Yeah. I saw that, but I
don’t have anything new to add at the
moment for his site. GARY: He was wondering
if the penalty was– if the algorithm was
rerun, would his site just be functioning
as normal again? JOHN MUELLER: I don’t know. Perhaps. GARY: Or do you not
have the [INAUDIBLE]? JOHN MUELLER: Perhaps. GARY: Yeah. OK. JOHN MUELLER: “As normal again” is always hard, because things change over the years. And when, for example, you get a manual action at the beginning of the year and that manual action is lifted at the end of the year, it doesn’t mean it’ll be exactly the
same as it was before. So everything kind of
evolves over that time. GARY: I think what he
meant was, would he be on a level playing
field with other people in order to be able to compete? JOHN MUELLER: Yeah. That’s definitely the case
that when these things expire, when they get
resolved, there’s no– I don’t know how you say it– grudge that Google’s algorithms hold, where they’d say the site was bad in the past and, therefore, we’ll be kind of more cautious with this site. That’s not the case. GARY: Cool. He says thanks. JOHN MUELLER: Great. All right. Let’s grab some from the
other Moderator page, if I can get that up. All right. “How do you treat a
subdomain for href lang? For example, www.domain.com, 302
redirects to a territory like /uk?” We essentially treat
that as a normal page. What I would do here
is take the main page that you have here that
does a geographic redirect, and call it the
x-default page so that we know
this page exists. And that you essentially
want this page to be shown for
any cases that you don’t have specifically covered. And when you do
the href lang, make sure that you’re doing that
between the canonical versions of these URLs. So if you have a /uk and a /uk
with an extra slash at the end, then make sure you’re doing it
between the versions that you have set up as a rel canonical. So in this case, I would call
the main, like the root URL, the x-default, and use the /uk/ version as the UK-specific one.
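To make that setup concrete, here is a small, hypothetical sketch of the corresponding hreflang annotations in an XML sitemap, with the geo-redirecting root as x-default and /uk/ as the UK version. The domain and the extra en-us entry are placeholders, not taken from the question.

```
# Sketch: build hreflang <url> entries for an XML sitemap. Every version
# (including the x-default root) lists the full set of alternates.
versions = {
    "x-default": "https://www.example.com/",     # root page that geo-redirects
    "en-gb": "https://www.example.com/uk/",      # UK-specific version
    "en-us": "https://www.example.com/us/",      # placeholder for another region
}

def sitemap_entry(own_url, alternates):
    links = "\n".join(
        f'    <xhtml:link rel="alternate" hreflang="{lang}" href="{href}"/>'
        for lang, href in alternates.items()
    )
    return f"  <url>\n    <loc>{own_url}</loc>\n{links}\n  </url>"

for url in versions.values():
    print(sitemap_entry(url, versions))
```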
“Is Google still working on an automated penalty viewer so we can see what algorithms
aren’t happy with our site?” This is a popular
request, and we do discuss this regularly with
the engineering teams and also with the ranking teams to
see what we can do there. At the moment, I don’t
have anything specific that I can announce. I think it would be great
to have more transparency like that in Webmaster Tools. It’s a very tough
problem, though. So it’s not something that I
would expect anything specific on in the near future. But as always, keep pushing. If this is something that
you really would love to see, keep letting us know
about these things, so that we can also keep
talking to the engineers and kind of pass
that feedback on. “Our content is
changing steadily. As a consequence, we need
to add and remove pages. Is it better to use an
old URL for new content?” You can do it either way. I don’t think there’s
any inherent advantage or disadvantage of using
old URLs for new content. So personally, I
imagine it’s easier, from a maintenance point of
view, to just use new URLs. But if you can’t do that or
if you want to reuse old URLs, that’s essentially fine, too. “Googlebot recently rendered
pages as a user would see it.” This is Barry’s question. I’ll get back to
you on that, Barry, when I have more information. “Has the Penguin
update been completed?” We talked about this
briefly as well. “Are you tired of
Penguin questions?” It’s interesting that
when I went to SMX Milan, I didn’t get any Penguin
questions at all. So I don’t know if there are a
different group of webmasters that go to SMX
Milan or not, but I found it interesting that
the type of questions were very different. And it might just be that
Italian webmasters are so advanced that they don’t
worry about these things. [LAUGHTER] JOSH: John, can I ask you
one quick remaining question, though? JOHN MUELLER: Sure. JOSH: Do you know if Penguin is
a monthly rolling update now? And/or do you know
more or less if it’s going to be short term or long
term for it to be refreshed? JOHN MUELLER: I think
the general goal is to have this be
refreshed faster. I don’t know if there’s any
specific time frame in mind. So I imagine it’ll just be
faster than the existing kind of update cycles, which
I think were extremely long. But I don’t know how fast
we’ll be able to move it. And I imagine over
time, as we see how we can update it faster,
we’ll be able to speed things up, so that it kind
of makes a little bit more sense in that regard. So I don’t have any
specific announcement to say it’ll be monthly
or it’ll be quarterly. But I know the team
is working on making it a little bit faster. JOSH: OK. So can we expect another
announcement then if it’s not going to
be rolling monthly? JOHN MUELLER: Probably. I don’t know. [LAUGHTER] I’m pretty sure that Barry will
pick up on it and will ask us, and we’ll either say something,
or we won’t say something. I’m sure people will ask. So I don’t think it’ll,
like, sneak by silently. And the idea behind
these updates is so that people
see changes, those who work on it, to improve
things, see positive changes. And those who have been doing
sneaky things, kind of the web spam issues that we
pick up on there, that they see negative changes. So this is something where if
we do an update and nobody sees any changes, then that’s almost
like not doing an update. So you should kind of
see changes over time. JOSH: Perfect. Thanks. ARTHUR: John, can I step in
with a question about Panda, as you’re tired of Penguin? JOHN MUELLER: I didn’t
say that, but go ahead. ARTHUR: OK. You know recently, Barry’s
website, SE Roundtable, has been hit by Panda 4.1. So I was wondering if he should
change his website domain name, or how should he recover faster? JOHN MUELLER: I don’t know. I haven’t looked into
his site on that regard. So I don’t know. It’s hard to say. ARTHUR: OK. I’ve placed it in the
chat of the [INAUDIBLE] from the last of his blog posts. JOHN MUELLER: OK. I didn’t realize that, but OK. Good. I mean, not good, but yeah. ARTHUR: Yeah. Maybe we will find out– JOHN MUELLER:
Thanks for the link. ARTHUR: [LAUGHS] –a
faster recovery method. JOHN MUELLER: OK. “For the mobile version, we
need to prune some H1 titles and change some anchor text. How does Google
handle this case? Is that allowed?” From our point of
view, essentially, the primary content should be
equivalent in a mobile version. So sometimes it makes sense
to kind of hide things like sidebars, headings,
change the menu structure around a little bit,
kind of simplify images, or remove images,
those kind of things. But the primary
content of each page should essentially
be equivalent. And that can include things
like changing titles. It can include things like
changing the text slightly, simplifying the text. All of that is absolutely fine. But really, the primary
content should be equivalent. “A question about ‘Links to
Your Site’ in Webmaster Tools. Some very good and
natural links disappeared. And some low-quality
links are listed, even though they’re disavowed. Is this a problem
in Webmaster Tools?” So Webmaster Tools tries
to reflect the data as we technically
find it on the web. It doesn’t take into
account the Disavow file. It doesn’t take into account
no-follow or not no-follow. Essentially, these are just the
links that we found on the web. So that’s something
where you should be seeing links
that are disavowed. You should still be seeing
links that are no-follow. All of those links
should still be there. If normal links disappear
there, then that seems more like a
technical problem either on the linking site
or on the receiving side. So maybe the page
that’s being linked to is a 404 or something like that. But sometimes these
things also just happen with normal fluctuations. Some things go up,
some things go down. Sometimes we kind of pick
up on changes quickly. Sometimes things
kind of take a while to settle down a little bit. So from that point of view,
I wouldn’t panic about this. If you see that
these kind of links are gone for a longer
time, then by all means, post in the forum. Maybe include a
screenshot, some example URLs that we can take a look at. “Have we learned any
causes for why some sites take a ranking hit after
switching to HTTPS?” We’ve looked into a lot
of sites that moved. And for the most part, that move
works completely as expected. We’ve also seen some
issues on our side, which we’ve been able to kind
of resolve as well. So in general, you shouldn’t
be seeing any problems there. If you still are seeing
problems with a move like that, by all means, send
me some examples. “I’ve visited a few
of your Hangouts. In the last one, you took a
look at my individual situation. What’s the best way to get in
direct touch with you since you don’t respond on Google+?” I get a lot of
requests on Google+, but usually that’s the best
place to kind of get in touch with me. And if you notice that
I’ve kind of forgotten to reply to your post
there, by all means, feel free to add
a follow-up there. “Just using hreflang
tagging in just a site map, is that acceptable? For example, if I omit the
hreflang tag on the page?” Yes, the site map is
essentially equivalent to having the markup on the page. “About disavowed links, you
said the algorithm sometimes treats them as similar
to not followed. When and how?” In general, we try to
treat them as being the same as no-followed links. So that’s something
where we kind of reserve the right to recognize
particularly problematic situations and take
action if we think that that’s
absolutely necessary. I’m not aware of any
of those situations. So for the most
part, you can assume that they’re just treated
as no-follow links. “Considering the
comments of Gary and you, does it take more than
two weeks to process and reflect data for large
algorithm updates?” Yes, definitely. So this is something where
depending on the URL, sometimes we crawl them
daily, sometimes we crawl them every
couple of months. So if you submit a large Disavow
file or a Disavow file that includes a lot of domain entries
or just generally includes a lot of different
URLs, then that’s something that’s going to
take quite a bit of time to kind of re-crawl all
of those URLs naturally and reprocess all
of that information. So I wouldn’t be
surprised if you’re looking at a time frame of maybe
three to six to nine months even for Disavow files to be
completely taken into account. And that’s not something
that happens from one day to the next. This is a granular process
that happens step by step. So as individual
URLs are re-crawled and we see them in the Disavow file, that will be taken into account. So it’s not that you have
to wait this long for them to be reflected. It’s just that for everything to
be re-crawled and reprocessed, it can take a significant
amount of time.
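For reference, a Disavow file is just a plain-text list of URLs and domain: entries. The sketch below assembles one with placeholder hosts; it is only meant to show what the "domain entries" mentioned here look like, not to suggest specific sites.

```
# Sketch: write a disavow.txt with one URL or "domain:" entry per line.
# Lines starting with "#" are comments. The hosts below are placeholders.
entries = [
    "# Links we could not get removed by contacting the sites",
    "domain:spammy-directory.example",
    "domain:paid-links.example",
    "http://another-site.example/old-advertorial.html",
]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(entries) + "\n")
```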
“Canonicals and hreflang tags, if the hreflang sitemap is OK, what’s the effect
of having canonicals at the URL level in parallel? Is there a conflict or
confusing message for the bot?” So essentially, when combining
canonicals with hreflang tags, you should make sure
that the canonicals point at the individual language
or region versions, and the hreflang tags are
between the canonical versions that you specify. So don’t pick one language
version as the canonical and have all of the
different ones in href lang. But instead, have each
individual language version have its own canonical. So you have one
canonical for English, one canonical for French. And between those
canonical URLs, you have the href lang markup. So that’s essentially how
you would use a rel canonical together with href lang there. JOSH: So John, just to clarify,
the canonical tag on /en would point to /en? And the canonical tag on /fr would point to /fr? JOHN MUELLER: Yes, exactly. JOSH: OK. So you should not be
canonicalling from xyz.com to en, because that is
going to tell Google that the en is the canonical. JOHN MUELLER: Yes, exactly. JOSH: OK. This is the confusion. People get really
confused over this. JOHN MUELLER: Yeah. It’s something where I
think in the beginning, we didn’t have absolutely
clear guidelines on that. So some of the confusion
is also our fault. But I think in general, you
can look at it this way, that if you have the
canonical tag there, then we kind of ignore the
non-canonical URL. So if you point your canonicals
to the English version and you have a French
version there as well, then we kind of forget
about the French version and just focus on
the English version. And that’s probably not
what you’re trying to do. You want them essentially
to be equivalent, to be indexed individually.
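A small sketch of that pairing as a check: fetch each language version and confirm its rel=canonical points back at itself; the hreflang annotations would then link those same canonical URLs to each other. The URLs below are placeholders and the parsing is deliberately simple.

```
# Sketch: verify each language version is self-canonical.
import urllib.request
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

language_versions = [
    "https://www.example.com/en/",
    "https://www.example.com/fr/",
]

for url in language_versions:
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    finder = CanonicalFinder()
    finder.feed(html)
    status = "self-canonical" if finder.canonical == url else "check this"
    print(f"{url} -> canonical: {finder.canonical} ({status})")
```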
JOSH: OK. Here’s another issue people have with canonicals, if people don’t mind me
asking this question. So sometimes they’ll try and
change a page on their site, and Google doesn’t seem to
want to allow you to canonical a non-standard index page, for
example, or a non-standard page that has a URL parameter
or something like that. Is there any way around that? JOHN MUELLER: Not really. JOSH: For example, some people
are using URL parameters to dictate certain things
they would render on the page. And they’re trying to
set that as canonical. But the URL parameters– or
if it’s a non-standard index page, like index
2014, for example, we were trying to specify a
new index page for that year. And sometimes Google doesn’t
seem to want to canonical it. Is there any way around that,
or any comment you have on that? JOHN MUELLER: That
should actually work. So normal content, normal
URLs like that should work. Where it usually clashes with
our algorithms is if you have a page like example.com/
and example.com/index.html. And if you sent the
canonical to /index.html, then our systems will usually
recognize that this is actually the same as a shorter URL. And we’ll kind of prefer the
shorter one to the longer one. If we can kind of guess
that this is essentially a crufty part of the URL
that doesn’t really matter, then we’ll try to skip that. But if you have index 2014,
or something that’s clearly a unique URL, or if you
have URL parameters that aren’t blocked by the
URL handling tool, then that’s something we should be
able to pick up on and say, this is a fine
page for canonical. It’s just called index
2014, but that’s fine. And usually, something else that kind of plays into this is, if we can recognize that
the same content is on a simpler URL, then we might
choose the simpler URL. But if you have index
2014, then you’re not going to have the same content
on the root, right? It’s going to be– JOSH: It’ll be tailored
for the new year. JOHN MUELLER: Yeah. Exactly. JOSH: OK. Perfect. Thank you. JOHN MUELLER: All right. “If unique content is
key, what kind of content should a page have in
order to rank higher than google.com for the
search term “google”? It probably needs to have
really good content about Google or be really relevant. “The same applies to
Wikipedia, or Nike, et cetera. Is the unique
content really key?” So I am guessing if you have
exactly the same content as Wikipedia on your
home page and you’re not Wikipedia, then that’s
not really going to help. Whereas if you have really
great content about Wikipedia and that’s recognized as
being really, really relevant, then sometimes that
could potentially rank higher than Wikipedia. Obviously, Wikipedia
is an extreme example, and you’re going to have to do
something really, really unique to actually do that. But I don’t think that this
is completely impossible. It’s not that we treat
these in any way special. It’s essentially these sites,
or these kind of brands, they have built
up their content, built up their reputation
over a long time. And they’ve become kind of
relevant for those terms. It doesn’t mean that it’s
impossible to outrank them, but it’s not trivial. GARY: John, regarding
that question, on a different note
in some respects, I just put a link
in the chat there. And we’ve talked about sort
of the UK for my business. But I’m in touch with
everybody in our industry. And they’ve all complained
about results in the US. And there’s probably
only three results in that entire page that are
actually really relevant. And there’s a lot
of repeat companies. There’s actually quite a lot
of garbage in there as well. So I guess you have a team
of people that will basically run that query through
a quality check and look at the quality of the
sites or something? I mean, it’s quite
a strong keyword, and I’m very surprised
the results are that bad. And I did send you
an email a couple of weeks ago with a
screen grab, highlighting what was bad about that page. I’m not sure if you got it. JOHN MUELLER: Yeah. I passed that on. Yeah. GARY: Yeah. This is a rather general
question for everybody. If this is happening
here, I’m sure that this is a very, very widespread
problem with quality that a lot of people are
seeing the results just aren’t up to scratch
for so many key terms. I’m not sure if it’s
because so many people are affected by Panda, and Penguin,
and all that kind of stuff, that good sites are actually
being demoted– the same thing that we’ve had for four years. We’re climbing out of that
hole, but it doesn’t make us a bad choice for a business. And that’s a really clear
example of terrible results. JOHN MUELLER: Yeah, I
passed that on to the team. It’s tricky in the
sense that we don’t want to manually curate
the search results. That’s not something
that we really have the capability to do. There are not enough
people in the world to do that for all
search results. But it’s definitely useful
to have examples where we’re getting the top
results wrong, we’re kind of clearly
misunderstanding something here, or kind of promoting spam
or lower quality content like that. That’s always
useful for the team. GARY: Yeah. All right. Thanks, John. JOHN MUELLER: “Why my Webmaster
Tools section Links to My Site simply says, no data available? I’ve added HTTP and HTTPS sites. It never worked for years. There’s nothing else on
the page, no formatting, et cetera.” I need to know the URLs
that you’re looking at. So in general, this happens
when you have the wrong version of your site verified
in Webmaster Tools. So that could be the
www, non-www version. It could be that you’re
redirecting to a newer domain. Anything like that
could be the case there. What I’d do here
in a case like this is do a site query, take a look
at the cached page for your home page, and make sure that
that version is the one that you have in
Webmaster Tools. And then look at that version
there in Webmaster Tools. Usually, it takes a few days
to kind of create this data if the site has never
been added before. But afterwards, you should be
able to see that information there. ARTHUR: John? JOHN MUELLER: Yes? ARTHUR: Can I step
in with a question related to Webmaster Tools? JOHN MUELLER: Sure. ARTHUR: I’ve just pasted
a link in the chat. I wonder if you do
some adjustments on the index status
on Webmaster Tools? Because as far as you can
see on that Print screen, without changing the
robots.txt file for a website, I’ve dropped half of the
index pages along with a block by robots.txt URLs. I just want to mention that the
robots.txt file didn’t change. So the same blocked URLs–
it should be the same amount. And suddenly, just dropped
on the 9th of this month to half of the indexed pages. Also, almost all of the
blocked URLs have been dropped. JOHN MUELLER: I need to take
a look at that example URL. So if you can post
that in there, I can double-check as well. But especially
when you’re looking at things in Webmaster
Tools, and you’re looking at the last
data find, then it might be that that data
is just halfway processed at the moment. And it will kind of jump
back up to the normal status as soon as we’ve reprocessed
everything there. But sometimes these algorithms
on our site and Webmaster Tools break as well. And while we do have
monitoring for all of that set up so that we recognize it
when things break or get stuck, sometimes things sneak through. So if you can give
me some example URLs, then I’d love to
take a look at that. ARTHUR: Sure. I’ve just posted the
URL, which is included. I posted it in the chat. JOHN MUELLER: OK. Great. ARTHUR: Thank you. I’ll get back to you on Google+. JOHN MUELLER: OK. How can– sorry? ODYSSEAS: Hey, John. Sorry. I was wondering, can you
take a look at this URL and just give us,
like, which area we might have the biggest
opportunity for improvement? JOHN MUELLER: I think
we’d have to kind of take some time to take
a look at that. I think we’ve looked
at that before. But I want to take a little
bit more time than just, like, five seconds to throw
it into something and say, this is what you’ve
been missing out on. But you’ve sent
this as well, or I think you probably sent
that with a document. ODYSSEAS: Yeah. It’s a site we have been
always talking about, but I wanted to broaden
the question outside Panda. And just in general
terms, if there is maybe something also
non-Panda that may be a bigger
opportunity than Panda. OK. So maybe we can handle
it offline together with the survey results. JOHN MUELLER: Yeah, sure. ODYSSEAS: Thank you so much. JOHN MUELLER: “How can one
avoid adding different copies of the same site, like
HTTP, HTTPS, et cetera, and just have one .com? When a page uses HTTPS
and drops the leading www, unless you add all four pages,
you don’t see some stats.” At the moment, that’s
something that you kind of have to live with
in Webmaster Tools. I think over time, we’ll
have to find a solution for that kind of
complicated UI, let’s say. But at the moment,
if you know that you might have data for different
versions of your site, I’d just add those versions
and double-check them as well. If you have redirects set up,
then usually, all of that data will just be combined in
your preferred version, and you don’t have to
worry about the other ones. “We rank number one
for a search term, but it leads to a
404 page and has yet to drop out
of search results. It’s been several weeks. Will that hurt our ranking,
as people will most likely be bouncing heavily
on that page?” That shouldn’t be a problem. This is something where
our algorithms should be able to pick up on
this change to a 404 page. But especially with
404s, sometimes we re-crawl them a few times to
make sure that we’re actually picking up the
right content there. So what you could do is
use a 410 instead of 404 to make this a little bit
faster, a little bit stronger signal. But if it’s been
several weeks already, then my guess is it’s just
a matter of a short time anyway for this
page to drop out. “Is there anything a site can
do to appear in the Knowledge Graph box? We rank first for many
terms, but the Knowledge box pushes us down and uses
information from other sites.” We don’t have anything
specific that you can put [? into ?] your
website to kind of rank within the Knowledge
Graph part of things. But using information
like structured markup, structured data on your page to
let us know about information on your page that you
have there already, letting us know about the link
to maybe your Google+ page if you have one, so that we
can combine those signals, that’s really useful for us. If you have a logo that you want
included, you can mark that up. If there are opening hours on
your site for your business, that’s something
you can add there. All that kind of helps
give us more information about your business that we
could show in a Knowledge Graph to kind of give users a
little bit of a better view of your site. RICHARD: John, may I just
ask a quick question there? JOHN MUELLER: Sure. RICHARD: Do you know how much
Google is supporting JSON-LD? JOHN MUELLER: We support
it, I think, just for events at the moment. So for events markup,
we support it. I imagine it’s something
we’ll add to the other types, but we don’t have anything
to kind of announce there at the moment. It’s a bit tricky,
because it’s not directly visible on the page. But apparently for
events, it makes sense for one reason or another. So that’s where we started.
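For orientation, event markup in JSON-LD looks roughly like the sketch below, generated here with Python. The event details are invented, and the exact fields Google expects should be checked against the current structured-data documentation rather than taken from this example.

```
# Sketch: a schema.org Event serialized as JSON-LD, embedded in a script tag.
import json

event = {
    "@context": "http://schema.org",
    "@type": "Event",
    "name": "Webmaster Central Office Hours",       # invented example data
    "startDate": "2014-11-17T15:00:00+01:00",
    "location": {
        "@type": "Place",
        "name": "Online Hangout",
        "address": "Zurich, Switzerland",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(event, indent=2))
print("</script>")
```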
RICHARD: You don’t know, is it used for discovery at all? If there’s URLs in
JSON-LD, whether that could be used for discovery? JOHN MUELLER: If there
are URLs in JSON-LD? I don’t think we’d use that. We might if we kind of crawl the
page and see it accidentally, but I’m pretty sure we’d ignore
that if it’s just in JSON-LD. RICHARD: OK, cool. JOHN MUELLER: If it’s something
like a JavaScript on the page, where you kind of, like,
create URLs with JavaScript, that’s something where
I think it makes sense to pick that up on, because
that’s something that might be shown
directly to the user. But if it’s in
JSON-LD, I don’t think we’d just pick it up from there. All right. We have a few minutes left. Let me open it up for you guys. What else is on your mind? GARY: John, what happened
to the Hangout with the Q&A? Because I noticed that it seems
to be something to do with the Google+, and if your
account isn’t actually linked to a Google+ account, then
you can’t view the button that allows you to select the Q&A. JOHN MUELLER: I don’t know. I have no idea. I asked around internally, but
I haven’t heard back from that. It’s certainly nothing I’ve
been doing on purpose here. So maybe these are just
normal changes in Google+. Maybe my Hangouts
are too spammy. I don’t know. GARY: [LAUGHS] MIHAI: Hey, John, can
I ask you– actually, it’s not really a
question, it’s more of a feedback regarding
Webmaster Tools. So whenever you go to the
Search Queries section, the period for which
you see the results is automatically selected as
the previous 30 days, usually. But the data is actually
until two days ago, or something like that, usually. But the period is
still selected. So for example, if I go into
Webmaster Tools for one of the websites now, I
see from 18 of October to 17– so until today,
basically, 17 of November. But the data is actually
only until 15 of November. And it might be a bit
confusing for some webmasters, because they think they
see 30 days’ worth of data, when they actually see only
28 days’ worth of data. And I also think–
and I’ve tested this– when you choose the
button with the modifications to check how the data compares to the previous period, it takes into account 30 days, not the 28 days for which
you have the data on. So for example, if I
choose Modifications now, it should show me 28 days
compared to the previous 28 days. But instead, it shows me 28
days compared to the previous 30 days, because that’s the period
selected in the Webmaster. I think it would be more
useful if you couldn’t select, or if it would be
automatically selected. So I have data until
the 15th of November. If it were automatically selected until the 15th of November, I wouldn’t be able to select the 16th or 17th, because there’s no data. JOHN MUELLER: Yeah. That makes sense. Especially when it compares
to the previous period and uses a different length,
that always looks weird, I guess. Because you’re looking
at the last 28 days and comparing it to 30
days, so you’ll always feel like your site
has been going down, when it might have just
been kind of stable, right? MIHAI: Right. JOSH: Hey, John. MIHAI: Oh, OK. Go ahead, Josh. JOSH: Thank you very much. So I put a little
thing in the chat there for you to look at it. I don’t want to mention it
out loud on the Hangout. But just saying. So regarding 500-level
errors, you guys have really improved lately. That’s really great. It used to be that
if a page dropped out due to 500-level errors, it
would take days to get back in. And now you guys come
back every hour or so and try to put the page back. That’s wonderful, because
I have a friend who runs an e-commerce
site, and he’s getting nailed with these
kinds of errors all the time. But he says that he’ll still
drop a few spots in ranking– it’s like Google is not fully trusting that the server is
working perfectly yet. It would be really great
if he could come back right to where he was, because
then that would really limit the money
that it’s costing him from going down a few spots. Do you know what I mean? JOHN MUELLER: That shouldn’t
be causing anything to kind of drop in rankings. Because, like, 500
errors, even 404 errors, when they come
back, we essentially take everything just
like it was before. So it’s not that the site
loses any information or that, like, a
high number of errors means that we kind of
devalue the site in any way. It should essentially be
coming back one to one. So that’s not
something where I’d assume that having a bunch
of 404s or 500 errors would cause problems. One thing you can do with
a 503, in particular, is say when we should
re-crawl that page. I don’t know what the
actual phrasing is, but it’s something
like, this is a 503 and please check back after
a certain number of minutes, or hours, or days.
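The standard mechanism for this is the Retry-After header on the 503 response, which can be a number of seconds or an HTTP date. The sketch below uses Flask purely as an illustrative choice; the maintenance flag, message, and retry value are placeholders, not a recommendation from the Hangout.

```
# Sketch: serve 503 with a Retry-After header during a maintenance window.
from flask import Flask, Response

app = Flask(__name__)
MAINTENANCE = True  # flip while the backend is unavailable

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    if MAINTENANCE:
        resp = Response("Temporarily unavailable, please retry shortly.", status=503)
        resp.headers["Retry-After"] = "120"   # seconds; an HTTP-date also works
        return resp
    return "Normal content would be served here.", 200

if __name__ == "__main__":
    app.run()
```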
JOSH: I see. And so he could say, check back in 10 seconds or check back in 20 seconds. JOHN MUELLER: Sure,
something like that. JOSH: There really shouldn’t
be an interruption of service there in any way. JOHN MUELLER: Especially
if it’s a 503, then we’re not going to
drop that URL the first time we see it. We’re going to have to
see a continued 503 there, so that after maybe a couple of
days, if we always see a 503, then we might assume that
this page is really gone. But if it just has,
like, a one-time 503, then that shouldn’t cause
us to drop that page at all. JOSH: Do you mind
typing in the chat how long the time is there
for how long it would take? JOHN MUELLER: I don’t know
what the absolute time is. JOSH: You don’t
have it memorized? Come on, John. JOHN MUELLER: I’m
thinking it’s something on the order of
a couple of days. I know I saw one case where
a webmaster was complaining that we dropped his site,
and it was returning 503 for a couple of months. So a couple of months,
like, as an upper range, and I imagine a couple of days
as more like a realistic range. But somewhere in there. If you keep serving
503, then at some point, we assume that this
page is actually gone and that the server was just
returning the wrong result code. JOSH: That’s fantastic, John. I really, really
appreciate that. That’s going to save this
person a lot of money. Thank you very much. JOHN MUELLER: Great. MIHAI: [INAUDIBLE] just use
fetch and render for once a website is submitted
again when the website is– JOHN MUELLER: Sure. You can do that, too. You can use the Submit to
Index in Webmaster Tools to get that back, as well. That’s a good idea. Yeah, definitely. All right. Someone else needs
to take this room. So I’m going to have to cut
this a little bit short. Thank you all for all of
your questions and feedback. And maybe we can
join in another time with one of the future Hangouts. And hopefully, the Q&A
will work then, actually. MIHAI: Thank you, John. ARTHUR: Thank you, John. RICHARD: Bye, John. ODYSSEAS: Thanks, John. JOSH: All right, John. As always, John,
have a good week. MALE SPEAKER: Thanks, John. JOHN MUELLER: Bye.

3 Replies to “English Google Webmaster Central office-hours hangout”

  1. I know this is not the most popular thing to say, but I agree with Gary. There, I said it. There are lots of results where bad sites are ranking and good sites are being demoted. I understand that Google wants to fight spam, but the best site should rank no matter what. Promoting garbage to the top because they never built a link is not helping users. 

  2. @John Mueller I do not think that Google supports JSON-LD for events only. It is also supported for the "Sitelinks Search Box": https://developers.google.com/webmasters/richsnippets/sitelinkssearch. Did you perhaps mean that the Google Structured Data tool does not understand JSON-LD? If that is the case, then we certainly agree.
