English Google Webmaster Central “mobile” office-hours hangout

JOHN MUELLER: OK,
welcome everyone to today’s Google Webmaster
Central office hours hangout. My name is John Mueller. I am a Webmaster Trends Analyst
here at Google in Switzerland and part of what I do is
talk with webmasters like you and make sure that your
questions are answered and that your feedback
goes back to the engineers and the feedback
from the engineers comes out to you as well. So before we head off with the
Q&A with the questions that were submitted, do any of you
want to ask a first question? No? No need to be shy. AUDIENCE: I do. But I know the answer already. I’ll ask. With the way you phrased
it to Gary the other day, it was when these things expire. But I know you can’t
give too much of it away, but is it something
that will expire, or are we waiting for a manual
update or a manual action to be removed? Or is that something
that you can’t– JOHN MUELLER: Usually, if
there’s something manual then that definitely expires. If there’s something
algorithmic, and it hasn’t been updated for
awhile, then at some point, we either update that
or we turn it off. So it’s kind of,
regardless of which side that’s in, at some point that
will expire, if you will. But it’s something
that’s hard to say. You could just wait
it out, for example. Because I guess, as
a webmaster, if you’re trying to get your
site out there, it doesn’t really
make sense to just do nothing in the meantime. AUDIENCE: Right. Which we don’t. But we just– without working
with any kind of time scale or knowing that basically
anything we do now makes no difference,
because all of the good work is going into a black hole. How do we then judge
whether it’s worth putting in that effort? I’m sure by now, if it
was a manual action, you could have
just turned it off, if, like you said to
me before, actually we could see you haven’t really
done anything malicious. So we haven’t seen any
manual action in Webmaster Tools at all, so I’m sure you
could have just removed it. So I’m assuming it’s
an algorithm we’re waiting to update,
which is not something that’s happening soon. But I don’t know that. JOHN MUELLER: If there’s a
manual action involved, then that would be visible in
Webmaster Tools, yeah. AUDIENCE: But there isn’t one. JOHN MUELLER: Yeah. AUDIENCE: So it’s algorithmic. Waiting for an update, but we
don’t know when it will be. JOHN MUELLER: I guess
that’s the case, yeah. Yeah. AUDIENCE: All right. AUDIENCE: Hi, John. Can
I ask you a question? JOHN MUELLER: Sure. AUDIENCE: If we look at the
[? Lumens ?] smartphones, it’s a case of [INAUDIBLE]. But in many of the cases, when we make a mobile site, we do not create all the pages that have been created on the desktop side. So let’s say some specific post or article, which would, let’s say, [INAUDIBLE] website, is not being created on the mobile site. So in such scenarios, some of those pages show up as faulty redirects. So apart from creating those pages, is there any way that we can remove those [INAUDIBLE]? JOHN MUELLER: So we look at
this on a per page basis, but especially in
Webmaster Tools, we have the aggregated
information there. And I don’t think you can take
those out of Webmaster Tools specifically. So if you know that you
don’t want to kind of create mobile friendly pages for
that, then that’s fine. But that’s something
you kind of have to filter out on your side. AUDIENCE: OK. So let’s say we do not create those pages, and they are going to 404. So how will Google treat them? On one side the desktop pages are properly optimized, ranking well, and in mobile searches, when a user is clicking on the desktop URL, it is being redirected to a 404. So how will Google treat that? JOHN MUELLER: Google
will see that, I mean, assuming the desktop page
still exists then Google will see that as a
mobile specific error, and try to flag that
in the search results. So what I would
recommend doing there is just showing
the desktop page. If you don’t have a mobile friendly page for that, just [INAUDIBLE]
the desktop page. It’s not perfect, but at least
they can see the content. And when we look at these
pages for the search results, we do that on a per-URL basis. So we add the mobile friendly label or we don’t show it. So it’s not the case that having a part of your site mobile friendly and another part not mobile friendly would cause the good part of your site any problems.
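To make that fallback concrete, here is a minimal sketch in Python, assuming a Flask server and hypothetical paths and domains: smartphone visitors are only redirected when a mobile equivalent actually exists, and otherwise get the desktop page instead of a 404.

```python
# Minimal sketch of the fallback described above (Flask; the domain and
# the set of mobile-equivalent paths are hypothetical).
from flask import Flask, redirect, request

app = Flask(__name__)

# Hypothetical paths that have an equivalent page on m.example.com.
MOBILE_PAGES = {"/", "/products", "/contact"}

def is_smartphone(user_agent: str) -> bool:
    # Very rough device check, for illustration only.
    return "Mobile" in user_agent or "Android" in user_agent

@app.route("/", defaults={"page": ""})
@app.route("/<path:page>")
def serve(page):
    path = "/" + page
    if is_smartphone(request.headers.get("User-Agent", "")) and path in MOBILE_PAGES:
        # A mobile equivalent exists: send smartphone users there.
        return redirect("https://m.example.com" + path, code=302)
    # No mobile equivalent: serve the desktop page rather than a 404.
    return f"Desktop content for {path}"
```

AUDIENCE: OK. Thank you. JOHN MUELLER: Sure. All right, let’s go through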
some of the questions here. URL migration best practice. Is it advisable to
canonicalize the old URL to the new URL for
a couple of weeks before 301 redirecting the
old URL to the new URL? Would this make the move
faster, clearer for Google and limit loss on
organic traffic? I would only recommend doing
that if you assume that there’s a technical problem
with your redirect. But if you know you can set up a clean redirect from one domain to the other one, and you know that this isn’t really a technical problem, then I would just set that up and let it ride. When we see a clean redirect like that, it’s a lot easier for us to
say, this is a clear site move, together with the setting in Webmaster Tools, perhaps, that you might have set. And we can say, well, we really
want to move all of these URLs to the new domain. We can crawl it a
little bit faster to see that kind of
a move happening. And it’s a lot easier for
us to handle on our side. So if you’re kind of like
obfuscating a site move, then on our side, that
almost causes more problems and potentially makes
it a lot harder, and definitely
makes a lot longer, for that site move to be
processed for you as well. So if at all possible,
I’d really recommend just doing a clean site move, like a 301 from the old domain to the new one.
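As a rough sketch of such a clean site move (Python with Flask; the new domain is a placeholder), every URL on the old domain can answer with a 301 to the same path on the new domain:

```python
# Sketch of a clean site move: 301 every old URL to its new equivalent.
from flask import Flask, redirect, request

app = Flask(__name__)
NEW_DOMAIN = "https://www.new-example.com"  # hypothetical new domain

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def moved(path):
    # Preserve the path and query string so old URLs map one-to-one.
    return redirect(NEW_DOMAIN + request.full_path.rstrip("?"), code=301)
```

[INAUDIBLE] mobile usability analysis in Webmaster Tools.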
analysis in Webmaster Tools. Does this mean that
all factors mentioned in the mobile
usability are ranking factor for mobile search? That’s a good question. I mean, we’ve been talking
a lot about mobile recently, and we’ve started to show the
little label “mobile friendly” in the search results when we
find mobile friendly pages, based on these
criteria at the moment. We’re experimenting with going
past just showing that label, but at the moment we don’t
have anything specific to announce there. So I personally, I
could imagine that this is something that might
happen at some point, but I kind of take
that step by step. And I think getting the
information on Webmaster Tools gives you a lot of insight
into where you might still have room for improvement. And sometimes what we find
when we look at sites, is that a larger
part of the site might have moved to a
mobile friendly template but some part of
the site got lost. So, for example, I
looked at our help forums recently in Webmaster Tools, and noticed that for the largest
part, everything had mobile friendly pages,
but some of the profile pages that we have
weren’t mobile friendly. And that’s the
kind of information you can pick up in
Webmaster Tools. You can see there’s
still a bunch of errors. You see the sample URLs and
if you look at the sample, you’ll say, oh, maybe this is a
template of one part of a site that I forgot to update, and
then you can update that. So that’s kind of what I’d
recommend doing there, taking that information, using that to
improve the mobile friendliness of your pages and kind of being
prepared for anything else that happens in that regard. Let me mute you for a second. Feel free to unmute if
you have any questions. Text hidden for UX reasons. 80% are mobile. 50% are on desktop, still being crawled but not indexed. Google finds my hidden text, but it isn’t showing it in the search results. The most important stuff is invisible. Do we still have a positive effect? So if the text is hidden on your pages, then we see that as something
that’s not primarily important from your point of view. So if you’re hiding
content, then that’s probably not the
most relevant piece of information on these pages. So my recommendation
there would be to make sure that
important content is visible on your pages, so that
when users go to your pages, they find this
content right away. They see it visible
from the start. And then that’s
something that we’d love to kind of focus
on for indexing as well. We do kind of treat
hidden content on a page with a little bit less weight, because usually that’s something that users probably aren’t primarily seeing, and they might be confused if they saw it treated in the same way as something that’s primarily visible on a page. So that’s something to keep in mind, if you have a kind of a tab UI,
or if you have content that’s hidden by default on your
pages, and you kind of have to click a button or a link
somewhere to really bubble that up. If it’s critical
information for your pages, if it’s relevant
for your website, make sure it’s visible
from the start. Maybe move that content to a
separate URL, if that makes sense, if that’s really
something that you find is important for your pages. So, that’s kind of what
I’d focus on there.
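As an illustration of auditing for this, here is a small Python sketch using BeautifulSoup; the HTML and the hiding patterns it checks for are simplified assumptions:

```python
# Sketch: flag content that is hidden by default, since such content
# tends to be given less weight. Patterns checked are simplified.
from bs4 import BeautifulSoup

html = """
<div>Visible product description.</div>
<div style="display:none">Details only shown after a click.</div>
<div hidden>More hidden text.</div>
"""

soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all(True):
    style = (tag.get("style") or "").replace(" ", "")
    if "display:none" in style or tag.has_attr("hidden"):
        print("Hidden by default:", tag.get_text(strip=True))
```

Is there a mobile index? We can’t see our site on a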
smartphone search results as desktop or tablet or
normal search results with rel alternate, et cetera. OK, you’re pointing to a mobile
page that’s noindex, nofollow. So we have a separate
mobile index, I think, for feature phones
but not for smartphones. We put the smartphone content
into the same search results. So just depending on which
device you’re searching from, that’s essentially what
you’d be seeing there. If you set up a smartphone
friendly website on a separate URL, I’d
really recommend making sure that you follow our
recommendations with regards to the redirects, with regards to robots.txt, with regards to noindex, all of that. And if you’re noindexing or if you’re blocking robots from crawling your smartphone-friendly pages, then we can’t really take that into account for the search results. So we really need to be able to crawl and index those pages normally.
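For reference, the bidirectional annotations Google documents for separate mobile URLs look like the following; this Python sketch just assembles the two link tags, and the URLs are placeholders:

```python
# The desktop page points at the mobile URL with rel="alternate";
# the mobile page points back with rel="canonical". The mobile page
# itself must stay crawlable and indexable (no noindex, no robots block).
desktop_url = "https://www.example.com/page-1"
mobile_url = "https://m.example.com/page-1"

# For the <head> of the desktop page:
desktop_link = (
    f'<link rel="alternate" '
    f'media="only screen and (max-width: 640px)" '
    f'href="{mobile_url}">'
)

# For the <head> of the mobile page:
mobile_link = f'<link rel="canonical" href="{desktop_url}">'

print(desktop_link)
print(mobile_link)
```

AUDIENCE: John, can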
I ask a question? JOHN MUELLER: Sure. AUDIENCE: We have a client who [INAUDIBLE] dynamically serves content– on the same URL, different content, a different stream for mobile and desktop. So will this affect
ranking somewhere? Because we have
seen some variation in rankings of the same
URL in mobile and desktop, though the content is the
same, only the design part is different and they have
created two different websites just for the mobile part of it. JOHN MUELLER: If the
content is equivalent, and if you’re setting those
pages up properly with regard to our guidelines, then
that should be fine. There’s nothing special
you need to do there. If the content isn’t
equivalent, say you’re selling shoes on
the desktop page and t-shirts on the
mobile page, then that’s something
where we’d say well, these are probably
two separate sites. We shouldn’t be treating
them equivalent. If it’s equivalent content,
that seems like the right set-up there. So the design doesn’t play that
much of a role in that case. Also if you kind of hide
things on the mobile page, that’s less of a problem. For example, if you’re hiding
the sidebar, the headings, the footer, those
kind of things, if you have smaller
images, that’s all fine to have
on the mobile page. The primary content
should just be equivalent. AUDIENCE: OK. Great. AUDIENCE: Again,
I have a question related to quality guidelines. JOHN MUELLER: Sure. AUDIENCE: From the Black Hat perspective, we have seen a couple of websites which are doing [INAUDIBLE] and blocking bots. But at the same time they are stable in ranking. And we have seen they are building links significantly. But the point that I wanted to raise is, they are neither increasing in ranking nor decreasing. They have stabilized where they are. So what is the reason behind it? Does Google look and see those [INAUDIBLE] and all this stuff? JOHN MUELLER: It depends. So there are lots of
things that kind of come into play with the ranking. What I’d recommend
doing there, is just making sure that you file
the spam report so that we’re kind of aware of that situation. But sometimes what I see
is that normal sites that are doing something bad are kind
of stable in ranking as well. And it’s not the case that we’re
ignoring their sneaky things that they’re trying to do, but
rather that these sneaky things are pulling them
down a little bit but they’re still otherwise
fairly reasonable. So we keep showing them
in the search results. Obviously there are
situations that we get wrong, where maybe we show a bad
site very high in the search results, even though
they’re doing sneaky things. And that’s the kind
of thing that we’d love to hear about
in the spam reports. But for the most part, we take
the feedback from the spam reports to help
improve our algorithms and to help improve
our systems as well. So we don’t manually go
through all of the URLs in our search results to
find exactly the ones that are problematic and to kind
of manually take that out. That’s kind of impossible
with the size of the web. But we do use that
feedback to figure out what we need to be focusing on. And that’s definitely useful. It’s hard to say if they’re
just ranking because they’re doing other things
really well or if they’re ranking because
they’re getting away with sneaky things. But regardless of the
case, that’s something you could tell us
about and we’ll take a look at how to see what
we could be doing differently. AUDIENCE: Great. Thank you. JOHN MUELLER: If you’re a
new entrant and new domain in a segment where
there are already lots of high quality,
high ranking sites, what’s the best strategy? Would it be best to focus on niche, long tail keyword searches first, with lots of pages or just a few? I think this is almost like
a business or a marketing question. Because if you’re a business
that’s going into an area where there are already
well-established players, where there’s already a lot happening,
a lot of strong competition in place, then
that’s always going to be a really tough
situation, regardless if that’s online or offline. What I’d recommend
doing there is trying to find an area
that you can focus on, where you can kind of build up,
where you’re doing something very special that the other
players maybe don’t want to do or that they can’t do. So instead of trying
to compete one-on-one with the really strong
competitors that might be out there, find
something that they’re not interested in and
kind of focus on that. Become a really strong
website in that regard and build out from that slowly. So instead of trying to do
the same as everyone else, find something that
makes you special and that makes
your site special. We own two e-commerce
sites with similar products and descriptions. When this happens,
does Google choose to rank one over the other? They each target a
slightly different audience by the marketing and look. Will Google see them as
duplicate and just rank one? Sometimes we will see these
kind of sites as duplicate and try to pick one of them
to show in the search results. It kind of depends
on what kind of sites these are, but
essentially if you’re selling the same product,
if you are the same company, and someone is searching
for that product in general, then it wouldn’t make sense to show
that listing essentially twice. So from that point of
view, for the user, we do try to fold those
kind of sites into one and say this is one site. We’ll show you one
search result. Sometimes that
doesn’t work so well. Sometimes it doesn’t
make sense to do that, if they’re really,
significantly different. So kind of take that
with a grain of salt. What I’d recommend doing
in a case like this, if it’s just two sites, I
think that might be fine. If you have more than
two or three sites, then I’d recommend folding
those together into one really strong site instead of kind
of diluting your efforts across multiple sites. It’s reported that at SMX Milan you said the [INAUDIBLE] desktop version is used as a ranking signal for the mobile version: if the desktop version is fast enough and the mobile is too slow, it doesn’t affect ranking. I think at the moment, this is correct. So we do focus on the
desktop page for the search results for the most part. That’s also the one that you
use with the rel canonical. As we pick up more information
from mobile friendly pages or from mobile pages
in general, then I would expect that to flow
into the rankings as well. So that’s something
to keep in mind there. I’d still make sure that your
mobile friendly pages are as fast as possible,
that they work really well on mobile devices,
that you’re going past just essentially the required
minimum that we had with the mobile friendly
tool, and really providing a great experience on mobile. Because lots of people are
using mobile to kind of make their decisions,
to read content, and if your site is kind of
minimally usable on mobile, but really a bad
user experience, really, really slow,
then that’s something that users will notice
as well and they’ll jump off and do something else
or go to a different site. Is it possible for a
competitor to strip an article from your website
before it gets crawled and submit it as
his own content? Duplicate content in the
eyes of Google search engine, for example. I guess theoretically
this is possible. In practice it’s not something that I would expect to happen like this, or that I would expect to cause any problems. Because we’re pretty good at
recognizing the original source of the content, even if we
first saw it somewhere else. And this is something
that’s fairly common, in that sometimes the site will
have a blog feed for example, and that feed might get picked
up somewhere else before we actually pick up and use
that content for web search and that’s not
necessarily a problem. So even if, in an extreme
case, a competitor picks up one of your
pages and copies that content onto their site
and gets it indexed first, that’s not something
that I would assume would cause any problems
at all for a website. Let’s see in the chat,
there’s some question about the lower thirds. I have no idea if there’s
anything special happening there. I see some of you have them
active and some of you don’t. But there’s no
setting on my side that disables it,
let’s put it that way. A few weeks ago, I
submitted a mobile sitemap for a dynamic mobile site
using the same URL scheme based on the user agent. I don’t see that
Google indexes that. Is that normal, that
Google views– well, different question. So the mobile sitemap
is essentially for feature phone pages. It’s not for smartphone pages. So that might be something that
you kind of ran into there. So if you’re doing a
smartphone friendly site, then I’d just use a
normal sitemap file. You can also include, I
believe, the rel alternate, the rel canonical markup
in the sitemap file. I personally, I try to keep
it on the pages themselves because it makes it easier
to debug what you’re actually doing. But essentially, if
you’re using the same URLs for a smartphone
friendly site, then I would just submit
those URLs normally with your normal sitemap file.
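As a sketch of what that annotation in a normal sitemap file can look like (URLs are placeholders; the xhtml:link element is the documented format for separate mobile URLs), generated here with plain Python string formatting:

```python
# Build a sitemap whose entries carry the rel="alternate" annotation.
PAGES = [
    ("https://www.example.com/page-1", "https://m.example.com/page-1"),
]

entries = "".join(
    f"  <url>\n"
    f"    <loc>{desktop}</loc>\n"
    f'    <xhtml:link rel="alternate"\n'
    f'                media="only screen and (max-width: 640px)"\n'
    f'                href="{mobile}"/>\n'
    f"  </url>\n"
    for desktop, mobile in PAGES
)

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:xhtml="http://www.w3.org/1999/xhtml">\n'
    + entries +
    "</urlset>\n"
)
print(sitemap)
```

Does Google use custom search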
user’s behavior and statistics as signals? I’m not really sure
what you mean there. Do you want to elaborate? AUDIENCE: Yes. Google uses some statistics about user behavior, when they go to your website and bounce back to a [INAUDIBLE]. But what if you use the custom search also? JOHN MUELLER: So,
the custom search I think is when you embed
like a search [INAUDIBLE] for your site? AUDIENCE: Yes. JOHN MUELLER: I think that’s
essentially just a feature that we make available. I don’t think we use
anything special from there. Because you can embed it
in so many different ways that it’s really hard for us
to, I guess, even look at that. But even with regards to normal
user behavior signals, that’s not something, I’d say, that
we’d use directly in search. So we do use that
kind of information to determine if our algorithms
are working as they should be, but kind of taking that
out to a site level isn’t really something that
I think makes a lot of sense. Can we expect Penguin to refresh
more than once a year this time around, to help sites that have cleaned up and demote sites that are ranking thanks to spammy techniques? My understanding is
that we’re working on improving the speed there. So I would definitely expect
that to be a little bit faster this time around. I don’t have anything
specific to announce. I don’t have any specific dates
that I could give you guys so I don’t think that’ll
be happening next week or anything really
soon like that. But I know the team
kind of is working on improving the
speed there as well. Here’s a custom search question. Yes. Joshua. AUDIENCE: I thought I was in
the wrong hangout for a little bit there because I
hadn’t heard a Penguin question for the first
half of the meeting. Hey, I was going to ask
about a question regarding are there any– Well,
Matt Cutts has previously talked about some algorithms
that specifically relate to– or that Google was looking into
finding ways to verify or show authority of medical
related type sites, like health, natural supplement,
these kind of sites. And I believe in the
past, he’s referred to it in relation to looking at
authorship, authority, or site authority or
something like that. But then also in relation to
these things because they’re important about people’s
health and stuff, we don’t just want any site there; in the long term, it would be good to find the more reputable type of sites in this particular genre. Are there any
algorithms recently that have focused
more closely on that, that you could mention
anything about? JOHN MUELLER: I don’t
know of anything specific that I could tell
you about there. So I don’t know. Nothing that I’m really
aware of in that regard. I know this, especially
the medical area, is something that we try to
keep an eye on a little bit to make sure that we’re giving
the right information to users, because that can sometimes
have a really strong effect. But I am not aware of any
specific algorithms that would be specifically
focusing on the things that you mentioned there. AUDIENCE: OK. We’re looking at a client site
related to vitamin supplements, but it’s not the– it’s an e-commerce site, but not the more kind of spammy
type of a Viagra, steroids, or any of that kind of site. It’s all pretty–
and it’s well done, not in any kind of spammy way. Yet it seems like
a while back, they got pushed down considerably. So I was just wondering if
there’s been anything more closely related to that, or did the payday loans algorithm include much specific to that genre, that you know of? JOHN MUELLER: No, that would be
really specific to payday loan sites. That wouldn’t be
something that we’d kind of spread across all
different kinds of websites. AUDIENCE: OK. Because I thought
that algorithm update, even though that was used as the original name, was talked about in later updates of it as focusing across the board on any kind of sites that are known to– or topics that are known to have more spammy associations with them, and maybe either quality factors or keyword stuffing or– JOHN MUELLER: I don’t think so. Sorry. But, yeah, I don’t think
that’s the case in that case. So I think specifically,
with regards to like the site
that you mentioned, I haven’t taken a
look at that site. I don’t know what the
URL is, but, in general, what I’d recommend
doing there, is just really making sure that
the quality of the website overall is as high as
it can be and focusing on the normal things
as well there. So I wouldn’t assume that
there’s anything exotic holding back a site like that, one that’s
kind of reasonable but not really great. I’d really try to focus
on the normal things there and just make sure that
it’s really the highest quality site it can be. AUDIENCE: Yeah. And so you’ve got the general
e-commerce site challenges, where there’s what do you write
about each individual topic and things like that of course. And then there’s the topic of
sliders on the top of the page. And we’ve talked
about that in relation to the layout
algorithms and stuff. Do you think– because they are
still quite popular, especially with a lot of very
uniquely designed websites. Like I was looking at something
in the jewelry category, like diamonds e-commerce sites. So it’s very visual
and they want to start with a lot of sliding images at the top. Do you see there being many challenges in that regard? JOHN MUELLER: I don’t
see a problem with that. I think that’s absolutely fine
if you have a great website design that uses these kind
of sliders, that’s fine. I just– I think one
aspect I’d just kind of watch out for is that
you’re not putting the most important,
relevant information in these kind of sliders on top. Because if it’s
not really visible when we kind of crawl
the page, for example, if the slide number three has
the most important information for that website on it, if
that’s not directly visible, then we’re not going
to be treating that with as much weight when we kind
of crawl and index that page. So if these are
images that just lead to different parts
of the site, if these are kind of like current
ads or current information that kind of lead to
different parts of the site, but not really the primary
reason for visiting this page, then that’s absolutely fine. And I think even from a
usability point of view, if slider number three is
the most important part of the page, then
that’s probably going to be a bit
confusing for users. So for the most part, people
are probably doing this right and using these sliders as a
way to kind of draw attention to different parts of the site,
and that’s perfectly fine. AUDIENCE: OK. All right, thanks. AUDIENCE: John, can I
ask a followup question to that payday loan question? JOHN MUELLER: Sure. AUDIENCE: Can you– I know
you can’t speak specifics, but can you tell us roughly
how you would determine whether someone was in
the payday loan industry? Because we found connections
with our site to– and I’m just throwing a
URL in the chat now. But eight other sites have
scraped our entire site, including the analytics
code and other codes. And so if analytics sites and these kind of ranking sites say, actually, your domain is associated with this one, then Google has as much, if not lots more, information on that. And if this type of site
looks at our site and says, well, you’re basically
the same domain as this site, then is that
something that can ever be used as a signal
to Google as, are you not going
to be that naive? JOHN MUELLER: We’ve seen
these kind of domain informational sites pop
up every now and then. And that’s not something
I’d really worry about. The links from
those kind of sites aren’t really something
that we worry about. Which sites that they think
might be related to your site, that’s– I wouldn’t
worry about that. These are usually just
like auto-generated sites with information they
could pull from the web. AUDIENCE: Right. Because at first glance,
obviously we then think, Christ, were we under the payday loan algorithm? So that leads me to think, well, maybe we are. What would you use to think we might be a payday loan company when clearly we’re not? But you would never
use this kind of– JOHN MUELLER: I
wouldn’t say never use this kind of information. If we don’t have
anything else to go by, then we might crawl that
page and say, well, there are like two links on here. Maybe they’re related. But for any website
that has been on the web for more than a
couple of weeks, we have just lots of other
information that we can use. So that’s something
where we could say, well, these are pages
that are auto-generated. We can crawl and
index them, but that doesn’t mean that we
give them any value. AUDIENCE: Right,
but I don’t mean what they– I don’t
mean their site, I mean what they’re using to
determine that we’re related. You clearly have the
same information, whether it’s the second domain
or under the same analytics. They’re using analytics
code because they’ve scraped our entire
page and used– and our analytics code has been caught up in it. JOHN MUELLER: I wouldn’t
worry about that. I mean there are lots of
ways that people kind of pull in this information for those
kind of domain information sites. And we’ve– I don’t know. We’ve been crawling the
web for quite a long time and we see a lot of other
kinds of information. So I wouldn’t really
focus on what those sites say. And sometimes you’ll run
across a site saying, OK, this domain is for sale. You could just like send
an offer to this guy. And in reality, you’re like, I’m keeping this domain forever. Nobody’s going to take it away from me. So those are the kind of
things where we crawl and index those pages,
but we don’t really give them that much weight. AUDIENCE: But again,
it’s not the page, it’s what they’re using,
which is essentially the analytics code that
I was concerned about. JOHN MUELLER: No. I wouldn’t worry about that. AUDIENCE: OK. AUDIENCE: Hi, John. We have a problem with HTTPS. We have a website
without HTTPS, and when we search for the HTTPS format of that URL, it shows that it is being blocked by robots.txt. So can you look at that URL? JOHN MUELLER: What I’d do there is use the Webmaster Tools robots.txt testing tool. AUDIENCE: We have checked that and there’s no problem with the robots.txt. Even though it’s showing that the HTTPS version is blocked. And meanwhile the non-HTTPS version is not blocked. So how is it possible? We did not put any robots.txt rules there. We did not do anything on that. JOHN MUELLER: What
sometimes happens– I don’t know if this is
the case, in this case, but if we can’t reach the
robots.txt file for a site, then we’ll assume that the whole site is blocked. So that’s something that
might be happening there. Maybe your robots.txt file itself is blocked from crawling completely. Then we would see the whole site as being blocked from crawling.
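A quick way to check this yourself is sketched below with Python’s standard library; the URLs are placeholders. If robots.txt can’t be fetched at all, a crawler can’t tell what is allowed and may treat the whole site as blocked.

```python
# Check whether robots.txt is reachable and whether a page may be fetched.
import urllib.error
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
try:
    rp.read()  # fetch and parse robots.txt
    ok = rp.can_fetch("Googlebot", "https://www.example.com/some-page")
    print("Googlebot may fetch the page:", ok)
except urllib.error.URLError as err:
    # robots.txt unreachable: a crawler may err on the side of
    # not crawling anything on the site.
    print("Could not fetch robots.txt:", err)
```

AUDIENCE: No, this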
is not the case. We have checked that. That HTTPS version, it
has no certificate. It is not present at all. JOHN MUELLER: Yeah. AUDIENCE: So no Google access. JOHN MUELLER: So, I don’t know. Like when I try to
access it, my browser doesn’t even let me look at it. So and if I search
for your domain name itself without HTTPS, then
I get the normal search result. I wouldn’t worry about that. That’s fine. I mean, we’ve crawled and
tried to index that page. Or we’ve tried to
crawl it, and we noticed that robots.txt
wasn’t letting us through. So we have it like
that in our index. But that doesn’t mean
that people will see it. It doesn’t mean that it’s
going to cause any problems. AUDIENCE: OK. Thank you. JOHN MUELLER: All right. Is the number of listings in a
list page counted as content? For example, for a
job site, number of jobs available in a category. So we can display this on
top of our mobile sites where space is limited. I don’t think we
count the number of entries in a list page. The one thing I
kind of watch out for with these kind of
pages is that they’re not completely auto-generated,
that there’s actually some content there
on these pages. So instead of just providing
search results pages, make sure that they’re
really of value, those pages. And then if you have a mobile
version for that, that’s great. So I wouldn’t see the number
as something that I’d say is a primary ranking factor
from Google’s point of view. But really just make sure
that these pages work well for the user, that they’re seen
as being high quality content. If, for user experience reasons, there’s no similar page on a mobile site because it’s so different– for example, no voucher page on mobile, but one on desktop– which page will the search
results for smartphone show? So if we know that there’s
an equivalent smartphone friendly page, and a user
is searching on smartphones, then we’ll try to show that
page in the search results. If we know that there’s
no equivalent smartphone friendly page, then we’ll just
show the normal desktop page in the search results. So it’s not that we block
the desktop pages completely from being shown in
search, it’s just that we try to bring those
smartphone users a little bit faster to the smartphone
friendly version of that page. When building a mobile version of a page, is it better to start a new project in a separate folder or on a subdomain, or
is it better to make the existing page
more responsive? Is there a best practice on
how to start a mobile version without Google
[INAUDIBLE] content? So as best practices,
I’d recommend double-checking our guidelines. We have a lot of information for smartphones now, a lot of new information
on how to make a smartphone-friendly site. If you’re using a common
content management system, sometimes there are
simple steps that you can do to kind of activate
the mobile friendly version of your site. So that’s kind of
where I’d start off on. If you realize that you
have to do this yourself, then our recommendation is to
use responsive design, which means you keep the same URLs
and you essentially just create alternate CSS that smartphones can use to display your content, for example. But if you can’t do a
responsive web design, we also support two
other variations, either serving different
content on the same URLs or serving the
smartphone-friendly page on a different URL. And with those three
options, usually there’s one approach that works for you. And I’d just follow
that approach. It’s not that we’d say you need
to go to a responsive design. It’s not like we say you
need to kind of follow our recommendations,
but rather we support these different types. And we just think of
these different types, responsive works the
best for most sites, so we can recommend that. But you can rank just as
well with all the other types that we support. AUDIENCE: John, since this is a
mobile discussion, supposedly– JOHN MUELLER: [INAUDIBLE] AUDIENCE: –on responsive
versus adaptive, what common issues might there be around those? JOHN MUELLER: What do
you mean with adaptive? AUDIENCE: So the screen
resizes to a particular size rather than totally reactive. And I’m thinking Nick’s
asking the same question now in the chat actually, so
maybe you just want to answer. JOHN MUELLER: OK. So dynamic serving. So that would be when your
server essentially serves like different HTML
to smartphone users. Essentially we support
both of those types. So that’s something
where if you say adaptive leads to a
better kind of user experience for your users or
where you’re saying adaptive is easier for us to implement
and maintain on our side, then go for that. So it’s not the case that
I’d say that either of these would rank lower in
the search results. We essentially support all of
those three different types. With regards to
common issues there, I think when we look at
these sites, most of the time we look at them when
something goes wrong. And with responsive
web design we see that fewer things go
wrong because essentially it’s the same HTML. If we can process the desktop version, we can process the
mobile version as well, provided we can
still see the CSS. With adaptive, what
we sometimes see is that people sniff
the user agent wrong, in that they think that the smartphone Googlebot is actually a normal desktop Googlebot, and they serve the desktop content to our smartphone crawlers. Those are the kind of issues that we see going wrong.
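As a sketch of dynamic serving done the documented way (Flask; the detection rule and templates are simplified assumptions): detect the device server-side and send a Vary: User-Agent header so caches and crawlers know the response differs by device.

```python
# Dynamic serving sketch: same URL, different HTML, plus Vary header.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/product")
def product():
    ua = request.headers.get("User-Agent", "")
    # Simplified detection; note that the smartphone Googlebot's user
    # agent also contains "Mobile", so it receives the mobile HTML too.
    if "Mobile" in ua:
        body = "<html><body>Mobile HTML</body></html>"
    else:
        body = "<html><body>Desktop HTML</body></html>"
    resp = make_response(body)
    resp.headers["Vary"] = "User-Agent"  # responses differ by user agent
    return resp
```

I haven’t really run across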
a lot of usability issues where I’d say, this is
would’ve been better handled with a responsive design. Most of the time, it’s
just technical issues that are sometimes really
tricky to diagnose on your side. So if you’re always serving the
desktop content to Googlebot, even the smartphone
Googlebot, then that’s not something that
would be immediately visible in Webmaster Tools. And that’s something that
is sometimes really tricky to figure out what’s
actually happening there. What I’d recommend
doing there, if you suspect that this
might be happening or if you want to double-check
that it isn’t happening, is to use the Webmaster Tools fetch
and render feature and select the smartphone option there. That way you can
see exactly what smartphone Googlebot is seeing. You can see if it’s able to
pick up the videos, for example, if you have videos on there
and that it actually shows you what was roboted, for example,
or what isn’t roboted. So that’s the kind of
thing I’d watch out for.
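Alongside Fetch and Render, a rough self-check is to request your page with a desktop user agent and with a smartphone-crawler-style user agent and compare the responses; the user agent strings and URL below are illustrative only.

```python
# Compare what a desktop browser and a smartphone crawler would get.
import urllib.request

URL = "https://www.example.com/"
DESKTOP_UA = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36"
SMARTPHONE_BOT_UA = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) "
    "AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e "
    "Safari/8536.25 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

def fetch(user_agent: str) -> bytes:
    req = urllib.request.Request(URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# On a dynamically serving site, identical responses here suggest the
# server is misdetecting the smartphone crawler as a desktop browser.
print("Responses identical:", fetch(DESKTOP_UA) == fetch(SMARTPHONE_BOT_UA))
```

Usually we run across the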
issues more when there’s really something critical
happening on these sites. So it’s not the case that I
have a lot of best practices to share with regards to
creating an adaptive site. Do links from crawled Google Play apps pass PageRank or [INAUDIBLE]
signals to my website? I don’t know how the Google
Play website is set up. I assume that most
of these links will have a nofollow attached. And, in that case, we won’t
do anything special with them. In any case, it’s not that
we treat the Google Play website in any kind
of a special way. If we can crawl
the pages on there, and we can find
links on there that don’t have a nofollow
attached, then we’ll try to forward that
PageRank appropriately. So it’s not that
there’s any kind of special casing for
the Google Play website. Authorship. Oh, man. Authorship was an idea that
appealed to many of us. A few months later
it was removed from the search results. So I think it was actually a few
years later but, more or less. In many places it’s mentioned
that the real reason was a drop in click advertising. Is that true? No. That’s definitely not true. We do take into account
what’s happening with regards to clicks on ads and from
a web search point of view, if we notice that more people
are clicking on ads, that’s actually a bad sign for us. So that’s something
where we’d say, if any change that we do
results in more people clicking on ads, then that means
our search results are doing a worse job. That means we’re
doing something wrong. So that’s essentially
our point of view. We try to take into account
the long term picture. And if we’re doing something in
search that results in people being less happy with our search
results, then they’re going to, over time, use other
things for a search. And that’s not really
going to help us. So the more we can kind
of keep people in search, and make sure that we’re
bringing the right search results to people that
they can click on, that actually bring
value for them, then I think we’re kind of
headed in the right direction. And we definitely
wouldn’t launch a feature where we know ahead
of time, that people are going to click more on ads. And we wouldn’t remove a
feature if we know ahead of time that this change will
lead to people clicking more on ads. So that’s something
that’s definitely not playing a role there. Webmaster Tools shows
duplicate title tags for products that are on my home
page and collection selection, even though [INAUDIBLE] rel canonical, for duplicates of the same products when they are in multiple categories. I’d probably have to take
a look at the examples to see what specifically
you’re seeing there. But in general, if you
have the same title tag on multiple pages, we’ll
show that in Webmaster Tools. Sometimes even if we see
a rel canonical there. So, Webmaster Tools
in that regard, is a little bit–
almost on a lower level, that it says, well,
we crawled these pages and we saw the same title. Therefore we’ll let you know
about that, just in case you weren’t aware of that. And if you’re using the rel
canonical to kind of simplify that already, then
that’s something you’ve essentially
taken care of. Another thing to keep in mind
is that the rel canonical is something that
we have to process as a second or third step. So what sometimes happens
is, we’ll actually crawl and index a page
with a rel canonical set to a different URL. And we’ll keep that
original page in our index for awhile, even if we perhaps
crawled and indexed another URL because we kind of see this as
a unique URL on its own first. And over time when we can
process the rel canonical, we’ll forward the signals
to the canonical page. But at least in the
beginning, we’ll definitely crawl
and index that page first like that, even if it
has a rel canonical pointing to something else. If you really want to
prevent that from happening, and usually you don’t
really need to prevent that from happening, you could use a
301 instead of a rel canonical. And a 301 essentially tells
us right when we’re crawling, we should go to the other URL. Whereas the rel canonical
is something that kind of has to be processed first.
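A side-by-side sketch of those two options in Python (Flask; routes and URLs are placeholders): the 301 redirects the crawler immediately, while the rel canonical still serves the duplicate page with a hint that is processed later.

```python
# Option 1: a 301 - acted on right at crawl time.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/old-product")
def old_product():
    return redirect("https://www.example.com/new-product", code=301)

# Option 2: rel canonical - the duplicate page is still served (and may
# be crawled and indexed first), with a hint to the preferred URL.
@app.route("/product-duplicate")
def product_duplicate():
    return (
        "<html><head>"
        '<link rel="canonical" href="https://www.example.com/product">'
        "</head><body>Same product content</body></html>"
    )
```

Use of hidden text is not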
so relevant for pages. Does this mean only
the text is not relevant for it or
also videos and links? For example, I have a
relevant link or video on a tab, which is only
visible when I click the tab. Should I also show that? So in general, this
is something where if the content isn’t
really visible, then it’s really hard for us to say
whether or not it makes sense to put a lot of weight
on this content. And it doesn’t really
matter if it’s a video or if it’s a link
or it’s images, this is essentially
something that’s been the case for
a really long time now, that if this is really
important and relevant content, then make sure that
it’s actually visible. One way to think about this
is, if a user is searching for that content specifically,
something perhaps hidden, or perhaps not so
hidden in a tab, where you have to click the
tab to see that content, if the user is searching
for that content, they land on your page. And they look at your
page and say, well, this isn’t really the content
that I was searching for. It’s not the image
of a, I don’t know, a car, that I was looking for. But rather a big piece of text
or image of something else, then they kind of feel
frustrated that they didn’t really get what they
were looking for. So we try to preempt that a
little bit by saying, well, if it’s hidden, maybe it’s not
really important for the user. Maybe we shouldn’t be putting
that much weight on it. So with that in mind,
if this is something that you think is really
important for your users, make sure it’s either visible
when they go to that page, or if you think that this
is significant enough, maybe set up a separate URL
where this is actually visible. If it’s essentially
auxiliary information that the user might want
to look at for a really kind of in-depth review of
that page or that content, then maybe it’s fine to
keep it in a tab or keep it behind
something like a click here to learn more link. So that’s something where
I wouldn’t say you always need to do it one
way or another, but rather to think about how
important is this content? Is it relevant enough
that you actually want to have it
visible right away or is it something
that you don’t really find that critical
for this page, but users might like it to get
a little bit more information. Let’s see, here’s one. When there’s an update to
algorithms like the rolling Panda update, is it data
from the last month’s worth of crawls or is it from
a few months earlier? We’ve noticed that it takes
a few months for big site improvements to be picked up. In general, these
are things where we don’t have like
a cut off date and say, it has
to be on this date or it has to be
crawled on this date. Because we can’t crawl the
whole web all the time. We have to make
regular decisions on which part of the web
we should crawl today, which part we should
crawl tomorrow, which parts we should crawl
every couple of hours, which parts we think
probably only make sense to crawl probably once
a year, or once every couple of months. So these are things
that all come into play. And from a technical
point of view, it’s not even
possible to say, well, we’ll only take everything
into account that was crawled last month, because
there are some things that we haven’t
crawled last month. And we can’t really
throw them away and say, well, if they
were crawled a year ago, it doesn’t really mean
that they’re relevant now because these could
be really important pieces of information, but they just
don’t change that frequently so we don’t crawl
them that frequently. So just because we crawl them
more often or less often, that doesn’t determine their importance for us. So with that in
mind, when updates look at the data
in our index, we don’t focus just
on a period of time that we’ve actually
crawled, but rather on everything that
we’ve collected until about that time. All right. We’re running low on time. I’m happy to give you guys
a chance to ask some more direct questions if you
have anything specific. AUDIENCE: John, I put a
link here in the chat. That’s something I
was talking about. I mean, I don’t know
right off if there’s any particularly
obvious problems there. JOHN MUELLER: I can take a look
at it after the hangout, yeah. AUDIENCE: OK sometimes when we
use the site search function to look at indexed
content, we don’t always get the same results. I mean, sometimes at
different times, even, maybe it’s an indexing
anomaly but we’ll see quite a good number of
changes and in those results. I don’t know if
that’s because it’s during an update or
something like that. Do you know about that? I mean that function is
basically you use the site and then you don’t
include the www in there. Are there any special variations
in which that function could be better utilized? JOHN MUELLER: I think it’s
kind of important to keep in mind that the site
query is something that’s pretty artificial, that
normal users don’t do. So we don’t necessarily
focus so much on that. It’s not something
where I’d say, if a URL isn’t listed
there or not indexed there, or if a URL isn’t ranked first– like, you do a site query for your site,
and the home page isn’t ranked first, that
that’s a sign of a problem. That’s essentially something
we try to maintain, we try to use as a filter for
some of the indexed URLs there, but it’s not meant to be
a comprehensive listing. Kind of the count of
the results on there isn’t meant to be a
comprehensive count of the indexed URLs. It’s essentially
something that we’re trying to provide a quick
set of search results based on that query. So I wouldn’t necessarily– AUDIENCE: [INAUDIBLE] JOHN MUELLER: I wouldn’t
necessarily use it as a diagnostics
tool, for example. AUDIENCE: OK. Yeah. So often it doesn’t
match up with the sitemap index diagnostics in
Webmaster Tools or other data. I guess that’s quite common. JOHN MUELLER: That’s
completely normal. So I’d use Webmaster Tools
for things like that. The sitemaps count that you
mentioned, the indexed status information in
Webmaster Tools, I wouldn’t use site
query for that kind of diagnostics information. AUDIENCE: All right. Thanks. JOHN MUELLER: Ramesh, I
think you had a question too. AUDIENCE: Yes. So my question is
about [INAUDIBLE]. Android [INAUDIBLE]
Google itself. But the problem is like it’s
[INAUDIBLE] with the Google [INAUDIBLE] how they can
get these particular viewing [INAUDIBLE] JOHN MUELLER: I’m having a
really hard time understanding your question from the sound. But I think it’s about
getting your app index and showing that in the
search results in Chrome on smartphone, right? What I would do there
is maybe do a short post in the help forum
about the problem that you’re seeing there. And I can send someone who’s
more familiar with the app indexing side to look
at your specific site. AUDIENCE: [INAUDIBLE] JOHN MUELLER: So if you
can post to the help forum with the specifics of the
URL of your site and the app that you’re using there,
then I can point someone at that for you. All right. Looks like we’re
kind of out of time. There have been great questions here again. So let me just
double-check that I have the links that
you guys sent me. And I wish you guys
a great weekend. Thank you all for joining.
