Google catches everything, but badly!


In its rush to catalogue the whole World Wide Web (which in this case could stand to be a little less wide), over the last year Google has launched into a frenzy of crawling and indexing web pages without much care for the details.

Google can read Ajax/JS content

For some time now there has been a guide on how to make Ajax content readable by the crawler, but even if you don’t use those techniques, it seems the search engine can read Ajax content anyway. It’s impossible to know how, or to what level of quality. The good Matt, in a rather curious way, announced it on Twitter: ….

Googlebot keeps getting smarter. Now has the ability to execute AJAX/JS to index some dynamic comments
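For the record, the guide in question describes Google’s AJAX crawling scheme: a “pretty” URL containing a hashbang (#!) gets rewritten by the crawler into a query-string form, and your server is expected to answer that rewritten URL with an HTML snapshot. A rough sketch of the documented mapping (example.com is a placeholder):

The “pretty” URL the user sees:
http://www.example.com/page#!comments=open

The URL Googlebot actually requests, expecting an HTML snapshot in response:
http://www.example.com/page?_escaped_fragment_=comments=open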

Shortly before Matt’s tweet came the announcement that Google can index Facebook comments, and a few days later a fairly detailed post wondering just how capable Googlebot really is, later discussed on Google+ by Valerio.

When you touch such unexplored territory you should always stop, think a little and investigate, because otherwise it’s all too easy to get sucked into the whirl of hearsay… so it’s better to test even what’s taken for granted. Take Facebook comments, for example: are we certain they’re indexed in all cases, or only for a chosen few? Or just sporadically?

Personally, even on sites of some importance, I haven’t noticed any indexing of comments left via the Facebook social plugin. Have you?
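One quick empirical check anyone can run (the domain and the quoted text below are placeholders, of course): take the exact wording of a comment left through the Facebook plugin and search for it, restricted to the site that hosts it:

site:example.com "exact text of a comment left via the social plugin"

If even old, well-linked pages return nothing, those comments are most likely not being indexed.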

On the contrary, what does happen is that the search engine, and consequently Webmaster Tools, reports a great number of errors like the following:

[Screenshot: crawl errors reported in Webmaster Tools]

These shouldn’t be errors at all, because they’re URLs extrapolated from Ajax code sitting inline in the page. Something similar to the following (and very often harder to make sense of):

<script>
...
var u = "/esempio/text/javascript";
var c = "/esempio_ancora/text/javascript";
...
</script>
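To see why those 404s appear, here is a minimal sketch, in plain JavaScript, of what a crawler that naively extracts path-like string literals from inline scripts would end up doing. To be clear, this is my guess at the behaviour, not Googlebot’s actual code:

// Minimal sketch, NOT Googlebot's real logic: grab every string literal
// beginning with "/" out of inline JS and treat it as a relative URL.
var html = '<script>var u = "/esempio/text/javascript";<\/script>';
var candidates = [];
var re = /"(\/[^"]+)"/g;
var m;
while ((m = re.exec(html)) !== null) {
  candidates.push(m[1]); // "/esempio/text/javascript" goes into the crawl queue
}
console.log(candidates); // -> [ "/esempio/text/javascript" ] ... and then a 404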

Meanwhile, it can’t figure out exactly what it ought to be seeing (if it really is that smart), even where its own browser Chrome, using WebKit, sees everything just fine:

[Screenshot: the page as rendered by Chrome/WebKit]

NB: this example is deliberately ambitious and pretentious… because the content can be read with the keyboard arrow keys! But if you’re so smart, we want to see just how smart you really are, all the more so because the slides were shared and linked in plenty of places.

Maybe, rather than chasing links inside the JS, it could try to find something more useful there, instead of swamping the error reports in Webmaster Tools.

Google can read Flash and <object> content

Flash and company have never gotten along with crawlers; that was never a secret, even if the search engine now seems smart enough to read Flash content and follow the links inside it.

We’re glad of that, even though we’re careful not to use Flash ourselves (especially today, with viable alternatives around). But it happens that some scripts using <object> pass files through a parameter, for textual interpretation of the data; files that are of no interest to the user, and certainly not to the search engine… especially when they don’t exist.

Something like this, to clarify:


<!-- the <object> wrapper is reconstructed for context; the original showed only the <param> lines, and the .swf name is a placeholder -->
<object type="application/x-shockwave-flash" data="player.swf" width="550" height="400">
  <param name="FlashVars" value="parametri=txt_swf_boll_new/m15/20120112_h6.txt" />
  <param name="quality" value="high" />
  <param name="bgcolor" value="#F1F6FA" />
  <param name="wmode" value="opaque" />
</object>
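From the bot’s point of view, that FlashVars value looks just like a relative path, so it apparently gets resolved against the page URL like any other link. Again, a sketch of the presumed behaviour, not Googlebot’s actual code (the page URL below is a placeholder):

// The value of the FlashVars parameter from the snippet above.
var flashVars = "parametri=txt_swf_boll_new/m15/20120112_h6.txt";
var path = flashVars.split("=")[1]; // "txt_swf_boll_new/m15/20120112_h6.txt"
// Resolved against, say, http://www.example.com/meteo/ this becomes
// http://www.example.com/meteo/txt_swf_boll_new/m15/20120112_h6.txt
// a URL the server may never have served: one more 404 in the report.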

This too produces a crazy quantity of 404 errors (2,828 of them) in Webmaster Tools, because obviously the path handed over in the parameter is not always a real one.

[Screenshot: the 404 errors reported in Webmaster Tools]

Meditation mode

I hope all of this will improve search. I hope all of this will let us build more complex applications with simpler, more usable UX; but I get the feeling it’s just a kind of content greed on the search engine’s part, and nothing more.

Maybe it’s turning into a “mine is bigger” sort of greed, where the point is to do less and claim more. (At the VI Convegno GT, Enrico Madrigarno said that we sometimes think Google is better than it actually is, and Enrico Altavilla confirmed it here.)

Search quality, perhaps, is dropping to make room for the craving to control every corner of the web… even the least significant corners, and the nonexistent ones.

Personally, just this morning I ran some searches: between personalized results, geolocation and assorted other silliness I couldn’t find a thing, and it took a pile of search operators to dig up what I was looking for. But how many people ever use those?

Are we sure we’re heading in the right direction for the user?

Is it really necessary to “have it that big” if it can’t even find the toilet bowl?

Then again, Google probably lives alone, and if it pisses outside the bowl, nobody ever tells it off.
