I thought someone had already posted this:
The Sitemap Protocol allows you to inform search engine crawlers about URLs on your Web sites that are available for crawling. A Sitemap consists of a list of URLs and may also contain additional information about those URLs, such as when they were last modified, how frequently they change, etc.
Sitemaps are particularly beneficial when users cannot reach all areas of a Web site through a browsable interface — i.e. when certain pages or regions of a site cannot be reached by following links. For example, any site where certain pages are only accessible via a search form would benefit from creating a Sitemap and submitting it to search engines.
This document describes the formats for Sitemap files and also explains where you should post your Sitemap files so that search engines can retrieve them.
Please note that the Sitemap Protocol supplements, but does not replace, the crawl-based mechanisms that search engines already use to discover URLs. By submitting a Sitemap (or Sitemaps) to a search engine, you will help that engine's crawlers to do a better job of crawling your site.
Using this protocol does not guarantee that your Web pages will be included in search indexes. In addition, using this protocol may not influence the way your pages are ranked by a search engine.
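To make the quoted description concrete, here is a minimal sketch of what a Sitemap file looks like. The URL and dates are placeholders, and the namespace shown is the sitemaps.org one; the optional `lastmod`, `changefreq`, and `priority` elements carry the "additional information" the protocol text mentions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- loc is the only required child element of url -->
    <loc>http://www.example.com/</loc>
    <!-- optional hints for crawlers -->
    <lastmod>2005-06-03</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Each page you want crawled gets its own `<url>` entry; the file is then placed on the site (typically at the root) where search engines can fetch it.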
A very useful little tool from Google...
And on Google itself, these are the search results so far:
Here is an example of one comment...
Google Labs just released their new sitemap XML protocol as a beta tonight, looking to extend the capabilities of robots.txt files for site owners. The xml format looks a lot like RSS, letting you be specific on how often parts of your site update, all the pages that make up your site, and how you’d like the pages indexed. I suspect if you’re using a publishing system (especially any sort of blog system), there will soon be modules available to automatically build a sitemap xml file for you.
It’s only been up for a few hours, so it’s hard to tell if this will benefit publishers or Google more. I suspect Google wanted a way to get more information from publishers to make their bots more efficient and help keep the index as up to date as possible. — Matt Haughey