Using the Power of Robots.txt

Once we have a website up and running, we need to make sure that visiting search engines can access all the pages we want them to look at. Sometimes we may want search engines not to index certain parts of the site, or even to exclude a particular search engine from the site altogether. This is where a simple, little two-line text file called robots.txt comes in.

Robots.txt lives in your website's main directory (on Linux systems this is usually your /public_html/ directory) and looks something like the following:

 User-agent: *
 Disallow:

The first line names the bot the rule applies to; the second line controls whether that bot is allowed in, or which parts of the site it is not allowed to visit. If you want to address multiple robots, simply repeat the lines above for each one. For example:

 User-agent: googlebot
 Disallow:

 User-agent: askjeeves
 Disallow: /

This allows Google (user-agent name GoogleBot) to visit every page and directory, while at the same time banning Ask Jeeves from the site entirely. For a reasonably up-to-date list of robot user-agent names, visit http://www.robotstxt.org/wc/active/html/index.html

It is still a good idea to put a robots.txt file on your site even if you want to allow every robot to index every page. It will stop your error logs filling up with entries from search engines trying to request a robots.txt file that doesn't exist. For more on robots.txt, see the full set of resources at http://www.websitesecrets101.com/robotstxt-further-reading-resources.
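The examples above either allow everything (an empty Disallow) or ban a bot outright (Disallow: /). A common middle ground is to block only specific directories; here is a minimal sketch, with placeholder directory names that are not from the article:

 User-agent: *
 Disallow: /cgi-bin/
 Disallow: /private/

To check how a given robots.txt will be interpreted, Python's standard library ships urllib.robotparser, which fetches a site's robots.txt and answers whether a given user-agent may fetch a given URL. A minimal sketch, assuming the rules from the two-bot example above and using example.com as a placeholder domain:

 from urllib import robotparser

 # Point the parser at the site's robots.txt (example.com is a placeholder).
 rp = robotparser.RobotFileParser()
 rp.set_url("http://example.com/robots.txt")
 rp.read()  # download and parse the file

 # Ask whether a named user-agent may fetch a particular URL.
 print(rp.can_fetch("googlebot", "http://example.com/index.html"))  # True: empty Disallow
 print(rp.can_fetch("askjeeves", "http://example.com/index.html"))  # False: Disallow: /

Well-behaved crawlers perform exactly this check before requesting a page, which is why robots.txt works as an opt-out convention rather than an access control: a bot that ignores the file is not actually blocked.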
