From 2e8fe364cb4dc212c397874b894a306876c66d17 Mon Sep 17 00:00:00 2001
From: JAMES Mason
Date: Mon, 30 May 2011 20:54:24 +1200
Subject: [PATCH] Bug 6411 add another example to README.robots

Signed-off-by: Magnus Enger
Signed-off-by: Chris Cormack
---
 README.robots | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.robots b/README.robots
index a08371dd07..6865d0ae26 100644
--- a/README.robots
+++ b/README.robots
@@ -5,13 +5,23 @@ look for the file /robots.txt. If this file is found and has lines that apply
 to them they will do as instructed. A very basic robots.txt follow as an
 example:
 
+-------------------------------------------
 # go away
 User-agent: *
 Disallow: /
+-------------------------------------------
 
 This tells every search engine that cares (User-agent: *) to not index the site
 (Disallow everything past /).
 
+Another slightly more intelligent robots.txt example allows some bot indexing (good for your site in Google, etc.), but also stops your Koha from getting thrashed, by ignoring URLs that cause heavy search load.
+
+-------------------------------------------
+# do some indexing, but don't index search URLs
+User-agent: *
+Disallow: /cgi-bin/koha/opac-search.pl
+-------------------------------------------
+
 If you have installed Koha to /usr/local/koha3 then this file would be placed
 in the directory /usr/local/koha3/opac/htdocs/. This should prevent search
 engines from browsing every biblio record, and every view of each record, on
-- 
2.39.5
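
The effect of the robots.txt rules added by this patch can be checked without a live crawler, using Python's standard-library `urllib.robotparser` (this is an illustrative sketch, not part of the patch; the paths tested are taken from the patch, everything else is a hypothetical check):

```python
from urllib.robotparser import RobotFileParser

# The second robots.txt example from the patch above: allow general
# indexing, but block the search URL that causes heavy load.
ROBOTS_TXT = """\
# do some indexing, but don't index search URLs
User-agent: *
Disallow: /cgi-bin/koha/opac-search.pl
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Disallow is a prefix match, so the search script (and any query
# string under it) is blocked for every well-behaved crawler, while
# other OPAC pages remain crawlable.
print(parser.can_fetch("*", "/cgi-bin/koha/opac-search.pl"))   # False
print(parser.can_fetch("*", "/cgi-bin/koha/opac-detail.pl"))   # True
```

A well-behaved crawler performs exactly this check before each fetch, which is why placing the file at the OPAC document root (so it is served as /robots.txt) is enough to shed the search load.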