Add ident and Identifier-standard to known indexes in C4::Search::getIndexes().
Those indexes can be very useful, for example for the IdRef feature.
Test plan:
- Make sure some records have a field indexed with Identifier-standard (ISBN=1234, for example)
- Perform a search /cgi-bin/koha/opac-search.pl?idx=ident,phr&q=1234
=> you find the record
- Perform a search /cgi-bin/koha/opac-search.pl?q=ident:1234
=> Without patch : you get no results
=> With patch : you find the record
Repeat the same steps for 'Identifier-standard'.
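For illustration, a minimal sketch of what the change to C4::Search::getIndexes() can look like (the surrounding entries are placeholders, not the actual diff):
sub getIndexes {
    my @indexes = (
        # ... existing indexes ...
        'ident',                  # added: alias for the standard identifier index
        'Identifier-standard',    # added: matches the Zebra index name
        # ... existing indexes ...
    );
    return \@indexes;
}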
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
1) Apply patch
2) Make sure that you have a bib that has MARC21 035$a (and possibly also 035$z) populated.
Before step 3: replace all modified Zebra configuration files and restart the Zebra server
3) Rebuild zebra: misc/migration_tools/rebuild_zebra.pl -x -b -z
4) Add the following to the intranetuserjs syspref:
$(document).ready(function(){
    // Add Other Control Number to advanced search
    if (window.location.href.indexOf("catalogue/search.pl") > -1) {
        $(".advsearch").append('<option value="Other-control-number">Other Control Number</option>');
    }
});
5) Do an advanced search, select "Other Control Number" from the search menu, then enter the Other Control Number from 035$a of the bib specified in step 2.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works, no koha-qa errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Don't retrieve prefs if we won't need them
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Set variables ($sysxml, $xslfilename, $lang) if they are not passed to
the subroutine. This happens from catalogue/detail.pl,
opac/opac-shelves.pl, opac/opac-tags.pl and virtualshelves/shelves.pl.
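A minimal sketch of the defaulting idea (helper names and the fallback file are assumptions, not the patch itself):
# Sketch only: fall back to sane values when callers do not pass them.
$lang        ||= C4::Languages::getlanguage();
$sysxml      ||= C4::XSLT::get_xslt_sysprefs();    # assumed helper name
$xslfilename ||= 'MARC21slim2OPACDetail.xsl';      # placeholder; the real code picks the right stylesheet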
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
On search, every single result goes through some XSLT processing.
This includes fetching the relevant sysprefs every single time.
We should do it only once per search.
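A hedged sketch of the intent (helper and variable names are illustrative, not taken from the patch): fetch the XSLT-related sysprefs and language once, then reuse them for every hit.
# Sketch: hoist the syspref/language lookups out of the per-record loop.
my $sysxml = C4::XSLT::get_xslt_sysprefs();    # assumed helper
my $lang   = C4::Languages::getlanguage();
for my $hit (@results) {
    # pass the precomputed values down instead of re-reading sysprefs per record
    $hit->{XSLTResultsRecord} = render_with_xslt( $hit, $sysxml, $lang );    # hypothetical wrapper
}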
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
There are 2 prefs to drive this feature: StaffAuthorisedValueImages and
AuthorisedValueImages. AuthorisedValueImages is not added by
sysprefs.sql and does not appear in updatedatabase.pl, so we could easily
imagine that nobody uses it.
With XSLT enabled, the feature is only visible on a record detail page
at the OPAC, if AuthorisedValueImages is set. Otherwise you need to turn
the XSLT off. In this case you will see the images on the result list
(OPAC+Staff interfaces) and OPAC detail page, but not the Staff detail
page.
This patch suggests removing this feature completely, as it does not
work correctly.
The ability to assign an image to an authorised value is now always
displayed, but the image will only be displayed on the advanced search
if defined.
Test plan:
Confirm that the authorised value images are no longer visible at the
opac and the staff interfaces.
The prefs should have been removed too.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
By default ES returns the facet terms ordered by most used, which makes
sense.
This patch removes the re-sorting done in the scripts (catalogue/search.pl and
opac/opac-search.pl) and moves it to the module.
For Zebra it's now done in C4::Search::getRecords, and there is no
change to expect (still alphabetically).
On the Elasticsearch side, we could imagine letting the library define
the order of the facets. The facet terms are now sorted by most used.
To test this change easily, turn on the displayFacetCount pref.
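For Zebra, a minimal sketch of the alphabetical sort now living in C4::Search::getRecords (the facet hash keys are an assumption):
# Sketch: sort each facet category's entries alphabetically, once, in the module.
@{ $facet_entry->{facets} } =
    sort { lc( $a->{facet_label_value} ) cmp lc( $b->{facet_label_value} ) }
    @{ $facet_entry->{facets} };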
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
This reverts commit cd4905c2969b067476881016d0b03271f0bcc7c8.
This commit caused an error in C4::Search::GetFacets when running in
zebra mode.
Conflicts:
Koha/SearchEngine/Elasticsearch/Search.pm
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
By default ES returns the facet terms ordered by most used, which makes
sense.
This patch removes the re-sorting done in the scripts (catalogue/search.pl and
opac/opac-search.pl) and moves it to the module.
For Zebra it's now done in C4::Search::getRecords, and there is no
change to expect (still alphabetically).
On the Elasticsearch side, we could imagine letting the library define
the order of the facets. The facet terms are now sorted by most used.
To test this change easily, turn on the displayFacetCount pref.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Added the following indexes:
Interest-age-level | 591$a ind1=1
Interest-grade-level | 591$a ind1=2
lexile-number | 591$a ind1=8
Reading-grade-level | 591$a ind1=0
Moved 'lex' from a zebra index to a ccl alias to lexile-number.
Changed the handling of st-numeric in C4/Search.pm to allow for search ranges.
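Illustrative only; with the new ccl alias and the st-numeric change, a range query could look like the example below (the detection code is a sketch, not the patch):
# Example query a user can now issue (lex is the new ccl alias for lexile-number):
#   lex=500-800
# Hypothetical detection of a "from-to" term on an st-numeric index:
if ( $index =~ /st-numeric/ && $term =~ /^(\d+)\s*-\s*(\d+)$/ ) {
    my ( $from, $to ) = ( $1, $2 );
    # the real code expands this into a range-aware Zebra query
}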
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Subroutines should not take $dbh as a parameter.
C4::Biblio::TransformMarcToKoha has it and does not use it.
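For illustration, the shape of the change at call sites (a sketch; the remaining arguments are assumed, not copied from the patch):
# Before: the database handle is passed but never used.
my $fields = TransformMarcToKoha( $dbh, $marc_record, $frameworkcode );
# After: the unused $dbh argument is dropped.
my $fields = TransformMarcToKoha( $marc_record, $frameworkcode );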
Test plan:
Look at the patch and confirm that all occurrences of
TransformMarcToKoha have been modified.
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
perl -p -i -e 's/^.*set the version for version checking.*\n//' **/*.pm
+ manual adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Mainly a
perl -p -i -e 's/^.*3.07.00.049.*\n//' **/*.pm
Then some adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^(use vars .*)\$VERSION\s?(.*)/$1$2/' **/*.pm
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
This patch will add indexes for Date/time-last-modified.
To test:
1. apply patch
2. reindex
3. search for dtlm:DATE and date-time-last-modified:DATE
4. confirm that you get results
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
I confirm Hector's sign-off. A simple Zebra server restart suffices to get
the searches on date-time-last-modified and dtlm working.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
The singleBranchMode system preference does not make sense.
Either the install has only one library defined or several. In both cases,
we can easily determine the behavior to follow.
So the idea of this patch is to replace the fetch of this syspref with a
call to count the number of libraries defined in DB.
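A minimal sketch of the replacement check, assuming a plain count on the branches table (the patch may use a different accessor):
# Sketch: "single branch mode" becomes "exactly one library is defined".
my $dbh = C4::Context->dbh;
my ($library_count)  = $dbh->selectrow_array("SELECT COUNT(*) FROM branches");
my $is_single_branch = ( $library_count == 1 );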
Test plan:
1/ From a fresh Koha install, execute the DB entry to remove the pref.
2/ Define only 1 library
3/ Confirm that Koha behaves the same as before (try to change your
library, look at the facets)
4/ Create another library (or more) and reinsert the pref and set it:
insert into systempreferences (variable, value)
values('singleBranchMode', 1);
5/ Execute the DB entry
You should get a warning message.
6/ Repeat 3.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Does what it says, but will change behaviour for any Koha install that
has 2 branches defined, one circulating, and this preference set.
If that is an acceptable change, we might need to make sure this is noted well in the
release notes.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
If a record has more than 20 items, all the items over 20 will show as
available on the search results even if they are not!
This is a hard coded limit in the Search module. This number should be
configurable.
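A sketch of the intended fix, assuming the preference keeps 20 as its fallback:
# Sketch: read the limit from the new syspref instead of a hard-coded 20.
my $max_items_to_check =
    C4::Context->preference('MaxSearchResultsItemsPerRecordStatusCheck') || 20;
# only the first $max_items_to_check items get the full hold/transit status lookup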
Test Plan:
1) Create a record with more than 20 items
2) Set all the items to waiting holds or in transit
3) Search for results that will include that item
4) Note some say they are available even though they are not
5) Apply this patch
6) Run updatedatabase.pl
7) Set the new system preference MaxSearchResultsItemsPerRecordStatusCheck
to a number larger than the number of items on your record
8) Re-run the search
9) Note that the hold and transit statuses for the items are now correct
Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes code related to stopwords usage. The following methods are removed:
C4::Search->remove_stopwords
C4::Context->stopwords
C4::Context->_new_stopwords
And the buildQuery API was changed (removed the \@removed_stopwords return value).
A follow-up is provided for database changes, to make rebasing easier.
To test:
- Apply this patch
- Do some searches in both intranet and opac interfaces
- Nothing should break
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes C4::Dates from following files in folder C4:
- C4/Members.pm
- C4/Reserves.pm
- C4/Search.pm
- C4/Utils/DataTables.pm
- C4/Utils/DataTables/Members.pm
- C4/VirtualShelves/Page.pm
To test:
- run tests as appropriate,
- have a close look at the code changes
- try to find regressions
http://bugs.koha-community.org/show_bug.cgi?id=14985
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Bug 14985: (followup) Remove eval if dates come from database
This patch removes some evals from date-formatting where the dates come
from the database.
See comments #7 - #9
Additionally, C4/VirtualShelves/Page.pm is removed from the patches (obsolete).
Bug 14985: (followup) Remove C4::Dates from C4/Overdues.pm
This patch removes a stray C4::Dates from C4/Overdues.pm
- To test, go to a patron who has overdues
(Home > Circulation > Checkouts > [Patron])
- Print overdues
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds Zebra indexes for the RDA 264 field.
The new Provider index is added too.
QA comments corrected.
To test:
1) Download RDA records with 264 fields from this attachment <http://bugs.koha-community.org/bugzilla3/attachment.cgi?id=36825>. Import the file and re-index/rebuild zebra. These records contain 260 and 264 fields per record.
2) Do a search with pb:Bethany; two records with the title The guardian will appear. Search with pl:Minneapolis too; the two records will appear.
3) Select one record of both records and delete the 260 field keeping the 264 field and save, rebuild your zebra.
4) Search again with pb:Bethany and just one record will appear. That means 264 is not indexed.
5) Apply patches.
6) Rebuild your zebra but this time all biblio records.
7) Search again with pv:Bethany or Provider:Bethany; this time both records will appear, meaning 264 is indexed. Note that if you search again with pb, only one record appears. This follows the LOC suggestion.
8) Search with copydate:2013; it only returns records with 260 fields, while pv:2013 matches both fields, i.e., 260 and 264.
9) Run the QA test tools.
Sponsored-by: Universidad de El Salvador
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
So, this one is VERY weird, let me try to explain what I have
understood.
Bisecting by running prove t/db_dependent/Search.t, I have found that the
following commit makes the test fail:
commit 0f63f89f66
Bug 14100: Generic solution for language overlay - Item types
The error is
DBI bind_columns: invalid number of arguments: got handle + 0, expected handle + between 1 and -1
Usage: $h->bind_columns(\$var1 [, \$var2, ...]) at /usr/lib/i386-linux-gnu/perl5/5.20/DBI.pm line 2065.
Note that the interface (admin/itemtypes.pl) which calls the same
subroutine with the same parameter (style => 'array') works great.
The problem comes from the change in C4::Search::searchResults; if I
apply only the change done to this subroutine on top of 0f63f89f^1, I
reproduce the issue.
Looking closely at how %itemtypes is built, we could actually call
GetItemTypes with style => 'hash' to get exactly what we want.
The following piece proves it:
use Test::More;
use C4::Koha;
my $i = GetItemTypes;
my $j = GetItemTypes( style => 'array' );
my %itemtypes;
for my $itemtype (@$j) {
    $itemtypes{ $itemtype->{itemtype} } = $itemtype;
}
is_deeply( \%itemtypes, $i );
done_testing;
So let's change the code accordingly and just forget about this last hour...
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch fixes:
- reports/bor_issues_top.pl
- sort order
- adv search and search results
- opac-topissues.pl
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
1/ update the Schema (misc/devel/update_dbix_class_files.pl)
2/ Translate templates for some languages (es-DE, de-DE for instance)
3/ Enable them in the pref (search for 'lang') for the staff interface
4/ Go on the item type admin page (admin/itemtypes.pl)
5/ Edit one
6/ Click on the 'translate for other languages' link
7/ You are now on the interface to translate the item type's description
in the languages you want. So translate some :)
8/ Go back on the item type list view (admin/itemtypes.pl)
9/ You should see the original description (non translated)
10/ Switch the language
11/ You should see the translated description in the correct language.
If the description is not translated, the original description is
displayed.
12/ On the different pages where the item type is displayed, confirm that
the translated description appears.
Think further / Todo:
1/ Update all occurrences of the item type's description (DONE)
2/ Implement for authorised values
3/ Implement for syspref value (at least textarea)
4/ Implement for branch names
5/ Centralize all the translation on a single page in the admin area
...
N/ Implement a webservice to centralize all the translations and give
the ability to sync the item types/authorised values description with
the rest of the world (push and pull).
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Since bug 10803 adds a C4::Search::History module, the
PurgeSearchHistory routine should be moved.
Test plan:
- run misc/cronjobs/cleanup_database.pl with the searchhistory param and
verify behavior is the same as before applying this patch.
- run prove t/Search/History.t
Signed-off-by: Joonas Kylmälä <j.kylmala@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
For an unknown reason, when executed from a test file, the 'SHOW COLUMNS'
statement does not return anything.
We need to retrieve the column list from the DBIx::Class resultset.
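A sketch of retrieving the column list through DBIx::Class (the result source name is only an example):
# Sketch: ask the schema for the columns instead of running SHOW COLUMNS.
my $schema  = Koha::Database->new()->schema();
my @columns = $schema->resultset('Borrower')->result_source->columns;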
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch makes a tiny change in C4::Search::searchResults as to the
handling of not-for-loans. This should make the display of status and
availability of items more consistent in opac and staff.
Additionally, a Place hold link should disappear in the opac, when it is
not possible to place a hold on any item of one biblio.
The following spots need special attention; the display should be
corrected by this patch:
[A] Location column showing number of available items in catalogue/search.
[B] Location column of Cataloging Search (addbooks.pl)
[C] OPAC Search results list in non-XSLT view.
NOTE
The forms opac-MARCdetail and MARCdetail also include an Items table with
column Not for loan. The information in this column might still be somewhat
confusing but is actually correct. The column only contains Not for loan if
the item field is set. So it is empty when only the item type is nfl.
Since a correction here is arguable, I am not including it on this report.
Test plan:
[1] Have at least two item types. Mark one item type (X) as not for loan.
[2] Use at least two biblios with two items each. Mark one item of the first
biblio as not for loan at item level (via item editor).
Change one item of the second biblio to the item type of step 1 (X).
[3] Set pref item-level_itypes==item and set all four xslt prefs (for opac
and staff, results and detail) to default.
[4] Check spots A, B and C as mentioned above. Also check:
[D] OPAC Detail, Holdings table, Status column
[E] Staff Detail, Holdings, Status.
[5] Make all four xslt prefs now empty. Check spots A to E again.
Especially observe C here.
[6] Set pref item-level_itypes==biblio. Change your second biblio to item
type of step 1 (X) in the cataloguing editor (MARC 942c).
Check spots A to E again.
[7] Set all four xslt prefs again to default. Check spots A to E again.
[8] Run the unit test t/db_dependent/Search.t.
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Bug 11202 introduced a new index 'dissertation-information' for
UNIMARC. This patch adds the index also for MARC21 installations.
http://www.loc.gov/marc/bibliographic/bd502.html
To test:
- Apply patch
- Copy files in etc/zebradb changed by this patch to your
corresponding directory (koha-dev..)
- Make sure you have records with 502
- Reindex
- Verify you can search the field contents with
dissertation-information= and
diss=
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Can find by dissertation-information,
No errors
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Most of them were found and fixed using codespell.
Fix also some related grammar issues.
In C4/Serials.pm a variable was renamed to make future codespelling
checks easier.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
http://bugs.koha-community.org/show_bug.cgi?id=14383
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
In the staff search results, if an item is both checked out and lost,
the result will appear as two item lines where one line has the lost
status and the other line has the rest of the item's data.
Test Plan:
1) Check an item out to a patron
2) Mark the item as lost *without* removing the item from the patron's
record, either by using longoverdue.pl or by editing the itemlost
field in the database directly.
3) Perform a search where that item will be in the results
4) Note the improper display of the item's data
5) Apply this patch set
6) Reload the search results
7) Note the item now displays correctly
Signed-off-by: Nick <nick@quecheelibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Nick <nick@quecheelibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Any search limit that includes a ccode will break the search facets.
Test Plan:
1) Run an advanced search using a ccode limit
2) Try using any of the facet links on the left
3) Note they are broken
4) Apply this patch
5) Refresh the results
6) Note the facet links are no longer broken
Note: I have not been able to reproduce this issue on my own test
system, but have noted the problem on at least a dozen Koha servers.
I could not reproduce the bug either, but I saw it on the Bywater Demo (comment #1).
Applied patch and tested facets, no problems found, signing off
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Can not reproduce the problem, but I can also not find a regression.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This enhancement adds the ability to search on all isbn variations when
searching on the isbn index.
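The idea, sketched with Business::ISBN (the helper used by the patch and its exact name are assumptions):
use Business::ISBN;
# Sketch: expand one ISBN into its usual variations so the isbn index matches all of them.
sub _isbn_variations {
    my ($isbn) = @_;
    my $obj = Business::ISBN->new($isbn);
    return ($isbn) unless $obj && $obj->is_valid;
    my @variations;
    for my $alt ( $obj, $obj->as_isbn10, $obj->as_isbn13 ) {
        next unless $alt;
        push @variations, $alt->as_string, $alt->as_string([]);    # with and without hyphens
    }
    my %seen;
    return grep { !$seen{$_}++ } @variations;
}
The search then ORs these variations on the isbn index, which is why the note below about QueryParser and multiple operators matters.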
Test plan:
0/ Don't apply the patch
1/ Create or choose a record with an ISBN that contains dashes.
2/ Try to search for the record using the isbn index with its ISBN without
dashes.
=> It does not work.
3/ Apply the patch, enable the new pref SearchWithISBNVariations and
disable UseQueryParser.
4/ repeat 2 and note that the record is now returned.
Note that this only works if UseQueryParser is disabled.
It looks like QueryParser does not manage more than 1 operator.
See:
QueryParser does not manage more than 1 operator?
http://lists.koha-community.org/pipermail/koha-devel/2014-December/041028.html
and
commit 036f2a50e1
Author: Galen Charlton <gmc@esilibrary.com>
Date: Mon May 5 19:31:00 2014 +0000
Bug 10500: (follow-up) disable AggressiveMatchOnISBN if
UseQueryParser is on
Signed-off-by: Morag Hills <the.invinnysible.one@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Imagine this scenario: we have one record with four items. Two of those
items are checked out, one of those items is a waiting hold, and one of
those items is available. We would expect to see this on the search
results page. Instead, we will see both non-checked out items as
unavailable due to waiting holds.
This is due to a semantic issue in GetReserveStatus.
C4::Search::searchResults uses GetReserveStatus to get the reserve
status of each item, but unlike all other calls to the sub, this one
passes in not only itemnumber, but biblionumber.
When no reserve is found for the available item, the subroutine uses the
biblionumber to grab what is essentially an arbitrary reserve to use for
the status. This makes no sense and this functionality should be
entirely removed from the subroutine so regressions like this will be
prevented in the future.
Test Plan:
1) Create one record with 4 items
a) check two of the items out to patrons
b) set one of the items as a waiting hold
c) leave the fourth item as available
2) Run a search where this record will be in the results list
3) Note that the results list 2 items on loan, two unavailable
4) Apply this patch, reload the search results
5) Note that the results list 1 available, 2 on loan, 1 unavailable
Signed-off-by: John Andrews <jandrews@washoecounty.us>
Signed-off-by: Sheila Kearns <sheila.kearns@state.vt.us>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Note: This is for the staff search result list!
Works as expected.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
The search patch should fix non-latin character searches.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
To reproduce, edit and index a record with a UTF-8 character and search for it
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
See the wiki page for the explanation.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
[New commit on 18 Aug 2014 : rebased, and DOM indexing only]
Issues to fix:
Most 6XX fields may contain a $2 that identifies the system used for indexing. It should not be indexed.
In French libraries, $2 contains "rameau". So searching for books about the music composer "Rameau" retrieves thousands of records!
For some 6XX fields, other subfields should not be indexed, for example dates of persons and families, or addresses.
In the UNIMARC guide, 600$t, 601$t and 602$t are said to exist but to be "not used". I keep them indexed.
Additionally, subject indexing could be improved by using specific indexes for each 6XX where possible.
In ccl.properties:
- su-to, su-geo and su-ut are defined as aliases of Subject.
- a specific index is defined, but not used in record.abs: Subject-name-personal, alias su-na
We can use these indexes and create new specific indexes by using existing bib1 attributes.
We could also index $j, $x, $y, $z subdivisions in specific indexes.
This patch makes the following changes:
1) For all 6XX: not indexing $2 (LCSH, Rameau...), $3 and $5
2) Suppressing the indexing of some specific subfields, depending on the field:
600 : Personal name used as a subject // see Marc21 600
not indexing c (additional elements), f (dates), p (address/affiliation)
602 : Family name used as a subject // see Marc21 600 3X
not indexing f (dates)
616 : Trademark
not indexing c,f
3) For all 6XX: index $j, $x, $y, $z in several indexes in addition to the specific index for their 6XX field.
4) Define in ccl.properties some specific indexes:
Subject-name-conference 1=1073 => alias su-conf
Subject-name-corporate 1=1074 => alias su-corp
Subject-genre-form 1=1075 => alias su-genre and su-form
Subject-geographical 1=1076 => alias su-geo
Subject-chronological 1=1077 => alias su-chrono
Subject-title 1=1078 => alias su-ut and su-ti
Subject-topical 1=1079 => alias su-to
5) Adding new aliases in Search.pm:
su-chrono, su-form, su-genre, su-corp, su-conf, su-ti
6) Using these new indexes for:
600 : Subject and Subject-Personal-Name ; all subfields except subdivisions in Personal-name
601 : Subject, Subject-name-conference and Subject-name-corporate and Subject-name-conf ; all subfields except subdivisions in Corporate-name and Conference-name
602 : same as 600 but could be improved later
604 : Subject and Subject-title ; $a in Subject-Personal-Name ; all subfields except subdivisions in Name-and-Title
605 : Subject and Subject-title
606 : Subject and Subject-topical
607 : Subject and Subject-geographical ; all subfields except subdivisions in Name-geographic
608 : Subject and Subject-genre-form
To test :
A. In a UNIMARC-DOM indexing environment
1) Apply the patch
2) Rebuild zebra
3) Create a record A with some values in critical fields, for example:
- the string "test9828" in 600$c 600$f 600$p, 602$f, 616$c, 616$f, 606$2,600$2
- the string "subform" in 600$j
4) Create a record B with the string "subgeo" in 606$y
5) Create a record C with the string "subdate" in 606$z
6) try to search "su:test9828". You should have no results
7) try to search "su-genre:subform". You should have 1 result : record A
8) try to search "su-geo:subgeo". You should have 1 result : record B
9) try to search "su-chrono:subdate". You should have 1 result : record C
10) On existing records, try the su-ut, su-to, su-na, su-form, su-corp and su-geo indexes, and see if the results are relevant
Indexing of subjects could maybe be improved later
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
All seems to work as expected. I am not super-familiar with UNIMARC, but I wonder if the subdivisions might be useful in su-corp and su-conf (e.g. France-Gendarmerie / Staatsbibliothek-Berlin)
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
At the 23 July development meeting it was decided to formally deprecate
the GRS-1 indexing mode for Zebra. This patch makes the code fall back to DOM
in the remaining places. No behaviour change should be noticed, as DOM
has been the default for a while.
Regards
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes tests and QA script.
Also checked running Makefile.PL
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch makes _get_facet_from_result_set rely on a new syspref (FacetsMaxCount)
to set the max facets to show for each facet category. It defaults to 20 if
the syspref is absent or empty.
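A sketch of the lookup inside _get_facet_from_result_set (the fallback of 20 comes from the description above):
# Sketch: cap how many facet values we pull for each facet category.
my $facets_maxcount = C4::Context->preference('FacetsMaxCount') || 20;
# ... then request/keep at most $facets_maxcount terms from the Zebra facet data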
To test:
- Have a search with lots of facet results (with some category showing the "See more" link).
- Jump to "See more", notice it shows more than 20 facet values.
- Apply the patch, reload the page.
=> SUCCESS: It only shows 20 (the default hardcoded value)
- Change the FacetsMaxCount syspref to other value (e.g. 15 or 100).
- Reload
=> SUCCESS: it shows the expected amount.
- Sign off :-D
Regards
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Test plan completed successfully
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Passes tests and QA script, works as described.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch fixes the "Publication date" and "Acquisition date"
searches when using non-QueryParser and QueryParser searches.
It adds structure attributes to the template, which is consistent
with how phrase searching is currently handled.
It removes unnecessary code from Search.pm, adds some necessary
code which is consistent with existing code, and adds a lot of
explanatory comments.
_TEST PLAN_
Before applying:
0) Turn off QueryParser
1) Try a "Publication date" or "Acquisition date" search from the
staff client advanced search.
2) Note that even though the description on the result page makes
it seem like you're doing an index-specific search, you're actually
doing a keyword search. You can verify this by checking the 008
from positions 7 to 10 for "Publication date" or "Date accessioned"
on items for "Acquisition date".
3) Turn on QueryParser
4) Try doing the searches from Step 1.
5) A "Publication date" search should probably produce zero results
After applying patch:
6) Keep QueryParser on
7) Try doing the searches from Step 1.
8) Notice that you're actually getting results consistent with
your search (ie the 008/7-10 shows the date you searched for,
and there is a "Date accessioned" in items which matches your
search)
9) Turn off QueryParser
10) Note that your results are exactly the same as step 8
(N.B. this is because QueryParser is falling back to non-QP mode
instead of producing a bad query.)
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch adds a variable to koha-conf.xml controlling the use of Zebra facets.
Usage:
- use_zebra_facets = 1 | 0
Zebra facets work only on DOM.
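A sketch of how the flag could be honoured at runtime (the entry name comes from this patch; C4::Context->config is the usual way to read koha-conf.xml):
# Sketch: branch on the new koha-conf.xml entry.
if ( C4::Context->config('use_zebra_facets') ) {
    # ask Zebra for its native facets (DOM only)
}
else {
    # fall back to building facets from the returned records
}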
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch adds the following routines to C4::Search
- GetFacets
This is a wrapper routine, that given a ZOOM::ResultSet, extracts
the relevant facets. To do the job, it uses the internal functions:
_get_facets_from_records and _get_facets_from_zebra. The choice is made by
querying the indexing mode: grs1 will use the records, and dom will use Zebra's facets.
- _get_facets_from_records
Just refactoring the already existing main loop from getRecords into a function.
- _get_facets_from_zebra
Given a result set, loop through all defined facets in C4::Koha::getFacets
and call _get_facet_from_result_set for each, to build the facets
information. It retrieves the facet values from Zebra's native facets.
- _get_facet_from_result_set
Given a result set and a facet index name, retrieve the facets
for the given index, and build the result for rendering.
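A skeleton of the dispatch described above (the koha-conf.xml key used to detect the indexing mode is an assumption):
# Skeleton only; the helper bodies are the routines described above.
sub GetFacets {
    my ($result_set) = @_;
    my $index_mode = C4::Context->config('zebra_bib_index_mode') // 'dom';    # assumed key
    return $index_mode eq 'grs1'
        ? _get_facets_from_records($result_set)
        : _get_facets_from_zebra($result_set);
}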
To test this preliminary work:
- Apply the patches, install on DOM, using MARC21, NORMARC and UNIMARC.
- Reindex some DB with lots of records.
- Check facets work.
Note: UNIMARC is the only dialect that has more than one subfield (concatenated)
for facet values, so it is better to test on UNIMARC. The approach leaves room
for re-thinking facets in MARC21/NORMARC, but it is outside of the scope of the bug
(e.g. we could have more author facets)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: David Cook <dcook@prosentient.com.au>
Seems to work with DOM and MARC21.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>