Test Plan:
1) Enable EasyAnalytics
2) Disable QueryAutoTruncate
3) Create an analytic record, add some host items to it
4) Browse to the analytics tab for the record
5) Click the link in the 'used in' column of the table
6) Note that there are no search results
7) Apply this patch
8) Reload the page, now you get the analytic record!
Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The same pattern was used in other files; this patch fixes those occurrences as well.
Signed-off-by: Jesse Maseto <jesse@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
TEST PLAN
---------
1) Run these commands:
git checkout master
git pull
perldoc C4::Search
2) look for parseQuery
-- NOTE: The sample code provided below this heading has
the wrong function name!
3) Run these commands:
git checkout -b bug_19971 origin/master
perldoc C4::Search
4) look for parseQuery
-- NOTE: The wrong function name is corrected.
5) Run koha qa test tools
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Change parameters to a hashref.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Looks good to me.
Two calls in migration_tools/22_to_30 are still in the old style.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
There was a bug that meant a very large offset in the search params
would cause the search script to run forever (or long enough to crash
the machine).
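A minimal sketch of the kind of guard that avoids this, assuming the offset is
sanitised in the search script (variable names are illustrative, not the exact
patch):
use CGI;
my $cgi    = CGI->new;
my $offset = $cgi->param('offset') || 0;
# Reject negative or non-numeric offsets before they reach the search engine.
$offset = 0 unless $offset =~ /^\d+$/;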
To test
1/ Have sudo top ready so you can kill the process before it causes
your machine to OOM
2/ Hit a page like yourdomain.com/cgi-bin/koha/opac-search.pl?q=1&offset=-9999999999999999999
3/ Notice the process runs for a long time
4/ Kill the process
5/ Apply the patch
6/ Hit the page again, notice that it loads (the offset is set to zero)
7/ Do the same to search in the staff client
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Lari Taskula <lari.taskula@jns.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Intranet search results display the due date from items.onloan.
This should use the TT date plugin.
Test plan :
- set syspref dateformat not on yyyy-mm-dd, for example dd/mm/yyyy
- checkout an item
- at intranet, perform a search where you see the item
=> You must see : "date due : dd/mm/yyyy"
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch allows you to use the "qualifier,qualifier" syntax
in the Record Matching Rules "Search Index" when using the QueryParser.
While QueryParser doesn't support this syntax, it will now fallback
correctly to non-QueryParser functionality. Without the patch, your search
will just fail silently.
This patch also adds a "skip_normalize" option to C4::Search::SimpleSearch(),
and uses the option during C4::Matcher::get_matches. This prevents
the s/:/=/g and s/=/:/g normalization. This normalization is heavy-handed,
and while it is necessary sometimes to generate a valid CCL query or
QueryParser query, C4::Matcher::get_matches() already produces a valid
CCL query, so we don't need to do this normalization.
Additionally, this normalization causes problems when you use a
Zebra register which isn't normalized: namely the "u" register.
Strings are stored "as is", so http://localhost/koha/resource/1 is
stored as is during indexing. When you search, you need to pass
the exact same thing through the query to get a match. Using
http=//localhost/koha/resource/1 in your query will yield zero results.
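For illustration, a hedged sketch of a call using the new option; the
positional argument order here follows the usual SimpleSearch signature and
should be treated as an assumption:
use C4::Search;
my ( $error, $marc_results, $total_hits ) = C4::Search::SimpleSearch(
    'id-other,st-urx="http://koha-community.org/test"',
    0, 10, ['biblioserver'],
    skip_normalize => 1    # keep the query as-is, no s/:/=/ munging
);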
_TEST PLAN_
0) Apply patch
1) Create a Record Matching Rule in Koha with the following details:
Matching rule code: TEST
Description: Test
Match threshold: 100
Record type: Bibliographic
Match point 1:
Search index: id-other,st-urx
Score: 100
Tag: 024
Subfields: a
Normalization rule: None
2) Create a record in Koha with an indexable URI
2a) Default framework
2b) 024 $a http://koha-community.org/test $2 uri
2c) 040 $c test
2d) 245 $a This is a test record
2e) 942 $c Books
2f) Save (save again if cautioned about missing fields as these should autofill)
3) Do a full re-index of Zebra
4) Download your record from Koha as a .mrc file (i.e. ISO MARC / binary MARC)
5) Go to "Stage MARC records for import"
5a) Upload your .mrc file.
5b) Change your "Record matching rule" to "Test"
5c) Click Stage for import
6) It should say "1 records with at least one match in catalog per matching rule 'Test'".
NOTE: For completeness, you can go through this process on a clean master branch,
and note that it will say '0 records with at least one match in catalog per matching rule "TEST"'
Signed-off-by: Alex Buckley <alexbuckley@catalyst.net.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
If you are not logged in and the search results contain lost items,
you will get:
Can't call method "category" on an undefined value at
/home/liz/koha-src/koha/C4/Search.pm line 2091.
This is because bug 17556 assumed that $userenv was not defined when the
user is logged out. Actually it is defined, but with undefined or
empty-string values.
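A minimal sketch of the kind of defensive check this implies (field and
variable names are illustrative, not necessarily the exact patch):
use C4::Context;
use Koha::Patrons;
my $hidelostitems = 0;
my $userenv = C4::Context->userenv;
if ( $userenv && $userenv->{number} ) {    # a logged-in user has a borrowernumber
    my $patron = Koha::Patrons->find( $userenv->{number} );
    $hidelostitems = $patron->category->hidelostitems if $patron;
}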
Test plan:
Do a search in the opac that would turn up a whole list of results (and
not just that one) with the lost item included.
=> Without this patch you should get an error
=> With this patch applied you should see the search results
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
The subroutine C4::Members::GetHideLostItemsPreference can easily be
replaced with Koha::Patrons->find(42)->category->hidelostitems
Test plan:
Create 2 patron categories, 1 with "Lost items in staff client" set to
"shown" and another one to "Hidden by default"
Create 2 patrons using them
On the result search page, the detail page of a record, the item list
page and the page to place a hold, make sure the lost items are
shown/hidden as expected
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The calls
output_pref({ dt => dt_from_string( $date ) })
are wrong and should be replaced with
output_pref({ str => $date })
for better error handling.
Here we fix the problem of items.onloan when searching.
Test plan:
- Set items.onloan=0000-00-00 (UPDATE items SET onloan='0000-00-00')
This can come from old data or bad migration
- Execute a search
=> Without this patch you get
Can't locate object method "ymd" via package "dateonly" (perhaps you forgot to load "dateonly"?) at /home/vagrant/kohaclone/Koha/DateUtils.pm line 225.
=> With this patch you won't get the error
Signed-off-by: Alex Buckley <alexbuckley@catalyst.net.nz>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Then we need to remove the "available" part from the query.
They are really awkward patches...
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When a ccl query is used, the buildQuery subroutine does not handle
the available limit (which is not an index).
This available limit is handled later in the subroutine.
This affects the author links on the detail page, for instance (an=xx).
A much better solution would be to keep an 'available' zebra index up-to-date.
Test plan:
(OPAC or staff interface, it does not matter)
- Launch a search, click on a result and then on an author link to
launch another query (an:xx)
- Limit to available items without the 'facet'
=> Without this patch you won't get any results
=> With this patch applied you should get relevant result (regarding the
known bugs 16970, 13715, 13658, 5463, etc.)
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
OK, I am silly: we needed to use the cache mechanism for
search_by_koha_field, not find_by_koha_field...
Let's create another subroutine
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch replaces the call to C4::Koha::GetKohaAuthorisedValues with
Koha::AuthorisedValues->search_by_koha_field
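A sketch of the intended call shape; the exact parameters accepted by
search_by_koha_field are an assumption here:
use Koha::AuthorisedValues;
my $frameworkcode = '';    # default framework, illustrative
# Old style (C4::Koha), roughly:
#   my $descriptions = GetKohaAuthorisedValues( 'items.location', $frameworkcode );
my @avs = Koha::AuthorisedValues->search_by_koha_field(
    { frameworkcode => $frameworkcode, kohafield => 'items.location' } );
my %descriptions = map { $_->authorised_value => $_->lib } @avs;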
Test plan:
AV descriptions should be displayed on the following pages:
- XSLT view - location and ccode
- Bibliographic detail, moredetail and OPAC pages - location, ccode, copynumber
- returns - location
- opac-basket - ccode, location
- The 3 reports: catalogue_stats.pl, issues_stats.pl and
reserves_stats.pl - location, ccode
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
GetAuthValCode did not return anything if the authorised_value column
was not defined. Our new calls to Koha::MarcSubfieldStructures->search
should behave the same way.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The subroutine C4::Koha::GetAuthValCode returned the authorised value
category for a given kohafield.
This can easily be achieved using a new Koha::AuthorisedValues->search_by_koha_field
method, which mimics search_by_marc_field.
Test plan:
Confirm that the description is correctly displayed on the following
pages:
- detail and moredetail of a bibliographic page (itemlost, damaged, materials)
- Set AcqCreateItem=ordering and receiving items.
The description for notforloan, restricted, location, ccode, etc.
field should be displayed.
- Items search form
- On the checkout list from the circulation.pl and returns.pl
pages, the description for "materials" should be displayed
Note that GetKohaAuthorisedValuesMapping is going to be removed on bug
17251.
Followed test plan, works as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The subroutine C4::Koha::GetKohaAuthorisedValueLib just retrieves a description
(lib) for a given authorised value.
We can easily replace it using:
Koha::AuthorisedValues->search({ category => $cat, authorised_value => $av })->lib
or
Koha::AuthorisedValues->search({ category => $cat, authorised_value => $av })->opac_description
Test plan:
- On the detail page of a bibliographic record, the description for notforloan,
restricted and stack (?) should be correctly displayed
- View a shelf, the location (LOC) description should be displayed
- On the search result page, the location description should be displayed in the
facets
- Set AcqCreateItem=ordering and receiving items.
The description for notforloan, restricted, location, ccode, etc. field
should be displayed.
- When creating items in the acquisition module, the dropdown lists for
fields linked to AVs should display the AVs' descriptions
- On the transfers page, the description of the location should be
displayed.
- On the checkout list from the circulation.pl and returns.pl pages, the
description for "materials" should be displayed
- Fill some OPAC_SUG AV and create a suggestion, the reason dropdown
list should display the description of OPAC_SUG
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
This subroutine is not used and can be removed
Test plan:
git grep SearchAcquisitions
should not return any results.
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
[1] C4/Search
A call to Koha::Libraries is added to routine pazGetRecords, but the
results of that call are not used. So removing it again.
[2] catalogue/itemsearch.pl
Although A=>B=>C=>D works, we'd better use A=>B, C=>D here.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
[1] Acquisition.pm
The lines filling $row in GetBasketGroupAsCSV may have side-effects when
the library name is not found. This change restores the former behavior;
it is just theoretically safer.
Note that it also contained a typo: $row->{deliveryplace} should have been
$row->{$place}.
[2] Auth.pm
checkauth: $branchname = Koha::Libraries->find($branchcode)->branchname;
Should normally be fine, but I rather have an empty string here than
crashing on "Can't call method branchname on undefined value".
Same for sub check_api_auth.
Note that this holds for a larger number of calls, but I am adding a check
here because it is checkauth.
Also removed a duplicate use Koha::Libraries statement.
[3] Search.pm
Also removed a duplicate use statement for Libraries.
[4] svc/holds
Added an (explicit) use statement for Koha::Libraries.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This is the fourth and last patch set to remove C4::Branch.
The real purpose of this patch is to standardise and refactor some code
which is related to the libraries selection/display.
Its unconfessed purpose is to remove the C4::Branch package.
Before this patch set, only 6 subroutines still existed in the C4::Branch
package:
- GetBranchName
- GetBranchesLoop
- mybranch
- onlymine
- GetBranches
- GetBranch
GetBranchName basically returns the branchname for a given branchcode.
The branchname is only used for display purposes and we don't need to
retrieve it in package or pl scripts (with a few exceptions).
We have a `Branches` template plugin with a `GetName` method which does
exactly this job.
To achieve this removal, we will use this template plugin and delete the
GetBranchName from pl and pm files.
The `Branches.all()` will now select the library of the logged in user
if no `selected` parameter has been passed.
This new behavior could cause regressions, for instance there are some
places where we do not want an option preselected (batch item
modification for instance), keep that in mind when testing.
GetBranchesLoop took two parameters: $branch and $onlymine.
The first one was used to set a "selected" flag, for a display purpose:
select an option in the libraries dropdown lists.
The second one was useless: if not passed or set to 0, the
`C4::Branch::onlymine` subroutine was called.
This onlymine flag was used to determine whether the logged-in user was able
to see other libraries' information.
A patron can see the information from other libraries if IndependentBranches
is not set OR if they have the superlibrarian permission.
Prior to this patch set, the "onlymine test" was done on different
places (neworderempty.pl, additem.pl, holidays.pl, etc.), including the
Branches TT plugin. In this patch set, this test is only done on one
place (C4::Context::only_my_library, code moved from
C4::Branch::onlymine).
To accomplish the same job as this subroutine, we just need to call the
`Branches.all()` method from the `Branches` TT plugin. It already
accepts a `selected` parameter to set a flag on the option to select.
To avoid the repetitive
[% IF selected %]<option selected="selected">[% ELSE %]<option>[% END %]
pattern, a new `html_helpers` TT include file has been created, it
defines an `options_for_libraries` block, which takes a `selected`
parameter. We could imagine to use this include file for other
selects.
The `mybranch` and `onlymine` subroutines of the C4::Branch package have
been moved to C4::Context. onlymine has been renamed to
only_my_library. There are only 4 occurrences of it, against 11 before
this patch set.
These 2 subroutines are Context-centric and it makes sense to put them
in `C4::Context` (at least it's the least worst place!)
GetBranches is the tricky part of this patch set: It retrieves all the
libraries, independently of the value of IndependentBranches.
To keep the same behavior as the existing calls of `Branches.all()`, I have
added an `unfiltered` parameter. If set, `Branches.all()` will call
a usual Koha::Libraries->search method, otherwise
Koha::Libraries->search_filtered will be called. This new method will
check if the logged in user is allowed to see other libraries or only
its library.
Note that this `GetBranches` subroutine also created a `category` key:
it provided the list of groups (of libraries) this library belonged to.
Thanks to a previous patch set (bug 15295), this value was no longer
used (I may have missed something!).
Note that the only use of `GetBranch` was buggy (see bug 15746).
Test plan (for the whole patch set):
The best way to test this whole patch set is to test with 2 instances: 1
with the patch set applied, 1 using master, to be sure there is no
regression.
It would be good to test the same with `IndependentBranches` set and
without it.
No difference should be found.
The tester must focus on the library dropdowns on as many forms as
possible.
You will notice changes in the order of the options: the libraries will
now be ordered by branchname (instead of branchcode in some places).
Special attention should be given to the following pages:
- acqui/neworderempty.pl
- catalogue/search.pl
- members/members-home.pl (header?)
- opac/opac-topissues.pl
- tools/holidays.pl
- admin/branch_transfer_limits.pl
- admin/item_circulation_alerts.pl
- rotating_collections/transferCollection.pl
- suggestion/suggestion.pl
- tools/export.pl
Notes for QA:
- There are 2 FIXMEs in the patch set. I have kept the existing behavior,
but I am not sure it's the right one. Feel free to open a bug report and
I will file a patch if you think it's not correct. Otherwise, remove the
FIXME lines in a follow-up patch.
- The whole patch set is huge and makes a lot of changes.
But it finally will tremendously reduce the number of lines:
716 insertions for 1910 deletions
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Add ident and Identifier-standard to known indexes in C4::Search::getIndexes().
Those indexes can be very useful, for example for IdRef feature.
Test plan :
- Make sure some records have a field indexed with Identifier-standard, ISBN=1234 for example
- Perform a search /cgi-bin/koha/opac-search.pl?idx=ident,phr&q=1234
=> you find the record
- Perform a search /cgi-bin/koha/opac-search.pl?q=ident:1234
=> Without patch : you get no results
=> With patch : you find the record
Idem for 'Identifier-standard'
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
1) Apply patch
2) Make sure that you have a bib that has MARC21 035$a (and possibly also 035$z) populated.
pre 3) Replace all modified zebra files and restart zebra server
3) Rebuild zebra: misc/migration_tools/rebuild_zebra.pl -x -b -z
4) Add the following to the intranetuserjs syspref:
$(document).ready(function(){
    // Add Other Control Number to advanced search
    if (window.location.href.indexOf("catalogue/search.pl") > -1) {
        $(".advsearch").append('<option value="Other-control-number">Other Control Number</option>');
    }
});
5) Do an advanced search, select "Other Control Number" from the search menu, then enter the Other Control Number from 035$a for the bib specified in step 2.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works, no koha-qa errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Don't retrieve prefs if we won't need them
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Set variables ($sysxml, $xslfilename, $lang) if they are not passed to
the subroutine. This happens from catalogue/detail.pl,
opac/opac-shelves.pl, opac/opac-tags.pl and virtualshelves/shelves.pl.
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
On search, every single result goes through some XSLT processing.
This includes fetching the relevant sysprefs every single time.
We should do it only once per search.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
There are 2 prefs to drive this feature: StaffAuthorisedValueImages and
AuthorisedValueImages. AuthorisedValueImages is not added by
sysprefs.sql and does not appear in updatedatabase.pl, we could easily
imagine that nobody uses it.
With XSLT enabled, the feature is only visible on a record detail page
at the OPAC, if AuthorisedValueImages is set. Otherwise you need to turn
the XSLT off. In this case you will see the images on the result list
(OPAC+Staff interfaces) and OPAC detail page, but not the Staff detail
page.
This patch removes this feature completely, as it does not
work correctly.
The ability to assign an image to an authorised value is now always
displayed, but the image will only be displayed on the advanced search
if defined.
Test plan:
Confirm that the authorised value images are no longer visible at the
opac and the staff interfaces.
The prefs should have been removed too.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
By default ES returns the facet terms ordered by most used, which makes
sense.
This patch removes the re-sorting done in the scripts (catalogue/search.pl and
opac/opac-search.pl) and moves it to the module.
For Zebra it's now done in C4::Search::getRecords, and there is no
change to expect (still alphabetically).
On the Elasticsearch side, we could imagine letting the library define
the order of the facets. The facet terms are now sorted by most used.
To test easily this change, turn on the displayFacetCount pref.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
This reverts commit cd4905c2969b067476881016d0b03271f0bcc7c8.
This commit caused an error in C4::Search::GetFacets when running in
zebra mode.
Conflicts:
Koha/SearchEngine/Elasticsearch/Search.pm
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
By default ES returns the facet terms ordered by most used, which makes
sense.
This patch removes the re-sorting done in the scripts (catalogue/search.pl and
opac/opac-search.pl) and moves it to the module.
For Zebra it's now done in C4::Search::getRecords, and there is no
change to expect (still alphabetically).
On the Elasticsearch side, we could imagine letting the library define
the order of the facets. The facet terms are now sorted by most used.
To test easily this change, turn on the displayFacetCount pref.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Added the following indexes:
Interest-age-level | 591$a ind1=1
Interest-grade-level | 591$a ind1=2
lexile-number | 591$a ind1=8
Reading-grade-level | 591$a ind1=0
Moved 'lex' from a zebra index to a ccl alias to lexile-number.
Changed the handling of st-numeric in C4/Search.pm to allow for search ranges.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Subroutines should not take $dbh as a parameter.
C4::Biblio::TransformMarcToKoha has it and does not use it.
Test plan:
Look at the patch and confirm that all occurrences of
TransformMarcToKoha have been modified.
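A sketch of the call shape before and after; the surrounding variables are
illustrative:
use C4::Biblio qw( TransformMarcToKoha );
use MARC::Record;
my $marc_record   = MARC::Record->new;
my $frameworkcode = '';
# Before: the first argument was a database handle the sub never used.
#   my $fields = TransformMarcToKoha( $dbh, $marc_record, $frameworkcode );
# After:
my $fields = TransformMarcToKoha( $marc_record, $frameworkcode );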
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
perl -p -i -e 's/^.*set the version for version checking.*\n//' **/*.pm
+ manual adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Mainly a
perl -p -i -e 's/^.*3.07.00.049.*\n//' **/*.pm
Then some adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^(use vars .*)\$VERSION\s?(.*)/$1$2/' **/*.pm
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
This patch will add indexes for Date/time-last-modified.
To test:
1. apply patch
2. reindex
3. search for dtlm:DATE and date-time-last-modified:DATE
4. confirm that you get results
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
I confirm Hector's sign-off. A simple Zebra server restart suffices to get
the searches on date-time-last-modified and dtlm working.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
The singleBranchMode system preference does not make sense.
Either the install has only 1 library defined or several. In both cases,
we can easily guess the behavior to follow.
So the idea of this patch is to replace the fetch of this syspref with a
call to count the number of libraries defined in DB.
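A sketch of the idea, not necessarily the exact patch:
use Koha::Libraries;
# Old check:
#   if ( C4::Context->preference('singleBranchMode') ) { ... }
# New check, counting the libraries actually defined:
if ( Koha::Libraries->search->count == 1 ) {
    # behave as a single-branch installation
}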
Test plan:
1/ From a fresh Koha install, execute the DB entry to remove the pref.
2/ Define only 1 library
3/ Confirm that Koha behaves the same as before (try to change your
library, look at the facets)
4/ Create another library (or more) and reinsert the pref and set it:
insert into systempreferences (variable, value)
values('singleBranchMode', 1);
5/ Execute the DB entry
You should get a warning message.
6/ Repeat 3.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Does what it says, but will change behaviour for any Koha install that
has 2 branches defined, One circulation, and this preference set.
If that is an acceptable change, we might need to make sure this is noted well in the
release notes.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
If a record has more than 20 items, all the items over 20 will show as
available on the search results even if they are not!
This is a hard coded limit in the Search module. This number should be
configurable.
Test Plan:
1) Create a record with more than 20 items
2) Set all the items to waiting holds or in transit
3) Search for results that will include that item
4) Note some say they are available even though they are not
5) Apply this patch
6) Run updatedatabase.pl
7) Set the new system preference MaxSearchResultsItemsPerRecordStatusCheck
to a number larger than the number of items on your record
8) Re-run the search
9) Note that the hold and transit statuses for the items are now correct
Signed-off-by: Andreas Hedström Mace <andreas.hedstrom.mace@sub.su.se>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes code related to stopwords usage. The following methods are removed:
C4::Search->remove_stopwords
C4::Context->stopwords
C4::Context->_new_stopwords
And the buildQuery API was changed (removed the \@removed_stopwords return value).
A follow-up is provided for database changes, to make rebasing easier.
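For callers, this simply means one fewer value in the returned list. A sketch
of the call shape after the change; check the exact variable list against your
buildQuery:
use C4::Search;
my ( @operators, @operands, @indexes, @limits, @sort_by );    # illustrative, empty
my ( $scan, $lang );
my ( $error, $query, $simple_query, $query_cgi, $query_desc,
     $limit, $limit_cgi, $limit_desc, $query_type )
    = C4::Search::buildQuery( \@operators, \@operands, \@indexes, \@limits,
                              \@sort_by, $scan, $lang );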
To test:
- Apply this patch
- Do some searches in both intranet and opac interfaces
- Nothing should break
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes C4::Dates from following files in folder C4:
- C4/Members.pm
- C4/Reserves.pm
- C4/Search.pm
- C4/Utils/DataTables.pm
- C4/Utils/DataTables/Members.pm
- C4/VirtualShelves/Page.pm
To test:
- run tests as appropriate,
- have a close look at the code changes
- try to find regressions
http://bugs.koha-community.org/show_bug.cgi?id=14985
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Bug 14985: (followup) Remove eval if dates come from database
This patch removes some evals from date-formatting where the dates come
from the database.
See comments #7 - #9
Additionaly, C4/VirtualShelves/Page.pm is removed from the patches (obsolete).
Bug 14985: (followup) Remove C4::Dates from C4/Overdues.pm
This patch removes a stray C4::Dates from C4/Overdues.pm
- To test go to a patron who has overdues
(Home > Circulation > Checkouts > [Patron])
- Print overdues
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds Zebra indexes to the RDA 264 field.
The new Provider index is added too.
QA comments corrected.
To test:
1) Download RDA records with 264 fields from this attachment <http://bugs.koha-community.org/bugzilla3/attachment.cgi?id=36825>. Import the file and re-index/rebuild Zebra. These records contain both 260 and 264 fields.
2) Do a search with pb:Bethany; two records with the title The guardian will appear. Search with pl:Minneapolis too; the two records will appear.
3) Select one of the two records, delete the 260 field keeping the 264 field, save, and rebuild your Zebra index.
4) Search again with pb:Bethany and just one record will appear. That means 264 is not indexed.
5) Apply patches.
6) Rebuild your Zebra index, this time for all biblio records.
7) Search again with pv:Bethany or Provider:Bethany; this time the two records will appear, so 264 is indexed. Note that if you search again with pb, only one record appears. This follows the LOC suggestion.
8) Search with copydate:2013; only records with 260 fields are returned, while pv:2013 shows both, i.e. 260 and 264.
9) Run the QA test tools.
Sponsored-by: Universidad de El Salvador
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
So, this one is VERY weird, let me try to explain what I have
understood.
Bisecting with prove t/db_dependent/Search.t, I found that the
following commit makes the test fail:
commit 0f63f89f66
Bug 14100: Generic solution for language overlay - Item types
The error is
DBI bind_columns: invalid number of arguments: got handle + 0, expected handle + between 1 and -1
Usage: $h->bind_columns(\$var1 [, \$var2, ...]) at /usr/lib/i386-linux-gnu/perl5/5.20/DBI.pm line 2065.
Note that the interface (admin/itemtypes.pl) which calls the same
subroutine with the same parameter (style => 'array') works great.
The problem comes from the change in C4::Search::searchResults, if I
only apply the change done to this subroutine on 0f63f89f^1, I reproduce
the issue.
Looking closely at how %itemtypes is built, we could actually call
GetItemTypes with the style => 'hash' to get exactly what we want.
The following piece proves it:
use Test::More;
use C4::Koha;
my $i = GetItemTypes;
my $j = GetItemTypes( style => 'array' );
my %itemtypes;
for my $itemtype ( @$j ) {
    $itemtypes{ $itemtype->{itemtype} } = $itemtype;
}
is_deeply( \%itemtypes, $i );
So, changing the code accordingly and just forgetting this last hour...
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch fixes:
- reports/bor_issues_top.pl
- sort order
- adv search and search results
- opac-topissues.pl
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
1/ update the Schema (misc/devel/update_dbix_class_files.pl)
2/ Translate templates for some languages (es-DE, de-DE for instance)
3/ Enable them in the pref (search for 'lang') for the staff interface
4/ Go on the item type admin page (admin/itemtypes.pl)
5/ Edit one
6/ Click on the 'translate for other languages' link
7/ You are now on the interface to translate the item type's description
in the languages you want. So translate some :)
8/ Go back on the item type list view (admin/itemtypes.pl)
9/ You should see the original description (non translated)
10/ Switch the language
11/ You should see the translated description in the correct language.
If the description is non translated, the original description is
displayed.
12/ On the different page where the item type is displayed, confirm that
the translated description appears.
Think further / Todo:
1/ Update all occurrences of the item type's description (DONE)
2/ Implement for authorised values
3/ Implement for syspref value (at least textarea)
4/ Implement for branch names
5/ Centralize all the translation on a single page in the admin area
...
N/ Implement a webservice to centralize all the translations and give
the ability to sync the item types/authorised values description with
the rest of the world (push and pull).
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Since bug 10803 adds a C4::Search::History module, the
PurgeSearchHistory routine should be moved.
Test plan:
- run misc/cronjobs/cleanup_database.pl with the searchhistory param and
verify behavior is the same as before applying this patch.
- run prove t/Search/History.t
Signed-off-by: Joonas Kylmälä <j.kylmala@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
For an unknown reason, when executed from a test file, the 'SHOW COLUMNS'
statement does not return anything.
We need to retrieve the column list from the DBIx::Class resultset.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch makes a tiny change in C4::Search::searchResults as to the
handling of not-for-loans. This should make the display of status and
availability of items more consistent in opac and staff.
Additionally, a Place hold link should disappear in the opac, when it is
not possible to place a hold on any item of one biblio.
The following spots need special attention; the display should be
corrected by this patch:
[A] Location column showing number of available items in catalogue/search.
[B] Location column of Cataloging Search (addbooks.pl)
[C] OPAC Search results list in non-XSLT view.
NOTE
The forms opac-MARCdetail and MARCdetail also include an Items table with
column Not for loan. The information in this column might still be somewhat
confusing but is actually correct. The column only contains Not for loan if
the item field is set. So it is empty when only the item type is nfl.
Since a correction here is arguable, I am not including it on this report.
Test plan:
[1] Have at least two item types. Mark one item type (X) as not for loan.
[2] Use at least two biblios with two items each. Mark one item of the first
biblio as not for loan at item level (via item editor).
Change one item of the second biblio to the item type of step 1 (X).
[3] Set pref item-level_itypes==item and set all four xslt prefs (for opac
and staff, results and detail) to default.
[4] Check spots A, B and C as mentioned above. Also check:
[D] OPAC Detail, Holdings table, Status column
[E] Staff Detail, Holdings, Status.
[5] Make all four xslt prefs now empty. Check spots A to E again.
Especially observe C here.
[6] Set pref item-level_itypes==biblio. Change your second biblio to item
type of step 1 (X) in the cataloguing editor (MARC 942c).
Check spots A to E again.
[7] Set all four xslt prefs again to default. Check spots A to E again.
[8] Run the unit test t/db_dependent/Search.t.
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Bug 11202 introduced a new index 'dissertation-information' for
UNIMARC. This patch adds the index also for MARC21 installations.
http://www.loc.gov/marc/bibliographic/bd502.html
To test:
- Apply patch
- Copy files in etc/zebradb changed by this patch to your
corresponding directory (koha-dev..)
- Make sure you have records with 502
- Reindex
- Verify you can search the field contents with
dissertation-information= and
diss=
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Can find by dissertation-information,
No errors
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Most of them were found and fixed using codespell.
Fix also some related grammar issues.
In C4/Serials.pm a variable was renamed to make future codespelling
checks easier.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
http://bugs.koha-community.org/show_bug.cgi?id=14383
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
In the staff search results, if an item is both checked out and lost,
the result will appear as two item lines where one line has the lost
status and the other line has the rest of the item's data.
Test Plan:
1) Check an item out to a patron
2) Mark the item as lost *without* removing the item from the patron's
record, either by using longoverdue.pl or by editing the itemlost
field in the database directly.
3) Perform a search where that item will be in the results
4) Note the improper display of the item's data
5) Apply this patch set
6) Reload the search results
7) Note the item now displays correctly
Signed-off-by: Nick <nick@quecheelibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Nick <nick@quecheelibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Any search limits including a ccode will break the search facets.
Test Plan:
1) Run an advanced search using a ccode limit
2) Try using any of the facet links on the left
3) Note they are broken
4) Apply this patch
5) Refresh the results
6) Note the facet links are no longer broken
Note: I have not been able to reproduce this issue on my own test
system, but have noted the problem on at least a dozen Koha servers.
I could not reproduce the bug either, but I saw it on the Bywater Demo (comment #1).
Applied patch and tested facets, no problems found, signing off
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Can not reproduce the problem, but I can also not find a regression.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This enhancement adds the ability to search on all isbn variations when
searching on the isbn index.
Test plan:
0/ Don't apply the patch
1/ Create or choose a record with an ISBN that contains dashes.
2/ Try to search for the record using the isbn index, entering its ISBN
without dashes.
=> It does not work.
3/ Apply the patch, enable the new pref SearchWithISBNVariations and
disable UseQueryParser.
4/ repeat 2 and note that the record is now returned.
Note that this only works if UseQueryParser is disabled.
It looks like QueryParser does not manage more than 1 operator.
See:
QueryParser does not manage more than 1 operator?
http://lists.koha-community.org/pipermail/koha-devel/2014-December/041028.html
and
commit 036f2a50e1
Author: Galen Charlton <gmc@esilibrary.com>
Date: Mon May 5 19:31:00 2014 +0000
Bug 10500: (follow-up) disable AggressiveMatchOnISBN if
UseQueryParser is on
Signed-off-by: Morag Hills <the.invinnysible.one@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Imagine this scenario: we have one record with four items. Two of those
items are checked out, one of those items is a waiting hold, and one of
those items is available. We would expect to see this on the search
results page. Instead, we will see both non-checked out items as
unavailable due to waiting holds.
This is due to a semantic issue in GetReserveStatus.
C4::Search::searchResults uses GetReserveStatus to get the reserve
status of each item, but unlike all other calls to the sub, this one
passes in not only the itemnumber, but also the biblionumber.
When no reserve is found for the available item, the subroutine uses the
biblionumber to grab what is essentially an arbitrary reserve to use for
the status. This makes no sense and this functionality should be
entirely removed from the subroutine so regressions like this will be
prevented in the future.
Test Plan:
1) Create one record with 4 items
a) check two of the items out to patrons
b) set one of the items as a waiting hold
c) leave the fourth item as available
2) Run a search where this record will be in the results list
3) Note that the results list 2 items on loan, two unavailable
4) Apply this patch, reload the search results
5) Note that the results list 1 available, 2 on loan, 1 unavailable
Signed-off-by: John Andrews <jandrews@washoecounty.us>
Signed-off-by: Sheila Kearns <sheila.kearns@state.vt.us>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Note: This is for the staff search result list!
Works as expected.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
The search patch should fix non-latin character searches.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
To reproduce, edit and index a record containing UTF-8 characters and search for it
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
See the wiki page for the explanation.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Dobrica Pavlinusic <dpavlin@rot13.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
[New commit on 18 Aug 2014 : rebased, and DOM indexing only]
Issues to fix :
Most 6XX fields may contain a $2 that identifies the system used for indexing. It should not be indexed.
In French libraries, $2 contains "rameau", so searching for books about the music composer "Rameau" retrieves thousands of records!
For some 6XX fields, other subfields should not be indexed, for example dates of persons and families, or addresses.
In Unimarc guide, 600$t,601$t,602$t are said to exist but to be "not used". I keep them indexed.
Additionally, subject indexing could be improved by using specific indexes for each 6XX where possible:
In ccl.properties :
- su-to, su-geo and su-ut are defined as aliases of Subject.
- a specific index is defined, but not used in record.abs : Subject-name-personal, alias su-na
We can use these indexes and create new specific indexes by using existing bib1 attributes.
We could also index $j,$x,$y,$z subdivision in specific indexes.
This patch does the following changes :
1) For all 6XX : Not indexing $2 (LSCH, Rameau...), $3 and $5
2) Suppressing the indexing of some specific subfields, depending on the field:
600 : Personal name used as a subject // see Marc21 600
not indexing c (additional elements),f (dates),p (address/affiliation)
602 : Family name used as a subject // see Marc21 600 3X
not indexing f (dates)
616 : Trademark
not indexing c,f
3) For all 6XX : index $j,$x,$y,$z in several indexes in addition to the specific index for their 6XX field:
4) Define in ccl.properties some specific indexes :
Subject-name-conference 1=1073 => alias su-conf
Subject-name-corporate 1=1074 => alias su-corp
Subject-genre-form 1=1075 => alias su-genre and su-form
Subject-geographical 1=1076 => alias su-geo
Subject-chronological 1=1077 => alias su-chrono
Subject-title 1=1078 => alias su-ut and su-ti
Subject-topical 1=1079 => alias su-to
5) Adding new aliases in Search.pm :
su-chrono, su-form, su-genre, su-corp, su-conf, su-ti
6) Using these new indexes in for
600 : Subject and Subject-Personal-Name ; all subfields except subdivisions in Personal-name
601 : Subject, Subject-name-conference and Subject-name-corporate and Subject-name-conf ; all subfields except subdivisions in Corporate-name and Conference-name
602 : same as 600 but could be improved later
604 : Subject and Subject-title ; $a in Subject-Personal-Name ; all subfields except subdivisions in Name-and-Title
605 : Subject and Subject-title
606 : Subject and Subject-topical
607 : Subject and Subject-geographical ; all subfields except subdivisions in Name-geographic
608 : Subject and Subject-genre-form
To test :
A. In a UNIMARC-DOM indexing environment
1) Apply the patch
2) Rebuild zebra
3) Create a record A with some values in critical fields, for example:
- the string "test9828" in 600$c 600$f 600$p, 602$f, 616$c, 616$f, 606$2,600$2
- the string "subform" in 600$j
4) Create a record B with the string "subgeo" in 606$y
5) Create a record C with the string "subdate" in 606$z
6) try to search "su:test9828". You should have no results
7) try to search "su-genre:subform". You should have 1 result : record A
8) try to search "su-geo:subgeo". You should have 1 result : record B
9) try to search "su-chrono:subdate". You should have 1 result : record C
10) On existing records, try the su-ut, su-to, su-na, su-form, su-corp, su-geo indexes, and see if the results are relevant
Indexing of subjects could maybe be improved later
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
All seems to work as expected. I am not super-familiar with UNIMARC, but I wonder if in su-corp and su-conf the subdivisions might be useful (e.g. France-Gendarmerie / Staatsbibliothek-Berlin)
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
At the 23 July development meeting it was decided to formally deprecate
the GRS-1 indexing mode for Zebra. This patch makes the code fall back to DOM
in the remaining places. No behaviour change should be noticed, as DOM
has been the default for a while.
Regards
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes tests and QA script.
Also checked running Makefile.PL
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch makes _get_facet_from_result_set rely on a new syspref (FacetsMaxCount)
to set the max facets to show for each facet category. It defaults to 20 if
the syspref is absent or empty.
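A sketch of the default handling described above:
use C4::Context;
my $facets_max = C4::Context->preference('FacetsMaxCount') || 20;    # fall back to 20
# later, when building each facet category (illustrative):
# @facet_values = @facet_values[ 0 .. $facets_max - 1 ] if @facet_values > $facets_max;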
To test:
- Have a search with lots of facet results (with some category showing the "See more" link).
- Jump to "See more", notice it shows more than 20 facet values.
- Apply the patch, reload the page.
=> SUCCESS It only shows 20 (default hardcoded value)
- Change the FacetsMaxCount syspref to other value (e.g. 15 or 100).
- Reload
=> SUCCESS: it shows the expected amount.
- Sign off :-D
Regards
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Test plan completed successfully
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Passes tests and QA script, works as described.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch fixes the "Publication date" and "Acquisition date"
searches when using non-QueryParser and QueryParser searches.
It adds structure attributes to the template, which is consistent
with how phrase searching is currently handled.
It removes unnecessary code from Search.pm, adds some necessary
code which is consistent with existing code, and adds a lot of
explanatory comments.
_TEST PLAN_
Before applying:
0) Turn off QueryParser
1) Try a "Publication date" or "Acquisition date" search from the
staff client advanced search.
2) Note that even though the description on the result page makes
it seem like you're doing an index-specific search, you're actually
doing a keyword search. You can verify this by checking the 008
from positions 7 to 10 for "Publication date" or "Date accessioned"
on items for "Acquisition date".
3) Turn on QueryParser
4) Try doing the searches from Step 1.
5) A "Publication date" search should probably produce zero results
After applying patch:
6) Keep QueryParser on
7) Try doing the searches from Step 1.
8) Notice that you're actually getting results consistent with
your search (ie the 008/7-10 shows the date you searched for,
and there is a "Date accessioned" in items which matches your
search)
9) Turn off QueryParser
10) Note that your results are exactly the same as step 8
(N.B. this is because QueryParser is falling back to non-QP mode
instead of producing a bad query.)
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch adds a variable to koha-conf.xml controlling the use of Zebra facets.
Usage:
- use_zebra_facets = 1 | 0
Zebra facets work only on DOM.
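For illustration, a hedged sketch of how such a koha-conf.xml flag is
typically read from Perl code (the consumer side shown here is an assumption):
use C4::Context;
my $use_zebra_facets = C4::Context->config('use_zebra_facets') // 0;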
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch adds the following routines to C4::Search
- GetFacets
This is a wrapper routine, that given a ZOOM::ResultSet, extracts
the relevant facets. To do the job, it uses the internal functions:
_get_facets_from_records and _get_facets_from_zebra. The choice is made by
querying the indexing mode: grs1 uses records, and dom uses Zebra's facets.
- _get_facets_from_records
Just refactoring the already existent main loop from getRecords into a function.
- _get_facets_from_zebra
Given a result set, loop through all defined facets in C4::Koha::getFacets
and call _get_facet_from_result_set for each, to build the facets
information. It retrieves the facets from Zebra's facets.
- _get_facet_from_result_set
Given a result set and a facet index name, retrieve the facets
for the given index, and build the result for rendering.
To test this preliminary work:
- Apply the patches, install on DOM, using MARC21, NORMARC and UNIMARC.
- Reindex some DB with lots of records.
- Check facets work.
Note: UNIMARC is the only dialect that has more than one subfield (concatenated)
for facet values, so it is better to test on UNIMARC. The approach leaves room
for re-thinking facets in MARC21/NORMARC, but that is outside the scope of this bug
(e.g. we could have more author facets).
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: David Cook <dcook@prosentient.com.au>
Seems to work with DOM and MARC21.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch removes the $facets_info calculation from the _get_facets_data_from_record
sub so it is not done for each record. It introduces a new sub, _get_facets_info
that is called from the getRecords loop, that does the job only once.
To test:
- Apply on top of the previous patches
- Run
$ prove -v t/db_dependent/Search.t
=> SUCCESS: _get_facets_info gets tested and it passes for both MARC21 and UNIMARC.
Facet rendering should remain unchanged in the UI.
- Sign off :-D
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch adds a test for field 100, to skip it on facet calculation
if ind1=z.
To test:
- Have IncludeSeeFromInSearches set.
- Create a biblio record, when adding an author, create a new authority record
that contains a 400$a field (see from).
- Rebuild zebra db.
- Search for the record making sure the search returns more than one record.
=> FAIL: the facets contain the 'see from' field.
- Run
$ prove -v t/db_dependent/Search.t
=> FAIL: it fails
- Apply the patch
- Run
$ prove -v t/db_dependent/Search.t
=> SUCCESS: it passes
- Re-run the search, notice the 'see from' doesn't show anymore on the facets.
- Sign off :-D
Edit: minor stylistic change
Regards
To+
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch refactors the facet extraction loop into a proper function.
The loop is changed so the MARC::Record objects are created only once
instead of the old/current behaviour: once for each defined facet (in
C4::Koha::getFacets).
To test:
- Apply the patch
=> SUCCESS: verify facets functionality remains unchanged.
- Run:
$ prove -v t/db_dependent/Search.t
=> SUCCESS: tests for _get_facets_data_from_record fail, because
100$a is considered for fields with indicator 1=z (field added
by IncludeSeeFromInSearches syspref).
- Sign off :-D
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described, passes tests and QA script.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Some libraries would like the ability to choose to display the home branch
on search results while having circulation rules based on the holding
branch. This is currently impossible because both the display of the
branch in search results, and the selection of the home or holding
branch for circulation rules are controlled by the same system
preference: HomeOrHoldingBranch. This preference is described as being
used only for circulation rules, and makes no mention of its use for
display control. The display control should be split off into a separate
preference.
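A sketch of the display-side selection the new preference enables, assuming
the preference stores 'homebranch' or 'holdingbranch' (the item hash is
illustrative):
use C4::Context;
my $item = { homebranch => 'CPL', holdingbranch => 'MPL' };    # illustrative
my $pref = C4::Context->preference('StaffSearchResultsDisplayBranch') || 'homebranch';
my $display_branch =
    $pref eq 'holdingbranch' ? $item->{holdingbranch} : $item->{homebranch};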
Test Plan:
1) Apply this patch
2) Run updatedatabase.pl
3) Note the value of the new system preference StaffSearchResultsDisplayBranch
matches the current value of HomeOrHoldingBranch
4) Set the preference to home branch
5) Perform a staff catalog search with results having items with differing home and
holding branches.
6) Note the home branch displays
7) Set the preference to holding branch
8) Repeat step 5
9) Note the holding branch displays
Signed-off-by: Jason Robb <jrobb@sekls.org>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described, logic is now tied to a new system preference.
Passes tests and QA script.
A use C4::Charset statement was added deep in the body of the code;
we have already imported it at the top of the file
(the conventional place). As use is executed at compile time,
specifying it in the code body does not serve a
useful purpose and detracts from the readability of an already
overly complex subroutine.
Remove the superfluous statement.
also removed the tabs introduced to the surrounding lines
by the same commit
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Search still works, no errors.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
It's quite common to have [something] within facet data, and it produces the following error:
Unmatched [ in regex; marked by <-- HERE in m/^[ <-- HERE
This problem was introduced in Bug 12151 but is trivial to fix.
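The usual fix for this class of error is to escape the data before it is interpolated into a regular expression, along these lines (variable names are illustrative):

    my $facet_label = 'Arizona [Ariz.]';
    # \Q...\E (quotemeta) neutralizes regex metacharacters such as '[' in the data
    if ( $link_value =~ /^\Q$facet_label\E/ ) {
        # ... treat it as a match
    }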
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Good catch.
To test:
- Created a bibliographic record, linked to an authority record (personal name). Did a search that returned the author as a facet.
- Added a [ symbol to the author name.
- Repeated the search
=> FAIL: "Unmatched [ in regex; marked by <-- HERE in m/^[ <-- HERE"
- Apply the patch
- Retry the search
=> SUCCESS: No error, bracket shows correctly.
Passes koha-qa.pl too.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch removes the use of smartmatch operators in the search code.
Regards
To+
Edit: this revision uses 'grep' instead of List::MoreUtils::any
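For illustration, the kind of change this implies (not a verbatim excerpt):

    # before: experimental smartmatch
    # if ( $tag ~~ @facet_tags ) { ... }

    # after: plain grep with an explicit string comparison
    if ( grep { $_ eq $tag } @facet_tags ) {
        # ... handle the facet field
    }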
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Tested search, no problems found.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
On the cataloguing search (cataloguing/addbook.pl), if an item has a
notforloan value > 0, the item is not listed in the Location column.
It is quite confusing: the current behavior lets patrons believe that
there is no item for the biblio (or fewer than the real count).
Test plan:
1/ Create 2 biblio records A and B
2/ Create some items for A
3/ Create 1+ item(s) for B with a notforloan status > 0
4/ Reindex both records
5/ Launch a search on the cataloguing module and verify that the
notforloan items are not listed in the 'Location' column.
6/ Apply this patch and verify the not for loan items are listed ("Not
for loan (XXX)").
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes tests and QA script, not for loan items now show up.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch reduces three repeated code fragments into a single
internal subroutine, which is easier to read, has comments,
and should make it easier to refactor more buildQuery code
in the future.
_TEST PLAN_
Before applying
1) Run a bunch of different searches in the staff client and OPAC
in separate tabs
2) Apply the patch
3) Run the same searches again (maybe in yet more tabs) and notice
that the results are exactly the same.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Same results, no errors.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
The ZOOM specification defines that a ScanSet should provide a way
to retrieve terms suitable for displaying and another one for using
on further searches [1].
The Net::Z3950::ZOOM implementation actually provides both [2] but we
were using the wrong one.
Using $scanset->display_term(...) instead of $scanset->term(...) fixes
the problem.
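For illustration, the difference in use (a sketch based on the ZOOM-Perl documentation linked below, not the exact Koha code):

    for my $i ( 0 .. $scanset->size() - 1 ) {
        my ( $term, $occurrences ) = $scanset->term($i);    # form meant for further searches
        my ( $display )            = $scanset->display_term($i);    # form meant for display
        # show $display to the user, but build the follow-up query from $term
    }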
To test:
- Do a index scan search (advanced search > more options > check
'index scan')
- Notice non-latin characters are replaced by one or more '@' symbols.
- Apply the patch
- Re-do the search, everything shows as it should.
- Try to follow any of the terms (clicking on them) and notice that
it actually gives you relevant results (i.e. is not searching for
@!!!!).
[1] http://zoom.z3950.org/api/zoom-1.4.html#3.6.3
[2] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#term()_/_display_term()
Sponsored-by: Universidad Nacional de Cordoba
Followed test plan. Patch behaves as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
I reproduced the issue and I confirm this patch fixes it.
I put "Fuß" in a title and reindexed the record, then launched a search
on Title with the "scan index" checkbox checked. The non-latin
characters are displayed correctly.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Like biblio, this feature provides an authority search history.
This history is available for logged-in and anonymous users.
If the user is not logged into Koha, the history is stored in an
anonymous user session.
The search history feature is now factored out into a new module.
This patch adds:
- 1 new db field search_history.type. It makes it possible to distinguish the
search type (biblio or authority).
- 1 new module C4::Search::History. It deals with 2 different storages:
DB or cookie
- 2 new UT files: t/Search/History.t and t/db_dependent/Search/History.t
- 1 new behavior: the 'Search history' link (on the top-right corner of
the screen) is always displayed.
Test plan:
1/ Switch on the 'EnableOpacSearchHistory' syspref.
2/ Go on the opac and log out.
3/ Launch some biblio and authority searches.
4/ Go on your search history page.
5/ Check that all your searches are displayed.
6/ Click on some links and check that results are consistent.
7/ Delete your biblio search history.
8/ Delete your authority search history.
9/ Launch some biblio and authority searches
10/ Delete all your history (cross on the top-right corner)
11/ Check that your search history is empty.
12/ Launch some biblio and authority searches.
13/ Login to your account.
14/ Check that all previous searches are displayed.
15/ Launch some biblio and authority searches.
16/ Check that these previous searches are displayed under "Current
session".
17/ Play with the 4 delete links (current / previous and biblio /
authority).
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All patches together pass QA script and tests.
Also, new tests in t/db_dependent/ pass.
Tested in all 4 OPAC themes, being logged in and anonymous.
Anonymous search history will be appended to personal search
history after logging in.
Also verified that cleanup_database still purges search history,
now also including the authority searches.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds :w and :p versions to the index for »Lexile number«
(it has only :n so far) and adds indexes for 653 (Index term
uncontrolled), 655 (Index term Genre/Form), 041 (language-audio) and
041 (language-subtitle). It also adds the »curriculum«-index to
Search.pm.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When using QueryWeightFields to add ranking on a search without index,
the search actually uses:
- rank 1 : Title-cover,ext : exact title-cover
- rank 2 : ti,ext : exact title
- rank 3 : Title-cover,phr : phrase title-cover
- rank >7 : queries without index
This relevance ranking prioritizes title as a phrase and then any index.
This patch adds title as a word list before searching on any index, so
that records with all searched terms in the title, even out of order,
are more relevant.
Test plan :
- Enable QueryWeightFields syspref
- Perform a search, with sort by relevance, with two words often
contained in title, but never one near the other.
For example: 'History France'
=> Records with both words in title are first. For example:
"Histoire de France" and "La France : 100 ans d'histoire"
Signed-off-by: Jesse Maseto <jesse@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Relevance ranking and field weighting are hard to test,
as many MARC fields are indexed into the used indexes.
If we had an index that only indexed 245$a/200$a the
effect might be more visible.
I found no regressions with this patch; the change looks
logical.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When searching with a sort (meaning not by relevance) and there is an error
in the Zebra connection (server is down or query is wrong), you get the message:
Error : Can't call method "sort" on an undefined value at /home/kohaadmin/src/C4/Search.pm line 405.
This patch corrects that by not performing the sort if there are no results.
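A minimal sketch of the kind of guard involved (variable names are illustrative only):

    # only ask Zebra to sort when a result set actually came back
    if ( defined $results[$i] and $sort_by ) {
        $results[$i]->sort( 'yaz', $sort_by );
    }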
Steps to reproduce the error without patch:
In OPAC go to Advanced Search
Choose "Title" in the first "Search for:" and enter "ccl=( and )"
Display "More options"
Set "Sort by" to "Title (A-Z)"
Click "Search" at the bottom of the page
Result:
Error:
Can't call method "sort" on an undefined value at /usr/share/kohaclone/C4/Search.pm line 430.
After applying the patch, try that search again. This time,
it should report no results found, without the error message.
Alternative Test plan :
- Set OPACdefaultSortField on something else than relevance
- Perform a simple search with a wrong CCL query. For example : ccl=( and )
=> You get the message: No results found ...
Patch behaves as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Adds another check to prevent a bad Zebra error message.
Works as described, passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch fixes two problems with the generation of
links to execute a Z39.50 search from the staff client
catalog and cataloguing search results page.
First, if using URI::Escape 3.30 or earlier, performing a simple search
with a double quote (e.g., "histoire algerie"), the Javascript is broken
on the results page because of:
function GetZ3950Terms(){
var strQuery="&frameworkcode=";
strQuery += "&" + "title" + "=" + ""histoire%20algerie"";
Second, the encoding of non-ASCII characters in the search
term was broken.
This patch moves URI escaping from Perl to template with uri TT filter.
Test plan :
- To reproduce the issue with double quotes, the server
must be running URI::Escape 3.30 or earlier; the current
version of URI::Escape properly escapes double quotes.
- In staff interface, perform a search with double quotes
that will return no result, e.g. "aaa xxx"
=> Without patch, javascript is broken
=> With patch, javascript is not broken
- Click on Z3950 button on results page
=> Without patch, the Title input is empty
=> With patch, the Title input contains the search terms
Additional test:
Do a search with something like äöü and then click Z3950
button on results page.
Without patch, encoding is broken in Z3950 form
With patch, encoding is correct.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Fixed a few tabs. Passes tests and QA script.
I can't reproduce the Javascript problem, but I can reproduce
the Z39.50 encoding problem and can detect no regression.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes Koha <-> Zebra use MARCXML for the serialization when
using DOM, and USMARC for GRS-1.
* The following functions are modified to set the Zebra record syntax
according to the current sysprefs and configuration:
- C4::Context->Zconn
- C4::Context->_new_Zconn
* A new function 'new_record_from_zebra' is introduced, which checks the
context we are in, and creates the MARC::Record object using the right
constructor (a rough sketch follows this list).
The following packages get touched to make use of the new function:
- C4::Search
- C4::AuthoritiesMarc
and the same happens to the UI scripts that make use of them (both in
the OPAC and STAFF interfaces).
* Calls to the unsafe ZOOM::Record->render()[1] method are removed.
Due to this last change the code for building facets was rewritten. And
for performance on the facets creation I pushed higher version
dependencies for MARC::File::XML and MARC::Record (we rely on
MARC::Field->as_string).
* Calls to MARC::Record->new_from_xml and MARC::Record->new_from_usmarc
are wrapped with eval for catching problems [2].
* As of bug 3087, UNIMARC uses the 'unimarc' record syntax. This case is
correctly handled.
* As of bug 7818 misc/migration_tools/rebuild_zebra.pl behaves like:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
here we do exactly the same.
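A rough sketch of the idea behind new_record_from_zebra (illustrative parameter names, not the exact Koha code):

    use MARC::Record;
    use MARC::File::XML;

    sub new_record_from_zebra {
        my ( $raw, $index_mode, $marcflavour ) = @_;
        # DOM indexing hands us MARCXML, GRS-1 hands us ISO2709 (USMARC)
        my $record = eval {
            $index_mode eq 'dom'
                ? MARC::Record->new_from_xml( $raw, 'UTF-8', $marcflavour )
                : MARC::Record->new_from_usmarc($raw);
        };
        if ( $@ or !$record ) {
            warn "Skipping record that MARC::Record cannot parse: $@";
            return;    # the caller simply skips this hit (see [2])
        }
        return $record;
    }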
To test:
- prove t/db_dependent/Search.t should pass.
- Searching should remain functional.
- Indexing and searching for a big record should work (that's what the
unit tests do).
- Test an index scan search (on the staff interface):
Search > More options > Check "Scan indexes".
- Enable 'itemBarcodeFallbackSearch' and try to circulate any word, it
shouldn't break.
- Searching for a biblio in a new subscription shouldn't break.
- Running bulkmarcimport.pl shouldn't break.
- And so on... for the rest of the .pl files.
[1] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#render()
[2] a record that cannot be parsed by MARC::Record is simply skipped (bug 10684)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If a search gives results with 6 facets, one of those facets won't be
displayed. This is due to a bug in the code that checks for greater
than 6 facets in one area, and fewer than 6 in another.
Test Plan:
1) Perform a search that should give results for 6 different libraries
2) Note you only see 5 libraries in the facets with no option to expand
3) Apply this patch
4) Repeat step 1
5) Note you now have the option to expand the facets list
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
This patch should provide a regression test but I really don't know how
to write it.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes the following changes to UNIMARC biblio indexing :
A. Changes to UNIMARC conf files
1. add comments to biblio-koha-indexdefs.xml
2. make biblio-koha-indexdefs.xml more compact by grouping some
declarations
Ex : 200$f and 200$g => one declaration for 200$fg
3. suppress unneeded declarations (indexing of some 4XX fields and 6XX
fields not in unimarc format)
4. unindex some (sub)fields unneeded by most users (318, 207,230,210a,
215, 4XXd)
5. change the way 308 field is indexed (no visible changes)
6. replace Title-host with Host-item -- see bug 11119
7. index 208 in Material-Type -- see bug 11119
8. index 100 pos 8-9 and 9-12 in pubdate:y and pubdate:n
9. index 100 pos 8-9 in pubdate:s instead of 210$d
10. Index all subfields of note 334 and 327 in note index
11. Index 304 and 327 in title index as well as note index
327 can contain a list of titles included in a work
304 can contain the title of the original work in case of a
translation
12. Index 314 in author index as well as note index
314 can contain authors not mentioned in 200$f/g (the 4th, 5th etc.
author)
13. Index 328 note in Dissertation-information as well as note
14. Index 328$t in Title
B. Changes to ccl.properties :
1. add a new index Dissertation-information (1056)
2. fix EAN, pubdate and acqdate (they were not linked with bib1 attributes)
C. Changes to Search.pm
1. add Dissertation-information and suppress Title-host and UPC
D. Changes to QP config file queryparser.yaml
1. add Dissertation-information
2. fix EAN, pubdate and acqdate
Test plan :
If you cannot test in GRS1, test only in DOM, as GRS will be deprecated.
1. Apply the patch in a UNIMARC Koha running with DOM and ICU
2. copy src/etc/searchengine/queryparser.yaml into the main config
directory of QP
3. copy src/etc/zebradb/ccl.properties into the main config directory
of Zebra
4. copy src/etc/zebradb/marc_defs/unimarc/biblio/* into the main config
directory of Zebra
5. reindex biblios (rebuild_zebra.pl -r -b -x -v)
6. test note index : make some searches on 334$b or 327$b
7. test author index : make some searches on 314 field
8. test title index : make some searches on 304 and 327 field, make a
search on 328$t subfield
9. test dissertation-information index : make some searches on 328 field
10. In a record, put in the dates of 100 fields the values "1000" (1st
date) and "1001" (2nd date); try to search for a book written in year
1000, you should find the record ; idem for year 1001
11. make some searches and sort by date. It should work better than before,
especially if you have values like "c2009" or "impr. 2010" in 210
field
12. Regression test : make some searches on several indexes, like EAN,
etc. It should work as before
Test 10-12 with and without Queryparser activated.
Be careful: with Queryparser activated, the index names (title,
dissertation-information...) must be entered in lowercase only.
Of course, to test search and sort by dates, you need to have full
records, with dates in 100 field as well as 210 field.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Adding Number-local-acquisition to the C4::Search known indexes allows
searching without using the "ccl=" prefix.
Also corrects ccl.properties: inv must be an alias of
Number-local-acquisition.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In Koha 3.8, if a standard catalog search was performed and the user
clicked the Z39.50 search button, the search string would automatically
be placed in the isbn field for the Z39.50 search form.
Changes to the code have since broken this functionality.
Test Plan:
1) From mainpage.pl, use "Search the catalog" to search for the string
"9781570672835"
2) Click the Z39.50 Search button
3) Note the string is placed in the title field
4) Apply this patch
5) Repeat steps 1-2
6) Note the string is placed in the isbn field
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Tested old and new ISBN with and without hyphens.
Also tested some other keyword searches.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Note that the behavior will be a bit odd if you do a 'replace via
Z39.50' from a bib record whose title happens to be an ISBN, but
this scenario seems unlikely enough to ignore.
It could be useful to index the original language of a document (e.g.
"fre" for the English translation of a French novel).
This patch renames the Bib-1 use attribute 1095 from
Code-language-original to language-original and uses it to index:
- MARC21 041$h subfield
- UNIMARC 101$c subfield
It adds "language-original" in the list of index in Search.pm.
Test plan :
A. in a MARC21 GRS1 environment
1. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/marc21/biblios/record.abs) from
your source etc/ directory to your main koha etc/ directory
2. Reindex zebra
3. Make some searches, like "language-original:fre"
B. in a MARC21 DOM environment
4. Copy Zebra config files (zebradb/biblios/etc/bib1.att, zebradb/ccl.properties,
marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl) from your source etc/
directory to your main koha etc/ directory
5. Reindex zebra
6. Make some searches, like "language-original:fre"
C. in a UNIMARC GRS1 environment
7. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/unimarc/biblios/record.abs) from
your source etc/ directory to your main koha etc/ directory
8. Reindex zebra
9. Make some searches, like "language-original:fre"
D. in a UNIMARC DOM environment
10. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/unimarc/biblios/biblio-zebra-indexdefs.xsl)
from your source etc/ directory to your main koha etc/ directory
11. Reindex zebra
12. Make some searches, like "language-original:fre"
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Under certain circumstances, a search term without quotation marks
returns the expected results while the same search with a
double quote embedded in it would fail.
Koha should ignore the quotation marks and return results anyway.
This appears when QueryWeightFields syspref is activated (and
QueryAutoTruncate is off), as field weighting builds a complex CCL
query using double quotes around search words. This patch simply
replaces double quotes in search words by a space.
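For illustration, the fix amounts to something like this just before the weighted CCL query is assembled (variable name is illustrative):

    # embedded double quotes would otherwise unbalance the CCL phrase quoting
    $operand =~ s/"/ /g;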
Test plan :
- Set QueryAutoTruncate off (you may also need to set QueryFuzzy to off)
- Set QueryWeightFields off
- Perform a search on two words where you have results, like: centre "ville
=> you get results
- Set QueryWeightFields on
- Perform the same search
=> you get the same results
Signed-off-by: Leila <koha.aixmarseille@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If you select an index in the search dropdown and then enter a QP
query starting with the field, Koha will prepend the index you do not
want to use at the beginning of the search, resulting in a search that
probably does not match what you were hoping for.
To test:
1) Select an index in the search dropdown in the OPAC. Author is fine.
2) Enter a search term using manually entered indexes. For example:
ti:cat in the hat
3) Note that the search fails.
4) Apply patch.
5) Repeat steps 1 and 2.
6) Note that the search succeeds.
7) Sign off.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
MARC::Record 2.0.6+ enables the warnings pragma, and as a
consequence, started logging cases where a routine in
C4::Search was calling MARC::Field->subfield() with an undef
subfield label. This patch removes the log noise.
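A minimal sketch of the kind of guard that silences the warning (variable names are illustrative):

    # MARC::Record 2.0.6+ warns if subfield() is handed an undefined code
    next unless defined $subfield_code;
    my $value = $field->subfield($subfield_code);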
To test:
- Run prove -v t/db_dependent/Search.t
- There will be warnings about
"Use of uninitialized value $code_wanted in string" in MARC::Field.
- Apply the patch.
- Those warnings are gone.
Signed-off-by: Liz Rea <liz@catalyst.net.nz>
Tests now pass
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When a record fails to decode during a search, Koha dies with an error.
Koha should ignore bad records and continue on (and log the error).
An example of a record that Zebra will happily ingest but which MARC::Record
doesn't like is one that contains a punctuation character in a tag label.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When an index does not contain a structure part, the structure "wrdl"
is automatically added; a structure is mandatory to build the search
query (to convert ':' into '=').
But the code that tests whether the structure is already defined looks
at the entire index string:
$index =~ /(st-|phr|ext|wrdl|nb|ns)/
It should look for a comma followed by a structure and, in the case of
"nb" and "ns", look for an exact match.
The consequence is that an index whose name contains ns, nb, phr, etc.
does not work.
This patch modifies the regexp for this part and for the other parts
that look at structures in the index.
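A sketch of the corrected test (patterns shown for illustration; the real patch adjusts several similar checks):

    # require the structure to follow a comma, and match nb/ns only exactly,
    # so an index name that merely contains 'ns', 'nb', 'phr', etc. is not misdetected
    unless ( $index =~ /,(st-|phr|ext|wrdl)/ or $index =~ /^(nb|ns)$/ ) {
        $index .= ',wrdl';
    }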
Test plan :
- Deactivate all searching sysprefs.
- Create a new index called "ansa" number 8999 into bib1.att,
ccl.properties and records.abs
- Index a biblio with a value on this index, e.g. "VALUE"
- Perform a search on this index by editing URL:
http://<server>/cgi-bin/koha/catalogue/search.pl?idx=ansa&q=VALUE
=> Without patch, the search does not work. The PQF query is
"@and ansa: VALUE"
=> With patch, the search works. The PQF query is "@attr 1=8999 VALUE";
- Perform the same test with an index name containing a structure, e.g. "aphra"
- Set QueryAutoTruncate syspref to automatically
=> Check * is added to operand : PQF query is
"@attr 1=8999 @attr 4=6 @attr 5=1 VALUE"
- You may check stopwords removal but this feature is obsolete
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: as far as I can test, this works. Small tab error reported
by koha-qa, fixed in a followup.
This kind of patch is difficult to test without explicit instructions,
not everyone knows how to check what kind of PQF search is used.
I don't know. But I can test search results.
Test:
1) Deactivate search sysprefs
QueryAutoTruncate = only if * is added
QueryFuzzy = Don't try
QueryStemming = Don't try
QueryWeightFields = Disable
UseQueryParser = Do not try
2) Create new index 'ansa'
bib1.att : att 8999 ansa
ccl.properties : ansa 1=8999
records.abs : melm 999 ansa:w,ansa:p
1) and 2) from comment 3 on Bug
3) In the understanding that the index refers to field 999,
edited default framework, made 999a visible on editor
4) Edit sample record, add 'VALUE' to 999a, save, reindex
5) Search with /cgi-bin/koha/catalogue/search.pl?idx=ansa&q=VALUE
No results
6) Apply patch, repeat search
Got results
That's all I can test. If this is not enough for QA, then it
must wait for further, explicit test instructions.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
There is (for MARC21, at least) an existing index that this patch
fixes: Code-institution.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Before fixing UNIMARC DOM indexing, we must fix GRS-1 indexing
1) In advanced search, some Coded fields indexes are not working: Print,
Illustration, Content
2) Country-heading index is not working
3) Some subfields are indexed in wrong indexes :
102$a should be in Country-publication instead of Country-heading
(not defined in bib1.att)
106$a, filled only for printed works, should be in ff88-23 (form of
item) instead of itype. (ff88-23 is made for Marc21 008 pos
23, which contains the same data as 106a)
200$b should be in Material-type instead of (or in addition to) itype
and itemtype: (Material-type :"free-form string, ... that
describes the material type of the item, e.g., cassette, kit,
computer database, computer file.")
100$a pos 22-24 should not be indexed as "ln" : it is the language of
the record, not the language of the resource
4) Index names are too long : if we index new positions of coded fields,
with existing names it breaks Zebra indexing (there must be a limit
in line length in record.abs?)
5) There are a lot of warnings when rebuilding Zebra.
This patch makes some changes in bib1.att (which could be used later to improve
search) :
- fixing wording for att 51 and 1012
- adding comments for attributes based on MARC21 008 field (8800-8841)
- creating 8806 (tpubdate), 8838 (Modified-code), 8818 (ff8-18), 8840
(ff8-18-21), 8819 (ff8-19), 8821 (ff8-21), 8828 (ff8-28), 8830
(ff8-30), 8831 (ff8-31)
- creating attributes specific to UNIMARC : 9701-9707 (Video-mt,
Graphics-type, Graphics-support, Title-page-availability,
Cumulative-index-availability, script-Title, char-encoding)
- setting apart 3 blocks of attributes, so it could be easy to make
further changes :
-- common to Marc21 and UNIMARC : 8806, 8822, 8838
-- slightly different in Marc21 and UNIMARC (different meanings
according to the type of the record => don't match a single
UNIMARC field)
-- specific to UNIMARC : 9701-9707
In ccl.properties :
- creating a new index: Country-publication 1=1053
- suppressing some warns by mapping with bib1 att:
Date-time-last-modified, Name, rtype, Music-number
- defining indexes using the 3 blocks attributes defined in bib1
(common to Marc21 and UNIMARC, slightly different, specific to UNIMARC)
In record.abs :
- renaming some indexes for 100-105-110 fields
- correcting indexing of 102$a (country of publication)
106$a (ff88-23)
100$a pos 22-24 (language of record, no more
indexed)
105$a pos. 0-3 (illustration code)
200$b (for the moment, I keep it indexed in
itype and itemtype, but also Material-Type)
In C4/Search.pm :
- adding "Country-publication" index
In OPAC and staff interface template subtypes_unimarc.in :
- renaming indexes to take into account the changes made to Zebra
config files
To test (this cannot be done with a sandbox) :
1) Apply the patch in a UNIMARC GRS-1 Koha instance
2) Copy the following files from the etc/zebradb of your source
directory into the etc/zebradb of your main Koha directory:
-- etc/zebradb/biblios/etc/bib1.att
-- etc/zebradb/ccl.properties
-- etc/zebradb/marc_defs/unimarc/biblios/record.abs
3) Reindex your data (rebuild_zebra -x -b -r -v)
4) Try to use those Coded fields indexes in Advanced search, in OPAC
and Staff interface (available after clicking on "More options",
then on "Coded information filters"):
Audience, Print, Literary genre, Biography, Illustration, Content,
Video Types, Serials, Serial Type, Periodicity, Regularity
5) Try to search "Country-publication=FR" in simple search
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No koha-qa errors.
Tests for GRS-1
Followed test plan
Search by coded fields works, but only on OPAC,
on staff there are few options
Search by Country-publication works after patch
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch updates the wthdrawn field in items and deleteditems to be
withdrawn instead. No functional changes are made.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Save for translation files (that will be fixed on next release),
the only occurrence of wthdrawn is in updatedatabase.pl
No koha-qa errors.
This touches many files, and I did not test everything,
but all seems normal. I think that any problem could
be fixed later.
Perhaps both entries in updatedatabase.pl could be joined
into one, but that's for QA.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Add an option to cleanup_database.pl to purge the search_history
entries older than X days.
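A rough sketch of the option handling (the 30-day default comes from the test plan below; option and variable names are illustrative):

    use Getopt::Long;

    my $searchhistory_days;
    GetOptions( 'searchhistory:i' => \$searchhistory_days );

    if ( defined $searchhistory_days ) {
        $searchhistory_days ||= 30;    # default when the flag is given without a value
        $dbh->do(
            q{DELETE FROM search_history WHERE time < DATE_SUB( NOW(), INTERVAL ? DAY )},
            undef, $searchhistory_days
        );
    }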
Test plan:
- Apply patch
- Check that your test DB has some entries a little older than 30 days
and a few ones even older than that in search_history:
SELECT * FROM search_history WHERE time < DATE_SUB( NOW(), INTERVAL 30 DAY );
If not, modify some existing entries.
- Run cleanup_database with a fixed number of days (replace XX with
something higher than 30)
/misc/cronjobs/cleanup_database.pl --searchhistory XX
- Check that entries older than XX days got deleted from search_history
- Run without the day parameter
/misc/cronjobs/cleanup_database.pl --searchhistory
- Check that entries older than 30 days got deleted from search_history
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
To test:
[1] Turn on the syspref for enabling OPAC holds.
[2] Create an item and bring it up on the OPAC search
results. Run through the following possibilities,
by changing the item, and verify that the place hold
link in OPAC search results appears only when the item is
- not lost AND
- not withdrawn AND
- not damaged (or is damaged and AllowHoldsOnDamagedItems is ON) AND
- the item is not marked not-for-loan OR
the item has a negative notforloan value (e.g., it is on order)
Note that it is necessary to reindex the test bib after making
each change to the test item.
[3] Also verify that whether or not the item is in transit does
NOT affect whether the place hold link appears.
[4] Verify that there is no regression on bug 8975 (i.e., if an
item is on order, that status should be displayed in staff client
search results).
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In search results, one could not place a hold on an item in transit
and for loan (items.notforloan=0). This appears when AllowOnShelfHolds
is allowed.
This patch repairs a regression introduced by the patch for bug 8975.
Test plan :
- Set AllowOnShelfHolds to on
- Create a record with a normal item : not lost, not withdrawn, not
damaged, notforloan=0
- Index this record
- Perform a search on OPAC that returns this record (and others)
=> You see in actions "Place hold"
- Add this item in transit : /cgi-bin/koha/circ/branchtransfers.pl
- Re-perform the search on OPAC
=> You see in actions "Place hold" and item "in transit"
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The original implementation of QueryParser did not handle truncation
based on the QueryAutoTruncate system preference. This patch adds support.
To test:
1) Apply patch.
2) Turn on UseQueryParser.
3) Set QueryAutoTruncate to "automatically."
4) Search for "har". Note that it returns results with words
like "Harry" (i.e. with right truncation).
5) Search for "har*". Note that it still returns results with right
truncation.
6) Set QueryAutoTruncate to "only when * is added."
7) Search for "har". Note that it returns only records that have the
exact word "har" in them (most likely there will be none unless you
have Hebrew items).
8) Search for "har*". Note that once again it returns results for "Harry"
(i.e. right truncated results).
9) Sign off.
This patch also reindents a hash in Koha/QueryParser/Driver/PQF.pm
because it was hard to read before.
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All tests and QA script pass.
Thx for fixing this Jared!
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
I just added 'use utf8;' to Search.pm and the problem
was solved.
Test plan :
1- Add bib records with non-latin characters
2- search for some of these records
3- try to refine your search using Subject / Author
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Works, fixing the URLs in facets. Now they work correctly.
No errors.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
I tested facets with the 22 Arabic records provided on
bug 9579 successfully. Before the patch the links are not
correct, after applying the patch the links work as
expected.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
When QueryWeightFields is enabled, the search query is created with several options.
In C4::Search::_build_weighted_query, when no index is defined, the query is built with fuzzy and stemming options. When an index is defined, these options are missing; only unconditional right truncation is used.
The consequence is that when QueryStemming is disabled, a search with an index can give more results (due to right truncation) than a search without one.
This patch adds stemming and fuzzy options to searches with an index, conditioned on the QueryFuzzy and QueryStemming sysprefs.
Also changes the word list search (wrdl) weight to r6 in order to set fuzzy search to r8 and stemming search to r9 (like searches without an index).
Test plan :
- Go to searching preferences (admin/preferences.pl?tab=searching)
- Set QueryAutoTruncate to "only if * is added"
- Set QueryFuzzy and QueryStemming to "Don't try"
- Set QueryWeightFields to "Enable"
- Go to advanced search page
- Select an index (e.g. Title) and perform a search on a short word
=> Look at the zebrasrv log and see that the query does not contain right truncation (@attr 5=1)
- Set QueryFuzzy to "Try"
- Perform same search
=> Look at the zebrasrv log and see that the query contains fuzzy: @attr 5=103
- Set QueryFuzzy to "Don't try" and QueryStemming to "Try"
- Perform same search
=> Look at the zebrasrv log and see that the query contains right truncation on the stemmed word: @attr 5=1
Signed-off-by: koha.aixmarseille <koha.aixmarseille@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
This patch makes Fuzzy and Stemming influence search results on weighted
queries when using an index. A side-effect, however, is that the results for a
search like index=term* (adding truncation manually) could be LOWER than
the number of hits for index=term. Further comments on Bugzilla.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
The current implementation (mostly commented out in this patch)
uses heuristics to guess which strings need decoding from utf-8
to a binary representation; it doesn't support utf-8 characters
in templates and has problems with utf-8 data from the database.
With these changes, Koha perl code always uses utf-8 encoding
correctly. All incoming data from the database is already
correctly marked as utf-8, and decoding of utf8 is required
only for Zebra and XSLT transfers, which don't set the utf-8 flag
correctly.
For output, the standard perl :encoding(utf8) handler is used,
so it also removes various "wide character" warnings as a side-effect.
Test scenario:
1. make sure that you have utf-8 characters in your biblio
records, patrons, categories etc.
2. try to search records on intranet and opac which contain
utf-8 characters
3. install language which has utf-8 characters, e.g. uk-UA
dpavlin@koha-dev:/srv/koha/misc/translator(bug_6554) $
PERL5LIB=/srv/koha/ perl translate install uk-UA
4. switch language to uk-UA and verify that templates
display correctly
5. test search and Z39.50 search and verify that caracters
are correct
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
I followed the test plan, adding utf-8 characters to library names,
patron categories, titles, and authorized values. I tried the uk-UA
translation and everything looked good.
When performing Z39.50 searches for titles containing utf-8 characters I
got results which were still occasionally contaminated with dummy
characters [?] but I assume this is Z39.50's fault not the patch's.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Already signed, add mine.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Removed NoZebra vestiges. This comprises several code blocks that depend on the NoZebra syspref and NZ related functions/methods.
C4::Biblio->
GetNoZebraIndexes
_DelBiblioNoZebra
_AddBiblioNoZebra
C4::Search->
NZgetRecords
NZanalyse
NZoperatorAND
NZoperatorOR
NZoperatorNOT
NZorder
C4::Installer->
set_indexing_engine
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
* Fix a long-standing bug in the linker that could crash the linker when
run against odd data.
* Sanitize input to SimpleSearch.
* Correctly handle CCL indexes with QP.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch also corrects the definition of the an= index, which was
missing exactness.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Koha was not previously escaping CGI input, which caused problems for
highlighting and is a security issue.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Thx for fixing this.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
QP searches with && broke search highlighting on the OPAC details page.
This patch corrects encoding of the query_desc parameter that is passed
to the details page.
My last attempt at rebasing also transposed the variable for index
names with the variable for operators, meaning that the dropdown in
the basic search did not work.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Fixes some problems raised during QA successfully.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
With the inclusion of this patch, all searches will (try to) use
QueryParser for handling queries for both the bibliographic and authority
databases if UseQueryParser is enabled. If QueryParser is unavailable,
UseQueryParser is disabled, or the search uses CCL indexes, the old
search code will be used.
To test:
1) Apply patch.
2) Run the unit test with `prove t/QueryParser.t`
3) Enable the UseQueryParser syspref.
4) Try searches that should return results in the following places:
* OPAC (simple search)
* OPAC (advanced search)
* OPAC (authorities)
* Staff client (header search)
* Staff client (advanced search)
* Staff client (cataloging search)
* Staff client (authorities)
* Staff client (importing a batch using a match point)
* Staff client (searching for an item for adding to a label)
* Staff client (acquisitions)
* Staff client (searching for a record to create a serial)
* ANYWHERE ELSE I HAVE FORGOTTEN
5) Disable the UseQueryParser syspref. Repeat at least some of the
searches you did above.
6) If all searches worked, sign off.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Elliott Davis <elliott@bywatersolions.com>
Searching still works as expected in various places.
QueryParser syspref seemed to be enabled by default
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch rewrites the GetReserveStatus routine to take the
itemnumber and/or the biblionumber as parameters.
In some places, the C4::Reserves::CheckReserves routine is called when
we just want to get the status of the reserve. In these cases, the
C4::Reserves::GetReserveStatus is now called.
This routine executes 1 SQL query (or 2 at most).
Test plan:
Check that there is no regression on the different pages where reserves
are used. The different statuses will be the same as before applying
this patch.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
In C4::Search::FindDuplicate, when the biblio has no ISBN, the duplicate search adds:
$query .= " and itemtype=$result->{itemtype}".
This is wrong when the itemtype is defined in items.
This patch simply removes the itemtype from the duplicate search.
Test plan :
- Go to a biblio details page
- Click on "Edit as new (duplicate)"
- If ISBN is defined, remove it
- Click on save
=> a duplicate is detected
- Change biblio item type and save
=> a duplicate is detected
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Prior to this patch, there were three identical ZOOM event loops in
C4::Search. This is wasteful, and goes against all good programming
practice. This patch refactors the ZOOM event loops into a separate
subroutine which is called by SimpleSearch, searchResults, and
GetDistinctValues.
The new routine, _ZOOM_event_loop, processes the ZOOM event loop and,
once it has been fully processed, passes control to a closure provided
by the calling routine for processing the results, and destroys the
result sets.
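A rough sketch of the shape of the new routine (not a verbatim excerpt; the result-set handling may differ in detail):

    sub _ZOOM_event_loop {
        my ( $zconns, $results, $callback ) = @_;

        # drain events from all connections until each search has finished
        while ( ( my $i = ZOOM::event($zconns) ) != 0 ) {
            my $event = $zconns->[ $i - 1 ]->last_event();
            if ( $event == ZOOM::Event::ZEND ) {
                next unless $results->[ $i - 1 ];
                # hand the finished result set to the caller-supplied closure
                $callback->( $i, $results->[ $i - 1 ]->size() );
            }
        }

        # this routine owns destruction of the result sets
        $_->destroy() for grep { defined } @$results;
    }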
To test (after applying patch):
1) Do a regular bibliographic search that should return results.
2) Do a search in the Cataloging module that should return results.
3) If you get results from both searches, the patch works.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
In order to allow holds on items with notforloan = -1, processing
of "unavailable" items in Search.pm was altered to exclude items
with notforloan < 0. (See Bug 2341: items marked 'on order' not
reserveable from search results). Doing so meant that such items
were excluded from the list, in staff client search results, of
items which are unavailable.
This patch changes the logic of that processing so that items
with notforloan < 0 are considered unavailable, but can still
be placed on hold.
To test, edit a record with a single item and view that record
in search results. When the item is on order (notforloan -1)
it should say so. The holds link should be INactive only if:
- item is withdrawn AND/OR
- item is lost AND/OR
- item is damaged (and AllowHoldsOnDamagedItems is off) AND/OR
- item is not for loan, with notforloan > 0
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
All tests pass (note that a reindex is required if changing item
statuses - which is why my first tests failed).
Passed-QA-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Passed-QA-by: Marcel de Rooy <M.de.Rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
When a series title includes a question or exclamation mark, these must be removed
to prevent search failure.
http://bugs.koha-community.org/show_bug.cgi?id=8888
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Before the patch: Series facet links with ! or ? return no results.
After the patch the same links return valid results.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch enables the shelving location facet as an
alternative to the branches facet in two situations:
A) SingleBranchMode is enabled
B) There is only one branch in the branches table
Test Plan:
1) Catalog multiple items with different shelving locations.
2) Test enable by enabling SingleBranchMode
3) Test enable by deleting all but one branch
Based on initial patch by Ian Walls.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tested cases 2) and 3) successfully in OPAC and staff client
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
When working with hierarchical subject headings, it is sometimes helpful
to do a search for all records with a specific subject, plus
broader/narrower/related subjects. This patch adds support for these
"exploded" subject searches to Koha.
To test:
1) Make sure you have a bunch of hierarchical subjects. I created
geographical subjects for "Arizona," "United States," and "Phoenix,"
and linked them together using 551s, and made sure I had a half
dozen records linking to each (but not all to all three).
2) Do a search for su-br:Arizona (or choose "Subject and broader terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona" and the
records with the subject "United States"
3) Do a search for su-na:Arizona (or choose "Subject and narrower terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona" and the
records with the subject "Phoenix"
4) Do a search for su-rl:Arizona (or choose "Subject and related terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona," the
records with the subject "United States," and the records with the
subject "Phoenix"
5) Ensure that other searches still work (keyword, subject, ccl,
whatever)
6) Sign off
Technical details:
This patch adds a shim in front of C4::Search::buildQuery in order to
preprocess the query and call the _handle_exploding_search callback.
This shim will allow us to gradually offload query parsing to a new
query parser module.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tested by toggling both the hidelostitems preference and the
OpacHiddenItems preference. Both work as expected in the normal
search results display.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
If @limit = (''), buildQuery failed.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
All searches that I tried (keyword, indexed, CCL, with limits, without,
etc.) worked fine. There are warnings about uninitialized variables in
the OPAC, but they exist on master as well and therefore should not
block these patches.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
When testing bug 8743, I discovered a missing index in my authority file.
The error message was
"CCL parsing error (10014) Unknown qualifier ZOOM"
which is not very helpful because it does not show the query that was made.
This patch adds the query itself after the Zebra error.
This patch adds the Koha::Indexer::RecordNormalizer and
Koha::Indexer::MARC::RecordNormalizer::EmbedSeeFromHeadings packages
to enable the inclusion of alternate forms of headings in bibliographic
searches. When the new syspref IncludeSeeFromInSearches is turned on
(default is off) rebuild_zebra.pl will insert see from headings from
authority records into bibliographic records when indexing, so that a
search on an obsolete term will turn up relevant records.
To test:
1) Enable IncludeSeeFromInSearches
2) Add a heading that has an alternate form to a record (for example,
"Cooking" has the alternate form "Cookery," if you have authority
records from LC)
3) Index the zebraqueue (or reindex if you haven't indexed your system
yet)
4) Confirm that if you search for "Cookery" you get the record you
just modified
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 5 August 2012
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 11 September 2012
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Also checked:
- Verified database update works correctly
- Checked system preference and its description
- Checked staff/opac detail pages with feature on/off
- Checked staff/opac search facets
- Downloaded and tested records in various formats
- Tried different searches for 'see from' entries of authorities
- Ran all unit tests
No problems found.
Around line 1470-something:
my $sth =
$dbh->prepare(
"SELECT tagfield FROM marc_subfield_structure WHERE kohafield LIKE
'items.itemnumber'"
);
$sth->execute;
This patch replaces that with a call to GetMarcFromKohaField.
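For illustration, the replacement looks roughly like this (the empty string is the default framework code; the exact surrounding context may differ):

    use C4::Biblio qw( GetMarcFromKohaField );

    # look up the MARC tag/subfield mapped to items.itemnumber instead of
    # querying marc_subfield_structure directly
    my ( $itemtag, $itemsubfield ) = GetMarcFromKohaField( 'items.itemnumber', '' );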
To test:
1) Apply patch.
2) Do a search that returns both available and unavailable items.
You'll know if the patch isn't working.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch removes the AmazonReviews and AmazonSimilarItems
features from the OPAC and staff client. With one Amazon
feature remaining, cover images, the *AmazonEnabled preference
is also removed in favor of checking the *AmazonCoverImages
preference. Two other system preferences, AWSAccessKeyID and
AWSPrivateKey are removed as they were required only by the
removed features.
Handling of book cover images from Amazon is unchanged.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Turned on amazon covers in opac and staff client and all
worked as expected. Then tested to make sure other cover image
services still worked and they do.
Signing off.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This is for MARC 21 only.
Made the following changes:
- In getFacets in C4/Koha.pm, added an item type facet for 952y and 942c
- In getRecords in C4::Search.pm, added code to get the description of itemtype codes
- facets.inc in both staff and opac, to show the item type related label in the facets block
To test:
Add records such that a certain itype (say BK) is present in both 942c and 952y in two DIFFERENT records.
Run a search where both test records are present. Test to see if item types are presented in the facets block (both OPAC and staff).
Click on the itype (say BK), both the test records should appear in the refined results. This shows that the feature works for both 942c and 952y.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Ian Walls <koha.sekjal@gmail.com>
QA Comment: fixed capitalization in template includes according to HTML4 coding
guideline ("Item types" instead of "ItemTypes")
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This is offered as a compromise alternative to creating
a new Title-rel index, to avoid having the statement of
responsibility unduly affect field weight when using the DOM
filter and MARC21 -- the problem with creating a Title-rel index
is that it would *force* reindexing upon upgrade.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Due to a dependency cycle between C4::Search and C4::Items, searches
in the OPAC die spectacularly under Plack. This counter-patch extends
dpavlin's solution and replaces use with require for C4::Search in
C4::Items and for C4::Items in C4::Search.
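A minimal sketch of the pattern (illustrative helper; the point is deferring the load to run time so the two modules no longer 'use' each other at compile time):

    sub _lookup_item {
        my ($itemnumber) = @_;
        require C4::Items;    # run-time load instead of a compile-time 'use' at the top
        return C4::Items::GetItem($itemnumber);    # fully qualified call, nothing is imported
    }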
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
To test:
- Syspref QueryStemming = Try
- Install Norwegian bokmål:
cd misc/translator/
perl translate install nb-NO
- Go to Home › Administration › System Preferences > I18N/L10N
and enable "Norsk bokmål(nb-NO)" for opaclanguages as well as
setting opaclanguagesdisplay = Allow
- Make sure you have selected "Norsk bokmål" as the active language
in the OPAC
- Find a record that has a tag (which does not contain any digits)
- Click on the tag and see that you get the error in the title of
this bug
- Apply the patch
- Click on the tag again and the error should be gone
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Easy to test with a great test plan. Works nicely.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Marijana Glavica <mglavica@ffzg.hr>
I am signing it off because it doesn't break anything and I will report
another bug for language issues described in my previous comment.
Removed MySQLism backquotes
Optionally delete bibliographic record when batch deleting items, if no items remain on the record.
Adds deleting of reserves to DelBiblio. Since subscriptions are deleted automatically,
it made sense for deletion of reserves to maintain the same behavior.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
I like the way this works, and it does. Passes tests.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This seems like a very big improvement, especially for people using screen
readers. I agree that the change to C4::Search is required.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Adds the ability to perform advanced searches in both the OPAC and staff client on more than
a single AdvancedSearchType at a time. Support included for Itemtype, Collection Code and Shelving Location.
The AdvancedSearchTypes syspref is repurposed; no longer a single value, it can now take
multiple item code fields separated by "|". The order of these fields will determine the order
of the tabs in the OPAC and staff client advanced search screens. Values within the search type
are OR'ed together, while each different search type is AND'ed together in the query limits. The
current stored values are supported without any required modification.
Each set of advanced search fields are displayed in tabs in both the OPAC and staff client. The
first value in the AdvancedSearchTypes syspref is the selected tab; if no values are present, "itemtypes"
is used. For non-itemtype values, the value in AdvancedSearchTypes must match the Authorised Value name, and
must be indexed with 'mc-' prefixing that name.
<li> elements in tab are assigned unique IDs, so the text of the tab can be altered to match the
library's needs (using JQuery)
The logic to handle the 5 element row limit has been moved from the Perl to the templates, since Template::Toolkit
has a simple method for extracting the count of an element in a loop and performing 'modulus' on it.
2011-12-21: Incorporated changes recommend by Owen Leonard on bug report.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Add 700$b to UNIMARC author facets.
Other facets subfields could be added now. For example, other subjects
subfields.
Following patches are required to better handle MARC21 subfields and to choose
other subfields to deal with the UNIMARC format.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Tested under both MARC21 and UNIMARC. Does not cause any regressions with
MARC21, and offers the possibility for better faceting there in the future.
Works as advertised with UNIMARC.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>