A use C4::Charset was added deep in the body of the code, but
we have already imported it at the top of the file
(the conventional place). As use is executed at compile time,
specifying it in the code body does not serve a
useful purpose and detracts from the readability of an already
overly complex subroutine.
Remove the superfluous statement.
Also remove the tabs introduced to the surrounding lines
by the same commit.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Search still works, no errors.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
It's quite common to have [something] within facet data, and it produces the following error:
Unmatched [ in regex; marked by <-- HERE in m/^[ <-- HERE
This problem was introduced in Bug 12151 but is trivial to fix.
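For illustration, the usual way to neutralize such characters is quotemeta (\Q...\E); the variable names below are made up, not taken from the patch:
use strict;
use warnings;
my $facet  = 'Garcia Marquez, Gabriel [editor]';
my $prefix = '[editor';                    # facet data containing a bare '['
# $facet =~ /^$prefix/;                    # dies: "Unmatched [ in regex"
my $match  = $facet =~ /^\Q$prefix\E/;     # \Q...\E (quotemeta) makes it literal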
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Good catch.
To test:
- Created a bibliographic record, linked to an authority record (personal name). Did a search that returned the author as a facet.
- Added a [ symbol to the author name.
- Repeated the search
=> FAIL: "Unmatched [ in regex; marked by <-- HERE in m/^[ <-- HERE"
- Apply the patch
- Retry the search
=> SUCCESS: No error, bracket shows correctly.
Passes koha-qa.pl too.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch removes the use of smartmatch operators in the search code.
Regards
To+
Edit: this revision uses 'grep' instead of List::MoreUtils::any
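For illustration only (array contents and variable names are assumed, not taken from the patch), the substitution pattern is:
use strict;
use warnings;
my @indexes = ( 'kw', 'ti', 'au' );
my $index   = 'ti';
# smartmatch version (removed):  if ( $index ~~ @indexes ) { ... }
# grep version used instead:
if ( grep { $_ eq $index } @indexes ) {
    print "known index\n";
}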
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Tested search, no problems found.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
On the cataloguing search (cataloguing/addbook.pl), if an item has a
notforloan value > 0, the item is not listed in the Location column.
This is quite confusing: the current behavior lets patrons believe that
there is no item for the biblio (or fewer than the real count).
Test plan:
1/ Create 2 biblio records A and B
2/ Create some items for A
3/ Create 1+ item(s) for B with a notforloan status > 0
4/ Reindex both records
5/ Launch a search on the cataloguing module and verify that the
notforloan items are not listed in the 'Location' column.
6/ Apply this patch and verify the not for loan items are listed ("Not
for loan (XXX)").
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes tests and QA script, not for loan items now show up.
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
This patch reduces three repeated code fragments into a single
internal subroutine, which is easier to read, has comments,
and should make it easier to refactor more buildQuery code
in the future.
_TEST PLAN_
Before applying
1) Run a bunch of different searches in the staff client and OPAC
in separate tabs
2) Apply the patch
3) Run the same searches again (maybe in yet more tabs) and notice
that the results are exactly the same.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Same results, no errors.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
The ZOOM specification defines that a ScanSet should provide one way
to retrieve terms suitable for display and another one for use
in further searches [1].
The Net::Z3950::ZOOM implementation actually provides both [2] but we
were using the wrong one.
Using $scanset->display_term(...) instead of $scanset->term(...) fixes
the problem.
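A sketch of the change; the connection options, PQF query and loop are illustrative, only the display_term()/term() calls come from the ZOOM API:
use strict;
use warnings;
use ZOOM;    # Net::Z3950::ZOOM
my $conn    = ZOOM::Connection->new( 'localhost', 9998, databaseName => 'biblios' );
my $scanset = $conn->scan_pqf('@attr 1=4 "har"');
for my $i ( 0 .. $scanset->size() - 1 ) {
    my ( $term, $occurrences ) = $scanset->display_term($i);   # was ->term($i)
    # $term is the display form, so non-latin characters are preserved
}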
To test:
- Do a index scan search (advanced search > more options > check
'index scan')
- Notice non-latin characters are replaced by one or more '@' symbols.
- Apply the patch
- Re-do the search, everything shows as it should.
- Try to follow any of the terms (clicking on them) and notice that
it actually gives you relevant results (i.e. is not searching for
@!!!!).
[1] http://zoom.z3950.org/api/zoom-1.4.html#3.6.3
[2] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#term()_/_display_term()
Sponsored-by: Universidad Nacional de Cordoba
Followed test plan. Patch behaves as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
I reproduced the issue and I confirm this patch fixes it.
I put "Fuß" in a title, reindex the record. Launch a search on Title
checking the "scan index" checkbox. And the non-latin characters are
well displayed.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Like the biblio search history, this feature provides an authority search history.
This history is available for both logged-in and anonymous users.
If the user is not logged in to Koha, the history is stored in an
anonymous user session.
The search history feature is now factored out into a new module.
This patch adds:
- 1 new db field search_history.type. It permits distinguishing the
search type (biblio or authority).
- 1 new module C4::Search::History. It deals with 2 different storages:
DB or cookie.
- 2 new UT files: t/Search/History.t and t/db_dependent/Search/History.t
- 1 new behavior: the 'Search history' link (on the top-right corner of
the screen) is always displayed.
Test plan:
1/ Switch on the 'EnableOpacSearchHistory' syspref.
2/ Go on the opac and log out.
3/ Launch some biblio and authority searches.
4/ Go on your search history page.
5/ Check that all your searches are displayed.
6/ Click on some links and check that results are consistent.
7/ Delete your biblio search history.
8/ Delete your authority search history.
9/ Launch some biblio and authority searches
10/ Delete all your history (cross on the top-right corner)
11/ Check that your search history is empty.
12/ Launch some biblio and authority searches.
13/ Login to your account.
14/ Check that all previous searches are displayed.
15/ Launch some biblio and authority searches.
16/ Check that these previous searches are displayed under "Current
session".
17/ Play with the 4 delete links (current / previous and biblio /
authority).
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All patches together pass QA script and tests.
Also, new tests in t/db_dependent/ pass.
Tested in all 4 OPAC themes, being logged in and anonymous.
Anonymous search history will be appended to personal search
history after logging in.
Also verified that cleanup_database still purges search history,
now also including the authority searches.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds :w and :p versions to the index for »Lexile number«
(it has only :n so far) and adds indexes for 653 (Index term
uncontrolled), 655 (Index term Genre/Form), 041 (language-audio) and
041 (language-subtitle). It also adds the »curriculum«-index to
Search.pm.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When using QueryWeightFields to add ranking on a search without an index,
the search actually uses:
- rank 1 : Title-cover,ext : exact title-cover
- rank 2 : ti,ext : exact title
- rank 3 : Title-cover,phr : phrase title-cover
- rank >7 : queries without index
This ranking prioritizes the title as a phrase and then falls back to any index.
This patch adds the title as a word list before the search on any index, so
that records with all the searched terms in the title, even out of order,
are more relevant.
Test plan :
- Enable QueryWeightFields syspref
- Perform a search, with sort by relevance, with two words often
contained in title, but never one near the other.
For example: 'History France'
=> Records with both words in title are first. For example:
"Histoire de France" and "La France : 100 ans d'histoire"
Signed-off-by: Jesse Maseto <jesse@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Relevance ranking and field weighting are hard to test,
as many MARC fields are indexed into the used indexes.
If we had an index that only indexed 245$a/200$a the
effect might be more visible.
I found no regressions with this patch; the change reads as
logical.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When searching with a sort (i.e., not by relevance) and there is an error
in the Zebra connection (the server is down or the query is wrong), you get the message:
Error : Can't call method "sort" on an undefined value at /home/kohaadmin/src/C4/Search.pm line 405.
This patch corrects this by not performing the sort if there are no results.
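Roughly, the guard looks like this sketch; @results (the per-server ZOOM result sets) and $sort_by are assumed to exist and are not copied from the patch:
for my $i ( 0 .. $#results ) {
    # only sort when Zebra actually returned a result set for this server
    next if !defined $results[$i] || $results[$i]->size() == 0;
    $results[$i]->sort( 'yaz', $sort_by );
}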
Steps to reproduce the error without patch:
In OPAC go to Advanced Search
Choose "Title" in first "Search for:" end enter "ccl=( and )"
Display "More options"
Set "Sort by" to "Title (A-Z)"
Click "Search" at bootom of page
Result:
Error:
Can't call method "sort" on an undefined value at /usr/share/kohaclone/C4/Search.pm line 430.
After applying the patch, try that search again. This time,
it should report no results found, without the error message.
Alternative Test plan :
- Set OPACdefaultSortField on something else than relevance
- Perform a simple search with a wrong CCL query. For example : ccl=( and )
=> You get the message : No results found ...
Patch behaves as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Adds another check to prevent a bad Zebra error message.
Works as described, passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch fixes two problems with the generation of
links to execute a Z39.50 search from the staff client
catalog and cataloguing search results page.
First, if using URI::Escape 3.30 or earlier, performing a simple search
with double quotes (e.g., "histoire algerie") breaks the JavaScript
on the results page because of:
function GetZ3950Terms(){
var strQuery="&frameworkcode=";
strQuery += "&" + "title" + "=" + ""histoire%20algerie"";
Second, the encoding of non-ASCII characters in the search
term was broken.
This patch moves URI escaping from Perl to template with uri TT filter.
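For illustration, the template-side escaping relies on TT's built-in uri filter; the link target and variable name below are assumed:
[%# the uri filter percent-encodes quotes and non-ASCII characters %]
<a href="/cgi-bin/koha/cataloguing/z3950_search.pl?title=[% title | uri %]">Z39.50 search</a>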
Test plan :
- To reproduce the issue with double quotes, the server
must be running URI::Escape 3.30 or earlier; the current
version of URI::Escape properly escapes double quotes.
- In the staff interface, perform a search with double quotes
that will return no results, e.g. "aaa xxx"
=> Without patch, javascript is broken
=> With patch, javascript is not broken
- Click on the Z39.50 button on the results page
=> Without the patch, the Title input is empty
=> With the patch, the Title input contains the search terms
Additional test:
Do a search with something like äöü and then click the Z39.50
button on the results page.
Without the patch, encoding is broken in the Z39.50 form.
With the patch, encoding is correct.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Fixed a few tabs. Passes tests and QA script.
I can't reproduce the Javascript problem, but I can reproduce
the Z39.50 encoding problem and can detect no regression.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes Koha <-> Zebra use MARCXML for the serialization when
using DOM, and USMARC for GRS-1.
* The following functions are modified to set the Zebra record syntax
according to the current sysprefs and configuration:
- C4::Context->Zconn
- C4::Context->_new_Zconn
* A new function 'new_record_from_zebra' is introduced, which checks the
context we are in, and creates the MARC::Record object using the right
constructor.
The following packages get touched to make use of the new function:
- C4::Search
- C4::AuthoritiesMarc
and the same happens to the UI scripts that make use of them (both in
the OPAC and STAFF interfaces).
* Calls to the unsafe ZOOM::Record->render()[1] method are removed.
Due to this last change the code for building facets was rewritten, and
for performance of facet creation I pushed higher version
dependencies for MARC::File::XML and MARC::Record (we rely on
MARC::Field->as_string).
* Calls to MARC::Record->new_from_xml and MARC::Record->new_from_usmarc
are wrapped with eval for catching problems [2] (see the sketch after this list).
* As of bug 3087, UNIMARC uses the 'unimarc' record syntax. This case is
correctly handled.
* As of bug 7818 misc/migration_tools/rebuild_zebra.pl behaves like:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
here we do exactly the same.
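A hedged sketch of the new_record_from_zebra idea; the loop, @zebra_records and $use_xml are illustrative placeholders, not the actual code:
use strict;
use warnings;
use MARC::Record;
use MARC::File::XML ( BinaryEncoding => 'utf8' );
my @zebra_records = ();    # raw records fetched from Zebra (placeholder)
my $use_xml       = 1;     # true when the server uses DOM / MARCXML
foreach my $raw (@zebra_records) {
    my $marc = eval {
        $use_xml ? MARC::Record->new_from_xml( $raw, 'UTF-8' )
                 : MARC::Record->new_from_usmarc($raw);
    };
    if ( $@ or !$marc ) {
        warn "skipping record that could not be parsed: $@";
        next;    # bad records are logged and ignored (bug 10684)
    }
    # ... use $marc as usual ...
}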
To test:
- prove t/db_dependent/Search.t should pass.
- Searching should remain functional.
- Indexing and searching for a big record should work (that's what the
unit tests do).
- Test an index scan search (on the staff interface):
Search > More options > Check "Scan indexes".
- Enable 'itemBarcodeFallbackSearch' and try to circulate any word, it
shouldn't break.
- Searching for a biblio in a new subscription shouldn't break.
- Running bulkmarcimport.pl shouldn't break.
- And so on... for the rest of the .pl files.
[1] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#render()
[2] a record that cannot be parsed by MARC::Record is simply skipped (bug 10684)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If a search gives results with 6 facets, one of those facets won't be
displayed. This is due to a bug in the code that only considers greater
than 6 facets in one area, and less than 6 in another.
Test Plan:
1) Perform a search that should give results for 6 different libraries
2) Note you only see 5 libraries in the facets with no option to expand
3) Apply this patch
4) Repeat step 1
5) Note you now have the option to expand the facets list
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
This patch should provide a regression test but I really don't know how
to write it.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes the following changes to UNIMARC biblio indexing :
A. Changes to UNIMARC conf files
1. add comments to biblio-koha-indexdefs.xml
2. make biblio-koha-indexdefs.xml more compact by grouping some
declarations
Ex : 200$f and 200$g => one declaration for 200$fg
3. suppress unneeded declarations (indexing of some 4XX fields and 6XX
fields not in unimarc format)
4. unindex some (sub)fields unneeded by most users (318, 207,230,210a,
215, 4XXd)
5. change the way 308 field is indexed (no visible changes)
6. replace Title-host with Host-item -- see bug 11119
7. index 208 in Material-Type -- see bug 11119
8. index 100 pos 8-9 and 9-12 in pubdate:y and pubdate:n
9. index 100 pos 8-9 in pubdate:s instead of 210$d
10. Index all subfields of note 334 and 327 in note index
11. Index 304 and 327 in title index as well as note index
327 can contain a list of titles included in a work
304 can contain the title of the original work in case of a
translation
12. Index 314 in author index as well as note index
314 can contain authors not mentioned in 200$f/g (the 4th, 5th etc.
author)
13. Index 328 note in Dissertation-information as well as note
14. Index 328$t in Title
B. Changes to ccl.properties :
1. add a new index Dissertation-information (1056)
2. fix EAN, pubdate and acqdate (they were not linked with bib1 attributes)
C. Changes to Search.pm
1. add Dissertation-information and suppress Title-host and UPC
D. Changes to QP config file queryparser.yaml
1. add Dissertation-information
2. fix EAN, pubdate and acqdate
Test plan :
If you cannot test in GRS1, test only in DOM, as GRS will be deprecated.
1. Apply the patch in a UNIMARC Koha running with DOM and ICU
2. copy src/etc/searchengine/queryparser.yaml into the main config
directory of QP
3. copy src/etc/zebradb/ccl.properties into the main config directory
of Zebra
4. copy src/etc/zebradb/marc_defs/unimarc/biblio/* into the main config
directory of Zebra
5. reindex biblios (rebuild_zebra.pl -r -b -x -v)
6. test note index : make some searches on 334$b or 327$b
7. test author index : make some searches on 314 field
8. test title index : make some searches on 304 and 327 field, make a
search on 328$t subfield
9. test dissertation-information index : make some searches on 328 field
10. In a record, put in the dates of the 100 field the values "1000" (1st
date) and "1001" (2nd date); try to search for a book written in year
1000, you should find the record; same for year 1001
11. make some searches and sort by date. It should work better than before,
especially if you have values like "c2009" or "impr. 2010" in 210
field
12. Regression test : make some searches on several indexes, like EAN,
etc. It should work as before
Test 10-12 with and without Queryparser activated.
Be careful: with Queryparser activated, the index names (title,
dissertation-information...) must be entered in lowercase only.
Of course, to test search and sort by dates, you need to have full
records, with dates in 100 field as well as 210 field.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Adding Number-local-acquisition to the known indexes in C4::Search allows
searching without the "ccl=" prefix.
Also corrects ccl.properties: inv must be an alias of
Number-local-acquisition.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In Koha 3.8, if a standard catalog search was performed and the user
clicked the Z39.50 search button, the search string would automatically
be placed in the isbn field for the Z39.50 search form.
Changes to the code have since broken this functionality.
Test Plan:
1) From mainpage.pl, use "Search the catalog" to search for the string
"9781570672835"
2) Click the Z39.50 Search button
3) Note the string is placed in the title field
4) Apply this patch
5) Repeat steps 1-2
6) Note the string is placed in the isbn field
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Tested old and new ISBN with and without hyphens.
Also tested some other keyword searches.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Note that the behavior will be a bit odd if you do a 'replace via
Z39.50' from a bib record whose title happens to be an ISBN, but
this scenario seems unlikely enough to ignore.
It could be useful to index the original language of a document (e.g.
"fre" for the English translation of a French novel).
This patch renames the Bib-1 use attribute 1095 from
Code-language-original to language-original and uses it to index:
- the MARC21 041$h subfield
- the UNIMARC 101$c subfield
It adds "language-original" to the list of indexes in Search.pm.
Test plan :
A. in a MARC21 GRS1 environment
1. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/marc21/biblios/record.abs) from
your source etc/ directory to your main koha etc/ directory
2. Reindex zebra
3. Make some searches, like "language-original:fre"
B. in a MARC21 DOM environment
4. Copy Zebra config files (zebradb/biblios/etc/bib1.att, zebradb/ccl.properties,
marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl) from your source etc/
directory to your main koha etc/ directory
5. Reindex zebra
6. Make some searches, like "language-original:fre"
C. in a UNIMARC GRS1 environment
7. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/unimarc/biblios/record.abs) from
your source etc/ directory to your main koha etc/ directory
8. Reindex zebra
9. Make some searches, like "language-original:fre"
D. in a UNIMARC DOM environment
10. Copy Zebra config files (zebradb/biblios/etc/bib1.att,
zebradb/ccl.properties, marc_defs/unimarc/biblios/biblio-zebra-indexdefs.xsl)
from your source etc/ directory to your main koha etc/ directory
11. Reindex zebra
12. Make some searches, like "language-original:fre"
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Under certain circumstances, a search term without quotation marks
returns the expected results while the same search with a
double quote embedded in it fails.
Koha should ignore the quotation marks and return results anyway.
This appears when the QueryWeightFields syspref is activated (and
QueryAutoTruncate is off), as field weighting builds a complex CCL
query using double quotes around search words. This patch simply
replaces double quotes in search words with a space.
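A minimal sketch of the substitution (variable name assumed):
my $operand = 'centre "ville';
$operand =~ s/"/ /g;    # stray double quotes become spaces before the weighted CCL query is built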
Test plan :
- Set QueryAutoTruncate off (you may also need to set QueryFuzzy to off)
- Set QueryWeightFields off
- Perform a search on two words where you have results, like : centre "ville
=> you get results
- Set QueryWeightFields on
- Perform the same search
=> you get the same results
Signed-off-by: Leila <koha.aixmarseille@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If you select an index in the search dropdown and then enter a QP
query starting with the field, Koha will prepend the index you do not
want to use at the beginning of the search, resulting in a search that
probably does not match what you were hoping for.
To test:
1) Select an index in the search dropdown in the OPAC. Author is fine.
2) Enter a search term using manually entered indexes. For example:
ti:cat in the hat
3) Note that the search fails.
4) Apply patch.
5) Repeat steps 1 and 2.
6) Note that the search succeeds.
7) Sign off.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
MARC::Record 2.0.6+ enables the warnings pragma, and as a
consequence, started logging cases where a routine in
C4::Search was calling MARC::Field->subfield() with an undef
subfield label. This patch removes the log noise.
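An illustration of the kind of guard involved; the sample field and the undefined $code are made up for the example:
use strict;
use warnings;
use MARC::Field;
my $field = MARC::Field->new( '245', '1', '0', a => 'Example title' );
my $code;    # may be undef when no subfield code is mapped
# guard the call instead of letting MARC::Field warn about an undef code
my $value = defined $code ? $field->subfield($code) : undef;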
To test:
- Run prove -v t/db_dependent/Search.t
- There will be warnings about
"Use of uninitialized value $code_wanted in string" in MARC::Field.
- Apply the patch.
- Those warnings are gone.
Signed-off-by: Liz Rea <liz@catalyst.net.nz>
Tests now pass
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When a record fails to decode during a search, Koha dies with an error.
Koha should ignore bad records and continue on ( and log the error ).
An example of a record that Zebra will happily ingest but which MARC::Record
doesn't like is one that contains a punctuation character in a tag label.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When an index does not contain a structure part, the structure "wrdl"
is automatically added, and a structure is mandatory to build the search
query (to convert ':' into '=').
But the code that tests whether the structure is already defined looks
at the entire index string:
$index =~ /(st-|phr|ext|wrdl|nb|ns)/
It should look for a comma followed by a structure and, in the case of
"nb" and "ns", look for an exact match.
The consequence is that an index whose name contains ns, nb, phr, etc.
does not work.
This patch modifies the regexp for this part and for other parts that
look at structures in the index.
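A sketch of the corrected test; the exact regexp in the patch may differ slightly:
use strict;
use warnings;
my $index = 'ansa';    # index name that merely contains 'ns'
# before: matched anywhere in the name, so 'ansa' looked like it had a structure
# after: only a ',structure' suffix, or an exact nb/ns index, counts
unless ( $index =~ /,(st-|phr|ext|wrdl)/ or $index =~ /^(nb|ns)$/ ) {
    $index .= ',wrdl';    # default structure appended as before
}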
Test plan :
- Deactivate all searching sysprefs.
- Create a new index called "ansa", number 8999, in bib1.att,
ccl.properties and records.abs
- Index a biblio with a value in this index, e.g. "VALUE"
- Perform a search on this index by editing URL:
http://<server>/cgi-bin/koha/catalogue/search.pl?idx=ansa&q=VALUE
=> Without patch, the search does not work. The PQF query is
"@and ansa: VALUE"
=> With patch, the search works. The PQF query is "@attr 1=8999 VALUE";
- Perform the same test with an index whose name contains a structure, e.g. "aphra"
- Set QueryAutoTruncate syspref to automatically
=> Check * is added to operand : PQF query is
"@attr 1=8999 @attr 4=6 @attr 5=1 VALUE"
- You may check stopword removal, but this feature is obsolete
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: as far as I can test, this works. Small tab error reported
by koha-qa, fixed in a followup.
This kind of patch is difficult to test without explicit instructions;
not everyone knows how to check what kind of PQF search is used.
I don't know. But I can test search results.
Test:
1) Deactivate search sysprefs
QueryAutoTruncate = only if * is added
QueryFuzzy = Don't try
QueryStemming = Don't try
QueryWeightFields = Disable
UseQueryParser = Do not try
2) Create new index 'ansa'
bib1.att : att 8999 ansa
ccl.properties : ansa 1=8999
records.abs : melm 999 ansa:w,ansa:p
1) and 2) from comment 3 on Bug
3) On the understanding that the index refers to field 999,
edited default framework, made 999a visible on editor
4) Edit sample record, add 'VALUE' to 999a, save, reindex
5) Search with /cgi-bin/koha/catalogue/search.pl?idx=ansa&q=VALUE
No results
6) Apply patch, repeat search
Got results
That's all I can test. If not enough for QA, then this
must wait until further and explicit test instructions
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
There is (for MARC21, at least) an existing index that this patch
fixes: Code-institution.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Before fixing UNIMARC DOM indexing, we must fix GRS-1 indexing
1) In advanced search, some Coded fields index are not working: Print,
Illustration, Content
2) Country-heading index is not working
3) Some subfields are indexed in wrong indexes :
102$a should be in Country-publication instead of Country-heading
(non defined in bib1.att)
106$a, filled only for printed works, should be in ff88-23 (form of
item) instead of itype. (ff88-23 is made for MARC21 008 pos
23, which contains the same data as 106a)
200$b should be in Material-type instead of (or in addition to) itype
and itemtype: (Material-type :"free-form string, ... that
describes the material type of the item, e.g., cassette, kit,
computer database, computer file.")
100$a pos 22-24 should not be indexed as "ln" : it is the language of
the record, not the language of the resource
4) Index names are too long : if we index new positions of coded fields,
with existing names it breaks Zebra indexing (there must be a limit
on line length in record.abs?)
5) There are a lot of warnings when rebuilding Zebra.
This patch makes some changes in bib1.att (could be used later to improve
search) :
- fixing wording for att 51 and 1012
- adding comments for attributes based on MARC21 008 field (8800-8841)
- creating 8806 (tpubdate), 8838 (Modified-code), 8818 (ff8-18), 8840
(ff8-18-21), 8819 (ff8-19), 8821 (ff8-21), 8828 (ff8-28), 8830
(ff8-30), 8831 (ff8-31)
- creating attributes specific to UNIMARC : 9701-9707 (Video-mt,
Graphics-type, Graphics-support, Title-page-availability,
Cumulative-index-availability, script-Title, char-encoding)
- setting apart 3 blocks of attributes, so it could be easy to make
further changes :
-- common to Marc21 and UNIMARC : 8806, 8822, 8838
-- slightly different in Marc21 and UNIMARC (different meanings
according to the type of the record => don't match a single
UNIMARC field)
-- specific to UNIMARC : 9701-9707
In ccl.properties :
- creating a new index: Country-publication 1=1053
- suppressing some warns by mapping with bib1 att:
Date-time-last-modified, Name, rtype, Music-number
- defining indexes using the 3 blocks attributes defined in bib1
(common to Marc21 and UNIMARC, slightly different, specific to UNIMARC)
In record.abs :
- renaming some index for 100-105-110 fields
- correcting indexing of 102$a (country of publication)
106$a (ff88-23)
100$a pos 22-24 (language of the record, no longer
indexed)
105$a pos. 0-3 (illustration code)
200$b (for the moment, I keep it indexed in
itype and itemtype, but also Material-Type)
In C4/Search.pm :
- adding "Country-publication" index
In OPAC and staff interface template subtypes_unimarc.in :
- renaming indexes to take into account the changes made to Zebra
config files
To test (this cannot be done with a sandbox) :
1) Apply the patch in a UNIMARC GRS-1 Koha instance
2) Copy the following files from the etc/zebradb of your source
directory into the etc/zebradb of your main Koha directory:
-- etc/zebradb/biblios/etc/bib1.att
-- etc/zebradb/ccl.properties
-- etc/zebradb/marc_defs/unimarc/biblios/record.abs
3) Reindex your data (rebuild_zebra -x -b -r -v)
4) Try to use those Coded fields indexes in Advanced search, in OPAC
and Staff interface (available after clicking on "More options",
then on "Coded information filters"):
Audience, Print, Literary genre, Biography, Illustration, Content,
Video Types, Serials, Serial Type, Periodicity, Regularity
5) Try to search "Country-publication=FR" in simple search
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No koha-qa errors.
Tests for GRS-1
Followed test plan
Search by coded fields works, but only on the OPAC;
on the staff side only a few of the options are present
Search by Country-publication works after the patch
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch updates the wthdrawn field in items and deleteditems to be
withdrawn instead. No functional changes are made.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Save for translation files (those will be fixed in the next release),
the only occurrence of wthdrawn is in updatedatabase.pl
No koha-qa errors.
This touches many files, and I did not test everything,
but all seems normal. I think that any problem could
be fixed later.
Perhaps both entries in updatedatabase.pl could be joined
into one, but that's for QA.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Add an option to cleanup_database.pl to purge the search_history
entries older than X days.
Test plan:
- Apply patch
- Check that your test DB has some entries a little older than 30 days
and a few ones even older than that in search_history:
SELECT * FROM search_history WHERE time < DATE_SUB( NOW(), INTERVAL 30 DAY );
If not, modify some existing entries.
- Run cleanup_database with a fixed number of days (replace XX with
something higher than 30)
/misc/cronjobs/cleanup_database.pl --searchhistory XX
- Check that entries older than XX days got deleted from search_history
- Run without the day parameter
/misc/cronjobs/cleanup_database.pl --searchhistory
- Check that entries older than 30 days got deleted from search_history
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
To test:
[1] Turn on the syspref for enabling OPAC holds.
[2] Create an item and bring it up on the OPAC search
results. Run through the following possibilities,
by changing the item, and verify that the place hold
link in OPAC search results appears only when the item is
- not lost AND
- not withdrawn AND
- not damaged (or is damaged and AllowHoldsOnDamagedItems is ON) AND
- the item is not marked not-for-loan OR
the item has a negative notforloan value (e.g., it is on order)
Note that it is necessary to reindex the test bib after making
each change to the test item.
[3] Also verify that whether or not the item is in transit does
NOT affect whether the place hold link appears.
[4] Verify that there is no regression on bug 8975 (i.e., if an
item is on order, that status should be displayed in staff client
search results).
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In search results, one could not place a hold on an item that is in transit
and for loan (items.notforloan=0). This happens when AllowOnShelfHolds
is allowed.
This patch repairs a regression introduced by the patch for bug 8975.
Test plan :
- Set AllowOnShelfHolds to on
- Create a record with a normal item : not lost, not withdrawn, not
damaged, notforloan=0
- Index this record
- Perform a search on OPAC that returns this record (and others)
=> You see in actions "Place hold"
- Add this item in transit : /cgi-bin/koha/circ/branchtransfers.pl
- Re-perform the search on OPAC
=> You see in actions "Place hold" and item "in transit"
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The original implementation of QueryParser did not handle truncation
based on the QueryAutoTruncate system preference. This patch adds support.
To test:
1) Apply patch.
2) Turn on UseQueryParser.
3) Set QueryAutoTruncate to "automatically."
4) Search for "har". Note that it returns results with words
like "Harry" (i.e. with right truncation).
5) Search for "har*". Note that it still returns results with right
truncation.
6) Set QueryAutoTruncate to "only when * is added."
7) Search for "har". Note that it returns only records that have the
exact word "har" in them (most likely there will be none unless you
have Hebrew items).
8) Search for "har*". Note that once again it returns results for "Harry"
(i.e. right truncated results).
9) Sign off.
This patch also reindents a hash in Koha/QueryParser/Driver/PQF.pm
because it was hard to read before.
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All tests and QA script pass.
Thx for fixing this Jared!
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
I just added use utf8; to Search.pm and the problem
was solved.
Test plan :
1- Add bib records with non-latin characters
2- search for some of these records
3- try to refine your search using Subject / Author
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Works, fixing the URLs in facets. Now they work correctly.
No errors.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
I tested facets with the 22 Arabic records provided on
bug 9579 successfully. Before the patch the links are not
correct, after applying the patch the links work as
expected.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
When QueryWeightFields is enabled, the search query is created with several options.
In C4::Search::_build_weighted_query, when no index is defined, the query is built with fuzzy and stemming options. When an index is defined, these options are missing; only unconditional right truncation is used.
The consequence is that when QueryStemming is disabled, a search with an index can give more results (due to right truncation) than a search without one.
This patch adds stemming and fuzzy options to searches with an index, conditioned on the QueryFuzzy and QueryStemming sysprefs.
It also changes the word list search (wrdl) weight to r6 in order to set the fuzzy search to r8 and the stemming search to r9 (like the search without an index).
Test plan :
- Go to searching preferences (admin/preferences.pl?tab=searching)
- Set QueryAutoTruncate to "only if * is added"
- Set QueryFuzzy and QueryStemming to "Don't try"
- Set QueryWeightFields to "Enable"
- Go to advanced search page
- Select an index (e.g. Title) and perform a search on a short word
=> Look at the zebrasrv log and see that the query does not contain right truncation : @attr 5=1
- Set QueryFuzzy to "Try"
- Perform same search
=> Look at the zebrasrv log and see that the query contains fuzzy : @attr 5=103
- Set QueryFuzzy to "Don't try" and QueryStemming to "Try"
- Perform the same search
=> Look at the zebrasrv log and see that the query contains right truncation on the stemmed word : @attr 5=1
Signed-off-by: koha.aixmarseille <koha.aixmarseille@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
This patch makes Fuzzy and Stemming influence search results on weighted
queries when using an index. A side-effect is however that the results for a
search like index=term* (adding truncation manually too) could be LOWER than
the number of hits for index=term. Further comments on Bugzilla.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
The current implementation (mostly commented out in this patch)
uses heuristics to guess which strings need decoding from utf-8,
doesn't support utf-8 characters in templates, and has problems
with utf-8 data from the database.
With these changes, Koha Perl code always uses utf-8 encoding
correctly. All incoming data from the database is already
correctly marked as utf-8, and decoding of utf-8 is required
only for Zebra and XSLT transfers, which don't set the utf-8 flag
correctly.
For output, the standard Perl :encoding(utf8) handler is used,
which also removes various "wide character" warnings as a side effect.
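A small sketch of the two sides of this, with placeholder data rather than the actual patch code:
use strict;
use warnings;
use Encode qw( decode_utf8 );
binmode STDOUT, ':encoding(utf8)';    # output layer: no more "wide character" warnings
# Zebra and XSLT results arrive as raw octets without the utf8 flag set,
# so decode once at that boundary:
my $raw_octets = "<title>Fu\xc3\x9f</title>";    # placeholder for data from Zebra
my $characters = decode_utf8($raw_octets);
print "$characters\n";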
Test scenario:
1. make sure that you have utf-8 characters in your biblio
records, patrons, categories etc.
2. try to search records on intranet and opac which contain
utf-8 characters
3. install language which has utf-8 characters, e.g. uk-UA
dpavlin@koha-dev:/srv/koha/misc/translator(bug_6554) $
PERL5LIB=/srv/koha/ perl translate install uk-UA
4. switch language to uk-UA and verify that templates
display correctly
5. test search and Z39.50 search and verify that characters
are correct
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
I followed the test plan, adding utf-8 characters to library names,
patron categories, titles, and authorized values. I tried the uk-UA
translation and everything looked good.
When performing Z39.50 searches for titles containing utf-8 characters I
got results which were still occasionally contaminated with dummy
characters [?], but I assume this is Z39.50's fault, not the patch's.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Already signed, add mine.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Removed NoZebra vestiges. This comprises several code blocks that depend on the NoZebra syspref and NZ related functions/methods.
C4::Biblio->
GetNoZebraIndexes
_DelBiblioNoZebra
_AddBiblioNoZebra
C4::Search->
NZgetRecords
NZanalyse
NZoperatorAND
NZoperatorOR
NZoperatorNOT
NZorder
C4::Installer->
set_indexing_engine
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
* Fix a long-standing bug in the linker that could crash the linker when
run against odd data.
* Sanitize input to SimpleSearch.
* Correctly handle CCL indexes with QP.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch also corrects the definition of the an= index, which was
missing exactness.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Koha was not previously escaping CGI input, which caused problems for
highlighting and is a security issue.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Thx for fixing this.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
QP searches with && broke search highlighting on the OPAC details page.
This patch corrects encoding of the query_desc parameter that is passed
to the details page.
My last attempt at rebasing also transposed the variable for index
names with the variable for operators, meaning that the dropdown in
the basic search did not work.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Fixes some problems raised during QA successfully.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
With the inclusion of this patch, all searches will (try to) use
QueryParser for handling queries for both the bibliographic and authority
databases if UseQueryParser is enabled. If QueryParser is unavailable,
UseQueryParser is disabled, or the search uses CCL indexes, the old
search code will be used.
To test:
1) Apply patch.
2) Run the unit test with `prove t/QueryParser.t`
3) Enable the UseQueryParser syspref.
4) Try searches that should return results in the following places:
* OPAC (simple search)
* OPAC (advanced search)
* OPAC (authorities)
* Staff client (header search)
* Staff client (advanced search)
* Staff client (cataloging search)
* Staff client (authorities)
* Staff client (importing a batch using a match point)
* Staff client (searching for an item for adding to a label)
* Staff client (acquisitions)
* Staff client (searching for a record to create a serial)
* ANYWHERE ELSE I HAVE FORGOTTEN
5) Disable the UseQueryParser syspref. Repeat at least some of the
searches you did above.
6) If all searches worked, sign off.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Elliott Davis <elliott@bywatersolions.com>
Searching still works as expected in various places.
The QueryParser syspref seemed to be enabled by default.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch rewrites the GetReserveStatus routine so that it takes
the itemnumber and/or the biblionumber as parameters.
In some places, the C4::Reserves::CheckReserves routine is called when
we just want to get the status of the reserve. In these cases,
C4::Reserves::GetReserveStatus is now called instead.
This routine executes 1 SQL query (2 at most).
Test plan:
Check that there is no regression on the different pages where reserves
are used. The different statuses will be the same as before applying
this patch.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
In C4::Search::FindDuplicate, when the biblio has no ISBN, the duplicate search adds :
$query .= " and itemtype=$result->{itemtype}".
This is wrong when the itemtype is defined at the item level.
This patch simply removes the itemtype from the duplicate search.
Test plan :
- Go to a biblio details page
- Click on "Edit as new (duplicate)"
- If ISBN is defined, remove it
- Click on save
=> a duplicate is detected
- Change biblio item type and save
=> a duplicate is detected
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Prior to this patch, there were three identical ZOOM event loops in
C4::Search. This is wasteful, and goes against all good programming
practice. This patch refactors the ZOOM event loops into a separate
subroutine which is called by SimpleSearch, searchResults, and
GetDistinctValues.
The new routine, _ZOOM_event_loop, processes the ZOOM event loop and,
once it has been fully processed, passes control to a closure provided
by the calling routine for processing the results, and destroys the
result sets.
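Approximately, the refactored routine has this shape (a sketch based on the description above; the argument handling and callback signature are assumptions):
use ZOOM;    # Net::Z3950::ZOOM
sub _ZOOM_event_loop {
    my ( $zconns, $results, $callback ) = @_;
    # drive the shared ZOOM event loop until every connection has finished
    while ( ( my $i = ZOOM::event($zconns) ) != 0 ) {
        my $event = $zconns->[ $i - 1 ]->last_event();
        if ( $event == ZOOM::Event::ZEND ) {
            # hand the finished result set to the caller-supplied closure
            my $size = $results->[ $i - 1 ] ? $results->[ $i - 1 ]->size() : 0;
            $callback->( $i, $size ) if $size > 0;
        }
    }
    # destroy the result sets once the caller is done with them
    $_->destroy() for grep { defined } @$results;
}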
To test (after applying patch):
1) Do a regular bibliographic search that should return results.
2) Do a search in the Cataloging module that should return results.
3) If you get results from both searches, the patch works.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
In order to allow holds on items with notforloan = -1, processing
of "unavailable" items in Search.pm was altered to exclude items
with notforloan < 0. (See Bug 2341: items marked 'on order' not
reserveable from search results). Doing so meant that such items
were excluded from the list, in staff client search results, of
items which are unavailable.
This patch changes the logic of that processing so that items
with notforloan < 0 are considered unavailable, but can still
be placed on hold.
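Purely illustrative logic, with hash keys and the syspref variable assumed for the example:
use strict;
use warnings;
my $allow_holds_on_damaged = 0;
my $item = { withdrawn => 0, itemlost => 0, damaged => 0, notforloan => -1 };   # e.g. on order
# counts as unavailable for display purposes, including notforloan < 0 ...
my $unavailable = $item->{withdrawn} || $item->{itemlost} || $item->{damaged}
               || $item->{notforloan} != 0;
# ... but only notforloan > 0 (plus lost/withdrawn/damaged) blocks the hold link
my $holdable = !( $item->{withdrawn}
               || $item->{itemlost}
               || ( $item->{damaged} && !$allow_holds_on_damaged )
               || $item->{notforloan} > 0 );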
To test, edit a record with a single item and view that record
in search results. When the item is is on order (notforloan -1)
it should say so. The holds link should be INactive only if:
- item is withdrawn AND/OR
- item is lost AND/OR
- item is damaged (and AllowHoldsOnDamagedItems is off) AND/OR
- item is not for loan, with notforloan > 0
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
All tests pass (note that a reindex is required if changing item
statuses - which is why my first tests failed).
Passed-QA-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Passed-QA-by: Marcel de Rooy <M.de.Rooy@rijksmuseum.nl>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
When a series title includes a question or exclamation mark, these must be removed
to prevent search failure.
http://bugs.koha-community.org/show_bug.cgi?id=8888
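Something along these lines (variable name assumed):
my $series_title = 'Do Androids Dream of Electric Sheep?';
$series_title =~ s/[?!]//g;    # strip ? and ! so the series facet link query does not fail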
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Before the patch: Series facet links with ! or ? return no results.
After the patch the same links return valid results.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch enables the shelving location facet as an
alternative to the branches facet in two situations:
A) SingleBranchMode is enabled
B) There is only one branch in the branches table
Test Plan:
1) Catalog multiple items with different shelving locations.
2) Test enable by enabling SingleBranchMode
3) Test enable by deleting all but one branch
Based on initial patch by Ian Walls.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tested cases 2) and 3) successfully in OPAC and staff client
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
When working with hierarchical subject headings, it is sometimes helpful
to do a search for all records with a specific subject, plus
broader/narrower/related subjects. This patch adds support for these
"exploded" subject searches to Koha.
To test:
1) Make sure you have a bunch of hierarchical subjects. I created
geographical subjects for "Arizona," "United States," and "Phoenix,"
and linked them together using 551s, and made sure I had a half
dozen records linking to each (but not all to all three).
2) Do a search for su-br:Arizona (or choose "Subject and broader terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona" and the
records with the subject "United States"
3) Do a search for su-na:Arizona (or choose "Subject and narrower terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona" and the
records with the subject "Phoenix"
4) Do a search for su-rl:Arizona (or choose "Subject and related terms"
on the advanced search screen with "more options" displayed), and
check that you get the records with the subject "Arizona," the
records with the subject "United States," and the records with the
subject "Phoenix"
5) Ensure that other searches still work (keyword, subject, ccl,
whatever)
6) Sign off
Technical details:
This patch adds a shim in front of C4::Search::buildQuery in order to
preprocess the query and call the _handle_exploding_search callback.
This shim will allow us to gradually offload query parsing to a new
query parser module.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tested by toggling both the hidelostitems preference and the
OpacHiddenItems preference. Both work as expected in the normal
search results display.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
If @limit = (''), buildQuery failed.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
All searches that I tried (keyword, indexed, CCL, with limits, without,
etc.) worked fine. There are warnings about uninitialized variables in
the OPAC, but they exist on master as well and therefore should not
block these patches.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
When testing bug 8743, I discovered a missing index in my authority file.
The error message was
"CCL parsing error (10014) Unknown qualifier ZOOM"
which is not very helpful because it does not show the query that was made.
This patch adds the query itself after the Zebra error.
This patch adds the Koha::Indexer::RecordNormalizer and
Koha::Indexer::MARC::RecordNormalizer::EmbedSeeFromHeadings packages
to enable the inclusion of alternate forms of headings in bibliographic
searches. When the new syspref IncludeSeeFromInSearches is turned on
(default is off) rebuild_zebra.pl will insert see from headings from
authority records into bibliographic records when indexing, so that a
search on an obsolete term will turn up relevant records.
To test:
1) Enable IncludeSeeFromInSearches
2) Add a heading that has an alternate form to a record (for example,
"Cooking" has the alternate form "Cookery," if you have authority
records from LC)
3) Index the zebraqueue (or reindex if you haven't indexed your system
yet)
4) Confirm that if you search for "Cookery" you get the record you
just modified
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 5 August 2012
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 11 September 2012
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Also checked:
- Verified database update works correctly
- Checked system preference and its description
- Checked staff/opac detail pages with feature on/off
- Checked staff/opac search facets
- Downloaded and tested records in various formats
- Tried different searches for 'see from' entries of authorities
- Ran all unit tests
No problems found.
Around line 1470-something:
my $sth = $dbh->prepare(
    "SELECT tagfield FROM marc_subfield_structure
     WHERE kohafield LIKE 'items.itemnumber'"
);
$sth->execute;
This patch replaces that with a call to GetMarcFromKohaField.
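The replacement presumably looks something like this (the default framework code '' is an assumption):
use C4::Biblio qw( GetMarcFromKohaField );
# same lookup without raw SQL; returns the MARC tag (and subfield) mapped to items.itemnumber
my ( $itemtag, $itemsubfield ) = GetMarcFromKohaField( 'items.itemnumber', '' );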
To test:
1) Apply patch.
2) Do a search that returns both available and unavailable items.
You'll know if the patch isn't working.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch removes the AmazonReviews and AmazonSimilarItems
features from the OPAC and staff client. With only one Amazon
feature remaining, cover images, the *AmazonEnabled preference
is also removed in favor of checking the *AmazonCoverImages
preference. Two other system preferences, AWSAccessKeyID and
AWSPrivateKey are removed as they were required only by the
removed features.
Handling of book cover images from Amazon is unchanged.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Turned on amazon covers in opac and staff client and all
worked as expected. Then tested to make sure other cover image
services still worked and they do.
Signing off.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This is for MARC 21 only.
Made the following changes:
- In getFacets in C4/Koha.pm, added an item type facet for 952y and 942c
- In getRecords in C4::Search.pm, added code to get the description of itemtype codes
- Updated facets.inc in both staff and OPAC to show an item type label in the facets block
To test:
Add records such that a certain itype (say BK) is present in both 942c and 952y in two DIFFERENT records.
Run a search where both test records are present. Check that item types are presented in the facets block (both OPAC and staff).
Click on the itype (say BK), both the test records should appear in the refined results. This shows that the feature works for both 942c and 952y.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Ian Walls <koha.sekjal@gmail.com>
QA Comment: fixed capitalization in template includes according to HTML4 coding
guideline ("Item types" instead of "ItemTypes")
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This is offered as a compromise alternative to creating
a new Title-rel index, to avoid having the statement of
responsibility unduly affect field weight when using the DOM
filter and MARC21 -- the problem with creating a Title-rel index
is that it would *force* reindexing upon upgrade.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Due to a dependency cycle between C4::Search and C4::Items, searches
in the OPAC die spectacularly under Plack. This counter-patch extends
dpavlin's solution and replaces use with require for C4::Search in
C4::Items and for C4::Items in C4::Search.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
To test:
- Syspref QueryStemming = Try
- Install Norwegian bokmål:
cd misc/translator/
perl translate install nb-NO
- Go to Home › Administration › System Preferences > I18N/L10N
and enable "Norsk bokmål(nb-NO)" for opaclanguages as well as
setting opaclanguagesdisplay = Allow
- Make sure you have selected "Norsk bokmål" as the active language
in the OPAC
- Find a record that has a tag (which does not contain any digits)
- Click on the tag and see that you get the error in the title of
this bug
- Apply the patch
- Click on the tag again and the error should be gone
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Easy to test with a great test plan. Works nicely.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Marijana Glavica <mglavica@ffzg.hr>
I am signing it off because it doesn't break anything and I will report
another bug for language issues described in my previous comment.
Removed MySQLism backquotes
Optionally delete bibliographic record when batch deleting items, if no items remain on the record.
Adds deleting of reserves to DelBiblio. Since subscriptions are deleted automatically,
it made sense for deletion of reserves to maintain the same behavior.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
I like the way this works, and it does. Passes tests.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This seems like a very big improvement, especially for people using screen
readers. I agree that the change to C4::Search is required.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Adds the ability to perform advanced searches in both the OPAC and staff client on more than
a single AdvancedSearchType at a time. Support included for Itemtype, Collection Code and Shelving Location.
The AdvancedSearchTypes syspref is repurposed; no longer a single value, it can now take
multiple item code fields separated by "|". The order of these fields will determine the order
of the tabs in the OPAC and staff client advanced search screens. Values within the search type
are OR'ed together, while each different search type is AND'ed together in the query limits. The
current stored values are supported without any required modification.
Each set of advanced search fields are displayed in tabs in both the OPAC and staff client. The
first value in the AdvancedSearchTypes syspref is the selected tab; if no values are present, "itemtypes"
is used. For non-itemtype values, the value in AdvancedSearchTypes must match the Authorised Value name, and
must be indexed with 'mc-' prefixing that name.
<li> elements in tab are assigned unique IDs, so the text of the tab can be altered to match the
library's needs (using JQuery)
The logic to handle the 5 element row limit has been moved from the Perl to the templates, since Template::Toolkit
has a simple method for extracting the count of an element in a loop and performing 'modulus' on it.
2011-12-21: Incorporated changes recommended by Owen Leonard on the bug report.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Add 700$b to UNIMARC author facets.
Other facet subfields could be added now, for example other subject
subfields.
Follow-up patches are required to better handle MARC21 subfields and to
choose other subfields for the UNIMARC format.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Tested under both MARC21 and UNIMARC. Does not cause any regressions with
MARC21, and offers the possibility for better faceting there in the future.
Works as advertised with UNIMARC.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Squashed patch incorporating all previous patches (there is no functional
change compared to the previous version of this patch, this patch merely
squashes the original patch and follow-up, and rebases on latest master).
=== TL;DR VERSION ===
*** Installation ***
1. Run installer/data/mysql/atomicupdate/bug_7284_authority_linking_pt1
and installer/data/mysql/atomicupdate/bug_7284_authority_linking_pt2
2. Make sure you copy the following files from kohaclone to koha-dev:
etc/zebradb/authorities/etc/bib1.att,
etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml,
etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl,
etc/zebradb/marc_defs/marc21/authorities/koha-indexdefs-to-zebra.xsl, and
etc/zebradb/marc_defs/unimarc/authorities/record.abs
3. Run misc/migration_tools/rebuild_zebra.pl -a -r
*** New sysprefs ***
* AutoCreateAuthorities
* CatalogModuleRelink
* LinkerModule
* LinkerOptions
* LinkerRelink
* LinkerKeepStale
*** Important notes ***
You must have rebuild_zebra processing the zebraqueue for bibs when testing this
patch.
=== DESCRIPTION ===
*** Cataloging module ***
* Added an additional box to the authority finder plugin for "Heading match,"
which consults not just the main entry but also See-from and See-also-from
headings.
* With this patch, the automatic authority linking will actually work properly
in the cataloging module. As Owen pointed out while testing the patch,
though, longtime users of Koha will not be expecting that. In keeping with
the principles of least surprise and maximum configurability, a new syspref,
CatalogModuleRelink makes it possible to disable authority relinking in the
cataloging module only (i.e. leaving it enabled for future runs of
link_bibs_to_authorities.pl). Note that though the default behavior matches
the current behavior of Koha, it does not match the intended behavior.
Libraries that want the intended behavior rather than the current behavior
will need to adjust the CatalogModuleRelink syspref.
*** misc/link_bibs_to_authorities.pl ***
Added the following options to the misc/link_bibs_to_authorities.pl script:
--auth-limit Only process those headings that match the authorities
matching the user-specified WHERE clause.
--bib-limit Only process those bib records that match the
user-specified WHERE clause.
--commit Commit the results to the database after every N records
are processed.
--link-report Display a report of all the headings that were processed.
Converted misc/link_bibs_to_authorities.pl to use POD.
Added a detailed report of headings that linked, did not link, and linked
in a "fuzzy" fashion (the exact semantics of fuzzy are up to the individual
linker modules) during the run.
*** C4::Linker ***
Implemented new C4::Linker functionality to make it possible to easily add
custom authority linker algorithms. Currently available linker options are:
* Default: retains the current behavior of only creating links when there is
an exact match to one and only one authority record; if the 'broader_headings'
option is enabled, it will try to link headings to authority records for
broader headings by removing subfields from the end of the heading (NOTE:
test the results before enabling broader_headings in a production system
because its usefulness is very much dependent on individual sites' authority
files)
* First Match: based on Default, creates a link to the *first* authority
record that matches a given heading, even if there is more than one
authority record that matches
* Last Match: based on Default, creates a link to the *last* authority
record that matches a given heading, even if there is more than one record
that matches
The API for linker modules is very simple. All modules should implement the
following two functions:
<get_link ($field)> - return the authid for the authority that should be
linked to the provided MARC::Field object, and a boolean to indicate whether
the match is "fuzzy" (the semantics of "fuzzy" are up to the individual plugin).
In order to handle authority limits, get_link should always end with:
return $self->SUPER::_handle_auth_limit($authid), $fuzzy;
<flip_heading ($field)> - return a MARC::Field object with the heading flipped
to the preferred form. At present this routine is not used, and can be a stub.
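A minimal sketch of a module following that API; the package name and the lookup inside get_link are hypothetical, and only the two required functions and the final SUPER call come from the description above:

    package C4::Linker::ExampleLinker;    # hypothetical name
    use strict;
    use warnings;
    use parent 'C4::Linker';

    sub get_link {
        my ( $self, $field ) = @_;
        my ( $authid, $fuzzy ) = ( undef, 0 );

        # ... search the authority file for $field here, setting $authid
        # (and $fuzzy when the match is inexact) ...

        # Always finish this way so --auth-limit is honoured:
        return $self->SUPER::_handle_auth_limit($authid), $fuzzy;
    }

    sub flip_heading {
        my ( $self, $field ) = @_;
        return $field;    # not used yet; a stub is sufficient
    }

    1;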
Made the linking functionality use the SearchAuthorities in C4::AuthoritiesMarc
rather than SimpleSearch in C4::Search. Once C4::Search has been refactored,
SearchAuthorities should be rewritten to simply call into C4::Search. However,
at this time C4::Search cannot handle authority searching. Also fixed numerous
performance issues in SearchAuthorities and the Linker script:
* Correctly destroy ZOOM recordsets in SearchAuthorities when finished. If left
undestroyed, efficiency appears to approach O(log n^n)
* Add an optional $skipmetadata flag to SearchAuthorities that can be used to
avoid additional calls into Zebra when all that is wanted are authority
records and not statistics about their use
*** New sysprefs ***
* AutoCreateAuthorities - When this and BiblioAddsAuthorities are both turned
on, automatically create authority records for headings that don't have
any authority link when cataloging. When BiblioAddsAuthorities is on and
AutoCreateAuthorities is turned off, do not automatically generate authority
records, but allow the user to enter headings that don't match an existing
authority. When BiblioAddsAuthorities is off, this has no effect.
* CatalogModuleRelink - when turned on, the automatic linker will relink
headings when a record is saved in the cataloging module when LinkerRelink
is turned on, even if the headings were manually linked to a different
authority by the cataloger. When turned off (the default), the automatic
linker will not relink any headings that have already been linked when a
record is saved.
* LinkerModule - Chooses which linker module to use for matching headings
(current options are as described above in the section on linker options:
"Default," "FirstMatch," and "LastMatch")
* LinkerOptions - A pipe-separated list of options to set for the authority
linker (at the moment, the only option available is "broader_headings," which
is described below)
* LinkerRelink - When turned on, the linker will confirm the links for headings
that have previously been linked to an authority record when it runs. When
turned off, any heading with an existing link will be ignored.
* LinkerKeepStale - When turned on, the linker will never *delete* a link to an
authority record, though, depending on the value of LinkerRelink, it may
change the link.
*** Other changes ***
* Cleaned up the authorities code by removing unused functions, adding previously
unimplemented functions, and adding some unit tests.
* This patch also modifies the authority indexing to remove trailing punctuation
from Match indexes.
* Replace the old BiblioAddAuthorities subroutines with calls into the new
C4::Linker routines.
* Add a simple implementation for C4::Heading::UNIMARC. (With thanks to F.
Demians, 2011.01.09) Correct C4::Heading::UNIMARC class loading. Create
biblio tag to authority types data structure at initialization rather than
querying DB.
* Ran perltidy on all changed code.
*** Linker Options ***
Enter "broader_headings" in LinkerOptions. With this option, the linker will
try to match the following heading as follows:
=600 10$aCamins-Esakov, Jared$xCoin collections$vCatalogs$vEarly works to 1800.
First: Camins-Esakov, Jared--Coin collections--Catalogs--Early works to 1800
Next: Camins-Esakov, Jared--Coin collections--Catalogs
Next: Camins-Esakov, Jared--Coin collections
Next: Camins-Esakov, Jared (matches! if a previous attempt had matched, it
would not have tried this)
This is probably relevant only to MARC21 and LCSH, but could potentially be of
great use to libraries that make heavy use of floating subdivisions.
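An illustrative sketch of that behaviour (not the shipped implementation); $search_auth stands in for whatever authority lookup the linker uses and simply returns an authid or undef:

    use MARC::Field;

    sub link_with_broader_headings {
        my ( $field, $search_auth ) = @_;        # $search_auth: hypothetical lookup callback
        my @subfields = $field->subfields;       # e.g. $a/$x/$v/$v from the =600 above
        while (@subfields) {
            my $heading = join '--', map { $_->[1] } @subfields;
            my $authid  = $search_auth->($heading);
            return $authid if $authid;           # stop at the first heading that matches
            pop @subfields;                      # otherwise try the next broader heading
        }
        return;                                  # nothing matched at any level
    }
    # e.g. my $authid = link_with_broader_headings( $field, \&search_authority_file );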
=== TESTING PLAN ===
Note: all of these tests require that you have some authority records,
preferably for headings that actually appear in your bibliographic data. At
least one authority record must contain a "see from" reference (remember which
one contains this, as you'll need it for some of the tests). The number shown
in the "Used in" column in the authority module is populated using Zebra
searches of the bibliographic database, so you *must* have
rebuild_zebra.pl -b -z [-x] running in cron, or manually run it after running
the linker.
*** Testing the Heading match in the cataloging plugin ***
1. Create a new record, and open the cataloging plugin for an
authority-controlled field.
2. Search for an authority by entering the "see from" term in the Heading Match
box
3. Confirm that the appropriate heading shows up
4. Search for an authority by entering the preferred heading into the Main
entry or Main entry ($a only) box (i.e., repeat the procedure you usually
use for cataloging, whatever that may be)
5. Confirm that the appropriate heading shows up
*** Testing the cataloging interface ***
6. Turn off BiblioAddsAuthorities
7. Confirm that you cannot enter text directly in an authority-controlled field
8. Confirm that if you search for a heading using the authority control plugin
the heading is inserted (note, however, that this patch does not AND IS NOT
INTENDED TO fix the bugs in the authority plugin with duplicate subfields;
those are wholly out of scope; this check is for regressions)
9. Turn on BiblioAddsAuthorities and AutoCreateAuthorities
10. Confirm that you can enter text directly into an authority-controlled field,
and if you enter a heading that doesn't currently have an authority record,
an authority record stub is automatically created, and the heading you
entered linked
11. Confirm that if you enter a heading with only a subfield $a that fully
*matches* an existing heading (i.e. the existing heading has only
subfield $a populated), the authid for that heading is inserted into
subfield $9
12. Confirm that if you enter a heading with multiple subfields that *matches*
an existing heading, the authid for that heading is inserted into
subfield $9
13. Turn on BiblioAddsAuthorities and turn off AutoCreateAuthorities
14. Confirm that you can enter text directly into an authority-controlled field,
and if you enter a heading that doesn't currently have an authority record,
an authority record stub is *not* created
15. Confirm that if you enter a heading with only a subfield $a that *matches*
an existing heading, the authid for that heading is inserted into
subfield $9
16. Confirm that if you enter a heading with multiple subfields that *matches*
an existing heading, the authid for that heading is inserted into
subfield $9
17. Create a record and link an authority record to an authorized field using
the authority plugin.
18. Save the record. Ensure that the heading is linked to the appropriate
authority.
19. Open the record. Change the heading manually to something else, leaving
the link. Save the record.
20. Ensure that the heading remains linked to that same authority.
21. Change CatalogModuleRelink to "on."
22. Open the record. Use the authority plugin to link that heading to the
same authority record you did earlier.
23. Save the record. Ensure that the heading is linked to the appropriate
authority.
24. Open the record. Change the heading manually to something else, leaving
the link. Save the record.
25. Ensure that the heading is no longer linked to the old authority record.
*** Testing link_bibs_to_authorities.pl ***
26. Set LinkerModule to "Default," turn on LinkerRelink and
BiblioAddsAuthorities, and turn AutoCreateAuthorities and
LinkerKeepStale off
27. Edit one bib record so that an authority controlled field that has already
been linked (i.e. has data in $9) has a heading that does not match any
authority record in your database
28. Run misc/link_bibs_to_authorities.pl --link-report --verbose --test (you may
want to pipe the output into less or a file, as the result is quite a lot of
information)
29. Look over the report to see that the headings you have authority records
for are reported as matched, that the heading you modified in step 27 is
reported as "unlinked," and confirm that no changes were actually made to
the database (to check this, look at the bib record you edited earlier, and
check that the authid in the field you edited hasn't changed)
30. Run misc/link_bibs_to_authorities.pl --link-report --verbose (you may want
to pipe the output into less or a file, as the result is quite a lot of
information)
31. Check that the heading you modified has been unlinked
32. Change the modified heading back to whatever it was, but don't use the
authority control plugin to populate $9
33. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--bib-limit="biblionumber=${BIB}" (replacing ${BIB} with the biblionumber
of the record you've been editing)
34. Confirm that the heading has been linked to the correct authority record
35. Turn LinkerKeepStale on
36. Change that heading to something else
37. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--bib-limit="biblionumber=${BIB}" (replacing ${BIB} with the biblionumber
of the record you've been editing)
38. Confirm that the $9 has not changed
39. Turn LinkerKeepStale off
40. Create two authorities with the same heading
41. Run misc/migration_tools/rebuild_zebra.pl -a -z
42. Enter that heading into the bibliographic record you are working with
43. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--bib-limit="biblionumber=${BIB}" (replacing ${BIB} with the biblionumber
of the record you've been editing)
44. Confirm that the heading has not been linked
45. Change LinkerModule to "FirstMatch"
46. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--bib-limit="biblionumber=${BIB}" (replacing ${BIB} with the biblionumber
of the record you've been editing)
47. Confirm that the heading has been linked to the first authority record it
matches
48. Change LinkerModule to "LastMatch"
49. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--bib-limit="biblionumber=${BIB}" (replacing ${BIB} with the biblionumber
of the record you've been editing)
50. Confirm that the heading has been linked to the second authority record it
matches
51. Run misc/link_bibs_to_authorities.pl --link-report --verbose
--auth-limit="authid=${AUTH}" (replacing ${AUTH} with an authid)
52. Confirm that only that heading is displayed in the report, and only those
bibs with that heading have been changed
If all those things worked, good news! You're ready to sign off on the patch
for bug 7284.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on latest master and squashed follow-up, 16 February 2012
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on latest master, 21 February 2012
Signed-off-by: schuster <dschust1@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signing off on this one with the following note: You have moved the call to XSLTParse4Display from around line 1775 to around 1842, as compared to the situation before the three 6919 patches. I probably would have left it at its original location, but while examining the code between these two spots, I do not see any real problems with this move. Tested it, works okay. Further QA comments made on the report itself.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
To test:
* create a bib with no items
* update your index
serial records and your new bib with no items should be displayed.
* add something to the OPACHiddenItems syspref (I like itype: [BK] from the test data)
*** test both ways, with something in there and with the syspref empty.
* add an item to your new bib that would be suppressed
* update your index
* search for the bib
The item should not show
* change the item into a state where it would no longer be suppressed
* update your index
* search for the bib
The item should show in the opac
* just for fun, delete your item
* update your index
* Search for the bib - it should still display.
I tested on MARC21 - please test UNIMARC as well.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch adds a 'use C4::Search' to subscription-detail.pl to compensate for
one removed from auth.pm during the de-nesting effort.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Bug 7546 Do not call routine as bareword
Fixes compilation errors due to calling a routine without parens.
Also, nothing was gained (and obfuscation was added) by forcing
the return value into a hash ref, so the variable has been changed to a hash.
Tidied up the if/else chain.
These routines should be refactored out in future.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
bug 7546 follow-up, enabled_staff_search_views problem
* enabled_staff_search_views was not exported by C4::Search, but should have been
* serials/serials-edit.pl was also missing it
Comments:
* checked with 'for file in */*.pl; do perl -wc $file; done' that no script still had this problem
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Final sign off for all 3 patches
Note: I had some problems with tests, but it is probably related to my data and not this patch.
If I search for a valid ISBN number and hit the Z39.50 search, the title field
is populated with the ISBN number I searched for. This number should populate
the ISBN field and not the title field.
http://bugs.koha-community.org/show_bug.cgi?id=6539
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
To test:
- Search a new ISBN using the ISBN search from the advanced search page
- 0 results - click on Z39.50 search
- The Z39.50 form will have your ISBN in the ISBN search option
I am signing off on this, because it's an improvement over the current
behaviour.
I see some problems though that should perhaps be addressed in a separate
bug or as a follow-up:
If you use the catalog search field and search for an ISBN or
a keyword, the right fields of the Z39.50 search form will be populated, but
the search index will be put in front:
ISBN: kw,wrdl: 9783492251495
or
Title: kw,wrdl: koha testing
If you search for ISBN as keyword on the advanced search page, it will
still populate the Title search.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Changed searchResults() interface
Added trailing \n when parsing OpacHiddenItems to make YAML happy
XSLTParse4Display() and buildKohaItemsNamespace() take hidden
items as input param
Removed numbering from the search results, looks wrong with
hidden items
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
To test:
Create 4 holds on a bib, for patrons A, B, C, and D,
Check in the item to mark hold as waiting for patron A
Check out the item to patron B -> reserve for patron B should be removed
Check in the item to mark hold as waiting for patron A
Check out the item to Patron A, hold should complete normally
Check in the item to mark hold as waiting for patron C
Check out the item to patron D -> reserve for patron D should be removed.
Check in the item to mark hold as waiting for patron C
Check out the item to patron C, hold should complete normally
Check in the item -> there should be no more reserves.
We also tested:
Created 4 holds on a bib with two items, for patrons A, B, C, and D
All worked as expected.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Adds hyphen to regex looking for index names in buildQuery.
Test by searching on Control-number=...
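Illustrative only; the real pattern in buildQuery is larger, but the point is that '-' now belongs to the character class that recognises index names:

    my $query = 'Control-number=123456789';
    if ( $query =~ /^([\w-]+)=(.+)$/ ) {    # hyphen allowed in the index name
        print "index: $1, term: $2\n";      # index: Control-number, term: 123456789
    }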
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Display links to parent biblios, show linked items in holdings, allow holds on
linked items. This uses MARC to maintain relationships.
Sponsored by the Mississippi Department of Archives and History and RapidRadio
Solution. Originally developed by Savitra Sirohi and Amit Gupta at OSSLabs, with
UNIMARC support added by Zeno Tajoli. Commits squashed and merge conflicts
resolved by Chris Cormack from Catalyst. Respect for NORMARC and some small
framework portability fixes made by Jared Camins-Esakov of C & P Bibliography
Services.
IMPORTANT NOTE: A bug in the 773 coding for MARC21 was corrected from the
original OSS Labs code. The 773s generated by the pre-release code did not have
the first indicator set to '0', which means that they were not supposed to
display. Going forward, the first indicator will be set correctly, but existing
records created with this code will no longer appear (they appeared before only
due to another bug). To correct this, you could globally (or, to make sure you
only modify records created with the Analytics tool, for records with 773$0)
change the first indicator of the 773 from blank to '0'.
== Background ==
An analytic record for an item is a more detailed, monographic biblio for an
item attached to a serial record. This is often used for special issues of a
journal that are released as books on their own (assigned an ISBN, as well as an
ISSN/volume/issue). It is important for researchers to be able to search for
these items both as issues of the serial, and as monographs. It is equally
important for the library to not have duplicate item records for the item in
question to have to keep synchronized.
== Establishing relationships ==
Analytical records are connected to items belonging to parent or host
bibliographic records. This can be accomplished by:
* From an analytical bibliographic record, by linking to a host item by providing
the item barcode as input
* From a host item by using option "analyze", this creates a new empty
bibliographic record with field 773 (MARC21) populated
* Running a new CLI script that establishes a relationship between the
analytical record and the host item identified by the barcode in the
analytical record's 773$o (MARC21)
== Connecting Records ==
The relationships are maintained in the MARC records; we have not used database
tables at all.
== MARC Representation ==
In MARC21/NORMARC we have used:
* 773$9 to store the Koha item number of the host item
* 773$0 to store the Koha biblio number of the host bibliographic record
The above fields are used to display the relationships in various screens in the
OPAC and the staff interface. Additionally, when populating field 773 with the host
item's details, we have used the following MARC21 mapping (sketched in code after the mappings below):
* 'a' <= 100/110/111 $a (author main)
* 'b' <= 250$a (edition)
* 'd' <= 260$a, 260$b, 260$c (place, publisher, year)
* 'o' <= barcode
* 't' <= 245$a (title)
* 'w' <= (003)001 --> if no 001 is available, we can populate biblionumber
* 'x' <= 022$a (issn)
* 'z' <= 020$a (isbn)
In UNIMARC, this code uses:
* 461$9 to store the Koha item number of the host item
* 461$0 to store the Koha biblio number of the host bibliographic record
When populating field 461 in UNIMARC, the following mapping is used:
* 't' <= 200$a (title)
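A hedged sketch of building such a MARC21 host entry; the helper and its inputs ($host_record as a MARC::Record, plus the host item's barcode and numbers) are assumptions rather than the shipped C4 routine, and only a few of the mapped subfields above are shown:

    use MARC::Field;

    sub build_host_field {
        my ( $host_record, $barcode, $host_biblionumber, $host_itemnumber ) = @_;
        my $f001 = $host_record->field('001');
        return MARC::Field->new(
            '773', '0', ' ',    # first indicator 0 so the field displays
            t => $host_record->subfield( '245', 'a' ) // '',
            o => $barcode // '',
            w => $f001 ? $f001->data : $host_biblionumber,
            0 => $host_biblionumber,    # Koha biblionumber of the host
            9 => $host_itemnumber,      # Koha itemnumber of the host item
        );
    }
    # $analytic_record->append_fields( build_host_field(...) );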
== Treatment of Holds ==
A key requirement was to allow holds to be placed on host items from the
analytical record. We have accomplished this by allowing holds on specific
copies only. Biblio level holds are not allowed. This ensures that holds are
placed on specific items that are relevant to the analytical record.
== Deleting host items with linked analytical records ==
As we have not used database tables to maintain relationships, we had to use
search to find out if any linked analytical records are present. If one or more
analytical records are present, we do not allow deletion of the item. This is similar to
what we see when we try to delete authority records.
== Importing analytical records ==
Analytical records can be imported using bulkmarcimport or the GUI tools. The
new CLI script can be executed after the import to establish relationships with
host items. The script will establish relationships using the host item's
barcode; the barcode must be present in 773$o of the analytical record.
== What if there are two or more copies of the host item? ==
The current design will require that there be two host (773) fields, one for
each copy.
== What if there is no barcode available for the host item? ==
It is still possible to establish a relationship by populating 773$9 with the
host's item number. However, the CLI script uses the barcode in 773$o to establish
relationships, so it won't work where barcodes are unavailable. Also, the option of
establishing a relationship from an analytical record to a host item by
providing the barcode as input will likewise not be available.
Commits that added the following features were squashed by Chris Cormack (this
is not a list of every commit):
* Display links to host records from biblio detail screens
* Support for UNIMARC, respecting the system preference 'marcflavor'
* Support holds from the OPAC
* Ability to link to items belonging to host records from an analytical record
* Display items belonging to host records in the moredetail page
* Ability to edit items belonging to host records, also ability to delink from
them
* Move get host items code into a C4 routine, also calling the new routine in
related perl scripts
* Move host field population to a C4 routine, all changes in pl files to call
new routine
* Allow only specific copy holds for analytical records plus changes to use new
C4 routines
* Support for holds on items linked via host records
* Storing bibnumber and itemnumber in subfields 0 and 9, plus other mapping
changes
* New command line script that establishes relationships between analytical
records and host items and bibs. The script looks for host field (MARC21 773)
in records, and based on barcode in subfield 'o' populates host bibnumber in
subfield '0' and host itemnumber in subfield '9'. The script can be run after
an import of analytical records, it can also be run in the crontab to maintain
the relationships
* Ability to create analytical records from items, to view linked analytics, and
prevent deletion of items that have linked analytics
* New template for catalogue/detail.pl (NOTE: not a new template file, just a
new way of displaying analytics), template displays linked analytics and
allows creation of analytical records
* New zebra index for item number in host fields. This index will be used to
display links to analytical records from host records
* Display title of host record instead of the phrase host record
* Using detail.tmpl for analytics tab instead of a new template file
* Improved qualification info preparation in Prephostmarcfield
* Check for linked analytics before deleting item
* Display link to host record and more meaningful anchor text for edit item link
* Analytical record: Unimarc index in record.abs and help in
create_analytical_rel.pl
* Adding a sys pref that controls display of options to create analytical
relationships
* Add host entry in XSLT stylesheet in staff item detail
* Added host record support to OPAC detail XSLT
* Adding 773$0 and 773$9 to all frameworks
* Adding 773 subfields 0 and 9 to default marc framework via updatedatabase.pl
* Display 'create analytics' and 'used in' links in catalog detail
* Fixed problem where analytical records not showing in OPAC search results
because GetMarcBiblio now needs a flag to add item records
* Fixed problem where analytics count was set to 1 for all records, not just
those with analytics
* Fixed catalogue detail page not to show analytics counts if count is 0
Conflicts:
installer/data/mysql/updatedatabase.pl
koha-tmpl/intranet-tmpl/prog/en/modules/cataloguing/addbiblio.tt
kohaversion.pl
Co-author: Savitra Sirohi <savitra.sirohi@osslabs.biz>
Co-author: Zeno Tajoli <tajoli@cilea.it>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Ian Walls <ian.walls@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Fixing the regex to detect index names in ccl queries. Changing loop
structure: looping through the index candidates in the query is faster than
testing every index name with a regex. Making the index comparison case
insensitive will benefit users misspelling the case of an index; Zebra does not
care about it. Test the change by searching on a word followed by a : or =
character. Previously, when that word contained an index name like an or nb,
the search would crash.
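A sketch of that loop structure, assuming getIndexes() returns the known index names; the splitting regex below is for illustration, not the exact one used in C4::Search:

    use C4::Search;

    my %is_index = map { lc($_) => 1 } C4::Search::getIndexes();

    my $query = 'Ti=dune and An:12345';
    my @candidates = $query =~ /(\w[\w-]*)\s*[:=]/g;    # words followed by : or =
    for my $candidate (@candidates) {
        print "'$candidate' is a real index\n" if $is_index{ lc $candidate };
    }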
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Perltidied the new block to fix indentation
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Bib-level is already indexed in MARC21 record.abs.
But you cannot search this index, because it is commented out in ccl.properties and
not listed in getIndexes in Search.pm.
This very simple patch does only those two things.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
If you have the default sorting set to title ascending or title descending,
your search results will not automatically be sorted because the syspref uses
title_asc and title_dsc, whereas Search.pm wants title_az and title_za. The same
issue is present when the default sort is on author.
Signed-off-by: Ian Walls <ian.walls@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This fix adds different "music" identifiers to the Zebra indexes; for example, it permits searching for CDs via EAN.
Signed-off-by: fdurand <frederic.durand@univ-lyon2.fr>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
in Search.pm, in the list of available indexes, Title-host is missing
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
SimpleSearch returns a 3-element array, the first of which is an error
indication. If an error is returned, the other elements are
undefined. If the error is undef, the other elements are defined.
This restores the consistency of the interface as it was before
the addition of Zebra.
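A minimal usage sketch of the restored convention, assuming the usual (error, results, hit count) ordering:

    use C4::Search;

    my $query = 'kw=dune';    # example CCL query
    my ( $error, $results, $total_hits ) = C4::Search::SimpleSearch($query);
    if ( defined $error ) {
        warn "search failed: $error";    # the other elements are undefined here
    }
    else {
        # $results and $total_hits can be trusted only when $error is undef
        print "found $total_hits hit(s)\n";
    }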
Signed-off-by: Christophe Croullebois <christophe.croullebois@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
When a user is doing a simple keyword search, they should not be expected to
deal with the magical behavior of question marks in Zebra. This fix escapes
question marks, and reduces the number of false positives for identifying a
"simple keyword search."
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch adds a new index for stocknumber on field 952$i.
Note: For testing you have to copy over the changed files
from kohaclone/etc/zebradb/ to your koha-dev/etc/zebradb folders.
Reindex.
To test:
1) Add 952$i to your frameworks
2) Add an item with 952$i
3) Search for your 952$i value in keyword search
4) Search for stocknumber, using stocknumber:<your 952$i value> or inv:<...
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch adds the ability to specify a field with alternate holdings
information for display when a biblio has no items associated with it.
Two sysprefs are added:
* AlternateHoldingsField specifies what field/subfields contain the alternate
holdings information. When blank, the alternate holdings information is not
displayed. The default is blank, as this is a new feature.
* AlternateHoldingsSeparator specifies the string to be used to separate
multiple subfields in the alternate holdings display. The default is ' '.
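A hedged sketch of the display logic under these prefs, assuming $record is the biblio's MARC::Record; the real template handling may differ:

    my $holdings_spec = '852abcdhi';    # AlternateHoldingsField
    my $separator     = ' -- ';         # AlternateHoldingsSeparator
    my ( $tag, $codes ) = $holdings_spec =~ /^(\d{3})(.*)$/;

    my @holdings;
    for my $field ( $record->field($tag) ) {
        my @parts = grep { defined $_ && length $_ }
            map { scalar $field->subfield($_) } split //, $codes;
        push @holdings, join $separator, @parts;
    }
    # @holdings is what gets shown when the biblio has no items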
Example use case:
A library which does not have a 1-1 relationship between uncontrolled 852 fields
from a legacy system and actual physical items on the shelf wishes to display
holdings information from the 852, but does not want to create item records
which are almost certain to be inaccurate. By enabling the alternate holdings
feature (AlternateHoldingsField = '852abcdhi' and AlternateHoldingsSeparator =
' -- '), the library is able to gradually add item records as they locate the
physical items, without losing the holdings information presently stored in the
uncontrolled 852 fields.
To test:
1) Set AlternateHoldingsField to '852abcdhi'
2) Set AlternateHoldingsSeparator to ' -- '
3) Change the hidden value of subfields 'a', 'b', 'c', 'd', 'h', and/or 'i' of
field 852 to 0 so that they display
4) Create a record which has data in the 852, but no item record
5) Look at holdings tab, where the data you entered should be displayed
Proof-of-concept initially developed for the American Numismatic Society.
Signed-off-by: Jared Camins-Esakov <jcamins@bywatersolutions.com>
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Because of the way Scan Indexes works, the results cannot be sorted. Previously
when any sort other than relevance (or in some cases popularity) was used, the
search failed. This patch disables sorting on Scan results. This patch also
fixes the index selection dropdown on the results page, which was not being
populated correctly from the Advanced Search screen.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch contains the functionality, not the install stuff.
Revised with input from Ian Walls: populate authorised_value_images only if needed; no changes anymore to the template and search.pl.
Signed-off-by: Ian Walls <ian.walls@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
1. The current C4::ClassSortRoutine::Dewey turns "306 Les" into "306_Les" for items.cn_sort and MARC-field 952$6, which results in "306.46 Les" being sorted before "306 Les" in the OPAC. With this patch, "306 Les" is turned into "306_000000000000000_Les".
2. Currently, call_number_asc and call_number_desc are set up to sort by 1=20, but this is mapped to Local-classification in ccl.properties, which is mapped to 952$o in record.abs.
This patch changes these sorts to use 1=8007, which is mapped to cn-sort and 952$6.
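An illustrative normalisation only (the real routine lives in C4::ClassSortRoutine::Dewey); it pads the decimal part to a fixed width so the plain class sorts ahead of its decimal extensions:

    sub dewey_sort_key {
        my ($call_number) = @_;
        my ( $class, $rest ) = split /\s+/, $call_number, 2;
        my ( $int, $frac ) = split /\./, $class, 2;
        $frac //= '';
        $frac .= '0' x ( 15 - length $frac );    # right-pad to 15 digits
        my $key = $int . '_' . $frac;
        $key .= '_' . $rest if defined $rest;
        return $key;
    }
    # dewey_sort_key('306 Les')    => '306_000000000000000_Les'
    # dewey_sort_key('306.46 Les') => '306_460000000000000_Les'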
Signed-off-by: Jared Camins-Esakov <jcamins@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
On the OPAC/staff results page, facets are truncated to 20 characters. On some OPAC
layouts, that's not enough. A new syspref, FacetLabelTruncationLength, defines the
length at which to cut facets if necessary.
This patch adds the syspref to searching.pref and adds its default value to the various
language files loaded into the DB during the installation process. It's not
necessary to update the DB, since the length stays fixed at 20 (as before) when this
syspref isn't defined in the systempreferences table.
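A sketch of the truncation with the documented fallback; the trailing ellipsis is an assumption for readability, not part of the patch:

    use C4::Context;

    my $facet_label = 'A rather long facet label pulled from the results';
    my $max = C4::Context->preference('FacetLabelTruncationLength') || 20;
    my $display = length($facet_label) <= $max
        ? $facet_label
        : substr( $facet_label, 0, $max ) . '...';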
Rebased to last HEAD: 2011.03.18
[Documentation] FacetLabelTruncationLength syspref in Searching tab
[3.2] It doesn't apply.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Based on a patch from Fridolyn Somers with input from Frederic Demians.
Added a new Searching preference, maxRecordsForFacets.
This pref contains the number of result records used in facet building.
Also added the pref displayFacetCount (with thanks to Frederic).
A follow-up patch takes care of install issues; the functionality can already be tested with this patch alone.
Updated on March 17 for changes in include files.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Adds a check for item-level_itypes system preference. Note that this only
applies to the search results screens.
To test:
1) Set item-level_itypes to 'specific item'
2) Create record and set 942$c to an itype that is marked not for loan
3) Create item with itype not marked 'not for loan'
Current behaviour: Holds link is not shown, sys pref setting doesn't matter
After patch: Holds link is shown
- when item-level_itype is 'specific item'
- when item-level_itype is 'biblio record' and 942$c itype is for loan
Holds link is not shown
- when item-level_itype is 'biblio record' and 942$c is not for loan
Signed-off-by: Jared Camins-Esakov <jcamins@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch corrects a syntax error in the definition of bath attributes in
ccl.properties. In particular, it adds the search prefixes 'isbn', 'issn',
'name', and 'notes'. In order to make use of this patch, ccl.properties must be
updated.
Signed-off-by: Jared Camins-Esakov <jcamins@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
When using XSLT mode, the OPAC results display will show "&amp;" instead of "&"
when Zebra is indexing in XML mode. This patch works around this by replacing
"&amp;" with "&" and then extends the previous fix to apply to all occurrences
of "&amp;" instead of just the first.
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Items created as part of the acquisitions process, and assigned the temporary notforloan value of -1,
cannot be placed on hold from the search results in either the OPAC or staff client (the link is missing).
This patch changes the evaluation of items->notforloan from a Boolean (if $items->{notforloan}) to a comparison
(if $items->{notforloan} > 0). Any notforloan status with a negative value can therefore be reserved.
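The change in miniature, with an example item hashref:

    my $items = { notforloan => -1 };    # e.g. the acquisitions placeholder status
    my $hide_hold_link_before = $items->{notforloan}     ? 1 : 0;   # 1: link wrongly hidden
    my $hide_hold_link_after  = $items->{notforloan} > 0 ? 1 : 0;   # 0: link shown again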
Signed-off-by: Nicole Engard <nengard@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Searches such as au,phr and kw,wrdl were passing through the regexes
that should replace the colon with an equals sign.
Add wrdl and phr to the pattern,
use trn so that not just rtrn is spotted, and
where we were checking for multiple spaces, specify that in the
regex rather than just two spaces.
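An illustrative pattern only, not the exact regex in buildQuery: recognise the extra relation suffixes (phr, wrdl, anything ending in trn) and any run of spaces, then rewrite "index,relation:" as "index,relation=":

    for my $operand ( 'au,phr: tolkien', 'kw,wrdl:  koha' ) {
        ( my $fixed = $operand ) =~ s/^([\w-]+,(?:phr|wrdl|\w*trn))\s*:\s+/$1= /;
        print "$fixed\n";    # au,phr= tolkien / kw,wrdl= koha
    }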
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This is a quick fix. The way scan indexes perform should be improved in
3.4. There are several issues:
- No paging
- The interface is the same as for the biblio record search results and so
is unusable: for example, you have a button to place a hold, and you can
sort by popularity, which is irrelevant for index terms.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
If the system preference BiblioAddsAuthorities is on, this could lead to error messages when trying to add records
to a basket from an external source.
Signed-off-by: Nicole Engard <nengard@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Removed instances of 'use YAML' that were either completely
unnecessary or which were used only in debug code. Also
removed a needless import of Data::Dumper.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Check OPACXSLTResultsDisplay instead of XSLTResultsDisplay when
determining whether to use the XSLT bib results stylesheet for
OPAC search results.
In the process, added a new $search_context parameter to
C4::Search::searchResults() to specify whether results
are to be served up for the staff interface or for the
OPAC.
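A hedged sketch of the call site; apart from the new leading $search_context value, the remaining parameters are placeholders rather than the exact signature:

    # all variables below are placeholders for whatever the caller already has
    my $search_context = $is_opac ? 'opac' : 'intranet';    # assumed values
    my @newresults = C4::Search::searchResults(
        $search_context, $query_desc, $hits,
        $results_per_page, $offset, $scan, @marc_records
    );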
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
The 'if $index' check is unnecessary, as we have made this true 5 lines above.
Variables should not be declared in conditionals if used outside of them.
Set $struct_attr to a sensible default to avoid generating warnings
in this assignment and elsewhere.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
* 'bug2505_patches' of git://git.catalyst.net.nz/koha: (24 commits)
Bug 2505 - use strict and warnings in sax_parser_test
Bug 2505 - enable warnings for link_bibs_to_authorities
Bug 2505 - add strict and warnings to perlmodule_ls
Bug 2505 - add strict and warnings to check_sysprefs
Bug 2505 - Add commented use warnings where missing in *.t
Bug 2505 - Add commented use warnings where missing in *.pm
Bug 2505 - Add commented use warnings where missing in the cataloguing/ directory
Bug 2505 - Add commented use warnings where missing in the misc/ directory
Bug 2505 - Add commented use warnings where missing in the tools/ directory
Bug 2505 - Add commented use warnings where missing in the installer/ directory
Bug 2505 - Add commented use warnings where missing in the rotating_collections/ directory
Bug 2505 - Add commented use warnings where missing in the C4/ directory
Bug 2505 - Add commented use warnings where missing in the serials/ directory
Bug 2505 - Add commented use warnings where missing in the catalogue/ directory
Bug 2505 - Add commented use warnings where missing in the sms/ directory
Bug 2505 - Add commented use warnings where missing in the opac/ directory
Bug 2505 - Add commented use warnings where missing in the virtualshelves/ directory
Bug 2505 - Add commented use warnings where missing in the suggestion/ directory
Bug 2505 - Add commented use warnings where missing in the admin/ directory
Bug 2505 - Add commented use warnings where missing in the circ/ directory
...
Conflicts:
C4/Auth_with_cas.pm
acqui/supplier.pl
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
This fixes the OPAC results (non-XSLT-transformed) and the intranet result lists.
Also fixes opac-detail, which was showing all items 'on hold' if there was a bib-level request, whether items were on the hold shelf or not.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
This bug was introduced by commit d51332698b,
which removed the ability to do a search on anything containing a colon (:).
This patch expands the regexp to only normalize true limit operators and leave
all other colons (:) intact.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
When using XSLT display and UNIMARC,
since marcFlavour is not used in encoding data, when data is true UTF-8, as_xml
fails on some subfields.
Moreover, because transformMARCXMLForXSLT edits some values in the MARC record
and the Perl UTF8 flag is not handled by MARC::File::USMARC, it ends up double
encoding the data.
Sending a patch to fix both issues.
This patch adds
- two functions in C4/Charset.pm
NormalizeString (uses Unicode::Normalize)
SetUTF8Flag (This function in my opinion belongs to MARC::Record, or at least MARC::File::USMARC)
- edits C4::XSLT in order to cope with the correct marcflavour
- edits C4::Search's searchResults to use SetUTF8Flag
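A hedged usage sketch of the two helpers; argument shapes are as described above (a string for NormalizeString, a MARC::Record for SetUTF8Flag):

    use C4::Charset;
    use MARC::Record;

    my $record = MARC::Record->new();                     # stands in for a freshly parsed record
    my $clean  = C4::Charset::NormalizeString("Café");    # Unicode::Normalize under the hood
    C4::Charset::SetUTF8Flag($record);                    # turns on Perl's UTF8 flag on the record's data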
This patch changes C4::Search::buildQuery to detect CCL queries and lets Zebra parse them,
and sets a default index of "kw" if none is specified.
This improves the detection of CCL queries and does not duplicate the "ccl=" value.
Adds = as an index sign.
Follow-up to 3037ff9e81
Signed-off-by: Henri-Damien LAURENT <henridamien.laurent@biblibre.com>
Auto-truncation was used even though an exact search was selected.
This patch removes this side effect.
Conflicts solved:
C4/Search.pm
Cherry-picked from 3.0.x :
3287252c0
This fixes:
* A bug which caused the label template editor to throw
an error when saving when no previous profile was applied.
* A typo which caused a 'fetch without execute' error in Labels.pm
It also comments out several useless warns
Leading spaces in a search term were causing an error to be thrown in a join operator when auto-truncation is turned on. This patch removes the leading spaces.
This patch is a "rebased" one for 3.0.x.
This changes how the item's summary is calculated, and fixes the issue when you have repeated fields.
Now every line is repeated as long as it has values in repeated fields (see bug report).