perl -p -i -e 's/use vars qw\(\s*\);\n//' **/*.pm
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^.*set the version for version checking.*\n//' **/*.pm
+ manual adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Mainly a
perl -p -i -e 's/^.*3.07.00.049.*\n//' **/*.pm
Then some adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^(use vars .*)\$VERSION\s?(.*)/$1$2/' **/*.pm
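Taken together, the one-liners above turn a typical (hypothetical) module header like

    use vars qw($VERSION @ISA @EXPORT);
    # set the version for version checking
    $VERSION = 3.07.00.049;

into

    use vars qw(@ISA @EXPORT);

with the 'use vars' line dropped entirely when $VERSION was the only variable declared.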
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Chris Nighswonger <cnighswonger@foundations.edu>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
http://bugs.koha-community.org/show_bug.cgi?id=9987
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
The original patch did not correctly construct ISBN phrase
searches when QP is on. Unfortunately, when attempting to fix that,
I discovered that there's a deep bug in QP that makes it generate
incorrect search queries when combining more than two atoms
with the || operator. Consequently, until that can be fixed,
this patch ensures that if UseQueryParser is on, AggressiveMatchOnISBN
has no effect.
To state it another way, AggressiveMatchOnISBN works only when QP
is not in use.
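A minimal sketch of the resulting guard (illustrative only; the actual patch touches the matcher code, and the structure shown here is an assumption):

    if ( C4::Context->preference('AggressiveMatchOnISBN')
        && !C4::Context->preference('UseQueryParser') )
    {
        # expand the ISBN into its variations and add them as phrase atoms
    }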
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In UNIMARC, the isbn index is ISBN.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Test Plan:
1) Catalog a record with the ISBN "0394502884 (Random House)"
2) Export the record, edit it so the ISBN is now
"0394502884 (UnRandomHouse)"
3) Using the record import tool, import this record with matching
on ISBN.
4) You should not find a match
5) Apply this patch
6) Run updatedatabase.pl
7) Enable the new system preference AggressiveMatchOnISBN
8) Repeat step 3
9) The tool should now find a match
Signed-off-by: Tom McMurdo <thomas.mcmurdo@state.vp.us>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes Koha <-> Zebra use MARCXML for the serialization when
using DOM, and USMARC for GRS-1.
* The following functions are modified to set the Zebra record syntax
according to the current sysprefs and configuration:
- C4::Context->Zconn
- C4::Context->_new_Zconn
* A new function 'new_record_from_zebra' is introduced, which checks the
context we are in and creates the MARC::Record object using the right
constructor (see the sketch after this list).
The following packages get touched to make use of the new function:
- C4::Search
- C4::AuthoritiesMarc
and the same happens to the UI scripts that make use of them (both in
the OPAC and STAFF interfaces).
* Calls to the unsafe ZOOM::Record->render()[1] method are removed.
Due to this last change, the code for building facets was rewritten, and
for performance of facet creation I pushed higher version
dependencies for MARC::File::XML and MARC::Record (we rely on
MARC::Field->as_string).
* Calls to MARC::Record->new_from_xml and MARC::Record->new_from_usmarc
are wrapped with eval for catching problems [2].
* As of bug 3087, UNIMARC uses the 'unimarc' record syntax. This case is
correctly handled.
* As of bug 7818, misc/migration_tools/rebuild_zebra.pl respects:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
Here we do exactly the same.
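A rough sketch of what new_record_from_zebra does, per the description above (simplified; the exact configuration checks and UNIMARC handling are omitted, and names other than the function itself are assumptions):

    sub new_record_from_zebra {
        my ( $server, $raw_data ) = @_;

        # DOM indexing stores MARCXML in Zebra, GRS-1 stores ISO2709 (USMARC)
        my $index_mode = $server eq 'authorityserver'
            ? C4::Context->config('zebra_auth_index_mode')
            : C4::Context->config('zebra_bib_index_mode');

        my $marc_record = eval {
            $index_mode eq 'dom'
                ? MARC::Record->new_from_xml( $raw_data, 'UTF-8' )
                : MARC::Record->new_from_usmarc( $raw_data );
        };
        # a record that cannot be parsed is simply skipped, see [2]
        return if $@;
        return $marc_record;
    }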
To test:
- prove t/db_dependent/Search.t should pass.
- Searching should remain functional.
- Indexing and searching for a big record should work (that's what the
unit tests do).
- Test an index scan search (on the staff interface):
Search > More options > Check "Scan indexes".
- Enable 'itemBarcodeFallbackSearch' and try to circulate any word; it
shouldn't break.
- Searching for a biblio in a new subscription shouldn't break.
- Running bulkmarcimport.pl shouldn't break.
- And so on... for the rest of the .pl files.
[1] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#render()
[2] a record that cannot be parsed by MARC::Record is simply skipped (bug 10684)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When introducing QueryParser, I introduced a check for QueryParser at
too high a level, causing authority matching to try and use SimpleSearch
for authorities prematurely, when SearchAuthorities should be handling
it. This patch corrects the level of the check. This patch only moves
three lines, but thanks to the change in if level, it adjusts the
indentation quite a bit.
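An illustrative before/after of the restructuring (shapes only, not the literal diff; $QParser and $record_type are stand-in names):

    # before: the QueryParser check wrapped both record types
    if ($QParser) {
        # ... SimpleSearch, even for authority records
    }

    # after: authorities always go through SearchAuthorities
    if ( $record_type eq 'biblio' ) {
        if ($QParser) {
            # build the query with QueryParser
        }
        # ... SimpleSearch
    }
    else {
        # ... SearchAuthorities
    }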
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Comments on third patch of this series.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
With the inclusion of this patch, all searches will try to use
QueryParser for handling queries for both the bibliographic and authority
databases if UseQueryParser is enabled. If QueryParser is unavailable,
UseQueryParser is disabled, or the search uses CCL indexes, the old
search code will be used.
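A condensed sketch of that decision (assuming a C4::Context->queryparser accessor; variable names are illustrative, and $ccl_query_re stands in for whatever detects explicit CCL indexes):

    my $QParser;
    $QParser = C4::Context->queryparser
        if ( C4::Context->preference('UseQueryParser')
            && $query !~ $ccl_query_re );

    if ($QParser) {
        # parse the query with QueryParser and hand the resulting PQF to Zebra
    }
    else {
        # QP unavailable, disabled, or CCL indexes used: legacy query building
    }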
To test:
1) Apply patch.
2) Run the unit test with `prove t/QueryParser.t`
3) Enable the UseQueryParser syspref.
4) Try searches that should return results in the following places:
* OPAC (simple search)
* OPAC (advanced search)
* OPAC (authorities)
* Staff client (header search)
* Staff client (advanced search)
* Staff client (cataloging search)
* Staff client (authorities)
* Staff client (importing a batch using a match point)
* Staff client (searching for an item for adding to a label)
* Staff client (acquisitions)
* Staff client (searching for a record to create a serial)
* ANYWHERE ELSE I HAVE FORGOTTEN
5) Disable the UseQueryParser syspref. Repeat at least some of the
searches you did above.
6) If all searches worked, sign off.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Elliott Davis <elliott@bywatersolions.com>
Searching still works as expected in various places.
The QueryParser syspref seemed to be enabled by default.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
* Add the code necessary to handle authorities with matching rules and
import batches (see the sketch after this list).
* Update all the scripts that use the matcher and import batch code
to use the new API.
* Add authority records to the matching rules interface in the staff
client.
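A hedged illustration of the extended matcher API (the record type argument is the new part; exact signatures may differ):

    # bibliographic matching, as before
    my $bib_matcher  = C4::Matcher->new('biblio', 1000);
    # authority matching, now possible through the same interface
    my $auth_matcher = C4::Matcher->new('authority', 1000);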
http://bugs.koha-community.org/show_bug.cgi?id=2060
Signed-off-by: Elliott Davis <elliott@bywatersolutions.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on latest master 11 September 2012
svc/import_bib:
* takes a POST request with parameters in the URL and MARC XML as DATA
* pushes MARC XML to an import batch queue of type 'webservice'
* returns status and imported record XML
* is a drop-in replacement for svc/new_bib
misc/cronjobs/import_webservice_batch.pl:
* a cron job for processing import batch queues of type 'webservice'
* batches can also be processed through the UI
misc/bin/connexion_import_daemon.pl:
* a daemon that listens for OCLC Connexion requests and is compliant
with the OCLC Gateway spec
* takes request with MARC XML
* takes import batch params from a config file and forwards the lot to
svc/import_bib
* returns status
ImportBatches:
* Added new import batch type of 'webservice'
* Changed interface to AddImportBatch() - now it takes a hashref
* Replaced batch_type = 'batch' with
batch_type IN ( 'batch', 'webservice' ) in some SELECTs
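A hedged example of the reworked AddImportBatch() call (key names are assumptions modelled on the import_batches columns):

    my $batch_id = AddImportBatch( {
        overlay_action => 'create_new',
        import_status  => 'staging',
        batch_type     => 'webservice',
        file_name      => $filename,
        comments       => $comments,
    } );

    # and wherever batches are listed or processed:
    #   ... WHERE batch_type IN ( 'batch', 'webservice' )   -- was: batch_type = 'batch'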
Signed-off-by: MJ Ray <mjr@phonecoop.coop>
Remove some unnecessary checks where checking the error is
sufficient, and make the order of operations more logical in some cases.
This should remove some possibilities of runtime warning noise.
Although some calls belong to the 'nothing could
ever go wrong' school, warnings have been added for them.
Signed-off-by: Christophe Croullebois <christophe.croullebois@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Should fix any remaining warnings with 'podchecker'
Signed-off-by: Andrew Elwell <Andrew.Elwell@gmail.com>
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Also fix issues with normalizing ISBNs and the default
normalizer in C4::Matcher.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <galen.charlton@liblime.com>
Recommended by Michele Maenpaa, this adds colon and end-punctuation stripping
in subroutine _normalize, and fixes the length test in subroutine _get_match_keys.
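Roughly the kind of normalization this describes (an illustrative sketch, not the literal subroutine):

    sub _normalize {
        my ($value) = @_;
        $value = uc $value;
        $value =~ s/[.;:,"']//g;           # strip colons and similar punctuation
        $value =~ s/[\s[:punct:]]+$//;     # strip end punctuation and trailing space
        $value =~ s/\s+/ /g;
        return $value;
    }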
Signed-off-by: Galen Charlton <galen.charlton@liblime.com>
C4::Search::SimpleSearch was already patched to let you pass in the number of results you want back.
These instances were not using the new API. This patch makes all calls to SimpleSearch specify a limit.
I improved the documentation of SimpleSearch a bit to include the third returned value.
I believe there's a bug in C4::Output::pagination_bar, in that it doesn't deal well with URLs
with only one pair of parameter=value passed to it. I'm getting around this by passing in a second
pair that does nothing.
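For reference, the call pattern everything is moved to (offset and limit values are illustrative):

    # the third returned value is the total hit count now mentioned in the docs
    my ( $error, $marcresults, $total_hits ) =
        C4::Search::SimpleSearch( $query, 0, $max_results );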
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Enhancement to store the matching rule associated with an
import batch and to allow the current matching rule in
effect to be changed and the duplicate detection redone.
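A hedged sketch of what this enables (function names are assumptions patterned on C4::ImportBatch and C4::Matcher conventions):

    use C4::ImportBatch;
    use C4::Matcher;

    # store a different matching rule with the batch ...
    SetImportBatchMatcher( $batch_id, $new_matcher_id );

    # ... and redo duplicate detection against it
    my $matcher = C4::Matcher->fetch( $new_matcher_id );
    my $num_with_matches = BatchFindBibDuplicates( $batch_id, $matcher );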
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>