On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers
To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
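For steps 3 and 4, the worker processes should now show up with the new path, roughly like this (user, PID and base path are illustrative for a koha-testing-docker setup):
  kohadev-koha  1234  ...  /usr/bin/perl /kohadevbox/koha/misc/workers/background_jobs_worker.pl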
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf, Add lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
- my $job = Koha::BackgroundJobs->find($args->{job_id});
+ my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4)
perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids => ["888888"] });'
7 - Note error in log like:
[2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch fixes a duplicate attribute code for Author-in-order in the
biblios definition.
The chosen code matches what was already in ccl.properties.
Also Chronological-term for authorities gets fixed.
To test:
1. Apply the regression tests
2. Run:
k$ prove xt/verify_bib1.att.t
=> FAIL: Some failures
3. Apply this patch
4. Repeat step 2
=> SUCCESS: Tests now pass!
5. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
- Enable show_outstanding_amount in SIPconfig.xml
- Check that the total outstanding amount for the patron is displayed on SIP
checkout (if it exists), for example:
Patron has fines - You owe $10.00.
- Check that the outstanding amount for a given item is displayed on SIP
checkin (if it exists), for example:
"You owe $10.00 for this item."
- Check that it is not displayed when show_outstanding_amount is disabled.
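For the first step, a minimal sketch of enabling the option, assuming show_outstanding_amount is set as an attribute on the SIP account (<login>) entry in SIPconfig.xml (exact placement may differ in your file):
  <login id="term1" password="term1" ... show_outstanding_amount="1" />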
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Bug 20078 updated the attribute for arp to 2014 to avoid conflict with 9013 not-on-loan-count
Bug 28830 then added Control-number-identifier as 2014, breaking arp again
This patch updates the number to 9015
To test:
1 - Apply first patch
2 - Attempt searching by arp, no results (add a unique 526$d to a record to ease searching)
3 - Apply this patch
4 - Copy bib1.att and ccl.properties to the correct locations
cp etc/zebradb/biblios/etc/bib1.att /etc/koha/zebradb/biblios/etc/bib1.att
cp etc/zebradb/ccl.properties /etc/koha/zebradb/ccl.properties
5 - Restart zebra
6 - Rebuild indexes
7 - Search again, success!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This is going to be awesome!
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently we only index subfield a, but we can set up the system so that subfields a, v, x, y and z are searched
To test:
1 - define both a 655$a *and* 655$x value in a bib, save, reindex
2 - Set system preferences:
TraceSubjectSubdivisions: Include
TraceCompleteSubfields: Force
3 - View the record edited above in the opac
4 - Click on the subject heading
5 - No results found
6 - Copy zebra files:
sudo cp ./etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml
sudo cp etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
7 - restart all and reindex
8 - Click on the subject heading in OPAC
9 - Success!
10 - Repeat with other fields (vyz)
11 - Repeat under ES, reindexing and resetting mappings
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds a preprocessor XSLT to the Zebra indexing pipeline,
so that 880 fields get indexed as the fields they're linked to. For example,
a "880 $6 245" field would be indexed as a "245" field.
However, because the preprocessor only occurs in the indexing part of the pipeline,
it does not affect the retrieval of MARCXML from Zebra. That MARCXML will be
the same MARCXML that was sent to Zebra from Koha.
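As a simplified sketch of the idea only (not the actual preprocess_marcxml.xsl shipped by this patch), such a preprocessor boils down to an identity transform plus one template that re-emits each 880 under the tag named in its $6 subfield:
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:marc="http://www.loc.gov/MARC21/slim">
    <!-- identity: copy everything through unchanged -->
    <xsl:template match="@*|node()">
      <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
    </xsl:template>
    <!-- for each 880, also emit its subfields under the linked tag, e.g. $6 "245-01" -> tag 245 -->
    <xsl:template match="marc:datafield[@tag='880']">
      <xsl:copy-of select="."/>
      <marc:datafield tag="{substring(marc:subfield[@code='6'], 1, 3)}"
                      ind1="{@ind1}" ind2="{@ind2}">
        <xsl:copy-of select="marc:subfield[@code != '6']"/>
      </marc:datafield>
    </xsl:template>
  </xsl:stylesheet>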
Test plan:
0. Revert bug 15187, and apply patch for 31532
1. cp ./etc/zebradb/biblios/etc/dom-config.xml /etc/koha/zebradb/biblios/etc/dom-config.xml
2a. cp etc/zebradb/marc_defs/marc21/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/marc21/biblios/.
2b. cp etc/zebradb/marc_defs/normarc/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/normarc/biblios/.
2c. cp etc/zebradb/marc_defs/unimarc/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/unimarc/biblios/.
3. koha-rebuild-zebra -b -f -v kohadev
4. Note that in search results the 880$6245$a data appears before the 245$a data
5. Note that you can do a title index search on the 880$6245$a data
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This follow-up moves the configuration to the z3950 etc file; either the
default or the custom file is used, as per the existing script code.
In addition, the options can be set using an environment variable named Z3950_ADDITIONAL_OPTS.
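For example (option value taken from the original test plan; this assumes the variable is exported in the environment the script runs in):
  Z3950_ADDITIONAL_OPTS="--add-item-status k" koha-z3950-responder --restart kohadev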
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The z39.50 responder has a number of command line options that are not
accessible if using the debian scripts to control it. We should be able
to set those options in the koha conf file to be passed to the script
itself.
Test Plan:
1) Apply this patch
2) Copy your kohaclone's koha-z3950-responder to /usr/sbin/koha-z3950-responder if necessary
3) Add "<z3950_responder_options>--add-item-status k</z3950_responder_options>" inside your <config> block in your koha-conf.xml file
4) Use koha-z3950-responder to start/restart the z39.50 responder, note the item status is now in subfield k!
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The PLACKOPAC, PLACKAPI and PLACKINTRANET appenders still need %n
(i.e. a newline). Note that this patch does not add %l since it
is a bit confusing because it adds a lot of Plack internal noise like:
[2022/09/29 08:51:34] [WARN] Test mainpage CGI::Compile::ROOT::usr_share_koha_mainpage_2epl::__ANON__ /usr/share/koha/mainpage.pl (49)
The patch is a result of:
git grep -l "log4perl.appender.PLACK" | xargs sed -i -e"/ConversionPattern/ s/%m$/%m%n/"
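After the substitution, each appender line ends in %n, for example (the prefix of the pattern may differ per appender):
  log4perl.appender.PLACKOPAC.layout.ConversionPattern = [%d] [%p] %m%n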
Test plan:
First run: sed -i -e"/ConversionPattern/ s/%m$/%m%n/" /etc/koha/sites/[YOUR_CLONE]/log4perl.conf
Edit that file, change PLACKOPAC to debug level like:
log4perl.logger.plack-opac = DEBUG, PLACKOPAC
Restart.
Hit an OPAC page twice.
Check plack-opac logfile and verify that it contains a newline between last two messages like:
[2022/09/29 08:04:30] [DEBUG] kohaversion : 22.0600054
[2022/09/29 08:04:42] [DEBUG] kohaversion : 22.0600054
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds <zebra_connection_timeout>30</zebra_connection_timeout>
to the koha-conf.xml file.
Sometimes, a Zebra search might take longer than 30 seconds. If it does,
Koha will say that 0 records have been found. While slow searching
is not desirable, it's more desirable to get the result set regardless.
Test plan:
0. Apply patch
1. Add <zebra_connection_timeout>.1</zebra_connection_timeout> to
your relevant koha-conf.xml file (e.g. /etc/koha/sites/kohadev/koha-conf.xml)
2. echo 'flush_all' | nc -q 1 memcached 11211
3. koha-plack --restart kohadev
4. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
5. Note that no results are returned
6. Change zebra_connection_timeout to 30
7. echo 'flush_all' | nc -q 1 memcached 11211
8. koha-plack --restart kohadev
9a. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
9b. Note that 3 results are returned
10. Remove zebra_connection_timeout from koha-conf.xml
11. echo 'flush_all' | nc -q 1 memcached 11211
12. koha-plack --restart kohadev
13a. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
13b. Note that 3 results are returned
14. Celebrate
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Koha has been able to send arbitrary item fields via the "item_field" parameter in the config. We have partners that need the ability to create custom item fields from templates, as the item_fields feature cannot accomplish what they need. We need to add a templated custom field feature for items, similar to what we have for patrons.
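For instance, a template can combine several item fields into one custom SIP field; the field code "IT" here is purely illustrative (pick an unused code), and the usual Koha::Item accessors are assumed:
  <custom_item_field field="IT" template="[% item.barcode %] - [% item.itemcallnumber %]" />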
Test Plan:
1) Apply this patch
2) Choose a SIP login to use, edit that account and add the following
*inside* the login section:
<custom_item_field field="IN" template="[% item.itemnumber %]" />
3) Restart SIP
4) Run an item information query
5) Note the itemnumber is sent in the IN field!
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: fixed test count
As requested by Jonathan, we need more flexibility ;)
Here it comes.
Test plan:
Run t/CookieManager.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
Add this change to your koha-conf.xml.
Flush, restart.
Test if the cookie is kept now in the interface.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
By default, the SIP server appears to only use 1 child process for
responding to SIP connections.
This change makes this explicit in the configuration, which should
make it so that people who need more than 1 simultaneous SIP connection
can know to just increase the value for the "max_servers" parameter
in the SIPconfig.xml file.
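As a sketch, assuming the Net::Server parameters live on the <server-params> element of SIPconfig.xml (check your own file for the exact element and existing attributes):
  <server-params min_servers='1' max_servers='3' ... />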
Test plan:
1. Add "max_servers='1'" to your SIP configuration file
2. koha-sip --restart kohadev
3. Open 3 terminals
4. Run "telnet localhost 6001" on 2 terminals
5. On the 3rd terminal, run the following:
ss -l -n -t
ps -efww | grep "sip"
6. Note that there are 2 processes called
kohadev-koha-sip: perl /kohadevbox/koha/C4/SIP/SIPServer.pm /etc/koha/sites/kohadev/SIPconfig.xml
One of these processes is the parent of the other
7. The Recv-Q in the "ss" output should show 1
(This means that 1 of your telnet connections is in the server's TCP backlog)
8. Celebrate as the configuration works as expected
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
In Koha, any report that uses C4::Reports::Guided will be limited to 999,999 rows. This is causing problems for larger libraries where some reports may have over a million results.
Test Plan:
1) Create a report "SELECT * FROM borrowers" and run it, note the number
of results
2) Apply this patch
3) Add the line `<report_results_limit>3</report_results_limit>`
within the <config> block of your koha-conf.xml
4) Restart all the things!
5) Run the report, download the results as a CSV
6) Note your CSV only has 4 lines, the header and 3 patrons
Signed-off-by: Rachael Laritz <rachael.laritz@inlibro.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Some libraries have certain item types that can only be used for in-house checkouts via SIP self-check machines. In these cases, the items should not be demagnetized since they cannot leave the library.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Same as before, but test with UNIMARC setup
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch updates the Local-Number indexing by adding a zeropad option
to Zebra indexing and adding this to the mapping files
It also updates C4/Search.pm to allow biblionumber as an option
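A quick illustration of why the zero padding matters for sorting (padding width illustrative):
  without zeropad, string order:  "100" < "99"
  with zeropad:                   "0000000099" < "0000000100"   (matches numeric order)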
To test:
1 - Apply patches
2 - copy etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl to /etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
3 - Restart all, reindex zebra
4 - Browse to: http://localhost:8081/cgi-bin/koha/catalogue/search.pl?idx=kw&q=a&sort_by=biblionumber_dsc&count=20
5 - Confirm records sorted correctly
6 - Browse to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?idx=kw&q=a&sort_by=biblionumber_asc&count=20
7 - Confirm records sorted correctly
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds a new option to the SIP config, allowing for hold
capture to be disabled on different devices. We still notice the hold
and alert the user, but we do not trigger the update in the system to
mark the hold as found (waiting, processing or in transit).
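A sketch of how this could be toggled per device in SIPconfig.xml; the attribute name below is hypothetical, use the one introduced by this patch:
  <!-- attribute name is hypothetical: substitute the option added by this patch -->
  <login id='selfcheck1' ... holds_get_captured='0' />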
Sponsored-by: Cheshire Libraries Shared Services
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Sally <sally.healey@cheshiresharedservices.gov.uk>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Test plan:
Just comments. Nothing to test.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Amended: using new name for deny list.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
I ran the xsltproc on both MARC21 and UNIMARC files (biblios and
authorities). With my follow-up the only changed one is this one.
I skipped NORMARC as it is supposed to be removed by now (so unused in
Norway).
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch makes the generated XSLT not mention index_sort_title unless
the entry is defined in the XML file. Otherwise there is a stray call to
<xslo:apply-templates mode="index_sort_title"/>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch moves the code that generates the XSL for MARC21 biblio sorting
to its own template that is only called when specified in the XML
To test:
1 - xsltproc etc/zebradb/xsl/koha-indexdefs-to-zebra.xsl etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml > etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl
2 - git diff
3 - Note that authority-zebra-indexdefs.xsl now has 245 Title:s info
4 - Apply patch
5 - xsltproc etc/zebradb/xsl/koha-indexdefs-to-zebra.xsl etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml > etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl
6 - git diff
7 - There are lines added about title sort, but no 245 block
8 - xsltproc etc/zebradb/xsl/koha-indexdefs-to-zebra.xsl etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml > etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
9 - git diff
10 - Note the lines changed to ...title_sort
11 - 245 block does not change
Signed-off-by: Hayley Pelham <hayleypelham@catalyst.net.nz>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch grants access to the JSON files from koha-tmpl in the Apache configuration. Otherwise system preferences won't be able to fetch them.
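A minimal sketch of the kind of Apache directive involved (paths illustrative; the actual patch may differ):
  <Directory "/usr/share/koha/intranet/htdocs/intranet-tmpl">
    <FilesMatch "\.json$">
      Require all granted
    </FilesMatch>
  </Directory>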
TEST PLAN:
1) Try to set the system preference BorrowerUnwantedField.
2) The modal is empty.
3) Modify the Apache configuration like in this patch.
4) Try again to set the system preference.
5) The modal should show a list of parameters.
Sponsored-by: Koha-Suomi Oy
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
JD Amended patch: replace tab characters and reword commit title
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This adds mrl as an abbreviation of Multipart-resource-level to
the Zebra configuration and Search.pm so it can be used when
searching.
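For example, after reindexing, a search using the new abbreviation in the standard index:term form should work (value illustrative):
  http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=mrl:2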
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Sponsored-by: Bibliotheksservice-Zentrum Baden-Wuerttemberg
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
As the field is already defined, we don't need to add anything here.
Bib1.att can use the existing number as well
To test, enable zebra debugging in koha-conf, adding 'request' to the list:
<zebra_loglevels>none,fatal,warn,request</zebra_loglevels>
Restart all the things
Repeat matching (redo matching with no rule, then with OCN rule)
Tail the zebra-output.log and note that 1=Other-control-number is searched and a match is found
Perform a search in the staff client for: other-control-number:expialodocious
Note in logs that 1=1211 is searched
The previous test plan did not mention copying ccl.properties and bib1.att to the package install,
which highlighted that things work without those changes
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
To test:
1 - Apply patch
2 - Copy zebra files to destination:
cp /kohadevbox/koha/etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml /etc/koha/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml
cp /kohadevbox/koha/etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl /etc/koha/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl
3 - Reindex authorities
4 - Edit an authority and add 035$aExpialodocious
5 - Export the authority
6 - Create a new record matching rule:
Matching rule code: OCN
Description: Other control number
Match threshold: 1000
Record type: Authority record
Search-index: Other-control-number
Score: 1000
Tag: 035
Subfields: a
7 - Stage the record and use the new matching rule
8 - Match found!
Signed-off-by: Andrew Fuerste-Henry <andrew@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch adds the cni/Control-number-identifier index to enable
searches to use the 003 field.
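For example, a search like the following should work once reindexed (the value must match the 003 in your records; 'OSt' is just illustrative):
  http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=cni:OSt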
Test plan
1/ Apply patch
2/ Re-index using updated configurations
3/ Confirm cni:number searches yield the expected results
4/ Signoff
Split-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Pasi Kallinen <pasi.kallinen@koha-suomi.fi>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Bug 15253 originally made it possible to include the incoming IP address for
a given SIP log statement, as well as the SIP2 account that was in use at the time.
This data is very helpful for debugging purposes, and should be brought back.
Test Plan:
1) Apply this patch
2) Update your SIP ConversionPattern to "[%d] [%p] %X{accountid}@%X{peeraddr}: %m %l %n"
3) Restart SIP
4) Use the SIP cli tester to make some SIP requests
5) View the SIP2 log, note the account id and client IP address show up in the log!
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Trivial change.
Do sed -i -r -e'/log4perl/ s/\s%n$/%n/' on the log4perl configs.
Test plan:
Update your own config.
Trigger some logging and check that logfile.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch adds a 'koha' set to the pqf properties and maps some basic
koha fields to all for searching
To test:
1 - Apply patch
2 - cp etc/z3950/pqf.properties /etc/koha/sites/kohadev/z3950/pqf.properties
3 - sudo koha-z3950-responder --restart kohadev
4 - Test a search:
curl -XGET "http://localhost:2100/biblios?version=1.1&operation=searchRetrieve&query=koha.itemtype=BK&maximumRecords=60&recordSchema=marcxml"
5 - Test other fields added:
koha.withdrawn
koha.lost
koha.classification-source
koha.materials-specified
koha.damaged
koha.restricted
koha.cn-sort
koha.notforloan
koha.ccode
koha.itemnumber
koha.homebranch
koha.holdingbranch
koha.location
koha.barcode
koha.onloan
koha.itemtype
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Copied the changes from z3950 to zebradb too.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Test plan:
Diff etc/zebradb/pqf.properties with the etc/z3950 one.
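For example:
  diff -u etc/z3950/pqf.properties etc/zebradb/pqf.properties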
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Change the sample SIPConfig to request that the server write a pid file,
and use this pid on shutdown rather than the current open-to-error
method.
Also added config parameters to ensure that the sipserver runs as the
correct user and sets its own session id. These are always useful, but
they also make it easier to start the sipserver as root at boot time,
similarly to apache, mysql etc.
Added to the sample config a note on where to locate other server
parameter documentation.
Removed from the sample config the unedifying, unwanted and
purely historical http example
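A sketch of the kind of Net::Server parameters involved, assuming they are set on the <server-params> element of SIPconfig.xml (names per Net::Server; exact placement may differ in the actual sample file):
  <server-params ... user='sip' group='sip' setsid='1' pid_file='/var/run/sipserver.pid' />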
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When using Zebra for searching, Koha performs a number of searches in order
to improve relevancy. This means that even for a 'wordlist' search, we perform a phrase search.
When selecting 'Corporate-name' as an index, this expansion of the search causes errors and
the search fails.
We can fix this for 'Corporate-name' searches by adding a phrase index
To test:
1 - Edit koha-conf.xml and uncomment the zebra debug line and add 'request' to the list
2 - Restart all
3 - tail -f /var/log/koha/kohadev/zebra-output.log
4 - Edit a record to add a 110 field e.g. 'House plants'
5 - Enable syspref IntranetCatalogSearchPulldown
6 - Search for 'Corporate name' and term 'House plants'
7 - No results
8 - View the log, see 'ERROR' and full search terms listed
9 - Apply patch
10 - copy the zebra files to the production instance:
cp etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml /etc/koha/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml
cp etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl /etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
11 - restart all
12 - rebuild: sudo koha-rebuild-zebra -v -f kohadev
13 - Repeat search
14 - Success!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch adds a "lib" directory to the source tree which gets
mapped to the same directory as "C4" and "Koha" for single and
standard installations.
CGI::Session::Serialize::yamlxs is put into this "lib" directory.
This patch also includes some changes so that dev/git installations
work as well.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch adds `%P` to the SIP log4perl configuration so that PID is
recorded against log lines.
This allows transactions to be more easily tied together under one
SIPServer, thus making it easier to pick out a whole transaction from
start to finish.
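For illustration, the resulting layout line looks something like this (the appender name and the rest of the pattern may differ in your log4perl.conf):
  log4perl.appender.SIP.layout.ConversionPattern = [%d] [%p] %P %m %n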
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
For good management of authorities linked to biblio records,
MARC21 uses index_heading and index_match_heading in the authorities Zebra configuration.
The UNIMARC configuration must use the same.
This patch adds index_heading and index_match_heading to each heading in the
UNIMARC authorities Zebra configuration, in order to stay as close as possible
to the MARC21 authorities Zebra configuration.
See changes made in MARC21:
32cf2af700
It fixes some index names: Personal-name-see => Personal-name-see-from
Removes the useless Term-geographic index, a duplicate of Name-geographic.
Sometimes the parallel 7xx form was only on $a; it must contain the same
subfields as the main heading.
Test plan :
===========
1.0) Use a UNIMARC install without patch
1.1) Set sysprefs
BiblioAddsAuthorities = ON
AutoCreateAuthorities = ON
LinkerModule = First Match
1.2) Replace authorities zebra configuration files
cp $KOHA_CLONE/etc/zebradb/marc_defs/unimarc/authorities/authority-koha-indexdefs.xml $KOHA_CONF_DIR/zebradb/marc_defs/unimarc/authorities/authority-koha-indexdefs.xml
cp $KOHA_CLONE/etc/zebradb/marc_defs/unimarc/authorities/authority-zebra-indexdefs.xsl $KOHA_CONF_DIR/zebradb/marc_defs/unimarc/authorities/authority-zebra-indexdefs.xsl
1.3) Restart zebra server and indexer services
1.4) Reindex authorities
./misc/migration_tools/rebuild_zebra.pl -r -a -v
1.5) Search via Z39.50 for a record with a complex heading (with subdivisions),
for example ISBN 2877620115 "Facteurs culturels et sociaux de la santé en Afrique de l'Ouest"
1.6) Import this record and save it : authorities are created
go to staff:/cgi-bin/koha/cataloguing/addbooks.pl
1.7) Reimport the same record (when asked, say that it's not a duplicate)
1.8) The authority should have been duplicated:
different url and different $9 value
2.0) Apply this patch
2.1) Replace again the authorities zebra configuration files
2.2) Restart zebra server and indexer services
2.3) Reindex authorities
2.4) Reimport the same record
2.5) The authority should not have been duplicated. Compare with both
existing records to see which one the third import has been matched against.
3.0) Play with the authorities search to check every mode:
Search main heading ($a only)
Search main heading
Search all headings
Search entire record
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Like MARC21, UNIMARC authorities have form, general, chronological
and geographic subdivisions.
In C4::Heading::UNIMARC, use subdivisions in _get_search_heading like in C4::Heading::MARC21.
Adds subdivision variables to the UNIMARC authorities Zebra configuration.
Note that, unlike MARC21, geographic is subfield $y and chronological is subfield $z.
See https://www.ifla.org/publications/unimarc-formats-and-related-documentation
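For quick reference, the subdivision subfields compared (MARC21 per the standard, UNIMARC as described above):
  chronological subdivision: MARC21 $y  /  UNIMARC $z
  geographic subdivision:    MARC21 $z  /  UNIMARC $y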
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>