This patch simply adds application/json to the mod_deflate configuration
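For illustration, the change amounts to adding application/json to the DEFLATE output filter in apache-shared.conf, along the lines of (exact placement in the shipped file may differ):
  AddOutputFilterByType DEFLATE application/json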
To test:
1 - Open the network tab in Firefox
2 - Load http://localhost:8081/api/v1/libraries
3 - Note the transferred size, and note there is no "Content-Encoding: gzip" header
4 - Apply patch, reset_all (or edit /etc/koha/apache-shared.conf)
5 - Reload
6 - Note smaller size, note gzip header
Signed-off-by: Phil Ringnalda <phil@chetcolibrary.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
(cherry picked from commit bccf7764c0)
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
When performing batch operations we can send a large number of records for reindexing at once.
Currently this can create requests that are too large for Elasticsearch to process. We need
to break these requests into chunks.
This patch adds a chunk_size configuration to the elasticsearch stanza in koha-conf.xml
If blank we default to 5000.
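As a rough sketch, the stanza ends up looking something like this (server and index_name values are just typical dev-setup placeholders):
  <elasticsearch>
    <server>localhost:9200</server>
    <index_name>koha_kohadev</index_name>
    <chunk_size>5000</chunk_size>
  </elasticsearch>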
To test:
0 - Have Koha using Elasticsearch
1 - Create and download a report of all barcodes:
SELECT barcode FROM items
2 - Batch modify these items
3 - Note a single ESindexing job is created
4 - Create and download a report of all authority ids:
SELECT auth_header.authid FROM auth_header
5 - Setup a marc modification template, and batch modify all the authorities
6 - Again note a single ES background job is created
7 - Apply patch
8 - Repeat the modifications above - you still get a single job
9 - Edit koha-conf.xml and add <chunk_size>250</chunk_size> to elasticsearch stanza
10 - Repeat modifications - you now get several background ES jobs
11 - prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
(cherry picked from commit 9951e230e4)
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This change more closely aligns ICU and CHR so that ICU also
removes the = character. This fixes issues in ICU when searching
with a : which gets transformed into a =. Without this change,
the Analytics features won't work for titles with a colon in them.
Test plan:
0. Apply the patch and import bibs from Bugzilla (using Staged MARC tools)
1. cp ./etc/zebradb/etc/phrases-icu.xml /etc/koha/zebradb/etc/phrases-icu.xml
2. cp ./etc/zebradb/etc/words-icu.xml /etc/koha/zebradb/etc/words-icu.xml
3. vi /etc/koha/zebradb/etc/default.idx
Change "charmap word-phrase-utf.chr" to "icuchain words-icu.xml" for "index w"
and "icuchain phrases-icu.xml" for "index p"
4. koha-zebra --stop kohadev
5. pkill zebrasrv
6. koha-zebra --start kohadev
7. koha-rebuild-zebra -a -b -f -v kohadev
8. Search for "Awesome title" and open the detail page
9. Note that the "Analytics: Show analytics" line shows up
10. Click that link
11. Note that it opens the "Cool article" record and it displays
"In: Awesome title: awesome subtitle"
12. Click that link
13. Note that it opens the "Awesome title" record
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
(cherry picked from commit 7375d82c40)
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
The SIP circulation status specifies that a 12 means an item is lost, and 13 means an item is missing. In Koha, missing items are simply a type of lost item so we never send a 13. This is an important distinction for some SIP based inventory tools. It would be good to be able to specify when lost status means "missing" at the SIP login level.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Transaction.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently, Koha does not return a message on successful SIP checkin.
This patch adds the show_checkin_message option to SIPconfig.xml, disabled by
default. When enabled, the following message is displayed on SIP checkin:
"Item checked-in: {homebranch|permanent_location} - {location}"
The UseLocationAsAQInSIP system preference is used to determine whether the
homebranch or the permanent location will be used.
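As a sketch, the option sits on the SIP account's login entry like the other flags (other attributes elided):
  <login id="term1" ... checked_in_ok="1" show_checkin_message="1" />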
Test plan:
- Perform a successful checkin using SIP
- Check that the message is in the checkin response (AF field)
- prove t/db_dependent/SIP/Transaction.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Edit (tcohen): tidied the whole subtest.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I think instead of a plain on/off switch we should use it in combination
with the plugin_repos and restrict uploads to only those repos (i.e.
disable uploads entirely if no repos are listed, or only allow those
repos when there are).
This patch achieves that, but only if plugins are installed via the
plugin browser method. We disable all direct upload avenues, so install
is blocked for other cases.
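For illustration, restricting uploads to listed repos means having a populated block roughly like this in koha-conf.xml (organisation shown is only an example; see debian/templates/koha-conf-site.xml.in for the shipped version):
  <plugin_repos>
    <repo>
      <name>ByWater Solutions</name>
      <org_name>bywatersolutions</org_name>
      <service>github</service>
    </repo>
  </plugin_repos>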
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch enables enable_plugin_browser_upload by default,
since the current behaviour for Koha is to enable browser upload
when enable_plugins is 1.
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds an enable_plugin_browser_upload flag to koha-conf.xml, which
controls whether or not Koha intranet users can upload Koha plugins via
their browser. Like "enable_plugins", it defaults to 0 for new installs.
This is useful when you want to provide Koha intranet users with plugins
that are pre-installed by administrators (via the CLI), or to restrict them
to plugins from a GitHub repo. See the following for more information:
Bug 23975 - Add ability to search and install plugins from GitHub
Bug 23191 - Administrators should be able to install plugins from the command line
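For reference, the new flag sits next to enable_plugins in koha-conf.xml, something like:
  <enable_plugins>1</enable_plugins>
  <enable_plugin_browser_upload>0</enable_plugin_browser_upload>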
To test:
1) Apply the full patchset
2) Confirm <enable_plugins>1</enable_plugins> is present in koha-conf.xml
3) Add <plugins_restricted>1</plugins_restricted> to koha-conf.xml
4) Ensure that the <plugin_repos> block is not commented and contains at
least one trusted organisation in koha-conf.xml
If needed get it from: debian/templates/koha-conf-site.xml.in
5) Run restart_all (in koha-testing-docker)
6) Go to /cgi-bin/koha/plugins/plugins-home.pl and note that you don't see
an option to upload plugins
7) You should, however, see a search option, and upon searching you should get
   results returned from the trusted organisations listed in the
   <plugin_repos> block mentioned above.
8) Clicking install on one of the results should work as expected and install
the plugin.
9) Go directly to /cgi-bin/koha/plugins/plugins-upload.pl and note that it says
"Plugin upload is restricted to only those plugins listed by your server
administrator" and gives instructions on how to enable unrestricted browser
upload.
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Rebased-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
At this time, any item with an additional materials date is blocked from checkout via SIP with a screen message to take the item to a circulation desk for checkout.
Some libraries wish to allow patrons to check out items via SIP even if the item has additional materials.
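As a sketch, the new account option goes on the SIP login entry like the other flags (other attributes elided):
  <login id="term1" ... allow_additional_materials_checkout="1" />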
Test Plan:
1) Create an item with an additional materials note
2) Attempt to check it out via SIP
3) Note the failure and message
4) Enable the new SIP account option "allow_additional_materials_checkout"
5) Restart the SIP server
6) Attempt the checkout again
7) Note the checkout success and new AF field message!
8) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Rebased-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This change adds a mfa_range configuration option for TOTP
to koha-conf.xml, and overrides the "verify" method from
Auth::GoogleAuth in order to provide a new default for "range".
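For illustration, the new entry in koha-conf.xml looks like this (value taken from the test plan below; each unit of range presumably corresponds to one 30-second TOTP step, which matches the roughly 5-minute window exercised in step 18):
  <mfa_range>10</mfa_range>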
Test plan:
0. Apply the patch
1. koha-plack --restart kohadev
2. Go to
http://localhost:8081/cgi-bin/koha/admin/preferences.pl?op=search&searchfield=TwoFactorAuthentication
3. Change the syspref to "Enable"
4. Go to
http://localhost:8081/cgi-bin/koha/members/moremember.pl?borrowernumber=51
5. Click "More" and "Manage two-factor authentication"
6. Register using an app
7. In an Incognito window, go to
http://localhost:8081/cgi-bin/koha/mainpage.pl
8. Sign in with the "koha" user
9. Note down a code from your Authenticator app
10. Wait until after 60 seconds and try it
11. Note it says "Invalid two-factor code"
12. Try a new code from the app
13. Note that it works
14. Add <mfa_range>10</mfa_range> to /etc/koha/sites/kohadev/koha-conf.xml
15. Clear memcached and koha-plack --restart kohadev
16. Sign in with the "koha" user
17. Note down a code from your Authenticator app
18. Wait 4 minutes and then try it
19. Note that it works
20. Disable your two-factor authentication and click to re-enable it
21. Use a code older than 60 seconds when registering for the two
factor authentication
22. Note that the code works
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
At this time, we can specify fields to hide in SIP response at the login level. From a security perspective, it would be useful to also be able to specify which fields are allowed in a response.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: Sam Lau <samalau@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
from debian and /etc koha-conf.xml files
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This bug adds the ability to define a list of item types that are blocked from being issued at that SIP account
To test:
1) Apply this patch
2) Visit Administration->Item types and select edit on the music item type
3) Make the rental charge 0 and save changes (this allows for the item to be checked out via SIP)
4) In the terminal, vim /etc/koha/sites/kohadev/SIPconfig.xml
5) Edit the term1 account and add the following *inside* the login section:
blocked_item_types="BK|MU"
You should have something similar to this: <login id ="term1" ........... checked_in_ok="1" blocked_item_types="BK|MU" />
6) Restart SIP (sudo koha-sip --restart <instancename>)
7) Run a checkout query for an item with the item type book. Here is an example you could use:
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000011418 -m checkout
8) Notice the checkout failed and you are given the screen msg "Item type cannot be checked out at this checkout location"
9) Run a checkout query for an item with the item type music. Here is an example you could use:
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000008715 -m checkout
10) Notice the checkout failed and you are given the screen msg "Item type cannot be checked out at this checkout location"
11 - vim /etc/koha/sites/kohadev/SIPconfig.xml
12 - Delete the BK from blocked_item_types. It should now look like:
blocked_item_types="MU"
13) Restart SIP (sudo koha-sip --restart <instancename>)
14) Run a checkout query for the item with the item type book
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000011418 -m checkout
15 - Note the checkout is successful
16) Run a checkout query for the item with the item type music
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000008715 -m checkout
17) Still fails (because it is blocked)
18) prove t/db_dependent/SIP/Message.t
19) Congratulate yourself for making it through the long test and sign-off :)
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Adding biblio-zebra-indexdefs.xsl in the same patch (as it should
be generated with xsltproc).
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Prior to Koha 22.05, the SIP2 item information message had a side effect of updating the datelastseen field for items. That bug has been fixed, but the behaviour was being relied on by inventory tools that use SIP2. We should bring back this effect and formalize it as an optional SIP2 account configuration setting.
Test Plan:
1) Apply this patch set
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Instead of typing the case-sensitive Control-number each time.
4 strikes instead of 15 on your keyboard. Wow! Gain of 73%.
Test plan:
Copy ccl.properties to /etc/koha/zebradb, restart Zebra and
search for cnum=SOME_ID in opac or intranet.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The previous patches adjusted the mappings directly - moving this
change to the correct build file
Not needed for sign off, but QA can test that nothing changes when rebuilding the files:
xsltproc etc/zebradb/xsl/koha-indexdefs-to-zebra.xsl etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml > etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl
Signed-off-by: Frank Hansen <frank.hansen@ub.lu.se>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Frank Hansen <frank.hansen@ub.lu.se>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
We already tested it. Just look at changes in this patch.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch updates the example NGINX config to increase the
proxy_buffer_size to 16k. The default value of 4k (on some platforms)
has empirically been shown to be a bit too small for the Link
headers emitted by the REST API when pagination is requested.
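For reference, the change amounts to something like this inside the relevant proxy server/location block:
  proxy_buffer_size 16k;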
To test
-------
[1] Set up a Koha system with NGINX as a reverse proxy in
front of it (either in front of Apache or in front of
    Starman).
[2] Perform a patron search that returns at least two pages
of results and navigate to the second page.
[3] Note that the navigation can fail with a 502 HTTP error
and an "upstream sent too big header while reading response
header from upstream" error in the NGINX log.
    The problem is most likely to occur when the page size of the server
running NGINX is 4096 bytes.
[4] Update the NGINX configuration per this patch and restart
NGINX.
[5] This time, repeating step 2 should work.
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch updates the default partner category used by the partner_code config to be in line with sample data in sample_patrons.yml
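As a sketch (assuming partner_code lives in the usual interlibrary_loans block of koha-conf.xml; the IL code matches the category used in the test plan below):
  <interlibrary_loans>
      <partner_code>IL</partner_code>
  </interlibrary_loans>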
Preparation:
Apply patch
Enable ILLModule sys pref
Install an ILL backend (e.g. FreeForm)
Add this change to your koha-conf.xml
Flush, restart.
Search for a patron of category inter-library loan and assign a primary email address to it
Test plan:
Create an ILL request and click 'place request with partners'
Verify that the 'Select partner libraries' list shows the patron with the IL category
Run tests and ensure they pass:
prove t/db_dependent/Illrequest/Config.t
prove t/Koha/Config.t
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The SIP patron status and information responses always return false for "too many items lost". It would be reasonable to check the count of lost items still checked out to the patron and compare that to a threshold set in the SIP config file. Though not all libraries operate this way, it seems like a good and reasonable implementation as long as it is properly documented.
This patch adds the ability to set the SIP "too many items lost" flag
for a patron based on the number of lost checkouts the patron has where
the lost flag on those items is greater than the given flag value.
For example, one could specify that the flag be set if the patron has
more than 2 items checked out where itemlost is greater than 3.
By default the feature is disabled to retain the existing functionality.
If enabled, the default itemlost minimum flag value is 1 unless
specified.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Right now background_jobs_worker.pl only processes jobs serially. It would make sense to handle jobs in parallel up to a user-definable limit.
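As a sketch, the config counterpart used in the test notes below would look roughly like this in koha-conf.xml (element nesting assumed):
  <background_jobs_worker>
      <max_processes>3</max_processes>
  </background_jobs_worker>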
Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
number of max processes
6) Note the multiple forked processes in the ps output
Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output
Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers
To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf, Add lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
- my $job = Koha::BackgroundJobs->find($args->{job_id});
+ my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4):
perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids=>['888888']});'
7 - Note error in log like:
[2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch fixes a duplicate attribute code for Author-in-order in the
biblios definition.
The picked code matches what was already in ccl.properties.
Also Chronological-term for authorities gets fixed.
To test:
1. Apply the regression tests
2. Run:
k$ prove xt/verify_bib1.att.t
=> FAIL: Some failures
3. Apply this patch
4. Repeat 1
=> SUCCESS: Tests now pass!
5. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
- Enable show_outstanding_amount in SIPconfig.xml (see the example after this plan)
- Check that the total outstanding amount for the patron is displayed on SIP
checkout (if it exists), for example:
Patron has fines - You owe $10.00.
- Check that the outstanding amount for a given item is displayed on SIP
checkin (if it exists), for example:
"You owe $10.00 for this item."
- Check that it is not displayed when show_outstanding_amount is disabled.
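For reference, a sketch of enabling the option on a SIP login entry (other attributes elided):
  <login id="term1" ... show_outstanding_amount="1" />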
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Bug 20078 updated the attribute for arp to 2014 to avoid conflict with 9013 not-on-loan-count
Bug 28830 then added Control-number-identifier as 2014, breaking arp again
This patch updates the number to 9015
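For illustration, the relevant lines end up roughly as follows (bib1.att and ccl.properties respectively; exact spacing and surrounding context as in the shipped files):
  att 9015    arp
  arp 1=9015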
To test:
1 - Apply first patch
2 - Attempt searching by arp, no results (add a unique 526$d to a record to ease searching)
3 - Apply this patch
4 - Copy bib1.att and ccl.properties to the correct locations
cp etc/zebradb/biblios/etc/bib1.att /etc/koha/zebradb/biblios/etc/bib1.att
cp etc/zebradb/ccl.properties /etc/koha/zebradb/ccl.properties
5 - Restart zebra
6 - Rebuild indexes
7 - Search again, success!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This is going to be awesome!
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently we only index subfield a, but we can set up the system so that subfields a, v, x, y, and z are searched
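For illustration, the change in biblio-koha-indexdefs.xml is along these lines (namespace prefixes omitted; the real patch covers the relevant heading fields):
  <index_subfields tag="655" subfields="avxyz">
      <target_index>Subject:w</target_index>
      <target_index>Subject:p</target_index>
  </index_subfields>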
To test:
1 - define both a 655$a *and* 655$x value in a bib, save, reindex
2 - Set system preferences:
TraceSubjectSubdivisions: Include
TraceCompleteSubfields: Force
3 - View the record edited above in the opac
4 - Click on the subject heading
5 - No results found
6 - Copy zebra files:
sudo cp ./etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml
sudo cp etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
7 - restart all and reindex
8 - Click on the subject heading in OPAC
9 - Success!
10 - Repeat with other fields (vyz)
11 - Repeat under ES, reindexing and resetting mappings
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds a preprocessor XSLT to the Zebra indexing pipeline,
so that 880 fields get indexed as the fields they're linked to. For example,
a "880 $6 245" field would be indexed as a "245" field.
However, because the preprocessor only occurs in the indexing part of the pipeline,
it does not affect the retrieval of MARCXML from Zebra. That MARCXML will be
the same MARCXML that was sent to Zebra from Koha.
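Conceptually the preprocessor does something along these lines (a rough sketch only, not the shipped preprocess_marcxml.xsl): pass everything through and, for each 880, also emit a copy under the tag named in the first three characters of $6.
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:marc="http://www.loc.gov/MARC21/slim">
    <!-- identity: copy everything through unchanged -->
    <xsl:template match="@*|node()">
      <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
    </xsl:template>
    <!-- for each 880, also emit a copy under the linked tag from $6 -->
    <xsl:template match="marc:datafield[@tag='880']">
      <xsl:copy-of select="."/>
      <marc:datafield tag="{substring(marc:subfield[@code='6'], 1, 3)}"
                      ind1="{@ind1}" ind2="{@ind2}">
        <xsl:copy-of select="marc:subfield[@code != '6']"/>
      </marc:datafield>
    </xsl:template>
  </xsl:stylesheet>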
Test plan:
0. Revert bug 15187, and apply patch for 31532
1. cp ./etc/zebradb/biblios/etc/dom-config.xml /etc/koha/zebradb/biblios/etc/dom-config.xml
2a. cp etc/zebradb/marc_defs/marc21/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/marc21/biblios/.
2b. cp etc/zebradb/marc_defs/normarc/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/normarc/biblios/.
2c. cp etc/zebradb/marc_defs/unimarc/biblios/preprocess_marcxml.xsl /etc/koha/zebradb/marc_defs/unimarc/biblios/.
3. koha-rebuild-zebra -b -f -v kohadev
4. Note that in search results the 880$6245$a data appears before the 245$a data
5. Note that you can do a title index search on the 880$6245$a data
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This followup moves the configuration to the z3950 etc file; either the
default or the custom file is used, as per the existing script code.
In addition, the options can be set using an environment variable named Z3950_ADDITIONAL_OPTS.
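For example, the same options could presumably be supplied via that variable before starting the responder:
  export Z3950_ADDITIONAL_OPTS="--add-item-status k"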
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The z39.50 responder has a number of command line options that are not
accessible if using the debian scripts to control it. We should be able
to set those options in the koha conf file to be passed to the script
itself.
Test Plan:
1) Apply this patch
2) Copy your kohaclone's koha-z3950-responder to /usr/sbin/koha-z3950-responder if necessary
3) Add "<z3950_responder_options>--add-item-status k</z3950_responder_options>" inside your <config> block in your koha-conf.xml file
4) Use koha-z3950-responder to start/restart the z39.50 responder, note the item status is now in subfield k!
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The PLACKOPAC, PLACKAPI and PLACKINTRANET appenders still need %n
(i.e. a newline). Note that this patch does not add %l since it
is a bit confusing because it adds a lot of Plack internal noise like:
[2022/09/29 08:51:34] [WARN] Test mainpage CGI::Compile::ROOT::usr_share_koha_mainpage_2epl::__ANON__ /usr/share/koha/mainpage.pl (49)
The patch is a result of:
git grep -l "log4perl.appender.PLACK" | xargs sed -i -e"/ConversionPattern/ s/%m$/%m%n/"
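After the substitution, each PLACK* appender's pattern ends in %m%n rather than %m, i.e. something like:
  log4perl.appender.PLACKOPAC.layout.ConversionPattern = [%d] [%p] %m%n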
Test plan:
First run: sed -i -e"/ConversionPattern/ s/%m$/%m%n/" /etc/koha/sites/[YOUR_CLONE]/log4perl.conf
Edit that file, change PLACKOPAC to debug level like:
log4perl.logger.plack-opac = DEBUG, PLACKOPAC
Restart.
Hit an OPAC page twice.
Check plack-opac logfile and verify that it contains a newline between last two messages like:
[2022/09/29 08:04:30] [DEBUG] kohaversion : 22.0600054
[2022/09/29 08:04:42] [DEBUG] kohaversion : 22.0600054
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds <zebra_connection_timeout>30</zebra_connection_timeout>
to the koha-conf.xml file.
Sometimes, a Zebra search might take longer than 30 seconds. If it does,
Koha will say that 0 records have been found. While slow searching
is not desirable, it's more desirable to get the result set regardless.
Test plan:
0. Apply patch
1. Add <zebra_connection_timeout>.1</zebra_connection_timeout> to
your relevant koha-conf.xml file (e.g. /etc/koha/sites/kohadev/koha-conf.xml)
2. echo 'flush_all' | nc -q 1 memcached 11211
3. koha-plack --restart kohadev
4. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
5. Note that no results are returned
6. Change zebra_connection_timeout to 30
7. echo 'flush_all' | nc -q 1 memcached 11211
8. koha-plack --restart kohadev
9a. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
9b. Note that 3 results are returned
10. Remove zebra_connection_timeout from koha-conf.xml
11. echo 'flush_all' | nc -q 1 memcached 11211
12. koha-plack --restart kohadev
13a. Go to http://localhost:8081/cgi-bin/koha/catalogue/search.pl?q=test
13b. Note that 3 results are returned
14. Celebrate
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Koha has been able to send arbitrary item fields via the "item_field" parameter in the config. We have partners that need the ability to create custom item fields from templates, as the item_fields feature cannot accomplish what they need. We need to add a templated custom field feature for items, similar to what we have for patrons.
Test Plan:
1) Apply this patch
2) Choose a SIP login to use, edit that account and add the following
*inside* the login section:
<custom_item_field field="IN" template="[% item.itemnumber %]" />
3) Restart SIP
4) Run an item information query
5) Note the itemnumber is sent in the IN field!
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: fixed tests count
As requested by Jonathan, we need more flexibility ;)
Here it comes.
Test plan:
Run t/CookieManager.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
Add this change to your koha-conf.xml.
Flush, restart.
Test if the cookie is kept now in the interface.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
By default, the SIP server appears to only use 1 child process for
responding to SIP connections.
This change makes this explicit in the configuration, which should
make it so that people who need more than 1 simultaneous SIP connection
can know to just increase the value for the "max_servers" parameter
in the SIPconfig.xml file.
Test plan:
1. Add "max_servers='1'" to your SIP configuration file
2. koha-sip --restart kohadev
3. Open 3 terminals
4. Run "telnet localhost 6001" on 2 terminals
5. On the 3rd terminal, run the following:
ss -l -n -t
ps -efww | grep "sip"
6. Note that there are 2 processes called
kohadev-koha-sip: perl /kohadevbox/koha/C4/SIP/SIPServer.pm /etc/koha/sites/kohadev/SIPconfig.xml
One of these processes is the parent of the other
7. The Recv-Q in the "ss" output should show 1
(This means that 1 of your telnet connections is in the server's TCP backlog)
8. Celebrate as the configuration works as expected
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>