This change adds the ability to enable and configure TCP keepalive
support for the SIP server using SIPconfig.xml.
For the sake of backwards compatibility, it defaults to disabled,
and the additional parameters default to match typical kernel defaults.
Technical detail can be found in the perldoc for C4/SIP/SIPserver.pm
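For illustration, a minimal sketch of the server-params element with the new
attributes (values are the ones used in the test plan below; any existing
server-params attributes stay as they are):
    <server-params custom_tcp_keepalive='1'
                   custom_tcp_keepalive_time='10'
                   custom_tcp_keepalive_intvl='5' />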
Test plan:
0. Apply the patch
1. koha-sip --restart kohadev
2. apt-get update && apt-get install tcpdump
3. In one window, run "tcpdump -A -n -v -i any 'port 6001'"
4. In another window, run the following:
echo -e "9300CNterm1|COterm1|CPCPL|\r" | nc 127.0.0.1 6001 -v
5. Note in tcpdump output that after the initial flood of packets,
nothing more is received
6. vi /etc/koha/sites/kohadev/SIPconfig.xml
7. In the "server-params" element, add attributes like the following:
custom_tcp_keepalive='1'
custom_tcp_keepalive_time='10'
custom_tcp_keepalive_intvl='5'
8. koha-sip --restart kohadev
9. In one window, run "tcpdump -A -n -v -i any 'port 6001'"
10. In another window, run the following:
echo -e "9300CNterm1|COterm1|CPCPL|\r" | nc 127.0.0.1 6001 -v
11. Note in tcpdump output that after the initial flood of packets,
ACK packets are sent out every 10+ seconds for the idle connection
Signed-off-by: Tadeusz „tadzik” Sośnierz <tadeusz@sosnierz.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
The cron can take a very long time to run on systems with many issues.
For example, a partner with ~250k auto_renew issues is taking about 9 hours to run.
If we run that same number of issues in 5 parallel chunks
(splitting the number of issues as evenly as possible), it could take under 2 hours.
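A sketch of the koha-conf.xml fragment, assuming parallel_loops sits in an
auto_renew_cronjob section as described in the test plan below:
    <auto_renew_cronjob>
        <parallel_loops>5</parallel_loops>
    </auto_renew_cronjob>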
Test Plan:
1) Generate a number of issues marked for auto_renew
2) Run the automatic_renewals.pl, use the `time` utility to track how much time it took to run
3) Set parallel_loops to 10 in auto_renew_cronjob section of config in koha-conf
4) Repeat step 2, note the improvement in speed
5) Experiment with other values
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds some commented Elasticsearch security configuration,
which shows how to use username/password with HTTPS.
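As an illustration (not necessarily the exact wording added by the patch), the
commented settings look roughly like this inside the elasticsearch stanza:
    <!-- <userinfo>username:password</userinfo> -->
    <!-- <use_https>1</use_https> -->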
Test plan:
0. Apply patch
1. cp debian/templates/koha-conf-site.xml.in /etc/koha/koha-conf-site.xml.in
2. koha-create --create-db test
3. vi /etc/koha/sites/test/koha-conf.xml
4. Note that the comments for userinfo and use_https are in the koha-conf.xml
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Works as advertised.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
We keep OPEN when people still use log_file or setsid.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Having double dashes inside a commented block is not valid XML. This
patch restores valid XML, with an added message explaining it.
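For reference, the XML spec forbids the sequence "--" anywhere inside a
comment, so commenting out the options line as-is is what breaks parsing:
    <!-- <z3950_responder_options>--add-item-status k -t 5</z3950_responder_options> -->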
To test:
1. Run:
$ xmllint etc/z3950/config.xml
=> FAIL: You get:
etc/z3950/config.xml:5: parser error : Double hyphen within comment: <!--
<config>
<z3950_responder_options>
<z3950_responder_options>--add-item-status k -t 5</z3950_responder_options
2. Apply this patch
3. Repeat 1
=> SUCCESS: All good!
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the <config> node that the z3950 responder starter script is looking for in the z3950/config.xml to the example code.
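For the live file, the uncommented stanza looks roughly like this (a sketch
based on the options named in the test plan below):
    <config>
      <z3950_responder_options>--add-item-status k -t 5</z3950_responder_options>
    </config>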
To test:
- verify that the <config> </config> is around the commented z3950_additional_options suggestion in the etc/z3950/config.xml file
- copy the config stanza to the live file: /etc/koha/sites/kohadev/z3950/config.xml
- restart_all
- ps aux | grep z3950
- confirm the script has restarted
- confirm the options: --add-item-status k -t 5 have been passed through
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch enables the restart by default.
After a poll at hackfest24 we opted to enable this by default and the RM
requested I add the patch to the bug so we don't forget ;)
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the ability to disable, via configuration, the automated
plack restart we introduce with this patchset.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Koha serves static .js files as application/javascript (if /etc/mime.types
says to) and serves them compressed, but output_with_http_headers uses the
currently-correct text/javascript mimetype, and Koha doesn't compress that.
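One plausible way to address that (an assumption, not necessarily what this
patch changes) is to include text/javascript in the Apache mod_deflate filter
list, e.g.:
    AddOutputFilterByType DEFLATE text/javascript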
Test plan:
1. Set the preference EnableAdvancedCatalogingEditor to Enable.
2. Open the browser Web Developer Tools to the Network tab
3. Load Cataloging - Advanced editor
4. Click on the line for the framework?frameworkcode=&callback=define load
5. Note the content-type text/javascript, no Content-Encoding line, and
the size of 1.9MB
6. Apply the patches from bug 36463 if they haven't been pushed, then this
patch, and reset_all
7. Repeat steps 1-4, and note a Content-Encoding: gzip header and a
Transferred size around 160KB
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch simply adds application/json to the mod_deflate configuration
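In Apache terms the addition amounts to a directive along these lines in
apache-shared.conf (exact placement may differ):
    AddOutputFilterByType DEFLATE application/json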
To test:
1 - Open the network tab in Firefox
2 - Load http://localhost:8081/api/v1/libraries
3 - Note the transferred size, and note there is no 'Content-Encoding: gzip' header
4 - Apply patch, reset_all (or edit /etc/koha/apache-shared.conf)
5 - Reload
6 - Note smaller size, note gzip header
Signed-off-by: Phil Ringnalda <phil@chetcolibrary.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1 - Find or add a record with title: Chevilly-Larue, L'Haÿ-les-Roses, Fresnes, Rungis [par] Sté éditions et de publicité L.F.B.
2 - Search for 'L'Hay-les-Roses'
3 - No results
4 - Apply patch, copy the file:
sudo cp /kohadevbox/koha/etc/zebradb/etc/word-phrase-utf.chr /etc/koha/zebradb/etc/word-phrase-utf.chr
5 - Restart all, Reindex
restart_all
sudo koha-rebuild-zebra -v -f kohadev
6 - Search again
7 - Success!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
When performing batch operations we can send a large number of records for reindexing at once.
Currently this can create requests that are too large for Elasticsearch to process. We need
to break these requests into chunks.
This patch adds a chunk_size configuration to the elasticsearch stanza in koha-conf.xml
If blank we default to 5000.
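A sketch of the resulting koha-conf.xml entry (value taken from the test plan
below; existing elasticsearch settings stay as they are):
    <elasticsearch>
        <!-- existing server/index_name settings unchanged -->
        <chunk_size>250</chunk_size>
    </elasticsearch>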
To test:
0 - Have Koha using Elasticsearch
1 - Create and download a report of all barcodes:
SELECT barcode FROM items
2 - Batch modify these items
3 - Note a single ESindexing job is created
4 - Create and download a report of all authority ids:
SELECT auth_header.authid FROM auth_header
5 - Setup a marc modification template, and batch modify all the authorities
6 - Again note a single ES background job is created
7 - Apply patch
8 - Repeat the modifications above - you still get a single job
9 - Edit koha-conf.xml and add <chunk_size>250</chunk_size> to elasticsearch stanza
10 - Repeat modifications - you now get several background ES jobs
11 - prove -v t/db_dependent/Koha/SearchEngine/Elasticsearch/Indexer.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This change more closely aligns ICU and CHR so that ICU also
removes the = character. This fixes issues in ICU when searching
with a : which gets transformed into a =. Without this change,
the Analytics features won't work for titles with a colon in them.
Test plan:
0. Apply the patch and import bibs from Bugzilla (using Staged MARC tools)
1. cp ./etc/zebradb/etc/phrases-icu.xml /etc/koha/zebradb/etc/phrases-icu.xml
2. cp ./etc/zebradb/etc/words-icu.xml /etc/koha/zebradb/etc/words-icu.xml
3. vi /etc/koha/zebradb/etc/default.idx
Change "charmap word-phrase-utf.chr" to "icuchain words-icu.xml" for "index w"
and "icuchain phrases-icu.xml" for "index p"
4. koha-zebra --stop kohadev
5. pkill zebrasrv
6. koha-zebra --start kohadev
7. koha-rebuild-zebra -a -b -f -v kohadev
8. Search for "Awesome title" and open the detail page
9. Note that the "Analytics: Show analytics" line shows up
10. Click that link
11. Note that it opens the "Cool article" record and it displays
"In: Awesome title: awesome subtitle"
12. Click that link
13. Note that it opens the "Awesome title" record
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
The SIP circulation status specifies that a 12 means an item is lost, and 13 means an item is missing. In Koha, missing items are simply a type of lost item so we never send a 13. This is an important distinction for some SIP based inventory tools. It would be good to be able to specify when lost status means "missing" at the SIP login level.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Transaction.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently, Koha does not return a message on successful SIP checkin.
This patch adds the show_checkin_message option to SIPconfig.xml, disabled by
default. When enabled, the following message is displayed on SIP checkin:
"Item checked-in: {homebranch|permanent_location} - {location}"
The UseLocationAsAQInSIP system preference is used to determine whether the
homebranch or the permanent location will be used.
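A sketch, assuming the option is set as an attribute on the SIP account
(login) element in SIPconfig.xml:
    show_checkin_message="1"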
Test plan:
- Perform a successful checkin using SIP
- Check that the message is in the checkin response (AF field)
- prove t/db_dependent/SIP/Transaction.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Edit (tcohen): tidied the whole subtest.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I think instead of a plain on/off switch we should use it in combination
with the plugin_repos and set it to restrict uploads to only those repos (i.e.
disable uploads entirely if no repos are listed, or only allow those
repos when there are).
This patch achieves that, but only if plugins are installed via the
plugin browser method. We disable all direct upload avenues, so install
is blocked for other cases.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch enables enable_plugin_browser_upload by default,
since the current behaviour for Koha is to enable browser upload
when enable_plugins is 1.
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds an enable_plugin_browser_upload flag to koha-conf.xml, which
controls whether or not Koha intranet users can upload Koha plugins via
their browser. Like "enable_plugins", it defaults to 0 for new installs.
This is useful when you want to provide Koha intranet users with plugins
that are pre-installed by administrators (by CLI) or to restrict them
to plugins from a GitHub repo. See the following for more information:
Bug 23975 - Add ability to search and install plugins from GitHub
Bug 23191 - Administrators should be able to install plugins from the command line
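A sketch of the relevant koha-conf.xml entries (names taken from the test plan
below; values illustrative):
    <enable_plugins>1</enable_plugins>
    <enable_plugin_browser_upload>0</enable_plugin_browser_upload>
    <plugins_restricted>1</plugins_restricted>
    <!-- plus an uncommented <plugin_repos> block; copy it from
         debian/templates/koha-conf-site.xml.in if needed -->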
To test:
1) Apply the full patchset
2) Confirm <enable_plugins>1</enable_plugins> is present in koha-conf.xml
3) Add <plugins_restricted>1</plugins_restricted> to koha-conf.xml
4) Ensure that the <plugin_repos> block is not commented and contains at
least one trusted organisation in koha-conf.xml
If needed get it from: debian/templates/koha-conf-site.xml.in
5) Run restart_all (in koha-testing-docker)
6) Go to /cgi-bin/koha/plugins/plugins-home.pl and note that you don't see
an option to upload plugins
7) You should however see a search option and upon search you should have
results returned from the chosen trusted organisations listed in the
<plugin_repos> block mentioned above.
8) Clicking install on one of the results should work as expected and install
the plugin.
9) Go directly to /cgi-bin/koha/plugins/plugins-upload.pl and note that it says
"Plugin upload is restricted to only those plugins listed by your server
administrator" and gives instructions on how to enable unrestricted browser
upload.
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Rebased-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
At this time, any item with an additional materials date is blocked from checkout via SIP with a screen message to take the item to a circulation desk for checkout.
Some libraries wish to allow patrons to check out items via SIP even if the item has additional materials.
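A sketch, assuming the new option is set as an attribute on the SIP account
(login) element, using the name from the test plan below:
    allow_additional_materials_checkout="1"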
Test Plan:
1) Create an item with an additional materials note
2) Attempt to check it out via SIP
3) Note the failure and message
4) Enable the new SIP account option "allow_additional_materials_checkout"
5) Restart the SIP server
6) Attempt the checkout again
7) Note the checkout success and new AF field message!
8) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Rebased-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This change adds a mfa_range configuration option for TOTP
to koha-conf.xml, and overrides the "verify" method from
Auth::GoogleAuth in order to provide a new default for "range".
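A sketch of the koha-conf.xml entry (the value 10 matches the test plan below,
which widens the accepted window to a few minutes):
    <mfa_range>10</mfa_range>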
Test plan:
0. Apply the patch
1. koha-plack --restart kohadev
2. Go to
http://localhost:8081/cgi-bin/koha/admin/preferences.pl?op=search&searchfield=TwoFactorAuthentication
3. Change the syspref to "Enable"
4. Go to
http://localhost:8081/cgi-bin/koha/members/moremember.pl?borrowernumber=51
5. Click "More" and "Manage two-factor authentication"
6. Register using an app
7. In an Incognito window, go to
http://localhost:8081/cgi-bin/koha/mainpage.pl
8. Sign in with the "koha" user
9. Note down a code from your Authenticator app
10. Wait until after 60 seconds and try it
11. Note it says "Invalid two-factor code"
12. Try a new code from the app
13. Note that it works
14. Add <mfa_range>10</mfa_range> to /etc/koha/sites/kohadev/koha-conf.xml
15. Clear memcached and koha-plack --restart kohadev
16. Sign in with the "koha" user
17. Note down a code from your Authenticator app
18. Wait 4 minutes and then try it
19. Note that it works
20. Disable your two-factor authentication and click to re-enable it
21. Use a code older than 60 seconds when registering for the two
factor authentication
22. Note that the code works
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
At this time, we can specify fields to hide in SIP response at the login level. From a security perspective, it would be useful to also be able to specify which fields are allowed in a response.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: Sam Lau <samalau@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
from debian and /etc koha-conf.xml files
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This bug adds the ability to define a list of item types that are blocked from being issued at that SIP account
To test:
1) Apply this patch
2) Visit Administration->Item types and select edit on the music item type
3) Make the rental charge 0 and save changes (this allows for the item to be checked out via SIP)
4) In the terminal, vim /etc/koha/sites/kohadev/SIPconfig.xml
5) Edit the term1 account and add the following *inside* the login section:
blocked_item_types="BK|MU"
You should have something similar to this: <login id ="term1" ........... checked_in_ok="1" blocked_item_types="BK|MU" />
6) Restart SIP (sudo koha-sip --restart <instancename>)
7) Run a checkout query for an item with the item type book. Here is an example you could use:
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000011418 -m checkout
8) Notice the checkout failed and you are given the screen msg "Item type cannot be checked out at this checkout location"
9) Run a checkout query for an item with the item type music. Here is an example you could use:
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000008715 -m checkout
10) Notice the checkout failed and you are given the screen msg "Item type cannot be checked out at this checkout location"
11) vim /etc/koha/sites/kohadev/SIPconfig.xml
12) Delete the BK from blocked_item_types. It should now look like:
blocked_item_types="MU"
13) Restart SIP (sudo koha-sip --restart <instancename>)
14) Run a checkout query for the item with the item type book
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000011418 -m checkout
15) Checkout successful
16) Run a checkout query for the item with the item type music
perl misc/sip_cli_emulator.pl -a localhost -p 6001 -su term1 -sp term1 -l CPL --patron 23529001000463 --item 39999000008715 -m checkout
17) Still fails (because it is blocked)
18) prove t/db_dependent/SIP/Message.t
19) Congratulate yourself for making it through the long test and sign-off :)
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Adding the biblio-zebra-indexdefs.xsl in the same patch (as it should
be generated with xsltproc).
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Prior to Koha 22.05, the SIP2 item information message had a side effect of updating the datelastseen field for items. This bug has been fixed, but was being utilized by inventory tools that used SIP2. We should bring back this effect and formalize it as an optional SIP2 config account setting.
Test Plan:
1) Apply this patch set
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Instead of typing the case-sensitive Control-number each time:
4 keystrokes instead of 15 on your keyboard. Wow! A gain of 73%.
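A sketch of the ccl.properties alias, assuming the usual YAZ CCL alias syntax
(confirm the exact line against the patch):
    cnum Control-number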
Test plan:
Copy ccl.properties to /etc/koha/zebradb, restart Zebra and
search for cnum=SOME_ID in opac or intranet.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The previous patches adjusted the mappings directly - moving this
change to the correct build file
Not needed for sign off, but QA can test that nothing changes when rebuilding the files:
xsltproc etc/zebradb/xsl/koha-indexdefs-to-zebra.xsl etc/zebradb/marc_defs/marc21/authorities/authority-koha-indexdefs.xml > etc/zebradb/marc_defs/marc21/authorities/authority-zebra-indexdefs.xsl
Signed-off-by: Frank Hansen <frank.hansen@ub.lu.se>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Frank Hansen <frank.hansen@ub.lu.se>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
We already tested it. Just look at changes in this patch.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch updates the example NGINX config to increase the
proxy_buffer_size to 16k. The default value of 4k (on some platforms)
has empirically been shown to be a bit too small for the Link
headers emitted by the REST API when pagination is requested.
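The change amounts to a directive like the following in the relevant NGINX
server/location block (placement may vary by site config):
    proxy_buffer_size 16k;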
To test
-------
[1] Set up a Koha system with NGINX as a reverse proxy in
front of it (either in front of Apache or in front
of Starman).
[2] Perform a patron search that returns at least two pages
of results and navigate to the second page.
[3] Note that the navigation can fail with a 502 HTTP error
and an "upstream sent too big header while reading response
header from upstream" error in the NGINX log.
The problem is most likely to occur when the pagesize of the server
running NGINX is 4096 bytes.
[4] Update the NGINX configuration per this patch and restart
NGINX.
[5] This time, repeating step 2 should work.
Signed-off-by: Galen Charlton <gmc@equinoxOLI.org>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch updates the default partner category used by the partner_code config to be in line with sample data in sample_patrons.yml
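A sketch of the koha-conf.xml change, assuming the partner category code IL
from the sample data (confirm the enclosing element against the patch):
    <partner_code>IL</partner_code>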
Preparation:
Apply patch
Enable ILLModule sys pref
Install an ILL backend (e.g. FreeForm)
Add this change to your koha-conf.xml
Flush, restart.
Search for patron of category inter-library loan and assign a primary e-mail address to it
Test plan:
Create an ILL request and click 'place request with partners'
Verify that the 'select partner libraries' list has the correct patron of the IL category
Run tests and ensure they pass:
prove t/db_dependent/Illrequest/Config.t
prove t/Koha/Config.t
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The SIP patron status and information responses always return false for "too many items lost". It would be reasonable to check the count of lost items still checked out to the patron and compare that to a threshold set in the SIP config file. Though not all libraries operate in this way, it seems like a good and reasonable implementation as long as it is properly documented.
This patch adds the ability to set the SIP "too many items lost" flag
for a patron based on the number of lost checkouts the patron has where
the lost flag on those items is greater than the given flag value.
For example, one could specify that the flag be set if the patron has
more than 2 items checked out where itemlost is greater than 3.
By default the feature is disabled to retain the existing functionality.
If enabled, the default itemlost minimum flag value is 1 unless
specified.
Test Plan:
1) Apply this patch
2) prove t/db_dependent/SIP/Message.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Right now background_jobs_worker.pl only processes jobs in serial. It would make sense to handle jobs in parallel up to a user definable limit.
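A sketch of the koha-conf.xml entry named in the test notes below (the exact
enclosing element should be confirmed against the patch):
    <max_processes>10</max_processes>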
Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
number of max processes
6) Note the multiple forked processes in the ps output
Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output
Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers
To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf, Add lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
- my $job = Koha::BackgroundJobs->find($args->{job_id});
+ my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4):
perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids=>['888888']});'
7 - Note error in log like:
[2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch fixes a duplicate attribute code for Author-in-order in the
biblios definition.
The picked code matches what was already in ccl.properties.
Also Chronological-term for authorities gets fixed.
To test:
1. Apply the regression tests
2. Run:
k$ prove xt/verify_bib1.att.t
=> FAIL: Some failures
3. Apply this patch
4. Repeat 2
=> SUCCESS: Tests now pass!
5. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
- Enable show_outstanding_amount in SIPconfig.xml (see the sketch after this test plan)
- Check that the total outstanding amount for the patron is displayed on SIP
checkout (if it exists), for example:
Patron has fines - You owe $10.00.
- Check that the outstanding amount for a given item is displayed on SIP
checkin (if it exists), for example:
"You owe $10.00 for this item."
- Check that it is not displayed when show_outstanding_amount is disabled.
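A sketch, assuming show_outstanding_amount is set as an attribute on the SIP
account (login) element in SIPconfig.xml:
    show_outstanding_amount="1"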
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Bug 20078 updated the attribute for arp to 2014 to avoid conflict with 9013 not-on-loan-count
Bug 28830 then added Control-number-identifier as 2014, breaking arp again
This patch updates the number to 9015
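For reference, a sketch of the affected lines after the change, assuming the
usual bib1.att and ccl.properties formats (confirm the exact lines against the
patch):
    att 9015    arp
    arp 1=9015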
To test:
1 - Apply first patch
2 - Attempt searching by arp, no results (add a unique 526$d to a record to ease searching)
3 - Apply this patch
4 - Copy bib1.att and ccl.properties to the correct locations
cp etc/zebradb/biblios/etc/bib1.att /etc/koha/zebradb/biblios/etc/bib1.att
cp etc/zebradb/ccl.properties /etc/koha/zebradb/ccl.properties
5 - Restart zebra
6 - Rebuild indexes
7 - Search again, success!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This is going to be awesome!
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>