This change lazy-loads modules for setters in the
koha-preferences tool, so that getters are free to run super fast.
Between BZ 37657 and BZ 37682 we effectively eliminate the
overhead of running "get" or "dump" commands via the koha-preferences
tool.
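A minimal sketch of the lazy-loading idea, assuming the setter path lives in its own subroutine; Koha::Config::SysPrefs here stands in for whichever heavy modules the real patch defers:
```
use Modern::Perl;

# Getters stay cheap: nothing heavy is compiled unless a setter runs.
sub set_preference {
    my ( $variable, $value ) = @_;

    # Deferred loading: only pay for the object layer on the write path.
    require Koha::Config::SysPrefs;

    my $pref = Koha::Config::SysPrefs->find($variable);
    $pref->set( { value => $value } )->store if $pref;
    return;
}
```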
Test plan:
1. time misc/admin/koha-preferences get SearchEngine
2. Note time is about .35 seconds
3. time misc/admin/koha-preferences dump
4. Note time is about .35 seconds
5. Create sysprefs.yml
---
marcflavour: MARC21
viewMARC: 1
6. time misc/admin/koha-preferences load -i sysprefs.yml
7. Note time is about .35 seconds
8. time misc/admin/koha-preferences set SearchEngine Elasticsearch
9. Note time is about 1 second
10. Apply patch
11. Repeat the koha-preferences commands above
12. Note that the "dump" and "get" commands run in about .09-.1
seconds. The "load" and "set" commands still take the same amount
of time as their behaviours haven't changed
13. misc/admin/koha-preferences set SearchEngine Elasticsearch1
14. koha-mysql kohadev
15. select * from action_logs where module = 'SYSTEMPREFERENCE'
order by action_id desc limit 5;
16. Note that the action log entry for the Elasticsearch1 update has an
"interface" of "commandline" and a "script" of "koha-preferences"
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
By using Koha::Database->dbh() to get a minimal database handle
which doesn't preload the whole DBIx::Class schema, we're able to
run the same command 2-3 times faster.
This is beneficial when running the tool in a loop which invokes the
command serially, one call at a time.
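Roughly, the cheap read path looks like this (a sketch, not the patch itself; the SQL targets the systempreferences table directly):
```
use Modern::Perl;
use Koha::Database;

# A plain DBI handle; no DBIx::Class result classes get loaded for a read.
my $dbh = Koha::Database->dbh;

my ($value) = $dbh->selectrow_array(
    q{SELECT value FROM systempreferences WHERE variable = ?},
    undef, 'SearchEngine'
);
say $value // '(not set)';
```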
Test plan:
1. time misc/admin/koha-preferences get SearchEngine
2. Note time is about 1 second
3. time misc/admin/koha-preferences dump
4. Note time is about 1 second
5. Create sysprefs.yml
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
When libraries have a lot of checkouts, or an AMH (automated materials handler), check-ins can happen while the cron is running.
This patch simply adds an eval around the auto-renewal attempt in case of an early check-in or other errors.
You can verify that the cron completed by enabling the cronjob log in the system preferences and checking the action logs.
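Conceptually the guard is just this (attempt_renewal and @auto_renews are placeholders for the cron's existing renewal call and work list, not real Koha functions):
```
use Modern::Perl;

my @auto_renews = ();    # placeholder: the checkouts flagged for auto-renewal

foreach my $auto_renew (@auto_renews) {
    # A check-in (for example from an AMH) can remove the issue row between
    # the fetch and the renewal; trap the error so the cron keeps going.
    my $ok = eval { attempt_renewal($auto_renew); 1 };
    unless ($ok) {
        warn "Auto-renewal skipped for issue " . $auto_renew->id . ": $@";
        next;
    }
}
```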
To test:
1 - Add 'sleep(10);' to automatic_renewals.pl
2 - Set circulation rules to enable automatic renewals
3 - Issue an item to a patron
4 - perl misc/cronjobs/automatic_renewals.pl -v
5 - Confirm item would not be renewed
6 - perl misc/cronjobs/automatic_renewals.pl -v -c
7 - Quickly check in the item
8 - The cronjob dies with:
DBIx::Class::Row::update(): Can't update Koha::Schema::Result::Issue=HASH(0x586e1a674fb0): row not found at /kohadevbox/koha/Koha/Object.pm line 172
9 - Apply patch
10 - Checkout the item again
11 - perl misc/cronjobs/automatic_renewals.pl -v -c
12 - Quickly checkin the item
13 - You get a warning, but the cron completes
Signed-off-by: CJ Lynce <cj.lynce@westlakelibrary.org>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Tidy the whole thing
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Paul Derscheid <paul.derscheid@lmscloud.de>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Since version 24.05, due to the changes mentioned at
https://wiki.koha-community.org/wiki/Koha_/svc/_HTTP_API#Changes_coming_in_Koha_24.05 ,
the `connexion_import_daemon.pl` stopped working. The reason for this is that
it did not use CSRF tokens.
To test:
1. Get a Koha instance on 24.05, before applying the patch.
2. Create a plain text file somewhere on the server containing a raw MARC
record (not XML). Let's call it `marc.txt`.
3. On the server, create a config file like this:
```
host: 0.0.0.0
port: 5500
koha: http://localhost:82 # Where 82 is the port of the Koha staff interface.
user: foo # A Koha staff user.
password: Fooo1234 # The Koha staff user's password.
import_mode: stage
```
4. Run `./connexion_import_daemon.pl --config the-config-file-path`
5. In another terminal on the same server (or from anywhere that can reach the
port opened by the `connexion_import_daemon.pl` script),
run `nc localhost 5500 < marc.txt`
6. Observe in the stderr of the daemon script: `Response: Unsuccessful request`
7. Stop the daemon script.
8. Apply the patch and repeat steps 4 and 5.
9. Observe in the stderr of the daemon script:
`Response: Success. Batch number ... - biblio record number HASH(...) added to Koha`
10. Check at /cgi-bin/koha/tools/manage-marc-import.pl for a batch named
`(webservice)`. It should contain one record now. This is how we know that
authentication between the daemon and Koha worked, which is what this
patch tries to address.
Thanks-to: David Cook <dcook@prosentient.com.au>
Sponsored-by: Reformational Study Centre <www.refstudycentre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Paul Derscheid <paul.derscheid@lmscloud.de>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Trivial patch.
Change the --branch and --skip-branch options of the longoverdue cron script
to --library and --skip-library to meet the Terminology Guidelines.
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This script doesn't seem to be included in cron files by default.
This change is to allow script parameters to affect only certain branches.
This allows the script to be added multiple times to a cron file with
different settings for different branches.
Test plan:
1. apply patch
2. identify two books at different branches the same number of days overdue.
3. run the longoverdue.pl script specifying one of the branches in the
--branch command line parameter.
e.g. koha-shell -c 'misc/cronjobs/longoverdue.pl --branch branch_code --lost 60=2 --maxdays=61 --confirm' instance_name
4. observe that the book at the specified branch has been or would be affected
by the script while the other book is not.
Signed-off-by: Tadeusz „tadzik” Sośnierz <tadeusz@sosnierz.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Enable the export_records.pl script to use a report's output to export biblios or authorities.
Test plan:
1. Apply patches and restart services
2. Create a SQL report (id=1)
SELECT biblionumber
FROM biblio
3. Create a SQL report (id=2) and set an item as notforloan = -1
SELECT title, author, biblio.biblionumber
FROM biblio
LEFT JOIN items USING (biblionumber)
WHERE items.notforloan = <<Not for loan|NOT_LOAN>>
4. Create a SQL report (id=3)
SELECT title, author
FROM biblio
5. Create a SQL report (id=4)
SELECT authid
FROM auth_header
6. Run export_records.pl using report id=1 and confirm a koha.mrc file is created in the misc directory:
cd misc
./export_records.pl --report_id=1
7. Delete the koha.mrc file
8. Run export_records.pl using report id=2
./export_records.pl --report_id=2
9. Notice you are prompted to supply a parameter
10. Re-run report id=2 supplying a parameter. Confirm the koha.mrc file is created and contains bib data
./export_records.pl --report_id=2 --report_param=-1
11. Run export_records.pl using report id=3
./export_records.pl --report_id=3
12. Notice you get the message: The --report_id you specified does not fetch a biblionumber
13. Delete the koha.mrc file
14. Run export_records.pl using report id=4
./export_records.pl --report_id=4
15. Notice you get a message 'The --report_id you specified does not fetch a biblionumber'
16. Re-run export_records.pl setting the record-type=auths
./export_records.pl --record-type=auths --report_id=4
17. Notice the koha.mrc file is generated and contains auth data
Sponsored-by: Horowhenua Libraries, Toi Ohomai Institute of Technology, Plant and Food Research Limited, Waitaki District Council, South Taranaki District Council New Zealand
Signed-off-by: Alexandre Noel <alexandre.noel@inlibro.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This change consistently sends the Csrf-Token in the request header.
Previously, one POST sent it in the request body, while the other POST
sent it in the request header. Since we're using an API, it's best
for us to always send it in the request header.
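A sketch of what "always in the header" means in practice, assuming an LWP::UserAgent-style client; the URL, token source and payload below are placeholders:
```
use Modern::Perl;
use LWP::UserAgent;

my $csrf_token = $ENV{KOHA_CSRF_TOKEN} // q{};   # placeholder: captured at login
my $marcxml    = '<record/>';                    # placeholder payload

my $ua = LWP::UserAgent->new( cookie_jar => {} );

# The token always travels as a request header, never in the POST body.
my $response = $ua->post(
    'http://localhost:8081/cgi-bin/koha/svc/new_bib',
    'Csrf-Token' => $csrf_token,
    Content      => $marcxml,
);
die $response->status_line unless $response->is_success;
```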
Test plan:
0. Apply the patch
1. perl ./misc/migration_tools/koha-svc.pl \
http://localhost:8081/cgi-bin/koha/svc koha koha 29 > bib-29.xml
2. perl ./misc/migration_tools/koha-svc.pl \
http://localhost:8081/cgi-bin/koha/svc koha koha 29 bib-29.xml
3. Note that the following appears in STDOUT and there is no 403 error:
"update 29 from bib-29.xml"
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This change fixes the Koha::SVC to store the CSRF token for
the authenticated session for further POSTing.
Test plan:
0. Apply the patch
1. perl ./misc/migration_tools/koha-svc.pl \
http://localhost:8081/cgi-bin/koha/svc koha koha 29 > bib-29.xml
2. perl ./misc/migration_tools/koha-svc.pl \
http://localhost:8081/cgi-bin/koha/svc koha koha 29 bib-29.xml
3. Note that the following appears in STDOUT and there is no 403 error:
"update 29 from bib-29.xml"
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
When using __() (i.e. Gettext.js) we are seeing the translations that are marked as fuzzy.
This is definitely not the expected behaviour.
It happens because (our version of) po2json is old and no longer maintained,
and it simply embeds the fuzzy strings.
It seems that the bin we have has since been superseded by a JS version
(from different authors).
Test plan:
(replace LANG with your language code)
0. Do not apply this patch
Edit misc/translator/po/LANG-messages-js.po
Mark a string as fuzzy
Edit ./intranet-main.tt and add the following lines inside $(document).ready
console.log(_("Your string"));
console.log(__("Your string"));
Replace "Your string" with the string you are actually testing.
Update the templates: `koha-translate --update LANG --dev kohadev && restart_all`
Go to the Koha home page, open the console.
=> Notice that the second log in the console is displaying the fuzzy string.
1. Apply this patch
Install the new version of po2json using `yarn install`
Repeat the previous steps.
=> With this patch applied both logs show the English version of the
string.
Remove fuzzy, update the templates and try again.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
In "Bug 19532: Make recalls.status an ENUM" one-letter statuses were
replaced by their verbal equivalents. But in overdue_recalls.pl
remained "R". As a result, recalls never overdue.
No test plan provided--obvious correction.
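In other words, the overdue filter has to use a verbal ENUM value instead of the old letter; a sketch (the exact status string is illustrative):
```
use Modern::Perl;
use Koha::Recalls;

# Before the fix the script filtered on the old one-letter code:
#   status => 'R'
# With recalls.status now an ENUM of verbal values, the filter has to use
# the matching verbal status (shown here as an example value):
my $recalls = Koha::Recalls->search( { status => 'requested' } );
```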
Sponsored-by: Ignatianum University in Cracow
Signed-off-by: Phil Ringnalda <phil@chetcolibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
The cron can take a very long time to run on systems with many issues.
For example, a partner with ~250k auto_renew issues is taking about 9 hours to run.
If we run that same number of issues in 5 parallel chunks
( splitting the number of issues as evenly as possible ), it could take under 2 hours.
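One way to picture the chunking (a sketch using Parallel::ForkManager; process_chunk is a placeholder for the existing renewal loop, and whether the patch uses this exact module is not claimed here):
```
use Modern::Perl;
use List::MoreUtils qw( natatime );
use Parallel::ForkManager;

my $parallel_loops = 5;                     # read from koha-conf in the real cron
my @issue_ids      = ( 1 .. 250_000 );      # placeholder work list

my $chunk_size = int( @issue_ids / $parallel_loops ) + 1;
my $pm         = Parallel::ForkManager->new($parallel_loops);
my $iter       = natatime $chunk_size, @issue_ids;

while ( my @chunk = $iter->() ) {
    $pm->start and next;                    # parent moves on to the next chunk
    process_chunk(@chunk);                  # placeholder for the renewal work
    $pm->finish;
}
$pm->wait_all_children;
```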
Test Plan:
1) Generate a number of issues marked for auto_renew
2) Run the automatic_renewals.pl, use the `time` utility to track how much time it took to run
3) Set parallel_loops to 10 in auto_renew_cronjob section of config in koha-conf
4) Repeat step 2, note the improvement in speed
5) Experiment with other values
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds logging of unhandled exceptions that could occur. This
is happening on busy production sites right now. This is also useful for
plugin jobs that might not be 100% following the guidelines and would
benefit from this.
But as the [DO NOT PUSH] patch highlights, this is something we really
want to have on our current codebase, as a database connection drop
might make us reach that `catch` block we are adding logging to in this
patch.
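The added logging is essentially this shape (a simplified sketch with Try::Tiny and Koha::Logger; the job-processing call is abbreviated):
```
use Modern::Perl;
use Try::Tiny;
use Koha::Logger;

sub handle_job {
    my ( $job, $args ) = @_;

    try {
        $job->process($args);
    }
    catch {
        # Previously a crash here left the job marked failed with no trace
        # of why; now the reason ends up in the worker logs.
        Koha::Logger->get->warn(
            sprintf "Background job %s died: %s", $job->id, $_
        );
    };
}
```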
To test:
1. Apply the [DO NOT PUSH] patch
2. Run:
$ ktd --shell
k$ restart_all ; tail -f /var/log/koha/kohadev/worker*.log
3. Pick a valid barcode on the staff UI
4. Use the 'Batch delete items' tool in the cataloguing section
5. Start the job for deleting the item
=> FAIL: The item got deleted, but the job is marked as failed and there are
no logs about the reasons
6. Apply this patch and repeat 2-5
=> SUCCESS: Same scenario but there's a log with the error message
7. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
By adding an empty text body, we force Email::Stuffer/Email::MIME
to use multipart/mixed handling for the attachment instead of forcing
a single part (ie direct attachment) email, which is not consistently
handled by different email clients.
An empty text body is language-neutral (ie not imposing English),
and it allows SMTP servers to inject organisational footers into
the email (e.g. confidentiality notices).
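Roughly, the message construction becomes something like this (a sketch; the addresses and attachment content are placeholders):
```
use Modern::Perl;
use Email::Stuffer;

my $csv_data = "barcode,title\n39999000001234,Example\n";   # placeholder attachment

my $email = Email::Stuffer->new
    ->from('library@example.org')
    ->to('patron@example.org')
    ->subject('Your report')
    ->text_body(q{})    # empty text part forces a multipart/mixed message
    ->attach( $csv_data, filename => 'report.csv', content_type => 'text/csv' );

# ... then handed to the usual transport for sending
```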
Signed-off-by: Magnus Enger <magnus@libriotech.no>
I have tested this solution in production, and it works for me.
Bug 32575: Tidy patch
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1 - Perform some transactions
2 - Enable Pseudonymization
3 - perl misc/maintenance/pseudonymize_statistics.pl -v
4 - Confirm it is a test run and nothing changed
5 - perl misc/maintenance/pseudonymize_statistics.pl -v -c
6 - Confirm statistics are pseudonymized
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Amend: tidied the changes [tcohen]
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Like IncludeSeeFromInSearches, which adds the see-from headings (4xx) to searches,
the new system preference IncludeSeeAlsoFromInSearches adds the see-also-from headings (5xx).
Test plan:
Test on both Elasticsearch and Zebra:
1)
1.1) Create an authority record with heading, see from heading 4xx and see also from heading 5xx
1.2) Use this authority in a biblio record
2)
2.1) Set IncludeSeeFromInSearches and IncludeSeeAlsoFromInSearches to "Don't include"
2.2) Rebuild search engine
2.3) Perform a search on heading text => you find the biblio record
2.4) Perform a search on see from heading => you do not find the biblio record
2.5) Perform a search on see also from heading => you do not find the biblio record
3)
3.1) Set IncludeSeeFromInSearches and IncludeSeeAlsoFromInSearches to "Include"
3.2) Rebuild search engine
3.3) Perform a search on heading text => you find the biblio record
3.4) Perform a search on see from heading => you find the biblio record
3.5) Perform a search on see also from heading => you find the biblio record
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. Set up autorenewals by adjusting circulation rules:
'Automatic renewal' -> 'Yes'
'No automatic renewal before' -> 5
2. Set 'AutoRenewalNotices' to 'according to patron messaging preferences'.
3. Set an AUTO_RENEWALS and AUTO_RENEWALS_DGST notice to include branch info. I am using this to test:
Branchcode: [% branch.branchcode %]
Branch name: [% branch.branchname %]
Branch address: [% branch.branchaddress1 %]
Branch address2: [% IF branch.branchaddress2 %][% branch.branchaddress2 %][% END %]
Branch city: [% branch.branchcity %], [% branch.branchstate %] [% branch.branchzip %]
4. Make sure your branch has the proper info filled out in Libraries administration.
5. Find a patron and adjust the messaging preferences so they receive automatic renewal notices. Also make sure the patron has an email.
5. Check out some items and make them due with the next 5 days.
6. Run the automatic_renewal cron job:
perl /kohadevbox/koha/misc/cronjobs/automatic_renewals.pl -c -v
7. Notice no branch information displays.
8. APPLY PATCH
9. Checkout items from multiple issuing branches to a single patron.
10. Make sure the patron's messaging prefs are set to receive NON-digest notices.
11. Run the automatic renewal job, each notice should include the branch information from the issuing library.
12. Change the patron's messaging preferences to receive digest notices.
13. Run the job without the --digest-per-branch flag. You should get a single notice with the branch info coming from the patron's home branch.
14. Run the job with the --digest-per-branch flag. You should get separate digest notices with the branch info coming from the issuing library branch.
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
For reasons unknown, GetOptions is inserting an empty string into the letter_code list. If you are running the script with a letter code filter, the empty string is added to the OR so it functions. If no letter_code is passed, the search requires the letter code to be an empty string, which will of course fail. Even more perplexing is that this does not happen for the type list which is essentially identical code.
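The fix amounts to dropping the spurious empty entry before building the search filter, along these lines (sketch; variable names are illustrative):
```
# GetOptions left a spurious empty string in @letter_codes; without a real
# code the filter must be dropped entirely, not matched against ''.
@letter_codes = grep { defined $_ && $_ ne q{} } @letter_codes;

my %search_params;
$search_params{letter_code} = { -in => \@letter_codes } if @letter_codes;
```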
Test Plan:
1) Generate some messages in the message queue
2) Run `process_message_queue.pl -v -c`
3) Note nothing happens
4) Apply this patch
5) Repeat step 2
6) Messages are sent!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
We are not using it and it's confusing, let's remove the context stack.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Test plan:
1. git grep 'reseve'. Notice there are instances of 'reseve'
2. Apply patch
3. Repeat step 1, there should be no instances of 'reseve'
Sponsored-by: Catalyst IT, New Zealand
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
--category-code was not checked in the "at least one filter option"
check but it is clearly a filter option.
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
* Change the SYNOPSIS to better describe the different ways to use the
script
* Only show the SYNOPSIS when options used are wrong (unknown option,
no filter options, or neither -c nor -v)
* Show the options details only with --help
* Clarify the fact that -v is required when -c is not supplied in the
description of both options
* Print a specific error message for the following cases:
* no filters options
* neither -c nor -v was supplied
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This patch adds a hint to the end of the script to notify the end user
that they may need to run the build_holds_queue cronjob if they are
using RealTimeHoldsQueue.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
If RealTimeHoldsQueue is on, touch_all_biblios triggers an update_holds_queue_for_biblios background job for each affected record. This results in as many background jobs being queued up as there are records! It makes far more sense for this script not to do that, which gives the administrator the option of running the holds queue builder if the changes would affect holdability, or of not running it at all.
Test Plan:
1) Run touch_all_biblios.pl
2) Note an update_holds_queue_for_biblios background job is queued for each record touched
3) Apply this patch
4) Run touch_all_biblios.pl again
5) Note that no update_holds_queue_for_biblios jobs were queued
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This bug originally wanted to get rid of "noindex" coming from this meta
tag:
<meta name="robots" content="noindex">
But actually we have other strings from the meta tags that should not be
translated.
Test plan:
0. Do not apply this patch
1. cd misc/translator/po && gulp po:update --lang es-ES (or any other
lang)
2. git commit -a -m"wip"
3. Apply this patch
4. Repeat 1 and git diff to show the diff
Notice that strings that should not be translated are removed from the
po files (actually commented)
Signed-off-by: Caroline Cyr La Rose <caroline.cyr-la-rose@inlibro.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Philip Orr <philip.orr@lmscloud.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Philip Orr <philip.orr@lmscloud.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the option to specify a framework to be used when overlaying records from webservices/connexion
To test:
1 - vim /etc/koha/sites/kohadev/connexion.cnf
2 - Set content:
host:
port: 8888
koha:http://localhost:8081
log:/var/log/koha/kohadev/connexion.log
match:ISBN
user:kohauser
password:kohapass
overlay_action:replace
nomatch_action:create_new
item_action:always_add
import_mode:direct
framework:BKS
overlay_framework:
debug:1
3 - Save the sample file from this bug into your kohaclone (or copy and paste into a file your koha test site can reach)
4 - On the command line:
perl misc/bin/connexion_import_daemon.pl -c /etc/koha/sites/kohadev/connexion.cnf
5 - In another terminal:
cat bug_33418.test | nc -v localhost 8888
6 - It should report success and a biblionumber
7 - In Koha: Cataloguing->Manage staged MARC record
8 - View the most recent batch with file name (webservice)
9 - Confirm it was imported, no match
10 - Click 'View' under the 'Record' column
11 - Confirm record loads correctly and 'MARC framework' detail is 'Books, Booklets, Workbooks'
12 - On the terminal repeat the command:
cat bug_33418.test | nc -v localhost 8888
13 - It should succeed
14 - View the new batch, confirm the record matched this time
15 - View the record details, confirm framework is now 'default'
16 - On the first terminal hit Ctrl+C to stop the daemon
16 - Edit connexion.cnf and change:
import_mode:stage
framework:ACQ
overlay_framework:IR
17 - Restart daemon:
perl misc/bin/connexion_import_daemon.pl -c /etc/koha/sites/kohadev/connexion.cnf
18 - Delete the record created above
19 - On the second terminal repeat the command:
cat bug_33418.test | nc -v localhost 8888
20 - Confirm the batch is created, but not imported
21 - In terminal:
perl misc/cronjobs/import_webservice_batch.pl --framework=ACQ --overlay_framework=BKS
22 - Confirm batch imported, and record in ACQ framework
23 - In terminal:
cat bug_33418.test | nc -v localhost 8888
perl misc/cronjobs/import_webservice_batch.pl --framework=ACQ --overlay_framework=BKS
24 - Confirm batch added, record matched, record imported, and record now in Books framework
25 - Stop the daemon, edit connexion.cnf:
import_mode:direct
26 - Start the daemon, and on other terminal repeat:
cat bug_33418.test | nc -v localhost 8888
27 - Confirm record in Binders framework
28 - Set record framework to Books
29 - Stop daemon, edit cnf and remove 'overlay_framework' setting
30 - Start daemon and cat the file again
31 - Confirm the record remains in Books framework
Signed-off-by: Brendan Lawlor <blawlor@clamsnet.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
misc/cronjobs/staticfines.pl is calling output_pref() but it is missing from the module import:
use Koha::DateUtils qw( dt_from_string );
Test by running staticfines.pl
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Update the preference name from 'Borrower' to 'Patron' and correct
spelling of receive.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. APPLY PATCH and restart all
2. Run the update_localuse_from_statistics.pl script without a confirm flag, nothing happens.
3. Run the script and add the --confirm flag, stuff happens.
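The guard is the usual confirm/dry-run pattern, roughly (a sketch; the message text is illustrative):
```
use Modern::Perl;
use Getopt::Long;

my $confirm;
GetOptions( 'confirm' => \$confirm );

unless ($confirm) {
    say "Dry run only; pass --confirm to actually update localuse counts.";
    exit 0;
}

# the real update work happens only past this point
```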
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. Run the update_totalissues cron, the holds queue is updated.
2. APPLY PATCH, restart services
3. Run update_totalissues cron again, this time the holds queue should always be skipped.
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Koha::Objects depends on Koha::DateUtils, which depends on C4::Context,
which depends on Koha::Config::SysPrefs, which depends on Koha::Objects
Apart from the circular dependency, the dependency on C4::Context alone
is problematic as it loads a bunch of modules that are not needed at all
in Koha::Objects (YAML::XS and ZOOM for instance).
As Koha::Objects is used as a base for a lot of modules, we should take
care to only load the minimum required.
This patch removes uses of Koha::DateUtils from Koha::Objects.
It was only used in Koha::Objects::filter_by_last_update
filter_by_last_update now requires that the 'from' and 'to' parameters
must be DateTime objects. Previously it would also allow date and
datetime strings. This possibility was only used in two places:
* misc/cronjobs/cleanup_database.pl
* tools/cleanborrowers.pl
Now they call dt_from_string first and pass a DateTime object to
filter_by_last_update
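So callers now convert first and pass objects, e.g. (a sketch with illustrative dates):
```
use Koha::DateUtils qw( dt_from_string );
use Koha::Patrons;

# Callers convert their strings up front and hand DateTime objects over.
my $patrons = Koha::Patrons->filter_by_last_update(
    {
        from => dt_from_string('2024-01-01'),    # illustrative dates
        to   => dt_from_string('2024-12-31'),
    }
);
```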
Test plan:
1. Run `perl -cw Koha/Objects.pm`. It should only say:
"Koha/Objects.pm syntax OK" without warnings
2. Run `prove t/db_dependent/Koha/Objects.t`
3. Verify that misc/cronjobs/cleanup_database.pl works as before,
especially with the options --pseudo-transactions,
--pseudo-transactions-from and --pseudo-transactions-to
4. Go to Tools » Batch patron deletion and anonymization, check "Verify
you want to anonymize patron checkout history" and enter a date in
the text input below. Then click Next and verify that the correct
count of borrowers is shown. Click on the "Finish" button and verify
that the circulation history has been correctly anonymized
See also bug 36432
Signed-off-by: Tadeusz Sośnierz <tadeusz@sosnierz.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Add an unallocated option to CreateQueue and pass it through as needed.
Avoid deletion of the tmp_holdsqueue, and only check holds
and items that are not currently matched.
A future hold with a higher priority will still fail here - because the
item may already be assigned, but on the next change to the biblio it
would be corrected.
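A sketch of how the new flag could flow from the cron script into CreateQueue (the --unallocated option name comes from the test plan below; the argument shape passed to CreateQueue is illustrative):
```
use Modern::Perl;
use Getopt::Long;
use C4::HoldsQueue qw( CreateQueue );

my ( $force, $unallocated );
GetOptions(
    'force'       => \$force,
    'unallocated' => \$unallocated,
);

# With --unallocated, existing tmp_holdsqueue rows are kept and only holds
# that have no item assigned yet are (re)considered.
CreateQueue( { unallocated => $unallocated } );
```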
To test:
1) Apply both patches
2) Enable RealTimeHoldsQueue and set HoldsQueueSkipClosed to "open"
3) Add a holiday to the calendar for all libraries for today, visit:
/cgi-bin/koha/tools/holidays.pl
-- Click today's day on the calendar and pick "Holiday repeated every same day of the week"
-- Click "Copy to all libraries". Hit "Save.
4) Place a biblio-level hold on a biblio record and set the pickup location to a library that has available copies, visit:
-- /cgi-bin/koha/reserve/request.pl?biblionumber=76&borrowernumber=51
-- Click the first "Place hold" button to place the biblio-level hold.
5) Verify that that hold got added to the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
6) Place a biblio-level hold on a biblio record where there are no other holds and copies are available at another location, but not the pickup location, visit:
-- /cgi-bin/koha/reserve/request.pl?biblionumber=437&borrowernumber=51
-- On the "pickup at" dropdown, pick something else other than "Centerville", e.g. "Fairfield".
-- Click the first "Place hold" button to place the biblio-level hold.
7) Check the holds queue again, notice that this 2nd hold was not added to the queue:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
8) Run the updated cronscript:
perl misc/cronjobs/holds/build_holds_queue.pl --force --unallocated
9) Notice nothing changed in the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
10) Remove the holiday we created previously, visit:
/cgi-bin/koha/tools/holidays.pl
-- Click today's day on the calendar and pick "Delete this holiday"
-- Click "Copy to all libraries". Hit "Save.
11) Run the updated cronscript:
perl misc/cronjobs/holds/build_holds_queue.pl --force --unallocated
12) Confirm the second hold is added to the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Sometimes it would be better for process_message_queue.pl to stop if a plugin hook fails rather than continue processing. It would be nice if that was a command line option.
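The idea is simply to let a hook failure surface in the exit code when the new flag is set; roughly (the -e flag comes from the test plan, and the hook call is a placeholder):
```
use Modern::Perl;
use Try::Tiny;

my $exit_on_hook_error = 1;    # set by the new -e command line switch

try {
    call_before_send_messages_hooks();    # placeholder for the plugin hook call
}
catch {
    warn "before_send_messages hook failed: $_";
    exit 1 if $exit_on_hook_error;        # stop with a non-zero exit code
};
```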
Test Plan:
1) Install any plugin with a before_send_messages hook
2) Modify the plugin, add a 'die;' statement at the start of the
before_send_messages method of the plugin.
3) Run process_message_queue.pl as usual
4) Note the exit code is 0
5) Run it again with the new -e setting
6) Note the exit code is 1
Signed-off-by: Brendan Lawlor <blawlor@clamsnet.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds a clarification to writeoff_debts.pl to make it clear that --category-code can't be used as the only filter when running the script. If it is, the script just displays the help menu, as it is expecting one of the other filters.
Test plan:
1) Run perl misc/cronjobs/writeoff_debts.pl --category-code TEST --confirm
2) Observe that the help menu is displayed instead of running the script
3) Check the help menu for the text added in this patch
WNC amended patch - added same warning in the option section
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>