To test:
1 - Enable OAI sets, and define a set with mapping 952 y = BK
2 - perl misc/migration_tools/build_oai_sets.pl -v -i -r
3 - The script dies:
Koha::Biblio::Metadata->record must be called on an instantiated object or like a class method with a record passed in parameter
4 - Apply patch
5 - Repeat
6 - Success!
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen updated indentation
Some of our scripts have a space in the "shebang" (first) line:
#! /usr/bin/perl
This is not illegal, and it does work, but it is good to be
consistent, so this patch removes the space.
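For reference, a one-liner along these lines (a sketch, not part of the patch; the file path is a placeholder) can normalize the shebang in a single file:
perl -pi -e 's{^#! /usr/bin/perl}{#!/usr/bin/perl}' path/to/script.pl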
To test:
- Run: grep -rn --include=*.pl '#! /usr/' *
- See the list of files that have a space in the shebang
- Apply the patch
- Run the command again; there should be no output, meaning there
are no more files with a space in the shebang
- Have a look at the patch and check that it only changes the
shebangs
- Sign off
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: Kyle, stop impersonating John Doe
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If the installer files already exist for a given language, the translate script
won't update them.
We should get confirmation from Bernardo (author of bug 24262), but I
don't understand why it could be needed (side effects?).
Test plan:
Install the same language several times, drop the DB, and run the
installer + onboarding process.
Check that the files installed by the installer (YAML for notice templates,
biblio frameworks) have inserted the data properly into the DB.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Adapted test plan:
1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% checkout.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH c IN checkouts %]
* [% c.title %][% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen renamed @issues => @checkouts as well
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Remove loops that only operate on a single result
Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: George Veranis <gveranis@dataly.gr>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Predue / due notices are limited to using itemscontent to display checkout data. It would be nice to make all the checkout data available to those notices so that libraries could format that data more nicely if they wish.
1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% issue.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH i IN issues %]
* [% i.title %]
[% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles
Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch allows for selection of framework to use when overlaying
records - by default it is set to keep the initial framework
To test:
1 - Create some records using one framework
2 - Export the records
3 - Edit the records to add fields not in original framework
4 - Stage records using a rule that will find matches
5 - Import
6 - Note records contain new fields on display, but they are lost on edit
7 - Apply patch
8 - Stage records again
9 - Select a framework that contains the new fields on import
10 - Import records
11 - Note records now use selected framework and are displayed/edited
correctly
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Typo fix to prevent confusion in CLI usage
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds the ability to pass a no-block option and use it
in checkout/checkin/renew messages
To test:
1 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831
2 - Confirm output like:
SEND: 09N20221227 20183920221227 201839APCPL|AOCPL|AB39999000010831|ACterm1|BIN|
(Note the N after 09)
3 - Apply patch
4 - Restart SIP, repeat 1
5 - Confirm still an N
6 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n N
7 - Confirm still an N
8 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n Y
9 - Confirm the send has a Y after 09
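For context, the fixed part of the 09 (checkin) request in the SEND line above is the message code, the one-character no-block flag, then two 18-character SIP dates. A rough Perl sketch of how such a request could be assembled (illustrative only, not the emulator's actual code; the blank padding inside the dates may differ from what is shown above):
# Illustrative sketch of a SIP2 checkin (09) request string.
my $no_block = 'N';                     # 'Y' or 'N', driven by the new -n option
my $sip_date = '20221227    201839';    # YYYYMMDD + blank padding + HHMMSS (18 chars)
my $checkin  = '09' . $no_block . $sip_date . $sip_date
             . 'APCPL|AOCPL|AB39999000010831|ACterm1|';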
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
see comment 0 for more info
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf and add the lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
- my $job = Koha::BackgroundJobs->find($args->{job_id});
+ my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4):
perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids=>['888888']});'
7 - Note error in log like:
[2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
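For context, a warning like the one above can be emitted from the worker via Koha::Logger using the 'worker' category configured in step 2; a minimal sketch (the exact call in the code may differ):
use Koha::Logger;

# Assumes a 'worker' logger as configured above; illustrative only.
my $logger = Koha::Logger->get( { interface => 'worker' } );
$logger->warn( "No job found for id=" . $args->{job_id} );    # $args from the decoded frame (assumption)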
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Do not implicitly depend on the last statement returning nothing.
Make it explicit: we want $args to be undef here.
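In other words, rather than letting $args pick up whatever the last evaluated statement happens to return, the undef case is spelled out; a generic illustration (get_args_for() is hypothetical):
# Generic illustration: make the "no value" case explicit.
my $args = $job ? get_args_for($job) : undef;    # get_args_for() is hypothetical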
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I faced a problem when testing an incorrect version of bug 32370.
The frame sent to the message broker was not a correctly JSON-encoded
string, and its decoding was obviously failing, crashing the worker
script.
Additionally, as we don't send an ack for this frame, the next pull
results in processing the same message, and so in the same crash.
No more messages can be processed!
This patch logs the error and acks the message to the broker, so that
processing does not get stuck.
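The shape of the fix is roughly: decode inside a try/catch, log on failure, and always ack the frame so the broker does not redeliver it. A hedged sketch, not the exact patch code ($conn is the Net::Stomp connection, $frame the received frame):
use JSON qw( decode_json );
use Try::Tiny;
use Koha::Logger;

my $args = try {
    decode_json( $frame->body );
} catch {
    Koha::Logger->get->warn("Frame not processed - $_");
    undef;
};
# Ack even when decoding failed, so the broker won't redeliver the bad frame.
$conn->ack( { frame => $frame } );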
Test plan:
0. Don't apply this patch
1. Enqueue a bad message
a. Apply 32370
b. Comment out the following line in Koha::BackgroundJob::enqueue:
$self->set_encoded_json_field( { data => $job_args, field => 'data' } );
c. restart_all
d. Use the batch item modification tool to enqueue a new job
=> Notice the error in the log
=> Note that the status of the job is "new"
=> Inspect rabbitmq queue:
% rabbitmq-plugins enable rabbitmq_management
% rabbitmqadmin get queue=koha_kohadev-long_tasks
You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
a. Remove the change from 1.b
b. restart_all
c. Enqueue another job
=> Same error in the log
=> Both jobs are new
=> Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> Second (good) job is finished
=> rabbitmq long_tasks queue is empty
We cannot mark the first job as done; we have no idea which job it was!
QA: Note that this patch also deals with another problem, not covered by
this test plan: if an exception is not correctly caught by the job's
->process method, the worker won't crash. The job will be marked as
failed.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Splitting off this functionality from bug 32558. This is the comment that started this discussion: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c15
From the O'Reilly book Mobile and Web Messaging:
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Bug 29788 inadvertently replaced a call to safe_delete() with safe_to_delete(),
so that whenever the script should delete an item it only checks whether
the item is deletable, after which deletion of the record fails because the
items were not deleted.
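In other words, the script needs to both check and actually delete each item; roughly (a sketch, error handling omitted):
# Check first, then really delete the item.
if ( $item->safe_to_delete ) {
    $item->safe_delete;
}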
Test Plan:
1) Mark a record with items to be deleted via the record leader
2) Run delete_records_via_leader.pl -i -b -v
3) Note the script says it is deleting the items but then the record
deletion fails. Note the items remain in the items table of the
database.
4) Apply this patch
5) Repeat step 2
6) This time the items and record should be deleted!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.
To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process
Bug 32481: Use correct prefetch syntax for RabbitMQ
According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".
You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement.
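For reference, with Net::Stomp the header is simply passed through on subscribe; a minimal sketch of the idea (the destination name is illustrative):
# Ask the broker to deliver at most one unacknowledged message at a time.
$stomp->subscribe(
    {
        destination      => '/queue/koha_kohadev-long_tasks',    # illustrative
        ack              => 'client',
        'prefetch-count' => 1,
    }
);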
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This script has a pattern to delete rows depending on a given
date/number of days; we should use the filter_by_last_update
Koha::Objects method.
No need for another method and tests, everything is already tested
there.
This patch also suggests renaming the references to "background" and
"bg" to "jobs", which seems more appropriate and is not an abbreviation.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
There was a mismatch between the parameter name in the documentation (bg-days)
and what was checked in the code (bg-jobs). This makes both match.
To test:
* Apply patch
* Make sure you have some background_jobs older than one day
* Verify the help of the script documents the parameter bg-days
* Run the cleanup_database.pl job
Example: ./misc/cronjobs/cleanup_database.pl --bg-days --confirm -v
* Verify there are no errors and the lines have been deleted as expected
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Some libraries need to recalculate a patron's expiration date any time the patron is updated via a patron import from file.
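Conceptually the recalculation is along these lines (a sketch only; the actual import code path may differ):
use Koha::DateUtils qw( dt_from_string );

# Recompute dateexpiry from the patron's category on import update.
$patron->dateexpiry( $patron->category->get_expiry_date( dt_from_string ) );
$patron->store;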
Test Plan:
1) Apply this patch
2) prove t/db_dependent/Koha/Patrons/Import.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The previous implementation was using vue-i18n to inject localized strings
into the Vue app. However it was using JSON files, and we needed
additional overhead to convert from/to PO files.
I've also tried vue3-gettext, which uses PO files, but the overhead to fit
it into our workflow existed as well (see branch joubu/vue3-gettext).
vue-i18n-extract was used for extraction, and a specific misc script
(misc/translate_json.pl) was also used to generate the JSON file. They
can be removed.
Here we are simply reusing our existing workflow, and we will improve it
(i.e. make it more Vue-ish) later if we need to.
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Note that we are adding an extra space for id and counter, otherwise
they get removed in favor of the "simple" string:
{ Agreement: Agreement }
replaced
{ Agreement: { id: ..., counter: ... } }
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
* Remove ":" from the translation to prevent duplication string
* Add a script to auto-translate the strings in English
echo "{}" > koha-tmpl/intranet-tmpl/prog/js/vue/locales/en.json
npx vue-i18n-extract --vueFiles 'koha-tmpl/intranet-tmpl/prog/js/vue/**/*.?(js|vue)' \
--exclude koha-tmpl/intranet-tmpl/prog/js/vue/dist/main.js \
--languageFiles 'koha-tmpl/intranet-tmpl/prog/js/vue/locales/*.json' \
--add --remove
perl misc/translate_json.pl | sponge koha-tmpl/intranet-tmpl/prog/js/vue/locales/en.json
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The TooMany() function and the fine calculation functions were incorrectly
hard-coded to use homebranch for fetching the circulation rules. They
completely ignored the HomeOrHoldingBranch syspref, which the user
might have set to holdingbranch, and therefore the fines and whether a
patron has too many checkouts (TooMany()) were calculated using the
unintended branch's rules. This problem only arises in cases where
branch-specific circulation rules are defined.
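In other words, instead of always reading $item->homebranch, the branch used for the rule lookup should follow the syspref; roughly:
use C4::Context;

# Sketch: pick the branch column according to HomeOrHoldingBranch.
my $branch_column = C4::Context->preference('HomeOrHoldingBranch') || 'homebranch';
my $branchcode    = $item->$branch_column;    # then used to fetch circulation rules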
Test plan:
1. Make sure following tests pass:
$ prove t/db_dependent/Circulation/_CalculateAndUpdateFine.t
$ prove t/db_dependent/Circulation/TooMany.t
Test plan for fines.pl:
1. Add branch specific fine rules for branches A and B. A having a
fine of 1 per day and B having a fine of 0 per day.
2. Set sysprefs:
CircControl = the library the item is from
finesMode = Calculate and charge
HomeOrHoldingBranch = holdingbranch
3. Create an item with home and holding branch of A
4. Checkout the item with a due date in the past (the past due date can be
specified by clicking "Checkout settings" in the checkout page) and
make sure the branch you are checking from is B.
5. Run perl /usr/share/koha/bin/cronjobs/fines.pl
6. Notice that fines have now incorrectly been charged to the patron
7. Apply patch
8. Pay fines, Check-in the item and check it out again
9. Run perl /usr/share/koha/bin/cronjobs/fines.pl
10. Notice that the fine is now 0. This means that the rule specific to
branch B (the holdingbranch of the checked-out item) is used.
Test plan for staticfines.pl:
1. Add branch specific fine rules for branches A and B. A having a
fine of 1 per day and B having a fine of 0 per day.
2. Set sysprefs:
CircControl = the library the item is from
finesMode = Calculate and charge
HomeOrHoldingBranch = holdingbranch
3. Create an item with homebranch A and holding branch of A
4. Checkout the item with a due date in the past (the past due date can be
specified by clicking "Checkout settings" in the checkout page) and
make sure the branch you are checking from is B.
5. Run perl staticfines.pl --library A --library B --category <PATRONS_CATEGORYCODE>
and notice that fines are now incorrectly generated
6. Apply patch
7. Pay fines, Check-in the item and check it out again
8. Run perl staticfines.pl --library A --library B --category <PATRONS_CATEGORYCODE>
and notice that the fines are no longer generated
Rebased-by: Joonas Kylmälä <joonas.kylmala@iki.fi>
Signed-off-by: Petro Vashchuk <stalkernoid@gmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
- Run "perldoc misc/cronjobs/delete_patrons.pl"
- Note the absence of information about the potential conflict
between --not_borrowed_since and anonymization
- Apply this patch
- Re-run the perldoc command and make sure the note about
anonymization makes sense
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
We often get asked where something can be found in the settings;
adding the hint that RealTimeHoldsQueue is a system preference
is hopefully helpful here.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The real time holds queue and the build_holds_queue.pl cronjob are not 100% compatible: we should not be running the cron if the real time queue is enabled, as this could lead to doubled server work. It would be good to have a check in build_holds_queue.pl for the RealTimeHoldsQueue syspref and not run the job if the preference is enabled.
There might be times when we'd want to force a run of this job without changing the syspref. To that end, we also want a flag for this job so that system administrators can force the job from the command line if required, overriding this limitation.
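The guard in the cronjob could look roughly like this (a sketch; option handling simplified, $force comes from the new -f/--force flag):
use Modern::Perl;
use C4::Context;

# Skip the rebuild when the real-time queue is on, unless forced.
if ( C4::Context->preference('RealTimeHoldsQueue') && !$force ) {
    say "RealTimeHoldsQueue is enabled; use -f/--force to rebuild anyway.";
    exit;
}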
Test Plan:
1) Apply this patch
2) Try running misc/cronjobs/holds/build_holds_queue.pl with the -h/--help and -m/--man options
3) Disable RealTimeHoldsQueue
4) Run with no options, should succeed
5) Enable RealTimeHoldsQueue
6) Run with no options, should display a message and not rebuild the
holds queue
7) Run again with the -f/--force option, should succeed
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Just as Bug 26832 added a UTF-8 binmode to misc/search_tools/export_elasticsearch_mappings.pl, the same should be added to misc/cronjobs/runreport.pl.
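The change itself is presumably a one-liner near the top of the script, along the lines of:
# Ensure report output is written as UTF-8 rather than latin-1.
binmode( STDOUT, ':encoding(UTF-8)' );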
Test plan :
1) Do not apply patch
2) Create a SQL report with :
SELECT 'accentué',barcode FROM items limit 3
3) Note the id of this report, for example 1
4) Run : misc/cronjobs/runreport.pl 1 --format csv | tee /tmp/without.csv
=> You see output with an unknown character instead of é
5) Run : file --mime-type /tmp/without.csv
=> You see : /tmp/without.csv: iso-8859-1
6) Apply patch
7) Run : misc/cronjobs/runreport.pl 1 --format csv | tee /tmp/with.csv
=> You see correct output with é
8) Run : file --mime-type /tmp/with.csv
=> You see : /tmp/with.csv: utf-8
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds background queue options to cleanup_database.pl to allow
for purging completed background jobs.
--bg-jobs DAYS Purge all finished background jobs this many days old. Defaults to 1 if no DAYS provided.
--bg-type TYPE What type of background job to purge. Defaults to "update_elastic_index" if omitted.
Specifying "all" will purge all types. Repeatable.
To test:
1 - Enable elastic search in Koha
2 - perl misc/maintenance/touch_all_items.pl
3 - Generate a number of different types of background jobs (e.g. batch_hold_cancel,
batch_biblio_record_deletion, batch_item_record_deletion)
4 - Check the db and note there are a bunch of different jobs
5 - Update to make them old
UPDATE background_jobs SET ended_on = '2022-10-01 00:00:00', status='finished'
6 - perl misc/cronjobs/cleanup_database.pl
7 - Note bg-jobs entry shows in help
8 - perl misc/cronjobs/cleanup_database.pl --bg-jobs 1 -v
9 - Note that elasticqueue would have been cleared
10 - perl misc/cronjobs/cleanup_database.pl --bg-jobs 1 -v --confirm
11 - Note that number of entries deleted is reported
12 - Attempt to clear other job types, including "all" eg
perl misc/cronjobs/cleanup_database.pl --bg-jobs 1 --bg-type batch_item_record_deletion -v --confirm
perl misc/cronjobs/cleanup_database.pl --bg-jobs 1 --bg-type all -v --confirm
13 - Confirm in staff interface that jobs are gone:
http://localhost:8081/cgi-bin/koha/admin/background_jobs.pl
(Uncheck 'Current jobs only')
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
It would be nice to be able to combine several types in a single run,
but exclude others, without needing multiple cron lines.
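In other words, -c and -t become repeatable; a sketch of the idea with Getopt::Long (option and variable names are illustrative):
use Getopt::Long;

my @letter_codes;
my @types;

GetOptions(
    'c|code=s' => \@letter_codes,    # may be passed multiple times
    't|type=s' => \@types,           # may be passed multiple times
);
# The collected arrays are then passed along when processing the message queue.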
Test Plan:
1) Apply this patch
2) Run process_message_queue.pl with a single -c parameter
3) Note behavior is unchanged
4) Run process_message_queue.pl with multiple -c parameters
5) Note all the codes specified are processed
6) Repeat 2-5 with -t for type limits
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>