Some old-style code is making our tests fail when run in Debian Testing.
This patch addresses this.
To test:
1. Launch bookworm KTD:
$ KOHA_IMAGE=master-bookworm ktd up -d
2. Run:
$ ktd --shell
k$ prove t/00-testcritic.t
=> FAIL: It fails!
3. Apply the patch
4. Repeat 2
=> SUCCESS: Tests now pass!
5. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If you have EmailOverduesNoEmail = Send and specify "--html somedir",
overdue_notices.pl will send a file by email that contains partial
HTML, as a file called attachment.txt. This patch fixes that.
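For reference, a minimal sketch of the fix's direction, using the real C4::Letters::EnqueueLetter attachments parameter (the surrounding variables are illustrative, not the script's actual ones):

  use C4::Letters;

  C4::Letters::EnqueueLetter(
      {
          letter                 => $letter,              # the prepared overdue notice
          borrowernumber         => $borrowernumber,
          message_transport_type => 'email',
          attachments            => [
              {
                  filename => 'attachment.html',          # was: attachment.txt
                  type     => 'text/html',                # was: text/plain
                  content  => $html,                      # a complete HTML document
              }
          ],
      }
  );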
To reproduce in koha-testing-docker:
- EmailOverduesNoEmail = Send
- Make sure you have a loan that was due yesterday, by backdating the
due date
- Set up an overdue action to send "Overdue Notice" (ODUE) to the
category you made the loan to above, when a loan is one day overdue
- Run this command:
$ sudo koha-shell -c "perl misc/cronjobs/overdue_notices.pl -v -t -html /tmp/" kohadev
- Look at the file /tmp/notices-<DATE>.html and make sure it is a full
HTML document, with <html>, <head>, <body> etc.
- Create a report like this:
SELECT message_id, letter_id, borrowernumber, subject, CONCAT( '<pre>', content, '</pre>' ) AS content,
metadata, letter_code, message_transport_type, time_queued, updated_on, to_address, content_type, failure_code
FROM message_queue
WHERE subject = 'Overdue Notices'
ORDER BY message_id DESC
- Run the report and verify there is a line like this in the "content"
of the newest message:
Content-Type: text/plain; name=attachment.txt
- A part of the "content" will be a block of several lines of gibberish
(base64) that look something like "RGVhciAga29oYSwNCg0KQWN...". Copy
this block of text to somewhere like base64decode.org and decode the
text. You should see a fragment of HTML, without <html>, <head>,
<body> etc.
To test:
- Apply the patch
- Run overdue_notices.pl again, with the same arguments as above
- Make sure /tmp/notices-<DATE>.html is still a full HTML document
- Re-run the report, and make sure you now have this in the "content":
Content-Type: text/html; name=attachment.html
- Decode the base64 and make sure it is now a full HTML document, with
<html>, <head>, <body> etc.
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
"-csv /tmp/test.csv"
- Make sure /tmp/test.csv and the decoded base64 from the report
  contain CSV data
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
"-text /tmp/"
- Make sure /tmp/notices-<DATE>.txt and the decoded base64 from the
  report contain no HTML
Note:
- The actual text from the different messages will be enclosed in
<pre>-tags
- If you have HTML in your ODUE message template and run with -v, you
will have warnings saying "The following terms were not matched and
replaced"
These are due to Bug 14347, and are not addressed by the current patch.
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The default of 1 matches the old behavior: one fork for the job.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Right now background_jobs_worker.pl only processes jobs serially. It would make sense to handle jobs in parallel up to a user-definable limit.
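As a rough sketch of the approach, assuming Parallel::ForkManager (the job-fetching helpers here are hypothetical):

  use Parallel::ForkManager;

  my $pm = Parallel::ForkManager->new($max_processes);   # e.g. from -m or koha-conf.xml

  while ( my $job = next_job() ) {    # hypothetical: pull the next queued job
      $pm->start and next;            # parent keeps pulling jobs
      process_job($job);              # child does the work
      $pm->finish;
  }
  $pm->wait_all_children;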
Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
number of max processes
6) Note the multiple forked processes in the ps output
Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output
Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
In search_for_data_inconsistencies.pl, the test for authorised values prints its results as a single line:
* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
{94 => AV} {95 => AV} {96 => AV} {97 => AV} {98 => AV} {99 => AV} {100 => AV} {101 => AV} {102 => AV} {103 => AV}
It would be clearer with newlines, especially for processing with scripts (grep, awk, ...):
* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
{94 => AV}
{95 => AV}
{96 => AV}
{97 => AV}
{98 => AV}
{99 => AV}
{100 => AV}
{101 => AV}
{102 => AV}
{103 => AV}
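A minimal sketch of the corresponding code change (variable names are illustrative, not the script's actual ones):

  # before: all pairs joined on a single line
  print join( ' ',  map { "{$_->{itemnumber} => $_->{value}}" } @missing ) . "\n";

  # after: one pair per line, easier to grep/awk
  print join( "\n", map { "{$_->{itemnumber} => $_->{value}}" } @missing ) . "\n";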
Test plan:
1) In koha-testing-docker
2) Delete the value AV from the LOC authorised value category
3) Run misc/maintenance/search_for_data_inconsistencies.pl
=> You see each value on its own line in the result
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This adds a reason parameter and passes it into the cancellation if supplied.
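In outline, the change looks like this (Koha::Hold->cancel does accept a cancellation_reason; the option handling shown is a sketch):

  use Getopt::Long;

  my ( $days, $reason, $confirm );
  GetOptions(
      'days:i'   => \$days,
      'reason:s' => \$reason,
      'c'        => \$confirm,
  );

  # later, for each expired hold:
  $hold->cancel( { cancellation_reason => $reason } ) if $confirm;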
To test:
1 - Place a hold for a patron in your system
2 - Run script with --days 0 -v
3 - verify that it would cancel the reserves (and that you are okay with cancelling the ones it found)
4 - Make sure you have a notice in the holds module with code 'HOLD_CANCELLATION'
5 - Set content of the notice like:
[% IF hold.cancellation_reason=='too_old' %]
Canceled old
[% END %]
6 - Run script with --days 0 -v --reason too_bad -c
7 - Confirm hold cancelled, no notice sent to patron
8 - Place another hold
9 - Run script with --days 0 -v --reason too_old -c
10 - Confirm hold cancelled, notice sent to patron
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch now checks the status of messages and ignores any message
with a status of 'new'. It is also rebased to account for changes made
to cleanup_database.pl in bug 17350.
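A minimal sketch of the intended filter, assuming a DBIC-style search over edifact_messages (resultset and column names assumed):

  my $messages = $schema->resultset('EdifactMessage')->search(
      {
          transfer_date => { '<'  => $cutoff_date },   # older than --edifact-messages days
          status        => { '!=' => 'new' },          # never delete unprocessed messages
      }
  );
  $messages->delete if $confirm;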
Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table with a mixture of statuses including 'new'
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted
7) Confirm that no messages marked 'new' have been deleted
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Changes a few occurrences of edifact in the output messages
to EDIFACT.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch allows users to clear out old edifact_messages using the cleanup_database script. The number of days can either be set in the CLI or the default value of 365 can be used.
Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted
Sponsored-by: PTFS Europe
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
It would be very useful to be able to tell process_message_queue.pl to skip processing some messages. This is particularly useful where a plugin handles sending some messages using the before_send_messages hook: while the plugin is processing, more messages meant for it may be queued, and when control moves back to the script and SendQueuedMessages is called, those messages end up being processed there instead of by the plugin.
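A minimal sketch of the idea (SendQueuedMessages is the real entry point; the --where plumbing shown is illustrative):

  use Getopt::Long;
  use C4::Letters;

  my $where;
  GetOptions( 'where:s' => \$where );

  # e.g. --where "content NOT LIKE '%WORD%'" to leave the plugin's messages alone
  C4::Letters::SendQueuedMessages( { where => $where } );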
Test Plan:
1) Apply this patch
2) Queue two messages, each with a unique word
3) Run process_message_queue --where "content NOT LIKE '%WORD%'"
   where WORD is a unique word in one of the two messages
4) Note the message containing "WORD" was not processed
5) prove t/db_dependent/Letters.t
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This amends the documentation of the cleanup_database.pl script
to include a hint for how the saved reports data is created.
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
apply patch
1 - have or create a report
2 - run your report via the command line with --store-results. Do this twice.
3 - update the saved_reports table to set date_run for one of your two saved result sets to a datetime more than 5 days ago
4 - perl misc/cronjobs/cleanup_database.pl --reports 5 --verbose --confirm
5 - Koha tells you it's deleting saved reports data from more than 5 days ago
6 - confirm in the database and the staff interface that it's done so
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently we generate large numbers of single-record reindex requests for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each.
This patch updates the Elasticsearch background jobs to push records into a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.
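Roughly, the dedicated daemon's batching loop looks like this (a sketch only; the frame-fetching helper is hypothetical):

  use JSON qw( decode_json );

  my @record_ids;
  while (1) {
      # drain whatever arrived since the last pass
      while ( my $frame = next_frame_nonblocking() ) {   # hypothetical
          push @record_ids, @{ decode_json( $frame->body )->{record_ids} };
      }
      if (@record_ids) {
          $indexer->update_index( \@record_ids );        # one batched ES call (illustrative)
          @record_ids = ();
      }
      sleep 1;                                           # the one-second batch window
  }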
To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the running in terminal and perform actions in staff interface:
- Checking out a bib
- Returning a bib
- Editing a single bib
- Editing a single item
- Batch editing bibs
- Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Bug 32594: (follow-up) Adjust logging per bug 32612
JD amended patch: tidy! There were tabs here...
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers
To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Enable OAI sets, and define a set with mapping 952 y = BK
2 - perl misc/migration_tools/build_oai_sets.pl -v -i -r
3 - The script dies:
Koha::Biblio::Metadata->record must be called on an instantiated object or like a class method with a record passed in parameter
4 - Apply patch
5 - Repeat
6 - Success!
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen updated indentation
Some of our scripts have a space in the "shebang" (first) line:
#! /usr/bin/perl
This is not illegal, and it does work, but it is good to be
consistent, so this patch removes the space.
To test:
- Run: grep -rn --include=*.pl '#! /usr/' *
- See the list of files that have a space in the shebang
- Apply the patch
- Run the command again, there should be no output, meaning there
are no more files with space in the shebang
- Have a look at the patch and check that it only changes the
shebangs
- Sign off
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: Kyle, stop impersonating John Doe
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If the installer files exist for a given language, the translate script
won't update them.
We should get a confirmation from Bernardo (author of bug 24262), but I
don't understand why it could be needed (side-effects?)
Test plan:
Install the same language several times, drop the DB and run the
installer+onboarding process.
Check that the files installed by the installer (YAML for notice templates,
biblio frameworks) have inserted their data properly into the DB.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Adapted test plan:
1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% checkout.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH c IN checkouts %]
* [% c.title %][% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen renamed @issues => @checkouts as well
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Remove loops that only operate on a single result.
Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: George Veranis <gveranis@dataly.gr>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Predue / due notices are limited to using itemscontent to display checkouts data. It would be nice to make all the checkouts data available for those notices so that libraries could format that data more nicely if they wish.
1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% issue.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH i IN issues %]
* [% i.title %]
[% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles
Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch allows for selection of framework to use when overlaying
records - by default it is set to keep the initial framework
To test:
1 - Create some records using one framework
2 - Export the records
3 - Edit the records to add fields not in original framework
4 - Stage records using a rule that will find matches
5 - Import
6 - Note records contain new fields on display, but they are lost on edit
7 - Apply patch
8 - Stage records again
9 - Select a framework that contains the new fields on import
10 - Import records
11 - Note records now use selected framework and are displayed/edited
correctly
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Typo fix to prevent confusion in CLI usage
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds the ability to pass a no-block option and uses it
in checkout/checkin/renew messages.
To test:
1 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831
2 - Confirm output like:
SEND: 09N20221227 20183920221227 201839APCPL|AOCPL|AB39999000010831|ACterm1|BIN|
(Note the N after 09)
3 - Apply patch
4 - Restart SIP, repeat 1
5 - Confirm still an N
6 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n N
7 - Confirm still an N
8 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n Y
9 - Confirm the send has a Y after 09
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
see comment 0 for more info
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf, Add lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
- my $job = Koha::BackgroundJobs->find($args->{job_id});
+ my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4):
perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids=>['888888']});'
7 - Note error in log like:
[2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Do not implicitly depend on last statement returning nothing.
Make it explicit. We want $args to be null here.
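To illustrate with Try::Tiny semantics (the catch body is a sketch):

  use Try::Tiny;
  use JSON qw( decode_json );

  my $args = try {
      decode_json( $frame->body );
  } catch {
      $logger->warn("Frame not processed - $_");
      # without this, $args would receive whatever warn() happened to return;
      # a bare return makes it explicit that $args is undef on failure
      return;
  };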
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I have faced a problem when testing an incorrect version of bug 32370.
The frame sent to the message broker was not a correct JSON encoded
string, and its decoding was obviously failing, exploding the worker
script.
Additionally, as we don't send an ack for this frame, the next pull will
result in processing the same message, and so in the same explosion.
No more messages can be processed!
This patch logs the error and acks the message to the broker, in
order not to get stuck.
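In outline, the recovery path looks like this (Net::Stomp's ack is real; the rest is a sketch):

  unless ($args) {
      Koha::Logger->get( { interface => 'worker' } )
          ->warn( sprintf "Badly formed message, skipping: %s", $frame->body );
      $conn->ack( { frame => $frame } );   # ack anyway, or the queue jams on this frame
      next;
  }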
Test plan:
0. Don't apply this patch
1. Enqueue a bad message
a. Apply 32370
b. Comment the following line in Koha::BackgroundJob::enqueue
$self->set_encoded_json_field( { data => $job_args, field => 'data' } );
c. restart_all
d. Use the batch item modification tool to enqueue a new job
=> Notice the error in the log
=> Note that the status of the job is "new"
=> Inspect rabbitmq queue:
% rabbitmq-plugins enable rabbitmq_management
% rabbitmqadmin get queue=koha_kohadev-long_tasks
You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
a. Remove the change from 1.b
b. restart_all
c. Enqueue another job
=> Same error in the log
=> Both jobs are new
=> Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> Second (good) job is finished
=> rabbitmq long_tasks queue is empty
We cannot mark the first job as done; we have no idea which job it was!
QA: Note that this patch is dealing with another problem, not tested in
this test plan. If an exception is not correctly caught by the ->process
method of the job, we won't crash the worker. The job will be marked as
failed.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Splitting off this functionality from bug 32558. This is the comment that started this discussion: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c15
From the O'Reilly book Mobile and Web Messaging:
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Bug 29788 inadvertently replaced a call to safe_delete() with safe_to_delete(),
so that any time the script should delete an item it only checks whether
the item is deletable, after which deletion of the record fails because the
items were not deleted.
Test Plan:
1) Mark a record with items to be deleted via the record leader
2) Run delete_records_via_leader.pl -i -b -v
3) Note the script says it is deleting the items but then the record
deletion fails. Note the items remain in the items table of the
database.
4) Apply this patch
5) Repeat step 2
6) This time the items and record should be deleted!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.
To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process
Bug 32481: Use correct prefetch syntax for RabbitMQ
According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".
You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement.
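The change boils down to passing that header at subscription time, for example with Net::Stomp (the destination shown is illustrative):

  $conn->subscribe(
      {
          destination      => '/queue/koha_kohadev-long_tasks',
          ack              => 'client',
          'prefetch-count' => 1,    # deliver one message per ack cycle
      }
  );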
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This script has a pattern of deleting rows depending on a given
date/number of days; we should use the filter_by_last_update
Koha::Objects method.
No need for another method and tests; everything is already tested
there.
This patch also suggests renaming the references to "background" and
"bg" to "jobs", which seems more appropriate and is not an abbreviation.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
There was a mismatch between the parameter name in the documentation (bg-days)
and what was checked in the code (bg-jobs). This makes them match.
To test:
* Apply patch
* Make sure you have some background_jobs older than one day
* Verify the help of the script documents the parameter bg-days
* Run the cleanup_database.pl job
Example: ./misc/cronjobs/cleanup_database.pl --bg-days --confirm -v
* Verify there are no errors and the lines have been deleted as expected
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Some libraries need to recalculate a patron's expiration date any time they are updated via a patron import from file.
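A minimal sketch of the recalculation (get_expiry_date is a real Koha::Patron::Category method; the importer flag is hypothetical):

  if ( $params{update_dateexpiry} ) {    # hypothetical option name
      $patron->dateexpiry(
          $patron->category->get_expiry_date( $patron->dateenrolled ) );
      $patron->store;
  }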
Test Plan:
1) Apply this patch
2) prove t/db_dependent/Koha/Patrons/Import.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>