Commit graph

2796 commits

f1ff6c5764
Bug 32686: Specify action of action_logs to purge
Add --log-action parameter to cleanup_database.pl

Test plan:
1. Apply patch
2. Enable cataloguing and borrowers log
3. Make some changes to borrowers, create some borrowers, and edit some
biblio records
4. Change the action_logs.timestamp for all action_logs entries to 367
days ago
5. Run cleanup_database.pl with --logs 365 --log-module=MEMBERS
--log-action=CREATE --confirm
6. Confirm only the borrowers creation action_logs entries are removed
7. Run cleanup_database.pl with --logs 365
8. Confirm all action_logs entries are removed
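
For illustration, the new options narrow the purge query roughly as sketched below. This is a standalone sketch against the action_logs columns implied by the test plan (module, action, timestamp), not the script's actual code, and the connection details are placeholders.

  use strict;
  use warnings;
  use DBI;
  use Getopt::Long;

  my ( $days, $module, $action, $confirm );
  GetOptions(
      'logs=i'       => \$days,
      'log-module=s' => \$module,
      'log-action=s' => \$action,
      'confirm'      => \$confirm,
  );

  my $sql  = q{DELETE FROM action_logs WHERE timestamp < DATE_SUB(NOW(), INTERVAL ? DAY)};
  my @bind = ($days);
  if ($module) { $sql .= ' AND module = ?'; push @bind, $module }
  if ($action) { $sql .= ' AND action = ?'; push @bind, $action }

  # placeholder credentials; the real script uses Koha's own database handle
  my $dbh = DBI->connect( 'dbi:mysql:database=koha', 'koha_user', 'password', { RaiseError => 1 } );
  $dbh->do( $sql, undef, @bind ) if $confirm;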

Sponsored-By: Toi Ohomai Institute of Technology, New Zealand
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-04-06 10:03:08 -03:00
Kevin Carnes
5fdc436119
Bug 31640: Fuzzy translations of preferences can cause missing sections and inaccurate translations
This patch ignores fuzzy translations for preferences and warns if there are multiple sections with the same translated name.

Test Plan:
1)  Install English United Kingdom translations (./misc/translator/translate install en-GB)
2)  Go to Koha administration in the staff interface
3)  Click Global system preferences
4)  Select I18N/L10N preferences
5)  Enable English United Kingdom in the language preference for staff interface
6)  Save all I18N/L10N preferences
7)  Return to Koha administration
8)  Select English United Kingdom as the language at the bottom of the screen
9)  Click on Global system preferences
10) Select Circulation
11) Observe that there is only SelfCheckInMainUserBlock or StockRotation, but not both
12) Apply the patch
13) Install English United Kingdom translations (./misc/translator/translate install en-GB)
14) Go to Koha administration
15) Select English United Kingdom as the language at the bottom of the screen
16) Click on Global system preferences
17) Select Circulation
18) Observe that SelfCheckInMainUserBlock and StockRotation are both present
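
The duplicate-section warning can be pictured with a plain hash counter; a minimal sketch, not the translator's real code (the section list here is made-up example data):

  use strict;
  use warnings;

  my @translated_sections = ( 'Circulation', 'Accounting', 'Circulation' );    # example data
  my %seen;
  for my $section (@translated_sections) {
      warn "Duplicate translated section name: '$section'\n" if $seen{$section}++;
  }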

Signed-off-by: Caroline Cyr La Rose <caroline.cyr-la-rose@inlibro.com>

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-04-06 09:29:27 -03:00
6221538b6d
Bug 33285: (QA follow-up) add POD and fix some code style
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-04-06 09:29:24 -03:00
bad0d52245
Bug 33285: Allow specifying the delimiter for runreport.pl
To test:
1 - Write a report in koha
2 - perl misc/cronjobs/runreport.pl --format csv 1 (or correct report number)
3 - Note you get commas
4 - Apply patch
5 - Repeat #2 - no change
6 - perl misc/cronjobs/runreport.pl --format csv --separator "|" 1
7 - Now it is pipe delimited
8 - perl misc/cronjobs/runreport.pl --format tsv --separator "|" 1
9 - An error is reported: the separator can only be set when the format is csv
10 - Try other separators
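
A minimal sketch of the option handling this adds, assuming Text::CSV for the output; the option names follow the test plan, the rest is illustrative rather than the script's real code:

  use strict;
  use warnings;
  use Getopt::Long;
  use Text::CSV;

  my ( $format, $separator ) = ( 'text', undef );
  GetOptions( 'format=s' => \$format, 'separator=s' => \$separator );

  die "--separator is only valid with --format csv\n"
      if defined $separator && $format ne 'csv';

  my $csv = Text::CSV->new( { sep_char => $separator // ',', binary => 1 } );
  $csv->combine( 'id', 'title', 'author' ) or die "combine failed\n";
  print $csv->string, "\n";    # id|title|author when run with --separator "|"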

Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-04-06 09:29:23 -03:00
10d12f999f
Bug 33341: Address some perlcritic errors in 5.36
Some old-style code is making our tests fail when run in Debian Testing.

This patch addresses this.

To test:
1. Launch bookworm KTD:
   $ KOHA_IMAGE=master-bookworm ktd up -d
2. Run:
   $ ktd --shell
  k$ prove t/00-testcritic.t
=> FAIL: It fails!
3. Apply the patch
4. Repeat 2
=> SUCCESS: Tests now pass!
5. Sign off :-D

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-28 14:50:33 +02:00
d8721bbc36
Bug 32594: Mark jobs as started and finished
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-28 14:49:44 +02:00
8e00746095
Bug 33108: (follow-up) Don't die, only warn if not using ES
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-20 09:39:47 -03:00
Magnus Enger
6d1890fc8e
Bug 29354: Make overdue_notices.pl send .html
If you have EmailOverduesNoEmail = Send and specify "--html somedir",
overdue_notices.pl will send a file by email that contains partial
HTML, as a file called attachment.txt. This patch fixes that.

To reproduce in koha-testing-docker:
- EmailOverduesNoEmail = Send
- Make sure you have a loan that was due yesterday, by backdating the
  due date
- Set up an overdue action to send "Overdue Notice" (ODUE) to the
  category you made the loan to above, when a loan is one day overdue
- Run this command:
  $ sudo koha-shell -c "perl misc/cronjobs/overdue_notices.pl -v -t -html /tmp/" kohadev
- Look at the file /tmp/notices-<DATE>.html and make sure it is a full
  HTML document, with <html>, <head>, <body> etc.
- Create a report like this:
  SELECT message_id, letter_id, borrowernumber, subject, CONCAT( '<pre>', content, '</pre>' ) AS content,
    metadata, letter_code, message_transport_type, time_queued, updated_on, to_address, content_type, failure_code
  FROM message_queue
  WHERE subject = 'Overdue Notices'
  ORDER BY message_id DESC
- Run the report and verify there is a line like this in the "content"
  of the newest message:
  Content-Type: text/plain; name=attachment.txt
- A part of the "content" will be a block of several lines of gibberish
  (base64) that look something like "RGVhciAga29oYSwNCg0KQWN...". Copy
  this block of text to somewhere like base64decode.org and decode the
  text. You should see a fragment of HTML, without <html>, <head>,
  <body> etc.

To test:
- Apply the patch
- Run overdue_notices.pl again, with the same arguments as above
- Make sure /tmp/notices-<DATE>.html is still a full HTML document
- Re-run the report, and make sure you now have this in the "content":
  Content-Type: text/html; name=attachment.html
- Decode the base64 and make sure it is now a full HTML document, with
  <html>, <head>, <body> etc.
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
  "-csv /tmp/test.csv"
- Make sure /tmp/test.csv and the decoded base64 from the report
  contains CSV data
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
  "-text /tmp/"
- Make sure /tmp/notices-<DATE>.txt and the decoded base64 from the
  report contains no HTML

Note:
- The actual text from the different messages will be enclosed in
  <pre>-tags
- If you have HTML in your ODUE message template and run with -v, you
  will have warnings saying "The following terms were not matched and
  replaced"
These are due to Bug 14347, and are not addressed by the current patch.
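
For illustration only, the content-type change amounts to attaching the full HTML document as text/html. A sketch with MIME::Lite (not necessarily the module overdue_notices.pl uses), with made-up addresses:

  use strict;
  use warnings;
  use MIME::Lite;

  my $html = '<html><head><title>Overdue notices</title></head><body>...</body></html>';

  my $msg = MIME::Lite->new(
      From    => 'koha@example.org',
      To      => 'staff@example.org',
      Subject => 'Overdue Notices',
      Type    => 'multipart/mixed',
  );
  $msg->attach(
      Type     => 'text/html',
      Filename => 'attachment.html',
      Data     => $html,
  );
  # $msg->send;    # left commented out; mail transport configuration is site-specific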

Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-14 08:49:31 -03:00
3bf4addb24 Bug 32558: (QA follow-up) Leave default to 1, remove extra fork
The default of 1 resembles the old behavior: 1 fork for the job.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-06 13:57:53 -03:00
311d68815f Bug 32558: Add ability for background_jobs_worker.pl to process multiple jobs simultaneously up to a limit
Right now background_jobs_worker.pl only processes jobs serially. It would make sense to handle jobs in parallel up to a user-definable limit.

Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
   number of max processes
6) Note the multiple forked processes in the ps output

Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output

Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
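
The fork-up-to-a-limit pattern can be sketched with Parallel::ForkManager as below. next_job() and process_job() are stand-ins for the real queue handling, and the cap would come from -m or <max_processes>; this is an illustration, not the worker's verbatim code.

  use strict;
  use warnings;
  use Parallel::ForkManager;

  my $max_processes = 3;    # e.g. from -m or <max_processes> in koha-conf.xml
  my $pm = Parallel::ForkManager->new($max_processes);

  my @queue = ( 1 .. 5 );                                                  # stand-in for jobs waiting on the broker
  sub next_job    { shift @queue }
  sub process_job { my ($id) = @_; sleep 1; warn "processed job $id\n" }

  while ( defined( my $job = next_job() ) ) {
      $pm->start and next;    # parent: keep pulling jobs
      process_job($job);      # child: handle exactly one job
      $pm->finish;
  }
  $pm->wait_all_children;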

Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-06 13:57:53 -03:00
45323886ae
Bug 32678: Add new line in authorized values tests in search_for_data_inconsistencies.pl
In search_for_data_inconsistencies.pl, the authorized values check prints its results as a single line:
	* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
 {94 => AV} {95 => AV} {96 => AV} {97 => AV} {98 => AV} {99 => AV} {100 => AV} {101 => AV} {102 => AV} {103 => AV}

It would be clearer with one entry per line, especially for scripts (grep, awk ...):
	* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
 {94 => AV}
 {95 => AV}
 {96 => AV}
 {97 => AV}
 {98 => AV}
 {99 => AV}
 {100 => AV}
 {101 => AV}
 {102 => AV}
 {103 => AV}

Test plan:
1) In koha-testing-docker
2) Delete the value AV from the LOC authorized value category
3) Run misc/maintenance/search_for_data_inconsistencies.pl
=> You see each entry on its own line in the result

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-06 09:51:50 -03:00
e6e587a821
Bug 32518: (QA follow-up) Fix alignment
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 14:46:17 -03:00
406d61cbb2
Bug 32518: Add reason option to cancel_unfilled_holds cronjob
This adds a reason parameter and passes it into the cancellation if supplied

To test:
1 - Place a hold for a patron in your system
2 - Run script with --days 0 -v
3 - verify that it would cancel the reserves (and that you are okay with cancelling the ones it found)
4 - Make sure you have a notice in the holds module with code 'HOLD_CANCELLATION'
5 - Set content of the notice like:
[% IF hold.cancellation_reason=='too_old' %]
Canceled old
[% END %]
6 - Run script with --days 0 -v --reason too_bad -c
7 - Confirm hold cancelled, no notice sent to patron
8 - Place another hold
9 - Run script with --days 0 -v --reason too_old -c
10 - Confirm hold cancelled, notice sent to patron
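
Hedged sketch of how the new option flows into the cancellation. The cancel() argument name mirrors hold.cancellation_reason from the notice template and is an assumption here, as is the @unfilled_holds list:

  use strict;
  use warnings;
  use Getopt::Long;

  my ( $days, $reason, $confirm );
  GetOptions( 'days=i' => \$days, 'reason=s' => \$reason, 'c|confirm' => \$confirm );

  my @unfilled_holds = ();    # stand-in: holds unfilled for more than --days days
  for my $hold (@unfilled_holds) {
      print "Would cancel hold " . $hold->id . " (reason: " . ( $reason // 'none' ) . ")\n";
      $hold->cancel( { cancellation_reason => $reason } ) if $confirm;
  }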

Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 14:46:16 -03:00
611a77ca51
Bug 30069: (QA follow-up) Use pure DBIC for select and delete with single query
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 12:00:22 -03:00
6b90fa3ec4
Bug 30069: (QA follow-up) Rebase and add filter
This patch now checks the status of messages and ignores any message
with a status of 'new'. It is also rebased to account for changes made
to cleanup_database.pl in bug 17350.

Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table with a mixture of statuses including 'new'
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted
7) Confirm that no messages marked 'new' have been deleted
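
In DBIx::Class terms the added filter is roughly the search below. The resultset and column names ('EdifactMessage', 'transfer_date') are assumptions for this sketch, and $schema, $days, $verbose and $confirm are taken to come from the surrounding script:

  # sketch of the --edifact-messages handling, not the script's exact resultset chain
  my $old_messages = $schema->resultset('EdifactMessage')->search(
      {
          status        => { '!=' => 'new' },                                  # never purge unprocessed messages
          transfer_date => { '<'  => \"DATE_SUB(NOW(), INTERVAL $days DAY)" },
      }
  );
  printf "%d EDIFACT messages would be deleted\n", $old_messages->count if $verbose;
  $old_messages->delete if $confirm;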

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 12:00:21 -03:00
Katrin Fischer
ee91389edd
Bug 30069: (QA follow-up) Fix capitalization for edifact
Changes a few occurrences of edifact in the output messages
to EDIFACT.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 12:00:20 -03:00
2b08584255
Bug 30069: Add edifact-messages to cleanup_database.pl
This patch allows users to clear out old edifact_messages using the cleanup_database script. The number of days can either be set in the CLI or the default value of 365 can be used.

Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted

Sponsored-by: PTFS Europe
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 12:00:19 -03:00
b3c945dd09
Bug 31453: Add ability to filter messages to process using process_message_queue.pl via a command line parameter
It would be very useful to be able to tell process_message_queue.pl to skip processing some messages. This is particularly useful where a plugin handles sending some message using the before_send_messages hook, but while that plugin is processing, more messages meant for the plugin might be queued. At that point, control moves back to the script and SendQueuedMessages is called, and those messages end up being processed there instead of by the plugin.

Test Plan:
1) Apply this patch
2) Queue two messages, each with a unique word
3) Run process_message_queue --where "content NOT LIKE '%WORD%'"
   where WORD is a unique word in one of the two messages
4) Note the message containing "WORD" was not processed
5) prove t/db_dependent/Letters.t
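
The idea is simply to splice a caller-supplied SQL fragment into the message_queue selection. A standalone sketch; the real SendQueuedMessages plumbing differs:

  use strict;
  use warnings;
  use Getopt::Long;

  my $where;
  GetOptions( 'where=s' => \$where );

  my $sql = q{SELECT message_id FROM message_queue WHERE status = 'pending'};
  $sql .= " AND ($where)" if $where;    # e.g. --where "content NOT LIKE '%WORD%'"
  print "$sql\n";                       # the selected ids are then sent as usual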

Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 09:37:44 -03:00
a3087d76bd
Bug 32594: (QA follow-up) Fix POD
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-03-02 09:37:32 -03:00
41c439bf41
Bug 17350: Add note about how the saved reports data is created in the first place
This amends the documentation of the cleanup_database.pl script
to include a hint for how the saved reports data is created.

Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-27 13:08:06 -03:00
Andrew Fuerste-Henry
fe6a03c43a
Bug 17350: Purge old saved reports via cleanup_database
To test:
apply patch
1 - have or create a report
2 - run your report via the command line with --store-results. Do this twice.
3 - update the saved_reports table to set date_run for one of your two saved result sets to a datetime more than 5 days ago
4 - perl misc/cronjobs/cleanup_database.pl --reports 5 --verbose --confirm
5 - Koha tells you it's deleting saved reports data from more than 5 days ago
6 - confirm in the database and the staff interface that it's done so

Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-27 13:08:05 -03:00
aba2453ad6
Bug 32594: Add a dedicated Elasticsearch biblio indexing background queue
Currently we generate large numbers of single-record reindex jobs for circulation and other
actions. It can take a long time to process these, as we need to load the ES settings for each.

This patch updates the Elasticsearch background jobs to throw records into a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the jobs
every second.

To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the script running in a terminal and perform actions in the staff interface:
    - Checking out a bib
    - Returning a bib
    - Editing a single bib
    - Editing a single item
    - Batch editing bibs
    - Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled
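
The batching behaviour can be pictured as the loop below: drain frames for up to a second, then flush one batch. The queue name, credentials and index_records() are placeholders, not the daemon's exact code:

  use strict;
  use warnings;
  use Net::Stomp;

  sub index_records { warn 'indexing ' . scalar( @{ $_[0] } ) . " records\n" }    # placeholder for the ES bulk update

  my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $stomp->connect( { login => 'guest', passcode => 'guest' } );
  $stomp->subscribe( { destination => '/queue/koha_kohadev-elastic_index', ack => 'client' } );

  while (1) {
      my @records;
      while ( my $frame = $stomp->receive_frame( { timeout => 1 } ) ) {
          push @records, $frame->body;
          $stomp->ack( { frame => $frame } );
      }
      index_records( \@records ) if @records;
  }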

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>

Bug 32594: (follow-up) Adjust logging per bug 32612

JD amended patch: tidy! There were tabs here...

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-24 17:52:20 -03:00
4c51596a4b
Bug 32992: Move background_jobs_worker to misc/workers
On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers

To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-24 17:52:19 -03:00
d6433ff71e
Bug 32798: Update parameter passed to Koha::Biblio::Metadata->record
To test:
1 - Enable OAI sets, and define a set with mapping 952 y = BK
2 - perl misc/migration_tools/build_oai_sets.pl -v -i -r
3 - The script dies:
    Koha::Biblio::Metadata->record must be called on an instantiated object or like a class method with a record passed in parameter
4 - Apply patch
5 - Repeat
6 - Success!

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen updated indentation
2023-02-21 10:23:34 -03:00
Magnus Enger
6a0cd4cc5e
Bug 32922: Remove space in shebang
Some of our scripts have a space in the "shebang" (first) line:

  #! /usr/bin/perl

This is not illegal, and it does work, but it is good to be
consistent, so this patch removes the space.

To test:
- Run: grep -rn --include=*.pl '#! /usr/' *
- See the list of files that have a space in the shebang
- Apply the patch
- Run the command again, there should be no output, meaning there
  are no more files with space in the shebang
- Have a look at the patch and check that it only changes the
  shebangs
- Sign off

Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-20 09:44:06 -03:00
e8c232fcf7
Bug 30642: (QA follow-up) Do not rely on script names in modules, add unit test
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: Kyle, stop impersonating John Doe
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-10 11:07:59 -03:00
aa3444505f
Bug 32356: Install installer translated files on update
If the installer files already exist for a given language, the translate script
won't update them.
We should get a confirmation from Bernardo (author of bug 24262), but I
don't understand why that would be needed (side effects?).

Test plan:
Install the same language several times, drop the DB and run the
installer+onboarding process.
Check that the files installed by the installer (YAML for notice templates,
biblio frameworks) have inserted the data properly into the DB.

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-09 10:21:17 -03:00
d81bca8270
Bug 29100: (QA follow-up) Rename issue(s) keys to checkout(s)
Adapted test plan:

1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
   Title: [% checkout.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
   Titles:
   [% FOREACH c IN checkouts %]
    * [% c.title %][% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen renamed @issues => @checkouts as well
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-07 15:36:21 -03:00
fd7a612c6e
Bug 29100: Remove unnecessary loops
Remove loops that only operate on a single result

Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: George Veranis <gveranis@dataly.gr>

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-07 15:36:21 -03:00
2114c70d11
Bug 29100: Add checkouts data loop to predue notices script ( advance_notices.pl )
Predue / due notices are limited to using itemscontent to display checkouts data. It would be nice to make all the checkouts data available for those notices so that libraries could format that data more nicely if they wish.

1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% issue.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH i IN issues %]
* [% i.title %]
[% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles

Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>

Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-07 15:36:21 -03:00
e5f251a709
Bug 15869: Change framework on overlay
This patch allows for selection of framework to use when overlaying
records - by default it is set to keep the initial framework

To test:
 1 - Create some records using one framework
 2 - Export the records
 3 - Edit the records to add fields not in original framework
 4 - Stage records using a rule that will find matches
 5 - Import
 6 - Note records contain new fields on display, but they are lost on edit
 7 - Apply patch
 8 - Stage records again
 9 - Select a framework that contains the new fields on import
10 - Import records
11 - Note records now use selected framework and are displayed/edited
correctly

Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-07 10:32:22 -03:00
4734ee5d00
Bug 32793: import_patrons.pl typo in usage
Typo fix to prevent confusion in CLI usage

Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:14 -03:00
2570062d25
Bug 32537: (QA follow-up): Tidy code
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:10 -03:00
b1867f8485
Bug 32537: (QA follow-up) Don't allow invalid no block values
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:09 -03:00
0880d80949
Bug 32537: Add no-block option to SIP cli emulator
This patch adds the option to pass a no-block option and use this
in checkout/checkin/renew messages

To test:
1 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831
2 - Confirm output like:
    SEND: 09N20221227    20183920221227    201839APCPL|AOCPL|AB39999000010831|ACterm1|BIN|
    (Note the N after 09)
3 - Apply patch
4 - Restart SIP, repeat 1
5 - Confirm still an N
6 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n N
7 - Confirm still an N
8 - perl misc/sip_cli_emulator.pl -a localhost -p 6001 -l CPL -su term1 -sp term1 -m checkin --item 39999000010831 -n Y
9 - Confirm the send has a Y after 09
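
A sketch of the new switch plus the follow-up's "only Y or N" guard. The long option name is an assumption, and the message layout just echoes the test plan output:

  use strict;
  use warnings;
  use Getopt::Long;

  my $no_block = 'N';    # default keeps the previous behaviour
  GetOptions( 'n|no-block=s' => \$no_block );
  die "no-block must be Y or N\n" unless $no_block =~ /^[YN]$/;

  my $timestamp = '20221227    201839';    # example value from the test plan
  my $checkin   = "09${no_block}${timestamp}${timestamp}APCPL|AOCPL|AB39999000010831|ACterm1|BIN|";
  print "SEND: $checkin\n";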

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:08 -03:00
3bf31ae4d4
Bug 32673: Remove misc/load_testing/ scripts
see comment 0 for more info

Signed-off-by: David Cook <dcook@prosentient.com.au>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:07 -03:00
89ff35b025
Bug 32612: (QA follow-up) Add missing interface for Logger
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:06 -03:00
3b1f2c57ea
Bug 32612: Add worker-error to log4perl conf
To test:
1 - Apply patch
2 - vim /etc/koha/sites/kohadev/log4perl.conf and add the lines below:
log4perl.logger.worker = WARN, WORKER
log4perl.appender.WORKER=Log::Log4perl::Appender::Screen
log4perl.appender.WORKER.stderr=1
log4perl.appender.WORKER.mode=append
log4perl.appender.WORKER.layout=PatternLayout
log4perl.appender.WORKER.layout.ConversionPattern=[%d] [%p] %m %l%n
log4perl.appender.WORKER.utf8=1
3 - Restart all
4 - Edit misc/background_jobs_worker.pl
-        my $job = Koha::BackgroundJobs->find($args->{job_id});
+        my $job;# = Koha::BackgroundJobs->find($args->{job_id});
5 - In another terminal: tail -f /var/log/koha/kohadev/koha-worker-error.log
6 - Force enqueue a job (that won't be found because of step 4):
  perl -e 'use Koha::BackgroundJob::BatchUpdateItem; my $bg = Koha::BackgroundJob::BatchUpdateItem->new(); $bg->enqueue({ record_ids=>['888888']});'
7 - Note error in log like:
  [2023/01/11 19:26:10] [WARN] No job found for id=2983 main:: /kohadevbox/koha/misc/background_jobs_worker.pl (111)
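
With the config above in place, writing to the new category is plain Log::Log4perl (Koha wraps this in Koha::Logger, but the idea is the same). A minimal standalone example:

  use strict;
  use warnings;
  use Log::Log4perl;

  Log::Log4perl->init('/etc/koha/sites/kohadev/log4perl.conf');
  my $logger = Log::Log4perl->get_logger('worker');
  $logger->warn('No job found for id=2983');    # sent to stderr by the WORKER appender; the worker service redirects that to koha-worker-error.log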

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-02-03 10:30:02 -03:00
08cc23626c
Bug 32393: (QA follow-up) Add explicit undef response in two catch blocks
Do not implicitly depend on last statement returning nothing.
Make it explicit. We want $args to be null here.

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-27 15:10:01 -03:00
85c330d8f2
Bug 32393: Deal with the DB fallback part
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-27 15:10:01 -03:00
8df96262c1
Bug 32393: Split into 2 try catch blocks
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-27 15:10:00 -03:00
ca9a598183
Bug 32393: Prevent invalid job to block the job queue
I faced a problem when testing an incorrect version of bug 32370.
The frame sent to the message broker was not a correctly JSON-encoded
string, and its decoding was failing, crashing the worker
script.
Additionally, as we don't send an ack for this frame, the next pull will
result in processing the same message, and so in the same crash.
No more messages can be processed!

This patch logs the error and acks the message to the broker, so that
processing does not get stuck.

Test plan:
0. Don't apply this patch
1. Enqueue a bad message
  a. Apply 32370
  b. Comment the following line in Koha::BackgroundJob::enqueue
    $self->set_encoded_json_field( { data => $job_args,    field => 'data' } );
  c. restart_all
  d. Use the batch item modification tool to enqueue a new job
=> Notice the error in the log
=> Note that the status of the job is "new"
=> Inspect rabbitmq queue:
% rabbitmq-plugins enable rabbitmq_management
% rabbitmqadmin get queue=koha_kohadev-long_tasks
You will notice there is a message in the "long_tasks" queue
2. Enqueue a good message
  a. Remove the change from 1.b
  b. restart_all
  c. Enqueue another job
=> Same error in the log
=> Both jobs are new
=> Inspect rabbitmq, there are 2 messages
3. Apply this patch
4. restart_all
=> Second (good) job is finished
=> rabbitmq long_tasks queue is empty

We cannot mark the first job as done, we have no idea which job it was!

QA: Note that this patch is dealing with another problem, not tested in
this test plan. If an exception is not correctly caught by the ->process
method of the job, we won't crash the worker. The job will be marked as
failed.
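
Inside the worker's receive loop the fix boils down to something like the fragment below. Variable names are illustrative, and the explicit undef matches the separate QA follow-up on this bug:

  # sketch of one iteration of the consumer loop ($frame, $logger, $stomp come from the loop's context)
  use JSON qw( decode_json );
  use Try::Tiny;

  my $args = try {
      decode_json( $frame->body );
  } catch {
      $logger->warn("Frame not processed - $_");
      undef;                                  # be explicit: we want $args to be null here
  };

  unless ($args) {
      $stomp->ack( { frame => $frame } );     # ack anyway so the queue does not stay blocked
      next;                                   # move on to the next frame
  }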

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-27 15:10:00 -03:00
b3c5bd02e1
Bug 32573: Send ACK message to RabbitMQ before handling the job
Splitting off this functionality from bug 32558. This is the comment that started this discussion: https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32558#c15

From the O'Reilly book Mobile and Web Messaging:

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-27 15:09:52 -03:00
790f00781a
Bug 32656: Script delete_records_via_leader.pl no longer deletes items
Bug 29788 inadvertently replaced a call to safe_delete() with safe_to_delete(),
so that whenever the script should delete an item it only checks whether
the item is deletable, after which deletion of the record fails because the
items were not deleted.

Test Plan:
1) Mark a record with items to be deleted via the record leader
2) Run delete_records_via_leader.pl -i -b -v
3) Note the script says it is deleting the items but then the record
   deletion fails. Note the items remain in the items table of the
   database.
4) Apply this patch
5) Repeat step 2
6) This time the items and record should be deleted!
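
The one-word difference at the heart of the regression, spelled out as a sketch ($item stands for a Koha::Item; the method names are the ones referenced above):

  if ( $item->safe_to_delete ) {    # only *checks* whether the item may be deleted
      $item->safe_delete;           # actually deletes it; this is the call that had been lost
  }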

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
2023-01-20 13:53:09 +00:00
cd89c38321
Bug 32481: Limit prefetch size for background jobs worker
This patch adds a prefetch size of 1 to the background jobs worker,
so that it fetches 1 message at a time. Without this change,
the RabbitMQ connection times out when too many messages for slow tasks
are fetched at the same time.

To test:
0. Apply patch
1. Run background worker
2. Rapidly enqueue multiple jobs that in total will take longer
than 30 minutes to process

Bug 32481: Use correct prefetch syntax for RabbitMQ

According to https://www.rabbitmq.com/stomp.html the header to
use for managing the prefetch is "prefetch-count".

You can verify the number of delivered and unacknowledged messages
on a channel on a connection by running "rabbitmqctl list_channels"
on the RabbitMQ host. This will tell you how many messages have been
delivered and are awaiting acknowledgement.
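
Concretely, the header rides along on the STOMP SUBSCRIBE frame. A sketch with Net::Stomp; the destination and credentials are examples, the header name comes from the RabbitMQ documentation cited above:

  use strict;
  use warnings;
  use Net::Stomp;

  my $stomp = Net::Stomp->new( { hostname => 'localhost', port => 61613 } );
  $stomp->connect( { login => 'guest', passcode => 'guest' } );
  $stomp->subscribe(
      {
          destination      => '/queue/koha_kohadev-long_tasks',
          ack              => 'client',
          'prefetch-count' => 1,    # deliver at most one unacknowledged message at a time
      }
  );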

Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2023-01-11 20:46:01 -03:00
Koha translators
623fb377ad
Translation updates for Koha 22.11.00
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2022-11-25 15:20:05 -03:00
0b6837ea22
Add release notes for Koha 22.11.00
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2022-11-25 15:20:02 -03:00
404c88fe87 Bug 31969: Use filter_by_last_update
This script has a pattern of deleting rows depending on a given
date/number of days; we should use the filter_by_last_update
Koha::Objects method.
No need for another method and tests, everything is already tested
there.

This patch also suggests renaming the references to "background" and
"bg" to "jobs", which seems more appropriate and is not an abbreviation.

Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2022-11-25 09:40:09 -03:00
Katrin Fischer
356e136b0f
Bug 32093: Make help and code match for bg-days in cleanup_database.pl
There was a mismatch between the parameter name in the documentation (bg-days)
and what was checked in the code (bg-jobs). This makes both match.

To test:
* Apply patch
* Make sure you have some background_jobs older than one day
* Verify the help of the script documents the parameter bg-days
* Run the cleanup_database.pl job
  Example:  ./misc/cronjobs/cleanup_database.pl --bg-days  --confirm -v
* Verify there are no errors and the lines have been deleted as expected

Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2022-11-17 14:55:12 -03:00
a9e0bcff16
Bug 27920: (QA follow-up) Fix POD in cli tool
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
2022-11-09 14:37:27 -03:00