This patch corrects a typo in the output of
search_for_data_inconsistencies.pl when a bibliographic record has no
title.
The patch also replaces 'biblio' with 'bibliographic record' in the same
sentence for terminology consistency.
To test:
- Have a bibliographic record without a title
- Run misc/maintenance/search_for_data_inconsistencies.pl
- Read the output and make sure the sentence is correct
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1) Run the following test and make sure all pass:
t/db_dependent/api/v1/biblios.t
t/db_dependent/api/v1/checkouts.t
t/db_dependent/api/v1/return_claims.t
t/db_dependent/Circulation/CalcDateDue.t
t/db_dependent/Circulation/CheckIfIssuedToPatron.t
t/db_dependent/Circulation/dateexpiry.t
t/db_dependent/Circulation/GetPendingOnSiteCheckouts.t
t/db_dependent/Circulation/GetTopIssues.t
t/db_dependent/Circulation_holdsqueue.t
t/db_dependent/Circulation/IsItemIssued.t
t/db_dependent/Circulation/issue.t
t/db_dependent/Circulation/MarkIssueReturned.t
t/db_dependent/Circulation/maxsuspensiondays.t
t/db_dependent/Circulation/ReturnClaims.t
t/db_dependent/Circulation/Returns.t
t/db_dependent/Circulation/SwitchOnSiteCheckouts.t
t/db_dependent/Circulation.t
t/db_dependent/Circulation/TooMany.t
t/db_dependent/Circulation/transferbook.t
t/db_dependent/DecreaseLoanHighHolds.t
t/db_dependent/Holds/DisallowHoldIfItemsAvailable.t
t/db_dependent/HoldsQueue.t
t/db_dependent/Holds/RevertWaitingStatus.t
t/db_dependent/Illrequests.t
t/db_dependent/ILSDI_Services.t
t/db_dependent/Items.t
t/db_dependent/Koha/Account/Line.t
t/db_dependent/Koha/Acquisition/Order.t
t/db_dependent/Koha/Biblio.t
t/db_dependent/Koha/Holds.t
t/db_dependent/Koha/Items.t
t/db_dependent/Koha/Item.t
t/db_dependent/Koha/Object.t
t/db_dependent/Koha/Patrons.t
t/db_dependent/Koha/Plugins/Circulation_hooks.t
t/db_dependent/Koha/Pseudonymization.t
t/db_dependent/Koha/Recalls.t
t/db_dependent/Koha/Recall.t
t/db_dependent/Koha/Template/Plugin/CirculationRules.t
t/db_dependent/Letters/TemplateToolkit.t
t/db_dependent/Members/GetAllIssues.t
t/db_dependent/Members/IssueSlip.t
t/db_dependent/Patron/Borrower_Discharge.t
t/db_dependent/Patron/Borrower_PrevCheckout.t
t/db_dependent/Reserves/GetReserveFee.t
t/db_dependent/Reserves.t
t/db_dependent/rollingloans.t
t/db_dependent/selenium/regressions.t
t/db_dependent/SIP/ILS.t
t/db_dependent/Holds.t
t/db_dependent/Holds/LocalHoldsPriority.t
t/db_dependent/Holds/HoldFulfillmentPolicy.t
t/db_dependent/Holds/HoldItemtypeLimit.t
2) Perform one or more checkouts for a patron, making sure
that the circulation rules allow renewals (for example by
setting an earlier due date).
3) Log in as this patron in OPAC and make sure the list of
checkouts is displayed correctly, and that renewing an issue
still works.
Sponsored-by: Gothenburg University Library
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
We do not want to copy fields from the previous records!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The script will use the date given by this parameter instead of today's date.
Test plan:
Use the POD of the script to understand how this flag works. Then use
the script to create fields with a date contained in a specific MARC
field.
Signed-off-by: Hugo Agud <hagud@orex.es>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch reformats the params passed to scroll_helper as defined here:
https://metacpan.org/pod/Search::Elasticsearch::Client::7_0::Scroll
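A minimal sketch of the reshaped call, assuming an $es client handle and an
example index name; see the linked POD for the full parameter list:
    my $scroll = $es->scroll_helper(
        index => 'koha_biblios',    # index name is an assumption
        body  => {
            size  => 1000,
            query => { match_all => {} },
        },
    );
    while ( my $doc = $scroll->next ) {
        # process one hit at a time
    }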
To test:
1 - perl misc/maintenance/compare_es_to_db.pl
2 - It dies:
[Param] ** Unknown param (scroll_in_qs) in (search) request. , called from sub Search::Elasticsearch::Client::7_0::Direct::scroll_helper at misc/maintenance/compare_es_to_db.pl line 55.
3 - Apply patch
4 - Repeat
5 - It succeeds!
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
C4::Log::cronlogaction() takes a hashref as argument, with "info"
and possibly "action" as keys. But there are a couple of places
where it is called with just a string as argument, and that does
not work. Both places need lock_exec to fail to trigger the error.
I have seen this on a production server, but not been able to
reproduce in ktd.
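For reference, a minimal sketch of the two call styles (the plain string
form is the one that fails):
    use C4::Log qw( cronlogaction );
    cronlogaction( $message );                # broken: plain string argument
    cronlogaction( { info => $message } );    # correct: hashref with "info"
    cronlogaction( { action => 'Start', info => $message } );    # optional "action" key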
To test:
- Run this on the Koha repo: grep -r "cronlogaction(" *
- Verify that fines.pl and process_message_queue.pl are the only
scripts that call cronlogaction without a hashref as argument,
but do it like this: cronlogaction( $message );
- Apply this patch
- Run the grep again and verify that all calls to cronlogaction
now take a hashref as argument
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch moves to using txn_begin and txn_commit.
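A minimal sketch of the pattern, using the standard Koha schema handle:
    use Koha::Database;
    my $schema = Koha::Database->new->schema;
    $schema->storage->txn_begin;
    # ... perform the batch of database changes ...
    $schema->storage->txn_commit;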
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Bug 32250: (follow-up) Remove one more dbh commit and don't start a new transaction when done
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
We can't run a background job if it isn't in the database; however,
this script runs with AutoCommit disabled. We need to enable it while
generating the background job, then disable it again for the updates.
I don't know if using a transaction would be preferable.
I tried to solve this independently, but it requires consolidating the
index requests to make this work easily.
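A sketch of the workaround described above; the job-enqueueing helper is
hypothetical:
    use C4::Context;
    my $dbh = C4::Context->dbh;
    $dbh->{AutoCommit} = 1;               # commit, so the background job row is visible
    enqueue_index_job(@biblionumbers);    # hypothetical helper
    $dbh->{AutoCommit} = 0;               # back to batched updates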
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds an array to collect updated biblios, and defers indexing
until a batch of changes is committed.
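A rough sketch of the batching approach, using Koha's indexer API (the exact
integration points are assumptions):
    use Koha::SearchEngine;
    use Koha::SearchEngine::Indexer;
    my @updated_biblios;
    # inside the linking loop:
    push @updated_biblios, $biblionumber if $headings_changed;
    # once per commit batch:
    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
    $indexer->index_records( \@updated_biblios, 'specialUpdate', 'biblioserver' );
    @updated_biblios = ();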
To test:
1 - Set LinkerModule system preference to either first or last match
Alternate this between runs of the linker to ensure changes are made
2 - Set SearchEngine to Elasticsearch and reindex (to ensure index is built)
3 - perl misc/link_bibs_to_authorities.pl -v
4 - Check Admin->Jobs and see that many ES index jobs are queued
5 - Apply patch
6 - perl misc/link_bibs_to_authorities.pl -v
7 - Check Admin->Jobs and see that one index job per commit batch is enqueued
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If the only change is a linked heading, we don't need to rebuild
the holds queue.
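A sketch of the intended guard; the flag name is hypothetical:
    use Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue;
    # skip the holds queue update when only linked headings changed
    unless ($only_headings_changed) {    # hypothetical flag
        Koha::BackgroundJob::BatchUpdateBiblioHoldsQueue->new->enqueue(
            { biblio_ids => [$biblionumber] } );
    }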
To test:
1 - Set the RealTimeHoldsQueue system preference to 'Enable'
2 - Run link_bibs_to_authorities
3 - Note holds queue jobs generated
4 - Apply patch
5 - Repeat step 2
6 - Note that no holds queue updates are generated
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Since bug 29486, misc/maintenance/search_for_data_inconsistencies.pl searches
for biblio.biblionumber in the MARC record with $record->subfield().
This fails when the field is a control field (tag below 010).
The same applies to biblioitems.biblioitemnumber.
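The fix amounts to branching on the tag number, roughly (a sketch):
    my $value = $tag < 10
        ? $record->field($tag)->data()            # control fields have no subfields
        : $record->subfield( $tag, $subfield );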
Test plan:
1.0) On a UNIMARC database (biblio.biblionumber is on 001)
1.1) Run misc/maintenance/search_for_data_inconsistencies.pl
=> Without the patch you get the error: Control fields (generally, just tags below 010) do not have subfields, use data()
=> With the patch there is no error
2.0) On a MARC21 database (biblio.biblionumber is on 999c)
2.1) Run misc/maintenance/search_for_data_inconsistencies.pl
=> Check you see no error
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Essentially this only adds a warn, plus some cosmetic changes.
Test plan:
Copy your kohastructure to xx.sql.
Run sync_db_comments.pl -schema xx.sql. You will see usage.
Run sync_db_comments.pl -schema xyz.sql. You will see a warn and
the usage statement.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Adds a schema parameter to the cmdline script now too.
Test plan:
Run sync_db_comments.pl with -schema file where file does not exist.
(On dev install) rename kohastructure.sql, try with[out] referring
to it using -schema. You could also use the standard path
intranet/cgi-bin/installer/data/mysql.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The module returns messages. The script can print them in verbose
mode. Test script adjusted accordingly.
Test plan:
Run t/db_dependent/Koha/Database/Commenter.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Note: This is only done (and 'needed') for the command line, not
for the module subroutines.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
Run sync_db_comments.pl --clear --renumber
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
[1] Backup your database, if not done already.
[2] Check output of dry_run when clearing a table:
misc/maintenance/sync_db_comments.pl -clear -table items -dry
[3] Save output of misc/devel/update_dbix_class_files before changing
comments in order to compare later. (Commit your changes.)
You may not have changes after running (at least on a fresh
database). That's fine.
[4] Clear all comments:
misc/maintenance/sync_db_comments.pl -clear
[5] Renumber all comments:
misc/maintenance/sync_db_comments.pl -renum
[6] Reset all comments to schema. Make sure that script finds your
structure in installer/data/mysql folder.
misc/maintenance/sync_db_comments.pl -reset
[7] Run update_dbix_class_files again and inspect changes as compared
to previous run.
Can you explain them? You should only see changes related to
column comments. If you did not have changes in step 3, you
should not have them here either.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The automatic_renewals.pl cron script currently loops through the items up
for automatic renewal and calls the indexer for each one individually.
A skip_record_index parameter has now been added to the AddRenewal function
to skip the indexing process. The item numbers are collected in an array,
and the indexer is then called once from within automatic_renewals.pl with
that array, queueing one indexing job instead of multiple jobs (see the
sketch below).
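A rough sketch of the flow; the hashref-style AddRenewal call and the
surrounding variable names are assumptions:
    use C4::Circulation qw( AddRenewal );
    use Koha::SearchEngine;
    use Koha::SearchEngine::Indexer;
    my @renewed_items;
    for my $auto_renew (@auto_renews) {
        AddRenewal(
            {
                borrowernumber    => $auto_renew->borrowernumber,
                itemnumber        => $auto_renew->itemnumber,
                skip_record_index => 1,    # defer indexing
            }
        );
        push @renewed_items, $auto_renew->itemnumber;
    }
    # one indexing job for the whole run:
    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
    $indexer->index_records( \@renewed_items, 'specialUpdate', 'biblioserver' );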
Test plan:
1) AddRenewal uses Koha::Items->store() to trigger the indexing process.
   Run prove -vv t/db_dependent/Koha/SearchEngine/Indexer.t and check tests
   5, 6, 29 and 30. These tests prove whether passing skip_record_index to
   store() triggers or skips the indexing process. All four tests should
   pass, showing that skip_record_index can prevent the indexing being
   triggered.
2) Add multiple renewals that are able to be autorenewed and run the automatic_renewals.pl script. There should be multiple items queued in zebraqueue.
3) Apply patch and try again
4) There should now only be one job queued in zebraqueue
Mentored-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Delete a biblio
2 - perl /kohadevbox/koha/misc/migration_tools/build_oai_sets.pl -v -i -r
3 - Error:
Can't call method "items" on an undefined value at /kohadevbox/koha/Koha/Biblio/Metadata.pm line 163.
4 - Apply patch
5 - Repeat
6 - Success!
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Works as advertised.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Add --log-action parameter to cleanup_database.pl
Test plan:
1. Apply patch
2. Enable cataloguing and borrowers log
3. Make some changes to borrowers, create some borrowers, and edit some
biblio records
4. Change the action_logs.timestamp for all action_logs entries to 367
days ago
5. Run cleanup_database.pl with --logs 365 --log-module=MEMBERS
--log-action=CREATE --confirm
6. Confirm only the borrowers creation action_logs entries are removed
7. Run cleanup_database.pl with --logs 365
8. Confirm all action_logs entries are removed
Sponsored-By: Toi Ohomai Institute of Technology, New Zealand
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch ignores fuzzy translations for preferences and warns if there are multiple sections with the same translated name.
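A sketch of how fuzzy entries can be skipped while reading the .po file,
assuming Locale::PO is used for the parsing:
    use Locale::PO;
    my %seen;
    my $entries = Locale::PO->load_file_asarray($po_file);
    for my $po (@$entries) {
        next if $po->fuzzy;    # ignore fuzzy translations
        my $section = Locale::PO->dequote( $po->msgstr );
        warn "Multiple sections named '$section'" if $seen{$section}++;
    }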
Test Plan:
1) Install English United Kingdom translations (./misc/translator/translate install en-GB)
2) Go to Koha administration in the staff interface
3) Click Global system preferences
4) Select I18N/L10N preferences
5) Enable English United Kingdom in the language preference for staff interface
6) Save all I18N/L10N preferences
7) Return to Koha administration
8) Select English United Kingdom as the language at the bottom of the screen
9) Click on Global system preferences
10) Select Circulation
11) Observe that there is only SelfCheckInMainUserBlock or StockRotation, but not both
12) Apply the patch
13) Install English United Kingdom translations (./misc/translator/translate install en-GB)
14) Go to Koha administration
15) Select English United Kingdom as the language at the bottom of the screen
16) Click on Global system preferences
17) Select Circulation
18) Observe that SelfCheckInMainUserBlock and StockRotation are both present
Signed-off-by: Caroline Cyr La Rose <caroline.cyr-la-rose@inlibro.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Some old-style code is making our tests fail when run in Debian Testing.
This patch addresses this.
To test:
1. Launch bookworm KTD:
$ KOHA_IMAGE=master-bookworm ktd up -d
2. Run:
$ ktd --shell
k$ prove t/00-testcritic.t
=> FAIL: It fails!
3. Apply the patch
4. Repeat 2
=> SUCCESS: Tests now pass!
5. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If you have EmailOverduesNoEmail = Send and specify "--html somedir",
overdue_notices.pl will send an email with an attachment called
attachment.txt that contains partial HTML. This patch fixes that.
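The fix boils down to choosing the attachment's MIME type and filename from
the requested output format, roughly as follows (a sketch; the variable
names are assumptions):
    my ( $content_type, $filename ) =
          $html ? ( 'text/html',  'attachment.html' )
        : $csv  ? ( 'text/csv',   'attachment.csv'  )
        :         ( 'text/plain', 'attachment.txt'  );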
To reproduce in koha-testing-docker:
- EmailOverduesNoEmail = Send
- Make sure you have a loan that was due yesterday, by backdating the
due date
- Set up an overdue action to send "Overdue Notice" (ODUE) to the
category you made the loan to above, when a loan is one day overdue
- Run this command:
$ sudo koha-shell -c "perl misc/cronjobs/overdue_notices.pl -v -t -html /tmp/" kohadev
- Look at the file /tmp/notices-<DATE>.html and make sure it is a full
HTML document, with <html>, <head>, <body> etc.
- Create a report like this:
SELECT message_id, letter_id, borrowernumber, subject, CONCAT( '<pre>', content, '</pre>' ) AS content,
metadata, letter_code, message_transport_type, time_queued, updated_on, to_address, content_type, failure_code
FROM message_queue
WHERE subject = 'Overdue Notices'
ORDER BY message_id DESC
- Run the report and verify there is a line like this in the "content"
of the newest message:
Content-Type: text/plain; name=attachment.txt
- A part of the "content" will be a block of several lines of gibberish
(base64) that look something like "RGVhciAga29oYSwNCg0KQWN...". Copy
this block of text to somewhere like base64decode.org and decode the
text. You should see a fragment of HTML, without <html>, <head>,
<body> etc.
To test:
- Apply the patch
- Run overdue_notices.pl again, with the same arguments as above
- Make sure /tmp/notices-<DATE>.html is still a full HTML document
- Re-run the report, and make sure you now have this in the "content":
Content-Type: text/html; name=attachment.html
- Decode the base64 and make sure it is now a full HTML document, with
<html>, <head>, <body> etc.
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
"-csv /tmp/test.csv"
- Make sure /tmp/test.csv and the decoded base64 from the report
contain CSV data
- Re-run overdue_notices.pl as above, but replace "-html /tmp/" with
"-text /tmp/"
- Make sure /tmp/notices-<DATE>.txt and the decoded base64 from the
report contain no HTML
Note:
- The actual text from the different messages will be enclosed in
<pre>-tags
- If you have HTML in your ODUE message template and run with -v, you
will have warnings saying "The following terms were not matched and
replaced"
These are due to Bug 14347, and are not addressed by the current patch.
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The default of 1 preserves the old behavior: one fork for the job.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Right now background_jobs_worker.pl only processes jobs serially. It would make sense to handle jobs in parallel, up to a user-definable limit.
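A minimal sketch of the forking pattern, assuming Parallel::ForkManager; the
job fetching and handling helpers are hypothetical:
    use Parallel::ForkManager;
    my $pm = Parallel::ForkManager->new($max_processes);    # from -m or koha-conf.xml
    while ( my $job = next_job() ) {    # hypothetical queue fetch
        $pm->start and next;            # parent: keep polling
        process_job($job);              # child: handle one job
        $pm->finish;
    }
    $pm->wait_all_children;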
Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
number of max processes
6) Note the multiple forked processes in the ps output
Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output
Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
In search_for_data_inconsistencies.pl, the test for authorised values prints its results as a single line:
* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
{94 => AV} {95 => AV} {96 => AV} {97 => AV} {98 => AV} {99 => AV} {100 => AV} {101 => AV} {102 => AV} {103 => AV}
It would be clearer with newlines, especially for processing with scripts (grep, awk, ...):
* The Framework *VR* is using the authorised value's category *LOC*, but the following items.location do not have a value defined ({itemnumber => value }):
{94 => AV}
{95 => AV}
{96 => AV}
{97 => AV}
{98 => AV}
{99 => AV}
{100 => AV}
{101 => AV}
{102 => AV}
{103 => AV}
Test plan:
1) In koha-testing-docker
2) Delete the value AV from the LOC authorised values category
3) Run misc/maintenance/search_for_data_inconsistencies.pl
=> You see each value on its own line in the result
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This adds a reason parameter and passes it into the cancellation if supplied.
To test:
1 - Place a hold for a patron in your system
2 - Run script with --days 0 -v
3 - verify that it would cancel the reserves (and that you are okay with cancelling the ones it found)
4 - Make sure you have a notice in the holds module with code 'HOLD_CANCELLATION'
5 - Set content of the notice like:
[% IF hold.cancellation_reason=='too_old' %]
Canceled old
[% END %]
6 - Run script with --days 0 -v --reason too_bad -c
7 - Confirm hold cancelled, no notice sent to patron
8 - Place another hold
9 - Run script with --days 0 -v --reason too_old -c
10 - Confirm hold cancelled, notice sent to patron
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch now checks the status of messages and ignores any message
with a status of 'new'. It is also rebased to account for changes made
to cleanup_database.pl in bug 17350.
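Conceptually the deletion filter becomes the following (a sketch; the exact
column names in edifact_messages are assumptions):
    use C4::Context;
    my $dbh = C4::Context->dbh;
    $dbh->do(
        q{DELETE FROM edifact_messages
           WHERE transfer_date < DATE_SUB(CURDATE(), INTERVAL ? DAY)
             AND status != 'new'},
        undef, $days
    );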
Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table with a mixture of statuses including 'new'
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted
7) Confirm that no messages marked 'new' have been deleted
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Changes a few occurrences of edifact in the output messages
to EDIFACT.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch allows users to clear out old edifact_messages using the
cleanup_database script. The number of days can either be set on the
command line, or the default value of 365 days can be used.
Test plan:
1) Ensure you have some EDI orders or even just some dummy messages in the edifact_messages table
2) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose (Change the number of days according to the data in your table)
3) The response should show a number of messages that would have been deleted
4) Run perl misc/cronjobs/cleanup_database.pl --edifact-messages 100 --verbose --confirm
5) The response should now show the same number of messages have been deleted
6) Check your edifact_messages table to confirm that the data has been deleted
Sponsored-by: PTFS Europe
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
It would be very useful to be able to tell process_message_queue.pl to skip
processing some messages. This is particularly useful where a plugin handles
sending some messages using the before_send_messages hook. While that plugin
is processing, more messages meant for the plugin might be queued; control
then moves back to the script, SendQueuedMessages is called, and those
messages end up being processed there instead of by the plugin.
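A sketch of how the filter might be forwarded to C4::Letters; the exact
plumbing is an assumption:
    use C4::Letters qw( SendQueuedMessages );
    SendQueuedMessages(
        {
            verbose => $verbose,
            where   => $where,    # e.g. "content NOT LIKE '%WORD%'"
        }
    );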
Test Plan:
1) Apply this patch
2) Queue two messages, each with a unique word
3) Run process_message_queue --where "content NOT LIKE '%WORD%'"
where WORD is a unique word in one of the two messages
4) Note the message containing "WORD" was not processed
5) prove t/db_dependent/Letters.t
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This amends the documentation of the cleanup_database.pl script
to include a hint for how the saved reports data is created.
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
apply patch
1 - have or create a report
2 - run your report via the command line with --store-results. Do this twice.
3 - update the saved_reports table to set date_run for one of your two saved result sets to a datetime more than 5 days ago
4 - perl misc/cronjobs/cleanup_database.pl --reports 5 --verbose --confirm
5 - Koha tells you it's deleting saved reports data from more than 5 days ago
6 - confirm in the database and the staff interface that it's done so
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently we generate large numbers of single-record reindex requests for
circulation and other actions. It can take a long time to process these, as
we need to load the ES settings for each one.
This patch updates the Elasticsearch background jobs to put records into a
new queue that can be processed by its own worker, and adds a dedicated
worker that batches the jobs every second.
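A sketch of the dedicated worker's batching loop; the queue-polling helper
is hypothetical:
    use Koha::SearchEngine;
    use Koha::SearchEngine::Indexer;
    my $indexer = Koha::SearchEngine::Indexer->new(
        { index => $Koha::SearchEngine::BIBLIOS_INDEX } );
    my @record_ids;
    while (1) {
        # drain whatever has arrived on the dedicated indexing queue
        while ( my $job = poll_index_queue() ) {    # hypothetical helper
            push @record_ids, @{ $job->{record_ids} };
        }
        if (@record_ids) {
            $indexer->index_records( \@record_ids, 'specialUpdate', 'biblioserver' );
            @record_ids = ();
        }
        sleep 1;    # batch window
    }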
To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the running in terminal and perform actions in staff interface:
- Checking out a bib
- Returning a bib
- Editing a single bib
- Editing a single item
- Batch editing bibs
- Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Script dies as Elasticsearch not enabled
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Bug 32594: (follow-up) Adjust logging per bug 32612
JD amended patch: tidy! There were tabs here...
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
On bug 32594 we are adding a new worker, dedicated to Elastic indexing.
We should have a common place for workers, and we agreed on misc/workers.
To test:
1 - Apply patch
2 - reset_all in koha testing docker
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
To test:
1 - Enable OAI sets, and define a set with mapping 952 y = BK
2 - perl misc/migration_tools/build_oai_sets.pl -v -i -r
3 - The script dies:
Koha::Biblio::Metadata->record must be called on an instantiated object or like a class method with a record passed in parameter
4 - Apply patch
5 - Repeat
6 - Success!
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen updated indentation
Some of our scripts have a space in the "shebang" (first) line:
#! /usr/bin/perl
This is not illegal, and it does work, but it is good to be
consistent, so this patch removes the space.
To test:
- Run: grep -rn --include=*.pl '#! /usr/' *
- See the list of files that have a space in the shebang
- Apply the patch
- Run the command again, there should be no output, meaning there
are no more files with space in the shebang
- Have a look at the patch and check that it only changes the
shebangs
- Sign off
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: Kyle, stop impersonating John Doe
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If the installer files exist for a given language, the translate script
won't update them.
We should get a confirmation from Bernardo (author of bug 24262), but I
don't understand why it would be needed (side effects?).
Test plan:
Install the same language several times, drop the DB and run the
installer+onboarding process.
Check that the files installed by the installer (YAML notice templates,
biblio frameworks) have inserted the data properly into the DB.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Adapted test plan:
1) Apply this patch
2) For DUE and PREDUE notices, set the message body to the following:
Title: [% checkout.title %]
3) For DUEDGST and PREDUEDGST notices, set the message body to the following:
Titles:
[% FOREACH c IN checkouts %]
* [% c.title %][% END %]
4) Generate PREDUE and DUE notices for patrons including digests
5) Verify those notices contain the checkout titles
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: tcohen renamed @issues => @checkouts as well
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomás Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Remove loops that only operate on one result.
Signed-off-by: Felicity Brown <Felicity.Brown@montgomerycountymd.gov>
Signed-off-by: George Veranis <gveranis@dataly.gr>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>