This change explicitly ignores SIGPIPE signals in the background jobs
worker.
Daemons like Starman ignore SIGPIPE, so it makes sense to set this explicitly.
Differences in the inner workings of the MySQL and MariaDB client libraries have
yielded different behaviours in automatic reconnection and potentially in SIGPIPE
handling, so this helps to make the overall behaviour more consistent.
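A minimal sketch of the change described above; the handler assignment is the whole
of it, and placing it at the top of the worker script is an assumption:

    # Ignore SIGPIPE so a peer closing its socket surfaces as a write
    # error the worker can handle, instead of terminating the process.
    $SIG{PIPE} = 'IGNORE';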
Test plan:
0. Apply patch and run "restart_all"
1. Go to http://localhost:8081/cgi-bin/koha/catalogue/detail.pl?biblionumber=29
2. Click "Save" > "MARCXML"
3. Go to http://localhost:8081/cgi-bin/koha/tools/stage-marc-import.pl
4. Click "Choose file", choose the MARCXML file, click "Upload file"
5. Click "Stage for import"
6. Note the job is marked as "100% Finished"
7. In a separate window run "docker restart koha-db-1"
8. Repeat steps 3-5 for uploading file and running stage for import
9. Note that the job is marked as "100% Finished" as you'd expect
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Added comment on the SIGPIPE line.
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This is essentially a copy-and-paste variant of the background jobs worker.
We could probably do better than having two scripts here ;)
Test plan:
See former test plan. Apply it to ES indexing.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Removed queue from query at MQ side. Discussed on IRC.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
MQ mode: the worker detects that the message broker is running and consumes
messages from it instead of only polling the database.
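A minimal sketch of the MQ-consuming path, assuming a local RabbitMQ with the STOMP
plugin enabled; the queue name and the job handling are illustrative, not the
worker's actual code:

    use JSON qw( decode_json );
    use Net::Stomp;

    my $stomp = Net::Stomp->new({ hostname => 'localhost', port => 61613 });
    $stomp->connect({ login => 'guest', passcode => 'guest' });
    $stomp->subscribe({ destination => '/queue/koha-long_tasks', ack => 'client' });

    while (1) {
        my $frame = $stomp->receive_frame;
        next unless defined $frame;               # connection hiccup, keep looping
        my $args  = decode_json( $frame->body );  # carries the job id and parameters
        # ... look the job up in the database, check its status, process it ...
        $stomp->ack({ frame => $frame });         # only then remove it from the queue
    }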
Test plan:
Stop the worker for long tasks.
Make sure that RabbitMQ is running.
Stage a file. (This adds a long task.)
Go to the staff view of background jobs and cancel this job.
Check that the job is still in the MQ with rabbitmqctl list_queues.
Now start the worker for long tasks.
Check that the job is gone from the MQ with rabbitmqctl list_queues.
And check the logfile for the adjusted warning, like:
[WARN] Job 5 not found, or has wrong status/queue main:: /usr/share/koha/misc/workers/background_jobs_worker.pl (134)
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] Removed queue from query at MQ side. Discussed on IRC.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
When background_jobs_worker.pl spawns a new child process, it needs to
explicitly reinitialize the random seed - otherwise each child process
will inherit the same random seed from the parent process, and any
randomization will produce identical results each time.
This patch adds a call to srand immediately after the fork to
reinitialize the seed. Note that child processes should not call
srand with no parameter anywhere else, as the Perl documentation
indicates that srand should not be called with no parameter more than
once per process.
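A minimal sketch of the fork-and-reseed pattern described above; $job and
process_job() are illustrative names, not the worker's actual code:

    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ( $pid == 0 ) {    # child
        srand();          # reseed once, so siblings don't share the parent's seed
        process_job($job);
        exit 0;
    }
    # parent: continue dispatching jobs; each child now gets its own seed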
To test:
1. Apply the logging patch only
2. Set system preferences:
a. RealTimeHoldsQueue -> Enable
b. RandomizeHoldsQueueWeight -> in random order
3. Watch the logs for the staff interface
in ktd:
ktd --shell
koha-intra-err
4. Place a hold. Note that the logs display the branch list before and
after it is randomized.
5. Place some more holds. Note that the branch order after randomization
is identical each time.
6. Apply both patches and restart_all
7. Repeat steps 3-5.
-> Note that the branch order before randomization hasn't changed
-> Note that the branch order after randomization is now different
each time.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The default of 1 resembles the old behaviour: one fork per job.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Right now background_jobs_worker.pl only processes jobs serially. It would make sense to handle jobs in parallel, up to a user-definable limit.
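A minimal sketch of the capped-parallelism idea, assuming Parallel::ForkManager is
available; reading max_processes from koha-conf.xml this way and the
next_job()/process_job() helpers are illustrative:

    use C4::Context;
    use Parallel::ForkManager;

    my $max = C4::Context->config('background_jobs_worker')->{max_processes} || 1;
    my $pm  = Parallel::ForkManager->new($max);

    while ( my $job = next_job() ) {
        $pm->start and next;    # parent: keep looping while under the process cap
        srand();                # child: reseed (see the srand patch above)
        process_job($job);
        $pm->finish;            # child exits
    }
    $pm->wait_all_children;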
Test Plan:
1) Apply this patch
2) Stop background_jobs_worker.pl
3) Generate some background jobs by editing records, placing holds, etc
4) Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5) Run background_jobs_worker.pl with parameter -m 3 or some other
number of max processes
6) Note the multiple forked processes in the ps output
Test notes - also tested the following on KTD:
1. Stop background_jobs_worker.pl
2. Edit /etc/koha/sites/kohadev/koha-conf.xml - set max_processes to 10
3. Generate some background jobs
4. Watch processes in a new terminal: watch -n 0.1 'ps aux | grep background_jobs_worker.pl'
5. Restart all
6. Confirm multiple forked processes in the ps output
Both methods work as expected and generate multiple forked processes
based on the value set for max processes.
Signed-off-by: emlam <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently we generate large numbers of single-record reindex jobs for circulation and
other actions. It can take a long time to process these, as we need to load the ES
settings for each one.
This patch updates the Elasticsearch background jobs to push records into a new queue
that can be processed by its own worker, and adds a dedicated worker that batches the
jobs every second.
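A minimal sketch of the batching idea, assuming a collect-then-flush loop;
gather_pending_jobs(), record_ids and index_records() are illustrative names, not the
daemon's actual API:

    my @batch;
    while (1) {
        push @batch, gather_pending_jobs();    # drain whatever arrived since the last pass
        if (@batch) {
            # one bulk Elasticsearch request instead of one per record,
            # so the ES settings are loaded once per flush rather than per job
            index_records( map { @{ $_->{record_ids} } } @batch );
            @batch = ();
        }
        sleep 1;    # flush roughly once per second
    }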
To test:
1 - Apply patches, set SearchEngine system preference to 'Elasticsearch'
2 - perl misc/search_tools/es_indexer_daemon.pl
3 - Leave the daemon running in the terminal and perform actions in the staff interface:
- Checking out a bib
- Returning a bib
- Editing a single bib
- Editing a single item
- Batch editing bibs
- Batch editing items
4 - Confirm for each action that records are updated in search/search results
5 - Stop the script
6 - set SearchEngine system preference to 'Zebra'
7 - perl misc/search_tools/es_indexer_daemon.pl
8 - Note that the script dies because Elasticsearch is not enabled
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Bug 32594: (follow-up) Adjust logging per bug 32612
JD amended patch: tidy! There were tabs here...
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
On bug 32594 we are adding a new worker dedicated to Elasticsearch indexing.
We should have a common place for workers, and we agreed on misc/workers.
To test:
1 - Apply patch
2 - Run reset_all in koha-testing-docker (ktd)
3 - ps aux | grep background
4 - Confirm the workers are running, and running in the new directory
5 - Perform a batch item modification
6 - Ensure the job is processed by the worker
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>