Jonathan Druart
424aca3d56
Here we go!

Disclaimer: this patch is huge and does many things, but splitting it into
several chunks would be time-consuming and painful to rebase. However, it adds
many tests and isolates/refactors code to make it far more reusable.

This patchset makes the "batch item modification" and "batch item deletion"
features use the task queue (reminder: since bug 28158, and so 21.05.00, we no
longer use the old "background job" functionality and the user does not get
any information about the progress of the job).

On top of that, more of the code that builds an item form and a list of items
is now isolated in modules (.pm) and include files (.inc).

We are reusing the changes made by bug 27526, which simplifies the way we
edit/create items (no more unnecessary serialization Koha > MARC > MARCXML >
XML > HTML).

New modules:
* Koha::BackgroundJob::BatchDeleteItem
  Subclass to process item deletion in batch
* Koha::BackgroundJob::BatchUpdateItem
  Subclass to process item modification in batch
* Koha::Item::Attributes
  We needed an object to represent an item's attributes that are not mapped
  to a Koha field (aka "more subfields xml"). This module helps us build the
  MARCXML from a hashref and the reverse.
* Koha::UI::Form::Builder::Item
  The code that was used to build the add/edit item form is centralised in
  this module. In conjunction with the subfields_for_item BLOCK (from
  html_helpers.inc) it will be easy to reuse this code in other places where
  the item form is used (acquisition and serials modules).
* Koha::UI::Table::Builder::Items
  Same as above, for the table. We now use this table from 3 different places
  (batch item modification, batch item deletion, background job detail view)
  and the code lives in only one place. To be used with the
  items_table_batchmod BLOCK (still from html_helpers.inc).

This patch also fixes some bugs around repeatable subfields and regexes. A UI
change reflects the limitation: if you want to apply a regex to a subfield,
you cannot add several subfields for the same subfield code.

Test plan:

Prepare the ground:
- Make sure you are always using a bibliographic/item record that uses the
  framework you are modifying!
- Add some subfields for items that are not mapped to a Koha field (note that
  you can use 'é' for more fun, but don't try fancier characters)
- Make some subfields (mapped and not mapped to a Koha field) repeatable
- Add default values to some of your subfields

There are 4 main screens to test:

1. Add/edit item form
   The behaviour should be the same before and after this patch. See the test
   plan from bug 27526. These 2 prefs must be tested:
   * SubfieldsToAllowForRestrictedEditing
   * SubfieldsToUseWhenPrefill

2. Batch modification
   a. Fill in some values; play with repeatable subfields and regexes. Note
      that the behaviour in master was buggy: only the first value was
      modified by the regex:
      * With subfield = "a | b", 1 value added with "new" => "new | b"
      * With subfield = "a | b", 2 new fields "new1","new2" => "new2 | b"
      Important note: for repeatable subfields, a regex applies to the
      subfields in their "concatenated form". To apply the regex to all the
      subfields of a given subfield code you must use the "g" modifier (see
      the short example after this message). This could be improved later,
      but keep in mind that it is not a regression or behaviour change.
   b. Play with the "Populate fields with default values from default
      framework" checkbox
   c. Use this tool to modify items and play with the different sysprefs that
      interact with it:
      * NewItemsDefaultLocation
      * SubfieldsToAllowForRestrictedBatchmod
      * MaxItemsToDisplayForBatchMod
      * MaxItemsToProcessForBatchMod

3. Batch deletion
   a. Batch delete some items
   b. Check items out and try to delete them
   c. Use the "Delete records if no items remain" checkbox to delete
      bibliographic records with no remaining items
   d. Play with the following syspref and confirm that it works as expected:
      * MaxItemsToDisplayForBatchDel
   e. Stress the tool: go to the confirmation screen with items that can be
      deleted, do not request the job to be processed right away, but check
      an item out before doing so

4. Background job detail view
   You will have seen it already if you are curious and tested the above.
   When a new modification or deletion batch is requested, the confirmation
   screen tells you that the job has been enqueued. A link to the progress of
   the job can be followed. On this screen you will be able to see the result
   of the job once it has been fully processed.

QA notes:
* There are some FIXMEs that are not blockers in my opinion. Feel free to
  discuss them if you have suggestions.
* Do we still need MaxItemsToProcessForBatchMod?
* Prior to this patchset we had a "Return to the cataloging module" link if
  we came from the cataloguing module and the biblio was deleted. We can no
  longer know whether the biblio will be deleted, but we could display a "Go
  to the cataloging module" link on the "job has been enqueued" screen
  regardless of where we came from.

Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
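To make the note about the "g" modifier concrete, here is a minimal standalone
Perl sketch (not Koha code; the "a | a" value and the " | " separator simply
mirror the "concatenated form" described in the test plan) showing why a plain
substitution only touches the first repeatable subfield:

    use Modern::Perl;

    # Repeatable subfields are handed to the regex in concatenated form.
    my $concatenated = "a | a";

    # Without /g only the first occurrence is modified...
    ( my $once = $concatenated ) =~ s/a/new/;
    say $once;    # "new | a"

    # ...with /g the substitution reaches every subfield value.
    ( my $all = $concatenated ) =~ s/a/new/g;
    say $all;     # "new | new"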
90 lines
2.6 KiB
Perl
Executable file
#!/usr/bin/perl

# This file is part of Koha.
#
# Koha is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# Koha is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Koha; if not, see <http://www.gnu.org/licenses>.

use Modern::Perl;
use JSON qw( decode_json );
use Try::Tiny qw( catch try );

# Loaded explicitly: Koha::BackgroundJob->connect and C4::Context->config
# are called below.
use C4::Context;
use Koha::BackgroundJob;
use Koha::BackgroundJobs;

my $conn;
try {
    $conn = Koha::BackgroundJob->connect;
} catch {
    warn sprintf "Cannot connect to the message broker, the jobs will be processed anyway (%s)", $_;
};
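# If the broker is unreachable, $conn stays undef and the worker falls back
# to polling the database (see the else branch of the main loop below).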

my @job_types = qw(
    batch_biblio_record_modification
    batch_authority_record_modification
    batch_item_record_modification
    batch_biblio_record_deletion
    batch_authority_record_deletion
    batch_item_record_deletion
    batch_hold_cancel
);
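
# Subscribe to one queue per job type; with ack => 'client' each frame must
# be acknowledged explicitly once the job has been handled.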
if ( $conn ) {
    # FIXME cf note in Koha::BackgroundJob about $namespace
    my $namespace = C4::Context->config('memcached_namespace');
    for my $job_type ( @job_types ) {
        $conn->subscribe({ destination => sprintf("/queue/%s-%s", $namespace, $job_type), ack => 'client' });
    }
}
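
# Main loop: when connected, wait for the next frame from the broker and
# process the corresponding job; without a connection, poll the database
# for jobs instead.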
while (1) {
    if ( $conn ) {
        my $frame = $conn->receive_frame;
        if ( !defined $frame ) {
            # maybe log connection problems
            next;    # will reconnect automatically
        }

        my $body = $frame->body;
        my $args = decode_json($body);

        # FIXME This means we need to have created the DB entry beforehand.
        # It could work as a first step, but then we will want to handle jobs created from the message received.
        my $job = Koha::BackgroundJobs->find( $args->{job_id} );

        process_job( $job, $args );
        $conn->ack( { frame => $frame } );    # FIXME depending on success?

    } else {
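        # No broker connection: fall back to polling the database for jobs
        # still marked 'new' and process them directly.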
        my $jobs = Koha::BackgroundJobs->search({ status => 'new' });
        while ( my $job = $jobs->next ) {
            my $args = decode_json( $job->data );
            process_job( $job, { job_id => $job->id, %$args } );
        }
        sleep 10;
    }
}
$conn->disconnect;
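
# Run each job in a forked child so that a crash or memory growth in one job
# cannot take down the worker; the parent waits for the child to finish
# before picking up the next job.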
sub process_job {
    my ( $job, $args ) = @_;

    my $pid;
    if ( $pid = fork ) {
        wait;
        return;
    }

    die "fork failed!" unless defined $pid;

    $job->process( $args );
    exit;
}