Bug 25265: [20.05.x] Prevent double reindex of the same item in batchmod
When batch editing, 2 reindex calls are sent to ES/Zebra.
We can easily avoid that by reusing the skip_modzebra_update parameter (renamed skip_record_index)
Additionally, we should send only one indexing request per biblio, and we should
only do it if the update succeeds
As the whole batch mod runs in a transaction it is possible to fail, in which case
the Zebra queue is reset, but the ES indexes have already been updated
In addition to the skip parameter, this patchset moves the Zebra and Elasticsearch calls into
Indexer modules and introduces a generic Koha::SearchEngine::Indexer so that we don't
need to check which engine is in use when requesting indexing
The new index_records routine takes an array so that we can reduce the number of
calls to the ES server.
The index_records routine for Zebra loops over ModZebra to avoid affecting current behaviour
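To illustrate the intended calling pattern, here is a rough sketch (the per-item call
and variable names are hypothetical; only Koha::SearchEngine::Indexer, index_records
and skip_record_index come from this patchset):

    use Koha::SearchEngine::Indexer;

    # Engine-agnostic: the same call works whether Zebra or Elasticsearch is configured
    my $indexer = Koha::SearchEngine::Indexer->new({ index => 'biblios' });

    my %touched_biblios;
    for my $item (@items) {
        # Per-item modification with the immediate reindex skipped
        # (hypothetical call shape, for illustration only)
        ModItemFromMarc( $item->{marc}, $item->{biblionumber}, $item->{itemnumber},
            { skip_record_index => 1 } );
        $touched_biblios{ $item->{biblionumber} } = 1;
    }

    # One indexing request per batch, sent only after the whole batch
    # (and its transaction) has succeeded
    $indexer->index_records( [ keys %touched_biblios ], 'specialUpdate', 'biblioserver' );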
Test plan:
General tests, under both search engines:
1 - Add a biblio and confirm it is searchable
2 - Edit the biblio and confirm changes are searchable
3 - Add an item, confirm it is searchable
4 - Delete an item, confirm it is not searchable
5 - Delete a biblio, confirm it is not searchable
6 - Add an authority and confirm it is searchable
7 - Delete an authority and confirm it is not searchable
Batch mod tests, under both search engines:
1 - Have a bib with several items, none marked 'not for loan'
2 - Do a staff search that returns this biblio
3 - Items show as available
4 - Click on title to go to details page
5 - Edit->Items in a batch
6 - Set the not for loan status for all items
7 - Repeat your search
8 - Items show as not for loan
9 - Test batch deleting items
a - Test with a list of items, not deleting bibs
    b - Test with a list of items, deleting bibs if no items remain, where every item is the only item on its biblio:
SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1)
    c - Test with a list of items, deleting bibs if no items remain, where only some items are the only item on their biblio:
SELECT MAX(barcode) FROM items GROUP BY biblionumber HAVING COUNT(barcode) IN (1,2)
10 - Confirm records are updated/deleted as appropriate
Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: Rename skip_modzebra_update to skip_record_index
Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: Fix copy paste error for parameter
Signed-off-by: Bob Bennhoff <bbennhoff@clicweb.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: (follow-up) Don't index malformed records
This is analogous to bug 26522: we should skip records that cannot be retrieved for indexing
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: (QA follow-up) Add shebang to Indexer.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: (QA follow-up) Rename biblionumber in ModZebra, index_records
ModZebra:
The name is very misleading: we can index authids here too.
And yes, it should not be in C4/Biblio either ;) A first step..
Adding the same change here in Koha/SearchEngine/Zebra/Indexer.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 25265: (QA follow-up) Check server type in Elasticsearch::index_records
Doing the same change as previously (renaming biblionumber), but fixing
the record fetch at the same time. If (theoretically) an authority is passed
without a record, it would have fetched a biblio record.
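A minimal sketch of the fetch-by-server-type idea (illustrative only, not the patch verbatim):

    # If no record was passed in, fetch it according to the server type
    # instead of always fetching a biblio
    my $record = $server eq 'biblioserver'
        ? C4::Biblio::GetMarcBiblio({ biblionumber => $record_id })
        : C4::AuthoritiesMarc::GetAuthority($record_id);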
Test plan:
You need Elasticsearch here.
Replace this line in AddAuthority:
$indexer->index_records( $authid, "specialUpdate", "authorityserver", $record );
by
$indexer->index_records( $authid, "specialUpdate", "authorityserver", undef );
Then update an authority record and check that you can search for the change.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>