With the MARC21 standard moving from the 440 tag to the 490, this tool
helps libraries make the move. It switches any information in 440 tags to
490 tags, and any information in 490 tags to 440 tags. That seemed like the
best approach to me.
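For illustration, the swap on a single field might look like this (the
series statement is hypothetical; indicators omitted for simplicity):
  Before: 440 $a Lecture notes in statistics ;$v 12
  After:  490 $a Lecture notes in statistics ;$v 12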
To test:
1) Locate some biblios with 440 or 490 tags filled.
2) Run `bin/migration_tools/switch_marc21_series_info.pl -c`
3) Observe that the information in the biblios has switched 4xx tags.
http://bugs.koha-community.org/show_bug.cgi?id=5608
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Works as described. No errors.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
See the script's documentation for more details.
New parameters are:
- authtypes
- filter
- insert
- update
- all
Signed-off-by: Pascale Nalon <pascale.nalon@gmail.com>
This patch has been live at Mines ParisTech since 2012-07-24.
Signing off
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
- Moved the sign-off from bugzilla to the commit message.
- All tests and QA script pass.
- Amended commit message to list new parameters.
- Verified this patch works on a UNIMARC installation.
- Verified normal import still works correctly on a MARC21
  installation.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Removed NoZebra vestiges. This comprises several code blocks that depend on the NoZebra syspref and NZ-related functions/methods.
C4::Biblio->
GetNoZebraIndexes
_DelBiblioNoZebra
_AddBiblioNoZebra
C4::Search->
NZgetRecords
NZanalyse
NZoperatorAND
NZoperatorOR
NZoperatorNOT
NZorder
C4::Installer->
set_indexing_engine
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
With the inclusion of this patch, all searches will (try to) use
QueryParser for handling queries for both the bibliographic and authority
databases if UseQueryParser is enabled. If QueryParser is unavailable,
UseQueryParser is disabled, or the search uses CCL indexes, the old
search code will be used.
To test:
1) Apply patch.
2) Run the unit test with `prove t/QueryParser.t`
3) Enable the UseQueryParser syspref.
4) Try searches that should return results in the following places:
* OPAC (simple search)
* OPAC (advanced search)
* OPAC (authorities)
* Staff client (header search)
* Staff client (advanced search)
* Staff client (cataloging search)
* Staff client (authorities)
* Staff client (importing a batch using a match point)
* Staff client (searching for an item for adding to a label)
* Staff client (acquisitions)
* Staff client (searching for a record to create a serial)
* ANYWHERE ELSE I HAVE FORGOTTEN
5) Disable the UseQueryParser syspref. Repeat at least some of the
searches you did above.
6) If all searches worked, sign off.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Elliott Davis <elliott@bywatersolions.com>
Searching still works as expected in various places.
The QueryParser syspref seemed to be enabled by default.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
<dcook> Then bulkauthimport.pl?
<jcamins> bulkauthimport should not be used ever.
<eythian> it probably should be deleted
<jcamins> It should be.
Signed-off-by: David Cook <dcook@prosentient.com.au>
I've poked around in bulkmarcimport.pl and it certainly seems to have the functionality that Mason (and Jared and Robin) mention.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Replace \r with \n for newline in output for bulkmarcimport.pl
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Due to a limitation of Zebra, the register must be cleared *before*
doing shadow indexing if you want to reset the indexes. In light of
that, it does not make sense to do shadow indexing at all when
rebuild_zebra.pl is run with the -r switch. This patch makes -r (reset)
imply -n (no shadow).
To test:
1) Run `rebuild_zebra.pl -b -r -v -v -v`
2) Note that the script never runs the merge phase
Without the patch I see log lines referring to the shadow cache (enabling shadow spec=/home/koha/koha-dev/var/lib/zebradb/biblios/shadow:20G).
With the patch I don't see anything in the logs about shadow. I do, however, see lines about merging. I think it could just be a misunderstanding of the logs.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Elliott Davis <elliott@bywatersolutions.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This adds the --framework option to bulkmarcimport, allowing a framework
code to be specified for the records being imported.
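A minimal invocation sketch (the file name here is hypothetical; other
options as you normally use them):
  misc/migration_tools/bulkmarcimport.pl -b -file records.mrc --framework FA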
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All tests pass, perlcritic fails before and after.
Tested
- imported records with -framework FA, FA framework is used
- imported records without -framework, default framework is used
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Previously we used the "delete" command in zebraidx, which fails when
you try to delete a record that doesn't exist in the index. By changing
to the "adelete" command, we can reduce the likelihood of a failed
delete causing ghost records. A symptom of this problem is the warning
message occasionally encountered when indexing from the zebraqueue,
"[warn] cannot delete record above (seems new)."
To test:
1) Add a recordDelete action for a record that does not exist to
zebraqueue in MySQL:
INSERT INTO zebraqueue (biblio_auth_number, operation, server) \
VALUES (999999999, 'recordDelete', 'biblioserver');
2) Run `rebuild_zebra.pl -b -z -v [-x]`.
3) Note that you do not get the message "[warn] cannot delete record
above (seems new)".
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Passed-QA-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
"indexing", not "indexation"; some minor grammatical changes
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch adds the Koha::Indexer::RecordNormalizer and
Koha::Indexer::MARC::RecordNormalizer::EmbedSeeFromHeadings packages
to enable the inclusion of alternate forms of headings in bibliographic
searches. When the new syspref IncludeSeeFromInSearches is turned on
(default is off) rebuild_zebra.pl will insert see from headings from
authority records into bibliographic records when indexing, so that a
search on an obsolete term will turn up relevant records.
To test:
1) Enable IncludeSeeFromInSearches
2) Add a heading that has an alternate form to a record (for example,
"Cooking" has the alternate form "Cookery," if you have authority
records from LC)
3) Index the zebraqueue (or reindex if you haven't indexed your system
yet)
4) Confirm that if you search for "Cookery" you get the record you
just modified
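For illustration, with an LC-style authority record (a sketch, not real data):
  150 $a Cooking
  450 $a Cookery
the see-from form "Cookery" is embedded in the indexed copy of a biblio
whose heading is "Cooking", which is why the search above matches.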
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 5 August 2012
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 11 September 2012
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Also checked:
- Verified database update works correctly
- Checked system preference and its description
- Checked staff/opac detail pages with feature on/off
- Checked staff/opac search facets
- Downloaded and tested records in various formats
- Tried different searches for 'see from' entries of authorities
- Ran all unit tests
No problems found.
Fix syntax errors preventing the scripts misc/translator/text-extract2.pl
and misc/cronjobs/thirdparty/TalkingTech_itiva_inbound.pl from compiling.
Remove misc/migration_tools/build6xx.pl entirely since it refers to
columns that no longer exist in the Koha database, and has seemingly
had broken encoding since Koha switched from CVS to git (or before!).
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Small script that checks whether each biblio record in the DB is properly indexed.
Use -h to learn more.
(MT #6389)
Signed-off-by: Robin Sheat <robin@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Complete rewrite of rebuild_zebra_sliced.zsh (renamed to .sh). Main
improvements are:
- both biblio and authority records are handled
- records are exported only once
It also adds a --skip-index option to rebuild_zebra.pl that permits
using rebuild_zebra.pl as an 'export only' script.
Description:
Index Koha records in chunks. This is useful when a record causes
errors and stops the indexing process. With this script, if indexing
of one chunk fails, the chunk is split into 2 (or 3) smaller chunks, and
indexing continues on those chunks.
rebuild_zebra.pl is called only once, to export the records.
Splitting and indexing are handled by this script (using yaz-marcdump and
zebraidx).
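With the new option, an 'export only' run for biblios looks like this
(a sketch):
  misc/migration_tools/rebuild_zebra.pl -b --skip-index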
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
In rebuild_zebra.pl, if we are in "unimarc" ("marcflavour" syspref), the sub fix_unimarc_100 is called and checks whether the length of 100$a is equal to 35.
If it is not, the sub inserts the localtime and more, so we lose the data on reindexing.
The standard length is 36.
I have just changed 35 to 36.
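A minimal sketch of the corrected check (illustrative only; the real sub
also rebuilds the field when the length is wrong):
  my $a = $record->subfield( 100, 'a' );
  if ( !defined $a or length($a) != 36 ) {
      # rebuild 100$a from localtime plus default coded values
  }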
Signed-off-by: Sophie Meynieux <sophie.meynieux@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
One consequence is that the -x and -a options are no longer
mutually exclusive.
Also, because of the way that the GRS-1 SGML filter works, if you're
indexing multiple documents, you can't wrap them in a single document
element, but the DOM filter *requires* such a wrapper. Consequently, two
new config settings in koha-conf.xml are added to indicate the
Zebra filter in use, so that the -x option of rebuild_zebra.pl
knows whether or not to wrap the exported records:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
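For illustration, a DOM-mode setup might carry these entries in
koha-conf.xml (their exact position in the file is an assumption):
  <bib_index_mode>dom</bib_index_mode>
  <auth_index_mode>dom</auth_index_mode>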
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Simple command-line client which can authorize itself to Koha,
get a MARC XML record based on the biblio number, and update the record.
This script can also be used as a module, using require "koha-svc.pl"
from other scripts which can implement MARC XML creation or parsing.
This is a follow-up version which now uses the Content-type: text/xml
header when using the POST method, to be in sync with the documentation at
http://wiki.koha-community.org/wiki/Koha_/svc/_HTTP_API
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This adds the -dedupbarcode option, which allows bulkmarcimport to erase
the barcode but keep the item for any items it finds with duplicate
barcodes.
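A hypothetical invocation (the file name and other options are placeholders):
  misc/migration_tools/bulkmarcimport.pl -b -file records.mrc -dedupbarcode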
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
New sql tables:
- oai_sets: contains the list of sets, described by a spec and a name
- oai_sets_descriptions: contains a list of descriptions for each set
- oai_sets_mappings: conditions on MARC fields that a biblio must match to
be in a set
- oai_sets_biblios: list of biblionumbers for each set
New admin page allowing sets to be configured:
- Creation, deletion, and modification of spec, name, and descriptions
- Definition of mappings which will be used for building OAI sets
Implements OAI Sets in opac/oai.pl:
- ListSets, ListIdentifiers, ListRecords, GetRecord
New script misc/migration_tools/build_oai_sets.pl:
- Retrieves MARCXML from all biblios and tests whether they belong to the
defined sets. The oai_sets_biblios table is then updated accordingly.
New system preference OAI-PMH:AutoUpdateSets. If on, sets are updated
automatically when a biblio is created or updated.
Use OPACBaseURL in oai_dc xslt
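To populate the new oai_sets_biblios table for existing records, the new
script can be run once by hand (a sketch; check its help for options):
  misc/migration_tools/build_oai_sets.pl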
This patch reimplements a feature that is on biblibre/master for Koha-community/master.
It adds three parameters:
* offset = the offset of the record. Say 1000 to start rebuilding at the 1000th record of your database
* length = how many records to export. Say 400 to export only 400 records
* where = add a where clause to rebuild only a given itemtype, or anything else you want to filter on
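For illustration, a combined invocation might look like this (the exact
option spelling is an assumption; check the script's help):
  rebuild_zebra.pl -b --offset 1000 --length 400 --where "itemtype='BOOK'"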
Another improvement resulting from the offset & length limit is rebuild_zebra_sliced.zsh,
which will be submitted in another patch.
rebuild_zebra_sliced will slice your whole database into small chunks and, if something goes wrong for a given slice, will slice the slice, and repeat, until it reaches a slice size of 1, showing which record in your database is broken.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Removed mention of -l option for limiting number of items exported, as requested
by QA manager. This can be re-added in a later patch.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Use :encoding(UTF-8) rather than :utf8 for stricter
encoding.
Marking output as ':utf8' only flags the data as utf8;
using :encoding(UTF-8) also checks that it is valid UTF-8.
See binmode in perlfunc for more details.
In accordance with the robustness principle, input
filehandles have not been changed, as code may make
the undocumented assumption that invalid utf-8 is present
in the input.
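A minimal sketch of the difference (hypothetical filehandle):
  binmode $fh, ':utf8';             # only flags the stream as utf8
  binmode $fh, ':encoding(UTF-8)';  # also validates bytes as UTF-8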
Fixes errors reported by t/00-testcritic.t.
Where feasible, some filehandles have been made lexical rather than
reusing global filehandle vars.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Currently, the -v option resets Zebra log output to the default system values.
This produces the amount of logging specified by the system defaults, which is
usually too low for debugging.
This change explicitly forces all Zebra log output, which creates much more
chatter, so it triggers only at verbosity level 2.
Test scenario:
1. pick koha site to reindex
2. use -v -v options to rebuild_zebra.pl to see additional output
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Verified help corrections and loglevel 2 output vs. loglevel 1 output. No issues found.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Sometimes zebra needs a tmp dir in order to work. This ensures that it
is created both by koha-create-dirs in the packages, and by
rebuild_zebra when it runs.
--
tested ok, signing off
Signed-off-by: Mason James <mtj@kohaaloha.com>
This patch allows items containing extended characters to be handled properly
and sends valid XML records to zebraidx.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Pref MergeAuthoritiesOnUpdate does not exist; should be dontmerge
(AuthoritiesMarc.pm).
Instead of the folder modified_authorities, a table is now introduced for this
purpose: need_merge_authorities. This eliminates several permissions and
security issues. This change applies to AuthoritiesMarc.pm and
merge_authority.pl.
POD lines added for ModAuthority. Deprecated parameter $merge removed.
Test this patch by applying the db revision first from the second patch.
August 4, 2011: Rebased.
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Thanks Marcel. It works as advertised. Both modes are functional
again:
- Immediate with dontmerge=0: After modifying an authority record, its
linked biblios are immediately modified. This isn't the case in 3.4.5.
- Delayed with dontmerge=1: After modifying an authority record, its
linked biblios are not modified. Instead, an entry is added to the new
need_merge_authority table, and the 'merge_authorities.pl -b' script
updates the biblios.
Comment: need_merge_authority, like zebraqueue, should be cleared
from time to time.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch fixes an issue whereby biblios with many items (often > 500) would index,
but not the biblionumber itself, resulting in search results with a) inaccurate item counts
and b) no biblionumber to use in the link to the details page. This is due to Net::Z3950::ZOOM not providing
a mechanism for specifying different connection attributes; the maximumRecordSize ZOOM connection attribute,
if not specified, defaults to 1MB, which is less than the size of a MARC record with many, many 952 fields. Since
it is unlikely we can fix Net::Z3950::ZOOM in a timely fashion, this patch aims to build a workaround on the Koha end.
This patch changes EmbedItemsInMarcBiblio to use append_fields instead of insert_ordered_fields,
so the 999$c will come before the item records. It's VERY unlikely we will encounter more than 1MB of biblio-level MARC
content, as this would break the ISO-2709 standard by a large factor.
To this end, it also moves the fix_biblio_ids portion of get_corrected_marc_record out of rebuild_zebra.pl,
and makes it a part of GetMarcBiblio (right before EmbedItemsInMarcBiblio, so the 952s still come last). fix_biblio_ids
is kept as a subroutine for the deletion portion of rebuild_zebra.pl, which still uses it.
It also uses the subroutine parameter in GetMarcBiblio to do the EmbedItemsInMarcBiblio action, rather than having
rebuild_zebra.pl perform it on the itemless record returned from GetMarcBiblio. Simpler and cleaner that way.
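After this patch, a caller that needs items embedded asks GetMarcBiblio
directly (a sketch; the second argument is the embed-items flag):
  use C4::Biblio qw( GetMarcBiblio );
  my $record = GetMarcBiblio( $biblionumber, 1 );  # items embedded, 999$c first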
To verify the bug:
1. Find a biblio with over 700 items (or enough that the resulting MARCXML is greater than 1MB)
2. Search for this biblio (in a search that would return multiple results, not just this title). You should get the title in
the results list
3. Attempt to click the link to this biblio's details page; the biblionumber should be blank, leading to a 404
To test the solution:
1. Apply patch
2. Modify the biblio slightly (click the 005 for example) and save,
OR manually add the biblio to zebraqueue for reindexing
3. After rebuild_zebra.pl -z -b -x runs, use the same search as above. The title should still appear.
4. Click the link, and find yourself on the biblio detail page as desired
Signed-off-by: D Ruth Bavousett <ruth@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Display links to parent biblios, show linked items in holdings, allow holds on
linked items. This uses MARC to maintain relationships.
Sponsored by the Mississippi Department of Archives and History and RapidRadio
Solution. Originally developed by Savitra Sirohi and Amit Gupta at OSSLabs, with
UNIMARC support added by Zeno Tajoli. Commits squashed and merge conflicts
resolved by Chris Cormack from Catalyst. Respect for NORMARC and some small
framework portability fixes made by Jared Camins-Esakov of C & P Bibliography
Services.
IMPORTANT NOTE: A bug in the 773 coding for MARC21 was corrected from the
original OSS Labs code. The 773s generated by the pre-release code did not have
the first indicator set to '0', which means that they were not supposed to
display. Going forward, the first indicator will be set correctly, but existing
records created with this code will no longer appear (they appeared before only
due to another bug). To correct this, you could globally (or, to make sure you
only modify records created with the Analytics tool, for records with 773$0)
change the first indicator of the 773 from blank to '0'.
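A sketch of such a batch fix using MARC::Record (illustrative; record
retrieval and saving are site-specific and omitted):
  foreach my $field ( $record->field('773') ) {
      next unless $field->subfield('0');   # only touch tool-created fields
      $field->update( ind1 => '0' ) if $field->indicator(1) eq ' ';
  }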
== Background ==
An analytic record for an item is a more detailed, monographic biblio for an
item attached to a serial record. This is often used for special issues of a
journal that are released as books on their own (assigned an ISBN, as well as an
ISSN/volume/issue). It is important for researchers to be able to search for
these items both as issues of the serial and as monographs. It is equally
important for the library not to have duplicate item records for the item in
question that it would have to keep synchronized.
== Establishing relationships ==
Analytical records are connected to items belonging to parent or host
bibliographic records. This can be accomplished by:
* From an analytical bibliographic record, linking to a host item by providing
the item barcode as input
* From a host item, by using the "analyze" option; this creates a new empty
bibliographic record with field 773 (MARC21) populated
* Running a new CLI script that establishes a relationship between the
analytical record and the host item identified by the barcode in the
analytical record's 773$o (MARC21)
== Connecting Records ==
The relationships are maintained in the MARC records; we have not used database
tables at all.
== MARC Representation ==
In MARC21/NORMARC we have used:
* 773$9 to store the Koha item number of the host item
* 773$0 to store the Koha biblio number of the host bibliographic record
The above fields are used to display the relationships in various screens in the
OPAC and the staff interface. Additionally, when populating field 773 with the host
item's details, we have used the following MARC 21 mapping:
* 'a' <= 100/110/111 $a (author main)
* 'b' <= 250$a (edition)
* 'd' <= 260$a, 260$b, 260$c (place, publisher, year)
* 'o' <= barcode
* 't' <= 245$a (title)
* 'w' <= (003)001 --> if no 001 is available, we can populate biblionumber
* 'x' <= 022$a (issn)
* 'z' <= 020$a (isbn)
In UNIMARC, this code uses:
* 461$9 to store the Koha item number of the host item
* 461$0 to store the Koha biblio number of the host bibliographic record
When populating field 461 in UNIMARC, the following mapping is used:
* 't' <= 200$a (title)
== Treatment of Holds ==
A key requirement was to allow holds to be placed on host items from the
analytical record. We have accomplished this by allowing holds on specific
copies only. Biblio level holds are not allowed. This ensures that holds are
placed on specific items that are relevant to the analytical record.
== Deleting host items with linked analytical records ==
As we have not used database tables to maintain relationships, we had to use
search to find out if any linked analytical records are present. If one or more
analytical records are present, we do not allow deletion of items. This is similar to
what we see when we try to delete authority records.
== Importing analytical records ==
Analytical records can be imported using bulkmarcimport or the GUI tools. The
new CLI script can be executed after the import to establish relationships with
host items. The script will establish relationships using the host item's
barcode; the barcode must be present in 773$o of the analytical record.
== What if there are two or more copies of the host item? ==
The current design will require that there be two host (773) fields, one for
each copy.
== What if there is no barcode available for the host item? ==
It is still possible to establish a relationship, by populating 773$9 with the
host's item number. However, the CLI script uses the barcode in 773$o to establish
relationships, so it won't work where barcodes are unavailable. Likewise, the
option of establishing a relationship from an analytical record to a host item
by providing the barcode as input will not be available.
Commits that added the following features were squashed by Chris Cormack (this
is not a list of every commit):
* Display links to host records from biblio detail screens
* Support for UNIMARC, respecting the system preference 'marcflavor'
* Support holds from the OPAC
* Ability to link to items belonging to host records from an analytical record
* Display items belonging to host records in the moredetail page
* Ability to edit items belonging to host records, also ability to delink from
them
* Move get host items code into a C4 routine, also calling the new routine in
related perl scripts
* Move host field population to a C4 routine, all changes in pl files to call
new routine
* Allow only specific copy holds for analytical records plus changes to use new
C4 routines
* Support for holds on items linked via host records
* Storing bibnumber and itemnumber in subfields 0 and 9, plus other mapping
changes
* New command line script that establishes relationships between analytical
records and host items and bibs. The script looks for the host field (MARC21 773)
in records and, based on the barcode in subfield 'o', populates the host bibnumber in
subfield '0' and the host itemnumber in subfield '9'. The script can be run after
an import of analytical records; it can also be run from the crontab to maintain
the relationships
* Ability to create analytical records from items, to view linked analytics, and
prevent deletion of items that have linked analytics
* New template for catalogue/detail.pl (NOTE: not a new template file, just a
new way of displaying analytics), template displays linked analytics and
allows creation of analytical records
* New zebra index for item number in host fields. This index will be used to
display links to analytical records from host records
* Display title of host record instead of the phrase host record
* Using detail.tmpl for analytics tab instead of a new template file
* Improved qualification info preparation in Prephostmarcfield
* Check for linked analytics before deleting item
* Display link to host record and more meaningful anchor text for edit item link
* Analytical record: Unimarc index in record.abs and help in
create_analytical_rel.pl
* Adding a sys pref that controls display of options to create analytical
relationships
* Add host entry in XSLT stylesheet in staff item detail
* Added host record support to OPAC detail XSLT
* Adding 773$0 and 773$9 to all frameworks
* Adding 773 subfields 0 and 9 to default marc framework via updatedatabase.pl
* Display 'create analytics' and 'used in' links in catalog detail
* Fixed problem where analytical records not showing in OPAC search results
because GetMarcBiblio now needs a flag to add item records
* Fixed problem where analytics count was set to 1 for all records, not just
those with analytics
* Fixed catalogue detail page not to show analytics counts if count is 0
Conflicts:
installer/data/mysql/updatedatabase.pl
koha-tmpl/intranet-tmpl/prog/en/modules/cataloguing/addbiblio.tt
kohaversion.pl
Co-author: Savitra Sirohi <savitra.sirohi@osslabs.biz>
Co-author: Zeno Tajoli <tajoli@cilea.it>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Ian Walls <ian.walls@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This both adds a bit of a failsafe to get_raw_biblio, and prevents
records that have been deleted from being updated by the same instance
of rebuild_zebra.
Minor amendment to remove duplication of 6433
Signed-off-by: MJ Ray <mjr@phonecoop.coop>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Fixes bug where a bib record imported by bulkmarcimport.pl
could become unindexable by ensuring that ModBiblioMarc()
is always called by bulkmarcimport.pl to finalize saving the
bib record (as it was initially created by AddBiblio with the
defer_marc_save option).
Also introduces a utility routine, C4::Biblio::_strip_item_fields.
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Adds a new routine, C4::Biblio::EmbedItemsInMarcBiblio, to
embed the items in the bib record when necessary:
* cataloging/additem.pl
* rebuild_zebra.pl
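A sketch of the new call (the argument list here is an assumption; treat as
illustrative):
  C4::Biblio::EmbedItemsInMarcBiblio( $record, $biblionumber );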
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Claire Hernandez <claire.hernandez@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This is a squash of four patches by Henri-Damien Laurent
starting work on removing the copy of item record information
in the 9XX field of bibliographic records. The reason
for doing this is primarily to improve performance, in particular,
the expense of having to add/modify the bib record whenever an
item changes. Now, whenever an item changes, the bib record is
put in the queue to be reindexed; when the bib is indexed, the 9XX
fields are inserted into the version of the bib that Zebra indexes.
Since rebuild_zebra.pl runs in a separate process, the processing of the
bib record will not delay (e.g.) circulation.
As part of upgrading to 3.4, the following batch script should be run:
misc/maintenance/remove_items_from_biblioitems.pl --run
This should be followed by a complete reindexing of the bib records, e.g.,
misc/migration_tools/rebuild_zebra.pl -b -r
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Signed-off-by: Claire Hernandez <claire.hernandez@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
The import script shouldn't remove information present in incoming biblio
records. With this patch, by default, ISBNs are no longer cleared.
[2011.04.12] Rebased on HEAD
DOCUMENTATION: There is a new parameter --isbn|--noisbn
Signed-off-by: Colin Campbell <colin.campbell@ptfs-europe.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Remove some unnecessary checks where checking for an error is
sufficient. Make the order in some cases more logical.
Should remove some possibilities of runtime warning noise.
Although some calls belong to the 'nothing could
ever go wrong' school, some warnings have been added.
Signed-off-by: Christophe Croullebois <christophe.croullebois@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Reimplements support for -r, as well as for -reset
Signed-off-by: D Ruth Bavousett <ruth@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
If the zebra server directories don't exist, zebra will spit the dummy.
This makes rebuild_zebra.pl smart enough to create them if they're not
there. If that fails, it'll scream loudly so you know zebra isn't
reindexing.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This prevents it from leaving files lying around in /tmp.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>