Add an option for marcstd to the opac-export.pl and catalogue/export.pl
scripts. This new format removes all 9XX, X9X, XX9 fields and subfield $9
(with the exception of 490 in flavours of MARC other than UNIMARC). The work is
done in C4::Record::marc2marc.
This patch adds the new export option 'marcstd', for exporting MARC
records without 9xx, x9x and xx9 fields and subfields, to the staff
detail page.
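The stripping rule can be pictured with a short MARC::Record sketch. This is
illustrative only, not the actual C4::Record::marc2marc code, and whether the
exempted 490 also keeps its subfield $9 is an assumption here:

    use MARC::Record;

    sub strip_nonstandard_fields {
        my ( $record, $flavour ) = @_;    # $flavour: e.g. 'MARC21' or 'UNIMARC'
        for my $field ( $record->fields() ) {
            my $tag = $field->tag();
            # keep 490 untouched in flavours of MARC other than UNIMARC
            next if $tag eq '490' && $flavour ne 'UNIMARC';
            if ( $tag =~ /9/ ) {
                # 9XX, X9X and XX9 fields are dropped entirely
                $record->delete_field($field);
            }
            elsif ( !$field->is_control_field() ) {
                # remaining data fields lose their subfield $9
                $field->delete_subfield( code => '9' );
            }
        }
        return $record;
    }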
Testing plan:
1. Export a record in "MARC (Unicode/UTF-8)" format as a control
2. In the OPAC, run the following jQuery to add the marcstd option to the UI:
> $("#export #format").append("<option value='marcstd'>MARC (no 9xx)</option>");
3. Export the same record in "MARC (no 9xx)" format
4. Compare the two, noting that every subfield $9 and every field with a 9 in
its tag (other than 490 in flavours of MARC other than UNIMARC) has been removed
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Works as advertised now.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This patch squashes both the original patch and Katrin's follow-up adding
marcstd as an export option on the staff client.
Feb 13, 2012 (marcel): Amended this patch to resolve the two definitions of $error in the catalogue/export script.
- The following export pages used to embed items when exporting; this was
  no longer the case, so they were fixed:
  Intranet:
  - basket/downloadcart.pl
  - virtualshelves/downloadshelf.pl
  - catalogue/export.pl
  OPAC:
  - opac/opac-downloadcart.pl
  - opac/opac-downloadshelf.pl
  - opac/opac-export.pl
- Notes:
  - GetMarcBiblio used to embed item data; this was no longer the case, so an
    optional parameter was added to choose whether items should be embedded or
    not (see the sketch after these notes). This way, previous work on this bug
    is not broken, and it is a pretty useful feature, imho.
  - An optional parameter has been added to SetUTF8Flag so that NFD can be used
    during normalization. This was required to make Unicode/UTF-8 export work
    again.
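As a hedged illustration of the two notes above (the boolean flags follow the
description; exact parameter names in C4::Biblio and C4::Charset may differ):

    use C4::Biblio  qw( GetMarcBiblio );
    use C4::Charset qw( SetUTF8Flag );

    my $biblionumber = 42;                        # example value
    # second argument chooses whether item data is embedded
    my $record = GetMarcBiblio( $biblionumber, 1 );
    # second argument asks for NFD during normalization
    SetUTF8Flag( $record, 1 );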
Signed-off-by: Claire Hernandez <claire.hernandez@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
html2marc and html2marcxml are not used, and html2marcxml
is the last user of the dead syspref TemplateEncoding
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
On a basket download or CSV export, if Koha cannot retrieve a biblio, the page failed with a 500 error.
This patch fixes that behaviour by skipping the faulty record, so that the user is still presented with the biblios that are not causing trouble.
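A minimal sketch of that skip-on-failure pattern (loop and variable names are
illustrative, not the actual patch):

    for my $biblionumber (@biblionumbers) {
        my $record = GetMarcBiblio($biblionumber);
        # skip the faulty record instead of dying with a 500 error
        next unless $record;
        # ... append $record to the download/CSV output ...
    }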
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Should fix any remaining warnings with 'podchecker'
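For reference, the check can be run against a single module, e.g.:

    podchecker C4/Record.pm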
Signed-off-by: Andrew Elwell <Andrew.Elwell@gmail.com>
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Adds the ability to customize CSV exports through the use of a YAML file.
The following customizations are available :
- Preprocessing
- Postprocessing
- Field-by-field processing
The YAML file should be stored in the tools/csv-profiles/ directory and
named after the id of the CSV profile you want to customize.
An example file is provided in that directory.
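As a hedged sketch of the lookup convention described above (the profile id,
file extension and hash keys here are assumptions; the hook semantics are
defined by the example file shipped in tools/csv-profiles/):

    use YAML qw( LoadFile );

    my $csv_profile_id = 3;    # hypothetical profile id
    my $path = "tools/csv-profiles/$csv_profile_id.yaml";
    # load the customization (pre/post/field-by-field hooks) if a file exists
    my $custom = -e $path ? LoadFile($path) : undef;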
(cherry picked from commit 76655b5b94)
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
Numbers in Perl with leading zeros are interpreted as octal.
Ensure that comparisons are done using string operators
or, where appropriate, use the relevant MARC::Field method.
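The pitfall in one self-contained snippet (the tag value is illustrative):

    use strict;
    use warnings;

    my $tag = '020';                     # as returned by $field->tag()
    print "octal\n"  if 020 == 16;       # the literal 020 is octal, i.e. 16
    print "match\n"  if $tag eq '020';   # correct: compare tags as strings
    # wrong: "$tag == 020" compares 20 with 16 and never matches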
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
This module represents a major leap forward in Koha's support of standard
record formats (ISO-2709 (MARC), MARCXML, Dublin Core, MODS, etc). It
provides a standard API for record management as well as an error-handling
mechanism so that the API will return proper error strings to the calling
program. It's only partially implemented currently, but the API returns
proper error strings if a feature isn't implemented.
There is also a testing suite that you can use to check your system's
capabilities to handle record and encoding conversions. Commit coming
soon.
I'm going to work on Unicode support next ...
This commit adds a module for manipulating records. It can be used to convert
from one record format to another, build formats from HTML (such as in
addbiblio and additem), convert from one encoding to another, etc.
It's the first module that provides access to the new Koha 3.0 API that
I've been working on (stay tuned for a mail to koha-devel explaining it).
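As a hedged sketch of the calling convention described here (assuming the
conversion helpers return an error string alongside the converted record, as
the error-handling mechanism above suggests; exact signatures may differ):

    use C4::Record qw( marc2marcxml );

    # $record is a MARC::Record object; encoding and flavour are examples
    my ( $error, $marcxml ) = marc2marcxml( $record, 'UTF-8', 'MARC21' );
    die "conversion failed: $error" if $error;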