With the removal of admin/finesrules.pl and admin/issuingrules.pl,
the functions str_to_base64() and base64_to_str() in C4::Koha
are no longer used.
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
In any MARC record display in the OPAC or staff client
that displays the MARC tag numbers, the indicators are
now displayed as well, following the tag number. If an
indicator is a blank, it is displayed as '#'.
Add a function to C4::Koha, display_marc_indicators(), to
generate this display form of the indicators.
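A hedged sketch of what such a helper might look like; the exact
signature in C4::Koha is an assumption (here it takes a MARC::Field):

    use MARC::Field;

    # Return the two indicators of a data field for display, with
    # blanks shown as '#'.  Control fields (tags below 010) have no
    # indicators, so an empty string is returned for them.
    sub display_marc_indicators {
        my $field = shift;
        my $indicators = '';
        if ( $field->tag() >= 10 ) {
            $indicators = $field->indicator(1) . $field->indicator(2);
            $indicators =~ s/ /#/g;
        }
        return $indicators;
    }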
Refactoring note: the four scripts changed in this commit
have a lot of duplicate code that could be merged into
a MARC displayer class.
Documentation notes: screenshots of tagged MARC record
displays should be updated.
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
I'm extracting some of the icon manipulation logic so that I can get to it from the authorized values pages.
There should be no functionality or documentation changes with this commit.
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Because of the way the older fines and issuing rule editors generate
the HTML form, if a branch code, patron category code, or item type
code happened to contain a '-' or '.', the form would not be parsed
properly. This imposed an implicit (rather than explicit) limit on
the characters allowed in those codes.
This fix removes this limitation by Base64-encoding the codes
when constructing the names for the <input> elements.
Two functions are added to C4::Koha:
str_to_base64() - encode a UTF-8 string as Base64
base64_to_str() - decode Base64 back to a UTF-8 string
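A minimal sketch of these two helpers, assuming MIME::Base64 and
Encode; the actual implementation may differ (for instance, it may
substitute Base64 characters that are awkward in element names):

    use Encode qw(encode decode);
    use MIME::Base64 qw(encode_base64 decode_base64);

    # Encode a character string as Base64 via its UTF-8 bytes; an
    # empty line terminator keeps newlines out of the result.
    sub str_to_base64 {
        my $str = shift;
        return encode_base64( encode( 'UTF-8', $str ), '' );
    }

    # The reverse: Base64 back to a character string.
    sub base64_to_str {
        my $base64 = shift;
        return decode( 'UTF-8', decode_base64($base64) );
    }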
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
* IsStringUTF8ish - determine whether a scalar contains a string in UTF-8
* MarcToUTF8Record - convert a MARC blob or MARC::Record to UTF-8
* SetMarcUnicodeFlag - set the appropriate MARC21 or UNIMARC field to
indicate that the record is in UTF-8.
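As an illustration of the MARC21 half of SetMarcUnicodeFlag only (the
UNIMARC case lives in field 100 $a and is omitted), a record is flagged
as UTF-8 by putting 'a' in Leader position 09; the helper name below is
hypothetical:

    use MARC::Record;

    # Sketch: mark a MARC21 MARC::Record as UCS/Unicode by setting
    # Leader/09 to 'a'.
    sub _set_marc21_utf8_flag {
        my $record = shift;
        my $leader = $record->leader();
        substr( $leader, 9, 1 ) = 'a';
        $record->leader($leader);
        return $record;
    }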
Design points of this module include:
* No dependencies on other C4 modules, making it easier to add
more test cases
* All character conversion code in one place
* Single entry point for doing a character conversion on a
MARC record
* Capture of errors and warnings produced by Text::Iconv
and MARC::Charset
* Start of support for guessing the source character set of
a MARC record.
Several functions were moved from other scripts
or modules to C4::Charset:
* C4::Koha::FixEncoding (expanded and renamed to
MarcToUTF8Record)
* C4::Koha::char_decode5426
* fMARC8ToUTF8 from bulkmarcimport.pl (renamed
_marc_marc8_to_utf8)
Several batch jobs were adjusted to use MarcToUTF8Record instead of
FixEncoding.
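Illustratively, a batch job's call to the new single entry point might
look like the following; the list of return values is an assumption
based on the description above:

    use C4::Charset;

    # $marc_blob holds the raw record (or MARC::Record) the batch job
    # just read; errors and warnings captured from Text::Iconv and
    # MARC::Charset come back alongside the converted record.
    my ( $marc_record, $converted_from, @errors ) =
        MarcToUTF8Record( $marc_blob, 'MARC21' );
    warn $_ foreach @errors;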
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
The encoding now defaults to MARC8.
Encoding is now supported for the USMARC and UNIMARC flavours.
Adds the Encoding field to updatedatabase.pl.
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Adds encoding for Z39.50 server information.
Uses Text::Iconv for conversion (ISO 6937, ISO 5428, and ISO 5427).
For ISO 5426 (ANSEL or MARC-8), adds a new char_decode5426 based on the marc4j tool.
Not tested against LOC or any USMARC Z39.50 source, but tested OK against BNF and SUDOC.
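For reference, a Text::Iconv conversion of this kind looks roughly as
follows; the exact encoding name accepted by iconv may vary by platform:

    use Text::Iconv;

    # $raw_iso6937 holds the incoming bytes.  convert() returns undef
    # on failure, so the caller should check the result.
    my $converter = Text::Iconv->new( 'ISO6937', 'UTF-8' );
    my $utf8      = $converter->convert($raw_iso6937);
    warn "conversion to UTF-8 failed" unless defined $utf8;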
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
This patch is meant to guarantee that a bulkedit
does not try to edit an item tag embedded in a MARC
biblio without updating the items feature. It is
not a comprehensive fix of the bulkedit feature, which
currently does not appear to be functional and
needs some thought:
* The general search results page is probably not the
best place for this feature; it should
probably live in the tools area.
* A bulk edit of something like items is desirable,
but needs to be designed so that it respects
business logic for circulation and acquisitions.
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Those subs were no longer useful; the template did not use them.
Please do not hardcode strings in .pl and .pm files; we cannot translate them.
Signed-off-by: Chris Cormack <crc@liblime.com>
Thus, the title sorting was not working. This commit fixes the problem.
LIBLIME: please verify that my analysis and fix are correct; this part was mostly written by Joshua.
Uses a completely new Zebra indexing scheme.
Zebra now stores XML comprising a Koha meta record. Explanatory notes will be posted on koha-devel.
Fixes UTF-8 problems.
Fixes a bug with authorities.
Major SQL database changes.
Separates bibliographic and holdings records. The biblioitems table is deprecated.
Etc., etc.
Wait for the explanatory document on koha-devel.