In any MARC record display in the OPAC or staff client
that shows the MARC tag numbers, the indicators are now
displayed as well, following the tag number. A blank
indicator is displayed as '#'.
Add a function to C4::Koha, display_marc_indicators(), to
generate this display form of the indicators.
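A minimal sketch of what such a helper could look like, assuming it
takes a MARC::Field object; the real C4::Koha implementation may
differ:

    use MARC::Field;

    # Sketch only: return the two indicators for display, with blanks
    # shown as '#'.  Control fields (tags below 010) have no indicators.
    sub display_marc_indicators {
        my $field = shift;    # assumed to be a MARC::Field object
        my $indicators = '';
        if ( $field->tag() >= 10 ) {
            $indicators = $field->indicator(1) . $field->indicator(2);
            $indicators =~ s/ /#/g;
        }
        return $indicators;
    }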
Refactoring note: the four scripts changed in this commit
have a lot of duplicate code that could be merged into
a MARC displayer class.
Documentation notes: screenshots of tagged MARC record
displays should be updated.
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
I'm extracting some of the icon manipulation logic so that I can get to it from the authorized values pages.
There should be no functionality or documentation changes with this commit.
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Because of the way the older fine and issuing
rule editors generate the HTML form, a branch code,
patron category code, or item type code containing a
'-' or '.' would keep the form from being parsed properly,
which placed an implicit (rather than explicit) limit on
the characters allowed in those codes.
This fix removes this limitation by Base64-encoding the codes
when constructing the names for the <input> elements.
Two functions are added to C4::Koha:
str_to_base64() - converts a UTF-8 string to Base64
base64_to_str() - converts Base64 back to a UTF-8 string
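A minimal sketch of the two helpers, assuming MIME::Base64 and Encode;
the real C4::Koha code may differ in detail:

    use Encode qw(encode decode);
    use MIME::Base64 qw(encode_base64 decode_base64);

    # Sketch only: encode a UTF-8 Perl string as Base64 with no line
    # breaks, so the result is safe to embed in an <input> name.
    sub str_to_base64 {
        my $str = shift;
        return encode_base64( encode( 'UTF-8', $str ), '' );
    }

    # Sketch only: reverse the conversion above.
    sub base64_to_str {
        my $base64 = shift;
        return decode( 'UTF-8', decode_base64($base64) );
    }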
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
* IsStringUTF8ish - determine if a scalar contains a string in UTF-8
  (sketched below)
* MarcToUTF8Record - convert a MARC blob or MARC::Record to UTF-8
* SetMarcUnicodeFlag - set the appropriate MARC21 or UNIMARC field to
indicate that the record is in UTF-8.
Design points of this module include:
* No dependencies on other C4 modules, making it easier to add
more test cases
* All character conversion code in one place
* Single entry point for doing a character conversion on a
MARC record
* Capture of errors and warnings produced by Text::Iconv
and MARC::Charset
* Start of support for guessing the source character set of
a MARC record.
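For illustration, a minimal sketch of the kind of check IsStringUTF8ish
could perform, using only core Encode; the real C4::Charset
implementation may use different heuristics:

    use Encode ();

    # Sketch only: treat the scalar as UTF-8-ish if a strict UTF-8
    # decode succeeds without croaking.
    sub IsStringUTF8ish {
        my $str = shift;
        eval { Encode::decode( 'UTF-8', $str, Encode::FB_CROAK ) };
        return $@ ? 0 : 1;
    }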
Several functions were moved from other scripts
or modules to C4::Charset:
* C4::Koha->FixEncoding (expanded and renamed
MarcToUTF8Record)
* C4::Koha->char_decode5426
* fMARC8ToUTF8 from bulkmarcimport.pl (renamed
_marc_marc8_to_utf8)
Several batch jobs were adjusted to use MarcToUTF8Record instead of
FixEncoding.
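A hedged sketch of what the new call site in a batch job might look
like; the argument list and return values of MarcToUTF8Record shown
here are assumptions, not the documented signature, and records.mrc is
only a placeholder file name:

    use MARC::Batch;
    use C4::Charset;

    # Sketch only: normalise each incoming record to UTF-8 before
    # further processing.
    my $batch = MARC::Batch->new( 'USMARC', 'records.mrc' );
    while ( my $record = $batch->next() ) {
        my ( $utf8_record, $converted_from, $errors ) =
            MarcToUTF8Record( $record, 'MARC21' );
        warn $_ for @$errors;
        # ... continue with $utf8_record ...
    }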
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Encoding now defaults to MARC8.
Encoding is now supported for the USMARC and UNIMARC flavours.
Adding Encoding field to updatedatabase.pl
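For illustration, the kind of statement updatedatabase.pl runs for a
change like this; the table and column names here (z3950servers,
encoding) and the connection details are hypothetical:

    use DBI;

    # Sketch only: add the new Encoding column.
    my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'kohaadmin', 'password' );
    $dbh->do(q{
        ALTER TABLE z3950servers
            ADD COLUMN encoding TEXT DEFAULT NULL
    });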
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Adding encoding for z3950 server information.
Uses Text::Iconv for conversion (ISO6937, ISO_5428, and ISO5427).
For ISO 5426 (ANSEL or MARC-8), a new char_decode5426 based on the marc4j tool.
Not tested against LOC or any USMARC Z39.50 source, but tested OK against BNF and SUDOC.
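A hedged usage sketch of the Text::Iconv path described above; the
converter names follow the commit text and the sample input is only a
placeholder:

    use strict;
    use warnings;
    use Text::Iconv;

    # Sketch only: convert one 8-bit field value from a Z39.50 target
    # to UTF-8.
    my $raw       = "placeholder 8-bit data from the target";
    my $converter = Text::Iconv->new( 'ISO6937', 'UTF-8' );
    my $utf8      = $converter->convert($raw);   # undef on failure
    warn "conversion failed" unless defined $utf8;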
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
This patch is meant to guarantee that a bulkedit
does not try to edit an item tag embedded in a MARC
biblio without updating the items feature. It is
not a comprehensive fix of the bulkedit feature, which
currently does not appear to be functional and
needs some thought:
* The general search results page is probably not the
best place to put this feature -- it should
probably be in tools.
* A bulk edit of something like items is desirable,
but needs to be designed so that it respects
business logic for circulation and acquisitions.
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Those subs were no longer useful; the template didn't use them.
Please don't hardcode strings in .pl & .pm files; we can't translate them.
Signed-off-by: Chris Cormack <crc@liblime.com>
thus, the title sorting was not working. This commit fixes the problem.
LIBLIME: please verify that the analysis and the fix are right; this is
a part mostly written by Joshua.
Uses a completely new ZEBRA indexing.
ZEBRA is now XML and comprises a KOHA meta record. Explanatory notes will be on koha-devel
Fixes UTF8 problems
Fixes a bug with authorities
Major SQL database changes
Separate bibliographic and holdings records. The biblioitems table is deprecated
etc. etc.
Wait for explanatory document on koha-devel
the intranet. The development was done on branch 2.2 by Arnaud Laurin from
Ouest Provence and integrated into HEAD by Pierrick Le Gall from INEO media
system.
New page reserve/request.pl taking a biblionumber as entry point.
New functions:
- C4::Biblio::get_iteminfos_of retrieves item information for a list of
itemnumbers
- C4::Biblio::get_biblioiteminfos_of retrieves biblioitem information for a
list of biblioitemnumbers
- C4::Biblio::get_itemnumbers_of retrieves the list of itemnumbers related to
each biblionumber given in argument.
- C4::Circulation::Circ2::get_return_date_of retrieves the return date for a
list of itemnumbers.
- C4::Koha::get_itemtypeinfos_of retrieves the information related to a
list of itemtypes.
- C4::Koha::get_branchinfos_of retrieves the information related to a list
of branchcodes.
- C4::Koha::get_notforloan_label_of retrieves the list of status/label pairs
for the authorised_values related to notforloan.
- C4::Koha::get_infos_of is the generic function used by all the get_*infos_of
helpers (a sketch follows at the end of this entry).
- C4::Reserves2::GetNumberReservesFromBorrower
- C4::Reserves2::GetFirstReserveDateFromItem
Modified functions:
- C4::Reserves2::FindReserves was simplified to be more readable.
The reservation page is reserve/request.pl; it is not linked from anywhere
yet, as long as Zebra is not stable on HEAD.
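A minimal sketch of the shape the generic helper and one specialised
wrapper could take; the real signatures in C4::Koha may differ, and the
wrapper shown here is only illustrative:

    use DBI;

    # Sketch only: run a query and return a hashref of rows keyed on
    # one column.
    sub get_infos_of {
        my ( $dbh, $query, $key_name, @bind ) = @_;
        my $sth = $dbh->prepare($query);
        $sth->execute(@bind);
        my %infos_of;
        while ( my $row = $sth->fetchrow_hashref() ) {
            $infos_of{ $row->{$key_name} } = $row;
        }
        return \%infos_of;
    }

    # Hypothetical thin wrapper in the style of get_branchinfos_of.
    sub get_branchinfos_of {
        my ( $dbh, @branchcodes ) = @_;
        my $placeholders = join ',', ('?') x @branchcodes;
        return get_infos_of(
            $dbh,
            "SELECT * FROM branches WHERE branchcode IN ($placeholders)",
            'branchcode',
            @branchcodes,
        );
    }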