when logging was ON, this resulted in an internal server error, hence the discovery of this bug.
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
the bug occurs only if we have a malformed biblio, so it should "never" happen, but I've seen it once ;-)
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
C4/Biblio.pm - removed the following functions:
_biblioitem_cn_sort
_items_cn_sort
TODO: add a new call number filing routine to emulate logic of
_biblioitem_cn_sort for compatibility purposes
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
Most of this commit by Joshua Ferraro.
updatedatabase changes by Galen Charlton.
Major changes:
This commit includes a lot of fairly major changes
to Koha's Biblio handling, the largest being the addition
and deletion of several columns in the biblioitems and
items tables, as well as cleanup of the deletedbiblioitems
and deleteditems tables. Some of the changes are simple
cleanup, but most have to do with improvements to
the storage of call numbers in Koha.
Also, I had to clean up the _koha_* routines quite a
lot to make them work -- a lot of data was simply
being lost because columns weren't being updated.
I'm still not completely convinced that the items
table is being treated as authoritative for item
data; investigating further.
DB Changes (updated in kohastructure.sql and in
updatedatabases):
ADDED:
biblioitems.cn_source ( auth value, CN_SOURCE, stores the source of the
call number: DDC, LCC, NLM, etc.)
biblioitems.cn_class ( plugin, marc21_callnumber.pl, helps fill in
the rest of the biblio-level fields)
biblioitems.cn_item
biblioitems.cn_suffix
biblioitems.cn_sort ( for zebra sorting, stored as a decimal number)
biblioitems.totalissues ( for counting the total times issued )
items.cn_source ( auth value, CN_SOURCE, stores DDC, LCC, NLM, etc.)
items.itemcallnumber ( plugin, marc21_itemcallnumber.pl, helps fill in
the itemcallnumber based on the record data )
items.cn_sort ( for zebra sorting, stored as a decimal number)
items.ccode ( auth value, CCODE, stores the Collection Code
of the item, can be used as call number prefix
by some libraries )
items.uri
items.materials
items.damaged
DELETED:
items.itype
items.cutterextra
biblioitems.classification
biblioitems.subclass
biblioitems.dewey
biblioitems.lcsort
biblioitems.lccn
biblioitems.ccode
DB version now 3.00.00.009.
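For reference, the corresponding updatedatabase step boils down to a handful of ALTER TABLE statements; the sketch below is illustrative only (the column types are guesses, the authoritative definitions are in kohastructure.sql and updatedatabase):

  use C4::Context;

  my $dbh = C4::Context->dbh;
  # add the new call number columns (types shown here are placeholders)
  $dbh->do("ALTER TABLE biblioitems ADD COLUMN cn_source varchar(10) default NULL");
  $dbh->do("ALTER TABLE biblioitems ADD COLUMN cn_sort   varchar(30) default NULL");
  $dbh->do("ALTER TABLE items       ADD COLUMN cn_source varchar(10) default NULL");
  $dbh->do("ALTER TABLE items       ADD COLUMN cn_sort   varchar(30) default NULL");
  # drop the columns listed under DELETED
  $dbh->do("ALTER TABLE biblioitems DROP COLUMN dewey");
  $dbh->do("ALTER TABLE items       DROP COLUMN cutterextra");
  # finally, bump the version system preference to 3.00.00.009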
Minor changes:
* Drop revision history from C4/Biblio.pm
* GetMarcAuthors now returns additional authors (7XX), not
main authors (1XX)
* Debug warnings in C4/Search.pm commented out
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
- the availability status was not shown on the result list. This patch reintroduces it
- notforloan as itemtype was not properly managed : an itemtype that was notforloan resulted in nothing in the detail view. Now, the user can't place a reserve anymore, and the status is correctly displayed
the fix is for OPAC as well as staff
(owen, pls, validate cat-toolbar.inc & catalogue/detail.tmpl)
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
It will only delete biblios that have no items attached.
You must delete the items first, then delete the biblio
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
The top issues list was based on a timestamp field that was updated every time a biblio was modified.
This commit adds a datecreated field that is filled only when a biblio is ADDED.
It is used by opac-topissues & can be useful in other places, I'm sure
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
When updating a biblio, all repeated fields had the same input name.
Thus, retrieving them through cgi->param resulted in a single line,
and all subfields were merged into a single MARC field.
Adding a random() part to the name solves the problem
+ removed some warn lines
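Roughly, the fix amounts to something like this (the name pattern below is illustrative, not the exact template variable used in the editor):

  use strict;
  use warnings;
  use CGI;

  my $cgi = CGI->new;
  my ($tag, $subfield) = ('700', 'a');

  # each occurrence of a repeated field gets its own input name, e.g.
  # tag_700_subfield_a_123456, so repeats are no longer collapsed into one param
  my $input_name = sprintf "tag_%s_subfield_%s_%06d", $tag, $subfield, int(rand(1_000_000));

  # on save, gather every parameter that looks like a MARC tag/subfield input
  my @marc_params = grep { /^tag_\d{3}_subfield_/ } $cgi->param;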
Signed-off-by: Chris Cormack <crc@liblime.com>
hdl defined 5 new fields in biblioitems to handle collections.
_koha_modify_biblioitem didn't deal with those fields; it's fixed now.
GetMarcAuthors in UNIMARC has a $4 subfield that is related to the responsibility the author has;
it is connected to an authorised value.
The improvement expands the numeric code to its expanded value.
Signed-off-by: Chris Cormack <crc@liblime.com>
In Koha 2.2 the biblionumber was a hidden field in the form,
exactly like any other subfield.
As we also stored the biblionumber as a specific field, it was useless to have it
twice.
With this commit, we restore the biblionumber in the MARC::Record in TransformHtml2Marc
Signed-off-by: Chris Cormack <crc@liblime.com>
So, deal carefully with this commit please, and check it against your setups, because the patch works for me, but I'm not sure I fully understand why :\
Signed-off-by: Chris Cormack <crc@liblime.com>
- updating templates to have tmpl_process3.pl running without any errors
- adding a drupal-like css for prog templates (with 3 small images)
- fixing some bugs in circulation & other scripts
- updating french translation
- fixing some typos in templates
- support for authorities
- some bugfixes in ordering and "CCL" parsing
- support for authorities <=> biblios walking
Seems I can do what I want now, so I consider it's done, except for bugfixes that will be needed, I'm sure!
- NoZebra features : seems they work fine now (adding, modifying, deleting)
- Biblio editing major bugfix : before this commit, editing a biblio resulted in item removal in the marcxml field
* adding 3 subs in Biblio.pm
- GetNoZebraIndexes, which gets the index structure from a new system preference (added with this commit)
- _DelBiblioNoZebra, which retrieves all index entries for a biblio and removes the biblio reference in a variable
- _AddBiblioNoZebra, which adds index entries for a biblio.
Note that the 2 _Add and _Del subs work only on a hash variable, to speed things up in case of a modification (ie : delete+add). The effective SQL update is done in the ModZebra sub (which existed before, and dealt with the zebra index).
I think the code has to be more deeply tested, but it works at least partially.
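To make the idea concrete, here is a much-simplified sketch of the hash-in-memory approach (the real index names come from the NoZebraIndexes syspref and the real storage is the nozebra table, so the names and posting format below are purely illustrative):

  use strict;
  use warnings;

  # %index maps index name => { word => "biblionumber;biblionumber;..." }
  my %index;

  # _Del-style step: strip one biblionumber out of every posting it appears in
  sub _del_biblio_from_index {
      my ($biblionumber) = @_;
      for my $name (keys %index) {
          for my $word (keys %{ $index{$name} }) {
              $index{$name}{$word} =~ s/\b\Q$biblionumber\E;//g;
          }
      }
  }

  # _Add-style step: append the biblionumber to the postings of each word
  sub _add_biblio_to_index {
      my ($biblionumber, %words_by_name) = @_;
      while (my ($name, $words) = each %words_by_name) {
          $index{$name}{$_} .= "$biblionumber;" for @$words;
      }
  }

  # a modification is just delete + add on the hash; only ModZebra writes SQL
  _del_biblio_from_index(42);
  _add_biblio_to_index(42, title => ['harry', 'sally']);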
- add nozebra table management on biblio editing
- the index table content is hardcoded. I still have to add some specific systempref to let the library update it
- manage pagination (next/previous)
- manage facets
WHAT works :
- NZgetRecords : has exactly the same API & return values as the zebra getQuery, except that some parameters are unused
- search & sort work quite well
- the CQL parser is better than what I thought I could do : title="harry and sally" and publicationyear>2000 not itemtype=LIVR should work fine
All subs have been cleaned :
- removed useless ones
- merged some
- reordered Biblio.pm completely
- applied the naming conventions throughout
Seems to have broken nothing, but it still has to be heavily tested.
Note that Biblio.pm is now much more efficient than previously & probably more reliable as well.
== Biblio.pm cleaning (useless) ==
* some sub declaration dropped
* removed modbiblio sub
* removed moditem sub
* removed newitems. It was used only in finishrecieve. Replaced by a Koha2Marc+AddItem, that is better.
* removed MARCkoha2marcItem
* removed MARCdelsubfield declaration
* removed MARCkoha2marcBiblio
== Biblio.pm cleaning (naming conventions) ==
* MARCgettagslib renamed to GetMarcStructure
* MARCgetitems renamed to GetMarcItem
* MARCfind_frameworkcode renamed to GetFrameworkCode
* MARCmarc2koha renamed to TransformMarcToKoha
* MARChtml2marc renamed to TransformHtmlToMarc
* MARChtml2xml renamed to TranformeHtmlToXml
* zebraop renamed to ModZebra
== MARC=OFF ==
* removing MARC=OFF related scripts (in cataloguing directory)
* removed checkitems (function related to the MARC=off feature, which is completely broken in head. If someone wants to reintroduce it, hard work coming...)
* removed getitemsbybiblioitem (used only by MARC=OFF scripts, which are removed as well)
Uses a completely new ZEBRA indexing.
ZEBRA is now XML and comprises a KOHA meta record. Explanatory notes will be on koha-devel
Fixes UTF8 problems
Fixes bug with authorities
SQL database major changes.
Separate bibliographic and holdings records. Biblioitems table deprecated
etc. etc.
Wait for explanatory document on koha-devel
the intranet. The development was made on branch 2.2 by Arnaud Laurin from
Ouest Provence and integrated on HEAD by Pierrick Le Gall from INEO media
system.
New page reserve/request.pl taking a biblionumber as entry point.
New functions:
- C4::Biblio::get_iteminfos_of retrieves item information for a list of
itemnumbers
- C4::Biblio::get_biblioiteminfos_of retrieves biblioitem information for a
list of biblioitemnumbers
- C4::Biblio::get_itemnumbers_of retrieves the list of itemnumbers related to
each biblionumber given as argument.
- C4::Circulation::Circ2::get_return_date_of retrieves the return date for a
list of itemnumbers.
- C4::Koha::get_itemtypeinfos_of retrieves the information related to a
list of itemtypes.
- C4::Koha::get_branchinfos_of retrieves the information related to a list
of branchcodes.
- C4::Koha::get_notforloan_label_of retrieves the list of status/label pairs for
the authorised_values related to notforloan.
- C4::Koha::get_infos_of is the generic function used by all get_*infos_of.
- C4::Reserves2::GetNumberReservesFromBorrower
- C4::Reserves2::GetFirstReserveDateFromItem
Modified functions:
- C4::Reserves2::FindReserves was simplified to be more readable.
The reservation page is reserve/request.pl and is not linked from anywhere as
long as zebra is not yet stable on HEAD.
longer necessary. If we need to convert from MARC-8 for display, we should:
1. use utf-8
2. do it with MARC::Charset
If you still need it, let me know and I'll put it back in.
Seems not to break too many things, but I'm probably wrong here.
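If the conversion does come back, the MARC::Charset route looks roughly like this:

  use strict;
  use warnings;
  use MARC::Charset qw(marc8_to_utf8);

  # convert a MARC-8 encoded string (plain ASCII here, for the example) to UTF-8
  my $marc8_title = 'Koha test record';
  my $utf8_title  = marc8_to_utf8($marc8_title);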
at least, new features/bugfixes from 2.2.5 are here (tested on some features on my head local copy)
- removing useless directories (koha-html and koha-plucene)
some explanations :
- updater/updatedatabase => will convert all tables to InnoDB (not related to utf8, just to warn you) AND collate them in utf8 / utf8_general_ci. The SQL command is : ALTER TABLE tablename DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci.
- *-top.inc will show the pages in utf8
- THE HARD THING : for me, mysql-client and mysql-server were set up to communicate in iso8859-1, whatever the mysql collation ! Thus, pages were improperly shown, as data was transmitted in iso8859-1 format ! After a full day of investigation, someone on usenet pointed me to "set NAMES 'utf8'" when I explained that I wanted utf8. I could put this in my.cnf, but if I did that, ALL databases would "speak" utf8, and that's not what we want. Thus, I added a line in Context.pm : every time a DB handle is opened, the communication is set to utf8 (a short sketch follows this list).
- using the marcxml field and no longer the iso2709 raw marc biblioitems.marc field.
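The Context.pm change boils down to something like this (connection parameters below are placeholders):

  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect("DBI:mysql:database=koha;host=localhost",
                         "kohauser", "kohapassword", { RaiseError => 1 });

  # force this handle to talk utf8 to the server, instead of changing the
  # server-wide default in my.cnf for every database
  $dbh->do("set NAMES 'utf8'");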
* synch with rel_2_2. Probably the last non-manual synch, as rel_2_2 should not be modified deeply.
* code cleaning (cleaning warnings from perl -w) continued
IMPORTANT NOTE : the MARCkoha2marc sub API has been modified. Instead of biblionumber & biblioitemnumber, it now gets a hash.
The sub is used only in Biblio.pm, so the API change should be harmless (except for me, but I'm aware ;-) )
* go to koha cvs home directory
* in misc/zebra there is a unimarc directory. I suggest that marc21 libraries create a marc21 directory
* put your zebra.cfg files here & create your database.
* from koha cvs home directory, ln -s misc/zebra/marc21 zebra (I mean create a symbolic link to YOUR zebra directory)
* now, every time you add/modify a biblio/item, your zebra DB is updated correctly.
NOTE :
* this uses a system call in perl. CPU consuming, but we are waiting for indexdata's Perl/zoom (a sketch of the call follows the config snippet below)
* deletion still does not work
* UNIMARC zebra config files are provided in misc/zebra/unimarc directory. The most important line being :
in zebra.cfg :
recordId: (bib1,Local-number)
storeKeys:1
in .abs file :
elm 090 Local-number -
elm 090/? Local-number -
elm 090/?/9 Local-number !:w
(090$9 being the field mapped to biblio.biblionumber in Koha)
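For the curious, the system call mentioned above looks more or less like this (directory and file names are illustrative; adapt them to wherever your zebra symlink points):

  use strict;
  use warnings;

  # the record has already been exported as a file under $zebradir/records;
  # shell out to zebraidx to (re)index it. CPU hungry, but good enough for now.
  my $zebradir = 'zebra';
  system("zebraidx -c $zebradir/zebra.cfg update $zebradir/records") == 0
      or warn "zebraidx update failed: $?";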
* removing useless subs
* removing some subs that are also elsewhere
* renaming all OLDxxx subs to REALxxx subs (should not change anything, as OLDxxx, as well as REAL, are supposed to be for Biblio.pm internal use only)
don't update your cvs if you want to have a working head...
this commit contains :
* updater/updatedatabase : gets rid of the marc_* tables, but DOES NOT remove them. As a lot of things still use them, it would not be a good idea to drop them for now. If you really want to play, you can rename them to test head without them while still being able to reintroduce them...
* Biblio.pm : modifies MARCgetbiblio to find the raw marc record in the biblioitems.marc field, not in marc_subfield_table, modifies MARCfindframeworkcode to find the frameworkcode in biblio.frameworkcode, modifies some other subs to use biblio.biblionumber & get rid of bibid.
* other files : get rid of bibid and use biblionumber instead.
What is broken :
* does not do anything on zebra yet.
* if you rename marc_subfield_table, you can't search anymore.
* you can view a biblio & bibliodetails, and go to the MARC editor, but NOT save any modification.
* don't try to add a biblio, it would add data poorly... (don't try to delete either, it may work, but that would be a surprise ;-) )
IMPORTANT NOTE : you need the MARC::XML package (http://search.cpan.org/~esummers/MARC-XML-0.7/lib/MARC/File/XML.pm), which requires a recent version of MARC::Record
Updatedatabase stores the iso2709 data in the biblioitems.marc field & an xml version in biblioitems.marcxml. Not sure we will keep this when releasing the stable version, but I think it's a good idea to have something readable in sql, at least during the development stage.
A sub had forgotten to use the C4::Context->marcfromkohafield array, which caches DB data.
This is only a small improvement for normal DB modifications, but it almost doubles the speed of bulkmarcimport... from 6 records/second to more than 10.
* partial support of the "linkage" MARC feature : if you enter a "link" on a MARC subfield, the magnifying glass won't search on the field, but on the linked field. I agree it's a partial support. Will be improved, but I need to investigate MARC21 & UNIMARC diffs on this topic.
In 2.4, a new DB structure will highly speed things and this limit will be removed.
FindDuplicate is activated again; the perf problems were due to this problem.
For now, it's done on ISBN only. Will be improved soon.
When a duplicate is detected, the biblio is not saved, but the user is asked for a confirmation.
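A simplified illustration of the ISBN-only test (the helper name below is made up for the example; the real logic lives in FindDuplicate):

  use strict;
  use warnings;
  use C4::Context;

  # return the biblionumber of an existing record with the same ISBN, if any
  sub _find_duplicate_by_isbn {
      my ($isbn) = @_;
      my $dbh = C4::Context->dbh;
      my ($biblionumber) = $dbh->selectrow_array(
          "SELECT biblionumber FROM biblioitems WHERE isbn = ?", undef, $isbn);
      return $biblionumber;   # undef means no duplicate, so the biblio can be saved
  }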
This field is useful when the callnumber contains no information on the room where the item is stored.
With this field, we now have 3 levels of information to find a book :
* the branch.
* the location.
* the callnumber.
This should be versatile enough to handle any storage method.
This hack is quite simple, due to the nice Biblio.pm API. The MARC => koha db link is automatically managed. Just add the link in the parameters section.
1st draft for MARC biblio deletion.
Still does not work well, but at least Biblio.pm compiles & it shouldn't break too many things
(Note the trash in the MARCdetail, but don't use it, please :-) )
I changed this behaviour :
if notforloan is set for a given item, and NOT for all items from this itemtype, the notforloan is kept.
If notforloan is set for itemtype, it's used (and impossible to loan a specific item from this itemtype)
This support is only for the MARC <-> OLD-DB link. It worked previously, but values entered as MARC were not reported to OLD-DB, nor were values entered as OLD-DB reported to MARC.
Note that some OLD-DB subs are strange (dummy ?), see OLDmodsubject, OLDmodsubtitle, OLDmodaddiauthor in C4/Biblio.pm
For example, it seems impossible to have more than 1 additional author and 1 subtitle. In MARC, that's not the case. So, if you enter more than one, I'm afraid only the LAST will be stored.
z3950 search and import seem to work fine.
Let me explain how :
* a "search z3950" button is added in the addbiblio template.
* when clicked, a popup appears and z3950/search.pl is called
* z3950/search.pl calls addz3950search in the DB
* the z3950 daemon retrieves the records and stores them in z3950results AND in the marc_breeding table.
* as long as there are searches pending, the popup auto-refreshes every 2 seconds, and says how many searches are pending.
* when the user clicks on a z3950 result => the parent popup is called with the requested biblio, and auto-filled
Note :
* character encoding support : (It's a nightmare...) In the z3950servers table, an "encoding" column has been added. You can put "UNIMARC" or "USMARC" in this column. Depending on this, the char_decode in C4::Biblio.pm replaces the marc char encoding with an iso 8859-1 encoding. Note that this value has been added to the breeding import too, for better support.
* the marc_breeding and z3950* tables have been modified : they have an encoding column and the random z3950 number is stored too for convenience => it's the key I use to list only requested biblios in the popup.
It was due to an illegal construction in Koha : we tried to retrieve subfields from <10 tags.
That's not possible. MARC::Record accepted this in version 0.93, but it was fixed later.
Now, the construction/retrieving is OK !
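In MARC::Record terms, the rule is simply: tags below 010 are control fields and carry raw data, not subfields:

  use strict;
  use warnings;
  use MARC::Record;
  use MARC::Field;

  my $record = MARC::Record->new();
  $record->append_fields(
      MARC::Field->new('008', '070101s2007    xxu           000 0 eng d'),
      MARC::Field->new('245', '1', '0', a => 'Some title'),
  );

  my $fixed = $record->field('008')->data();         # control field: use data()
  my $title = $record->field('245')->subfield('a');  # data field: use subfield()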
* worked in 1.9.0, but not in 1.9.1 :
- modification of a biblio didn't work
- empty fields were not shown when modifying a biblio. Empty fields managed by the library (ie in tabs 0->9 in the MARC parameter table) MUST be available for entry, even if not filled.
* did not work before :
- repeatable subfields now work correctly. Enter 2 subfields separated by | and they will be split during saving (see the sketch below).
- dropped the last subfield of the MARC form :-(
Internal changes :
- MARCmodbiblio now works by deleting and recreating the biblio. It's not perf-optimized, but MARC is a "do_something_impossible_to_trace" standard, so it's the best solution. Not a problem for me, as biblios are rarely modified.
Note that MARCdelbiblio has been rewritten to enable deletion of a biblio WITHOUT deleting items.
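The splitting mentioned above is nothing more than a split on '|' while rebuilding the field, roughly like this (the tag and values are illustrative):

  use strict;
  use warnings;
  use MARC::Field;

  # a repeatable subfield typed as "history|biography" in one input box is
  # split on save, each piece becoming its own $a occurrence in the field
  my $input_value = 'history|biography';
  my @subfields   = map { ('a' => $_) } split /\|/, $input_value;
  my $field       = MARC::Field->new('650', ' ', '0', @subfields);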
Those fields don't have subfields.
In Koha, we will use a specific "trick" : fields <10 will have a "virtual" subfield : "@".
Note it's only virtual : when rebuilding the MARC::Record, the Koha API handles "@" subfields correctly => the resulting MARC record has a 00x field without subfields.
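Roughly, the rebuild logic treats "@" as "this is the whole field" (the helper below is illustrative, not the actual Biblio.pm code):

  use strict;
  use warnings;
  use MARC::Field;

  # for tags below 010 the "@" pseudo-subfield holds the whole control field;
  # everything else becomes an ordinary data field with real subfields
  sub _build_field {
      my ($tag, @pairs) = @_;                          # ('@' => data) or (code => value, ...)
      return $tag < 10
          ? MARC::Field->new($tag, $pairs[1])          # control field, no subfields
          : MARC::Field->new($tag, ' ', ' ', @pairs);  # ordinary data field
  }

  my $control = _build_field('008', '@' => '070101s2007    xxu           000 0 eng d');
  my $title   = _build_field('245', a   => 'A title', b => 'a subtitle');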
'mandatory' property to a per-subfield 'tag_mandatory' template parameter,
so that addbiblio.tmpl can distinguish between mandatory subfields in a
mandatory tag and mandatory subfields in an optional tag
Non-trivial refactoring in acqui.simple/addbiblio.pl to make the if-else blocks
smaller, and to add some POD; needs further testing
Added function to check if a MARC subfield name is "koha-internal" (instead
of checking it for 'lib' and 'tag' everywhere); temporarily added to Koha.pm
Use above function in acqui.simple/additem.pl and search.marc/search.pl
* many bugfixes
* adding value_builder : you can map a subfield in the marc_subfield_structure to a sub stored in the "value_builder" directory. In this directory you can create screens used to build values with any method. This commit contains a 1st draft of the builder for the 100$a unimarc french subfield, which is composed of 35 digits, with 12 different values (only the first 4 are provided for now)
* some bugfixes
* multiple item management : MARCadditem and MARCmoditem have been added. They assume that ALL the MARC fields linked to koha-item are in the same MARC tag (on the same line of the MARC file)
Note : it should not be hard for marcimport and marcexport to re-link fields from internal tag/subfield to "legal" tag/subfield.
Database.pm and Output.pm are almost unmodified (var test...)
Biblio.pm is almost completely rewritten.
WHAT DOES IT DO ??? ==> END of the Hitchcock suspense
1st, it does... nothing...
Every old API should be there. So if the MARC stuff is not done, the behaviour is EXACTLY the same (if there is no added bug, of course). So, if you use normal acquisition, you won't find anything new either on screen or in the old-DB tables ...
All old-API functions have been cloned. For example, the "newbiblio" sub has now become :
* a "newbiblio" sub, with the same parameters. It just calls a sub named OLDnewbiblio
* an "OLDnewbiblio" sub, which is a copy/paste of the previous newbiblio sub. Then, when you want to add the MARC-DB stuff, you can modify the newbiblio sub without modifying the OLDnewbiblio one. If we correct a bug in newbiblio in 1.2, we can do the same in the main branch by correcting OLDnewbiblio.
* The MARC stuff is usually done through a sub named MARCxxx where xxx is the same as OLDxxx. For example, newbiblio calls MARCnewbiblio. The MARCxxx subs take a MARC::Record as a parameter.
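In skeleton form, the cloning pattern looks like this (the signatures and the placeholder return value are illustrative only):

  use strict;
  use warnings;

  # the public sub keeps the old name and parameters and delegates to
  # OLDnewbiblio; the MARC-DB side will be added here later via MARCnewbiblio
  sub newbiblio {
      my ($dbh, $biblio) = @_;
      my $biblionumber = OLDnewbiblio($dbh, $biblio);
      # MARCnewbiblio($dbh, $marc_record, $biblionumber);   # MARC-DB side, to come
      return $biblionumber;
  }

  sub OLDnewbiblio {
      my ($dbh, $biblio) = @_;
      # ... copy/paste of the previous newbiblio body goes here ...
      return 42;    # placeholder biblionumber for this sketch
  }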
The last thing to solve was to manage biblios through real MARC import : they must populate the old-db, but must populate the MARC-DB too, without losing information (if we go from MARC::Record to old-data and then back to MARC::Record, we lose A LOT OF ROWS). To do this, there are subs beginning with "ALLxxx" : they manage data as MARC::Record data. They call the OLDxxx sub too (to populate the old-DB), but the MARCxxx subs as well, with a complete MARC::Record ;-)
In Biblio.pm, there are some subs that allow building an old-style record from a MARC::Record, and the opposite. There is also a sub finding a MARC bibid from an old biblionumber, and the opposite too.
Note we have decided with Steve that an old-biblio <=> a MARC-biblio.
Not related to MARC :
* removed HLT- empty link when there is no basket for a supplier (should be useful to copy this into rel-1-2 I think)
* fixed some "use of uninitialized value"
related to MARC
* changed use Acquisition to use Catalogue, new package for MARC management
For now, nothing is done to the MARC DB, but the structure is modified (see Biblio.pm for details), and everything seems to work : it's still possible to use acqui, and it fills the old-DB pretty well.
WARNING : if you work on the main trunk, please note Acquisition.pm is NO LONGER USED in the /acqui/ system. Every sub in Acquisition.pm has been moved to Biblio.pm or Catalogue.pm.
0- Requires MARC::Record from cpan to work
1- divided Catalogue.pm into 2 parts :
Biblio.pm ,that contains biblio management
Catalogue.pm, that contains acquisition management.
When ended, they will replace the Acquisition.pm package
2- Created MARCxxx functions :
* MARCgetbiblio : retrieves a MARC::Record from the bibid passed in parameter (working, see test.pl script)
* MARCaddbiblio : creates a MARC-DB entry, for a MARC::Record given as parameter. (working)
* MARCmodsubfield : modifies a subfield for a given subfieldid
* MARCfingsubfield : retrieves a subfieldvalue from a bibid/tag/subfield
* MARCaddsubfield : adds a subfield to a biblio in the DB
* MARCkoha2marc : builds a MARC::Record, given a biblionumber, a biblioitemnumber and/or an itemnumber. (working).
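For the curious, the kind of MARC::Record construction these subs rely on looks like this (the fields and values are purely illustrative):

  use strict;
  use warnings;
  use MARC::Record;
  use MARC::Field;

  # build a MARC::Record from koha-style values, the way MARCkoha2marc does
  # (the real sub drives this from the marc_subfield_structure mapping)
  my $record = MARC::Record->new();
  $record->append_fields(
      MARC::Field->new('200', ' ', ' ', a => 'A title', f => 'An author'),
      MARC::Field->new('010', ' ', ' ', a => '2-07-040850-8'),
  );
  print $record->as_formatted(), "\n";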
TODO :
A lot ;-))))
For now, you can only create a MARC-DB entry from an old-DB entry. Note some questions still remain to be solved around bibid (old-DB/MARC-DB)...