This script allows you to update your database/records after a bulkmarcimport integration.
Currently it reformats biblioitems.isbn and biblioitems.marcxml, deleting any '-' in the ISBN.
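For reference, a rough sketch of what such a pass could look like (not the script itself: the C4::Context handle, the WHERE filter, and the 020$a ISBN tag are assumptions, MARC21-style):

#!/usr/bin/perl
# hedged sketch: strip '-' from biblioitems.isbn and from the ISBN field
# inside marcxml; 020$a is an assumption, adjust for UNIMARC (010$a)
use strict;
use C4::Context;
use MARC::Record;
use MARC::File::XML;

my $dbh = C4::Context->dbh;
my $sth = $dbh->prepare("SELECT biblioitemnumber, isbn, marcxml FROM biblioitems WHERE isbn LIKE '%-%'");
my $upd = $dbh->prepare("UPDATE biblioitems SET isbn=?, marcxml=? WHERE biblioitemnumber=?");
$sth->execute;
while ( my ( $biblioitemnumber, $isbn, $marcxml ) = $sth->fetchrow ) {
    $isbn =~ s/-//g;
    my $record = MARC::Record->new_from_xml($marcxml);
    foreach my $field ( $record->field('020') ) {
        my $value = $field->subfield('a');
        next unless defined $value;
        $value =~ s/-//g;
        $field->update( a => $value );    # rewrite the subfield without dashes
    }
    $upd->execute( $isbn, $record->as_xml, $biblioitemnumber );
}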
- updating templates so that tmpl_process3.pl runs without any errors
- adding a drupal-like css for prog templates (with 3 small images)
- fixing some bugs in circulation & other scripts
- updating french translation
- fixing some typos in templates
- support for authorities
- some bugfixes in ordering and "CCL" parsing
- support for authorities <=> biblios walking
Seems I can do what I want now, so I consider it done, except for the bugfixes that will be needed, I'm sure!
* adding 3 subs in Biblio.pm
- GetNoZebraIndexes, which gets the index structure from a new systempreference (added with this commit)
- _DelBiblioNoZebra, which retrieves all index entries for a biblio and removes the biblio reference from a variable
- _AddBiblioNoZebra, which adds index entries for a biblio.
Note that the _Add and _Del subs work only on a hash variable, to speed things up in case of a modification (i.e. delete+add). The effective SQL update is done in the ModZebra sub (which existed before and dealt with the Zebra index).
I think the code needs deeper testing, but it works at least partially.
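A rough sketch of the hash-based approach (simplified, not the actual subs; the entry format here is a plain biblionumber list):

use strict;

my %index;    # $index{$indexname}{$word} = "biblionumber;biblionumber;..."

sub _add_sketch {
    my ( $biblionumber, $indexname, @words ) = @_;
    $index{$indexname}{$_} .= "$biblionumber;" for @words;
}

sub _del_sketch {
    my ($biblionumber) = @_;
    foreach my $indexname ( keys %index ) {
        foreach my $word ( keys %{ $index{$indexname} } ) {
            $index{$indexname}{$word} =~ s/\b$biblionumber;//g;    # drop only this biblio
        }
    }
}

# a modification is just delete+add, entirely in memory;
# ModZebra can then flush the whole hash to SQL in one pass
_del_sketch(42);
_add_sketch( 42, 'title', qw(harry sally) );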
- changing the nozebra table to hold entries of the form biblionumber,title-ranking; (";" is the entry separator). Now, if a value occurs several times in an index, it is stored only once, with a higher ranking (the ranking is the number of times the word appeared for this index); see the sketch after this list
- improving search to use the ranking value (default order). The ranking of a result is the sum of the rankings of all its terms; the list is ordered by ranking+title, from highest to lowest
- add nozebra table management on biblio editing
- the index table content is hardcoded. I still have to add some specific systempref to let the library update it
- manage pagination (next/previous)
- manage facets
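A hedged sketch of the entry format and the default ordering, on made-up sample data (the real code differs):

use strict;

# entries are "biblionumber,title-ranking;" as described above
my %index = (
    title => {
        harry => '42,harry and sally-2;57,harry the lion-1;',
        sally => '42,harry and sally-1;',
    },
);

my @terms = qw(harry sally);
my ( %ranking, %title );
foreach my $term (@terms) {
    my $entries = $index{title}{$term} || '';
    while ( $entries =~ /(\d+),([^;]*?)-(\d+);/g ) {
        $ranking{$1} += $3;    # ranking of a result = sum over all matching terms
        $title{$1} = $2;
    }
}

# ordered by ranking+title, from highest to lowest
foreach my $bn ( sort { $ranking{$b} <=> $ranking{$a} or $title{$a} cmp $title{$b} } keys %ranking ) {
    print "$bn ($ranking{$bn}) $title{$bn}\n";
}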
WHAT works :
- NZgetRecords : has exactly the same API & return values as the zebra getQuery, except that some parameters are unused
- search & sort work quite well
- the CQL parser is better than what I thought I could do : title="harry and sally" and publicationyear>2000 not itemtype=LIVR should work fine
Abiding by the naming convention.
Using Member wherever it should be used;
Borrower is only used for borrower categories,
+ GetBorrowersWhoHaveNeverBorrowed
and lists like that.
- checkaccount and getborraccountno => GetBorrowerAcctRecord
Many changes in names,
some changes in function signature.
Will be detailed in a mail to kohadevel.
All subs have been cleaned :
- useless ones removed
- some merged
- Biblio.pm completely reordered
- naming conventions applied throughout
Seems to have broken nothing, but it still has to be heavily tested.
Note that Biblio.pm is now much more efficient than previously & probably more reliable as well.
== Biblio.pm cleaning (useless) ==
* some sub declaration dropped
* removed modbiblio sub
* removed moditem sub
* removed newitems. It was used only in finishreceive. Replaced by a Koha2Marc+AddItem, which is better.
* removed MARCkoha2marcItem
* removed MARCdelsubfield declaration
* removed MARCkoha2marcBiblio
== Biblio.pm cleaning (naming conventions) ==
* MARCgettagslib renamed to GetMarcStructure
* MARCgetitems renamed to GetMarcItem
* MARCfind_frameworkcode renamed to GetFrameworkCode
* MARCmarc2koha renamed to TransformMarcToKoha
* MARChtml2marc renamed to TransformHtmlToMarc
* MARChtml2xml renamed to TransformHtmlToXml
* zebraop renamed to ModZebra
== MARC=OFF ==
* removing MARC=OFF related scripts (in cataloguing directory)
* removed checkitems (a function related to the MARC=OFF feature, which is completely broken in head. If someone wants to reintroduce it, hard work is coming...)
* removed getitemsbybiblioitem (used only by MARC=OFF scripts, which are removed as well)
cvs -z3 -d:ext:kados@cvs.savannah.nongnu.org:/sources/koha co -P koha
find koha.precrash -type d -name "CVS" -exec rm -rv {} \;
cp -r koha.precrash/* koha/
cd koha/
cvs commit
This should, in theory, put us right back where we were before the crash.
subfields in the 09o or 999 placeholder for 090 Locally Assigned LC-Type
Call Number found in copy catalogued records. The locally assigned original
090 call number remains visible in tab 0 for informational purposes, as it is often
the only place to find what an LC Call number for the material might be.
This file now combines the koha configuration and the zebra configuration. Old koha.conf variable names are retained; they go between the <config> tags. Do not change the tag names in the zebra part. Use zebrasrv -f /etc/koha.xml to start your server. The virtual server configuration details can be found in the YAZ documentation.
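Roughly, the file looks like this (a hedged sketch only: the tags inside <config> follow your old koha.conf variables, the values are placeholders, and the zebra/virtual-server part follows the YAZ documentation):

<yazgfs>
  <!-- koha part: old koha.conf variable names, as tags -->
  <config>
    <database>koha</database>
    <hostname>localhost</hostname>
    <user>kohaadmin</user>
    <pass>secret</pass>
  </config>
  <!-- zebra part: virtual server configuration per the YAZ docs;
       do not change the tag names here -->
</yazgfs>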
only allows numbers up to 127 - it needs to allow bigger numbers if the sub you're making
is for a daily subscription like a newspaper, where the number of issues in a year
will be greater than 127 (see the sketch below).
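A signed TINYINT tops out at 127, so the fix amounts to widening the column; a sketch, with a hypothetical table/column name:

-- hypothetical names; widening TINYINT to SMALLINT allows up to 32767 issues
ALTER TABLE subscription MODIFY issuesperyear SMALLINT NOT NULL DEFAULT 0;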
head repo (i.e., copies over the right files, creates some symlinks,
etc.). It's not finished yet, but getting there (I'll be testing it
extensively today and tomorrow).
It now responds to:
-n : the number of records to import.
-commit : the number of records to wait before performing a 'commit' operation
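In sketch form, the handling of those two options could look like this (simplified; the real script does much more, and the MARC::Batch reading and defaults are assumptions):

#!/usr/bin/perl
use strict;
use Getopt::Long;
use MARC::Batch;

my ( $file, $number, $commitnum ) = ( '', 0, 50 );    # defaults are placeholders
GetOptions(
    'file=s'   => \$file,
    'n=i'      => \$number,       # 0 = import everything
    'commit=i' => \$commitnum,
);

my $batch = MARC::Batch->new( 'USMARC', $file );
my $count = 0;
while ( my $record = $batch->next ) {
    last if $number && $count >= $number;
    # ... AddBiblio/AddItem calls elided ...
    $count++;
    print "commit at $count records\n" if $count % $commitnum == 0;
}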
ALSO: IMPORTANT: I took out the char_encoding as this should be handled by
MARC::File::XML now, unless I'm mistaken.
recordId: (bib1,Identifier-standard) must not contain a space just after the comma. Adam agreed it was a bug, and it should be solved soon. But now that we are aware of it, we can avoid putting the space in!
In this commit you have everything needed to set up a working zebra DB in UNIMARC :
* collection.abs is UNIMARC specific and must be rewritten for MARC21, in marc21 directory
* pdf.properties is to be copied unmodified in the marc21 directory (can also be put somewhere else)
* rebuild_zebra.pl is SLOW, but a 1-step reindexing tool, using ZOOM
* rebuild_zebra_idx is FAST, but a 2-step reindexing tool that does not use ZOOM. Run it: it will create XML files for all biblios in the /zebra/biblios directory; then run zebraidx update biblios in your zebra directory
* zebra.cfg is the zebra config file ;-)
* test_cql2rpn.pl is a script that will query the database and show the results. Works for me; just change the query at the beginning to get the answers you expect.
What has to be done :
* benchmarking : it seems zebraidx update is faster than lightning (400 biblios/sec : 10,000 biblios in 25 seconds), while ZOOM indexing is slow (something like 25 biblios/second). More benchmarking could be done.
* completing collection.abs for UNIMARC. I'll take care of it.
* modifying Biblio.pm to use ZOOM instead of the "zebraidx through exec" currently in use. I'll take care of that too.
* modify the search API & tools & screens. I'll leave the ball to someone else (chris ?) for this. I agree SearchMarc.pm can be dropped and replaced by something else (maybe a new-and-clean Search.pm package)
Seems not to break too many things, but I'm probably wrong here.
At least, the new features/bugfixes from 2.2.5 are here (I tested some features on my local HEAD copy).
- removing useless directories (koha-html and koha-plucene)
* go to koha cvs home directory
* in misc/zebra there is a unimarc directory. I suggest that marc21 libraries create a marc21 directory
* put your zebra.cfg files here & create your database.
* from koha cvs home directory, ln -s misc/zebra/marc21 zebra (I mean create a symbolic link to YOUR zebra directory)
* now, every time you add/modify a biblio/item, your zebra DB is updated correctly.
NOTE :
* this uses a system call in Perl. CPU-consuming, but we are waiting for indexdata's Perl/ZOOM (see the sketch after this note)
* deletion still does not work
* UNIMARC zebra config files are provided in misc/zebra/unimarc directory. The most important line being :
in zebra.cfg :
recordId: (bib1,Local-number)
storeKeys:1
in .abs file :
elm 090 Local-number -
elm 090/? Local-number -
elm 090/?/9 Local-number !:w
(090$9 being the field mapped to biblio.biblionumber in Koha)
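For the curious, the system-call approach mentioned in the note above is roughly this (a sketch: paths are placeholders, the sub name is hypothetical, zebraidx flags are standard):

use strict;

sub zebra_update_sketch {
    my ( $biblionumber, $xml ) = @_;
    # write the biblio as XML where zebraidx will pick it up
    open my $fh, '>', "/koha/zebra/biblios/$biblionumber.xml"
        or die "cannot write biblio XML: $!";
    print $fh $xml;
    close $fh;
    # one zebraidx process per update: CPU-consuming, hence the wait for Perl/ZOOM
    system( 'zebraidx', '-c', '/koha/zebra/zebra.cfg', 'update', '/koha/zebra/biblios' );
}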
how it works :
* create the table marc_Tword with the following structure :
CREATE TABLE `marc_Tword` (
`word` varchar(80) NOT NULL default '',
`usedin` text NOT NULL,
`tagsubfield` varchar(4) NOT NULL default '',
PRIMARY KEY (`word`,`tagsubfield`)
) TYPE=MyISAM;
* open a console & type export PERL5LIB & export KOHA_CONF as usual.
* fill this table with misc/build_marc_Tword.pl. Warning, this script uses a very memory-consuming but very fast method to fill the table : it does everything in memory, then writes everything at the end (see the sketch after this list). Another method is provided (& commented), but it's 100x slower (really !)
* open opac-search.pl and replace use C4::SearchMarc; by use C4::SearchMarcTest; (as the API hasn't changed, it will work immediately).
* go to opac-search (advanced search) & search for whatever you want. Should work fine.
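A hedged sketch of the in-memory strategy used by build_marc_Tword (the real script differs; credentials are placeholders and the word splitting here is naive):

#!/usr/bin/perl
use strict;
use DBI;
use MARC::Record;
use MARC::File::XML;

my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'kohaadmin', 'secret' );
my %Tword;    # $Tword{$word}{$tagsubfield} = "biblionumber;biblionumber;..."

my $biblios = $dbh->prepare('SELECT biblionumber, marcxml FROM biblioitems');
$biblios->execute;
while ( my ( $biblionumber, $marcxml ) = $biblios->fetchrow ) {
    my $record = MARC::Record->new_from_xml($marcxml);
    foreach my $field ( $record->fields ) {
        next if $field->is_control_field;
        foreach my $pair ( $field->subfields ) {
            my ( $code, $value ) = @$pair;
            foreach my $word ( grep { length } split /\W+/, lc $value ) {
                $Tword{$word}{ $field->tag . $code } .= "$biblionumber;";
            }
        }
    }
}

# everything is in memory now: write the whole table in one pass
my $ins = $dbh->prepare('INSERT INTO marc_Tword (word, usedin, tagsubfield) VALUES (?,?,?)');
foreach my $word ( keys %Tword ) {
    $ins->execute( $word, $Tword{$word}{$_}, $_ ) for keys %{ $Tword{$word} };
}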
LIMITS :
* build_marc_Tword has problems with extended characters (mainly accented ones), so don't be afraid if you get SQL errors. They are not a problem for a POC
* search results are always ordered by title, whatever you choose.
* search only handles WORDA and WORDB, not yet WORDA or WORDB, nor WORDA except WORDB.
Make the generated pot file (i.e., the result of "create") look more "real"
by using msgmerge to reformat the output.
The script failed to create intermediate directories if the directory of the
target did not exist and the parent of that directory did not exist
either. This should fix that.
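The fix amounts to something like this (a sketch with a hypothetical sub name; File::Path's mkpath creates every missing parent on the way down):

use strict;
use File::Path;
use File::Basename;

sub open_target_sketch {
    my ($target) = @_;
    my $dir = dirname($target);
    mkpath($dir) unless -d $dir;    # creates intermediate directories too
    open my $fh, '>', $target or die "Cannot write $target: $!";
    return $fh;
}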