Seems not to break too many things, but I'm probably wrong here.
At least the new features/bugfixes from 2.2.5 are here (I tested some of them on my local HEAD copy).
- removing useless directories (koha-html and koha-plucene)
some explanations :
- updater/updatedatabase => will convert all tables to InnoDB (not related to utf8, just to warn you) AND collate them in utf8 / utf8_general_ci. The SQL command is : ALTER TABLE tablename DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci.
- *-top.inc will show the pages in utf8
- THE HARD THING : for me, mysql-client and mysql-server were set up to communicate in iso8859-1, whatever the mysql collation! Thus, pages were displayed improperly, as the data was transmitted in iso8859-1! After a full day of investigation, someone on usenet pointed me to "SET NAMES 'utf8'" as the way to tell mysql I wanted utf8. I could have put this in my.cnf, but then ALL databases would "speak" utf8, and that's not what we want. So I added a line in Context.pm : every time a DB handle is opened, the connection is set to utf8 (see the sketch after this list).
- using the marcxml field, and no longer the raw iso2709 marc in the biblioitems.marc field.
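A minimal sketch of the Context.pm idea, assuming a plain DBI connect (the real code goes through the Context.pm dbh handling, and the credentials below are illustrative):

  use DBI;
  my ( $user, $pass ) = ( 'kohaadmin', 'secret' );   # illustrative credentials
  my $dbh = DBI->connect( "DBI:mysql:database=koha;host=localhost", $user, $pass );
  # force client<->server communication to utf8 for this handle only,
  # instead of switching the whole mysql server to utf8 in my.cnf
  $dbh->do("SET NAMES 'utf8'");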
- the last 5 issues are now shown, and their status can be changed (but not reverted to "waited", as there can be only one "waited")
- the library can create a "distribution list" : a sheet listing borrowers (selected from the borrower list, or entered manually) that can be printed for a given issue. Once printed, the sheet is attached to the issue, which is then passed to each reader on the list, one by one.
* synch with rel_2_2. Probably the last non-manual synch, as rel_2_2 should not be modified deeply anymore.
* code cleaning (removing warnings reported by perl -w) continued
actually existed; so if there was no ISBN, and the ISSN was blank,
the item would be assigned a random biblionumber and the breeding farm
would report that the item already existed in the catalog (even though
it didn't). This fix adds a check to determine whether the imported
record has an ISSN before assigning a matching biblionumber.
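The shape of the fix, as a hedged sketch (assuming $record is a MARC::Record and $dbh a DBI handle; the field/SQL details are illustrative, not the exact breeding-farm code):

  my $biblionumber;
  my $issn = $record->subfield( '022', 'a' );   # ISSN field in MARC21 (011$a in UNIMARC)
  if ( defined $issn && $issn =~ /\S/ ) {
      # only reuse an existing biblionumber when there really is an ISSN to match on
      my $sth = $dbh->prepare("SELECT biblionumber FROM biblioitems WHERE issn = ?");
      $sth->execute($issn);
      ($biblionumber) = $sth->fetchrow;
  }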
But C4::Date uses Date::Manip which, in the author's own words:
"If you look in CPAN, you'll find that there are a number of Date and Time packages. Is Date::Manip the one you should be using? In my opinion, the answer is no most of the time."
He goes on to say that because Date::Manip is powerful and written entirely in perl, it is also slow.
Now, Circulation needs to be as fast as possible, and C4::Date isn't actually doing anything particularly tricky.
So I'm working on C4::Circulation::Date as a replacement, in an attempt to win some speed.
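A hedged sketch of the kind of lightweight date arithmetic such a module can do with core perl only (the sub name and interface here are illustrative, not the actual C4::Circulation::Date API):

  use Time::Local;

  # add $offset days to a yyyy-mm-dd date without pulling in Date::Manip
  sub add_days {
      my ( $date, $offset ) = @_;
      my ( $y, $m, $d ) = split /-/, $date;
      # computing from noon avoids daylight-saving edge cases when adding whole days
      my $time = timelocal( 0, 0, 12, $d, $m - 1, $y ) + $offset * 86400;
      my ( $dd, $mm, $yy ) = ( localtime $time )[ 3, 4, 5 ];
      return sprintf "%04d-%02d-%02d", $yy + 1900, $mm + 1, $dd;
  }
  # add_days('2005-12-28', 21) returns '2006-01-18'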
This module is for dealing with user-submitted reviews of items.
Currently it allows (with some scripts) a user to review any item on their reading record.
The review is marked unvetted, and a librarian must vet and approve the review before it is shown to the public.
The scripts to add/edit a review, and to display reviews in the OPAC, are done.
The script that lists the reviews awaiting vetting for the librarians has also been done.
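For illustration only, the flow is roughly this (the sub names and parameters below are guesses, not necessarily the ones in the new module):

  # borrower side: store the review, unapproved by default
  savereview( $biblionumber, $borrowernumber, $review_text );   # approved = 0
  # librarian side, from the vetting list:
  approvereview( $reviewid );                                   # approved = 1
  # OPAC detail page only fetches approved reviews for the biblio
  my @reviews = getreviews( $biblionumber, 1 );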
IMPORTANT NOTE : the MARCkoha2marc sub API has been modified. Instead of biblionumber & biblioitemnumber, it now gets a hash.
The sub is used only in Biblio.pm, so the API change should be harmless (except for me, but I'm aware ;-) )
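Roughly, the shape of the change is the following (I'm not quoting the exact signature; whether a hash or hashref is passed, and the exact keys, may differ):

  # before (illustrative): two scalar ids
  my $record_old = MARCkoha2marc( $dbh, $biblionumber, $biblioitemnumber );
  # after: the koha data is passed as a hash instead
  my $record_new = MARCkoha2marc( $dbh, { biblionumber => $biblionumber, biblioitemnumber => $biblioitemnumber } );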
* run updater/updatedatabase to create the imageurl field in itemtypes (see the sketch after this list).
* go to Koha >> parameters >> itemtypes >> modify (or add) an itemtype. You will see around 20 nice images to choose from (thanks to owen). If you prefer your own image, you can also type a complete url (http://www.myserver.lib/path/to/my/image.gif)
* go to the OPAC and search for something. In the result list, you now get the picture instead of the text itemtype.
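What updatedatabase does here is essentially the following (the exact column definition may differ):

  # add the column used to store the itemtype icon / image url
  $dbh->do("ALTER TABLE itemtypes ADD COLUMN imageurl varchar(250)");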
replacing the 2.2 marc search by a Net::Z3950 search (waiting for Perl/ZOOM)
works only for title/author/isbn searches; any other search is treated as a 'keyword search' (= anywhere)
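As a hedged sketch of the index mapping, this is roughly the idea (the helper below is hypothetical; it only shows the Bib-1 use attributes involved, and the resulting PQF query is then sent through the Net::Z3950 connection):

  my %use_attr = ( title => 4, author => 1003, isbn => 7 );
  sub build_pqf {
      my ( $index, $term ) = @_;
      my $use = $use_attr{$index} || 1016;   # anything else = 'anywhere' keyword search
      return "\@attr 1=$use \"$term\"";
  }
  # build_pqf('title', 'dune') eq '@attr 1=4 "dune"'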
* go to koha cvs home directory
* in misc/zebra there is a unimarc directory. I suggest that marc21 libraries create a marc21 directory
* put your zebra.cfg files there & create your database.
* from the koha cvs home directory, run: ln -s misc/zebra/marc21 zebra (i.e. create a symbolic link to YOUR zebra directory)
* now, every time you add/modify a biblio/item, your zebra DB is updated correctly.
NOTE :
* this uses a system call in perl. CPU consuming, but we are waiting for indexdata's Perl/ZOOM (see the sketch after the config excerpt below)
* deletion still does not work
* UNIMARC zebra config files are provided in the misc/zebra/unimarc directory. The most important lines being :
in zebra.cfg :
recordId: (bib1,Local-number)
storeKeys:1
in .abs file :
elm 090 Local-number -
elm 090/? Local-number -
elm 090/?/9 Local-number !:w
(090$9 being the field mapped to biblio.biblionumber in Koha)
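For reference, the "system call" mentioned in the note above boils down to something like this (paths are illustrative; zebra/ is the symlink created earlier, and the exported record is assumed to have been written under it first):

  # reindex the exported record(s) with zebraidx, using YOUR zebra.cfg
  system("zebraidx -c zebra/zebra.cfg update zebra/records") == 0
      or warn "zebra indexing failed: $?";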
* removing useless subs
* removing some subs that are duplicated elsewhere
* renaming all OLDxxx subs to REALxxx subs (should not change anything, as OLDxxx, as well as REALxxx, are supposed to be for Biblio.pm internal use only)
It provides the user with the list of items that were ordered more than a given number of days ago and have NOT been received yet.
The user may filter by supplier, branch or delay.
This page is still under development.
The goal is to make it ready to print, so the books can be reordered.
2 new functions have been written in the Acquisition module (usage sketched below) :
getsupplierlistwithlateorders
getlateorders
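A hedged usage sketch; I'm guessing at the parameters (delay in days, then optional supplier and branch filters), so check Acquisition.pm for the real signatures:

  use C4::Acquisition;
  my ( $supplierid, $branchcode ) = ( 1, 'MAIN' );   # illustrative filter values
  # suppliers that have at least one order late by 30 days or more
  my @suppliers  = getsupplierlistwithlateorders(30);
  # the late orders themselves, optionally restricted to one supplier/branch
  my @lateorders = getlateorders( 30, $supplierid, $branchcode );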
branches has been modified to manage branch independence.
Request for comment.
STILL UNDER DEVELOPMENT
don't update your cvs checkout if you want to keep a working HEAD...
this commit contains :
* updater/updatedatabase : gets rid of the marc_* tables, but DOES NOT remove them. As a lot of things still use them, it would not be a good idea to drop them for now. If you really want to play, you can rename them to test HEAD without them while still being able to reintroduce them...
* Biblio.pm : modify MARCgetbiblio to read the raw marc record from the biblioitems.marc field instead of marc_subfield_table, modify MARCfindframeworkcode to read the frameworkcode from biblio.frameworkcode, and modify some other subs to use biblio.biblionumber & get rid of bibid (see the sketch after this list).
* other files : get rid of bibid and use biblionumber instead.
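The core of the MARCgetbiblio change, as a simplified sketch (assuming $dbh and $biblionumber from the surrounding sub; not the exact code):

  use MARC::Record;
  # fetch the raw iso2709 blob straight from biblioitems instead of
  # rebuilding the record from marc_subfield_table
  my $sth = $dbh->prepare("SELECT marc FROM biblioitems WHERE biblionumber = ?");
  $sth->execute($biblionumber);
  my ($marc)  = $sth->fetchrow;
  my $record  = MARC::Record->new_from_usmarc($marc);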
What is broken :
* does not do anything on zebra yet.
* if you rename marc_subfield_table, you can't search anymore.
* you can view a biblio & bibliodetails, and go to the MARC editor, but NOT save any modification.
* don't try to add a biblio, it would store the data poorly... (don't try to delete either; it may work, but that would be a surprise ;-) )
IMPORTANT NOTE : you need the MARC::XML package (http://search.cpan.org/~esummers/MARC-XML-0.7/lib/MARC/File/XML.pm), which requires a recent version of MARC::Record
Updatedatabase stores the iso2709 data in the biblioitems.marc field & an xml version in biblioitems.marcxml. Not sure we will keep both when releasing the stable version, but I think it's a good idea to have something readable in sql, at least during the development stage.
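Roughly how both columns can be filled from one record (simplified sketch, assuming $rawmarc, $biblionumber and $dbh from the surrounding loop; not the exact updatedatabase code):

  use MARC::Record;
  use MARC::File::XML;
  my $record  = MARC::Record->new_from_usmarc($rawmarc);
  my $iso2709 = $record->as_usmarc();   # raw blob for biblioitems.marc
  my $xml     = $record->as_xml();      # human-readable version for biblioitems.marcxml
  $dbh->do( "UPDATE biblioitems SET marc = ?, marcxml = ? WHERE biblionumber = ?",
            undef, $iso2709, $xml, $biblionumber );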