Minor adjustments to Dates.pm and the associated test. You should be able to run a
Perl test that uses Context without getting fatalsToBrowser output.
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
The kohaversion is in the code directory (in /kohaversion.pl).
C4::Context now has a new method, C4::Context->KOHAVERSION,
that returns the Koha code version.
The system preference Version contains the database version.
If the two differ, the user is redirected to the web installer when they log in (new behaviour: before this commit the check was done on every page, which I think is too CPU-costly).
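A minimal sketch of that login-time check (the redirect URL and the version-to-number helper below are my assumptions, not the actual Auth.pm code):

  use CGI;
  use C4::Context;

  # turn "3.00.00.001" into 3.0000001 so versions compare numerically
  sub version_as_number {
      my ($v) = @_;
      my ( $major, @rest ) = split /\./, $v;
      return 0 + ( "$major." . join( '', @rest ) );
  }

  my $code_version = version_as_number( C4::Context->KOHAVERSION );
  my $db_version   = C4::Context->preference('Version');   # already numeric

  if ( $db_version < $code_version ) {
      # database lags behind the code: send the user to the web installer
      print CGI->new->redirect('/cgi-bin/koha/installer/install.pl');
      exit;
  }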
In the web installer, we now check whether this is a new setup or an upgrade and show only the appropriate link.
updatedatabase contains a lot of new things:
* SetVersion($kohaversion), which sets the kohaversion after each update
* TransformToNum($kohaversion), which returns a number for a given Koha version (3.0000001 from 3.00.00.001, for example)
* DropAllForeignKeys($table), which does what it says: drops all foreign keys. A shame it's not possible directly in MySQL... (a sketch follows the example below)
* for each database update, just add the following lines:
=item
Describe what it does for other developers
=cut
$DBversion = "your.koha.version.dbnumber";
if (C4::Context->preference("Version") < TransformToNum($DBversion)) {
    #
    # DO YOUR UPDATE STUFF
    #
    print "Upgrade to $DBversion done (specify what it does if you want)\n";
    SetVersion($DBversion);
}
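For the DropAllForeignKeys helper, a minimal sketch of one way to do it in MySQL, assuming it parses SHOW CREATE TABLE output (an illustration, not necessarily the exact updatedatabase code):

  use C4::Context;

  sub DropAllForeignKeys {
      my ($table) = @_;
      my $dbh = C4::Context->dbh;
      my ( undef, $create ) = $dbh->selectrow_array("SHOW CREATE TABLE $table");
      # each "CONSTRAINT `name` FOREIGN KEY" clause yields one DROP statement
      while ( $create =~ /CONSTRAINT `([^`]+)` FOREIGN KEY/g ) {
          $dbh->do("ALTER TABLE $table DROP FOREIGN KEY $1");
      }
  }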
IMPORTANT NOTES:
In Koha 2.2, a new install was done by installing a 2.2.0 database, then updating it to the installed version.
In Koha 3.0, /installer/kohaversion.sql MUST contain an up-to-date version, as the installer sets the DB version to kohaversion after loading kohaversion.sql. It does NOT run updatedatabase.
The update from Koha 2.2 to Koha 3.0 must NOT be done through the web installer: updatedatabase takes a very long time to run and you'll hit the Apache timeout for sure. See http://wiki.koha.org/doku.php?id=22_to_30, which contains my notes for upgrading (with some UNIMARC-specific stuff).
Note for RM: please eyeball this change.
Signed-off-by: Chris Cormack <crc@liblime.com>
- updating templates so that tmpl_process3.pl runs without any errors
- adding a Drupal-like CSS for the prog templates (with 3 small images)
- fixing some bugs in circulation & other scripts
- updating the French translation
- fixing some typos in templates
All subs have been cleaned:
- removed useless ones
- merged some
- reordered Biblio.pm completely
- applied the naming conventions throughout
Seems to have broken nothing, but it still has to be heavily tested.
Note that Biblio.pm is now much more efficient than previously & probably more reliable as well.
== Biblio.pm cleaning (useless) ==
* some sub declarations dropped
* removed modbiblio sub
* removed moditem sub
* removed newitems. It was used only in finishrecieve. Replaced by Koha2Marc + AddItem, which is better.
* removed MARCkoha2marcItem
* removed MARCdelsubfield declaration
* removed MARCkoha2marcBiblio
== Biblio.pm cleaning (naming conventions) ==
* MARCgettagslib renamed to GetMarcStructure
* MARCgetitems renamed to GetMarcItem
* MARCfind_frameworkcode renamed to GetFrameworkCode
* MARCmarc2koha renamed to TransformMarcToKoha
* MARChtml2marc renamed to TransformHtmlToMarc
* MARChtml2xml renamed to TranformeHtmlToXml
* zebraop renamed to ModZebra
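To illustrate the new naming, a short sketch of a couple of the renamed subs in use (the argument lists are assumptions based on how the old subs were typically called, not a definitive reference):

  use C4::Biblio;

  my $biblionumber  = 1;
  my $frameworkcode = GetFrameworkCode($biblionumber);        # was MARCfind_frameworkcode
  my $tagslib       = GetMarcStructure( 1, $frameworkcode );  # was MARCgettagslib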
== MARC=OFF ==
* removing MARC=OFF-related scripts (in the cataloguing directory)
* removed checkitems (a function related to the MARC=OFF feature, which is completely broken in HEAD. If someone wants to reintroduce it, hard work is coming...)
* removed getitemsbybiblioitem (used only by the MARC=OFF scripts, which are removed as well)
cvs -z3 -d:ext:kados@cvs.savannah.nongnu.org:/sources/koha co -P koha
find koha.precrash -type d -name "CVS" -exec rm -rv {} \;
cp -r koha.precrash/* koha/
cd koha/
cvs commit
This should in theory put us right back where we were before the crash
Uses a completely new ZEBRA indexing.
ZEBRA is now XML and comprises a KOHA meta record. Explanatory notes will be on koha-devel.
Fixes UTF8 problems
Fixes bug with authorities
SQL database major changes.
Separate bibliographic and holdings records. The biblioitems table is deprecated.
etc. etc.
Wait for explanatory document on koha-devel
adding two fields to the branches table (branchip, branchprinter)
branchip: if the library enters an IP or IP range, any librarian connecting from a computer in that IP range will be temporarily attached to the corresponding branch.
branchprinter: the library can select a default printer for a branch.
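A minimal sketch of how the branchip matching could work at login time (the helper name and the simple prefix match are assumptions, not the actual Auth.pm code):

  my %branches = (
      MAIN => { branchip => '192.168.1', branchprinter => 'lp_main' },
      WEST => { branchip => '10.0.5',    branchprinter => 'lp_west' },
  );

  sub guess_branch_from_ip {
      my ( $remote_ip, $branches ) = @_;
      for my $branchcode ( keys %$branches ) {
          my $branchip = $branches->{$branchcode}{branchip} or next;
          # treat branchip as a prefix, so '192.168.1' matches '192.168.1.42'
          return $branchcode if index( $remote_ip, $branchip ) == 0;
      }
      return;    # no match: keep the librarian's home branch
  }

  my $branch = guess_branch_from_ip( $ENV{REMOTE_ADDR} || '', \%branches );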
koha.xml contains both the Koha configuration and the Zebra server configuration.
The Zebra connection code is modified to allow connecting to the authorities Zebra as well.
It will break HEAD if koha.conf is not replaced with koha.xml.
- modified userenv to add branchname
- modified menus.inc to display the librarian name & userenv on every page; they are in a librarian_information div.
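A minimal sketch of reading the enriched userenv from a page script (this assumes C4::Context->userenv returns a hashref that now carries branchname alongside the existing user fields):

  use C4::Context;

  my $userenv = C4::Context->userenv;
  if ($userenv) {
      my $librarian  = ( $userenv->{firstname} || '' ) . ' ' . ( $userenv->{surname} || '' );
      my $branchname = $userenv->{branchname} || $userenv->{branch};
      # values like these feed the librarian_information div added to menus.inc
      print "Logged in as $librarian at $branchname\n";
  }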
zebradb=localhost
zebraport=<your port>
zebrauser=<username>
zebrapass=<password>
The zebra.cfg file should read:
perm.anonymous:r
perm.username:rw
passwd.c:<yourpasswordfile>
The password file should be prepared with Apache's htpasswd utility in encrypted mode and should exist in a folder zebra.cfg can read.
If your data are truly UTF-8 encoded in your database, they should be
correctly displayed. "set names 'UTF8'" on the MySQL connection (C4/Context.pm)
is mandatory, and "binmode" to utf8 (C4/Interface/CGI/Output.pm) seemed to
convert data twice, so it was removed.
You can get a Zebra connection object by doing:
my $Zconn = C4::Context->Zconn;
My initial tests indicate that as soon as your function ends
(i.e., when you're done doing something) the connection will be
closed automatically. There may be some other way to make the
connection more stateful; I'm not sure...
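For context, a minimal sketch of using that handle with the Perl ZOOM API (the PQF query and the render call are illustrative only):

  use C4::Context;
  use ZOOM;

  my $Zconn = C4::Context->Zconn;
  my $rs    = $Zconn->search_pqf(q{@attr 1=4 "history"});   # title search
  printf "%d hit(s)\n", $rs->size();
  print $rs->record(0)->render() if $rs->size();            # dump the first record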
Some explanations:
- updater/updatedatabase => will convert all tables to InnoDB (not related to utf8, just to warn you) AND collate them in utf8 / utf8_general_ci. The SQL command is: ALTER TABLE tablename DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci.
- *-top.inc will show the pages in utf8
- THE HARD THING: for me, mysql-client and mysql-server were set up to communicate in iso8859-1, whatever the MySQL collation! Thus, pages were shown improperly, as data were transmitted in iso8859-1 format! After a full day of investigation, someone on Usenet pointed to "set NAMES 'utf8'" when I explained that I wanted utf8. I could have put this in my.cnf, but then ALL databases would "speak" utf8, which is not what we want. So I added a line in Context.pm: every time a DB handle is opened, the communication is set to utf8 (see the sketch after this list).
- using the marcxml field and no longer the raw iso2709 MARC biblioitems.marc field.
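A minimal sketch of the Context.pm idea described above, forcing the client/server dialogue to utf8 on every new handle (connection parameters are placeholders, not Koha's actual configuration):

  use DBI;

  my $dbh = DBI->connect( 'DBI:mysql:database=koha;host=localhost',
      'kohauser', 'secret', { RaiseError => 1 } );
  $dbh->do("set NAMES 'utf8'");   # per-connection, unlike a global my.cnf setting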
* synch with rel_2_2. Probably the last non-manual synch, as rel_2_2 should not be modified deeply anymore.
* code cleaning (removing warnings from perl -w) continued
Adding a cookie containing user-specific vars such as:
branch,
firstname,
surname,
cardnumber...
This may be criticized from a lawyer's point of view, since name and surname are included.
But the real need is for userid and branch,
and that is achieved.
Auth now passes TWO cookies:
a session cookie
and an environment cookie.
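A minimal sketch of building the two cookies with CGI.pm (names and values are illustrative, not the actual Auth.pm implementation):

  use CGI;

  my ( $sessionID, $branch, $firstname, $surname, $cardnumber ) =
      ( 'abc123', 'MAIN', 'Ada', 'Lovelace', '0042' );

  my $query          = CGI->new;
  my $session_cookie = $query->cookie( -name => 'CGISESSID', -value => $sessionID );
  my $env_cookie     = $query->cookie(
      -name  => 'userenv',
      -value => join( ':', $branch, $firstname, $surname, $cardnumber ),
  );
  print $query->header( -cookie => [ $session_cookie, $env_cookie ] );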