The *_PHONE notices (HOLD_PHONE, PREDUE_PHONE and OVERDUE_PHONE) should
be "merged" into the main notice codes (i.e. HOLD, PREDUE and OVERDUE).
Test plan:
1/ Make sure you have HOLD_PHONE, PREDUE_PHONE and OVERDUE_PHONE notices
2/ Execute the database update entry
3/ Verify the 3 notices have been merged into the "phone" template of
the HOLD, PREDUE and OVERDUE notices
4/ Verify there is no regression in the Talking Tech feature (how?)
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No koha-qa errors
Verified that the notices are merged
TalkingTech_itiva_outbound.pl runs without problems... but it can't
produce any output; my setup may not be correctly configured. No
warnings or log messages.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Test plan:
- Choose 2 patrons P1 and P2
- Edit "Patron messaging preferences" and
check SMS + email with 2 days in advance for P1
check email with 5 days in advance for P2
- define a message for the letter code PREDUE for sms and email
(tools/letters.pl).
- select 2 barcodes (B1, B2).
* checkout B1 to P1 with a due date = NOW + 2 days
* checkout B2 to P2 with a due date = NOW + 5 days
- in the mysql CLI, note the number of unsent messages:
select count(*) from message_queue where status != "sent";
- launch the cronjob:
perl misc/cronjobs/advance_notices.pl -c
- retry the previous SQL query; you should have X + 3 unsent messages
(depending on the current checkouts in your DB!).
- view all unsent messages:
select borrowernumber, letter_code, message_transport_type, content
from message_queue where status != "sent";
You should see:
2 messages for P1, 1 for sms, 1 for email, with the letter code PREDUE
1 message for P2, 1 for email, with the letter code PREDUE
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If overdue notices should be sent to several patrons (of the same
branch), only the first one was notified.
This patch fixes this issue.
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Test plan:
- define some complex overdue rules (tools/overduerules.pl).
For example:
First overdue from 2 to 5 days by sms and email with letter code L1
Second overdue from 5 to 15 days by email with letter code L2
Third overdue from 15 days by print with letter code L3
- define a message for each transport type selected (tools/letters.pl).
- select 3 patrons (P1, P2, P3) and 3 barcodes (B1, B2, B3).
* checkout B1 to P1 with a due date = NOW - 3 days
* checkout B2 to P2 with a due date = NOW - 10 days
* checkout B3 to P3 with a due date = NOW - 20 days
- in the mysql CLI, note the number of unsent messages:
select count(*) from message_queue where status != "sent";
- launch the cronjob:
perl misc/cronjobs/overdue_notices.pl
- retry the previous SQL query; you should have X + 4 unsent messages
(depending on the current checkouts in your DB!).
- view all unsent messages:
select borrowernumber, letter_code, message_transport_type, content
from message_queue where status != "sent";
You should see:
2 messages for P1, 1 for sms, 1 for email and the letter code L1
1 message for P2, 1 for email and the letter code L2
1 message for P3, 1 for print and the letter code L3
- Specific case: if a user doesn't have a smsalertnumber and an sms is
required, or doesn't have an email defined and an email is
required, a print notice is generated.
A print notice is generated only once per borrower and per level.
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The message_transport_type param should be passed to GetPreparedLetter,
not as part of the "tables" parameter.
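In other words, something along these lines (a sketch of the corrected
call; the module, codes and table keys are illustrative):
  my $letter = C4::Letters::GetPreparedLetter(
      module      => 'circulation',
      letter_code => 'PREDUE',
      branchcode  => $branchcode,
      tables      => { borrowers => $borrowernumber, items => $itemnumber },
      message_transport_type => $transport,  # top-level, not inside 'tables'
  );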
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The HOLD_PRINT and HOLD_PHONE notices become useless.
This patch modifies existing notices in order to group them into the
main notice type 'HOLD', with any pre-existing print and phone
templates in the appropriate places.
Test plan:
- Apply the patch and execute the update database entry.
- Verify that your previous HOLD_PHONE and HOLD_PRINT are displayed
when editing the HOLD notice (under phone and print).
- Choose a patron and check SMS, email, phone for "Hold filled"
(on the patron messaging preferences).
- Place a hold.
- Check the item in and confirm the hold.
- If the patron has an email *and* an SMS number, 2 new messages are put
into the message_queue table: 1 sms and 1 email.
If the patron is missing 1 of them, there are 2 new messages: 1
sms/email and 1 print.
If the patron has neither of them, there is 1 new message: 1 print.
- The generated messages should correspond with the notices defined,
depending on the message transport type.
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Just noting that if email and SMS are disabled in the msg prefs, the user
will not have a print message.
And if the SMS driver fails, the record status in message_queue is 'failed',
but staff may not be aware of that.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch removes several types of strings from the
PO files that cannot be usefully translated, including
ones that consist entirely of punctuation and/or HTML entities.
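Illustrative examples of the kind of msgid entries removed (not an
exhaustive list):
  msgid "&nbsp;"
  msgid "&rsaquo;"
  msgid ", "
  msgid "%s"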
Test:
1) Update PO files of some lang, xx-YY-*po
cd misc/translator
perl translate update xx-YY
2) Do it again, just in case
3) rm po/xx-YY*po~
4) Extract all msgid's, sorted
cat po/xx-YY*po | egrep "^msgid" | sort | uniq > xx-YY-pre
5) Apply the patch
6) Repeat 1-3
7) Repeat 4 again, into the other file
cat po/xx-YY*po | egrep "^msgid" | sort | uniq > xx-YY-post
8) Do a diff and inspect the results; only strings made of %s and \s
should show up
diff xx-YY-pre xx-YY-post | less
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works as described, 380 fewer strings to 'translate'
No koha-qa errors.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Tested according to test plan, works as described.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
koha.psgi example and plackup.sh script to run any Koha site,
intranet or opac interface, under Plack with an optional
multi-process Starman server.
plackup.sh site-name [intranet]
site-name is used to find config /etc/koha/sites/site-name/koha-conf.xml
All configuration is specified in koha.psgi, which you are welcome to
edit and tune according to your development needs (enable memcache,
enable/disable debugging modules for plack and so on).
For deployment of the opac or intranet you would probably want to take
a look at plackup.sh and enable starman as the web server (a
pre-forking server written in perl) and put some web server in front
of it to serve static files (e.g. nginx, apache).
When you are happy with it, rename koha.psgi and plackup.sh to the
site name and save them for safe-keeping.
This commit message is included in the patch as README.plack because
it contains useful information for people using plack for the first
time.
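For orientation, the heart of such a koha.psgi is roughly the
following (a sketch under assumed paths; the real file carries more
middleware options to toggle):
  use Plack::Builder;
  use Plack::App::CGIBin;

  # serve Koha's CGI scripts as PSGI apps
  my $intranet = Plack::App::CGIBin->new(
      root => '/usr/share/koha/intranet/cgi-bin'
  )->to_app;

  builder {
      # enable 'Debug';   # uncomment while developing
      mount '/cgi-bin/koha' => $intranet;
  };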
Test scenario:
1. install plack and dependencies, as documented at
http://wiki.koha-community.org/wiki/Plack
2. start ./plackup.sh sitename i[ntranet]
3. open intranet page http://localhost:5001/ and verify that it redirects
to http://localhost:5001/cgi-bin/koha/mainpage.pl
4. start ./plackup.sh sitename
5. open OPAC http://localhost:5000/ and verify that it redirects to
http://localhost:5000/cgi-bin/koha/opac-main.pl
6. next step is to take a look into koha.psgi and enable additional
debug modules, save file and reload page (plackup will reload
code automatically)
Signed-off-by: Magnus Enger <magnus@enger.priv.no>
Works as advertised. As I have explained in a comment on the bug
this looks like a very good starting point, and we can argue about
the details and add more options over time. Very happy to sign
this off! (My earlier concern about / not working has now been
taken care of, thanks Dobrica!)
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The sharedate column is documented as having the following meaning:
"date of invitation or acceptance of invitation"
This patch adjusts the new list-sharing code to stick with that
interpretation, as otherwise the column should have been renamed
to 'invite_expiration_date' or the like.
It also removes the "housekeeping" functionality from AddShare, as
otherwise the routine should have been named AddShareAndDoOtherStuff.
To prevent list shares from piling up, a new --list-invites flag
has been added to cleanup_database.pl. The default crontabs have
been modified to use the --list-invites flag by default.
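The cleanup presumably amounts to something like this (a sketch; the
table and column names are assumed from the list-sharing feature):
  my $dbh = C4::Context->dbh;
  $dbh->do(q{
      DELETE FROM virtualshelfshares
      WHERE invitekey IS NOT NULL
        AND sharedate <= DATE_SUB( NOW(), INTERVAL ? DAY )
  }, undef, 14);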
To test
-------
[1] Make some list share invites and accept some, but not all of them.
[2] Wait 14 days (or, more reasonably, manually edit the sharedate
values for the unaccepted shares to put them at least 14 days in
the past).
[3] Run cleanup_database.pl --list-invites
[4] Verify that accepted shares remain, as do share invites that have
not yet reached more than 14 days of age.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This is a very basic start to a SIP server testing script.
I imagine we will want to make it interactive in the end,
essentially replicating what a SIP based self-checkout machine does.
Signed-off-by: Adrien Saurat <adrien.saurat@biblibre.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If a notice is defined for the library of the patron, it should be
used.
Without this patch, the notice used is the one defined for all
libraries.
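The lookup being tested is roughly the following (a sketch of the
fallback; the empty branchcode row is the all-libraries default):
  SELECT *
  FROM letter
  WHERE code = 'PREDUEDGST'
    AND message_transport_type = 'email'
    AND branchcode IN (?, '')
  ORDER BY branchcode DESC  -- a branch-specific row, if any, sorts first
  LIMIT 1;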
Test plan:
1/ Set the advanced notice for a patron, using digest.
2/ Check one item out to this patron (set the due date according to
the days in advance value).
3/ launch advance_notices.pl -c
4/ Verify the notice used is the default one.
5/ Define a notice for the library of the patron for PREDUEDGST
6/ launch advance_notices.pl -c
7/ Verify the notice used is the one previously defined.
8/ Check one item out to this patron (date due = today)
9/ launch advance_notices.pl -c
10/ Verify the notice used is the default one.
11/ Define a notice for the library of the patron for DUEDGST
12/ launch advance_notices.pl -c
13/ Verify the notice used is the one previously defined.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Test case: User from Library A, checked out books
- in library A from A and B
- in library B from B
Verified that the 'all libraries' notice is still used
when no specific notice is defined.
Verified that the patron's home library notice is used
when defined.
Note: Before and after the patch we print the branch information
from the patron's home library, so also using the template from
this branch seems logical. All items over all branches are
processed into one single reminder email, before and after the patch.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The SQL column headers are stored in the columns.def file.
This file is not managed by the translation script.
This patch makes it possible to translate the headers.
Note: The translation xml tags were added to avoid all the lines
being put on a single line.
Test plan:
1/ update your po file
cd misc/translator;
perl translate -f columns update LANG # replace LANG with your language code
2/ translate the column headers (search "columns.def" in your po file).
3/ install the translated columns.def
perl translate -f columns install LANG # replace LANG with your language code
4/ go to the reports module > create a new report > next > next
5/ change the language
On the 3rd step, you should see the column headers translated.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works as described, no koha-qa errors
[on es-ES about a third of the strings translated!! :-) ]
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described and fixes a long standing translation
problem.
Passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Show 'Unknown' when planneddate and publisheddate cannot be calculated.
Also fixes the SQL query in misc/cronjobs/serialsUpdate.pl that was
still using "periodicity != 32" to exclude irregular subscriptions
from the results.
Test plan:
1) Create a subscription in the serials module. Make sure to choose:
Frequency = Irregular
2) Test the prediction pattern: the first publication date is set to
the "First issue publication date" field, the others will show as
'unknown'
3) Save the subscription
4) Check the created issue - it will show a published date and a
planned date (same as "First issue publication date" field)
5) Receive the issue and check the next generated issue, planned
date and published date should show as 'Unknown'
6) Generate a next issue, planned date and published date should
also show as 'Unknown'
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works as described, following the test plan.
No koha-qa errors
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Also tested:
- multi receiving generates multiple issues without dates - 'unknown'
- staff detail page shows the dates empty, which is fine
- OPAC detail page shows the dates empty, which is fine
- serial collection page shows 'unknown' and those issues appear
on the 'manage' tab, as they did in the past
- Editing the issue from the serial collection page leaves the
date fields empty.
- Receiving the issue, setting the status to 'Arrived', the Expected on
date is set to 'today' automatically. Date published has to be
entered manually (maybe something we could improve later).
- subscription detail > issues tab shows Unknown.
- t/db_dependent/Serials/GetNextDate.t passes.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
It would be good to be able to specifically target import records from
Z39.50 for cleanup.
Test Plan:
1) Apply this patch
2) Import one or more batch record sets into Koha
3) Perform some Z39.50 searches
4) Run this command: misc/cronjobs/cleanup_database.pl -v --z3950
5) Verify that only Z39.50 records were deleted
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch enables deletion of the temp files used by
tmpl_process3.pl.
It just uncomments existing code.
To test:
1. Do a count of files on /tmp ( ls /tmp | wc -l )
2. Update your preferred language
3. Count again; there are new files in /tmp
4. Apply the patch
5. Update again and check: no new files
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
NOTE: I watched what temp files were actually in /tmp to make
sure other processes didn't magically increase/decrease
the number.
$ perl translate update {lang code}
generated 10 temporary files for me (2x5 po files). After
removing those ten files, and applying the patch, no
other files were generated.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
These lines had been commented out by commit
a399dcefad without any apparent good
reason.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
We have a number of reports of libraries that were upset by Bug 10720
being fixed! These libraries preferred this single file output, but as
text only. We should bring back this behavior, but as a feature, not a
bug.
Test Plan:
1) Apply this patch
2) Run overdue_notices.pl --html
3) Note the output is wrapped in html tags
4) Run overdue_notices.pl --text
5) Note the same output, but not wrapped in html tags
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
To test, add the -n parameter.
The filename generation could be refactored, but that is not a blocker.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds three optional parameters to runreport.pl
to allow authentication with the SMTP server.
--username -> Username to pass to the SMTP server for
authentication
--password -> Password to pass to the SMTP server for
authentication
--method -> Method is the type of authentication,
i.e. LOGIN, DIGEST-MD5, etc.
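A typical invocation with the new options might look like this (the
report number and credentials are placeholders):
  misc/cronjobs/runreport.pl --format=html --to=you@example.com \
      --subject="Nightly report" \
      --username=you@gmail.com --password=secret --method=LOGIN 1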
Test Plan
---------
As for testing manually using a Gmail account:
1. Set up your sendmail as shown in
misc/cronjobs/CONFIGURE.gmail
2. Before applying this patch, run misc/cronjobs/runreport.pl
on your favorite report including the proper email parameters
against your gmail account.
3. Note the failure message stating the authentication
requirement.
4. Apply this patch, and rerun the script including the
additional parameters and specifying "LOGIN" for the method.
5. Note the successful send.
6. perldoc misc/cronjobs/runreport.pl
7. Run the koha qa test tool.
Signed-off-by: Chris Nighswonger <cnighswonger@foundations.edu>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch expands and reformats the help text displayed
when running remove_unused_authorities.pl -h.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
remove_unused_authorities.pl previously required that --aut be supplied
to specify one or more authority types to check for unlinked authority
records. If --aut was omitted, it would default to searching for
records of authority type NC, which is not present in many (or any?)
Koha databases.
Now, if --aut is omitted, unlinked authority records of any type
are removed.
To test it:
Parse only PERSO_NAME authorities:
misc/migration_tools/remove_unused_authorities.pl -aut PERSO_NAME
Parse all authorities:
misc/migration_tools/remove_unused_authorities.pl
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Bug 7688 changed the prototype for GetNextDate, but the serialsUpdate.pl
cronjob script had not been updated. This patch fixes the problem.
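The change at the call site is presumably of this shape (a sketch,
assuming the new prototype takes the subscription record first):
  # before: my $nextdate = GetNextDate( $date, $subscription );
  my $nextdate = GetNextDate( $subscription, $date );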
Test plan:
Before applying the patch:
1/ Check that the following SQL query returns something:
SELECT serial.*
FROM serial
LEFT JOIN subscription ON (subscription.subscriptionid = serial.subscriptionid)
WHERE serial.status = 1
AND DATE_ADD(planneddate, INTERVAL CAST(graceperiod AS SIGNED) DAY) < NOW()
AND subscription.closed = 0;
2/ Run misc/cronjobs/serialsUpdate.pl -v
It should die with an error message like this:
Can't use string ("2011-03-05") as a HASH ref while "strict refs" in use
3/ Apply the patch
4/ Run misc/cronjobs/serialsUpdate.pl -v
It should exit normally and print messages like this:
Serial issue with id=XX updated
5/ Run the Koha QA test tools.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Magnus Enger <digitalutvikling@gmail.com>
Keeps current behaviour as default.
The -append option is described in the POD and works as expected.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described.
Adding a date/time to the output might
be good, to make it easier to find the entry you were looking for.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The problem is with the names of the PO files.
Currently Koha expects, among other variants, that PO filenames
begin with (using *-pref.po as an example):
{lang}-pref.po
{lang}-{region}-pref.po
{lang}-{script}-pref.po
{lang}-{script}-{region}-pref.po
and expects 2 chars for lang and region, and 4 for script.
So the problem with the Thai translation files is that their names
do not match that convention.
This patch only renames the Thai files from th-THA-* to th-TH-*.
That way the language description is right.
The translate script uses those chars to make dirs, and uses the
dirs to find the description.
To test:
1) Go to I18N/L10N sysprefs
2) Install th-THA language (or simply mkdir koha-tmpl/intranet-tmpl/prog/th-THA)
3) Reload page, wrong description
4) Apply patch
5) Install th-TH language (or simply mkdir koha-tmpl/intranet-tmpl/prog/th-TH)
6) Reload page, right description
7) If you want, do "mkdir koha-tmpl/intranet-tmpl/prog/th-Thai" and
reload; the description is also right
To the reporter of this bug: the rename of the folder is a good
workaround; when this patch is pushed to stable I'll rename the Thai
files.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
This does not correct existing problems, which need human
intervention. It does, however, allow for a correct installation
of Thai after the patch is applied.
If we really want a patch for fixing an existing install, I
wrote one, but have not tested it.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
As pointed out by Mark, this does not fix existing installations.
Putting a note in the release notes might be something we can do here.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
installer/data/mysql/sysprefs.sql has a semicolon as the default.
This patch fixes both instances to use the same fallback value.
It also prevents CSV header info from being included in non-CSV messages.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This determines if the CSV header should be included or not and
then generates it as needed using the delimiter specified in the
delimiter system preference.
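A sketch of the idea (the variable names are illustrative, not the
exact code):
  if ($csvfilename) {   # only emit a header when CSV output was requested
      my $delimiter = C4::Context->preference('delimiter') || ';';
      print $csv_fh join( $delimiter, @csv_column_names ), "\n";
  }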
TEST PLAN
---------
1. make some books overdue
2. run the overdue notices script without the -csv option
3. check the email notice; the CSV header is in the email
4. apply the patch
5. run the overdue notices script again
6. check the email notice; the CSV header is absent
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
A link to course reserves is in the global header menu but not on the
home page. All links from the global header should be present on the
home page as well. This patch adds it.
To test, apply the patch and if necessary clear your browser cache. View
the staff client home page. If you have "UseCourseReserves" enabled you
should see a link for the course reserves page which is visually
consistent with the other module links. If you do not have course
reserves enabled you should not see the link.
Unrelated: I positioned the admin link after the tools link because it
bugged me.
Signed-off-by: Broust <jean-manuel.broust@univ-lyon2.fr>
Signed-off-by: marjorie barry-vila <marjorie.barry-vila@ccsr.qc.ca>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script, works as described.
Course reserves is still accessible without permissions, but
you can't make any changes to the reserves then.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch improves the CSS used to attempt to Zebra-stripe the
output of emailed reports. This will work with some email clients,
but other email clients (e.g., Gmail) don't handle style elements in the
body or head element.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The misc/cronjobs/runreport.pl script allows for sending html reports
via email. The problem is that the Content-Type isn't set to
text/html, which means that the generated html email isn't
displayed properly.
This patch sets the Content-Type, and also adds a tiny bit of
CSS to potentially alternate row colours (just to make long
reports a bit easier on the eye!)
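The fix presumably comes down to something like this (a sketch,
assuming a Mail::Sendmail-style interface):
  my %mail = (
      To             => $to_address,
      Subject        => $subject,
      'Content-Type' => 'text/html; charset="utf-8"',
      Message        => $html_message,
  );
  sendmail(%mail) or warn $Mail::Sendmail::error;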
TEST PLAN
----------
1. Run the script similar to this:
./misc/cronjobs/runreport.pl --format=html --to=YOUREMAIL --subject="Bad Formatting!" REPORTNUMBER
2. Look at the email - the html code should be visible and ugly.
3. apply the patch
4. Run the script again.
5. Look at the email - the data should look nicer now.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
DBD::Mysql provides a mysql_auto_reconnect flag. Using it avoids
the time required to do a $dbh->ping().
Benchmarks:
use Modern::Perl;
use C4::Context;
for ( 1 .. 1000 ) {
    my $dbh = C4::Context->dbh;
}
* without this patch on a local DB:
perl t.pl 0,49s user 0,02s system 98% cpu 0,525 total
* without this patch on a remote DB:
perl t.pl 0,52s user 0,05s system 1% cpu 37,358 total
* with this patch on a local DB:
perl t.pl 0,46s user 0,04s system 99% cpu 0,509 total
* with this patch on a remote DB:
perl t.pl 0,49s user 0,02s system 56% cpu 0,892 total
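For reference, the flag is a standard DBD::mysql connection attribute;
enabling it is as simple as this (a sketch, not the Koha code):
  use DBI;
  my $dbh = DBI->connect( $dsn, $user, $password, { RaiseError => 1 } );
  $dbh->{mysql_auto_reconnect} = 1;  # reconnect transparently, no ping() needed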
Testing the auto reconnect:
use Modern::Perl;
use C4::Context;
my $dbh = C4::Context->dbh;
my $ping = $dbh->ping;
say $ping;
$dbh->disconnect;
$ping = $dbh->ping;
say $ping;
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Real improvement. No koha-qa errors
prove t/db_dependent/Circulation_issuingrules.t produces no error
prove t/db_dependent/Context.t produces no error
Test
1) dumped the Koha DB, loaded it on a non-local server
2) ran the sample script with and without the patch, local and remote
use Modern::Perl;
use C4::Context;
for ( 1 .. 100000 ) {
my $dbh = C4::Context->dbh;
}
The main difference I note is with the remote server:
a) without patch
real 0m16.357s
user 0m2.592s
sys 0m2.132s
b) with patch
real 0m0.259s
user 0m0.240s
sys 0m0.012s
I think this could be good for DBs placed on
remote servers
Bug 10611: add a "new" parameter to C4::Context->dbh
When dbh->disconnect is called and the mysql_auto_reconnect flag is set,
the dbh is not recreated: the old one is used.
Adding a new flag, we can now force the C4::Context->dbh method to
return a new dbh.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Bug 10611: Followup: remove useless calls to dbh->disconnect
These 3 calls to disconnect are done at the end of the script; they
are useless.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch improves rebuild_zebra.pl's usage help
by explaining when --skip-deletes should be considered
and noting that it should be used in conjunction with
a cronjob to process deletions after hours.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
It seems that record deletions can cause extreme slowdowns for Koha
installations with extremely large numbers of records. It would be
helpful to be able to skip record deletions when processing the
zebraqueue with rebuild_zebra.pl so the deletions can be processed with
a lower frequency.
Test Plan:
1) Disable any zebra indexing cronjobs you may have
2) Delete a record
3) Note the operation recordDelete in the zebraqueue table having done = 0
4) Run misc/migration_tools/rebuild_zebra.pl -b -z --skip-deletes
5) Note the delete still has done = 0
6) Run misc/migration_tools/rebuild_zebra.pl -b -z
7) Note the delete now has done = 1
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Also tested for authorities, no problems found.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
RM note: this is at best a work-around, and I will emphasize that
--skip-deletes should be used only when absolutely necessary.
I hope that --skip-deletes can go away at some point soon, but
that may depend on changes to Zebra.
- fix a couple of typos in comments
- replace a "$i" with a more descriptive variable name
- style some of the new code
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The original patch creates a lockfile in the ZEBRA_LOCKDIR.
It can fall back to /var/lock or even /tmp.
If the create fails, it dies. This can be considered very
exceptional.
This followup adjusts the fallback location in /var/lock or /tmp
slightly. It appends the database name to the folder in order to
prevent interference between multiple Koha instances. Creation of the
lockfile has been moved to a subroutine, extending directory and file
creation testing.
In the very unlikely case that we cannot create the lockfile (after
three separate tries), this follow-up allows you to continue instead
of dying. This is just as we did before we had file locking here.
Skipping a reindex every time could cause more harm than continuing
and having the race condition once in a while.
Test plan:
Test adding and removing lockdir from your koha-conf.xml. Check fallback.
Note that the fallback in /var/lock or /tmp must contain the database
name. Remove the lockdir config line and remove permissions from the
fallback. In this case the reindex should continue, but with a warning.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested with daemon and one-off invocation simultaneously.
Tested new wait parameter.
Tried all variations of lock directory (changing permissions etc.)
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds locking to rebuild_zebra.pl to ensure that simultaneous
changes are prevented (as one is likely to overwrite the other).
Incremental updates in daemon mode will be skipped if the lock is busy,
and they will be picked up on the next pass. Non-daemon mode
invocations will also exit immediately if they cannot get the lock,
unless the new flag -wait-for-lock is specified, in which case they
will wait until they get the lock and then proceed.
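The locking pattern described here is essentially this (a sketch;
$lockfile and $wait_for_lock stand in for the script's variables):
  use Fcntl qw( :flock );

  open my $lock_fh, '>', $lockfile
      or die "Cannot open lock file $lockfile: $!";
  if ($wait_for_lock) {
      flock( $lock_fh, LOCK_EX );          # block until the lock is free
  } else {
      flock( $lock_fh, LOCK_EX | LOCK_NB )
          or die "Aborting: another rebuild_zebra.pl holds the lock\n";
  }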
Supporting changes made to Makefile.PL and templates for the new
locking directory (paralleling the other zebra lock directories).
We stash the zebra_lockdir in koha-conf.xml so rebuild_zebra.pl
can find it.
To address earlier QA concerns we:
1. added code to check if flock is available and ignore locking if
it's missing (from M. de Rooy)
2. changed default for adhoc invocations to abort if they cannot
obtain the lock. Added option -wait-for-lock if the user prefers
to wait until the lock is free, and then continue processing.
3. added missing entry to t/db_dependent/zebra_config.pl
4. added a fallback locking directory of /tmp
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Doug merged the original patch with the QA changes.
Just for the record, noting here that the original patch was tested
extensively too by Martin Renvoize.
I have added a followup for some exceptional cases.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes Koha <-> Zebra use MARCXML for the serialization when
using DOM, and USMARC for GRS-1.
* The following functions are modified to set the Zebra record syntax
according to the current sysprefs and configuration:
- C4::Context->Zconn
- C4::Context->_new_Zconn
* A new function 'new_record_from_zebra' is introduced, which checks the
context we are in, and creates the MARC::Record object using the right
constructor (see the sketch after this list).
The following packages get touched to make use of the new function:
- C4::Search
- C4::AuthoritiesMarc
and the same happens to the UI scripts that make use of them (both in
the OPAC and STAFF interfaces).
* Calls to the unsafe ZOOM::Record->render()[1] method are removed.
Due to this last change, the code for building facets was rewritten.
And for performance of the facet creation I pushed higher version
dependencies for MARC::File::XML and MARC::Record (we rely on
MARC::Field->as_string).
* Calls to MARC::Record->new_from_xml and MARC::Record->new_from_usmarc
are wrapped with eval for catching problems [2].
* As of bug 3087, UNIMARC uses the 'unimarc' record syntax. This case
is correctly handled.
* As of bug 7818 misc/migration_tools/rebuild_zebra.pl behaves like:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
here we do exactly the same.
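A sketch of what the new_record_from_zebra helper can look like (the
config entry names and exact logic are assumptions):
  use C4::Context;
  use MARC::Record;

  sub new_record_from_zebra {
      my ( $server, $raw_record ) = @_;
      my $index_mode = $server eq 'authorityserver'
          ? C4::Context->config('zebra_auth_index_mode')
          : C4::Context->config('zebra_bib_index_mode');
      my $record = eval {
          $index_mode eq 'dom'
              ? MARC::Record->new_from_xml( $raw_record, 'UTF-8' )
              : MARC::Record->new_from_usmarc( $raw_record );
      };
      return $@ ? undef : $record;  # unparseable records are skipped (bug 10684)
  }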
To test:
- prove t/db_dependent/Search.t should pass.
- Searching should remain functional.
- Indexing and searching for a big record should work (that's what the
unit tests do).
- Test an index scan search (on the staff interface):
Search > More options > Check "Scan indexes".
- Enable 'itemBarcodeFallbackSearch' and try to circulate any word, it
shouldn't break.
- Searching for a biblio in a new subscription shouldn't break.
- Running bulkmarcimport.pl shouldn't break.
- And so on... for the rest of the .pl files.
[1] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#render()
[2] a record that cannot be parsed by MARC::Record is simply skipped (bug 10684)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Bug 7934 breaks the -f option introduced in bug 9161; this patch
repairs the regression.
While generating the tmpl_process3.pl command, a space is
missing if the -x option is given.
tmpl_process3.pl ends up being called like:
/home/koha/src/misc/translator/tmpl_process3.pl -q update -i
/home/koha/src/koha-tmpl/intranet-tmpl/prog/en/ -s
/home/koha/src/misc/translator/po/fr-FR-i-staff-t-prog-v-3006000.po -r
-x 'help'-f pay.tt
Revised test plan:
1) cd ./misc/translator
2) put a warn at LangInstaller.pm line 375.
3) time ./translate update fr-FR -f pay.tt
-- note the execution time and the output. The options in the
command contain "-x 'help'-f pay.tt"
The -f param is not passed to the script.
The execution time is strangely long.
4) git reset --hard origin/master
5) apply this patch
6) put a warn at LangInstaller.pm line 375.
7) time ./translate update fr-FR -f pay.tt
-- verify that the output and the execution time are now correct.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
To test:
Set up and run the cronjobs from crontab.example with a hold set to
unsuspend today. The hold should be unsuspended.
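For reference, the relevant crontab.example entry is something like
this (the path and schedule are assumptions):
  # unsuspend any holds whose suspension expires today
  0 1 * * *  $KOHA_CRON_PATH/holds/auto_unsuspend_holds.pl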
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Problem:
If you tell gather_print_notices.pl to write output to a location
you do not have write access to, it will silently fail to write the
data, but still mark unsent messages as sent.
Solution:
This patch adds two lines of defense:
1. Check that the location given for the output is writable
2. use "open() or die" instead of just "open()" when writing the
output
The first measure should catch most of the potential errors, but
a directory can be writable while the open() still fails because
the disk is full or something similar.
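The two lines of defense look roughly like this (a sketch; the
variable names are assumed):
  -d $output_directory && -w $output_directory
      or die "Cannot write to $output_directory\n";

  open my $fh, '>', $filename
      or die "Could not open $filename: $!";
  print {$fh} $notice_content;
  close $fh or die "Could not close $filename: $!";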
To test:
- Make sure you have some unsent messages in the message_queue table
that do not have an email address
- Apply the patch
- Run the script, pointing at a location you do not have access to
write to. Check that the script exits with an appropriate error
message, and that the unsent messages are still unsent. Do this
both with and without the -s option.
- To fake passing the first line of defence, comment out line 62
and put this in instead:
if ( !$output_directory || !-d $output_directory ) {
- Run the script again as above, check you get an appropriate
error and that the message queue is not touched
- Reset line 62 to how it was
- Run the script against a directory you do have access to write to
and check that output is produced as expected and that messages
are marked as sent
- Sign off
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Works as described.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
bulkmarcimport.pl can crash when searching for duplicates if the 005
field from the incoming or local record is not defined. This patch
fixes it.
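The guard is presumably of this shape (a sketch; the comparison and
skip logic are assumptions):
  my $incoming_005 = $record->field('005') ? $record->field('005')->data : undef;
  my $local_005    = $marc->field('005')   ? $marc->field('005')->data   : undef;
  if ( $incoming_005 && $local_005 && $incoming_005 lt $local_005 ) {
      # the incoming record is older than the local copy; keep the local one
  }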
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Test plan
1/ Create a record with no 005 field
2/ Try to import it checking for duplicates; notice it crashes
3/ Try with a record with a 005 field where the matching record in
Koha is missing one; it still crashes
4/ Apply patch
5/ No more crash
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
The patch fixes the problem described for importing authorities
with bulkmarcimport.pl when trying to match with existing
records.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The -munge-config switch has been deprecated for years, and
trying to use it would either not work at all or, if it did "work",
almost certainly damage one's Zebra configuration for Koha.
This patch removes this switch.
To test:
[1] Run rebuild_zebra.pl and verify that no mention is made
of -munge-config.
[2] Run rebuild_zebra.pl to index records in one's test database
and verify that there are no regressions.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Removing a really dangerous option
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Ran rebuild_zebra.pl with various options and confirmed
that data was reindexed successfully.
No regressions found.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds support for the --test option, as well as a
short message telling the user the script is running in test mode.
Test plan:
- Launch the script with -h to see the help
- Launch the script with --test and --aut with an authtypecode
that is used in your instance
- Make sure it does the same thing as launching it with -t
- Launch the script for real and make sure it still works as
expected, deleting unused authorities.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>