This allows the --framework option to be specified when running
bulkmarcimport.pl, so that a framework code can be specified for the
records being imported.
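A minimal usage sketch (the MARC file name is illustrative; -file is
the script's existing input option):
  misc/migration_tools/bulkmarcimport.pl -file records.mrc -framework FA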
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
All tests pass, perlcritic fails before and after.
Tested
- imported records with -framework FA, FA framework is used
- imported records without -framework, default framework is used
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
If a subscription is no longer being published (or we are no longer
waiting for a new issue), we are allowed to close it.
Once a subscription is closed, we are not able to receive or generate a
new serial for it.
In the serials module, we can now
- close a subscription
- reopen a closed subscription
On the serials search page, two tabs are displayed (opened and closed
subscriptions).
This patch adds:
- a new field subscription.closed in DB
- a new status for serials (8 = stopped)
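A sketch of the corresponding schema change (the column name and
default come from this patch; see updatedatabase.pl for the exact DDL):
  ALTER TABLE subscription ADD COLUMN closed INT(1) NOT NULL DEFAULT 0;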
Test plan:
- search subscriptions
- close a subscription and check that you cannot receive or generate a
  new serial
- launch another search and check that the closed subscription appears
  in the "closed" tab
- you can reopen a subscription from the subscription detail page and
  from the subscription search results page; a JavaScript alert asks
  you to confirm the operation
- Check the serial status "stopped" everywhere the status is
displayed (catalogue/detail.pl, serials/claims.pl,
serials/serial-issues-full.pl, serials/serials-collection.pl,
serials/serials-edit.pl, serials/serials-recieve.pl,
serials/subscription-detail.pl and opac-full-serial-issues.pl)
- The statistics report does not include closed subscriptions unless
  you check the "Include expired subscriptions" checkbox.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 8782: Followup: add some minor modifications
- Show 'closed' information in biblio detail page
- Add a column in serials report table
- Search subscriptions on title words instead of string
- Prevent serials editing when subscription is closed
- Don't change status of "disabled" serials
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 8782 - Close a subscription - Followup - Fix updatedatabase.pl
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Mathieu Saby <mathieu.saby@univ-rennes2.fr>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Previously we used the "delete" command in zebraidx, which fails when
you try to delete a record that doesn't exist in the index. By changing
to the "adelete" command, we can reduce the likelihood of a failed
delete causing ghost records. A symptom of this problem is the warning
message occasionally encountered when indexing from the zebraqueue,
"[warn] cannot delete record above (seems new)."
To test:
1) Add a recordDelete action for a record that does not exist to
zebraqueue in MySQL:
INSERT INTO zebraqueue (biblio_auth_number, operation, server) \
VALUES (999999999, 'recordDelete', 'biblioserver');
2) Run `rebuild_zebra.pl -b -z -v [-x]`.
3) Note that you do not get the message "[warn] cannot delete record
above (seems new)".
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Passed-QA-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Before this patch:
* misc/maintenance/fix_mysql_constraints.pl FAIL
pod FAIL
*** ERROR: unterminated B<...> in file misc/maintenance/fix_mysql_constraints.pl
forbidden patterns OK
valid OK
critic FAIL
"require" statement with library name as string at line 25, column 12. Use a bareword instead.
+ I added a die on the open statement
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Passed-QA-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Help: fix_mysql_constraints.pl -h
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Passed-QA-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
The script misc/cronjobs/smsoverdues.pl requires HTML::Template::Pro and
uses templates that do not exist in Koha. Since this has been true for
at least a year and a half, and no one is aware of anyone who is using
it, it seems prudent to remove the script so that no one is confused
and/or distressed by its non-functioning nature.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Passed-QA-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
[This patch was split out from tcohen's excellent patches for bug 8519
--jcamins]
It also removes the obsolete zebraqueue_daemon.pl and
koha-zebraqueue-ctl.sh scripts.
Several files are modified to address the removal/addition of these files.
I didn't run the install procedure, as I was working on my laptop with a
dev setup; I just set the symlinks. Things are now fixed as proposed by
wajasu in comment #4. Any other suggestions, please let me know.
Tested to work on an up-to-date Ubuntu 12.04.
As asked by wajasu, remaining obsolete zebraqueue leftovers are also removed.
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Passed-QA-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
This script removes these duplicate fines. To use it, run the script
repeatedly until there are no more duplicates in the database.
(fix_accountlines_rmdupfines_bug8253.pl)
Duplicate fines would occur if you upgraded to a 3.8 version that did
not yet have the bug 8253 patch and misc/cronjobs/fines.pl was run. The
3.8 upgrade to a more granular date/time was not applied to pre-existing
fine entries, and this script removes the resulting duplicates. It also
intelligently preserves the amount outstanding for payments already
applied. If your version already had the bug 8253 patch at the time of
the upgrade, duplicate fines should not have been generated.
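The duplicates can be spotted with a query along these lines (a sketch;
the script's actual grouping and cleanup logic is more involved):
  SELECT borrowernumber, itemnumber, COUNT(*)
  FROM accountlines
  WHERE accounttype IN ('F', 'FU')
  GROUP BY borrowernumber, itemnumber
  HAVING COUNT(*) > 1;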
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Modify the misc/translator/translate script so that it properly manages
alternate OPAC templates.
To test it with new 'ccsr' template:
- Create the .po file:
./translate create fr-FR
Result: existing .po files are not modified. A new fr-FR-opac-ccsr.po file is
available.
- Install all templates :
./translate install fr-FR
Result: A new koha-tmpl/opac-tmpl/ccsr/fr-FR directory contains translated
templates.
- Update .po files:
./translate update fr-FR
Result: fr-FR .po files are updated, including fr-FR-opac-ccsr.po
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Simple addition of the koha user to the sample cron file. This might
help non-technical users get things like incremental indexing working.
Sponsored-by: Universidad Nacional de Córdoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
The current script, check-url.pl, checks URLs found in 856$u by sending
HTTP requests one by one. The next request can't be sent before the
previous one gets a result, which can be very slow for dead URLs. I
propose a new script which sends multiple requests simultaneously,
improving URL-checking execution time drastically.
This script is based on the AnyEvent and AnyEvent::HTTP CPAN modules.
Adds new dependencies: AnyEvent and AnyEvent::HTTP.
See doc: perldoc check-url-quick.pl
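The core pattern looks roughly like this (a minimal sketch, not the
script's actual code; @urls stands in for the values harvested from
856$u):
  use AnyEvent;
  use AnyEvent::HTTP;

  my $cv = AE::cv;                 # condvar: wait for all requests
  for my $url (@urls) {
      $cv->begin;                  # register one pending request
      http_get $url, timeout => 10, sub {
          my ( $body, $headers ) = @_;
          # report anything that is not a 2xx response
          print "$url => $headers->{Status} $headers->{Reason}\n"
              unless $headers->{Status} =~ /^2/;
          $cv->end;                # this request is done
      };
  }
  $cv->recv;                       # block until every callback has run
Because AnyEvent::HTTP keeps the requests in flight concurrently, dead
URLs no longer serialize the whole run.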
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
This script batch-deletes the biblios whose biblionumbers are listed in
the file passed as a parameter.
If a biblio has items, it is not deleted.
http://bugs.koha-community.org/show_bug.cgi?id=8674
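The core loop might look like this (a sketch only, assuming a plain
list of biblionumbers, one per line; error handling trimmed):
  use C4::Biblio;
  use C4::Items;

  open my $fh, '<', $filename or die "Cannot open $filename: $!";
  while ( my $biblionumber = <$fh> ) {
      chomp $biblionumber;
      # skip records that still have items attached
      next if C4::Items::GetItemsCount($biblionumber);
      my $error = C4::Biblio::DelBiblio($biblionumber);
      warn "Could not delete biblio $biblionumber: $error" if $error;
  }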
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Created a file with biblionumbers for bibs with and without items.
Only the bibs without items were deleted.
An eval { eval "require $module;" }; was replaced with
eval { eval { require $module; }; }; which is a no-op, meaning that
the linker was not getting loaded, and the catalog module was throwing
up a big nasty error every time someone tried to save a record with a
heading. This patch replaces the require with can_load from
Module::Load::Conditional, which is PBP-friendly, and offers equivalent
functionality.
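The replacement pattern (a sketch of typical can_load usage; variable
names are illustrative):
  use Module::Load::Conditional qw( can_load );

  if ( can_load( modules => { $module => undef } ) ) {
      # the module is loaded and ready to use
  }
  else {
      warn "Unable to load $module: $Module::Load::Conditional::ERROR";
  }
Unlike a string eval, can_load is PBP-friendly and reports why loading
failed.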
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
The patches for bug 7001 removed the parseletter subroutine from
C4::Letters without updating the talking tech script to use the
new alternative. This patch rectifies that situation.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Expose authority import functionality to the command line import
scripts, and rename them from commit_biblios_file.pl and
stage_biblios_file.pl to commit_file.pl and stage_file.pl.
To test (note that these instructions assume you have a MARC21
installation and are using the provided sample file):
1. Find a file of authorities (a sample file with MARC21 authorities
is attached to bug 7475) and download it to your server
2. Stage the file using the following command (replace <filename> with
the name of the file you saved in step 1):
> misc/stage_file.pl --file <filename> --authorities
3. Note the batch number the script assigns to your batch
4. Commit the records using the following command (replace <batchnumber>
with the batch number you made note of in step 3):
> misc/commit_file.pl --batch-number <batchnumber>
5. Index the authorities Zebraqueue (or wait)
6. Confirm that the new authorities appear.
7. Create a matching rule with the following settings:
Code: AUTHTEST
Description: Personal name main entry
Match threshold: 999
Record type: Authority record
Search index: Heading-main
Score: 1000
Tag: 100
Subfields: a
Offset: 0
Length: 0
(note the ID of this matching rule)
8. Stage the authority file again, this time using the following
command:
> misc/stage_file.pl --file <filename> --authorities \
--match <matchingrule>
9. Revert the import with the following command:
> misc/commit_file.pl --batch-number <batchnumber> --revert
10. Index the authorities Zebraqueue (or wait)
11. Confirm that the records have been removed
12. Import an authority record with the Stage MARC/Manage staged MARC
tools in exactly the way you would for a bibliographic record,
but choose "Authority" instead of "Bibliographic" for the record
type.
Signed-off-by: Elliott Davis <elliott@bywatersolutions.com>
Testing plan delivers as it should.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on latest master 11 September 2012
* Add the code necessary to handle authorities with matching rules and
import batches.
* Update all the scripts that use the matcher and import batch code
to use the new API.
* Add authority records to the matching rules interface in the staff
client.
http://bugs.koha-community.org/show_bug.cgi?id=2060
Signed-off-by: Elliott Davis <elliott@bywatersolutions.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on latest master 11 September 2012
This should make saved reports more manageable.
Group/Subgroup hierarchy is stored in authorised_values,
categories REPORT_GROUP and REPORT_SUBGROUP, connected by
REPORT_SUBGROUP.lib_opac -> REPORT_GROUP.authorised_value
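For example, a group and a subgroup hanging off it might be created
like this (an illustrative sketch; real values are entered through the
interface):
  INSERT INTO authorised_values (category, authorised_value, lib)
  VALUES ('REPORT_GROUP', 'CIRC', 'Circulation');
  INSERT INTO authorised_values (category, authorised_value, lib, lib_opac)
  VALUES ('REPORT_SUBGROUP', 'OVERDUE', 'Overdues', 'CIRC');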
Database changes:
* authorised_values: expanded category to 16 chars
* created default set of REPORT_GROUP authorised values to match
hardcoded report areas
* reports_dictionary: replaced area int with report_area text, converted
values
* saved_sql: added report_area, report_group and report_subgroup;
report_area is not currently used, saved for the record
C4/Reports/Guided.pm:
* Replaced Area numeric values with the mnemonic codes
* get_report_areas(): returns the hardcoded areas list
* created get_report_groups(): returns the full hierarchy (groups with
their subgroups)
* save_report(): changed interface, accepts a fields hashref as input
* update_sql(): changed interface, accepts an id and a fields hashref
as input
* get_saved_reports():
- join to authorised_values to pick group and subgroup name
- accept group and subgroup filter params
* get_saved_report():
- changed interface, returns a record hashref
- join to authorised_values to pick group and subgroup name
* build_authorised_value_list(): new sub, moved code from
reports/guided_reports.pl
* Updated interfaces in:
cronjobs/runreport.pl, svc/report, opac/svc/report: get_saved_report()
reports/dictionary.pl: get_report_areas()
reports/guided_reports.pl
reports/guided_reports_start.tt:
* Reports list:
- added group/subgroup filter
- display area/group/subgroup for the reports
* Create report wizard:
- carry area to the end
- select group and subgroup when saving the report; group defaults to area,
useful when report groups match areas
* Update report and Create from SQL: added group/subgroup
* Amended reports/guided_reports.pl accordingly
Conflicts:
C4/Reports/Guided.pm
admin/authorised_values.pl
installer/data/mysql/kohastructure.sql
installer/data/mysql/updatedatabase.pl
koha-tmpl/intranet-tmpl/prog/en/modules/reports/dictionary.tmpl
koha-tmpl/intranet-tmpl/prog/en/modules/reports/guided_reports_start.tmpl
misc/cronjobs/runreport.pl
reports/dictionary.pl
reports/guided_reports.pl
Signed-off-by: Delaye Stephane <stephane.delaye@biblibre.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
indexing not indexation
some minor grammatical changes
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Bug 8378 - <fine> syntax broken NFC and charset utf8
NFC-normalize enqueued letters and add charset=utf-8 to the content
type.
This prevents utf8 characters from causing MySQL to truncate the
'content' from the point of certain codes when it is stored in the
message_queue table. This was happening with the currency symbol
generated by the Locale::Currency::Format currency_format routine. NFC
normalization was only done on attachment content whose content-type
contains "text", as in text/plain.
For emails AND attachments, charset="utf-8" was added to the
content-type so mail clients would correctly interpret the utf8 codes,
thus preventing mojibake.
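The normalization itself is a thin wrapper around Unicode::Normalize (a
sketch; $content stands for the letter body being enqueued):
  use Unicode::Normalize qw( NFC );

  $content = NFC($content);   # compose combining marks into NFC form
  # ...and declare the encoding explicitly when the mail is assembled:
  # Content-Type: text/plain; charset="utf-8"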
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Ran through test plan before and after applying patch. Verified
that fine syntax does not work pre-patch and does work post-patch
for both direct emails and emails to the KohaAdminEmailAddress.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
On some Linux distributions, like Red Hat, Fedora, and CentOS, you can
use SELinux for enhanced security. Among other things, this involves
file labeling (security contexts). On other distributions, SELinux can
be installed additionally.
The attached script lets you update and restore such labels on the Perl
scripts in a Koha installation.
July 18, 2012: Added opac/svc.
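For context, doing the labeling by hand looks something like this (a
hypothetical sketch using the standard SELinux tools, not the attached
script itself; the path and context type are assumptions):
  # register a context for Koha's CGI scripts...
  semanage fcontext -a -t httpd_sys_script_exec_t '/usr/share/koha/.*\.pl'
  # ...then (re)apply the labels recursively
  restorecon -R -v /usr/share/koha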
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
This patch adds the Koha::Indexer::RecordNormalizer and
Koha::Indexer::MARC::RecordNormalizer::EmbedSeeFromHeadings packages
to enable the inclusion of alternate forms of headings in bibliographic
searches. When the new syspref IncludeSeeFromInSearches is turned on
(default is off) rebuild_zebra.pl will insert see from headings from
authority records into bibliographic records when indexing, so that a
search on an obsolete term will turn up relevant records.
To test:
1) Enable IncludeSeeFromInSearches
2) Add a heading that has an alternate form to a record (for example,
"Cooking" has the alternate form "Cookery," if you have authority
records from LC)
3) Index the zebraqueue (or reindex if you haven't indexed your system
yet)
4) Confirm that if you search for "Cookery" you get the record you
just modified
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 5 August 2012
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Rebased on master 11 September 2012
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Also checked:
- Verified database update works correctly
- Checked system preference and its description
- Checked staff/opac detail pages with feature on/off
- Checked staff/opac search facets
- Downloaded and tested records in various formats
- Tried different searches for 'see from' entries of authorities
- Ran all unit tests
No problems found.
Create the transport_cost table and add the UseTransportCostMatrix
syspref.
The transport_cost table contains branch-to-branch transfer costs. These
are used when filling inter-branch hold transfers.
Moved GetHoldsQueueItems() from the .pl script to HoldsQueue.pm.
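A sketch of the table's shape (column names as this patch suggests;
kohastructure.sql is authoritative):
  CREATE TABLE transport_cost (
      frombranch       VARCHAR(10)  NOT NULL,
      tobranch         VARCHAR(10)  NOT NULL,
      cost             DECIMAL(6,2) NOT NULL,
      disable_transfer TINYINT(1)   NOT NULL DEFAULT 0,
      PRIMARY KEY (frombranch, tobranch)
  );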
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Prior patches to this bug had lots of comments like "I don't have a way to test this, so..."
In the OCLC Connexion web client, when you choose the option to export
to MARC, it will *send* the record and say "Record Exported," but it
does nothing whatsoever to confirm that the record actually landed in
Koha. That's a flaw in their software, but it can easily be checked by
looking in Koha to see whether an import batch got created. The desktop
client is a little smarter about this, but also needed much more
testing.
With this patch, both the desktop client and the web client will
actually work. With a config file and setup as previously described, the
record will be staged and/or imported, and the desktop client returns a
useful message about what happened, *and* the staff client URL for the
record.
Oodles of gobs of bunches of thanks to Virginia Military Institute, for loaning me their OCLC
authorization credentials so this could be tested, as well as for great suggestions of cosmetic
improvements to the mechanism and output.
Suspended holds are showing up in both the holds queue and holds to pull reports.
This patch amends the SQL queries so that any hold that is suspended is
not selected.
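The change amounts to an extra predicate on the reserves queries, along
these lines (a sketch; the real queries are much larger):
  SELECT ...
  FROM reserves
  WHERE ...
    AND suspend = 0   -- skip suspended holds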
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
GET requests in benchmark_staff.pl test 6 do not work if a space
character is part of the barcode. That seems highly unlikely to happen
in real barcodes, but is possible if no real barcodes are used and a
substitute, like a copy of the call number, is used instead. The space
character needs to be changed to %20 for the request to work.
Also fixes a typo.
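Escaping like this is what URI::Escape provides (a sketch of the idea):
  use URI::Escape qw( uri_escape );
  my $escaped = uri_escape($barcode);   # ' ' becomes %20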
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
The script is unusable.
The variable $date is not replaced with its value.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Adds the ability to use branches.* fields in digest notices and
have them be parsed correctly. Also adds a warning to the notices
editor for digests.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
I like the idea to show a warning, but I would perhaps move
it under the message body label to be more obvious.
Patch works nicely, branch data of my user's home library
is displayed in the notice.
Replaced the existing MaxFine syspref logic with overduefinescap.
Repurposed MaxFine as the overall limit for all overdue fines combined.
Implemented the new MaxFine logic in UpdateFine().
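In outline, the two caps behave like this (a pseudocode-level sketch,
not the actual UpdateFine() code; variable names are illustrative):
  # per-item cap from the circulation rules:
  $amount = $overduefinescap
      if $overduefinescap && $amount > $overduefinescap;

  # overall cap: never let the borrower's combined overdue fines
  # exceed MaxFine
  my $room_left = $maxfine - $total_overdue_fines;
  $amount = $room_left if $maxfine && $amount > $room_left;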
Signed-off-by: Elliott Davis <elliott@bywatersolutions.com>
Tested according to Srdjan's test plan and everything worked like he
said it would. I set the fine equal to $2 and the max fine equal to $1.
When I ran the fines script for overdue items, the fines assessed were
only $1.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This script prints to standard output what is returned by
GetMemberDetails, in CSV format.
The fields to export can be specified with the -f option. If no -f
option is specified, all fields are exported.
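Hypothetical usage (the script path and field names here are
illustrative; check the script's help output for the real ones):
  misc/export_borrowers.pl -f surname -f firstname -f cardnumber > borrowers.csv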
Signed-off-by: Gaetan Boisson <gaetan.boisson@biblibre.com>
Signed-off-by: Robin Sheat <robin@catalyst.net.nz>
Amended with some code to better handle bad data.
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
$OUTPUT was being used without being declared.
When trying to run this script I got a nasty:
15:42 ~/koha.dev/koha-community (new/bug_8063 $%)$ ./misc/cronjobs/gather_print_notices.pl
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 81.
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 95.
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 102.
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 106.
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 120.
Global symbol "$OUTPUT" requires explicit package name at ./misc/cronjobs/gather_print_notices.pl line 127.
Execution of ./misc/cronjobs/gather_print_notices.pl aborted due to compilation errors.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Before the patch:
$perl -wc ./misc/cronjobs/gather_print_notices.pl
Global symbol "$OUTPUT" requires explicit package name at
[...]./misc/cronjobs/gather_print_notices.pl line 81.
./misc/cronjobs/gather_print_notices.pl had compilation errors.
With this patch:
$perl -wc ./misc/cronjobs/gather_print_notices.pl
./misc/cronjobs/gather_print_notices.pl syntax OK
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Adds the option -s/--split to enable notices to be separated
into different files by borrower home library.
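For example (a sketch; the output directory argument follows the
script's existing convention):
  misc/cronjobs/gather_print_notices.pl /tmp/notices --split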
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Fixes the following things:
1. Sanitizes log output to prevent an attacker from using a specially
crafted POST to add extra lines to the log
2. Simplifies a regular expression, since "..file" cannot be used to
escape the current directory
3. Makes sure directories are consistent
4. Corrects logic issues in misc/cronjobs/backup.sh
Thanks to Frere Sebastien Marie for catching these issues.
Signed-off-by: Robin Sheat <robin@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
This patch builds on work by Lars Wirzenius for the Koha packages.
To date, the only way for a Koha librarian to obtain a complete backup
of their system has been to log into the system via SSH (or FTP) to
download the mysqldump file. This patch makes it possible for
superlibrarians in properly configured systems to download night backups
via the staff client's Export tool.
Recognizing that this is functionality with potentially very grave
security implications, system administrators must manually enable these
features in the koha-conf.xml configuration file.
The following configuration settings have been added to the koha-conf.xml
file:
* backupdir => directory where backups should be stored.
* backup_db_via_tools => whether to allow superlibrarians to download
database backups via the Export tool. The default is disabled, and
there is no way -- by design -- to enable this option without manually
editing koha-conf.xml.
* backup_conf_via_tools => whether to allow superlibrarians to download
configuration backups via the Export tool (this may be applicable to
packages only). The default is disabled, and there is no way -- by
design -- to enable this option without manually editing koha-conf.xml.
This commit modifies the following scripts to make use of the new
backupdir configuration option:
* koha-dump and koha-run-backups in the Debian packages
* The sample backup script misc/cronjobs/backup.sh
Note that for security reasons, superlibrarians will not be allowed
to download files that are not owned by the web server's effective user.
This imposes a de facto dependency on ITK (for Apache) or running the
web server as the Koha user (as is done with Plack).
To test:
1. Apply patch.
2. Go to export page as a superlibrarian. Notice that no additional
export options appear because they have not been enabled.
3. Add <backupdir>$KOHADEV/var/spool</backupdir> to the <config> section
of your koha-conf.xml (note that you will need to adjust that so that
it is pointing at a logical directory).
4. Create the aforementioned directory.
5. Go to export page as a superlibrarian. Notice that no additional
export options appear because they have not been enabled.
6. Add <backup_db_via_tools>1</backup_db_via_tools> to the <config>
section of your koha-conf.xml
7. Go to the export page as a superlibrarian. Notice the new tab.
8. Go to the export page as a non-superlibrarian. Notice there is no
new tab.
9. Run: mysqldump -u koha -p koha | gzip > $BACKUPDIR/backup.sql.gz
(substituting appropriate user, password, and database name)
10. Go to the export page as a superlibrarian, and look at the "Export
database" tab. If you are running the web server as your Koha user,
and ran the above command as your Koha user, you should now see the
file listed as an option for download.
11. If you *did* see the file listed, change the ownership to something
else: sudo chown root:root $BACKUPDIR/backup.sql.gz
11a. Confirm that you no longer see the file listed when you look at the
"Export database" tab.
12. Change the ownership on the file to your web server (or Koha) user:
sudo chown www-data:www-data backup.sql.gz
13. Go to the export page as a superlibrarian, and look at the "Export
database" tab. You should now see backup.sql.gz listed.
14. Choose to download backup.sql.gz
15. Confirm that the downloaded file is what you were expecting.
If you are interested, you can repeat the above steps but replace
<backup_db_via_tools> with <backup_conf_via_tools>, and instead of
creating an sql file, create a tar file.
To test packaging: run koha-dump, confirm that it still creates a
usable backup.
------
This signoff contains two changes:
10-1. If no backup/conf files were present, then the message telling you
so doesn't appear and the download button does. Made them behave
correctly.
10-2. The test for a file existing required it to be owned by the
webserver UID. This change makes it so it only has to be readable.
Signed-off-by: Robin Sheat <robin@catalyst.net.nz>
Fix syntax errors preventing the scripts misc/translator/text-extract2.pl
and misc/cronjobs/thirdparty/TalkingTech_itiva_inbound.pl from compiling.
Remove misc/migration_tools/build6xx.pl entirely since it refers to
columns that no longer exist in the Koha database, and has seemingly
had broken encoding since Koha switched from CVS to git (or before!).
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
SIPServer.pm requires that C4/SIP be added to its lib path. This has
been done by passing the directory to it via -I. By using FindBin it can
set the path for itself correctly. This will also work if the C4/SIP
directory tree is moved to a non-standard location.
Removed the now-redundant -I from sip_run.sh.
Added a variable to sip_run.sh for the Koha tree, to highlight a problem
with the script if you have multiple directories in the PERL5LIB
environment variable.
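The FindBin idiom in question (a sketch; $Bin resolves at runtime to
the directory containing SIPServer.pm):
  use FindBin qw( $Bin );
  use lib "$Bin";   # C4/SIP modules stay findable wherever the tree lives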
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Small script that checks whether each bibliographic record in the DB is
properly indexed.
Use -h to learn more.
(MT #6389)
Signed-off-by: Robin Sheat <robin@catalyst.net.nz>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Complete rewrite of rebuild_zebra_sliced.zsh (renamed to .sh). The main
improvements are:
- both biblio and authority records are handled
- records are exported only once
It also adds an option, --skip-index, to rebuild_zebra.pl that permits
using rebuild_zebra.pl as an 'export only' script.
Description:
Indexes Koha records in chunks. This is useful when some record causes
errors and stops the indexing process. With this script, if indexing of
one chunk fails, the chunk is split into 2 (or 3) smaller chunks, and
indexing continues on those chunks.
rebuild_zebra.pl is called only once, to export the records. Splitting
and indexing are handled by this script (using yaz-marcdump and
zebraidx).
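Export-only usage of rebuild_zebra.pl as introduced here (the
--skip-index flag comes from this patch; -b and -x are the existing
biblio/MARCXML options):
  misc/migration_tools/rebuild_zebra.pl -b -x --skip-index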
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Because updating the total issues count associated with a bibliographic
record on issue could cause a significant load on the server, this
commit adds the syspref UpdateTotalIssuesOnCirc (which defaults to OFF
to match existing behavior). The syspref has the following description:
Do/Do not update a bibliographic record's total issues count whenever
an item is issued (WARNING! This increases server load significantly;
if performance is a concern, use the update_totalissues.pl cron job
to update the total issues count).
Bug 6557: automatically increment totalissues
Adds the ability to automatically increment biblioitems.totalissues
whenever an item is issued.
To test:
1) Choose a record with at least one item that can circulate
2) Check the value of 942$0 (you may need to look at the plain MARC view
on the OPAC). Most likely there won't be any 942$0 at all
3) Enable UpdateTotalIssuesOnCirc
4) Check out the item you selected
5) Check the value of 942$0 (you may need to look at the plain MARC view
on the OPAC). That value should now be one greater than before
6) Discharge the item
7) Disable UpdateTotalIssuesOnCirc
8) Check out the item you selected again
9) Check the value of 942$0 (you may need to look at the plain MARC view
on the OPAC). That value should not have changed
Bug 6557: add script to update totalissues from stats
NAME
update_totalissues.pl
SYNOPSIS
update_totalissues.pl --use-stats
update_totalissues.pl --use-items
update_totalissues.pl --commit=1000
update_totalissues.pl --since='2012-01-01'
update_totalissues.pl --interval=30d
DESCRIPTION
This batch job populates bibliographic records' total issues count
based on historical issue statistics.
--help Prints this help
-v|--verbose
Provide verbose log information (list every bib modified).
--use-stats
Use the data in the statistics table for populating total
issues.
--use-items
Use items.issues data for populating total issues. Note that
issues data from the items table does not respect the --since
or --interval options, by definition. Also note that if both
--use-stats and --use-items are specified, the count of biblios
processed will be misleading.
-s|--since=DATE
Only process issues recorded in the statistics table since
DATE.
-i|--interval=S
Only process issues recorded in the statistics table in the
last N units of time. The interval should consist of a number
with a one-letter unit suffix. The valid suffixes are h
(hours), d (days), w (weeks), m (months), and y (years). The
default unit is days.
--incremental
Add the number of issues found in the statistics table to the
existing total issues count. Intended so that this script can
be used as a cron job to update popularity information during
low-usage periods. If neither --since nor --interval is
specified, incremental mode will default to processing the
last twenty-four hours.
--commit=N
Commit the results to the database after every N records are
processed.
--test Only test the popularity population script.
WARNING
If the time on your database server does not match the time on your Koha
server you will need to take that into account, and probably use the
--since argument instead of the --interval argument for incremental
updating.
=== TESTING PLAN ===
NOTE: in order to test this script, you will need to have some sort of
circulation data already existing in your Koha installation.
1) Disable UpdateTotalIssuesOnCirc
2) Run: misc/cronjobs/update_totalissues.pl --use-items -t -v
3) If you have total checkout data in your item records (i.e. anything
in 952$l), you should see messages like "Processing bib 43 (1 issues)"
4) Choose one of the lines that shows more than 0 issues, and view the
record with that biblionumber in the staff client, choosing the "Items"
tab (moredetail.pl). Add up the "Total checkouts" listed for each item,
and confirm it matches what the script reported
5) Run: misc/cronjobs/update_totalissues.pl --use-stats -t -v
6) If you have any circulation statistics in your database (i.e. any
'issue' entries in your statistics table), you should see messages
like "Processing bib 43 (1 issues)";
7) Choose one of the lines and view the record with that biblionumber in
the staff client, choosing the "Items" tab (moredetail.pl). If you
count the number of checkouts listed in each item's checkout history,
the total should match what the script reported.
8) Check out an item
9) Run: misc/cronjobs/update_totalissues.pl --use-stats
--incremental --interval=1h -t -v
10) You should see one line reporting a single circ for the bib record
associated with the item you just checked out (there may be more if
you checked out any books in the hour prior to running these tests)
11) If the results in steps 4, 7, and 10 match the predictions, the
script worked
This patch to Koha was sponsored by the Arcadia Public Library and the
Arcadia Public Library Foundation in honor of Jackie Faust-Moreno, late
director of the Arcadia Public Library.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Tested this with my test data - numbers are correct and updated appropriately.
More importantly - if I do a popularity search, the most popular items *come up first*. Amazing.