Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Test plan:
1) Apply the patches
2) cd misc/translator
3) Run these commands one by one:
./translate install <lang-code>
./translate update <lang-code>
./translate create <lang-code>
4) None of them should end with an error, and each must do what is
documented (see ./translate --help)
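For example, substituting a concrete language code (fr-FR is used elsewhere
in this patch set):
  ./translate install fr-FR
  ./translate update fr-FR
  ./translate create fr-FR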
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
The help for this script says:
-t --type: If supplied, only processes this type of message ( email, sms )
Currently, the type argument is set up wrong, so it does not look
for an argument. This patch fixes that.
To test, run this command (should work in kohadevbox) or something
similar:
$ sudo koha-shell -c "perl \
/home/vagrant/kohaclone/misc/cronjobs/process_message_queue.pl -v \
--type=sms" kohadev
This should give the following error: "Option type does not take an argument".
Apply the patch and run the same command again. This should not give an
error.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Perl 5.26 introduced a security feature: the program's own directory is
no longer implicitly included as a Perl library directory (the implicit
'perl -I.' is gone).
This causes translate to fail because it cannot find the *.pm files in
its own directory.
This patch adds the familiar mantra
use lib $FindBin::Bin;
to the relevant scripts.
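For reference, the idiom looks like this near the top of a script (a minimal
sketch; the scripts may already load FindBin):
  use FindBin;
  use lib $FindBin::Bin;   # put the script's own directory back into @INC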
To test:
1. Install Ubuntu 18.04 or something else with Perl 5.26
2. Install Koha (we use the dev-install)
3. cd $KOHA_PATH/misc/translator/
4. perl translate create fi-FI
5. Observe problems with missing modules.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Cannot recreate the issue right now but the changes make sense.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Some libraries want to be able to skip patrons with valid email addresses when generating outbound files for Talking Tech.
Test Plan:
1) Apply this patch
2) Run TalkingTech_itiva_outbound.pl
Overdues will be easiest to use for testing
3) Note one or more patrons show up that have email addresses
4) Run again with -s ( or --skip-patrons-with-email )
5) Note the new file no longer has patrons with emails!
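A typical run for steps 2-4 might look like this (the script path and the
--type option are assumed from a standard Koha install; only
-s/--skip-patrons-with-email is new here):
  misc/cronjobs/thirdparty/TalkingTech_itiva_outbound.pl --type OVERDUE -s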
Signed-off-by: Jesse Maseto <jesse@bywatersolution.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
I did not test but changes make sense
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
In delete_patrons.pl's POD
To be more readable and crontab friendly
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
This table was only used by XISBN; this patch removes the table and the
related code (cronjobs).
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
https://bugs.koha-community.org/show_bug.cgi?id=21235
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Remove:
- BIB_INDEX_MODE and AUTH_INDEX_MODE env var
- bib_index_mode and auth_index_mode options from scripts
- Warnings from about page, just kept one if zebra_bib_index_mode or
zebra_auth_index_mode still exist in config and are set to grs1
Test plan:
- Install Koha from src
- Install Koha from pkg
- Read the code, carefully!
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Rebased
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Bug 11401 introduced code to support Norwegian national library card.
This code is too specific to be part of Koha as it is; it should be a
plugin instead.
Moreover, nobody uses it as-is, only a modified version (see comment 3).
Test plan:
Add/edit/delete patron and make sure there are no regressions introduced
by these patches
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@deichman.no>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Improvements:
1) The mappings UI now has a button that allows one to reset the mappings.
2) Mappings UI displays the items in alphabetical order.
3) Indexing script drops and recreates the index right away, which
helps prevent ES from autocreating a bad index if someone does something
while the first batch of records is being processed.
4) Indexing script has nicer output.
To test:
1) Change mappings.yaml file
2) Reset mappings in UI in mappings.pl
3) Verify the mappings have been changed in UI
4) The field order is alphabetical
5) Rebuild script has clean output
6) Run test t/db_dependent/Koha_Elasticsearch_Indexer.t
Signed-off-by: Bouzid Fergani <bouzid.fergani@inlibro.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
search_for_data_inconsistencies.pl will now display errors if:
1. item-level_itypes is set to "specific item" and items.itype is not set
   or not set to an item type defined in the system (itemtypes.itemtype)
2. item-level_itypes is set to "biblio record" and biblioitems.itemtype is not set
   or not set to an item type defined in the system (itemtypes.itemtype)
Test plan:
Use the script and the different possible combinations to display the
errors
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
This patch adds a new check in
misc/maintenance/search_for_data_inconsistencies.pl to search for not
defined authority codes.
Test plan:
Set some auth_header.authtypecode values to undefined authority codes in Koha
(UPDATE auth_header SET authtypecode="XXX" WHERE authid=42;)
Then run `misc/maintenance/search_for_data_inconsistencies.pl`
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
From bug 5789: scripts can fail if items.homebranch and/or
items.holdingbranch is not defined
This script will help people catching these migration issues.
Test plan:
Update your items table to set some homebranch or holdingbranch to NULL
Run this script
It will display the items with undefined values in these fields.
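For the first step, something like this works (itemnumber 42 is arbitrary):
  UPDATE items SET homebranch = NULL WHERE itemnumber = 42;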
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Here we go, next step then.
As we did not fix the performance issue when autofiltering
the variables (see bug 20975), the only solution we have is to add the
filters explicitly.
This patch has been autogenerated (using add_html_filters.pl, see next
patches) and adds the html filter to all the variables displayed in the
templates.
Exceptions are made (using the new 'raw' TT filter) for the variables we
already listed in the previous versions of this patch.
To test:
- Use t/db_dependent/Koha/Patrons.t to populate your DB with autogenerated
data which contain <script> tags
- Remove them from borrower_debarments.comments (they are allowed here):
update borrower_debarments set comment="html tags possible here";
- From the interface, hit pages and try to catch an alert box.
If you find one, it means you have found a possible XSS.
To know where it comes from:
* note the exact URL where you found it
* note the alert box content
* Dump your DB and search for the string in the dump to identify its
location (for instance table.field)
Next:
* Ideally we would like to use the raw filter when it is not necessary
to HTML escape the variables (in big loops for instance)
* Provide a QA script to catch missing filters (we want html, uri, url
or raw, certainly others that I am forgetting now)
* Replace the html filters with uri when needed (!)
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
This alternate patch adds a new icon to the sprite image which gives
icons to the link on the staff client home page. It modifies the CSS
positioning for all the links as the new image sprite is somewhat
different.
The SVG file from which the sprite image was generated is also updated,
and the about page has been updated to give credit to the creator of the
icon.
Unrelated change: The cataloging link is moved to the second column.
Although it's probably rare for all modules to be enabled and available,
this puts the same number of links in each column.
To test, apply the patch and clear your browser cache if necessary. With
interlibrary loan enabled, view the staff client home page and confirm
that all the module links look correct, including when you hover your
mouse over them.
Confirm that the about page lists the new icon under the "licenses" tab.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
See https://metacpan.org/pod/Text::CSV_XS#Embedded-newlines
Test plan:
1) Choose two items, say barcode '123' and '456'
2) Change the public note on 123 to read
Line1
Line2
(I.e. type 'Line1', then press Enter, type 'Line2' and click update).
3) Change the public note on 456 to read
Public note has one and only one line.
Click update.
4) Create a report with the following query:
select barcode, itemnotes from items where barcode in ( '123', '456' )
Let's say that this is report number 10.
5) run ./misc/cronjobs/runreport.pl --format=csv REPORT_ID:
=> You should see both lines
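With the report number from step 4 (10 in this example) substituted in, the
call would be something like:
  ./misc/cronjobs/runreport.pl --format=csv 10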
Signed-off-by: Maryse Simard <maryse.simard@inlibro.com>
Followed the test plan and it works.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
To test:
Make a record with a URL that has a UTF8 character, such as:
http://some.nonexistent.tld/MāoriWomenAotearoa.pdf
Run the check-url-quick.pl job, notice it dies at that URL
Apply this patch
Test again, it should work.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If you use -update but do not find matches (or did not want to match), you
should not call those routines. We should warn and skip this record.
Adding a warn at the start that the choice of options may not be smart.
Note that this needs further attention somewhere else. You could mix
-update with -insert for instance and still see some problems. (May depend
on items with unique barcode etc.)
Test plan:
Run -update without match or isbn.
Or run -update -isbn with a non-matching ISBN.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
The following code was never reached, since $isbn was not filled.
if (!$biblionumber && $isbn_check && $isbn) {
    $sth_isbn->execute($isbn);
    ($biblionumber,$biblioitemnumber) = $sth_isbn->fetchrow;
}
Solution: Fix the code with two $isbn declarations. Move the checkisbn
condition a level deeper.
Test plan:
Run misc/migration_tools/bulkmarcimport.pl -file bib726.utf8 --update -isbn
Since you do not match on biblionumber, the ISBN should match.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Since
commit cefa7c21e2
Bug 5635: bulkmarcimport new parameters & features
AddBiblio call has been replaced with ModBiblio, but the return values
are different. We should not replace the value of $biblionumber with
what this subroutine returns.
Test plan:
If you are familiar with bulkmarcimport.pl you should know what to test,
I am not.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: David Bourgault <david.bourgault@inlibro.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
To test:
1 - prove t/db_dependent/Reports/Guided.t
2 - grep "get_saved_report" - ensure there are no occurrences of the
singular form
3 - create, save, edit, and convert a report
4 - access a public report and report json from opac and staff client
5 - Ensure all function as expected
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
To test:
1 - run batchRebuildItemsTables.pl with a valid biblionumber
perl /usr/share/koha/bin/batchRebuildItemsTables.pl --where biblio.biblionumber=38483 -c
2 - Note it says 'undefined biblionumber'
3 - Apply patch
4 - Do it again
5 - It works!
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
We no longer need "use Koha::UploadedFile" in a few places.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
As explained in bug 20428, the use of tmpdir can cause issues, and it just makes sense to standardize our temp directory in a universal way.
Test Plan:
1) Apply this patch
2) Verify you can still log in and use Koha
3) Verify the web installer still works
4) Verify EDI module can still download files via FTP
5) Verify fines.pl still runs with -o option
6) prove t/db_dependent/Plugins.t
7) prove t/db_dependent/Sitemapper.t
8) prove t/db_dependent/Templates.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
1.
<failure message="msgfmt
failure">misc/translator/po/fr-CA-staff-prog.po:713: number of format
specifications in 'msgid' and 'msgstr' does not match
2.
<failure message="msgfmt
failure">misc/translator/po/hy-Armn-staff-prog.po:53901: 'msgstr' is
not a valid C format string, unlike 'msgid'. Reason: The character
that terminates the directive number 1 is not a valid conversion
specifier.
Found with:
sudo apt-get install translate-toolkit
junitmsgfmt misc/translator/po/ar-Arab-staff-prog.po
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
It implements only the "client credentials" flow with no scopes
support. API clients are tied to an existing patron and have the same
permissions as the patron they are tied to.
API Clients are defined in $KOHA_CONF.
Test plan:
0. Install Net::OAuth2::AuthorizationServer 0.16
1. In $KOHA_CONF, add an <api_client> element under <config>:
<api_client>
<client_id>$CLIENT_ID</client_id>
<client_secret>$CLIENT_SECRET</client_secret>
<patron_id>X</patron_id> <!-- X is an existing borrowernumber -->
</api_client>
2. Apply patch, run updatedatabase.pl and reload starman
3. Install Firefox extension RESTer [1]
4. In RESTer, go to "Authorization" tab and create a new OAuth2
configuration:
- OAuth flow: Client credentials
- Access Token Request Method: POST
- Access Token Request Endpoint: http://$KOHA_URL/api/v1/oauth/token
- Access Token Request Client Authentication: Credentials in request
body
- Client ID: $CLIENT_ID
- Client Secret: $CLIENT_SECRET
5. Click on the newly created configuration to generate a new token
(which will be valid only for an hour)
6. In RESTer, set HTTP method to GET and url to
http://$KOHA_URL/api/v1/patrons then click on SEND
If patron X has permission 'borrowers', it should return 200 OK
with the list of patrons
Otherwise it should return 403 with the list of required permissions
(Please test both cases)
7. Wait an hour (or run the following SQL query:
UPDATE oauth_access_tokens SET expires = 0) and repeat step 6.
You should have a 403 Forbidden status, and the token must have been
removed from the database.
8. Create a bunch of tokens using RESTer, make some of them expire
using the previous SQL query, and run the following command:
misc/cronjobs/cleanup_database.pl --oauth-tokens
Verify that expired tokens were removed, and that the others are
still there
9. prove t/db_dependent/api/v1/oauth.t
[1] https://addons.mozilla.org/en-US/firefox/addon/rester/
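If you prefer the command line over RESTer, the token request from steps 4-5
can also be made with curl (a standard OAuth2 client credentials request;
adjust the URL and credentials to your setup):
  curl -X POST "http://$KOHA_URL/api/v1/oauth/token" \
       -d grant_type=client_credentials \
       -d client_id="$CLIENT_ID" \
       -d client_secret="$CLIENT_SECRET"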
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
So far this script does not accept parameters and creates a koha/koha
superlibrarian.
This patch makes it accept parameters to create a customized
superlibrarian patron.
Test plan:
Use the script with valid and invalid parameters and confirm that it
works as expected.
Note: A cryptic "Invalid parameter passed" error is raised when the
categorycode is not valid. Better error handling must be provided, but
Koha::Exceptions seems to be an enhancement (see related bug reports).
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Given the confusion regarding this behaviour it sounds better to make it
configurable.
This pref will take 4 different values, 1 per place an item can be
marked as lost.
Test plan:
Mark items as lost and confirm the item is returned or not, depending on
the value of the system preference.
- from the longoverdue cronjob (--mark-returned takes precedence if set)
- from the batch item modification tool
- when cataloguing an item
- from the items tab of the catalog module
Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Test plan:
perl misc/cronjobs/delete_records_via_leader.pl
=> Should display a warning
perl misc/cronjobs/delete_records_via_leader.pl --test
=> Should not display a warning and script should not apply changes
perl misc/cronjobs/delete_records_via_leader.pl --confirm
=> Should not display a warning and script should apply changes
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
It would be nice to allow emails to be sent overnight, but limit the sending of SMS messages to hours when people are awake. Adding a type limit to process_message_queue.pl would allow this to be accomplished easily.
Signed-off-by: Charles Farmer <charles.farmer@inLibro.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch replaces the easy occurrences of patronflags.
These calls only need the CHARGES->amount value, that is, the non-issues
charges. Luckily we now have Koha::Account->non_issues_charges, which
deals with that.
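The replacement pattern is roughly the following (a sketch; the exact call
sites differ):
  my $charges = Koha::Account->new( { patron_id => $borrowernumber } )->non_issues_charges;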
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
A follow-up on a preceding report introduced a join instead of a
subquery. This made the categorycode ambiguous.
Test plan:
See former patches.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Test plan:
[1] Run the script with -doit and -cat [some_category] and verify that
the printed total is correct.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Charles Farmer <charles.farmer@inLibro.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Instead of updating patrons over all patron categories, it would be
helpful if we could filter on a specified category.
Test plan:
[1] Select two patrons A and B in say categories C1 and C2.
[2] Change the msg prefs for A and B away from defaults.
[3] Run borrowers-force-messaging-defaults.pl -doit -cat C1
Verify that patron A changed and patron B did not.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Charles Farmer <charles.farmer@inLibro.com>
Amended: Replace -category by --category. (marcelr 20180314)
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Currently, if outputting a CSV file using runreport.pl, you need to look at the report used to know what each column means. It would be nice if we could include column headers.
Test Plan:
1) Apply this patch
2) Try using runreport.pl with --format csv --csv-header
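For example (10 stands in for any saved report number):
  ./misc/cronjobs/runreport.pl --format csv --csv-header 10
The first row of the CSV output should now contain the report's column
headers.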
Signed-off-by: David Bourgault <david.bourgault@inlibro.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Fix the same error in another place
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
If you use bulkmarcimport.pl to import records with items, it looks
like the successful insert of the record is reported multiple times,
but the second and subsequent "ok" is really related to importing
the item(s).
This patch changes the log message on successfully inserting an item
to match the log message given when inserting an item fails.
To test, the easy way:
- Look at lines 530 and 536 of bulkmarcimport.pl, and note that the
"op" in those two lines is different
- Apply the patch
- Look at lines 530 and 536 again, and note that the "op" is now
identical, and that this makes sense, since they are both related
to the same operation, specifically inserting an item
To test, the hard way
- Have some records with items
- Import the records with bulkmarcimport.pl, and make sure to specify
the -l option, to create a log of the actions taken
- Look at the log and verify it looks something like this:
id;operation;status
1;insert;ok
1;insert;ok
2;insert;ok
2;insert;ok
- Apply this patch and import some more records with items. The log
should now be similar to this:
id;operation;status
1;insert;ok
1;insertitem;ok
2;insert;ok
2;insertitem;ok
Signed-off-by: Maksim Sen <maksim.sen@inlibro.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Fixes this error:
Undefined subroutine &main::MarkIssueReturned called at
misc/cronjobs/longoverdue.pl line 316.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
The longoverdue.pl option --mark-returned doesn't work unless the
--charge option is used as well.
Test Plan:
1) Run long overdue with --mark-returned and not --charge,
note your items are marked lost but not returned
2) Apply this patch
3) Repeat step 1, the items should now get returned!
Tested with (for example):
misc/cronjobs/longoverdue.pl --lost 10=1 --mark-returned --verbose
--confirm
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Cosmetic changes.
And: adding a confirm flag (see the earlier comment too). Without this flag,
but with a filled pref, the script would purge even when you do not pass any
parameters. This might not be appreciated.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Added missing upgrade SQL system preference.
Corrected system preference screen message
Fixes on purge_suggestions.pl
- perlcritic friendlier
- address $PERL5LIB comment by using $PROGRAM_NAME (comment #10)
- used STDERR (comment #10)
- perltidy
TEST PLAN
---------
$ ./installer/data/mysql/updatedatabase.pl
-- should run upgrade and generate new systempreference in table
$ ./misc/cronjobs/purge_suggestions.pl --help
-- should give help with a real path used instead of $PERL5LIB.
$ ./misc/cronjobs/purge_suggestions.pl -days -1
-- should give error message as expected
$ ./misc/cronjobs/purge_suggestions.pl -days 0
-- should give error message as expected
Go to OPAC system preferences tab and check the
PurgeSuggestionsOlderThan system preference
-- message should be as expected (see comment #9)
run koha qa test tools
-- all should pass
Signed-off-by: Marc Veron <veron@veron.ch>
Signed-off-by: Jon Knight <J.P.Knight@lboro.ac.uk>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Amended: Moved new pref from OPAC to Acquisitions preferences.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The system preferences value is used whenever purge_suggestions.pl is called without the 'days' parameter.
This patch uses the preference description suggested by comment #9.
This version should now be cleanly applicable.
I Apply the patch
II Run updatedatabase.pl
a) Run purge_suggestions.pl without the days parameter
- validate that there is an error message
b) Run purge_suggestions.pl with the days parameter
- validate that there is no error message
c) Insert a number of days in the system variable PurgeSuggestionsOlderThan
d) Run purge_suggestions.pl without the days parameter
- validate that there is no error message
Signed-off-by: Liz Rea <liz@catalyst.net.nz>
Tested per plan, all tests pass.
Signed-off-by: Marc Veron <veron@veron.ch>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The truncate option is not really useful. Its result is probably not what
most users of this script expect or need.
It truncates both tables borrower_message_preferences and
borrower_message_transport_preferences. This (unfortunately) includes
deleting messaging preferences for patron categories. After that, adding
preferences again will not add categories again, but only borrower
preferences which are all disabled.
Furthermore, we do not need to disable the foreign key check. Neither
do we actually need to truncate; deleting records seems sufficient.
Also deleting transport preferences is not needed, since it will be
done by a cascade from messaging preferences. Note that the subsequent
call of SetMessagingPreferencesFromDefaults will already delete the
records.
This makes it possible to remove the truncate option altogether.
Test plan:
[1] Select a patron category (say ST) and change days_in_advance to x.
[2] Select a ST patron and set days_advance to y in his msg prefs.
[3] Run borrowers-force-messaging-defaults.pl -doit
[4] Verify that the patron has been reset to the default prefs (incl.
value x in days_in_advance).
[5] Verify that the patron category prefs are still intact.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Charles Farmer <charles.farmer@inLibro.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Test Plan:
1) Apply this patch
2) Test importing patrons from command line,
options are available with --help.
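A minimal input file for step 2 could look like this (header columns as in
the sign-off note below; the data row values are made up, use a categorycode
and branchcode that exist on your system):
  cardnumber,surname,firstname,categorycode,branchcode,password,userid
  12345000,Smith,Anna,PT,CPL,Secret123,asmith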
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Tested with minimal csv
(cardnumber,surname,firstname,categorycode,branchcode,password,userid)
Overwrite does not change category or branch.
Patrons are loaded, userid & password works
Updated license to GPLv3
No other koha-qa errors.
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Tidy import_borrowers.pl
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Move importing code to a subroutine
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Update command line script to use patron import subroutine
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 [QA Followup]
* Fix copyright on import_borrowers.pl
* Changes -c --csv to -f --file
* Adds -c --confirm option
* Renames misc/import_borrowers.pl to misc/import_patrons.pl
* Restore userid matchpoint option
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Fix merge to master. Backport 3 updates from latest import_borrowers.pl
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Started regression tests. Fix missing C4::Members::Attributes package
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - More refactoring and regression tests in Koha::Patrons::Import
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Creating objects in misc/import_patrons.pl and tools/import_borrowers.pl
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Refactoring Koha::Patrons::Import includes bug fixed for critical date types and header column parsing
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598 - Rebase + backport of 16426 plus fixing 16426
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 12598: catch warnings raised by import_patrons in tests
Signed-off-by: Colin Campbell <colin.campbell@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
As requested, we add a JOIN and make the SELECT distinct.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested that no-overwrite still works as expected.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This option allows you to add preferences only when they are not yet
present. In other words: skip patrons that already set their prefs.
Test plan:
[1] Delete all borrower messaging prefs for a patron.
[2] Run borrowers-force-messaging-defaults.pl -no-overwrite -doit
Verify that the patron now has default msg preferences.
[3] Change his settings and make them non-default.
For instance, increase days in advance.
[4] Run borrowers-force-messaging-defaults.pl -no-overwrite -doit
Verify that the patron still has the non-default settings.
[5] Run borrowers-force-messaging-defaults.pl -doit
Verify that the patron msg prefs have been overwritten.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Since bug 17196, biblioitems.timestamp is not always updated after a
change in the MARC record.
Filtering should be based on biblio_metadata.timestamp instead.
REVISED TEST PLAN
-----------------
0. Do not apply patch
1. Find a biblio record, remember the biblionumber for step 3
2. Edit the record, modify a field (e.g. 003, 015$q) that is
not mapped to a DB column, so biblio_metadata.timestamp will
be modified but not biblioitems.timestamp
3. In MySQL with the koha database selected:
> select timestamp from biblio where biblionumber=###;
> select timestamp from biblio_metadata where biblionumber=###;
-- you'll need to change the ###'s based on the biblionumber
you remembered in step 1.
-- the two timestamps will differ.
-- Remember the timestamp of biblio_metadata for step 4.
4. Run this command:
$ sudo koha-shell -c bash kohadev
$ export DATE="YYYY-MM-DD HH:mm:SS"
-- use the timestamp remembered in step 3.
5. Run this command:
$ ./misc/export_records.pl --date="$DATE"
$ ls -la koha.mrc
-- the file should be 0 bytes.
6. Run this command:
$ exit
$ git bz apply 19730
$ restart_all
$ sudo koha-shell -c bash kohadev
$ export DATE="YYYY-MM-DD HH:mm:SS"
-- use the timestamp remembered in step 3.
7. Run this command:
$ ./misc/export_records.pl --date="$DATE"
$ ls -la koha.mrc
-- the file should be a lot more than 0 bytes.
8. Run this command:
$ /home/vagrant/qa-test-tools/koha-qa.pl -v 2 -c 1
-- this should pass.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When running fines.pl, any overdue items that have no corresponding circulation rule will generate the following warning:
Use of uninitialized value $amount in numeric gt (>) at /usr/share/koha/bin/cronjobs/fines.pl line 133.
Test Plan:
1) Create a single circ rule
2) Backdate a checkout so it is overdue
3) Delete the circ rule
4) Run fines.pl, note the warning
5) Apply this patch
6) Run fines.pl, note the warning is gone
Signed-off-by: Dilan Johnpullé <dilan@calyx.net.au>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When running a compare with the --upd flag, I got the following warning:
Use of uninitialized value in addition (+) at misc/maintenance/cmp_sysprefs.pl line 125.
This is simply resolved by not returning undef but 0 in case of the Version
syspref in the sub UpdateOnePref.
Test plan:
Look at this simple change.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
To test:
Test as before, verify commit option makes no changes and provides
additional feedback when verbose
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Addressing points mentioned in comment 12:
[1] Commit parameter.
[2] Warning if authid does not exist for -merge.
Test plan:
[1] Run update_authorities.pl -authid X -merge -ref Y -c
where X does not exist in your db and Y does.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
With bug 9988 the manual merge option of merge_authorities was removed.
(Note that it did not work any more either.) This patch reintroduces this
functionality on the command line.
This maintenance script now allows you to force renumbering field 001 for
selected authid's, to delete authid's including the removal of references
in biblio records, as well as merging several authid's into one reference
record.
Test plan:
[1] Test the -renumber parameter. Field 001 and 005 should be updated.
[2] Test the -delete parameter. Check if a linked biblio does no longer
contain a reference to the deleted authority.
[3] Test the -merge parameter.
Create two PERSO_NAME records (say A,B) and attach biblios to them.
Pick a CORPO_NAME record as reference record C.
Now pass -merge -reference C -authid A,B
Verify that A and B are gone, and the records link to C now.
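A concrete call for step [3] could be (the authid values 101, 102 and 104
are made up; -c is the commit flag from the previous patch):
  update_authorities.pl -merge -reference 104 -authid 101,102 -c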
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Lee Jamison <ldjamison@marywood.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When looking for a bad MARC Record using the rebuild_zebra_sliced.sh, it is
useful to skip the complete MARCXML exporting from Koha and reuse the exported
files for Zebra indexing.
This patch adds a new parameter:
-x | --exclude-export Do not export Biblios from Koha, but use the existing
export-dir
Which depends on the:
-d | --export-dir Where rebuild_zebra.pl will export data
Default: $EXPORTDIR
!-----------!
! TEST PLAN !
!-----------!
1. Run
"./rebuild_zebra_sliced.sh --length 1000"
to export 1000 MARC Records
and slice them to one big 1000-Record chunk.
2. Realize that you get an imaginary "stack smashing detected" error crashing
your indexing at some Record you don't know of and can't make out from the
indexing logs.
3. Start looking for the bad Record by running:
"./rebuild_zebra_sliced.sh --exlude-export --chunk-size 10"
To skip Biblios export from Koha which takes ~2h and get straight into
splitting your exported biblios to chunks of 10, and indexing them. You
know which chunk fails so it is much easier to find the issue there.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This subroutine is quite trivial and can be replaced easily with a new
method of Koha::Patron
Test plan:
Overdue notices and shelf sharing emails must be sent to the email address
selected according to the value of the pref AutoEmailPrimaryAddress.
Signed-off-by: David Bourgault <david.bourgault@inlibro.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
If the patron's account has expired and BlockExpiredPatronOpacActions is set,
we expect auto renewal to be rejected.
Test plan:
Use the automatic_renewals.pl cronjob script to auto renew a checkout
Before this patch, if the patron's account had expired the auto renewal was still done.
With this patch, it will only be auto renewed if BlockExpiredPatronOpacActions is not set.
Signed-off-by: Claire Gravely <claire.gravely@bsz-bw.de>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Having the ability to limit the number of messages sent by process_message_queue.pl on a single run would be very useful for controlling how many messages are sent at a given time. This can help prevent too many messages being sent out at once and the sender getting flagged as a spammer.
Test Plan:
1) Apply this patch
2) Generate some number of messages in the message queue
3) Run process_message_queue.pl with the new --limit option,
set limit to a number smaller than the number of pending messages
4) After the script has run, check the database and note that only
a number of pending messages were sent, and that the remaining amount
of pending messages is the original amount less the number specified
as the limit
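For example (the limit value is arbitrary):
  misc/cronjobs/process_message_queue.pl -v --limit 100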
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Configuration values should be included in the koha-install-log
so that when running Makefile.PL with the --prev-install-log option
values can be read from there and reapplied rather than prompting
the user on each subsequent run.
This adds FONT_DIR and DB_USE_TLS (and its dependent options) to
koha-install-log so that the set values will be written by make
during a make install or make upgrade
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Fixed whitespace for QA tools
Added a verbose note when template found
Only print 'Modifying MARC' if verbose
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When importing large numbers of MARC records from a legacy LMS to Koha
using bulkmarcimport.pl, the script did not make use of the MARC
modification templates in the system (which can be useful for conversion
of 852 fields to 952 fields for item holdings, for example). This patch
allows MARC modification templates to be used with bulkmarcimport.pl.
To test:
1) Apply patch.
2) Set up a MARC modification template (in Home > Tools > MARC
modification templates) to make some changes to imported MARC
records (for example copy a subfield).
3) Take a test set of MARC records that have fields matching the
template and import them using the bulkmarcimport.pl tool. For example
if these MARC records are in testrecords.mrc and the MARC modification
template is called testtemplate use something like:
perl misc/migration_tools/bulkmarcimport.pl -commit 1000 \\
-file testrecords.mrc -marcmodtemplate testtemplate
4) Check the imported records in Koha to see that the required
modifications have been applied when the MARC records are imported.
5) Sign off.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
There are several ways to mark an item as lost:
- item list view (catalogue/moredetail.pl, "Items" tab)
- cataloguing (cataloguing/additem.pl)
- Batch item modification tools (tools/batchMod.pl)
- The long overdue cronjob (misc/cronjobs/longoverdue.pl)
So far only the cronjob is configurable; the others mark the item as
returned (do the checkin).
This behaviour should be controllable using a syspref, to let libraries
choose what fits best for them.
Test plan:
Use the 2 options of the pref, mark checked out items as lost using the
different possibilities, and confirm that the behaviours make sense to
you
Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Séverine QUEUNE <severine.queune@bulac.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Fix:
Can't locate object method "next" via package "13" (perhaps you forgot to load "13"?) at misc/cronjobs/holds/cancel_unfilled_holds.pl line 119.
Undefined subroutine &main::CancelReserve called at misc/cronjobs/holds/cancel_unfilled_holds.pl line 143.
The script does not use Koha::Object's get_column correctly for getting
the branch codes.
The call to CancelReserve is obsolete; it was moved in the meantime to
Koha::Hold->cancel.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This script takes parameters:
days - how many days waiting to cancel an unfilled hold on or after
library - (repeatable) branches to consider
holidays - whether or not to count holidays (default is no)
This patchset adds two methods and covers them with tests:
Koha::Holds->unfilled(); To return holds where found = undef
Koha::Hold->age( $use_calendar ); To return the number of days since a
hold was placed (including or excluding holidays)
To test:
1 - Place some holds with varying reservedates
2 - Run script with different parameters to verify options are respected
(-v for verbosity will assist here)
3 - verify that script does nothing without days parameter
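An example invocation for step 2 (branch codes and the day count are
illustrative; check the script's --help for the exact option spellings):
  misc/cronjobs/holds/cancel_unfilled_holds.pl --days 30 --library CPL --library MPL -v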
Sponsored by:
Siskiyou County Library (http://www.siskiyoulibrary.info/)
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Bug 16187 - Followup
1 - Correct use of original (bad) script name
2 - Explain options better
3 - Remove change from 'W' to 'w'
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
RM note: Squashed for readability
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
There is no reason to keep this perl script without the regular extension.
Please see other scripts in the same folder too.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Very trivial change.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
1) Apply the patch
2) Create a new patron with random values, except for its expiration date: make it expired (Patrons > New Patron > Student)
3) Enable the system preference called "EnhancedMessagingPreferences"
4) In "Administration" > "Patron categories" > Student, modify the "days in advance", then click "Save"
5) run the script "./misc/maintenance/borrowers-force-messaging-defaults --doit --actives"
6) Validate that the student created in step 2 hasn't changed (Patrons > search)
7) Validate that any other student that isn't expired has changed.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
touch_all_items looks at the return of ModItem to determine if the
operation was successful. But ModItem does not return a meaningful
value. This patch puts the ModItem call in an eval and looks at $@.
Test plan:
Run touch_all_items with a where condition and verbose option.
Put print 1/0; at the end of ModItem.
Run touch_all_items again. You should see: ERROR WITH ITEM xxx !!!!
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Bourgault <david.bourgault@inlibro.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
When you want to calculate average time, do not divide count by time :)
Test plan:
Run the script with a where condition and verbose option and see that
the average time is meaningful.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Bourgault <david.bourgault@inlibro.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The IncludeSeeFromInSearches system preference is designed so that 'See from' headings from the authorities are included when you search in the catalog.
That means that you could find an author not only by the name printed on the book, but for example also by their pseudonym or a different spelling of their name.
It was added by bug 7417.
This regression has been introduced by
commit 5ef1b6710e
Bug 18098: Add an index with the count of not onloan items
- } elsif ($record_type eq 'biblio' && C4::Context->preference('IncludeSeeFromInSearches')) {
- my $normalizer = Koha::RecordProcessor->new( { filters => 'EmbedSeeFromHeadings' } );
[...]
+ push @filters, 'IncludeSeeFromInSearches'
+ if C4::Context->preference('IncludeSeeFromInSearches');
Test plan:
- Activate IncludeSeeFromInSearches
- Catalog an authority for a person
- main heading in 100
- see from headings in 400
- Catalog a bibliographic record and link it to the authority
- Make sure the record is indexed
- Verify that the record can be found searching for the main heading
- Verify that the record can be found searching for the see from headings
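For the "make sure the record is indexed" step, one way (assuming a Zebra
setup; adjust to your environment) is a full biblio reindex:
  misc/migration_tools/rebuild_zebra.pl -b -r -v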
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Yet another reason to get rid of all these functions from this script.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Images display correctly. Followed test plan and patch works as described.
Signed-off-by: Dilan Johnpullé <dilan@calyx.net.au>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
If you edit an authority record that has data in hidden fields or
subfields, that data is currently lost.
This script can help you to unhide some fields and prevent data loss.
Test plan:
[1] Add a PERSO_NAME record. Fill e.g. 100b.
[2] Hide 100b in the PERSO_NAME framework.
[3] Run auth_show_hidden_data.pl and verify that it reports 100b in
the PERSO_NAME framework.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
rebuild_zebra.pl fails in some conditions (perl version?).
I cannot recreate it, but it has been reported that reindexing fails with:
error retrieving biblio 94540 at /usr/share/koha/bin/migration_tools/rebuild_zebra.pl line 683, <DATA> line 751.
To fix it we can use fully qualified subroutine names for:
GetMarcFromKohaField
GetMarcBiblio
GetBiblionumberFromItemnumber
TransformKohaToMarc
GetFrameworkCode
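In practice that means a call like GetMarcBiblio(...) becomes
C4::Biblio::GetMarcBiblio(...), with the argument lists left unchanged.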
Test plan:
Confirm the rebuild_zebra script still works correctly after this patch
Signed-off-by: Lee Jamison <ldjamison@marywood.edu>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
<<items.content>> is generated 4x in advance_notices.pl and once in
overdue_notices.pl.
It would be better to have it in C4::Letters.
That will enforce the same behavior everywhere, and make it testable
and reusable.
Test plan:
Use the <<items.content>> tag for advance and overdue notices.
The generated notices must be the same as before this patch.
Followed test plan, works as expected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Change parameters to a hashref.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Looks good to me.
Two calls in migration_tools/22_to_30 still in old style.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Bug 12412 added the use of to_marc plugins allowing arbitrary file formats
in stage-marc-import (as long as the plugins can handle them). The feature
was not very visible in the code, and when bug 10407 added the marcxml
format, it made some changes that broke the use of to_marc.
This patch restores the functionality by:
[1] Adding a sub RecordsFromMarcPlugin to ImportBatch.pm, specifically
addressing the conversion from arbitrary formats to MARC::Record.
The original to_marc interface is used: pass it the file contents,
and it returns a string consisting of a number of MARC blobs separated
by \x1D.
Consequently, the call of to_marc is removed from routine
BatchStageMarcRecords where it did not belong. The to_marc_plugin
parameter is removed and two calls are adjusted accordingly.
[2] Instead of a separate combo with plugins, the format combo contains
MARC, MARCXML and optionally some plugin formats.
[3] The code in stage-marc-import.pl now clearly shows the three main
format types: MARC, MARCXML or plugin based.
Note: This patch restores more or less the situation after bug 12412, but
I would actually recommend to have the to_marc plugins return MARC::Record
objects instead of large text strings. In the second example I added a
to_marc plugin that actually converts MARC record objects to string format,
while RecordsFromMarcPlugin reconverts them to MARC::Records.
Test plan:
See second patch.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Adding one backslash makes a difference :)
We need to escape the dot in the regex to exclude a file like zzpref
from translation too. Perfect_regexes++
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Text editors can create temporary files in this folder and this can
confuse the translator.
For instance, vim can create a file named '.opac.pref.swp' which will
make the translator die with the following error message:
Can't use string ("b0VIM 8.0") as a HASH ref while "strict refs" in use
at LangInstaller.pm line 248.
Test plan:
1. echo 'Oops' > .../en/modules/admin/preferences/whatever.pref.whatever
2. cd misc/translator && ./translate update fr-FR
3. Verify that you have the error message mentioned above
4. Apply patch
5. cd misc/translator && ./translate update fr-FR
6. No more errors!
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Most of the time C4::Biblio::GetBiblioData is used to retrieve the title
and/or the author of a bibliographic record.
This patch replaces the easy occurrences of GetBiblioData, the ones
where the 2 joins are needed but only data from the biblio and
biblioitems tables is used.
Test plan:
It will be hard to test everything; I'd suggest a QAer review this
patch and confirm that the different occurrences of GetBiblioData have
been correctly replaced by calls to Koha::Biblios->find or
$biblio->biblioitem
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch modifies the Koha logo SVG file to remove the fill from two
letters. This patch also optimizes the file and converts the text object
to paths for better cross-platform portability.
To test you could:
- Open the file in an editor and confirm that the change is correct
- or -
- Open the file in a browser and use the code inspector to add a
background-color attribute to the top-level <svg> tag. The logo should
appear transparent, with no white fills.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
GetMember returned a patron given a borrowernumber, cardnumber or
userid.
All of these 3 attributes are defined as a unique key at the DB level
and so we can use Koha::Patrons->find to replace this subroutine.
Additionally, GetMember set category_type and description.
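The replacement is typically a call like the following (lookups by
cardnumber or userid pass a hash to find instead):
  my $patron = Koha::Patrons->find( $borrowernumber );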
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
I have no idea how to test this patch, see bug 5528, or simply read the
code.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
FAIL pod Apparent command =cut not preceded by blank line in file misc/cronjobs/advance_notices.pl
FAIL pod Apparent command =cut not preceded by blank line in file C4/SIP/ILS/Item.pm
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The subroutine C4::Biblio::GetBiblioFromItemNumber was wrong for several
reasons:
- badly named, we can get biblio info from a barcode
- SELECT * from items, biblio and biblioitems
makes things hard to follow and debug, we never know where the values
we display come from
- sometimes called only for trivial information such as biblionumber,
author or title
This patchset suggests replacing it with calls to:
- Koha::Items->find for item's info
- $item->biblio for biblio's info
- $item->biblio->biblioitem for biblioitem's info
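In code the pattern looks roughly like this (variable names are
illustrative):
  my $item       = Koha::Items->find( $itemnumber );
  my $biblio     = $item->biblio;
  my $biblioitem = $biblio->biblioitem;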
Test plan:
Item's info should correctly be displayed on the following pages:
- circulation history
- transfer book
- checkin
- waiting holds
QA will check the other changes reading the code, it's trivial
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The problem on this report was caused by translating the tabs Privacy
and Payments to the same string. This caused a hash entry to be
overwritten.
This patch tests if the key already exists and if so, it merges the
entries instead of overwriting the old contents.
Test plan:
[1] Make sure that e.g. Privacy and Payments translate to the same string, e.g. Vie privee.
[2] Run translate install fr-CA (or the language you altered)
[3] Without this patch you should lose preferences from either Privacy or
Payments. With this patch, they should be merged.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested with fr-CA.
Signed-off-by: Blou <philippe.blouin@inlibro.com>
Reset the .po files, reproduced the problem. Applied the patch and suddenly 'paypal' appeared.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Regardless of whether the phone transport has been selected for a given
overdue action or not, the Talking Tech outbound script generates and
sends a line for that action.
Test Plan:
1) Enable Talking Tech
2) Create one or more overdue actions without a phone transport selected
and one or more with the phone transport selected
3) Generate the overdues csv file to send to Itiva
4) Note the csv file has lines for actions that do not have the phone
transport selected
5) Apply this patch
6) Repeat step 3
7) Note the csv file now only has lines for actions that have the phone
transport selected
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The auto_renew_error has to be reset when an auto renew succeeds,
otherwise the patron is not going to receive the correct notice.
Test plan:
- Check out an item and mark it for auto renewal (specify a due date in the past to allow auto renewals)
- Set OPACFineNoRenewalsBlockAutoRenew to 'Block' and 'OPACFineNoRenewals' to '1'
- Execute the script
=> Auto renewed, column auto_renew_error is null
- Add a fine of '2' to the patron
- Execute the script
=> Not auto renewed, column auto_renew_error is 'auto_too_much_oweing'
=> On the interface you see the correct message "Automatic renewal failed, patron has unpaid fines"
- Pay the fine
- Execute the script
Without this patch the auto_renew_error is not reset and the patron is going to
receive a letter telling him he owes too much money to the library.
With this patch the patron will receive a letter informing him the renewal has been done!
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When an issue is auto renewed, a notice will be sent to the patron.
The notice will tell whether the renewal was successful.
Note that this patch is not ready to be pushed yet, as it has some
defects:
- 1 notice per issue
- no way to disable the notice generation
- no way to specify patron categories to enable/disable the
notifications
Test plan:
Use the automatic_renewals.pl script to auto renew issues.
If the auto renew has failed or succeeded, a notice will be generated in the
message_queue table.
If the error is "too_soon" or is the same as the previous error, the
notice won't be generated (we do not want to spam the patron).
Signed-off-by: Janet McGowan <janet.mcgowan@ptfs-europe.com>
Signed-off-by: Jonathan Field <jonathan.field@ptfs-europe.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
typo responsability
typo defautl in authorities.pref
typo reveived in t/db_dependent/Acquisition.t
typo ;; in advance_notices.pl
typo Stopping in restart_indexer (koha-indexer)
typo instutitional in moremember.pl
typo Corretly (Biblio.t)
typo periodicy in help serials
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch sets the lang parameter when C4::Letters::GetPreparedLetter is
called to generate the notice.
Note that we do not need to pass it if want_librarian is set.
TODO: I do not know what to do with TransferSlip
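A rough sketch of the idea (surrounding variables are assumed; the actual
call sites differ per script):
  my $letter = C4::Letters::GetPreparedLetter(
      module      => 'circulation',
      letter_code => $letter_code,
      lang        => $patron->lang,   # new: pass the patron's preferred language
      tables      => { borrowers => $patron->borrowernumber },
  );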
Sponsored-by: Orex Digital
Signed-off-by: Hugo Agud <hagud@orex.es>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
I do not see a valid reason not to use the default one instead of the
syspref one.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
As the L1 cache does not have an expiration mechanism, scripts running
in daemon mode (rebuild_zebra.pl -daemon, sip server ?, ...) would
not be aware of any possible changes in the data being cached
in the upstream L2 cache.
This patch adds a ->flush_L1_caches() call in rebuild_zebra.pl
inside the daemon mode loop.
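Conceptually the daemon loop now does something like this (a sketch, not
the exact code; do_one_pass and $sleep are placeholders):
  use Koha::Caches;
  while (1) {
      Koha::Caches->flush_L1_caches();   # drop the per-process cache so L2 changes are seen
      do_one_pass();
      sleep $sleep;
  }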
To test:
1) apply patch
2) ensure that rebuild_zebra.pl -daemon is still working properly,
without any noticeable performance degradation
3) stop memcached daemon and try to run rebuild_zebra.pl -daemon
again: there should be a warning emitted stating that the script
is running in daemon mode but without recommended caching system
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds a numeric index 'not-onloan-count' containing the value
of 999$x. This subfield is filled by 'rebuild_zebra.pl' by making use of
bug 18208's 'EmbedItemsAvailability' filter.
bib1.att and indexes definitions are updated accordingly.
To test:
- Apply the patch
- Pick the right biblio-zebra-indexdefs.xsl file for your setup and
replace the one your Zebra uses [1]
- Replace your bib1.att
- Replace your ccl.properties
- Have at least one record with more than one item, checkout some
item(s) from that record(s).
- Rebuild zebra's indexes:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/migration_tools/rebuild_zebra.pl -r -b -v -k
(notice the dump directory is kept, you can try the XSLT yourself
running:
$ xsltproc \
etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/the_dump_dir/biblios/exported_records | less )
=> SUCCESS: There are records with the not-onloan-count index, and the
value is correct!
- Check Zebra yourself:
$ yaz-client unix:/var/run/koha/kohadev/bibliosocket
Z> base biblios
Z> find @attr 1=9013 @attr 2=5 @attr 4=109 0
=> SUCCESS: The search matches the number of records with not-onloan
items.
Z> s 1+1
=> SUCCESS: Records with 999$x having a value higher than 0 are rendered
- Sign off :-D
Note: While this work is complete for its purpose, it is part of an
attempt to create a better way of filtering by availability.
Sponsored-by: ByWater Solutions
[1] In kohadevbox this would be
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
Edit: Added the missing XSLT changes for UNIMARC and NORMARC
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
If a library does not use --mark-returned when running longoverdue.pl,
all those lost item checkouts are selected by overdue_notices.pl.
This causes much unnecessary overhead. In addition, Koha::Calendar is
instantiated many times for each branchcode, which is not necessary.
Test Plan:
1) Run overdue_notices.pl, note output
2) Apply this patch
3) Run overdue_notices.pl again, note output is the same
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jane Leven <jleven@camdencountylibrary.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
If you deleted files from the upload directories manually, or you rebooted
with files in the temporary upload folder, or for some other reason have
records without a file, you may want to clean up your records in the
uploaded_files table.
This patch adds the method delete_missing to Koha::UploadedFiles. It also
supports a keep_record parameter to do a dry run (count the missing files
first).
Also, we add an option --uploads-missing to cleanup_database. If you add
the flag 1 after this option, you will delete missing files. If you add the
flag 0 or only use the option, you will count missing files.
A subtest is added to Upload.t for delete_missing tests.
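A quick sketch of how the new method can be called (the keep_record dry-run
form follows the description above):
  use Koha::UploadedFiles;
  my $missing = Koha::UploadedFiles->delete_missing({ keep_record => 1 });  # dry run: count only
  my $deleted = Koha::UploadedFiles->delete_missing;                        # actually delete the records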
Test plan:
[1] Run t/db_dependent/Upload.t
[2] Upload a file and delete the file manually.
[3] Run cleanup_database.pl --uploads-missing
It should report at least one missing file now.
Check that the record has not been deleted.
[4] Run cleanup_database.pl --uploads-missing 1
It should report that it removed at least one file.
Check that the record is gone.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
As requested by QA on comment 33.
If the pref is 0 or the overriding command line parameter is 0, all
temporary files will be deleted. But if the pref is NULL or empty string,
we will not delete files.
Also adjusted the description of the preference in this regard.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
As requested by QA on comment 25.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The new switch for deleting temporary uploads in cleanup_database can
be added to cron.
Note: Since the option --temp-uploads only purges temporary uploads
when triggered by the preference Upload_PurgeTemporaryFiles_Days, it can
be safely added here.
Test plan:
There is actually nothing to test here if you followed the preceding test
plans. Just verify that the switch is inserted correctly.
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Two command line options are added:
[1] -temp-uploads to indicate that you want to purge these uploads,
[2] -temp-uploads-override DAYS to (optionally) tell that you want to
override the corresponding pref value.
Test plan:
[1] Check the modified usage statement.
[2] If needed, backup your temporary uploads :)
In case you do not have one, add a temporary one with Tools/Upload.
Note: Do not choose an upload category.
[3] Set pref to 0, and run cleanup_database with only --temp-uploads.
No files should be deleted.
[4] Check number of "old" temp uploads. Set pref to nonzero value.
Verify that the oldest are gone (depending on the value chosen).
[5] Set pref to 0 again.
If all uploads are gone now, add a new one with Tools/Upload.
Run cleanup_database with --temp-uploads --temp-uploads-override -1
All temporary files are gone.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Updating to use they/them and skipping the ones changed to it
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Comments throughout the Koha codebase assume that
all librarians or borrowers are male by using the
pronoun 'he' universally. This patch changes these to
'he or she' / 'him or her'.
Testing plan:
- ensuring the modified tests still pass:
+ C4/SIP/t/06patron_enable.t
+ t/db_dependent/Circulation.t
+ t/db_dependent/Koha/Patrons.t
+ t/db_dependent/Reserves.t
Sponsored-By: California College of the Arts
No code changes detected.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The Debian cron file and the misc crontab example are updated.
A message is printed when upgrading.
Note: At this moment the merge cron job is run once a day. This is imo a
good starting point. The load for this job greatly depends on the value of
pref AuthorityMergeLimit. Of course you can schedule the job more often,
and if this need is felt more globally, we can adjust it later.
Test plan:
[1] Run the dbrev and see the message.
[2] Read the changes to the cron files.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The cron job is moved from migration tools to cronjobs and renamed to
a plural form. The script is now based on Koha objects. It no longer
includes the code to merge one record. This can be done via the interface,
and will be added to a maintenance script on bug 18071; it should not be
part of this cron job.
Adding a cron_cleanup method to MergeRequests; this method is called from
the cron script to reset older entries still marked in progress and to
also remove old processed entries. Tested in a separate unit test.
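A sketch of how the cron script might call it (the fully qualified package
name is an assumption here):
  use Koha::Authority::MergeRequests;   # assumed package name for MergeRequests
  # resets stale 'in progress' entries and removes old processed ones
  Koha::Authority::MergeRequests->cron_cleanup;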
Test plan:
[1] Run t/db_dependent/Authorities/MergeRequests.t
[2] Set AuthorityMergeLimit to 0. (All merges are postponed.)
[3] Modify an authority linked to a few records.
[4] Delete an authority linked to a few records with batch delete tool.
[5] And select two auth records with linked records.
Merge these two records with authority/merge.pl.
Note: Do not select Default. See also bug 17380.
[6] Check the need_merge_authorities table for inserted records.
[7] Run misc/cronjobs/merge_authorities.pl -b and inspect the linked
records and the record status in need_merge_authorities.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
[1] The preference was sent to HEA. We can now send both AuthorityMergeMode
as well as AuthorityMergeLimit.
[2] A comment in authorities/merge.pl is removed. Note that a subsequent
patch will modify and test the cron job.
[3] Script misc/batchRebuildItemsTables.pl temporarily enabled dontmerge.
This is equivalent to setting the mergelimit to zero.
The function defnonull is no longer needed. (If the pref was NULL,
we restore that value. Sub merge won't mind.)
Test plan:
[1] Run t/db_dependent/UsageStats.t
[2] Run misc/batchRebuildItemsTables.pl -t
This just ensures it still compiles; the changes speak for themselves.
[3] Now git grep on dontmerge.
You should only find hits in atomicupdate and misc/translator/po.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
At this point, we are replacing the dontmerge functionality with the new
AuthorityMergeLimit logic. Instead of doing this check before calling
merge, we just call merge and check it there. In order to let the cron
job do the larger (postponed) merges, we add a parameter override_limit.
A subtest is added in Merge.t to test the 'postponed merge' feature. Since
merge now also calls get_usage_count, an additional mock is added. All
references to dontmerge are removed.
In merge, two lines initializing $dbh and $counteditbiblios are moved.
The dontmerge test in DelAuthority and ModAuthority is removed. Since this
did not leave much in ModAuthority, I fixed the whitespace on the remaining
lines right away (yes, I know).
A minimal set of changes is applied to the cron script; it will get further
attention on a next patch.
Test plan:
[1] Run t/db_dependent/Authorities/Merge.t
[2] Set AuthorityMergeLimit to 2. Modify an authority with two linked
biblios. Check that the merge was done immediately.
[3] Now modify an authority with more than 2 linked records.
Verify that the merge was postponed; a record must be inserted in
the need_merge_authorities table.
[4] Testing of the merge cron job is *postponed* to a next patch.
Note: I tested a modification, but the script just needs more
attention.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
We will need a few additional parameters for merge later on. This patch
puts the original parameters in a parameter hash.
For the same reason DelAuthority gets a parameter hash here.
Note: We remove the second parameter from the DelAuthority call in
authorities/authorities-home.pl here. It was not used and could have
presented problems in the future.
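For illustration, the calls now take roughly this shape (argument values
are placeholders; see the tests for the real usage):
  merge({ mergefrom => $old_authid, MARCfrom => $old_record,
          mergeto   => $new_authid, MARCto   => $new_record });
  DelAuthority({ authid => $authid });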
Test plan:
[1] Run t/db_dependent/AuthoritiesMarc.t.
[2] Run t/db_dependent/Authorities/Merge.t.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Rhonda Kuiper <kuiper@roundrocktexas.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Cab Vinton <director@plaistowlibrary.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The way the export options are displayed at the bottom of the checkouts table
was not consistent.
Prior to this patch set, they are displayed if ExportRemoveFields or
ExportWithCsvProfile is set.
This does not make any sense; the user could want to export the checkouts in
iso2709 format without having to define a csv profile and fill the pref.
Moreover the behavior of this pref did not match its description: it's used as
a default CSV profile when exporting records from the export tools or the
command line.
This patch set adds a new pref ExportCircHistory and removes
ExportWithCsvProfile. The new pref is set if ExportWithCsvProfile or
ExportRemoveFields were set.
A new dropdown list with the CSV profile list will be displayed in the
export area, at the bottom of the checkouts table.
Note that now --csv_profile_id is mandatory for the export command line
(misc/export_records.pl) if the export format is csv.
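For example, a CSV export from the command line now needs something like
this (the profile id and filename are illustrative):
  perl misc/export_records.pl --format=csv --csv_profile_id=1 --filename=checkouts.csv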
Test plan:
0/ Do not execute the DB entry
1/ Clear both ExportWithCsvProfile and ExportRemoveFields prefs
2/ Execute the DB entry
3/ ExportCircHistory should not be set and the export options should not
be displayed at the bottom of the checkouts table.
4/ Remove the pref
DELETE FROM systempreferences WHERE variable='ExportCircHistory';
and reinsert the previous one, with a value:
INSERT INTO systempreferences (variable, value) VALUES
('ExportWithCsvProfile', 'something');
Execute the DB entry again
=> The new pref should now be set
5/ Export some checkouts using the CSV entry
6/ Note that the export tool and commandline script still work using the
csv format. You have to provide a --csv_profile_id option to make it
work.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
borrower_message_preferences cannot be truncated because of the foreign key.
The DBMS fails with
"Cannot truncate a table referenced in a foreign key constraint"
To avoid that we should remove the FK check and truncate the other table
as well.
I am wondering if we really need a truncate here;
DELETE FROM borrower_message_preferences;
should do the job, but leave it as is because of the param name.
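The approach sketched above amounts to something like this (the referencing
table is assumed to be borrower_message_transport_preferences; $dbh assumed):
  $dbh->do("SET FOREIGN_KEY_CHECKS = 0");
  $dbh->do("TRUNCATE borrower_message_transport_preferences");
  $dbh->do("TRUNCATE borrower_message_preferences");
  $dbh->do("SET FOREIGN_KEY_CHECKS = 1");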
Test plan:
perl misc/maintenance/borrowers-force-messaging-defaults --doit --truncate
Should no longer raise the error message
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch is the Koha part of the Hea v2 project.
You can find the (testing) code for the server at
hea-ws - https://github.com/joubu/hea-ws/commits/v2
hea-app - https://github.com/joubu/hea-app/commits/v2
They contain the different pull requests made over the last 6 months.
More information on Hea at https://wiki.koha-community.org/wiki/KohaUsageStat_RFC
The goal of this commit message is to provide an overview of what could
be a new version of Hea.
Prior to these changes, the Hea database was filled with 1 line per Koha
installation. System preferences were filled by the libraries and a
cronjob (share_usage_with_koha_community.pl) collected these values to send
them to a webservice (hea-ws/upload.pl).
With the need to collect more data, we want to collect data at the library
level (branch) and not at the installation level.
For instance the geolocation, the url or the country can be different from one
library to another, even if managed from the same Koha installation.
The Hea DB has been upgraded to reflect that change (see hea-app/sql/schema.sql).
The hidden goal of this patch is to make Hea sexier and explain
better to libraries how it can be useful to share their information
with the Koha community. I guess the main problem is the lack of
communication and explanations about what we are doing with these data.
To fill this gap I'd like to (TODO)
1. Communicate on the ML about this new version of Hea (once it got
pushed and backported)
2. Link the Privacy_Policy.md from the Hea interface
3. Get help from a native English speaker to add
popup/help/info/whatever on "Home › Administration › Usage statistics",
to clearly explain what happens (and what will not happen!) when an option or
another is set.
You can find screenshots of this whole enhancement on bug 18066, comment 2.
What this patch does:
- Create a new branches.geolocation DB field
- Add 3 new sysprefs:
* UsageStatsGeolocation
* UsageStatsLibrariesInfo
* UsageStatsPublicID
- Integrate the Leaflet JS library to get a fancy map to pick
geolocations
How it works:
On the new administration page where statistics to share are configured,
there are several new things. It is now possible to share information either
per Koha installation or per library. If UsageStatsLibrariesInfo is set,
the info at library level (url, name, country, geolocation) will be
sent to the Hea webservice. If it is not set, you can decide to fill
UsageStatsLibraryUrl, UsageStatsLibraryName, UsageStatsCountry,
UsageStatsGeolocation to share this information. Note that even if the
data are retrieved at installation level, it's better to fill the prefs
as well: on the Hea website the different libraries defined for a given
Koha installation could be displayed on the same page.
This page is a public page which will be attributed to every
installation (with the pref UsageStatsPublicID). On this page all the
info available publicly will be displayed.
TODO later:
- Add a button on the administration page to delete the info shared
publicly. It will be easy to show that the info is no longer displayed
on the public page.
- Add an icon per Koha installation to get a better "public page"
- Any suggestions?
Test plan:
We will need to test hea-ws, hea-app and the Koha-side code to test the
whole enhancement.
1/ To start, clone the hea-ws and hea-app project and checkout the
'master' branch (*not* 'v2')
2/ Create the hea database and user
CREATE DATABASE hea;
CREATE USER 'hea'@'localhost' IDENTIFIED BY 'hea';
GRANT ALL PRIVILEGES ON hea.* TO 'hea'@'localhost';
FLUSH PRIVILEGES;
3/ Fill the DB with some data
mysql hea < hea-app/sql/schema.sql
mysql hea < hea-app/sql/sql/mock-data.sql
4/ Checkout the 'v2' branch for both hea-ws and hea-app
5/ Execute the upgrade DB script
% cd hea-app
% perl -p -i -e 's/REPLACE_ME/hea/' sql/upgrade.pl # Fill the DB info
% perl sql/upgrade.pl
Now the DB is using the v2 structure. That means we have 1 installation
row per library previously defined. 1 library row has also been created.
6/ Configure hea-ws
% echo '192.168.50.1 hea.koha-community.org' >> /etc/hosts
<VirtualHost *:80>
DocumentRoot "/path/to/hea-ws"
ServerName "hea.koha-community.org"
<Directory "/">
Options +ExecCGI
Require all granted
AddHandler cgi-script .pl
</Directory>
</VirtualHost>
And enable it with a2ensite, then restart apache.
Then copy the database.yml.sample to database.yml and edit it to fill in the
DB info.
7/ Launch the hea-app
% cd hea-app
% edit README.md # to install the missing modules
% cp environments/config.yml environments/development.yml
% edit environments/development.yml # to fill the DB info
% perl bin/app.pl
Then hit localhost:3000
You should see a local version of Hea with sample data
8/ Back to Koha side
A. We will test that the webservice still works with previous version of Koha (without v2)
a. Do not configure Hea
% perl misc/cronjobs/share_usage_with_koha_community.pl -f -v
Then hit localhost:3000
=> Nothing added
b. Configure Hea on admin/usage_statistics.pl
perl misc/cronjobs/share_usage_with_koha_community.pl -f -v
=> New library added
c. Modify the Hea configuration
perl misc/cronjobs/share_usage_with_koha_community.pl -f -v
=> Info are modified
B. Now we will test that it works with the new version (much more fun ;))
% git checkout hea-v2 # koha
a. Configure Hea using /admin/usage_statistics.pl
perl misc/cronjobs/share_usage_with_koha_community.pl -f -v
=> Check the result on localhost:3000
b. Share libraries' info
perl misc/cronjobs/share_usage_with_koha_community.pl -f -v
c. Continue to play a bit and share the info.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
In order to accomplish this, we need to add some additional checks in
the merge routine. The actual change to remove the field is quite
small.
Furthermore, we need to add a merge call in DelAuthority and adjust
the merge cron job accordingly.
The change is well supported by additional tests, including a simulation
of postponed removal via cron, if dontmerge is enabled.
Note: Deleting an authority with linked biblios is tested on the next
patch.
Test plan:
[1] Run t/db_dependent/Authorities/Merge.t
[2] Delete an authority without linked biblios from the Authorities
module. If the indexer is not fast enough, wait a bit and refresh to
verify that the authority is gone on authorities-home.pl.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The C4::Members::GetBorrowersWithIssuesHistoryOlderThan subroutine is supposed
to return the patrons with an issue history older than a given date.
It would make more sense to return a list of Koha::Patrons.
On the way, the code from AnonymiseIssueHistory will be moved as well to
anonymise_issue_history.
Note that these 2 subroutines are strongly linked: one is used to know the
number of patrons whose history we will anonymise, the other one is used to
anonymise the issue history. The problem is that the first one is not used to
do the action, but only for display purposes.
In some cases, these 2 values can differ, which could be confusing.
Case 1:
The logged in librarian is not superlibrarian and IndependentBranches is set:
if 2+ patrons from different libraries match the date parameter, the interface
will display "Checkout history for 2 patrons will be anonymized", when actually
only 1 will be.
Case 2:
If 2+ patrons match the date parameter but one of them has his privacy set to
forever (privacy=0), the same issue will appear.
This patch moves the code from C4::Members::GetBorrowersWithIssuesHistoryOlderThan
to Koha::Patrons->search_patrons_to_anonymise and from
C4::Circulation::AnonymiseIssueHistory to
Koha::Patrons->anonymise_issue_history
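A sketch of the resulting call chain (the 'before' parameter name and $date
are illustrative):
  use Koha::Patrons;
  my $patrons = Koha::Patrons->search_patrons_to_anonymise({ before => $date });
  my $count   = $patrons->count;                            # the number shown in the interface
  $patrons->anonymise_issue_history({ before => $date });   # the actual anonymisation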
Test plan:
1/ Confirm the 2 issues and make sure they are fixed using the Batch
patron anonymization tool (tools/cleanborrowers.pl)
2/ At the OPAC, use the 'Immediate deletion' button to delete all your
reading history (regardless the setting of the privacy rule)
3/ Use the cronjob script (misc/cronjobs/batch_anonymise.pl) to
anonymise patrons.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch does the following:
[1] Move some POD lines from Cache to Caches.
[2] Correct C4::Plugins to Koha::Plugins in POD line of Koha::Plugins
[3] POD Koha/AuthorisedValue.pm: lib_opac moved to opac_description
[4] The POD in Koha/Patron.pm uses head2 and head3 inconsistently.
Ran s/^=head2/=head3/ on those lines (7 substitutions on 7 lines)
[5] Correct a copied POD line from reports/issues_stats.pl in
reports/reserve_stats.pl.
[6] Correct a test description in t/db_dependent/Koha/Authorities.t.
You should never delete the library :)
[7] Correct typo shouild in a comment of rebuild_zebra.pl
Test plan:
[1] Read the patch. Does it make sense?
[2] Run perldoc Koha/Cache.pm and Koha/Caches.pm
[3] Run t/db_dependent/Koha/Authorities.t
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Currently it is possible to specify both --biblios and --authorities
as command line switches to bulkmarcimport.pl. This does not make sense,
so we should exit early and explain that these switches are mutually
exclusive.
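The guard is conceptually just an early exit (a sketch; variable names
follow the usual Getopt::Long pattern, not necessarily the script's own):
  if ( $biblios && $authorities ) {
      print "Choose only one of --biblios or --authorities, not both.\n";
      exit 1;
  }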
To test:
- Run one of these and check that there is no complaint about missing
options:
perl misc/migration_tools/bulkmarcimport.pl -a -b
sudo koha-shell -c "perl misc/migration_tools/bulkmarcimport.pl -a -b"
kohadev
- Observe that this displays the perldoc, but does not complain about
mutually exclusive switches.
- Apply the patch
- Rerun the command(s) from earlier.
- Verify that the script is now halted and a small explanation given.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The removal of the noxml option is a logical follow-up of bug 16506 (which
makes xml the default).
Actually this option should have been removed by bug 10455 (it removes
the biblioitems.marc field).
Test plan:
Make sure the rebuild_zebra.pl script works as before.
Signed-off-by: Emma Smith <emma.nakamura.smith@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Amended: Using items from Koha::Biblio seems better :)
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The wrong value was retrieved.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Not sure if this script is still used, could someone confirm?
Test plan:
If you know how to test it, please do
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
Use the sanitize_records.pl maintenance script with the --auto-search
option
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Two discussions on koha-devel led to the same conclusion:
biblioitems.marcxml should be moved out of this table
- biblio and biblioitems
http://lists.koha-community.org/pipermail/koha-devel/2013-April/039239.html
- biblioitems.marcxml & biblioitems.marc / HUGE performance issue !
http://lists.koha-community.org/pipermail/koha-devel/2016-July/042821.html
There are several goals to do it:
- Performance
As Paul Poulain wrote, a simple query like
SELECT publicationyear, count(publicationyear) FROM biblioitems GROUP BY publicationyear;
takes more than 10min on a DB with more than 1M bibliographic records
but only 3sec (!) on the same DB without the biblioitems.marcxml field
Note that prior to this patch set, the biblioitems.marcxml was not
retrieved systematically, but was, at least, in
C4::Acquisition::GetOrdersByBiblionumber and C4::Acquisition::GetOrders
- Flexibility
Storing the marcxml in a specific table would allow us to store several
kinds of metadata (USMARC, MARCXML, MIJ, etc.) and different formats (marcflavour)
- Clean code
It would be a first step toward Koha::MetadataRecord for bibliographic
records (not done in this patch set).
Test plan:
- Update the DBIC Schema
- Add / Edit / Delete / Import / Export bibliographic records
- Add items
- Reindex records using ES
- Confirm that the following scripts still work:
* misc/cronjobs/delete_records_via_leader.pl
* misc/migration_tools/build_oai_sets.pl
- Look at the reading history at the OPAC (opac-readingrecord.pl)
- At the OPAC, click on a tag, you must see the result
Note: Changes in Koha/OAI/Server/ListRecords.pm is planned on bug 15108.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Deleted the line.
perlcritic -4 before and after.
Before there are issues. After there are none.
Also, changed the function to not rely on the implicit return value
of the last line, but explicitly stated a return. And changed an
operator, due to precedence issues.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Variables $extkey and %opt not used.
Variable $tmptest[...] not used; calling _build_tag_directory is useless.
The script no longer only prints the help if you specify -t.
Sub defnonull tidied.
Rearranged modules, removed Dumper.
Test plan:
[1] Run the script with -t flag. The script should not only print usage
statement, but should do a dry run. (Test this on a small database,
or pass an additional where clause.)
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
Verify that the output of diff -w between the original and tidied file
does not introduce code changes.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patchset moves the C4::Members::GetUpcomingMembershipExpires
subroutine code to the Koha::Patrons->search_upcoming_membership_expires
method.
This subroutine was used from only 1 place, so it's an easy move.
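A sketch of the new call, as used from the cronjob mentioned below:
  use Koha::Patrons;
  my $upcoming = Koha::Patrons->search_upcoming_membership_expires;
  while ( my $patron = $upcoming->next ) {
      # enqueue the membership expiry notice for $patron here
  }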
Test plan:
Use the membership_expiry.pl cronjob script using the different
options.
The behavior should be the same as prior to this patch.
prove t/db_dependent/Koha/Patrons.t
should return green
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds the job to the debian package file and the examples file
in misc.
Test plan:
Add these lines to your cron file.
Check the results. (If an issue you expect passes the grace period defined
in the subscription, its status should go from Expected to Late.)
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
All the values different from the ones GetMember returned have been
managed outside of GetMemberDetails.
It looks safe to replace all the occurrences of GetMemberDetails with
GetMember.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Yes, we are fixing these four typos here.
Test plan:
[1] Read the changes.
[2] Run t/Auth_with_shibboleth.t
[3] Run git grep recieved | grep -v -e 'recievedlist' | grep -v -e 'serials-recieve.tt'
Note: serials-recieve.tt is just history. Bonus points for the one who makes
us get rid of that column recievedlist.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
EDIT:
Rebased. Change in Accounts has been corrected already.
Removed the po file.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When a patron had overdues on items from different branches,
he received from each corresponding branch a notice containing
the complete list of these items.
Signed-off-by: sbujam <sbujam@users.noreply.github.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Fix SAX parser error pointing to INSTALL docs
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes three files which are obsolete following the removal
of the OPAC prog template.
To test, apply the patch and confirm that these files no longer exist in
misc/interface_customization:
- koha3-opac-button-background.png
- koha3-opac-button-background.psd
- koha3-opac-button-background.svg
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
To test:
- Have a clean install, no DB
- Run populate_db.pl:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/devel/populate_db.pl
- Go to
http://localhost:8081/cgi-bin/koha/admin/searchengine/elasticsearch/mappings.pl
=> FAIL: No mappings
- Delete the DB and create an empty one:
$ mysql -uroot
> DROP DATABASE koha_kohadev; CREATE DATABASE koha_kohadev;
> GRANT ALL PRIVILEGES ON koha_kohadev.* TO
'koha_kohadev'@'localhost';
- Run populate_db.pl:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/devel/populate_db.pl
- Go to
http://localhost:8081/cgi-bin/koha/admin/searchengine/elasticsearch/mappings.pl
=> SUCCESS: There are mappings!
- Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Same as previous patch for misc/export_records.pl.
Test plan :
- Use syspref item-level_itypes = biblio record
- Run misc/export_records.pl
=> Without patch you get an error : DBD::mysql::st execute failed: Unknown column 'biblioitems.itemtype' in 'where clause' ...
=> With patch you get a correct export file
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Export Ok, no errors.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Adding POD and --userid and --password options
1/ To test, use the same routine as before, with no options.
2/ You should have a user with koha/koha as userid and password
3/ Delete that user
4/ Run the script with --userid <userid> --password <password>
5/ You should have a user in koha with userid/password set
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This is for developers: it takes quite a while (many clicks) to create a new
superlibrarian user.
This new script creates a new user with superlibrarian permissions with
the easy-to-remember credentials koha/koha
Test plan:
perl misc/devel/create_superlibrarian.pl
Log in to Koha using koha/koha
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When launching misc/export_records.pl with this command line :
misc/export_records.pl --date=`date +%d/%m/%Y` --deleted_barcodes --filename=/tmp/koha.mrc
You get this error message :
DBD::mysql::db selectall_arrayref failed: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 4 [for Statement " (
SELECT DISTINCT barcode
FROM deleteditems
WHERE deleteditems.biblionumber = ?
"] at misc/export_records.pl line 189.
This is because of a '(' after 'q|', which looks like a typo.
Also, this patch removes the useless var $q.
Test plan :
- Delete an item with barcode
- Without patch, run : misc/export_records.pl --date=`date +%d/%m/%Y` --deleted_barcodes --filename=/tmp/koha.mrc
=> You get the dirty MySQL error
- With patch, run the same command
=> No error and the file is generated
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When running rebuild_zebra.pl in daemon mode, a while loop runs the script forever.
But if something crashes inside the rebuild process, the whole daemon crashes.
For example, when it cannot access the database.
This problem may be temporary, so the daemon should keep running.
This patch adds an eval around the rebuild process to allow a run to fail without killing the daemon.
It also moves the DB handle retrieval inside the daemon loop because it is broken if the DB stops.
This is a big issue for an indexer running as a systemd service.
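Conceptually the daemon loop becomes something like this (a sketch;
do_one_rebuild_pass and $sleep are placeholders for the script's own code):
  while (1) {
      eval {
          my $dbh = C4::Context->dbh;   # re-fetch the handle on each pass
          do_one_rebuild_pass($dbh);
      };
      warn "rebuild pass failed: $@" if $@;   # log the failure, keep the daemon alive
      sleep $sleep;
  }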
Test plan :
- run rebuild_zebra.pl in daemon mode :
/home/koha/src/misc/migration_tools/rebuild_zebra.pl -daemon -z -a -b -x --sleep 30
- stop the database
- wait a minute
=> you see an error on the database connection
=> the daemon is still running
- restart the database
- test the indexer by creating a new record (wait for a minute)
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
1) Apply the patch
2) Edit a biblio
3) run export_records.pl with date time few minutes in the past
for example: --format=xml --record-type=bibs --date="2016-10-14 10:00:05" --filename="koha.xml"
4) look in the file and confirm that the right record was exported
5) Try the same but with a time after the biblio was edited; it shouldn't be exported
Signed-off-by: radiuscz <radek.siman@centrum.cz>
Bug 17444: Follow-up, don't change the name of parameter "date"
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This job should be done each time patron data are deleted. It's better
to do it just before deleting the patron than assuming the caller did
the job by itself.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch moves the C4::Members::DelMember subroutine to the
Koha::Patron module.
The delete method must be overridden to permit handling of the patron's
holds.
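A sketch of the resulting deletion flow in the callers ($borrowernumber is
illustrative; move_to_deleted comes from the related patch below):
  use Koha::Patrons;
  my $patron = Koha::Patrons->find( $borrowernumber );
  $patron->move_to_deleted;   # copy the patron to deletedborrowers
  $patron->delete;            # also cleans up the patron's holds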
Test plan:
(With the 2 patches applied)
1/ Create a patron with holds and owner of lists
2/ Delete patrons using the web interface:
- More > Delete on a patron page
- Batch patron deletion tools
3/ and the cronjob script
- perl misc/cronjobs/delete_patrons.pl -c [more options]
The patron should have been moved to the deletedborrowers table, his/her
holds and lists should have been deleted.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes the C4::Members::MoveMemberToDeleted subroutine in
order to replace it with the Koha::Patron->move_to_deleted method.
After this change, we will move C4::Members::HandleDelBorrower and
C4::Members::DelMember to the same module to simplify the code in
members/deletemem.pl and misc/cronjobs/delete_patrons.pl
Test plan:
1/ Delete a patron from the staff interface and make sure (s)he has been moved to
the deletedborrowers table.
2/ Use the "Batch patron deletion" tool (tools/cleanborrowers.pl) to
remove patron. Make sure the "Permanently delete these patrons" and "Move
these patrons to the trash" options work as before
3/ Same as previously but using the cronjob
misc/cronjobs/delete_patrons.pl.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Tested the delete_patrons.pl script and cleanborrowers.pl too.
Tests (are relevant and) pass and the qa scripts are happy too :-D
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Fix record matching in misc/cronjobs/delete_records_via_leader.pl
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Is this script still in use?
It uses the biblioitems.marc field so if it's still useful it will need
to be rewritten.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Recently added, delete_records_via_leader.pl reads biblioitems.marc as a
text field and searches for records to delete based on leader position 5.
This can be achieved by doing the same thing on biblioitems.marcxml (it will
certainly be slower) while waiting for a patch on bug 15537.
Test plan:
Confirm that this script works as before, to do so the easiest way would
be to dump your DB before executing the update DB entry, execute the
script to delete records, reinsert the DB, execute the update DB entry
(remove biblioitems.marc), execute the script to delete records.
You should get the same number of records deleted.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This script seems to be unused and it won't be of any use after
the removal of biblioitems.marc
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch makes the populate_db.pl script upper case what gets passed
with the --marcflavour option switch. This is needed in order for this
to fit ``kohadevbox`` configuration files, and it is harmless for other
uses.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch enables the --marcflavour option switch so the user
can specify the desired marc flavour. The code for handling it
was already in place, just not used.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Executing the installer process and inserting all the sample data takes a
lot of clicks and time.
The idea of this script is to provide a quick way to insert all the
sample data easily to get a working Koha install asap.
Test plan:
- Set your database config to a non-existent DB
- Execute perl misc/devel/populate_db.pl
You will get an error
- Create an empty DB
- Execute perl misc/devel/populate_db.pl
It will insert all the MARC21 sample data
- Execute perl misc/devel/populate_db.pl
You will get an error because the DB is not empty (systempreferences and
borrowers tables)
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This allows OVERRIDE_SYSPREF_* and others to work properly.
Test plan:
1) Add the following line to your plack.psgi (near the bottom, just
above "mount ..."):
enable "+Koha::Middleware::Plack";
2) Load the OPAC advanced search page (under Plack). The title should
read "Koha online catalog" (or whatever your LibraryName syspref
contains).
3) Add the following to your Apache configuration:
RequestHeader add X-Koha-SetEnv "OVERRIDE_SYSPREF_LibraryName Potato\, Potato"
4) Restart Apache.
5) Refresh. The title should now read "Potato, Potato online catalog".
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
The swagger specification file is currently being minified, adding
manual steps to release management and restful api route development.
The minification is not required; the dereferenced version of the
specification is now internally validated at runtime and relevant errors
output, and the dereferenced schema has been made publicly available at
/api/v1/spec, so it can be copy&pasted into validation tools.
Test Plan
1) Apply patch
2) Ensure api routes still function (applying the /cities patch might be
helpful)
3) Ensure /api/v1/spec page exists (it should be the de-referenced
swagger.json file)
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
What we currently have:
Koha/ElasticSearch.pm
Koha/ElasticSearch/Indexer.pm
Koha/SearchEngine/Elasticsearch/QueryBuilder.pm
Koha/SearchEngine/Elasticsearch/Search.pm
What we want:
Koha/SearchEngine/Elasticsearch.pm
Koha/SearchEngine/Elasticsearch/Indexer.pm
Koha/SearchEngine/Elasticsearch/QueryBuilder.pm
Koha/SearchEngine/Elasticsearch/Search.pm
Test plan:
% git grep -i Koha::ElasticSearch
% git grep ElasticSearch|grep -v Catmandu::Store::ElasticSearch
should not return any result
Do a full reindex and search for records
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
At one time it was possible to store the results of a report into the
saved_reports table.
This allowed librarians to compare different results from the Koha
interface.
This patch is a proof of concept and is not very polished (understood:
it cannot be pushed like that).
Test plan:
Execute the runreport.pl cronjob script with the new --store-results
option.
This will serialize the results into json and put them into the
saved_reports table.
On the "Saved report" list, the "Saved results" column is now populated
with a date (note that you can have several dates for a given report).
If you click on this link, the data will be displayed in a simple table.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Barton Chittenden <barton@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds the --test option switch to the overdue_notices.pl script
so it can be run without performing any actual action.
To test:
- Have a patron with overdue items (simulate a checkout for a past date. Note it implies
that the circ rules are defined so the patron is overdue)
- Run:
$ sudo koha-shell kohadev
koha-dev$ misc/cronjobs/overdue_notices.pl --test
=> SUCCESS: The script is run but the patron isn't debarred and no notice messages are queued.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Barton Chittenden <barton@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The location of the script in misc/maintenance would be fine for
running it from the command line. But it will be a problem for several
install types when running it from the web installer.
Files from misc/maintenance go to bin/maintenance in a package install,
not to mention other installs than a dev install.
This patch moves the script to installer/data/mysql. There are already two
other scripts there run by updatedatabase. I would rather move these three
scripts somewhere else, but we c/should do that on another report.
Fixed a small typo in a message too.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
- all non-fatal output redirected to STDOUT (as there is an intention
to run this script from updatedatabase.pl)
- added borrowernumber and itemnumber equality checks to the SELECT
statement in getFinesForChecking() - accountlines.issue_id alone is not
entirely trustworthy (because InnoDB forgets its highest auto_increment
after a server restart); in some rare cases it may point to some random
issue for a different patron and a different item
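The added checks amount to matching on more than issue_id alone; roughly
(a sketch, not the script's actual query):
  my $rows = $dbh->selectall_arrayref(q|
      SELECT a.accountlines_id
      FROM accountlines a
      JOIN issues i ON i.issue_id       = a.issue_id
                   AND i.borrowernumber = a.borrowernumber
                   AND i.itemnumber     = a.itemnumber
      WHERE a.accounttype = 'FU'
  |, { Slice => {} });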
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
External maintenance script for fixing unclosed (FU), non-accruing fine
records which may still need FU -> F correction post-Bug 15675.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch modifies the delete_patrons.pl cronjob to deal with the
last_seen option.
To test it, you just have to use the --last_seen option and pass a valid
date (iso format)
Example:
perl misc/cronjobs/delete_patrons.pl --last_seen="1984-02-04" --confirm
will delete all the patrons who have not been active since this date.
Sponsored-by: BULAC - http://www.bulac.fr/
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
https://bugs.koha-community.org/show_bug.cgi?id=12276
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This script has not been updated for ages and is UNIMARC specific.
Since I am working on bug 17196 to move marcxml out of the biblioitems
table, I'd prefer not to rewrite unused scripts (and waste my time...)
So if someone complains later, I will rewrite it on top of bug 17196.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The C4::Category module contained only 1 method to return the patron
categories available for the logged in user.
The new method Koha::Patron::Categories->search_limited does exactly the
same thing (see tests) and must be used in place of it.
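A sketch of the replacement call:
  use Koha::Patron::Categories;
  my @categories = Koha::Patron::Categories->search_limited;
      # returns only the patron categories visible to the logged-in user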
Test plan:
- Same prerequisite as before
For the following pages, you should not see patron categories limited to
other libraries.
- On the 'Item circulation alerts' admin page
(admin/item_circulation_alerts.pl), modify the settings for check-in
and checkout (NOTE: Should not we display all patron categories on
this page? If yes, it must be done in another bug report to ease
backporting it).
- Search for patrons in the admin (budget) and acquisition (order) module.
- On the patron home page (search form in the header)
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
We need to define several namespaces for our cache system.
For instance sysprefs, koha conf (koha-conf.xml) and unit tests
should be defined in a separate namespace.
This will permit us to
- launch the tests without interfering with other cache values
- and flush the sysprefs cache without flushing all other values
To do so, we need to store different Koha::Cache objects at a package
level. That's why this patch adds a new Koha::Caches module.
FIXME: There is an architecture problem here: the L1 cache should be
defined in Koha::Cache
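A short usage sketch of the namespaced caches (method names follow this
patch set; treat them as assumptions if your tree differs):

    use Koha::Caches;
    my $syspref_cache = Koha::Caches::get_instance('syspref');
    $syspref_cache->set_in_cache( 'MyPref', 'value' );
    my $value = $syspref_cache->get_from_cache('MyPref');
    $syspref_cache->flush_all;    # flushes only the 'syspref' namespace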
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
https://bugs.koha-community.org/show_bug.cgi?id=11921
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
As requested by Mark Tompsett. Hope this guarantees a signoff now...
Note: For consistency, four additional parameters were needed so that this
subroutine no longer uses file-level vars.
Test plan:
Import a file with stage_file.pl.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Imported a marc file and a marcxml file with stage_file.pl.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch makes the following changes:
[1] Based on the groundwork of the former patch, add call to
RecordsFromMARCXMLFile in stage-marc-import. Use format param.
[2] Add format to the template. Use the file extension to determine it:
if you use .xml or .marcxml as the extension, MARCXML is selected
(see the sketch after this list).
[3] In stage-marc-import.tt mark UTF-8 encoding as UTF-8 not as utf8.
[4] BatchStageMarcRecords: do not call plugin if you have no records.
[5] RecordsFromISO2709File: also return errors in an array.
[6] In misc/stage_file.pl also use UTF-8. Handling of errors from [5].
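A minimal sketch of the extension-based default from point [2] (variable
names are illustrative, not the actual template/script code):

    my ($extension) = $filename =~ /\.([^.]+)$/;
    my $format = ( $extension && lc($extension) =~ /^(xml|marcxml)$/ )
        ? 'MARCXML'
        : 'ISO2709';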
Test plan:
[1] Import an empty file as MARC or MARCXML (with Tools/Stage..import).
[2] Import a non-empty file with invalid contents as MARC or MARCXML.
[3] Export a few records with Tools/Export as MARC and MARCXML.
[4] Import these two files. Check selected format versus file extension.
[5] Import a MARCXML file with misc/stage_file.pl.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Patch from Olli, manual rebase by Marcel (July 7, 2016).
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Needs follow-up. Test plan in the third patch.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
An incorrect method call causes a runtime error and does not
retrieve the correct logdir value.
This change retrieves the value correctly.
To test:
1) Run edi_cron.pl, notice error
2) Apply patch and run edi_cron.pl again, should work as expected
Signed-off-by: Aleisha Amohia <aleishaamohia@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Note: I did not test but changes make sense.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested the A;B;C variant here. If A fails, B will run. Since we can safely
assume that A (or B) will not fail on a daily basis, this seems to be better
than running them in the wrong order every day.
As the comments on Bugzilla show, several people support this improved
(reordered) scheme and look forward to improved error handling on another
report (obviously not that simple).
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The patch changes the sequence of cronjobs in the crontab example
file and in the cron.daily file of the packages.
This is why:
1) Renew automatically
... only when we can't renew, we want to
2) Calculate fines
... once the fines are calculated and charged
we can print the amount into the
3) Overdue notices
Before the change it could happen that you'd charge for an item
that would then be renewed, or that you'd try to print fine
amounts into the overdue notices when they would only be
charged moments later.
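A crontab sketch of the intended order (times and the install path are
illustrative, not copied from the shipped crontab.example):

    KOHA_CRON_PATH = /usr/share/koha/bin/cronjobs
    # 1) renew what can be renewed
    5  1 * * *  $KOHA_CRON_PATH/automatic_renewals.pl
    # 2) then calculate and charge fines
    15 1 * * *  $KOHA_CRON_PATH/fines.pl
    # 3) then generate the overdue notices, which can now include <<items.fine>>
    30 1 * * *  $KOHA_CRON_PATH/overdue_notices.pl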
To test:
- configure your system so you have items that should
- be charged with fines
- renew automatically
- configure your crontabs according to the example file
or switch the cron.daily in your package installation with
the new one
- configure your overdue notices so that one is generated
using <<items.fine>>
- Wait for the cronjobs or schedule them to run earlier
- Verify all is well and as it should be
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Remove $dbh as argument to C4::Items::DelItemCheck
and C4::Items::ItemSafeToDelete, also change all
calls to these functions throughout the codebase.
Also remove remaining reference to 'do_not_commit' in
t/db_dependent/Items_DelItemCheck.t
Fixed doubled "$$" in C4/ImportBatch.pm
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Use t::lib::TestBuilder in t/db_dependent/Items_DelItemCheck.t
Remove the option 'do_not_commit' from C4::Items::DelItemCheck.
Whitespace cleanup.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
* Fix POD warning.
* Remove redundant 'use strict' and 'use warnings'
* Remove $VERSION and --version option.
* Remove --dry-run option
* Split test for --help and check for @criteria into two separate pod2usage calls,
enabling -msg on the latter.
* Fix 'target_tiems' typo.
* Test for holds on items to be deleted.
* Fix whitespace
* Fix test for holds.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
http://bugs.koha-community.org/show_bug.cgi?id=14504
Signed-off-by: Heather Braum <hbraum@nekls.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch introduces the misc/devel/minifySwagger.pl script, which
loads the Swagger files, follows references and produces a compact
("minified") version of the Swagger file that is suitable for
distribution.
The wiki page should be updated with instructions on how to regenerate
it, so the Release Manager can do it on each spec upgrade.
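Roughly, "minifying" means inlining the external $ref entries into a single
document. A simplified sketch (file names, paths and fragment handling are
assumptions; the real script is more careful):

    use JSON qw(decode_json encode_json);

    sub slurp { local $/; open my $fh, '<', $_[0] or die $!; return scalar <$fh>; }

    # Recursively inline external $ref entries.
    sub inline_refs {
        my ( $node, $dir ) = @_;
        if ( ref $node eq 'HASH' ) {
            if ( $node->{'$ref'} && $node->{'$ref'} !~ /^#/ ) {
                my ($file) = split /#/, $node->{'$ref'};
                return inline_refs( decode_json( slurp("$dir/$file") ), $dir );
            }
            return { map { $_ => inline_refs( $node->{$_}, $dir ) } keys %$node };
        }
        elsif ( ref $node eq 'ARRAY' ) {
            return [ map { inline_refs( $_, $dir ) } @$node ];
        }
        return $node;
    }

    print encode_json( inline_refs( decode_json( slurp('swagger.json') ), '.' ) );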
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
My name is Olli-Antti Kivilahti and I approve this commit.
We have been using the Swagger2.0-driven REST API on Mojolicious for 1 year now
in production and I am certain we have a pretty good idea on how to work with
the limitations of Swagger2.0
Signed-off-by: Johanna Raisa <johanna.raisa@gmail.com>
My name is Johanna Räisä and I approve this commit.
We have been using Swagger2.0-driven REST API in production successfully.
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The previous calls were wrong, but there is something bad with the DB
structure: export_format.profile should be a unique key.
This patch fixes the previous calls and adds a FIXME so we do not forget to
fix the DB structure.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Previous tests were done with all patches applied,
including this one, and all worked.
No errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch erases all traces of C4::Csv since it is not used anymore.
All occurrences have been replaced by previous patches to use
Koha::CsvProfiles.
Note that GetMarcFieldsForCsv was not used prior to this patch set.
Test plan:
git grep 'C4::Csv'
should not return any result.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No more traces of the file.
This produces a koha-qa fail, due to the missing file.
No other errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This subroutine returned the export_format_id for a given profile name.
This can be done easily with the Koha::CsvProfiles->search method.
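Roughly, the replacement pattern looks like this (a sketch; what to do when
no profile matches is up to the calling code):

    use Koha::CsvProfiles;
    my $profile = Koha::CsvProfiles->search({ profile => $profile_name })->next;
    my $export_format_id = $profile ? $profile->export_format_id : undef;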
Test plan:
Export records using the misc/export_records.pl script and the
export tool.
If you are exporting using the MARC format, the profile filled in the
ExportWithCsvProfile pref will be used (or the one passed as a parameter to
misc/export_records.pl).
If you are exporting using the CSV format, you can choose a profile in
the dropdown list.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Exported using tool & cmd, marc & csv. Pref is used.
No errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Changes the value of the "comment" column in the "borrower_debarments" table
from "Restriction added by overdues process yyyy-mm-dd hh:mm:ss" to
"OVERDUE_PROCESS yyyy-mm-dd hh:mm:ss" in overdue_notices.pl. Then, in
the templates "moremember.tt", "circulation.tt", "memberentrygen.tt",
"opac-reserve.tt" and "opac-user.tt", the value of "comment" is
checked: if it is an automatic comment added by the overdue process, they
display "Restriction added by overdues process yyyy-mm-dd hh:mm:ss";
if there is a custom comment instead, it is written without
modification. This way, the string "Restriction added by overdues
process" ends up in the po files and can be translated later.
To test:
1) create a patron with an automatic restriction due to the overdue process;
2) apply patch;
3) run misc/cronjobs/overdue_notices.pl;
4) verify that the comment "Restriction added by overdues process" is
correctly displayed and translatable on the following pages:
- opac patron home page (opac-user.tt);
- opac item reservation page (opac-reserve.tt);
- pro patron page (moremember.tt);
- reservation item for a patron (circulation.tt, memberentrygen.tt);
5) try to translate the comment in po files;
6) sign off.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
s/sitemaper/sitemapper/
Test plan:
Run t/db_dependent/Sitemapper.t
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Using Plack with the debian psgi file, I get lots of warnings like:
WARNING: Automatically converting Plack::App::CGIBin instance to a PSGI code reference. If you see this warning for each request, you probably need to explicitly call to_app() i.e. Plack::App::CGIBin->new(...)->to_app in your PSGI file.
This patch aims to eliminate these warnings.
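The fix boils down to calling to_app() explicitly, for example (the root
path is illustrative):

    use Plack::App::CGIBin;
    my $intranet = Plack::App::CGIBin->new( root => '/usr/share/koha/intranet/cgi-bin' )->to_app;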
Test plan:
Run Plack with plack.psgi or koha.psgi and verify if you do not see these
warnings anymore.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I tested on Jessie and I see no regressions.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The changes made to Koha::Authority have not been correctly fixed.
The code of Koha::Authority was moved to
Koha::MetadataRecord::Authority by bug 15380.
Test plan:
perl misc/search_tools/rebuild_elastic_search.pl -a -v
should succeed
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
It would be very helpful if the cleanup_database.pl script had the
ability to delete holidays from the special_holidays table that are
older than a given number of days in the past.
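A minimal sketch of what such a switch boils down to (the special_holidays
table stores separate year/month/day columns, handled here with STR_TO_DATE;
the SQL in the actual patch may differ):

    my $days = 365;    # e.g. --unique-holidays 365
    $dbh->do(
        q{DELETE FROM special_holidays
          WHERE STR_TO_DATE(CONCAT(year, '-', month, '-', day), '%Y-%m-%d')
                < DATE_SUB(CURDATE(), INTERVAL ? DAY)},
        undef, $days
    );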
Test Plan:
1) Apply this patch
2) Create some unique holidays in the past of varying ages
3) Test the new switch '--unique-holidays DAYS' for cleanup_database.pl
4) Verify only holidays older than the specified number of days are removed
NOTE: The wording 'unique holidays' is used to match
its use in the staff web interface.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The goal of this patch is to avoid the unnecessary flush of the L1 cache
caused by creating a new CGI object each time C4::Languages::getlanguage is
called without a CGI object.
The new class Koha::Cache::Memory::Lite must be flushed by the CGI
constructor override done in the psgi file. This new class will ease
caching of data specific to the running script.
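A usage sketch of the request-scoped cache (method names as introduced by
this patch; treat them as assumptions):

    use Koha::Cache::Memory::Lite;
    my $memory_cache = Koha::Cache::Memory::Lite->get_instance;
    $memory_cache->set_in_cache( 'getlanguage', $language );
    my $cached = $memory_cache->get_from_cache('getlanguage');
    # and once per request, from the overridden CGI constructor in the psgi file:
    Koha::Cache::Memory::Lite->get_instance->flush;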
Test plan:
At the OPAC and the intranet interfaces:
Open 2 different browser session to simulate several users
- Clear the cookies of the browsers
- User 1 (U1) and User 2 (U2) should be set to the default language
(depending on the browser settings)
- U1 chooses another language
- U2 refreshes and the language used must be the default one
- U2 chooses a third language
- U1 refreshes and must still be using the one he has chosen.
Try to use a language which is not defined:
Add &language=es-ES (if es-ES is not translated) to the url, you should
not see the Spanish interface.
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Use of uninitialized value in numeric eq (==)
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch deprecates the -x switch, making XML the default serialization format
used by rebuild_zebra.pl. It doesn't remove the option switch, but raises a warning
for the end user about the deprecation so they can fix their cronjobs. Later we could remove it.
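A sketch of the deprecation handling (the warning text and exact flag
handling are illustrative):

    use Getopt::Long;
    GetOptions( 'x' => \my $as_xml, 'noxml' => \my $noxml );
    warn "The -x switch is deprecated: XML is now the default serialization format.\n"
        if $as_xml;
    my $serialization_format = $noxml ? 'usmarc' : 'xml';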
To test:
- Disable all indexing (daemon/cronjob)
- Create 2 records
- Edit one of them, delete the other one
- Verify they are queued for updates in zebraqueue
- sudo koha-mysql kohadev
> SELECT * FROM zebraqueue WHERE done=0
...
| 265 | 265 | specialUpdate | biblioserver | 1 | 2016-05-13 14:23:45 |
| 266 | 1 | recordDelete | biblioserver | 1 | 2016-05-16 14:14:33 |
| 267 | 2 | specialUpdate | biblioserver | 1 | 2016-05-16 14:15:06 |
+-----+--------------------+---------------+--------------+------+---------------------+
- Now go to koha-shell
$ sudo koha-shell kohadev ; cd kohaclone
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z
You will get something similar to this:
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> FAIL: They contain the records you added/modified/deleted but they are in
USMARC format
- Apply the patch
- Mark your records for indexing (in koha-mysql kohadev)
> UPDATE zebraqueue SET done=0 WHERE id > 264
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z
You will get something similar to this:
<WARNINGS> [1]
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> SUCCESS: Data is correctly in XML format
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z -noxml
You will get something similar to this:
<WARNINGS> [1]
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> SUCCESS: Data is correctly in USMARC format
- Sign off :-D
[1] Warnings covered by a followup
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
On top of Bug 16505
Work as described following test plan, usmarc default pre patch,
post patch xml default and usmarc on request.
No errors (all patchset)
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Using rebuild_zebra.pl with the -x option switch produces incorrect output in
terms of what our XSLTs expect for indexing. This patch introduces the right namespace information
on the exported records so indexing succeeds.
To test:
- On current master, have some records on your db
- Run:
$ sudo koha-shell kohadev
$ cd kohaclone
$ misc/migration_tools/rebuild_zebra.pl -r -b -k -x
=> you will get a message like this:
NOTHING cleaned : the export /tmp/NL5ufjUfpp has been kept.
- Run
$ less /tmp/NL5ufjUfpp/biblio/exported_records
=> FAIL: The first line looks like this
<?xml version="1.0" encoding="UTF-8"?><collection><record
- Now run:
$ xsltproc \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/NL5ufjUfpp/biblio/exported_records
=> FAIL: No output
- Apply the patch
- Run:
$ misc/migration_tools/rebuild_zebra.pl -r -b -k -x
- Take a look at the result file:
$ less /tmp/asdiouqwiue/biblio/exported_records
=> SUCCESS: The start of the file looks like this:
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim">
- Run:
$ xsltproc \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/asdiouqwiue/biblio/exported_records
=> SUCCESS: There is actually indexing data :-D
- Sign off :-D
Edit: I changed qq{} for q{} as suggested by Jonathan.
Sponsored-by: American Numismatic Society
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works as described following test plan
No errors
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds a letter parameter to the membership_expiry cron job.
It is used to replace the default notice with another one.
This could be handy if you e.g. send a reminder after the first notice.
In any case, it allows for more flexibility.
Apart from this new parameter, this patch removes the sub parse_letter from
the code. The call to GetPreparedLetter is moved into the for loop and the
call to getletter is removed (no longer needed). If no letter is
found, the Letters module already warns you, so we just exit the loop.
Test plan:
[1] Run membership_expiry.pl -c -n -v -let NOT_EXIST
Check if you see a warning (coming from Letters.pm)
[2] Check if you have some soon expiring patrons or add before/after
parameter to include some.
Run membership_expiry.pl -c -n -v [-before ?] [-after ?]
[3] Create a new notice MEMBERSHIP2. Copy the text from the original notice
and make some adjustments.
[4] Run membership_expiry.pl -c -v -let MEMBERSHIP2 [-before ?] [-after ?].
Be aware that this call generates email messages.
Verify that the email contained the adjusted text.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
On top of Bug 14834
Work as described, tested using '-n' to see messages on terminal, e.g.
membership_expiry.pl -v -n -c -before 3 -branch BC -after 2 --letter MEMEXP2
No errors
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds three parameters to the cron job: -before, -after and
-branch.
You can now run the cronjob at an adjusted frequency: say once a week with
before 6 or after 6 (not both together). If your pref is set to 14, running
before=6 will include expiries from 8 days to 14 days ahead. When you
use after=6, you would include 14 days to 20 days ahead, etc.
You could also rerun yesterday's job by setting before=1 and after=-1;
this could help in case of problem recovery.
Obviously, the branch parameter can be used as a filter.
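A sketch of the resulting date window (pure illustration, not the code from
GetUpcomingMembershipExpires):

    use DateTime;
    my $pref   = 14;    # the expiry-notice pref mentioned above
    my $before = 6;     # --before
    my $after  = 0;     # --after
    my $from = DateTime->today->add( days => $pref - $before );
    my $to   = DateTime->today->add( days => $pref + $after );
    # with pref=14 and before=6: expiry dates from today+8 to today+14 are included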
NOTE: Why are these parameters passed only via the command line?
Well, obviously the branch parameter is not suitable for a pref.
The before/after parameters allow you to handle expiry mails differently from
the normal scheme, or could be used in some sort of recovery. In those cases
it will be more practical to use a command line parameter than to edit a
pref.
NOTE: The unit test has been adjusted for the above reasons, but I also
added some lines to keep existing expiries from interfering with the added
borrowers, by using an additional count and the branchcode parameter.
Test plan:
[1] Run the adjusted unit test GetUpcomingMembershipExpires.t
[2] Set the expiry date for patron A to now+16 (with pref 14).
Set the expiry date for patron B to now+11.
[3] Run the cronjob without range. You should not see A and B.
[4] Run the cronjob with before 3. You should see patron B.
[5] Run the cronjob with before 3 and after 2. You should see A and B.
[6] Repeat step 5 with a branchcode that does not exist. No patrons.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Work as described following test plan.
Test pass
No errors
New parameters work with one (-) or two (--) dashes; no problem
with that, but convention suggests that 'long' options use two dashes.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch tries to make the code more readable by using Koha::Calendar
instead of the deprecated C4::Calendar and Date::Calc
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
There were rebase conflicts that were just easier to postpone until
afterwards.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
% perl misc/search_tools/rebuild_elastic_search.pl -bn 42
Can't locate object method "idnumber" via package "MARC::Record" at
misc/search_tools/rebuild_elastic_search.pl line 171.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
It will improve the indexing time.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
(Not fetched yet though.)
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Subroutines should not take $dbh as a parameter.
C4::Biblio::TransformMarcToKoha has it and does not use it.
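The change in calling convention, roughly (the surviving argument list is
per the code of that era and may differ in detail):

    # Before:
    my $koha_fields = C4::Biblio::TransformMarcToKoha( $dbh, $record, $frameworkcode );
    # After - the unused $dbh is simply dropped:
    $koha_fields = C4::Biblio::TransformMarcToKoha( $record, $frameworkcode );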
Test plan:
Look at the patch and confirm that all occurrences of
TransformMarcToKoha have been modified.
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Bug 15527 adds an xslt dir; LangInstaller.pm must ignore that dir.
To test:
1) Verify the problem on current master
Update translation for any lang, will see errors
2) Apply the patch
3) Update again, no errors
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Works as expected.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
From commit 581759e985
Bug 14133: Print notices should be generated with the print
template
"""
IMPORTANT NOTE: This test plan does not take into account the notices
generated for the staff ("These messages were not sent directly to the
patrons."). However the behavior will also change, the print template
will be used in all cases. Is it what we want?
"""
Yes, it is what we want. But if the print template does not exist, the
notice is not generated; we'd like to fall back to the email template instead.
Test plan:
- Remove the print template for the letter you use for overdues
- Define an overdue rule to send an email
- Remove the email address for the patron which has overdues
- Execute the overdue_notices script
The staff should get an email notice and a print notice (using the
email template) should be generated for the patron
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Regression introduced by bug 14133, see bug 14133 comment 13.
Test plan:
Without this patch applied, if a patron cannot be notified (no email
address or SMS number), the print notice generated for the library was
not generated.
With this patch applied, the print notice should be generated using the
print template
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Koha's EDIFACT module works great for many European vendors,
but does not work well for US vendors, which have a much different
interpretation of 'standard'. In fact, each vendor may require
different arrangements of values in EDIFACT messages. It would be
impossible to encompass all these requirements within Koha's EDIFACT
module itself. Instead, we should allow the module to be pluggable, so
versions of the module can be developed for vendors that require EDIFACT
messages that don't conform to the standard set by Koha's EDIFACT
module.
Test Plan:
1) Apply this patch
2) Run updatedatabase
3) Enable Koha plugins
4) Install the Edifact stub plugin available at
https://github.com/bywatersolutions/koha-plugin-edifact-stub
5) Edit the EDI Vendor account, assign the plugin to a Vendor EDI account
6) Test EDI functionality ( ORDER, INVOICE ), there should be no errors
or changes to the EDIFACT message input or output
Signed-off-by: Jason DeShaw <JDeShaw@cityoffargo.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Bug 9006 changed the api for retrieving config values
from C4::Context after the removal of Autoload
This changes the syntax used to retrieve logdir to reflect
the correct syntax
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Add support for processing incoming Edifact Quotes, Invoices
and order responses and generating and transmission of
Edifact Orders.
Basic workflow is that an incoming quote generates an acquisition
basket in Koha, with each line corresponding to an order record
The user can then generate an edifact order from this (or another)
basket, which is transferred to the vendor's site
The supplier generates an invoice on despatch and this will
result in corresponding invoices being generated in Koha
The orderlines on the invoice are receipted automatically.
We also support order response messages. These may include
simple order acknowledgements, or supplier reports/amendments
on availability. Cancellation messages cause the Koha order
to be cancelled; other messages are recorded against the order.
Which messages are to be supported/processed is specifiable on a
vendor by vendor basis via the admin screens.
You can also specify auto order, i.e. generating orders from quotes
without user intervention - this reflects existing
workflows where most work is done on the supplier's website,
which then generates a dummy quote.
Received messages are stored in the edifact_messages table
and the original can be viewed online.
Database changes are in installer/data/mysql/atomicchanges/edifact.sql
Note new perl dependencies:
Net::SFTP::Foreign
Text::Unidecode
Signed-off-by: Paul Johnson <p.johnson@staffs.ac.uk>
Signed-off-by: Sally Healey <sally.healey@cheshiresharedservices.gov.uk>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Rename not_borrowered_since to not_borrowed_since
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Koha's SIP2 server should have support for the AV field ( fine items ).
The biggest problem with this field is that its contents are not really
defined in the SIP2 protocol specification. All it says is "this field
should be sent for each fine item". Due to this, I think the contents of
the field need to be configurable at the login level, so that the
contents can be defined based on the SIP2 device's requirements for the
AV field.
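A sketch of how each AV field could be produced from such a template (the
variable names and the use of Template Toolkit here are assumptions, not the
actual SIP server code):

    use Template;
    my $tt = Template->new;
    my @av_fields;
    for my $accountline (@outstanding_fines) {
        my $av = q{};
        $tt->process( \$av_field_template, { accountline => $accountline }, \$av );
        push @av_fields, $av;
    }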
Test Plan:
1) Apply this patch
2) Find a patron with outstanding fines
3) Run a patron information request using misc/sip_cli_emulator.pl using the new -s option with the value " Y "
4) Note there is an AV field for each fee containing the description and amount
5) Edit your sip config, add an av_field_template parameter to the login you are using such as
av_field_template="TEST [% accountline.description %] [% accountline.amountoutstanding | format('%.2f') %]"
6) Restart your SIP server
7) Repeat the patron information request
8) Note your custom AV field is being used!
Signed-off-by: Chris Davis <cgdavis@uintah.utah.gov>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^.*set the version for version checking.*\n//' **/*.pm
+ manual adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Mainly a
perl -p -i -e 's/^.*3.07.00.049.*\n//' **/*.pm
Then some adjustments
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
perl -p -i -e 's/^(use vars .*)\$VERSION\s?(.*)/$1$2/' **/*.pm
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
pod2usage will exit with the status given as a parameter.
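For example (Pod::Usage; the message text is illustrative):

    use Pod::Usage;
    pod2usage({ -exitval => 2, -msg => 'At least one criterion must be supplied' });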
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
To test:
With UsageStats syspref set to No:
* run misc/cronjobs/share_usage_with_koha_community.pl
(without -q)
- "The UsageStats system preference is not set." message
with usage info should be output
* run misc/cronjobs/share_usage_with_koha_community.pl -q
- the output should be quiet
NOTE: See comment #7.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
The comment was using an incorrect (but similarly spelled) word, obscuring
the meaning slightly. Also corrected the release note, additionally
altering the grammar there, as it should have been 3rd person singular,
so that it now reads more clearly
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Caused by commit 7e70202d34
Bug 15381: Remove GetAuthType and GetAuthTypeCode
If you execute perl misc/migration_tools/merge_authority.pl -f 1 -t 2
you will get:
Can't locate object method "authtypecode" via package "1" (perhaps you forgot to load "1"?)
at misc/migration_tools/merge_authority.pl line 58.
GetAuthority does not return a Koha::Authority but a MARC::Record:
there is no authtypecode method!
Test plan:
perl misc/migration_tools/merge_authority.pl -f X -t Y
Should not return any error.
Note that if the authid X or Y does not exist, the script will die.
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
This reverts commit ca00f0ddae.
Bug 13805 fixes an installer bug by disabling the syspref cache.
It was not a good idea, it introduced performance issues (see bug 13805
comment 14).
Test plan:
0/ Create a new database and fill the database entry in the koha conf
with its name
1/ Go on the mainpage, you should be redirected to the installer
2/ Try to log in
You should not get the login form again.
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Test plan not followed by me for this patch, due to lack of a working
plack setup, but I don't expect it to cause any problems, and the
performance gain for plack will be tremendous
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
This is only in koha.psgi; it was introduced by bug 13815 but
should not have been added by that patch.
Removing it should not introduce any changes.
Note that it won't impact Debian packages.
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
At the moment, the sysprefs are only cached in the memory of the thread
executing the process.
When using Plack, that means we need to clear the syspref cache on each
page.
To avoid that, we can use Koha::Cache to cache the sysprefs correctly.
A big part of the authorship of this patch goes to Robin Sheat.
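Conceptually, the preference lookup then becomes something like this (a
sketch only; the real C4::Context code and cache key naming differ in
detail):

    my $cache = Koha::Cache->get_instance();
    my $value = $cache->get_from_cache("syspref_$var");
    if ( !defined $value ) {
        ($value) = $dbh->selectrow_array(
            q{SELECT value FROM systempreferences WHERE variable = ?},
            undef, $var
        );
        $cache->set_in_cache( "syspref_$var", $value );
    }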
Test plan:
1/ Add/Update/Delete local use prefs
2/ Update pref values and confirm that the changes are correctly taken
into account
Signed-off-by: Chris <chrisc@catalyst.net.nz>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Tested with plack with the syspref cache enabled; there is some delay between setting a syspref and it being applied, but it takes just one reload of the page, so it shouldn't be a problem, should it?
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Tested with CGI and CGI + memcache; some small issues still remain,
but it would be better to deal with them in separate bug reports
if necessary
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
This followup fixes a tiny mistake in the script POD.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Since bug 5010 was pushed, OPACBaseURL already contains the protocol. The
sitemap.pl script was written before this was pushed, and thus still concatenates
http:// in front of OPACBaseURL.
This patch removes this behaviour.
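The gist of the fix (the variable name is illustrative):

    # Before - produced http://http://... once bug 5010 made the pref include the protocol:
    # my $url = 'http://' . C4::Context->preference('OPACBaseURL');
    # After:
    my $url = C4::Context->preference('OPACBaseURL');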
To test:
- Have OPACBaseURL set to (say) http://myopac.com
- Run the sitemap.pl script without specifying the --url param
=> FAIL: Notice URLs look like http://http://myopac.com/bib... in the sitemap files.
- Apply the patch
- Run the sitemap.pl script without specifying the --url param
=> SUCCESS: Notice URLs look correctly like http://myopac.com/bib...
- Sign off :-D
Regards
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
The term 'borrower' should not be used anymore, especially in new code.
This patch moves files and renames variables newly pushed (i.e. in the Koha
namespace).
Test plan:
1/
git grep Koha::Borrower
should not return code in use.
2/
Prove the different modified test files
3/ Do some clicks in the member^Wpatron module to be sure there is not
an obvious error.
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as described. Tested with the Circulation, Members/Patrons, Discharge,
and Restrictions modules and the most common functionalities
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Most of the code here is unnecessarily complex. We should select
the currency if it is selected, that's all :)
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
This patch adds:
- a new DB field items.new.
- a new page to configure this new status
(tools/toggle_new_status.pl).
- a new cronjob script (misc/cronjobs/automatic_item_modification_by_age.pl
was misc/cronjob/toggle_new_status.pl)
Why is this status useful for some libraries?
The use cases are:
- to know easily which items are new (with a simple SQL query; see the sketch after this list).
- to display an icon in the search results.
- issuing rules can be adapted for new items. Automatically (using the
cronjob script), the status changes (depending on the configuration) and
the item can then be issued, for example.
- RSS/Atom feeds can be created for these new items.
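The "simple SQL query" use case, as a sketch (items.new is the column added
by this patch; what you store in it is up to your rules):

    my $new_items = $dbh->selectcol_arrayref(
        q{SELECT itemnumber FROM items WHERE `new` IS NOT NULL}
    );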
Test plan:
- log in with a librarian having the tools > items_batchmod permission.
- navigate to Home > Tools > Automatic item modifications by age (was: Toggle new status)
- click on the edit button
- there are 3 "blocks":
* duration: the duration during which an item is considered new.
* conditions: the status will change only if the conditions are met.
* substitutions: if there is no substitution, no action will be done.
You can add some change to apply to the matching items.
E.g. ccode=3
new=''
If the value is an empty string (in other words, the input does not
contain anything), the field will be deleted.
You can create as many rules as you want.
- test the interface : add/remove rule, conditions, substitutions,
submit the form, edit, etc.
(There is a looot of JS everywhere, so certainly a looot of bugs...).
- when you have your rules defined, you can now launch the cronjob
script without any parameter.
A report will be displayed with the matching itemnumber and the
substitutions to apply. Verify results are consistent.
- launch the script with the -c argument and verify values have been
modified according to the substitution rules.
Signed-off-by: juliette et remy <juliette.levast@iepg.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Add the ability to specify fields from biblioitems table.
Test plan:
Same as before but try with fields from the biblioitems table.
Signed-off-by: juliette et remy <juliette.levast@iepg.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Add UT for C4::Items::ToggleNewStatus
Test plan:
prove t/db_dependent/Items/ToggleNewStatus.t
Signed-off-by: juliette et remy <juliette.levast@iepg.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: FIX - condition on biblioitems table does not work
If a rule contains a condition on the biblioitems table, the match won't
work. This patch fixes this issue.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Use DBIx-Class to retrieve column names
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Don't use the biblioitems fields for the substitution
It's dangerous to allow a change on the biblioitems fields with this
feature.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Rename the duration parameter with 'age'
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: The age parameter should be a number
The template should check if the age parameter is correctly filled
(should be a number).
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023: Change the name of the feature
Originally this feature only permitted updating the "new" field.
Now all item fields can be updated.
The name of the feature is now "Automatic item modifications by age".
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023 [QA Followup]
* Update DB version
* Fix capitalization error
* Rename misc/cronjobs/toggle_new_status.pl to misc/cronjobs/automatic_item_modification_by_age.pl
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 11023 [QA Followup] - Complete the renaming of "toggle new status" to "automatic item modification by age"
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Right now, fines are updated based on the fine description. There are a
number of areas where this can go wrong (date or time format changing,
title being modified, etc.). Now that each issue has a unique
identifier, we should use that for selecting and updating fines.
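A sketch of updating by issue_id instead of by description (column names per
the Koha schema of the time; the real UpdateFine code does more):

    $dbh->do(
        q{UPDATE accountlines
          SET    amount = ?, amountoutstanding = ?
          WHERE  issue_id = ? AND accounttype = 'FU'},
        undef, $amount, $amount_outstanding, $issue_id
    );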
Test Plan:
1) Apply this patch
2) Test creating and updating fines via fines.pl
and checking in overdue items. No changes should be noted.
3) prove t/db_dependent/Circulation.t
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
When exporting records (tools/export.pl or misc/export_records.pl), a
file of ids (authid or biblionumber) can be passed to filter the
results.
Bug 14722 has broken this behavior.
Test plan:
Export records and specify a list of records to filter the results.
Prior to this patch, the record with the id 1 was exported.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
It has been decided that Moose should not be a dependency for Koha, and
that this patch set should be reverted to avoid its use.
This reverts commit 43bcc1c42c.
This reverts commit e5f4a0e3d5.
This reverts commit 6d44b0a91a.
C4::Branch::GetBranchDetail retrieved library info; it can easily be
replaced with Koha::Libraries->find.
Where this change would need other big changes, the unblessed method is
called to manipulate a hashref (as before) instead of a Koha::Library
object (for instance when $library is sent to GetPreparedLetter).
Test plan:
1/ Print a basket group, the library names should be correctly
displayed.
2/ Enable emailLibrarianWhenHoldIsPlaced and place a hold, a HOLDPLACED
notice will be generated (focus on the library name)
3/ Edit a patron and change his/her library
4/ Generate the advanced notices (misc/cronjobs/advance_notices.pl) and
have a look at the generated notices
5/ Same for overdue notices
6/ Set IndependentBranches and use a non superlibrarian user to place a
hold. The "pickup at" should be correctly filled.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
Script does not appear to have any other modifying patches at this
time based on bz splitter. This is a perfect time to clean up this
script!
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
If no branchcode is given, all the libraries are retrieved and the same
query (so without using the libraries loop) is executed for each
library.
Test plan:
Use the j2a.pl cronjob to change the category of a child patron
If a branchcode is passed to the script, only the children from this
branchcode should be updated.
But if it is not passed, all children of the DB should be updated.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher brendan@bywatersolutions.com
This patch does:
[1] It removes some unused modules.
[2] It adds some options not listed in the synopsis.
[3] It removes an unused sql expression from one query.
Note: In fines-related code the third parameter of CalcFine is
sometimes also named days_overdue.
[4] Corrects a few typos in comments or pod.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>