When using __() (i.e. Gettext.js) we are seeing translations that are marked as fuzzy.
This is definitely not the expected behaviour.
It happens because (our version of) po2json is old and no longer maintained,
and it simply embeds the fuzzy entries.
It seems that the tool we have has since been superseded by a JS version
(from different authors).
Test plan:
(replace LANG with your language code)
0. Do not apply this patch
Edit misc/translator/po/LANG-messages-js.po
Mark a string as fuzzy (see the example after this test plan)
Edit ./intranet-main.tt and add the following lines inside $(document).ready
console.log(_("Your string"));
console.log(__("Your string"));
Replace "Your string" with the string you are actually testing.
Update the templates: `koha-translate --update LANG --dev kohadev && restart_all`
Go to the Koha home page, open the console.
=> Notice that the second log in the console is displaying the fuzzy string.
1. Apply this patch
Install the new version of po2json using `yarn install`
Repeat the previous steps.
=> With this patch applied both logs show the English version of the
string.
Remove fuzzy, update the templates and try again.
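For reference, a fuzzy entry in the po file looks like this (the translation text here is only an example):
#, fuzzy
msgid "Your string"
msgstr "Votre chaîne"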
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
In "Bug 19532: Make recalls.status an ENUM" one-letter statuses were
replaced by their verbal equivalents. But in overdue_recalls.pl
remained "R". As a result, recalls never overdue.
No test plan provided--obvious correction.
Sponsored-by: Ignatianum University in Cracow
Signed-off-by: Phil Ringnalda <phil@chetcolibrary.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
The cron can take a very long time to run on systems with many issues.
For example, a partner with ~250k auto_renew issues is taking about 9 hours to run.
If we run that same number of issues in 5 parallel chunks
(splitting the number of issues as evenly as possible), it could take under 2 hours.
Test Plan:
1) Generate a number of issues marked for auto_renew
2) Run the automatic_renewals.pl, use the `time` utility to track how much time it took to run
3) Set parallel_loops to 10 in the auto_renew_cronjob section of koha-conf.xml (see the snippet after this test plan)
4) Repeat step 2, note the improvement in speed
5) Experiment with other values
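For reference, the koha-conf.xml change from step 3 would look something like this (the exact placement within the file may differ on your install):
<auto_renew_cronjob>
    <parallel_loops>10</parallel_loops>
</auto_renew_cronjob>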
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds logging of unhandled exceptions that could occur. This
is happening on busy production sites right now. This is also useful for
plugin jobs that might not be 100% following the guidelines and would
benefit from this.
But as the [DO NOT PUSH] patch highlights, this is something we really
want to have on our current codebase, as a database connection drop
might make us reach that `catch` block we are adding logging to in this
patch.
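Conceptually, the change amounts to logging from the worker's catch block; a rough sketch, not the literal diff ($job and $args come from the surrounding worker loop):
use Try::Tiny;
use Koha::Logger;
try {
    $job->process( $args );
} catch {
    # New: record why the job failed instead of failing silently
    Koha::Logger->get->warn(
        sprintf "Uncaught exception while processing job %s: %s", $job->id, $_ );
    $job->status('failed')->store;
};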
To test:
1. Apply the [DO NOT PUSH] patch
2. Run:
$ ktd --shell
k$ restart_all ; tail -f /var/log/koha/kohadev/worker*.log
3. Pick a valid barcode on the staff UI
4. Use the 'Batch delete items' tool in the cataloguing section
5. Start the job for deleting the item
=> FAIL: The item got deleted, but the job is marked as failed and there
are no logs explaining why
6. Apply this patch and repeat 2-5
=> SUCCESS: Same scenario but there's a log with the error message
7. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
By adding an empty text body, we force Email::Stuffer/Email::MIME
to use multipart/mixed handling for the attachment instead of forcing
a single part (ie direct attachment) email, which is not consistently
handled by different email clients.
An empty text body is language-neutral (ie not imposing English),
and it allows SMTP servers to inject organisational footers into
the email (e.g. confidentiality notices).
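As an illustration of the idea (not the literal patch; addresses and attachment are made up):
use Email::Stuffer;
# The empty text_body adds a text part, so the attachment becomes one part
# of a multipart/mixed message instead of being the entire message body.
Email::Stuffer->from('koha@example.org')
              ->to('patron@example.org')
              ->subject('Your Koha report')
              ->text_body(q{})
              ->attach_file('report.csv')
              ->send_or_die;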
Signed-off-by: Magnus Enger <magnus@libriotech.no>
I have tested this solution in production, and it works for me.
Bug 32575: Tidy patch
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1 - Perform some transactions
2 - Enable Pseudonymization
3 - perl misc/maintenance/pseudonymize_statistics.pl -v
4 - Confirm it was a test run and nothing changed
5 - perl misc/maintenance/pseudonymize_statistics.pl -v -c
6 - Confirm statistics are pseudonymized
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Amend: tidied the changes [tcohen]
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Just as IncludeSeeFromInSearches adds the see from headings (4xx) to searches,
the new system preference IncludeSeeAlsoFromInSearches adds the see also from headings (5xx).
Test plan:
Test on both Elasticsearch and Zebra:
1)
1.1) Create an authority record with heading, see from heading 4xx and see also from heading 5xx
1.2) Use this authority in a biblio record
2)
2.1) Set IncludeSeeFromInSearches and IncludeSeeAlsoFromInSearches to "Don't include"
2.2) Rebuild search engine
2.3) Perform a search on heading text => you find the biblio record
2.4) Perform a search on see from heading => you do not find the biblio record
2.5) Perform a search on see also from heading => you do not find the biblio record
3)
3.1) Set IncludeSeeFromInSearches and IncludeSeeAlsoFromInSearches to "Include"
3.2) Rebuild search engine
3.3) Perform a search on heading text => you find the biblio record
3.4) Perform a search on see from heading => you find the biblio record
3.5) Perform a search on see also from heading => you find the biblio record
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. Set up autorenewals by adjusting circulation rules:
'Automatic renewal' -> 'Yes'
'No automatic renewal before' -> 5
2. Set 'AutoRenewalNotices' to 'according to patron messaging preferences'.
3. Set an AUTO_RENEWALS and AUTO_RENEWALS_DGST notice to include branch info. I am using this to test:
Branchcode: [% branch.branchcode %]
Branch name: [% branch.branchname %]
Branch address: [% branch.branchaddress1 %]
Branch address2: [% IF branch.branchaddress2 %][% branch.branchaddress2 %][% END %]
Branch city: [% branch.branchcity %], [% branch.branchstate %] [% branch.branchzip %]
4. Make sure your branch has the proper info filled out in Libraries administration.
5. Find a patron and adjust the messaging preferences so they receive automatic renewal notices. Also make sure the patron has an email.
6. Check out some items and make them due within the next 5 days.
7. Run the automatic_renewal cron job:
perl /kohadevbox/koha/misc/cronjobs/automatic_renewals.pl -c -v
8. Notice no branch information displays.
9. APPLY PATCH
10. Check out items from multiple issuing branches to a single patron.
11. Make sure the patron's messaging prefs are set to receive NON-digestable notices.
12. Run the automatic renewal job; each notice should include the branch information from the issuing library.
13. Change the patron's messaging preferences to receive digestable notices.
14. Run the job without the --digest-per-branch flag. You should get a single notice with the branch info coming from the patron's home branch.
15. Run the job with the --digest-per-branch flag. You should get separate digested notices with the branch info coming from the issuing library branch.
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
For reasons unknown, GetOptions is inserting an empty string into the letter_code list. If you are running the script with a letter code filter, the empty string is just added to the OR, so it still functions. If no letter_code is passed, the search requires the letter code to be an empty string, which will of course fail. Even more perplexing is that this does not happen for the type list, which uses essentially identical code.
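A minimal sketch of the kind of guard the fix implies (option and variable names here are illustrative, not necessarily those used by the script):
use Getopt::Long;
my @letter_code;
GetOptions( 'code|letter_code=s' => \@letter_code );
# Drop the spurious empty string so a run without a filter matches all letter codes
@letter_code = grep { $_ ne q{} } @letter_code;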
Test Plan:
1) Generate some messages in the message queue
2) Run `process_message_queue.pl -v -c`
3) Note nothing happens
4) Apply this patch
5) Repeat step 2
6) Messages are sent!
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
We are not using it and it's confusing; let's remove the context stack.
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Test plan:
1. git grep 'reseve'. Notice there are instances of 'reseve'
2. Apply patch
3. Repeat step 1, there should be no instances of 'reseve'
Sponsored-by: Catalyst IT, New Zealand
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
--category-code was not checked in the "at least one filter option"
check but it is clearly a filter option.
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
* Change the SYNOPSIS to better describe the different ways to use the
script
* Only show the SYNOPSIS when options used are wrong (unknown option,
no filter options, or neither -c nor -v)
* Show the options details only with --help
* Clarify the fact that -v is required when -c is not supplied in the
description of both options
* Print a specific error message for the following cases:
* no filter options
* neither -c nor -v was supplied
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This patch adds a hint to the end of the script to notify the end user
that they may need to run the build_holds_queue cronjob if they are
using RealTimeHoldsQueue.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
If RealTimeHoldsQueue is on, touch_all_biblios triggers an update_holds_queue_for_biblios background job for each affected record. This results in as many background jobs being queued up as there are records! It makes far more sense for this script not to do that, which gives the administrator the option of running the holds queue builder if the changes would affect holdability, or of not running it at all.
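Presumably the script now tells ModBiblio to skip the holds queue update for each record it touches; a sketch under that assumption (the option name is an assumption, and $record, $biblionumber and $frameworkcode come from the script's loop over all biblios):
use C4::Biblio qw( ModBiblio );
# Re-save the record without queueing an update_holds_queue_for_biblios job
ModBiblio( $record, $biblionumber, $frameworkcode, { skip_holds_queue => 1 } );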
Test Plan:
1) Run touch_all_biblios.pl
2) Note an update_holds_queue_for_biblios background job is queued for each record touched
3) Apply this patch
4) Run touch_all_biblios.pl again
5) Note that no update_holds_queue_for_biblios jobs were queued
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This bug originally wanted to get rid of "noindex" coming from this meta
tag:
<meta name="robots" content="noindex">
But actually we have other strings from the meta tags that should not be
translated.
Test plan:
0. Do not apply this patch
1. cd misc/translator/po && gulp po:update --lang es-ES (or any other
lang)
2. git commit -a -m"wip"
3. Apply this patch
4. Repeat 1 and git diff to show the diff
Notice that strings that should not be translated are removed from the
po files (actually commented)
Signed-off-by: Caroline Cyr La Rose <caroline.cyr-la-rose@inlibro.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Philip Orr <philip.orr@lmscloud.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the option to specify a framework to be used when overlaying records from webservices/connexion
To test:
1 - vim /etc/koha/sites/kohadev/connexion.cnf
2 - Set content:
host:
port: 8888
koha:http://localhost:8081
log:/var/log/koha/kohadev/connexion.log
match:ISBN
user:kohauser
password:kohapass
overlay_action:replace
nomatch_action:create_new
item_action:always_add
import_mode:direct
framework:BKS
overlay_framework:
debug:1
3 - Save the sample file from this bug into your kohaclone (or copy and paste into a file your koha test site can reach)
4 - On the command line:
perl misc/bin/connexion_import_daemon.pl -c /etc/koha/sites/kohadev/connexion.cnf
5 - In another terminal:
cat bug_33418.test | nc -v localhost 8888
6 - It should report success and a biblionumber
7 - In Koha: Cataloguing->Manage staged MARC record
8 - View the most recent batch with file name (webservice)
9 - Confirm it was imported, no match
10 - Click 'View' under the 'Record' column
11 - Confirm record loads correctly and 'MARC framework' detail is 'Books, Booklets, Workbooks'
12 - On the terminal repeat the command:
cat bug_33418.test | nc -v localhost 8888
13 - It should succeed
14 - View the new batch, confirm the record matched this time
15 - View the record details, confirm framework is now 'default'
16 - On the first terminal hit Ctrl+C to stop the daemon, then edit connexion.cnf and change:
import_mode:stage
framework:ACQ
overlay_framework:IR
17 - Restart daemon:
perl misc/bin/connexion_import_daemon.pl -c /etc/koha/sites/kohadev/connexion.cnf
18 - Delete the record created above
19 - On the second terminal repeat the command:
cat bug_33418.test | nc -v localhost 8888
20 - Confirm the batch is created, but not imported
21 - In terminal:
perl misc/cronjobs/import_webservice_batch.pl --framework=ACQ --overlay_framework=BKS
22 - Confirm batch imported, and record in ACQ framework
23 - In terminal:
cat bug_33418.test | nc -v localhost 8888
perl misc/cronjobs/import_webservice_batch.pl --framework=ACQ --overlay_framework=BKS
24 - Confirm batch added, record matched, record imported, and record now in Books framework
25 - Stop the daemon, edit connexion.cnf:
import_mode:direct
26 - Start the daemon, and on other terminal repeat:
cat bug_33418.test | nc -v localhost 8888
27 - Confirm record in Binders framework
28 - Set record framework to Books
29 - Stop daemon, edit cnf and remove 'overlay_framework' setting
30 - Start daemon and cat the file again
31 - Confirm the record remains in Books framework
Signed-off-by: Brendan Lawlor <blawlor@clamsnet.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
misc/cronjobs/staticfines.pl is calling output_pref() but it is missing from the module import:
use Koha::DateUtils qw( dt_from_string );
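Presumably the fix just extends the import list along these lines:
use Koha::DateUtils qw( dt_from_string output_pref );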
Test by running staticfines.pl
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Update the preference name from 'Borrower' to 'Patron' and correct
spelling of receive.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. APPLY PATCH and restart all
2. Run the update_localuse_from_statistics.pl script without the --confirm flag; nothing happens.
3. Run the script again with the --confirm flag; this time the localuse values are updated.
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
To test:
1. Run the update_totalissues cron, the holds queue is updated.
2. APPLY PATCH, restart services
3. Run update_totalissues cron again, this time the holds queue should always be skipped.
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Koha::Objects depends on Koha::DateUtils, which depends on C4::Context,
which depends on Koha::Config::SysPrefs, which depends on Koha::Objects
Apart from the circular dependency, the dependency on C4::Context alone
is problematic as it loads a bunch of modules that are not needed at all
in Koha::Objects (YAML::XS and ZOOM for instance).
As Koha::Objects is used as a base for a lot of modules, we should take
care to only load the minimum required.
This patch removes uses of Koha::DateUtils from Koha::Objects.
It was only used in Koha::Objects::filter_by_last_update
filter_by_last_update now requires the 'from' and 'to' parameters to be
DateTime objects. Previously it would also accept date and datetime
strings. This possibility was only used in two places:
* misc/cronjobs/cleanup_database.pl
* tools/cleanborrowers.pl
Now they call dt_from_string first and pass a DateTime object to
filter_by_last_update
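As a rough illustration of the new calling convention (the resultset class and date are only examples):
use Koha::DateUtils qw( dt_from_string );
use Koha::Old::Checkouts;
# Callers now convert strings to DateTime themselves before filtering
my $to = dt_from_string('2023-12-31');
my $old_checkouts = Koha::Old::Checkouts->filter_by_last_update( { to => $to } );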
Test plan:
1. Run `perl -cw Koha/Objects.pm`. It should only say:
"Koha/Objects.pm syntax OK" without warnings
2. Run `prove t/db_dependent/Koha/Objects.t`
3. Verify that misc/cronjobs/cleanup_database.pl works as before,
especially with the options --pseudo-transactions,
--pseudo-transactions-from and --pseudo-transactions-to
4. Go to Tools » Batch patron deletion and anonymization, check "Verify
you want to anonymize patron checkout history" and enter a date in
the text input below. Then click Next and verify that the correct
count of borrowers is shown. Click on the "Finish" button and verify
that the circulation history has been correctly anonymized
See also bug 36432
Signed-off-by: Tadeusz Sośnierz <tadeusz@sosnierz.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Add an unallocated option to CreateQueue and pass it through as needed.
Avoid deletion of the tmp_holdsqueue, and only check holds
and items that are not currently matched.
A future hold with a higher priority will still fail here - because the
item may already be assigned - but on the next change to the biblio it
will be corrected.
To test:
1) Apply both patches
2) Enable RealTimeHoldsQueue and set HoldsQueueSkipClosed to "open"
3) Add a holiday to the calendar for all libraries for today, visit:
/cgi-bin/koha/tools/holidays.pl
-- Click today's day on the calendar and pick "Holiday repeated every same day of the week"
-- Click "Copy to all libraries". Hit "Save.
4) Place a biblio-level hold on a biblio record and set the pickup location to a library that has available copies, visit:
-- /cgi-bin/koha/reserve/request.pl?biblionumber=76&borrowernumber=51
-- Click the first "Place hold" button to place the biblio-level hold.
5) Verify that the hold got added to the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
6) Place a biblio-level hold on a biblio record where there are no other holds and copies are available at another location, but not the pickup location, visit:
-- /cgi-bin/koha/reserve/request.pl?biblionumber=437&borrowernumber=51
-- On the "pickup at" dropdown, pick something else other than "Centerville", e.g. "Fairfield".
-- Click the first "Place hold" button to place the biblio-level hold.
7) Check the holds queue again, notice that this 2nd hold was not added to the queue:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
8) Run the updated cronscript:
perl misc/cronjobs/holds/build_holds_queue.pl --force --unallocated
9) Notice nothing changed in the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
10) Remove the holiday we created previously, visit:
/cgi-bin/koha/tools/holidays.pl
-- Click today's day on the calendar and pick "Delete this holiday"
-- Click "Copy to all libraries". Hit "Save.
11) Run the updated cronscript:
perl misc/cronjobs/holds/build_holds_queue.pl --force --unallocated
12) Confirm the second hold is added to the holds queue, visit:
/cgi-bin/koha/circ/view_holdsqueue.pl?branchlimit=&itemtypeslimit=&ccodeslimit=&locationslimit=&run_report=1
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Sometimes it would be better for process_message_queue.pl to stop if a plugin hook fails rather than continue processing. It would be nice if that was a command line option.
Test Plan:
1) Install any plugin with a before_send_messages hook
2) Modify the plugin, add a 'die;' statement at the start of the
before_send_messages method of the plugin.
3) Run process_message_queue.pl as usual
4) Note the exit code is 0
5) Run it again with the new -e setting
6) Note the exit code is 1
Signed-off-by: Brendan Lawlor <blawlor@clamsnet.org>
Signed-off-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds a clarification to writeoff_debts.pl to make it clear that --category-code can't be used as the only filter when running the script. If it is, the script just displays the help menu, as it expects one of the other filters to be supplied.
Test plan:
1) Run perl misc/cronjobs/writeoff_debts.pl --category-code TEST --confirm
2) Observe that the help menu is displayed instead of running the script
3) Check the help menu for the text added in this patch
WNC amended patch - added same warning in the option section
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Andrew Fuerste-Henry <andrewfh@dubcolib.org>
Signed-off-by: Andrew Fuerste Henry <andrewfh@dubcolib.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This script allows deduplicating authorities automatically.
The script is in misc/maintenance/
It works this way:
1) authorities are fetched from the database. You can limit fetched
results by authtypecode, or directly by specifying WHERE clause
2) for each authority:
2.1) build a Zebra query using the 'search_form' for the heading
2.2) run the query, retrieve the results
2.3) among duplicates, choose the one we want to keep (use
--choose-method option).
2.4) use C4::Authorities::merge to merge authorities
3) delete the merged authorities
Use --help for more information on the options
To be done:
1 - Move to module and cover with tests
2 - Add option to only merge unused authorities
3 - Expand 'ppn' option to be 'control-number' option and allow specifying field
4 - More?
1 & 2 I will attempt - 3 & 4 may be future enhancements
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Sometimes we need to only re-index a subset of our bibliographic data or authorities. Currently this is only possible by enumerating all ids (-bn or -ai), which does not work well when indexing e.g. 100,000 items of a 2,000,000 record DB. Re-indexing everything is also overkill.
This patch adds an `--where` flag to misc/search_tools/rebuild_elasticsearch.pl which can take arbitrary SQL (that of course has to match the respective tables) and adds it as an additional param to the resultset to index
To test, start koha-testing-docker with ElasticSearch enabled, for example via `ktd --es7 up`
Before applying the patch, rebuild_elasticsearch will index all data:
Biblios:
$ misc/search_tools/rebuild_elasticsearch.pl -b -v
[12387] Checking state of biblios index
[12387] Indexing biblios
[12387] Committing final records...
[12387] Total 435 records indexed
(there might be a warning regarding a broken biblio, which can be ignored)
Auth:
$ misc/search_tools/rebuild_elasticsearch.pl -a -v
[12546] Checking state of authorities index
[12546] Indexing authorities
[12546] 1000 records processed
[12546] Committing final records...
[12546] Total 1706 records indexed
Now apply the patch
Biblio, limit by range of biblioid:
$ misc/search_tools/rebuild_elasticsearch.pl -b -v --where "biblionumber between 100 and 150"
[12765] Checking state of biblios index
[12765] Indexing biblios
[12765] Committing final records...
[12765] Total 50 records indexed
Note that only 50 records were indexed (instead of the whole set of 435 records)
Auth, limit by authtypecode:
$ misc/search_tools/rebuild_elasticsearch.pl -a -v --where "authtypecode = 'GEOGR_NAME'"
[12848] Checking state of authorities index
[12848] Indexing authorities
[12848] Committing final records...
[12848] Total 142 records indexed
Again, only 142 have been indexed.
Sponsored-by: Steiermärkische Landesbibliothek
Sponsored-by: HKS3 / koha-support.eu
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Koha/Illrequests.pm -> Koha/ILL/Requests.pm
Merged:
t/db_dependent/Koha/Illrequests.t
t/db_dependent/Illrequests.t
Into:
t/db_dependent/Koha/ILL/Requests.t
The ILL classes file structure is, for the most part, around 7 years old and doesn't follow a strict logic. It's so confusing that some test files exist redundantly.
This housekeeping should help future ISO18626 work to add Koha as a supplying agency instead of just a requesting agency, as it is now.
It should also help future housekeeping of moving backend-related logic out of Illrequest.pm into Illbackend.pm (now ILL/Request.pm and ILL/Backend.pm as of this patchset).
It should also help in structuring the addition of a master generic form (see bug 35570)
This patchset will require existing backends to be updated to match the new class names and structure, if they invoke them.
Test plan, k-t-d, run tests:
prove t/db_dependent/api/v1/ill_*
prove t/db_dependent/Koha/ILL/*
Test plan, k-t-d, manual:
1) Install FreeForm, enable ILL module, run:
bash <(curl -s https://raw.githubusercontent.com/ammopt/koha-ill-dev/master/start-ill-dev.sh)
2) You'll have to switch the FreeForm repo to the one compatible with this work, like:
cd /kohadevbox/koha/Koha/Illbackends/FreeForm
git checkout reorganize_ILL
3) Do some generic ILL testing:
3.1) Create a request
3.2) Add a comment to a request
3.3) Edit a request
3.4) Edit a request's item metadata
3.5) Confirm a request
3.6) List requests
3.7) Filter requests list using left side filters
4) Install a metadata enrichment plugin:
https://github.com/PTFS-Europe/koha-plugin-api-pubmed
4.1) Create an ILL batch and insert a pubmedid like 123
4.2) Add the request and finish batch
5) Verify all of the above works as expected
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
In situations in which you are not familiar with all the Koha settings
and table structure, the fact that this script just fails telling you
there's a broken FK is not practical.
We should capture those exceptions and display a useful message instead.
This patch does that. It adds some validations and some exception
handling too. It prints a nice message about the bad value the user
passed, and the valid values too!
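A sketch of the kind of validation meant here (not the literal patch; $categorycode and $branchcode come from the script's options):
use Koha::Libraries;
use Koha::Patron::Categories;
# Check user-supplied codes before attempting to create the patron
unless ( Koha::Patron::Categories->find($categorycode) ) {
    die sprintf "Invalid category code '%s'. Valid values are: %s\n",
        $categorycode,
        join( ', ', Koha::Patron::Categories->search->get_column('categorycode') );
}
unless ( Koha::Libraries->find($branchcode) ) {
    die sprintf "Invalid branch code '%s'. Valid values are: %s\n",
        $branchcode,
        join( ', ', Koha::Libraries->search->get_column('branchcode') );
}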
To test:
1. Run this on a fresh KTD:
$ ktd --shell
k$ perl misc/devel/create_superlibrarian.pl \
--userid tcohen \
--password tomasito \
--cardnumber 123456789 \
--categorycode POT \
--branchcode ATO
=> FAIL: It explodes with a MySQL exception message!
2. Apply this patch
3. Repeat 1
=> SUCCESS: It tells you which value is wrong and what values you can
pick to make the command work
4. Pick a valid value, and repeat
=> SUCCESS: Now the other value is wrong, a nice message is displayed!
5. Fix with a valid value and repeat
=> SUCCESS: Patron created!
6. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Introducing $orders->filter_by_obsolete and $orders->cancel.
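A rough sketch of how the new methods are meant to chain (the starting search is illustrative):
use Koha::Acquisition::Orders;
# Narrow a set of orders down to those whose biblio record no longer exists, then cancel them
my $orders   = Koha::Acquisition::Orders->search;
my $obsolete = $orders->filter_by_obsolete;
$obsolete->cancel;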
Test plan:
Run t/db_dependent/Koha/Acquisition/Orders.t
Create a basket with a few orders. Remove some of the biblio records.
Run acq_cancel_obsolete_orders.pl -c
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Test Plan:
1) Test importing a patron with the "from the current membership expiry date" option,
note it does not work
2) Apply this patch
3) Restart all the things!
4) Re-test, note the expiration was renewed from the patron's current
expiration date!
Signed-off-by: David Nind <david@davidnind.com>
Rebased-by: Victor Grousset/tuxayo <victor@tuxayo.net>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
When misc/devel/install_plugins.pl does not find any plugins, it prints
the list of pluginsdir, but with a literal \n separating the dirs, and
no newline at the end.
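A minimal sketch of the kind of change implied (variable names here are illustrative):
# Print one plugin directory per line, with a real newline after each
print "No plugins found\n";
print "pluginsdir contains:\n";
print "$_\n" for @plugin_dirs;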
To test:
- Edit koha-conf.xml and add a second entry for <pluginsdir>, so there
are two entries. The second one could just be a copy of the original.
- Run "perl misc/devel/install_plugins.pl"
- Note the output looks something like this:
No plugins found
pluginsdir contains:
/var/lib/koha/kohadev/plugins\n/var/lib/koha/kohadev/pluginsroot@kohadevbox:koha(master)$
- Apply the patch and run the script again. Output should be:
No plugins found
pluginsdir contains:
/var/lib/koha/kohadev/plugins
/var/lib/koha/kohadev/plugins
root@kohadevbox:koha(master)$
- Sign off
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Emily Lamancusa <emily.lamancusa@montgomerycountymd.gov>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>