Not sure if this script is still used; could someone confirm?
Test plan:
If you know how to test it, please do
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
Use the sanitize_records.pl maintenance script with the --auto-search
option
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Two discussions on koha-devel led to the same conclusion:
biblioitems.marcxml should be moved out of this table
- biblio and biblioitems
http://lists.koha-community.org/pipermail/koha-devel/2013-April/039239.html
- biblioitems.marcxml & biblioitems.marc / HUGE performance issue !
http://lists.koha-community.org/pipermail/koha-devel/2016-July/042821.html
There are several reasons to do it:
- Performance
As Paul Poulain wrote, a simple query like
SELECT publicationyear, count(publicationyear) FROM biblioitems GROUP BY publicationyear;
takes more than 10min on a DB with more than 1M bibliographic records
but only 3sec (!) on the same DB without the biblioitems.marcxml field
Note that prior to this patch set, the biblioitems.marcxml was not
retrieved systematically, but was, at least, in
C4::Acquisition::GetOrdersByBiblionumber and C4::Acquisition::GetOrders
- Flexibility
Storing the marcxml in a specific table would allow us to store several
kinds of metadata (USMARC, MARCXML, MIJ, etc.) and different formats
(marcflavour); see the illustrative sketch after this list.
- Clean code
It would be a first step toward Koha::MetadataRecord for bibliographic
records (not done in this patch set).
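As an illustrative sketch of the flexibility goal above (table and column
names here are assumptions, not necessarily what this patch set implements),
such a dedicated metadata table could look like:
  CREATE TABLE biblio_metadata (
      id           INT(11) NOT NULL AUTO_INCREMENT,
      biblionumber INT(11) NOT NULL,        -- FK to biblio.biblionumber
      format       VARCHAR(16) NOT NULL,    -- e.g. 'marcxml', 'mij'
      marcflavour  VARCHAR(16) NOT NULL,    -- e.g. 'MARC21', 'UNIMARC'
      metadata     LONGTEXT NOT NULL,       -- the serialized record itself
      PRIMARY KEY (id),
      KEY biblionumber (biblionumber)
  );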
Test plan:
- Update the DBIC Schema
- Add / Edit / Delete / Import / Export bibliographic records
- Add items
- Reindex records using ES
- Confirm that the following scripts still work:
* misc/cronjobs/delete_records_via_leader.pl
* misc/migration_tools/build_oai_sets.pl
- Look at the reading history at the OPAC (opac-readingrecord.pl)
- At the OPAC, click on a tag, you must see the result
Note: Changes in Koha/OAI/Server/ListRecords.pm are planned on bug 15108.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Zeno Tajoli <z.tajoli@cineca.it>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Deleted the line.
perlcritic -4 before and after.
Before there are issues; after there are none.
Also, changed the function to not rely on the implicit return value
of the last line, but to explicitly state a return. An operator was also
changed, due to precedence issues.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Variables $extkey and %opt are not used.
Variables $tmptest[...] are not used; calling _build_tag_directory is useless.
The script no longer only prints the help text when you specify -t.
Sub defnonull tidied.
Rearranged modules, removed Dumper.
Test plan:
[1] Run the script with the -t flag. The script should not only print the usage
statement, but should also do a dry run. (Test this on a small database,
or pass an additional where clause.)
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
Verify with diff -w that the tidied file does not introduce code changes
compared to the original file.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patchset moves the C4::Members::GetUpcomingMembershipExpires
subroutine code to the Koha::Patrons->search_upcoming_membership_expires
method.
This subroutine was used in only one place, so it is easy to move.
Test plan:
Use the membership_expiry.pl cronjob script using the different
options.
The behavior should be the same as prior to this patch.
prove t/db_dependent/Koha/Patrons.t
should return green
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds the job to the Debian package file and the examples file
in misc.
Test plan:
Add these lines to your cron file.
Check the results. (If an expected issue passes the grace period defined
in the subscription, its status should change from Expected to Late.)
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
All the values different from the ones GetMember returned have been
managed outside of GetMemberDetails.
It looks safe to replace all the occurrences of GetMemberDetails with
GetMember.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Yes, we are fixing these four typos here.
Test plan:
[1] Read the changes.
[2] Run t/Auth_with_shibboleth.t
[3] Run git grep recieved | grep -v -e 'recievedlist' | grep -v -e 'serials-recieve.tt'
Note: serials-recieve.tt is just history. Bonus points for the one who makes
us get rid of that column recievedlist.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
EDIT:
Rebased. Change in Accounts has been corrected already.
Removed the po file.
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When a patron had overdues on items from different branches,
they received from each corresponding branch a notice claiming
the complete list of these items.
Signed-off-by: sbujam <sbujam@users.noreply.github.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Fix SAX parser error pointing to INSTALL docs
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes three files which are obsolete following the removal
of the OPAC prog template.
To test, apply the patch and confirm that these files no longer exist in
misc/interface_customization:
- koha3-opac-button-background.png
- koha3-opac-button-background.psd
- koha3-opac-button-background.svg
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
To test:
- Have a clean install, no DB
- Run populate_db.pl:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/devel/populate_db.pl
- Go to
http://localhost:8081/cgi-bin/koha/admin/searchengine/elasticsearch/mappings.pl
=> FAIL: No mappings
- Delete the DB and create an empty one:
$ mysql -uroot
> DROP DATABASE koha_kohadev; CREATE DATABASE koha_kohadev;
> GRANT ALL PRIVILEGES ON koha_kohadev.* TO
'koha_kohadev'@'localhost';
- Run populate_db.pl:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/devel/populate_db.pl
- Go to
http://localhost:8081/cgi-bin/koha/admin/searchengine/elasticsearch/mappings.pl
=> SUCCESS: There are mappings!
- Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Same as previous patch for misc/export_records.pl.
Test plan:
- Use syspref item-level_itypes = biblio record
- Run misc/export_records.pl
=> Without the patch you get an error: DBD::mysql::st execute failed: Unknown column 'biblioitems.itemtype' in 'where clause' ...
=> With the patch you get a correct export file
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Export Ok, no errors.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Adding POD and --userid and --password options
1/ To test, use the same routine as before, with no options.
2/ You should have a user with koha/koha as userid and password
3/ Delete that user
4/ Run the script with --userid <userid> --password <password>
5/ You should have a user in koha with userid/password set
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This is for developers: it takes quite a long time (many clicks) to create a new
superlibrarian user.
This new script creates a new user with superlibrarian permissions and
the easy-to-remember credentials koha/koha
Test plan:
perl misc/devel/create_superlibrarian.pl
Log in to Koha using koha/koha
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When launching misc/export_records.pl with this command line:
misc/export_records.pl --date=`date +%d/%m/%Y` --deleted_barcodes --filename=/tmp/koha.mrc
You get this error message:
DBD::mysql::db selectall_arrayref failed: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' at line 4 [for Statement " (
SELECT DISTINCT barcode
FROM deleteditems
WHERE deleteditems.biblionumber = ?
"] at misc/export_records.pl line 189.
This is because of a '(' after 'q|'; it looks like a typo.
Also, this patch removes the useless variable $q.
Test plan:
- Delete an item with a barcode
- Without the patch, run: misc/export_records.pl --date=`date +%d/%m/%Y` --deleted_barcodes --filename=/tmp/koha.mrc
=> You get the dirty MySQL error
- With the patch, run the same command
=> No error and the file is generated
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
When running rebuild_zebra.pl in daemon mode, a while loop runs the script forever.
But if something crashes inside the rebuild process, the whole daemon crashes.
For example, when it cannot access the database.
This problem may be temporary, so the daemon should keep running.
This patch adds an eval around the rebuild process to allow a run to fail without killing the daemon.
It also moves fetching the DB handle inside the daemon loop, because the handle is broken if the DB stops.
This is a big issue for an indexer running as a systemd service.
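A minimal sketch of the loop pattern described above (illustrative only;
variable names are assumptions, not the actual patch code):
  while (1) {
      eval {
          my $dbh = C4::Context->dbh;   # re-fetch the handle inside the loop
          # ... process the zebraqueue / run one indexing pass here ...
      };
      warn "Rebuild failed, daemon keeps running: $@" if $@;
      sleep $sleep_time;                # e.g. the value given to --sleep
  }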
Test plan:
- run rebuild_zebra.pl in daemon mode:
/home/koha/src/misc/migration_tools/rebuild_zebra.pl -daemon -z -a -b -x --sleep 30
- stop the database
- wait a minute
=> you see an error on the database connection
=> the daemon is still running
- restart the database
- test the indexer by creating a new record (wait for a minute)
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
1) Apply the patch
2) Edit a biblio
3) run export_records.pl with a date/time a few minutes in the past,
for example: --format=xml --record-type=bibs --date="2016-10-14 10:00:05" --filename="koha.xml"
4) look in the file and confirm that the right record was exported
5) Try the same but with a time after the biblio was edited; the record shouldn't be exported
Signed-off-by: radiuscz <radek.siman@centrum.cz>
Bug 17444: Follow-up, don't change the name of parameter "date"
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This job should be done each time patron data are deleted. It's better
to do it just before deleting the patron than assuming the caller did
the job by itself.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch moves the C4::Members::DelMember subroutine to the
Koha::Patron module.
The delete method must be overridden to allow handling of the patron's
holds.
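A rough, hypothetical sketch of such an override (class and method names
here are assumptions, not the actual implementation):
  package Koha::Patron;
  sub delete {
      my ($self) = @_;
      # deal with the patron's holds before removing the patron row
      $_->cancel for Koha::Holds->search( { borrowernumber => $self->borrowernumber } );
      return $self->SUPER::delete;
  }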
Test plan:
(With the 2 patches applied)
1/ Create a patron with holds and owner of lists
2/ Delete patrons using the web interface:
- More > Delete on a patron page
- Batch patron deletion tools
3/ and the cronjob script
- perl misc/cronjobs/delete_patrons.pl -c [more options]
The patron should have been moved to the deletedborrowers table, and his/her
holds and lists should have been deleted.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch removes the C4::Members::MoveMemberToDeleted subroutine in
order to replace it with the Koha::Patron->move_to_deleted method.
Next after this change, we will move C4::Members::HandleDelBorrower and
C4::Members::DelMember to the same module to simplify the code in
members/deletemem.pl and misc/cronjobs/delete_patrons.pl
Test plan:
1/ Delete a patron from the staff interface and make sure (s)he has been moved to
the deletedborrowers table.
2/ Use the "Batch patron deletion" tool (tools/cleanborrowers.pl) to
remove a patron. Make sure the "Permanently delete these patrons" and "Move
these patrons to the trash" options work as before
3/ Same as previously but using the cronjob
misc/cronjobs/delete_patrons.pl.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Tested the delete_patrons.pl script and cleanborrowers.pl too.
Tests (are relevant and) pass and the qa scripts are happy too :-D
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Fix record matching in misc/cronjobs/delete_records_via_leader.pl
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Is this script still in use?
It uses the biblioitems.marc field so if it's still useful it will need
to be rewritten.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Recently added, delete_records_via_leader.pl reads biblioitems.marc as a
text field and searches for records to delete based on position 5 of the leader.
This can be achieved by doing the same thing on biblioitems.marcxml (it will
certainly be slower) while waiting for a patch on bug 15537.
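For reference, the leader check itself (position 5 is the record status
byte; 'd' means deleted) could be sketched like this on a MARCXML blob
(illustrative only):
  use MARC::Record;
  use MARC::File::XML ( BinaryEncoding => 'utf8' );
  my $record = MARC::Record->new_from_xml( $marcxml, 'UTF-8' );
  if ( substr( $record->leader(), 5, 1 ) eq 'd' ) {
      # leader/05 says "deleted": this biblio is a candidate for removal
  }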
Test plan:
Confirm that this script works as before. To do so, the easiest way would
be to dump your DB before executing the update DB entry, execute the
script to delete records, reinsert the DB, execute the update DB entry
(remove biblioitems.marc), and execute the script to delete records.
You should get the same number of records deleted.
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This script seems to be unused, and it won't be of any use after
the removal of biblioitems.marc
Signed-off-by: Mason James <mtj@kohaaloha.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch makes the populate_db.pl script uppercase what gets passed
with the --marcflavour option switch. This is needed in order for this
to fit ``kohadevbox`` configuration files, and it is harmless for other
uses.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch enables the --marcflavour option switch so the user
can specify the desired marc flavour. The code for handling it
was already in place, just not used.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Executing the installer process and inserting all the sample data takes a
lot of clicks and time.
The idea of this script is to provide a quick way to insert all the
sample data easily to get a working Koha install asap.
Test plan:
- Set your database config to a non-existent DB
- Execute perl misc/devel/populate_db.pl
You will get an error
- Create an empty DB
- Execute perl misc/devel/populate_db.pl
It will insert all the MARC21 sample data
- Execute perl misc/devel/populate_db.pl
You will get an error because the DB is not empty (systempreferences and
borrowers tables)
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This allows OVERRIDE_SYSPREF_* and others to work properly.
Test plan:
1) Add the following line to your plack.psgi (near the bottom, just
above "mount ..."):
enable "+Koha::Middleware::Plack";
2) Load the OPAC advanced search page (under Plack). The title should
read "Koha online catalog" (or whatever your LibraryName syspref
contains).
3) Add the following to your Apache configuration:
RequestHeader add X-Koha-SetEnv "OVERRIDE_SYSPREF_LibraryName Potato\, Potato"
4) Restart Apache.
5) Refresh. The title should now read "Potato, Potato online catalog".
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
The swagger specification file is currently being minified, adding
manual steps to release management and RESTful API route development.
The minification is not required; the dereferenced version of the
specification is now internally validated at runtime with relevant errors
output, and the dereferenced schema has been made publicly available at
/api/v1/spec, so it can be copy & pasted into validation tools
Test Plan
1) Apply patch
2) Ensure api routes still function (applying the /cities patch might be
helpful)
3) Ensure /api/v1/spec page exists (it should be the de-referenced
swagger.json file)
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
What we currently have:
Koha/ElasticSearch.pm
Koha/ElasticSearch/Indexer.pm
Koha/SearchEngine/Elasticsearch/QueryBuilder.pm
Koha/SearchEngine/Elasticsearch/Search.pm
What we want:
Koha/SearchEngine/Elasticsearch.pm
Koha/SearchEngine/Elasticsearch/Indexer.pm
Koha/SearchEngine/Elasticsearch/QueryBuilder.pm
Koha/SearchEngine/Elasticsearch/Search.pm
Test plan:
% git grep -i Koha::ElasticSearch
% git grep ElasticSearch | grep -v Catmandu::Store::ElasticSearch
should not return any result
Do a full reindex and search for records
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
At one time it was possible to store the results of a report into the
saved_reports table.
This allowed librarians to compare different results from the Koha
interface.
This patch is a proof of concept and is not very polished (understood:
it cannot be pushed like that).
Test plan:
Execute the runreport.pl cronjob script with the new --store-results
option.
This will serialize the results into JSON and put them into the
saved_reports table.
On the "Saved report" list, the "Saved results" column is now populated
with a date (note that you can have several dates for a given report).
If you click on this link, the data will be displayed in a simple table.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Barton Chittenden <barton@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds the --test option switch to the overdue_notices.pl script
so it can be run without performing any actual action.
To test:
- Have a patron with overdue items (simulate a checkout for a past date. Note it implies
that the circ rules are defined so the patron is overdue)
- Run:
$ sudo koha-shell kohadev
koha-dev$ misc/cronjobs/overdue_notices.pl --test
=> SUCCESS: The script runs but the patron isn't debarred and no notice messages are queued.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Barton Chittenden <barton@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The location of the script in misc/maintenance would be fine for
running it from the command line. But it will be a problem for several
install types when running it from the web installer.
Files from misc/maintenance go to bin/maintenance in a package install,
not to mention other installs than a dev install.
This patch moves the script to installer/data/mysql. There are already two
other scripts there that are run by updatedatabase. I would rather move these
three scripts somewhere else, but we could/should do that on another report.
Also fixed a small typo in a message.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
- all non-fatal output redirected to STDOUT (as there is an intention
to run this script from updatedatabase.pl)
- added borrowernumber and itemnumber equality checks to the SELECT
statement in getFinesForChecking() - accountlines.issue_id alone is not
entirely trustworthy (because InnoDB forgets its highest auto_increment
after a server restart); in some rare cases it may point to some random
issue for a different patron and a different item
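A hedged sketch of the stricter lookup described in the second point
(column names and query shape are assumptions, not the exact SQL from the
script):
  my $dbh = C4::Context->dbh;
  my $sql = q{
      SELECT a.accountlines_id
      FROM accountlines a
      JOIN issues i
        ON i.issue_id       = a.issue_id
       AND i.borrowernumber = a.borrowernumber
       AND i.itemnumber     = a.itemnumber
      WHERE a.accounttype = 'FU'
  };
  my $rows = $dbh->selectall_arrayref( $sql, { Slice => {} } );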
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
External maintenance script for fixing unclosed (FU), non-accruing fine
records which may still need FU -> F correction post-Bug 15675.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch modifies the delete_patrons.pl cronjob to deal with the
last_seen option.
To test it, you just have to use the --last_seen option and pass a valid
date (ISO format).
Example:
perl misc/cronjobs/delete_patrons.pl --last_seen="1984-02-04" --confirm
will delete all the patrons who have not been active since this date.
Sponsored-by: BULAC - http://www.bulac.fr/
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
https://bugs.koha-community.org/show_bug.cgi?id=12276
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This script has not been updated for ages and is UNIMARC specific.
Since I am working on bug 17196 to move marcxml out of the biblioitems
table, I'd rather not rewrite unused scripts (and waste my time...)
So if someone complains later, I will rewrite it on top of bug 17196.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The C4::Category module contained only one method, which returned the patron
categories available for the logged-in user.
The new method Koha::Patron::Categories->search_limited does exactly the
same thing (see tests) and must be used in place of it.
Test plan:
- Same prerequisite as before
For the following pages, you should not see patron categories limited to
other libraries.
- On the 'Item circulation alerts' admin page
(admin/item_circulation_alerts.pl), modify the settings for check-in
and checkout (NOTE: Shouldn't we display all patron categories on
this page? If yes, it must be done in another bug report to ease
backporting it).
- Search for patrons in the admin (budget) and acquisition (order) module.
- On the patron home page (search form in the header)
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
We need to define several namespaces for our cache system.
For instance, sysprefs, koha conf (koha-conf.xml) and unit tests
should each be defined in a separate namespace.
This will make it possible to
- launch the tests without interfering with other cache values
- and flush the sysprefs cache without flushing all other values
To do so, we need to store different Koha::Cache objects at a package
level. That's why this patch adds a new Koha::Caches module.
FIXME: There is an architecture problem here: the L1 cache should be
defined in Koha::Cache
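An illustrative sketch of how separate namespaces could then be used (the
exact Koha::Caches API is an assumption here):
  use Koha::Caches;
  my $syspref_cache = Koha::Caches::get_instance('syspref');
  my $config_cache  = Koha::Caches::get_instance('config');
  # flushing the syspref namespace leaves the config namespace untouched
  $syspref_cache->flush_all();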
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
https://bugs.koha-community.org/show_bug.cgi?id=11921
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
As requested by Mark Tompsett. Hope this guarantees a signoff now...
Note: For consistency, four additional parameters were needed to no longer
use file-level vars in this subroutine.
Test plan:
Import a file with stage_file.pl.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Imported a marc file and a marcxml file with stage_file.pl.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch makes the following changes:
[1] Based on the groundwork of the former patch, add call to
RecordsFromMARCXMLFile in stage-marc-import. Use format param.
[2] Add format to the template. Use the file extension to determine it.
If you use .xml or .marcxml as extension, MARCXML is selected.
[3] In stage-marc-import.tt mark UTF-8 encoding as UTF-8 not as utf8.
[4] BatchStageMarcRecords: do not call plugin if you have no records.
[5] RecordsFromISO2709File: also return errors in an array.
[6] In misc/stage_file.pl also use UTF-8. Handling of errors from [5].
Test plan:
[1] Import an empty file as MARC or MARCXML (with Tools/Stage..import).
[2] Import a non-empty file with invalid contents as MARC or MARCXML.
[3] Export a few records with Tools/Export as MARC and MARCXML.
[4] Import these two files. Check selected format versus file extension.
[5] Import a MARCXML file with misc/stage_file.pl.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Patch from Olli, manual rebase by Marcel (July 7, 2016).
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Needs follow-up. Test plan in the third patch.
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
An incorrect method call is causing a runtime error and not
retrieving the correct logdir value.
The change retrieves the value correctly.
To test:
1) Run edi_cron.pl, notice error
2) Apply patch and run edi_cron.pl again, should work as expected
Signed-off-by: Aleisha Amohia <aleishaamohia@hotmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Note: I did not test, but the changes make sense.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested the A;B;C variant here. If A fails, B will run. Since we can safely
assume that A (or B) will not fail on a daily basis, this seems to be better
than running them in the wrong order every day.
As the comments on Bugzilla show, several people support this improved
(reordered) scheme and look forward to improved error handling on another
report (obviously not that simple).
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The patch changes the sequence of cronjobs in the crontab example
file and in the cron.daily file of the packages.
This is why:
1) Renew automatically
... only when we can't renew, we want to
2) Calculate fines
... once the fines are calculated and charged
we can print the amount into the
3) Overdue notices
Before the change it could happen that you'd charge for an item
that would then be renewed. Or that you'd try to print fine
amounts into the overdue notices, when they would only be
charged moments later.
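Purely as an illustration of the intended ordering (script names are the
ones in misc/cronjobs; the scheduling, paths and flags are placeholders):
  # 1) renew whatever can be renewed automatically
  misc/cronjobs/automatic_renewals.pl
  # 2) then calculate and charge the fines
  misc/cronjobs/fines.pl
  # 3) finally send the overdue notices, which can now include <<items.fine>>
  misc/cronjobs/overdue_notices.pl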
To test:
- configure your system so you have items that should
- be charged with fines
- renew automatically
- configure your crontabs according to the example file
or switch the cron.daily in your package installation with
the new one
- configure your overdue notices so that one should be generated
  containing <<items.fine>>
- Wait for the cronjobs or schedule them to run earlier
- Verify all is well and as it should be
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Remove $dbh as argument to C4::Items::DelItemCheck
and C4::Items::ItemSafeToDelete, also change all
calls to these functions throughout the codebase.
Also remove remaining reference to 'do_not_commit' in
t/db_dependent/Items_DelItemCheck.t
Fixed doubled "$$" in C4/ImportBatch.pm
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Use t::lib::TestBuilder in t/db_dependent/Items_DelItemCheck.t
Remove the option 'do_not_commit' from C4::Items::DelItemCheck.
Whitespace cleanup.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
* Fix POD warning.
* Remove redundant 'use strict' and 'use warnings'
* Remove $VERSION and --version option.
* Remove --dry-run option
* Split test for --help and check for @criteria into two separate pod2usage calls,
enabling -msg on the latter.
* Fix 'target_tiems' typo.
* Test for holds on items to be deleted.
* Fix whitespace
* Fix test for holds.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
http://bugs.koha-community.org/show_bug.cgi?id=14504
Signed-off-by: Heather Braum <hbraum@nekls.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch introduces the misc/devel/minifySwagger.pl script that
loads the Swagger files, follows references and produces a compact
("minified") version of the swagger file which is suitable for
distribution.
The wiki page should be updated with instructions on how to regenerate
it so the Release Manager does it on each spec upgrade.
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
My name is Olli-Antti Kivilahti and I approve this commit.
We have been using the Swagger2.0-driven REST API on Mojolicious for 1 year now
in production and I am certain we have a pretty good idea on how to work with
the limitations of Swagger2.0
Signed-off-by: Johanna Raisa <johanna.raisa@gmail.com>
My name is Johanna Räisä and I approve this commit.
We have been using Swagger2.0-driven REST API in production successfully.
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The previous calls were wrong, but there is something bad with the DB
structure: export_format.profile should be a unique key.
This patch fixes the previous calls and adds a FIXME not to forget to fix
the DB structure.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Previous tests were done with all patches applied,
including this one, and all worked.
No errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch erases all traces of C4::Csv since it's not used anymore.
All occurrences have been replaced by previous patches to use
Koha::CsvProfiles.
Note that GetMarcFieldsForCsv was not used prior to this patch set.
Test plan:
git grep 'C4::Csv'
should not return any result.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
No more traces of the file.
This produces a koha-qa failure, due to the missing file.
No other errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This subroutine returned the export_format_id for a given profile name.
This can be done easily with the Koha::CsvProfiles->search method.
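A rough illustration of the replacement call (exact accessors are
assumptions):
  use Koha::CsvProfiles;
  my ($profile) = Koha::CsvProfiles->search( { profile => $profile_name } );
  my $export_format_id = $profile ? $profile->export_format_id : undef;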
Test plan:
Export records using the misc/export_records.pl script and the
export tool.
If you are exporting using the MARC format, the profile set in the
ExportWithCsvProfile pref will be used (or the one passed as a parameter to
misc/export_records.pl).
If you are exporting using the CSV format, you can choose a profile in
the dropdown list.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Exported using tool & cmd, marc & csv. Pref is used.
No errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Changes the value of the "comment" column in "borrower_debarments" table
from "Restriction added by overdues process yyyy-mm-dd hh:mm:ss" to
"OVERDUE_PROCESS yyyy-mm-dd hh:mm:ss" in the overdue_notices.pl. Then in
the templates "moremember.tt", "circulation.tt", "memberentrygen.tt",
"opac-reserve.tt" and "opac-user.tt" the value of "comment" is
check, if it's an automatical comment due to overdue process it'll
write "Restriction added by overdues process yyyy-mm-dd hh:mm:ss",
then if there is a customizable comment it will be written without
modification. Like this, the comment "Restriction added by overdues
process" is written in the po files and can be translated later.
To test:
1) create a patron with an automatic restriction due to the overdue process;
2) apply patch;
3) run misc/cronjobs/overdue_notices.pl;
4) verify that the comment "Restriction added by overdues process" is well
written and translatable on the following pages:
- opac patron home page (opac-user.tt);
- opac item reservation page (opac-reserve.tt);
- pro patron page (moremember.tt);
- reservation item for a patron (circulation.tt, memberentrygen.tt);
5) try to translate the comment in po files;
6) sign off.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
s/sitemaper/sitemapper/
Test plan:
Run t/db_dependent/Sitemapper.t
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Using Plack with the debian psgi file, I get lots of warnings like:
WARNING: Automatically converting Plack::App::CGIBin instance to a PSGI code reference. If you see this warning for each request, you probably need to explicitly call to_app() i.e. Plack::App::CGIBin->new(...)->to_app in your PSGI file.
This patch aims to eliminate these warnings.
Test plan:
Run Plack with plack.psgi or koha.psgi and verify that you no longer see these
warnings.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I tested on Jessie and I see no regressions.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The changes made to Koha::Authority have not been correctly fixed.
The code of Koha::Authority has been moved to
Koha::MetadataRecord::Authority by bug 15380.
Test plan:
perl misc/search_tools/rebuild_elastic_search.pl -a -v
should succeed
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
It would be very helpful if the cleanup_database.pl script had the
ability to delete holidays from the special_holidays table that are
older than a given number of days in the past.
Test Plan:
1) Apply this patch
2) Create some unique holidays in the past of varying ages
3) Test the new switch '--unique-holidays DAYS' for cleanup_database.pl
4) Verify only holidays older than the specified number of days are removed
NOTE: The language 'unique holidays' is used here to match
its use in the staff web interface.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
The goal of this patch is to avoid unnecessary flushes of the L1 cache when
creating a new CGI object each time C4::Languages::getlanguage is called
without a CGI object.
The new class Koha::Cache::Memory::Lite must be flushed by the CGI
constructor override done in the psgi file. This new class will ease
caching of specific data used by the running script.
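A hedged sketch of how such a per-request memory cache could be used by
getlanguage and flushed once per request (method names are assumptions):
  use Koha::Cache::Memory::Lite;
  my $memory_cache = Koha::Cache::Memory::Lite->get_instance();
  my $language     = $memory_cache->get_from_cache('getlanguage');
  unless ($language) {
      $language = _guess_language();   # hypothetical placeholder for the expensive lookup
      $memory_cache->set_in_cache( 'getlanguage', $language );
  }
  # ... and in the CGI constructor override in the psgi file, once per request:
  Koha::Cache::Memory::Lite->get_instance->flush();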
Test plan:
At the OPAC and the intranet interfaces:
Open 2 different browser sessions to simulate several users
- Clear the cookies of the browsers
- User 1 (U1) and User 2 (U2) should be set to the default language
(depending on the browser settings)
- U1 chooses another language
- U2 refreshes and the language used must be the default one
- U2 chooses a third language
- U1 refreshes and must still be using the one he has chosen.
Try to use a language which is not defined:
Add &language=es-ES (if es-ES is not translated) to the url, you should
not see the Spanish interface.
Signed-off-by: Jacek Ablewicz <abl@biblos.pk.edu.pl>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Use of uninitialized value in numeric eq (==)
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch deprecates the -x switch, making XML the default serialization format
used by rebuild_zebra.pl. It doesn't remove the option switch, but raises a warning
for the end user about the deprecation so they fix their cronjobs. Later we could remove it.
To test:
- Disable all indexing (daemon/cronjob)
- Create 2 records
- Edit one of them, delete the other one
- Verify they are queued for updates in zebraqueue
- sudo koha-mysql kohadev
> SELECT * FROM zebraqueue WHERE done=0
...
| 265 | 265 | specialUpdate | biblioserver | 1 | 2016-05-13 14:23:45 |
| 266 | 1 | recordDelete | biblioserver | 1 | 2016-05-16 14:14:33 |
| 267 | 2 | specialUpdate | biblioserver | 1 | 2016-05-16 14:15:06 |
+-----+--------------------+---------------+--------------+------+---------------------+
- Now go to koha-shell
$ sudo koha-shell kohadev ; cd kohaclone
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z
You will get something similar to this:
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> FAIL: They contain the records you added/modified/deleted but they are in
USMARC format
- Apply the patch
- Mark your records for indexing (in koha-mysql kohadev)
> UPDATE zebraqueue SET done=0 WHERE id > 264
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z
You will get something similar to this:
<WARNINGS> [1]
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> SUCCESS: Data is correctly in XML format
- Run:
$ misc/migration_tools/rebuild_zebra.pl -k -b -z -noxml
You will get something similar to this:
<WARNINGS> [1]
NOTHING cleaned : the export /tmp/jI0OeHy6Tn has been kept.
You can re-run this script with the -s and -d /tmp/jI0OeHy6Tn parameters
if you just want to rebuild zebra after changing the record.abs
or another zebra config file
- Verify
* less /tmp/jI0OeHy6Tn/del_biblio/exported_records
* less /tmp/jI0OeHy6Tn/upd_biblio/exported_records
=> SUCCESS: Data is correctly in USMARC format
- Sign off :-D
[1] Warnings covered by a followup
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
On top of Bug 16505
Works as described following the test plan: USMARC is the default pre-patch;
post-patch, XML is the default and USMARC is available on request.
No errors (all patchset)
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Using rebuild_zebra.pl with the -x option switch produces incorrect output in
terms of what our XSLTs expect for indexing. This patch introduces the right namespace information
in the exported records so indexing succeeds.
To test:
- On current master, have some records on your db
- Run:
$ sudo koha-shell kohadev
$ cd kohaclone
$ misc/migration_tools/rebuild_zebra.pl -r -b -k -x
=> you will get a message like this:
NOTHING cleaned : the export /tmp/NL5ufjUfpp has been kept.
- Run
$ less /tmp/NL5ufjUfpp/biblio/exported_records
=> FAIL: The first line looks like this
<?xml version="1.0" encoding="UTF-8"?><collection><record
- Now run:
$ xsltproc \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/NL5ufjUfpp/biblio/exported_records
=> FAIL: No output
- Apply the patch
- Run:
$ misc/migration_tools/rebuild_zebra.pl -r -b -k -x
- Take a look at the result file:
$ less /tmp/asdiouqwiue/biblio/exported_records
=> SUCCESS: The start of the file looks like this:
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim">
- Run:
$ xsltproc \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/asdiouqwiue/biblio/exported_records
=> SUCCESS: There is actually indexing data :-D
- Sign off :-D
Edit: I changed qq{} for q{} as suggested by Jonathan.
Sponsored-by: American Numismatic Society
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works as described following test plan
No errors
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds a letter parameter to the cron job membership_expiry.
It is used to replace the default notice with another one.
This could be handy if you e.g. send a reminder after the first notice.
In any case, it allows for more flexibility.
Apart from this new parameter, this patch removes the sub parse_letter from
the code. The call to GetPreparedLetter is moved to the for loop and the
call to getletter is removed (no longer needed). If there is no letter
found, the Letter module already warns you. So we just exit the loop.
Test plan:
[1] Run membership_expiry.pl -c -n -v -let NOT_EXIST
Check if you see a warning (coming from Letters.pm)
[2] Check if you have some soon expiring patrons or add before/after
parameter to include some.
Run membership_expiry.pl -c -n -v [-before ?] [-after ?]
[3] Create a new notice MEMBERSHIP2. Copy the text from the original notice
and make some adjustments.
[4] Run membership_expiry.pl -c -v -let MEMBERSHIP2 [-before ?] [-after ?].
Be aware that this call generates email messages.
Verify that the email contained the adjusted text.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
On top of Bug 14834
Work as described, tested using '-n' to see messages on terminal, e.g.
membership_expiry.pl -v -n -c -before 3 -branch BC -after 2 --letter MEMEXP2
No errors
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds three parameters to the cron job: -before, -after and
-branch.
You can run the cronjob now in an adjusted frequency: say once a week with
before 6 or after 6 (not both together). If your pref is set to 14, running
before=6 will include expiries from 8 days to 14 days ahead. When you
use after=6, you would include 14 days to 20 days ahead, etc.
You could also rerun the job of yesterday by setting before=1 and after=-1;
this could help in case of problem recovery.
Obviously, the branch parameter can be used as a filter.
NOTE: Why are these parameters passed only via the command line?
Well, obviously the branch parameter is not suitable for a pref.
The before/after parameters allow you to handle expiry mails differently from
the normal scheme or could be used in some sort of recovery. In those cases
it will be more practical to use a command line parameter than to edit a
pref.
NOTE: The unit test has been adjusted for the above reasons, but I also
added some lines to keep existing expiries from interfering with the added
borrowers, by using an additional count and the branchcode parameter.
Test plan:
[1] Run the adjusted unit test GetUpcomingMembershipExpires.t
[2] Set the expiry date for patron A to now+16 (with pref 14).
Set the expiry date for patron B to now+11.
[3] Run the cronjob without range. You should not see A and B.
[4] Run the cronjob with before 3. You should see patron B.
[5] Run the cronjob with before 3 and after 2. You should see A and B.
[6] Repeat step 5 with a branchcode that does not exist. No patrons.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Work as described following test plan.
Test pass
No errors
New parameters work with one (-) or two (--) dashes; no problem
with that, but convention suggests that 'long' options use two dashes.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch tries to make the code more readable by using Koha::Calendar
instead of the deprecated C4::Calendar and Date::Calc
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
There were rebase conflicts that were just easier to postpone until
afterwards.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>