Some libraries would like to add a check on the cardnumber length.
This patch adds the ability to restrict the cardnumber to a specific
length (strictly equal to XX, greater than XX, or between a minimum and
a maximum). This restriction is checked when inserting or updating a
patron and when importing patrons.
This patch adds:
- 1 new syspref CardnumberLength. 2 formats: a number or a range
(xx,yy).
- 1 new unit test file t/Members/checkcardnumber.t for the
C4::Members::checkcardnumber routine.
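For illustration, a rough sketch of how such a length check could work
(an assumption, not the actual C4::Members::checkcardnumber code; the
helper name and return value are made up):

  use Modern::Perl;

  # Hypothetical sketch of a CardnumberLength check. A single number in
  # the pref means an exact length; "xx,yy" means a min/max range.
  sub cardnumber_length_ok {
      my ( $cardnumber, $pref ) = @_;    # $pref e.g. '8' or '5,8'
      return 1 unless $pref;             # no restriction configured
      my ( $min, $max ) = split /,/, $pref;
      $max //= $min;
      my $len = length( $cardnumber // q{} );
      return $len >= $min && $len <= $max;
  }

  say cardnumber_length_ok( '123456789', '5,8' ) ? 'valid' : 'invalid';  # invalid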
Test plan:
1/ Fill the pref CardnumberLength with '5,8'
2/ Create a new patron with an invalid cardnumber (123456789)
3/ Check that you cannot save
4/ With Firebug, replace the pattern attribute value (for the cardnumber
input) with ".{5,10}"
5/ You should be allowed to submit the form, but an error is raised on save.
6/ Try the same steps for update.
7/ Go to the import borrowers tool.
8/ Play with the import borrowers tool. We must test adding/updating patrons
and the "record matching" field (cardnumber or a unique patron attribute).
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested adding, updating, and importing patrons, and ran the unit tests.
Preliminary QA comments on Bugzilla
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The SearchOrders routine should return the booksellerid and this
patch adds it.
This fixes several problems:
[1] The link to the vendor on the order receive page breadcrumbs
was broken.
[2] The tax calculation in finishreceive.pl didn't run.
[3] The item booksellerid field never got updated during
receipt.
Booksellerid was returned before bug 10723.
Quick test plan:
Go to orderreceive.pl and verify that the vendor link is correct.
Followed test plan. Vendor link is now correct.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch also adds POD and UT for the change in SearchOrders()
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The order status 'ordered' is set when the basket is closed.
The parcel page should only display orders with status "ordered" or "partial".
Test plan:
- create a basket.
- create an order.
- verify the order is not listed on the parcel page (i.e. you cannot
receive it).
- close the basket.
- verify the order is listed on the parcel page.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch removes some dead code concerning the handling of patrons
that are members of other, institutional patrons. This code did not
work; removing it clears the field if somebody wants to do a better
implementation of such relationships between patrons.
This patch:
[1] Removes the memberofinstitution system preference.
[2] Removes the following routines:
C4::Members::get_institutions()
C4::Members::add_member_orgs() (and removing this routine
removes a reference to a borrowers_to_borrowers table that
does not exist).
There should be no changes whatsoever to system functionality with this
patch (with the trivial exception of the absence of the
memberofinstitution system preference).
Test plan:
[1] Look at the code and use grep, git grep, etc. to verify that this patch
does not remove anything still in use.
[2] Verify that there are no regressions upon adding or editing
a patron record.
[3] Verify that the memberofinstitution system preference has been
removed.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
DBD::Mysql provides a mysql_auto_reconnect flag. Using it avoids
the time required to do a $dbh->ping().
Benchmarks:
use Modern::Perl;
use C4::Context;
for ( 1 .. 1000 ) {
    my $dbh = C4::Context->dbh;
}
* without this patch on a local DB:
perl t.pl 0,49s user 0,02s system 98% cpu 0,525 total
* without this patch on a remote DB:
perl t.pl 0,52s user 0,05s system 1% cpu 37,358 total
* with this patch on a local DB:
perl t.pl 0,46s user 0,04s system 99% cpu 0,509 total
* with this patch on a remote DB:
perl t.pl 0,49s user 0,02s system 56% cpu 0,892 total
Testing the auto reconnect:
use Modern::Perl;
use C4::Context;
my $dbh = C4::Context->dbh;
my $ping = $dbh->ping;
say $ping;
$dbh->disconnect;
$ping = $dbh->ping;
say $ping;
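For reference, a minimal sketch of how the flag can be set on a plain DBI
handle (the connection parameters below are placeholders, not Koha's real
configuration):

  use Modern::Perl;
  use DBI;

  my $dbh = DBI->connect(
      'DBI:mysql:database=koha;host=localhost',
      'koha_user', 'koha_pass',
      { RaiseError => 1 },
  );
  # DBD::mysql reconnects transparently if the server connection is lost,
  # so callers no longer need a $dbh->ping before each use.
  $dbh->{mysql_auto_reconnect} = 1;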
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Comment: Real improvement. No koha-qa errors
prove t/db_dependent/Circulation_issuingrules.t produces no error
prove t/db_dependent/Context.t produces no error
Test
1) dumped the Koha DB and loaded it on a non-local server
2) ran the sample script with and without the patch, local and remote
use Modern::Perl;
use C4::Context;
for ( 1 .. 100000 ) {
    my $dbh = C4::Context->dbh;
}
The main difference I noted is with the remote server:
a) without patch
real 0m16.357s
user 0m2.592s
sys 0m2.132s
b) with patch
real 0m0.259s
user 0m0.240s
sys 0m0.012s
I think this could be good for DBs placed on
remote servers
Bug 10611: add a "new" parameter to C4::Context->dbh
When $dbh->disconnect is called and the mysql_auto_reconnect flag is set,
the dbh is not recreated: the old one is reused.
By adding a new flag, we can now force the C4::Context->dbh method to
return a new dbh.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Bug 10611: Followup: remove useless calls to dbh->disconnect
These 3 calls to disconnect are done at the end of the script, so they
are useless.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This report is also known as "Overdues with fines".
Signed-off-by: Nicole C. Engard <nengard@bywatersolutions.com>
All tests pass. This adds data to the Patron column on the
overdues with fines report to show the patron's cardnumber
and phone number.
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
This works as described and passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
While reviewing the main patch for this bug to verify that the
holds queue routines and C4::Reserves had the same conception of
when a damaged item could fill a hold request, I noticed that
GetItemsAvailableToFillHoldRequestsForBib() duplicated the code
for adding an SQL clause to filter out damaged items. This patch
removes the duplication and improves the POD for that routine.
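For illustration, the kind of filter being de-duplicated looks roughly like
this (an illustrative sketch, not the actual holds queue code):

  use C4::Context;

  # Build the damaged-item restriction once and append it to each query
  # that selects candidate items, instead of repeating it in two places.
  my $items_query = 'SELECT itemnumber FROM items WHERE biblionumber = ?';
  $items_query .= ' AND damaged = 0'
      unless C4::Context->preference('AllowHoldsOnDamagedItems');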
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
AllowHoldsOnDamagedItems will stop item-specific holds from being placed
on damaged items, but does not stop Koha from using damaged items to
fill holds. This seems like incorrect behavior.
Test Plan:
1) Set 'AllowHoldsOnDamagedItems' to "Don't Allow"
2) Pick an item, set it to damaged
3) Place a bib-level hold on this item's record
4) Scan the item through the returns system
5) Koha will ask to use this item to fill the hold, click "ignore"
6) Apply this patch
7) Repeat step 4
8) Koha will not ask to use this item to fill the hold
Signed-off-by: Srdjan <srdjan@catalyst.net.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The priority of new hold requests was not calculated when using ILS-DI.
A new routine is added, C4::Reserves::CalculatePriority(), to calculate
the priority prior to placing a request.
A separate bug report, 11640, covers the changes in reserves to
use this new routine more generally.
This patch therefore only affects ILS-DI.
Note: ILS-DI already allows you to generate multiple holds on a biblio or
item for the same patron. This patch does not change that behavior.
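As a rough idea of what such a priority calculation can look like (a sketch
under assumptions, not the actual C4::Reserves::CalculatePriority code): the
new priority is one more than the number of active holds already placed on
the biblio.

  use Modern::Perl;
  use C4::Context;

  sub calculate_priority_sketch {
      my ($biblionumber) = @_;
      my $dbh = C4::Context->dbh;
      # Count holds that are neither waiting/in transit (found is set)
      # nor placed for a future date, then take the next slot.
      my ($count) = $dbh->selectrow_array( q{
          SELECT COUNT(*) FROM reserves
          WHERE  biblionumber = ?
            AND  found IS NULL
            AND  reservedate <= CAST(now() AS DATE)
      }, undef, $biblionumber );
      return $count + 1;
  }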
Test plan:
[1] Place multiple holds using ILS-DI HoldTitle service:
/cgi-bin/koha/ilsdi.pl?service=HoldTitle&patron_id=BORROWERNUMBER&bib_id=BIBLIONUMBER&request_location=test
Check the priority.
[2] Do the same using HoldItem service:
/cgi-bin/koha/ilsdi.pl?service=HoldItem&patron_id=BORROWERNUMBER&bib_id=BIBLIONUMBER&item_id=ITEMNUMBER
Check the priority again.
[3] Use a biblio with multiple items. Place item level holds on both.
Check in one of these items in another branch. Confirm transfer.
Check in the other item in the original branch. Confirm hold.
Now you have a waiting and a transit hold.
Test HoldTitle and HoldItem service again a few times.
[4] Enable AllowHoldDateInFuture and add a future hold.
Now test HoldTitle and HoldItem again and check if these holds are
inserted before the future hold (lower priority).
January 29, 2014: Rebased this patch and amended it to make a distinction
between fixing the ILS-DI bug and using the new routine.
Updated commit message and test plan (marcelr).
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
C4::Acquisition needs more UT, and more robust ones. This patch
adds some.
This patch adds UT to
- GetOrder
- GetOrders
- GetCancelledOrders
- GetLateOrders
It refactors UT for SearchOrders
The new UT use 2 new routines to check the list of fields returned
by a routine:
_check_fields_of_order
_check_fields_of_orders
These 2 routines could later be used by other UT.
_check_fields_of_order has its own UT (tests 14, 15 and 16).
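A rough sketch of what such a field-checking helper can look like (an
assumption for illustration; the real helper in t/db_dependent/Acquisition.t
may differ):

  use Modern::Perl;
  use Test::More;

  # Check that an order hashref exposes every expected field.
  sub _check_fields_of_order_sketch {
      my ( $expected_fields, $test_name, $order ) = @_;
      for my $field (@$expected_fields) {
          ok( exists $order->{$field}, "$test_name: order has '$field'" );
      }
  }

  my $order = { ordernumber => 1, basketno => 2, quantity => 3 };  # sample data
  _check_fields_of_order_sketch( [qw( ordernumber basketno quantity )],
      'GetOrder', $order );
  done_testing();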
To test:
prove -v t/db_dependent/Acquisition.t
Signed-off-by: Liz Rea <liz@catalyst.net.nz>
Unit tests pass, passes koha-qa.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Passes koha-qa and t
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This corrects line 1250 of C4/Context.pm to be:
return ($userenv->{flags}//0) % 2;
This avoids an uninitialized value being used in the modulus.
TEST PLAN
---------
1) Apply the first patch (to update t/Context.t)
2) prove -v t/Context.t
-- This should fail tests 7 and 8
3) Apply this patch (to fix C4/Context.pm)
4) prove -v t/Context.t
-- All tests should succeed
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch addresses a number of issues with the main patch:
- regression on bug 2060 (i.e., displaying authority import batches
correctly)
- regression on bug 10170 (translation of import record states)
- use of datatables.inc
- lack of clarity as to the licensing of tools/batch_records_ajax.pl
- insufficient sanitizing of input used to generate an SQL statement
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Some libraries would like to sort by columns for the records of an
import batch. This seems like a good use of Ajax DataTables.
Test plan:
1) Apply this patch
2) Import a record batch into Koha
a) Use some form of matching
b) Have some records that will match and some that won't
c) Have at least 30 records so you can test the pager
3) Verify the new table is functionally equivalent to the old static one
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tests fine and looks good with the exception of the corrections I put in
a follow-up.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes Koha <-> Zebra use MARCXML for the serialization when
using DOM, and USMARC for GRS-1.
* The following functions are modified to set the Zebra record syntax
according to the current sysprefs and configuration:
- C4::Context->Zconn
- C4::Context->_new_Zconn
* A new function 'new_record_from_zebra' is introduced, which checks the
context we are in, and creates the MARC::Record object using the right
constructor.
The following packages get touched to make use of the new function:
- C4::Search
- C4::AuthoritiesMarc
and the same happens to the UI scripts that make use of them (both in
the OPAC and STAFF interfaces).
* Calls to the unsafe ZOOM::Record->render()[1] method are removed.
Due to this last change, the code for building facets was rewritten, and
for performance of facet creation I pushed higher version
dependencies for MARC::File::XML and MARC::Record (we rely on
MARC::Field->as_string).
* Calls to MARC::Record->new_from_xml and MARC::Record->new_from_usmarc
are wrapped with eval for catching problems [2].
* As of bug 3087, UNIMARC uses the 'unimarc' record syntax. This case is
correctly handled.
* As of bug 7818 misc/migration_tools/rebuild_zebra.pl behaves like:
- bib_index_mode (defaults to 'grs1' if not specified)
- auth_index_mode (defaults to 'dom')
here we do exactly the same.
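A hedged sketch of the constructor selection described above (the function
body and the config entry names are assumptions, not the exact
implementation):

  use Modern::Perl;
  use MARC::Record;
  use MARC::File::XML;
  use C4::Context;

  # Pick the MARC::Record constructor matching the serialization Zebra is
  # configured to return (MARCXML for DOM, USMARC for GRS-1) and trap
  # parsing failures so a broken record is skipped instead of being fatal.
  sub new_record_from_zebra_sketch {
      my ( $server, $raw ) = @_;
      my $index_mode = $server eq 'authorityserver'
          ? C4::Context->config('zebra_auth_index_mode')
          : C4::Context->config('zebra_bib_index_mode');
      my $record = eval {
          $index_mode eq 'dom'
              ? MARC::Record->new_from_xml( $raw, 'UTF-8' )
              : MARC::Record->new_from_usmarc( $raw );
      };
      return $@ ? undef : $record;    # undef => caller skips this record
  }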
To test:
- prove t/db_dependent/Search.t should pass.
- Searching should remain functional.
- Indexing and searching for a big record should work (that's what the
unit tests do).
- Test an index scan search (on the staff interface):
Search > More options > Check "Scan indexes".
- Enable 'itemBarcodeFallbackSearch' and try to circulate any word; it
  shouldn't break.
- Searching for a biblio in a new subscription shouldn't break.
- Running bulkmarcimport.pl shouldn't break.
- And so on... for the rest of the .pl files.
[1] http://search.cpan.org/~mirk/Net-Z3950-ZOOM/lib/ZOOM.pod#render()
[2] a record that cannot be parsed by MARC::Record is simply skipped (bug 10684)
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Bug 10649 introduced a new include file for adding DataTables-related
JavaScript assets. This patch adds use of this include file to the Koha
news page.
To test you should have existing news items with varying creation and
expiration dates. Apply the patch and confirm that table sorting works
correctly for all settings of the dateformat system preference.
C4::NewsChannels.pm has been modified so that it now passes an
unformatted date to the template, where the KohaDates plugin is used to
apply the correct formatting. Sorting is based on the unformatted date.
Also corrected: Capitalization errors.
Signed-off-by: wajasu <matted-34813@mypacks.net>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described, no problems found.
Also passes tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This is just some code cleanup; no behavior change is expected.
It also replaces errstr with err when testing the results (see DBI).
Test plan:
Modify an item and save it.
Followed test plan. No problems found.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This routine is no longer used.
Test plan:
Do a grep on the name.
(Bonus points:) Verify if you can perform some actions on lists.
No more occurrences of _biblionumber_sth found.
Signed-off-by: Marc Véron <veron@veron.ch>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Currently road types are stored in a specific table in the DB. Moreover, an
admin page is present in order to manage them.
This patch proposes to remove this table and this page in favour of a
new authorised value category 'ROADTYPE'.
This patch:
- adds a new AV category 'ROADTYPE' (created from the roadtype table
content).
- removes the roadtype table.
- removes the admin/roadtype .pl and .tt files.
- removes the 2 routines C4::Members::GetRoadTypes and
  C4::Members::GetRoadTypeDetails.
Test plan:
1/ Execute the updatedatabase entry and verify existing roadtypes are
now stored in the AV 'ROADTYPE'.
2/ Verify you can add/update a streettype for patrons.
3/ Verify on following pages the streettype is displayed in patron
information (top left):
circ/circulation.pl
members/memberentry.pl
members/moremember.pl
members/routing-lists.pl
Signed-off-by: Sophie Meynieux <sophie.meynieux@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If a search gives results with 6 facets, one of those facets won't be
displayed. This is due to a bug in the code, which checks for greater
than 6 facets in one place and less than 6 in another.
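For illustration, the mismatch looks roughly like this (an illustrative
sketch, not the actual facet-building code):

  use Modern::Perl;

  my @facet_values = map { "Library $_" } 1 .. 6;    # exactly six values
  my $show_expand_link = @facet_values > 6;          # offered only when MORE than 6
  my @shown = @facet_values[ 0 .. 4 ];               # display truncated to five
  say scalar(@shown), ' shown, expand link: ', $show_expand_link ? 'yes' : 'no';
  # Prints "5 shown, expand link: no": the sixth facet is unreachable.
  # Using one consistent threshold in both places fixes this.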
Test Plan:
1) Perform a search that should give results for 6 different libraries
2) Note you only see 5 libraries in the facets with no option to expand
3) Apply this patch
4) Repeat step 1
5) Note you now have the option to expand the facets list
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
This patch should provide a regression test but I really don't know how
to write it.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Replace constructs using given and when with if/else. The given/when
feature now generates compilation warnings in Perl 5.18 and is liable
to change behaviour.
This patch:
* replaces the construct with if/else
* reformats the if branching using perltidy
to remove the now redundant indent
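For illustration, the kind of conversion involved (a simplified sketch, not
the actual MARC modification template code):

  use Modern::Perl;

  sub apply_action {
      my ( $action, $value ) = @_;

      # Before (removed): the experimental given/when feature, which
      # warns under Perl 5.18:
      #   given ($action) {
      #       when ('uppercase') { return uc $value }
      #       when ('lowercase') { return lc $value }
      #       default            { return $value }
      #   }

      # After: plain if/elsif/else, reformatted with perltidy:
      if    ( $action eq 'uppercase' ) { return uc $value }
      elsif ( $action eq 'lowercase' ) { return lc $value }
      else                             { return $value }
  }

  say apply_action( 'uppercase', 'marc' );    # prints MARC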
To test:
[1] Verify that prove -v t/db_dependent/MarcModificationTemplates.t
passes.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
When you run the Reserves test, you have the warnings:
Use of uninitialized value $branchcode in hash element at /usr/share/koha/testclone/C4/Letters.pm line 138.
Use of uninitialized value $branchcode in hash element at /usr/share/koha/testclone/C4/Letters.pm line 148.
This patch removes those warnings.
Test plan:
Run the Reserves.t again.
Revised Test Plan
-----------------
Run the following on the command line prompt before and after
applying the patch:
perl -e "use C4::Letters; *C4::Context::userenv= sub { return {} }; my \$blah=C4::Letters::getletter('circulation','DUE', 'BRA');"
Before the patch there will be errors (as above), after there will not.
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
IndependentBranches must be on.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes the following changes to UNIMARC biblio indexing :
A. Changes to UNIMARC conf files
1. add comments to biblio-koha-indexdefs.xml
2. make biblio-koha-indexdefs.xml more compact by grouping some
declarations
Ex : 200$f and 200$g => one declaration for 200$fg
3. suppress unneeded declarations (indexing of some 4XX fields and 6XX
fields not in unimarc format)
4. unindex some (sub)fields unneeded by most users (318, 207, 230, 210a,
215, 4XXd)
5. change the way 308 field is indexed (no visible changes)
6. replace Title-host with Host-item -- see bug 11119
7. index 208 in Material-Type -- see bug 11119
8. index 100 pos 8-9 and 9-12 in pubdate:y and pubdate:n
9. index 100 pos 8-9 in pubdate:s instead of 210$d
10. Index all subfields of note 334 and 327 in note index
11. Index 304 and 327 in title index as well as note index
327 can contain a list of titles included in a work
304 can contain the title of the original work in case of a
translation
12. Index 314 in author index as well as note index
314 can contain authors not mentioned in 200$f/g (the 4th, 5th etc.
author)
13. Index 328 note in Dissertation-information as well as note
14. Index 328$t in Title
B. Changes to ccl.properties :
1. add a new index Dissertation-information (1056)
2. fix EAN, pubdate and acqdate (they were not linked with bib1 attributes)
C. Changes to Search.pm
1. add Dissertation-information and suppress Title-host and UPC
D. Changes to QP config file queryparser.yaml
1. add Dissertation-information
2. fix EAN, pubdate and acqdate
Test plan :
If you cannot test in GRS1, test only in DOM, as GRS will be deprecated.
1. Apply the patch in a UNIMARC Koha running with DOM and ICU
2. copy src/etc/searchengine/queryparser.yaml into the main config
directory of QP
3. copy src/etc/zebradb/ccl.properties into the main config directory
of Zebra
4. copy src/etc/zebradb/marc_defs/unimarc/biblio/* into the main config
directory of Zebra
5. reindex biblios (rebuild_zebra.pl -r -b -x -v)
6. test note index : make some searches on 334$b or 327$b
7. test author index : make some searches on 314 field
8. test title index : make some searches on 304 and 327 field, make a
search on 328$t subfield
9. test dissertation-information index : make some searches on 328 field
10. In a record, put the values "1000" (1st date) and "1001" (2nd date)
in the dates of the 100 field; try to search for a book written in year
1000, you should find the record; idem for year 1001
11. make some searches and sort by date. It should work better than before,
especially if you have values like "c2009" or "impr. 2010" in the 210
field
12. Regression test : make some searches on several indexes, like EAN,
etc. It should work as before
Test 10-12 with and without Queryparser activated.
Be careful: with Queryparser activated, the index names (title,
dissertation-information...) must be entered in lowercase only.
Of course, to test search and sort by dates, you need to have full
records, with dates in 100 field as well as 210 field.
Signed-off-by: Paola Rossi <paola.rossi@cineca.it>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Adding Number-local-acquisition to the indexes known to C4::Search allows
searching without the "ccl=" prefix.
This also corrects ccl.properties: inv must be an alias of
Number-local-acquisition.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This fixes a regression introduced by the patches for bug 10723.
To Test:
1) Create budget and fund under budget administration.
2) Create a vendor in the acquisitions module.
3) Create basket under vendor.
4) Create order and choose budget while creating order.
5) Click on Receive shipment button.
6) Click on the receive link on the right hand side; you
will be able to see a staff user name in the "created by"
field.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If a patron has a record-level hold that is unavailable, any patron
information request will send back an empty CD field when this field
should have an item barcode in it [RM note: this actually isn't
universally true -- the SIP2 standard is silent as to what is supposed
to go in the CD field. Some SIP2 devices do indeed want an item
barcode, but others are known to just want a display of the title
and author of the request in question. Providing an option is the
topic of a new enhancement request, however.]
This is due to a minor error in ILS::Patron::_get_outstanding_holds
where GetItemnumbersForBiblio is assumed to return an array but in
reality returns an arrayref.
Test Plan:
1) Create a record level hold for a patron and record
2) Using SIP2, make a patron information request
3) Note the empty CD fields
4) Apply this patch, restart SIP server
5) Repeat step 2
6) Note the CD field now has a barcode
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
I did not test this patch but the following code shows me it is correct:
use Modern::Perl;
use C4::Items;
use Data::Dumper;
my $biblionumber = 5035;
my $itemnumber = (GetItemnumbersForBiblio($biblionumber))[0];
say Dumper $itemnumber;
$itemnumber = (GetItemnumbersForBiblio($biblionumber))->[0];
say Dumper $itemnumber;
displays:
$VAR1 = [
'23168',
'23169',
'23170',
'23171',
'23172'
];
$VAR1 = '23168';
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
C4::Charset::SetMarcUnicodeFlag() fetches system preference
values; since it invokes routines in C4::Context, it should
load that module.
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
C4::Acquisition contained a number of unnecessary calls to
$sth->finish. These have been removed, along with the associated variables
introduced to cache query results between the fetch and the return.
Where finish was the end of the routine, I have added an
explicit return to document that no data is returned.
A number of places made query calls and fetched a single
row. Such a case could require an explicit finish.
These assume that they are looking up with a unique key.
To remove assumptions and isolate the code from future changes,
I've switched these to fetching all rows and returning the
first row. I have commented these cases.
For fuller explanation see perldoc DBI
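A minimal sketch of that pattern (not the actual C4::Acquisition code):

  use Modern::Perl;
  use C4::Context;

  sub get_basket_sketch {
      my ($basketno) = @_;
      my $dbh = C4::Context->dbh;
      my $sth = $dbh->prepare('SELECT * FROM aqbasket WHERE basketno = ?');
      $sth->execute($basketno);
      # fetchall_arrayref drains the statement handle, so no explicit
      # $sth->finish is needed; return the first (and only) row.
      my $rows = $sth->fetchall_arrayref( {} );
      return $rows->[0];    # assumes basketno is a unique key
  }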
What I tested:
Edit existing basket, changed name
Modify order line, change vendor price
Create new basket and add order
Delete this order
Delete this basket
New basket, new order, user added, user removed
Add contract to vendor, change details, delete contract
Search order biblio
Create basket group, add basket to group, remove basket from group
Delete basket group
Receive order
Everything behaved as I expected
Signed-off-by: Marc Veron <veron@veron.ch>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The last use of the POE family of Perl modules went away with
the removal of zebraqueue_daemon.pl per bug 9001. Consequently,
this patch removes POE as a dependency.
To test:
[1] Verify that "git grep POE" and "git grep libpoe" report
nothing.
[2] Verify that koha_perl_deps.pl -a does not report POE
as a dependency.
[3] (extra credit) verify that Debian packages can be built
that do not list libpoe-perl as a dependency.
This patch also updates some distro-specific installation
instructions and scripts, but makes no representations about
whether those instructions currently work.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The SQL option for MARC framework imports was subject to a bug whereby
somebody could use it to gain access to arbitrary information in the
database by uploading an SQL file containing unexpected statements.
As it is difficult to securely sanitize SQL, this patch removes the
option to use SQL as an import or export format.
To test:
[1] Verify that SQL no longer appears as an import or export option
for the MARC frameworks.
[2] Verify that exports and imports in CSV, Excel XML, and ODS formats
still work.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Works as advertised. The UI doesn't offer exporting/importing in the SQL format.
Crafting the URL to export SQL falls back to a spreadsheet format (ODS).
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described, passes all tests and QA script.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
In Koha 3.8, if a standard catalog search was performed and the user
clicked the Z39.50 search button, the search string would automatically
be placed in the isbn field for the Z39.50 search form.
Changes to the code have since broken this functionality.
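For illustration, the kind of check involved (a rough sketch; the regex and
variable names are assumptions, not the exact cataloguing code):

  use Modern::Perl;

  # Decide whether the catalog search term should pre-fill the Z39.50
  # isbn field or the title field.
  my $term = '9781570672835';
  my %z3950_prefill;
  if ( $term =~ /^[0-9Xx-]{10,17}$/ ) {   # looks like an ISBN-10/13
      $z3950_prefill{isbn} = $term;
  }
  else {
      $z3950_prefill{title} = $term;
  }
  say join ' => ', %z3950_prefill;        # isbn => 9781570672835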
Test Plan:
1) From mainpage.pl, use "Search the catalog" to search for the string
"9781570672835"
2) Click the Z39.50 Search button
3) Note the string is placed in the title field
4) Apply this patch
5) Repeat steps 1-2
6) Note the string is placed in the isbn field
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Tested old and new ISBN with and without hyphens.
Also tested some other keyword searches.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Note that the behavior will be a bit odd if you do a 'replace via
Z39.50' from a bib record whose title happens to be an ISBN, but
this scenario seems unlikely enough to ignore.
This patch fixes the following warnings:
FAIL C4/Serials.pm
FAIL valid
Useless use of a constant (43) in void context
Useless use of a constant (41) in void context
Useless use of a constant (44) in void context
Useless use of a constant (42) in void context
Useless use of a constant (4) in void context
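This warning is emitted when a constant is evaluated in void context, e.g.
as a statement of its own; a minimal reproduction (not the actual
C4::Serials code):

  use Modern::Perl;
  use constant EXPECTED => 1;

  sub demo {
      EXPECTED;          # warns: Useless use of a constant in void context
      return EXPECTED;   # fine: the value is actually used
  }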
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
4 new statuses representing variations on "missing" are added by this
patch: "never received", "sold out", "damaged", and "lost".
These statuses have the same behavior as the simple Missing status.
Test plan:
- Find a serial to claim.
- Modify the status of this serial with one of these new statuses.
- Try to find it with the "serials to claim" search.
- Verify that the status is displayed on the serial module pages and on
the OPAC.
Signed-off-by: Nicolas Bravais <nicolas.bravais@gmail.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
The return from GetReservesFromBiblionumber contains an unnecessary
extra variable. In scalar context an array returns its element count.
Maintaining a separate count can lead to unforeseen bugs
and imposes ugly constructions on the subroutine's users.
Remove the useless count variable from the return.
This patch also changes the parameters: the routine now takes a hashref.
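Callers change roughly like this (a sketch; the hashref key name is an
assumption based on the description above):

  use Modern::Perl;
  use C4::Reserves;

  my $biblionumber = 1;    # placeholder

  # Before: list return with a redundant count
  #   my ( $count, $reserves ) = GetReservesFromBiblionumber($biblionumber);

  # After: hashref parameter, single arrayref return
  my $reserves = GetReservesFromBiblionumber( { biblionumber => $biblionumber } );
  my $count    = scalar @$reserves;    # the count the old API duplicated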
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Placed biblio holds, future holds and item holds. Works as expected.
Tested Holds.t and Reserves.t. Pass.
Tested /cgi-bin/koha/ilsdi.pl?service=GetRecords&id=999 with two holds on
one item. Fine.
C4/SIP/ILS/Item.pm: Looked for "whatever" and "arrayref" and could not find
them anymore. Looks good.
Handled a few unneeded calls in QA follow-up.
Left only one point to-do for serials/routing-preview.pl. See Bugzilla.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch fixes an issue where choosing 'None' as the sort order
for an authority search would result in zero hits if QueryParser is
enabled.
This patch also adds some additional test cases.
To test:
[1] Enable QueryParser.
[2] Perform an authority search in the staff interface that
uses 'Heading A-Z' as the sort order and returns hits.
[3] Run the same search, but with the sort order set to 'None'.
No hits are returned.
[4] Apply the patch.
[5] Do step 3 again. This time, hits should be returned.
[6] Verify that prove -v t/db_dependent/Search.t passes.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
If parent_ordernumber is not set in the NewOrder parameters, it is
automatically set to the ordernumber.
This patch only avoids code duplication.
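For illustration, the default boils down to something like this (a sketch
with placeholder data, not necessarily the literal NewOrder code):

  use Modern::Perl;

  my $ordernumber = 42;                            # id of the freshly inserted order
  my %orderinfo   = ( biblionumber => 1 );         # no parent_ordernumber given
  $orderinfo{parent_ordernumber} //= $ordernumber; # defaults to the order itself
  say $orderinfo{parent_ordernumber};              # 42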
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
This solution is better!
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script. Also all tests in
t/db_dependent/Acquisitions/.
Confirmed bug and that the patch fixes it.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
To reproduce the issue:
- transfer an order from a basket to another. Note the previous
ordernumber (X) and the new one (Y).
- receive the order
- cancel the receipt
- verify the order has been deleted:
select count(*) from aqorders where ordernumber=Y;
select * from aqorders_transfers where ordernumber_from = X;
The value for ordernumber_to is null.
To test this patch:
- apply this patch
- transfer an order from a basket to another
- receive the order
- cancel the receipt
- verify the order still exists in the basket where the transfer has been
done.
Signed-off-by: Sonia BOUIS <sonia.bouis@univ-lyon3.fr>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch adds the words 'biblio' and 'item' to the 'info'
of the cataloguing logs which were missing them (such as biblio
delete, biblio mod, item mod, upload cover image).
This patch also adds 'authority' for authority mod.
_TEST PLAN_
Before applying:
1) Create/view mods for items, biblios, and authorities.
2) Create/view biblio deletion
3) Create/view upload cover image log
4) Note that none of these contain the words 'biblio', 'item', or
'authority' in their "Info" columns.
Apply patch.
5) Repeat steps 1-3
6) Note that the new logs contain 'biblio', 'item', and 'authority'
in their "Info" column, while the past ones don't.
7) Note also that 'biblio' and 'item' will have 'Biblio' and 'Item'
appear in their "Object" column for the new logs
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
1/ CURRENT_DATE() is a MySQLism and should be replaced with CAST(now() AS
DATE).
2/ The date formatting should be done in the template (using the TT
plugin).
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Pasting comment from the Bugzilla report:
Looking a bit longer at this code, it is kind of strange to find it
there in the first place. Adding maxpickupdelay in Letters.pm should
not happen there, but it does.
Also, this date is not normally used in the default HOLD Available for
Pickup notice (that we are generating in this case). And if it were
undef, the expiration date should imo be empty instead of today+0.
(Before adding maxreservespickupdelay, you should test the allowexpire
pref first.) So it is an (invisible) bug on its own.
Test plan:
See former patch. Kyle just discovered this bug, apparently by
deleting the maxpickupdelay pref.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Before bug 9788 the alldates parameter of GetReservesFromItemnumber was
actually not used in the codebase.
The first patch of bug 9788 did change that and passed true by default.
But a closer look revealed that we do not really need it.
The parameter is removed by this patch; the SQL statement is slightly
adjusted: if reservedate<=now or a waitingdate is filled for the
requested itemnumber, GetReservesFromItemnumber will return the reserve.
This includes so-called future waits: a future hold that has been confirmed
ahead of time with pref ConfirmFutureHolds > 0 days.
Note that future item-level holds are not really interesting to return; this
just corresponds to original behavior. Future next-available holds are not
in view at all; they do not contain an item number.
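A sketch of the adjusted selection described above (illustrative, not the
exact C4::Reserves SQL):

  use Modern::Perl;
  use C4::Context;

  my $itemnumber = 1;    # placeholder
  my $dbh = C4::Context->dbh;
  # A reserve on this item is returned once its reserve date has arrived,
  # or as soon as it has a waiting date (a confirmed "future wait").
  my $sth = $dbh->prepare(q{
      SELECT * FROM reserves
      WHERE  itemnumber = ?
        AND  ( reservedate <= CAST(now() AS DATE) OR waitingdate IS NOT NULL )
  });
  $sth->execute($itemnumber);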
Test plan:
Actually, the test plan of the first patch is valid. But for completeness I
repeat it here:
[1] Enable future holds and set ConfirmFutureHolds to 2 days.
[2] Place a future next-available hold for 2 days ahead.
[3] Check item status on catalogue detail. Available? That is fine.
[4] Confirm the future hold by checking it in. ('future wait')
[5] Look at item status again on catalogue detail. Must be Waiting now.
[6] Switch to OPAC and login as another opac user. Goto Place a hold.
[7] Check item status with item level hold info. Is it waiting?
[8] Try to place hold in staff, check item level status again. Waiting?
[9] Make a transfer for the item. Switch branch. Check hold status on
Transfers to receive.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch makes GetReservesFromItemnumber also return the waiting
date and removes some repeated code.
It improves the item status display on catalogue detail, when placing a hold
at opac-reserve and in staff, and on the transfers to receive form.
This patch builds on work from reports 9367 and 9761.
Test plan:
Place a future next-av. hold (enable future holds prefs), say 2 days ahead.
Check item status on catalogue detail. Nothing to see.
Enable ConfirmFutureHolds by inserting a number of days, say 2.
Confirm earlier hold by checking it in. Look at item status again on detail.
Switch to other opac user. Try to place a hold again. Check item status with
item level hold info. Try to place hold in staff, check item level status.
Make a transfer for that item. Switch branch. Look at transfers to receive.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch removes C4::Barcodes::PrinterConfig, which is
used by no other code in the codebase.
Signed-off-by: Emma Heath <emmaheath.student@wegc.school.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
No instances of PrinterConfig found in the codebase
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Couldn't find any reference to those files in Koha.
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
This patch restores the ability to request a DBI database handle
or a DBIx::Class schema object connected to a PostgreSQL database.
To address the concerns raised in bug 7188, only "mysql" and "Pg"
are recognized as valid DB schemes. If anything else is passed
to C4::Context::db_scheme2dbi or set as the db_scheme in the Koha
configuration file, the DBD driver to load is assumed to be "mysql".
Note that this patch drops any pretense of Oracle support.
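The mapping described above boils down to something like this (a sketch, not
the literal C4::Context code):

  use Modern::Perl;

  sub db_scheme2dbi_sketch {
      my ($scheme) = @_;
      return 'Pg' if ( $scheme // q{} ) eq 'Pg';
      return 'mysql';    # anything unrecognized falls back to the mysql driver
  }

  say db_scheme2dbi_sketch('Pg');        # Pg
  say db_scheme2dbi_sketch('Oracle');    # mysql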
To test:
[1] Apply patch, and verify that the database-dependent tests
pass when run against a MySQL Koha database.
[2] To test against PostgreSQL, create a Pg database and
edit koha-conf.xml to set db_scheme to Pg (and adjust
the other DB connection parameters appropriately). The
following tests should pass, at minimum:
t/Context.t
t/db_dependent/Koha_Database.t
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Works as described, some additional notes:
- Installed Postgres following
http://wiki.ubuntuusers.de/PostgreSQL
- Created a database user koha
- Created a database koha
- Changed the koha-conf.xml file
<db_scheme>Pg</db_scheme>
<database>koha</database>
<hostname>localhost</hostname>
<port>5432</port>
<user>koha</user>
<pass>xxxx</pass>
- Installed libdbd-pg-perl
- Ran the web installer until step 3; everything looked ok.
Step 3 complains:
Password for user koha: psql: fe_sendauth: no password supplied
- Both t/Context.t and t/db_dependent/Koha_Database.t pass
Signed-off-by: Galen Charlton <gmc@esilibrary.com>