Also addresses QA test tool complaints and the perldoc in ImportExportFramework.pm
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch addresses the comments from Comment 29 and fixes the import
functionality. You should now be able to import an exported file without
editing the file at all; the authority type code in the file will be
overwritten (same behaviour as biblio frameworks).
Sponsored-by: Catalyst IT
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Mazen Khallaf <mazen.i.khallaf@gamil.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch amends C4::ImportExportFramework to work for authority types as well as MARC frameworks.
New file: admin/import_export_authtype.pl
Update: Ensuring we are passing the right columns to the right tables.
Update2: Making the error messages consistent with the patch on Bug 15665
Update3: Fixing merge conflicts
Update4: Fixing merge conflicts and removing tabs
Update5: Getting rid of warns, making sure Import and Export of default
authority will work
Update6: Merge conflicts and making sure export of default auth type
works
Update7: Fixing merge conflicts and updating buttons to bootstrap3
To test:
1) Go to Admin -> Authority types
2) Confirm there are two new columns 'Export' and 'Import' in the table
3) Click 'Export' on an existing authority type and choose a file type, click 'Export'
4) Confirm that the authority type is exported as your chosen file type. Save the file
5) Create a new authority type
6) Import into your new authority type using the file you just exported
7) Confirm you are taken to auth_tag_structure.pl
8) Go back to Authority types
9) Export your new authority type. View the exported file and confirm
the authtypecode has been updated to match the code you set for the new
auth type
10) Click 'Import' next to any existing authority type and attempt to import a file that is not XML, CSV or ODS. Confirm that this fails and you are asked to import a file of the correct file type
11) Confirm Export and Import work for the Default authority type
12) Go to Admin -> MARC bibliographic framework
13) Confirm that the 'Export' and 'Import' functions still work here as well
Sponsored-by: Catalyst IT
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Mazen Khallaf <mazen.i.khallaf@gamil.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch simply adds a check of the cached values to see if agerestriction
is enabled before fetching the biblio object.
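A minimal sketch of the kind of guard described above, assuming the preference
involved is AgeRestrictionMarker and that $item is already in scope (this is
not the exact patch):

    use C4::Context;

    # C4::Context->preference() reads from the syspref cache, so this
    # check is cheap; only fetch the biblio when it is actually needed.
    if ( C4::Context->preference('AgeRestrictionMarker') ) {
        my $biblio = $item->biblio;
        # ... existing age restriction checks on $biblio ...
    }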
To test:
1 - prove -v t/db_dependent/Holds.t
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
If a record has holds on it and, using offline circulation, an item on
that record is checked out to a patron who did not have a hold on the
record, the next hold that would have trapped that item will be canceled.
1) Place two bib level holds on an item for two patrons
2) Find a third patron, note his or her cardnumber
3) Create the file test.koc by copying and pasting the following into a
text editor
Version=1.0 Generator=kocTest GeneratorVersion=0.7
2008-06-11 12:24:11 547 issue PATRONCARDNUMBER ITEMBARCODE
note the fields are tab-delimited, ensure the tabs do not get lost
when copying and pasting!
4) Upload the offline circ action, and commit the action
5) Note the item was checked out to the third patron
6) Note that one of the other patrons' holds was canceled
7) Reset your database
8) Apply this patch
9) Test again, no hold should be canceled!
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
When building search results for XSLT we generate and pass a list of hidden
itemnumbers.
We do skip the lost items in our parsing; however, we neglect to add their
itemnumbers to the hidden list.
This patch simply adds the lost itemnumbers to the list
There is more work to be done here to simplify this process; however, this
patch resolves the issue and can be backported to stable branches.
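The shape of the change, as a sketch (variable names here are assumptions,
not the exact patch):

    # while looping over the items of a search result
    if ( $hidelostitems && $item->{itemlost} ) {
        # previously we only skipped the item; now also record it as hidden
        push @hideditems, $item->{itemnumber};
        next;
    }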
To test:
1 - Set the system preference hidelostitems to "Don't show"
2 - Edit a record to set one item as lost and one as available
3 - Perform an OPAC search that returns the record above
4 - Note that the lost item shows in the availability line
5 - Click on the record - note the lost item does not show on details
6 - Apply patch
7 - Reload search results
8 - Lost item no longer displays
Signed-off-by: Andrew Fuerste-Henry <andrew@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
There are several occurrences of `my $var += ...` or `my $var .= ...` in the code.
It should not cause problems, but it is confusing.
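For illustration, a self-contained example of the confusing pattern and its
clearer equivalent (not taken from the code base):

    use strict;
    use warnings;

    my $price = 5;

    # confusing: the variable is declared and compound-assigned in one
    # statement, so "+=" operates on a brand-new undef value and warns
    my $total += $price;

    # clearer equivalent
    my $amount = $price;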
Test plan:
Read the patch and confirm that the changes make sense.
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
In the item search results, entering a term in the 'Date due' column filter
does not filter the table.
This is because the columns from 'Issue' must be added in _SearchItems_build_where_fragment(),
as was done in SearchItems().
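A sketch of the idea, mirroring what SearchItems() already does for the other
tables (the surrounding dispatch logic in _SearchItems_build_where_fragment()
is assumed, not quoted):

    use Koha::Database;

    my $schema = Koha::Database->new->schema;

    # also accept columns of the issues table (e.g. date_due) as
    # filterable fields, like items/biblio/biblioitems columns
    my @issues_columns =
        $schema->resultset('Issue')->result_source->columns;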
Test plan:
1) Perform an item search with some results being checked out
2) In the 'Date due' column, enter a number that is contained in some due
dates, e.g. '2022'
3) The results table is filtered to items whose due date contains the search term
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Items can now only be filtered by 'Checked out' or not, rather than by
looking at the damaged/itemlost/withdrawn/notforloan statuses.
Removed the availability column, as checked-out items are made clear by the
due date column.
Signed-off-by: Christian Stelzenmüller <christian.stelzenmueller@bsz-bw.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Confirm tests in t/db_dependent/Items.t still pass.
Signed-off-by: Christian Stelzenmüller <christian.stelzenmueller@bsz-bw.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This enhancement adds the availability of an item to the item search
results (i.e. shows whether or not it is checked out). If checked out,
the due date is shown in the item search results. The due
date column will also show when exporting results to a CSV file.
To test:
1) Apply patch and restart services
2) Set up two items. Check out Item A to a borrower. Leave Item B as
is: not checked out, and with no unavailable status.
3) Go to Search -> Item search. Scroll down and notice the Availability
radio options - Ignore, and Checked out.
4) Leave the Ignore option selected and do a search so that both items show.
5) Confirm the availability and due date columns are showing at the
right end of the table. Confirm Item A says Checked out and has a due
date. Confirm Item B says available.
6) Export all results to CSV. Confirm the results show in the CSV file as
expected.
7) Go to edit your search. Select the 'Checked out' radio option for
Availability and submit the search. Confirm only Item A shows in the
results (not Item B).
Sponsored-by: Bibliotheksservice-Zentrum Baden-Württemberg (BSZ)
Signed-off-by: Christian Stelzenmüller <christian.stelzenmueller@bsz-bw.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
To test: when the site is using HTTPS, the Set-Cookie header for the
language cookie should include the 'secure' attribute.
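A sketch of how the cookie could be built with CGI, assuming the language
cookie is named KohaOpacLanguage and using a simple HTTPS check (the actual
patch may differ):

    use CGI qw( -utf8 );

    my $query  = CGI->new;
    my $cookie = $query->cookie(
        -name     => 'KohaOpacLanguage',
        -value    => 'en',
        -HttpOnly => 1,
        # only mark the cookie secure when the request came in over HTTPS
        -secure   => ( $ENV{HTTPS} && $ENV{HTTPS} ne 'OFF' ? 1 : 0 ),
    );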
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
- Don't add an operator on first loop
- Only add plus option on last loop
- Fix indentation of first search box
- Remove spaces from operators in query_cgi and add to query and query_desc
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
If there is deleted session info but no session->id, a wrong cookie
with an empty name could be generated, containing the expired session id.
Test plan:
Run t/db_dependent/Auth.t
Login. Check cookies in browser.
Logout. Check cookies in browser.
Without this patch, you should see an invalid cookie.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This subroutine is ensuring that the biblionumber and biblioitemnumber
will be part of the MARC record.
We should not need that, unless there is something broken somewhere
else.
This line has been added by the following commit:
commit 4e95e94727
Bug 6789: biblios with many items can result in broken search results link
"""
To this end, it also moves the fix_biblio_ids portion of get_corrected_marc_record out of rebuild_zebra.pl,
and makes it a part of GetMarcBiblio (right before EmbedItemsInMarcBiblio, so the 952s still come last). fix_biblio_ids
is kept as a subroutine for the deletion portion of rebuild_zebra.pl, which still uses it.
"""
But it does not explain why it's better to have it in GetMarcBiblio.
If we need it for the reindexing process, we shouldn't impact
GetMarcBiblio, which is used from several different places.
We might then consider adding the fix_biblio_ids call to
rebuild_zebra.pl, but I am failing to understand in which cases it could
be useful.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
If a maximum number of checkouts allowed is defined in circulation
rules, C4::Circulation::TooMany will loop over all of the patron's checkouts.
When a patron has several hundred checkouts, this can really slow down
the checkout process by several seconds (or even tens of seconds).
This patch does two things:
- Always prefetch item data so that `$c->item` does not execute an
additional SQL query at every iteration of the loop. Item data is
always needed at the first line of the loop, so there is really no
downside for doing this.
- Build the `@types` array only once, outside of the checkouts loop. Since
  it does not depend at all on the patron's checkout data, it does not make
  sense to build it inside the loop (both changes are sketched below).
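A rough sketch of the two changes, assuming the DBIC relationship from
checkouts to items is named 'item' (this is not the exact patch):

    # @types is now built once here, before the loop (details elided)

    # prefetch item data so that $c->item does not run an extra query
    my $checkouts = $patron->checkouts->search( {}, { prefetch => 'item' } );

    while ( my $c = $checkouts->next ) {
        my $item = $c->item;    # resolved from the prefetched row
        # ... existing per-checkout limit checks ...
    }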
Test plan:
1. Before applying the patch, create a patron with a lot of checkouts.
I tested with 1000 checkouts, but the slowness should be noticeable
with less.
2. Make sure you have a circulation rule (one that applies to your patron
and the item(s) you will check out for testing) with a maximum number
of checkouts allowed
3. Check out an item for the patron with a lot of checkouts. Measure the
time it takes.
4. Apply the patch
5. Check out another item (or check in and then check out the same item
used in step 3). Measure the time it takes and compare it to step 3.
It should be faster now.
6. Run `prove t/db_dependent/Circulation/TooMany.t`
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This is quite a misleading call.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
We may need a few additional flushes. Also, check the returned session
before clearing busc.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
If we look for an existing session, do not create a new one.
Found a bug in the unset_userenv calls. For now, the calls in Auth are
changed here; a later fix goes to bug 29954.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
When a user is not logged in, a new session ID is generated every time a
new page is hit.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
The ACCTDETAILS notice apparently bypasses message_queue; notices are sent directly to the linux mail queue.
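A sketch of the queue-based approach, using C4::Letters::EnqueueLetter (the
letter construction is elided and assumed to be done with GetPreparedLetter
as elsewhere in Koha):

    use C4::Letters;

    # queue the notice so it goes through message_queue (and therefore
    # shows up in the patron's messages) instead of being mailed directly
    C4::Letters::EnqueueLetter(
        {
            letter                 => $letter,
            borrowernumber         => $patron->borrowernumber,
            message_transport_type => 'email',
        }
    );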
Test Plan:
1) Apply this patch
2) Create a new patron with an email address
3) Note the patron's ACCTDETAILS notice shows in the patron's messages
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
When the system preference is off, do not call any code related to Koha::Recalls.
Also add some missing module imports.
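A sketch of the kind of guard, assuming the preference is UseRecalls (not the
exact patch):

    use C4::Context;

    if ( C4::Context->preference('UseRecalls') ) {
        # ... only here do we touch Koha::Recalls ...
    }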
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch makes the different ->recalls accessors implemented on this
bug more standard. This means:
- They don't do special things like default sorting or stripping out
  special parameters. That's all left to the caller and the methods are
  clean: they just return the related objects.
- Useful filtering methods for Koha::Recalls resultsets are added. The
  only one used (in the end) was ->filter_by_current (sketched below). It
  seems like a better approach, because it gives devs more control over
  how they want to chain things, and there's a single place in which to
  maintain the criteria of what is 'current' or 'finished'. This clearly
  makes the 'old' column obsolete IMHO, at least in the use cases I
  found. This is covered by tests as well.
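For illustration, a sketch of what such a resultset filter might look like,
with an assumed list of "finished" statuses and an assumed created_date
column used for the caller's sorting (not the exact implementation):

    # a method like this on the Koha::Recalls resultset class
    sub filter_by_current {
        my ($self) = @_;
        return $self->search(
            { status => { -not_in => [ 'cancelled', 'expired', 'fulfilled' ] } }
        );
    }

    # callers stay in control of sorting and further chaining:
    my $current = $patron->recalls->filter_by_current->search(
        {}, { order_by => { -desc => 'created_date' } }
    );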
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch makes the status attribute an ENUM, setting the default value
to 'requested' as well. The chosen names are easier to read than single
letters. Also, F is renamed to 'fulfilled' (this impacts method names as
well), because 'finished' or 'completed' is more a synonym for
old => 1...
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
The fines cron job uses Getoverdues to pass issue info to CalcFine.
It took me a while to realize that the overdue hash does not contain
a biblionumber. When testing CalcFine, we pass an item hash that
does include one.
So what happened? $item->{biblionumber} is undefined when it comes from
Getoverdues, so no recall overdue fine is calculated, only a regular one.
Simple fix (without any impact): Add a biblionumber to Getoverdues.
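A sketch of the fix: the SELECT in Getoverdues() also returns
items.biblionumber so that CalcFine() can see it (the rest of the column
list shown here is illustrative, not the real query):

    my $statement = q{
        SELECT issues.*, items.itype, items.homebranch, items.barcode,
               items.itemlost, items.replacementprice, items.biblionumber
          FROM issues
     LEFT JOIN items USING (itemnumber)
         WHERE date_due < NOW()
    };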
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested with fines.pl: recall fine applied now.
Ran some Circulation and Overdues unit tests.
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
- misc/cronjobs/recalls/expire_recalls.pl
- misc/cronjobs/recalls/overdue_recalls.pl
- tests for overdue fines in t/db_dependent/Circulation/CalcFine.t
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
We are using %B to display the month name, but it seems that using the
CLDR pattern LLLL would be more appropriate.
https://metacpan.org/pod/DateTime#CLDR-Patterns
%B - The full month name.
LLLL - The wide stand-alone form for the month.
For instance in Catalan:
https://metacpan.org/pod/DateTime::Locale::ca
%B will display "de gener", while LLLL will be "gener"
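This can be seen with a small standalone script (the outputs match the
Catalan example above):

    use DateTime;

    my $dt = DateTime->new(
        year => 2022, month => 1, day => 1, locale => 'ca',
    );

    print $dt->strftime('%B'),      "\n";   # "de gener" (format form)
    print $dt->format_cldr('LLLL'), "\n";   # "gener"    (stand-alone form)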
Test plan:
Create a new numbering pattern:
Home > Serials > Numbering patterns > New numbering pattern
Numbering formula: {X}
Label: monthname
Add: 1
Every: 1
Set back to: 1
When more than: 999
Formatting: Name of month
And test it at the bottom of the form
Locale: Catalan
The number column should contain "gener", not "de gener"
Test other locales and confirm that the output is correct (no change
expected for English, French and Spanish for instance).
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
From plack-opac-error.log:
[WARN] Use of uninitialized value $value in concatenation (.) or string at /usr/share/koha/C4/XSLT.pm line 286.
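For illustration, the kind of guard that silences such warnings (the actual
fix in XSLT.pm may differ):

    use strict;
    use warnings;

    my @values = ( 'a', undef, 'b' );
    my $string = q{};

    for my $value (@values) {
        next unless defined $value;    # or: $value //= q{};
        $string .= $value;
    }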
Test plan:
An OPAC search triggered the warning. Repeat it and confirm there are no warnings.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Koha::Old::Checkout->anonymize takes care of checking the syspref and
raises an exception if it is not set. So we can now leverage it, instead
of checking manually and then tweaking the checkout object manually as
well.
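A sketch of what the caller can look like once the manual checks are dropped,
assuming $old_checkout is a Koha::Old::Checkout in scope (the exception
handling shown is an assumption):

    use Try::Tiny;

    try {
        $old_checkout->anonymize;
    }
    catch {
        # e.g. AnonymousPatron is not configured; leave the row untouched
        warn "Could not anonymize checkout: $_";
    };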
To test:
1. Apply this patch
2. Run:
$ kshell
k$ prove t/db_dependent/Circulation/MarkIssueReturned.t
=> SUCCESS: Tests pass!
3. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
C4::Search->FindDuplicate joins title and author using 'and'
(lowercase). When this query is passed on to ElasticSearch, it
interprets the lowercase 'and' as a term to search for, because the
operator has to be in uppercase ('AND').
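A sketch of the change in FindDuplicate (the field syntax shown is
illustrative only):

    my ( $title, $author ) = ( 'Some title', 'Some author' );

    # before: lowercase "and" is parsed as a search term by ElasticSearch
    my $query = "ti,ext:$title and au,ext:$author";

    # after: uppercase "AND" is treated as the boolean operator
    $query = "ti,ext:$title AND au,ext:$author";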
Test plan:
* Reproduce the bug:
- Set SearchEngine to ElasticSearch (and make sure you have the data
indexed etc)
- Find an existing book, note the title (245a) and the author (100a)
- Create a new book (Cataloging -> New Record)
- Fill in the same title and author using the same data as in an
existing book (and any other fields that might be required)
- Click "save"
=> A new book will be created, the Duplicate Finder has failed
* Apply the patch
* Check if it's working now:
- Create a new book (Cataloging -> New Record)
- Fill in the same title and author using the same data as in an
existing book (and any other fields that might be required)
- Click "save"
- The DuplicateFinder should now report the already existing book
Maybe we should also check if Zebra does not have any problems with the
uppercase 'AND'? In that case, repeat the above steps, but set
SearchEngine to Zebra :-)
Sponsored-by: Steiermärkische Landesbibliothek
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch updates the original fix to only set the template parameter
for OPAC sessions, and updates all occurrences in templates to check first
for logged_in_user.branchcode before falling back to default_branch
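A sketch of the intent on the controller side (variable names here are
assumptions; the template side simply prefers logged_in_user.branchcode and
only falls back to default_branch when it is unset):

    # only expose the fallback branch to OPAC templates
    if ( $is_opac && $ENV{OPAC_BRANCH_DEFAULT} ) {
        $template->param( default_branch => $ENV{OPAC_BRANCH_DEFAULT} );
    }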
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This patch adds support for OPAC_BRANCH_DEFAULT as an environment option
that can be passed via Apache with either SetEnv or as a header for
Plack. It allows setting a default branch for the anonymous OPAC
session so that you can display the right OPAC content blocks prior to
login if you have set up per-branch URIs.
To test (on top of bug 29691)
1 - Add to apache conf (/etc/apache2/sites-available/kohadev.conf)
SetEnv OPAC_BRANCH_DEFAULT "CPL"
RequestHeader add X-Koha-SetEnv "OPAC_BRANCH_DEFAULT CPL"
2 - Restart all
3 - Confirm that news for all libraries and for CPL shows on the OPAC main page
4 - Sign in as a user from a different library
5 - Confirm that the user's library news shows
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Uppercase all (hopefully) occurrences of lowercase "and"
used in ElasticSearch Query String Query contexts
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
This doesn't rely on the other statuses and requires only a cached
preference check, so let's allow the possibility of an early exit.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
Similar to the first patch, move a count that is only used conditionally
into the conditional.
This could be updated to use DBIC, but the itemtype conditionals
add complexity
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
We essentially copy the code from GetReservesControlBranch here, because we
also calculate 'branchfield'
Setting it to "" vs undef makes no difference in this code, so let's not fetch this
again later.
Rename the variable to make it clearer where the branchcode came from.
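A sketch of the inlined logic, assuming $item is a Koha::Item and $patron a
Koha::Patron in scope (GetReservesControlBranch effectively does the same
lookup; the variable name follows the description above):

    use C4::Context;

    my $reserves_control_branch =
        C4::Context->preference('ReservesControlBranch') eq 'ItemHomeLibrary'
        ? $item->homebranch
        : $patron->branchcode;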
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>
We retrieve two counts that are only needed if rules for hold limits are defined.
The DB counts should only be fetched once the rules are confirmed to exist
Further improvement would be possible by allowing them to be passed in (or cached?)
from CanBookBeReserved, as they rely only on patron/biblionumber and not item-specific information.
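A sketch of deferring the counts until a limiting rule is known to exist
(the rule name and the count shown are placeholders; $item, $patron and
$reserves_control_branch are assumed to be in scope):

    use Koha::CirculationRules;

    my $rule = Koha::CirculationRules->get_effective_rule(
        {
            branchcode   => $reserves_control_branch,
            categorycode => $patron->categorycode,
            itemtype     => $item->effective_itemtype,
            rule_name    => 'reservesallowed',
        }
    );

    if ( $rule && $rule->rule_value ne '' ) {
        # only now pay for the DB counts
        my $holds_count = $patron->holds->count;
        # ... compare against $rule->rule_value ...
    }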
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Fridolin Somers <fridolin.somers@biblibre.com>