Removes the last remaining bit of configuration for the Titles facet.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Some Sami languages need a few more characters to be mapped; otherwise they
are very hard to search for.
Test plan:
1) Catalogue a new record with the title "Ǩoǯeŋa"
2) Make sure Zebra has indexed that record, then try to search for
it with the text "kozena"
3) Apply patch
4) Redo step 2; the record should now be found.
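For reference, entries of this kind follow Zebra's character-map 'map' syntax
in etc/zebradb/etc/word-phrase-utf.chr; the lines below are an illustrative
sketch, not the exact lines from the patch:
    map (ǩ)        k
    map (ǯ)        z
    map (ŋ)        n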
Signed-off-by: Pasi Kallinen <pasi.kallinen@koha-suomi.fi>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Missing entries are added to those files.
Test plan:
Search for typos
Compare the two files and confirm the entries are the same in both.
Exception: supportdir exists in etc/koha-conf.xml only, but I think it's
obsolete.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Koha has a number of features that rely on knowing the IP address of the connecting client. If the server is behind a proxy, these features do not work.
This patch adds a module to automatically convert the X-Forwarded-For header into the REMOTE_ADDR environment variable for both CGI and Plack processes.
TEST PLAN:
1) Apply this patch set
2) Install Plack::Middleware::RealIP via cpanm or your favorite utility
3) Update your plack.psgi with the changes you find in this patch set ( this process differs based on your testing environment; a sketch follows this test plan )
4) Restart plack
5) Tail the plack error log for your instance
6) Use curl to access the OPAC, adding an X-Forwarded-For header: curl --header "X-Forwarded-For: 32.32.32.32" http://127.0.0.1:8080
7) Note the logs output this address if you are unproxied
8) If you are behind a proxy, add the IP address you see in the logs ("REAL IP") to your koha-conf as a trusted proxy, for example:
<koha_trusted_proxies>172.22.0.1 1.1.1.1</koha_trusted_proxies>
9) Restart all the things!
10) Repeat step 6
11) You should now see "REAL IP: 32.32.32.32" as the remote address in your plack-error.log!
12) Disable plack so you are running in CGI mode, then repeat step 6
13) You should see "REAL IP: 32.32.32.32" as the remote address in your opac-error.log!
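As a reference for step 3, a minimal sketch of the kind of plack.psgi change
involved (the proxy addresses are placeholders and $app stands for the Koha
PSGI application assembled earlier in plack.psgi; the actual change ships with
this patch set):
    use Plack::Builder;
    builder {
        # Rewrite REMOTE_ADDR from X-Forwarded-For, but only for requests
        # coming from proxies we trust (cf. <koha_trusted_proxies>)
        enable 'Plack::Middleware::RealIP',
            header        => 'X-Forwarded-For',
            trusted_proxy => [qw(172.22.0.1 1.1.1.1)];
        $app;
    };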
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Ed Veal <eveal@mckinneytexas.org>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
The directory _LOG_DIR_/logs doesn't exist; all other log4perl logs go to
_LOG_DIR_.
Thank you for the good work, Ere!
Signed-off-by: Stefan Berndtsson <stefan.berndtsson@ub.gu.se>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Splits Session into GenericSession and ZebraSession, where GenericSession
supports any search backend via the SearchEngine classes and ZebraSession
maintains the direct channel to the Zebra server.
Adds config files required for mapping BIB-1 attributes to Koha search fields
and SRU indexes to BIB-1 attributes.
Adds PODs.
Sponsored-by: National Library of Finland
Signed-off-by: Stefan Berndtsson <stefan.berndtsson@ub.gu.se>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
This creates a new daemon, misc/z3950_responder.pl, which can respond to
Z39.50 requests. By default, it just proxies searches to Zebra.
If desired, however, it can also add a subfield to the item tags on
outgoing records with a textual description of the item's status
(checked out, lost, etc.). This is useful for certain ILL systems. These
strings can be translated using the 'Z3950_STATUS' authorized value.
Test plan:
1) Start the Z39.50 server using `perl misc/z3950_responder.pl`.
2) Connect to the server using `yaz-client 127.0.0.1:9999/biblios`.
3) Run a search, such as `find @attr 1=1016 book`.
4) Fetch the results both one at a time with `show 1` and in a batch
using `show 1+5`.
5) Turn on MARCXML using `format xml` and `elements marcxml`, and
verify that the records are still correctly fetched.
6) Enable the item status subfield by restarting the server with the
option `--add-item-status=k`.
7) Search for and fetch records, and verify that a $k subfield is
added to the item tags as appropriate. It should show some
combination of "Checked Out", "Lost", "Not For Loan", "Damaged",
"Withdrawn", "In Transit", or "On Hold" as appropriate, or
"Available".
8) Add an authorized value named "Z3950_STATUS" with any of the keys
"AVAILABLE", "CHECKED_OUT", "LOST", "NOT_FOR_LOAN", "DAMAGED",
"WITHDRAWN", "IN_TRANSIT" or "ON_HOLD", and verify that their
descriptions are used instead of the default values above.
Signed-off-by: George Williams <george@nekls.org>
Signed-off-by: Stefan Berndtsson <stefan.berndtsson@ub.gu.se>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Missing installer for debian packages.
Sponsored-by: Koha-Suomi Oy
Signed-off-by: Johanna Raisa <johanna.raisa@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
The example SIP config file in Koha is kind of a kitchen sink document where
all available SIP2 options should be demonstrated.
The allow_empty_passwords flag should be included in it.
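A sketch of how the flag sits in an account stanza of the SIP config (the
other attribute values here are illustrative):
    <login id="term1" password="term1" institution="CPL" allow_empty_passwords="1" />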
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
We should be able to set system preference overrides for SIP in a
manner similar to what we do in Apache. It would be great if we could
specify those overrides at both the config level and the login level.
Test Plan:
1) Apply this patch
2) Start your SIP server
3) Enable the syspref AllFinesNeedOverride
4) Find or create a patron with a small fine ( less than noissuescharge )
5) Attempt to check out an item to the patron, it should fail
6) Add the global syspref override from the bottom of the example SIP config file
7) Restart your SIP server
8) Attempt to check out an item to the patron again, this time it should work
9) Now, add the login level syspref override section as it appears in
the example SIP config file, making sure to add it to the login you are
using ( both override forms are sketched after this test plan )
10) Attempt to check out another item to the patron, this time it should
fail again
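For orientation, both override forms referenced above boil down to a
syspref_overrides block; the sketch below is illustrative and the example SIP
config shipped with this patch is authoritative. Placed directly under the
main config it acts globally; placed inside a login stanza it applies to that
login only:
    <syspref_overrides>
        <AllFinesNeedOverride>0</AllFinesNeedOverride>
    </syspref_overrides>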
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Some libraries would like to limit the amount of personal information a SIP server sends
to arbitrary parties on a per-login basis.
Test Plan:
1) Add a new key/value pair to one of your existing login stanzas in your SIP config file
For example: hide_fields="BD,BE,BF,PB"
2) Restart SIP
3) Send a SIP message that would normally return those fields ( in this example, a Patron Information Request )
4) Note the response has had those fields removed
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Christopher Davis <tubaclarinet@protonmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
To test:
1 - Add a record with a unique publisher "Supercalifragilistic" in the
264 b field
2 - Search for the value
3 - Record not found
4 - Apply patch (you may need to copy the .xml file into your Koha install)
5 - Reindex all the things
6 - Search for the value
7 - Success!
Signed-off-by: Felicia Martin <felicia.martin@dncr.nh.gov>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
If the alternative mana KB server URL in etc/koha-conf.xml is used it
causes an error message like 'malformed JSON string, neither tag,
array, object, number, string or atom, at character offset 0 (before
"<html>\r\n<head><tit...") at /usr/share/perl5/JSON.pm line 171.' when
submitting an account creation request, and the account creation
request fails.
This patch updates the alternative mana KB server URL in
etc/koha-conf.xml from http://mana-test.koha-community.org to
https://mana-test.koha-community.org (the URL should start with https
instead of http). If the updated URL is used the account creation
request succeeds without causing any error messages.
To test:
1) Add <mana_config>http://mana-test.koha-community.org</mana_config> to
/etc/koha/sites/<instancename>/koha-conf.xml (see etc/koha-conf.xml
in the Koha code repository for an example of where to add this;
<instancename> if using koha-testing-docker is kohadev).
2) Clear memcached and restart services so that the changes to your Koha
instance configuration are recognised (if using koha-testing-docker
run flush_memcached and then restart_all).
3) From the staff client home page go to Koha administration >
Additional parameters > Share content with Mana KB.
4) Enable content sharing: change 'Use Mana KB for sharing content' to
Yes and press Save.
5) Enter your first name, last name and email address in the Configure
Mana KB section of the page and then click on 'Send to Mana KB'.
6) An error message is displayed, this may be something like:
'malformed JSON string, neither tag, array, object, number, string
or atom, at character offset 0 (before "<html>\r\n<head><tit...") at
/usr/share/perl5/JSON.pm line 171.'
7) Change the mana server URL to https://mana-test.koha-community.org
(see step 1), and repeat steps 2, 3 and 5.
8) Instead of an error message you should get 'You successfully created
your Mana KB account. Check your mailbox and follow instructions.'
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
To test:
1 - Define a matching rule for authorities on field 001 index Local-Number
2 - In koha-conf.xml raise the zebra_loglevels
<zebra_loglevels>none,fatal,warn,request,info</zebra_loglevels>
3 - Export some authorities using the tools->export data
4 - Import those authorities
5 - Note no matches found
6 - View the zebra output log, you should see lots of error 114
7 - Apply patch
8 - Copy the indexdefs files to the installed versions
9 - Reapply matching rules to the staged files
10 - Matches should now be found
11 - Logs should not have errors
Signed-off-by: Claire Gravely <claire.gravely@bsz-bw.de>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
'Static' is the default value of Search::Elasticsearch, and for a good
reason: it works in most cases, unlike the 'Sniff' option.
To test:
1 - Apply patch
2 - Edit koha-conf.xml
3 - Add '<cxn_pool>Static</cxn_pool>' to the elasticsearch stanza (see the example after this test plan)
4 - Restart all the things!
5 - Reindex ES, it works
6 - Set SearchEngine to ES, try searching
7 - It works!
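The stanza from step 3 could end up looking like this (server and index_name
values are typical for a dev install):
    <elasticsearch>
        <server>localhost:9200</server>
        <index_name>koha_kohadev</index_name>
        <cxn_pool>Static</cxn_pool>
    </elasticsearch>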
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Test plan:
0) Do not apply the patch
1) Add some word with character "ů" into metadata, for example author "Martinů, Bohuslav"
2) Try to search it with "Martinu" and you'll see you can't find it
3) Apply the patch
4) Copy etc/zebradb/etc/word-phrase-utf.chr into your installed Zebra configuration:
sudo cp etc/zebradb/etc/word-phrase-utf.chr /etc/koha/zebradb/etc/
5) koha-zebra --restart kohadev
6) koha-rebuild-zebra -f kohadev
7) Try to search "Martinu" again - you should be able to find your record
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Now that we have Koha::Logger, we should use it in our SIP server. This
has the potential to make debugging SIP issues much easier. We should add
the userid for the SIP user to the namespace so we can allow for separate
log files per SIP user if wanted.
Also modifies log4perl.conf to lazily open filehandles to log files,
so the same config can be used with log files needing different
permissions.
Test Plan:
1) Apply this patch set
2) Copy the modified log4perl.conf to your system (the relevant appender is sketched after this test plan)
3) Restart your sip server
4) Tail your sip2.log, run some queries
5) Note you still get the same output messages as before, with the
addition of the ip address and username ( if available )
prefixing the message.
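A sketch of the appender configuration referenced in step 2, with the
lazy-open behaviour described above (logger name, pattern and path are
illustrative; the log4perl.conf shipped with the patch is authoritative):
    log4perl.logger.sip = DEBUG, SIP
    log4perl.appender.SIP = Log::Log4perl::Appender::File
    log4perl.appender.SIP.filename = __LOG_DIR__/sip2.log
    # open the filehandle at log time, so one config can serve log files
    # with different ownership/permissions
    log4perl.appender.SIP.create_at_logtime = 1
    log4perl.appender.SIP.layout = PatternLayout
    log4perl.appender.SIP.layout.ConversionPattern = [%d] [%p] %m %l %n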
Based on original patches by Kyle Hall and additions by Olli-Antti
Kivilahti.
Signed-off-by: Liz Rea <wizzyrea@gmail.com>
Signed-off-by: Chris Cormack <chris@bigballofwax.co.nz>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Some SIP devices ( in this particular case, bin sorting machines from RFID Library Solutions ) require a CT field to be sent, even if that field is empty. Koha should be able to support this behavior.
Test Plan:
1) Apply this patch
2) Enable the new option ct_always_send for a SIP2 account
3) Restart SIP
4) Successfully check in an item via SIP that will not trigger a transfer
5) Note the response contains a CT field with no value
Sponsored-by: Pueblo City-County Library District
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jill Kleven <jill.kleven@pueblolibrary.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Some SIP devices ( in this particular case, bin sorting machines from RFID Library Solutions ) require a checkin success to return a CV field of the value "00" rather than no CV field at all. Koha should be able to support this behavior.
Test Plan:
1) Apply this patch
2) Enable the new option cv_send_00_on_success for a SIP2 account
3) Restart SIP
4) Check in an item successfully via SIP
5) Note the response contains a CV field with the value '00'
Sponsored-by: Pueblo City-County Library District
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jill Kleven <jill.kleven@pueblolibrary.org>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Includes:
* Code factorization
Some code from subscriptions & Mana-KB has been factored out in order to speed up future development
* System preferences:
Mana activation:
- add a value "no, let me think about it", which is the default value.
- as long as this value is selected, messages ask whether the user wants to activate it ( in Administration and Add-subscription (page 2) )
AutoShareWithMana:
- add the syspref AutoShareWithMana: users can automatically share information with Mana-KB (not set by default)
* Interface:
- On mana-search, rows are now sorted by date of last import, then by number of users
- Windows redesigned to improve the user experience
* New feature: report a mistake.
- people can now report invalid data (wrong, obsolete, ...)
- if data is reported as invalid many times, it will appear differently
- added a few tooltips (to explain the last import and number of users fields, and the new feature)
- when reporting data as invalid, a comment can also be added; Koha will then display comments related to the data in result lists
* API (svc/mana)
- add svc/mana/addvaluetofield: allows asking Mana to increment a field of a resource
- no hardcoding of resources in the API code (the API needs to be called with a resource name)
* New feature: SQL report sharing
- Create Koha::Report.pm and Koha::Reports.pm, object classes for reports
- New feature: share reports with Mana-KB
- New feature: search reports in Mana-KB with keywords
- New feature: load reports from Mana-KB
Test plan:
1 - Apply Patch + update database
2 - Copy the three lines of Mana config from etc/koha-conf.xml into ../etc/koha-conf.xml (after <backupdir>, for example):
<!-- URL of the mana KB server -->
<!-- alternative value http://mana-test.koha-community.org to query the test server -->
<mana_config>https://mana-kb.koha-community.org</mana_config>
3 - Check Mana syspref and AutoShareWithMana syspref are not activated
4 - Search the syspref ManaToken and follow the instructions
5 - subscriptions
- Try to create a new subscription for a first serial => Mana-KB shouldn't show you anything (unless the database has already been populated)
- Share this serial with Mana-KB (on the serial's detail page there must be a Share button)
- Try to create a new subscription for serial nr1 => a message should appear when you click on "next"; click on "use" and the fields should automatically appear
- Activate AutoShareWithMana => Subscriptions
- Create a new subscription for a second serial
- There shouldn't be any Share button
- Create a second subscription => the message should appear, click again on use
6 - SQL Report
- Create a new SQL report, without notes.
- On the table with all reports (reports > use saved), there should be the action "Share"
- If you click on share, you get an error message
- Create a new report, with a title and notes longer than 20 characters
- You can share it with Mana => you will get a success message
- On (reports > use saved), there must be a message inviting you to search Mana-KB for more results; enter a few words from the title, notes or type of the report you shared and it should appear. You can use it; it will be loaded into your report list.
7 - Report mistakes.
- On any table containing Mana-KB search results, you can report a mistake and add a comment.
8 - For each previous test, try to send wrong data, to delete the security token, to send nothing: it should show a correct warning message.
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Rebased-by: Alex Arnaud <alex.arnaud@biblibre.com> (2018-07-04)
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Michal Denar <black23@gmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
To test:
1 - Find or add a record by author Slavoj Žižek
2 - Search for 'Zizek'
3 - No results
4 - Apply patch
5 - Reindex
6 - Search again
7 - Success!
https://bugs.koha-community.org/show_bug.cgi?id=22073
Signed-off-by: Eric Phetteplace <ephetteplace@cca.edu>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
To test:
1 - Apply first patch
2 - Attempt searching by arp, no results
3 - Apply this patch
4 - Copy bib1.att and ccl.properties to the correct locations
5 - Restart zebra
6 - Rebuild indexes
7 - Search again, success!
Signed-off-by: Margie Sheppard - Central Kansas Library System CKLS <msheppard@ckls.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Since we now require the <branch> block, we should add it to the config
templates
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Remove:
- BIB_INDEX_MODE and AUTH_INDEX_MODE env var
- bib_index_mode and auth_index_mode options from scripts
- Warnings from the about page; just one is kept, shown if zebra_bib_index_mode or
zebra_auth_index_mode still exists in the config and is set to grs1
Test plan:
- Install Koha from src
- Install Koha from pkg
- Read the code, carefully!
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Rebased
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
The Rewrite rules for Apache don't work unless you're using
debian/templates/apache-shared-opac-plack.conf or
debian/templates/apache-shared-intranet-plack.conf.
This patch fixes the Rewrite rules for the non-Plack Debian
Apache configuration templates as well as the standard
Apache configuration file that comes with Koha.
__BEFORE APPLYING__
1. Visit /api/v1/app.pl/api/v1/spec on your git dev install
2. This should display a large page of JSON
3. Visit /api/v1/spec on your git dev install
4. This should generate a 404 error
__APPLY PATCH__
__AFTER APPLYING__
5. Visit /api/v1/app.pl/api/v1/spec on your git dev install
6. This should display a large page of JSON
7. Visit /api/v1/spec on your git dev install
8. This should display a large page of JSON (identical to
the one from earlier steps)
Signed-off-by: Ere Maijala <ere.maijala@helsinki.fi>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Passed QA with a few notes posted separately to Bugzilla.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds the index definitions for Zebra faceting of ccode in
Koha for MARC21, NORMARC and UNIMARC.
We also add lines to the templates to expose the new facet and enable
non-Zebra faceting for ccode too.
Signed-off-by: David Cook <dcook@prosentient.com.au>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Improvements:
1) Index settings moved from code to etc/searchengine/elasticsearch/index_config.yaml. An alternative can be specified in koha-conf.xml.
2) Field settings moved from code to etc/searchengine/elasticsearch/field_config.yaml. An alternative can be specified in koha-conf.xml.
3) mappings.yaml has been moved from admin/searchengine/elasticsearch to etc/searchengine/elasticsearch. An alternative can be specified in koha-conf.xml.
4) Default settings have been improved to remove punctuation from phrases used for sorting etc.
5) State variables are used for storing configuration to avoid parsing it multiple times.
6) A possibility to reset the fields too has been added to the reset operation of mappings administration.
7) mappings.yaml has been moved from admin/searchengine/elasticsearch to etc/searchengine/elasticsearch.
8) A stdno field type has been added for standard identifiers.
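As items 1-3 note, alternatives can be pointed to from koha-conf.xml. Of the
entry names, only <elasticsearch_index_mappings> appears in the test plan
below, so treat this sketch (and its path) as illustrative:
    <elasticsearch_index_mappings>/etc/koha/searchengine/elasticsearch/mappings.yaml</elasticsearch_index_mappings>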
To test:
1) Run tests in t/Koha/SearchEngine/Elasticsearch.t
2) Clear tables search_fields and search_marc_map
3) Go to admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1
4) Verify that admin/searchengine/elasticsearch/mappings.pl displays the mappings properly, including ISBN and other standard number fields.
5) Index some records using the -d parameter with misc/search_tools/rebuild_elastic_search.pl to recreate the index
6) Verify that you can find the records
7) Put <elasticsearch_index_mappings>non_existent</elasticsearch_index_mappings> to koha-conf.xml
8) Verify that admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1 fails because it can't find non_existent.
9) Copy etc/searchengine/elasticsearch/mappings.yaml to a new location and make elasticsearch_index_mappings setting in koha-conf.xml point to it.
10) Make a change in the new mappings.yaml.
11) Clear table search_fields (mappings reset doesn't do it yet, see bug 20248)
12) Go to admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1
13) Verify that the changes you made are now visible in the mappings UI
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Bug 20073: Move Elasticsearch yaml files back to admin directory
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Bug 20187 changed the JS and CSS rewrite rules to:
RewriteRule ^(.*)_[0-9][0-9]\.[0-9][0-9][0-9][0-9][0-9][0-9][0-9].js$ $1.js [L]
RewriteRule ^(.*)_[0-9][0-9]\.[0-9][0-9][0-9][0-9][0-9][0-9][0-9].css$ $1.css [L]
This patch rewrites these rules using [0-9]{N} quantifiers, merges them into
a single rule, and escapes the dot before the js and css extensions.
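A sketch of what the merged rule could look like after this change
(illustrative only; the patch itself is authoritative):
RewriteRule ^(.*)_[0-9]{2}\.[0-9]{7}\.(js|css)$ $1.$2 [L]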
Test plan :
1) Go to intranet and opac
2) Check CSS and JS are doing well
3) Apply the patch changes to your Apache configuration
4) Reload intranet and opac pages (Ctrl + F5)
5) Check CSS and JS are doing well
Signed-off-by: Charles Farmer <charles.farmer@inLibro.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
This reverts commit f489d2034b.
That commit breaks the install process when using Debian packages.
Reverting, as we are very close to the 18.05.00 release.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This patch adds an option to the koha-conf.xml file for specifying
a temporary uploaded files directory.
The koha-create script is adjusted to handle it and a convenient option
switch is added. If omitted, it will default to
/var/lib/koha/<instance>/uploads_tmp.
koha-create-dirs is patched to create the required directory with the
right permissions.
The docs are updated to document the new parameter.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Julian Maurice <julian.maurice@biblibre.com>
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@deichman.no>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
It implements only the "client credentials" flow with no scopes
support. API clients are tied to an existing patron and have the same
permissions as the patron they are tied to.
API Clients are defined in $KOHA_CONF.
Test plan:
0. Install Net::OAuth2::AuthorizationServer 0.16
1. In $KOHA_CONF, add an <api_client> element under <config>:
<api_client>
<client_id>$CLIENT_ID</client_id>
<client_secret>$CLIENT_SECRET</client_secret>
<patron_id>X</patron_id> <!-- X is an existing borrowernumber -->
</api_client>
2. Apply patch, run updatedatabase.pl and reload starman
3. Install Firefox extension RESTer [1]
4. In RESTer, go to "Authorization" tab and create a new OAuth2
configuration:
- OAuth flow: Client credentials
- Access Token Request Method: POST
- Access Token Request Endpoint: http://$KOHA_URL/api/v1/oauth/token
- Access Token Request Client Authentication: Credentials in request
body
- Client ID: $CLIENT_ID
- Client Secret: $CLIENT_SECRET
5. Click on the newly created configuration to generate a new token
(which will be valid only for an hour)
6. In RESTer, set HTTP method to GET and url to
http://$KOHA_URL/api/v1/patrons then click on SEND
If patron X has permission 'borrowers', it should return 200 OK
with the list of patrons
Otherwise it should return 403 with the list of required permissions
(Please test both cases)
7. Wait an hour (or run the following SQL query:
UPDATE oauth_access_tokens SET expires = 0) and repeat step 6.
You should have a 403 Forbidden status, and the token must have been
removed from the database.
8. Create a bunch of tokens using RESTer, make some of them expire
using the previous SQL query, and run the following command:
misc/cronjobs/cleanup_database.pl --oauth-tokens
Verify that expired tokens were removed, and that the others are
still there
9. prove t/db_dependent/api/v1/oauth.t
[1] https://addons.mozilla.org/en-US/firefox/addon/rester/
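For reference, the token request RESTer builds in step 5 is a standard
client-credentials POST; an equivalent curl call (illustrative) would be:
    curl -X POST http://$KOHA_URL/api/v1/oauth/token \
         -d grant_type=client_credentials \
         -d client_id=$CLIENT_ID \
         -d client_secret=$CLIENT_SECRET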
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
- Removed merge marker
- Changed include path in favor of using the Asset tt plugin (bug 20538)
- Changed access_dir to a two-level entry for clarity
Test plans stay the same, just make sure that the two-level configuration entry
works properly and everything passes QA.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This squash contains all of these commits:
- Adds a page to access log files on the server from the intranet
- Update ID to allow for permalinking
- Rename config to 'accessdir' and fix qa
- Allows for multiple directories to be accessible
- Update the link under reports
- (Follow-up) Fixing merge error and cosmetic changes
- (Follow-up) Fix tab chars and move javascript to the footer
- (QA Follow-up) Fix datatable
- Make filename unicode-proof, renamed accessdir to access_dir and fix update
Test plans:
- Apply patch, update database
- Add to koha-conf:
<access_dir>/tmp/koha-public/one</access_dir>
<access_dir>/tmp/koha-public/two</access_dir>
<access_dir>/tmp/koha-public</access_dir>
- Create these directories ( mkdir /tmp/koha-public , etc...)
- Create these files:
echo "hello world!" > /tmp/koha-public/❤
echo "test" > /tmp/koha-public/one/samename.txt
echo "this is not the same" > /tmp/koha-public/two/samename.txt
- Login as Superadmin, go to tools > reports files
- Click on ❤, make sure it's downloadable and readable
- Click on both samename.txt files, look inside and make sure the contents differ
- Login as NON-superadmin. Go under tools, see no Report/Log under the third column
- Add the tools/access_file permission to the user
- See new entry under tools third column.
- validate link is ok.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Improvements:
1) Index settings moved from code to etc/searchengine/elasticsearch/index_config.yaml. An alternative can be specified in koha-conf.xml.
2) Field settings moved from code to etc/searchengine/elasticsearch/field_config.yaml. An alternative can be specified in koha-conf.xml.
3) mappings.yaml has been moved from admin/searchengine/elasticsearch to etc/searchengine/elasticsearch. An alternative can be specified in koha-conf.xml.
4) Default settings have been improved to remove punctuation from phrases used for sorting etc.
5) State variables are used for storing configuration to avoid parsing it multiple times.
6) A possibility to reset the fields too has been added to the reset operation of mappings administration.
7) mappings.yaml has been moved from admin/searchengine/elasticsearch to etc/searchengine/elasticsearch.
8) A stdno field type has been added for standard identifiers.
To test:
1) Run tests in t/Koha/SearchEngine/Elasticsearch.t
2) Clear tables search_fields and search_marc_map
3) Go to admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1
4) Verify that admin/searchengine/elasticsearch/mappings.pl displays the mappings properly, including ISBN and other standard number fields.
5) Index some records using the -d parameter with misc/search_tools/rebuild_elastic_search.pl to recreate the index
6) Verify that you can find the records
7) Put <elasticsearch_index_mappings>non_existent</elasticsearch_index_mappings> to koha-conf.xml
8) Verify that admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1 fails because it can't find non_existent.
9) Copy etc/searchengine/elasticsearch/mappings.yaml to a new location and make elasticsearch_index_mappings setting in koha-conf.xml point to it.
10) Make a change in the new mappings.yaml.
11) Clear table search_fields (mappings reset doesn't do it yet, see bug 20248)
12) Go to admin/searchengine/elasticsearch/mappings.pl?op=reset&i_know_what_i_am_doing=1
13) Verify that the changes you made are now visible in the mappings UI
Signed-off-by: Nicolas Legrand <nicolas.legrand@bulac.fr>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Bug 18571 added default ES configuration for packaging config.
This patch adds it to the dev install's koha-conf.xml.
The database name is used in index_name to allow multiple installs.
Test plan :
- Run dev install
- Install ElasticSearch server and Koha deps
- Enable ElasticSearch in Koha
- Check indexing and searching works directly
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Some libraries need to be able to send additional patron data from the
extended patron attributes in made-up SIP2 fields for the patron
information and patron status responses.
Test Plan:
1) Apply this patch
2) Create 3 new patron attributes with the codes CODE1, CODE2, CODE3.
Make at least one repeatable.
3) Create a patron and add those attributes to the patron; make sure there
are at least two instances of the repeatable code
4) Edit your SIP2 config file, add the following within the login stanza:
<patron_attribute field="XX" code="CODE1" />
<patron_attribute field="XY" code="CODE2" />
<patron_attribute field="XZ" code="CODE3" />
5) Using the sip cli emulator, run patron_information and
patron_status_request messages for the patron
6) Note the values you set for the patron attributes are sent in the
corresponding fields!
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Daniel Mauchley <dmauchley@duchesne.utah.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Amended: added parentheses on line 488 when assigning hashref to array.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The flags [N,L] make no sense: next and last combined.
Choosing L here to stop the rewriting process.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Victor Grousset <victor.grousset@biblibre.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Koha has the ability to include custom css in the apache configuration.
If a library has any custom css ( or adds a custom js file in some way ),
and that file has an underscore in it ( e.g. my_custom.css ), the
apache rewrite rule will convert it to my.css and thus it will 404.
We should make the rewrite rules as specific as possible for the
format we are using.
Test Plan:
1) Set OPAC_CSS_OVERRIDE to a file with an underscore in it
2) Note it does not work
3) Apply this patch
4) Update the apache rewrite rules to match those in the patch
For kohadevbox, just run /home/vagrant/misc4dev/cp_debian_files.pl
5) Restart apache
6) Reload the page, your custom css should load now!
Signed-off-by: Victor Grousset <victor.grousset@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Olli-Antti Kivilahti <olli-antti.kivilahti@jns.fi>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
This patch adds to Makefile.PL the capability of handling the template_cache_dir configuration entry.
To do so, it:
- Adds the --template-cache-dir option switch (consistency with koha-create)
- Sets a default value for template_cache_dir to '/tmp/koha'
- Adds a dialog requesting the path for the template cache dir to Makefile.PL
- It tweaks etc/koha-conf.xml so it is correctly changed by rewrite-config.PL
To test:
- Apply this patch
- Run:
$ perl Makefile.PL --template-cache-dir your/favourite/dir
=> SUCCESS: The dialogs don't ask for template cache dir
=> SUCCESS: The resulting Makefile contains an entry for TEMPLATE_CACHE_DIR whose value
matches what we passed to --template-cache-dir
- Run:
$ perl Makefile.PL
- When prompted for a template cache dir, enter whatever you want
=> SUCCESS: The default you are offered is /tmp/koha
=> SUCCESS: At the end of the process, Makefile contains what we put in there
- Run:
$ sudo make install
=> SUCCESS: The resulting koha-conf.xml contains a <template_cache_dir> entry containing
whatever you picked for that purpose.
- Sign off :-D
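For example, accepting the default would leave this entry in the resulting
koha-conf.xml:
    <template_cache_dir>/tmp/koha</template_cache_dir>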
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
This commit is at the heart of adding an interlibrary loans framework
for Koha. The framework does not prescribe a particular workflow.
Instead it provides a general framework that can be extended &
implemented by individual backends whose responsibility it is to
implement a specific workflow.
The module is largely self-sufficient: it adds new tables to the Koha
database and touches only a few files in the Koha source tree.
Primarily, we add our files to the Makefile and the koha-conf.xml,
define ill paths for the REST API, and introduce links from the main
intranet, opac pages & user permissions.
Outside of this we simply add new files & functionality.
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Some SIP services ( such as Comprise ) require that an attempt at
over-paying a patron's account via SIP2 should fail, rather than create
a credit on the account. We should make this a configurable option on a
per-login basis in the SIP2 config file.
Test Plan:
1) Apply this patch
2) Enable the new parameter
disallow_overpayment="1"
for the login to be used in this test.
3) Restart your SIP server
4) Create or find a patron with fines
5) Attempt to send a payment via SIP for more than what the
patron's balance is
6) Note the response indicates a payment failure
7) Attempt to send a payment via SIP for the account balance or
less
8) Note the response indicates the payment has succeeded
9) Verify in Koha that the payment was processed
Signed-off-by: Rhonda Kuiper <kuiper@roundrocktexas.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Koha's SIP2 server sends the patron's name in the format "Firstname
Surname" which is not very good for machine reading. We need to allow
the format of the patron name to be customized in a manner similar to
what is done with the DA field on bug 16755.
Test Plan:
1) Apply this patch, start or restart your SIP server
2) Find a patron with a first and last name
3) Send a patron information request via the sip2 cli tool
4) Note the AE field has the format "<firstname> <surname>" ( i.e. the current behavior )
5) Add this parameter to the login stanza you are using:
ae_field_template="[% patron.surname %][% IF patron.firstname %], [% patron.firstname %][% END %]"
6) Restart your SIP server
7) Repeat step 3
8) Note the AE field now has the format "<surname>, <firstname>"
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Benjamin Daeuber <BDaeuber@cityoffargo.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
The SIP2 DA field that Koha transmits is in an odd and arbitrary format
that some SIP2 clients cannot handle. It would be best if this
format were customizable on a per-login basis in the same manner as
the AV field.
Test Plan:
1) Find an item that is checked out with holds
2) Return the item via SIP2 ( using the SIP2 cli emulator )
3) Note the value of the DA field
4) Apply this patch, restart your SIP2 server
5) Repeat step 2
6) Note the DA field value has not changed
7) Add this parameter to the login stanza you are using:
da_field_template="[% patron.surname %][% IF patron.firstname %], [% patron.firstname %][% END %]"
8) Restart the SIP2 server again
9) Repeat step 2
10) Note the DA field returned is now in the format "$surname, $firstname"
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Benjamin Daeuber <BDaeuber@cityoffargo.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
One of the RewriteCond directives in koha-httpd.conf was affecting
the wrong RewriteRule after its original RewriteRule was commented out
years ago.
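The underlying Apache behaviour, sketched with placeholder rules: a
RewriteCond only guards the RewriteRule that immediately follows it, so
commenting out that rule silently attaches the condition to the next active
rule.
    RewriteCond %{QUERY_STRING} ^some-condition$
    #RewriteRule ^/old-path/ ...   <- the rule the condition was written for, now commented out
    RewriteRule ^/bib/(.*)$ ...    <- unintentionally picks up the condition above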
_TEST PLAN_
0) Before applying patch, build Koha from source
*) make
*) make install (or make upgrade)
*) Copy or symlink etc/koha-httpd.conf to your Apache vhost directory
(and enable if you're on a Debian based system)
*) Restart Apache
1) Make sure that you have at least 1 bibliographic record in Koha
(URL like this http://server:port/cgi-bin/koha/opac-detail.pl?biblionumber=1)
2) Go to http://server:port/bib/1
3) Note that you get a 404 error
4) Apply the patch
5) Rebuild Koha from source as per step 0
6) Go to http://server:port/bib/1
7) Note that you now see the same page as you would if you went to
http://server:port/cgi-bin/koha/opac-detail.pl?biblionumber=1
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds a numeric index 'not-onloan-count' containing the value
of 999$x. This subfield is filled by 'rebuild_zebra.pl' by making use of
(bug's 18208) 'EmbedItemsAvailability' filter.
bib1.att and indexes definitions are updated accordingly.
To test:
- Apply the patch
- Pick the right biblio-zebra-indexdefs.xsl file for your setup and
replace the one your Zebra uses [1]
- Replace your bib1.att
- Replace your ccl.properties
- Have at least one record with more than one item, checkout some
item(s) from that record(s).
- Rebuild zebra's indexes:
$ sudo koha-shell kohadev
k$ cd kohaclone
k$ misc/migration_tools/rebuild_zebra.pl -r -b -v -k
(notice the dump directory is kept, you can try the XSLT yourself
running:
$ xsltproc \
etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/tmp/the_dump_dir/biblios/exported_records | less
=> SUCCESS: There are records with the not-onloan-count index, and the
value is correct!
- Check Zebra yourself:
$ yaz-client unix:/var/run/koha/kohadev/bibliosocket
Z> base biblios
Z> find @attr 1=9013 @attr 2=5 @attr 4=109 0
=> SUCCESS: The search matches the number of records having items that are
not on loan.
Z> s 1+1
=> SUCCESS: Records with 999$x having a value higher than 0 are rendered
- Sign off :-D
Note: While this work is complete for its purpose, it is part of an
attempt to create a better way of filtering by availability.
Sponsored-by: ByWater Solutions
[1] In kohadevbox this would be
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
Edit: Added the missing XSLT changes for UNIMARC and NORMARC
Signed-off-by: Josef Moravec <josef.moravec@gmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Many SIP2 services, such as those by Comprise Technologies, can make use of or
require that an ILS be able to accept writeoffs via SIP2. The SIP2
protocol specifies that payment type be a two digit number, but does not
specify a code for writeoffs. To this end we should allow the write-off
code to be specified in the SIP2 config on a per-account basis so that
if different vendors use different fixed codes for write-offs we can
handle that gracefully.
Test Plan:
1) Apply this patch
2) Modify your SIP2 config to include
payment_type_writeoff="06"
in the login portion of the account you will be using for the test.
3) Restart your SIP2 server
4) Create a fee for a patron
5) Send a SIP2 fee paid message specifying the payment type code we
defined earlier, with a payment amount that is *not* equal to the
amount outstanding for the fee.
6) Note the fee paid response indicates the payment failed
7) Repeat step 5, but this time send the amount outstanding as the
payment amount
8) Note that the fee paid response indicates a successful payment
9) Note in Koha that the fee has been written off!
Signed-off-by: Rhonda Kuiper <kuiper@roundrocktexas.gov>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
In summary, changes are:
1) If you have chosen MySQL, Makefile.PL will ask you if you want TLS (default:
"no"), and then the locations for CA cert, client cert and client key
(reasonable defaults are provided). Settings <tls>, <ca>, <cert> and <key> are
added to koha-conf.xml.
2) If <tls>yes</tls> in koha-conf.xml, the installer and database connection
scripts add the TLS options to both the DBI connection strings and the mysql
command line.
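The resulting koha-conf.xml entries would look something like this (paths are
placeholders):
    <tls>yes</tls>
    <ca>/etc/mysql/ssl/ca.pem</ca>
    <cert>/etc/mysql/ssl/client-cert.pem</cert>
    <key>/etc/mysql/ssl/client-key.pem</key>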
To test
1/ Apply patch
2/ Check everything still works and db connections are the same as before
3/ Either run Makefile.PL and step through the options or edit your koha-conf.xml to
enable TLS
4/ Check db connections are still working
Patch provided to me by Dimitris Kamenopoulos; I reformatted it into a git patch,
so any errors are probably mine.
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
TEST PLAN
1. Make sure you have no items checked out.
2. Run sudo koha-rebuild-zebra -f -v kohadev.
3. Go to 'Search the catalog' and run a search.
4. Check item availability and then click on 'limit to currently
available items'.
5. This should return no results.
6. Apply patch and reload.
7. Results should show.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Attribute 14: " Specifies whether un-indexed fields should be ignored. A
zero value (default) throws a diagnostic when an un-indexed field is
specified. A non-zero value makes it return 0 hits."
From http://www.indexdata.com/zebra/doc/querymodel-zebra.html
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Looking at the default framework's fields that are linked to authority
records, there's a divergence with the Zebra index definitions.
This causes the authority usage count to be incorrect for users searching
for authority records.
MariaDB [koha_kohadev]> SELECT tagfield,tagsubfield,authtypecode FROM
marc_subfield_structure WHERE authtypecode IS NOT NULL AND
authtypecode<>'' AND frameworkcode='' GROUP BY
tagfield,tagsubfield,authtypecode ;
+----------+-------------+--------------+
| tagfield | tagsubfield | authtypecode |
+----------+-------------+--------------+
| 100 | a | PERSO_NAME |
| 110 | a | CORPO_NAME |
| 111 | a | MEETI_NAME |
| 130 | a | UNIF_TITLE |
| 440 | a | UNIF_TITLE |
| 600 | a | PERSO_NAME |
| 610 | a | CORPO_NAME |
| 611 | a | MEETI_NAME |
| 630 | a | UNIF_TITLE |
| 648 | a | CHRON_TERM |
| 650 | a | TOPIC_TERM |
| 651 | a | GEOGR_NAME |
| 654 | a | TOPIC_TERM |
| 655 | a | GENRE/FORM |
| 656 | a | TOPIC_TERM |
| 657 | a | TOPIC_TERM |
| 658 | a | TOPIC_TERM |
| 662 | a | GEOGR_NAME |
| 690 | a | TOPIC_TERM |
| 691 | a | GEOGR_NAME |
| 696 | a | PERSO_NAME |
| 697 | a | CORPO_NAME |
| 698 | a | MEETI_NAME |
| 699 | a | UNIF_TITLE |
| 700 | a | PERSO_NAME |
| 710 | a | CORPO_NAME |
| 711 | a | MEETI_NAME |
| 730 | a | UNIF_TITLE |
| 796 | a | PERSO_NAME |
| 797 | a | CORPO_NAME |
| 798 | a | MEETI_NAME |
| 799 | a | UNIF_TITLE |
| 800 | a | PERSO_NAME |
| 810 | a | CORPO_NAME |
| 811 | a | MEETI_NAME |
| 830 | a | UNIF_TITLE |
| 896 | a | PERSO_NAME |
| 897 | a | CORPO_NAME |
| 898 | a | MEETI_NAME |
| 899 | a | UNIF_TITLE |
+----------+-------------+--------------+
This patch adds the missing ones to the authority number index as it is
done for the rest of the fields.
To test:
- Verify that
etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml
contains entries pointing the $9 subfield of all the fields in the
'tagfield' column above, to the Koha-Auth-Number:w index.
- Sign off :-D
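For orientation, the entries in biblio-koha-indexdefs.xml are roughly of this
shape (namespace prefixes omitted and written from memory; the shipped file is
authoritative):
    <index_subfields tag="648" subfields="9">
      <target_index>Koha-Auth-Number:w</target_index>
    </index_subfields>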
Signed-off-by: Hugo Agud <hagud@orex.es>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch restores access to zebra facets (or zebra::snippet) with YAZ 5.8.1 or higher.
It was failing due to the <retrieval syntax="xml" name="zebra::*" /> entry in
retrieval-info-bib-dom.xml, which Index Data said wasn't even needed to
get that access.
Edit: I amended the commit message (tcohen)
Signed-off-by: Colin Campbell <colin.campbell@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
I tested on kohadevbox and found no regression or behaviour change. I
will provide a followup for the packages.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
It would be nice if we could control the number of workers and max
requests on a per instance basis, rather than the numbers being
hardcoded in the plack startup script.
Test Plan:
1) Build a new package of Koha with this patch applied ; )
2) Verify koha-plack still works
3) Add the following to the config section of your koha-conf.xml:
<plack_max_requests>75</plack_max_requests>
<plack_workers>4</plack_workers>
4) Stop plack
5) Start plack
6) Verify the number of workers and max requests settings took effect!
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Larry Baerveldt <larry@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Rebased against master and added a description for the new configuration
entries
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
And comment it out, as we don't know the sysop's preferences
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Adding POD, changing the config files to live in a path pointed to by
koha-conf.xml
This means multiple instances can have their own config
Please test the 2 patches together
Signed-off-by: Magnus Enger <magnus@libriotech.no>
Works as advertised. Arbitrary arguments can be passed to SMS::Send
drivers. If an argument is provided that has already been set by
SMS::Send or the driver, it will be overwritten by the value from
the YAML file. My only suggestion for an improvement would be an
example of what the YAML should look like, but that is a minor thing.
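Picking up on that suggestion, a minimal sketch of such a driver YAML file
(the keys are hypothetical and depend entirely on which SMS::Send driver you
use):
    # additional arguments passed through to the SMS::Send driver
    login: my_gateway_user
    password: my_gateway_password
    sender: MyLibrary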
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch makes Zebra index the 648$9 link for chronological terms on
bibliographic records. This way an authority search on chronological terms
will show the right number in the 'Used in X records' message.
To test:
- Have a record with a 648 field, linked to an authority record (i.e. with an authid on 648$9).
- Search for the record, notice it is indexed.
- Perform an authority search for the chronological term
=> FAIL: the term is linked to our record, but koha shows '0' count.
- Apply the patch
- Run:
$ cd kohaclone
$ xsltproc etc/zebra/xsl/koha-indexdefs-to-zebra.xsl \
etc/zebradb/marc_defs/marc21/biblios/biblio-koha-indexdefs.xml \
> etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
$ git diff
=> SUCCESS: Notice the shipped etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
is up-to-date
- Run:
$ sudo cp etc/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl \
/etc/koha/zebradb/marc_defs/marc21/biblios/biblio-zebra-indexdefs.xsl
$ sudo koha-restart-zebra kohadev
$ sudo koha-rebuild-zebra -f -b -v kohadev
- Search for the record, notice it is indexed.
- Perform an authority search for the chronological term
=> SUCCESS: the term is linked to our record, usage count is 1
- Sign off :-D
I assume NORMARC is similar in this regard. Feel free to fail it if the NORMARC part of the
patch is wrong.
Sponsored-by: Universidad Nacional de Cordoba
Signed-off-by: Hugo Agud <hagud@orex.es>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Test plan:
1) Apply patch
2) Create new instance with parameter --zebralang cs
3) Insert some records with basic Latin characters and some with Czech characters (for example: "č" should be sorted after "c", "š" after "s")
4) Try to search the catalogue (staff and OPAC) and sort by a field other than relevance - title or author, for instance
5) Records should be sorted correctly by Czech rules
6) Look at code and confirm it is ok
Signed-off-by: radiuscz <radek.siman@centrum.cz>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
I did not test this patch, but trust in the author and signoffer
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
to test bug...
1/ make a random user
2/ change to random user
3/ access any zebra database with random user and no authentication
4/ read zebra database
here is a transcript of the bug...
---------------------------
root@xen1:~# adduser bob
root@xen1:~# su -l bob
bob@xen1:~$ cd /var/lib/koha
bob@xen1:/var/lib/koha$ ls
topsecret
bob@xen1:/var/lib/koha$ yaz-client unix:/var/run/koha/topsecret/bibliosocket
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID : 81
Name : Zebra Information Server/GFS/YAZ
Version: 4.2.30 98864b44c654645bc16b2c54f822dc2e45a93031
Options: search present delSet triggerResourceCtrl scan sort extendedServices namedResultSets
Elapsed: 0.001002
Z> base biblios;
Z> find the
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 1130, setno 2
SearchResult-1: term=the cnt=1130
records returned: 0
Elapsed: 0.005518
Z> show
Sent presentRequest (1+1).
Records: 1
[biblios]Record type: USmarc
01824cam a2200397 a 4500
001 000045782309
003 AuCNLKIN
005 20111013213222.0
008 100707s2011 maua 001 0 e
...
---------------------------
5/ apply the changes to the config files of a Koha instance that you plan to test
6/ restart zebra for instance
# sudo koha-restart-zebra topsecret
7/ repeat steps 2 and 3, but receive a 'bad user/passwd ' error from zebra
bob@xen1:~$ yaz-client unix:/var/run/koha/topsecret/bibliosocket
Connecting...OK.
Sent initrequest.
Connection rejected by v3 target.
1: code=1011 (Init/AC: Bad Userid and/or Password),
NOTE: this patch currently only fixes newly created instances; it won't fix existing instances
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Good catch Mason
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
This patch removes Memcached configurations from the shipped apache files.
Note: testing is not actually needed for this patch, as it is really trivial. But I
include testing steps, just in case QA members require it.
To test:
- Apply the patch
- Do a (standard/dev/single) Koha install
=> SUCCESS: Verify the resulting koha-httpd.conf file doesn't include memcached data
- Have a packages install
- Replace
* /etc/koha/apache-site-https.conf.in
* /etc/koha/apache-site.conf.in
with the ones from this patch
- Create an instance
=> SUCCESS: The apache configuration doesn't include memcached configurations
- Sign off :-D
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch introduces the memcached_servers and memcached_namespace
configuration entries as expected by bug 11921.
Note: better test this one and the followup together to ease the process.
To test:
- Do a source Koha install (dev, standard, single)
=> SUCCESS: The resulting koha-conf.xml file includes the memcached_* entries
which are filled with the right values.
- In kohadevbox (packages setup):
- Replace /etc/koha/koha-conf-site.xml.in with the one from this patch
- Create a new koha instance
=> SUCCESS: The instance's koha-conf.xml includes the relevant entries
- Sign off
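For reference, the entries look like this (values are what a typical dev
install would use):
    <memcached_servers>127.0.0.1:11211</memcached_servers>
    <memcached_namespace>koha_kohadev</memcached_namespace>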
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Remove trailing whitespace and replace tabs with 4 spaces.
Signed-off-by: Claire Gravely <claire_gravely@hotmail.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Removes a commented line from bib1.att.
Adjusts OCLC-number to Other-control-number in the comment of ccl properties.
No need to explicitly add 035$a and $z if you index 035 completely in
record.abs as well as biblio-koha-indexdefs.xml.
Reruns koha-indexdefs-to-zebra.xsl on the index definitions.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
1) Apply patch
2) Make sure that you have a bib that has MARC21 035$a (and possibly also 035$z) populated.
Before step 3) Replace all modified Zebra files and restart the Zebra server
3) Rebuild zebra: misc/migration_tools/rebuild_zebra.pl -x -b -z
4) Add the following to the intranetuserjs syspref:
$(document).ready(function(){
// Add Other Control Number to advanced search
if (window.location.href.indexOf("catalogue/search.pl") > -1) {
$(".advsearch").append('<option value="Other-control-number">Other Control Number</option>');
}
});
5) Do an advanced search, select "Other Control Number" from the search menu, then enter the Other Control Number from 035$a of the bib specified in step 2.
Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>
Works, no koha-qa errors
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
A long-standing typo in our apache config files:
[intranet]/search refers to search.pl (which does not exist).
This patch points it to catalogue/search.pl.
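For orientation, the corrected alias in apache-shared-intranet.conf should end up looking roughly like the line below (the CGI path shown assumes a package install and may differ on your setup):
ScriptAlias /search "/usr/share/koha/intranet/cgi-bin/catalogue/search.pl"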
Test plan:
Run an install or copy the change from apache-shared-intranet.conf or
koha-httpd.conf to your apache config. Restart Apache and check
if http://[your staff client]/search works.
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Tested by making manual changes according to the patch. Did not test a
new installation.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Most selfchecks hold persistent connections and send a periodic
status request at intervals (approximately every 5 minutes appears to be
the norm). The default timeout was dropping connections every 30 seconds,
which to the client looks like a very flaky network.
This patch adds a separate parameter, client_timeout, that can be
used if you do want to force a disconnect when the client sends
no requests for a period. The sample config sets it to 600, but you
can also define a value of 0, meaning no timeout. If the parameter is not
defined, it falls back to the service timeout.
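A minimal sketch of how the new parameter could sit in SIPconfig.xml, assuming it is set as an attribute on the service element as in the sample config (the port and timeout values here are only illustrative):
<service port="6001/tcp@localhost" transport="RAW" protocol="SIP/2.00" timeout="60" client_timeout="600" />
Setting client_timeout="0" instead disables the idle disconnect entirely.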
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Restored this patch from Colin in order to separate it from the
get_timeout patch. Adjusted the commit message slightly.
The original value of 600 from Colin's earlier patch may provoke less
discussion than the 0 (no timeout) value proposed later.
Signed-off-by: Srdjan <srdjan@catalyst.net.nz>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch indexes 024$a into the "phrase" index type and, if 024$2
equals "uri", also into the "url" index type.
TEST PLAN
1) Apply the patch.
1b) If you're using a gitified Koha or a git install,
you'll need to upgrade your instance or copy your zebradb files
over to /etc/koha/zebradb or your "kohadev" directory.
2) Add a 024$a with a URL like http://libris.kb.se/resource/bib/219553
to a bibliographic record
3) Re-index Zebra
4) Type "id-other,st-urx,fuzzy=http://libris.kb.se/resource/bib/219553"
into the "Search the catalog" box in the Staff Client and search
5) Note that you retrieve your record
NOTE: The fuzzy is required because Koha's query "parsing" functions change
http:// to http=//, which won't correctly match the value in the "Identifier-other:u" index.
NOTE: Alternatively, you could do the following search instead:
"id-other,phr=http libris kb se resource bib 219553".
It would work as well by using the "Identifier-other:p" index.
Advanced tester version:
4) In a terminal window, find the "koha-conf.xml" file in your "etc" directory.
5) Open "koha-conf.xml" and find <listen id="biblioserver">.
Copy the URI you find there. (e.g. unix:/home/dcook/koha-dev/var/run/zebradb/bibliosocket).
6) Type "yaz-client unix:/home/dcook/koha-dev/var/run/zebradb/bibliosocket"
7) After it connects, type "base biblios" and press enter
8) Type "format xml" and press enter
9) Type "elements zebra::index" and press enter
10) Type "f id-other,st-urx=http://libris.kb.se/resource/bib/219553" and press enter
11) Note that you should have at least one result
12) Type "show 1"
13) If you scroll through the results, you should find something like the following:
<index name="Identifier-other" type="w" seq="28">@^</index>
<index name="Identifier-other" type="w" seq="1"></index>
<index name="Identifier-other" type="w" seq="29">http</index>
<index name="Identifier-other" type="w" seq="30">libris</index>
<index name="Identifier-other" type="w" seq="31">kb</index>
<index name="Identifier-other" type="w" seq="32">se</index>
<index name="Identifier-other" type="w" seq="33">resource</index>
<index name="Identifier-other" type="w" seq="34">bib</index>
<index name="Identifier-other" type="w" seq="35">219553</index>
<index name="Identifier-other" type="p" seq="28">http libris kb se resource bib 219553</index>
<index name="Identifier-other" type="u" seq="36">http://libris.kb.se/resource/bib/219553</index>
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised; the record is retrieved.
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Some of the statements in the commit message do not work for me.
A search like "id-other,phr=http libris kb se resource bib 219553" does not
have results. Searching for "id-other,phr=libris.kb.se resource" does.
The steps in the advanced tester version do not work for me either.
I verified the following in yaz-client:
[1] Z> f @attr 1=9012 @attr 4=104 http://libris.kb.se/resource/bib/219553
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 1, setno 16
[2] First removed $2 and reindexed. Then searched again:
Z> f @attr 1=9012 @attr 4=104 http://libris.kb.se/resource/bib/219553
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 0, setno 1
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Changed occurrences of 'lex' to 'lexile-number' in record.abs.
Edits were made to the deprecated file record.abs *solely* to quiet
warnings in tests -- this makes sense until GRS-1 code is removed
from Koha.
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Added the following indexes:
Interest-age-level | 591$a ind1=1
Interest-grade-level | 591$a ind1=2
lexile-number | 591$a ind1=8
Reading-grade-level | 591$a ind1=0
Moved 'lex' from a zebra index to a ccl alias to lexile-number.
Changed the handling of st-numeric in C4/Search.pm to allow for search ranges.
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Koha's SIP2 server should have support for the AV field (fine items).
The biggest problem with this field is that its contents are not really
defined in the SIP2 protocol specification. All it says is "this field
should be sent for each fine item". Due to this, I think the contents of
the field need to be configurable at the login level, so that the
contents can be defined based on the SIP2 device's requirements for the
AV field.
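As a sketch, the template from the test plan below lives on the login element for the SIP account in question, roughly like this (the id, password and institution values are placeholders):
<login id="term1" password="term1" institution="CPL" av_field_template="[% accountline.description %] [% accountline.amountoutstanding | format('%.2f') %]" />
Each outstanding accountline is run through the Template Toolkit snippet, and the result becomes one AV field in the patron information response.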
Test Plan:
1) Apply this patch
2) Find a patron with outstanding fines
3) Run a patron information request using misc/sip_cli_emulator.pl using the new -s option with the value " Y "
4) Note there is an AV field for each fee containing the description and amount
5) Edit your sip config, add an av_field_template parameter to the login you are using such as
av_field_template="TEST [% accountline.description %] [% accountline.amountoutstanding | format('%.2f') %]"
6) Restart your SIP server
7) Repeat the patron information request
8) Note your custom AV field is being used!
Signed-off-by: Chris Davis <cgdavis@uintah.utah.gov>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
This patch will add indexes for Date/time-last-modified.
To test:
1. apply patch
2. reindex
3. search for dtlm:DATE and date-time-last-modified:DATE
4. confirm that you get results
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised
Signed-off-by: Frédéric Demians <f.demians@tamil.fr>
I confirm Hector's sign-off. A simple Zebra server restart suffices to get
the searches on date-time-last-modified and dtlm working.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Allows for selection of the DejaVu font path when installing from the command line. This
is useful for non-Debian distributions that don't store the fonts in the same place.
Adds a new configuration variable to Makefile.PL: FONT_DIR
Defaults to the Debian install location for the fonts.
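For context, the chosen path ends up in the <ttf> font section of the generated koha-conf.xml, in entries that look roughly like this (the "TR" face code and the Debian path are shown only as an example):
<font type="TR">/usr/share/fonts/truetype/dejavu/DejaVuSerif.ttf</font>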
Test plan:
1. Run a CLI install, accepting the defaults.
2. Compare the generated koha-conf.xml to a
previous install - the font path for DejaVu fonts should be the same.
3. Run another CLI install, this time choosing a custom path for the fonts
4. Check that the path selected is reflected in the koha-conf.xml file.
NOTE: 'perl Makefile.PL' and 'make' generate blib/KOHA_CONF_DIR/koha-conf.xml.
I ran the install with a weird string for the font dir,
copied that koha-conf.xml to my home dir,
reran with all defaults,
and compared the two; only the font paths differed.
Also, I cleaned up the tabs that snuck in. :)
Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@unc.edu.ar>
Signed-off-by: Brendan Gallagher <brendan@bywatersolutions.com>
Only in MARC21 is it possible to use ind2 of tag 245 to skip articles.
This patch is based on inserting a special template in
koha-indexdefs-to-zebra.xsl. With this patch you must not define the
Title:s index in biblio-koha-indexdefs.xml; it is defined in
koha-indexdefs-to-zebra.xsl. It is not the best setup, but I find
biblio-koha-indexdefs.xml very difficult to use.
To test it in an English MARC21 setup:
Insert some records with titles and correct values in ind2 of 245.
If you have articles not in the skipping list of sort-string-utf.chr (The|the|a|A|an|An),
you can see that sorting by title also takes the articles into account.
Apply the patch.
Rebuild indexes from scratch.
Now all title articles are skipped.
TO TEST WITHOUT INDEXING:
1. Go to etc/zebradb/marc_defs/marc21/biblios directory.
2. Put the sample MARCXML file in this directory.
3. Transform the file into Zebra indexes:
xsltproc biblio-zebra-indexdefs.xsl record.xml
Observe that the Title:s index contains:
01 Business and Technologies
4. Apply the patch.
5. Repeat:
xsltproc biblio-zebra-indexdefs.xsl record.xml
Observe that the Title:s index contains:
Business and Technologies
Signed-off-by: Frederic Demians <f.demians@tamil.fr>
Signed-off-by: Jesse Weaver <jweaver@bywatersolutions.com>
Verified working using yaz-client (as in
http://wiki.koha-community.org/wiki/Understanding_Zebra_indexing#Examine_Zebra_index,
though note that the `elem zebra::index` seems to be unneeded).
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Test plan:
Without this patch, doing a Zebra reindex on a Fedora-based install will cause errors like this:
15:10:47-01/05 zebraidx(16108) [warn] No such record type: dom./etc/koha/zebradb/biblios/etc/dom-config.xml
With this patch, reindexing should just work.
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
I have tested that this doesn't break on Debian/Ubuntu systems; someone with a
non-Debian system will need to test it there.
Signed-off-by: Bob Ewart <bob-ewart@bobsown.com>
It works on openSUSE Leap 42.1
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Just noting that the Debian Zebra files already contain many more paths here.
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
This patch adds the Zebra special attribute 14 to ccl.properties and
opac-search.pl, so that we can turn on OpacSuppression and still return
results even if there are no records in Zebra for the Suppress index.
_TEST PLAN_
Before applying:
1) Make sure that you have no suppressed records indexed in Zebra
2) Turn on OpacSuppression system preference
3) Search using a keyword which should bring up records
4) Note that no records are returned in the results
5) Change UseQueryParser system preference to "Try"
6) Repeat steps 3-4
Apply the patch.
After applying:
7) Repeat step 3 (ie search using a keyword which should bring up records)
8) Confirm that records are appearing in the results!
9) Change UseQueryParser system preference to "Do not try"
10) Repeat step 3
11) Confirm that records are appearing in the results!
Signed-off-by: Hector Castro <hector.hecaxmmx@gmail.com>
Works as advertised. You no longer need to have at least one record with the
value "1" in the field mapped to this index; all records are returned in the OPAC.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
If an item is not checked out when a checkin via SIP2 is attempted,
Koha's SIP server sends back an "ok" of 0, and the AF message "Item
not checked out". I am not entirely sure this is good and correct
behavior by the SIP2 protocol.
In particular, this will cause SIP2 book sorting devices to fail on
all items that are not checked out, causing them all to be put into
the "special handling" bin that should be reserved for things like
items checked in at the wrong library and items on hold.
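A rough sketch of the switch introduced here, set per account in the SIP config (the account details are placeholders):
<login id="sorter" password="sorter" institution="CPL" checked_in_ok="1" />
With the flag enabled, a checkin of an item that was never checked out still reports ok=1 and omits the "Item not checked out" message, so a sorting device routes the item normally.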
Test Plan:
1) Apply the patch for bug 13159 so you can use the new enhanced
SIP2 command line emulator
2) Use a command similar to the following to check in an item:
sip_cli_emulator.pl -a localhost -su <sip user> -sp <sip password> -l <institution id> --item <barcode> -m checkin
3) Note the 3rd character is 0, and there is an AF field saying the item is not checked out
4) Apply this patch
5) Restart the SIP server
6) Repeat steps 2-3, note that nothing has changed
7) In the SIP config file, Add the parameter checked_in_ok="1" to the SIP account you are using.
8) Restart the SIP server
9) Repeat steps 2-3, note that this time the 3rd character is 1, and you do not receive the item not checked out message.
Signed-off-by: Benjamin Rokseth <benjamin.rokseth@kul.oslo.kommune.no>
Signed-off-by: Brendan A Gallagher <brendan@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Single quotes in ordinary text (not in programming) are usually ', but
there is also the typographic apostrophe ’ (the &rsquo; entity in HTML). See
https://fr.wikipedia.org/wiki/Apostrophe_%28typographie%29
This bug proposes to transliterate all of these forms into a space.
Test plan :
(the titles below use the typographic apostrophe ’ rather than the plain ')
- Without the patch
- Create a record with title : L’avion d’argile
- Index this record
- Search for "L’avion d’argile" => You find the record
- Search for "L'avion d'argile" => You do not find the record
- Apply patch
- Search for "L’avion d’argile" => You find the record
- Search for "L'avion d'argile" => You find the record
- Search for "L avion d argile" => You find the record
Signed-off-by: Frederic Demians <f.demians@tamil.fr>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This avoids getting a debug page when the URL is wrong.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Actual routes are:
/borrowers
Return a list of all borrowers in Koha
/borrowers/{borrowernumber}
Return the borrower identified by {borrowernumber}
(eg. /borrowers/1)
There is a test file you can run with:
$ prove t/db_dependent/rest/borrowers.t
All API stuff is in /api/v1 (except Perl modules)
So we have:
/api/v1/script.cgi CGI script
/api/v1/swagger.json Swagger specification
Change both OPAC and Intranet VirtualHosts to access the API,
so we have:
http://OPAC/api/v1/swagger.json Swagger specification
http://OPAC/api/v1/{path} API endpoint
http://INTRANET/api/v1/swagger.json Swagger specification
http://INTRANET/api/v1/{path} API endpoint
Add a (disabled) virtual host in Apache configuration api.HOSTNAME,
so we have:
http://api.HOSTNAME/api/v1/swagger.json Swagger specification
http://api.HOSTNAME/api/v1/{path} API endpoint
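As a quick smoke test once Apache has been reloaded, a plain curl call against either interface should return JSON (the host, port and borrowernumber below are illustrative):
curl http://127.0.0.1:8080/api/v1/borrowers/1
The same URL without the trailing borrowernumber returns the full list of borrowers.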
Add 'unblessed' subroutines to both Koha::Objects and Koha::Object to be
able to pass objects to Mojolicious
Test plan:
1/ Install Perl modules Mojolicious and Swagger2
2/ perl Makefile.PL
3/ make && make install
4/ Change etc/koha-httpd.conf and copy it to the right place if needed
5/ Reload Apache
6/ Check that http://(OPAC|INTRANET)/api/v1/borrowers and
http://(OPAC|INTRANET)/api/v1/borrowers/{borrowernumber} work
Optionally, you could verify that http://(OPAC|INTRANET)/vX/borrowers
(where X is an integer greater than 1) returns a 404 error
Signed-off-by: Alex Arnaud <alex.arnaud@biblibre.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds Zebra indexes for the RDA 264 field.
The new Provider index is added too.
QA comments have been addressed.
To test:
1) Download RDA records with 264 fields from this attachment <http://bugs.koha-community.org/bugzilla3/attachment.cgi?id=36825>. Import the file and re-index/rebuild Zebra. These records contain 260 and 264 fields per record.
2) Do a search with pb:Bethany; two records will appear with the title The guardian. Search with pl:Minneapolis too; the two records will appear.
3) Select one of the two records, delete the 260 field keeping the 264 field, save, and rebuild your Zebra index.
4) Search again with pb:Bethany and just one record will appear. That means 264 is not indexed.
5) Apply the patches.
6) Rebuild your Zebra index, this time for all biblio records.
7) Search again with pv:Bethany or Provider:Bethany; this time the two records will appear, so 264 is indexed. Note that if you search again with pb, only one record appears. This follows the LOC suggestion.
8) Search with copydate:2013; it only returns records with 260 fields, while pv:2013 matches both fields, i.e., 260 and 264.
9) Run the QA test tools
Sponsored-by: Universidad de El Salvador
Signed-off-by: Nick Clemens <nick@quecheelibrary.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Currently, Norwegian vowels are not sorted correctly, even when
you have chosen "nb" as the ZEBRA_LANGUAGE during installation.
To test:
- Make sure you have three records with titles that begin with Æ, Ø and Å
respectively
- Do a search that turns up those three records and some others, and
sort the results by title, both ascending and descending.
- Verify that ÆØÅ are shown in some weird order.
- Edit your sort-string-utf.chr* so it is in line with the current
patch. It should include these two lines:
lowercase {0-9}{a-z}æøå
uppercase {0-9}{A-Z}ÆØÅ
- Restart Zebra and reindex all the records, e.g.:
$ sudo koha-restart-zebra <instancename>
$ sudo koha-rebuild-zebra -f -v <instancename>
- Do the search again, make sure you order by title and check that
ÆØÅ are sorted in the order of 1. Æ 2. Ø 3. Å, and after all other
characters
* = If you are on a gitified install, you need to edit this file:
/etc/koha/zebradb/lang_defs/<your ZEBRA_LANGUAGE>/sort-string-utf.chr
NOT the file in your git clone (yeah, I wasted some time there...)
Signed-off-by: Mirko Tietgen <mirko@abunchofthings.net>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>