This patch adds an endpoint for fetching circulation rules given the
constraints of the passed parameters.
We optionally expect item_type, library and patron_category as query
parameters and we return a list of relevant circulation rules pertaining
to that combination of requirements.
You can also add a list of `rules` as a query parameter to limit the
response to only the rules you are interested in for this combination.
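For illustration, a request of this shape would fetch just two rules for a specific combination (the /api/v1/circulation_rules path and the kohadev.local host are assumptions here, not confirmed by this patch; the query parameter names come from the description above):
$ curl -s -u koha:koha --request GET \
  "http://kohadev.local/api/v1/circulation_rules?library=CPL&item_type=BK&patron_category=PT&rules=maxissueqty,issuelength"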
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the CRUD endpoints for stock rotation rotas.
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Test plan:
1) Apply patch, restart plack 'koha-plack --restart kohadev'
2) Visit /api/v1/additional_fields - Notice it's empty
3) Visit /cgi-bin/koha/admin/additional-fields.pl?tablename=aqbasket and add a new additional field
4) Do step 2) again - Notice the newly created additional field is there
5) Visit /cgi-bin/koha/admin/additional-fields.pl?tablename=aqinvoices and add a new additional field for invoices
6) Do step 2) again - Notice both additional fields are there
7) Visit /api/v1/additional_fields?tablename=aqinvoices - Notice only the additional field for aqinvoices is listed
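The API steps can also be run with curl instead of the browser (host and credentials as in the usual k-t-d setup; adjust as needed):
$ curl -s -u koha:koha --request GET "http://kohadev.local/api/v1/additional_fields"
$ curl -s -u koha:koha --request GET "http://kohadev.local/api/v1/additional_fields?tablename=aqinvoices"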
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the endpoint needed to queue an import_from_kbart_file background job
Signed-off-by: Clemens Tubach <clemens.tubach@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Test plan, k-t-d:
1) Access /api/v1/patron_categories
2) Verify the patron categories are correctly listed
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>
Bug 26297: (QA follow-up) Move to REST::V1::Patrons::Categories
Bug 26297: (QA follow-up) Use search_with_library_limits
JD amended-patch: squashed + tidy
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds a new endpoint for fetching checkouts from a specific
patron.
Test plan:
1. Apply this patch
2. Run:
$ ktd --shell
k$ prove t/db_dependent/api/v1/patrons_checkouts.t
=> SUCCESS: Tests pass!
3. Run:
$ curl -v -s -u koha:koha --request GET \
http://kohadev.local/api/v1/patrons/{id}/checkouts
test with query parameters (they are the same as for /patrons/{id}/holds)
=> SUCCESS: The API works!
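For example, step 3 with query parameters (reserved parameters such as _per_page and _order_by behave as on the holds endpoint; the checkout_date field name is an assumption):
$ curl -v -s -u koha:koha --request GET \
  "http://kohadev.local/api/v1/patrons/{id}/checkouts?_per_page=5&_order_by=-checkout_date"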
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds support for setting the record source on the API. It
does so by adding support for a new header `x-record-source-id`.
Setting the record source is restricted to patrons with the
`set_record_sources` permission.
A 403 error is returned on an attempt to set it without the correct
permissions.
The feature is documented in the spec.
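As an illustrative request (the biblios POST endpoint, content type and payload are only an example; the header name is the part added by this patch):
$ curl -s -u koha:koha --request POST \
  --header "Content-Type: application/marcxml+xml" \
  --header "x-record-source-id: 1" \
  --data-binary @record.xml \
  "http://kohadev.local/api/v1/biblios"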
To test:
1. Apply this patch
2. Run:
$ ktd --shell
k$ prove t/db_dependent/api/v1/biblios.t
=> SUCCESS: Tests pass! Tests cover the right use cases!
3. Play with Postman (or similar)
4. Sign off :-D
Sponsored-by: ByWater Solutions
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Lucas Gass <lucas@bywatersolutions.com>
Signed-off-by: Arthur Suzuki <arthur.suzuki@biblibre.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the mentioned route. For the task it:
* Adds Koha::Cash::Register->to_api_mapping with trivial mappings
* Adds a cash_register object definition on the API spec
* Adds a controller to handle requests
* Adds tests for the new endpoint
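A quick manual check could look like this (the /api/v1/cash_registers path is assumed from the object name, not spelled out above):
$ curl -s -u koha:koha --request GET "http://kohadev.local/api/v1/cash_registers"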
To test:
1. Apply this patch
2. Run:
$ ktd --shell
k$ qa
=> SUCCESS: All green! Tests pass!
3. Play with Postman!
4. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch adds the mentioned endpoint. To do so, it does the following:
* Add Koha::Desk->to_api_mapping
* Add desk.yaml with the correct data structure for desks
* Add the route to the spec
* Add tests
Note: Lucas and I had doubts about the right return value for when the feature is disabled.
I opted for returning 404 with a message stating that the feature is disabled. This can be discussed.
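A minimal manual check, assuming the route is /api/v1/desks (inferred from desk.yaml, not stated explicitly above):
$ curl -s -u koha:koha --request GET "http://kohadev.local/api/v1/desks"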
To test:
1. Apply the patches
2. Run:
$ ktd
k$ qa
=> SUCCESS: All green, all tests pass!
3. Play with this using Postman.
4. Sign off :-D
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch introduces endpoints for managing record sources. This is
done on top of Koha::RecordSource(s) following the current coding style.
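A minimal manual check, assuming the route name matches the test file (record_sources):
$ curl -s -u koha:koha --request GET "http://kohadev.local/api/v1/record_sources"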
To test:
1. Apply this patch
2. Run:
$ ktd --shell
k$ prove t/db_dependent/api/v1/record_sources.t
=> SUCCESS: Tests pass!
3. Sign off :-D
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Matt Blenkinsop <matt.blenkinsop@ptfs-europe.com>
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This enhancement adds a REST API endpoint to list a patron's recalls:
/api/v1/patrons/{patron_id}/recalls
This depends on the logged-in patron having the manage_recalls subpermission.
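For a quick check from the command line (basic auth shown for illustration; swap in your URL and the patron's borrowernumber):
$ curl -s -u koha:koha --request GET \
  "https://your-opac-url/api/v1/patrons/{patron_id}/recalls"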
To test:
1. Log in to the staff interface as your superlibrarian self (Patron A)
2. Go to Koha Administration -> Global system preferences. Enable the UseRecalls system preference
3. Set the relevant recalls circulation and fines rules
4. Search for an item (Item A)
5. Check out Item A to yourself (Patron A)
6. Log in to the OPAC as Patron B, a patron who does not have the manage_recalls permission
7. Search for Item A and request a recall
8. While still logged in to the OPAC as Patron B, hit this URL: https://your-opac-url/api/v1/patrons/patron-b-borrowernumber/recalls (swap out your URL and Patron B's borrowernumber)
9. Confirm you are given an error: "Authorization failure. Missing required permission(s)."
10. Log out of the OPAC and log back in, this time as Patron A
11. Hit the URL again https://your-opac-url/api/v1/patrons/patron-b-borrowernumber/recalls
12. Confirm you are able to view a list of Patron B's recalls
13. Confirm tests pass: t/db_dependent/api/v1/patrons_recalls.t
Sponsored-by: Auckland University of Technology
PA amended: QA follow-up: tidy
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch makes it possible to merge bibliographic records + attached items, subscriptions etc. via the API, as an alternative to the web interface: cgi-bin/koha/cataloguing/merge.pl
This is a slightly improved version of Zeno's patch: I (domm) have converted the code in Koha::Biblio to a more DBICy style and packed it into a transaction (as requested in Comment 23)
Even the QA script is happy now!
To test:
1) You need an API user with the "editcatalogue" permission
2) Two records: one to be merged into (with biblio_id, e.g. 262) and another one from
which to merge (with biblio_id_to_merge, e.g. 9), which will be deleted!
Both records may/should have items, subscriptions, subscription history, serials, suggestions,
orders and holds
3) check both records via the web
4) Apply patch
5) Write a JSON file containing the field 'biblio_id_to_merge' and the biblionumber from which to merge.
For example:
{
"biblio_id_to_merge" : 9
}
6) Execute an API call with correct headers and location. For example:
curl -s -u koha:koha --header "Content-Type: application/json" --header "Accept: application/marc-in-json"
--request POST "http://127.0.0.1:8080/api/v1/biblios/262/merge" -d @file.json
You must set up the headers and use a JSON file with the parameters
7) The record with id 9 is now deleted, the record with id 262 has all items etc. attached,
and the response is: return code 200 and the changed record 262 in marc-in-json format
8) It is possible to override the biblio data with an external bib record. You need to put the external bib record
into the JSON file in marc-in-json format. Use the JSON file uploaded as an example.
You need to fill in the fields 'rules' and 'datarecord'. The field 'rules' must contain 'override_ext'
To do the call:
curl -s -u koha:koha --header "Content-Type: application/json" --header "Accept: application/marc-in-json"
--request POST "http://127.0.0.1:8080/api/v1/biblios/XXX/merge" -d @file_with_recod.json
9) The record in 'biblio_id_to_merge' is now deleted, biblio XXX now contains the bibliographic data
from the 'datarecord' field of the JSON file, and the response is: return code 200 and the changed record XXX in marc-in-json format
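A minimal sketch of such a file (whether 'rules' is a single string or a list, and the full marc-in-json shape of 'datarecord', should be taken from the example file attached in Bugzilla):
{
  "biblio_id_to_merge" : 9,
  "rules" : "override_ext",
  "datarecord" : { "leader" : "...", "fields" : [ ] }
}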
10) Go into the intranet and do a search. Select two or (better) more records.
11) Merge them; the merge must succeed.
12) Test with prove -v t/db_dependent/Koha/Biblio.t
13) Test with prove -v t/db_dependent/api/v1/biblios.t
To test step 8 with curl you can customize the JSON file attached in Bugzilla.
The marc-in-json record inside follows the MARC21 standard
Sponsored-by: Technische Hochschule Wildau
Co-authored-by: Zeno Tajoli <ztajoli@gmail.com>
Co-authored-by: Thomas Klausner <domm@plix.at>
Co-authored-by: Mark Hofstetter <mark@hofstetter.at>
Signed-off-by: Jan Kissig <jkissig@th-wildau.de>
Bug 33036: Update of test number.
File ../biblios.t was updated with a new subtest.
So we need this update to have an 'OK' after the test run.
Signed-off-by: Jonathan Druart <jonathan.druart@bugs.koha-community.org>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
This patch fixes the tags and also adds the tags to the swagger.yaml
file to allow the endpoints to be documented correctly.
One endpoint has also been deleted as it is no longer required.
Test plan:
Check the attached files to see that all tags are now prefixed with
'erm_' and that the swagger file now includes an entry for all of these
files
Signed-off-by: Owen Leonard <oleonard@myacpl.org>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch adds new Koha::Object based classes for bookings logic and
adds API controllers to expose the new bookings data via the REST API.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Janet McGowan <janet.mcgowan@ptfs-europe.com>
Signed-off-by: Caroline Cyr La Rose <caroline.cyr-la-rose@inlibro.com>
Signed-off-by: Laurence Rault <laurence.rault@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Test plan:
* Enable the system preference RESTBasicAuth
* curl -s --request GET http://kohadev-intra.mydnsname.org:8081/api/v1/itemtypes
should give 401 Unauthorized
* curl -s -u koha:koha --request GET http://kohadev-intra.mydnsname.org:8081/api/v1/itemtypes
should produce JSON-list of itemtypes
* curl -s -u koha:koha --header "x-koha-embed: translated_descriptions" --request GET http://kohadev-intra.mydnsname.org:8081/api/v1/itemtypes
should include the field translated_descriptions containing the translated descriptions, if any
Signed-off-by: David Nind <david@davidnind.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
[EDIT] perltidy -b t/db_dependent/api/v1/itemtypes.t # Resolve bad score of 44
[EDIT] chmod 755 t/db_dependent/api/v1/itemtypes.t
[EDIT] perltidy -b Koha/REST/V1/ItemTypes.pm
Lesson: Please run qa tools yourself and adjust accordingly?
Edit (tcohen): I restored the item_type_translated_description.yaml file
as the entire API was broken because of the lack of it.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Rather than fetching the counter files and embedding the counter logs, we now add a foreign key to the data provider in the counter logs table and fetch them directly.
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
The counter registry offers an API that provides SUSHI information for each provider. This will be incredibly useful for creating new providers, as we can use the correct information from the registry for harvesting URLs etc. This reduces the risk of user input errors when creating providers and gives greater reliability to the data required to successfully harvest from the SUSHI API.
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
ErmSushiHarvester background job will now be initiated by either
enqueue_counter_file_processing_job or by enqueue_sushi_harvest_jobs
from Koha/ERM/UsageDataProvider.pm, the former if triggered by a manual
file upload, the latter if by the 'run now' button or by the cron script.
This commit also includes some rewording/refactoring, namely:
- COUNTER file validation now happens in the API, before enqueuing the job.
- Removal of no longer used POST /erm/counter_files endpoint
- Koha/ERM/UsageDataProvider.pm:
-- run method is now enqueue_sushi_harvest_jobs
-- new enqueue_counter_file_processing_job method
-- harvest method is now harvest_sushi
-- new set_background_job_callbacks method to set the background job callbacks
- REST/V1/ERM/UsageDataProviders
-- run method is now process_SUSHI_response
-- new process_COUNTER_file endpoint
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Needed to display lists in the provider tabs in the UI
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Now that harvesting is possible for platforms, databases and items, we need to be able to generate reports for all of these data types. Currently the reporting backend structure is very geared towards titles. Rather than copying this for each different data type, this patch abstracts the code to accept the data type as a URL parameter and uses that to generate a report for the given data type.
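As a purely hypothetical illustration of the idea (the real route and parameter names are defined by the patch and not shown here):
# hypothetical URL: the data type travels with the request instead of being hard-coded to titles
$ curl -s -u koha:koha --request GET \
  "http://kohadev.local/api/v1/erm/usage_data_providers/{id}/report?data_type=platform"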
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Add the option to have a report by provider that rolls all usage up into one top-level figure, to see how often that provider is being used in a given period.
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This commit is a squash of the following:
SUSHI harvesting process in the data providers class:
* Builds the URL query and requests the SUSHI service endpoint
* Parses the JSON response and builds the csv COUNTER file and adds it to counter_files table
Usage statistics data processing:
* When a counter_files entry is stored, CounterFile.pm will:
* Parse the csv COUNTER file and
* Add a usage_titles entry for each unique title in the COUNTER file
* Add the title's respective erm_usage_mus (monthly usage) entries, repeating for each metric_type
* Add the title's respective erm_usage_yus (yearly usage) entries, repeating for each metric_type
Harvesting cronjob;
'Run now':
* API endpoint to start the harvesting process of a data provider
* Button in the data providers list to run the harvesting process for each data provider when clicked
ERM SUSHI: Background job
Job progress is updated to total amount of usage titles after retrieving
the response from SUSHI;
Job warning and success messages are added accordingly
Redundant duplicate titles will not be added
Redundant duplicate monthly and yearly usage statistics will not be added
Data provider harvest background job harvests once per report_type
Enqueue one background job for each report_type in the usage data provider
Update the way we measure progress in the background job.
It now uses the COUNTER report body rows instead of SUSHI response results.
We're now incrementing and showing the number of skipped mus, skipped
yus, added mus and added yus
There's a bug in the way we calculate yus
Updates to background job progress bar - Depends on 34468
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This retrieves the sysprefs instead of using the svc script. See bug 33606.
Signed-off-by: Laurence Rault <laurence.rault@biblibre.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
- Adds 'batch' accessor to Illrequest object
- New illbatches and illbatch_statuses tables
- New foreign key 'batch_id' in illrequests table
- Atomic update file
- Default illbatch_statuses
- Add 'add' ill_requests api method
- Add POST method in ill_requests path
- Add 'batch_id' property to ill_request api definition
- Updated swagger.yml with new batches and batchstatuses endpoints
Co-authored-by: Andrew Isherwood <andrew.isherwood@ptfs-europe.com>
Signed-off-by: Edith Speller <Edith.Speller@ukhsa.gov.uk>
Signed-off-by: Katrin Fischer <katrin.fischer@bsz-bw.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Fixes the heading and sidebar display of the 'Import batches'
section.
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Edit: I removed the wrongly introduced import_batches.yaml file
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Katrin Fischer <katrin.fischer.83@web.de>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch aims to make our API docs be more consistent.
It addresses two particular things:
* There's no consistency on the `tags` used across the spec, and not all
of them are correctly described and have an `x-displayName` entry.
More on this later.
* These are not sorted either by some form of grouping, or at least
alphabetically.
For the former, I did my best trying to harmonize (especially on the ERM
front) with what we do in the rest of the use cases.
For the latter, I opted for sorting everything alphabetically, as a
first step. Hoping someone else could work on grouping things.
To test (ON YOUR HOST MACHINE):
1. On current master run:
$ cd api/v1/swagger
$ docker run --rm -v $(pwd):/api --workdir /api redocly/cli \
build-docs swagger.yaml --output index.html
=> SUCCESS: It doesn't break or anything
2. Open your browser, open the generated api/v1/swagger/index.html file
=> FAIL: The left column has
* several lower case entries
* not everything is correctly grouped (ERM? packages?)
* Things are not sorted. There's an attempt but looks messy
3. Apply this patch
4. Repeat 1 and 2
=> SUCCESS: Things look much better!
5. Sign off :-D
CAVEAT1: I'm not sure why, but import_batches doesn't work. Ideas are
welcome, I'll keep looking for fixes.
CAVEAT2: I don't have enough eHoldings background to weight in, but I
feel like 'ERM eHoldings packages' could just be 'ERM packages'.
Follow-up patches with better ideas are welcome.
CAVEAT3: Patron credits, debits, balance... They could all go in to
'Patrons accounts' or similar. Open to ideas.
CAVEAT4: Old redocly didn't support mapping an endpoint to more than one
target section. Something to explore if we want (for example) to reach
'credits' through the 'Patrons' section but also from 'Accounting'.
Signed-off-by: Martin Renvoize <martin.renvoize@ptfs-europe.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
Signed-off-by: Jonathan Field <jonathan.fieeld@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This patch expands the checkouts endpoints to allow for a public workflow.
We add the availability endpoint under `/public/checkouts/availability`
and restrict the information we send back to only those fields a public
user should be allowed to see.
We also add a new checkout endpoint at `/patrons/{patron_id}/checkouts`
that allows users to check out items to themselves; it accepts the same POST
request that the staff client endpoints accept, with checkout details
including item_id and a confirmation token in the body.
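Illustrative requests (the availability query parameters and the confirmation token field name are assumptions; only item_id and the general flow come from this patch):
$ curl -s -u koha:koha --request GET \
  "http://kohadev.local/api/v1/public/checkouts/availability?item_id=123&patron_id=42"
$ curl -s -u koha:koha --header "Content-Type: application/json" --request POST \
  --data '{"item_id": 123, "confirmation_token": "<token from the availability response>"}' \
  "http://kohadev.local/api/v1/patrons/{patron_id}/checkouts"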
Signed-off-by: Silvia Meakins <smeakins@eso.org>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
This could be extended later in bug 32968 to pass the permission of the
logged in user.
Signed-off-by: Pedro Amorim <pedro.amorim@ptfs-europe.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
At some point in the patch series we lost the availability api schema.
This patch restores a basic version, but we should work towards a
clearer enum based schema for each of the available blockers, confirms
and warnings.
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>