Pedro Amorim
2951edc697
This commit is a squash of the following:

SUSHI harvesting process in the data providers class:
* Builds the URL query and requests the SUSHI service endpoint
* Parses the JSON response, builds the CSV COUNTER file, and adds it to the counter_files table

Usage statistics data processing:
* When a counter_files entry is stored, CounterFile.pm will:
  * Parse the CSV COUNTER file
  * Add a usage_titles entry for each unique title in the COUNTER file
  * Add the title's respective erm_usage_mus (monthly usage) entries, repeating for each metric_type
  * Add the title's respective erm_usage_yus (yearly usage) entries, repeating for each metric_type

Harvesting cronjob; 'Run now':
* API endpoint to start the harvesting process of a data provider
* Button in the data providers list to run the harvesting process for each data provider when clicked

ERM SUSHI: Background job
* Job progress is updated to the total number of usage titles after retrieving the response from SUSHI; job warning and success messages are added accordingly
* Redundant duplicate titles will not be added
* Redundant duplicate monthly and yearly usage statistics will not be added

Data provider harvest background job harvests once per report_type:
* Enqueue one background job for each report_type in the usage data provider
* Update the way we measure progress in the background job: it now uses the COUNTER report body rows instead of the SUSHI response results
* We now increment and show the number of skipped mus, skipped yus, added mus, and added yus
* There's a bug in the way we calculate yus

Updates to the background job progress bar

Depends on 34468

Signed-off-by: Jessica Zairo <jzairo@bywatersolutions.com>
Signed-off-by: Michaela Sieber <michaela.sieber@kit.edu>
Signed-off-by: Nick Clemens <nick@bywatersolutions.com>
Signed-off-by: Tomas Cohen Arazi <tomascohen@theke.io>
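As an illustration of the flow this commit describes, here is a minimal sketch in Python (not Koha's actual Perl implementation). The endpoint shape follows the COUNTER 5 SUSHI convention (`/reports/{report_type}` with `customer_id`/`begin_date`/`end_date` parameters); the JSON field names are simplified assumptions, and the yearly rollup with duplicate skipping only mirrors the mus/yus deduplication rule in spirit:

```python
# Illustrative sketch, NOT Koha's code: build a SUSHI request URL, flatten a
# (simplified) SUSHI JSON report into COUNTER-style CSV rows, and roll monthly
# usage (mus) up into yearly usage (yus) while skipping duplicate rows.
import csv
import io
from urllib.parse import urlencode


def build_sushi_url(base_url, report_type, customer_id, begin_date, end_date):
    """Build the query URL for a SUSHI service endpoint."""
    query = urlencode({
        "customer_id": customer_id,
        "begin_date": begin_date,
        "end_date": end_date,
    })
    return f"{base_url}/reports/{report_type}?{query}"


def sushi_json_to_csv(report):
    """Flatten a simplified SUSHI JSON body into CSV rows:
    one row per (title, metric_type, month)."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Title", "Metric_Type", "Month", "Count"])
    for item in report["Report_Items"]:
        title = item["Title"]
        for perf in item["Performance"]:
            month = perf["Period"]["Begin_Date"][:7]  # keep YYYY-MM
            for inst in perf["Instance"]:
                writer.writerow([title, inst["Metric_Type"], month, inst["Count"]])
    return out.getvalue()


def yearly_totals(rows):
    """Aggregate monthly rows into yearly totals per (title, metric_type, year),
    skipping exact duplicate monthly entries rather than double-counting them."""
    seen = set()
    totals = {}
    for title, metric, month, count in rows:
        key = (title, metric, month)
        if key in seen:  # redundant duplicate monthly entry: skip
            continue
        seen.add(key)
        year_key = (title, metric, month[:4])
        totals[year_key] = totals.get(year_key, 0) + count
    return totals
```

In the real feature the CSV ends up in the counter_files table and the aggregates in erm_usage_mus/erm_usage_yus; this sketch only shows the shape of the transformation.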