Commit graph

44 commits

Author SHA1 Message Date
Galen Charlton
990bb17e14 Bug 12112: remove disused routine C4::Breeding::ImportBreeding()
This patch removes the ImportBreeding() routine, which lost its
last caller as of the patch for bug 10462.

To test:

[1] Verify that prove -v t/Breeding.t passes.
[2] Perform a Z39.50 search in the staff interface.
[3] Perform a cataloguing reservoir search in the staff
    interface, verifying that cached records from the search
    done in step 2 are retrieved.

Signed-off-by: Galen Charlton <gmc@esilibrary.com>
Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2014-04-25 15:07:52 +00:00
Fridolyn SOMERS
cac06afeb1 Bug 11254: make reservoir search normalize ISBNs
When importing records, the ISBN is normalized and stored
in the database (see C4::ImportBatch::_add_biblio_fields).  But when
searching the reservoir by ISBN, the term is not normalized
(see C4::Breeding::BreedingSearch), so the search does not match.

This patch adds the same normalization to the reservoir search.  It also
replaces the call to the private method _isbn_cleanup with
GetNormalizedISBN, the correct public method, and allows reservoir
searches on ISBNs with hyphens.

This is intended to fix only reservoir searches.
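For illustration, a minimal Perl sketch of the kind of normalization involved, assuming Business::ISBN is available; it only validates and strips hyphens, so treat it as an approximation of what GetNormalizedISBN does, not the real routine:

    use strict;
    use warnings;
    use Business::ISBN;

    # Make '978-0-691-14289-0' and '9780691142890' compare equal before they
    # are matched against the normalized value stored at import time.
    sub normalize_isbn_for_search {
        my ($term) = @_;
        return unless defined $term;
        my $isbn = Business::ISBN->new($term);
        return unless $isbn && $isbn->is_valid;
        return $isbn->as_string([]);    # same ISBN, no hyphens
    }

    print normalize_isbn_for_search('978-0-691-14289-0'), "\n";   # 9780691142890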

Revised Test plan
-----------------
 1) Back up DB
 2) Save a copy of the attached example MARC file somewhere findable
 3) Home -> Tools -> Stage MARC records for import
 4) Click Browse and select the example MARC file
 5) Click Upload file
 6) Tweak as desired, then click Stage for import
 7) Click Manage staged records
 8) Click Import this batch into the catalog
 9) Home -> Cataloging
10) In the Cataloging search text box type 978-0-691-14289-0 and
     click Submit
    -- ISBN13 with hyphens not found in reservoir
11) In the Cataloging search text box type 9780691142890 and
     click Submit
    -- ISBN13 without hyphens not found in reservoir
12) In the Cataloging search text box type 0-691-14289-0 and
     click Submit
    -- ISBN10 with hyphens not found in reservoir
13) In the Cataloging search text box type 0691142890 and
     click Submit
    -- ISBN10 without hyphens found in reservoir
14) Apply patch
15) Repeat steps 10-13 -- this time the record is always found! :)

Signed-off-by: Mark Tompsett <mtompset@hotmail.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2014-04-19 21:44:30 +00:00
0fc114eee3 Bug 11419: display Z39.50 search errors more completely
When a Z39.50 server can't be searched successfully, the yellow
error box comes up empty.  This patch fixes the problem.
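For background, a hedged sketch of how a connection failure can be caught with the ZOOM Perl API and turned into a message for the template; the variable names here are illustrative, not the patch's own:

    use strict;
    use warnings;
    use ZOOM;

    my ( $host, $port, $db ) = ( 'z3950.example.org', 210, 'fake' );   # made-up target
    my @errors;

    my $conn = eval { ZOOM::Connection->new( $host, $port, databaseName => $db ) };
    if ( my $err = $@ ) {
        # Connection failures surface as ZOOM::Exception objects; render them
        # instead of leaving the alert box empty.
        my $msg = ( ref $err && $err->isa('ZOOM::Exception') ) ? $err->message() : "$err";
        push @errors, { server => $host, error => $msg };
    }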

Test Plan:
1) Go to Administration/z39.50 servers
2) Create a fake z39.50 server with a made up address
3) Go to cataloging, search only that server
4) Note the empty yellow alert box
5) Apply this patch
6) Re-run the search; note that the alert box now has a message in it

Signed-off-by: Nora Blake <nblake@masslibsystem.org>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Passes all tests and QA script.
Works according to test plan.
When one of the selected servers returns results, no dialog
box is shown, either before or after applying the patch.

Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-12-27 00:25:39 +00:00
264de29621 Bug 10096 - (follow-up) various QA improvements
- improve POD
- remove extraneous comments
- correct license statement in new files
- remove backticks in database update SQL

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-10-04 14:29:18 +00:00
1e0b890b0c Bug 10096 - Add a Z39.50 interface for authority searching
This patch introduces a new Z39.50 interface for searching Z39.50
compliant databases for MARC authority records.

These databases aren't as common as their bibliographic equivalents,
but they're out there and very useful. I have included info at the
bottom of this message for sample authority databases you can try.

To test this patch:

1) Set up Z39.50 client targets for authority databases. (I've included
information at the bottom of this message for LibrariesAustralia's
test server for authorities, as well as instructions on how to use
your own Koha's Z39.50 authority server. The Library of Congress
also has authority databases available (unsure if these are test or
prod), and you might have access to others through OCLC or RLIN. OCLC
provides login credentials for their test databases.)

2) Go to the Authorities module

3) Click on the new "Z39.50 search" button.

4) Select your authority search targets from the list.

5) Do a search for an authority you would like using either the "Raw"
input box or the more specific input boxes for names, subjects, subject
subdivisions, or titles. (I like searching Name (personal): Eric on
the LibrariesAustralia test DB.)

6) You should see a table listing the server, heading, authority type,
and two other columns (MARC and a nameless column). "Authority type"
is the type of authority it will become when imported into Koha. In
the Eric example, "PERSO_NAME".

7) Click on "MARC" next to the results of interest to review the MARC
authority record.

8) When you're satisfied with a record, click on "Import".

9) The pop-up window will close and your original Koha window will
change to the "Adding authority Personal Name" screen (in the Eric
example).

10) All the relevant fields should be filled out for the record. Review
them and make any changes as necessary. (N.B. The 001 will be cleared
when saved, so if you have a use for the imported control number, move
it to the 010, 016, or 035 as appropriate. If you have a default value
for the 003, this will also likely be overwritten. Move it if necessary.
The 005 will also be updated when saved, so do not worry about that.)

11) When you're satisfied, click save.

12) Presto! You've imported your first authority record via Z39.50!

--

Here is the info for the LibrariesAustralia test Z39.50 authority
database:

Z39.50 server: LibrariesAustralia Authorities
Hostname: z3950-test.librariesaustralia.nla.gov.au
Port: 210
Database: AuthTraining
Userid: ANLEZ
Password: z39.50
Syntax: MARC21/USMARC
Encoding: utf8

-

The U.S.A. Library of Congress also provides Z39.50 access to its Name
and Subject Authorities (http://www.loc.gov/z3950/lcserver.html).

Name Authority:
Z39.50 server: Library of Congress Name Authority File
Hostname: lx2.loc.gov
Port: 210
Database: NAF
Syntax: MARC21/USMARC
Encoding: utf8

Subject Authority:
Z39.50 server: Library of Congress Subject Authority File
Hostname: lx2.loc.gov
Port: 210
Database: SAF
Syntax: MARC21/USMARC
Encoding: utf8

(N.B. Both of these databases also include title authorities.)

-

For testing purposes, you can also set up a Z39.50 client target,
which points at your own Koha instance's Z39.50 authority server.

To find the hostname, go to /etc/koha-conf.xml and find the value for
the <listen id="authorityserver"> element. Depending on your
configuration, this could be something like the following:

unix:/zebra/koha/var/run/zebradb/authoritysocket

(N.B. You might be using a different scheme than unix sockets...)

To find the database, scroll down to the bottom of koha-conf.xml until
you reach the <config> element. Within this, look for the value of the
element <authorityserver>. It should probably be "authorities".
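Putting those two pieces together, the relevant fragments of koha-conf.xml look roughly like this (paths and values vary per install):

    <listen id="authorityserver">unix:/zebra/koha/var/run/zebradb/authoritysocket</listen>
    ...
    <config>
        ...
        <authorityserver>authorities</authorityserver>
        ...
    </config>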

To set up this Z39.50 client target in Koha...

Z39.50 server: my koha authorities
Hostname: unix:/zebra/koha/var/run/zebradb/authoritysocket
Port:
Database: authorities
Userid:
Password:
Syntax: MARC21/USMARC (or whichever flavour you need)
Encoding: utf8

Signed-off-by: Mason James <mtj@kohaaloha.com>

Bug 10096 [FOLLOW-UP] - Add a z39.50 interface for authority searching

This patch adds the "recordtype" column to the "z3950servers" table.

The value in this column (biblio or authority) then controls whether
the z3950 server shows up in a bibliographic search (through the
Acq and Cataloguing modules) or in an authority search (through
the Authorities module).

I also edited the z3950 management console to show this value
and allow users to edit it. The default value is "biblio", since
the vast majority of z3950 targets will be bibliographic. However,
there is an option to add/edit a z3950 target as a source of
authority records.
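In database terms the change amounts to something like the following, sketched here in the style of updatedatabase.pl; the actual column definition in the patch may differ:

    use strict;
    use warnings;
    use C4::Context;

    # Hypothetical update step: add the column that marks a target as
    # bibliographic or authority, defaulting to 'biblio'.
    my $dbh = C4::Context->dbh;
    $dbh->do(q{
        ALTER TABLE z3950servers
        ADD COLUMN recordtype VARCHAR(45) NOT NULL DEFAULT 'biblio'
    });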

Test Plan:

1) Apply both patches
2) Run updatedatabase.pl (after setting your KOHA_CONF and PERL5LIB
environment variables)
3) Use the test plan from the 1st patch

N.B. Make sure that your Z39.50 client target has a Record Type
of Authority, otherwise it won't display when you're doing a
Z3950 search for authorities.

Signed-off-by: Mason James <mtj@kohaaloha.com>

Bug 10096 [FOLLOW-UP] - fix tabs/whitespace errors to pass QA

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-10-04 14:26:29 +00:00
Galen Charlton
75842c7d62 Bug 10462: (follow-up) remove some undefined variable warning noise
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-07-24 16:55:52 +00:00
4bf04e4239 Bug 10462: QA Followup to resolve LCCN mixup and remove hardcoded marc tags
This patch corrects the mixup for LC call number and control number.

Further, as suggested by Galen, it would be better to not introduce hardcoded
tags in the Z3950Search subs in Breeding.pm.
This patch resolves that by calling TransformMarcToKohaOneField.
Note that this only involves changes to _addrowdata and _isbn_show. These
subs are only used in building the displayed results table.

Additionally, for French UNIMARC installs, publicationyear is used to fill
the Date column (copyrightdate is not used in those installs). The edition
statement is only used in unimarc_lecture_pub, not in unimarc_complet.
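To illustrate the idea of looking the tag up through the framework mappings instead of hardcoding it, here is a sketch using GetMarcFromKohaField; the patch itself calls TransformMarcToKohaOneField, and the mapping name and two-argument signature below are assumptions for that era of Koha:

    use strict;
    use warnings;
    use MARC::Record;
    use MARC::Field;
    use C4::Biblio qw( GetMarcFromKohaField );

    # Toy record standing in for one Z39.50 result.
    my $record = MARC::Record->new();
    $record->append_fields( MARC::Field->new( '010', ' ', ' ', a => '2008011507' ) );

    # Ask the framework which tag/subfield holds the LC control number
    # instead of assuming 010$a (mapping name is an assumption here).
    my ( $tag, $subfield ) = GetMarcFromKohaField( 'biblioitems.lccn', '' );
    my $lccn = $tag ? $record->subfield( $tag, $subfield ) : undef;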

Test plan:
Do some Z3950 searches and look for values in all result columns.
For MARC21 on LOC (and/or others):
  Look for isbn 9780415964845 (check LCCN).
  Look for author Rowling.
For UNIMARC on BNF2 (and/or others):
  On BNF2 look for isbn 2070518426: result contains date and multiple ISBNs.
  Look for title: Guide des candidats aux emplois de commissaire de police.
  The third result shows the edition statement (if you use 205$a with a pub install).
  Note that there are no results with LCCN here (just as before).

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested for MARC21 and UNIMARC (French lecture_pub install).

Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-07-24 16:33:00 +00:00
7e83c7ea38 Bug 10462: Followup for showing multiple ISBNs in Z3950 response
As Jonathan correctly noted, the new Z3950 response only showed one ISBN,
although more ISBNs could be in the record and would be imported.
To resolve this display problem, I now traverse them all in the updated
routine _isbn_show. There is no change in the imported records.
Note that before this patch TransformMarcToKoha put all ISBNs in
one field, separated by pipes (for display only). This behavior is restored
now. The three regexes on the individual ISBNs now seem to be
overkill, but I left them there for completeness.
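A minimal display-side sketch of the idea (not the actual _isbn_show code): collect every ISBN field and join the values with pipes, as TransformMarcToKoha used to do. 020 is the MARC21 ISBN field; UNIMARC keeps it in 010.

    use strict;
    use warnings;
    use MARC::Record;
    use MARC::Field;

    my $record = MARC::Record->new();
    $record->append_fields(
        MARC::Field->new( '020', ' ', ' ', a => '2070518426' ),
        MARC::Field->new( '020', ' ', ' ', a => '9782070518425' ),
    );

    # Gather all ISBNs for the results table, pipe-separated as before.
    my @isbns = grep { defined } map { scalar $_->subfield('a') } $record->field('020');
    my $shown = join ' | ', @isbns;
    print "$shown\n";    # 2070518426 | 9782070518425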

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>
Tested this on a fresh French install under UNIMARC with BNF server.
Tested it too for MARC21.

Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-07-24 16:32:47 +00:00
52dad05b45 Bug 10462: Some optimizations in Z3950 search paving the way for enhancements
Refactors Z3950Search.
Disables batch record counts for Z3950 records.

Test plan:
Do various Z3950 searches on multiple targets from Cataloging and Acquisition.
Behavior should not have changed.

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Galen Charlton <gmc@esilibrary.com>
2013-07-24 16:32:29 +00:00
aff9d00b71 Bug 9986: Two fixes for Z3950 search
Searching on stdid (Standard ID) and srchany (RAW any) somehow did not work
anymore.
Probably my fault :) Note that these two fields are in the Cataloging Z3950
search and not in Acquisition.

Fixing encoding problems: when adding the -utf8 flag for CGI in acqui/z3950 and
cataloging/z3950, the decoding statements in C4/Breeding's Z3950Search should
be removed.
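For reference, a small sketch of the combination described above; the comment spells out why the explicit decode has to go:

    use strict;
    use warnings;
    use CGI qw( -utf8 );    # query parameters arrive as decoded character strings

    my $cgi  = CGI->new;
    my $term = $cgi->param('srchany');    # e.g. "musée", already decoded

    # With -utf8 in place, a further decode('UTF-8', $term) in C4::Breeding's
    # Z3950Search would double-decode and mangle the diacritics, hence its removal.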

Test plan:
Search in Cataloging with:
Standard ID: 9782358670043 on LOC
RAW (any): musee [add an acute accent on the first e]  on LOC  -- add the diacritic!

Search in Acquisition
Somewhere, does not matter, but use a diacritic.

A note: my git version still has a hard time with utf8; I need to upgrade to version 1.7.10 to resolve this.

Signed-off-by: Bernardo Gonzalez Kriegel <bgkriegel@gmail.com>

Comment: Works as described. No errors.
Without the patch a z39.50 search (for example Std ID, or musee) gives no
results; with the patch there are.
No problems in acq search.

Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
Good catch, passes all tests and QA script.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
2013-04-15 08:44:11 -04:00
d85fa9c5fb Bug 9105: Followup for closing Zoom connections
Housekeeping: close the result sets and connections from Z3950 searches.
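In ZOOM terms the housekeeping looks like this sketch (the target details are illustrative):

    use strict;
    use warnings;
    use ZOOM;

    my $conn = ZOOM::Connection->new( 'lx2.loc.gov', 210, databaseName => 'LCDB' );
    my $rs   = $conn->search_pq('@attr 1=4 "breeding"');

    # ... pull the records needed for the results table ...

    $rs->destroy();     # release the result set
    $conn->destroy();   # close the Z39.50 connection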
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Katrin Fischer <Katrin.Fischer.83@web.de>
No regression found, all tests and QA script pass.
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
2013-02-12 08:49:58 -05:00
3f73f9228b Bug 9105: Second housekeeping followup
Removing some unused variables.
Restoring timeout parameter that was only used in cataloging.
Restoring copyrightdate and editionstatement in row data for template.
Small adjustment at the end of the while loop with template vars.

Discovered while doing so that the paging feature needs some further corrections; will propose a patch under a separate report.

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
2012-12-22 16:16:59 -05:00
e201d55c21 Bug 9105: Housekeeping followup
Remove some debug warnings, fix indentation

Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
2012-12-22 16:16:59 -05:00
e9766f6094 Bug 9105: Move Z3950 search code to Breeding module
As a first step in realizing the goals of report 6536 (Z3950 Search improvements), this patch moves identical code in acquisition and cataloging to module level.
A followup deals with formatting.
Note that this patch should not change any behavior.

Signed-off-by: Jonathan Druart <jonathan.druart@biblibre.com>
I did not find any regression

Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
2012-12-22 16:16:59 -05:00
Frédérick Capovilla
37340e3718 Normalize records imported from Z39.50 servers.
Some Z39.50 server may use the MARC-8 encoding, which uses separated
diacritics. By forcing a normalization, all imported records will have
combined diacritics.

Records with separated diacritics might not show up in Zebra searches if
the search terms use accented characters.
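The normalization step amounts to NFC, sketched here with Unicode::Normalize:

    use strict;
    use warnings;
    use Unicode::Normalize qw( NFC );

    my $decomposed = "Pe\x{0301}rez";    # 'e' + COMBINING ACUTE ACCENT, MARC-8 style
    my $combined   = NFC($decomposed);   # "P\x{e9}rez", single precomposed character

    # Running NFC over every imported field means Zebra sees the same form
    # a cataloguer types when searching with accented characters.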

Signed-off-by: Marcel de Rooy <m.de.rooy@rijksmuseum.nl>

http://bugs.koha-community.org/show_bug.cgi?id=8610
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
checked it still works after the patch with UNIMARC and the BNF server
(which provides UTF-8 records)
2012-10-08 18:46:56 +02:00
Mark Tompsett
514b32898e Bug 8350: warning in logs when searching for nonexistent ISBN
Searching for a 10- or 13-digit numeric string that does not exist in
one's catalog failed to build the SQL statement correctly in C4::Breeding.
This patch moves the string substitution, which was triggering an error
when the search term was undefined, and fixes the if statements accordingly.
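The shape of the fix, sketched with hypothetical variable names (the real code in C4::Breeding differs):

    use strict;
    use warnings;

    my $search = '9780691142890';    # hypothetical user input; may also be undef

    # Only run the substitution once we know the term is defined and ISBN-like,
    # instead of letting s/// warn on an undefined value.
    my $isbn;
    if ( defined $search && $search =~ /^[\d-]{10,17}$/ ) {
        ( $isbn = $search ) =~ s/-//g;
    }
    # The SQL WHERE clause is then built only from the values that are defined.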

Signed-off-by: Jared Camins-Esakov <jcamins@cpbibliography.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
2012-10-02 16:58:41 +02:00
Chris Cormack
509d673f10 Bug 7941 : Fix version numbers in modules
Signed-off-by: Kyle M Hall <kyle@bywatersolutions.com>
Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
2012-06-11 17:29:38 +02:00
Srdjan
12ff7355bb bug_7613: OCLC Connexion gateway
svc/import_bib:
* takes a POST request with parameters in the URL and MARC XML as DATA
* pushes the MARC XML to an import batch queue of type 'webservice'
* returns status and the imported record XML
* is a drop-in replacement for svc/new_bib (see the usage sketch below)

misc/cronjobs/import_webservice_batch.pl:
* a cron job for processing import batch queues of type 'webservice'
* batches can also be processed through the UI

misc/bin/connexion_import_daemon.pl:
* a daemon that listens for OCLC Connexion requests and is compliant
  with the OCLC Gateway spec
* takes a request with MARC XML
* takes import batch params from a config file and forwards the lot to
  svc/import_bib
* returns status

ImportBatches:
* Added new import batch type of 'webservice'
* Changed interface to AddImportBatch() - now it takes a hashref
* Replaced batch_type = 'batch' with
  batch_type IN ( 'batch', 'webservice' ) in some SELECTs
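As promised above, a hedged sketch of a client pushing a record to svc/import_bib with LWP; the URL and file name are illustrative, query parameters and session/authentication handling are omitted:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request::Common qw( POST );

    # Read the MARC XML blob to stage (illustrative file name).
    my $marcxml = do { local $/; open my $fh, '<', 'record.xml' or die $!; <$fh> };

    # Batch parameters go in the query string; the record itself is the POST body.
    my $url = 'http://koha.example.org/cgi-bin/koha/svc/import_bib';
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->request( POST $url,
        Content_Type => 'text/xml',
        Content      => $marcxml,
    );
    print $res->decoded_content;    # status plus the imported record XML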

Signed-off-by: MJ Ray <mjr@phonecoop.coop>
2012-04-06 17:26:20 +02:00
0ba9aab76b Fix for Bug 4290 - search for author in repository
Reimplementation of Nahuel's patch from 2010-03-02

Signed-off-by: Paul Poulain <paul.poulain@biblibre.com>
Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
2011-03-06 08:22:06 +13:00
Lars Wirzenius
7279f55b60 Fix FSF address in directory C4/
Signed-off-by: Galen Charlton <gmcharlt@gmail.com>
2010-03-16 20:17:56 -04:00
Galen Charlton
58920538f1 bug 2505: enable warnings in C4::Breeding
* also start to standardize ISBN normalization

Signed-off-by: Chris Cormack <chrisc@catalyst.net.nz>
Signed-off-by: Galen Charlton <galen.charlton@liblime.com>
2009-06-07 20:09:25 -05:00
Galen Charlton
60a98d258a IMPORTANT - refactor MARC character set handling
* IsStringUTF8ish - determine if scalar contains a string in UTF8
* MarcToUTF8Record - convert MARC blob or MARC::Record to UTF8
* SetMarcUnicodeFlag - set appropriate MARC21 or UNIMARC field to
  indicate that record is in UTF-8.

Design points of this module include:

* No dependencies on other C4 modules, making it easier to add
  more test cases
* All character conversion code in one place
* Single entry point for doing a character conversion on a
  MARC record
* Capture of errors and warnings produced by Text::Iconv
  and MARC::Charset
* Start of support for guessing the source character set of
  a MARC record.

Several functions were moved from other scripts
or modules to C4::Charset:

* C4::Koha->FixEncoding (expanded and renamed
  MarcToUTF8Record)
* C4::Koha->char_decode5426
* fMARC8ToUTF8 from bulkmarcimport.pl (renamed
  _marc_marc8_to_utf8)

Several batch jobs were adjusted to use MarcToUTF8Record instead of
FixEncoding.
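A usage sketch of the new entry point, assuming the return values match the description above (the converted record, the guessed source charset, and any conversion errors); the file name is illustrative:

    use strict;
    use warnings;
    use C4::Charset qw( MarcToUTF8Record SetMarcUnicodeFlag );

    # Raw ISO 2709 blob, as a batch job or Z39.50 client would have it.
    open my $fh, '<:raw', 'record.mrc' or die $!;
    my $blob = do { local $/; <$fh> };

    my ( $record, $guessed_charset, @errors ) = MarcToUTF8Record( $blob, 'MARC21' );
    warn "converted from $guessed_charset with " . scalar(@errors) . " warning(s)\n"
        if @errors;

    SetMarcUnicodeFlag( $record, 'MARC21' );    # leader/09 = 'a' for MARC21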

Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2008-02-03 07:23:56 -06:00
Henri-Damien LAURENT
b5a1060788 Improving encoding support for z3950 clients.
Adds an encoding field to the z3950 server information.
Uses Text::Iconv for conversion (ISO6937, ISO_5428 and ISO5427).
For ISO 5426 (ANSEL or MARC-8), a new char_decode5426 based on the marc4j tool.

Not tested on LOC or any USMARC z3950 source, but tested OK on BNF and SUDOC.
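For reference, the Text::Iconv usage this implies, sketched with one of the encodings named above; whether your iconv knows the encoding under exactly that name depends on the platform:

    use strict;
    use warnings;
    use Text::Iconv;

    # Convert a record fetched from a server declared as ISO 6937 into UTF-8.
    my $raw       = do { local $/; <STDIN> };    # raw bytes from the Z39.50 response
    my $converter = Text::Iconv->new( 'ISO6937', 'UTF-8' );
    my $utf8      = $converter->convert($raw);   # returns undef if conversion fails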

Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2008-01-08 19:23:30 -06:00
Joe Atzberger
f59595d92f C4 - BEGIN blocks and 1; __END__ for modules
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2008-01-07 20:02:18 -06:00
Paul POULAIN
317736360b fix related #1340 (z3950 search for quick cataloguing)
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2007-10-30 17:44:21 -05:00
Galen Charlton
853aa657ba batch import rework -- implement stage-commit-undo for batch import
Revamps the import options on the tools menu to have two parts:

[1] Staging (load file into reservoir)
[2] Managing (review the list of staged batches, then
    choose to commit or undo a given batch)

Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2007-10-29 16:47:59 -05:00
Galen Charlton
2e07983367 more work on batch import
* Completely removed old marc_breeding table
* Started updating the Tools import function to stage records

Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2007-10-29 16:47:58 -05:00
Galen Charlton
d37919eab9 improved import batches part 2 -- replace use of marc_breeding
Signed-off-by: Chris Cormack <crc@liblime.com>
Signed-off-by: Joshua Ferraro <jmf@liblime.com>
2007-10-29 16:05:25 -05:00
toins
3787c329d0 Bug fix: sometimes importing a biblio returned no breedingid.
+ remove compilation warnings
+ remove POD error.
2007-07-10 14:34:32 +00:00
tipaul
a481fad4b7 Code cleaning :
== Biblio.pm cleaning (useless) ==
* some sub declaration dropped
* removed modbiblio sub
* removed moditem sub
* removed newitems. It was used only in finishrecieve. Replaced by a Koha2Marc+AddItem, which is better.
* removed MARCkoha2marcItem
* removed MARCdelsubfield declaration
* removed MARCkoha2marcBiblio

== Biblio.pm cleaning (naming conventions) ==
* MARCgettagslib renamed to GetMarcStructure
* MARCgetitems renamed to GetMarcItem
* MARCfind_frameworkcode renamed to GetFrameworkCode
* MARCmarc2koha renamed to TransformMarcToKoha
* MARChtml2marc renamed to TransformHtmlToMarc
* MARChtml2xml renamed to TranformeHtmlToXml
* zebraop renamed to ModZebra

== MARC=OFF ==
* removing MARC=OFF related scripts (in cataloguing directory)
* removed checkitems (function related to the MARC=OFF feature, which is completely broken in HEAD; if someone wants to reintroduce it, hard work is coming...)
* removed getitemsbybiblioitem (used only by MARC=OFF scripts, which are removed as well)
2007-03-29 13:30:31 +00:00
tipaul
2ffd5b7228 rel_3_0 moved to HEAD 2007-03-09 14:28:54 +00:00
tgarip1957
ab45e7aaab Bug fixing and complete removal of Date::Manip 2006-11-06 21:01:43 +00:00
tgarip1957
df5492f167 Field weighting applied to ranked searches. A new facets table in mysql db 2006-10-01 21:48:54 +00:00
tgarip1957
0b5c3f94e7 New XML API
Event & Net::Z3950 dependency removed
HTML::Template::Pro dependency added
2006-09-01 22:16:00 +00:00
tgarip1957
0451359813 New set of routines for HEAD.
Uses completely new ZEBRA indexing.
ZEBRA is now XML and comprises a KOHA meta record. Explanatory notes will be on koha-devel.
Fixes UTF8 problems
Fixes a bug with authorities
Major SQL database changes.
Separate bibliographic and holdings records. Biblioitems table deprecated
etc. etc.
Wait for explanatory document on koha-devel
2006-08-25 21:07:08 +00:00
joshferraro
aa14e3e2d1 Previously there was no check to make sure that an issn number
actually existed; so if there was no isbn, and the issn was blank,
the item would be assigned a random biblionumber and the breeding farm
would report that the item already exists in the catalog (even though
it didn't). This fix adds a check to determine whether the imported
record has an issn before assigning a matching biblionumber.
2005-09-30 18:58:25 +00:00
tipaul
369527637b synch'ing 2.2 and head 2005-05-04 15:39:07 +00:00
tipaul
93e3f0411f fix for "marc upload fail silently" (from MJRay) 2004-12-08 10:11:51 +00:00
tipaul
1c16a1ee91 committing a modification already in RC2 (bug in breeding import) 2004-11-24 15:55:31 +00:00
rangi
bb69d41e7b Fix for bug 761
I'll commit this to the rel-2-0 branch as well
2004-03-24 00:36:07 +00:00
tipaul
98884012cf minor fixes for biblio deletion (still buggy) 2003-10-25 08:46:27 +00:00
tipaul
f2c07bb529 bugfix: biblios without an ISBN were not added to the breeding farm 2003-09-08 08:42:40 +00:00
tipaul
26543b430e really proud of this commit :-)
z3950 search and import seem to work fine.
Let me explain how:
* a "search z3950" button is added in the addbiblio template.
* when clicked, a popup appears and z3950/search.pl is called
* z3950/search.pl calls addz3950search in the DB
* the z3950 daemon retrieves the records and stores them in z3950results AND in the marc_breeding table.
* as long as there are searches pending, the popup auto-refreshes every 2 seconds and says how many searches are pending.
* when the user clicks on a z3950 result => the parent popup is called with the requested biblio, and auto-filled

Note:
* character encoding support: (it's a nightmare...) In the z3950servers table, an "encoding" column has been added. You can put "UNIMARC" or "USMARC" in this column. Depending on this, char_decode in C4::Biblio.pm replaces the MARC character encoding with an ISO 8859-1 encoding. Note that in the breeding import this value has been added too, for better support.
* the marc_breeding and z3950* tables have been modified: they have an encoding column, and the random z3950 number is stored too for convenience => it's the key I use to list only the requested biblios in the popup.
2003-04-29 16:48:25 +00:00
tipaul
a118eeacfc 1st draft for z3950 client import.
Moving the Breeding farm script to a Perl package, C4/Breeding.pm
2003-04-22 12:22:52 +00:00