
Locally-stored metadata records for eResources

Ann-Marie
2 Oct '17

Related to both the MM and RM SIGs.

We have been talking about eBooks and eJournals as part of structuring some acquisitions work. Libraries differ widely in whether they bring cataloging records for electronic resources into a local library catalog, in addition to activating those titles in an external KB for discovery purposes. Some libraries will have local cataloging records (usually MARC at this stage) for everything. Some will have them only for e-titles not represented in the KB. Some will have them for e-titles ordered singly rather than in packages. Some will have them for individual eBooks, but not eJournals or ePackages.

This will obviously need lots more discussion in the various SIGs, but here are some (hopefully) quick questions that may help us in the meantime.

  1. For libraries that do have at least some local cataloging records for eContent, why do you have those in addition to, or instead of, the KB records? Is it:
  • To facilitate acquisitions processes (e.g. I have to have a local bib record to attach an order to, or that I order on a vendor site and then use the MARC record to automatically create the bib and order in my local ILS)

  • Because the local cataloging metadata is richer than the KB metadata, or complements the KB metadata by providing more or different metadata than the KB

  • Because some eContent is not represented in the KB (perhaps very new purchases, or streaming audio/video, or locally produced eContent)

  • Because all resources a library owns or has access to should have a local cataloging record

  • For other reasons?

  2. What kinds of eContent bibliographic metadata are stored in your local catalog, what is only managed by your discovery KB, and why?

  3. If you have some metadata that is duplicated between your local catalog and your KB, what kinds of benefits does that provide? What kinds of problems does that cause?

  4. Would you prefer to store eContent bibliographic metadata locally, rely solely on external KB(s), or a combination of both?

  5. If you would prefer NOT to store any eContent bibliographic metadata locally, what enhancements or changes to the KB or to your acq/cat/discovery workflow practices would be required to allow for that?

  6. Any other relevant thoughts that should be taken into account for acquisitions and/or cataloging purposes?

Thank you!

Ann-Marie

VirginiaMartin
2 Oct '17

I’ve answered your question 1 for Duke, from my perspective as Head, Continuing Resource Acquisitions.

We use local econtent cataloging records for:

  • Facilitating acquisitions processes (must attach orders to bibs, so need to pull in at least a brief bib and suppress it for any eorder).
  • Because not all econtent is in the knowledgebase. We use Serials Solutions, which has ebooks, journals, and some other types of serials (e.g. monographic series, newspapers), but not title-level records for databases, streaming media, datasets, etc.

Questions 2-6 are perhaps better answered by Duke’s cataloging and Metadata and Discovery Strategy staff, though I think some of the answers can be inferred from my answer to #1 above.

LauraW
2 Oct '17
  1. For libraries that do have at least some local cataloging records for eContent, why do you have those in addition to, or instead of, the KB records? Is it:
    • To facilitate acquisitions processes (e.g. I have to have a local bib record to attach an order to, or that I order on a vendor site and then use the MARC record to automatically create the bib and order in my local ILS)
    • Because the local cataloging metadata is richer than the KB metadata, or complements the KB metadata by providing more or different metadata than the KB
    • Because some eContent is not represented in the KB (perhaps very new purchases, or streaming audio/video, or locally produced eContent)
    • Because all resources a library owns or has access to should have a local cataloging record
    • For other reasons?
    All of the above, but especially this other: we want to have local metadata we can control for any content we actually own (not necessarily everything we have access to).
  2. What kinds of eContent bibliographic metadata are stored in your local catalog, what is only managed by your discovery KB, and why?
    Stored in local catalog: everything we actually own, much of what we subscribe to, depending on record quality and availability.
    Only managed by Discovery: some freely available resources
    We also currently have some records in the local catalog that are not displayed in our WebPac but are sent to our Discovery service (some are for databases, because our ERM records are not MARC and cannot be ingested into our Discovery service’s index; others are poor-quality records for streaming resources that have decent contents notes)
  3. If you have some metadata that is duplicated between your local catalog and your KB, what kinds of benefits does that provide? What kinds of problems does that cause?
    The KB index sometimes includes full text when our local records do not. Sometimes local records/KB records don’t match/merge and we end up with multiple records for the same item displaying, but this is relatively infrequent.
  4. Would you prefer to store eContent bibliographic metadata locally, rely solely on external KB(s), or a combination of both?
    I would prefer a combination – as stated above, I prefer to have local metadata for anything we own permanently.
  5. If you would prefer NOT to store any eContent bibliographic metadata locally, what enhancements or changes to the KB or to your acq/cat/discovery workflow practices would be required to allow for that?
  6. Any other relevant thoughts that should be taken into account for acquisitions and/or cataloging purposes?
    I wish our E-Resources Management tools played better with our Discovery tools. I feel like we are creating a lot of only somewhat effective workarounds. Some of my concerns would be addressed by allowing a lot more customization of the indexing for discovery.
kmarti
4 Oct '17

Before answering Ann-Marie’s questions, I’d like to give a little background on what we do at Chicago:
We have VuFind as an interface to the traditional “catalog.” It takes in records from the ILS, from a HathiTrust “pile of records,” and brief records from SFX (not the MARC record service). It uses APIs to bring in HathiTrust links and SFX links. We also have the EDS discovery layer for article-based content. We used to load the catalog in there, but stopped; reviews of user search logs revealed that people don’t go there for books. Finally, we have two Z39.50 consortial catalogs that pull in the content from our ILS. We’d like to move to a bento-box approach to search.

Meanwhile, we do catalog with full bib records for our purchased e-resources, and selected free or open access content, as requested. Serials are generally cataloged individually if we have a subscription to them, but we do not create local catalog records for titles in aggregator databases (e.g., ProQuest or EBSCO databases). For monographs, we receive records as part of the acquisitions process (in the case of title-by-title purchase), or load sets of records from the vendor or through OCLC. There are some cases where we catalog individually.

First, where this is feasible, there is a desire to have full catalog records for discovery purposes. There is also a belief that this kind of title-level access is appropriate in the catalog. While we do pull in brief records from SFX, they are just that – brief; we rely on them for title-level access to serials in aggregator databases. Although ebooks are in SFX, we do not pull in brief ebook records: they are covered with more comprehensive descriptive information in the catalog, and SFX is incomplete – it has no coverage of streaming media. We do want the KB holdings data in the catalog, because it tends to be richer than what we are providing in local holdings.

Second, we do frequently need some sort of record for acquisitions purposes. For title-by-title purchases, this is required. For packages, we could technically not have the individual records, although we do currently need a package-level record for acquisitions purposes. If there were a way to attach a PO to something other than a bib, we would do this.

We have had conversations about “what resources belong in a catalog” and “what resources belong in a discovery system.” Generally, what we want in the catalog:

  • Title level representations for items we have purchased
  • Freely accessible content that has been deemed in scope
  • Locally produced content
  • NOT article-level content.

Practical considerations, like being unable to keep up with changes to aggregator database title lists, mean we rely on records straight from the KB, as opposed to managing them locally. Similarly, we rely on the HathiTrust “pile of records” being indexed directly within VuFind rather than loaded into the ILS.

[quote=“Ann-Marie, post:1, topic:1280”]
If you have some metadata that is duplicated between your local catalog and your KB, what kinds of benefits does that provide? What kinds of problems does that cause?
[/quote]
Right now for e-journals, we have brief records in the catalog from the KB, and fuller catalog records for many of the same titles. At one point, we tried suppressing the links from the e-holdings within the ILS, but found that while there was significant duplication between KB holdings and OLE holdings, there were a number of OLE holdings not in SFX. Longer term, we’d like to be able to match the OLE records against the SFX records and merge the two.
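
(For illustration only – a rough Python sketch of that match-and-merge step. It assumes each system can export records as plain dicts with "issn", "urls", and "coverage" fields; those field names are invented and are not anything OLE or SFX actually provides.)

```python
# Illustrative sketch only: neither OLE nor SFX exposes this exact structure.
# Assumes each exported record is a dict with hypothetical "issn", "urls",
# and "coverage" keys.

def merge_by_issn(ole_records, sfx_records):
    """Pair OLE bibs with SFX KB entries on normalized ISSN and merge them."""
    sfx_by_issn = {}
    for rec in sfx_records:
        issn = (rec.get("issn") or "").replace("-", "").upper()
        if issn:
            sfx_by_issn[issn] = rec

    merged, unmatched = [], []
    for ole in ole_records:
        issn = (ole.get("issn") or "").replace("-", "").upper()
        sfx = sfx_by_issn.get(issn)
        if sfx is None:
            unmatched.append(ole)          # OLE holdings with no SFX counterpart
            continue
        combined = dict(ole)               # keep the fuller local description
        combined["urls"] = sorted(set(ole.get("urls", []) + sfx.get("urls", [])))
        combined["kb_coverage"] = sfx.get("coverage")  # richer KB holdings data
        merged.append(combined)
    return merged, unmatched


# Toy example:
ole = [{"issn": "0028-0836", "title": "Nature", "urls": ["https://example.org/a"]}]
sfx = [{"issn": "0028-0836", "coverage": "1990-", "urls": ["https://example.org/b"]}]
print(merge_by_issn(ole, sfx))
```

Real matching would of course also need fallbacks (ISBN, normalized title) for records without ISSNs.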

I could see advantages to relying on external systems, but there are some caveats:

  1. We need something within the system that is linked to the acquisitions process.
  2. External systems need to have comprehensive description and access.
  3. If there are problems in the data in the external system, we need to be able to correct it.

It would be wonderful to have a single system of record for holdings information, covering both access information and perpetual access rights. Right now, we are doing double work. It is also helpful to be able to attach POs to the holdings level, to specify not only what title is being paid for, but what title on what platform.
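
(Again purely illustrative – a rough sketch of what a single holdings-level record of that kind might carry. The class and field names are invented; this is not FOLIO’s or any existing ILS’s data model.)

```python
# Hypothetical sketch of a single "place of record" for an e-holding; none of
# these names come from FOLIO or any existing ILS schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PurchaseOrder:
    po_number: str
    fund_code: str
    amount: float

@dataclass
class EHolding:
    title: str
    platform: str                    # what title on what platform
    access_url: str
    coverage: str                    # current access, e.g. "2000-present"
    perpetual_access: bool           # does entitlement survive cancellation?
    perpetual_coverage: Optional[str] = None
    purchase_orders: List[PurchaseOrder] = field(default_factory=list)

# A PO attached at the holdings (title-on-platform) level, not just the bib:
holding = EHolding(
    title="Journal of Examples",
    platform="PublisherDirect",
    access_url="https://example.org/joe",
    coverage="2000-present",
    perpetual_access=True,
    perpetual_coverage="2000-2017",
)
holding.purchase_orders.append(PurchaseOrder("PO-1234", "SERIALS", 1500.00))
```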

We would need some way to indicate titles that are managed by the library, and have that system interface with FOLIO or be reliable enough and comprehensive enough to cover all e-resources (the SFX KB can’t do that alone right now).

Chicago may be in a more unusual situation, in that we don’t use a record service for our e-serials and are still relying on individual cataloging.

peter
5 Oct '17

I think there is a case to be made for libraries having records in their systems for things that they have purchased. There was a recent thread on the LIBLICENSE-L mailing list reporting a library losing access to 44 years’ worth of backfile content (start of thread, thread continuation 1 – including a response from the publisher, thread continuation 2). It seems to me that – particularly with electronic access – a library needs to be able to audit what it has purchased, ideally in an automated way. That can only occur if the record of what was purchased is held in the library’s system and not just in external KBs.
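
(A rough sketch of what such an automated audit could look like, assuming the library can export its own entitlement list and the KB’s current holdings as CSV files. The file names and columns below are invented for illustration.)

```python
# Sketch of an automated entitlement audit; "entitlements.csv", "kb_export.csv"
# and the columns ("title_id", "coverage_start", "coverage_end") are invented.
import csv

def load_coverage(path):
    """Map title_id -> (start_year, end_year) from a CSV export."""
    coverage = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            coverage[row["title_id"]] = (int(row["coverage_start"]),
                                         int(row["coverage_end"]))
    return coverage

def audit(entitlements_csv, kb_export_csv):
    """Flag purchased coverage that the KB no longer shows as accessible."""
    purchased = load_coverage(entitlements_csv)    # what the library bought
    accessible = load_coverage(kb_export_csv)      # what the KB currently exposes
    problems = []
    for title_id, (p_start, p_end) in purchased.items():
        current = accessible.get(title_id)
        if current is None or current[0] > p_start or current[1] < p_end:
            problems.append((title_id, (p_start, p_end), current))
    return problems

if __name__ == "__main__":
    for title_id, bought, current in audit("entitlements.csv", "kb_export.csv"):
        print(f"{title_id}: purchased {bought}, KB now shows {current}")
```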

Is there a question of fiduciary responsibility to have the records controlled/facilitated by the library that represents the things that have been purchased?

VirginiaMartin
5 Oct '17

I think Peter is absolutely right that local records are needed for purposes of tracking what has been purchased. However, this doesn’t necessarily need to happen with MARC or other bib/catalog records; it could be done through order records, or a combination of order and “resource” records, or even something else. It’s especially important for econtent: unlike physical materials, which sit on a shelf and are stamped and barcoded so that they are obviously owned by the library, econtent is easily “lost” by both vendor and library if proof of entitlement is not recorded.

Eric_Hartnett
12 Oct '17

[quote=“peter”]
Is there a question of fiduciary responsibility to have the records controlled/facilitated by the library that represents the things that have been purchased?
[/quote]

Yes, definitely. This is something an auditor might look at.

As we’re planning for the move to FOLIO, we’re trying to determine what data needs to be migrated. Ejournal order history is a big one for us. Our previous serials vendor only stored the last 5 years of order history, so the only place we have this data is in our ILS. We do occasionally run into instances like the LibLicense example you provided, where we have to prove that we had a subscription for a certain period, and I know that there have been cases where we’ve lost access to things because of poor tracking at the time. But as Virginia mentioned, this tracking does not have to be in the catalog record itself but could be in a resource record.

darredo1
13 Oct '17

Hi all, here is my perspective as Acquisitions Manager at the University of Nebraska. I am enjoying reading everyone’s responses, thank you all for sharing!

  1. We try to have local catalog records for all of our e-content (when possible), and our preference is for OCLC records. This is for some of the reasons mentioned above (so that we can attach order records, and because we generally prefer OCLC metadata to vendors’). But I think it also helps in preventing duplicate ordering, in that we can find any titles in an ePackage before we firm-order it. Also, our ILL staff use OCLC as part of their processes, so setting holdings for our econtent on OCLC records aids their discovery.
  2. Our econtent in our catalog is primarily ebooks, ejournals, databases and datasets, and multimedia such as streaming music and video. We do not catalog any of our Archives and Special Collections scanned images – those are hosted in another database and harvested to our search layer. Also, our digital commons materials are not all cataloged, but all are harvested.
  3. I’m not sure I know enough of the details of our practices here to answer this sufficiently.
  4. I prefer to store our ebook, ejournal, and database records locally, as we’ve been able to make steps towards vendor-neutral records that way. However, I think harvesting other materials into the KB, such as the digital content that we generate here but don’t add to OCLC, works well.
  5. I am not entirely sure, because everything we order at this time goes into our catalog.
  6. I’m not sure if this is relevant, but we are beginning to investigate other possible ILS systems, the impetus being to establish or join a consortium system. In the face of possible data migration, we stand to lose any acquisitions data that we can’t crosswalk into a new system. I don’t know if there is any way to protect against this in the future with standardization of order data, but it is a concern of mine and, I think, for others.
Heather
16 Nov '17

For Cornell practices, I’m going to riff off @LauraW’s reply - we do a lot of the same things. For context, we use Voyager; our OPAC is a local implementation of Blacklight; our primary KB is Intota/Summon, though I think we’re still syncing our holdings to WorldCat Local.

1. … All of the above, but especially this other: we want to have local metadata we can control for any content we actually own (not necessarily everything we have access to).

Yes, this exactly.

It’s also incredibly helpful to have as much stuff as possible in a single system, to reduce the number of places that a human has to hunt down information or that an automated system has to interface with. And the more data we hold locally, the less vulnerable we are to data loss from somebody else’s bugs, and the more freedom we have to break up with a third party system that’s no longer meeting our needs.

  2. Ditto Laura’s response.

Anything we pay for has a record in Voyager; if we can’t get good title-level records, we at least have a database-level bib record to point to the collection as a whole. We only add open access records to the local catalog if (a) we manage them, or (b) a selector considers them important and has asked us to add them to the catalog.

3. … Sometimes local records/KB records don’t match/merge and we end up with multiple records for the same item displaying, but this is relatively infrequent.

This is pretty frequent for us with ebook subscriptions; when Intota has good MARC for them, we get catalog records from there, and anything we hold from multiple subscriptions will get multiple links on a single record. But if Intota doesn’t have good (or any) MARC, we get records from the vendor or OCLC, and we make no attempt to merge them with corresponding Intota/SerSol records, because uggghhh. Books24x7 is an example of this - the records Intota has are of far lesser quality than what Batch can get from Books24x7 themselves. That subscription has considerable overlap with our Safari subscription, though - we currently get MARC for these from Intota, soooo, there are duplicate records for all that overlap. There would be duplicates even if we got the Safari MARC from Safari, though - we make no attempt to merge links from multiple subscriptions onto a single record, either. But if we could get good MARC for Books24x7 titles from Intota, we’d do that, to reduce the duplication.

It’s less common on other pockets of material - databases and print don’t create this kind of problem, and we get almost all of our journal MARC from Intota/SerSol.

3, again: The KB index sometimes includes full text when our local records do not.

Yeah. The ability to search at the article level is one of the big reasons we use Intota to begin with.

  4. Different opinion on this one - our e-resources team considers managing multiple systems to be the bane of our existence. If the local system were capable of automated updates of links/title lists for e-journal packages, article-level searches, OpenURL linkage, ERM functions, license/order tracking, and/or troubleshooting, we would so agitate to dump all the duplicate external systems.

  5. N/A

  6. I’ll come back to this later if I think of anything. I’m all distracted now by my emotions about #4! We feel very, very strongly about all those multiple systems. :sob:

LauraW
21 Nov '17

Re #4 – I don’t think I imagined a system able to make all the external systems redundant. If that were possible, yes, we too would drop the multiple systems in a heartbeat.