Category Archives: Best Practices

It’s elementary, dear Watson…or is it?

I was processing my quarterly authorities updates and changes when I noticed something peculiar: Sherlock Holmes is no longer a fictitious character. Gone is the subject heading Holmes, Sherlock (Fictitious character), replaced by a simple name authority access point for a Holmes, Sherlock who wrote a book last year, whose gender is male (as noted in the 375 field), whose fields of activity include criminology and bee culture, and whose profession is detective.
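Pieced together from those fields, the new record presumably contains something like this (a hypothetical reconstruction, not a copy of the actual LC record; 372 and 374 are the standard MARC authority fields for field of activity and occupation, and the exact values are my guesses):

100 1_ $a Holmes, Sherlock
372 __ $a Criminology $a Bee culture
374 __ $a Detectives
375 __ $a male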

Is this authority record a joke?!

[Image: Sherlock Holmes (right) and Dr. John H. Watson. Illustration by Sidney Paget.]

RDA has a very clear rule about fictitious characters, instruction 9.6.1.7 to be specific, which states:

Fictitious and Legendary Persons
For a fictitious or legendary person, record Fictitious character, Legendary character, or another appropriate designation.
EXAMPLE
  • Greek deity
  • Mythical animal
  • Vampire
That little 2013/07 revision date the RDA Toolkit attaches to this instruction is interesting, because it points us back to an earlier version of RDA that didn’t include a rule for fictitious characters, non-human entities, or other important designations that can distinguish a person. Holmes’ new authority record was created on July 16, 2013 (according to OCLC Connexion). So did the rule change after his record was created? But even if it did, why did LOC move ahead with editing the authority record of one of the most influential fictitious characters in Western literature?
To make it even more confusing, there’s no trace of the old form as a non-preferred access point in the new authority record. So, based on this new authority record, how are people supposed to know that Sherlock Holmes is in fact not a real man who enjoys bee culture and criminology? They can’t, and that’s just confusing. At least Harry Potter and Count Dracula are still fictitious…but for how long?! If we follow RDA closely, we could soon have Potter, Harry (Wizard) and Dracula, Count (Vampire)!
I’d love to hear what you think about the loss of the Fictitious character designation in our authority file! Please post your comments. Until this all gets sorted out, I’m going to keep Holmes in our catalog just as he is: fictitious.

The future of bibliographic description is here…wait a minute!

It’s here! It’s finally here! While we were all out shopping or eating our Thanksgiving leftovers, LOC published the long-awaited data model for bibliographic linked data on Black Friday, November 23, 2012! In a document entitled “Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services,” the contracted team from Zepheira outlined how we might move from our static MARC record silos to open linked data networks. I read through it today, and here are my thoughts:

Another Acronym

AACR2, RDA, DACS, CCO, MARC, MODS, MADS, METS, DC, PBCore, VRACore, EAD, CDWA, CDWA Lite, LCSH, LCNAF, MeSH, AAT, LCC, DDC, FRBR, FRAD…and the list continues…welcome, BIBFRAME! Another acronym you’ll use to bore your friends and confuse non-cataloging coworkers! BIBFRAME is short for Bibliographic Framework and is the name of the new model that will take our bibliographic descriptions into the 21st century. Here are a few of my takeaways:

Goals

“The goal of this initial draft is to provide a pattern for modeling both future resources and bibliographic assets traditionally encoded in MARC21.” (p. 6)

While reading this document, I had to remind myself that this is a draft of a conceptual model. LOC acknowledges this and even wants feedback from the library community. This is a work in progress that hopes to establish a data model and develop an encoding standard for expressing bibliographic metadata based on MARC21 records. Thank goodness!

Goodbye MARC…Hello RDF!

BIBFRAME moves from flat bibliographic descriptions that use controlled vocabularies to dynamic, linked RDF triples that express the inherent relationships among our resources. The model looks like this:

[Figure: the BIBFRAME linked data model, from http://www.loc.gov/marc/transition/pdf/marcld-report-11-21-2012.pdf, p. 9]

From this model we can see the relationships between Works and their creators and subjects; between Works and Instances; and between Instances and publishers, locations, and formats. Creators, subjects, publishers, locations, and formats are all considered Authorities.
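In RDF terms, each arrow in that diagram becomes a triple linking one resource to another. A minimal sketch in the spirit of the report’s Topic example might look like this (the element names and identifiers here are my own illustrations, not the actual BIBFRAME vocabulary):

<Work id="http://bibframe/work/hound-of-the-baskervilles">
  <title>The hound of the Baskervilles</title>
  <!-- creator and subject point out to Authority resources -->
  <creator resource="http://bibframe/auth/person/doyle-arthur-conan" />
  <subject resource="http://bibframe/auth/topic/detectives" />
  <hasInstance resource="http://bibframe/instance/hound-1902" />
</Work>

<Instance id="http://bibframe/instance/hound-1902">
  <!-- publisher, place, and format likewise resolve to Authorities -->
  <publisher resource="http://bibframe/auth/org/george-newnes" />
  <placeOfPublication>London</placeOfPublication>
  <format>print</format>
</Instance>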

BIBFRAME then uses Annotations that relate to either Works or Instances to include local holdings data or other linked data services such as reviews, book covers, etc.
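As I read it, an Annotation would simply hang off the Work or Instance it describes. Another hypothetical sketch (again, illustrative names only):

<Annotation id="http://bibframe/annotation/mylibrary-holding-001">
  <!-- local holdings data attached to the Instance sketched above -->
  <annotates resource="http://bibframe/instance/hound-1902" />
  <heldBy>My Library</heldBy>
  <callNumber>PR4622 .H6 1902</callNumber>
</Annotation>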

But what about FRBR and RDA, you say?! There’s not much mention of them in the draft, except for a broad overview at the end. Instead, BIBFRAME re-conceptualizes the FRBR WEMI (Work, Expression, Manifestation, Item) model into a less hierarchical structure that uses network graph relationships between Works, Instances, Authorities, and Annotations.

I have mixed feelings about this new WIAA model. Just when I was starting to see how FRBR’s WEMI model could be expressed through RDA, BIBFRAME throws a curve ball. While I was reading the draft, I kept thinking: where are the Expressions? Just Works and Instances? Then I had to remind myself – it’s all just semantics with relational data. Here’s how I broke it down (with a sketch following the list):

  • Works = Works
  • Instances = Expressions + Manifestations
  • Authorities = Authorities
  • Annotations = Holdings + Other linked data stuff
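To make the collapse concrete, here is a rough sketch in the style of the report’s examples (element names and identifiers are my own illustrations, not actual BIBFRAME vocabulary). Under WEMI, a novel’s English text and a spoken-word recording of it would be two Expressions, each with its own Manifestation; under WIAA, those details seem to land directly on two Instances of the same Work:

<Work id="http://bibframe/work/a-study-in-scarlet">
  <title>A study in scarlet</title>
  <hasInstance resource="http://bibframe/instance/scarlet-print" />
  <hasInstance resource="http://bibframe/instance/scarlet-audio" />
</Work>

<Instance id="http://bibframe/instance/scarlet-print">
  <!-- Expression-level detail (content type) and Manifestation-level
       detail (publisher) both live on the Instance -->
  <contentType>text</contentType>
  <publisher resource="http://bibframe/auth/org/ward-lock" />
</Instance>

<Instance id="http://bibframe/instance/scarlet-audio">
  <contentType>spoken word</contentType>
  <format>audiobook</format>
</Instance>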

Standard Best Practices

“Formally reconciling the BIBFRAME modelling effort with an RDA-Lite set of cataloging rules is a logical next step.” (p. 15)

<head action=http://pounds.on.desk>

Really?? An RDA-Lite?! We’ve just spent three years debating whether or not to adopt RDA in the first place, and we finally have the green light from central command. Implementation is March 31, 2013. Period. Now they want a “lite” version to express bibliographic descriptions as linked data relationships rather than hierarchical descriptions!

Content standards are guidelines that have to account for all circumstances of resource description. If you want a “lite” version, use what you need from the standard and ignore the rest! There’s no need to develop a new version of the standard to accommodate a new model.

A Centralized Approach

The BIBFRAME model calls for a centralized namespace for all Works, Instances, Authorities, and Annotations. I’m very confused by this approach. How am I supposed to adopt a new model and standard of bibliographic description at my local institution if it’s all being managed by LOC? Does LOC really care that my institution has three copies of Game of Thrones? Or will they only be implementing this for their own collections and descriptions? (See the comments below.) Now I want to research how MARC was originally implemented.

So LOC will maintain the framework centrally, like it does with the MODS family and other structure standards. But why maintain a new, separate model for authority data that links to existing linked data services? The report gives the following example:

<Topic id="http://bibframe/auth/topic/cataloging">
  <label>Cataloging</label>
  <hasIDLink resource="http://id.loc.gov/authorities/subjects/sh85020816" />
</Topic>

There must be a reason for creating a new authority record that references an existing authority record, but it just seems redundant. If I link my BIBFRAME Work and Instance records to BIBFRAME authority records, each BIBFRAME authority record then links out to yet another linked data service. It feels like an unnecessary mediating link.
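The direct approach I’d expect would skip the middleman and point the bibliographic description straight at the existing service, something like this (a hypothetical sketch, not anything proposed in the report):

<Work id="http://bibframe/work/example">
  <!-- subject links directly to the id.loc.gov URI, with no
       BIBFRAME authority record in between -->
  <subject resource="http://id.loc.gov/authorities/subjects/sh85020816" />
</Work>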

More to Come

OK – that’s enough for now. Please watch for more over the next couple of days as I process this document further and continue my research. I promise to post more often, too. I’ll be reviewing RIMMF soon!

time on my side

My biggest priority at the library right now is to catch up on the backlog of items needing original cataloging. Currently we have approximately 15 linear feet of special collections materials, 2.5 linear feet of DVDs, and about 30 linear feet of LPs, scores, and other miscellaneous items. For the time being, I’ve decided to focus on the special collections materials and the DVDs.

I could just brainlessly catalog through the backlog one shelf at a time, but that wouldn’t be a very effective use of my time. So I assessed the situation and decided to work through the special collections materials starting with the oldest items in the backlog (those sitting on the shelf the longest), and to catalog the newest DVDs first. I determined my plan of backlog attack by weighing time-based cataloging priorities: procedure with context.

Let’s look at the DVDs first. The items in this collection are mostly local recordings of guest lectures and events around campus. Since they’re locally created, all the items require complex original cataloging: there is no copy, and each video must be viewed to analyze its subject matter. Most of these recordings were created within the last five years, so the students or faculty who would be interested in viewing them were probably around when the events actually took place. As time passes, the memory of an event fades, so it’s important to provide access to the most recently recorded items first and work through the backlog toward the oldest.

The opposite is true for the special collections backlog. All of these items were checked for copy in OCLC, and when no copy was found they were set aside for original cataloging. Over the years the backlog grew and grew to its current size. Many of the items have been waiting for cataloging for seven or eight years, perhaps longer. In the years that have passed since the items were acquired and set aside, there is a very good chance that another institution has cataloged them – providing excellent copy that I can use for my local catalog! So, by working through the oldest items in the special collections backlog first, I have a higher chance of finding copy and saving myself time in the long run.

The exception to these procedures is when something from the backlog is specifically requested. All items have provisional records with tombstone information, so they are at least somewhat searchable via the OPAC. If an item is requested by a user, it is treated like a rush and cataloged right away.

By cataloging in context I have a more efficient cataloging workflow, and therefore a more effective catalog for our users. Save the time of the cataloger and save the time of the user!