Monday, October 3, 2011

LITA 2011 Day 3 - Keynote - The Evolving Semantic World


Barbara McGlamery, a taxonomist at Martha Stewart Living Omnimedia, gave the closing keynote for the LITA National Forum 2011.
In her engaging presentation she described the concept of the semantic web as web pages that can be processed meaningfully both by humans and by computers, which can then index and present the data from those pages in new and interesting ways.  She argued that if web 1.0 was about making connections online, and web 2.0 was about online collaboration, web 3.0 will be about intelligence and making the web smart.
She then described the process she went through while working for Time on trying to implement the semantic web using what she called the "Big S" approach: making a website completely readable by a computer.  In such a system a computer is given enough information that it can draw its own conclusions; if a dataset says enough about the relationships between things, a computer can make inferences.  You can do some powerful things with such a system, but it is time-intensive to implement, processor-intensive to search, and laden with many other problems.  McGlamery described several attempts to use these methods at Time, none of which ultimately led to a practical project.
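To make the inference idea concrete, here is a minimal sketch of my own (not from the talk), assuming Python's rdflib library; the example.org vocabulary and the facts themselves are invented for illustration.

    # Explicit relationship data from which a machine can draw a conclusion.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()

    # Two stated facts: a tomato is a fruit, and every fruit is a food.
    g.add((EX.Tomato, RDF.type, EX.Fruit))
    g.add((EX.Fruit, RDFS.subClassOf, EX.Food))

    # "Tomato is a Food" was never stated, but a query (or a full
    # reasoner) can derive it from the relationships alone.
    q = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?thing ?super WHERE {
            ?thing a ?cls .
            ?cls rdfs:subClassOf ?super .
        }
    """
    for thing, broader in g.query(q):
        print(f"{thing} can be inferred to be a {broader}")

The catch, as she described it, is that every one of those relationships has to be modeled explicitly, which is where the time and processing costs come from.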
After this, McGlamery described how she has instead been using "little s" semantic web principles in her work at Martha Stewart Living.  This approach uses lighter-weight standards and is much easier to implement, but because of its limitations you wind up needing people to filter and adjust the conclusions a computer comes to from its understanding of the data.  Despite that, this seems to be the way of the future, and I found myself quite interested in learning more about what is involved, particularly in implementing the microdata standard that Google supports, in order to make our own website smarter without the immense work a heavier "Big S" solution requires.
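Since microdata is just attributes layered onto ordinary HTML, the contrast with the heavyweight approach is easy to show.  This is my own sketch, not anything from the keynote: the recipe markup and properties are invented, and I'm using Python's BeautifulSoup just to play the part of the machine reading it.

    # "little s" semantics: a machine picks structured facts out of a page.
    from bs4 import BeautifulSoup

    html = """
    <div itemscope itemtype="http://schema.org/Recipe">
      <h1 itemprop="name">Lemon Tart</h1>
      <span itemprop="cookTime" content="PT45M">45 minutes</span>
    </div>
    """

    soup = BeautifulSoup(html, "html.parser")
    item = soup.find(itemscope=True)            # the item boundary
    print(item["itemtype"])                     # what kind of thing this is
    for prop in item.find_all(itemprop=True):   # each labeled property
        print(prop["itemprop"], "=", prop.get("content", prop.get_text(strip=True)))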

LITA 2011 Day 3 - Social Networking the Catalog : A Community Based Approach to Building Your Catalog and Collection


The last day of LITA 2011 started not with a keynote but with a presentation: since it was a half day, one round of presentations came first, followed by the keynote.  The presentation I attended was given by Margaret Heller of Dominican University and of the Read/Write Library, formerly known as the Chicago Underground Library.
I thought this was a really interesting and unusual session, even if it applies to what I do only in the most abstract way.  Even though it wasn't particularly useful to me, it was informative, since it described something near where I live that I'd never heard of.
The Chicago Underground Library is apparently a small collection of largely print materials that has Chicago as its single, unifying focus.  There are a lot of zines and items with small print runs and limited conventional interest. The library focuses on materials and programs that appeal to the artistic community and is independent of any other libraries in Chicago.
Recently the library has changed its name to the Read/Write Library, referring to the common concept of web 2.0 collaboration sites as the "Read/Write Web."  Eschewing conventional librarian wisdom, the library allows anyone to catalog its materials using the library website.  Some of the people who work on this have library degrees, but most do not.  Furthermore, they use their own Drupal-based catalog system, not the standard record types and cataloging standards used in almost all other libraries.
This whole system works because of the unusual nature of the library, but its vision is something other libraries could stand to learn from, and that is where the value of this session lay.  We have been fixated of late on adding social networking capabilities to our computer systems while our physical locations have remained much the same.  Heller made the point that without adding some kind of social integration to the library itself, social integration in the catalog is largely meaningless.  Her unique library has found a way to make that happen that makes sense for it.  Other libraries hopefully can find their own ways.

LITA 2011 Day 2 - Lightning Talks


The lightning talks were interesting, although generally not applicable to my situation.  Since lightning talks are very short, I was at least able to hear about a lot of different things with limited applicability to my situation, rather than listen to one long thing with the exact same low level of applicability.
The first presentation was from Rice Majors of the University of Colorado at Boulder, who described his generally successful effort to lend someone from his IT department to a different IT department on campus with which his department had had a bumpy relationship.
Then Rebecca Fernandez from Midwestern State University discussed implementing Primo as a hosted service.
Next, M Ryan Hess of DePaul University gave a talk that was somewhat intriguing to me: to test a theory about potential problems he was having with Google Analytics, he implemented his own tracking cookie for three months and then compared that data with what Google Analytics reported.
Hannah Kim from Utah State University described her effort to switch from a Drupal-based intranet that wasn't being used and wasn't serving staff needs to an online course system that wasn't designed for intranets at all, but that had the features she needed and that, with some work, she was able to bend to her purposes.
Todd Vandenbark from the University of Utah described using card sorts to update website navigation during a redesign process.
Finally, Annette Bailey presented on the new release of a browser plugin called LibX, which I hadn't heard of and thought sounded pretty interesting.  I think I'll have to look into it and see if there's a way we can use it.

LITA 2011 Day 2 - Changing Times : How Mobile Solutions Provide a Catalyst for Expanded Community Reach and Relevance


This presentation was made by Greg Carpenter, CEO of Boopsie, and Jim Loter, director of information technology at The Seattle Public Library.
I have been to presentations made by representatives and founders of companies before and I was a little concerned that this talk could be much like some of those: boring sales pitches with a PowerPoint presentation.
I was pleasantly surprised by this one.
Although Greg was clearly not entirely neutral, since he had a product that the other presenter had used successfully, his talk was generally a solid overview of the mobile space, along with some things Boopsie has learned about mobile users from its product, some of them observable only because of the product's special features.
After Greg, Jim Loter from The Seattle Public Library presented on their process of setting up their mobile website with Boopsie.  Comparing what was literally called "mobile services" back in 1931 with what we consider "mobile services" today, Jim described how their app/mobile website has been useful to patrons in downloading electronic books and audiobooks and in dealing with the week the library had to shut down due to budgetary issues.

I found particularly interesting Seattle's efforts to make sure their staff have access to mobile devices so that they can help the public use them.  They are rolling out a "get out from behind the desk" campaign, hoping to make the bulk of what librarians do at public service desks doable from a mobile device, and consequently from somewhere other than a seat behind the reference desk.  It's an interesting plan and I think a good one.

I enjoyed this session and found it quite worth my time.

Sunday, October 2, 2011

LITA 2011 Day 2 - Trends at a Glance : A Management Dashboard of Library Statistics


This was a good session presented by Emily Morton-Owens of New York University and Karen Hanson of the NYU Medical Center.

Emily and Karen described their problem of wanting to make the monitoring of information easier. They had a new director who wanted to know how the library was being used, they had access to a lot of existing data, and they had multiple uses for that data: making decisions, demonstrating worth, and producing one-off reports.

Their data came from open source systems, homegrown tools, and other available sources. In deciding how to create an interface with graphs showing the data, they were guided by the principles of Edward Tufte, "above all else, show the data" among them. It was important to keep the proportions in the data truthful and to eliminate graphical junk.  For this reason they eliminated pie charts, which studies have found users sometimes have difficulty interpreting properly, and used bar charts instead.

They built their bar charts using Google Charts, which made the process quite simple (they showed the entire code they had to write for one chart, and it was less than 50 lines).  They also implemented a nice linear regression line and described how they did that.
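I didn't capture their code, but the regression step is simple enough to sketch.  Assuming numpy, and with monthly counts invented for illustration, a least-squares trend line is one call:

    # Fit a straight trend line to a year of (invented) monthly counts.
    import numpy as np

    months = np.arange(12)  # x values: month index 0-11
    counts = np.array([310, 295, 340, 360, 330, 280,
                       265, 300, 410, 430, 445, 470])

    # Least-squares fit of a degree-1 polynomial, i.e. a straight line.
    slope, intercept = np.polyfit(months, counts, 1)
    trend = slope * months + intercept  # overlay these y values on the bars
    print(f"trend: about {slope:.1f} more uses per month")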

The dashboard they created has multiple graphs on it that are not necessarily related except that they show information about the utilization of resources at their campus.  They used as a principle that a good dashboard should have benchmarks and goals. The dashboard that they wound up developing and demonstrating was geared toward providing an operational view of their data, although they would like to adapt it to show a strategic view.

One of the things they discussed in their data usage and analysis was the parsing of EZProxy logs, which I do a lot.  I haven't seen many presentations on parsing EZProxy data before, even though I do it frequently, and it was interesting to see their approach.
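For anyone curious what that work looks like, here is a minimal sketch of my own, assuming EZProxy's common NCSA-style LogFormat; real configurations vary, and the sample line is invented.

    # Pull the username, URL, and status out of one EZProxy log line.
    import re

    line = ('10.0.0.1 - jsmith [01/Oct/2011:14:02:11 -0500] '
            '"GET http://www.example.com/article HTTP/1.1" 200 5120')

    pattern = re.compile(
        r'(?P<ip>\S+) \S+ (?P<user>\S+) '        # client IP and username
        r'\[(?P<time>[^\]]+)\] '                 # request timestamp
        r'"(?P<method>\S+) (?P<url>\S+)[^"]*" '  # HTTP method and URL
        r'(?P<status>\d{3}) (?P<size>\S+)'       # status code and bytes
    )

    m = pattern.match(line)
    if m:
        print(m.group('user'), m.group('url'), m.group('status'))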

Saturday, October 1, 2011

LITA 2011 Day 2 - Extending Library Services with AI Conversational Agents


I loved this session from David Newyear of Mentor Public Library and Michele McNeal of the Akron-Summit County Public Library.  I'm not sure I loved the automated cat avatar, but I loved much else they are doing, and it seems to me there is a place for some kind of avatar in this kind of program, so I don't really fault the cat at all.

They have been working on a largely open source automated reference program that you can see in action here: www.mentorpl.org/catbot.html.   Following in the path first forged by the classic computer program/experiment ELIZA, their cat uses Artificial Intelligence Markup Language (AIML) to recognize forms of questions and either provide canned answers or direct users to appropriate resources.

This seems to me an interesting, albeit challenging, alternative to long FAQ lists or having basic information about services spread all over your website.  Instead you program the chatbot to look for questions like "what are your hours" and reply appropriately.
The presenters gave several examples of the AIML used to create the chatbot, which I found most useful.  An example category looks something like this (reconstructed from my notes; the response template is a typical AIML redirect rather than their exact code):

    <category>
      <pattern>DO YOU KNOW WHO * IS</pattern>
      <template><srai>WHO IS <star/></srai></template>
    </category>
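To see how categories like that come to life, here is a minimal loading sketch of my own, assuming the open source PyAIML library; the file name is hypothetical and this is not the presenters' actual setup.

    # Load AIML categories and let the kernel match incoming questions.
    import aiml

    kernel = aiml.Kernel()
    kernel.learn("library.aiml")  # hypothetical file of pattern/template pairs
    print(kernel.respond("Do you know who Melvil Dewey is"))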
The presenters also provided sample log segments showing actual conversations with the chatbot and discussed some of its problems, the process required to develop and maintain it, and the general reaction to it.  Users seem to have taken to the chatbot in a generally positive and sometimes playful way, even proposing marriage to it.
I think this is something that could be fun and interesting to pursue, and probably something that could add value to our services.  More information about their chatbot system can be found at:

LITA 2011 Day 2 - Keynote - On the Web, Of the Web : A Possible Future


Karen Coyle presented this excellent keynote to begin day 2 of the 2011 LITA National Forum.

She began by echoing a key sentiment from the prior day's keynote: that we are getting toward the end of something and the beginning of something else. Below are my notes on many of the key concepts:


“If Moses were to come down from the mountain today he would have to come down with comments enabled.”

Wikileaks and bloggers are examples of actors who are changing the balance of power. Print isn't going away, but it is waning. Print will become to electronic what live performances are to recordings. We haven't managed to progress beyond filenames on computers – we don't see title and author information when we look at what we've downloaded.

The world where books are written and other books are written later to respond to them is disappearing. Our conversations have become faster and shorter. Coyle sees new media now dominating old media. Informal communication is becoming formalized. With Facebook, Twitter, and email we've lost our ability to be off the record.

There are two primary activities libraries are engaged in that can help. One is the FRamily (FRBR and its relatives). The second is linked data. This is the year that linked data is going big, and several libraries in Europe are already implementing it in their catalogs.

Linked data is a metadata format designed for the web, and the web is where we need to be. It provides a flexibility that you can't get with other formats and lets you expand your metadata in a non-disruptive way. Linked data can be built up incrementally without having to change your technology; the MARC record is comparatively all or nothing. With the use of identifiers, multiple-language displays are easy.
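That last point about identifiers is easy to see in miniature. A sketch of my own, assuming Python's rdflib, with an invented identifier and labels:

    # One identifier, labels in several languages; display follows the user.
    from rdflib import Graph, URIRef, Literal
    from rdflib.namespace import SKOS

    g = Graph()
    concept = URIRef("http://example.org/id/cooking")
    g.add((concept, SKOS.prefLabel, Literal("Cooking", lang="en")))
    g.add((concept, SKOS.prefLabel, Literal("Cuisine", lang="fr")))

    labels = {lbl.language: str(lbl) for lbl in g.objects(concept, SKOS.prefLabel)}
    print(labels.get("fr", labels.get("en")))  # pick the user's language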

The Semantic Web vs. the Pedantic Web.

Library catalog data can be cryptic: "xii, 356 p.; 23 cm." What's 23 cm? What does xii mean? "Library cataloging is like the secret language of twins." It doesn't make sense to put this out on the web. We're too focused on our stuff, not on our stuff as organized knowledge.

Neither FRBR nor RDA addresses the issue of subject headings and organizing information topically. Search engines do keyword rather than topic searching because it's easy. The simple search box puts a big burden on the user to figure out what to type that will match what they are looking for. "At the same time, it is known that users in their attempts to search by subject sometimes find themselves at a loss for words" - Elaine Svenonius. Searching for concepts, or for things whose names contain common terms, causes problems with the simple search box.

Wikipedia is organized information and has concepts that don't appear easily in a keyword search, which is part of why it's so important in Google searches. Keyword searching is like dumpster diving or dynamite fishing for information. We pay attention to what's right in searching and ignore what's wrong.

Even tagging is not a good answer. We are using Victorian-era knowledge schemes – Cookery has finally become Cooking: too little, too late. We can use computing to do a lot of interesting things with faceting – things that Ranganathan and others worked on in the past but that were too complex to implement in an analog fashion. We tend to be good at helping people find a title, but not so good at helping them find a concept.

Users need to use information, not just find it. Use is an information activity and we should be available to support information activities. Linked data can help.

The library catalog needs to become a back-room database, not what gets shown to users. We have to move beyond the catalog. It may be wasted time to try to make the catalog better; it may be better to figure out how to make that information useful to people where they do their work.

The concepts behind FRBR were find, identify, select, and obtain. We need to really focus on find, make that the priority, and then add a major focus on use. If FRBR and RDA are going to support this, they need to change radically and evolve. There are data points we need that are not included in these standards.

Users flock to Wikipedia because the knowledge is organized and they can understand it.

LITA 2011 Day 1 - Making Smartphones Smarter in the Library : Reaching Mobile Users with QR Codes


Anne Morrow and Nancy Lombardo of the University of Utah and Benjamin Rawlins of Kentucky State University presented this fun session on QR Codes and their use.

The first part was an overview of what QR codes are, their history, and ways they can be used. The second part gave a more detailed description of different ways libraries are using them. A QR code scavenger hunt was passed out, and attendees got mini chocolate bars at the end for completing it and chatting with the presenters.

Some suggested uses for QR codes were: way-finding, directing users to online forms and registration, research assistance, directing users to services, providing announcements, and directing users to additional information in immersive exhibits.

They mentioned their use of BeeTagg.com and Delivr.com to create and track QR codes.  Delivr.com is free for small institutional uses, and BeeTagg.com costs between $1 and $5 per code for use of its tracking system.  I'm wondering whether it would really be that difficult to create a system that could be easily installed and customized to provide this service without depending on a third party; QR codes are easily created using open source software, and there are open source solutions for creating short URLs (see Casimir) and basic statistics packages.
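Generating the image itself is nearly trivial; here is a sketch assuming the open source qrcode Python package, with an invented short URL.  The tracking half would just be a short-URL redirect that counts hits before forwarding.

    # Make a QR code image pointing at a (hypothetical) short URL.
    import qrcode

    img = qrcode.make("http://lib.example.edu/s/hours")
    img.save("hours-qr.png")  # print this and post it at the desk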

LITA 2011 Day 1 - Getting Your Mobile Web Presence Off the Ground

This was a great session, presented by Kathryn Frederick from Skidmore College, on developing a mobile website, which they launched just over a year ago.  The presentation is at https://docs.google.com/present/view?id=dff8vx6_17hd6nvzcd and the mobile website itself is at http://lib.skidmore.edu/m/.

When developing a mobile version of a site it is important to ask the questions:
  • Who are your users and what would they find useful?
  • What resources (staff, money, time) are available to you?
  • What would you do if you had no limitations?

It's necessary to decide whether you want to make an app or a mobile site.  In the case of Skidmore they decided to make a mobile site.  It's then necessary to decide whether you want to take the existing site and make it work well on mobile, or to design a new mobile site that has a subset of information that's on the regular website.  Finally you have to decide what technologies you will use to make the site work.

When developing for mobile it's important to keep in mind the small screen size and the variable connection speeds users will have.  Kathryn also provided information about a lot of different tools for testing your mobile site to make sure it works well on different platforms.

Some things to consider including in a mobile website are directions (perhaps GPS-based) and clickable phone numbers.  The Android browser doesn't deal well with RSS feed content, which is good to be aware of if you want to integrate that.  It's helpful to use a mobile redirect script, and Kathryn included a link to a sample script.  Finally, Kathryn dealt with marketing a mobile site, saying that QR codes are a handy way to disseminate the URL.
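Her sample script was linked rather than shown, so as a rough server-side illustration of the redirect idea, here is a sketch of my own assuming Flask; the URLs and the deliberately crude user-agent check are illustrative only.

    # Send visitors with mobile user agents to the mobile site.
    from flask import Flask, redirect, request

    app = Flask(__name__)
    MOBILE_HINTS = ("iphone", "android", "blackberry", "mobile")

    @app.route("/")
    def home():
        ua = request.headers.get("User-Agent", "").lower()
        if any(hint in ua for hint in MOBILE_HINTS):
            return redirect("http://lib.example.edu/m/")  # mobile version
        return "regular desktop homepage"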