Tuesday, April 15, 2014

Computers in Libraries 2014 - Day 2 - Good Not Perfect!

This was a well-meaning but not particularly enlightening presentation given by Andrew Shaping of the Jack Tarver Library at Mercer University.  The basic point was that it is important to call projects finished, or at least ready for prime time, once they've reached a level of "good enough," rather than holding onto them until they are considered perfect.  Striving for perfection is a noble goal, but since there is no clear definition of perfection for any specific project, striving for it can mean never completing the project or delaying its release unnecessarily.  In the worst case, a fixation on perfection can make something worse; the presenter gave the example of the Cake Wrecks website, which shows cakes that people should have stopped decorating several tubs of fondant ago.

Instead, it is better to release unfinished but functional products as perpetual betas (à la Google Mail), to recognize quickly when something is not working, and to be willing to discard it.

Monday, April 14, 2014

Computers in Libraries 2014 - Day 2 - What Does the Dashboard Tell Us?

Amy Deschenes of Simmons College gave this presentation on the college library's implementation of a dashboard for collecting and displaying statistics.

The library had been storing all of its statistics in a shared Excel file, which was cumbersome and not particularly efficient.  They decided to develop a new system for data collection and reporting, which they did in three steps.

The first step was to define a data collection process.  To do this they asked their stakeholders what information needed to be collected, and they determined the different sources of the data, how the data would be sliced up, and how frequently it would be collected.

The second step was to implement the best tool.  They determined that Excel was not meeting their data collection needs.  They looked at using MySQL and a service called Zoho Creator.  They decided that MySQL would present too many technical difficulties and went with Zoho Creator's cloud-based database service, which offers a single database in the free version of the product.  They then set up a fixed schedule, had a student input the previous year's data, and made sure everyone knew the data input schedule.

Now that they had a database that was being regularly updated, the third step was to create an online presentation of the data.  They used a JavaScript library called Sheetsee.js, which can take the contents of a Google Docs spreadsheet and build a nice interface for filtering data and viewing charts (see their implementation).  Each month they manually copy the data from Zoho into the Google Docs spreadsheet for their reporting.
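Since the reporting side reads from a published Google spreadsheet, the same data can also be pulled programmatically for quick checks.  Below is a minimal Python sketch of that idea (my own illustration, not part of the Simmons setup); it assumes the sheet is published to the web, that the standard CSV export URL works for it, and it uses a placeholder spreadsheet key and hypothetical column names:

```python
# Minimal sketch: pull a published Google Sheet as CSV and summarize it.
# SHEET_KEY and the column names below are placeholders, not the actual
# spreadsheet behind the Simmons dashboard.
import csv
import io
import urllib.request

SHEET_KEY = "YOUR_SPREADSHEET_KEY"  # hypothetical key
CSV_URL = (
    "https://docs.google.com/spreadsheets/d/" + SHEET_KEY + "/export?format=csv"
)

def fetch_rows(url):
    """Download the published sheet and return its rows as dicts."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

if __name__ == "__main__":
    rows = fetch_rows(CSV_URL)
    # Example: total a hypothetical "Reference Questions" column by month.
    totals = {}
    for row in rows:
        month = row.get("Month", "unknown")
        count = int(row.get("Reference Questions") or 0)
        totals[month] = totals.get(month, 0) + count
    for month, total in sorted(totals.items()):
        print(month, total)
```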

After doing this, they decided to also make a more exciting, interactive, visual, and student-focused dashboard.  This they built using a responsive HTML5 framework called Foundation.  After looking at some other dashboards for ideas they created a dashboard with both quantitative and qualitative data.  The results are quite attractive and friendly.
Although much of their process solved problems that we solved in our own way some time ago, there were approaches here I found useful and I think there are some good things to be learned from their experience.

Computers in Libraries 2014 - Day 2 - Library Data Mashups

In this session Mike Crandall, Samantha Becker, and Becca Blakewood from the Information School of the University of Washington described what data mashups are, where to find data, and how to mash it up.

They began by describing the concept of mashups in general, where, for instance, audio from one source is mixed with video from a different source to create a new, doubly-derivative work that is entertaining in a way that neither of the original works is on its own.  Data mashups do this with data, and can be fun in their own way.  The presenters suggested that data mashups can be an effective way to advocate for your library, as mixing data from different sources can build powerful statements about library service need, reach, and utilization.

On the topic of data sources to mash-up, the presenters suggested these sources as a starting point:

  • National Sources: IMLS Public Library Survey, Edge Initiative, Impact Survey, Census Data
  • Local Sources: Community indicators, City/county data, community anchor institutions or agencies
  • Your Sources: Library use statistics, circulation statistics, patron surveys

After discussing these and some other sources, they went on to describe approaches to mashing up data.  The first approach is the "conceptual mash."  A conceptual mash doesn't make for a pretty graph, largely because it can be a bit of a mismatch of data.  However, it can point in a direction that is ripe for further intelligence gathering.  The following example of a conceptual mash was given.

The presenters took national, Texas, and local data for New Braunfels, Texas.  At each level they compared married-couple families, never-married individuals, families with elementary school children, Hispanic households, and non-English-speaking households.  That data indicates a higher proportion of Hispanic residents in New Braunfels than in the comparison areas.  This information can then be compared against the Pew library typology report, which indicates that statistically more Hispanics are in the "Distant Admirers" group than in any other group.  Based on that, there might be something to be gained in reaching out to the Hispanic community in New Braunfels.  This information needs to be validated, but it creates a working hypothesis that can be explored.

A second type of data mashup is an "Actual Mash."  This is where there are datasets from different sources that can be directly compared or joined using data points that are in both sets.

As an example of this, the presenters looked at the Edge assessment of library technology access and linked it up with data from the Public Libraries Survey to try to determine whether there is a correlation between a high Edge score and library size.  The Public Libraries Survey had library size available, the Edge assessment had Edge scores available, and both had the names of the libraries being evaluated.  That meant the data sets could be joined to create a larger data set.

In this case they pointed out some pitfalls of not looking at data very carefully.  A simple bar graph would seem to indicate a correlation between Edge score and size, but a closer analysis actually shows that a very small number of extremely large libraries generally have extremely high Edge scores, while for other libraries across the spectrum there is not much of a correlation between size and Edge score.
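As a rough sketch of what this kind of actual mash can look like in practice, here is how the join and the closer size-by-size look might be done with pandas.  The file names, column names, and size bands are hypothetical placeholders, not the actual survey fields:

```python
# Rough sketch of an "actual mash": join two data sets on a shared key
# (the library name) and then look at Edge score by size band.
# File names, column names, and size bands are hypothetical placeholders.
import pandas as pd

pls = pd.read_csv("public_libraries_survey.csv")  # e.g. LibraryName, Population
edge = pd.read_csv("edge_assessment.csv")         # e.g. LibraryName, EdgeScore

# Join the two sets on the field they share.
mashed = pls.merge(edge, on="LibraryName", how="inner")

# A single bar chart of score vs. size can mislead, so bucket libraries by
# service population and summarize each band separately.
mashed["SizeBand"] = pd.cut(
    mashed["Population"],
    bins=[0, 10_000, 100_000, 1_000_000, float("inf")],
    labels=["small", "medium", "large", "very large"],
)
summary = mashed.groupby("SizeBand")["EdgeScore"].agg(["count", "mean", "median"])
print(summary)
```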

With the amount of data available from different agencies and from libraries themselves, there are strong possibilities for creating complex actual mashes using a variety of data sets and matching points.

The presentation had a handout with data sources on one side and tools for data analysis on the other.  There was a brief overview of some of the many tools available.  Here is a quick list of the tools mentioned on that handout:

Computers in Libraries 2014 - Day 2 - Change in Action

This session was a great follow-up to the session immediately preceding it, which had covered the steps that need to be followed to effect successful change.  In this session there were two presentations from two rather different libraries that had made significant changes to their buildings and cultures.  In the descriptions they gave of their changes I could see the general pattern that had been described in the prior presentation.

The first presenter was Tod Colegrove of the DeLaMare Science & Engineering Library.  He described the academic library he had started working in as quiet and empty.  This troubled Colegrove, who worried about the library's future and knew that change was needed.  The dean of the library provided a vision of a knowledge center, and following that vision, in three years they increased tenfold the amount of the building that was in active use on a daily basis.

A primary goal of the transformation seemed to be changing how students thought about and used the library.  A key concept was changing the space from something analogous to cropland, a highly organized, uniform space, to something analogous to a rain forest, a space with many different uses and many different kinds of programs and materials.  3,000 square feet of library walls were covered in whiteboard paint to encourage students to hold meetings and brainstorming sessions in the building.  The library also started circulating a lot of non-traditional materials that were quite appropriate for an engineering school, such as Lego Mindstorms sets and Arduino inventor kits.

For their process Colegrove described seven rules that they followed:

  • Breaking rules and dreaming
  • Opening doors and listening
  • Trusting and being trustworthy
  • Experimenting and iterating together
  • Seeking fairness, not advantage
  • Erring, failing, and persisting
  • And a seventh "rule of the rain forest" that Colegrove ended with was "Pay it forward"

Following Colegrove, Nate Hill of the Chattanooga Public Library described some radical changes that they made to their building.

Hill described the Chattanooga Public Library in its pre-change state as a dump, "assessed as one of the crappiest libraries out there."  There was a concerted effort to change it from a dump into an innovative space.  As it happens, Chattanooga is a great place for this since they have one of the fastest Internet access networks in the country (gigabit fiber for the entire city).

The fourth floor of the library was wasted space.  It was unclear to me if there had been books on the floor in the recent past (Hill at one point said "you can do all kinds of interesting things if you get the books out of the way") but certainly there was a lot of junk in storage there that no one had had the heart to throw away.  It was decided to turn this entire floor into a "beta space" where they could try things out.

They contacted a local Linux User's Group and started on a process of converting the space. They auctioned off all of the crap and partnered with the AIGA to redesign the floor now that it was empty.

After the space was cleared out they started coming up with programming to have in it that could utilize the space in many different ways.  They've also added furniture, movable walls, and signage (a huge sign saying "You are in the right place") to make the space customizable to different kinds of needs and to make people feel more comfortable in the space.  So far a sampling of programs they've offered there includes:

  • Their first program, a class on HTML and CSS basics
  • They worked with an organization called the Company Lab for a program to test small business ideas which 450 people attended
  • They held a Maker Day where makers were encouraged to bring their own 3D printers and there was a kind of fair in the space
  • A dance program which involved dancers manipulating projected screens by moving them around with their hands and throwing them to a different building across town over the gigabit connection (kind of strange and a little difficult to describe)

Going on from this remodeling of space, they have implemented or are in the process of implementing several other programs and changes.

  • They have trained staff in a fashion loosely modeled on the Apple Genius Bar using the term "Smart People" to help the public with certain kinds of computer and technical issues.  
  • They are now serving civic data from a data platform in the library.
  • A contingent of staff went to SparkFun in Colorado and were hackers in residence for a week, increasing their hacking skill-set.
  • They have a program called Arduino Thursdays that encourages Arduino experimentation.
  • They are currently working on a video remixing platform called Hyperaudio, as well as the creation of a music lab, called Adagio, on the second floor of the library.

Both of these libraries seem to be doing a lot of exciting things and I think we can find models of programs and spaces in what they are doing.

Sunday, April 13, 2014

Computers in Libraries 2014 - Day 2 - Ready for Change? 8 Steps

This was a really unusual session for Computers in Libraries, but I thought it was very good.

Myles Miller is an author and business coach (Facebook page here) who travels around the country talking and consulting on the topic of change.  It was more the kind of thing I'd expect at a staff development day rather than a Computers in Libraries session, which tend to be more library staff talking about things they've done or tried to do.


In this case Miller provided a quick overview of John Kotter's 8-step change process.  Miller is an excellent and engaging speaker and this talk dovetailed nicely with the next session, in which two libraries described successful change.



Miller went through the eight steps indicating that they need to be followed in order and that they should not be rushed.


Step one: Create Urgency

For change to happen it helps if the whole company really wants it.  Develop a sense of urgency around the need for change.  This may help you spark the initial motivation to get things moving.

  • Open an honest and convincing dialogue about what's happening in the marketplace
  • If people start talking about the change you propose, the urgency can build and feed on itself
  • For change to be successful 75% of a company's management needs to buy into the change


Step two: Form a powerful coalition

  • Convince people that change is necessary
  • This often takes strong leadership and visible support from key people within your organization
  • Managing change isn't enough - you have to lead it
  • Find people who have power throughout the organization


Step three: Create a vision for change

  • When you first start thinking about change, there will probably be many great ideas and solutions floating around
  • Link these concepts to an overall vision that people can grasp easily and remember
  • A clear vision can help everyone understand why you're asking them to do something
  • Create a vision that can be expressed in less than 30 seconds


Step four: Communicate the vision

  • What you do with your vision after you create it will determine your success
  • Don't just communicate the vision at special meetings, but at every chance you get
  • Use the vision to make decisions and solve problems daily
  • Resolve conflicts.  If you can work through the conflict together you will have a stronger relationship


Step five:  Remove obstacles

  • Find people, processes and structures that are getting in the way (emotional thinkers vs. rational thinkers)
  • Put in place the structure for change and continually check for barriers to it
  • Removing obstacles can empower the people you need to execute your vision and help the change move forward
  • When responding to people ("I don't like it.") use the grand pause, don't react immediately.  **Silence** "OK. Tell me what you don't like and why?"  At the heart of most obstacles to change is fear.  Find out what people are truly afraid of.
  • Document concerns -- it shows that you are listening


Step six: Create Short-term Wins

  • Nothing motivates more than success
  • Give the organization a taste of victory early in the change process
  • Within a short time frame you'll want to have results your staff can see


Step seven: Build on the Change

  • Don't declare victory too early
  • After every win analyze what went right and what needs improving
  • Set goals to continue building on the momentum you've achieved


Step eight: Anchor the Changes in the Organization's Culture

  • New staff and leaders need to be initiated
  • Talk about progress every chance you get
  • Acknowledge the people who got you there

This was a great session.  Even though it had little direct bearing on the typical things that are discussed at this conference, it was completely relevant and appropriate for a conference that frequently inspires a desire to bring about change.

Computers in Libraries 2014 - Day 2 - Keynote

My second day of Computers in Libraries 2014 officially began with a keynote presented by Mary Lee Kennedy, the Chief Library Officer at New York Public Library.  Ms. Kennedy titled her talk Hacking Strategies for Library Innovation and began with a quick list of four strategies that she then explained in depth in her presentation.  Those four strategies were:

  1. Know what we are fundamentally about
  2. Identify the target areas of opportunity
  3. We need to make changes - head off in the direction even when we don't know what the outcome might be
  4. We need to have fun

On the topic of "knowing what we are fundamentally about" Kennedy described the role that the New York Public Library plays.  On a local level it provides a variety of services to the city of New York.  It is free for all to use and books, archives, and documents are in its collections.  

But because the New York Public Library has an Internet presence, the potential community it can serve expands to the 2.5 billion people who use the Internet.  In addition to its physical materials, it is now a place that can offer digital media and data, along with APIs that this community can use to access that information.

As for opportunity, the New York Public Library has historically focused on access to information.  Going forward there is an opportunity to move from a passive role of just providing access, to an active role of engaging with users. 

Kennedy laid out some strategies for changes that can be made to capitalize on this opportunity.

Make knowledge accessible 
NYPL has started the following initiatives to make knowledge more accessible:
  • NYPL Map Warper - takes sheet maps and puts them into Google Earth to give context, like showing what families lived where in Manhattan 100 years ago.  NYPL encouraged users to make corrections to the Google Earth overlays in a game-like fashion, and as a result what would have taken staff months was accomplished very quickly.
  • Children's books - Has an intuitive mechanism for quickly narrowing down titles of interest from a list of recommended children's books.
  • NYPL Archives & Manuscripts - This system allows traditional textual links that exist between different archives and manuscripts to be expressed in a graphical way.
Turn the Library Inside Out (or Take the Library Out)
Rather than requiring people to come to the library for services, we need to export library services out so that the public can benefit from them in many different contexts.  Here are some examples of how NYPL is attempting to do that:
  • Wikipedia Edit-a-Thon - Many people use Wikipedia as a starting point and librarians have the expertise and the resources to improve it; it is an area where we can help
  • Zooniverse - NYPL has partnered with this citizen-science site to use crowd-sourcing to turn text into structured data
  • Hackathons - If we can connect people who know common kinds of things wonderful things can happen.  We need to get inside of the life of the people in our communities.
  • Bit by Bit - the NYPL has helped people to collaborate in digital storytelling for this project.
Spark Connections
Libraries are all about connecting people, but we are a part of a network ourselves.  Focus on what we do best and focus on what others will do better with us.
  • ReadersFirst - 292 library systems subscribe to the principle that it should be easy for people to read an ebook.  They now have a guide to library ebook vendors and are working on an ebook API.
  • Broadband lending - literally lending out Internet connections via wireless connection boxes.
  • MyLibraryNYC - Teachers select titles and title sets.  NYPL delivers the books to schools and they pick them up when the teachers are done.
In all of these kinds of endeavors Kennedy encourages us to have fun.

Thursday, April 10, 2014

Computers in Libraries 2014 - Day 2 - OCLC Breakfast

On Tuesday morning I attended the OCLC breakfast, where OCLC (the Online Computer Library Center, a nonprofit that provides a variety of library services and plays a particularly important role in cataloging and interlibrary loan) provides a rundown of various products and services they are working on.  The food at the OCLC breakfast is a little nicer than what's in the conference hall and it's always interesting to hear what OCLC has going on.

This year they mentioned the following programs and products, some of which I was more interested in than others.

They recently had a symposium on MOOCs (it was titled "The Hope & Hype").  The assertion is that MOOCs are going to change the way the library operates, as libraries will be asked to support students' ability to connect to these online courses and to support them in other ways.  There is a recording of this symposium on the OCLC website.  They are also coming out with a publication on MOOCs and libraries.

They are working on an interesting project called WorldCat Identities.  This is a tool that flips the WorldCat catalog on its head so that rather than looking at what titles are owned by certain libraries, you're looking at a combined list of everything that has been produced.  For instance, you can bring up an author and see all of her works, or find out what cookbooks are out there that cover specific cuisines.  At least from the brief testing I've done of the site it seems a little buggy, but the idea has potential.

They spent a lot of time talking about WorldShare Management Services, a cloud-based integrated library system that they are marketing.  They say that 225 libraries will be using it by the end of the year.  Using something like this frees libraries from a lot of the traditional problems they have had with conventional library systems, like keeping staff clients up to date, making sure the catalog server gets upgraded, and making sure the clients are installed and configured properly wherever they are needed.  WorldShare Management Services is entirely web-based and takes heavy advantage of OCLC's position as a central clearinghouse for catalog records, hypothetically making maintenance of a library's catalog extremely easy.

OCLC is just completing a shift from the older interlibrary loan system to a new system called WorldShare Interlibrary Loan.  There are a number of advantages to the new platform; it is required for all libraries that want to continue doing interlibrary loan through OCLC, and it is available at no additional cost.

Of greater interest to me was a discussion of a new product called WorldCat Discovery Services.  This is another free upgrade that will eventually phase out three services currently used by the general public for searching OCLC content: FirstSearch, WorldCat.org, and WorldCat Local.  Libraries can sign up for their own unique URL, which they can customize.  The product doesn't yet have all of the features they want it to have; they anticipate it being complete by November.  It does sound like it will be a nice upgrade though, with separate staff and patron interfaces and responsive design, among other features.  All libraries will be required to move from FirstSearch to WorldCat Discovery Services by December of 2015.

Computers in Libraries 2014 - Day 1 - Libraries & the Big Picture

This two-timeblock session was a great one, crammed with lots of hot information and insight, even if it didn't go quite in the direction that the organizers had planned.

The session was intended to be divided into four parts.  In the first part, Kathryn Zickuhr from the Pew Research Center would present the results of the third of Pew's three planned studies of public libraries.  In the second, Marydee Ojala would discuss the IFLA trend report.  In the third, Stephen Abram would provide some insight into the observations from these reports.  In the fourth, conference attendees in the room would provide examples of what they were doing that was innovative, interesting, and maybe the next big thing.  The fourth part kind of fizzled, as people were too interested in asking follow-up questions of the three presenters, the presenters were only too willing to provide great replies, and apparently not many people in the room had done anything in the past year that they thought was world-shakingly innovative.

There was a lot of information in this session and I'll try to cover the main points here, which is much easier to do with the first two presenters who had neatly organized PowerPoint presentations, as opposed to Stephen Abram, who seemed to just be speaking off the cuff in the scary, funny, information-dense way that only Stephen Abram can.

The first presentation featured new data from the Pew Research Center on how the American population in general is engaged with public libraries.  Two earlier studies in this series have been done, one on the state of reading in America and one on library services.  This study was on typology -- classifying the American populace into broad types based on library usage.  This is kind of the inverse of the typical study they have done.  Prior studies have looked at the behavior of different groups of people broken out by race, gender, and socio-economic status (e.g. "How much do middle-class Asian women between the ages of 30 and 45 read books?").  This study instead looked at different types of library users (e.g. heavy library users, people who never use the library) and then tried to discover what these groups might have in common.

The full results of this and other research can be found at libraries.pewinternet.org, but here is a quick rundown of what was found.

Pew divided the responses to their phone survey into four basic types, and each type was further divided into two or three subtypes.  The four basic types described the level of engagement that the interviewed persons had with their local library: high, medium, low, and none.

Within the high type there were "Library Lovers", about 10% of the respondents, and "Information Omnivores", which was about 30% of the respondents. 

"Library Lovers" frequently use libraries and have high levels of appreciation for libraries.  They found that this type included many parents, students and job seekers.  They tended to have high levels of education.

"Information Omivores" made heavy use of libraries, but less so than the "Library Lovers".  These respondents had the highest rates of technology use, education, employment, and household income.

The respondents in the medium level of engagement were divided into "The Solid Center", 30% of all respondents, and "Print Traditionalists", 9% of all respondents.  Zickuhr didn't have much to say about "The Solid Center"; they were the largest group overall and represented a broad swathe of the population.  The "Print Traditionalists", on the other hand, tended to live in rural areas, farther away from libraries.  The highest number of rural southerners was to be found in this group.

Zickuhr indicated that the most interesting information was to be found in the groups with low levels of engagement or who were not engaged with their libraries.  In the low type there were three subtypes: "Not for Me", comprising 4% of respondents, the "Young and Restless", making up 7% of respondents, and the "Rooted and Roadblocked", making up another 7% of respondents.

The "Not for Me" type had a strikingly less positive view of libraries in their communities.  They were people who were more likely to have had negative experiences at libraries, although they typically weren't this way because they just relied on the Internet for all of their information needs.

The "Young and Restless" type generally didn't even know where the nearest library was.  For this group libraries aren't even something they consider.  They tended, as the type name indicates, to be young people, frequently who had recently moved to an area.

The "Rooted and Roadblocked" type tended to have a positive view of libraries but had a lot of difficulties that made it impractical to make regular use of libraries.  They tended to be older, many of them living with disability, and they frequently had experienced a recent illness in the family.

The people who had no engagement at all with libraries were divided into two groups: "Distant Admirers", 10% of respondents, and "Off the Grid", 4% of respondents.

The "Distant Admirers" were people who didn't personally use libraries, although 40% of them had family members that used libraries.  The highest number of Hispanics were in this subtype.  They tended to view libraries quite positively.

Finally the "Off the Grid" subtype just had little exposure to libraries.  They seemed to be people who engage less frequently in community and social activities.  Many of them live in rural areas and have low household incomes.

Zickuhr suggested taking the data from the study and cross tabulating it with the community to establish working hypotheses that could be tested.  For instance, if a community has a high number of Hispanics, it is statistically more likely based on the results of Pew's survey to have more people in the "Distant Admirers" category.  Libraries could test this hypothesis with targeted surveys and small studies, and if it was found to have validity that would provide an opportunity to reach out to the "Distant Admirers" in their community and try to turn them into library users.

Pew will be coming out with a library engagement quiz this summer and all who are interested in that are encouraged to sign up with the newsletter advertised on the Pew libraries site.

Following Zickuhr, Marydee Ojala summarized the International Federation of Library Associations' trend report.  Ojala began by providing a little information about IFLA itself, explaining that it is an umbrella organization of library associations and that members of those associations are also members of it.  One of the associations is the American Library Association, for instance, so most librarians in the United States, by being members of that organization, are also members of IFLA.

IFLA decided to create a report to identify general information trends.  The report is broader than libraries in its scope, although libraries should concern themselves with its findings.  The full details on the report can be found at trends.ifla.org.
The report identifies five trends which Ojala briefly covered.  They were:
  1. New technologies will both expand and limit who has access to information

    The "both expand and limit" part is the key point of this trend.  Although things like the Internet clearly expand the amount of information people have available to them, there are countervailing trends and complex factors that are likely to limit information as well.  Copyright is a major factor here as copyright laws have become more strict and copyright protection measures are widely use to make sure copyright is not violated.  However this means that information that would be leaving copyright is not (items published since 1923 have not been entering the public domain, which has put objects at risk of being lost before they can be widely distributed without fear of violating copyright).

    Also, information literacy is increasingly important.  If you know how to find information it is frequently there for you, but if you aren't well versed in the art of finding information, it may be even harder to find the things you need than it was before the Internet.

    For each point, Ojala had a catchy phrase that tried to get to the heart of the matter, and for this trend the phrase was "The world's information at your fingertips - but what can you do with it?"

  2. Online Education will democratise [the British spelling is used because of the document's international flavor] and disrupt global learning.

    The growing popularity of free online courses (commonly called MOOCs), informal learning, and open access to university class syllabi and lectures has made it such that if anyone is interested in learning about topics that previously might have been beyond their reach, they now can.  It is unclear what this means for formal higher education, where education costs have been skyrocketing for decades, and for the quality of education.

    The phrase here was: "If education is free then how much is it really worth?"

  3. The boundaries of privacy and data protection will be redefined

    To see what this item is about you hardly need to go farther than this year's headlines.  The U.S. and U.K. governments have been publicly revealed as having harvested wholesale information on private citizens for years.  Is the person who revealed this a traitor, a hero, or something else?  Part of the reason the government was so attracted to tapping the likes of Google is that we increasingly and willingly hand over piles of information to companies, in a way that would have been unthinkable decades ago.

    The phrase here was "Who is profiting from your personal information?"

  4. Hyper-connected societies will listen to and empower new voices and groups.

    A networked society has allowed groups that were previously too small, powerless, and/or disparate to join together in ways that have not been possible before.  It also seems, in some cases, to be leading to a situation where a reinforcing feedback loop of opinion makes individuals less interested in trying to understand where others are coming from, and consequently less willing to compromise.  This is resulting in greater fragmentation of our political parties and upheaval in the political landscape.

    The phrase here was "Are you ready for cyber politics?"

  5. The global information environment will be transformed by new technologies

    The prevalence of mobile devices that always know where you are, "Internet of things" devices that know details like how much energy you use and how warm you like your house to be, potentially disruptive technologies like 3D printing and Bitcoin, and the creation of a global information economy are so novel that we aren't really sure what kind of impact they will have on our society.

    The phrase here was "When your phone, your car, and your wristwatch know where you are at all times, who runs your life?"
There are implications in these trends for libraries, for information providers, and for each individual.  We need to be thinking about them, figuring out what they are going to mean, and working out how to deal with them.
Following Ojala was Stephen Abram's talk.  It is rather difficult to summarize Abram's talk.  He tends to quickly pile up a lot of disparate facts, some of which sound difficult to believe but plausible (and hearing them from him makes you feel that you need to believe them), and join them together into a coherent argument for action.  It's perhaps not too dissimilar to listening to a really good conspiracy theorist, except that there is no clear conspiracy and little that can be dismissed by Occam's Razor.
Some of the points I managed to take down were:
  • Copyright is a major issue going forward.  The 10 biggest contributors to the presidential campaign are also the 10 largest copyright holders (I looked briefly for a source for this stat and couldn't find any direct evidence -- it probably depends on terminology -- although I have no argument with the larger point).  Abram expressed a great deal of concern about copyright provisions in the Trans-Pacific Partnership treaty.
  • MOOCs at the University of Toronto have had more students attend them than have graduated from the university in its history.  Abram teaches at the University of Toronto and presumably knows what he's talking about here.  He indicated that a lot of Chinese and Indians sign up for MOOCs in the United States and Canada.  Later a question was addressed to Abram about how universities are going to pay for MOOCs, which are generally currently free.  He explained that this is a developing technology in its infancy and like many Internet startup technologies the path to profit is murky at the outset.  He felt that it was likely that MOOCs would stay free but that there would be some pay system if you wanted to get some kind of formal credit for the classes.
  • What we are facing now is not a digital divide, but an access divide.  It's not so much that people can't get their hands on computer technology, but that the resources that might empower them are inaccessible.
  • The average newspaper reader is 63 years old.  Coupled (interestingly) with this is his statement that 95% of American newspapers are owned by six Republican families.  At least to me this last bit isn't too surprising or worrying, as there has long been a tendency for business people to be Republican, and being Republican doesn't mean that you're a member of the Tea Party (a fact that I'm sure is no comfort to the Republicans who want nothing to do with the Tea Party).  However, this certainly does not bode well for either newspapers or the Republican party.
  • The biggest three textbook publishers are opening online high schools that will operate through public libraries.
  • The NSA has been photocopying the cover of every snail mail letter going through the postal system for the past 30 years (there's some level of confirmation of this claim in this article in the New York Times).
  • My paraphrase of what Abram said, trying to stay close to the original wording: "People give up something for the great experience that they have at Google and Amazon.  They give up nothing at the library and get a worse experience for it.  Until we solve this problem it will be a huge issue."  This is a huge problem for libraries and he is absolutely correct here.  Google and Amazon can use information that their customers wittingly or unwittingly give them, and those customers then get all kinds of recommendations and services that they love.  Privacy laws, which libraries are proud of upholding, mean that we cannot do that for our patrons, and then they get annoyed that we can't tell them what book they had checked out three months ago whose title they can't remember.  We potentially have a huge amount of data that would make Google or Amazon jealous, but our ethics mean that we don't dare keep it lest it end up in the wrong hands, and the NSA has not made us feel any more comfortable about doing that this year.
  • What will happen with privacy in libraries when our public are walking in wearing Google Glass?
  • The brand of libraries may indeed be books, but if you look closer it seems that people are getting something else that's valuable out of the equation that they don't even consciously associate with libraries.  We need to resolve this conflict.  Abram pointed to the cautionary tale of Vaseline which wanted to create a dry antiperspirant, but that product failed just on the fact that people didn't want to buy a dry antiperspirant from a company whose name was synonymous with something that notably wasn't very dry.
  • On an optimistic note Abram stated that librarians are the only professionals in the world that as a class have all of the proper skills to address the trends that Ojala detailed.  That doesn't mean much if we can't find it in ourselves to use those skills properly, but it is something to go forward on.
This was a fantastic session, even if there weren't many libraries who could point to a big daring project they had undertaken.  Maybe it will inspire some to do just that and they can report about them next year.

Tuesday, April 8, 2014

Computers in Libraries 2014 - Day 1 - Moving Ideas Forward

Unfortunately I missed the first 15 minutes or so of this presentation as I got back from lunch a little late (I am going to note in my conference evaluation this time around that 1 hour is a little tight for lunch when it's common to walk a half-mile to the place you're going to eat and that you might strike up an interesting conversation with someone else attending the conference).

The presentation I missed much of was from James Liebhardt of NCI Information Services, who was describing a process of developing services for patrons.  The part I caught made the sensible points that it's important to get products you are developing into your patrons' hands early and that waiting for a perfect product is not a good strategy.  It is best to see how people use the product you are developing and use some tests to evaluate it.

A couple specific kinds of tests that he mentioned for patron testing were:
  • The smoke test - put a link to a product that really isn't quite there (more the promise of a product) on the website to gauge interest.  If no one clicks on the link, then maybe you don't need to develop this product.  If the link becomes the most popular thing on the website focus all of your energies there.
  • The A/B test - Show half of the people looking for a kind of thing a beta product while you direct the other half to whatever you would have done before.  Find out if the people who use the beta product fare better.  (A minimal sketch of this kind of assignment follows this list.)
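For what it's worth, here is a minimal sketch of how that kind of A/B split might be implemented, assuming you can key the assignment on something stable like a session or patron ID; the details are illustrative rather than anything the presenter described:

```python
# Minimal A/B test sketch: deterministically assign each visitor to the
# "beta" or "control" experience based on a stable identifier, so the same
# person always sees the same version. Purely illustrative.
import hashlib

def assign_variant(visitor_id, beta_share=0.5):
    """Return 'beta' or 'control' for a given visitor, deterministically."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map the hash into [0, 1)
    return "beta" if bucket < beta_share else "control"

if __name__ == "__main__":
    # Route a few hypothetical sessions; outcomes for each group would then
    # be logged and compared.
    for visitor in ["session-123", "session-456", "session-789"]:
        print(visitor, assign_variant(visitor))
```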
When this presentation was complete there was a second presentation that I did get to see all of.  James Stephens from the University of Maryland Baltimore County described his experiences with an experimental product.  This was actually a pretty interesting presentation as a lot went wrong with this experimental product, although it wasn't a complete disaster by any means.  The stakes were low (failure meant the status quo) so it seems to have been a great learning experience.
 
In his instance the university has a lot of study rooms that need to be reserved on a website.  Students frequently don't know this, and there's no easy way to figure it out without going to the website, which you can't do easily (or know to do at all) if you are standing in front of the door to the study room.
 
So James thought (my paraphrase), "What if we buy a bunch of Raspberry Pis, hook them up to touch screens, and put them at the entrance of each study room?  That way the schedule for the room could be shown on the screen, and if someone wanted to reserve the room on the spot they could do so using the Raspberry Pi."  It is important to note here that there was a real problem James was trying to solve, and he had come up with an innovative way to try to fix it.  James observed in his presentation that it is critically important to approach things in this way, i.e. "I have this problem, how can I solve it?" rather than "I have this cool gadget, what problem might I be able to solve with it?"
 
This idea (which has progressed to a kind of functional product at this date) hit a host of obstacles.  First the people above him, who didn't want to look like they were just being cheap, wanted to know why they were using these $35 computers rather than just buying a regular computer and putting it there.  James' answer, which he hadn't really developed initially because he hadn't anticipated this line of questioning, was that this wasn't being cheap to be cheap, but rather being cheap because the product was adequate to the task and by being cheap they could do more (put the things all over the place).
 
After that initial hurdle came a number of other hurdles. 
  • The security enclosure for the device, which was necessary to keep it kind of neat and difficult to walk off with or tamper with, was more expensive than all of the other parts combined (about $200 for the enclosure). 
  • The enclosures were designed for iPads and the screens, although roughly iPad sized, rubbed against the enclosure, causing them to falsely register touch activity.
  • The first touch screen they got was defective and had to be shipped back to China adding a huge delay, so the first draft of the device had to be done with a non-touch screen which meant the product had to be temporarily changed to a display of the room's status rather than an actual station where the room could be reserved. 
  • The Raspberry Pi does not have built-in wireless and its wireless implementation is a little flaky, so it would randomly drop off the network, and code had to be written to make sure it was connected to the network and to reboot itself if it wasn't (a sketch of that kind of watchdog appears after this list).
  • People would unplug the thing at night so it wouldn't properly shut down.  After that happened a few times the SD card running the OS would get corrupted and it couldn't boot anymore and a new SD card with the image would have to be put in to replace it.
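For illustration, the connectivity watchdog mentioned above could be as simple as the following script run periodically from cron.  This is my own sketch, not the UMBC code; the gateway address, retry counts, and reliance on passwordless sudo reboot are all assumptions:

```python
#!/usr/bin/env python3
# Sketch of a network watchdog for a Raspberry Pi kiosk: if the device
# cannot reach a known-good host after several tries, reboot it so the
# wireless connection comes back. Illustrative only; the gateway address,
# retry counts, and use of `sudo reboot` are assumptions.
import subprocess
import time

GATEWAY = "192.168.1.1"  # assumed local gateway or other known-good host
RETRIES = 3
WAIT_SECONDS = 20

def network_up(host):
    """Return True if a single ping to the host succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "5", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main():
    for _ in range(RETRIES):
        if network_up(GATEWAY):
            return  # network is fine, exit quietly
        time.sleep(WAIT_SECONDS)
    subprocess.run(["sudo", "reboot"])  # still offline after all retries

if __name__ == "__main__":
    main()
```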
There were a few other issues as well.  These have mostly been sorted out, but it sounds like it was quite the learning experience.

James mentioned that there was also a project that they worked on there using a Raspberry Pi for digital signage, but he didn't have time to go into the details on what worked and what went wrong with that.

I personally have had a lot of experience with the kinds of things James described here (I haven't used a Raspberry Pi, but I have used several other small, flash-memory based, Linux computers) and I empathize entirely with his goals and with the obstacles he encountered. They are the kinds of things I would anticipate in such a project.  There is little that gives you the same level of satisfaction when you've completed such a project and have things working, although I could only imagine trying to work out those issues in a production environment of the type James was working in (so far, at least).

Computers in Libraries 2014 - Day 1 - Rock Your Library's Content with WordPress

As my library's website is currently running on WordPress and this is a relatively recent development I thought I'd go to this program where two academic librarians were going to talk about how they were using WordPress at their institutions.  There was a lot of stuff discussed here which I already knew, which I kind of figured would be the case, but there were a couple tidbits of useful information that I was able to glean as well.

The first half of this session was presented by Chad Haefele, the Emerging Technologies Librarian at UNC Chapel Hill and focused on using WordPress on a large scale library website, in this case the UNC Libraries site.

UNC Libraries used to have a site that numbered in the thousands of pages.  After transitioning onto WordPress the number of pages they now need to maintain has dropped drastically to about 250.

Of particular interest to me were the add-ons for WordPress that they used.  To get a responsive website they used the theme named (appropriately) Responsive.  They purchased three plugins: Formidable Pro, Elegant Themes, and Press Permit.  Press Permit sounds to me like something we might want to investigate as it improves the permissions controls in WordPress and allows some very fine control over who can post and edit what where.  Chad specifically recommended Press Permit regardless of the size of an organization's WordPress site.


Chad also talked a little about WordPress security, and particularly mentioned a hack that had bitten his personal site while it was out of date: the Pharma hack, which puts pharmaceutical ads on your website for you.  He strongly recommends doing WordPress updates as soon as they come out.


Chad's talk was followed by a talk from a different Chad, Chad Boeniger, the Head of Reference at Ohio University.  Back in 2006 this Chad had given a presentation in the same room on how he was using MediaWiki to host a research guide for the reference department.  This time he reported on how that MediaWiki implementation had been replaced by a WordPress implementation.  The reasons for this were easily summed up by demonstrating that MediaWiki had made few usability improvements over the past few years, while WordPress has been in extremely heavy development, adding many kinds of usability enhancements and other features.


I particularly found of interest the mention of a book called Trust Agents which proposes this method for developing a website that has answers to many questions:
  1. Get an email from someone (presumably who's seen your website, not a friend) with a question
  2. Answer the question and post the question and the answer (presumably anonymized) on your website
  3. Repeat
This apparently is partly how they have developed an exhaustive collection of questions and answers on their website.
The Ohio Chad also covered a number of features that he found useful in WordPress that are common with most PHP-based CMS platforms (persistent links to dynamic pages based on category ids placed in the URL) and mentioned a variety of plugins that he's found useful for harvesting statistics off of the WordPress site where his site is hosted.

Computers in Libraries 2014 - Day 1 - Super Searcher Tips

This session for me was kind of a guilty pleasure.  Every year I see the Super Searcher Tips program listed on the schedule but see something somewhere else on the schedule that is probably a little more relevant to my job and go to that instead.  This year I didn't have much of a conflict, so I went to this.

The session was presented by Mary Ellen Bates, who, at the beginning of this very session, was presented with the AIIP Marilyn Levine President's Award.  After receiving the award (she had been unable to attend the actual meeting where it had been officially presented the week before), she went straight into her presentation, which was a fantastic brain dump of search tools and tips.  Here they are.

For private search, Bates recommends motherpipe.com which is based on Bing and includes Twitter results. The servers are located in Germany.  You can get different results if you go to the .co.uk address instead of .com.


To find "long tail" results Bates recommends millionshort.com.  This site allows you to drop the first 10,000, 100,000, 1,000,000 etc. results from a Google results list so you see only the stuff that is at the end that normal people will never get to by scrolling through results.  This catches things that might be really good but haven't been optimized for search.  Milliontall.com is the reverse; it retrieves only the top sites.

A tip Bates suggests if you are using Wikipedia as a source is to compare different language versions of an article, for example the English and French articles on tracking.  The different language articles are rarely just translations of one another and you can get Google translate to get you a serviceable translation if you don't know the language.


The site social-searcher.com lets you search social networking sites and limit by the popularity of the posts.  You can search for posts with a certain number of likes/re-tweets/etc.  It also has quick social media analytics.  For example, a search for Obamacare shows generally positive feedback on Twitter and Google+ and slightly more negative feedback on Facebook.

Bates also mentioned on this line that Twitter has started improving its search.  Its search results now have nice facet-like limiting.


Also for searching Twitter posts is the site hashtagify.me.   Hashtagify.me finds related hashtags on Twitter so you can find top influencers and see popularity trends.  For instance "sustainability" links to the related tags: eco, car, environment, energy, renewable, green, business, climate change, climate, crowdenergyorg.


Bates suggests using Pocket for storing articles that you want to read later.  I personally use Instapaper for this kind of thing, but Pocket sounds like it has a few nice extra features like tagging and allowing you to export an archive as an HTML list.


Google has added a Library feature to Google Scholar.  This lets you save all of your citations in one place.  You can add labels for sorting.


Searchonymous is a Firefox plugin that allows you to do an anonymous Google search while staying logged into Google.  This can be quite useful, as Google modifies your results based on your search history and other things it knows about you.  Sometimes it can be useful to know what results the average Google user who isn't you is going to get when they do a search.


One interesting tip was for finding lists using Google. If, for instance, you want to find various lists of top anime (a random topic that happens to work pretty well with this example if you try it), but didn't want to limit it to just lists of 10 (or some other arbitrary number) you can search for "top 5...50 anime" and that will find lists ranging between 5 and 50 in length.


If you are looking for mashups of various kinds of data with Google Maps (and who isn't, really) a nice place to look is Google Maps Gallery.  You can browse or search the data (world bank, census, PolicyMap, etc.).  You can even add your own organization's maps.


One tip Bates had that I will admit to having used before this session is using Google Autocomplete to indicate alternatives to a product/service if you don't know what they might be.  Just go into Google and type something like "Roku vs." and then stop and see what shows up in the list of recommended searches that drops down.


Google's new site info card can be a handy feature.  It can be brought up when in search results you see the grey name of an organization followed by a triangle in light grey print.  When you click on this you'll get a brief description of the organization, generally from Wikipedia.  This can give you an idea of the legitimacy of an organization before you click on the link.


A feature to celebrate in Google Images is the ability now to add a number of Creative Commons filters to a search.  This is most helpful if you are looking for an image that you want to reuse in a specific context and you need to know that the licensing will be friendly to the purpose you have in mind.  To use this click on "Search Tools" after performing a search and then use the "Usage Rights" menu.

More an example of how to do something well than a tool in itself (although it certainly could be a useful page), Bates mentioned Google's Media Tools (google.com/get/mediatools), which are intended for journalists who use Google to get certain kinds of data.  It is a great way to "package" tools and a nice example of how to organize them for an audience.  The tools are arranged into categories like "Gather & Organize", "Engage", and "Visualize".


Bates' final tip was one for doing a job search, in this case most relevant to the corporate world.  It's kind of a hack, but it's an interesting hack to be sure.  It turns out that the company Taleo, owned by Oracle, provides back-end job posting services for many companies.  So even though you can't go to www.taleo.com and see a bunch of job sites (it actually just redirects to Oracle's information page on the Taleo service), you can search the domain and find jobs that have been posted only on the sites of the companies that use the Taleo service.  So the search "site:taleo.net intitle:career keyword(s)" would find jobs matching the keyword(s).  It doesn't work particularly well with phrases ("engineer" would be more successful than "electrical engineer").  One glitch that Bates pointed out, which is actually a bonus tip in itself, is that the Google search engine will work harder against its indexes if you ask more of it.  The above search with "intitle:career" and the same search with "intitle:careers" find fewer results combined than the search with "(intitle:career OR intitle:careers)", which in a pure system that gave you all of the results every time would not happen.  The more complicated search makes Google work harder and gives you more results.

Computers in Libraries 2014 - Day 1 - Keynote

The first keynote for this year's Computers in Libraries conference was made by David Weinberger, co-director of the Harvard Library Innovation Lab and author of several books including The Cluetrain Manifesto, Everything is Miscellaneous, and Too Big to Know.

He started his talk with two questions:
  • Why hacking now?
  • Why isn't every knife a Swiss Army Knife?
He proceeded in his talk to endeavor to answer these questions and provided a thoughtful presentation on the position in which libraries find themselves today.
 
On the question of "why hacking now", Weinberger first clarified that we were talking about "white hat hacking", i.e. thinking about a problem in a new and fresh way, rather than "black hat hacking" (stealing credit card data) or "bozo hacking" (thinking about a problem in a different, but ultimately doomed, way).

Then came the answer to the "why... now" part of the question.  Weinberger identified four major factors that make now an unusual time:
  • Everything is getting networked.  Weinberger made a point of classifying this as something quite different from everything going digital.  You can only go so far by changing something from an analog to digital format.  Changing the way people interact with one another through media changes the world.
  • Everything is being opened.  There is a shift from copyrighted material being the most prevalent kind of information toward Creative Commons-licensed and open access material being the most prevalent.  On this point I agree that this is a good and world-changing thing and is much more common than it used to be, although I'm less certain exactly how prevalent it is.  I'm sure, however, that it is extremely prevalent in the world of academia, where Weinberger is coming from.
  • There is engagement with communities at all points in the product lifecycle.  Authors can interact with readers while books are being written and companies can interact with consumers while products are being developed in a way that has never to this point been possible.
  • There is a new, networked ecosystem.  It used to be the case that our users thought of the library early on if they needed a book or information.  Now we are a late thought after users first look at Amazon or Google.  We have an opportunity to turn this around by repositioning ourselves.
Then Weinberger tackled the question of Swiss Army knives.  He explained that although you can buy a massive Swiss Army knife that has nearly every tool you could possibly use, it's expensive, cumbersome, and awkward.  In a world with Star Trek replicators that could make any product on demand, you'd never ask for one of those; you'd just ask for the tool you needed at the time.  The reason we buy Swiss Army knives is that we anticipate needing tools at moments when we can't know which ones.  That anticipation is present in many industries and has shaped much of the world in which we have developed.
 
For instance, publishers filter out materials they don't think will sell well so they can spend their resources creating products that they anticipate will sell well.  This means there is an awful lot of stuff (some of it bad, some of it good) that doesn't get published.  The Internet has no such built-in filter -- anything that people want to publish there gets published.  It is then the place of "curators" to filter in the content that is good and relevant.  The bad stuff is still there for people to find if they really want it, as are other good things.
 
After addressing these two questions, Weinberger pointed to three ways forward for libraries that get around the anticipation problem.
 
The first is the platform approach -- placing open data in a library portal.  There is a lot of open data and metadata available through all kinds of APIs, and libraries can collect and organize that data to create new resources.  Examples of this are the Digital Public Library of America and Harvard's StackLife.  StackLife is built on open data from Harvard's LibraryCloud, and if a different library wanted to do something with that data that StackLife doesn't do, it could take the data and build its own tool.
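
To make the platform idea concrete, here is a minimal sketch of pulling open metadata from the Digital Public Library of America's public API (the endpoint and field names are my recollection of DPLA's v2 API, so treat them as assumptions; you also need to request a free api_key from DPLA):

    import json
    import urllib.parse
    import urllib.request

    def dpla_titles(query, api_key, limit=5):
        """Fetch a few matching item titles from the DPLA items endpoint."""
        params = urllib.parse.urlencode(
            {"q": query, "api_key": api_key, "page_size": limit})
        url = "https://api.dp.la/v2/items?" + params
        with urllib.request.urlopen(url) as response:
            results = json.loads(response.read().decode("utf-8"))
        # Each doc's descriptive metadata lives under "sourceResource".
        return [doc["sourceResource"].get("title") for doc in results["docs"]]

    # dpla_titles("Odyssey", api_key="YOUR_KEY")  # hypothetical key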
 
Weinberger gave an example of a physical open platform in Harvard's "Labrary", where students can place their own projects and exhibits.  Another thing Weinberger mentioned as an aside that will be a source of open data is the AwesomeBox.  The concept is that libraries have two book return boxes -- the regular one and the AwesomeBox.  If something is returned in the AwesomeBox it gets additionally checked in as being "awesome", creating an anonymized, low-friction way of building a list of loved materials.
 
Another way forward for libraries is to take advantage of linked open data.  Linked data allows the creation of connections between sets of data that use different terms for the same facets.  For example, if one dataset uses the term "Author" and another uses the term "Content_Creator", and an application pulling in the data can learn from the sources that those terms really mean the same thing, then the data can be combined into a single set.
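
Here is a minimal sketch of that idea with made-up records: once an application knows the two field names are equivalent (the sort of thing an owl:equivalentProperty assertion or a simple crosswalk table would tell it), it can normalize them to a common vocabulary and merge the datasets.

    # Hypothetical field equivalence for this example.
    FIELD_MAP = {"Content_Creator": "Author"}

    def normalize(record):
        """Rename equivalent fields to the common vocabulary."""
        return {FIELD_MAP.get(field, field): value for field, value in record.items()}

    catalog_a = [{"Author": "James Joyce", "Title": "Ulysses"}]
    catalog_b = [{"Content_Creator": "Homer", "Title": "The Odyssey"}]

    combined = [normalize(r) for r in catalog_a + catalog_b]
    print(combined)  # every record now uses "Author"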
 
Finally, Weinberger encouraged the creation of graphs along the lines of Facebook's social graph.  Graphs allow the visualization and exploration of connections that we know about but that are harder to see in raw data.  One example he gave was the set of connections between Homer, The Odyssey, James Joyce, Ulysses, Dublin, and the film O Brother, Where Art Thou?
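
Here is a minimal sketch of that example as a graph, using the networkx library; the specific relationships are my own paraphrase of the talk, not data Weinberger showed:

    import networkx as nx

    g = nx.Graph()
    g.add_edge("Homer", "The Odyssey", relation="wrote")
    g.add_edge("James Joyce", "Ulysses", relation="wrote")
    g.add_edge("Ulysses", "The Odyssey", relation="retells")
    g.add_edge("Ulysses", "Dublin", relation="is set in")
    g.add_edge("O Brother, Where Art Thou?", "The Odyssey", relation="retells")

    # A graph makes it easy to surface connections that are hard to see in
    # raw data, e.g. the chain linking a Coen brothers film to Dublin.
    print(nx.shortest_path(g, "O Brother, Where Art Thou?", "Dublin"))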
 
These last two items in particular are variations on what libraries have always done, just updated to address new challenges.  We've always strived to create consistency among data so that items are easy to find and we've always focused on the ability of people to make connections between different kinds of things to help users find what they are looking for.
 
Weinberger closed by summarizing that to hack libraries we need to hack the future, to enrich our existing assets, to create an infrastructure of knowledge, and to fight the trend, prevalent on the Internet in particular, for people with an opinion to search only for items that confirm that opinion rather than for the truth of the matter.
 
 

Monday, April 7, 2014

Computers in Libraries 2014 - Day 0 - Gadgets & Gaming Session

Once again I started off Computers in Libraries with the laid-back Gadgets & Gaming session, where miscellaneous technology toys, typically useful for education, are put on display so that curious librarians can see how they work and whether there's potential for their use in their library environments.  Many of the items on display this year had been present at previous Gadgets & Gaming sessions, like Sphero, a remote-controlled ball slightly larger than a billiard ball that can be driven using any Bluetooth-enabled tablet.  However, several things were new and interesting, and here are a few pictures.

Cubelets next to a paper describing them
Cubelets
The cubes in this picture are items called Cubelets.  They are each self-contained, pre-programmed bits of robotics.  They have magnetic connectors on some sides and other functional parts on one or more sides, depending on the function of the Cubelet in question.  They are compatible with Legos, hence the Lego squares sitting next to them.  The Cubelets can be chained together to create a kind of logical action.  So if a power Cubelet is connected to a sensor Cubelet which is connected to a light Cubelet, the light Cubelet will turn on when the sensor Cubelet is getting power and senses something appropriate.  They are kind of interesting to play with and set a nice low bar for entry into basic robotics.
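
Just to spell out the chain logic, here is a toy model in Python (my own sketch, not anything the Cubelets actually run):

    def light_cubelet_on(power_connected, sensor_triggered):
        """The light block lights only when the sensor block has power and senses something."""
        return power_connected and sensor_triggered

    print(light_cubelet_on(True, True))   # True:  power -> sensor -> light, chain complete
    print(light_cubelet_on(True, False))  # False: powered, but nothing sensed
    print(light_cubelet_on(False, True))  # False: no power block in the chain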

The Finch robot with its 'beak' glowing blue.
Finch
The Finch is a somewhat more advanced piece of robotics, perhaps a little too advanced to really be of much use during this gadget petting zoo.  It looked kind of interesting though.  It's a basic robot that is powered via a USB cable connected to a computer.  The computer can send instructions to the robot over the USB cable, and those instructions can be written in a veritable bevy of languages ranging from basic, kid-friendly ones (Scratch 2.0) to much more difficult ones (C++).  It's fairly inexpensive at $99.  It would have been nice to see it do something beyond having its beak glow, though; the presenters hadn't managed to get that far with the device.  In their defense, it's also kind of hard to quickly get people writing functional code for a device in a casual setting where people are chatting, drinking cans of pop, and eating munchies.
The Robo 3D printer working on a project
Robo 3D Printer
The Robo is a serviceable, attractive, inexpensive (at $700) 3D printer with similar specs to the much more popular MakerBot.  It can use either ABS plastic or the more environmentally friendly, plant-based PLA plastic.  Its primary drawback seems to be that there is no enclosure around the main print area.  I don't know exactly how it compares with the just-announced, and hugely overfunded on Kickstarter, Micro 3D printer, but I think they are both signs that this technology will continue to improve and get cheaper.
A disembodied arm holds a 3Doodler while its custodian points at a sculpture it has made
3Doodler
The 3Doodler was a Kickstarter product that has made it into the wild.  It is a hand-held 3D printer, meaning the only computer control it has comes from your brain, and it's only as good at drawing a three-dimensional object as you are.  It is kind of fun though.  Using it isn't too dissimilar to using a hot-glue gun, except that the "glue" rapidly firms up into a stiff, plasticky filament and, rather than squeezing a trigger, you push buttons on the device.  It's not as revolutionary as the 3D printer, but it's not as expensive either and provides a more immediate creative challenge.

Those were some of the more interesting additions at this year's session.