This well-meaning but not particularly enlightening presentation was given by Andrew Shaping of the Jack Tarver Library at Mercer University. The basic point was that it is important to call projects finished, or at least ready for prime time, once they've reached a level of "good enough," rather than holding onto them until they are considered perfect. Striving for perfection is a noble goal, but since there is no clear definition of perfection for any specific project, it can mean never completing the project or delaying its release unnecessarily. In the worst case, a fixation on perfection can make something worse, for which the presenter gave the example of the Cake Wrecks website, which shows cakes that people should have stopped decorating several tubs of fondant ago.
Instead it is better to have unfinished but functional products out as perpetual betas (à la Google Mail) and to be willing to recognize quickly when something is not working and to discard it.
Thursday, April 10, 2014
Computers in Libraries 2014 - Day 2 - OCLC Breakfast
On Tuesday morning I attended the OCLC breakfast, where OCLC (the Online Computer Library Center, a nonprofit that provides a variety of library services and plays a particularly important role in cataloging and interlibrary loan) provides a rundown of various products and services they are working on. The food at the OCLC breakfast is a little nicer than that in the conference hall, and it's always interesting to hear what OCLC has going on.
This year they mentioned the following programs and products, some of which I was more interested in than others.
They recently had a symposium on MOOCs (titled "The Hope & Hype"). The assertion is that MOOCs are going to change the way the library operates, as libraries will be asked to support students' ability to connect to these online courses and to support them in other ways. There is a recording of this symposium on the OCLC website. They are also coming out with a publication on MOOCs and libraries.
They are working on an interesting project called WorldCat Identities. This tool flips the WorldCat catalog on its head: rather than looking at what titles are owned by certain libraries, you're looking at a combined list of everything that has been produced. For instance, you can bring up an author and see all of her works, or find out what cookbooks are out there that cover specific cuisines. From the brief testing I've done, the site seems a little buggy, but the idea has potential.
They spent a lot of time talking about WorldShare Management Services, a cloud-based integrated library system that they are marketing. They say that 225 libraries will be using it by the end of the year. Using something like this frees libraries from many of the traditional problems they have had with conventional library systems, like keeping staff clients up to date, making sure the catalog server gets upgraded, and making sure the clients are installed and configured properly wherever they are needed. WorldShare Management Services is all web-based and takes heavy advantage of OCLC's position as a central clearinghouse for catalog records, hypothetically making maintenance of a library's catalog extremely easy.
OCLC is just completing a shift from the older interlibrary loan system to a new system called WorldShare Interlibrary Loan. The new platform has a number of advantages; it is also required for all libraries that want to continue doing interlibrary loan through OCLC, and it is available at no additional cost.
Of greater interest to me was a discussion of a new product called WorldCat Discovery Services. This is another free upgrade that will eventually phase out three services currently used by the general public for searching OCLC content: FirstSearch, WorldCat.org, and WorldCat Local. Libraries can sign up for their own unique URL, which they can customize. The product doesn't yet have all of the features planned for its completed version, which they anticipate by November. It does sound like it will be a nice upgrade, though, with separate staff and patron interfaces and responsive design, among other features. All libraries will be required to move from FirstSearch to WorldCat Discovery Services by December of 2015.
Computers in Libraries 2014 - Day 1 - Libraries & the Big Picture
This two-timeblock session was a great one, crammed with lots of hot information and insight, even if it didn't go quite in the direction that the organizers had planned.
The session was intended to be divided into four parts. In the first part Kathryn Zickuhr from the Pew Research Center would provide the results of the third of Pew's planned three studies of public libraries. In the second, Marydee Ojala would discuss the IFLA trend report. In the third, Stephen Abram would provide some insight into the observations from these reports. In the fourth, conference attendees in the room would provide examples of what they were doing that was innovative, interesting, and maybe the next big thing. The fourth part kind of fizzled: people were too interested in asking follow-up questions of the three presenters, the presenters were only too willing to provide great replies, and apparently not many people in the room had done anything in the past year that they thought was world-shakingly innovative.
There was a lot of information in this session and I'll try to cover the main points here, which is much easier to do with the first two presenters who had neatly organized PowerPoint presentations, as opposed to Stephen Abram, who seemed to just be speaking off the cuff in the scary, funny, information-dense way that only Stephen Abram can.
The first presentation featured new data from the Pew Research Center on how the American population in general is engaged with public libraries. Two earlier studies in this series have been done, one on the state of reading in America and one on library services. This study was on typology -- classifying the American populace into broad types based on library usage. This is kind of the inverse of the typical study they have done. Prior studies have looked at the behavior of different groups of people broken out by race, gender, and socio-economic status (e.g., "How much do middle-class Asian women between the ages of 30 and 45 read books?"). This one instead looked at types of library users (e.g., heavy library users, people who never use the library) and then tried to discover what these groups might have in common.
The full results of this and other research can be found at libraries.pewinternet.org, but here is a quick rundown of what was found.
Pew divided the responses they had to their phone survey into four basic types and each type was further divided into two or three subtypes. The four basic types described the level of engagement that the interviewed persons had with their local library and these were high, medium, low and none.
Within the high type there were "Library Lovers", about 10% of the respondents, and "Information Omnivores", which was about 30% of the respondents.
"Library Lovers" frequently use libraries and have high levels of appreciation for libraries. They found that this type included many parents, students and job seekers. They tended to have high levels of education.
"Information Omivores" made heavy use of libraries, but less so than the "Library Lovers". These respondents had the highest rates of technology use, education, employment, and household income.
The respondents in the medium level of engagement were divided into "The Solid Center", 30% of all respondents, and "Print Traditionalists", 9% of all respondents. Zickuhr didn't have much to say about "The Solid Center": they were the largest group overall and a broad swathe of the population. The "Print Traditionalists", on the other hand, tended to live in rural areas, farther away from libraries. The highest number of rural southerners was to be found in this group.
Zickuhr indicated that the most interesting information was to be found in the groups with low levels of engagement or who were not engaged with their libraries. In the low type there were three subtypes: "Not for Me", comprising 4% of respondents, the "Young and Restless", making up 7% of respondents, and the "Rooted and Roadblocked", making up another 7% of respondents.
The "Not for Me" type had a strikingly less positive view of libraries in their communities. They were people who were more likely to have had negative experiences at libraries, although they typically weren't this way because they just relied on the Internet for all of their information needs.
The "Young and Restless" type generally didn't even know where the nearest library was. For this group libraries aren't even something they consider. They tended, as the type name indicates, to be young people, frequently who had recently moved to an area.
The "Rooted and Roadblocked" type tended to have a positive view of libraries but had a lot of difficulties that made it impractical to make regular use of libraries. They tended to be older, many of them living with disability, and they frequently had experienced a recent illness in the family.
The people who had no engagement at all with libraries were divided into two groups: "Distant Admirers", 10% of respondents, and "Off the Grid", 4% of respondents.
The "Distant Admirers" were people who didn't personally use libraries, although 40% of them had family members that used libraries. The highest number of Hispanics were in this subtype. They tended to view libraries quite positively.
Finally the "Off the Grid" subtype just had little exposure to libraries. They seemed to be people who engage less frequently in community and social activities. Many of them live in rural areas and have low household incomes.
Zickuhr suggested taking the data from the study and cross tabulating it with the community to establish working hypotheses that could be tested. For instance, if a community has a high number of Hispanics, it is statistically more likely based on the results of Pew's survey to have more people in the "Distant Admirers" category. Libraries could test this hypothesis with targeted surveys and small studies, and if it was found to have validity that would provide an opportunity to reach out to the "Distant Admirers" in their community and try to turn them into library users.
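Zickuhr's cross-tabulation suggestion can be sketched in a few lines. This is only an illustration with invented data -- the group and engagement labels below are hypothetical placeholders, not Pew's actual survey fields:

```python
from collections import Counter

# Hypothetical local survey responses: (demographic group, engagement type).
responses = [
    ("hispanic", "distant_admirer"),
    ("hispanic", "distant_admirer"),
    ("hispanic", "solid_center"),
    ("non_hispanic", "library_lover"),
    ("non_hispanic", "solid_center"),
]

# Cross-tabulate: how often each engagement type appears within each group.
crosstab = Counter(responses)
totals = Counter(group for group, _ in responses)

for (group, engagement), count in sorted(crosstab.items()):
    share = count / totals[group]
    print(f"{group:12s} {engagement:16s} {share:.0%}")
```

If the local share of "Distant Admirers" among a given group came out well above or below what Pew's national numbers predict, that would be exactly the kind of working hypothesis worth testing with a targeted follow-up survey.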
Pew will be coming out with a library engagement quiz this summer, and all who are interested are encouraged to sign up for the newsletter advertised on the Pew libraries site.
Following Zickuhr was Marydee Ojala with a summary of the International Federation of Library Associations' trend report. Ojala began by providing a little information about IFLA itself, explaining that it is an umbrella organization of library associations, and members of those associations are also members of it. The American Library Association is one such member association, for instance, so most librarians in the United States, by being members of that organization, are also members of IFLA.
IFLA decided to create a report to identify general information trends. The report's scope extends beyond libraries, although libraries should concern themselves with its findings. The full details on the report can be found at trends.ifla.org.
The report identifies five trends which Ojala briefly covered. They were:
- New technologies will both expand and limit who has access to information
The "both expand and limit" part is the key point of this trend. Although things like the Internet clearly expand the amount of information people have available to them, there are countervailing trends and complex factors that are likely to limit information as well. Copyright is a major factor here as copyright laws have become more strict and copyright protection measures are widely use to make sure copyright is not violated. However this means that information that would be leaving copyright is not (items published since 1923 have not been entering the public domain, which has put objects at risk of being lost before they can be widely distributed without fear of violating copyright).
Information literacy is also increasingly important. If you know how to find information, it is frequently there for you, but if you aren't well versed in the art of finding information, it may be even harder to find what you need than it was before the Internet.
For each trend, Ojala had a catchy phrase that tried to get to the heart of the matter, and for this trend the phrase was "The world's information at your fingertips - but what can you do with it?"
- Online education will democratise [the British spelling is used because of the document's international flavor] and disrupt global learning
The growing popularity of free online courses (commonly called MOOCs), informal learning, and open access to university class syllabi and lectures means that anyone interested in learning about topics that previously might have been beyond their reach now can. It is unclear what this means for formal higher education, where costs have been skyrocketing for decades, and for the quality of education.
The phrase here was: "If education is free then how much is it really worth?"
- The boundaries of privacy and data protection will be redefined
To see what this item is about, you hardly need to look further than this year's headlines. The U.S. and U.K. governments have been publicly revealed as having harvested wholesale information on private citizens for years. Is the person who revealed this a traitor, a hero, or something else? Part of the reason the government was so attracted to tapping the likes of Google is that we now willingly contribute to companies piles of information that would have been unthinkable decades ago.
The phrase here was "Who is profiting from your personal information?" - Hyper-connected societies will listen to and empower new voices and groups.
A networked society has allowed groups that were previously too small, powerless, and/or disparate to join together in ways that were not possible before. It also seems to be leading, in some cases, to a situation where a reinforcing feedback loop of opinion makes individuals less interested in trying to understand where others are coming from, and consequently less willing to compromise. This is resulting in greater fragmentation of our political parties and upheaval in the political landscape.
The phrase here was "Are you ready for cyber politics?" - The global information environment will be transformed by new technologies
The prevalence of mobile devices that always know where you are, "Internet of things" devices that know details like how much energy you use and how warm you like your house to be, potentially disruptive technologies like 3D printing and Bitcoin, and the creation of a global information economy are so novel that we aren't really sure what kind of impact they will have on our society.
The phrase here was "When your phone, your car, and your wristwatch know where you are at all times, who runs your life?"
The trends have implications for libraries, for information providers, and for each individual. We need to be thinking about them, finding out what they are going to mean, and working out how to deal with them.
Following Ojala was Stephen Abram's talk, which is rather difficult to summarize. He tends to quickly stack up a lot of disparate facts, some of which sound difficult to believe but plausible (and hearing them from him makes you feel that you need to believe them), and joins them together into a coherent argument for action. It's perhaps not too dissimilar to listening to a really good conspiracy theorist, except that there is no clear conspiracy and little of it can be dismissed by Occam's Razor.
Some of the points I managed to take down were:
- Copyright is a major issue going forward. The 10 biggest contributors to the presidential campaign are also the 10 largest copyright holders (I looked briefly for a source for this stat and couldn't find any direct evidence -- it probably depends on terminology -- although I have no argument with the larger point). Abram expressed a great deal of concern about copyright provisions in the Trans-Pacific Partnership treaty.
- MOOCs at the University of Toronto have had more students attend them than have graduated from the university in its history. Abram teaches at the University of Toronto and presumably knows what he's talking about here. He indicated that a lot of students from China and India sign up for MOOCs in the United States and Canada. Later a question was addressed to Abram about how universities are going to pay for MOOCs, which are currently generally free. He explained that this is a developing technology in its infancy, and like many Internet startup technologies the path to profit is murky at the outset. He felt that it was likely that MOOCs would stay free but that there would be some pay system for those who wanted formal credit for the classes.
- What we are facing now is not a digital divide, but an access divide. It's not so much that people can't get their hands on computer technology, but that the resources that might empower them are inaccessible.
- The average newspaper reader is 63 years old. Coupled (interestingly) with this is his statement that 95% of American newspapers are owned by six Republican families. At least to me this last bit isn't too surprising or worrying, as there has long been a tendency for business people to be Republican, and being Republican doesn't mean that you're a member of the Tea Party (a fact that I'm sure is no comfort to the Republicans who want nothing to do with the Tea Party). However, this certainly does not bode well for either newspapers or the Republican party.
- The biggest three textbook publishers are opening online high schools that will operate through public libraries.
- The NSA has been photocopying the cover of every snail mail letter going through the postal system for the past 30 years (there's some level of confirmation of this claim in this article in the New York Times).
- My paraphrase of what Abram said, trying to stay close to the original wording: "People give up something for the great experience that they have at Google and Amazon. They give up nothing at the library and get a worse experience for it. Until we solve this problem it will be a huge issue." This is a huge problem for libraries, and he is absolutely correct here. Google and Amazon can use the information that their customers wittingly or unwittingly give them, and those customers then get all kinds of recommendations and services that they love. Privacy laws, which libraries are proud of upholding, mean that we cannot do that for our patrons, and then they get annoyed that we can't tell them what book they had checked out three months ago whose title they can't remember. We potentially have a huge amount of data that would make Google or Amazon jealous, but our ethics mean that we don't dare keep it lest it end up in the wrong hands, and the NSA has not made us feel any more comfy about doing that this year.
- What will happen with privacy in libraries when our public are walking in wearing Google Glass?
- The brand of libraries may indeed be books, but if you look closer it seems that people are getting something else valuable out of the equation that they don't even consciously associate with libraries. We need to resolve this conflict. Abram pointed to the cautionary tale of Vaseline, which wanted to create a dry antiperspirant; the product failed simply because people didn't want to buy a dry antiperspirant from a company whose name was synonymous with something that notably wasn't very dry.
- On an optimistic note Abram stated that librarians are the only professionals in the world that as a class have all of the proper skills to address the trends that Ojala detailed. That doesn't mean much if we can't find it in ourselves to use those skills properly, but it is something to go forward on.
This was a fantastic session, even if there weren't many libraries who could point to a big daring project they had undertaken. Maybe it will inspire some to do just that and they can report about them next year.
Tuesday, April 8, 2014
Computers in Libraries 2014 - Day 1 - Moving Ideas Forward
Unfortunately I missed the first 15 minutes or so of this presentation as I got back from lunch a little late (I am going to note in my conference evaluation this time around that one hour is a little tight for lunch when it's common to walk a half-mile to the place you're going to eat, and when you might strike up an interesting conversation with someone else attending the conference).
The presentation I missed much of was from James Liebhardt of NCI Information Services, who was describing a process of developing services for patrons. The part I caught made the sensible points that it's important to get products you are developing into your patrons' hands early and that waiting for a perfect product is not a good strategy. It is best to see how people use the product that you are developing and use some tests to evaluate it.
A couple specific kinds of tests that he mentioned for patron testing were:
- The smoke test - put a link to a product that isn't really quite there yet (more the promise of a product) on the website to gauge interest. If no one clicks on the link, then maybe you don't need to develop this product. If the link becomes the most popular thing on the website, focus all of your energies there.
- The A/B test - show half of the people looking for a given thing a beta product while directing the other half to whatever you would have done before. Find out whether the people who use the beta product fare better.
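The A/B split can be sketched in a few lines of code. This is only an illustration, not anything shown in the session; the patron ids, experiment name, and page paths below are invented, and hashing the user id is just one common way to get a stable 50/50 assignment so a returning visitor always sees the same variant:

```python
import hashlib

def ab_variant(user_id: str, experiment: str = "beta-catalog") -> str:
    """Deterministically assign a user to 'A' (control) or 'B' (beta).

    Hashing the user id together with the experiment name gives a
    stable split, so the same user always lands in the same bucket.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# Route each patron either to the beta product or to the existing page.
for uid in ["patron-001", "patron-002", "patron-003"]:
    page = "/beta-search" if ab_variant(uid) == "B" else "/classic-search"
    print(uid, ab_variant(uid), page)
```

With the assignment stable per user, you can later compare outcomes (task completion, return visits) between the two buckets to see whether the beta fares better.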
When this presentation was complete there was a second presentation that I did get to see all of. James Stephens from the University of Maryland Baltimore County described his experiences with an experimental product. This was actually a pretty interesting presentation as a lot went wrong with this experimental product, although it wasn't a complete disaster by any means. The stakes were low (failure meant the status quo) so it seems to have been a great learning experience.
In his instance the university has a lot of study rooms that need to be reserved on a website. Students frequently don't know this and there's no easy way to figure this out without going to the website which you can't do easily (or know to do if you can) if you are standing in front of the door to the study room.
So James thought (my paraphrase), "What if we buy a bunch of Raspberry Pis, hook them up to touch screens, and put them at the entrance of each study room? That way the schedule for the room could be shown on the screen, and if someone wanted to reserve the room on the spot they could do so using the Raspberry Pi." It is important to note here that there was a real problem that James was trying to solve and he had come up with an innovative way to try and fix it. James observed in his presentation that it is critically important to approach things in this way, i.e. "I have this problem, how can I solve it?" rather than "I have this cool gadget, what problem might I be able to solve with it?"
This idea (which has progressed to a kind of functional product at this date) hit a host of obstacles. First the people above him, who didn't want to look like they were just being cheap, wanted to know why they were using these $35 computers rather than just buying a regular computer and putting it there. James' answer, which he hadn't really developed initially because he hadn't anticipated this line of questioning, was that this wasn't being cheap for the sake of being cheap, but rather being cheap because the product was adequate to the task, and by being cheap they could do more (put the things all over the place).
After that initial hurdle came a number of other hurdles.
- The security enclosure for the device, which was necessary to keep it kind of neat and difficult to walk off with or tamper with, was more expensive than all of the other parts combined (about $200 for the enclosure).
- The enclosures were designed for iPads, and the screens, although roughly iPad-sized, rubbed against the enclosure, causing them to falsely register touch activity.
- The first touch screen they got was defective and had to be shipped back to China, adding a huge delay. The first draft of the device therefore had to be done with a non-touch screen, which meant the product had to be temporarily changed to a display of the room's status rather than an actual station where the room could be reserved.
- The Raspberry Pi does not have built-in wireless, and the wireless setup they added was a little flaky, so the device would randomly drop off the network. Code had to be written to make sure it was connected to the network and to reboot itself if it wasn't.
- People would unplug the thing at night, so it wouldn't properly shut down. After that happened a few times the SD card running the OS would get corrupted so it couldn't boot anymore, and a new SD card with the image would have to be put in to replace it.
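The connectivity-checking code mentioned above wasn't shown in the session, but a minimal sketch of such a watchdog might look like the following. The probe host, port, and timing values are my own assumptions, and rebooting via sudo presumes the script's user has been granted that permission:

```python
import socket
import subprocess
import time

# Hypothetical settings -- adjust for the local network.
PROBE_HOST = "192.168.1.1"   # e.g. the building's gateway or a known server
PROBE_PORT = 80
CHECK_INTERVAL = 60          # seconds between connectivity checks
MAX_FAILURES = 3             # consecutive failures before rebooting

def network_up(host: str = PROBE_HOST, port: int = PROBE_PORT,
               timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watchdog() -> None:
    """Check connectivity forever; reboot after too many failed checks."""
    failures = 0
    while True:
        failures = 0 if network_up() else failures + 1
        if failures >= MAX_FAILURES:
            subprocess.run(["sudo", "reboot"])  # last resort: restart the Pi
            failures = 0
        time.sleep(CHECK_INTERVAL)

# On the Pi, watchdog() would be started at boot (e.g. from an init script).
```

A reboot is a blunt instrument, but on an unattended kiosk it is often the simplest way to recover from a wedged wireless driver.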
James mentioned that there was also a project that they worked on there using a Raspberry Pi for digital signage, but he didn't have time to go into the details on what worked and what went wrong with that.
I personally have had a lot of experience with the kinds of things James described here (I haven't used a Raspberry Pi, but I have used several other small, flash-memory based, Linux computers) and I empathize entirely with his goals and with the obstacles he encountered. They are the kinds of things I would anticipate in such a project. There is little that gives you the same level of satisfaction when you've completed such a project and have things working, although I could only imagine trying to work out those issues in a production environment of the type James was working in (so far, at least).
Computers in Libraries 2014 - Day 1 - Rock Your Library's Content with WordPress
As my library's website is currently running on WordPress and this is a relatively recent development I thought I'd go to this program where two academic librarians were going to talk about how they were using WordPress at their institutions. There was a lot of stuff discussed here which I already knew, which I kind of figured would be the case, but there were a couple tidbits of useful information that I was able to glean as well.
The first half of this session was presented by Chad Haefele, the Emerging Technologies Librarian at UNC Chapel Hill and focused on using WordPress on a large scale library website, in this case the UNC Libraries site.
UNC Libraries used to have a site that numbered in the thousands of pages. After transitioning onto WordPress the number of pages they now need to maintain has dropped drastically to about 250.
The other presenter, Ohio University's Chad (more on his talk below), also covered a number of features he finds useful in WordPress that are common to most PHP-based CMS platforms (persistent links to dynamic pages based on category IDs placed in the URL) and mentioned a variety of plugins he has found useful for harvesting statistics from the site where his WordPress installation is hosted.
Of particular interest to me were the add-ons for WordPress that they used. To get a responsive website they used the theme named (appropriately) Responsive. They purchased three plugins: Formidable Pro, Elegant Themes, and Press Permit. Press Permit sounds to me like something we might want to investigate as it improves the permissions controls in WordPress and allows some very fine control over who can post and edit what where. Chad specifically recommended Press Permit regardless of the size of an organization's WordPress site.
Chad also talked a little about WordPress security, and particularly mentioned a hack that had bitten an out-of-date personal site of his, called the Pharma hack, which puts pharmaceutical ads on your website for you. He strongly recommends applying WordPress updates as soon as they come out.
Chad's talk was followed by a talk from a different Chad, Chad Boeniger, the Head of Reference at Ohio University. Back in 2006 this Chad had given a presentation in the same room on how he was using MediaWiki to host a research guide for the reference department. This time he reported on how that MediaWiki implementation had been replaced by a WordPress implementation. The reasons were easily summed up by pointing out that MediaWiki has made few usability improvements over the past few years, while WordPress has been under extremely heavy development, adding many usability enhancements and other features.
I particularly found of interest the mention of a book called Trust Agents which proposes this method for developing a website that has answers to many questions:
- Get an email from someone (presumably someone who's seen your website, not a friend) with a question
- Answer the question and post the question and the answer (presumably anonymized) on your website
- Repeat
This apparently is partly how they have developed an exhaustive collection of questions and answers on their website.
Computers in Libraries 2014 - Day 1 - Super Searcher Tips
This session for me was kind of a guilty pleasure. Every year I see the Super Searcher Tips program listed on the schedule but see something somewhere else on the schedule that is probably a little more relevant to my job and go to that instead. This year I didn't have much of a conflict, so I went to this.
The session was presented by Mary Ellen Bates, who, at the beginning of this very session, was presented with the AIIP Marilyn Levine President's Award. After receiving the award (she had been unable to attend the meeting where it had been officially presented the previous week) she went straight into her presentation, which was a fantastic brain-dump of search tools and tips. Here they are.
For private search, Bates recommends motherpipe.com, which is based on Bing and includes Twitter results. The servers are located in Germany. You can get different results if you go to the .co.uk address instead of .com.
To find "long tail" results Bates recommends millionshort.com. This site allows you to drop the first 10,000, 100,000, 1,000,000, etc. results from a Google results list so you see only the stuff at the end that normal people will never reach by scrolling through results. This catches things that might be really good but haven't been optimized for search.
Milliontall.com is the reverse; it retrieves only the top sites.
A tip Bates suggests if you are using Wikipedia as a source is to compare different language versions of an article, for example the English and French articles on tracking. The different language articles are rarely just translations of one another, and Google Translate can get you a serviceable translation if you don't know the language.
The site social-searcher.com lets you search social networking sites and limit by the popularity of the posts. You can search for posts with a certain number of likes/re-tweets/etc. It also has quick social media analytics. For example, a search for Obamacare shows generally positive feedback on Twitter and Google+ and slightly more negative feedback on Facebook.
Bates also mentioned along these lines that Twitter has started improving its search; its search results now have nice facet-like limiting.
Also for searching Twitter posts is the site hashtagify.me, which finds related hashtags on Twitter so you can find top influencers and see popularity trends. For instance, "sustainability" links to the related tags eco, car, environment, energy, renewable, green, business, climate change, climate, and crowdenergyorg.
Bates suggests using Pocket for storing articles that you want to read later. I personally use Instapaper for this kind of thing, but Pocket sounds like it has a few nice extra features like tagging and allowing you to export an archive as an HTML list.
Google has added a Library feature to Google Scholar. This lets you save all of your citations in one place. You can add labels for sorting.
Searchonymous is a Firefox plugin that allows you to do an anonymous Google search while logged into Google.
This can be quite useful as Google makes modifications to your results based on your search history and other stuff they know about you. Sometimes it can be useful to know what results the average Google user who isn't you is going to get when they do a search.
One interesting tip was for finding lists using Google. If, for instance, you want to find various lists of top anime (a random topic that happens to work pretty well with this example if you try it), but don't want to limit it to just lists of 10 (or some other arbitrary number), you can search for "top 5...50 anime" and that will find lists ranging between 5 and 50 in length.
If you are looking for mashups of various kinds of data with Google Maps (and who isn't, really) a nice place to look is Google Maps Gallery. You can browse or search the data (World Bank, census, PolicyMap, etc.). You can even add your own organization's maps.
One tip Bates had that I will admit to having used before this session is using Google Autocomplete to indicate alternatives to a product/service if you don't know what they might be. Just go into Google and type something like "Roku vs." and then stop and see what shows up in the list of recommended searches that drops down.
Google's new site info card can be a handy feature. It can be brought up when, in search results, you see the grey name of an organization followed by a triangle in light grey print. When you click on this you'll get a brief description of the organization, generally from Wikipedia. This can give you an idea of the legitimacy of an organization before you click on the link.
A feature to celebrate in Google Images is the ability now to add a number of Creative Commons filters to a search. This is most helpful if you are looking for an image that you want to reuse in a specific context and you need to know that the licensing will be friendly to the purpose you have in mind. To use this click on "Search Tools" after performing a search and then use the "Usage Rights" menu.
More an example of how to do something well, rather than a tool in itself (although it certainly could be a useful page), Bates mentioned Google's Media Tools, which are intended for journalists who use Google to get certain kinds of data.
- A great way to "package" tools: google.com/get/mediatools is a nice example of how to organize tools for an audience. Tools are arranged into categories like "Gather & Organize", "Engage", and "Visualize".
Bates's final tip was one for doing a job search, in this case most relevant to the corporate world. It's kind of a hack, but it's an interesting hack to be sure. It turns out that the company Taleo, owned by Oracle, provides back-end job posting services for many companies. So even though you can't go to www.taleo.com and see a bunch of job sites (it actually just redirects to Oracle's information page on the Taleo service), you can search the taleo.net domain and find jobs that have been posted only on the sites of the companies that use the Taleo service. The search "site:taleo.net intitle:career keyword(s)" will find jobs that match the keyword(s). It doesn't work particularly well with phrases ("engineer" would be more successful than "electrical engineer").
One glitch that Bates pointed out, which is actually a bonus tip in itself, is that the Google search engine will work harder against its indexes if you ask more of it. The search above with "intitle:career" and the same search with "intitle:careers" return fewer results combined than the single search with "(intitle:career OR intitle:careers)", which in a pure system that gave you all of the results every time would not happen. The more complicated search makes Google work harder and gives you more results.
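The query syntax above is exactly what Bates gave; as a toy illustration (the helper function is my own invention), the search string could be assembled like this:

```python
def taleo_job_query(keywords):
    """Build a Google query for jobs posted through Taleo's back end.

    Uses the OR'd intitle form, which Bates noted returns more
    results than running the singular and plural searches separately.
    """
    return f"site:taleo.net (intitle:career OR intitle:careers) {keywords}"

print(taleo_job_query("engineer"))
# site:taleo.net (intitle:career OR intitle:careers) engineer
```

Pasting the resulting string into Google is all the "integration" required; the same pattern works for any vendor that hosts job postings for its clients under one domain.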
Computers in Libraries 2014 - Day 1 - Keynote
The first keynote for this year's Computers in Libraries conference was made by David Weinberger, co-director of the Harvard Library Innovation Lab and author of several books including The Cluetrain Manifesto, Everything is Miscellaneous, and Too Big to Know.
He started his talk with two questions:
- Why hacking now?
- Why isn't every knife a Swiss Army Knife?
He proceeded in his talk to endeavor to answer these questions and provided a thoughtful presentation on the position in which libraries find themselves today.
On the question of "why hacking now", Weinberger first clarified that we were talking about "white hat hacking", i.e. thinking about a problem in a new and fresh way, rather than "black hat hacking" (stealing credit card data) or "bozo hacking" (thinking about a problem in a different, but ultimately doomed, way).
Then came the answer to the "why... now" part of the question. Weinberger identified four major factors that make now an unusual time:
- Everything is getting networked. Weinberger made a point of classifying this as something quite different from everything going digital. You can only go so far by changing something from an analog to digital format. Changing the way people interact with one another through media changes the world.
- Everything is being opened. There is a shift from copyrighted information being the most prevalent kind of information to Creative Commons licensed and open access information being the most prevalent. On this point I agree that this is a good and world-changing thing, and that such material is much more common than it used to be, although I'm less certain exactly how prevalent it is. It surely is extremely prevalent in the world of academia, where Weinberger is coming from.
- There is engagement with communities at all points in the product lifecycle. Authors can interact with readers while books are being written and companies can interact with consumers while products are being developed in a way that has never to this point been possible.
- There is a new, networked ecosystem. It used to be the case that our users thought of the library early on if they needed a book or information. Now we are a late thought after users first look at Amazon or Google. We have an opportunity to turn this around by repositioning ourselves.
Then Weinberger tackled the question of Swiss Army knives. He explained that although you can buy a massive Swiss Army knife that has nearly every tool you could possibly use, it's expensive, cumbersome and awkward. In a world with Star Trek replicators that could make products you need on demand, you'd never ask for one of those, just the tool you needed at the time. The reason we buy Swiss Army knives is because we anticipate needing other tools when we don't know that we will need them. That anticipation is present in many industries and has modeled much about the world in which we have developed.
For instance, publishers filter out materials they don't think will sell well so they can spend their resources creating products that they anticipate will sell well. This means there is an awful lot of stuff (some of it bad, some of it good) that doesn't get published. The Internet has no such built-in filter -- anything that people want to publish there gets published. It is then the job of "curators" to filter in the content that is good and relevant; the bad stuff is still there for people to find if they really want it.
After addressing these two questions, Weinberger pointed to three ways forward for libraries that get around the anticipation problem.
The first is the platform approach -- placing open data in a library portal. There is a lot of open data and meta-data that is available through all kinds of APIs. Libraries can collect and organize that data to create new resources. Examples of this are the Digital Public Library of America and Harvard's StackLife. StackLife is built on open data at Harvard's LibraryCloud and if a different library wanted to do something with it that isn't done with StackLife, they can take that data and use it.
Weinberger gave an example of a physical open platform in Harvard's "Labrary" where students can place their own projects and exhibits. Another thing which Weinberger mentioned as an aside that will be a source of open data is the AwesomeBox. The concept behind this is that libraries have two book return boxes -- the regular one and the AwesomeBox. If something is returned in the "AwesomeBox" it gets additionally checked in as being "awesome" creating an anonymized, low-friction way of creating a list of loved materials.
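The AwesomeBox data model is appealingly simple. As a hypothetical sketch (none of this comes from the actual project), the anonymized tally could amount to nothing more than a counter keyed by item, never by patron:

```python
from collections import Counter

# Tally of "awesome" returns, keyed by item barcode only.
awesome_counts = Counter()

def awesome_checkin(item_barcode):
    """Record an AwesomeBox return. Nothing about the patron is
    stored, which is what keeps the resulting list anonymous."""
    awesome_counts[item_barcode] += 1

# Three returns through the AwesomeBox, two of them the same title.
for barcode in ["31234000111", "31234000222", "31234000111"]:
    awesome_checkin(barcode)

print(awesome_counts.most_common(1))  # [('31234000111', 2)]
```

The "low-friction" part is the physical box itself: the patron's only action is choosing which slot to drop the book into.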
Another way for libraries to move on is to take advantage of linked open data. Linked data allows the creation of connections between sets of data that use different terms for the same facets. For example, if one dataset uses the term "Author" and the other uses the term "Content_Creator", if an application pulling in the data can get information from the sources that those terms really mean the same thing, then the data can be combined into a single set.
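To make the idea concrete, here is a small sketch of that kind of term reconciliation. The field names "Author" and "Content_Creator" echo the example above; the shared vocabulary term and the sample records are invented for illustration:

```python
# Map each source's field names onto a shared vocabulary term.
FIELD_MAP = {"Author": "creator", "Content_Creator": "creator", "Title": "title"}

def normalize(record):
    """Rewrite a record's keys using the shared vocabulary,
    leaving any unmapped keys unchanged."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}

dataset_a = [{"Author": "James Joyce", "Title": "Ulysses"}]
dataset_b = [{"Content_Creator": "Homer", "Title": "The Odyssey"}]

# Once both sources speak the same vocabulary, they combine into one set.
combined = [normalize(r) for r in dataset_a + dataset_b]
print(combined[0])  # {'creator': 'James Joyce', 'title': 'Ulysses'}
```

Real linked data does this with URIs and ontology assertions (e.g. declaring two properties equivalent) rather than a hard-coded dictionary, but the merging step is the same in spirit.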
Finally, Weinberger encouraged the creation of graphs along the lines of Facebook's social graph. Graphs allow the visualization and exploration of connections that we know about but that are harder to see in raw data. One example he gave was the web of connections between Homer, The Odyssey, James Joyce, Ulysses, Dublin, and the film O Brother, Where Art Thou?
These last two items in particular are variations on what libraries have always done, just updated to address new challenges. We've always strived to create consistency among data so that items are easy to find and we've always focused on the ability of people to make connections between different kinds of things to help users find what they are looking for.
Weinberger closed by summarizing that to hack libraries we need to hack the future: to enrich our existing assets, to create an infrastructure of knowledge, and to fight the trend, prevalent on the Internet in particular, for people with an opinion to search only for items that confirm that opinion rather than for the truth behind a matter.
Monday, April 7, 2014
Computers in Libraries 2014 - Day 0 - Gadgets & Gaming Session
Once again I started off Computers in Libraries with the laid-back Gadgets & Gaming session, where miscellaneous technology toys, typically useful for education, are put on display so that curious librarians can see how they work and whether there's potential for their use in their library environments. Many of the items on display this year had been present at previous Gadgets & Gaming sessions, like Sphero, a remote-controlled ball, slightly larger than a billiard ball, that can be driven using any Bluetooth-enabled tablet. However, several things were new and interesting, and here are a few pictures.
The cubes in this picture are items called Cubelets. They are each self-contained, pre-programmed bits of robotics. They have magnetic connectors on some sides and other functional parts on one or more sides, depending on the function of the Cubelet in question. They are compatible with Legos, hence the Lego squares sitting next to them. The Cubelets can be chained together to create a kind of logical action. So if a power Cubelet is connected to a sensor Cubelet, which is connected to a light Cubelet, the light Cubelet will turn on when the powered sensor Cubelet senses something appropriate. They are kind of interesting to play with and set a nice low bar for entry into basic robotics.
The Finch is a somewhat more advanced piece of robotics, perhaps a little too advanced to really be of much use during this gadget petting zoo. It looked kind of interesting, though. It's a basic robot that is powered via a USB cable connected to a computer. The computer can send instructions to the robot over the USB cable, and those instructions can be written in a veritable bevy of languages, ranging from basic, kid-friendly ones (Scratch 2.0) to much more difficult ones (C++). It's fairly inexpensive at $99. It would have been nice to see it in action beyond having its beak glow, though; the presenters hadn't managed to get that far with the device. In their defense, it's hard to quickly coach people into writing functional code for a device in a casual setting where people are chatting, drinking cans of pop, and eating munchies.
The Robo is a serviceable, attractive, inexpensive (at $700) 3D printer with similar specs to the much more popular MakerBot. It can use either ABS plastic or the more environmentally friendly, plant-based PLA plastic. It seems its primary drawback is that there is no enclosure around the main print area. I don't know exactly how it compares with the just-announced, and hugely overfunded on Kickstarter, Micro 3D printer, but I think they are both signs that this technology will continue to improve and get cheaper.
The 3Doodler was a Kickstarter product that has made it into the wild. It is a hand-held 3D printer, meaning the only computer control it has comes from your brain and it's only as good at drawing a three dimensional object as you are. It is kind of fun though. Using it isn't too dissimilar to using a hot-glue gun, except the "glue" rapidly firms up into a stiff, plasticky filament and rather than squeezing a trigger you push buttons on the device. It's not as revolutionary as the 3D printer, but it's not as expensive either and provides a more immediate creative challenge.
Those were some of the more interesting additions at this year's session.
[Photo: Cubelets]
[Photo: Finch]
[Photo: Robo 3D printer]
[Photo: 3Doodler]