Don't be this guy

There are a lot of popular assumptions people make in this profession that lead us to make classic blunders: assumptions about the change process, assumptions about our colleagues, and assumptions about our patrons. We can go into developing a new service or technology with the best of intentions and fail spectacularly because of the blinders those strongly-held assumptions put on us. Sometimes things fail in libraries because they weren't a good idea or a good fit, but sometimes the failure is caused by the approach taken to creating change. Those failures truly can be avoided.

As I work delicately and slowly at my library to build a culture of assessment, I've been thinking a lot about implementation failures, and I thought it would be worth looking at some of the classic implementation blunders I've seen in both libraries and higher ed over the past seven years. Here's the first.

“Why don’t we try it and see what happens” is always a good way to approach new services

No offense intended, Andy, but I have to disagree with you here (though I certainly would have agreed strongly with you when I was new to the profession). I am definitely not a risk-averse person in my work. I have experimented many times over the years with new services, service models, and technologies. Some have been successes and some failures, but I've always learned from the experiences. One thing I've learned is that while in some cases the “try it and see what happens” mantra is a very reasonable way to approach things, other times it can be a disaster.

This fall, I did a pilot project with some colleagues to provide synchronous online workshops for students using web conferencing software. What we learned was that there wasn't much need for general research instruction workshops, but grad students in particular were very interested in online instruction on specific topics, such as using Zotero and Mendeley. So, based on that information, we retooled for this term with more discipline-specific sessions, and I continued offering my Zotero and Mendeley workshops. In that case, trying it and seeing what happened was a totally reasonable approach, because whether we were wildly successful or a total flop, we could handle either eventuality.

Back in 2006, when I was the distance learning librarian at Norwich, I tried an embedded librarian pilot for our online Master's degree programs. Having been one of those students who never asked for help at her library, I wanted to make sure I was as available as possible to our students as they started out in their program. I also wanted to try to put a human face on the library, which is even more critical in the online learning environment. The first term, I embedded myself in the first seminar of our two most research-intensive classes (both of which had several sections). I had an “Ask a Librarian” discussion board, placed front and center in each classroom, where I could both answer questions and proactively provide information literacy instruction at key points in the term.

The major issue was that I had to check each WebCT classroom separately to see if there were any messages from students; there was no way to get alerts when new content was posted. It took me 4-7 hours each week to monitor the boards and answer questions. That wouldn't have been an issue if I'd been deluged with questions, but that was far from the case. Occasionally, a single class would have a lot of questions one week (if their prof asked them to check with me about their research topics), but for the most part, questions were few and far between, and some classes never used the discussion board at all. Even when I (and the program administrators) strongly urged faculty to encourage their students to ask for help, only some chose to do so. I was basically routing traffic from the reference desk to myself and spending 4-7 hours a week to answer anywhere between 0 and 12 questions. Had I gotten a lot of questions, the time would have been well spent; for so few, it clearly wasn't a good value proposition.

The big problem was that the faculty and administrators thought this was a great service, as did the students who used it. Even though I'd called it a pilot, no one outside the library saw it that way. They wanted the program to expand, not go away. It was very difficult to pull out of providing this service, but it had to be done. Had I really considered the worst-case scenarios of either wild success or failure, I would have realized that this had the potential to be a HUGE problem. If a potential consequence of not being able to sustain a service is losing credibility with faculty and/or administrators, then it's not a risk to take lightly. Building credibility with one's faculty is a painstaking process. It often takes years to build their trust and get them to see you as someone who can offer something useful to them and their students. You don't want to risk that. As anyone involved in instruction can attest, it sometimes takes just one bad session for a faculty member to never request instruction again.

There are a lot of awesome services we could be providing at PSU, but we are constrained by our extremely small public services staff relative to our student population. In many cases, we have to worry about what it would look like to become “victims of our own success,” because we are already stretched to the point where everything we do is an essential service. I believe strongly that “try it and see what happens” is a great idea after you visualize the potential outcomes and realize that none of them will be truly damaging. If we'd had tons of demand for online instruction, we could have handled it. That we didn't (except for the Zotero and Mendeley workshops) also wasn't a problem. All we were really risking was our pride. But when the risk is alienating students, faculty, or administrators, or seriously overworking already stressed librarians, I think there needs to be a serious discussion about how to handle that eventuality and whether the risk is worth taking without understanding the service population better.

I'm a huge believer in seeing service development as an iterative process. That part of perpetual beta appeals strongly to me. I believe in trying something, assessing it, and retooling based on the results. I see that as a loop that should continue even when you think the service or technology is mature, since populations and their needs change. However, I also think that in some cases assessment has to start before we ever offer the service. I think perpetual beta, whether in the tech world or in libraries, can sometimes be an excuse for putting out things that are truly half-baked. Putting out something (a service, a technology, etc.) that risks our reputation, credibility, or relationship with our service population requires more than a “let's try it and see what happens” attitude.

The next classic blunder I’ll be tackling: the assumption that resistance to change is bad and something one needs to defeat.