Forgive this less-than-well-thought-out post. I’ve been thinking a lot lately about assessment and the librarianly love of numbers, and I’m troubled by the way that some academic libraries tend to measure how well they are supporting the academic mission of the institution.
Librarians keep a lot of statistics and measure a lot of things: gate count, reference transactions, instruction sessions, website hits, visits to a specific tutorial or research guide, e-resource usage, etc. We are big on numbers. I have no problem whatsoever with measuring things like this, and in many cases I think it’s essential. What I do have a problem with are the unsupported interpretations we often make based on these numbers and the direction they’re going in.
Reference desk transactions went down. This is a bad thing! We need to try and get them back up! Really? Why? Do you know why they went down? You probably have some theories, but do you know for sure? Is it because you’re less approachable or is it because there has been an increase in instruction sessions which helped students become more independent researchers? You need to look at the larger ecosystem beyond the reference desk to figure out why this happened and whether it’s a good or bad thing.
The tutorial I created has received more hits than any other one. It must be really useful! Oh yeah? Or is the tutorial for a class that has a lot of sections? Did an instructor require that students visit it? Are the people visiting it staying for a long time or just for a few seconds? Are they getting anything out of it? You can’t say that a web hit = someone getting something out of that page.
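To make that concrete, here’s a minimal sketch (the log format and hit data are entirely hypothetical) of the difference between counting hits and estimating engagement: it computes how long each visitor actually stayed on the tutorial page rather than just tallying page views.

```python
# Minimal sketch: move from "hits" to "engagement" for a tutorial page.
# Assumed (hypothetical) log format: (session_id, page, timestamp) per hit.
from collections import defaultdict
from datetime import datetime
from statistics import median

hits = [
    ("s1", "/tutorial", datetime(2010, 9, 1, 10, 0, 0)),
    ("s1", "/guide",    datetime(2010, 9, 1, 10, 6, 30)),  # stayed ~6.5 minutes
    ("s2", "/tutorial", datetime(2010, 9, 1, 11, 0, 0)),
    ("s2", "/home",     datetime(2010, 9, 1, 11, 0, 4)),   # bounced in 4 seconds
]

# Group hits by session, ordered by time.
sessions = defaultdict(list)
for session_id, page, ts in sorted(hits, key=lambda h: (h[0], h[2])):
    sessions[session_id].append((page, ts))

# Dwell time on the tutorial = gap until the next page view in that session.
dwell = [
    (next_ts - ts).total_seconds()
    for visits in sessions.values()
    for (page, ts), (_, next_ts) in zip(visits, visits[1:])
    if page == "/tutorial"
]

print(f"raw tutorial hits: {sum(page == '/tutorial' for _, page, _ in hits)}")
print(f"median dwell time: {median(dwell):.0f} seconds")
```

Even this only approximates engagement (a session’s last page view has no “next hit” to measure against), which is exactly the point: every number still needs interpretation.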
We’re teaching more library sessions than ever before. Students will be more information literate when they graduate! Maybe. But how do you know that? Teaching more doesn’t necessarily = learning more. If the instruction you’re providing is not course-integrated and emphasized at various subsequent points in their college career, it might be going in one ear and out the other. How can we determine that what we’re teaching is actually making our students information literate?
Sidenote: Years ago, a professional colleague complained that students in her information literacy sessions were not as engaged as they used to be and reasoned that the caliber of students at her school had declined. The question I wanted to ask at the time, but didn’t, was: have you considered that maybe the way you teach doesn’t work for the current crop of students? We come to unsupported conclusions all the time — not just when trying to analyze statistics. Don’t just assume it’s “them.” Maybe it’s you.
Statistics can tell us a lot of things, but they can also be manipulated to support just about any position. Without actually knowing why something increased or decreased, we should be hesitant about making any judgments.
We often take these assumptions right up to Administration, using these numbers as evidence that we are doing a great job, deserve more funding, etc. This reveals another flawed assumption: the idea that these numbers matter to administrators outside of the library. What do university administrators care about? Retention. Student success. Accreditation. Student satisfaction with the University. Etc. They don’t care about the number of information literacy sessions the library taught unless you can somehow show how those contributed to student success (e.g., student use of quality resources in their papers increased, leading to better grades). They don’t care about the number of reference transactions unless you can show that reference support helped to improve retention. Sure, they may nod their heads and say “great job!” but you’re not going to really get them excited and “on board” until you tie what the library does to the University’s goals and provide data that demonstrates how what you do contributes to those goals.
I don’t have all the answers on exactly how to measure how the library contributes to the larger goals of the University, but I do know that we’re doing our students a disservice when we make assumptions about how what we do is impacting them based solely on a bunch of numbers. And if we want to promote libraries to the people who hold the purse strings, we need to focus more on demonstrating how we contribute to their “bottom line” than to our own.
Great post! I think statistics are good internally to indicate what is going on. We need to really drill down and get to the “why” something is happening in order to know what to do next. Externally, the numbers don’t matter much beyond serving as a sound bite. Advocacy is about winning hearts and minds, and it isn’t won with a bunch of numbers.
Good post, Meredith. We offer “2nd level” ref for our public library members, and our numbers have been going down (no surprise). When I mention the stats to the staff, their reaction is something like this: “Oh, how can we make those public library ref folks send us more questions?” The assumption is that the questions are being hoarded. My reply is that perhaps there are a lot more options for finding information now and that maybe the public library ref folks aren’t getting the questions themselves to send to us. People aren’t necessarily interested in the “why” when their jobs are threatened.
This issue sounds like a great discussion/ice breaker for an information literacy class! “Even we librarians need to evaluate and think critically about how we use our information sources (i.e. stats).”
Thanks as always for the insightful post!
Hi, Meredith. A former colleague in California posted this on Facebook. You make a lot of interesting points. Yes, better to understand the meaning/significance of one’s statistics than to make assumptions.
Thank you for this post. I find it insightful and I like the way my mind starts going in different directions. I am a firm believer in statistics but of course you’re right about not making the assumptions.
You could look into Outcome-Based Evaluation (OBE), which is a user-centered approach to assessment. It measures a program or service based on how much it changes the user’s behavior, attitude, skills, knowledge, and/or condition/status.
For example, to evaluate your library classes, you could measure the increase in students’ knowledge by giving a quiz before the class and a quiz after the class. You could further measure their condition/status by following up, say, 6 months or a year later, to see if the class helped increase their grades.
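As a rough illustration of that pre/post idea (the scores are invented, and a paired t-test is just one possible analysis, not *the* OBE method), the comparison might look like this:

```python
# Hypothetical sketch of an outcome-based pre/post quiz comparison.
# Scores are invented; a paired t-test is one common way to check
# whether the average gain is more than noise.
from statistics import mean
from scipy.stats import ttest_rel

pre_scores  = [55, 60, 48, 70, 62, 58, 65, 50]   # quiz before the class
post_scores = [68, 72, 60, 78, 70, 66, 80, 61]   # same students, after the class

gain = mean(post_scores) - mean(pre_scores)
t_stat, p_value = ttest_rel(post_scores, pre_scores)

print(f"mean gain: {gain:.1f} points")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Even a significant gain here only captures short-term knowledge change; the condition/status piece still needs the longer follow-up described above.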
Of course, there’s a lot more to it than that. My introduction to the topic was a presentation by Rhea Rubin and you could try her book “Demonstrating Results.” There are probably other authors who address OBE specifically for academic librarians.
More dangerous are people who confuse words with meaning. Numbers at least are well-defined.
Awesome post, Meredith! I completely agree that numbers don’t tell the complete story and that anecdotal evidence is best. I too find myself struggling to define that for administration. I’m going to take Steve’s recommendation and review outcome-based evaluation.
Anyone interested in library metrics and outcomes needs to read the recent ALA award-winning book on the subject by the trio of Dugan, Simmons, and Nitecki.
http://www.amazon.com/Viewing-Library-Metrics-Different-Perspectives/dp/1591586658
Numbers are at their worst when people try to manipulate them to be more favorable by making things worse overall, such as the (apocryphal?) story of a library sealing up its drop boxes so that people would have to come in and goose the door count. However, I’m also very leery of saying “anecdotal evidence is best”. If you want to solve the problem of doing poorly but manipulating things so you look good, anecdotal evidence is not the way to go. I don’t think the numbers should ever be sacrificed, but if you pair them with HONEST accounts of your experiences in order to interpret what it all really means, you’ll get something more meaningful. Unfortunately, lots of people in all walks of life go for the CYA approach first.
Meredith,
100% of everyone surveyed thinks your post is insightful and required reading for the profession. (OK, I only surveyed myself so far…but I stand firmly by my conclusion!)
lol! Thanks Peter! 😉
@Chris O. & @Steve Cauffman – I totally agree. We need to rely on a combination of numbers, meaningful performance measures/outcome-based evaluation, qualitative measures (like ethnography and focus groups) and what we’re observing. No one measure is likely to give you the whole picture.
Thanks for the interesting post! All the more reason for the lib community to understand statistics and the need for ongoing assessment and longitudinal data.
Numbers, like words and language, will always be subject to interpretation and misinterpretation. Analysis & interpretation is how we make meaning… Continued thoughtful assessment of various types increases our understanding of what we’re doing and what our users need/want. All of your scenarios are potential research projects – love it!!
Thanks again for the interesting post!
A lot of pundits claim that it’s best to get those library stories into something ‘quantifiable.’ I like to emphasize the reverse: what story are the numbers telling?
It’s easy to forget that most quantitative measures are only telling a story. Numbers are useful because we know that our instincts are often quite wrong. On the other hand, they are just part of a whole story about success.
Meredith,
An insightful post. We recently did an undergraduate library use study using a combination of quantitative and qualitative methods. Both approaches provided us with lots of useful information. One thing that stood out in the survey numbers was that students wanted more outlets. We discovered why in our student interviews. So we not only had the statistical evidence, but the story behind it. I think that kind of assessment is more useful than one or the other by itself.