This has been a crazy year, full of a lot of research and activities centered around assessment. From my participation in RAILS last spring, to my Assessment LibGuide, to my presentation at LOEX of the West, to my paper (forthcoming) and presentation (with Lisa Hinchliffe) at the Library Assessment Conference, to my just-published article in Reference Services Review, to my ambitious plans to assess freshman portfolios this summer, to my regular assessment practice as an instructor, I feel like I’ve been living, eating, and breathing assessment. My views on the topic are definitely shifting and changing as I read, research, and experience. One of the topics I have definitely changed my mind about over time is the Value of Academic Libraries initiative.
That I haven’t heard any criticisms of the research agenda of the Value of Academic Libraries initiative other than my own either means that I’m completely alone in my concerns or that people are afraid of criticizing what they see as a sacred cow. I’d fully intended to write a post this week about the whole ACRL Value of Academic Libraries initiative, but was asked to write about it for the next issue of OLA Quarterly, which I’ll link to as soon as it comes out. The gist is that I’m concerned about the impact of value research and the value movement on assessment focused on learning. It can be difficult to sell librarians on the value of doing assessment, but when assessment becomes focused on demonstrating value, librarians will likely become that much more concerned about what negative assessment results in a class could mean for them, which will likely influence how they design their assessment(s). I am also not at all convinced that college/university administrators are going to buy the argument that the library is valuable because there’s research that demonstrates a relationship between library use and student success/retention. I tend to believe that college/university administrators are smart enough to discern the difference between correlation and causation, but maybe that’s just me. While I’d like to believe that value research can also inform practice and help libraries improve, the studies out there right now that have embraced this kind of research don’t show any evidence of using the data they collected for improvement. Anyways, you’ll be able to read my diatribe, er, thoughtful article on the subject soon.
I have a new article out this month in Reference Services Review. It looks at the process of building a culture of assessment through the lens of John Kotter’s 8-step process for organizational change and is called “Building and Sustaining a Culture of Assessment: Best Practices for Change Leadership.” You can access it in my institution’s repository, PDXScholar, here. If you’re a librarian trying to lead from the middle (or the bottom), this maps out a clear strategy for doing so. I’ve read too many articles about building a culture of assessment that seem designed for a Director (or someone who has authority), when most of the people trying to get colleagues on board with assessment do not necessarily have positional authority (instruction coordinators, assessment coordinators, etc.). My article is full of practical advice for building culture change from the bottom up.
Right now, I’m working on a major study with Lisa Hinchliffe of UIUC and Amy Harris Houk of UNC-Greensboro. We’re surveying academic library instruction program leaders to determine what factors help facilitate the creation of a culture of assessment and what factors hinder a library in moving towards one. It’s the first study of its kind to be done in any sort of systematic way with a truly representative sample (our response rate is insane!), and the preliminary results look to be very important for the profession. When I was working on the literature review for my Reference Services Review article, I noticed that the vast majority of studies I was reading (from higher ed and libraries) were case studies and anecdotes. The few research studies I found suffered from a too-small sample size, an unrepresentative sample, or (in most cases) both. It seemed about time that someone put the theories librarians and educators have had about what it takes to create a culture of assessment to the test. We’ll be sharing our early results at the ACRL Conference and will publish a more comprehensive analysis later.
I’m also doing qualitative research right now. Anyone who worked with me at Norwich knows that ever since hearing librarians from the University of Rochester talk about library ethnography in 2006, I have been dying to do that kind of research in my own library. It frustrates me how many decisions we make in libraries based on our own preferences or what we think we know about our patrons. If there’s anything I’ve learned from doing assessment and usability testing, it’s that librarians are frequently wrong about their patrons and how they use resources, spaces, etc. Last spring, I got a grant with two terrific colleagues at PSU (Emily Ford and Molly Blalock-Koral) to do an ethnographic study to develop a better understanding of the information needs, behavior, and challenges of returning students (which we defined as students with a gap of at least four years in their formal education). We’ve been collecting data this term and last, and it has been so much fun! I’ve learned a ton about students at PSU, and I’m especially excited that most of the people we’re working with aren’t actually big library users (some don’t use the library at all). The biggest challenge has been taking off my librarian hat and putting on my observer hat. It’s hard not to intervene when you know you could help them search more effectively!
I geek out on assessment because I’m so curious about our users and what they think/want/do. I truly believe that better knowing our users will bring us closer to meeting their needs. I only wish I had more time to do this kind of work. Librarians who get to do user research as a regular part of their job are insanely lucky.
Really interested in your thoughts on value vs. learning assessment. Without preempting your written piece, do you think it’s possible to talk about assessment of student learning and demonstration of our value/worth in the same sentence? For me, I always have to keep the distinction between evaluation and assessment in mind when thinking/reading/talking about assessment.
Meredith – noting that I will have to wait for your more fully articulated argument, let me make some comments:
“It can be difficult to sell librarians on the value of doing assessment, but when assessment becomes focused on demonstrating value, librarians will likely become that much more concerned about what negative assessment results in a class could mean for them, which will likely influence how they design their assessment(s).”
**This strikes me as true of all assessment, both in the library and outside, e.g., teaching faculty who balk at the articulation of end-of-program learning outcomes, or at the demand to demonstrate evidence of learning beyond the classroom-based mechanisms employed for years (also known as “grades”). It seems to me that this is simply a variant of the concerns raised when we began to articulate the need to assess student learning in information literacy instruction, as opposed to simply documenting attendance in classes (“number of participants” being the original assessment of the success of an instruction program, and still the primary metric reported by ARL and other national data-collection efforts).
“I am also not at all convinced that college/university administrators are going to buy the argument that the library is valuable because there’s research that demonstrates a relationship between library use and student success/retention.”
**Perhaps true, but in this case it is almost as important to ask the question as it is to answer it. For decades, the value of the library to the university was assumed to be irrefutable and intrinsic. One of the most troubling changes for many people over the past decade has been the end of that shared assumption. We may not be able to clearly demonstrate a causal relationship between library use and GPA or information skills and job placement, etc., but it is critically important (for us, and for our campus colleagues to see) that we care about the question, i.e., that we are no longer bound solely by assessment questions that are focused completely inward, or completely on input measures (materials budget, volumes held, number of professional staff, etc.). To me, one of the great benefits of the Value initiative is that it encourages people to ask the bigger questions: what goals have other programs on campus set, how are they measuring progress toward those goals, and how can we demonstrate our contributions to them?
“While I’d like to believe that value research can also inform practice and help libraries improve, the studies out there right now that have embraced this kind of research don’t show any evidence of using the data they collected for improvement.”
**The end goal of all assessment should be improvement, whether at the individual level, the program level, or the library level. If the studies are not resulting in approaches for improvement, then is the problem the assessment itself, or an organizational culture that does not allow the results of the assessment to inform decision-making?
Great question, Alan! The short answer is that I’d like to believe it’s possible to talk about assessment and value in the same sentence, but I haven’t seen it done successfully. In my experience, assessing instruction with a mind towards demonstrating value impacts the design of the assessment method and the librarian’s comfort level with potentially seeing negative results (which will also influence the design). If anyone else has experience with assessments that have both informed practice and demonstrated value to administrators and other stakeholders, especially in instruction, I’d love to hear it.
>>If anyone else has experience with assessments that have both informed practice and demonstrated value to administrators and other stakeholders, especially in instruction, I’d love to hear it.<<
We've successfully changed how we do instruction and how the professors create assignments based on our evaluation of students' papers, but this feedback stays between the librarians and the professors. As you said, no one wants to share assessment results that aren't positive with administrators.
Tying our efforts to retention is tricky. In fact, a colleague and I have sketched out an article in which we look critically at the library lit in this area, but we know it wouldn't make us very popular! My favorite quote from some retention guru (buried in my notes somewhere) is that we (not just librarians) really can't isolate any one thing and say it's aiding retention. I try to focus on the holistic picture: having a community helps retention, so how can we contribute to that feeling of community?