This has been a crazy year, full of research and activities centered on assessment. From my participation in RAILS last Spring, to my Assessment LibGuide, to my presentation at LOEX of the West, to my paper (forthcoming) and presentation (with Lisa Hinchliffe) at the Library Assessment Conference, to my just-published article in Reference Services Review, to my ambitious plans to assess Freshman portfolios this summer, and my regular assessment practice as an instructor, I feel like I’ve been living, eating and breathing assessment. My views on the topic are definitely shifting and changing as I read, research and experience. One of the topics I have definitely changed my mind about over time is the Value of Academic Libraries initiative.

That I haven’t heard any criticisms of the research agenda of the Value of Academic Libraries initiative other than my own either means that I’m completely alone in my concerns or that people are afraid of criticizing what they see as a sacred cow. I’d fully intended to write a post this week about the whole ACRL Value of Academic Libraries initiative, but was asked to write about it for the next issue of OLA Quarterly, which I’ll link to as soon as it comes out. The gist is that I’m concerned about the impact of value research and the value movement on assessment focused on learning. It can be difficult to sell librarians on the value of doing assessment in the first place, and when assessment becomes focused on demonstrating value, librarians are likely to become that much more concerned about what negative assessment results in a class could mean for them, which in turn will influence how they design their assessments. I am also not at all convinced that college/university administrators are going to buy the argument that the library is valuable because there’s research demonstrating a relationship between library use and student success/retention. I tend to believe that administrators are smart enough to discern the difference between correlation and causation, but maybe that’s just me. And while I’d like to believe that value research can also inform practice and help libraries improve, the studies out there right now that have embraced this kind of research show no evidence of using the data they collected for improvement. Anyway, you’ll be able to read my diatribe (er, thoughtful article) on the subject soon.

I have a new article out this month in Reference Services Review. It looks at the process of building a culture of assessment through the lens of John Kotter’s 8-step process for organizational change and is called “Building and Sustaining a Culture of Assessment: Best Practices for Change Leadership.” You can access it in my institution’s repository, PDXScholar, here. If you’re a librarian trying to lead from the middle (or the bottom), it maps out a clear strategy for doing so. I’ve read too many articles about building a culture of assessment that seem designed for a Director (or someone else with positional authority), when most of the people trying to get colleagues on board with assessment (instruction coordinators, assessment coordinators, etc.) don’t necessarily have that authority. My article is full of practical advice for building culture change from the bottom up.

Right now, I’m working on a major study with Lisa Hinchliffe of UIUC and Amy Harris Houk of UNC-Greensboro. We’re surveying academic library instruction program leaders to determine what factors help facilitate the creation of a culture of assessment and what factors hinder a library from moving toward one. It’s the first study of its kind to be done in any sort of systematic way with a truly representative sample (our response rate is insane!), and the preliminary results look to be very important for the profession. When I was working on the literature review for my Reference Services Review article, I noticed that the vast majority of studies I was reading (from higher ed and libraries) were case studies and anecdotes. The few research studies I found suffered from a too-small sample size, an unrepresentative sample, or (in most cases) both. It seemed about time that someone put the theories librarians and educators have about what it takes to create a culture of assessment to the test. We’ll be sharing our early results at the ACRL Conference and publishing a more comprehensive analysis later.

I’m also doing qualitative research right now. Anyone who worked with me at Norwich knows that since hearing librarians from the University of Rochester talk about library ethnography in 2006, I have been dying to do that kind of research in my own library. It frustrates me how many decisions we make in libraries based on our own preferences or what we think we know about our patrons. If there’s anything I’ve learned from doing assessment and usability testing, it’s that librarians are frequently wrong about their patrons and how they use resources, spaces, etc. Last Spring, I got a grant with two terrific colleagues at PSU (Emily Ford and Molly Blalock-Koral) to do an ethnographic study to develop a better understanding of the information needs, behaviors, and challenges of returning students (which we defined as students with a gap of at least four years in their formal education). We’ve been collecting data this term and last, and it has been so much fun! I’ve learned a ton about students at PSU, and I’m especially excited that most of the people we’re working with aren’t big library users (some don’t use the library at all). The biggest challenge has been taking off my librarian hat and putting on my observer hat. It’s hard not to intervene when you know you could help someone search more effectively!

I geek out on assessment because I’m so curious about our users and what they think/want/do. I truly believe that better knowing our users will bring us closer to meeting their needs. I only wish I had more time to do this kind of work. Librarians who get to do user research as a regular part of their job are insanely lucky.