I wanted to write about the Library Assessment Conference as soon as I returned, but unfortunately, life got in the way. I got back barely a week and a half before I was set to leave my job and, not surprisingly, there was a lot of wrapping up projects and getting things into good enough shape to hand over to colleagues. My last day was August 15th, and after spending six days riding bikes in Sunriver, Oregon, I finally have some time to take a breath and reflect.

I went to the Library Assessment Conference two years ago and, for some reason, spent most of the time feeling like I should be sitting at the kids' table. I don't know if it's because the conference is so ARL-heavy or what, but I didn't really feel like I belonged there. This time, I felt completely different. Sure, there were still a lot of people presenting on things I could never do at a less well-resourced institution, but I felt more like we were all struggling with a lot of the same meaty assessment issues, whether we were at small community colleges or huge ARLs.

Lisa Hinchliffe and I presented some of the results of our survey of community college libraries about the elements that have helped and hindered them in their journey towards building a culture of assessment. It was a quick sketch of our findings and I look forward to publishing the results (eventually) and also having more time to present them at ACRL 2015.

Here are a few of the most interesting takeaways for me from the conference:

One evangelist or expert does not a “culture of assessment” make

The always insightful Library Loon writes occasionally about something called “Coordinator Syndrome.” As a newly former coordinator, I can say without question that I experienced it. I think administrators have good intentions when they create coordinator positions that are set up to fail. They recognize that x is important and that they don’t currently have anyone with expertise in x in the Library. So they hire someone to be in charge of x, but they give this person no support, only a mandate to make sure a culture of x is infused into every part of the institution. As if one person (and a new person at that) could achieve this. The x could be instruction, assessment, scholarly communications, or a whole host of other things. The simple fact is that no one person can create a culture of anything. They can bring expertise and provide support, but there has to be a general understanding and acceptance that x is everyone’s responsibility. One person can’t do the heavy lifting on their own.

In light of this, I was very impressed with how the libraries at Northwestern have structured their efforts to foster an assessment culture. They started with an assessment committee whose charge was to be evangelists for assessment, made up of various stakeholders from across the institution; having influential people on the committee lends credence to the charge. When they had the funding, they hired an assessment librarian whose primary job was to act as a consultant, sharing their expertise to help librarians actually DO assessment work. The assessment librarian wasn’t there to be a cheerleader for assessment; that was the job of the committee. The assessment librarian was there to help librarians do assessment work once they’d already decided it was worth their time.

This, my friends, is how a coordinator position should be framed.

We are (nearly) all grossly underprepared to do assessment work

One theme threaded through several talks (these two stuck out for me) was the notion that librarians come out of library school ill-prepared to design assessments and analyze the results. Most of us stumble and bumble our way towards better assessment work, but it’s largely through trial and error and watching what others are doing. I look back on my very early assessment work and cringe. What was I actually even measuring???

That said, I question whether library school is the best place for students to learn to do assessment work or research. These are the sorts of skills library school students don’t think they’ll need (how many of us got a lot out of the required “research class” in library school?) until they do. And when they do need them is actually the best time to learn, because, as with our own students in information literacy sessions, it won’t stick unless you practice what you learned in a real situation. I think libraries, and the organizations that support practicing librarians, need to do more to train and support librarians in doing assessment, and libraries specifically need to give librarians the time to learn how to do these things right. Hiring an assessment librarian who acts as a knowledgeable consultant is one solution, but not every library can afford to hire an assessment expert. There are immersion programs and assessment conferences, but those aren’t enough to give people the nuts-and-bolts skills they need. I favor the idea of assessment mentoring programs, though I’m not sure what they would look like in practice. Mentoring makes more sense to me because people need feedback and support at their point of need.

This is something we struggled with on the Orbis Cascade Alliance Assessment Team over the past two years. We know there is a lot of assessment expertise within the Alliance (hell, we have Steve Hiller, my assessment idol), but figuring out the best way to get that expertise out of the heads of those who have it and into the heads of those who don’t is the challenge. We planned and executed a Library Assessment and User Experience Unconference (LUAU) where folks could share their assessment projects and ideas and have conversations around them (and for which we were just awarded the 2014 ACRL Oregon Award for Excellence), but, again, it’s a one-time thing and not the sort of point-of-need mentoring that many really need. Still, every little bit helps, and I know I came away from both the LUAU and the Library Assessment Conference with great new ideas.

Data privacy is the elephant in the room

If there was one thing that became abundantly clear at the conference, it’s that data is king. It’s funny, because at the last Library Assessment Conference I thought there was a general trend toward mixed-methods qualitative research, but this year it was all about hard data. Using student data was at the heart of each of the keynote talks. One keynote speaker even said that we should be collecting transaction-level data regularly, even if we don’t yet know what we might use it for.

Whether we like it or not, predictive analytics have become a major part of the higher education game. Student affairs and admissions folks have been using predictive analytics systems for ages in making admission decisions and in advising. At Portland State, advisors can see when the best time is for students to take a “bottleneck course” in order to succeed in it. They know that students coming in with a certain high school GPA would be best served by taking certain combinations of classes at specific times in their college career. They know which students are most likely to leave the institution and can provide more intensive support.
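
Just to make the mechanics concrete, here’s a minimal sketch of how a retention model of this kind typically works. Everything in it is hypothetical (the feature names, the fabricated data, the coefficients); it illustrates the general technique, not anything Portland State actually runs:

```python
# A toy retention model. All features and data are fabricated;
# this shows the general technique, not any real campus system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical student features: high school GPA, first-term credits
# earned, and library visits per term.
X = np.column_stack([
    rng.normal(3.0, 0.5, n),   # hs_gpa
    rng.normal(12.0, 3.0, n),  # credits_earned
    rng.poisson(5, n),         # library_visits
])
# Fabricated outcome loosely tied to the features: 1 = retained, 0 = left.
logits = 1.5 * (X[:, 0] - 3.0) + 0.2 * (X[:, 1] - 12.0) + 0.1 * (X[:, 2] - 5)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Probability that each held-out student is retained; an advising
# system would flag the lowest-scoring students for extra support.
retention_prob = model.predict_proba(X_test)[:, 1]
print("Lowest predicted retention probabilities:",
      np.sort(retention_prob)[:5].round(2))
```

Notice that nothing in the model itself constrains how those scores get used: the same output can drive extra support for struggling students or gatekeeping in admissions.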

Part of me thinks this is great. We need to learn more about how our patrons use our resources and services. We can use what we learn about specific populations for targeted outreach. This could be so beneficial!

The other part of me is kind of horrified by some possible implications of predictive analytics. If we know what types of people are likely to succeed in college, why not admit only that type of person? And what if that type of person tends to be white or Asian and to have money? It’s one thing to use predictive analytics to help guide a student through school so that they can be successful, but predictive analytics are already being used in admissions decision-making at many schools.

Predictive analytics are also rather creepy. HP uses predictive analytics to determine which of its employees are most likely to quit and what steps it can take to retain them. How would you like your employer to use your characteristics in this way? Does that feel a little intrusive? The privacy implications are huge. Do our students know that we are collecting this information about them? Do they get the chance to say “no”? Libraries have always been passionate about safeguarding patron data. If we collect transaction-level data and correlate it with student demographic data, even if we get rid of student ID numbers, can we really guarantee that people won’t be identifiable in the data? With all the data breaches that have happened on campuses across the country, can we guarantee the security of this data? And when students discover that we are collecting this data (perhaps when they are contacted because they have never used the library, and those who use the library are five times more likely to graduate), how comfortable will they feel checking out a book or accessing an article on a controversial or potentially embarrassing subject? What is the potential impact on intellectual freedom?
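
That re-identification worry is easy to demonstrate. Here’s a minimal k-anonymity check on a toy, “de-identified” circulation table (all column names and rows are made up): any combination of demographic fields shared by fewer than k students can potentially single someone out.

```python
# Toy k-anonymity check on a "de-identified" circulation table.
# All columns and rows are fabricated for illustration.
import pandas as pd

records = pd.DataFrame({
    "major":       ["Biology", "Biology", "History", "History", "Physics"],
    "class_year":  [2016, 2016, 2017, 2017, 2018],
    "zip_code":    ["97201", "97201", "97202", "97202", "97209"],
    "call_number": ["QA76.9", "QA76.6", "D804.3", "D804.19", "QC174.12"],
})

# Fields an outsider might already know about a student.
quasi_identifiers = ["major", "class_year", "zip_code"]
k = 2  # every combination should describe at least k students

group_sizes = records.groupby(quasi_identifiers).size()
risky = group_sizes[group_sizes < k]

# The lone Physics student is identifiable even with no ID in the table.
print(f"{len(risky)} combination(s) match fewer than {k} students:")
print(risky)
```

Real datasets need stronger protections than this (generalizing or suppressing rare values, for instance), but even a check this crude makes the point that dropping the ID column is not the same as anonymization.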

Yes, there are a lot of safeguards in place to protect students (FERPA, IRB, etc.), but I still think that libraries should hold themselves to even higher standards than admissions staff or advisors, simply because of our ethical obligations as librarians. This doesn’t mean that I don’t think we should collect data, but that with every project, we should weigh the potential benefits for students against the privacy risks and do everything in our power to ensure the security and anonymity of the data we collect.

No one talked about data privacy in presentations at the Library Assessment Conference, but there was a discussion on Twitter that sparked thoughtful posts from Andrew Asher and Barbara Fister. I think it would be great to have a panel at the next LAC on data privacy and ethics.

Librarians are doing amazing, creative assessment work

Take a look at the PowerPoint slides from the presentations and the posters themselves from the conference. People are doing amazing things with assessment to learn more about our patrons, help inform decision-making, make the case for changes or funding, and demonstrate the value of what the library does. Libraries are doing impressive work analyzing data they already have, using data visualization tools and doing longitudinal analyses. One library is using GIS data in ways I’d never thought possible, like showing how students use library spaces. Another library used data to determine which classes they should be providing instruction in and which sessions are redundant. All of the work coming out of the Assessment in Action program participants is so exciting. I think there’s a lot to learn from projects undertaken by other libraries, and I came away from the conference with so many ideas that I might use in the future.
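
For anyone curious what the simplest version of that kind of space-use visualization looks like, here’s a sketch using fabricated headcount data (the floors, hours, and counts are all made up); the projects presented at the conference were far more sophisticated:

```python
# A toy heatmap of library headcounts by floor and hour.
# The data is fabricated; a real project would use actual sweep
# counts or GIS data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hours = list(range(8, 22))             # 8:00 through 21:00
floors = ["1st", "2nd", "3rd", "4th"]
counts = rng.poisson(lam=30, size=(len(floors), len(hours)))

fig, ax = plt.subplots()
im = ax.imshow(counts, aspect="auto")
ax.set_xticks(range(len(hours)))
ax.set_xticklabels([f"{h}:00" for h in hours], rotation=45)
ax.set_yticks(range(len(floors)))
ax.set_yticklabels(floors)
ax.set_xlabel("Hour of day")
ax.set_ylabel("Floor")
ax.set_title("Headcount sweeps (fabricated data)")
fig.colorbar(im, ax=ax, label="Students counted")
plt.show()
```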

LAC is a great conference at which to present

As a presenter, I love how well-supported we are by the ARL staff at the conference. It’s nice to know that I don’t have to worry about anything other than being prepared to talk. I’ve had plenty of terrible experiences elsewhere, ranging from having to do DIY tech support just before my talk, to not being told I had to bring my own laptop until I got there, to not being told I couldn’t use my laptop (with my Mac Keynote presentation) until I got there, to presenter-unfriendly setups (like having the computer somewhere else so you can’t actually see your slides unless you look behind you at the screen). The ARL staff tell you ahead of time what they need from you and what the affordances and limitations are, and they make sure everything is loaded and ready to go before you even enter the room. Very nice!

All in all, it’s a conference I would recommend to any academic librarian interested in assessment. I came home energized and inspired, though in this liminal state between jobs, there isn’t much I can actually apply those ideas to yet. For those of you who attended the Library Assessment Conference, what was your biggest takeaway? What presentations inspired you?