The part of my still new-ish job that causes me the most worry is assessment. I’d hoped to have time this summer to do some serious research on information literacy assessment and get some good ideas for how best to assess library instruction this fall. However, I got too busy with other, more pressing tasks to spend the time doing any research. The only thing I went into the semester knowing for sure was that I wanted to change the way we did assessment, especially in our English 101 sessions.
I think my Director would like to see assessment that is consistent across instruction sessions. There are good reasons for this. For accreditation, it’s good to show that you are using consistent measures of instruction. Also, how will you really be able to compare the effectiveness of an English session with a Business session if the measures are different? However, I worry that any assessment designed to work for everyone will end up giving us less meaningful results. Last year, we used an assessment tool for English 101/102 and in the social sciences, architecture, and engineering. It asked students to rate us on the following statements:
- The purpose of this instructional session was explained to me.
- The information presented was clear and well organized.
- There were opportunities to ask questions during the session.
- I will be able to use these research tools by myself.
with a scale running from “strongly agree” to “strongly disagree.” Looking at the assessment stats for last year, only a very small percentage of students actually chose “disagree” or “strongly disagree” on any of the questions, leaving us with very little useful feedback. There was a final question that asked for open-ended feedback, but we almost never got any. It is nice that we can look at most of the classes taught last year on a single spreadsheet and see what students thought of them, but those stats tell me absolutely nothing about what I (or any of my colleagues) should be doing better or covering in more depth. They also don’t give me any sense of whether students absorbed what I taught them.
My big push for this year is to incorporate active learning into every session we teach. To that end, I created a worksheet that students are meant to complete during the class session(s) we’re teaching. The worksheet contains things like an area for students to brainstorm keywords for their topic, a place to record their search strategies and results, and a place to record useful books and articles they found. In class, we’re supposed to go over a topic, like finding books, and then give the students time to complete that part of the worksheet with our help. Then we go on to the next topic and give them time to practice those skills. I think it’s helpful for them to practice the skills we’re teaching during the session, because I know how easy it is to zone out when someone spends an hour lecturing. But when you’re forced to actually do something, the chances of remembering what you learned later on are much greater. Also, the worksheet can act as a “roadmap to research” later on, reminding them of what they’ve already tried when they’re in the thick of their research.
The worksheet template is being used in all EN 101 sessions and all of the social science sessions this semester. It’s just a template and it can be tweaked to fit the topics being taught. For example, I recently taught a session that covered creating search queries, finding tertiary sources, and finding peer-reviewed articles in History. For that, I got rid of the section on finding books and added a section on reference sources that was appropriate to what I would be teaching. As a result, each class will probably have a slightly different worksheet, because we’re preparing them for different assignments and, thus, should be teaching different skills.
Even within English 101, it’s difficult to create any sort of consistent worksheet for students to use. My predecessor’s goal had been to create a consistent list of things that we would teach in every EN 101 session. It was a good goal, but difficult to do when each instructor assigns a very different research assignment. I’ve seen English 101 classes where students had to find peer-reviewed articles on environmental topics, where students had to find articles related to topics discussed in Flowers for Algernon, where students had to research and write about a controversial issue, and more. And the resources they were required to use also varied widely. With so many different topics, it’s almost impossible to cover the same set of resources in every EN 101 session. I know there’s a push here to standardize the EN 101 curriculum, but until then, I think we have to teach to the assignments students have.
A colleague of mine gave me the idea of using the worksheet as an internal assessment tool as well. We could grade each worksheet using a rubric I created and then record each student’s score on each of the individual questions in a spreadsheet. That way, we could see not only how well the students in the class absorbed what we were teaching, but also which specific topics they had the most difficulty with. If the average score on a certain question is very low (for example, on justifying why an article is of sufficient quality to use in a paper), we would know that we need to cover that topic better next time. It should really give us a sense of our effectiveness as instructors and what specifically we need to do to improve.
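To make that concrete, here’s a minimal sketch of the kind of spreadsheet analysis I have in mind. It assumes the rubric scores get exported to a CSV file with one row per student and one column per worksheet question; the filename and column names here are just placeholders, not anything we’ve actually built yet:

```python
import csv
from collections import defaultdict

# Hypothetical export: one row per student, one column per worksheet
# question, each cell holding that student's rubric score (e.g., 0-4).
with open("en101_worksheet_scores.csv", newline="") as f:
    rows = list(csv.DictReader(f))

totals = defaultdict(float)
counts = defaultdict(int)
for row in rows:
    for question, score in row.items():
        if question == "student_id":  # skip the identifier column
            continue
        if score:  # ignore blank cells (unanswered questions)
            totals[question] += float(score)
            counts[question] += 1

# Average rubric score per question; a low average flags a topic
# (like evaluating article quality) to cover better next time.
for question in totals:
    print(f"{question}: {totals[question] / counts[question]:.2f}")
```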
Obviously, if we all have somewhat different worksheets, we won’t be able to compare the scores exactly from one class to the next. Hopefully, each worksheet will contain some common elements (each of us should be teaching students how to use library databases, for example), and we can compare those scores. But what I think will be nice is that we’ll be able to see where, in a given session, we didn’t get through to the students.
I know consistency is important, but what I really hope to accomplish this year is to get some measure of our effectiveness as instructors. And I just don’t know if any single tool will help us learn that. I want to know that we’re really giving the students the skills they need to do research for their assignments. Then, once we know we’re OK on that, I’d like to focus on creating more consistent assessment. But I really think that making our instruction effective is far more important than measuring everything in the same way.
I’m curious how other libraries assess individual information literacy sessions. Does every instructor use the same assessment tool in every class? Are there specific assessment tools for specific disciplines? Do you assess student satisfaction or student learning? Do you feel that the assessments you get back are instructive to you?
Take a look at Chapter 5 of Joseph Matthews’s book “Library Assessment”; there is a section on IL assessment that you may find helpful. Your own accreditor’s guidelines may differ, but Middle States recommends using a variety of methods to assess student learning, not just one. So pre-test/post-testing, portfolio analysis, citation review, rubrics, etc. are all possibilities. I always found this publication to be a good source of understanding and examples:
http://www.msche.org/publications/Developing-Skills080111151714.pdf
Assessing student satisfaction and assessing student learning are quite different. The latter is much more difficult, but of much greater value to the library and institution. I would often use pre/post testing with courseware (e.g., Bb assessment tools) to learn not only whether students were retaining information from my sessions, but also to fine-tune my content. By analyzing the results for each question, I could see what the vast majority were getting right, what they were getting wrong, and why they might be getting it wrong, based on which incorrect answers they chose. This always helped me revise my sessions by dropping content they seemed to easily “get” and increasing time spent on concepts they didn’t get. Good luck with your assessment efforts.
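For what it’s worth, here’s a minimal sketch of that kind of per-question pre/post comparison, assuming the courseware can export each test as a CSV of question IDs and correct/incorrect flags (the filenames and column names here are only placeholders):

```python
import csv
from collections import Counter

def correct_rates(path):
    """Return the fraction of students answering each question correctly."""
    correct = Counter()
    attempts = Counter()
    with open(path, newline="") as f:
        # Assumed export format: columns question_id, is_correct (0 or 1)
        for row in csv.DictReader(f):
            attempts[row["question_id"]] += 1
            correct[row["question_id"]] += int(row["is_correct"])
    return {q: correct[q] / attempts[q] for q in attempts}

pre = correct_rates("pretest.csv")    # hypothetical courseware exports
post = correct_rates("posttest.csv")

# Questions with little pre-to-post gain mark concepts that need more
# session time; big gains mark content students already "get".
for q in sorted(pre):
    gain = post.get(q, 0.0) - pre[q]
    print(f"{q}: pre {pre[q]:.0%}, post {post.get(q, 0.0):.0%}, gain {gain:+.0%}")
```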
I agree that this is a very difficult task, but one that most of us librarians have had to grapple with a lot recently. I have used a combination of surveys and pre- and post-tests. The problem with the tests is that the faculty have to be on board with the project, so it is pretty much ad hoc with those faculty who agree to participate. I do get some good information out of these assessments, but some of it does not really tell me anything insightful about student learning or our teaching. I think the ideal situation is for the administration to mandate an information literacy test for all freshmen upon entering college and another test of their competency as seniors before graduating. How many college/university administrations would support this kind of thing at this point is questionable, but I see it as setting baselines and the best way of measuring students’ learning of information literacy skills.
I am not in a career that requires the sort of massive assessment you are talking about, but I have taught a few courses and I have a lot of opinions about surveys and assessments.
It is impossible to get really useful results from surveys without some varied and open-ended questions. You can have a set of 5 to 10 closed questions that are the same in every class and can be used for statistical purposes to report to bosses. For the rest, use open-ended and varied questions. The most useful to me as an instructor are questions like “Did you learn A, B, C, and D?” When students say they didn’t, I can emphasize that material more.
I remember taking a survey about health that was done by the local county health services. One of the questions was along the lines of “Did you know that walking 10,000 steps per day can improve your health?” On NPR just a month earlier, a reporter had said that this statistic was used by pedometer makers to sell the devices; it was based on no research. But there was no answer choice that equated to “I disagree with this question.” Closed-ended questions will always have this problem: they cannot reflect the perceptions of people who don’t see the world the same way as the people writing the questions.
I think you have to decide what you want to assess. Are you assessing the students’ perception of the quality of instruction, in terms of performance? Do you want to find out if they feel more comfortable using library resources or talking to a librarian? Or are you trying to see if you taught them a discrete skill? After you know what you want to find out, then you create an instrument. I just read a really excellent book on this: Radcliff, C., et al. (2007). A Practical Guide to Information Literacy Assessment for Academic Librarians.