By Meredith Farkas | March 22, 2007
In this post, I’m going to talk about the decisions we made in creating the course and what I would do differently next time. Hopefully this will be useful to those of you who wish to replicate the course. I’m sure my fellow organizers will have other insights to add from their perspective.
Certain decisions we didn’t even think about. Drupal seemed like the best solution for the main course site, since you can have many blogs that feed into a single blog page and you can have static pages. There are also a lot of nice things you can do with Drupal to extend its functionality — tons of modules can be added. I didn’t have any problem installing Drupal, adding the modules we wanted, setting up the menus, etc., but making Drupal look decent is a difficult task and one I was really not up to. We were very fortunate in that a very nice library school student named Heather Yager volunteered to be our intern for the course. When we saw her graphic design skills, we immediately knew where she could be useful. And she did a great job making the site look good! Otherwise, I would have fallen at the feet of John Blyberg and begged him to help us (which he probably would have; he’s such a good egg).
Another no-brainer was using MediaWiki for the wiki. I really wanted the students to get experience using MediaWiki since it’s the wiki software most commonly used by libraries at this point (though Twiki and PmWiki are certainly making headway). I’d considered using something like PBWiki since it is a good option for those who do not have access to a server, but I figured that the participants could sign up for a wiki in there and play with it on their own. It would be much more difficult for them to install MediaWiki and start playing with it. Plus, I’m just really familiar with MediaWiki and I knew there wouldn’t be any unpleasant surprises on the tech side.
The participants didn’t actually use the wiki a great deal. They were required to create a profile in there to get experience with wiki markup. It was a good space to create directories of people’s AIM usernames, flickr accounts, proposal URLs, etc. And some people used it more than others. If they’d had group projects, it would definitely have been a great space for them to collaborate in.
The tools we used for synchronous communications did require decisions. We’d originally bandied about the idea of using Skype for our weekly small group chats. It would have been nice to have used VoIP because it’s so much more personal, but I think we made the right decision in not choosing it. Twice, the organizers tried to meet and twice Skype ended up not working for us (lots of echoing to the point where we couldn’t have a conversation). We figured if six of us had issues, imagine what would happen with 40 participants! So we went to our fallback position, which was just using IM (it saved people the cost of buying a headset or mic too). I’d written up instructions for setting up an account with AIM and getting a client, but two weeks before the course was set to start, my husband discovered a really cool chat room add-on to Drupal. He set it up and Dorothea, Adam and I tested it briefly. Since we really hadn’t put it through enough paces, we decided to still have people sign up for an account with AIM as a backup. We only ended up needing to use the backup once (the very beginning of the first week — sorry Anne Welsh!), and we then got the technical problems fixed and it worked well. A few times throughout the course the chat room got temperamental and kicked people out, but for the most part, it performed beautifully.
I think it was really nice to have that chat tool, because it gave us a central place to meet, rather than everyone using different clients that come with their own quirks. It also really made Drupal the community center of the course.
The other synchronous tool we used was the Web conferencing software. This was the part I was most nervous about. The original reason I even thought I could do this was because my husband had a nearly unlimited license for Citrix’s GoToMeeting through his work, and said that we could use it for the course. As I learned more, I discovered that GoToMeeting did not have built-in VoIP, so people would need to call in. There were alternative ideas for audio, like a Shoutcast server, but it just seemed really complicated. At the time, GoToMeeting also did not play nice with Macs; more to the point, it didn’t play at all with Macs. Fortunately, my knight in shining armor, Tom Peters, e-mailed me out of the blue and said “hey, do you want to use OPAL for this course of yours?” It was like he was reading my mind! Tom made what for me was the most stressful part of the planning absolutely stress-free while the course was running. He’s a superstar!
But there were some more difficult decisions made in getting the course together…
The first: how to choose the participants.
The problem: we had over 100 applications for a course that would really only be manageable with 40 participants (given the number of organizers). None of us had experience sorting through applications for something like this.
What we did: The first step was choosing a tool, and the best tool for us was Google Spreadsheets. A bunch of us use Gmail, so it was very easy to click straight from our e-mail to Google Spreadsheets. Dorothea was kind enough to set up a very nice spreadsheet that each of us could use to rate the applicants. They were rated on four factors:
- How much support they get from their institution for continuing education. People with less support got more points.
- Benefit to self – what would they personally get out of the course? This decision was pretty subjective and based on how well they made their case.
- Benefit to library – did they have concrete ideas of how they might improve their library with social software (+ points)? Are there already people at their library who know this stuff (- points)? Do they mention wanting to teach this stuff to others (+ points)? The more we thought they could benefit their colleagues, library and patrons, the more points they got.
- Tech skills – We didn’t want people in the class who already knew and used all of the social software tools we’d be discussing in the class since it is kind of a class for beginners and they wouldn’t benefit as much. However, we didn’t want to choose people where we thought the course would be over their head. There was definitely a minimum required level of tech-savvy.
So each of these factors was rated on a scale of 1 to 3, and then we each chose our top 20 picks. We then went through those and saw who got the most votes from the organizers. Anyone who got two or more votes from the organizers was in the course. We then had to choose seven from the twenty people who had each gotten one vote. We each rated them on a scale of 1 to 5 and picked the seven with the highest scores. It was a lot of work, but it was definitely a good, fair way of choosing participants.
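For the curious, the two-round process above could be sketched roughly like this. (This is purely a hypothetical illustration — we did all of this by hand in the spreadsheet, and the function names and data shapes here are made up for the example.)

```python
# Hypothetical sketch of our two-round selection process. In reality this
# was all done by hand in a shared Google Spreadsheet.
from collections import Counter

def select_participants(picks_per_organizer, tiebreak_scores, seats=40):
    """picks_per_organizer: each organizer's list of top-20 applicant names.
    tiebreak_scores: {applicant: [1-5 ratings]} for the one-vote applicants.
    Returns the accepted roster, capped at `seats`."""
    votes = Counter()
    for picks in picks_per_organizer:
        votes.update(picks)

    # Round 1: anyone named by two or more organizers is in automatically.
    accepted = [name for name, n in votes.items() if n >= 2]

    # Round 2: fill the remaining seats from the one-vote applicants,
    # ranked by their average 1-to-5 tiebreak rating.
    one_vote = [name for name, n in votes.items() if n == 1]
    one_vote.sort(key=lambda name: sum(tiebreak_scores[name]) / len(tiebreak_scores[name]),
                  reverse=True)
    accepted.extend(one_vote[:max(seats - len(accepted), 0)])
    return accepted
```

With three organizers and four seats, applicants picked by two organizers get in right away, and the remaining seats go to the one-vote applicants with the best average tiebreak score.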
How long to make the course
The problem: A short course might be more palatable to people and their supervisors who are approving their involvement in the course. A long course could cover more and/or could spread things out so people aren’t doing so much each week.
What we did: While I had wanted the course to be longer, I think five weeks was just the right amount of time to pack things in and prevent the burnout that comes from doing it for too long. The pace was fast, but I sometimes think that’s necessary to keep people’s attention… keep ‘em moving. I think some people were a bit overwhelmed by the sheer amount of materials available to them, but luckily the materials are archived so they can look back now and take their time. We didn’t get to cover everything I’d wanted to cover, but we did get through a lot in five weeks. Any shorter would have been way too much in too short a time. Too much longer and people may not have been able to participate or may have “hit the wall.” Our participant Suzanne Mangrum said it best when asked how we could improve the course: “I would say more time, but I think that I may have been afraid of a really long commitment.” So while five weeks worked out pretty well, I’d probably add an additional week or two were I doing it again, and maybe some group activities using the tools we’ve described, just to give people more time with the tools (and to breathe).
When to schedule the Webcasts
The problem: I knew there would be some people who were unable to participate in Webcasts during the workday. I also knew there would be people who were only going to participate in the course during the workday. How to please everyone?
What we did: We offered two Webcasts per week, one at 2:00 pm ET and one at 7:00 pm ET. We knew this would not meet everyone’s needs, but we hoped it would meet most of them, and everyone else could watch the archived talks and write about them. The biggest problem came in Week 3, when we had only one Webcast and a lot of people couldn’t make it. I’d strongly suggest that offering one Webcast in the afternoon and one in the evening every week is the best way to do it. I would have offered the evening Webcast at 8:00 pm ET, but I knew that would be driving-home time for a lot of folks on the West Coast. When you’re working across more than four time zones (we had a participant in central Europe!), it can be difficult to please everyone.
How to deal with people who did not participate fully.
The problem: When we accepted our participants, we told each of them what they would be expected to do each week and asked them to commit to that before we entered them as users into Drupal. All of them said they were ok with the requirements. Most of our participants really did go above and beyond and many others met all or almost all of the requirements, but there were a few who fell far short of meeting the requirements (a few even disappeared after week 1 or 2). Does everyone get credit for having done the course when some people really didn’t?
What we did: We knew that a couple of people were getting CE credits for the course, so I was pretty adamant that no one get credit for completing the course successfully without actually having done it. I felt that it was insulting to the people who did really work hard. We reminded people of the requirements time and again, but I don’t think that had much of an effect. We decided to create a list of people who had successfully completed the course — these were people who had met all of the requirements weekly or at most had missed one blog post or one chat or missed Webcasts one week. We also gave people the opportunity to make things up if they missed more than one thing and we gave them until one week after the course ended (tomorrow), so they would have time to finish up. I think this motivated a bunch of people to do the little bit of makeup work they had to do so that they could be credited with completing the course. As of now, 30 out of 40 people successfully completed the course and there were at least six more who participated a lot in the course but have not completed the makeup work. Not bad! However, we should have made decisions about how to deal with stuff like this in advance instead of in the middle of the course.
How to organize the chat groups
The problem: Several of our participants mentioned in their end-of-course comments that they would have liked the chat groups to be organized by library type, though a few felt they learned better in a more diverse group.
What we did: We could see pluses and minuses to organizing people by library type. The pluses: people who have similar experiences, limitations and patrons can relate to each other better and may provide more useful advice to each other. The minuses: people would not benefit from the insights of people in other types of libraries (diversity is good!), and how would we ever get all of those people to meet at the same time? We ended up making what I thought was a very good decision, which was to create eight chat groups (5 to a group) and offer them at different times throughout the week (morning, afternoon and evening ET). We then allowed the participants to choose their own groups. Even then, we had a few people who couldn’t get into groups at the times they wanted, so we had a few groups with 6 people and a few with 4. I can’t imagine what it would have been like to organize everyone by library type and then have to find a time when everyone could meet. Like herding cats!
Some things I would do better/differently next time:
- Work hard to identify people who are struggling and provide extra support: One of our participants commented that a lot of things went over her head. If I did this again, I would probably ask each of the facilitators to tell me after week 1 which of their participants seem to be struggling in the chat or who are not participating much in the chat (another sign that they might be in over their heads). Then the chat facilitator could contact that individual and ask what they can do to help. I also do think that it’s up to the participant to ask questions and/or look things up. We made it very clear that we were happy to answer any questions in the chats, on the blogs or via email. I don’t think we should “dumb things down” because of one or two people, but I think we should provide them with extra help.
- Tell people how long each of the screencasts/podcasts is: when we watched the asynchronous presentations given by the presenters (to make sure they worked ok), none of us thought to time them and let people know how long they were. And no one suggested it to us until around week 3, when we were all too busy to do it. That was a big oversight on our part. People should know how long the presentations are going to be so that they can budget their time accordingly. I would definitely do that next time.
- Archive the chats with presenters of screencasts and podcasts: unfortunately, there was no mechanism that we could find for archiving chats automatically, so the only way to save them was to copy the content and paste it into a text file. And we were not present for all of the chats, so there was no mechanism (human or otherwise) for reliably archiving them. I think we probably could have come up with something had we thought about it before the course started, but it didn’t occur to us until the course was underway. I would have liked to read the transcripts from the chats I wasn’t able to attend too. Also, some of the organizers archived their weekly group chats and others did not. We should have decided about stuff like that at the beginning, but none of us thought of it until we were in the thick of it.
- Freeze the technologies at least two weeks before the course and test them thoroughly: We had a few minor issues at the beginning of the course caused by adding a plugin that made the server go haywire, and we also did not test the chat tool with more than three people. Next time, I would invite everyone I know to come help me try to break that chat tool so that we’d know what settings make it work best.
- Not worry so much about the folks who aren’t completing the course requirements: I really did fret over this. I guess I was naive because I was surprised that some people didn’t do the required activities. We had told them what to expect and plenty of people were doing everything (and more!), so I didn’t feel like we hadn’t properly prepared them for it. I was a little annoyed, I guess, because I thought of all the people who hadn’t been accepted to the course and how if these participants had just said before the course “no, I don’t think I have the time for this” some of these other interested folks (who were interested enough to create a lurkers’ wiki) could have benefited more. I need to learn not to let things like that bother me, because in every course there will be a few slackers and a few people who drop out entirely due to life and work circumstances. I should actually be amazed by the number of people who worked their booties off in the course and created amazing proposals at the end. I’m so proud of them!
There are probably a bunch of other things I’d do differently, but I can’t think of them right now. I’ll add them to the post as they come to my mind.
All in all though, the course went about a million times better than I’d expected. The tech worked well and people didn’t have too many tech problems at all. Given that this was the first time we’d offered it, the mistakes we made were fairly minor, though there are definitely things I’d do differently next time. While the model worked, it mainly worked because we had an amazing group of participants. They were extremely bright, enthusiastic, skeptical, critical, curious and just wonderful. I learned so much from them. Had we not had the same amazing group of people, I doubt the class would have gone so well.
I’ll likely be posting more meta-posts about the course in the coming days and weeks. Suffice it to say, it was an incredible experience and I wish everyone involved in it the best.