Critical Thinking

Re-Evaluating Evaluations

By Sandy Hughes

We often assume that our conference evaluations are designed to provide us with both holistic and specific feedback on our event. What we typically prepare for these evaluation forms are questions that generate participant satisfaction ratings around event elements such as speakers, sessions, content, format and outcomes. We also include a host of logistical details covering parking, signage, ease of registration, scheduling and food quality.

However, as a university administrator supporting faculty in curriculum, course design and evaluation of the overall pedagogical process (and after attending a great many meetings and conferences over the years), I find it quite surprising how often these evaluation questions are generic, despite substantial differences in the themes, venues and intended purposes of the events. The biggest problem such standard surveys pose for meeting and event planners is that they are unlikely to get us to the real viewpoints of participants. As such, they do not truly afford us the opportunity to critically assess the assumptions we’ve made about our participants, the content delivered and the outcomes for which we were aiming.

If we want to develop and execute event evaluations that more acutely assess our event and the assumptions behind it, we need to understand how to apply the principles of critical thinking to the evaluation process.

DEFINING CRITICAL THINKING
At the 8th Annual International Conference on Critical Thinking and Education Reform (summer 1987), Michael Scriven and Richard Paul described critical thinking as “…the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.” Essentially, critical thinking enables us to apply a disciplined approach that goes beyond our regular, often spontaneous, thought processes.

If we want to apply critical thinking to our evaluations, attributes such as clarity, precision, relevance and good reason should be our guiding principles. If we can more effectively use these elements to formulate our event evaluation questions, we will capture more relevant information about what we should start doing, what we should keep doing and what we should stop doing!

START WITH A CLEARLY DEFINED EVENT PURPOSE AND DESIRED OUTCOMES
Evaluation questions should be linked to the initial intended outcomes of the event. At the beginning of your planning, it’s paramount to use critical thinking to ensure clarity around what, how and why you’re organizing the event. A few good questions include:

  • What is this meeting intended to accomplish?
  • What could be the best outcome for this meeting?
  • Who will attend and why?
  • Will this be a one-time event on a current or upcoming topic, or is it intended to develop over time and attract attendees year after year?
  • What specific knowledge, skills or networking opportunities do attendees hope to take away from the event?

With these questions answered, it is easier to determine how the event goals can be accomplished, such as how to prepare a call for proposals (if appropriate), how to organize the schedule of events, how much free/networking time to allow, what type of venue would be appropriate and other details.

EFFECTIVELY BRIEF PRESENTERS AND SPEAKERS
Intended outcomes and clarity of purpose are also important pieces of information to provide to keynote speakers. Keynote speakers are usually chosen for their specific knowledge, expertise, experiences and roles, and they need this context to tailor their expertise to your audience and intended outcomes.

Sometimes a broad lecture is just what is needed; often though, a more focussed talk or participant-engaged workshop will help to better set the tone for the intended outcomes. Keynote speakers should be equally clear at the start of their talk by stating the objectives for their session to participants. They should inform, expand, clarify and provide additional questions to motivate the audience.

Resist the temptation to accept additional concurrent sessions based solely on a need to “fill the schedule” or as a way of bolstering attendance. This is where a rubric or other unbiased evaluation instrument will help you to ensure that your session selections are made for the most appropriate reasons.
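
To make the rubric idea concrete, here is a minimal sketch (in Python, with entirely hypothetical criteria, weights and proposals) of how weighted scoring can keep session selection consistent across reviewers:

```python
# A minimal sketch of a proposal-scoring rubric (hypothetical criteria/weights).
# Each proposal is rated 1-5 on every criterion; the weighted total makes
# comparisons across proposals and reviewers consistent.

WEIGHTS = {
    "alignment_with_outcomes": 0.40,  # does it serve the event's stated purpose?
    "speaker_expertise": 0.25,
    "audience_engagement": 0.20,      # workshop/interaction vs. straight lecture
    "novelty": 0.15,
}

def rubric_score(ratings: dict) -> float:
    """Weighted average of 1-5 ratings across all rubric criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

proposals = {
    "Session A": {"alignment_with_outcomes": 5, "speaker_expertise": 4,
                  "audience_engagement": 3, "novelty": 2},
    "Session B": {"alignment_with_outcomes": 2, "speaker_expertise": 5,
                  "audience_engagement": 4, "novelty": 5},
}

for name, ratings in sorted(proposals.items(),
                            key=lambda kv: rubric_score(kv[1]), reverse=True):
    print(f"{name}: {rubric_score(ratings):.2f}")
```

Note how the heaviest weight sits on alignment with outcomes, so a dazzling but off-topic proposal cannot win on polish alone.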

CRITICALLY ASSESS YOUR EVALUATION QUESTIONS
When the goals and specific outcomes for your event are clear, it’s easier to develop evaluation questions for participants that specifically align with these goals. Evaluations based on your intended outcomes can help identify the added value that the event brought to your attendees.

In terms of using critical thinking to assess conference objectives, it’s important to ask evaluation questions that align with the attributes associated with critical thinking. Clustering questions around various aspects of the event is also helpful, as it encourages respondents to think about a particular aspect of the event more deeply. If you really need to understand participants’ feelings about a certain aspect of the conference, ask more than one question about it, from slightly different perspectives, to ensure that you get an accurate picture of how participants felt. It is also a good idea to include a few open-ended questions, as these give attendees the opportunity to address any issues you may have neglected to ask about. A sample question cluster is sketched below.
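
The following sketch (in Python, with hypothetical wording and a 1–5 scale) shows one such cluster: two closed ratings on the same aspect from different angles, plus an open-ended prompt:

```python
# A minimal sketch of a question cluster for one aspect of the event
# (the keynote, here). Wording and scale are hypothetical placeholders.

keynote_cluster = [
    {"type": "likert",  # 1 = strongly disagree ... 5 = strongly agree
     "text": "The keynote clearly stated its objectives at the outset."},
    {"type": "likert",
     "text": "The keynote gave me ideas I can apply in my own work."},
    {"type": "open",
     "text": "What, if anything, would have made the keynote more useful?"},
]

for i, q in enumerate(keynote_cluster, start=1):
    scale = " (1-5)" if q["type"] == "likert" else ""
    print(f"Q{i}{scale}: {q['text']}")
```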

Create a draft of your proposed evaluation questions, then critically assess it using the questions below. Reflecting on these pointed questions will help you determine whether your survey is really getting you to those deeper insights about your attendees:

  • What precise information is to be gained from this question?
  • Is there any bias associated with the way this question is phrased? (For example, “How much did you enjoy our outstanding keynote?” invites agreement, where “How would you rate the keynote?” does not.)
  • Will participants be able to clearly understand the question so that the responses are reliable and useful to inform future events?
  • Have I asked appropriate questions to inform future related events?
  • What was the response rate when I used these questions for the evaluation in the past?
  • Is it likely that respondents are only those who are really satisfied, or really dissatisfied?
  • Has an opportunity been provided for written feedback?

Considering these critical thinking questions regularly as the planning for a meeting or conference progresses is also quite helpful. Reviewing them repeatedly will give you greater clarity around the purpose and intended outcomes of your event, which will certainly be echoed in the event evaluation. This ongoing reflection will ensure that a more accurate summary of your attendees’ opinions is captured.

EXECUTING EVALUATIONS
The best time to ask for feedback is as soon as possible after the session or event concludes. Many events now conduct their evaluations online. This may work more effectively for an overall conference evaluation than for individual concurrent sessions, because attendees don’t always bring electronic devices to a session and the break between sessions may not allow enough time. If you are conducting online surveys, ensure that your attendees are comfortable with, and have easy access to, the required hardware.

Sometimes a keynote presenter will provide their own online survey tool. I suggest that you always review the survey instrument to ensure it meets your needs and doesn’t contain bias. In this situation, it’s usually preferable to fold a few of the questions that are important to the speaker into your own evaluation, rather than the other way around.

Some planners judge whether they’ll hire a presenter for a future conference based on evaluation results, so getting an appropriate response rate is important. Higher response rates give some confidence that you’ve heard from a good cross-section of attendees, not just the very satisfied or the very dissatisfied.
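
To put rough numbers on why response rate matters, here is a back-of-envelope sketch (in Python, with invented figures) of the response rate and the uncertainty it implies for a satisfaction percentage. It assumes respondents resemble non-respondents, which is exactly the assumption that nonresponse bias can break:

```python
import math

# Back-of-envelope: how much confidence does a given response rate buy?
# Figures below are invented for illustration.

attendees = 400        # N: total attendees (hypothetical)
responses = 120        # n: completed evaluations (hypothetical)
p = 0.80               # observed share rating the keynote 4/5 or higher

rate = responses / attendees
# 95% margin of error for a proportion, with finite-population correction
moe = 1.96 * math.sqrt(p * (1 - p) / responses) * \
      math.sqrt((attendees - responses) / (attendees - 1))

print(f"Response rate: {rate:.0%}")          # 30%
print(f"Satisfaction: {p:.0%} ± {moe:.0%}")  # roughly 80% ± 6%
```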

To help drive response rates, include a note on the evaluation telling participants that their feedback is important and explaining how it will be used in future planning. Also, make the evaluation easy to complete; if it’s too complicated or too long, people will avoid doing it!

If you’re eager to achieve a high response rate, an incentive (such as a discount on next year’s conference) might help. You can also have members of the conference committee attend concurrent sessions to introduce speakers, keep track of time and ensure evaluations are completed.

EVALUATING THE RESULTS
If appropriate questions have been asked and the response rate is good, you have excellent data from which to determine how things went and what to do in the future. If you need detailed statistical analysis, you may want to work with an expert researcher who can evaluate the data at a more sophisticated level but, in general, this is not necessary.
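
For most planners, that general analysis can be as simple as tallying each question. Here is a minimal sketch (in Python, with invented responses) of summarizing 1–5 ratings per question without any specialist tools:

```python
import statistics
from collections import Counter

# A minimal sketch of non-specialist analysis of Likert responses
# (1 = poor ... 5 = excellent). Data below is invented for illustration.

responses = {
    "Keynote met stated objectives": [5, 4, 4, 5, 3, 4, 5, 2, 4, 4],
    "Schedule allowed enough networking time": [3, 2, 4, 3, 3, 2, 4, 3, 3, 2],
}

for question, scores in responses.items():
    dist = Counter(scores)
    print(question)
    print(f"  mean {statistics.mean(scores):.1f}, "
          f"median {statistics.median(scores)}, "
          f"distribution {dict(sorted(dist.items()))}")
```

Looking at the distribution alongside the mean matters: a 3.0 average made of 1s and 5s tells a very different story than one made of steady 3s.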

Finally, review the results with all concerned. If necessary, assign follow-up tasks to individuals to ensure closure of the event. If you’ve promised feedback to concurrent session leaders, ensure that they receive it in a timely fashion and that you give them an opportunity to discuss their feedback with you (if that’s appropriate). You may even want to share some of the key findings with attendees and let them know specifically how you are using their insights to shape future programs.

Above all, a well-designed evaluation shows your participants that you care about creating a rich experience for them and that you are committed to doing an even better job next time around. When they are developed with a bit of critical thinking, your evaluations will have greater credibility. This additional credibility will not only enhance future attendance at your program, but can also be leveraged to entice new sponsorship dollars. Especially when viewed from this perspective, thinking a little harder about your event evaluations is definitely worth the extra time and effort!

Sandy Hughes is the director of the Centre for Teaching Innovation and Excellence at Wilfrid Laurier University, focusing on online learning, educational technologies, quality assurance, community service, and teaching excellence. www.wlu.ca

Appeared in Speaking of Impact, Spring 2015 Edition
