Incorporating "Participant Feedback" and Measuring Success (Pre, Post, and Follow-up Evaluations)
The only way your program can improve over time is by collecting feedback from your participants and refining the program to better meet their needs. Develop mechanisms to collect and incorporate “participant feedback” and measure program success in the earliest stages of program planning. If you meet the needs of your participants, your program is more likely to meet its long-term goals.
Think carefully about (and write down) your program goals and objectives: what should participants know, be able to do, or understand by the end of each session/activity, and by the end of the program? Goals and objectives should identify these outcomes as specifically as possible. Select the most important learning objectives and develop ways to assess them.
Part of the assessment development process is to consider how the assessment information will be used. Are you trying to improve and refine your program? Or are you trying to justify your agency's investment in your program (prove your program)?
New programs should develop and use assessment tools to determine if the program accomplishes stated objectives, and to improve/enhance the effort. These assessments generally are used by staff and partners directly involved in program development and delivery.
Administrators and funders are more interested in long-term assessments that prove a program's impact (outputs and outcomes). Assessments to measure long-term impacts should be developed early in program planning, so the program is designed to produce the long-term goals that will be assessed.
Pre/post surveys (administered before and after a session, activity, or program) are used to determine if specific learning objectives are met. Pre/post program surveys can also help you determine if you meet educational goals of the program. Surveys can be hard copy (paper) or administered using off-the-shelf survey programs like SurveyMonkey. Examples of other programs' surveys as well as a series of surveys developed by Responsive Management specifically for new, adult, food-motivated programs can be found below in the resources section of this page.
Screening surveys also are recommended to identify people who may be interested in your program and determine specific information potential participants want/need.
Assessment tools can be formal or informal, and qualitative or quantitative. A formal survey is not necessary for every step, but may be the best choice for measuring long-term impacts and behavioral changes. When selecting assessment tools, take care to ensure they are systematic (collecting information in a uniform manner from a representative sample of the population) and that they do not lead respondents to specific answers. Short-term, internal assessments (formal and informal) should gather information about program content and delivery, as well as whether participants are attaining the specific skills/knowledge/understanding needed to reach stated learning objectives (e.g., learning to safely load and unload a firearm so they can hunt safely).
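For the sampling step, a minimal sketch of drawing a simple random (and therefore representative, on average) sample from a participant roster might look like the following. The roster names and sample size are hypothetical; the fixed seed simply documents the sampling procedure so it can be repeated.

```python
# Sketch: drawing a simple random sample of participants so the assessment
# is systematic rather than ad hoc. Roster entries are hypothetical.
import random

def draw_sample(roster, n, seed=42):
    """Select n participants uniformly at random. The seed is fixed so
    the same documented procedure yields the same sample when rerun."""
    rng = random.Random(seed)
    return sorted(rng.sample(roster, n))

roster = [f"participant_{i:02d}" for i in range(1, 21)]  # 20 participants
print(draw_sample(roster, 5))  # 5 randomly selected, listed in order
```

Because every participant has an equal chance of selection, summaries from the sample are not skewed toward, say, only the most enthusiastic volunteers.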
Any assessments conducted should be well documented, so you have a record of how they were conducted, as well as the results that were obtained. This information will help you and your colleagues learn from your efforts and develop better assessment tools the next time. Sharing the results of your assessment will help the R3 (Recruitment, Retention and Reactivation) community become more effective in developing future assessment tools, and ultimately, better serving the food-motivated hunting community.
Do not be intimidated by the goal of developing perfect assessment tools; conducting a few assessments “on the job” is the best way to learn how. Templates for conducting formal assessments have been developed by Responsive Management specifically for food-motivated programs (see resources below). Additional information can be found in Appendix C of the National Hunting & Shooting Sports Action Plan (Second Content Draft), and the Hunting Heritage Action Plan.
An excellent example of a post-program assessment is contained in Alumni Reflections on Wisconsin’s “Hunting for Sustainability” Course, 2012 & 2013 in the resources section below. In addition, Georgia Department of Natural Resources’ Wildlife Resources Division has customized an evaluation template that is attached below for their “Hunt and Learn” program.
Again, pre/post assessments are the best way to determine if participants have learned what they need. If the learning objectives are aligned with participant needs, and they are met for each session/activity, then overall program goal(s) will be attained. Conduct periodic “checks” during your program, especially after teaching specific skills or introducing new concepts to measure your participants’ progress toward stated objectives/goals (formative assessments). Formative assessment allows you to take corrective steps, if necessary, to make sure participants gain critical skills, knowledge and understanding necessary to be successful hunters/anglers. In addition, identifying areas where additional/different instruction or resources should be included will help you design better programs in the future.
Use formative assessments to check participants’ understanding of program concepts, skill development, and comfort level with hunting and hunting-related activities. These can be reviews of content you have already covered; they do not have to be formal surveys. For example: Ask participants, “What else do you need to know or do related to topic X (focus of last session) in order to be able to do this on your own?” (Have them write answers on an index card, record on a flip chart, or just answer verbally.) This helps you gauge how well participants understood the material or whether they are confident they can perform a task. Or, before a session, ask, “What do you want/need to learn about topic X?” to help you provide content and/or activities they need to learn during the session.
Skills assessment often requires direct observation. Again, assessment can be formal or informal. Informal assessments of skills can be FUN exercises. For example: A team exercise or “scavenger hunt” where each “team member” has to demonstrate a certain skill before they can “collect” their token and move on to the next station can be used to assess skills. Be creative in designing assessments to avoid “survey fatigue.” In addition, using different types of assessment tools, and measuring specific components provides better information about whether a program accomplishes stated goals and objectives. (Every assessment tool has biases and limitations, so using only one assessment tool can lead to incorrect conclusions.)
Long-term behavior change, and hence long-term program success, is best assessed by formal survey-type assessments paired with specific indicators. Measuring an actual behavior, such as hunting or hunting license purchases, is generally more accurate than relying on a person's recollection of past behavior or intention to perform some future behavior. In order to conduct longer-term assessments, it is critical to collect participant contact information and assign each participant a unique customer identification number during the program. Appendix C of the National Hunting & Shooting Sports Action Plan (Second Content Draft) has sample templates for programs to use in assessing long-term impacts. These templates are designed to be modified by program managers to fit specific program needs. In addition, Responsive Management has developed specific survey instruments for evaluating food-motivated programs.
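As an illustration of pairing a behavior indicator with unique customer IDs, the sketch below (with hypothetical customer IDs and license records, not any agency's real data format) computes the share of program participants who actually purchased a license in a later season:

```python
# Sketch: using unique customer IDs to measure an actual long-term behavior
# (license purchase) instead of self-reported intention.
# All IDs and records below are hypothetical illustrations.

def license_purchase_rate(participant_ids, license_records):
    """Fraction of program participants who appear in the agency's
    license-purchase records for a given season."""
    buyers = {rec["customer_id"] for rec in license_records}
    purchased = [pid for pid in participant_ids if pid in buyers]
    return len(purchased) / len(participant_ids)

participants = ["C1001", "C1002", "C1003", "C1004"]  # program graduates
season_sales = [
    {"customer_id": "C1002", "license": "resident hunting"},
    {"customer_id": "C1004", "license": "resident hunting"},
    {"customer_id": "C9999", "license": "resident fishing"},  # non-participant
]

print(license_purchase_rate(participants, season_sales))  # prints 0.5
```

Because the comparison is against actual sales records rather than survey recall, the indicator is immune to memory errors, though it still depends on IDs being assigned consistently during the program.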
This research was conducted for the Southeastern Association of Fish & Wildlife Agencies’ (SEAFWA) Committee on Hunting, Fishing, and Wildlife-Related Participation and the Midwestern Association of Fish & Wildlife Agencies’ (MAFWA) Recruitment and Retention Committee to evaluate the outcomes of a series of pilot programs designed to promote hunting and fishing among young adults in urban/suburban settings who are interested in locally grown or organic foods (commonly known as “locavores”).
Over the past two years, hands-on pilot hunting and fishing programs were offered in Ark
Methods include the collection of qualitative data (such as interviews and narrative stories) or quantitative data such as numeric survey ratings. Sometimes program evaluations will include both types of data collection. A common evaluation design for Extension programs involves the collection of pre- and post-data that can document level of change in knowledge, attitudes, skills, motivations, and behaviors.
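A minimal sketch of documenting level of change from matched pre/post responses is shown below. The participant IDs and 1-5 ratings are hypothetical; the key point is that change is computed only for participants who completed both surveys.

```python
# Sketch: summarizing matched pre/post survey scores to document change.
# Participant IDs and 1-5 ratings are hypothetical examples, not real data.

def mean(values):
    return sum(values) / len(values)

def prepost_change(pre, post):
    """Per-participant and average change for participants who completed
    both the pre-survey and the post-survey (matched pairs only)."""
    matched = sorted(set(pre) & set(post))
    changes = {pid: post[pid] - pre[pid] for pid in matched}
    return changes, mean(list(changes.values()))

# Example item: confidence rating, 1 (low) to 5 (high)
pre  = {"P01": 2, "P02": 3, "P03": 1, "P04": 4}
post = {"P01": 4, "P02": 4, "P03": 3, "P05": 5}  # P04 skipped the post-survey

changes, avg = prepost_change(pre, post)
print(changes)  # {'P01': 2, 'P02': 1, 'P03': 2}
print(avg)      # ~1.67 average gain on the 5-point scale
```

Restricting the summary to matched pairs avoids inflating (or deflating) the apparent change when different people answer the two surveys.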
The Kentucky Department of Fish and Wildlife Resources is working with Responsive Management, a professional survey research firm, to evaluate participant, instructor, and mentor experiences with the [PROGRAM NAME].
As an instructor, you have a valuable perspective on the effectiveness of the program, and we would like to know your opinions. Your responses will help improve the [PROGRAM NAME]. Thanks in advance for your input.
The Evaluation Guide is designed to assist practitioners of aquatic education programs with all levels of evaluation.
About the Evaluation Guide
The Evaluation Guide was developed as a companion to the Best Practices Workbook for Boating, Fishing and Aquatic Resources Stewardship Education and provides a thoughtful introduction to evaluation. The guide has benefited from the input of more than two dozen evaluation experts and aquatic education practitioners.
These surveys were developed as part of the Locavore program to assess participants’ thoughts about your program after they have had a season in the field using their new skills and, hopefully, have caught a few fish. The results of such a survey can be used to refine your program for future participants, and could also inform support programs for past graduates if needed.
These surveys were developed as part of the Locavore program to gather recent graduates’ thoughts on your program AFTER they receive fishing instruction but BEFORE the end of their first fishing season. The results of such a survey can be used to improve your program for future participants.
To date, little research has been conducted specifically on hunting and shooting Recruitment and Retention (R&R). The need for this kind of research has only recently been identified and has been slow to catch on in much of the hunting and shooting community. For example, the question, “Are we having an impact with our R&R programs?” remains largely unanswered.
Research can tell us a lot about boating, fishing, and stewardship education and how to improve it. Unfortunately, existing research goes largely unused by the aquatic education community, and there are many research needs that have yet to be addressed. For example, the question, “Are we having an impact with our programs?” largely is unanswered. The broader environmental education literature has provided the guidance to develop the Best Practices in this Workbook.