![](http://www.ryanjday.com/wp-content/uploads/2019/08/003-1024x627.png)
The execution of an evaluation plan will require the collection of evidence or data. This might involve looking through existing records like test scores for a course. It could also include performing assessment activities that generate new data like conducting a survey. Planning is critical when collecting data to ensure you get what you need and minimize potential bias where possible.
![](http://www.ryanjday.com/wp-content/uploads/2019/08/001-1024x953.png)
Data for Evaluation
Quantitative data is data that can be counted or measured. For example, how many people attended a workshop? This numerical data is typically the easiest to collect, analyse, and visualize. Qualitative data captures descriptive information. For example, what aspects of the workshop did attendees prefer? This type of data can be extremely valuable, capturing the story behind the numbers. Qualitative data is harder to collect and requires more resources to analyse and visualize in a meaningful way. Evaluators must strike a balance between qualitative and quantitative data that is representative and fair.
![](http://www.ryanjday.com/wp-content/uploads/2019/08/000-1-1024x769.png)
There may be many different data sources available for an evaluation project. For example, an evaluation of a life skills program for teen parents would have curricular artifacts like course notes or presentations that may be helpful. Individual workers would have knowledge of changes to the program over the years that might never have been documented. The organization's cultural norms and practices, including routines and widely shared ways of thinking and acting, shape interactions in the program. Information from the program's Facebook page could yield insight into participant engagement outside the classroom. Archived annual reports may help establish benchmarks for measurement. Referral information might be available from external partners and could be incorporated into the evaluation.
Ideally, multiple data sources will lead to a fuller understanding of a subject. In reality, however, the decision on which data sources to pursue will be limited by access, available resources, and capacity.
Ethics and Privacy
The ethical use of data is extremely important. The Open University (2014) published a set of guidelines for the ethical use of student data for learning analytics. I have adapted these guidelines to create guiding principles for the use of data in evaluation projects.
Use data to benefit stakeholders
If data can be responsibly utilized to benefit stakeholders, there is an obligation to pursue that use.
Acknowledge the limits of data
Data can rarely capture the full picture. Interpretations of data may be incomplete, and it is neither fair nor accurate to present findings without acknowledging these limitations.
Be transparent and seek consent
Evaluators should be clear and truthful with stakeholders about their intended collection and use of data.
Review policies regularly
As an evaluation proceeds or new activities begin, policies and best practices should be reviewed. This is particularly important if the scope or focus changes during the execution of an evaluation.
Engage with stakeholders
Ensure stakeholders know how data is being used and encourage their input to guide policies and procedures.
Acknowledge bias
Data in an evaluation plan will require some interpretation by the evaluator, and that interpretation can be influenced by a number of external and internal factors. In fairness to those included in the evaluation, these biases should be acknowledged.
Follow the rules
Ethical considerations, organizational policies and procedures, and the law must all be followed. Data must be collected, stored, and used in a way that respects individuals' privacy, especially regarding sensitive data. Depending on the context of the evaluation, there may be local laws or organizational policies that apply. For example, an evaluation project at a Canadian university would have to follow provincial privacy legislation and possibly obtain ethics approval for certain types of research activities, which would limit the types of data available and used. These laws and policies vary by jurisdiction and organization, so research carefully which rules apply to your situation.
![](http://www.ryanjday.com/wp-content/uploads/2019/08/002-976x1024.png)
Tools for Collecting Data
a. Data Mining and Learner Analytics
A wealth of data is available to evaluators through the technology used to engage with stakeholders like learners, community members, and staff. Most technology used in education and evaluation creates some sort of digital paper trail.
- The Learning Management System for a school tracks IP addresses, log-in information, and online quiz scores.
- The YouTube channel of a non-profit collects information about where viewers live, whether they finish watching videos, and which sections they skip.
- A classroom blog tracks student posts and comments.
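These digital trails can often be summarized with very simple tooling. The sketch below is a minimal, illustrative example of aggregating hypothetical LMS log-in records; the field names (`student_id`, `login_time`) and the data itself are invented for illustration and do not reflect any real LMS export format.

```python
# Illustrative sketch: summarizing hypothetical LMS log-in records.
# Field names and values are assumptions, not a real LMS export format.
from collections import Counter
from datetime import datetime

log_rows = [
    {"student_id": "s01", "login_time": "2019-08-01T09:15"},
    {"student_id": "s02", "login_time": "2019-08-01T10:02"},
    {"student_id": "s01", "login_time": "2019-08-02T09:20"},
]

# Count log-ins per student, a simple engagement indicator.
logins_per_student = Counter(row["student_id"] for row in log_rows)

# Count activity per weekday to spot usage patterns.
logins_per_day = Counter(
    datetime.fromisoformat(row["login_time"]).strftime("%A") for row in log_rows
)

print(logins_per_student)
print(logins_per_day)
```

Counts like these are only a starting point; as the next paragraph cautions, they need context and other data sources to be meaningful.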
This is an emerging but exciting field. The sheer volume of data can be overwhelming, which makes effective use of this type of data challenging. As Hora (2018) notes, early misuse has led to a backlash from critics who feel that over-reliance on this type of data oversimplifies issues, lacks proper contextualization, and ignores basic human intuition. A mix of methods and data sources is important in achieving a holistic view for evaluation.
b. Surveys
Surveys are an important tool for evaluators, and there are a number of configurations depending on the purpose of your evaluation.
Anonymity is ideal to help reduce potential confounding variables in the results. For example, if an employee is asked to rate the performance of their manager, they may fear retaliation and not answer truthfully. Respondents should be told how their anonymity will be protected, and evaluators should make efforts to ensure that it is. The Information and Privacy Commissioner of Ontario (2015) warns that the simple combination of gender, date of birth, and postal code is sufficient to identify unique individuals. Extra caution should be taken when conducting surveys online to ensure that potentially identifiable data is not captured (Roberts & Allen, 2015). A general best practice is to minimize the collection of personal data.
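The re-identification risk from combined quasi-identifiers can be checked mechanically. The sketch below is a minimal illustration on an invented data set: any combination of gender, birth date, and postal code that occurs only once points at a single, potentially identifiable individual.

```python
# Illustrative sketch: flagging unique quasi-identifier combinations
# in a hypothetical survey data set (all records are invented).
from collections import Counter

respondents = [
    {"gender": "F", "birth_date": "1990-03-12", "postal_code": "M5V 2T6"},
    {"gender": "F", "birth_date": "1990-03-12", "postal_code": "M5V 2T6"},
    {"gender": "M", "birth_date": "1985-11-30", "postal_code": "K1A 0B1"},
]

quasi_identifiers = ("gender", "birth_date", "postal_code")
counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in respondents)

# A combination seen only once identifies exactly one respondent.
unique_combos = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique_combos)} respondent(s) are uniquely identifiable")
```

Before releasing survey results, combinations like these are typically generalized (e.g. birth year instead of full date) or suppressed until no combination is unique.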
There are a number of best practices to consider for creating effective surveys.
- Only ask one thing at a time.
- Avoid double-negative statements.
- Avoid absolutes like “always” and instead use terms like “almost always” (Gehlbach & Brinkworth, 2011).
- Make sure that every question is applicable to the respondent by using piping techniques (Dillman et al., 2009).
- Don’t make the survey too long, or respondents may not complete it. Van Mol (2017) recommends thirteen minutes or less to obtain a high response rate. A report by SurveyMonkey found that the time spent considering each question dropped drastically in longer surveys.
- Don’t ask open-ended follow-up questions after a closed-ended question. Krosnick (1999) found that respondents do not typically volunteer additional information and select from the options listed even if the best answer is not included.
- Test the survey and ask respondents to explain their thought processes while completing it to identify misconceptions or errors (Gehlbach & Brinkworth, 2011).
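The piping idea above, showing only questions that apply based on earlier answers, can be sketched as a simple branching function. The questions and flow below are invented for illustration and are not from Dillman et al. (2009).

```python
# Illustrative sketch of survey piping: follow-up questions are only
# shown when earlier answers make them applicable. Flow is invented.
def next_question(answers):
    """Return the next applicable question given the answers so far."""
    if "attended_workshop" not in answers:
        return "Did you attend the workshop? (yes/no)"
    if answers["attended_workshop"] == "yes" and "rating" not in answers:
        # This follow-up is piped only to attendees.
        return "How would you rate the workshop? (1-5)"
    return None  # survey is complete for this respondent

print(next_question({}))
print(next_question({"attended_workshop": "no"}))  # rating question is skipped
```

Most survey platforms implement this as "skip logic" or "branching" configured per question rather than as code, but the underlying conditional structure is the same.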
c. Focus Groups
Focus groups can be a powerful tool for gathering qualitative information. Chouinard and Cousins (2007) recommend focus groups in cross-cultural evaluation as a participatory process that engages participants in reflective dialogue. Focus groups give participants more freedom to express themselves and to provide important additional context, which in turn builds trust. For example, LaFrance and Nichols (2008) describe how focus groups conducted in communities that have traditionally been mistreated or unfairly represented may require time to build trust and for participants to vent frustrations about negative past experiences with researchers and evaluators. This type of relationship building is difficult to do with a survey.
Liamputtong (2016) identifies a number of considerations to be made when conducting focus groups:
- Where will the focus group be held and what considerations should be made regarding the venue?
- Who will be the facilitator or moderator? What is their background and how might it impact cross-cultural research?
- How will the data be recorded?
- What probing questions are appropriate and what are the risks of introducing bias?
- How might group dynamics impact the quality of data?
- How will sensitive topics be discussed?
References
Best Practices for Protecting Individual Privacy in Conducting Survey Research. (2015). Information and Privacy Commissioner of Ontario, Canada. Retrieved from http://deslibris.ca/ID/10064239
Chouinard, J., & Cousins, J. (2007). Culturally Competent Evaluation for Aboriginal Communities: A Review of the Empirical Literature. Journal of MultiDisciplinary Evaluation, 4(8), 40–57. Retrieved from http://search.proquest.com/docview/61942859/
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
Gehlbach, H., & Brinkworth, M. (2011). Measure Twice, Cut Down Error: A Process for Enhancing the Validity of Survey Scales. Review of General Psychology, 15(4), 380–387. https://doi.org/10.1037/a0025704
Hora M. (2018). Analytics in the field: why locally grown continuous improvement systems are essential for effective data-driven decision-making. In Lester, J., Johri, A., Rangwala, H., & Klein, C. (Eds.), Learning analytics in higher education: Current innovations, future potential, and practical applications. (pp. 30-48). Milton: Taylor and Francis. doi:10.4324/9780203731864
Krosnick, J. (1999). Survey research. Annual Review of Psychology, 50(1), 537–567. https://doi.org/10.1146/annurev.psych.50.1.537
LaFrance, J., & Nichols, R. (2008). Reframing evaluation: Defining an Indigenous evaluation framework. Canadian Journal of Program Evaluation, 23(2), 13–31.
Liamputtong, P. (2016). Focus group methodology: Principles and practice. London: SAGE.
Roberts, L., & Allen, P. (2015). Exploring ethical issues associated with using online surveys in educational research. Educational Research and Evaluation, 21(2), 95–108. https://doi.org/10.1080/13803611.2015.1024421
The Open University. (2014). Policy on Ethical use of Student Data for Learning Analytics. Retrieved from https://help.open.ac.uk/documents/policies/ethical-use-of-student-data/files/22/ethical-use-of-student-data-policy.pdf
Van Mol, C. (2017). Improving web survey efficiency: the impact of an extra reminder and reminder content on web survey response. International Journal of Social Research Methodology, 20(4), 317–327. https://doi.org/10.1080/13645579.2016.1185255