Evaluation is the keyword at every step of the learning process. This short insight surveys the different evaluative tools that can be applied to games for learning and, by extension, to every other educational process. The text represents the contribution of Nicola Whitton and Manchester Metropolitan University to the project Learning Games in Adult Education: the perspective of researchers and creators.

The EduScape game is an Escape Room with a pedagogical aim that does not lose the entertainment value of the original. It was developed by high school students in the framework of ‘Learning Games’, played and tested by the partnership, and successfully modified by VUC of Denmark, who created a simplified, adaptable version using IT devices. This shows how a well-designed game for learning can trigger a series of related events that are in themselves a learning process.

Evaluation is the keyword

There is a wide variety of ways to evaluate the effectiveness of a game for learning. This document gives an overview of the different approaches available. The first section looks at ways to evaluate the design of the game itself as a learning tool during development, and the second looks at ways of evaluating the player experience and the learning that has taken place.

Developmental evaluation

Developmental evaluation focuses on techniques for carrying out evaluations during the game development process. Three areas of game design are particularly relevant: playability (how well the game works and whether it is fun), functionality (what the player can do in the game) and usability (how the player interacts with the game pieces or interface). The latter two aspects are particularly relevant to digital games, but are also worth considering for traditional games.

Types of developmental evaluation include:

  • Paper prototyping to get player feedback early on by using paper mock-ups of the actual game.
  • Wizard-of-Oz prototyping is used in video game development and involves simulating the behaviour of the game, often manually, in a way that is not apparent to the user – from his or her point of view the game is fully functional.
  • Scenarios allow students to comment on descriptions of the game, how it might be used and the types of activities that would occur in it, helping you gain insights early on.
  • Expert walkthroughs involve someone who has a background and expertise in game design providing feedback from the point of view of an expert.
  • Think-aloud walkthroughs involve asking players to play the game and talk through their thought processes.
  • Observations such as simply watching people play a game and seeing how they interact with the game and each other.
  • Interviews or focus groups that involve talking to individuals or groups about their experiences using the game.
  • Piloting, such as running through the whole game with a small number of users to identify and address any final issues with the game design.
  • Diagnostic evaluation of accessibility and usability of video games, such as working through a checklist or checking the game against published guidelines (a minimal sketch of one possible checklist follows this list).
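
As a rough illustration of the checklist approach mentioned in the last item, the sketch below represents a handful of criteria as simple pass/fail flags and reports the failures. The criteria shown are invented for illustration, not an established guideline set.

```python
# A minimal sketch of a diagnostic usability/accessibility checklist.
# The criteria are illustrative placeholders, not an official guideline set.
checklist = {
    "Text is readable at the default font size": True,
    "All actions can be performed with the keyboard alone": False,
    "Colour is never the only carrier of information": True,
    "Instructions can be revisited at any point in the game": True,
    "Audio cues have visual equivalents": False,
}

passed = sum(checklist.values())
print(f"Passed {passed}/{len(checklist)} criteria")
for criterion, ok in checklist.items():
    if not ok:
        print(f"FAIL: {criterion}")
```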

Evaluating learner experience

Evaluation of the effectiveness of games for learning is problematic for several reasons. First, games used in formal learning situations are typically small-scale interventions, often used for only a small number of hours in total. This means that any effects shown from the use of the game are likely to be minimal and short-lasting, as no small-scale learning intervention is likely to have a significant impact on learning overall. Second, much evaluation of games and learning is carried out by those with a vested interest in its success, such as the teacher who created the game. Third, evaluating learning is difficult in itself, particularly over time and in relation to transfer to other contexts. The timing of evaluation is also an important factor to consider; evaluating immediately after a game allows for fresh responses but will not provide any evidence of long-term benefits.

Student learning is most commonly evaluated through the development of measurable and observable performance indicators or learning outcomes; the degree to which a student can evidence these learning outcomes (through an exam, essay or other assessment) is then evaluated to indicate whether learning has taken place. However, meaningful evaluation is not always possible in the case of games for learning, simply because learning from the game forms a small part of a much larger set of learning objectives or because the game isn’t explicitly assessed as part of a formal course. A second issue with using formal assessment to evaluate learning is that it does not take into account the unintended learning from game play, such as problem-solving, teamwork or negotiation.

Experimental research designs are common in studies on games and learning: students are separated into groups that undergo different treatments, and the differences in outcomes are compared using tests before and after the game. However, there are drawbacks to this approach, in that any learning beyond simple memorisation is difficult to evaluate with a test, while the real potential of learning games lies in engaging higher-level learning outcomes. It may be difficult to persuade students to give up the extra time to complete additional tests, and there are also ethical implications of such an evaluation design.
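
To make the pre-test/post-test design concrete, the sketch below compares the learning gains of a hypothetical game group and control group with a t-test and a simple effect-size estimate. All scores are invented, and the caveat above still applies: a test of this kind mainly captures recall, not higher-level outcomes.

```python
# A minimal sketch of a pre-test/post-test experimental comparison.
# All scores are hypothetical; in practice they would come from assessments
# administered before and after the game intervention.
from statistics import mean, stdev
from scipy import stats

# Learning gains (post-test score minus pre-test score) per student.
game_gains = [12, 8, 15, 10, 7, 11, 14, 9]    # group that played the game
control_gains = [6, 9, 5, 8, 7, 4, 10, 6]     # group taught conventionally

# Independent-samples t-test on the gain scores.
t_stat, p_value = stats.ttest_ind(game_gains, control_gains)

# Cohen's d (pooled standard deviation) as a simple effect-size estimate.
n1, n2 = len(game_gains), len(control_gains)
pooled_sd = (((n1 - 1) * stdev(game_gains) ** 2 +
              (n2 - 1) * stdev(control_gains) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (mean(game_gains) - mean(control_gains)) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```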

Alternative ways of evaluating learner experience include:

  • Student self-evaluation of learning is notoriously inaccurate but does at least allow data to be gathered on whether students think they have learned something.
  • Questionnaires for student evaluation of their playing experiences, looking at aspects such as enjoyment, affect, motivation, or engagement with the game.
  • Quantitative indicators can be used to evaluate engagement with digital games, such as time spent playing the game, points accrued, or levels reached (see the sketch after this list).
  • Interviews/focus groups can be used to explore players’ qualitative experiences when playing the game.
  • Observations can be used to explore behavioural indicators and gain a deeper understanding of the learning processes taking place during a game.
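
As a rough illustration of the quantitative indicators mentioned above, the sketch below aggregates hypothetical session logs into per-player engagement metrics. The log format and all field names are invented; a real digital game would define its own telemetry.

```python
# A minimal sketch of deriving engagement indicators from play logs.
# The log format and all values are hypothetical.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
play_log = [  # (player, session start, session end, points accrued, level reached)
    ("anna", "2024-03-01 10:00", "2024-03-01 10:40", 350, 3),
    ("anna", "2024-03-02 10:00", "2024-03-02 10:25", 520, 4),
    ("ben",  "2024-03-01 10:00", "2024-03-01 10:15", 120, 1),
]

metrics = {}
for player, start, end, points, level in play_log:
    minutes = (datetime.strptime(end, FMT) -
               datetime.strptime(start, FMT)).total_seconds() / 60
    m = metrics.setdefault(player, {"minutes": 0.0, "points": 0, "max_level": 0})
    m["minutes"] += minutes                       # total time spent playing
    m["points"] += points                         # total points accrued
    m["max_level"] = max(m["max_level"], level)   # highest level reached

for player, m in sorted(metrics.items()):
    print(player, m)
```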

Using a mixed-methods approach of large-scale quantitative experimental research coupled with deep qualitative research to explore the nuances of the learning experience provides one way to support robust evaluation. Looking at the depth as well as the breadth of evidence enables researchers to gather insights into the potential of games for learning and into the factors and contexts that make them more effective as an educational paradigm.