Evaluation is the keyword at every step of the learning process. This short, clear overview examines the different evaluative tools that can be applied to games for learning and, by extension, to other educational processes. The text is representative of the contribution that Nicola Whitton and Manchester Metropolitan University made to the project Learning Games in Adult Education: the perspective of researchers and creators.

The EduScape game is an Escape Room with a pedagogical aim that retains the entertainment value of the original format. It was developed by high school students in the framework of ‘Learning Games’, played and tested by the partnership, and successfully adapted by VUC of Denmark, which created a simplified, adaptable version using IT devices. This shows how a well-designed game for learning can trigger a series of related events that are themselves a learning process.

Evaluation is the keyword

There is a wide variety of ways to evaluate the effectiveness of a game for learning. This document gives an overview of the different approaches available. The first section looks at ways to evaluate the design of the game itself as a learning tool during development, and the second looks at ways of evaluating the player experience and the learning that has taken place.

Developmental evaluation

Developmental evaluation focuses on techniques for carrying out evaluations during the game development process. Three areas of game design are particularly relevant: playability (how well the game works and whether it is fun), functionality (what the player can do in the game) and usability (how the player interacts with the game pieces or interface). The latter two aspects are particularly relevant to digital games, but are also worthy of consideration for traditional games.

Types of developmental evaluation include:

Evaluating learner experience

Evaluation of the effectiveness of games for learning is problematic for several reasons. First, games used in formal learning situations are typically small-scale interventions, often played for only a few hours in total. This means that any effects shown from the use of the game are likely to be minimal and short-lived, as no small-scale learning intervention is likely to have a significant impact on learning overall. Second, much evaluation of games and learning is carried out by those with a vested interest in their success, such as the teacher who created the game. Third, evaluating learning is difficult in itself, particularly over time and in relation to transfer to other contexts. The timing of evaluation is also an important factor to consider: evaluating immediately after a game captures fresh responses but provides no evidence of long-term benefits.

Student learning is most commonly evaluated through the development of measurable and observable performance indicators or learning outcomes; the degree to which a student can evidence these learning outcomes (through an exam, essay or other assessment) is then evaluated to indicate whether learning has taken place. However, meaningful evaluation is not always possible in the case of games for learning, either because learning from the game forms only a small part of a much larger set of learning objectives or because the game is not explicitly assessed as part of a formal course. A second issue with using formal assessment to evaluate learning is that it does not take into account unintended learning from game play, such as problem-solving, teamwork or negotiation.

Experimental research designs are common in studies on games and learning: students are separated into groups that undergo different treatments, and the differences in outcomes are compared using tests before and after the game. However, this approach has drawbacks: any learning beyond simple memorisation is difficult to evaluate with a test, while the real potential of learning games lies in engaging higher-level learning outcomes. It may also be difficult to persuade students to give up extra time to complete additional tests, and such an evaluation design carries ethical implications.
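As a concrete illustration of the pre-test/post-test design described above, the short Python sketch below compares the learning gains of a hypothetical game-based group with those of a control group using an independent-samples t-test. All scores and group sizes here are invented placeholders, and the choice of a t-test from scipy is an assumption for illustration, not a method prescribed by the original text.

    # Minimal sketch of a pre-test/post-test experimental comparison.
    # All score data below are illustrative placeholders, not real results.
    from scipy import stats

    # Hypothetical pre- and post-test scores (0-100) for two groups:
    # one taught with the game, one taught with a conventional lesson.
    game_pre  = [52, 61, 48, 55, 63, 50, 58, 47]
    game_post = [68, 74, 59, 70, 75, 66, 71, 60]
    ctrl_pre  = [51, 60, 49, 56, 62, 52, 57, 46]
    ctrl_post = [60, 66, 55, 63, 68, 58, 64, 52]

    # Compare learning *gains* (post minus pre) rather than raw post-test
    # scores, so pre-existing differences between groups are accounted for.
    game_gain = [post - pre for pre, post in zip(game_pre, game_post)]
    ctrl_gain = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]

    # Independent-samples t-test on the gains of the two groups.
    t_stat, p_value = stats.ttest_ind(game_gain, ctrl_gain)
    print(f"mean gain (game):    {sum(game_gain) / len(game_gain):.1f}")
    print(f"mean gain (control): {sum(ctrl_gain) / len(ctrl_gain):.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Note that even a clean result from such a test only captures what the test itself measures; as argued above, it says little about higher-level or unintended learning, and small samples like this one would carry little statistical weight.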

Alternative ways of evaluating learner experience include:

Using a mixed-methods approach, combining large-scale quantitative experimental research with deep qualitative research to explore the nuances of the learning experience, provides one way to support robust evaluation. Examining the depth as well as the breadth of the evidence enables researchers to gain insights into the potential of games for learning and into the factors and contexts that make them more effective as an educational paradigm.