
“I Broke Your Game!”: critique among middle schoolers designing computer games about climate change

Abstract

Background

There have been increasing calls for integrating computational thinking and computing into school science, mathematics, and engineering classrooms. The learning goals of the curriculum in this study included learning about both computational thinking and climate science. Including computer science in science classrooms also means a shift toward a focus on the design and creation of artifacts and their attendant practices. One such design practice, widespread in the design and arts fields, is critique. This paper explores the role of critique in two urban, heterogeneous 8th grade science classrooms in which students engaged in creating computer games on the topic of climate systems and climate change. It explores and compares how practices of critique resulted from curricular decisions to (i) scaffold intentional critique sessions for student game designers and (ii) allow for spontaneous feedback as students interacted with each other and their games during the process of game creation.

Results

Although we designed formal opportunities for critique, the participatory dimension of the project meant that students were free to critique each other’s games at any time during the building process and did so voluntarily. Data indicate that students focused much more on the game play dimension of the design than the science, particularly in those critique sessions that were student-initiated. Despite the de-emphasis on science in spontaneous critiques, students still focused on several dimensions of computational thinking, considering user experience, troubleshooting, modeling, and elegance of solutions.

Conclusions

Students making games about science topics should have opportunities for both formal and spontaneous critiques. Spontaneous critiques allow for students to be authorities of knowledge and to determine what is acceptable and what is not. However, formal, teacher-designed critiques may be necessary for students to focus on science as part of the critique. Furthermore, one of the benefits to critiquing others was that students were able to see what others had done, how they had set up their games, the content they included, and how they had programmed certain features. Lastly, critiques can help facilitate iteration as students work to improve their games.

Introduction

Professional and scholarly education communities increasingly recognize the importance of computing in STEM (science, technology, engineering, and mathematics) fields. For example, see the National Science Foundation’s 10 Big Ideas for Future Investments (NSF, 2017). Furthermore, efforts to include computer science and computational thinking in STEM classrooms have received increasing attention since computational practices were integrated into the Next Generation Science Standards (NGSS) (NGSS Lead States, 2013). These efforts include varied approaches, for example, playing serious games (e.g., Boyle et al., 2016), investigating models and simulations (e.g., Weintrop et al., 2016), developing applications (e.g., Tissenbaum, Sheldon, Soep, Lee, & Lao, 2017), and game design (e.g., Puttick & Tucker-Raymond, 2018). Design-based approaches to integration, such as creating games, encourage participatory forms of engagement in which students are positioned as experts in both content (e.g., climate systems) and delivery (e.g., how to include the player as an actor in the system) (Puttick & Tucker-Raymond, 2018). This paper argues that if educators are to take seriously computational design as integral to scientific study, then critique, as a participatory design approach that leverages distributed expertise among students, must become an integral practice in STEM classrooms. Computational thinking and climate science, the aspects of STEM taken up in this study, were fully integrated: students were expected to create workable games and, in so doing, learn about climate science (Puttick et al., 2019).

Our definition of critique in this study is as follows: the public evaluation of a work to (a) provide feedback for the creators and/or (b) understand key concepts in design for one’s own future work.

This paper explores the role of critique in two 8th grade science classrooms in which students engaged in creating computer games on the topic of climate systems and climate change. It explores and compares how practices of critique resulted from curricular decisions both to (i) scaffold intentional critique sessions for student game designers and (ii) allow for spontaneous feedback as students interacted with each other and their games during the process of game creation. One of our initial conjectures for the design of the learning environment was that formal critiques, or those that were teacher/curriculum initiated, would help students improve their systems-based games. We also thought that the formal critique practice would build toward a classroom culture of distributed expertise in which students went to each other for help. Even though we designed for open interaction between students during game creation, we did not expect some of the spontaneous forms of critique that took place.

Critique may seem like the purview of the humanities, the arts, or architecture. We argue that critique is no less a scientific practice than argumentation (Osborne, 2010). After all, this paper has gone through one peer review process for a conference, multiple revisions based on feedback from critical friends, and another peer review process for inclusion in this journal. Yet, definitions of scientific argumentation in science education have been rather more formalized, focusing for example on Toulmin’s (2003) grounds-claim-warrant model (Berland & Reiser, 2011; McNeill, Lizotte, Krajcik, & Marx, 2006). We argue that critique, as a form of argumentation, can take many forms, and allows for student-initiated explorations into game design, computational practices, and science topics in the science classroom.

Critique is an integral part of design programs that encourage youth engagement (Soep, 2005) and can help students develop the tools necessary not only to be effective designers, but also to judge the quality of others’ work (e.g., Hwang, Hung, & Chen, 2014). In addition, recognition of critique as a set of scientific practices can potentially expand what it means to participate in school-based science learning and whose participation is ratified as productive, such as students from underrepresented groups in STEM (Wright, 2019). Yet, the degree to which design curricula encourage and teach students to critique is an open question (Petrina, 2017).

To frame our study of critique, in the next section, we outline participatory pedagogy as a general approach to teaching and learning design and critique, of games in this case, as central practices within participatory pedagogies. We then describe our design-research methodology to explain how we came to understand critique as a central practice. In the results section, we describe student participation in both teacher-initiated and student-initiated critiques and finally present implications and recommendations for integrating critique into design-based STEM learning.

Theoretical framework

Our curriculum emphasized participation structures that allowed young people to create their own learning products through a focus on game design and systems thinking. In this paper, we draw on theories and previous empirical work about learning science through participatory pedagogies, including game design and critique in other settings, to understand and explain our findings. We then present the process and principles that contributed to the inclusion and construction of critique in the design.

Participatory pedagogies

We recognize learning as a constructive and cultural process that (a) draws on the values, practices, and histories of learners and their communities (Bell, Lewenstein, Shouse, & Feder, 2009; Nasir, Rosebery, Warren, & Lee, 2006) and (b) emphasizes the intellectual resources of young people as knowledge producers (Tucker-Raymond, Torres-Petrovich, Dumbleton, & Damlich, 2012). Research with youth worldwide suggests that membership in multidimensional media culture is an important source of young people’s identity development (Dolby & Rizvi, 2007; Nortier & Svendsen, 2015). Identities are constructions of possibilities for being kinds of people in the world, created through participation in activities. Contemporary participation and identity formation within multimedia cultures have been characterized as part of “participatory cultures” (Jenkins, 2008; Halverson, 2009) in which people act as content creators and interlocutors with content creators, not just as content consumers. In game design, students create their games from the ground up, or remix already created games, adding their own style and flair. Likewise, in the online Scratch community, where games created in Scratch, a graphics-based computer programming platform, are shared, students can add their creations and comment on others’ in a robust interactive space.

Historically in US schooling, expertise has primarily resided with the teacher and the instructional materials he or she provides (Cornelius & Herrenkohl, 2015; Puttick, Drayton, & Karp, 2015). Rarely are students asked to contribute their expertise. In contrast, participatory practices and values allow for the democratization of content creation, distributed expertise, a focus on skills rather than abilities, and a flattening of traditional hierarchical teaching and learning structures (Cornelius & Herrenkohl, 2015; Jenkins, 2008). In classrooms, this translates to more student choice about engagement in their own learning, including the forms and practices that learning takes; a view of skills as continually developing; distributed expertise among students, teachers, and outside resources; and object creation (conceptual, online, or physical—e.g., video games) aimed at authentic audiences. In game design, students choose topic and genre. Game design also encourages students to use each other as resources both for advancing their skills and for eliciting authentic feedback through critique.

Game design and critique are both manifestations of participatory pedagogies. Game design is participatory because it is grounded in modern media practices in which participants are both consumers of content and producers in their own right, what Jenkins and colleagues (Jenkins, 2008) call prosumers and what Salen (2008), writing about youth game design, states is merely a part of contemporary young people’s daily lives. Thus, recognizing youth cultural practices in gaming, we sought to have students contribute their creations to a wider audience and at the same time learn science content.

Integrating game design in science

Game design is based in the constructionist principle that people learn best when they can create their own tangible objects and models (Papert, 1980). Game design as a pedagogical tool for learning programming has grown in popularity over the past decade with the advent of computational environments such as Alice and Scratch, created to engage children in learning computer literacy (NRC, 2011). Game creation as an introduction to programming and computational thinking has proved to be highly engaging at middle and high school levels (e.g., Aydin, 2005; Repenning, Webb, & Ioannidou, 2010), facilitating creative thinking, social cooperation, and broader participation (Denner, Werner, & Ortiz, 2011; Resnick, Maloney, Monroy-Hernandez, & Kafai, 2009). Game design has also been shown to support student learning in various disciplines, for instance, climate science (Puttick, Strawhacker, Bernstein, & Sylvan, 2014; Puttick & Tucker-Raymond, 2018), mathematics (Tucker-Raymond et al., 2016), biology (Baytak & Land, 2011; Khalili, Sheridan, Williams, Clark, & Stegman, 2011), natural sciences (Hwang, Yang, & Wang, 2013), and chemistry (Siko, Barbour, & Toker, 2011).

In our application of game design in the science classroom, we have drawn on the theory of triadic game design (TGD) (Harteveld, 2011). TGD suggests that the successful application of games in education requires an interdisciplinary approach in which three paradigms should be considered: reality (e.g., expertise related to the domain, in this case climate change), meaning (e.g., expertise related to learning and to students, in this case what the user is supposed to experience while playing the game), and play (e.g., expertise related to the playability of the game, including its genre). Each paradigm consists of specific criteria that need to be considered and successfully balanced as part of the (curriculum) design process. Part of the success of game design is participants’ willing engagement in creating works of their own choice and vision. Student construction of new artifacts makes them more than consumers of already known information, or inquirers into already understood phenomena, as is typical in many science classes and encouraged by the NGSS (NGSS Lead States, 2013). Construction of new artifacts allows students to contribute to knowledge production in the classroom. The structure of game design, with built-in choice, agency, and content creation, aligns the approach with participatory pedagogies.

Critique is a part of a participatory pedagogy because critics choose what to focus on, relying on the distributed expertise of peers to tell each other what works and what needs improvement. Likewise, it is a student-driven form of assessment in which students’ ideas count about what is ratified and what is not, thus democratizing who gets to decide what is important.

Critique

Critique is a learning practice that permeates art, architecture, and design worlds as well as sports and other professional settings (Hetland, Winner, Veenema, & Sheridan, 2007; Soep, 2005). It and its cousin argumentation are also integral to scientific and engineering practice (Osborne, 2010; Wright, 2019). As Wright (2019) has argued, critique at the intersection of science and design, especially with young people, can take many forms and can be both planned and improvised (see also Soep, 2005).

In professional design spaces, formal critiques help to establish what counts as knowing, and how one should “see” in a discipline (Wright, 2019). Formal critiques are for giving ongoing instruction and support to fellow practitioners, especially to those who are newer to a field (Lymer, 2009). Such critiques are often led by an experienced practitioner, with others contributing, and are focused on the learning of the person being critiqued (Blythman, Orr, & Blair, 2007). Thus, how one should participate in the discipline is structured by more experienced others and can help enculturate learners into the discipline, its vocabulary, and practices. They can also help to establish social bonds, providing common experiences for students, and create more collaborative atmospheres (Schrand & Eliason, 2012).

Critiques are both formative and summative. When students are engaged in critique, they speak spontaneously about a given project, respond to feedback from others, and decide whether to modify their own aesthetic judgments in light of their critics’ reactions. Critique is, in this sense, a kind of spontaneous argumentation (Soep, 2005, p. 40). A focus on meanings, expressions, what works, and what does not is intended to push designers toward new possibilities and to give people in the group opportunities to see each other’s work (Blythman et al., 2007). Formal critiques in those spaces can help students learn to observe, explain, and evaluate works; highlight key concepts; and guide future work (Hetland et al., 2007; Lai & Hwang, 2015). The same is true of science, in which scientists argue over findings, making claims and using evidence (Osborne, 2010).

Critique is also a practice in game design communities, often called play testing, and can encourage computational thinking (Lee et al., 2011), one of the goals of our curriculum (Cassidy, Tucker-Raymond, & Puttick, in press). We argue that critique is also multimodal (Gray & Howard, 2015) and can be embodied, for instance, in game play. In that sense, critique expands previous definitions of scientific argumentation. However, participatory approaches to critique need to be intentionally designed, since students are also positioned as knowers. In the next section, we show how we designed for critique and how explicitly establishing opportunities for students to be authorities also allowed new forms of critique to be created by the students themselves.

Design

Design-based research (DBR) uses multiple cycles of design, assessment, and redesign to iteratively develop a learning environment and structure participants’ learning in that environment, as well as to advance theoretical foundations for the field at large (e.g., Barab & Squire, 2004; Collins, Joseph, & Bielaczyc, 2004). Our three-cycle DBR analysis used a conjecture mapping approach (Sandoval, 2014). Conjecture maps articulate “the joint design and theoretical ideas embodied in a learning environment in a way that supports choices about the means for testing them” (Sandoval, 2014, p. 20). Such mapping articulates theories about how learning happens in advance of implementation, functions to test empirical predictions, and uses the results to refine both the particular design and the theoretical perspectives.

Designing for critique

The initial conjecture map for our design formalized the features of the learning environment into four categories: (a) tools and materials (e.g., software, readings, student worksheets), (b) task structures (e.g., game design), (c) participant structures (e.g., peer programming), and (d) discursive practices (e.g., asking questions, critique) (Fig. 1). We conjectured that student participation would lead to increases in knowledge about climate systems, understanding of computational thinking practices, and increases in self-efficacy and persistence in computing.

Fig. 1 Design conjectures about the role of critique

More specifically, we conjectured that the practice of critique, along with opportunities for problem solving, would ultimately improve students’ systems-based games and support a classroom of distributed expertise that would result in collaborative sense-making, creativity, interest, persistence, and deeper learning (Fig. 1).

We believed this would happen as a result of the structures we embedded in the curriculum: tools and artifacts, including the games themselves and prewritten critique forms; task structures, including side-by-side game design with plenty of opportunities for interaction across groups and peer feedback; participant structures, including students as programming partners and as play testers; and discourse practices, including evaluation, argumentation, and sense-making. We planned to observe evidence of student critique practices in both peer-to-peer talk and written feedback.

We included three lessons in which we intentionally built in student engagement in formal critique structures. We designed scaffolded tasks with guiding questions for youth to critique each other’s games. We also designed more open-ended opportunities for critique during which students could both make their own games and move about the room to play each other’s games. These open-ended spaces were intended to “allow young people to deploy their full linguistic/intellectual practices” (Wright, 2019, p. 3) as they learned game design. Guided by our conjecture map and the spontaneous emergence of students initiating critique of each other’s games, we created a series of research questions about critique in the classrooms that would help us understand the breadth of ways in which critique was happening and how it contributed to students’ game design.

Research questions

1) To what extent, and in what ways, do teachers and students take up and enact critique?

2) What kinds of physical artifacts, task structures, participant structures, and discourse practices emerge in different moments of critique?

3) What consequences do the critiques have for student game creation?

Methods

The results reported here are derived from the first year of a 3-year design research study in which an interdisciplinary core team of seven people, including learning scientists who focus on science learning in middle school, ecology experts, systems learning experts, and game researchers and designers, designed a curriculum and carried out the research. Two 8th grade classroom teachers and a technology integration specialist also helped to design the curriculum and implemented it in their classrooms.

Design context and participants

The two 8th grade science teachers taught at separate public middle schools in a northeast US city. They each taught four sections and chose one class to participate as the focal class. There were 22 students in each ethnically and socioeconomically heterogeneous class. There were 28 days of instruction in Mr. Soucek’s class and 17 in Denise’s class. Class periods were about 50 min. We use a first name pseudonym for Denise because her students used her first name. We use a surname pseudonym for Mr. Soucek because his students used his last name.

All students in both classes used a graphical drag-and-drop programming language (Scratch) to create games about climate change. Although the teachers were teaching the same written curriculum, the enactments differed slightly. For instance, Denise allowed for more movement between groups than Mr. Soucek, who more consistently reminded students to ask each other for help. He also explicitly identified particular students as “experts” more than Denise did. Teachers chose student pairs to work as teams for creating games. Three focal pairs in each class were chosen for us by the teachers based on their previous high, medium, or low achievement in science.

Data collection

Researchers observed all classroom lessons, took written field notes, and engaged in informal conversations with the teacher and students during the observations. All lessons were attended by at least one researcher, and 84% were attended by two researchers. All lessons were videotaped and audiotaped. A software program was used to create screen-capture video in addition to the simultaneous video and audio recordings of focal groups in each classroom. At the conclusion of the unit, focal groups also participated in a semi-structured interview about their experience representing an aspect of climate change in a game, collaborating to design their games, and drawing on resources (e.g., curriculum materials, peers, the teacher) to do so. Interviews were transcribed fully. We included all of these data sources in our analysis.

Analysis

We engaged in theory-driven deductive-inductive analysis of the data. This contributed to the rigor of our overall interpretive methods by creating an initial framework for analysis while simultaneously attending to participants’ meanings (Elo & Kyngas, 2007; Fereday & Muir-Cochrane, 2006). For instance, our game design curriculum was driven by theory pertaining to the triadic game design framework which included attention to three dimensions: reality, meaning, and play. At the same time, we attended to these dimensions in ways that would have been recognizable to participants. That is, we used participants’ words about climate change, the purpose of the game, and their talk about strategies for games to create deeper analysis of their activities.

Since critique was a central participant structure in our curriculum, we wanted to know if critique was functioning how we intended and whether or not we needed to make any changes in the next iteration. As we observed students’ progress in creating their games, we noticed that they were voluntarily playing each other’s games with the goal of exposing bugs in the programming. We recognized that students were participating in a range of critique in ways we had and had not designed. This led us to refine our research focus on the practice.

To analyze data for this paper, we first read through field notes for (a) evidence of student interaction during the explicitly designed critique sessions implemented by the teacher and (b) evidence of students spontaneously participating in critique across groups. We also sought counterexamples, or examples of help or teacher assessment that were like critique in some ways but not all, such as times when students were helping to debug a program but were not offering feedback. Part of our goal in this process was to help define what we might and might not consider a critique. Thus, while critique took several forms, we were able to identify what the forms had in common: the public evaluation of a work for the purpose of (a) providing feedback for the creators and/or (b) understanding key concepts in design for one’s own future work.

From our conjecture mapping (Fig. 1), ongoing observations, and weekly team discussions of classroom activities, we created preliminary coding categories for the critiques as they occurred in field notes:

1) Use of artifacts in the critique (designed scaffolds, comments on the Scratch website, the Scratch programming platform), to help us understand the communicative modalities.

2) Spontaneous or designed critique: times students sought help, played each other’s games, or volunteered opinions on the games of others, to help us understand the place of critique in choice and participatory pedagogies.

3) Who participated in critiques and their positioning (e.g., expert, learner, friend), to help us understand distributed expertise and authenticity of audience.

4) Kinds of comments, suggestions, questions, and use of evidence, and their modalities, to help us understand choice and participatory pedagogy.

5) Dimensions of triadic game design within the critique: reality, meaning, and play.

6) Consequences of critique, that is, what those receiving critiques did with that information, to help us understand orientation towards development of skills, computational thinking practices, and science content.

We then identified additional potential data sources from the excerpts if they were available, including any artifacts and relevant excerpts from the screen capture software, video, and audio data. Choosing what video and audio to transcribe served as a confirming or disconfirming check of whether excerpts were related to the practice of critique or not. Related episodes of video and audio data excerpts were transcribed. We then applied the codes as we read through the excerpted data, constructing themes about the kinds of critiques that students participated in and co-constructed.

In Denise’s class, the practice of critique was coded in field notes as occurring on 11 of the 17 recorded days of the project, while it was coded as occurring on 14 of the 28 recorded days in Mr. Soucek’s class. These codes included instances of both formal and spontaneous critique as well as talk about using or learning from critique, such as in a presentation when students talked about what they learned. As part of the intended curriculum, students in each class engaged in two planned critique sessions. Although it was impossible to capture all spontaneous critiques, field notes recorded spontaneous critique on 14 different days across both classrooms.

Teacher-initiated critique sessions lasted 20 min or more, while student-initiated critiques lasted only as long as it took to play a game a few times and make a couple of quick comments. Instances of teachers giving critiques or feedback have not been counted, since evaluation is a constant mediator of relationships between students and teachers.

From the identified excerpts, we then watched video, including screen-capture video from the three focal groups in each class, for all the days on which critique was coded. We applied the coding categories to the data sources and identified disconfirming evidence. There were many instances of help that we did not code as critique because the expressed purpose of the interaction was to learn how to program an action, and no feedback on the game or coding was given. At times, one member of a design pair would critique a move or choice made by the other. We also did not count this as critique, as it was a regular part of programming-pair practice and not necessarily public. The results of our observations and interpretations are presented below.

Results

This section addresses the three research questions through an integrated analysis of the ways teachers and students took up critique. Critiques are presented by participant structure; the task structures, artifacts, discourse structures, and consequences of each type of critique are presented within each.

The practice of critique occurred in both teacher-initiated and student-initiated ways (Table 1). In teacher-initiated critique sessions, the students critiqued video games for homework. The teacher also assigned feedback sessions in class in which design pairs swapped games and answered a series of predetermined questions. In informal ways, students spontaneously critiqued each other’s games when they asked each other for help, purposely asked classmates to play their games and give feedback, or simply accessed others’ games in the shared online class studio where their games were housed. There were differences and commonalities among teacher-initiated critique sessions and student-initiated ones across both classrooms, as we discuss below.

Table 1 A summary of the forms of critique that were enacted in the classroom that were teacher-initiated (columns 2–4) or student-initiated (columns 5–7)

Teacher-initiated critique

The first teacher-initiated critique described in the curriculum guide was of pre-existing games about climate change. The second was of other classmates’ games in a pair-to-pair participant structure, also included in the curriculum guide. The last two critiques took place only in Mr. Soucek’s class: the first was when students were asked to leave online comments about their classmates’ games, and the second was of students’ presentations at the end-of-year all-district 8th grade science exposition. We describe each of these in turn.

Critiquing pre-existing games about climate change

Mr. Soucek and Denise both began the unit by asking students what games they liked and what they thought made good games. In Mr. Soucek’s class, students mentioned that good games had a number of characteristics: they allowed players to improve as they played more, had a character one could take on, had achievements and levels to unlock, struck a balance between doable and challenging, and got harder as the player got better (field notes, 4/13). Most student responses or critiques were about the playability of the games. When students had this conversation in Denise’s class, they mentioned similar criteria and added art as another criterion (field notes, 4/29).

Students played two types of pre-existing games about climate change: (a) games young people had created in a previous summer workshop with us, and (b) games that were professionally produced by, for example, the National Aeronautics and Space Administration. Playing other people’s games with critique at the forefront, we thought, would give students examples of what they liked and did not like about the games so that they might consider those dimensions when designing their own. Critique can serve many purposes in design classrooms. The purposes of critiquing outside games before getting into design were three-fold: critical consumption of technology in general, developing characteristics of good games, and learning how one might incorporate science content about climate change. Denise thought students’ heavy-handed critiques betrayed their inexperience in game creation,

I thought it was interesting because it was so early on, before they had to even think about making a game, they were really quick to judge the games. “Ah this game is dumb”…oh actually, it’s a lot harder than you think to [make a game]. (Post Interview)

One of Denise’s and Mr. Soucek’s explicit goals through the curriculum was for students to realize how hard it was to make a game. In doing so, the teachers hoped students would become more critical consumers and more conscious about the technology they used every day.

As students in Denise’s class played the games, they also remarked on whether they learned anything from them. A focal student, Colin, called attention to a “design flaw” in one of the student-created games because there was still a bug. In their critiques of the students’ games from the summer workshop, students focused on playability. Students also wanted the games to be more challenging, and to become harder as one played them, corroborating some of the criteria they had listed for what made a good game. In the next section, we describe teacher-initiated activities in which pairs of students critiqued each other’s games.

Pair-to-pair critiques

On the second planned critique day, programming pairs swapped games with another group and gave feedback. The day before the critique, Mr. Soucek handed out the game design rubric (Table 2) and asked students to use the rubric to think about their own games going forward. The rubric was based on the dimensions of TGD: reality, meaning, and play. Mr. Soucek emphasized climate science (reality) as an integral part of the game and the critique (field notes, 5/19). Teacher-initiated peer-to-peer critique was highly structured as a school task in that there was a set of minimum requirements for students’ critiques, students went over norms for feedback, and, in Mr. Soucek’s class, students received grades for the quality of their feedback. However, it was a school task that students had never before completed in science class, as shown by Denise’s introduction of the task: “You’ve done this at least in ELA [English Language Arts] right? Analyzed each other’s work?” (field notes, 5/20). Earlier, at the beginning of the unit, Denise had noted that “sometimes it is not easy to receive feedback, especially from someone you don’t perceive as an expert” (field notes, 5/2), and so the class needed to set some norms for participation in the critiques. In both classes, prompted by the teacher, students suggested norms for participation, offering that feedback from peers should be constructive and doable:

“Don’t be rude.”

“Say what they could do to fix it.”

“Don’t try to find stuff that they can't do.”

Table 2 Game design rubric

Mr. Soucek also asked how people should receive feedback. Franny pointed out that they should “be gracious because they [the critics] have to give feedback.” In this instance, Franny acknowledged the critique as something imposed on them, a requirement of school and not a personal choice. Mr. Soucek reinforced this sentiment when he pointed out that students must answer all the questions on their feedback forms.

Groups were told to play one game at a time, with one pair explaining while the other played the game, and then switching. Some groups did as they were told. However, most pairs started playing each other’s games at the same time. Playing the games exposed bugs in the programming, such as not including a function that would reset the game back to the beginning after someone played it. As a result, many of the students began fixing their games early on rather than letting the critic play it out and provide feedback about the overall structure or content of the game. This also occurred in Denise’s class after feedback. Rather than take up the suggested changes to the game structure or additions to the content, students tended to fix bugs in the programming they had already completed. However, students also learned different coding skills from playing others’ games. For example, Franny and her partner played Emma’s game, which included a timer. By playing the game, they learned how to add a timer to their own game. Acting as critics, with access to the code through the affordances of Scratch, helped these two students improve their own programming skills.
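The reset bug mentioned above is typical of Scratch projects: state set during one play-through persists unless scripts re-initialize it. Below is a minimal sketch, in Scratch block notation, of a script that resets game state on each run and uses the built-in timer, the kind of feature Franny borrowed from Emma’s game. The variable names and the 60-second limit are our illustrative assumptions, not taken from any student’s project.

```
when green flag clicked          // re-initialize state so every play-through starts fresh
set [score v] to (0)
switch backdrop to [start v]
reset timer                      // the built-in (timer) reporter then counts up from 0
forever
  if <(timer) > (60)> then       // end the round after 60 seconds
    broadcast [game over v]
    stop [all v]
  end
end
```

Without the re-initialization at the top, a second play-through begins with the previous game’s score and backdrop, which is exactly the kind of bug the swapped-pairs play exposed.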

When students did offer feedback, they were conscious of potentially being rude to one another. At times, they prefaced their feedback with disclaimers that what they were about to say was only because they had been told they had to say it. This interaction style reflected the norms that students and teachers in both classrooms had established at the beginning (e.g., “don’t be rude”) and the imposition of the school task. Thus, formal critiques were very much associated with an imposed school task, one with little agency. Perhaps teacher-initiated critiques were antithetical to the participatory pedagogy we sought.

Tools and artifacts

In the designed, teacher-initiated critique sessions, students used written artifacts; they did not in student-initiated ones. The written artifacts were design critique sheets and game design rubrics, both included in the curriculum and used by the teachers. The critique sheets were forms (Fig. 2) that asked students to provide general feedback in three categories: (a) something that did not work or could be improved, (b) something confusing or that could be done differently, and (c) something that worked well or that the critic liked. Denise thought the critique sheets produced mixed results.

I’m thinking about the iteration though, the feedback…I think I would want to go back and look at those questions and forms so it would help the kids be more specific when they’re giving each other feedback…Having set norms for giving each other feedback, that was helpful. So, they weren't just criticizing but giving good concrete suggestions. (Denise, interview)

Fig. 2 Critique form

Students typically wrote short responses on the design critique sheets and communicated most of their critique through talk. The feedback categories provided in the forms may have been too general to elicit useful written responses.

Students also had the opportunity to respond to each other’s games in the Scratch online community by leaving comments on game pages. Toward the end of the unit, while students were still finalizing their games and after they had presented them at the city-wide science fair, Mr. Soucek had students play each other’s games and leave comments on their online project pages. The majority of comments from students were about playability or bugs in the game. However, there were several comments on the science “not being clear” or, alternatively, being “accurate and clear,” which suggests that Mr. Soucek explicitly asked students not only to comment on playability, whether the game was fun or not, but also on the clarity and accuracy of the science in the games. One commenter asked the creators to put more science in the game play rather than in text in the slides that introduced the game. In that case, the students did not change their game to reflect this comment. Only three comments, across 28 games, addressed the specific science in games:

“The temperature got to zero and I won the game...too little heat also isn’t good for the earth. I think that if all the CO2 is out of the atmosphere you should lose, as well as when there is too much CO2.”

“It’s really easy to get the seeds. What do the seeds symbolize? Next time write out the science more clearly.”

“The science isn’t very clear. Maybe if you change smog to CO2 it would be more clear.”
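The first comment above amounts to a concrete design suggestion: track a CO2 level with both an upper and a lower losing bound. A rough sketch of that suggestion in Scratch block notation follows; the variable name, starting value, thresholds, and broadcast message are our assumptions for illustration, not anything the students wrote.

```
when green flag clicked
set [CO2 v] to (50)              // an arbitrary starting level
forever
  if <(CO2) > (100)> then        // too much CO2 traps too much heat: player loses
    broadcast [you lose v]
  end
  if <(CO2) < (1)> then          // per the comment, too little CO2 should also lose
    broadcast [you lose v]
  end
end
```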

In a few cases, the creator did fix some of the game play or programming bugs and wrote back to the original commenters. Video of one of the focal students, Max, showed him not leaving written comments but rather shouting across the room to the creators of the different games he was playing. Thus, some students preferred to communicate orally. For instance, the following episode was recorded in Mr. Soucek’s class (field notes, 5/31):

Field notes excerpt 1

Max scoots over to play Fran and Mack’s game…Mack is somewhere else in the room.

Fran asks Max what do you think you can do to make a car go away?

Max does not answer, then exclaims, “Darn, I didn’t win!”

Fran consoles him, “You got a point.”

Max: “Winning is better. What does getting a point mean?”

Fran: “I dunno. It makes you feel better.”

Max: “Winning is better!”

Franny does some programming, then tells him, “OK, try now.”

He tries again. He chooses the bike, and makes the car go away. “I won! I beat your game!”

Mack returns, and Fran tells her, “I got rid of ‘You get a point.’ Apparently, it was confusing.”

As the excerpt above shows, students’ games did improve based on playtesting. For instance, Fran and Mack’s game gained a more intuitive interface as a result of their interaction with Max. In a few cases, creators similarly fixed game play or programming bugs in response to comments left on their game pages and even wrote back to the original commenters. However, as the excerpt also shows, the critiques were mostly about game play. The majority of students did not appear to fix games at all in response to online comments, either about game play or about the science. Rather, in teacher-initiated critiques, they focused on debugging.

Final teacher-initiated critiques

Mr. Soucek directed students to create one final public presentation and critique of their games. He asked students to create a pitch to “a group that wanted to buy their game, and they had to sell it to this group” (Mr. Soucek, interview). Mr. Soucek put feedback questions on the board and pointed out to student audience members that the groups they were critiquing were supposed to learn from the feedback. He also attempted to create a safe space for presentations and feedback, in part by saying that “everyone has gaps in their game” (field notes, 6/13).

During the presentations, it was mostly Mr. Soucek who asked questions of the groups, but at times students also asked questions. We considered this presentation a final “critique,” even though students would not be working on their games afterwards. These last two instances of critique, the comments on games and the presentations, could potentially have been productive. Yet, without time to iterate on the games, they served mostly as sharing opportunities rather than opportunities to receive feedback for improvement. In the next section, we share episodes during which students initiated their own critiques of each other’s games, often by inviting a friend to critique their game.

Student-initiated critiques

Student-initiated critiques arose spontaneously in class while all students were supposed to be working on their games. The structures and rules of the two classrooms were slightly different, but in both classrooms, students were allowed to move about the room and were encouraged to help one another.

Student-initiated critiques often occurred when game designers invited one of their friends to test out their games. Most of the time, the people called on to help were the more expert programmers in the classroom. However, based on several weeks of observations in both classrooms, students were more likely to call on those experts if they were friends. At times, students would ask for an opinion, as when Emmanuel asked people to listen to the music he was making as a soundtrack to his game. Sometimes, game testers initiated the session, wanting to play their friends’ games. That gave rise to informal critiques, including beginning to develop a concept of “what a good game looks like,” as Danielle said after playing Darren’s game (field notes, 5/11). Thus, students were able to bend informal critiques to their own purposes, whether they were asking for an opinion on a specific topic or offering one. Students were able to take advantage of both roles in the participant structure.

At other times, critiques came after the game designers had asked for help with one programming task or another and the helpers had volunteered advice. For instance, in Mr. Soucek’s class, Cathy had emerged as the class Scratch expert due to her previous experience. Cathy, while solving a problem for Eliza, suggested that she, “Just put instructions on the side [in the notes box] rather than at the beginning of the game [on the game screen]. It is easier and if you put up slides [instructions on the game screen itself], it might mess up the coding” (field notes, 5/19).

A number of students took the initiative to play others’ games from time to time rather than work on their own. At other times, students near the game designers, close enough to lean over and look at what a neighbor was doing, or to have movement on a screen catch their eye, also offered personal opinions. In some cases, students were in assigned locations in the classroom, but they also moved about. Based on our observations, proximity also indicated social connectedness to some degree. Almost all student-initiated critiques were between students who were friends, or at least friendly, with one another. Thus, student-initiated critiques were circumscribed by social circles. For the most part, students were interested in each other’s work because they were interested in each other. This also may be why they were able to leverage both roles in the interchange.

Breaking the game

Often, critique sessions consisted of a student intentionally trying to expose bugs in a program. Students in Denise’s and Mr. Soucek’s classrooms termed this “breaking the game” or “exposing the game.” For instance, in Denise’s classroom, Damien had extensive experience programming in Scratch and was identified as the best programmer in the class by multiple students in interviews. Damien’s friends and other classmates often called on him for advice, to run ideas by him, and to ask him to play their games. Damien was able to successfully “break” games on a regular basis. In their post interview, focal students Sana and Colin commented that having Damien try to break their game on multiple occasions allowed them to find the bugs in the game and ultimately improved their programming. People would play Colin and Sana’s game and “would just like click on the roofs as many times as they could, so we made it if you click on the roofs too much, then it just ends the game” (Colin, interview, 5/27).

Interview excerpt 1

Sana: So, we had to fix glitches that people were exposing.

Colin: Yeah. So, we changed it, so the right arrow key press only works when it’s on a certain backdrop.

Sana: Oh, did you fix the thing? Did you fix these ones, where they just like randomly show up?

Int: Do you think people played your game to like try to break it?

Colin: Some people, definitely.

Sana: Oh, yup, yup, yup. But it’s more like, it’s not like they’re not trying to like break the game, just expose it. It’s something that everyone started doing to each other’s games in your friend group.

Colin: Yeah. I don’t know. You just see the things that are wrong with it.

Sana: You can expose everyone’s game. Yeah. And I guess that does help, though, too.

Colin: Yeah. It’s like constructive criticism, but you have fun while you’re doing it.

Sana: Yeah, exactly. It was more for laughs, so it wasn’t like in a mean way.

In the above exchange, Colin characterized the game breaking/exposing as “constructive criticism, but you have fun.” At the same time, Damien also contributed to improving Colin and Sana’s computational practices, such as helping them debug the glitches in the game, clean up their code, and annotate it by putting a note in the code to explain which piece of code contributed to which game function (field notes, 5/25). The inclusion of their friend made the activity fun, “for laughs,” and possibly encouraged them to make the changes they did to reduce the “messiness.” When asked why they made the note, Sana said that although they had written it because of Damien’s comments about the messiness of their code, he would have made fun of the note as well.

Interview excerpt 2

Int: Why did you decide to have a note there?

Colin: Cause—There are so many scripts in this sprite.

Sana: Oh yeah, our friends [Damien] would have made fun of that note, too.

Colin: There are just so many scripts, it’s hard to know which is for which. Yeah, so before, it was like this massive script. It was—

Sana: Oh, and then with the help of Damien too, like he was just like, oh my god, it looks gross. Make it look better. So—

Colin: Yeah, we had it all individual scripts, so we have a ton of when the [cursor sprite is] clicked [and touching x], and then eventually, we just combined it into one giant script. So, when it was smaller, you couldn’t tell the difference between any of the scripts.

Sana and Colin were able to improve their organization and efficiency in coding as a result of Damien’s spontaneous critique. In addition, his critique led them to voluntarily engage in the computational practice of annotation.
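A rough reconstruction, in Scratch block notation, of the two fixes Colin and Sana describe: a guard that ends the game when the roofs are click-spammed, and an arrow key that only responds on a certain backdrop. The sprite names, the threshold of ten clicks, and the comments are our illustrative assumptions, not their actual code; the inline comments stand in for the kind of annotation Damien’s teasing prompted them to add.

```
when this sprite clicked                      // roof sprite
change [roof clicks v] by (1)
if <(roof clicks) > (10)> then                // clicking the roofs too much now just ends the game
  broadcast [game over v]
end

when [right arrow v] key pressed
if <(backdrop [name v]) = [level 1]> then     // the right arrow only works on a certain backdrop
  change x by (10)
end
```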

And yet, Damien, the most experienced Scratch programmer, managed to avoid anyone “breaking” his game. In Damien’s (and his partner Yusuf’s) game “Galaxy Guardian,” the player is a spaceship keeping the radiant energy from a photon (“heat” in Damien’s game) in the earth’s atmosphere. The player is the bad guy, but also represents the current state of the climate, according to Damien, who told us that more greenhouse gases meant more heat was being trapped on earth.

Toward the middle of the unit, Stan was playing Damien’s game and, at the end of class, loped over to Damien with a big grin to tell him the good news: “Damien, I broke your game.” Damien responded quickly, “It’s not breaking it, it’s a strategy” (field notes, 5/19). Damien’s game included ways in which it could be beaten easily. The strategy was not to move the spaceship from the beginning of the game. Photons were set to bounce off at the opposite angle, and the first photon went straight up in the air. When his friend, who knew some amount of programming, offered that his playing had “broken” the game, Damien argued that what had been interpreted as a bug was actually an intentional strategy that players could use to do well. He used his expertise to claim intentionality, neither working on his game further nor opening himself to critique. The game was good enough. It worked if you played along. While Damien may have helped many of his classmates, his position as expert and his desire to maintain that position may have hurt his own development both as a programmer and in representing the science. For instance, his spaceship could have been a greenhouse gas molecule.
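Damien’s “strategy” follows directly from the mechanics just described: a photon launched straight up and reflected at the opposite angle returns to its launch point, so a ship that never moves always intercepts it. A minimal sketch of such a photon script in Scratch block notation (the sprite name and speed are our assumptions):

```
when green flag clicked          // photon sprite
go to [ship v]                   // launch from the spaceship
point in direction (0)           // straight up, so the first photon comes straight back down
forever
  move (5) steps
  if on edge, bounce             // reflect at the opposite angle off the edge of the stage
end
```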

Despite this defensiveness, breaking other people’s games may have been Damien’s favorite activity in the unit. At one point, when playing and critiquing Colin and Sana’s game, Damien articulated his desire to get a job as a game tester (field notes, 5/25). We take this to be a result of his success in being able to tell other people how to fix their games. The participatory approach, allowing students to create games and critique each other’s, may open new ways of participating in school and new possibilities students might envision for their futures.

Perhaps what most differentiated student-initiated critiques from teacher-initiated ones was the enthusiasm with which both parties engaged and their openness to learning in both roles, critic and critiqued. When someone asked for feedback spontaneously, as opposed to being assigned critique partners, they were asking someone whose opinion they respected. The creator’s level of attention often increased in the spontaneous critiques, as both parties stared at the screen pointing out specific aspects of the games and their code. When students spontaneously critiqued others’ games by trying to break them, or just by playing them, they exclaimed satisfaction at breaking the game or shouted across the room to show their enjoyment.

At the end of the year, students presented their games, or posters of their games, to 8th graders from across the city at a citywide science fair. In their presentation, both Colin and Sana reflected on using feedback as the most important thing they learned:

Colin: All right. So, the most important thing I learned from this project is to like listen to the feedback from others that they give you, because we had people play our game, and the feedback that they gave us was really helpful.

Sana: And I learned that the presentation of something really helps make it more professional…and then after every class, we’d ask class members to play the game, and we used their feedback to improve. (field notes, 5/28)

At the science fair, Colin and Sana were the only students we observed who mentioned learning from and about feedback in their presentation. Their comments show that critique was not a by-product of their activity but an integral part of making games about climate change. However, what students learned from critiquing each other was not about climate change; it was about programming and game design. Teacher interventions were needed for peer critiques to address the science of climate change. In the next section, we discuss implications for how students took up critique, the ways in which they enacted it, and the impact on their games.

Discussion

Overall, we found that students participated in both teacher-initiated and student-initiated critique. Teacher-initiated critique was broadly enacted as the curriculum described, for example, with teachers asking students to complete critique forms by answering prescribed questions. These were generally completed by student pairs as a part of doing school. In student-initiated critique, the discourse practices that emerged were typically highly informal and engaging to participants. The consequences for game creation of teacher- and student-initiated critiques differed in two important respects. First, formal critiques were less guided by a participatory pedagogy than were the informal ones. Formal critiques imposed the artifacts, task structures, and participant structures on students and, as articulated by Franny, acted as an imposition. However, formal, teacher-initiated critiques were necessary to address science learning and science content, while student-initiated critiques resulted primarily in improving game mechanics through debugging and a focus on playability.

Learning science through game design has been shown to be effective (Puttick & Tucker-Raymond, 2018). Students were supposed to be learning about both climate change and game design. We wanted their critiques, through the discourse practices of evaluation, argumentation, and sense-making, to reflect and contribute to that learning as well. However, as much as critique is part of design-centered curricula, we found that critique must be explicitly scaffolded for it to be helpful in promoting science learning. In regard to TGD and its dimensions (reality, meaning, and play), most critiques, like “breaking the game,” focused on gameplay rather than on the science represented in the game (reality) or on whether critics thought it would teach the player something (meaning). The first round of designed critique sheets also focused solely on game play.

Teachers were more balanced in their feedback, including more attention to the science of the games.

Still, considering the user experience and attending to bugs in the programs are consistent with goals for learning computational thinking. These findings are consistent with those of Lee et al. (2011), who studied game design in computer science education environments for middle schoolers in which students play-tested each other’s games and analyzed them for playability. Although we had hoped for a more balanced critique across all dimensions, students focused on playability. The spontaneous, emergent critique strategy of “breaking the game” is consistent with strategies used in learning to code, in which deliberately breaking code helps learners find bugs and learn how to remediate them. In the future, we would like students to engage in critiques that emphasize the science, or reality, of the games at least as much as the meaning and play, and we revised the critique sheets in subsequent iterations to provide more explicit scaffolding.

Both teacher- and student-initiated critiques required students to act in participant structures as critical peers. Based on similar observations in the focal classrooms and in the literature, both forms of critique relied on a distribution of expertise that differed from most other experiences in science classes (e.g., Miller, Manz, Russ, Stroupe, & Berland, 2018). Even when students engage in inquiry-based science, where they are largely expected to be responsible for finding answers, teachers are expected to know the content and to be the authorities on what is known and not known. Although critique task and participant structures allow students to be authorities of content and knowing, to determine what is acceptable and what is not, they are not easy. Classroom social dynamics were still at play, as Denise acknowledged when introducing the rules for giving and getting critiques: “Sometimes it is not easy to receive feedback, especially from someone you don’t perceive as an expert.” In participatory pedagogies, knowledge and power hierarchies may be flattened to some extent but are still subject to participants’ pre-existing ideas about whose comments are valuable and worth listening to.

We also found that playing and critiquing other people’s games was part of improving one’s own game. Returning to our definition of critique, “the public evaluation of a work to (a) provide feedback for the creators and/or (b) understand key concepts in design for one’s own future work,” we found evidence that one of the benefits of being a critic was seeing what others had done, how they had set up their games, the content they had included, and how they had programmed certain features. In regard to the tools and artifacts used within the critique structures, part of that benefit stemmed from the affordances of the Scratch programming platform. Scratch allows one to play a game and look at its code at the same time, providing easy access to the relationships between the coded instructions and the gameplay. One can even activate certain scripts and not others. This allows simultaneous critique of game play and programming and aids in giving and receiving consequential critiques.
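
One concrete illustration: in the text notation commonly used to write out Scratch blocks, a game might contain two scripts like the hypothetical pair below (the variable, broadcast, and backdrop names are ours for illustration, not taken from any student game).

    when green flag clicked
    forever
      change [CO2 level v] by (1)
      if <(CO2 level) > (100)> then
        broadcast [sea level rises v]
      end
    end

    when I receive [sea level rises v]
    switch backdrop to [flooded city v]

Because Scratch runs any script a viewer clicks, a critic looking inside the project can click just the second script to test the backdrop switch in isolation, then inspect the first script to see how the broadcast that triggers it is produced, all without playing through the whole game.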

This paper highlights the ways in which students participated both in official, designed forms of learning and in those they engaged in when allowed space to make their own decisions. Wright (2019) argues that critique opens pathways for students whose discourse patterns have not historically been valued in schools. This study adds to that conversation by showing how integrated STEM environments can include a wider range of discourse patterns in which learners can engage. Understanding and supporting a full range of engagement in integrated computer science and science learning environments is important for making those environments more inclusive and for expanding what it means to participate in science. At the same time, explicit attention to science through critique is necessary.

Conclusions

We originally included explicit attention to critique because educators and curriculum designers who pay attention to critique can also strengthen their own designs for learning environments (Buckingham, 2003; Soep, 2005). We used attention to the ways in which students participated in critique to create more of a focus on the science in students’ games, both in the initial act of creation and in the formal critiques. Formal, teacher-initiated critiques can operate as formative assessments of ongoing work when done at the right time. Finding the sweet spot between a game that is not ready to be critiqued and one that is too far along, where the creators will not respond to feedback, is important. For instance, there were times when students were too early in the process to receive much feedback on their science content, simply because they had not created it yet, and so critiques focused on more superficial glitches in the game, such as the changes in backdrops that Colin and Sana alluded to.

Another implication is that peer critiques in classrooms can emphasize the iterative nature of creative projects. Informal critiques allowed students to set their own criteria for assessment. In subsequent professional development for teachers, we highlighted the role of distributed expertise and student agency in computer game creation. Yet this approach to learning will be difficult if this is the only project in which educators treat students as creators. Finding space for students in schools to critique each other’s work is not easy, especially in classrooms in which teachers, driven by multiple demands from administrators, accountability measures, and others, take responsibility for content expertise. In classrooms with distributed expertise, students can also become authorities of knowledge. Creative, ongoing, iterative construction projects can be interspersed throughout the year, or over the course of a student’s school career, to give students practice in participating in critique, giving feedback, iterating on their own designs, and using feedback on one work to create a better work in a future project.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CO2: Carbon dioxide

DBR: Design-based research

STEM: Science, technology, engineering, and mathematics

TGD: Triadic Game Design

References

  • Aydin, E. (2005). The use of computers in mathematics education: A paradigm shift in “computer assisted instruction” towards “student programming”. The Turkish Online Journal of Education, 4(2), 4.

  • Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. Journal of the Learning Sciences, 13(1), 1–14.

  • Baytak, A., & Land, S. M. (2011). An investigation of the artifacts and process of constructing computer games about environmental science in a fifth-grade classroom. Educational Technology Research and Development, 59(6), 765–782.

  • Bell, P., Lewenstein, B., Shouse, A., & Feder, M. (2009). Learning science in informal environments: People, places, and pursuits. Washington, DC: National Academies Press.

  • Berland, L. K., & Reiser, B. J. (2011). Classroom communities’ adaptations of the practice of scientific argumentation. Science Education, 95(2), 191–216.

  • Blythman, M., Orr, S., & Blair, B. (2007). Critiquing the crit. Brighton: Art, Design and Media Subject Centre.

  • Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., & Pereira, J. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94, 178–192.

  • Buckingham, D. (2003). Media education: Literacy, learning, and contemporary culture. Cambridge: Polity Press.

  • Cassidy, M., Tucker-Raymond, E., & Puttick, G. (in press). Distributing expertise to integrate computational thinking practices. Science Scope.

  • Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15–42.

  • Cornelius, L., & Herrenkohl, L. R. (2015). Power in the classroom: How the classroom environment shapes students’ relationships with each other and with concepts. Cognition and Instruction, 22, 467–498.

  • Denner, J., Werner, L., & Ortiz, O. (2011). Computer games created by middle school girls: Can they be used to measure understanding of computer science concepts? Computers and Education, 58, 240–249.

  • Dolby, N., & Rizvi, F. (2007). Youth moves: Identities and education in global perspective. New York: Routledge.

  • Elo, S., & Kyngas, H. (2007). The qualitative content analysis process. Journal of Advanced Nursing, 62, 107–115.

  • Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods, 5, 1–11.

  • Gray, C. M., & Howard, C. D. (2015). Why are they not responding to critique?: A student-centered construction of the crit. In R. VandeZande, E. Bohemia, & I. Digranes (Eds.), LearnxDesign: The 3rd international conference for design education researchers and PreK-16 design educators (pp. 1680–1700). Aalto: Aalto University. Retrieved from https://issuu.com/josephschwartz/docs/learn-x-design-2015-v1a.

  • Halverson, E. R. (2009). Shifting learning goals: from competent tool use to participatory media spaces in the emergent design process. Cultural Studies of Science Education, 4(1), 67–76.

  • Harteveld, C. (2011). Triadic game design: Balancing reality, meaning and play. London, UK: Springer Science & Business Media.

  • Hetland, L., Winner, E., Veenema, S., & Sheridan, K. M. (2007). Studio thinking: The real benefits of visual arts education. New York: Teachers College Press.

  • Hwang, G. J., Hung, C. M., & Chen, N. S. (2014). Improving learning achievements, motivations and problem-solving skills through a peer assessment-based game development approach. Educational Technology Research & Development, 62(2), 129–145.

  • Hwang, G. J., Yang, L. H., & Wang, S. Y. (2013). A concept map-embedded educational computer game for improving students’ learning performance in natural science courses. Computers & Education, 69, 121–130.

  • Jenkins, H. (2008). Confronting the challenges of participatory culture: Media education for the 21st century. Occasional paper on digital media and learning. Chicago: The John D. and Catherine T. MacArthur Foundation. Retrieved from https://macfound.org/media/article_pdfs/JENKINS_WHITE_PAPER.PDF.

  • Khalili, N., Sheridan, K., Williams, A., Clark, K., & Stegman, M. (2011). Students designing video games about immunology: Insights for science learning. Computers in the Schools, 28(3), 228–240.

  • Lai, C. L., & Hwang, G. J. (2015). An interactive peer-assessment criteria development approach to improving students’ art design performance using handheld devices. Computers & Education, 85, 149–159.

  • Lee, I., Martin, F., Denner, J., Coulter, B., Allan, W., Erickson, J., Malyn-Smith, J., & Werner, L. (2011). Computational thinking for youth in practice. ACM Inroads, 2(1), 32–37.

  • Lymer, G. (2009). Demonstrating professional vision: The work of critique in architectural education. Mind, Culture, and Activity, 16(2), 145–171. https://doi.org/10.1080/10749030802590580.

  • McNeill, K. L., Lizotte, D. J., Krajcik, J., & Marx, R. W. (2006). Supporting students’ construction of scientific explanations by fading scaffolds in instructional materials. Journal of the Learning Sciences, 15(2), 153–191.

  • Miller, E., Manz, E., Russ, R., Stroupe, D., & Berland, L. (2018). Addressing the epistemic elephant in the room: Epistemic agency and the next generation science standards. Journal of Research in Science Teaching, 55(7), 1053–1075.

  • Nasir, N., Rosebery, A. S., Warren, B., & Lee, C. D. (2006). Learning as a cultural process: Achieving equity through diversity. In K. Sawyer (Ed.), Handbook of the Learning Sciences (pp. 489–504). New York: Cambridge University Press.

  • National Research Council. (2011). Report of a workshop on pedagogical aspects of computational thinking. Washington, DC: National Academies Press.

  • National Science Foundation. (2017). NSF’s 10 Big Ideas for Future Investment. [Webpage]. Retrieved from https://www.nsf.gov/news/special_reports/big_ideas/.

  • NGSS Lead States. (2013). Next Generation Science Standards: For States, By States. Washington, DC: The National Academies Press.

  • Nortier, J., & Svendsen, B. A. (Eds.). (2015). Language, youth and identity in the 21st century: Linguistic practices across urban spaces. Cambridge: Cambridge University Press.

  • Osborne, J. (2010). Arguing to learn science: The role of collaborative, critical discourse. Science, 328, 463–466.

  • Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

  • Petrina, S. (2017). From crit to social critique. In M. J. de Vries (Ed.), Handbook of Technology Education (pp. 39–50). Delft: Springer. https://doi.org/10.1007/978-3-319-38889-2.

  • Puttick, G., Barnes, J., Troiano, G., Harteveld, C., Tucker-Raymond, E., Cassidy, M., & Smith, G. (2019). Exploring how student designers model climate systems complexity in computer games. In J. H. Kalir (Ed.), Proceedings of the 2018 Connected Learning Summit (Vol. 1) (pp. 196–204). Pittsburgh: ETC Press.

  • Puttick, G., Drayton, B., & Karp, J. (2015). Digital curriculum in the classroom: Authority, control, and teacher role. International Journal on Emerging Technologies in Learning, 10, 11–20.

  • Puttick, G., Strawhacker, A., Bernstein, D., & Sylvan, E. (2014). It’s not as bad as using the toaster all the time. Trade offs in a Scratch game about energy use. In J. L. Polman, E. A. Kyza, D. K. O’Neill, I. Tabak, W. R. Penuel, A. S. Jurow, K. O’Connor, T. Lee, & L. D’Amico (Eds.), Proceedings of the International Conference on the Learning Sciences 2014, 3 (pp. 1485–1486). Boulder: International Society of the Learning Sciences.

  • Puttick, G., & Tucker-Raymond, E. (2018). Building systems from Scratch: An exploratory study of students learning about climate change. Journal of Science Education and Technology, 27, 306–321. https://link.springer.com/article/10.1007/s10956-017-9725-x.

  • Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable game design and the development of a checklist for getting computational thinking into public schools. In Proceedings of the 41st ACM technical symposium on computer science education (SIGCSE 10) (pp. 265–269). New York: ACM Press.

  • Resnick, M., Maloney, J., Monroy-Hernández, A., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52, 60–67.

  • Salen, K. (2008). Toward an ecology of gaming. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (pp. 1–20). Cambridge: MIT Press. https://doi.org/10.1162/dmal.9780262693646.001.

  • Sandoval, W. (2014). Conjecture mapping: An approach to systematic educational design research. Journal of the Learning Sciences, 23(1), 18–36.

  • Schrand, T., & Eliason, J. (2012). Feedback practices and signature pedagogies: what can the liberal arts learn from the design critique? Teaching in Higher Education, 17(1), 51–62.

  • Siko, J., Barbour, M. K., & Toker, S. (2011). Beyond Jeopardy and lectures: using Microsoft PowerPoint as a game tool to teach science. Journal of Computers in Mathematics and Science Teaching, 30(3), 303–320.

  • Soep, E. (2005). Critique: Where art meets assessment. Phi Delta Kappan, 87, 38–63.

  • Tissenbaum, M., Sheldon, J., Soep, L., Lee, C. H., & Lao, N. (2017). Critical computational empowerment: Engaging youth as shapers of the digital future. In Proceedings of the 2017 IEEE Global Engineering Education Conference (EDUCON) (pp. 1705–1708). Athens, Greece: IEEE.

  • Toulmin, S. E. (2003). The uses of argument. Cambridge: Cambridge University Press.

  • Tucker-Raymond, E., Torres-Petrovich, D., Dumbleton, K., & Damlich, E. (2012). Reconceptualizing together: Exploring participatory and productive critical media literacies in a collaborative teacher research group. In D. Alvermann & K. Hinchman (Eds.), Reconceptualizing literacies in adolescents’ lives (3rd ed., pp. 224–243). New York: Routledge.

  • Tucker-Raymond, E., Lewis, N., Moses, M., & Milner, C. (2016). Opting in and creating demand: Why young people choose to teach mathematics to each other. Journal of Science Education and Technology, 25, 1025–1041. https://doi.org/10.1007/s10956-016-9638-0.

  • Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25(1), 127–147.

  • Wright, C. G. (2019). Constructing a collaborative critique-learning environment for exploring science through improvisational performance. Urban Education, 54(9), 1319–1348. https://doi.org/10.1177/0042085916646626.

Acknowledgements

We would like to thank Jackie Barnes, who contributed to data collection, and Gillian Smith, who contributed to curriculum design.

Funding

This material is based upon work supported by the National Science Foundation under Grant No. 1542954. The funding body administered the grant and had no role in the conduct of the study itself.

Author information

Contributions

ETR co-designed the curriculum, led the data collection and analysis, and was the principal author of the manuscript. GP led the development of the curriculum and contributed to the analysis and writing of the manuscript. MC carried out the data collection and contributed to the analysis and writing of the manuscript. CH co-designed the curriculum and contributed to revising the paper substantially. GT contributed to revising the paper substantially. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Eli Tucker-Raymond.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Tucker-Raymond, E., Puttick, G., Cassidy, M. et al. “I Broke Your Game!”: critique among middle schoolers designing computer games about climate change. IJ STEM Ed 6, 41 (2019). https://doi.org/10.1186/s40594-019-0194-z


Keywords