Thursday, 19 December 2013

Failing in Games at Aarhus University

Last month I was invited to present at Aarhus University's Interacting Minds mini-conference on "Failing and Confusion in Games and Gaming" along with Jesper Juul, Dennis Ramirez and Charlotte Janasson. Thanks to Andreas Lieberoth for the invite and for organising a seriously interesting day :-)

Mine was the final talk of the day, where I presented some of my PhD research on Investigating Game-play: Are Breakdowns in Action and Understanding Detrimental to Involvement? (see pic below). You can find my slides here. I'm waiting to hear back about a journal paper we put together on the findings, but in the meantime you can check out my DiGRA paper on why I did not find physiological data useful for identifying breakdowns and breakthroughs in game-play. My main argument was that action and understanding breakdowns will contribute to involvement when the player feels responsible for overcoming them, but that they will decrease involvement if they take too long to overcome or have major consequences, e.g. a loss of progress. There was some interesting discussion in the Q&A afterwards around defining involvement, whether "positive engagement" is a helpful term, the importance of triangulation, and how we can avoid players getting into "negative cycles" where breakdowns don't lead to breakthroughs. While I think my work can help explain when certain breakdowns are likely to disrupt involvement, there is still plenty of scope to consider how and why some players are able to avoid these negative cycles while others aren't.

(Thanks to Andreas Lieberoth for the twitpic)

In terms of the other presentations, I was glad to hear more about some of Andreas' initial work on Quantum Moves (a citizen science game), where they investigated player motivations, e.g. fear of failure (trying to avoid looking bad) versus seeking out mastery challenges. While they chose a different focus, there is definitely some overlap with the work I presented at CHI this year in relation to the Citizen Cyberlab project, looking at why people choose to play citizen science games. I'm definitely looking forward to Andreas visiting next term so we can get into some more discussion about our research.

Jesper then kicked off the main talks by discussing failure in games (he's also written a book on the topic called The Art of Failure). Amusingly, he got different people in the audience to try out Super Hexagon and China Miner - I think I lasted about 10 seconds in the latter! Juul argued that while failure can be a source of learning, it's still an unpleasant experience, and pointed out that there is a bit of a paradox going on here - normally we want to succeed, but when we play games we seem to seek out experiences where we will fail (at least part of the time). I wonder, though, how you define failure. I don't think all breakdowns are necessarily failures; often they are part of the challenge, or quickly overcome, whereas the word failure seems to indicate something more serious. What was really interesting was when he pointed out that games can promise to repair some sort of inadequacy in us, but it is an inadequacy the game actually created in the first place! I think I'm going to have to read his book to get to grips with the various paradoxes and philosophical arguments outlined in the talk, but Jesper also suggested failure in games differs from real life as games offer a certain amount of plausible deniability, e.g. "It's just a game", "It wasn't fair", or even "I wasn't trying that hard in the first place". I have thought about "it's not fair" comments before - I see them as an indication that involvement has been disrupted, since the player sees the game rather than their own actions as being at fault - but either way I think they indicate a serious breakdown has occurred, as players are essentially distancing themselves from the game.

Dennis' talk on his PhD research followed similar lines, but he focused a little more on what failure means for learning and educational games. He pointed out that only 20% of players actually reached the end of Hitman: Absolution (and apparently only 10% of players will see the end of any game) and argued that it's important to consider the metric being used to assess success within a game. Dennis also discussed various approaches to using games and assessing them - from chocolate-covered broccoli, e.g. Math Blaster, to model-based assessment, e.g. Shaffer's epistemic frames. I particularly liked how he pointed out that we can't always infer competence from completion, and his discussion of more recent approaches to evaluation that relate to "big data" (though he also stressed the importance of talking to players too). For instance, he talked about some work going on at Wisconsin-Madison that was looking at heat maps of how different players move through a game. The fact that progress doesn't always guarantee learning is something I've considered in my PhD research, i.e. you can achieve action breakthroughs without understanding (though chances are these will be less satisfying), but it was good to hear more about what that means in terms of assessing learning from an educational point of view.
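As an aside, the heat maps Dennis mentioned can be built from surprisingly little data - you just bin logged player positions into grid cells and count visits. This is a generic sketch of the idea (made-up coordinates and grid size, not Wisconsin-Madison's actual tooling):

```python
from collections import Counter

def build_heatmap(positions, cell_size=10):
    """Bin (x, y) player positions into grid cells and count visits."""
    counts = Counter()
    for x, y in positions:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

# Example trace of logged positions from a play session (made up)
trace = [(3, 4), (5, 8), (12, 4), (14, 6), (5, 9)]
heatmap = build_heatmap(trace, cell_size=10)
# Cells visited more often get higher counts, highlighting common routes
```

Comparing these counts across players (or between those who did and didn't learn) is what makes the routes through the game visible.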

Charlotte provided a different perspective with her talk on learning from errors in education. While not focusing on digital games, she provided an interesting account of failure in real-life settings, in this case a vocational school. She made the argument that while not exactly a game, school isn't quite real life either, and vocational schools offer a sort of real-life work game - where failure is considered part of the learning process. Charlotte used an example of the students learning how to clean, cook and prepare flounder (apparently very tricky!). She noted that the instructors treat the school as a practice space where errors are ok, but not if they result from knowledge you should have acquired already. Further, it seems that developing expertise is about becoming more skilled at paying attention and knowing what to pay attention to. Her talk got me thinking about my work on CHI+MED and errors within a healthcare environment, where I've been interested in how nurses are trained to use infusion devices. But if errors are an unavoidable part of work practice and learning from them can help you become an expert, then how on earth do you go about supporting this process in an environment where the consequences of errors could literally be life or death?! I guess using a pump isn't normally that complicated, but I do wonder about what sort of knowledge nurses have and how they develop expertise in this context.

Overall it was a really good day and it got me thinking a lot about games, failure and errors in the workplace. It was a great opportunity to talk to attendees at the event, catch up with Andreas and Yishay Mor, and enjoy lots of discussion afterwards when we went out for a lovely meal in Aarhus :-)

Monday, 13 May 2013

CHI 2013 (Part II) - Citizen Science, biometrics, gamification@work and student games

So on to part two. Apart from attending a workshop, I also presented a poster at CHI: Do Games Attract or Sustain Engagement in Citizen Science? A Study of Volunteer Motivations (see below). The poster is based on some work being carried out as part of the Citizen Cyberlab project that Charlene Jennett and Anna Cox are involved with. The paper reports the findings of a set of pilot interviews that Cassandra Cornish-Trestail carried out with people who play citizen science games - in this case, Foldit and Eyewire. The answer to the question in the title is no: game mechanics didn't seem to attract volunteers but, in addition to tools such as chat facilities and forums, they do help to sustain involvement over time. Essentially, the people who play these games are already interested in science; they aren't gamers. In addition, what game mechanics allow for is greater participation in a range of social interactions, while also providing ways to recognise volunteer achievements as meaningful. I really quite enjoyed chatting about the poster and luckily there were quite a few interested people to chat to :-)

I got to meet Elaine Massung, a researcher from Bristol who was involved in the Close the Door project, where they were investigating motivations around crowdsourcing to support forms of environmental activism. Interestingly, their work suggests that game mechanics such as points can actually decrease motivation for some people. I also met Anne Bowser, a PhD student from the University of Maryland, who presented the PLACE (Prototyping Location, Activities and Collective Experience - see below) framework for designing location-based apps and games earlier on in the conference. I enjoyed hearing about Anne's work on floracaching (a form of geocaching) and how they developed the Biotracker app - a serious geocaching game for citizen science that encourages players to gather plant phenology data. I'm hoping to be able to use it at some point in the UK too!

Anne presented at the session on game design, where I also got to hear about Pejman Mirza-Babaei's work on biometric storyboards. Unfortunately, Pejman couldn't make the conference, but his supervisor Lennart Nacke was there to present the paper. I first became aware of Pejman's work during my PhD and it was really nice to see how far it had come. I'm not a big fan of biometrics - I didn't find the raw data I collected useful for identifying game-play breakdowns and breakthroughs within my case studies - but the tool presented during this talk was pretty cool. It allows designers to consider what they want the player experience to be (see below) and provides a neat visualisation of the GSR (galvanic skin response) and EMG (electromyography) data that can then be compared with what was intended. The fact that Pejman also compared using this tool with a classic user testing approach (along with a control group) was great too, and the results indicated that the BioST approach did lead to higher game-play quality. However, I do have some questions about the work, even after reading the paper. The main thing I'm not sure about is whether the BioST approach took more time than the standard games user experience approach. This is important, as I know from visiting Playable games that there isn't always a lot of time to get feedback and provide suggestions to designers. There weren't actually that many differences between the BioST and classic UT approaches; is the former really worth it if it takes a lot longer? I was also unsure about how the tool dealt with artefacts such as movement - does the researcher have to manually clean these up, and how long does this take? Finally, I noticed that the BioST tool allowed for player annotations, where it looks like players were asked to review a recording of the game-play session and add their comments, but I'm pretty sure the classic UT condition didn't also do this...
Considering this is what I asked my participants to do, and that I got a lot of rich data from it, I wondered whether the conditions really were a fair comparison - could the player reviews have been helpful without the biometric data? Nevertheless, I do like that the tool does not consider biometric data alone, as I think it's important to give players a voice too. Also, I think the way the biometric data was visualised provides designers with a powerful tool for interpreting play experiences, so I'd be keen to see more research like this.
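On the artefact question: movement artefacts typically show up as abrupt jumps in a GSR trace, so one simple (if crude) automatic approach is to flag samples where the change between consecutive readings exceeds a threshold. This is a generic sketch of that heuristic, not how BioST actually works - the signal values and threshold are made up:

```python
def flag_artifacts(samples, max_step=0.5):
    """Flag indices where the signal jumps more than max_step between
    consecutive samples (a crude movement-artifact heuristic).
    Note: a spike flags both the jump into it and the jump out of it."""
    flagged = []
    for i in range(1, len(samples)):
        if abs(samples[i] - samples[i - 1]) > max_step:
            flagged.append(i)
    return flagged

gsr = [1.0, 1.1, 1.15, 2.9, 1.2, 1.25]  # one sudden spike at index 3
bad = flag_artifacts(gsr)
```

Real tools tend to be smarter than this (filtering, interpolation, accelerometer cross-checks), which is exactly why I'd like to know how much manual clean-up is still needed.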

Later on I attended the Gamification@Work panel, which had a really interesting mix of people including Sebastian Deterding and a number of people from industry. I particularly liked Sebastian's emphasis on ensuring that autonomy isn't taken away from people when using gameful approaches at work. He also provided us with some quotes from games journalists which clearly indicated how, when you have to do something for work, even playing games, the activity can lose its appeal. I took a lot of notes in this session as it got me thinking about how I would design a game (or gamify a task), but I'm still mulling these over. The people from industry also had some insightful contributions to make, but I couldn't help coming away from the session a little concerned about how game mechanics can be used to track performance and manipulate people into behaving in different ways. Why does this make me uncomfortable in relation to work but less so in relation to education or promoting health? Some interesting questions were also raised at the end, and while measurements may be important for showing improvement (or lack of it), it's important to remember that not everything can be reduced to metrics.

Other highlights from the latter part of the conference include the student game competition - the quality of the games was seriously impressive and I'd really quite like to check out a few of them, including Machineers (a lovely-looking puzzle adventure for children that stealthily teaches logical thinking, problem solving and procedural literacy), ATUM (an innovative multi-layer point-and-click game) and Squidge (a really cute game controller that monitors player heart rate - see below); the Women's Representations in Technology panel - again a seriously interesting mix of people and perspectives which got me thinking about feminism and how gender isn't necessarily binary; Razvan Rughinis' paper on badges in education - where he discussed badge architectures and how they can be used to chart learning routes; and finally Bruno Latour's keynote - I have to be honest and say I did not find this the easiest talk to follow, but I'm sure it got my brain working! There are definitely other people who have a better handle on it than I do (e.g. J. Nathan Matias).

It was a huge conference and, in addition to the other talks I haven't mentioned, there are also a few sessions I didn't get to go to, so I've added papers on persuasive games and behaviour change to my reading list. In general though, the conference gave me lots to think about, especially in terms of how I want my own research to continue and how games might relate to my work on CHI+MED - there may be more to say about that later on...

Saturday, 11 May 2013

CHI 2013: Paris (Part I) - MoMA, games and learning, game players and the Games SIG

Last week I went to CHI in Paris - it's been a while since I've been to a major academic conference and I seem to have gotten out of the habit of blogging, so I thought I would use this as an excuse to get back into it :-) Plus there were a lot of game sessions that have got me thinking.

It all started last Sunday with the MediCHI workshop. This was a good opportunity to talk about the work I'm doing on CHI+MED, with respect to medical device safety, and to meet others in the field. The main conference started on Monday with a keynote from Paola Antonelli from MoMA. She gave us an overview of lots of intriguing design projects that MoMA has exhibited, and while no specific HCI challenges were made explicit during the talk, I was reminded of how technology, including games, can make people think. Particularly interesting examples included PIG 05049 (Christien Meindertsma) and the Menstruation Machine (Hiromi Ozaki/Sputniko!). She also mentioned their recent games collection - extra points for the inclusion of Passage :-)

In terms of the game-related talks, Erik Harpstead discussed an educational game they had developed (a single-player physics game called RumbleBlocks - see below) and how they used metrics to assess learning as part of the ENGAGE project. A toolkit was presented that logs game events and allows game-play to be replayed so player behaviour can be analysed further. This toolkit seems like it could be really useful, but my main question was whether collecting this type of logging data can actually account for situations where players progress without gaining any real understanding of the principles behind what they are doing. This concern was partially addressed during the talk when the replay analysis indicated that the gameplay mechanics actually contravened one of the learning goals (students were not lowering the centres of their structures, even though they were building structures with wider bases that were more symmetrical). This misalignment between content and gameplay could potentially explain why there was no difference between pre- and post-test regarding centre of mass, and also suggested that the game needs to be redesigned to remedy the issue.
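The replay idea rests on something quite simple: log timestamped events during play, then step back through them in order for analysis. A minimal sketch of that pattern (the event names and structure here are hypothetical - the talk didn't specify the toolkit's actual format):

```python
from dataclasses import dataclass, field

@dataclass
class GameEvent:
    time: float                        # seconds since session start
    name: str                          # e.g. "block_placed" (hypothetical)
    data: dict = field(default_factory=dict)

class EventLog:
    def __init__(self):
        self.events = []

    def log(self, time, name, **data):
        self.events.append(GameEvent(time, name, data))

    def replay(self):
        """Return events in time order so a session can be re-analysed."""
        return sorted(self.events, key=lambda e: e.time)

# Events may arrive out of order (e.g. from buffered network logging)
log = EventLog()
log.log(2.5, "block_placed", x=4)
log.log(1.0, "level_start", level=1)
names = [e.name for e in log.replay()]  # time-ordered event names
```

The value of this over raw completion metrics is that the full event stream preserves *how* a player got there, which is exactly what surfaced the mechanics/learning-goal mismatch in RumbleBlocks.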

Derek Lomas' talk on optimising learning and motivation in educational games using crowdsourcing techniques also got my attention. What was particularly interesting about this study was the huge amount of data collected (one study had 10,000 participants, the other 70,000, all of whom played the online math game Battleship Numberline) and the questioning of the inverted-U hypothesis regarding challenge and engagement. Basically, flow theory suggests that if something is too easy, boredom will occur, and if it is too hard, you'll get frustration - so a moderate amount of challenge should be the most engaging. However, the findings from Derek's work actually suggest that people spent the most time playing when the challenge level was lowest, indicating that easier challenges are more engaging. Further, the studies indicated there is a trade-off between engagement and learning, i.e. you can't have both... I'm going to have to read the paper for more details, but there are several points here that I'd like to consider further. First, I'm questioning whether the length of time spent playing is a good measure of engagement (especially when children might be playing these games during school time - who is controlling the length of play if that is the case?). The terms engagement, motivation and enjoyment were all used interchangeably, but I'm not sure they can all be reduced to the amount of time spent playing. Surely I can enjoy something I play for less time more than something I might play for longer (e.g. if my motivation was to kill time)? Secondly, I want to look at how well integrated the game mechanics of Battleship Numberline are with the learning content - mainly because I don't like the idea that there needs to be a trade-off between engagement and learning! And given Jake Habgood's work on the importance of integrating game mechanics, flow and learning content, I don't think there has to be.
Finally, the authors also suggested that novelty might be more important than challenge in relation to engagement. This was particularly intriguing, as I don't think it's something that has been explored in the literature on games and learning, and I'm guessing there might be quite a lot to it.
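Testing the inverted-U claim on log data essentially amounts to grouping play sessions by challenge level and comparing average time played per level. An illustrative sketch with made-up numbers (not Derek's data or method - just the shape of the analysis):

```python
from collections import defaultdict

def mean_time_by_challenge(sessions):
    """sessions: iterable of (challenge_level, seconds_played) pairs.
    Returns mean play time per challenge level."""
    grouped = defaultdict(list)
    for level, seconds in sessions:
        grouped[level].append(seconds)
    return {level: sum(ts) / len(ts) for level, ts in grouped.items()}

# Made-up sessions: under the inverted-U, "medium" should come out on top;
# Derek's finding was that the lowest challenge kept players longest.
sessions = [("easy", 300), ("easy", 260), ("medium", 200), ("hard", 90)]
means = mean_time_by_challenge(sessions)
```

Of course, this only makes my earlier worry concrete: whatever pattern falls out, it is a pattern in *time played*, and whether that measures engagement is exactly the question.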

Within the same session, Stephen Foster talked about designing diverse and sustainable educational games that support competition and meta-cognition. Inspired by the way Chess and StarCraft II players reflect on and review their game-play, Stephen presented a game called CompetitiveSourcery, based on the pre-existing CodeSpells platform. The game requires players to compete by designing "spells" in Java and using them against each other. Three users were observed over two months as they prepared as a team for a tournament - this included playing the game but also discussing strategies, debugging each other's code and updating a team wiki. In general, this was a good example of tapping into both micro and macro involvement for the purposes of learning, but I was surprised not to see any mention of Gee's discussion of the affinity groups that exist around games (though Gee is mentioned in the paper). Plus, the idea that teachers should consider meta-level activities isn't entirely new (see Paul Pivec's BECTA report on the importance of the meta-game), while encouraging discussion through having a tournament has been done before (e.g. research on Racing Academy). Also, while Stephen makes claims about the sustainability of this approach, there is always going to be an issue concerning whether all players will actually engage in the meta-level activities to the same extent. I'm not sure how you address that though...

There were several other game-related sessions, including Max Birk talking about the relationship between controller type and personality, and Jeff Huang talking about patterns of game-play and skill in Halo. For both these talks I wanted a bit more detail on the methods, so I'm going to have to add them to the pile of post-CHI papers to follow up. In relation to the former, I was a little confused about the relationship between my "real" self, my "ideal" self and my "game" self, as I'm not sure any of these can be static constructs, but there may be some interesting differences to explore here (do standard controllers really make gamers more neurotic?). In relation to the latter, an awful lot of logging data was collected, but I was a little disappointed that "patterns of gameplay" was more about how long and how often people play than about gameplay strategies (but that's only because I'm more interested in player strategies!). Other highlights included Nicholas Graham discussing a tabletop game where one person plays the game and the other orchestrates the experience in real time, i.e. builds levels and obstacles. This reminded me a little of Sleep Is Death, but Tabula Rasa seemed a bit more light-hearted in its approach to fostering open-ended creativity. Tamara Peyton (see below) then spoke about the alternate reality game I Love Bees and showed how leadership emerged from team-play. Interestingly, the players spontaneously used military terms and took on different roles within the team, which she classified as General, Lieutenant and Private. I particularly liked that she emphasised that disjuncture can be as important as flow - essentially, we should also be thinking about what it means to fail and how failing isn't necessarily a negative experience.

I also attended the SIG on Games and Entertainment and was pleased to see that there really is quite an active games community at CHI. Katherine Isbister and Regina Bernhaupt led the session but handed over the reins of the SIG to Magy Seif El-Nasr and Heather Desurvire. The topics that came up ranged from needing to foster links between industry and academia, to introducing further games courses at next year's conference, to discussing other venues for games research. It was clear that while some people were interested in the user experience side and methods for assessing game-play, others were interested in using games as research tools, e.g. for the purposes of collecting data. Regarding the latter, there was a suggestion that there might be a workshop or course next year with a focus on how you might evaluate this kind of large-scale data, but I think that will depend on whether someone volunteers to run it! I enjoyed the session overall and found it a good way to see the range of game-related interests across the CHI community.

Ok, I think that's enough for today. There is still plenty more to write but I'm going to have to leave it for another post! For now I'm going to leave you with a pic of Charlene Jennett hugging a bear to make one jump on screen :-)

Friday, 3 May 2013

My PhD thesis

My thesis is finally available via ORO: Digital Games: Motivation, Engagement and Informal Learning. I've included the abstract below, though please feel free to get in touch if you have any questions or comments :-)

This thesis investigates the relationships between motivation, engagement and informal learning, with respect to digital games and adult players. Following the reconceptualisation of motivation and engagement (as forms of micro and macro level involvement respectively), three linked studies were conducted. In the first study, 30 players were interviewed via email about their gaming experiences. The resulting set of learning categories and themes drew attention to learning on a game, skill and personal level, which arose from micro-level gameplay and macro-level interaction with wider communities and resources. The second investigation consisted of eight case studies that examined how involvement and learning come together in practice. Participants were observed in the lab during two gameplay sessions and kept gaming diaries over a three week period. A method for categorising game-play breakdowns and breakthroughs (relating to action, understanding and involvement) was developed in order to analyse several hours of gameplay footage. The previous categories and themes were also applied to the data. The findings suggested a relationship between macro-involvement and player identity, which was further investigated by a third survey study (with 232 respondents). The survey helped to establish a link between identity, involvement, and learning; the more strongly someone identifies as a gamer, the more likely they are to learn from their involvement in gaming practice. Four main contributions are presented: (1) an empirical account of how informal learning occurs as a result of micro and macro-involvement within a gaming context, (2) an in-depth understanding of how breakdowns and breakthroughs relate to each other during play, (3) a set of categories that represent the range of learning experienced by players, and (4) a consideration of the role player identity serves with respect to learning and involvement.