From Brain to Game

A recent article published online by Edge magazine raises an interesting prospect for gaming: the use of neuroscience in the research and development of games. The article focuses on the possible replacement of violence in games with something more acceptable that gives players the same thrill. While this is an intriguing idea in itself, the more general use of neuroscience in the gaming industry could be a real possibility for the future. In an earlier post I described gaming as being intrinsically linked to the advancement of technology but perhaps, in the future, gaming will be linked to advances in our understanding of the brain and of what goes on in there when we enjoy a game.

You can probably name the types of games you like, give examples and even go so far as to break these games down into the aspects that you particularly enjoyed and those that you didn’t care for; but when pushed, can you articulate precisely what makes a game fun? This is the challenge for game developers the world over: to create a game that consumers will enjoy. It therefore seems logical that developing an understanding of such enjoyment could lead to the development of better video games. This is the prize that neuroscience can offer. And research into video gaming isn’t even that big a leap; games have already been used in neuroscience. Maguire et al. (2009), for example, used an adapted version of the game The Getaway to examine the brain activity of London taxi drivers as they navigated the virtual streets. This was a landmark paper in the study of human spatial navigation, but it also demonstrates that games can be used in laboratory experiments; in fact, they are well suited to MRI (magnetic resonance imaging) studies, which require the subject to keep their head still.

Neuroscience in gaming could allow developers to understand what has made successful games so popular, perhaps by taking components of these games (such as violence) and examining their effect on gamers’ brains. In terms of development, new game mechanics could be tested in the same way and their probable success judged by comparison with already established mechanics.

I would volunteer to be a test subject.

So why has neuroscience yet to be utilised by gaming? Part of the answer (possibly a big part) is the vast amount of money that would be required to undertake such a project, with no guarantee of a return. Chris Stevens, talking to Edge magazine, claims that:

“The first game publisher which buys an MRI scanner will make its money back in publicity”

but those who can afford such equipment are already successful and may therefore lack the incentive to invest in such an ambitious venture. Proof of concept may also be required before investment is made, and perhaps other industries such as advertising will need to adopt a neuroscience approach before the games industry dives in.

There are also ethical issues that need to be noted. An understanding of the brain could lead to its exploitation. In his book The Decisive Moment (a.k.a. How We Decide), Jonah Lehrer describes how mortgage lenders and credit card companies have exploited flaws in the human psyche to sell consumers products that they cannot afford. While there has been no direct use of neuroscience by these industries, it is conceivable that any industry with an understanding of the brain could exploit it in this way. Even if a games developer has no unethical intentions, it could unwittingly make a game so perfectly tailored to enjoyment that it becomes addictive; existing games have already been shown to be addictive for some players. Tapping into the reward and pleasure centres of the brain could be a slippery slope, and perhaps games developers that utilise neuroscientific research will go too far before they realise their mistake. We are already surrounded by advertising that uses psychology to make a sale, but using neuroscience might be taking that principle too far.

Addicted anyone?

The potential to develop better, more enjoyable and perhaps more socially acceptable games through neuroscience surely exists. It is only a matter of time before one non-scientific industry or another uses neuroscience, and eventually I believe games will too. We must be careful, though: there are dangers to using such knowledge that may not be realised until it’s too late. So much is still unknown about the brain that exploiting it too early may prove damaging. Neuroscience can be used in games, but only time will tell whether it should be.

Zombies! A Love That Never Dies.

There’s an argument, a perfectly valid argument, that certain video games glorify war. Games like Call of Duty and Battlefield, in which you actively participate in war and kill (virtual) humans, are open to this criticism, yet they are acclaimed by game critics and defended by those who enjoy them, which is a sizeable number of people. Personally, I find it difficult to play such games, as I can’t find a strong argument against those who damn them. Can I really justify enjoying a simulation of something that I would be opposed to in real life? It’s a modern moral dilemma. I have no qualms, however, about blowing the head off a zombie.

Already dead? A little more dead won’t hurt then…

Perhaps it is this destructive freedom that makes zombie games so appealing. We are absolved of responsibility when the enemy is the mindless, murderous undead. Zombies also present the challenge that gamers seek, having essentially the abilities of humans but with fewer weaknesses and no self-preservation instinct. And there is room for flexibility when it comes to developing zombie games. We all know the standard zombie model, but games like Left 4 Dead and Dead Nation (both of which I recommend) have added variations, and just look at the plethora of creatures spawned by the Resident Evil franchise. So while your basic zombie horde offers a decent challenge, developers are free to add more by changing (mutating, if you will) the enemies in the game. This is not true of games where humans are the enemy, since human abilities are far more restricted. Endless story possibilities exist in the zombie world too, which partly explains the sheer number of these games.

Holy zombies Batman!

A problem arises, however, when the zombie apocalypse gets overused. Gamers crave innovation and challenge, and too much of the same results in a loss of interest. I love zombie games, yet I look at the games I own and see very few in my collection. This is because, while zombie games have been done to death (pardon the pun), they are not often done well enough to warrant me parting with cash. Technical issues aside, zombie games are often repetitive in gameplay (both within themselves and compared with other games), clichéd in storyline and badly written. The announcement of a new zombie game is therefore often met with groans from gamers, and this can lead to the view that we’re fed up with the format altogether. But that’s not true; one of the most anticipated games of 2013 is Naughty Dog’s The Last Of Us, a post-apocalyptic game where at least some of the enemies are zombie-esque. The demand for and love of such games is there, but they have been done so badly in the past that excitement for The Last Of Us comes almost entirely from the reputation of the developer.

While there is room for flexibility in zombie games, a few things help to make one good. Survival against the odds is a theme that should be included but, more importantly, the player needs more objectives than “kill as many zombies as possible”. Killing zombies is fun, but killing just for the sake of it gets old fast. I for one am very excited about The Last Of Us, and I’m interested to see how ZombiU turns out too. Zombies don’t appear to be leaving the games industry any time soon, and I cannot express how happy that makes me.

Gaming into parenthood

In the digital age, technology is king. I would wager that most people today have played and enjoyed a video game; be it something as accessible as Fruit Ninja or as complicated as Civilization, gaming is becoming part of our culture and, as we become parents, part of our children’s culture. A recent Observer article discussed the affinity that children have with computers and games and how this can leave “their parents baffled”. Does this mean, therefore, that the next generation of parents, those who themselves grew up playing video games, will be able to understand and connect with their children better? “Probably not” seems the likely answer: the child-parent relationship is unlikely to change that drastically just because parents understand their children’s interests a little better. But, for the gaming industry and for children, a better understanding of the role games can play in growing up might not be a bad thing.

Children like video games and pressure their parents to buy them; this is undeniably true. The advantage that the gaming generation might have as parents is the willingness and ability to research and test what their children want. Rather than simply relying on what the media and shop assistants advise, gamer parents can inform themselves and make their own judgements for their children. The worry expressed in the Observer is that the author can’t interact with her son over games:

“…computer games still bother me. It’s the knowledge gap. I have no idea what Patrick’s up to when he plays Zelda, or cries over penalties in Classics XI, because, other than the odd game of Space Invaders, I’ve never got into computer games.”

This is a problem that can be rectified by gaming knowledge. Moreover, gaming parents can change the way games are viewed by making informed decisions.

This WILL happen

Too often have I heard of children playing games that are completely inappropriate for their age group. One could argue that this shows a lack of parental involvement, but I think it is, at least in part, caused by a lack of understanding of games. Most people know what a 15 or an 18 rating means for films, but not for games. Gaming parents, however, will understand the likely cause of these ratings and can play the games before allowing their children anywhere near them. On the other hand, some parents seem to hold the view that video games are universally “bad for children”. This too is damaging and misguided. I’ll say it again: children like video games; many of them will tire of games quickly, many will grow out of them eventually and many will become avid gamers. Think of games like chocolate: fine in moderation. I am not suggesting that children should do nothing but play games, but nor should games be seen as “bad”. If properly regulated, games are harmless fun for children, and perhaps it is this view that will become more accepted as more gamers become parents.

This WILL NOT happen

I am not a parent, and perhaps I am naive to think that being a gamer will change the type of parent I am or affect my kids in any way, but I do believe that a lack of understanding is giving games a bad image and allowing children to play games that are inappropriate. Not every parent of my generation will be a gamer, but perhaps they will all have a better understanding of games, and the gaming industry will be better off for it.

Is this the real life?

As an industry that moves forward with technology, video gaming advances in sophistication very rapidly. In 1980 Pac-Man was the height of gaming: a 2D yellow blob eating smaller blobs in a maze whilst being chased by ghost shapes. Twenty years later, The Sims allowed us to take complete control of a character’s life. Now we have games with open worlds, moral choices and communities of real people playing together, and Pac-Man is playable on our already obsolete mobile phones. One of the industry’s greatest strengths is its ability to adopt the latest technology and push it to produce fantastic entertainment. But while the advancement of technology is seemingly infinite, is there a point at which the advancement of video games goes too far?

Moral choices have been finding their way into games lately, though at the moment they tend not to be greatly developed: actions are either good or bad, leaving no grey area, and your character can be either a hero or a villain. We can assume, however, that morals will play a bigger and more developed role in the future. Even in current games I find it hard to be the bad guy. I know the villains always have the best weapons and powers, but I just can’t bring myself to be that much of a dick. As the industry advances this is only going to get worse, with more excruciating decisions, maybe making games more difficult to play. Realism is something that games seem to aim for, but I don’t believe that is why we play them.

It doesn’t end with morals, either; advancement in graphics could be an even bigger problem. Quantic Dream’s Heavy Rain has one particularly difficult point where you have to cut off your character’s finger. It is amazing that a game can immerse you to the point where you have an emotional response, but how much is too much? Heavy Rain is not the most graphically accomplished game, but perhaps that’s a good thing; if it were too realistic, it might be too difficult to play.

How did you chop?

A number of problems could arise if games become too realistic. Arguments about violence in games could gain momentum (perhaps rightly so), and gamers might be alienated from newer systems because games hit too close to home. Can you imagine playing a Call of Duty where your character looks real and you have to kill AI enemies that look real? I think such a game would be unsaleable. But does that mean that at some point the games industry will reach a plateau where no technical advancements are made? Perhaps, but it’s more likely that the industry will have to evolve, changing the types of games it makes and the visual styles it uses. No one plays games purely for their realism; in 1980 Pac-Man was the height of gaming, and it is still played and loved today. Games should first and foremost be enjoyable experiences, and let’s hope that as they become more sophisticated they don’t lose sight of that.