GDC 2009 Report
So, as usual, I’m so late with my conference notes that the next big conference (Siggraph 2009) is already rolling round.
“GDC is big. Really big. You just won’t believe how vastly hugely mindbogglingly big it is. I mean you may think it’s a long way down the road to the chemist, but that’s just peanuts to GDC.”
– with apologies to Douglas Adams
I really wasn’t prepared for just how many people go to GDC. When you’re used to the European version, going to a conference with 17,000 attendees is a real eye-opener.
As with most write-ups, I got a bit tired towards the end. I’ll try to go back later to bolster the detail in the later talks.
I missed this (I was in a meeting), and with it the free copy of Rhythm Heaven that was given out. Which is a shame, because my admiration for Nintendo grows every month. You can find an exhaustive write-up on Wired here.
Speaker: Alex Evans (Technical Director, Media Molecule), Mark Healey (Creative Director, Media Molecule)
The Molecules managed to more or less fill a big hall with their engagingly high-tech yet pseudo-shambolic talk. Using some custom code that Alex Evans had written on his Mac, they pasted together “slides” in real time from diagrams drawn live, pre-built images and text, all on a virtual graph-paper display. (At one point A.E. pulled down the screen to reveal some of the app’s code. Tellingly, the file was called “craplib.cpp”.)
They called their talk “Winging It”, but really that does a slight injustice to how flexible – or chaotic? – their development process is. MM seem to be ruthless at reworking and refining their game during its development. As part of their talk they showed a video of Milestone 6 of LBP (which appeared to be an early Greenlight demo). The core of LBP was clearly there in many ways, with the cloth-feel graphics and the same intro-level music. On the other hand, as they admitted, they had gone down several blind alleys. One was the attempt to put a giant beanstalk in the game, which didn’t suit a 2D platformer; another was custom boss fights.
Perhaps the biggest change, though, was in terms of level creation. Their original strategy was to use in-game tools such as a paintbrush and shotgun to affect the world, something you could see in the demo, with the Sackboy avatar, a long paint roller in his hand, desperately trying to paint the world competently. This was all ditched later in favour of the custom editor and Pop-It tools. It was a very late decision in the project; previously there had been tension between the “Level Designers” and the “Pop-It Iterators”.
The culture of prototyping and the legacy of Lionhead was obvious. [If they want to try something]… “People just go ahead and do it” said Mark Healey at one point, and he showed video mockups of some features that team members created on their own initiative. It made you want to go out and just do something. Very inspiring. This is helped by the fact that, as Alex said, they have a very small codebase, and all their technology is under their own control. I think he said 5 minutes for a full build, but that might be my fevered imagination.
It clearly wasn’t all plain sailing though. Mark mentioned that the team had nearly come to blows over the creation of Sackboy’s design (they eventually had a “face off” to see who had the best design, then gave final say to one artist) and the integration of physics. It also took a long time to argue for the inclusion of the in-game mapped switches that LBP ended up doing crazy things with, which seems such a fundamental component of the game that I’m amazed it was ever in doubt. So, in all, the transition to the final “Game Creation Package” took a long time to take hold fully. And even then, with the first private Sony beta, for MM families and Sony employees, the levels created were so poor that they thought they had made an enormous mistake…
In three keywords: inspiring – imaginative – ninjas
Level-5’s Techniques to Producing a Hit Game–From PROFESSOR LAYTON to INAZUMA ELEVEN and THE ANOTHER WORLD
Speaker: Akihiro Hino (CEO/President, LEVEL-5 Inc.), Usuke Kumagai (Programmer, Level-5, Inc.)
This talk was translated in real time, and I suspect it lost a little in the process. As well as the Layton franchise that is sweeping the world, Level-5 have at least two other games in production: “Inazuma Eleven”, a combination of football game and RPG, and “Ninokuni”, a collaboration with Studio Ghibli.
Of all the words in the title, “Hit” seemed to be the point of the talk. The subtitle was “how to make a fun game sellable”, and it concentrated on why they think their games sell. The gist: they have a promotional strategy in place even before a game goes into full development.
Through the power of a dodgy translation, they cited two key elements that they use when selling a game:
“Catch Copy Planning” Subtitled “loading the weapons”, this appeared to boil down to “have key selling points in mind before you start”. When thinking of game ideas, Hino-san said he imagines the conference presentation he would give about the game, and how the audience would react to it. For Professor Layton the three key ideas were:
- Puzzles and story in the same game
- The “mental exercise” theories of Dr Akira Tago, who is extremely popular with children in Japan
- Animation and voice acting beyond the quality of that seen before in a DS game (in Japan, the voice actors used are much more famous than those on other DS games).
These ideas don’t seem particularly revolutionary to us, but I think the key was that he had a clear idea of the product’s core strengths before setting out. In addition, they produced a promo video before development started, setting out the key ideas for the development team, so that they too had a clear idea of what the project was meant to be. Very little gameplay was shown, even in videos that had been re-edited much later in development.
The second half of “Catch Copy Planning” was to identify the core market for the game (the female market was identified for Professor Layton) and to match the game content to the market and pitch the game directly to them.
“Boom Trigger” Subtitled “creating a big fire”, this suggested ways to make people talk to one another about the game (what they termed the “Communication Gimmick”) and to extend the lifetime of the game (the “Extension Gimmick”). For Professor Layton, the Communication Gimmick was players asking friends to solve puzzles for them (possibly using the wi-fi connection?). The Extension Gimmick was the sheer number of subgames, and the ability to unlock and download new puzzles after the release date.
So again, nothing too extraordinary, but it was noticeable that they applied these principles to all the other games they produced, including their football RPG.
There was much excitement in the audience about the Ghibli tie-in, including some footage of the new game. Akihiro Hino admitted to being like a schoolboy in Ghibli’s presence, which was quite sweet.
In three keywords: hits – boomtrigger – cryptic
An American engine in Tokyo: The collaboration of Epic Games and Square Enix for THE LAST REMNANT
Speaker: Mark Cerny (Consultant/Developer, Cerny Games), Michael Capps (President, Epic Games Inc), Robert Gray (Technical Consultant, Square Enix), Hiroshi Takai (Director, The Last Remnant, Square Enix), Daniel Vogel (Lead Engine Programmer, Epic Games, Inc.)
Square Enix did something few Japanese developers would even consider: they switched to using Unreal for one of their games. So the panel had 2 very voluble Americans, a quiet German lead programmer, and a near-silent producer from Japan.
Reflecting on it now, the only thing that sticks in the mind was Mr. Square’s initial response when asked “what were the problems with working with Unreal 3?”. The response was to pretend that the question hadn’t been asked at all and to stare into the distance until it went away. As my mother used to say, “if you can’t say something nice, don’t say anything at all”.
As with most panel discussions, things were very disjointed, so I might have to resort to bullet points:
- The Squeenix guys got a bit of a shock when they first saw the code; their initial response was “how on earth can you ship a game with this?” They refused to buy a licence until Epic committed to translating all their documentation into Japanese. Unfortunately Epic hired cheap-ass translators who (a) translated the wrong documents, starting with Unreal 2 docs, and (b) did things like translate “Actor class” into “man who plays in films class”.
- Square sent engineers to the East Coast to “embed” with Epic – apparently the first time any developer had done this.
- Epic’s development process is very different from the Japanese one. Square’s is based around the idea of only bringing assets into the game when they are done; Unreal is based around iteration and effectively generating content in-game. This was a major cultural shift.
- Their biggest asset was Rob Gray who acted as a “conduit” between the two companies. He could translate and act as a mediator. It sounded like the whole process would have failed without him.
- Major problems (they answered eventually!) were:
- artist-driven development was new to them. They jumped in and made a lot of mistakes early on that made it very hard to fix things later.
- constant engine change. Updates took 1-2 months to come through to the game teams because of the upgrade time, and the constant question was “when will the engine be done?”
- working with the engine. The Unreal lead programmer effectively admitted that some of their code might not be great, but that it’s there for a reason, i.e. someone will be using it – so they never change it for the sake of it. (Personally, this sounds like a horrible situation for any programmer to be in; I wonder how long it can last.)
- Square are very used to a fixed, tight memory budget (“not 1 kilobyte wasted”). Garbage collection made them very nervous, and Square reported “wastage” of up to 30MB. Epic didn’t really have an answer to this.
- Compromises had to be made. Square wanted lots of characters on screen, more than UE could do. They had to scale down as a result. There were “lots of areas” where it didn’t meet expectations.
When asked if it was a success, the Japanese producer described it as a “great experience”, which I think meant “no”. On the other hand, he did say that he got much happier as development went on (“we learned a lot”). Some of the team do want to use the engine again.
I notice Square have just licensed Gamebryo (albeit from a different internal studio). I wonder how much of that is based on the Unreal experience.
In three keywords: cultural – cagey – misunderstandings
Speaker: Mike Ambinder (Experimental Psychologist, Valve)
Valve make great games, but I’ve always been left disappointed when seeing their talks. Maybe I’m unlucky. Some people raved about this talk, but it seemed to me to be quite a dry, not too incisive run-through of techniques for playtesting a game. It suffered from trying to cover too many subjects rather than picking one and covering it well. So it ended up being a bit of a litany of starter information, with each technique discussed in turn. There were still a few nuggets in amongst the stones, though. The techniques covered were:
- Direct Observation
- Verbal Reports (player speaking out loud while playing)
- Statistical Tracking
- Design Experiments e.g. community polls
- Physiological Measurements (eye tracking, skin conductance, face sensors)
The advantages and disadvantages of each can be found quite readily with a bit of Googling, so again some snippets of things I thought were non-obvious or worth repeating.
- Direct Observation (like most of these methods) needs a specific design goal to be useful, e.g. “is this UI working?”. Just watching without any aim isn’t worth it.
- From the pieces of footage shown, it’s obvious that Valve go to great lengths to make their UIs very, very, well… obvious. Big glows around characters, and lots of icons added if the player missed any important information. Trying to be too subtle didn’t work for them. Which is what we have found time after time :-)
- What the player does, rather than what they say, is primary. For example, players often give the wrong explanation as to why they did something after the fact, in order to self-justify. It is too easy to bias the results with the wrong context. (I wish the presenter had given references for some of the psychology studies he mentioned. Some of the experimental results he stated were almost too good to be believed.) But the bottom line is “people don’t know why they do what they do”.
- When doing surveys, a common trick is to repeat the same question in different ways – and get different answers from the respondents. This is a way of triangulating the underlying answer and checking that it isn’t overly susceptible to bias or influence.
- Valve’s play testing continues long after initial release. There is online stats gathering, polls, testing.
- Some people have successfully used an unusual marker to measure involvement: the rate at which a player is seen sipping his/her drink correlates well with their involvement!
- When designing UIs, they try to limit long single eye movements (say between opposite corners of the screen), since this is known to be very tiring for the player.
- Using skin conductance, it’s known that players get a second spike in their body alertness between 10 and 15 seconds after the initial shock. So designers use this to time secondary attacks in the game for maximum scariness!
It’s still reassuring to see effort in the direction of “design as hypothesis, so test it as scientifically as you can”, since that’s the direction we seem to be moving in as well.
In three keywords: dry – thorough – a-bit-dull-really
Speaker: Hideo Kojima (Head of Kojima Productions)
This talk has been done to death, and there are far better write-ups than I can manage (my favourite is on Wired), and I missed the start by being in a meeting with the Havok guys. The highlights were the Japanese MGS adverts – the ones set in the office and the jungle were hysterical. Unfortunately I can’t find them on YouTube :-(
In three keywords: isometric – adverts – dig-at-ps3
Speaker: Keita Takahashi (Bandai Namco)
Probably my favourite talk from the whole conference: it was just so… so… nice. It started out with a knitted scarf that looked like Noby Noby Boy himself and turned out to have been knitted by the designer’s mother, and ended with Takahashi gently inquiring if one of the questioners in the Q&A section maybe had a few, you know, issues?
A more-or-less complete transcription is here. It’s well worth reading to see the mental processes of someone who is clearly viewing the world from a different angle from most people. Sometimes I wondered if the games industry was sending him mad… he is certainly treading a separate path. No wonder he at one point declared that we should “ignore players and companies and make the games we like” and “do not fear failure”.
There was no real structure to the talk, so in many ways it matches the Noby Noby Boy “game” itself. It just gradually unfolded in a way that sort of made sense at the end. Similar to the Media Molecule talk, Takahashi-san used drawings and videos rather than a traditional presentation. Maybe the word we shouldn’t fixate on is “game”. Maybe it’s “play” instead. (Man, that’s deep). The presenter described NNB as being like “buying a ticket to a festival” and his goal as “creating something enjoyable”, which ties in with that somewhat.
In three keywords: charming – whimsical – inspiring
2 hours, 10 presentations, lots of indie games. How “experimental” they were seemed to vary (Flower seemed an odd choice, and Derek Yu almost seemed like he was chosen because he’s popular in the indie scene), but everything had at least one interesting element to it.
You can download and play quite a few of them, and see videos. In order, the games were:
- Unfinished Swan - this looked like a really cool video that was still searching for a game to attach to it
- Shadow Physics Steve Swink, Steve Anderson
- Miegakure - possibly the most bonkers of the lot
- Spy Party - Chris Hecker
- Daniel Benmergui - “I Wish I Were the Moon”, “Trials”, “StoryTeller”, “Today I Die”
- Flower - Jenova Chen, Nick Clark
- Achron - Hazardous Software (I have the word “MADNESS” written in big letters in my notepad next to this one)
- Closure - Tyler Glaiel
- Where is My Heart - Bernhard Schulenburg
- Rom Check Fail - Farbs
- Roguelikes - Derek Yu
At least 3 or 4 of the games (Miegakure, Achron, Closure, Shadow Physics) were influenced by, or at least in the mould of, recent games where the concept of “mapping different dimensions to one another” is used, Braid being the most obvious. They ranged from the merely quite-bonkers (Shadow Physics) to the totally-bonkers (Achron, Miegakure).
In terms of new gameplay or mechanics, I thought Spy Party – although flawed in its implementation – was the closest to being something genuinely novel. The idea was for the player to watch on-screen characters interact (in a party situation), then try to work out who was the other real-life player, and who were the AIs. A sort of “reverse Turing Test” made more feasible by limiting the range of player “language” to something simpler than full natural written English. Unfortunately, the “gameplay” consisted of just watching for a tell-tale animation playing on a character rather than spotting things like patterns of behaviour.
The game that I was most interested in playing was “Today I Die”, a simple 2D game where the lyrics of a short poem were written on screen. By playing the game you could collect new words and swap them into the poem. And when the words swapped, the gameplay on the screen matched the new words, so new games opened up. It’s the kind of thing you need to see – it’s far simpler to show than to explain. And quite charming in its execution.
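To make the mechanic concrete, here is a toy sketch of the idea – the words and rules below are entirely invented for illustration, not the game’s actual content – showing how swapping a word in the poem can swap the active gameplay rule:

```cpp
#include <map>
#include <string>

// Toy sketch of the "Today I Die" mechanic: the poem's words double as
// game state, so swapping a word swaps the active gameplay rule. The
// word-to-rule table here is invented; the real game's words differ.
struct PoemGame {
    std::string word = "dark";
    std::map<std::string, std::string> rules = {
        { "dark", "the sea drags the player down" },
        { "free", "the player can swim upward" },
    };

    std::string active_rule() const { return rules.at(word); }

    // Only words the player has collected (i.e. that exist in the
    // rule table) can be swapped into the poem.
    void swap_word(const std::string& collected) {
        if (rules.count(collected)) word = collected;
    }
};
```

The point of the sketch is just that “poem” and “game rules” are the same data structure, which is roughly the feeling the game gives you.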
In three keywords: mindbending – fun – exhausting
Lots of design talks yesterday, so today I put on my programming hat, and the unveiling of…
Speaker: Michael Abrash (RAD Game Tools)
Speaker: Tom Forsyth (Intel)
Note: the slides for these presentations are available here on Intel’s site. The slides are in PDF, so they don’t include the extra notes that Michael Abrash promised. Curses.
It’s probably best to lump these presentations together, although the styles of presentation were somewhat different. Larrabee is Intel’s attempt to take on the graphics card vendors by gluing together lots of old Pentium cores and developing a software renderer. Actually, that’s a horrible misrepresentation, but the whole thing does remind me of a MacGyver-style “it’s crazy, but it might just work!” skunkworks.
Abrash and Forsyth are two of the Smartest Guys in the Room, so if it’s possible, they will be the ones to do it; but at the moment the jury is definitely out on whether the numbers will work out. It looks like they are betting on the inherent parallelism of the algorithms involved in rendering, plus load balancing to spread out the workload and keep the processors constantly fed, to give them enough of a gain to offset the inevitable performance decrease you get by not using custom hardware to do your rendering for you. Those gains and losses are much too hard to pin down to just stick some numbers into Excel. And we haven’t even gone into the power consumption…
Michael Abrash’s talk was easily the most hardcore talk I went to. Not only did it feature extensive assembly language listings – which I love, but I know are an acquired taste – he also deliberately raced through some quite involved triangle rasterisation algorithms in order to fit into his allotted hour. And I can honestly say that the attendee sitting next to me did, genuinely, fall asleep (he was a forum moderator on gamedev.net, I checked his pass). It was a really interesting concept though, so I need to wait till the full slides come out to go through it in depth.
Tom Forsyth’s talk was much easier to follow, and still had a gratifying dollop of assembler. While Abrash’s talk focussed on how clever algorithms could help you get a win over a GPU architecture, Tom’s went into more detail about the extra instructions they had added to the traditional Pentium instruction set. With the addition of three-operand instructions, and a mask register usable on many parallel instructions, you can end up with some remarkably compact vector code.
The new scatter-gather instructions look neat. They are meant to allow easily packing/unpacking between SoA and AoS data format, so you can easily shuffle them into formats that can be parallelised easily. However, the downside is that they use memory access just like any other instruction, so the cost of cache misses could be crippling. Again, the proof is in the pudding.
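To illustrate why gather matters, here is the AoS-to-SoA repacking written out as plain scalar C++ (my own sketch, not Larrabee code). On gather-capable hardware a single vector gather per component would replace each strided loop below; either way, each element can land on a different cache line, which is exactly where the cache-miss cost comes from.

```cpp
#include <array>
#include <cstddef>

// AoS: how game code typically stores its data - one struct per particle.
struct ParticleAoS { float x, y, z; };

// SoA: what a wide SIMD unit wants - each component contiguous in memory,
// one entry per vector lane (16 lanes assumed here, purely for illustration).
struct ParticlesSoA {
    std::array<float, 16> x, y, z;
};

// "Gather": pull each component of 16 AoS particles into contiguous
// lane arrays. Every iteration is a strided load - the access pattern
// a hardware gather instruction would perform in one go.
ParticlesSoA gather(const ParticleAoS (&p)[16]) {
    ParticlesSoA out{};
    for (std::size_t i = 0; i < 16; ++i) {
        out.x[i] = p[i].x;
        out.y[i] = p[i].y;
        out.z[i] = p[i].z;
    }
    return out;
}
```

The reverse loop (SoA back to AoS) is the corresponding “scatter”; the instruction saves the shuffle code, but not the memory traffic.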
In three keywords: hardcore – assembler – abrashtalkstoofast
Speaker: Jason Q Gregory (Naughty Dog)
The room was packed 10 minutes before the talk started, although I have to confess I didn’t find too much new in it. It was generally about how they attach scripts to objects and use their scripting language to set up scripted sequences, and to add a bit of variety – things like making prop objects wobble when the player is climbing on them, and fall off when the player jumps off.
All the usual issues came up, such as making scripts re-startable if the entity was streamed back in. There didn’t seem to be any solution other than “write your script correctly”.
ND are using a custom scripting language largely based on PLT Scheme, which I guess you could broadly categorise as a Lisp variant with very strong typing. Partly this is because of ND’s heritage with GOAL (a “kind of” Lisp, though it seemed a lot more complex than that simple categorisation) on PS2. To me, using Lisp’s syntax but not its amazing dynamic typing seems, at first sight, to take all of Lisp’s disadvantages and few of its advantages, but ND are the ultimate arbiter of that. I’d love to see how their scripters work, or indeed what background their scripters have.
One side-effect of the use of a Lisp variant was that their scripts ended up as a mix of code and data in “state blocks” (I think to represent their state machines).
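As a loose illustration of the “state block” idea – translated into C++ with entirely invented names, since I haven’t seen ND’s actual scripting code – each block bundles data (tunable fields) with code (the behaviour that picks the next state):

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of a "state block": data and code side by side.
// None of these names are Naughty Dog's - this is just the concept.
struct StateBlock {
    float wobble_amount;                          // data: a tunable field
    std::function<std::string(bool)> next_state;  // code: decides what happens next
};

struct ScriptedProp {
    std::map<std::string, StateBlock> states;
    std::string current = "idle";

    // Run the current state's code; it returns the name of the next state.
    std::string update(bool player_on_prop) {
        current = states.at(current).next_state(player_on_prop);
        return current;
    }
};

// The wobbling-prop example from the talk: wobble while the player
// climbs on the prop, fall once they jump off.
ScriptedProp make_wobbly_prop() {
    ScriptedProp prop;
    prop.states["idle"]     = { 0.0f, [](bool on) { return on ? "wobbling" : "idle"; } };
    prop.states["wobbling"] = { 0.3f, [](bool on) { return on ? "wobbling" : "falling"; } };
    prop.states["falling"]  = { 0.0f, [](bool)   { return "falling"; } };
    return prop;
}
```

In a Scheme-based language the data fields and the transition code would simply sit together in one s-expression, which is presumably why the “mix of code and data” fell out so naturally.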
Debugging seemed to be a case of using printf(). However, what Jason did seem to intimate, and what I’ve seen in the past, is that education, documentation and training are the most effective debugging tools in many cases.
In three keywords: scripting – standard – lisptastic
Worst talk I saw :-( I regret not going to the Peter Molyneux talk instead, and it’s not often I say that.
The idea was that during GDC itself, small teams of people went around doing research on the attendees themselves. The “experiments” were things like conducting surveys, analysing Twitter and blogs for the “#GDC” tag or analysing the use of business cards (I think – it really wasn’t clear what that one was about).
It’s fair enough that the work wasn’t intended to be peer-reviewed or anything, but unfortunately a lot of it was conducted so roughly as to not give even the semblance of a sensible result. For example, one experiment involved asking attendees to place a point on Richard Bartle’s player-interest graph (which maps onto player styles) in order to build up a profile across disciplines. However, it was done by sticking Post-Its on a largeish cardboard sheet, and as the Post-Its accumulated, people started to stick their entries further and further out to avoid obscuring others’ entries… oops.
David Tisserand of SCEE was in the presenting list for the Business Card experiment. I think he was a bit embarrassed about having been roped in, sorry David!
Side note: I don’t know if the poor standard of work was related to this talk having the highest proportion of academics presenting and attending. The sample set is a bit too small to say.
In three keywords: disappointing – amateurish – wordle
Speaker: Rune Skovbo Johansen (Unity Technologies)
I saved a good one till last. Tom Forsyth’s blog had recommended this talk and I wasn’t disappointed. It was a tech talk about how to do runtime skeletal manipulation of a walking character to handle correct foot placement. And it also handled quadrupeds really well. Very impressive.
Gavin Clarke did a lot of very similar work for us on Ghosthunter on PS2 to get what was, for its time, real state-of-the-art foot IK. Many of the ideas here were the same, but this talk added some very good analysis to allow “hands-free” integration. For example, it analyses the walk-cycle animation to automatically determine the foot-down and foot-up times, the length of stride, and how long each foot spends on the floor. It also went into some depth on tricky cases, such as bending the trailing foot when stepping down from a ledge – which often looks very ungainly unless you get it right – and smoothing out the root transform of the character as it moves over steep changes in height.
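To make the “hands-free” analysis idea concrete, here is a much-simplified sketch of detecting foot-down and foot-up frames from a sampled walk cycle. The “foot height below a threshold means contact” test is my simplification for illustration, not necessarily the method from the talk:

```cpp
#include <utility>
#include <vector>

// Given the foot height sampled once per frame over an animation cycle,
// return the frame indices where the foot first goes down and where it
// comes back up. Assumes (for simplicity) a single contact phase and
// that contact == "height at or below ground_threshold".
std::pair<int, int> find_foot_contact(const std::vector<float>& foot_height,
                                      float ground_threshold) {
    int down = -1, up = -1;
    for (int i = 1; i < (int)foot_height.size(); ++i) {
        bool was_down = foot_height[i - 1] <= ground_threshold;
        bool is_down  = foot_height[i]     <= ground_threshold;
        if (!was_down && is_down && down < 0) down = i;  // foot-down frame
        if (was_down && !is_down && down >= 0) up = i;   // foot-up frame
    }
    return { down, up };
}
```

From those two frames you can derive the contact duration and, with the root motion, the stride length – the numbers a runtime IK system needs to plant feet without hand annotation.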
Good advert for the Unity engine, too. I’ve been guilty of thinking of it as “just a Flash-like browser game engine” in the past; I need to reconsider, because they were doing some very good stuff.
In three keywords: animation – quadrupeds – impressive
These are things I wish I’d known before I went:
- take a laptop. There is lots of free wi-fi, but no pods with PCs
- make yourself a quick-reference timetable – the provided one is useless
- scout out the venue in advance so you can easily find where all the talks are held
- don’t trust the titles of the talks! Generally they are either vague, trying to be too clever, or in some cases downright misleading. A better guide is to stick to speakers’ reputations, or to read an in-depth description of the talk
and finally, but possibly most importantly:
- avoid the vegetarian option for the free GDC lunch. Or for that matter, probably the non-vegetarian option.