A Single Click: What The Player Doesn't See
April 17, 2015
Interactions with objects of various shapes and forms are an inherent part of playing games, but rarely do we think about what goes on behind a simple click. In this blog post we will broadly go over what it takes to get one of the interactions of Fragments of Him working. The interaction we will be analyzing is one of the first ones you encounter in the game, the opening of an elevator.
It all starts with a concept
We store the script for Fragments of Him in a spreadsheet. This sheet contains all of the interactions, dialogue, and animation descriptions for its numerous scenes. The entry for the interaction you see animated above can be seen in the image below. Interactions in our spreadsheet are given identifiers to keep track of whether an interaction is linear, branching, mandatory, or optional. More often than not, there are various voice-acted lines that can be heard as a result of an interaction.
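To make the spreadsheet structure concrete, here is a minimal sketch of how one interaction row might map onto a data structure. This is purely illustrative: the field names, identifier format, and voice-line naming are assumptions, not the actual Fragments of Him format.

```python
# Hypothetical model of one spreadsheet row describing an interaction.
# All names here are assumptions for illustration only.
from dataclasses import dataclass, field
from enum import Enum


class Flow(Enum):
    LINEAR = "linear"
    BRANCHING = "branching"


@dataclass
class Interaction:
    identifier: str                  # e.g. a scene-unique ID for tracking
    flow: Flow                       # linear or branching
    mandatory: bool                  # must the player perform this to progress?
    voice_lines: list = field(default_factory=list)  # lines heard as a result


# The elevator interaction from this post might look something like:
elevator = Interaction(
    identifier="A1-ELEVATOR-OPEN",   # hypothetical identifier
    flow=Flow.LINEAR,
    mandatory=True,
    voice_lines=["line_will_elevator_01"],  # hypothetical line ID
)
```

A structure like this makes it easy to filter the spreadsheet, for example to list every mandatory interaction in a scene, or every interaction that still needs voice lines recorded.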
Elevator interaction in spreadsheet
With all of the above written down, it's up to the art team to figure out how to proceed. Almost all of the game is based on familiar or real locations, so we use a lot of photo references to construct our scenes. There are also questions to answer: how will the environment react to the player's interaction, which materials do we use on the scene geometry, and how will the lighting in the environment react to the characters in the scene?
Creating the environment
The scene description states that this takes place in an apartment building and that we need both an elevator and a staircase. Some of the reference for the elevator can be seen below. We usually look at multiple reference photos and take elements from all of them to form the final result. In this case we took the interior panels from one photo and the hand rests from another.
Elevator reference material
Elevator reference material
Before starting on asset creation we usually discuss these reference images to determine what we prefer, after which the modelling can start. We do this for every interaction so that we end up with a complete scene (at least at a basic level). At many points in this process we reflect on how it's shaping up and how the player moves through the space. For example, early on we noticed that we had made one of the hallways too long, which took away some of the flow of the experience. Realising this early makes it relatively easy to cut out the chunks that don't feel pleasant from a player's perspective.
Creating the character
Now that we have the environment covered, we'll need a character to interact with this elevator. The process here is similar to the environment. We get a description of Will (one of the characters in the story) from our narrative designer, which gives us a general idea of who this person is. As a team, we go over a bunch of different photo references and pick the photos that fit the person and the narrative. From this point forwards, elements from the photo references are combined into concept art.
Will character sketch
As we got closer to nailing down the look of Will, we tried to clarify the details of the way this person dresses. The outfits required by the narrative are largely determined by the year in which this part of the story takes place, the season, and the time of day. Below you can see some of the silhouette concepts created by Baiba Gedrovica that helped us explore the possibilities in clothing and hair styles.
Will concept drawings
Once the character features are decided on, it's time to start modelling the character. Quite a few steps are involved in getting from concept to a modelled, rigged, and skinned model. This blog post does not aim to cover that particular process in depth; if you want to know more, the Unity website has more info on the steps it takes to get a character into the game. This is the result for a single character with a single outfit:
Static view of Will
Animating the character
At this point we have a static character that’s all set up to be animated. We made use of Xsens Motion Capture technology to record the motions and turn this into data that drives Will’s skeleton. While this method is not entirely drag and drop, it does help a tremendous amount in creating movement that appears very natural.
Kaylee monitoring recording and keeping track of animation spreadsheet
Tino in an Xsens motion capture suit
Tino described the process from modelling to getting it in the game as follows:
Many of these steps have to be done for every single animation in the game. Fortunately, some of the other steps only have to be done once, such as creating the model, its textures, the skeleton, and its skin weights. There are roughly 250 unique animations in the game, in case you were wondering. The resulting animation, as it's portrayed in the scene, is quite pleasant.
Hooking it all up
When it comes to implementation, we have spent a lot of time building editor tools to assist us. I wrote a more detailed blog post on developing these tools last year. In short, it takes away the task of writing custom code for each interaction. That single button press triggers at least 15 scripts, 6 animations and 1 sound effect.
Elevator Interaction
This totals around 1,500 lines of code (excluding the duplicated modules and the actual interface and code behind the image above).
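To give a feel for how a data-driven setup like this removes per-interaction code, here is a minimal sketch of an interaction that fires a configured list of actions on a single trigger. The real tools are Unity editor extensions written in C#; this Python sketch and all its names are illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of a data-driven interaction trigger: one click
# fires every configured action (animations, sounds, scripts) in order.
# Names and action strings are assumptions for illustration only.
class Interaction:
    def __init__(self, name):
        self.name = name
        self.actions = []  # callables configured in the editor, not in code

    def add(self, action):
        """Attach one action to this interaction (animation, sound, script)."""
        self.actions.append(action)
        return self  # allow chained configuration

    def trigger(self):
        """Run all configured actions in order; returns their results."""
        return [action() for action in self.actions]


# Configuring the elevator interaction with a few hypothetical actions:
elevator = Interaction("open_elevator")
elevator.add(lambda: "play_animation:doors_open")
elevator.add(lambda: "play_sound:elevator_ding")
elevator.add(lambda: "run_script:move_will_into_elevator")

results = elevator.trigger()  # a single click fires all three
```

The key idea is that designers configure the action list per interaction in the editor, so a new interaction needs data setup rather than new custom code.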
If you have any questions you can contact me @elwinverploegen.