3D Art Pipeline, for Games

As I move more into the area of web development, I run into a lot of people who are interested in making games but have little idea of how they are made, especially the art. Hopefully this post will give a general idea of how the art gets made, specifically an animated character, from concept to engine.

As a side note, every studio, project, and artist is different. This is meant to be an overview of the general flow, but most projects will have more steps, omit some, combine others, or use different software.

The idea: All things start with an idea. Maybe it’s several well-designed pages, with description and purpose. Other times it’s a napkin that someone scrawled on during lunch, complete with mustard stains.

[Image: 3D Art Pipeline_1]

Concept: This is then kicked off to the concept artist. Their job is to turn the words into pictures. This usually starts as many thumbnails, which then get refined into color comps and eventually a finalized design.

[Image: 3D Art Pipeline_2]

Besides the design document, they usually have a style guide they follow to help ensure that the character will fit into the rest of the world and speak the same visual language. An amazing example of this is DOTA 2’s style guide.

[Image: 3D Art Pipeline_3]

Their finished product is often a character turnaround, with maybe a couple of extra action / emotive poses. This gets passed off to the modeler.

[Image: 3D Art Pipeline_4]

The Model: There are many methods for making 3D models. My preferred method is ‘box modeling’, in which you start with a box, subdivide it, move the vertices to best fit the shape you want, then rinse and repeat until you have the level of detail you are looking for.
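
For the programmers reading along, here’s a rough sketch of what one “subdivide” step does to a single quad. This is a hypothetical, simplified midpoint split (a real modeling package also smooths the result); the point is just that subdividing adds more vertices for the artist to push around.

```typescript
// A 3D point and a quad defined by its four corner vertices.
type Vec3 = [number, number, number];
type Quad = [Vec3, Vec3, Vec3, Vec3];

const midpoint = (a: Vec3, b: Vec3): Vec3 =>
  [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2];

// One naive subdivision step: split a quad into four smaller quads
// using its edge midpoints and its center. (Real subdivide tools also
// smooth/average positions; this only adds resolution.)
function subdivideQuad([a, b, c, d]: Quad): Quad[] {
  const ab = midpoint(a, b);
  const bc = midpoint(b, c);
  const cd = midpoint(c, d);
  const da = midpoint(d, a);
  const center = midpoint(ab, cd);
  return [
    [a, ab, center, da],
    [ab, b, bc, center],
    [center, bc, c, cd],
    [da, center, cd, d],
  ];
}
```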

[Image: 3D Art Pipeline_5]

Let’s take a second to describe the anatomy of a model, and polygons. All models are made up of 2D shapes, laid end to end. As you see in the left image, there are a bunch of rectangles (which each consist of 2 triangles). While they give the illusion of volume, they are all completely flat, with points (vertices) that define the area. These polygons then have color placed on them through shaders. In fact, in order to see them at all they have to have 2 different types of shaders on them, but let’s not worry about that yet.
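
In code, a mesh really is just that: a flat list of points plus a list that says which three points make up each triangle. A hypothetical minimal version might look like the sketch below (engines store more per vertex, such as normals and UVs, but the idea is the same):

```typescript
type Vec3 = [number, number, number];

// A minimal mesh: vertex positions plus triangle indices into that list.
interface Mesh {
  positions: Vec3[];                     // the points in 3D space
  triangles: [number, number, number][]; // each entry indexes three positions
}

// A single flat rectangle (quad) built from two triangles.
const quad: Mesh = {
  positions: [
    [0, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
  ],
  triangles: [
    [0, 1, 2], // first triangle
    [0, 2, 3], // second triangle, sharing an edge with the first
  ],
};
```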

[Image: 3D Art Pipeline_6]

In order to get the color onto these models, you must lay them out flat so that you can overlay an image onto them. This is called ‘unwrapping’. You can think of it like taking an orange rind and chopping it up into little pieces so it will lie flat.
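
One way to picture unwrapping in code: every vertex gets a second, 2D coordinate (usually called a UV) that says where it lands on the flattened image, in the 0–1 range. A hypothetical sketch of that mapping:

```typescript
type Vec3 = [number, number, number];
type Vec2 = [number, number];

// After unwrapping, each vertex knows both where it sits in 3D space
// and where it lands on the flat texture image (u and v, each 0..1).
interface UnwrappedVertex {
  position: Vec3; // location on the model
  uv: Vec2;       // location on the texture
}

// Convert a vertex's UV into the pixel of the texture it reads from.
function uvToPixel(v: UnwrappedVertex, imageWidth: number, imageHeight: number) {
  return {
    x: Math.floor(v.uv[0] * (imageWidth - 1)),
    y: Math.floor(v.uv[1] * (imageHeight - 1)),
  };
}
```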

[Image: 3D Art Pipeline_7]

Once it’s unwrapped, the character artist has to actually make the images (maps) that will be laid onto the model. There are also a handful of maps that go into defining the material of a model.

These are usually done in either a traditional raster image program, like Photoshop, or in a sculpting program like ZBrush or Mudbox. Often the two are mixed and matched.

[Image: 3D Art Pipeline_9]

Inside the sculpting programs, the artist doesn’t have to worry about things like polygon limits. They get to just get in there and make all the detail they want, molding millions of polygons. Then at the end, they tell the program to make an image that will make the low-poly starting model look like the high-res, beautiful one. That is an oversimplification, but it’s the basic idea.
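
The image the sculpting program spits out is typically a normal map: each pixel stores a surface direction from the high-poly sculpt, encoded as a color, so the low-poly model can pretend to have that detail when it’s lit. Very roughly (and skipping the ray casting a real bake does), the encoding is just this:

```typescript
type Vec3 = [number, number, number];

// Encode a unit-length surface normal (each component -1..1) into an
// RGB color (each component 0..255). This is why flat areas of a
// normal map look light blue: a normal pointing straight "out" of the
// surface encodes to roughly (128, 128, 255).
function normalToColor([x, y, z]: Vec3): Vec3 {
  return [
    Math.round((x * 0.5 + 0.5) * 255),
    Math.round((y * 0.5 + 0.5) * 255),
    Math.round((z * 0.5 + 0.5) * 255),
  ];
}

// At render time the shader does the reverse, turning the color back
// into a direction to use for lighting.
function colorToNormal([r, g, b]: Vec3): Vec3 {
  return [(r / 255) * 2 - 1, (g / 255) * 2 - 1, (b / 255) * 2 - 1];
}
```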

[Image: 3D Art Pipeline_10]

The character artist now has the textures they made from either sculpts or hand painting / photo chopping. The next step is to compile these textures together into a material. A material is an instance of a shader. A shader tells the computer how to display the polygons. Light affecting a model? The shader told it to. Does the character have any color, outline, or transparency? Shader. Did it move? That could be the shader, depending on how it’s implemented.

It’s good to note there are 2 types of shaders, vertex and fragment/pixel. Where artists are concerned, they are messing with pixel shaders. I’ll get into the difference in a bit.

So, the shader tells the computer how things should look, or rather, the rules to follow when determining what a thing should look like. Such as “look at textureA and multiply it by lighting for the color; look at textureB and modify it by valueC for how shiny this spot is.” The material is then the specific instance, the place where the artist gets to plug specific textures and values into those slots for textureA, textureB, and valueC.
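
If it helps to see the split in code, here is a hypothetical sketch (not any particular engine’s API, and the texture names are made up): the shader is the reusable rule with empty slots, and the material is one filled-in set of values for those slots.

```typescript
// Stand-in for reading a pixel out of an image file; assumed to exist
// just so the sketch type-checks. Not a real API.
declare function sampleTexture(path: string, u: number, v: number): { r: number; g: number; b: number };

// The "slots" this hypothetical shader exposes to the artist.
interface MyShaderParams {
  textureA: string; // base color map (a file path, for illustration)
  textureB: string; // shininess map
  valueC: number;   // how much to scale the shininess
}

// The shader: the rule. Given the slot values, the lighting, and a spot
// on the surface (u, v), decide the final color. A real one runs on the GPU.
function myShader(params: MyShaderParams, lighting: number, u: number, v: number) {
  const base = sampleTexture(params.textureA, u, v);
  const shine = sampleTexture(params.textureB, u, v).r * params.valueC; // one channel as "shininess"
  return {
    r: base.r * lighting + shine,
    g: base.g * lighting + shine,
    b: base.b * lighting + shine,
  };
}

// A material: one specific instance of that shader, with the slots filled in.
const rhinoSkinMaterial: MyShaderParams = {
  textureA: "rhino_color.png",
  textureB: "rhino_spec.png",
  valueC: 0.35,
};
```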

[Image: 3D Art Pipeline_11]

From left to right:
The same rhino with 3 different materials (probably different shaders too.)
A material view, showing primitive objects with different materials applied
A material editor view in 3DS Max

This is a pretty solid high-level overview of how a mesh gets turned into pixels:

[Image: 3D Art Pipeline_12]

So, the artist has a finished model; it’s all textured and made pretty. The next step is to make it move, right? Whoa, slow down there, buddy. We have to create the system which will allow it to move. This is called rigging. Just like the character artist had to lay the mesh out flat and assign the 3D vertices to a 2D space, we must again bind each vertex to an object so we can easily move them over time.

This is the job of the rigger. The rigger creates a system of joints (or bones) and other voodoo to define how the character can move. Then they bind these joints to control objects, further abstracting and distilling down the controls and the exposed API (options) that the animator has access to. Once the controls are set up, the rigger then binds the mesh to the joints, in a process called skinning.
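
Skinning boils down to this: every vertex stores which joints influence it and by how much, and its final position is a weighted blend of where each joint would carry it. A simplified sketch (real engines use 4x4 matrices and do this on the GPU; a plain function keeps it short):

```typescript
type Vec3 = [number, number, number];

// A joint's current effect, boiled down to a function that moves a point.
type JointTransform = (p: Vec3) => Vec3;

// Skinning data for one vertex: which joints influence it, and how much.
interface SkinnedVertex {
  position: Vec3;                                  // rest-pose position
  influences: { joint: number; weight: number }[]; // weights sum to 1
}

// Linear blend skinning: the deformed position is the weighted average
// of where each influencing joint would move the vertex.
function skinVertex(v: SkinnedVertex, joints: JointTransform[]): Vec3 {
  const out: Vec3 = [0, 0, 0];
  for (const { joint, weight } of v.influences) {
    const moved = joints[joint](v.position);
    out[0] += moved[0] * weight;
    out[1] += moved[1] * weight;
    out[2] += moved[2] * weight;
  }
  return out;
}
```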

[Image: 3D Art Pipeline_13]

Then it can be handed over to the animator to begin, well, animating.

Animation is the process of creating a set of images that will be seen sequentially, over time, to give the illusion of motion. Usually the animator tries to convey things through this motion. “This is a big and heavy robot, moving forward in time.”, “This person just got hit, really hard.”, “This person is sad, and really just wants someone to acknowledge their existence.” The last one doesn’t happen as much in games.

In 3D, the animator does this by setting keys on frames, a practice that comes from the traditional animation term ‘keyframe’. Keys define when an object should be at a specific place. It looks something like this:

[Image: 3D Art Pipeline_14]
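
Under the hood, a key is just “at this frame, this control has this value,” and the software fills in the frames between by interpolating. Here’s a minimal sketch using straight linear interpolation (real animation curves use easing and splines), with a made-up arm-lift example:

```typescript
// A key: at this frame, the animated value should be exactly this.
interface Key {
  frame: number;
  value: number; // e.g. a joint's rotation on one axis, in degrees
}

// Evaluate an animated value at any frame by linearly interpolating
// between the surrounding keys. Assumes keys are sorted by frame.
function evaluate(keys: Key[], frame: number): number {
  if (frame <= keys[0].frame) return keys[0].value;
  const last = keys[keys.length - 1];
  if (frame >= last.frame) return last.value;
  for (let i = 0; i < keys.length - 1; i++) {
    const a = keys[i];
    const b = keys[i + 1];
    if (frame >= a.frame && frame <= b.frame) {
      const t = (frame - a.frame) / (b.frame - a.frame);
      return a.value + (b.value - a.value) * t;
    }
  }
  return last.value;
}

// A simple "raise the arm over 24 frames, then lower it" curve.
const armLift: Key[] = [
  { frame: 0, value: 0 },
  { frame: 24, value: 90 },
  { frame: 48, value: 0 },
];
console.log(evaluate(armLift, 12)); // 45: halfway up
```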

After that it gets exported and is ready for use in the game engine.

A few notes:
If the process is done well, most of these steps can be done in parallel, which is really neat. Not so much the first part (drawing). But you can totally have a proxy model of roughly the right proportions get rigged with basic controls and passed on to the animator so they can all work in unison. Then updates just get passed downstream.

Characters often have effects associated with them: dust when they walk, fire when they fly with their jetpack, laser eyes, whatever it may be. This is done by a ‘technical artist’ and is a whole ‘nother beast we’ll get into at a later date. Maybe.

Anyways, I hope you found some good info in this high level overview, and will be kind to your artist in the future. They work very hard.
