This is part of a series on the basic elements of Visual Effects. Each post talks about one of the basic building blocks used for VFX shots.
In this sixth post I will talk about lighting and rendering.
A ray traced image with a directional light and a sky dome which is used for image based lighting. |
Lighting and rendering usually go together because lighting technical directors handle both of them, and certain render techniques directly influence how the scene is lit.
Rendering in a nutshell
Let's start this post with rendering, and more specifically with render engines. A render engine is a piece of software which translates scene data, like models, shaders and lights, into a final viewable image. These calculations can take anywhere from mere seconds for a simple scene to hours per frame for a complex one. Keep in mind that a movie in the theater runs at 24 frames per second, so you can imagine that movies packed with VFX would take years to render, if it was done by a single CPU that is. Tackling a large number of frames is solved by using many CPUs at once. A room full of computers set up for this purpose is called a render farm.
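To get a feel for the numbers, here is a quick back-of-the-envelope sketch in Python. The per-frame render time and the farm size are made-up values, purely for illustration:

```python
# Back-of-the-envelope render time estimate (hypothetical numbers).
FPS = 24
minutes_of_film = 90
hours_per_frame = 2          # assumed average render time per frame

total_frames = FPS * 60 * minutes_of_film          # 129,600 frames
single_cpu_hours = total_frames * hours_per_frame  # 259,200 CPU-hours

print(f"Frames to render: {total_frames}")
print(f"One CPU, nonstop: {single_cpu_hours / 24 / 365:.1f} years")

farm_size = 1000  # machines rendering in parallel
print(f"Farm of {farm_size} machines: {single_cpu_hours / farm_size / 24:.1f} days")
```

Even with these modest assumptions, a single machine would grind away for roughly three decades, which is exactly why render farms exist.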
The render engine can be seen as a separate entity and is not really part of the animation and modeling package, although it might seem so. Most packages come with a built-in renderer, but external render engines are usually chosen when working on bigger projects. Photorealistic RenderMan, 3Delight, Mental Ray and V-Ray are only a few examples. There are plenty of good renderers out there; just choose the one which works for you.
The above diagram shows how software packages like Maya and 3D Studio Max talk to the rendering packages with the help of a translator. Each render engine has its own "language", so the translator is provided by the render engine.
There is one family of renderers which speaks the same language: those that follow the RenderMan standard. This standard was created by Pixar, and for a long time Pixar was the only company with a RenderMan compliant renderer. Now other commercial renderers are available. The scene file from the 3D program gets translated to a RIB file, which any RenderMan compliant renderer should be able to read and render out. This open standard is extremely powerful and flexible. It allows for very complex render pipelines, so it is mostly used in VFX for film and not so much for small projects.
Render algorithms
Every renderer has its own algorithm, but there are two distinct approaches to calculating an image: scanline rendering and ray tracing. I am explaining shadow creation here as well, although it could also fit in the lighting section below.
Scanline rendering is a technique which sorts the geometry according to depth and then renders one row at a time. It is very efficient, as it discards invisible geometry and therefore limits calculations. Shadow calculations are usually done through shadow mapping. Shadow maps are depth maps which can be stored in a file and reused. High resolution depth maps can be expensive to calculate, but their reusability in certain circumstances makes up for that. The technique handles large amounts of geometry rather well. Although it has been very popular and is used for example by PRMan, it is being replaced more and more by ray tracing.
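At its core, a shadow map test is just a depth comparison. This hypothetical Python helper (not any renderer's real API) sketches the idea: a point is in shadow if the map already recorded something closer to the light along that direction:

```python
def shadow_map_lookup(shadow_map, depth_from_light, u, v, bias=0.001):
    """Classic shadow-map test: a point is lit if its depth, as seen
    from the light, is not farther than the depth stored in the map.
    The small bias avoids false self-shadowing ("shadow acne")."""
    stored_depth = shadow_map[v][u]  # depth of the closest occluder
    return depth_from_light <= stored_depth + bias

# Tiny 2x2 depth map (lighter grey = smaller depth = closer to the light).
shadow_map = [[5.0, 5.0],
              [2.0, 5.0]]

print(shadow_map_lookup(shadow_map, 4.9, 0, 0))  # True: nothing in front
print(shadow_map_lookup(shadow_map, 4.0, 0, 1))  # False: occluder at depth 2.0
```

The pixelated edges in the renders below come straight from this lookup: each map texel gives a hard yes/no answer, so a low resolution map produces blocky shadow boundaries.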
Very simple example of shadows made with a shadow map. This one does not have a high enough resolution for a nice shadow. The edges are pixelated. |
A new render with a higher resolution shadow map. The edges aren't pixelated anymore. |
This is what a shadow map looks like. It is a depth map where the lighter grey is closer to the light's camera than the darker grey. |
Ray tracing is a technique which calculates one pixel at a time. It shoots a ray from the camera to the objects, which bounces off to the lights present in the scene. Shadows are an automatic result of this technique and therefore easy to make, but they can become rather expensive when soft shadows are needed. This increases the rays per pixel and directly affects the render times. The big benefit is that true reflections and refractions are possible. Mental Ray and V-Ray belong to this category.
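Here is a minimal sketch of the idea in Python, with a single hard-coded sphere and point light (all values are made up for illustration). The primary ray finds the surface, and a secondary shadow ray checks whether anything blocks the path to the light, which is why shadows come "for free":

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive intersection distance t along the ray, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)):
        if t > 1e-4:  # small epsilon avoids self-intersection at t ~ 0
            return t
    return None

sphere_center, sphere_radius = (0.0, 0.0, -5.0), 1.0
light_pos = (0.0, 5.0, 0.0)

# Primary ray from the camera through a pixel, straight down -z.
t = hit_sphere((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), sphere_center, sphere_radius)
print("primary hit at t =", t)  # 4.0: sphere front at z = -4

# Shadow ray: from the hit point toward the light. Any hit along the
# way means the point is in shadow.
hit_point = (0.0, 0.0, -t)
to_light = tuple(light_pos[i] - hit_point[i] for i in range(3))
occluded = hit_sphere(hit_point, to_light, sphere_center, sphere_radius)
print("in shadow:", occluded is not None)  # False: the point is lit
```

A real renderer repeats this for millions of pixels, many lights and many bounces, which is where the render times come from.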
Ray traced shadows. These are sharp clean shadows. |
An attempt to get softer shadows, but not enough samples are used, so it looks grainy and pixelated. |
This render uses more samples than the previous image and therefore has a much smoother result, but render times have gone up considerably: from 7 seconds to 23 seconds. |
Render times in both techniques are influenced by the objects in the scene and the complexity of the shaders and lights used. It is imperative to keep render times under control when running a production. Render algorithms can also be combined: if the render engine allows it, you can use a scanline renderer and activate ray traced shadows.
There are more subcategories like global illumination and radiosity but that is a bit too specialized for this article.
Lighting
A 3D scene without light would turn out black, just like in the real world. The techniques for lighting a scene are pretty much like lighting a live action set. There are some differences, though. It is impossible to subtract light (and I do not mean block it) in the real world, whereas in the digital world it is just a matter of mathematics. Shadows are another difference. It is possible to change their color and softness without affecting the light itself, or even to turn them off completely. This gives a huge amount of flexibility, but be careful when lighting a photorealistic scene. Using these tricks usually makes the scene look less real.
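Subtracting light really is just arithmetic. In this toy sketch (hypothetical intensity values, not any package's actual light model), a light with negative intensity darkens a region, something no real lamp can do:

```python
# Digital lights are just terms in a sum, so a "negative light"
# subtracts illumination. All intensity values here are made up.
key_light = 0.8
fill_light = 0.4
negative_light = -0.3  # darkens a region without physically blocking anything

pixel_brightness = key_light + fill_light + negative_light
print(round(pixel_brightness, 2))  # 0.9
```

On a real set the only option would be flagging (blocking) the light, which is a different, much more limited operation.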
Classic lights
Lights in a 3D scene are controlled by light shaders, just like surfaces are controlled by surface shaders. Most of the time you do not have to assign a light shader to your light, though, as most packages do this automatically. A light shader is a little program which defines the properties of the light. Light shaders are also render engine specific. All 3D packages have a ready-to-use set of lights available. Here are the most important examples.
- Point light: light emitting from a single point, like a light bulb.
- Spot light: a light like a regular spot on a set. It can be controlled by barn doors and by adjusting its diameter.
- Directional light: a light source with parallel light beams. It is well suited for simulating the sun, for example.
- Area light: a light which emanates from a surface. Good examples are Kino Flo lamps and light coming through a window into a room.
Note that area lights are a special case for shadows. The light is bigger than a point and will therefore always give soft shadows. Soft shadows are nice, but they are always a bit more expensive to calculate, especially when you do not want them to look too grainy.
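Soft shadows from an area light boil down to sampling: shoot shadow rays toward many points on the light's surface and average the results. This hypothetical sketch (the occluder is made up) shows why few samples look grainy: the visibility estimate is noisy until enough samples are averaged:

```python
import random

def soft_shadow(occlusion_test, light_samples):
    """Fraction of the area light visible from a point:
    0.0 = fully in shadow, 1.0 = fully lit.
    More samples -> a smoother, less grainy penumbra."""
    visible = sum(0 if occlusion_test(s) else 1 for s in light_samples)
    return visible / len(light_samples)

def occluded(sample):
    # Hypothetical occluder: it blocks the left half of the light.
    x, y = sample
    return x < 0.5

random.seed(0)
for n in (4, 256):
    samples = [(random.random(), random.random()) for _ in range(n)]
    print(n, "samples ->", round(soft_shadow(occluded, samples), 2))
```

With half of the light blocked, the true visibility is 0.5; the 4-sample estimate can land far from that, which shows up in the render as grain.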
Newer techniques
When lighting CGI which needs to be incorporated into live action, another technique can be used if your renderer allows it: image based lighting, or IBL for short. Instead of lighting the scene completely yourself, it is possible to use the lighting which was present on the set. This is done by taking a high dynamic range (or HDR) panorama photograph. HDR images contain much more information than a regular photograph. Hot spots in the image are not clipped and the blacks are not crushed.
A panoramic image of a kitchen. This is just a representation of an HDR image; real HDR images cannot be displayed on the web. |
The renderer uses this HDR image as a basis to light the scene, which can give highly realistic results. There are two main techniques for making these photographs. The cheap way is to use a mirror ball. It works well for capturing the lighting, but has certain limits when the image is used for crisp reflections on the CGI objects. The more expensive way is to use a fisheye lens.
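To use the panorama as a light source, the renderer has to turn a 3D direction into a pixel of the image. This simplified sketch (a tiny made-up "environment map", not a real HDR file format) shows the standard lat-long mapping, and why unclipped values above 1.0 are the whole point of HDR:

```python
import math

def latlong_lookup(env, direction):
    """Map a 3D direction to (u, v) in an equirectangular (lat-long)
    panorama and return the stored radiance. 'env' is rows of pixels."""
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude -> [0, 1)
    v = math.acos(y) / math.pi                     # latitude  -> [0, 1]
    h, w = len(env), len(env[0])
    return env[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

# Toy 2x4 "HDR" map: the top row is a bright sky (values above 1.0 are
# not clipped, unlike a regular photo), the bottom row a dark floor.
env = [[6.0, 6.0, 6.0, 6.0],      # upper hemisphere
       [0.05, 0.05, 0.05, 0.05]]  # lower hemisphere

print(latlong_lookup(env, (0.0, 1.0, 0.0)))   # straight up   -> 6.0
print(latlong_lookup(env, (0.0, -1.0, 0.0)))  # straight down -> 0.05
```

A renderer repeats lookups like this for many directions around each shading point, so the unclipped hot spots in the HDR image really do act like bright lights on the set.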
If you're interested in good lighting make sure you read some books on cinematography. There is much to learn from the real thing. Understanding color theory and how light behaves is very important.
This concludes the sixth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.