Some say the new 3DS from Nintendo will feature per-pixel lighting. Sounds cool! But what does that mean? Explaining per-pixel lighting requires a bit of background information…
How Rendering Works
When a graphics processing unit (GPU) paints a 3D scene, it typically works with triangles that were authored by a 3D artist and then pushed through a math-intensive pipeline that projects each triangle's three corner points onto your screen. Those three points (vertices) carry information the GPU uses to shade the face of the triangle.
By the way, when you see GPU stats about fill rate, that's the number of pixels in triangle faces the GPU can shade per second. When you're drawing a complex game scene, painting each pixel just once is not sufficient: you need extra bandwidth to draw shadows, reflections, layers of transparency, and so on. So that raises the question: how many times per frame can we paint the 3DS's stereo screen?
• DMP’s PICA200 spec sheet claims a fill rate of 800 million pixels per second
• The 3DS stereo screen is 400×240 pixels per eye: 400 × 240 × 2 = 192k pixels, plus the non-stereo screen of 320×240 = 76.8k pixels, for a total of 268.8k pixels per frame
• And let’s say we want to draw at 60 frames per second
800M pixels per second / 268.8k pixels per frame / 60 frames per second ≈ 49.6… so it's got enough horsepower to repaint the whole stereo screen and the non-stereo screen about 49 times per frame.
To put that number in context, how does it compare to a PS3 or Xbox 360? Published figures put their fill rates around 4 gigapixels per second. Repeat the math above with 4 gigapixels and a 1920×1080 (1080p) frame buffer and you get 32. Repeat it again with a 1280×720 (720p) frame buffer and you get 72.
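The overdraw arithmetic above can be sketched in a few lines. The fill rates are the spec-sheet claims quoted in this post, not benchmarks, and the function name is my own:

```python
# Overdraw budget: how many times per frame a given fill rate can
# repaint a given number of pixels. Fill rates are spec-sheet claims.

def overdraw_per_frame(fill_rate, pixels_per_frame, fps=60):
    """Full-screen repaints available each frame at the given frame rate."""
    return fill_rate / pixels_per_frame / fps

# Nintendo 3DS: stereo top screen (400x240 per eye) plus 320x240 bottom screen.
pixels_3ds = 400 * 240 * 2 + 320 * 240           # 268,800 pixels
print(overdraw_per_frame(800e6, pixels_3ds))     # ~49.6

# PS3 / Xbox 360 at 1080p and 720p, assuming a 4 gigapixel fill rate.
print(overdraw_per_frame(4e9, 1920 * 1080))      # ~32.2
print(overdraw_per_frame(4e9, 1280 * 720))       # ~72.3
```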
So, the 3DS is in good company… especially since its 49 is based on stereo rendering and that 720p 72 is not.
Back to rendering…
What’s in a vertex?
A vertex can contain…
• Position – where
• Color – what color
• Texture Coordinates – if there’s a texture image to be applied, what part of the texture
• Normal Vector – which direction is “up,” i.e. perpendicular to and pointing away from the surface of the triangle mesh
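As a minimal sketch, a vertex is just a record holding those attributes. The field names here are my own; real GPU vertex formats are packed binary layouts, but the idea is the same:

```python
from dataclasses import dataclass

# A hypothetical vertex record carrying the attributes listed above.
@dataclass
class Vertex:
    position: tuple   # (x, y, z) location in 3D space
    color: tuple      # (r, g, b) vertex color
    uv: tuple         # (u, v) texture coordinates
    normal: tuple     # (nx, ny, nz) unit vector pointing away from the surface

v = Vertex(position=(0.0, 1.0, 0.0),
           color=(1.0, 1.0, 1.0),
           uv=(0.5, 0.0),
           normal=(0.0, 1.0, 0.0))
```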
Why does that matter?
Two of these vertex components are used for lighting: the position and the normal vector. These two components describe the triangle’s surface at the vertex, and thus can be used to determine how a light source affects that surface.
Old-school rendering hardware computes the lighting at each vertex and then interpolates the results to create a smooth gradient across the face of the triangle (this is known as Gouraud shading). It looks pretty good. But someone had a better idea…
Lighting Each Pixel
As GPU computational power increased, someone realized that they could afford to do the lighting calculation not just at the vertices of a triangle, but at each pixel that gets painted. This produces much more realistic results than interpolating, and opens the door to a really cool feature… normal mapping.
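To see why lighting per pixel beats interpolating, compare the two on the same triangle: per-vertex lighting interpolates already-lit intensities, while per-pixel lighting interpolates the normals, re-normalizes at the pixel, and only then evaluates the light. The vectors here are illustrative, not real mesh data:

```python
import math

# Per-vertex vs per-pixel lighting at the centroid of one triangle.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def lerp3(vectors, weights):
    """Barycentric blend of three 3D vectors."""
    return tuple(sum(v[i] * w for v, w in zip(vectors, weights))
                 for i in range(3))

light = (0.0, 0.0, 1.0)
normals = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
weights = (1/3, 1/3, 1/3)                      # the centroid pixel

# Per-vertex: light first, interpolate the intensities afterwards.
per_vertex = sum(max(0.0, dot(n, light)) * w for n, w in zip(normals, weights))

# Per-pixel: interpolate the normal, re-normalize, then light this pixel.
pixel_normal = normalize(lerp3(normals, weights))
per_pixel = max(0.0, dot(pixel_normal, light))

print(per_vertex, per_pixel)   # ~0.333 vs ~0.577: noticeably different shading
```

Same triangle, same light, but the per-pixel result is brighter at the centroid because the lighting equation is nonlinear: averaging lit colors is not the same as lighting the averaged surface direction.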
Remember that normal vector in the vertex that points up and away from the mesh surface? If we encode those vectors in a texture map, we can sample the map per pixel and describe a much more detailed surface. A normal vector has X, Y and Z components that indicate the up direction. A color texture has red, green and blue color components. By storing the X in the red, the Y in the green, and the Z in the blue, the texture stores surface orientation instead of an image.
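A quick sketch of that packing: normal components lie in [-1, 1], but 8-bit color channels hold [0, 255], so the usual trick is to remap with n × 0.5 + 0.5 before quantizing (the function names are mine):

```python
# Packing a surface normal into an 8-bit RGB texel, and unpacking it.

def encode_normal(n):
    """(x, y, z) in [-1, 1]  ->  (r, g, b) bytes in [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """(r, g, b) bytes  ->  approximate (x, y, z) in [-1, 1]."""
    return tuple(c / 255 * 2.0 - 1.0 for c in rgb)

texel = encode_normal((0.0, 0.0, 1.0))   # a normal pointing straight out
print(texel)                             # (128, 128, 255)
print(decode_normal(texel))              # close to (0, 0, 1), within 8-bit error
```

That remapping is why typical normal maps look mostly light blue: a flat surface encodes to roughly (128, 128, 255).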
Now the GPU shading hardware can extract the surface normal from the texture instead of from the vertices, and voilà… incredible surface detail without an insanely dense triangle mesh.
This kind of computational ability per pixel opens the door to other rendering features as well, including different lighting models, refraction and environment reflections.
So, the GPU in the Nintendo 3DS appears to be quite the power-house, at least by the specs. I look forward to seeing what we game developers can do with this little box!