A study of NVIDIA's GPUs, including the Maxwell GeForce GTX 980 and the Pascal GeForce GTX 1080, has revealed that NVIDIA's Maxwell architecture implements tile-based rasterization to achieve high power efficiency and improved performance.
With these GPUs, NVIDIA delivered a huge performance-per-watt boost for only a modest increase in die size, giving a full generation's worth of performance gains without a corresponding improvement in the manufacturing process. However, the company has always avoided revealing details about its fixed-function graphics hardware, at times even denying the implementation.
Tech expert David Kanter pointed out that tile-based rasterization has been around since the 1990s, first appearing in the PowerVR architecture and later adopted by ARM and Qualcomm for the GPUs in their mobile processors. Until Nvidia introduced the technique in its Maxwell GM20x chips, however, it had never been effectively implemented in desktop graphics chips or cards.
'Tile-based rasterization' means that each triangle-based, three-dimensional scene is divided into tiles, and each tile is rasterized, or broken down into pixels, on the graphics chip itself before being drawn on a two-dimensional (2D) screen. Full-screen immediate-mode rasterizers consume extra memory and additional power by converting the entire scene into pixels in a single pass.
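To make the distinction concrete, here is a minimal sketch in Python of the tiled approach; it illustrates the general technique, not NVIDIA's actual hardware pipeline. The screen and tile sizes are assumed values, and coverage testing uses standard edge functions. The rasterizer walks the screen one tile at a time, keeps all pixel work for that tile in a small local buffer, and writes the finished tile out in one step.

```python
# A minimal tile-based rasterizer sketch (illustrative only, not NVIDIA's
# design). Screen and tile dimensions are assumptions for the example.

SCREEN_W, SCREEN_H = 640, 480
TILE_W, TILE_H = 16, 16  # hypothetical tile dimensions

def covers(tri, x, y):
    """Point-in-triangle test via edge functions (cross-product signs)."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
    e1 = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
    e2 = (x0 - x2) * (y - y2) - (y0 - y2) * (x - x2)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def rasterize_tiled(triangles, framebuffer):
    """Rasterize one tile at a time through a small 'on-chip' buffer."""
    for ty in range(0, SCREEN_H, TILE_H):
        for tx in range(0, SCREEN_W, TILE_W):
            tile = {}  # stands in for the fixed-size on-chip buffer
            for tri in triangles:
                for y in range(ty, min(ty + TILE_H, SCREEN_H)):
                    for x in range(tx, min(tx + TILE_W, SCREEN_W)):
                        if covers(tri, x + 0.5, y + 0.5):
                            tile[(x, y)] = (255, 255, 255)  # flat shading
            framebuffer.update(tile)  # one flush to 'DRAM' per tile

framebuffer = {}
rasterize_tiled([((100, 100), (300, 120), (200, 400))], framebuffer)
```

An immediate-mode rasterizer, by contrast, would process each triangle across the whole screen as it arrives, reading and writing the full framebuffer in external memory rather than one small tile at a time.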
Kanter also explained that using tiled regions and buffering the rasterizer's output on chip reduces the memory bandwidth needed for rendering, which in turn improves performance and power efficiency.
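A back-of-envelope calculation shows why this saves bandwidth; the overdraw factor and pixel format below are assumed values chosen for illustration, not measured figures.

```python
# Rough illustration (assumed numbers) of the bandwidth saving: with
# overdraw, an immediate-mode pipeline may read and write each pixel in
# DRAM several times, while a tiled pipeline resolves overdraw on chip
# and writes each pixel out once at tile flush.

pixels       = 1920 * 1080
bytes_per_px = 4      # RGBA8 color; depth traffic ignored for brevity
overdraw     = 3.0    # assumed average shaded samples per pixel

immediate_bytes = pixels * bytes_per_px * overdraw * 2  # read + write per blend
tiled_bytes     = pixels * bytes_per_px                 # single write per pixel

print(f"immediate-mode traffic ~ {immediate_bytes / 2**20:.0f} MiB/frame")
print(f"tile-based traffic     ~ {tiled_bytes / 2**20:.0f} MiB/frame")
```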
"Regular with this hypothesis, the testing demonstrates that Nvidia GPUs change the tile size to make sure that the pixel output fits within a fixed size on-chip buffer or cache," he said.
For more details, check out the video below: