
Evolution of Nvidia GPUs in the Last 20 Years, From the First to the Latest

Nvidia GPUs have been with us for as long as most of us can remember. But how did it all start, and where did it go from there? If that is something you are interested in, then that is what we are going to cover here.

Nvidia NV1

Nvidia was founded back in 1993, but the company did not ship its first product until 1995, the NV1. The GPU was very innovative for its time and could handle both 2D and 3D graphics on a single card. The NV1 also shared its quadrilateral-based rendering approach with the Sega Saturn game console, which allowed several Saturn titles to be ported to it. The NV1 used the PCI interface with 133 MB/s of bandwidth, carried EDO memory clocked at up to 75 MHz, and supported a resolution of 1600×1200 with 16-bit color.

After the NV1, the company started working on a successor, the NV2. There were a few disagreements with Sega, because of which Sega took a different route and the NV2 was canceled.

Nvidia NV3 / Riva 128

The Nvidia NV3 came out in 1997 and turned out to be far more successful than the previous Nvidia GPU. It switched from quadrilaterals to triangle-based polygons, which made it much easier for games to support the Riva 128. While the new Nvidia GPU could render frames much faster, the trade-off was lower image quality.

So you weren't really getting the best of both worlds. That was acceptable, though, because this was only the second GPU that Nvidia had made, and there was plenty of room for improvement in the generations that would follow.

The Nvidia NV3 GPUs were available in two main variants: the Riva 128 and the Riva 128ZX. The Riva 128ZX graphics cards used higher-quality chips that enabled the company to increase the RAMDAC frequency.

Both variants used SDRAM memory clocked at 100 MHz over a 128-bit bus, which translated to 1.6 GB/s of bandwidth. The Riva 128ZX also came with double the memory of the standard version: where the standard card had 4 MB of VRAM, the ZX variant had 8 MB.

While this might seem like very little, keep in mind that this was 1997, and that much VRAM was a lot back in the day.
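As a quick sanity check on that 1.6 GB/s figure, peak memory bandwidth is just the clock rate times the bus width in bytes (times the transfers per clock for double-pumped memory like DDR). A minimal sketch of that arithmetic in Python:

```python
def peak_bandwidth_gbs(clock_mhz, bus_width_bits, transfers_per_clock=1):
    """Peak memory bandwidth in GB/s: clock x transfers/clock x bytes/transfer."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# Riva 128: 100 MHz SDRAM on a 128-bit bus -> 1.6 GB/s, matching the article.
print(peak_bandwidth_gbs(100, 128))    # 1.6

# NV1's 33 MHz, 32-bit PCI bus -> ~0.133 GB/s, i.e. the 133 MB/s quoted earlier.
print(peak_bandwidth_gbs(33.33, 32))   # ~0.133
```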

Nvidia NV4 / Riva TNT

The next year, 1998, the NV4 came out, and it proved very popular indeed. This card, sold as the Riva TNT, changed the game for the company in the graphics card world. When the NV4 launched, 3dfx's Voodoo2 was the performance king, but it was expensive and needed an additional 2D card.

The NV4 Nvidia GPUs did not have that limitation because they could render both 2D and 3D. This made them budget-friendly, even if they did not provide the best performance on the market at that time. Driver optimizations later made them even more competitive.

Nvidia NV5 / Riva TNT2

In 1999, Nvidia launched the NV5, sold as the Riva TNT2, in another attempt to claim the performance crown. The card delivered around 17% better performance than its predecessor, even though both cards were based on the same architecture. The NV5 came with 32 MB of RAM, double that of the previous card, and the transition to the 250nm process allowed it to be clocked at 175 MHz.

Nvidia's main competition was the 3dfx Voodoo3. Both graphics cards were roughly on par with one another, and even after a few years of head-to-head competition there was no clear winner.

NV10 / GeForce 256

Cards that came before the GeForce 256 were called graphics accelerators or video cards, but the GeForce 256 was marketed as a GPU. Nvidia introduced some new features with this card, most notably hardware transform and lighting (T&L). This allowed the graphics card to perform calculations that were traditionally handed to the CPU. The GeForce 256 was about five times faster at this than the top CPU of the time, the Pentium III.
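To make the T&L idea concrete, here is a minimal sketch of the per-vertex work the GeForce 256 took off the CPU. The matrix and vertex values are illustrative, not from the article:

```python
# What hardware T&L offloads: every vertex gets multiplied by a 4x4
# transform matrix (model-view-projection) before rasterization. Before
# the GeForce 256, the CPU ran this loop for every vertex of every frame.

def transform_vertex(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vertex."""
    return [sum(m[row][k] * v[k] for k in range(4)) for row in range(4)]

# Identity transform as a trivial check: the vertex comes back unchanged.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(transform_vertex(identity, [1.0, 2.0, 3.0, 1.0]))  # [1.0, 2.0, 3.0, 1.0]
```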

The GPU used the 220nm process and performed around 50% better than the competition. It was also the first graphics card available with 32 to 64 MB of DDR SDRAM. The core ran at 120 MHz and the memory between 150 MHz and 166 MHz.

GeForce2: NV11, NV15, NV16

Nvidia GeForce2 graphics cards were based on the same architecture as their predecessors, but Nvidia was able to double the TMUs per pipeline by moving to the 180nm process. The NV11, NV15, and NV16 cores powered the GeForce2 lineup, with slight differences between them: the NV11 featured two pixel pipelines, while the NV15 and NV16 had four, and the NV16 ran at higher clock rates than the NV15.
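Doubling the TMUs mattered because of how fillrate works on these fixed-function cards: pixel fillrate is pipelines times core clock, while texel fillrate also scales with TMUs per pipeline. A rough sketch (the GeForce2 GTS core clock used here is my assumption, not a figure from the article):

```python
def fillrates_mps(pipelines, tmus_per_pipe, clock_mhz):
    """Return peak (megapixels/s, megatexels/s) for a fixed-function GPU."""
    pixel_rate = pipelines * clock_mhz
    texel_rate = pipelines * tmus_per_pipe * clock_mhz
    return pixel_rate, texel_rate

# NV15 (GeForce2 GTS): four pipelines with two TMUs each; the 200 MHz core
# clock is assumed for illustration.
print(fillrates_mps(4, 2, 200))  # (800, 1600) -> 0.8 Gpixel/s, 1.6 Gtexel/s
```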

NV20 / GeForce3

This was the first graphics card from Nvidia to be DX8 compatible. The core was built on the 150nm process, featured 60 million transistors, and ran at 250 MHz. It was also the first Nvidia graphics card to feature the Lightspeed Memory Architecture.

The GeForce3 was also designed to accelerate FSAA using a special Quincunx algorithm. It performed better than the previous generation, but it was complex to manufacture, so the card was expensive. A tweaked version of the GeForce3 later made its way into the original Xbox in 2001.

GeForce4

These graphics cards came out in 2002, and there was a whole array of them. At the entry level sat the NV17, which was essentially the NV11 GeForce2 shrunk down to the 150nm process, making it cheaper to produce. Its clock speed was between 250 MHz and 300 MHz.

Later on, two revisions of the NV17 were released: the NV18, which came with an upgraded AGP 8X bus, and the NV19, which added a PCIe bridge to support x16 links. The memory on these chips was clocked between 166 MHz and 667 MHz.

At the high end, Nvidia released GPUs like the NV25. These featured four pixel pipelines, eight TMUs, and four ROPs, which translated into better performance than both the previous generation and the entry-level cards.

The NV25 contained 63 million transistors, 3 million more than the GeForce3. The GeForce4 NV25 held a clock speed advantage over the GeForce3 as well, and its 128 MB of DDR memory was clocked between 500 and 650 MHz.

NV30 / FX 5000

In 2002, Microsoft released DX9, which would be a very popular API for the next couple of years, and both ATI and Nvidia rushed to get supporting hardware to market. ATI got there first; Nvidia's answer, the FX 5000 series, was announced late in 2002. While the Nvidia graphics cards came a while later, they did bring some additional features that set them apart, including support for Shader Model 2.0a.

Even though the NV30 was the original FX 5000 flagship, a few months after its release Nvidia put out another flagship with an extra vertex shader, faster DDR memory, and a 256-bit bus. Despite the new features, the card still lagged behind the ATI model that was out at the time.

Adding insult to injury, the graphics card ran very hot, which forced OEM partners to use beefy coolers that added to its cost.

NV40: Nvidia GeForce 6800

A year later, the Nvidia GeForce 6800 was released, packing 222 million transistors, 16 superscalar pixel pipelines, six vertex shaders, Pixel Shader 3.0 support, and 32-bit floating-point precision. The 6800 featured 512 MB of GDDR3 memory over a 256-bit bus. The 6000 series was very successful, as it delivered double the FX 5950's performance in some games. And while it was powerful, it was also efficient.

NV43: The GeForce 6600

After capturing the high-end market, Nvidia moved back to mid-range products, and that is where the 6600 came into play. The NV43 came with half the resources of the NV40 and only a 128-bit bus, but it was shrunk down to the 110nm process, which is one of the reasons it was so inexpensive to produce.

While its clock speed was about 20% higher than the high-end version's, its energy consumption was lower as well. That is what made it such a good buy.

GeForce 7800 GTX

The 7800 Nvidia GPU replaced the 6800. While it was based on the same 110nm process, it featured 24 TMUs, 8 vertex shaders, and 16 ROPs. Its 256 MB of GDDR3 could be clocked up to 1.2 GHz over a 256-bit bus, and the core itself ran at 430 MHz.

While the 7800 was already doing pretty well, Nvidia managed to improve it further with the GeForce 7800 GTX 512. A new cooler design allowed Nvidia to increase the core clock to 550 MHz, memory latency was reduced, and memory capacity was doubled to 512 MB, hence the name.

GeForce 8000 Series

Nvidia introduced the Tesla architecture with the 8000 series. The architecture went on to power multiple generations, including the GeForce 8000, 9000, 100, 200, and 300 series. The flagship of the 8000 series was the 8800, based on the G80 GPU, a 90nm chip with 681 million transistors.

The 8800, along with the other cards in the series, supported Microsoft's then-new DX10 API. The flagship came with 128 shaders clocked at 575 MHz, 768 MB of GDDR3, and a 384-bit bus. TMUs were increased to 64 and the ROP count went up to 24. Later on, the 8800 was superseded by the 8800 Ultra, which had similar specifications but a higher core clock of 612 MHz.

GeForce 9000 Series

The 9000 series Nvidia GPUs also used the Tesla architecture, but the chips were shrunk down to the 65nm process. This allowed them to reach clock speeds of 600 to 675 MHz while reducing power consumption. The reduced heat output and lower power draw allowed Nvidia to release a dual-GPU graphics card, and the GeForce 9800 GX2 became the flagship of the series.

While the GeForce 9800 GX2 delivered up to 40% better performance than the 8800, it was also very expensive, and that is where it hurt Nvidia. To counter the price issue, Nvidia released the single-GPU 9800, but it was held back by its memory. Soon after, the 9800+ was introduced, which had higher clock speeds than the previous model and also offered 1 GB of memory, which was ahead of its time.

The 9000 series was later rebranded as the 100 series; these were OEM-exclusive cards, meaning consumers could not buy them directly from retailers.

GeForce 200 Series

An improved revision of Tesla arrived with the 200 series Nvidia GPUs in 2008. The improvements included a better scheduler and instruction set, a wider memory interface, and an altered core ratio. The GT200 used 10 TPCs, each with 24 shader cores and 8 TMUs, and Nvidia doubled the ROP count to 32.

The Nvidia GeForce GTX 280 was significantly more powerful than the previous generation thanks to all those additional resources. Later on, the dual-GPU GTX 295 was also released. These graphics cards were later rebranded as the 300 series and were again exclusive to OEMs, built on the 40nm process and the revised Tesla architecture.

GeForce 400

The 400 series Nvidia GPUs came out in 2010, based on the Fermi architecture. The GF100 was the top-of-the-line chip, featuring 4 GPCs. Each GPC comprised four Streaming Multiprocessors (SMs), each with 32 CUDA cores, 4 TMUs, 3 ROPs, and a PolyMorph engine. All of that adds up to 512 CUDA cores, 64 TMUs, 48 ROPs, and 16 PolyMorph engines.
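Those totals follow directly from the hierarchy: 4 GPCs of 4 SMs gives 16 SMs, and each per-SM count multiplies out from there. A quick sketch of that arithmetic:

```python
# GF100 unit counts, multiplied out from the per-SM figures above.
gpcs, sms_per_gpc = 4, 4
per_sm = {"CUDA cores": 32, "TMUs": 4, "ROPs": 3, "PolyMorph engines": 1}

total_sms = gpcs * sms_per_gpc  # 16 SMs in the full chip
for unit, count in per_sm.items():
    print(f"{unit}: {total_sms * count}")
# -> CUDA cores: 512, TMUs: 64, ROPs: 48, PolyMorph engines: 16
# Note: on real Fermi silicon the ROPs hang off the memory partitions rather
# than the SMs; "3 per SM" is just the average implied by the totals.
```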

The top-of-the-line GTX 480 ran very hot, and the beefy cooling solutions needed to counter the heat made the card produce a lot of noise; it is still remembered as one of the loudest graphics cards of old. To reduce production costs and increase yields, smaller Fermi-based Nvidia GPUs were introduced. The GTX 460 was powered by the GF104, which offered two GPCs with four SMs each, every SM packing 48 CUDA cores and 8 TMUs.

GeForce 500 Series

The 500 series still used the Fermi architecture, but Nvidia was able to improve it by reworking the design at the transistor level. This gave the new GPUs lower power consumption and higher performance; all told, the GTX 580 was faster than the GTX 480.

600 Series

The 680 was the graphics card that replaced the 580, and it was based on the Kepler architecture, marking Nvidia's move to the 28nm process. The shift allowed the card to feature twice as many TMUs and three times as many CUDA cores as its predecessor. That did not triple performance, but gains of up to 30% were recorded.

The 700 series was based on the same architecture, using a larger die originally aimed at supercomputers. That die featured 2880 CUDA cores and 240 TMUs, and the first Nvidia Titan was based on it as well. The Titan came with 2688 CUDA cores, 224 TMUs, and 6 GB of RAM; all things considered, it was very expensive at $1,000.

The GTX 780 came with 3 GB of RAM and was more affordable than the Titan, and Nvidia later released the GTX 780 Ti as well. The Ti version came with the full 2880 CUDA cores and 240 TMUs.

GTX 900 Series

The Maxwell architecture was introduced in 2014 with power consumption as a major focus. The GM204 chip powered the top-of-the-line GTX 980, featuring 2048 CUDA cores, 128 TMUs, 64 ROPs, and 16 PolyMorph engines. While the new chip was only around 6% faster than the 780 Ti, its major selling point was energy consumption that was a full 33% lower.

Nvidia later released the GM200 chip, which powered the GTX 980 Ti. It came with 2816 CUDA cores, but it was not as efficient as its little brother, the GM204.

GTX 1000 Series

In 2016, Nvidia moved to the 16nm process and released the new Pascal architecture, on which modern graphics cards are still based; if you have a 10 series GPU, you are using Pascal. The GeForce GTX 1080 features 7.2 billion transistors, with 2560 CUDA cores, 160 TMUs, 64 ROPs, and 20 PolyMorph engines, making it significantly more powerful than the previous generation GTX 980.
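Those 2560 CUDA cores translate into raw compute via a simple rule of thumb: peak FP32 throughput is cores times two operations per clock (a fused multiply-add) times the clock speed. A sketch, with the GTX 1080's boost clock as my assumption since the article does not list Pascal clocks:

```python
def peak_fp32_tflops(cuda_cores, clock_mhz):
    """Peak FP32 TFLOPS: cores x 2 ops per clock (fused multiply-add) x clock."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

# 2560 CUDA cores from the article; the ~1733 MHz boost clock is assumed.
print(round(peak_fp32_tflops(2560, 1733), 1))  # ~8.9 TFLOPS
```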

The GTX 1080 Ti was later introduced for 4K gaming. Below it sit the GTX 1070, aimed at 1440p gamers, and the mainstream GTX 1060, aimed at 1080p gamers, which is the most popular of the Nvidia GPUs as of right now.

Let us know what you think about the evolution of Nvidia GPUs over the years and whether or not you are interested in getting the next generation graphics cards that Nvidia is going to be unveiling soon.