The Biggest Graphics Leap in Nearly a Decade: NVIDIA RTX 3080 First Look Review


In 2018, NVIDIA officially launched the RTX 20-series graphics cards, bringing real-time ray tracing to gaming, while the addition of DLSS and other AI technologies delivered a considerable improvement in frame rates. Jensen Huang said at the time that the 20 series represented the biggest leap in gaming graphics in 20 years. However, we all know what happened next: the gains in traditional rasterization performance were not enough to satisfy buyers, few games actually supported ray tracing, and prices were far higher than the 10-series cards, so many gamers took a wait-and-see attitude toward the 20 series.

It was only after DLSS 2.0 arrived that the power of RTX began to show. At the time, few expected the 30 series to become NVIDIA's new trump card; only when NVIDIA officially announced the 30-series cards and their prices did we realize how much technology NVIDIA had been holding back, technology that is now gradually reaching consumers. We got our hands on the RTX 3080 as soon as it went on sale, and now it is time to unleash this performance monster and experience what may be the biggest graphics performance leap in a decade.

NVIDIA Ampere Architecture Analysis

Before we get to this graphics beast, let's take a look at the NVIDIA Ampere architecture used by the 30-series cards. We have already covered Ampere in some detail, so here we will just revisit a few key features.

Significantly higher CUDA count

For a new generation of graphics cards, an improved architecture can greatly boost performance, and the arrival of ray-traced games has reshaped NVIDIA's GPU architecture, a change already evident in the 20 series: alongside the traditional shader cores, each SM gained RT Cores and Tensor Cores. The RT Cores handle ray-tracing calculations, while the Tensor Cores allow the GPU to render at a reduced resolution and then reconstruct the image with AI (DLSS), lowering the computational load on the GPU and making ray-traced games run more smoothly.

For the 30 series, NVIDIA has updated and improved the architecture, with the biggest change being to the FP32 units in each SM. In Turing, each SM partition had one FP32 datapath alongside an INT32 datapath, plus RT Core and Tensor Cores. In Ampere, in addition to the dedicated FP32 datapath, NVIDIA has made the second datapath dual-purpose, so it can execute either INT32 or FP32 work. In other words, each Ampere SM partition combines a dedicated FP32 datapath and a shared INT32/FP32 datapath, alongside its Tensor Cores.

Compared to Turing, Ampere doubles the number of units capable of FP32 computation per SM, so the CUDA core counts and single-precision figures of the 30-series cards look quite exaggerated: the RTX 3070 has 5,888 CUDA cores, the RTX 3080 has 8,704, and the RTX 3090 has 10,496. The performance gain is correspondingly huge, with the RTX 3080 reaching roughly 30 TFLOPS of single-precision floating point.
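As a quick sanity check on those figures, here is a minimal calculation, assuming 68 SMs with 128 FP32 lanes each for the RTX 3080 and counting a fused multiply-add as 2 FLOPs per clock:

```python
# Back-of-the-envelope check of the single-precision figure quoted above.
# Assumption: each Ampere SM exposes 128 FP32 lanes (64 dedicated FP32
# plus 64 shared FP32/INT32), and an FMA counts as 2 FLOPs per clock.

def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cuda_cores * 2 * boost_ghz / 1000

# RTX 3080: 68 SMs x 128 lanes = 8704 CUDA cores, 1.71 GHz boost clock
print(68 * 128)                           # 8704
print(round(fp32_tflops(8704, 1.71), 1))  # 29.8, i.e. the "~30T" above
```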

To keep pace with the new GPUs, the 30-series cards also adopt Micron's latest GDDR6X memory, which uses four-level pulse amplitude modulation (PAM4) signalling to significantly improve data rates without increasing SGRAM power consumption, giving the cards much higher memory bandwidth.
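The bandwidth gain is simple arithmetic. A sketch using the RTX 3080 figures quoted later in this review (320-bit bus, 19 Gbps effective per pin):

```python
# Memory bandwidth = bus width (bits) x per-pin data rate, converted to bytes.
# PAM4 signalling encodes 2 bits per symbol, which is how GDDR6X reaches an
# effective 19 Gbps per pin without doubling the interface clock.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(320, 19))  # 760.0 GB/s, matching the RTX 3080's spec
```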

On top of that, NVIDIA is using Samsung's custom 8N process for the 30-series cards, which offers higher transistor density than the 12nm FFN process, cramming more transistors into the same die area. NVIDIA says the GA102 die used in the RTX 3090 packs 28 billion transistors, about 1.5x that of a Turing GPU, and those extra transistors naturally go toward the RT Cores, Tensor Cores, and FP32 compute units.

Beyond the raw hardware specifications, another focus of the RTX 3080 is RTX IO. Traditionally, data exchange with storage devices is routed through the CPU, but with the growing popularity of PCIe SSDs, and especially the rise of PCIe 4.0 SSDs, the data streams the CPU has to process have begun to outstrip its capacity, particularly in high-frame-rate games. Hence RTX IO was born.

RTX IO lets the GPU exchange data with the storage device directly, freeing up CPU resources and precious bus bandwidth. Thanks to RTX IO, reading compressed game data from a PCIe 4.0 SSD, which would otherwise take the equivalent of 24 CPU cores, can be handled with only about half a CPU core, while also greatly saving bus bandwidth and improving the efficiency of game-data handling. This can significantly improve game loading speeds and fundamentally addresses the system IO bottleneck as game sizes and storage speeds keep growing.
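The data path RTX IO changes can be sketched as follows. This is purely illustrative Python, not the real RTX IO or DirectStorage API; every function name here is a hypothetical placeholder:

```python
# Illustrative only: the classic asset path stages data through system RAM
# and burns CPU cores on decompression, while the GPU-direct path moves the
# still-compressed data straight to VRAM and decompresses on shader cores.

def load_asset_classic(ssd_read, cpu_decompress, upload_to_gpu, path):
    """SSD -> system RAM -> CPU decompress -> PCIe copy (full size) -> VRAM."""
    compressed = ssd_read(path)        # data lands in system memory first
    raw = cpu_decompress(compressed)   # heavy CPU work (the "24 cores" case)
    return upload_to_gpu(raw)          # uncompressed copy eats bus bandwidth

def load_asset_gpu_direct(ssd_read_to_vram, gpu_decompress, path):
    """SSD -> PCIe copy (compressed size) -> VRAM -> GPU decompress."""
    compressed = ssd_read_to_vram(path)  # smaller transfer, no CPU staging
    return gpu_decompress(compressed)    # decompression offloaded to the GPU
```

The point of the sketch is the shape of the flow, not the API: in the second path the CPU drops out of the per-byte work entirely.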

It is fair to say that the new architecture and software updates enable the Ampere cards to deliver performance far beyond Turing, laying the hardware and software foundation that makes the 30 series the largest generational performance jump of the past 10 years.

Graphics Card Specifications

After covering the powerful new architecture, it's time for the main character of this review, the RTX 3080. We have NVIDIA's Founders Edition of the card. The RTX 3080 has 8,704 CUDA cores, 96 ROPs and 272 texture units, supports PCIe 4.0, and offers a pixel fill rate of 164.2 Gpixels/s and a texture fill rate of 465.1 Gtexels/s. It is equipped with Micron GDDR6X memory on a 320-bit bus with 10,240MB of capacity, for a total memory bandwidth of 760.3GB/s.

As for the other core specs, the RTX 3080 has a base clock of 1440MHz, a boost clock of 1710MHz, and a memory clock of 1188MHz (19Gbps effective), so the RTX 3080's spec sheet can be called quite luxurious.
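The fill rates in the spec list above follow directly from the unit counts and the boost clock, one pixel per ROP per clock and one texel per texture unit per clock:

```python
# Fill rate = number of units x boost clock (GHz), giving Gpixels/s or Gtexels/s.

def fill_rate(units: int, boost_ghz: float) -> float:
    return units * boost_ghz

print(round(fill_rate(96, 1.71), 2))   # 164.16 -> the 164.2 Gpixels/s quoted
print(round(fill_rate(272, 1.71), 2))  # 465.12 -> the 465.1 Gtexels/s quoted
```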

We compared the RTX 3080 to the RTX 2080 and RTX 2080 Ti and found its specs far superior. More importantly, compared with the inflated launch prices of the RTX 2080 and RTX 2080 Ti, the ¥5,499 RTX 3080 is clearly more palatable to consumers and leaves a very good impression on gamers.

Graphics card appearance

This time we have NVIDIA's RTX 3080 Founders Edition, commonly known as the reference card, and compared to previous reference cards, this generation of the RTX 3080 can be considered very refined.

The overall lines are very smooth, and the black-and-gold color scheme gives the card a noble, elegant look, while NVIDIA has chosen a dual-fan design with one 9cm fan on the front and one on the back.

Compared to previous generations, this generation's TDP is significantly higher, so NVIDIA has put a lot of effort into cooling. The flow-through design with an axial fan on each side of the card provides powerful airflow while keeping temperatures notably lower.

For power, the RTX 3080 uses a new 12-pin connector, which is more compact than the traditional connectors. Since almost nobody owns a 12-pin power cable yet, NVIDIA includes a dual 8-pin to 12-pin adapter, but the official adapter is rather stiff and crude, and looks a bit out of place against such a refined card.

The sides carry the classic NVIDIA RTX elements, while the logo lighting changes from green to white; unfortunately, the dual 8-pin adapter cable sits right over the lighting, which is a bit of a letdown.

This time, NVIDIA has dropped the Type-C port from the RTX 3080, leaving a traditional layout of 3x DisplayPort + 1x HDMI. The DisplayPort is 1.4a, while the HDMI is the latest HDMI 2.1.

Clearly the RTX 3080 is quite sophisticated, reflecting its enthusiast positioning, but the bundled adapter cable draws plenty of complaints. We expect power supply manufacturers will soon offer dedicated 12-pin cables, improving the overall experience of the card.

Test Platform Description

The RTX 3080 is aimed at enthusiast consumers, so we tried to assemble the most powerful configuration available on the consumer market. For the CPU, we used the Intel Core i9-10900K as the main test platform.

Considering that the RTX 3080 supports PCIe 4.0, we also set up an AMD Ryzen 9 3950X system as the PCIe 4.0 test platform, both to test the card's performance on the AMD platform and to compare PCIe 3.0 against PCIe 4.0. For the rest of the hardware we likewise used enthusiast-grade products, such as HyperX Predator memory, a Kioxia RD10 and a WD_BLACK SN750 SSD, along with top-end motherboards such as the ASUS ROG Maximus XII Extreme, ensuring the graphics card can perform at its best.

In addition, NVIDIA recommends at least a 750W power supply for the RTX 3080; we used 850W units from Thermaltake and Cooler Master to ensure stable operation. For drivers, we used the NVIDIA GeForce 456.16 press driver; the RTX 3080's performance may shift with future driver updates. Besides the RTX 3080, we also tested the RTX 2080 SUPER and the RTX 2080 Ti to see how far ahead of its predecessors the RTX 3080 really is.

Theoretical Testing

As the first stop for theoretical graphics testing, 3DMark is naturally the most popular benchmark: Fire Strike, Time Spy and Port Royal cover DX11, DX12 and ray-tracing performance respectively, so we used 3DMark to get the RTX 3080's theoretical results.

In 3DMark's tests, the RTX 3080 holds a sizable lead over the RTX 2080 SUPER, especially in the ray-tracing test, Port Royal, where it leads by 86%; it is also about 25-30% faster than the RTX 2080 Ti. Compared to the step from the GTX 1080 Ti to the RTX 2080, this is a very satisfying result.

Actual gaming tests

After the 3DMark theoretical tests, we are sure readers are keen to see actual game tests for these three cards. Theoretical tests only show a card's raw graphics potential; in real games, differences in optimization can widen or narrow the gap, especially now that some games include ray tracing, which can make the gap between GPU generations even more striking. So how far ahead is the RTX 3080 of a 20-series card in real-world gaming? We tested DX11, DX12/Vulkan, and ray-traced games separately to see the actual differences between them.

DX 11 games

Games built on traditional DX11 engines have neither ray tracing nor DLSS, so what is being compared is the card's traditional rasterization performance, often summarized as FP32 throughput. How much of a lead does the RTX 3080 achieve in these tests?

Overwatch

Overwatch is a first-person shooter developed by Blizzard Entertainment. Set on a future Earth, it tells the story of the conflict between humans, members of Overwatch, and intelligent machines, and it remains a hugely popular online game. We ran it with all effects turned up.

As a mainstream esports title, Overwatch is not actually very demanding on the graphics card; frankly, testing it with an RTX 3080 is overkill. The RTX 3080's results are predictably strong: 268 fps at 2K resolution and 166 fps at 4K, already beyond the ceiling of a 144Hz gaming monitor. This also suggests that as GPUs advance, the bar for gaming monitors should rise too. The 360Hz monitors released alongside the RTX 30 series are becoming new standard equipment for professional players, and paired with NVIDIA Reflex they can also significantly reduce latency. The RTX 30 series looks set to take gaming gear to a new level.

The Witcher 3.

The Witcher 3: Wild Hunt is a role-playing game developed by CD Projekt RED and a classic DX11 title; here we turned on full effects, including NVIDIA HairWorks.

The RTX 3080 also scored quite well in The Witcher 3's testing, with over 144 fps at 2K resolution and 87 fps at 4K, about 24% ahead of the RTX 2080 Ti.

Assassin's Creed: Odyssey

Assassin's Creed: Odyssey is an action role-playing game developed by Ubisoft Quebec and published by Ubisoft. Set in 431 BC, four hundred years before the events of Assassin's Creed: Origins, it tells a secret fictional history of the Peloponnesian War between the ancient Greek city-states. Thanks to its notoriously poor optimization, Odyssey is jokingly said to treat all graphics cards as equals, so we turned on maximum settings and ran the built-in benchmark.

I have to say that Odyssey is still Odyssey: even the RTX 3080 is only about 18% ahead of the RTX 2080 Ti, but at least the RTX 3080 finally pushed past 60 fps.

Ghost Recon Breakpoint

Ghost Recon Breakpoint is a military shooter set in a diverse and hostile open world, with full support for solo play or co-op with up to 4 players. Players explore Auroa, a mysterious island where state-of-the-art technology coexists with wilderness. We tested at the highest quality setting.

Compared to Assassin's Creed: Odyssey, Ghost Recon Breakpoint's real-world performance is pretty decent, with the RTX 3080 exceeding 80 fps, significantly ahead of the RTX 2080 Ti.

DX 12 / Vulkan games

Nowadays, fewer games use DX11 engines; more and more developers are building on DX12, and with DX12 come ray tracing and DLSS. Here we first test pure DX12 and Vulkan games.

DOOM Eternal

DOOM Eternal is a first-person shooter developed by id Software and published by Bethesda Softworks. It is the second installment in the series since the 2016 reboot, DOOM. We turned on full effects.

Thanks to the Vulkan API and DOOM Eternal's excellent optimization, all three cards performed quite well, with the RTX 3080 reaching 178 fps at 4K resolution, which is quite good.

Death Stranding

Death Stranding is an action game developed by Kojima Productions and published by Sony Interactive Entertainment on November 8, 2019. Much discussed at the time, it can be considered Hideo Kojima's latest masterpiece, following the protagonist Sam as he braves a world transformed beyond recognition by the Death Stranding, reconnects the remnants of society, and rescues humans stranded in another dimension. We turned on full effects in the game and enabled DLSS in performance mode.

Death Stranding's PC port is very well optimized, especially with DLSS, which makes the game perform remarkably well: the RTX 3080 achieved over 120 fps at 4K resolution.

Ray-Traced Games

When NVIDIA launched the RTX 20 series, it put heavy emphasis on ray tracing, even calling it the biggest advance in graphics in 20 years. Two years after the 20-series launch, more and more games support ray tracing, so naturally we tested those titles here, and as ray tracing's best companion, DLSS is crucial for offsetting the performance cost of ray-traced effects.

Wolfenstein: Youngblood

Wolfenstein: Youngblood is a first-person shooter developed by MachineGames and published by Bethesda. We tested with maximum settings, ray tracing on, and DLSS enabled.

With DLSS enabled, Wolfenstein: Youngblood delivered a very satisfying frame rate, with the RTX 3080 exceeding 154 fps.

Shadow of the Tomb Raider

Shadow of the Tomb Raider is an action-adventure game developed by Eidos Montreal and published by Square Enix, the third entry in the rebooted Tomb Raider trilogy. We set full effects with ray tracing at maximum and enabled the game's DLSS.

Shadow of the Tomb Raider was one of the first games to support ray tracing, and in practice the RTX 3080 hit 100 fps with DLSS enabled, which is quite impressive.

Control
Control is a third-person action-adventure game developed by Remedy Entertainment and published by 505 Games. Set mainly inside the Federal Bureau of Control, it is arguably one of the best showcases for ray-traced effects. We turned on the game's maximum settings, ray tracing, and DLSS.

In Control, ray tracing is a noticeable drain on GPU resources even with DLSS enabled, but the RTX 3080 still managed 68 fps at 4K resolution thanks to its excellent performance, outperforming the RTX 2080 Ti by about 36%.

Battlefield V

Battlefield V is a first-person shooter developed by EA DICE and published by EA (Electronic Arts). Built on the Frostbite engine, it showcases more powerful 3D detail and takes animation, environmental destruction, lighting, maps, and sound to a new level. We turned on maximum settings as well as DLSS.

Battlefield V's excellent ray tracing, working together with DLSS, lets the RTX 3080 stretch its legs, and the card lives up to the hype, outperforming the RTX 2080 Ti by 39% and even reaching more than twice the performance of the RTX 2080 SUPER.

Fortnite
Fortnite is a third-person shooter; in the current season the floodwaters have receded, the old island is partly covered in water, and a whole new maritime era has begun. You can build on the water this season, and new items and watercraft help you get around. In a recent update, Fortnite added support for ray tracing as well as NVIDIA Reflex. We used NVIDIA's official RTX test map, with full effects and DLSS in performance mode.

Although Fortnite is an online game, turning on ray tracing makes it a hardware killer. With full ray-traced effects, frame rates on all the cards plummeted, but with DLSS the RTX 3080 achieved 63 fps at 4K resolution, about 50% ahead of the RTX 2080 Ti. It's clear that in future ray-traced games, DLSS will be ray tracing's best partner.

Boundary

Boundary is a near-future, space-set first-person shooter offering a genuinely zero-gravity tactical shooting experience, in which space operators of every description clash in near-Earth orbit, breaking the usual constraints and turning weightlessness into a weapon. We again turned on the highest ray-tracing effects and DLSS performance mode.

In Boundary, the RTX 3080 fell short of 60 fps, but it was still 36% ahead of the RTX 2080 Ti. It's worth mentioning that both of these titles use the ray-traced caustics shown off at the NVIDIA Ampere launch, making the lighting detail more realistic. As the tests above show, the RTX 3080 can already satisfy 90% of 4K gaming requirements, though in some ray-traced games even the RTX 3080 struggles. And as ray tracing's best partner, DLSS is practically mandatory for running these games smoothly; without it they would be far harder to run.

Temperature and Power Consumption

In real-world use, we often see performance drop noticeably mid-game because the GPU throttles its clocks when cooling is inadequate, so the quality of a card's cooler directly affects its actual performance.

We stress-tested the card with FurMark; the RTX 3080 Founders Edition reached 73 degrees Celsius, already cooler than the non-reference RTX 2080 Ti. Considering we used non-reference versions of the RTX 2080 Ti and RTX 2080 SUPER this time, the reference RTX 3080's thermal lead is all the more striking.

However, it's worth noting that with the new flow-through cooling design, the card exhausts quite a lot of heat at full load. Using a FLIR thermal imager on the RTX 3080 FE, we measured 70 degrees Celsius on the shroud and 73.4 degrees at the exhaust, so you can clearly feel the heat when swapping out the card.

For power testing, we used FurMark and Assassin's Creed: Odyssey to measure consumption under stress-test and gaming loads, using a wall meter to record whole-system power draw. If you want to run an RTX 3080 reliably, a 650W power supply is the bare minimum; with a CPU like the Ryzen 9 3950X, you'll want 750W or even 850W.

Difference between PCIe 4.0 and 3.0

This is the first time NVIDIA has given its GeForce cards PCIe 4.0 support, and the only platform currently offering PCIe 4.0 is AMD's Ryzen; according to reports, Intel won't support it until its 11th-generation Core processors. Many people will therefore ask: how much performance difference is there between PCIe 3.0 and PCIe 4.0? We used both AMD and Intel platforms to see how the RTX 3080 actually performs under the different bus bandwidths.

(Left: Intel platform; Right: AMD platform)

We ran 3DMark Time Spy and Shadow of the Tomb Raider for the comparison. Time Spy returned a graphics score of 17,629 on the AMD platform versus 17,106 on the Intel platform, a 3% difference, while in Shadow of the Tomb Raider the average frame rate was 101 fps on both platforms, a difference of 0.27%. From our testing, the RTX 3080 performs essentially identically on the two platforms, and the bandwidth difference between PCIe 3.0 and PCIe 4.0 is hard to see in actual gaming performance. Since Microsoft DirectStorage is not yet available and the first games supporting DirectStorage and RTX IO won't arrive until next year, the gaming difference between PCIe 3.0 and PCIe 4.0 is negligible for now.
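For reference, the percentage gap can be recomputed from the raw Time Spy scores:

```python
# Relative lead of one platform's Time Spy graphics score over the other's.

def pct_lead(a: float, b: float) -> float:
    return (a - b) / b * 100

amd_score, intel_score = 17629, 17106  # figures from the test above
print(round(pct_lead(amd_score, intel_score), 1))  # ~3.1, i.e. the ~3% gap
```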

The CPU is starting to become a bottleneck.

Compared to the 20 series, the 30-series cards do deliver a tremendous, almost unimaginable performance increase, and the RTX 3080 makes smooth 4K gaming no longer a dream. From the results, brute-force hardware scaling coupled with the improved architecture lets the RTX 3080 post superb numbers. However, while the RTX 3080 performs admirably at 4K, we found in testing that the gains at 2K, let alone 1080p, were not nearly as large. In some tests, the difference between 2K and 1080p was minimal. Does this mean the GPU is slacking off when rendering at lower resolutions?

To explore this, let's take Shadow of the Tomb Raider's benchmark as an example and briefly discuss why the RTX 3080's gains at 2K and 1080p do not scale linearly.

Besides the game's overall average frame rate, Shadow of the Tomb Raider's benchmark also reports CPU render, CPU game, and GPU frame rates. At 4K, the CPU game average is 152 fps while the GPU manages 102 fps, meaning the CPU can fully keep up with what the GPU renders. At 2K, however, the GPU reaches 159 fps while the CPU sits at 151 fps; the CPU has already fallen behind the GPU. At the far less demanding 1080p, the CPU averages 154 fps against a GPU score of 201 fps, with the overall game result at 154 fps, showing that at this point the CPU has become the bottleneck limiting the game's frame rate.

Traditional game rendering is a pipeline: the GPU handles graphics rendering, while the CPU issues rendering commands and handles the non-graphics work, such as game-engine calculations and allocating memory and other system resources. In the past, CPUs could issue rendering commands fast enough that they were rarely the bottleneck, but this time the GPU's rendering capability has outstripped the CPU's processing power; with the GPU under little load, the game's frame rate depends on how fast the CPU can feed it. In other words, CPU performance has become the key constraint on further frame-rate gains.
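A toy model makes the bottleneck concrete: treat the delivered frame rate as capped by whichever of the CPU and GPU finishes its per-frame work last. This is a simplification that ignores pipelining, but it matches the benchmark numbers above:

```python
# min() model: the slower of the two stages sets the delivered frame rate.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# Shadow of the Tomb Raider figures from the text: (resolution, CPU game fps, GPU fps)
for res, cpu, gpu in [("4K", 152, 102), ("2K", 151, 159), ("1080p", 154, 201)]:
    bound = "GPU" if gpu < cpu else "CPU"
    print(f"{res}: ~{delivered_fps(cpu, gpu)} fps, {bound}-bound")
```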

So one remedy for the frame-rate ceiling at 2K and even 1080p is to raise the graphics settings and increase the GPU load, which reduces GPU idling to some extent. The other is to wait for Intel or AMD to introduce more powerful CPUs. Either way, for now, CPUs are lagging behind what GPUs demand.

Summing up: the biggest graphics leap in a decade, and the CPU is next!

After NVIDIA launched the 20 series, it drew plenty of criticism from gamers, chiefly because they paid more without getting the performance gains they felt they deserved, and it took a long time for the two headline technologies advertised at launch, ray tracing and DLSS, to show their true strength. Gamers thus called the 20 series the least cost-effective product in recent years. With the 30 series, however, NVIDIA seems to be telling gamers: it's not that we couldn't make good graphics cards, we just wanted to do something more meaningful first. And the 30-series cards, offering much more performance at no extra price, are saying: we've made performance simple, you just have to buy one.

Compared with the previous-generation RTX 2080, the RTX 3080 represents a rocket-like leap in performance: 60-70% gains in traditional games and nearly double the ray-tracing performance will have players shouting value for money, and the ¥5,499 reference price is also quite sincere. The RTX 3080 may well be the biggest graphics leap in a decade. Of course, in reviewing the card we found that only 4K truly lets the RTX 3080 stretch its legs; at 2K the frame-rate gains shrink, a clear sign that GPU development has overtaken the CPU, and further frame-rate improvements will be up to the CPU.

Having witnessed the biggest graphics leap in a decade, it now falls to Intel and AMD to launch CPUs powerful enough to keep pace with the ever-accelerating GPU.
