A graphics processing unit (GPU) is usually a dedicated processor whose main purpose is to perform vast amounts of computation in order to build images to be displayed. To render the 3D graphics seen in modern-day games, the GPU must perform countless floating-point calculations. GPUs can be found in virtually all commonly used electronics today, such as phones, computers, and video game consoles. The term “GPU” did not exist until 1999, when NVIDIA used it while marketing its new graphics card, the GeForce 256 [3]. From the 1980s to the present, GPUs have evolved dramatically, from early graphics cards that could only display simple vectors on screen to modern ones that can render lush worlds that are hard to distinguish from our own. The process by which the GPU creates graphics is based on what is called a pipeline. A pipeline is essentially an assembly line in which the GPU generates vertices, assembles them into primitives, converts those primitives into pixel fragments (a step known as rasterization), and finally displays the result on the screen [1]. Early GPUs were not capable of handling this entire process and left the early stages to be calculated by the CPU. The whole process can be broken down into two broad parts: the creation of the objects’ geometry, and the rendering of textures onto those objects [2]. Figure 1 shows a visual of this.
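To make the pipeline stages concrete, the following is a minimal CPU-only sketch of the same flow: vertices are assembled into a single primitive (a triangle), the primitive is rasterized into pixel fragments with an edge-function coverage test, and the resulting framebuffer is “displayed” as ASCII. This is an illustrative simplification, not real GPU or graphics-API code; the names (Vertex, edgeFunction, framebuffer) are chosen here for clarity.

```cpp
// A minimal sketch of the pipeline described above:
// vertices -> primitive (triangle) -> rasterization into fragments -> display.
#include <iostream>
#include <vector>

struct Vertex { float x, y; };   // a vertex produced by the geometry stage

// Signed area test: tells which side of edge a->b the point c lies on.
float edgeFunction(const Vertex& a, const Vertex& b, const Vertex& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int width = 40, height = 20;

    // Stage 1: vertices.  Stage 2: assemble them into one primitive (a triangle).
    Vertex v0{ 5.0f, 2.0f }, v1{ 35.0f, 8.0f }, v2{ 15.0f, 18.0f };

    // Stage 3: rasterization -- test every pixel center against the triangle's
    // edges and emit a "fragment" (here, a '#') for each covered pixel.
    std::vector<char> framebuffer(width * height, '.');
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vertex p{ x + 0.5f, y + 0.5f };
            float w0 = edgeFunction(v1, v2, p);
            float w1 = edgeFunction(v2, v0, p);
            float w2 = edgeFunction(v0, v1, p);
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0))
                framebuffer[y * width + x] = '#';   // fragment written to the framebuffer
        }
    }

    // Stage 4: "display" the framebuffer (printed as text here).
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x)
            std::cout << framebuffer[y * width + x];
        std::cout << '\n';
    }
    return 0;
}
```

On real hardware these stages run in parallel across thousands of vertices and fragments at once, which is what the floating-point throughput mentioned above is used for.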
In the mid-1980s, IBM created the first processor-based video card for the computer. This allowed the graphics card to take over all video tasks, freeing the CPU to perform other computations. It was not a great financial success because of its high cost, but it is important to note because it started the trend of placing a separate processor on the video card to handle graphics computations instead of leaving them to the CPU. In 1989 a