{"id":708,"date":"2024-04-15T14:01:33","date_gmt":"2024-04-15T08:31:33","guid":{"rendered":"https:\/\/gih.al-emam.org\/?p=708"},"modified":"2024-04-16T10:57:30","modified_gmt":"2024-04-16T05:27:30","slug":"what-is-a-gpu-an-expert-explains-the-chips-powering-the-ai-boom-and-why-theyre-worth-trillions","status":"publish","type":"post","link":"https:\/\/gih.al-emam.org\/?p=708","title":{"rendered":"What is a GPU? An expert explains the chips powering the AI boom, and why they\u2019re worth trillions"},"content":{"rendered":"<p><strong><em>Author: Conrad Sanderson<\/em><\/strong><\/p>\n<p><strong><em>Research Scientist &amp; Team Leader, CSIRO <\/em><\/strong><\/p>\n<p>As the world rushes to make use of the latest wave of AI technologies, one piece of high-tech hardware has become a surprisingly hot commodity: the graphics processing unit, or GPU.<\/p>\n<p>A top-of-the-line GPU can sell for\u00a0tens of thousands of dollars, and leading manufacturer NVIDIA has seen its market valuation\u00a0soar past US$2 trillion\u00a0as demand for its products surges.<\/p>\n<p>GPUs aren\u2019t just high-end AI products, either. There are less powerful GPUs in phones, laptops and gaming consoles, too.<\/p>\n<p>By now you\u2019re probably wondering: what is a GPU, really? And what makes them so special?<\/p>\n<p><strong>What is a GPU?<\/strong><\/p>\n<p>GPUs were originally designed primarily to quickly generate and display complex 3D scenes and objects, such as those involved in video games and\u00a0computer-aided design\u00a0software. Modern GPUs also handle tasks such as\u00a0decompressing\u00a0video streams.<\/p>\n<p>The \u201cbrain\u201d of most computers is a chip called a central processing unit (CPU). CPUs can be used to generate graphical scenes and decompress videos, but they are typically far slower and less efficient on these tasks compared to GPUs. 
CPUs are better suited for general computation tasks, such as word processing and browsing web pages.<\/p>\n<p><strong>How are GPUs different from CPUs?<\/strong><\/p>\n<p>A typical modern CPU is made up of between 8 and 16 \u201ccores\u201d, each of which can process complex tasks in a sequential manner.<\/p>\n<p>GPUs, on the other hand, have thousands of relatively small cores, which are designed to all work at the same time (\u201cin parallel\u201d) to achieve fast overall processing. This makes them well suited for tasks that require a large number of simple operations which can be done at the same time, rather than one after another.<\/p>\n<p>Traditional GPUs come in two main flavours.<\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">First, there are standalone chips, which often come in add-on cards for large desktop computers. Second are GPUs combined with a CPU in the same chip package, which are often found in laptops and game consoles such as the PlayStation 5. 
In both cases, the CPU controls what the GPU does.<\/span><\/p>\n<h2 style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 9.0pt 0cm;\"><span style=\"font-size: 17.5pt; font-family: 'Libre Baskerville'; color: black;\">Why are GPUs so useful for AI?<\/span><\/h2>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">It turns out GPUs can be repurposed to do more than generate graphical scenes.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">Many of the machine learning techniques behind artificial intelligence (AI), such as\u00a0<\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Deep_learning\" target=\"_blank\" rel=\"noopener\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: #4b4b4e;\">deep neural networks<\/span><\/a><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">, rely heavily on various forms of \u201cmatrix multiplication\u201d.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">This is a mathematical operation where very large sets of numbers are multiplied and summed together. 
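To make the multiply-and-sum idea concrete, here is a short illustrative Python sketch (a hypothetical example, not part of the original article). The key point is that each output element depends only on one row and one column of the inputs, so all output elements can be computed independently of one another:

```python
# Illustrative sketch: matrix multiplication as many independent
# multiply-and-sum operations. Each output element C[i][j] depends only on
# row i of A and column j of B, so all elements can be computed in
# parallel -- the kind of workload a GPU's thousands of cores are built for.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):          # on a GPU, each (i, j) pair would be
        for j in range(p):      # handled by its own lightweight thread
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(m))
    return C

# A small worked example:
A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[5.0, 6.0],
     [7.0, 8.0]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

In real AI workloads the matrices can have millions of elements, which is why spreading these independent calculations across thousands of cores pays off so dramatically.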
These operations are well suited to parallel processing, and hence can be performed very quickly by GPUs.<\/span><\/p>\n<h2 style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 9.0pt 0cm;\"><span style=\"font-size: 17.5pt; font-family: 'Libre Baskerville'; color: black;\">What\u2019s next for GPUs?<\/span><\/h2>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">The number-crunching prowess of GPUs is steadily increasing, due to rises in core counts and operating speeds. These gains are primarily driven by advances in chip manufacturing by companies such as\u00a0<\/span><a href=\"https:\/\/www.anandtech.com\/show\/21241\/tsmc-2nm-update-two-fabs-in-construction-one-awaiting-government-approval\" target=\"_blank\" rel=\"noopener\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: #4b4b4e;\">TSMC<\/span><\/a><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">\u00a0in Taiwan.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">The size of individual transistors \u2013 the basic components of any computer chip \u2013 is decreasing, allowing more transistors to be placed in the same amount of physical space.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">However, that is not the entire story. 
While traditional GPUs are useful for AI-related computation tasks, they are not optimal.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">Just as GPUs were originally designed to accelerate computers by providing specialised processing for graphics, there are accelerators that are designed to speed up machine learning tasks. These accelerators are often referred to as \u201cdata centre GPUs\u201d.<\/span><\/p>\n<p style=\"background: white; vertical-align: baseline; margin: 0cm 0cm 13.5pt 0cm;\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">Some of the most popular accelerators, made by companies such as AMD and NVIDIA, started out as traditional GPUs. Over time, their designs evolved to better handle various machine learning tasks, for example by supporting the more efficient \u201c<\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Bfloat16_floating-point_format\" target=\"_blank\" rel=\"noopener\"><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: #4b4b4e;\">brain float<\/span><\/a><span style=\"font-size: 13.5pt; font-family: 'Libre Baskerville'; color: black;\">\u201d number format.<\/span><\/p>\n<p>Other accelerators, such as Google\u2019s\u00a0Tensor Processing Units\u00a0and Tenstorrent\u2019s\u00a0Tensix Cores, were designed from the ground up to speed up deep neural networks.<\/p>\n<p>Data centre GPUs and other AI accelerators typically come with significantly more memory than traditional GPU add-on cards, which is crucial for training large AI models. Broadly speaking, the larger the AI model, the more capable and accurate it tends to be.<\/p>\n<p>To further speed up training and handle even larger AI models, such as ChatGPT, many data centre GPUs can be pooled together to form a supercomputer. 
This requires more complex software to properly harness the available number-crunching power. Another approach is to create a single very large accelerator, such as the \u201cwafer-scale processor\u201d produced by Cerebras.<\/p>\n<p><strong>Are specialised chips the future?<\/strong><\/p>\n<p>CPUs have not been standing still either. Recent CPUs from AMD and Intel have built-in low-level instructions that speed up the number-crunching required by deep neural networks. This additional functionality mainly helps with \u201cinference\u201d tasks \u2013 that is, using AI models that have already been developed elsewhere.<\/p>\n<p>To train the AI models in the first place, large GPU-like accelerators are still needed.<\/p>\n<p>It is possible to create ever more specialised accelerators for specific machine learning algorithms. Recently, for example, a company called Groq has produced a \u201clanguage processing unit\u201d (LPU) specifically designed for running large language models along the lines of ChatGPT.<\/p>\n<p>However, creating these specialised processors takes considerable engineering resources. History shows the usage and popularity of any given machine learning algorithm tend to peak and then wane \u2013 so expensive specialised hardware may quickly become outdated.<\/p>\n<p>For the average consumer, however, that\u2019s unlikely to be a problem. The GPUs and other chips in the products you use are likely to keep quietly getting faster.<\/p>\n<p>Source:<\/p>\n<p>https:\/\/theconversation.com\/global<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Conrad Sanderson Research Scientist &amp; Team Leader, CSIRO As the world rushes to make use of the latest wave of AI technologies, one piece of high-tech hardware has become a surprisingly hot commodity: the graphics processing unit, or GPU. 
A top-of-the-line GPU can sell for\u00a0tens of thousands of dollars, and leading manufacturer NVIDIA has [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":705,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[31],"tags":[],"class_list":{"0":"post-708","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-global"},"_links":{"self":[{"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/posts\/708","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=708"}],"version-history":[{"count":3,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/posts\/708\/revisions"}],"predecessor-version":[{"id":786,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/posts\/708\/revisions\/786"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=\/wp\/v2\/media\/705"}],"wp:attachment":[{"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=708"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=708"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gih.al-emam.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=708"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}