EETE: As AI algorithms get bigger every year, they require more computing power. Do you see that trend continuing, or do you see the algorithms becoming more efficient?

Papermaster: The need for bigger and bigger computers to support the most sophisticated and accurate AI models is indeed growing. Large language models and other types of generative AI are really driving the massive scaling up of supercomputers.

On the one hand, large language models are running up to hundreds of billions of parameters, and they are well on their way to a trillion parameters. As they grow, it's breathtaking to see the kinds of content they create and the kinds of questions they answer. But that impressive capacity comes at a cost.

Large language models ingest a massive amount of training data, which requires supercomputer-class computational power. Because the need will continue to grow, it's essential that we have more advanced techniques for energy-efficient supercomputing. The holistic design I described is the best way to achieve that.

On the other hand, there will be innovation in the way that AI models are used. You don't need a general-purpose language model for specific tasks. Instead, you can use a reduced model if, for example, you limit your dataset to what your company or industry needs. Reducing the scope allows you to reduce the model size, and that is another way of driving more energy-efficient AI computing.

EETE: We talked about the big supercomputers that enable AI. But let's not forget about the data centers that serve less ambitious applications—the regular data centers that we rely on every day. How are you approaching that market?

Papermaster: That's a very strong market for AMD and one in which AMD has been steadily gaining ground. In fact, we have a majority share in the hyperscale CPU clusters that run business applications. We are now on our fourth generation of the Zen line of high-performance CPUs, which is shipping in the fourth generation of EPYC servers for general-purpose computing.

Our four generations of CPU-based computing have been doing very well. We've added more CPUs at every generation, and the most recent generation of EPYC servers has 96 CPU cores on every chip and runs two work threads on each. This means a data center operator immediately doubles the number of work threads they run through each of these chips. [Recently,] we were very excited to have announced an even more compact version, called Zen 4c. We've made it more energy-efficient for cloud-native tasks.

If you're running cloud-native applications at hyperscale, where you need a lot of work threads, you need them to run in a very energy-efficient manner. The fourth generation of our processor, code-named Bergamo, takes the number of CPUs to 128 and doubles the number of threads on each chip. It's highly efficient and you can get 3.7× the performance of competitors, like Ampere, which are also designed for cloud-native workloads.

And going back to how we tailor our approach to the needs of an application domain, for scientific computing, we announced AMD 3D V-Cache, where we took the fourth-generation x86 core and stacked on additional cache. Remember that difficult scientific tasks need data to be very close to the processing core. So we stack the cache vertically, right on top. This results in huge improvements in throughput for the types of workloads that run in electronic design automation or computer-aided design. We achieve significant acceleration, which is of great benefit to applications like Ansys. ■
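Papermaster's point about keeping data close to the processing core is, in software terms, a data-locality argument: a workload runs fastest when its working set fits in nearby cache. The C sketch below is only an illustration of that general principle, not AMD or Ansys code; the matrix size N, the tile size BLOCK, and the function names are arbitrary assumptions. A standard cache-blocked matrix multiply reuses each tile while it is still cache-resident, whereas the naive version repeatedly streams data from main memory. A larger stacked cache, as in 3D V-Cache, simply widens the range of working sets that stay that close to the core.

/*
 * Illustration only: cache-blocked vs. naive matrix multiply.
 * Demonstrates why keeping the working set near the core improves throughput.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     1024   /* matrix dimension (arbitrary assumption for the demo) */
#define BLOCK 64     /* tile size chosen to fit comfortably in cache */

static double A[N][N], B[N][N], C1[N][N], C2[N][N];

/* Naive triple loop: B is re-read from memory once the data outgrow the cache. */
static void matmul_naive(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C1[i][j] = sum;
        }
}

/* Blocked (tiled) loops: each BLOCK x BLOCK tile is reused while cache-resident. */
static void matmul_blocked(void)
{
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int kk = 0; kk < N; kk += BLOCK)
            for (int jj = 0; jj < N; jj += BLOCK)
                for (int i = ii; i < ii + BLOCK; i++)
                    for (int k = kk; k < kk + BLOCK; k++) {
                        double a = A[i][k];
                        for (int j = jj; j < jj + BLOCK; j++)
                            C2[i][j] += a * B[k][j];
                    }
}

int main(void)
{
    /* Fill the inputs with arbitrary values. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = (double)rand() / RAND_MAX;
            B[i][j] = (double)rand() / RAND_MAX;
        }

    clock_t t0 = clock();
    matmul_naive();
    clock_t t1 = clock();
    matmul_blocked();
    clock_t t2 = clock();

    printf("naive:   %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("blocked: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

Compiled with a typical optimizing compiler (for example, gcc -O2), the blocked version usually finishes in a fraction of the naive version's time on the same machine, and the gap widens as the matrices outgrow the last-level cache.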