AI models are rapidly evolving, outpacing hardware capabilities, which presents an opportunity for Arm to innovate across the compute stack.
Recently, Arm unveiled new chip blueprints and software tools aimed at enhancing smartphones’ ability to handle AI tasks more efficiently. But they didn’t stop there – Arm also implemented changes to how they deliver these blueprints, potentially accelerating adoption.
Arm is evolving its solution offerings to maximise the benefits of leading process nodes. The company has announced Arm Compute Subsystems (CSS) for Client, its latest cutting-edge compute solution tailored for AI applications in smartphones and PCs.
CSS for Client promises a significant performance leap: more than 30% higher compute and graphics performance, along with 59% faster AI inference across machine learning and computer vision workloads.
While Arm’s technology powered the smartphone revolution, it is also gaining traction in PCs and data centres, where energy efficiency is prized. Smartphones, where Arm supplies IP to competing chipmakers such as Apple, Qualcomm, and MediaTek, remain its biggest market, but the company is expanding its offerings.
It has launched new CPU designs optimised for AI workloads, new GPUs, and software tools that ease the development of chatbots and other AI applications on Arm chips.
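To give a concrete, if simplified, sense of what “optimised for AI workloads” means on the software side, the short Python sketch below checks which AI-relevant instruction-set extensions an Arm CPU reports on a Linux device by reading the standard /proc/cpuinfo feature flags (asimd, asimddp, i8mm, bf16, sve) that inference runtimes commonly use to select faster kernels. It is a generic illustration, not part of Arm’s announced tooling.

```python
"""Minimal sketch: report which AI-relevant Arm instruction-set extensions a
Linux device exposes. The flags below are standard /proc/cpuinfo entries on
AArch64; this is illustrative only and not an Arm API."""

from pathlib import Path

# Extensions that inference libraries typically exploit on Arm CPUs.
AI_FEATURES = {
    "asimd":   "NEON / Advanced SIMD (baseline vector maths)",
    "asimddp": "Dot-product instructions (int8 inference)",
    "i8mm":    "Int8 matrix-multiply instructions",
    "bf16":    "BFloat16 arithmetic support",
    "sve":     "Scalable Vector Extension",
}

def detect_arm_ai_features(cpuinfo_path: str = "/proc/cpuinfo") -> dict[str, bool]:
    """Return a mapping of feature name -> whether the CPU reports it."""
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        # Not a Linux system (or /proc unavailable): report nothing detected.
        return {name: False for name in AI_FEATURES}

    flags: set[str] = set()
    for line in text.splitlines():
        if line.lower().startswith("features"):
            flags.update(line.split(":", 1)[1].split())
    return {name: name in flags for name in AI_FEATURES}

if __name__ == "__main__":
    for name, present in detect_arm_ai_features().items():
        status = "yes" if present else "no"
        print(f"{name:8s} {status:3s}  {AI_FEATURES[name]}")
```

Run on an Arm-based phone or laptop under Linux, the script prints which extensions the CPU advertises; on other hardware it simply reports none detected.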
But the real game-changer is how these products are delivered. Historically, Arm provided specifications or abstract designs that chipmakers had to translate into physical blueprints, an immense undertaking that involves arranging billions of transistors.
For this latest offering, Arm collaborated with Samsung and TSMC to provide physical chip blueprints that are ready for manufacturing, saving chipmakers considerable time.
Samsung’s Jongwook Kye praised the partnership, saying the company’s 3nm process combined with Arm’s CPU solutions meets soaring demand for generative AI in mobile devices through “early and tight collaboration” on design-technology co-optimisation (DTCO) and PPA (power, performance, and area) maximisation, enabling on-time silicon delivery that hits performance and efficiency targets.
Dan Kochpatcharin, head of TSMC’s ecosystem and alliance management division, echoed this, calling the AI-optimised CSS “a prime example” of Arm-TSMC collaboration that helps designers push the boundaries of semiconductor innovation for unmatched AI performance and efficiency.
“Together with Arm and our Open Innovation Platform® (OIP) ecosystem partners, we empower our customers to accelerate their AI innovation using the most advanced process technologies and design solutions,” Kochpatcharin emphasised.
Arm isn’t trying to compete with its customers; rather, it aims to enable faster time-to-market by providing optimised designs that pair with neural processors to deliver cutting-edge AI performance.
As Arm’s Chris Bergey said, “We’re combining a platform where these accelerators can be very tightly coupled” to customer NPUs.
Essentially, Arm provides more refined, “baked” designs that customers can integrate with their own accelerators to rapidly develop powerful AI-driven chips and devices.