The company tackled inference on the Llama-3.1 405B foundation model and just crushed it. And for the crowds at SC24 this week in Atlanta, the company also announced it is 700 times faster than ...
A.I. chip, Maia 200, calling it “the most efficient inference system” the company has ever built. The Satya Nadella-led tech ...
But the same qualities that make those graphics processing chips, or GPUs, so effective at creating powerful AI systems from scratch make them less efficient at putting AI products to work. That’s ...
Companies are not just talking about AI inference processing; they are doing it. Analyst firm Gartner released a new report this week forecasting that global generative AI spending will hit $644 billion ...
Microsoft is also inviting developers and AI startups to explore model and workload optimisation with the new Maia 200 SDK.
The major cloud builders and their hyperscaler brethren – in many cases, one company acts as both a cloud builder and a hyperscaler – have made their technology choices when it comes to deploying AI ...
The small form factor HPE Edgeline EL8000 is designed for AI tasks such as computer vision and natural-language processing. Later this month, Hewlett Packard Enterprise will ship what looks to be the first server ...
GPUs’ ability to perform many computations in parallel makes them well-suited to running today’s most capable AI. But GPUs are becoming tougher to procure, as companies of all sizes increase their ...
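As a rough, hypothetical illustration of the parallelism these reports refer to (a single dense neural-network layer sketched in NumPy; the shapes and names are invented for the example and are not drawn from any of the coverage above):

import numpy as np

# A neural-network layer is dominated by matrix multiplication.
# Every output element below is an independent dot product, so the
# work can be spread across thousands of GPU cores at once -- the
# property these articles credit for GPUs' effectiveness at AI.
batch, d_in, d_out = 32, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)  # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)  # layer weights

y = x @ w        # batch * d_out independent dot products
print(y.shape)   # (32, 1024)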