
IBM signs up Groq for speedy AI inferencing option

IBM has teamed up with Groq to offer enterprise customers a reliable, cost-effective way to speed AI inferencing applications.

Specifically, IBM is incorporating Groq’s inference platform, GroqCloud, and its custom Language Processing Unit (LPU) hardware architecture into Big Blue’s watsonx Orchestrate, which helps customers build, deploy, and manage AI agents and workflows to automate business operations. IBM watsonx Orchestrate offers more than 500 tools and customizable, domain-specific agents from IBM and third-party contributors. Groq claims GroqCloud delivers inference more than five times faster, and at lower cost, than traditional GPU-based systems.

Further, IBM and Groq plan to integrate and enhance vLLM, Red Hat’s open-source large language model inference framework, which includes its own inference server, to run on Groq’s LPU architecture, as well as to let IBM Granite models run on GroqCloud.

The technology involved in the partnership will let customers use watsonx capabilities in a familiar way, working with their preferred tools while accelerating inference with GroqCloud, IBM stated. “This integration will address key AI developer needs, including inference orchestration, load balancing, and hardware acceleration, ultimately streamlining the inference process,” IBM stated.
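To make the load-balancing need concrete, here is a minimal round-robin router sketch in Python. The backend names are hypothetical placeholders, and this is an illustration of the general technique, not IBM's or Groq's actual orchestration layer.

```python
from itertools import cycle


class RoundRobinRouter:
    """Minimal sketch: spread inference requests evenly across backends.

    Real orchestration layers also weigh latency, queue depth, and
    hardware type; this only demonstrates the round-robin idea.
    """

    def __init__(self, backends):
        self._backends = cycle(backends)  # endless round-robin iterator

    def route(self, prompt):
        # Pick the next backend in rotation and pair it with the request.
        backend = next(self._backends)
        return backend, prompt


# Backend names below are illustrative, not real endpoints.
router = RoundRobinRouter(["lpu-pool-1", "lpu-pool-2", "gpu-fallback"])
print(router.route("Summarize this support ticket.")[0])  # → lpu-pool-1
```

A production router would typically sit behind the inference API and also handle retries and health checks; round-robin is just the simplest starting point.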

For enterprises running production AI workloads — especially agentic AI, real-time decision systems such as customer service bots, fraud detection and IoT monitoring — inference speed can be a bottleneck, IBM stated. The idea here is to help customers gain productivity and cost-efficiency in their agentic workflows. “This is especially critical for sectors like healthcare, finance, government, retail, and manufacturing, which face hurdles with speed, cost, and reliability when implementing AI,” IBM stated.
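Since the article's argument turns on inference latency, a simple way to reason about it is to time each call end to end. The sketch below uses a stub in place of a real GroqCloud or watsonx client (the stub and its fixed delay are assumptions for illustration); the timing wrapper is the reusable part.

```python
import time


def call_inference(prompt: str) -> str:
    # Stub standing in for a real inference call (e.g., to GroqCloud);
    # the 10 ms sleep is an arbitrary placeholder delay.
    time.sleep(0.01)
    return f"response to: {prompt}"


def timed_inference(prompt: str) -> tuple[str, float]:
    # Measure end-to-end latency around a single inference call.
    start = time.perf_counter()
    result = call_inference(prompt)
    return result, time.perf_counter() - start


answer, latency_s = timed_inference("Flag this transaction for review?")
print(f"inference took {latency_s * 1000:.1f} ms")
```

For real-time systems such as fraud detection, tracking this per-call latency is what tells you whether inference is the bottleneck IBM describes.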

“Many large enterprise organizations have a range of options with AI inferencing when they’re experimenting, but when they want to go into production, they must ensure complex workflows can be deployed successfully to ensure high-quality experiences,” said Rob Thomas, senior vice president, software and chief commercial officer at IBM, in a statement.

IBM has been busy on the AI front of late.

Most recently, it partnered with the startup Anthropic – founded by former OpenAI researchers and backed by Amazon and Google – to integrate Anthropic’s LLM Claude into Big Blue’s software portfolio. Claude LLMs can understand and generate natural language and offer governance, security, auditability, and scalability, according to IBM. Claude will be integrated into select IBM software products, starting with IBM’s AI-first integrated development environment (IDE), designed with advanced task-generation capabilities for enterprise software development lifecycles, including software modernization. The IDE is available in private preview, IBM said.

IBM’s goal is to expand its AI software portfolio to include many LLM developers and technologies, offering customers a menu of service options, the company stated.
