NeuReality, a pioneer in reimagining AI inferencing architecture for the demands of today’s AI models and workloads, announced that its NR1 Inference Appliance now comes preloaded with popular enterprise AI models, including Llama, Mistral, Qwen, and Granite¹, plus support for private generative AI clouds and on-premises clusters. Up and running in under 30 minutes, the generative and agentic AI-ready appliance delivers 3x better time-to-value, allowing customers to innovate faster. Current proofs of concept demonstrate up to 6.5x more token output for the same cost and power envelope compared to x86 CPU-based inference systems, making AI more affordable and accessible to businesses and governments of all sizes.
Inside the appliance, the NR1® Chip is the first true AI-CPU purpose-built for inference orchestration (the management of data, tasks, and integration) with built-in software, services, and APIs. It not only subsumes traditional CPU and NIC architecture into a single device but also packs 6x the processing power onto the chip to keep pace with the rapid evolution of GPUs, while removing traditional CPU bottlenecks.
The NR1 Chip pairs with any GPU or AI accelerator inside its appliance to deliver breakthrough cost, energy, and real-estate efficiencies critical for broad enterprise AI adoption. For example, running the same Llama 3.3 70B model on an identical GPU or AI accelerator setup, NeuReality's AI-CPU-powered appliance achieved a lower total cost per million AI tokens than x86 CPU-based systems.
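For illustration, here is a back-of-the-envelope sketch of how a cost-per-million-tokens figure like this can be computed. All numbers below are hypothetical placeholders for illustration only, not NeuReality benchmark results:

# Illustrative sketch: total cost per one million output tokens for an
# inference system. All figures are hypothetical placeholders, not
# measured NeuReality or x86 results.

def cost_per_million_tokens(hourly_infra_cost_usd: float,
                            power_kw: float,
                            power_cost_per_kwh_usd: float,
                            tokens_per_second: float) -> float:
    """Combined infrastructure and energy cost per 1M output tokens."""
    tokens_per_hour = tokens_per_second * 3600
    hourly_energy_cost = power_kw * power_cost_per_kwh_usd
    total_hourly_cost = hourly_infra_cost_usd + hourly_energy_cost
    return total_hourly_cost / tokens_per_hour * 1_000_000

# Hypothetical comparison: identical accelerators, different host architecture.
x86_host = cost_per_million_tokens(12.0, 4.0, 0.12, 900)     # CPU-bottlenecked throughput
ai_cpu_host = cost_per_million_tokens(12.0, 4.0, 0.12, 2400) # higher accelerator utilization
print(f"x86 host:    ${x86_host:.2f} per 1M tokens")
print(f"AI-CPU host: ${ai_cpu_host:.2f} per 1M tokens")

The intuition the sketch captures: if host-side bottlenecks are removed and the same accelerators serve more tokens per second, the fixed hourly cost is spread across more tokens, so the cost per million tokens falls proportionally.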
“No one debates the incredible potential of AI. The challenge is how to make it economical enough for companies to deploy AI inferencing at scale. NeuReality’s disruptive AI-CPU technology removes the bottlenecks, allowing us to deliver the extra performance punch needed to unleash the full capability of GPUs, while orchestrating AI queries and tokens to maximize the performance and ROI of those expensive AI systems,” said Moshe Tanach, Co-founder and CEO at NeuReality.
“Now, we are taking ease-of-use to the next level with an integrated silicon-to-software AI inference appliance. It comes pre-loaded with AI models and all the tools to help AI software developers deploy AI faster, easier, and cheaper than ever before, allowing them to divert resources to applying AI in their business instead of to infrastructure integration and optimization,” continued Tanach.
A recent study found that roughly 70% of businesses report using generative AI in at least one business function, showing increased demand. Yet, according to Exploding Topics, only 25% have processes fully enabled by AI with widespread adoption, and only one-third have begun implementing limited AI use cases.
Today, CPU performance bottlenecks on servers managing multi-modal and large language model workloads are a driving factor behind average GPU utilization rates as low as 30-40%. The result is expensive silicon waste in AI deployments, and underserved markets that still face complexity and cost barriers to entry.
Already deployed with cloud and financial services customers, NeuReality’s NR1 Appliance was specifically designed to accelerate AI adoption through its affordability, accessibility, and space efficiency for both on-premises and cloud inference-as-a-service (IaaS) deployments. In addition to the pre-loaded generative and agentic AI models, refreshed with new releases each quarter, it comes fully optimized with preconfigured software development kits (SDKs) and APIs for computer vision, conversational AI, and custom requests, supporting a variety of business use cases and markets (e.g., financial services, life sciences, government, cloud service providers).
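NeuReality has not published the exact API surface of these SDKs, but many inference appliances expose an OpenAI-compatible HTTP endpoint. Under that assumption, a minimal illustrative sketch of querying a pre-loaded model might look like the following; the host URL, port, and model identifier are hypothetical, so consult NeuReality's documentation for the actual interface:

# Minimal sketch of querying a pre-loaded model over an assumed
# OpenAI-compatible chat-completions endpoint. The URL and model
# name below are illustrative assumptions, not documented values.
import json
import urllib.request

APPLIANCE_URL = "http://nr1-appliance.local:8000/v1/chat/completions"  # hypothetical

payload = {
    "model": "llama-3.1-8b",  # one of the pre-loaded models (name assumed)
    "messages": [{"role": "user", "content": "Summarize our Q3 results."}],
    "max_tokens": 256,
}
req = urllib.request.Request(
    APPLIANCE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply["choices"][0]["message"]["content"])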
The first NR1 Appliance unifies NR1® Modules (PCIe cards) with Qualcomm® Cloud AI 100 Ultra accelerators. For more information on the NR1 Appliance, Module, Chip, and NeuReality® Software and Services, please visit: https://www.neureality.ai/solution.
Join NeuReality at InnoVEX 2025
NeuReality will be at InnoVEX (co-located with Computex in Taipei, Taiwan) on May 20-23, 2025, in the Israel Pavilion, Hall 2, Booth S0912 (near Center Stage). The company will host live demonstrations of the NR1 Inference Appliance, including migrating a chat application in minutes and a performance demo with the NR1 chip running Smooth Factory Models and DeepSeek-R1-Distill-Llama-8B.
About NeuReality
Founded in 2019, NeuReality is pioneering purpose-built AI inference architecture that makes advanced AI immediately accessible and affordable. Our innovation is powered by our NR1® Chip—the first true AI-CPU designed for inference orchestration—which eliminates legacy bottlenecks with an open, collaborative approach fully compatible with any AI accelerator. This breakthrough technology reaches enterprise customers through our turnkey NR1® Appliance, which deploys in under an hour with pre-loaded AI models, delivering enterprise-grade AI inference with unprecedented ease. With 80 experienced team members across Israel, Poland, and the U.S., we're on a mission to make all AI shine. Learn more at http://www.neureality.ai.
¹ AI models pre-loaded and pre-optimized for enterprise customers include: Llama 3.3 70B and Llama 3.1 8B (with the Llama 4 series coming soon); Mistral 7B, Mixtral 8x7B, and Mistral Small; Qwen 2.5, including Coder (with Qwen 3 coming soon); DeepSeek R1-Distill-Llama 8B and R1-Distill-Llama 70B; and Granite 3 and 3.1 8B (with Granite 3.3 coming soon).
View source version on businesswire.com: https://www.businesswire.com/news/home/20250514657573/en/
Media Contact:
Leigh Rosenwald
Voxus PR for NeuReality
NeuReality@voxuspr.com