Intel hosted a roundtable discussion featuring key representatives from the International Olympic Committee (IOC), Seekr, and Red Hat. The conversation centered on the benefits of an open AI ecosystem for developers and enterprises and the challenges brought by the surge in artificial intelligence. The event was led by Justin Hotard, Intel’s executive vice president and general manager of the Data Center and AI Group, and included insights from Kaveh Mehrabi of the IOC, Steven Huels of Red Hat, Rob Clark of Seekr, and Bill Pearson of Intel.
Key Announcements
Intel revealed new collaborations, including a partnership with the IOC and the introduction of a generative AI (GenAI) retrieval-augmented generation (RAG) solution. These initiatives highlight how open AI systems built on Intel Gaudi AI accelerators and Intel® Xeon® processors empower developers and enterprises to address AI-driven challenges.
Intel emphasized its commitment to making AI accessible by collaborating with the IOC. The company aims to foster an innovative environment that allows developers and enterprises to create tailored AI solutions, driving real-world results. By adopting an open and collaborative approach, Intel seeks to push the boundaries of what is possible for athletes and customers.
AthleteGPT: Enhancing the Olympic Experience
To assist around 11,000 athletes from diverse linguistic and cultural backgrounds during the Olympic Games, the IOC, in partnership with Intel, developed AthleteGPT. This chatbot, integrated into the Athlete365 platform, uses Intel’s Gaudi accelerators and Xeon processors to handle athlete inquiries and provide on-demand information. AthleteGPT aims to streamline athletes’ experiences in the Olympic Village in Paris, allowing them to concentrate on their training and competition.
The Importance of GenAI Solutions
Deploying GenAI solutions comes with challenges around cost, scalability, accuracy, and security. RAG addresses several of these by retrieving relevant proprietary data at query time and grounding the model’s response in it, which improves the accuracy and timeliness of AI outputs without retraining the model and keeps sensitive data under the enterprise’s control. This matters in today’s data-driven landscape, where the quality and utility of AI applications depend on current, trustworthy information.
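To make the pattern concrete, the sketch below shows the core RAG loop in plain Python: embed a handful of proprietary documents, retrieve the ones most similar to a query, and fold them into the prompt that would be sent to a language model. It is illustrative only; the embedding model and the stubbed generation step are assumptions, not part of Intel’s or OPEA’s actual solution.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Illustrative only: the embedding model and the prompt-building stub are
# placeholders, not Intel's or OPEA's implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

# Proprietary documents the base model was never trained on.
documents = [
    "The 2025 travel policy caps hotel reimbursement at 180 EUR per night.",
    "Support tickets tagged 'P1' must receive a response within 15 minutes.",
    "The Q3 product roadmap prioritizes the on-premises RAG appliance.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder works
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a real deployment the prompt would go to an LLM served on Gaudi or
# Xeon systems; here we simply print it.
print(build_prompt("How fast must a P1 ticket be answered?"))
```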
Intel’s collaborative approach involves AI platforms, open standards, and a comprehensive software and systems ecosystem, enabling developers to build customized GenAI RAG solutions. The progress shared at the event underscores Intel’s dedication to providing robust and adaptable generative AI solutions available from multiple providers.
How the GenAI RAG Solution Works
Intel is collaborating with industry partners to develop an open-source, interoperable solution that makes RAG easy to deploy. Built on the Open Platform for Enterprise AI (OPEA), the GenAI solution offers enterprises an out-of-the-box, production-ready approach. It is designed to be flexible and customizable, integrating components from a catalog of offerings from OEM system providers and industry partners.
The solution incorporates OPEA-based microservice components into a scalable RAG framework deployed on Xeon and Gaudi AI systems. It scales efficiently through orchestration frameworks such as Kubernetes and Red Hat OpenShift and provides standardized APIs along with security and system telemetry.
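As a rough illustration of what consuming such a standardized API might look like, the snippet below posts a question to a deployed RAG gateway over REST. The host, port, path, and payload schema are placeholder assumptions for this sketch; the actual contract depends on the OPEA example and release being deployed.

```python
# Illustrative client for a RAG pipeline exposed behind a standardized REST API.
# The host, port, path, and payload schema are assumptions for this sketch;
# consult the deployed OPEA example for its actual API contract.
import requests

RAG_ENDPOINT = "http://rag-gateway.example.internal:8888/v1/chatqna"  # hypothetical

def ask(question: str, timeout: float = 60.0) -> str:
    """Send a question to the deployed RAG service and return the reply body."""
    response = requests.post(
        RAG_ENDPOINT,
        json={"messages": question},   # assumed request schema
        timeout=timeout,
    )
    response.raise_for_status()
    return response.text               # assumed plain-text reply

if __name__ == "__main__":
    print(ask("Summarize our current data-retention policy."))
```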
Most large language model (LLM) development relies on the PyTorch framework, which is supported on Intel Gaudi and Xeon platforms, making it straightforward to develop directly on Intel AI systems. Working with OPEA, Intel has created an open software stack for RAG and LLM deployment, optimized for the GenAI turnkey solution and built with PyTorch, Hugging Face libraries, LangChain, and the Redis vector database.
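The sketch below shows how the ingestion and retrieval layer of such a stack could be wired together with Hugging Face embeddings, LangChain, and a Redis vector index. Module paths and class names shift between LangChain releases, and the connection string and index name are placeholders, so treat it as a starting point under those assumptions rather than the solution’s exact code.

```python
# Sketch of the ingestion/retrieval layer of the stack described above:
# Hugging Face embeddings + LangChain + a Redis vector index.
# Module paths vary across LangChain releases; redis_url and index_name
# below are placeholders.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Redis

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Index a handful of proprietary snippets into Redis (requires a Redis
# instance with the search/vector capability enabled).
store = Redis.from_texts(
    texts=[
        "Gaudi 2 nodes in rack B are reserved for fine-tuning jobs.",
        "The Xeon inference pool autoscales between 4 and 32 pods.",
    ],
    embedding=embeddings,
    redis_url="redis://localhost:6379",  # placeholder connection string
    index_name="rag-demo",               # placeholder index name
)

# Retrieve the most relevant snippets for a query; in the full pipeline these
# results would be folded into the prompt of a PyTorch-served LLM.
for doc in store.similarity_search("Where do fine-tuning jobs run?", k=1):
    print(doc.page_content)
```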
Meeting Developers’ Needs
OPEA offers open-source, standardized, modular RAG pipelines for enterprises, supporting various compilers and toolchains. This foundation accelerates AI integration and delivery for unique vertical use cases, unlocking new AI possibilities.
With the GenAI turnkey solution and a comprehensive enterprise AI stack, Intel delivers a complete answer to the challenges of deploying and scaling RAG and LLM applications. By leveraging Intel-powered AI systems and optimized software, businesses can put GenAI to work efficiently and quickly.
Looking Ahead
Increasing access to the latest AI computing technology remains a challenge for enterprises. Through strategic collaborations with industry partners and customers, Intel is creating new opportunities for AI services driven by GenAI and RAG solutions. Intel also announced the new Coalition for Secure AI (CoSAI), formed with Google, IBM, and others to enhance trust and security in AI development and deployment. Further demonstrations of Intel’s unique AI systems approach will be showcased at Intel Innovation on September 24–25.