NetApp Powers the Future of AI with Intelligent Data Infrastructure
GenAI powers practical and highly visible use cases for business innovation such as generating content, summarizing large amounts of information, and responding to questions. Gartner research predicts that spending on AI software will grow to $297.9 billion by 2027 and that GenAI will account for over one-third of that. The key to success in the AI era is mastery over governable, trusted, and traceable data.
Yesterday, NetApp CEO George Kurian kicked off NetApp INSIGHT 2024 with an expansive vision of this era of data intelligence. A large part of the AI challenge is a data challenge, and Kurian laid out a vision for how intelligent data infrastructure can ensure the relevant data is secure, governed, and always updated to feed a unified, integrated GenAI stack.
Today at NetApp INSIGHT, NetApp will be unveiling further innovations in intelligent data infrastructure, including a transformative vision for AI running on NetApp ONTAP®, the leading operating system for unified storage. Specifically, NetApp's vision includes:
- NVIDIA DGX SuperPOD Storage Certification for NetApp ONTAP: NetApp has begun the NVIDIA certification process for NetApp ONTAP storage on the AFF A90 platform with NVIDIA DGX SuperPOD AI infrastructure, which will enable organizations to leverage industry-leading data management capabilities for their largest AI projects. This certification will complement and build upon NetApp ONTAP's existing certification with NVIDIA DGX BasePOD. NetApp ONTAP addresses data management challenges for large language models (LLMs), eliminating the need to compromise on data management for AI training workloads.
- Creation of a global metadata namespace to explore and manage data in a secure and compliant fashion across a customer's hybrid multi-cloud estate, enabling feature extraction and data classification for AI. NetApp separately announced today a new integration with NVIDIA AI software that can leverage the global metadata namespace with ONTAP to power enterprise retrieval-augmented generation (RAG) for agentic AI.
- Directly integrated AI data pipeline, allowing ONTAP to make unstructured data AI-ready automatically and iteratively: capturing incremental changes to the customer data set, performing policy-driven data classification and anonymization, generating highly compressible vector embeddings, and storing them in a vector database integrated with the ONTAP data model, ready for high-scale, low-latency semantic search and retrieval-augmented generation (RAG) inferencing.
- A disaggregated storage architecture that enables full sharing of the storage backend, maximizing utilization of network and flash speeds and lowering infrastructure cost, significantly improving performance while economizing rack space and power for very high-scale, compute-intensive AI workloads like LLM training. This architecture will be an integral part of NetApp ONTAP, so customers will gain the benefits of disaggregation while maintaining ONTAP's proven resiliency, data management, security, and governance features.
- New capabilities for native cloud services to drive AI innovation in the cloud. Across all its native cloud services, NetApp is working to provide an integrated and centralized data platform to ingest, discover and catalog data. NetApp is also integrating its cloud services with data warehouses and developing data processing services to visualize, prepare and transform data. The prepared datasets can then be securely shared and used with the cloud providers' AI and machine learning services, including third party solutions. NetApp will also announce a planned integration that allows customers to use Google Cloud NetApp Volumes as a data store for BigQuery and Vertex AI.
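The integrated AI data pipeline described above (capture incremental changes, classify and anonymize, embed, index in a vector store, then serve semantic search) can be illustrated with a minimal, self-contained sketch. This is not NetApp's implementation: the `embed`, `anonymize`, `VectorStore`, and `ingest` names are hypothetical stand-ins, the embedding is a toy hashing-trick vector rather than a real model, and change detection uses a simple content hash.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for a real embedding model: hash each token into
    # one of `dim` buckets (a "hashing trick" bag-of-words vector),
    # then L2-normalize so dot product equals cosine similarity.
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def anonymize(text: str, sensitive_terms: set[str]) -> str:
    # Stand-in for policy-driven anonymization: mask known sensitive tokens.
    return " ".join("[REDACTED]" if t.lower() in sensitive_terms else t
                    for t in text.split())

class VectorStore:
    """Minimal in-memory vector index: upserts plus cosine-similarity search."""
    def __init__(self):
        self.docs = {}  # doc_id -> (anonymized text, embedding vector)

    def upsert(self, doc_id: str, text: str):
        self.docs[doc_id] = (text, embed(text))

    def search(self, query: str, k: int = 3):
        q = embed(query)
        scored = sorted(
            ((sum(a * b for a, b in zip(q, vec)), doc_id, text)
             for doc_id, (text, vec) in self.docs.items()),
            reverse=True)
        return [(doc_id, text, round(score, 3)) for score, doc_id, text in scored[:k]]

store = VectorStore()
seen_versions = {}  # doc_id -> content hash, to detect incremental changes

def ingest(doc_id: str, text: str, sensitive_terms: set[str]):
    # Only re-anonymize and re-embed documents whose content changed.
    digest = hashlib.sha256(text.encode()).hexdigest()
    if seen_versions.get(doc_id) == digest:
        return  # unchanged since the last pipeline run; skip
    seen_versions[doc_id] = digest
    store.upsert(doc_id, anonymize(text, sensitive_terms))

ingest("d1", "quarterly revenue report for ACME", {"acme"})
ingest("d2", "employee onboarding handbook", set())
results = store.search("revenue report", k=1)
```

In a production RAG system, the retrieved passages would then be handed to an LLM as context; here the point is only the shape of the pipeline: incremental ingest, policy-driven redaction, embedding, and low-latency similarity search over the index.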
NetApp continues to innovate with the AI ecosystem:
- Domino Data Lab chooses Amazon FSx for NetApp ONTAP: To advance the state of machine learning operations (MLOps), NetApp has partnered with Domino Data Lab, underscoring the importance of seamless integration in AI workflows. Effective today, Domino is using Amazon FSx for NetApp ONTAP as the underlying storage for Domino Datasets running in the Domino Cloud platform, providing cost-effective performance, scalability, and the ability to accelerate model development. In addition, Domino and NetApp have begun joint development to integrate Domino's MLOps platform directly into NetApp ONTAP, making it easier to manage data for AI workloads.
- General Availability of AIPod with Lenovo for NVIDIA OVX: Announced in May 2024, the NetApp AIPod with Lenovo ThinkSystem servers for NVIDIA OVX converged infrastructure solution is now generally available. This infrastructure solution is designed for enterprises aiming to harness generative AI and RAG capabilities to boost productivity, streamline operations, and unlock new revenue opportunities.
- New capabilities for FlexPod AI: NetApp is releasing new features for its FlexPod AI solution, the hybrid infrastructure and operations platform that accelerates the delivery of modern workloads. FlexPod AI running RAG simplifies, automates, and secures AI applications, enabling organizations to leverage the full potential of their data. With Cisco compute, Cisco networking, and NetApp storage, customers experience lower costs, efficient scaling, faster time to value, and reduced risk.
Authors: NetApp
Read more https://www.media-outreach.com/news/singapore/2024/09/25/328925/