

Cisco Unveils Server for Artificial Intelligence and Machine Learning

September 10, 2018 By BlueAlly


SAN JOSE, Calif. - September 10, 2018 - Artificial intelligence (AI) and machine learning (ML) are opening up new ways for enterprises to solve complex problems, but they will also have a profound effect on the underlying infrastructure and processes of IT. According to Gartner, "only 4% of CIOs worldwide report that they have AI projects in production."* That number will grow dramatically over the next few years, and when it does, IT will struggle to manage new workloads, new traffic patterns, and new relationships within the business.

To help enterprises address these emerging challenges, Cisco is unveiling its first server built from the ground up for AI and ML workloads. The new Cisco UCS server speeds up deep learning, a compute-intensive form of machine learning that uses neural networks and large data sets to train computers for complex tasks. Packed with powerful NVIDIA GPUs, it is designed to accelerate many of today's best-known machine learning software stacks.

Data scientists and developers can experiment with machine learning on a laptop, but deep learning at scale demands much more compute capability. It requires an IT architecture capable of taking in vast sets of data, and tools that can make sense of that data and use it to learn. That is why Cisco is working with its technology partners to validate many of today's most popular machine learning tools: to help simplify deployments and accelerate time to insight.

"Over the next few years, apps powered by artificial intelligence and machine learning will become mainstream in the enterprise. While this will solve many complex business issues, it will also create new challenges for IT," said Roland Acra, SVP and GM for Cisco's Data Center Business Group. "Today's powerful addition to the Cisco UCS lineup will power AI initiatives across a wide range of industries. Our early-access customers in the financial sector are exploring ways to improve fraud detection and enhance algorithmic trading. Meanwhile, in healthcare, they're interested in better insights and diagnostics, improving medical image classification, and speeding drug discovery and research."

Powering Artificial Intelligence at Scale

With the addition of the Cisco UCS C480 ML, Cisco now offers a complete range of computing options designed for each stage of the AI and ML lifecycle. From data collection and analysis near the edge, to data preparation and training in the data center, to the real-time inference at the heart of AI, customers are covered.
  • Built for data scientists and developers: Today, thousands of customers use Cisco UCS to help them make sense of big data. Cisco's new server for AI and ML builds on that expertise in moving data from the edge to the core and goes further. It lets customers extract more intelligence from their data and use it to make better, faster decisions. With its new DevNet AI Developer Center and DevNet Ecosystem Exchange, Cisco is also giving data scientists and developers the tools and resources to create a new generation of apps.
  • Built for IT: UCS makes it easy for IT teams to add new technology to their environment. With Cisco Intersight, they also get the simplicity and reach of cloud-based systems management, letting them automate policy and operations for all their computing infrastructure from the cloud. And with Cisco Validated Designs to help demystify the rapidly evolving stacks of AI and ML software, IT can deploy with confidence at enterprise scale.
  • Built with an ecosystem: Cisco is not working alone. It is embracing containers and multicloud computing models to make it easier to deploy open source software at scale, no matter where apps live. It is validating machine learning environments and software such as Anaconda, Kubeflow, and solutions from Cloudera and Hortonworks on the new server. UCS customers who use Kubeflow running on top of Kubernetes will find it easy to deploy AI workloads directly to Google Kubernetes Engine, taking advantage of both on-prem and cloud ML capabilities.
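As a rough illustration of the Kubeflow workflow mentioned above, the sketch below shows a minimal TFJob manifest of the kind Kubeflow's training operator consumes. The job name, container image, and training script here are hypothetical placeholders, and the exact `apiVersion` depends on the Kubeflow release installed; the point is that the same declarative manifest can target an on-prem Kubernetes cluster or Google Kubernetes Engine.

```yaml
# Minimal Kubeflow TFJob sketch (job name, image, and script are hypothetical).
# Assumes the Kubeflow training operator is installed on the target cluster,
# whether that cluster runs on-prem or on Google Kubernetes Engine.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train            # hypothetical job name
  namespace: kubeflow
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2              # two distributed training workers
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: example.com/ml/mnist:latest   # hypothetical training image
              command: ["python", "/opt/train.py"] # hypothetical entry point
              resources:
                limits:
                  nvidia.com/gpu: 1  # request one GPU per worker
```

Deploying to either environment is then the same step, e.g. `kubectl apply -f tfjob.yaml`, which is what makes moving workloads between on-prem UCS clusters and GKE straightforward.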