IT experts from around the world are headed to VMworld 2019 in San Francisco to learn how they can leverage emerging technologies from VMware and ecosystem partners (e.g. Red Hat, NVIDIA, etc.) to help drive digital transformation in their organizations. Artificial Intelligence (AI)/Machine Learning (ML) is a very popular technology trend, with Red Hat OpenShift customers such as HCA Healthcare, BMW, and Emirates NBD offering differentiated value to their customers. Investments are ramping up across many industries to develop intelligent digital services that improve customer satisfaction and provide a competitive business advantage. Early deployment trends indicate that AI/ML solution architectures span edge, data center, and public clouds.
If you are part of the IT group, you may have already been asked to support the data scientists and software developers in your organization who are driving the development of machine learning models and the associated intelligent applications.
Data scientists play a vital role in the success of AI/ML projects. They are primarily responsible for ML model selection, training, and testing. They also need to collaborate with data engineers and software developers to make sure the source data is credible and the machine learning models are successfully deployed into application development processes.
Here are some of the key challenges data scientists face as they strive to efficiently build ML models:
Selecting and deploying the right ML tooling or framework
The complexity and time required to train, test, and select the ML model that provides the highest prediction accuracy
Slow execution of computational ML modeling tasks due to a lack of powerful IT infrastructure
Dependency on IT to provision and manage infrastructure
Collaboration with other key contributors, such as data engineers and application developers
If I were a data scientist, I would want a “self-service, cloud-like” experience for my ML projects. This experience should allow me to access a rich set of ML modeling frameworks, data, and computational resources across edge, data center, and public clouds. I should be able to share work and collaborate with my colleagues, and deliver my work into production with the agility and repeatability needed to achieve business value.
This is where containers and Kubernetes-based hybrid cloud solutions like Red Hat OpenShift Container Platform, combined with NVIDIA GPUs on VMware vSphere, come into play. This combination can help extend the value of your vSphere investments and drive the mainstream adoption of AI/ML-powered intelligent apps.
There are several benefits that can be achieved with this solution, including:
Agility across the ML pipeline by automating the installation, provisioning, and autoscaling of container-based ML models and frameworks. NVIDIA GPUs can help speed up the massive computational tasks required to train, test, and fine-tune ML models without having to buy more compute and storage resources, with Red Hat OpenShift serving as the container and Kubernetes based “self-service cloud” (a GPU scheduling sketch follows this list).
Portability and flexibility for ML-powered apps to be developed and delivered across data center, edge, and public clouds. OpenShift also provides the flexibility to offer ML-as-a-service to apps, without having to embed the ML models directly in the application code for production use (a model-serving sketch follows this list).
Efficient operations and lifecycle management for ML-powered intelligent applications through automation of the CI/CD process, enabling more efficient collaboration and helping to boost productivity.
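To make the first point more concrete, here is a minimal sketch of how a GPU-backed training job could be scheduled on an OpenShift/Kubernetes cluster using the Kubernetes Python client. The container image, namespace, and training script names are placeholders, and the sketch assumes the NVIDIA GPU device plugin is installed so that the `nvidia.com/gpu` resource is available on GPU-equipped nodes.

```python
# Sketch: request an NVIDIA GPU for a containerized training job.
# Assumes the NVIDIA device plugin exposes the nvidia.com/gpu resource
# and that "ml-projects" is an existing namespace (placeholder).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ml-training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/tensorflow:19.08-py3",  # example GPU-enabled image
                command=["python", "train.py"],               # placeholder training script
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedule onto a GPU-equipped node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-projects", body=pod)
```

In practice the same resource request can be declared in a plain manifest or an OpenShift template; the point is that the data scientist asks for a GPU declaratively and the platform handles placement.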
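To illustrate the ML-as-a-service pattern from the second point, the following hypothetical sketch wraps an already trained scikit-learn model in a small REST endpoint with Flask, so applications call the model over HTTP instead of embedding it in their own code. The model file name and feature format are assumptions.

```python
# Sketch: serve a trained model over HTTP instead of embedding it in app code.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # placeholder path to a previously trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged as a container image, an endpoint like this can be deployed, scaled, and updated on OpenShift independently of the applications that consume it.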
While you are at VMworld, don’t miss your chance to learn more about this topic. Come check out the mini-theatre session from Red Hat’s Andrew Sullivan at the NVIDIA booth in the expo center at 12:45 p.m. on Monday, August 26, 2019.
Please also check out the Red Hat AI/ML blog, as well as our announcement with NVIDIA, to learn more about the strategic partnership between Red Hat and NVIDIA to accelerate and scale AI/ML across the hybrid cloud.