Saishruthi Swaminathan, Carlos Santana and Sepideh Seifzadeh – members of the IBM Center for Open-Source Data & AI Technologies team – explained the effort in a blog post, noting that it has become necessary to integrate and run AI and ML technologies across cloud environments. Last year the team released the Elyra AI toolkit, and it said the latest launch is an end-to-end machine-learning pipeline starter kit within the Cloud-Native Toolkit.

“Using critical hybrid cloud capabilities including open source and Red Hat OpenShift, developers can use the new toolkit as a starting point to transition their ML and AI-powered applications from Jupyter notebooks to production environments,” the IBM team wrote. “This will help developers and data scientists speed up the development, deployment and innovation of projects by providing a set of opinionated approaches and tools to ensure they run well and optimize business value during the process.”

The researchers added that the kit would help developers save time by keeping them from getting “bogged down” by the different components and tasks that come together when transitioning to cloud environments.

Santana, Swaminathan and Seifzadeh also noted that it is now common for developers to integrate AI and machine-learning technologies with cloud-native environments when working on tools such as cognitive chatbots or automated language translation. The rise of microservices has also spurred the integration of the two technologies.

“The starter kits are part of the IBM Cloud-Native Toolkit, an open-source collection of assets that provide an environment to develop cloud-native applications for deployment within Red Hat OpenShift and Kubernetes,” they wrote. “These starter kits offer an excellent starting point to operationalize and industrialize your AI-powered applications and make them ready for production using open source and Red Hat OpenShift technologies. The starter kit speeds up the development, deployment, and innovation with a set of opinionated approaches/tools.”

IBM said data scientists and developers can now use the toolkit; they first need to create their model as a microservice using the MAX framework. Users will then “build and deploy on Red Hat OpenShift with support of continuous integration (using Jenkins & Tekton CI) and continuous delivery (using Argo CD), code analysis (using SonarQube), logging (using LogDNA/Sysdig), API support, and health checks.”
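The workflow IBM describes starts with wrapping a trained model as a microservice, which the pipeline then builds, deploys, scans and monitors. As a rough illustration of that first step only, below is a minimal sketch of a model-serving microservice with prediction and health-check endpoints, assuming a Python Flask app; the route names, the “model.pkl” file and the JSON payload shape are hypothetical stand-ins, and the MAX framework supplies its own scaffolding rather than requiring hand-written code like this.

```python
# Minimal sketch of a model-as-a-microservice, loosely following the
# pattern the starter kit targets. Assumptions: Flask is installed,
# "model.pkl" is a pickled scikit-learn-style model, and the /predict
# and /health routes are purely illustrative -- the MAX framework
# generates equivalent endpoints itself.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:  # hypothetical serialized model
    model = pickle.load(f)


@app.route("/health", methods=["GET"])
def health():
    # Health-check endpoint that OpenShift readiness/liveness probes can hit.
    return jsonify({"status": "ok"})


@app.route("/predict", methods=["POST"])
def predict():
    # Accepts a JSON body like {"instances": [[...], [...]]} and returns
    # the model's predictions for each feature vector.
    features = request.get_json()["instances"]
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A container image built from a service along these lines is what the starter kit then pushes through the Jenkins/Tekton, Argo CD and SonarQube stages described in the quote above.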