
Hands-on, instructor-led live MLOps training. MLOps (DevOps for machine learning) supports collaboration between Data Science and IT teams. It helps with building and deploying models, and with automating the machine-learning lifecycle based on DevOps processes.
MLOps training is available as "onsite live training" or "remote live training".
Onsite live training can be carried out locally on customer premises in Poland or in NobleProg training centers in Poland. Remote live training is carried out by way of an interactive, remote desktop, DaDesktop.
NobleProg -- Your Local Training Provider.
Participant Reviews
Adapting to our needs
Sumitomo Mitsui Finance and Leasing Company, Limited
Course: Kubeflow
Machine Translated
MLOps Course Outlines
By the end of this training, participants will be able to:
- Install and configure Kubeflow on premise and in the cloud using AWS EKS (Elastic Kubernetes Service).
- Build, deploy, and manage ML workflows based on Docker containers and Kubernetes.
- Run entire machine learning pipelines on diverse architectures and cloud environments.
- Use Kubeflow to spawn and manage Jupyter notebooks.
- Build ML training, hyperparameter tuning, and serving workloads across multiple platforms.
By the end of this training, participants will be able to:
- Install and configure Kubernetes, Kubeflow and other needed software on AWS.
- Use EKS (Elastic Kubernetes Service) to simplify the work of initializing a Kubernetes cluster on AWS.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other AWS managed services to extend an ML application.
By the end of this training, participants will be able to:
- Install and configure Kubernetes, Kubeflow and other needed software on Azure.
- Use Azure Kubernetes Service (AKS) to simplify the work of initializing a Kubernetes cluster on Azure.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other Azure managed services to extend an ML application.
By the end of this training, participants will be able to:
- Install and configure Kubernetes, Kubeflow and other needed software on GCP and GKE.
- Use GKE (Google Kubernetes Engine) to simplify the work of initializing a Kubernetes cluster on GCP.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other GCP services to extend an ML application.
By the end of this training, participants will be able to:
- Install and configure Kubernetes, Kubeflow and other needed software on IBM Cloud Kubernetes Service (IKS).
- Use IKS to simplify the work of initializing a Kubernetes cluster on IBM Cloud.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other IBM Cloud services to extend an ML application.
By the end of this training, participants will be able to:
- Install and configure Kubernetes and Kubeflow on an OpenShift cluster.
- Use OpenShift to simplify the work of initializing a Kubernetes cluster.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Call public cloud services (e.g., AWS services) from within OpenShift to extend an ML application.
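A recurring objective across the cloud-specific outlines above is training TensorFlow models across multiple GPUs and machines. Independent of which cloud hosts the cluster, the core idiom is a `tf.distribute` strategy; the following is a toy sketch (assuming TensorFlow is installed; the model and data are placeholders, not from the course):

```python
# Toy sketch of data-parallel training in TensorFlow.
# MirroredStrategy replicates the model across all visible GPUs
# (it falls back to CPU when no GPU is present).
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

# Toy data stands in for a real training set.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```

Multi-machine training follows the same pattern with `MultiWorkerMirroredStrategy` plus cluster configuration, which on Kubernetes is typically handled by Kubeflow's training operators.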
By the end of this training, participants will be able to:
- Install and configure Kubeflow on premise and in the cloud.
- Build, deploy, and manage ML workflows based on Docker containers and Kubernetes.
- Run entire machine learning pipelines on diverse architectures and cloud environments.
- Use Kubeflow to spawn and manage Jupyter notebooks.
- Build ML training, hyperparameter tuning, and serving workloads across multiple platforms.
By the end of this training, participants will be able to:
- Install and configure MLflow and related ML libraries and frameworks.
- Appreciate the importance of trackability, reproducibility, and deployability of an ML model.
- Deploy ML models to different public clouds, platforms, or on-premise servers.
- Scale the ML deployment process to accommodate multiple users collaborating on a project.
- Set up a central registry to experiment with, reproduce, and deploy ML models.
By the end of this training, participants will be able to:
- Install and configure various MLOps frameworks and tools.
- Assemble the right kind of team with the right skills for constructing and supporting an MLOps system.
- Prepare, validate and version data for use by ML models.
- Understand the components of an ML Pipeline and the tools needed to build one.
- Experiment with different machine learning frameworks and servers for deploying to production.
- Operationalize the entire machine learning process so that it's reproducible and maintainable.
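As a tiny illustration of the data-versioning objective above, one common building block is content-addressing a dataset with a cryptographic hash, so a training run can be tied to the exact bytes it consumed. This is a plain-Python sketch of the idea behind data-versioning tools, not any specific product:

```python
# Content-address a dataset so training runs can reference an exact version.
# Plain-Python sketch of the idea behind data-versioning tools.
import hashlib

def dataset_version(data: bytes) -> str:
    """Return a short, stable identifier for this exact dataset content."""
    return hashlib.sha256(data).hexdigest()[:12]

raw = b"feature1,feature2,label\n1.0,2.0,0\n"
version = dataset_version(raw)
print(f"dataset version: {version}")

# Changing even a single byte yields a different version identifier,
# so a model logged against `version` is pinned to these exact bytes.
assert dataset_version(raw + b"3.0,4.0,1\n") != version
```

Tools built on this idea store the hash alongside model metadata, which is what makes an ML experiment reproducible end to end.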