Deployment of Machine Learning Models: On-Premises and in the Cloud

Amber Ivanna Trujillo
5 min read · Apr 10, 2024


In machine learning, building a strong model is only the starting point. To get real value from your work, you also need to deploy your models well, so that they are accessible and usable by end users. This article walks through the deployment journey, on-premises and in the cloud, and covers the fundamental concepts and methods involved.

Understanding Model Deployment

Model deployment means making a trained machine learning model available for others to use. It involves integrating the model into a system or application so that it can generate predictions or decisions from real-world data. Deployment typically happens in one of two ways: on-premises or in the cloud.
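
To make this concrete, here is a minimal sketch of wrapping a trained model in a small HTTP prediction service. The Flask framework, the file name model.joblib, and the feature layout are illustrative assumptions, not details from the article.

```python
# Minimal sketch: serving a trained model over HTTP so other systems can
# request predictions. Assumes a scikit-learn model saved as "model.joblib"
# (illustrative name) and that flask, joblib, and scikit-learn are installed.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # load the trained model once at startup


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()        # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    features = [payload["features"]]    # the model expects a 2-D array of samples
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})


if __name__ == "__main__":
    # Bind to all interfaces so the service is reachable from other machines.
    app.run(host="0.0.0.0", port=8000)
```

A client application could then POST JSON such as {"features": [5.1, 3.5, 1.4, 0.2]} to http://your-server:8000/predict and receive the model's prediction in the response.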

On-Premises Deployment

1. Establishing a Server Environment

On-premises deployment means hosting your machine learning model on servers inside your own organization. The essential steps break down as follows:

  - Hardware Selection: Choose hardware that can handle the computational demands of your model.
  - Software Installation: Install the required software, including libraries, frameworks, and web servers.
  - Network Configuration: Ensure reliable, secure network connectivity for data access and model serving.

2. Containerization

Leveraging containerization tools like Docker streamlines deployment by… (a sketch using the Docker SDK for Python follows below).
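
To illustrate the containerization step, here is a minimal sketch using the Docker SDK for Python (the docker package). The image tag ml-model:latest, the port mapping, and the assumption that a Dockerfile for the prediction service already exists in the current directory are illustrative, not details from the article.

```python
# Minimal sketch: building and running a containerized model service with the
# Docker SDK for Python. Assumes Docker is installed and a Dockerfile for the
# prediction service sits in the current directory (illustrative setup).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="ml-model:latest")

# Run the container in the background, mapping the service port to the host.
container = client.containers.run(
    "ml-model:latest",
    ports={"8000/tcp": 8000},
    detach=True,
)

print(f"Container {container.short_id} is running; service on http://localhost:8000")
```

In practice, many teams drive the same build and run steps with the Docker CLI or an orchestrator; the SDK version is shown here only to keep the example in Python.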
