Is MLOps Just For Operationalization?

There’s a lot of conversation - and maybe even argument - happening around MLOps and whether or not it’s just about operationalization.

The short answer: no. But of course, there’s more to it than that.

MLOps vs. ModelOps

First off, let me clarify that when I say MLOps, I’m referring to all AI/ML models. Gartner calls this ModelOps (of which MLOps is a subset), but I’m keeping it simple and going with the more common MLOps as a catch-all.

Lessons from DevOps

There’s a lot we can learn from DevOps when it comes to operationalizing models - specifically, the common software engineering practice of separating the tools and processes used exclusively for writing code (e.g. an IDE) from those that span the build and run phases of the development lifecycle (e.g. a code repo).

Carrying that example into MLOps, I wouldn’t consider a notebook part of an MLOps stack, but a centralized model management system that a notebook integrates with most certainly would be. This is where the gray area begins, and it all hinges on our definition of the model lifecycle.
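To make that distinction concrete, here’s a minimal sketch of what “a notebook integrating with a centralized model management system” might look like, using MLflow purely as a stand-in for whatever system you use; the dataset, experiment name, and metric are illustrative, not from the original post:

```python
# A minimal sketch, with MLflow standing in for any centralized model
# management system. The dataset, experiment name, and metric are
# illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

mlflow.set_experiment("iris-demo")

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("val_accuracy", model.score(X_val, y_val))
    # The notebook is just a client here: the run, its parameters,
    # metrics, and model artifact all land in the central system.
    mlflow.sklearn.log_model(model, artifact_path="model")
```

The notebook itself isn’t MLOps; the tracking and logging calls that hand the run off to a shared system are.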

The MLOps Lifecycle

Most data scientists and machine learning engineers seem to agree that the MLOps lifecycle includes data preparation, training, and validation, in no small part because re-training models (manually or, ideally, automatically) is crucial to operating a model in production.

But there’s another reason MLOps should address the data prep, training, and validation steps in the build phase: to deploy and operate a model efficiently, the tools and processes used in operationalization should leverage those that were in place during model development.

For example, managing a model in production (which, by the way, should be standardized and centralized) is not scalable unless deployment and monitoring systems are integrated, via a central model management system, with the build systems used for data prep, training, and validation. Throwing a model over the fence to be operationalized is a recipe for bespoke pipelines that are brittle, overly complex, and highly manual.
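As a rough illustration of that integration, the sketch below shows a monitoring alert feeding back into the same build-phase pipeline through a central model management layer. Every name in it (ModelManagementClient, trigger_pipeline, the drift threshold) is hypothetical, not any particular vendor’s API:

```python
# A hypothetical sketch: monitoring and build systems share one
# management layer, so re-training reuses the exact pipeline that
# produced the production model -- no bespoke, over-the-fence path.
class ModelManagementClient:
    """Stands in for a centralized model management system (hypothetical API)."""

    def trigger_pipeline(self, pipeline: str, reason: str) -> None:
        # In a real system this would kick off the registered
        # data prep / training / validation pipeline.
        print(f"triggering {pipeline}: {reason}")


def on_monitoring_alert(
    client: ModelManagementClient,
    model_name: str,
    drift_score: float,
    threshold: float = 0.2,
) -> None:
    # The alert doesn't spawn a one-off fix; it re-runs the build
    # pipeline the management system already knows about.
    if drift_score > threshold:
        client.trigger_pipeline(
            pipeline=f"{model_name}-train-validate",
            reason=f"drift={drift_score:.2f} exceeded threshold={threshold}",
        )


on_monitoring_alert(ModelManagementClient(), "churn-model", drift_score=0.31)
```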

Operationalization is more than just ops

In summary, while the problem of the day in operationalizing AI/ML models is squarely focused on more efficient deployment, inference and serving, and monitoring, MLOps is not solely concerned with the run phase of the lifecycle.

Standardizing model operationalization for scale means integrating with and leveraging the tools and processes used during the build phase of the lifecycle (specifically for data prep, training, and validation), starting with a model management system that marries experiment tracking with a production model catalog / registry.
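As a sketch of what “marrying experiment tracking with a production registry” might look like in practice, again with MLflow standing in for the model management system and assuming the “iris-demo” experiment and “val_accuracy” metric from the earlier sketch:

```python
# A minimal promotion sketch, assuming the experiment and metric names
# from the earlier example. MLflow is a stand-in, not an endorsement.
import mlflow

exp = mlflow.get_experiment_by_name("iris-demo")
runs = mlflow.search_runs(
    experiment_ids=[exp.experiment_id],
    order_by=["metrics.val_accuracy DESC"],
    max_results=1,
)
best_run_id = runs.loc[0, "run_id"]

# Promotion is the hand-off: deployment and monitoring reference this
# registry entry (and its lineage back to the run), not the notebook.
mlflow.register_model(model_uri=f"runs:/{best_run_id}/model", name="iris-classifier")
```

The point is lineage: the thing in the production catalog traces directly back to the tracked experiment that produced it.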

Ultimately, the goal of broadening MLOps to incorporate some aspects of model development is automation - for example, reducing or eliminating the effort needed to configure monitoring, infrastructure scaling, and re-training in production systems. In the end, data scientists and machine learning engineers should be spending their time solving business problems, not wrangling (or worse yet, building) infrastructure.
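One hedged sketch of where that automation could land: a declarative spec that a platform consumes, so the data scientist never hand-wires monitoring, scaling, or re-training. Every field and name here is hypothetical, shown only to illustrate the idea:

```python
# A hypothetical declarative spec; all fields are illustrative.
from dataclasses import dataclass


@dataclass
class OperationalSpec:
    model_name: str
    monitored_metrics: tuple = ("prediction_drift", "latency_p99")
    autoscale_max_replicas: int = 4
    retrain_on_drift_above: float = 0.2  # re-runs the build-phase pipeline


# The data scientist declares intent once; the platform wires up
# monitoring, scaling, and re-training from it.
spec = OperationalSpec(model_name="iris-classifier")
print(spec)
```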
