
3 MLOps Predictions for 2022

Welcome to 2022. (It is 2022, right?) The blur of the last year (or two) has been dizzying in more ways than I care to recount, and MLOps continues to evolve at an unprecedented pace. It feels like the last 6 months in machine learning operationalization have seen more changes than the 6 years preceding. And we’re just getting started.

So what’s next? Here are my MLOps predictions for 2022:

1. Experiment Tracking -> Model Management

We’ve been building models for a while now—all kinds. But machine learning has emerged in recent years to solve a variety of previously unaddressable problems. With its rise came dozens if not hundreds of candidate models per project, their related data sets, and associated artifacts and metadata - and thus the need for more robust experiment tracking.
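
To make that concrete, here’s a rough sketch - illustrative Python with made-up names, not any particular tool’s API - of the kind of record an experiment tracker has to capture for each candidate model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExperimentRun:
    """One candidate model out of the dozens or hundreds a team produces."""
    run_id: str
    model_name: str
    dataset_version: str                              # which data the model was trained on
    hyperparameters: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)       # e.g. accuracy, AUC
    artifacts: dict = field(default_factory=dict)     # paths to weights, plots, etc.
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: logging one of many candidate runs
run = ExperimentRun(
    run_id="run-042",
    model_name="churn-classifier",
    dataset_version="customers-2021-12",
    hyperparameters={"max_depth": 6, "n_estimators": 200},
    metrics={"auc": 0.91},
    artifacts={"weights": "s3://bucket/run-042/model.pkl"},
)
```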

Now that we’re better at building models - both the quality and quantity of them - the need for more robust model management to support operationalization is paramount. This starts with a centralized production model registry for deployment but includes additional functionality to support operations needs such as governance and monitoring. That is best accomplished by a single solution for full-lifecycle model management that integrates with both build and run tooling - from data prep, training, and model validation during development and experimentation, to deployment, inference and serving, and monitoring in production.
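
As a sketch of what model management adds on top of tracking, the snippet below (again illustrative; the class and method names are assumptions, not a vendor API) shows a toy registry that owns approvals and stage transitions rather than just training history:

```python
class ModelRegistry:
    """Toy in-memory registry: one place that knows which model version
    is registered, approved, and promoted to production."""

    def __init__(self):
        self._versions = {}  # (name, version) -> record

    def register(self, name, version, run_id):
        self._versions[(name, version)] = {
            "run_id": run_id,    # link back to the experiment that produced it
            "stage": "registered",
            "approvals": [],     # governance: who signed off
        }

    def approve(self, name, version, approver):
        self._versions[(name, version)]["approvals"].append(approver)

    def promote(self, name, version, stage):
        record = self._versions[(name, version)]
        if stage == "production" and not record["approvals"]:
            raise ValueError("governance check: unapproved models cannot go to production")
        record["stage"] = stage
        return record

registry = ModelRegistry()
registry.register("churn-classifier", "v3", run_id="run-042")
registry.approve("churn-classifier", "v3", approver="risk-team")
registry.promote("churn-classifier", "v3", stage="production")
```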

2022 will be the year full-lifecycle model management not only becomes a thing but a required thing for model operationalization.

2. Solidification of a 2-Phase Model Lifecycle

Speaking of the model lifecycle…

MLOps has proven to be a confusing term, to say the least. While the focus is clearly on operationalization, when describing everything MLOps entails we almost always include development-phase processes like data preparation and model training. This, of course, makes a lot of sense in the world of ML, as automatic re-training is core to the vision of what’s possible.

That said, there’s still significant effort put into building a model the first time that needs to be captured in the model lifecycle, especially given teams’ tendency to use a diverse set of tools during development while needing a standardized toolchain and processes to run models in production.

Broadly dividing these two phases of the iterative model lifecycle into Build and Run captures both the delineation of tooling and the fact that how each phase is executed looks different depending on whether it’s the first deployment of a model or an iteration.

Bottom line: we’ll start talking about the Build phase of the model lifecycle to capture data preparation, training, and validation and the Run phase to describe deployment, operations, and monitoring.
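
One illustrative way to picture the split (the stage names below are my own shorthand, not a standard):

```python
from enum import Enum

class Phase(Enum):
    BUILD = "build"  # owned by data science / experimentation tooling
    RUN = "run"      # owned by standardized production tooling

# Mapping lifecycle stages to the two phases described above
LIFECYCLE = {
    "data_preparation": Phase.BUILD,
    "training": Phase.BUILD,
    "validation": Phase.BUILD,
    "deployment": Phase.RUN,
    "inference_and_serving": Phase.RUN,
    "monitoring": Phase.RUN,
}

# On the first deployment the Build phase is largely manual and exploratory;
# on later iterations the same Build stages can be triggered automatically
# (e.g. re-training) when the Run phase's monitoring detects degradation.
```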

3. Monitoring vs. Observability vs. Explainability

I put “vs.” in the title there because (unfortunately) I don’t think this one will be resolved in 2022.

We lack clear definitions for what we mean by model monitoring, model observability, and model explainability, and the terms are often used in overlapping and interchangeable ways. That creates no small amount of confusion when we try to describe exactly what we’re trying to do and the tooling needed to accomplish it.

I’m not going to attempt to resolve this debate here other than to say monitoring and observability (or o11y as it’s known in the DevOps world) are probably more aligned and will collapse into one set of best practices and tooling, while explainability - which is more aligned with responsible AI and the need to track and understand fairness and bias - has the potential to be its own domain altogether or fall under the broad charter of model governance.
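
Whatever vocabulary wins, the monitoring piece is already concrete. Here’s a minimal, illustrative sketch of one common drift check - the population stability index - with the threshold treated as an assumption rather than a standard:

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """Compare the distribution of a feature (or of model scores) in production
    against the distribution seen at training time. Large values suggest drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        # small epsilon so empty buckets don't divide by zero
        return [max(counts.get(i, 0) / total, 1e-6) for i in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Example: compare training-time scores to last week's production scores
training_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
production_scores = [0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.9, 0.95]
psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # commonly cited rule of thumb; treat as an assumption, not a standard
    print(f"Possible drift detected (PSI={psi:.2f}): trigger an alert or re-training")
```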

And those are my MLOps predictions for 2022! Agree? Disagree? Want to discuss? Connect with me on LinkedIn or at michael@verta.ai.

Michael Butt is the VP of Marketing at Verta. He spent nearly 20 years in the DevOps space as a consulting engineer, product manager, and product marketer focused on network and application performance monitoring and loves being at the forefront of what’s next in enterprise technology.
