Why Monitoring Machine Learning Models Matters

Monitoring machine learning models is crucial for any business that operates ML models in production. In some ways it matters even more than monitoring the conventional software systems DevOps teams are used to, for several reasons. First, ML models can fail silently if not monitored, so errors may go undetected for a long time. Second, ML models are often among the most essential components of the larger software system: they are responsible for making intelligent decisions, and we rely heavily on their predictions.

Despite its importance, there is not yet a standard practice or framework for monitoring machine learning models in production, so many models are deployed without proper monitoring and testing. This is largely because ML technology is only starting to mature, and MLOps, the intersection of DevOps and ML, is still a young field. In this post we will discuss why machine learning model monitoring matters and the issues that may arise with your model in production.

Illustration of the full lifecycle of a machine learning model from conception to production. In practice, many models do not go through a proper monitoring stage. (source)

What Can Go Wrong?

The short answer is: a lot. ML models are often treated as black boxes. Many software engineers don’t understand how these models are constructed or how to evaluate their performance, so as long as the model endpoint is alive, they assume it is performing as expected.

ML models are often treated as a black box, which could lead to costly undetected errors and loss of control (source)

However, even a frozen model does not live in a frozen environment, and thus we must continuously evaluate whether our model is still fit for the task. We will discuss some of the reasons your model may not perform as expected.

Data Drift and Concept Drift

One of the most common causes of model degradation over time is data drift: any change in the distribution of the input data over time. It can be caused either by a shift in the “real world” (e.g. a new competitor affects the market, a pandemic changes user behavior) or by more technical changes to the data pipeline (e.g. incorporating data from additional sources, introducing new categories to categorical features). Similarly, concept drift occurs when there is a shift in the relationship between the features and the target.

More specifically, during training the model learns to fit the patterns in the training data, and a test set is usually set aside for evaluation to simulate performance on real-world data. However, if there is a significant shift in the makeup of the data, we cannot expect the model to keep performing flawlessly, and it may become stale.

Taxonomy of types of concept drift (source)
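
One lightweight way to check for drift is to compare the distribution of each incoming feature against a reference sample from training. The sketch below does this with a two-sample Kolmogorov–Smirnov test from SciPy; the synthetic samples and the 0.05 significance threshold are purely illustrative assumptions.

```python
# Minimal data-drift check: compare live feature values against a
# reference sample taken from the training set.
# Assumes a numeric feature; the data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live sample's distribution differs significantly
    from the reference (training) sample for this feature."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Example: a shifted live distribution should trigger the alert.
rng = np.random.default_rng(seed=42)
train_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
live_sample = rng.normal(loc=0.7, scale=1.0, size=1_000)   # production data after a shift

if detect_drift(train_sample, live_sample):
    print("Possible data drift detected - investigate the pipeline or retrain.")
```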

Data Integrity Problems

The data pipeline can be very complex, involving different data sources that may be owned by different teams. ML models can be extremely intelligent when it comes to identifying patterns in the makeup of DNA, but when it comes to understanding that two columns have been swapped, or that an input attribute changed scale, they are completely stupid. Constantly monitoring the data fed to your model will help you identify any change to the data schema early on and fix the issue before significant damage is done.
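
As a minimal sketch of this kind of check, the code below validates an incoming pandas DataFrame against the column names, dtypes, and value ranges recorded at training time; the expected schema and the bounds shown here are hypothetical.

```python
# Minimal schema/integrity check for an incoming batch of model inputs.
# The expected schema and value ranges are illustrative; in practice they
# would be recorded when the model is trained.
import pandas as pd

EXPECTED_SCHEMA = {"age": "int64", "income": "float64", "country": "object"}
VALUE_RANGES = {"age": (0, 120), "income": (0.0, 1e7)}

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of integrity problems found in the batch (empty if OK)."""
    problems = []
    # Missing or unexpected columns (catches swapped/renamed columns early).
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    extra = set(df.columns) - set(EXPECTED_SCHEMA)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected columns: {sorted(extra)}")
    # Type and range checks (catches an attribute that silently changed scale).
    for col, expected_dtype in EXPECTED_SCHEMA.items():
        if col in df.columns and str(df[col].dtype) != expected_dtype:
            problems.append(f"{col}: dtype {df[col].dtype}, expected {expected_dtype}")
    for col, (low, high) in VALUE_RANGES.items():
        if col in df.columns and not df[col].between(low, high).all():
            problems.append(f"{col}: values outside expected range [{low}, {high}]")
    return problems
```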

Mismatch Between Dev and Production Environments

Deploying an ML model in production is a very different setting from development. This difference can manifest in the structure of the data fed to the model from the moment it is deployed. Without proper monitoring, you may wrongly assume your model is performing exactly as it did in the development environment.

Serving Issues

Since ML models can be relatively “hidden” in the production environment, your model may not even be providing its basic functionality of generating predictions for a given input, and you may not know about it. Issues like this can be caused by high latency, misconfiguration of the production environment or the model endpoint, and so on. Monitoring metrics such as the number of requests processed per endpoint can assure you that your model is fully operational.
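
A minimal way to make this visible is to count requests and record latency per endpoint. The sketch below wraps a prediction call with a Prometheus counter and histogram; the metric names and the predict() stub are illustrative assumptions.

```python
# Minimal serving-metrics sketch: count prediction requests and record
# latency so an unresponsive or misconfigured endpoint becomes visible.
# Metric names and the predict() stub are illustrative assumptions.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("model_requests_total", "Prediction requests", ["endpoint", "status"])
LATENCY = Histogram("model_latency_seconds", "Prediction latency", ["endpoint"])

def predict(features):
    # Stand-in for the real model call.
    return sum(features)

def handle_request(features, endpoint: str = "default"):
    start = time.perf_counter()
    try:
        result = predict(features)
        REQUESTS.labels(endpoint=endpoint, status="ok").inc()
        return result
    except Exception:
        REQUESTS.labels(endpoint=endpoint, status="error").inc()
        raise
    finally:
        LATENCY.labels(endpoint=endpoint).observe(time.perf_counter() - start)

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for scraping
    handle_request([1.0, 2.0, 3.0])
```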

Control Your Model

Proper implementation of machine learning monitoring will enable you to detect all of these issues early and notify you when it’s time to retrain your model on up-to-date data, update the way a feature is calculated, or fix the data pipeline. This puts you in control of your ML models, able to answer any of the following questions instantly (a minimal sketch addressing the last two appears after the list):

– Is my model starting to degrade?

– Does the current input data have a similar distribution to the data used for training?

– Are the data pipeline and the schema of the input data intact?

– Is there any increase in biases with respect to attributes such as race or gender?

– Is my model handling the number of requests it receives as required? Do we need more resources? Would fewer be sufficient?

– Are there any subsets of the data where my model performs better or worse?
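
As a minimal sketch addressing the last two questions, the code below slices logged production predictions by a hypothetical sensitive attribute and computes accuracy per group with scikit-learn; the column names and sample data are purely illustrative.

```python
# Minimal per-group evaluation sketch: compare accuracy across slices of the
# data (e.g. a sensitive attribute) to spot growing bias or weak segments.
# Column names and the DataFrame layout are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy of logged predictions within each group of `group_col`.

    Expects columns 'y_true' (observed outcome) and 'y_pred' (model output).
    """
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["y_true"], g["y_pred"])
    )

# Example with hypothetical logged production data.
logged = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "m", "f"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 0],
})
print(accuracy_by_group(logged, "gender"))  # flags a gap between groups
```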

Conclusion

Monitoring and testing machine learning models is an area that’s often overlooked. The data science team may be content with achieving good results in the experimental environment, while the DevOps team does not always understand the terminology and concepts of machine learning. If you want your ML models to be not only theoretically useful but a product that continuously adds value to your business, it is worth investing in a proper monitoring system.

Image Credit: https://www.ie.edu/

Source: https://datafloq.com/read/why-monitoring-machine-learning-models-matters/14527
