Javierdlrm

Scalable, Automated ML Model Monitoring with KFServing and Hopsworks

Event: FOSDEM'21 | Date: Sunday, February 7th, 2021

Track: HPC, Big Data, and Data Science Devroom

Abstract

In recent years, MLOps has emerged to bring DevOps practices to the machine learning (ML) development process, aiming at more automation of repetitive tasks and smoother interoperability between tools. Among the different stages of the ML lifecycle, model monitoring is the continuous supervision of a model's performance over time, combining techniques from four categories: outlier detection, data drift detection, explainability and adversarial attack detection.

Nowadays, most available model monitoring tools follow a scheduled batch processing approach or analyse model performance on isolated subsets of the inference data. For the continuous monitoring of models, however, stream processing platforms offer several advantages: support for continuous data analytics, scalable processing of large amounts of data, and first-class support for the window-based aggregations that concept drift detection relies on.
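
As a minimal sketch of the kind of window-based aggregation that concept drift detection builds on, the following PySpark snippet computes sliding-window statistics over a stream of inference events read from Kafka. The topic name, event schema, window sizes and output sink are illustrative assumptions, not details of the architecture presented in this talk.

```python
# Minimal sketch (PySpark): sliding-window statistics over inference data,
# the kind of aggregation that window-based drift detection builds on.
# Topic name, schema and window sizes below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, DoubleType, TimestampType

spark = SparkSession.builder.appName("drift-window-sketch").getOrCreate()

schema = StructType([
    StructField("event_time", TimestampType()),
    StructField("feature_x", DoubleType()),  # hypothetical model input feature
])

# Read raw inference events from Kafka (topic name is an assumption)
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "inference-logs")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Per-window descriptive statistics; comparing them against a reference
# (training-set) distribution is one simple way to flag drift.
windowed_stats = (events
                  .withWatermark("event_time", "10 minutes")
                  .groupBy(F.window("event_time", "5 minutes", "1 minute"))
                  .agg(F.mean("feature_x").alias("mean_x"),
                       F.stddev("feature_x").alias("stddev_x"),
                       F.count("*").alias("n")))

query = (windowed_stats.writeStream
         .outputMode("update")
         .format("console")
         .start())
```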

In this session, we will present an open-source stream processing architecture, based on Spark Structured Streaming, for automating model monitoring, together with some experimental results. We use Kafka to log model predictions, KFServing for model serving, and a Kubernetes operator to deploy and configure the different components. For the analysis of inference data, we implemented an extensible monitoring framework on top of Spark Structured Streaming to detect outliers and data drift.
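
To give a feel for what such an extensible monitoring framework could look like, here is a hedged sketch of a pluggable detector interface with a simple outlier check and a crude mean-shift drift check. The class and method names (StreamDetector, ZScoreOutlierDetector, MeanShiftDriftDetector) are hypothetical and do not reflect the actual framework's API; real deployments would plug in statistical tests or learned detectors instead.

```python
# Illustrative sketch of an extensible detector interface in the spirit of the
# framework described above; names are assumptions, not the actual API.
from abc import ABC, abstractmethod
from pyspark.sql import DataFrame, functions as F


class StreamDetector(ABC):
    """A pluggable check applied to a stream of inference data."""

    @abstractmethod
    def apply(self, df: DataFrame) -> DataFrame:
        """Return a DataFrame of flags derived from the input stream."""


class ZScoreOutlierDetector(StreamDetector):
    """Flags individual predictions whose feature value lies far from a
    reference mean/stddev computed offline (e.g. on the training set)."""

    def __init__(self, column: str, ref_mean: float, ref_std: float,
                 threshold: float = 3.0):
        self.column = column
        self.ref_mean = ref_mean
        self.ref_std = ref_std
        self.threshold = threshold

    def apply(self, df: DataFrame) -> DataFrame:
        z = (F.col(self.column) - F.lit(self.ref_mean)) / F.lit(self.ref_std)
        return df.withColumn("outlier", F.abs(z) > self.threshold)


class MeanShiftDriftDetector(StreamDetector):
    """Flags windows whose mean deviates from the reference mean by more than
    a tolerance -- a crude stand-in for proper statistical drift tests.
    Assumes the input stream has an 'event_time' timestamp column."""

    def __init__(self, column: str, ref_mean: float, tolerance: float):
        self.column = column
        self.ref_mean = ref_mean
        self.tolerance = tolerance

    def apply(self, df: DataFrame) -> DataFrame:
        return (df.groupBy(F.window("event_time", "5 minutes"))
                  .agg(F.mean(self.column).alias("window_mean"))
                  .withColumn("drift",
                              F.abs(F.col("window_mean") - F.lit(self.ref_mean))
                              > self.tolerance))
```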