This article is translated from "The Only 3 ML Tools You Need".

The full text was machine translated; the reading experience is rough in places, but this does not affect understanding of the article.


Many rapidly evolving machine learning technologies have moved from proof of concept to powering critical systems that people rely on every day. To capture this newly unlocked value, many teams find themselves caught up in the rush to put machine learning into products without the right tools to do so successfully.

The truth is that we are still in the early stages of figuring out what the right toolkit looks like for building, deploying, and iterating on machine learning models. In this article, we discuss the only 3 ML tools a team needs to successfully apply machine learning in products.

Let's learn from the past

Before making recommendations for the ML stack, let's quickly turn our attention to the tools the software engineering industry has settled on. One major observation is that there is no single solution for building, deploying, and monitoring code in production.

In other words, there is no end-to-end tool platform. Instead, there is a set of tools, each focused on a specific part of the software engineering life cycle.


To simplify the creation of software, tools had to be built to track issues, manage version history, supervise builds, and provide monitoring and alerting when problems occur in production.

Although not every tool fits neatly into one of these categories, each category represents an obvious point of friction in the process of creating software, which is exactly why tools were built for it.

What does this have to do with machine learning?

Just like the software development process, the development process for a machine learning model breaks down into broad categories aligned with what is needed to research, build, deploy, and monitor models.

In this article, we focus on the basic categories of ML tools that have emerged to address some of the biggest obstacles to applying machine learning outside the lab.

To build an effective machine learning toolbox, you really only need the following three basic tools:

  1. Feature Store: handles offline and online feature transformations
  2. Model Store: acts as a central model registry and tracks experiments
  3. Evaluation Store: monitors and improves model performance

Feature Store

First, let's dig into the Feature Store. To define what a feature store is, let's start with the capabilities it should enable for your team.

Capabilities a feature store should enable:

  1. Serve as the primary source of truth for feature transformations
  2. Allow the same feature transformations to be used in offline training and online serving
  3. Enable team members to share their transformations for experimentation
  4. Provide strong version control for feature transformation code

Beyond how a feature store should empower your team, here are some essential capabilities that can help you decide which feature store is best for you and your team (a minimal sketch of the offline/online idea follows this list).

Capabilities the feature store should have:

  1. Integrate with your data store/lake
  2. Provide a fast path for computing feature transformations when serving models online
  3. Make it quick and easy to deploy feature transformation code to production
  4. Integrate with your evaluation store for data and feature quality checks
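To make the offline/online consistency idea concrete, here is a minimal, framework-agnostic sketch in Python. It is not the API of any particular feature store product; the function and column names are illustrative assumptions.

```python
import pandas as pd

# Single source of truth for one feature transformation. Both the offline
# training pipeline and the online serving path call this same function,
# which is what prevents training/serving skew.
def days_since_last_purchase(df: pd.DataFrame) -> pd.Series:
    return (df["event_time"] - df["last_purchase_time"]).dt.days

# Offline: applied in bulk over historical data to build a training set.
def build_training_features(history: pd.DataFrame) -> pd.DataFrame:
    out = history.copy()
    out["days_since_last_purchase"] = days_since_last_purchase(out)
    return out

# Online: the same transformation applied to a single request at serving time.
def build_online_features(request: dict) -> dict:
    row = pd.DataFrame([request])
    return {"days_since_last_purchase": int(days_since_last_purchase(row).iloc[0])}
```

A real feature store layers the capabilities listed above on top of this idea: versioned transformation code, sharing across the team, and low-latency retrieval of precomputed values.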

Recommendations:

Tecton

Model Store

Now that you have a feature store holding your feature transformations, you need a tool that can catalog and track the history of your team's model building. This is where the Model Store comes into play.

Capabilities a model store should enable:

  1. Act as a central repository for all models and model versions
  2. Allow every model version to be reproduced
  3. Track model history

Beyond these core capabilities, there are a number of model store features you may find very helpful for building and deploying models (a minimal registry example follows the list below).

Features that your model store should have:

  1. Track, for each model version, the model artifacts (e.g., pickled files), the git commit, and the reference dataset
  2. Provide the latest version of any model to be served, e.g. (v2.1)
  3. Maintain a consistent lineage so versions can be rolled back when needed
  4. Integrate with your evaluation store to track each model version's evaluations and identify model regressions
  5. Integrate with your serving infrastructure to facilitate model deployment and rollback
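As a concrete example, here is a minimal sketch of registering a model version with MLflow, one of the model stores recommended below. The tracking URI, model name, commit SHA, and metric are placeholders, and the registry features assume a database-backed tracking server.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder tracking server; the model registry needs a database-backed one.
mlflow.set_tracking_uri("http://localhost:5000")

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(C=1.0).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("C", 1.0)               # hyperparameters, for reproducibility
    mlflow.set_tag("git_commit", "abc1234")  # placeholder SHA tying the version to code
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Creates a new version under a central, named entry in the registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```

Each call to `log_model` with the same registered name produces a new version, which gives you the central repository, reproducibility metadata, and history described in the lists above.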

Recommendations:

Weights & Biases, MLflow

Evaluation Store

Now that your models are tracked and stored in the model store, you need to be able to pick a model to ship and monitor its performance in production. This is where the Evaluation Store helps.

Capabilities an evaluation store should enable:

  1. Aggregate (or slice) performance metrics for any model in any environment: production, validation, and training
  2. Monitor against benchmarks to identify drift, data quality issues, or anomalous performance degradation
  3. Enable the team to link performance changes back to what caused them
  4. Provide a platform for a feedback loop to continuously deliver high-quality models and improve them by comparing production with training
  5. Provide an experimentation platform for A/B testing model versions

Now let's turn our attention to the essential features of an evaluation store. The following points make a particular evaluation store worth considering (a small drift-check sketch follows the list below).

Features that your evaluation store should have:

  1. Store model evaluations: the inputs, SHAP values, and outputs of every model version across environments: production, validation, and training
  2. Automated monitoring that makes problems easy to find, with benchmarks built from the evaluation store
  3. Flexible dashboard creation for any kind of performance analysis, like a DataDog for ML
  4. Integrate with your feature store to track feature drift
  5. Integrate with your model store to keep a historical record of model performance for each model version
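One of the checks an evaluation store automates is drift detection against a training baseline. The sketch below shows the idea with the Population Stability Index (PSI), a common drift metric; the synthetic data and the 0.2 threshold are illustrative assumptions, not part of the original article.

```python
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a production sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Fraction of each sample falling in each baseline bin.
    # (Production values outside the baseline's range are ignored for simplicity.)
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p = np.histogram(production, bins=edges)[0] / len(production)
    b, p = np.clip(b, 1e-6, None), np.clip(p, 1e-6, None)  # avoid log(0)
    return float(np.sum((p - b) * np.log(p / b)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # training baseline for one feature
prod = rng.normal(0.4, 1.0, 10_000)   # production sample with a mean shift

print(f"PSI = {psi(train, prod):.3f}")  # rule of thumb: > 0.2 suggests drift
```

An evaluation store runs this kind of comparison continuously, per feature and per slice, and alerts when a benchmark is breached.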

Recommendations:

Arize

Other tools that might work for you

Data annotation platforms:

Let's take a step back and say you have just collected data, which may or may not have ground truth labels. Modern statistical machine learning models usually require a lot of training data to perform well, and annotating enough data with ground truth labels to make a model effective can be a big challenge.

Don't worry: a data annotation platform will distribute batches of your data to a set of distributed labelers, each of whom labels your data according to the instructions you provide.

recommend:

  1. Appen
  2. Scale, Snorkel for fully automated data labeling

Model serving platforms:

In many applications of machine learning, you will need some form of serving platform to deploy models to users. In short, these are some of the core capabilities a serving platform should provide for your team (a minimal online-serving sketch follows the list below).

A model serving platform should provide the following capabilities:

  1. Access control over model services, so that only certain people can change deployed models
  2. A mechanism to quickly roll back to a previously deployed model version when necessary
  3. Flexible support for different types of ML applications. For example, when prediction latency is not a concern, your serving platform should allow batch inference to optimize compute
  4. Good integration with the model store to facilitate model promotion
  5. Good integration with the evaluation store to enable observability of models in production
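For the latency-sensitive online path, a serving platform ultimately exposes something like the endpoint sketched below. This is an illustration using FastAPI with a placeholder artifact name and version, not the actual API of any platform recommended here; real serving platforms add the access control, rollback, and integrations listed above.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# Placeholder: in practice the artifact is pulled from the model store
# at the version currently promoted to production (e.g., v2.1).
model = joblib.load("model-v2.1.joblib")

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    prediction = model.predict([req.features])[0]
    # Returning the version makes rollbacks and A/B tests observable downstream.
    return {"model_version": "v2.1", "prediction": float(prediction)}
```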

Recommendations:

KubeFlow, Algorithmia