Artificial intelligence is now on everyone's mind. Mature companies are disrupting themselves and slowly shifting toward becoming data-driven organizations, and startups need to implement clear and effective data strategies to stay relevant.

While companies large and small now generally embrace the need for a data strategy, a common challenge remains: how do you build and manage machine learning projects?

This article provides a framework to help you manage your machine learning projects. Of course, you will have to adapt it to your company's specific needs, but it will point you in the right direction.

Why do I need an AI strategy?

Of course, we need to start with why. Why is it important to develop an artificial intelligence strategy within the company?

The problem in machine learning projects is that there are many ways to improve a model's performance:

  • Collect more data
  • Train the algorithm for longer
  • Change the architecture of the model
  • Get a more diverse training set

However, pursuing the wrong strategy can lead to significant losses of time and money. It may take up to six months to collect more training data, only to find that it hardly improves your model. Similarly, you can blindly train your model for longer (and pay for the extra compute time) and not see any improvement at all.

Hence the importance of a well-defined artificial intelligence strategy. It will improve team efficiency and increase the return on investment of AI projects.

Orthogonalization

The most effective machine learning practitioners have a clear understanding of what to adjust to get better results.

Orthogonalization refers to having controls with very specific functions.

For example, an office chair has a lever that moves the seat up and down, and wheels that allow it to move horizontally. In this case, the lever is a control whose function is to raise and lower the chair, and the wheels are a control whose function is to move the chair horizontally.

These controls are therefore said to be orthogonal: rolling the chair does not raise or lower it, just as pulling the lever does not move the chair backwards.

The same concept must be applied to machine learning projects: a single modification to a project should affect a single aspect. Otherwise, you will improve one area while degrading performance in another, and the project will get stuck.

How does this translate into an AI project?

First, we must consider the chain of assumptions in machine learning.

The chain of assumptions in machine learning

We assume that if the model performs well on the training set, then it will perform well on the development set, then on the test set, and finally in the real world.

This is a fairly common chain of assumptions in an AI project. Now, what should you do if the model performs poorly at one of these steps?

  • Training set: train a larger network or change the optimization algorithm
  • Development set: use regularization or a larger training set
  • Test set: use a larger development set
  • Real world: change the development set distribution (more on this later) or change the cost function

The list above gives you explicit, orthogonal controls for improving the model in each specific situation. Once your model performs well on one set, move on to improving the next.
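As a concrete illustration, here is a minimal Python sketch of that diagnostic logic. The error rates and the 5% gap threshold are made up for the example and are not from the original article.

```python
# Toy error rates measured on each set (hypothetical numbers).
human_error = 0.01   # proxy for human-level / Bayes error
train_error = 0.08
dev_error = 0.10
test_error = 0.11

GAP = 0.05  # illustrative threshold for "performs poorly"

if train_error - human_error > GAP:
    print("High bias: train a bigger network or change the optimization algorithm.")
elif dev_error - train_error > GAP:
    print("High variance: add regularization or use a larger training set.")
elif test_error - dev_error > GAP:
    print("Overfit to the dev set: use a larger development set.")
else:
    print("If real-world results are still poor, revisit the dev set distribution or the cost function.")
```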

Now, how do you know if your model is performing well?

Set a goal

As mentioned above, you need a clear goal to determine whether the model is performing well. Hence the importance of defining an evaluation metric, and of distinguishing between satisficing and optimizing metrics.

Single evaluation metric

Having a single evaluation metric allows for faster evaluation of the algorithm.

For example, precision and recall are often used to evaluate classifiers. However, there is a trade-off between these two metrics. Instead, use the F1 score, which is the harmonic mean of precision and recall. With a single metric, it is easier to compare the quality of different models, which speeds up iteration.
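For illustration, a short sketch with toy predictions that computes the F1 score both by hand and with scikit-learn:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Toy labels and predictions for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)

print(f"precision={p:.2f}, recall={r:.2f}")
print(f"F1 (harmonic mean) = {2 * p * r / (p + r):.2f}")
print(f"F1 (scikit-learn)  = {f1_score(y_true, y_pred):.2f}")
```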

Satisficing and optimizing metrics

Even once you have a single metric, you usually want to track other important metrics.

For example, you might want to build a classifier with an F1 score of at least 0.90 and a runtime of less than 200 milliseconds. In this case, the F1 score is the optimizing metric, while the runtime is a satisficing metric.

The optimizing metric is usually your single evaluation metric, and you should have only one. The other metrics of interest become satisficing metrics; they help you choose the overall best model among those that satisfy the constraints.
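As a sketch of how such a selection might look in practice, with three hypothetical candidate models:

```python
# Hypothetical candidates: F1 is the optimizing metric, runtime the satisficing one.
candidates = [
    {"name": "model_a", "f1": 0.92, "runtime_ms": 350},
    {"name": "model_b", "f1": 0.90, "runtime_ms": 120},
    {"name": "model_c", "f1": 0.88, "runtime_ms": 80},
]

# Keep only the models that satisfy the runtime constraint...
feasible = [m for m in candidates if m["runtime_ms"] < 200]

# ...then pick the one that maximizes the optimizing metric.
best = max(feasible, key=lambda m: m["f1"])
print(best["name"])  # model_b
```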

Training, development and test sets

Training, development and test sets were mentioned above, but what exactly are they?

The training and development (or hold-out) sets are used to build the model: the training set is used to fit the model to the data, while the development set is used to make predictions and tune the model.

The test set is a sample of real-world data on which you can evaluate the algorithm to see how it will actually perform.

Training / development / test distribution

Once you have different data sets, you must ensure that the distribution represents the data you want to get in the future.

For example, if you want to build a model to tag images uploaded from mobile phones, it doesn't make sense to train it on high-resolution images from the Internet. Mobile uploads may have lower resolution, the images may be blurry, and the objects may not be perfectly centered. Therefore, your train/dev/test sets should contain images of that type.

Also, you want each set to come from the same distribution. For example, suppose you are building a model to predict customer churn and 6% of your data set consists of churn instances. Then your training, development and test sets should each contain approximately 6% churn instances.
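A minimal sketch of such a stratified split with scikit-learn, using synthetic data and hypothetical column names (tenure_months, monthly_spend, churned):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a churn dataset where roughly 6% of customers churned.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, size=10_000),
    "monthly_spend": rng.uniform(10, 200, size=10_000),
    "churned": rng.binomial(1, 0.06, size=10_000),
})
X, y = df.drop(columns=["churned"]), df["churned"]

# Carve out the test set first, then split the remainder into train/dev,
# stratifying on the label so every set keeps roughly the same churn rate.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)
X_train, X_dev, y_train, y_dev = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=42)

print(y_train.mean(), y_dev.mean(), y_test.mean())  # each is approximately 0.06
```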

Training/development/test size

How big should each set be?

Traditionally, the train/dev/test split is 60/20/20. This is still valid when data is not abundant.

However, with millions of examples, a 98/1/1 split is more appropriate, since the model can still be validated on 10,000 data points.
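A sketch of that 98/1/1 split on a synthetic dataset of one million examples:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a dataset with 1,000,000 examples.
X = np.random.rand(1_000_000, 10)
y = np.random.randint(0, 2, size=1_000_000)

# 98% train, then split the remaining 2% evenly into dev and test.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.02, random_state=42)
X_dev, X_test, y_dev, y_test = train_test_split(
    X_hold, y_hold, test_size=0.50, random_state=42)

print(len(X_train), len(X_dev), len(X_test))  # 980000 10000 10000
```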

Comparing with human performance

Recently, we have started to see headlines in which AI systems outperform humans or come very close to human performance.

Unfortunately, humans are very good at many tasks, and it is difficult to get an AI system close to our performance. A lot of data is needed, and the performance of your model will eventually plateau, making it difficult to improve further.

So how can you improve the model?

If your model is overfitting, you can reduce the variance by:

  • Collect more data
  • Add regularization (L2, dropout, data augmentation); see the sketch after this list
  • Change the model architecture
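A minimal Keras sketch of two of these levers, L2 weight decay and dropout; the layer sizes and the 0.01 and 0.5 rates are illustrative, not tuned values:

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Binary classifier with an L2 penalty on the hidden weights and dropout
# between layers, two common ways to reduce variance.
model = keras.Sequential([
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(0.01)),  # L2 weight decay
    layers.Dropout(0.5),                                     # dropout
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```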

If your model is underfitting the data, you can reduce the bias by:

  • Train a larger or more complex model; see the sketch after this list
  • Use a better optimization algorithm or train longer
  • Change the model architecture
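A corresponding sketch of the bias-reduction side: a wider and deeper network, the Adam optimizer instead of plain SGD, and a longer training run. All hyperparameters here are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A wider and deeper network than before, trained with Adam.
bigger_model = keras.Sequential([
    layers.Dense(512, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
bigger_model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                     loss="binary_crossentropy")
# bigger_model.fit(X_train, y_train, epochs=100)  # train longer (data not shown here)
```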

If none of the above has a significant impact, the next step is to have the data labeled by humans. Although costly and laborious, this step will bring your model as close as possible to human-level performance.

Last words

Building an AI system is an iterative process. It's important to build, test, and improve quickly. Don't try to build a very complicated system from the start, but don't build something overly simplistic either.


I hope this helps you better manage and plan your AI projects. The potential of artificial intelligence in many industries is huge, and it is important to seize the opportunity. Having a clear artificial intelligence strategy will help you ride the wave instead of being swallowed by it.

This article was originally published on Towards Data Science.