https://easyai.tech/wp-content/uploads/2022/08/c3a87-2021-03-21-datafeature.png

Numerical features are the most common type of feature, and their values can be fed directly to an algorithm.
To get better results, however, we usually need to process numerical features first. This article introduces 4 common processing methods: missing value handling, binarization, bucketing, and scaling.

What is a numerical feature?

https://easyai.tech/wp-content/uploads/2022/08/5f1f1-2021-03-21-keceliang.png

Numerical features are features that can actually be measured. For example:

  • A person's height, weight, and body measurements
  • The number of times a product was viewed, added to the shopping cart, and finally sold
  • The number of new users versus returning users among logged-in users


If numerical features can be fed directly to the algorithm, why do they need any processing?

Because good numerical features not only reveal the information hidden in the data, they also fit the model's assumptions. Proper numerical transformations can therefore improve the results.

For example, linear regression and logistic regression are very sensitive to the magnitude of feature values, so those features need to be scaled.

https://easyai.tech/wp-content/uploads/2022/08/8a714-2021-03-21-2points.png

For numerical features, we mainly focus on 2 aspects:

  1. Magnitude
  2. Distribution

The four processing methods described below all revolve around magnitude and distribution.


4 common processing methods for numerical features

https://easyai.tech/wp-content/uploads/2022/08/e1ef8-2021-03-21-4method.png

  1. Missing value handling
  2. Binarization
  3. Bucketing/binning
  4. Scaling


Missing value handling

In real-world problems, we often run into missing data. Missing values can have a significant impact on model performance, so they need to be handled according to the actual situation.

There are three commonly used ways to handle missing values (sketched in code after the list):

  1. Fill in the missing values (with the mean, the median, a model's prediction...)
  2. Delete the rows that contain missing values
  3. Leave them as-is and feed the missing value to the model as part of the feature, letting the model learn from it
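As a hedged illustration of these three options, here is a minimal sketch using pandas and scikit-learn; the `income` column and its values are made up for the example:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical data: a numerical feature with some missing values
df = pd.DataFrame({"income": [3000.0, 4500.0, np.nan, 5200.0, np.nan]})

# Option 1: fill missing values with the mean (median or a model's prediction work similarly)
imputer = SimpleImputer(strategy="mean")
df["income_filled"] = imputer.fit_transform(df[["income"]]).ravel()

# Option 2: delete the rows that contain missing values
df_dropped = df.dropna(subset=["income"])

# Option 3: keep the missing value as information, e.g. via an indicator column,
# and let the model learn from the "missingness" itself
df["income_missing"] = df["income"].isna().astype(int)

print(df)
print(df_dropped)
```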


Binarization

This processing method is usually used for counting scenarios, for example: the number of visits to a page, the number of times a song has been played...

Example:

Predict which songs are more popular based on users' listening data.

Suppose most people's listening habits are fairly even and they keep moving on to new songs, but one user plays the same rather obscure song 24 hours a day, making that song's total play count exceptionally high. Feeding the raw total play count to the model would mislead it. This is where "binarization" comes in.

If the same user has listened to the same song N times, it is only counted once, so the recommendations reflect songs that many different people actually like.
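To make this concrete, here is a minimal sketch with made-up play counts: any positive count is mapped to 1, so a single user looping one song cannot inflate its popularity.

```python
import numpy as np

# Hypothetical play counts: rows are users, columns are songs
play_counts = np.array([
    [2, 3, 0, 1],    # an ordinary listener
    [1, 4, 0, 2],    # another ordinary listener
    [0, 0, 500, 0],  # a user looping one obscure song all day
])

# Binarization: every positive count becomes 1, so each user contributes
# at most one "listen" per song
binary_plays = (play_counts > 0).astype(int)

print(play_counts.sum(axis=0))   # raw totals: the looped song dominates
print(binary_plays.sum(axis=0))  # after binarization: it no longer does
```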


Bucketing/binning

Take personal income as an example. Most people's incomes are not high, while a very small number of people earn extremely high incomes, so the distribution is very uneven. Some earn 3,000 a month while others earn hundreds of thousands, a gap of several orders of magnitude.

Such a feature is very unfriendly to the model. This situation can be handled by bucketing: dividing the numerical feature into different intervals and treating each interval as a whole.

Common bucketing scenarios:

  1. Age distribution
  2. Product price distribution
  3. Income distribution

Commonly used bucketing methods (a code sketch follows the figure below):

  1. Fixed-value buckets (for example, age groups: 0-12, 13-17, 18-24...)
  2. Quantile buckets (for example, the price ranges Taobao recommends: 30% of users choose the cheapest range, 60% choose the mid-priced range, and 9% choose the most expensive range)
  3. Using a model to find the best buckets

https://easyai.tech/wp-content/uploads/2022/08/c2ba0-2021-03-21-taobao-fenweishu.png
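As a hedged sketch of the first two methods (the ages and incomes below are made up), pandas provides `pd.cut` for fixed-value buckets and `pd.qcut` for quantile buckets:

```python
import pandas as pd

ages = pd.Series([3, 10, 15, 20, 35, 50, 70])
incomes = pd.Series([2500, 3000, 3200, 4000, 8000, 20000, 300000])

# Fixed-value buckets: cut at hand-picked boundaries, e.g. 0-12, 13-17, 18-24, 25+
age_buckets = pd.cut(ages, bins=[0, 12, 17, 24, 120],
                     labels=["0-12", "13-17", "18-24", "25+"])

# Quantile buckets: each bucket holds roughly the same share of samples,
# which tames a highly skewed distribution like income
income_buckets = pd.qcut(incomes, q=3, labels=["low", "medium", "high"])

print(pd.DataFrame({"age": ages, "bucket": age_buckets}))
print(pd.DataFrame({"income": incomes, "bucket": income_buckets}))
```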


Scaling

Linear regression and logistic regression are very sensitive to the magnitude of values, and large differences in scale between features can seriously hurt performance. Therefore, values of different magnitudes need to be normalized: scaled from their different orders of magnitude into the same fixed range (for example, 0~1 or -1~1).

Commonly used normalization methods (a code sketch follows the list):

  1. z-score standardization
  2. min-max normalization
  3. Row normalization
  4. Variance scaling
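A minimal sketch of these methods using scikit-learn; the two features below (age and annual income) are made up, and treating variance scaling as scaling by the standard deviation without centering is one common interpretation:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer

# Hypothetical features on very different scales: age and annual income
X = np.array([
    [18.0,  30_000.0],
    [35.0, 120_000.0],
    [52.0, 300_000.0],
])

# 1. z-score standardization: each feature gets zero mean and unit variance
X_zscore = StandardScaler().fit_transform(X)

# 2. min-max normalization: each feature is scaled into [0, 1]
X_minmax = MinMaxScaler().fit_transform(X)

# 3. Row normalization: each sample (row) is scaled to unit L2 norm
X_rownorm = Normalizer(norm="l2").fit_transform(X)

# 4. Variance scaling: divide each feature by its standard deviation, no centering
X_varscale = StandardScaler(with_mean=False).fit_transform(X)

print(X_zscore, X_minmax, X_rownorm, X_varscale, sep="\n\n")
```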

Extended reading:

"Data scaling: standardization and normalization"

"106 - Data scaling (standardization, normalization) and all that"