The use of machine learning is rapidly expanding. In 2019 alone, a wealth of research has explored new horizons for the technology. The following is a collection of some of the most exciting machine learning research so far this year.

[Related article: Reviewing Amazon's Machine Learning University: Is it worth the hype?]

Transfer learning for vision-based tactile sensing

The ability of computers to simulate human sensory abilities is uneven; of the five senses, touch has perhaps seen the slowest development. To overcome this shortcoming, researchers Carmelo Sferrazza and Raffaello D'Andrea published a paper titled "Transfer learning for vision-based tactile sensing," in which they advocate the use of soft optical (or vision-based) tactile sensors that combine "low cost, ease of manufacture, and minimal wiring." Their approach relies on computer vision to train tactile models and to help tactile sensors recognize objects. Their proprietary system pairs a soft gel sensor with a computer vision training network, in which they "use the camera to sense the force distribution on the soft surface through the deformation experienced by the elastic material as it is stressed."
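The camera-to-force idea can be illustrated with a deliberately simplified sketch. This is not the authors' deep-learning pipeline: here each "image" is just a flattened vector of synthetic deformation measurements, and the mapping to the force distribution is assumed linear so it can be fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sensing setup: each sample is a flattened
# vector of surface-deformation measurements, and the label is the
# force distribution that produced it. A linear true mapping is a
# deliberate simplification of the real physics.
n_samples, n_pixels, n_forces = 500, 64, 8
true_map = rng.normal(size=(n_pixels, n_forces))
deformations = rng.normal(size=(n_samples, n_pixels))
forces = deformations @ true_map + 0.01 * rng.normal(size=(n_samples, n_forces))

# "Training": least-squares fit from deformation images to forces.
weights, *_ = np.linalg.lstsq(deformations, forces, rcond=None)

# Inference: predict the force distribution for a new deformation.
test_deformation = rng.normal(size=(1, n_pixels))
predicted_force = test_deformation @ weights
```

The paper's actual system replaces the linear map with a neural network trained partly in simulation, which is what makes the "transfer" in transfer learning possible.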

Self-supervised learning of face representations for video face clustering

With the continuous development of face recognition technology in recent years, researchers have begun to reconsider the scope of its application. For systems designed to study video, the goal is not just identifying the main characters but using facial knowledge to analyze the story. In their recent paper on this topic, "Self-supervised learning of face representations for video face clustering," a group of researchers at the University of Toronto point out that the ability to predict which characters appear where and when creates a deeper understanding of a video's storyline. To this end, the researchers developed a self-supervised model that relies on existing datasets (i.e., face databases such as YouTubeFaces) and a limited amount of training to create highly accurate face recognition models. These models "can dynamically generate positive/negative constraints based on ordered face distances and do not have to rely solely on commonly used track-level information." Reduced dependence on complex, time-intensive model training points to greater potential for future video analysis.
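The core trick of generating positive/negative constraints from ordered face distances can be sketched in a few lines. This toy version (with made-up 16-dimensional embeddings for two "characters") simply sorts all pairwise distances and treats the closest pairs as same-identity positives and the farthest as negatives; the paper's method is more sophisticated, but the ordering idea is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy face embeddings: two loose clusters standing in for two characters.
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(5, 16)),
    rng.normal(loc=1.0, scale=0.1, size=(5, 16)),
])

# Pairwise Euclidean distances between all face embeddings.
diff = embeddings[:, None, :] - embeddings[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))

# Order every unique pair by distance.
i_upper, j_upper = np.triu_indices(len(embeddings), k=1)
pairs = sorted(zip(dist[i_upper, j_upper], i_upper, j_upper))

# Closest pairs are assumed to share an identity (positive constraints),
# farthest pairs to have different identities (negative constraints).
positives = [(i, j) for _, i, j in pairs[:5]]
negatives = [(i, j) for _, i, j in pairs[-5:]]
```

These self-generated constraints can then supervise further training of the embedding network without any manual labels, which is what makes the approach self-supervised.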

[Related article: Using machine learning to ask the right questions]

Competitive training of mixtures of independent deep generative models

Variational autoencoders (VAEs) and generative adversarial networks (GANs) are the most prominent types of models for unsupervised learning, but each has obvious drawbacks: VAEs struggle to generate high-quality samples when fed natural images, and GANs require a lot of training. A recent project by a research team at the Max Planck Institute aims to improve on the shortcomings of each model by taking advantage of both, using a method that trains multiple models in parallel, with each model focusing on an independent part of the data distribution. Summarizing their findings in the paper "Competitive training of mixtures of independent deep generative models," they write that using models more deliberately, or using multiple model types at the same time, creates a more powerful environment for training, allows for broader data usage, and can clarify how to perform dynamic model selection.

Can you trust this prediction? Auditing pointwise reliability after learning

As machine learning is more deeply integrated into day-to-day business operations, the desire to test the reliability and accuracy of predictive models has increased. While most accuracy metrics focus on eliminating errors during training, there are few options for assessing the accuracy of a model in operation. To solve this problem, Peter Schulam and Suchi Saria of Johns Hopkins University proposed an auditing algorithm called the resampling uncertainty estimator (RUE), which estimates how much a prediction would change had the model been fit on different training data. According to its creators, the purpose of this new algorithm is to "help improve the application of machine learning in high-risk areas such as medicine." In their research paper, "Can you trust this prediction? Auditing pointwise reliability after learning," they point out that, given the responsibilities involved in these areas, machine learning must be measured for accuracy both before and after adoption. Developments such as RUE will accelerate the adoption of machine learning in these areas.
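The resampling intuition behind RUE can be demonstrated with a brute-force bootstrap sketch (RUE itself approximates this more efficiently; this toy linear-regression setup is an illustration, not the paper's algorithm): refit the model on many resampled training sets, and treat the spread of predictions at a query point as its audit score.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data: the model is well supported near x = 0
# and poorly supported far outside the training range.
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + rng.normal(0, 0.1, 200)

def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Refit the model on bootstrap replicates of the training set.
preds_in, preds_out = [], []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    slope, intercept = fit_line(x[idx], y[idx])
    preds_in.append(slope * 0.0 + intercept)    # query inside training range
    preds_out.append(slope * 10.0 + intercept)  # query far outside it

# The spread of predictions across refits is the audit score:
# the larger the spread, the less trustworthy the prediction.
uncertainty_in = np.std(preds_in)
uncertainty_out = np.std(preds_out)
```

As expected, the prediction far from the training data is far less stable under resampling, which is exactly the kind of pointwise warning an auditor wants before trusting a deployed model on an unusual input.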

Conclusion

Machine learning has already automated trivial tasks in areas such as finance and human resources. Now, as research aims to make the technology more reliable, accurate, and widely available, we may see more tasks automated in areas such as advertising and medicine. Where do you think the machine learning revolution will lead?


Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced! Subscribe to our weekly newsletter to receive the latest news every Thursday.

This article was republished from Medium.