It's the end of the year again, which means it is time for KDnuggets' panel of experts to analyze the past year and predict the coming one. This year we asked:

What are the main developments in AI, data science, deep learning and machine learning in 2019? What are the main trends you expect in 2020?

When we look back at the expert predictions from a year ago, we see mixed results: mostly the natural progression of technology, alongside some more ambitious predictions. Several general themes emerge, as well as a few noteworthy individual prognoses.

In particular, the persistent fear of AI was mentioned more than once, and that fear seems to have subsided. Discussion of the progress of automated machine learning was widespread, although opinions diverge on whether it is useful. I would call that one inconclusive to some extent, but when expectations for a technology are lowered, it becomes easier to see it as a useful complement rather than an imminent replacement. There were also good reasons to predict that augmented AI would be increasingly beneficial, and there are countless examples bearing this out. The idea that practical machine learning would make its mark was also put forward, suggesting that the fun and games were coming to an end and that it was time for machine learning to deliver; anecdotally, practitioners are indeed seeking out these opportunities. Finally, the prediction of a growing focus on the surveillance, fear, and manipulation enabled by dystopian applications of AI can confidently be added to the successful category, as a simple scan of the past year's news confirms.

Some predictions have not panned out. That, however, is unavoidable in an exercise like this, and we will let interested readers identify those for themselves.

This year our list of experts includes Imtiaz Adam, Xavier Amatriain, Anima Anandkumar, Andriy Burkov, Georgina Cosma, Pedro Domingos, Ajit Jaokar, Charles Martin, Ines Montani, Dipanjan Sarkar, Elena Sharova, Rosaria Silipo, and Daniel Tunkelang. We thank all of them for taking the time out of their busy year-end schedule to provide us with insights.

This is the first of three similar articles running over the next week. Although they are divided into research, deployment, and industry, there is considerable and understandable overlap between these areas, so we recommend you check out all three once they have been published.


Without further ado, here are the main 2019 trends and 2020 predictions from this year's panel of experts.

Imtiaz Adam (@DeepLearn007) is an executive focused on artificial intelligence and strategy.

In 2019, organizations became more aware of issues relating to ethics and diversity in data science.

The Lottery Ticket Hypothesis paper showed the potential for simplifying the training of deep neural networks through pruning. The Neuro-Symbolic Concept Learner paper showed the potential of combining logic with deep learning, with improved data and memory efficiency.
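
The pruning step at the core of the lottery ticket procedure can be sketched in a few lines of NumPy. This is a simplified illustration (the actual procedure prunes iteratively and rewinds the surviving weights to their original initialization), and the function name is my own:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    This is the pruning step of the lottery ticket procedure; the full
    recipe applies it iteratively and retrains the surviving weights
    from their original initialization.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # how many weights to drop
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(w, sparsity=0.8)
print(f"fraction kept: {mask.mean():.2f}")   # ~0.20
```

The surprising empirical claim of the paper is that the surviving sub-network, retrained from its original initialization, can match the full network's accuracy.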

GAN research gained momentum, and deep reinforcement learning in particular received a great deal of research attention, including areas such as logical reinforcement learning and genetic algorithms for parameter optimization.

TensorFlow 2 arrived, with Keras integration and eager execution as the default mode.

In 2020, data science teams and business teams will become more integrated. 5G will help advance AI inference and the growth of the intelligent IoT, meaning AI will increasingly move into the physical world. The combination of deep learning and augmented reality will transform the customer experience.

Xavier Amatriain (@xamat) is the co-founder and CTO of Curai.

I think it would be hard to object that this was the year of deep learning and NLP. Or, more specifically, the year of language models. Or, even more specifically, the year of Transformers and GPT-2. Yes, it may be hard to believe, but it has been less than a year since OpenAI first published its post about the GPT-2 language model. That blog post sparked a great deal of discussion about AI safety, since OpenAI was reluctant to release the full model. Since then, the model was publicly replicated, and finally released. However, this was not the only progress in the field. We saw Google release ALBERT and XLNet, and discuss how BERT has been the biggest improvement to Google Search in years. Everyone from Amazon and Microsoft to Facebook seems to have truly joined the language model revolution, and I do expect to see remarkable progress in this area in 2020; it seems we are getting closer and closer to passing the Turing test.

Anima Anandkumar (@AnimaAnandkumar) is the director of machine learning research at NVIDIA and Bren Professor at the California Institute of Technology.

Researchers aimed to better understand deep learning, its generalization properties, and its failure cases. Reducing reliance on labeled data was a priority, and methods such as self-training made progress. Simulation became increasingly important for AI training, with ever-higher fidelity in vision domains such as autonomous driving and robot learning (including on NVIDIA platforms such as DRIVE Sim and Isaac). Language models grew enormous: for example, NVIDIA's 8.3-billion-parameter Megatron model was trained on 512 GPUs and began generating coherent paragraphs. However, researchers also exposed spurious correlations and unwanted social biases in these models. AI regulation went mainstream, with many prominent politicians voicing support for banning government agencies' use of facial recognition. Starting with last year's name change at NeurIPS, AI conferences began enforcing codes of conduct and stepped up efforts to improve diversity and inclusion. In the coming year, I expect new algorithmic developments rather than just superficial applications of deep learning. This will particularly affect "AI for science" in fields such as physics, chemistry, materials science, and biology.

Andriy Burkov (@burkov) is the machine learning team leader at Gartner and the author of The Hundred-Page Machine Learning Book.

There is no doubt that the main development was BERT, the language-modeling neural network model that improves the quality of NLP on almost every task. Google even uses it as one of the main relevance signals in Search, one of the most important updates in years.

In my opinion, the key trends will be the wider adoption of PyTorch in industry, research into faster methods of neural network training, and fast training of neural networks on commodity hardware.

Georgina Cosma (@gcosma1) is a senior lecturer at Loughborough University.

In 2019, we appreciated the impressive capabilities of deep learning models such as YOLOv3 for tackling a variety of complex computer vision tasks, especially real-time object detection. We also saw generative adversarial networks continue to attract the attention of the deep learning community, with BigGAN for ImageNet image generation and StyleGAN for human image synthesis. This year we also learned how easy it is to fool deep learning models, with studies showing that deep neural networks are vulnerable to adversarial examples. In 2019 we also saw biased AI decision-making models deployed for facial recognition, recruitment, and legal applications. In 2020, I expect to see the development of multi-task AI models that are designed to be versatile and adaptable.
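
One of the attacks behind this vulnerability, the Fast Gradient Sign Method (FGSM), can be illustrated on a toy logistic-regression "network." A minimal sketch with illustrative names, not the setup of any particular study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM attack on a logistic model p = sigmoid(w @ x + b).

    Moves the input by eps in the sign of the loss gradient w.r.t. x;
    for log loss that gradient is (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    return x + eps * np.sign((p - y) * w)

rng = np.random.default_rng(1)
w = rng.normal(size=10)
b = 0.0
x = w / np.linalg.norm(w)   # a confidently positive example
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
print(round(float(sigmoid(w @ x + b)), 3),
      round(float(sigmoid(w @ x_adv + b)), 3))  # the second score is strictly lower
```

The same one-line perturbation, applied per pixel to an image classifier's input gradient, produces the imperceptible-yet-fooling examples reported in the literature.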

Pedro Domingos (@pmddomingos) is a professor in the Department of Computer Science and Engineering at the University of Washington.

Major developments in 2019:

  • The rapid spread of contextual embeddings. They are less than two years old but now dominate NLP, and Google has deployed them in its search engine, reportedly improving 1 in 10 searches. Pre-training models on large datasets and then tuning them for specific tasks has become standard, from vision to language.
  • The discovery of double descent. Our theoretical understanding of how over-parameterized models can generalize well while perfectly fitting the training data improved greatly, particularly via candidate explanations of the observation that, contrary to the predictions of classical learning theory, generalization error first decreases, then increases, and then decreases again as model capacity grows.
  • The media and the public became more sceptical about AI progress; expectations for self-driving cars and virtual assistants are lower, and flashy demos no longer impress.

Key trends in 2020:

  • The deep learning community's attempts to "climb" from low-level perceptual tasks, such as vision and speech recognition, to high-level cognitive tasks, such as language understanding and commonsense reasoning, will accelerate.
  • The research paradigm of getting better results by throwing more data and compute at a problem will reach its limits, because its exponential cost curve is steeper than Moore's Law, and even rich companies cannot afford it indefinitely.
  • With luck, we will enter a Goldilocks era, with neither excessive AI hype nor another AI winter.

Ajit Jaokar (@AjitJaokar) is the course director of the "Artificial Intelligence: Cloud and Edge Implementations" course at the University of Oxford.

In 2019, we renamed our University of Oxford course to "Artificial Intelligence: Cloud and Edge Implementations." This reflects my personal view that 2019 was a year of cloud maturity: the year when the various technologies we talk about (big data, AI, IoT, etc.) came together within a cloud framework. This trend will continue, especially for enterprises. Companies will pursue "digital transformation" initiatives in which they use the cloud as a unifying paradigm to re-architect their processes around AI (a bit like re-engineering the company, version 2.0).

In 2020, I also see NLP maturing (BERT, Megatron). The 5G rollout will continue, and once 5G is fully deployed after 2020, we will see widespread IoT use cases (such as driverless cars). Finally, on the IoT side, I am following a technology called the MCU (microcontroller unit), and specifically the deployment of machine learning models on MCUs.

I believe AI will be a game changer, and every day we see many interesting examples of AI deployed in practice. Much of what Alvin Toffler predicted in Future Shock is already around us; it remains to be seen how AI will amplify it. Sadly, the rate of change driven by AI will leave many people behind.

Charles Martin is an AI scientist and consultant, and the founder of Calculation Consulting.

BERT, ELMo, GPT-2, and more! AI in 2019 made tremendous progress in NLP. OpenAI released its large GPT-2 model: DeepFakes for text. Google announced it was using BERT in Search: the biggest change since Panda. Even my collaborators at UC Berkeley released (quantized) Q-BERT for low-footprint hardware. Everyone is making their own document embeddings.
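
The basic idea behind shrinking a model for low-footprint hardware can be shown with simple symmetric int8 quantization. Note this is a generic sketch of the technique, not Q-BERT's actual scheme (which is Hessian-aware and mixed-precision):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a weight tensor to int8.

    Stores int8 codes plus one float scale; dequantize with codes * scale.
    Trades a bounded rounding error for a 4x size reduction vs float32.
    """
    scale = np.abs(w).max() / 127.0
    codes = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return codes, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
codes, scale = quantize_int8(w)
w_hat = codes.astype(np.float32) * scale     # dequantized approximation

print(codes.nbytes / w.nbytes)                   # 0.25: 4x smaller
print(float(np.abs(w - w_hat).max()) <= scale)   # True: error within one step
```

Real schemes quantize per-channel and sometimes per-layer bit-widths, but the storage-versus-precision trade-off is the same.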

What does this mean for 2020? According to search experts, 2020 will be the year of relevance (er, what have they been doing until now?). Expect vector-space search, powered by BERT-style fine-tuned embeddings, to finally get the attention it deserves.
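
The retrieval step of vector-space search is simple once embeddings exist. A minimal sketch, with random vectors standing in for BERT-style document embeddings:

```python
import numpy as np

def cosine_search(query_vec, doc_vecs, k=3):
    """Return indices and scores of the k documents most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                  # cosine similarity to every document
    top = np.argsort(-scores)[:k]
    return top, scores[top]

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))            # stand-in document embeddings
query = docs[42] + 0.1 * rng.normal(size=64)  # a query "about" document 42
top, scores = cosine_search(query, docs)
print(top[0])   # 42
```

In production, the brute-force matrix product is replaced by an approximate nearest-neighbor index, but the ranking principle is identical.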

Under the hood, PyTorch overtook TensorFlow as the choice for AI research in 2019. With the release of TensorFlow 2.x (and TPU support for PyTorch), AI coding in 2020 will be all about eager execution.

Are big companies making progress with AI? Reports put the success rate at about one in ten. Not great. Consequently, AutoML will be in demand in 2020, although I personally believe that, much like getting great search results, successful AI requires solutions customized to the business.


Ines Montani (@_inesmontani) is a software developer working on artificial intelligence and natural language processing technologies, and the co-founder of Explosion.

Everyone is choosing "DIY AI" over cloud solutions. One factor driving this trend is the success of transfer learning, which makes it possible for anyone to train their own models with good accuracy and fine-tune them for their specific use case. Since there is only one user per model, service providers cannot exploit economies of scale. Another advantage of transfer learning is that datasets no longer need to be so large, so annotation can also move in-house. The in-house trend is a positive development: commercial AI is much less concentrated than many expected. A few years ago, people worried that everyone would have to get "their AI" from a single provider. Instead, people are not getting AI from any provider; they are building it themselves.
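
The transfer-learning pattern driving this trend, reusing a frozen pretrained encoder and training only a small task head, can be sketched as follows; a fixed random projection stands in for the pretrained body, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "pretrained encoder": in a real system this would be a large
# pretrained network body; a fixed random projection stands in for it.
W_enc = rng.normal(size=(20, 64))

def encode(x):
    return np.tanh(x @ W_enc)

# Task data whose labels are predictable from the encoded features, so a
# small task-specific head is all that needs training.
X = rng.normal(size=(500, 20))
h = encode(X)                       # features from the frozen encoder
y = (h @ rng.normal(size=64) > 0).astype(float)

# "Fine-tune" only the head: logistic regression by gradient descent.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-h @ w))
    w -= 0.1 * h.T @ (p - y) / len(y)

acc = np.mean((h @ w > 0) == y)
print(round(float(acc), 2))
```

Because only the small head is trained, the labeled dataset can be tiny, which is exactly why annotation can move in-house.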

Dipanjan Sarkar is the Data Science Lead at Applied Materials, a Google Developer Expert in Machine Learning, and an author, consultant, and trainer.

The major advances in AI in 2019 were in the areas of automated ML (AutoML), explainable AI, and deep learning. The democratization of data science has remained a key theme in recent years, and various tools and frameworks related to AutoML are trying to make that process easier. A note of caution: when using these tools, we still need to take care that we do not end up with biased or overfit models. Fairness, accountability, and transparency remain key to the acceptance of AI-driven decisions by customers, businesses, and enterprises. As a result, explainable AI is no longer just a topic of research papers; many excellent tools and techniques have begun making machine learning models more interpretable. Last but not least, we saw many advances in deep learning and transfer learning, particularly in natural language processing. In 2020, I expect more research and models around deep transfer learning for NLP and computer vision, and I hope to see work that draws on both deep learning and neuroscience to guide us toward true AGI.

Elena Sharova is a senior data scientist at ITV.

Deep reinforcement learning, from DeepMind's DQN to AlphaGo, remained the most important development in machine learning in 2019, even contributing to the retirement of Go champion Lee Sedol. Another important advance was in natural language processing: BERT (deep bidirectional language representations) was open-sourced by Google and led the GLUE benchmark, and Microsoft developed and open-sourced its MT-DNN ensemble for language-understanding tasks.

It is also important to highlight the European Commission's Ethics Guidelines for Trustworthy AI, the first official publication to set out guidelines for lawful, ethical, and robust AI.

Finally, I want to share with KDnuggets readers that all of the keynote speakers at PyData London 2019 were women; this is welcome progress!

I expect the main machine learning trends of 2020 to continue in NLP and computer vision. Industries adopting ML and DS have realized they are overdue in defining shared standards of best practice for hiring and retaining data scientists, for managing the complexity of projects involving DS and ML, and for keeping the community open and collaborative. We should therefore see more attention to such standards in the near future.

Rosaria Silipo (@DMR_Rosaria) is the Chief Data Scientist at KNIME.

The most promising development of 2019 was the adoption of active learning, reinforcement learning, and other semi-supervised learning schemes. Semi-supervised learning holds promise for all the unlabeled data currently sitting in our databases.
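
One common semi-supervised scheme, self-training with pseudo-labels, can be sketched with a nearest-centroid classifier; the data, thresholds, and names here are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes: 10 labeled points, 400 unlabeled.
X_lab = np.vstack([rng.normal([-2, 0], 1, (5, 2)), rng.normal([2, 0], 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal([-2, 0], 1, (200, 2)),
                     rng.normal([2, 0], 1, (200, 2))])

def predict(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), -d.min(axis=1)   # label, confidence

# Self-training loop: fit on current labels, pseudo-label the most
# confident unlabeled points, add them to the training set, repeat.
X_train, y_train = X_lab, y_lab
for _ in range(3):
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    pseudo, conf = predict(X_unlab, centroids)
    keep = conf > np.quantile(conf, 0.5)      # only the most confident half
    X_train = np.vstack([X_lab, X_unlab[keep]])
    y_train = np.concatenate([y_lab, pseudo[keep]])

acc = np.mean(predict(X_unlab, centroids)[0] == np.array([0] * 200 + [1] * 200))
print(round(float(acc), 2))
```

Ten labels plus a pool of unlabeled points are enough to classify the whole pool well, which is exactly the appeal for databases full of unlabeled data.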

Another major advance was the qualification of the word "automated" in the AutoML concept with "guided." For more complex data science problems, expert intervention seems indispensable.

In 2020, data scientists will need fast solutions for simple model deployment, continuous model monitoring, and flexible model management. Real business value will come from these final three phases of the data science life cycle.

I also believe that the wider use of deep learning black boxes will raise questions of machine learning interpretability (MLI). By the end of 2020, we will see whether MLI algorithms can rise to the challenge of fully explaining what happens inside deep learning models.
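
One model-agnostic MLI technique that already works on black boxes is permutation importance. A minimal sketch; the "model" here is a stand-in that is secretly known to use only one feature:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # A black box to be explained; secretly it uses only feature 0.
    return 3.0 * X[:, 0]

X = rng.normal(size=(1000, 3))
y = model(X) + 0.1 * rng.normal(size=1000)

def permutation_importance(f, X, y, col):
    """Increase in MSE when one feature column is shuffled, which breaks
    that feature's relationship with the target."""
    base = np.mean((f(X) - y) ** 2)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean((f(Xp) - y) ** 2) - base

scores = [permutation_importance(model, X, y, c) for c in range(3)]
print([round(s, 2) for s in scores])  # only the first score is large
```

Because it needs only predictions, the same probe applies unchanged to a deep network, which is why such techniques are a natural starting point for MLI.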

Daniel Tunkelang (@dtunkelang) is an independent consultant specializing in search, discovery, and ML/AI.

The forefront of AI remains focused on language understanding and generation.

OpenAI released GPT-2 to predict and generate text. Out of concern over malicious applications, OpenAI did not release the trained model at the time, but in the end they changed their mind.

Google released an 80MB on-device speech recognizer, so you can perform speech recognition on a mobile device without sending data to the cloud.

At the same time, we are seeing growing concerns about AI and privacy. This year, all major digital assistant companies faced strong opposition around employees or contractors listening to user conversations.

What will 2020 bring for AI? We will see further advances in conversational AI and better image and video generation. These advances will heighten awareness of malicious applications, and we may see a scandal or two, especially in an election year. The tension between AI for good and AI for evil will not disappear, and we must learn better ways to cope with it.


This article is adapted from KDnuggets.