
GPU (Graphics Processing Unit)

What is a GPU?

The CPU is powerful and versatile, so it is suited to handling complex tasks. The GPU has a simple structure, but its many cores can mount a "human-wave" attack, so it is suited to handling simple, repetitive tasks.

An answer on Zhihu explains the difference well:

GPU and CPU differences


Imagine a contest between a mathematics professor and 100 primary-school students.

Round one: one hundred basic arithmetic questions. The professor works through the paper alone, while the hundred students each take one question. Before the professor has even started the second question, the students collectively hand in their papers. In round one, the students crush the professor.

Round two: one hundred questions on advanced functions. By the time the professor has finished, the hundred primary-school students still have no idea what they are looking at. In round two, the professor crushes the students. Easy to understand, right?

This is a rough comparison of the CPU and the GPU.


Baidu Encyclopedia version

A graphics processing unit (GPU), also known as a display core, visual processor, or display chip, is a microprocessor specialized in image operations on personal computers, workstations, game consoles, and some mobile devices (such as tablets and smartphones).

Its purpose is to convert the display information required by the computer system and provide line-scan signals to the monitor, controlling the monitor's correct display. It is an important component connecting the monitor to the personal computer's motherboard, and one of the key devices for "human-machine dialogue". As an important part of the computer, the graphics card undertakes the task of outputting display graphics, which is especially important for people engaged in professional graphic design.

Read More


Wikipedia version

A graphics processing unit (GPU) is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at computer graphics and image processing. Their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard; in certain CPUs, it is embedded on the CPU die itself.
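The professor-versus-pupils story can be sketched in code. This is only an illustration of the division of labour, using Python threads as stand-ins for GPU cores (real GPU code would use something like CUDA); the question list and function names are my own:

```python
from concurrent.futures import ThreadPoolExecutor

# 100 simple arithmetic questions, like the story above.
simple_questions = [(a, b) for a in range(10) for b in range(10)]

# One strong worker solving them one after another (the professor / CPU core).
def solve_alone(questions):
    return [a + b for a, b in questions]

# Many small workers, one question each (the pupils / GPU cores).
# Python threads only stand in for GPU cores here; the point is the
# division of labour, not real GPU execution.
def solve_in_parallel(questions):
    with ThreadPoolExecutor(max_workers=len(questions)) as pool:
        return list(pool.map(lambda q: q[0] + q[1], questions))

assert solve_alone(simple_questions) == solve_in_parallel(simple_questions)
```

Both approaches produce identical answers; the difference is only in how the work is distributed, which is exactly the CPU/GPU contrast above.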

The term GPU has been in use since at least the 1980s. It was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU", presenting it as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines". Competitor ATI Technologies released the Radeon 9700 in 2002, coining the term "visual processing unit", or VPU.

Read More


Computing power – computation

What is the computing power in artificial intelligence?

In an ordinary computer, the CPU provides the power that makes the computer run fast. When playing games, a graphics card provides the power to process graphics quickly. In artificial intelligence, similar hardware (CPUs and GPUs) provides the power that lets algorithms compute results quickly.

As mentioned in the earlier article on algorithms, the factory assembly line is the algorithm in the process of manufacturing a wooden table. By that analogy, the machines in the factory are the computing power: the better and more advanced the machines, the faster the manufacturing.


The greater the power, the faster the speed



Techopedia version

Computing is any goal-oriented activity that uses computer technology. It can include designing and developing software and hardware systems for a wide range of purposes, typically structuring, processing, and managing any kind of information, to aid scientific research, build intelligent systems, and create and use different media for entertainment and communication.

Read More


Wikipedia version

Computing is any activity that uses computers. It includes developing hardware and software, as well as using computers to manage and process information for communication and entertainment. Computing is a vital component of modern industrial technology. The major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology.

Read More



One article to understand algorithms in artificial intelligence

Artificial intelligence rides on a troika: data, algorithms, and computing power. This article focuses on algorithm-related knowledge.

It introduces the concept of an algorithm in artificial intelligence, the 4 characteristics of algorithms, 6 general methods, and 3 points to note when choosing an algorithm.


What is an algorithm?

Simply put, an algorithm is a means of solving a problem, and of solving that problem at scale.

A recipe is an "algorithm": as long as you follow it, you can make the corresponding dish.

The recipe is an algorithm

Algorithms in artificial intelligence are mainly used to train models.

Machine learning has 7 steps, and the third step is choosing an appropriate algorithm model. Training then yields the final model that can make predictions.

The third step of machine learning is to choose the appropriate algorithm model

4 basic characteristics of the algorithm


The algorithm has the following four characteristics:

  1. Feasibility
  2. Determinacy
  3. Finiteness
  4. Sufficient input information

For a detailed description of these 4 characteristics, please see the "Basic concepts of the algorithm"


6 basic methods of algorithms

Computer algorithms differ from human calculation, following roughly six lines of thinking:

  1. Enumeration
  2. Induction
  3. Recurrence
  4. Recursion
  5. Halving recurrence
  6. Backtracking

For more details, check out the "Basic concepts of the algorithm"
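Two of these ideas, recursion and halving, are easy to see in code. A minimal Python sketch: a recursive factorial, and a binary search that discards half of the remaining range at each step (the "halving" idea):

```python
# Recursion: a function defined in terms of itself.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Halving: binary search throws away half of the remaining
# range on every comparison, so it needs only about log2(n) steps.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(factorial(5))                       # 120
print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```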


3 Tips when choosing an algorithm


  1. Different problems may call for different algorithms, and the same problem may be solved by different algorithms. No algorithm is all-powerful; algorithms simply differ in their scope of application.
  2. Algorithms are not ranked high or low. Solving the problem quickly and cheaply is the goal; blindly pursuing complex algorithms (for example, deep learning) is like shooting a mosquito with a cannon.
  3. Sometimes several algorithms can solve the same problem, and the aim is to solve it at the lowest cost in the shortest time. It is important to choose the algorithm that fits your circumstances.


Baidu Encyclopedia + Wikipedia

Baidu Encyclopedia version

An algorithm is an accurate and complete description of a solution scheme: a series of clear instructions for solving a problem. An algorithm represents a systematic way of describing a problem-solving strategy; that is, for input meeting a certain specification, it can produce the required output in finite time.

If an algorithm is flawed or not suitable for a problem, executing this algorithm will not solve the problem. Different algorithms may accomplish the same task with different time, space or efficiency. The pros and cons of an algorithm can be measured by space complexity and time complexity. The instructions in the algorithm describe a calculation that, when run, starts with an initial state and (possibly empty) initial input, passes through a series of finite and clearly defined states, and eventually produces an output and stops at a final state.

Read More


Wikipedia version

In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks.

As an effective method, an algorithm can be expressed within finite space and time, in a well-defined formal language, for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing an "output" and terminating in a final state.

The concept of an algorithm has existed for centuries. Greek mathematicians used algorithms such as the sieve of Eratosthenes to find prime numbers and the Euclidean algorithm to find the greatest common divisor of two numbers. The word itself derives from the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi. Partial formalization of the modern concept of the algorithm began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method". These formalizations included the Gödel-Herbrand-Kleene recursive functions of 1930, 1934, and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936-37 and 1939.

Read More


Turing Test – The Turing Test

Understanding the Turing test, and Turing himself

What was the original intention of the Turing test?

The Turing test came about because Turing was pondering a question: can machines think?

Turing believed it was possible to create a machine that could think, so he considered a second question: how do we judge whether a machine can think?

Then there is the Turing test.

The process sounds as magical as science fiction, but Turing was just that kind of magical person. If you are interested, read on.


What is the Turing test?

Turing test diagram

The Turing test was proposed in 1950, first appearing in the paper "Computing Machinery and Intelligence".

Have a person sit in front of a computer and converse with an unseen party via the keyboard. If the person cannot tell whether they are talking to a human or a machine, then the machine on the other end passes the Turing test and is deemed to have artificial intelligence.

Test criterion: a chat lasting 25 minutes; anything less does not count as passing the test.


Turing also gave very useful suggestions for the development of artificial intelligence:

Rather than developing a computer that simulates adult thinking, it is better to try something simpler, perhaps an AI system with only a child's intelligence, and then let the system keep learning. This is exactly the core guiding idea behind how we use machine learning to tackle artificial intelligence today.

Who is Turing?

Alan Turing

Turing could be called a genius among geniuses, and many of his ideas still influence us today.


Key figures in World War II

Churchill wrote in his memoirs that Turing, as the hero who deciphered the Enigma cipher machine, made the single greatest contribution to the Allied victory in the Second World War.

Turing's deciphering system could break Germany's Enigma messages in a matter of minutes, raising the amount of intelligence deciphered by Britain's wartime intelligence center from 39,000 to 84,000 messages per month, and bringing the end of World War II forward by at least a couple of years. Turing later also cracked Germany's heavily encrypted Tunny cipher. With powerful deciphering machines, nearly every level of the German military's encrypted communications was broken during the war.


Computer founder

At 24, Turing had already created a vision that would change the world: the Turing machine, described in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem". The Turing machine can be said to have fully conceived the idea of the computer.

His concept of the universal Turing machine, an abstract computer that performs different tasks by changing its software, is recognized as the forerunner of the modern computer: every machine from the first generation of vacuum-tube arrays to the laptops we use today descends from it.



Turing fell in love with a 19-year-old young man named Murray. In 1952, however, Murray and an accomplice broke into Turing's house to steal, and Turing called the police. Under repeated police questioning, Turing acknowledged his homosexual relationship with Murray and was charged with "gross indecency".

At the trial in March 1952, Turing pleaded guilty, but made it clear that he felt no remorse.


One of Britain's top 20 long-distance runners

Turing was also a gifted marathon runner, with a personal best of 2 hours 46 minutes.



Artificial Intelligence + Turing Test

Artificial intelligence is something Turing raised back in 1950. Had he not proposed the concept, you would not see so many AI developments and applications today.

And the Turing test itself is still in use today.


Baidu Encyclopedia + Wikipedia

Baidu Encyclopedia version

The Turing test was invented by Alan Mathison Turing. A tester is separated from the test subjects (one human and one machine) by some device (such as a keyboard) and asks them questions at will. After many rounds of testing, if more than 30% of testers cannot determine whether the subject is a human or a machine, the machine passes the test and is considered to possess human intelligence.

The term "Turing test" comes from the 1950 paper "Computing Machinery and Intelligence" by Alan Mathison Turing, a pioneer of computer science and cryptography. The 30% figure was Turing's prediction of machine thinking ability for the year 2000; we still lag far behind that prediction.

Read More

Wikipedia version

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Turing proposed that a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses. The evaluator would know that one of the two conversation partners is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.

The test result does not depend on the machine's ability to give correct answers to questions, only on how closely its answers resemble those a human would give. The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while he was working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chose to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words". Turing's new question was: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, could actually be answered.

In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think". Since Turing first introduced his test, it has proven highly influential and widely criticized, and it has become an important concept in the philosophy of artificial intelligence.

Read More


Extended reading

"Computing Machinery and Intelligence" (English PDF)
The Imitation Game (Douban rating: 8.6)


Weak artificial intelligence, strong artificial intelligence, super artificial intelligence


Weak artificial intelligence (Weak AI)

Weak artificial intelligence, also known as restricted-field artificial intelligence (Narrow AI) or applied artificial intelligence (Applied AI), refers to artificial intelligence that focuses on and can only solve problems in specific areas.

For example: AlphaGo, Crab, FaceID, etc.

Extended reading:

Weak AI - Wikipedia

Weak AI - Investopedia


Strong artificial intelligence (Strong AI)

Also known as artificial general intelligence (AGI) or full AI, it refers to artificial intelligence that can perform all the work a human can.

Strong artificial intelligence has the following capabilities:

  • The ability to reason, apply strategies, solve problems, and make decisions under uncertainty
  • The ability to express knowledge, including the ability to express common sense knowledge
  • Planning ability
  • Learning ability
  • Ability to communicate using natural language
  • Ability to integrate these capabilities to achieve a defined goal

Extended reading:

What is the difference between strong-AI and weak-AI?——Stackexchange


Super Intelligence (ASI)

If computer programs keep evolving and become smarter than the smartest, most gifted humans, the resulting AI system can be called super artificial intelligence.

Extended reading:


The Difference Between Artificial Intelligence, General Intelligence, And Super Intelligence - Coresystems

The deadly gamble of superintelligence

Machine learning – machine learning | ML

What is the relationship between machine learning, artificial intelligence, and deep learning?

The concept of AI was proposed in 1956; just 3 years later (1959), Arthur Samuel proposed the concept of machine learning:

Field of study that gives computers the ability to learn without being explicitly programmed.

Machine learning studies and builds special algorithms (rather than one specific algorithm) that let computers learn from data in order to make predictions.

Therefore, machine learning is not a specific algorithm but a general term covering many algorithms.

Machine learning involves many different algorithms, and deep learning is one of them. Other methods include decision trees, clustering, Bayesian, and so on.

Deep learning is inspired by the structure and function of the brain, namely the interconnection of many neurons. Artificial neural networks (ANNs) are algorithms that mimic the biological structure of the brain.

Whether it is machine learning or deep learning, it belongs to the category of artificial intelligence (AI). So artificial intelligence, machine learning, and deep learning can be represented by the following diagram:

The relationship between artificial intelligence, machine learning, and deep learning

Learn more about artificial intelligence: "What is artificial intelligence? (The essence of AI + history of development + limitations) (2019 update)"

Learn more about deep learning: "Understanding deep learning in one article (plain-language explanation + 8 pros and cons + 4 typical algorithms)"

Machine learning science for everyone


What is machine learning?

Before explaining how machine learning works, I will introduce its most basic ideas, so that you understand what machine learning essentially is and can put it to better use. This problem-solving mindset is also useful in work and everyday life.

The basic idea of machine learning

  1. Abstract a real-world problem into a mathematical model, and understand the role each parameter plays in that model
  2. Solve the mathematical model using mathematical methods, thereby solving the real-world problem
  3. Evaluate the model: does it really solve the real-world problem, and how well does it solve it?

No matter which algorithm or which data is used, the most fundamental idea never escapes these 3 steps!

The basic idea of machine learning

When we understand this basic idea, we can find out:

Not every problem can be converted into a mathematical problem, and AI has no way to solve real problems that cannot be converted. At the same time, converting the real problem into a mathematical problem is the hardest part.


Principle of machine learning

Let's take supervised learning as an example to explain how machine learning works.

Suppose we are teaching children to read the characters 一, 二, 三 (one, two, three). We first take out 3 cards, let the children look at them, and say: "one horizontal stroke is one, two horizontal strokes is two, three horizontal strokes is three."

Machine learning principle explains 1

Repeat this process, and the children's brains keep learning.

Machine learning principle explains 2

When this has been repeated enough times, the children acquire a new skill: recognizing the Chinese characters 一, 二, 三.

Machine learning principle explains 3

If we use the human learning process above as an analogy, machine learning turns out to be very similar.

  • The cards above are what machine learning calls the "training set"
  • Properties like "one horizontal stroke" or "two horizontal strokes" that distinguish different characters are called "features"
  • The children's process of continuous learning is called "modeling"
  • The rule summed up after learning the characters is called the "model"

Through the training set, constantly identifying features, continuously modeling, and finally forming an effective model, this process is called "machine learning"!

Machine learning principle explains 4


Supervised learning, unsupervised learning, and reinforcement learning

Machine learning can be roughly divided into 3 categories based on training methods:

  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning

You may also have heard of "semi-supervised learning", but such methods are variants of the 3 categories above; the essence is unchanged.


Supervised learning

Supervised learning means we give the algorithm a dataset along with the correct answers, and the machine uses the data to learn how to compute the correct answer.

For example:

We have prepared a lot of photos of cats and dogs, and we want the machine to learn how to recognize cats and dogs. When we use supervised learning, we need to label these photos.

Use labeled photos for training

The labels we attach to the photos are the "correct answers". Through extensive learning, the machine learns to recognize cats and dogs in new photos.

When the machine encounters a new photo of a dog, it can recognize it.

This way of helping machines learn through large amounts of manual labeling is supervised learning. It works very well, but the cost is also very high.
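The labeled-photos idea can be sketched with one of the simplest supervised algorithms, a 1-nearest-neighbour rule. The numeric features below are made up, standing in for whatever a real system would extract from photos:

```python
# Labeled training set: (features, label) pairs. The two numbers
# are made-up stand-ins for features extracted from a photo.
training_set = [
    ((1.0, 2.0), "cat"),
    ((1.2, 1.8), "cat"),
    ((3.0, 5.0), "dog"),
    ((3.2, 4.8), "dog"),
]

def predict(features):
    """1-nearest-neighbour: answer with the label of the closest labeled example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_set, key=lambda ex: sq_dist(ex[0], features))
    return label

print(predict((1.1, 1.9)))  # cat
print(predict((3.1, 5.2)))  # dog
```

The "correct answers" live entirely in the labels of the training set; that is what makes this supervised.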

Learn more about supervised learning


Unsupervised learning

In unsupervised learning, the given dataset has no "correct answers"; all the data looks the same. The task of unsupervised learning is to mine the underlying structure from the dataset.

For example:

We give the machine a pile of cat and dog photos without labeling any of them, and we want the machine to sort the photos into groups.

Give unlabeled photos to the machine

Through learning, the machine divides the photos into 2 groups: one containing all the cat photos, the other all the dog photos. Although this looks similar to the supervised-learning result above, there is an essential difference:

In unsupervised learning, although the photos are split into cats and dogs, the machine does not know which group is cats and which is dogs. As far as the machine is concerned, it has simply divided them into two categories, A and B.

The machine can separate the cats from the dogs, but it does not know which group is cats and which is dogs.
Learn more about unsupervised learning


Reinforcement learning

Reinforcement learning is closer to the essence of biological learning, so it is expected to achieve higher intelligence. It focuses on how an agent can take a series of actions in an environment so as to maximize its cumulative reward. Through reinforcement learning, an agent learns which action to take in which state.

The most typical scene is playing games.

On January 25, 2019, AlphaStar (Google's artificial intelligence program, trained with reinforcement learning) crushed the professional StarCraft players "TLO" and "MANA". News link

Learn more about reinforcement learning
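As a toy sketch of the idea (not AlphaStar's actual method), here is tabular Q-learning on an invented environment: an agent on a 5-cell line earns a reward only for reaching the rightmost cell, and learns from that reward alone which action each state calls for:

```python
import random

N_STATES, GOAL = 5, 4  # states 0..4 on a line; state 4 is the goal

def step(state, action):
    """action 0 = move left, 1 = move right. Reward 1 only at the goal."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.1):
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # one Q-value per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# Moving right should now look better than moving left in every state.
```

The agent is never told the rule "go right"; it discovers it purely from the cumulative reward, which is the core of reinforcement learning.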


7 steps for machine learning

Through the above content, we have some vague concepts about machine learning. At this time, we will be particularly curious: how to use machine learning?

Machine learning is divided into 7 steps in the actual operation level:

  1. Data collection
  2. Data preparation
  3. Model selection
  4. Training
  5. Evaluation
  6. Parameter tuning
  7. Prediction (putting the model to use)
7 steps for machine learning

Suppose our task is to distinguish between red wine and beer by alcohol and color. Here is a detailed description of how each step in machine learning works.

Case goal: distinguish between red wine and beer


Step 1: Collecting data

We buy a variety of beers and red wines at the supermarket, plus a spectrometer to measure color and equipment to measure alcohol content.

We then record the color and alcohol content of every bottle we bought, producing the following table.

Colour   Alcohol (%)   Kind
610      5             Beer
599      13            Red wine
693      14            Red wine
...      ...           ...

This step is very important because the quantity and quality of the data directly determine the quality of the prediction model.


Step 2: Data Preparation

In this example, our data is very neat, but in the actual situation, the data we collect will have many problems, so it will involve data cleaning and other work.

When the data itself has no problems, we divide it into 3 parts: a training set (60%), a validation set (20%), and a test set (20%), for later validation and evaluation.

The data is divided into 3 parts: training set, verification set, test set
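A minimal sketch of the 60/20/20 split described above (the function name and seed are my own, for illustration):

```python
import random

def split_dataset(rows, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle, then cut into training / validation / test sets."""
    rows = list(rows)                      # copy: leave the caller's data alone
    random.Random(seed).shuffle(rows)      # fixed seed keeps the split reproducible
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 60 20 20
```

Shuffling before cutting matters: if the rows were sorted (all beer first, all wine last), an unshuffled split would give the three sets very different mixes.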

There are many tips on data preparation. If you are interested, see "The 6 most common problems with AI datasets (with solutions)".


Step 3: Select a model

Over the years, researchers and data scientists have created many models. Some are well suited to image data, some to sequences (such as text or music), some to numeric data, and some to text data.

In our case, we have only 2 features, color and alcohol content, so we can use a small linear model, which is a fairly simple model.


Step 4: Training

Most people think this is the most important part, but that is not the case. The quantity and quality of the data, and the choice of model, matter more than the training itself (as the saying goes: one minute on stage takes ten years of practice off stage).

This process requires no human involvement; the machine completes it independently. The whole thing is like doing arithmetic exercises, because the essence of machine learning is to turn a problem into a mathematical problem and then solve that mathematical problem.


Step 5: Evaluation

Once training is complete, you can assess whether the model is useful. This is where the validation set and test set we set aside earlier come into play. The main evaluation metrics include accuracy, recall, and the F-score.

This process lets us see how the model predicts on data it has never seen, which indicates how the model will perform in the real world.
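These metrics are easy to compute by hand. A sketch for the wine/beer case (the sample labels are made up; precision is computed along the way, since the F-score is built from precision and recall):

```python
def evaluate(y_true, y_pred, positive="beer"):
    """Return (accuracy, precision, recall, f1) for one positive class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)  # true positives
    fp = sum(t != positive and p == positive for t, p in pairs)  # false positives
    fn = sum(t == positive and p != positive for t, p in pairs)  # false negatives
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = ["beer", "beer", "wine", "wine", "beer"]
y_pred = ["beer", "wine", "wine", "wine", "beer"]
acc, prec, rec, f1 = evaluate(y_true, y_pred)
# acc = 0.8, prec = 1.0, rec ≈ 0.67, f1 ≈ 0.8
```

These numbers are always computed on the validation or test set, never on the training set, so they reflect performance on unseen data.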


Step 6: Parameter adjustment

After the evaluation, you may want to see whether training can be improved further. We can do this by tuning parameters: during training we implicitly assumed certain parameter values, and adjusting them can make the model perform better.


Step 7: Forecast

The 6 steps above all serve this one, which is where the value of machine learning lies. Now, when we buy a new bottle of wine, we just tell the machine its color and alcohol content, and it tells us whether it is beer or red wine.
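To make the whole loop concrete, here is a toy sketch of the beer-vs-wine task with a small linear model (a perceptron). The numbers are invented for illustration, and the colour reading is scaled down so both features are on a similar scale:

```python
# Toy labeled data: (colour reading, alcohol %), label 0 = beer, 1 = red wine.
data = [
    ((610, 5.0), 0), ((605, 4.5), 0), ((615, 5.5), 0),
    ((599, 13.0), 1), ((693, 14.0), 1), ((650, 12.5), 1),
]

def features(colour, alcohol):
    return (colour / 100.0, alcohol)  # crude scaling so features are comparable

def train_perceptron(data, epochs=500, lr=0.1):
    """Classic perceptron rule: nudge the weights whenever a sample is misclassified."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (colour, alcohol), label in data:
            x1, x2 = features(colour, alcohol)
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

w1, w2, b = train_perceptron(data)

def predict(colour, alcohol):
    x1, x2 = features(colour, alcohol)
    return "red wine" if w1 * x1 + w2 * x2 + b > 0 else "beer"

print(predict(610, 5.0))   # beer
print(predict(640, 13.0))  # red wine
```

On this tiny dataset the classes are separated mainly by alcohol content, so the perceptron converges after a few epochs; real data would need the full data-cleaning, splitting, and evaluation steps above.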

There is a YouTube video introducing these 7 steps: The 7 Steps of Machine Learning (accessing YouTube may require a VPN).

15 classic machine learning algorithms


Algorithm                        Training method
Linear regression                Supervised learning
Logistic regression              Supervised learning
Linear discriminant analysis     Supervised learning
Decision tree                    Supervised learning
Naive Bayes                      Supervised learning
K-nearest neighbors              Supervised learning
Learning vector quantization     Supervised learning
Support vector machine           Supervised learning
Random forest                    Supervised learning
AdaBoost                         Supervised learning
Gaussian mixture model           Unsupervised learning
Restricted Boltzmann machine     Unsupervised learning
K-means clustering               Unsupervised learning
Expectation-maximization (EM)    Unsupervised learning

Machine learning science for everyone


Baidu Encyclopedia + Wikipedia

Baidu Encyclopedia version

Machine Learning (ML) is a multidisciplinary subject involving many disciplines such as probability theory, statistics, approximation theory, convex analysis, and algorithm complexity theory.

It specializes in how computers simulate or implement human learning behaviors in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. It is the core of artificial intelligence and the fundamental route to making computers intelligent; its applications span every field of AI. It mainly uses induction and synthesis rather than deduction.

Read More


Wikipedia version

Machine learning is the use of computer algorithms and statistical models to progressively improve a computer system's ability to perform a specific task.

Machine learning builds a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in email filtering, network-intruder detection, and computer-vision applications where developing an algorithm of task-specific instructions is infeasible. Machine learning is closely related to computational statistics, which focuses on using computers to make predictions. The study of mathematical optimization supplies methods, theory, and application domains to the field. Data mining is a research area within machine learning that focuses on exploratory data analysis through unsupervised learning. Applied to business problems, machine learning is also known as predictive analytics.

Read More


Supplementary information 2: Quality extended reading

Artificial Intelligence – Artificial intelligence | AI

Understanding the nature of artificial intelligence

Artificial intelligence (AI) has entered the public eye, and we can see many AI-related products in daily life, such as Siri, AI beautification, AI face-swapping...

Although everyone listens a lot, most people don't understand AI, and there are even some misunderstandings. This article will not cover any technical details to help everyone understand the nature of artificial intelligence.


What is artificial intelligence?

Many people have some misconceptions about artificial intelligence:

  1. Robots in movies are typical examples of artificial intelligence
  2. Artificial intelligence seems to be omnipotent
  3. Artificial intelligence will threaten human survival in the future
  4. ……

The reason so many misunderstandings about artificial intelligence exist is that people hear only a few prominent voices without understanding the basic principles of AI. This article will explain those principles. The nature of things is usually not as complicated as people make it sound.

We will compare artificial intelligence with traditional software; a frame of reference makes it easier to understand.


Traditional software vs artificial intelligence

Traditional software

Traditional software follows the basic logic of "if-then". Humans summarize effective rules from their own experience, then have the computer run those rules automatically. Traditional software can never cross the boundaries of human knowledge, because all of its rules are made by humans.

To put it simply: traditional software is "rule-based," requiring artificially set conditions and telling the computer what to do if it meets this condition.

This logic is very useful when dealing with simple problems, because the rules are clear and the results are predictable. The programmer is the god of software.
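The "rule-based" logic described above can be sketched in a few lines. This is a hypothetical illustration (the spam-filter scenario and keyword list are my own, not from the article): every condition is written by a human, and the computer only executes them.

```python
# Traditional "if-then" software: a human writes every rule by hand.
# Hypothetical toy spam filter, for illustration only.
def is_spam(subject: str) -> bool:
    banned_phrases = ["free money", "winner", "click here"]  # rules set by a person
    return any(phrase in subject.lower() for phrase in banned_phrases)

print(is_spam("You are a WINNER, claim free money"))  # True
print(is_spam("Meeting moved to 3pm"))                # False
```

The program can never catch a spam pattern its author did not anticipate, which is exactly the boundary of human-made rules described above.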

But real life is full of complex problems that are almost impossible to solve by formulating rules. For example, face recognition implemented through handwritten rules performs very poorly.

Traditional software is rule-based logic


Artificial intelligence

Artificial intelligence has developed many different branches with diverse technical principles. Here we introduce only deep learning, the most popular approach today.

The technical principles of deep learning are completely different from the logic of traditional software:

The machine summarizes laws from a "specific" large amount of data, forming certain "specific knowledge", and then applies this "knowledge" to real-world scenarios to solve practical problems.

This is the essential logic of artificial intelligence at its current stage of development. The knowledge summarized by artificial intelligence cannot be expressed as intuitively and precisely as traditional software rules. It is more like the knowledge humans learn: abstract and hard to articulate.

Artificial Intelligence Logic: Inducting Knowledge from Data
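The contrast with rule-based software can be made concrete with a minimal sketch of learning from data. This toy example (my own, not from the article) is never told the relationship y = 2x; it induces the coefficient from example pairs by gradient descent:

```python
# "Learning from data" instead of hand-written rules: the program induces
# the coefficient w from examples. Toy illustration only.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # the "training data"

w = 0.0                      # the "knowledge" to be learned
for _ in range(200):         # gradient descent on mean squared error
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad

print(round(w, 2))           # prints 2.0: the rule was induced, not programmed
```

No human wrote the rule "multiply by 2"; it emerged from the data, which is the essential logic described above (deep learning does the same with far more parameters and data).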

The statement above is still rather abstract. The following sections look at it from several angles to help you understand it thoroughly:


Artificial intelligence is a tool

AI, like the hammer, the car, or the computer, is in essence a tool.

A tool only has value when it is used; on its own it has none, just as a hammer left in the toolbox is worthless until someone swings it.

Artificial intelligence is essentially a tool

The reason the whole society is talking about this particular tool is that artificial intelligence greatly expands the capabilities of traditional software. Many things computers could not do before, they now can.

Thanks to Moore's Law, the power of computers has grown exponentially. Every task a computer can take over sees a huge productivity gain, and artificial intelligence lets many more tasks catch the express train of Moore's Law, so this change is extraordinary.

But no matter how it changes, traditional software and artificial intelligence are tools that exist to solve practical problems. This has not changed.


Artificial intelligence only solves specific problems

Many movies, such as "Terminator" and "The Matrix", feature superhuman robots, giving everyone the impression that artificial intelligence is omnipotent.

The reality is that artificial intelligence is still at the stage of a single task.

Artificial intelligence currently can only handle a single task

Single-task mode

A landline for calls, a game console for games, an MP3 player for music, a navigator for driving...

Multitasking mode

This stage is like a smartphone: one phone can install many apps and do many things.

However, these capabilities are independent of each other: after booking a flight in a travel app, you still set the alarm in a clock app and call a ride in a taxi app. Multitasking mode is just a stack of single-task modes; it is still far from human intelligence.


Suppose you are playing Go with a friend and notice that his mood is very bad. You could easily win, but you deliberately lose, and even praise him, because you don't want to make him more depressed and irritable.

In this small matter, you have used a variety of different skills: emotion recognition, Go skills, communication, psychology...

But the famous AlphaGo would never do this. Whatever the opponent's situation, AlphaGo wins relentlessly, because playing Go is the only thing it can do!

Only when all kinds of knowledge form a networked structure can they be integrated, so that, for example, military knowledge can be used in business and biology in economics.


Knowing what, but not knowing why

Current artificial intelligence summarizes knowledge inductively from large amounts of data. This crude "induction method" has a big problem:

It does not care about why.

AI doesn't care why

Ponzi schemes take advantage of this!

  • It uses ultra-high returns to attract naive investors, paying the early participants with money from the newcomers;
  • When bystanders saw that every participant had actually made money, they simply induced: historical experience shows this is reliable.
  • So more and more people grew envious and joined, until one day the crooks ran off with the money.

If we instead reason deductively with logic, we can conclude it is a scam:

  • Such high returns do not fit market rules
  • High returns with no risk of losing money? That contradicts the risk-return trade-off
  • Why would such a good thing fall on me? That doesn't seem right

Because current artificial intelligence is based on "inductive logic", it also makes very low-level mistakes.

AI can make very low-level mistakes

  • Left: occlusion by a motorcycle makes the AI mistake a monkey for a human.
  • Middle: occlusion by a bicycle makes the AI mistake the monkey for a human, and the jungle background makes it mistake the handlebar for a bird.
  • Right: the guitar turns the monkey into a human, and the jungle turns the guitar into a bird.

The image above shows a guitar Photoshopped into a photo of monkeys in the jungle. It led the deep network to mistake the monkey for a human and the guitar for a bird, presumably because it believes humans are more likely than monkeys to carry a guitar, and birds are more likely than guitars to appear in the jungle.

Inductive logic is also why AI depends on large amounts of data: the more data there is, the more general the induced experience becomes.


The history of artificial intelligence

AI is not a brand-new thing; it has been developing for decades! Below we introduce the three most representative stages of its development.

History of Artificial Intelligence

The picture above shows some milestones in artificial intelligence from 1950 to 2017, which can be summarized into three big stages:

First wave (non-intelligent dialogue robot)

1950s to 1960s

In October 1950, Turing proposed the concept of artificial intelligence (AI), along with the Turing test for assessing it.

The Turing test was so suggestive that within a few years, people believed they could see the dawn of intelligent computers.

In 1966, the psychotherapy chatbot ELIZA was born.

People of that era rated it very highly; some patients even enjoyed chatting with the robot. But its implementation logic was very simple: a limited dialogue library in which certain keywords spoken by the patient trigger specific canned responses.
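The keyword-to-response mechanism just described can be sketched in a few lines. This is an illustrative toy in the style of ELIZA, not the original program; the keywords and replies are invented for the example:

```python
# ELIZA-style sketch: a limited dialogue library maps keywords to
# canned responses. Illustrative only, not the original 1966 program.
RESPONSES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you feel sad?",
    "dream": "What does that dream suggest to you?",
}

def reply(patient_says: str) -> str:
    for keyword, response in RESPONSES.items():
        if keyword in patient_says.lower():
            return response
    return "Please go on."  # fallback when no keyword matches

print(reply("I felt sad after the dream"))  # prints: Why do you feel sad?
```

There is no understanding anywhere in this loop, which is why the article says the computer itself was not smart.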

The first wave did not use any new technology; it used tricks to make the computer seem like a real person. The computer itself was not smart.


Second wave (speech recognition)

1980s to 1990s

In the second wave, speech recognition was one of the most representative breakthroughs. The core advance was abandoning the symbolic school's approach in favor of statistical methods for solving practical problems.

In his book "Artificial Intelligence", Kai-Fu Lee describes this process in detail; he was one of the important participants.

The biggest breakthrough of the second wave was the change in thinking: abandoning the symbolic approach and using statistical ideas to solve problems.


The third wave (deep learning + big data)

21st century

2006 was a watershed in the history of deep learning. That year, Geoffrey Hinton published "A Fast Learning Algorithm for Deep Belief Networks". Other important deep learning papers appeared the same year, bringing several major breakthroughs at the theoretical level.

The third wave arrived because two conditions had matured:

After 2000, the rapid growth of the Internet industry produced massive amounts of data, while the cost of data storage dropped quickly, making the storage and analysis of massive data possible.

The continuous maturing of GPUs provided the necessary computing power, improving the usability of the algorithms and reducing the cost of computation.

Deep learning is the mainstream technology today

Once these conditions matured, deep learning developed powerful capabilities, continually setting records in speech recognition, image recognition, and NLP, and making AI products truly usable (for example, speech recognition error rates as low as 6%, face recognition accuracy surpassing humans, and BERT exceeding human performance on 11 NLP tasks...).

The third wave struck mainly because big data and computing power were finally in place, letting deep learning exert its full power. AI performance surpassed humans on some tasks and reached the stage of "usability", no longer just scientific research.

Differences between the three waves of artificial intelligence

  1. The first two waves were driven mainly by academic research; the third is driven by real business needs.
  2. The first two waves stayed mostly at the marketing level; the third reaches the business-model level.
  3. In the first two waves, academia mostly had to persuade governments and investors to fund them; in the third, investors actively seek out academic and entrepreneurial projects in hot areas.
  4. The first two waves mostly raised questions; the third mostly solves problems.

To learn more about the history of AI, I recommend reading Kai-Fu Lee's "Artificial Intelligence"; the content on the three waves above is excerpted from that book.


What can artificial intelligence not do?

3 levels of artificial intelligence

When exploring the boundaries of AI, we can first simply divide AI into 3 levels:

  1. Weak artificial intelligence
  2. Strong artificial intelligence
  3. Super artificial intelligence

3 levels of artificial intelligence: weak artificial intelligence, strong artificial intelligence, super artificial intelligence

Weak artificial intelligence

Weak artificial intelligence, also known as restricted-field artificial intelligence (Narrow AI) or applied artificial intelligence (Applied AI), refers to artificial intelligence that focuses on and can only solve problems in specific areas.

For example: AlphaGo, Siri, FaceID...

Strong artificial intelligence

Also known as artificial general intelligence (AGI) or full AI, it refers to artificial intelligence that can perform all the intellectual work a human can.

Strong artificial intelligence has the following capabilities:

  • Reasoning under uncertainty, using strategies, solving problems, and making decisions
  • The ability to express knowledge, including the ability to express common sense knowledge
  • Planning ability
  • Learning ability
  • Ability to communicate using natural language
  • Ability to integrate these capabilities to achieve a defined goal

Super artificial intelligence

Assuming that computer programs continue to evolve and are smarter than the world's smartest and most gifted humans, the resulting artificial intelligence system can be called super artificial intelligence.

We are currently at the stage of weak artificial intelligence; strong artificial intelligence has not been achieved (and may be far off), and super artificial intelligence is nowhere in sight. So "specific domains" remain a boundary AI cannot cross.


What is the capability boundary of artificial intelligence?

If we want to go deeper and explain the boundaries of AI's capabilities at a theoretical level, we must bring in Turing. In the mid-1930s, Turing was pondering three questions:

  1. Do all mathematical problems in the world have definite answers?
  2. If a problem has a definite answer, can the answer be computed in a finite number of steps?
  3. For problems computable in a finite number of steps, can an imaginary machine keep running step by step and, when it finally halts, have solved the problem?

Turing indeed designed such a machine, which later generations called the Turing machine. Today's computers, including every new computer being designed around the world, do not exceed the Turing machine in the range of problems they can solve.

(We are all human; how can the gap be so large?)

Through these three questions, Turing drew a boundary for computation. This limit applies not only to today's AI but also to future AI.

Let's take a closer look at the boundaries:

Ability boundaries for artificial intelligence

  1. Of all the problems in the world, only a small part are mathematical problems.
  2. Of mathematical problems, only a small part have solutions.
  3. Of the solvable problems, only a part can be solved by an idealized Turing machine.
  4. Of what a Turing machine can solve, only a part can be solved by today's computers.
  5. What AI can solve is only a part of what computers can solve.

Worried that artificial intelligence is too powerful? You think too much!

In some specific scenarios, AI can perform very well, but in most scenarios, AI is not useful.


Will artificial intelligence make you unemployed?

This question is the one that everyone cares about most, and it is also the one that has the greatest influence on each individual. So come up and talk about it separately.

First, the replacement of "partial human behavior" by artificial intelligence is an inevitable trend

Every new technology or invention will replace part of the labor force:

Announcing the time - the clock

Pulling a rickshaw - the car

Digging wells - the drilling machine


Note that technology replaces only specific tasks: the drilling machine can dig the hole for you, but it cannot decide where to dig.

The same is true of artificial intelligence, which is not aimed at certain occupations or certain people, but replaces some specific labor behaviors.

Second, as old jobs disappear, better new jobs will appear.

The history of several technological revolutions tells us that although the emergence of new technologies has caused some people to lose their jobs, many new jobs will also be created. The jobs that are replaced are often inefficient, and the jobs that are created are often more efficient. Think about pulling a rickshaw, then think about driving a car.

When artificial intelligence frees up a portion of the workforce, it can do more valuable and interesting things.

Don't be afraid! Using AI well is a super skill

Two points were mentioned above:

  1. The essence of artificial intelligence is a tool, and people need to use it
  2. Artificial intelligence replaces not people but specific work tasks

So don't be afraid of being replaced by artificial intelligence. Instead, actively learn AI, become one of the earliest people able to use it, and learn to use it well.

Think of the people who could use computers and the Internet 20 years ago: such people were scarce then, so they reaped the dividends of the Internet era. By the same token, the dividends of the intelligent era will belong to those who can use AI.


What jobs will be replaced by artificial intelligence?

Kai-Fu Lee proposed a rule of thumb:

If a job requires less than 5 seconds of thought to make a decision, there is a high probability it will be replaced by artificial intelligence.

4 job characteristics that are easily replaced by artificial intelligence

Such jobs share four characteristics:

  1. Not much information is needed to make a decision
  2. The decision-making process is not complicated and the logic is simple
  3. Can be done on its own, without collaboration
  4. Repetitive work

Skills that are hard to replace by artificial intelligence

Scientists have identified three skills that are difficult to replace with artificial intelligence:

  1. Social intelligence (insight, negotiation skills, empathy...)
  2. Creativity (original power, artistic aesthetics...)
  3. Perception and operation ability (finger sensitivity, coordinated operation ability, ability to cope with complex environments...)


How to usher in the intelligent era?

Artificial intelligence will sweep the world like the industrial era. In this case, what we need to do is not to escape, but to embrace this change. Here are some specific suggestions for everyone:

  1. Understand the underlying logic and basic principles of the intelligent era. You don't need to learn to write code, but you do need to know what is possible and what is not.
  2. Artificial intelligence will permeate every industry, as computers did. Try to understand it as much as possible, learn to use it to solve existing problems, and become an early adopter.
  3. Make a career plan, and avoid "three-no" jobs: no social interaction, no creativity, and no demanding perception or manipulation skills.


Final Thoughts

The basic principle of artificial intelligence: Machines summarize the laws from "specific" large amounts of data to form certain "specific knowledge", and then apply this "knowledge" to real-world scenarios to solve practical problems.

Based on this basic principle, there are three characteristics:

  1. Artificial intelligence is essentially a tool
  2. Artificial intelligence can only solve specific problems; it cannot do everything
  3. Artificial intelligence belongs to inductive logic and can tell you what it is, but cannot tell you why


So far, artificial intelligence has experienced 3 waves:

  1. 1950s to 1960s: non-intelligent dialogue robots
  2. 1980s to 1990s: speech recognition
  3. Early 21st century: deep learning + big data


There are 3 levels of artificial intelligence:

  1. Weak artificial intelligence
  2. Strong artificial intelligence
  3. Super artificial intelligence


In terms of unemployment, artificial intelligence will indeed replace some human jobs, but at the same time, some new and more valuable jobs will appear. There are three skills that will not be easily replaced by artificial intelligence in the future:

  1. Social intelligence (insight, negotiation skills, empathy...)
  2. Creativity (original power, artistic aesthetics...)
  3. Perception and operation ability (finger sensitivity, coordinated operation ability, ability to cope with complex environments...)


"Attached" 2020 AI development trends

Let's review the important changes in artificial intelligence in 2019:

  1. Important progress has taken place in the NLP field, and pre-trained models such as BERT, GPT-2, XLNET have already played an important role in the product.
  2. The infrastructure is further improved: PyTorch is growing very fast, and TensorFlow is deeply integrated with Keras.
  3. GANs developed rapidly and popular products emerged: DeepFake and ZAO let the general public experience GAN technology.
  4. It is also because of DeepFake that the social impact of artificial intelligence has been paid attention to by everyone, and AI-related laws are being improved globally.
  5. Auto-ML lowers the threshold of AI and makes the deployment of artificial intelligence very easy.

What are the development trends in 2020?

  1. The introduction of 5G will digitize more of the physical world, which will further promote the development and popularization of AI.
  2. The integration of the data science team and the business team will be closer.
  3. We may see multi-task AI models develop, moving toward general artificial intelligence.
  4. Get rid of data dependence and get better models with less data.
  5. Achieve greater breakthroughs and development in the NLP field.
  6. Improve the interpretability of AI and solve the black box problem
  7. Social issues have intensified, and discussions on personal data security, privacy, and algorithmic bias have been increasing.

More important milestones in 2019 and development trends in 2020 can be found in the following two articles:

"Important developments of artificial intelligence, machine learning, and deep learning in 2019 and trends in 2020 (technical)"

"Important developments of artificial intelligence, machine learning, and deep learning in 2019 and trends in 2020 (research)"


Baidu Encyclopedia + Wikipedia

Baidu Encyclopedia version
Artificial Intelligence, abbreviated AI in English, is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence.
Wikipedia version
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines.