This article is republished from Crunchbase (original address). The original text is in English.

By Jay Budzik, CTO of Zest AI. Zest's ZAML software uses machine learning to help lenders make more effective credit decisions safely, fairly, and transparently. Zest was founded by former Google CIO Douglas Merrill and is backed by Matrix Partners, Lightspeed, Upfront, Flybridge, and Baidu. The company works with financial institutions around the world to help more people get fair and transparent credit.

It has been a year since MMC Ventures published a surprising finding: 40 percent of AI startups make no substantial use of AI in their technology stack. (The research was conducted in Europe, but it could have been anywhere.) As the CTO of an AI company, I can tell you that kind of candor is refreshing.

I've had many recent conversations with customers, partners, and especially investors about proving that the AI is real (and not only that it's real). The outlines of what a real AI company looks like are still taking shape; I think what Matt Bornstein and Martin Casado of Andreessen Horowitz have written here about AI companies will prove prescient.

If you're an investor, customer, or partner sitting across from the founder or CEO of an AI company, here are the questions I would ask to figure out whether their team is legit. Given the wide variety of things called AI, for the sake of specificity we'll define AI here as machine learning.

What data sets did you use to train and evaluate your AI?

General AI is still science fiction. Today's technology works best on a set of narrow, specific problems that machines can learn to solve by processing large data sets of historical signals and outcomes. You can judge how well an AI solves its problem by holding out some of that data to test its accuracy. Leaders of AI companies should be able to describe the specific problem their AI solves, how accurate it is, and how that accuracy translates into business results.
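As a rough illustration of what that holdout test looks like in code, here is a minimal sketch using scikit-learn on a hypothetical loan-outcome table (the file and column names are invented, not taken from any real lender):

```python
# Minimal holdout-test sketch: train on most of the data, measure accuracy on
# rows the model never saw. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

loans = pd.read_csv("loan_history.csv")     # one row per historical loan (hypothetical file)
X = loans.drop(columns=["went_bad"])        # inputs known before the outcome
y = loans["went_bad"]                       # 1 = loan went bad, 0 = paid off

X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = GradientBoostingClassifier().fit(X_train, y_train)
score = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
print(f"holdout AUC: {score:.3f}")          # skill on unseen data, not training data
```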

An AI company needs data, and generally the more, the better. Data can take many forms, but it's easiest to think about in terms of rows and columns. The rows correspond to individual observations of an outcome (for example, did the loan go bad or get paid off?). The columns are the inputs: information known before the outcome was observed (for example, monthly income at the time of application).
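For the loan example, such a table might look like the toy data below (the numbers and column names are made up purely to show the shape):

```python
import pandas as pd

# Each row is one observed outcome; each column is an input known at application time.
loans = pd.DataFrame(
    {
        "monthly_income": [4200, 3100, 6800, 2500],   # known when the application was filed
        "months_at_job":  [26,   8,    54,   12],
        "existing_debt":  [9000, 3000, 1500, 7200],
        "went_bad":       [0,    1,    0,    1],       # the outcome, observed later
    }
)
print(loans)
```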

An AI company should be able to walk you through its data in detail. It should be able to explain what the AI is trying to predict, what data was used to train it, and how its effectiveness is evaluated. How often is the AI updated? What is the plan for incorporating new data to make it better? If a company has good answers to the data questions, it's probably legit.

What does your AI do now? What do people do?

If the team across the table is serious about AI, this will be a pressing question for them from the start. You want to hear them talk through a specific application of their AI. Depending on how it's deployed and how it works, AI could be tackling any of thousands of potential tasks. Be wary of teams that lack a specific focus, and of anything that sounds too good to be true. Do they claim you'll be able to replace a large number of workers? Do they pitch AI as a panacea that solves any problem?

When a company has genuinely worked through applying AI to a specific problem, it will know how accurate the results are, where it succeeded and failed, and where the gaps in data and process were. A company that knows its problem well sees AI as a tool that does what computers and advanced math are good at, while freeing people to do what they do better.

The company should have a clear view of what people will still need to do that the AI won't, and of how the AI fits into business processes that involve people. It should be able to describe the change management required to apply AI to a business problem, so you know what a customer has to do to reap the benefits. People who have genuinely wrestled with AI come across as coherent, thoughtful, and humble. They will talk about what went wrong and how they fixed it. Be wary of claims that an AI doesn't need careful monitoring.
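One concrete form that monitoring can take is a drift check on the model's inputs. The sketch below computes a population stability index, a metric credit modelers commonly use, on synthetic data; the thresholds mentioned are rules of thumb, not anything specific to the companies discussed here:

```python
# Sketch of a basic input-drift check: the population stability index (PSI),
# a metric commonly used in credit modeling. Data here is synthetic.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's recent distribution to its training-time distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0] = min(expected.min(), actual.min()) - 1e-9
    cuts[-1] = max(expected.max(), actual.max()) + 1e-9
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)   # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
training_income = rng.normal(4000, 1200, 10_000)   # incomes seen when the model was built
recent_income = rng.normal(3600, 1400, 2_000)      # incomes arriving in production

# Rules of thumb: PSI below 0.1 is stable; above 0.25 usually triggers review or retraining.
print(f"PSI for monthly income: {psi(training_income, recent_income):.3f}")
```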

Has your AI driven consistent business results and solved real problems for multiple customers?

It's tempting to underestimate how hard it is to take an idea that works in the lab and make it work in the real world. Once it hits a production environment, AI often fails to deliver the expected results, and making it really work can be a long and expensive journey. According to recent Gartner estimates, only about 20 percent of AI projects ever make it out of the lab. In my own work I've heard stories from large companies that have spent years trying to get their AI projects into production.

It's important to dig into the details of how the AI performs in practice. Ask how many customers have used it, how long it has been in production, and what business results it has produced. How long does it take to get up and running on average? How does the AI compare to historical measures of the same business outcome or task? How does it compare to simpler alternatives like rules, decision trees, or linear models?
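That last comparison can be made very concrete by scoring a simple baseline and the pitched model on the same holdout set. A sketch, again with hypothetical files and column names:

```python
# Compare the pitched model against a simpler baseline on the same holdout set.
# File and column names are hypothetical, as before.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

loans = pd.read_csv("loan_history.csv")
X, y = loans.drop(columns=["went_bad"]), loans["went_bad"]
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

candidates = {
    "linear model (baseline)": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "gradient boosting": GradientBoostingClassifier(),
}
for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_holdout, clf.predict_proba(X_holdout)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
# If the complex model barely beats the baseline, the extra complexity needs justifying.
```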

There are a lot of cool-looking AIs out there. The hard part is turning a method that works on a handful of examples and a specific, limited data set into one that works in the real world without constant, costly adjustment and maintenance. Data science is hard. Creating AI that produces consistent business results requires investment in highly skilled people, excellent tools, and process discipline (including comprehensive monitoring). Just remember that what looks good in a demo may not look the same when applied to real problems: ask the questions that prove the AI actually works.

How long did it take to build your AI, how much field testing has it had, and who has reviewed it and weighed in?

Of course you'll want to know how many Ph.D.s the company employs and how much it has invested in developing its AI. These don't explain everything, but they're good indicators. The goal is to make sure the company has spent enough time and energy working the problem in the lab and then testing and refining it in the field. Ideally, you'll hear about years of development and deployment across different kinds of customers, so you can be confident the AI is adaptable and proven.

Regulation of AI is only going to increase. That will require models to go through careful validation and governance processes, like the ones we see in financial services today. AI models need to be thoroughly validated to ensure they are used responsibly. In medical research, the Food and Drug Administration has approved some AI-enabled processes, and in finance, regulators have signed off on AI models in their examinations. When the right validation processes are followed, AI is already being deployed even in regulated industries and is poised for broad adoption. What validation practices does the company have in place?

How easy is it to understand your AI's decisions or recommendations?

AI's early results were so encouraging that the industry moved fast without building transparent tools to review its decisions and processes. If your AI is recommending which post to click or which lip gloss color to pick, that doesn't matter much. For federally regulated decisions (such as lending or driving), the government requires detailed documentation of every step of the model-building process and a defense of every AI-based decision the company makes. In many cases, companies will be held responsible for biased decisions or bad outcomes regardless of whether a regulator reviewed how the model was built. Ask an AI company to show you how it explains AI-based decisions to customers and regulators.
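To make "explaining an AI-based decision" concrete, one simple and deliberately generic approach is to turn a model's score into per-applicant reason codes. The sketch below does that for a plain linear model with invented features; it is not Zest's ZAML method, just an illustration of the idea:

```python
# Sketch: per-applicant "reason codes" from a simple linear credit model.
# Not Zest's actual method; features and data are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = pd.DataFrame(
    {
        "monthly_income": rng.normal(4000, 1200, 500),
        "months_at_job": rng.integers(0, 120, 500).astype(float),
        "existing_debt": rng.normal(6000, 2500, 500),
    }
)
# Synthetic outcome: higher debt and lower income raise the chance of going bad.
signal = (X["existing_debt"] - 6000) / 2500 - (X["monthly_income"] - 4000) / 1200
y = (signal + rng.normal(0, 1, 500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def reason_codes(applicant: pd.Series, top_n: int = 2) -> list:
    """Rank the features pushing this applicant's score toward a decline."""
    z = (applicant.values - scaler.mean_) / scaler.scale_
    contributions = pd.Series(model.coef_[0] * z, index=X.columns)
    return contributions.sort_values(ascending=False).head(top_n).index.tolist()

print(reason_codes(X.iloc[0]))   # e.g. ['existing_debt', 'monthly_income']
```

A real lender would apply the same idea to its production model and map the top contributors to the adverse action reasons it reports to applicants and regulators.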

What biases does your AI have, and how do you eliminate them?

A good AI company should have a clear understanding of how to make its AI fair, because bias is inherent in every data set. We know the data sets we use to train models carry gender and racial bias, and many data sets leave out important, historically underserved demographic segments. Building more inclusive AI pushes us to seek out more data. The people on the team matter, too. A good data science team knows its blind spots and values diversity; roughly 40 percent of Zest's technical team are women and members of other groups underrepresented in computer science. Diversity leads to better results.

Dealing with the unintended biases by which benign intentions still produce unfair results comes back to transparency. Because AI can find hidden associations between seemingly unrelated pieces of information, inputs that look unbiased can still yield biased results. An ethical AI company will have a comprehensive, actionable strategy for measuring and mitigating bias so its AI can be used fairly and inclusively. Ask to see it.
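One example of what "measure bias" can mean in practice is the adverse impact ratio used in fair-lending analysis: the approval rate for a protected group divided by the rate for a reference group, with the customary four-fifths threshold as a warning flag. A sketch on invented data (a real fairness program measures far more than this single number):

```python
# Sketch: adverse impact ratio (AIR) on model-driven approval decisions.
# Invented data; a real fairness program measures much more than this one number.
import pandas as pd

decisions = pd.DataFrame(
    {
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   1,   0,   0,   1,   0],
    }
)

rates = decisions.groupby("group")["approved"].mean()
air = rates["B"] / rates["A"]        # protected group B vs. reference group A
print(f"approval rate A: {rates['A']:.2f}, B: {rates['B']:.2f}, AIR: {air:.2f}")

# An AIR well below 0.8 (the customary "four-fifths" threshold) is a flag to dig
# into the model's inputs and decisions, not a final verdict.
```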

Everyone wants to run a successful business and make money. Using AI to get there doesn't have to be hard: you just need to ask the right questions to make sure your AI partner is ethical, and to prove it has the discipline to actually put AI into production. A real AI company will be able to tell you all about that journey.