This article is reposted from "Less Algorithm, More Application: Lyft's Craig Martell."
The full text was machine-translated; the quality is imperfect but does not affect the overall understanding.
Craig Martell is the head of machine learning at Lyft and an adjunct professor of machine learning in the Align program at Northeastern University in Seattle. Before joining Lyft, he was the director of machine intelligence at Dropbox, led numerous AI teams and programs at LinkedIn, and was a tenured professor at the Naval Postgraduate School in Monterey, California. Martell holds a PhD in computer science from the University of Pennsylvania and is a coauthor of "Great Principles of Computing" (MIT Press, 2015).
Your reviews are critical to the success of Me, Myself, and AI. For a limited time, we are offering listeners who review the show a free download of MIT SMR's best articles on artificial intelligence. Send a screenshot of your review to firstname.lastname@example.org to receive the download.
We open the second season of Me, Myself, and AI by discussing a specific trend Craig sees in the field of AI and machine learning: As organizations increasingly rely on technology-driven solutions to solve business problems, the algorithm itself matters less than how it fits into the overall engineering pipeline and product development road map. Craig shares his thoughts on what this shift means for academic education and for cross-functional collaboration within organizations, and he and the hosts reflect on how to eliminate unconscious bias.
Read more about our show and follow the series at https://sloanreview.mit.edu/aipodcast.
To learn more about the film "Coded Bias," which Craig mentions in the interview, visit www.codedbias.com. To learn more about the work of MIT Media Lab researcher Joy Buolamwini, visit her page on the Media Lab website.
Sam Ransbotham: Are algorithms becoming less important? As algorithms become more commoditized, the focus may shift away from the algorithms themselves and toward their applications. In the first episode of the second season of Me, Myself, and AI, we talk with Craig Martell, head of machine learning at Lyft, about how Lyft uses artificial intelligence to improve its business.
Welcome to Me, Myself, and AI, a podcast about artificial intelligence in business. In each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of information systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.
Shervin Khodabandeh: And I'm Shervin Khodabandeh, senior partner at BCG, and I colead BCG's AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies to learn what it takes to build, deploy, and scale AI capabilities and truly transform the way organizations operate.
Sam Ransbotham: Today we're talking with Craig Martell. Craig is the head of machine learning at Lyft. Thanks for joining us today, Craig.
Craig Martell: Thanks, Sam. I'm very happy to be here. These are all exciting topics.
Sam Ransbotham: So Craig, head of machine learning at Lyft: What exactly does that mean, and how did you get there?
Craig Martell: First of all, I'm pretty sure I won the lottery, and here's why: I started out academically in political theory, and along the way I was a young man collecting master's degrees while trying to figure out what I wanted to do. So I studied philosophy, politics, political theory, logic... and finally got a PhD in computer science from the University of Pennsylvania. I thought I would do a kind of testable philosophy, and the closest thing to that is AI, so I went into it purely out of love. I found the whole process, the goals, and the techniques absolutely fascinating.
Sam Ransbotham: So all the parts of your master plan came together.
Craig Martell: Not at all. I just fell into it.
Sam Ransbotham:So how did you finally come to Lyft?
Craig Martell: I was at LinkedIn for about six years. Then my wife got an extraordinary job at Amazon, and I wanted to stay married, so I followed her to Seattle. I worked at Dropbox for a year, and then Lyft reached out. I jumped at the opportunity because the space is so fascinating. I'm interested in cars generally, which means I'm interested in transportation generally, and the idea of changing how we get around is just a fascinating space. Also, in my previous life I was a tenured computer science professor; that's still a love of mine, so I'm an adjunct professor at Northeastern University just to make sure I keep up my teaching skills.
Shervin Khodabandeh: Craig, you have a deep humanities background in philosophy and political science, and you mentioned logic. How has all of this helped you throughout your journey?
Craig Martell: So this is really interesting. When I think about what AI is, I find the algorithms mathematically interesting, but I find the use of the algorithms even more interesting. Because from a technical point of view, we are discovering correlations in very high-dimensional nonlinear spaces. In a sense, it's statistics at scale, right? We are discovering these correlations between A and B. The algorithms are really interesting, and I still teach them; they're great fun. But what's more interesting to me is: What do these correlations mean for people? I think every AI model that ships is a cognitive science experiment. We are trying to model the way humans behave. Now, for autonomous driving, we are modeling the behavior of cars in one sense, but in fact, given all the other cars driven by humans, we are really modeling human behavior.
Sam Ransbotham:Can you talk about how Lyft organizes AI and ML teams?
Craig Martell: At Lyft, we have model builders throughout the company; we have a very large science organization. We also have what we call ML SWEs: machine learning software engineers. I run a team called Lyft ML, which consists of two main teams. One is called applied machine learning, where we use domain expertise and machine learning to solve some very hard problems. The other is the ML platform, where I have a strong interest in achieving operational excellence for ML, making sure it effectively meets business targets.
Shervin Khodabandeh: Your view is... because, Craig, you're still teaching, right?
Craig Martell: Yes, I'm an adjunct at Northeastern University in Seattle.
Shervin Khodabandeh: So what do you think your students should be asking? Or, to put it another way, when they enter the workforce and actually use AI in the real world, what are they most surprised by?
Craig Martell: The algorithm itself is becoming less important. I don't want to use the word "commoditized," but to some extent algorithms have been commoditized, right? You can pick one of five or seven of them; for a given problem, you can try the whole family of models. But what's actually happening, and what I find exciting, is how these models fit into the larger engineering process so that you can measure and guarantee their effectiveness against your business goals. That means data cleanliness, making sure the data arrives on time... classic engineering questions, such as: Are you returning features with acceptable latency? So the model itself has shrunk from 85% of the problem to 15% of the problem, and 85% of the problem is now the engineering and operational excellence around it. I think we're at a turning point.
Shervin Khodabandeh: So you believe that with the advent of AutoML and these packaged tools, over time less and less attention goes to the algorithms and more and more to the data and how it's used. ... Do you think the curricula, the training, and the overall direction for data scientists will be very different in 10 years? Should we be teaching them different things, different skills? Because it used to be that a lot of energy was focused on creating algorithms and trying different things, and I think you're pointing out that this has reached a steady state. What does that mean for the workforce of the future?
Craig Martell: Yes, I think that's a great question. I'm going to say something controversial here, and I hope I don't offend anyone.
Shervin Khodabandeh:That's why I asked, so I hope you will.
Craig Martell: So, just five or ten years ago, to deliver the kind of value a technology company wanted to deliver, you needed a lot of PhDs, right? The technical ability to build these algorithms was very important. I think the turning point was probably TensorFlow in 2013; at that point things weren't commoditized yet, and you still needed to think carefully about the algorithm, but actually shipping an algorithm became a lot easier. Now there are many frameworks that can do this.
I wonder, and this is a genuine open question for me: How much specialized machine learning/AI data science training will we need in the future? I think CS undergraduates, or engineering undergraduates generally, will graduate having taken two or three AI classes. And with those two or three AI classes, plus the right infrastructure in the company, the right way to collect features, and the right way to specify labeled data... if we have the ML platform, then two or three strong hires can come in and deliver 70% of the models the company might need. Now, for that other 30%, I think you'll still need experts for some time. I do. I just don't think you'll need, the way you used to, for almost every expert to have a PhD.
Shervin Khodabandeh: Yes, Sam, this actually resonates with me. In an interesting way, it confirms what we've been saying: To really have impact at scale, technical knowledge can only take you so far; ultimately, you have to change the way the technology is used, the way people work, and the different ways people interact with AI. I think that's where the humanities, philosophy, politics, and the ways humans work come in, and they matter much more than the algorithmic work.
Sam Ransbotham: Well, that's a good redirect too, because if we're not careful, this conversation turns into a DevOps course. As Shervin points out, that's certainly an integral part, but there are also process changes and more business-oriented plans.
What else do you wish you could teach people? Or, what else do you think executives should know about? ... Not everyone has to know everything; that would be a little overwhelming. It might be ideal if everyone knew everything, but what exactly do managers at different levels need to know?
Craig Martell: I think the most senior decision makers need to understand how dangerous model errors can be, and they need to understand the whole process: You really need labeled data. There is no magic here. They must understand that there is no magic. So they must understand that labeled data is expensive, and that labeling correctly and sampling the right distribution of the world matters a great deal. I believe they must also have a general understanding of the life cycle, which is different from a two-week sprint where we close a set of Jira tickets. Data collection is very important, and it may take a quarter or half a year. And the first model you ship may not be great, because it comes from a small labeled data set, and only then are you collecting data at scale. So there is a part of the life cycle they need to understand: in many cases (probably not for driving a car, but certainly for recommendations), the first few things you ship will only gradually get better. I think that's extremely important for senior people.
I think a couple of levels down, they need to understand the precision/recall trade-off: the types of errors the model may produce. Your model may produce false negatives or false positives, and I think owning that choice is very important for a product person. So, if we're doing document search, I think you care more about false positives; you care more about precision. You want the top results to make sense. For most search problems, you don't need to retrieve everything relevant; you only need to retrieve enough relevant results. So if something relevant gets called irrelevant, you can live with that, right? But for other problems, you need to get everything.
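The trade-off Craig describes can be made concrete with a small sketch. The counts below are made up purely for illustration (they are not Lyft data): precision asks how many of the returned results are relevant, while recall asks how many of the relevant items were found.

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from error counts.

    tp: true positives (relevant items correctly returned)
    fp: false positives (irrelevant items returned)
    fn: false negatives (relevant items missed)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Document search: tolerating missed results (false negatives)
# keeps the returned list clean -> high precision, lower recall.
p, r = precision_recall(tp=90, fp=10, fn=60)
print(f"search:    precision={p:.2f} recall={r:.2f}")  # precision=0.90 recall=0.60

# Pedestrian detection: a missed pedestrian (false negative) is the
# costly error, so tune for high recall even at the price of false alarms.
p, r = precision_recall(tp=99, fp=30, fn=1)
print(f"detection: precision={p:.2f} recall={r:.2f}")  # precision=0.77 recall=0.99
```

The same model can often be moved along this trade-off just by adjusting its decision threshold, which is exactly the product choice Craig says a product person needs to own.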
Sam Ransbotham: Sure, for document search. But Lyft too. ... Put that in the context of the company: Where do you face the precision/recall trade-off, false positives versus false negatives?
Craig Martell: Fortunately, at Lyft we have a good human escape hatch, which I think is very important. Ideally, all of these recommendations should have a human escape hatch. So if I recommend a destination for you and that destination is wrong, it's...
Craig Martell: No harm, no foul; you just type in the destination yourself. So for Lyft as a product, I think we're lucky, because most of our recommendations are there to reduce the friction of getting you a ride, and if we don't get them exactly right, that's fine. There's no real danger. Driverless cars are so hard because you need both sides at once: You want to know that it's a pedestrian, and you want to make sure you don't miss any pedestrians.
Sam Ransbotham: So the idea of giving people an escape hatch goes beyond just saying, "Well, here are some destinations; there are many more. Which do you like?"
Shervin Khodabandeh: Craig, you talked about how AI in real life amounts to a large set of cognitive science experiments, because ultimately it involves...
Craig Martell:At least for me.
Shervin Khodabandeh: Yes. It raises the idea of unconscious bias. As humans, we have become more aware of our subconscious biases in all things, right? Because they have been ingrained over generations, along with stereotypes.
Craig Martell: It's just our past experience, right? A biased world creates biased experience, even when you have the best intentions.
Shervin Khodabandeh: Yes, right? So I think my question is this: Clearly, artificial intelligence picks up unintended biases. What do we need to be thinking about now, so that 10 or 20 years from now this bias isn't so entrenched in how AI works that it's hard to correct?
Craig Martell: It already is. So the question is, how do we correct it? The first thing I'll say is that I took part in a panel at Northeastern on the film "Coded Bias." If you haven't seen "Coded Bias," you should definitely see it. It's about a Black woman from the MIT Media Lab who tried to do a project that wouldn't work, because facial recognition simply doesn't work for Black women. It's an absolutely fascinating social study. The researchers who collected the data sets used to train the facial recognition algorithms were, at the time, a group of white men. This is a known issue, right? There is bias in how the data set was collected. Look, all psychological research has similar biases. Psychological research doesn't apply to me; I'm 56 this year. Psychological research applies to college students, because they are the readily available subjects.
So these subjects were readily available because of the biases of the world, and that's how the data set got generated. So even without malice, the world is skewed, the world is biased, and the data is biased; it doesn't apply to many people. Not many women were represented in the training data, and the darker the skin, the worse it gets. There are technical reasons for this (darker skin has lower contrast, and so on), but that's not the real issue. The question is: Should we be collecting data this way? What is the goal of the data set? Who are our customers? Whom do we want to serve? Let's sample the data in a way that serves our customers.
We talked about undergraduates earlier. I think that's really important. One way out of this situation is diversity in the workplace. I firmly believe that. Then you ask everyone, including all these different groups, to test the system and see whether it works for them. When we built image search at Dropbox, we asked employee research groups: "Please search for things that have been problematic for you in the past and see if we get them right." If we found errors, we went back and collected data to mitigate those issues. So look: Your system will be biased because of the data collected; that's a fact. You want to do your best to collect it correctly. You may not be able to, because you have your own subconscious biases, as you pointed out. So you have to ask everyone who will be your customer to try it, make sure it gets things right, and if it doesn't, go back and collect the data needed to fix it. So I think the short answer is diversity in the workplace.
Sam Ransbotham: Craig, thank you for taking valuable time to talk with us today. Lots of interesting things...
Craig Martell: Yes, my pleasure. These conversations are really interesting. I'm a real nerd about this stuff, so I enjoyed it a lot.
Sam Ransbotham: Your enthusiasm shows.
Shervin Khodabandeh: Very insightful stuff. Thank you.
Craig Martell: Thank you both.
Sam Ransbotham: Well, Shervin, Craig said he won the lottery in his career, but I think we won the lottery by having him as our guest for the first episode of season two.
Shervin Khodabandeh: He made many good points. Clearly, as algorithms become commoditized over time, connecting them to strategy, tying them back to key business metrics, and driving change and adoption become more and more important. ... I really liked his point about what it takes to remove bias from a system, and about how much bias already exists in these systems.
Sam Ransbotham: The commoditization point is particularly important. I think it resonates with us because we approach this from a business perspective. His point was that many of these things will increasingly become business problems, and when something is a business problem, it is not purely a technical problem. I don't want to discount the technical side; he certainly wrestles with plenty of technical challenges. But he did emphasize the "now this is a business issue" aspect.
Shervin Khodabandeh: Yes, within five minutes he basically made a convincing argument for our last two reports (2019 and 2020).
Shervin Khodabandeh: It's about strategy, process change, process redesign and reengineering, and about human-AI interaction and adoption.
Sam Ransbotham: Another business issue is management choice. He came back to that as well. He was saying that... some of these things are not clear-cut decisions. You can choose which way to err. That's a management question, not a technical one.
Shervin Khodabandeh: It also requires managers to know what they're talking about: They need to really understand what AI is, what it might be capable of, what its limitations are, and what the art of the possible is. I also liked the point that the closer you get to the developers and builders of AI, the more you must really understand the math and the code, because otherwise you can't guide them.
Sam Ransbotham: Although, don't you worry that we've just run into a world where everyone must know everything? I find that a hard sell. Managers must understand the business and how to make money, and they must also understand the code... I mean, it would obviously be ideal for everyone to know everything...
Shervin Khodabandeh: Well, I think the question is, how much of everything do you have to know? A good business executive understands things at the right level and asks the right questions. I think you're right. But isn't this what Einstein said? You don't really understand something unless you can explain it to a 5-year-old. You can describe gravity to a 5-year-old, a 20-year-old, and a graduate student in different ways, and they will all understand gravity. The point is that you understand it at some level, rather than saying, "I don't know anything about gravity."
Sam Ransbotham: So, basically, teaching and academic work are very important. Is that what you just said, Shervin?
Shervin Khodabandeh: I think managers and senior executives need to understand that AI is not a slam dunk in itself, and you asked the right question: What is the right level of understanding? What is the right level of synthesis and articulation that lets you make the right decisions without knowing everything? But isn't that how a successful business executive handles every business problem? I think that's what we're saying: To use AI, you need to know enough to be able to probe. It can't be a black box, the way many technology implementations in the past were black boxes.
Sam Ransbotham: That brings us back to the overall question of "learning more" versus "drawing the line," and helps us understand that balance. After discussing gravity, everyone knows more about gravity than before, so it's a matter of moving from the current state to the next.
Sam Ransbotham: Craig had some good thoughts about diversity in the workplace. If the team collecting the data doesn't understand the biases inherent in its data set, the algorithm is bound to produce bias. He referred to the film "Coded Bias" and to Joy Buolamwini, a researcher at the MIT Media Lab. Joy is the founder of the Algorithmic Justice League. We'll provide links in the show notes where you can read more about Joy and her research.
Thanks for joining us today. We look forward to the next episode, when we'll talk with Will Grannis about the unique challenge of building a CTO function at Google Cloud. Until next time.