This article is part of a large-scale study of how product managers integrate machine learning into their products (see the other articles below), conducted by Brian Polidori and myself during our MBA at the University of California, Berkeley, with the help of our teacher, Vince Law.
The study aims to understand how product managers design, plan, and build products that support ML. To achieve this understanding, we interviewed 15 product development experts from various technology companies. Of the 15 companies represented, 14 have market values exceeding $10 billion, 11 are publicly listed, 6 are B2C, and 9 are B2B.
Product Manager guides the ML series:
- How product managers determine the usage scenarios for machine learning
- The four steps product managers use for machine learning
- How AI Product Managers Create Data Strategies for Machine Learning
- Principles of AI Product Managers Managing Machine Learning Models
ML-enabled products depend on an ongoing loop of collecting, cleansing, and analyzing data to feed their ML models. This repeating loop is the driving force behind the ML algorithm and is what enables ML products to provide useful insights for users.
Every step in the loop presents unique challenges. Below, we explore each step of the data strategy through frameworks and examples, highlighting some of these challenges.
Organic data creation
In the business world, corporate growth is typically broken down into organic and inorganic growth. Organic growth comes from the company's own business activities, while inorganic growth comes from mergers and acquisitions. The same concept can be applied to the data creation process.
Organic data creation refers to the creation of data (i.e., data used to inform the ML model) as a by-product of the product itself. Inorganic data creation refers to obtaining data (purchased or free) from a third party. All of the largest technology companies operate under an organic data creation strategy.
Facebook knows whom to suggest as a possible friend because you have confirmed friendships with other, similar people. Amazon knows which other products you might purchase because of your past purchase and browsing history. And Netflix knows which show to recommend next because of the shows you have watched before.
One notable exception is when a company is just getting started. It may need to acquire data inorganically to build an initial ML model, then use that model over time to create the network effects necessary to begin the organic data creation process.
Key points (four benefits of organic data creation):
- Cost-effective – ML models require a lot of data to train, and that data needs to be constantly updated and refreshed (see the data expiration section below).
- Representative data – Organic data creation yields data representative of your specific users, because it was actually created by your users.
- Competitive advantage – As a natural by-product of the product, organic data is proprietary and can serve as a competitive advantage that competitors cannot replicate.
- Network effects – Organic data creation can strengthen network effects: as users increase, data increases, leading to improvements in the model.
By its very nature, ML "learns" how best to perform a task by receiving feedback, and lots of it. This feedback takes two main forms:
- User generated feedback
- Human-in-the-loop review
User generated feedback
User-generated feedback is, in effect, data creation. For many use cases, user-generated feedback is relatively easy to capture. One of our interviewees gave us the following example, which generally applies to all search use cases.
When a user types a query into the search bar of a company's website and presses enter, 20 results are displayed. The user quickly scans the summary text and (probably) clicks on one of them. It is the user who made the selection from the list – this is the key point. Although this seems obvious, it is worth spelling out why it matters.
The only person in the world who knows which of the 20 results presented is most relevant to the user is the actual user; anyone else would be guessing. By making a selection, the user not only meets their own need but also generates the most accurate possible data for training the ML model.
Therefore, in-product user feedback is the most valuable type of feedback your ML model can get. In addition, it does not require you to hire people (unlike the human-in-the-loop approach discussed below) and it scales with the size of your product.
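As a minimal sketch of how this implicit feedback can be turned into training data, consider the snippet below. The event schema and the "skip-above" labeling rule (skipped results ranked above the click become negatives) are illustrative assumptions, not details from the interviews:

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    query: str
    shown_results: list  # result IDs, in ranked order
    clicked_id: str

def to_training_examples(event):
    """Turn one click event into labeled (query, result_id, label) rows.
    The clicked result is a positive; results ranked above it that the
    user skipped are treated as negatives."""
    rows = []
    for rid in event.shown_results:
        rows.append((event.query, rid, 1 if rid == event.clicked_id else 0))
        if rid == event.clicked_id:
            break  # results below the click carry no clear signal
    return rows

event = ClickEvent("wireless headphones", ["r1", "r2", "r3", "r4"], "r3")
print(to_training_examples(event))
# [('wireless headphones', 'r1', 0), ('wireless headphones', 'r2', 0), ('wireless headphones', 'r3', 1)]
```

Because every row comes from the user's own choice, no separate labeling step is needed; the log itself is the training set.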
Sometimes user feedback is harder to capture, and additional elements must be added to the use case. The key to adding these elements is to build the feedback channel in a way that improves the user experience and thereby encourages adoption.
For example, when LinkedIn began to expand its InMail messaging service, it decided to introduce two reply options. When a recruiter reaches out to you, LinkedIn offers two canned replies: "Yes" or "No, thank you." This simple solution not only improves the user experience by letting users respond faster, but also provides LinkedIn with highly structured user feedback it can use to train its ML models. Over time, LinkedIn has introduced more ML-powered products, such as Smart Reply, that benefit from the same in-product feedback mechanism.
- Create structured feedback points in your product where users are personally motivated to provide feedback (Facebook photo tags, LinkedIn recruiter responses).
- A single user's own action is essentially the most accurate data the ML model can receive about that particular user.
- Augmenting ML models with user-generated feedback creates a reinforcing network effect between user activity and ML accuracy.
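The value of constraining replies to a fixed set of options is that every response arrives as a clean, model-ready label with no free text to parse. A tiny illustrative sketch (the names `ReplyChoice` and `record_reply` are hypothetical, not LinkedIn's actual API):

```python
from enum import Enum

class ReplyChoice(Enum):
    YES = "yes"
    NO_THANKS = "no_thanks"

def record_reply(message_id: str, choice: ReplyChoice, store: list) -> None:
    """Log a structured reply as a binary label: because the user could
    only pick from fixed options, every response is directly usable
    as training data."""
    store.append({"message_id": message_id, "label": int(choice is ReplyChoice.YES)})

log: list = []
record_reply("inmail_123", ReplyChoice.YES, log)
record_reply("inmail_456", ReplyChoice.NO_THANKS, log)
print(log)
# [{'message_id': 'inmail_123', 'label': 1}, {'message_id': 'inmail_456', 'label': 0}]
```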
People in the loop
Human-in-the-loop feedback means that you pay a person to review a specific use case or dataset and provide an educated opinion (e.g., tags, yes/no decisions). Our respondents indicated that most companies either hire third-party firms or build internal teams, but you can also run such a human-in-the-loop process through services like Mechanical Turk.
Given how poorly human-in-the-loop feedback scales, we were surprised to find that more than half of respondents said their company currently uses, or plans to use, human-in-the-loop review to provide structured feedback for their ML models.
To turn this concept into reality, let's look at an example.
Quora is a Q&A site where a community of users asks, answers, and organizes questions. To sort through all the noise on the platform, Quora allows users to upvote answers, which helps quality responses rise to the top.
Quora noticed that some content receives many upvotes but, on review, falls below its quality standard and amounts to "clickbait." To augment the upvote signal, Quora therefore decided to adopt human-in-the-loop feedback as well. Quora now sends a small set of questions and answers to people trained on Quora's standards (described in more detail below), who rate feed quality on a numeric scale for input into the ML model.
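One common way to put this into practice is to route a small, reproducible random sample of content to trained reviewers and join their scores back as labels for the model. This is an illustrative sketch of that pattern under assumed names and a 1% sampling rate, not Quora's actual pipeline:

```python
import random

def sample_for_review(item_ids, rate=0.01, seed=7):
    """Route a small, reproducible random fraction of content to
    trained human reviewers; everything else is scored by the model."""
    rng = random.Random(seed)  # fixed seed makes the sample auditable
    return [i for i in item_ids if rng.random() < rate]

def to_labeled_rows(reviews):
    """reviews: (item_id, score) pairs, where score is the reviewer's
    quality rating against written guidelines (e.g., 1-5)."""
    return [{"item_id": i, "label": s} for i, s in reviews]

items = [f"post_{n}" for n in range(1000)]
batch = sample_for_review(items)
rows = to_labeled_rows([(i, 4) for i in batch])  # dummy reviewer scores
print(len(batch), len(rows))  # roughly 1% of 1000 items
```

The sampling rate is the main cost lever: doubling it doubles reviewer spend, so teams typically keep it as low as the model's label needs allow.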
In essence, human-in-the-loop feedback is a manual process and is therefore very expensive. Because of the high cost, only large companies seem to be using it extensively. In fact, some interviewees pointed out that the costs associated with human review help a company create a "moat" around its business. For example, Facebook is thought to have a team of more than 3,000 people dedicated to labeling and content review.
When we analyzed the use cases in which our respondents used human-in-the-loop review, we found several primary reasons:
- The use case has no absolute (i.e., universally true) quantitative metric for measuring performance. Human reviewers are therefore the gold standard of quality, making subjective decisions based on nuanced rules.
The Quora example above illustrates this: for a particular post, the engagement metric Quora uses to measure the ML result may be high, while the human-judged quality (relative to the written rules) is low.
- There is significant downside risk if the ML model gets something wrong, and a human can determine correctness on a case-by-case basis.
For example, if a social network performs content review poorly, it faces significant public relations risk. A person given the rules can reasonably determine whether a piece of content complies with or violates them.
One challenge respondents raised about human-in-the-loop review is that it is difficult to write the guidelines reviewers use for their manual checks. The guidelines must be specific enough to limit the number of "gray" areas where reviewers must make subjective calls, yet simple enough for reviewers to do their job efficiently. Many interviewees mentioned that their company's guidelines are hotly debated and constantly changing.
- Sometimes user-generated feedback alone is not enough to meet the product's goals and must be augmented with human-in-the-loop feedback.
- Human evaluators are expensive, and setting up an effective manual-review process is costlier still. However, once these processes are embedded in the product workflow, they can become a competitive advantage.
- Creating labeling rules for human reviewers is difficult. Careful thought should go into defining those rules so as to extend the shelf life of the data (see the data expiration section for more details).
Data expiration
Of the 15 people we interviewed, 11 mentioned the importance of data timeliness. Some cited specific regulations or contractual requirements that force them to purge user-specific data after 60-90 days. Others said that old data is less informative (see Reddit's ranking algorithm) or adds less predictive value. This timeliness concern appears to apply not only to user-generated data but also to some human-in-the-loop assessments.
For example, Facebook tries to keep the information about small businesses on its platform (e.g., websites, hours, phone numbers) up to date. In addition to encouraging small businesses to claim ownership of their Facebook pages, Facebook uses reviewers to audit a small portion of business pages and check whether the data is current.
However, once an auditor confirms that a small business's data is up to date, the probability that it remains up to date begins to decline. Anecdotally, we heard that roughly six months after a review, the data is as likely to be stale as current, though the time frame varies greatly by use case.
Companies are in a constant race to acquire new data before the old data becomes stale and irrelevant. This is another reason companies should build products around organic data creation.
- Once data is created, its usefulness to the ML model begins to decline; for some use cases, this decay occurs within days to weeks.
- Because data is useful for only a short time, companies should focus on organic data creation to continuously feed new data into the system.
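A simple way to model this decay when training is to down-weight examples by age. The half-life below is an assumption borrowed from the roughly six-month staleness figure mentioned above; real values vary widely by use case:

```python
def freshness_weight(age_days, half_life_days=180.0):
    """Exponentially down-weight a training example as it ages.
    half_life_days is a tunable assumption; with 180 days, data loses
    half its weight every six months."""
    return 0.5 ** (age_days / half_life_days)

print(freshness_weight(0))    # 1.0 (brand-new data counts fully)
print(freshness_weight(180))  # 0.5 (one half-life old)
print(freshness_weight(360))  # 0.25
```

Multiplying each example's loss by this weight lets a model keep learning from history while favoring what the organic data loop produced most recently.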
This article is republished from Medium (original address).