
Improving Diversity Through Recommendation Systems In Machine Learning and AI



Every day you are being influenced by machine learning and AI recommendation algorithms. What you consume on social media through Facebook, Twitter, and Instagram, the personalization you experience when you search on Google, listen on Spotify, or watch YouTube, and what you discover using Airbnb and UberEATS: all of these products are powered by machine learning and AI recommender systems.

Recommender systems influence our everyday lives

80% of all content consumed on Netflix and $98 billion of Amazon's annual revenue are driven by recommendation systems, and these companies continue to invest millions in building better versions of these algorithms.

There are two main types of recommender systems, sketched in code after the list below:

  1. Collaborative filtering: finding similar users to you and recommending you something based on what that similar user liked.
  2. Content-based filtering: taking your past history and behavior to make recommendations.
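
To make the distinction concrete, here is a minimal, hypothetical sketch of both approaches. The small ratings matrix and genre features are made up for illustration and have nothing to do with the Goodreads data used later:

# Toy illustration of the two approaches (made-up data, for intuition only).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = books; values = hypothetical ratings (0 = not rated).
ratings = np.array([
    [5, 4, 0, 0],   # user A
    [4, 5, 5, 0],   # user B, similar tastes to A
    [0, 1, 5, 4],   # user C
])

# Collaborative filtering: find the user most similar to user A,
# then surface books that user rated highly but A has not rated yet.
user_sim = cosine_similarity(ratings)
most_similar_user = 1 + np.argmax(user_sim[0, 1:])
candidates = np.where((ratings[0] == 0) & (ratings[most_similar_user] > 3))[0]
print("Collaborative picks for user A:", candidates)   # -> book index 2

# Content-based filtering: describe each book by its own features (genre tags)
# and recommend the books closest to one the user already liked.
book_features = np.array([
    [1, 0],   # book 0: fantasy
    [1, 0],   # book 1: fantasy
    [1, 1],   # book 2: fantasy + business crossover
    [0, 1],   # book 3: business
])
book_sim = cosine_similarity(book_features)
ranked = np.argsort(book_sim[0])[::-1]
print("Books most like book 0:", ranked[ranked != 0])   # -> [1, 2, 3]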


There is also a hybrid-based recommender system, which mixes collaborative and content-based filtering. These machine learning and AI algorithms are what power the consumer products we use every day.

How recommender systems work. Image by Sanket.

The problem is these algorithms are fundamentally optimizing for the same thing: similarities

Recommendation algorithms make optimizations based on the key assumption that only similarities are good. If you like fantasy books, you will be recommended more fantasy books; if you like progressive politics, you will be recommended more progressive politics. Following these algorithms limits our world view, and we fail to see new, interesting, and unique perspectives.

Recommender systems lead us down a one-track mind

Like a horse running with blinders on, we fall into an echo chamber and a dangerous AI feedback loop, where the algorithm's outputs are reused to train new versions of the model. This narrows our thinking and reinforces biases. Recent events like the Facebook–Cambridge Analytica data breach demonstrate technology's influence over human behavior and its impact on individuals and society.

Psychology and sociology agree: we fear what we do not know. When people become myopic, that is when the “us vs them” mentality is created and where prejudice is rooted. The civil unrest in the United States and around the world can be linked back to these concepts. Fortunately, research also demonstrates that diversity of perspectives creates understanding and connectedness.

This is also a business problem

The typical consumer has 3 to 5 preferences:

  • 3 to 5 favorite book or movie genres
  • 3 to 5 most listened-to musical categories
  • 3 to 5 different fashion styles
  • 3 to 5 preferred cuisines
Consumers are diverse

Why is this diverse consumer behavior not better reflected in our technology's behavior? In fact, if a business is able to convert a customer into trying a new category, such as turning a running customer into a new road biking customer, that customer is likely to spend 5 to 10x more through onboarding and purchases in the new activity. Every diverse category a business fails to recommend is lost sales and engagement.

The opportunity: how can we build a better recommender system that enables consumer diversity and increases customer lifetime value?

We can approach this problem through the customer lens. Let's take Elon Musk as a model world citizen: he has publicly stated that he loved fantasy books growing up and that Lord of the Rings had a large impact on him.

But if Elon continued to follow the recommendations of today's most visible machine learning algorithm on Amazon, he would continue down the path of fantasy, fantasy, and more fantasy. Elon has also stated that business books shaped his world view, recommending Zero to One in particular. Technology should be enabling, not limiting, more of these connections for everyone.

The status quo leads to more of the same, so how can we better match the customer’s interests? Images from Amazon.

How can we build a recommendation engine that would take an input book like Lord of the Rings and recommend an output book like Zero to One?

If we can solve this for an individual case like Elon’s, then we can start to see how a better recommender system can diverge from the similarity path to create more meaningful diversity.

Building a diversity recommender system

The data science process:
1. Define a goal
2. Gather, explore, and clean data
3. Transform data
4. Build machine learning recommendation engine
5. Build diversity recommendation engine proof of concept
6. Design mockups
7. Business value hypothesis and target launch

The data science process

1. Define goal

Books are the ideal industry to explore because there is a clear distinction between book categories and a clear link to potential revenue, unlike music, where the dollar value of listening to a new genre is less clear. The goal is to build a recommender system where we input a book and it outputs:

  1. Recommendations based on similarities, the status quo algorithm
  2. Recommendations based on diversity, the evolution of the status quo

The long term goal is to build a recommender system that can be applied across various industries, enabling customers to open doors to diverse discoveries and increasing customer lifetime value for the company.

2. Gather, explore, and clean data

Dealing with data

Goodreads provides a good dataset. Within it we need to determine what is useful, what can be removed, and which datasets to merge together.

# Load book data from csv
import pandas as pd

books = pd.read_csv('../data/books.csv')
books
# Explore features
books.columns

There are 10,000 books in this dataset, and we want "book tags" as a key feature because they contain rich data about the books that can help us with recommendations. That data lives in different datasets, so we have to wrangle the data and piece the puzzle together.

# Load tags book_tags data from csv
book_tags = pd.read_csv('../data/book_tags.csv')
tags = pd.read_csv('../data/tags.csv')

# Merge book_tags and tags
tags_join = pd.merge(book_tags, tags, left_on='tag_id', right_on='tag_id', how='inner')

# Merge tags_join and books
books_with_tags = pd.merge(books, tags_join, left_on='book_id', right_on='goodreads_book_id', how='inner')

# Store tags into the same book id row
temp_df = books_with_tags.groupby('book_id')['tag_name'].apply(' '.join).reset_index()
temp_df.head(5)

# Merge tag_names back into books
books = pd.merge(books, temp_df, left_on='book_id', right_on='book_id', how='inner')
books

We now have book tags all in one dataset.

3. Transform data

We have 10,000 books in the dataset, each with 100 book tags. What do these book tags contain?

# Explore book tags
books['tag_name']
Example book tags for The Hunger Games and Harry Potter and the Philosopher’s Stone

We want to transform this text into numerical values so we have data the machine learning algorithm understands. TfidfVectorizer turns text into feature vectors.

# Transform text to feature vectors
from sklearn.feature_extraction.text import TfidfVectorizer
tf = TfidfVectorizer(analyzer='word',ngram_range=(1, 2),min_df=0, stop_words='english')
tfidf_matrix = tf.fit_transform(books['tag_name'])
tfidf_matrix.todense()
This becomes a 10000×144268 matrix

TF-IDF (term frequency times inverse document frequency) calculates how important a word is to a document relative to the whole collection of documents. TF counts how often a given word appears within a document. IDF downscales words that appear frequently across documents. Multiplying the two gives each word a weight that is high when the word is frequent in one document but rare across the corpus.
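
As a quick illustration of that weighting, here is a toy example on a made-up three-document corpus, using TfidfVectorizer as above (get_feature_names_out assumes scikit-learn 1.0+):

# Tiny made-up corpus to see TF-IDF weighting in action.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "fantasy magic dragons fantasy",   # doc 0
    "fantasy adventure quest",         # doc 1
    "startups economics business",     # doc 2
]
vec = TfidfVectorizer()
matrix = vec.fit_transform(docs)

# 'fantasy' appears in two of the three documents, so its IDF factor is smaller
# than that of rarer words like 'dragons' or 'economics'.
print(pd.DataFrame(matrix.toarray(), columns=vec.get_feature_names_out()).round(2))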

4. Build machine learning recommendation engine

Now we build the recommender. We can use cosine similarity to calculate the numeric values that denote similarities between books.

# Use numeric values to find similarities
from sklearn.metrics.pairwise import linear_kernel
cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
cosine_sim

Cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space. The smaller the angle, the higher the cosine similarity. In other words, the closer these book tags are to each other, the more similar the book.
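
For intuition, here is the calculation on two hand-made tag-count vectors (illustrative numbers only). Note that linear_kernel on the TF-IDF matrix is equivalent to cosine similarity here because TfidfVectorizer L2-normalizes each row by default:

# Cosine similarity on small illustrative tag-count vectors.
import numpy as np

a = np.array([3, 1, 0])  # e.g., counts for tags: fantasy, adventure, business
b = np.array([2, 2, 0])
c = np.array([0, 0, 4])

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(a, b))  # ~0.89: small angle, similar tag profiles
print(cosine(a, c))  # 0.0: orthogonal, no tags in common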

Next we write the machine learning algorithm.

# Get book recommendations based on the cosine similarity score of book tags
# Build a 1-dimensional array with book titles
titles = books['title']
tag_name = books['tag_name']
indices = pd.Series(books.index, index=books['title'])

# Function that gets similarity scores
def tags_recommendations(title):
    idx = indices[title]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:11]  # How many results to display
    book_indices = [i[0] for i in sim_scores]
    title_df = pd.DataFrame({'title': titles.iloc[book_indices].tolist(),
                             'similarity': [i[1] for i in sim_scores],
                             'tag_name': tag_name.iloc[book_indices].tolist()},
                            index=book_indices)
    return title_df

This is the foundational code we need for a recommendation engine. This is the building block for Amazon’s $98 billion revenue-generating algorithm and others like it. Almost seems too simple. We can stop here or we can expand our code to show more data insights.

# Get book tags, total tags, and percentage of common tags
def recommend_stats(target_book_title):
    # Get recommended books
    rec_df = tags_recommendations(target_book_title)

    # Get tags of the target book
    rec_book_tags = books_with_tags[books_with_tags['title'] == target_book_title]['tag_name'].to_list()

    # Create dictionary of tag lists by book title
    book_tag_dict = {}
    for title in rec_df['title'].tolist():
        book_tag_dict[title] = books_with_tags[books_with_tags['title'] == title]['tag_name'].to_list()

    # Create dictionary of tag statistics by book title
    tags_stats = {}
    for book, tags in book_tag_dict.items():
        tags_stats[book] = {}
        tags_stats[book]['total_tags'] = len(tags)
        # Get tags in recommended book that are also in target book
        same_tags = set(rec_book_tags).intersection(set(tags))
        tags_stats[book]['%_common_tags'] = (len(same_tags) / len(tags)) * 100

    # Convert dictionary to dataframe
    tags_stats_df = pd.DataFrame.from_dict(tags_stats, orient='index').reset_index().rename(columns={'index': 'title'})

    # Merge tag statistics dataframe to recommended books dataframe
    all_stats_df = pd.merge(rec_df, tags_stats_df, on='title')
    return all_stats_df

Now we input Lord of the Rings into the recommendation engine and see the results.

# Find book recommendations
lor_recs = recommend_stats('The Fellowship of the Ring (The Lord of the Rings, #1)')
lor_recs
Top 10 recommended books based on book tag similarity score

We get a list of the top 10 most similar books to Lord of the Rings based on book tags. The recommendations look nearly identical to Amazon’s website:

Amazon.com as of August 2020

Success! Producing a similarity recommendation was part one. Part two is producing diversity.

5. Build diversity recommendation engine proof of concept

This next part is where evolution happens. The diversity recommendation algorithm does not currently exist (publicly) so there will be some art to this science. In lieu of spending months in the research and mathematics lab, how can we build a proof of concept that can either validate or invalidate the feasibility of producing a diversity recommendation? Let’s explore the data.

Since we are reverse engineering through the Elon Musk customer lens and want the recommender to output Zero to One, let's find where this book is positioned in relation to Lord of the Rings.

# Find Zero to One book
lor_recs[lor_recs.title == 'Zero to One: Notes on Startups, or How to Build the Future']

In relation to Lord of the Rings, Zero to One is ranked 8,592 out of 10,000 books based on similarity. Pretty low. According to the algorithm, these two books are on opposite ends of the spectrum and not similar at all. Zero to One sits statistically in the lowest quartile of similarity, which means neither you nor Elon would be recommended this diversity of thought.
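
The exact lookup behind that rank figure is not shown, but one way it could be reproduced is to rank every book by similarity directly from the cosine_sim matrix, since the recommend_stats helper above only keeps the top 10. A sketch, assuming the titles in the indices series are unique:

# Rank all books by similarity to Lord of the Rings and locate Zero to One.
import numpy as np

lor_idx = indices['The Fellowship of the Ring (The Lord of the Rings, #1)']
zto_idx = indices['Zero to One: Notes on Startups, or How to Build the Future']
order = np.argsort(cosine_sim[lor_idx])[::-1]      # most to least similar
rank = int(np.where(order == zto_idx)[0][0]) + 1   # 1-based rank
print(rank)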

# Calculate statistical data
lor_recs.describe()

Using a boxplot, we can better visualize this positioning:

# Boxplot of similarity score
import matplotlib.pyplot as plt
lor_recs.boxplot(column=['similarity'])
plt.show()

# Boxplot of percentage of common tags
lor_recs.boxplot(column=['%_common_tags'])
plt.show()
Positioning based on cosine similarity and common book tags compared to all 10,000 books

Based on your own knowledge, would you say these two books are so extremely different?

We can explore the data further and find the most common book tags using NLTK (Natural Language Toolkit). First, we clean up the text by removing hyphens, then tokenize the words and remove the stop words. After the text is clean, we can find the 10 most frequent words that appear in the Lord of the Rings book tags.

# Store book tags into new dataframe
lor_tags = pd.DataFrame(books_with_tags[books_with_tags['title']=='The Fellowship of the Ring (The Lord of the Rings, #1)']['tag_name'])

# Find most frequent word used in book tags
import matplotlib
import nltk

top_N = 10
txt = lor_tags.tag_name.str.lower().str.replace(r'-', ' ').str.cat(sep=' ')  # Remove hyphens
words = nltk.tokenize.word_tokenize(txt)
word_dist = nltk.FreqDist(words)
stopwords = nltk.corpus.stopwords.words('english')
words_except_stop_dist = nltk.FreqDist(w for w in words if w not in stopwords)

print('All frequencies, including STOPWORDS:')
print('=' * 60)
lor_rslt = pd.DataFrame(word_dist.most_common(top_N), columns=['Word', 'Frequency'])
print(lor_rslt)
print('=' * 60)

lor_rslt = pd.DataFrame(words_except_stop_dist.most_common(top_N), columns=['Word', 'Frequency']).set_index('Word')
matplotlib.style.use('ggplot')
lor_rslt.plot.bar(rot=0)
plt.show()
Most frequent words in the Lord of the Rings book tags

Since we want diversity and variety, we can take the most frequent words, "fantasy" and "fiction", and filter for unlike or different words in the context of book genres. These might be words like non-fiction, economics, or entrepreneurial.

# Filter by unlike words
lor_recs_filter = lor_recs[(lor_recs['tag_name'].str.contains('non-fiction')) & (lor_recs['tag_name'].str.contains('economics')) & (lor_recs['tag_name'].str.contains('entrepreneurial'))]
lor_recs_filter

This narrows down the list to only the books whose tags contain "non-fiction", "economics", and "entrepreneurial". To ensure our reader is recommended a good book, we merge 'average_rating' back into the dataset and sort the results by the highest average book rating.

# Merge recommendations with ratings
lor_recs_filter_merge = pd.merge(books[['title', 'average_rating']], lor_recs_filter, left_on='title', right_on='title', how='inner')

# Sort by highest average rating
lor_recs_filter_merge = lor_recs_filter_merge.sort_values(by=['average_rating'], ascending=False)
lor_recs_filter_merge

And what appears at the top of the list? Zero to One. We engineered our way into recommending diversity.

There was only one leap of faith, but quite a large one, in linking the relationship between Lord of the Rings and Zero to One. The next step would be to programmatically identify the relationship between these two books, and between other pairs of books, so the approach becomes reproducible. What math and logic might drive this? The majority of current machine learning and AI recommendation algorithms are based on finding similarities. How can we find diversity instead?

Intuitively we can understand that someone interested in fantasy books can also benefit from learning about entrepreneurship. However, our algorithms currently do not provide this nudge. If we can solve this problem and build a better algorithm, not only will this significantly help the customer but also increase a company’s revenue through more sales and satisfied customers. First, we bridge the gap through domain knowledge and expertise for the book industry. Then we move towards applying this across other industries. If you have thoughts on this, let’s chat. This may be a game-changer.

5.1 An alternate diversity recommendation engine

Here's an alternative solution. Instead of making recommendations based on the book tags of an individual book, we can make recommendations based on categories. The rule: each recommendation can still share similarities with the original book, but it must come from a unique category.

For example, Amazon has 5 slots in which to display book recommendations.

Amazon.com as of August 2020

Instead of 5 books with similar book tags to Lord of the Rings that all sit in Fantasy, the result should be 5 books with similar book tags, each from a different category, such as Science Fiction. Subsequent results would follow the same logic. This would "diversify" the categories of books the user sees while maintaining a chain of relevancy. It could display: Fantasy → Science Fiction → Technology → Entrepreneurship → Biographies.

Within these categories, individual books can be ranked and sorted by their similarity to the initial book or by user ratings. Each slot presents the book within that category with the most relevance to the original book.

Slot 1: Fantasy
Slot 2: Science Fiction
Slot 3: Technology
Slot 4: Entrepreneurship
Slot 5: Biographies

This categorization becomes a sort within a sort to ensure a quality, relevant, and diverse read. This concept could more systematically connect books like Lord of the Rings and Zero to One, eventually scaling to different product types or industries, such as music.
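
As a rough sketch of this sort-within-a-sort, the selection could walk the similarity ranking and keep only the first book seen from each category. The single 'category' column used here is an assumption: it does not exist in the Goodreads data as loaded above and would have to be derived from the book tags first.

# Hypothetical slot filler: one book per category, ordered by similarity.
import pandas as pd

def diverse_slots(target_idx, books_df, cosine_sim, n_slots=5):
    scores = pd.Series(cosine_sim[target_idx], index=books_df.index)
    scores = scores.drop(target_idx).sort_values(ascending=False)
    slots, seen = [], set()
    for idx in scores.index:
        category = books_df.loc[idx, 'category']   # assumed column
        if category not in seen:                   # enforce one book per category
            slots.append(idx)
            seen.add(category)
        if len(slots) == n_slots:
            break
    return books_df.loc[slots, ['title', 'category']]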

6. Design mockups

How might we design this? We could deploy our algorithm to allocate a certain percentage to the exploration of diversity recommendations, such as a split of 70% similarity and 30% diversity recommendations.
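
A minimal sketch of that blend, assuming we already have two ranked dataframes with a 'title' column, such as the similarity list (lor_recs) and the diversity list (lor_recs_filter_merge) built earlier:

# Fill roughly 70% of slots from the similarity list and 30% from the diversity list.
import pandas as pd

def blend_recommendations(similar_df, diverse_df, n=10, diversity_share=0.3):
    n_diverse = int(round(n * diversity_share))
    blended = pd.concat([
        similar_df.head(n - n_diverse),
        diverse_df.head(n_diverse),
    ]).drop_duplicates(subset='title')
    return blended.head(n)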

One potential user gave feedback that they would like to see a "diversity switch". Let's mock up this potential design.

Design mockup of Amazon.com with a “diversity recommendation” switch

Once the customer switches it on, we can keep the first 3 books as the usual similarity recommendations and serve the next 2 as our diversity recommendations.

Switching on diversity

In our product metrics, we would track the number and percentage of times users interact with this switch, the increase or decrease in the number of product pages visited, other subsequent user-flow behaviors, and the conversion rate for purchases of the recommended books or other products. As a potential customer, what are your thoughts on this?

7. Business value hypothesis and target launch

What is the potential business value behind this idea? We can start by narrowing the target launch to Amazon.com USA customers who have, in the past 12 months, purchased a book, searched for fantasy books, and bought from multiple book categories. Conservative assumptions:

USA customers: 112 million
x book buying customers: 25%
x searches for fantasy: 25%
x buys multiple categories: 25%
= roll out to 1.75 million customers
x conversion rate: 10%
= 175,000 customers convert
x increase in average annual spend: $40
= $7 million additional annual revenue
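
The funnel arithmetic is simple enough to capture in a small helper; the inputs are just the assumptions above, and the other two scenarios discussed below are included for comparison:

# Multiply the audience through each funnel filter, then apply conversion and extra spend.
def incremental_revenue(customers, funnel_rates, conversion_rate, extra_spend):
    reached = customers
    for rate in funnel_rates:
        reached *= rate
    converted = reached * conversion_rate
    return reached, converted, converted * extra_spend

# Conservative scenario: three 25% filters, 10% conversion, +$40 annual spend.
print(incremental_revenue(112e6, [0.25, 0.25, 0.25], 0.10, 40))  # ~1.75M reached, 175k convert, ~$7M
# Broader scenarios: one 25% filter, then a 50% filter with 20% conversion and +$90.
print(incremental_revenue(112e6, [0.25], 0.10, 40))              # ~28M, 2.8M, ~$112M
print(incremental_revenue(112e6, [0.50], 0.20, 90))              # ~56M, 11.2M, ~$1B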

In 2019, the average Amazon customer spent about $600 per year, and Amazon's annual revenue was $280 billion. This estimate is light in comparison, which is good for an initial rollout test. If we increase the scope of the launch, we get a larger potential value. Let's expand our reach and roll this out to all USA Amazon customers who have purchased a book, keeping the conservative assumption of 25%:

USA customers: 112 million
x book buying customers: 25%
= roll out to 28 million customers
x conversion rate: 10%
= 2.8 million customers convert
x increase in average annual spend: $40
= $112 million additional annual revenue

Finally, if we are more aggressive and assume half of Amazon's customers can be book buyers, and we increase the conversion rate and the average annual spend uptick, we get into the billions of additional value:

USA customers: 112 million
x book buying customers: 50%
= roll out to 56 million customers
x conversion rate: 20%
= 11.2 million customers convert
x increase in average annual spend: $90
= ~$1 billion additional annual revenue

The potential pitfall is that this new recommender system negatively impacts the customer experience and decreases the conversion rate, which becomes a revenue loss. This is why initial customer validation and smaller launch and test plans are a good starting point.

The business value upside is significant, seeing as 35% ($98 billion) of Amazon's revenue is generated through recommendation systems. Even a small percentage improvement in the algorithm would amount to millions or even billions in additional revenue.

The end to end data science process of building a recommender system

What’s Next?

This proof of concept illustrates the potential for diversity recommender systems to improve customer experiences, address societal problems, and add significant business value, and it outlines a feasible data science process for improving our technology.

Through better machine learning and AI recommender systems, technology can enable more diversity of thought. Society's current ethos says that similarity is good, whereas attitudes toward diversity are mixed. Perhaps that is why the majority of recommendation systems research and development today has focused only on finding similarities. If we do not implement change within our algorithms, the status quo will give us more of the same.

Technology to enable a diversity of thought

Imagine a world where diversity of thought is enabled across all the products that influence us every day. How might that change the world for you, your friends, and people who think differently from you? Machine learning and AI recommendation algorithms can be a powerful change agent. If we continue pursuing this path of diversity, we can positively impact the people and the world around us.

This article was originally published on Towards AI and re-published to TOPBOTS with permission from the author.


