{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically\n", "* Decide on a notion of similarity\n", "* Find the documents that are most similar \n", "\n", "In the assignment you will\n", "* Gain intuition for different notions of similarity and practice finding similar documents. \n", "* Explore the tradeoffs with representing documents using raw word counts and TF-IDF\n", "* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false }, "outputs": [ { "ename": "ImportError", "evalue": "No module named graphlab", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0;32mimport\u001b[0m \u001b[0mgraphlab\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mmatplotlib\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpyplot\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mplt\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mnumpy\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mget_ipython\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmagic\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mu'matplotlib inline'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mImportError\u001b[0m: No module named graphlab" ] } ], "source": [ "import graphlab\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "%matplotlib inline\n", "\n", "'''Check GraphLab Create version'''\n", "from distutils.version import StrictVersion\n", "assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Wikipedia dataset" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase). " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "wiki = graphlab.SFrame('people_wiki.gl')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "wiki" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Extract word count vectors" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in `wiki`." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "wiki" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Find nearest neighbors" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, again will we use a GraphLab Create implementation of nearest neighbor search." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],\n", " method='brute_force', distance='euclidean')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the top 10 nearest neighbors by performing the following query:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": false }, "outputs": [], "source": [ "model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.\n", "\n", "* Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.\n", "* Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.\n", "* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.\n", "* Andy Anstett is a former politician in Manitoba, Canada.\n", "\n", "Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.\n", "\n", "For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def top_words(name):\n", " \"\"\"\n", " Get a table of the most frequent words in the given person's wikipedia page.\n", " \"\"\"\n", " row = wiki[wiki['name'] == name]\n", " word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])\n", " return word_count_table.sort('count', ascending=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "obama_words = top_words('Barack Obama')\n", "obama_words" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "barrio_words = top_words('Francisco Barrio')\n", "barrio_words" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as **join**. The **join** operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). 
See [the documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html) for more details.\n", "\n", "For instance, running\n", "```\n", "obama_words.join(barrio_words, on='word')\n", "```\n", "will extract the rows from both tables that correspond to the common words." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "combined_words = obama_words.join(barrio_words, on='word')\n", "combined_words" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since both tables contained the column named `count`, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (`count`) is for Obama and the second (`count.1`) for Barrio." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})\n", "combined_words" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**. The **join** operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget `ascending=False` to display largest counts first." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "combined_words.sort('Obama', ascending=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Quiz Question**. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?\n", "\n", "Hint:\n", "* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.\n", "* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function `has_top_words` to accomplish the task.\n", " - Convert the list of top 5 words into set using the syntax\n", "```\n", "set(common_words)\n", "```\n", " where `common_words` is a Python list. See [this link](https://docs.python.org/2/library/stdtypes.html#set) if you're curious about Python sets.\n", " - Extract the list of keys of the word count dictionary by calling the [`keys()` method](https://docs.python.org/2/library/stdtypes.html#dict.keys).\n", " - Convert the list of keys into a set as well.\n", " - Use [`issubset()` method](https://docs.python.org/2/library/stdtypes.html#set) to check if all 5 words are among the keys.\n", "* Now apply the `has_top_words` function on every row of the SFrame.\n", "* Compute the sum of the result column to obtain the number of articles containing all the 5 top words." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "common_words = ... # YOUR CODE HERE\n", "\n", "def has_top_words(word_count_vector):\n", " # extract the keys of word_count_vector and convert it to a set\n", " unique_words = ... # YOUR CODE HERE\n", " # return True if common_words is a subset of unique_words\n", " # return False otherwise\n", " return ... 
# YOUR CODE HERE\n", "\n", "wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)\n", "\n", "# use has_top_words column to answer the quiz question\n", "... # YOUR CODE HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Checkpoint**. Check your `has_top_words` function on two random articles:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print 'Output from your function:', has_top_words(wiki[32]['word_count'])\n", "print 'Correct output: True'\n", "print 'Also check the length of unique_words. It should be 167'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print 'Output from your function:', has_top_words(wiki[33]['word_count'])\n", "print 'Correct output: False'\n", "print 'Also check the length of unique_words. It should be 188'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Quiz Question**. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?\n", "\n", "Hint: To compute the Euclidean distance between two dictionaries, use `graphlab.toolkits.distances.euclidean`. Refer to [this link](https://dato.com/products/create/docs/generated/graphlab.toolkits.distances.euclidean.html) for usage." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Quiz Question**. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note.** Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## TF-IDF to the rescue" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as \"the\", \"and\", and \"his\". So nearest neighbors is recommending plausible results sometimes for the wrong reasons. \n", "\n", "To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. **TF-IDF** (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. 
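One widely used form assigns word $w$ in document $d$ the weight\n", "$$\n", "\\text{tf-idf}(w, d) = \\text{tf}(w, d) \\times \\log \\frac{N}{1 + \\text{df}(w)},\n", "$$\n", "where $\\text{tf}(w, d)$ is the number of times $w$ appears in $d$, $N$ is the number of documents in the corpus, and $\\text{df}(w)$ is the number of documents containing $w$; the exact smoothing used by GraphLab Create's implementation may differ slightly.\n", "\n", "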
Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n", "                                                 method='brute_force', distance='euclidean')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's determine whether this list makes sense.\n", "* With the notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.\n", "* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.\n", "\n", "Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def top_words_tf_idf(name):\n", "    row = wiki[wiki['name'] == name]\n", "    word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])\n", "    return word_count_table.sort('weight', ascending=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "obama_tf_idf = top_words_tf_idf('Barack Obama')\n", "obama_tf_idf" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')\n", "schiliro_tf_idf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the **join** operation we learned earlier, try your hand at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Quiz Question**. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have the largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "common_words = ...  # YOUR CODE HERE\n", "\n", "def has_top_words(word_count_vector):\n", "    # extract the keys of word_count_vector and convert it to a set\n", "    unique_words = ...  # YOUR CODE HERE\n", "    # return True if common_words is a subset of unique_words\n", "    # return False otherwise\n", "    return ... 
# YOUR CODE HERE\n", "\n", "wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)\n", "\n", "# use has_top_words column to answer the quiz question\n", "... # YOUR CODE HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Choosing metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of `model_tf_idf`. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Quiz Question**. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def compute_length(row):\n", " return len(row['text'].split(' '))\n", "\n", "wiki['length'] = wiki.apply(compute_length) " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)\n", "nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nearest_neighbors_euclidean.sort('rank')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents." 
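] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before plotting, a quick numeric comparison already hints at the length bias. (This is just a small sanity-check sketch using the `length` column computed above; the exact numbers depend on your copy of the dataset.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Sanity check: average document length of the 100 Euclidean neighbors vs. the whole corpus.\n", "print 'Mean length, entire Wikipedia            :', wiki['length'].mean()\n", "print 'Mean length, 100 NNs of Obama (Euclidean):', nearest_neighbors_euclidean['length'].mean()"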
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.figure(figsize=(10.5,4.5))\n", "plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,\n", " label='Entire Wikipedia', zorder=3, alpha=0.8)\n", "plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,\n", " label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)\n", "plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,\n", " label='Length of Barack Obama', zorder=2)\n", "plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,\n", " label='Length of Joe Biden', zorder=1)\n", "plt.axis([0, 1000, 0, 0.04])\n", "\n", "plt.legend(loc='best', prop={'size':15})\n", "plt.title('Distribution of document length')\n", "plt.xlabel('# of words')\n", "plt.ylabel('Percentage')\n", "plt.rcParams.update({'font.size':16})\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 300 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.\n", "\n", "**Note**: For the interest of computation time, the dataset given here contains _excerpts_ of the articles rather than full text. For instance, the actual Wikipedia article about Obama is around 25000 words. Do not be surprised by the low numbers shown in the histogram." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "To remove this bias, we turn to **cosine distances**:\n", "$$\n", "d(\\mathbf{x},\\mathbf{y}) = 1 - \\frac{\\mathbf{x}^T\\mathbf{y}}{\\|\\mathbf{x}\\| \\|\\mathbf{y}\\|}\n", "$$\n", "Cosine distances let us compare word distributions of two articles of varying lengths.\n", "\n", "Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n", " method='brute_force', distance='cosine')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)\n", "nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "nearest_neighbors_cosine.sort('rank')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! 
We also see Hillary Clinton on the list. This list looks even more plausible as a set of nearest neighbors for Barack Obama.\n", "\n", "Let's make a plot to better visualize the effect of using cosine distance in place of Euclidean distance on our TF-IDF vectors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "plt.figure(figsize=(10.5,4.5))\n", "plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,\n", "         label='Entire Wikipedia', zorder=3, alpha=0.8)\n", "plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,\n", "         label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)\n", "plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,\n", "         label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)\n", "plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,\n", "            label='Length of Barack Obama', zorder=2)\n", "plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,\n", "            label='Length of Joe Biden', zorder=1)\n", "plt.axis([0, 1000, 0, 0.04])\n", "plt.legend(loc='best', prop={'size':15})\n", "plt.title('Distribution of document length')\n", "plt.xlabel('# of words')\n", "plt.ylabel('Density')\n", "plt.rcParams.update({'font.size': 16})\n", "plt.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just the short articles that Euclidean distance favored." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Moral of the story**: When deciding on features and distance measures, check that they produce results that make sense for your particular application." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem with cosine distances: tweets vs. long articles" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "+----------------------------------------------------------+\n", "|                                               +--------+ |\n", "|  One that shall not be named                  | Follow | |\n", "|  @username                                    +--------+ |\n", "|                                                          |\n", "|  Democratic governments control law in response to       |\n", "|  popular act.                                            |\n", "|                                                          |\n", "|  8:05 AM - 16 May 2016                                   |\n", "|                                                          |\n", "|  Reply   Retweet (1,332)   Like (300)                    |\n", "|                                                          |\n", "+----------------------------------------------------------+\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. 
(That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})\n", "sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])\n", "\n", "encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')\n", "encoder.fit(wiki)\n", "sf = encoder.transform(sf)\n", "sf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "tweet_tf_idf = sf[0]['tf_idf.word_count']\n", "tweet_tf_idf" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "obama = wiki[wiki['name'] == 'Barack Obama']\n", "obama" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, compute the cosine distance between the Barack Obama article and this tweet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "obama_tf_idf = obama[0]['tf_idf']\n", "graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model2_tf_idf.query(obama, label='name', k=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With cosine distances, the tweet is \"nearer\" to Barack Obama than everyone else, except for Joe Biden! This probably is not something we want. If someone is reading the Barack Obama Wikipedia page, would you want to recommend they read this tweet? Ignoring article lengths completely resulted in nonsensical results. In practice, it is common to enforce maximum or minimum document lengths. After all, when someone is reading a long article from _The Atlantic_, you wouldn't recommend him/her a tweet." ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 0 }