At this point you have learned about Julia’s core data structures, and you have seen some of the algorithms that use them.

This chapter presents a case study with exercises that let you think about choosing data structures and practice using them.

Modify the previous program to read a word list and then print all the words in the book that are not in the word list. How many of them are typos? How many of them are common words that should be in the word list, and how many of them are really obscure?

Modify the program from the previous exercise to print the 20 most frequently used words in the book.

Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?

Then modify the program to count the total number of words in the book, and the number of times each word is used.

Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before.

Write a program that reads a file, breaks each line into words, strips whitespace and punctuation from the words, and converts them to lowercase.

As usual, you should at least attempt the exercises before you read my solutions.

Write a function named choosefromhist that takes a histogram as defined in Dictionary as a Collection of Counters and returns a random value from the histogram, chosen with probability in proportion to frequency. For example, for this histogram:
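```julia
t = Dict('a' => 2, 'b' => 1)   # an illustrative histogram (my example values)
```

the function should return 'a' with probability 2/3 and 'b' with probability 1/3.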

The function rand can take an iterator or array as an argument and returns a random element:
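```julia
rand(1:100)            # a random integer between 1 and 100
rand(['a', 'b', 'c'])  # a random element of the array (results vary from run to run)
```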

The function rand returns a random float between 0.0 and 1.0 (including 0.0 but not 1.0). Each time you call rand, you get the next number in a long series. To see a sample, run this loop:
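```julia
for i in 1:10
    x = rand()
    println(x)
end
```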

Making a program truly nondeterministic turns out to be difficult, but there are ways to make it at least seem nondeterministic. One of them is to use algorithms that generate pseudorandom numbers. Pseudorandom numbers are not truly random because they are generated by a deterministic computation, but just by looking at the numbers it is all but impossible to distinguish them from random.

Given the same inputs, most computer programs generate the same outputs every time, so they are said to be deterministic. Determinism is usually a good thing, since we expect the same calculation to yield the same result. For some applications, though, we want the computer to be unpredictable. Games are an obvious example, but there are more.

The number of different words is just the number of items in the dictionary:
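```julia
length(hist)   # the number of distinct words in the book
```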

To count the total number of words in the file, we can add up the frequencies in the histogram:
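```julia
totalwords(hist) = sum(values(hist))   # add up all the frequencies
```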

processline uses the function replace to replace hyphens with spaces before using split to break the line into an array of strings. It traverses the array of words and uses filter, isletter, and lowercase to remove punctuation and convert the words to lowercase. (It is shorthand to say that strings are “converted”; remember that strings are immutable, so a function like lowercase returns a new string.)

processfile loops through the lines of the file, passing them one at a time to processline. The histogram hist is used as an accumulator.

Here is a program that reads a file and builds a histogram of the words in the file:
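```julia
# A sketch consistent with the descriptions of processline and processfile above.
function processline(line, hist)
    line = replace(line, '-' => ' ')              # treat hyphens as word separators
    for word in split(line)
        word = lowercase(filter(isletter, word))  # strip punctuation, convert case
        if !isempty(word)
            hist[word] = get(hist, word, 0) + 1
        end
    end
end

function processfile(filename)
    hist = Dict()                                 # the histogram is the accumulator
    for line in eachline(filename)
        processline(line, hist)
    end
    hist
end

hist = processfile("emma.txt")
```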

You should attempt the previous exercises before you go on. You will also need https://github.com/BenLauwens/ThinkJulia.jl/blob/master/data/emma.txt.

This code can be simplified using the rev keyword argument of the sort function. You can read about it at https://docs.julialang.org/en/v1/base/sort/#Base.sort .
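For instance, assuming t is the array of tuples built by mostcommon:

```julia
sort(t, rev=true)   # descending order in one call, instead of reverse(sort(t))
```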

I use a tab character ('\t') as a “separator”, rather than a space, so the second column lines up. In the results from Emma, the top of the list is dominated by common function words like “to”, “the”, and “and”.

In each tuple, the frequency appears first, so the resulting array is sorted by frequency. Here is a loop that prints the 10 most common words:
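```julia
t = mostcommon(hist)
for (freq, word) in t[1:10]
    println(word, "\t", freq)   # the word, a tab, then its frequency
end
```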

To find the most common words, we can make an array of tuples, where each tuple contains a word and its frequency, and sort it. The following function takes a histogram and returns an array of word-frequency tuples:
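```julia
# One way to write it; any equivalent structure works.
function mostcommon(hist)
    t = []
    for (key, value) in hist
        push!(t, (value, key))   # frequency first, word second
    end
    reverse(sort(t))             # ascending sort, then reverse: most frequent first
end
```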

If a function has both required and optional parameters, all the required parameters have to come first, followed by the optional ones.

If you call the function with a second argument, num gets the value of that argument instead of the default. In other words, the optional argument overrides the default value.

The first parameter is required; the second is optional. The default value of num is 10.

We have seen built-in functions that take optional arguments. It is possible to write programmer-defined functions with optional arguments, too. For example, here is a function that prints the most common words in a histogram:
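```julia
# A sketch that reuses mostcommon from above.
function printmostcommon(hist, num=10)
    t = mostcommon(hist)
    println("The most common words are:")
    for (freq, word) in t[1:num]
        println(word, "\t", freq)
    end
end
```

You can call it with one argument or two:

```julia
printmostcommon(hist)       # prints the 10 most common words
printmostcommon(hist, 20)   # the argument 20 overrides the default
```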

Write a program that uses set subtraction to find words in the book that are not in the word list.

Julia provides a data structure called Set that supports many common set operations. You can read about them in Collections and Data Structures, or read the documentation at https://docs.julialang.org/en/v1/base/collections/#Set-Like-Collections-1 .
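For example, a sketch of the Set approach (assuming hist and words are the histograms built with processfile):

```julia
bookwords = Set(keys(hist))               # words that appear in the book
listwords = Set(keys(words))              # words from words.txt
unknown = setdiff(bookwords, listwords)   # set subtraction
```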

Some of these words are names and possessives. Others, like “rencontre”, are no longer in common use. But a few are common words that should really be in the list!

To find the words in the book that are not in words.txt, we can use processfile to build a histogram for words.txt, and then subtract:

subtract takes dictionaries d1 and d2 and returns a new dictionary that contains all the keys from d1 that are not in d2. Since we don’t really care about the values, we set them all to nothing.
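A sketch matching that description, together with the code that uses it:

```julia
function subtract(d1, d2)
    res = Dict()
    for key in keys(d1)
        if key ∉ keys(d2)
            res[key] = nothing   # only the keys matter
        end
    end
    res
end

words = processfile("words.txt")
diff = subtract(hist, words)
println("Words in the book that aren't in the word list:")
for word in keys(diff)
    print(word, " ")
end
```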

Finding the words from the book that are not in the word list from words.txt is a problem you might recognize as set subtraction; that is, we want to find all the words from one set (the words in the book) that are not in the other (the words in the list).

An alternative is to:

1. Use keys to get an array of the words in the book.

2. Build an array that contains the cumulative sum of the word frequencies (see Exercise 10-2). The last item in this array is the total number of words in the book, \(n\).

3. Choose a random number from 1 to \(n\), and use a bisection search (see Exercise 10-10) to find the index where the random number would be inserted in the cumulative sum.

4. Use the index to find the corresponding word in the word array.
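Here is a hedged sketch of this approach, using cumsum and searchsortedfirst from Base for the cumulative sum and the bisection (the function names are mine):

```julia
function makesampler(hist)
    words = collect(keys(hist))
    cumfreq = cumsum([hist[w] for w in words])   # cumulative frequencies
    words, cumfreq
end

function samplefrom(words, cumfreq)
    n = last(cumfreq)                    # total number of words in the book
    r = rand(1:n)                        # a random number from 1 to n
    i = searchsortedfirst(cumfreq, r)    # bisection: insertion index for r
    words[i]                             # the corresponding word
end
```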

Building one big array of word copies works, but it is not very efficient; each time you choose a random word, that algorithm rebuilds the array, which is as big as the original book. An obvious improvement is to build the array once and then make multiple selections, but the array is still big.

To choose a random word from the histogram, the simplest algorithm is to build an array with multiple copies of each word, according to the observed frequency, and then choose from the array:
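```julia
# One way to write it; the name choosefromhist matches the exercise above.
function choosefromhist(hist)
    t = []
    for (word, freq) in hist
        for _ in 1:freq
            push!(t, word)   # one copy of the word per occurrence
        end
    end
    rand(t)                  # every copy is equally likely
end
```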

You should attempt this exercise before you go on.

Credit: This case study is based on an example from Kernighan and Pike, The Practice of Programming, Addison-Wesley, 1999.

Once your program is working, you might want to try a mash-up: if you combine text from two or more books, the random text you generate will blend the vocabulary and phrases from the sources in interesting ways.

What happens if you increase the prefix length? Does the random text make more sense?

For this example, I left the punctuation attached to the words. The result is almost syntactically correct, but not quite. Semantically, it almost makes sense, but not quite.

Add a function to the previous program to generate random text based on the Markov analysis. Here is an example from Emma with prefix length 2:

“He was very clever, be it sweetness or be angry, ashamed or only amused, at such a stroke. She had never thought of Hannah till you were never meant for me?" "I cannot make speeches, Emma:" he soon cut it all himself.”
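A hedged sketch of such a generator (the names generatetext and suffixmap are mine; it assumes a dictionary that maps prefix tuples to arrays of suffixes, like the markovanalysis sketch after the next exercise):

```julia
function generatetext(suffixmap, n=100)
    prefix = rand(collect(keys(suffixmap)))   # start from a random prefix
    words = collect(prefix)
    for _ in 1:n
        haskey(suffixmap, prefix) || break    # dead end: this prefix was never seen
        word = rand(suffixmap[prefix])        # choose a random suffix
        push!(words, word)
        prefix = (prefix[2:end]..., word)     # slide the prefix window forward
    end
    join(words, " ")
end
```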

Write a program to read a text from a file and perform Markov analysis. The result should be a dictionary that maps from prefixes to a collection of possible suffixes. The collection might be an array, tuple, or dictionary; it is up to you to make an appropriate choice. You can test your program with prefix length two, but you should write the program in a way that makes it easy to try other lengths.
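One possible shape for such a program, a minimal sketch that uses a tuple of strings for the prefix and an array of strings for the suffix collection (the trade-offs between these choices are discussed under Data Structures below):

```julia
function markovanalysis(filename, order=2)
    suffixmap = Dict{Tuple,Vector{String}}()
    prefix = ()
    for line in eachline(filename)
        for w in split(line)
            word = String(w)
            if length(prefix) < order
                prefix = (prefix..., word)     # still filling the first prefix
                continue
            end
            push!(get!(suffixmap, prefix, String[]), word)
            prefix = (prefix[2:end]..., word)  # drop the first word, append the new one
        end
    end
    suffixmap
end
```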

In this example the length of the prefix is always two, but you can do Markov analysis with any prefix length.

For example, if you start with the prefix “Half a”, then the next word has to be “bee”, because the prefix only appears once in the text. The next prefix is “a bee”, so the next suffix might be “philosophically”, “be” or “due”.

Given this mapping, you can generate a random text by starting with any prefix and choosing at random from the possible suffixes. Next, you can combine the end of the prefix and the new suffix to form the next prefix, and repeat.

The result of Markov analysis is a mapping from each prefix (like “half the” and “the bee”) to all possible suffixes (like “has” and “is”).

In this text, the phrase “half the” is always followed by the word “bee”, but the phrase “the bee” might be followed by either “has” or “is”.

One way to measure these kinds of relationships is Markov analysis, which characterizes, for a given sequence of words, the probability of the words that might come next. For example, the song Eric, the Half a Bee (by Monty Python) begins:
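Half a bee, philosophically,
Must, ipso facto, half not be.
But half the bee has got to be
Vis a vis, its entity. D’you see?

But can a bee be said to be
Or not to be an entire bee
When half the bee is not a bee
Due to some ancient injury?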

A series of random words seldom makes sense because there is no relationship between successive words. For example, in a real sentence you would expect an article like “the” to be followed by an adjective or a noun, and probably not a verb or adverb.

If you choose words from the book at random, you can get a sense of the vocabulary, but you probably won’t get a sentence.

Data Structures

Using Markov analysis to generate random text is fun, but there is also a point to this exercise: data structure selection. In your solution to the previous exercises, you had to choose:

How to represent the prefixes.

How to represent the collection of possible suffixes.

How to represent the mapping from each prefix to the collection of possible suffixes.

The last one is easy: a dictionary is the obvious choice for a mapping from keys to corresponding values.

For the prefixes, the most obvious options are string, array of strings, or tuple of strings.

For the suffixes, one option is an array; another is a histogram (dictionary).

How should you choose? The first step is to think about the operations you will need to implement for each data structure. For the prefixes, we need to be able to remove words from the beginning and add to the end. For example, if the current prefix is “Half a”, and the next word is “bee”, you need to be able to form the next prefix, “a bee”.

Your first choice might be an array, since it is easy to add and remove elements. But a prefix also has to serve as a dictionary key, and mutating an array that is used as a key invalidates its hash, so an immutable tuple of strings is the safer choice.
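With a tuple of strings, for instance, the shift is a one-liner (an illustrative snippet, not from the book):

```julia
prefix = ("Half", "a")
nextword = "bee"
prefix = (prefix[2:end]..., nextword)   # now ("a", "bee")
```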

For the collection of suffixes, the operations we need to perform include adding a new suffix (or increasing the frequency of an existing one), and choosing a random suffix.

Adding a new suffix is equally easy for the array implementation or the histogram. Choosing a random element from an array is easy; choosing from a histogram is harder to do efficiently (see Exercise 13-7).

So far we have been talking mostly about ease of implementation, but there are other factors to consider in choosing data structures. One is run time. Sometimes there is a theoretical reason to expect one data structure to be faster than others; for example, I mentioned that the in operator is faster for dictionaries than for arrays, at least when the number of elements is large.

But often you don’t know ahead of time which implementation will be faster. One option is to implement both of them and see which is better. This approach is called benchmarking. A practical alternative is to choose the data structure that is easiest to implement, and then see if it is fast enough for the intended application. If so, there is no need to go on. If not, there are tools, like the Profile module, that can identify the places in a program that take the most time.
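For example, a minimal sketch (Profile is part of Julia's standard library):

```julia
using Profile

@profile processfile("emma.txt")   # run the code while the profiler samples it
Profile.print()                    # show which lines consumed the most time
```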

The other factor to consider is storage space. For example, using a histogram for the collection of suffixes might take less space because you only have to store each word once, no matter how many times it appears in the text. In some cases, saving space can also make your program run faster, and in the extreme, your program might not run at all if you run out of memory. But for many applications, space is a secondary consideration after run time.

One final thought: in this discussion, I have implied that we should use one data structure for both analysis and generation. But since these are separate phases, it would also be possible to use one structure for analysis and then convert to another structure for generation. This would be a net win if the time saved during generation exceeded the time spent in conversion.