Summary

This page goes into detail on how I used Machine Learning to find hundreds of Krazy Kat comics that are now in the public domain.

As a result of this project, several hundred high resolution scans of Krazy Kat comics are now easily available online, including a comic that I couldn't find in any published book!

What follows is a detailed description of what I did to find these comics in online newspaper archives.

About

After becoming a little obsessed with Krazy Kat, I was very disappointed to see that many of the books I wanted were incredibly expensive. For example, "Krazy & Ignatz: The Complete Sunday Strips 1916-1924" was selling on Amazon for nearly $600 and "Krazy & Ignatz 1922-1924: At Last My Drim Of Love Has Come True" was selling for nearly $90.

At some point, I realized that the copyright for many of the comics that I was looking for has expired and that these public domain comics were likely available in online newspaper archives.

So, driven by a desire to obtain the "unobtainable" and mostly by curiosity to see if it was possible, I set out to see if I could find public domain Krazy Kat Sunday comics in online newspaper archives.

As you can see in the "Comics" section of this site, it is possible to find Krazy Kat comics in online newspaper archives and I've made all of the comics I could find viewable on this web page.

If all you want to do is read Krazy Kat comics, I encourage you to click on the "Comics" link above.

I hope that by being able to read the comics online, you'll be inspired to buy one of the reprints from Fantagraphics. Krazy Kat is best appreciated in the medium it was designed for and the books that Fantagraphics publishes are a delight. You can find the books that I recommend in the "Buy" section of this site.

What follows below is a detailed description of the code I wrote to find the Krazy Kat comics in newspaper archives. I also wrote my recommendations for curators of newspaper archives, as well as my advice for people who want to build upon, or replicate, my work.

Finally, I close with a long list of things that I wish I could have done, in the hope that someone else will be inspired to do them.

How to find Krazy Kat comics in newspaper archives

In short, I wrote some programs in Python that downloaded thumbnails from various newspaper archives, manually found about 100 Sunday comic strips from the thumbnails, used Microsoft's Custom Vision service to train an image classifier to detect Krazy Kat comics in thumbnail images, used that classifier to find several hundred more thumbnails, then wrote some more code in Python to download high resolution images of all of the thumbnails that I found.

This was done in several stages:

Learning about Krazy Kat

Discussing feasibility of the project with an ML expert

Searching for archives that contain Krazy Kat Sunday comics

Writing code to download thumbnails from newspaper archives

Training an image classifier

Using the image classifier to find more thumbnails

Writing code to download full size images

Finding comics in other online archives

I go into detail on each of those stages in the sections below:

Learning about Krazy Kat

If it were not for a chance encounter with Krazy Kat and The Art of George Herriman at Pegasus Books I wouldn't be familiar with the series myself.

The reason I picked up the book in the first place is because the comic strip Calvin and Hobbes was such a big part of my childhood, and I remembered how Bill Watterson referenced Krazy Kat as a big reason why he insisted on getting a larger full color format for his Sunday comic strips.

Once I finished reading "Krazy Kat and The Art of George Herriman" I started to buy the fantastic books from Fantagraphics. However, as stated above, I felt frustrated that some of the books were so expensive.

Discussing feasibility of the project with an ML expert

The real genesis of this project, however, was a conversation I had with Tyler Neylon about one of the projects he was working on at Unbox Research, a machine learning research & development company.

Speaking with Tyler got me thinking about the types of projects I could use machine learning with, and the idea of using machine learning to help me find Krazy Kat images was the most interesting of the things that we discussed.

Searching for archives that contain Krazy Kat Sunday comics

Before I could get started with machine learning, I had to see if the idea was feasible at all: were there any newspaper archives online that had Krazy Kat comics?

Many of the newspapers that I initially checked didn't have any trace of Krazy Kat. I was starting to get worried until I found the archive of the Chicago Examiner (1908-1918) that the Chicago Public Library keeps online. This was exciting!

It's fortunate that I found the Chicago Examiner archive as soon as I did, because it turns out that it's not very easy to find newspapers with Krazy Kat Sunday comics in them!

After many hours of frustrating research, I was finally able to determine that Krazy Kat Sunday comics are available from the following sources:

Sunday comics:

The Chicago Examiner via the Chicago Public Library

The Washington Times via the excellent "Chronicling America" archive at the Library of Congress

Newspapers.com

HA.com

In the process of looking for Sunday comics, I also found several newspapers in various archives that have copies of the daily Krazy Kat comics. However, given the deadline I set for myself, I didn't have time to do anything other than make a list of newspapers that carry the dailies. Below is a list of newspaper archives and the year in which I found daily Krazy Kat comics:

Daily comics:

The St. Louis Star and Times (1913)

The Oregon Daily Journal (1914)

The Lincoln Star (1919)

El Paso Herald (1919)

The San Francisco Examiner (1920)

Salt Lake Telegram (1920)

The Pittsburgh Press (1920)

The Minneapolis Star (1920)

The Lincoln Star (1920)

Writing code to download thumbnails from newspaper archives

Once I had sources for newspaper scans, my next step was to download thumbnails from the archives I found. The main reason to use thumbnails over full sized images is that the average size of a thumbnail is about 4KiB while the size of a full resolution scan can be nearly 7 MiB! I was also very curious if I could detect Krazy Kat comics using only thumbnails.

My goal for each of the newspaper archives was to download as many thumbnails as I could from Sunday editions published before 1923 (as of 2019, works published before 1923 are in the public domain).

Some of the newspaper archives had better APIs for finding and fetching images than others. Surprisingly, the internal API that Newspapers.com uses for their archives was the easiest to use.

I ended up writing several different Python scripts to download each thumbnail collection individually. After seeing the similarities between them, I decided to put all the logic together into a single Python package called "krazy.py". Using this package, we can get a list of all thumbnails from known newspaper archives with code like this:

import krazy

proxies = {
    'http': 'http://localhost:3030',
}

This code has two parts. The first part handles registering the different "Finders" that I implemented, which know how to find Sunday pages:

finder = krazy.Finder(proxies=proxies)
finder.add(krazy.NewspapersCom)
finder.add(krazy.LocGov)
finder.add(krazy.ChipublibOrg)

And the second part will query all registered "Finders" for their Sunday pages, using the .sunday_pages() method:

for page in finder.sunday_pages():
    print('get-thumbnails.py {}'.format(page))
    print("\t thumbnail {}".format(page.thumbnail))
    print("\t suggested {}".format(page.suggested_name))
    print("\t full name {}".format(page.full_size))

This code in turn will call the "Finders" for the Newspapers.com, Library of Congress, and Chicago Public Library archives.

All of these classes implement the base "Source" class, which contains some syntactic sugar that makes working with all of the different finders a little easier. One thing to note is that these classes are written to make it easy to convert between the URL for a full size image (full_size), the URL for the corresponding thumbnail image (thumbnail), and the suggested local filename for that same image (suggested_name).
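To make the idea concrete, here is a heavily simplified, hypothetical sketch of what such a base class could look like. This is my reconstruction for illustration only; the real Source class in krazy.py (listed at the end of this page) has more logic.

```python
# A simplified, hypothetical sketch of the "Source" base class; the real
# krazy.py implementation has more logic, but the naming idea is the same.
class Source:
    sep = '-'             # separator used in suggested local filenames
    file_suffix = '.jpeg'
    source_id = ''
    url_template = ''     # template for the full size image URL

    def __init__(self, proxies=None):
        # Filled in by each finder, e.g.
        # ['digital.chipublib.org', 'examiner', '1917', '12', '30',
        #  'page', '31', 'item', '88888']
        self._parts = []

    @property
    def suggested_name(self):
        # The local filename is just the identifying parts joined together
        return self.sep.join(str(p) for p in self._parts) + self.file_suffix

    @property
    def full_size(self):
        # Assumes the last part is the archive's item id
        return self.url_template.format(id=self._parts[-1])
```

Each finder subclass fills in _parts as it walks its archive, and the base class turns those parts into filenames and URLs.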

Here's what all the code above looks like in a single file:

import krazy

proxies = {
    'http': 'http://localhost:3030',
}

finder = krazy.Finder(proxies=proxies)
finder.add(krazy.NewspapersCom)
finder.add(krazy.LocGov)
finder.add(krazy.ChipublibOrg)

for page in finder.sunday_pages():
    print('get-thumbnails.py {}'.format(page))
    print("\t thumbnail {}".format(page.thumbnail))
    print("\t suggested {}".format(page.suggested_name))
    print("\t full name {}".format(page.full_size))

With that in mind, let's start with the code for getting Sunday pages from the Newspapers.com archives. This code was pretty easy to write: I calculate all the dates that fall on a Sunday, then query the Newspapers.com API for the pages for each date.

class NewspapersCom(Source):
    source_id = 'newspapers.com'
    url_template = 'https://www.newspapers.com/download/image/?type=jpg&id={id}'

    @property
    def _sundays(self):
        start_day = '1916-04-23'  # a Sunday
        end_year = 1924
        date = datetime.datetime.strptime(start_day, '%Y-%m-%d')
        while date.year < end_year:
            if date.strftime('%A') == 'Sunday':
                yield date
            date += datetime.timedelta(days=7)

    def sunday_pages(self):
        urls = [
            'http://www.newspapers.com/api/browse/1/US/California/San%20Francisco/The%20San%20Francisco%20Examiner_9317',
            'http://www.newspapers.com/api/browse/1/US/District%20of%20Columbia/Washington/The%20Washington%20Times_1607'
        ]
        for url in urls:
            for day in self._sundays:
                day_path = day.strftime('/%Y/%m/%d')
                rv = self.session.get(url + day_path).json()
                if 'children' not in rv:
                    continue
                for page in rv['children']:
                    self._parts = [
                        self.source_id,
                        day.strftime('%Y'),
                        day.strftime('%m'),
                        day.strftime('%d'),
                        'id',
                        page['name']
                    ]
                    yield self

This is the code for getting Sunday pages from the Library of Congress archives. In this case, I actually take advantage of a search query for "krazy kat" and return all of those pages. I did this because the Library of Congress archive was what I first started with. If I wrote this again, I'd implement it without a search.

class LocGov(Source):
    source_id = 'chroniclingamerica.loc.gov'
    file_suffix = '.jp2'
    _base_url = 'http://' + source_id
    from_page_template = _base_url + '/lccn/{lccn}/{date}/ed-{ed}/seq-{seq}'
    _search_url = _base_url + '/search/pages/results/'
    url_template = from_page_template + file_suffix

    def sunday_pages(self):
        search_payload = {
            'date1': '1913',
            'date2': '1944',
            'proxdistance': '5',
            'proxtext': 'krazy+herriman',
            'sort': 'relevance',
            'format': 'json'
        }
        results_seen = 0
        page = 1
        while True:
            search_payload['page'] = str(page)
            # print('Fetching page {}'.format(page))
            result = self.session.get(self._search_url, params=search_payload).json()
            for item in result['items']:
                results_seen += 1
                url = item['url']
                if url:
                    yield self._bootstrap_from_url(item['url'])
            if results_seen < result['totalItems']:
                page += 1
            else:
                # print('Found {} items'.format(results_seen))
                break

And finally, here is the code for getting Sunday pages from the Chicago Public Library archives. As you can see, this is pretty involved. The supporting cast in this object are the _url method, which creates URLs for the Chicago Public Library API, and the _bootstrap_from_id method, which creates an instance of a ChipublibOrg object.

_url and _bootstrap_from_id are then used by the sunday_pages method to return objects representing Sunday pages from the Chicago Public Library archive.

class ChipublibOrg(Source):
    source_id = 'digital.chipublib.org'
    _base_url = 'http://' + source_id
    _full_size_url = _base_url + '/digital/download/collection/examiner/id/{id}/size/full'
    _thumbnail_url = _base_url + '/digital/api/singleitem/collection/examiner/id/{id}/thumbnail'
    _info_url = _base_url + '/digital/api/collections/{collection}/items/{id}/false'
    url_template = _full_size_url
    _chipublib_defaults = [
        ('collection', 'examiner'),
        ('order', 'date'),
        ('ad', 'dec'),
        ('page', '1'),
        ('maxRecords', '100'),
    ]

    def _url(self, **inputs):
        options = OrderedDict(self._chipublib_defaults)
        for key in inputs:
            if key in options:
                value = inputs[key]
                if isinstance(value, int):
                    value = str(value)
                options[key] = value
        path = '/'.join(itertools.chain.from_iterable(options.items()))
        search_url = 'http://{source_id}/digital/api/search/{path}'.format(
            source_id=self.source_id, path=path)
        return search_url

    def _bootstrap_from_id(self, item_id):
        defaults = dict(self._chipublib_defaults)
        values = {'id': item_id, 'collection': defaults['collection']}
        url = self._info_url.format(**values)
        # print('url: ' + url)
        result = self.session.get(url).json()
        fields = dict([(field['key'], field['value']) for field in result['fields']])
        for child in result['parent']['children']:
            page_number = child['title'].split(' ')[1]
            # e.g. digital.chipublib.org-examiner-1917-12-16-page-34-item-88543
            self._parts = [self.source_id, defaults['collection']]
            self._parts.extend(fields['date'].split(self.sep))
            self._parts.extend(['page', str(page_number), 'item', str(child['id'])])
            yield self

    def sunday_pages(self):
        results_seen = 0
        page = 1
        while True:
            result = self.session.get(self._url(page=page)).json()
            for record in result['items']:
                results_seen += 1
                metadata = dict([(item['field'], item['value']) for item in record['metadataFields']])
                record_date = datetime.datetime.strptime(metadata['date'], '%Y-%m-%d')
                # Krazy Kat, a comic strip by cartoonist George Herriman, ran from 1913 to 1944.
                if record_date.year < 1913:
                    continue
                if record_date.strftime('%A') != 'Sunday':
                    continue
                item_id = record['itemId']
                for sunday_page in self._bootstrap_from_id(item_id):
                    yield sunday_page
            if results_seen < result['totalResults']:
                page += 1
            else:
                break

Visually confirming images

Once I was able to get a list of thumbnail image URLs, I wrote a script to download each of those thumbnails into a local file, named using each thumbnail's suggested_name.

For example, this URL: http://digital.chipublib.org/digital/api/singleitem/collection/examiner/id/88888/thumbnail would be saved as this file: digital.chipublib.org-examiner-1917-12-30-page-31-item-88888.jpeg

At this point, I had a folder full of thumbnail images that looked something like this:

Using Finder in macOS, I scanned over each image personally. When I found a thumbnail that looked like it contained Krazy Kat, I'd use the identifier in the filename to load up the full sized image. For example, if I thought that this thumbnail contained Krazy Kat:

Then I would take the identifier from that filename (in this case "88888") and use that to load up the full sized image, which in this example would be this URL: http://digital.chipublib.org/digital/collection/examiner/id/88888
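This manual lookup step is easy to script. As a sketch (this helper and its names are my own illustration, not part of krazy.py), the item id can be pulled from a saved thumbnail's filename with a regular expression and turned back into the full size image URL:

```python
import re

# URL pattern for a full size Chicago Public Library image, as shown above
FULL_SIZE_URL = 'http://digital.chipublib.org/digital/collection/examiner/id/{id}'

def full_size_url_from_name(name):
    # Hypothetical helper: recover the item id from a thumbnail's
    # suggested filename and build the full size image URL from it.
    match = re.search(r'item-(\d+)\.jpeg$', name)
    if not match:
        return None
    return FULL_SIZE_URL.format(id=match.group(1))
```

For example, feeding in the filename from above yields the URL for item 88888.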

If the thumbnail did turn out to contain a Krazy Kat comic, then I'd tag the file in Finder. In my case, I tagged thumbnails for Krazy Kat comics with "Green", but the color doesn't matter.

After tagging about 100 images, I was ready to train a custom "image classifier" that could find even more Krazy Kat comics for me.

Training an image classifier

Initially, my plan was to use this project as an excuse to learn to use TensorFlow. However, after about 4 frustrating hours of trying to get TensorFlow, Keras, or PyTorch running, I just gave up.

Most frustrating of all was the Illegal instruction: 4 error message that I kept getting from TensorFlow. Apparently the Mac I use at home is too old for modern versions of TensorFlow?

In any event, thanks to my friend Timothy Fitz, who suggested that I use Google's AutoML, I realized that I could use a cloud service to train an image classifier instead.

With that in mind, I ended up giving Microsoft's Custom Vision service a try.

In short, Custom Vision is a wonderful service. It was easy to use and gave me exactly what I wanted.

All I had to do to build an image classifier with Custom Vision was to upload the 100 or so thumbnails that I had found with Krazy Kat Sunday comics in them, tag those images "krazy", and then upload about 100 images that did not have Krazy Kat in them.

To gather the thumbnails with Krazy Kat in them, I made a "Smart Folder" in macOS that contained all images tagged "Green".

I then selected all of the images from the Smart Folder and dragged them into Custom Vision to upload them. After tagging those images as "krazy", I repeated the process to upload thumbnails that didn't have Krazy Kat and tagged those as "Negative".

After that, I spent some time playing with Custom Vision, working on improving the rate at which it correctly recognized thumbnails with Krazy Kat. An interesting part of this exercise is that I ended up feeling like I was doing "meta-programming": rather than looking for patterns myself, I spent my time finding thumbnails that appeared to have Krazy Kat in them, but didn't.

One word of warning: using the "Advanced Training" option costs money, and it's not clear up front how much the training will cost. I ended up spending just over $180 on Advanced Training before I realized how much I had spent!

Using the image classifier to find candidates in Library of Congress archive

Once I was fairly confident in the image classification model that Custom Vision made, it was time to test it out in the real world.

Initially, I had hoped to use Custom Vision's ability to export TensorFlow models so I could run image classification locally on my computer. My main concern was that I didn't want to upload each thumbnail to Custom Vision. However, given how much trouble I had getting TensorFlow working, I decided to use Custom Vision's API directly. I was very pleased to learn that the API can take the URL for an image as an option, meaning that I could have Custom Vision fetch the thumbnails directly from the newspaper archives!

I wrote two "one off" scripts to make use of the Custom Vision API:

ms-predict.py: This was the first script I wrote; it uploaded thumbnails to the Custom Vision API. It's pretty simple, and it proved to me that the API worked.

custom-vision-find-kats-loc.py: This was the second script I wrote. I implemented the Custom Vision API calls myself because I somehow missed that Microsoft already provides a Python library. In any event, this script sent image URLs to the Custom Vision API.
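To give a flavor of the URL-based approach, here is a rough sketch of sending an image URL to the Custom Vision prediction endpoint. The endpoint, project id, iteration name, and key are placeholders, and the is_krazy threshold helper is my own addition, not the author's actual script:

```python
import json
import urllib.request

# Placeholder values; the real ones come from the Custom Vision portal.
ENDPOINT = 'https://westus2.api.cognitive.microsoft.com'
PROJECT_ID = 'your-project-id'
ITERATION = 'Iteration1'
PREDICTION_KEY = 'your-prediction-key'

PREDICT_URL = (
    '{endpoint}/customvision/v3.0/Prediction/{project}'
    '/classify/iterations/{iteration}/url'
).format(endpoint=ENDPOINT, project=PROJECT_ID, iteration=ITERATION)

def is_krazy(predictions, threshold=0.5):
    # True if the "krazy" tag's probability clears the threshold
    return any(p['tagName'] == 'krazy' and p['probability'] >= threshold
               for p in predictions)

def classify_url(image_url):
    # Ask Custom Vision to fetch and classify the image at image_url
    req = urllib.request.Request(
        PREDICT_URL,
        data=json.dumps({'Url': image_url}).encode('utf-8'),
        headers={'Prediction-Key': PREDICTION_KEY,
                 'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as rv:
        result = json.load(rv)
    return is_krazy(result['predictions'])
```

The nice part of this shape is that the thumbnail bytes never pass through your own machine; Custom Vision fetches the URL itself.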

Using a combination of both of these scripts, I was able to find another couple hundred thumbnails of Krazy Kat Sunday comics.

Writing code to download full size images

Thanks to the Custom Vision API, I finally had several hundred thumbnails on my computer. All of these thumbnails were tagged "Green" and had unique names that I could use to find their corresponding full size image.

Because they were all tagged "Green", I could use the mdfind command in macOS to get a list of all the thumbnails I'd found.

This is the command I used to get all of the tagged thumbnails:

mdfind "kMDItemUserTags == Green"

And here is an example of what the output looked like:

$ mdfind "kMDItemUserTags == Green"
...
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-12-31-ed-1-seq-49.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-05-14-ed-1-seq-49.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-04-16-ed-1-seq-50.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-03-05-ed-1-seq-50.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-02-12-ed-1-seq-29.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/chroniclingamerica.loc.gov-lccn-sn84026749-1922-01-29-ed-1-seq-27.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1917-04-01-page-38-item-83144.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1916-06-04-page-49-item-81637.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1916-06-11-page-47-item-81884.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1916-07-02-page-32-item-85927.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1916-10-15-page-40-item-93726.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1916-10-29-page-44-item-94073.jpeg
~/Projects/krazykat/fetch-kats/manual.thumbnails/digital.chipublib.org-examiner-1917-02-11-page-31-item-89081.jpeg
...

With a list of thumbnails to download, I used some code similar to the code below to take in a list of thumbnails and, using their names, determine the URL for the full size image and then download the full sized image:

import fileinput

import krazy

finder = krazy.Finder()
finder.add(krazy.LocGov)
# finder.add(krazy.NewspapersCom)
finder.add(krazy.ChipublibOrg)

for line in fileinput.input():
    print(line)
    img = finder.identify(line)
    if not img:
        continue
    if img.source_id == 'newspapers.com':
        img.headers = newspapers_headers
        img.cookies = newspapers_cookies
    # print(img.full_size)
    finder.download(img)

This code looks simple because most of the logic is hidden in the krazy Python library that I wrote to abstract out the details.

Figuring out dates for comics

Once I had several hundred comics on my computer, my next task was to start collecting them together. In the early days of Krazy Kat, the dates on which comics were published were very erratic. I considered several approaches to identifying each comic: made up names, names that have to do with the contents of the comic, "perceptual hashing", and so on. Eventually I just decided to use the dates that Fantagraphics used in their books.
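Since every suggested filename already contains the publication date, grouping downloads by date is mostly a matter of pulling the date back out of the name. This helper is my own illustration, not part of krazy.py:

```python
import re
from datetime import date

# Hypothetical helper: pull the YYYY-MM-DD publication date back out of
# a downloaded file's name, so scans of the same comic can be grouped.
DATE_RE = re.compile(r'(\d{4})-(\d{2})-(\d{2})')

def published_on(filename):
    match = DATE_RE.search(filename)
    if not match:
        return None
    return date(*(int(part) for part in match.groups()))
```

Files from different archives that map to the same date can then be treated as scans of the same comic.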

Overall, this approach worked quite well. With one exception! One of the comics that I found, published on 1922-01-29, was not in any of the Fantagraphics books I looked in! I'm not sure why this is the case, but I suspect that the archive Fantagraphics worked from somehow didn't have this comic. I hope that someone will figure out the reason why I couldn't find the comic and let me know!

Finding comics in other online archives

Whew! At this point, I had a lot of Krazy Kat comics, but I still had a sneaking suspicion that I had somehow missed some Krazy Kat comics that were already online.

Well, it turns out that my suspicions were well founded, because I found comics in three other sources:

Wikipedia, which hosts quite a few comics in the Wikimedia Commons

The Comic Strip Library, which has scans from books

Heritage Auctions, which has high resolution scans of original Krazy Kat artwork

From what I could tell, the Wikimedia Commons only had comics which were available elsewhere, so I skipped those. That said, I did put an effort into getting images from The Comic Strip Library and Heritage Auctions and included those images in the comic viewer on this site. I enjoy finding days where the same comic is available from several sources, since it's interesting to see how the comic changes when printed in different ways.

Suggestions for future work

I gave myself a month to complete this project, so I didn't investigate many of the interesting side-paths that appeared as I explored the world of comics in newspaper archives.

Since you, dear reader, have made it this far through this page, I'm assuming that you're interested enough in the work I did to maybe build upon it?

If you are indeed interested in working on a project in this space, here is a list of things I would have liked to have done, but didn't have time to do:

Things I wish I could have done

Train an image classifier to recognize the comic boundaries

I included the full newspaper scans on this site because I didn't want to crop all of those images by hand, and also because that's a job that a good image classifier could probably do automatically.

Train an image classifier that can find all types of comics

I was only interested in Krazy Kat comics that were published on Sunday. In the process of manually looking through the archives, I ran across many other interesting comics that I would have liked to have extracted too: The Katzenjammer Kids and Winsor McCay's comics in particular.

Train an image classifier that can find the daily Krazy Kat comics

From what I can tell, most of the daily Krazy Kat comics haven't been reprinted. Doing this would allow the world to see thousands of new Krazy Kat comics.

Investigate approaches to automatically restore comics

Given that we have scans of original artwork available, as well as the ability to pull the same comic from several archives, automatic comic restoration seems worth investigating. Here are some of the approaches that I would have liked to have looked into myself:

Write code to detect and correct stippling

When you compare an original Krazy Kat print with what was published in a newspaper, you can see that George Herriman scribbled in areas where he wanted the newspapers to insert stippling. In the published comics, you can see that the stippling has been smudged or distorted. It should be possible to train a machine learning model to recognize stippling and correct any errant stipples that it finds.

Try to build a Machine Learning model that could synthesize original sketches

Given how many original prints are available online, it might be possible to use Machine Learning to "dream up" what the original drawing for a comic might have looked like. If this approach worked, it would mean that we could have much higher quality prints of comics.

Color and contrast correction

All of the scans have different contrast levels, and all of them have different shades of white. It would have been nice to automatically correct contrast and color on the scans.

Build a Krazy Kat API

I would have liked to have made an API like The Star Wars API or the PokeAPI for Krazy Kat. I would have liked this API to have a list of all known comics, links to newspaper archives that held those comics, a list of who shows up in each comic, the text of the comics, and so on.

Make the image viewer a SPA

It would have been cool to make the comic viewer a "Single Page App" that used a Krazy Kat API.

Figure out why the dates that Krazy Kat comics were published on are so erratic

It would have been nice to have done some more in-depth analysis into why various newspapers published Krazy Kat comics on different dates.

Make Krazy Kat comics in the public domain available in other formats

If I had a way to automatically correct the color and contrast in comics, as well as have them automatically cropped, I would have also liked to have converted the comics to other formats, for example: Comic book archive, PDF, or Kindle.



Recommendations for newspaper archivists

If you work for a newspaper archive, here is my wish list for what I would have loved to have had in some or all of the newspaper archives I worked with:

An easy way to search for specific days of the week (Sunday in particular)

A clearer way to get thumbnails for pages. In particular, I would have loved to be able to download an entire collection of thumbnails at once.

Use .jp2

It's JPEG 2000, with room for metadata. Why isn't everybody using this?

Make better quality thumbnails

If the quality of a thumbnail image is high enough, you don't need to fetch any additional images. For what it's worth, of all the thumbnails that I saw, those from the Chicago Public Library's archive of the Chicago Examiner were the best.

Advice for building upon my work

If you're inspired by this work and want to build upon it, here is my high level advice:

I kept a log with more detailed notes on what I did. Email me and I'll send it to you!

Try not to piss off archivists: use caching and thumbnails!

Use file tagging to make your life easier

I'm happy to share the training data I used with you, just ask!

Thanks

This project would have been a lot more difficult without the help from the following people and institutions:

Full code listing

Below is the full code listing for the code in krazy.py :