Exporting my Slack data to an Org mode file

Story

In 2017, I tried to use Slack as some sort of todo list, with channels serving as categories, messages being stuff to process, and myself being the only member. It did not work out, partly because I didn't have internet on my phone; I abandoned the personal Slack workspace shortly after.

My personal notes system has evolved a lot, and is now a central Git repository with a bunch of Org files in it. However, I never actually exported the data on the Slack workspace—some notes were still only available there.

That's why I'd like to move them to my notes repository—this only has to be done once.

First I tried to do it manually, checking the timestamp of each message and writing the text into a new Org file, categorized per channel. This is somewhat doable, as I only had about 3 months' worth of intermittent messages (in 20 channels) to move. Still, "checking the timestamp" involves hovering my cursor on the message and hoping it shows me the full timestamp—I want to keep the seconds, as throwing away archival data feels wrong. Doing this quickly became tedious, so I started trying to work on the exported data directly instead.

Exporting the data from Slack

Workspace → Administration → Workspace settings

Import / Export Data

Export → Choose "Entire workspace history"

The data is in a zip file, structured like this:

```
slack-export
├── a_channel
│   ├── 2017-02-14.json
│   └── 2017-03-02.json
├── another_channel
│   ├── 2017-02-14.json
│   ├── 2017-02-19.json
│   ├── 2017-02-21.json
│   └── 2017-03-02.json
├── channels.json
├── integration_logs.json
└── users.json
```

Messages from each channel are in that channel's own folder. channels.json contains metadata for all channels, and users.json contains all users in the workspace; I don't care about integration_logs.json, but it seems to list the installed Slack apps.

Starting to write it

The code I wrote grew organically. Starting from:

```elisp
(defun my/file->json (file)
  "Return contents of FILE read through `json-read'."
  (save-match-data
    (with-temp-buffer
      (insert-file-contents-literally file)
      (decode-coding-region (point-min) (point-max) 'utf-8)
      (goto-char (point-min))
      (json-read))))

(with-temp-file "slack.org"
  (insert "#+COLUMNS: %ITEM %CREATED %TOPIC %PURPOSE\n\n")
  (let ((channels (append (my/file->json "channels.json") nil)))
    (seq-doseq (channel channels)
      ;; Insert channel information
      (let-alist channel
        (insert "* =" .name "=\n")))))
```

I can then run eval-buffer to update slack.org for me to explore.

The final logic

The main script

```elisp
(with-temp-file "slack.org"
  (insert "#+COLUMNS: %ITEM %CREATED %TOPIC %PURPOSE\n\n")
  (let ((channels (append (my/file->json "channels.json") nil)))
    (seq-doseq (channel channels)
      ;; Insert channel information
      (let-alist channel
        (insert "* =" .name "=\n")
        (my/insert-properties
         `((created . ,(my/unix-time-to-iso8601-local .created))
           ;; `let-alist' does not work with `() syntax
           ,@(unless (equal "" .topic.value)
               (list (cons 'topic .topic.value)))
           ,@(unless (equal "" .purpose.value)
               (list (cons 'purpose .purpose.value))))))
      ;; Insert events / messages
      (seq-doseq (event (my/get-channel-events channel))
        (my/insert-event event)))))
```

The logic is roughly:

For each channel, insert its metadata and its messages.

For each message, insert its metadata and its text, except if it's a list of files, in which case insert all the filenames.

While inserting the text, replace user and channel mentions (that are exported as IDs) with their names, and format links as Org mode.
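The per-message step above is handled by my/insert-event. Its real definition is in the full code; a sketch following the described logic, with hypothetical field handling (Slack's ts for the timestamp, files for uploads), might be:

```elisp
;; Sketch of `my/insert-event', following the steps above:
;; insert the timestamp as metadata, then either the filenames
;; (for file uploads) or the processed message text.
(defun my/insert-event (event)
  "Insert EVENT, a message alist, as an Org subheading."
  (let-alist event
    (insert "** " (my/unix-time-to-iso8601-local .ts) "\n")
    (if .files
        ;; A list of files: insert each file's name instead of text
        (seq-doseq (file .files)
          (insert (alist-get 'name file) "\n"))
      (my/insert-text .text))))
```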

Other comments

```elisp
(defun my/insert-properties (alist)
  "Insert ALIST as an Org property drawer."
  (insert ":PROPERTIES:\n")
  (map-do (lambda (k v)
            (insert ":" (upcase (format "%s" k)) ": " v "\n"))
          alist)
  (insert ":END:\n\n"))
```

Omitting items from an alist conditionally is easier than doing the same with a string, hence this little helper.
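For instance, with a backquoted alist an empty value simply contributes no entry at all, via ,@ splicing (the timestamp here is just an example value):

```elisp
;; An empty topic splices in nothing, so no :TOPIC: line is inserted:
(let ((topic ""))
  (my/insert-properties
   `((created . "2017-02-14T09:00:00+0800")
     ,@(unless (equal "" topic)
         (list (cons 'topic topic))))))
;; Inserts:
;;   :PROPERTIES:
;;   :CREATED: 2017-02-14T09:00:00+0800
;;   :END:
```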

```elisp
(defun my/insert-text (text)
  "Insert TEXT with necessary newlines added amongst other processing."
  (save-match-data
    (insert
     (with-temp-buffer
       (insert text "\n\n")
       (goto-char (point-min))
       (while (re-search-forward "<\\(.*?\\)>" nil t)
         (let ((matched (match-string 1)))
           (cond ((string-prefix-p "@" matched)
                  (replace-match
                   (format "=@%s="
                           (alist-get 'name
                                      (my/get-user (substring matched 1))))
                   t t))
                 ((string-prefix-p "http" matched)
                  (replace-match (format "[[%s]]" matched) t t))
                 ((string-prefix-p "#" matched)
                  (replace-match
                   (format "=#%s="
                           (alist-get 'name
                                      (my/get-channel
                                       ;; Channel IDs are 9 digits
                                       ;; + 1 for the #
                                       (substring matched 1 10))))
                   t t)))))
       (buffer-string)))))
```

Slack exports user and channel mentions as <@user ID> and <#channel ID> , so to make it more readable I extracted the names from their respective JSON files. Links are exported as <http://example.com> , which doesn't work well in Org, so I also replace that with the Org syntax.

It is easier to work with buffers in Emacs than with strings, which is why I did the processing in another temporary buffer.
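The lookup helpers my/get-user and my/get-channel aren't shown here either. Assuming users.json and channels.json each contain an array of objects with id and name fields (as in my export), minimal versions could be:

```elisp
;; Hypothetical lookup helpers: find the user or channel alist
;; whose `id' field matches ID.  `seq-find' works directly on the
;; vector that `json-read' returns for a top-level array.
(defun my/get-user (id)
  "Return the user alist whose id equals ID."
  (seq-find (lambda (user) (equal id (alist-get 'id user)))
            (my/file->json "users.json")))

(defun my/get-channel (id)
  "Return the channel alist whose id equals ID."
  (seq-find (lambda (channel) (equal id (alist-get 'id channel)))
            (my/file->json "channels.json")))
```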

```elisp
(defun my/file->json (file)
  "Return contents of FILE read through `json-read'."
  (save-match-data
    (with-temp-buffer
      (insert-file-contents-literally file)
      (decode-coding-region (point-min) (point-max) 'utf-8)
      (goto-char (point-min))
      (json-read))))

(defun my/array-files->json (&rest files)
  "Like `my/file->json', except that top-level arrays are merged."
  (cl-reduce (lambda (json-a json-b)
               (cl-merge 'list json-a json-b
                         (lambda (elem-a elem-b)
                           (< (string-to-number (alist-get 'ts elem-a))
                              (string-to-number (alist-get 'ts elem-b))))))
             (mapcar #'my/file->json files)))
```

json-read changes match data, so it needs to be wrapped in a save-match-data . This caused me a few minutes of pain as I tried to figure out why my (while (re-search-forward ...) (replace-match ...)) didn't work.

my/file->json is pretty straightforward: it just runs json-read on a file. my/array-files->json is less so. It is used to merge JSON arrays together: as messages of the same channel are stored as multiple arrays in multiple files, getting all messages of a channel requires merging them. We use cl-merge to do the actual merging (the inner lambda is the comparison function that cl-merge requires for its magic), and cl-reduce to make the two-input cl-merge work on the whole list of arrays.
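The same cl-reduce + cl-merge pattern is easier to see on plain sorted lists of numbers:

```elisp
;; Merge three sorted lists into one, two at a time.
;; `cl-merge' is destructive, so build fresh lists with `list'.
(cl-reduce (lambda (a b) (cl-merge 'list a b #'<))
           (list (list 1 4 7) (list 2 5) (list 3 6)))
;; => (1 2 3 4 5 6 7)
```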

```elisp
(defun my/get-channel-events (channel)
  "Get events for CHANNEL.

CHANNEL can be either a string for its name, or an alist, in
which case the `name' property is used."
  (let ((name (cond ((stringp channel) channel)
                    ((json-alist-p channel) (alist-get 'name channel))
                    (t (error "CHANNEL must be a string or a `json-alist-p'")))))
    (apply #'my/array-files->json
           (directory-files name t "json$"))))
```

This is where my/array-files->json comes in. I called them "events" here, but I later realized that all of them have the type "message".

Full code