Background

In the past weeks I spent quite some time working on BitQ&A, my side project that explores the possibilities that digital currencies offer. BitQ&A is a Phoenix application that I initially created for the 2nd Bitcoin SV virtual hackathon. It was also the first project where I had an opportunity to try LiveView (a brilliant tool that brings rich, real-time UX to Phoenix).

I had already used LiveView for numerous parts of the site (such as real-time Markdown previews) when a feature request for real-time auto save came in. Knowing how complicated this would be to achieve with the tools I had used in the past, I decided to bite into this one. The more I thought about the problem, the more convinced I was that LiveView was the way to go due to its speed and efficiency. I was aware that I would need to make sure the constant inflow of changes wouldn't kill my backend or the database, but I was curious whether I could make it work.

The devil is in the details....

This article will describe a simplified version of the solution. In order to keep things simple we will save changes into the original schema. This comes with some drawbacks: changes will be immediately visible to everyone, and we will run into issues when validations fail. A real-life solution would require something a bit smarter: possibly a separate draft schema (the approach used on BitQ&A), "forward" revisions or something similar. However, I don't think this part is essential to explaining the approach that I took. Keep it simple, stupid! Right? Right!

I also created a GitHub repo where you can see the working code, run it and play with it.

If you already know how LiveView works feel free to skip to the interesting part.

More or less boring steps (ec10a3)

We need a new Phoenix project with a schema that we'll work with. I decided to go with a Post with title and body (but we could use any number or type of fields):

```elixir
# ...
schema "posts" do
  field :body, :string
  field :title, :string

  timestamps()
end
# ...
```

If you are not sure how to get this far check Phoenix documentation. It is great. Really... you should read it.
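For reference, the matching migration would look roughly like this. This is a sketch of what `mix phx.gen.html` would generate; the module name is an assumption based on the project name used later in this article:

```elixir
defmodule PhoenixAutoSave.Repo.Migrations.CreatePosts do
  use Ecto.Migration

  def change do
    # Creates the posts table backing the schema above.
    create table(:posts) do
      add :title, :string
      add :body, :string

      timestamps()
    end
  end
end
```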

We want LiveView! (50709f)

There will be no juice without LiveView, so let's bring it in by adding it to `mix.exs`:

```elixir
# ...
defp deps do
  [
    # More dependencies ...
    {:phoenix_live_view, "~> 0.9.0"}
  ]
end
# ...
```

This was the most recent version at the time of writing. Check LiveView's page on hex.pm for up-to-date information.

We will also need to update a few things in our project to be able to use it: importing its JavaScript part, adding the endpoint that will be used to establish the socket connection, bringing in helpers, and so on. LiveView has great documentation that describes these steps in detail.
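For LiveView 0.9, the Elixir side of that wiring looked roughly like this. This is a sketch based on the installation guide of that era (the JavaScript import lives in `assets/js/app.js`); details vary by version, so follow the official docs:

```elixir
# config/config.exs - LiveView signs its sessions with a salt
config :phoenix_auto_save, PhoenixAutoSaveWeb.Endpoint,
  live_view: [signing_salt: "REPLACE_WITH_A_REAL_SALT"]

# lib/phoenix_auto_save_web/endpoint.ex - the socket LiveView connects to
socket "/live", Phoenix.LiveView.Socket
```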

Good!

Let the real fun begin! (dde992)

The idea is to let LiveView take control of the edit form for our post schema and react to any change to the form. Changes will be sent to the server, where we will be able to save them. This will be very fast and efficient, since LiveView communicates over WebSockets and tries to be smart about the data it sends over the wire. Also, with LiveView taking over the entire form, adding more fields becomes very easy: we only need to make sure they appear on the form. And that's it... Easy peasy!

We will need to change the extension of the template in question (`form.html.eex` to `form.html.leex`) and make LiveView trigger an event on every change to the form (the `phx_change: :change` part):

```eex
<%= f = form_for @changeset, @action, phx_change: :change %>
```

LiveView also has some strict requirements about the structure of the form in templates. You can learn more about that in the documentation.
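For our schema, the whole `form.html.leex` might end up looking something like this. A sketch only; note that `form_for` opens the `<form>` tag, which we close ourselves:

```eex
<%= f = form_for @changeset, @action, phx_change: :change %>
  <%= label f, :title %>
  <%= text_input f, :title %>

  <%= label f, :body %>
  <%= textarea f, :body %>

  <%= submit "Save" %>
</form>
```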

We also need to change how the form is rendered in its parent template (`edit.html.eex` in our case):

```eex
<%# ... %>
<%= live_render(
  @conn,
  PhoenixAutoSaveWeb.PostFormLive,
  session: %{
    "changeset" => @changeset,
    "action" => Routes.post_path(@conn, :update, @post),
    "post" => @post
  }
) %>
<%# ... %>
```

Now we need a module that implements the server-side logic. It will generally have a function that is called when the LiveView connection is established, and one or more functions that handle events. The most interesting part in our case is the callback that reacts to changes in the form:

```elixir
# ...
def handle_event("change", %{"post" => post_params}, socket) do
  {:ok, post} = Posts.update_post(socket.assigns.post, post_params)

  {
    :noreply,
    socket
    |> assign(:changeset, Posts.change_post(post))
    |> assign(:post, post)
  }
end
# ...
```

Let me explain this in a bit more detail:

- The first argument is the event name. It comes from the `phx_change: :change` option we added to the form template; `:change` simply becomes `"change"`.
- The second argument holds the current values in the form. We are only interested in the `post` part, so we pattern match on it.
- The last argument is the socket, where the current state of the LiveView is stored.
- In the first line of the body we take the current post and update it with the values from the form. `Posts.update_post/2` also saves the updated post to the database, essentially making the change persistent.
- At the end we update the socket with the new version of the post, and our job is finished.

Now every time a change happens on the form it gets sent over and is saved. Amazing! (not so much... as you will see soon)
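The module also needs a `mount` callback that picks up the values passed in from `edit.html.eex`. With LiveView 0.9's session-based `live_render`, a minimal version could look like this (a sketch; the session keys match the ones shown above, and error handling is omitted):

```elixir
# Called when the LiveView connection is established; the session map
# carries the values we passed to live_render/3 in edit.html.eex.
def mount(%{"changeset" => changeset, "action" => action, "post" => post}, socket) do
  {
    :ok,
    socket
    |> assign(:changeset, changeset)
    |> assign(:action, action)
    |> assign(:post, post)
  }
end
```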

Full code of the LiveView.

Database gods will send me to hell unless.... (f23ec8)

You probably already saw the nasty part... The approach above will result in a database UPDATE query for literally every letter you type while writing your fancy new blog post. Believe me, Postgres will hate you for that! And we wouldn't want that, would we? Imagine having hundreds of users writing their posts at the same time. Things would go very bad pretty soon...

So how do we solve that? Well.... we can decide that auto save won't work and go on with our lives. Not so fast!

What if we kept the changes in memory for a while and only saved them to the database periodically? Let's try:

```elixir
# ...
defp schedule_save() do
  Process.send_after(self(), :store, 10 * 1_000)
end

def handle_event("change", %{"post" => post_params}, socket) do
  {
    :noreply,
    socket
    |> assign(:changeset, Posts.change_post(socket.assigns.post, post_params))
  }
end

def handle_info(:store, socket) do
  {:ok, post} = Posts.update_post(socket.assigns.changeset)

  schedule_save()

  {
    :noreply,
    socket
    |> assign(:changeset, Posts.change_post(post))
    |> assign(:post, post)
  }
end
# ...
```

Quite a lot is going on here. Let's go step by step:

- `schedule_save/0` schedules a save operation 10 seconds (arbitrary, could be anything) after being called. We initially do that when the LiveView connection is established.
- The `:change` event no longer triggers a fully-fledged save operation. It only stores the changes into the changeset that is part of the socket. This is a fast in-memory operation.
- The periodic `:store` message is processed by `handle_info/2`, which essentially does what the `:change` handler did before: it updates the post, saves it to the database, and updates the socket.
- Finally, we schedule the next save operation by calling `schedule_save()`.
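Since `schedule_save/0` must first be called when the connection is established, `mount` is the place to kick it off. A sketch, assuming the session keys passed in from `edit.html.eex`:

```elixir
def mount(%{"changeset" => changeset, "action" => action, "post" => post}, socket) do
  # Kick off the periodic save loop as soon as the LiveView starts.
  schedule_save()

  {
    :ok,
    socket
    |> assign(:changeset, changeset)
    |> assign(:action, action)
    |> assign(:post, post)
  }
end
```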

Changes are now tracked in real-time and persisted periodically. Great user experience and Postgres will still be our friend. Much better!

But what if....

The current approach works fine but can lead to data loss in some cases. Imagine this scenario:

1. The author opens the form and starts writing the post. Changes are persisted periodically and life is good.
2. At some point a database update happens.
3. Immediately after that, our author gets a moment of supernatural inspiration and writes a huge amount of text in a single second.
4. Supernatural powers apparently have side effects, and our author closes the browser window before LiveView manages to save the most recent changes.

Result: all changes since the last save, the whole supernatural-powers-driven moment of productivity, are lost. Not good!

How can we solve this? We should try to detect the window close event and do one final save when that happens. Let's give it a try:

```elixir
# ...
def terminate(_reason, socket) do
  {:ok, %Post{}} = Posts.update_post(socket.assigns.changeset)

  :ok
end
# ...
```

- The `terminate/2` callback is called when the LiveView connection is terminated. This can happen for many reasons: a timeout, the user navigating to another page, the browser window being closed, ...
- In our case we are not interested in the termination reason (the first argument): whatever happened, we want to save the data.
- The post is saved using `Posts.update_post/1`, exactly as in `handle_info/2`.
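The changeset-accepting `Posts.update_post/1` is not part of the context that the Phoenix generators produce, so it needs a small addition. A sketch (the name simply mirrors the generated `update_post/2`):

```elixir
# In the Posts context: persist an already-built changeset,
# as opposed to update_post/2, which builds one from params.
def update_post(%Ecto.Changeset{} = changeset) do
  Repo.update(changeset)
end
```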

New posts? (f12a76)

What about new posts? Our approach assumes that the post already exists in the database, which is not the case on the create form. One way to solve this would be to save an empty post before loading the form, but that is obviously not ideal.

A proper solution would be to use one of the approaches I mentioned at the beginning: drafts, "forward" revisions, ...
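As an illustration of the draft idea, a separate schema could hold in-progress content independently of the real post. Hypothetical names only; this is a sketch, not the BitQ&A implementation:

```elixir
schema "post_drafts" do
  field :title, :string
  field :body, :string

  # A draft may or may not point at an existing post yet;
  # a nil post_id means the post has not been created.
  belongs_to :post, PhoenixAutoSave.Posts.Post

  timestamps()
end
```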

I hope that you will find this write-up useful and maybe add auto save to one of your projects. Do you have any comments or ideas, or did you notice something that could have been done better? Do not hesitate to comment below!

Until next time!