Fulcro and React behave in a very performant manner for most data-driven applications. There are ways, however, in which you can negatively affect performance.

One of the most annoying performance problems when building your application is slow compile times. We all want sub-second rebuilds of our code base so that we can continue to work quickly. Here are some tips for keeping your compile times low:

Use shadow-cljs instead of the plain CLJS compiler, especially if you plan on using external Javascript libraries from npm.

Keep your namespaces small and your :require sections pruned. This is partly about the sheer amount of code to compile, but it is more about the dependency graph. Whenever a save happens, the changed file must be recompiled along with everything that depends on it, and everything that depends on those files, up the dependency tree. Large files therefore mean a high false-positive factor in these reloads (recompiling files whose code didn’t really depend on the part you changed). If each file contains just a few artifacts (e.g. functions), then less code will depend on it and the recompile tree for a given change will be smaller.

When you put large amounts of code in a single namespace, many of your other namespaces will likely come to depend on it (the probability of needing to require it grows with the number of artifacts it contains).

Changing a single line of code in that large namespace will then not only take a long time to recompile that namespace itself, but will also trigger tons of unnecessary dependent recompiles. I’ve adopted the small-namespace approach in my recent personal projects, and it successfully keeps my compile times quite low even as the source grows. Technically this could probably be solved at the compiler level, but it is a "hard problem" that may not be solved for some time (if ever).

Inline lambdas passed to child components during render are another common problem. A lambda created during render won’t compare as equal to the new one created on the next render, meaning shouldComponentUpdate checks that look at such props end up being useless.

Most of the time these kinds of functions can be fixed by generating the functions in a different context:
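A minimal sketch of the problem and one common fix. The component names and the do-thing mutation are hypothetical (do-thing stands in for a defmutation you would define elsewhere); Fulcro 3 namespaces are assumed:

```clojure
(ns app.ui.widget
  (:require
    [com.fulcrologic.fulcro.components :as comp :refer [defsc]]
    [com.fulcrologic.fulcro.dom :as dom]))

;; Problem: a fresh lambda is allocated on every render, so the prop
;; never compares equal across renders and shouldComponentUpdate
;; optimizations on children receiving it are defeated.
(defsc BadWidget [this props]
  {}
  (dom/button {:onClick (fn [] (comp/transact! this [(do-thing)]))} "Go"))

;; One fix: generate the function once, in a different context (here,
;; component-local state at mount), and reuse the same instance on
;; every render.
(defsc Widget [this props]
  {:initLocalState (fn [this _]
                     {:on-click (fn [] (comp/transact! this [(do-thing)]))})}
  (dom/button {:onClick (comp/get-state this :on-click)} "Go"))
```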

This is a potential source of performance issues in Fulcro UIs. Evaluating the UI query is relatively fast, but relative is the key word. The larger the query, the more work that has to be done to get the data for it. Remember that you compose all component queries to the root. If you do this with only joins and props, then your root query will ask for everything that your UI could ever show! This will perform poorly as your application gets large.

The solution is to ensure that your query contains only the currently relevant things. There are two primary ways to do that: union queries and dynamic queries. Fortunately, these are what you naturally want to use anyhow to get UI "routing".
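As a sketch of the union approach (the screen components and keywords here are hypothetical), a union query is written as a map from ident "table name" to subquery. During normalization and reads, only the branch matching the ident of the entity at the join is followed, so the effective query stays as small as the active screen:

```clojure
;; HomeScreen and ReportScreen are hypothetical screen components.
;; The join on :current-screen uses a union: a map whose keys are the
;; table names of the possible idents. Only the branch matching the
;; actual ident of the entity stored at :current-screen contributes
;; to the query result.
(defsc ScreenRouter [this props]
  {:query (fn [] [{:current-screen
                   {:screen/home    (comp/get-query HomeScreen)
                    :screen/reports (comp/get-query ReportScreen)}}])}
  ...)
```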

The com.fulcrologic.fulcro.routing.legacy-ui-routers namespace includes primitives for building UI routes using unions (the unions are written for you). It has a number of features, including the ability to nicely integrate with HTML5 history events for full HTML5 routing in your application.

In the dynamic query approach you use comp/set-query! to actually change the query of a component at runtime in response to user events.
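A minimal sketch of that idea (the ScreenRouter component, the :current-screen join, and the route-to! helper are all hypothetical; comp is assumed to be an alias for com.fulcrologic.fulcro.components):

```clojure
;; Swap the router's query at runtime so it joins in only the subquery
;; of the screen being routed to. After this call, the composed root
;; query no longer includes the other screens' queries at this join.
(defn route-to! [app-or-component screen-class]
  (comp/set-query! app-or-component ScreenRouter
    {:query [{:current-screen (comp/get-query screen-class)}]}))
```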

There are two "dynamic routers" that use dynamic queries to swap their query so that it includes only the currently active route. The older of these lives in the legacy-ui-routers namespace for backward compatibility, and should probably not be used for new projects.

It is also important to note that you need not normalize things that are really just big blobs of data that you don’t intend to mutate. An example of this is large reports where the data is read-only and only displayed in one place. You could write a big nested bunch of components, normalize all of the parts, and write a query that joins it all back together; however, that incurs a lot of overhead both in loading the data, and every time you render.

Instead, realize that a property query like [:report-data] can pull any kind of (serializable, if you want tools support) value from the application state. You can put a js/Date there. You can put a map there. Anything.

Furthermore, this query is super-fast since it just pulls that big blob of data from app state and adds it to the result tree. Structural sharing makes that a very simple and fast operation.
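For example, a report component might pull the entire blob with a single plain property (the component and keys here are hypothetical):

```clojure
;; :report-data is a plain prop, so whatever value sits at that key in
;; app state -- however large and deeply nested -- is handed over
;; as-is: no normalization on load, and no joins to re-walk on render.
(defsc BigReport [this {:keys [report-data]}]
  {:query [:report-data]}
  (dom/div
    (dom/h3 "Report")
    (dom/pre (pr-str report-data))))
```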