Angular Universal is a great solution for server-side rendering Angular applications. It allows us to set meta tags for SEO and for sharing on social networks, but most importantly it renders our page on the server, so the user receives an already rendered page, which means faster and smoother performance.

However, most of the time developers don’t use all the advantages of SSR. For example, in some cases they guard API calls with isPlatformBrowser, so the server renders the page without data and the client renders it again once the data is fetched on the frontend, which leads to the so-called “flickering” glitch. In other cases the server (SSR), while rendering the page, makes the request to the API, but the same request is made on the frontend as well, which this time means a double request for data that has already been fetched on the server.

Instead of all this, we should let Universal perform all GET requests on the server, store the results somewhere, and when the client (frontend) makes the same request, it will receive the already fetched data and won’t make any additional request to the API server.

Default solution: TransferHttpCacheModule

Yes, there is a dedicated module, TransferHttpCacheModule, which does exactly what we want. It registers an interceptor, and when SSR fetches the data, it saves it in the state; the frontend then gets that data from the state without making an additional request.

All you need to do is add TransferHttpCacheModule from @nguniversal/common to the imports array of your App module.

Then, import ServerTransferStateModule from @angular/platform-server in your Server module.
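As a sketch, the two registrations might look like this (module and appId names here are illustrative, assuming a standard Angular CLI Universal layout; bootstrap and declarations are omitted for brevity):

```typescript
// app.module.ts -- the browser-side App module
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { TransferHttpCacheModule } from '@nguniversal/common';

@NgModule({
  imports: [
    // 'serverApp' must match the appId used by the server build
    BrowserModule.withServerTransition({ appId: 'serverApp' }),
    TransferHttpCacheModule,
  ],
})
export class AppModule {}

// app.server.module.ts -- the Server module
import { ServerModule, ServerTransferStateModule } from '@angular/platform-server';

@NgModule({
  imports: [AppModule, ServerModule, ServerTransferStateModule],
})
export class AppServerModule {}
```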

After this, if there are no blocking parts like server/browser checks, you will see that the page gets rendered but there are no calls in the network tab, because Angular Universal has already fetched the data and passed it to the frontend as state. However, there’s one thing to consider: this is all it does, nothing more. In many cases we want to write our own caching mechanism, which unfortunately is not possible with this module. The good news is that we can write our own interceptors and read/write the state ourselves.

The problem

As an example (which was a real issue for me), consider the following situation. When any user opens a public page with public data, SSR makes a request to an API endpoint, gets the data, renders the page and sends it back to the user. Now imagine you have a very large user base and the same page is requested 10–20 times per second. Every second SSR will make a request, get mostly the same data and hand it to transferState. This is already bad, because we are making too many requests even though we know the data is most likely the same.

Now suppose SSR fetches some public data from a third-party server, say a list of US presidents, and that third-party server has a limit of 10 requests per second from one IP address. If more than 10 people try to open that page at the same time, SSR will make 10+ requests to the third-party API and most probably get banned. The thing is, when all requests are handled by the server, the server itself is just a machine with its own IP, so all requests come from the same IP address.

To solve this particular issue, and the issue of requesting the same, unchanged data again and again, we need to write our own logic around transferState.

Caching and manually managing transferState

First, let's write the same (or a very similar) interceptor to the one used in TransferHttpCacheModule.

1. Create a new file serverstate.interceptor.ts

When SSR gets the data from the API, the interceptor uses the request URL as the key and stores the actual response body in a special object (transferState).
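The embedded snippet is not reproduced here; a minimal sketch of such an interceptor could look like the following (the class name ServerStateInterceptor is illustrative, and the import locations assume the Angular versions where TransferState lives in @angular/platform-browser):

```typescript
// serverstate.interceptor.ts -- registered only in the Server module
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HttpResponse } from '@angular/common/http';
import { TransferState, makeStateKey } from '@angular/platform-browser';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';

@Injectable()
export class ServerStateInterceptor implements HttpInterceptor {
  constructor(private transferState: TransferState) {}

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    return next.handle(req).pipe(
      tap(event => {
        // Once the full response arrives, store its body under the request URL
        if (event instanceof HttpResponse) {
          this.transferState.set(makeStateKey(req.url), event.body);
        }
      })
    );
  }
}
```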

2. Register serverstate.interceptor.ts in Server module

```typescript
providers: [{
  provide: HTTP_INTERCEPTORS,
  useClass: ServerStateInterceptor,
  multi: true
}],
```

3. Now all responses to SSR requests will be stored in transferState. We need to create another interceptor, this time for the frontend, so that instead of re-fetching the data it grabs it from the state if it exists there.

Create a new file browserstate.interceptor.ts

First, we check whether the request method is GET; if not, we pass the request on to the next interceptor or to the HTTP client.

Then we try to get the saved data from transferState. If there is data, we create a new HttpResponse object with it and return it, so no other interceptor will intercept this request. If there is nothing in transferState for the given key (URL), we pass the request on to the next interceptor or the HTTP client to actually make the request.
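The steps above could be sketched like this (BrowserStateInterceptor is an illustrative name; the import locations assume the same Angular versions as before):

```typescript
// browserstate.interceptor.ts -- registered only in the App (browser) module
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HttpResponse } from '@angular/common/http';
import { TransferState, makeStateKey } from '@angular/platform-browser';
import { Observable, of } from 'rxjs';

@Injectable()
export class BrowserStateInterceptor implements HttpInterceptor {
  constructor(private transferState: TransferState) {}

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // Only GET requests were stored by the server-side interceptor
    if (req.method !== 'GET') {
      return next.handle(req);
    }
    // If the server already stored a response body for this URL, short-circuit
    const storedBody = this.transferState.get(makeStateKey(req.url), null);
    if (storedBody) {
      return of(new HttpResponse({ body: storedBody, status: 200 }));
    }
    return next.handle(req);
  }
}
```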

4. Add BrowserStateInterceptor in your App module

```typescript
providers: [
  {
    provide: HTTP_INTERCEPTORS,
    useClass: BrowserStateInterceptor,
    multi: true,
  }
],
```

5. And finally, remove TransferHttpCacheModule from your App module.

So now we have almost the same features as TransferHttpCacheModule, but with our own custom solution. However, this doesn’t yet solve our problem with caching and frequent requests.

Caching

The idea is simple. When a user opens a page for the first time, SSR makes a request to the API server, gets the data, saves it in transferState AND also saves it under the same key (the request URL) in some local database. For subsequent user requests we check whether there is an entry in our local database for the given key (request URL). If yes, we save it in transferState and return it as an HttpResponse, so no actual request to the API server is made. If not, we repeat the first step: fetch from the API, save in transferState and in the local database.

You can implement your own logic for the local server-side database (it can even be a simple array or object). I’m going to use the memory-cache node module.

npm i memory-cache

You may also want to install @types/memory-cache for better type checking.

Include it in your serverstate.interceptor.ts file

```typescript
import * as memoryCache from 'memory-cache';
```

First, let's modify the logic inside next.handle() to add responses from the API server to our local database.

Change the following part (the memoryCache.put call is the newly added line):

```typescript
if (event instanceof HttpResponse) {
  this.transferState.set(makeStateKey(req.url), event.body);
  memoryCache.put(req.url, event.body);
}
```

And before returning next.handle(), we need to check our local DB and return the value from there if it exists:

```typescript
const cachedData = memoryCache.get(req.url);
if (cachedData) {
  this.transferState.set(makeStateKey(req.url), cachedData);
  return of(new HttpResponse({ body: cachedData, status: 200 }));
}
return next.handle(req).pipe(
  .....
```

With this, we are almost ready. Now only one request from SSR to the API server will be made, and all users will get the saved data from our SSR server’s local database. However, this brings another issue: we probably don’t want to keep this data in our local DB forever; at the very least we might want to update it sometimes. We could create a function that clears the whole local DB, or selected keys from it, but we would need to trigger that function somehow (maybe with an API call). Or we can do even better: we can set a lifetime and invalidate our local data after that time. The memory-cache module supports keys with an expiration time. If you use a custom solution, you can implement this with setTimeout().
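If you go the dependency-free route, one alternative to setTimeout() is to store a timestamp alongside each entry and check it on read, which avoids keeping a timer alive on the server. A hypothetical sketch (ExpiringCache is not part of any library):

```typescript
// A hypothetical dependency-free cache with per-entry TTL.
// Entries are invalidated lazily: expiry is checked on read,
// so no setTimeout() is left pending.

interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch milliseconds
}

class ExpiringCache<T> {
  private store = new Map<string, CacheEntry<T>>();

  put(key: string, value: T, ttlMs: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): T | null {
    const entry = this.store.get(key);
    if (!entry) {
      return null;
    }
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // stale: drop it and report a miss
      return null;
    }
    return entry.value;
  }
}

// Example: cache a response body under its request URL for 5 minutes
const cache = new ExpiringCache<object>();
cache.put('/api/presidents', { names: ['Washington', 'Adams'] }, 1000 * 60 * 5);
```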

Now let's modify the code again so that each response stored in our local DB stays valid for 5 minutes. During those 5 minutes all users will get data from our local database, so no API calls will be made. After the local data is invalidated, a new request will trigger a new API call, and the fresh data will be stored in our local DB for another 5 minutes, and so on.

```typescript
if (event instanceof HttpResponse) {
  this.transferState.set(makeStateKey(req.url), event.body);
  memoryCache.put(req.url, event.body, 1000 * 60 * 5); // 5 minutes
}
```

Now, if you run your application, you will see that nothing is working! There will be a seemingly infinite load (actually it will take 5 minutes in this case).

Actually, the problem is not with memory-cache or your custom solution. The problem is with how Angular Universal handles async code.

Angular Universal won’t complete rendering the page while there are unfinished async tasks

The memory-cache module uses setTimeout() internally, so in our case Universal will wait 5 minutes for the task queue to become empty, and only after that will it finish its job.

As a final step, to solve this issue we can tell Angular to run this code outside of the Angular zone.

```typescript
constructor(private transferState: TransferState, private ngZone: NgZone) {}

....

return next.handle(req).pipe(
  tap(event => {
    if (event instanceof HttpResponse) {
      this.transferState.set(makeStateKey(req.url), event.body);
      this.ngZone.runOutsideAngular(() => {
        memoryCache.put(req.url, event.body, 1000 * 60 * 5); // 5 minutes
      });
    }
  })
);
```

So now, instead of the default and basic TransferHttpCacheModule, we have a fully functioning custom transferState mechanism that can be modified in many other ways.

Final code for the serverstate interceptor and the browserstate interceptor.

Thanks for reading.