So if that’s how we need to support Chrome OS window resizing gracefully anyway, why don’t we opt in to handling orientation changes ourselves more often, without feeling bad about it?

Of course, we might not want to handle every configuration change ourselves, such as locale changes.

But even then, retaining data across configuration changes in Activities was always pretty easy, using the combination of onRetainCustomNonConfigurationInstance() and getLastCustomNonConfigurationInstance() .
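As a rough sketch (with MyData as a hypothetical placeholder for whatever needs to survive), the pattern looked something like this:

```kotlin
// Sketch only: retaining an object across configuration changes in a
// FragmentActivity. MyData is a hypothetical placeholder class.
class MyActivity : FragmentActivity() {

    private lateinit var data: MyData

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Reuse the retained instance if we're coming back from a config change.
        data = lastCustomNonConfigurationInstance as? MyData ?: MyData()
    }

    // Called by the system before a configuration change destroys the Activity;
    // the returned object is carried over to the new instance.
    override fun onRetainCustomNonConfigurationInstance(): Any = data
}
```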

Application flows

The original design

https://developer.android.com/guide/practices/ui_guidelines/activity_task_design#taking_over_back_key (since then removed: you can find it at https://android.googlesource.com/platform/frameworks/base/+/d36ad9b1ff99675dd0eca6a3fda1f52353f451a4/docs/html/guide/practices/ui_guidelines/activity_task_design.jd )

According to the original design, it was recommended not to override onBackPressed . Instead, developers were encouraged to let the system handle the application’s navigation stack, because “Activities represent a single set of operations on a given screen”.

In fact, by the original design, if an Activity grew more complex, you could even nest Activities inside an ActivityGroup (although I personally never had to work with the LocalActivityManager; Fragments had already taken over).

However, with sufficient complexity, sharing data between screens becomes inevitable, and so does communication between them.

In fact, it doesn’t even take anything complex: think of a regular Todo app with an item list screen and a detail screen. If the item is modified on the detail screen, the list screen should show a toast once you’ve navigated back.

The “standard” hacky way — startActivityForResult

Even in the fairly new Android Room with a View codelab, we encounter this unfortunately all-too-common approach.

In theory, one can look at the communication between two Activities within the same app like this:

But that’s not exactly the full story, is it?

There are multiple questions that come to mind just by looking at this simple example.

Why do we need to open a second Window for our application via a second entry point to the application just to show a different layout?

Why do we use an IPC mechanism (intent, bundle) in order to communicate data or commands between views in our own process?

Why do we even want to talk to the Android system, just because we’re trying to show a second screen, and communicate back?

The bandaid — LiveData, ViewModel, Fragments, SingleLiveEvent

Fortunately, at least Jetpack tries to help by making us realize that there’s a problem worth solving here.

Especially now that, with the introduction of the Navigation Architecture Component (despite its limitations), it’s at least official that a Single-Activity architecture is preferred.

Even without Navigation AAC though, we can replace bits and pieces and make the setup a bit nicer.

We have replaced talking to the Android system with talking to the Activity-scoped ViewModel. This is definitely an improvement, as we skip unnecessary chatter with the system itself, but it still raises questions.

If the ViewModel is supposed to belong to a specific feature, and should be cleared once we navigate “back” out of a given screen (for example, back out of the start of a given flow), then how can we clear its data? In a Single-Activity setup, the Activity’s ViewModel is effectively global.

Why do I need to know who holds the specific ViewModel instance (in this case, the enclosing Activity)? What if it’s held by a parent Fragment instead?

One step closer to the truth: a controller hierarchy

We actually don’t have to use the Activity’s ViewModel directly just because it’s a ViewModelStoreOwner.

Theoretically, we can use nested fragments, and use the parent fragment to scope the ViewModel for us.
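As a sketch (where WordViewModel is a hypothetical shared ViewModel), a child fragment could do something like this:

```kotlin
// Sketch only: scoping a shared ViewModel to the parent fragment instead of
// the Activity. WordViewModel is a hypothetical placeholder class.
class WordDetailFragment : Fragment() {

    private lateinit var viewModel: WordViewModel

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The parent "flow" fragment owns the ViewModelStore, so the shared
        // ViewModel lives exactly as long as the flow does.
        viewModel = ViewModelProvider(requireParentFragment())
            .get(WordViewModel::class.java)
    }
}
```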

Now we’ve theoretically detached the actual “flow” of “word management” into its own separate root “flow controller”.

But now the pragmatic Android developers will say,

“Hold on, Fragment transactions are already confusing, and now we’re using nested/child fragments for the hell of it?!”

And they’ve got a point. Fragments are ViewControllers (excluding “retained headless”), so why create a ViewController just to host additional ViewControllers in it?

I’m actually listing this option only because, no matter how you spin it, nested nav graphs are by design the only way Navigation AAC is able to support scoping.

Which is why I’m actually not in favor of Navigation AAC. I’m sure we can find a more reasonable solution.

An aside: Uber’s deep scope hierarchies

From https://eng.uber.com/deep-scope-hierarchies/ : The existence of shared objects between different screens means the application cannot be composed of a distinct set of Activities, unless shared objects are stored as singletons in global scope. To address this, a pattern is needed to control how objects are shared between screens and subscreens. In short, we needed an effective scoping pattern. Since none of the pre-existing options we considered met Uber’s requirements, we created our own architectural framework for our new rider app: RIBs.

Uber had realized by March 2017 that the theoretical solution to state management would be to observe state stored in the nodes of a scope hierarchy, subscribing for changes in state (the Observer pattern).

By storing state in their respective nodes, one particular state exists only in one place, and all other interested parties are automatically notified down the chain. “Stale state” is eliminated, because there is no unnecessary copying, and changes are propagated through the system/application.
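The core idea can be sketched in a few lines of plain Kotlin (a simplified, hypothetical model, not Uber’s actual RIBs API):

```kotlin
// Hypothetical sketch of the Observer-pattern idea behind "deep scope
// hierarchies": state lives in exactly one node, and interested parties
// subscribe to changes instead of keeping their own copies.
class StateNode<T>(initial: T) {
    private val observers = mutableListOf<(T) -> Unit>()

    var value: T = initial
        set(newValue) {
            field = newValue
            observers.forEach { it(newValue) } // propagate; no copies to go stale
        }

    fun observe(observer: (T) -> Unit) {
        observers += observer
        observer(value) // emit the current state on subscription
    }
}

// Two "screens" observing the same node: neither holds its own copy, so
// neither can ever see stale state.
val selectedWord = StateNode("initial")
val seenByList = mutableListOf<String>()
val seenByDetail = mutableListOf<String>()

selectedWord.observe { seenByList += it }
selectedWord.observe { seenByDetail += it }
selectedWord.value = "updated"
```

Both observers end up having seen `"initial"` followed by `"updated"`, without either screen ever copying the state for itself.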

They called this “deep scope hierarchies”: managing the creation and destruction of the scope tree within a single Activity.

It is a powerful scope and state management solution designed on top of Dagger 2 and RxJava.

However, for us, this was a bit too complex, so we opted to find our own solution for the same problem.

Maybe we could simplify this. Maybe we’ll run into a dead-end. Who knows?

Searching for the unknown: implicit parent scopes in Simple-Stack

The initial idea while searching for a new solution was that if you have a single backstack that contains your navigation state and history, then you’re able to determine the scopes that should exist depending on where you are in the application.

Therefore, any screen that exists before our current screen is technically a parent of our current screen. So why shouldn’t we be able to see its scope, and whatever services exist there?
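In plain Kotlin, such a lookup could be sketched roughly like this (a simplified, hypothetical model; Simple-Stack’s actual API differs):

```kotlin
// Hypothetical sketch of "implicit parent scopes": every screen on the
// backstack owns a scope, and a lookup from the current screen walks
// backwards through the navigation history.
data class Screen(
    val name: String,
    val scope: MutableMap<String, Any> = mutableMapOf()
)

class Backstack(private val history: List<Screen>) {
    // Search the current screen's scope first, then each previous screen.
    @Suppress("UNCHECKED_CAST")
    fun <T> lookup(serviceTag: String): T? {
        for (screen in history.asReversed()) {
            screen.scope[serviceTag]?.let { return it as T }
        }
        return null
    }
}

val list = Screen("WordListScreen").apply { scope["wordController"] = "WordController" }
val detail = Screen("WordDetailScreen")
val backstack = Backstack(listOf(list, detail))

// The detail screen can see the list screen's service only because the list
// screen happens to sit below it in the history.
val controller: String? = backstack.lookup("wordController")
```

Note that the lookup succeeds purely because of where the screens happen to sit in the history, which is exactly the implicit assumption discussed below.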

While this makes sense and works alright at first, we are technically assuming that “when a particular scope exists, we must be in a given state, and a previous scope must also exist in a given state”.

There are a lot of assumptions in there. Uber has previously analyzed such an approach, and said:

From https://eng.uber.com/deep-scope-hierarchies/ : With Scoop, scopes are correlated with the navigation stack: going deeper in the navigation stack nests a scope below the current scope. Scoop’s design provides convenient navigation at the cost of encouraging greater coupling from controller to controller, a pattern we were determined to avoid. In RIBs, unlike Scoop, the navigation stack and scopes are decoupled.

In certain scenarios, the same view can be navigated to from multiple views, or from the same view but with different state, or maybe even from distinct flows. These cases require special care.

A step forward: shared parent scopes across screens

For these scenarios, implicit knowledge about the parent screen is just too tricky. However, we can know that our screen is supposed to exist within a given scope, if it is created for a given flow.

Interestingly, finding the flow itself is the hardest part. Screens are evident from the design, but flows? You have to name the concept yourself, otherwise it stays implicit. (And kudos to Vasiliy Zukanov for telling me to watch the linked video!)

So for a given screen, if it’s associated with a scope, we can set up its expected hierarchy. In this setup, one could say we’re still coupling navigation state to scopes — and it is true — but this results in simpler configuration, and more importantly, automatic lifecycle management for scopes and for the services these scopes contain.

So what we can do is define that both of our screens expect to be part of the Word flow, meaning their common parent is the WordScope, which has a WordController bound to it.
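A simplified, hypothetical model of this idea in plain Kotlin (Simple-Stack’s actual scoping API differs in its details):

```kotlin
// Hypothetical sketch of explicit shared parent scopes: each screen declares
// which flow scope it expects, and a scope manager creates the scope when the
// first such screen appears and destroys it when the last one leaves.
interface ScopedScreen {
    val name: String
    val parentScopeTag: String
}

data class WordListScreen(override val name: String = "WordListScreen") : ScopedScreen {
    override val parentScopeTag = "WordScope"
}

data class WordDetailScreen(override val name: String = "WordDetailScreen") : ScopedScreen {
    override val parentScopeTag = "WordScope"
}

class ScopeManager(private val createService: (String) -> Any) {
    private val scopes = mutableMapOf<String, Any>()

    // Diff the set of required scopes against the existing ones.
    fun onBackstackChanged(backstack: List<ScopedScreen>) {
        val required = backstack.map { it.parentScopeTag }.toSet()
        (scopes.keys - required).forEach { scopes.remove(it) }              // destroy obsolete scopes
        (required - scopes.keys).forEach { scopes[it] = createService(it) } // create missing scopes
    }

    fun hasScope(tag: String) = tag in scopes
}

val scopeManager = ScopeManager { tag -> "WordController(for=$tag)" }

// Both screens share the WordScope, so the controller survives navigation
// between them...
scopeManager.onBackstackChanged(listOf(WordListScreen()))
scopeManager.onBackstackChanged(listOf(WordListScreen(), WordDetailScreen()))
val shared = scopeManager.hasScope("WordScope")

// ...and the scope is destroyed once no screen on the backstack needs it.
scopeManager.onBackstackChanged(emptyList())
val cleared = !scopeManager.hasScope("WordScope")
```

This keeps the convenience of tying scope lifetimes to navigation, while the parent relationship is declared explicitly by each screen rather than inferred from whatever happens to be below it in the history.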