Let me preface this by saying that there is no clear-cut winner and no single “best” solution. Multiple solutions stand out to me as feature-rich, and each has its own philosophy. We can never say that there is the best architectural library you should use, because it all depends on the team’s needs.
I would suggest always picking the best technical solution for the business needs and not the other way around - i.e. don’t optimize for newness, cool tech, capabilities, or your intrinsic interest in some technology. The business and team needs should always be the driving factor for choosing a technical solution.
Over the years, whenever I was faced with a decision to choose a particular dependency, I felt there was no “single source of truth” that compared as many libraries and solutions as possible across as many different criteria as possible. Picking the best library for my needs always felt like a task that required hours of research (reading random Medium articles about a particular library).
Usually the best way to approach this is to try multiple libraries. I strongly encourage you to try at least the top few libraries listed in this article, and then decide based on what best fits your use case and your application.
In this article I’m going to assume that you already know what MVI is and have experience with Kotlin app architecture. This isn’t a guide on how to implement MVI from scratch, but a comparison of existing solutions.
So without further ado, let’s list the top 4 architectural frameworks in 2026 and their pros and cons.
Best Kotlin MVI / state management libraries (2025 - 2026):
- MVIKotlin
- FlowMVI
- Orbit MVI
- Ballast
MVIKotlin
This is probably the most mature and popular architectural library in the ecosystem right now. It has been around for a very long time and it boasts a wide range of sample apps, plus a small ecosystem built around it (the Decompose navigation framework and the Essenty multi-platform utility library).
MVIKotlin is known for its simple, “no BS”, strongly opinionated design that encourages separation of concerns and following the Redux flow.
How to implement MVI architecture with MVIKotlin
For each new feature or screen that you implement, you need to create these entities, at a minimum:
- State: data class with loading / content / error properties.
- Intent: user events from the UI (e.g., Refresh, Retry).
- Label: one-off side effects sent to the UI (e.g., ShowToast), optional.
- Action: bootstrap actions fired on Store init (optional).
- Message: internal reducer inputs. One or more are produced in response to Intents and result in State updates through the Reducer.
- Executor: performs side effects; takes Intents/Actions, calls the repository, dispatches Messages and Labels.
- Reducer: a pure function mapping (State + Message) -> new State.
- Store: built via DefaultStoreFactory or the DSL from the pieces above.
And optionally (for custom startup and creation) you’ll need:
- A Bootstrapper implementation. A Bootstrapper is MVIKotlin's hook for firing initial (or periodic) Actions when a Store is initialized, before any user intents arrive.
- A StoreFactory implementation. A StoreFactory is an optional way to decorate or wrap the Stores it creates, or to provide custom implementations of the interface. DefaultStoreFactory from the library just creates a store directly.
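For reference, here is one possible shape for the contract types that the snippets below assume. This is just a sketch: the Item model and the LceRepository interface are hypothetical placeholders, and your real domain types will look different.

```kotlin
// Hypothetical domain model and data source used throughout the examples
data class Item(val id: String)

interface LceRepository {
    suspend fun load(): List<Item>
}

sealed interface LceState {
    data object Loading : LceState
    data class Content(val items: List<Item>) : LceState
    data class Error(val message: String) : LceState
}

sealed interface LceIntent {
    data object Refresh : LceIntent
    data object Retry : LceIntent
}

sealed interface LceAction {
    data object Bootstrap : LceAction
}

sealed interface LceMessage {
    data object Loading : LceMessage
    data class Success(val items: List<Item>) : LceMessage
    data class Failure(val throwable: Throwable) : LceMessage
}

sealed interface LceLabel {
    data class ShowError(val message: String?) : LceLabel
}
```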
This library enforces Messages - one extra indirection layer on top of MVI - usually seen with the Elm/TEA architecture. Here’s why:
- Executors often need to turn one intent into multiple state updates (e.g., emit Loading, then Success/Failure); splitting out Message keeps the reducer pure and single-purpose. Messages can be normalized domain results (e.g., Loaded(items), Failed(error)) while intents stay UI-shaped (e.g., Retry, Refresh, ItemClicked(id)).
- Executors can also react to bootstrap Actions; both Actions and Intents funnel into Messages, so reducers handle a single shape.
- This separation lets you reuse reducers across different executors or tests by dispatching Messages directly, without running side effects.
You’ll have a dedicated testable Reducer function/object:
```kotlin
val lceReducer = Reducer<LceState, LceMessage> { msg ->
when (msg) {
is LceMessage.Loading -> LceState.Loading
is LceMessage.Success -> LceState.Content(msg.items)
is LceMessage.Failure -> LceState.Error(msg.throwable.message ?: "Unknown error")
}
}
```
And then an Executor to dispatch Messages, Labels, and Actions:
```kotlin
class LceExecutor(
private val repo: LceRepository,
mainContext: CoroutineContext = Dispatchers.Main
) : CoroutineExecutor<LceIntent, LceAction, LceState, LceMessage, LceLabel>(mainContext) {
override fun executeAction(action: LceAction) { load() } // bootstrap path
override fun executeIntent(intent: LceIntent) { load() } // Refresh/Retry path
private fun load() {
dispatch(LceMessage.Loading)
scope.launch {
runCatching { repo.load() }
.onSuccess { dispatch(LceMessage.Success(it)) }
.onFailure {
dispatch(LceMessage.Failure(it))
publish(LceLabel.ShowError(it.message))
}
}
}
}
```
Then you can create the Store:
```kotlin
fun createLceStore(
repo: LceRepository,
storeFactory: StoreFactory = DefaultStoreFactory(), // or wrap with LoggingStoreFactory/TimeTravelStoreFactory
autoInit: Boolean = true
): Store<LceIntent, LceState, LceLabel> = storeFactory.create(
name = "LceStore",
initialState = LceState.Loading,
bootstrapper = SimpleBootstrapper(LceAction.Bootstrap), // provide Action on startup
executorFactory = { LceExecutor(repo) },
reducer = lceReducer,
autoInit = autoInit
)
```
As you can see, this is pretty verbose but straightforward to understand, and it operates on familiar concepts like factories, bootstrappers, executors and stores.
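From the platform side, you then create the store, observe it, and push intents into it. Here's a minimal sketch, assuming the contract types above and MVIKotlin's coroutine extensions; the states/labels Flow helpers are provided by the mvikotlin-extensions-coroutines artifact, and the exact names may vary by version:

```kotlin
import com.arkivanov.mvikotlin.core.store.Store
import com.arkivanov.mvikotlin.extensions.coroutines.labels
import com.arkivanov.mvikotlin.extensions.coroutines.states
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.launchIn
import kotlinx.coroutines.flow.onEach

// Assumed helper (not part of MVIKotlin): binds a store to a rendering callback
fun bindLceStore(
    store: Store<LceIntent, LceState, LceLabel>,
    scope: CoroutineScope,
    render: (LceState) -> Unit,
) {
    store.states.onEach { render(it) }.launchIn(scope)       // observe state changes
    store.labels.onEach { /* show toast/snackbar */ }.launchIn(scope) // observe one-off labels
    store.accept(LceIntent.Refresh)                           // push an intent
}
```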

MVIKotlin pros / benefits
The main benefit of using MVIKotlin is that it enforces a particular structure on your code, following the Redux pattern more closely than other libraries.
It doesn’t use any platform/third-party dependencies and doesn’t tie your logic to the UI, making your business logic more generic and detached from any platform quirks or framework dependencies.
MVIKotlin is the only popular MVI framework that doesn’t depend on Kotlin coroutines and allows you to plug your own reactivity solution such as Reaktive, RxJava or Compose state. MVIKotlin is simple and easy to understand because every screen and feature will follow the same conventions. It leaves little to no room for creativity or leeway in how features can be implemented, which can be both a good thing and a bad thing depending on your needs.
The library internals are simple to understand, replicate and work with. There is no black magic under the hood or any unusual behavior to expect from this library; everything is clearly documented. MVIKotlin has been maintained and stable over many years, so it is unlikely to be abandoned or to suffer a drastic change of any kind. The library itself has extensive test coverage, and any code you write with it is highly testable by design.
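That testability is easy to demonstrate: the reducer is a pure function over (State, Message), so a unit test needs no coroutines, mocks, or store at all. A minimal sketch, assuming the contract types sketched earlier and that Reducer's single method is declared as an extension on State (as in current MVIKotlin versions):

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

class LceReducerTest {

    @Test
    fun successMessageProducesContent() {
        val items = listOf(Item("1"))

        // Bring the reducer into scope so we can call State.reduce(Message) on it
        val newState = with(lceReducer) {
            LceState.Loading.reduce(LceMessage.Success(items))
        }

        assertEquals(LceState.Content(items), newState)
    }
}
```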
MVIKotlin has a mature and feature-rich time-travel debugging plugin, which works and integrates pretty seamlessly with your code. So you can expect powerful debugging capabilities and even a Chrome extension with the same functionality for web apps.
The library provides a huge catalog of different sample apps and implementations showcasing integration with various DI frameworks, navigation libraries, UI frameworks and even languages (Swift), and I found a significant number of other usages in OSS apps.
MVIKotlin cons and downsides
MVIKotlin doesn’t only implement the MVI pattern, it also builds on top of it by introducing an extra indirection layer in the form of Messages. This is mostly in the name of testability, but this isn’t the only way to make sure your reducers are testable. It introduces a noticeable amount of boilerplate and verbosity.
The library has extra classes, interfaces and constructs that you have to implement or use that aren’t strictly “needed”, such as:
- Bootstrappers, which can be implemented via an interceptor architecture or a dedicated stage in the lifecycle.
- Store factories, which don’t need to be explicit objects and can just be builders or convenient DSLs and can remain an optional concept instead of being a first-party architectural pattern to follow.
- The dedicated Reducer object, which is just a pure function and could be provided inline, via a DSL, or directly inside the store, instead of being a separate concept that limits what the reducer can do.
Testability can be achieved in other ways, so if you care about conciseness, flexibility, feature richness or modern development practices, you may not like the extra structure that MVIKotlin adds on top of MVI.
The library isn’t even close to many other frameworks in terms of features, or in its ability to quickly help you iterate when developing, or to ship fast. So while good for mature projects, long-term development vision, or big enterprises, this isn’t great for fast-paced teams, startups, and hobby projects and smaller apps with tightly-knit teams, where some leeway can not only be acceptable but beneficial.
Based on my analysis across 100+ features, it lacks a significant portion of what other frameworks provide (only 29 features vs the top library having 76 out of ~100 total). It doesn’t have state persistence, interceptors, decorators, DSLs, any subscriber management, coroutine-first integration, and many more extras.
The library’s philosophy and simplicity dictate requirements on threading as well. The library is supposed to be used on the main thread only, with only specific places where execution can be moved away from the main thread. Even then, it requires explicit context switching for some of its operations such as reducing the state. The library has no thread safety features and does not allow or have any built-in functionality for parallelism, long-running task management, job execution, background work, participation in a generic event bus system, doesn’t implement chain of responsibility or any other patterns out of the box. Any of that should be built on top of the library in-house, and often, due to intended limitations and intentional design, will not be possible or desired.
Who is MVIKotlin for?
MVIKotlin is for teams and businesses that want a very small, conservative core with no third-party concepts, no tight coupling to a specific framework, and no relationship to UI code or implementation.
MVIKotlin is for you if you want maximum architectural freedom to design your own layers or build something on top of it, or you want a library that is neutral in terms of its reactivity implementation. The library also features mature and stable debugging tools with great functionality and an IDE plugin.
If your team is big or your app is mature and you want something that is widely adopted and will be easily understood by many different developers, there is a high chance that someone you onboard who is familiar with MVI will also be familiar with MVIKotlin in particular. This saves you time, simplifies hiring, and reduces the leeway in how developers approach changes and, especially, new code - so you can expect an easier time enforcing standards, which means fewer bugs and problems in big teams.
FlowMVI
In short, FlowMVI is the polar opposite of MVIKotlin. FlowMVI leans hard into the concept of freedom. Its core philosophy is based on the premise that an architectural library should not constrain you, but enhance your degrees of freedom.
The library's headline feature is its plugin system - a sort of merger between the chain of responsibility, decorator, and interceptor patterns - and plugins permeate every layer of the library, which is both a pro and a con.
How to use Kotlin FlowMVI
The minimum amount of code to write is limited to:
- Store property.
That’s it, everything else (state, intents, etc.) is optional.
But more likely, for every feature you build, you usually define:
- Intent sealed interface family. This is optional; you can use functions instead.
- State sealed family (the library encourages sealed types, but a single class is also possible). State is also optional.
- Action sealed family. These are one-off "side effects" in FlowMVI, also optional.
- Store object. The library doesn't let you extend the Store interface (or at least doesn't encourage it) and instead uses a lambda-driven DSL with nice syntax for creating stores, which makes your code look declarative. So the library smartly avoids any sort of inheritance in its implementation.
That’s it. You may have already noticed that everything in here is optional. So the amount of restrictions that the library places on you is pretty much nonexistent or minimal, requiring only one single object to give you the full functionality of the library. Which means you can do whatever you want right off the bat.
Your feature logic will look something like this (equivalent to the MVIKotlin example):
```kotlin
private typealias Ctx = PipelineContext<LCEState, LCEIntent, LCEAction>
class LCEContainer(
private val repo: LCERepository,
) {
val store = store(LCEState.Loading) {
recover { e ->
updateState { LCEState.Error(e) }
action(LCEAction.ShowError(e.message))
null
}
init { load() }
reduce { intent ->
when (intent) {
is ClickedRefresh -> updateState {
launch { load() }
LCEState.Loading
}
}
}
}
private fun Ctx.load() = updateState {
LCEState.Content(repo.load())
}
}
```
As you can see, this is much more concise, already provides some extra features out of the box, and reads like English, but there is a lot of black magic going on, with advanced constructs like PipelineContext, coroutines, and lambdas all over the place.

FlowMVI benefits & pros
FlowMVI is an absolute beast, providing a huge amount of functionality out of the box with a whopping 76 different features and enhancements that try to cover as many needs as possible out of the box.
The plugin architecture of FlowMVI allows you to inject new behaviors, decompose logic, handle exceptions everywhere, and adjust many behaviors at any point and any stage of your business logic component lifecycle. This architecture is what gives the library so many features.
The library excels in major parameters that I have taken into consideration. It:
- Supports Compose, XML Views, serialization, saved state natively and with a pretty clean DSL
- Doesn’t enforce usage of any third-party concept like AndroidX
ViewModels - Has multiple sample apps (not as big as MVIKotlin’s or Orbit’s OSS ecosystem, but still there)
- Runs regular benchmarks which show excellent performance
- Provides a testing harness
- Supports all 9+ Kotlin Multiplatform targets
- Has high test coverage
Although there are no UI tests or end-to-end testing in the library currently, from my research it seems that end-to-end testing is not really a very common practice among architectural libraries.
Coroutines are a first-class citizen both in the Kotlin language now and in Compose, and FlowMVI is intentionally built with coroutines. This depends on your existing stack, but if you are using coroutines you will be delighted to learn that pretty much all of the FlowMVI API is suspendable and many operations can be performed safely with structured concurrency in mind. The library doesn’t force you to use disposables, listeners, callbacks or anything similar, delegating that to coroutines.
The library is built with concurrency and parallelism as a first-class citizen. The core philosophy is to be able to write asynchronous and reactive apps really fast. It gives you the ability to run your logic in parallel with great thread safety, protection from data races out of the box, and still maintains one of the best levels of performance in single-threaded scenarios. This is actually something unique among architecture libraries, because very few of the libraries I studied actually encourage you to write concurrent and reactive code - unlike MVIKotlin, for example, which forcibly restricts you to the main thread, or Orbit MVI which claims to support background execution and parallelism but doesn’t actually provide helpers, utilities, or synchronization to make your multithreaded code safe.
This is the only library I’ve seen that allows you to:
- Automatically send every exception to Crashlytics without any handling code
- Track analytics, allowing you to automatically send user actions, screen visits, and session times
- Use a long-running job management framework with extras like batching operations, backpressure control, retry, filtering and more
- Collect actually useful metrics such as how long it takes for your stores to load data, start up, or how many inputs your business logic produces
If you desire to enforce some constraints on your code - for example, if you unit test your reducers and want them to be pure - you can easily do that through the library by creating your own plugin. So if you want to shape the API surface of this library for your needs, it’s pretty easy to do that because the library just doesn’t want interfaces, factories, wrappers etc. from you, just one object named Store.
Contrary to what first impressions may suggest, the library doesn't actually force you to use it with UI-level architecture or a particular framework. You are free to use it with many different UI libraries or even in non-UI code. While being feature-rich, the library also manages not to be opinionated about the structure of your code. It doesn't force you into a particular structure, doesn't require you to use or even have side effects, or to handle errors in a particular way - in fact it offers multiple approaches through neighboring libraries such as ApiResult. FlowMVI doesn't smell of "Android" or overengineering.
FlowMVI downsides & problems
I guess the biggest problem with FlowMVI is that it can feel like black magic everywhere. When you first jump into the library, it claims that you can start using it in 10 minutes. But to fully understand what’s going on under the hood and all the quirks that the library’s APIs have, and how they interact with coroutines, structured concurrency, and each other, you have to really dig into the sources and read a bunch of documentation - way more than with MVIKotlin or Orbit MVI.
Because the library has such an extensive amount of features, you can try any one of them and find 15 more that you have to research, choose from, and understand. Many pieces of functionality in the library can be done in multiple ways, and sometimes it’s not really clear which way is the best or future-proof. So if you’re going for simplicity and want all of your team members following a single process, this isn’t the library for you. There is always room for imagination and creativity with FlowMVI. Unless your team explicitly agrees on standards and understands all of the advanced concepts of FlowMVI, you’ll face chaos and bugs due to misuse of its capabilities.
The library’s flexibility is intentional, but it is also its drawback. The official documentation states that you can write a single extension for the code as a “plugin”, and depending on where you put it in the file (on which line of code you “install” this plugin) this may completely change the behavior of your logic - changing when and how intents are handled, swallowing exceptions, and disabling or removing logging and repository calls. That’s a pretty big responsibility coming with this power - I wouldn’t let juniors run amok with such tools in their hands.
Also, if you are not using coroutines, it will be pretty much impossible for you to use the library, because it is built entirely on coroutines and you must not only be familiar with them, but there are also no adapters for frameworks such as RxJava or Reaktive that MVIKotlin has. So it’s pretty much coroutines or nothing. I don’t personally perceive that as a drawback since coroutines are native to Kotlin, but this library definitely locks you into using them even more than Compose does.
As a nitpick, I found the time travel and logging plugin in FlowMVI lackluster compared to MVIKotlin’s.
And lastly, I can’t not mention again that this library is much newer than the others, so my searches yielded very few open source project usages, samples, and different integrations with it. So you will probably have a harder time finding relevant samples and implementations of what you need and establishing best practices for your team than you would with other libraries in this list. Expect some exploration, experiments, and documenting your own way to use this library.
FlowMVI library use cases
I think FlowMVI is a great fit for:
- Small teams, where you can spread information quickly and control the code through code review
- Teams that are really fast-paced, ship features, iterate quickly, and don’t yet have an established product that requires superb “codebase stability”
- Solo developers making hobby projects or their own products
- Big teams that don’t shy away from flexibility and really want to stay on top of things, pursue modern technologies, and build new solutions on a single all-encompassing stack
For those, FlowMVI will be a better fit than any other library. Just because of the sheer amount of features it gives you, you can write code with FlowMVI incredibly fast, not worry about many issues (crashes/analytics/thread safety/data races/debugging/log collection) and rely on it at every step of your journey. Whatever product requirement you get, you can almost surely find something in FlowMVI that will help you. And even if you don’t, the library is structured in such a way that with a few lines of code you can extend the business logic in any place of your app or even everywhere at once without extra refactoring or adjusting your codebase.
If you have an established product with a big team, or you are trying to hire engineers that aren’t really versed in FlowMVI (since it’s a newer framework), and you aren’t willing to spend time on developer education, then you should probably avoid FlowMVI. Each developer that onboards with FlowMVI will need to read its documentation, dive into sources, have clear examples of how to use the library, understand its internals in some way, be very well versed with coroutines and skilled in general, since FlowMVI builds on top of so many architectural patterns that developers have to really understand to use effectively.
And finally, if you are stuck with RxJava or a Java project, then you’re out of luck here since FlowMVI is pretty much unusable with RxJava and Java in general.
Orbit MVI
This is probably the most well-known and popular Kotlin MVI framework in existence right now.
How to use Orbit MVI in 2026/2025
For any given feature you implement, you’ll want to create:
- State: both sealed families and single data class styles work.
- A sealed interface for side effects (optional).
- A ViewModel: a ContainerHost with container(initialState = Loading) to bootstrap.
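The contract might look like this (a hypothetical sketch reusing the Item model from earlier; Orbit doesn't require any marker interfaces on your types):

```kotlin
// Hypothetical contract for the example below
sealed interface LceState {
    data object Loading : LceState
    data class Content(val items: List<Item>) : LceState
    data class Error(val message: String?) : LceState
}

sealed interface LceSideEffect {
    data class ShowError(val message: String?) : LceSideEffect
}
```

The ViewModel itself then looks like this: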
```kotlin
class LceViewModel(
private val repo: LceRepository,
) : ViewModel(), ContainerHost<LceState, LceSideEffect> {
override val container = container<LceState, LceSideEffect>(LceState.Loading) {
load() // bootstrap; runs as an implicit intent
}
fun load() = intent {
reduce { LceState.Loading }
runCatching { repo.load() }
.onSuccess { items -> reduce { LceState.Content(items) } }
.onFailure { e ->
reduce { LceState.Error(e.message) }
postSideEffect(LceSideEffect.ShowError(e.message))
}
}
}
```
As you can see, the library's code is incredibly lean; I would even say leaner than FlowMVI's.
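On the UI side, consuming the container from Compose is just as lean. Here's a sketch assuming the orbit-compose artifact; the collectAsState/collectSideEffect helpers may differ slightly between versions:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import org.orbitmvi.orbit.compose.collectAsState
import org.orbitmvi.orbit.compose.collectSideEffect

@Composable
fun LceScreen(viewModel: LceViewModel) {
    // Observe the container's state as Compose state
    val state by viewModel.collectAsState()

    // Handle one-off side effects (snackbars, navigation, etc.)
    viewModel.collectSideEffect { effect ->
        when (effect) {
            is LceSideEffect.ShowError -> { /* show snackbar / toast */ }
        }
    }

    when (state) {
        is LceState.Loading -> { /* progress indicator */ }
        is LceState.Content -> { /* list of items */ }
        is LceState.Error -> { /* error view with a Retry button calling viewModel.load() */ }
    }
}
```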
Orbit MVI benefits and advantages
Orbit is the most conceptually similar to how we used to write code in the MVVM era and probably the simplest to understand library of the ones compared here.
It specifically leans into the “MVVM with extras” mental model, and indeed we can see in the code familiar concepts such as viewmodels and their functions. Intents are simple lambdas, which contain code blocks instead of some more convoluted hierarchies that model-driven MVI implementations have. Although FlowMVI also supports MVVM+ style, Orbit MVI operates on concepts more familiar from MVVM such as view models and structures code in a much simpler way.
Orbit is a mature framework and is widely referenced. I found more than 130 open source usages of Orbit, which give anyone trying to understand how to use it and how it works a really easy time. It has been in production since at least 2019, so it is stable and you don’t have to expect huge changes to it in the future. There are numerous articles on how to use Orbit MVI and how to integrate it besides this one, so I’m not going to really dive too much into guidance.
The library also supports Kotlin Multiplatform. Although I have found the documentation heavily references Android, that seems more like a legacy quirk than the library actually favoring Android, and I find that this works pretty well for multiplatform apps, especially given that it doesn’t really get in the way of your other code, such as UI-bound code (unlike MVIKotlin encouraging Decompose or FlowMVI hiding everything behind magic DSLs).
Orbit MVI is probably the only popular framework that supports Android UI testing natively, so that’s a big upside if you really lean into UI tests and integration tests on Android specifically.
Orbit MVI downsides and problems
Despite being so popular and widely adopted, Orbit MVI isn't evolving very actively anymore. I was surprised to find the documentation still referencing Android extensively and being pretty minimal in general. The actual coverage of features and possibilities that Orbit MVI offers is much wider than what is stated in the documentation. That was kind of surprising to me, because based on my analysis the library has a pretty decent score, being on par with MVIKotlin in functionality, albeit leaning in a slightly different direction.
Both FlowMVI and Orbit have a similar philosophy, being this lean library that focuses on features and gets out of your way when writing code, but FlowMVI currently offers much more and in a nicer packaging. It looks to me like Orbit MVI still tries to carry something legacy from the Android era that hinders its progress in implementing new interesting features. Or maybe that just isn’t in the scope of the authors of the library.
So you can’t expect feature parity or even anything remotely comparable to FlowMVI. If you’re only thinking about features and ease of use, it’s hard to recommend Orbit MVI over FlowMVI going into 2026.
Unlike MVIKotlin and FlowMVI, Orbit doesn’t ship with remote debugger support or an IDE plugin, so that may be a deal breaker for you if you favor debuggability and developer tooling.
When to use Orbit MVI
I would say the surest way to pick Orbit MVI over any other library is if your team is already familiar with MVVM and especially if you have an MVVM- or MVVM+-based app with maybe an in-house implementation, and now you just want to migrate to a well-maintained architectural framework as your main solution rather than in-house code. In that case you would definitely choose Orbit MVI just because of how familiar and at home you will feel, enabling you to gradually and smoothly transition to MVI.
Migrating to Orbit MVI will probably be the easiest:
- MVIKotlin would require extensive refactoring which can’t really be automated and will at least require deploying an AI agent and then thoroughly reviewing all of the code.
- FlowMVI, although easy to migrate to (maybe even in an automated way), is not that conceptually similar to a traditional ViewModel-based approach and doesn't lean into the MVVM vibe as hard as Orbit does.
So with both of these libraries you will still be somewhat swimming against the current in an existing MVVM-based codebase.
If you’re starting a new project, I would probably only recommend Orbit if you have a team of developers who are familiar with MVVM and you really want to get up to speed quickly and start coding features. Otherwise, I would probably recommend either choosing FlowMVI or MVIKotlin to secure a better future.
Ballast
Ballast is actually a lesser-known architectural framework, but I am not even sure why, because it is a great contender and a great architectural library to use in 2026 and onwards. The library features a simple opinionated API which doesn’t have much fluff to it while also staying flexible enough, and it has an impressive range of features, being the second strongest after FlowMVI in terms of raw functionality it provides.
How to use the Kotlin Ballast library
To implement the LCE example from above, you need to create the following:
- data class State(...) holding a loading flag (or Cached<T>), the data, and the error. Ballast encourages a single data class for its state.
- sealed interface Inputs, which is Ballast's name for intents.
- sealed interface Events for UI one-off events (e.g., ShowError) (optional).
Then the logic:
- An InputHandler implementation. This does exactly what it says - handles intents. It can not only update state but also suspend and do other things, so it's not only a reducer.
- An EventHandler implementation. If your view model has side effects (Events), Ballast encourages separation of those side effects from the actual UI code, such as the composables you write. So you would issue navigation commands, for example, in an event handler that has a different lifecycle than the view model.
- A ViewModel - a BasicViewModel (or AndroidViewModel on Android) with BallastViewModelConfiguration.Builder().withViewModel(initialState, inputHandler, name). The library leans into view models as the container for business logic, although you don't have to put anything besides your setup there.
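As before, here's one possible shape for the contract (hypothetical names matching the snippet below, reusing the Item model from earlier):

```kotlin
// Hypothetical contract for the Ballast example below
data class LceState(
    val isLoading: Boolean = false,
    val items: List<Item> = emptyList(),
    val error: String? = null,
)

sealed interface LceInput {
    data object Load : LceInput
    data class Loaded(val items: List<Item>) : LceInput
    data class Failed(val error: Throwable) : LceInput
}

sealed interface LceEvent {
    data class ShowError(val message: String) : LceEvent
}
```

The handler, event handler, and view model wiring then look like this: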
```kotlin
class LceInputHandler(
private val repo: LceRepository,
) : InputHandler<LceInput, LceEvent, LceState> {
override suspend fun InputHandlerScope<LceInput, LceEvent, LceState>.handleInput(input: LceInput) = when (input) {
is LceInput.Load -> {
updateState { it.copy(isLoading = true, error = null) }
sideJob("load") {
runCatching { repo.load() }
.onSuccess { postInput(LceInput.Loaded(it)) }
.onFailure { postInput(LceInput.Failed(it)) }
}
}
is LceInput.Loaded -> updateState { it.copy(isLoading = false, items = input.items, error = null) }
is LceInput.Failed -> {
updateState { it.copy(isLoading = false, error = input.error.message ?: "Unknown error") }
postEvent(LceEvent.ShowError(input.error.message ?: "Unknown error"))
}
}
}
object LceEventHandler : EventHandler<LceInput, LceEvent, LceState> {
// In Compose, collect events and show snackbar/nav inside LaunchedEffect (use platform integrations)
override suspend fun EventHandlerScope<LceInput, LceEvent, LceState>.handleEvent(event: LceEvent) = Unit
}
fun createLceViewModel(
repo: LceRepository,
scope: CoroutineScope,
): BallastViewModel<LceInput, LceEvent, LceState> = BasicViewModel(
config = BallastViewModelConfiguration.Builder()
.withViewModel(
initialState = LceState(isLoading = true),
inputHandler = LceInputHandler(repo),
name = "LceViewModel",
)
.build(),
eventHandler = LceEventHandler,
coroutineScope = scope,
).also { it.trySend(LceInput.Load) }
```
As you can see, the library doesn't lie when it claims to be opinionated. The structure is very interesting, but let's see what it gives us.
Benefits of Ballast
I would say this library doesn’t try to please everyone. It isn’t try-hard like FlowMVI, or “junior-friendly” like Orbit, or boasting its “structure” like MVIKotlin. Instead, you will enjoy Ballast if you catch its drift, period.
Ballast ranks second in features and functionality among the 70 libraries I compared. It gives you a ready-made solution, with enhancements and genuinely cool stuff for every layer of your application architecture. It provides tools and tricks for:
- The UI layer
- The view model layer
- Even the repository layer
Ballast features native integration with Firebase Analytics and can integrate with any other analytics services. It has a rich concept of interceptors and decorators that is actually similar to how they are done in FlowMVI, while also staying kind of simple and true to the MVI principle.
I like Ballast because it is very inspiring. It has many features while also staying down to business. It doesn’t try as hard as FlowMVI to be “different”.
Ballast has a unique feature which allows you to synchronize your state and intents to an actual remote server, which no other library has. And that’s not including some other interesting stuff you’ll find if you dive into the docs. You should definitely check it out and get inspired by what you can build.
Ballast’s developer tooling is also great, featuring its own time travel and debugging plugin, with quite a number of uses, and a rich ecosystem built around creating apps fast and easy. I would say that’s the core philosophy of Ballast - it’s like a complete batteries-included solution for you to build actual apps with this library. The whole idea strikes me as kinda cool.
Downsides of Ballast
As it often happens, the nature of Ballast being so opinionated is also its biggest drawback. I feel like the library tries to do everything, but only in one particular way. So what if you aren’t on the same wavelength as the authors of Ballast? Then you’re going to have a really bad time, obviously.
For example, I found that the Firebase Analytics integration only sends events in a particular way and isn’t really flexible enough to send them in any other way or specialize them for any given page in the app, unlike what FlowMVI provides with its more generic but also more flexible implementation. The same can be said for the repository and caching functionality.
If you get the same use cases for caching and structure your repositories in the same way as Ballast encourages you to (after reading that big chunk of documentation on the official Ballast website), you will be golden. But as soon as you get a use case that doesn’t fit the pattern or you need to make a change, you may have to ditch what Ballast has built for you and start working on something else. This can lead to fragmentation and, frankly, just rewriting the same stuff with slight changes. Maybe you’ll find the use cases that Ballast provides you not sufficient or features not flexible enough. That’s a real issue.
I’ve also found that the library seems to be maintained but not really super actively promoted or developed. Because the library is really opinionated, if the authors don’t have any of the same use cases as you do, then there is no real incentive for them - and it isn’t really in their general philosophy - to build something extra on top of what already exists.
My third problem is that the library kind of tries to mix and match a bunch of stuff. FlowMVI seems to be really consistent in its style - it gives this Gen Z vibe of “let’s do everything with lambdas”. But Ballast employs a combination of Java-style builders with DSLs, with lambdas, and then with factories. It has something that looks like a reducer but also isn’t really a reducer - it’s now an InputHandler. And then it uses view models and has the concept of view models, but then the view models aren’t really view models - they are just wrappers for view model builders, whatever that means for Ballast. I just find the terminology sometimes confusing. Even though it makes sense conceptually, the library could probably benefit from some consistency overall in the general style that it has.
Ballast library use cases
You will really enjoy Ballast if you have similar use-cases as the library’s authors.
The people who benefit the most from this library will be small teams that build a specific type of app and aren't afraid to adopt someone else's architectural patterns. If you really do have similar use cases and considerations, you will benefit greatly: you'll save a lot of time and code and get an amazing feature set out of this library.
But if you're not really following the patterns that Ballast gives you - for example, you have a huge app or a big enterprise solution, or, on the other hand, a really simple app and you don't want to invest in understanding everything that Ballast offers - then Ballast isn't really a good fit for you.
- In the first case (huge app / enterprise), you’re better off using something either structured like MVIKotlin or flexible like FlowMVI, depending on your philosophy and needs.
- In the second case (very simple app), you’re better off just using Orbit as the simplest-to-understand solution for learning and to get up to speed quickly.
So Ballast is kind of in this middle space of not really leaning into anything in particular.
Bonus: A spreadsheet comparing 70 architecture libraries
I will admit I kind of went overboard when researching all of these libraries. I found more than 70 different state management / MVI / architectural libraries and compared them over 100+ criteria. So this write-up is based mostly on my research that I did for those architectural libraries.
I’d like to thank my colleague Artyom for doing the first part of the research originally for the Mobius conference we spoke at. Recently I decided to update those research findings to include more criteria, more features and re-evaluate every single library because many of them had major releases since then.
And of course I will keep the spreadsheet updated as long as I can. Some things there, such as maintenance status, are updated automatically, and I will keep adding new features to compare. Please let me know via email if I made any mistakes, if you want your library added, or if you just have something to say.
The spreadsheet has all the honorable mentions - definitely check them out! I'm sorry if I didn't include your library here - this article is huge as-is!
Keep in mind that the scores in the spreadsheet are purely subjective and use a very simple formula that multiplies each feature's weight by its checkbox status. The weights were selected for an "average Joe" developer and are thus pure speculation, so you should check the full list and find the specific features that are important to your team.
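To make that concrete, the scoring boils down to a weighted sum, as in the toy sketch below (the feature names and weights here are made up purely for illustration):

```kotlin
// Toy illustration of the spreadsheet's scoring formula: weight x checkbox, summed.
data class FeatureScore(val name: String, val weight: Double, val supported: Boolean)

fun totalScore(features: List<FeatureScore>): Double =
    features.sumOf { if (it.supported) it.weight else 0.0 }

fun main() {
    val example = listOf(
        FeatureScore("Time-travel debugging", 0.5, supported = true),
        FeatureScore("State persistence", 1.0, supported = false),
        FeatureScore("Compose support", 1.0, supported = true),
    )
    println(totalScore(example)) // 1.5
}
```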