Written by Lauren Darcey, Rob McWhinnie, Catherine Chi, Drew Heavner, Eric Kuck
How It Started
Let’s rewind the clock a few years to late 2021. The pandemic is in full swing and Adele has staged a comeback. Bitcoin is at an all-time high, Facebook has an outage and rebrands itself as Meta, William Shatner gets launched into space, and Britney is finally free. Everyone’s watching Squid Game, where debt-ridden contestants play games and fight for their lives.
Meanwhile, the Reddit Android app is supporting communities talking and shitposting about all these very important topics while struggle-bugging along with major [tech] debt and growing pains of its own. We’ve also grown fast as a company and have more mobile engineers than ever, but things aren’t speeding up. They’re slowing down instead.
Back then, the Android app wasn’t winning any stability or speed contests, with a crash-free rate in the 98% range (7D) and startup times over 12 seconds at p90. Yeah, I said 12 seconds. Those are near-lethal stats for an app that supports millions of users every day. Redditors were impatiently waiting for feeds to load, scrolling was a janky mess, and the app no longer had a coherent architecture, having grown quickly into a vast, highly coupled monolith. Feature velocity slowed, even small changes became difficult, and in many critical cases there was no observability in place to even know something was wrong. Incidents took forever to resolve, in part because fixes took a long time to develop, test, and deploy. Adding tests just slowed things down even more without much obvious upside, because writing tests on poorly written code invites more pain.
These were dark times, friends, but amidst the disruptions of near-weekly “Reddit is down” moments, a spark of determination ignited in teams across Reddit to make the mobile app experiences suck less. Like a lot less. Reddit might have been almost as old as dial-up days, but there was no excuse for it still feeling like that in-app in the 2020s.
App stability and performance are not nice-to-haves, they’re make-or-break factors for apps and their users. Slow load times lead to app abandonment and retention problems. Frequent crashes, app not responding events (ANRs), and memory leaks lead to frustrated users uninstalling and leaving rage-filled negative reviews. On the engineering team, we read lots of them and we understood that pain deeply. Many of us joined Reddit to help make it a better product. And so began a series of multi-org stability and performance improvement projects that have continued for years, with folks across a variety of platform and feature teams working together to make the app more stable, reliable, and performant.
This blog post is about that journey. Hopefully this can help other mobile app teams out there make changes to address legacy performance debt in a more rational and sustainable way.
Snappy, Not Crappy
You might be asking, “Why all the fuss? Can’t we just keep adding new features?” We tried that for years, and it showed. Our app grew into a massive, complex monolith with little cleanup or refactoring. Features were tightly coupled and CI times ballooned to hours. Both our ability to innovate and our app performance suffered. Metrics like crash rates, ANRs, memory leaks, startup time, and app size all indicated we had significant work to do. We faced challenges in prioritization, but eventually we developed effective operational metrics to address issues, eliminate debt, and establish a sustainable approach to app health and performance.
The approach we took, broadly, entailed:
- Take stock of Android stability and performance and make lots of horrified noises.
- Bikeshed on measurement methods, set unrealistic goals, and fail to hit them a few times.
- Shift focus to outcomes and burn down tons of stability issues, performance bottlenecks, and legacy tech debt.
- Break up the app monolith and adopt a modern, performant tech stack for further gains.
- Improve observability and regression prevention mechanisms to safeguard improvements long term. Take on new metrics, repeat.
- Refactor critical app experiences to these modern, performant patterns and instrument them with metrics and better observability.
- Take app performance to screen level and hunt for screen-specific improvement opportunities.
- Improve optimization with R8 full mode, upgrade Jetpack Compose, and introduce Baseline Profiles for more performance wins.
- Start celebrating removing legacy tech and code as much as adding new code to the app.
We set some north star goals that felt very far out-of-reach and got down to business.
From Bikeshedding on Metrics to Focusing On Burning Down Obvious Debt
Well, we tried to get down to business, but there was one more challenge before we could really start. Big performance initiatives always demand big up-front promises about return on investment, and you’re making those promises while staring at a big ball of mud: fragile code where changes are prone to negative user impact unless made with great care.
When facing a mountain of technical debt and traditional project goals, it’s tempting to set ambitious goals without a clear path to achieve them. This approach can, however, demoralize engineers who, despite making great progress, may feel like they’re always falling short. Estimating how much debt can be cleared is challenging, especially within poorly maintained and highly coupled code.
“Measurement is ripe with anti-patterns. The ways you can mess up measurement are truly innumerable” - Will Larson, The Engineering Executive's Primer
We initially set broad and aggressive goals and encountered pretty much every one of the metrics and measurement pitfalls described by Will Larson in "The Engineering Executive's Primer." Eventually, we built enough trust with our stakeholders to move faster with looser goals. We shifted our focus to making consistent, incremental, measurable improvements, emphasizing solving specific problems over committing to precise performance targets upfront, and then delivering consistent outcomes after calling those shots. This change greatly improved team morale and allowed us to address debt more effectively, especially since we were often making deep changes capable of undermining the metrics themselves.
Everyone wants to build fancy metrics frameworks but we decided to keep it simple as long as we could. We took aim at simple metrics we could all agree on as both important and bad enough to act on. We called these proxy metrics for bigger and broader performance concerns:
- Crashlytics crash-free rate (7D) became our top-level stability and “up-time” equivalent metric for mobile.
- When the crash-free rate was too abstract to underscore the user pain associated with crashing, we would invert the number and talk about our crashing user rate instead. A 99% crash-free rate starts to sound great, but a 1% crashing user rate still sounds terrible and worth acting on. This framing worked better when talking priorities with teams and product folks.
- Cold start time became our primary top-level performance metric.
- App size and modularization progress became how we measured feature coupling.
These metrics allowed us to prioritize effectively for a very long time. You also might wonder why stability matters here in a blog post primarily about performance. Stability turns out to be pretty crucial in a performance-focused discussion because you need reliable functionality to trust performance improvements. A fast feature that fails isn’t a real improvement. Core functionality must be stable before performance gains can be effectively realized and appreciated by users.
Staying with straightforward metrics to quickly address user pain allowed us to get to work fixing known problems without getting bogged down in complex measurement systems. These metrics were cheap, easy, and available, reducing the risk of measurement errors. Using standard industry metrics also facilitated benchmarking against peers and sharing insights. We deferred creating a perfect metrics framework for a while (still a work in progress) until we had a clearer path toward our goals and needed more detailed measurements. Instead, we focused on getting down to business and fixing the very real issues we saw in plain sight.
In Terms of Banana Scale, Our App Size & Codebase Complexity Was Un-a-peeling
Over the years, the Reddit app had grown due to continuous feature development, especially in key spaces, without corresponding efforts around feature removal or optimization. App size is important on its own, but it’s also a handy proxy for assessing an app’s feature scope and complexity. Our overall app size blew past our peers’ sizes as our app monolith grew in scope and complexity under the hood.
Figure 1: The Reddit Android App Size: Up, Up and Away!
App size was especially critical for the Android client, given our focus on emerging markets where data constraints and slower network speeds can significantly impact user acquisition and retention. Drawing from industry insights, such as Google’s recommendations on reducing APK size to enhance install conversion rates, we recognized that addressing our app’s size was important. But our features were so tightly coupled that we couldn’t meaningfully reduce app size until we had modularized and decoupled them enough to isolate features from one another.
We prioritized making it as easy to remove features as to add them and explored capabilities like conditional delivery. By modularizing by feature, with sample apps, we ensured that features operated more independently and that ownership (or the lack of it) was obvious. Worst case? If worse came to worst, we could take the modernized features to a new app target and declare bankruptcy on the legacy app. Luckily, we made a ton of progress on modularization quickly, those investments began to pay off, and we never had to go in that direction.
As of last week, our app dipped under 50 MB for the first time in three years, and app size and complexity continue to improve with further code reuse and cleanups. We are exploring more robust conditional delivery opportunities to deliver the right features to our users. We are also less tolerant of poorly owned code living rent-free in the app just in case we might need it again someday.
How we achieved a healthier app size:
- We audited app assets and features for anything that could be removed: experiments, sunsetted features, assets and resources
- We optimized our assets and resources for Android where there were opportunities, such as converting images to WebP. Google Play was handy for highlighting some of the lowest-hanging fruit
- We experimented with dynamic features and conditional delivery, shaving about a third of our app install size
- We leveraged R8 full mode for improved minification
- We worked with teams to have more experiment cleanup and legacy code sunset plans budgeted into projects
- We made app size more visible in our discussions and introduced observability and CI checks to catch any accidental app size bloat at the time of merge and deploy
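To make that last point concrete, here is a minimal sketch of the kind of merge-time guardrail we mean, written as a custom Gradle task in Kotlin DSL. The task name, APK output path, and 50 MB budget are illustrative assumptions, not our actual CI tooling.

```kotlin
// build.gradle.kts (app module): hypothetical size-budget check, not Reddit's real CI tooling.
tasks.register("checkApkSizeBudget") {
    dependsOn("assembleRelease")
    doLast {
        val budgetBytes = 50L * 1024 * 1024 // illustrative budget
        val apk = layout.buildDirectory
            .file("outputs/apk/release/app-release.apk") // default AGP output path
            .get().asFile
        check(apk.exists()) { "Release APK not found at ${apk.path}" }
        val mb = apk.length() / (1024.0 * 1024.0)
        println("Release APK size: ${"%.1f".format(mb)} MB (budget ${budgetBytes / (1024 * 1024)} MB)")
        check(apk.length() <= budgetBytes) {
            "APK exceeds the agreed size budget; investigate new assets or dependencies before merging."
        }
    }
}
```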
Finally, we leaned into celebrating performance wins, and especially celebrating the removal of features and unnecessary code as much as the addition of new code, in fun ways like dedicated Slack channels.
Cold Start Improvements Have More Chill All The Time
When we measured our app startup time to feed interactions (a core journey we care about) and it came in at that astronomical 12.3s @ p90, we didn’t really need to debate that this was a problem that needed our immediate attention. One of the first cross-platform tiger teams we set up focused on burning down app startup debt. It made sense to start here because when you think about it, app startup impacts everything: every time a developer starts the app or a tester runs a test, they pay the app startup tax. By starting with app start, we could positively impact all teams, all features, all users, and improve their execution speeds.
How we burned more than 8 seconds off app start to feed experience:
- We audited app startup from start to finish and classified tasks as essential, deferrable or removable
- We curated essential startup tasks and their ordering, scrutinizing them for optimization opportunities
- We used experimentation to tune which feed content we loaded at startup and how much of it was optimal
- We optimized each essential task with more modern patterns and worked to reduce or remove legacy tech (e.g. old work manager solutions, Rx initialization, etc.)
- We optimized our GraphQL calls and payloads as well as the amount of networking we were doing
- We deferred work and lazy loaded what we could, moving those tasks closer to the experiences requiring them (see the sketch after this list)
- We stopped pre-warming non-essential features in early startup
- We cleaned up old experiments and their startup tasks, reducing the problem space significantly
- We modularized startup and put code ownership around it for better visibility into new work being introduced to startup
- We introduced regression prevention mechanisms as CI checks, experiment checks, and app observability to maintain our gains long term
- We built an advisory group that brought benchmarking expertise and better tooling, aided in root-causing regressions, and provided teams with patterns less likely to introduce app-wide regressions
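To illustrate the essential/deferrable split mentioned above, here is a minimal sketch using the AndroidX App Startup library. Our in-house startup framework is more involved, and the task names here (a crash reporter, a markdown renderer) are purely hypothetical.

```kotlin
import android.content.Context
import androidx.startup.Initializer

// Hypothetical "essential" task: registered with AndroidX App Startup so it runs once,
// in an explicit, auditable order, before the first screen.
class CrashReportingInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        // Install the crash reporter here; keep this deliberately tiny.
    }

    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}

// Hypothetical "deferrable" task: not part of startup at all. It is built lazily
// and first touched by the feature that needs it, moving the cost off the startup path.
class MarkdownRenderer private constructor() {
    companion object {
        val instance: MarkdownRenderer by lazy { MarkdownRenderer() }
    }
}
```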
These days our app start time is a little over 3 seconds at p90 worldwide and has been stable and slowly decreasing as we make more improvements to startup and optimize our GQL endpoints. Despite having added lots of exciting new features over the years, we have maintained and even improved on our initial work. Android and iOS are in close parity on higher-end hardware, while Android continues to support a long tail of more affordable devices that take their sweet time starting up and live in our p75+ range. We manage an app-wide error budget primarily through observability, alerting, and experimentation freezes when new work impacts startup metrics meaningfully. There are still times when we allow a purposeful (and usually temporary) regression to startup, if the value added is substantial and optimizations are likely to materialize, but we work with teams to ensure we are continuously paying down performance debt, deferring unnecessary work, and getting the user to the in-app experience they intended as quickly as possible.
Tech Stack Modernization as a Driver for Stability & Performance
Our ongoing commitment to mobile modernization has been a powerful driver for enhancing and maintaining app stability and performance. By transforming our development processes and accelerating iteration speeds, we’ve significantly improved our ability to work on new features while maintaining high standards for app stability and performance; it’s no longer a tradeoff teams have to regularly make.
Our modernization journey centered on transitioning to a monorepo architecture, modularized by feature, and integrating a modern tech stack that developers were excited to work in and could be much more agile within. This included adopting a pure Kotlin, Anvil, GraphQL, MVVM, Compose-based architecture and leveraging our design system for brand consistency. Our modernization efforts are well established these days (and we talk about them at conferences quite often), and as we’ve progressed, we’ve been able to double down on improvements built on our choices. For example:
- Going full Kotlin meant we could now leverage KSP and move away from KAPT. Coroutine adoption took off, and RxJava disappeared from the codebase much faster, reducing feature complexity and lines of code. We’ve added plugins to make creating and maintaining features easy.
- Going pure GQL meant that maintaining and debugging two network stacks, their retry logic, and their traffic payloads was mostly a thing of the past for feature developers. Feature development with GQL is a golden path. We’ve been quite happy leveraging Apollo on Android and taking advantage of features like normalized caching to power more delightful user experiences.
- Going all in on Anvil meant simplified DI boilerplate and feature code, along with investment in devx plugins and further build improvements to keep build times manageable (see the sketch after this list).
- Adopting Compose has been a great investment for Reddit, both in the app and in our design system. Google’s commitment to continued stability and performance improvements meant that this framework has scaled well alongside Reddit’s app investments and delivers more compelling and performant features as it matures.
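To give a flavor of the Anvil boilerplate reduction mentioned above, here is a sketch of contributing a binding at its declaration site. The interface, implementation, and AppScope marker are hypothetical, and AppScope is a project convention rather than part of the Anvil library.

```kotlin
import com.squareup.anvil.annotations.ContributesBinding
import javax.inject.Inject

// Project-defined scope marker (a common Anvil convention, not provided by the library).
abstract class AppScope private constructor()

interface FeedRepository {
    suspend fun frontPage(): List<String>
}

// The binding is contributed where the class is declared; no hand-written Dagger
// module or component edit is needed to wire this feature into the graph.
@ContributesBinding(AppScope::class)
class RemoteFeedRepository @Inject constructor() : FeedRepository {
    override suspend fun frontPage(): List<String> = emptyList() // placeholder
}
```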
Our core surfaces, like feeds, video, and the post detail page, have undergone significant refactors and improvements for further devx and performance gains, which you can read all about on the Reddit Engineering blog as well. The feed rewrites, as an example, resulted in much more maintainable code using modern technologies like Compose to iterate on, a better developer experience in a space pretty much all teams at Reddit need to integrate with, and Reddit users getting their memes and photoshop battle content hundreds of milliseconds faster than before. Apollo GQL’s normalized caching helped power instant comment loading on the post detail page. These are investments we can afford to make now that we are future focused instead of spending our time mired in so much legacy code.
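For readers unfamiliar with Apollo’s normalized caching, the sketch below shows roughly how it is wired up in Apollo Kotlin 3.x; the endpoint and cache size are placeholders, and Reddit’s actual configuration is more elaborate.

```kotlin
import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.api.MemoryCacheFactory
import com.apollographql.apollo3.cache.normalized.normalizedCache

// Placeholder endpoint and cache size; not Reddit's real configuration.
val apolloClient: ApolloClient = ApolloClient.Builder()
    .serverUrl("https://example.com/graphql")
    // The normalized cache stores entities by ID, so data fetched for one screen
    // (a post summary in the feed, say) can be served instantly to another screen
    // (the post detail page) without a network round trip.
    .normalizedCache(MemoryCacheFactory(maxSizeBytes = 10 * 1024 * 1024))
    .build()

// Individual queries can then opt into cache-first behavior with
// FetchPolicy.CacheFirst on the generated operation call.
```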
These cleanup celebrations also had other upsides. Users noticed and sentiment analysis improved. Our binary got smaller and our app startup and runtime improved demonstrably. Our testing infrastructure also became faster, more scalable, and cost-effective as the app performance improved. As we phased out legacy code, maintenance burdens on teams were lessened, simplifying on-call runbooks and reducing developer navigation through outdated code. This made it easier to prioritize stability and performance, as developers worked with a cleaner, more consistent codebase. Consequently, developer satisfaction increased as build times and app size decreased.
Figure 4: App Size & Complexity Go Down. Developer Happiness Go Up.
By early 2024, we completed this comprehensive modularization, enabling major feature teams—such as those working on feeds, video players, and post details—to rebuild their components within modern frameworks with high confidence that on the other side of those migrations, their feature velocity would be greater and they’d have a solid foundation to build for the future in more performant ways. For each of the tech stack choices we’ve made, we’ve invested in continuously improving the developer experience around those choices so teams have confidence in investing in them and that they get better and more efficient over time.
Affording Test Infrastructure When Your CI Times Are Already Off The Charts
By transitioning to a monorepo structure modularized by feature and adopting a modern tech stack, we’ve made our codebase honor separation of concerns and become much more testable, maintainable, and pleasant to work in. Teams can now work on features and app stability/performance in tandem, with a stronger quality focus, instead of having to choose one or the other. This shift not only enhanced our development efficiency but also allowed us to implement robust test infrastructure. By paying down developer experience and performance debt, we can now afford to spend some of our resources on much more robust testing strategies. We improved our unit test coverage from 5% to 70% and introduced intelligent test sharding, leading to sustainable cycle times. As a result, teams could more rapidly address stability and performance issues in production and develop tests to safeguard those fixes going forward.
Figure 5: Android Repo Unit Test Coverage Safeguarding App Stability & Performance
Our modularization efforts have proven valuable, enabling independent feature teams to build, test, and iterate more effectively. This autonomy has also strengthened code ownership and streamlined issue triaging. With improved CI times now in the 30 minute range @ p90 and extensive test coverage, we can better justify investments in test types like performance and endurance tests. Sharding tests for performance, introducing a merge queue to our monorepo, and providing early PR results and artifacts have further boosted efficiency.
By encouraging standardization of boilerplate, introducing checks and golden paths, we’ve decoupled some of the gnarliest problems with our app stability and performance while being able to deliver tools and frameworks that help all teams have better observability and metrics insights, in part because they work in stronger isolation where attribution is easier. Teams with stronger code ownership are also more efficient with bug fixing and more comfortable resolving not just crashes but other types of performance issues like memory leaks and startup regressions that crop up in their code.
Observe All The Things! …Sometimes
As our app-wide stability and performance metrics stabilized and moved into healthier territory, we looked for ways to safeguard those improvements and make them easier to maintain over time.
We did this a few key ways:
- We introduced on-call programs to monitor, identify, triage and resolve issues as they arose, when fixes are most straightforward.
- We added reporting and alerting as CI checks, experiment checks, deployment checks, Sourcegraph observability and real-time production health checks.
- We took on second-degree performance metrics like ANRs and memory leaks and used similar patterns to establish, improve, and maintain those metrics in healthy zones.
  - We leveraged Google’s ANR guide to make improvements.
  - We leveraged Square’s LeakCanary 2 and 3 for memory leaks.
- We scaled our beta programs to much larger communities for better signals on app stability and performance issues prior to deployments
- We introduced better observability and profiling tooling for detection, debugging, tracing, and root cause analysis, including Perfetto for tracing and Bitdrift for debugging critical-path beta crashes
- We introduced screen-level performance metrics, allowing teams to see how code changes impacted their screen performance with metrics like time-to-interactive, time to first draw, and slow and frozen frame rates.
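Our screen-level metrics pipeline is homegrown, but the core frame-health signal can be sketched with the AndroidX JankStats library; the activity name and what you do with jank frames below are assumptions for illustration only.

```kotlin
import android.app.Activity
import android.os.Bundle
import androidx.metrics.performance.JankStats

// Sketch of collecting per-screen frame health with AndroidX JankStats.
class FeedActivity : Activity() {

    private lateinit var jankStats: JankStats

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        jankStats = JankStats.createAndTrack(window) { frameData ->
            // isJank flags frames that blew past their deadline; aggregating these
            // per screen yields slow/frozen frame rates like the ones we track.
            if (frameData.isJank) {
                // e.g. increment a counter tagged with the current screen name (hypothetical sink)
            }
        }
    }

    override fun onResume() {
        super.onResume()
        jankStats.isTrackingEnabled = true
    }

    override fun onPause() {
        jankStats.isTrackingEnabled = false
        super.onPause()
    }
}
```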
Today, identifying the source of app-wide regressions is straightforward. Feature teams use screen-specific dashboards to monitor performance as they add new features. Experiments that introduce stability or performance issues are automatically flagged and frozen for review and improvement.
Our performance dashboards help with root cause analysis by filtering data by date, app version, region, and more. This allows us to pinpoint issues quickly:
- Problem in a specific app version? Likely from a client update or experiment.
- Problem not matching app release adoption? Likely from an experiment.
- Problem across Android and iOS? Check for upstream backend changes.
- Problem in one region? Look into edge/CDN issues or regional experiments.
We also use trend dashboards to find performance improvement opportunities. For example, by analyzing user engagement and screen metrics, we've applied optimizations like code cleanup and lazy loading, leading to significant improvements. Recent successes include a 20% improvement in user first impressions on login screens and up to a 70% reduction in frozen frame rates during onboarding. Code cleanup in our comment section led to a 77% improvement in frozen frame rates on high-traffic screens.
These tools and methods have enabled us to move quickly and confidently, improving stability and performance while ensuring new features are well-received or quickly reverted if necessary. We’re also much more proactive in keeping dependencies updated and leveraging production insights to deliver better user experiences faster.
Obfuscate & Shrink, Reflect Less
We have worked closely with partners in Google Developer Relations to find key opportunities for more performance improvements and this partnership has paid off over time. We’ve resolved blockers to making larger improvements and built out better observability and deployment capabilities to reduce the risks of making large and un-gateable updates to the app. Taking advantage of these opportunities for stability, performance, and security gains required us to change our dependency update strategy to stay closer to current than Reddit had in the past. These days, we try to stay within easy update distance of the latest stable release on critical dependencies and are sometimes willing to take more calculated upgrade risks for big benefits to our users because we can accurately weigh the risks and rewards through observability, as you’ll see in a moment.
Let’s start with how we optimized and minified our release builds to make our app leaner and snappier. We’d been using R8 for a long time, but enabling R8 “Full Mode” with its aggressive optimizations took some work, especially cleaning up code that still leveraged legacy reflection patterns and resolving a few other blockers to strategic dependency updates first. Once we had R8 Full Mode working, we kept it baking internally and in our beta for a few weeks and timed the release for a week when little else was going to production, in case we had to roll it back. Luckily, the release went smoothly and we didn’t need to use any contingencies, which then allowed us to move on to our next big updates. In production, we saw an immediate improvement of about 20% in the percentage of daily active users who experienced at least one Application Not Responding (ANR) event. In total, ANRs for the app dropped by about 30%, largely driven by optimizations improving setup time in dependency injection code, which makes sense. There’s still a lot more we can do here. We still have too many DEX files and are working to improve this area, but we got the rewards we expected out of this effort and it continues to pay off in terms of performance. Our app ratings, especially around performance, got measurably better when we introduced these improvements.
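The Gradle side of this change is small; most of the effort went into hunting down reflection and writing keep rules. As a rough sketch, assuming a Kotlin DSL build file and a recent AGP version:

```kotlin
// build.gradle.kts (app module): release build with R8 shrinking and obfuscation.
android {
    buildTypes {
        release {
            isMinifyEnabled = true      // R8 code shrinking + obfuscation
            isShrinkResources = true    // drop unused resources
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"    // keep rules for the reflection-based code that remains
            )
        }
    }
}
// On older AGP versions, full mode is opted into via gradle.properties:
//   android.enableR8.fullMode=true
// On AGP 8.0+ it is the default, and the migration cost is mostly auditing
// reflection call sites and adding the keep rules they still need.
```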
Major Updates Without Major Headaches
You can imagine that with a big monolith and slow build times, engineers were not always inclined to update dependencies or make changes unless absolutely necessary. Breaking up the app monolith, having better observability and incident response turnaround times, and making the developer experience more reasonable has led to a lot more future-facing requests from engineering. For example, there's been a significant cultural shift at Reddit in mobile to stay more up to date with our tooling and dependencies and to chase improvements in framework APIs for improved experiences, stability, and performance, instead of only updating when compelled to.
We’ve introduced tooling like Renovate to help us automate many minor dependency updates but some major ones, like Compose upgrades, require some extra planning, testing, and a quick revert strategy. We had been working towards the Compose 1.6+ update for some time since it was made available early this year. We were excited about the features and the performance improvements promised, especially around startup and scroll performance, but we had a few edge-case crashes that were making it difficult for us to deploy it to production at scale.
We launched our new open beta program with tens of thousands of testers, giving us a clear view of potential production crashes. Despite finding some critical issues, we eventually decided that the benefits of the update outweighed the risks. Developers needed the Compose updates for their projects, and we anticipated users would benefit from the performance improvements. While the update caused a temporary dip in stability, marked by some edge-case crashes, we made a strategic choice to proceed with the release and fix forward. We monitored the issues closely, fixed them as they arose, and saw significant improvements in performance and user ratings. Three app releases later, we had identified and resolved the edge cases and achieved our best stability and performance on Android to date.
Results-wise? We saw improvements across the app, and it was a great exercise in testing all our observability. We saw app-wide cold start improvements in the 20% range @ p50 and app-wide scroll performance improvements in the 15% range @ p50. We also saw marked improvements on lower-end device classes and stronger improvements in some of our target emerging-market geos. These markets are often more sensitive to app size, startup ANRs, and performance constraints, so it makes sense they would see outsized benefits from work like this.
Figure 7: App Start Benchmark Improvements
We also saw:
- Google Play App Vitals: Slow Cold Start Over Time improved by ~13%, sustained.
- Google Play App Vitals: Excessive Frozen Frames Over Time improved by over ~10%, sustained.
- Google Play App Vitals: Excessive Slow Frames Over Time improved by over ~30%, sustained.
These were sweeping changes, so we also took the opportunity to check our screen-level performance metrics and noted that every screen that had been refactored for Compose (almost 75% of our screens these days) saw performance improvements. In practice, no single screen was driving the overall app improvements from the update; any screen that had modernized (Core Stack/Compose) saw benefits. As an example, we focused on the Home screen and saw about a 15% improvement in scroll performance @ p50, which brought us into a similar performance zone as our iOS sister app, while p90s remain significantly worse on Android, mostly because we support a much broader variety of lower-end hardware at different price points for worldwide Android users.
Figure 8: App-Wide Scroll Performance Improvements & Different Feeds Impacted By the Compose Update
The R8 and Compose upgrades were non-trivial to deploy in relative isolation and stabilize, but we feel like we got great outcomes from this work for all teams who are adopting our modern tech stack and Compose. As teams adopt these modern technologies, they pick up these stability and performance improvements in their projects from the get-go, not to mention the significant improvements to the developer experience by working solely in modularized Kotlin, MVVM presentation patterns, Compose and GraphQL. It’s been nice to see these improvements not just land, but provide sustained improvements to the app experiences.
Startup and Baseline Profiles As the Cherry On Top of the Banana Split That Is Our Performance Strategy
Because we’ve invested in staying up-to-date in AGP and other critical dependencies, we are now much more capable of taking advantage of newer performance features and frameworks available to developers. Baseline profiles, for example, have been another way we have made strategic performance improvements to feature surfaces. You can read all about them on the Android website.
Recently, Reddit introduced and integrated several Baseline Profiles on key user journeys in the app and saw measurable improvements to our performance metrics. Baseline Profiles are easy to set up and leverage and sometimes demonstrate significant improvements to app runtime performance. We did an audit of important user journeys and partnered with several orgs, from feeds and video to subreddit communities and ads, to leverage Baseline Profiles and see what sorts of improvements we might see. We’ve added a handful to the app so far and are still evaluating more opportunities to leverage them strategically.
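For context on what “easy to set up” looks like, here is a sketch of a profile generator using the Jetpack Macrobenchmark BaselineProfileRule (androidx.benchmark 1.2+). The package name and the journey being recorded are placeholders, not our actual generator.

```kotlin
import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Runs on a device/emulator and records the classes and methods touched on a key
// journey so they can be ahead-of-time compiled at install time.
@RunWith(AndroidJUnit4::class)
class CommunityFeedBaselineProfile {

    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect(
        packageName = "com.example.app" // placeholder package name
    ) {
        pressHome()
        startActivityAndWait() // launch the default activity
        // Scroll the feed a bit so the critical scrolling path is captured too.
        device.swipe(
            device.displayWidth / 2, device.displayHeight * 3 / 4,
            device.displayWidth / 2, device.displayHeight / 4,
            20
        )
    }
}
```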
Adding a baseline profile to our community feed, for example, led to:
- ~15% improvement in time-to-first-draw @ p50
- ~10% improvement to time-to-interactive @ p50
- ~35% improvement in slow frames @ p50
We continue to look for more opportunities to leverage baseline profiles and ensure they are easy for teams to maintain.
Cool Performance Metrics, But How Do Users Feel About Them?
Everyone always wants to know how these performance improvements impact business metrics and this is an area we are investing in a lot lately. Understanding how performance improvements translate into tangible benefits for our users and business metrics is crucial, and we are still not good at flexing this muscle. This is a focus of our ongoing collaboration with our data science team, as we strive to link enhancements in stability and performance to key metrics such as user growth, retention, and satisfaction. Right now? We really want to be able to stack rank the various performance issues we know about to better prioritize work.
We do regularly get direct user validation for our improvements, and Google Play insights can be of good use on that front. A striking example is the immediate correlation we observed between app-wide performance upgrades and a substantial increase in positive ratings and reviews on Google Play. Notably, these improvements had a particularly pronounced impact on users with lower-end devices globally, which aligns seamlessly with our commitment to building inclusive communities and delivering exceptional experiences to users everywhere.
Figure 9: Quelle Surprise: Reddit Users Like Performance Improvements
So What’s Next?
Android stability and performance at Reddit are at their best in years, but we recognize there is still much more to be done to deliver exceptional experiences to users. Our approach to metrics has evolved significantly, moving from a basic focus to a comprehensive evaluation of app health and performance. Over time, we’ve incorporated many other app health and performance signals and expanded our app health programs to address a wider range of issues, including ANRs, memory leaks, and battery life. Not all stability issues are weighted equally these days. We’ve started prioritizing user-facing defects much higher and built out deployment processes as well as automated bug triaging with on-call bots to help maintain engineering team awareness of production impacts to their features. Similarly on the performance metrics side, we moved beyond app start to also monitor scroll performance and address jank, closely monitor video performance, and we routinely deep-dive screen-based performance metric regressions to resolve feature-specific issues.
Our mobile observability has given us the ability to know quickly when something is wrong, to root-cause quickly, and to tell when we’ve successfully resolved a stability or performance issue. We can also validate that the updates we make, be it a Compose update or an ExoPlayer upgrade, are delivering better results for our users, and use that observability to go hunting for opportunities to improve experiences more strategically now that our app is modularized and sufficiently decoupled and abstracted. While we wouldn’t say our app stability and performance are stellar yet, we are on the right path, and we’ve clawed our way up into the industry-standard ranges amongst our peers from some abysmal numbers. Building out great operational processes, like deployment war rooms and better on-call programs, has helped support better operational excellence around maintaining those app improvements and expanding upon them.
These days, we have a really great mobile team that is committed to making Android awesome and keeping it that way, so if these sorts of projects sound like compelling challenges, please check out the open roles on our Careers page and come take Reddit to the next level.
ACKs
These improvements could not have been achieved without the dedication and support of every Android developer at Reddit, as well as our leadership’s commitment to prioritizing stability and performance and fostering a culture of quality across the business. We are also deeply grateful to our partners in performance on the Google Developer Relations team. Their insights and advice have been critical to our success in making improvements to Android performance at scale with more confidence. Finally, we appreciate that the broader Android community is open and so willing to talk shop and workshop insights, tooling ideas, architecture patterns, and successful approaches to better serve value to Android users. Thank you for sharing what you can, when you can, and we hope our learnings at Reddit help others deliver better Android experiences as well.