The most-used apps on my phone in 2026 are mostly worse than they were in 2018. Not in every dimension — some have better security, some have improved accessibility features, some are noticeably faster on current hardware than they were on the hardware of their era. But the day-to-day experience of using them has gotten denser, more cluttered, more cognitively demanding, and more frequently interrupted by features I did not ask for.

This is not a nostalgia argument. The 2018 versions of these apps were not perfect; they had bugs, they had usability problems, they had performance issues that have since been fixed. But the steady accumulation of features over the last several years has produced specific user-experience harms that the additions have not justified, and the structural reasons for the pattern are worth examining.

The pattern

A consumer software product launches with a focused use case. It does that thing well. Users come; users like it; users tell other users.

The product accrues a development team, a product-management organization, and the metrics infrastructure that comes with both. The metrics measure engagement (how often users open the app, how long they stay, how many features they touch), retention (how often they come back), and growth (how many new users sign up). The metrics do not, generally, measure user satisfaction with the original use case, or the quality of the experience for the users who liked the original product.

The development team needs to ship things, because shipping things is what justifies the headcount and the budget. The product-management organization needs to demonstrate impact on the metrics, because demonstrated impact is what justifies promotions and bonuses. The combination produces a steady stream of new features, each of which can be justified by a small lift in some metric.

The features accumulate. The interface gets denser. The original use case becomes harder to find under the layers of additional functionality. The users who liked the original product complain; the metrics show them as a small and shrinking percentage of the user base; the product organization concludes that the new features are working, because the new features are bringing in new users to replace the unhappy original ones.

This is not a hypothetical pattern. It is the observable history of a substantial fraction of major consumer-software products over the last decade.

The recent specific case

The current iteration of this pattern — the one happening across nearly every major consumer-software product simultaneously — is the addition of AI features. Notification summaries, writing assistants, image generation, “AI search,” automated suggestions, in-app chatbots. Most of these features were rushed to ship in 2024 and 2025 to demonstrate AI capabilities to investors and to hit competitive parity with other companies that were doing the same.

Many of these features are bad in ways their own developers recognize. The writing assistants produce text that is technically correct and substantively bland. The image generators produce images of declining novelty as their outputs flood the visual environment they were supposed to enrich. The notification summaries occasionally compress important alerts into misleading single-line condensations. The AI search features return results that are confidently wrong on questions where being right is the entire point of asking.

These features have been added to apps where they were not requested by users, where they often actively get in the way of the use case the user opened the app for, and where they are costly to maintain in ways the user pays for through degraded performance and higher subscription prices. They have been added because the metrics organization wanted to demonstrate AI engagement, because the product organization wanted to ship the feature, and because the executive layer wanted to be able to put “AI-powered” on the product page.

The user, who opened the app to do a specific thing, has been recruited into someone else’s product strategy.

The deeper problem

The deeper problem is that consumer software is now developed and operated under a structure that has little incentive to make the product better for the existing users. The existing users have already paid; the existing users are not the ones whose marginal acquisition will move the next quarter’s metric; the existing users are, increasingly, treated as a captive audience whose tolerance for product changes is the actual upper bound on what the organization can do.

The metric for the unhappy existing user is “churn.” The product organization measures it, the product organization optimizes against it, and the product organization is largely fine with anything that keeps churn below a tolerable threshold. The fact that the existing user, who is staying, is meaningfully less happy with the product than they were two years ago — but is staying because the switching cost is high, or because there is no clear alternative, or because the user has not gotten around to migrating yet — does not produce a metric that anyone in the product organization is responsible for moving.
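The gap between the two measurements is easy to illustrate with toy numbers (entirely invented for this sketch, not drawn from any real product): a cohort whose monthly churn never crosses the tolerable threshold the organization watches, while its satisfaction with the product declines steadily, and nothing in the churn figure registers the decline.

```python
# Toy illustration with invented numbers: monthly churn rate vs. average
# satisfaction score for the same cohort of existing users over eight months.
churn = [0.021, 0.019, 0.022, 0.020, 0.021, 0.019, 0.020, 0.022]
satisfaction = [4.3, 4.2, 4.0, 3.8, 3.7, 3.5, 3.4, 3.2]  # on a 1-5 scale

threshold = 0.03  # the hypothetical "tolerable" churn ceiling

# Churn never trips the alarm anyone is responsible for watching...
assert all(c < threshold for c in churn)

# ...while satisfaction quietly falls by more than a full point.
decline = satisfaction[0] - satisfaction[-1]
print(f"max monthly churn: {max(churn):.3f} (threshold {threshold})")
print(f"satisfaction decline: {decline:.1f} points on a 5-point scale")
```

Both series describe the same users; only the first one has an owner, a dashboard, and an alert.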

This is a structural problem, not a problem of bad individuals. The product managers I know who run these features are mostly thoughtful, well-intentioned people who would prefer not to ship things they don’t believe in. They ship them because the structure of their job requires it.

What would help

A few changes that would help, in increasing order of unlikelihood:

Periodic feature deletion as a normal part of product maintenance. Most products would benefit from removing 10 to 30 percent of their current features. This almost never happens, because deletion produces a small but vocal minority of users whose use case depended on the feature, and product organizations are structured to avoid antagonizing that minority.

Subscription models that explicitly promise feature stability rather than feature growth. “Pay us $5 a month and we promise to ship security updates and bug fixes and not much else.” A few products in the productivity tool space have moved toward this; most have not.

Honest measurement of user satisfaction with the original use case, separately from engagement metrics. Some companies do post-purchase satisfaction surveys; few continue measuring satisfaction with specific functions over time. The ones that do tend to be the ones whose products age more gracefully.

Recognition that the existing user is not a captive audience. Switching costs are high, but they are not infinite. The user who has been frustrated for two years and who finally finds a viable alternative will leave, and the longer the existing-user dissatisfaction has accumulated, the more dramatic the switch will look when it happens.

Where this leaves the user

The user, who has read this far, may have been hoping for a specific recommendation about which apps to use. The honest answer is that there are no great alternatives in most categories. The pattern this piece describes is industry-wide, and the alternatives that exist are usually small, less polished, and themselves at risk of falling into the same pattern as they grow.

The most useful posture I have come to is one of cautious portability. Choose apps whose data-export formats are well documented, so that switching costs stay finite. Maintain skepticism toward marketing claims of new capabilities. Pay for software when paying for software allows you to skip the engagement-focused free tier. And, when an app you have used for years has accumulated enough cruft to be actively unpleasant, give yourself permission to migrate.

The structural problem will not be solved by individual users. But the structural problem also reveals itself one user at a time.