User experience and app economics, as we know them, will become relics as AI disrupts how software is marketed, sold, and used. Traditional apps, the cornerstone of mobile and desktop experiences, are in decline as more dynamic, AI-driven interfaces absorb functionality that once required tapping an app icon. What's a developer to do?
The shift mirrors historical transitions in user interface design and signals a new era in which operating system (OS)-level AI becomes the focal point of our digital lives. The next primary user interface hasn't taken shape, and AI's evolution beyond today's command-line-like interactions will leave independent software vendors struggling to find and keep their current audiences. Today's AI interaction patterns hint at a still-uninvented user experience paradigm, one that Anthropic's Model Context Protocol (MCP) gestures toward but is far from realizing in its current form.
Declining App Usage
AI promises to eliminate the need for apps, instead connecting users to many intelligences that deliver functionality presently encapsulated in apps. People are tired of swiping through screens of icons and are already heading for the exits.
Recent data shows a downturn in app downloads, and the trend appears to be accelerating. In 2024:
- Global app downloads declined year-over-year by 2.3%
- iOS downloads decreased by 1.1%
- Android downloads fell by 2.6%
- Far fewer than half of apps on a device are used monthly.
Even as consumers use fewer downloaded apps, a trend projected to continue at about 1% per year, they are spending more within the apps they keep: in-app revenues increased by 15.7% in 2024. The message? People want comprehensive functionality from fewer applications. Now the "super-app" is giving way to the AI assistant as the primary interface, promising personalized experiences that do it all and ultimately eliminating the need for discrete apps.
The Command-Line Feel of Current AI Interactions
Many people struggle to communicate with AI systems, despite their "natural language" interfaces. Orchestrating intelligences, and advising users on which one to use for a given query, are two facets of the undiscovered AI UX. For now, the usability gap of the DOS and UNIX command lines has reopened, recalling the pre-browser era when navigating hypertext with Lynx or Gopher was cumbersome and required technical knowledge. A new UX paradigm will emerge from that gap, providing comprehensive access to humanity's accumulated knowledge on fair, financially rewarding terms.
Can ISVs Brand Within an AI Assistant?
Computer applications provide a structured sequence of functions for user input, data processing, and the presentation of results. In one sense, they are the UI layer for interacting with the system, peripherals, and the remote services connected to the device; in another, an app is the developer's branded business surface, their sole opportunity to build a relationship with users.
Using individual apps is so 2016. Apps are a procedural interface, far different from the conversational interactions that characterize AI user experience. Now, we want to type, talk, and show our world to AI, letting it decide what algorithm and expertise to apply to the input. Yet for now, the best option for getting a complete answer from one LLM is to ask another LLM to write the prompt. The promise of agentic AI, that it will access and leverage the capabilities of other services and apps on the device and in the cloud, describes the change needed but does not address the challenge of managing AI to achieve the best results.
AI collapses the steps between a user's input and the display of the result. In the app paradigm, comparing this year's profit margin to last year's means opening Excel or QuickBooks, entering any missing data, and generating a P&L statement within the application's user experience (UX). The resulting document can be saved, printed, or shared. Today, AI allows a non-financial user to simply ask for their profit margin and receive a summary with options for sharing or printing; the AI makes decisions at every step that eliminate the need to navigate to the app, click the icon, open the correct window, and so forth.
Agentic AI promises to handle complex tasks, anticipate user needs, and personalize interactions in real time, eliminating the need for a discrete app. This shift will diminish the relevance of app icons as gateways to digital services—a concept rooted in an outdated computing model that focuses on applications instead of applying intelligence to solve user needs.
With Siri, Copilot, Gemini, and other models increasingly integrated directly into the operating system, third-party developers face a challenge in reaching and connecting with users through the OS-level AI assistant. These assistants are the new gatekeepers in the $673 billion software market.
AI Integration into Operating Systems
AI integration into operating systems is a move toward unitary OSes that orchestrate data access, inference services, and functionalities traditionally provided by third-party applications. For example, Microsoft CEO Satya Nadella envisions AI agents replacing traditional software and SaaS platforms, eliminating conventional interfaces and static business logic.
This UX evolution has significant implications for developers and their ability to monetize code. At Intentional Futures, we expect platform and operating system giants to focus on orchestrating AI functionalities, offering to share revenue with partners who provide specific, specialized intelligence, algorithms, and data sets to augment the analysis of user queries. Think of an Intelligence Store, a next-generation app store that serves as a promotional platform and transactional service, connecting users dynamically to intelligence services.
Developers must support algorithmic integration and interactions with third-party services in this new paradigm. In the best-case scenario, traditional apps and the icons on our home screens will be relegated to handling output formatting, user preference settings, and subscription management. In the worst case, third-party access to system-level AI will be exiled to the depths of system preferences settings.
How will independent software vendors (ISVs) compete in that environment? Software will no longer be downloaded and installed on a device, waiting for a user's need to justify finding, opening, and interacting with an app.
The Foundations of an Intelligence Store: MCP + A2A
The Intelligence Store is emerging as the next-generation economic model for the AI-native ecosystem in which users invoke agents, models, and knowledge services dynamically. Instead of tapping app icons, users make requests, and agents decide which tools, models, and data to engage with behind the scenes, just in time. For this to scale, we need foundational protocols that enable seamless, secure, and monetizable interoperability between heterogeneous AI systems.
The Model Context Protocol (MCP), introduced by Anthropic and now supported by OpenAI, has become the most discussed candidate for enabling multi-agent, multi-model workflows. It allows AI models to structure and share contextual memory, enabling agent-to-agent handoffs, persistent user state, and context-rich model orchestration across vendors. MCP represents a generational shift—analogous to Vint Cerf and Robert Kahn's 1974 proposal for TCP, the protocol that underpins the internet. MCP may do for intelligence what TCP/IP did for connectivity.
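To make the protocol concrete, here is a minimal sketch of an MCP-style tool invocation. MCP messages follow JSON-RPC 2.0; the tool name (`profit_margin`), its arguments, and the toy handler below are hypothetical illustrations, not an actual MCP server implementation.

```python
import json

# An MCP-style request: JSON-RPC 2.0 envelope around a tool call.
# The tool name and arguments here are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "profit_margin",
        "arguments": {"revenue": 120_000, "cost": 84_000},
    },
}

def handle(req: dict) -> dict:
    """Toy server-side handler: run the tool and wrap the result."""
    args = req["params"]["arguments"]
    margin = (args["revenue"] - args["cost"]) / args["revenue"]
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": f"Margin: {margin:.0%}"}]},
    }

print(json.dumps(handle(request), indent=2))
```

Because every vendor speaks the same envelope, any MCP-capable model can discover and invoke a tool without bespoke integration code—the property that invites the TCP/IP comparison.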
But MCP alone is not enough.
Into the gap steps Google's recently unveiled Agent2Agent (A2A) protocol. Where MCP focuses on contextual continuity—passing memory, user goals, and state between models—A2A provides another layer of the coordination infrastructure: managing how agents discover, invoke, authenticate, and delegate to one another in a structured, permissioned, and goal-driven manner.
Together, MCP and A2A point toward a new open agent economy:
- MCP as the semantic layer (what we're doing, why, and with what memory)
- A2A as the coordination layer (who does it, with what capabilities, under what contract)
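The division of labor between the two layers can be sketched with toy types. Neither class below reflects the actual MCP or A2A schemas; the agent name and fields are invented to show how shared context (semantic layer) might be turned into a delegation contract (coordination layer).

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """MCP-flavored: what we're doing, why, and with what memory."""
    goal: str
    memory: list[str] = field(default_factory=list)

@dataclass
class Contract:
    """A2A-flavored: who does it, with what capability, to what end."""
    delegate_to: str
    capability: str
    expected_result: str

def plan(ctx: Context) -> Contract:
    """A coordinating agent turns shared context into a delegation contract."""
    return Contract(
        delegate_to="finance-agent",   # hypothetical specialist agent
        capability="summarization",
        expected_result=f"summary of: {ctx.goal}",
    )

ctx = Context(goal="Q3 vs Q2 profit margin", memory=["user prefers charts"])
print(plan(ctx))
```

The point of the split is that either layer can evolve independently: richer memory formats do not change how agents contract with one another, and new delegation rules do not disturb shared context.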
What A2A Adds to the Stack
Google's A2A protocol proposes features that directly fill gaps in current agentic interoperability frameworks and help build a foundation for the Intelligence Store:
- Agent Discovery & Capability Publishing. Agents can publish their capabilities (e.g., summarization, database lookup, transaction processing), allowing others to discover and delegate workloads based on the task context.
- Intent Routing & Contracting. A2A supports the creation of temporary agent contracts—where one agent requests another to perform a sub-task and defines the expected result, accountability, and bounds of the interaction.
- Delegation & Subtask Hierarchies. A2A supports hierarchical agent relationships: master agents can delegate subtasks to specialists, allowing complex workflows to unfold across multiple services without central coordination.
- Access Control & Trust Management. Through structured access permissions, agents can share or restrict data, define context scope, and enforce constraints on how memory and identity propagate across service boundaries.
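The first two items above—capability publishing and discovery-driven delegation—can be sketched as a toy in-memory registry. The registry API and agent names are invented for illustration; real A2A agents publish machine-readable capability descriptions rather than registering with a single shared dictionary.

```python
# capability -> list of agents offering it (toy stand-in for A2A discovery)
registry: dict[str, list[str]] = {}

def publish(agent: str, capabilities: list[str]) -> None:
    """An agent advertises what it can do."""
    for cap in capabilities:
        registry.setdefault(cap, []).append(agent)

def discover(capability: str) -> list[str]:
    """Another agent looks up who can handle a subtask."""
    return registry.get(capability, [])

publish("summarizer-agent", ["summarization"])
publish("ledger-agent", ["database lookup", "transaction processing"])

# A coordinating agent delegates to the first matching specialist.
candidates = discover("transaction processing")
assert candidates, "no agent offers this capability"
print(f"Delegating to {candidates[0]}")
```

In a real deployment the registry would be distributed and permissioned, which is where the access-control and trust-management features in the list come in.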

Filling the Gaps: Toward a Fully Operational Intelligence Store
While MCP is rapidly becoming the shared context substrate for AI models, it lacks core transactional and trust primitives needed to support commercial use cases. With the complementary architecture proposed by A2A, we begin to see the contours of an Intelligence Store framework. In this marketplace, models, tools, and data services can be composed, priced, and invoked on demand.
But the work is not done yet. Several capabilities, once developed and standardized, could complete the Intelligence Store atop the A2A and MCP protocols:
- Transactional Management. Secure, metered, and auditable execution of AI services—with billing hooks, pricing transparency, and usage verification—must be native to the protocol layer, not bolted on.
- Dynamic Intelligence Pricing. A pricing layer that accounts for usage context, model complexity, compute cost, latency sensitivity, and data licensing that allows intelligence services to be priced as utilities or microservices.
- Robust Identity & Consent Infrastructure. End-users must control which agents can act on their behalf, what data they can see, and where that data travels. Identity across the mesh must be persistent yet portable, secure but federated.
- Model Provenance and Auditability. When invoking intelligence from third-party agents, users and regulators must be able to trace an inference's source, logic, and impact—essential for accountability in high-stakes contexts.
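The first gap, metered and auditable execution, can be sketched as a wrapper around an intelligence service. The flat per-call price, the audit-log format, and the service name below are all invented for illustration; a protocol-native version would standardize these rather than bolt them onto each service.

```python
import time
from functools import wraps

audit_log: list[dict] = []      # toy stand-in for an auditable usage record
PRICE_PER_CALL = 0.002          # hypothetical flat rate, in dollars

def metered(service_name: str):
    """Decorator: record timing and charge for every service invocation."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            audit_log.append({
                "service": service_name,
                "seconds": time.perf_counter() - start,
                "charge": PRICE_PER_CALL,
            })
            return result
        return wrapper
    return decorator

@metered("margin-analyzer")     # hypothetical intelligence service
def margin(revenue: float, cost: float) -> float:
    return (revenue - cost) / revenue

print(margin(120_000, 84_000))
print(sum(entry["charge"] for entry in audit_log))  # total billed so far
```

Dynamic pricing would replace the flat rate with a function of context, compute cost, and licensing, but the hook points—before and after each invocation—stay the same.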
MCP + A2A: The Emerging Protocol Layer for the AI Economy
MCP and A2A suggest a shift away from static apps and siloed models toward a composable, transparent, and decentralized marketplace for intelligence—the Intelligence Store. The next IT infrastructure will support a world where agents negotiate and collaborate across vendors, users summon expertise without knowing the brand behind it, and intelligence flows as freely as data once did.
The Intelligence Store is more than a metaphor. It represents the next platform opportunity, an open protocol layer that will become as foundational as HTTP, TCP/IP, and OAuth. But to achieve that promise, these protocols must remain open, pluralistic, and embedded with public interest values.
Future of AI-Driven Expertise Access
In the future, users may access AI models that offer specialized expertise in tasks such as idea evaluation, research, and business planning. An integrated, dynamic AI environment will enable both short-term and continuous access to specialized knowledge, as well as the transaction processing needed to animate an Intelligence Store. Third-party developers could offer specialized AI services, creating a vibrant intelligence marketplace.
Imagine a scientist collaborating on a Theory of Everything with an Einsteinian model trained on all of Einstein's work, or a teacher bringing together Aristotle, Plato, and Lao Tzu to engage students in moral philosophy. Enhanced MCP and A2A capabilities, including transaction management and pricing mechanisms, could facilitate these interactions.
The Time to Adjust
More than two years of AI investigations by Intentional Futures have helped our clients illuminate the path forward for applying AI to education, public goods delivery, strategic planning, daily workflows in scientific and industrial processes, and coaching on a wide range of topics.
Developers need to begin planning for a post-app world. User experience is changing, evolving from app-based interactions toward a unitary AI-enabled user interface. That transformation signals the decline of standalone applications and mirrors historical UI evolutions aimed at improving user experience.
Before the arrival of the new interface that will unite all these services in a valuable partnership with the user, significant improvements in usability, and in frameworks such as MCP and A2A, are needed to realize the full potential of AI integration. We are beginning a new computing era in which technology will seamlessly align with individual needs and preferences. Much work remains to get there.