Insider Brief
- Agentic AI will shift the web from human-driven navigation to autonomous, goal-directed task execution by software agents, researchers report in a recent study.
- Web architecture will evolve into an agent-native ecosystem where pages act as active software entities and services compete for agent selection.
- Economic models will move from human-targeted advertising to an “Agent Attention Economy” based on service invocation fees and agent-oriented ranking.
The internet is entering what researchers call its third major era — one defined not by search or algorithmic feeds, but by autonomous software agents that plan, negotiate, and execute tasks without human step-by-step direction.
A team of researchers from institutions including Shanghai Jiao Tong University, the University of California, Berkeley, the University of Liverpool, and the Hong Kong University of Science and Technology describes this transition as the “Agentic Web.” In their paper, Agentic Web: Weaving the Next Web with AI Agents, the researchers define it as “a distributed, interactive internet ecosystem in which autonomous software agents, often powered by large language models, act as autonomous intermediaries that persistently plan, coordinate, and execute goal-directed tasks.”
Unlike today’s web, where humans navigate pages, enter queries and manually initiate actions, the Agentic Web shifts the primary actors to AI agents. These agents interact with one another and with services to complete multi-step workflows, potentially spanning multiple platforms and days, on behalf of human users.
From Search to Feeds to Actions
The study, which the researchers posted on the pre-print server arXiv, frames the change as the next stage in a historical progression. The PC Web Era of the 1990s and early 2000s revolved around static pages, keyword search, and early advertising models based on pay-per-click systems. The Mobile Web Era, emerging in the late 2000s, introduced recommender systems, algorithm-curated feeds, and what became known as the attention economy, in which platforms competed for user engagement through personalized content delivery.
The Agentic Web Era, which the authors date to the mid-2020s, shifts the paradigm again — this time, from information retrieval and recommendation to autonomous execution. Here, the human role changes from operator to goal-setter.
In this model, a flight booking no longer requires a multi-tab search, manual price comparisons, and direct purchase. An agent receives a goal — for example, “book a flight to New York next weekend within budget” — and executes the process autonomously. It can refine options, negotiate with other service agents, check loyalty points, coordinate with hotel bookings and finalize the transaction without further user action.
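The booking scenario above can be sketched as a small goal-to-steps loop. This is a toy illustration, not code from the paper: the service functions and preference policy are hypothetical stand-ins for the remote service agents an agent would negotiate with.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    price: float

# Hypothetical service endpoints; in the Agentic Web these would be
# remote agents reached over an inter-agent protocol, not local functions.
def search_flights(destination: str) -> list[Flight]:
    return [Flight("AirA", 420.0), Flight("AirB", 310.0), Flight("AirC", 530.0)]

def book(flight: Flight) -> str:
    return f"Booked {flight.airline} at ${flight.price:.0f}"

def booking_agent(destination: str, budget: float) -> str:
    """Turn the high-level goal into search -> filter -> purchase steps."""
    options = [f for f in search_flights(destination) if f.price <= budget]
    if not options:
        return "No flight within budget"
    best = min(options, key=lambda f: f.price)  # a trivial preference policy
    return book(best)

print(booking_agent("New York", budget=450.0))
```

The human supplies only the goal and constraint ("New York", budget); the agent owns every intermediate step.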
The same shift applies to complex research tasks. Where a user might once have searched for white papers, downloaded PDFs, extracted diagrams, and compiled results, an autonomous agent could now parse technical literature, integrate data from APIs, generate visualizations, and deliver a structured report in response to a high-level prompt.
Architecture and Economics in Transition
The study notes that the Agentic Web redefines the structural and economic underpinnings of the internet. Webpages evolve from static information nodes to active software agents with defined capabilities, interfaces, and roles in broader workflows. Hyperlinks — once purely navigational — become coordination channels enabling inter-agent communication and dynamic task decomposition.
In this environment, traditional search engines and web crawlers give way to agent orchestrators, systems that discover, rank and coordinate other agents based on reliability, cooperation success rates, and task relevance rather than link popularity. PageRank-like algorithms could be reimagined to measure agent contribution to multi-agent workflows.
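A toy version of such agent-oriented ranking might weight cooperation success and task relevance instead of link popularity. The scoring formula and registry fields below are illustrative assumptions, not the paper's proposal:

```python
def rank_agents(agents: list[dict]) -> list[dict]:
    """Order candidate agents by a reliability-weighted relevance score,
    a stand-in for a PageRank successor that measures contribution to
    multi-agent workflows rather than inbound links."""
    def score(a: dict) -> float:
        # cooperation success rate * task relevance, discounted by latency
        return a["success_rate"] * a["relevance"] / (1 + a["latency_s"])
    return sorted(agents, key=score, reverse=True)

registry = [
    {"name": "flights-v1", "success_rate": 0.98, "relevance": 0.7, "latency_s": 1.0},
    {"name": "flights-v2", "success_rate": 0.90, "relevance": 0.9, "latency_s": 0.5},
    {"name": "hotels-v1",  "success_rate": 0.99, "relevance": 0.2, "latency_s": 0.3},
]
print([a["name"] for a in rank_agents(registry)])
```

Here the most reliable agent is not automatically ranked first; relevance to the task at hand and responsiveness also count, which is the shift the researchers describe.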
Economically, the researchers identify a new “Agent Attention Economy,” where APIs, tools and services compete for selection and invocation by agents rather than human clicks. This alters the competitive landscape. Services may optimize for agent discovery in registries, adjust capabilities to rank higher in agent-facing recommendation systems, or participate in bidding systems designed for machine decision-making pipelines.
Future monetization models could shift toward service invocation fees, capability relevance scores and performance-based compensation, replacing or supplementing human-targeted advertising.
Enabling Technologies
The transition depends on the capabilities of large language models (LLMs) trained on web-scale data, which carry vast in-parameter knowledge: instead of looking everything up each time, the model already “remembers” facts, concepts, and relationships within its parameters, much like a very large, compressed library. These capabilities will likely be combined with live retrieval and reasoning. Protocols such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A) frameworks are highlighted as foundational to this shift.
MCP allows agents to dynamically discover system capabilities at runtime, maintain semantic context across multi-step workflows, and collaborate with privacy safeguards. A2A enables agents to form ad hoc coalitions, share intermediate results, and jointly pursue objectives. Together, these systems provide the infrastructure for persistent, coordinated machine-to-machine interaction.
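The discover-then-invoke pattern behind MCP can be sketched in a few lines. This is a schematic, not the actual MCP wire format or SDK; the registry, tool names, and handlers are invented for illustration.

```python
# Toy capability registry: an agent first asks what tools exist,
# then invokes one by name at runtime. Illustrative only.
TOOLS = {
    "get_weather": {
        "description": "Current weather for a city",
        "handler": lambda city: f"Sunny in {city}",
    },
}

def list_tools() -> list[dict]:
    """Discovery: return machine-readable capability descriptions."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(name: str, *args) -> str:
    """Invocation: dispatch to the tool an agent selected at runtime."""
    return TOOLS[name]["handler"](*args)

# The agent never hard-codes the tool set; it discovers, then chooses.
available = [t["name"] for t in list_tools()]
print(available)
print(call_tool("get_weather", "New York"))
```

The point is the separation: capabilities are described in a machine-readable form at runtime, so an agent can adapt when a service adds or changes tools.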
Multi-agent orchestration frameworks such as LangChain and AutoGen support modular coordination among specialized agents. The paper cites Microsoft’s NLWeb toolset, announced in 2025, as an example of efforts to convert human-oriented web interfaces into structured, agent-readable formats, avoiding brittle techniques like DOM scraping in favor of standardized, semantic access.
Shifts in Information Flow
In the Agentic Web, information is not only retrieved from static documents but also generated by agents for other agents. The study points out that much of the world’s knowledge now resides within the parameters of language models themselves, enabling agents to combine stored knowledge with real-time retrieval and synthesis.
This creates the possibility of agent-to-agent content production and consumption cycles in which information may never be rendered for human viewing. Agents can produce summaries, instructions, or datasets designed for other agents’ use, forming an autonomous information ecosystem.
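A minimal producer/consumer sketch makes this concrete: one agent emits a structured payload meant for machine use, and a peer agent consumes it without any human-facing rendering. The function names and payload schema are assumptions for illustration.

```python
import json

def producer_agent(documents: list[str]) -> str:
    """Summarize inputs into a machine-readable payload for peer agents."""
    payload = {
        "doc_count": len(documents),
        "keywords": sorted({w for d in documents for w in d.split()}),
    }
    return json.dumps(payload)

def consumer_agent(payload: str) -> str:
    """Consume the structured summary; no page is ever rendered."""
    data = json.loads(payload)
    return f"Received {data['doc_count']} docs, {len(data['keywords'])} keywords"

msg = producer_agent(["agentic web", "web agents"])
print(consumer_agent(msg))
```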
Risks and Governance
The researchers emphasize that the delegation of complex decision-making to autonomous agents introduces new risks. Technical threats include malicious tool injection, adversarial content generation, and exploitation of inter-agent communication channels. Economic risks could arise from manipulation of agent ranking systems or monopolization of agent registries.
Governance challenges include determining liability for agent actions, ensuring transparency in decision-making processes, and maintaining alignment with human intent in high-stakes domains such as finance, healthcare, or legal contracting. Because agents operate at machine timescales, regulatory frameworks may need to adapt to monitor and control transactions occurring faster than human oversight can match.
Toward an Agent-Native Internet
The paper envisions a future in which the web functions as an agent-native substrate rather than a human-readable medium. On the consumption side, agents continuously monitor, filter, and act on behalf of users, offering highly personalized and efficient execution of tasks. On the production side, content is increasingly generated for machine consumption, optimized for semantic parsing and orchestration.
In this configuration, human users interact with the web primarily through high-level directives, while agents handle the complexity of execution. The underlying web shifts from a repository of documents, to a curated feed, to an active, distributed network of autonomous actors generating, exchanging, and acting on information.
According to the study, this transformation marks “a fundamental change in how value is created and tasks are fulfilled on the Web.” If realized, it would reconfigure the technical architecture, economic incentives, and social dynamics of the internet — completing the progression from the search paradigm, to the recommendation paradigm, to the action paradigm.
For a more technical outline of the agentic web and its effect on the evolution of the internet, please review the researchers’ paper on arXiv. Researchers often use preprint servers, such as arXiv, to distribute their findings quickly, particularly in fast-moving fields like artificial intelligence. However, preprints have not undergone peer review, an important part of the scientific process.
The research team included Yingxuan Yang, Huacan Chai, Ying Wen, and Weinan Zhang, from Shanghai Jiao Tong University; Mulei Ma and Yang Yang, from The Hong Kong University of Science and Technology, Guangzhou; Yuxuan Huang and Meng Fang, from the University of Liverpool; Haoran Geng, Shangding Gu, Pieter Abbeel, Costas Spanos, and Dawn Song, from the University of California, Berkeley; Yuanjian Zhou, from Shanghai Innovation Institute and also affiliated with Shanghai Jiao Tong University; Muhao Chen, from the University of California, Davis; Ming Jin, from Virginia Tech; and Jun Wang, from University College London.




