
The Race Moved Underground

Week in AI Series · Technology · 1st May
This week confirmed what has been building for months: model intelligence is no longer the scarce resource in AI. GPT-5.5, DeepSeek V4, and OpenAI's arrival on Amazon Bedrock made the point in three different ways. The real competition has moved underground, into infrastructure spending, chip supply chains, governance frameworks, and the human judgment layer that decides where AI belongs in real work.

This week in artificial intelligence news, the model race gave way to a deeper contest over infrastructure, governance, and distribution. GPT-5.5, DeepSeek V4, and $600 billion in AI capex all pointed in the same direction: intelligence is cheap, but the systems around it are not.

The last seven days delivered a volume of AI news that would have been unthinkable two years ago. New frontier models from OpenAI and DeepSeek. A potential $40 billion investment from Google into Anthropic. OpenAI's models arriving on Amazon Bedrock. Over $600 billion in projected AI infrastructure spending from the hyperscalers alone. A classified Pentagon deal. A collapsed EU negotiation. A courtroom fight over who owns the future of AI itself. Taken individually, each of these stories has a clear headline. Taken together, they tell a different story entirely. The AI race did not slow down this week. It moved underground.

Intelligence found its price

OpenAI released GPT-5.5 with a focus on professional-grade reasoning, coding, research, and agentic execution.[1] DeepSeek answered with V4, featuring a one-million-token context window, aggressive API pricing, and versions adapted for Huawei's Ascend chips.[2] Within days, DeepSeek followed up by slashing V4-Pro prices by 75 percent.[3] On any other week, those would be the headline. This week, they were the setup.

The real signal came from distribution. OpenAI brought its latest models, Codex, and Managed Agents to Amazon Bedrock, giving enterprise customers a way to access OpenAI's technology without routing everything through Microsoft Azure.[4] Microsoft itself confirmed the end of its exclusive licence to OpenAI's technology, restructuring a partnership that had defined the early phase of the generative AI era.[5] At the same time, Qualcomm shares jumped on reports that OpenAI is working with chipmakers on AI smartphone processors targeted for 2028 mass production.[6] These moves make a single argument: the model is no longer the product. The model is becoming an ingredient, and the companies that win will be the ones controlling where that ingredient gets used, how it gets priced, and which workflows it enters.

For founders and small business owners evaluating AI content tools, this matters practically. The cost of accessing frontier intelligence is falling. The cost of knowing which intelligence to use, where to put it, and how to trust the output is not falling at all. That gap is where real competitive advantage now sits. A smarter model still needs to be served, governed, integrated, and trusted. The organisations that treat model access as a strategy will be disappointed. The ones that treat it as a starting point will not.

The trillion-pound plumbing problem

If the model layer is commoditising, the infrastructure layer is doing the opposite. Google Cloud revenue grew 63 percent, driven by enterprise AI demand.[7] Reuters reported that Microsoft, Alphabet, Amazon, Meta, and Oracle are projected to spend more than $600 billion on AI infrastructure in 2026, a figure that Citi subsequently framed within a global AI market forecast above $4 trillion.[8][9] US core capital goods orders rose 3.3 percent in March, the strongest monthly gain since 2020, with AI investment and data centre construction cited as primary drivers.[10] AI is leaving the software budget and entering the capital expenditure cycle.

The chip picture is broadening too. Huawei expects AI chip revenue to reach around $12 billion this year, driven by Chinese demand for its Ascend processors.[11] MediaTek told investors the AI data centre chip market is still accelerating.[12] Intel rallied because AI inference workloads are making CPUs relevant again.[13] Nvidia's B300 servers are reportedly selling in China for nearly $1 million each, almost double their US price, because scarcity has become its own form of strategy.[14] The AI chip story is no longer an Nvidia story. It is a geopolitical story with industrial economics underneath it.

The physical cost of intelligence is rising at the same time the marginal cost of a model query is falling. That creates a fork. On one side, cheap systems will produce disposable output. On the other, serious systems backed by real compute, governance, and workflow integration will produce work that earns its cost. For businesses using AI to manage content, customer communication, or marketing, this is the question that matters most: is the AI you are using built on infrastructure that will keep improving, or is it borrowing cheap compute that could become expensive or unreliable? Tools built to help businesses manage AI-generated Instagram content need to sit on the right side of that divide. The cheap magic phase will not last in its current shape. Someone always pays for the engine.

Governance left the lab

This was also the week that AI governance stopped being a policy conversation and became a procurement conversation. Google reportedly signed a classified AI deal with the Pentagon, with carve-outs around domestic surveillance and autonomous weapons.[15] The White House reportedly explored ways to work around Anthropic's risk flags on newer models.[16] The EU failed to reach agreement on softened AI Act provisions after twelve hours of talks.[17] And in the United States, the Justice Department intervened in xAI's legal challenge to Colorado's AI law, arguing the state's approach raises constitutional problems.[18]

Each of these stories has its own context. Together, they reveal a structural tension. AI companies spent years selling themselves as builders of neutral, general-purpose tools: productivity engines, helpful assistants, content generators. Now the same models are moving into classified government systems, military planning, regulated industries, and core public decision-making. The gap between the product narrative and the deployment reality is widening. When a model can help write an email and also sit inside a defence workflow, the safety conversation changes character entirely. It is no longer about whether the model refuses the wrong request. It is about who controls access, who sets the boundary, and what happens when the customer is the government.

South Africa offered a different lesson on the same theme. The country withdrew its draft national AI policy after fake AI-generated citations were found in the reference list.[19] A policy document proposing a National AI Commission and an AI Ethics Board could not pass the most basic test of credibility: could the evidence be trusted? That failure is not unique to South Africa. It is a warning for every organisation adopting AI without updating its review process. AI does not make integrity obsolete. It makes integrity operational. Businesses using AI to generate content for customers, whether through social media, customer service, or marketing, face a version of the same problem. AI can produce the output, but someone still has to verify that the output is worth trusting. Italy's antitrust authority made this explicit by closing probes into AI firms only after commitments around hallucination warnings and transparency.[20] The industry is learning that once AI speaks to customers and represents a brand, the excuse that the model made a mistake is not a defence.
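Making integrity operational can mean a literal gate in the publishing pipeline: before a reference list ships, every citation is checked against an index the reviewer actually trusts. A minimal sketch of that idea, assuming a workflow where each reference carries a DOI and the organisation maintains its own verified index (both assumptions for illustration, not details from the South African policy process):

```python
import re

# Syntactic shape of a DOI, e.g. "10.1000/xyz123". A well-formed DOI can
# still be fabricated, which is why the trusted index check below matters.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def review_references(references, verified_index):
    """Split a reference list into (accepted, flagged).

    A reference passes only if its DOI looks well-formed AND appears in an
    index the reviewer trusts (library catalogue, Crossref export, etc.).
    Everything else is routed back to a human, never silently published.
    """
    accepted, flagged = [], []
    for ref in references:
        doi = ref.get("doi", "")
        if DOI_PATTERN.match(doi) and doi in verified_index:
            accepted.append(ref)
        else:
            flagged.append(ref)
    return accepted, flagged
```

The point of the design is that the default outcome is rejection: an AI-generated citation that cannot be matched to something a human has verified never reaches the final document.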

How is AI changing content creation for small businesses?

The coding story crystallised a broader argument about AI and human work. OpenAI's president stated that AI has moved from writing 20 percent to 80 percent of code.[21] Google has talked about 75 percent of new code being AI-generated and human-reviewed. The headline reads like replacement. The reality is subtler and more consequential. If AI writes the first draft, the human role shifts upstream and downstream: framing the problem, defining quality, reviewing trade-offs, testing edge cases, and deciding whether the thing being built is worth building at all. The scarce skill is no longer syntax. It is product judgment, architecture, and the ability to recognise when the fastest path is the wrong path.

This pattern extends well beyond software. In content creation, in marketing, in customer communication, the same shift is underway. AI content generation for small business owners is no longer limited by capability. The tools can write, design, schedule, and publish. What they cannot do is decide what deserves to exist, what fits the brand, and what the audience actually needs to hear. That is the human layer, and this week's research reinforced why it matters. A survey on evaluating LLM-based agents mapped a field moving from static benchmarks toward realistic tests of planning, tool use, memory, robustness, and cost-efficiency.[22] A clinical triage paper showed that a domain-adapted small language model outperformed larger proprietary systems when given expert data and task-specific tuning.[23] ReaLM-Retrieve demonstrated that smarter retrieval timing, not larger models, improved performance while reducing retrieval calls by 47 percent.[24]

The research direction is consistent. Better AI does not always mean larger AI. Sometimes it means retrieving at the right moment, specialising for a domain, or building verification tools that stop impressive systems from quietly producing unreliable results. For businesses that rely on AI to maintain a consistent presence on platforms like Instagram, the implication is direct. The value is not in the generation. The value is in the system that ensures what gets generated is accurate, on-brand, and worth the audience's attention. That is the difference between AI that elevates a business and AI that produces noise indistinguishable from every other account in the feed.
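The retrieval-timing idea is easy to sketch in the abstract: retrieve supporting documents only when the model signals low confidence, instead of on every query. The stub model, stub retriever, and threshold below are illustrative assumptions for that general pattern, not ReaLM-Retrieve's published method:

```python
# Confidence-gated retrieval: spend a retrieval call only when the model
# is unsure. Generic illustration of "retrieve at the right moment".

class StubModel:
    """Toy model: answers known questions confidently, defers otherwise."""
    def __init__(self, known):
        self.known = known  # dict: question -> answer

    def answer(self, question, context=None):
        if question in self.known:
            return self.known[question], 1.0   # parametric knowledge
        if context:
            return context[0], 0.9             # grounded in retrieved text
        return "unknown", 0.1                  # low confidence, no context


class StubRetriever:
    def __init__(self, corpus):
        self.corpus = corpus  # dict: question -> list of passages

    def search(self, question, top_k=3):
        return self.corpus.get(question, [])[:top_k]


def answer_with_gated_retrieval(question, model, retriever, threshold=0.75):
    draft, confidence = model.answer(question)
    if confidence >= threshold:
        return draft, 0                        # no retrieval call spent
    docs = retriever.search(question)
    grounded, _ = model.answer(question, context=docs)
    return grounded, 1                         # one retrieval call spent
```

The second return value counts retrieval calls, which is the quantity this line of research is trying to reduce: questions the model already answers confidently never touch the retriever.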

The ownership question

The OpenAI trial brought one more thread into the open. Elon Musk returned to the witness stand to challenge Sam Altman's leadership and OpenAI's shift from nonprofit idealism to commercial scale.[25] The legal arguments will be resolved in court. The structural question will outlast the verdict. Can a technology built in the language of public benefit survive the economics of private capital? AI labs need enormous funding, scarce talent, cloud infrastructure, distribution, legal cover, and commercial partners. Public-good language attracts trust, but private-market machinery pays the bills. The gap between those two realities is where governance breaks. Google's reported $40 billion Anthropic investment,[26] Cohere's acquisition of Aleph Alpha to build Europe's secure AI stack,[27] China's reported order for Meta to unwind its acquisition of Manus[28]: all of these are ownership stories dressed as partnership stories.

AI sovereignty used to sound abstract. This week it became a business model. The UK government backed Ineffable Intelligence through its Sovereign AI work.[29] Finland launched a high-security sovereign AI platform.[30] Countries and companies are deciding that the next competitive advantage may not be the best model. It may be legal permission, trusted infrastructure, national alignment, and data jurisdiction. For smaller businesses, the sovereignty question is less visible but equally real. The AI tools you build on carry assumptions about where data lives, who controls the model, and whose economic incentives are quietly embedded in the system. That is not paranoia. It is the kind of strategic literacy that separates businesses that use AI well from businesses that use AI blindly.

The part nobody wants to fund

The pattern across this entire week is remarkably consistent. Intelligence got cheaper. Everything else got harder. Infrastructure spending is rising. Governance is fragmenting. Accountability is being forced into products that were designed without it. Ownership structures are being tested in courtrooms. Chip supply chains are being redrawn along national lines. And the human judgment layer, the part that decides where AI belongs, what it should say, and when it should stop, remains the least funded and least discussed part of the entire stack.

That is the real story of this week in AI. The model race has not ended. It has been absorbed into a larger, messier contest over who controls the systems that sit above and below the model itself. The companies that will matter in twelve months are not the ones releasing the next benchmark winner. They are the ones building the governance, the workflow integration, the evaluation infrastructure, and the trust architecture that turns raw intelligence into something a business, a government, or a person can rely on. Intelligence was the hard part for a decade, and now it is the easy part. The hard part is everything that makes intelligence useful, accountable, and worth paying for. That is where the race is now, and most of the runners have not noticed the course has changed.

Sources

Footnotes

1. OpenAI introduces GPT-5.5, OpenAI
2. DeepSeek V4 Chinese AI model adapted for Huawei chips, Reuters
3. DeepSeek slashes prices on new AI model, Reuters
4. OpenAI brings latest AI models and Codex to Amazon Bedrock, Reuters
5. Microsoft ends exclusive licence to OpenAI technology, Reuters
6. Qualcomm surges on report of OpenAI tie-up for AI smartphone processors, Reuters
7. Google Cloud pulls ahead as Big Tech AI spending swells, Reuters
8. Big Tech investors gauge payoff as AI spending set to hit $600 billion, Reuters
9. Citigroup lifts AI market view over $4 trillion on enterprise adoption, Reuters
10. US core capital goods orders exceed expectations in March, Reuters
11. Huawei expects AI chip revenue to jump at least 60 percent, Reuters
12. MediaTek says AI megatrend continues, Reuters
13. Intel surges on AI-driven CPU demand, Reuters
14. Nvidia B300 server prices near $1 million in China, Reuters
15. Google signs classified AI deal with Pentagon, Reuters
16. White House drafts guidance to bypass Anthropic risk flag, Reuters
17. EU countries and lawmakers fail to reach deal on AI rules, Reuters
18. US Justice Department intervenes in xAI challenge to Colorado AI law, Reuters
19. South Africa withdraws AI policy due to fake AI-generated sources, Reuters
20. Italy closes antitrust probes into AI firms after hallucination commitments, Reuters
21. OpenAI president says AI now writing 80 percent of code, Business Insider
22. Survey on evaluating LLM-based agents, arXiv
23. Clinical triage with domain-adapted small language models, arXiv
24. ReaLM-Retrieve on improved RAG with smarter retrieval timing, arXiv
25. Musk returns to witness stand at OpenAI trial, Reuters
26. Google plans to invest up to $40 billion in Anthropic, Reuters
27. Cohere and Aleph Alpha announce merger, Reuters
28. China blocks foreign acquisition of AI startup Manus, Reuters
29. UK backs company building breakthrough AI, GOV.UK
30. CGI launches sovereign AI platform in Finland, PR Newswire