Five for Friday: Issue #10
On Biden's escalation of the chip wars, the UK's national AI plan, Apple's AI-generated fake news, UC Berkeley's new budget AI model, and Anthropic's latest funding round

Happy Friday and welcome to issue No. 10 of Five for Friday!
Policy making led the way in AI news this week, with US President Joe Biden introducing highly restrictive export controls on advanced chips, while Prime Minister Keir Starmer launched an ambitious plan aimed at catapulting the UK back into the AI big leagues.
#1 Traffic Lights & Tech Fights
Biden's final policy move divides world into AI haves and have-nots
With just days left in office, Biden's administration dropped a regulatory bombshell that sent shockwaves through the global AI and tech industry and left even some allies scratching their heads.
At its core, the new framework is about controlling the flow of AI's most precious resource: advanced chips. The rules create an exclusive club of 18 nations with unlimited access, while leaving most of the world, including traditional allies like Israel and Poland, facing strict quotas.
The rules read like a byzantine "traffic lights" game. The US and 18 chosen allies get the green light for unlimited AI chip access. Meanwhile, China and Russia see nothing but red, facing complete bans. Everyone else sits in an amber zone of strict quotas, requiring Uncle Sam's permission slip to import more than 1,700 high-end GPUs annually.
A host of other measures are also now in play. Companies operating data centers must keep 75% of their total AI computing power within the US or allied countries, with no more than 7% in any single non-allied nation. The rules also introduce first-ever controls on AI model weights and parameters, requiring companies to adopt security standards protecting this intellectual property.
The European Commission, finding some of its members left out in the cold, diplomatically reminded everyone that they're "an economic opportunity for the US, not a security risk." Talk about awkward holiday dinner conversation.
The tech giants obviously aren't taking this lying down. Nvidia's VP Ned Finkle didn't mince words, calling the rules "unprecedented and misguided" and warning they "threaten to derail innovation and economic growth worldwide".
But perhaps the most intriguing aspect isn't the rules themselves — it's the timing. By starting a 120-day comment period just before Trump's return, Biden has effectively thrown a policy hot potato into his successor's lap, daring him to catch or drop it.
Perspectives:
Biden’s policy creates an interesting paradox: while aimed at maintaining US technological dominance, it risks fragmenting the very global supply chains and collaborative networks that helped establish that dominance in the first place. The question is whether building such walls will truly protect that advantage, or simply accelerate the development of alternative technologies outside US control.
Indeed, China has proven remarkably adept at playing technological leapfrog. Despite years of chip restrictions, it has managed to push AI boundaries in surprising ways. Sure, China's silicon industry may be a few generations behind, but necessity has a funny way of mothering invention.
#2 The Great British Tech-Off
The UK whisks up massive AI transformation plan
Britain's AI ambitions just got a triple espresso shot straight to the mainframe. The UK government has unveiled a sweeping 50-point AI Action Plan that promises to transform the nation (back) into an AI superpower.
At the heart of this digital dream lies a vision to boost computing power twentyfold by 2030, anchored by “AI Growth Zones” that will serve as tech industry honey pots.
The first of these digital promised lands will sprout up in Culham, Oxfordshire, with £14 billion in private sector commitments already in the bag. Matt Clifford, co-founder and Chair of Entrepreneur First (EF) and the plan's architect, describes it as "a plan which puts us all-in" — though hopefully not in the same way your crypto-obsessed cousin went all-in on dogecoin.
The plan's ambitions extend beyond just building fancy computer farms. A new headhunting unit will scout for global AI talent, while prestigious scholarship programmes aim to nurture homegrown expertise.
However, this plan will not be without its fair share of hurdles. The proposed National Data Library, while promising to unlock valuable public sector data for AI research, has privacy advocates reaching for their digital deadbolts.
And then there's the small matter of powering all this tech wizardry — a challenge the government hopes to solve through a new AI Energy Council and small modular reactors (SMRs). Yet most SMR designs are not yet approved in the UK, and experts warn the current grid might struggle to handle this much digital horsepower.
Perspectives:
Credit where credit's due — this ambitious plan is what the UK sorely needs. The nation, once home to AI pioneers like DeepMind, has seen its competitive edge slowly erode as talent and companies drift to Silicon Valley's gravitational pull.
This plan could help Britain reclaim its position as an AI powerhouse. However, throwing money and computing power at the problem isn't necessarily enough — just ask anyone who's ever tried to fix a relationship by buying expensive gifts. The real test will be whether Britain can create an environment where innovation flourishes while maintaining its democratic values and social fabric.
Let’s also consider the talent elements of the plan. While the focus on attracting global talent and building prestigious AI institutions is crucial for staying competitive, it risks creating a two-tier digital society.
The UK's AI heritage — from Alan Turing's foundational work to the breakthrough achievements at universities like Cambridge and Oxford — shows that British innovation thrives when it taps into broad talent pools.
Yet while the plan promises 13,000 new tech jobs and shiny AI Growth Zones, it's surprisingly quiet about supporting the millions of workers who'll need to adapt to an AI-enhanced workplace. The next DeepMind or OpenAI might be brilliant, but true success means ensuring everyone can participate in our AI future, not just those with PhDs in machine learning.
#3 iHallucinate
Truth becomes optional in Apple's AI news alerts
Apple's latest AI feature is proving that even the most polished tech giant can sometimes bite off more than it can chew. The company's attempt to simplify our notification-cluttered lives has instead created a parallel universe where Rafael Nadal is openly gay, Netanyahu gets arrested, and darts champions win tournaments before they begin — all while wearing the BBC's digital credentials.
The BBC, which has been trying to get Apple's attention since December, finally got a response this week — though perhaps not the one they were hoping for. Instead of pulling the plug on its hallucination-prone AI, Apple's solution is to slap a "Made by AI" label on these creative reinterpretations of reality.
While Apple promises a fix "in the coming weeks," the journalism community isn't holding its breath. As Vincent Berthier from Reporters Without Borders points out, simply labeling AI-generated content doesn't solve the fundamental problem but merely shifts the burden of fact-checking onto already overwhelmed users.
The irony here is delicious — in trying to solve information overload, Apple has instead created an information integrity crisis.
Perspectives:
From my perspective, the fundamental issue here isn't so much about Apple's AI getting things wrong — it's about Big Tech's growing tendency to outsource responsibility for truth and accuracy. Apple's "just label it" solution arrives just a week after Meta announced that it's replacing professional fact-checkers with a community-driven system, suggesting a troubling industry-wide pivot.
We're watching Big Tech, who once positioned themselves as stewards of data and information, increasingly adopt a "not our problem" stance. Apple wants users to shoulder the burden of fact-checking AI hallucinations, while Meta expects its community to police truth itself — all while both companies conveniently save money on professional oversight.
#4 Berkeley’s Budget Bombshell
UC Berkeley builds AI rival to OpenAI's o1 model for less than $500
Remember when training an AI model used to cost more than a luxury yacht? Well, UC Berkeley's Sky Computing Lab just crashed that party with Sky-T1, a reasoning model that costs less than your monthly coffee budget. At just $450 in training costs and 19 hours of computing time, this open source marvel is giving OpenAI's early o1 model a run for its (considerably more) money.
The team achieved this remarkable feat by leveraging synthetic training data and open source components (thereby making the entire process replicable by others). As the NovaSky team behind the breakthrough puts it, they've demonstrated "it is possible to replicate high-level reasoning capabilities affordably and efficiently." The model particularly shines in mathematical reasoning and coding challenges, even outperforming o1's preview version on several benchmarks.
However, Sky-T1 isn't without its limitations: it stumbles on the PhD-level science questions that OpenAI's preview version of o1 scored well on (though that still makes it "smarter" than most of us!). Meanwhile, OpenAI's even more capable o3 family of models is expected to come to market as soon as the next few weeks. Nonetheless, Sky-T1 undoubtedly represents a massive leap forward for affordable, open source frontier AI.
Perspectives:
One can't help but wonder if Silicon Valley's AI giants might quietly be reconsidering their closed-source strategies. With Berkeley's $450 Sky-T1 coming hot on the heels of DeepSeek's $5 million model, the giants' moat of computational resources is rapidly shrinking.
The big players face an interesting dilemma: maintain complete control of increasingly expensive proprietary systems, or adopt a hybrid approach where some components are open sourced to stay relevant in a democratising field. I wouldn’t rule out a strategic shift similar to how Microsoft embraced elements of open source in the last decade after years of resistance — not out of altruism, but because the ecosystem had become too powerful to ignore.
Cost is another key differentiator between open and closed source. Until now, the corporate world has largely steered clear of open source AI, with many citing concerns about quality, security, and support. Sky-T1 might be the tipping point that changes this calculus. When you're running tens or hundreds of thousands of AI queries daily, the cost difference between paying premium API rates and running your own free-to-use model becomes impossible to ignore.
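To put rough numbers on that argument, here's a minimal back-of-envelope sketch in Python. Every figure in it (the query volume, tokens per query, API rate, GPU count and rental price) is a hypothetical assumption chosen for illustration, not a quoted price from any provider.

```python
# Back-of-envelope: hosted premium API vs. self-hosting an open source model.
# All figures below are illustrative assumptions, not real vendor prices.

QUERIES_PER_DAY = 100_000         # assumed enterprise workload
TOKENS_PER_QUERY = 1_500          # assumed average input + output tokens
API_PRICE_PER_1M_TOKENS = 15.00   # hypothetical premium API rate, USD
GPU_HOURLY_COST = 2.50            # hypothetical cloud GPU rental, USD/hour
GPUS_NEEDED = 4                   # assumed capacity to serve this workload

# API billing scales with the number of tokens processed.
api_monthly = (QUERIES_PER_DAY * TOKENS_PER_QUERY / 1_000_000
               * API_PRICE_PER_1M_TOKENS * 30)

# Self-hosting scales with the hardware you keep running.
self_hosted_monthly = GPUS_NEEDED * GPU_HOURLY_COST * 24 * 30

print(f"Hosted API:  ${api_monthly:,.0f}/month")          # ~$67,500
print(f"Self-hosted: ${self_hosted_monthly:,.0f}/month")  # ~$7,200
```

Even if the real numbers differ by a factor of two in either direction, that is the kind of gap a finance team notices.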
As open source models continue to close the capability gap, CFOs everywhere will be asking uncomfortable questions about those hefty commercial AI contracts. Big Tech's premium AI services might soon find themselves in the same position as expensive enterprise software did when Linux started entering corporate data centers — facing a 'good enough' alternative that's essentially free.
#5 Values Added
Ethical AI startup triples valuation to $60B
According to PitchBook, AI startups accounted for nearly half of venture capital dollars raised in the US last year, and Anthropic continues to lead that charge in 2025.
While most of us were watching New Year's fireworks, the company was busy plotting a $60 billion valuation and closing in on a $2 billion funding round led by Lightspeed Venture Partners. Between Amazon's deep pockets ($8 billion of investment) and Google's $2 billion stake, Anthropic has now assembled quite the roster of tech giants in its corner.
Profit is, of course, probably still some years away, but Anthropic is now taking in $875 million in annualised revenue, which is no mean feat for a company that was only founded in 2021.
Anthropic's Claude is now one of the key competitors to OpenAI's ChatGPT and Google's Gemini. While the company was initially known primarily for being the AI developer "with a conscience", its Claude chatbot is now highly favoured by many users for its exceptional coding and writing capabilities.
Perspectives:
The rapid rise in Anthropic's revenue suggests that enterprise customers are increasingly willing to bet on alternatives to OpenAI. This could be partly due to the drama surrounding Sam Altman's brief departure, which likely sent many companies searching for Plan B.
While Anthropic's "AI safety first" approach initially seemed like it might slow them down compared to competitors, it may have actually become a key differentiator in enterprise sales. Turns out, when you're selling to Fortune 500 companies, having a conscience isn't just good ethics — it's good business.
Justin Tan is passionate about helping organisations navigate disruptive change and achieve sustainable, robust growth. He founded Evolutio Consulting in 2021 to help senior leaders upskill and accelerate the adoption of AI within their organisations through AI literacy and proficiency training, and he also works with clients to design and build bespoke AI solutions that drive growth and productivity for their businesses. Alongside his consultancy work, he is an Adjunct Assistant Professor at University College London, where he lectures on digital health and AI. If you're pondering how to harness these technologies in your business, or simply fancy a chat about the latest developments in AI, why not reach out?