Episode 4 – Are You 18+? (Sponsored by Meta)

March 27, 2026

Recently, two juries hit Meta with back-to-back verdicts in 24 hours: $375 million in New Mexico for child exploitation, and $6 million in Los Angeles for engineering addictive products that harmed a minor. While those verdicts landed, a researcher published findings on GitHub tracing Meta’s lobbying operation across 45 states, with over $25 million directly confirmed and estimates running as high as $2 billion, all designed to shift age verification responsibility onto Apple and Google. Then Apple released iOS 26.4, and UK iPhone users got a new prompt: “Confirm You Are 18+.” California’s law goes further, requiring every operating system to collect user age data, Linux included. GrapheneOS, a privacy-focused alternative to Android, told regulators to pound sand. Elsewhere: Nvidia unveiled DLSS 5 and the internet called it an AI beauty filter. A federal judge blocked the Pentagon’s attempt to blacklist Anthropic as a national security threat. Microsoft admitted Windows 11 has a bloat problem. And the FCC banned all new foreign-made routers from the U.S. market, which covers basically every brand on store shelves.


Timestamps

  • 0:00 – Intro
  • 1:56 – Meta’s lawsuits (“The Rundown”)
  • 5:23 – Meta is behind recent age verification pushes
  • 11:48 – Nvidia’s DLSS 5 reveal, “sloptracing”
  • 20:42 – Anthropic vs. Pentagon update
  • 25:14 – Microsoft’s “Microslop” retreat
  • 29:40 – FCC bans effectively ALL new routers
  • 34:26 – Age verification hits iOS, outrage ensues
  • 43:09 – Observability (“The Build Log”)
  • 47:34 – “The Plug” + Outro

Topics Covered

  • Meta hit with $375 million verdict in New Mexico for child exploitation on Facebook and Instagram, followed by a $6 million negligence verdict in Los Angeles the next day
  • A researcher’s public records investigation tracing Meta’s multi-billion-dollar lobbying operation to shift age verification liability onto Apple and Google through front groups, dark money, and the Digital Childhood Alliance
  • Nvidia DLSS 5 unveiled at GTC 2026: 84% YouTube dislike ratio, Jensen Huang’s contradicted claims about the technology, Capcom stating it will not implement AI-generated assets in its games
  • Anthropic vs. the Pentagon: Judge Rita Lin grants preliminary injunction blocking the DOD’s “supply chain risk” designation, while a Pentagon official confirms Claude is actively deployed in the Iran conflict
  • Microsoft’s “Microslop” retreat: Pavan Davuluri announces Copilot rollbacks, movable taskbar, reduced RAM usage, and fewer ads in Windows 11
  • FCC bans all new foreign-made routers from the U.S. market, affecting every major consumer router brand including TP-Link, Asus, Netgear, and Linksys
  • Apple’s iOS 26.4 age verification goes live in the UK; California’s AB 1043 will require OS-level age data collection from every platform including Linux distributions starting January 2027
    • GrapheneOS refuses to comply with age verification mandates; systemd adds a date-of-birth field to user records; open-source community pushes back

No deep dive this week! Too much news! 🙂

Plugs

  • Chris N:
    • A YouTube video from Joe Scott called “Rocky Is Weirder Than You Think”
    • Why? If you’ve read Project Hail Mary by Andy Weir, or if you’ve been hearing about the movie that just came out, this is a deep dive into the alien character Rocky and the actual science behind how Andy Weir designed him. Weir apparently wrote 16 pages of notes on this creature’s biology, everything from its circulatory system to its cellular makeup, all based on the real conditions of an actual exoplanet we’ve discovered. Joe Scott, an awesome creator in his own right, goes through those notes with Weir, and they get into details that aren’t in the book or the movie. I haven’t seen the film yet, but I’m a huge Andy Weir fan, The Martian was incredible, and this is exactly the kind of content I love: someone taking a fictional concept and pulling it apart to show you the real science underneath.
  • Chris V:
    • No plugs (officially)

Other Links

Transcript

IMPORTANT DISCLAIMER: Transcripts are auto-generated and may contain inaccuracies or differ from the spoken content.

Welcome back to SquaredCast. This is episode 4, recorded on March 27th, 2026.

How’s it going? How’s it going, everyone? Been a minute. We recorded the last episode a little while ago, a couple weeks ago. Life kind of got in the way, so sorry about that. Hopefully this will be timely and out soon, whenever that is.

This one actually might be a little bit shorter than the rest of them.

That’s true. It might be. We’ve got too much news to cover. Recently, two juries hit Meta with back-to-back verdicts in 24 hours. $375 million in New Mexico for child exploitation, and $6 million in Los Angeles for engineering addictive products that harmed a minor. While those verdicts landed, a researcher published findings on GitHub tracing Meta’s lobbying operation across 45 states, with over $25 million directly confirmed and estimates running as high as $2 billion. All designed to shift age verification responsibility onto Apple and Google. Then Apple released iOS 26.4, and UK iPhone users got a new prompt: “Confirm you are 18+.” California’s law goes further, requiring every operating system to collect user age data, Linux included. GrapheneOS, a privacy-focused alternative to Android, told regulators to pound sand. Elsewhere, Nvidia unveiled DLSS 5 and the internet called it an AI beauty filter. Pretty much. A federal judge blocked the Pentagon’s attempt to blacklist Anthropic as a national security threat. Microsoft admitted Windows 11 has a bloat problem. And the FCC banned all new foreign-made routers from the US market, which covers basically every brand on store shelves. We’ve got so much news to cover today that we’ll be skipping the deep dive. Show notes and sources are always on squaredcast.com. If you want to support us and get bonus content, our Patreon starts at two bucks a month. Link is in the show notes. Let’s get into it.

The Rundown (News)

1. Meta’s Week from Hell

Meta has had a bad week. A really, really, really bad week. Two juries in two different states delivered back-to-back verdicts against Meta in the span of 24 hours, and the combined message is hard to misread. On March 24th, a Santa Fe jury found that Meta willfully violated New Mexico’s consumer protection laws after a six-week trial brought by Attorney General Raúl Torrez, who accused the company of creating a “breeding ground” for child predators on Facebook and Instagram. The case originated from a 2023 undercover operation where investigators created fake profiles of children under 14 and were immediately flooded with sexually explicit content and contact from adults. Jurors awarded the state $375 million in civil penalties at the maximum $5,000 per violation rate. Torrez called it “a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety.” Meta said it plans to appeal.

Then, on March 25th, a Los Angeles jury found Meta and YouTube negligent in a test case designed to set the tone for roughly 2,000 pending lawsuits. The plaintiff, a 20-year-old woman identified as KGM, alleged she became addicted to Instagram at age 9 and YouTube at age 6, contributing to depression, body dysmorphia, and suicidal thoughts. The jury awarded $3 million in compensatory damages and $3 million in punitive damages, with Meta shouldering 70% and YouTube 30%. This is the first time a jury has treated social media apps as defective products for being engineered to exploit developing brains. Internal Meta documents shown at trial included a memo stating, quote, “If we want to win big with teens, we must bring them in as tweens,” and evidence that 11-year-olds were four times more likely to return to Instagram than competing apps, despite the platform’s stated minimum age of 13.

The second phase of the New Mexico trial begins May 4th, where Torrez will argue a public nuisance claim and push for court-ordered changes to Meta’s platforms, including mandatory age verification, algorithm modifications, and an independent monitor. California Attorney General Rob Bonta announced his state’s own trial against Meta is set for August. These verdicts don’t exist in isolation. Meta faces thousands of active lawsuits from parents, school districts, and attorneys general across the country. Legal observers are calling this the tech industry’s “Big Tobacco” moment, and the comparison is getting harder to dismiss.

Ain’t that the truth? I left Facebook, it was like 2018, and I never touched that thing again. I hate that platform. There’s not a single thing I really like about it. The only thing I miss is the people who were on it, but I already get whatever I need to know from family members anyway, who tell me what’s going on. So what is even the point of me having it?

You know, I was an early adopter of Facebook, too. It wasn’t as bad back then, but god, it is now. It got worse. It got much worse over time. Which is funny, how you look at how it started and the reason why Mark Zuckerberg created it, and then you see what it’s turned into today. It’s the complete opposite of what the initial goal was.

2. Meta Is Behind the Push for OS-Level Age Verification

Well, Meta’s in the news again. Turns out Meta is largely behind the push for OS-level age verification. I couldn’t believe it when I saw it. Buckle up.

So, this month, a research investigation published on GitHub under the handle “upper” pulled back the curtain on one of the most deliberate corporate lobbying campaigns in recent tech history. The project, called TBOTE, used publicly available records including tax filings, federal lobbying disclosures, state lobbying registrations, campaign finance databases, and corporate registries to map how Meta built a multi-channel influence operation to push age verification laws that shift the regulatory burden away from social media platforms and onto Apple and Google’s app stores. The investigation was covered by Bloomberg, The Register, Rappler, and Gigazine, among others.

The TBOTE investigation directly confirmed over $25 million in Meta’s lobbying spending tied to these efforts, and estimates the company may have spent upwards of $2 billion when including dark money grant networks and fragmented political operations. The operation spans five confirmed funding channels: $26.3 million in direct federal lobbying in 2025 (a company record, by the way, exceeding even Lockheed Martin and Boeing), over 86 lobbyists deployed across 45 states, covert funding of a “grassroots” child safety group, $70 million routed through fragmented super PACs structured to stay below the reporting limits that would trigger public disclosure, and targeted state legislative campaign spending.

The centerpiece of the operation is the Digital Childhood Alliance, which launched on December 18th, 2024, with a professional website fully loaded with statistics, testimonials from Heritage Foundation and NCOSE staff, and talking points for the App Store Accountability Act. Within weeks of its launch, DCA was testifying before the Utah state legislature in support of SB-142, which became the first ASAA law signed in the country 77 days later. Bloomberg reported in July 2025, through three sources familiar with the funding, that Meta was bankrolling the DCA. Under oath at a Louisiana Senate committee hearing, DCA Executive Director Casey Stefanski admitted receiving tech company funding but refused to name donors.

The legislative target is the App Store Accountability Act, which would require Apple and Google to verify every user’s age at the app store level before any download. Under the proposed framework, app stores bear the full infrastructure and compliance cost of building age verification systems. Social media platforms like Meta’s receive the age data through an API, basically a data pipeline between two pieces of software, without having to build or maintain verification systems of their own. The bills also contain safe harbor provisions for app developers. If a developer relies in good faith on age data provided by the app store, that developer is shielded from liability under this act. The practical effect is that if Apple or Google misidentify a minor as an adult, Meta faces no consequences under these specific laws, though liability under other statutes like COPPA would still apply. Google’s spokesperson Danielle Cohen told Bloomberg, quote, “We see the legislation being pushed by Meta as an effort to offload their own responsibilities to keep kids safe.”
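As a sketch of what that division of labor might look like from a developer’s side, here is a minimal Python model. Every name in it (`AgeCategory`, `StoreAgeSignal`, `gate_account`) is invented for illustration; the bills describe an architecture, not a concrete API.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of an ASAA-style age-signal flow. The app store
# determines and attests the user's age bracket; the platform merely
# consumes that signal instead of running its own verification.

class AgeCategory(Enum):
    UNDER_13 = "under_13"
    TEEN = "13_17"
    ADULT = "18_plus"

@dataclass
class StoreAgeSignal:
    category: AgeCategory   # age bracket, as attested by the app store
    store_attested: bool    # whether the store vouches for this signal

def gate_account(signal: StoreAgeSignal) -> dict:
    """Under the proposed safe harbor, good-faith reliance on a
    store-attested signal shields the developer from liability under
    the act, so the platform just branches on the signal."""
    if not signal.store_attested:
        raise ValueError("no attested age signal from the app store")
    return {
        "allow_signup": signal.category is not AgeCategory.UNDER_13,
        "adult_content": signal.category is AgeCategory.ADULT,
        "parental_consent_required": signal.category is AgeCategory.TEEN,
    }
```

The key property, and the source of the criticism, is visible right in the sketch: if the store misclassifies a minor as `ADULT`, the platform’s logic happily serves adult content while the safe harbor shields it under this act.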

The DCA’s legal footprint tells its own story. It’s registered in Delaware and reports gross receipts under $25,000 for tax year 2024. It files the IRS form reserved for the smallest nonprofits, one that requires zero financial disclosure. That figure is difficult to square with an organization that employs a paid executive director, retains a registered lobbyist, commissions polling, and coordinates testimony programs across more than 20 states. The TBOTE investigation concludes the real operating budget never touches DCA’s own EIN. Instead, funding flows through intermediary nonprofit structures that aren’t subject to standard political ad disclosure, keeping Meta’s fingerprints off the paperwork.

Utah’s version of the law, the first to be signed, hits its compliance deadline on May 6th, 2026. Texas enacted its own version, though a federal court issued a preliminary injunction blocking enforcement on First Amendment grounds. Louisiana’s version takes effect July 1st, 2026. Alabama signed its ASAA into law on March 9th. At the federal level, Senator Mike Lee and Representative John James introduced the App Store Accountability Act, tracked as S.1586 or H.R. 3149, which would preempt the state patchwork with a single national standard enforced by the FTC. Roughly 17 additional states have introduced or are considering similar bills. The direction is clear, and Meta’s lobbying machine is pushing the door open in every statehouse it can reach.

Meanwhile, the EU’s eIDAS 2.0 framework is building a fundamentally different model. The European Digital Identity Wallet, now being piloted in five member states, supports privacy-preserving age verification using zero-knowledge proofs, which allow a user to prove they’re over 18 without revealing their birthdate or identity. The data stays on the user’s device. It’s open-source, decentralized, and targets large platforms while exempting small entities and open-source projects. The contrast with Meta’s preferred approach could not be any sharper. One model puts the user in control. The other builds a permanent identity layer into operating systems and hands the data to the companies that already have too much of it.
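To make the data-minimization idea concrete, here is a drastically simplified toy model in Python. Real EUDI wallets use signed credential formats and zero-knowledge proofs; this sketch substitutes salted hash commitments and an HMAC standing in for the issuer’s signature. What it preserves is the property that matters: the verifier learns “over 18” and nothing else, and the birthdate never leaves the device.

```python
import hashlib
import hmac
import json
import os

# Toy model of privacy-preserving age disclosure (NOT real eIDAS crypto).
ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key

def commit(claim_name, claim_value, salt):
    """Salted hash commitment to a single claim."""
    return hashlib.sha256(salt + f"{claim_name}={claim_value}".encode()).hexdigest()

def issue_credential(claims):
    """Issuer: commit to each claim, then 'sign' the commitment list."""
    salts = {name: os.urandom(16) for name in claims}
    commitments = sorted(commit(n, v, salts[n]) for n, v in claims.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments).encode(),
                         hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "salts": salts}

def present_over_18(credential):
    """Holder: reveal ONLY the over-18 predicate and its salt. The
    birthdate stays on the device, hidden behind its commitment."""
    return {"claim": ("age_over_18", True),
            "salt": credential["salts"]["age_over_18"],
            "commitments": credential["commitments"],
            "signature": credential["signature"]}

def verify(presentation):
    """Verifier: check the signature, then check that the revealed
    claim opens one of the signed commitments."""
    ok_sig = hmac.compare_digest(
        presentation["signature"],
        hmac.new(ISSUER_KEY, json.dumps(presentation["commitments"]).encode(),
                 hashlib.sha256).hexdigest())
    name, value = presentation["claim"]
    ok_claim = commit(name, value, presentation["salt"]) in presentation["commitments"]
    return ok_sig and ok_claim

cred = issue_credential({"birthdate": "1999-04-02", "age_over_18": True})
assert verify(present_over_18(cred))  # verifier learns only the predicate
```

Contrast that with the ASAA model, where the platform receives an age category tied to a persistent account, from an entity that already knows exactly who you are.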

Of course it would be Meta. Big spending. That’s a huge, huge amount of money. As if you didn’t need another reason to hate Facebook/Meta. There you go. There’s another.

I already have many reasons.

Me too.

3. Nvidia DLSS 5: AI Slop Meets Gamer Rage

All right, what do we got next? This is one where I actually didn’t know if it was real or not when I first saw it, because it looked like sort of an April Fool’s thing. Just how insane the images looked. I was like, “This can’t be real.” But then of course I saw it officially from Nvidia, so I was like, “Okay, I’m shocked that it’s real.”

Nvidia unveiled DLSS 5 at its GTC 2026 keynote. CEO Jensen Huang called it, quote, “the GPT moment for graphics,” claiming the technology blends traditional rendering with generative AI to deliver, quote, “a dramatic leap in visual realism while keeping creative control in developers’ hands.” Unlike previous DLSS versions, which focused on upscaling resolution and generating frames, DLSS 5 uses an AI model to infer and reconstruct lighting, materials, and surface detail in real time. Nvidia framed this as the biggest advance in computer graphics since real-time ray tracing hit the scene in 2018.

Gamers, however, did not agree. The official DLSS 5 reveal trailer on Nvidia’s GeForce YouTube channel racked up over a million views with a shocking 84% dislike ratio: roughly 82,500 thumbs down against 16,100 thumbs up. According to browser extensions that restore the dislike counts YouTube removed from public view, every individual game demo video followed the same pattern. Resident Evil Requiem sat at about 14% approval. EA Sports FC landed at 14.5%. And the Zora Unreal Engine tech demo broke out of the teens but only managed 37%.
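The quoted ratio checks out against the raw counts, for anyone who wants the arithmetic:

```python
# Sanity-checking the quoted dislike ratio against the raw thumb counts.
dislikes, likes = 82_500, 16_100
ratio = dislikes / (dislikes + likes)
print(f"{ratio:.1%}")  # → 83.7%, i.e. the "84%" figure, rounded
```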

Wow, people hate this.

Well, if you saw it, I would imagine most people would. One of the big things here was the Resident Evil Requiem footage, specifically a shot of a character called Grace Ashcraft, who you play as for most of that game. Nvidia showed its comparison image, DLSS 5 on versus off. With it on, the character model looked hollowed out, smoothed, and altered in ways that immediately drew comparisons to AI TikTok or Instagram filters. It just didn’t look natural. Something was off. She had darker hair roots and makeup on her face that wasn’t actually there before. And of course the internet-coined term “yassified” got thrown around, which implies this over-beautification of characters. From all this, memes spawned galore everywhere. One especially popular YouTube comment read, “Went from ray tracing to slop tracing.”

So Huang fired back at a press Q&A with Tom’s Hardware on March 17th, dismissing the backlash entirely. Quote, “Well, first of all, they’re completely wrong.” He argued that DLSS 5, quote, “fuses controllability of the geometry and textures and everything about the game” with generative AI, and insisted the technology operates at the geometry level, not as a post-processing filter. “It’s not post-processing. It’s not post-processing at the frame level. It’s generative control at the geometry level.” So he’s denying that it’s a filter. He’s saying no, no, no, it’s working at the geometry level of the game, improving the existing textures, whatever.

But that framing started to fall apart within days. YouTuber Daniel Owen published an email exchange with Jacob Freeman, a marketing specialist at Nvidia. Owen asked Freeman directly whether DLSS 5 is effectively taking a single 2D screenshot and some data about how objects are moving between frames to create the output. Freeman’s reply, quote: “Yes, DLSS 5 takes a 2D frame plus motion vectors as an input.” He confirmed that the system understands complex scene semantics by analyzing a single frame with no access to the 3D geometry data, scene depth, or material surface data from the engine. DLSS 5 is taking a screenshot of the game and running it through a generative AI model. Kotaku reported the contradiction between Huang’s “geometry level” claims and Freeman’s confirmation that the system works exclusively with 2D frame data, noting it wouldn’t be the first time Huang was accused of misleading consumers about Nvidia products.
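For context on what “a 2D frame plus motion vectors” actually buys you: that input pair is the classic recipe for screen-space reprojection, the technique behind TAA-style history reuse. Here is a minimal pure-Python sketch of the general idea, offered as a generic illustration of 2D-only processing, not as Nvidia’s actual pipeline. Note that nothing in it touches geometry, depth, or materials, which is exactly the limitation Freeman’s email describes.

```python
# Minimal sketch of 2D motion-vector reprojection: given only the previous
# frame and per-pixel motion vectors, predict the current frame. No 3D
# geometry, scene depth, or material data is involved at any point.

def reproject(prev_frame, motion, width, height):
    """prev_frame: dict (x, y) -> pixel value.
    motion: dict (x, y) -> (dx, dy), giving, for each current-frame pixel,
    how far its content moved since the previous frame."""
    current = {}
    for y in range(height):
        for x in range(width):
            dx, dy = motion[(x, y)]
            src = (x - dx, y - dy)                     # where this pixel came from
            current[(x, y)] = prev_frame.get(src, 0)   # 0 where history is missing
    return current

# A 4x4 frame with one bright pixel at (1, 1); everything moves right by 1.
prev = {(x, y): 0 for x in range(4) for y in range(4)}
prev[(1, 1)] = 255
motion = {(x, y): (1, 0) for x in range(4) for y in range(4)}

warped = reproject(prev, motion, 4, 4)
# The bright pixel is predicted to land at (2, 1).
```

A model fed only these inputs can track where pixels move, but anything it “adds” beyond that, such as pore detail or makeup, has to come from its training data rather than from the game’s assets.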

So, liar. So who’s telling the truth? I mean, the developer reaction was equally damning. Insider Gaming reported that developers at both Capcom and Ubisoft, two studios whose games were featured in the showcase for DLSS 5, learned about the announcement at the same time as the public. Quote, “We found out the same time as the public,” said one Ubisoft developer. Capcom developers were reportedly shocked, given the company’s historically anti-AI stance. On March 23rd, Capcom published an investor Q&A summary in which it stated, “Our company will not be implementing any AI-generated assets into our video game content.” Capcom said it would use generative AI internally for development efficiency but drew a clear line at final game assets. That policy adds an awkward layer to the DLSS 5 reveal, where a Capcom executive was quoted in Nvidia’s own press release praising the technology’s potential for Resident Evil.

Bethesda’s Todd Howard was even more vocal, calling the technology’s results in Starfield, quote, “amazing,” and saying “the artistic style and detail shine through without being held back by traditional limits of real-time rendering.” Todd Howard’s enthusiasm reads differently when you consider that Starfield was one of those demos where DLSS 5 was showcased.

Yeah, the Starfield one’s funny because it was adding facial details that aren’t actually there, and you could tell that the AI didn’t know what to do with certain things it couldn’t interpret, right? Like shadows.

Digital Foundry, one of the most respected names in technical analysis, ran a positive hands-on piece with the technology and immediately came under fire. The outlet released a follow-up Q&A video in which founder Richard Leadbetter… wow, that’s a fun last name.

Wait, that’s a perfect name. That’s a perfect name for a founder.

Richard made a statement, quote, “Should have taken more time.” A team member went further, saying DLSS 5 appears to, quote, “trample on artistic vision in a very hardcore way.” So you’ve got some serious backpedaling here. They point out that the system only has access to 2D data and no specialized 3D face data, meaning it averages results based on its training data. On the ethical side, he said the alteration of characters was “very problematic because the actress probably signed off on her likeness to a certain degree.”

By March 23rd, Huang was on the Lex Fridman podcast and the tone had shifted. Quote, “I think their perspective makes sense and I can see where they’re coming from because I don’t love AI slop myself. All of the AI-generated content increasingly looks similar and they’re all beautiful.” So, “I’m empathetic toward what they’re thinking.” But the concession only went so far. Huang still maintained that DLSS 5 is, quote, “3D conditioned, 3D guided,” and that it preserves artist intent.

How is that possible when it’s a whole-ass filter?

Huge alterations in some cases. DLSS 5 is not shipping until sometime this fall. The company has announced 16-plus launch titles across major publishers, though the blindsided developer reaction at Capcom and Ubisoft raises real questions about how much artistic control studios will actually exercise. DLSS as a technology has been integrated into over 750 games since its 2018 debut, and previous versions faced their own skepticism before gaining widespread adoption. Whether DLSS 5 follows the same arc or becomes the first version gamers actively refuse to accept is an open question. The backlash shows no signs of cooling down.

Here’s the shitty graphics. Thanks, Nvidia. So now we could take all of that and add an AI slop filter on it. They did say that it’d be optimized to run on a single GPU before it launches. And this will be exclusive to the 50 series. It makes me wonder what kind of artifacts or hallucinations we’ll see.

It’s going to be bad. We’ll see what happens when it actually comes out and it’s “finished,” quote unquote.

4. Anthropic vs. The Pentagon

All right, what do we got next here? Anthropic versus the Pentagon.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei to grant the Pentagon access to Claude, quote, “for all lawful purposes,” or face termination of the contract and worse. Amodei refused. In a public statement, he said the company, quote, “cannot in good conscience accede to their request.” Hegseth designated Anthropic a supply chain risk. It was the first time the United States government had ever applied this designation to an American company. The practical consequences were severe. If the label stood, every defense contractor doing business with the Pentagon, including Amazon, Microsoft, and Palantir, would be required to certify that they do not use Claude in any military-related work. The fallout wouldn’t stop at the $200 million contract. It would ripple through Anthropic’s entire commercial customer base.

So on March 9th, Anthropic filed two federal lawsuits. The company called the designation, quote, “unprecedented and unlawful,” arguing it violated both First Amendment protections and the company’s Fifth Amendment right to due process. The ACLU and the Center for Democracy and Technology filed a supporting brief in the D.C. Circuit case. Patrick Toomey, deputy director of the ACLU’s National Security Project, put it this way. Quote, “AI-powered surveillance poses immense dangers to our democracy. Anthropic’s public advocacy for AI guardrails is laudable and protected by the First Amendment, not something the Pentagon should be punishing.”

Meanwhile, something remarkable happened across Silicon Valley. Over 100 Google employees working on AI technology sent a letter to Jeff Dean, chief scientist of Google DeepMind, requesting that the company prohibit the military from using Gemini for surveillance of Americans or autonomous weapons without human oversight. The letter explicitly cited the Anthropic dispute. Separately, nearly 900 employees across Google and OpenAI signed a public open letter titled, quote, “We Will Not Be Divided,” urging their companies to hold the same lines Anthropic was being punished for drawing.

OpenAI CEO Sam Altman then signed a deal with the Pentagon anyway, though he publicly claimed it contained similar restrictions. Anthropic had also been making moves in the political arena. On February 12th, the company donated $20 million to Public First Action, a bipartisan nonprofit advocacy group backing candidates from both parties who support AI oversight and safeguards in the 2026 midterms. The donation put Anthropic directly at odds with Leading the Future, a rival super PAC backed by OpenAI co-founder Greg Brockman and Andreessen Horowitz that had already raised $125 million to support lighter AI regulation.

The case went before U.S. District Judge Rita Lin in San Francisco on March 24th. Lin did not hide her skepticism. Quote, “I don’t know if it’s murder,” she said, “but it looks like an attempt to [expletive] Anthropic.” She questioned whether the company was being, quote, “punished for criticizing the government’s contracting position in the press.” Eric Hamilton, the government’s lawyer, argued that the DoD had, quote, “come to worry that Anthropic may in the future take action to sabotage or subvert IT systems.” Anthropic’s attorney responded, quote, “This is something that has never been done with respect to an American company. It is a very narrow authority. It doesn’t apply here.”

The backdrop made the whole thing even more surreal. On the same day as the hearing, Pentagon Chief Information Officer Kirsten A. Davies confirmed during a Senate Armed Services Committee appearance that Claude is actively being used in the ongoing military conflict with Iran. The Pentagon designated Anthropic a national security threat while simultaneously relying on the company’s technology in combat. Senator Jack Reed asked Davies if that struck her as odd. She deflected, saying the six-month phase-out period was, quote, “reasonable.” Without the injunction, Anthropic said in filings it could lose billions in business.

On March 26th, Judge Lin granted the preliminary injunction, blocking both the supply chain risk designation and Trump’s directive. Quote, “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she wrote. The Pentagon has signaled it will appeal.

5. Microsoft’s “Microslop” Retreat

Well, if you haven’t heard the term “Microslop,” that is a direct reference to the company Microsoft. Very nice. It is their meme name, if you will.

So, on March 20th, Windows chief Pavan Davuluri published a blog post titled “Our Commitment to Windows Quality.” Yeah, right.

Come on. And if you spent any time using Windows 11 over the past year, the list of promised changes reads like a user wish list that Microsoft ignored for three straight years and suddenly discovered in a desk drawer. “Oh, hey. Oh, that’s where that went. Won’t these be nice? Won’t these things be nice?”

So the headline: Microsoft is reducing Copilot integrations across the OS, starting with the Snipping Tool, Photos, Widgets, and Notepad. That implies that the previous approach, cramming Copilot into every corner of the operating system whether it belonged there or not, was somehow unintentional. Which makes no sense. Nobody who watched Microsoft bolt an AI button onto Notepad believes that.

Yeah, I know. Sprinkle some Copilot here. Throw some in there. Yeah, come on.

Yeah. Back in late December of 2025, CEO Satya Nadella published a year-end blog post asking the tech industry to get past the, quote, “arguments of slop versus sophistication” when talking about AI. The internet took that as a direct challenge. Within days, the term “Microslop” exploded across X, Reddit, Instagram, and just about every tech forum that exists on the internet. The name itself had floated around since the mid-2000s, but Nadella’s plea turned it into the most popular tech meme of early 2026. There are even browser extensions that replace every mention of “Microsoft” with “Microslop”; those racked up over 180,000 downloads. Holy shit. It even got so bad that Microsoft eventually banned the word “Microslop” on its own Copilot Discord server, which predictably triggered a raid that forced the server offline entirely by March 1st. Classic Streisand Effect.

The backlash clearly reached Redmond. Yeah. So in that blog post, he announced the return of the movable taskbar. That’s a feature Windows offered as far back as Windows 95, over 30 years ago, and inexplicably removed in Windows 11.

Thanks. In that blog post, he also promised to reduce the baseline RAM usage. Okay. He pledged that Windows Update would become less disruptive with the ability to pause updates for as long as you need, skip them during setup, and shut down without being forced to install them. Okay. He also announced plans to reduce promotional upsells and make the OS less cluttered overall. I’ll believe that when I see it. And he outlined an overhaul of the Windows Insider program to make feedback loops more transparent, with an upgraded Feedback Hub so users can actually see how their input is shaping Windows.

Ah, that app that I uninstall immediately after installing Windows. Pre-installed apps. Yeah.

So, Windows Central summed it up. Quote, “Today’s announcement almost reads like an apology letter, just without the actual apology.” The Register pointed out that the blog was, quote, “long on promises that things will get better, but short on words like ‘sorry’ and ‘apologize.’”

So Windows Latest also reported that Microsoft may drop Copilot branding from some AI features entirely and shift toward performance-focused development instead. Wow. That tracks with earlier reporting from Windows Central that several planned Copilot integrations for the Settings menu and File Explorer were quietly shelved, and that features which did return came back without the Copilot name attached to them. The initial wave of changes is rolling out to Insider builds through March and April 2026. Scott Hanselman, VP at Microsoft, confirmed on X that updates will, quote, “arrive every month this year,” with broader availability throughout 2026.

So whether Microsoft actually follows through is the real question. Obviously, we’ve heard similar pledges before. But between the “Microslop” meme and users fleeing to macOS or Linux, Microsoft may finally be running out of second chances. So they actually might be realizing that they’re dropping the ball.

6. FCC Bans All Foreign-Made Routers

Speaking of other shit: on March 23rd, just a few days ago, the FCC added every consumer-grade router produced in a foreign country to its covered list. That single move blocks any new foreign-made router model from receiving FCC equipment authorization. And without that authorization, a router cannot legally be imported, marketed, or sold in the United States. What?

The action followed a March 20th national security determination from a White House-convened interagency body, which concluded that foreign-produced routers, quote, “produce unacceptable risks to the national security of the United States.” The determination cited the Volt Typhoon and Salt Typhoon cyberattack campaigns, both attributed to Chinese state actors, as evidence that foreign-made routers had been weaponized against American infrastructure. FCC Chair Brendan Carr framed the action as following, quote, “President Trump’s leadership on cybersecurity.”

Here is the problem with framing this as a targeted security action. Virtually every consumer router sold in America is made overseas. TP-Link, ASUS, Netgear, D-Link, Linksys, Amazon’s Eero, and Google’s Nest WiFi all manufacture their hardware outside the US. The only notable exception is some Starlink routers produced in Texas, and those ship as part of a satellite internet kit, not as standalone consumer networking gear. A TP-Link spokesperson told Wired, quote, “Virtually all routers are made outside the United States.”

Some government officials, including former NSA cybersecurity director Rob Joyce in congressional testimony, have estimated that Chinese manufacturers control roughly 60% of the US home router market. Either way, the ban does not target Chinese manufacturers specifically. It targets all routers produced in any foreign country, period.

Companies can apply for, quote, “conditional approval” from the Department of War or the Department of Homeland Security to get their products exempted. The process is not light. Applicants must disclose their full management structure and supply chain details, provide a detailed bill of materials with country of origin for every component, and submit a concrete, time-bound plan for moving manufacturing to the United States, complete with quarterly progress reports. This is not a security audit. This is an onshoring program with a security label on it.

The FCC ran the exact same playbook with foreign-made drones in December of last year. The results so far are revealing. Of all the drone applications submitted, exactly four systems have received conditional approval as of March 18th, 2026: Skydio Aviation based in the US, Mobilicom based in Israel, Scoutdi based in Norway, and Verge Aero, also based in the US. Every single approval went to a non-Chinese manufacturer. DJI and Autel, the two dominant drone makers, remain fully blocked. DJI is currently suing the FCC in the Ninth Circuit. Former FCC officials told CyberScoop the router ban has similar, quote, “big swing” parallels to the drone action, questioning whether it would survive legal challenge or meaningfully address the actual security risk.

And about that security risk: the cyberattack campaigns the FCC cited to justify the ban largely exploited American-brand hardware. Salt Typhoon gained access to US telecom networks by exploiting known, unpatched vulnerabilities in Cisco routers running Cisco’s own networking software, including a flaw that had been publicly documented for seven years. Multiple security researchers and publications, including CSO Online and Dark Reading, have noted that TP-Link’s vulnerability track record is comparable to, and in some metrics better than, its American and Taiwanese competitors. CISA, the federal cybersecurity agency, maintains a public list of known exploited security flaws. That list has two entries for TP-Link, compared to 74 for Cisco and 20 for D-Link.

The practical impact for consumers right now is limited. Existing routers already in homes and offices are completely unaffected. So if you need a new router, the models on shelves today are still fair game. But the pipeline of new products just froze, and nobody knows when or if it will thaw.

Just great. Just great. Another heavy-handed move by an administration. You know what? I’m not going to finish that sentence. Things are getting closed off. Things are getting tight.

7. Age Verification Hits the iPhone

Now, lastly, ironically, again coming back to age verification, something that doesn’t seem to ever really go away. Age verification has now hit the iPhone in the UK, but the US is poised to be next.

Apple released iOS 26.4 on March 25th, and UK users who opened their Settings app found something new at the top: a prompt labeled “Confirm you’re 18+.” If you have a credit card linked to your Apple account, or your account is old enough, the confirmation takes seconds. If not, you’ll need to scan a government-issued ID to prove you’re an adult. If you decline the prompt entirely or can’t verify, Apple automatically flips on its Web Content Filter and Communication Safety features, restricting your access to certain apps, websites, and content.

Apple has framed its approach around data minimization and says the system checks payment methods and account history before ever asking for an ID. That said, the company has not published specifics on what happens to scanned ID data or how long it persists. The rollout has not gone smoothly. Apple Support forums and Reddit are filled with complaints from users whose driver’s license scans keep failing, whose debit cards were rejected (only credit cards are accepted), and who had no alternative verification method available.

Oh, jeez. So, wow. They’re only accepting credit cards. You can’t use your debit card. Driver’s licenses are failing to be scanned in. There’s no alternate method to verify any of this. So it’s going really well right now.

So 9to5Mac documented the failures the day after launch, and TechRadar ran a piece headlined with a Reddit user quote: “If this starts affecting me using apps, I will switch to Android.” Me too. Ofcom, the UK’s communications regulator, praised the move as, quote, “a real win for children and families.” But Ofcom acknowledged that Apple was not technically required to implement OS-level age checks under the UK’s Online Safety Act. Apple did this voluntarily.

I wonder who else did that. Discord. Discord reportedly did so after close collaboration with Ofcom. In other words, Apple is getting ahead of regulation, not responding to it.

But this is not limited to the UK. On February 24th, Apple began blocking users in Australia, Brazil, and Singapore from downloading apps that are rated 18+ unless they can confirm they are adults through the App Store. Utah’s App Store Accountability Act takes effect on May 6, 2026, requiring app stores to verify user ages using commercially reasonable methods before allowing downloads. And in California, the Digital Age Assurance Act, signed by Governor Newsom on October 13th of last year, which takes effect January 1st of 2027, goes further than any of these. It requires every operating system provider to collect user age data at account setup and transmit it to app developers through a real-time API. The law’s definition of “operating system provider” is anyone who develops, licenses, or controls OS software on a computer, mobile device, or any other general-purpose computing device. That’s broad enough to capture SteamOS, niche Linux distros, and anything else that ships an operating system. AB 1043 passed both California chambers unanimously, 76 to 0 in the Assembly and 38 to 0 in the Senate.

Not a single person pushed back. No one said no.

So Newsom signed it but flagged concerns in his signing statement about how the law handles shared devices and multi-user accounts. The Electronic Frontier Foundation published an analysis of AB 1043 that lays out the downstream consequences. Once a developer receives an age-bracket signal, they are, quote, “deemed to have actual knowledge” of the user’s age range under California law. That knowledge triggers liability under COPPA, the federal Children’s Online Privacy Protection Act, for users under 13; under the CCPA, California’s Consumer Privacy Act, for minors generally; and under AB 1043 itself. Penalties run up to $7,500 per affected child for each intentional violation.

This is absurd. The EFF argues this gives developers a strong incentive to simply block anyone flagged as a minor rather than navigate the compliance maze. The EFF’s term for it: “outsourced censorship.” California sets the mandate. Developers do the restricting. And because California’s consumer protection laws tend to set the national floor, as the CCPA did for data privacy, AB 1043 will almost certainly shape how OS providers and app developers behave everywhere, not just in California.

Apple itself argued against this exact approach in a February 2025 white paper titled “Helping Protect Kids Online,” stating that requiring age verification at the app marketplace level is “not data minimization.” Just over a year later, the company is implementing age verification at the OS level. So they can’t even stay true to their own word. Now we’re here. They’re doing it. Ridiculous.

The privacy backlash is already measurable, as you can imagine. This is pretty bad. When the UK’s Online Safety Act began enforcing age verification requirements on adult websites in July 2025, Proton VPN reported a 1,400% surge in UK signups within minutes. NordVPN, another popular VPN (I personally don’t use it, but NordVPN’s popular), also confirmed a 1,000% spike in UK purchases over the same period. These numbers reflect what happens when governments mandate identity checks online. A significant portion of users immediately reach for tools that circumvent them. The iOS 26.4 rollout is likely to accelerate that pattern.

I bet it will. The open-source community is pushing back harder than anyone, for obvious reasons. GrapheneOS, which is a privacy-focused Android fork that currently runs exclusively on Google’s Pixel phones, posted a statement on March 20th refusing to comply with any age verification mandate. Their exact words, quote, “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account. GrapheneOS and our services will remain available internationally. If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” When someone on X asked whether they’d geo-block users via VPN, the team responded, “We don’t filter the internet for Iran or North Korea. So why would we for Brazil or California?”

Well, got to get it right. Over 400 computer scientists have signed an open letter arguing that these laws build surveillance architecture without meaningfully protecting children, since self-declaration is trivially bypassed. Yeah, of course. I mean, we see that all the time. You get a prompt that says, “Are you 18?” You click yes. You go, you enter, right? I mean, it’s as simple as that.

Colorado’s SB26-051 passed the state senate on March 3rd with similar requirements to California’s law. The pattern is clear. Governments are pushing age verification down the stack, from websites to app stores to operating systems. Apple is complying. Open-source projects are refusing. And the gap between the two is where the fight over who controls your device and who gets to know how old you are is playing out in real time, right in front of you.

Unreal.

I knew I told you this would happen.

Yeah. When Discord started doing it, I knew that everyone else was going to jump on, and that’s exactly what happened.

Yeah. And as we learned earlier, we got Meta in large part to thank for that.

Yeah. I know that there’s a distro called Ageless Linux. That’s just exactly what it sounds like. But on the Linux side of things, there’s been a lot of controversy over the systemd stuff there. It’s insane. Just a quick mention that it’s been reported that people where age verification laws are in effect are starting to see PSN have age verification requirements pop up. It’s pretty rapid, the spread of this, and it’s clamping things down pretty hard.

The Build Log

That wraps up all the news. We don’t have a deep dive. More of a news-centered podcast this time. I got one thing to talk about for the build log here. It’s funny. This is all tied into observing data and visibility and all this. And meanwhile, here I am building out a dashboard for observability. That’s what I’ve been into this past week.

Chris N: Observability and Subtoken Auth

What do I mean by that? Building the ability to see what’s actually happening with the product once it’s out in the world. Let me just back up for a second for anyone who hasn’t heard me talk about this before. It’s called Subtoken Auth. It’s an access solution that I’m building for self-hosted apps. It generates permission-restricted access tokens for situations where a normal login flow doesn’t work, like mobile apps that can’t handle single sign-on redirects or two-factor prompts. The whole point is that you run it on your own server, on your own terms. Which is great for privacy, great for control, but it creates a real blind spot for me as the developer. If something breaks, I don’t get a phone call. There’s no server I can check. The app is running on somebody else’s machine, behind their firewall. So the question becomes: how do I know if things are healthy out there? How do I know if people are even using it? How do I know if something’s going wrong before it turns into some support email?
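To make that concrete before getting into the telemetry piece: here’s a rough sketch of how a permission-restricted token could work under the hood. To be clear, this is my own illustration using HMAC signing, not Subtoken Auth’s actual format; the function names and payload fields are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical signing key; in practice this lives only on the self-hosted server.
SERVER_SECRET = b"replace-with-a-real-random-secret"

def issue_subtoken(scopes: list[str], ttl_seconds: int) -> str:
    """Issue a token restricted to specific permissions, with a hard expiry."""
    payload = json.dumps(
        {"scopes": scopes, "exp": int(time.time()) + ttl_seconds},
        sort_keys=True,
    ).encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return (
        base64.urlsafe_b64encode(payload).decode()
        + "."
        + base64.urlsafe_b64encode(sig).decode()
    )

def check_subtoken(token: str, required_scope: str) -> bool:
    """Verify the signature and expiry, and check the token grants the scope."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except Exception:
        return False  # malformed token
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged
    claims = json.loads(payload)
    return time.time() < claims["exp"] and required_scope in claims["scopes"]
```

The idea is that a mobile app that can’t handle an SSO redirect just gets handed one of these, scoped to exactly the endpoints it needs and nothing more.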

The answer to that is community telemetry, where deployed instances can send back anonymous health signals. You can think of it like a weather station network: each station reports conditions locally, and when you aggregate them all, you get something like a forecast. There’s no personal data. There’s no actual tracking. It’s just stuff like “this instance started up successfully” or “this instance validated 200 tokens today” or “this instance is running version 1.2 on an x64 architecture with a community license.” Useful stuff like that. Also, timing of specific functions in the application so I can see where the slow spots are and improve those.
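For a sense of what one of those health signals might look like, here’s a sketch. The field names and the `build_health_ping` function are my own illustration, not the actual wire format; the point is that nothing in the payload identifies a person, just coarse facts useful for aggregating across instances.

```python
import json
import platform
import time

def build_health_ping(app_version: str, tokens_validated: int, license_tier: str) -> str:
    """Assemble an anonymous health signal as a JSON document.

    Contains no user data and no token contents: only coarse facts
    for aggregating by version, architecture, and license tier.
    """
    ping = {
        "event": "daily_health",
        "app_version": app_version,            # e.g. "1.2"
        "arch": platform.machine(),            # e.g. "x86_64"
        "license": license_tier,               # e.g. "community"
        "tokens_validated": tokens_validated,  # a daily counter, nothing more
        "sent_at": int(time.time()),
    }
    return json.dumps(ping, sort_keys=True)
```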

But building that pipeline was kind of a project. The first version used a monitoring tool called SigNoz, and it worked, but it was really heavy. Within about 48 hours, I ripped that thing out and replaced it with a Grafana-based stack, the industry-standard open-source monitoring toolkit for this kind of thing: tools that handle logs, metrics, traces, and dashboards. It’s more flexible, and I actually understand what every piece does now.

So on top of that, I had to build an authentication layer so only registered instances can send valid data, so nobody can hit my telemetry endpoint with garbage and pollute the metrics. That went through a couple iterations as well. I started with a single shared key, realized that didn’t scale, and ended up building a registration system where each deployed instance gets its own unique credential, digitally signed and derived from a key, to prove it’s legitimate. It’s all issued automatically. There’s no user interaction required. Everything’s handled silently on startup.
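The nice property of a derived credential is that the collector never has to store per-instance secrets; it can re-derive them on demand from the instance ID. Here’s a sketch of that pattern, assuming HMAC-based derivation. The actual scheme isn’t specified in the episode, and in a real deployment registration would happen over the network rather than in one process; both sides are shown together purely for illustration.

```python
import hashlib
import hmac
import secrets

# Hypothetical master key, held only by the telemetry collector.
MASTER_KEY = b"collector-master-key-example"

def register_instance() -> tuple[str, bytes]:
    """Registration (first startup): mint a random instance ID and derive
    its credential from the master key. No user interaction required."""
    instance_id = secrets.token_hex(16)
    credential = hmac.new(MASTER_KEY, instance_id.encode(), hashlib.sha256).digest()
    return instance_id, credential

def verify_instance(instance_id: str, credential: bytes) -> bool:
    """Collector side: re-derive the expected credential from the ID and
    compare in constant time. No per-instance secret storage needed."""
    expected = hmac.new(MASTER_KEY, instance_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(credential, expected)
```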

And then there’s the license server that I got working this week, which is a separate service running on Cloudflare’s edge network. It’s basically their global server network. When I started the week, it validated license keys. That was about it. By the end of the week, though, it now validates keys, distributes plugins, registers telemetry credentials, monitors its own health, and deploys itself through an automated pipeline.

On the project management side, I recently moved away from a platform called ClickUp to a self-hosted tool. It’s not that there was anything wrong with ClickUp; it worked fine, but I wasn’t using probably 95% of the features, and it just felt like it was time to simplify. Now when I finish building a feature, I just mark the task done. When new work surfaces, it gets logged and tagged with a numeric task ID. It’s a small thing, but it’s significantly smoothed out the workflow.

The big takeaway, I guess, if there even is one: when you’re building a product that runs on other people’s infrastructure, you kind of have to build two products if you want your product to be that much better. You build the thing the customer uses, and then you build the thing that tells you the thing is working. And that second product is just as complex, but just as important, and sometimes just as interesting to build. So there you go. That’s what I’ve got going on.

Chris V: Same as Before

Wow. Yeah. Well, what about you? You got anything interesting to share?

No, pretty much doing the same thing that I was doing last time. I’m probably going to be doing that for quite a while. Potentially several months.

All right. But yeah, if anything significant changes there, then yeah, we’ll talk about it. But so far, no. Still grinding away. Keep us posted.

The Plug / Outro

Well, I guess now it’s on to the plugs. Yeah, I’ve got a plug. It’s another YouTube video, because of course it is. It’s from a creator called Joe Scott, who I really like. The video is called “Rocky is Weirder Than You Think.” It’s about Project Hail Mary, which is now coming out as a movie as well. The video is a deep dive into the alien character Rocky and the actual science behind how Andy Weir designed him. He wrote 16 pages of notes on this creature’s biology, everything from its circulatory system to its cellular makeup, all based on the real conditions of an actual exoplanet we’ve discovered. Joe Scott is an awesome creator, and he goes through those notes with Andy Weir himself; they get into details that aren’t in the book or the movie. I haven’t seen the film yet, but I’m a huge fan of Weir’s work. The Martian was incredible, and this is primo content for me. This is exactly the type of thing I love: somebody taking a concept, whether it’s science fiction or not, and pulling it apart to show you the real science underneath. It’s a really phenomenal video.

I found it incredibly fascinating, even though I have never read the book or seen the movie. Good stuff. You got any plugs?

Uh, nope. I don’t have anything this time around.

What about Kevin James?

Sound guy stuff. Sound guy stuff.

No, I’m just throwing that out there.

Yeah, it’s pretty old. If you haven’t seen any of that, you should watch it. And if you don’t know who Kevin James is, he’s been in a lot of pretty goofy comedies. Oh god, how do you not know Kevin James?

Pretty silly guy. Anyways, he’s got his own YouTube channel and he makes little skits. And there’s a specific one called “The Sound Guy,” and he takes various movies and pretends that he’s one of the sound guys on set, you know, recording the actors, and it’s just very, very, very wild scenarios in some of them, and it’s just, you know, it’s fun.

It’s fun. Yeah, they’re pretty short. They’re only like a minute or two each. So hey, good stuff.

I don’t think he’s done one for a while though. It’s been a long time.

Yeah, he’s got some other project coming out. I’ve seen an ad for it. I don’t know what’s going on there, but best of luck.

Anyways, I guess that’s it. All right. Well, just to do a little housekeeping here. We’ve got links. patreon.com/squaredcast. We also have a website, squaredcast.com. If you’d like to support what we’re doing, you can check out our Patreon. You’ll get bonus episodes, project builds, music from the archive (just uploaded, by the way), and a whole lot more, starting at just $2 a month. We appreciate you being here. We’ll see you next week.
