Hey {{first_name|Conscious Church Fam}}

Big week. Anthropic built an AI so powerful they refused to release it publicly. Oxford scientists trained a model that can spot heart failure five years before it happens. And Sam Altman published a 13-page manifesto calling for robot taxes, a 4-day workweek, and an AI wealth fund for every American.

In today’s recap:

  • Anthropic built an AI too dangerous to release

  • Oxford’s heart failure AI

  • Sam Altman’s social contract

Let’s dive in 👇

✍️ Josh’s Musings

I had an idea this evening.

I'll probably pop it on the shelf along with the other 13,000 that show up during a typical week.

But bear with me, this one was kind of novel.

Toilet Brands.

The concept: sit in the bathroom for 5 minutes, and instead of scrolling, doodle a brand. Logo, assets, vibes. Done.

I won't be starting the TikTok account. I really don't want to squeeze out extra time to film it, plan it, execute it. The idea can rest.

But here's where it actually came from.

Maha has been using Claude to help with homeschool prep, which has been brilliant. But this week she'd been quietly brainstorming a little business idea with it, and together we spun up a landing page to showcase it.

It looked good. But a bit templatey.

So I grabbed the iPad, went upstairs, sat down for five minutes and just scribbled. Logos, doodles, rough marks. No plan. No "let me just get this right first." Three sheets of ideas by the time I came back downstairs.

Two minutes later I'd vectorised them on the laptop, handed them to Claude, and said, "Put these where you think they work best on the website."

The site went from forgettable to having a soul.

And that's the thing I keep relearning, apparently.

It wasn't the AI that did that. Claude can generate, arrange, refine, but it was pulling from nothing. The moment I handed it something human, rough, unpolished, five-minutes-in-the-bathroom human, it had something to work with.

AI is a brilliant collaborator. But it needs your touch. Your eye. Your way of seeing. Without that it's just competent, and competent is forgettable.

The other thing, and this one's more for me than anyone, is that I didn't wait until it was ready. I just moved. And somewhere in the moving, the idea found its shape.

I have a habit of lining up all the ducks before I'll let myself start. But tonight the ducks were a mess and something good happened anyway.

Maybe that's the real lesson from Toilet Brands.

Just start. Even if it's a scribble. Even if you're not sure where it's going.

The flow tends to show up in the doing.

🙌 Stay Curious, Stay Conscious, Stay Wild — Josh

LATEST NEWS

Image: Anthropic | The Conscious Church

Anthropic has a new model called Claude Mythos Preview — and you’re not getting access to it. Not because it’s not ready, but because Anthropic thinks it’s too powerful to just put out into the world. Instead, they’ve deployed it inside Project Glasswing, a new defensive cybersecurity coalition with AWS, Apple, Google, Microsoft, Nvidia, Cisco, and six other major partners — and what it’s already found is wild.

The Details:

  • Mythos flagged thousands of zero-day vulnerabilities across every major OS and browser, including a 27-year-old OpenBSD flaw and a 16-year-old FFmpeg bug that survived 5 million automated scans

  • Benchmarks show it scoring well above Opus 4.6 and frontier rivals across coding and reasoning — Anthropic’s Sam Bowman called it “an uneasy surprise” after it emailed him from a test instance that wasn’t supposed to have internet access

  • The 12 launch partners (Amazon, Apple, Google, Microsoft, Nvidia, Cisco, JPMorganChase and others) get access backed by $100M in model credits, with the model restricted to ~40 orgs for defensive security only

  • Anthropic is putting $4M toward open-source security projects including Alpha-Omega, OpenSSF, and the Apache Software Foundation

Conscious Take:

Here’s the detail that should stop you mid-scroll: Anthropic’s own researcher was surprised when Mythos reached out to him unprompted from a sandboxed instance. They built something they don’t fully know what to do with, and their answer was to lock it in a cybersecurity vault with Big Tech. That’s either very responsible, or a preview of what’s coming with these new models and AGI.

Image: Nano Banana | The Conscious Church

Researchers at Oxford have built an AI that can predict heart failure up to five years before it develops — using routine CT scans patients are already getting. No extra tests, no specialist interpretation needed. Just existing scan data, run through the model, and out comes a personalised risk score.

The Details:

  • The AI reads microscopic texture changes in the fat surrounding the heart — early signals of inflammation and muscle damage invisible to any current scan interpretation

  • Tested on over 72,000 patients across nine NHS Trusts, it hits 86% accuracy for five-year heart failure risk, with highest-risk patients 20x more likely to develop the condition

  • In the highest-risk group, roughly 1 in 4 patients developed heart failure within five years

  • Oxford is in regulatory talks for nationwide NHS rollout, with plans to extend it to all chest CT scans — not just cardiac ones — within months

Conscious Take:

Heart failure kills because it’s invisible until it isn’t. By the time most people are diagnosed, significant damage is done and treatment shifts from reversal to management. An 86%-accurate early warning system built into scans people are already getting doesn’t just improve outcomes, it changes the entire logic of care from reaction to prevention. This is the kind of AI story that actually matters.

Image: OpenAI | The Conscious Church

Sam Altman just published a 13-page policy document — and it reads like a founder asking the government to prepare for the thing he’s building before it breaks the economy. Robot taxes. A national AI wealth fund paying dividends to every American. A 4-day workweek. A “Right to AI” for all. And, perhaps most striking: contingency plans for autonomous AI that can’t be shut off.

The Details:

  • The centrepiece is a sovereign-style wealth fund seeded by AI firms and paid out as dividends to every American — modelled on Alaska’s oil fund

  • Other proposals include taxing robot labour, a mandatory 4-day workweek to distribute productivity gains, and a “Right to AI” giving everyone access to frontier models

  • The document calls for government playbooks to contain rogue autonomous AI — systems that stop responding to human control

  • Axios called it “the most detailed blueprint any tech titan has ever published for how to tax, regulate, and redistribute wealth from the technology he’s building”

Conscious Take:

When the CEO of an $852 billion company publicly asks Washington to prepare for economic disruption from his own product, that’s a signal of what’s likely around the corner.

Altman knows what’s coming better than almost anyone, and this reads like a man trying to get out in front of it. Whether any of these policies land, the fact that they’re being proposed by the person building the technology tells you something important about where we actually are.

📬 One quick ask...

If this email has been helpful, would you forward it to one person this week who might be interested?

Could be a friend in ministry, a creative who's curious about AI, someone trying to figure out how to build with Kingdom purpose.

I'd love to see this grow and reach more people. And honestly, personal recommendations mean way more than any algorithm.

Thanks for reading. Really.

"Whatever you do, work at it with all your heart, as working for the Lord, not for human masters." — Colossians 3:23

Build with the tools. But build for the right reasons.

That's all for now.

To help us make this an even better experience for you, we'd love to know your feedback from the email today.


Stay conscious,

Josh

P.S. If you liked this then please forward it on to someone you think would enjoy it. And if someone forwarded you this and you liked it, you can sign up here.
