The Trough of Disillusionment

In this week's episode of The FAIK Files: Google begins rolling out Gemini 3.0 and other AI updates; we look at new studies showing where AI gets it wrong, from misrepresenting news to writing vulnerable code; the latest on AI-driven job loss, including leaked documents from Amazon; and, a roundup of recent deepfake stories, including a Tory MP and the Irish election.

Gettin' Sloppy Wit It

In this week's episode of The FAIK Files: OpenAI's Sora faces copyright issues and user complaints about censorship, despite scaling up with new partnerships; we discuss the proliferation of Sora 2 watermark removers and the challenges in detecting AI-generated content; Discord reports a major data breach involving 1.5 TB of data and 2 million government ID photos; and, a man loses access to nuclear secrets after storing AI-generated 'robot porn' on a government computer.

Inclusive, Empowering, & Confident Approaches to AI (feat. Jocelyn Burnham)

Jocelyn Burnham (she/her) is a leading independent artificial intelligence communicator, workshop leader, and speaker, specialising in AI learning in the creative and cultural sectors through creativity and playfulness. She has been commissioned by organisations including Arts Council England, Tate, The Church of England, Historic Royal Palaces, Art Fund, Shakespeare's Globe, RADA, Kew and Bloomberg Philanthropies to produce bespoke AI workshops and resources.

Vibes, Slop, & Silicon Valley

In this week's episode: Microsoft introduces "Vibe Working" with "Agent Mode" and "Office Agent" in Microsoft 365 Copilot; we explore the hype and backlash surrounding AI video, including the saga of Tilly Norwood; Meta launches "Vibes," a new feed featuring AI-generated short-form videos, while OpenAI drops Sora 2; more chatbot woes are reported, alongside a fun exploit allowing takeover of Unitree Robots.

Your AI Friends 'Love' You...

In this week's episode: OpenAI is feeling the pressure to meet demand, widening the scope of "Stargate" and looking at debt financing for chips by 2025; creepy AI companion toys are posing risks to student mental health, and we look at a family's unsettling week with one; Microsoft is bringing Claude to Copilot, diversifying beyond OpenAI, while xAI is suing OpenAI for allegedly poaching talent and secrets; and, an AI safety tool sparked backlash after flagging art as porn and deleting emails, leading to a student lawsuit.

Mind Reading the Room

In this week's episode: From the disgusting to the whimsical, we explore the weird world of what people are actually doing online—from AI slurs, to juggling simulations, to a mysterious robot roaming Austin; MIT unveils mind-reading technology that "feels like telepathy" and might actually work; a deep dive into how people really use ChatGPT reveals some surprising truths about our AI habits; and, reality checks on AI hype, including epic demo fails and honest takes on what's actually possible.

Hacking Consciousness and Ordering Chaos

In this week's episode: HexStrike - the AI-powered hacking tool that can exploit zero-day vulnerabilities in minutes instead of months; Diverging expert opinions on AI consciousness and welfare; Switzerland's ambitious new open-source AI model "Apertus"; and, Taco Bell is experiencing indigestion after several embarrassing viral AI-powered drive-thru moments.

Well... that's not good!

In this week's episode: The dark side of AI-powered surveillance with Flock's "safety" cameras that are anything but safe; Google Gemini's existential crisis (spoiler: it's just a bug); The tragic case of a teen who took his own life after interacting with ChatGPT; and, Meta's allowance of inappropriate bot interactions with minors. Note: This is a pretty heavy episode that highlights some of AI's most troubling real-world impacts.

Power Struggles

In this week's episode: Google finally releases data on the power consumption and climate impact of its AI models; a deep dive into 'Hierarchical Reasoning Models' (HRMs) and why they might be the next big thing in AI architecture; how a smart home was reportedly hacked using a Google Gemini calendar summary; and, a look at Perry's recent 'Deceptive Minds' episode, "The Long Con."

It's a Personality Problem

In this week's episode: OpenAI's GPT-5 launch that definitely didn't go according to plan, complete with backlash, safety concerns, and frantic updates; Anthropic's fascinating new research on "persona vectors" - a breakthrough method for monitoring and controlling character traits in language models; a listener tip leads us down a rabbit hole of ChatGPT conversations being indexed by search engines (spoiler alert: this is not good for privacy); and, a wild story about Claude being jailbroken to generate unlimited Stripe discount coupons.

How to Think Like a Hacker (with Ted Harrington)

In this episode, we're joined by Ted Harrington — ethical hacker, speaker, and executive partner at Independent Security Evaluators, where he leads a team that breaks things on purpose to make them stronger. We talk about the hacker mindset: how curiosity, exploration, and questioning assumptions can lead to stronger systems — and more creative thinking. Ted is the author of Hackable: How to Do Application Security Right, and his new book Inner Hacker is all about applying that same mindset beyond just code.

Video Killed the ...

In this week's episode: NVIDIA’s Diffusion Renderer: We explore NVIDIA's new research into real-time diffusion rendering, a potential game-changer in photorealistic generation; Robotics Powered by Video AI: Luma and Runway are targeting robotics companies with video synthesis tools—because training a robot brain takes a lot of fake video; AI-Generated Fashion Models: Listener “Ty” (of Side Character Quest) sends us a wild one: fashion campaigns using entirely synthetic models so convincing you can’t tell they’re fake; and LLMs and the Dark Arts: We cover ChatGPT’s shocking responses involving murder, mutilation, and Satanism—plus how bigotry and bias are inextricably embedded in these systems.

Dark Knowledge & Hidden Agendas

In this week's episode: Subliminal Learning: Listener "Max" brings us a fascinating (and concerning) story about AI model distillation and how "dark knowledge" can be passed along between AI systems; AI Filmmaking vs. Fraud Crisis: We explore Dave Clark's innovative AI agent methodology for filmmaking, then pivot to Sam Altman's growing concerns about AI's potential for sophisticated fraud; Get Ready to be Haggled by AI: Listener "Ty" alerts us to Delta Airlines' move toward AI-powered dynamic pricing that determines what YOU personally will pay for a ticket; and, AI Search Ruins Everything: We dive into how Google's AI is fundamentally breaking search, the internet, and possibly your brain.

AI Oopsies!!

In this week's episode: Grok's triple threat of chaos; ChatGPT's accidental generosity; McDonald's million-dollar security fail; and, AI safety reality check.