AI with a will of its own?

+NEWS: NVIDIA set to build Europe's largest datacenter; AI can beat you in a debate

TL;DR

A new study argues that some generative AI agents, such as the Minecraft-playing agent Voyager or autonomous drones, exhibit functional free will. That means they make goal-driven decisions, adapt to changing environments, and act independently based on internal reasoning.

Why it matters for your business:

  • AI is becoming a decision-maker, not just a tool.

  • You may need to start treating AI like a strategic actor—with responsibilities and ethical guardrails.

  • Developers must embed moral reasoning from the start, or risk unpredictable consequences.

In short: if AI is acting like it has free will, your business needs to act like it's responsible for what that AI does.

Can AI make its own decisions—and should we treat it like it can?

A new study out of Aalto University says yes, at least in a functional sense. According to Finnish philosopher Frank Martela, some of today’s advanced AI agents meet all three classical criteria for free will: intentionality, alternatives, and control over actions.

The Big Idea: Functional Free Will

Let’s break this down in simple terms:

  • Intentionality: The AI sets goals and works towards them—without being micromanaged.

  • Alternatives: It faces real choices in how to achieve its goals.

  • Control: It adapts its actions based on its internal goals and external feedback.

Martela's study argues that agents like the Minecraft-based “Voyager” and a conceptual drone called “Spitenik” demonstrate these traits.

Voyager is an AI agent playing Minecraft. It sets its own goals, stores memories of what worked, plans next moves, and adapts to new environments—all without user micromanagement. It’s not just reacting; it’s learning and choosing.

Spitenik, while fictional, is based on the architecture of modern battle drones. It calculates optimal paths, avoids obstacles, chooses rest locations, and selects targets—all while optimising for mission success. Sound like science fiction?

Systems like this already exist in the defence industry, albeit with more constraints.

This isn’t just theory—these AI agents behave in ways that make us explain their actions as if they chose them. That’s functional free will in action.

What This Means

With AI systems now embedded in customer service, sales, operations, and even product development, the choices these systems make have real-world consequences.

  • AI isn’t just a tool. It’s becoming a decision-maker in your workflows.

  • Responsibility is shifting. If AI acts with “functional free will,” accountability may gradually move away from developers and toward the systems themselves.

  • Ethics matter more than ever. Your AI needs a moral compass—because it’s going to make decisions whether it has one or not.

With AI already making decisions, ask yourself:

  • Do you know the goals your AI systems are optimising for?

  • Have you checked whether they have fallback choices—or do they always act the same way?

  • What happens when they encounter uncertainty?

As Martela warns, “The more freedom you give AI, the more you need to give it a moral compass from the start.”

Martela’s research doesn’t suggest that AI is human. But it does highlight something crucial: many AI systems now behave in ways that require us to treat them like agents.

That means we’re not just designing tools—we’re designing actors that will operate in real-world environments, with real-world consequences. Whether it's a helpful customer service agent or a drone navigating a crisis zone, these systems need moral and strategic direction.

AI is growing up fast.

If you're handing over decision-making power to a system that appears to “think for itself,” it’s time to acknowledge: the responsibility to guide it starts with us.


That's all for this week.

If you have any questions about today's issue, email me at [email protected].

For more information about us, check out our website here.

See you next week!