Episode 103: It’s Not Just Code Anymore: AI That Thinks

Ron Green | 05/22/2025

The conversation around artificial intelligence (AI) has shifted. We’re no longer asking if AI will change how we work — we’re asking how, when, and what happens next. Ron Green, co-founder and CTO of kungfu.ai, has been living these questions for decades. In this #shifthappens episode, Ron shares a view from the frontlines of AI development where machine learning meets human reasoning, and software engineering meets something more creative.

From his early days in computer science to building one of the leading AI consultancies in the U.S., Ron has seen the arc of AI evolution move from speculative to operational. But the big story isn’t just about algorithms; it’s about people, strategy, and a rethink of how we design the future of work.

The Thinking Machine Is Here

AI systems today are showing surprising abilities to reason, adapt, and even support decision-making. What used to be considered hallmarks of human cognition – pattern recognition, logic, and synthesis – are increasingly being demonstrated by large-scale models and specialized AI systems.

Even as AI’s capabilities improve, Ron emphasizes the importance of how we use those capabilities. The goal is not to build machines that replace humans but systems that extend what humans can do. That includes assisting with complex tasks, enhancing productivity, and in some cases, introducing entirely new ways of creating value.

AI, however, is not infallible. It can hallucinate, deliver incorrect responses confidently, and reinforce biases baked into its training data. That’s why smart strategies and human oversight remain non-negotiable.

AI’s Inflection Point in Business

The rapid evolution of generative AI has forced companies to reckon with both opportunity and uncertainty. Ron points out that many businesses rush into AI adoption expecting immediate transformation. But in reality, success goes beyond a tech upgrade; it requires strategic clarity.

Organizations need to answer fundamental questions: What problem are we solving? How will we measure value? Who owns the AI output? And perhaps most importantly, where do humans stay in the loop?

The companies seeing success with AI today are those who treat it not as a magic solution but as a tool — one that requires the same diligence, governance, and iteration as any major transformation effort.

How to Lead When the Code Starts Thinking

Ron Green outlines AI’s trajectory and how leaders should respond to it. From practical guidance to provocative foresight, here are the key shifts every organization should be paying attention to.


Smarter Tech Needs Smarter Strategy

AI is only as powerful as the strategy behind it. The pace of technological advancement is staggering, and it’s easy to be dazzled by demos and prototypes that don’t scale in the real world. Ron warns against chasing capability without context. For AI to truly drive business value, organizations need to tie use cases back to meaningful goals — whether that’s operational efficiency, customer experience, or unlocking new revenue streams.

This means thinking beyond the pilot phase and building the infrastructure to support continuous improvement, from data governance to cross-functional collaboration.


Keep Humans in the Loop

Even the smartest AI systems need supervision. In high-stakes fields like healthcare, finance, and law, human oversight is essential. AI can assist with analysis and generate content, but it doesn’t understand the implications of its decisions.

Keeping humans in the loop ensures quality, context, and ethical accountability. It also builds trust within teams, especially as employees begin working alongside tools that feel increasingly autonomous. The best results come when machines and people collaborate — each playing to their strengths.

To make that collaboration work, organizations need to build AI fluency at every level. This doesn’t mean everyone has to learn how to code, but they do need to understand key concepts: what makes a model trustworthy, how training data influences behavior, and why explainability matters.
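As a loose illustration of what keeping humans in the loop can look like in practice, here is a minimal Python sketch (our own example, not something from the episode): the model drafts, a person decides whether it ships. The generate_draft function and its confidence value are hypothetical placeholders for whatever model or service a team actually uses.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def generate_draft(prompt: str) -> Draft:
    """Hypothetical stand-in for a call to any generative model or service."""
    return Draft(content=f"AI-drafted response to: {prompt}", confidence=0.72)

def human_review(draft: Draft) -> bool:
    """A person approves or rejects the draft before it goes out."""
    print(f"Review required (model confidence {draft.confidence:.0%}):\n{draft.content}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def handle_request(prompt: str) -> str:
    draft = generate_draft(prompt)
    # In a high-stakes workflow, nothing ships without human sign-off.
    if human_review(draft):
        return draft.content
    return "Draft rejected; routed to a human specialist."

if __name__ == "__main__":
    print(handle_request("Summarize this patient's discharge instructions."))
```

The specifics matter less than the shape of the workflow: the review step is where quality, context, and accountability live.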


Rethink Software from the Ground Up

One of the most transformative impacts of AI is on software development itself. AI is no longer just a tool that supports coding; it’s now writing the code. Developers are shifting from crafting every line manually to guiding intelligent systems that generate, test, and refactor code.

This doesn’t mean software engineers are going extinct. On the contrary, their role is evolving. Developers are becoming architects of intent: designing systems, setting parameters, and validating results rather than laboring over syntax. This shift opens the field to creative problem-solvers, not just traditional coders.
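One way to picture the architect-of-intent role is a developer who specifies behavior as executable tests and accepts AI-generated code only when it passes them. The sketch below is purely illustrative; generated_slugify stands in for whatever a code-generation model might produce.

```python
# Illustrative only: the developer writes the contract, the model writes the code.

def generated_slugify(title: str) -> str:
    """Pretend this body was produced by a code-generation model."""
    return "-".join(title.lower().split())

def meets_spec(candidate) -> bool:
    """The human-authored tests the generated code must satisfy."""
    cases = {
        "Hello World": "hello-world",
        "  Spaces   everywhere ": "spaces-everywhere",
        "Already-hyphenated Title": "already-hyphenated-title",
    }
    return all(candidate(title) == slug for title, slug in cases.items())

if __name__ == "__main__":
    verdict = "accepted" if meets_spec(generated_slugify) else "sent back for another attempt"
    print(f"Generated implementation {verdict}.")
```

The developer’s judgment shows up in the test cases, not the implementation; that is the shift from laboring over syntax to validating results.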


Use AI’s Strengths but Know Its Limits

AI systems excel at pattern recognition, summarization, and even basic reasoning. However, they’re still limited when it comes to long-term planning, causal understanding, and dealing with nuance. Organizations must be clear-eyed about what AI can and can’t do.

Ron emphasizes the need for domain expertise when deploying AI solutions. Models may suggest answers, but it takes human judgment to know whether those answers are meaningful or just confidently wrong. In regulated or safety-critical environments, that distinction could be the difference between success and catastrophe.


Push Creativity Further with Machines

Despite early skepticism, creative applications of AI are gaining traction in music, art, advertising, and product design. The key is not to expect machines to replace human creativity, but to let them augment it.

As Ron puts it, “Right now, we've been constrained to music that is the best that humans can create. In the future, we might be able to experience music that's the best that can be created and that won't be limited necessarily by our own cognitive or creative abilities.”

In this way, AI becomes a new kind of creative partner — one that’s tireless, iterative, and capable of surprise.

Ethics Aren’t Optional

As AI systems become more capable, the ethical stakes get higher. Transparency and accountability become business imperatives. If left unchecked, biased AI can harm customers, damage reputations, and trigger legal consequences.

That’s why governance must be integrated from the beginning. Organizations should develop frameworks for responsible AI use, including processes for monitoring models, auditing decisions, and addressing unintended outcomes.
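To make “auditing decisions” a little more concrete, here is a minimal sketch of one such process: wrapping every model call so its inputs, output, and version are recorded for later review. The credit-scoring function, field names, and version string below are all hypothetical; this illustrates the pattern, not any specific framework discussed in the episode.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail: every prediction is recorded with its inputs,
# output, and model version so decisions can be reviewed after the fact.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def audited(model_name: str, model_version: str):
    """Decorator that logs each call to a prediction function."""
    def wrap(predict):
        def inner(features: dict):
            result = predict(features)
            audit_log.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "version": model_version,
                "inputs": features,
                "output": result,
            }))
            return result
        return inner
    return wrap

@audited("credit_risk", "2025.05")
def score_applicant(features: dict) -> float:
    # Placeholder scoring logic; a real model would live here.
    return min(1.0, 0.2 + 0.1 * features.get("years_employed", 0))

if __name__ == "__main__":
    print(score_applicant({"years_employed": 4, "income": 58000}))
```

A record like this is what makes ongoing monitoring, bias audits, and investigation of unintended outcomes possible.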

Governance goes beyond compliance; it’s about creating AI systems that people can trust. That trust will be a key differentiator as technology becomes more deeply embedded in daily life.

Why Humans Still Hold the Edge

In a world where AI can write code, draft documents, generate visuals, and even reason about next steps, it’s tempting to assume we’re approaching artificial general intelligence. But capability doesn’t equal wisdom, and performance doesn’t guarantee understanding.

What AI lacks is the grounding that humans bring: empathy, ethics, judgment, and an ability to navigate complexity that isn’t just computational. That’s why, even as machines get smarter, the most important decisions about AI still fall to us.

Episode Resources

#shifthappens Research: AI & Information Management Report

#shifthappens Insights: #shifthappens | Where’s the ROI in AI?

#shifthappens Podcasts:

kungfu.ai website

Ron Green on LinkedIn

Dux Raymond Sy on LinkedIn

Mario Carvajal on LinkedIn
