Dabble No More: Toward Disciplined AI Adoption
Experimenting with AI is a starting point. But creating real value requires direction and discipline.

Recently, I had a conversation with an architecture studio lead that went something like this:
Architect: We’re using AI in the studio.
Me: Oh yeah? What are you doing?
Architect: A few things. [Person A] is using one of those meeting bots to transcribe meetings. And [Person B] is feeding renderings into ChatGPT to explore materials and colors. Clients are impressed.
Me: Interesting. Anything else?
Architect: Yes, [Person C] has used ChatGPT to create social media posts. Although we haven’t really scaled that.
This is actually a composite of several similar conversations, and I’ve changed the details — but the spirit stands. I believe this short dialog accurately represents how many service organizations are embracing AI: by dabbling.
Dabbling — or, more gently, “undisciplined adoption” — is experimenting with AI without understanding how information actually flows through the organization to create value. Instead, team members use AI ad hoc on whatever interests them most. It can happen officially (i.e., using company-provided licenses) or unofficially (bringing their own).
While dabbling has upsides, it also carries significant risks and precludes getting the most value out of AI. Let’s explore how.
Upsides of Dabbling
I can think of at least three pros to dabbling with AI:
Quick learning. By now, most folks in service industries have heard about AI. Many are wondering how it might help their business. But reading about a technology isn’t the same as using it. Dabbling gets them rolling quickly: setting up an account is easy, and getting a useful reply to a prompt is highly satisfying. A nudge to go deeper — good!
Low friction. Basic LLM accounts are free, and pro versions run around $20/month — not a big commitment. A ChatGPT account and YouTube will get you rolling. No need for big culture change initiatives, reorgs, or IT investments. And unless your IT department has put the kibosh on it, you won’t be stepping on anyone’s toes.
Nice spread. AI is a general-purpose technology: it can help with research, production, marketing, finance, etc. With different people experimenting, as in the example above, you’ll get glimmers of possible applications. Letting a thousand (or, more likely, half a dozen) flowers bloom will give you a sense of what the garden might include.
Given these “pros,” it’s understandable why firms dabble: it’s a nonthreatening way to get started on the journey.
But It’s Not All Roses
Dabbling is better than nothing. But it has significant downsides:
No governance. Let’s start with the scariest. Undisciplined AI use is a privacy and security risk. Unless you have a properly configured pro account, your chats will likely be used to train models, meaning your private data might show up in an answer to someone else’s prompt. There are good reasons why your IT team wants visibility and control!
Learnings don’t scale. Yes, dabbling lets team members get into AI. But that learning won’t be evenly distributed. And their focus will be on narrow problems (e.g., crafting a social media post, tweaking a rendering) that can’t be leveraged more broadly. They’ll likely have no plans or means to feed data back into the org’s broader data repositories.
Wrong mental model. Fast learning doesn’t mean good learning. By dabbling, team members will come to understand AIs as freestanding tools whose abilities reside in vendors’ clouds. They’ll assume the utility lies in the chatbot’s cleverness rather than in how they leverage structured information. This is a bad mental model. AI should instead be understood as adding smarts to, and working with, the firm’s IT infrastructure.
Opportunity cost. When the focus stays on “paper cut” problems, org leaders can boast that the company is already “using AI.” As a result, they’ll fail to invest in projects with greater upside potential — investments that only happen when initiatives are treated as holistic responses to strategic directions. Dabbling gives the org a false sense of closure while leaving lots of value on the table.
What To Do Instead
OK, so dabbling isn’t a good strategy. But that doesn’t mean you shouldn’t use AI at all. How should you proceed instead?
1. Identify your business’s “soul”
Start where your organization shines. What makes it stand out from competitors? What’s the secret sauce? Where does it create the most value? Don’t threaten those things. Instead, look to automate the chores that keep you from delivering your particular kind of value in a timely and cost-effective manner.
2. Define your knowledge pipeline
And how do you do that? To begin with, you must grok the organization’s “knowledge pipeline” — how information is created, transformed, passed on, searched, used, etc. All businesses generate and consume data: leads, proposals, research, responses, invoices, documentation, etc. The more structured this data, the easier it’ll be to integrate into AI-powered workflows.
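To make this concrete, here’s a minimal sketch of what feeding structured data into an AI-powered workflow can look like. It’s an illustration, not a prescription: the project record, its fields, and the model name are all hypothetical, and it uses the OpenAI Python client simply because it’s widely known.

    # Minimal sketch: structured project data as context for an LLM.
    # The record fields and model name are illustrative assumptions.
    import json
    from openai import OpenAI

    project = {
        "name": "Riverside Clinic",  # hypothetical project record
        "phase": "construction administration",
        "open_rfis": [
            {"id": "RFI-042", "topic": "window flashing detail", "status": "open"},
        ],
    }

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {"role": "system", "content": "You draft project status summaries for an architecture studio."},
            {"role": "user", "content": "Summarize open items for the client:\n" + json.dumps(project, indent=2)},
        ],
    )
    print(response.choices[0].message.content)

The point isn’t the particular API; it’s that a well-structured record like this can flow into many such workflows, while a pile of unstructured email threads can’t.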
3. Understand AI’s real capabilities
Many people are pushing unrealistic ideas of what AI can do. The reality is that while LLMs are a powerful general-purpose technology, you can’t just point them at a problem and say “fix this” — at least not in a scalable and repeatable way. Understanding what the technology can do today is essential to designing systems that create real value consistently, rather than one-off automations.
By mapping how information flows through the organization, where the real value lies, and what AI can (and can’t) do well, you can determine how it might best alleviate information bottlenecks — without threatening your people.
A Real-World Example
Recently, Greg and I helped an architecture studio define a coherent direction for their AI use. Outlining the studio’s knowledge pipeline led to an interesting discovery: a significant portion of their time was spent responding to questions during the construction administration (CA) phase of projects.
Given current LLM capabilities, we determined that helping build CA dossiers would be a good place to start. It’s a time-consuming task that few people want to do, but which must be done to deliver value. But it’s also far enough removed from the studio’s core deliverable — excellent architectural design — that it doesn’t threaten their soul.
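For the technically curious, here’s a rough sketch of the kind of plumbing such a project involves, assuming CA questions and answers are captured as structured records. The record format, fields, and grouping logic are hypothetical illustrations, not the studio’s actual system.

    # Rough sketch: assembling a CA dossier from structured Q&A records.
    # The record format and topics are illustrative assumptions.
    from collections import defaultdict

    records = [
        {"question": "Confirm flashing detail at the clerestory.",
         "answer": "Use detail 5/A-501; lap the membrane 6 in. minimum.",
         "topic": "envelope"},
        {"question": "Is a substitute for the specified tile acceptable?",
         "answer": "Submit product data for review; match the slip rating.",
         "topic": "finishes"},
    ]

    def build_dossier(records):
        """Group answered CA questions by topic into a reusable dossier."""
        dossier = defaultdict(list)
        for record in records:
            dossier[record["topic"]].append(
                f"Q: {record['question']}\nA: {record['answer']}"
            )
        return dossier

    for topic, entries in build_dossier(records).items():
        print(f"[{topic}]")
        print("\n\n".join(entries))
        print()

Once answers accumulate in a structure like this, an LLM can search and summarize them, which is where the real time savings show up.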
This isn’t the “sexiest” use of AI, the sort one brags about. But it solves a real problem in a scalable and repeatable way. It enhances the overall value to clients and improves working conditions for team members. It’s a win-win all around — but you don’t get there by dabbling.
Moving Ahead — With Discipline
Dabbling isn’t dangerous just because it’s uncontrolled. It’s dangerous because it gives the firm a false sense of progress. It teaches people to think about AI in the wrong way — as a clever ad hoc tool rather than as part of a broader system — while distracting them from more fruitful explorations.
The opposite of dabbling isn’t stasis; it’s moving ahead in a disciplined way. Starting undirected is natural and easy. But eventually, you must move more deliberately and strategically. The goal of using AI shouldn’t be replacing what makes you special. Instead, it should be freeing your people so they can deliver excellence — and enjoy the process.
If this resonates, unfinishe can help. We work with small and medium-sized service firms to move beyond AI dabbling and deliver real value.

