image generated by Google Gemini
We walked into an architecture firm and began studying their construction administration processes. After mapping their workflows and analyzing their project data, we discovered something striking: they were spending substantial resources every year on administrative tasks that AI could plausibly automate. The numbers were undeniable. The vision was compelling. We proposed a wholesale set of changes that we felt AI-enabled tools could deliver.
They politely thanked us and asked if we could start with something smaller.
This moment taught us everything about why most AI implementations fail before they begin. The problem isn’t technical capability or even budget—it’s that we’re trying to sell the future to people who are drowning in today’s broken processes.
The Big Picture Trap
When you can see the forest, it’s tempting to point out how much timber could be harvested. But the people living among the trees are focused on not getting lost.
We came prepared to demonstrate how AI could eliminate substantial administrative burden in construction administration. What we discovered instead was that before anyone could believe in transformation, we needed to prove we understood their reality.
That reality looked like this:
Architects spending a quarter of their time on administrative tasks they never learned in design school, and that few of them enjoy doing now
Critical project information scattered across email threads, specification documents, and tacit knowledge
Workarounds built on top of workarounds, creating fragile systems that somehow kept projects moving
Inertia from their legacy project management platform, which had shaped inefficient processes everyone had learned to work around
The big efficiency vision felt abstract. The 10 minutes they spent every morning hunting through emails for RFIs (Requests for Information—formal questions contractors send to architects asking for clarification on design details) that needed their attention? That felt urgent.
Workflow Archaeology: Digging to the Bones
Forget the process diagrams. Real workflow discovery happens through rigorous field research, not idealized explanations of how work should flow.
We conducted interviews with architects across different experience levels, ran working groups to map actual workflows, and dissected real RFI emails from their recent projects. We surveyed the entire firm about time allocation—over 60% responded, revealing that architects consistently underestimated how much time they spent on administrative tasks.
Most importantly, we analyzed their actual project data: tens of thousands of RFIs and submittals across dozens of projects, with massive variation in distribution. Some projects generated thousands of items; others fewer than a hundred. The patterns revealed bottlenecks invisible in abstract process discussions.
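As a rough illustration of that analysis (the file and column names here are hypothetical; the real dataset is the client’s), a few lines of Python are enough to surface the spread:

import pandas as pd

# Hypothetical export of the firm's RFI log; real column names will differ.
rfis = pd.read_csv("rfi_log.csv")  # columns: project_id, rfi_id, opened_at

# Count RFIs per project to see how unevenly the workload is distributed.
counts = rfis.groupby("project_id")["rfi_id"].count().sort_values(ascending=False)

print(counts.describe())  # min, median, and max expose the hundred-to-thousands spread
print(counts.head(10))    # the handful of projects that dominate the volume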
One insight emerged clearly from all this research: email is the interface between contractors and architects. While the firm had invested in web-based project management tools, the real work happened in Outlook. Contractors sent RFIs via email. Architects responded via email. The web tools served as repositories, but email remained the conversation medium where professionals felt comfortable working.
Service Design for AI Implementation
Observe Before You Optimize
AI consulting often begins with identifying processes that could be automated. We started by understanding which processes were already broken.
The difference is crucial. Automating a bad process makes it efficiently bad. Understanding why the process breaks down reveals where intelligence—artificial or otherwise—can provide the most leverage.
In architecture firms, we found that the stated workflow (“we review RFIs systematically”) rarely matched the actual workflow (“Sarah always knows where to find the answer, so we ask Sarah”). The AI opportunity wasn’t replacing systematic review—it was scaling Sarah’s institutional knowledge.
Follow the Energy
Through our surveys and prototype testing, we tracked not just what people did, but where they lost energy. The moment an architect’s shoulders sagged wasn’t when they were analyzing complex technical problems—it was when they realized they’d have to dig through 1,800 pages of specifications to find one relevant clause.
Architects Love Solving Design Problems
They hate hunting through documents to find information they know exists somewhere. This energy differential pointed us toward our automation target: not the complex analysis, but the tedious information retrieval and administrative overhead that precedes analysis.
Start With the Smallest Viable Improvement—In the Right Place
Based on our research, we’re building two interconnected tools that work where architects actually spend their time: in email.
First, an AI agent that reads incoming RFI emails and automatically creates properly formatted project management tickets with the right metadata. Second, an AI assistant that provides instant analysis of RFI content, suggests relevant specification sections, and offers response recommendations—all inside the inbox where architects already work.
Together, these would eliminate much of the per-item administrative overhead and provide substantial research assistance. More importantly, they work within the firm’s existing email-centric workflow.
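To make the first tool concrete, here is a minimal sketch of the email-to-ticket flow. It assumes the OpenAI Python SDK purely for illustration, and create_ticket() is a stand-in for the firm’s project management API; the model, field names, and prompt are our assumptions, not the client’s actual stack.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FIELDS = ["project_number", "rfi_number", "subject", "question",
          "spec_sections", "date_required"]

def email_to_ticket(email_body: str) -> dict:
    # Ask the model to pull structured RFI metadata out of a raw contractor email.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract these fields from the RFI email as JSON: "
                        + ", ".join(FIELDS) + ". Use null for anything not stated."},
            {"role": "user", "content": email_body},
        ],
    )
    return json.loads(response.choices[0].message.content)

def create_ticket(fields: dict) -> None:
    # Placeholder for the project management platform's ticket-creation API.
    print("Creating ticket:", fields)

create_ticket(email_to_ticket("Per spec section 08 71 00, please confirm ..."))

In our phased model, a human would still review each ticket before it is filed; the agent only structures what the contractor already wrote.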
The Evolution Model We’re Testing
Phase 0: Grow Competence and Confidence with AI
Co-explore the problem space with the client to help them understand the emerging capabilities of AI tools. A basic tenet of our work is that we do it together with the client, so they learn along with us.
Phase 1: Automate the Annoying
We’re building the email-to-ticket automation first. If it works as expected, it’ll solve a daily frustration without requiring anyone to change how they think about their work—and prove we understand their workflow well enough to improve it.
Phase 2: Augment the Analysis
With trust established, we could then introduce AI that helps with specification comparison—still supporting human judgment, not replacing it. The AI would become a research assistant that instantly knows where to look in 1,800-page documents.
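As a toy sketch of the retrieval idea behind that research assistant (a production version would likely use semantic embeddings and a proper spec parser; TF-IDF over pre-split sections keeps the example self-contained):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_relevant_sections(spec_sections: list[str], rfi_question: str,
                           top_k: int = 3) -> list[str]:
    # Rank specification sections by lexical similarity to the RFI question.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(spec_sections + [rfi_question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()  # last row = question
    top = scores.argsort()[::-1][:top_k]
    return [spec_sections[i] for i in top]

The point is leverage, not magic: narrowing 1,800 pages to three candidate sections is exactly the tedious step that drains architects’ energy.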
Phase 3: Orchestrate the Workflow
Only after proving value in phases 1 and 2 would we propose end-to-end process transformation. By then, the team would have experienced AI as a helpful colleague rather than a threatening replacement.
Our hypothesis: skipping directly to Phase 3—even with superior technology—usually fails because it asks people to trust a future they can’t yet imagine. You have to crawl before you can run.
Key Ingredient to Success: Use AI to help structure unstructured data. Organizations are full of unstructured data that could be useful if only it were organized better.
What We’re Learning About Change Management
Process Attachment
People defend inefficient workflows they’ve mastered. This isn’t irrationality—it’s professional competence. They know exactly where to find information in their current system, even if that system looks chaotic to outsiders.
We’re learning that successful AI implementation requires respecting this expertise while gradually demonstrating that it can be enhanced, not replaced.
Trust Building Through Small Successes
Our hypothesis is that every small automation success will create permission for slightly larger changes. The compound effect of trust may prove more valuable than the compound effect of efficiency gains.
The Design Methods That Revealed Hidden Insights
Systematic Research Over Assumptions
Instead of relying on stakeholder interviews alone, we built a comprehensive research program. We surveyed the entire firm about time allocation, achieving a response rate above 60%, and analyzed their complete project dataset, revealing massive variation that abstract discussions had missed.
Email Dissection as Ethnographic Method
We collected and analyzed actual RFI emails from recent projects, identifying patterns in language, formatting, and information density. This revealed that contractors already provide most of the metadata needed for project management—it’s just buried in prose rather than structured fields.
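A hypothetical example of the pattern (invented, but typical of what we saw): a sentence like “Per spec section 08 71 00, please confirm the hardware set for doors 204A-204C; response needed by 6/6 to hold the framing schedule” already contains everything a structured record needs:

extracted = {
    "spec_sections": ["08 71 00"],
    "scope": "door hardware, doors 204A-204C",
    "date_required": "6/6",
    "schedule_impact": True,
}

No one has to invent metadata; the job is lifting it out of prose.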
Prototype Everything, Test with Real Data
We built multiple prototypes using actual project specifications and RFI content, not sanitized demo scenarios. Each prototype taught us something about the gap between what we thought would work and what actually helped professionals make decisions faster. And AI itself now lets us build functional prototypes remarkably fast ourselves, rather than relying on developers.
Benchmarking Reality Against Ideals
We researched industry standards and available tools, comparing the firm’s performance against both competitors and best-in-class examples. This grounded our efficiency targets in achievable improvements rather than theoretical maximums.
Lessons for Other Professional Services
Start with Observation, Not Solution
The AI tool you think they need rarely matches what actually helps them. Domain expertise beats technical sophistication when it comes to identifying the right problems to solve.
Find the Keystone Habit
Look for the one small change that unlocks bigger improvements. Email processing became our gateway: unglamorous but immediately valuable, and it works within existing workflows rather than forcing new ones.
Respect the Craft
Professional services firms aren’t just processing information—they’re applying judgment developed through years of experience. AI implementations that acknowledge and augment this expertise succeed; those that ignore it fail.
Meet People Where They Work
Don’t force adoption of new interfaces when existing tools (like email) already serve as the natural workflow hub. The most powerful AI might be invisible to users, working behind the scenes in familiar environments.
Don’t Boil the Ocean
Resist the temptation to use AI to solve every problem in one go. Instead, pick small, clearly defined problems that are annoying and expensive, and automate those.
The Compound Effect We’re Betting On
We’re betting that the boring AI applications—the ones that eliminate minutes of document hunting rather than promising to revolutionize entire industries—will deliver the biggest improvements to professional satisfaction.
The patient-capital approach to AI implementation means resisting the urge to lead with transformation and beginning instead with observation, empathy, and very small wins.
Sometimes the most innovative thing you can do is solve the mundane problems that everyone has learned to tolerate.
Always Unfinished
This is why we call ourselves Unfinishe_. The work is never done—not because we’re incomplete, but because organizations are living systems that constantly evolve. Workflows shift. Tools change. People develop new expertise and face new challenges. The “finished” AI solution becomes obsolete the moment the organization adapts around it.
Our approach embraces this perpetual evolution. We’re not building toward a perfect end state; we’re creating adaptive systems that improve alongside the people who use them. Phase 1 will reveal insights that reshape Phase 2. Phase 2 will surface needs we can’t anticipate today. Each implementation teaches us something that informs the next.
Workflow archaeology isn’t a one-time discovery process—it’s an ongoing practice of observation, experimentation, and refinement. The small bite we’re taking today creates space for the next small bite, which creates space for the next. Progress compounds not through grand transformation but through continuous, incremental evolution.
We’ll know soon whether this specific approach works with this architecture firm. But we’ve already learned that workflow archaeology—understanding the actual work before trying to improve it—reveals opportunities that process diagrams and stakeholder interviews miss entirely.
The work remains unfinished. And that’s exactly the point.
What workflow archaeology have you discovered in your organization? Where do people lose energy that they don’t even recognize as lost?
We’re building a playbook for human-centered AI implementation, one small bite at a time.
Learn more about our approach at unfinishe.com