<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[unfinishe_ thoughts]]></title><description><![CDATA[Practical perspectives on how small and mid-sized businesses can use AI to work smarter.]]></description><link>https://thoughts.unfinishe.com</link><image><url>https://substackcdn.com/image/fetch/$s_!UupH!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d103bc3-b322-462f-a3e1-4ba1229990c5_480x480.png</url><title>unfinishe_ thoughts</title><link>https://thoughts.unfinishe.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 07 Apr 2026 05:49:11 GMT</lastBuildDate><atom:link href="https://thoughts.unfinishe.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Boot Studio LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[unfinishethoughts@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[unfinishethoughts@substack.com]]></itunes:email><itunes:name><![CDATA[Jorge Arango]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jorge Arango]]></itunes:author><googleplay:owner><![CDATA[unfinishethoughts@substack.com]]></googleplay:owner><googleplay:email><![CDATA[unfinishethoughts@substack.com]]></googleplay:email><googleplay:author><![CDATA[Jorge Arango]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Finding Our Way Podcast, Ep. 
69]]></title><description><![CDATA[A conversation about what AI really demands of design and product leaders.]]></description><link>https://thoughts.unfinishe.com/p/finding-our-way-podcast-ep-69</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/finding-our-way-podcast-ep-69</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Mon, 30 Mar 2026 19:20:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UupH!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d103bc3-b322-462f-a3e1-4ba1229990c5_480x480.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sure, AI can help you move faster. But are you moving in the right direction? How do you know? These are the key questions <a href="https://jessejamesgarrett.com/">Jesse James Garrett</a>, <a href="https://www.petermerholz.com/">Peter Merholz</a>, and I explored in <a href="https://findingourway.design/2026/03/27/69-in-a-world-of-ai-what-is-the-work-really-about-ft-jorge-arango/">episode 69</a> of their <em><a href="https://findingourway.design/">Finding Our Way</a></em><a href="https://findingourway.design/"> podcast</a>.</p><p>Leadership entails acting intelligently &#8212; i.e., moving in the right direction for the right reasons. This requires seeing clearly. Tools can help&#8230; or they can make it harder while <em>seeming</em> to help.</p><p>The question is, how do you do it? I&#8217;m a big fan of understanding the technology firsthand. But we must also understand how the technology changes the nature of the work.</p><p>AI calls for moving up the abstraction stack. It&#8217;s similar to what happened with computer programming, which went from flipping bits to assembly language to higher-level languages and now coding agents. 
The question before design and product leaders isn&#8217;t whether this shift will happen to design: it&#8217;s whether they&#8217;re ready to lead at the right level.</p><p>A bifurcation is coming. The organizations that figure out the role the technology plays in this shift will thrive. Those who do it poorly will crank out work faster &#8212; but it&#8217;ll be increasingly misaligned with the business&#8217;s needs.</p><p>As I said near the end of the episode: if you come out of any of these conversations feeling like you&#8217;ve got the answer, you&#8217;re probably wrong. The technology is changing too fast. What you can get is a clearer read on the context. Hopefully, this conversation helps.</p><p><em><a href="https://findingourway.design/2026/03/27/69-in-a-world-of-ai-what-is-the-work-really-about-ft-jorge-arango/">Finding Our Way, Ep. 69: In a World of AI, What is the Work Really About?</a></em></p><div><hr></div><p><em>This post first appeared <a href="https://jarango.com/2026/03/30/finding-our-way-podcast-ep-69/">on jarango.com</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[Open-Ended Sessions: How Are You Feeling?]]></title><description><![CDATA[A conversation about the anxiety many product and design leaders are feeling due to AI-driven changes.]]></description><link>https://thoughts.unfinishe.com/p/open-ended-sessions-how-are-you-feeling</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/open-ended-sessions-how-are-you-feeling</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Fri, 27 Feb 2026 16:23:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/4FYXZEkE5ag" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-4FYXZEkE5ag" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;4FYXZEkE5ag&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe 
src="https://www.youtube-nocookie.com/embed/4FYXZEkE5ag?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>In the second of our <a href="https://www.youtube.com/playlist?list=PLZeu-R3TlcIxKLsNGXtzAF-k4SQj-As4h">&#8220;Open-Ended&#8221; livestreams</a>, we discussed the anxiety many design and product leaders are feeling from AI-driven changes. The intent wasn&#8217;t to offer suggestions, but to think out loud about what we&#8217;re observing. </p><p>That said, we surfaced a couple of important insights:</p><ul><li><p>Organizational structures must change. How? Greg suggested empowering smaller (e.g., two-pizza) teams with enough agency to move and learn quickly.</p></li><li><p>In response to the &#8220;AI will replace SaaS&#8221; narrative, Jorge countered that for many products, information architecture is the moat.</p></li></ul><p>We&#8217;d love to know what you think; please leave comments in the <a href="https://www.youtube.com/live/4FYXZEkE5ag">YouTube video</a>.</p><h2><strong>Links</strong></h2><p>We referenced several articles and at least one book during the conversation:</p><ul><li><p><strong><a href="https://aboutexperiences.substack.com/p/the-cognitive-cost-of-ai">The cognitive cost of AI</a></strong> by Giu Vicente</p></li><li><p><strong><a href="https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data?ab=HP-hero-latest-1&amp;__readwiseLocation=&amp;giftToken=12050438461771433707670">Why AI Adoption Stalls</a></strong> by Keith Ferrazzi, Wendy Smith, and Shonna Waters</p></li><li><p><strong><a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html">The A.I. 
Disruption We&#8217;ve Been Waiting for Has Arrived</a></strong> by Paul Ford</p></li><li><p><strong><a href="https://craighepburn.substack.com/p/welcome-to-the-intelligence-era?r=1uelnl&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true&amp;__readwiseLocation=">Welcome to the Intelligence Era</a></strong> by Craig Hepburn</p></li><li><p><strong><a href="https://rosenfeldmedia.com/books/managing-priorities/">Managing Priorities</a></strong> by Harry Max</p></li></ul><h2><strong>Transcript</strong></h2><p><em>(AI generated.)</em></p><p><strong>Jorge</strong>: Well, hello Greg. I think we are&#8212;let me refresh&#8212;yep, so we are live, sir. It&#8217;s good to see you.</p><p><strong>Greg</strong>: Nice to see you, Jorge. Happy Thursday!</p><p><strong>Jorge</strong>: Happy Thursday to you as well. I&#8217;m having a weird echo.</p><p><strong>Greg</strong>: Well, anyway, we&#8217;re here today to talk a little bit about what&#8217;s going on with sort of this zeitgeist moment. It feels like there are a bunch of messages kind of moving through our communities. Jorge and I have been talking a lot about this stuff, and we thought we would get together and run one of our Unfinishe sessions&#8212;Open-Ended is what we call them&#8212;but I thought maybe we could start and talk a little bit about Unfinishe, and then we&#8217;ll get into today&#8217;s topic, which is really about the psychological tax of AI initiatives and how all of us are feeling. But before we do that, Jorge, what is Unfinishe?</p><p><strong>Jorge</strong>: Unfinishe is an emergent practice that you and I have taken on to develop to help teams navigate this new era. I think that that&#8217;s kind of like the highest level description that I can offer. What teams might mean might be up for grabs; it&#8217;s emergent, right? But we are trying to be responsive to what we are hearing in our various communities and contexts. 
It&#8217;s very clear that everyone is cognizant at this point of the fact that we are in a different space. This technology is massively disruptive, and it requires new approaches and new thinking, so that&#8217;s clear. The other thing that&#8217;s become increasingly clear is that many of us&#8212;and I&#8217;ll put you and me in this&#8212;are trying to come to grips with how to navigate this time skillfully. You and I bring particular perspectives and life experiences to bear on this problem that we believe are helpful to folks. So that&#8217;s my kind of 10,000-foot view on what Unfinishe is. How would you answer that question?</p><p><strong>Greg</strong>: Yeah, I mean, I echo what you&#8217;re talking about. I think part of it is the opportunity to disrupt ourselves and explore the meaning of these new tools and do it in a way where we can sort of be all in, but at the same time be intentional and try to understand what it might mean, and then share what we learn with folks. I think we named this endeavor Unfinishe with the &#8216;D&#8217; missing on purpose because I think one of the things that we&#8217;re all experiencing is that the moment you feel like you&#8217;re on solid ground, the ground shifts, and we need to find an understanding of what to do next. I think the journey we&#8217;re on is to help organizations and teams navigate that. We&#8217;re taking the experience we&#8217;ve had in our careers, but we&#8217;re also super willing to experiment and adapt. We&#8217;re trying to be curious and mindful in the practice. So that&#8217;s how I might answer that. Maybe that&#8217;s a good segue for today&#8217;s conversation, which is, you know, there&#8217;s a lot of anxiety around what&#8217;s going on with these tools. We&#8217;re starting to experience it in our own work, but we&#8217;re also seeing it in the teams that we help. 
There seems to be a conversation bubbling up in the zeitgeist around AI right now about what it might mean. I think there are also some seminal moments that have happened recently that have demonstrated that we&#8217;re actually in a new place. You know, this isn&#8217;t the announcement of ChatGPT two and a half years ago. This is the arrival of coding tools, the rapid improvement of the models, and the fact that we&#8217;re now starting to see teams use these things. There have been some really salient conversations around that. So that&#8217;s what we&#8217;re here for today: we want to have a conversation around it. Also, folks online, you&#8217;re welcome to come and ask questions. We&#8217;re going to try to be vulnerable and transparent, if possible, about our own insecurities and feelings. This is an experiment, and we&#8217;re glad that folks are here with us today.</p><p><strong>Jorge</strong>: And for a bit of context, for folks for whom this might be the first live stream of ours that they join: this is only the second one that we&#8217;ve done. Right. And in the Unfinishe spirit, this is a very open-ended conversation. It is very loosely structured. I would say there are not going to be any decks. There are no pitches. That&#8217;s not what we&#8217;re doing here. What we&#8217;re doing is we&#8217;re trying to think through the moment that we&#8217;re in, and we&#8217;re trying to think out loud. Because the time does require kind of fast responses, I think that we can&#8217;t be too precious about what we&#8217;re doing right now. So with that in mind, you said that we want to be vulnerable and that we&#8217;re both feeling a bit of anxiety. I&#8217;m going to kind of pinch and zoom on that. You titled the live stream &#8220;How Are You Feeling?&#8221; How are you feeling, Greg?</p><p><strong>Greg</strong>: Yeah, I mean, there have been a couple of articles that have encapsulated my experience lately. 
I would say I&#8217;m both super intrigued and excited and super freaked out at the same time. And what do I mean by that? I mean, I&#8217;m enamored by the capabilities that I have at my fingertips and blown away by the things I&#8217;m able to accomplish with the tools that I&#8217;m using. I&#8217;m also recognizing that I don&#8217;t have good boundaries with how I operate with Claude, which is the tool that I use, Anthropic&#8217;s AI. At the end of the day, my brain is like, I&#8217;ve gone through a lot of work, and I&#8217;m wondering if it&#8217;s sustainable. I&#8217;m mixed about all this stuff; it&#8217;s exciting, and I&#8217;m enabled to do some really incredible things. But at the same time, I&#8217;m trying to track if I&#8217;m being changed by this experience.</p><p><strong>Jorge</strong>: It might be worth calling this out because some folks tuning in, this might be the first time they hear from you. I think that we have slightly different backgrounds. I would say that your background, your trajectory, and your career has been mostly around design leadership, very senior roles, managing teams and organizations, whereas my background is more as an individual contributor for hire. I&#8217;ve been a consultant for the bulk of my career, and I&#8217;ve been brought in to do very specific things. I&#8217;m just calling that out because I hear you talk about this being torn between excitement and apprehension, and I&#8217;m feeling like that too. But I think I&#8217;m feeling like that for maybe different reasons than you are. How does this tension show up in your work as a design leader?</p><p><strong>Greg</strong>: Yeah, I mean, I think there are a couple of things. Paul Ford wrote something recently about feeling obsolete at some level and at the same time superpowered. Right? I have some of those feelings. 
I&#8217;m able to help a couple of companies right now from a design leadership perspective, and I can help them really fast: work that would have taken weeks to accomplish, I can do in days. That&#8217;s really great, but at the same time, it feels like the flattening of my expertise. It&#8217;s an interesting moment to see how we show up. I think there&#8217;s some anxiety around that. I might flip the bit for you: you&#8217;ve been spending decades thinking about how humans navigate information. Does AI feel like an extension of that work or a threat to that work? How does that fit into how you operate in this moment?</p><p><strong>Jorge</strong>: Well, the first thing that I&#8217;ll say here is that anything I say today, I say with more interest than conviction, meaning my mind is still exploring this, and I&#8217;m trying to develop my positions. What is very clear to me is that large language models in particular change our relationship to information considerably. I realized this early on: I&#8217;ve been working with AI&#8212;in general, what we call AI&#8212;for a long time with client projects. But when ChatGPT was released, I kind of went all in and said, &#8220;Okay, let&#8217;s see how this can help me do the work of an information architect.&#8221; It became very clear to me very quickly that the work I was doing needed to change and was going to change. You talked about acceleration as one of the things you&#8217;re experiencing in your design leadership role. I also felt that this is going to greatly accelerate certain processes. It&#8217;s also going to change how we interact with information. The object of the things that we design is likely going to change, but that might take a bit longer. I don&#8217;t know that I&#8217;ve felt threatened; I&#8217;ve been more excited than threatened, I think, by all this stuff. 
There&#8217;s a flip side to it, which is the fact that there&#8217;s a lot more information being generated. Not all of it is useful, perhaps. These tools have the potential to generate a lot of misinformation. But this is the kind of upside bit. I might sound like I&#8217;m taking a very kind of positive perspective here. The more I worked with these tools, the more evident it became to me that their effectiveness is highly reliant on the information that you are giving the tool. Initially, there was this idea of prompt engineering, and then people realized it&#8217;s not just a prompt; there&#8217;s more stuff that you&#8217;re feeding the AI. The phrase became &#8220;context engineering.&#8221; To me, the upshot of all that is that language models are as useful as the information that they&#8217;re given to work with, and I suspect that people who do information architecture work have a big role to play in creating and structuring the information that gets fed to the LLMs. That&#8217;s going to have a very important effect on the degree to which the systems produce good results. So I&#8217;m excited. It is a time of great change, and great changes always produce anxiety, so I&#8217;m feeling anxious too, but I think I&#8217;m also feeling like, my gosh, there&#8217;s so much potential here&#8212;unexplored potential, right?</p><p><strong>Greg</strong>: Yeah, and I think you and I did a consulting arrangement last fall where we helped an organization sort of organize their business information. I think you&#8217;re right that there&#8217;s this notion of understanding how work gets done and what content exists in an organization. Most organizations can articulate that very well; they just kind of tacitly know this is how they operate. These systems work better if you can be clear and crisp about the terminology. I&#8217;ll use a fancy word: the ontology or the model of information in it. 
I think for folks like you&#8212;and I love the fact that you called yourself an architect of information now versus an information architect&#8230;</p><p><strong>Jorge</strong>: An architect of intelligence.</p><p><strong>Greg</strong>: That&#8217;s right, architect of intelligence. I think there&#8217;s some truth to that because I think one of the things that we need to talk about&#8212;one thread that needs to be in this conversation&#8212;is to be intentional about how you use these tools. One way to alleviate anxiety is to understand the structure of the entity that you work for, the organization that you work for, or the thing that you&#8217;re trying to accomplish&#8212;so that you can make conscious decisions when you interact with these tools, and then you know your intent. That&#8217;s where these tools are actually really valuable. If your intent is clear, the quality of the answers that they generate or collaborate with you on improves, and that&#8217;s where you can start to have a conversation that leads you to new insights or new outcomes. That&#8217;s the part that I think is super fascinating. Every day I&#8217;m surprised by something. There&#8217;s something I&#8217;ve done, and I&#8217;m just sort of like, &#8220;Oh my gosh, how did I do that? Wow, how did it do that?&#8221; That&#8217;s part of it. Is there something that you&#8217;ve noticed about yourself, though? Have you changed at all in terms of how you&#8217;re operating with these things?</p><p><strong>Jorge</strong>: I&#8217;ve always been very hands-on with the tools that I use, and one of my directions early on with this stuff was that I did not want to just learn about it in the abstract; I wanted to have hands-on experience. I think that lately I&#8217;ve been more hands-on with code than I had been in recent years of my career, just because I&#8217;ve been really trying to lift the hood on this stuff to get a sense of how it works. 
You referenced the Paul Ford op-ed piece in The New York Times earlier. We have been having conversations with other folks and also reading stuff that people have been publishing. One of the things that I read in one of the articles that you and I were discussing on Slack over the last week or so is something that resonated with me, which is the idea that all of a sudden you have this tool that lets you do so much stuff that you tend to fill your day with stuff. It&#8217;s the kid in a candy store thing where, left unchecked, you end up with a really bad bellyache. I don&#8217;t remember which one of the articles it was. I think you shared this one where this person was saying, &#8220;You know, it&#8217;s taken over. Now I&#8217;m thinking about it during my lunch break and thinking about how I can prompt this thing.&#8221; Or, you know, &#8220;Before I go to sleep, I want to leave it doing something overnight.&#8221; There&#8217;s so much potential. All of a sudden, there&#8217;s an unlocking of so much potential that we want to&#8212;well, and then there&#8217;s the incentive to move very fast, to take advantage of that potential. We run the risk of not leaving enough space to be mindful about what we&#8217;re doing, to prioritize what we&#8217;re doing. I&#8217;m saying this because I am feeling that. I&#8217;m feeling like there&#8217;s so much that I can do. Let&#8217;s do it all! Now that we have these things that can do it for me, I&#8217;m feeling a little burned out by that. I&#8217;m suspecting that other people are as well based on what I&#8217;m reading.</p><p><strong>Greg</strong>: Yeah, I think that, first of all, we&#8217;re hitting a cognitive barrier. I mean, humans can only process so much information. Individually, I think there&#8217;s a challenge. I&#8217;m feeling exactly the same thing. I generate, you know, I&#8217;ll take some information, I&#8217;ll process it, and I&#8217;ll work with Claude to tune it up in a way that makes sense to me. 
I&#8217;ll get a very professional document. Part of my process is I usually print them. I know it&#8217;s very old school, but I find that I don&#8217;t edit very well if I&#8217;m just looking at a screen. If I look at a piece of paper, I can distance myself for a second and read it, take some notes, and then go back, and that&#8217;s kind of the way that I operate. But I&#8217;m starting to build these very useful and deep content pieces for the customers that I&#8217;m working with that are highly valuable. But I&#8217;m filling my day with like doing that work. Earlier in our conversation, I was talking about how sometimes my brain is just like, &#8220;Oh, I&#8217;ve done&#8230; I can&#8217;t process it anymore.&#8221; One thing I&#8217;m noticing&#8212;I don&#8217;t know if others are noticing this online, but if you are, let us know. One of the things in organizations is the socialization of ideas. We&#8217;re used to operating, especially in product development teams, at a certain clock speed. There&#8217;s a group of people who start working on an idea, and they start building prototypes and making, and they&#8217;re learning in that process. Then they need to bring other people along as that idea starts to gain momentum to empower those people to contribute to or execute aspects of that idea or that project to move it forward. Part of that is human nature; you want to co-create and be a participant in it. Part of it is you need to understand the decisions that have been made so that you can operate and feel like part of something. 
I think the velocity that some of these tools allow you to operate at is not just about the individual&#8217;s cognitive ability to manage; there&#8217;s also anxiety tied to the organization&#8217;s ability to grok or understand and then ingest so that they can focus on, &#8220;Okay, this is how I can contribute or I can join the conversation.&#8221; I worry about that because I feel like we haven&#8217;t learned good boundary skills with these tools. It&#8217;s a little bit like a version of doom scrolling where you generate an endless amount of stuff. How much of it is still relevant the next day? Maybe not as much as you think, right? I think that I have some anxiety about being in that. One of the things I&#8217;m anxious about is that we&#8217;re going to have to learn new behaviors to manage that. What does that feel like, and how does that change us? A lot of people talk about discernment; that&#8217;s an important skill. Anyway, it&#8217;s a long-winded way of saying I think we&#8217;re only capable of grokking so much in a day.</p><p><strong>Jorge</strong>: Yes, and I think we&#8217;re talking about it kind of at the individual level, right? We can do all this stuff, so we&#8217;re doing it all, right? There is an organizational variation of this, which is we have this design or product team which maybe is not growing. I see some folks posting job openings on LinkedIn, but if anything, I think the tendency has been for teams to shrink. All of a sudden there&#8217;s a surge of requests for new features and capabilities. There&#8217;s this drive to AI all the things. You have AI, so it&#8217;s easy to do, right? It&#8217;s like, no, it&#8217;s not easy to do. Now we&#8217;re overloaded with stuff. 
I&#8217;m thinking about what you were talking about: our mutual friend and my podcast co-host, Harry Max, wrote a book on prioritization, right?</p><p><strong>Greg</strong>: Yeah.</p><p><strong>Jorge</strong>: What you&#8217;re pointing to is that we need to, on the one hand, move fast because this does indeed call for a fast coming to grips with the capabilities and constraints of the technology. But we need to do it in a way where we&#8217;re focusing our energy, our limited resources, on the things that matter most. It feels to me like right now, for a lot of organizations, at least from what I&#8217;m hearing, there&#8217;s not very good prioritization happening. It&#8217;s more like let&#8217;s throw everything against the wall and see what works coming out the other end. I&#8217;m kind of making a note here; that might be one practice that we could encourage folks to do: to be more conscious as a team of the things that they are taking on and to take it on with the dual purpose of building useful things for people&#8212;obviously, we want to create value&#8212;but we have to keep in mind that part of what we&#8217;re doing here is also becoming competent with the new tools.</p><p><strong>Greg</strong>: We have to create new skills, yeah. I think you&#8217;re&#8212;oops, I just unplugged myself. I can still hear you, though. Okay, I&#8217;m back. I think you&#8217;re right. One of the things that I think many teams are struggling with is that these tools also allow us to do each other&#8217;s jobs, right? In a product organization, that notion creates a lot of anxiety. You know, I&#8217;m a designer, but the engineering team can now write code for the UI. I&#8217;m an engineer, but the design team can now write code. I&#8217;m a product leader, and I can do both of those things. I&#8217;m a designer who can write a PRD, right? Those are very specific to the product development process. 
The notion of how we work is also in radical change because the boundaries between the disciplines are fuzzier. We need to be open and in a conversation around exploring it together versus staying in our disciplines; at least that&#8217;s my belief. I led a workshop with a client recently on who does what, how, why, and when. It wasn&#8217;t really to say that design only owns design and product owns product and engineering owns engineering; it was, &#8220;Hey, these tools allow us to be in each other&#8217;s camps.&#8221; There may be appropriate moments for us to be in each other&#8217;s camps. We may not have the capacity to do something with the staffing we have, but as a team, we can use these tools to help us fulfill that capacity. We need to be in dialogue about that. One of the things I&#8217;ve learned is that discipline and having expertise still really matter, right? Discernment is a powerful thing. Just because someone can write code doesn&#8217;t mean the experience is a good one. Someone who has the ability to look at that and say, &#8220;Here&#8217;s how I might modify that because I have expertise in this area&#8221; is valuable. Similarly, on the product side, product market fit is still required&#8212;just because you can ask these tools to help you find product market fit doesn&#8217;t eliminate the need to have people on the team who have experience in bringing products to market, working with customers, and understanding how you create motion and market demand. All the things of modern product development or building things are still in play. But we have a lot of anxiety about whether our roles are still important. Going back to your central point, I think smaller, more empowered teams are going to be the future, and those teams can punch above their weight, to use a boxing metaphor. 
There are two reasons for that: one, because the tools allow you to do that, and the second goes back to my notion of cognitive dissonance and being able to communicate as a team. You need to have the intimacy of a small group to be able to share your thinking at the speed that this thinking is happening. It starts to break down if you&#8217;re in a larger organization that has organized people doing pieces of the work. I think the future is more empowered teams with more agency and clarity about what they&#8217;re about, and then just let them do their thing.</p><p><strong>Jorge</strong>: And smaller&#8212;I heard you say as well, right?</p><p><strong>Greg</strong>: And smaller, that&#8217;s right.</p><p><strong>Jorge</strong>: Yeah. Do you have like, we all know about the two pizza team&#8212;the Amazon pizza thing. Do you have a size in mind?</p><p><strong>Greg</strong>: Yeah, it&#8217;s not bigger than that. I think the notion of the two pizza team is that you all know each other, and you have a human relationship with each other, right? You have the ability to communicate and anticipate and complete each other&#8217;s thoughts and know who&#8217;s good at certain things. I think it breaks down once you go above that.</p><p><strong>Jorge</strong>: I wanted to circle back to something you said because it made me shudder a little bit. You said something like designers are writing PRDs, and all of a sudden, we don&#8217;t need as much expertise because we can all do these roles.</p><p><strong>Greg</strong>: Yeah.</p><p><strong>Jorge</strong>: One bit of caution that I would drop in here is that a common mistake that many people make is to confuse the outcome of a piece of work&#8212;the artifact that comes out the other end&#8212;with the value of the work. I&#8217;m thinking of an exercise that I was part of many, many years ago, which is something a lot of designers have done. 
We were part of this workshop where we locked ourselves in a conference room for two days and made this enormous wall-sized journey map, right?</p><p><strong>Greg</strong>: Yeah.</p><p><strong>Jorge</strong>: The artifact that came out of that exercise was valuable per se because it informed a lot of important design decisions. But the artifact was only part of the value that the company got out of that. The other part of the value was the alignment that happened by getting a group of&#8212;I think it was like 24 people&#8212;to work together for two days building the artifact. If you could just feed Claude a bunch of research and then say, &#8220;Draw me the journey map for this thing,&#8221; you might get a really useful diagram in the end. It might even be better than the one that the people put together. But you&#8217;d be missing out on the opportunity for people to use the artifact as a MacGuffin to have conversations that need to happen. It&#8217;s a little bit like the stone soup thing, right? The story about the stone soup that I&#8217;m sure people have heard. We&#8217;ve gone from having what are basically stones to get important conversations to happen to now having the equivalent of the Star Trek replicator where you say, &#8220;Just give me chicken soup,&#8221; and you get the plate of chicken soup, but then you don&#8217;t get the collaboration that happens in making the soup, right? That collaboration is really important.</p><p><strong>Greg</strong>: Just to build on that, I think one of the risks that we have is that we spend our day collaborating with AI and not with each other. It&#8217;s very easy to do. It came up in one of the workshops I recently led that folks in a product team were spending less time talking with each other and more giving each other things to read. 
Not all the things that they were giving each other were as tuned as they could have been, but they felt very clear to the person who had been participating with their AI assistant. I think we are at risk of finding our way into that relationship with the AI versus finding our way into a relationship with the cross-functional peers that we work with. Again, it goes back to the healthy boundaries. I think we need to figure out how we manage that, and I have anxiety around that because I spend a lot of time with these things. I think there was another piece of the story that we wanted to talk about today. There was an article by Hepburn that I really loved, about going fast and how this was a moment for generalists to be really successful. I excerpted part of it last week, and a lot of people responded to it. I felt seen in that article, and at the same time, I recognized that maybe it was a little bit of wishful thinking on my part. I think we&#8217;re all guilty of finding the things that reflect well on our own personal point of view and reinforce our vision of ourselves. There was a piece in that about moving fast&#8212;not about velocity, but more about where we are in this moment: some people are using the tools effectively, and they&#8217;re using them with their teams and gaining a certain sense of momentum. They&#8217;re being intentional about it, learning how to do it, and course correcting. Others are not, and I think there&#8217;s some anxiety around that too because some organizations don&#8217;t enable teams to do that. Are you feeling like you&#8217;re left behind? For my own part, I have anxiety around keeping up. I know there are people who are way more into this than I am, and so my keeping up is a worry.</p><p><strong>Jorge</strong>: The article you&#8217;re referring to is a Substack post called &#8220;Welcome to the Intelligence Era&#8221; by Hepburn.
What I&#8217;m going to do is, when we release a recording of this, I&#8217;m going to add links to these various posts in the description for the video. The metaphor Hepburn uses for this speed thing is learning to ride a bicycle. He makes a good point that one of the risks you run when learning to ride a bicycle is that you try to take it too slow. If you&#8217;ve ridden a bicycle before, you know that it&#8217;s not until you reach a certain speed that you can maintain your balance. He&#8217;s advocating for getting up to a certain speed to get your bearings. He doesn&#8217;t say this in the article, but there&#8217;s a flip side to this: if you&#8217;re learning to ride a bicycle and you strap a jet engine to the bicycle, you&#8217;re going to be really stressed out, right? You&#8217;re probably going to get in an accident. I think there&#8217;s a Goldilocks thing here; I&#8217;m trying to reflect back what&#8217;s emerging from this conversation. You&#8217;ve already said we need smaller teams that have greater agency. It also sounds like they need to focus; they need to prioritize the stuff that they&#8217;re working on. There&#8217;s the notion of speed&#8212;meaning they need to move fast. Maybe the phrase is they need to move fast enough, but it&#8217;s possible to move too fast. The organization, the team, the individuals might not be able to handle being asked to do so much so fast with such new stuff because, to your point earlier, there&#8217;s cognitive load involved.</p><p><strong>Greg</strong>: Yeah, I think the velocity conversation has many vectors to it, too, some of which are super anxiety-producing. You hear a lot of leadership in the Valley right now talking about speed and how fast we have to deliver product outcomes. Now that Claude can write most of the code, we can go 10 times faster. I don&#8217;t think that&#8217;s necessarily what we&#8217;re talking about. 
By the way, I think there&#8217;s a huge risk in going faster; it doesn&#8217;t necessarily mean that you get to a good outcome. At the same time, I think what Hepburn is talking about is you need to dive into understanding how these tools work because they are changing the way that we work and, for each of us, they&#8217;re changing who we are and the roles that we have and the impact we can make. We can push back on it if it&#8217;s going too fast, but you really don&#8217;t learn how to use them unless you&#8217;re using them. The advice I have for folks is to get your hands into it and be using it. Then you can be intentional about how you want to use it. One of the opportunities I think in this space now is that especially in product development, a lot of time was spent on execution and not enough time on defining the outcome or the product fit. Now, I think we can use these tools to do a lot more discovery earlier and have more clarity about what problem we&#8217;re trying to solve, why that problem is valuable to the end customer or end user, and get validation that we&#8217;re solving the right problem. Execution&#8212;building that piece&#8212;should be something that can go much faster. This inverts how we look at the work that we do, and that part of it is exciting to me. But it&#8217;s different.</p><p><strong>Jorge</strong>: I want to maybe pinch and zoom into the word invert. But before we do that, I want to circle back to the chat. We have a couple of comments in the chat, and I think the first one here is relevant to what you&#8217;re just talking about now. So RPUXD671 says, &#8220;I agree. We can&#8217;t be too precious. Yes, and we need to show up with calm and help the teams we&#8217;re advising through trade-offs they face.&#8221; Here&#8217;s the question: how do you hang on to and transmit that calm through teams?</p><p><strong>Greg</strong>: Yeah, that&#8217;s right. I think a couple of things are important. One is being curious, right? 
Having a culture of curiosity, being conscious that you don&#8217;t know the answer, and being public about it. One of the challenges is when we think we know the answer, and then it pivots and changes&#8212;it just undermines team health. It&#8217;s a notion that we&#8217;re on a collective journey together, and we&#8217;re going to explore and find out where we&#8217;re headed. Those are some things I would consider. I think there needs to be&#8212;you said this earlier around how to prioritize the efforts you have because you can go everywhere all at once and not get anywhere. Practice some exercises around what are the experiments you&#8217;re going to do as an organization or as a team and create some space for evaluating the success of those experiments. This is something you and I did with one of our customers last fall, where we sat and kind of helped them understand how they worked, looked at the activities and workflows that were important to their success, and helped them stack rank the things that we felt AI could help them with. Instead of doing all of them, we said, &#8220;Let&#8217;s pick one and do that.&#8221; How did that work? Did we learn something? Okay, let&#8217;s go do the next one. I think a structured approach could help teams have a little bit more comfort.</p><p><strong>Jorge</strong>: I think this question was framed around how do you, as a leader, communicate with your team? That&#8217;s the way I read it anyway. But I think what you&#8217;re saying also applies to how do you manage up, right? Because as a chief design officer, as a VP of design or product, you are reporting into the organization&#8217;s leadership. They have expectations&#8212;whether fair or not&#8212;that this stuff is going to change things quickly, right? 
It&#8217;s worth acknowledging that leaders need to manage their teams and the mood of their teams, but they also need to manage upwards, right?</p><p><strong>Greg</strong>: Yeah, and there have been all kinds of crazy statements made in the last two years around the possibilities, role definition, and how product is going to be made based on the lens of where a leader might come from. I think we&#8217;re learning right now that those lenses are incomplete. You bring up a really important point; this curiosity and openness and adaptability need to be shared when you manage down to a team or when you&#8217;re working with people collectively. It also needs to be applied to the opposite conversation: what are we learning right now? What advantages is this giving us, and what challenges are we creating? There are challenges being created. Many teams are spending a lot of effort on AI, but their productivity isn&#8217;t improving. Many teams are spending a lot of tokens, and their costs are going up in the organization. Some organizations are letting people go because they think AI will fill the gap, but they&#8217;re letting them go before they&#8217;ve figured out how to do that work. Those are the things that I think are building anxiety right now. The sad part is that we&#8217;re having the wrong conversation. People are talking about, &#8220;Here&#8217;s our current business model; here&#8217;s how we work, and now we can just do it faster and more simply.&#8221; The conversation I want to have in organizations is, &#8220;Here&#8217;s the community of people we serve. Here&#8217;s how we can deliver better outcomes for them. Here&#8217;s how we can grow our business, and here are the new things we can do with the people that we have that are valuable to that constituency.&#8221; I just don&#8217;t think we talk about that enough.</p><p><strong>Jorge</strong>: We have another comment here. It&#8217;s not a question, but it&#8217;s a comment from Albie underscore G.
They say, &#8220;I agree with Greg. Without intention, it&#8217;s easy to lose control of the output. Planning and guardrails are essential.&#8221; I will chime in here and say, even though you&#8217;re name-checked in this comment, I want to point out that when you talked about smaller teams, you did not use the word control; you used the word agency. That is an important distinction. As I hope is becoming evident from this conversation, one of the footballs that is being tossed around the field right now is precisely control&#8212;control over the outputs, control over the process&#8212;which is part of why there&#8217;s this anxiety happening. I think it&#8217;s going to be important to live with the&#8212;I&#8217;m going to use the word&#8212;discomfort that comes from not feeling like you have full control over the output. I don&#8217;t think what you want is control; I think it&#8217;s agency. That&#8217;s my take anyway.</p><p><strong>Greg</strong>: My personal belief is that teams do better when folks have agency. I&#8217;ve always tried to build organizations where there&#8217;s clarity, and the gift you&#8217;re giving is, &#8220;Here&#8217;s where we&#8217;re trying to go. You figure out how to get there.&#8221; I do think where I&#8217;ve seen AI being used well is in groups that are willing to experiment and communicate and not try to own or control the process of how it works. Instead, they have a conversation with each other about how it&#8217;s impacting the way that they&#8217;re operating and how the outcomes for which they are responsible are improving or not improving by using the tools. That&#8217;s the part that I think is fascinating. My hope is that we&#8217;re responsible about it, and we have these conversations, but it&#8217;s not easy, and sometimes we don&#8217;t have the frameworks to have those conversations.
I think you and I have talked a lot about this, and it&#8217;s part of what we&#8217;re trying to do here with Unfinishe: help people have healthy conversations around how they can use these tools in their environments and provide some structure that allows them to make progress.</p><p><strong>Jorge</strong>: That makes a lot of sense. We have only about nine minutes left here. If folks who are viewing have any questions or comments, please do post them in the chat. Greg and I want to have conversations about this. We have been monitoring what people are writing, but we are also having conversations one-on-one with folks in organizations. If you want to talk with us, we would love to set up a quick meeting to compare notes. I&#8217;m flashing a URL on the screen where you can set up time; we&#8217;d love to hear from you. If you are watching this now and have any questions, please do post them in the chat. Let&#8217;s start rounding the bend here. We are running out of time. Our intent here, as we said at the top of the hour, was not to offer a very structured conversation; this is really kind of thinking out loud. It does seem to me that there are a few points that are worth noting. The first is acknowledging that we are in a time of anxiety, and I keep comparing this time to the early days of the web. That was a time of big disruption, a big new technology; it was clear to many of us that it was going to change things much like it is now. I don&#8217;t remember there being this level of anxiety of, &#8220;It&#8217;s going to replace my&#8230;&#8221; I mean, there were a few people who saw the writing on the wall; I knew I wouldn&#8217;t be making any more printed financial reports for organizations because all that stuff was becoming digitized. That was pretty clear. For the most part, there wasn&#8217;t the level of replacement anxiety that we&#8217;re feeling now.
It does feel like there is angst, and there&#8217;s an HBR article that I&#8217;ll include in the description that names it &#8220;AI Angst&#8221; and outlines what that means and what might be causing it.</p><p><strong>Greg</strong>: I would just build on that. This is a moment where our identity is challenged. Each of us, no matter what we do, has made decisions in our lives and constructed a story around our expertise. That is part of who we are. This moment can feel very unsettling because a lot of that narrative can be challenged. How do we manage through that? I think about this moment personally&#8212;I used to lead large teams, and my identity was a chief design officer. Now I&#8217;m not doing that anymore. Now I&#8217;m helping organizations as a fractional leader. I come in and support teams and do some work. You and I are doing this work of helping organizations prioritize. I&#8217;m coming to terms with that: what does the new version of me look like moving forward with these capabilities and tools? It&#8217;s not the chief design officer that I used to be. That&#8217;s unsettling. I spent a whole lifetime building that narrative. I have adult kids, and I have curiosity about how that happens. The attitude you have to have is to be curious, mindful, and intentional. I don&#8217;t know. What are you anxious about in these final moments?</p><p><strong>Jorge</strong>: I&#8217;m smiling because this hits so close to me. I&#8217;ve been calling myself an information architect for almost three decades at this point. Information architecture is so deeply part of my identity. A few days ago, someone posted on LinkedIn saying, &#8220;Oh, I had a conversation with someone who was talking about getting into information architecture.&#8221; They asked where to begin, and the poster said, &#8220;Look at this person&#8217;s work. Look at this person&#8217;s work.&#8221; There were three references, and the third was me.
It stated something like, &#8220;Look at Jorge&#8217;s website, but he&#8217;s more focused on AI and LLMs these days than information architecture.&#8221; I felt like, is that true? I immediately wrote back and said, &#8220;It is true that a lot of my efforts have been focused in this direction, but I don&#8217;t see it as a replacement of my identity. To me, it&#8217;s the contrary. I don&#8217;t think you can be an information architect and not be all over this stuff, because it&#8217;s so obviously important.&#8221; The way that I put it on my website is that information architecture changes as a result of AI, and AI is made better as a result of information architecture. With all these SaaS replacement narratives from the mainstream media, my canned retort is that information architecture is your moat. You can&#8217;t just replace a system that has a lot of carefully structured information; it&#8217;s not going to be replaced by a chatbot with no context. I&#8217;m seeing an evolution of my identity rather than a replacement of it, so I don&#8217;t feel as much anxiety there. Where I do feel anxiety is the question of, how do I make a living doing this? Because, to your point, if nothing else, the perception out there is that now that we have these tools, they can structure information for you. Yes, but there are a bunch of asterisks following that. My last three years have been about investigating those asterisks. I think that&#8217;s going to be true for a lot of knowledge work. That&#8217;s a big part of the anxiety here: the narratives out there say you&#8217;re going to be out of a job. I&#8217;m not entirely sold on that, because I think these are tools that will definitely change the work, but they still need expertise to produce really good results. That&#8217;s where I stand right now on that stuff.</p><p><strong>Greg</strong>: I love that. I think this goes into, you are on the bicycle or not, like we talked about earlier. 
I don&#8217;t know if it will happen, but if the cost to deliver a software outcome decreases significantly&#8212;which is where we&#8217;re headed: the amount of engineering required is dropping, and the tooling that allows you to deploy something keeps improving&#8212;does that mean less work for all of us? Or does it just mean that there&#8217;s a whole set of new use cases, too expensive to solve before, that are now solvable? I don&#8217;t know what the equilibrium around that will be. My hope is that we&#8217;re intentional about the problems we&#8217;re trying to solve in this world and that these tools allow us to solve more of them. I think you&#8217;re right: the architecture of intelligence, the organization of the information, and the organization toward the outcomes that matter will be a skill set that&#8217;s really important in the future. Not everyone will gravitate towards that, but I think folks like you will be very valuable. You are very valuable.</p><p><strong>Jorge</strong>: Thank you. You are very valuable too, Greg. We are out of time. I just want to acknowledge there are a couple of comments in the chat we can get to after we release the recording, but there&#8217;s one comment that speaks to this from RPUXD671 again: &#8220;It&#8217;s not going to be replaced, well, by a chatbot, but some organizations will try.&#8221; I&#8217;ll say this: we are living through the very early days of this, and there are going to be all sorts of really poor decisions made. We&#8217;re going to try all sorts of things that are not going to work, and we just have to go through it. This is the bicycle thing: you have to keep going, and you have to find stability. We are out of time, unfortunately. It&#8217;s been brilliant catching up as always. Thank you.</p><p><strong>Greg</strong>: Awesome. Thank you all for joining us today. We&#8217;ll try another one of these soon.</p><p><strong>Jorge</strong>: I&#8217;ve flashed the slide on the screen.
If you want to set up time to talk with us, please visit unfinishe.com/connect, and you can set up some time. All right. Thank you, sir.</p><p><strong>Greg</strong>: Thanks, Jorge. See you soon. Bye.</p>]]></content:encoded></item><item><title><![CDATA[Dabble No More: Toward Disciplined AI Adoption]]></title><description><![CDATA[Experimenting with AI is a starting point. But creating real value requires direction and discipline.]]></description><link>https://thoughts.unfinishe.com/p/dabble-no-more-toward-disciplined</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/dabble-no-more-toward-disciplined</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Tue, 13 Jan 2026 17:14:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_i5N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_i5N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_i5N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_i5N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!_i5N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_i5N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_i5N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:48444,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thoughts.unfinishe.com/i/184454629?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_i5N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!_i5N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_i5N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_i5N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8fd98c0b-fc8d-4d35-9eb6-9d69f8402ce3_1200x675.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@hxzrshk?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Harsh Kumar</a> on <a href="https://unsplash.com/photos/blue-and-clear-geometric-shapes-on-a-white-background-D6bJiMFHeAc?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure></div><p>Recently, I had a conversation with an architecture studio lead that went something like this:</p><blockquote><p><strong>Architect:</strong> We&#8217;re using AI in the studio.</p><p><strong>Me:</strong> Oh yeah? What are you doing?</p><p><strong>Architect:</strong> A few things. [Person A] is using one of those meeting bots to transcribe meetings. And [Person B] is feeding renderings into ChatGPT to explore materials and colors. Clients are impressed.</p><p><strong>Me:</strong> Interesting. Anything else?</p><p><strong>Architect:</strong> Yes, [Person C] has used ChatGPT to create social media posts. Although we haven&#8217;t really scaled that.</p></blockquote><p>This is actually a composite of several similar conversations, and I&#8217;ve changed the details &#8212; but the spirit stands. I believe this short dialog accurately represents how many service organizations are embracing AI: by <em>dabbling</em>.</p><p>Dabbling &#8212; or, more gently, &#8220;undisciplined adoption&#8221; &#8212; is experimenting with AI without understanding how information actually flows through the organization to create value. Instead, team members use AI ad hoc on whatever interests them most. It can happen officially (i.e., using company-provided licenses) or unofficially (bringing their own).</p><p>While dabbling has upsides, it carries significant risks and precludes getting the most value out of AI.
Let&#8217;s explore how.</p><h2>Upsides of Dabbling</h2><p>I can think of at least three pros to dabbling with AI:</p><ul><li><p><strong>Quick learning.</strong> By now, most folks in service industries have heard about AI. Many are wondering how it might help their business. But reading about a technology isn&#8217;t the same as using it. Dabbling gets them rolling quickly: setting up an account is easy, and getting a useful reply to a prompt is highly satisfying. A nudge to go deeper &#8212; good!</p></li><li><p><strong>Low friction.</strong> Basic LLM accounts are free, and the pro versions cost around $20/month &#8212; not a big commitment. A ChatGPT account and YouTube will get you rolling. No need for big culture change initiatives, reorgs, or IT investments. And unless your IT department put the kibosh on it, you won&#8217;t be stepping on anyone&#8217;s toes.</p></li><li><p><strong>Nice spread.</strong> AI is a general-purpose technology: it can help with research, production, marketing, finance, etc. With different people experimenting, as in the example above, you&#8217;ll get glimmers of possible applications. Letting a thousand (or, more likely, half a dozen) flowers bloom will give you a sense of what the garden might include.</p></li></ul><p>Given these &#8220;pros,&#8221; it&#8217;s understandable why firms dabble: it&#8217;s a nonthreatening way to get started on the journey.</p><h2>But It&#8217;s Not All Roses</h2><p>Dabbling is better than nothing. But it has significant downsides:</p><ul><li><p><strong>No governance.</strong> Let&#8217;s start with the scariest. Undisciplined AI use is a privacy and security risk. Unless you have a properly configured pro account, your chats will likely be used to train models. Meaning, your private data might show up as an answer to someone else&#8217;s prompt.
There are good reasons why your IT team wants visibility and control!</p></li><li><p><strong>Learnings don&#8217;t scale.</strong> Yes, dabbling lets team members get into AI. But that learning won&#8217;t be evenly distributed. And their focus will be on narrow problems (e.g., crafting a social media post, tweaking a rendering) that can&#8217;t be leveraged more broadly. They&#8217;ll likely have no plans or means to feed data back into the org&#8217;s broader data repositories.</p></li><li><p><strong>Wrong mental model.</strong> Fast learning doesn&#8217;t mean <em>good</em> learning. By dabbling, team members will come to understand AIs as freestanding tools whose abilities reside in vendors&#8217; clouds. They&#8217;ll assume utility lies in the chatbot&#8217;s cleverness rather than in how they leverage structured information. This is a bad mental model. AIs are better understood as adding smarts to, and working with, their firm&#8217;s IT infrastructure.</p></li><li><p><strong>Opportunity cost.</strong> By focusing on &#8220;paper cut&#8221; problems, org leaders can boast that the company is already &#8220;using AI.&#8221; As a result, they&#8217;ll fail to invest in projects that have greater upside potential &#8212; something that can only happen when they consider initiatives as holistic responses to strategic directions. By dabbling, the org gets a false sense of closure while leaving lots of value on the table.</p></li></ul><h2>What To Do Instead</h2><p>OK, so dabbling isn&#8217;t a good strategy. But that doesn&#8217;t mean you shouldn&#8217;t use AI at all. So how should you proceed instead?</p><h3>1. Identify your business&#8217;s &#8220;soul&#8221;</h3><p>Start where your organization shines. What makes it stand out from competitors? What&#8217;s the secret sauce? Where does it create the most value? Don&#8217;t threaten those things.
Instead, look to automate the chores that keep you from delivering your particular kind of value in a timely and cost-effective manner.</p><h3>2. Define your knowledge pipeline</h3><p>And how do you do that? To begin with, you must grok the organization&#8217;s &#8220;knowledge pipeline&#8221; &#8212; how information is created, transformed, passed on, searched, used, etc. All businesses generate and consume data: leads, proposals, research, responses, invoices, documentation, etc. The more structured this data, the easier it&#8217;ll be to integrate into AI-powered workflows.</p><h3>3. Understand AI&#8217;s real capabilities</h3><p>Many people are pushing unrealistic ideas of what AI can do. The reality is that while LLMs are a powerful general-purpose technology, you can&#8217;t just point them to a problem and say &#8220;fix this&#8221; &#8212; at least not in a scalable, and repeatable way. Understanding what the technology can do <em>today</em> is essential to designing systems that create real value consistently, rather than one-off automations.</p><p>By mapping how information flows through the organization, where the real value lies, and what AI can (and can&#8217;t) do well, you can determine how it might best alleviate information bottlenecks &#8212; without threatening your people.</p><h2>A Real-world Example</h2><p>Recently, Greg and I helped an architecture studio define a coherent direction for their AI use. Outlining the studio&#8217;s knowledge pipeline led to an interesting discovery: a significant portion of their time was spent responding to questions during the construction administration (CA) phase of projects.</p><p>Given current LLM capabilities, we determined that helping build CA dossiers would be a good place to start. It&#8217;s a time-consuming task that few people want to do, but which must be done to deliver value. 
But it&#8217;s also far enough removed from the studio&#8217;s core deliverable &#8212; excellent architectural design &#8212; that it doesn&#8217;t threaten their soul.</p><p>This isn&#8217;t the &#8220;sexiest&#8221; use of AI, the sort one brags about. But it solves a real problem in a scalable and repeatable way. It enhances the overall value to clients and improves working conditions for team members. It&#8217;s a win-win all around &#8212; but you don&#8217;t get there by dabbling.</p><h2>Moving Ahead &#8212; With Discipline</h2><p>Dabbling isn&#8217;t dangerous just because it&#8217;s uncontrolled. It&#8217;s dangerous because it gives the firm a false sense of progress. It teaches people to think about AI in the wrong way &#8212; as a clever ad hoc tool rather than as part of a broader system &#8212; while distracting them from more fruitful explorations. </p><p>The opposite of dabbling isn&#8217;t stasis; it&#8217;s moving ahead in a disciplined way. Starting undirected is natural and easy. But eventually, you must move more deliberately and strategically. The goal of using AI shouldn&#8217;t be replacing what makes you special. Instead, it should be freeing your people so they can deliver excellence &#8212; and enjoy the process.</p><p><em>If this resonates, <a href="https://iunfinishe.com/">unfinishe</a> can help. We work with small and medium-sized service firms to move beyond AI dabbling and deliver real value.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thoughts.unfinishe.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading unfinishe_ thoughts! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Moylan Arrow: IA Lessons for AI-Powered Experiences]]></title><description><![CDATA[How traditional structural principles can inform the design of AI-powered products and services.]]></description><link>https://thoughts.unfinishe.com/p/the-moylan-arrow-ia-lessons-for-ai</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/the-moylan-arrow-ia-lessons-for-ai</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Sun, 04 Jan 2026 21:59:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!93KH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!93KH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!93KH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!93KH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!93KH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!93KH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!93KH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71335,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thoughts.unfinishe.com/i/183482279?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!93KH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!93KH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!93KH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!93KH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a8d20f5-a0e6-4586-80a2-2099455dea39_1200x675.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Moylan arrow from a 2016 Corolla by Petar Milo&#353;evi&#263;, via <a href="https://commons.wikimedia.org/w/index.php?curid=52408388">Wikimedia</a></figcaption></figure></div><p><a href="https://www.wsj.com/business/autos/ford-gas-arrow-inventor-jim-moylan-6b2ef066?st=wwpyRk&amp;reflink=desktopwebshare_permalink">Jim Moylan died recently</a>. He was the Ford engineer who proposed that little arrow on the fuel gauge of most cars that indicates the cap&#8217;s location. It&#8217;s handy when you&#8217;re pulling into a gas station to refuel, especially when you&#8217;re driving an unfamiliar car.</p><p>The <a href="https://en.wikipedia.org/wiki/Fuel_gauge#Moylan_arrow">Moylan arrow</a> is such an obviously useful idea that it was immediately implemented by Ford and widely adopted by other manufacturers. It&#8217;s also an excellent example of good <a href="https://jarango.com/what-is-information-architecture/">information architecture</a> &#8212; and one that provides important lessons as we navigate the AI age.</p><h2>How Is This Information Architecture?</h2><p>Information allows us to act more skillfully. Imagine you come to a fork on a road. Without a sign, you&#8217;d need a compass or a great sense of direction to choose correctly. But with a clear sign, you&#8217;d quickly know which road to take. The sign reduces ambiguity.</p><p>The Moylan arrow, too, disambiguates a choice. Pulling in on the wrong side of the pump is an annoying inconvenience. By making the driver smarter, the arrow improves the car&#8217;s UX. Critically, it does so without much cost to the manufacturer. 
That&#8217;s why it&#8217;s become pervasive.</p><p>&#8220;But,&#8221; you may protest, &#8220;this isn&#8217;t IA; it&#8217;s user interface/icon design.&#8221; That&#8217;s partly true. As usual, users experience IA in an interface. The arrow wouldn&#8217;t be as effective if it wasn&#8217;t clear and recognizable. Visuals &#8212; the choice of symbols (an abstracted gas pump and a triangle) and colors (usually white on black) &#8212; are key.</p><p>But there&#8217;s more to it than that. A big part of the arrow&#8217;s effectiveness is its location: on the dashboard, next to the fuel gauge &#8212; exactly where you&#8217;re looking when your car needs refueling. Consider how much less effective it&#8217;d be if it were only noted in the owner&#8217;s manual.</p><p>The Moylan arrow works because it&#8217;s:</p><ul><li><p><strong>Clear</strong>: legible and understandable</p></li><li><p><strong>Findable</strong>: located where you&#8217;re already looking</p></li><li><p><strong>Relevant</strong>: provides the exact answer you need</p></li><li><p><strong>Contextual</strong>: available when needed, but &#8220;quiet&#8221; otherwise</p></li><li><p><strong>Obvious</strong>: doesn&#8217;t need further instructions</p></li><li><p><strong>Cheap</strong>: of negligible cost to manufacturers</p></li></ul><p>The arrow isn&#8217;t just a clear icon. It disambiguates a key structural distinction of the car. The mental model is clear: most current <a href="https://en.wikipedia.org/wiki/Internal_combustion_engine">ICE</a> cars have their fuel cap on either the left or right side. 
The question is, &#8220;Which is it for <em>this</em> car?&#8221; The answer is obvious once you know where to look &#8212; and it&#8217;s cognitively respectful (i.e., it doesn&#8217;t scream, &#8220;LOOK AT ME!&#8221; while you&#8217;re driving).</p><p>Which is to say, the Moylan arrow:</p><ol><li><p>answers a latent question (&#8220;Which side is the fuel cap on?&#8221;)</p></li><li><p>at a time when the user is making a key decision (pulling in to a gas station)</p></li><li><p>by showing them just what they need (left or right side)</p></li><li><p>where they expect to find it (on the dashboard, next to the fuel gauge)</p></li><li><p>cheaply, efficiently, and respectfully.</p></li></ol><p>That&#8217;s classic information architecture.</p><h2>What Does This Have to Do With AI?</h2><p>This is the <em>opposite</em> of the approach taken by many of today&#8217;s AI-powered systems. The arrow is low tech (just a bit more paint/pixels!) and therefore relatively cheap. It does just one job &#8212; resolving structural ambiguity &#8212; effectively and efficiently. It&#8217;s there when needed and blends into the background otherwise.</p><p>Admittedly, its elegance is due in great part to the binary, static, and universal nature of the information it conveys. The cap can only be in one of two positions: left or right. These concepts are unambiguously represented with arrows across cultures. (The pump is more complicated but still recognizable.) Also, the information is static: the cap won&#8217;t change sides between fuelings. </p><p>This is a very constrained set of requirements. But compare Moylan&#8217;s solution with many AI products today, especially those with chat interfaces. Rather than a constrained structure within an expectable construct (dashboard &#8594; fuel gauge &#8594; [left|right] arrow), chats offer completely open-ended interfaces. This may be appropriate for systems that require extraordinary flexibility, but it&#8217;s overkill otherwise. 
And while flexibility adds power, it opens the door to complexity and errors. (Consider the risk of hallucinations!)</p><p>Chat interfaces also have higher latency than more structured UIs. Conversational interfaces require explicit instructions &#8212; either spoken or typed &#8212; before they can provide utility, and getting there may take multiple rounds. To put it bluntly: for many tasks, <a href="https://jarango.com/2023/05/18/thinking-with-words/">chat UIs are inefficient</a>. Compare this with the low latency inherent in Moylan&#8217;s &#8220;ambient&#8221; approach: just glance and turn the wheel.</p><p>Finally, many AI-powered products call too much attention to themselves. The value to the user (e.g., avoiding the inconvenience/embarrassment of pulling in on the wrong side of the pump) takes a back seat (sorry!) to the fact that the product now &#8220;has AI.&#8221; Lacking good system models, users can only guess at what pressing the pervasive &#8220;sparklies&#8221; and &#8220;copilot&#8221; buttons might do. Many users recoil when products add complexity through seemingly gratuitous features.</p><h2>What Can We Learn From This?</h2><p>I&#8217;m not pooh-poohing chat UIs. They&#8217;re appropriate for some use cases. But they&#8217;re also overused. I expect this is for two reasons:</p><ul><li><p><strong>Chat = AI</strong>. Many people associate chat UIs with AI, so they expect conversational interactions.</p></li><li><p><strong>Laziness</strong>. It&#8217;s easier to graft a chatbot onto a product than to redesign its IA to accommodate new capabilities.</p></li></ul><p>Both reasons are bad. If you believe your system&#8217;s value will come from making it more &#8220;intelligent,&#8221; it&#8217;ll likely turn out overwrought. Users get the most value from systems that help them effectively and efficiently and otherwise get out of the way. 
They don&#8217;t want to &#8220;AI all the things&#8221;; they just want the <em>right</em> information <em>when</em> and <em>where</em> they need it. Everything else is noise.</p><p>Rather than ask, &#8220;How might we add AI to this system?&#8221; consider the following questions:</p><ul><li><p>What is the person trying to do?</p></li><li><p>Do they understand the system?</p></li><li><p>What&#8217;s keeping them from choosing skillfully?</p></li><li><p>What questions do they have? Which come up repeatedly?</p></li><li><p>Which structural distinctions are ambiguous?</p></li></ul><p>These are information architecture questions. AI might play an important role in answering them &#8212; even in real time, as the user interacts with the system. But it won&#8217;t happen by simply &#8220;adding AI.&#8221; Instead, you must understand the user&#8217;s needs as they work with the system. Then, you can determine where to judiciously apply AI.</p><p>Also, rather than an open-ended UI (such as a chat), consider whether your system might be better served by a UI that offers clear distinctions and affordances. Buttons and menus don&#8217;t just give users means to act: they also help them understand the system. A thoughtful IA will make your AI-powered product easier to use &#8212; and likely do it more cheaply and elegantly than a chat UI.</p><h2>Closing Thoughts</h2><p>I doubt Jim Moylan thought of himself as an IA. But that doesn&#8217;t matter. We can study manifestations of an area of practice retrospectively even if they weren&#8217;t explicitly produced as such. (For example, we think of many ancient buildings as &#8220;architecture&#8221; even though their designers didn&#8217;t think of themselves as architects in our current sense.)</p><p>As the practice of designing AI-powered systems matures, I expect we&#8217;ll move away from general-purpose interfaces to systems that use AI on the back end while presenting a more traditional UX. 
There&#8217;s room for delight and intelligence in simple, less open-ended systems. The Moylan arrow is an excellent example.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thoughts.unfinishe.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading unfinishe_ thoughts! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Open-Ended Sessions: Workflow Archaeology]]></title><description><![CDATA[A conversation about how considered use of AI can help small and medium-sized businesses thrive.]]></description><link>https://thoughts.unfinishe.com/p/open-ended-sessions-workflow-archaeology</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/open-ended-sessions-workflow-archaeology</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Fri, 21 Nov 2025 17:41:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/jEuuCMK8-ww" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div id="youtube2-jEuuCMK8-ww" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;jEuuCMK8-ww&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/jEuuCMK8-ww?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" 
allowfullscreen="true" width="728" height="409"></iframe></div></div><p>In the spirit of working with the garage door open, we&#8217;ll do periodic livestreams to share what we&#8217;re learning at <a href="https://unfinishe.com/">unfinishe_</a>. </p><p>In this first &#8220;Open-Ended&#8221; session, we discussed:</p><ol><li><p>The principles that led us to start the business.</p></li><li><p>Insights from Reid Hoffman and Greg Beato&#8217;s new book, <em>Superagency</em> &#8212; especially as they apply to small and medium-sized businesses.</p></li><li><p>Workflow archaeology, our approach to designing solutions that are aligned from both a strategic and human perspective.</p></li></ol><p>We&#8217;d love to know what you think.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://thoughts.unfinishe.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading unfinishe_ thoughts! Subscribe to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2>Transcript</h2><p><em>(AI generated.)</em></p><p><strong>Jorge</strong>: All right, Greg. Good morning, sir.</p><p><strong>Greg:</strong> Yeah, good morning. Well, it&#8217;s good to see you live streamed.</p><p><strong>Jorge</strong>: I&#8217;m excited, yeah, same here, and just before we jumped on, we were having a bit of a fingernail-biting moment trying to get everything set up right. We&#8217;re very much figuring it out as we go. 
But we are doing what I think are interesting things, and we were hoping to share with folks. And this seems like an easy way to do it. Do you want to explain a bit about what this is all about?</p><p><strong>Greg:</strong> Yeah, thank you for setting it up nicely and actually talking about how this is an unfinished moment in and of itself. Jorge and I have known each other for a long time, and we&#8217;ve worked together at moments. And we&#8217;re both really fascinated by where we are right now&#8212;the culture and the implementation of technology and AI. I think we both feel like a lot of the things that we&#8217;ve learned in the past aren&#8217;t really applicable in this new environment, and that we have to learn new things. At some level, learning those new things means disrupting ourselves and examining practices that we&#8217;ve held dear over the last 20 years in our careers and seeing if they&#8217;re still valid, and then also exploring the territory that&#8217;s available in this new space. Right? And I think we started that conversation and decided, hey, let&#8217;s put something together. And so we started this thing we&#8217;re calling Unfinishe. The D is missing on purpose. It tells a story about, I think, the market, the space, and the place that we find ourselves in. I don&#8217;t know, maybe you could talk a little bit more about your&#8230; I&#8217;m taking some airtime here, so you jump in and talk a little bit about what you think Unfinishe is.</p><p><strong>Jorge</strong>: Well, I just wanted to touch on something you said. I think you said that the things we learned in the past aren&#8217;t applicable. Is that what you said?</p><p><strong>Greg:</strong> I said that we need to re-examine. I did, but I think what I really meant was we need to examine whether they are still applicable.</p><p><strong>Jorge</strong>: Right, right. 
Well, the reason I started there is because there&#8217;s a flip side to that, which is that as disruptive as the current moment feels&#8212;and we&#8217;re talking specifically about AI, right? Like, there is a new major technology that is upending a lot of things. And as disruptive as this moment feels, both you and I have been through another disruptive moment like this, which was the dot-com era, you know, the appearance of the World Wide Web in particular. Right? That was a major, major thing. And we now kind of take it for granted because it&#8217;s become so pervasive in our world. But I remember going through that time when design was up in the air, and publishing was up in the air, and there were all these things where it&#8217;s like, well, this needs to be reinvented clearly because we have this different thing happening. In retrospect, it feels like, well, of course that&#8217;s how it would turn out, but it wasn&#8217;t obvious at the time. And it entailed a lot of experimentation. And part of the reason why, circling back to Unfinishe and the concept of making things that are unfinished, is I think that we have this drive for closure. We want things to be neatly wrapped up. But in times of transformational change like we&#8217;re going through now, we don&#8217;t really know how things are going to turn out. We don&#8217;t really understand the technology&#8217;s implications yet. We&#8217;re in the process of discovering that. And one of the things that I learned&#8212;and maybe you can chime in on your experience&#8212;but one of the things that I learned from the previous wave of disruption that I was part of was that you can have these big ideas about how technology is going to transform things. You can have ideals about how it should transform things. 
But the changes actually happen more incrementally through a bottom-up approach with people trying things, seeing what sticks, and then that is what effects the transformation.</p><p><strong>Greg:</strong> So, yeah, it&#8217;s a maker-builder mindset, right? You know, I oftentimes talk about how some people think to make, and others make to think. I think we&#8217;re both makers. One of the things that&#8217;s interesting about this space is it&#8217;s emergent, right? There&#8217;s a conversation going on. It&#8217;s really fast. So I think that&#8217;s one of the things that&#8217;s a little bit different. I don&#8217;t know if the dot-com era moved fast, but this era is moving super fast because every week there&#8217;s some new model, capability, discovery, or insight. So it&#8217;s challenging. I actually think that&#8217;s one of the things that we&#8217;re thinking about with Unfinishe; it&#8217;s also helping organizations make sense of the moment by making practical things. Our insight is that we&#8217;re not suggesting that we&#8217;re super experts in this space, but what we are saying is that maybe we&#8217;re eight weeks ahead of you. We can help be a little bit of a Rosetta Stone or a wayfinder for organizations around how to understand how to use these tools and think about it in a really practical way. I think that&#8217;s the basis behind why we think Unfinishe as a consulting practice is actually valuable and necessary right now. In the space that we&#8217;re looking at&#8212;which is small and medium businesses&#8212;and this is something we should also talk a little bit about: why are we looking at the SMB space? There are a whole bunch of opportunities to help organizations punch above their weight, to help them manage some of their complexity more effectively so that they can focus their attention and energy on the things they really love or growing their business in a way that makes sense to them. 
Some of these tools do give you some superpowers. So how can we help organizations make sense of which ones to use and which outcomes are more applicable now and are most useful? I think that is a journey that we want to help people on. Just back to why we&#8217;re calling ourselves Unfinishe. I think one of the things that business leaders have to recognize in this moment is that you are either evolving or you&#8217;re not, and that transformation is something that you need to continuously invest in and have a mindset around. Your point earlier that you talked about closure or completion or having it all figured out&#8212; I think the ethos now is to dive in, experiment, make, find your way, and keep on that path while recognizing that you need to keep moving forward.</p><p><strong>Jorge</strong>: I love that you use the word evolve. I will draw a distinction between evolving and reinventing. Because so much of what one reads out there&#8212;when people write about AI&#8212;and I&#8217;ve seen this with a lot of other consultancies, it&#8217;s like the pitch is, you need to reinvent your business for the AI age. I think our approach is more, &#8220;You know what? That sounds like premature optimization for a world we don&#8217;t really understand yet.&#8221; It&#8217;s much better to try these very carefully defined, pragmatic experiments that help you make steps towards an alternative future, as opposed to this whole huge initiative where it&#8217;s like, let&#8217;s reinvent everything and rethink everything from the ground up. All right. Let&#8217;s not belabor Unfinishe itself. We&#8217;ll have more opportunities in the future to talk about what we&#8217;re doing in the business. We&#8217;re talking about Superagency. This is a book that I had not read. You suggested this book, and I was hoping that you could talk a bit about why Superagency? Why this book? 
And what is this idea about?</p><p><strong>Greg:</strong> Yeah, I mean, I think it&#8217;s one of the pieces of content this year that tries to unpack where we are in this moment. Hoffman&#8217;s book has a couple of tenets in it that I like. One, he talks about technological transformations historically and sort of recognizes the societal disruption and the ramifications of that in those moments to give us a compass for what&#8217;s happening right now. What do I mean by that? He talks about the invention of the steam engine, the reaction to industrialization in the Luddite movement. He has a kind of model, a two-by-two of the different characteristics of people who have a point of view about where AI is going: you know, what he calls doomers, who believe it&#8217;s the end of everything, and the zoomers, who are like, &#8220;No regulation, AI at all costs,&#8221; etc. And that framework, I think, starts to establish, you know, and then there&#8217;s bloomers and gloomers, right? So there&#8217;s the four quadrants. I tend to see myself as a bloomer, but that&#8217;s because I&#8217;m an optimist, and I believe that we get to choose the future we want to live in if we&#8217;re intentional about how we operate. I think this is one of the things that&#8217;s very important, actually, about right now&#8212;that we need to be looking at these tools in a really smart way and make sure that humans are in the center of the conversation. The last tenet in his book is around agency, and that these systems should enable us to have agency&#8212;that we get to make decisions, that we get to make choices, that we get to use them for things we think are valuable. Obviously, there&#8217;ll be some disruption in employment, for sure, but there&#8217;ll be new opportunities that emerge out of this technological change as well. 
That&#8217;s why I thought the book was interesting&#8212;because it was trying to put this into a context of, &#8220;We&#8217;re in a messy moment. The way out of that is to actually be intentional about doing things that have a positive impact.&#8221;</p><p><strong>Jorge</strong>: You talked about the four profiles. And I think the way they&#8212;it&#8217;s two authors, Hoffman and Beato&#8212;but I think of this as Hoffman&#8217;s book in some ways, right? They talk about these four profiles as people who are part of the conversation. They say these are voices that need to be in the room&#8212;you have to accommodate that discussion. You said that you associate or think of yourself more as a bloomer. When I read the book, I too thought, &#8220;Totally, I&#8217;m a bloomer.&#8221; People watching this might not have read the book. Could you give a brief outline of the bloomer profile? And while you&#8217;re thinking about that, I&#8217;ll say we do have people tuning in. I&#8217;ll just put it out there. This series is called Open-Ended Sessions. The idea is to make this a conversation because it&#8217;s unfinished, right? So if you, who are tuning in, have any questions for us or any comments, please drop them in the session chat. All right, Greg.</p><p><strong>Greg:</strong> Yeah, so bloomers, right? I think bloomers are optimists. They believe in progress, and they believe there are opportunities that can be created by the emergence of new technology, identifying opportunities to use that for positive outcomes. I feel like that is my mindset. I&#8217;m not unaware of some of the challenges and issues and problems that are materializing because of this change or this moment we&#8217;re in&#8212;the environmental consequences of building data centers, the background of how the content has been created. 
For me, one of the reasons I want this partnership you and I are putting together is to be intentional about helping organizations make the choices that make sense for them, allow them to be successful, and do it in a way that&#8217;s human and places humans at the center of the conversation. That&#8217;s the kind of work that I want to do. I think the bloomer category is someone who believes that the long-term impact of this is actually going to be good for society. It may be rough in the beginning, but there are positive outcomes to be had. But it means that we have to put in the effort and the energy to make sure that happens.</p><p><strong>Jorge</strong>: The phrase that kept coming to my mind when I was reading the book is that this is a glass-half-full mindset. It&#8217;s an optimistic approach, right? But that doesn&#8217;t mean a Pollyanna approach. To your point, there&#8217;s a recognition that this is a very powerful technology, and like all powerful technologies, it needs to be deployed mindfully. Now, the devil is in the details, right? The question is, what does that mean? They get into a bunch of things about regulation in the book, which I don&#8217;t think we&#8217;re going to touch on here. But you mentioned when you were introducing the work that we&#8217;re doing that we have decided to focus our offerings on small and medium businesses.</p><p><strong>Greg:</strong> Yeah.</p><p><strong>Jorge</strong>: I&#8217;m curious about this idea of superagency and what it might mean for small and medium businesses. I have ideas about that, but I&#8217;d love to hear your take.</p><p><strong>Greg:</strong> Yeah, I think one of the things that&#8217;s interesting is that small teams can do more things in a way, right? That&#8217;s evidenced by some of our work. 
In one of our more recent engagements, we discovered that an organization we were helping was spending a significant amount of time doing administrative tasks to fulfill legal and compliance requirements for their work. We&#8217;re talking vaguely because they just don&#8217;t want to say who it is or what they&#8217;re up to. That number was increasing over time, hitting their margins, and they didn&#8217;t really understand what was going on. At a certain level, they were like a frog in water with the heat slowly turning up&#8212;getting boiled by more and more content they had to manage. This was preventing them from doing the things they wanted to do, the things they valued, and the things they felt differentiated them in their marketplace. One of the things we did was work with them to try to understand how they operated. From that, we discerned what might be simple, small, practical things that they could automate or use AI to assist with so that they could focus their attention on things of high value to them. That&#8217;s an allegory for what we can help small and medium businesses with. You said it right: no one loves to do the laundry. Some people do, but most people don&#8217;t. So let&#8217;s help you do your laundry so you can focus on what&#8217;s truly important to you. We can talk a little bit about how we&#8217;re doing that. The second thing is small and medium businesses are much more willing to try things and experiment. You don&#8217;t have the layers of bureaucracy that might hinder a larger organization around what you can and can&#8217;t do. The opportunity for innovation could be higher.</p><p><strong>Jorge</strong>: I want to be fair; the laundry analogy comes from&#8212;I&#8217;m not sure how to pronounce her surname&#8212;Joanna Maciejewska. I must be butchering that. 
She put out a tweet saying that the problem with AI was directionality&#8212;we&#8217;re trying to automate the wrong things. We&#8217;ve been automating writing and creating art. What we want is for AI to automate doing the dishes and the laundry so that we can focus on writing and creating art. I think that&#8217;s fundamentally right. It also feels to me correct regarding the state of the technology itself.</p><p><strong>Greg:</strong> Yeah, I agree with you there, too. Is it ready for all of this agent-to-agent conversation stuff that&#8217;s going on? I don&#8217;t think so. Maybe at some point in the future, but what we found in the engagement I was just talking about is that, in the abstract, technology alone isn&#8217;t something that will land in an organization. You need to understand the culture of the organization, how people work, their mental models, and the flow of information. We came up with a term for this; we call it workflow archaeology. It&#8217;s a bit different from service design practice or the UX space we&#8217;ve come from because it requires some additional investigation, but it leverages that skill set. It&#8217;s understanding the journey from start to finish of an outcome or a job&#8212;something valuable for an organization. Then you have to dig in and see how that happens. You need to understand the information flow, the shape of the data, where it&#8217;s stored, and how it&#8217;s managed. You also need to understand how people expect it to show up on their desktop or wherever they work, and then you can effect change. You can add a small intervention or a small evolution&#8212;I like the word evolution versus reinvention. Evolution enables a performance gain or improvement, or unlocks some extra capability they&#8217;ve always wanted but haven&#8217;t been able to achieve before. You do that incrementally. 
This moment isn&#8217;t calling for us to blow up the firm and start over; it&#8217;s much more about getting you up to speed on one thing so that not only do you have something valuable, but you also start to understand how these things work, so you, as an organization, can recognize how you want to use them and what&#8217;s meaningful to you. In the case of the organization we supported, we delivered an outcome that led to productivity gains, but their intention wasn&#8217;t to let go of people; their intention was to spend more time on things they viewed as high value. Every organization is going to have a different calculus about what matters to them, but right now, these things have to be small; they benefit from prototyping your way forward. We talked about this maker mindset earlier. That&#8217;s part of the journey we want to help people take on&#8212;practical, straightforward things you can do that add value as quickly as possible.</p><p><strong>Jorge</strong>: And I think that&#8217;s part of the actionable outcome of an engagement like this&#8212;the thing you can fire up Monday morning and start doing that changes your workflows and hopefully relieves people on your team of drudgery. But I think there&#8217;s another level of value that comes from these engagements, which is that they help the organization get a sense of direction.</p><p><strong>Greg:</strong> Yeah. I think this is an important piece of the puzzle. Maybe you can talk a bit about how we do that, but this is a really important perspective because many organizations&#8212;probably most&#8212;don&#8217;t know where to start. If they are doing things, they&#8217;re often doing them in an unintentional way. There&#8217;s a fair amount of evidence that says people are using AI, but it&#8217;s not giving them any positive outcomes; it&#8217;s just burning time as people sort of goof off or experiment with it. What&#8217;s your perspective on that? 
Why is that so important, and how are we doing it?</p><p><strong>Jorge</strong>: Well, the sense I get is that there must be a sense of the emperor&#8217;s new clothes in people&#8217;s minds right now&#8212;in that you read the news and see these huge investments happening in data centers and organizations cutting human positions to invest more in AI. There must be a lot of people wondering, what is the AI doing? The experience most people have had with these tools is through chatbots like ChatGPT. What I&#8217;ve observed, and I think you&#8217;ve seen this as well in talking with folks&#8212;especially in small and medium businesses&#8212;is that there is curiosity about AI. You can&#8217;t help but be curious if you hear about it in the media and everyone is talking about it. Oftentimes, what happens is the organization&#8217;s leadership will take someone in the firm&#8212;usually from IT&#8212;and say, &#8220;Okay, you&#8217;re our AI person, figure this out for us.&#8221; What that person does is get a ChatGPT business account for the firm, give a few people in the company accounts, and then people start dabbling with trying to automate their workflows without any clear step in the process where they are provided a mental model about these tools, how they work, and how they can help. They&#8217;re also not given a holistic understanding of where these tools fit into their information workflows because it&#8217;s being done ad hoc.</p><p><strong>Greg:</strong> Yeah.</p><p><strong>Jorge</strong>: I think one of the tenets that is somewhat unacknowledged, but is central to the work we&#8217;re doing, is the fact that all businesses nowadays&#8212;all modern businesses, anyway&#8212;are, in some sense, information businesses. They have to move information; they have these information workflows where data moves through the organization. 
If you understand what the technologies can do (and you talked about us being like eight weeks ahead&#8212;I think that&#8217;s a fair assessment), the idea is to grasp the capabilities and constraints of the tools. So that&#8217;s one aspect of this: understanding what the tools can do. Then you can gain an understanding of the organization&#8217;s information flows&#8212;how the organization operates&#8212;so you can identify areas where people are expending inordinate amounts of time and resources doing things that are necessary for operations but aren&#8217;t necessarily adding value for their customers. All companies have to deal with some degree of bureaucracy, and my emergent sense is that large language models in particular can be valuable in helping alleviate some of that tedium so people can focus their time on things that add more value to their customers and their companies, but also that they enjoy more. No one likes having to deal with red tape. The point is that part of the value we&#8217;re trying to bring to organizations through this process of workflow archaeology is that by understanding the information flows and where the tools might help, the organization gains a new understanding of what the tools can do and a sense of direction. It&#8217;s not that we&#8217;re going to reinvent the company from day one, but at least we start developing an emergent roadmap of where the low-hanging fruit is, starting with a few pragmatically chosen areas to focus on, so the organization can begin gaining the competency internally to evolve toward that different state of being.</p><p><strong>Greg:</strong> Yeah, and I think you bring up a couple of important points there. One, it&#8217;s a journey that we are bringing our clients on so they gain competency, right? One of the things we&#8217;ve built into our engagements is that part of what we&#8217;re doing is teaching. 
We&#8217;re showing you a methodology, which we call workflow archaeology, for understanding how to make the tacit explicit in an organization, how to identify the IP of an organization&#8212;the things it cares about, the culture, the business processes that matter, the things that make it valuable. People may not realize that many organizations&#8212;including small ones&#8212;might have a role in which someone&#8217;s only job is red tape. That&#8217;s also something to help people recognize: as these tools enable us to do routine, repetitive tasks more effectively and efficiently, you need to help your people acquire new skills or focus their attention and energy on things that will benefit the business in new ways. What I think we&#8217;re trying to promote is a perspective of evolution, not revolution, for your organization. The people who work with you and for you are there with you. That&#8217;s another reason why I like small and medium-sized businesses: small and medium-sized business owners are much more in relationship with their employees; many of these kinds of organizations are almost like families. If they care about what they&#8217;re trying to accomplish as a business and they care about their people, we can help them transition their organizations to take advantage of these tools while also growing their business or managing it in a way that&#8217;s meaningful for them. It&#8217;s an interesting moment to be in.</p><p><strong>Jorge</strong>: There&#8217;s another dimension to this, which is that the information itself&#8212;if you buy into the idea that all businesses have these information flows as part of the lifeblood of the organization&#8212;the truth is that most organizations probably don&#8217;t understand themselves in that light. A lot of that information is managed in a very ad hoc way. It&#8217;s certainly unstructured. 
One of the things we are learning&#8212;and maybe we can pivot to talk about some of the lessons we&#8217;ve learned as part of this initial engagement&#8212;is that when you start working with AI, you&#8217;re going to have an easier time if the information you&#8217;re working with is structured. And hey, guess what? AI can help you do a first pass at structuring the information. I&#8217;m mentioning that because I talked earlier about our being eight weeks ahead as one of our differentiators. I think another one of our differentiators, frankly, is that we come at this problem space from the perspective of information architecture and this designerly approach of understanding how information is structured. The idea is, like you were saying, to augment your people so they can create more value and enjoy their work more, as well. One way that happens is not just understanding the flow of information but also the state of the information and doing something about it. The doing something might have nothing to do with AI; you might discover that your information systems are not up to speed to work with AI. You might need to upgrade those. What ends up happening, maybe, is that the AI thing ends up being a MacGuffin for this broader transformation that probably needed to happen anyway. This is just the reason to get it done.</p><p><strong>Greg:</strong> Yeah, and I think you&#8217;re bringing up an important point: that&#8217;s why we don&#8217;t call it workflow anthropology. I think we both have a design and research background, and we certainly want to research and understand how people work and see them at work. But the reason we&#8217;re calling it archaeology is that there&#8217;s this new element: the structure of the data in the environment we&#8217;re working with. Small businesses tend to not even understand that; they just build it incrementally over time, connecting different technologies, using different stuff. 
It becomes how they work. You need to be able to unpack it and see how the humans in the system use it; that might be the more anthropological or research lens. But you also need to see the structure of that information and its compatibility with large language models, so they can make sense of it. You hinted that there might be some work to do organizing it more successfully so you can get better accuracy or make it machine-readable. It&#8217;s almost like there are layers to the organization that you have to appeal to in this new moment&#8212;some service design, some user research, a bit of spelunking into the technological platforms that organizations use. It&#8217;s looking at the files&#8212;the artifacts they have&#8212;and seeing how they&#8217;re formed and shaped and the degree of variety or variation that exists in them. One of the interesting things about us is that we&#8217;re not really focused on a hypothesis when we come in; we want to start with artifacts. We want to look at the substrate of the organization and explore it. Like archaeologists, you dig away a little of the dirt, find the first layer of civilization, and come up with some thinking about what&#8217;s going on. You dig the next layer of dirt and find the next piece. It&#8217;s important to understand how people actually get things done, and then you can make suggestions about what to do. One of the lessons we learned recently with this engagement was we saw a process and thought, wow, if you did this differently and this differently, you could achieve this huge productivity gain, and here&#8217;s how you could do it. It was almost like the management consulting version of showing up with a hypothesis, and the ROI would be an enormous number. Our client just looked at us and said, that doesn&#8217;t feel right to us; we don&#8217;t believe you and don&#8217;t understand this. We had to reset and ask, what&#8217;s important to you and the way you work? 
We found a key insight: there was an outcome they didn&#8217;t want to change. It was cultural, and it mattered to them. So from that, we said, okay, now we know: we have to get really small, micro. We have to look at one small improvement we could make. We did it&#8212;it was valuable to them. Now they&#8217;re on this path of, hey, this makes sense; what&#8217;s the next small thing we could do? This part of our perspective is to take people on a journey one nugget at a time. I&#8217;ll probably overuse the archaeological metaphor, but we&#8217;ll dig down another layer each time.</p><p><strong>Jorge</strong>: I think you started touching on something there that I wanted to expand on because we actually have a question from Katherine in the chat. She asks, &#8220;Can you talk more about the deliverables to the organizations and businesses supported? I like the term emergent roadmap. What would that include? Detailed documentation? How-to guides?&#8221; So, what do we deliver, Greg?</p><p><strong>Greg:</strong> Yeah, one of the things we&#8217;ve done is prototype from the beginning. We&#8217;re constantly making. I can give you an outline of the things we did and want to continue doing. We ran a workshop with our client around how they work, helping them identify jobs to be done or workflows or outcomes that were particularly important to the firm but where they were spending a lot of time. Then we started making stuff with them. We explored the art of the possible together. As we went through this process, two things happened: one, they started learning to use these tools and were surprised by the efficacy of the results. We were surprised sometimes&#8212;like, wow, that didn&#8217;t work, or that could work if our data structure were more organized. Oh shoot, we need to do that before we can make this happen. In the end, we built an agent. 
It&#8217;s not an autonomous one&#8212;it&#8217;s one that you work with. There was a huge aha moment in there. Jorge, you were more involved with this and may want to talk about the importance of understanding an organization&#8217;s discernment and collective knowledge when building something that sorts through, triages, and does the right work. How did you do that? Talk a little about the last mile of the effort we did.</p><p><strong>Jorge</strong>: Yeah, and you talked about starting with prototypes and the last mile, which is right: it&#8217;s about prototyping throughout the process. You mentioned skepticism, which I expect we&#8217;ll encounter a lot, because many suspect there&#8217;s a lot of hype around this stuff. The quicker you can get to testing and validating hypotheses, the better. When we did the first pass at the workflow archaeology thing in this engagement, we came out with a couple of hypotheses about what might be good uses for AI in this context. The immediate next step should be, &#8220;What&#8217;s the minimal test we can do to validate this hypothesis?&#8221; It might be that the data isn&#8217;t there; it might be that the culture isn&#8217;t there. It might be, and this goes to your question, that the knowledge that needs to be articulated as part of this AI assistant or agent or whatever you want to call it is dispersed culturally in the organization and not described explicitly. A lot of that knowledge is tacit: if you want to use tools that help augment people&#8217;s work, you have to get people to express what it is that they do.</p><p><strong>Greg:</strong> And that&#8217;s important, right?</p><p><strong>Jorge</strong>: Exactly. The thing is, people don&#8217;t tell you what they do&#8212;you have to find other ways of getting that out. Prototyping is one way to get that done. That, I think, is one of the deliverables, to Katherine&#8217;s question. 
But also to honor the notion of the emergent roadmap, the other thing we worked on in parallel is basically a business case. It&#8217;s not just about building a proof of concept here&#8212;something that is a minimal test, a minimal validation of whether there&#8217;s any &#8220;there&#8221; there. If that test proves successful, then what would it mean to scale this? What would it mean to get it into production? You want to come out of this process not just with a tool that someone can use to automate a particular workflow, but also with a sense of direction of where we could go next and how to take this initial experiment and start moving it so it has a larger impact. One way to do that, I think the grown-up way, is to start putting numbers to it and having the numbers be realistic so leadership can make decisions about whether this is something they want to invest in or not.</p><p><strong>Greg:</strong> Yeah, I think we had another aha moment. We invested in building a pretty comprehensive model; we did time on task and understood the billable rates, the team costs, and how much time it took to accomplish things. We could give a very accurate picture: if the evolution we were promoting&#8212;the prototype we had&#8212;was utilized at a certain level by the organization, they could achieve this outcome. What was really interesting and unexpected was that they asked a really good question: &#8220;Okay, now that we have more time because this effort we solved is going to give us time back, what do we use it for?&#8221; That was a really interesting and valuable question, and it points to one of the things that&#8217;s interesting about small and medium businesses: their perspective was not that they needed fewer people but that they had reduced an aspect of the work they didn&#8217;t want to manage. They asked how they could use this gift of time toward something meaningful for them and what the value of that would be for the firm. 
That&#8217;s harder for us to solve for, but we can facilitate conversations around goals and outcomes. One of the interesting things about our work is that our last client walked away with a recognition of a part of their business they didn&#8217;t even understand, and its implications: work they&#8217;d been doing forever was consuming more and more time. It was like a frog boiled in water; if they hadn&#8217;t paid attention to it, it would have cut their margins to the point where the business was less successful, and they wouldn&#8217;t have understood why. This process of workflow archaeology isn&#8217;t just about the technology; it&#8217;s about helping you identify and see yourself as an organization and then, ideally, craft a path toward a better outcome. To come back to the question that was asked, part of what we did early on&#8212;and this is really important&#8212;was help them prioritize the outcomes and workflows against the current state of AI. That gave them confidence: we had this two-by-two framework where the upper right quadrant was high value and easy to do. So we said, &#8220;You should just work on those right now.&#8221; The other things all sound cool and could be transformational and amazing, but let&#8217;s work on practical things of high value. Let&#8217;s help discover what those are, because they may not be the things that people talk about; the things they think are high value are often the things they love to do. But in terms of pushing an organization forward or allowing it to achieve its goals, the necessary things often are the important things. If you can make those more straightforward, the benefits accrue over time. Part of what we left them with was a roadmap of the next workflows they should tackle. They don&#8217;t need us anymore to do it, which is interesting, too. 
We taught them how to do it.</p><p><strong>Jorge</strong>: I wanted to circle back to something you said because it was intentional on our part: helping the client understand their current state better. When we originally discussed the offering, we riffed on an old Velvet Underground song and called it &#8220;We&#8217;ll Be Your Mirror.&#8221; You remember that?</p><p><strong>Greg:</strong> Right, right.</p><p><strong>Jorge</strong>: That&#8217;s because this technological disruption&#8212;this opportunity&#8212;presents one of those rare moments where you can step back and examine the state of what you&#8217;re doing. Organizations are systems, and long-running organizations are complex systems that have evolved over time to perform their functions. These engagements present the rare opportunity to take a step back and take stock of how the whole system is operating. Obviously, you want to improve how it&#8217;s working; that&#8217;s the whole point of the engagement. But, at a minimum, if you get nothing else out of it, having that high-level picture&#8212;even if we&#8217;re just a part of the business&#8212;is really valuable. I want to pivot here because we have about six minutes left.</p><p><strong>Greg:</strong> Yeah.</p><p><strong>Jorge</strong>: By the way, Katherine is following up and saying they work in government, and this process is very applicable there in addition to small and medium-sized businesses. Yes, I believe that&#8217;s right, Katherine. When we say small and medium-sized businesses, departments within large organizations sometimes function like small and medium-sized businesses. Enterprises have different constraints, making them slightly different. I suspect that government does as well. That&#8217;s a good point. What I wanted to suggest, Greg, given we only have about five minutes left here&#8212;and we didn&#8217;t plan this beforehand, so again, very emergent, unfinished conversation. What would be one takeaway for folks tuning in? 
Something that maybe they can do differently or think about differently with this new technology&#8212;something we&#8217;ve learned that might be counterintuitive or surprising and might help them.</p><p><strong>Greg:</strong> Yeah, I think culture is really important. People talk about how culture eats strategy for lunch, right? You need to understand what&#8217;s important to people. You can anchor this kind of work in that so it feels like part of a journey people are on together. That may sound altruistic and optimistic of me, but personally, I think one of the reasons you and I are doing this is that we want to see real impact and see that the impact is meaningful, where humans have agency in the conversation. If you just talk about the technology, you&#8217;ll miss that aspects of the way people currently work will stop you from making progress unless you understand them. If you do understand them, you can use that as an anchor to drive something forward. So don&#8217;t ignore&#8230;</p><p><strong>Jorge</strong>: Culture. I love that you said culture is important, and I will add that culture is also fragile.</p><p><strong>Greg:</strong> Very much so.</p><p><strong>Jorge</strong>: Organizations with a dysfunctional culture probably want to change it, but I would expect those folks might not be looking to add AI to the mix necessarily. So assuming that the culture in your organization is healthy&#8212;which was certainly the case with our client&#8212;then a question becomes: how do you introduce such a disruptive technology without ruining it? That&#8217;s yet another reason to delve into this new space, but do it mindfully&#8212;not with the goal of transforming the whole thing from the ground up on day one, but rather taking this one step at a time. 
Let&#8217;s ensure it&#8217;s true to who you are as an organization and helps you become more of who you are, as opposed to trying to change you into something completely unrecognizable.</p><p><strong>Greg:</strong> It should also allow you to recognize what change you will have to go through. This isn&#8217;t going to happen overnight, right? Even for small and medium-sized organizations, there will be some disruption and impact, and roles that will change. But do it in a way that&#8217;s intentional and mindful, so you understand the implications. Don&#8217;t just do it. I know that&#8217;s more than one thing, but I think that&#8217;s important. I think you should also make stuff. That&#8217;s something I want to impress on people&#8212;don&#8217;t just make anything; be focused about what you make first. That may not have an immediate benefit, but at least you&#8217;re focused on it. Then you learn and make the next thing. Don&#8217;t try to do everything at once. Be focused and intentional about the evolution of your organization. If we can help organizations do that, then I think we can give them comfort about their trajectory. Leadership will have agency and, ideally, communicate that to their employees so they can evolve together in this new environment rather than have it imposed on them.</p><p><strong>Jorge</strong>: Yeah, that&#8217;s the superagency thing, right? It&#8217;s not being imposed on me; I&#8217;m a participant in this. We are at time. I think this was a great first conversation. We will have more of these. For those who want to follow up with us, our website is unfinishe (without the D)&#8212;unfinishe.com&#8212;and we do have a Substack where we&#8217;ll be posting, hopefully fairly regularly, what we learn; that&#8217;s at thoughts.unfinishe.com&#8212;so Unfinishe Thoughts, basically. All right, Greg, thank you. We will let folks know when we have another one of these scheduled.</p><p><strong>Greg:</strong> Thanks. 
All right.</p><p><strong>Jorge</strong>: And thank you to everyone who tuned in, by the way.</p><div><hr></div><p>We&#8217;d love to know your thoughts&#8212;especially since we plan to do more of these. Are there questions or topics you&#8217;d like to bring to the table? Please let us know in the comments below.</p>]]></content:encoded></item><item><title><![CDATA[LLMs as Robot Arms]]></title><description><![CDATA[How to deliver value by rearchitecting knowledge pipelines around AI constraints.]]></description><link>https://thoughts.unfinishe.com/p/llms-as-robot-arms</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/llms-as-robot-arms</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Tue, 28 Oct 2025 23:35:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JfyQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!JfyQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JfyQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JfyQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Post cover art&quot;,&quot;title&quot;:&quot;Post cover 
art&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Post cover art" title="Post cover art" srcset="https://substackcdn.com/image/fetch/$s_!JfyQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JfyQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd90aba9-60f8-45ab-af4a-6f5c3b03845b_1200x675.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 
11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@homaappliances?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Homa Appliances</a> on <a href="https://unsplash.com/photos/a-machine-that-is-working-on-some-kind-of-thing-sz1CHL7Pky0?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure></div><p>Most robots today aren&#8217;t the general-purpose humanoids dreamed up by sci-fi visionaries. Instead, they&#8217;re task-specific industrial tools. Why? Simple economics.</p><p>Imagine you&#8217;re looking to automate a human-staffed automobile factory. There are two ways you could go about it. The first entails designing machines that work within the assembly line&#8217;s constraints &#8212; i.e., general-purpose automatons that can skillfully navigate real-world contingencies.</p><p>These machines would need to respect the factory&#8217;s affordances and other agents in the environment. Because of this, they&#8217;d have to replicate human capabilities: similar sensing and actuation mechanisms, similar cognitive abilities, similar communication channels, etc. They&#8217;d need to switch tasks on-the-fly to perform different jobs as needed.</p><p>This is the dream, and there are companies working to make it real. 
But there&#8217;s a problem: humans are very complex. Building one-to-one replacements is challenging with today&#8217;s technologies. A factory staffed with humanoids would require huge investments before the first automobile rolled off the line. But more to the point, it&#8217;s unnecessary: the goal is cheap cars, not cool robots.</p><p>A smarter approach entails redesigning the assembly line to constrain conditions so simpler machines can do specific jobs: transporting components, welding panels, painting doors, etc. By choreographing several simpler robots, you can have an efficient and cost-effective assembly line. This is, in fact, how most cars are built today.</p><p>Even if technologies afforded fully agentic humanoid workers, it&#8217;s unclear such an assembly line would be more efficient. At the risk of over-generalizing, here&#8217;s a simple design principle: <em>Re-architecting workflows for simple agents costs less than engineering complex agents for existing workflows.</em></p><p>Of course, this assumes stable, predictable inputs and outputs. The automobile plant takes in particular materials in specific quantities at predefined cadences and outputs consistent products &#8212; a scenario that offers little leeway for variation. Not all industries fit this description. Still, there are cheaper ways to introduce flexibility than using general-purpose automatons.</p><p>Now, map this principle onto knowledge work. Many organizations are betting on AI to replace humans. But as with the humanoid factory worker, the technology isn&#8217;t quite there yet. Yes, there are some neat demos. Yes, agentic systems can automate complex tasks. But most of these systems operate within tightly constrained conditions and can&#8217;t autonomously replace human workers.</p><p>And that&#8217;s ok. As with the assembly line, fully autonomous, flexible agents are unnecessary in most scenarios. 
Many workflows can be made more efficient by &#8220;rearchitecting the line&#8221; around the constraints and capabilities of simpler AIs (such as LLMs) rather than via hypothetical superintelligent agents. Mindfully structured workflows can get you pretty far at a fraction of the cost.</p><p>And in any case, why mimic humans &#8212; at great expense &#8212; when LLMs offer unique capabilities? By analogy, robot arms can be much stronger than human arms, have more degrees of motion, move with greater precision, and repeat motions endlessly without boredom or injury. Conversely, most don&#8217;t need the delicate sensory capabilities of human skin or the nuanced motions of fingers.</p><p>Which is to say, artificial systems are more capable than humans in some ways and less so in others. Where humans have an indisputable edge is in their <em>flexibility</em>. A person can take on many different tasks and serve different roles &#8212; including some they weren&#8217;t asked to do. It&#8217;ll be a long time before AIs can catch up with human flexibility and initiative. In the meantime, organizations aspiring to implement general-purpose systems will pay an onerous &#8212; and often unnecessary &#8212; &#8220;flexibility tax.&#8221;</p><p>When considering AI optimizations, start by understanding your organization&#8217;s <a href="https://jarango.com/2025/04/15/smarter-ai-begins-with-your-business-knowledge-pipeline/">knowledge pipelines</a> &#8212; i.e., how information flows through the org. When you do, you&#8217;ll recognize opportunities to automate processes in cost-effective ways. You&#8217;ll also recognize opportunities to <em>get rid</em> of processes. No reason to automate things that shouldn&#8217;t be happening at all!</p><p>As with industrial flows, using AI to automate knowledge work is more cost-effective if you &#8220;rearchitect the line&#8221; &#8212; i.e., constrain variances, standardize interfaces, provide predictable inputs, etc. 
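</p><p>As a minimal sketch (not a prescription), here&#8217;s what one &#8220;robot arm&#8221; station in such a rearchitected pipeline can look like. Every name here is hypothetical, and <code>call_llm</code> is a stand-in for whatever model API you use: the input contract is fixed, the model gets one narrow job, and the output is validated before it moves down the line.</p>

```python
# Sketch: one "robot arm" station in a knowledge pipeline.
# The LLM gets a single narrow job with a fixed input/output contract;
# everything around it is deterministic validation. All names are hypothetical.
import json

REQUIRED_FIELDS = {"customer", "summary", "category"}
ALLOWED_CATEGORIES = {"billing", "support", "sales"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model API; assume it returns JSON text.
    return json.dumps({"customer": "Acme", "summary": "Invoice question",
                       "category": "billing"})

def triage_station(message: str) -> dict:
    """Constrained station: narrow prompt in, validated record out."""
    raw = call_llm(f"Extract customer, summary, category as JSON:\n{message}")
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - set(record.keys())
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    if record["category"] not in ALLOWED_CATEGORIES:
        record["category"] = "support"  # fall back rather than propagate noise
    return record

ticket = triage_station("Hi, Acme here -- question about our last invoice.")
print(ticket["category"])
```

<p>The design choice is the point: the deterministic checks around the model, not the model itself, are what make the station dependable enough to chain with others.</p><p>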
Yes, AGI might make it all obsolete. But AGI doesn&#8217;t exist yet, and LLMs are already powerful enough to serve as &#8220;robot arms&#8221; that boost your knowledge pipelines.</p>]]></content:encoded></item><item><title><![CDATA[Small Bites, Big Impact: ]]></title><description><![CDATA[Why Workflow Archaeology Beats Vision Selling in AI Implementation]]></description><link>https://thoughts.unfinishe.com/p/small-bites-big-impact</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/small-bites-big-impact</guid><dc:creator><![CDATA[Greg Petroff]]></dc:creator><pubDate>Wed, 15 Oct 2025 21:39:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6ad78d28-c828-4a0c-ba94-2933b0b0184d_2132x2126.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!S73s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!S73s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 424w, https://substackcdn.com/image/fetch/$s_!S73s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 848w, https://substackcdn.com/image/fetch/$s_!S73s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 1272w, https://substackcdn.com/image/fetch/$s_!S73s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!S73s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png" width="1456" height="1452" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1452,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8082784,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thoughts.unfinishe.com/i/176274902?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!S73s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 424w, https://substackcdn.com/image/fetch/$s_!S73s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 848w, https://substackcdn.com/image/fetch/$s_!S73s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 1272w, https://substackcdn.com/image/fetch/$s_!S73s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3db059ad-7670-4345-a453-2f97cc03992a_2132x2126.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6><code>image generated by Google Gemini<br></code></h6><p>We walked into the architecture firm and began analyzing their construction administration processes. After mapping their workflows and analyzing their project data, we discovered something striking: they were spending substantial resources annually on administrative tasks that AI could potentially automate. The numbers were undeniable. The vision was compelling. We proposed a wholesale set of changes that we felt AI-enabled tools could deliver.</p><p>They politely thanked us and asked if we could start with something smaller.</p><p>This moment taught us everything about why most AI implementations fail before they begin. 
The problem isn&#8217;t technical capability or even budget&#8212;it&#8217;s that we&#8217;re trying to sell the future to people who are drowning in today&#8217;s broken processes.</p><div><hr></div><h2><strong>The Big Picture Trap</strong></h2><p>When you can see the forest, it&#8217;s tempting to point out how much timber could be harvested. But the people living among the trees are focused on not getting lost.</p><p>We came prepared to demonstrate how AI could eliminate substantial administrative burden in construction administration. What we discovered instead was that before anyone could believe in transformation, we needed to prove we understood their reality.</p><p>That reality looked like this:</p><ul><li><p>Architects spending a quarter of their time on administrative tasks they never learned in design school &#8212; and which few people like doing now</p></li><li><p>Critical project information scattered across email threads, specification documents, and tacit knowledge</p></li><li><p>Workarounds built on top of workarounds, creating fragile systems that somehow kept projects moving</p></li><li><p>Legacy system inertia (their current project management platform) that had shaped inefficient processes everyone had learned to work around</p></li></ul><p>The big efficiency vision felt abstract. The 10 minutes they spent every morning hunting through emails for RFIs (Requests for Information&#8212;formal questions contractors send to architects asking for clarification on design details) that needed their attention? That felt urgent.</p><div><hr></div><h2><strong>Workflow Archaeology: Digging to the Bones</strong></h2><p>Forget the process diagrams. Real workflow discovery happens through rigorous field research, not idealized explanations of how work should flow.</p><p>We conducted interviews with architects across different experience levels, ran working groups to map actual workflows, and dissected real RFI emails from their recent projects. 
We surveyed the entire firm about time allocation&#8212;over 60% responded, revealing that architects consistently underestimated how much time they spent on administrative tasks.</p><p>Most importantly, we analyzed their actual project data: tens of thousands of RFIs and submittals across dozens of projects, with massive variation in distribution. Some projects generated thousands of items; others fewer than a hundred. The patterns revealed bottlenecks invisible in abstract process discussions.</p><p>One insight emerged clearly from all this research: <strong>email is the interface between contractors and architects</strong>. While the firm had invested in web-based project management tools, the real work happened in Outlook. Contractors sent RFIs via email. Architects responded via email. The web tools served as repositories, but email remained the conversation medium where professionals felt comfortable working.</p><div><hr></div><h2><strong>Service Design for AI Implementation</strong></h2><h3><strong>Observe Before You Optimize</strong></h3><p>AI consulting often begins with identifying processes that could be automated. We started by understanding which processes were already broken.</p><p>The difference is crucial. Automating a bad process makes it efficiently bad. Understanding why the process breaks down reveals where intelligence&#8212;artificial or otherwise&#8212;can provide the most leverage.</p><p>In architecture firms, we found that the stated workflow (&#8220;we review RFIs systematically&#8221;) rarely matched the actual workflow (&#8220;Sarah always knows where to find the answer, so we ask Sarah&#8221;). The AI opportunity wasn&#8217;t replacing systematic review&#8212;it was scaling Sarah&#8217;s institutional knowledge.</p><h3><strong>Follow the Energy</strong></h3><p>Through our surveys and prototype testing, we tracked not just what people did, but where they lost energy. 
The moment an architect&#8217;s shoulders sagged wasn&#8217;t when they were analyzing complex technical problems&#8212;it was when they realized they&#8217;d have to dig through 1,800 pages of specifications to find one relevant clause.</p><h3><strong>Architects Love Solving Design Problems</strong></h3><p>They hate hunting through documents to find information they know exists somewhere. This energy differential pointed us toward our automation target: not the complex analysis, but the tedious information retrieval and administrative overhead that precedes analysis.</p><h3><strong>Start With the Smallest Viable Improvement&#8212;In the Right Place</strong></h3><p>Based on our research, we&#8217;re building two interconnected tools that work where architects actually spend their time: in email.</p><p>First, an AI agent that reads incoming RFI emails and automatically creates properly formatted project management tickets with the right metadata. Second, an AI assistant that provides instant analysis of RFI content, suggests relevant specification sections, and offers response recommendations&#8212;all in the place where their email lives.</p><p>Together, these would eliminate the administrative overhead per item and provide substantial research assistance. More importantly, they work within their existing email-centric workflow.</p><div><hr></div><h3><strong>The Evolution Model We&#8217;re Testing</strong></h3><h3><strong>Phase 0: Grow Competence and Confidence with AI</strong></h3><p>Co-explore the problem space with the client to help them understand the emerging capabilities of AI tools. A basic tenet for our work is we are doing it together with the client so that they learn along with us.</p><h3><strong>Phase 1: Automate the Annoying</strong></h3><p>We&#8217;re building the email-to-ticket automation first. 
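</p><p>As a rough illustration (the field names and patterns below are ours, invented for the sketch, not the firm&#8217;s actual schema), the email-to-ticket step mostly amounts to lifting metadata that contractors already put in their messages into structured fields:</p>

```python
# Illustrative only: extract ticket metadata from an RFI-style email.
# The field names and regexes are hypothetical, not a real firm's schema.
import re

RFI_EMAIL = """\
Subject: RFI #042 - Door hardware at Level 2 lobby
Project: Riverside Medical Office
Spec Section: 08 71 00
Response needed by: 2025-11-03

Please clarify the hinge finish for doors 204A-204C.
"""

def email_to_ticket(email: str) -> dict:
    """Turn the prose of an RFI email into a structured ticket record."""
    patterns = {
        "rfi_number": r"RFI\s*#?(\d+)",
        "project": r"Project:\s*(.+)",
        "spec_section": r"Spec Section:\s*([\d ]+)",
        "due_date": r"Response needed by:\s*([\d-]+)",
    }
    ticket = {}
    for field, pattern in patterns.items():
        m = re.search(pattern, email)
        ticket[field] = m.group(1).strip() if m else None
    # The free-text question rides along for the human reviewer.
    ticket["question"] = email.strip().splitlines()[-1]
    return ticket

print(email_to_ticket(RFI_EMAIL)["rfi_number"])
```

<p>An LLM earns its keep on the messier emails where fixed patterns fail; the structured output contract stays the same either way.</p><p>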
If it works as expected, it&#8217;ll solve a daily frustration without requiring anyone to change how they think about their work&#8212;and prove we understand their workflow well enough to improve it.</p><h3><strong>Phase 2: Augment the Analysis</strong></h3><p>With trust established, we could then introduce AI that helps with specification comparison&#8212;still supporting human judgment, not replacing it. The AI would become a research assistant that instantly knows where to look in 1,800-page documents.</p><h3><strong>Phase 3: Orchestrate the Workflow</strong></h3><p>Only after proving value in phases 1 and 2 would we propose end-to-end process transformation. By then, the team would have experienced AI as a helpful colleague rather than a threatening replacement.</p><p><strong>Our hypothesis:</strong> skipping directly to Phase 3&#8212;even with superior technology&#8212;usually fails because it asks people to trust a future they can&#8217;t yet imagine: crawl before you run!<br><br><strong>Key Ingredient to Success:</strong> Use AI to help structure unstructured data. Organizations are full of unstructured data that could be useful if only it were organized better.</p><div><hr></div><h2><strong>What We&#8217;re Learning About Change Management</strong></h2><h3><strong>Process Attachment</strong></h3><p>People defend inefficient workflows they&#8217;ve mastered. This isn&#8217;t irrationality&#8212;it&#8217;s professional competence. They know exactly where to find information in their current system, even if that system looks chaotic to outsiders.</p><p>We&#8217;re learning that successful AI implementation requires respecting this expertise while gradually demonstrating that intelligence can be enhanced, not just replaced.</p><h3><strong>Trust Building Through Small Successes</strong></h3><p>Our hypothesis is that every small automation success will create permission for slightly larger changes. 
The compound effect of trust may prove more valuable than the compound effect of efficiency gains.</p><div><hr></div><h2><strong>The Design Methods That Revealed Hidden Insights</strong></h2><h3><strong>Systematic Research Over Assumptions</strong></h3><p>Instead of relying on stakeholder interviews alone, we built a comprehensive research program. We surveyed the entire firm about time allocation and got over 60% response rates. We analyzed their complete project dataset, revealing massive variation that abstract discussions had missed.</p><h3><strong>Email Dissection as Ethnographic Method</strong></h3><p>We collected and analyzed actual RFI emails from recent projects, identifying patterns in language, formatting, and information density. This revealed that contractors already provide most of the metadata needed for project management&#8212;it&#8217;s just buried in prose rather than structured fields.</p><h3><strong>Prototype Everything, Test with Real Data</strong></h3><p>We built multiple prototypes using actual project specifications and RFI content, not sanitized demo scenarios. Each prototype taught us something about the gap between what we thought would work and what actually helped professionals make decisions faster. And with AI we can now build functional prototypes incredibly fast by ourselves rather than relying on developers.</p><h3><strong>Benchmarking Reality Against Ideals</strong></h3><p>We researched industry standards and available tools, comparing the firm&#8217;s performance against both competitors and best-in-class examples. This grounded our efficiency targets in achievable improvements rather than theoretical maximums.</p><div><hr></div><h2><strong>Lessons for Other Professional Services</strong></h2><h3><strong>Start with Observation, Not Solution</strong></h3><p>The AI tool you think they need rarely matches what actually helps them. 
Domain expertise beats technical sophistication when it comes to identifying the right problems to solve.</p><h3><strong>Find the Keystone Habit</strong></h3><p>Look for the one small change that unlocks bigger improvements. Email processing became our gateway&#8212;unglamorous but immediately valuable and working within existing workflows rather than forcing new ones.</p><h3><strong>Respect the Craft</strong></h3><p>Professional services firms aren&#8217;t just processing information&#8212;they&#8217;re applying judgment developed through years of experience. AI implementations that acknowledge and augment this expertise succeed; those that ignore it fail.</p><h3><strong>Meet People Where They Work</strong></h3><p>Don&#8217;t force adoption of new interfaces when existing tools (like email) already serve as the natural workflow hub. The most powerful AI might be invisible to users, working behind the scenes in familiar environments.</p><h3><strong>Don&#8217;t Boil the Ocean</strong></h3><p>Resist the temptation to use AI to solve all the problems in one go. Instead, pick small but annoying, expensive, clearly defined problems to automate.</p><div><hr></div><h2><strong>The Compound Effect We&#8217;re Betting On</strong></h2><p>We&#8217;re betting that the boring AI applications&#8212;the ones that eliminate minutes of document hunting rather than promising to revolutionize entire industries&#8212;will deliver the biggest improvements to professional satisfaction.</p><p>The patient capital approach to AI implementation requires resisting the urge to lead with transformation and instead beginning with observation, empathy, and very small wins.</p><p>Sometimes the most innovative thing you can do is solve the mundane problems that everyone has learned to tolerate.</p><div><hr></div><h2><strong>Always Unfinished</strong></h2><p>This is why we call ourselves <strong>Unfinishe_</strong>. 
The work is never done&#8212;not because we&#8217;re incomplete, but because organizations are living systems that constantly evolve. Workflows shift. Tools change. People develop new expertise and face new challenges. The &#8220;finished&#8221; AI solution becomes obsolete the moment the organization adapts around it.</p><p>Our approach embraces this perpetual evolution. We&#8217;re not building toward a perfect end state; we&#8217;re creating adaptive systems that improve alongside the people who use them. Phase 1 will reveal insights that reshape Phase 2. Phase 2 will surface needs we can&#8217;t anticipate today. Each implementation teaches us something that informs the next.</p><p>Workflow archaeology isn&#8217;t a one-time discovery process&#8212;it&#8217;s an ongoing practice of observation, experimentation, and refinement. The small bite we&#8217;re taking today creates space for the next small bite, which creates space for the next. Progress compounds not through grand transformation but through continuous, incremental evolution.</p><p>We&#8217;ll know soon whether this specific approach works with this architecture firm. But we&#8217;ve already learned that workflow archaeology&#8212;understanding the actual work before trying to improve it&#8212;reveals opportunities that process diagrams and stakeholder interviews miss entirely.</p><p>The work remains unfinished. And that&#8217;s exactly the point.</p><div><hr></div><p><strong>What workflow archaeology have you discovered in your organization? 
Where do people lose energy that they don&#8217;t even recognize as lost?</strong></p><p>We&#8217;re building a playbook for human-centered AI implementation, one small bite at a time.</p><p><em>Learn more about our approach at unfinishe.com</em></p>]]></content:encoded></item><item><title><![CDATA[Tapping Into AI’s Conservatism]]></title><description><![CDATA[AI favors established frameworks. 
Knowing that will change how you develop your information systems.]]></description><link>https://thoughts.unfinishe.com/p/tapping-into-ais-conservatism</link><guid isPermaLink="false">https://thoughts.unfinishe.com/p/tapping-into-ais-conservatism</guid><dc:creator><![CDATA[Jorge Arango]]></dc:creator><pubDate>Thu, 18 Sep 2025 23:35:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kA6W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kA6W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kA6W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!kA6W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg" width="1200" height="675" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:675,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:443008,&quot;alt&quot;:&quot;A narrow dirt path winds between high, fern-covered banks with tree branches arching overhead.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://thoughts.unfinishe.com/i/173983295?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A narrow dirt path winds between high, fern-covered banks with tree branches arching overhead." title="A narrow dirt path winds between high, fern-covered banks with tree branches arching overhead." 
srcset="https://substackcdn.com/image/fetch/$s_!kA6W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 424w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 848w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!kA6W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d0539c-bc66-429f-9f9f-7421fd1e37c9_1200x675.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@anniespratt?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Annie Spratt</a> on <a href="https://unsplash.com/photos/brown-dirt-road-between-green-trees-during-daytime-e6sOq7ab9ew?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></figcaption></figure></div><p>AI is inherently conservative. I don&#8217;t mean this in the political sense: here, &#8216;conservative&#8217; has a lowercase <em>c</em>. Rather, I mean that, because of how they&#8217;re architected, large language models favor and perpetuate long-established frameworks and ideas over upstarts. This has implications for how you use AIs to develop your information infrastructure.</p><p>I won&#8217;t go into how LLMs work here. The main point is that models are trained on existing data. On day one, they only &#8220;know&#8221; what&#8217;s in the training corpus. The more information there is about a particular topic, the better the model will do with queries about that topic.</p><p>Naturally, the corpus includes more information about older mainstream subjects than newer niche subjects. As a result, models do better on older stuff. For example, it&#8217;s more likely an LLM has been trained on the full text of <em>Bleak House</em> than on a more recent novel. An LLM-generated summary of the former will likely be more accurate than one of the latter.</p><p>Yes, newer chatbots include research modes that let them search the web. But even so, LLMs tend to do better when the core model &#8220;knows&#8221; more about the subject.
Of course, this applies to more than just prose: LLMs also produce better answers to questions about older, more established ideas in other domains.</p><h2><strong>A few real-world examples</strong></h2><p>A few months ago, I asked the then-new ChatGPT o3 &#8220;reasoning&#8221; model for help troubleshooting my old-school <a href="https://shop.panasonic.com/products/gx85-mirrorless-camera-12-32mm-45-150mm-lenses">Panasonic Micro Four-Thirds camera</a>. The device was unresponsive after I&#8217;d plugged it into an unfamiliar charger. I wanted to know if there was anything I could do to unbrick it.</p><p>At the time, <a href="https://marginalrevolution.com/marginalrevolution/2025/04/o3-and-agi-is-april-16th-agi-day.html">folks were saying</a> o3 had near-AGI capabilities. In my case, it produced a textbook hallucination: long, authoritative, and repeatedly mistaken explanations of how to disassemble the camera and which parts to order. Fortunately, I had enough sense to doubt its recommendations.</p><p>The problem? The parts and procedure it suggested were indeed for a Panasonic Micro Four-Thirds camera, just not my model. My sense is that when dealing with a niche product within a niche category, the LLM didn&#8217;t have enough to go on. It tried the best it could, but the result was worse than saying &#8220;I don&#8217;t know.&#8221;</p><p>I&#8217;ve also experienced this issue when using LLMs for software development.
Recently, I asked both Claude and ChatGPT for help implementing a workflow in <a href="https://www.langflow.org/">Langflow</a>, a relatively new system for developing agentic applications. Both chatbots suggested I try nonexistent features or produced broken code. (Yes, even though I put GPT 5 into &#8220;thinking&#8221; mode.)</p><p>In this case, both Claude and ChatGPT were likely hampered by the fact that 1) Langflow is relatively new and 2) it has a visual (rather than text-based) interface. Interactions with the chatbots consisted of me pasting screenshots of the dev environment and the LLMs offering instructions on which UI elements to &#8216;wire up.&#8217; Less than ideal.</p><p>Conversely, for me, both LLMs have succeeded brilliantly at writing Emacs Lisp config files, Unix shell scripts, and Python applications. Not only are these text-based platforms, but they&#8217;ve been used widely for a long time. There are decades&#8217; worth of material online on how to solve problems with Python, Elisp, and Bash.</p><p>But this isn&#8217;t just about longevity. I&#8217;ve had poor experiences using LLMs with another long-lived programming language: AppleScript. My guess is there aren&#8217;t enough examples in the training corpus of how to solve problems with AppleScript. Basically, LLMs do better with systems supported by lots of Stack Overflow posts.</p><h2><strong>Entrenching established technologies</strong></h2><p>And here&#8217;s where we come to the word &#8216;perpetuate&#8217; in the opening. Although I don&#8217;t have traffic stats, my sense is LLMs are replacing the Stack Overflows of the world. As more developers turn to LLMs for answers, they eschew the types of interactions &#8212; blog posts, forum questions, etc. &#8212; that would produce the next generation&#8217;s training corpus.</p><p>This will place new systems at a disadvantage.
We&#8217;ll get worse suggestions for a new development framework or language if the LLMs don&#8217;t know enough about it. And LLMs won&#8217;t know about it if people don&#8217;t post about it &#8212; which they won&#8217;t do if LLMs are answering all their one-off dev questions.</p><p>A corollary: applications written with older languages and frameworks will be easier to develop and maintain than those created using newer systems. As a result, established programming languages such as Python, Perl, and Lisp will become more entrenched. A vicious (?) cycle ensues.</p><p>I added the question mark because I&#8217;m not convinced this is a bad thing. Standing on the shoulders of giants is a time-honored way to build higher and faster. (Especially if the giants are unlikely to jerk you around. Open source technologies like Python, Elisp, and Perl are trustworthy and predictable.)</p><h2><strong>Implications for tech choices</strong></h2><p>The idea that AI entrenches incumbents has counter-intuitive implications. First, even though LLMs themselves are an exciting new technology, you should favor older, established, mainstream technologies over newer, unproven alternatives &#8212; especially when you use LLMs to develop software.</p><p>Even amazing new features (e.g., Langflow&#8217;s visual dev environment) must be weighed against older systems&#8217; overwhelming advantages in an AI-augmented world. Put simply, new applications built atop established technologies will be easier to develop and maintain &#8212; and not just by humans, but also by AIs and AI/human centaurs.</p><p>Second, <a href="https://en.wikipedia.org/wiki/Lindy_effect">Lindy</a> is in effect here. You don&#8217;t want to build atop technologies that might soon become obsolete. Ironically, in an AI-augmented world, older technologies stand a better chance of sticking around.
New entrants face a formidable disadvantage because LLMs don&#8217;t know as much about them &#8212; and perhaps never will.</p><p>Third, there&#8217;s a reason why &#8216;language&#8217; is one of the Ls in LLM: these systems are trained on <em>text</em> and fare best when dealing with text queries about natively text-based systems such as novels and Python code. You&#8217;ll get better results when solving problems in systems that use plain text (e.g., a directory full of .py files) rather than fancy UIs.</p><h2><strong>Toward a conservative approach to AI</strong></h2><p>LLMs are one of the most disruptive technologies of our lifetime. But structurally, they&#8217;re inherently conservative. Their training leads them to favor established frameworks and ideas. Often, this leaves new or niche technologies and ideas at a disadvantage.</p><p>This has implications for your tech choices, especially when you&#8217;re using AI to develop information systems. Favoring older, more established programming languages and frameworks will lead to more efficient development, easier maintenance, and better outcomes.</p><p>As is often the case with innovation, the challenge is balancing novel approaches with the reliability of proven solutions. The goal is harnessing the power, speed, and scale of LLMs while building on solid foundations. Ultimately, building wisely with AI calls for adopting the new while recognizing the value of the old.</p><p><em>This post first appeared <a href="https://jarango.com/2025/09/18/tapping-into-ais-conservatism/">on jarango.com</a>.</em></p>]]></content:encoded></item></channel></rss>