Staying In The Loop

Michael Faber
Aug. 12, 2025 · 10 min read

Depending on how the curve of AI growth bends over the next few years, one of two outcomes seems likely. First, AI continues on its hockey-stick growth pattern, where massive investment in LLMs as we know them today continues to scale with more resources, algorithmic efficiency, unhobbling, and more, resulting in some semblance of Artificial General Intelligence, or possibly Super-Intelligence (whatever that means).

Or, we hit the wall, exhausting the potential of this kind of transformer-based architecture, and growth and progress plateau.

Either way, this is a really interesting ‘era’ in the future history of AI.

The State of the Industry

The state of the AI industry is settling into a groove. Every few months, the hype cycle goes into overdrive, with the big tech companies parading out their top minds to see who can say “AI” the most in a two-hour keynote. We get a version update of their model, a glimpse into some new features or integrations in their tools, or maybe even some fancy hardware teases. Then the YouTube algorithm spends a few weeks recommending a parade of AI videos, each with a thumbnail of someone mimicking Munch’s screamer and claiming that “[new thing] IS INSANE” or “AGI is HERE! (?)”. (Just me?)


The truth is that AI hasn’t changed a ton since I wrote a post on the state of the industry almost two years ago. You should read it; it’s still pretty relevant, especially in light of Duke’s recent launch of the AI Suite, which finally solves the core access problem outlined in that article.

But despite improvements in the software, improvements in the models, the addition of reasoning models, and a staggering amount of investment, both monetary and human, in basically every industry, I claim that we are still in roughly the same “era” of AI: the Co-Intelligence Era. And that’s probably a good thing.

Our Work Ahead

Even though the models have gotten faster, slicker, and more integrated into our daily tools and lives, the basic interaction pattern hasn’t changed much. We pull up our chatbot of choice, ask a question, and it responds. Sometimes the response is great, usually it’s generic and uninspired, and sometimes it’s just nonsense delivered confidently. But what has changed, or perhaps what needs to change, is us. With two years of experience in this GPT-4ish era, we’re starting to figure out how to work with it, not just watch it work.

A few years ago, when AI felt more like a technological parlor trick, we treated it like a slot machine. Pull the arm, watch it produce something, and hope the prize is actually useful. Now, with more sophisticated tools and more experienced humans operating those tools, that old framing gives too much power to the machine. The more time we spend with these systems, the more we realize their usefulness isn’t about what they are technically capable of. It’s about how we ask, how we respond, how we shape the exchange back and forth, and how we’ve primed the tool to produce something better than human or AI alone. The reps matter. Testing the edges of what these tools can do helps us get better at spotting the jagged boundary between useful and advantageous on one hand, and boring and broken on the other.

AI is not a human. I like to think of it more as an alien from another planet, one that has read everything there is to know about Earth but has never experienced it firsthand. And everything we are asking happens to be about the human condition. So it does what it thinks is best, which is to mirror what we gave it in the first place, in an attempt to be a compliant helper. This is handy, since it knows everything, but it’s also incumbent upon us, as the only actual human in this conversation, to interpret, critique, and reshape those responses into something that isn’t just a factual regurgitation, but is validated by our own expertise. The human in the loop, the back and forth, shaping and reshaping, is where the value in these tools lies.
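For the programmers in the audience, that back-and-forth is simple enough to sketch in code. Here is a minimal Python sketch of the loop, not a real implementation: ask_model() is a hypothetical placeholder standing in for whatever chat API you happen to use (it echoes a canned reply so the sketch runs on its own), and co_intelligence_session is a name invented here for illustration.

```python
# A minimal sketch of the shaping-and-reshaping loop described above.
# ask_model() is a hypothetical stand-in for a real chat API; swap in
# your provider's SDK if you want to try this for real.

def ask_model(messages: list[dict]) -> str:
    """Hypothetical placeholder; returns a canned reply so the sketch runs."""
    return f"(model's attempt at turn {len(messages)} would appear here)"

def co_intelligence_session(task: str) -> str:
    # Prime the tool up front: role, constraints, and your own standards.
    messages = [
        {"role": "system",
         "content": "You are a drafting partner. Be concrete and flag uncertainty."},
        {"role": "user", "content": task},
    ]
    draft = ask_model(messages)

    # Stay in the loop: the human critiques, the model revises, repeat.
    while True:
        print(draft)
        critique = input("Your critique (blank line to accept): ").strip()
        if not critique:
            return draft  # accepted by the human, not the machine
        messages += [
            {"role": "assistant", "content": draft},
            {"role": "user", "content": critique},
        ]
        draft = ask_model(messages)

if __name__ == "__main__":
    co_intelligence_session("Draft a short welcome post for a university AI blog.")
```

Note where the return statement sits: the session only ends when the human decides the draft meets their standards. The model never gets to declare the work done on its own.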

And that’s why I like to think of this as a Co-Intelligence Era. Not because the AI is particularly brilliant on its own, but because it’s just good enough to be an excellent collaborator. It still needs us. It doesn’t understand subtext, or taste, or consequence, or quality. It will happily offer a million ideas, but it doesn’t know which ones are interesting, or beautiful, or ethical - that’s still our job.

And that’s a good thing. I don’t want my tools to be smarter than me. I want them to be fast and flexible, to fill in the gaps, to riff, and to do the grunt work along the way. AI is great at that stuff, as long as I remember to stay in the loop, keep my own standards, be critical, and learn to use the tools more effectively.

It remains to be seen whether the next era of AI will be one of super-intelligent, agentic systems that actually can comprehend the human condition, or whether we will continue in this incremental pattern of modest improvements, better integrations, and broadened access. [Author note: With the release of GPT-5 and updates to other models moving the intelligence needle a bit, but not making a revolutionary step, it does seem like we will remain in this era for some time.] But either way, this era we’re in right now might turn out to be the most interesting. It’s the one where we still matter the most.

Staying In the Loop

Right now, being “in the loop” is more than a nice idea; it is essential to the moment. Staying engaged is how we learn, prepare for what’s next, and get the best results from today’s tools.

And that’s what we hope this blog will be: a place to share what we’re learning, test ideas, and explore how to work with AI in ways that are practical, thoughtful, and exciting.
