Thoughts on AI
Two sides of a coin
There’s a lot we should be excited about when it comes to AI and business. It will inarguably:
enable a large percentage of work to be performed faster and with more consistently reliable results
equip workers with information in seconds that it might normally (pre-AI) take them hours to collect and distill
arm executives with faster, more detailed, actionable insights they can use to keep pace with, or outpace, the market
allow tighter alignment between investment, the returns on it, and compliance with governance models
All good so far. But what should we be worried about? Turns out, there’s a whole boatload of stuff.
poor implementation within companies (weak or missing central governance strategy, accompanied by disjointed and uncoordinated execution, often at the individual level)
hiring of new graduates into tech businesses is down roughly 50%. This bodes ill for the future: our society is not preparing itself for the massive ramifications when hundreds of thousands of over-indebted graduates discover that the promises made to them have evaporated. See recent research from SignalFire (link below)
weak corporate leadership that seems intellectually ill-equipped to move organizations toward a deeper understanding of, and facility with, AI (see the section above), and that has shown itself too quick to cut jobs (a triumph of hope over proof, and of lazy-mindedness over measured reason)
a vacuum of governmental leadership on the regulation of AI (its power is too great, and its implications for society too important, to be left to rich people, VCs, and corporate executives to figure out)
exaggerated claims of success by AI industry leaders (which are often barely concealed attempts to attract more funding and/or stave off investor criticism); see Apple’s recent research on the Illusion of Thinking (link below)
immature mass media that proves daily it is unable to educate the public about AI’s potential for good and harm, and unable to seriously hold powerful people accountable for the actions they are taking around AI
I’m optimistic about AI’s potential for good; always have been. That optimism has roots in my years of deep operational work at the process level within many large and small companies. But I’m less optimistic about AI’s potential when I see it being leveraged as a tool to advance the interests of powerful and malicious forces in our world.
My advice to everyone is this:
educate yourself about AI, but uplevel that education beyond the basics of prompt engineering. Become conversant in how AI should be a force for good in our societies. Don’t leave that realm to those with power.
within your own company, be part of the conversation, and part of the effort, to implement AI strategies that promise scale and a closer, more consistent alignment with the client’s optimal experience
think globally, not nationally, not parochially. IMO, the future of a world that benefits from AI is one in which nations cooperate and collaborate. It’s not a world in which rich people and rich nations make all the decisions. Before you say, well, that’s the way the world has always worked… ask yourself whether that’s been a good thing for all of humanity.
Reference articles:
https://www.brookings.edu/articles/the-coming-ai-backlash-will-shape-future-regulation/
https://machinelearning.apple.com/research/illusion-of-thinking
https://www.signalfire.com/blog/signalfire-state-of-talent-report-2025
https://www.cigionline.org/publications/conceptualizing-global-governance-of-ai/