AI and the Illusion of Productivity
Cheating is when you skip the hard work of developing a strategy and ask the ever-friendly Claude to write one for you instead.
I’ve been thinking about productivity lately because I recently spent three hours editing a document that turned out to be written by AI. I could tell that the writing was bad, but the author was a serious person, so I line-edited the thing.
But I had been duped. Three hours of cleaning, trimming and restructuring left me with a pile of rhetorical fluff. It also left me livid.
Many companies are betting big on AI-powered productivity tools that they hope to sell at a premium to other companies that have a financial incentive to make their employees more productive. This is Microsoft’s strategy, for example, and it’s not a crazy bet.
The tools of the white-collar trade are often byzantine contraptions that have suffered from feature creep and backwards-compatibility constraints. These products require add-ons like onboarding guides, employee trainings and dedicated support teams. Last week, I had to file a Jira ticket that some poor human needed to work on so that I could change some functionality of my team’s Jira board. If you don’t know what Jira is, consider yourself lucky and trust me that this is bonkers.
If you had asked me two weeks ago, I would have told you that I was generally on board with the movement to create ‘smart’ workplace tools that could increase productivity. I don’t think the AI-enhanced versions of the knowledge worker toolset are particularly good yet, but I was eager to adopt them once they were. It takes me an inordinate amount of time to make a Keynote presentation look presentable, and I am under no delusions about the value I’m adding to the company by spending half an hour making sure my text boxes are properly aligned.
But that was before my ChatGPT editing debacle. My new thinking is that AI-enhanced tools are going to have a net negative impact on businesses and should be rolled out with great care.
This is my logic:
- It’s nearly impossible to prevent humans from cheating when they think they can get away with it
- AI tools lower the barriers to cheating
- It’s risky to have your employees cheating on their work
The key question is whether AI assistants more readily facilitate cheating or productivity, and what the risk-reward payoff is for each.
What is cheating? Cheating is when you skip the hard work of developing a strategy and ask the ever-friendly Claude to write one for you instead. It’s relying on a summary of a contract instead of reading the details, forwarding along peer feedback without reading it first, or sending your manager a chart whose logic you don’t actually understand. Cheating looks like productivity in the short term, but it comes with elevated risks. The contract could have a catch, the peer feedback could be egregiously biased, and that chart could contain an error that leads the business to make a very regrettable decision.
Productivity is not about outsourcing the thinking work; it’s about accelerating the execution. Using an LLM to write a vacuous status update that contains no concrete information is cheating. Building a reusable email template for that update to save yourself time futzing with formatting is productivity. Productivity comes from workflows that help people and machines go from decision to impact faster.
Increases in cheating might lead to short-term savings, but they also run the risk of royally blowing up in your face.
I bring this all up because we are entering a golden age of grift. The scammers, con artists and literal felons (jfc America) are empowered. Online sports betting is now legal in 30 states and is already resulting in a measurable decrease in household savings, according to a recent study. Over in Brazil, gambling addiction has become a national crisis. (Bloomberg reports that 20% of the money the government handed out for its flagship social program in August went to online gambling sites.)
Then there’s the blockchain, a bad distributed data storage technology that has been pumped up by VC funds and still does not have a use case other than—I knew you were waiting for it—crypto! A murky monetary world full of humans hidden behind hexadecimal hashes who are very eager to work outside of the US financial system.
We’re in an age of cutting corners, screwing over your neighbor and looking for a way to get rich quick. If I were a corporate executive, I would tread cautiously before throwing AI “productivity” tools into this milieu. People are tired, they’re depressed, they might not like working for you or your company, and now you are allowing them, ENCOURAGING them, to complete their tasks by copying the output of a sometimes mendacious LLM system in the name of innovation and productivity. They will do it. They will do it and go home early, and you will feel great until your ship hits an iceberg or gets sued out of existence.
Anyway, that’s my deep thought of the day. Perhaps it’s not so smart to outsource the intelligence part of our professional workforces. And perhaps the appeal of increased productivity wouldn’t be so compelling if the way we worked incentivized good analysis and good ideas rather than the window dressing around them.