A Few More Workplace Stories
[ stories ]

by: Committee

These are a few more modest stories some of us have shared about encounters with AI at our workplaces. We published a previous installment in October. Do you have your own story to tell? Contact us, and we’ll publish it in the next round!

An Engineer

A company I’m contracting with just interviewed someone for a full-time role. They said the person seemed really talented, but it turned out he only knew how to have AI write his code. When presented with a simple bug, he couldn’t figure out how to debug it himself, because he didn’t really understand how things work. In my opinion, the AI bubble that’s going to burst is not just the financial one, but the ideological one. I think in a couple of years companies will not be pushing so hard for this anymore, as they realize LLM code completion was not the magical tech advancement they were led to believe it was.

A UX Designer

AI tools are being heavily encouraged, borderline mandated, at my company, but there’s actually nothing I’ve found useful for my workflow, beyond occasionally using Copilot to generate placeholder copy to include in my designs. Now my team is being tasked with designing AI interfaces for our products, and there’s an expectation that I’m using enough AI in my work and personal life to be able to speak to emerging design patterns and user experience flows for AI interfaces.

An Engineer

I just feel like there’s going to be a point in the not-too-distant future where people who can fix all this broken AI code are going to be at a premium. It’s rough right now, when everyone is putting emphasis on that “skill,” but I can’t see it lasting long once the embarrassing outages and bugs start happening.

An Engineer

I don’t like AI because of its environmental impact, how it hurts artists and content creators, and how it’s currently affecting the job market through layoffs. But my survival instinct is more powerful than my desire to avoid supporting a tidal wave that I don’t think I’ll make much difference in changing. When they started mandating LLM usage at work, I started to learn it. There are things it really sucks at, but if you think of it as a tool like any other, there are some things it does pretty well. And it’s definitely decreased my development time, mostly because I don’t have to google nearly as many things.

It’s not worth the trade-off, but it’s also not useless. It is a different way of thinking, but thinking at a higher level for most of the day isn’t bad. The paradigm shift can even be useful in its own way. AI sucks at debugging, and you need to know how to do that to be a successful dev. But when it comes to spitting out a small, well-defined function… it can often do it faster than finding the relevant Stack Overflow answer, then copy/pasting and modifying it like I used to do.

An Engineer

LLMs are like a class of algorithms. If you treat them like any other algorithm and understand the tradeoffs and use cases, then you can use them effectively. If you need to churn out a lot of repetitive, regular output that is appropriately bounded, you’ll be well served by an LLM. They quickly derail when the goal output varies too much from the model’s distribution.

I am the tech lead of a successful, high-performing team, and the engineers I work with use LLMs sporadically at best. I work at an AI-pilled firm that tracks usage of these tools, and I am happy to report the results look very nothingburger on AI adoption versus team performance. (Much to the bosses’ displeasure.)