Despite having explained in the post immediately preceding this one why I disabled LinkedIn, I found myself having to temporarily re-enable it a few months later for certain bureaucratic processes. Believe it or not, certain fintechs will refuse to even consider taking you on as a client if you don’t have… a LinkedIn profile. Long story short, I had to re-enable it for a few days. And of course, because I am chronically curious, I decided to look at what absolute gems of wisdom are being discussed on the platform at the moment.
One thing I noticed, beyond the arms-crossed, mind-twisted profiles of certain self-described entrepreneurs, is the complete flattening of very multifaceted issues. To put it simply, the modern internet is where nuance goes to die.
But for the sake of fun, and to try to take the good from the bad, I am starting a series of posts here addressing certain points I have seen thrown around on LinkedIn and social media as a whole that usually lack nuance, background, or context. These opinions are, in a sense, very much like soda pop: a simplistic flavor profile that caters to the lowest common denominator, yet still very widely consumed by the world in general. So I am calling this series Soda Pop Opinions.
I do not want to expose any particular individuals who expressed these opinions, so I will summarize their main argument, which becomes very easy to do when the argument itself oversimplifies a complex issue.
Here is the opinion I would like to tackle today: “Programmers who find generative LLMs useful for code do not add value as they write only boilerplate, and can be replaced by AI”.
The first time I saw this thrown around, it wasn’t actually coming from someone who worked with code in any capacity. So I just discarded it as a non-argument, as it would be the equivalent of me describing the intricacies of a dentist’s work life. I’m not even remotely close to being a dentist, so anything I say on the matter would at most be repeating someone else’s opinion. A few months later I did find this argument again on LinkedIn, but surprisingly, this time it was stated by a self-described software engineer. And more surprisingly, it had quite a few likes.
I say surprisingly because, as a software engineer, especially as you advance to more senior positions, writing code actually becomes a smaller part of your daily work. Of course you absolutely need the theoretical knowledge so you know what you are doing and why things are the way they are, but once you leave college and are through with internships, an actual career in software engineering starts to entail much more than that.
Decisions, Decisions
At higher levels, you need to start making decisions. And I am not necessarily talking about big decisions on the level of those a CTO would make – it can even be small stuff, like explaining to a product owner or client why this approach is better than that one. This requires a mix of knowledge, experience, and even human understanding – which is also somewhat surprising, as development is usually considered a loner career (it definitely isn’t, and I should probably write a piece on this one day).
It is certain that talented, hard-to-replace engineers still have to write code, but the part that makes them exceptional is what they do when they don’t have their IDE (“code editor”) open. They are able to switch back and forth between technical and non-technical language and mindsets, decide whether to use technology “X” or “Y”, and avoid this or that pitfall; in some cases, even being around long enough is valuable in itself, as they know how everything was put together and how the business operates. Yes, yes, I know this kind of thing should be put down in written documentation so one engineer can easily replace another – but I’m talking about real-world organizations, not those in management books, and the real world is much messier.
Real World Experience
All this real-world experience is not something you can teach an LLM. At the end of the day, as I have stated in an older post, LLMs are a cross between an abacus and a dictionary. They are programmed to be “rewarded” when they put together words that we humans assign meaning or usefulness to. Not every organization is creating cutting-edge technology outside the scope of an LLM’s training data. What most organizations out there need are customized solutions to problems that have already been solved.
This is why WordPress grew so much, for example. It has already solved big problems you’d find in developing a website, such as how to safely create, read, update, and delete content, how to allow modifying core functionality, and so on. So what a WordPress developer needs to do is work with these solutions to meet certain requirements. And I find that if you have completely understood the requirements of whatever it is you are trying to solve, and are able to guide the LLM of your choice in the direction of the code you are envisioning, it is a fantastically powerful tool.
Real World Experience In The Real World
Now, one argument I find respectable is that the unsupervised use of LLMs might prevent a novice engineer from acquiring their own battle scars. This is something that needs to be handled at an organizational level, and it is already being done – higher-ed institutions are creating their own rules on how LLMs can be used by students, and I expect that soon tech companies might start introducing similar rules for interns and maybe even junior-level developers.
Even so, this is no justification for dismissing the enormous potential productivity gains that can be obtained by a well-versed engineer capable of analyzing code at an architectural level – that is, not just individual functions or lines of code, but also how entire systems come together and interact. We cannot cap those who are skilled and talented based on requirements meant for those who are still in training. Put simply: that would be like requiring professional cyclists to use training wheels because many kids need them. Nonsensical.
In Conclusion
It may come to a point one day where a super-advanced AI with several layers of intelligence linked together (like an actual brain) will be able to take this more bird’s-eye view of a system and of organizational requirements, to the point of coming close to or even surpassing humans in real-world roles inside real-world organizations. But while we are not there yet, let us all enjoy the fact that the job market still exists, and that it may be our great-grandkids who will need to worry about the machine apocalypse – but almost certainly not us. Carpe diem!
This is the first post of what is going to become a regular series on this blog, where I look to tackle oversimplified opinions. Keep an eye out for more like it in the future!