No-code for… coders?!

Is OpenAI’s GPT-3 a miracle or a curse?

This article is also available on Medium.

Earlier this year, I wrote some articles about no-code, the learning schemes I think it implies, and the benefits and drawbacks I see in this new trend. No-code (and its companion, low-code) is quickly becoming a turning point in the world of tech – many consider it a game changer, in particular for its potential impact on time-to-market and software production costs. According to Gartner, low-code could “partner up” with AI to account for over 65% of application development activity by 2024.

While exploring this topic, I found plenty of examples of tools that try to abstract away the difficulty of different types of processes: Webflow focuses on web interface design, Zapier helps automate inter-software communication tasks, Airtable provides low-code databases… Some tools, like Makerpad, even offer to create a wider range of applications, from shopping sites to Slack-like apps or automatic email digest services.

But all of this reading and exploring left me with one burning question: who is no-code for, exactly?

Using GPT-3 to write code

You’ve probably heard of GPT-3, the latest iteration of OpenAI’s text-generating AI, and of how it is used more and more to write apps. Last March, there were already over 300 apps built (at least in part) on OpenAI’s API – for chatbots, search answers, more lifelike feedback, etc. You can even try it online, as they provide a GPT-3 code generation demo. It is OpenAI’s attempt to shake things up and claim their place in the low-code world.

Here’s an example of GPT-3 generating Tailwind CSS from natural language (video from: https://gpt3demo.com/apps/gpt-3-tailwind-css):
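
Under the hood, demos like this boil down to sending a natural-language prompt to the model and getting text (here, markup) back. Below is a minimal sketch, in TypeScript, of what such a call could look like against OpenAI’s completions endpoint – the model name, prompt format and parameters are illustrative assumptions on my part, not the demo’s actual setup:

```typescript
// Minimal sketch: asking GPT-3 to turn a natural-language description
// into Tailwind-styled HTML via OpenAI's completions endpoint.
// Prompt and parameters are illustrative, not the demo's actual setup.
async function generateTailwind(description: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "davinci", // a GPT-3 base model available at the time
      prompt: `Description: ${description}\nTailwind HTML:`,
      max_tokens: 256,
      temperature: 0.2, // low temperature: we want code, not creativity
      stop: ["Description:"], // stop before the model invents a new example
    }),
  });
  const data = await response.json();
  return data.choices[0].text.trim();
}

// Usage:
// generateTailwind("a blue button that says Subscribe").then(console.log);
```

Note how little of this is “programming” in the traditional sense: the interesting logic lives entirely inside the model, behind one HTTP call.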


Ok, so – the results are impressive. The range of things it can create is, too. As F. Bussler pointed out in his 2020 article about GPT-3, this AI can be applied to various use cases and could even work for image generation (essentially by learning to predict pixels next to each other rather than text characters). Does this mean we can now create anything we want with just a few prompts? That we don’t have to write code or take pictures anymore?

But at this stage, I’m a bit lost. To me, the point of no-code was to help people without any prior coding experience prototype apps, try out ideas, and draft UIs. The point of auto-ML was to introduce citizen data scientists to the world of data analysis and machine learning. It was about bridging the gap between professionals and enthusiasts. Why would we hand coders constrained and “lightened” tools to code with? Isn’t it their job, their special skill, to leverage those complex but powerful programming languages to create truly custom user experiences?

Isn’t creativity a core human skill?

Note: by the way, I recently discussed this in more detail in an article about creative AIs 😉

One-size-fits-all?

I believe one of the key things to keep in mind with low-code is that, by definition, it is meant to satisfy the largest possible audience. Be it by providing commonly used building blocks, widely required features and services, or even by narrowing its focus to a trendy field, a no-code tool will not adapt to your project – instead, it’s up to you to adapt to it!

This is not bad per se, but it implies a standardization of the final products. This normalization shows up in the UI, in the UX flow, in the services offered… All in all, products created with a given low-code tool may have different content and target audiences, but they will most likely share similar layouts.

As software grows up and we build more and more complex systems, we are pushing the boundaries of what we can do with current software and hardware. Look at the ecological impact of AI, or at our heavily refrigerated data centers – our computers are burning the candle at both ends and, for the most demanding applications, only a carefully and specially designed architecture can actually keep the system running. This, I believe, is not possible with low-code.

Note: take a modern language like JavaScript – iterating over an array can be done in at least five different ways that all have different execution times (as sketched just below). Chances are an AI that can supposedly write everything in any programming language, like GPT-3, would not care to distinguish between those subtle differences and would have one single “loop iteration in JavaScript” snippet…
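
To make that concrete, here are five of those iteration styles side by side – a quick illustration in TypeScript, with the usual caveat that the actual performance gaps depend on the engine and the workload:

```typescript
// Five ways to iterate over the same array in JavaScript/TypeScript,
// each with its own performance profile depending on the engine:
const items: number[] = [1, 2, 3, 4, 5];

// 1. Classic indexed for loop – usually the fastest on hot paths
for (let i = 0; i < items.length; i++) console.log(items[i]);

// 2. for...of – clean syntax, but goes through the iterator protocol
for (const item of items) console.log(item);

// 3. forEach – one function call per element
items.forEach((item) => console.log(item));

// 4. map – allocates a whole new array, wasteful for pure side effects
items.map((item) => console.log(item));

// 5. while loop with a manual cursor
let cursor = 0;
while (cursor < items.length) console.log(items[cursor++]);
```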

Don’t get me wrong: no-code solutions are often well-coded and try to make the best of your requirements. Scalability and containerization tech like Kubernetes can further expand your power to deploy and maintain your app for large audiences. But, in the end, we still need human experts to design and implement truly unique systems, because low-code cannot replace specialized engineering.

A deal with the devil

However, my real issue with this GPT-3 situation is not the standardization of UIs and features. It’s the centralization of tech logic.

Relinquishing control to machines…

We live in an age where programmers have tremendous power over our everyday lives. Think about it: we developers are the ones who decide how the systems that rule our world work. We are the ones who implement the logic, the constraints, the edge cases and the “normal behaviors”. We are the ones who tell the algorithms what is right and what is wrong. If all these infamous cases of AI bias have shown one thing, it’s that the person behind the screen, and the data they feed the AI, have a big influence on the “fairness” of a machine’s results.

For example, AI Dungeon, a GPT-3-based “machine RPG narrator”, recently showed its limitations in terms of regulation with the scandal (article from Korii, in French) about it producing overly sexualized content. GPT-3 itself apparently has disturbing biases against Muslims – and OpenAI knew about it when they released the algorithm, as stated on their GitHub page!

GPT-3, like all large language models trained on internet corpora, will generate stereotyped or prejudiced content. The model has the propensity to retain and magnify biases it inherited from any part of its training, from the datasets we selected to the training techniques we chose. This is concerning, since model bias could harm people in the relevant groups in different ways by entrenching existing stereotypes and producing demeaning portrayals amongst other potential harms. This issue is of special concern from a societal perspective, and is discussed along with other issues in the paper section on Broader Impacts.

Now, as long as there are humans in the loop, we can keep somewhat reasonable control over those systems (although that’s not always the case…). Aircraft are subjected to very harsh tests, and so is their software. Banking companies are required to meet precise criteria. All in all, plenty of industries require programmers to follow test plans and to always be able to show proof that their logic follows the given rules.

But what if machines start writing code themselves? Then the black-box problem will return stronger than ever. You might think it’s of little importance if you only use GPT-3 to automatically fill in your for-loop, but let’s be honest: we’re not going to stop there. At least, it doesn’t seem to be going that way. And as we start to cram more and more complex logic into the “AI-generated part”, we’re losing our grip on the inner workings of our software. The more powerful the AI gets at writing accurate code, the less we’re going to worry – remember the old rule: “if it ain’t broke, don’t fix it”?

When I read this article by M. Griffin, one sentence made me think:

[…] As GPT-3 shows, language is actually a skill machine learning is rapidly mastering, and programming languages are not so different from English, Chinese, or Swahili.

This is true: programming languages are similar to natural languages because they have a grammar and a vocabulary and are, in the end, a succession of text characters that together give a piece of code a precise meaning, just like ordinary text. However, just because you can write in a specific programming language doesn’t mean you are a developer in that language. As I’ve said in previous articles, I believe coding is about more than simply spitting out the right keywords in your script – it’s about software architecture, model thinking and interconnecting resilient systems.

Note: also, I actually think that even though there are similarities, learning a programming language isn’t exactly the same as learning a natural language. If you’re interested, I wrote a blog post on that topic a while ago 🙂

By giving up on fully understanding the codebase you’re working on, I think you’re making a mistake as a programmer. One thing I was told when I got my first gig in a startup is that I should at least have a vague notion of what the other devs in my team were doing – hence the daily meetings, the scrums and all those ceremonies that come with agile methods. And while you don’t need to follow everyone’s every commit at every instant, I do think that the only way to work in a coherent and cohesive way is for everyone to be aware of each other… socially and “developer-ly”. Pair programming, pull request reviews… those practices are great because they force you to step outside your comfort zone, to let go of the little snippet of code you lovingly polished for the past three days and go toe to toe with another part of the codebase you’re less familiar with.

The problem is that this is possible when the codebase grows at a human rate; but it’s hard – if not impossible! – when you ask an AI to “program half of your auto-ML solution”. No one in their right mind will read, all at once, the full diff of a complete application. And so, in the end, we will just have to trust that the AI did a good job.

… or relinquishing control to the brains behind the machines?

The question remains, though: is it really the machine you are trusting? Or rather the programmers who coded it in the first place? Today, we are at a strange crossroads where we talk of software writing software… while pointing at human-made tools. GPT-3 was not written by an AI. It is an AI that was developed by OpenAI – a group of humans with the same skills, brilliance and flaws as ever.

There is a large controversy around the latest versions of GPT not being open-source (as fantastically well summed up in this article by D. Gershgorn). OpenAI initially stated that it was for security reasons – for GPT-2, they said in February 2019 that they would not release the full model because it was “too dangerous”: to avoid anyone producing credible content on any subject and flooding the Internet with conspiracy theories or racist pamphlets, they preferred to keep the AI a secret. They did, however, release smaller versions of the model for focused use cases along the way, and eventually published the entire thing, since the world was still standing.

xkcd’s “Duty Calls” comic strip (https://xkcd.com/386/)

But things aren’t that clear for GPT-3. As Gershgorn says: it looks like we’ve gone from “too dangerous” to “too lucrative” for OpenAI to release their software. And the fact that Elon Musk is an OpenAI co-founder, or that they’ve made a $1 billion deal with Microsoft that gives the company exclusive access to GPT-3’s underlying code (and has already spawned Power Fx), is not reassuring either…

Note: I have to point out, however, that Musk criticized OpenAI’s decision to license GPT-3 exclusively to Microsoft, saying that it goes against the company’s initial “open-source” drive. Plus, he was pretty suspicious, like many, of the promise that the algorithm would remain completely accessible via OpenAI’s application programming interface platform, as stated by Microsoft’s Executive Vice President and Chief Technology Officer, Kevin Scott.

We’ve seen how the GAFAM have gradually taken over most of the Internet, the apps and much of the hardware and services we use every day. Still, it felt like developers remained vigilant and curious about how those giants were spreading everywhere. We had this need for deconstruction, for analysis, for architecture disassembly (we see it in the numerous web pages about “What programming languages are used at Google?” or “What tech stack does Facebook use?”). My fear with GPT-3 is that it may erase this will to unravel software, and instead make us accustomed to black boxes and proprietary algorithms writing everything for us.

To conclude

No-code, auto-ML and GPT-3 all derive from the same guilty pleasure we have when delegating tedious work to an obedient and emotionless slave. Because working less is always better. Right?

Well – no. That’s just the thing: I’m a developer, and I like to develop things. I love it. That’s why I spend hours every day typing weird stuff into my IDE and reading articles about the latest trends in JavaScript or the news about some Python release. To me, coding is as much a game as it is work – though not all days are as sunny and exciting, overall, programming is my raison d’être. And nothing in the world could make me trade that for sitting in front of my computer watching GPT-3 write my code for me.

To me, low-code is a perfect tool for prototyping, for inter-team discussions (especially between code specialists and citizen developers) and for automating or quickly setting up secondary products, but your main app should remain in the hands of trained experts in dev, devops, IT, marketing, UI, communication… We, as a team, can build wonderful things – let’s not forget that, nor ask computers to do everything for us! 😉

I usually balance my criticism of no-code with some positives – but to be honest, this time, I’m having a hard time with the idea of letting the machine code everything. Have I missed OpenAI’s goal? Have I misunderstood this new journey they offer? Is it just my knee-jerk developer reflex to someone depriving me of building apps my own way? Or is there something more in the offing we need to stay cautious of?

What do you think: is OpenAI’s revolutionary GPT-3 a good thing? Will it irreversibly change the software industry? Feel free to react in the comments!
