Why I don’t call what I do “artificial intelligence”

While I use machine learning (ML), or sometimes applied or computational statistics, to describe[1] what I do, the term artificial intelligence (AI) is now very widespread in society, industry, politics, and marketing. The same goes for Italian, where the shorthand is very similar: IA. However, I try to refrain from using it as much as possible. Why? Out of coherence, and out of cultural and societal concerns.

AI is overused and already characterized in literature and cinema.

While we like to think that everyone now knows what we do, we’re nowhere near that place. There are still millions of people who, when they hear artificial intelligence, think of the machines in The Matrix. They think of consciousness, of AI-human wars, even of AI being alive. They know that AI is either something malevolent to be stopped or a benevolent force that will consciously help humanity. Does that sound silly? A Google engineer some time ago thought that their chat model was alive, and recently a letter signed by a few AI practitioners warned of the danger of a war against AI.

AI is already heavily characterized in pop culture, and what we’re doing has nothing to do with that.

I don’t think what we create qualifies as intelligence.

The question of “what is intelligence” is philosophical in nature and may never have one uniquely correct answer. For me, the main component of intelligence is creativity. When we see something that’s scalding hot and we want to move it, we might poke it with a stick, protect our hand with a thick glove, kick it quickly so that our skin doesn’t get burned, or something else entirely. We might even come up with something that nobody has ever done before. It feels strange to think that something so simple might have a historically unique answer, but in the end, everything that exists was made by someone, or by a team, for the first time ever.

A computer isn’t able to create. If nobody has ever done something, a computer won’t invent it.[2]

And I’ll hazard a prediction: a computer will never actually be able to be creative. Of course, what it means to be creative is also a philosophical question. Painting something beautiful used to be held up as an example of creativity, but a computer can emulate it by using known paintings and image patterns, rearranging them randomly or according to a distribution, or joining different known techniques and patterns in a new way. It turns out that not every painting is actually an act of creativity. Who would have guessed?

Of course, there are more components to intelligence: memory, the ability to learn from knowledge and experience, the ability to make calculations. And while the computer obviously has memory and calculation power, the way a computer learns is, I would reckon, nowhere near the way a human learns. When we get down to the nitty-gritty, a computer learns by optimizing a mathematical function with math and statistics. What is that function? Who chooses it? Who chooses the metrics? Who chooses the datasets and the algorithms? Humans do, because that requires real reasoning, real intelligence.
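To make that concrete, here is a minimal sketch of what “learning” means at that level. Everything in it is an illustrative assumption - the toy data, the linear model, the squared-error loss, the learning rate - and every one of those choices is made by a human, not by the computer:

```python
# A computer "learns" by nudging a parameter downhill on a loss function.
# Humans chose the data, the model, the loss, and the learning rate;
# the computer only carries out the optimization.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs collected by humans

def loss(w):
    # squared error: a human's definition of "how wrong the model is"
    return sum((w * x - y) ** 2 for x, y in data)

def grad(w):
    # derivative of the loss with respect to w
    return sum(2 * (w * x - y) * x for x, y in data)

w = 0.0               # starting guess for the model y = w * x
learning_rate = 0.01  # chosen by a human, often by trial and error

for _ in range(100):
    w -= learning_rate * grad(w)  # "learning" is just this line, repeated

print(f"learned w = {w:.3f}, final loss = {loss(w):.3f}")
```

There is no reasoning in that loop, only repeated arithmetic toward a target that a person defined.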

And while there are some parallels between machine learning and the sociology of human growth and acquired behavior - where behavior is encouraged or discouraged by the social groups we find ourselves in - I still think there is more than enough difference to see the two learning processes as deeply different.

Much of the human learning process has to do with rewards, with the human need for belonging and social recognition, with the very human emotions of fear and love, with the existence of death.

Can rewards and punishments be emulated well by mathematical functions? I don’t know. Maybe? Possibly not, possibly never? But not today, for sure.

The term makes it sound like AI is not the work of humans, or that the results of AI are not the work of humans.

This is crucial. An AI denied your mortgage application? No, a human did that. Most likely a team of humans. We as machine learning developers and data scientists need to own the results of our work: especially its deficiencies, especially its biases, its idiosyncrasies, its reinforcement of historical unfairness. Just as much as its successes.

When we train our models on historical data without accounting for the fact that historical data paints a picture of an unfair world, our ML models will replicate that unfairness. Experts know this all too well; in fact, it’s taught in data science courses. Computers don’t have ethics, and they don’t see the bias themselves. They don’t know what discrimination is, and even if we taught them that (again, with mathematical functions[3]), it is only humans who can tell a computer that discrimination is bad. Is adherence to the optimization of mathematical functions the same as a fair mind, empathy, the experience of pain, and the hope for a better future - or will it ever be? Maybe. Maybe not, maybe never. But for sure it is only humans who can tell an algorithm what to optimize.
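To illustrate footnote 3’s point with a sketch: proposals for “teaching” a model about discrimination typically amount to adding yet another human-chosen term to a human-chosen loss. The penalty below is a demographic-parity-style gap, one idea among several in the fairness literature; the function names and the weight lam are illustrative assumptions, not an established API:

```python
# A hypothetical fairness-aware loss. The model never "understands"
# discrimination; it only sees one more number that humans asked it
# to push down.

def task_loss(predictions, labels):
    # ordinary squared error on the task itself
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)

def fairness_penalty(predictions, groups):
    # demographic-parity-style gap: the difference between the average
    # prediction given to group "a" and to group "b"
    def avg(g):
        vals = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(vals) / len(vals)
    return abs(avg("a") - avg("b"))

def total_loss(predictions, labels, groups, lam=0.5):
    # lam trades task accuracy against the fairness term; a human chooses
    # lam, just as humans chose both terms in the first place
    return task_loss(predictions, labels) + lam * fairness_penalty(predictions, groups)

# toy usage
preds  = [0.8, 0.3, 0.6, 0.2]
labels = [1.0, 0.0, 1.0, 0.0]
groups = ["a", "a", "b", "b"]
print(total_loss(preds, labels, groups))  # task error plus weighted fairness gap
```

Even here, it is humans who decided that the gap matters, which groups to compare, and how much weight fairness gets against accuracy.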

Who creates the content?

The only reason ChatGPT is able to write your college essay is that it has read billions of college essays. So if the only way AI can produce results is by building on humans’ work, is AI really anything at all without the human experience? I would argue not.

In fact, it is the developers and financiers of ChatGPT who are writing your college essay - along with all the millions of people who authored those billions of pieces of original work.

My conclusions

What I think should really be at the forefront of social discussion is the impact and consequences of AI. The European Union - following its GDPR work on privacy[4] - is doing massive work on AI regulation, which looks to be a good step forward, but this discussion cannot be left to experts only. We need to decide as a society how, and in what direction, to employ our collective efforts. And the place of experts is to educate, yes, but most importantly to own our work, its results, and its impact.

We are data scientists developing machine learning algorithms. We are the artificial intelligence. And - we are not so artificial ourselves, and our computers are not so very intelligent at all.


  1. Not much of a description, yes. As I am proofreading this, I’ve realized my next post should probably be “How would I describe what I do?”↩︎

  2. One exception: if we asked a computer to list 1 million things that could work for moving something scalding, and then looked at those million ideas, there might be something new among them - not because of intelligence, but because there were 1 million minus one silly ideas. ↩︎

  3. Today, it’s not even clear how we would model discrimination and make it part of our loss functions. I have read some really cool ideas, though. ↩︎

  4. Work which, while good, is not perfect at all. Already, it looks like the protection against unsolicited marketing communications is being hollowed out by a “legitimate interest” interpretation that almost completely empties the GDPR’s protections against the use of personal data for commercial purposes. ↩︎
