It isn't easy, and Big Tech is working on ethical agreements on the issue: we talked about it with an expert, Paolo Bottazzini

The Biden administration has worked hard to push the most important tech companies in the world to work on ethical transparency and to join the Hiroshima Process.

What does this mean? That the Western world is asking how to make a product (more or less creative) of an artificial intelligence recognizable. This matters for obvious reasons, but also to reassure people of their independent judgment.

Could AI replace a competent and talented designer?

An example close to our own world makes the problem clearer. Let's imagine looking at a new design product. It is beautiful, functional, impeccable from the point of view of innovation, sustainability and the intelligence that supports the entire design process.

Let's imagine that it sells well, that it solves a serious problem, and that it poses no danger to human health or to that of the planet. The object ends up before the ADI commission.

Here, in this situation where creativity and human productive intelligence are rewarded, it is essential to know whether we are faced with an AI product or not.

To reassure, especially on the human factor. We really need to recognize ourselves in the creativity produced in our time, and let's face it: we are not yet sure we want a proxemic relationship with objects when we do not know by whom, or by what, they were designed.

Why it is important to distinguish between generative technology and the human brain

It's the same kind of caution we apply to anything new. The human brain doesn't like change. With the aggravating circumstance, in this case, of a genuinely delicate ethical issue.

The Biden administration is in fact taking the issue very seriously and, in view of the 2024 elections, is rapidly implementing a series of executive orders on public order, national security, health and the protection of citizens from racial and gender bias. And fake news, obviously.

How to distinguish an AI product from a man-made one?

We asked Paolo Bottazzini, professor of social network analysis at the philosophy department of the University of Milan and founder of VentunoLab. “There is a precise reason why the current American administration is worrying about the ethical problems related to the use of AI: the upcoming presidential elections.

But beyond this contingent problem, in the United States the urgency is also linked to the increasingly widespread use of AI by the administrative and judicial systems.

Artificial intelligence is used in courts to speed up the analysis of past cases, the body of precedent that constantly informs the legal system.

Furthermore, the police use AI to carry out predictive analyses, and racial and gender biases are very likely because machine learning is, in fact, driven by human prompts."

But returning to creative products, is it possible to recognize an image or text generated by a machine?

Bottazzini continues: “Obviously it is possible, but it can take time, more time than a fake image needs to travel around the world. The truth is that there is no system, practice or mechanism that can truly prevent the spread of fakes.

Generative technology is extremely open, accessible to virtually anyone, and simple to use.

Laws, from this point of view, serve to prosecute those who make unethical use of AI. Not only in the case of news or images, but also in much more serious cases for people's safety."

What does the text of the Hiroshima Process say

Recently the most important players in the AI sector, Amazon, OpenAI, Microsoft, Google and Meta, agreed on a voluntary code of conduct that establishes shared ethical guidelines for AI projects.

“It should be underlined that many tech companies have long integrated codes of ethics into their founding documents. So the topic is not new.

At present, the measures taken, for example, to make an AI image distinguishable from an analogue one are often naive. Putting a watermark on a digital image or embedding meta-information within the pixels is absolutely not enough for certainty. These are all systems that can easily be circumvented by practically any programmer or digital imaging expert."
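Bottazzini's point about embedded meta-information can be made concrete. The sketch below, using only the Python standard library, hand-builds a minimal PNG that carries a provenance label in a standard tEXt chunk, then strips it with a few lines of chunk parsing. The `ai_tag` label and the workflow are illustrative assumptions for this example, not any real watermarking standard.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Minimal 1x1 grayscale PNG, hand-built with the stdlib.
sig = b"\x89PNG\r\n\x1a\n"
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
# Hypothetical provenance label stored as a standard tEXt chunk.
text = chunk(b"tEXt", b"ai_tag\x00generated-by-model-x")
idat = chunk(b"IDAT", zlib.compress(b"\x00\xff"))  # filter byte + one pixel
iend = chunk(b"IEND", b"")
tagged = sig + ihdr + text + idat + iend

def strip_text_chunks(png: bytes) -> bytes:
    """Drop every tEXt chunk -- a few lines of code, no special tools."""
    out, pos = [png[:8]], 8
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # 12 = length + type + CRC fields
        if ctype != b"tEXt":
            out.append(png[pos:end])
        pos = end
    return b"".join(out)

laundered = strip_text_chunks(tagged)
```

After the call, `tagged` contains the `ai_tag` marker while `laundered` is a still-valid PNG with the marker gone, which is the sense in which such file-level labels offer no certainty.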

For now, common sense and ethics are the most useful weapons against fakes

So what is left to make us autonomous in analyzing news? “Common sense,” says Bottazzini. “The radical parts of society, such as those represented for example by Trump, have shown great capacity for organization and a good measure of audacity.

But the fact remains that they are not known for their sophistication: let's say that if tomorrow morning a photo of Biden committing a crime, or even better a video, began to circulate online, we would have the good sense to ask serious questions about its authenticity. The urgency of a shared ethics obviously remains, however, and it is the first real building block of the correct use of AI".

Because everything, as usual, depends on the intentions of those who program the machine, who build the software and hardware, and who design the machine learning process.

In this sense, the Hiroshima Process guarantees the willingness of Big Tech to act transparently towards institutions. A big step forward, because on that shared basis individual governments will be able to start thinking about and building a legal infrastructure.

European brands are proceeding with caution

As for the use of AI within companies, the discussion in Europe is still far from urgent. “Brands are reluctant to integrate AI into their processes; for now they do so only at a micro-procedural level.

It is a question of mistrust but also of identity. AI is destined to become a commodity similar to wi-fi: something we take for granted both as a service and in everyday use.

We already deal with AI on a daily basis, often unconsciously, and we are not bothered by it in the slightest, because in fact it is something that enormously improves and speeds up companies' services."

But from here to letting AI become pervasive in design or strategic processes is a really big step, concludes Paolo Bottazzini.

Cover photo: DeepAI - Bauhaus bed