Regulated, Responsible, Restricted: Three Dimensions of Artificial Intelligence

Italy has banned ChatGPT, the chatbot developed by US company OpenAI, over concerns that it violates EU data privacy rules, sparking a wider debate on AI regulation. In this blog, we discuss three dimensions of AI: regulated, responsible, and restricted.

Francisco Mainez
3 min
The price of greatness is responsibility
— Winston Churchill

Introduction

After a heated public debate involving prominent figures in technology, science, and politics about the future of Artificial Intelligence (AI), Italy took the unusual step of banning ChatGPT, the chatbot developed by OpenAI, a company based in the United States. The Italian Data Protection Authority (Garante) ordered OpenAI to temporarily stop processing the data of Italian users over possible violations of EU data privacy rules. The Garante argued that there was no legal basis for collecting and processing large amounts of personal data to train the algorithms that power the platform.

The decision prompted other countries and industries to state their own positions on AI. Behind it lies a larger debate about the regulation and responsible use of AI, and about how its growing presence in our lives will affect us.

Is regulation catching up with innovation?

Calls to regulate AI are nothing new, but the technology has advanced so quickly that governments struggle to keep up. Computers can now generate content, art, and code, raising concerns about job security, data privacy, and the spread of false information. The last of these was thrown into sharp relief by the 2018 Cambridge Analytica scandal, which led to new rules on the use of consumer data.

Governments are taking different approaches to AI systems like ChatGPT: some are weighing bans, while others favor regulation. The UK's proposal asks regulators in different sectors to apply existing rules to AI. The EU, by contrast, is expected to take a more restrictive line, closer to Italy's, through a new law known as the EU AI Act.

Restrictive approaches to AI regulation would mean imposing strict limits in critical areas such as education, law enforcement, defense, and the judicial system. The EU would enforce these measures alongside the General Data Protection Regulation (GDPR).

In contrast, the UK has taken a more lenient path, diverging from EU digital law. Its non-statutory approach focuses on balancing risks and opportunities while adhering to a set of core principles: AI products in the UK are expected to prioritize safety, transparency, fairness, and accountability.

Meanwhile, other key players in the AI market, such as the US and China, have not proposed additional oversight rules. The US has issued no further guidance, and China's strict internet censorship regime means ChatGPT is unavailable there.

Responsible AI

While regulation of AI has been slow to progress, promoting the responsible use of AI has been under discussion for some time, particularly since scandals involving the misuse of personal data came to light.

Many compliance solution providers now emphasize that their AI solutions are fully explainable and designed to empower employees and businesses, maximizing efficiency while minimizing gaps in financial crime prevention.

Ethics in AI is now often included in training programs in relevant industries, and governance structures have been established to ensure that policies exist and are regularly reviewed.

However, responsible use of AI is an ongoing effort. In light of recent developments, financial institutions, solution providers, and other users should use their governance structures to review their responsibility statements and ensure that they are still appropriate and effective.

“Restrictions will set you free”

W.A. Mathieu's line, “restrictions will set you free”, may sound paradoxical, but it captures the case for regulation in the AI field. While some countries restrict AI training outright, compliance systems that use AI need guidance to evolve in ways that combat financial crime without being invasive, discriminatory, or harmful to users.

To combat global financial crime effectively, compliance efforts need a unified, global approach that includes data sharing and coordinated action on AI. Heavy restrictions may seem appealing, but they could create pockets of secrecy precisely where transparency is needed most. A lack of regulation, on the other hand, could leave both financial institutions and criminals operating without guidance, with customers and the public at large vulnerable as a result.

It's important to protect the public from the harmful uses of AI by bad actors. But we also need clear rules and boundaries for AI, so that there is accountability for how it is managed. History shows that uncontrolled developments can lead to dangerous situations, but it has also given us scientific advances that are out of this world (literally). AI is a powerful tool for preventing financial crime; we need to manage it carefully rather than try to stop it altogether.

To learn more about Lucinity and how we use AI responsibly, book a demo.
