It's time to explore the creation of "AI-free sanctuaries"


The potential benefits of AI to humanity are relatively clear, ranging from promoting social justice, combating systemic racism, and improving cancer detection to alleviating environmental crises and increasing productivity.

However, the dark side of AI is also coming into focus, including the intensification of racial prejudice, the deepening of socioeconomic disparities, and even the capacity to manipulate human emotions and behavior.

     The first AI rulebook in the West?

Despite the growing risks, there is still no binding national or international regime that regulates AI. That is why the European Commission's proposed AI regulation is so rare and valuable. The bill assesses the risks inherent in AI technologies and sorts them into "unacceptable," "high," and "other" categories (a schematic sketch of this tiering follows the list below). Within the "unacceptable" category, the following AI practices would be prohibited outright:

- Manipulating a person's behavior in a way that causes or is likely to cause physical or psychological harm to that person or to others.

- Exploiting the vulnerabilities of specific groups (e.g., due to age or disability) to distort their behavior in ways that cause or are likely to cause harm.

- Assessing and classifying people on the basis of their social behavior (so-called social scoring).

- Using real-time facial recognition for law enforcement in publicly accessible spaces, except in narrowly defined circumstances (e.g., in response to terrorist attacks).
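To make the tiering concrete, here is a minimal sketch of how a compliance team might encode these categories, assuming a simple triage workflow; the tier names follow the bill, but the practice labels, the `classify` helper, and the `flagged_high_risk` parameter are illustrative assumptions rather than anything defined in the proposal.

```python
# Hypothetical sketch of the proposal's risk tiers (not the Commission's own schema).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # allowed, but subject to strict obligations
    OTHER = "other"                 # lighter or no specific obligations

# Practices the proposal would ban, paraphrased from the list above.
PROHIBITED_PRACTICES = {
    "harmful behavioural manipulation",
    "exploitation of vulnerable groups",
    "social scoring",
    "real-time facial recognition for law enforcement in public spaces",
}

def classify(use_case: str, flagged_high_risk: bool = False) -> RiskTier:
    """Hypothetical triage helper: map a described use case to a risk tier."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    return RiskTier.HIGH if flagged_high_risk else RiskTier.OTHER

print(classify("social scoring"))                            # RiskTier.UNACCEPTABLE
print(classify("exam proctoring", flagged_high_risk=True))   # RiskTier.HIGH
```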

In the bill, "unacceptable" risk is closely tied to the concept of harm. Taken together, these provisions point to the need to shield certain activities and physical spaces from AI intrusion.

     A sanctuary from AI

There is widespread opposition to deploying AI systems in settings that touch on rights, privacy, and cognitive capacity. It is therefore important to define clearly which spaces should be subject to strict regulation of AI activity.

A "sanctuary from AI" in no way implies a complete ban on AI systems, but rather a stricter regulation of the practical application of such technologies. The EU's AI bill, for example, calls for a more precise definition of the concept of harm. But there is no clear definition of harm that can be operationalized, either in the proposed draft EU legislation or at the level of individual EU member states.

Professor Suzanne Vergnolle, a scholar of technology law, argues that one potential solution is to establish common standards across EU member states that spell out exactly which types of harm AI practices may cause, taking into account the collective harms that can arise in different ethnic and socioeconomic contexts.

To create AI-free protected areas, it would be logical to adopt regulations that allow people to protect their cognitive and mental integrity. One possible starting point is a new generation of human rights - "neuro-rights" - designed to ensure that our cognitive freedoms remain protected in the face of rapid advances in neurotechnology.

      AI-free sanctuaries: a manifesto for their creation

Here, I would like to sketch a declaration of basic rights for "AI-free sanctuaries," comprising the following provisional provisions:

- The right to opt out. Everyone has the right to opt out of AI support in sensitive areas for a specified period of time. During that time, AI systems would be required to be entirely non-intrusive or only minimally intrusive.

- No negative impact. Opting out of AI support must never carry economic or social penalties.

- Human decision making. Everyone retains the right to make the final decision themselves rather than delegate it to an AI system.

- Sensitive areas and populations. Government authorities should work with civil society and private actors to identify domains (e.g., education, health) and social groups (e.g., children) that, because of their particular sensitivity, should not be exposed to invasive AI at all, or only under adapted conditions.

     Establishing AI-free protected areas in the real world

To date, the adoption of "AI-free spaces" has been geographically uneven. Some U.S. and European schools are beginning to deliberately keep electronic screens out of the classroom - the so-called "low-tech/tech-free education" movement. Yet many digital education programs still rely on psychologically addictive design, and public and poorly funded schools tend to depend more heavily on screens and digital tools, which only deepens the social divide.

Even outside controlled environments such as classrooms, the reach of AI is expanding. In response, between 2019 and 2021 more than a dozen U.S. cities passed laws restricting or prohibiting the use of facial recognition technology for law enforcement purposes. Since 2022, however, several of those cities have backtracked amid rising crime rates. France, for its part, has decided to deploy AI surveillance cameras at the 2024 Paris Olympics, despite recommendations from the European Commission.

In addition, facial-analysis AI, which can exacerbate inequality, has occasionally appeared in job interviews. Trained on data about previously successful candidates, such systems may tend to favor applicants from privileged backgrounds while screening out those from disadvantaged ones. This practice deserves to be banned.
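To show why that concern follows from the training data alone, here is a minimal toy simulation, not drawn from the article, in which a screening score fitted to past hiring outcomes inherits the bias that produced them; all numbers, feature names, and helper functions below are hypothetical.

```python
# Toy illustration: a screener "trained" on biased historical hires reproduces the bias.
import random

random.seed(0)

def past_hiring_records(n=10_000):
    """Simulate historical decisions in which a proxy for privileged background
    boosted the odds of being hired, independent of skill."""
    records = []
    for _ in range(n):
        privileged = random.random() < 0.3           # 30% of past applicants
        skill = random.gauss(0.5, 0.15)              # identical skill distribution
        hired = skill + (0.2 if privileged else 0.0) + random.gauss(0, 0.05) > 0.65
        records.append((privileged, skill, hired))
    return records

def fit_naive_screener(records):
    """'Learn' the hire rate conditioned on the background proxy, mimicking a
    model that treats background as if it were a predictive signal."""
    rates = {}
    for group in (True, False):
        rows = [r for r in records if r[0] == group]
        rates[group] = sum(r[2] for r in rows) / len(rows)
    return rates

def screen(privileged, skill, rates):
    """Score a new applicant: skill plus the learned group 'prior'."""
    return skill + rates[privileged]

history = past_hiring_records()
learned_rates = fit_naive_screener(history)
print("learned hire rate by background:", learned_rates)

# Two equally skilled applicants receive different scores purely because of
# the background proxy baked into the historical data.
print("privileged applicant score:   ", screen(True, 0.6, learned_rates))
print("disadvantaged applicant score:", screen(False, 0.6, learned_rates))
```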

      Allowing progress, but also preserving rights

The rights highlighted by AI-free sanctuaries would allow AI technology to keep advancing while protecting the cognitive and emotional capacities of every human being. If we want to go on acquiring knowledge and experience on our own terms, and to preserve our own moral judgment, we must retain the option of not using AI.

In Dan Simmons' novel, the poet Keats achieves a "cyber" rebirth and disconnects from the dataspace in order to resist the encroachment of AI. The image is illuminating because it reveals how pervasive AI's influence on art, music, literature, and culture could become. Beyond questions of copyright, these human activities are bound up with our imagination and creativity, and they will be the cornerstone of our ability to resist AI intrusion and maintain independent thinking.