Navigating AI with Pestalozzi – Part 2: Regulation

17.10.2024

Key Takeaways

  • Identify the relevant legal framework for your company by checking specific contracts or the AI provider's terms and conditions for a choice of law clause. In addition, assess where your AI providers and customers are located, as well as the geographic reach of the personal data streams and IP rights involved.
  • If Swiss law is applicable: Switzerland does not have an overarching regulation concerning AI. Instead, AI applications are governed by the existing legal framework.
  • Assess whether your company is subject to the extraterritorial reach of the EU AI Act due to an EU connection.
  • Monitor new regulations and federal authorities' interpretations of existing rules on an ongoing basis, as the regulatory landscape is changing rapidly.

Companies implementing AI face a complex legal and regulatory terrain that requires careful evaluation. The legal framework surrounding AI is constantly evolving, encompassing both national statutes and international conventions. AI regulations were first introduced in China, underscoring the country's proactive approach to AI governance. This was followed by President Biden's Executive Order on AI of 30 October 2023, which prioritizes the safe and trustworthy development of AI in the United States. Furthermore, the EU's wide-ranging Artificial Intelligence Act (the "EU AI Act") entered into force on 1 August 2024, aiming to create a comprehensive regulatory framework for AI in the EU.

Map out the Relevant Regulatory Framework

Given the global nature of the AI ecosystem, identifying the relevant jurisdictions can be challenging. To determine and review the relevant legal framework and the rights and obligations arising under it, every company must first map out the applicable law for each individual use case by analyzing the following (an illustrative sketch appears after the list):

  • Implicitly or explicitly agreed contractual provisions, including general terms and conditions ("GTC"), of AI providers;
  • the seat or domicile of involved AI providers and the company's customers; and
  • the geographic reach of the personal data streams and IP rights concerned.
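
Purely as an illustration, these three factors can be captured in a simple checklist structure. The following Python sketch is our own simplification; the field names carry no legal meaning and merely stand in for the review steps described above:

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        """Checklist record for mapping the applicable law of one AI use case."""
        name: str
        choice_of_law_in_contract: str | None = None   # e.g., "California", "Ireland", "Switzerland"
        provider_seat: str | None = None               # seat or domicile of the AI provider
        customer_locations: list[str] = field(default_factory=list)
        personal_data_jurisdictions: list[str] = field(default_factory=list)
        ip_protection_states: list[str] = field(default_factory=list)

        def open_questions(self) -> list[str]:
            """Flag the factors that still need to be reviewed."""
            gaps = []
            if self.choice_of_law_in_contract is None:
                gaps.append("Check contracts/GTC for a choice of law clause")
            if self.provider_seat is None:
                gaps.append("Determine the AI provider's seat or domicile")
            if not self.customer_locations:
                gaps.append("Record where the company's customers are located")
            if not self.personal_data_jurisdictions:
                gaps.append("Map the geographic reach of the personal data streams")
            if not self.ip_protection_states:
                gaps.append("Identify the states for which IP protection is sought")
            return gaps

    # A newly created use case starts with all review steps open.
    print(AIUseCase(name="Chatbot pilot").open_questions())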

As many jurisdictions still lack an AI-specific regulatory framework, companies need to rely on contractual protections to guard against potential challenges posed by AI applications. Typically, contracts between AI providers and acquiring companies ("AI-deploying companies") include a choice of law clause that specifies which legal framework governs their agreement. For example, OpenAI, the provider of the most prominent AI system, ChatGPT, designates U.S. law, specifically the laws of the State of California, as applicable in its GTC. Microsoft specifies in its GTC for its Copilot plug-ins that the applicable law depends on the user's location – for companies located in Europe, it is the law of Ireland. With smaller AI providers, there may be more room to negotiate the choice of law (e.g., in favour of Swiss law), depending on bargaining power. In view of the principle of party autonomy and the commercial setting, these choice of law clauses are generally upheld by the courts unless they are abused to deprive a party of rights that are fundamental in the country in which the court is seated.

While intellectual property rights are generally governed by the law of the state for which protection is sought, data protection laws often provide for a certain extraterritorial reach, being applicable to the processing of personal data that has an effect in that state, even if processed abroad.

No Legal Vacuum in Switzerland

In Switzerland, there is currently no legislation or overarching regulation that specifically addresses AI. This does not imply, however, that AI operates in a legal vacuum. Rather, AI applications are governed by the prevailing general legal and regulatory frameworks.

The Federal Council, Switzerland's executive body, is closely monitoring AI's potential legal and regulatory implications. In contrast to the European Union, which aims to address the technology comprehensively as such (horizontal regulation), Switzerland has so far promoted an agile, sector-specific regulatory strategy: measures should be taken, where necessary, in the relevant sectors, based on the existing legal framework and in a technologically neutral way.

The emerging international rules and standards, especially the EU AI Act, will nevertheless have a direct impact on Switzerland. The Federal Council therefore mandated the Federal Department of the Environment, Transport, Energy and Communications last year to identify the need for action and possible options for sectoral and, if necessary, horizontal measures by the end of 2024. This report will form the basis for a legislative proposal expected in 2025. Furthermore, on 17 May 2024, the Committee of Ministers of the Council of Europe adopted the Framework Convention on Artificial Intelligence at a meeting attended by the head of the Swiss Federal Department of Foreign Affairs. The convention aims to ensure that AI complies with the applicable legal standards on human rights, democracy, and the rule of law. Switzerland was actively involved throughout the negotiations.

If the regulatory assessment of your individual AI use case leads to the conclusion that Swiss law is applicable, the following federal statutes should be carefully considered in the context of AI (non-exhaustive list):

  • Swiss Code of Obligations (SR 220), in particular Art. 41 et seqq. and 99 et seqq. with regard to liability and Art. 319 et seqq. with regard to employment aspects;
  • Product Liability Act (SR 221.112.944);
  • Data Protection Act (SR 235.1);
  • Copyright Act (SR 231.1);
  • Trade Mark Protection Act (SR 232.11);
  • Designs Act (SR 232.12); and
  • Patents Act (SR 232.14).

Be Aware of the Extraterritorial Reach of the EU AI Act

The recently enacted EU AI Act applies to actors inside and outside the EU, as long as the AI system is placed on the EU market or its use affects persons in the EU. The EU AI Act provides for staggered dates of application of its provisions (an illustrative timeline sketch follows the list below):

  • As of 2 February 2025, the prohibitions of AI systems deemed to present an unacceptable risk will apply.
  • As of 2 August 2025, the rules for so-called general-purpose AI models will apply.
  • As of 2 August 2026, most of the remaining rules of the EU AI Act will apply – for most companies, this is the key date.
  • As of 2 August 2027, certain obligations for high-risk AI systems (embedded in regulated products) will apply.
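
Purely for illustration, this staggered timeline can be expressed as a small lookup showing which provision sets already apply on a given date; the labels below are our shorthand for the milestones listed above:

    from datetime import date

    # Staggered application dates of the EU AI Act, as listed above.
    MILESTONES = [
        (date(2025, 2, 2), "prohibitions of AI systems posing an unacceptable risk"),
        (date(2025, 8, 2), "rules for general-purpose AI models"),
        (date(2026, 8, 2), "most remaining rules of the EU AI Act"),
        (date(2027, 8, 2), "certain obligations for high-risk AI systems in regulated products"),
    ]

    def provisions_in_force(on: date) -> list[str]:
        """Return the provision sets already applicable on a given date."""
        return [label for milestone, label in MILESTONES if on >= milestone]

    # Example: on 1 January 2026, the prohibitions and the general-purpose AI rules apply.
    print(provisions_in_force(date(2026, 1, 1)))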

The definition of an AI system implemented by the EU is rather broad, capturing many systems already in use. Therefore, if a Swiss company has any ties to the EU with respect to AI, it is advisable to carefully assess whether the new regulation applies. If so, the company must first determine its own role and then evaluate the risk category of each of its AI systems. These two initial steps will allow the company to assess the EU AI Act's legal implications for its business.

Different Roles and Risk Categories Lead to Different Obligations

As a first step, a company with ties to the EU with respect to AI should determine which role applies to it. The EU AI Act distinguishes between four different roles (a simplified decision sketch follows the list below):

  1. Providers: Natural or legal persons who (i) place their AI system on the EU market, (ii) put their AI system into service in the EU, or (iii) use the output produced by their AI system in the EU are classified as providers, regardless of whether they are domiciled or established in the EU.
  2. Importers: Natural or legal persons domiciled or established in the EU are classified as importers if they place an AI system on the EU market that bears the name or trademark of a natural or legal person domiciled or established outside the EU.
  3. Distributors: Natural or legal persons who make an AI system available on the EU market without their activities qualifying for the role of a provider or importer are classified as distributors, regardless of whether they are domiciled or established in the EU.
  4. Deployers: Natural or legal persons who use an AI system under their authority qualify as deployers, except if the AI system is used only for personal, non-professional activities.
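
For illustration only, these four roles can be sketched as a rough decision helper. The boolean flags below are our own simplification – the statutory definitions contain further conditions, and one actor may hold several roles at once:

    from enum import Enum, auto

    class Role(Enum):
        PROVIDER = auto()
        IMPORTER = auto()
        DISTRIBUTOR = auto()
        DEPLOYER = auto()

    def classify_roles(
        *,
        places_on_eu_market: bool = False,
        puts_into_service_in_eu: bool = False,
        output_used_in_eu: bool = False,
        established_in_eu: bool = False,
        markets_under_non_eu_trademark: bool = False,
        makes_available_on_eu_market: bool = False,
        uses_under_own_authority: bool = False,
        personal_non_professional_use: bool = False,
    ) -> set[Role]:
        """Rough mapping of the four roles described above; an actor may hold several."""
        roles: set[Role] = set()
        if places_on_eu_market or puts_into_service_in_eu or output_used_in_eu:
            roles.add(Role.PROVIDER)
        if established_in_eu and markets_under_non_eu_trademark:
            roles.add(Role.IMPORTER)
        # Distributors only qualify if their activities do not already make them
        # a provider or importer.
        if makes_available_on_eu_market and not roles & {Role.PROVIDER, Role.IMPORTER}:
            roles.add(Role.DISTRIBUTOR)
        if uses_under_own_authority and not personal_non_professional_use:
            roles.add(Role.DEPLOYER)
        return roles

    # Example: a company merely using a third-party AI system in its business
    # qualifies as a deployer only.
    print(classify_roles(uses_under_own_authority=True))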

The EU has adopted a risk-based approach: in a second step, each relevant AI system must be classified into one of the four risk categories defined in the EU AI Act: (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) minimal or no risk.

Accordingly, AI systems posing an unacceptable risk, such as practices that threaten fundamental rights (e.g., social scoring, individual predictive policing, or untargeted scraping of facial images), are prohibited under the EU AI Act. High-risk AI systems are subject to rules on their design, governance, and transparency, such as data governance, impact assessments and/or human oversight; AI systems that could negatively affect people's safety or fundamental rights (e.g., applications used to recruit employees or to determine creditworthiness) fall under this second category. AI systems with limited risk, which may confuse or deceive users (e.g., chatbots), are subject to transparency obligations. AI systems with minimal or no risk (e.g., spam filters or text generators) can be developed and deployed without additional legal obligations.
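
As a compact, non-authoritative summary of the preceding paragraph, the four categories and their main consequences can be tabulated as follows; the wording is our paraphrase, not statutory text:

    # Paraphrased summary of the four risk categories described above.
    RISK_CATEGORIES = {
        "unacceptable": "Prohibited (e.g., social scoring, individual predictive "
                        "policing, untargeted scraping of facial images)",
        "high": "Rules on design, governance and transparency (data governance, "
                "impact assessments, human oversight)",
        "limited": "Transparency obligations (e.g., chatbots must be recognizable as AI)",
        "minimal or none": "No additional obligations under the EU AI Act",
    }

    for category, consequence in RISK_CATEGORIES.items():
        print(f"{category:>15}: {consequence}")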

Violations of the EU AI Act can result in significant fines – up to €35 million or, for companies, up to 7% of total annual worldwide turnover, whichever is higher. There are exceptions to the applicability of the EU AI Act, such as research and development, as well as exclusive use for military or national security purposes.
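
The fine cap for the most serious violations is thus simply the higher of the two figures – illustrative arithmetic only:

    def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
        """Upper fine limit for the most serious violations: the higher of
        EUR 35 million or 7% of total annual worldwide turnover."""
        return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

    # For a group with EUR 2 billion turnover, 7% (EUR 140 million) exceeds EUR 35 million.
    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")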

Is the EU AI Act Relevant for You?

As the EU AI Act is a comprehensive piece of legislation with extraterritorial reach, the likelihood of falling into a prohibited or regulated category of AI varies widely depending on the company in question. Annex III of the EU AI Act provides a list of high-risk AI systems, including some sector-specific use cases, which can help determine whether certain restrictions and obligations apply. Alternatively, there are several "EU AI Act Compliance Checkers" online that can be used to broadly assess one's risk category. These are not, however, officially provided by the EU but by independent organizations.

Essentially, the following questions must be answered in order to classify the AI system used under the EU AI Act:

  • Is the EU AI Act applicable to your company at all, or does one of its exceptions (e.g., research and development, or exclusive military or national security use) apply?
  • Which of the four roles does your company hold: provider, importer, distributor, or deployer?
  • Into which of the four risk categories does the AI system in question fall?

If, based on a first reading, the EU AI Act is potentially applicable to your company, we recommend a thorough assessment of the system as such, as well as of the legal obligations and implications according to the risk category into which your company's AI system falls.

Contributors: Sarah Drukarch (Partner), Simon Winkler (Associate), Luise Locher (Junior Associate)

No legal or tax advice

This legal update provides a high-level overview and does not claim to be comprehensive. It does not represent legal or tax advice. If you have any questions relating to this legal update or would like to have advice concerning your particular circumstances, please get in touch with your contact at Pestalozzi Attorneys at Law Ltd. or one of the contact persons mentioned in this legal update.

© 2024 Pestalozzi Attorneys at Law Ltd. All rights reserved.
