Switzerland Sets Its Course on AI Legislation: Federal Council Outlines a Lean Regulatory Approach
Key Takeaways
- On 12 February 2025, the Federal Council announced Switzerland’s AI regulatory strategy. The strategy prioritises ratifying the AI Convention, favours targeted, sector-specific adjustments and promotes responsible AI practices through non-binding measures.
- While Swiss law is considered flexible enough to accommodate AI-related developments, businesses should monitor potential adjustments in liability, intellectual property, data protection and employment law, particularly in response to evolving EU regulations.
- The timelines are generous: the draft legislation package is to be submitted to the Federal Council by the end of 2026. No legislative changes are therefore expected in Switzerland in the near future.
- Swiss companies are well-advised to keep a close eye on the EU AI Act, particularly if they export high-risk AI products to the EU. These companies will need to undergo a conformity assessment under EU law.
The Ratification of the AI Convention as a Key Priority
The global AI regulatory landscape continues to evolve rapidly, with no single approach emerging as the clear standard. Countries are divided between two regulatory approaches: a horizontal approach, applying uniform rules across all sectors (followed, for example, by the 27 EU member states, Canada, South Korea and Brazil), and a sector-specific approach, allowing for more targeted regulation (followed, for example, by the UK and Israel). Amidst these different strategies, there is increasingly a move towards a risk-based regulatory framework that prioritises managing the potential impact and severity of AI-related risks.
Switzerland, too, is setting its regulatory course. On 12 February 2025, the Federal Council (the Swiss executive branch) announced Switzerland’s AI regulatory strategy, based on a detailed analysis conducted by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Department of Foreign Affairs (FDFA) as part of the Federal Administration. The strategy revolves around the following key elements:
- Incorporating the Council of Europe’s AI Convention into Swiss law as the main priority, ensuring alignment with international obligations and adherence to best practices.
- Focusing legislative changes on selected sector-specific adjustments, with broader general regulations applied only where fundamental rights, such as personality rights, are affected.
- Encouraging non-binding measures, such as self-declaration frameworks or industry-led initiatives, to promote responsible AI practices and foster voluntary compliance.
According to the Federal Council, this regulatory approach should (i) strengthen Switzerland’s status as an innovation hub, (ii) safeguard fundamental rights, and (iii) increase public trust in AI technologies.
The Federal Council’s announcement aligns with the assumption made in our previous legal update on AI regulation (available here): no overarching horizontal AI regulation like the EU AI Act will be implemented. Instead, as in other areas, Switzerland remains committed to a targeted, sector-specific, technology-neutral approach, adapting existing laws where necessary to address AI-related challenges effectively while leaving room for economic growth and technological progress.
Does the Council of Europe’s AI Convention Have a Direct Impact on the Private Sector?
On 17 May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (the “AI Convention” or “Convention”). Once it enters into force, it will be the first legally binding international agreement on AI. With 57 countries, including Switzerland, involved in the negotiations, the Convention has a global scope and has already been signed by major international players, including the EU, the UK and the USA.
Switzerland intends to ratify the AI Convention as a key priority of its regulatory strategy, reinforcing its commitment to responsible AI governance. The Convention requires states to assess and mitigate risks associated with AI throughout its entire lifecycle, focusing on safeguarding human rights, democracy and the rule of law. While it does not impose outright bans, it mandates a risk-based approach, requiring proportionate measures based on the potential impact of AI on fundamental rights.
A key distinction in the Convention lies in its scope of applicability. While the Convention is directly applicable to the public sector, ratifying states have discretion in how they extend its provisions to the private sector. Upon ratification, Switzerland must decide whether to apply the Convention directly to private actors or introduce tailored national regulations or adjustments to achieve compliance.
For businesses operating in Switzerland, the ratification raises questions about its practical impact on the private sector. The regulatory effect will mainly depend on how Switzerland translates the Convention’s principles into national law. However, the Convention’s impact on the private sector is limited to AI systems that affect human rights, democracy or the rule of law. Fundamental rights could become particularly relevant where AI systems touch on areas such as non-discrimination, data protection or economic freedom. In other words, not all relationships between private actors will be affected: if no fundamental rights are implicated, the Convention’s provisions may not apply. For example, AI-related product defects – such as a robotic lawnmower failing to cut an entire lawn or a smart refrigerator setting the wrong temperature – fall outside its scope, as economic damage alone does not constitute a violation of fundamental rights.
As self-regulation by businesses alone will likely be insufficient to meet the AI Convention’s requirements, the Federal Administration identified legal adjustments that will be necessary where Swiss law does not yet fully align with the Convention’s objectives. Potential regulatory changes for the private sector could include expanded obligations regarding transparency, accountability and risk management. Companies may need to strengthen internal compliance mechanisms to ensure AI applications respect fundamental rights. For instance, the analysis highlights a possible extension of information obligations under the Federal Act on Data Protection (FADP) to semi-automated decisions.
Further Legislative Developments to Keep on the Radar
Overall, the Federal Administration regards Swiss law as sufficiently flexible to accommodate AI-related developments without the need for immediate reforms. However, given the regulatory influence of the EU, Swiss businesses must continue to monitor legislative trends to ensure alignment, where necessary. While certain areas in Swiss law – such as liability, intellectual property, data protection and employment law – may require targeted adjustments, there is currently no clear legislative momentum towards fundamental changes.
The legal analysis conducted by the Federal Department of Justice and Police (FDJP) summarises and highlights the following legal topics as being subject to increasing regulatory scrutiny:
Liability
Contractual Liability
Under Swiss contract law, a party seeking damages for breach of contract does not need to prove fault. Instead, the burden falls on the breaching party to demonstrate that it was not at fault (Art. 97(1) of the Swiss Code of Obligations (CO)). Whether this principle applies effectively to AI-related disputes remains a subject of debate, particularly whether liability could be too easily avoided by showing that all due care was taken in the operation or deployment of an AI system. Some legal scholars therefore propose aligning AI-related contractual liability with liability for auxiliary persons (Art. 101(1) CO), making the deploying party fully responsible for errors made by autonomous AI applications. However, DETEC’s analysis questions whether a legal gap truly exists, as courts can determine on a case-by-case basis whether the deployment of an AI system constitutes a breach of the duty of care. Furthermore, AI applications lack legal personality, unlike auxiliary persons, and are ultimately technical tools that fall within the risk sphere of the contracting party deploying them. No legislative action is therefore expected any time soon.
Non-Contractual Liability
With respect to non-contractual liability (tort law), Swiss law does not contain AI-specific provisions comparable to those debated in the EU. Instead, it relies on the general tort law framework under Art. 41(1) CO, which establishes liability for damage caused unlawfully and with intent or negligence. Claimants bear the burden of proving damage, unlawfulness, causation and fault (Art. 8 of the Swiss Civil Code), and Swiss civil procedure allows parties to request targeted disclosure of specific documents (Art. 160(1)(b) of the Swiss Civil Procedure Code).
In contrast, the European Commission’s 2022 proposal for a new AI Liability Directive sought to address challenges in proving fault for AI-related harm by introducing evidentiary relief mechanisms such as disclosure obligations and rebuttable presumptions. However, following criticism from the USA at the AI Action Summit in Paris, the European Commission has recently removed the AI Liability Directive from its 2025 legislative agenda, signalling a slowdown in its regulatory push.
Swiss courts already grant evidentiary relief in cases where proving causation is inherently difficult, relying on a standard of high probability. Unlike the proposed AI Liability Directive, however, Swiss law does not introduce presumptions of a breach of the duty of care or of causality. Furthermore, the Swiss strict liability framework ensures protection for injured parties in key AI-driven sectors: autonomous vehicles are covered by the strict liability of vehicle owners under Art. 58 of the Road Traffic Act (SVG), and drones by the strict liability of aircraft operators under Art. 64 of the Aviation Act (LFG), with mandatory insurance requirements effectively preventing liability gaps (Art. 63 SVG and Art. 70 LFG). As a result, the current framework for non-contractual liability appears well-suited to AI systems, with no immediate need for regulatory adjustments or new legislative concepts.
Product Liability
Swiss product liability law imposes strict liability on manufacturers for damages caused by defective products but excludes damage to the product itself. The revised EU Product Liability Directive, which entered into force on 8 December 2024, expands liability to software, including AI systems, and covers defective updates, cybersecurity flaws and data loss, introducing evidentiary facilitation similar to the proposed (and, for the time being, dropped) AI Liability Directive.
The Swiss Federal Law on Product Liability (PrHG), enacted as part of the Swiss Lex project in the wake of Switzerland’s aborted accession to the EEA, largely corresponds to the original 1985 version of the EU Product Liability Directive. As a consequence, a key debate persists over whether standalone software qualifies as a product under the PrHG. The Federal Supreme Court has not yet ruled on this question, but many legal scholars argue for an expansive interpretation covering defective software. In June 2023, the Swiss Federal Council recognised gaps in consumer protection for digital products and proposed an obligation for software providers to maintain updates, aligning with EU rules. While no immediate revision of the PrHG is planned, future adjustments to ensure compatibility with EU standards are likely.
Intellectual Property
Copyright Law
AI is transforming the way inventions, texts, music and images are created, prompting discussions about whether traditional copyright principles require adaptation. As identified in our previous legal update on AI and IP (available here), copyright protection in Switzerland requires a “human intellectual creation” with individual character (Art. 2(1) of the Copyright Act (CopA)). As only natural persons qualify as authors under Art. 6 CopA, purely AI-generated works are not protected, and no revision of this approach is expected. The published analysis also confirms that human intervention in AI-assisted works may already suffice for copyright eligibility under current law, depending on the extent of the creative input.
Another key issue that will likely lead to legislative adjustments is whether the use of copyrighted materials for AI training constitutes an infringement. If training involves reproduction of protected works, the consent of rightsholders is required unless a statutory exception applies. CopA provides exemptions for internal business use (Art. 19(1)(c)), temporary copies (Art. 24a) and scientific research (Art. 24d), but their applicability to AI remains uncertain. The EU introduced a text and data mining (TDM) exception in the Directive on Copyright in the Digital Single Market (EU Directive 2019/790), allowing for certain automated data processing activities without prior authorisation from copyright holders under specific conditions (see Section 44b of the German Copyright Act). It remains to be seen whether Swiss copyright law will adopt a similarly broad TDM exception in the future. If the use of copyrighted materials for AI training were to be permitted without explicit consent, the question would arise as to how the interests of affected copyright holders should be safeguarded. While the partial revision of the Federal Copyright Act initially proposed a neighbouring right for media companies, consultation participants opposed the regulation of copyright-related aspects at this stage.
Patent Law
Legislative amendments in Switzerland regarding AI-related patent law are not expected at this time. According to the analysis, the rapid increase in AI-driven patent applications suggests that the existing legal framework is functioning effectively. Core patentability concepts such as “state of the art”, “novelty”, “inventive step” and “disclosure requirements” remain adaptable to technological advancements, allowing intellectual property offices and courts to refine their interpretation as needed.
While AI challenges traditional notions of inventiveness and disclosure, regulatory bodies have provided their standpoint on the matter. The European Patent Office, for example, has addressed the disclosure of AI training data, stating that if a technical effect depends on specific characteristics of the training dataset, the necessary features must be disclosed, unless a skilled person can determine them without undue effort using general technical knowledge. However, the dataset itself generally does not need to be disclosed.
Since human involvement remains central to research and development, ongoing international discussions about whether the requirement to name a natural person as an inventor can be revised appear premature. Swiss lawmakers currently see no need for legislative reform in this area.
Employment Law
AI-driven applications in recruitment, performance monitoring and workplace decision-making continue to raise legal and ethical concerns. Swiss law already provides safeguards against potential risks associated with AI in employment. The FADP and Art. 328b CO govern data collection and processing in employment relationships. Employers must ensure transparency, proportionality and legitimacy when using AI for decision-making, particularly if sensitive personal data is involved. Furthermore, Art. 26 of the Ordinance on Health and Safety restricts surveillance mechanisms, which could limit certain AI-based monitoring tools. However, these rules do not directly address the opacity of AI systems or the broader implications of automated decision-making on employee rights.
A parliamentary motion submitted in December 2023 calls for stronger employee participation rights, expanded information rights, collective legal remedies and potential sanctions related to AI use in the workplace. So far, however, this motion has not gained political traction.
From a comparative law perspective, the regulation of platform work is a major topic in the EU, with ongoing discussions on its legal framework. By contrast, Switzerland has not yet placed significant focus on this issue.
The analysis concludes that future regulatory developments in Swiss employment law will need to be assessed within the broader AI governance framework. If Switzerland aligns with the AI Convention, key employment-related concerns – transparency, oversight, non-discrimination and data protection – are likely to be addressed as part of a comprehensive AI regulatory approach rather than through isolated employment law reforms.
Next Steps
For Swiss companies, the ratification of the AI Convention will not bring immediate regulatory changes, and no direct action is required in the short or medium term. Any legislative adjustments will take several years, and for now AI applications in Switzerland continue to be governed by the existing legal framework. The Federal Council has mandated the FDJP to draft, jointly with other federal departments, a public consultation proposal for incorporating the AI Convention into Swiss law. This draft is expected by the end of 2026. The Federal Administration is also mandated to draw up, by the end of 2026, an implementation plan for measures not laid down in legislation, to ensure that Switzerland’s regulatory approach aligns with that of its key trading partners. The ratification of the AI Convention is subject to parliamentary approval and may additionally be put to a referendum if requested by a sufficient number of voters.
In regulated sectors, authorities such as FINMA are already working to establish guidance based on current laws, ensuring AI compliance and oversight where necessary (see previous legal update for details).
A more pressing issue for Swiss companies is the impact of the EU AI Act, particularly for those operating in the EU market. Swiss companies that place AI products on the EU market must comply with the requirements of the AI Act (see previous legal update for details). According to the FDJP’s legal analysis, a further challenge is that Swiss companies exporting high-risk AI products to the EU will need to undergo a conformity assessment within the EU, as the agreement on mutual recognition of conformity assessments (MRA) between Switzerland and the EU remains blocked for political reasons. This means that Swiss companies cannot rely on Swiss conformity assessments and must instead meet EU requirements directly.
Contributors: Simon Winkler (Associate), Sarah Drukarch (Partner) and Markus Winkler (Counsel)
No legal or tax advice
This legal update provides a high-level overview and does not claim to be comprehensive. It does not represent legal or tax advice. If you have any questions relating to this legal update or would like to have advice concerning your particular circumstances, please get in touch with your contact at Pestalozzi Attorneys at Law Ltd. or one of the contact persons mentioned in this legal update.
© 2025 Pestalozzi Attorneys at Law Ltd. All rights reserved.