Artificial Intelligence | Pestalozzi Attorneys at Law

The visuals on this site were created with AI and then reworked manually.

AI has transformed the digital landscape, presenting vast potential to enhance workflows, support decision-making, and unlock innovative opportunities across various sectors.

Since the launch of ChatGPT in November 2022, AI has moved beyond early adoption, becoming widely accessible and gaining public interest. Companies are increasingly relying on AI to create content, streamline processes, and analyze data. However, with these opportunities come challenges: legal uncertainties, ethical considerations, and data protection requirements demand careful attention and clear policies.

A Practical Legal Guide

"Navigating AI with Pestalozzi – A Practical Legal Guide" serves as a practical manual for Swiss companies to identify key legal issues and navigating the complexities of introducing and using AI internally. Through our six-part series, we provide a pragmatic overview of critical issues focusing on corporate governance, regulation, liability, data protection, intellectual property, and employment.

This Guide is aimed at AI users who use an AI application to create a work product for resale or to support their own work.

Download the complete booklet here.

Are you interested in one specific topic only? Here is a short overview of the topics covered.

Effective AI governance balances innovation with compliance, ethical considerations, and commercial value.

A clear allocation of the different levels of responsibility regarding AI is essential for the safe use of AI tools.

Establishing policies and (ethical) guidelines, conducting regular audits, and providing training to raise awareness are key to mitigating the risks posed by AI.

AI governance is not a linear process that can be completed once; it is cyclical in nature and must be continually evaluated and revised.

Learn more about AI Governance in our legal update.

More about our Governance expertise.

To assess the relevant legal framework and consider associated rights and obligations, companies should first map the applicable laws for each specific use case.

Companies implementing AI face a complex legal and regulatory terrain that requires careful evaluation. The legal framework surrounding AI is constantly evolving, encompassing both national statutes and international conventions.

In Switzerland, there is currently no legislation or overarching regulation that specifically addresses AI. This does not imply, however, that AI operates in a legal vacuum. Rather, AI applications are governed by the prevailing general legal and regulatory frameworks.

Learn more about AI Regulation in our legal update.

More about our Regulation expertise.

What is not allowed without the use of AI is also prohibited when using AI.

In principle, companies deploying AI are liable for the output or actions generated by AI just as if they had generated the output or acted without the use of AI. This means that such companies are liable if they wilfully or negligently use AI tools in a way that constitutes a breach of contract or a tort, and if this use of AI causes damage to others. Diligence is therefore key when offering AI-powered services.

Learn more about AI Liability in our legal update.

More about our Liability expertise.

It remains important to remember that not every type of data is automatically personal data.

Thanks to “Big Data” technologies, the analysis of data previously limited to a company’s own data warehouse can now be expanded to almost infinite amounts of data from an almost infinite number of sources.

If a company chooses an AI application, either internally as an “auxiliary” for employees (e.g., ChatGPT) or externally as a customer service tool (e.g., digital sales assistant chatbots on a company’s website), the company bears the responsibility for ensuring data protection.

Learn more about AI Data Protection in our legal update.

More about our Data Protection expertise.

Copyright protection is refused for works created by generative AI because the purpose of copyright law is to foster human creativity by rewarding human intellectual effort.

AI applications bring opportunities, but also significant legal risks, especially with regard to the protection and infringement of intellectual property rights.

Using generative AI applications that are trained, fine-tuned, or prompted on IP-protected content may infringe third parties’ IP rights and entail liability risks for the user of the application.

Learn more about AI Intellectual Property in our legal update.

More about our Intellectual Property expertise.

The employer can and should determine whether and to what extent employees are provided with and allowed to use a specific AI application.

The key legal risks of using AI in employment include bias and discrimination in hiring and firing processes, automated decisions, surveillance of employees, employee participation rights in the use of AI, and the use of AI as a working tool without explicit permission.

Learn more about AI Employment in our legal update.

More about our Employment expertise.
