What is the Artificial Intelligence Act (AI Act) and how does it affect businesses?
Artificial intelligence (AI) has emerged as one of the most powerful tools available to companies in the 21st century, transforming the way organisations process and manage data. However, its rapid expansion has given rise to legal and ethical challenges, especially regarding the protection of personal data. The European regulator, with Regulation (EU) 2024/1689 (AI Act) and Regulation (EU) 2016/679, the General Data Protection Regulation (GDPR), as key instruments supported by other specific regulations, seeks to achieve the difficult and complex balance between the undeniable and urgent need for technological innovation and the essential protection of fundamental rights. In this context, any processing of personal data by AI systems must rely on a clear legal basis, guaranteeing the application of principles such as data minimisation and transparency in its use.
A key point of the new regulation is the right of individuals not to be subject to automated decisions without human intervention when these may significantly affect their rights. Furthermore, the European legislator has enacted Regulation (EU) 2018/1807 on the free flow of non-personal data, facilitating its free movement between Member States while requiring the application of appropriate technical and organisational measures to protect its integrity. Although non-personal data does not directly affect individual privacy, its correct management is essential to avoid operational risks and to foster trust in AI systems.
Within the framework of the GDPR, if a company transfers data to an AI system, only data classifiable as personal data is protected, whether it belongs to third parties or to the company itself (in the case of companies operating as natural persons), provided that the requirements set out in Article 6 have been met in its processing.
The recent approval of Regulation (EU) 2024/1689 on Artificial Intelligence (EU AI Act) extends the regulatory framework by imposing strict requirements on the quality of the data used by AI and prohibiting the use of information that could lead to discrimination. Moreover, it requires systems to be transparent and automated decisions to be reviewable by humans. This is complemented by best management practices that reduce the impact on individual privacy and enable companies to comply with the regulations while maximising the potential of their AI systems.
In this context, the EU-US Data Privacy Framework (DPF), approved in 2023, has attempted to improve the governance of data flows between the United States and the European Union.
What is the Data Privacy Framework (DPF) and why is it important for international data transfers?
This legally binding instrument aims to address the deficiencies identified in the Schrems II case and to strengthen safeguards against the indiscriminate gathering of personal data. To this end, it establishes clearer limits on access to personal data by US authorities for national security purposes, creates a Data Protection Review Court and encourages US companies that want to receive data from the EU to certify themselves under this framework, without prejudice to the use of the standard contractual clauses adopted by the European Commission. However, certification under the DPF is voluntary, and doubts remain about the real effectiveness of the restrictions imposed on government surveillance, especially in the US.
The EU AI Act adds further requirements in Europe for the transfer of data to, and its processing by, AI systems, especially those considered high-risk. It requires documenting how personal data is used and prohibits the transfer of information where it creates an unacceptable risk to fundamental rights. It also imposes restrictions on the mass gathering of biometric data and requires developers of generative AI to disclose whether they have used personal data in the training of their models, allowing users to have their information deleted if they wish.
On 2 February 2025, the first set of obligations under the AI Act became applicable, focusing on the prohibition of artificial intelligence practices classified as posing an ‘unacceptable risk’. These include the use of AI systems that manipulate human behaviour in a subliminal way or exploit people’s vulnerabilities due to their age or disability, social scoring by governments or companies, and the inference of emotions in work or educational environments. Furthermore, Article 4 of the AI Act establishes that providers and users of AI systems must ensure that their personnel have adequate knowledge of AI, including its opportunities and risks, in order to encourage safe and ethical use of the technology.
Social scoring is a system by which individuals or entities are assigned a rating based on their behaviour, actions or personal characteristics, using data taken from various sources. This concept has been widely debated due to its potential to restrict individual rights and freedoms.
A well-known example is the Chinese Social Credit System, which evaluates citizens and companies according to their financial history, compliance with rules and social behaviour, and can grant benefits or impose restrictions depending on the score obtained. This type of system raises concerns about privacy, discrimination and social control, as it could be used to limit access to essential services, restrict job opportunities, or even affect people’s mobility based on arbitrary or biased criteria.
On the other hand, the protection of corporate or trade secrets is a crucial factor in this environment. The Spanish Trade Secrets Act 1/2019, which implements Directive (EU) 2016/943, protects sensitive commercial information against unauthorised disclosure. Under this Act, for information or data to be considered a trade secret, it must be confidential, have commercial value and be actively protected. If corporate data is provided in a confidential context, its unauthorised disclosure may constitute a legal infringement. Obtaining such information is unlawful when done without the owner’s consent, through unauthorised access, copying or fraudulent appropriation. The Act also provides for sanctions against those who, having legitimately accessed a trade or corporate secret, use it outside the established contractual framework. Hence the need to establish, and not neglect, procedures within the business governance structure to manage and update the confidentiality agreements that should safeguard all commercial operations and relationships with customers, employees and suppliers.
Protection of personal data in an AI environment is an issue that requires special attention. According to the GDPR, any processing of personal data must rest on a valid legal basis, such as the informed consent of the data subject. Companies responsible for data processing must design their systems to ensure, as far as reasonably possible, compliance with their legal obligations. This means adapting to scenarios in which the company interacts with artificial intelligence environments: updating the data protection management systems already in place, which must include at least a detailed record of processing activities, and reviewing current processes to guarantee that personal information is processed in a lawful, fair and transparent way. The regulation also defines the fundamental principles of processing, including data minimisation, storage limitation and adequate security to prevent misuse.
Fines for GDPR non-compliance: Examples and penalties
As is well known, failure to comply with the GDPR and the AI Act can result in severe sanctions. The regulations classify infringements as minor, serious and very serious, with fines that can reach 20 million euros or 4% of global annual turnover, whichever is higher. Cases such as BBVA in 2020, fined 5 million euros, and Vodafone in 2021, fined 8.15 million euros, illustrate the strictness of the Spanish Data Protection Agency (AEPD, by its Spanish acronym) when applying the regulations. In addition, in particularly serious cases, infringements can lead to criminal liability, both at a personal and corporate level, with prison sentences of up to five years in the case of undue disclosure of secrets.
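By way of illustration, the following minimal sketch in Python (with purely illustrative turnover figures) shows the “whichever is higher” logic of the GDPR’s upper fining limit under Article 83(5):

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine under Article 83(5): EUR 20 million
    or 4% of global annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a company with EUR 2 billion turnover, 4% (EUR 80 million)
# exceeds the EUR 20 million floor, so the higher figure applies.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

For smaller companies the fixed 20-million-euro ceiling applies, which is why the fine can far exceed 4% of their actual turnover.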
Spanish regulations also establish measures to guarantee the security of information systems. Royal Decree-Law 12/2018, which transposes the NIS Directive, imposes obligations on operators of essential services to guarantee the security of networks and information systems. The decree also introduces amendments to the Spanish General Telecommunications Law and other regulations with the aim of strengthening the Administration’s capacity to supervise and control digital service operators and providers. This extension of powers is particularly important in emergency situations or when national security is threatened.
The measures laid down in this decree include new security obligations for telecommunications operators, in order to guarantee greater protection in the provision of these services. Furthermore, it provides for state intervention in cases of crisis or serious threats, allowing rapid and effective action against potential risks. To this end, coordination with organisations specialising in cybersecurity, such as CCN-CERT and INCIBE, is being strengthened, consolidating a strategic and preventive approach to the protection of digital infrastructures and telecommunications networks.
Failure to comply with these regulations can lead to fines of up to €500,000, underlining the importance of protecting information systems against possible breaches.
Regarding China, the Asian giant has developed its own regulatory framework for data protection, with regulations such as the Personal Information Protection Law (PIPL), similar to the European GDPR. Companies such as DeepSeek, which handle large volumes of data, are subject to strict government controls and limitations on the transfer of information outside the country. The Data Security Law and the Cybersecurity Law establish requirements regarding data classification and the obligation of local storage, reinforcing state control over information managed by AI. However, given the different approach that the EU applies to the protection of personal data, any transfer of personal data to China has to be assessed in each particular instance and will probably require specific contractual support to ensure compliance with the European regulatory framework.
In this context, companies must adopt a comprehensive approach to protect their business data from possible misuse by AI providers. It is advisable to establish clear contracts that define the ownership and permitted use of the data, limit access to authorised personnel only and include strict confidentiality clauses. In addition, technical and organisational measures such as data encryption, anonymisation and pseudonymisation should be implemented to minimise risks. Reviewing the Data Protection Impact Assessment (DPIA) is another key tool for identifying vulnerabilities and ensuring compliance in a new and changing regulatory environment.
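As an illustration of the pseudonymisation measure mentioned above, the following minimal Python sketch (with a hypothetical record structure and a placeholder key, not a definitive implementation) replaces a direct identifier with a keyed-hash pseudonym before a record is shared with an external AI provider. Note that, under the GDPR, pseudonymised data remains personal data as long as the key allows re-identification:

```python
import hmac
import hashlib

# Placeholder key for illustration only. In practice the key would be
# generated securely and stored separately from the pseudonymised
# dataset (e.g. in a key management service), since holding the key
# is what makes re-identification possible.
SECRET_KEY = b"replace-with-a-securely-generated-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym.

    The same input always maps to the same pseudonym, so records can
    still be linked for analysis, but the original value cannot be
    recovered without the secret key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical customer record.
record = {"customer_email": "ana@example.com", "purchase_total": 129.90}

# Before sharing the record with an external AI provider, the direct
# identifier is replaced by its pseudonym; the remaining fields are kept.
shared_record = {
    "customer_pseudonym": pseudonymise(record["customer_email"]),
    "purchase_total": record["purchase_total"],
}
print(shared_record)
```

Keeping the key separate from the dataset is what distinguishes pseudonymisation from mere hashing: whoever holds the key can re-link pseudonyms to individuals, so access to it should be restricted accordingly.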
Companies that process personal data must ensure that AI providers have operational processing tools designed in accordance with the existing regulatory framework, ensuring respect for the rights of data subjects and adopting proportionate security measures. In regulated sectors, such as finance or healthcare, additional specific regulations must be complied with. To reinforce security, it is essential to establish internal policies that regulate the use of data in AI projects, set time limits for its retention and use advanced tools for authentication and prevention of information leaks.
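Likewise, the retention limits referred to above can be made operational in code. The sketch below (in Python, with hypothetical data categories and purely illustrative periods, not legal advice) checks whether a record has exceeded the maximum storage period assigned to its category, flagging it for deletion or anonymisation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical internal retention policy: maximum storage periods per
# data category used in AI projects. The periods are illustrative;
# actual limits depend on the purpose of processing and the sector.
RETENTION_PERIODS = {
    "training_data": timedelta(days=365),
    "inference_logs": timedelta(days=90),
    "support_transcripts": timedelta(days=180),
}

def is_expired(category: str, collected_at: datetime) -> bool:
    """Return True when a record has exceeded the retention period for
    its category and should be deleted or anonymised."""
    limit = RETENTION_PERIODS[category]
    return datetime.now(timezone.utc) - collected_at > limit

# Example: an inference log collected 120 days ago exceeds its 90-day limit.
collected = datetime.now(timezone.utc) - timedelta(days=120)
print(is_expired("inference_logs", collected))  # True
```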
In conclusion, the success of AI in the business world will depend on the ability of organisations to navigate this complex regulatory environment. The implementation of access controls, audits and ethical supervision systems will help guarantee that tools using AI operate in a reasonably safe manner and in accordance with the law. As technology continues to evolve, an adaptive commitment to legality and ethical principles will be key to consolidating trust in artificial intelligence, transmitting that trust to customers, shareholders, employees and suppliers, and taking advantage of its potential while respecting fundamental legal tenets.
******
More information:
Lupicinio International Law Firm
C/ Villanueva 29
28001 Madrid
P: +34 91 436 00 90