Will Europe become the Digital Age Queen?
JAN 2021
In a virtual environment that evolves daily, parts of Europe’s current legal framework have inevitably become obsolete. The outbreak of Covid-19 led to a sharp surge in digital transactions, creating a need to modernize the policies governing online services.
The GDPR was just the beginning of what is intended to be a coordinated EU effort to regulate the digital market.
European Commission:
Digital Services Act / Digital Markets Act
White Paper on AI
Digital Europe Programme
The e-Commerce Directive 2000/31/EC has been the sole point of reference for the provision of digital services for the past 20 years.
The European Commission’s proposals for reshaping the digital environment – the Digital Services Act (DSA) and the Digital Markets Act (DMA) – aim, on the one hand, to safeguard internet users’ fundamental rights against illegal activities and, on the other, to even out “unfair” competition in the digital market.
In a nutshell, the DSA redefines the key categories of digital services, distinguishing between online service providers, such as caching services, for whom the exemption from liability for illegal content still stands under certain conditions, and online platforms, for which the “non-liability” clause is attenuated. Additionally, very large online platforms – i.e. “platforms which provide their services to a number of average monthly active recipients of the service in the Union equal to or higher than 45 million” (such as eBay) – face specific obligations, leaving them tiptoeing on ice in order to comply with the new regime. Such platforms will be obliged to implement risk assessment strategies and act ex ante to put content protection mechanisms in place.
“Transparency” is a key notion that will affect all online platforms, especially as far as online advertising is concerned. Cooperation with “trusted flaggers” in order to repress the circulation of illegal content, and traceability under a “know-your-business-customer” approach, are other important ones.
As far as the DMA is concerned, the Commission’s proposal aims to establish a legal framework for online platforms that fall under the definition of “gatekeepers”, in order to restrain unfair competition practices such as the unilateral exploitation of consumers’ behavioral data. By compelling gatekeepers to share that type of information equally with all market players, the DMA sets out to offer consumers more choice and lower prices, while at the same time enhancing the effectiveness of the business strategies of companies selling products online.
Given the high percentage of companies that do business today through online platforms, the EU’s regulatory effort was not only expected but long awaited. SMEs and startups would otherwise have fallen prey to the unconstrained practices of large platforms. The Commission’s proposals thus reflect the EU’s intention to safeguard its citizens’ fundamental rights and guarantee fair trade practices.
To that end, last July the “P2B” Regulation came into force, establishing a new legal framework for online platforms in order to guarantee the fair and transparent treatment of their business users (for example, through clear terms and conditions).
The European Commission’s White Paper on Artificial Intelligence aims to create an ecosystem based on two pillars: excellence and trust. The EC’s goal is to achieve this without jeopardizing the EU’s core democratic values, such as fundamental human rights. According to the White Paper, in order to accelerate research and innovation in AI, data management must comply with the FAIR principles (Findable, Accessible, Interoperable and Reusable).
Furthermore, the EC intends to incentivize AI researchers through Horizon 2020, an enormous public-private funding program.
Last October, the European Parliament adopted resolutions with recommendations to the European Commission, focusing on creating fertile ground for such technological growth without endangering human rights or legal certainty. By proposing a civil liability regime for AI and an effective intellectual property rights system for the development of AI technologies, the European Parliament is claiming a leading role in addressing AI challenges.
On Denmark’s initiative, a non-paper was signed by 10 EU digital ministers (among them those of Belgium, Finland, France, Estonia, the Netherlands and Sweden), calling on the Commission to incentivize the development of next-generation AI technologies rather than put up barriers: “EU should aim to establish an ecosystem of trust, where trustworthiness by design is a natural companion in any given AI solution…”
The European Commission is expected to launch a regulatory proposal within the first quarter of 2021, aiming to set out a risk-based approach to AI applications. In essence, by stepping in to shape AI’s integration into the digital market, the EU aims to minimize ethical risks and create a safe virtual environment that fosters public trust. Many Member States, such as France, Finland, Germany and Sweden, have already drafted national AI strategies in order to participate in the creation of such an AI policy framework; others are in the process of doing so. Greece has set up an expert team tasked with developing a national AI strategy.
European Court of Justice:
Schrems decisions I & II
Invalidation of the Privacy Shield / the GDPR front
Last July, the European Court of Justice (ECJ) invalidated the Privacy Shield – a legal tool that had enabled unrestricted EU-US data flows without technically violating the GDPR. In this landmark decision (Schrems II), the ECJ justified its ruling on the grounds that personal data transferred from the EU to the US could not be guaranteed the adequate level of protection required by the GDPR. Evidently, the Court’s decision falls in line with the EU’s attempt to circumscribe the unfiltered use of personal data by GAFA.
It is true that the GDPR’s first two years of application brought to the surface several issues requiring further adjustment and clarification.
The European Data Protection Board (EDPB), as well as the national DPAs, have provided useful guidelines, though not always consistent ones.
In line with the European Commission’s intentions on this matter, the French Data Protection Authority (CNIL) fined Google €100 million last December for placing advertising cookies without properly obtaining users’ consent.
The recent suspension of former US President Trump’s accounts by Twitter highlights the need for a uniform and clear legal framework.
A side effect of the radical precautionary measures taken worldwide against the spread of Covid-19 was the sudden need for broad use of AI tools (such as contact-tracing apps) involving massive data sharing. Biomedicine, autonomous vehicles, online chatbots, recruitment algorithms, facial recognition systems and even automated judicial decision systems are some of the AI instruments that will raise ethical and legal issues in the future.
The Commission’s determination to take the lead in all these innovative services is evident from the Digital Europe Programme:
with a €7.5 billion budget for 2021–2027, the Commission aims to boost investment in supercomputing (€2.2 billion), AI (€2.1 billion), cybersecurity (€1.7 billion), advanced digital skills (€580 million) and the wide use of digital technologies across the economy and society, including through Digital Innovation Hubs (€1.1 billion).
The efforts are significant and promising – it remains to be seen whether Europe will achieve technological sovereignty in a highly competitive digital economy.
27/01/21 ©Effie Mitsopoulou