Friday, 3 October 2025

AI GOVERNANCE AND THE STRATEGIC ROLE OF THE EU IN 2025

By: Carlos A. FERREYROS SOTO

Doctor of Law

University of Montpellier I, France.

cferreyros@hotmail.com

SUMMARY

Global AI governance in 2025 is unfolding in an unstable geopolitical context, marked by attempts at international coordination that reveal major disagreements among the world powers. The absence of a coherent global framework is leading to a proliferation of initiatives, forums and coalitions with diverging priorities. The tension between unilateralism and multilateralism weakens governance, allowing powerful private interests to exert disproportionate influence over the rules and over access to critical infrastructure.

Moreover, the gaps between technological development and regulation are growing ever wider. The European Union, traditionally a pioneer in regulation, now has a dwindling influence in this debate. To remain a credible global actor, Europe should rethink its strategy while also addressing its internal shortcomings in enforcement and its declining external influence.

This article also examines the current dynamics of governance and argues that regulatory ambition alone is no longer sufficient. To remain credible, the EU needs to combine rulemaking with research and development capabilities and to build alliances and coalitions based on shared strategic interests rather than on uncertain policies and regulation.

The English text is attached, without correction on my part. The official citation for the document, together with the link to the original English text, is the following: European University Institute and Cantero Gamito, M., AI governance and the EU’s strategic role in 2025, European University Institute, 2025, https://data.europa.eu/doi/10.2870/2955242

Companies and public and private organisations interested in accessing similar rules and European standards, or in advisory services, consultancy, training, studies, evaluations or audits on this subject, may write to the following email address: cferreyros@hotmail.com

___________________________________________________

SCHOOL OF TRANSNATIONAL GOVERNANCE

STG Policy Papers

POLICY BRIEF

 

AI GOVERNANCE AND THE EU'S
STRATEGIC ROLE IN 2025

 Author:

Marta Cantero Gamito

 Florence School of Transnational Governance

ISSUE 2025/13
AUGUST 2025

 EXECUTIVE SUMMARY

The current global conversation on AI governance is taking place within an intense and shifting geopolitical setting. As such, the ongoing attempts to coordinate governance through summits and other international initiatives are revealing important disagreements among world powers. The EU, once a regulatory leader, faces weakening influence in this conversation. To remain a credible global actor, Europe should rethink its strategy while dealing with internal enforcement gaps and declining external leverage. This policy brief examines current governance dynamics and argues that regulatory ambition alone is no longer sufficient.

To remain credible, the EU needs to connect rulemaking with industrial capacity and build coalitions shaped by shared strategic interests rather than rhetorical alignment.

 Author:

 Marta Cantero Gamito | Research Fellow, Florence School of Transnational Governance, EUI

Views expressed in this publication reflect the opinion of individual authors and not those of the European University Institute.


1. CONTEXT AND STAKES1

In 2025, aspirations for global coherence in AI governance seem unfeasible. Recent summits such as the Paris AI Action Summit and the Munich Security Conference, rather than building consensus, have exposed significant disagreements. At the same time, the gap between AI development and regulation is widening. Also, since the EU proposed the Artificial Intelligence Act (the so-called 'AI Act') in 2021, the geopolitical climate has dramatically changed. The proposal came in a world still buffered by the regulatory influence of the GDPR. However, AI is more than data.

It involves software, hardware, semiconductors and, more broadly, network infrastructure and security architectures. As such, AI is a deeply political technology and a strategic governance tool. Therefore, tensions surrounding its regulation and governance are no longer only about compliance but also about competition over control and over who gets to write the rules.

Now, however, the illusion of control is fading. The proliferation of AI governance venues has created overlapping mandates and fragmented commitments. Instead of convergence, there has been an increase in competing authorities and institutional complexity.2 Yet, the risk of ineffective AI governance is not just duplication or inefficiency. The concern is rather the systematic marginalisation of those unable to follow, let alone shape, these initiatives. Often, states and institutions without the resources to engage across multiple agendas are left behind. This is particularly true for Global South countries whose voices are sidelined by capacity asymmetries.3

Meanwhile, the boundary between governance and geopolitical strategy is increasingly blurred. What was once about market harmonisation or data protection now concerns power and sovereignty. Although the EU continues to position itself as a value-based rulemaker, it struggles to project that identity in a world largely influenced by the United States and China.4 The digital 'third way' Europe once offered is under pressure, if not in retreat.5 In this shifting order, questions about who sets the rules, who enforces them, and who gets left behind are no longer abstract governance questions but critical geopolitical challenges. Europe's role in this context is at once decisive and precarious. The EU aspires to act as a bridge by upholding democratic accountability while engaging strategically with global powers. However, its success depends on staying normatively coherent and geopolitically relevant. This brief explores the resulting landscape of convergence and divergence in AI governance and Europe's place in it.

 

2. FLOCKING BEHAVIOUR IN AI GOVERNANCE

Much of the current realignment in AI governance is not entirely intentional. In fact, when institutional responses do emerge, they are often reactive, imitative, and driven more by perceived necessity than by deliberate design. Traditional accounts of international cooperation assume a degree of strategic coordination and institutional rationality.6 However, in the AI domain, what we observe increasingly resembles flocking, not structured coordination.

Borrowed from behavioural ecology, flocking describes the tendency of animals (mainly birds) to follow perceived leaders, often without fully evaluating the implications or alternatives. A 5% shift in a certain direction by a dominant actor can pull the entire group into a new trajectory. The same is observed in AI/digital governance, where early movers create gravitational pulls that shape the regulatory landscape beyond their borders. This imitation reproduces and amplifies asymmetries of power. Those who lead also define the terms of engagement. For many states, especially in the Global South, alignment with dominant models is less about shared values and more about infrastructural dependency, capacity gaps or geopolitical pressure (or all of the above). In such cases, alignment can serve as a proxy for legitimacy, even when the underlying distribution of agency remains deeply uneven.
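The pull exerted by a dominant actor can be made concrete with a toy numerical sketch (my own illustration, not part of the brief; all names and parameters are invented): a group of agents that simply average their headings with the group, plus one "leader" that keeps applying a small persistent bias. The followers never choose a new direction themselves, yet the whole flock drifts onto the leader's trajectory.

```python
# Toy flocking model (illustrative assumption, not from the brief):
# each agent moves its heading halfway toward the group average each
# step, while agent 0 (the "dominant actor") keeps nudging its own
# heading by a small bias. Headings are angles in radians.

def simulate(n_agents=20, steps=200, leader_bias=0.05):
    headings = [0.0] * n_agents              # everyone starts on the same course
    for _ in range(steps):
        mean = sum(headings) / n_agents      # each agent observes the group average
        headings = [0.5 * h + 0.5 * mean for h in headings]
        headings[0] += leader_bias           # the leader keeps drifting
    # average heading of the followers (everyone except the leader)
    return sum(headings[1:]) / (n_agents - 1)

drift = simulate()          # followers end up far from their original heading
baseline = simulate(leader_bias=0.0)  # without a biased leader, nothing moves
```

The point of the sketch is the asymmetry: a 5% per-step nudge by one actor is enough to move the average heading of all the others, which is the "gravitational pull" of early movers described above.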

Calls for regulatory interoperability, particularly in global trade, now collide with an increasingly fragmented global landscape. The move away from hierarchical rulemaking toward a more dispersed legal order has increased complexity without resolving questions of authority. While this shift might signal a higher-intensity pluralism, it also strains the coherence required for effective global coordination. In this context, technocratic governance has emerged not just as a Western trend, but as a structural response to the perceived ungovernability of deeply interdependent but politically divided systems.7 

Whether or not this is a global phenomenon remains an open question, but its implications are evident in the widening gap between formal participation and real influence. Flocking also masks the role of 'predators': actors that can reshape the trajectories of others through leverage rather than persuasion. For AI governance, these may be powerful firms controlling access to large-scale computational infrastructure, foundational models, or standard-setting processes.8 Their ability to frame risk, define technical parameters, or even set governance priorities often bypasses formal institutional checks.

Behavioural science sheds further light on the origins of reactive behaviour. Under uncertainty, actors tend to conform, especially when losses (e.g. in competitiveness or security) loom larger than potential gains.9 In the context of AI, risk narratives exacerbate these dynamics by creating perceived urgency and limiting the space for deliberation.10 As a result, the global regulatory landscape often resembles a cascading response system in which imitation replaces deliberate policy direction. The result is a governance ecosystem driven less by principled coordination than by reputational pressure.

What this process generates is the appearance of supranational alignment. Superficial signs of cooperation, such as countries signing joint statements, adopting similar language, or participating in the same forums, do not guarantee meaningful agreement or shared commitment. This points to the risk of performative governance, where coordination is claimed without enforceable obligations or inclusive participation. Therefore, to assess where digital governance is headed, we need to examine the architectures that underpin it.

 

3. GLOBAL ARCHITECTURES IN DIGITAL GOVERNANCE: WHO HAS GRAVITAS?

What began as a race to regulate has become a competition to shape and control markets and critical infrastructures. In the absence of a coherent framework for AI governance, a fragmented space of initiatives, summits, institutes, frameworks, and coalitions has emerged, with each forum advancing its own priorities, definitions, and claims to legitimacy. However, while pluralism is often seen as a strength, it has led to architectural incoherence.

Rules proliferate without coordination, responsibilities are diffused, and important gaps remain unaddressed. Forums like the Internet Governance Forum (IGF), once celebrated as an unparalleled multistakeholder format, have been sidelined by more agile and exclusive coalitions, which accelerate coordination among powerful players but often do so at the expense of transparency and inclusion. At the centre of this fragmentation lies a deeper tension between multilateralism and multistakeholderism. The fate of the IGF's mandate since the adoption of the Global Digital Compact in September 2024 is symptomatic of this conflict. While the former centres state legitimacy, the latter distributes influence among diverse stakeholders. In theory, this is a pragmatic solution to the complexity of digital power. In practice, though, it can complicate (and even obscure) structures and questions of accountability or even open the door to governance capture, as seen in other sectors.11

This tension is especially visible in AI governance, where control over infrastructure and agenda-setting power is heavily concentrated in the hands of a few private frontier labs and hyperscale cloud providers. These actors are invited into governance spaces as technical experts, but their dual role as both rulemakers and market actors creates conflicts of interest. In this sense, as Kate Crawford has noted, "AI is in its empire era," characterised by expansionist ambitions, massive investment in extractive infrastructure, and limited accountability.12

As a result, far from convergence, AI governance models find themselves in conflict, driven by competing agendas. Some prioritise coordination, others fragmentation. Some emphasise precaution, others acceleration. Some claim democratic accountability, others strategic utility. The result is a layered geopolitical landscape marked by parallel architectures built on incompatible assumptions about sovereignty, risk, and/or legitimacy.

The Paris AI Action Summit highlighted this fragmentation. While it invoked multistakeholderism and global coordination, the absence of key countries (including the US and the UK) from its final declaration exposed the fragility of any substantive consensus. Also, the summit's rhetoric of inclusion stood in contrast with the reality. Voices from the Global South were present (for instance, on panels), but their influence on decision-making remained peripheral.

Besides, this divergence is not merely procedural. Scholars like Sean O hEigeartaigh describe the current moment as a 'pre-AGI diplomatic phase,' where informal coordination among dominant actors replaces formal institutional rules. In this context, informal elite consensus between labs, leading states, and technical experts may shape the future more decisively than treaty-based multilateralism.13

The question is, then, not only who sits at the table, but whose agenda gets embedded in governance frameworks. In a 2021 paper, Dafoe and Carlier called for a "constitutional moment" in an AI governance landscape shaped by high uncertainty and long-term normative consequences.14 Yet, the core concern persists: can such moments deliver institutions that remain epistemically independent?

In response, some attempts to mediate these tensions are emerging. One is a proposal for a jurisdictional certification model for AI governance, in which states voluntarily recognise and interoperate with one another's domestic AI regulations based on shared principles.15 This approach avoids the rigidity of a centralised treaty model while enabling coordination through regulatory interoperability. It mirrors the logic of mutual recognition in global trade but is adapted to a risk-sensitive and values-diverse digital domain. Its feasibility, however, depends on trust and transparency, both currently in short supply. 

Ultimately, the debate is not just over who governs but how. The deeper tension is between the ideal of interoperable and inclusive governance and the political reality of fragmentation and strategic competition. This is highly visible in Europe, where the EU's ambition to act as a normative bridge is increasingly constrained by weakened leverage and growing uncertainty about Europe's position in the current global order.

 

4. EUROPE'S STRATEGIC ROLE

Can the EU still shape digital governance, or is it being sidelined? In a world that is growing less inclined to flock around its regulatory model, the EU's status as a norm entrepreneur is challenged by internal fragmentation and the waning traction of the 'Brussels Effect.'

The strategic position that once allowed the EU to mediate between divergent approaches (e.g. those of the US and China) is increasingly difficult to maintain. In the wake of the Paris AI Action Summit, the EU finds itself gradually growing more isolated. The refusal of the US and the UK to endorse the final declaration, as well as the muted response from frontier AI labs, exposed the limits of Europe's influence.

At the same time, the EU is facing increasing enforcement dilemmas at home. While the AI Act, the Digital Services Act (DSA), and the Digital Markets Act (DMA) are recent achievements in global digital regulation, challenges related to their implementation reveal a growing mismatch between European rulemaking ambition and enforcement capacity (and, more critically, political will). At the 2025 Munich Security Conference, US Vice President JD Vance openly criticised Europe's "excessive regulation" and hinted at retaliatory measures if the EU continued to target American platforms. Growing lobbying efforts and the political cost of enforcement have all contributed to the view of these regulations as 'rules without teeth.' The quiet shelving of the AI Liability Directive earlier this year also signals regulatory fatigue, both administrative and geopolitical.

The EU's soft power is also under strain. The EuroStack initiative, an industry-proposed sovereign digital infrastructure alternative to Silicon Valley or Beijing, has not yet materialised. Although the EU has committed to long-term investment in AI research, compute capacity, and cloud sovereignty, it still lacks globally competitive firms to project its regulatory norms through market reach. Europe has limited access to foundational models and continues to rely on external compute infrastructures. These structural limitations undermine the EU's ability to translate regulatory ambition into global influence.

The AI Continent Action Plan, released in April 2025, aims to address this issue. It reflects a shift from purely rules-based governance toward a more active industrial strategy.16 Building on strategic autonomy (the EU's preferred euphemism for digital sovereignty), the plan seeks to coordinate investment and develop interoperable infrastructure in an attempt to strengthen Europe's position across the AI value chain. Politically, it signals a more pragmatic turn, recognising that normative influence requires industrial leverage. Its success, however, will depend on aligning member state priorities and attracting private sector engagement.

 The challenge is also structural. Earlier EU successes in digital governance were achieved in a more stable technological and geopolitical context. Regulatory action is now driven by urgency and a perceived need to assert sovereignty in a race that is already underway, rather than by the demand to solve pressing societal problems and protect rights. This is visible in the AI Act's language, which mirrors the rhetoric of risk and strategic autonomy. Given these dynamics, the EU faces a critical decision. Should it double down on its goal to lead global digital governance, or should it focus instead on building industrial capacity and agreeing on a common EU digital policy for its citizens?

Some call for realism and the acknowledgement that Europe must first consolidate its internal capacity before projecting global leadership.17 Others warn that retreating from global engagement would accelerate the EU's marginalisation and leave the normative ground to authoritarian models.18 In either case, the EU's position will ultimately depend less on its regulatory volume and more on its political credibility and technological relevance.

 

5. POLICY RECOMMENDATIONS IN AN IMPOSSIBILITY TRILEMMA

AI governance is currently constrained by an impossibility trilemma. In other words, no policy can simultaneously prioritise state sovereignty, individual autonomy, and unrestricted innovation. Advancing two of these principles typically undermines the third: protecting sovereignty and individual autonomy often restricts innovation; promoting innovation and individual autonomy weakens state control; and the pursuit of state control and technological leadership often leaves little room for dissent, privacy, or individual agency.19

 The EU is at the core of this tension because it currently seeks to simultaneously regulate innovation in the public interest, safeguard individual rights, and retain strategic autonomy. To complicate things further, it must do so in a global environment shaped by infrastructures that it cannot fully access and companies that it does not control. In this context, policy recommendations should move beyond chasing impossible balances and instead focus on asserting a clear normative stance that projects credibility and direction amid shifting geopolitical and technological constraints.

1. Enforcement must be treated as existential. The EU's influence on the world stage has thus far largely relied on enforcement, but this credibility is eroding. Geopolitical credibility requires sustained investment in enforcement capacity, which entails stronger national authorities with the budgets and technical expertise to act effectively, sound cross-border enforcement mechanisms across member states, and active support for civil society litigation in the public interest.

2. Public investment must reflect public values. European AI innovation cannot depend on non-EU entities whose business models run counter to European interests. EU funding schemes should be deliberately aligned with the Union's normative standing. This includes conditioning public funds on model openness and democratic oversight. Moreover, the EU must promote investment in accessible compute capacity and sovereign AI infrastructure. Here, instruments like the European Chips Act, the Cyber Resilience Act, and the Data Act can provide critical regulatory support to ensure that Europe retains control over its hardware, cybersecurity, and data flows.

3. Incentives-driven coalitions should be prioritised over extraterritoriality. While universal convergence now seems unlikely, value-aligned coalitions remain possible. The EU should lead in creating interoperable, flexible arrangements that reflect its core values, such as jurisdictional certification models consisting of voluntary mutual recognition agreements among states whose domestic AI regimes meet shared standards. This approach mirrors mutual recognition in trade law but adapts it to a value-based governance framework. It would allow Europe to avoid a potential backlash against extraterritorial imposition, shaping global norms through alignment rather than dominance.

4. Governance infrastructure must support epistemic independence. While the European AI Office plays a central role in implementing the AI Act and conducting technical evaluations, democratic resilience requires distributed expertise, as seen, for instance, in the field of climate governance with the Intergovernmental Panel on Climate Change (IPCC). The EU could support the creation of a European network for AI foresight and risk: a publicly funded but institutionally independent consortium of academics, civil society, and technical experts. This network would offer critical foresight, signal slow-moving risks, and conduct open evaluations of high-impact models and infrastructures while complementing (rather than replicating) the AI Office.

5. Governance must be anticipatory, not reactive. This requires embedding digital governance into Europe's foreign, industrial, and security strategies. In this regard, the creation of a Digital Geopolitics Council as a permanent advisory body for emerging tech foresight would help the EU reconcile regulatory frameworks with its global positioning and industrial policy.

 6. CONCLUSION

Flocking behaviour in digital governance highlights both the unstable nature of global coordination and the systemic pressures that shape it. Understanding underlying behavioural patterns is needed to shape forward-looking governance rather than simply reacting to it. Yet, if AI governance continues to be treated as a mere regulatory tool rather than as a strategic end, current frameworks may fail to capture the shifting balance of power. In light of this, the EU's reliance on regulatory projection is reaching its limits as a tool of influence. Europe cannot afford to continue externalising its industrial dependencies if it expects to retain regulatory power. Instead, to remain a credible actor in global digital governance, the EU needs to support its regulatory ambition with industrial capacity while forging coalitions that help to hold the flock together.

 The School of Transnational Governance (STG) delivers teaching and high-level training in the methods, knowledge, skills and practice of governance beyond the State. Based within the European University Institute (EUI) in Florence, the School brings the worlds of academia and policy-making together in an effort to navigate a context, both inside and outside Europe, where policy-making increasingly transcends national borders.

The School offers Executive Training Seminars for experienced professionals and a Policy Leaders Fellowship for early- and mid-career innovators. The School also hosts expert Policy Dialogues and distinguished lectures from transnational leaders (including the STG's Leaders Beyond the State series, which recorded the experiences of former European Institution presidents, and the Giorgio La Pira Lecture series, which focuses on building bridges between Africa and Europe). In September 2020, the School launched its Master-of-Arts in Transnational Governance (MTnG), which will educate and train a new breed of policy leader able to navigate the unprecedented issues our world will face during the next decade and beyond.

 The STG Policy Papers Collection aims to further the EUI School of Transnational Governance's goal in creating a bridge between academia and policy and provide actionable knowledge for policy-making. The collection includes Policy Points (providing information at-a-glance), Policy Briefs (concise summaries of issues and recommended policy options), and Policy Analyses (in-depth analysis of particular issues). The contributions provide topical and policy-oriented perspectives on a diverse range of issues relevant to transnational governance. They are authored by STG staff and guest authors invited to contribute on particular topics.

  ______________

1 Marta Cantero Gamito is a Research Fellow at the EUI's Florence School of Transnational Governance and Professor of Information Technology Law at the School of Law of the University of Tartu (Estonia).

2 Nye, J. S. (2014). The regime complex for managing global cyber activities. Global Commission on Internet Governance Paper Series: No. 1, May 2014. On the description of a broader phenomenon of institutional complexity in global governance, see Abbott, K. W., & Faude, B. (2022). Hybrid institutional complexes in global governance. The Review of International Organizations, 17(2), 263-291.

3 Heeks, R. (2022). Digital inequality beyond the digital divide: conceptualizing adverse digital incorporation in the global South. Information Technology for Development, 28(4), 688-704.

4 Csernatoni, R. (2024). Charting the Geopolitics and European Governance of Artificial Intelligence. Carnegie Endowment for International Peace. Accessible at https://carnegieendowment.org/research/2024/03/charting-the-geopolitics-and-european-governance-of-artificial-intelligence?lang=en.

5 Bradford, A., Kelemen, R. D., & Pavone, T. (2024). Europe Could Lose What Makes It Great. Foreign Affairs (April 25, 2025). Accessible at https://www.foreignaffairs.com/europe/europe-could-lose-what-makes-it-great

6 Koremenos, B., Lipson, C., & Snidal, D. (2001). The rational design of international institutions. International Organization, 55(4), 761-799.

 7 See Streeck's latest book, Streeck, W. (2024). Taking Back Control?: States and State Systems After Globalism. Verso Books.

8 AI Now Institute. (2023). Computational Power and AI. Accessible at https://ainowinstitute.org/publications/compute-and-ai.

9 Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.

10 See the letter to Science by Lazar, S., & Nelson, A. (2023). AI safety on whose terms?. Science, 381(6654), 138-138.

 11 Cf. Raymond, M., & DeNardis, L. (2015). Multistakeholderism: Anatomy of an inchoate global institution. International Theory, 7(3), 572-616.

12 https://www.bloodinthemachine.com/p/ai-is-in-its-empire-era

13 O hEigeartaigh, Sean, The Most Dangerous Fiction: The Rhetoric and Reality of the AI Race (May 25, 2025). Available at SSRN: https://ssrn.com/abstract=5278644.

14 Carlier, A., & Dafoe, A. (2020). Emerging Institutions for AI Governance: AI Governance in 2020. Centre for the Governance of AI. Accessible at https://www.governance.ai/research-paper/emerging-institutions-for-ai-governance-ai-governance-in-2020.

15 Forum on Information and Democracy. (2024). A Voluntary Certification Mechanism for Public Interest AI: Exploring the Design and Specifications of Trustworthy Global Institutions to Govern AI. September 2024. Accessible at https://informationdemocracy.org/wp-content/uploads/2024/09/FID-Public-Interest-AI-Sept-2024.pdf.

 16 COM(2025) 165 final.

17 Torreblanca, J. I., & Verdi, L. (2024). Control-Alt-Deliver: A Digital Grand Strategy for the European Union. European Council on Foreign Relations (ECFR). Accessible at https://ecfr.eu/publication/control-alt-deliver-a-digital-grand-strategy-for-the-european-union.

18 Shapiro, J. (2020). Europe's digital sovereignty: From rulemaker to superpower in the age of US-China rivalry. European Council on Foreign Relations (ECFR). Accessible at https://ecfr.eu/publication/europe_digital_sovereignty_rulemaker_superpower_age_us_china_rivalry.

19 Cantero Gamito, M. (2024). El trilema de la gobernanza: los retos para la democracia global en la era de la IA. Telos, vol. 125, p. 104-108.

STG | Policy Papers Issue | 2025/13

Florence School of Transnational Governance

European University Institute

Via Camillo Cavour, 65a, 50129 Firenze (FI), Italy

Tel. +39 055 4685 545

Email: stg.publications@eui.eu

 www.eui.eu/stg


Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.

This work is licensed under the Creative Commons Attribution 4.0 (CC-BY 4.0) International license, which governs the terms of access and reuse for this work. If cited or quoted, reference should be made to the full name of the author(s), editor(s), the title, the series and number, the year and the publisher.

doi:10.2870/2955242

ISBN: 978-92-9466-673-4
ISSN: 2600-271X
QM-01-25-097-EN-N

 © European University Institute, 2025


 


Wednesday, 1 October 2025

GUIDES AGAINST CYBER THREATS FOR FAMILIES AND OLDER ADULTS. CNIL

By: Carlos A. FERREYROS SOTO

Doctor of Law

University of Montpellier I, France.

cferreyros@hotmail.com

SUMMARY

In June 2024, the CNIL, Cybermalveillance.gouv.fr and the UNAF published two practical guides entitled "Cybersecurity: Adopt the right reflexes", dedicated to protecting families and older adults against cyber threats.

These guides seek to raise awareness among families and older adults of the growing dangers of the internet, offering them concrete advice on how to recognise threats (hacking, scams and fraud), protect their devices and personal data, and deepen their knowledge through additional online resources.

The documents are structured in three main sections:

  • Identifying the main online threats (common types of scams and fraud);
  • Discovering practical advice for acting cautiously and effectively protecting one's information and devices;
  • Accessing tools and resources to increase vigilance and stay informed about new threats.

These guides are designed for all levels of digital skills, are available for free download and are also distributed by the UNAF, which promotes their dissemination among the groups most at risk.

This initiative highlights the importance of collaboration between the CNIL (data protection), Cybermalveillance.gouv.fr (prevention, assistance and national resources) and the UNAF (family support) for a proactive approach to digital education and the prevention of cybercrime.

In short, these guides are simple, essential tools for better understanding the risks and adopting a safe and informed use of digital technology, especially for vulnerable populations such as older adults and families.

Companies and public and private organisations interested in accessing similar rules and European standards, or in advisory services, consultancy, training, studies, evaluations or audits on this subject, may write to the following email address: cferreyros@hotmail.com

___________________________________________________

The CNIL, Cybermalveillance.gouv.fr and the Unaf publish two guides on cyber threats for families and older people

26 June 2024


Account hacking, online scams... faced with ever more present risks, it is essential to protect yourself. To support you in your digital life, Cybermalveillance.gouv.fr, the Unaf and the CNIL have published two guides for families and older people.

To support families and older people, Cybermalveillance.gouv.fr, the Commission Nationale de l'Informatique et des Libertés (CNIL) and the Union Nationale des Associations Familiales (UNAF) are today publishing two new guides entitled "Cybersecurity: Adopt the right reflexes".

These guides aim to raise awareness among users of all ages about the dangers of the internet and to offer practical advice on how to protect themselves. Clear and accessible, they enable everyone, regardless of their level of technical knowledge, to understand security issues and implement effective preventive measures.

The key points of the guides:

  • Identifying threats: learning to recognise the different types of online threats.
  • Practical advice: discovering simple, practical tips to protect your devices and personal information.
  • Additional resources: accessing online tools and resources to deepen your knowledge and stay informed about threats.

These guides can be downloaded free of charge, and printed copies can be obtained on request from the UNAF.

Cybermalveillance.gouv.fr:

Cybermalveillance.gouv.fr is the platform of the national scheme for preventing cybercrime and assisting its victims. It serves both individuals and professionals through numerous free online contents and services. It also lists specialised service providers across the country who can assist victims.

UNAF: 

The Union Nationale des Associations Familiales (UNAF) is the institution responsible for promoting, defending and representing the interests of all families. Parents play a fundamental role in supporting their children in the digital society. With this in mind, the UNAF informs and supports parents in responsible digital practices.