Today, Monday, February 9, 2026, is Safer Internet Day, a day that has always been about protecting people online. In 2026, that mission takes on new urgency as artificial intelligence becomes deeply embedded in how we work, learn, communicate, and transact.

“AI is no longer a future technology—it is already shaping what we see online, how decisions are made, and how cybercriminals operate. As AI becomes a default layer across the internet, online safety is no longer just about user behaviour—it is about how intelligently and responsibly AI itself is designed, governed, and secured,” says Lorna Hardie, Regional Director: Africa, Check Point Software Technologies.

Using AI Safely: Balancing Innovation with Risk Awareness and Responsibility

“The challenge this year is not whether to use AI, but how to use it safely, responsibly, and with awareness of the new risks it introduces,” Hardie adds.

As AI accelerates productivity and creativity, it is also expanding the attack surface of the internet in ways that affect individuals, families, schools, and organisations alike. This is why Check Point increasingly frames cyber security as “AI-first and prevention-first”—because reacting after harm occurs is no longer sufficient in a machine-speed threat environment.

AI Is Now Part of Everyday Online Life

From writing assistance and image generation to recommendation engines and chatbots, AI is now present in almost every digital interaction. Enterprises are adopting generative AI (GenAI) at speed, and individuals are using it daily—often without fully understanding how data is processed or stored. AI has effectively become a co-pilot for daily digital life, influencing decisions, content, and trust signals in the background.

According to Check Point Research's December 2025 cyber statistics, 1 in every 27 GenAI prompts submitted from enterprise networks posed a high risk of sensitive data leakage, and 91% of organisations using GenAI tools were affected by high-risk prompts.

An additional 25% of prompts contained potentially sensitive information, highlighting how easily users can overshare data when interacting with AI tools. These findings reinforce a core Check Point message: AI must be secured just like any other critical system—because it now directly handles sensitive data and decision-making.

Safe AI Begins with Digital Literacy, Not Limitations

These risks are not limited to enterprises. When students, families, or individuals use AI tools for homework, advice, or content creation, the same behaviours—copy-pasting personal information, uploading images, or trusting outputs without verification—can expose them to privacy, misinformation, or manipulation risks. Safe AI use, therefore, starts with digital literacy, not restriction.

How AI Is Changing the Cyber Threat Landscape

Cybercrime has always evolved alongside technology, but AI is accelerating this evolution at unprecedented speed. According to Check Point’s Cyber Security Report 2026, attackers are now combining AI, identity abuse, ransomware, and social engineering into coordinated, multi-stage campaigns that move faster than traditional defences can. These attacks increasingly adapt in real time, learning from failed attempts and automatically refining their techniques—mirroring how defensive AI operates.

The same report finds that organisations globally faced an average of 1,968 cyberattack attempts per week in 2025, representing an 18% year-over-year increase and a 70% increase since 2023. These attacks are no longer isolated incidents—they are persistent, automated, and increasingly personalised. This scale is precisely why human-only security models can no longer keep pace.

Three trends are particularly relevant to Safer Internet Day:

1. AI-Enabled Social Engineering

AI is making phishing and scams more convincing and scalable. Attackers can now generate multilingual, culturally tailored messages that mimic trusted voices, institutions, or even family members. According to Check Point’s Cyber Security Report 2026, email remains the primary delivery mechanism for malicious content, accounting for 82% of malicious file delivery, but web-based and multi-channel attacks are growing rapidly. This reinforces the need for AI-driven threat prevention that can detect intent and behaviour—not just known signatures.

2. Ransomware at Scale

According to Check Point Research's December 2025 cyber statistics, 945 ransomware attacks were publicly reported in December 2025 alone, a 60% increase compared to December 2024. Ransomware groups are becoming more fragmented, automated, and aggressive, often combining data theft with extortion and public pressure. AI is now being used to accelerate targeting, reconnaissance, and extortion tactics.

3. Unmanaged AI Usage as a Risk Multiplier

AI systems themselves are becoming targets. A review of approximately 10,000 Model Context Protocol servers, cited in Check Point's Cyber Security Report 2026, found security vulnerabilities in 40% of them, demonstrating that AI infrastructure is now part of the attack surface. Securing AI pipelines, models, and data flows is now as critical as securing endpoints or networks.

Smart Tech, Safe Choices: A New Online Safety Framework

To navigate this environment, users need a new set of habits—ones that recognise AI as a powerful tool, but not an infallible authority. Users should:

  • Question the Output, Not Just the Source – AI responses can sound authoritative even when they are wrong. Users should ask: is the information verifiable from another trusted source, and is the AI pushing urgency, fear, or secrecy?

  • Protect Personal and Sensitive Data – Check Point Research's December 2025 cyber statistics show that employees use an average of 11 different GenAI tools and generate 56 AI prompts per user per month, increasing the risk of accidental data exposure. AI safety starts with minimising unnecessary data sharing.

  • Assume Content Can Be Synthetic – From images to voices, digital content can now be fabricated with ease. Treat anything that demands money, credentials, or immediate action with caution—even if it appears realistic. Verification is now a core life skill.

What Organisations and Platforms Must Do

“Safer Internet Day is not only about individual responsibility. Platforms, schools, and organisations must design safety into AI systems from the start. Security must be embedded into AI development, deployment, and use—not bolted on afterwards,” Hardie says.

Check Point's Cyber Security Report 2026 shows that 90% of organisations encountered risky AI prompts within a three-month period, indicating that governance and controls are lagging behind adoption. Effective AI safety requires not only clear policies on AI usage and consistent monitoring for data leakage but also education that keeps pace with evolving threats.

Ultimately, security must evolve from reactive tools to AI-powered, cloud-delivered platforms that prevent harm at machine speed.

Tips to Stay Safe Online in an AI-Driven World

As Safer Internet Day reminds us, small habits can significantly reduce risk:

  • Pause before you trust – If AI-generated content asks for urgency, money, or secrecy, stop and verify.
  • Limit what you share with AI tools – Avoid entering personal, financial, or identifiable information unless absolutely necessary.
  • Verify important information – Cross-check AI outputs with reliable, human-verified sources.
  • Keep systems updated – Many attacks exploit known vulnerabilities rather than new ones.
  • Talk openly about AI use – Especially with younger users, discuss what AI can and cannot safely do.

Concludes Hardie: “AI is rapidly becoming a co-pilot in how we learn, work, and connect online—but trust in technology must be earned, not assumed. This Safer Internet Day is a reminder that smart technology choices, combined with prevention-first, AI-powered security and strong digital literacy, are essential to keeping the internet safe, resilient, and trustworthy in an AI-driven world.”
