News

Cyber Threat Briefing

A new kind of threat

Tenable Research has uncovered seven previously unknown vulnerabilities in OpenAI’s ChatGPT service that could be exploited to exfiltrate users’ personal data, persist beyond a single session and even bypass built‑in safety controls. The flaws affect GPT‑4o and GPT‑5, the latest models behind ChatGPT, and highlight how malicious actors can abuse large‑language models (LLMs) to change their behaviour in ways developers did not intend. Tenable reported the issues to OpenAI, which has fixed some of them; however, the discovery underscores the growing “prompt injection” problem—malicious instructions hidden in data that cause an AI to act against its owner’s interests. The research quickly went viral across infosec circles, with #ChatGPT and #HackedGPT trending on social media and sparking debate about the risks of generative AI.

Technical details / Who’s affected

Tenable’s analysis grouped the issues into seven attack techniques:

  • Indirect prompt injection via trusted sites – Attackers can embed malicious instructions in the comments or metadata of web pages. When a user asks ChatGPT to summarise the page, the model unknowingly executes the hidden instructions.
  • Zero‑click prompt injection (search context) – Simply asking ChatGPT about a particular site can trigger hidden instructions if the site has been indexed by search engines and contains malicious prompts.
  • One‑click prompt injection via the chatgpt.com/?q parameter – Crafting a URL with a ?q= parameter automatically executes a supplied prompt when loaded, allowing attackers to inject commands.
  • Safety‑bypass through allow‑listed domains – Malicious URLs can be disguised as Bing ad‑tracking links because bing.com is on ChatGPT’s safe list, enabling the attacker to serve hidden instructions.
  • Conversation injection – Adversaries can embed prompts into a web page and then ask ChatGPT to summarise it. The injected instructions persist in subsequent interactions, causing the model to drift from its original task.
  • Malicious content hiding – A bug in ChatGPT’s markdown renderer allows attackers to hide malicious prompts by placing them on the same line as a code‑block delimiter, so the instructions aren’t displayed to the user.
  • Memory injection – By concealing instructions in a webpage and asking ChatGPT to summarise it, attackers can poison the model’s “memory” so that later queries leak data.

In addition to these methods, security researchers flagged related exploitation techniques across the AI ecosystem—prompt jacking in Anthropic’s Claude, agent‑session smuggling to hijack cross‑agent communication, “prompt inception” to amplify false narratives and a shadow‑escape zero‑click attack that steals sensitive data via model‑context protocols. Tenable warned that the vulnerabilities allow an attacker to exfiltrate chat histories, personal information and potentially tokens, and that the issues arise because large‑language models implicitly trust content drawn from the web. The problem is particularly concerning for organisations adopting AI assistants for customer support or internal productivity tools.

Industry or government response

OpenAI has patched several of the issues disclosed by Tenable. Tenable’s blog describes the flaws as including “unique indirect prompt injections, exfiltration of personal user information, persistence, evasion and bypass of safety mechanisms.” The company emphasised that prompt injection is an inherent challenge for LLMs and cautioned that there may be no systematic fix in the near future. Meanwhile, researchers from Texas A&M, the University of Texas and Purdue University warned that training AI models on “junk data” can lead to LLM brain rot, making them more susceptible to poisoning.

Governments have also begun to respond. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) recently added vulnerabilities affecting AI‑related products to its Known Exploited Vulnerabilities catalogue, and has urged federal agencies to audit their AI tools. Europe’s upcoming AI Act includes provisions requiring developers to demonstrate how they mitigate prompt injection and data‑exfiltration risks. These moves signal regulatory pressure to harden AI models against adversarial manipulation.

Why this matters

ChatGPT is one of the most widely used generative AI services; millions of people rely on it for research, coding and personal advice. The discovery that attackers can trick the model into leaking private data or executing hidden instructions undermines trust and shows that LLMs remain an immature technology. The research also demonstrates that connecting AI to external tools (browsers, search engines, plug‑ins) dramatically increases the attack surface. As Tenable put it, vendors must ensure all safety mechanisms—such as URL allow‑lists and content filtering—are robust.

How to stay safe

  1. Be cautious with summaries. Don’t ask ChatGPT to summarise pages from unknown or untrusted sources—malicious prompts can hide in comments or metadata.
  2. Check URLs. Avoid suspicious links, especially those that use the ?q= parameter or look like disguised ad‑tracking links.
  3. Keep AI tools up to date. Use the latest versions of ChatGPT and other AI assistants.
  4. Limit sensitive information. Treat AI chats like a public forum; never share confidential data.
  5. Implement input sanitisation. Developers integrating LLMs should strip HTML comments and other hidden content before sending data to the model.
  6. Monitor model output. Organisations using AI assistants should log interactions and watch for unusual behaviour.
  7. Stay informed. Follow BadActyr and official advisories for ongoing updates on AI vulnerabilities.

  1. Ravie Lakshmanan – The Hacker News, “Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data,” Nov 5 2025.
  2. Moshe Bernstein & Liv Matan – Tenable Blog, “HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage,” Nov 5 2025.

Credential Theft Surges 160%

Credential Theft Surges 160% — Protect Yourself Now

Cybersecurity researchers report a staggering 160% increase in credential theft in 2025. That means stolen usernames and passwords are flooding the dark web at record pace, sold like trading cards to the highest bidder. Once stolen, attackers don’t stop there — they try those same login details across dozens of other services, from banking and email to workplace systems. If you’ve reused a password anywhere, one breach can quickly turn into many.

This is why our security program doesn’t just focus on “strong” passwords — it goes further. Two-Factor Authentication (2FA) adds an extra layer of defense, requiring a code or push notification in addition to your password. Even if a hacker buys or steals your login, they can’t get in without that second factor. Pair that with a password manager, and you’ve got a system that creates long, unique passwords for every account, remembers them for you, and autofills them safely. The result: fewer headaches for you, and far less opportunity for attackers.

But security isn’t just about tools — it’s also about habits. A few simple behaviors make a big difference. Always hover over links before clicking to make sure they go where they claim. Don’t open unexpected attachments or enable macros in documents from unknown sources. When in doubt, call the sender on a known number instead of replying to a suspicious email. And remember, the quickest way to protect everyone is to report phishing attempts immediately.

The surge in stolen credentials shows how valuable our logins are to attackers. By combining 2FA, password managers, and a few smart daily habits, you can make yourself a hard target. Hackers are counting on the easy win — let’s not give it to them.

Here are 13 things you can do to protect your credentials.

  • Stop — check the sender. If the address looks weird (extra letters, wrong domain), don’t click.
  • Hover before you click. Move your mouse over links to see where they go. If it doesn’t match the message, don’t click.
  • Don’t enable macros or run unknown attachments. If a doc asks you to “Enable Content,” treat it like a red flag.
  • When in doubt, call. If your boss asks for money or sensitive info via email, call them on a known number — not the number in the email.
  • Use 2FA everywhere. It’s the single best protection when passwords leak.
  • Keep devices updated. Install OS and app updates frequently — they close security holes hackers use.
  • Don’t use public Wi-Fi for sensitive work. Use a company VPN or your phone hotspot.
  • Lock your screen when away. Even a minute matters.
  • Report suspicious emails — immediately.
  • Be careful with links in SMS or social apps. Phishers use texts too.
  • Backup important files. Ransomware looks for easy targets — backups save the day.
  • Implement verbal pass phrases with loved ones. With the rise of AI-powered vishing, having a way to confirm identity can be huge.
  • Use a password manager. One strong master password + autofill beats reused passwords and sticky notes.

Salty2FA Phishing Kit Evades MFA to Clone Corporate Login Pages

A new phishing-as-a-service (PhaaS) toolkit called Salty2FA has been discovered, capable of completely bypassing multi-factor authentication on Microsoft 365 and other corporate platforms. The kit uses multi-stage clones of login portals and sophisticated spoofing techniques—including push notifications, SMS, voice, and app-based 2FA—to harvest access credentials.

Salty2FA employs advanced evasion tactics such as Cloudflare Turnstile filtering, dynamic domain infrastructure combining .com subdomains and .ru domains, obfuscated JavaScript, and anti-sandbox logic to elude detection. Researchers note its rapid rise since mid-2025 and widespread impact across multiple sectors and geographies.

Why It Matters

Salty2FA represents a pivotal leap in phishing capabilities—undermining trust in MFA, long considered a strong line of defense. Its ability to mimic legitimate systems and intercept authentication codes in real time forces companies to rethink authentication strategies and embrace behavior-based threat detection.

Key Takeaways

  • Salty2FA is a phishing kit designed to bypass SMS, app, voice, and push-based MFA.
  • Uses Cloudflare Turnstile, obfuscation, and dynamic .com/​.ru domain pairings to evade detection.
  • Spoofs login portals with credible look and feel to harvest credentials and 2FA codes.
  • Targets Microsoft 365 and enterprise users across US, EU, and other sectors.
  • Behavioral detection, sandbox analysis, and phishing-resistant authentication are key defenses.

Source: HackRead (reporting by Deeba Ahmed)

Infographic showing how Hello Gym stored 1.6M audio files unsecured, exposing them to risks of phishing, impersonation, and deepfakes.

Hello Gym Audio Leak Puts Millions at Risk of Deepfake & Phishing

An unsecured database managed by Hello Gym, a tech provider for fitness franchises in the U.S. and Canada, exposed more than 1.6 million audio recordings of gym members—including voicemails and call logs collected from 2020 to 2025. The files, which included member names, phone numbers, and call details, were accessible without authentication and could be played directly in web browsers.

The breach was discovered by cybersecurity researcher Jeremiah Fowler of Website Planet, who reported it to. The database was secured within hours of disclosure, but it’s unknown how long it remained exposed or whether any malicious actors accessed the data

Why It Matters

Voice recordings carry a high risk profile: they can be weaponized for spear-phishing, social engineering, or impersonation. In the era of AI deepfakes, stolen audio can be used to craft convincing scams or fake calls that appear authentic, putting individuals, and potentially businesses at heightened risk

Key Takeaways

  • Hello Gym leak exposed 1,605,345 audio files containing PII.
  • The database was publicly accessible without password protection.
  • Files included names, phone numbers, and reasons for gym member calls.
  • Audio data can be leveraged for phishing, impersonations, and deepfakes.
  • The leak was fixed quickly, but the duration and potential misuse remain unclear.

Source:HackRead (reporting by Deeba Ahmed)

Logo Larceny

Phishing is no longer just shady emails with typos and bad grammar. Today’s cybercriminals are using brand impersonation to create messages that look exactly like they came from the companies you trust most.

In Q2 2025, Microsoft led with 25% of global impersonation cases, followed by Google (11%), Apple (9%), and Spotify (6%). These attacks usually claim there’s a problem with your account or a special offer waiting—anything to get you to click a link. That link leads to a fake login page where your credentials are stolen in seconds.

Brand impersonation works because it leverages trust. When we see a logo we recognize, our guard drops. Criminals combine that trust with personal details they’ve gathered elsewhere to make the scam even more convincing.

This type of phishing also thrives on urgency—messages often warn of account closure, billing issues, or security alerts. That ticking clock is designed to make you act before you think.

To protect yourself, slow down. Hover over links to see the true destination, navigate directly to the official website, and enable multi-factor authentication (MFA) whenever possible. Remember: if something feels urgent and looks official, it’s worth verifying before you click.

Reference: Check Point

FileFix: The Next Stage in Social Engineering’s Evolution

It started with fake CAPTCHAs—clever little “prove you’re human” pop-ups that tricked users into granting permissions or pasting malicious commands. That technique, dubbed ClickFix, relied on social engineering to turn a harmless-looking interaction into a security breach.

Now, attackers have taken that formula and supercharged it. FileFix, spotted by Check Point Research, builds on the ClickFix model but removes friction. Instead of just copying a malicious command to your clipboard, FileFix automatically launches Windows File Explorer using the file:// protocol. At the same time, it places a PowerShell command disguised as a file path into your clipboard.

From there, it’s pure psychology: users are conditioned to paste a copied path into the Explorer address bar and hit Enter. But instead of opening a folder, the hidden PowerShell command executes—often with no visible signs anything happened. Current tests have used benign payloads, but it’s only a matter of time before real malware replaces them.

This evolution from ClickFix to FileFix shows how phishing techniques adapt. Attackers don’t just invent new tricks—they refine ones that already work, making them faster, smoother, and more convincing. Fake CAPTCHAs were effective because they felt routine; FileFix is even more dangerous because it feels like you’re just navigating your own PC.

Protecting against FileFix starts with awareness. No legitimate site will ever tell you to paste commands into File Explorer. Treat unexpected Explorer pop-ups as red flags, verify instructions with IT before acting, and configure endpoint security to flag unexpected PowerShell executions.

Reference: Check Point

From Subbed to Snubbed: Cut the Noise, Cut the Risk

We often sign up for newsletters, promotional codes, and one-time coupons, and before we know it, our inboxes are overflowing. However, many Gmail users might not be aware of a special feature that allows you to view all your email subscriptions in one place. You can access this feature directly by visiting the Gmail Subscriptions View link in your desktop browser.

This feature compiles a list of emails that Gmail recognizes as subscriptions or recurring marketing messages. From this view, you can see when you last interacted with each sender and decide whether to unsubscribe or keep the ones that are useful. This eliminates the need to sift through old emails or click on dubious “unsubscribe” links that could potentially be phishing attempts.

Cleaning out your subscriptions not only tidies up your inbox but also reduces your risk of falling victim to phishing attempts. The more marketing emails you receive, the higher the chances of one being a disguised phishing attempt from a familiar brand. By trimming down your subscription list, you minimize both digital clutter and security risks.

Take a few minutes to visit the link, review your senders, and unsubscribe from any that you do not recognize or no longer need. This is one of the easiest ways to improve your digital hygiene, and you might even uncover some surprises hiding in plain sight.

Prime Target: Don’t Click That Deal!

Amazon Prime customers are currently being targeted by a widespread phishing campaign that exploits brand familiarity and creates a sense of urgency. Scammers are sending fake emails and robocalls claiming that your membership is expiring, there’s a billing issue, or a large purchase requires verification. Some messages even offer false refunds or announce that you’ve won a gift card. These scams are designed to pressure you into clicking a link or providing sensitive information, such as your Amazon credentials or credit card number.

What makes this campaign particularly convincing is how authentic it appears. The emails often use real Amazon logos, colors, and wording. Some phishing pages even replicate Amazon’s login screen perfectly. On the phone, scammers may spoof caller IDs to appear as “Amazon Customer Support” and can sound very professional—some even use AI-generated voices for added realism.

The goal of these scams is always the same: to obtain your personal or payment information. If you fall victim to one of these schemes, attackers may not only drain your account or place fraudulent orders but could also attempt to use the same credentials on other services, such as your bank or streaming accounts.

To stay safe, never trust unexpected messages about your account. Instead, directly visit the Amazon app or website to verify your status. Avoid clicking on links or returning calls from suspicious emails. If a refund, order, or payment seems too urgent or too good to be true, it probably is. If you’re ever unsure, consult with someone or report it to IT before taking any action.

Scamouflage: When AI Puts on a Human Face

As artificial intelligence tools evolve, scammers are trading phishing emails for deepfake videos—convincing fakes that can talk, blink, and even hold meetings. One staggering example: a finance worker at a multinational firm was tricked into transferring $25 million after a video call with what appeared to be the company’s CEO. The catch? It wasn’t the CEO at all—it was a synthetic copy generated using AI. 

These kinds of scams are becoming more common as AI-generated video and voice tools become publicly accessible. Unlike obvious email scams full of typos and strange links, deepfake video scams play on visual trust. They replicate facial features, voice tone, and expressions with eerie accuracy. But there are tells: unnatural blinking, lips out of sync with speech, or a voice that lacks natural inflection.

Experts recommend a healthy dose of skepticism—especially when money or sensitive information is involved. Always verify requests, even if they appear to come from someone you know. A second phone call or direct message can prevent a costly mistake. As these scams grow more sophisticated, our best defense is still simple: pause and double-check.

While the tech behind these scams is impressive, it’s also a reminder that critical thinking is more important than ever. If something feels off—even during a video call—trust your instincts. The face may look familiar, but the intent behind it might not be.

Source: Times

Computer screen displaying a Google search result for "Are Bengal cats legal in Australia," with red warning signs and broken lock icons surrounding it. Scattered papers with personal data like names, addresses, and credit card numbers emphasize the risk of data theft.

Six-Word Google Search Term Exposes Users to Hackers

A bizarre six-word Google search term has been identified by cybersecurity experts as a potential risk for users, leaving them vulnerable to hackers. The search term “Are Bengal cats legal in Australia” has been hijacked by cybercriminals to create fake websites that can download malicious software, known as malware, onto users’ computers. This malware can steal personal data, financial details, and login credentials, and even give hackers remote access to the infected device.

Cybersecurity researcher Sean Gallagher from Sophos, a British cybersecurity company, explained that hackers exploit niche search terms with fewer search results by creating fake websites that appear to answer the question. These websites are used for malicious purposes. The technique, known as “SEO poisoning,” has been around since 2020 but has seen continued growth in recent years.

Hackers have also targeted popular software searches, such as Blender 3D, Photoshop, and financial trading tools, to infect users’ computers. To stay safe, users should be cautious when clicking on search results, check web addresses for misspellings or unusual names, and beware of unexpected downloads or requests for sensitive information. Keeping browsers and operating systems up to date is also essential.

Citation: Bradley Jolly, “Bizarre six-word Google search term which leaves you open to hackers is revealed,” Mirror

Scroll to Top