Vendor News

Inside Qubit Conference Prague 2025: Hacking Social Platforms and Securing AI 

Qubit Conference Prague 2025 brought together some of the sharpest minds in cybersecurity—and Cato CTRL made sure to leave a mark. Not only did we share insights on AI-powered security, but we also marked a major milestone: the opening of our new R&D office in Prague.  This expansion strengthens our global footprint and taps the best in the local engineering and development talent to help with the kinds of projects we present at Qubit.  

Using AI to Extract Threat Intel from X to Generate Real-Time Threat Feeds  

In a session titled “Hacking Social Platforms with AI + OSS for Real-Time Threat Intel,” Inga Cherny, security researcher at Cato Networks and member of Cato CTRL, revealed how our team is extracting reliable threat intelligence from complex social media data. 

QubitD2064

Traditionally, extracting threat intelligence from social networks like X involves manually monitoring a handful of trusted sources. But with today’s volume and variety of online chatter—spanning countless voices and languages—manual tracking isn’t scalable. Inga’s solution: an AI-powered system combining the depth of large language models (LLMs) with the efficiency of small language models (SLMs) to identify actionable indicators.  

Cherny demoed an AI agent that uses Grok to discover threat hunters and accounts that periodically share indicators of compromise (IoCs), fetch their live posts via the X API, and analyze the content using a custom fine-tuned model trained specifically for this threat intelligence task. The system generates a real-time threat intelligence feed that integrates directly into a security protection pipeline.  

This practical use case showed how to leverage this data for tracking zero-day vulnerabilities, malware campaigns, and other emerging threats. Attendees learned replicable methods for building and leveraging AI tools for targeted security monitoring, and most importantly how to integrate the resulting threat intelligence into broader security protection pipelines based on publicly available data.  

Mapping the LLM Threat Landscape 

In a session titled “From Assistants to Adversaries: The LLM Threat Landscape,” Vitaly Simonovich, threat intelligence researcher at Cato Networks and member of Cato CTRL, explored the growing risks of LLM misuse. He highlighted how threat actors are tapping AI to develop tools like WormGPT and Nytheon AI to generate polymorphic malware and scalable phishing campaigns. 

QubitD2156

Simonovich also explored prompt injection attacks that specifically target AI tools, such as Retrieval-Augmented Generation (RAG) systems. He also shared his insights on technical vulnerabilities, such as jailbreaks, designed to bypass security measures to prevent the misuse of LLMs.   


WormGPT Variants Powered by Grok and Mixtral | Read the blog

Simonovich presented two sets of tips for security in the age of LLMs. For enterprises building or embedding LLMs, he recommended: 

  • Use LLM guardrails and treat every prompt like untrusted code. They should treat user input as hostile—just like raw input in web applications. Guardrails, prompt templates, and output filters are your new input sanitization. 
  • Fuzz test your LLM, just like an API. Don’t trust it blindly. Treat your model like an external service— leverage malformed, edge-case, and adversarial prompts to validate behavior. 
  • Put a safety gate in your RAG loop. When using RAG systems, insert a filtering layer to catch unsafe or irrelevant data before it reaches the model. 
  • Log and hunt for “prompt residue” in logs. Just like command injection leaves traces, prompt injection may too. Log full prompt context and analyze for manipulation patterns. 
  • AI red teaming. Continuously test your system with red team prompts to uncover jailbreaks, toxic outputs, or unsafe completions—before threat actors do. 

For enterprises facing AI-powered threat actors, he recommended: 

  • Prepare for AI-enhanced social engineering. LLMs make phishing more believable, scalable, and dynamic. Train employees with examples generated by AI, not humans. 
  • Run AI-driven attack drills each quarter. Tabletop exercises should now include LLM-based threats—like code-gen malware or manipulated chatbots in support channels. 
  • Expect polymorphic malware at scale and faster exploit development and tooling. AI is supercharging the speed and variety of malware. Signature-based detections alone won’t keep up. 
  • Track emerging AI threat tooling in the wild. Follow underground forums, open-source AI projects, and red team tooling. New threats are often published before they’re deployed. 
  • Use XDR and behavioral analytics to keep up with scale. As AI drives automated attacks, defenders need telemetry-rich, behavior-based tooling—not just traditional rules. 

Why This Matters 

These sessions went beyond theory. They offered practical insights into how AI can be applied—proactively and defensively—within security operations for enterprises. Cato CTRL demonstrated leadership in turning complex research into tangible solutions that strengthen AI protection in real time. 

The post Inside Qubit Conference Prague 2025: Hacking Social Platforms and Securing AI  appeared first on Cato Networks.

Related Articles

Back to top button