• About
  • Privacy Policy
  • Disclaimer
  • Contact
Soft Bliss Academy
No Result
View All Result
  • Home
  • Artificial Intelligence
  • Software Development
  • Machine Learning
  • Research & Academia
  • Startups
  • Home
  • Artificial Intelligence
  • Software Development
  • Machine Learning
  • Research & Academia
  • Startups
Soft Bliss Academy
No Result
View All Result
Home Artificial Intelligence

The State of AI Security in 2025: Key Insights from the Cisco Report

softbliss by softbliss
May 16, 2025
in Artificial Intelligence
0
The State of AI Security in 2025: Key Insights from the Cisco Report
0
SHARES
0
VIEWS
Share on FacebookShare on Twitter


As more businesses adopt AI, understanding its security risks has become more important than ever. AI is reshaping industries and workflows, but it also introduces new security challenges that organizations must address. Protecting AI systems is essential to maintain trust, safeguard privacy, and ensure smooth business operations. This article summarizes the key insights from Cisco’s recent “State of AI Security in 2025” report. It offers an overview of where AI security stands today and what companies should consider for the future.

A Growing Security Threat to AI

If 2024 taught us anything, it’s that AI adoption is moving faster than many organizations can secure it. Cisco’s report states that about 72% of organizations now use AI in their business functions, yet only 13% feel fully ready to maximize its potential safely. This gap between adoption and readiness is largely driven by security concerns, which remain the main barrier to wider enterprise AI use. What makes this situation even more concerning is that AI introduces new types of threats that traditional cybersecurity methods are not fully equipped to handle. Unlike conventional cybersecurity, which often protects fixed systems, AI brings dynamic and adaptive threats that are harder to predict. The report highlights several emerging threats organizations should be aware of:

  • Infrastructure Attacks: AI infrastructure has become a prime target for attackers. A notable example is the compromise of NVIDIA’s Container Toolkit, which allowed attackers to access file systems, run malicious code, and escalate privileges. Similarly, Ray, an open-source AI framework for GPU management, was compromised in one of the first real-world AI framework attacks. These cases show how weaknesses in AI infrastructure can affect many users and systems.
  • Supply Chain Risks: AI supply chain vulnerabilities present another significant concern. Around 60% of organizations rely on open-source AI components or ecosystems. This creates risk since attackers can compromise these widely used tools. The report mentions a technique called “Sleepy Pickle,” which allows adversaries to tamper with AI models even after distribution. This makes detection extremely difficult.
  • AI-Specific Attacks: New attack techniques are evolving rapidly. Methods such as prompt injection, jailbreaking, and training data extraction allow attackers to bypass safety controls and access sensitive information contained within training datasets.

Attack Vectors Targeting AI Systems

The report highlights the emergence of attack vectors that malicious actors use to exploit weaknesses in AI systems. These attacks can occur at various stages of the AI lifecycle from data collection and model training to deployment and inference. The goal is often to make the AI behave in unintended ways, leak private data, or carry out harmful actions.

Over recent years, these attack methods have become more advanced and harder to detect. The report highlights several types of attack vectors:

  • Jailbreaking: This technique involves crafting adversarial prompts that bypass a model’s safety measures. Despite improvements in AI defenses, Cisco’s research shows even simple jailbreaks remain effective against advanced models like DeepSeek R1.
  • Indirect Prompt Injection: Unlike direct attacks, this attack vector involves manipulating input data or the context the AI model uses indirectly. Attackers may supply compromised source materials like malicious PDFs or web pages, causing the AI to generate unintended or harmful outputs. These attacks are especially dangerous because they do not require direct access to the AI system, letting attackers bypass many traditional defenses.
  • Training Data Extraction and Poisoning: Cisco’s researchers demonstrated that chatbots can be tricked into revealing parts of their training data. This raises serious concerns about data privacy, intellectual property, and compliance. Attackers can also poison training data by injecting malicious inputs. Alarmingly, poisoning just 0.01% of large datasets like LAION-400M or COYO-700M can impact model behavior, and this can be done with a small budget (around $60 USD), making these attacks accessible to many bad actors.

The report highlights serious concerns about the current state of these attacks, with researchers achieving a 100% success rate against advanced models like DeepSeek R1 and Llama 2. This reveals critical security vulnerabilities and potential risks associated with their use. Additionally, the report identifies the emergence of new threats like voice-based jailbreaks which are specifically designed to target multimodal AI models.

Findings from Cisco’s AI Security Research

Cisco’s research team has evaluated various aspects of AI security and revealed several key findings:

  • Algorithmic Jailbreaking: Researchers showed that even top AI models can be tricked automatically. Using a method called Tree of Attacks with Pruning (TAP), researchers bypassed protections on GPT-4 and Llama 2.
  • Risks in Fine-Tuning: Many businesses fine-tune foundation models to improve relevance for specific domains. However, researchers found that fine-tuning can weaken internal safety guardrails. Fine-tuned versions were over three times more vulnerable to jailbreaking and 22 times more likely to produce harmful content than the original models.
  • Training Data Extraction: Cisco researchers used a simple decomposition method to trick chatbots into reproducing news article fragments which enable them to reconstruct sources of the material. This poses risks for exposing sensitive or proprietary data.
  • Data Poisoning: Data Poisoning: Cisco’s team demonstrates how easy and inexpensive it is to poison large-scale web datasets. For about $60, researchers managed to poison 0.01% of datasets like LAION-400M or COYO-700M. Moreover, they highlight that this level of poisoning is enough to cause noticeable changes in model behavior.

The Role of AI in Cybercrime

AI is not just a target – it is also becoming a tool for cybercriminals. The report notes that automation and AI-driven social engineering have made attacks more effective and harder to spot. From phishing scams to voice cloning, AI helps criminals create convincing and personalized attacks. The report also identifies the rise of malicious AI tools like “DarkGPT,” designed specifically to help cybercrime by generating phishing emails or exploiting vulnerabilities. What makes these tools especially concerning is their accessibility. Even low-skilled criminals can now create highly personalized attacks that evade traditional defenses.

Best Practices for Securing AI

Given the volatile nature of AI security, Cisco recommends several practical steps for organizations:

  1. Manage Risk Across the AI Lifecycle: It is crucial to identify and reduce risks at every stage of AI lifecycle from data sourcing and model training to deployment and monitoring. This also includes securing third-party components, applying strong guardrails, and tightly controlling access points.
  2. Use Established Cybersecurity Practices: While AI is unique, traditional cybersecurity best practices are still essential. Techniques like access control, permission management, and data loss prevention can play a vital role.
  3. Focus on Vulnerable Areas: Organizations should focus on areas that are most likely to be targeted, such as supply chains and third-party AI applications. By understanding where the vulnerabilities lie, businesses can implement more targeted defenses.
  4. Educate and Train Employees: As AI tools become widespread, it’s important to train users on responsible AI use and risk awareness. A well-informed workforce helps reduce accidental data exposure and misuse.

Looking Ahead

AI adoption will keep growing, and with it, security risks will evolve. Governments and organizations worldwide are recognizing these challenges and starting to build policies and regulations to guide AI safety. As Cisco’s report highlights, the balance between AI safety and progress will define the next era of AI development and deployment. Organizations that prioritize security alongside innovation will be best equipped to handle the challenges and grab emerging opportunities.

Tags: CiscoInsightsKeyReportSecurityState
Previous Post

Gemma Scope: helping the safety community shed light on the inner workings of language models

Next Post

Build and train a recommender system in 10 minutes using Keras and JAX

softbliss

softbliss

Related Posts

Mapping the misuse of generative AI
Artificial Intelligence

Mapping the misuse of generative AI

by softbliss
May 16, 2025
With AI, researchers predict the location of virtually any protein within a human cell | MIT News
Artificial Intelligence

With AI, researchers predict the location of virtually any protein within a human cell | MIT News

by softbliss
May 16, 2025
Artificial Intelligence

Hugging Face Introduces a Free Model Context Protocol (MCP) Course: A Developer’s Guide to Build and Deploy Context-Aware AI Agents and Applications

by softbliss
May 15, 2025
Harnessing AI for a Sustainable Earth Day
Artificial Intelligence

Harnessing AI for a Sustainable Earth Day

by softbliss
May 15, 2025
The Future of Branding: AI in Logo Creation
Artificial Intelligence

The Future of Branding: AI in Logo Creation

by softbliss
May 15, 2025
Next Post
Build and train a recommender system in 10 minutes using Keras and JAX

Build and train a recommender system in 10 minutes using Keras and JAX

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Premium Content

Gemini Robotics brings AI into the physical world

March 24, 2025
Modeling Extremely Large Images with xT – The Berkeley Artificial Intelligence Research Blog

Modeling Extremely Large Images with xT – The Berkeley Artificial Intelligence Research Blog

April 5, 2025
Evolving from Bots to Brainpower: The Ascendancy of Agentic AI

Evolving from Bots to Brainpower: The Ascendancy of Agentic AI

May 14, 2025

Browse by Category

  • Artificial Intelligence
  • Machine Learning
  • Research & Academia
  • Software Development
  • Startups

Browse by Tags

Amazon App Apr Artificial Berkeley BigML.com Blog Build Building Business Content Data Development Future Gemini Generative Google Guide Innovation Intelligence Key Language Large Learning LLM LLMs Machine Microsoft MIT Mobile model Models News NVIDIA Official opinion OReilly Research Startup Startups Strategies students Tech Tools Video

Soft Bliss Academy

Welcome to SoftBliss Academy, your go-to source for the latest news, insights, and resources on Artificial Intelligence (AI), Software Development, Machine Learning, Startups, and Research & Academia. We are passionate about exploring the ever-evolving world of technology and providing valuable content for developers, AI enthusiasts, entrepreneurs, and anyone interested in the future of innovation.

Categories

  • Artificial Intelligence
  • Machine Learning
  • Research & Academia
  • Software Development
  • Startups

Recent Posts

  • Build and train a recommender system in 10 minutes using Keras and JAX
  • The State of AI Security in 2025: Key Insights from the Cisco Report
  • Gemma Scope: helping the safety community shed light on the inner workings of language models

© 2025 https://softblissacademy.online/- All Rights Reserved

No Result
View All Result
  • Home
  • Artificial Intelligence
  • Software Development
  • Machine Learning
  • Research & Academia
  • Startups

© 2025 https://softblissacademy.online/- All Rights Reserved

Are you sure want to unlock this post?
Unlock left : 0
Are you sure want to cancel subscription?