Open LLMs are Necessary For Current Private Adaptations and Outperform Their Closed Alternatives [Paper Reflection]

April 16, 2025


Closed Large Language Models (LLMs), which are proprietary and accessible only via APIs, have dominated the LLM space since around 2022 due to their high performance and versatility. However, Open LLMs have made substantial progress, narrowing the performance gap with their Closed LLM counterparts. Open LLMs are models whose architecture and parameters are publicly available for use, modification, and distribution.

For instance, while Closed LLMs like Anthropic’s Claude and OpenAI’s GPT-4 (both released in March 2023) set new benchmarks upon their launches, the Open LLMs Llama 3 (released by Meta in April 2024) and DeepSeek-R1 (released in January 2025) have not only matched but surpassed these models in tasks such as coding, reasoning, text classification, summarization, and question answering.

While much of the discussion around LLMs centers on task and computational performance, in our paper Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives, we focus on the privacy implications of using Open and Closed LLMs. Specifically, we explore whether and how models can be fine-tuned on sensitive data while ensuring robust privacy guarantees.

To this end, we define threat models, compare various Open and Closed LLMs that leverage differential privacy (DP) on classification and generation tasks, and analyze methodological limitations. The result is a thorough analysis of the privacy-utility tradeoff at different privacy levels.

Our findings indicate that Open LLMs can be adapted to private data without leaking information to third parties, such as LLM providers and malicious users. Thus, they offer a significant privacy advantage over Closed, proprietary models.

The threat space in adapting LLMs to private data

The adaptation of Closed LLMs to private datasets introduces a multifaceted threat space. In typical scenarios, data curators provide their sensitive data to LLM providers for fine-tuning, producing a model tailored to the dataset. This customized model is subsequently queried by external parties, e.g., customers of the data curator.

The resulting threat space can be categorized into three key dimensions:

  1. From the data curator to the LLM provider: The private data shared during fine-tuning may be susceptible to unauthorized access or misuse.
  2. From the querying party to the LLM provider: Queries submitted by end users, which often contain sensitive information intended for the data curator, are exposed to the LLM provider.
  3. From malicious end users to the adapted LLM: Malicious end users may attempt to extract private information through the LLM’s responses to carefully crafted queries.

In contrast to Closed LLMs, Open LLMs provide full control over the model and data, enabling private adaptation without the need to share sensitive information with a third party. This control eliminates the first two threat vectors associated with Closed LLMs, such as unauthorized access or misuse by the provider and exposure of user queries. With Open LLMs, data curators can directly fine-tune the model on private datasets using privacy-preserving techniques, ensuring end-to-end privacy.

What are the current methods for private adaptation of LLMs? 

It follows from our threat space analysis that restricting access to the fine-tuning dataset alone does not guarantee data privacy. Model outputs can still reveal sensitive information from the fine-tuning data. If the fine-tuned model is exposed (e.g., via an API), it remains vulnerable to information extraction and inference attacks.

Differential privacy (DP) introduces a rigorous mathematical framework that ensures the privacy of individuals whose data is used in the fine-tuning process. Specifically, DP adds carefully calibrated noise to the model updates, making it statistically improbable to determine whether any individual’s data was included in the fine-tuning dataset. Its quantifiable and robust privacy guarantee makes DP valuable for protecting sensitive information in LLM fine-tuning.
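To make the mechanism concrete, here is a minimal sketch of the Gaussian mechanism, the building block behind adding calibrated noise in DP: a statistic computed on sensitive data is released only after noise scaled to its sensitivity and to the privacy parameters (epsilon, delta) is added. The function name and the count example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release a noisy version of `value` satisfying (epsilon, delta)-DP.

    Uses the classic analytic noise scale
    sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon (valid for epsilon < 1).
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Illustrative example: privately release how many records in a sensitive
# dataset match some condition. Adding or removing one record changes the
# count by at most 1, so the sensitivity is 1.
true_count = 42
noisy_count = gaussian_mechanism(true_count, sensitivity=1.0, epsilon=1.0, delta=1e-5)
print(f"true count: {true_count}, privately released: {float(noisy_count):.1f}")
```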

While DP provides privacy guarantees for both Open and Closed LLMs, it does not address the issue of trust in third-party providers for Closed LLMs. For these models, data curators must rely on the provider to implement safeguards and handle sensitive data responsibly.

Private adaptation methods for Closed LLMs 

We can rule out the fine-tuning services offered by LLM providers (e.g., OpenAI and Amazon), as this entails sharing private data with a third party. Moreover, since Closed LLMs are accessible only via APIs, we cannot access or adapt the model’s weights directly.

Instead, private adaptation methods for Closed LLMs rely on privacy-preserving discrete prompts or private in-context learning (ICL). These approaches work by carefully crafting input prompts or selecting relevant examples to guide the model’s behavior, all while ensuring that sensitive information in the prompts or examples is protected from potential leakage or inference attacks.

All methods we evaluate in our study follow the PATE (Private Aggregation of Teacher Ensembles) framework. At a high level, PATE achieves data privacy by splitting the private dataset into non-overlapping partitions. Then, each partition is used to train a so-called teacher model. These teacher models are joined into an ensemble model by combining their outputs while adding noise, which preserves privacy.

This ensemble is then used to train a so-called student model in the following way: The ensemble makes predictions for samples from an unlabeled public dataset. The resulting (sample, ensemble prediction) pairs constitute the training data for the student model. Thus, the student learns to make the same predictions as the teacher ensemble but never sees sensitive data samples. The student is what’s released as the final model.

Overview of the PATE framework. The sensitive dataset is divided into non-overlapping partitions, and a separate teacher model is trained on each partition. All teachers are aggregated noisily into an ensemble model, which is used to make predictions on a public dataset. The samples from the public dataset, together with the ensemble’s predictions, constitute the training data for the student model, which is the model that is eventually queried by users.
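To illustrate the aggregation step, here is a minimal sketch of noisy voting for a single public sample. It uses Laplace noise on the vote histogram, as in the original PATE formulation; the Closed-LLM methods discussed below instantiate this step differently, and the function name and vote values are purely illustrative.

```python
import numpy as np

def pate_label(teacher_votes, num_classes, noise_scale=1.0, rng=None):
    """Noisily aggregate teacher predictions for one public sample.

    teacher_votes: one predicted class index per teacher, where each teacher
    was trained on a disjoint partition of the sensitive dataset.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, noise_scale, size=num_classes)  # privacy noise
    return int(np.argmax(counts))                              # noisy plurality vote

# Hypothetical example: 10 teachers label one public sample from 3 classes.
votes = [2, 2, 1, 2, 0, 2, 2, 1, 2, 2]
print("label handed to the student:", pate_label(votes, num_classes=3))
```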

The private adaptation methods for Closed LLMs we analyze in our study build on this general framework. They differ in how the teachers are utilized and how their responses are aggregated:

  • Differentially Private In-context Learning (DP-ICL): All teachers process the same prompt, and the ensemble’s response is the noisy consensus.
  • PromptPATE: The teacher ensemble assigns labels to public unlabeled data via private voting. These labeled public sequences are used to create new discrete student prompts, which are deployed with the LLM.
  • DP-FewShotGen: The teacher ensemble generates private synthetic few-shot samples that are then used as demonstrations for in-context learning.
  • DP-OPT: A local LLM generates privacy-preserving prompts and instructions from the private dataset. These are used for in-context learning for the third-party Closed LLM.

In our paper, we compare the privacy protection and performance of these four state-of-the-art methods for private adaptation of Closed LLMs. When applying them to the popular Closed LLMs Claude, GPT-3 Babbage, GPT-3 Davinci, and GPT-4 Turbo, we observe that compared to private adaptation of Open LLMs, these methods offer lower performance at a higher cost on various downstream tasks, including dialog summarization, classification, and generation. Further, all methods except DP-OPT leak training data to the LLM provider.

Private adaptation methods for Open LLMs 

Unlike Closed LLMs, Open LLMs provide access to their parameters, enabling more flexible and parameter-centric private adaptation methods. These methods typically follow the Differentially Private Stochastic Gradient Descent (DPSGD) paradigm to ensure privacy. In DPSGD, the influence of each private data point is constrained during training through gradient clipping and the addition of calibrated noise. This approach guarantees that the model does not memorize or leak sensitive information.
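The sketch below shows a single DPSGD update in PyTorch under these assumptions: clip each per-example gradient to a fixed norm, sum the clipped gradients, add Gaussian noise scaled to the clipping norm, and apply the averaged noisy gradient. Production setups typically rely on a library such as Opacus, which also tracks the cumulative privacy budget; the function and argument names here are illustrative, not the paper's implementation.

```python
import torch

def dpsgd_step(model, loss_fn, xs, ys, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: per-example gradient clipping plus Gaussian noise."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):                                 # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))  # bound each example's influence
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))            # noisy, averaged update
```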

In our study, we explore three primary methods for private adaptation of Open LLMs: 

  1. Prompt-based adaptation (PromptDPSGD) introduces a small number of additional parameters (soft prompts or prefix-tuning) and adapts Differentially Private Stochastic Gradient Descent (DPSGD) to preserve privacy.
  2. Parameter-efficient fine-tuning, such as LoRA, only updates a relatively small number of parameters within the model. PrivateLoRA extends this approach with DP guarantees by building on the DPSGD algorithm (see the sketch after this list).
  3. Full fine-tuning adaptations (DP-FineTune) involve fine-tuning the entire model or a subset of its layers for comprehensive adaptation while adhering to differential privacy principles.
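As a rough illustration of the parameter-efficient idea behind methods like PrivateLoRA, the sketch below wraps a frozen linear layer with a trainable low-rank update; only the small adapter matrices A and B would then be clipped and noised by a DPSGD step like the one above. This is a simplified assumption of how such an adapter can be set up, not the implementation evaluated in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # keep pretrained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Low-rank correction added on top of the frozen projection.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Only self.A and self.B require gradients, so a DPSGD step touches a tiny
# fraction of the model's parameters.
```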

Applying these methods to Vicuna, Llama-3, OpenLLaMa, BART, RoBERTa, and the Pythia suite of models, we find that private adaptation of Open LLMs improves performance on downstream tasks and reduces costs compared to their Closed counterparts. It also provides a critical privacy benefit by eliminating the risk of exposing private data and user queries to LLM providers.

Insightful results

Our analysis of private adaptation methods for both Closed and Open LLMs reveals several critical findings regarding data leakage, performance, and cost:

  1. Query data leakage: All private adaptation methods for Closed LLMs leak query data to the LLM provider. This means that sensitive information from user queries is exposed during the adaptation process, posing a significant privacy risk.
  2. Training data leakage: Of the four private adaptation methods for Closed LLMs, only DP-OPT successfully protects private training data from the LLM provider, and it requires a local LLM to do so. The remaining private adaptation methods for Closed LLMs leak a large fraction of the training data to the LLM provider, undermining the privacy guarantees of the adaptation process.
  3. Performance: All adaptation methods for Closed LLMs achieve lower downstream task performance than privacy-preserving local adaptations on Open LLMs, even when the Open LLMs are significantly smaller than their Closed counterparts.
  4. Cost: The training and query costs for private adaptations of Closed LLMs are substantially higher due to the API access costs imposed by the LLM provider. In contrast, private adaptations of Open LLMs are more cost-effective. We estimated the costs assuming an A40 GPU with 48 GB of memory. In this scenario, privately adapting a Closed LLM to text classification tasks with DP-ICL costs about $140, while fine-tuning an Open LLM with PrivateLoRA on the same tasks costs about $30.

This leads to the conclusion that for a truly privacy-preserving adaptation of LLMs, one should use Open LLMs. By offering full control over the model and data, Open LLMs eliminate the risks associated with third-party providers and enable robust privacy-preserving techniques. As a result, Open LLMs address the limitations of Closed LLMs and enable efficient and customizable adaptations tailored to sensitive datasets.
