“Periodic table of machine learning” could fuel AI discovery | MIT News

April 29, 2025

MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.

For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8 percent better than current state-of-the-art approaches.

The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same.

Building on these insights, the researchers identified a unifying equation that underlies many classical AI algorithms. They used that equation to reframe popular methods and arrange them into a table, categorizing each based on the approximate relationships it learns.
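
As a rough sketch (the notation here is an assumption for illustration, not the paper's own), the unifying objective can be pictured as minimizing, on average, the divergence between the true connection distribution p(· | i) around each data point i and the distribution q_θ(· | i) the algorithm learns in order to approximate it:

\min_{\theta}\; \mathbb{E}_{i}\Big[ D_{\mathrm{KL}}\big( p(\cdot \mid i) \,\|\, q_{\theta}(\cdot \mid i) \big) \Big]

Different choices for how the real connections p are defined (labels, augmentations, cluster membership, neighbors) and for how q_θ is parameterized would then correspond to different entries in the table.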

Just like the periodic table of chemical elements, which initially contained blank squares that were later filled in by scientists, the periodic table of machine learning also has empty spaces. These spaces predict where algorithms should exist but have not yet been discovered.

The table gives researchers a toolkit to design new algorithms without the need to rediscover ideas from prior approaches, says Shaden Alshammari, an MIT graduate student and lead author of a paper on this new framework.

“It’s not just a metaphor,” adds Alshammari. “We’re starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through.”

She is joined on the paper by John Hershey, a researcher at Google AI Perception; Axel Feldmann, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mark Hamilton, an MIT graduate student and senior engineering manager at Microsoft. The research will be presented at the International Conference on Learning Representations.

An accidental equation

The researchers didn’t set out to create a periodic table of machine learning.

After joining the Freeman Lab, Alshammari began studying clustering, a machine-learning technique that classifies images by learning to organize similar images into nearby clusters.

She realized the clustering algorithm she was studying was similar to another classical machine-learning algorithm, called contrastive learning, and began digging deeper into the mathematics. Alshammari found that these two disparate algorithms could be reframed using the same underlying equation.

“We almost got to this unifying equation by accident. Once Shaden discovered that it connects two methods, we just started dreaming up new methods to bring into this framework. Almost every single one we tried could be added in,” Hamilton says.
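
To make the idea concrete, here is a minimal, hypothetical sketch in Python with NumPy (not the authors' code; the function names and the choice of KL divergence are illustrative assumptions) of how a contrastive-style loss and a clustering-style loss can share one divergence between a "real" neighbor distribution p and a learned one q, differing only in how p is defined:

import numpy as np

def softmax(scores, axis=-1):
    scores = scores - scores.max(axis=axis, keepdims=True)
    exp = np.exp(scores)
    return exp / exp.sum(axis=axis, keepdims=True)

def learned_neighbors(embeddings, temperature=0.1):
    # q(j|i): softmax over similarities between learned embeddings
    sims = embeddings @ embeddings.T / temperature
    np.fill_diagonal(sims, -np.inf)  # a point is not its own neighbor
    return softmax(sims, axis=1)

def true_neighbors(groups):
    # p(j|i): uniform over the other points in i's group
    # (augmentation pairs in contrastive learning, cluster mates in clustering)
    same = (groups[:, None] == groups[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    return same / same.sum(axis=1, keepdims=True)

def divergence_loss(p, q, eps=1e-12):
    # average KL divergence between real and learned neighbor distributions
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

# Toy usage: six points in two groups
rng = np.random.default_rng(0)
groups = np.array([0, 0, 0, 1, 1, 1])
embeddings = rng.normal(size=(6, 4))
print(divergence_loss(true_neighbors(groups), learned_neighbors(embeddings)))

Under this framing, swapping in a different definition of true_neighbors, say nearest neighbors in feature space, would yield a different algorithm from the same loss, which is the kind of recombination the framework is meant to make visible.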

The framework they created, information contrastive learning (I-Con), shows how a variety of algorithms can be viewed through the lens of this unifying equation. It includes everything from classification algorithms that can detect spam to the deep learning algorithms that power large language models (LLMs).

The equation describes how such algorithms find connections between real data points and then approximate those connections internally.

Each algorithm aims to minimize the amount of deviation between the connections it learns to approximate and the real connections in its training data.

They decided to organize I-Con into a periodic table to categorize algorithms based on how points are connected in real datasets and the primary ways algorithms can approximate those connections.

“The work went gradually, but once we had identified the general structure of this equation, it was easier to add more methods to our framework,” Alshammari says.

A tool for discovery

As they arranged the table, the researchers began to see gaps where algorithms could exist but had not yet been invented.

The researchers filled in one gap by borrowing ideas from a machine-learning technique called contrastive learning and applying them to image clustering. This resulted in a new algorithm that could classify unlabeled images 8 percent better than another state-of-the-art approach.

They also used I-Con to show how a data debiasing technique developed for contrastive learning could be used to boost the accuracy of clustering algorithms.

In addition, the periodic table is flexible: researchers can add new rows and columns to represent additional types of data-point connections.

Ultimately, having I-Con as a guide could help machine learning scientists think outside the box, encouraging them to combine ideas in ways they wouldn’t necessarily have thought of otherwise, says Hamilton.

“We’ve shown that just one very elegant equation, rooted in the science of information, gives you rich algorithms spanning 100 years of research in machine learning. This opens up many new avenues for discovery,” he adds.

“Perhaps the most challenging aspect of being a machine-learning researcher these days is the seemingly unlimited number of papers that appear each year. In this context, papers that unify and connect existing algorithms are of great importance, yet they are extremely rare. I-Con provides an excellent example of such a unifying approach and will hopefully inspire others to apply a similar approach to other domains of machine learning,” says Yair Weiss, a professor in the School of Computer Science and Engineering at the Hebrew University of Jerusalem, who was not involved in this research.

This research was funded, in part, by the Air Force Artificial Intelligence Accelerator, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, and Quanta Computer.

