The weekly student newspaper of Bucknell University

The Bucknellian


Bucknell research: smaller AI models can be more accurate, fairer

Emily Paine / Bucknell University

When it comes to increasingly popular Artificial Intelligence (AI) applications and neural network machine learning, a group of Bucknell University business analytics professors has found that less may be more in improving precision and accuracy and in reducing bias and harm.

Thiago Serra, a professor of analytics & operations management in the Freeman College of Management, along with students from his analytics lab, has authored two papers revealing how "pruning" machine learning models called neural networks may increase their accuracy. Neural networks are an AI technique that teaches computers to process data in a way inspired by the human brain. Their research was recently accepted for presentation at two prestigious AI-related conferences and may influence AI developers moving forward.

“You use data for training to come up with a neural network, and initially, everything is connected. We examined how the neural network performs with the data cut from it,” Serra said. 

The Bucknell researchers' first paper showed that applying a moderate amount of compression reduced the machine learning model's bias.


“If you start removing smaller connections that are not so important, the network actually performs better than before,” Serra said. “That’s because the performance is more uniform when you prune it—even just a little bit.”
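The pruning Serra describes can be illustrated with a small sketch. This is not the researchers' actual code, only a minimal example of magnitude pruning, the common approach of zeroing out the connections with the smallest absolute weights; the function name and example matrix are invented for illustration.

```python
import numpy as np

def prune_by_magnitude(weights, fraction):
    """Zero out the given fraction of weights with the smallest magnitudes."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * fraction)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the cutoff for removal
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Remove the 50% smallest connections in a toy weight matrix
w = np.array([[0.9, -0.05],
              [0.02, -0.8]])
pruned = prune_by_magnitude(w, 0.5)
# The strong connections (0.9 and -0.8) survive; the weak ones are cut
```

In a real neural network this would be applied to each layer's weight matrix, after which the smaller model can be retrained or fine-tuned.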

The team concluded that pruning neural networks makes their correct predictions more uniform across different groups, which decreases bias. Students from all four undergraduate classes collaborated with Serra on this paper, which was presented last December at NeurIPS, the leading conference on neural networks.
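One simple way to see whether predictions are uniform across groups is to compare per-group accuracy. The sketch below is a hypothetical illustration of that idea, not a metric from the papers; the function name and data are invented.

```python
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two groups --
    a rough proxy for how uniformly a model performs across them."""
    y_true, y_pred, groups = map(np.array, (y_true, y_pred, groups))
    accs = [np.mean(y_true[groups == g] == y_pred[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)

# Hypothetical labels and predictions for two demographic groups
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
gap = accuracy_gap(y_true, y_pred, groups)  # group "a": 2/3 correct, group "b": 3/3
```

A smaller gap means the model treats the groups more evenly, which is the sense in which moderate pruning reduced bias in the team's experiments.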

The second paper the team produced approached neural network compression from a different angle: determining which connections to remove initially to increase efficiency.

“There is very little in terms of theory for where to prune from a neural network, so we came up with our own mathematical results by extending some of my prior work,” Serra said. “That work was intended to decide what neural network architectures—in terms of numbers of layers and neurons—are more expressive, and we extended it to deal with the number of connections within neural networks.”

The team of researchers was supported by a two-year, $174,000 National Science Foundation grant meant to develop exact neural network compression algorithms that effectively reduce networks' size while making them more widely usable on conventional devices.

Serra presented that paper in June at the International Conference on the Integration of Constraint Programming, Artificial Intelligence and Operations Research in Nice, France.

"All of this has come from my own work, which I started in 2018, and from testing neural networks in my lab over the past two years," Serra said. "We can use this analysis to get neural networks to do all kinds of things, and these papers will help us leverage what they do to work better in the future."

Serra and his students will continue their neural network optimization work as interest surrounding AI-related applications and technological advancements grows.
