Devon Lister

Writing your first academic paper is an accomplishment in itself. Getting it accepted by a conference is a remarkable achievement. Writing your first paper, getting it accepted and winning a best paper award for your efforts is truly something to celebrate. That’s exactly what computer engineering senior Devon Lister was able to do for his work on a novel style of neural networks designed to mimic human thinking.

Lister won the Amar Mukherjee Best Paper Award from the IEEE Computer Society Annual Symposium on Very Large Scale Integration (ISVLSI) for his work, “Catwalk: Unary Top-K for Efficient Ramp-No-Leak Neuron Design for Temporal Neural Networks.”

“The award was unexpected and absolutely a joy to receive,” he says. “My first paper getting accepted was already a treat, and then winning the best paper award was just not on my radar.”

Lister is a computer engineering transfer student who joined UCF in 2022. Though he had worked on various projects for the UCF Baja Team, designed and built his own keyboard and worked on his own reverse engineering projects, research was an area he had yet to experience before taking on this paper.

“I had no idea what research entailed, as this was my first paper, and I wanted to try out academia,” he says. “I also wanted something more complicated than what my classes offered since I am a very project-heavy person.”

Assistant Professor Di Wu, director of the U.N.A.R.Y. Lab, which stands for Unary, Neuromorphic, Approximate, Reconfigurable and Yet More Computing, encouraged Lister to research optimization techniques placed within neural networks, and later submit the work to ISVLSI.

“The human brain is a masterpiece of mother nature with extremely high efficiency and intelligence,” Wu says. “Our work moves one step towards the efficiency of the biological neuron, which subsequently allows more neurons towards the intelligence of the brain.”

Lister explains that the research can be applied during the design phase of low-level circuit planning. It’s a technique that is designed to save space and power, a significant advantage in building neural networks.

“We identified certain optimization techniques and proved they could be successfully implemented within neural networks; specifically, a style of neural networks that aims to replicate human-esque thinking,” he says. “If you compared a brain neuron to a circuit equivalent, the circuit would take up more area and power. This paper showed that we were able to save area and power, ultimately getting closer to the ideal brain-like thinking algorithm.”

Lister says he’s continuing the research by building a framework that would automatically create the required optimization design after data is entered into an interface.

“Right now it would take a manual implementation, but soon the idea is that it would be as simple as laying out what you want, and our system would implement it without the user ever having to know what we did.”

With graduation just around the corner in Spring 2026, Lister has been attending workshops to work on his resume in preparation for his upcoming interviews. He says he plans to continue his work in hardware design both personally and professionally.

“I remain positive that once I graduate, I can nail the interview, because I can demonstrate my skills and knowledge once I get the chance.”

Written by Bel Huston | Sept. 4, 2025