
The next generation of neural networks could live in hardware


Once the network has been trained, though, things get way, way cheaper. Petersen compared his logic-gate networks with a cohort of other ultra-efficient networks, such as binary neural networks, which use simplified perceptrons that can process only binary values. The logic-gate networks did just as well as these other efficient methods at classifying images in the CIFAR-10 data set, which includes 10 different categories of low-resolution pictures, from “frog” to “truck.” They achieved this with less than a tenth of the logic gates required by those other methods, and in less than a thousandth of the time. Petersen tested his networks using programmable computer chips called FPGAs, which can be used to emulate many different potential patterns of logic gates; implementing the networks in non-programmable ASIC chips would reduce costs even further, because programmable chips need extra components to achieve their flexibility.
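To see why inference gets so cheap, here is a minimal sketch in Python (hypothetical wiring, not Petersen's actual architecture): once training ends, each node in a logic-gate network is a fixed two-input Boolean gate, so a forward pass is nothing but bitwise operations, whereas even a binarized perceptron still needs a weighted sum and a threshold test.

```python
def gate_and(a, b): return a & b
def gate_or(a, b):  return a | b
def gate_xor(a, b): return a ^ b

def tiny_logic_net(x0, x1, x2, x3):
    # Three hand-wired gates standing in for a trained network: pure bit operations.
    h0 = gate_and(x0, x1)
    h1 = gate_xor(x2, x3)
    return gate_or(h0, h1)

def binary_perceptron(bits, weights, threshold):
    # A binarized perceptron, by contrast, still sums +/-1 contributions and thresholds.
    s = sum(w if b else -w for b, w in zip(bits, weights))
    return 1 if s >= threshold else 0

print(tiny_logic_net(1, 1, 0, 1))                          # -> 1
print(binary_perceptron([1, 1, 0, 1], [1, 1, -1, 1], 1))   # -> 1
```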

Farinaz Koushanfar, a professor of electrical and computer engineering at the University of California, San Diego, says she isn’t convinced that logic-gate networks will be able to perform when faced with more realistic problems. “It’s a cute idea, but I’m not sure how well it scales,” she says. She notes that the logic-gate networks can only be trained approximately, via the relaxation strategy, and approximations can fail. That hasn’t caused issues yet, but Koushanfar says that it could prove more problematic as the networks grow. 
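The relaxation Koushanfar refers to can be pictured with a simplified sketch (an illustration of the general idea, not the paper's exact formulation): during training, each node computes a smooth, probability-weighted blend of Boolean functions over real-valued inputs so that gradients can flow; afterward, each node is snapped to its single most likely gate, and that snap is where the approximation comes in.

```python
import math

# Real-valued relaxations of two-input Boolean functions (exact on 0/1 inputs).
RELAXED_GATES = {
    "AND":  lambda a, b: a * b,
    "OR":   lambda a, b: a + b - a * b,
    "XOR":  lambda a, b: a + b - 2 * a * b,
    "NAND": lambda a, b: 1 - a * b,
}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_gate(a, b, logits):
    """Training-time node: a differentiable, probability-weighted blend of gates."""
    probs = softmax(logits)
    return sum(p * g(a, b) for p, g in zip(probs, RELAXED_GATES.values()))

def hard_gate(a, b, logits):
    """Inference-time node: keep only the single most likely gate."""
    best = max(zip(logits, RELAXED_GATES), key=lambda pair: pair[0])[1]
    return RELAXED_GATES[best](a, b)

logits = [2.0, 0.1, -1.0, 0.3]        # hypothetical learned parameters favoring AND
print(soft_gate(0.9, 0.8, logits))    # smooth blend dominated by AND(0.9, 0.8) = 0.72
print(hard_gate(1, 1, logits))        # after discretization: a plain AND gate -> 1
```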

Nevertheless, Petersen is ambitious. He plans to continue pushing the abilities of his logic-gate networks, and he hopes, eventually, to create what he calls a “hardware foundation model.” A powerful, general-purpose logic-gate network for vision could be mass-produced directly on computer chips, and those chips could be integrated into devices like personal phones and computers. That could reap enormous energy benefits, Petersen says. If those networks could effectively reconstruct photos and videos from low-resolution information, for example, then far less data would need to be sent between servers and personal devices. 

Petersen acknowledges that logic-gate networks will never compete with traditional neural networks on performance, but that isn’t his goal. Making something that works, and that is as efficient as possible, should be enough. “It won’t be the best model,” he says. “But it should be the cheapest.”

