
Nanosecond Neural Networks: MIT’s Photonic Chip Approach to Hyper-Efficient AI Computation

Friday, December 20, 2024, 01:04 PM, from eWeek
The Massachusetts Institute of Technology (MIT) has taken a major step forward in AI by introducing a photonic chip that promises faster and more energy-efficient AI computations. Developed by MIT researchers, this cutting-edge technology achieves an impressive 96 percent accuracy during training and 92 percent during inference, rivaling the performance of conventional electronic processors while using significantly less power.

This chip, capable of completing key calculations in under half a nanosecond, could pave the way for ultra-fast AI applications. The innovation addresses a growing concern in AI: the increasing energy demands of traditional electronic hardware.

Why Photonic Chips Matter

The demand for more efficient AI model training has been growing as deep neural networks (DNNs)—the backbone of modern AI—require vast computational resources. Traditional electronic hardware struggles to keep pace with these demands, consuming immense energy and nearing its performance limits. The photonic chip developed by MIT could change that by performing key computations at lightning speed, completing tasks in less than half a nanosecond.

Deep neural networks process data through interconnected layers, mimicking the human brain. They rely on two main types of operations: linear computations, which involve matrix multiplication, and nonlinear computations, which allow models to detect complex patterns. While photonic processors have previously managed linear tasks, the challenge has always been nonlinear computations, as light particles (photons) naturally resist interacting with one another.
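To make the distinction concrete, here is a minimal illustrative sketch, not MIT's implementation, of a single neural network layer in Python: the matrix multiplication is the linear operation, and the activation function is the nonlinear one. The weights and inputs below are invented purely for demonstration.

```python
import numpy as np

# Illustrative only: one dense layer of a deep neural network.
# W, b, and x are made-up parameters and input for this example.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # weight matrix (learned parameters)
b = rng.normal(size=4)        # bias vector
x = rng.normal(size=8)        # input vector

linear_out = W @ x + b                        # linear step: matrix multiplication
nonlinear_out = np.maximum(linear_out, 0.0)   # nonlinear step: ReLU activation

print(nonlinear_out)
```

Photonic processors handle the first line of that computation naturally, because interference of light can perform matrix multiplication; it is the second, nonlinear line that has historically been hard to do in optics.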

“Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily,” explained Saumil Bandyopadhyay, a visiting scientist at MIT involved in the project. “That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way.”

Solving the Nonlinear Puzzle

The MIT team, led by Dirk Englund of the Quantum Photonics and Artificial Intelligence Group, tackled this challenge by designing “nonlinear optical function units,” or NOFUs. According to the report, the researchers “built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.”

These hybrid devices integrate both photonic and electronic components to handle nonlinear operations. By converting small amounts of light into electric current, the chip eliminates the need for power-hungry amplifiers, maintaining energy efficiency. The researchers successfully fabricated the trial chip using the same foundry processes used for traditional CMOS chips. This scalability means that the photonic chip could be integrated into existing manufacturing systems, paving the way for widespread adoption in the near future.
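The basic idea of a NOFU can be illustrated with a toy numerical model: tap off a small fraction of the incoming light, convert it to a photocurrent, and let that current modulate the light that continues through the device, so the output depends nonlinearly on the input power. The sketch below is a hedged conceptual model only; the function name, parameters, and transfer curve are assumptions for illustration, not the MIT design.

```python
import numpy as np

def nofu_toy_model(optical_power, tap_fraction=0.1, responsivity=1.0, gain=5.0):
    """Toy model (hypothetical, not the MIT circuit) of a NOFU-style nonlinearity.

    A small fraction of the light is tapped off and converted to a photocurrent;
    that current then modulates the transmission of the remaining light, making
    the output a nonlinear function of the input power without an external
    electronic amplifier. All parameter names and values are illustrative.
    """
    tapped = tap_fraction * optical_power             # light diverted to the photodetector
    photocurrent = responsivity * tapped              # optical-to-electrical conversion
    transmission = 1.0 / (1.0 + gain * photocurrent)  # current-controlled modulator
    return (1.0 - tap_fraction) * optical_power * transmission

powers = np.linspace(0.0, 2.0, 5)
print([round(nofu_toy_model(p), 3) for p in powers])  # nonlinear input-output curve
```

The key point the article makes is that because only a small amount of light is converted to electricity, the nonlinearity comes essentially for free in power terms, which is what makes the approach scalable.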

As AI computations become more integral to applications like self-driving cars, natural language processing, and advanced robotics, innovations like MIT’s photonic chip promise to revolutionize the field by making AI both faster and greener.

Funded by agencies such as the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research, this research could lay the foundation for next-generation AI hardware.

Source: https://www.eweek.com/news/mit-photonic-neural-network-chip/
