Abstract
In recent years, the rapid development of Deep Neural Networks (DNNs) has posed significant challenges in terms of training time and cost. High-frequency, low-power photonic computing has emerged as a highly promising solution. However, the substantial cost of data conversion and the limitations introduced by noise in photonic devices continue to hinder high-precision, energy-efficient DNN training. To address this challenge, we propose a novel photonic accelerator, ROCKET, based on the Residue Number System (RNS). RNS builds on modular arithmetic and supports high-precision computation through parallel multi-path low-precision operations. First, we leverage specialized lookup tables to enable high-throughput, low-latency conversions between high-precision and low-precision numerical representations. Next, we design a low-power photonic accelerator architecture utilizing intensity modulators, which minimizes the number of computational components while maximizing data reuse. Subsequently, we propose a hybrid photonic-electronic pipelined dataflow to maximize parallelism within the photonic-electronic computation path. Finally, we develop a high-frequency (4.096 GHz) hybrid photonic-electronic prototype using FPGA, Radio Frequency (RF), and photonic components to validate the feasibility of ROCKET. Our large-scale simulations on seven mainstream DNN models show that, compared to the A100 GPU, TPU v4, and the state-of-the-art photonic accelerator Mirage, ROCKET achieves speedups of 33×, 243×, and 198×, respectively, while saving energy by factors of 64×, 204×, and 142×.
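The RNS idea at the heart of ROCKET can be illustrated with a minimal sketch: a high-precision integer is decomposed into independent low-precision residues modulo a set of pairwise-coprime moduli, arithmetic proceeds channel-by-channel at low precision, and the exact high-precision result is recovered via the Chinese Remainder Theorem. The moduli below are illustrative choices for this sketch only; the abstract does not specify the moduli or conversion tables ROCKET actually uses.

```python
from math import prod

# Illustrative pairwise-coprime moduli (hypothetical; not the paper's choice).
MODULI = [251, 241, 239]
M = prod(MODULI)  # dynamic range: integers in [0, M) are exactly representable

def to_rns(x):
    """Decompose an integer into its low-precision residues."""
    return [x % m for m in MODULI]

def from_rns(residues):
    """Reconstruct the integer via the Chinese Remainder Theorem."""
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
    return total % M

# A "high-precision" multiply becomes independent low-precision multiplies,
# one per residue channel, with no carries between channels.
a, b = 1234, 5678
prod_rns = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI)]
result = from_rns(prod_rns)  # equals a * b exactly, since a * b < M
```

Because each channel operates modulo a small number, every operation fits in low precision; this is what lets a noisy, low-precision analog substrate compose into high-precision computation.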