Abstract
To improve inference accuracy, deep neural network (DNN) models have been growing steadily in scale and complexity, leading to increased latency and computational resource demands. This growth necessitates scalable architectures, such as chiplet-based accelerators, to accommodate the substantial volume of deep learning inference tasks. However, the efficiency, energy consumption, and scalability of existing accelerators are severely constrained by metallic interconnects. Photonic interconnects, by contrast, offer a promising alternative, with their advantages of low latency, high bandwidth, high energy efficiency, and simplified communication processes. In this paper, we propose ChipAI, an accelerator designed around photonic interconnects for accelerating DNN inference tasks. ChipAI implements an efficient hybrid optical network that supports effective inter-chiplet and intra-chiplet data sharing, thereby enhancing parallel processing capabilities. Additionally, we propose a flexible dataflow that leverages the ChipAI architecture and the characteristics of DNN models, facilitating efficient architectural mapping of DNN layers. Simulations on various DNN models demonstrate that, compared to the state-of-the-art chiplet-based DNN accelerator with photonic interconnects, ChipAI reduces DNN inference time and energy consumption by up to 82% and 79%, respectively.