
CAREER: Advancing Combinatorial Optimization Accelerators with Compute in Memory Design Approach

This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

Combinatorial optimization problems arise in many real-world social and industrial data-intensive computing applications. Examples include optimization of mRNA sequences for COVID-19 vaccines, semiconductor supply chains, and financial index tracking, to name a few. Such optimization problems are computationally intensive, and a brute-force search for the optimum solution becomes untenable as the problem size increases. An efficient way to solve an optimization problem is to let nature perform the exhaustive search in the physical world by mapping the problem onto an Ising model. The Ising model describes spin dynamics in a ferromagnet, wherein spins naturally orient to achieve the lowest energy state, which represents the optimal solution to a given optimization problem. Performing such Ising computations with conventional methods requires numerous compute iterations, resulting in frequent off-chip memory accesses and significant energy overhead. The goal of this project is to advance the development of energy- and cost-efficient combinatorial optimization hardware accelerators that can be integrated into modern integrated circuits to solve critical optimization problems such as those mentioned above. The research results from this project will be disseminated to students in the form of course design case studies. Reciprocally, some of the course projects will be aligned with Ising accelerator designs, enabling tight research-teaching integration. The project also aims to engage underrepresented and minority students through undergraduate and graduate mentoring and research experiences.
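The Ising mapping described above can be sketched in software. The following toy example (our illustration, not part of the award) encodes a small max-cut problem as an Ising model, where each edge of weight w becomes a coupling J[i][j] = -w, and uses simulated annealing to stand in for the physical relaxation that Ising hardware performs natively:

```python
import math
import random

# Illustrative toy graph (an assumption for this sketch): a 4-node cycle
# with one diagonal, all edges weight 1. Cutting more edge weight yields
# a lower Ising energy, since H = sum_edges w * s_i * s_j = 5 - 2 * cut.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): 1.0}
n = 4
J = [[0.0] * n for _ in range(n)]
for (i, j), w in edges.items():
    J[i][j] = J[j][i] = -w  # antiferromagnetic coupling per edge

def energy(spins):
    # Ising Hamiltonian (no external field): H = -sum_{i<j} J_ij s_i s_j
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def delta_energy(spins, k):
    # Energy change from flipping spin k: dH = 2 * s_k * sum_j J_kj s_j
    return 2 * spins[k] * sum(J[k][j] * spins[j] for j in range(n))

def anneal(steps=5000, t_hot=2.0, t_cold=0.05, seed=1):
    # Simulated annealing mimics the natural search: accept downhill flips
    # always, uphill flips with Boltzmann probability at temperature t.
    random.seed(seed)
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for step in range(steps):
        t = t_hot * (t_cold / t_hot) ** (step / steps)  # geometric cooling
        k = random.randrange(n)
        d_e = delta_energy(spins, k)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            spins[k] = -spins[k]
    return spins

best = anneal()
cut_weight = sum(w for (i, j), w in edges.items() if best[i] != best[j])
```

For this graph the minimum-energy spin configuration alternates around the cycle, cutting four of the five unit-weight edges. The many iterations inside `anneal` are exactly the repeated compute passes that, on conventional hardware, translate into frequent memory accesses.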

This project proposes a unique analog compute-within-memory design approach that performs Ising computations by reconfiguring existing memory array circuitry. In contrast to prior near-memory, digital-arithmetic computing approaches, this compute-in-memory approach performs Ising Hamiltonian computations in the analog domain within a memory array with minimal circuit changes. It maps Hamiltonian computations onto available memory wordline and bitline circuitry, which has remained a key technical challenge so far. In addition, this project will investigate ways to seamlessly map large Ising models across multiple memory banks, thereby scaling up the Ising spin count significantly. The project aims to demonstrate compute-in-memory Ising accelerator silicon prototypes, perform design-space exploration, and quantify the benefits over prior approaches. Furthermore, the project will explore the high-density memory needs of future complex combinatorial-optimization accelerators utilizing large-scale Ising models. This project will systematically investigate device-technology-circuit co-design aspects of emerging monolithically integrated 3D memory technologies, which could substantially extend the benefits of compute-in-memory Ising accelerators for solving extreme-scale optimization problems. The tightly integrated research, education, and outreach plan aims to establish close industry relationships, integrate this research with a graduate course, deliver online courses, expand K-12 outreach, and train students in memory devices, circuit design, and combinatorial-optimization algorithms to strengthen the STEM workforce.
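As a conceptual illustration of the wordline/bitline mapping (our sketch under simplifying assumptions, not the project's actual circuit), a compute-in-memory array evaluating an Ising Hamiltonian can be modeled as a parallel matrix-vector operation: each row of the coupling matrix J is stored along a wordline, spin values drive the array inputs, and each bitline's summed analog current yields one local field h_k = sum_j J[k][j] * s_j in a single array access:

```python
# Software model of one analog compute-in-memory step. In hardware the
# multiply-accumulate below is a parallel analog current summation on the
# bitlines; here it is emulated with explicit loops.

def local_fields(J, spins):
    # One "array read": all local fields h_k = sum_j J[k][j] * s_j at once.
    return [sum(row[j] * spins[j] for j in range(len(spins))) for row in J]

def cim_sweep(J, spins):
    # Each spin aligns with the sign of its local field, lowering the
    # energy H = -sum_{i<j} J_ij s_i s_j (ties resolved to +1).
    return [1 if h >= 0 else -1 for h in local_fields(J, spins)]

# Toy usage (illustrative values): a 3-spin ferromagnetic chain relaxes to
# the all-aligned ground state within two sweeps.
J = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
spins = [1, 1, -1]
spins = cim_sweep(J, spins)   # -> [1, 1, 1]
```

Because every multiply-accumulate for a sweep happens inside the array, no coupling weights need to leave the memory, which is the source of the energy savings over off-chip digital iteration described above.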
