Machine Learning for Digital Pre-Distortion

The project addressed the challenge of passing a pre-distorted signal through a power amplifier (PA) to achieve a linearized output signal. A PA amplifies a signal's power, but the process introduces non-linear distortion. Digital Pre-Distortion (DPD) pre-distorts the signal in such a way that these non-linearities are compensated when the signal passes through the PA, resulting in a linear output.
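
For intuition, here is a minimal numerical sketch of the idea, assuming a hypothetical memoryless PA with a third-order compression term and a first-order polynomial inverse as the pre-distorter; the model and coefficients are illustrative, not fitted to real hardware or to the project's setup.

```python
import numpy as np

# Hypothetical memoryless PA: linear gain with a mild third-order term
# that compresses large-amplitude samples (the source of non-linearity).
def pa_model(x, gain=10.0, k3=-0.05):
    return gain * x * (1.0 + k3 * np.abs(x) ** 2)

# Pre-distorter: first-order inverse of the compression (c3 ~ -k3), so the
# PA's gain droop is largely cancelled at moderate amplitudes.
def predistort(x, c3=0.05):
    return x * (1.0 + c3 * np.abs(x) ** 2)

x = 0.8 * np.exp(1j * 2 * np.pi * 0.01 * np.arange(1000))  # complex test tone
y_raw = pa_model(x)              # compressed output, effective gain below 10
y_dpd = pa_model(predistort(x))  # much closer to the ideal 10 * x
```

In practice the pre-distorter must track memory effects and higher-order terms of the real amplifier, which is why the project predicts the DPD coefficients with learned models rather than a fixed inverse.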

Effective DPD requires predicting the DPD matrix coefficients that, when applied, accurately counteract the PA's non-linear behavior. Our project used Machine Learning (ML) models for this prediction task. We developed three models: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and PGJanet. Each model outputs a digitally pre-distorted signal and a corresponding DPD matrix, optimizing for Error Vector Magnitude (EVM) and Adjacent Channel Leakage Ratio (ACLR). This approach aims to enhance signal integrity and efficiency in telecommunications systems.
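
As an illustration of how such a recurrent pre-distorter might be structured, the following PyTorch sketch shows an LSTM operating on I/Q samples; recurrent models are a natural fit here because they can capture the PA's memory effects. The layer sizes, input layout, and training target below are assumptions for illustration, not the project's actual architecture or learning setup.

```python
import torch
import torch.nn as nn

class LSTMDPD(nn.Module):
    """Sketch of an LSTM-based pre-distorter on I/Q samples (assumed layout)."""
    def __init__(self, hidden_size=32):
        super().__init__()
        # Two input features per time step: in-phase (I) and quadrature (Q).
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            batch_first=True)
        self.out = nn.Linear(hidden_size, 2)  # predicted pre-distorted I/Q

    def forward(self, iq):                    # iq: (batch, time, 2)
        h, _ = self.lstm(iq)
        return self.out(h)

model = LSTMDPD()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(iq_in, iq_target):
    # iq_target stands in for the desired pre-distorted waveform
    # (e.g. obtained via an indirect-learning setup); this is an assumption.
    optimizer.zero_grad()
    loss = loss_fn(model(iq_in), iq_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A GRU variant would only swap `nn.LSTM` for `nn.GRU`; the PGJanet model used in the project has its own gating structure and is not reproduced here.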

Team Members:

Joshua Dao

Albert Wang

Alba Arias

Teyo Turrubiates

Tiffany Lam

Hayden Warren

Frankie Ortiz

Sponsors:
Martin Lim and Andreas Roessler
Semester