We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions, rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\epsilon}\right)^{1 - \frac{1}{k}}$ under $(\epsilon, \delta)$-approximate differential privacy, up to a mild $\textup{polylog}(\frac{1}{\delta})$ factor, where $G_2^2$ and $G_k^k$ are the $2^{\text{nd}}$ and $k^{\text{th}}$ moment bounds on sample Lipschitz constants, nearly-matching a lower bound of (Lowy et al. 2023).
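As a brief sanity check (not part of the abstract itself), the display below restates the claimed rate and its limiting behavior; the uniform Lipschitz constant $G$ and the limit $k \to \infty$ are illustrative assumptions, and constants and logarithmic factors are suppressed.
% Illustrative restatement: L denotes a sample Lipschitz constant, n the sample size,
% d the dimension; the moment bounds are E[L^2] <= G_2^2 and E[L^k] <= G_k^k.
\[
  \underbrace{G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\epsilon}\right)^{1 - \frac{1}{k}}}_{\text{heavy-tailed rate, } \mathbb{E}[L^k] \le G_k^k}
  \;\xrightarrow{\;k \to \infty\;}\;
  G \cdot \left(\frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\epsilon}\right),
\]
where the right-hand side is the classical optimal DP-SCO rate under a uniform Lipschitz bound $L \le G$, so the heavy-tailed rate recovers the uniformly Lipschitz setting as $k \to \infty$.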