We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and develop a theory for data-driven calibration of the non-negative “robustness parameter” δ that controls the size of the deviations...
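The penalized robust problem described above can be sketched as follows. This is a generic formulation under standard conventions for φ-divergence-penalized robust optimization, not necessarily the paper's exact notation: $x$ is the decision, $\xi$ the random data, $f$ the smooth concave objective, and $\hat{P}_n$ the empirical distribution of the sample.

```latex
\[
\max_{x \in X}\; \inf_{Q \ll \hat{P}_n}\;
\Big\{ \mathbb{E}_{Q}\big[f(x,\xi)\big]
  + \tfrac{1}{\delta}\, D_{\phi}\big(Q \,\|\, \hat{P}_n\big) \Big\},
\qquad
D_{\phi}(Q \,\|\, P) \;=\; \mathbb{E}_{P}\Big[\phi\Big(\tfrac{dQ}{dP}\Big)\Big],
\]
```

Here $\delta \ge 0$ scales the penalty on deviations of the worst-case distribution $Q$ from $\hat{P}_n$: as $\delta \to 0$ the inner adversary is fully penalized and the problem reduces to the nominal sample-average problem, while larger $\delta$ admits larger deviations and hence more conservative solutions.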