This article describes a simple way to assess the statistical power of experimental designs. The approach presented is based on the concept of a minimum detectable effect, which, intuitively, is the smallest true impact that an experiment has a good chance of detecting. The article illustrates how to compute minimum detectable effects and how to apply this concept to the assessment of alternative experimental designs. Applications to impact estimators for both continuous and binary outcome measures are considered.
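
As a rough illustration of the idea (not a reproduction of the article's own formulas), the sketch below uses the conventional normal-approximation expression for the minimum detectable effect of a two-group randomized experiment, MDE = (z_{1-α/2} + z_{1-β}) · SE, applied to a continuous and a binary outcome. The function names, default significance level, power, and sample sizes are illustrative assumptions.

```python
# Minimal sketch of a minimum-detectable-effect (MDE) calculation for a
# two-group randomized experiment, using the normal approximation.
# All parameter values below are illustrative assumptions, not values
# taken from the article.
import math
from scipy.stats import norm

def mde_continuous(sigma, n, treat_share=0.5, alpha=0.05, power=0.80, two_sided=True):
    """MDE for a continuous outcome with standard deviation `sigma`,
    total sample size `n`, and fraction `treat_share` assigned to treatment."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_power = norm.ppf(power)
    # Standard error of the difference in means under simple random assignment
    se = sigma * math.sqrt(1.0 / (n * treat_share * (1 - treat_share)))
    return (z_alpha + z_power) * se

def mde_binary(p_control, n, treat_share=0.5, alpha=0.05, power=0.80):
    """MDE (in proportion terms) for a binary outcome with control-group
    mean `p_control`, using the Bernoulli standard deviation."""
    sigma = math.sqrt(p_control * (1 - p_control))
    return mde_continuous(sigma, n, treat_share, alpha, power)

if __name__ == "__main__":
    # Example: 1,000 subjects split evenly, 5% two-sided test, 80% power
    print(round(mde_continuous(sigma=1.0, n=1000), 3))   # in standard-deviation units
    print(round(mde_binary(p_control=0.50, n=1000), 3))  # in proportion units
```

Comparing the output of such a calculation across candidate designs (different sample sizes, allocation ratios, or outcome types) is one way to operationalize the assessment of alternative experimental designs that the article describes.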