Like dropout, batch normalization (proposed by Sergey Ioffe and Christian Szegedy in Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, ICML, 2015) is an operation that can be inserted into neural networks and affects their training. This operation takes the batched results of the preceding layer and normalizes them; that is, it subtracts the batch mean and divides by the batch standard deviation.
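The following is a minimal NumPy sketch of this per-batch normalization, assuming 2D activations of shape (batch_size, num_features); the small epsilon term and the learnable scale/shift parameters (gamma and beta) follow the original formulation by Ioffe and Szegedy:

```python
import numpy as np

def batch_normalize(x, gamma=1.0, beta=0.0, epsilon=1e-5):
    """Normalize each feature over the batch dimension (illustrative sketch)."""
    batch_mean = x.mean(axis=0)   # per-feature mean over the batch
    batch_var = x.var(axis=0)     # per-feature variance over the batch
    x_hat = (x - batch_mean) / np.sqrt(batch_var + epsilon)  # normalize
    return gamma * x_hat + beta   # learnable scale and shift

# Example: a random batch of 32 samples with 8 features each.
batch = np.random.randn(32, 8) * 3.0 + 5.0
normalized = batch_normalize(batch)
print(normalized.mean(axis=0))  # approximately 0 for every feature
print(normalized.std(axis=0))   # approximately 1 for every feature
```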
Since batches are randomly sampled in SGD (and thus are rarely the same twice), the data will almost never be normalized exactly the same way. The network therefore has to learn how to deal with these fluctuations, making it more robust and better at generalizing. Furthermore, this normalization step concomitantly improves the way the gradients flow through the network, facilitating the SGD process.
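In practice, batch normalization is typically inserted as a layer between other layers. As a sketch, assuming tf.keras is the framework in use (an assumption for illustration; the layer sizes are placeholders), it might look like this:

```python
import tensorflow as tf

# A small classifier with a BatchNormalization layer inserted after a
# dense layer; here it is placed before the activation, a common choice.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128),
    tf.keras.layers.BatchNormalization(),  # normalize activations over each batch
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```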