When setting the epoch and batch size, you may have wondered why you need to set both values. Why isn't the epoch alone enough? Why set a batch size as well, and what does it actually do?
Well, for beginners, batch size and epoch can sound very confusing, since both play a similar role in training a model.
To understand the difference between batch size and epoch, we first need to understand what gradient descent is and how it optimizes the model's internal weights.
But if you don't want to go deeper, here is the short version: the batch size is the number of samples from the dataset that pass through the network before the internal weights are updated. The batch size can be any value from one up to the total dataset size.
The epoch is the number of times the entire dataset passes through the network. An epoch value can be anything from one up to infinity, which means you can train a model on the same dataset as many times as you want.
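To make the relationship concrete, here is a minimal sketch of a training loop that only counts weight updates. The function name `count_weight_updates` and the example numbers are made up for illustration; the point is that one epoch is one full pass over the data, and each batch within that pass triggers one weight update.

```python
def count_weight_updates(dataset_size, batch_size, epochs):
    """Count how many weight updates a training loop would perform.

    One epoch = one full pass over the dataset.
    One batch = one weight update.
    """
    updates = 0
    for _epoch in range(epochs):
        # Walk through the dataset in chunks of `batch_size`;
        # each chunk would trigger one update of the weights.
        for _start in range(0, dataset_size, batch_size):
            updates += 1
    return updates

# 1,000 samples, batch size 100, 5 epochs:
# 10 updates per epoch, so 50 updates in total.
print(count_weight_updates(1000, 100, 5))
```

So with the same dataset and the same number of epochs, a smaller batch size simply means more weight updates per epoch.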