Ensemble Learning: Bagging & Boosting

How to combine weak learners into a stronger learner and reduce bias and variance in your ML model

The bias-variance tradeoff is one of the key concerns when working with machine learning algorithms. Fortunately, there are Ensemble Learning techniques that practitioners can use to tackle it: bagging and boosting. In this blog post we explain how bagging and boosting work, what their components are, and how you can apply them to your ML problem.
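As a quick taste of what the post covers, here is a minimal sketch (assuming scikit-learn >= 1.2, where the base-learner argument is named `estimator`, and a synthetic dataset just for illustration) that compares a single shallow decision tree with a bagged ensemble and a boosted ensemble built from the same weak learner:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy dataset, used only to illustrate the comparison
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

models = {
    # A single weak learner: a shallow decision tree
    "single tree": DecisionTreeClassifier(max_depth=3, random_state=42),
    # Bagging: trains trees on bootstrap samples and averages them
    # (mainly reduces variance)
    "bagging": BaggingClassifier(
        estimator=DecisionTreeClassifier(max_depth=3),
        n_estimators=100,
        random_state=42,
    ),
    # Boosting: fits trees sequentially, focusing on previously
    # misclassified examples (mainly reduces bias)
    "boosting": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=3),
        n_estimators=100,
        random_state=42,
    ),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```

On a dataset like this you would typically expect both ensembles to outperform the single shallow tree, with bagging smoothing out the variance of individual trees and boosting driving down their bias; the rest of the post unpacks why that happens.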

#ensemble-learning