Ensemble Learning: Bagging & Boosting

How to combine weak learners into a stronger learner that reduces bias and variance in your ML model

The bias-variance tradeoff is one of the key concerns when working with machine learning algorithms. Fortunately, there are Ensemble Learning techniques that practitioners can use to tackle it: bagging and boosting. In this blog we explain how bagging and boosting work, what their components are, and how you can apply them to your ML problem. Read More
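As a taste of what the full post covers, here is a minimal, purely illustrative sketch of bagging: weak learners (one-feature decision stumps, an assumption chosen here for brevity) are each trained on a bootstrap sample, and predictions are combined by majority vote. The data, function names, and stump learner are all invented for this example.

```python
import random
from collections import Counter

def train_stump(data):
    # data: list of (feature_value, label) pairs; try each observed value
    # as a threshold, in both directions, and keep the lowest-error stump.
    best = None
    for thr_candidate, _ in data:
        for sign in (1, -1):
            preds = [1 if sign * (x - thr_candidate) >= 0 else 0
                     for x, _ in data]
            errors = sum(p != y for p, (_, y) in zip(preds, data))
            if best is None or errors < best[0]:
                best = (errors, thr_candidate, sign)
    _, thr, sign = best
    return lambda x: 1 if sign * (x - thr) >= 0 else 0

def bagging_fit(data, n_estimators=25, seed=0):
    # Train each stump on a bootstrap sample (drawn with replacement).
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        sample = [rng.choice(data) for _ in data]
        models.append(train_stump(sample))
    return models

def bagging_predict(models, x):
    # Aggregate by majority vote across the ensemble.
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

# Toy 1-D dataset, cleanly separable around 0.5.
data = [(0.1, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
models = bagging_fit(data)
print(bagging_predict(models, 0.2))   # class 0 region
print(bagging_predict(models, 0.85))  # class 1 region
```

Because each stump sees a different bootstrap sample, individual stumps vary, but the majority vote smooths that variance out, which is exactly the effect bagging is after.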

#ensemble-learning

Ensemble Learning: Stacking, Blending & Voting

If you want to increase the effectiveness of your ML model, consider Ensemble Learning

We have all heard the phrase “unity is strength,” whose meaning carries over to many areas of life. Sometimes the correct answer to a problem is supported by several sources rather than just one. This is what Ensemble Learning does: it combines a group of ML models to produce better solutions to a specific problem. Read More
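To make the "unity is strength" idea concrete, here is a tiny sketch of the simplest of the three techniques, hard voting: several models each predict a class, and the ensemble returns the majority answer. The three rule-based "models" below are invented for illustration only.

```python
from collections import Counter

# Three hypothetical classifiers, each looking at the input differently.
def model_a(x):
    # Thresholds the first feature.
    return 1 if x[0] > 0.5 else 0

def model_b(x):
    # Thresholds the second feature.
    return 1 if x[1] > 0.4 else 0

def model_c(x):
    # Thresholds the sum of both features.
    return 1 if x[0] + x[1] > 1.0 else 0

def hard_vote(models, x):
    # Each model casts one vote; the majority class wins.
    preds = [m(x) for m in models]
    return Counter(preds).most_common(1)[0][0]

models = [model_a, model_b, model_c]
print(hard_vote(models, (0.9, 0.8)))  # all three agree on class 1
print(hard_vote(models, (0.6, 0.2)))  # models disagree; majority is class 0
```

Even when one model is wrong, the majority can still be right, which is why a group of diverse models often beats any single member.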

#ensemble-learning