What Is Random Forest?

Written by Coursera Staff

Explore machine learning topics with an introduction to random forests. Learn why industry professionals favor these models and how you can take steps to build your own.

Random forest algorithms are a popular machine learning method for classifying data and predicting outcomes. Using random forests, you can improve your machine learning model and produce more accurate insights with your data. Explore the basics of random forest algorithms, their benefits and limitations, and the intricacies of how these models operate.

What are random forest algorithms? 

Random forest algorithms create many individual decision trees, each built from a random selection of data points and features. When asked to make a prediction, the algorithm aggregates the trees’ answers: for classification, it returns the answer chosen by the majority of the trees; for regression, it averages the trees’ outputs.

To conceptualize this, imagine you are deciding which car to buy out of 10 cars. Instead of asking one person what car they think you should buy, you ask 100 people which car you should buy. Each person gives an answer based on their experiences and perspectives, yet several answers are likely to match. In the end, you buy the car that was most often recommended by the people you asked. 
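
To see this in code, here is a minimal sketch using scikit-learn’s RandomForestClassifier (the library choice is an assumption; the same pattern applies to any random forest implementation):

```python
# Minimal random forest sketch using scikit-learn (an assumed library choice).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 100 trees, each built from a random sample of data points and features
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

print(forest.predict(X_test[:5]))    # each prediction is a majority vote
print(forest.score(X_test, y_test))  # accuracy on held-out data
```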

Why is random forest important?

Random forest algorithms are important because they are incredibly versatile: they can perform classification tasks (where you’re trying to categorize things) and regression tasks (where you’re predicting a continuous output, such as a number). Random forests can handle large data sets with high dimensionality (lots of features) and automatically rank the importance of different features, providing valuable insights into the data. Because of this, professionals use random forests for many different purposes, from identifying disease associations to forecasting application performance.
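
For regression, the same idea applies, but the forest averages the trees’ numeric outputs instead of taking a vote. A minimal sketch, again assuming scikit-learn:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)

# Each tree predicts a number; the forest returns the average
regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(X, y)
print(regressor.predict(X[:3]))
```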

Random forests are also robust to overfitting, which is a problem many machine learning algorithms struggle with. Overfitting occurs when a model performs well on training data but doesn’t generalize to other data. Essentially, the algorithm learns to be too specific to the task rather than taking what it’s learned and applying it to new information. Because random forests aggregate the predictions of many trees, each based on different subsets of the data, they are better at generalizing to new data than many other methods.
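
A rough way to see this for yourself is to compare a single, fully grown decision tree with a forest on the same train/test split (a sketch assuming scikit-learn; the exact scores will vary, but the gap between training and test accuracy is the point):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data for illustration only
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The single tree typically fits the training data perfectly but loses more
# accuracy on the test set than the forest does
for name, model in [("single tree", tree), ("random forest", forest)]:
    print(f"{name}: train={model.score(X_train, y_train):.2f}, "
          f"test={model.score(X_test, y_test):.2f}")
```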

Benefits of random forest algorithms

Random forest algorithms have many advantages, which make them highly favored in machine learning and data science. 

  • Accuracy: By aggregating the predictions of many individual decision trees, random forests typically produce more accurate results than any single tree, especially on complex data sets.

  • Versatility: These algorithms work for both classification and regression tasks, so you can use them for a wide range of applications.

  • Handling highly correlated predictors: Random forest algorithms can model high-order interactions between predictors without requiring you to specify those interactions in the model. This can be convenient depending on the relationships between your variables.

  • Robustness to overfitting: Unlike single decision trees, which can easily overfit to the training data, random forests are less likely to do so because they aggregate the outputs of many trees trained on different subsets of the data, which reduces variance.

  • Suitable for many data set sizes: Random forest algorithms can handle both large and small data sets thanks to the design of the algorithm and its use of ensemble learning methods.

  • Automatic feature selection: Random forests estimate and rank the importance of different features, which helps you understand and improve model performance (see the sketch after this list).
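
As a concrete example of the feature ranking mentioned in the last bullet, here is a sketch using scikit-learn’s feature_importances_ attribute (the library and data set are assumptions for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(data.data, data.target)

# Pair each feature with its estimated importance and show the top five
ranked = sorted(zip(data.feature_names, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```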

Disadvantages of random forest algorithms

While the advantages of random forest algorithms are numerous, be aware of potential drawbacks to decide if they're the right choice for your data task.

  • May not generalize to other research: Compared to logistic regression, the results of random forest models may not be as easily applied in other research contexts.

  • May have bias: Random forest importance measures can be biased toward certain types of variables, such as continuous features or categorical features with many levels, which can skew your interpretation of which variables matter most.

  • Difficulty with model validation: In some cases, random forest results can be difficult to validate. This means that you don’t always know exactly how or why the model produced the results it did, and you might have difficulty replicating your results.

  • Model complexity: Random forest models are complex, and you need to determine how to construct the model and which component variables to include. Deciding what to choose in different contexts can be difficult, as no formal rules exist about choosing the right variables.

Ensemble learning methods

Random forest models are a type of ensemble learning method. Ensemble methods work on the principle that combining the predictions from multiple models can produce more accurate results than any single model alone. You can break ensemble learning into the following approaches:

Bagging (bootstrap aggregating) 

Bagging involves developing multiple models, each trained on a random subset of the data set. By sampling with replacement (meaning the subsets can contain overlapping data points), each model sees a slightly different slice of the data, which reduces variance and improves the overall prediction. For random forest algorithms, this takes the form of aggregating multiple decision trees, each built from its own bootstrap sample.
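
Here is a minimal bagging sketch using scikit-learn’s BaggingClassifier (an assumed library choice; the estimator parameter name is from scikit-learn 1.2 and later):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

bagged_trees = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # the model being bagged
    n_estimators=50,                     # number of bootstrap models
    bootstrap=True,                      # sample rows with replacement
    random_state=0,
).fit(X, y)

print(bagged_trees.predict(X[:3]))  # aggregated (voted) predictions
```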

Boosting 

Boosting is a sequential process in which each new model is trained to fix the errors made by the previous models. The models are weighted based on their accuracy, and more emphasis is placed on instances that are hard to predict until the algorithm can predict them more accurately.
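
AdaBoost is a classic example of this idea: it reweights hard-to-predict instances and weights each model by its accuracy. A sketch, again assuming scikit-learn:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier

X, y = load_iris(return_X_y=True)

# Each new model focuses on the instances the previous ones got wrong
booster = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
print(booster.score(X, y))
```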

Stacking 

Stacking involves training a new model on the results of several base models. The base models train on the full data set, and then a final model learns how to combine their predictions into the most accurate overall prediction.
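
A stacking sketch with scikit-learn’s StackingClassifier, where a logistic regression learns to combine two base models (the specific model choices are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),  # combines base predictions
).fit(X, y)

print(stack.score(X, y))
```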

How to implement random forest 

Implementing random forest algorithms can be thought of as a two-step process: building the forest and then making predictions. To build the model, follow these general steps:

1. Feature selection: Randomly select several features from the total features available. The number to select is usually the square root of the total number of features, but it should be less than the total.

2. Node calculation: Among the randomly selected features, determine the feature and split point that best divide the data at the node (for example, the split that most reduces impurity).

3. Splitting: Use the split point to divide the node into child nodes. This process of selecting features, finding the best split, and splitting repeats until a pre-set number of nodes or another stopping condition is reached.

4. Building the forest: Repeat the above steps a set number of times to create that many trees, which together constitute the random forest. (A simplified sketch of this process appears below.)
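
The following simplified sketch mirrors those steps, using scikit-learn’s DecisionTreeClassifier as the per-tree learner (the function name build_forest and its structure are illustrative, not a standard API):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_forest(X, y, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    n_samples = X.shape[0]
    forest = []
    for _ in range(n_trees):
        # Bootstrap: sample rows with replacement (bagging)
        rows = rng.integers(0, n_samples, size=n_samples)
        # Steps 1-3 happen inside the tree: max_features="sqrt" draws a
        # random subset of features at every split, then the tree finds
        # the best split point and divides the node, repeatedly.
        tree = DecisionTreeClassifier(
            max_features="sqrt",
            random_state=int(rng.integers(1_000_000)),
        )
        tree.fit(X[rows], y[rows])
        forest.append(tree)  # step 4: repeat to grow the forest
    return forest
```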

Once you’ve built the model, you can make predictions. Generally, the prediction mechanism follows these steps:

1. Prediction per tree: Run the input features of the test data through each decision tree so that every tree produces its own predicted outcome.

2. Voting: Collect and count the predicted outcomes from all the trees, tallying the votes for each predicted target.

3. Final prediction: For classification, the outcome with the most votes becomes the model’s final prediction; for regression, the final prediction is the average of the individual trees’ outputs. (A sketch of this voting step follows.)
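
Continuing the sketch above, prediction runs the test data through every tree, tallies the votes, and returns the majority answer per sample (predict_forest is an illustrative name; it assumes non-negative integer class labels):

```python
import numpy as np

def predict_forest(forest, X):
    # Step 1: one row of predictions per tree
    votes = np.array([tree.predict(X) for tree in forest])
    # Steps 2-3: tally the votes per sample and take the majority class
    return np.array([np.bincount(sample_votes.astype(int)).argmax()
                     for sample_votes in votes.T])

# Usage, assuming build_forest from the earlier sketch:
# forest = build_forest(X_train, y_train)
# predictions = predict_forest(forest, X_test)
```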

Learn more with Coursera

You can find many different types of courses in the machine learning space on the Coursera learning platform. As a beginner looking for a comprehensive overview, consider completing the Machine Learning Specialization, which covers a wide range of topics, including regression, classification, reinforcement learning, and more.

This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.