Random Forest and Decision Tree Algorithm
A random forest is a collection of decision trees following the bagging concept. When we move from one decision tree to the next, how does the information learned by the last decision tree carry forward to the next?
Because, as per my understanding, there is no trained model that gets created for each decision tree and then loaded before the next decision tree starts learning from the misclassification errors.
So how does it work?
machine-learning random-forest cart bagging
asked 2 days ago
Abhay Raj Singh
"When we move from one decision tree to the next decision tree". This suggests an linear process. We've built parallel implementations where we worked on one tree per CPU core; this works perfectly fine unless you use a separate random number generator per CPU core in training, all of which share the same seed. In that case you can end up with lots of identical trees.
– MSalters
11 hours ago
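To make that concrete, here is a minimal NumPy sketch (my own illustration, not MSalters' code) of one common way to give each tree its own independent random stream so that parallel workers do not produce identical bootstrap samples; the function name make_tree_rngs is hypothetical:

```python
import numpy as np

def make_tree_rngs(n_trees, base_seed=12345):
    # SeedSequence.spawn() derives statistically independent child streams
    # from a single base seed, so per-core/per-tree generators never collide.
    children = np.random.SeedSequence(base_seed).spawn(n_trees)
    return [np.random.default_rng(child) for child in children]

rngs = make_tree_rngs(n_trees=4)
n_samples = 10
for i, rng in enumerate(rngs):
    # Each tree draws a different bootstrap sample, so the trees will differ.
    print(i, rng.integers(0, n_samples, size=n_samples))
```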
4 Answers
No information is passed between trees. In a random forest, all of the trees are iid. They are iid because trees are grown using the same randomization strategy for all trees: first, take a bootstrap sample of the data, and then grow the tree using splits from a randomly-chosen subset of features. This happens for each tree individually without attention to any other trees in the ensemble.
You might find it helpful to read an introduction to random forests from a high-quality text. One is "Random Forests" by Leo Breiman. There's also a chapter in Elements of Statistical Learning by Hastie et al.
It's possible that you've confused random forests with boosting methods such as AdaBoost or gradient-boosted trees. Boosting methods are not the same, because they use information about misfit from previous boosting rounds to inform the next boosting round.
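As a rough illustration of that recipe (my own hedged sketch, not part of the original answer; the dataset and parameters are arbitrary), each tree below gets its own bootstrap sample and considers a random feature subset at each split via scikit-learn's DecisionTreeClassifier, and no tree ever consults another:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(100):
    # Bootstrap sample: draw n rows with replacement, independently of every other tree.
    idx = rng.integers(0, len(X), size=len(X))
    # max_features="sqrt" makes each split consider only a random subset of features.
    tree = DecisionTreeClassifier(max_features="sqrt",
                                  random_state=int(rng.integers(1 << 31)))
    trees.append(tree.fit(X[idx], y[idx]))

# The ensemble prediction is just a vote over the independently grown trees.
votes = np.mean([t.predict(X) for t in trees], axis=0)
print("majority-vote training accuracy:", ((votes >= 0.5).astype(int) == y).mean())
```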
By iid do you mean independent and identically distributed? I wasn't familiar with this abbreviation.
– nekomatic
14 hours ago
@nekomatic It's safe to assume that that was the intended meaning. It's a pretty common abbrev. in statistics.
– JAD
11 hours ago
A random forest is a collection of multiple decision trees that are trained independently of one another, so there is no notion of sequentially dependent training (which is the case in boosting algorithms). As a result, as mentioned in another answer, it is possible to train the trees in parallel.
You might like to know where the "random" in random forest comes from: randomness is injected into the tree-learning process in two ways. The first is the random selection of data points used for training each tree, and the second is the random selection of features used in building each tree. Since a single decision tree usually tends to overfit the data, injecting randomness in this way yields a collection of trees, each of which has good accuracy (and possibly overfits) on a different subset of the available training data. Therefore, when we average the predictions made by all the trees, we observe a reduction in overfitting (compared to training one single decision tree on all the available data).
To better understand this, here is a rough sketch of the training process assuming all the data points are stored in a set denoted by $M$ and the number of trees in the forest is $N$:
1. Set $i = 0$.
2. Take a bootstrap sample of $M$ (i.e. sample with replacement, with the same size as $M$), denoted $S_i$.
3. Train the $i$-th tree, denoted $T_i$, using $S_i$ as input data. The training process is the same as for an ordinary decision tree, except that at each node only a random subset of the features is considered for the split at that node.
4. Set $i = i + 1$.
5. If $i < N$, go to step 2; otherwise all the trees have been trained and random forest training is finished.
Note that I described the algorithm sequentially, but since the trees do not depend on one another, training can also be done in parallel. For the prediction step, first obtain a prediction from every tree (i.e. $T_1$, $T_2$, ..., $T_N$) in the forest, and then:
If it is used for a regression task, take the average of the predictions as the final prediction of the random forest.
If it is used for a classification task, use a soft-voting strategy: take the average of the probabilities predicted by the trees for each class, then declare the class with the highest average probability as the final prediction of the random forest.
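Here is a rough Python translation of the steps above (my own assumed sketch, using scikit-learn's DecisionTreeClassifier as the per-tree learner; the names train_forest and predict_soft_vote are hypothetical):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def train_forest(M_X, M_y, N, rng):
    forest = []
    for i in range(N):                                  # steps 1, 4, 5: loop over the N trees
        S_i = rng.integers(0, len(M_X), size=len(M_X))  # step 2: bootstrap sample S_i
        T_i = DecisionTreeClassifier(                   # step 3: ordinary tree, but each
            max_features="sqrt",                        # split only sees a random subset
            random_state=int(rng.integers(1 << 31)))    # of the features
        forest.append(T_i.fit(M_X[S_i], M_y[S_i]))
    return forest

def predict_soft_vote(forest, X):
    # Soft voting: average the per-class probabilities over all trees, then pick
    # the class with the highest average probability. (Assumes every bootstrap
    # sample contained every class, so all trees share the same class ordering.)
    avg_proba = np.mean([T.predict_proba(X) for T in forest], axis=0)
    return forest[0].classes_[np.argmax(avg_proba, axis=1)]

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
forest = train_forest(X, y, N=50, rng=np.random.default_rng(0))
print(predict_soft_vote(forest, X[:5]), y[:5])
```

For a regression task one would instead average the trees' numeric predictions, as described above.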
Further, it is worth mentioning that it is possible to train the trees in a sequentially dependent manner, and that is exactly what the gradient-boosted trees algorithm does; it is a totally different method from random forests.
Random forest is a bagging algorithm rather than a boosting algorithm.
Random forest constructs each tree independently using a random sample of the data, so a parallel implementation is possible.
You might like to check out gradient boosting, where trees are built sequentially and each new tree tries to correct the mistakes made previously.
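If it helps to see the contrast in code, here is a hedged scikit-learn sketch (my own example, not from this answer): the forest's independent trees can be fit in parallel via n_jobs, whereas gradient boosting has to fit its trees one after another because each one corrects the errors of the ensemble built so far.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: trees are independent, so they can be trained in parallel.
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0).fit(X, y)

# Boosting: each new tree is fit to correct the current ensemble's mistakes,
# so the trees are necessarily built sequentially.
gb = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X, y)

print("random forest    :", rf.score(X, y))
print("gradient boosting:", gb.score(X, y))
```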
So how does it work?
Random Forest is a collection of decision trees. The trees are constructed independently. Each tree is trained on a subset of the features and a bootstrap sample of the data (chosen with replacement).
When predicting, say for classification, the input parameters are given to each tree in the forest and each tree "votes" on the classification; the label with the most votes wins.
Why use a Random Forest over a single decision tree? Bias/variance trade-off. Random Forests are built from much simpler trees than a single decision tree. Generally, random forests provide a large reduction in error due to variance and a small increase in error due to bias.
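A small hedged sketch of that trade-off (my own example with an arbitrary synthetic dataset): cross-validated accuracy of a single fully grown tree versus a forest of such trees; the forest typically scores higher because averaging many de-correlated trees reduces the variance component of the error.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)               # low bias, high variance
forest = RandomForestClassifier(n_estimators=200, random_state=0)  # averaged trees

print("single tree   CV accuracy:", cross_val_score(single_tree, X, y, cv=5).mean())
print("random forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```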
If we are choosing different features for every decision tree, then how does the learning from a set of features in the previous decision tree improve things when we send the misclassified values ahead, given that the next decision tree has a totally new set of features?
– Abhay Raj Singh
yesterday
@AbhayRajSingh - you do not "send the misclassified values ahead" in Random Forest. As Akavall says, "The trees are constructed independently"
– Henry
yesterday