 # Extracting Rules from a Decision Tree

I am working with the R programming language. Recently, I read about a new decision tree algorithm called "Reinforcement Learning Trees" (RLT) which supposedly has the potential to fit "better" decision trees to a dataset. The documentation for this library is available over here: https://cran.r-project.org/web/packages/RLT/RLT.pdf

I tried to use this library to run a classification decision tree on the (famous) Iris Dataset:

```r
library(RLT)
data(iris)

fit <- RLT(iris[, c(1, 2, 3, 4)], iris$Species, model = "classification", ntrees = 1)
```

Question: From here, is it possible to extract the "rules" from this decision tree?

For example, if you use the CART Decision Tree model:

```r
library(rpart)
library(rpart.plot)

fit <- rpart(Species ~ ., data = iris)
rpart.plot(fit)
rpart.rules(fit)
```

which prints:

```
    Species  seto vers virg
     setosa [1.00  .00  .00] when Petal.Length <  2.5
 versicolor [ .00  .91  .09] when Petal.Length >= 2.5 & Petal.Width <  1.8
  virginica [ .00  .02  .98] when Petal.Length >= 2.5 & Petal.Width >= 1.8
```

Is it possible to do this with the RLT library? I have been reading the documentation and cannot find a direct way to extract the decision rules. I understand that this library is typically meant to be used as a substitute for the random forest (which, as an ensemble, does not have a single set of decision rules), but the original paper for this algorithm specifies that RLT fits individual decision trees (via the RLT splitting procedure) and then aggregates them as in a random forest. So on some level, the RLT algorithm does fit individual decision trees, and each of those trees should in theory have extractable "decision rules".
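For what it's worth, one exploratory starting point (not an answer, just what I would try first) is to inspect the fitted object with base R, since `RLT()` returns an R list whose components may hold the per-tree structure. Note that I have not found the component names documented, so nothing below assumes any particular field exists:

```r
# Exploratory sketch: look inside the object returned by RLT().
# The component names are NOT documented (as far as I can tell),
# so this only reveals what is there -- it does not assume any field.
library(RLT)
data(iris)

fit <- RLT(iris[, c(1, 2, 3, 4)], iris$Species, model = "classification", ntrees = 1)

names(fit)                 # top-level components of the fitted object
str(fit, max.level = 1)    # their types and sizes, one level deep
```

If one of the components turns out to hold split variables and cutpoints per tree, the rules could presumably be reconstructed from it by hand, but I don't know whether that is the case.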

Does anyone know how to extract these rules?

Thanks!
