https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba
https://towardsdatascience.com/exploratory-data-analysis-in-python-c9a77dfa39ce
https://www.geeksforgeeks.org/difference-between-matplotlib-vs-seaborn/
https://towardsdatascience.com/feature-engineering-for-machine-learning-3a5e293a5114
A decision tree is built on the entire dataset, using all the features/variables of interest. A random forest, by contrast, builds many decision trees, each trained on a random sample of the observations/rows and considering only a random subset of the features/variables at each split, and then aggregates their predictions. Once a large number of trees has been grown this way, each tree "votes" for a class, and the class receiving the most votes by simple majority is the "winner" or predicted class (for regression tasks, the trees' outputs are averaged instead). There are of course more detailed differences, but this is the main conceptual one; a short code sketch follows the link below.
https://towardsdatascience.com/decision-tree-ensembles-bagging-and-boosting-266a8ba60fd9
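To make the distinction concrete, here is a minimal sketch, assuming scikit-learn and its bundled iris dataset (neither is mentioned above): a single DecisionTreeClassifier is fit on all rows with every feature available, while a RandomForestClassifier grows many trees on bootstrap samples of the rows with a random feature subset at each split and predicts by majority vote.

```python
# Minimal sketch (assumes scikit-learn and the built-in iris data, which are
# not part of the original text) contrasting one decision tree with a forest.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Single tree: trained on the full training set, all features considered.
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Random forest: many trees, each grown on a bootstrap sample of the rows
# (bootstrap=True) with a random subset of features at every split
# (max_features="sqrt"); the predicted class is the majority vote across trees.
forest = RandomForestClassifier(
    n_estimators=100, bootstrap=True, max_features="sqrt", random_state=42
).fit(X_train, y_train)

print("single tree accuracy  :", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```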