Every land cover classification (and, more generally, every predictive modeling task) must end with an assessment of the model's accuracy. Although you might feel that the hardest part of the task is already done, the accuracy assessment is in fact a challenging and very important part of the process.
RStudio — we recommend using RStudio for (interactive) programming with R. You can download RStudio from the official web page.
In this worksheet we will assess the performance of our classification using an independent test dataset.
Eyeball verification of map-type figures generally gives a good first impression. For a quantitative description, however, others may ask for a more "objective" index. For classification accuracy, Cohen's Kappa index of agreement is probably the most commonly used measure (see e.g. [Kuhnert2005] for a short overview and calculation details). Although Kappa is certainly not a one-size-fits-all index, we will start with it today.
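To make the index concrete, here is a minimal sketch of how Cohen's Kappa can be computed by hand from a confusion matrix in R. The 3x3 matrix and its class names are made-up illustration data, not results from our classification:

```r
# Cohen's Kappa: (observed agreement - chance agreement) / (1 - chance agreement)
cohen_kappa <- function(cm) {
  cm <- as.matrix(cm)
  n  <- sum(cm)
  p_observed <- sum(diag(cm)) / n                      # overall agreement
  p_expected <- sum(rowSums(cm) * colSums(cm)) / n^2   # agreement expected by chance
  (p_observed - p_expected) / (1 - p_expected)
}

# Hypothetical confusion matrix: rows = predicted class, columns = reference class
cm <- matrix(c(50,  3,  2,
                4, 40,  6,
                1,  5, 39),
             nrow = 3, byrow = TRUE,
             dimnames = list(predicted = c("forest", "water", "urban"),
                             reference = c("forest", "water", "urban")))

cohen_kappa(cm)  # values near 1 indicate strong agreement, near 0 chance level
```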
Do a visual validation of your land cover map
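A quick way to start the visual check, assuming your classified map is held in a raster object (the name `lc_map` below is a placeholder for your own object), is simply to plot it and look for implausible patterns:

```r
# Sketch of a visual check: plot the classified map with one colour per class
library(terra)
plot(lc_map, main = "Land cover classification")
```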
Use the test dataset that you held back from the classification to quantitatively assess the performance of the model on independent data.
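A minimal sketch of this step, assuming a fitted classifier `model` (e.g. from randomForest) and a held-back data frame `test_data` whose true classes are stored in a column `class`; these object and column names are placeholders to adapt to your own workflow:

```r
library(randomForest)

# Predict on the independent test data only; the model has never seen these samples
pred <- predict(model, newdata = test_data)

# Cross-tabulate predictions against the reference labels
cm <- table(predicted = pred, reference = test_data$class)
print(cm)

# Overall accuracy: share of correctly classified test samples
overall_accuracy <- sum(diag(cm)) / sum(cm)
print(overall_accuracy)
```

Kappa can then be computed from `cm` with the `cohen_kappa()` function sketched above, or with `caret::confusionMatrix(pred, test_data$class)`, which reports Kappa alongside several other agreement statistics.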