Digital imaging platforms have continued to advance rapidly, ranging from frequent broad-scale monitoring by satellites to focused small-scale capture from UAVs. While satellite imagery fulfils landscape-level monitoring requirements and UAVs serve higher-resolution applications, a gap exists between these two platforms. Aerial photography, a technology that has evolved just as quickly but has perhaps been overlooked, fills this gap.
In this example, we show the results of a collaborative project between Indufor and Scion’s Geomatics team. The workflow developed combines data collected from an airborne camera system with deep learning algorithms to map and classify tree seedlings.
The operational feasibility of the approach was evaluated on a recently planted site. Each of the six areas was flown two to three months after planting using a light aircraft, capturing imagery at resolutions between 2.5 and 5 cm. The imagery was then prepared for the deep learning workflow: a selection of targets was annotated, classified, and refined, and the resulting dataset was used to train a deep learning model.
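A key preparation step in a workflow like this is partitioning the annotated targets into training and validation sets before model fitting. The sketch below illustrates one minimal way to do this; the annotation schema (`tile`, `label` fields) and the 80/20 split are illustrative assumptions, not the project's actual format.

```python
import random

def split_annotations(annotations, val_fraction=0.2, seed=42):
    """Split annotated targets into training and validation sets.

    `annotations` is a list of dicts, each describing one labelled
    target, e.g. {"tile": "area1_r03_c12.tif", "label": "seedling"}.
    (Field names here are illustrative, not the project's schema.)
    Returns a (train, validation) pair of lists.
    """
    rng = random.Random(seed)       # fixed seed makes the split repeatable
    shuffled = annotations[:]       # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]
```

A fixed random seed keeps the split reproducible across training runs, which matters when the annotation set is being iteratively refined.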
The following images show the results of a two-class model. The output identifies the extent of sprayed spots (approximately 1 m in diameter) and distinguishes spots with live tree seedlings present (green boxes) from those without (red boxes).
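A practical use of the two-class output is summarising the share of sprayed spots that contain a live seedling. The sketch below assumes a simple detection record format (`box` and `label` fields, with labels `"present"`/`"absent"`); it is an illustration of how the counts could be derived, not the project's actual code.

```python
def summarise_spots(detections):
    """Summarise two-class sprayed-spot detections.

    Each detection is a dict like {"box": (x, y, w, h), "label": "present"},
    where "present" means a live seedling was detected in the spot and
    "absent" means the spot is empty (schema is illustrative).
    Returns the class counts and the fraction of spots with a live seedling.
    """
    present = sum(1 for d in detections if d["label"] == "present")
    absent = sum(1 for d in detections if d["label"] == "absent")
    total = present + absent
    return {
        "present": present,
        "absent": absent,
        "survival_rate": present / total if total else 0.0,
    }
```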
A secondary output is a map of the detection precision for each target. This offers a useful means of identifying trends, or of determining the cause of a lower-than-expected accuracy score.
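One way such a map could be produced is by aggregating per-target detection scores onto a coarse grid and averaging within each cell, so that low-confidence areas stand out. The sketch below assumes targets arrive as `(x, y, score)` tuples in metres and a hypothetical 50 m cell size; both are assumptions for illustration.

```python
from collections import defaultdict

def precision_map(targets, cell_size=50.0):
    """Aggregate per-target detection scores onto a coarse grid.

    `targets` is a list of (x, y, score) tuples, where (x, y) is the
    target position in metres and `score` is the model's detection
    confidence for that target (format is illustrative).
    Returns a dict mapping (col, row) grid-cell indices to the mean
    score in that cell, ready to render as a map layer.
    """
    cells = defaultdict(list)
    for x, y, score in targets:
        cells[(int(x // cell_size), int(y // cell_size))].append(score)
    return {cell: sum(scores) / len(scores) for cell, scores in cells.items()}
```

Cells with a mean score well below the rest of the site point at the places worth inspecting when an accuracy figure is lower than expected.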
Over the coming months, Indufor and Scion intend to scale up the methods developed and further automate the processing chain. Recent camera upgrades include a colour-infrared band, which will further assist with vegetation classification and with monitoring changes in canopy health. The same detection framework can be adapted to UAV photography and used to generate information supporting a diverse range of precision mapping (such as species separation) and object classification applications.