DRONECORIA task 2: make a TensorFlow model to classify the level of damage in the captured photos
This task is a follow-up to the first one (Task https://codein.withgoogle.com/dashboard/tasks/6295075911892992/ ), aimed at students who completed it and want to do some coding, but it can also be taken on by any student.
What's the task: develop a TensorFlow model that analyzes the photo sets we have captured and classifies the burnt areas into three levels of damage:
- totally burnt
- half burnt
- slightly burnt
One way to do this, among others, is to build a table of colour versus status: a totally burnt area will be mostly black, a half burnt one a mix of black and brown, and a slightly burnt one mostly green with some brown or black. Of course you can make it more sophisticated by playing with tonalities, but use these three levels as a minimum.
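As a very simple starting point, here is a minimal sketch of that colour-versus-status idea in Python; the thresholds and class boundaries are illustrative guesses, not tuned values, and the file path is just an example:

```python
# Minimal colour-vs-status sketch, assuming RGB input photos.
# Thresholds below are illustrative guesses and would need tuning on the real data.
import numpy as np
from PIL import Image

def damage_level(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    black = ((r < 0.2) & (g < 0.2) & (b < 0.2)).mean()   # fraction of charred (dark) pixels
    green = ((g > r) & (g > b) & (g > 0.3)).mean()       # fraction of healthy vegetation pixels
    if black > 0.6:
        return "totally burnt"
    if green > 0.5:
        return "slightly burnt"
    return "half burnt"

print(damage_level("example_photo.jpg"))  # hypothetical file name
```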
Here are the images we have at this time (not a big set yet): https://drive.google.com/drive/folders/1uUFXgMgcliMoGVhofi_w9j4DjIst486L?usp=sharing
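For the TensorFlow side, a minimal training sketch under some assumptions is shown below: it assumes you have downloaded the shared images and sorted them locally into one sub-folder per damage level (the folder names are our own examples, not required names). Because the photo set is small, it adds a bit of data augmentation to reduce overfitting.

```python
# Minimal TensorFlow sketch, assuming images sorted into
# data/totally_burnt, data/half_burnt, data/slightly_burnt (example names).
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=8)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=8)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),          # scale pixels to [0, 1]
    tf.keras.layers.RandomFlip("horizontal"),      # light augmentation (training only)
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # three damage levels
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```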
Once finished, send the following files to lgcodein@gmail.com and also post them on the dashboard:
- The TensorFlow model files (a small export sketch follows this list).
- A document explaining the system you used and your approach to the task.
- A short video in .mp4 showing your desktop and your model running.
- A shared Drive folder or Google Photos album with the images used (we'll use these to keep growing the main folder).
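For the model files, a hedged sketch of how you might save the trained model (the `model` object from the training sketch above) in two common TensorFlow formats; the file and directory names are examples only:

```python
import tensorflow as tf

# `model` is the trained Keras model from the training sketch above.
model.save("dronecoria_damage_model.h5")             # single-file Keras/HDF5 model
tf.saved_model.save(model, "dronecoria_savedmodel")  # TensorFlow SavedModel directory
```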
FYI: Dronecoria will use Liquid Galaxy as the visualization tool for monitoring the burnt areas and also for commanding the drone fleet that will plant the trees. Info (in Spanish): http://www.liquidgalaxylab.com/2018/04/Dronecoria-Visualization-Engine.html