Two-layer approach for Unsupervised and Semi-Supervised Learning for Satellite Images

Manish Kumar, Neha Patil, Poonam Tiwari, Pooja Varier

Abstract


Many advanced satellites are now in orbit, and images of distant areas can be captured at very high quality. These images provide rich information at both the global and the regional level. The field of satellite imagery has evolved to the point where it raises questions about human and environmental sustainability, yet it remains a challenge to scale analysis techniques to very high spatial resolutions. Satellite images have high spatial and spectral resolution, producing a large amount of information per image, which makes it difficult to identify image features, largely because the images are unlabeled.

Unsupervised methods allow us to organize images into clusters. Machine-learning clustering groups images by their extracted features, placing images that are close in feature space into the same group. The system uses satellite image datasets that provide aerial shots of different locations. Images are grouped into sets of five, where each image in a set was taken on a different day at a specific location, but not necessarily at the same time each day. The images in each set cover the same area but are not perfectly aligned. This dataset is provided as the input to the proposed system.

Feature extraction is performed by sending a set of images through a network and extracting activations at a certain layer, which yields a feature set for that network. This transfer-learning process passes our own images through a pre-trained network and extracts features at an intermediate layer. It differs from fine-tuning because no images are trained on and the number of classes in the softmax layer of the network is not changed. Instead, the parameters learned by a pre-trained model are reused to see whether they can serve an unlabeled dataset where fine-tuning would not be possible.
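The clustering stage described above, grouping images whose feature vectors are close, can be sketched with k-means. The 512-dimensional stand-in feature vectors below are illustrative assumptions, not the authors' exact setup; in practice each vector would come from forwarding a satellite image through an intermediate layer of a pre-trained CNN.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for features extracted at an intermediate layer of a
# pre-trained network: three groups of 20 images, 512-d each.
# (Dimensions and group structure are assumptions for illustration.)
features = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(20, 512))
    for c in (0.0, 1.0, 2.0)
])

# Cluster the unlabeled feature vectors; images that are close in
# feature space end up in the same group.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_
```

Each well-separated group of feature vectors receives a single cluster label, which is the behavior the proposed system relies on when organizing unlabeled images.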



