
Digital Image Watermarking Schemes Using Advanced Multiresolution Transform Methods

Siva Krishna Sajarao, M Pradeep

Abstract


Digital image watermarking schemes using advanced multiresolution transform methods are implemented in this paper. In a watermarking scheme, a watermark image is hidden inside the original (cover) input image. The transform domain is widely used because it provides a spectral representation of the input image, and multiresolution transforms can be applied at different scales. The discrete wavelet transform decomposes the input image into approximation, horizontal, vertical, and diagonal subbands, while the contourlet transform decomposes it into a larger number of directional sublevels. The watermark image is embedded into these subbands of the original input image. Performance is measured using mean square error (MSE) and peak signal-to-noise ratio (PSNR). The proposed methods give better performance results compared with existing techniques.
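
The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' exact method: a single-level Haar DWT of the cover image, additive embedding of the watermark into the approximation subband with a hypothetical strength factor alpha, and MSE/PSNR evaluation. The wavelet choice, embedding rule, and alpha value are assumptions introduced here for illustration; it uses NumPy and PyWavelets.

# Sketch: DWT-based watermark embedding and MSE/PSNR evaluation.
# Assumptions not given in the paper: Haar wavelet, additive embedding in the
# approximation subband, strength factor `alpha`.
import numpy as np
import pywt


def embed_watermark(host, watermark, alpha=0.05):
    """Embed `watermark` into the approximation subband of `host` via a 1-level DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), 'haar')  # approximation + detail subbands
    wm = np.resize(watermark.astype(float), cA.shape)         # fit watermark to subband size
    cA_marked = cA + alpha * wm                               # additive embedding
    return pywt.idwt2((cA_marked, (cH, cV, cD)), 'haar')      # reconstruct watermarked image


def mse(original, distorted):
    return float(np.mean((original.astype(float) - distorted.astype(float)) ** 2))


def psnr(original, distorted, peak=255.0):
    e = mse(original, distorted)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)


if __name__ == "__main__":
    host = np.random.randint(0, 256, (256, 256))      # stand-in for the original input image
    watermark = np.random.randint(0, 2, (128, 128))   # stand-in for a binary watermark
    marked = embed_watermark(host, watermark)
    print("MSE :", mse(host, marked))
    print("PSNR:", psnr(host, marked))

A lower MSE (and correspondingly higher PSNR) between the cover and watermarked images indicates better imperceptibility, which is how the abstract's performance comparison is framed.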

 



