ORIGINAL PAPER
Improved Pix2PixGAN water-bearing ore reflection image restoration method
1. School of Mechanical and Electrical Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi Province, China
2. Jiangxi Mining and Metallurgical Engineering Research Center, China
 
 
Submission date: 2024-05-09
 
 
Final revision date: 2024-07-04
 
 
Acceptance date: 2024-10-28
 
 
Publication date: 2024-12-17
 
 
Corresponding author
Xiaoyan Luo, Jiangxi Mining and Metallurgical Engineering Research Center, China
 
 
Gospodarka Surowcami Mineralnymi – Mineral Resources Management 2024;40(4):131-146
 
KEYWORDS
loss function, ore particle, water removal, generative adversarial network
ABSTRACT
At the ore-crushing site, the crushed ore must be washed to remove sediment. This washing step places the ore in a watery environment, so images of wet ore particles contain reflective areas. Because these reflective areas mask ore feature information and cause mis-segmentation of ore images, an improved Pix2PixGAN model is proposed to remove the water and restore the reflective regions in images of wet ore. A ResNet network, chosen for its good stability, is used to comprehensively extract the features of wet-ore images and improve the stability of network training, and a structural similarity (SSIM) loss function is introduced; the network parameters are updated by minimising the structural similarity loss value, which reduces the structural differences between the reconstructed image and the real image. The experimental results show that, compared with the original Pix2PixGAN and CycleGAN models, the improved Pix2PixGAN restores the water-reflection areas of wet ore images better and, at the same time, improves the structural edge clarity of the generated dry-ore-particle images. The PSNR and SSIM evaluation metrics are improved by 8.8 and 1.28%, respectively, further verifying the effectiveness of the improved algorithm. This approach provides a feasible solution for image processing at the ore-crushing site and is of great significance for subsequent image recognition, segmentation, and the reduction of misjudgment.
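The abstract describes adding a structural similarity (SSIM) term to the Pix2Pix generator objective and minimising it alongside the usual adversarial and L1 terms. The sketch below illustrates this idea with a simplified, global-statistics SSIM in NumPy (not the paper's windowed implementation); the loss weights `lam_l1` and `lam_ssim` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Standard SSIM stabilisation constants for images scaled to [0, 1]
# (K1 = 0.01, K2 = 0.03, dynamic range L = 1).
C1 = 0.01 ** 2
C2 = 0.03 ** 2


def ssim(x: np.ndarray, y: np.ndarray) -> float:
    """Simplified SSIM between two images in [0, 1].

    Global-statistics variant: means, variances and the covariance are
    taken over the whole image rather than a sliding Gaussian window.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return float(num / den)


def ssim_loss(fake: np.ndarray, real: np.ndarray) -> float:
    """Structural similarity loss: 0 when the images are identical."""
    return 1.0 - ssim(fake, real)


def generator_loss(adv: float, l1: float, fake: np.ndarray,
                   real: np.ndarray, lam_l1: float = 100.0,
                   lam_ssim: float = 10.0) -> float:
    """Pix2Pix-style generator objective with an added SSIM term.

    adv -- adversarial loss from the discriminator
    l1  -- mean absolute error between generated and real images
    """
    return adv + lam_l1 * l1 + lam_ssim * ssim_loss(fake, real)
```

Minimising `ssim_loss` drives the SSIM between the reconstructed and real (dry-ore) images toward 1, which penalises structural differences that a pixel-wise L1 loss alone can miss.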
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant No. 52364025) and the Key R&D Program of Jiangxi Provincial Department of Science and Technology (Grant No. 20181ACE50034).
CONFLICT OF INTEREST
The Authors have no conflicts of interest to declare.
eISSN:2299-2324
ISSN:0860-0953