ORIGINAL PAPER
Ore extraction and analysis from RGB image and 3D Point Cloud
Affiliations: 1 BGRIMM Technology Group; 2 University of Science and Technology Beijing
Submission date: 2021-12-20
Final revision date: 2022-02-18
Acceptance date: 2022-02-28
Publication date: 2022-03-23
Gospodarka Surowcami Mineralnymi – Mineral Resources Management 2022;38(1):89-105
KEYWORDS: ore image, 3D point cloud, embedded confidence edge detection, mean shift, cross-calibration
ABSTRACT
Based on computer-vision theory, a new method is proposed for extracting and analysing ore in underground mines. It combines RGB images collected by a color industrial camera with a point cloud generated by a 3D ToF camera. First, the mean-shift algorithm, combined with the embedded-confidence edge detection algorithm, segments the RGB ore image into regions. Second, the effective ore regions are classified into large pieces of ore and ore piles consisting of many small pieces; the classification relies on the embedded-confidence edge detector, which evaluates the edge distribution around each ore region. Finally, the RGB camera and the 3D ToF camera are cross-calibrated and the transformation matrix between the two cameras is obtained. Point cloud fragments are then extracted according to the cross-calibration result, and the geometric properties of the ore point cloud are analysed in the subsequent procedure.
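To make the workflow concrete, a minimal sketch of the described pipeline is given below in Python with OpenCV. It is an illustration under stated assumptions rather than the authors' implementation: cv2.pyrMeanShiftFiltering stands in for the mean-shift segmentation, cv2.Canny stands in for the embedded-confidence edge detector, the edge-density threshold and minimum region area are illustrative values, and the ToF-to-RGB extrinsics R, t are assumed to be already available from cross-calibration.

import cv2
import numpy as np

def segment_and_classify(image_bgr, edge_density_thresh=0.15):
    # Step 1: mean-shift filtering smooths colour regions before segmentation
    # (21 / 30 are illustrative spatial and colour window radii).
    filtered = cv2.pyrMeanShiftFiltering(image_bgr, 21, 30)
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)

    # Step 2: edge map (Canny here, as a stand-in for embedded-confidence edges).
    edges = cv2.Canny(gray, 50, 150)

    # Step 3: label the connected non-edge areas as candidate ore regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats((edges == 0).astype(np.uint8))

    results = []
    for i in range(1, n):                      # label 0 collects the edge pixels
        if stats[i, cv2.CC_STAT_AREA] < 500:   # illustrative minimum region size
            continue
        mask = (labels == i)
        # A slightly dilated mask picks up edges along and inside the region;
        # high edge density suggests a pile of small pieces rather than one large piece.
        dilated = cv2.dilate(mask.astype(np.uint8), np.ones((5, 5), np.uint8))
        density = float(edges[dilated > 0].mean()) / 255.0
        label = "ore pile" if density > edge_density_thresh else "large ore"
        results.append((label, mask))
    return results

def extract_region_cloud(points_tof, mask, rgb_K, rgb_dist, R, t):
    # Project every ToF 3D point into the RGB image using the cross-calibration
    # result (R, t map ToF coordinates to RGB camera coordinates) and keep the
    # points whose projection falls inside the ore-region mask.
    rvec, _ = cv2.Rodrigues(R)
    proj, _ = cv2.projectPoints(points_tof.reshape(-1, 1, 3).astype(np.float64),
                                rvec, t, rgb_K, rgb_dist)
    uv = np.round(proj.reshape(-1, 2)).astype(int)
    h, w = mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hit = inside.copy()
    hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
    return points_tof[hit]

In this sketch, a region with a high interior edge density is treated as a pile of small pieces, while a region with few interior edges is treated as a single large piece; the point-cloud fragment returned for each region can then be passed to the geometric analysis step.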
ACKNOWLEDGEMENTS
This work was jointly supported by the Major Science and Technology Innovation Project of Shandong Province (No. 2019SDZY05) and the Scientific Research Fund of BGRIMM Technology Group (No. 02-2035).