
Thickness Classifier on Steel in Heavy Melting Scrap by Deep-learning-based Image Analysis

Academic · ISIJ International · January 15, 2023 · DOI: 10.2355/isijinternational.ISIJINT-2022-331

Authors: Ichiro Daigo, Ken Murakami, Keijiro Tajima, Rei Kawakami

Tags: Deep Learning, Semantic Segmentation, CNN, Convolutional Neural Network, PSPNet, Pyramid Scene Parsing Network, Heavy Scrap, Image Analysis, Steel Scrap Classification

Conclusion

This study was the first trial of employing a deep-learning-based image analysis technique to detect the thickness or diameter of steel without taking measurements. The trial revealed both the potential of applying image analysis techniques to steel scrap and the problems involved. Semantic segmentation based on PSPNet was shown to effectively classify the thickness or diameter of steel in heavy steel scrap, even for images in which the thickness or diameter cannot be observed in the cross-section of the steel. By compiling around 200 images with accurate ground-truth labels and around 130 images with quasi-ground-truth labels for the training dataset, the developed model achieved a best F-score of almost 0.5 across the three classes of thickness or diameter: less than 3 mm, 3 to 6 mm, and 6 mm or more. Notably, the F-score for the less-than-3-mm class was more than 0.9. While the procedures for image acquisition and annotation were shown to perform well, there is much room for improvement in the resolution and input size of the images and in the labeling of the classes. For practical image acquisition, we confirmed that the shortage of images with accurate ground-truth labels can be complemented by images labeled according to the grades of steel scrap judged in the market, used as quasi-ground-truth labels. We believe that the developed model relies mainly on features of deformation. Although the model does not observe the cross-section of the steel to predict its thickness, the experimental results show that it refers to the scale of the images; therefore, the image scale should be kept constant when preparing images.
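
To make the segmentation setup concrete, the sketch below shows how a PSPNet model with the three thickness classes could be instantiated. This is not the authors' implementation: the segmentation_models_pytorch library, the ResNet-50 encoder, and the extra background class are all assumptions chosen for illustration, since the paper does not specify a framework.

```python
# Minimal sketch (assumption, not the authors' code) of a PSPNet-based
# semantic segmentation model for the three thickness/diameter classes
# (<3 mm, 3-6 mm, >=6 mm) plus an assumed background class.
import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 4  # background + three thickness classes (assumption)

model = smp.PSPNet(
    encoder_name="resnet50",      # backbone choice is an assumption
    encoder_weights="imagenet",   # ImageNet-pretrained encoder
    in_channels=3,                # RGB scrap-yard images
    classes=NUM_CLASSES,          # per-pixel class logits
)

# Dummy forward pass: one 512x512 RGB image.
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)             # shape: (1, NUM_CLASSES, 512, 512)
pred = logits.argmax(dim=1)       # per-pixel predicted class labels
print(pred.shape)                 # torch.Size([1, 512, 512])
```

Because the conclusion notes that the model refers to the image scale, any pipeline built along these lines would need to keep the physical scale per pixel constant across training and inference images.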

Abstract

Avoiding the contamination of tramp elements in steel requires the non-ferrous materials mixed in steel scrap to be identified. For this to be possible, the types of recovered steel scrap used in the finished product must be known. Since the thickness and diameter of steel are important sources of information for identifying the steel type, this study aims to employ image analysis to detect the thickness or diameter of steel without taking measurements. A deep-learning-based image analysis technique based on a pyramid scene parsing network (PSPNet) was used for semantic segmentation. It was found that the thickness or diameter of steel in heavy steel scrap could be effectively classified even in cases where the thickness or diameter could not be observed in the cross-section of the steel. In the developed model, the best F-score was around 0.5 for the three classes of thickness or diameter: less than 3 mm, 3 to 6 mm, and 6 mm or more. According to our results, the F-score for the less-than-3-mm class was more than 0.9. The results suggest that the developed model relies mainly on features of deformation. While the model does not require the cross-section of steel to predict the thickness, it does refer to the scale of the images. This study reveals both the potential of image analysis techniques in developing a network model for steel scrap and the challenges associated with the procedures for image acquisition and annotation.
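
The per-class F-score reported above can be computed from predicted and ground-truth segmentation masks as 2PR/(P+R), with precision P and recall R taken per class over all pixels. The snippet below is an illustrative computation in that spirit; the function, variable names, and class-id mapping are our own assumptions, not taken from the authors' code.

```python
# Illustrative per-class F-score over flattened segmentation masks.
import numpy as np

def per_class_f_score(y_true, y_pred, class_ids):
    """Return F-score (2PR / (P + R)) for each class id."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    scores = {}
    for c in class_ids:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if (precision + recall) else 0.0)
    return scores

# Toy example; class ids 1, 2, 3 stand for <3 mm, 3-6 mm, and >=6 mm
# (this mapping is an assumption for illustration only).
gt   = np.array([[1, 1, 2], [3, 3, 2]])
pred = np.array([[1, 1, 2], [3, 2, 2]])
print(per_class_f_score(gt, pred, class_ids=[1, 2, 3]))
```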