Our center uses underwater survey data collected by the Penghu Archaeology Team over the years, including side-scan sonar files recorded from 2010 to 2024 and stored in the JSF format. These JSF files contain extensive information, such as scanning time, coordinates, and water temperature, alongside the sonar returns. To streamline data processing, we developed a custom Python program that reads JSF files and converts them into PNG images for easier analysis and future use.
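The rendering step of such a conversion can be sketched as below. Parsing the binary JSF messages themselves is device-specific and not shown; `pings` is a hypothetical stand-in for the per-ping amplitude rows already extracted from a JSF file, and the output path is an example name.

```python
# Sketch: render side-scan sonar ping amplitudes as a grayscale PNG.
# JSF message parsing is assumed done elsewhere; `pings` stands in for
# the amplitude rows (one row per ping) extracted from the file.
import numpy as np
from PIL import Image

def pings_to_png(pings, out_path):
    """Normalize a 2-D array of ping amplitudes to 0-255 and save as PNG."""
    data = np.asarray(pings, dtype=np.float64)
    lo, hi = data.min(), data.max()
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    img = ((data - lo) * scale).astype(np.uint8)
    Image.fromarray(img, mode="L").save(out_path)  # "L" = 8-bit grayscale
    return img.shape

# Example with synthetic data (a simple ramp gradient):
demo = np.tile(np.arange(512), (100, 1))          # 100 pings x 512 samples
print(pings_to_png(demo, "demo_swath.png"))       # → (100, 512)
```

Min-max normalization is the simplest choice; real sonar imagery often benefits from percentile clipping or gain correction before scaling.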
Because past ship logs lacked a standardized format, most records contain only brief descriptions without images or interpretation data, making it difficult to build a database from them directly. We therefore used the labelImg software to annotate the existing dataset, marking regions with potential features in the images, such as patterned areas or striped zones.
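labelImg can save annotations in the Pascal VOC XML format, which must be converted to YOLO's normalized text format before training. A minimal sketch of that conversion is below; the class names and the sample XML are hypothetical placeholders, not the project's actual labels.

```python
# Sketch: convert a labelImg Pascal VOC annotation (XML) into YOLO txt lines.
# CLASSES and the sample annotation are assumed examples only.
import xml.etree.ElementTree as ET

CLASSES = ["patterned_area", "striped_zone"]  # hypothetical label names

def voc_to_yolo(xml_text):
    root = ET.fromstring(xml_text)
    w = float(root.findtext("size/width"))
    h = float(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.findtext("name"))
        b = obj.find("bndbox")
        xmin, ymin = float(b.findtext("xmin")), float(b.findtext("ymin"))
        xmax, ymax = float(b.findtext("xmax")), float(b.findtext("ymax"))
        # YOLO expects: class_id center_x center_y width height (all normalized)
        lines.append("%d %.6f %.6f %.6f %.6f" % (
            cls, (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h,
            (xmax - xmin) / w, (ymax - ymin) / h))
    return lines

sample = """<annotation><size><width>800</width><height>600</height></size>
<object><name>striped_zone</name><bndbox><xmin>100</xmin><ymin>150</ymin>
<xmax>300</xmax><ymax>450</ymax></bndbox></object></annotation>"""
print(voc_to_yolo(sample))  # → ['1 0.250000 0.500000 0.250000 0.500000']
```

Newer labelImg versions can also export YOLO txt directly, in which case this step is unnecessary.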
Our center then built a YOLO-v3 deep learning model and trained it on the annotated dataset. Given the limited data, we applied transfer learning: the model is first trained on general images (e.g., everyday objects) and then fine-tuned on underwater imagery, which improves performance when underwater samples are scarce. Ultimately, we developed two AI neural network models:
1. General object recognition model – capable of identifying common objects.
2. Underwater object recognition model – specifically designed for detecting underwater artifacts.
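The core transfer-learning pattern used above (keep pretrained weights frozen, retrain only a new task-specific head on the small dataset) can be illustrated with a deliberately tiny NumPy model. This is a conceptual sketch only, standing in for the actual YOLO-v3 fine-tuning; all networks and data here are synthetic.

```python
# Toy illustration of the freeze-and-fine-tune pattern: the "pretrained"
# backbone is frozen, and only the new head is trained on the small task.
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a fixed random projection, never updated.
W_backbone = rng.normal(size=(4, 8))

def features(x):
    """Frozen feature extractor standing in for pretrained network layers."""
    return np.maximum(x @ W_backbone, 0.0)

def train_head(X, y, epochs=300):
    """Fit only the new task head by gradient descent; backbone untouched."""
    F = features(X)
    lr = len(y) / np.linalg.norm(F, 2) ** 2   # step size stable for this loss
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        w -= lr * F.T @ (F @ w - y) / len(y)  # squared-error gradient step
    return w

# Small synthetic "target-domain" task the frozen features can represent.
X = rng.normal(size=(64, 4))
y = features(X) @ rng.normal(size=8)
w_head = train_head(X, y)
mse = np.mean((features(X) @ w_head - y) ** 2)
print("fine-tuned head MSE:", float(mse))
```

In the real pipeline the frozen part is the convolutional backbone pretrained on everyday objects, and the retrained part is the detection head; the principle is the same.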
To make these models accessible, we created a graphical software interface that lets non-technical users operate the system. The software supports loading images, running general and underwater object recognition, and saving the recognition results for further analysis.
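One way to make saved results reusable for later analysis is to serialize each detection as JSON. The sketch below shows that step plus a minimal Tkinter front end; the widget layout, field names, and `launch_gui` function are illustrative assumptions, not the actual software's design.

```python
# Sketch: persist recognition results as JSON so they can be reloaded later.
# The detection dict fields ("label", "score", "box") are assumed names.
import json

def save_results(detections, path):
    """detections: list of dicts like {"label": ..., "score": ..., "box": [...]}"""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"detections": detections}, f, ensure_ascii=False, indent=2)

def load_results(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)["detections"]

def launch_gui(results):
    """Minimal Tkinter front end (illustrative; not invoked here)."""
    import tkinter as tk
    from tkinter import filedialog
    root = tk.Tk()
    root.title("Underwater Object Recognition (sketch)")
    tk.Button(root, text="Save results",
              command=lambda: save_results(
                  results,
                  filedialog.asksaveasfilename(defaultextension=".json"))
              ).pack(padx=20, pady=20)
    root.mainloop()
```

JSON keeps the results human-readable and easy to load back into Python for statistics or re-annotation.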
https://rcuah.site.nthu.edu.tw