Academic literature on the topic 'Sonar image'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sonar image.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sonar image"

1. Peng, Ye, Houpu Li, Wenwen Zhang, Junhui Zhu, Lei Liu, and Guojun Zhai. "Underwater Sonar Image Classification with Image Disentanglement Reconstruction and Zero-Shot Learning." Remote Sensing 17, no. 1 (2025): 134. https://doi.org/10.3390/rs17010134.

Abstract:
Sonar is a valuable tool for ocean exploration since it can obtain a wealth of data. With the development of intelligent technology, deep learning has brought new vitality to underwater sonar image classification. However, due to the difficulty and high cost of acquiring underwater sonar images, we have to consider the extreme case when there are no available sonar data of a specific category, and how to improve the prediction ability of intelligent classification models for unseen sonar data. In this work, we design an underwater sonar image classification method based on Image Disentanglement Reconstruction and Zero-Shot Learning (IDR-ZSL). Initially, an image disentanglement reconstruction (IDR) network is proposed for generating pseudo-sonar samples. The IDR consists of two encoders, a decoder, and three discriminators. The first encoder is responsible for extracting the structure vectors of the optical images and the texture vectors of the sonar images; the decoder is in charge of combining the above vectors to generate the pseudo-sonar images; and the second encoder is in charge of disentangling the pseudo-sonar images. Furthermore, three discriminators are incorporated to determine the realness and texture quality of the reconstructed image and feedback to the decoder. Subsequently, the underwater sonar image classification model performs zero-shot learning based on the generated pseudo-sonar images. Experimental results show that IDR-ZSL can generate high-quality pseudo-sonar images, and improve the prediction accuracy of the zero-shot classifier on unseen classes of sonar images.
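
The disentanglement idea described above (encoders that separate structure from texture, and a decoder that recombines them into pseudo-sonar images) can be outlined in code. The sketch below is a hypothetical, heavily simplified PyTorch skeleton showing only the data flow; the layer sizes, the shared encoder, and the absence of the three discriminators are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Simple downsampling convolution block (assumed; not the paper's exact layers).
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                         nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    """Maps an image to a compact feature vector (structure or texture)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 64),
                                 conv_block(64, 128), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Decoder(nn.Module):
    """Combines a structure vector and a texture vector into a pseudo-sonar image."""
    def __init__(self, feat_dim=128, out_size=64):
        super().__init__()
        self.out_size = out_size
        self.fc = nn.Linear(2 * feat_dim, 128 * (out_size // 8) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, structure, texture):
        z = self.fc(torch.cat([structure, texture], dim=1))
        z = z.view(-1, 128, self.out_size // 8, self.out_size // 8)
        return self.net(z)

# Toy forward pass: an optical image supplies structure, a sonar image supplies texture.
enc, dec = Encoder(), Decoder()
optical = torch.randn(2, 1, 64, 64)
sonar = torch.randn(2, 1, 64, 64)
pseudo_sonar = dec(enc(optical), enc(sonar))
print(pseudo_sonar.shape)  # torch.Size([2, 1, 64, 64])
```
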
2. Kim, Hong-Gi, Jungmin Seo, and Soo Mee Kim. "Underwater Optical-Sonar Image Fusion Systems." Sensors 22, no. 21 (2022): 8445. http://dx.doi.org/10.3390/s22218445.

Abstract:
Unmanned underwater operations using remotely operated vehicles or unmanned surface vehicles are increasing in recent times, and this guarantees human safety and work efficiency. Optical cameras and multi-beam sonars are generally used as imaging sensors in underwater environments. However, the obtained underwater images are difficult to understand intuitively, owing to noise and distortion. In this study, we developed an optical and sonar image fusion system that integrates the color and distance information from two different images. The enhanced optical and sonar images were fused using calibrated transformation matrices, and the underwater image quality measure (UIQM) and underwater color image quality evaluation (UCIQE) were used as metrics to evaluate the performance of the proposed system. Compared with the original underwater image, image fusion increased the mean UIQM and UCIQE by 94% and 27%, respectively. The contrast-to-noise ratio was increased six times after applying the median filter and gamma correction. The fused image in sonar image coordinates showed qualitatively good spatial agreement and the average IoU was 75% between the optical and sonar pixels in the fused images. The optical-sonar fusion system will help to visualize and understand well underwater situations with color and distance information for unmanned works.
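
To make the enhancement-and-fusion pipeline sketched in this abstract concrete, the snippet below applies a median filter and gamma correction to the optical frame and warps it into the sonar image's coordinates with a precomputed 3x3 transformation matrix before blending. The matrix values, file names, and parameter choices are placeholders, not the calibration or settings used in the paper.

```python
import cv2
import numpy as np

def enhance_optical(img, gamma=0.6, ksize=5):
    """Median filtering followed by gamma correction (parameters are illustrative)."""
    img = cv2.medianBlur(img, ksize)
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def fuse_optical_sonar(optical_bgr, sonar_gray, H, alpha=0.5):
    """Warp the optical image into sonar coordinates with matrix H and blend the two."""
    h, w = sonar_gray.shape[:2]
    warped = cv2.warpPerspective(optical_bgr, H, (w, h))
    sonar_bgr = cv2.cvtColor(sonar_gray, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(warped, alpha, sonar_bgr, 1.0 - alpha, 0.0)

# Hypothetical usage with a placeholder calibration matrix.
optical = cv2.imread("optical_frame.png")                      # colour underwater photo
sonar = cv2.imread("sonar_frame.png", cv2.IMREAD_GRAYSCALE)    # multi-beam sonar frame
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)              # stand-in for the calibrated transform
if optical is not None and sonar is not None:
    fused = fuse_optical_sonar(enhance_optical(optical), sonar, H)
    cv2.imwrite("fused.png", fused)
```
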
3. Ye, Xiufen, Haibo Yang, Chuanlong Li, Yunpeng Jia, and Peng Li. "A Gray Scale Correction Method for Side-Scan Sonar Images Based on Retinex." Remote Sensing 11, no. 11 (2019): 1281. http://dx.doi.org/10.3390/rs11111281.

Abstract:
When side-scan sonars collect data, sonar energy attenuation, the residual of time varying gain, beam patterns, angular responses, and sonar altitude variations occur, which lead to an uneven gray level in side-scan sonar images. Therefore, gray scale correction is needed before further processing of side-scan sonar images. In this paper, we introduce the causes of gray distortion in side-scan sonar images and the commonly used optical and side-scan sonar gray scale correction methods. As existing methods cannot effectively correct distortion, we propose a simple, yet effective gray scale correction method for side-scan sonar images based on Retinex given the characteristics of side-scan sonar images. Firstly, we smooth the original image and add a constant as an illumination map. Then, we divide the original image by the illumination map to produce the reflection map. Finally, we perform element-wise multiplication between the reflection map and a constant coefficient to produce the final enhanced image. Two different schemes are used to implement our algorithm. For gray scale correction of side-scan sonar images, the proposed method is more effective than the latest similar methods based on the Retinex theory, and the proposed method is faster. Experiments prove the validity of the proposed method.
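
A minimal sketch of the Retinex-style correction outlined above: the illumination map is a smoothed copy of the image plus a constant, the reflection map is the element-wise ratio, and a constant gain rescales the result. The smoothing kernel size, offset, and gain below are assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def retinex_correction(img, smooth_size=61, offset=1.0, gain=128.0):
    """Gray scale correction for a side-scan sonar image (2-D array)."""
    img = img.astype(np.float64)
    # Illumination map: heavily smoothed image plus a small constant to avoid division by zero.
    illumination = uniform_filter(img, size=smooth_size) + offset
    # Reflection map: original image divided by the illumination estimate.
    reflection = img / illumination
    # Constant coefficient stretches the reflection map back to a displayable range.
    return np.clip(reflection * gain, 0, 255).astype(np.uint8)

# Example on synthetic data with a strong across-track intensity gradient.
rng = np.random.default_rng(0)
gradient = np.linspace(30, 220, 512)[None, :]                 # uneven gain across the swath
image = np.clip(gradient + rng.normal(0, 10, (256, 512)), 0, 255)
corrected = retinex_correction(image)
```
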
4. Xi, Jier, Xiufen Ye, and Chuanlong Li. "Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target." Remote Sensing 14, no. 24 (2022): 6260. http://dx.doi.org/10.3390/rs14246260.

Abstract:
With the development of sonar technology, sonar images have been widely used to detect targets. However, sonar images pose many challenges for object detection. For example, detectable targets in sonar data are sparser than those in optical images, real underwater scanning experiments are complicated, and the image styles produced by different types of sonar equipment are inconsistent, which makes them difficult to use in sonar object detection and recognition algorithms. To solve these problems, we propose a novel sonar image object-detection method based on style learning and random noise of various shapes. Sonar-style target sample images are generated through style transfer, which augments the insufficient sonar object images. By introducing noise of various shapes, including points, lines, and rectangles, the problems of mud and sand obstruction and mutilated targets in real environments are addressed, and the limited poses of sonar image targets are enriched by fusing multiple poses of optical image targets. In addition, a feature enhancement method is proposed to solve the issue of missing key features when style transfer is applied directly to optical images. The experimental results show that our method achieves better precision.
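
The random-shape noise augmentation described here (points, lines, and rectangles drawn over optical training images to mimic occlusion and clutter) is straightforward to prototype. The following sketch is an assumed implementation with OpenCV; shape counts, sizes, and intensities are arbitrary choices, not the paper's settings.

```python
import cv2
import numpy as np

def add_shape_noise(img, rng, n_shapes=8):
    """Overlay random points, lines, and rectangles to simulate mud/sand occlusion."""
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(n_shapes):
        intensity = int(rng.integers(0, 256))
        kind = rng.choice(["point", "line", "rect"])
        x1, y1 = int(rng.integers(0, w)), int(rng.integers(0, h))
        x2, y2 = int(rng.integers(0, w)), int(rng.integers(0, h))
        if kind == "point":
            cv2.circle(out, (x1, y1), int(rng.integers(1, 4)), intensity, -1)
        elif kind == "line":
            cv2.line(out, (x1, y1), (x2, y2), intensity, int(rng.integers(1, 3)))
        else:
            cv2.rectangle(out, (x1, y1), (x2, y2), intensity, -1)
    return out

rng = np.random.default_rng(42)
optical = np.full((256, 256), 180, dtype=np.uint8)   # stand-in optical training image
augmented = add_shape_noise(optical, rng)
```
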
5. Peng, Lei. "Adaptive De-Noising Approach for Underwater Side Scan Sonar Image." Applied Mechanics and Materials 373-375 (August 2013): 509–12. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.509.

Abstract:
It is difficult to detect the edges of objects in side scan sonar images due to the complex background, poor contrast, and degraded edges. Therefore, it is important to remove noise from side scan sonar images. Traditional de-noising methods for optical images may not work well on sonar images. In this paper, an adaptive de-noising approach is used. The side scan sonar image is first filtered with a mean filter to remove coarse noise; then a weighting function is generated from a spatial distance filter and an intensity distance filter. The parameters adapt to the sonar image. The experimental results indicate that this is an effective de-noising method for underwater sonar images.
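
The two-stage filter described above (a mean filter for coarse noise, then a weighting built from spatial and intensity distances) resembles a bilateral filter. The snippet below is a rough stand-in using OpenCV's built-in functions; the kernel size and sigma values are placeholders rather than the paper's adaptively chosen parameters.

```python
import cv2

def denoise_sidescan(img, mean_ksize=3, d=9, sigma_color=50.0, sigma_space=7.0):
    """Mean filter to knock down coarse speckle, then a spatial/intensity-weighted filter."""
    smoothed = cv2.blur(img, (mean_ksize, mean_ksize))          # stage 1: mean filter
    # Stage 2: bilateral filtering weights neighbours by spatial distance and
    # intensity difference, approximating the paper's weighted function.
    return cv2.bilateralFilter(smoothed, d, sigma_color, sigma_space)

sonar = cv2.imread("sidescan.png", cv2.IMREAD_GRAYSCALE)        # placeholder path
if sonar is not None:
    clean = denoise_sidescan(sonar)
    cv2.imwrite("sidescan_denoised.png", clean)
```
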
6. Jiao, Shengxi, Chunyu Zhao, and Ye Xin. "Research on Convolutional Neural Network Model for Sonar Image Segmentation." MATEC Web of Conferences 220 (2018): 10004. http://dx.doi.org/10.1051/matecconf/201822010004.

Abstract:
The speckle noise of sonar images seriously affects human interpretation and automatic recognition. Precise segmentation of sonar images contaminated by speckle noise is an important and difficult problem in image processing. The fully convolutional network (FCN) has the advantage of accepting images of arbitrary size and preserving the spatial information of the original input image. In this paper, image features are obtained by autonomous learning with a convolutional neural network, and the original learning rule based on the mean square error loss function is improved. Taking the pixel as the processing unit, a segmentation method based on an FCN model with a relative loss function (FCN-RLF) is proposed for small underwater sonar images, achieving pixel-level segmentation. Experimental results show that the improved algorithm increases segmentation accuracy and better preserves the edges and details of sonar images. The proposed model also has a stronger ability to reject speckle noise.
7. Xi, Jier, and Xiufen Ye. "Sonar Image Target Detection Based on Simulated Stain-like Noise and Shadow Enhancement in Optical Images under Zero-Shot Learning." Journal of Marine Science and Engineering 12, no. 2 (2024): 352. http://dx.doi.org/10.3390/jmse12020352.

Abstract:
There are many challenges in using side-scan sonar (SSS) images to detect objects. The challenge of object detection and recognition in sonar data is greater than in optical images due to the sparsity of detectable targets. The complexity of real-world underwater scanning presents additional difficulties, as different angles produce sonar images of varying characteristics. This heterogeneity makes it difficult for algorithms to accurately identify and detect sonar objects. To solve these problems, this paper presents a novel method for sonar image target detection based on a transformer and YOLOv7. Thus, two data augmentation techniques are introduced to improve the performance of the detection system. The first technique applies stain-like noise to the training optical image data to simulate the real sonar image environment. The second technique adds multiple shadows to the optical image and 3D data targets to represent the direction of the target in the sonar image. The proposed method is evaluated on a public sonar image dataset, and the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods in terms of accuracy and speed. The experimental results show that our method achieves better precision.
8. Nguyen, Huu-Thu, Eon-Ho Lee, and Sejin Lee. "Study on the Classification Performance of Underwater Sonar Image Classification Based on Convolutional Neural Networks for Detecting a Submerged Human Body." Sensors 20, no. 1 (2019): 94. http://dx.doi.org/10.3390/s20010094.

Abstract:
Automatically detecting a submerged human body underwater is very challenging, yet it is an absolute necessity for a diver or a submersible. For a vision sensor, water turbidity and limited light make it difficult to capture clear images. For this reason, sonar sensors are mainly used in water. However, even though a sonar sensor can produce a plausible underwater image within these limitations, the sonar image's quality varies greatly depending on the background of the target. The readability of the sonar image differs considerably according to the target's distance from the seafloor or the incidence angle of the sonar sensor to the floor. The target background must therefore be carefully considered because it causes scattering and polarization noise in the sonar image. To successfully classify sonar images with these noise sources, we adopted Convolutional Neural Networks (CNNs) such as AlexNet and GoogleNet. In preparing the training data for these models, data augmentation for scattering and polarization was implemented to improve classification accuracy over the original sonar images. It proves practical to classify sonar images at sea even when training only on sonar images from simple testbed experiments. Experimental validation was performed using three different datasets of underwater sonar images of a submerged dummy body, resulting in a final average classification accuracy of 91.6% using GoogleNet.
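
For readers who want to reproduce the general recipe (fine-tuning a pretrained GoogLeNet on a small number of sonar classes), a hedged sketch with torchvision is shown below. The class count, optimizer settings, and data pipeline are assumptions; the paper's scattering and polarization augmentation is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # e.g. "submerged body" vs. "background" -- an assumed label set

# Start from an ImageNet-pretrained GoogLeNet and replace the classifier head.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False          # ignore the auxiliary classifiers for simple fine-tuning
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of 3-channel, 224x224 sonar image crops."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch just to show the expected tensor shapes.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])))
```
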
9. Tan, Xinyu, Chengjie Wang, Tianshun Chen, and Liran Shen. "Forward looking sonar image segmentation based on empirical mode decomposition." Journal of Physics: Conference Series 2303, no. 1 (2022): 012063. http://dx.doi.org/10.1088/1742-6596/2303/1/012063.

Abstract:
Due to the low imaging quality and severe noise pollution of forward-looking sonar images, traditional segmentation methods based on edge or statistical information struggle to obtain high-precision, robust results, so segmenting forward-looking sonar images is very complicated. Based on an analysis of the characteristics of forward-looking sonar, a new image segmentation method is proposed. First, the 2D maximum entropy segmentation principle combined with the chicken flock optimization algorithm is used to remove the background of the forward-looking sonar image. Then, based on 2D empirical mode decomposition, appropriate intrinsic mode functions are selected to denoise and enhance the image. Finally, the reconstructed image is segmented by enhanced fuzzy C-means clustering to obtain the segmentation result. This method not only suppresses noise interference effectively but also preserves image edge details during segmentation.
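
The background-removal step mentioned above relies on maximum-entropy thresholding (the paper uses a 2D variant searched with a swarm optimizer). As a simpler illustration, the sketch below computes a 1D maximum-entropy (Kapur) threshold by exhaustive search; it is an assumed simplification, not the paper's method.

```python
import numpy as np

def max_entropy_threshold(img):
    """Return the gray level that maximizes background + foreground entropy (Kapur's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

rng = np.random.default_rng(1)
sonar = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in sonar frame
t = max_entropy_threshold(sonar)
foreground = sonar >= t   # rough background-removal mask
```
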
10. Dong, M., H. Qiu, H. Wang, P. Zhi, and Z. Xu. "Sonar Image Recognition Based on Machine Learning Framework." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-3/W1-2022 (April 22, 2022): 45–51. http://dx.doi.org/10.5194/isprs-archives-xlvi-3-w1-2022-45-2022.

Abstract:
In order to improve the robustness and generalization ability of model recognition, sonar images are first enhanced by preprocessing steps such as coordinate conversion, interpolation, denoising, and enhancement. Two transfer learning approaches are then applied to sonar image training and recognition: one under the Caffe framework with MATLAB as an interface (a network of eight layers, comprising five convolutional layers and three fully connected layers), and one under the Python deep learning framework using the Inception-ResNet-v2 model. First, part of a sonar image dataset (derived from the 2021 National Robot Underwater Competition online competition data) is trained with the MATLAB-interfaced Caffe framework to obtain a training model; after parameter tuning, a convolutional neural network model for automatic sonar image recognition is obtained. The transfer learning method can work with less sonar image data, which addresses the problem of insufficient sonar image data and allows training to reach a higher recognition rate in a shorter time. When the training data are randomly sampled for testing, the Caffe-based sonar recognition model recognizes them quickly and completely, and the recognition rate reaches 92% when the test samples did not participate in training. The transfer learning method under the Python Inception-ResNet-v2 model reaches a recognition rate of about 97%. Using the two models in this paper, sonar images can be identified with a high recognition rate, much higher than traditional recognition methods such as SVM classifiers, and the two deep-learning-based sonar image recognition models show better recognition and generalization ability.

Dissertations / Theses on the topic "Sonar image"

1. Henson, Benjamin. "Image registration for sonar applications." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/19536/.

Abstract:
This work develops techniques to estimate the motion of an underwater platform by processing data from an on-board sonar, such as a Forward Looking Sonar (FLS). Based on image registration, a universal algorithm has been developed and validated with in-field datasets. The proposed algorithm gives a high-quality registration to a fine (sub-pixel) precision using an adaptive filter and is suitable for both optical and acoustic images. The efficiency and quality of the result can be improved if an initial estimate of the motion is made. Therefore, a coarse (pixel-wide) registration algorithm is proposed; it is based on the assumption of local sparsity in the pixel motion between two images. Using a coarse and then a fine registration, large displacements can be accommodated with a result that reaches sub-pixel precision. The registration process produces a displacement map (DM) between two images. From a sequence of DMs, an estimate of the sensor's motion is made. This is performed by a proposed fast searching and matching technique applied to a library of modelled DMs. Further, this technique exploits regularised splines to estimate the attitude and trajectory of the platform. To validate the results, a mosaic has been produced from three sets of in-field data. Using a more detailed model of the acoustic propagation has the potential to improve the results further. As a step towards this, a baseband underwater channel model has been developed. A physics simulator is used to characterise the channel at waymark points in a changing environment. A baseband equivalent representation of the time-varying channel is then interpolated from these points. Processing in the baseband reduces the sample rate and hence reduces the run time for the model. A comparison to a more established channel model has been made to validate the results.
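
A coarse-then-fine registration of the kind described above can be prototyped with phase correlation for the pixel-level shift followed by a sub-pixel refinement. The sketch below leans on scikit-image's phase_cross_correlation as a generic stand-in; it does not reproduce the thesis's adaptive-filter or sparsity-based estimators.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_pair(reference, moving):
    """Estimate the translation that aligns `moving` to `reference` (coarse, then sub-pixel)."""
    coarse, _, _ = phase_cross_correlation(reference, moving)                       # pixel-level
    fine, _, _ = phase_cross_correlation(reference, moving, upsample_factor=20)     # sub-pixel
    return coarse, fine

# Synthetic example: shift a random "sonar frame" by a known sub-pixel offset.
rng = np.random.default_rng(3)
ref = rng.random((128, 128))
mov = nd_shift(ref, (4.3, -2.7), mode="nearest")
coarse, fine = register_pair(ref, mov)
print(coarse, fine)   # the fine estimate should be close to (-4.3, 2.7)
```
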
2. Daniell, Oliver James. "Sonar image interpretation for sub-sea operations." Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2983.

Abstract:
Mine Counter-Measure (MCM) missions are conducted to neutralise underwater explosives. Automatic Target Recognition (ATR) assists operators by increasing the speed and accuracy of data review. ATR embedded on vehicles enables adaptive missions which increase the speed of data acquisition. This thesis addresses three challenges: the speed of data processing, the robustness of ATR to environmental conditions, and the large quantities of data required to train an algorithm. The main contribution of this thesis is a novel ATR algorithm. The algorithm uses features derived from the projection of 3D boxes to produce a set of 2D templates. The template responses are independent of grazing angle, range and target orientation. Integer skewed integral images are derived to accelerate the calculation of the template responses. The algorithm is compared to the Haar cascade algorithm. For a single model of sonar and cylindrical targets, the algorithm reduces the Probability of False Alarm (PFA) by 80% at a Probability of Detection (PD) of 85%. The algorithm is trained on target data from another model of sonar. The PD is only 6% lower even though no representative target data was used for training. The second major contribution is an adaptive ATR algorithm that uses local sea-floor characteristics to address the problem of ATR robustness with respect to the local environment. A dual-tree wavelet decomposition of the sea-floor and a Markov Random Field (MRF) based graph-cut algorithm are used to segment the terrain. A Neural Network (NN) is then trained to filter ATR results based on the local sea-floor context. It is shown, for the Haar cascade algorithm, that the PFA can be reduced by 70% at a PD of 85%. The speed of data processing is addressed using novel pre-processing techniques. The standard three-class MRF for sonar image segmentation is formulated using graph-cuts. Consequently, a 1.2 million pixel image is segmented in 1.2 seconds. Additionally, local estimation of class models is introduced to remove range-dependent segmentation quality. Finally, an A* graph search is developed to remove the surface return, a line of saturated pixels often detected as false alarms by ATR. The A* search identifies the surface return in 199 of 220 images tested with a runtime of 2.1 seconds. The algorithm is robust to the presence of ripples and rocks.
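
The template responses mentioned above are accelerated with integral images, which let any axis-aligned box sum be read off in four lookups. The snippet below is a generic NumPy illustration of that idea (standard, unskewed boxes), not the thesis's integer skewed variant.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using four lookups into the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

rng = np.random.default_rng(7)
img = rng.random((64, 64))
ii = integral_image(img)
assert np.isclose(box_sum(ii, 10, 20, 30, 44), img[10:30, 20:44].sum())
```
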
3. Calder, Brian. "Bayesian spatial models for SONAR image interpretation." Thesis, Heriot-Watt University, 1997. http://hdl.handle.net/10399/1249.

Abstract:
This thesis is concerned with the use of spatial information in the processing of high-frequency sidescan SONAR imagery, and particularly with how such information can be used to develop techniques that assist in mapping functions. Survey applications aim to generate maps of the seabed but are time consuming and expensive; automatic processing is required to improve efficiency. Current techniques have had some success but utilise little of the available spatial information. Previously, the inclusion of such knowledge was prohibitively expensive; recent improvements in numerical simulation techniques have reduced the costs involved. This thesis attempts to exploit these improvements in a method for including spatial information in SONAR processing and, more generally, in image and signal analysis. Bayesian techniques for including prior knowledge and structuring complex problems are developed and applied to texture segmentation, object detection, and parameter extraction. Experiments on ground-truth and real datasets show that the inclusion of spatial context can be very effective in improving poor techniques or, conversely, in allowing simpler techniques to be used with the same objective outcome (with obvious computational advantages). The thesis also considers some of the implementation problems with the techniques used and develops simple modifications to improve common algorithms.
4. Hendriks, Lukas Anton. "Image processing techniques for sector scan sonar." Thesis, Stellenbosch: University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2487.

Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2009. Sonars are used extensively for underwater sensing, and recent advances in forward-looking imaging sonar have made this type of sonar an appropriate choice for use on Autonomous Underwater Vehicles. The images received from these sonars do, however, tend to be noisy, and when used in shallow water they contain strong bottom reflections that obscure returns from actual targets. The focus of this work was the investigation and development of post-processing techniques to enable the successful use of the sonar images for automated navigation. The use of standard image processing techniques for noise reduction and background estimation was evaluated on sonar images with varying amounts of noise, as well as on a set of images taken from an AUV in a harbour. The use of multiple background removal and noise reduction techniques on a single image was also investigated. To this end a performance measure was developed, based on the dynamic range found in the image and the uniformity of returned targets. This provided a means to quantitatively compare sets of post-processing techniques and identify the "optimal" processing. The resultant images showed great improvement in the visibility of target areas, and the proposed techniques can significantly improve the chances of correct target extraction.
5. Callow, Hayden J. "Signal Processing for Synthetic Aperture Sonar Image Enhancement." Thesis, University of Canterbury, Electrical and Electronic Engineering, 2003. http://hdl.handle.net/10092/4000.

Abstract:
This thesis contains a description of SAS processing algorithms, offering improvements in Fourier-based reconstruction, motion compensation, and autofocus. Fourier-based image reconstruction is reviewed and improvements shown as the result of improved system modelling. A number of new algorithms based on the wavenumber algorithm for correcting second order effects are proposed. In addition, a new framework for describing multiple-receiver reconstruction in terms of the bistatic geometry is presented and is a useful aid to understanding. Motion-compensation techniques for allowing Fourier-based reconstruction in widebeam geometries suffering large motion errors are discussed. A motion-compensation algorithm exploiting multiple-receiver geometries is suggested and shown to provide substantial improvement in image quality. New motion-compensation techniques for yaw correction using the wavenumber algorithm are discussed. A common framework for describing phase estimation is presented, and techniques from a number of fields are reviewed within this framework. In addition, a new proof is provided outlining the relationship between eigenvector-based autofocus phase estimation kernels and the phase-closure techniques used in astronomical imaging. Micronavigation techniques are reviewed, and extensions to the shear-average single-receiver micronavigation technique result in a 3–4 fold performance improvement when operating on high-contrast images. The stripmap phase gradient autofocus (SPGA) algorithm is developed and extends spotlight SAR PGA to the wide-beam, wide-band stripmap geometries common in SAS imaging. SPGA supersedes traditional PGA-based stripmap autofocus algorithms such as mPGA and PCA; the relationships between SPGA and these algorithms are discussed. SPGA's operation is verified on simulated and field-collected data, where it provides significant image improvement. SPGA with phase-curvature-based estimation is also tested and found to perform poorly compared with phase-gradient techniques. The operation of SPGA on data collected from Sydney Harbour is shown, with SPGA able to improve resolution to near the diffraction limit. Additional analysis of practical stripmap autofocus operation in the presence of undersampling and space-invariant blurring is presented, with significant comment regarding the difficulties inherent in autofocusing field-collected data. Field-collected data from trials in Sydney Harbour are presented along with associated autofocus results from a number of algorithms.
6. Williams, Neil. "Recognising objects in sector-scan sonar image sequences." Thesis, Heriot-Watt University, 1998. http://hdl.handle.net/10399/1176.

7. Beattie, Robert Scott. "Side scan sonar image formation, restoration and modelling." Thesis, Robert Gordon University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318551.

8. Ng, Ferdinand. "Calibration and image formation in the ABACUS sonar system." Master's thesis, University of Cape Town, 2005. http://hdl.handle.net/11427/5138.

9. McClelland, Scott C. "A rolling line source for a seismic sonar." Thesis, Monterey, Calif.: Springfield, Va.: Naval Postgraduate School; available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FMcClelland.pdf.

10. Carey-Smith, Christopher M. "Electronic stabilisation of sector scanning sonar displays." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/32623.

Books on the topic "Sonar image"

1. Cheung, Kwok-Yin. Signal processing for sonar image enhancement. University of East Anglia, 1993.

2. Tang, Xiaoou. Dominant run-length method for image classification. Woods Hole Oceanographic Institution, 1997.

3. Swift, B. Ann, and Geological Survey (U.S.), eds. Sidescan-sonar image of inner shelf off Folly Beach, S.C. U.S. Dept. of the Interior, U.S. Geological Survey, 1997.

4. McKinney, B. Ann, and Geological Survey (U.S.), eds. Sidescan-sonar image of inner shelf off Folly Beach, S.C. U.S. Dept. of the Interior, U.S. Geological Survey, 1997.

5. Paskevich, Valerie F. MAPIT: An improved method for mapping digital sidescan sonar data using the Woods Hole Image Processing System (WHIPS) software. U.S. Dept. of the Interior, U.S. Geological Survey, 1996.

Book chapters on the topic "Sonar image"

1. Zhao, Ting, Srđan Lazendić, Yuxin Zhao, Giacomo Montereale-Gavazzi, and Aleksandra Pižurica. "Classification of Multibeam Sonar Image Using the Weyl Transform." In Image Processing and Communications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31254-1_25.

2. Rajpal, N., S. Banerjee, and R. Bahl. "Automatic Object Identification from Sonar Image Shadow." In Acoustical Imaging. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-2958-3_105.

3. Bozzano, Roberto, and Antonio Siccardi. "Underwater vegetation detection in high frequency sonar images: A preliminary approach." In Image Analysis and Processing. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63508-4_170.

4. Carvalho, João, and Rodrigo Ventura. "Comparative Evaluation of Occupancy Grid Mapping Methods Using Sonar Sensors." In Pattern Recognition and Image Analysis. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38628-2_105.

5. Xie, Guohao, Jianxun Tang, Zhe Chen, and Mingsong Chen. "Improvements Towards the Sonar Image Dataset for Yolov7." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-60347-1_6.

6. Li, Menglei, Hejun Jiang, Yiwen Hu, and Yongfei Pan. "Sonar Image Processing Based on C-V Algorithm." In Proceedings of the World Conference on Intelligent and 3-D Technologies (WCI3DT 2022). Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7184-6_23.

7. Iida, Kohji, Tohru Mukai, Yoshinao Aoki, and Tomoko Hayakawa. "Three Dimensional Interpretation of Sonar Image for Fisheries Research." In Acoustical Imaging. Springer US, 1996. http://dx.doi.org/10.1007/978-1-4419-8772-3_94.

8. Zhou, Xin, Kun Tian, and Zihan Zhou. "STGAN: Sonar Image Despeckling Method Utilizing GAN and Transformer." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-9109-9_6.

9. Kumudham, R., P. Sathish Kumar, V. Rajendran, M. S. Jagan Mugesh, and U. Charan Raj. "Multiresolution Representation of SONAR Pipeline Image Using Pyramidal Transforms." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4943-1_44.

10. Wei, Longxing, Jianning Chi, Xiaoqiang Li, and Tianpeng Zhang. "SVM Based Dynamic Target Detection in Underwater Sonar Image." In Proceedings of 2021 International Conference on Autonomous Unmanned Systems (ICAUS 2021). Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9492-9_184.

Conference papers on the topic "Sonar image"

1. Lin, Cunyi, Hongyu Bian, and Shuxin Wang. "Nonreference Image Denoising for Sonar Images." In 2024 OES China Ocean Acoustics (COA). IEEE, 2024. http://dx.doi.org/10.1109/coa58979.2024.10723373.

2. Gode, Samiran, Akshay Hinduja, and Michael Kaess. "SONIC: Sonar Image Correspondence using Pose Supervised Learning for Imaging Sonars." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10611678.

3. Pillai, Swapna, Philippe Courmontagne, and Sujit Kumar Sahoo. "Improving SONAR Image Classification Performance Via Denoising." In OCEANS 2024 - SINGAPORE. IEEE, 2024. http://dx.doi.org/10.1109/oceans51537.2024.10682361.

4. Zhang, Liheng, Siquan Yu, and Lei Gao. "Side Scan Sonar Image Alignment for Complex Terrain." In 2024 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2024. https://doi.org/10.1109/icus61736.2024.10839967.

5. Liu, Hongji, Gaofeng Cheng, and Jian Liu. "Instance Segmentation for Active Sonar Image Based on Image Segmentation Foundation Model." In OCEANS 2024 - SINGAPORE. IEEE, 2024. http://dx.doi.org/10.1109/oceans51537.2024.10682216.

6. Ren, Kaiwei, and Xiaolin Wang. "A fully automated tracking algorithm for active sonar." In 4th International Conference on Signal Image Processing and Communication, edited by Xianye Ben and Lei Chen. SPIE, 2024. http://dx.doi.org/10.1117/12.3042532.

7. Calder, B. R., L. M. Linnett, and S. J. Clarke. "Spatial Interaction Models for Sonar Image Data." In Sonar Signal Processing 1995. Institute of Acoustics, 2024. http://dx.doi.org/10.25144/20149.

8. Evangelista, Gabriel Arruda, and João Baptista de Oliveira e Souza Filho. "Graph-based Multibeam Forward Looking Acoustic Image Classification." In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/eniac.2023.234431.

Abstract:
Multibeam sonar imaging has many applications, such as mine-like detection and navigation tasks, motivating interest in the automatic classification of sonar images. Recent works have proposed graph neural networks (GNNs) as an alternative to convolutional neural networks (CNNs) to address this task. This paper focuses on combining the strengths of both models to enhance the performance of GNNs when classifying sonar images. This proposal exploits a superpixel algorithm for image segmentation and graph formation. Comprehensive experiments with an MFLS open dataset evaluate the effect of model design parameters on the performance of the proposed approach. Using CNN-extracted features as initial node embeddings significantly improved the graph-based image classification performance.
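
To make the superpixel-to-graph step concrete, the sketch below segments an acoustic image with SLIC and builds node features and an edge list from neighbouring superpixels. It is an assumed preprocessing sketch using scikit-image; it does not include the CNN feature extractor or the GNN classifier evaluated in the paper.

```python
import numpy as np
from skimage.segmentation import slic

def image_to_graph(img, n_segments=100):
    """Segment a grayscale image into superpixels and return node features and edges."""
    # SLIC with start_label=0 is assumed to produce contiguous labels 0..n-1.
    labels = slic(img, n_segments=n_segments, channel_axis=None, start_label=0)
    n = labels.max() + 1
    # Node features: mean intensity per superpixel (a CNN embedding could replace this).
    feats = np.array([img[labels == k].mean() for k in range(n)])[:, None]
    # Edges: pairs of superpixels that touch horizontally or vertically.
    edges = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return feats, np.array(sorted(edges))

rng = np.random.default_rng(5)
sonar = rng.random((128, 128))          # stand-in multibeam sonar frame
features, edge_index = image_to_graph(sonar)
print(features.shape, edge_index.shape)
```
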
9. Carmichael, D. R., L. M. Linnett, and S. J. Clarke. "A Multiresolution Directional Operator for Sidescan Sonar Image Analysis." In Sonar Signal Processing 1995. Institute of Acoustics, 2024. http://dx.doi.org/10.25144/20152.

10. Jones, S. A. S., G. J. Heald, and F. Dawe. "Application of Novel Imaging Techniques to Passive Sonar Image Enhancement." In Sonar Signal Processing 1991. Institute of Acoustics, 2024. http://dx.doi.org/10.25144/21220.

Reports on the topic "Sonar image"

1. Marston, Philip L. High Frequency Sonar Elastic Image Enhancements: Ray Theory. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada389728.

2. Lansbergen, Romy, Hendrik de Villiers, and Marnix Poelman. Sonar 2021-2022 field experiment method development: a case-study of seaweed cultivation and biomass estimation using different sonar techniques and image recognizing networks. Wageningen Marine Research, 2023. http://dx.doi.org/10.18174/591184.

3. Cumming, E. H., and G. V. Sonnichsen. White Rose repetitive seafloor mapping: a correlation of side scan sonar data and a swath bathymetry image from the Grand Banks, Newfoundland. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2000. http://dx.doi.org/10.4095/211270.

4. Harrison, Ian J., Ned Horning, K. Koy, P. McPhearson, and Osman Wallace. An Introduction to Remote Sensing. American Museum of Natural History, 2008. http://dx.doi.org/10.5531/cbc.ncep.0174.

Abstract:
Remote sensing is a technology for sampling reflected and emitted electromagnetic radiation of features on the Earth's land surface, oceans, and atmosphere. In this module, we exclude such techniques as sonar, geomagnetic and seismic sounding, as well as medical imaging, but include a wide set of techniques often known by the alternative name of Earth Observation (EO). The main objective of this module is to introduce the basic concepts of remote sensing science, focusing on the practical aspects of accessing, visualizing, and processing remotely-sensed data. Information here is targeted towards those who are interested in learning about working with satellite imagery. Through this module, teachers will enable students of biodiversity conservation to begin exploring the benefits of remote sensing and explore imagery on their own. The accompanying appendices contain numerous resources for remote sensing information, image archives, and software.
5. Branduardi-Raymont, Graziella, et al. SMILE Definition Study Report. ESA SCI, 2018. http://dx.doi.org/10.5270/esa.smile.definition_study_report-2018-12.

Abstract:
The SMILE definition study report describes a novel self-standing mission dedicated to observing solar wind-magnetosphere coupling via simultaneous in situ solar wind/magnetosheath plasma and magnetic field measurements, X-Ray images of the magnetosheath and magnetic cusps, and UV images of global auroral distributions defining system-level consequences. The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) will complement all solar, solar wind and in situ magnetospheric observations, including both space- and ground-based observatories, to enable the first-ever observations of the full chain of events that drive the Sun-Earth connection.
6. Simmons, James A., Cynthia F. Moss, and Michael Ferragamo. Target Images in the Sonar of Bats. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada205679.

7. Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/45902.

Abstract:
The ability to detect and classify buried objects using thermal infrared imaging is affected by the environmental conditions at the time of imaging, which leads to an inconsistent probability of detection. For example, periods of dense overcast or recent precipitation events result in the suppression of the soil temperature difference between the buried object and soil, thus preventing detection. This work introduces an environmentally informed framework to reduce the false alarm rate in the classification of regions of interest (ROIs) in thermal IR images containing buried objects. Using a dataset that consists of thermal images containing buried objects paired with the corresponding environmental and meteorological conditions, we employ a machine learning approach to determine which environmental conditions are the most impactful on the visibility of the buried objects. We find the key environmental conditions include incoming shortwave solar radiation, soil volumetric water content, and average air temperature. For each image, ROIs are computed using a computer vision approach and these ROIs are coupled with the most important environmental conditions to form the input for the classification algorithm. The environmentally informed classification algorithm produces a decision on whether the ROI contains a buried object by simultaneously learning on the ROIs with a classification neural network and on the environmental data using a tabular neural network. On a given set of ROIs, we have shown that the environmentally informed classification approach improves the detection of buried objects within the ROIs.
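
The dual-input design described above (a classification network on the thermal ROI combined with a small network on tabular environmental variables) can be outlined as follows. This is a hypothetical PyTorch skeleton: the layer sizes, the three environmental features, and the fusion by concatenation are assumptions for illustration, not the report's architecture.

```python
import torch
import torch.nn as nn

class EnvInformedClassifier(nn.Module):
    """Joint classifier over an image ROI and a vector of environmental measurements."""
    def __init__(self, n_env_features=3, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                       # ROI branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tabular = nn.Sequential(                   # environmental branch
            nn.Linear(n_env_features, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, n_classes)       # fused decision

    def forward(self, roi, env):
        return self.head(torch.cat([self.cnn(roi), self.tabular(env)], dim=1))

model = EnvInformedClassifier()
roi = torch.randn(4, 1, 64, 64)     # thermal ROI crops
env = torch.randn(4, 3)             # e.g. solar radiation, soil moisture, air temperature
logits = model(roi, env)
print(logits.shape)   # torch.Size([4, 2])
```
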
8. Zirin, Harold. An Image Processing System for Research in Solar Physics. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada191673.

9. Norquist, Donald C., and K. S. Balasubramaniam. Diagnosis of Solar Flare Probability from Chromosphere Image Sequences. Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada554688.

10. Norquist, Donald C., and K. S. Balasubramaniam. More Diagnosis of Solar Flare Probability from Chromosphere Image Sequences. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada566093.