
Journal articles on the topic 'Small scaled image'


Consult the top 50 journal articles for your research on the topic 'Small scaled image.'


1

Gupta, Kishor Datta, and Sajib Sen. "A Genetic Algorithm Approach to Regenerate Image from a Reduce Scaled Image Using Bit Data Count." BRAIN. Broad Research in Artificial Intelligence and Neuroscience 9, no. 2 (2018): 34–44. https://doi.org/10.5281/zenodo.1245885.

Abstract:
A small scaled image loses some important bits of information that cannot be recovered when it is scaled back up. Using a multi-objective genetic algorithm, we can recover these lost bits. In this paper, we describe a genetic algorithm approach to recovering bits lost when an image is resized to a smaller version, using the original image's data bit counts, which are stored while the image is scaled. The method is highly scalable and can be applied in a distributed system. The same method can also be applied to recover error bits in any type of data block. We show a proof of concept by providing an implementation and results.
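The bit-count idea in this abstract can be sketched as a genetic-algorithm fitness function: candidate reconstructions are scored both against the naively upscaled image and against the 1-bit counts stored at downscaling time. This is a minimal illustration under assumptions, not the paper's implementation; the penalty weight and mutation rate are arbitrary choices.

```python
import random

def bit_count(block: bytes) -> int:
    """Number of 1-bits in a data block."""
    return sum(bin(b).count("1") for b in block)

def fitness(candidate: bytes, upscaled: bytes, stored_count: int) -> float:
    """Lower is better: pixel distance to the naively upscaled block,
    penalized when the candidate's bit count drifts from the count
    stored when the image was downscaled."""
    pixel_term = sum(abs(c - u) for c, u in zip(candidate, upscaled))
    count_term = abs(bit_count(candidate) - stored_count)
    return pixel_term + 255 * count_term  # weight is an arbitrary choice

def mutate(block: bytes, rate: float = 0.05) -> bytes:
    """GA mutation: flip individual bits with a small probability."""
    out = bytearray(block)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] ^= 1 << random.randrange(8)
    return bytes(out)
```

A GA would evolve a population of candidate blocks under this fitness, driving the reconstruction toward both the upscaled estimate and the stored bit counts.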
2

Merino, Timothy, Roman Negri, Dipika Rajesh, M. Charity, and Julian Togelius. "The Five-Dollar Model: Generating Game Maps and Sprites from Sentence Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 19, no. 1 (2023): 107–15. http://dx.doi.org/10.1609/aiide.v19i1.27506.

Abstract:
The five-dollar model is a lightweight text-to-image generative architecture that generates low-dimensional images or tile maps from an encoded text prompt. This model can successfully generate accurate and aesthetically pleasing content in low-dimensional domains with limited amounts of training data. Despite the small size of both the model and the datasets, the generated images or maps still maintain the encoded semantic meaning of the textual prompt. We apply this model to three small datasets: pixel art video game maps, video game sprite images, and down-scaled emoji images, applying novel augmentation strategies to improve the performance of our model on these limited datasets. We evaluate our models' performance using the cosine similarity score between text-image pairs generated by the CLIP ViT-B/32 model to demonstrate generation quality.
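The evaluation metric mentioned here, cosine similarity between CLIP text and image embeddings, reduces to a simple vector computation. The sketch below shows only the score itself; the actual CLIP ViT-B/32 embeddings are assumed to come from elsewhere.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors:
    1.0 for identical directions, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

A higher score for a (prompt embedding, generated-image embedding) pair indicates the generation preserved more of the prompt's semantics.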
3

Shinde, Sapna, Priti Chakurkar, and Rashmi Rane. "Deep Learning-Based Small Face Detection from Hard Image." International Research Journal on Advanced Engineering Hub (IRJAEH) 2, no. 03 (2024): 579–88. http://dx.doi.org/10.47392/irjaeh.2024.0084.

Abstract:
Facial detection usually comes first in face recognition and face analysis systems. Earlier techniques, such as histograms of oriented gradients and cascades, relied on manually engineered features from particular photos. However, the precision with which these techniques could identify faces in uncontrolled environments was limited. Numerous deep learning-based face recognition frameworks have recently been developed, many of which have significantly increased accuracy as a result of the rapid progress of deep learning in computer vision. Despite these advancements, detecting faces that are small, scaled, unusually positioned, occluded, or blurred in uncontrolled conditions remains a challenge in face identification. This problem has been studied for many years but has not been completely resolved.
4

Lu, Tao, Jiaming Wang, Yanduo Zhang, Zhongyuan Wang, and Junjun Jiang. "Satellite Image Super-Resolution via Multi-Scale Residual Deep Neural Network." Remote Sensing 11, no. 13 (2019): 1588. http://dx.doi.org/10.3390/rs11131588.

Abstract:
Recently, the application of satellite remote sensing images is becoming increasingly popular, but the observed images from satellite sensors are frequently in low-resolution (LR). Thus, they cannot fully meet the requirements of object identification and analysis. To utilize the multi-scale characteristics of objects fully in remote sensing images, this paper presents a multi-scale residual neural network (MRNN). MRNN adopts the multi-scale nature of satellite images to reconstruct high-frequency information accurately for super-resolution (SR) satellite imagery. Different sizes of patches from LR satellite images are initially extracted to fit different scales of objects. Large-, middle-, and small-scale deep residual neural networks are designed to simulate differently sized receptive fields for acquiring relative global, contextual, and local information for prior representation. Then, a fusion network is used to refine the different scales of information. MRNN fuses the complementary high-frequency information from the differently scaled networks to reconstruct the desired high-resolution satellite object image, which is in line with human visual experience ("look in multi-scale to see better"). Experimental results on the SpaceNet satellite image and NWPU-RESISC45 databases show that the proposed approach outperformed several state-of-the-art SR algorithms in terms of objective and subjective image quality.
5

Günen, Mehmet Akif, Umit Haluk Atasever, and Erkan Beşdok. "Analyzing the Contribution of Training Algorithms on Deep Neural Networks for Hyperspectral Image Classification." Photogrammetric Engineering & Remote Sensing 86, no. 9 (2020): 581–88. http://dx.doi.org/10.14358/pers.86.9.581.

Abstract:
Autoencoder (AE)-based deep neural networks learn complex problems by generating feature-space conjugates of input data. The learning success of an AE is highly sensitive to the training algorithm. The problem of hyperspectral image (HSI) classification using the spectral features of pixels is highly complex due to its multi-dimensional and excessive data nature. In this paper, the contribution of three gradient-based training algorithms (i.e., scaled conjugate gradient (SCG), gradient descent (GD), and resilient backpropagation (RP)) to the solution of the HSI classification problem using an AE was analyzed. It was also investigated how neighborhood component analysis affects classification performance for these training algorithms on HSIs. Two hyperspectral image classification benchmark data sets were used in the experimental analysis. The Wilcoxon signed-rank test indicates that RP is statistically better than SCG and GD in solving the related image classification problem.
6

Wei, Linhai, Chen Zheng, and Yijun Hu. "Oriented Object Detection in Aerial Images Based on the Scaled Smooth L1 Loss Function." Remote Sensing 15, no. 5 (2023): 1350. http://dx.doi.org/10.3390/rs15051350.

Abstract:
Although many state-of-the-art object detectors have been developed, detecting small and densely packed objects with complicated orientations in remote sensing aerial images remains challenging. For object detection in remote sensing aerial images, the different scales, sizes, appearances, and orientations of objects from different categories are likely to enlarge the variance in the detection error, and this variance has a non-negligible impact on detection performance. Motivated by this consideration, we tackle this issue in this paper in order to improve detection performance and reduce the impact of this variance as much as possible. By proposing a scaled smooth L1 loss function, we developed a new two-stage object detector for remote sensing aerial images, named Faster R-CNN-NeXt with RoI-Transformer. The proposed scaled smooth L1 loss function is used for bounding box regression and makes regression invariant to scale. This property ensures that bounding box regression is more reliable when detecting small and densely packed objects with complicated orientations and backgrounds, leading to improved detection performance. To learn rotated bounding boxes and produce more accurate object locations, a RoI-Transformer module is employed, since horizontal bounding boxes are inadequate for aerial image detection. The ResNeXt backbone is also adopted for the proposed object detector. Experimental results on two popular datasets, DOTA and HRSC2016, show that the variance in the detection error significantly affects detection performance. The proposed object detector is effective and robust, with the optimal scale factor for the scaled smooth L1 loss function being around 2.0. Compared to other promising two-stage oriented methods, our method achieves an mAP of 70.82 on DOTA, an improvement of at least 1.26 and up to 16.49. On HRSC2016, our method achieves an mAP of 87.1, an improvement of at least 0.9 and up to 1.4.
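The key property claimed in this abstract, regression invariant to scale, can be illustrated with a toy version of the loss: normalize the regression error by the box size before applying smooth L1. The exact formulation is defined in the paper; this sketch only shows why such a normalization makes proportionally equal errors on small and large boxes cost the same. The form below is an assumption for illustration.

```python
def smooth_l1(x: float, beta: float = 1.0) -> float:
    """Standard smooth L1: quadratic near zero, linear in the tails."""
    ax = abs(x)
    return 0.5 * ax * ax / beta if ax < beta else ax - 0.5 * beta

def scaled_smooth_l1(pred: float, target: float, box_size: float,
                     scale: float = 2.0) -> float:
    """Toy scale-invariant variant (assumed form, not the paper's exact
    definition): divide the regression error by the box size, multiply
    by a scale factor (the paper reports ~2.0 as optimal), then apply
    smooth L1."""
    return smooth_l1(scale * (pred - target) / box_size)
```

A 2-pixel error on a 20-pixel box then incurs exactly the same loss as a 1-pixel error on a 10-pixel box.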
7

Ni, Peishuang, Yanyang Liu, Hao Pei, Haoze Du, Haolin Li, and Gang Xu. "CLISAR-Net: A Deformation-Robust ISAR Image Classification Network Using Contrastive Learning." Remote Sensing 15, no. 1 (2022): 33. http://dx.doi.org/10.3390/rs15010033.

Abstract:
The inherent unknown deformations of inverse synthetic aperture radar (ISAR) images, such as translation, scaling, and rotation, pose great challenges to space target classification. To achieve high-precision classification of deformed ISAR images, a deformation-robust ISAR image classification network using contrastive learning (CL), named CLISAR-Net, is proposed. Unlike traditional supervised learning methods, CLISAR-Net adds an unsupervised pretraining phase, i.e., a two-phase training strategy is used to achieve classification. In the unsupervised pretraining phase, positive and negative sample pairs are constructed from unlabeled ISAR images with data augmentation, and the encoder is trained by means of CL to learn discriminative deep representations of deformed ISAR images. In the fine-tuning phase, based on the representations obtained from pretraining, a classifier is fine-tuned using a small number of labeled ISAR images, and finally deformed ISAR image classification is realized. In the experimental analysis, CLISAR-Net achieves higher classification accuracy than supervised learning methods for unknown scaled, rotated, and combined deformations. This implies that CLISAR-Net learned more robust deep features of deformed ISAR images through CL, which ensures the performance of the subsequent classification.
8

Jenkins, David R. "Differential contrast enhancement of digital image information by PC-based massively parallel processing." Proceedings, annual meeting, Electron Microscopy Society of America 52 (1994): 516–17. http://dx.doi.org/10.1017/s0424820100170311.

Abstract:
Digital images represent two-dimensional intensity maps which characterize the spatial x/y distribution of contrast information. Often, small detail contrasts are visually unrecognizable due to the prevalence of large contrast components which, inversely proportional to their extent, compress the intensity range of the detail contrasts. A new "pixel-accurate intensity processing" (PAIP) technology provides unrestricted access to and display of all detail information contained in digital images, independent of content, size, and depth (8-16 bit). It offers an exciting new way for objective and exhaustive digital image evaluation in near-real-time. We developed a PC-based massively parallel processing workstation (PiXision) for fast, artifact-free differential contrast extraction utilizing the PAIP technology. Information extraction and enhancement requires only a few digital processing steps (Fig. 1): 1. The original image is smoothed with pixel-accurate precision within a significant intensity range (IRS) using the PAIP technology; 2. The smoothed image may additionally be subtracted from the original, generating a detail image; 3. Finally, the smoothed or detail images are linearly scaled to 8-bit, providing the full visual intensity range. The basic software routine "smoothens" an image to any significant intensity range without spatial artifacts, using a variable, automatically adjusting local mask for pixel averaging.
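Steps 2 and 3 of the processing pipeline described above, subtracting the smoothed image and linearly rescaling to 8-bit, are simple enough to sketch directly. The pixel-accurate smoothing step itself (PAIP's core) is not specified in enough detail to reproduce, so a precomputed smoothed image is assumed here.

```python
def detail_image(original, smoothed):
    """Step 2: subtract the smoothed image from the original,
    leaving only the small-detail contrasts."""
    return [o - s for o, s in zip(original, smoothed)]

def linear_scale_8bit(values):
    """Step 3: linearly scale any value range onto 0..255 so the
    detail contrasts occupy the full visual intensity range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    return [round(255 * (v - lo) / span) for v in values]
```

On real images the same operations would be applied per pixel over a 2D array rather than a flat list.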
9

Hasan, Sumaya Falih, Muntadher Aidi Shareef, and Hussein Sabah Jaber. "Influence of Noise Equivalent Beta Naught estimation on backscattering image classification of TerraSAR-X." Geodesy and Cartography 50, no. 2 (2024): 104–12. http://dx.doi.org/10.3846/gac.2024.18264.

Abstract:
Noise Equivalent Beta Naught (NEBN) describes the noise contribution to the radar signal. This type of noise is present in TerraSAR-X satellite images and is expressed as a scaled polynomial describing the noise power. Sigma naught, the backscattering coefficient, represents the average reflectivity of a horizontal material sample and is used to characterize land use and land cover in radar images. In this paper, radar satellite images in dual VV and HH polarization were used to study the influence of this noise on backscattering image classification. Visual interpretation of sigma naught computed with and without the noise showed no apparent difference between the two cases. For a more detailed and precise comparison, small example images were used to inspect the obtained backscattering values. The results demonstrated that NEBN plays a main role in decreasing the backscattering coefficient values in the TSX image. The influence of this noise is usually high over water surfaces, because such surfaces generally have small backscattering coefficients compared with land cover.
10

JIN, JIANGANG. "AN ADAPTIVE IMAGE SCALING ALGORITHM BASED ON CONTINUOUS FRACTION INTERPOLATION AND MULTI-RESOLUTION HIERARCHY PROCESSING." Fractals 28, no. 08 (2020): 2040016. http://dx.doi.org/10.1142/s0218348x20400162.

Abstract:
Traditional interpolation algorithms often blur the edges of the target image due to low-pass filtering effects, making it difficult to obtain satisfactory visual results. Especially when the reduction ratio becomes small, jagged edges and partial information loss occur. In order to obtain better image scaling quality, an adaptive image scaling algorithm based on continued fraction interpolation and multi-resolution hierarchical processing is proposed. To overcome the noise in the original image, this paper first performs wavelet decomposition on the original image to obtain multiple images at different resolutions. Secondly, to eliminate the influence of local area variance on the overall image, a weighted average is performed on the images of different resolutions. Then, to overcome the blurring effect of the weighted-average image, the variance of three groups of pixels around the target pixel is calculated, the group with the smallest variance is selected, and the Salzer continued fraction interpolation equation is used to obtain the gray value of the target pixel. Finally, the multiple corrected images are stitched together into a scaled image. The algorithm achieves a high-order smooth transition between pixels in the same feature area and can adaptively modify the pixels of the image. The experimental results show that the edges of the target image obtained by the algorithm are clear and the algorithm complexity is low, which makes it convenient for hardware implementation and real-time image scaling.
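The group-selection step described in this abstract, picking the pixel group with the smallest variance before interpolating, can be sketched on its own; the Salzer continued fraction interpolation itself is omitted, and the grouping of pixels around the target is assumed to be given.

```python
def variance(vals):
    """Population variance of a pixel group."""
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def smoothest_group(groups):
    """Return the candidate pixel group with the smallest variance,
    i.e. the most homogeneous neighbourhood around the target pixel."""
    return min(groups, key=variance)
```

Interpolating only within the most homogeneous group avoids mixing pixels from both sides of an edge, which is what causes the blurring the paper targets.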
11

Koch, Tobias, Marco Körner, and Friedrich Fraundorfer. "Automatic and Semantically-Aware 3D UAV Flight Planning for Image-Based 3D Reconstruction." Remote Sensing 11, no. 13 (2019): 1550. http://dx.doi.org/10.3390/rs11131550.

Abstract:
Small-scaled unmanned aerial vehicles (UAVs) emerge as ideal image acquisition platforms due to their high maneuverability even in complex and tightly built environments. The acquired images can be utilized to generate high-quality 3D models using current multi-view stereo approaches. However, the quality of the resulting 3D model highly depends on the preceding flight plan which still requires human expert knowledge, especially in complex urban and hazardous environments. In terms of safe flight plans, practical considerations often define prohibited and restricted airspaces to be accessed with the vehicle. We propose a 3D UAV path planning framework designed for detailed and complete small-scaled 3D reconstructions considering the semantic properties of the environment allowing for user-specified restrictions on the airspace. The generated trajectories account for the desired model resolution and the demands on a successful photogrammetric reconstruction. We exploit semantics from an initial flight to extract the target object and to define restricted and prohibited airspaces which have to be avoided during the path planning process to ensure a safe and short UAV path, while still aiming to maximize the object reconstruction quality. The path planning problem is formulated as an orienteering problem and solved via discrete optimization exploiting submodularity and photogrammetrical relevant heuristics. An evaluation of our method on a customized synthetic scene and on outdoor experiments suggests the real-world capability of our methodology by providing feasible, short and safe flight plans for the generation of detailed 3D reconstruction models.
12

Chinnarao, M., R. Goutham Sai Kalyan, T. Naga Pravallika, and B. Srinivas. "Object Detection Using YOLO and TensorFlow." International Journal in Engineering Sciences 1, no. 1 (2024): 13–23. https://doi.org/10.5281/zenodo.11825059.

Abstract:
Object detection methods aim to identify all target objects in the target image and determine their categories and position information in order to achieve machine vision understanding. Numerous approaches have been proposed to solve this problem, mainly inspired by methods from computer vision and deep learning. However, existing approaches often perform poorly in the detection of small, dense objects, and can even fail to detect objects under random geometric transformations. In this study, we compare and analyse mainstream object detection algorithms and propose a multi-scaled deformable convolutional object detection network to deal with the challenges faced by current methods. Our analysis demonstrates performance on par with, or even better than, state-of-the-art methods. We use deep convolutional networks to obtain multi-scaled features, and add deformable convolutional structures to handle geometric transformations. We then fuse the multi-scaled features by upsampling, in order to implement the final object recognition and region regression. Experiments show that our suggested framework improves the accuracy of detecting small target objects with geometric deformation, showing significant improvements in the trade-off between accuracy and speed.
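The fusion step mentioned here, upsampling multi-scaled feature maps to a common resolution and combining them, can be sketched with nearest-neighbour upsampling and element-wise addition. This is a deliberate simplification; the network described in the abstract presumably uses learned upsampling and convolutional fusion.

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                   # repeat each row
    return out

def fuse(a, b):
    """Element-wise sum of two equally sized feature maps."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

A coarse map is upsampled until it matches the finer map's resolution, then the two are fused so the detector sees both contextual and fine-grained features.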
13

Polyakova, M. V. "IMAGE SEGMENTATION WITH A CONVOLUTIONAL NEURAL NETWORK WITHOUT POOLING LAYERS IN DERMATOLOGICAL DISEASE DIAGNOSTICS SYSTEMS." Radio Electronics, Computer Science, Control, no. 1 (February 25, 2023): 51. http://dx.doi.org/10.15588/1607-3274-2023-1-5.

Abstract:
Context. The problem of automating of the segmentation of spectral-statistical texture images is considered. The object of research is image processing in dermatological disease diagnostic systems.
 Objective. The aim of the research is to improve the segmentation performance of color images of psoriasis lesions by elaboration of a deep learning convolutional neural network without pooling layers.
 Method. A convolutional neural network is proposed to process a three-channel psoriasis image of a specified size. The initial color images were scaled to the specified size and then fed into the neural network. The architecture of the proposed neural network consists of four convolutional layers with batch normalization layers and the ReLU activation function. Feature maps from the output of these layers were fed into a 1×1 convolutional layer with the Softmax activation function. The resulting feature maps were passed to the image pixel classification layer. When segmenting images, convolutional and pooling layers extract the features of image fragments, and fully connected layers classify the resulting feature vectors, forming a partition of the image into homogeneous segments. The segmentation features are learned as a result of network training using ground-truth images segmented by an expert. Such features are robust to noise and distortion in images. The combination of segmentation results at different scales is determined by the network architecture. Pooling layers were not included in the architecture of the proposed convolutional neural network, since they reduce the size of the feature maps compared to the size of the original image and can decrease the segmentation performance for small psoriasis lesions and lesions of complex shape.
 Results. The proposed convolutional neural network has been implemented in software and researched for solving the problem of psoriasis images segmentation.
 Conclusions. The use of the proposed convolutional neural network made it possible to enhance the segmentation performance on plaque and guttate psoriasis images, especially at the edges of the lesions. A prospect for further research is to study the performance of the proposed CNN when abrupt changes in color and illumination, blurring, or complex background areas (for example, containing clothes or fragments of the interior) are present in dermatological images. It is also advisable to apply the proposed CNN to other color image processing problems, to segment statistical or spectral-statistical texture regions on a uniform or textured background.
14

Ren, Keying, Xiaoyan Chen, Zichen Wang, Xiwen Liang, Zhihui Chen, and Xia Miao. "HAM-Transformer: A Hybrid Adaptive Multi-Scaled Transformer Net for Remote Sensing in Complex Scenes." Remote Sensing 15, no. 19 (2023): 4817. http://dx.doi.org/10.3390/rs15194817.

Abstract:
The quality of remote sensing images has been greatly improved by the rapid advancement of unmanned aerial vehicles (UAVs), which has made it possible to detect small objects in the most complex scenes. Recently, learning-based object detection has been introduced and has gained popularity in remote sensing image processing. To improve the detection accuracy of small, weak objects in complex scenes, this work proposes a novel hybrid backbone composed of a convolutional neural network and an adaptive multi-scaled transformer, referred to as HAM-Transformer Net. HAM-Transformer Net first extracts the details of feature maps using convolutional local feature extraction blocks. Secondly, hierarchical information is extracted using multi-scale location coding. Finally, an adaptive multi-scale transformer block is used to extract further features in different receptive fields and to fuse them adaptively. We conducted extensive comparison experiments on a self-constructed dataset, which demonstrate that the method is a significant improvement over state-of-the-art object detection algorithms.
15

Krishnagopal, Sanjukta, Yiannis Aloimonos, and Michelle Girvan. "Similarity Learning and Generalization with Limited Data: A Reservoir Computing Approach." Complexity 2018 (November 1, 2018): 1–15. http://dx.doi.org/10.1155/2018/6953836.

Abstract:
We investigate the ways in which a machine learning architecture known as Reservoir Computing learns concepts such as "similar" and "different" and other relationships between image pairs and generalizes these concepts to previously unseen classes of data. We present two Reservoir Computing architectures, which loosely resemble neural dynamics, and show that a Reservoir Computer (RC) trained to identify relationships between image pairs drawn from a subset of training classes generalizes the learned relationships to substantially different classes unseen during training. We demonstrate our results on the simple MNIST handwritten digit database as well as a database of depth maps of visual scenes in videos taken from a moving camera. We consider image pair relationships such as images from the same class; images from the same class with one image superposed with noise, rotated 90°, blurred, or scaled; and images from different classes. We observe that the reservoir acts as a nonlinear filter projecting the input into a higher dimensional space in which the relationships are separable; i.e., the reservoir system state trajectories display different dynamical patterns that reflect the corresponding input pair relationships. Thus, as opposed to training in the entire high-dimensional reservoir space, the RC only needs to learn characteristic features of these dynamical patterns, allowing it to perform well with very few training examples compared with conventional machine learning feed-forward techniques such as deep learning. In generalization tasks, we observe that RCs perform significantly better than state-of-the-art, feed-forward, pair-based architectures such as convolutional and deep Siamese Neural Networks (SNNs). We also show that RCs can not only generalize relationships, but also generalize combinations of relationships, providing robust and effective image pair classification.
Our work helps bridge the gap between explainable machine learning with small datasets and biologically inspired analogy-based learning, pointing to new directions in the investigation of learning processes.
16

Liu, Liping, Yantao Wei, Yue Wang, Huang Yao, and Di Chen. "Using Double-Layer Patch-Based Contrast for Infrared Small Target Detection." Remote Sensing 15, no. 15 (2023): 3839. http://dx.doi.org/10.3390/rs15153839.

Abstract:
Detecting infrared (IR) small targets effectively and robustly is crucial for tasks such as infrared searching and guarding. While methods based on the human vision system (HVS) have achieved great success in this field, detecting dim targets in complex backgrounds remains a challenge due to multi-scale frameworks and over-simplified disparity calculations. In this paper, infrared small targets are detected with a novel local contrast measurement named double-layer patch-based contrast (DLPC). Firstly, we crafted an elaborate double-layer local contrast measure to suppress the background, which can accurately measure the gray-level difference between the target and its surrounding complex background. Secondly, we calculated the absolute value of the grayscale difference between the target and the background in the diagonal directions as a weighting factor to further enhance the target. Then, an adaptive threshold on the DLPC was employed to extract the target from the IR image. The proposed method can detect small targets effectively with a fixed-scale mask template while being computationally efficient. Experimental results in terms of background suppression factor (BSF), signal-to-clutter ratio gain (SCRG), and receiver operating characteristic (ROC) curves on five IR image datasets demonstrate that the proposed method has better detection performance than six state-of-the-art methods and is more robust in addressing complex backgrounds.
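The general shape of such patch-based contrast measures, comparing the central cell of a sliding window against its surrounding cells, can be sketched as follows. This is a generic single-layer toy for intuition, not the paper's DLPC definition.

```python
def patch_mean(img, r0, r1, c0, c1):
    """Mean gray level of the patch img[r0:r1, c0:c1]."""
    vals = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def local_contrast(img, r, c, k=1):
    """Toy patch contrast at pixel (r, c): mean of the central
    (2k+1)x(2k+1) cell minus the largest mean among its 8 neighbouring
    cells. A bright small target on a darker background yields a large
    positive value; flat background yields zero."""
    w = 2 * k + 1
    center = patch_mean(img, r - k, r + k + 1, c - k, c + k + 1)
    background = max(
        patch_mean(img, r + dr * w - k, r + dr * w + k + 1,
                   c + dc * w - k, c + dc * w + k + 1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    return center - background
```

Thresholding such a contrast map is what separates a genuine small target from clutter that is bright in some, but not all, surrounding directions.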
17

Wade, Richard A., R. Ciardullo, J. B. DeVeny, G. H. Jacoby, and W. E. Schoening. "An Hα Image of Nova V1500 Cygni Twelve Years After Outburst." International Astronomical Union Colloquium 122 (1990): 195–96. http://dx.doi.org/10.1017/s0252921100068573.

Abstract:
We obtained a narrow band Hα image of V1500 Cyg during an engineering run at the R-C focus of the KPNO 4 m telescope on UT 1987 July 19. Our detector was an 800 × 800 format TI CCD, which yielded a plate scale of 0.1013 arcsec/pixel. The exposure was 2000 sec and was made through a 75 Å wide filter centered at 6563 Å. The seeing was ~1.2 arcsec. Our reductions were accomplished with the DAOPHOT photometry package (Stetson 1987) and the IRAF data reduction facility. After performing standard bias level subtraction and flat field division, we used DAOPHOT to find the frame's normalized point spread function (PSF) by summing the images of several field stars. We then scaled the PSF to calculate the instrumental magnitudes of the stars and of V1500 Cyg. This procedure overestimates the brightness of the nova, since the nebula contributes extra light to the central object and modifies the expected intensity distribution. (This is especially true in the narrow Hα bandpass.) Hence, in a frame where the PSF has been used to remove the fitted images, all the stars have satisfactorily small residuals around the mean sky level, except the nova itself. The image of V1500 Cyg shows sky level at the center, but a halo that rises from the center and then falls away.
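The instrumental magnitudes computed from the scaled PSF fits follow the standard logarithmic definition, which is worth a one-line sketch; the zero point below is an arbitrary placeholder, not a KPNO calibration.

```python
import math

def instrumental_magnitude(counts: float, zero_point: float = 25.0) -> float:
    """Instrumental magnitude from the summed counts of a scaled PSF
    fit: every factor of 100 in flux corresponds to exactly 5 mag."""
    return zero_point - 2.5 * math.log10(counts)
```

Because the nebula adds flux on top of the stellar PSF, the fitted counts for the nova are inflated, which is why the abstract notes the brightness is overestimated.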
18

Post, Robert B., and Robert B. Welch. "The Role of Retinal versus Perceived Size in the Effects of Pitched Displays on Visually Perceived Eye Level." Perception 25, no. 7 (1996): 853–59. http://dx.doi.org/10.1068/p250853.

Abstract:
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines which were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitching all three displays top-toward and top-away from the observer caused upward and downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
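The retinal-size equivalence exploited in this experiment follows directly from the visual angle formula: a display scaled to one third and viewed at one third the distance subtends the same angle. A small check, with the display size treated as an illustrative value since the abstract does not state it:

```python
import math

def visual_angle_deg(size: float, distance: float) -> float:
    """Visual (retinal) angle in degrees subtended by a display of a
    given size viewed at a given distance, in the same length units."""
    return 2 * math.degrees(math.atan(size / (2 * distance)))
```

For a hypothetical 0.3 m display, `visual_angle_deg(0.3, 1.0)` equals `visual_angle_deg(0.1, 1/3)`: the one-third-scale display at 33.3 cm matches the retinal image of the full display at 1 m, while the same small display at 1 m subtends a much smaller angle.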
APA, Harvard, Vancouver, ISO, and other styles
19

Preiss, Felix Johannes, Teresa Dagenbach, Markus Fischer, and Heike Petra Karbstein. "Development of a Pressure Stable Inline Droplet Generator with Live Droplet Size Measurement." ChemEngineering 4, no. 4 (2020): 60. http://dx.doi.org/10.3390/chemengineering4040060.

Full text
Abstract:
For the research on droplet deformation and breakup in scaled high-pressure homogenizing units, a pressure stable inline droplet generator was developed. It consists of an optically accessible flow channel with a combination of stainless steel and glass capillaries and a 3D printed orifice. The droplet size is determined online by live image analysis. The influence of the orifice diameter, the mass flow of the continuous phase and the mass flow of the disperse phase on the droplet diameter were investigated. Furthermore, the droplet detachment mechanisms were identified. Droplet diameters with a small diameter fluctuation between 175 µm and 500 µm could be realized, which allows a precise adjustment of the capillary (Ca) and Weber (We) Number in the subsequent scaled high pressure homogenizer disruption unit. The determined influence of geometry and process parameters on the resulting droplet size and droplet detachment mechanism agreed well with the literature on microfluidics. Furthermore, droplet trajectories in an exemplary scaled high-pressure homogenizer disruption unit are presented which show that the droplets can be reinjected on a trajectory close to the center axis or close to the wall, which should result in different stresses on the droplets.
APA, Harvard, Vancouver, ISO, and other styles
20

Mukaromah, H., C. T. Permana, and W. Astuti. "Aiming towards creative city: how Surakarta City government applied the Sustainability-Oriented Innovation (SOI) as a strategy to empower local small and medium creative industries." IOP Conference Series: Earth and Environmental Science 1186, no. 1 (2023): 012018. http://dx.doi.org/10.1088/1755-1315/1186/1/012018.

Full text
Abstract:
Sustainability-oriented innovation (SOI) is defined as the integration of ecological and social aspects into the products, processes, and organizational structures of economic activities. For Small-Medium Scaled Enterprises (SMEs), which engage in small-scaled production with limited capital assets and human resources capacity, achieving sustainability is a great challenge. In many cases, SMEs have only reached the co-production level of sustainability. In this regard, interactions with external actors, such as government institutions, larger private sectors, and other SMEs, are one of the keys for SMEs to attain sustainability. The role of the government in elevating SMEs as part of the creative economy environment in the city is important. The government sets the creative city vision and then synergizes the SMEs, especially those engaged in creative industries, into the broader organizational structure of city development. The government also empowers the main actors of SMEs in technical, managerial, and financial capacities. This research aims at learning about government-led sustainability-oriented innovation strategies in SMEs as a means of promoting the creative city image of Surakarta. Our research was supplemented by interviews and FGDs to explore relevant data and information. The findings revealed that the SOI strategies provided by the city government to empower SMEs toward the achievement of the creative city image in Surakarta were undertaken through the introduction of advanced information and communication technologies, capacity building to improve community networking, capital accessibility, and product standards and certifications.
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, He, Mierk Schwabe, and Cheng-Ran Du. "Identification of the Interface in a Binary Complex Plasma Using Machine Learning." Journal of Imaging 5, no. 3 (2019): 36. http://dx.doi.org/10.3390/jimaging5030036.

Full text
Abstract:
A binary complex plasma consists of two different types of dust particles in an ionized gas. Due to the spinodal decomposition and force imbalance, particles of different masses and diameters are typically phase separated, resulting in an interface. Both external excitation and internal instability may cause the interface to move with time. Support vector machine (SVM) is a supervised machine learning method that can be very effective for multi-class classification. We applied an SVM classification method based on image brightness to locate the interface in a binary complex plasma. Taking the scaled mean and variance as features, three areas, namely small particles, big particles and plasma without dust particles, were distinguished, leading to the identification of the interface between small and big particles.
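The feature construction described in this abstract (scaled mean and variance of local image brightness, one class per area) is easy to prototype. The sketch below is a hedged illustration on synthetic data, assuming 8×8 patches and three brightness regimes; a nearest-centroid rule stands in for the paper's SVM so the example needs only NumPy, and none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(patch):
    # The two features used in the abstract: mean and variance of brightness.
    return np.array([patch.mean(), patch.var()])

# Synthetic 8x8 patches standing in for the three areas:
# plasma without dust (dim), small particles, big particles (bright).
regimes = [(0.15, 0.02), (0.45, 0.10), (0.80, 0.04)]
X = np.array([features(rng.normal(m, s, (8, 8)))
              for m, s in regimes for _ in range(50)])
y = np.repeat([0, 1, 2], 50)

# Scale features to zero mean / unit variance, then classify by
# nearest class centroid (a stand-in for the paper's SVM).
X = (X - X.mean(0)) / X.std(0)
centroids = np.array([X[y == c].mean(0) for c in range(3)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print((pred == y).mean())  # accuracy on the toy data
```

On data this cleanly separated the brightness mean alone distinguishes the three areas; in real images the variance feature helps separate regions whose mean brightness overlaps.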
APA, Harvard, Vancouver, ISO, and other styles
22

Lo, Tien‐when, M. Nafi Toksöz, Shao‐hui Xu, and Ru‐Shan Wu. "Ultrasonic laboratory tests of geophysical tomographic reconstruction." GEOPHYSICS 53, no. 7 (1988): 947–56. http://dx.doi.org/10.1190/1.1442531.

Full text
Abstract:
In this study, we test geophysical ray tomography and geophysical diffraction tomography by scaled model ultrasonics experiments. First, we compare the performance of these two methods under limited view‐angle conditions. Second, we compare the adaptabilities of these two methods to objects of various sizes and acoustic properties. Finally, for diffraction tomography, we compare the Born and Rytov approximations based on the induced image distortion by using these two approximation methods. Our experimental results indicate the following: (1) When the scattered field can be obtained, geophysical diffraction tomography is in general superior to ray tomography because diffraction tomography is less sensitive to the limited view‐angle problem and can image small objects of size comparable to a wavelength. (2) The advantage of using ray tomography is that reconstruction can be done using the first arrivals only, the most easily measurable quantity; and there is no restriction on the properties of the object being imaged. (3) For geophysical diffraction tomography, the Rytov approximation is valid over a wider frequency range than the Born approximation in the cross‐borehole experiment. In the VSP and the surface reflection tomography experiments, no substantial difference between the Born and Rytov approximations is observed.
APA, Harvard, Vancouver, ISO, and other styles
23

Gerasimov, Jacob, Nezah Balal, Egor Liokumovitch, et al. "Scaled Modeling and Measurement for Studying Radio Wave Propagation in Tunnels." Electronics 10, no. 1 (2020): 53. http://dx.doi.org/10.3390/electronics10010053.

Full text
Abstract:
The subject of radio wave propagation in tunnels has gathered attention in recent years, mainly regarding the fading phenomena caused by internal reflections. Several methods have been suggested to describe the propagation inside a tunnel. This work is based on the ray tracing approach, which is useful for structures where the dimensions are orders of magnitude larger than the transmission wavelength. Using image theory, we utilized a multi-ray model to reveal non-dimensional parameters, enabling measurements in down-scaled experiments. We present the results of field experiments in a small concrete pedestrian tunnel with smooth walls for radio frequencies (RF) of 1, 2.4, and 10 GHz, as well as in a down-scaled model, for which millimeter waves (MMWs) were used, to demonstrate the roles of the frequency, polarization, tunnel dimensions, and dielectric properties on the wave propagation. The ray tracing method correlated well with the experimental results measured in the tunnel as well as in a scale model.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Dong, and Qichuan Tian. "A Novel Fuzzy Optimized CNN-RNN Method for Facial Expression Recognition." Elektronika ir Elektrotechnika 27, no. 5 (2021): 67–74. http://dx.doi.org/10.5755/j02.eie.29648.

Full text
Abstract:
Facial expression is one of the important ways of conveying emotion in interpersonal communication, and it has been widely used in many interpersonal communication systems. Traditional facial expression recognition methods are not intelligent enough to manage model uncertainty, whereas deep learning has a clear ability to deal with model uncertainty in image recognition. Deep learning methods can accomplish facial expression recognition, but the recognition rate can be further improved by a hybrid learning strategy. In this paper, a Fuzzy optimized convolutional neural network-recurrent neural network (CNN-RNN) method for facial expression recognition is proposed to address two problems: convolving images directly without image enhancement, and simple convolution stacks that ignore layer-by-layer feature convolution and thereby lose information. First, each face image is scaled by bilinear interpolation, and affine transformations are applied to augment the image data and avoid a shortage of training samples. Then the feature map of the facial expression is extracted by the CNN with small information loss. To deal with the uncertainty in the feature map, Fuzzy logic is employed to reduce the uncertainty by recognizing the highly nonlinear relationships between the features. The output of the Fuzzy model is then fed to the RNN to classify the different facial expression images. Recognition results on the open datasets CK, Jaffe, and FER2013 show that the proposed Fuzzy optimized CNN-RNN method improves recognition on the different facial expression datasets compared with currently popular algorithms.
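The bilinear scaling step mentioned in this abstract is a standard operation. As a hedged illustration (not the authors' code), a minimal NumPy implementation of bilinear image resizing might look like this:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel back to a (generally fractional) source coordinate.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights, shape (out_h, 1)
    wx = (xs - x0)[None, :]   # horizontal blend weights, shape (1, out_w)
    # Blend the four neighbouring source pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Resizing an image to its own dimensions reproduces it exactly, which is a convenient sanity check before using the function for augmentation.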
APA, Harvard, Vancouver, ISO, and other styles
25

Garg, Prateek, Anirudh Srinivasan Chakravarthy, Murari Mandal, Pratik Narang, Vinay Chamola, and Mohsen Guizani. "ISDNet: AI-enabled Instance Segmentation of Aerial Scenes for Smart Cities." ACM Transactions on Internet Technology 21, no. 3 (2021): 1–18. http://dx.doi.org/10.1145/3418205.

Full text
Abstract:
Aerial scenes captured by UAVs have immense potential in IoT applications related to urban surveillance, road and building segmentation, land cover classification, and so on, which are necessary for the evolution of smart cities. The advancements in deep learning have greatly enhanced visual understanding, but the domain of aerial vision remains largely unexplored. Aerial images pose many unique challenges for performing proper scene parsing such as high-resolution data, small-scaled objects, a large number of objects in the camera view, dense clustering of objects, background clutter, and so on, which greatly hinder the performance of the existing deep learning methods. In this work, we propose ISDNet (Instance Segmentation and Detection Network), a novel network to perform instance segmentation and object detection on visual data captured by UAVs. This work enables aerial image analytics for various needs in a smart city. In particular, we use dilated convolutions to generate improved spatial context, leading to better discrimination between foreground and background features. The proposed network efficiently reuses the segment-mask features by propagating them from early stages using residual connections. Furthermore, ISDNet makes use of effective anchors to accommodate varying object scales and sizes. The proposed method obtains state-of-the-art results in the aerial context.
APA, Harvard, Vancouver, ISO, and other styles
26

Jalife-Chavira, J. M., G. Trujillo-Schiaffino, P. G. Mendoza-Villegas, et al. "Inverse Hartmann test for radius of curvature measurement in a corneal topography calibration sphere." Review of Scientific Instruments 93, no. 4 (2022): 043101. http://dx.doi.org/10.1063/5.0080572.

Full text
Abstract:
In this article, the use of a square Hartmann screen to measure the radius of curvature of a corneal topography calibration test sphere is presented. The proposed technique is based on the principle of image formation by specular reflection on convex reflective surfaces. Applying an inverse Hartmann test, a de-magnified virtual image (Hartmanngram) is obtained; considering its own scaled reference screen plate, a zonal wavefront retrieval approach is used and the radius of curvature is obtained. The experimental setup along with the obtained results is presented. A simulated spherical wavefront is used as a method to evaluate the error in the wavefront reconstruction. Since the measurements of radius of curvature conform to ISO 10343, with suitable modifications the proposed method is potentially applicable to small F/# convex specular surfaces, as is the case in keratometry and corneal topography measurements.
APA, Harvard, Vancouver, ISO, and other styles
27

Goossens, R., E. D'Haluin, and G. Larnoe. "Satellite image interpretation (SPOT) for the survey of the ecological infrastructure in a small scaled landscape (Kempenland, Belgium)." Landscape Ecology 5, no. 3 (1991): 175–82. http://dx.doi.org/10.1007/bf00158064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Waldrop, Lindsay D., Miranda Hann, Amy K. Henry, Agnes Kim, Ayesha Punjabi, and M. A. R. Koehl. "Ontogenetic changes in the olfactory antennules of the shore crab, Hemigrapsus oregonensis , maintain sniffing function during growth." Journal of The Royal Society Interface 12, no. 102 (2015): 20141077. http://dx.doi.org/10.1098/rsif.2014.1077.

Full text
Abstract:
Malacostracan crustaceans capture odours using arrays of chemosensory hairs (aesthetascs) on antennules. Lobsters and stomatopods have sparse aesthetascs on long antennules that flick with a rapid downstroke when water flows between the aesthetascs and a slow return stroke when water is trapped within the array (sniffing). Changes in velocity only cause big differences in flow through an array in a critical range of hair size, spacing and speed. Crabs have short antennules bearing dense arrays of flexible aesthetascs that splay apart during downstroke and clump together during return. Can crabs sniff, and when during ontogeny are they big enough to sniff? Antennules of Hemigrapsus oregonensis representing an ontogenetic series from small juveniles to adults were used to design dynamically scaled physical models. Particle image velocimetry quantified fluid flow through each array and showed that even very small crabs capture a new water sample in their arrays during the downstroke and retain that sample during return stroke. Comparison with isometrically scaled antennules suggests that reduction in aesthetasc flexural stiffness during ontogeny, in addition to increase in aesthetasc number and decrease in relative size, maintain sniffing as crabs grow. Sniffing performance of intermediate-sized juveniles was worse than for smaller and larger crabs.
APA, Harvard, Vancouver, ISO, and other styles
29

Ibrahim, S. Syed, and G. Ravi. "Deep learning based Brain Tumour Classification based on Recursive Sigmoid Neural Network based on Multi-Scale Neural Segmentation." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 2s (2023): 77–86. http://dx.doi.org/10.17762/ijritcc.v11i2s.6031.

Full text
Abstract:
Brain tumours are malignant tissues in which cells replicate rapidly and indefinitely, so the tumour grows out of control. Deep learning has the potential to overcome challenges associated with brain tumour diagnosis and intervention. It is well known that segmentation methods can be used to delineate abnormal tumour areas in the brain, and segmentation is one of the advanced classification and detection tools: reliable neural network classification algorithms can effectively enable early diagnosis of brain tumours. Previous algorithms have drawbacks, so an automatic and reliable segmentation method is needed. However, the large spatial and structural heterogeneity between brain tumours makes automated segmentation a challenging problem. Tumours in images have irregular shapes and can be located in any part of the brain, which makes segmenting them accurately enough for clinical purposes a challenging task. In this work, we propose the Recursive Sigmoid Neural Network based on Multi-scale Neural Segmentation (RSN2-MSNS) for proper image segmentation. First, the image dataset for brain tumour classification is collected from a standard repository. Next, a pre-processing step targets only a small part of each image rather than the entire image; this approach reduces computational time and avoids over-complication. In the second stage, the images are segmented with the Enhanced Deep Clustering U-net (EDCU-net), which estimates the boundary points in the brain tumour images and, by evaluating colour histogram values, can successfully segment complex images that contain both textured and non-textured regions. In the third stage, features are extracted from the segmented images using Convolution Deep Feature Spectral Similarity (CDFS2), which scales the image values and extracts the relevant weights based on threshold limits.
Features are then selected from the extracted set on the basis of these relational weights. Finally, the features are classified with RSN2-MSNS. The proposed brain tumour classification model was evaluated on 1500 trainable images and achieves 97.0% accuracy; the sensitivity, specificity, and F1 measures were 96.4%, 95.2%, and 95.9%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
30

Yu, Dawen, and Shunping Ji. "Grid Based Spherical CNN for Object Detection from Panoramic Images." Sensors 19, no. 11 (2019): 2622. http://dx.doi.org/10.3390/s19112622.

Full text
Abstract:
Recently proposed spherical convolutional neural networks (SCNNs) have shown advantages over conventional planar CNNs on classifying spherical images. However, two factors hamper their application in an objection detection task. First, a convolution in S2 (a two-dimensional sphere in three-dimensional space) or SO(3) (three-dimensional special orthogonal group) space results in the loss of an object’s location. Second, overlarge bandwidth is required to preserve a small object’s information on a sphere because the S2/SO(3) convolution must be performed on the whole sphere, instead of a local image patch. In this study, we propose a novel grid-based spherical CNN (G-SCNN) for detecting objects from spherical images. According to input bandwidth, a sphere image is transformed to a conformal grid map to be the input of the S2/SO3 convolution, and an object’s bounding box is scaled to cover an adequate area of the grid map. This solves the second problem. For the first problem, we utilize a planar region proposal network (RPN) with a data augmentation strategy that increases rotation invariance. We have also created a dataset including 600 street view panoramic images captured from a vehicle-borne panoramic camera. The dataset contains 5636 objects of interest annotated with class and bounding box and is named as WHU (Wuhan University) panoramic dataset. Results on the dataset proved our grid-based method is extremely better than the original SCNN in detecting objects from spherical images, and it outperformed several mainstream object detection networks, such as Faster R-CNN and SSD.
APA, Harvard, Vancouver, ISO, and other styles
31

Nghiêm-Phú, Bình. "Correlation between tourists’ perceptions/evaluations of destination attributes and their overall satisfactions: Observations of a meta-analysis." European Journal of Tourism Research 19 (July 1, 2018): 98–115. http://dx.doi.org/10.54055/ejtr.v19i.328.

Full text
Abstract:
This study examined the correlation between tourists’ perception/evaluation of destination attributes and their overall satisfaction. Using data gathered from 34 previous studies and applying the meta-analysis method, this study found that destination image, destination quality, and destination attribute satisfaction have significant positive effects on tourists’ overall satisfaction, whether the latter variable is singly or multiply scaled; all the overall estimates have small to medium effect sizes. However, three issues should be taken into account when interpreting this correlation. First, not all of the components of the attribute-based constructs (destination image, destination quality, destination attribute satisfaction) can have significant effects on overall tourist satisfaction. Second, the unfavourable attributes of a destination may have some negative influence on tourist satisfaction. Third, the attribute-based constructs represent the external/common antecedents of overall tourist satisfaction; their predictive power may be eliminated when controlled for other internal/personal forces, such as personal values. Implications for future research and destination attribute management are discussed based on these observations.
APA, Harvard, Vancouver, ISO, and other styles
32

Ngadiman, Norhayati, Masiri Kaamin, Nor Baizura Hamid, et al. "Production of Orthophoto Map Using Unmanned Aerial Vehicles Photogrammetry." Journal of Computational and Theoretical Nanoscience 16, no. 12 (2019): 4925–29. http://dx.doi.org/10.1166/jctn.2019.8543.

Full text
Abstract:
Orthophoto production is part of the photogrammetric process of map making. An orthophoto image is a specifically scaled photographic image produced from perspective images in which distortion errors originating from attitude differences and image tilt are eliminated, while an orthophoto map is an orthophoto that carries cartographic information (legend, grids, contours, labels, etc.). The development of current technology has introduced Unmanned Aerial Vehicles (UAV) as an alternative to conventional photogrammetry in creating orthophoto maps. The purpose of this paper is to identify the potential of Unmanned Aerial Vehicles (UAV) as an alternative to conventional photogrammetry, which uses aircraft, in creating orthophoto maps. The area of the study is Universiti Tun Hussein Onn Malaysia (UTHM) Pagoh Campus within Pagoh Educational Hub. UAV photogrammetry is based on a photogrammetric measurement platform which operates remotely controlled, semi-autonomously, or autonomously, without a pilot sitting in the vehicle. The large-format aerial camera of conventional photogrammetry is replaced with a small-format digital camera in UAV photogrammetry. For a general area, the amounts of forward overlap and side overlap are 75% and 60%, respectively. For a forest area, the amounts of forward overlap and side overlap are 85% and 70%, respectively, as the UAV flight altitude will be higher. The whole workflow is introduced in the paper, where 461 images were collected with a DJI Phantom 4 Pro with camera model FC6310. The flight mission was completed using PIX4D Capture. An orthophoto map covering an area of 250458 m2 was produced using Agisoft PhotoScan. The result of this study is a better-quality orthophoto map of UTHM Pagoh Campus. This map shows an overview of the campus area with the most up-to-date information.
APA, Harvard, Vancouver, ISO, and other styles
33

Preiss, Felix Johannes, Benedikt Mutsch, Christian J. Kähler, and Heike Petra Karbstein. "Scaling of Droplet Breakup in High-Pressure Homogenizer Orifices. Part I: Comparison of Velocity Profiles in Scaled Coaxial Orifices." ChemEngineering 5, no. 1 (2021): 7. http://dx.doi.org/10.3390/chemengineering5010007.

Full text
Abstract:
Properties of emulsions such as stability, viscosity or color can be influenced by the droplet size distribution. High-pressure homogenization (HPH) is the method of choice for emulsions with a low to medium viscosity with a target mean droplet diameter of less than 1 µm. During HPH, the droplets of the emulsion are exposed to shear and extensional stresses, which cause them to break up. Ongoing work is focused on better understanding the mechanisms of droplet breakup and relevant parameters. Since the gap dimensions of the disruption unit (e.g., flat valve or orifice) are small (usually below 500 µm) and the droplet breakup also takes place on small spatial and time scales, the resolution limit of current measuring systems is reached. In addition, the high velocities impede time resolved measurements. Therefore, a five-fold and fifty-fold magnified optically accessible coaxial orifice were used in this study while maintaining the dimensionless numbers characteristic for the droplet breakup (Reynolds and Weber number, viscosity and density ratio). Three matching material systems are presented. In order to verify their similarity, the local velocity profiles of the emerging free jet were measured using both a microparticle image velocimetry (µ-PIV) and a particle image velocimetry (PIV) system. Furthermore, the influence of the outlet geometry on the velocity profiles is investigated. Similar relationships were found on all investigated scales. The areas with the highest velocity fluctuations were identified where droplets are exposed to the highest turbulent forces. The Reynolds number had no influence on the normalized velocity fluctuation field. The confinement of the jet started to influence the velocity field if the outlet channel diameter is smaller than 10 times the diameter of the orifice. In conclusion, the scaling approach offers advantages to study very fast processes on very small spatial scales in detail. 
The presented scaling approach also offers opportunities for optimizing the geometry of the disruption unit. However, the results also show the challenges of each size scale, which can arise from the respective fabrication, measurement technology, or experimental design. Depending on the problem to be investigated, we recommend conducting experimental studies at different scales.
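The scaling idea named in this abstract (matching Reynolds and Weber numbers between the original and the magnified orifice) can be illustrated with a short hedged calculation; all numbers below are hypothetical and not taken from the paper:

```python
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu: ratio of inertial to viscous forces."""
    return rho * v * d / mu

def weber(rho, v, d, sigma):
    """We = rho * v**2 * d / sigma: ratio of inertial to interfacial forces."""
    return rho * v ** 2 * d / sigma

# Hypothetical water-like fluid: a 0.2 mm orifice at 100 m/s versus a
# five-fold magnified model run at one fifth of the velocity.
re_orig = reynolds(1000.0, 100.0, 0.2e-3, 1e-3)
re_model = reynolds(1000.0, 100.0 / 5, 1.0e-3, 1e-3)
print(re_orig, re_model)   # Reynolds numbers match

we_orig = weber(1000.0, 100.0, 0.2e-3, 0.03)
we_model = weber(1000.0, 100.0 / 5, 1.0e-3, 0.03)
print(we_orig, we_model)   # Weber numbers do NOT match with the same fluid
```

Because v²d does not scale the same way as vd, keeping both Re and We constant with one fluid is impossible; the interfacial tension (and viscosity/density ratios) must be adjusted as well, which is why the study uses matched material systems for each scale.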
APA, Harvard, Vancouver, ISO, and other styles
34

Hebbache, Loucif, Dariush Amirkhani, Mohand Saïd Allili, Nadir Hammouche, and Jean-François Lapointe. "Leveraging Saliency in Single-Stage Multi-Label Concrete Defect Detection Using Unmanned Aerial Vehicle Imagery." Remote Sensing 15, no. 5 (2023): 1218. http://dx.doi.org/10.3390/rs15051218.

Full text
Abstract:
Visual inspection of concrete structures using Unmanned Aerial Vehicle (UAV) imagery is a challenging task due to the variability of defects’ size and appearance. This paper proposes a high-performance model for automatic and fast detection of bridge concrete defects using UAV-acquired images. Our method, coined the Saliency-based Multi-label Defect Detector (SMDD-Net), combines pyramidal feature extraction and attention through a one-stage concrete defect detection model. The attention module extracts local and global saliency features, which are scaled and integrated with the pyramidal feature extraction module of the network using max-pooling, multiplication, and residual skip connection operations. This has the effect of enhancing the localisation of small and low-contrast defects, as well as the overall accuracy of detection at varying image acquisition ranges. Finally, a multi-label detection loss function is used to identify and localise overlapping defects. The experimental results on a standard dataset and real-world images demonstrated the performance of SMDD-Net with regard to state-of-the-art techniques. The accuracy and computational efficiency of SMDD-Net make it a suitable method for UAV-based bridge structure inspection.
APA, Harvard, Vancouver, ISO, and other styles
35

Wu, Wenbo, and Yun Pan. "Adaptive Modular Convolutional Neural Network for Image Recognition." Sensors 22, no. 15 (2022): 5488. http://dx.doi.org/10.3390/s22155488.

Full text
Abstract:
Image recognition has long been one of the research hotspots in computer vision tasks. The development of deep learning has been rapid in recent years, and convolutional neural networks usually need to be designed within fixed resources. If more resources are available, a model can be scaled up to achieve higher accuracy, as with VggNet, ResNet, GoogLeNet, etc. Although the accuracy of large-scale models has improved, the following problems occur as model scale expands: (1) possible over-fitting; (2) increasing model parameters; (3) slow model convergence. This paper proposes a design method for a modular convolutional neural network model which solves the problems of over-fitting and large model parameter counts by connecting multiple modules in parallel. Moreover, each module contains several submodules (three submodules in this paper) and fuses the features extracted from the submodules. Model convergence can be accelerated by using the fused features, which contain more image information. In this study, we add a gate unit based on the attention mechanism to the model, which aims to optimize the structure of the model (select the optimal number of modules), allowing the model to select an optimum network structure by learning and to dynamically reduce the FLOPs (floating-point operations) of the model. Compared to VggNet, ResNet, and GoogLeNet, the structure of the model proposed in this paper is simple and its parameter count is small. The proposed model achieves good results on the Kaggle datasets Cats-vs.-Dogs (99.3%), 10-Monkey Species (99.26%), and Birds-400 (99.13%).
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Jingzi, Hongyan Mao, and Hongwei Li. "FMFN: Fine-Grained Multimodal Fusion Networks for Fake News Detection." Applied Sciences 12, no. 3 (2022): 1093. http://dx.doi.org/10.3390/app12031093.

Full text
Abstract:
As one of the most popular social media platforms, microblogs are ideal places for news propagation. In microblogs, tweets with both text and images are more likely to attract attention than text-only tweets. This advantage is exploited by fake news producers to publish fake news, which has a devastating impact on individuals and society. Thus, multimodal fake news detection has attracted the attention of many researchers. For news with text and image, multimodal fake news detection utilizes both text and image information to determine the authenticity of news. Most of the existing methods for multimodal fake news detection obtain a joint representation by simply concatenating a vector representation of the text and a visual representation of the image, which ignores the dependencies between them. Although there are a small number of approaches that use the attention mechanism to fuse them, they are not fine-grained enough in feature fusion. The reason is that, for a given image, there are multiple visual features and certain correlations between these features. They do not use multiple feature vectors representing different visual features to fuse with textual features, and ignore the correlations, resulting in inadequate fusion of textual features and visual features. In this paper, we propose a novel fine-grained multimodal fusion network (FMFN) to fully fuse textual features and visual features for fake news detection. Scaled dot-product attention is utilized to fuse word embeddings of words in the text and multiple feature vectors representing different features of the image, which not only considers the correlations between different visual features but also better captures the dependencies between textual features and visual features. We conduct extensive experiments on a public Weibo dataset.
Our approach achieves competitive results compared with other methods for fusing visual representation and text representation, which demonstrates that the joint representation learned by the FMFN (which fuses multiple visual features and multiple textual features) is better than the joint representation obtained by fusing a visual representation and a text representation in determining fake news.
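Scaled dot-product attention, the fusion mechanism named in this abstract, has the standard form Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. The single-head NumPy sketch below is a generic illustration of that formula, not the FMFN implementation; the toy shapes (4 word embeddings attending over 6 visual feature vectors, dimension 16) are assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy fusion: textual features as queries, visual features as keys/values.
rng = np.random.default_rng(0)
text = rng.normal(size=(4, 16))     # 4 word embeddings
visual = rng.normal(size=(6, 16))   # 6 visual feature vectors
fused, attn = scaled_dot_product_attention(text, visual, visual)
print(fused.shape, attn.shape)      # (4, 16) (4, 6)
```

Each row of `attn` is a probability distribution over the visual features, so the fused representation of every word is a weighted mixture of all visual features rather than a plain concatenation.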
APA, Harvard, Vancouver, ISO, and other styles
37

Sharma, M. Dinesh, Kowshick K, Mokesh Prabhu S, Nitin Kumar S, and Pandiya Raj C. "PRINTED CIRCUIT BOARD DEFECT DETECTION USING MACHINE LEARNING." International Journal of Technical Research & Science 9, Spl (2024): 27–35. http://dx.doi.org/10.30780/specialissue-iset-2024/040.

Full text
Abstract:
As technology advances, printed circuit boards (PCBs) add more components and change their layout. One of the most important quality control procedures is PCB surface inspection, as small defects in signal traces can cause major damage to the system. Due to the disadvantages of manual scanning, great efforts have been made to automate scanning using high-resolution CCD or CMOS sensors. In traditional machine vision approaches, it is always difficult to determine pass/fail criteria based on small failure examples. the development of improved sensor technology. To solve this problem, we propose an advanced PCB inspection solution based on a jump-related convolutional autoencoder. A deep autoencoder model is trained to separate imperfect original images from defective ones. The location of defects is determined by comparing the decoded image with the input image. In the first production, we scaled the correct representation to improve the performance of training samples through a small and unbalanced database. The printed circuit board (PCB), which is the basic structure for electronic devices, is very important to the electronics industry. PCB quality and reliability must be ensured, but manual inspection methods are often labor-intensive and error-prone. This study presents a new machine learning (ML) method for PCB fault detection. To automatically detect defects, we use advanced ML models such as Convolutional Neural Networks (CNNs) on a large database of PCB images marked for defects. To provide reliable and accurate detection results, our research focuses on data preparation, feature extraction, model selection, and robust validation. The results show that ML-based PCB defect detection is effective, which enables better quality control. electronics manufacturing industry. The performance of our system has been carefully evaluated, showing good accuracy and efficiency in detecting defects using precise validation methods. 
This research is an important step towards automating PCB inspection, improving the reliability of electronic products, and reducing production costs. It also laid the groundwork for advances in quality control and defect prevention in electronics manufacturing. Added ML-based PCB defects These findings are expected to revolutionize quality assurance procedures as the electronics industry evolves and open the door to more efficient, error-free, and costeffective electronics manufacturing.
APA, Harvard, Vancouver, ISO, and other styles
38

Arase, Yuki, Takahiro Hara, Toshiaki Uemukai, and Shojiro Nishio. "Annotation and Auto-Scrolling for Web Page Overview in Mobile Web Browsing." International Journal of Handheld Computing Research 1, no. 4 (2010): 63–80. http://dx.doi.org/10.4018/jhcr.2010100104.

Full text
Abstract:
Due to advances in mobile phones, mobile Web browsing has become increasingly popular. In this regard, small screens and poor input capabilities of mobile phones prevent users from comfortably browsing Web pages that are designed for desktop PCs. One of the serious problems of mobile Web browsing is that users often get lost in a Web page and can only view a small portion of a Web page at a time, not able to grasp the entire page’s structure to decide which direction their information of interest is located. To solve this problem, an effective technique is to present an overview of the page. Although prior studies adopted the conventional style of overview, that is, a scaled-down image of the page, this is not sufficient because users cannot see details of the contents. Therefore, in this paper, the authors present annotations on a Web page that provides a functionality which automatically scrolls the page. Results of a user experiment show that annotations are informative for users who want to find contents from a large Web page.
APA, Harvard, Vancouver, ISO, and other styles
39

Jouir, Tasarinan, Reuben Strydom, Thomas M. Stace, and Mandyam V. Srinivasan. "Vision-only egomotion estimation in 6DOF using a sky compass." Robotica 36, no. 10 (2018): 1571–89. http://dx.doi.org/10.1017/s0263574718000577.

Full text
Abstract:
SUMMARYA novel pure-vision egomotion estimation algorithm is presented, with extensions to Unmanned Aerial Systems (UAS) navigation through visual odometry. Our proposed method computes egomotion in two stages using panoramic images segmented into sky and ground regions. Rotations (in 3DOF) are estimated by using a customised algorithm to measure the motion of the sky image, which is affected only by the rotation of the aircraft, and not by its translation. The rotation estimate is then used to derotate the optic flow field generated by the ground, from which the translation of the aircraft (in 3DOF) is estimated by another customised, iterative algorithm. Segmentation of the rotation and translation estimations allows for a partial relaxation of the planar ground assumption, inherently increasing the robustness of the approach. The translation vectors are scaled using stereo-based height to compute the current UAS position through path integration for closed-loop navigation. Outdoor field tests of our approach in a small quadrotor UAS suggest that the technique is comparable to the performance of existing state-of-the-art vision-based navigation algorithms, whilst also removing all dependence on additional sensors, such as an IMU or global positioning system (GPS).
APA, Harvard, Vancouver, ISO, and other styles
40

Jamil, Akhtar, Bulent Bayram, Turgay Kucuk, and Dursun Zafer Seker. "Spectral features based tea garden extraction from digital orthophoto maps." Proceedings of the ICA 1 (May 16, 2018): 1–7. http://dx.doi.org/10.5194/ica-proc-1-57-2018.

Full text
Abstract:
The advancements in the photogrammetry and remote sensing technologies has made it possible to extract useful tangible information from data which plays a pivotal role in various application such as management and monitoring of forests and agricultural lands etc. This study aimed to evaluate the effectiveness of spectral signatures for extraction of tea gardens from 1 : 5000 scaled digital orthophoto maps obtained from Rize city in Turkey. First, the normalized difference vegetation index (NDVI) was derived from the input images to suppress the non-vegetation areas. NDVI values less than zero were discarded and the output images was normalized in the range 0–255. Individual pixels were then mapped into meaningful objects using global region growing technique. The resulting image was filtered and smoothed to reduce the impact of noise. Furthermore, geometrical constraints were applied to remove small objects (less than 500 pixels) followed by morphological opening operator to enhance the results. These objects served as building blocks for further image analysis. Finally, for the classification stage, a range of spectral values were empirically calculated for each band and applied on candidate objects to extract tea gardens. For accuracy assessment, we employed an area based similarity metric by overlapping obtained tea garden boundaries with the manually digitized tea garden boundaries created by experts of photogrammetry. The overall accuracy of the proposed method scored 89 % for tea gardens from 10 sample orthophoto maps. We concluded that exploiting the spectral signatures using object based analysis is an effective technique for extraction of dominant tree species from digital orthophoto maps.
APA, Harvard, Vancouver, ISO, and other styles
41

Eugene, Fedorov, Utkina Tetyana, Nechyporenko Olga, and Korpan Yaroslav. "DEVELOPMENT OF TECHNIQUE FOR FACE DETECTION IN IMAGE BASED ON BINARIZATION, SCALING AND SEGMENTATION METHODS." Eastern-European Journal of Enterprise Technologies 1, no. 9 (103) (2020): 23–31. https://doi.org/10.15587/1729-4061.2020.195369.

Full text
Abstract:
A technique for face detection in<strong>&nbsp;</strong>the image is proposed, which is based on binarization, scaling, and segmentation of the image, followed by the determination of the largest connected component that matches the image of the face. Modern methods of binarization, scaling, and taxonomic image segmentation have one or more of the following disadvantages: they have a high computational complexity; require the determination of parameter values. Taxonomic image segmentation methods may have additional disadvantages: they do not allow noise and outliers selection; clusters can&rsquo;t have different shapes and sizes, and their number is fixed. Due to this, to improve the efficiency of face detection techniques, the methods of binarization, scaling and taxonomic segmentation needs to be improved. A binarization method is proposed, the distinction of which is the use of the image background. This allows to simplify the process of scaling and segmentation (since all the pixels in the background are represented by the same color), non-uniform brightness of the face, and not to use the threshold settings and additional parameters. A binary image scaling method is proposed, the distinction of which is the use of an arithmetic mean filter with threshold processing and fast wavelet transform. This allows to speed up the image segmentation process by about P<sup>2</sup>&nbsp;times, where P is the scaling parameter, and not to use the time-consuming procedure for determining. A binary scaled image segmentation method is proposed, the distinction of which is the use of density clustering. This allows to separate areas of the face of non-uniform brightness from the image background, noise and outliers. It also allows clusters to have different shapes and sizes, to not require setting the number of clusters and additional parameters. 
To determine the scaling parameter, numerous studies were conducted in this work, which concluded that the dependence of the segmentation time on the scaling parameter is close to exponential. It was also found that for small P, where P is the scaling parameter, the quality of face detection deteriorates slightly. The proposed technique for face detection in image based on binarization, scaling and segmentation can be used in intelligent computer systems for biometric identification of a person by the face image
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Yang, Nian Pan, and Ping Jiang. "Anisotropic Filtering Based on the WY Distribution and Multiscale Energy Concentration Accumulation Method for Dim and Small Target Enhancement." Remote Sensing 16, no. 16 (2024): 3069. http://dx.doi.org/10.3390/rs16163069.

Full text
Abstract:
In ground-based infrared optical remote sensing systems, the target signal is very weak due to the dynamic strong light background and the movement of dim and small targets. To improve the limit detection capability, background suppression and target enhancement methods are required to be more suitable for this scenario. To solve this problem, we first analyze the image features in the current scene and propose a more complete point target and noise model. Then, we propose a new WY distribution function based on the Fermi–Dirac distribution function and propose an anisotropic filtering method based on this function, which further suppresses the background through the difference results of two steps. Building on the distribution function, we further designed an energy concentration accumulation strategy in nine scaled directions, through which the SNR of the target is effectively improved, and the suppression ability of the background is enhanced. In this dynamic scenario, the method can still detect targets with an average minimum SNR of 0.76. Through quantitative and qualitative experimental analysis, the proposed method has better robustness against extremely weak targets and dynamic backgrounds compared to the same type of algorithms.
APA, Harvard, Vancouver, ISO, and other styles
43

Han, Dandan, Changhoon Park, Seonghyeon Oh, Howon Jung, and Jae W. Hahn. "Quantitative analysis and modeling of line edge roughness in near-field lithography: toward high pattern quality in nanofabrication." Nanophotonics 8, no. 5 (2019): 879–88. http://dx.doi.org/10.1515/nanoph-2019-0031.

Full text
Abstract:
AbstractQuantitative analysis of line edge roughness (LER) is very important for understanding the root causes of LER and thereby improving the pattern quality in near-field lithography (NFL), because LER has become the main limiter of critical dimension (CD) control as the feature size of nanostructures is scaled down. To address this challenge, the photoresist point-spread function of NFL with a contact plasmonic ridge nanoaperture can be employed to account for the physical and chemical effects involved in the LER-generation mechanism. Our theoretical and experimental results show that the sources of LER in NFL mainly come from the aerial image, material chemistry, and process. Importantly, the complicated decay characteristics of surface plasmon waves are demonstrated to be the main optical contributor. Because the evanescent mode of surface plasmon polaritons (SPPs) and quasi-spherical waves (QSWs) decay in the lateral direction, they can induce a small image log-slope and low photoresist contrast, leading to a large LER. We introduce an analytical model and demonstrate the relationship between LER and CD to estimate the pattern quality in NFL. We expect that these results can provide alternative approaches to further improve pattern uniformity and resolution, which can lead to advanced nanopatterning results in NFL.
APA, Harvard, Vancouver, ISO, and other styles
44

Strother, Stephen C., Jon R. Anderson, Kirt A. Schaper, et al. "Principal Component Analysis and the Scaled Subprofile Model Compared to Intersubject Averaging and Statistical Parametric Mapping: I. “Functional Connectivity” of the Human Motor System Studied with [15O]Water PET." Journal of Cerebral Blood Flow & Metabolism 15, no. 5 (1995): 738–53. http://dx.doi.org/10.1038/jcbfm.1995.94.

Full text
Abstract:
Using [15O]water PET and a previously well studied motor activation task, repetitive finger-to-thumb opposition, we compared the spatial activation patterns produced by (1) global normalization and intersubject averaging of paired-image subtractions, (2) the mean differences of ANCOVA-adjusted voxels in Statistical Parametric Mapping, (3) ANCOVA-adjusted voxels followed by principal component analysis (PCA), (4) ANCOVA-adjustment of mean image volumes (mean over subjects at each time point) followed by F-masking and PCA, and (5) PCA with Scaled Subprofile Model pre- and postprocessing. All data analysis techniques identified large positive focal activations in the contralateral sensorimotor cortex and ipsilateral cerebellar cortex, with varying levels of activation in other parts of the motor system, e.g., supplementary motor area, thalamus, putamen; techniques 1–4 also produced extensive negative areas. The activation signal of interest constitutes a very small fraction of the total nonrandom signal in the original dataset, and the exact choice of data preprocessing steps together with a particular analysis procedure have a significant impact on the identification and relative levels of activated regions. The challenge for the future is to identify those preprocessing algorithms and data analysis models that reproducibly optimize the identification and quantification of higher-order sensorimotor and cognitive responses.
APA, Harvard, Vancouver, ISO, and other styles
45

Shafqat, S., and J. P. M. Hoefnagels. "Cool, Dry, Nano-scale DIC Patterning of Delicate, Heterogeneous, Non-planar Specimens by Micro-mist Nebulization." Experimental Mechanics 61, no. 6 (2021): 917–37. http://dx.doi.org/10.1007/s11340-020-00686-2.

Full text
Abstract:
AbstractBackground: Application of patterns to enable high-resolution Digital Image Correlation (DIC) at the small scale (μm/nm) is known to be very challenging as techniques developed for the macro- and mesoscale, such as spray painting, cannot be scaled down directly. Moreover, existing nano-patterning techniques all rely on harsh processing steps, based on high temperature, chemicals, physical contact, liquids, and/or high vacuum, that can easily damage fragile, small-scale, free-standing and/or hygro-sensitive specimens, such as MEMS or biological samples. Objective: To present a straightforward, inexpensive technique specially designed for nano-patterning highly delicate specimens for high-resolution DIC. Methods: The technique consists in a well-controlled nebulized micro-mist, containing predominantly no more than one nanoparticle per mist droplet. The micro-mist is subsequently dried, resulting in a flow of individual nanoparticles that are deposited on the specimen surface at near-room temperature. By having single nanoparticles falling on the specimen surface, the notoriously challenging task of controlling nanoparticle-nanoparticle and nanoparticle-surface interactions as a result of the complex droplet drying dynamics, e.g., in drop-casting, is circumvented. Results: High-quality patterns are demonstrated for a number of challenging cases of physically and chemically sensitive specimens with nanoparticles from 1 μm down to 50 nm in diameter. It is shown that the pattern can easily be scaled within (and probably beyond) this range, which is of special interest for micromechanical testing using in-situ microscopic imaging techniques, such as high-magnification optical microscopy, optical profilometry, atomic force microscopy, and scanning electron microscopy, etc. 
Conclusions: Delicate specimens can conveniently be patterned at near-room temperature ($\sim $ ∼ 37 ∘C), without exposure to chemicals, physical contact or vacuum, while the pattern density and speckle size can be easily tuned.
APA, Harvard, Vancouver, ISO, and other styles
46

Jackson, Richard W., Edmund Harberd, Gary D. Lock, and James A. Scobie. "Investigation of Reverse Swing and Magnus Effect on a Cricket Ball Using Particle Image Velocimetry." Applied Sciences 10, no. 22 (2020): 7990. http://dx.doi.org/10.3390/app10227990.

Full text
Abstract:
Lateral movement from the principal trajectory, or “swing”, can be generated on a cricket ball when its seam, which sits proud of the surface, is angled to the flow. The boundary layer on the two hemispheres divided by the seam is governed by the Reynolds number and the surface roughness; the swing is fundamentally caused by the pressure differences associated with asymmetric flow separation. Skillful bowlers impart a small backspin to create gyroscopic inertia and stabilize the seam position in flight. Under certain flow conditions, the resultant pressure asymmetry can reverse across the hemispheres and “reverse swing” will occur. In this paper, particle image velocimetry measurements of a scaled cricket ball are presented to interrogate the flow field and the physical mechanism for reverse swing. The results show that a laminar separation bubble forms on the non-seam side (hemisphere), causing the separation angle for the boundary layer to be increased relative to that on the seam side. For the first time, it is shown that the separation bubble is present even under large rates of backspin, suggesting that this flow feature is present under match conditions. The Magnus effect on a rotating ball is also demonstrated, with the position of flow separation on the upper (retreating) side delayed due to the reduced relative speed between the surface and the freestream.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Yixing, Ziyan Mo, Zhuan Xin, Xianyu Chen, Yuqin Deng, and Xuan Dong. "EBBA-detector: An effective detector for defect detection in solar panel EL images with unbalanced data." PLOS One 20, no. 6 (2025): e0325676. https://doi.org/10.1371/journal.pone.0325676.

Full text
Abstract:
Solar panel defect detection, a crucial quality control task in the manufacturing process, often faces challenges such as varying defect sizes, severe image background interference, and imbalanced data sample distribution. To address these issues, this paper proposes the EBBA-Detector. The core of the model lies in an enhanced balanced attention framework, which includes an Enhanced Bidirectional Feature Pyramid Network (EBFPN) and a Balanced-Attention Module (B-A Module). The EBFPN captures defect features of different sizes, significantly improving the recognition ability for small defects, while the B-A Module suppresses background interference, guiding the model to focus more on defect locations. Additionally, this paper designs a Scaled Dynamic Focal Loss (SDFL) function, which enables the model to pay more attention to minority and hard-to-identify defect samples under imbalanced data distribution. Through experimental validation on a large-scale electroluminescence (EL) dataset, the proposed method has achieved significant improvements in detection performance, with a mean Average Precision (mAP) of 89.85%, outperforming other models in multiple defect category detections. Therefore, the EBBA-Detector not only effectively detects small target objects but also demonstrates good handling capabilities for large targets and imbalanced data, providing an efficient and accurate solution for solar panel defect detection.
APA, Harvard, Vancouver, ISO, and other styles
48

Valiyakath Vadakkan Habeeb, Nismath, and Kevin Chou. "Size Effects on Process-Induced Porosity in Ti6Al4V Thin Struts Additively Manufactured by Laser Powder-Bed Fusion." Journal of Manufacturing and Materials Processing 9, no. 7 (2025): 226. https://doi.org/10.3390/jmmp9070226.

Full text
Abstract:
Laser powder-bed fusion (L-PBF) additive manufacturing has been widely explored for fabricating intricate metallic parts such as lattice structures with thin struts. However, L-PBF-fabricated small parts (e.g., thin struts) exhibit different morphological and mechanical characteristics compared to bulk-sized parts due to distinct scan lengths, affecting the melt pool behavior between transient and quasi-steady states. This study investigates the keyhole porosity in Ti6Al4V thin struts fabricated by L-PBF, incorporating a range of strut sizes, along with various levels of linear energy densities. Micro-scaled computed tomography and image analysis were employed for porosity measurements and evaluations. Generally, keyhole porosity lessens with decreasing energy density, though with varying patterns across a higher energy density range. Keyhole porosity in struts predictably becomes severe at high laser powers and/or low scan speeds. However, a major finding reveals that the porosity is reduced with decreasing strut size (if less than 1.25 mm diameter), plausibly because the keyhole formed has not reached a stable state to produce pores in a permanent way. This implies that a higher linear energy density, greater than commonly formulated in making bulk components, could be utilized in making small-scale features to ensure not only full melting but also minimum keyhole porosity.
APA, Harvard, Vancouver, ISO, and other styles
49

Varam, Dara, Lujain Khalil, and Tamer Shanableh. "On-Edge Deployment of Vision Transformers for Medical Diagnostics Using the Kvasir-Capsule Dataset." Applied Sciences 14, no. 18 (2024): 8115. http://dx.doi.org/10.3390/app14188115.

Full text
Abstract:
This paper aims to explore the possibility of utilizing vision transformers (ViTs) for on-edge medical diagnostics by experimenting with the Kvasir-Capsule image classification dataset, a large-scale image dataset of gastrointestinal diseases. Quantization techniques made available through TensorFlow Lite (TFLite), including post-training float-16 (F16) quantization and quantization-aware training (QAT), are applied to achieve reductions in model size, without compromising performance. The seven ViT models selected for this study are EfficientFormerV2S2, EfficientViT_B0, EfficientViT_M4, MobileViT_V2_050, MobileViT_V2_100, MobileViT_V2_175, and RepViT_M11. Three metrics are considered when analyzing a model: (i) F1-score, (ii) model size, and (iii) performance-to-size ratio, where performance is the F1-score and size is the model size in megabytes (MB). In terms of F1-score, we show that MobileViT_V2_175 with F16 quantization outperforms all other models with an F1-score of 0.9534. On the other hand, MobileViT_V2_050 trained using QAT was scaled down to a model size of 1.70 MB, making it the smallest model amongst the variations this paper examined. MobileViT_V2_050 also achieved the highest performance-to-size ratio of 41.25. Despite preferring smaller models for latency and memory concerns, medical diagnostics cannot afford poor-performing models. We conclude that MobileViT_V2_175 with F16 quantization is our best-performing model, with a small size of 27.47 MB, providing a benchmark for lightweight models on the Kvasir-Capsule dataset.
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Dossary, Saleh, Jinsong Wang, and Yuchun E. Wang. "Combining multiseismic attributes with an extended octree quantization method." Interpretation 7, no. 2 (2019): SC11—SC19. http://dx.doi.org/10.1190/int-2018-0099.1.

Full text
Abstract:
Seismic interpreters and processors encounter ever-increasing volumes of seismic attributes in geophysical exploration each year. Multiattribute integration and classification improve the ability to identify geologic facies and reservoir properties, such as thickness, fluid type, fracture intensity, and orientation. Simple color mixing technology allows us to display three attributes simultaneously. To overcome this limit, we extend from three nodes to up to eight nodes octree color quantization originated from image processing of compressing colors to handle eight groups of attributes to form a single attribute. We can then apply the group reduction criterion for geophysical data classification to reveal common geologic targets while preserving the small variations or thin layers often present in hydrocarbon reservoirs. By combining multiple attributes, we hope to see all individual geologic features in the same image but also channels that might not be visible in any single attribute and to focus on major geobodies through group reduction classification on combined data. We first applied the method to a 2D section of poststack seismic data and well logs to test its validity, and then we further used it on scaled curvatures and other 3D seismic attributes to showcase the aforementioned benefits. Storage efficiency is a noted additional advantage of octree. However, the importance of selecting relevant attributes for octree application cannot be underestimated and requires the involvement of experienced seismic interpreters.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!