Academic literature on the topic 'REAL IMAGE PREDICTION'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'REAL IMAGE PREDICTION.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "REAL IMAGE PREDICTION"

1

Takezawa, Takuma, and Yukihiko Yamashita. "Wavelet Based Image Coding via Image Component Prediction Using Neural Networks." International Journal of Machine Learning and Computing 11, no. 2 (2021): 137–42. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1026.

Abstract:
In the process of wavelet-based image coding, performance can be enhanced by applying prediction. However, it is difficult to apply prediction from the decoded image to the 2D DWT used in JPEG2000, because the decoded pixels lie far from the pixels that should be predicted. Therefore, DWT coefficients rather than images have been predicted. To solve this problem, predictive coding is applied to the one-dimensional transform part of the 2D DWT. Zhou and Yamashita proposed using half-pixel line-segment matching for the prediction in wavelet-based image coding. In this research, convolutional neural networks are used as the predictor, estimating a pair of target pixels from the values of already-decoded pixels adjacent to the target row. This reduces redundancy, since only the error between the real value and its predicted value needs to be sent. We also show its advantage through experimental results.
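
As an aside, the core idea of coding only the prediction residual can be illustrated with a short sketch. This is not the authors' implementation: the context width of 8 decoded pixels and the network layout are assumptions chosen purely for illustration, using PyTorch.

```python
# Minimal sketch (not the paper's code): predict a pair of target pixels from
# an already-decoded 1-D context, then keep only the prediction error.
import torch
import torch.nn as nn

class PairPredictor(nn.Module):
    """Predicts a pair of target pixels from a 1-D context of decoded pixels."""
    def __init__(self, context_len: int = 8):   # context width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * context_len, 2),      # the pair of target pixels
        )

    def forward(self, context):                  # context: (batch, 1, context_len)
        return self.net(context)

predictor = PairPredictor()
context = torch.rand(4, 1, 8)     # already-decoded pixels adjacent to the target row
target = torch.rand(4, 2)         # the real pixel pair
residual = target - predictor(context)   # only this error would be encoded
```

Only the residual, which is typically smaller and more compressible than the raw pixel pair, would then be handed to the entropy coder.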
2

Hong, Yan, Li Niu, and Jianfu Zhang. "Shadow Generation for Composite Image in Real-World Scenes." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 914–22. http://dx.doi.org/10.1609/aaai.v36i1.19974.

Abstract:
Image composition aims at inserting a foreground object into a background image. Most previous image composition methods focus on adjusting the foreground to make it compatible with the background while ignoring the shadow effect of the foreground on the background. In this work, we focus on generating a plausible shadow for the foreground object in the composite image. First, we contribute a real-world shadow generation dataset, DESOBA, built by generating synthetic composite images based on paired real images and deshadowed images. Then, we propose a novel shadow generation network, SGRNet, which consists of a shadow mask prediction stage and a shadow filling stage. In the shadow mask prediction stage, foreground and background information are thoroughly interacted to generate the foreground shadow mask. In the shadow filling stage, shadow parameters are predicted to fill the shadow area. Extensive experiments on our DESOBA dataset and real composite images demonstrate the effectiveness of our proposed method. Our dataset and code are available at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBA.
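
To make the two-stage idea concrete, here is a minimal sketch of the second stage only (shadow filling), not SGRNet itself: it assumes a predicted shadow mask is already available and stands in for the predicted shadow parameters with a single hypothetical darkening factor.

```python
# Minimal sketch, not the authors' method: darken the predicted shadow region.
import numpy as np

def fill_shadow(composite: np.ndarray, shadow_mask: np.ndarray, darken: float = 0.5) -> np.ndarray:
    """Darken the predicted shadow region of a composite image.

    composite:   H x W x 3 float image in [0, 1]
    shadow_mask: H x W float mask in [0, 1], 1 inside the predicted shadow
    darken:      hypothetical stand-in for the predicted shadow parameters
    """
    mask = shadow_mask[..., None]                      # broadcast over the colour channels
    return composite * (1.0 - mask) + composite * darken * mask

# Toy usage: darken a small square of a white image
image = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
shadowed = fill_shadow(image, mask)
```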
3

Sather, A. P., S. D. M. Jones, and D. R. C. Bailey. "Real-time ultrasound image analysis for the estimation of carcass yield and pork quality." Canadian Journal of Animal Science 76, no. 1 (1996): 55–62. http://dx.doi.org/10.4141/cjas96-008.

Abstract:
Average backfat thickness measurements (liveweight of 92.5 kg) were made on 276 pigs using the Krautkramer USK7 ultrasonic machine. Immediately preceding and 1 h after slaughter, real-time ultrasonic images were made between the 3rd and 4th last ribs with the Tokyo Keiki LS-1000 (n = 149) and/or CS-3000 (n = 240) machines. Image analysis software was used to measure fat thickness (FT), muscle depth (MD) and area (MA), as well as to score the number of objects, object area and percentage object area of the loin to be used for predicting meat quality. Carcasses were also graded by the Hennessy Grading Probe (HGP). Prediction equations for lean in the primal cuts based on USK7 and LS-1000 animal fat measurements had R2-values (residual standard deviations, RSD) of 0.62 (27.0) and 0.66 (25.7). Adding MD or MA to LS-1000 FT measurements increased the R2-values to 0.68 and 0.66. Prediction equations using animal fat measurements made by the USK7 and CS-3000 had R2-values (RSD) of 0.66 (26.5) and 0.76 (22.4). Adding MD or MA to CS-3000 FT measurements made no further improvement in the R2-values. Estimation of commercial lean yield from carcass FT and MD measurements made by the HGP and LS-1000 had R2-values (RSD) of 0.58 (1.72) and 0.65 (1.56). Adding MA to LS-1000 measurements made no further improvement in the R2-values. Prediction equations based on carcass FT and MD measurements made by the HGP and CS-3000 had R2-values (RSD) of 0.68 (1.65) and 0.72 (1.54). Adding MA to CS-3000 measurements made no further improvement in the prediction equation. It was concluded that RTU has most value for predicting carcass lean content, and that further improvements in precision will come from more accurate FT measurements from RTU images made by image analysis software. Correlations of the number of objects, object area and percent object area from RTU images with intramuscular fat or marbling score made on the live pig or carcass were low, and these measures presently do not appear suitable for predicting intramuscular fat. Key words: Carcass composition, meat quality, marbling, intramuscular fat, sex, pigs
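
The prediction equations referred to above are ordinary regressions reported with an R2 and a residual standard deviation (RSD). A minimal sketch of that evaluation, on made-up placeholder data rather than the study's measurements, could look like this:

```python
# Minimal sketch: least-squares prediction equation for lean yield from fat
# thickness (FT) and muscle depth (MD), reporting R^2 and RSD.
# The data below are random placeholders, not the study's measurements.
import numpy as np

rng = np.random.default_rng(0)
ft = rng.uniform(10, 30, 50)            # fat thickness, mm (placeholder)
md = rng.uniform(40, 70, 50)            # muscle depth, mm (placeholder)
lean = 60 - 0.8 * ft + 0.1 * md + rng.normal(0, 1.5, 50)   # synthetic lean yield

X = np.column_stack([np.ones_like(ft), ft, md])   # intercept + two predictors
coef, *_ = np.linalg.lstsq(X, lean, rcond=None)
pred = X @ coef
resid = lean - pred

r2 = 1 - resid.var() / lean.var()
rsd = np.sqrt(resid @ resid / (len(lean) - X.shape[1]))    # residual standard deviation
print(f"R^2 = {r2:.2f}, RSD = {rsd:.2f}")
```

Adding a further predictor (e.g., muscle area) and re-fitting is how one checks whether it improves R2, as the abstract does for MA.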
4

Tham, Hwee Sheng, Razaidi Hussin, and Rizalafande Che Ismail. "A Real-Time Distance Prediction via Deep Learning and Microsoft Kinect." IOP Conference Series: Earth and Environmental Science 1064, no. 1 (2022): 012048. http://dx.doi.org/10.1088/1755-1315/1064/1/012048.

Abstract:
3D (three-dimensional) understanding has become the herald of computer vision and graphics research in the era of technology. It benefits many applications such as autonomous cars, robotics, and medical image processing. Compared with 2D detection, 3D detection brings convenience to the human community. 3D detection combines RGB (red, green and blue) colour images with depth images, which perform better than 2D in real environments. Current technology relies on costly light detection and ranging (LiDAR); however, the Microsoft Kinect has gradually been replacing LiDAR systems for 3D detection. In this project, a Kinect camera is used to extract depth-image information. From the depth images, the distance can be determined easily. In the colour scale, red is the nearest and blue is the farthest, and the depth image turns black when the limit of the Kinect camera's measuring range is reached. The collected depth information is then used to train a deep learning architecture to perform real-time distance prediction.
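
The depth-to-distance reading and colour-scale behaviour described above can be sketched in a few lines. The working-range constants and the random frame below are assumptions for illustration; this is not the project's code.

```python
# Minimal sketch: colourize a Kinect-style depth frame (red = near, blue = far,
# black = out of range) and read the distance at the image centre.
import numpy as np

DEPTH_MIN_MM, DEPTH_MAX_MM = 500, 4500        # assumed working range, millimetres

def colorize_depth(depth_mm: np.ndarray) -> np.ndarray:
    """Map a depth frame (mm) to an RGB image."""
    valid = (depth_mm >= DEPTH_MIN_MM) & (depth_mm <= DEPTH_MAX_MM)
    t = np.clip((depth_mm - DEPTH_MIN_MM) / (DEPTH_MAX_MM - DEPTH_MIN_MM), 0, 1)
    rgb = np.zeros(depth_mm.shape + (3,))
    rgb[..., 0] = (1 - t) * valid             # red channel: nearest
    rgb[..., 2] = t * valid                   # blue channel: farthest; invalid stays black
    return rgb

depth = np.random.randint(0, 6000, (480, 640)).astype(float)   # placeholder frame
rgb = colorize_depth(depth)
center_mm = depth[240, 320]
print(f"distance at image centre: {center_mm / 1000:.2f} m")
```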
5

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain in which deep convolutional neural networks have become the dominant approach. These networks manage to achieve remarkable performance in terms of prediction accuracy, but they are considered black-box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning behind their predictions. There is a variety of real-world tasks, such as medical applications, in which interpretability and explainability play a significant role. When making decisions on critical issues such as cancer prediction, using black-box models that achieve high prediction accuracy but provide no explanation for their predictions cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, defining and validating the quality of a prediction model's explanation is in general extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework for image classification problems is proposed, able to produce high-quality explanations. For this task, a feature extraction and explanation extraction framework is developed, along with three basic general conditions which validate the quality of any model's prediction explanation for any application domain. The feature extraction framework extracts and creates transparent and meaningful high-level features for images, while the explanation extraction framework is responsible for creating good explanations relying on these extracted features and the prediction model's inner function with respect to the proposed conditions. As a case study application, brain tumor magnetic resonance images were utilized for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model, since it managed to achieve sufficient prediction accuracy while also being interpretable and explainable in simple human terms.
6

Snider, Eric J., Sofia I. Hernandez-Torres, and Ryan Hennessey. "Using Ultrasound Image Augmentation and Ensemble Predictions to Prevent Machine-Learning Model Overfitting." Diagnostics 13, no. 3 (2023): 417. http://dx.doi.org/10.3390/diagnostics13030417.

Abstract:
Deep learning predictive models have the potential to simplify and automate medical imaging diagnostics by lowering the skill threshold for image interpretation. However, this requires predictive models that are generalized to handle subject variability as seen clinically. Here, we highlight methods to improve the test accuracy of an image classifier model for shrapnel identification using tissue phantom image sets. Using a previously developed image classifier neural network, termed ShrapML, blind test accuracy was less than 70% and was variable depending on the training/test data setup, as determined by a leave-one-subject-out (LOSO) holdout methodology. Introduction of affine transformations for image augmentation or MixUp methodologies to generate additional training sets improved model performance, and overall accuracy improved to 75%. Further improvements were made by aggregating predictions across five LOSO holdouts. This was done by bagging confidences or predictions from all LOSOs or from the top-3 LOSO confidence models for each image prediction. Top-3 LOSO confidence bagging performed best, with test accuracy improved to greater than 85% for two different blind tissue phantoms. This was confirmed by gradient-weighted class activation mapping, which highlighted that the image classifier was tracking shrapnel in the image sets. Overall, data augmentation and ensemble prediction approaches were suitable for creating more generalized predictive models for ultrasound image analysis, a critical step for real-time diagnostic deployment.
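
The top-3 LOSO confidence bagging step can be sketched as follows. This is not ShrapML itself; the two-class setup and the random model outputs are placeholders chosen only to show the aggregation logic.

```python
# Minimal sketch: for one image, keep the 3 most confident of 5 LOSO models,
# average their softmax confidences, and take the argmax class.
import numpy as np

def top3_confidence_bagging(confidences: np.ndarray) -> int:
    """confidences: (5, n_classes) softmax outputs, one row per LOSO model."""
    peak = confidences.max(axis=1)                # each model's top confidence
    top3 = confidences[np.argsort(peak)[-3:]]     # keep the 3 most confident models
    return int(top3.mean(axis=0).argmax())        # average confidences, then pick a class

loso_outputs = np.random.dirichlet(np.ones(2), size=5)   # 5 models, 2 classes (placeholder)
print("ensemble prediction:", top3_confidence_bagging(loso_outputs))
```

Averaging confidences rather than hard votes corresponds to the confidence-bagging variant the abstract reports as performing best.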
7

Froning, Dieter, Eugen Hoppe, and Ralf Peters. "The Applicability of Machine Learning Methods to the Characterization of Fibrous Gas Diffusion Layers." Applied Sciences 13, no. 12 (2023): 6981. http://dx.doi.org/10.3390/app13126981.

Abstract:
Porous materials can be characterized by well-trained neural networks. In this study, neural networks for fibrous paper-type gas diffusion layers were trained with artificial data created by a stochastic geometry model. The features of the data were calculated by means of transport simulations using the Lattice–Boltzmann method based on stochastic micro-structures. A convolutional neural network was developed that can predict the permeability and tortuosity of the material, both through-plane and in-plane. The characteristics of real data, both uncompressed and compressed, were predicted. The data were represented by reconstructed images of different sizes and image resolutions. Image artifacts are also a source of potential errors in the prediction. The Kozeny–Carman trend was used to evaluate the prediction of permeability and tortuosity of compressed real data. Using this method, it was possible to decide whether the predictions on compressed data were appropriate.
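
As a rough sketch of the kind of network described, the following regresses two outputs (permeability and tortuosity) from a grayscale micro-structure image. The 64x64 input size and layer sizes are assumptions, not the study's architecture.

```python
# Minimal sketch (not the study's model): a small CNN with two regression heads.
import torch
import torch.nn as nn

class PropertyRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),               # [permeability, tortuosity]
        )

    def forward(self, x):                   # x: (batch, 1, 64, 64)
        return self.head(self.features(x))

model = PropertyRegressor()
images = torch.rand(4, 1, 64, 64)           # placeholder micro-structure images
permeability, tortuosity = model(images).unbind(dim=1)
```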
8

Moskolaï, Waytehad Rose, Wahabou Abdou, Albert Dipanda, and Kolyang. "Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review." Remote Sensing 13, no. 23 (2021): 4822. http://dx.doi.org/10.3390/rs13234822.

Abstract:
Satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to use not only spatial information but also the temporal dimension of the data, which is used for multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series for predictions. However, these methods have limitations in some situations, thus deep learning (DL) techniques have been introduced to achieve the best performance. Reviews of machine learning and DL methods for time series prediction problems have been conducted in previous studies. However, to the best of our knowledge, none of these surveys have addressed the specific case of works using DL techniques and satellite images as datasets for predictions. Therefore, this paper concentrates on the DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories, namely recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptron). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented in this article. These applications include weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and proposed workable solutions related to the use of DL for SITS prediction are also highlighted.
9

Rajesh, E., Shajahan Basheer, Rajesh Kumar Dhanaraj, et al. "Machine Learning for Online Automatic Prediction of Common Disease Attributes Using Never-Ending Image Learner." Diagnostics 13, no. 1 (2022): 95. http://dx.doi.org/10.3390/diagnostics13010095.

Abstract:
The rapid increase in Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, getting medical assistance or healthcare advice online is easier to understand than getting it in person. For mild symptoms, people frequently feel reluctant to visit the hospital or a doctor; instead, they express their questions on numerous healthcare forums. However, predictions may not always be accurate, and there is no assurance that users will always receive a reply to their posts. In addition, some posts are made up, which can misdirect the patient. To address these issues, automatic online prediction (OAP) is proposed. OAP clarifies the idea of employing machine learning to predict the common attributes of disease using Never-Ending Image Learner with an intelligent analysis of disease factors. Never-Ending Image Learner predicts disease factors by selecting from finite data images with minimum structural risk and efficiently predicting efficient real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with the machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. Using a Never-Ending Image Learner based on Machine Learning, common disease attributes may be predicted online automatically. This method has deeper storage of images, and their data are stored per the isotropic positioning. The proposed method was compared with existing approaches, such as Multiple-Instance Learning for automated image indexing and hyper-spectrum image classification. Regarding the machine learning of multiple images with the application of isotropic positioning, the operating efficiency is improved, and the results are predicted with better accuracy. In this paper, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this survey, the proposed method is shown to achieve higher accuracy, proving its efficiency compared to the existing methods.
10

Bhimte, Sumit, Hrishikesh Hasabnis, Rohit Shirsath, Saurabh Sonar, and Mahendra Salunke. "Severity Prediction System for Real Time Pothole Detection." Journal of University of Shanghai for Science and Technology 23, no. 07 (2021): 1328–34. http://dx.doi.org/10.51201/jusst/21/07356.

Abstract:
Pothole detection using image processing or using an accelerometer is not new, but there is no real-time application that utilizes both techniques to provide an efficient solution. We present a system that helps drivers determine the severity of a pothole using both an image processing technique and an accelerometer-based algorithm. The challenge in building this system was to efficiently detect potholes in roads, analyze their severity, and provide users with information such as road quality and the best possible route. We used various algorithms for frequency-based pothole detection, compared the results, and selected the approach best suited to the project goals. For image processing, we used a simple differentiation-based edge detection algorithm. The system was built on map interfaces for Android devices using Android Studio, uses Python frameworks for the image processing algorithm, and is backed by a powerful DBMS. The project combines efficient technology tools to provide a good user experience, real-time operation, reliability, and improved efficiency.
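
A simple differentiation-based edge detector of the kind mentioned above can be sketched with finite differences; the threshold and the random placeholder frame below are illustrative assumptions, not the project's code.

```python
# Minimal sketch: finite-difference (differentiation-based) edge detection,
# producing a binary edge map a pothole detector could analyse further.
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """gray: H x W array of intensities in [0, 255]."""
    dx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))   # horizontal derivative
    dy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))   # vertical derivative
    magnitude = np.hypot(dx, dy)                               # gradient magnitude
    return magnitude > threshold

frame = np.random.randint(0, 256, (120, 160)).astype(float)   # placeholder road frame
edges = edge_map(frame)
print("edge pixels:", int(edges.sum()))
```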