Journal articles on the topic 'Accumulated histogram'

1

Han, Yu, Ling Luo, Bin Xie, and Chen Xu. "Nonparametric histogram segmentation-based automatic detection of yarns." Textile Research Journal 90, no. 11-12 (2019): 1326–41. http://dx.doi.org/10.1177/0040517519890212.

Abstract:
Detection of yarns in fabric images is a basic task in real-time monitoring of fabric production processes, since it relates to yarn density and fabric structure estimation. In this paper, a new detection method is proposed that can automatically and efficiently estimate the locations as well as the numbers of both weft and warp yarns in fabric images. The method has three sequential phases. First, the modulus of discrete partial derivatives at each pixel is projected onto the weft and warp directions to generate the accumulated histograms. Second, for each histogram, a monotone hypothesis of a nonparametric statistical approach is applied to segment the histogram. Third, according to the segmentation result, the locations of each weft and warp yarn are adaptively determined, while the fabric structure is also obtained. Numerical results demonstrate that, compared with classical yarn detection methods, which are based on image smoothing, the proposed method can not only estimate yarn locations and fabric structures more accurately but also reduce the influence of yarn hairiness.
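The first phase above, projecting gradient-magnitude moduli onto the weft and warp directions to form accumulated histograms, can be sketched roughly as follows (a minimal illustration under our own assumptions; function and variable names are ours, not the paper's):

```python
import numpy as np

def accumulated_histograms(img):
    """Project the gradient-magnitude modulus at each pixel onto the
    weft (row) and warp (column) directions to form two accumulated
    histograms -- a sketch of the paper's first phase."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    warp_hist = mag.sum(axis=0)   # one bin per column
    weft_hist = mag.sum(axis=1)   # one bin per row
    return weft_hist, warp_hist

# toy fabric: a vertical stripe every 4 pixels mimics warp yarns
img = np.zeros((16, 16))
img[:, ::4] = 1.0
weft_h, warp_h = accumulated_histograms(img)
```

Peaks in `warp_hist` then mark candidate warp-yarn columns, which the segmentation phase would locate.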
2

Bright, David S. "Software Tools for Examination of Microanalytical Images." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 2 (1990): 116–17. http://dx.doi.org/10.1017/s042482010013417x.

Abstract:
Image processing for enhancement and interpretation is a powerful tool for microscopy and microanalysis. Digital images are arrays of picture elements, or pixels, each having a coordinate (location) and a value. Two-dimensional arrays with single-intensity-value (monochrome) or triple-intensity-value (color) pixels are in common use. Software tools and techniques are now available for desktop computers at reasonable cost that allow visualization of higher-dimensional arrays and multivalued pixels. The following examples illustrate the application of these tools to microanalysis. Short image sequences (movies) are useful for showing dynamic effects such as the drift of an electron microscope stage with time or the interior of a sample eroded by sputtering on an ion microscope. The values of the pixels of registered images or x-ray maps can be accumulated in a multidimensional histogram (Concentration Histogram Image, or CHI). The number of registered maps determines the dimensionality of the histogram.
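The Concentration Histogram Image idea, accumulating pixel-value pairs from registered maps into a multidimensional histogram, can be sketched for the two-map case (our own minimal illustration; names and the bin layout are assumptions):

```python
import numpy as np

def concentration_histogram_image(map_a, map_b, bins=16):
    """Accumulate pixel-value pairs from two registered x-ray maps into
    a 2-D histogram (a two-map Concentration Histogram Image, CHI).
    Real CHIs may use more maps and hence higher dimensions."""
    chi, _, _ = np.histogram2d(map_a.ravel(), map_b.ravel(),
                               bins=bins, range=[[0, 256], [0, 256]])
    return chi

rng = np.random.default_rng(0)
map_a = rng.integers(0, 256, size=(64, 64))   # stand-in elemental map A
map_b = rng.integers(0, 256, size=(64, 64))   # stand-in elemental map B
chi = concentration_histogram_image(map_a, map_b)
```

Clusters in `chi` correspond to phases whose two elemental concentrations co-occur.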
3

Srikote, Gatesakda, and Anupap Meesomboon. "Face Recognition Performance Improvement Using Derivative of Accumulated Absolute Difference Based on Probabilistic Histogram." Procedia Computer Science 86 (2016): 265–68. http://dx.doi.org/10.1016/j.procs.2016.05.052.

4

Zhu, Nan, Junge Shen, and Xiaotong Niu. "Double JPEG Compression Detection Based on Noise-Free DCT Coefficients Mixture Histogram Model." Symmetry 11, no. 9 (2019): 1119. http://dx.doi.org/10.3390/sym11091119.

Abstract:
With the wide use of various image altering tools, digital image manipulation becomes very convenient and easy, which makes the detection of image originality and authenticity significant. Among various image tampering detection tools, the double JPEG image compression detector, which is not sensitive to any specific image tampering operation, has received considerable attention. In this paper, we propose an improved double JPEG compression detection method based on a noise-free DCT (Discrete Cosine Transform) coefficients mixture histogram model. Specifically, we first extract the block-wise DCT coefficients histogram and eliminate the quantization noise introduced by rounding and truncation operations. Then, for each DCT frequency, a posterior probability can be obtained by solving the DCT coefficients mixture histogram with a simplified model. Finally, the probabilities from all the DCT frequencies are accumulated to give the posterior probability of a DCT block being authentic or tampered. Extensive experimental results in both quantitative and qualitative terms prove the superiority of our proposed method when compared with the state-of-the-art methods.
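The first step, collecting a block-wise DCT coefficient histogram, can be sketched as follows (a simplified illustration without the quantization-noise removal; the orthonormal DCT construction, names, and bin range are our assumptions):

```python
import numpy as np

def dct2_ortho(block):
    """Orthonormal 2-D DCT-II of an 8x8 block, built from the 1-D basis."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def blockwise_dct_histogram(img, freq=(0, 1), coeff_range=(-128, 128)):
    """Histogram of one DCT frequency over all 8x8 blocks -- the raw
    material of the mixture-histogram model."""
    h, w = img.shape
    coeffs = []
    for i in range(0, h - 7, 8):
        for j in range(0, w - 7, 8):
            c = dct2_ortho(img[i:i+8, j:j+8].astype(float) - 128.0)
            coeffs.append(int(round(c[freq])))
    edges = np.arange(coeff_range[0], coeff_range[1] + 2)
    hist, _ = np.histogram(coeffs, bins=edges)
    return hist

img = (np.add.outer(np.arange(32), np.arange(32)) * 4 % 256).astype(np.uint8)
hist = blockwise_dct_histogram(img)
```

A singly compressed image yields a smooth coefficient histogram, whereas double compression introduces the periodic artifacts the mixture model exploits.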
5

Liu, Xiao Lei. "Dance Movement Recognition Based on Multimodal Environmental Monitoring Data." Journal of Environmental and Public Health 2022 (July 19, 2022): 1–8. http://dx.doi.org/10.1155/2022/1568930.

Abstract:
Fine motion recognition is a challenging topic in computer vision and has been a trendy research direction in recent years. This study combines motion recognition technology with dance movements, addresses problems such as the high complexity of dance movements, and fully considers the human body’s self-occlusion. Motion recognition in the dance field was studied and analyzed. An effective feature extraction method was proposed for the dance video dataset, using segmented video and an accumulated edge feature operation. By extracting directional gradient histogram features, a set of directional gradient histogram feature vectors is used to characterize the shape features of the dance video movements. A dance movement recognition method is adopted based on the fusion of the directional gradient histogram feature, the optical flow direction histogram feature, and the audio signature feature. The three components are combined for dance movement recognition by a multiple kernel learning method. Experimental results show that the accumulated edge feature algorithm proposed in this study outperforms traditional models in the recognition results of HOG features extracted from images. After adding edge features, the description of the dance movement shape is more effective. The algorithm can guarantee a specific recognition rate for complex dance movements. The results also verify the effectiveness of the movement recognition algorithm in this study for dance movement recognition.
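The accumulated edge feature operation, summing per-frame edge maps of a segment into one image before HOG extraction, might look roughly like this (a sketch using thresholded gradient magnitude as a stand-in edge detector; the paper's actual detector may differ):

```python
import numpy as np

def accumulated_edge_image(frames, thresh=0.4):
    """Accumulate the edge maps of every frame of a video segment into
    one image, the pre-step before HOG extraction.  Edges are sketched
    here as thresholded gradient magnitude."""
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        gy, gx = np.gradient(f.astype(float))
        acc += (np.hypot(gx, gy) > thresh).astype(float)
    return np.clip(acc, 0.0, 1.0)   # union of the per-frame edge maps

# a bright vertical line moving one column per frame
frames = []
for k in range(3):
    f = np.zeros((8, 8))
    f[:, k + 2] = 1.0
    frames.append(f)
acc = accumulated_edge_image(frames)
```

The accumulated image traces the whole sweep of the moving edge, which is what makes the subsequent HOG description of the movement's shape effective.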
6

Lee, Byunguk, Wonho Kim, and Seunghwan Lee. "An Extended Vector Polar Histogram Method Using Omni-Directional LiDAR Information." Symmetry 15, no. 8 (2023): 1545. http://dx.doi.org/10.3390/sym15081545.

Abstract:
This study presents an extended vector polar histogram (EVPH) method for efficient robot navigation using omni-directional LiDAR data. Although the conventional vector polar histogram (VPH) method is a powerful technique suitable for LiDAR sensors, its sensing range is limited to a semicircle by the single LiDAR sensor. To address this limitation, the EVPH method incorporates data from multiple LiDAR sensors for omni-directional sensing. First, in the EVPH method, the LiDAR sensor coordinate systems are transformed directly into the robot coordinate system to obtain an omni-directional polar histogram. Several techniques, such as minimum-value selection and linear interpolation, are employed in this process to generate a uniform omni-directional polar histogram. The resulting histogram is modified to represent the robot as a single point. Subsequently, consecutive points in the histogram are grouped to construct a symbol function for excluding concave blocks and a threshold function for safety. These functions are combined to determine the maximum cost value that generates the robot’s next heading angle. Backward robot motion is made feasible based on the determined heading angle, enabling the calculation of the velocity vector for time-efficient and collision-free navigation. To assess the efficacy of the proposed EVPH method, experiments were carried out in two environments where humans and obstacles coexist. The results showed that, compared to the conventional method, the robot traveled safely and efficiently with the EVPH method in terms of the accumulated amount of rotation, total traveling distance, and time. In the future, our plan is to enhance the robustness of the proposed method in congested environments by integrating parameter adaptation and dynamic object estimation methods.
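The minimum-value selection step for fusing several sensors' polar histograms (already transformed into the robot frame) can be sketched as follows (a minimal illustration; the treatment of missing returns is our own assumption):

```python
import numpy as np

def merge_polar_histograms(scans):
    """Fuse several LiDAR polar histograms (one range value per bearing,
    expressed in the robot frame) into one omni-directional histogram by
    minimum-value selection: the nearest return per bearing wins.
    The EVPH method additionally interpolates gaps."""
    stacked = np.vstack(scans)
    return np.nanmin(stacked, axis=0)

front = np.array([2.0, 1.5, np.nan, 3.0])   # nan = no return at that bearing
rear  = np.array([np.nan, 2.5, 1.0, 2.0])
omni = merge_polar_histograms([front, rear])
```

Per bearing, the fused histogram keeps the closest obstacle seen by any sensor, the conservative choice for collision avoidance.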
7

Xu, Kuan-Man. "Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms." Monthly Weather Review 134, no. 5 (2006): 1442–53. http://dx.doi.org/10.1175/mwr3133.1.

Abstract:
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study: the Euclidean distance, the Jeffries–Matusita distance, and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called “cloud objects.” Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object, and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
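The core procedure, measuring the Euclidean distance between normalized summary histograms and estimating a significance level by resampling the individual histograms, can be sketched as follows (a sketch in which permutation-style resampling stands in for the paper's bootstrap; all names are ours):

```python
import numpy as np

def histogram_distance_test(hists_a, hists_b, n_resamples=2000, seed=0):
    """Significance of the Euclidean distance between two summary
    histograms (sums of individual histograms), estimated by randomly
    reassigning the individual histograms to the two groups."""
    rng = np.random.default_rng(seed)
    hists_a, hists_b = np.asarray(hists_a, float), np.asarray(hists_b, float)

    def summary(h):                       # summary histogram, unit mass
        s = h.sum(axis=0)
        return s / s.sum()

    observed = np.linalg.norm(summary(hists_a) - summary(hists_b))
    pooled = np.vstack([hists_a, hists_b])
    n_a = len(hists_a)
    count = 0
    for _ in range(n_resamples):
        idx = rng.permutation(len(pooled))
        d = np.linalg.norm(summary(pooled[idx[:n_a]]) -
                           summary(pooled[idx[n_a:]]))
        if d >= observed:
            count += 1
    return count / n_resamples            # p-value

rng = np.random.default_rng(1)
same_a = rng.poisson(10, size=(30, 8))    # 30 individual 8-bin histograms
same_b = rng.poisson(10, size=(30, 8))    # same generating distribution
diff_b = rng.poisson([5, 5, 5, 5, 15, 15, 15, 15], size=(30, 8))
p_same = histogram_distance_test(same_a, same_b)
p_diff = histogram_distance_test(same_a, diff_b)
```

Histograms drawn from the same distribution should give a large p-value, while a shifted bin-mass pattern should be flagged as significant.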
8

Myung, Jinbok, Kwang-Ho Kim, Jeong-sik Park, Myoung-Wan Koo, and Ji-Hwan Kim. "Two-pass search strategy using accumulated band energy histogram for HMM-based identification of perceptually identical music." International Journal of Imaging Systems and Technology 23, no. 2 (2013): 127–32. http://dx.doi.org/10.1002/ima.22043.

9

Hadi, Israa, and Mustafa Sabah. "Upgrade Video Tracking Technique Using Enhanced Hybrid Cat Swarm Optimization Based on Multi Target Model and Accumulated Histogram." Journal of Computational and Theoretical Nanoscience 12, no. 11 (2015): 4017–27. http://dx.doi.org/10.1166/jctn.2015.4313.

10

Yoon, Rina, Seokjin Oh, Seungmyeong Cho, and Kyeong-Sik Min. "Memristor–CMOS Hybrid Circuits Implementing Event-Driven Neural Networks for Dynamic Vision Sensor Camera." Micromachines 15, no. 4 (2024): 426. http://dx.doi.org/10.3390/mi15040426.

Abstract:
For processing streaming events from a Dynamic Vision Sensor camera, two types of neural networks can be considered. One is spiking neural networks, where simple spike-based computation is suitable for low-power consumption, but the discontinuity in spikes can make the training complicated in terms of hardware. The other is digital Complementary Metal Oxide Semiconductor (CMOS)-based neural networks that can be trained directly using the normal backpropagation algorithm. However, the hardware and energy overhead can be significantly large, because all streaming events must be accumulated and converted into histogram data, which requires a large amount of memory such as SRAM. In this paper, to combine the spike-based operation with the normal backpropagation algorithm, memristor–CMOS hybrid circuits are proposed for implementing event-driven neural networks in hardware. The proposed hybrid circuits are composed of input neurons, synaptic crossbars, hidden/output neurons, and a neural network’s controller. First, the input neurons perform preprocessing of the DVS camera’s events. The events are converted to histogram data using very simple memristor-based latches in the input neurons. After preprocessing the events, the converted histogram data are delivered to an ANN implemented using synaptic memristor crossbars. The memristor crossbars can perform low-power Multiply–Accumulate (MAC) calculations according to the memristor’s current–voltage relationship. The hidden and output neurons convert the crossbar’s column currents to output voltages according to the Rectified Linear Unit (ReLU) activation function. The neural network’s controller adjusts the MAC calculation frequency according to the workload of the event computation. Moreover, the controller can disable the MAC calculation clock automatically to minimize unnecessary power consumption.
The proposed hybrid circuits have been verified by circuit simulation for several event-based datasets such as POKER-DVS and MNIST-DVS. The circuit simulation results indicate that the performance of the neural network proposed in this paper is degraded by as little as 0.5% while saving as much as 79% in power consumption for POKER-DVS. The recognition rate of the proposed scheme is lower by 0.75% compared to the conventional one for the MNIST-DVS dataset. In spite of this small loss, the power consumption can be reduced by as much as 75% for the proposed scheme.
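The event preprocessing that the memristor latches perform in hardware, accumulating DVS events into per-pixel, per-polarity count histograms, can be sketched in software as (our own minimal illustration):

```python
import numpy as np

def events_to_histogram(events, shape=(4, 4)):
    """Accumulate a stream of DVS events (x, y, polarity) into per-pixel
    count histograms, one channel per polarity -- a software sketch of
    the preprocessing done by the memristor-based latches."""
    hist = np.zeros((2, *shape), dtype=int)
    for x, y, p in events:
        hist[p, y, x] += 1
    return hist

# toy stream: (x, y, polarity) tuples
events = [(0, 0, 1), (0, 0, 1), (1, 2, 0), (3, 3, 1)]
hist = events_to_histogram(events)
```

The resulting dense histogram tensor is what the crossbar ANN consumes, allowing ordinary backpropagation-trained weights to process spike streams.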
11

Yan, Zhenggang, Yue Yu, and Mohammad Shabaz. "Optimization Research on Deep Learning and Temporal Segmentation Algorithm of Video Shot in Basketball Games." Computational Intelligence and Neuroscience 2021 (September 6, 2021): 1–10. http://dx.doi.org/10.1155/2021/4674140.

Abstract:
The analysis of video shots in basketball games and the edge detection of video shots are among the most active and rapidly developing topics in the field of multimedia research. Temporal segmentation of video shots is based on video image frame extraction and is the precondition for video applications. Studying the temporal segmentation of basketball game video shots therefore has great practical significance and application prospects. Because current algorithms require a long segmentation time for basketball game video shots, a deep learning model and a histogram-based temporal segmentation algorithm for basketball game video shots are proposed. Using deep learning for boundary detection of the video shots and processing of the image frames, the video data are converted from the RGB space to the HSV space. Histogram statistics are used to reduce the dimension of the video image, and the three color components in the video are combined into a one-dimensional feature vector to obtain the quantization level of the video. The one-dimensional vector is used as the variable to perform histogram statistics and analysis on the video shot and to calculate the continuous frame difference, the accumulated frame difference, the window frame difference, the adaptive window mean, and the super-average ratio of the basketball game video. The calculation results are combined with a set dynamic threshold to optimize the temporal segmentation of the video shots. The comparison results, based on a test of the missed detection rate of video shots, verify the effectiveness of the proposed algorithm. According to the test results for segmentation time, the optimization algorithm for temporal segmentation of basketball game video shots is efficiently implemented.
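The histogram pipeline, quantizing HSV into a one-dimensional feature, histogramming it per frame, and accumulating frame differences, can be sketched as (a sketch using one common 72-level quantization, not necessarily the paper's exact levels):

```python
import numpy as np

def frame_histogram(h, s, v):
    """Quantise HSV planes (h in 0..7, s and v in 0..2) into one
    72-level 1-D feature per pixel and histogram it."""
    levels = 9 * h + 3 * s + v
    return np.bincount(levels.ravel(), minlength=72)

def accumulated_frame_difference(hists):
    """Running sum of absolute histogram differences between consecutive
    frames -- the accumulated frame difference compared against a
    dynamic threshold to place shot boundaries."""
    diffs = [np.abs(a - b).sum() for a, b in zip(hists, hists[1:])]
    return np.cumsum(diffs)

rng = np.random.default_rng(0)
frames = [(rng.integers(0, 8, (8, 8)), rng.integers(0, 3, (8, 8)),
           rng.integers(0, 3, (8, 8))) for _ in range(4)]
hists = [frame_histogram(h, s, v) for h, s, v in frames]
acc = accumulated_frame_difference(hists)
```

A sharp jump in the continuous difference (or in the accumulated curve's slope) indicates a cut candidate.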
12

Al-Jubaury, Ban Abdul-Rahman, Luay Adware, and Laith Abdul-Aziz. "Modified Edge Detector for Coloured Images." Iraqi Journal for Computers and Informatics 40, no. 1 (2002): 45–54. http://dx.doi.org/10.25195/ijci.v40i1.225.

Abstract:
In this paper, a study of the role of colour information in detecting the edges of an image was conducted. Different colour spaces with components corresponding to the attributes luminance, hue, and saturation (i.e., HSV and LUV) were implemented. Two edge detection techniques were applied to each of the above colour spaces: the Sobel operator and the Nonlinear Laplace operator. A proposed nonlinear Laplace operator was implemented, and its encouraging results indicated better efficiency than the traditional nonlinear Laplace operator. Different approaches were utilized to select a threshold value either manually or automatically. The automatic selection depends on the calculation of the mean colour gradient magnitude or on the accumulated histogram. A mechanism based on using two threshold boundaries was suggested and implemented to detect edges in colour spaces other than RGB; the results indicated an improvement in the resultant edge image.
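Automatic threshold selection from the accumulated histogram can be sketched as follows (a generic illustration of the idea, the level at which the cumulative distribution keeps a chosen fraction of pixels, not necessarily the authors' exact rule):

```python
import numpy as np

def threshold_from_accumulated_histogram(gradient_mag, edge_fraction=0.1):
    """Pick an edge threshold automatically: the gradient magnitude
    below which the accumulated (cumulative) histogram keeps
    (1 - edge_fraction) of the pixels, so roughly the top fraction
    of pixels become edges."""
    hist, edges = np.histogram(gradient_mag.ravel(), bins=256)
    cum = np.cumsum(hist) / hist.sum()
    idx = np.searchsorted(cum, 1.0 - edge_fraction)
    return edges[idx]

rng = np.random.default_rng(0)
grad = rng.rayleigh(scale=10.0, size=(64, 64))   # stand-in gradient image
t = threshold_from_accumulated_histogram(grad, edge_fraction=0.1)
frac_above = (grad > t).mean()
```

Unlike a fixed manual threshold, this adapts to the image's own gradient statistics.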
13

Lu, Peng, Mingyu Xu, Ming Chen, Zhenhua Wang, Zongsheng Zheng, and Yixuan Yin. "Multi-Step Prediction of Typhoon Tracks Combining Reanalysis Image Fusion Using Laplacian Pyramid and Discrete Wavelet Transform with ConvLSTM." Axioms 12, no. 9 (2023): 874. http://dx.doi.org/10.3390/axioms12090874.

Abstract:
Typhoons often cause huge losses, so it is important to predict typhoon tracks accurately. Currently, researchers predict typhoon tracks in single steps, but in long-term prediction the correlation between data at adjacent moments is small because of the large time step. Moreover, recursive multi-step prediction accumulates error. Therefore, this paper proposes to fuse reanalysis images at a similar historical moment with predicted images, through the Laplacian Pyramid and the Discrete Wavelet Transform, to reduce the accumulated error. That moment is determined according to the difference in the moving angle at the predicted and historical moments, the color histogram similarity between the predicted images and the reanalysis images at historical moments, and other cues. Moreover, the reanalysis images are weighted, cascaded, and input to ConvLSTM on the basis of the correlation between the reanalysis data and the moving angle and distance of the typhoon. In addition, Spatial Attention and a weighted calculation of memory cells are added to improve the performance of ConvLSTM. This paper predicted typhoon tracks at 12 h, 18 h, 24 h, and 48 h with recursive multi-step prediction. The MAEs were 102.14 km, 168.17 km, 243.73 km, and 574.62 km, respectively, reduced by 1.65 km, 5.93 km, 4.6 km, and 13.09 km compared with the predictions of the improved ConvLSTM alone, which demonstrates the validity of the model.
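The color-histogram-similarity cue for choosing the most similar historical moment can be sketched with a per-channel histogram intersection (our own minimal illustration; the paper's similarity measure may differ):

```python
import numpy as np

def color_histogram_similarity(img_a, img_b, bins=8):
    """Histogram-intersection similarity between two images' colour
    histograms, averaged over channels; 1.0 means identical
    histograms, 0.0 means no overlap."""
    sims = []
    for c in range(img_a.shape[2]):
        ha, _ = np.histogram(img_a[..., c], bins=bins, range=(0, 256))
        hb, _ = np.histogram(img_b[..., c], bins=bins, range=(0, 256))
        ha = ha / ha.sum()
        hb = hb / hb.sum()
        sims.append(np.minimum(ha, hb).sum())
    return float(np.mean(sims))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16, 3))
sim_self = color_histogram_similarity(img, img)
sim_zero = color_histogram_similarity(img, np.zeros_like(img))
```

The historical reanalysis image scoring highest against the predicted image would be the one selected for fusion.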
14

De Ocampo, Anton Louise Pernez, Argel Bandala, and Elmer Dadios. "Gabor-enhanced histogram of oriented gradients for human presence detection applied in aerial monitoring." International Journal of Advances in Intelligent Informatics 6, no. 3 (2020): 223. http://dx.doi.org/10.26555/ijain.v6i3.514.

Abstract:
In UAV-based human detection, the extraction and selection of the feature vector is one of the critical tasks in ensuring the optimal performance of the detection system. Although UAV cameras capture high-resolution images, the relative size of human figures renders persons at very low resolution and contrast. Feature descriptors that can adequately discriminate between local symmetrical patterns in a low-contrast image may improve the detection of human figures in vegetative environments. Such a descriptor is proposed and presented in this paper. Initially, the acquired images are fed to a digital processor in a ground station where the human detection algorithm is performed. Part of the human detection algorithm is the GeHOG feature extraction, where a bank of Gabor filters is used to generate textured images from the original. The local energy for each cell of the Gabor images is calculated to identify the dominant orientations. The bins of conventional HOG are enhanced based on the dominant orientation index and the accumulated local energy in the Gabor images. To measure the performance of the proposed features, Gabor-enhanced HOG (GeHOG) and two other recent improvements to HOG, Histogram of Edge Oriented Gradients (HEOG) and Improved HOG (ImHOG), are used for human detection on the INRIA dataset and a custom dataset of farmers working in fields captured via an unmanned aerial vehicle. The proposed feature descriptor significantly improved human detection and performed better than recent improvements to conventional HOG. Using GeHOG improved the precision of human detection to 98.23% on the INRIA dataset. The proposed feature can significantly improve human detection in surveillance systems, especially in vegetative environments.
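The accumulation underlying conventional HOG, each pixel voting its gradient magnitude into an orientation bin, can be sketched per cell as follows (GeHOG additionally re-weights bins by Gabor local energy, which this minimal sketch omits):

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=9):
    """Orientation histogram for one HOG cell: each pixel votes its
    gradient magnitude into the bin of its unsigned orientation
    (0..180 degrees)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

cell = np.tile(np.arange(8.0), (8, 1))   # pure horizontal intensity ramp
hog = cell_orientation_histogram(cell)
```

For the horizontal ramp above, every pixel votes into the 0-degree bin, which is exactly the kind of dominant-orientation evidence GeHOG amplifies.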
15

Zhou, Wu, Lijuan Zhang, Yaoqin Xie, and Changhong Liang. "A Novel Technique for Prealignment in Multimodality Medical Image Registration." BioMed Research International 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/726852.

Abstract:
An image pair is often aligned initially based on a rigid or affine transformation before a deformable registration method is applied in medical image registration. Inappropriate initial registration may compromise the registration speed or impede the convergence of the optimization algorithm. In this work, a novel technique is proposed for prealignment in both monomodality and multimodality image registration, based on the statistical correlation of gradient information. A simple and robust algorithm is proposed to determine the rotational difference between two images by matching orientation histograms accumulated from the local orientation of each pixel, without any feature extraction. Experimental results showed that it is effective in acquiring the orientation angle between two unregistered images, with advantages over the existing edge-map-based method in multimodality settings. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and monomodality images, with respect to rigid and nonrigid deformation, improved the chances of finding the global optimum of the registration and reduced the search space of the optimization.
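The rotation-estimation idea, matching orientation histograms accumulated from local pixel orientations, can be sketched with circular histogram matching (a minimal illustration; the bin width fixes the angular resolution, and names are ours):

```python
import numpy as np

def rotation_from_orientation_histograms(h1, h2):
    """Estimate the rotational offset between two images from their
    orientation histograms by circular matching: the cyclic shift that
    minimises the summed absolute difference."""
    n = len(h1)
    costs = [np.abs(np.roll(h1, s) - h2).sum() for s in range(n)]
    best = int(np.argmin(costs))
    return best * 360.0 / n          # shift in bins -> degrees

h1 = np.zeros(36)
h1[3] = 1.0                          # orientation mass peaked at 30 deg
h2 = np.roll(h1, 5)                  # the same content rotated by 50 deg
angle = rotation_from_orientation_histograms(h1, h2)
```

Undoing the recovered angle before running the deformable registration is the prealignment step the paper describes.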
16

Huang, Yu-Chuen, Yuh-Cheng Yang, Kai-Chien Yang, et al. "Pegylated Gold Nanoparticles Induce Apoptosis in Human Chronic Myeloid Leukemia Cells." BioMed Research International 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/182353.

Abstract:
Gold nanoparticles (AuNPs) have several potential biological applications as well as excellent biocompatibility. AuNPs with surface modification using polyethylene glycol (PEG-AuNPs) can facilitate easy conjugation with various biological molecules of interest. To examine the anticancer bioactivity of PEG-AuNPs, we investigated their effect on human chronic myeloid leukemia K562 cells. The results indicated that PEG-AuNPs markedly inhibited the viability and impaired the cell membrane integrity of K562 cells. The particles caused morphological changes typical of cell death and a marked increase in the sub-G1 population in the DNA histogram, indicating apoptosis. In addition, PEG-AuNPs reduced the mitochondrial transmembrane potential, a hallmark of the involvement of the intrinsic apoptotic pathway in K562 cells. Observation of the ultrastructure under a transmission electron microscope revealed that the internalized PEG-AuNPs were distributed into cytoplasmic vacuoles and damaged mitochondria, and subsequently accumulated in areas surrounding the nuclear membrane. In conclusion, PEG-AuNPs may have the potential to inhibit growth and induce apoptosis in human chronic myeloid leukemia cells.
17

Zhao, Ziwan. "Influence of VR-Assisted College Dance on College Students' Physical and Mental Health and Comprehensive Quality." International Journal of Information and Communication Technology Education 20, no. 1 (2024): 1–21. http://dx.doi.org/10.4018/ijicte.343521.

Abstract:
With the development and popularization of sports dance, sports dance teaching has become a required elective course in universities. Sports dance can not only improve students' comprehensive quality but also promote college students' psychological health. The use of VR (Virtual Reality) technology in dance education will certainly develop and promote dance education. This paper studies an effective feature extraction method for the characteristics of dance movements based on VR. The edge features of all video images in each segment are accumulated into one image, and directional gradient histogram features are extracted from it. The results show that, compared with the current robust regression method and cascade regression method, our method has higher positioning accuracy on the polluted test set, and more than 75% of the sample errors of this method are within 0.1. This also verifies the effectiveness of the motion recognition algorithm for dance motion recognition. Dance can effectively help college students overcome psychological barriers and improve their comprehensive quality.
18

Park, MinJi, and Byoung Chul Ko. "Two-Step Real-Time Night-Time Fire Detection in an Urban Environment Using Static ELASTIC-YOLOv3 and Temporal Fire-Tube." Sensors 20, no. 8 (2020): 2202. http://dx.doi.org/10.3390/s20082202.

Abstract:
While the number of casualties and the amount of property damage caused by fires in urban areas increase each year, studies on their automatic detection have not kept pace with the scale of such fire damage. Camera-based fire detection systems have numerous advantages over conventional sensor-based methods, but most research in this area has been limited to daytime use. However, night-time fire detection in urban areas is more difficult than daytime detection owing to the presence of ambient lighting such as headlights, neon signs, and streetlights. Therefore, in this study, we propose an algorithm that can quickly detect a fire at night in urban areas by reflecting its night-time characteristics. It is termed ELASTIC-YOLOv3 (an improvement over the existing YOLOv3) and detects fire candidate areas quickly and accurately, regardless of the size of the fire, during the pre-processing stage. To reflect the dynamic characteristics of a night-time flame, N frames are accumulated to create a temporal fire-tube, and a histogram of the optical flow of the flame is extracted from the fire-tube and converted into a bag-of-features (BoF) histogram. The BoF is then applied to a random forest classifier, which achieves fast classification and high classification performance on the tabular features to verify a fire candidate. Based on a performance comparison against a few other state-of-the-art fire detection methods, the proposed method improves night-time fire detection compared to deep neural network (DNN)-based methods and achieves a reduced processing time without any loss in accuracy.
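The bag-of-features conversion, assigning each optical-flow feature from the fire-tube to its nearest codebook word and histogramming the assignments, can be sketched as (our own minimal illustration with a hand-made codebook):

```python
import numpy as np

def bof_histogram(features, codebook):
    """Convert a set of feature vectors (e.g. optical-flow descriptors
    from a fire-tube) into a bag-of-features histogram: each feature
    votes for its nearest codebook word.  The codebook itself would
    be learned separately."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)             # nearest word per feature
    return np.bincount(words, minlength=len(codebook))

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
features = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [2.1, -0.1]])
bof = bof_histogram(features, codebook)
```

The fixed-length `bof` vector is the tabular input a random forest classifier can verify a fire candidate with.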
19

Wu, Cheng, Xiang Qiang, Yiming Wang, Changsheng Yan, and Guangyao Zhai. "Efficient detection of obstacles on tramways using adaptive multilevel thresholding and region growing methods." Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit 232, no. 5 (2017): 1375–84. http://dx.doi.org/10.1177/0954409717720840.

Abstract:
With the rapid development of light-rail public transportation, video-based obstacle detection is becoming an essential, foremost task in driver assistance systems. The system should be able to automatically survey the tramway using an onboard camera. However, the functioning of the system is challenging due to the presence of various ground types, different weather and illumination conditions, as well as varying times of acquisition. This article presents a real-time tramway detection method that deals efficiently with various challenging situations in real-world urban rail traffic scenarios. It first uses an adaptive multilevel thresholding method to segment the regions of interest of the tramway, in which the threshold parameters are estimated using a local accumulated histogram. The approach then adopts the region growing method to decrease the influence of environmental noise and to predict the trend of the tramway. The experimental validation of this study proves that the method is able to correctly detect tramways even in challenging scenarios and uses less computational time, meeting the real-time demand.
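Estimating multilevel threshold parameters from a local accumulated histogram can be sketched as follows (a generic quantile-based illustration of the idea, not the authors' exact estimator):

```python
import numpy as np

def multilevel_thresholds(region, fractions=(0.25, 0.5, 0.75)):
    """Estimate multilevel threshold parameters from the accumulated
    histogram of a local region of interest: each threshold is the grey
    level at which the cumulative distribution crosses a chosen
    fraction of the region's pixels."""
    hist = np.bincount(region.ravel(), minlength=256)
    cum = np.cumsum(hist) / hist.sum()
    return [int(np.searchsorted(cum, f)) for f in fractions]

rng = np.random.default_rng(0)
region = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
levels = multilevel_thresholds(region)
```

Because the thresholds are derived from the local histogram rather than fixed globally, they adapt to each ground type and illumination condition.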
20

Maltseva, Daria, Sergey Zablotskiy, Julia Martemyanova, Viktor Ivanov, Timur Shakirov, and Wolfgang Paul. "Diagrams of States of Single Flexible-Semiflexible Multi-Block Copolymer Chains: A Flat-Histogram Monte Carlo Study." Polymers 11, no. 5 (2019): 757. http://dx.doi.org/10.3390/polym11050757.

Abstract:
The combination of flexibility and semiflexibility in a single molecule is a powerful design principle both in nature and in materials science. We present results on the conformational behavior of a single multiblock-copolymer chain, consisting of equal amounts of Flexible (F) and Semiflexible (S) blocks with different affinity to an implicit solvent. We consider a manifold of macrostates defined by two terms in the total energy: intermonomer interaction energy and stiffness energy. To obtain diagrams of states (pseudo-phase diagrams), we performed flat-histogram Monte Carlo simulations using the Stochastic Approximation Monte Carlo algorithm (SAMC). We have accumulated two-Dimensional Density of States (2D DoS) functions (defined on the 2D manifold of macrostates) for a SF-multiblock-copolymer chain of length N = 64 with block lengths b = 4, 8, 16, and 32 in two different selective solvents. In an analysis of the canonical ensemble, we calculated the heat capacity and determined its maxima and the most probable morphologies in different regions of the state diagrams. These are rich in various, non-trivial morphologies, which are formed without any specific interactions, and depend on the block length and the type of solvent selectivity (preferring S or F blocks, respectively). We compared the diagrams with those for the non-selective solvent and reveal essential changes in some cases. Additionally, we implemented microcanonical analysis in the “conformational” microcanonical ( N V U , where U is the potential energy) and the true microcanonical ( N V E , where E is the total energy) ensembles with the aim to reveal and classify pseudo-phase transitions, occurring under the change of temperature.
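The flat-histogram principle behind SAMC can be illustrated with a toy Wang-Landau run that estimates the density of states of a trivial system (coin flips, where the true g(E) is a binomial coefficient); SAMC itself uses a decaying gain schedule instead, and everything here is our own sketch:

```python
import math
import numpy as np

def wang_landau_coins(n=10, f_final=1e-4, flat=0.8, seed=0):
    """Flat-histogram estimate of log g(E), where E is the number of
    heads among n coins and the true g(E) = C(n, E).  A toy Wang-Landau
    stand-in for SAMC-style flat-histogram sampling."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n)
    log_g = np.zeros(n + 1)          # running log density of states
    hist = np.zeros(n + 1)           # visit histogram for flatness test
    log_f = 1.0                      # modification factor
    e = int(state.sum())
    while log_f > f_final:
        i = rng.integers(n)
        e_new = e + (1 - 2 * int(state[i]))           # flip coin i
        if rng.random() < math.exp(min(0.0, log_g[e] - log_g[e_new])):
            state[i] ^= 1
            e = e_new
        log_g[e] += log_f
        hist[e] += 1
        if hist.min() > flat * hist.mean():           # histogram flat?
            hist[:] = 0
            log_f /= 2.0
    return log_g - log_g[0]          # normalise so log g(0) = 0

log_g = wang_landau_coins()
```

Accumulating a two-dimensional density of states over (interaction energy, stiffness energy), as the paper does, generalizes this one-dimensional walk to a 2D manifold of macrostates.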
21

Song, Yue, Yue Ma, Zhibiao Zhou, Jian Yang, and Song Li. "Signal Photon Extraction and Classification for ICESat-2 Photon-Counting Lidar in Coastal Areas." Remote Sensing 16, no. 7 (2024): 1127. http://dx.doi.org/10.3390/rs16071127.

Full text
Abstract:
The highly accurate data of topography and bathymetry are fundamental to ecological studies and policy decisions for coastal zones. Currently, the automatic extraction and classification of signal photons in coastal zones is a challenging problem, especially surface type classification without auxiliary data. The lack of classification information limits large-scale bathymetric applications of ICESat-2 (Ice, Cloud, and Land Elevation Satellite-2). In this study, we propose a photon extraction–classification method to process geolocated photons in coastal areas from the ICESat-2 ATL03 product. The basic idea is to extract the signal photons using an adaptive photon clustering algorithm, and the extracted signal photons are classified based on the accumulated histogram and a triangular grid. We also generate the bottom profile using weighted interpolation. In four typical coastal areas (artificial coast, natural coast, island, and reefs), the extraction accuracy of signal photons exceeds 0.90, and the Kappa coefficients of the four surface types exceed 0.75. This method independently extracts and classifies signal photons without relying on auxiliary data, which can greatly improve the efficiency of obtaining bathymetric points in all kinds of coastal areas and provide technical support for other coastal studies using ICESat-2 data.
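The accumulated-histogram step for separating surface and bottom returns can be sketched as follows. The bin width, minimum peak separation, and peak-picking rule here are illustrative assumptions, not the authors' algorithm:

```python
def elevation_histogram_peaks(elevations, bin_width=0.5, min_sep_bins=3):
    """Accumulate photon elevations into a histogram and return the two
    strongest well-separated peaks (e.g. water surface vs. seafloor)."""
    lo = min(elevations)
    counts = {}
    for z in elevations:
        b = int((z - lo) / bin_width)           # bin index of this photon
        counts[b] = counts.get(b, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    peaks = [ranked[0]]                         # strongest bin first
    for b, c in ranked[1:]:
        if abs(b - peaks[0][0]) >= min_sep_bins:
            peaks.append((b, c))                # next strong, distant bin
            break
    # return peak bin centers in elevation units
    return [lo + (b + 0.5) * bin_width for b, c in peaks]
```

With a dense cluster of surface photons and a sparser bottom cluster, the two returned elevations approximate the surface and bottom levels.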
APA, Harvard, Vancouver, ISO, and other styles
22

Takaoğlu, Mustafa, Adem Özyavaş, Naim Ajlouni, Ali Alshahrani, and Basil Alkasasbeh. "A Novel and Robust Hybrid Blockchain and Steganography Scheme." Applied Sciences 11, no. 22 (2021): 10698. http://dx.doi.org/10.3390/app112210698.

Full text
Abstract:
Data security and data hiding have been studied throughout history. Studies show that steganography and encryption methods are used together to hide data and avoid detection. Large amounts of data hidden in the cover multimedia distort the image, which can be detected by visual and histogram analysis. The proposed method solves two major drawbacks of current methods: the limitation imposed on the size of the data to be hidden in the cover multimedia, and low resistance to steganalysis after the stego-operation. In the proposed method, plaintext data are divided into fixed-size bit blocks, and the indices of the matching bit patterns in the cover multimedia are accumulated. Thus, the hidden data are composed of indices into the cover multimedia, causing no change in it and enabling considerable amounts of plaintext to be hidden. The proposed method also has high resistance to known steganalysis methods because it does not cause any distortion to the cover multimedia. The test results show that the proposed method outperforms similar conventional steganographic techniques. The proposed Ozyavas–Takaoglu–Ajlouni (OTA) method relieves the limitation on the size of the hidden data, and the hidden data are undetectable by steganalysis because they are no longer embedded in the cover multimedia.
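A minimal sketch of the index-based idea follows. The block size, bitstring representation, and error handling are assumptions for illustration; the actual OTA scheme combines this with encryption and blockchain machinery not shown here:

```python
def hide_by_indices(cover, secret, block=4):
    """Encode each fixed-size block of the secret bitstring as an index
    where that pattern already occurs in the cover bitstream; the cover
    itself is never modified, so histogram analysis sees no change."""
    indices = []
    for i in range(0, len(secret), block):
        j = cover.find(secret[i:i + block])
        if j < 0:
            raise ValueError('pattern not present in cover')
        indices.append(j)
    return indices

def recover(cover, indices, block=4):
    """Rebuild the secret by reading the blocks back out of the cover."""
    return ''.join(cover[j:j + block] for j in indices)
```

The payload transmitted is the index list, not the cover, which is why the cover's statistics stay untouched.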
APA, Harvard, Vancouver, ISO, and other styles
23

Jankov, Isidora, Lewis D. Grasso, Manajit Sengupta, et al. "An Evaluation of Five ARW-WRF Microphysics Schemes Using Synthetic GOES Imagery for an Atmospheric River Event Affecting the California Coast." Journal of Hydrometeorology 12, no. 4 (2011): 618–33. http://dx.doi.org/10.1175/2010jhm1282.1.

Full text
Abstract:
The main purpose of the present study is to assess the value of synthetic satellite imagery as a tool for evaluating model performance, in addition to more traditional approaches. For this purpose, synthetic GOES-10 imagery at 10.7 μm was produced using output from the Advanced Research Weather Research and Forecasting (ARW-WRF) numerical model. Use of synthetic imagery is a unique method to indirectly evaluate the performance of various microphysical schemes available within the ARW-WRF. In the present study, a simulation of an atmospheric river event that occurred on 30 December 2005 was used. The simulations were performed using the ARW-WRF numerical model with five different microphysical schemes [Lin, WRF single-moment 6 class (WSM6), Thompson, Schultz, and double-moment Morrison]. Synthetic imagery was created and scenes from the simulations were statistically compared with observations from the 10.7-μm band of the GOES-10 imager using a histogram-based technique. The results suggest that synthetic satellite imagery is useful in model performance evaluations as a complementary metric to those used traditionally. For example, accumulated precipitation analyses and other commonly used fields in model evaluations suggested a good agreement among solutions from various microphysical schemes, while the synthetic imagery analysis pointed toward notable differences in simulations of clouds among the microphysical schemes.
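A histogram-based comparison of observed and synthetic 10.7-μm brightness-temperature fields can be illustrated with a simple histogram-intersection score. This metric and the bin layout are assumptions for illustration; the study's actual statistics may differ:

```python
def histogram_intersection(obs, sim, bins):
    """Bin two brightness-temperature fields into shared histograms and
    return their intersection (1.0 = identical distributions).
    Values equal to the last bin edge are excluded."""
    def hist(vals):
        h = [0] * (len(bins) - 1)
        for v in vals:
            for k in range(len(bins) - 1):
                if bins[k] <= v < bins[k + 1]:
                    h[k] += 1
                    break
        n = float(len(vals))
        return [c / n for c in h]            # normalized frequencies
    ho, hs = hist(obs), hist(sim)
    return sum(min(a, b) for a, b in zip(ho, hs))
```

Comparing a model field against itself yields 1.0; a cloud field shifted into different temperature bins lowers the score, flagging disagreement between schemes.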
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, Yan. "A Three-Dimensional Animation Character Dance Movement Model Based on the Edge Distance Random Matrix." Mathematical Problems in Engineering 2022 (May 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/3212308.

Full text
Abstract:
In this paper, we use the edge distance random matrix method to analyze the dance movements of 3D animated characters and design a 3D animated-character dance movement model. First, each dance movement video in the dataset is divided into equal segments, the edge features of all video images within each segment are accumulated into a single image, and directional gradient histogram features are extracted from it. A set of directional gradient histogram features thus represents the local appearance and shape of the video dance movement. The reconstructed human movements are obtained mainly by fitting the 3D human coordinates in image space using a human model and the estimated depth coordinates; this step relies on existing, relatively mature techniques. Across combinations, the performance of the proposed method exceeds the recognition results of the benchmark method, especially when the dance movements in the towel flower and piece flower combinations are highly similar. To address ground penetration, foot sliding, and floating feet in the reconstructed motion, a foot-contact-based method is proposed to optimize the foot placement of the fitted human model. The experimental results show that the method can, to a certain extent, replace traditional motion capture, simplify its use, and reduce its cost. In the model deformation stage, to reduce deformation-quality problems during model motion and improve the efficiency of weight calculation, a model deformation method combining dual quaternions with bounded blending weights is given.
The static 3D model is tetrahedralized, the skeletal control points of the model are set, and the weight of each skeletal segment on the model is calculated; next, the 3D model is bound to the skeletal data using the dual quaternion skinning algorithm; finally, the static 3D model's motion is driven by the skeletal data. Experiments demonstrate that the method yields better deformation of the 3D model during rotation.
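The accumulate-edges-then-histogram step can be sketched as below. This is a pure-Python illustration with finite-difference gradients; the paper's exact edge operator and histogram parameters are not specified here:

```python
import math

def accumulate_edges(frames):
    """Sum per-frame gradient magnitudes into one edge image per segment."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for f in frames:
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = f[y][x + 1] - f[y][x - 1]    # central differences
                gy = f[y + 1][x] - f[y - 1][x]
                acc[y][x] += math.hypot(gx, gy)
    return acc

def orientation_histogram(img, n_bins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations."""
    h, w = len(img), len(img[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if gx or gy:
                ang = math.atan2(gy, gx) % math.pi   # fold to [0, pi)
                hist[min(int(ang / math.pi * n_bins), n_bins - 1)] += math.hypot(gx, gy)
    s = sum(hist) or 1.0
    return [v / s for v in hist]                     # normalize to sum 1
```

A full HOG descriptor would additionally split the image into cells and blocks; this sketch keeps only the global orientation histogram.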
APA, Harvard, Vancouver, ISO, and other styles
25

Parfentsevа, N. О., and H. V. Holubova. "Statistical Methods for Quality Control: A Tool for Data Analysis in the Statistica Package." Statistics of Ukraine 100, no. 1 (2023): 9–26. http://dx.doi.org/10.31767/su.1(100)2023.01.02.

Full text
Abstract:
The article substantiates the applicability of statistical methods for product quality assessment, analysis of production processes, business processes, etc. The notion “quality” is characterized and its properties are defined: suitability, operational efficiency, appropriate content, etc. It is highlighted that the State Standard of Ukraine is a national body for standardization, metrology and certification, which defines and approves the quality standardization system in accordance with the international standards Guidance on statistical techniques for ISO 9001:2000.
 The focus is on the main methods for quality control, which are considered the most relevant and most widely used. The application of seven quality control methods is described in detail: Control Sheets, Pareto Diagram, Stratification, Histogram, Scatter Diagram, Cause and Effect Diagram, Control Chart.
 It is substantiated that in a digitalized economy with large scopes of accumulated information, the use of statistical data processing packages is an indisputable tool for analysts. Using statistical quality control methods implemented in the Statistica package, the authors conducted research on simulated data and constructed appropriate graphs and charts. Pareto diagram is designed for ranking the factors with impact on a production process or product quality. The stratification method allows for performing a variance analysis, to determine each factor’s effect on the result.
 The main advantage of the histogram method is its visibility and simplicity for analyzing the homogeneity of a distribution and checking for normality. Scatter diagrams allow one to evaluate the correlation strength and make graphical descriptions of the dependence between production factors, reveal the impact of a factor characteristic on the resulting one, etc. The Ishikawa cause-and-effect diagram provides a tool for arranging the factors with effect on the production process. The use of control charts enables analyzing the production process in dynamics.
 It is emphasized that the described quality control methods can be applied in any sequence, production cycle or combination: altogether or as separate analytical tools. Based on the results of the study, the main challenges faced now by business analysts are summed up: the mastery of statistical tools and computer data processing for making effective management decisions.
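The Pareto-diagram computation described above (ranking factors by frequency and accumulating their shares) can be sketched as follows; the 80% "vital few" cutoff is a conventional assumption, not a value from the article:

```python
def pareto(defects):
    """Rank defect causes and return (cause, count, cumulative %) rows,
    plus the 'vital few' causes covering the first 80% of occurrences."""
    total = sum(defects.values())
    rows, cum = [], 0
    for cause, n in sorted(defects.items(), key=lambda kv: -kv[1]):
        cum += n
        rows.append((cause, n, round(100.0 * cum / total, 1)))
    vital = []
    for cause, n, c in rows:
        vital.append(cause)
        if c >= 80.0:            # stop once 80% is covered
            break
    return rows, vital
```

The rows give the bar heights and the cumulative curve of the Pareto chart; the vital-few list names the factors worth addressing first.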
APA, Harvard, Vancouver, ISO, and other styles
26

Ren, Jianqiang, Chunhong Zhang, Lingjuan Zhang, Ning Wang, and Yue Feng. "Automatic Measurement of Traffic State Parameters Based on Computer Vision for Intelligent Transportation Surveillance." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 04 (2017): 1855003. http://dx.doi.org/10.1142/s0218001418550030.

Full text
Abstract:
Online automatic measurement of traffic state parameters has important significance for intelligent transportation surveillance. Video-based monitoring technology is widely studied today, but the existing methods are not satisfactory in processing speed or accuracy, especially for traffic scenes with congestion or complex road environments. Based on technologies of computer vision and pattern recognition, this paper proposes a novel measurement method that can detect multiple parameters of traffic flow and identify vehicle types from video sequences rapidly and accurately by combining feature point detection with foreground temporal-spatial image (FTSI) analysis. In this method, two virtual detection lines (VDLs) are first set in frame images. During operation, vehicular feature points are extracted via the upstream VDL and grouped per vehicle based on their movement differences. Then, the FTSI is accumulated from video frames via the downstream VDL, and adhesive blobs of occluded vehicles in the FTSI are separated effectively based on feature point groups and the projection histogram of blob pixels. At regular intervals, traffic parameters are calculated via statistical analysis of blobs, and vehicles are classified via a K-nearest neighbor (KNN) classifier based on geometrical characteristics of their blobs. For vehicle classification, the distorted blobs of temporarily stopped vehicles are corrected accurately based on the vehicular instantaneous speed at the downstream VDL. Experiments show that the proposed method is efficient and practicable.
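The projection-histogram separation of adhesive blobs can be sketched as a one-axis projection with splitting at histogram valleys. The binary-mask representation and the zero-valley threshold are illustrative assumptions:

```python
def split_blobs_by_projection(mask, thresh=0):
    """Project a binary blob image onto one axis and split at histogram
    valleys, separating vehicles whose blobs touch in the FTSI."""
    proj = [sum(col) for col in zip(*mask)]   # column-wise pixel counts
    segments, start = [], None
    for x, v in enumerate(proj):
        if v > thresh and start is None:
            start = x                         # valley -> blob transition
        elif v <= thresh and start is not None:
            segments.append((start, x - 1))   # blob -> valley transition
            start = None
    if start is not None:
        segments.append((start, len(proj) - 1))
    return segments
```

In the paper the split points are additionally guided by feature-point groups; here only the histogram part is shown.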
APA, Harvard, Vancouver, ISO, and other styles
27

Lebedeva, Dar’ya A., Vitaliy A. Zuyevskiy, and Il’ya V. Romanov. "Allocation of the wear of the main support of raba-man cylinders blocks." Tekhnicheskiy servis mashin, no. 1 (March 1, 2020): 20–27. http://dx.doi.org/10.22314/2618-8287-2020-58-1-20-27.

Full text
Abstract:
Wear of the main supports is a defect of many domestic and foreign engines. Statistical processing and selection of the theoretical distribution of the wear greatly simplifies the choice of recovery methods at the design stage. (Research purpose) The research purpose is choosing the theoretical law of wear distribution of the main supports of Raba-MAN cylinder blocks. (Materials and methods) The authors carried out statistical processing of micrometric data of the main supports of Raba-MAN cylinder blocks. The authors measured the diameters of the main supports of 40 Raba-MAN cylinder blocks three times, with tightened fastening bolts, using an NI 100-160 GOST 868-82 micrometer with a step of 0.01 millimeters. The authors evaluated the match between experimental and theoretical distribution laws by Pearson's criterion. (Results and discussion) The measurement results were summarized in a statistical series. The article presents a histogram of accumulated experimental probabilities and graphs of the differential and integral distribution functions. It was found that the probability of agreement between the experimental and theoretical distribution laws is less than 10 percent for both laws. The article notes that the chosen distribution laws do not reproduce the second peak seen in the graph of the differential function, and that the slope of the integral function's curve near the last interval is much steeper than that of the theoretical curves. (Conclusion) The maximum wear does not exceed 0.16 millimeters. Since the normal and Weibull distributions, the laws most common in statistical calculations, are unsuitable in this case, additional research is required to select the theoretical distribution law.
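The Pearson-criterion check of an empirical wear histogram against a theoretical normal law can be sketched as below. Bin edges and parameters are illustrative; the comparison with tabulated critical values and degrees of freedom is omitted:

```python
import math

def pearson_chi2(observed, edges, mu, sigma):
    """Pearson fit statistic: observed bin counts vs. a normal model."""
    n = sum(observed)
    # normal CDF via the error function
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    chi2 = 0.0
    for i, o in enumerate(observed):
        e = n * (cdf(edges[i + 1]) - cdf(edges[i]))  # expected count
        chi2 += (o - e) ** 2 / e
    return chi2
```

A small statistic means the observed histogram is compatible with the theoretical law; a large one, as found in the article for both the normal and Weibull laws, means the law should be rejected.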
APA, Harvard, Vancouver, ISO, and other styles
28

ALBANESI, M., and M. FERRETTI. "SYSTOLIC MERGING AND RANKING OF VOTES FOR THE GENERALIZED HOUGH TRANSFORM." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 02 (1995): 315–41. http://dx.doi.org/10.1142/s0218001495000158.

Full text
Abstract:
In this paper we present and analyze a systolic structure to support the Generalized Hough Transform. Among the structural methods for object recognition, this transform is well established for its flexibility and noise immunity. Its use in actual systems has, however, been limited by the computational cost associated with the management of votes. Previous work has shown that a limited amount of memory can substitute for the large address space required for building the histogram of votes. The systolic queue introduced here substitutes for the address space required in an M-dimensional voting process, where the quantization of each dimension into Q bins yields a space complexity of O(Q^M). An N-stage queue uses 3M log(Q) memory bits at each stage and is capable of accumulating the incoming votes on the fly. The flow of data within the queue is designed to minimize the probability that new votes are lost because of overflow. We derive analytic expressions for the growth of the queue during the set-up period and for the time each new vote spends within the queue if it is not accumulated; furthermore, we show the conditions under which the arrival times of a pair of coincident addresses allow them to be detected and merged. The analysis of the time behaviour of the queue supports the experimental evidence that such a structure performs the accumulating process very reliably. A VLSI integrated circuit embedding a 50-stage queue is the third in a chip-set for the real-time implementation of the Generalized Hough Transform.
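The on-the-fly merging of coincident votes in a bounded structure can be illustrated in software. This is a sequential sketch of the queue's merge/overflow behavior, not the systolic VLSI design, and the capacity is an assumed parameter:

```python
from collections import deque

def accumulate_votes(addresses, capacity=8):
    """Bounded queue that merges votes for coincident accumulator
    addresses on the fly, instead of allocating the full O(Q**M)
    address space of an M-dimensional accumulator."""
    queue = deque()                 # entries: [address, count]
    lost = 0
    for addr in addresses:
        for entry in queue:
            if entry[0] == addr:    # coincident vote: merge in place
                entry[1] += 1
                break
        else:
            if len(queue) < capacity:
                queue.append([addr, 1])
            else:
                lost += 1           # overflow: this vote is dropped
    return {a: c for a, c in queue}, lost
```

The hardware queue achieves the same merge in constant time per stage; the `lost` counter mirrors the overflow probability analyzed in the paper.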
APA, Harvard, Vancouver, ISO, and other styles
29

Wan, Haoming, Yunwei Tang, Linhai Jing, Hui Li, Fang Qiu, and Wenjin Wu. "Tree Species Classification of Forest Stands Using Multisource Remote Sensing Data." Remote Sensing 13, no. 1 (2021): 144. http://dx.doi.org/10.3390/rs13010144.

Full text
Abstract:
The spatial distribution of forest stands is one of the fundamental properties of forests. Timely and accurately obtained stand distribution can help people better understand, manage, and utilize forests. The development of remote sensing technology has made it possible to map the distribution of tree species in a timely and accurate manner. At present, a large amount of remote sensing data have been accumulated, including high-spatial-resolution images, time-series images, light detection and ranging (LiDAR) data, etc. However, these data have not been fully utilized. To accurately identify the tree species of forest stands, various and complementary data need to be synthesized for classification. A curve matching based method called the fusion of spectral image and point data (FSP) algorithm was developed to fuse high-spatial-resolution images, time-series images, and LiDAR data for forest stand classification. In this method, the multispectral Sentinel-2 image and high-spatial-resolution aerial images were first fused. Then, the fused images were segmented to derive forest stands, which are the basic unit for classification. To extract features from forest stands, the gray histogram of each band was extracted from the aerial images. The average reflectance in each stand was calculated and stacked for the time-series images. The profile curve of forest structure was generated from the LiDAR data. Finally, the features of forest stands were compared with training samples using curve matching methods to derive the tree species. The developed method was tested in a forest farm to classify 11 tree species. The average accuracy of the FSP method for ten performances was between 0.900 and 0.913, and the maximum accuracy was 0.945. The experiments demonstrate that the FSP method is more accurate and stable than traditional machine learning classification methods.
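The curve-matching step can be sketched as a nearest-curve classifier; summed squared difference is an assumed distance, and the FSP method's actual matching metric may differ:

```python
def classify_by_curve(curve, training):
    """Assign the tree species whose training curve (e.g. gray histogram,
    time-series reflectance, or LiDAR profile) is closest to the stand's."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # min over species names, keyed by distance to that species' curve
    return min(training, key=lambda species: dist(curve, training[species]))
```

In the FSP workflow the same matching is applied per feature source (image histogram, temporal curve, structural profile) and the results are combined per stand.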
APA, Harvard, Vancouver, ISO, and other styles
30

Hayata, Eijiro, Masahiko Nakata, and Mineto Morita. "Time trend analysis of perinatal mortality, stillbirth, and early neonatal mortality of multiple pregnancies for each gestational week from the year 2000 to 2019: A population-based study in Japan." PLOS ONE 17, no. 7 (2022): e0272075. http://dx.doi.org/10.1371/journal.pone.0272075.

Full text
Abstract:
Multiple pregnancies pose a high risk of morbidity and mortality in both mothers and infants; thus, obtaining reliable information based on a large population is essential to improve management. We used the maternal and child health statistics, which are published annually, from the database of the Ministry of Health, Labor, and Welfare. The data obtained were aggregated in 5-year intervals, and we used them to analyze the proportion of the number of births for each week of pregnancy to the total of each singleton and multiple pregnancy. For perinatal health indicators (perinatal mortality, stillbirth, and neonatal mortality), the obtained data were calculated and plotted on graphs for each week of pregnancy. Moreover, these indicators were calculated by dividing them into first twin and second twin fetuses. Stillbirth weights were aggregated in several groups, and a histogram was displayed. Between 2000 and 2019, there were 21,068,275 live births, 67,666 stillbirths, and 16,443 early neonatal deaths, excluding 7,148 (7,104 singletons, 44 multiple births) cases, in which the exact gestational weeks at birth were unknown. More than 95% of multiple pregnancies were twin births. Perinatal mortality, stillbirth, and early neonatal mortality rates in multiple pregnancies were the lowest at approximately 37 weeks of gestation and lower than those of single pregnancies at approximately 36 weeks of gestation. Perinatal mortality and stillbirth rates were higher during the delivery of the second twins than the first-born twins, but the early neonatal mortality rate remained approximately the same during the delivery of both twins. As the data in the government database are accumulated and published continuously, indicators can be calculated in the future using the method presented in this study. Further, our findings may be useful for policymaking related to managing multiple pregnancies.
APA, Harvard, Vancouver, ISO, and other styles
31

Truax, Kelly, Henrietta Dulai, Anupam Misra, et al. "Laser-Induced Fluorescence for Monitoring Environmental Contamination and Stress in the Moss Thuidium plicatile." Plants 12, no. 17 (2023): 3124. http://dx.doi.org/10.3390/plants12173124.

Full text
Abstract:
The ability to detect, measure, and locate the source of contaminants, especially heavy metals and radionuclides, is of ongoing interest. A common tool for contaminant identification and bioremediation is vegetation that can accumulate and indicate recent and historic pollution. However, large-scale sampling can be costly and labor-intensive. Hence, non-invasive in-situ techniques such as laser-induced fluorescence (LIF) are becoming useful and effective ways to observe the health of plants through the excitation of organic molecules, e.g., chlorophyll. The presented technique utilizes LIF images collected from moss to identify different metals and environmental stressors. Analysis of the LIF response through image processing was key to identifying Cu, Zn, Pb, and a mixture of the metals at nmol/cm2 levels. Specifically, the RGB values from each image were used to create density histograms of each color channel's relative pixel abundance at each decimal code value. These histograms were then used to compare color shifts linked to the successful identification of contaminated moss samples. Photoperiod and extraneous environmental stressors had minimal impact on the histogram color shift compared to metals, and presented with a response that differentiated them from metal contamination.
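The per-channel density histograms used for the color-shift analysis can be computed as below; this is an illustrative sketch over a flat RGB pixel list, with 8-bit code values assumed:

```python
def channel_density_histograms(pixels, levels=256):
    """Relative pixel abundance per code value for the R, G and B
    channels of an image given as (r, g, b) tuples."""
    hists = [[0.0] * levels for _ in range(3)]
    for r, g, b in pixels:
        hists[0][r] += 1
        hists[1][g] += 1
        hists[2][b] += 1
    n = float(len(pixels))
    return [[c / n for c in h] for h in hists]   # densities sum to 1
```

Comparing these densities between control and exposed moss images quantifies the color shift the study uses to flag metal contamination.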
APA, Harvard, Vancouver, ISO, and other styles
32

Rosti, V., G. Bergamaschi, C. Lucotti, et al. "Oligodeoxynucleotides antisense to c-abl specifically inhibit entry into S-phase of CD34+ hematopoietic cells and their differentiation to granulocyte-macrophage progenitors." Blood 86, no. 9 (1995): 3387–93. http://dx.doi.org/10.1182/blood.v86.9.3387.bloodjournal8693387.

Full text
Abstract:
A number of experimental observations suggest that the proto-oncogene c- abl participates in the regulation of hematopoietic cell growth. We used an antisense strategy to study the relationship between c-abl expression and hematopoietic cell proliferation and differentiation. Purified normal human bone marrow-derived CD34+ cells were obtained by immunomagnetic selection and incubated with 18-base-unmodified antisense oligodeoxynucleotides complementary to the first six codons of the two alternative first exons of c-abl, la and lb. At the end of incubation, an aliquot of cells was assayed for clonogenic growth and the remainder was used for flow cytometric analyses. Cell kinetics were evaluated by means of both single parameter DNA and bivariate DNA/bromodeoxyuridine (BrdU) flow cytometry. Apoptosis was routinely studied by DNA flow cytometric analysis and, in some cases, also through DNA agarose gel electrophoresis for detection of oligonucleosomal DNA fragments. Expression of differentiation markers was studied by flow cytometry. Exposure to antisense oligonucleotides specifically inhibited the accumulation of c-abl mRNA in CD34+ cells. Preincubation with the c-abl antisense oligomers reduced the proportion of cells in S-phase from 19% +/- 5% (mean +/- SD) to 7% +/- 4% (P < .05), and BrdU labeling from 13% +/- 6% to 6% +/- 3% (P < .05). Flow cytometry and DNA agarose gel electrophoresis showed that treated CD34+ cells accumulated in the G0/G1 region of the DNA histogram with no evidence of either differentiation or apoptosis. By contrast, both growth factor deprivation and exposure of CD34+ cells to the tyrosine kinase inhibitor tyrphostin AG82 clearly induced apoptosis. 
When cells were preincubated with antisense oligonucleotides and then plated for evaluation of colony formation, this resulted in a significant inhibition of colony forming unit granulocyte-macrophage growth (from 44 +/- 15 to 22 +/- 9; P < .01) but had no effect on burst-forming unit erythroid growth (24 +/- 11 v 21 +/- 11; P < .05). These results suggest that c-abl expression is critical for entry of human CD34+ hematopoietic cells into S-phase and for their differentiation to granulocyte-macrophage progenitors. They also indicate that other tyrosine kinases besides p145c-abl are active in the prevention of apoptosis, so that inhibition of c-abl RNA accumulation arrests CD34+ cells in G0/G1 without activating programmed death.
APA, Harvard, Vancouver, ISO, and other styles
33

Kulagin, A. V., and E. V. Russkih. "About preliminary statistical processing of information on the study of fatigue strength of machine parts." PNRPU Mechanics Bulletin, no. 2 (December 15, 2022): 98–104. http://dx.doi.org/10.15593/perm.mech/2022.2.09.

Full text
Abstract:
A generalized approach is proposed for the statistical processing of fatigue and static-strength test results, using methods of mathematical statistics and the following probabilistic parameters: the mean square deviation, the initial moment, the central moment of dispersion, and the coefficient of variation of the distribution series of the physical and mechanical characteristics of the part material. At the initial stage, the fatigue strength analysis covers statistical processing of the information, graphical presentation of the stress distribution series, and statistical stress analysis. The graphical approach takes the form of a histogram of the distribution series, a frequency polygon, and a polygon of accumulated frequencies. Theoretical stress distribution laws are selected and empirically confirmed, and their conformity is assessed more rigorously using special goodness-of-fit criteria, for example the Pearson criterion; a parallel verification of the chosen approach using the classical relations of strength of materials is proposed. The total error is estimated as the sum of a methodological error and the direct discrepancy between the theoretical and experimental values of the stresses and temperatures of the physico-mechanical process, based on the experimental and theoretical normal and tangential stresses arising during operation of the part under external force factors. A temperature-time superposition is used, in the form of a predicted-durability function from the fluctuation kinetic theory of strength, in which temperature, as a linear function, can be replaced by any energy or force criterion, in particular: specific energy, relative deformation, or normal or tangential stresses.
The proposed approach requires substantial experimental study on a basic batch of samples of the same type, and agreement with the schematized limiting-amplitude diagrams of Goodman, Sorensen–Kinasoshvili, and Kogaev, subject to the conditions of safe operation of parts in the low-cycle fatigue region. The proposed probabilistic model of statistical processing of fatigue strength can be recommended for solving applied problems in the mechanics of materials and the theories of elasticity, plasticity and creep, strength of materials, and structural mechanics.
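The preliminary statistics named above (mean square deviation, coefficient of variation, accumulated frequencies) can be sketched as follows; the bin count is an illustrative choice:

```python
import math

def series_stats(values, n_bins=5):
    """Mean, standard deviation, coefficient of variation and the
    accumulated (cumulative) frequency polygon of a stress sample."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    std = math.sqrt(var)
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    freq = [0] * n_bins
    for v in values:
        freq[min(int((v - lo) / width), n_bins - 1)] += 1
    cum, acc = [], 0
    for f in freq:
        acc += f
        cum.append(acc / n)       # accumulated relative frequency
    return mean, std, std / mean, cum
```

The `cum` list is the polygon of accumulated frequencies; plotted against the bin boundaries it approximates the empirical integral distribution function used for the goodness-of-fit check.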
APA, Harvard, Vancouver, ISO, and other styles
34

Suwardi, Suwardi, Arko Djajadi, Tata Subrata, and Laura Belani Nudiyah. "APLIKASI DOUBLE DIFFERENCE UNTUK IDENTIFIKASI ZONA PATAHAN MIKRO WILAYAH SULAWESI BARAT." METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 7, no. 2 (2023): 272–77. http://dx.doi.org/10.46880/jmika.vol7no2.pp272-277.

Full text
Abstract:
West Sulawesi is an area in South Sulawesi province that is experiencing a sudden increase in seismic activity: 264 earthquake events were recorded during the 2021-2022 period. The increase in seismic activity occurred around the West Sulawesi segment and the Mamuju segment, local faults located in West Sulawesi. Sulawesi Island itself is the result of fragmentation by larger complex interactions and has quite high seismic activity. It is therefore necessary to relocate the earthquake hypocenters to obtain more accurate hypocenter parameters for fault zone identification and micro-fault orientation. This study aims to relocate the earthquake hypocenters from measured data using the Double-Difference method to identify fault zones and micro-fault orientation. The data used are BMKG earthquake catalog data for the western Sulawesi region, in the range 118.49°E-119.59°E and 3.67°S-2.45°S, for the 2021-2022 period. The Double-Difference method builds on the Geiger method, using residual travel-time data from each pair of hypocenters to the earthquake recording station. The principle is to compare two earthquake hypocenters that are close together relative to the recording station, with the assumption that the distance between the two hypocenters is much smaller than the distance from the hypocenters to the station, so that the ray paths of the two hypocenters can be considered nearly identical. The relocation shows that the relocated hypocenters shift in distribution, clustering in the Mamasa fault segment area. The residual histogram shows an RMS residual close to 0, indicating improved data quality, as the observed and calculated travel times nearly coincide.
The cross-sections show that the subsurface depths vary from the initial fixed depth of 10 km before relocation, indicating that the earthquakes that occurred in West Sulawesi are of the shallow crustal type.
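The double-difference principle compares pairwise observed and calculated travel-time differences; a minimal sketch follows (station and event bookkeeping omitted, input given directly as travel-time quadruples):

```python
def dd_residuals(pairs):
    """Per-pair double-difference residuals and their RMS.

    Each pair is (t_obs_i, t_obs_j, t_calc_i, t_calc_j) for two nearby
    events i, j recorded at the same station; relocation iteratively
    adjusts hypocenters until the RMS approaches zero."""
    res = [(oi - oj) - (ci - cj) for oi, oj, ci, cj in pairs]
    rms = (sum(r * r for r in res) / len(res)) ** 0.5
    return res, rms
```

An RMS near zero, as reported in the residual histogram of the study, means the velocity model and relocated hypocenters jointly reproduce the observed differential travel times.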
APA, Harvard, Vancouver, ISO, and other styles
35

Melezhik, V. A., A. E. Fallick, and T. Clark. "Two billion year old isotopically heavy carbon: evidence from the Labrador Trough, Canada." Canadian Journal of Earth Sciences 34, no. 3 (1997): 271–85. http://dx.doi.org/10.1139/e17-025.

Full text
Abstract:
Forty-eight samples of bulk carbonates were studied from the Dunphy, Portage, Alder, Uvé, Denault, and Abner formations and the undivided Pistolet subgroup in the Labrador Trough, Quebec, Canada. The carbonate units occur as a number of thick beds (up to 300 m) within 100–600 m thick formations. They are interbedded with siliciclastic sediments and are composed of crystalline, sparry, micritic allochemical and stromatolitic dolostones. The carbonate rocks are recrystallized and metamorphosed (up to amphibolite grade). They are chemically dolostones, and include chemical or biochemical laminated and massive dolostones, dolorudites, dolarenites, and dololitites, which were deposited in shallow-water (Dunphy, Denault, and Abner formations) to relatively deep water (Uvé Formation) marine environments. Reef facies are present in the Denault and Abner formations. The Dunphy, Portage, Alder, and Uvé formations, deposited between 2.17 and 2.14 Ga, yield isotopically heavy values of δ13C(PDB) ranging from + 5.3 to + 15.4‰, whereas δ18O(SMOW) values range from + 16.2 to + 25.4‰, which is rather normal for Precambrian sedimentary carbonates. The average values of δ13C show a decreasing trend upwards in the stratigraphy from 14.8 ± 0.5‰ (Dunphy Formation), to 9.5 ± 0.7‰ (Alder Formation), to 8.0 ± 1.7‰ (Uvé Formation). Average δ18O values exhibit a similar tendency, 23.6 ± 1.1, 22.6 ± 1.1, and 20.0 ± 1.9‰, respectively. The Alder and Uvé formations display clustering in a δ13C histogram, while the Dunphy Formation plots as a separate offset. The Abner and Denault formations, deposited at about 1.88 Ga, show an average δ13C of 2.1 ± 0.6 and 1.3 ± 1.4‰, and an average δ18O of 21.4 ± 1.9 and 22.6 ± 2.0‰, respectively. δ13C for these two formations is similar to that of Recent marine carbonates. 
The isotopically heavy carbonates (13Ccarb) of the Dunphy, Portage, Alder, and Uvé formations and those of the undivided Pistolet subgroup accumulated over approximately 30 Ma across a vast area of at least 20 000 km2. The 13Ccarb enrichment occurs in connection with diverse depositional environments, is not controlled by any detectable local factors (e.g., facies, diagenetic or metamorphic alteration), and is therefore considered to be part of the world-wide development of high δ13Ccarb documented at around 2.2 ± 0.1 Ga. The Canadian Shield is the fourth continent, in addition to Africa, northern Europe, and Australia, to show extensive development of isotopically heavy carbonate formations of this age.
APA, Harvard, Vancouver, ISO, and other styles
36

Stephens, Graeme L., and Norman B. Wood. "Properties of Tropical Convection Observed by Millimeter-Wave Radar Systems." Monthly Weather Review 135, no. 3 (2007): 821–42. http://dx.doi.org/10.1175/mwr3321.1.

Full text
Abstract:
Abstract This paper describes the results of analysis of over 825 000 profiles of millimeter-wave radar (MWR) reflectivities primarily collected by zenith-pointing surface radars observing tropical convection associated with various phases of activity of the large-scale tropical circulation. The data principally analyzed in this paper come from surface observations obtained at the Atmospheric Radiation Measurement Manus site during active and break episodes of the Madden–Julian oscillation (MJO) and from observations collected from a shipborne radar during an active phase of the monsoon over the Indian Ocean during the Joint Air–Sea Monsoon Interaction Experiment. It was shown, for example, in a histogram regime analysis that the MWR data produce statistics on convection regimes similar in most respects to the analogous regime analysis of the Tropical Rainfall Measuring Mission radar–radiometer observations. Attenuation of the surface MWRs by heavy precipitation, however, incorrectly shifts a small fraction of the deeper precipitation modes into the shallow modes of precipitation. The principal findings are the following. (i) The cloud and precipitation structures of the different convective regimes are largely identical regardless of the mode of synoptic forcing, that is, regardless of whether the convection occurred during an active phase of the MJO, a transition phase of the MJO, or in an active monsoon period. What changes between these synoptically forced modes of convection are the relative frequencies of occurrences of the different storm regimes. (ii) The cloud structures associated with the majority of cases of observed precipitation (ranging in occurrence from 45% to 53% of all precipitation profiles) were multilayered structures regardless of the mode of synoptic forcing. The predominant multilayered cloud mode was of higher-level cirrus of varying thickness overlying cumulus congestus–like convection. 
(iii) The majority of the water accumulated (i.e., 53%–63%) over each of the periods assigned to the active monsoon (5 days of data), the active MJO (38 days of data), and the transition MJO (53 days of data) fell from these multiple-layered cloud systems. (iv) Solar transmittances reveal that significantly less sunlight (reductions of about 30%–50%) reaches the surface in the precipitating regimes than under drizzle and cloud-only conditions, suggesting that the optical thicknesses of precipitation-bearing clouds significantly exceed those of nonprecipitating clouds.
APA, Harvard, Vancouver, ISO, and other styles
37

Popikov, P., Denis Kanishchev, and A. Sutolkin. "RESULTS OF EXPERIMENTAL RESEARCHES OF WORKING PROCESSES OF BLOCKLESS LOGGING CAPTURE WITH ENERGY-SAVING HYDRAULIC DRIVE." Actual directions of scientific researches of the XXI century: theory and practice 8, no. 1 (2020): 123–28. http://dx.doi.org/10.34220/2308-8877-2020-8-1-123-128.

Full text
Abstract:
When a tractor moves in a unit with blockless logging grabs over rough terrain at cutting areas, oscillations arise that cause pressure surges of the working fluid in the hydraulic system. This leads to losses of fluid through the gaps and seals of the moving elements of the pump and hydraulic cylinders. For laboratory study of these phenomena, a test rig was built with a pneumatic-hydraulic accumulator of the A5579-0 series introduced into the hydraulic circuit. Laboratory tests showed that, owing to the energy accumulated during operation, the pneumohydraulic accumulator reduces pressure surges of the working fluid in the hydraulic system. This lowers the dynamic load on the metal structure of the grab, the hydraulic pump drive, and the tractor transmission, and also increases the volumetric efficiency. Oscillograms of the operating modes of the grab with the energy-saving hydraulic drive were processed in the STATISTICA program to establish the frequencies of individual load magnitudes and pressure histograms both with and without the hydropneumatic accumulator. It was found that the energy recovery system reduces pressure surges of the working fluid during transients by a factor of 1.4-1.7 and allows power within 1.7...2.1 kW to be stored.
APA, Harvard, Vancouver, ISO, and other styles
38

Gutschick, Vincent P., Michael H. Barron, David A. Waechter, and Michael A. Wolf. "Portable monitor for solar radiation that accumulates irradiance histograms for 32 leaf-mounted sensors." Agricultural and Forest Meteorology 33, no. 4 (1985): 281–90. http://dx.doi.org/10.1016/0168-1923(85)90028-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Incoronato, Alfonso, Mauro Locatelli, and Franco Zappa. "Statistical Modelling of SPADs for Time-of-Flight LiDAR." Sensors 21, no. 13 (2021): 4481. http://dx.doi.org/10.3390/s21134481.

Full text
Abstract:
Time-of-Flight (TOF) based Light Detection and Ranging (LiDAR) is a widespread technique for distance measurements in both single-spot depth ranging and 3D mapping. Single Photon Avalanche Diode (SPAD) detectors provide single-photon sensitivity and allow in-pixel integration of a Time-to-Digital Converter (TDC) to measure the TOF of single photons. From the repetitive acquisition of photons returning from multiple laser shots, it is possible to accumulate a TOF histogram, so as to distinguish the laser pulse return from unwelcome ambient light and compute the desired distance information. In order to properly predict the TOF histogram distribution and design each component of the LiDAR system, from SPAD to TDC and histogram processing, we present a detailed statistical modelling of the acquisition chain and we show the perfect match with Monte Carlo simulations in very different operating conditions and at very high background levels. We take into consideration SPAD non-idealities such as hold-off time, afterpulsing, and crosstalk, and we show the heavy pile-up distortion in case of high background. Moreover, we also model non-idealities of the timing electronics chain, namely, TDC dead-time, limited number of storage cells for TOF data, and TDC sharing. Eventually, we show how to exploit the modelling to reversely extract the original LiDAR return signal from the distorted measured TOF data in different operating conditions.
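The accumulate-then-peak-find procedure this abstract describes can be sketched as follows. All parameters (bin width, shot count, signal probability, jitter, background level) are invented for illustration and are not taken from the cited paper, and SPAD non-idealities such as pile-up are deliberately ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- not values from the cited paper.
BIN_WIDTH_S = 1e-9          # 1 ns TDC bin
N_BINS = 100                # depth window of the histogram
TRUE_TOF_S = 40e-9          # simulated target return time
P_SIGNAL = 0.3              # chance a shot yields a signal photon
N_SHOTS = 5000
C = 3.0e8                   # speed of light, m/s

histogram = np.zeros(N_BINS, dtype=int)
for _ in range(N_SHOTS):
    if rng.random() < P_SIGNAL:
        # Signal photon: true TOF plus Gaussian timing jitter.
        t = TRUE_TOF_S + rng.normal(0.0, 0.3e-9)
    else:
        # Ambient-light photon: uniform over the whole window.
        t = rng.uniform(0.0, N_BINS * BIN_WIDTH_S)
    histogram[int(t / BIN_WIDTH_S)] += 1

# The laser return is the histogram peak above the flat background floor.
peak_bin = int(np.argmax(histogram))
distance_m = peak_bin * BIN_WIDTH_S * C / 2
```

Even with most detections coming from background light, the signal concentrates in one or two bins while the background spreads over all of them, which is why the peak survives.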
APA, Harvard, Vancouver, ISO, and other styles
40

Krasnov, Andrei E., Mikhail E. Golovkin, and Victoria I. Gerasimova. "RECOGNITION OF SIGNALS AND IMAGES BASED ON CAUSAL HILBERT AND FRESNEL TRANSFORMATIONS." RSUH/RGGU Bulletin. Series Information Science. Information Security. Mathematics, no. 4 (2024): 99–122. https://doi.org/10.28995/2686-679x-2024-4-99-122.

Full text
Abstract:
In the present work, the mathematical foundations of a unified approach to the description of signals are considered, based on the construction of phase portraits in the form of two-dimensional histograms of the joint values of signals and their Hilbert images, as well as two-dimensional histograms of the joint values of the real and imaginary components of the complex Fresnel transform of images. An important advantage of this approach is the invariance of the descriptions of signals and images to the group of translational, scale, and amplitude transformations for signals and translational, orientation, and amplitude transformations for images. In addition, a method for reducing two-dimensional phase portraits to one-dimensional histograms is proposed. A reasoned criterion for choosing the order of the digital Hilbert filter is considered; the filters applied make it possible, on the one hand, to distinguish similar signals against a background of noise and, on the other, to obtain an orthogonal complement of the signal that is not inferior to it in amplitude. The calculation of an empirical Kolmogorov-Smirnov-type criterion is described, which allows two empirical samples to be compared: finding the point at which the sum of accumulated discrepancies between the two distributions is largest and assessing the reliability of that discrepancy. Examples of the application of the considered approach to signal and image recognition are given.
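The phase-portrait construction described above can be illustrated with the standard analytic-signal route. The cited work builds causal digital Hilbert filters, so `scipy.signal.hilbert` here is only a stand-in, and the test signal and bin counts are invented.

```python
import numpy as np
from scipy.signal import hilbert

# Test signal: a pure tone over an integer number of periods.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)

# Analytic signal x + i*H{x}; the imaginary part is the Hilbert image.
analytic = hilbert(x)
xh = analytic.imag

# Phase portrait: 2-D histogram of the joint values (x, H{x}).
# For a pure tone the points lie on a circle, and the portrait is
# invariant to time shifts of the signal.
portrait, _, _ = np.histogram2d(x, xh, bins=32,
                                range=[[-1.2, 1.2], [-1.2, 1.2]])
```

A time-shifted copy of `x` traces the same circle, which is the translation invariance the abstract refers to.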
APA, Harvard, Vancouver, ISO, and other styles
41

Cuong, Nguyen-Van, Yung-Tsung Chen, and Ming-Fa Hsieh. "DOXORUBICIN-LOADED MICELLES OF Y-SHAPED PEG-(PCL)2 AGAINST DRUG-RESISTANT BREAST CANCER CELLS." Biomedical Engineering: Applications, Basis and Communications 25, no. 05 (2013): 1340009. http://dx.doi.org/10.4015/s1016237213400097.

Full text
Abstract:
Y-shaped amphiphilic copolymers have been synthesized by various strategies. They can self-assemble to form stable micellar drug carriers, and a higher ratio of hydrophobic segments in these micelles is known to stabilize encapsulated drugs. This study used methoxy poly(ethylene glycol) (mPEG, A block) and poly(ε-caprolactone) (PCL, B block) to synthesize an AB2-type amphiphilic block copolymer to encapsulate the anticancer drug doxorubicin (DOX). The synthesized mPEG-(PCL)2 self-assembled into nanoparticles, and its critical micelle concentration was 43.7 × 10-3 mg/mL. The particle sizes of the empty micelle and the DOX-loaded micelle were 95.1 and 21.4 nm, respectively. DOX-loaded micelles have a drug-loading efficiency of 22.3%, and in an in vitro release experiment up to 50% of the drug was released from the micelles at pH 5 and 40% at pH 7.4 in 48 h. The nitric oxide (NO) assay indicated that the micelles could avoid cytotoxic recognition by murine macrophage cells. The half-lethal doses (IC50) of DOX-loaded micelles for two human breast cancer cell lines (MCF-7/WT, wild-type, and MCF-7/ADR, adriamycin-resistant) were 0.937 and 7.476 μg/mL, respectively. For MCF-7/ADR cells, the ratio of the IC50 of free DOX to that of DOX-loaded micelles, reported as the resistance reversion index, was 0.125. Confocal images showed that DOX-loaded micelles accumulated mostly in the cytoplasm instead of the nuclei; on the contrary, free DOX diffused throughout the cells. Flow cytometric histograms indicated that the fluorescence intensity in the MCF-7/ADR cell line was about 83.2% and 50.9% for DOX-loaded micelles and free DOX, respectively. This result indicates that DOX-loaded micelles formed by the AB2 copolymer could overcome the multidrug resistance of breast cancer cells, as they accumulate more in MCF-7/ADR cells than free DOX.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Xiaofang, Tongyi Zhang, Yan Kang, Weiwei Li, and Jintao Liang. "High-Flux Fast Photon-Counting 3D Imaging Based on Empirical Depth Error Correction." Photonics 10, no. 12 (2023): 1304. http://dx.doi.org/10.3390/photonics10121304.

Full text
Abstract:
The time-correlated single-photon-counting (TCSPC) three-dimensional (3D) imaging lidar system has broad application prospects in the field of low-light 3D imaging because of its single-photon detection sensitivity and picosecond temporal resolution. However, conventional TCSPC systems limit the echo photon flux to an ultra-low level to obtain high-accuracy depth images, and thus need long acquisition times to accumulate sufficient photon detection events to form a reliable histogram. When the echo photon flux is increased to a medium or even high level, the data acquisition time can be shortened, but the photon pile-up effect seriously distorts the photon histogram and causes depth errors. To realize high-accuracy TCSPC depth imaging with a shorter acquisition time, we propose a high-flux fast photon-counting 3D imaging method based on empirical depth error correction. First, we derive the photon flux estimation formula and calculate the depth error of our photon-counting lidar under different photon fluxes with experimental data. Then, a function correction model between the depth errors and the number of echo photons is established by numerical fitting. Finally, the function correction model is used to correct depth images at high photon flux with different acquisition times. Experimental results show that the empirical error correction method can shorten the image acquisition time by about one order of magnitude while ensuring moderate accuracy of the depth image.
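The empirical correction idea — fit the systematic depth error as a function of echo photon number, then subtract the fitted error from measured depths — can be sketched as below. The calibration numbers and the log-quadratic model are hypothetical, not the paper's measurements or its actual fitting function.

```python
import numpy as np

# Hypothetical calibration table (not the paper's data): the systematic
# depth error grows with the mean echo photon number as pile-up skews
# the TCSPC histogram toward earlier bins.
photons = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # mean echo photons/pulse
depth_error_m = np.array([0.002, 0.008, 0.020, 0.045, 0.080])

# Empirical correction model: quadratic in the log of the photon flux.
coeffs = np.polyfit(np.log(photons), depth_error_m, deg=2)

def correct_depth(measured_depth_m, n_photons):
    """Subtract the fitted systematic error at this photon flux."""
    return measured_depth_m - np.polyval(coeffs, np.log(n_photons))
```

The same fitted model can then be applied per pixel, since each pixel's photon count is available from the histogram itself.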
APA, Harvard, Vancouver, ISO, and other styles
43

Aisyah, S., H. Hidayat, and D. Verawati. "Statistical Assessment of Some Water Quality and Rainfall Data in Ciliwung River, Indonesia." IOP Conference Series: Earth and Environmental Science 1062, no. 1 (2022): 012035. http://dx.doi.org/10.1088/1755-1315/1062/1/012035.

Full text
Abstract:
Abstract The Ciliwung is a river that flows from its upstream reaches in the Puncak area of Bogor Regency to Jakarta Bay. Water quality parameters have been monitored each month at three stations in the Ciliwung River watershed by the Indonesian Ministry of Environment and Forestry and the Meteorological, Climatological, and Geophysical Agency. The data analyzed in this study are rainfall and water quality data for the period 2017-2020, with water quality variables including pH, temperature, Dissolved Oxygen (DO), electrical conductivity (EC), turbidity, Total Dissolved Solids (TDS), and Nitrate-N. This paper analyzes the time series data using statistical methods and describes certain chemical parameters and rainfall data that characterize Ciliwung water quality during 2017-2020. Descriptive analysis was used to determine the mean, skewness, and kurtosis values by time and location. Histogram and boxplot graphs were used to describe the distribution of the data set, and the Kolmogorov-Smirnov test was used to evaluate its normality. The descriptive analysis showed that the mean of each water quality parameter did not differ significantly between observation locations, except for TDS. The histogram and boxplot graphs show that the data follow a non-normal distribution and contain many outliers. The Kolmogorov-Smirnov test provided a better assessment of the normality of the data distribution. The main pollution problem is the consumption of oxygen by organic matter contained in the river water. The downstream Ciliwung River has become an open accumulator of wastewater from the food industry, livestock, and settlements.
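A minimal sketch of the Kolmogorov-Smirnov normality check used in the study, run here on synthetic data standing in for the river measurements: a right-skewed sample with a heavy tail (like the outlier-laden water quality series) and a well-behaved Gaussian sample. Note that estimating the normal parameters from the sample makes the plain KS test conservative; a Lilliefors correction would be stricter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-ins for the measured series (illustrative only).
skewed = rng.lognormal(mean=1.0, sigma=0.8, size=300)   # e.g. turbidity-like
gaussian = rng.normal(loc=10.0, scale=2.0, size=300)    # e.g. temperature-like

def ks_normality_pvalue(x):
    """One-sample KS test of the standardized data against N(0, 1)."""
    z = (x - x.mean()) / x.std(ddof=1)
    return stats.kstest(z, "norm").pvalue

p_skewed = ks_normality_pvalue(skewed)      # small: normality rejected
p_gaussian = ks_normality_pvalue(gaussian)  # large: consistent with normal
```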
APA, Harvard, Vancouver, ISO, and other styles
44

Zepernick, Hans-Jürgen, Markus Fiedler, Thi My Chinh Chu, and Viktor Kelkkanen. "Video Freeze Assessment of TPCAST Wireless Virtual Reality: An Experimental Study." Applied Sciences 12, no. 3 (2022): 1733. http://dx.doi.org/10.3390/app12031733.

Full text
Abstract:
Wireless virtual reality (VR) offers a seamless user experience but has to cope with higher sensitivity to temporal impairments induced on the wireless link. Apart from bandwidth dynamics and latency, video freezes and their lengths are important temporal performance indicators that impact the quality of experience (QoE) of networked VR applications and services. This paper reports an experimental study that focuses on the VR video frame freeze length characteristics of a wireless VR solution. A comprehensive measurement campaign using a commercial TPCAST wireless VR solution with an HTC Vive head-mounted display was conducted to obtain real VR video traces. The number of detected freezes and freeze intensities are reported both accumulated over four room quadrants as well as for each of the four quadrants subject to six transmitter-receiver distances. The statistical analysis of the VR video traces of the different experiments includes histograms of the freeze lengths and complementary cumulative histograms of the freeze lengths. The results of this analysis offer insights into the density of the underlying distributions of the measured data, illustrate the impact of the room topology on the freeze characteristics, and suggest the statistical modeling of the freeze characteristics as exponential and geometric distributions. The statistical models of the freeze characteristics may be included in wireless VR simulators supporting the development of physical layer, medium access layer, and higher layer functionalities. They also may serve as network-disturbance models for VR QoE studies, e.g., generating realistic freeze events in wireless VR stimuli.
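Fitting an exponential model to measured freeze lengths, as the abstract suggests, might look like the following sketch. The sample here is synthetic (exponential with a 50 ms scale), standing in for the TPCAST traces, and the goodness-of-fit check is a plain one-sample KS test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic freeze lengths in milliseconds (illustrative stand-in).
freeze_ms = rng.exponential(scale=50.0, size=400)

# Fit an exponential model with the location pinned at zero, then check
# the fit with a one-sample Kolmogorov-Smirnov test.
loc, scale = stats.expon.fit(freeze_ms, floc=0)
pvalue = stats.kstest(freeze_ms, "expon", args=(loc, scale)).pvalue
```

A high p-value means the exponential hypothesis is not rejected; the fitted `scale` is then the mean freeze length used to drive a simulator's freeze generator.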
APA, Harvard, Vancouver, ISO, and other styles
45

Fawad, Muhammad Jamil Khan, and MuhibUr Rahman. "Person Re-Identification by Discriminative Local Features of Overlapping Stripes." Symmetry 12, no. 4 (2020): 647. http://dx.doi.org/10.3390/sym12040647.

Full text
Abstract:
The human visual system can recognize a person based on physical appearance, even under extreme spatio-temporal variations. However, the surveillance systems deployed so far fail to re-identify an individual who travels through non-overlapping cameras' fields of view. Person re-identification (Re-ID) is the task of associating individuals across disjoint camera views. In this paper, we propose a robust feature extraction model named Discriminative Local Features of Overlapping Stripes (DLFOS) that can associate corresponding individuals in a disjoint visual surveillance system. The proposed DLFOS model accumulates the discriminative features from the local patch of each overlapping stripe of the pedestrian appearance. The concatenation of the histogram of oriented gradients, Gaussian of color, and the magnitude operator of CJLBP brings robustness to the final feature vector. The experimental results show that our proposed feature extraction model achieves a rank@1 matching rate of 47.18% on VIPeR, 64.4% on CAVIAR4REID, and 62.68% on Market1501, outperforming recently reported models from the literature and validating the advantage of the proposed model.
APA, Harvard, Vancouver, ISO, and other styles
46

Akram, M. Waseem, Muhammad Fakhar-e-Alam, Alvina Rafiq Butt, et al. "Magnesium Oxide in Nanodimension: Model for MRI and Multimodal Therapy." Journal of Nanomaterials 2018 (June 20, 2018): 1–12. http://dx.doi.org/10.1155/2018/4210920.

Full text
Abstract:
The prime focus of this investigation is to determine which morphology of magnesium oxide (MgO) is nontoxic and accumulates in sufficient quantity in a human brain cellular/tissue model. Nanostructured MgO was synthesized by a coprecipitation technique involving twin synthetic protocols, and the resulting product was characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), a size distribution histogram, Fourier-transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis; the elemental composition was confirmed by EDX analysis. The samples were tested for selective antigen response in a human brain cancer model through biodistribution, biotoxicity via MTT assay, and tissue morphology. In addition, the MRI compatibility of the MgO nanostructures and immunofluorescence of nanoconjugates with different immunoglobulins were investigated in brain sections. The results indicated that MgO had some degree of binding with the antigens. These results led to empirical modeling of the toxicity of MgO nanomaterials in cancer cells by analyzing the statistical data obtained from the experiments. All these results provide a new rational strategy based on MgO for MRI and PTT/PDT.
APA, Harvard, Vancouver, ISO, and other styles
47

Scaife, Jessica E., Simon J. Thomas, Karl Harrison, et al. "Accumulated dose to the rectum, measured using dose–volume histograms and dose-surface maps, is different from planned dose in all patients treated with radiotherapy for prostate cancer." British Journal of Radiology 88, no. 1054 (2015): 20150243. http://dx.doi.org/10.1259/bjr.20150243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Keane, Richard J., George C. Craig, Christian Keil, and Günther Zängl. "The Plant–Craig Stochastic Convection Scheme in ICON and Its Scale Adaptivity." Journal of the Atmospheric Sciences 71, no. 9 (2014): 3404–15. http://dx.doi.org/10.1175/jas-d-13-0331.1.

Full text
Abstract:
Abstract The emergence of numerical weather prediction and climate models with multiple or variable resolutions requires that their parameterizations adapt correctly, with consistent increases in variability as resolution increases. In this study, the stochastic convection scheme of Plant and Craig is tested in the Icosahedral Nonhydrostatic GCM (ICON), which is planned to be used with multiple resolutions. The model is run in an aquaplanet configuration with horizontal resolutions of 160, 80, and 40 km, and frequency histograms of 6-h accumulated precipitation amount are compared. Precipitation variability is found to increase substantially at high resolution, in contrast to results using two reference deterministic schemes in which the distribution is approximately independent of resolution. The consistent scaling of the stochastic scheme with changing resolution is demonstrated by averaging the precipitation fields from the 40- and 80-km runs to the 160-km grid, showing that the variability is then the same as that obtained from the 160-km model run. It is shown that upscale averaging of the input variables for the convective closure is important for producing consistent variability at high resolution.
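The upscale-averaging consistency check described above — averaging a fine-grid precipitation field onto a coarser grid and comparing variability — can be sketched with simple block averaging. The field below is synthetic (a skewed gamma sample mimicking intermittent precipitation) and the grid sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 6-h accumulated precipitation on a fine grid (say 40 km).
fine = rng.gamma(shape=0.5, scale=2.0, size=(64, 64))

def block_average(field, factor):
    """Average non-overlapping factor x factor blocks (e.g. 40 km -> 160 km
    with factor=4). The domain mean is preserved exactly; the spatial
    variability shrinks, which is the scale-adaptive behavior being tested."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

coarse = block_average(fine, 4)
```

Comparing the frequency histogram of `coarse` with that of a native coarse-resolution run is the consistency test the study performs.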
APA, Harvard, Vancouver, ISO, and other styles
49

Parsotan, Tyler, Sibasish Laha, David M. Palmer, et al. "BatAnalysis: A Comprehensive Python Pipeline for Swift BAT Survey Analysis." Astrophysical Journal 953, no. 2 (2023): 155. http://dx.doi.org/10.3847/1538-4357/ace325.

Full text
Abstract:
Abstract The Swift Burst Alert Telescope (BAT) is a coded-aperture gamma-ray instrument with a large field of view that primarily operates in survey mode when it is not triggering on transient events. The survey data consist of 80-channel detector plane histograms that accumulate photon counts over periods of at least 5 minutes. These histograms are processed on the ground and are used to produce the survey data set between 14 and 195 keV. Survey data comprise >90% of all BAT data by volume and allow for the tracking of long-term light curves and spectral properties of cataloged and uncataloged hard X-ray sources. Until now, the survey data set has not been used to its full potential due to the complexity associated with its analysis and the lack of easily usable pipelines. Here, we introduce the BatAnalysis Python package, a wrapper for HEASoftpy, which provides a modern, open-source pipeline to process and analyze BAT survey data. BatAnalysis allows members of the community to use BAT survey data in more advanced analyses of astrophysical sources, including pulsars, pulsar wind nebulae, active galactic nuclei, and other known/unknown transient events that may be detected in the hard X-ray band. We outline the steps taken by the Python code and exemplify its usefulness and accuracy by analyzing survey data of the Crab Nebula, NGC 2992, and a previously uncataloged MAXI transient. The BatAnalysis package allows for ∼18 yr of BAT survey data to be used in a systematic way to study a large variety of astrophysical sources.
APA, Harvard, Vancouver, ISO, and other styles
50

Pincus, Robert, Paul A. Hubanks, Steven Platnick, et al. "Updated observations of clouds by MODIS for global model assessment." Earth System Science Data 15, no. 6 (2023): 2483–97. http://dx.doi.org/10.5194/essd-15-2483-2023.

Full text
Abstract:
Abstract. This paper describes a new global dataset of cloud properties observed by MODIS relying on the current (collection 6.1) processing of MODIS data and produced to facilitate comparison with results from the MODIS observational proxy used in climate models. The dataset merges observations from the two MODIS instruments into a single netCDF file. Statistics (mean, standard deviation, and number of observations) are accumulated over daily and monthly timescales on an equal-angle grid for viewing and illumination geometry, cloud detection, cloud-top pressure, and cloud properties (optical thickness, effective particle size, and water path) partitioned by thermodynamic phase and an assessment as to whether the underlying observations come from fully or partly cloudy pixels. Similarly partitioned joint histograms are available for (1) optical thickness and cloud-top pressure, (2) optical thickness and particle size, and (3) cloud water path and particle size. Differences with standard data products, caveats for data use, and guidelines for comparison to the MODIS simulator are described. Data are available on daily (https://doi.org/10.5067/MODIS/MCD06COSP_D3_MODIS.062; NASA, 2022b) and monthly (https://doi.org/10.5067/MODIS/MCD06COSP_M3_MODIS.062; NASA, 2022a) timescales.
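A joint histogram of the kind this dataset provides — e.g., optical thickness versus cloud-top pressure — can be built from per-pixel retrievals with `numpy.histogram2d`. The bin edges below follow familiar ISCCP-style class boundaries rather than the product's actual edges, and the synthetic retrievals are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Purely illustrative per-pixel retrievals (not MODIS data).
tau = rng.lognormal(mean=1.5, sigma=1.0, size=10_000)   # optical thickness
ctp = rng.uniform(100.0, 1000.0, size=10_000)           # cloud-top pressure, hPa

# ISCCP-style class boundaries, used here only as an example layout.
tau_edges = [0.0, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0]
ctp_edges = [50.0, 180.0, 310.0, 440.0, 560.0, 680.0, 800.0, 1000.0]

# Joint histogram: pixel counts per (optical thickness, CTP) class.
counts, _, _ = np.histogram2d(tau, ctp, bins=[tau_edges, ctp_edges])
```

Normalizing `counts` by the total pixel count turns it into the relative-frequency joint histogram that model simulators are compared against.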
APA, Harvard, Vancouver, ISO, and other styles