
Journal articles on the topic 'Sampling system loss correction'


Consult the top 50 journal articles for your research on the topic 'Sampling system loss correction.'


1

Mammarella, Ivan, Samuli Launiainen, Tiia Gronholm, et al. "Relative Humidity Effect on the High-Frequency Attenuation of Water Vapor Flux Measured by a Closed-Path Eddy Covariance System." Journal of Atmospheric and Oceanic Technology 26, no. 9 (2009): 1856–66. http://dx.doi.org/10.1175/2009jtecha1179.1.

Abstract:
In this study the high-frequency loss of carbon dioxide (CO2) and water vapor (H2O) fluxes measured by a closed-path eddy covariance system was studied, and the related correction factors were calculated through the cospectral transfer function method. As already reported by other studies, it was found that the age of the sampling tube is a relevant factor to consider when estimating the spectral correction of water vapor fluxes. Moreover, a time-dependent relationship between the characteristic time constant (or response time) for water vapor and the ambient relative humidity was disclosed. Such dependence is negligible when the sampling tube is new, but it becomes important when the tube is only 1 yr old and increases with the age of the tube. With a new sampling tube, the correction of water vapor flux measurements over a Scots pine forest in Hyytiälä, Finland, amounted on average to 7%. After 4 yr the correction increased strongly, ranging from 10%–15% during the summer to 30%–40% in wintertime, when the relative humidity is typically high. For this site the effective correction improved the long-term energy and water balance. Results suggest that the relative humidity effect on the high-frequency loss of water vapor flux should be taken into account and that the effective transfer function should be estimated experimentally at least once per year. Alternatively, this large correction can be avoided by a correct choice and periodic maintenance of the eddy covariance system tube, for example by cleaning or replacing it at least once per year.
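The cospectral transfer function method described in this abstract can be sketched as follows: the closed-path system is modelled as a first-order low-pass filter with time constant τ, and the flux correction factor is the ratio of the unattenuated to the attenuated cospectral integral. The model cospectrum shape and the τ values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def transfer_function(f, tau):
    """Squared gain of a first-order low-pass system with time constant tau (s)."""
    return 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)

def model_cospectrum(f, fm=0.1):
    """Idealized w'c' cospectral density peaking near fm (Hz); a stand-in for
    site-specific cospectra (illustrative shape only)."""
    return (f / fm) / (f * (1.0 + (f / fm) ** 2) ** 1.085)

def _trapz(y, x):
    """Trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def spectral_correction_factor(tau):
    """Flux correction factor: unattenuated over attenuated cospectral integral."""
    f = np.logspace(-3, 1, 2000)
    co = model_cospectrum(f)
    return _trapz(co, f) / _trapz(transfer_function(f, tau) * co, f)
```

With a fresh tube (τ on the order of 0.1 s) the factor stays close to 1; an aged tube at high relative humidity (τ approaching 1 s) yields a markedly larger correction, mirroring the seasonal pattern the abstract reports.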
2

Huang, Liang, Xiang Zhi Li, Qing Lei Zhao, and Cheng Shan Han. "The Design and Implementation of Data Sampling and Processing System of EUV Camera Based on FPGA." Applied Mechanics and Materials 483 (December 2013): 374–77. http://dx.doi.org/10.4028/www.scientific.net/amm.483.374.

Abstract:
An FPGA is applied to the Extreme Ultraviolet (EUV) camera in emerging aerospace applications. First, the imaging principle of the EUV camera is analyzed and the FPGA type is selected. Second, the data collection module is designed, and the original data correction and coordinate conversion algorithms are developed. Finally, simulation results indicate that the method used in this paper greatly improves dependability and stability at the cost of a small loss in speed.
3

Li, W., J. B. Lu, X. F. Zhao, W. W. Cui, Y. J. Yang, and Y. Chen. "Periodic sampling and count rate correction of the low energy X-ray telescope on board the Insight-Hard X-ray modulation telescope." Journal of Instrumentation 19, no. 05 (2024): P05013. http://dx.doi.org/10.1088/1748-0221/19/05/p05013.

Abstract:
The Low Energy X-ray Telescope (LE) is an important instrument of the Insight-Hard X-ray Modulation Telescope (Insight-HXMT), which performs scanning and pointed observations in the soft X-ray energy band (0.7–13 keV). Since its launch in 2017, it has conducted a large number of scientific observations and achieved significant astronomical discoveries. LE is composed of three detector boxes and an electronic control box. The data acquisition system of LE is designed to use periodic sampling, which solves the problem of unstable noise peaks. Data loss may occur when observing X-ray bursts, and periodic sampling provides a method of correcting the light curves. In addition, in the South Atlantic Anomaly (SAA), or where the count rate of the particle monitor on Insight-HXMT is very high, LE automatically switches into SAA operating mode and only a small number of events are recorded. After data correction, the light curves can be estimated, extending the astronomical achievements of LE.
4

Awan, Hafiz Shakeel Ahmad, and Muhammad Tariq Mahmood. "Underwater Image Restoration through Color Correction and UW-Net." Electronics 13, no. 1 (2024): 199. http://dx.doi.org/10.3390/electronics13010199.

Abstract:
The restoration of underwater images plays a vital role in underwater target detection and recognition, underwater robots, underwater rescue, sea organism monitoring, marine geological surveys, and real-time navigation. In this paper, we propose an end-to-end neural network model, UW-Net, that leverages discrete wavelet transform (DWT) and inverse discrete wavelet transform (IDWT) for effective feature extraction for underwater image restoration. First, a color correction method is applied that compensates for color loss in the red and blue channels. Then, a U-Net based network that applies DWT for down-sampling and IDWT for up-sampling is designed for underwater image restoration. Additionally, a chromatic adaptation transform layer is added to the net to enhance the contrast and color in the restored image. The model is rigorously trained and evaluated using well-known datasets, demonstrating an enhanced performance compared with existing methods across various metrics in experimental evaluations.
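The lossless down/up-sampling pair at the heart of UW-Net can be illustrated with a one-level 2-D Haar transform (a minimal NumPy sketch, independent of the paper's actual network or weights): the DWT halves the spatial resolution into four sub-bands, and the IDWT reconstructs the input exactly, so no spatial information is discarded during down-sampling.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: split an (H, W) map into four (H/2, W/2) bands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]; c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # low-low: coarse approximation
    lh = (a + b - c - d) / 2.0  # vertical detail
    hl = (a - b + c - d) / 2.0  # horizontal detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: exact reconstruction from the four sub-bands."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x
```

Round-tripping any even-sized array through `haar_dwt2` and `haar_idwt2` returns it unchanged, which is the property that motivates wavelet-based down-sampling in restoration networks.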
5

Metzger, Stefan, George Burba, Sean P. Burns, et al. "Optimization of an enclosed gas analyzer sampling system for measuring eddy covariance fluxes of H2O and CO2." Atmospheric Measurement Techniques 9, no. 3 (2016): 1341–59. http://dx.doi.org/10.5194/amt-9-1341-2016.

Abstract:
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability of unbiased ecological inference across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps this frequency falls into the ranges 2.5–16.5 Hz for CO2 and 2.4–14.3 Hz for H2O; for particulate filters, 8.3–21.8 Hz for CO2 and 1.4–19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W.
These laboratory and field tests were reconciled using resistor–capacitor theory, and NEON's final gas sampling system was developed on this basis. The design consists of the stainless steel intake tube, a pleated mesh particulate filter and a low-volume rain cap in combination with 4 W of heating and insulation. In comparison to the original design, this reduced the high-frequency attenuation for H2O by ≈ 3/4, and the remaining cospectral correction did not exceed 3 %, even at high relative humidity (95 %). The standardized design can be used across a wide range of ecoclimates and site layouts, and maximizes practicability due to minimal flow resistance and maintenance needs. Furthermore, due to minimal high-frequency spectral loss, it supports the routine application of adaptive correction procedures, and enables largely automated data processing across sites.
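The resistor–capacitor picture used to reconcile these tests can be sketched as one first-order element per component, with elements in series multiplying their gains; the 50 % attenuation frequency then follows directly from an element's time constant. The τ values used below are illustrative, not NEON's measured ones.

```python
import math

def first_order_gain(f, tau):
    """Amplitude response of a first-order (RC-type) sampling element."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

def half_attenuation_frequency(tau):
    """Frequency (Hz) at which the signal amplitude is attenuated to 50 %:
    solve 1/sqrt(1 + (2*pi*f*tau)^2) = 0.5, giving f = sqrt(3)/(2*pi*tau)."""
    return math.sqrt(3.0) / (2.0 * math.pi * tau)

def series_gain(f, taus):
    """Elements in series (rain cap, filter, tube) multiply: overall gain
    is the product of the individual gains."""
    g = 1.0
    for tau in taus:
        g *= first_order_gain(f, tau)
    return g
```

The series rule is why each added component lowers the system's effective 50 % attenuation frequency, and why the optimized design minimizes the number and volume of flow elements.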
6

Metzger, S., G. Burba, S. P. Burns, et al. "Optimization of a gas sampling system for measuring eddy-covariance fluxes of H2O and CO2." Atmospheric Measurement Techniques Discussions 8, no. 10 (2015): 10983–1028. http://dx.doi.org/10.5194/amtd-8-10983-2015.

Abstract:
Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) will provide the ability of unbiased ecological inference across eco-climatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analysers are widely employed for eddy-covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties, and requires correction. Here, we show that the gas sampling system substantially contributes to high-frequency attenuation, which can be minimized by careful design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps this frequency falls into the ranges 2.5–16.5 Hz for CO2 and 2.4–14.3 Hz for H2O; for particulate filters, 8.3–21.8 Hz for CO2 and 1.4–19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyser cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis.
The design consists of the stainless steel intake tube, a pleated mesh particulate filter, and a low-volume rain cap in combination with 4 W of heating and insulation. In comparison to the original design, this reduced the high-frequency attenuation for H2O by ≈ 3/4, and the remaining cospectral correction did not exceed 3 %, even at a very high relative humidity (95 %). This standardized design can be used across a wide range of eco-climates and site layouts, and maximizes practicability due to minimal flow resistance and maintenance needs. Furthermore, due to minimal high-frequency spectral loss, it supports the routine application of adaptive correction procedures, and enables more automated data processing across sites.
7

Arenas, Jorge, Chiara Valderrama, and Jorge Cardenas. "Comparing Kurtosis-adjusted weighted levels with other metrics to assess the risk of hearing loss from non-Gaussian noise exposures." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 270, no. 5 (2024): 6674–81. http://dx.doi.org/10.3397/in_2024_3851.

Abstract:
Several studies have shown that impulsive noise can cause more damage to hearing than steady-state noise of equal energy. As a result, a large body of research has been devoted to evaluating the hazards of impulsive noise. However, work environments often have varying noise patterns, including Gaussian background noise combined with high-level transient noises contributing to the worker's daily dose. Thus, an energy-based noise metric underestimates the risk of hearing loss unless it incorporates a temporal structure correction term. Kurtosis has been reported as an effective adjunct to energy for predicting hearing hazards from non-Gaussian noise exposures. In this work, recordings of real impulsive noise were made using a portable measuring system at a very high sampling frequency and using high-pressure microphones. Controlled synthetic non-Gaussian noise was generated from these impulsive signals and steady-state background noise. After exploring the detailed temporal structure of the noise, the daily exposure time was determined using different metrics, including kurtosis-adjusted weighted levels, weighted peak levels, and auditory risk units. The examples presented here aim to demonstrate how kurtosis can help develop more accurate methods to prevent noise-induced hearing loss.
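A kurtosis-adjusted weighted level of the kind this abstract compares can be sketched as follows. The penalty term proportional to log10(β/3) follows one published proposal (λ ≈ 4.02); the λ value and the example levels are assumptions for illustration, not the paper's own parameters.

```python
import math

def kurtosis(samples):
    """Sample kurtosis beta = m4 / m2^2 (Gaussian noise gives beta close to 3)."""
    n = len(samples)
    mu = sum(samples) / n
    m2 = sum((x - mu) ** 2 for x in samples) / n
    m4 = sum((x - mu) ** 4 for x in samples) / n
    return m4 / m2 ** 2

def kurtosis_adjusted_level(laeq_db, beta, lam=4.02, beta_gauss=3.0):
    """Add a kurtosis-based penalty to an energy-based level (dB).
    lam follows one proposal from the literature and is an assumption here."""
    if beta <= beta_gauss:
        return laeq_db  # no penalty for Gaussian or sub-Gaussian noise
    return laeq_db + lam * math.log10(beta / beta_gauss)
```

For example, a noise exposure with β = 30 (strongly impulsive) at LAeq = 85 dB receives a ~4 dB penalty, whereas Gaussian noise of the same energy receives none, which is exactly the distinction an unadjusted energy metric misses.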
8

Peltola, Olli, Toprak Aslan, Andreas Ibrom, Eiko Nemitz, Üllar Rannik, and Ivan Mammarella. "The high-frequency response correction of eddy covariance fluxes – Part 1: An experimental approach and its interdependence with the time-lag estimation." Atmospheric Measurement Techniques 14, no. 7 (2021): 5071–88. http://dx.doi.org/10.5194/amt-14-5071-2021.

Abstract:
The eddy covariance (EC) technique has emerged as the prevailing method to observe the ecosystem–atmosphere exchange of gases, heat and momentum. EC measurements require rigorous data processing to derive the fluxes that can be used to analyse exchange processes at the ecosystem–atmosphere interface. Here we show that two common post-processing steps (time-lag estimation via cross-covariance maximisation and correction for limited frequency response of the EC measurement system) are interrelated, and this should be accounted for when processing EC gas flux data. These findings are applicable to EC systems employing closed- or enclosed-path gas analysers which can be approximated to be linear first-order sensors. These EC measurement systems act as low-pass filters on the time series of the scalar χ (e.g. CO2, H2O), and this induces a time lag (tlpf) between vertical wind speed (w) and scalar χ time series which is additional to the travel time of the gas signal in the sampling line (tube, filters). Time-lag estimation via cross-covariance maximisation inadvertently accounts also for tlpf and hence overestimates the travel time in the sampling line. This results in a phase shift between the time series of w and χ, which distorts the measured cospectra between w and χ and hence has an effect on the correction for the dampening of the EC flux signal at high frequencies. This distortion can be described with a transfer function related to the phase shift (Hp) which is typically neglected when processing EC flux data. Based on analyses using EC data from two contrasting measurement sites, we show that the low-pass-filtering-induced time lag increases approximately linearly with the time constant of the low-pass filter, and hence the importance of Hp in describing the high-frequency flux loss increases as well.
Incomplete description of these processes in EC data processing algorithms results in flux biases of up to 10 %, with the largest biases observed for short towers due to the prevalence of small-scale turbulence. Based on these findings, it is suggested that spectral correction methods implemented in EC data processing algorithms are revised to account for the influence of low-pass-filtering-induced time lag.
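The interdependence this abstract describes can be reproduced with a toy simulation (a sketch with made-up sampling rate, signal frequency and time constants): low-pass filtering a scalar shifts its phase, so cross-covariance maximisation finds a positive lag even when the true travel time is zero, and the lag grows with the filter time constant.

```python
import numpy as np

def first_order_filter(x, dt, tau):
    """Discrete first-order low-pass: mimics tube/cell attenuation of the scalar."""
    a = np.exp(-dt / tau)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = a * y[i - 1] + (1.0 - a) * x[i]
    return y

def lag_of_max_crosscov(w, chi, max_lag):
    """Lag (in samples) maximising cov(w[t], chi[t + lag]): the EC time-lag search."""
    covs = [np.mean((w[:len(w) - k] - w.mean()) * (chi[k:] - chi.mean()))
            for k in range(0, max_lag + 1)]
    return int(np.argmax(covs))
```

Running this with a 20 Hz sinusoidal "scalar" and increasing τ shows the recovered lag rising even though nothing was physically delayed, which is the lag the authors call tlpf.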
9

Sintermann, J., C. Spirig, A. Jordan, U. Kuhn, C. Ammann, and A. Neftel. "Eddy covariance flux measurements of ammonia by electron transfer reaction-mass spectrometry." Atmospheric Measurement Techniques Discussions 3, no. 6 (2010): 4707–59. http://dx.doi.org/10.5194/amtd-3-4707-2010.

Abstract:
A system for fast ammonia (NH3) measurements based on a commercial Proton Transfer Reaction-Mass Spectrometer is presented. It uses electron transfer reaction (eTR) as the ionisation pathway and features a drift tube of polyetheretherketone (PEEK) and silica-coated steel. Heating the instrumental inlet and the drift tube to 180 °C enabled an effective time resolution of ~1 s and made it possible to apply the eTR-MS for eddy covariance (EC) measurements. EC fluxes of NH3 were measured over two agricultural fields in Oensingen, Switzerland, following fertilisations with cattle slurry. Air was aspirated close to a sonic anemometer at a flow of 100 STP L min−1 and was directed through a 23 m long 1/2" PFA tube heated to 150 °C to an air-conditioned trailer where the eTR-MS sub-sampled from the large bypass stream. This setup minimised damping of fast NH3 concentration changes between the sampling point and the actual measurement. High-frequency attenuation loss of the NH3 fluxes of 20 to 40% was quantified and corrected for using an empirical ogive method. The instrumental NH3 background signal showed a minor interference with H2O which was characterised in the laboratory. The resulting correction of the NH3 flux after slurry spreading was less than 1‰. The flux detection limit of the EC system was about 5 ng m−2 s−1, while the accuracy of individual flux measurements was estimated at 16% for the high-flux regime during these experiments. The NH3 emissions after broad spreading of the slurry showed an initial maximum of 150 μg m−2 s−1 with a fast decline in the following hours.
10

Sintermann, J., C. Spirig, A. Jordan, U. Kuhn, C. Ammann, and A. Neftel. "Eddy covariance flux measurements of ammonia by high temperature chemical ionisation mass spectrometry." Atmospheric Measurement Techniques 4, no. 3 (2011): 599–616. http://dx.doi.org/10.5194/amt-4-599-2011.

Abstract:
A system for fast ammonia (NH3) measurements with chemical ionisation mass spectrometry (CIMS) based on a commercial Proton Transfer Reaction-Mass Spectrometer (PTR-MS) is presented. It uses electron transfer reaction as the ionisation pathway and features a drift tube of polyetheretherketone (PEEK) and silica-coated steel. Heating the instrumental inlet and the drift tube to 180 °C enabled an effective time resolution of ~1 s and made it possible to apply the instrument for eddy covariance (EC) measurements. EC fluxes of NH3 were measured over two agricultural fields in Oensingen, Switzerland, following fertilisations with cattle slurry. Air was aspirated close to a sonic anemometer at a flow of 100 STP L min−1 and was directed through a 23 m long 1/2" PFA tube heated to 150 °C to an air-conditioned trailer where the gas was sub-sampled from the large bypass stream. This setup minimised damping of fast NH3 concentration changes between the sampling point and the actual measurement. High-frequency attenuation loss of the NH3 fluxes of 20 to 40% was quantified and corrected for using an empirical ogive method. The instrumental NH3 background signal showed a minor interference with H2O which was characterised in the laboratory. The resulting correction of the NH3 flux after slurry spreading was less than 1‰. The flux detection limit of the EC system was about 5 ng m−2 s−1, while the accuracy of individual flux measurements was estimated at 16% for the high-flux regime during these experiments. The NH3 emissions after broad spreading of the slurry showed an initial maximum of 150 μg m−2 s−1 with a fast decline in the following hours.
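The empirical ogive approach used here to quantify the 20–40 % attenuation can be sketched with synthetic cospectra (an illustration, not the authors' data): the damped scalar's cospectrum is compared with an unattenuated reference after matching the two in a low-frequency band assumed free of damping, and the remaining flux deficit gives the fractional loss.

```python
import numpy as np

def _cumint(y, x):
    """Trapezoidal integral of y over x."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def ogive_flux_loss(f, co_ref, co_damped, f_match=0.01):
    """Fractional high-frequency flux loss of the damped scalar.

    The reference cospectrum is scaled to the damped one over the band
    f <= f_match, assumed unattenuated; the shortfall of the damped flux
    relative to the scaled reference is the loss."""
    low = f <= f_match
    scale = _cumint(co_damped[low], f[low]) / _cumint(co_ref[low], f[low])
    flux_ref = scale * _cumint(co_ref, f)
    flux_damped = _cumint(co_damped, f)
    return 1.0 - flux_damped / flux_ref
```

Because attenuation grows with frequency, the loss estimate increases with the effective time constant of the sampling line, consistent with the larger corrections reported for long heated tubes.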
11

Keel, Stuart, Jing Xie, Joshua Foreman, Hugh R. Taylor, and Mohamed Dirani. "Population-based assessment of visual acuity outcomes following cataract surgery in Australia: the National Eye Health Survey." British Journal of Ophthalmology 102, no. 10 (2018): 1419–24. http://dx.doi.org/10.1136/bjophthalmol-2017-311257.

Abstract:
Aim: To assess the visual outcomes of cataract surgery among a national sample of non-Indigenous and Indigenous Australians. Methods: This was a population-based study of 3098 non-Indigenous Australians (50–98 years) and 1738 Indigenous Australians (40–92 years), stratified by remoteness. A poor postoperative outcome in an eye that had undergone cataract surgery was defined as presenting distance visual acuity (PVA) <6/12–6/60, and a very poor outcome was defined as PVA <6/60. Effective cataract surgery coverage (eCSC; operated cataract with a good outcome (PVA ≥6/12) as a proportion of operable plus operated cataract) was calculated. Results: The sampling-weight-adjusted cataract surgery prevalence was 19.8% (95% CI 17.9 to 22.0) in non-Indigenous Australians and 8.2% (95% CI 6.0 to 9.6) in Indigenous Australians. Among the non-Indigenous population, poor and very poor PVA outcomes were present in 18.1% and 1.9% of eyes, respectively. For Indigenous Australians, these values were 27.8% and 6.3%, respectively. The main causes of poor vision were refractive error (non-Indigenous = 41.8%; Indigenous = 41.9%) and coincident disease (non-Indigenous = 43.3%; Indigenous = 40.3%). The eCSC rates in the non-Indigenous and Indigenous populations were 88.5% (95% CI 85.2 to 91.2) and 51.6% (95% CI 42.4 to 60.7), respectively. Conclusion: Approximately half of the eyes with a poor visual outcome after cataract surgery could be readily avoided through appropriate refractive correction. The lower eCSC rate among Indigenous Australians suggests that improvements in access to and quality of cataract services may be warranted to reduce cataract-related vision loss in the Indigenous population.
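The coverage metric defined in this abstract reduces to a one-line calculation; the counts below are made up for illustration (the survey's actual denominators are not given here).

```python
def cataract_surgery_coverage(n_operated, n_operable_untreated):
    """CSC (%): operated cataract as a share of operable plus operated cataract."""
    return 100.0 * n_operated / (n_operated + n_operable_untreated)

def effective_csc(n_operated_good_outcome, n_operated, n_operable_untreated):
    """eCSC (%): operated cataract with a good outcome (PVA >= 6/12) as a
    share of operable plus operated cataract."""
    return 100.0 * n_operated_good_outcome / (n_operated + n_operable_untreated)
```

By construction eCSC is never larger than CSC; the gap between the two is exactly the share of operated eyes with poor outcomes, which is why appropriate refractive correction alone would raise eCSC substantially.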
12

Ciani, Daniele, Claudia Fanelli, and Bruno Buongiorno Nardelli. "Estimating ocean currents from the joint reconstruction of absolute dynamic topography and sea surface temperature through deep learning algorithms." Ocean Science 21, no. 1 (2025): 199–216. https://doi.org/10.5194/os-21-199-2025.

Abstract:
Our study focuses on absolute dynamic topography (ADT) and sea surface temperature (SST) mapping from satellite observations, with the primary objective of improving the spatial resolution of satellite-derived ADT (and the derived geostrophic currents). Retrieving consistent high-resolution ADT and SST information from space is challenging due to instrument limitations, sampling constraints, and degradations introduced by the interpolation algorithms used to obtain gap-free (L4) analyses. To address these issues, we developed and tested different deep learning methodologies, specifically convolutional neural network (CNN) models originally proposed for single-image super-resolution. Building upon recent findings, we conduct an Observing System Simulation Experiment (OSSE) relying on Copernicus numerical model outputs (with respective temporal and spatial resolutions of 1 d and 1/24°), and we present a strategy for further refinements. Previous OSSEs combined low-resolution L4 satellite-equivalent ADTs with high-resolution "perfectly known" SSTs to derive high-resolution sea surface dynamical features. Here, we introduce realistic SST L4 processing errors and modify the network to concurrently predict high-resolution SST and ADT from synthetic, satellite-equivalent L4 products. This modification allows us to evaluate the potential enhancement in the ADT and SST mapping while integrating dynamical constraints through tailored, physics-informed loss functions. The neural networks are thus trained using OSSE data and subsequently applied to the Copernicus Marine Service satellite-derived ADTs and SSTs, allowing us to reconstruct super-resolved ADTs and geostrophic currents at the same spatiotemporal resolution as the model outputs employed for the OSSE. A 12-year-long time series of super-resolved geostrophic currents (2008–2019) is thus presented and validated against in situ-measured currents from drogued drifting buoys and via spectral analyses.
This study suggests that CNNs are beneficial for improving standard altimetry mapping: they generally sharpen the ADT gradients, with consequent correction of the surface currents direction and intensities with respect to the altimeter-derived products. Our investigation is focused on the Mediterranean Sea, quite a challenging region due to its small Rossby deformation radius (around 10 km).
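The geostrophic currents derived from ADT follow the standard balance u = −(g/f) ∂η/∂y, v = (g/f) ∂η/∂x. A minimal finite-difference sketch is below; using a single Coriolis parameter (f-plane) is a simplification made here for brevity, not something the paper does.

```python
import numpy as np

def geostrophic_velocity(adt, dx, dy, lat_deg):
    """Surface geostrophic currents (m/s) from absolute dynamic topography (m)
    on a regular grid with spacings dx, dy (m), using a constant Coriolis
    parameter for one representative latitude."""
    g = 9.81                                            # gravity (m/s^2)
    f = 2.0 * 7.2921e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter (1/s)
    d_dy, d_dx = np.gradient(adt, dy, dx)               # gradients along rows (y), cols (x)
    u = -(g / f) * d_dy                                 # zonal component
    v = (g / f) * d_dx                                  # meridional component
    return u, v
```

A uniform northward ADT slope therefore yields a constant westward current in the Northern Hemisphere, which is the sense in which sharpened ADT gradients directly correct current direction and intensity.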
13

Hamilton, T. S., Randolph J. Enkin, Michael Riedel, Garry C. Rogers, John W. Pohlman, and Heather M. Benway. "Slipstream: an early Holocene slump and turbidite record from the frontal ridge of the Cascadia accretionary wedge off western Canada and paleoseismic implications." Canadian Journal of Earth Sciences 52, no. 6 (2015): 405–30. http://dx.doi.org/10.1139/cjes-2014-0131.

Abstract:
Slipstream Slump, a well-preserved 3 km wide sedimentary failure from the frontal ridge of the Cascadia accretionary wedge 85 km off Vancouver Island, Canada, was sampled during Canadian Coast Guard Ship (CCGS) John P. Tully cruise 2008007PGC along a transect of five piston cores. Shipboard sediment analysis and physical property logging revealed 12 turbidites interbedded with thick hemipelagic sediments overlying the slumped glacial diamict. Despite the different sedimentary setting, atop the abyssal plain fan, this record is similar in number and age to the sequence of turbidites sampled farther to the south from channel systems along the Cascadia Subduction Zone, with no extra turbidites present in this local record. Given the regional physiographic and tectonic setting, megathrust earthquake shaking is the most likely trigger for both the initial slumping and subsequent turbidity currents, with sediments sourced exclusively from the exposed slump face of the frontal ridge. Planktonic foraminifera picked from the resedimented diamict of the underlying main slump have a disordered cluster of 14C ages between 12.8 and 14.5 ka BP. For the post-slump stratigraphy, an event-free depth scale is defined by removing the turbidite sediment intervals and using the hemipelagic sediments. Nine 14C dates from the most foraminifera-rich intervals define a nearly constant hemipelagic sedimentation rate of 0.021 cm/year. The combined age model is defined using only planktonic foraminiferal dates and Bayesian analysis with a Poisson-process sedimentation model. The age model of ongoing hemipelagic sedimentation is strengthened by physical property correlations from Slipstream events to the turbidites for the Barkley Canyon site 40 km south. Additional modelling addressed the possibilities of seabed erosion or loss and basal erosion beneath turbidites. 
Neither of these approaches achieves a modern seabed age when applying the commonly used regional marine 14C reservoir age of 800 years (marine reservoir correction ΔR = 400 years). Rather, the top of the core appears to be 400 years in the future. A younger marine reservoir age of 400 years (ΔR = 0 years) brings the top to the present and produces better correlations with the nearby Effingham Inlet paleo-earthquake chronology based only on terrestrial carbon requiring no reservoir correction. The high-resolution dating and facies analysis of Slipstream Slump in this isolated slope basin setting demonstrates that this is also a useful type of sedimentary target for sampling the paleoseismic record in addition to the more studied turbidites from submarine canyon and channel systems. The first 10 turbidites at Slipstream Slump were deposited between 10.8 and 6.6 ka BP, after which the system became sediment starved and only two more turbidites were deposited. The recurrence interval for the inferred frequent early Holocene megathrust earthquakes is 460 ± 140 years, compatible with other estimates of paleoseismic megathrust earthquake occurrence rates along the subduction zone.
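Two of this abstract's numbers can be checked with one-line arithmetic (a consistency sketch, not the paper's Bayesian age model): the event-free depth-to-age conversion at the reported 0.021 cm/yr hemipelagic rate, and the mean recurrence of the first 10 turbidites deposited between 10.8 and 6.6 ka BP.

```python
def hemipelagic_age(depth_cm, rate_cm_per_yr=0.021):
    """Age (yr) at an event-free depth, assuming the constant hemipelagic
    sedimentation rate reported for the post-slump section."""
    return depth_cm / rate_cm_per_yr

def mean_recurrence_yr(oldest_ka, youngest_ka, n_events):
    """Mean interval between n events spanning the given ages (n - 1 gaps)."""
    return (oldest_ka - youngest_ka) * 1000.0 / (n_events - 1)
```

Ten turbidites between 10.8 and 6.6 ka BP give (10.8 − 6.6) × 1000 / 9 ≈ 467 yr, inside the 460 ± 140 yr recurrence interval quoted in the abstract.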
14

Ikegbunam, Peter Chierike, and Odishika Emmanuel. "ICTs Use in Sustaining Nigerian Democracy: Evaluation of Voters' Perception of the use of Smart Card-Readers in 2019 General Election." International Journal of Arts, Humanities and Social Studies 3, no. 5 (2021): 09–21. https://doi.org/10.5281/zenodo.5529321.

Abstract:
Smart-card reader technology was introduced into Nigeria's electoral system to eliminate the electoral fraud and manipulation that lead to pre-, on- and post-election crises in Nigeria. This is a serious problem which not only undermines good governance but also leads to loss of lives. Against this background, this evaluation of voters' perception of smart-card readers in the 2019 general election was conceived to investigate respondents' perception of the technology and whether it had stopped multiple voting and made the election credible before the electorate. Using the survey research method, the study sampled a total of 400 respondents from the Ayamelum and Dunukofia local government areas of Anambra state. The sample was drawn using the Rakesh [1] sample determination formula, while purposive sampling was used to select only electorates who participated in the election. The research was based on the uses and gratifications and technology acceptance theories of mass communication. Findings revealed that the smart-card reader technology is a total failure, given that it could neither stop multiple voting nor make the election credible in the eyes of the electorate, a situation which attracted negative perceptions of the technology from the respondents. Among others, the researchers recommended that INEC should effect corrections of all errors observed to have marred the use of the technology and improve on it for better elections in future.
15

Ikegbunam, Peter Chierike, and Odishika Emmanuel. "ICTs Use in Sustaining Nigerian Democracy: Evaluation of Voters' Perception of the use of Smart Card-Readers in 2019 General Election." International Journal of Arts, Humanities and Social Studies 3, no. 5 (2021): 09–21. https://doi.org/10.5281/zenodo.6513680.

Abstract:
Smart-card reader technology was introduced into Nigeria's electoral system to eliminate the electoral fraud and manipulation that lead to pre-, on- and post-election crises in Nigeria. This is a serious problem which not only undermines good governance but also leads to loss of lives. Against this background, this evaluation of voters' perception of smart-card readers in the 2019 general election was conceived to investigate respondents' perception of the technology and whether it had stopped multiple voting and made the election credible before the electorate. Using the survey research method, the study sampled a total of 400 respondents from the Ayamelum and Dunukofia local government areas of Anambra state. The sample was drawn using the Rakesh [1] sample determination formula, while purposive sampling was used to select only electorates who participated in the election. The research was based on the uses and gratifications and technology acceptance theories of mass communication. Findings revealed that the smart-card reader technology is a total failure, given that it could neither stop multiple voting nor make the election credible in the eyes of the electorate, a situation which attracted negative perceptions of the technology from the respondents. Among others, the researchers recommended that INEC should effect corrections of all errors observed to have marred the use of the technology and improve on it for better elections in future.
APA, Harvard, Vancouver, ISO, and other styles
16

Bradley, Stuart, and Chandra Ghimire. "Design of an Ultrasound Sensing System for Estimation of the Porosity of Agricultural Soils." Sensors 24, no. 7 (2024): 2266. http://dx.doi.org/10.3390/s24072266.

Full text
Abstract:
The design of a readily useable technology for routine paddock-scale soil porosity estimation is described. The method is non-contact (proximal), typically using "on-the-go" sensors mounted on a small farm vehicle around 1 m above the soil surface. This ultrasonic sensing method is unique in providing porosity estimates in a non-invasive, cost-effective, and relatively simple way. Challenges arise from the need to have a compact, low-power, rigid structure and to allow for pasture cover and surface roughness. The high-frequency regime for acoustic reflections from a porous material is a function of the porosity ϕ, the tortuosity α∞, and the angle of incidence θ. There is no dependence on frequency, so measurements must be conducted at two or more angles of incidence θ to obtain two or more equations in the unknown soil properties ϕ and α∞. Sensing and correcting for scattering of ultrasound from a rough soil surface requires measurements at three or more angles of incidence. A system requiring a single transmitter/receiver pair to be moved from one angle to another is not viable for rapid sampling. Therefore, the design includes at least three transmitter/receiver pairs placed at identical distances from the ground so that they respond identically to power reflected from a perfectly reflecting surface. A single 25 kHz frequency is a compromise which allows for the frequency-dependent signal loss from a natural rough agricultural soil surface. Multiple-transmitter and multiple-microphone arrays are described which give a good signal-to-noise ratio while maintaining a compact system design. The resulting arrays have a diameter of 100 mm. Pulsed ultrasound is used so that the reflected sound can be separated from sound travelling directly through the air horizontally from transmitter to receiver. The average porosity estimated for soil samples in the laboratory and in the field is found to be within around 0.04 of the porosity measured independently.
This level of variation is consistent with uncertainties in setting the angle of incidence, although assumptions made in modelling the interaction of ultrasound with the rough surface no doubt also contribute. Although the method is applicable to all soil types, the current design has only been tested on dry, vegetation-free soils for which the sampled area does not contain large animal footprints or rocks.
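The two-unknowns-from-two-angles idea in this abstract can be sketched numerically. The reflection model below is a standard high-frequency expression for a rigid-porous half-space, assumed here for illustration and not necessarily the exact model the authors use; the soil values and angles are likewise illustrative:

```python
import math

def reflection_coeff(phi, alpha_inf, theta):
    """High-frequency plane-wave reflection coefficient of a rigid-porous
    half-space (a common textbook form, assumed here for illustration).
    Normalized surface impedance: Z = sqrt(alpha_inf) / (phi * cos(theta_t)),
    with the refracted angle given by cos(theta_t) = sqrt(1 - sin^2(theta)/alpha_inf)."""
    cos_t = math.sqrt(1.0 - math.sin(theta) ** 2 / alpha_inf)
    z = math.sqrt(alpha_inf) / (phi * cos_t)
    return (z * math.cos(theta) - 1.0) / (z * math.cos(theta) + 1.0)

def invert_two_angles(measured, thetas):
    """Grid-search least squares for (phi, alpha_inf) from reflection
    measurements at two or more angles of incidence."""
    best, best_err = None, float("inf")
    for i in range(5, 96):                  # porosity 0.05 .. 0.95
        phi = i / 100.0
        for j in range(100, 301):           # tortuosity 1.00 .. 3.00
            alpha = j / 100.0
            err = sum((reflection_coeff(phi, alpha, t) - r) ** 2
                      for r, t in zip(measured, thetas))
            if err < best_err:
                best, best_err = (phi, alpha), err
    return best

# Synthetic check: generate "measurements" from known soil properties
# and recover them (phi = 0.45, alpha_inf = 1.60 are illustrative values).
thetas = [math.radians(30), math.radians(55)]
r_meas = [reflection_coeff(0.45, 1.60, t) for t in thetas]
phi_hat, alpha_hat = invert_two_angles(r_meas, thetas)
```

A real instrument would replace the grid search with a proper nonlinear solver and add the third angle for the roughness correction the abstract mentions.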
APA, Harvard, Vancouver, ISO, and other styles
17

Shahani, Amir Reza, Hamid Shooshtar, Arash Karbasian, and Mohamad Mehdi Karimi. "Evaluation of different methods of relaxation modulus extraction for linear viscoelastic materials from ramp-constant strain experiments." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 9 (2018): 3155–69. http://dx.doi.org/10.1177/0954406218802599.

Full text
Abstract:
Experimental determination of the relaxation modulus of linear viscoelastic materials, in principle, requires the application of an ideal step strain to the specimen. This cannot be achieved in practice, however, and is replaced by a ramp-constant strain history. The material response to the ramp-constant strain deviates from its ideal step response and should be corrected. Different correction methods have been proposed for full-range modulus extraction from ramp-constant strain experiments, and among them the three methods of Zapas–Phillips, Lee–Knauss, and Sorvari–Malinen are distinguishable. Few comparative studies have been performed on these methods, all of which have been based on the simulated response of a hypothetical material rather than on real experimental data. Furthermore, the simulations have been performed assuming specific material models not essentially appropriate for material parameter extraction purposes, leading to undesirable errors in the simulation results. In this paper, the above-mentioned methods are compared based on both simulation and experimental approaches. The simulation results show that all methods effectively improve the range of modulus extraction compared to the well-known "ten-times rule", and the Lee–Knauss method provides the best predictions, in contrast to some of the previously published results. Considering the experimental results, however, it is observed that all the modulus extraction methods lose their performance if a sufficiently high sampling rate is not provided by the experimental data acquisition system. It is discussed that the conclusion of some authors regarding the invalidity of the ten-times rule stems from a misinterpretation of their simulation results and is faulty.
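One of the correction methods named above, Zapas–Phillips, is simple to state: for a ramp of rise time t1, the relaxation modulus is estimated as E(t − t1/2) ≈ σ(t)/ε0, which remains accurate well inside the region the "ten-times rule" (t ≥ 10·t1) would discard. A sketch on a synthetic single-Maxwell material (all parameters illustrative):

```python
import math

E0, TAU = 1.0, 5.0          # illustrative Maxwell model: E(t) = E0 * exp(-t/TAU)
T1, EPS0 = 1.0, 0.01        # ramp rise time and final (held) strain

def true_modulus(t):
    return E0 * math.exp(-t / TAU)

def ramp_stress(t):
    """Exact stress response of the Maxwell model to a linear ramp reaching
    EPS0 at T1 and held constant afterwards (valid for t >= T1)."""
    rate = EPS0 / T1
    return rate * E0 * TAU * math.exp(-t / TAU) * (math.exp(T1 / TAU) - 1.0)

def zapas_phillips(t):
    """Zapas-Phillips estimate: the stress at t, divided by the final strain,
    approximates the relaxation modulus at the shifted time t - T1/2."""
    return ramp_stress(t) / EPS0

# Compare at t = 2*T1, far inside the ten-times-rule 'forbidden' zone.
t = 2.0 * T1
est = zapas_phillips(t)
exact = true_modulus(t - T1 / 2.0)
rel_err = abs(est - exact) / exact      # ~(T1/TAU)^2 / 24 for this model
```

For T1/TAU = 0.2 the relative error is below 0.2 %, illustrating why such corrections extend the usable time range so far below 10·t1.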
APA, Harvard, Vancouver, ISO, and other styles
18

Hasan, Md Muyeed, Rony Basak, Minhaz Hasan Sujan, et al. "An Assessment of the Impact of Industrialization on Physical Environment and Socio-economic Conditions around the Alipur Industrial Area, Bangladesh." American Journal of Agricultural Science, Engineering, and Technology 5, no. 2 (2021): 309–25. https://doi.org/10.54536/ajaset.v5i2.102.

Full text
Abstract:
Massive industrialization promotes economic growth but also causes environmental pollution and degradation. The purpose of this research is to determine the impact of industrialization on the physical environment and socio-economic conditions in the Alipur industrial area, Habiganj, Bangladesh, by measuring water, soil, air and sound quality parameters and through a random-sampling questionnaire survey on socio-economic conditions. Most of the measured physicochemical parameters exceeded the acceptable limit for inland surface water. The pH of the effluent water ranged from 4.83 to 8.58, falling below the standard level at two points. The DO level was within the range of 1.98 to 3.32 mg/L, indicating that aquatic life is in danger because of the low level of DO. BOD, COD, TSS, and TDS ranged from 133 to 255.8 mg/L, 330 to 566 mg/L, 1960 to 2170 mg/L and 4110 to 5500 mg/L, respectively. The concentrations of nitrate (14.63 mg/L), phosphate (10.33 mg/L), and copper (5.49 mg/L) in the water samples exceeded the inland surface water, public sewer STP, and irrigated land standards. The concentrations of CO (10.71), NO2 (90.56), and SO2 (104.34) in the air are near the acceptable level, indicating that the air was moderately polluted. The Durbin-Watson statistic of 0.495 from the model summary indicates that the research model has positive autocorrelation; the coefficient significance of 0.00 and an F test of 150.345 suggest that the model is suitable. Furthermore, the coefficient of the land area lost due to industrial park construction is found at 0.00, indicating that household income increased when people lost land and turned to non-agricultural sectors such as building houses, investing in services, and traffic systems. On the other hand, it is undeniable that a few members whose land was acquired became unemployed during industrial park construction, resulting in a high number of unemployed workers and declining incomes.
APA, Harvard, Vancouver, ISO, and other styles
19

Kemokai, Michael Boima. "MANAGERIAL CHALLENGES IN USING LEAN TECHNIQUES IN DESIGNING A COMPREHENSIVE CHARACTERIZATION OF ORGANIZATION LOGISTICS AND PHYSICAL DISTRIBUTION SYSTEM: A CASE OF DEPARTMENT OF FIELD SUPPORT (DFS), UNITED NATIONS." International Journal of Supply Chain and Logistics 1, no. 2 (2017): 28–54. http://dx.doi.org/10.47941/ijscl.124.

Full text
Abstract:
Purpose: The purpose of this study was to identify the managerial challenges in using lean techniques in designing a comprehensive characterization of the organization's logistics and physical distribution system. The study further sought to determine the extent to which these challenges impact the logistics and physical distribution of goods and informed managerial practices. Methodology: The study employed a qualitative research design. The study targeted all the senior managers at Director level at the GSC and the RSC and all the senior operational staff from the supply chain and service delivery pillars in three (03) large Field Missions, namely the United Nations Stabilization Mission in Congo (MONUSCO), the United Nations Mission in South Sudan (UNMISS), and the United Nations Support Office for AMISOM (UNSOA). This study used a purposive sampling technique. The researcher purposively sampled fifteen (15) personnel, with three (03) participants each from the GSC, the RSC, and each of the three (03) Field Missions. Primary data were obtained from the original sources using questionnaires and interviews. Findings: The results revealed that DFS did face various challenges in the implementation of lean strategies and that the operational difficulties experienced within its operations across field missions resulted in reduced customer satisfaction, increased supplier lead time and increased operational costs. The respondents indicated that they experienced a lack of understanding of the complexity of the supply chain at the senior leadership level, whereby most initiatives are tainted with personal agendas; as a consequence, the biggest hurdle to overcome is gaining the enthusiasm and trust of the staff. They also indicated that there is a high risk of asset waste and loss due to assets remaining in stock for over one year without use.
Further, they indicated that there are weaknesses in the management of construction or "self-constructed" projects, and weaknesses in the restructuring of the department of peacekeeping operations. Unique contribution to theory, practice and policy: The study recommends that DFS should foster a continuous performance improvement mindset among staff through planning, doing, checking and taking corrective actions on a balanced scorecard that integrates and aligns staff motivation and interest with the objectives of the organization. Also, an objective and fair work plan and performance management are ingredients to foster continuous improvement.
APA, Harvard, Vancouver, ISO, and other styles
20

Kemokai, Michael Boima. "MANAGERIAL CHALLENGES IN USING LEAN TECHNIQUES IN DESIGNING A COMPREHENSIVE CHARACTERIZATION OF ORGANIZATION LOGISTICS AND PHYSICAL DISTRIBUTION SYSTEM: A CASE OF DEPARTMENT OF FIELD SUPPORT (DFS), UNITED NATIONS." International Journal of Supply Chain and Logistics 1, no. 2 (2017): 28. http://dx.doi.org/10.47941/ijscl.v1i2.124.

Full text
Abstract:
Purpose: The purpose of this study was to identify the managerial challenges in using lean techniques in designing a comprehensive characterization of the organization's logistics and physical distribution system. The study further sought to determine the extent to which these challenges impact the logistics and physical distribution of goods and informed managerial practices. Methodology: The study employed a qualitative research design. The study targeted all the senior managers at Director level at the GSC and the RSC and all the senior operational staff from the supply chain and service delivery pillars in three (03) large Field Missions, namely the United Nations Stabilization Mission in Congo (MONUSCO), the United Nations Mission in South Sudan (UNMISS), and the United Nations Support Office for AMISOM (UNSOA). This study used a purposive sampling technique. The researcher purposively sampled fifteen (15) personnel, with three (03) participants each from the GSC, the RSC, and each of the three (03) Field Missions. Primary data were obtained from the original sources using questionnaires and interviews. Findings: The results revealed that DFS did face various challenges in the implementation of lean strategies and that the operational difficulties experienced within its operations across field missions resulted in reduced customer satisfaction, increased supplier lead time and increased operational costs. The respondents indicated that they experienced a lack of understanding of the complexity of the supply chain at the senior leadership level, whereby most initiatives are tainted with personal agendas; as a consequence, the biggest hurdle to overcome is gaining the enthusiasm and trust of the staff. They also indicated that there is a high risk of asset waste and loss due to assets remaining in stock for over one year without use.
Further, they indicated that there are weaknesses in the management of construction or "self-constructed" projects, and weaknesses in the restructuring of the department of peacekeeping operations. Unique contribution to theory, practice and policy: The study recommends that DFS should foster a continuous performance improvement mindset among staff through planning, doing, checking and taking corrective actions on a balanced scorecard that integrates and aligns staff motivation and interest with the objectives of the organization. Also, an objective and fair work plan and performance management are ingredients to foster continuous improvement.
APA, Harvard, Vancouver, ISO, and other styles
21

Bognar, Kristof, Susann Tegtmeier, Adam Bourassa, et al. "Stratospheric ozone trends for 1984–2021 in the SAGE II–OSIRIS–SAGE III/ISS composite dataset." Atmospheric Chemistry and Physics 22, no. 14 (2022): 9553–69. http://dx.doi.org/10.5194/acp-22-9553-2022.

Full text
Abstract:
Abstract. After decades of depletion in the 20th century, near-global ozone now shows clear signs of recovery in the upper stratosphere. The ozone column, however, has remained largely constant since the turn of the century, mainly due to the evolution of lower stratospheric ozone. In the tropical lower stratosphere, ozone is expected to decrease as a consequence of enhanced upwelling driven by increasing greenhouse gas concentrations, and this is consistent with observations. There is recent evidence, however, that mid-latitude ozone continues to decrease as well, contrary to model predictions. These changes are likely related to dynamical variability, but the impact of changing circulation patterns on stratospheric ozone is not well understood. Here we use merged measurements from the Stratospheric Aerosol and Gas Experiment II (SAGE II), the Optical Spectrograph and InfraRed Imaging System (OSIRIS), and SAGE III on the International Space Station (SAGE III/ISS) to quantify ozone trends in the 2000–2021 period. We implement a sampling correction for the OSIRIS and SAGE III/ISS datasets and assess trend significance, taking into account the temporal differences with respect to Aura Microwave Limb Sounder data. We show that ozone has increased by 2 %–6 % in the upper and 1 %–3 % in the middle stratosphere since 2000, while lower stratospheric ozone has decreased by similar amounts. These decreases are significant in the tropics (>95 % confidence) but not necessarily at mid-latitudes (>80 % confidence). In the upper and middle stratosphere, changes since 2010 have pointed to hemispheric asymmetries in ozone recovery. Significant positive trends are present in the Southern Hemisphere, while ozone at northern mid-latitudes has remained largely unchanged in the last decade. These differences might be related to asymmetries and long-term variability in the Brewer–Dobson circulation. Circulation changes impact ozone in the lower stratosphere even more.
In tropopause-relative coordinates, most of the negative trends in the tropics lose significance, highlighting the impacts of a warming troposphere and increasing tropopause altitudes.
APA, Harvard, Vancouver, ISO, and other styles
22

Kowalski, Dariusz, Beata Kowalska, Ewa Hołota, and Artur Choma. "Water Quality Correction Within Water Distribution System." Ecological Chemistry and Engineering S 22, no. 3 (2015): 401–10. http://dx.doi.org/10.1515/eces-2015-0022.

Full text
Abstract:
Abstract Water suppliers can be treated as production companies whose main product is water delivered to their customers. The article presents problems connected with the management of such companies under conditions of secondary contamination in water distribution systems. This phenomenon exists in water networks all over the world. Its presence is particularly visible in the countries of the former communist bloc. In the article, particular attention was devoted to the issue of water quality correction in the analysed systems. In the case of water distribution systems, former quality correction methods consisted of special treatment of the water pumped into the system and flushing and cleaning of water pipes. In both cases, identification of water quality deficiencies resulted in significant water loss. The situation reflects management processes applied in the manufacturing industry of the 1940s. The authors of this paper put forward the concept of three water quality correction methods which would not entail such considerable water loss. The methods in question are intended for different network types. The implementation of the proposed solutions could set new standards in the management of water providers' distribution systems.
APA, Harvard, Vancouver, ISO, and other styles
23

Yeo, J. G., K. Nay Yaung, A. Law, et al. "OP0214 A MULTI-DIMENSIONAL APPROACH REVEALS A DYSREGULATED SYSTEMIC LUPUS ERYTHEMATOSUS IMMUNE RHEOSTAT WITH AN ABNORMAL IMMUNOREGULATORY RESPONSE AND REDUCED CTLA4 EXPRESSION IN EFFECTOR T CELLS." Annals of the Rheumatic Diseases 82, Suppl 1 (2023): 140.1–141. http://dx.doi.org/10.1136/annrheumdis-2023-eular.2990.

Full text
Abstract:
Background: Systemic Lupus Erythematosus (SLE) immunopathogenesis involves a complex network of regulatory and effector cells, with its balance constituting an immune rheostat that maintains immune homeostasis. Thus, the disease must be interrogated holistically to identify mechanistically and clinically relevant immune cell subsets for its pathogenesis. There is a crucial unmet need to examine the mechanism of tolerance loss in lupus and identify novel targets for potential therapeutic intervention to improve outcomes. Objectives: To characterise the SLE immunome holistically and address our hypothesis that SLE is driven dually by an impaired immunoregulatory axis and a perturbed immune effector system. Methods: Forty-one peripheral blood mononuclear cell (PBMC) samples from 26 adult SLE patients and 27 age-matched healthy controls were studied with a 43-marker mass cytometry panel. The SLE patients (23 females) had a median age of 40 (interquartile range [IQR]: 28 to 54) years with a median SLEDAI 2K score of 4 (IQR: 0 to 6). Quality check, batch-effect correction, cell clustering, annotation and visualisation after t-distributed stochastic neighbour embedding (tSNE) dimensional reduction were done using our extended polydimensional immunome characterisation (EPIC) pipeline [1]. Frequencies are expressed as a percentage of total CD45+ PBMC or as a ratio and described with median and IQR. Statistical significance is defined as p<0.05 (Mann-Whitney U test with Bonferroni correction). Results: Our multi-parametric approach reveals multiple derangements in all the major immune lineages but predominantly in the CD4+ T cell population (Figure 1). Interestingly, there were no significant differences in the memory (CD45RO+) and naïve (CD45RA+) Treg (CD3+CD4+CD25+Foxp3+CTLA+) subsets between SLE and controls.
Instead, an enrichment of an activated memory Treg-like T subset (CD3+CD4+CD45RO+CD25-Foxp3+CTLA+ICOS+) was found in SLE (SLE versus healthy: 0.86 [0.43 to 1.38]% versus 0.31 [0.24 to 0.42]%, p<0.01). These Treg-like cells do not express CD25, a phenotypic marker for natural Treg cells. Concurrently, there was a significant reduction in CTLA4-expressing naïve non-Treg CD4+ T (effector) cells in lupus (SLE versus healthy: 0.59 [0.40 to 1.11]% versus 2.0 [0.9 to 2.9]%, p<0.01). This was accompanied by a significant reduction in the ratio of CTLA4+ to CTLA4- naïve non-Treg TNFα+ effector T cells (SLE versus healthy, ratio of CTLA4+ to total naïve non-Treg TNFα+ T cells: 0.14 [0.11 to 0.23] versus 0.33 [0.17 to 0.55], p<0.0001). Additionally, other significant changes were found: an activated IL4+ T cell subset (CD3+CD4+CD45RA+HLA-DR+IL4+) was increased in lupus (SLE versus healthy: 0.30 [0.19 to 0.52]% versus 0.07 [0.05 to 0.12]%, p<0.0001). Figure 1. (A) Cell types embedded tSNE plot. (B) Healthy and SLE immunomes (density plots). (C) Expression embedded tSNE plots. (D) Differentially enriched CD4+ T cell clusters in either healthy or SLE. Blue: significantly increased cell subsets in healthy. Red: significantly increased cell subsets in SLE. All plots are derived from a random sampling of 50,000 single-cell events, dimensionally reduced with 43 markers (n=23 healthy and 41 SLE). Conclusion: There are multiple derangements in the immunoregulatory and effector axes in lupus, consistent with its complex immunopathogenesis. The activated CD25-negative Treg-like subset indicates an impaired regulatory response in SLE. The reduced CTLA4 expression in the naïve non-Treg T cells in lupus suggests a perturbed negative feedback mechanism in the effector system. The validation of these results and delineation of the underlying disease mechanisms have translational therapeutic potential. Reference: [1] Yeo JG, Wasser M et al.
The Extended Polydimensional Immunome Characterisation (EPIC) web-based reference and discovery tool for cytometry data. Nat Biotechnol. 2020 Jun;38(6):679-684. Acknowledgements: This research was supported by the National Research Foundation Singapore under its National Medical Research Council (NMRC) Centre Grant Programme (MOH-000988) and is administered by the Ministry of Health, Singapore's NMRC. Other NMRC grant support (CIRG21nov-0031 and CSAINV22jul-0008) is gratefully acknowledged. Disclosure of Interests: None declared.
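The significance test named in this abstract (Mann-Whitney U with Bonferroni correction) is straightforward to sketch. The version below uses the normal approximation to the U null distribution without a tie correction, and the subset-frequency lists are illustrative numbers, not the study's data:

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction; adequate for continuous measurements)."""
    # U counts, over all pairs, how often a-values exceed b-values (ties = 0.5)
    u = sum(1.0 for x in a for y in b if x > y) + \
        0.5 * sum(1 for x in a for y in b if x == y)
    n1, n2 = len(a), len(b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

def bonferroni(p_values):
    """Family-wise correction: multiply each p-value by the number of tests."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Illustrative subset frequencies (% of CD45+ PBMC) for SLE vs healthy.
sle = [0.86, 1.38, 0.43, 0.95, 0.70]
healthy = [0.31, 0.24, 0.42, 0.28, 0.35]
u, p = mann_whitney_u(sle, healthy)
adjusted = bonferroni([p, 0.50, 0.04])   # pretend three subsets were compared
```

For sample sizes this small the exact U distribution would normally be preferred over the normal approximation; the sketch only illustrates the correction logic.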
APA, Harvard, Vancouver, ISO, and other styles
24

Horwitz, Gregory D. "Correction: Temporal information loss in the macaque early visual system." PLOS Biology 20, no. 8 (2022): e3001752. http://dx.doi.org/10.1371/journal.pbio.3001752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Levin, Ingeborg, Dominik Schmithüsen, and Alex Vermeulen. "Assessment of ²²²radon progeny loss in long tubing based on static filter measurements in the laboratory and in the field." Atmospheric Measurement Techniques 10, no. 4 (2017): 1313–21. http://dx.doi.org/10.5194/amt-10-1313-2017.

Full text
Abstract:
Abstract. Aerosol loss in air intake systems potentially hampers the application of one-filter systems for progeny-based atmospheric ²²²radon (²²²Rn) measurements. The artefacts are significant when air has to be collected via long sampling lines, e.g. from elevated heights at tall tower observatories. Here we present results from a study determining ²²²Rn progeny loss from ambient air sampled via 8.2 mm inner diameter (ID) Decabon tubing in the laboratory and from pre-installed 10 mm ID tubing at the Cabauw meteorological tower in the Netherlands. Progeny loss increased steeply with the length of the tubing, decreasing sampling efficiency to 66 % for 8.2 mm ID rolled-up tubing of 200 m length at a flow rate of ca. 1 m³ h⁻¹. Preliminary theoretical estimation of the loss yielded a sampling efficiency of 64 % for the same tubing, when taking into account turbulent inertial deposition of aerosol to the walls as well as loss due to gravitational settling. At Cabauw tower, theoretical estimates of the loss in vertical tubing with 10 mm ID and 200 m length at a flow rate of 1.1 m³ h⁻¹ yielded a total efficiency of 73 %, the same value as observed. ²²²Rn progeny loss increased strongly at activity concentrations below 1 Bq m⁻³. Based on our experiments, an empirical correction function for ²²²Rn progeny measurements when sampling through long Decabon tubing was developed, allowing correction of respective measurements for this particular experimental setting (tubing type and diameter, flow rate, aerosol size distribution) with an estimated uncertainty of 10–20 % for activity concentrations between 1 and 2 Bq m⁻³ and less than 10 % for activity concentrations above 2 Bq m⁻³.
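The reported numbers suggest roughly first-order loss with tube length. As an illustration (not the authors' empirical correction function, which also depends on activity concentration), one can calibrate a simple exponential penetration model to the reported 66 % efficiency at 200 m and use it to scale measured concentrations back to ambient air:

```python
import math

# Calibrate eff(L) = exp(-k*L) so that eff(200 m) = 0.66, the efficiency
# reported for 8.2 mm ID rolled-up tubing at ~1 m^3/h. This single-parameter
# model is an illustrative simplification of the paper's correction.
K_PER_M = -math.log(0.66) / 200.0

def sampling_efficiency(length_m):
    """Fraction of 222Rn progeny surviving transport through the tubing."""
    return math.exp(-K_PER_M * length_m)

def corrected_activity(measured_bq_m3, length_m):
    """Scale a measured progeny activity concentration back to ambient air."""
    return measured_bq_m3 / sampling_efficiency(length_m)
```

For example, a filter reading of 0.66 Bq m⁻³ behind 200 m of this tubing would correct back to 1.0 Bq m⁻³ in ambient air under this model.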
APA, Harvard, Vancouver, ISO, and other styles
26

Hazratwala, Kaushik. "One year follow-up of a case series using the iBalance medial opening wedge HTO system and an accelerated rehabilitation program." Orthopaedic Journal of Sports Medicine 5, no. 5_suppl5 (2017): 2325967117S0018. http://dx.doi.org/10.1177/2325967117s00188.

Full text
Abstract:
Objectives: To investigate the clinical outcome and postoperative alignment changes following computer-navigated medial opening wedge HTO using the iBalance HTO system. Methods: We performed a prospective observational series at a single centre of 20 consecutive patients undergoing computer-navigated high tibial osteotomy by a single surgeon. The surgical device used to maintain the osteotomy was the Arthrex iBalance® HTO system. We compared preoperative and postoperative ROM, WOMAC, Lysholm, IKDC, alignment and the inferred tibial slope. We also measured the tibiofemoral angle on long-leg weight-bearing plain radiographs at 2 weeks, 6 weeks, 3 months and 1 year, and calculated any change from the initial correction measured at 2 weeks to assess any loss of correction over a 1-year time frame. We also correlated clinical outcome with loss of correction. Results: Regarding intraoperative results, the mean navigated correction to HKA was 5.4°±1.3°. No significant change was found in the knee sagittal angle postoperatively (pre-op mean 0.1°±4.4°, post-op mean 0.81°±5.1°; p > 0.05). No significant change was found in the postoperative ROM (pre-op mean 125.4°±41.5°, post-op mean 123.9°±34.4°; p > 0.05). The IKDC, Lysholm and WOMAC scores showed a significant difference between the 6-week to 3-month and 3-month to 6-month follow-ups. After this time point the scores did not show a statistically significant difference. The IKDC, Lysholm and WOMAC scores all demonstrate a general improvement over 12 months. One patient had to be removed from the study, as he had a lateral cortex breach at three months and was converted to a TKR. There was a significant loss of correction between 2 weeks and 6 weeks, and again a significant loss of correction between 6 weeks and 3 months. However, the average loss of correction was measured radiographically to be 1.6°±1.7° between the 2-week and 12-month postoperative follow-ups.
Though this may be statistically significant, it is not clinically significant when compared to PROMs. We divided the loss of correction into greater than 1.5° and less than 1.5° and compared these groups on measured PROMs at all time points. We found no significant correlation between increased loss of correction and poorer PROM scores. Conclusion: Computer-assisted iBalance medial opening wedge HTO with an accelerated rehabilitation program does radiographically show loss of correction over 12 months. However, it is not clinically significant when compared to PROMs. ROM and the inferred tibial slope are preserved.
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Dong Hyun, and Jong Deok Kim. "Unequal loss protection scheme using a quality prediction model in a Wi-Fi broadcasting system." International Journal of Distributed Sensor Networks 15, no. 6 (2019): 155014771985424. http://dx.doi.org/10.1177/1550147719854247.

Full text
Abstract:
Wireless local area network–based broadcasting techniques are a type of mobile Internet Protocol television technology that simultaneously transmits multimedia content to local users. Contrary to the existing wireless local area network–based multimedia transmission systems, which transmit multimedia data to users using unicast packets, a wireless local area network–based broadcasting system is able to transmit multimedia data to many users in a single broadcast packet. Consequently, network resources do not increase with the increase in the number of users. However, IEEE 802.11 does not provide a packet loss recovery algorithm for broadcast packet loss, which is unavoidable. Therefore, the forward error correction technique is required to address the issue of broadcast packet loss. The broadcast packet loss rate of a wireless local area network–based broadcasting system that transmits compressed multimedia data is not proportional to the quality deterioration of the received video signals; therefore, it is difficult to predict the quality of the received video while also considering the effect of broadcast packet loss. In this scenario, allocating equal forward error correction packets to compressed frames is not an effective method for recovering broadcast packet loss. Thus, several studies on unequal loss protection have been conducted. This study proposes an effective, prediction-based unequal loss protection algorithm that can be applied to wireless local area network–based broadcasting systems. The proposed unequal loss protection algorithm adopts a novel approach by adding forward error correction packets to every transmission frame while considering frame loss. The algorithm uses a new metric to predict video quality deterioration, and an unequal loss protection structure was designed, implemented, and verified.
The effectiveness of the quality deterioration model and the validity of the unequal loss protection algorithm were demonstrated through experiments.
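At the core of any unequal loss protection scheme is dividing a fixed FEC packet budget across frames according to their predicted impact on video quality. A minimal sketch follows; the proportional weighting and largest-remainder rounding are assumptions for illustration, not the paper's algorithm:

```python
def allocate_fec(impact, budget):
    """Split `budget` FEC packets across frames in proportion to each frame's
    predicted quality impact, using largest-remainder rounding so the
    per-frame allocations always sum exactly to the budget."""
    total = sum(impact)
    shares = [budget * w / total for w in impact]
    alloc = [int(s) for s in shares]          # floor of each ideal share
    # hand leftover packets to the largest fractional remainders
    leftovers = budget - sum(alloc)
    order = sorted(range(len(impact)),
                   key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in order[:leftovers]:
        alloc[i] += 1
    return alloc

# Example: an I frame weighted far above its dependent P frames;
# the allocations sum exactly to the 8-packet budget.
alloc = allocate_fec([10, 3, 2, 1], 8)
```

A real scheme would derive the impact weights from the predicted quality deterioration metric rather than assigning them by frame type alone.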
APA, Harvard, Vancouver, ISO, and other styles
28

Blanchet, Jose, and Jing Dong. "Perfect sampling for infinite server and loss systems." Advances in Applied Probability 47, no. 3 (2015): 761–86. http://dx.doi.org/10.1239/aap/1444308881.

Full text
Abstract:
We present the first class of perfect sampling (also known as exact simulation) algorithms for the steady-state distribution of non-Markovian loss systems. We use a variation of dominated coupling from the past. We first simulate a stationary infinite server system backwards in time and analyze the running time in heavy traffic. In particular, we are able to simulate stationary renewal marked point processes in unbounded regions. We then use the infinite server system as an upper bound process to simulate the loss system. The running time analysis of our perfect sampling algorithm for loss systems is performed in the quality-driven (QD) and the quality-and-efficiency-driven regimes. In both cases, we show that our algorithm achieves subexponential complexity as both the number of servers and the arrival rate increase. Moreover, in the QD regime, our algorithm achieves a nearly optimal rate of complexity.
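The backward-in-time infinite-server construction described above can be illustrated in the simplest Markovian case, M/M/∞, whose stationary customer count is known to be Poisson(λ/μ). The fixed truncation horizon below is an illustrative shortcut with exponentially small bias; the point of the paper's dominated-coupling approach is precisely to sample exactly without such truncation. All parameters are illustrative:

```python
import random

def stationary_count_mminf(lam, mu, horizon, rng):
    """Draw (approximately) one stationary customer count of an M/M/inf queue
    by simulating arrivals backwards in time: a customer who arrived t time
    units ago is still present at time 0 iff its Exp(mu) service exceeds t.
    The past is truncated at `horizon` (residual mass ~ lam/mu * exp(-mu*horizon))."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(lam)        # next arrival, further into the past
        if t > horizon:
            return count
        if rng.expovariate(mu) > t:      # service outlasts the elapsed time
            count += 1

rng = random.Random(42)
lam, mu = 2.0, 1.0
draws = [stationary_count_mminf(lam, mu, horizon=20.0, rng=rng)
         for _ in range(10000)]
mean = sum(draws) / len(draws)           # should be close to lam/mu = 2.0
```

Replacing the exponential service times with a general distribution gives the non-Markovian infinite-server system that the paper then uses as a dominating upper bound for the loss system.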
APA, Harvard, Vancouver, ISO, and other styles
29

Blanchet, Jose, and Jing Dong. "Perfect sampling for infinite server and loss systems." Advances in Applied Probability 47, no. 03 (2015): 761–86. http://dx.doi.org/10.1017/s0001867800048825.

Full text
Abstract:
We present the first class of perfect sampling (also known as exact simulation) algorithms for the steady-state distribution of non-Markovian loss systems. We use a variation of dominated coupling from the past. We first simulate a stationary infinite server system backwards in time and analyze the running time in heavy traffic. In particular, we are able to simulate stationary renewal marked point processes in unbounded regions. We then use the infinite server system as an upper bound process to simulate the loss system. The running time analysis of our perfect sampling algorithm for loss systems is performed in the quality-driven (QD) and the quality-and-efficiency-driven regimes. In both cases, we show that our algorithm achieves subexponential complexity as both the number of servers and the arrival rate increase. Moreover, in the QD regime, our algorithm achieves a nearly optimal rate of complexity.
APA, Harvard, Vancouver, ISO, and other styles
30

Qin, Guo-jie, Guo-man Liu, Mei-guo Gao, Xiong-jun Fu, and Peng Xu. "Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation." Metrology and Measurement Systems 21, no. 3 (2014): 485–96. http://dx.doi.org/10.2478/mms-2014-0041.

Full text
Abstract:
Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the method for deriving the compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
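The compensation idea can be sketched generically: a 4-tap cubic (Lagrange) fractional-delay FIR filter that resamples a skewed channel back onto the ideal sampling grid. This is a hypothetical illustration under simplified assumptions, not the paper's filter design; the tap formulas are the standard cubic Lagrange basis and `correct_skew` is an assumed helper.

```python
import numpy as np

def lagrange_taps(d):
    """4-tap cubic (Lagrange) FIR fractional-delay taps for nodes -1, 0, 1, 2
    evaluated at fractional position d in [0, 1)."""
    return np.array([
        -d * (d - 1) * (d - 2) / 6.0,          # tap applied at index n-2
        (d + 1) * (d - 1) * (d - 2) / 2.0,     # tap applied at index n-1
        -(d + 1) * d * (d - 2) / 2.0,          # tap applied at index n
        (d + 1) * d * (d - 1) / 6.0,           # tap applied at index n+1
    ])

def correct_skew(samples, skew):
    """Channel captured at times t = n + skew (0 < skew < 1); interpolate its
    values back onto the ideal integer-time grid t = n."""
    d = 1.0 - skew                              # fractional position between indices n-1 and n
    h = lagrange_taps(d)
    out = np.full(len(samples), np.nan)         # edges left undefined
    for n in range(2, len(samples) - 1):
        out[n] = h @ samples[n - 2:n + 2]       # FIR dot product over 4 neighbors
    return out
```

In a real TIADC the taps would be applied as a fixed FIR filter per channel, with the skew estimated beforehand.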
APA, Harvard, Vancouver, ISO, and other styles
31

Su, Daiqin, Robert Israel, Kunal Sharma, Haoyu Qi, Ish Dhand, and Kamil Brádler. "Error mitigation on a near-term quantum photonic device." Quantum 5 (May 4, 2021): 452. http://dx.doi.org/10.22331/q-2021-05-04-452.

Full text
Abstract:
Photon loss is destructive to the performance of quantum photonic devices and therefore suppressing the effects of photon loss is paramount to photonic quantum technologies. We present two schemes to mitigate the effects of photon loss for a Gaussian Boson Sampling device, in particular, to improve the estimation of the sampling probabilities. Instead of using error correction codes, which are expensive in terms of their hardware resource overhead, our schemes require only a small amount of hardware modification or even no modification. Our loss-suppression techniques rely either on collecting additional measurement data or on classical post-processing once the measurement data is obtained. We show that with a moderate cost of classical post-processing, the effects of photon loss can be significantly suppressed for a certain amount of loss. The proposed schemes are thus a key enabler for applications of near-term photonic quantum devices.
APA, Harvard, Vancouver, ISO, and other styles
32

Faraday, Coleridge, and W. A. Horowitz. "High-pT suppression in small systems." EPJ Web of Conferences 296 (2024): 12011. http://dx.doi.org/10.1051/epjconf/202429612011.

Full text
Abstract:
We present the first results for leading hadron suppression in small collision systems, from a convolved radiative and collisional pQCD energy loss model which receives a short path length correction to the radiative energy loss. We find that the short path length correction is exceptionally large for light flavor final states at high-pT for all system sizes. We numerically investigate the self-consistency of various assumptions underlying the radiative energy loss model, in an effort to understand the size of the short path length correction. This calculation shows that the large formation time assumption, which is utilized by most contemporary energy loss models, is invalid for a large portion of the phenomenologically relevant parameter space.
APA, Harvard, Vancouver, ISO, and other styles
33

Fursov, V. A. "Design of Stable IIR Filters on Nonuniform Sampling System for Defocusing Correction." Optoelectronics, Instrumentation and Data Processing 58, no. 5 (2022): 479–86. http://dx.doi.org/10.3103/s8756699022050041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Axibal, Derek, Christopher Joyce, Darby Houck, Stephanie Logterman, Rachel Frank, and Armando Vidal. "Accuracy of Correction and Postoperative Complications Associated with a PEEK Opening Wedge High Tibial Osteotomy Implant." Orthopaedic Journal of Sports Medicine 8, no. 7_suppl6 (2020): 2325967120S0049. http://dx.doi.org/10.1177/2325967120s00497.

Full text
Abstract:
Objectives: To evaluate the accuracy of the correction angle of an all-PEEK medial opening wedge high tibial osteotomy (HTO) system, as well as to determine the effect of correction angle on postoperative complications. Methods: A retrospective review was performed; patients who underwent an HTO by the senior author using an all-PEEK HTO system between 2014 and 2018 with >6-month follow-up were included. Measurements were performed and classifications formulated by three senior residents and the senior author. Lateral hinge fractures (LHF) were classified according to the Takeuchi classification system. Bivariate statistics were performed. Results: Thirty HTOs in 27 patients were included (50% female; age, 37.8±10.8 years). Average follow-up was 16.2±10.1 months. The average postoperative radiographic valgus correction (Δ, 5.8±2.4°) was significantly less than the average implant correction angle (8.1±2.3°; p<0.001), indicating an overall average under-correction of 2.3±2.1°. There were 5 failures (16.7%; defined as approximately >5° loss of planned correction) that shared a common mechanism of medial cortical failure inferior to the implant. Excluding the construct failures, the average correction accuracy of the 25 (83.3%) non-failures (1.5±1.2°) was significantly better than that of the 5 (16.7%) failures (6.3±1.8°; p=0.002). Four fractures (13.3%) were identified: 3 (10%) Takeuchi type I and 1 (3.3%) type III. All type I fractures were associated with >5° planned correction loss with medial cortex buckling. Overall, 9 knees (30%) experienced minor complications: neuropathy (n=1; 3.3%), deep vein thrombosis (n=2; 6.7%), and superficial infection (n=3; 10%). Conclusion: The use of an all-PEEK medial opening wedge HTO implant is a safe and effective system, with an acceptable average under-correction of 1.5° in patients not sustaining medial cortex failure. Loss of correction was associated with medial cortex failure and Takeuchi type I LHF.
This is the first description of this failure mechanism (medial cortex buckling) that is specific to this implant and technique.
APA, Harvard, Vancouver, ISO, and other styles
35

Sun, Ning, Jie Li, Debiao Zhang, et al. "Error Correction Method of TIADC System Based on Parameter Estimation of Identification Model." Applied Sciences 12, no. 12 (2022): 6257. http://dx.doi.org/10.3390/app12126257.

Full text
Abstract:
The performance of analog-to-digital converters (ADCs) has reached a bottleneck due to the limitations of the manufacturing process and testing environment. Time-interleaved ADC (TIADC) technology can increase the sampling rate without changing the resolution. However, channel mismatch severely degrades the dynamic performance of the TIADC system. Most current solutions to the TIADC channel mismatch problem have preconditions, such as eliminating only one kind of error or increasing the complexity of the hardware; few methods can estimate multiple errors without changing the hardware circuit. To improve the dynamic performance of the TIADC system, based on an in-depth study of TIADC channel mismatch errors and on system identification theory, an identification model is designed to characterize the frequency characteristics of the TIADC. Using system observation data, the transfer function parameters are recursively estimated. After constructing and verifying the identification model of the TIADC system, a digital compensation filter is established through a frequency-domain correction method to complete the error correction of the system. Test results for a four-channel TIADC high-speed data acquisition system show that the actual input-output characteristics of the test system are consistent with the identification model. The four channels of the TIADC system are provided by four sub-channels of two AD9653 chips, and the highest sampling rate of a single channel is 125 MSPS. For sinusoidal input signals from 20 MHz to 150 MHz, the sampling system achieves a signal-to-noise ratio (SNR) above 56.8 dB and a spurious-free dynamic range (SFDR) above 69.7 dB. The dynamic performance of the sampling system is nearly equivalent to that of its sub-ADCs; the feasibility of the model identification method and the effectiveness of the error correction are verified in simulation and experiment.
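Recursive estimation of transfer-function parameters from observation data can be sketched with a textbook recursive least-squares (RLS) identifier for a simple FIR model. This is a generic illustration only; the paper's identification model and compensation filter design are not reproduced here, and all names and parameters are assumptions.

```python
import numpy as np

def rls_identify(u, y, order, lam=0.99, delta=100.0):
    """Recursive least-squares estimation of FIR transfer-function taps from
    input/output observations: y[n] ~= theta . [u[n], ..., u[n-order+1]]."""
    theta = np.zeros(order)                       # parameter estimate
    P = np.eye(order) * delta                     # inverse-correlation estimate
    for n in range(order - 1, len(u)):
        phi = u[n - order + 1:n + 1][::-1]        # regressor, newest sample first
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (y[n] - phi @ theta)  # correct by the a-priori error
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update with forgetting
    return theta
```

With the taps identified, an inverse (compensation) filter could then be designed in the frequency domain.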
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Ningxin, Spiro D. Jorga, Jeffery R. Pierce, Neil M. Donahue, and Spyros N. Pandis. "Particle wall-loss correction methods in smog chamber experiments." Atmospheric Measurement Techniques 11, no. 12 (2018): 6577–88. http://dx.doi.org/10.5194/amt-11-6577-2018.

Full text
Abstract:
The interaction of particles with the chamber walls has been a significant source of uncertainty when analyzing results of secondary organic aerosol (SOA) formation experiments performed in Teflon chambers. A number of particle wall-loss correction methods have been proposed, including the use of a size-independent loss rate constant, the ratio of suspended organic mass to that of a conserved tracer (e.g., sulfate seeds), and a size-dependent loss rate constant. For complex experiments such as the chemical aging of SOA, the results of the SOA quantification analysis can be quite sensitive to the adopted correction method due to the evolution of the particle size distribution and the duration of these experiments. We evaluated the performance of several particle wall-loss correction methods for aging experiments of α-pinene ozonolysis products. Determining the loss rates from seed loss periods is necessary for this system because it is not clear when chemical reactions have been completed. Results from the OA ∕ sulfate ratio and the size-independent correction methods can be influenced significantly by the size dependence of the particle wall-loss process. Coagulation can also affect the particle size distribution, especially for particles with diameter less than 100 nm, thus introducing errors in the results of the wall-loss correction. The corresponding loss rate constants may vary from experiment to experiment, and even during a specific experiment. Friction between the Teflon chamber walls and non-conductive surfaces can significantly increase particle wall-loss rates and the chamber may require weeks to recover to its original condition. Experimental procedures are proposed for the characterization of particle losses during different stages of these experiments and the evaluation of corresponding particle wall-loss correction.
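The simplest of the correction variants mentioned, a first-order wall-loss rate fitted from a seed-only decay period, can be sketched as follows. For the size-dependent method the same fit would be applied per diameter bin; the data, tolerances, and function names here are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def wall_loss_rate(t, conc):
    """First-order wall-loss rate constant k (s^-1) estimated from a seed-only
    decay period via a log-linear fit of N(t) = N0 * exp(-k t)."""
    slope, _ = np.polyfit(t, np.log(conc), 1)
    return -slope

def correct_suspended(t, conc, k):
    """Add the wall-deposited particles back to the suspended concentration."""
    return conc * np.exp(k * np.asarray(t))
```

A constant corrected concentration during an inert (seed-only) period is a useful sanity check that the fitted rate is adequate.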
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, N., S.D. Jorga, J.R. Pierce, N.M. Donahue, and S.N. Pandis. "Particle wall-loss correction methods in smog chamber experiments." Atmospheric Measurement Techniques 11, no. 12 (2018): 6577–88. https://doi.org/10.5194/amt-11-6577-2018.

Full text
Abstract:
The interaction of particles with the chamber walls has been a significant source of uncertainty when analyzing results of secondary organic aerosol (SOA) formation experiments performed in Teflon chambers. A number of particle wall-loss correction methods have been proposed, including the use of a size-independent loss rate constant, the ratio of suspended organic mass to that of a conserved tracer (e.g., sulfate seeds), and a size-dependent loss rate constant. For complex experiments such as the chemical aging of SOA, the results of the SOA quantification analysis can be quite sensitive to the adopted correction method due to the evolution of the particle size distribution and the duration of these experiments. We evaluated the performance of several particle wall-loss correction methods for aging experiments of α-pinene ozonolysis products. Determining the loss rates from seed loss periods is necessary for this system because it is not clear when chemical reactions have been completed. Results from the OA ∕ sulfate ratio and the size-independent correction methods can be influenced significantly by the size dependence of the particle wall-loss process. Coagulation can also affect the particle size distribution, especially for particles with diameter less than 100 nm, thus introducing errors in the results of the wall-loss correction. The corresponding loss rate constants may vary from experiment to experiment, and even during a specific experiment. Friction between the Teflon chamber walls and non-conductive surfaces can significantly increase particle wall-loss rates and the chamber may require weeks to recover to its original condition. Experimental procedures are proposed for the characterization of particle losses during different stages of these experiments and the evaluation of corresponding particle wall-loss correction.
APA, Harvard, Vancouver, ISO, and other styles
38

Jiang, Min Lan, Xiao Dong Wang, and Xiu Hui He. "Dynamic Accuracy Loss Prediction Model Based on BPNN." Advanced Materials Research 108-111 (May 2010): 795–98. http://dx.doi.org/10.4028/www.scientific.net/amr.108-111.795.

Full text
Abstract:
In this paper, we study a dynamic accuracy loss prediction model for a measurement system, using the measurement standard deviation as the systemic accuracy and the change of the measurement standard deviation over time as the systemic accuracy loss. A BPNN is used to build the dynamic accuracy loss prediction model for the practical measurement, realizing real-time prediction of systemic accuracy loss. The predictive model provides a theoretical basis for real-time error correction and compensation of the system and can cost-effectively improve the system's measurement accuracy.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Chun-Chi, Song-Xian Lin, and Hyundoo Jeong. "Low-Complexity Timing Correction Methods for Heart Rate Estimation Using Remote Photoplethysmography." Sensors 25, no. 2 (2025): 588. https://doi.org/10.3390/s25020588.

Full text
Abstract:
With the rise of modern healthcare monitoring, heart rate (HR) estimation using remote photoplethysmography (rPPG) has gained attention for its non-contact, continuous tracking capabilities. However, most HR estimation methods rely on stable, fixed sampling intervals, while practical image capture often involves irregular frame rates and missing data, leading to inaccuracies in HR measurements. This study addresses these issues by introducing low-complexity timing correction methods, including linear, cubic, and filter interpolation, to improve HR estimation from rPPG signals under conditions of irregular sampling and data loss. Through a comparative analysis, this study offers insights into efficient timing correction techniques for enhancing HR estimation from rPPG, particularly suitable for edge-computing applications where low computational complexity is essential. Cubic interpolation can provide robust performance in reconstructing signals but requires higher computational resources, while linear and filter interpolation offer more efficient solutions. The proposed low-complexity timing correction methods improve the reliability of rPPG-based HR estimation, making it a more robust solution for real-world healthcare applications.
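A minimal sketch of one such low-complexity timing correction, linear interpolation of irregularly timed frames onto a uniform grid followed by a spectral-peak HR estimate, might look like the following. The function, grid rate, and band limits are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def estimate_hr(t, x, fs=30.0):
    """Correct irregular sampling by linear interpolation onto a uniform grid,
    then take the dominant FFT peak in the physiological band as HR in bpm."""
    tu = np.arange(t[0], t[-1], 1.0 / fs)        # uniform time grid
    xu = np.interp(tu, t, x)                     # linear timing correction
    xu = xu - xu.mean()                          # remove DC before the FFT
    spec = np.abs(np.fft.rfft(xu))
    freqs = np.fft.rfftfreq(len(xu), 1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)       # 42-240 bpm physiological band
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

Cubic interpolation would replace `np.interp` at a higher computational cost, matching the trade-off the abstract describes.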
APA, Harvard, Vancouver, ISO, and other styles
40

Levytskyi, A. F., V. O. Rogozinskyi, I. M. Benzar, M. M. Dolianytskyi, and O. D. Karpinska. "Analysis of the effectiveness of the halo-gravitational traction system as a factor influencing blood loss in the surgical correction of complex scoliotic deformities in children." Paediatric Surgery. Ukraine, no. 1(74) (March 30, 2022): 34–39. http://dx.doi.org/10.15574/ps.2022.74.34.

Full text
Abstract:
Halo-gravity traction (HGT) systems are widely used in leading clinics around the world as a staged method for correcting complex (>100°) scoliotic deformities of the spine in children. Today there is no single approach to the use of this technique, and each doctor decides on the treatment regimen empirically, based on clinical experience. Purpose - to identify the factors that affect the amount of blood loss during surgical correction of scoliotic deformity in children. Materials and methods. 76 patients aged 7 to 17 years were examined, on average 11.0±2.8 years. Group I (experimental) - 38 children treated with HGT using the developed tactics of staged surgical treatment; Group II (control) - 38 children who underwent one-stage surgical correction. The age of children was 11.0±2.8 years in group I and 11.2±2.8 years in group II; the ages in the groups were statistically the same (t=-0.409; p=0.684). There were 28 (36.8%) boys and 48 (63.2%) girls. The distribution of children by age and sex in the groups was the same. Data on operative blood loss were statistically processed. Results. According to the statistical study, blood loss during surgical correction of scoliotic deformity is most affected by the angle of deformation and the age of the patient. At a deformation angle of >100°, blood loss is significantly greater than at an angle of <100°. In children over 14 years a significant increase in blood loss was also observed, associated not only with greater body weight but also with the fact that in these children the angle of spinal deformation is greater and, even with HGT, it decreases less than in younger children. Conclusions. It was determined that after HGT in children the angle of deformation decreased statistically significantly (p<0.001), by an average of 36.5±14.9°. The change in deformation in boys and girls was the same.
With surgical correction of scoliotic deformity <100°, blood loss (1025.0±235.9 ml) is statistically significantly (p<0.001) less than blood loss with correction of deformity greater than 100° (1297.5±327.8 ml). With age, blood loss in children increases, and the difference between age subgroups is statistically significant (α=0.05). The angle of scoliotic deformity after surgical correction has a statistically significant effect on blood loss (r=0.576; p=0.001), while the magnitude of the surgical correction of the deformity has no effect (r=0.015; p=0.879). The research was carried out in accordance with the principles of the Helsinki Declaration. The study protocol was approved by the local ethics committees of all participating institutions. The informed consent of the patients was obtained for conducting the studies. No conflict of interest was declared by the authors. Key words: intraoperative bleeding, spinal deformity, halo-gravity traction.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Yao Lin, Feng Han, Zhen Liu, and Min Chen Zhai. "Analysis of Energy Loss-Gain Error in Discrete Fourier Transform." Applied Mechanics and Materials 568-570 (June 2014): 172–75. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.172.

Full text
Abstract:
In asynchronous sampling, the discrete Fourier transform (DFT) spectrum involves errors. Researchers have investigated DFT spectrum correction techniques extensively, but the errors have never been completely eliminated. In this paper, spectra were examined from the principle of conservation of energy. It has gone unnoticed that the energy of the digital signal, which is the analysis object of the DFT, is not equal to that of the finite continuous signal truncated by the rectangular window; thus, by Parseval's theorem, the energies of their spectra are different. The Energy Loss-Gain (ELG) error was introduced to express the energy difference between these two spectra. The ELG error is zero if the observed continuous signal is truncated at an integral multiple of the half cycle, and it is related to the number of cycles and the number of samples per cycle. Analysis shows that the ELG error decreases as these two parameters increase, which is helpful in engineering practice.
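The ELG error can be demonstrated numerically: the sample energy, which by Parseval's theorem equals the DFT spectrum energy, is compared with the closed-form energy of the continuous sinusoid truncated by the same rectangular window. This is an illustrative sketch; the function name and the 1 Hz normalization are assumptions.

```python
import numpy as np

def elg_error(cycles, samples_per_cycle=32):
    """Relative gap between the energy of the sampled (digital) signal and the
    energy of the continuous unit sinusoid (f = 1 Hz) truncated at t = cycles."""
    T = float(cycles)                                # rectangular window length
    n = int(round(cycles * samples_per_cycle))
    dt = T / n
    t = np.arange(n) * dt
    x = np.sin(2 * np.pi * t)
    # Parseval check: DFT spectrum energy equals sample energy
    X = np.fft.fft(x)
    assert np.allclose(np.sum(np.abs(X) ** 2) / n, np.sum(x ** 2))
    e_digital = np.sum(x ** 2) * dt                  # energy seen by DFT analysis
    # closed form of the continuous energy: integral of sin^2(2 pi t) over [0, T]
    e_continuous = T / 2 - np.sin(4 * np.pi * T) / (8 * np.pi)
    return (e_digital - e_continuous) / e_continuous
```

Truncation at an integral multiple of the half cycle (e.g. 2.5 cycles) gives zero ELG error, while a fractional truncation (e.g. 2.3 cycles) does not, matching the abstract's statement.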
APA, Harvard, Vancouver, ISO, and other styles
42

Sheu, Jia Shing, Ho Nien Shou, Wei Jun Lin, and Wen Chin Chung. "Implementation of Internet Audio Transmission System via Clock Correction." Applied Mechanics and Materials 284-287 (January 2013): 3438–43. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3438.

Full text
Abstract:
The purpose of this research is to use a device called the Dante board, which has a built-in A/D converter. It digitizes audio and transmits it using TCP/IP over Cat5 cable, replacing the expensive analog cables used in the past. Still, some problems need to be overcome. First, during the analog-to-digital process, audio distortion is often encountered due to inconsistency of the encoder during digital sampling; to compensate for or attenuate the distortion introduced during conversion, the offending frequencies need to be filtered out. Second, due to delays in Internet transmission, the audio output by each device may not be synchronized. Therefore, each device needs its own clock, and the delay is overcome through clock synchronization. The IEEE 1588 Precision Time Protocol (PTP) is used to synchronize the clocks, and thereby the audio, of each device.
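The IEEE 1588 (PTP) synchronization mentioned above computes the slave's clock offset from the four timestamps of a Sync/Delay_Req exchange. A minimal sketch, assuming a symmetric network path (the function name and example values are illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 exchange: t1 = master send (Sync), t2 = slave receive,
    t3 = slave send (Delay_Req), t4 = master receive. Assuming a symmetric
    path, solve the two equations for the slave's clock offset and the
    one-way path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay
```

The slave then subtracts the computed offset from its clock so that all devices play out audio on a common timebase.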
APA, Harvard, Vancouver, ISO, and other styles
43

Hong, Xu, Jianbin Zhou, Shijun Ni, et al. "Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping." Journal of Synchrotron Radiation 25, no. 2 (2018): 505–13. http://dx.doi.org/10.1107/s1600577518000322.

Full text
Abstract:
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pulse pile-up effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented for counting-loss correction in X-ray spectroscopy. The unit impulse pulse shape is obtained by inverse derivation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to a unit impulse shape, which possesses a small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
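Dead-time evaluation rests on a standard counting-rate relation; for a non-paralyzable dead-time model it can be sketched as follows. The model choice and values are illustrative, not the paper's exact derivation.

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: the system is blind for dead_time
    seconds after each recorded event, so m = n / (1 + n*tau) and hence
    n = m / (1 - m*tau)."""
    loss_fraction = measured_rate * dead_time
    if loss_fraction >= 1.0:
        raise ValueError("measured rate is inconsistent with the dead time")
    return measured_rate / (1.0 - loss_fraction)
```

At 10% dead-time loss the correction is already sizable, which is why high-rate spectroscopy cannot ignore it.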
APA, Harvard, Vancouver, ISO, and other styles
44

Strini Paixão, Carla Segatto, Murillo Aperecido Voltarelli, Jarlyson Brunno Costa Souza, Armando Lopes De Brito Filho, and Rouverson Pereira da Silva. "Loss sampling methods for soybean mechanical harvest." Bioscience Journal 38 (August 12, 2022): e38050. http://dx.doi.org/10.14393/bj-v38n0a2022-56409.

Full text
Abstract:
Harvesting is one of the most important stages of the agricultural production process. However, the lack of monitoring during this operation and the absence of efficient methodologies to quantify losses have contributed to the decline in the quality of the operation. The objective of this study was to monitor mechanized soybean harvest by quantifying losses through two methodologies using statistical process control. The study was conducted in March 2016 in an agricultural area in the municipality of Ribeirão Preto, SP, using a John Deere harvester model 1470 with a tangential-type track system and separation by a straw-blower. The experimental design followed the standards established by statistical process control, and every 8 min of harvest, the total losses by the circular framework and rectangular framework methodologies were simultaneously quantified, totaling 40 points. Data were analyzed using descriptive statistics and statistical process control. The averages obtained with the circular framework were higher than those found with the rectangular framework, representing losses more completely. By both frameworks, the process was considered unable to maintain soybean losses at acceptable levels throughout the mechanical harvest operation. Collecting samples at different locations with the circular framework resulted in higher data reliability.
APA, Harvard, Vancouver, ISO, and other styles
45

Wangchuk, Tashi Rapden. "Automatic Power Factor Correction Using Arduino." International Journal for Research in Applied Science and Engineering Technology 12, no. 6 (2024): 1077–80. http://dx.doi.org/10.22214/ijraset.2024.63268.

Full text
Abstract:
In today's technological revolution, power is very valuable. Low power factor results in increased energy consumption, voltage drops, decreased power system efficiency, and shortened equipment lifespan. Identifying the causes of power loss and improving power system stability is crucial. The rise in inductive loads has decreased power system efficiency. Therefore, a simple and effective method for improving power factor is needed. An automatic power factor correction device precisely determines the delay between the line voltage and line current by measuring the time difference between the arrival of the current and voltage signals from the AC mains. This time value is then calibrated to calculate the phase angle and the corresponding power factor using an internal timer. The microcontroller (Arduino) then calculates the power factor and switches on different capacitor banks accordingly. This automatic power factor correction method can be applied in industries, power systems, and even households to improve stability and system efficiency. Power factor correction plays a significant role in enhancing the efficiency and reliability of electrical power systems.
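The delay-to-power-factor calculation and the sizing of the compensating capacitor bank can be sketched with the standard textbook relations; the mains frequency, load values, and function names here are illustrative assumptions.

```python
import math

def power_factor_from_delay(delay_s, line_freq_hz=50.0):
    """Power factor from the measured time lag between voltage and current
    zero crossings: phase = 2*pi*f*delay, PF = cos(phase)."""
    return math.cos(2.0 * math.pi * line_freq_hz * delay_s)

def correction_capacitance(p_watts, v_rms, pf_now, pf_target, line_freq_hz=50.0):
    """Capacitance (farads) a bank must switch in to raise an inductive load's
    power factor from pf_now to pf_target: C = dQ / (2*pi*f*V^2)."""
    q_now = p_watts * math.tan(math.acos(pf_now))        # reactive power before
    q_target = p_watts * math.tan(math.acos(pf_target))  # reactive power after
    return (q_now - q_target) / (2.0 * math.pi * line_freq_hz * v_rms ** 2)
```

The controller would compare the computed capacitance against the available bank steps and switch in the closest combination.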
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Pao-Lung, Michael Jian-Wen Chen, Pang-Hsuan Hsiao, et al. "Navigation-Assisted One-Staged Posterior Spinal Fusion Using Pedicle Screw Instrumentation in Adolescent Idiopathic Scoliosis—A Case Series." Medicina 60, no. 2 (2024): 300. http://dx.doi.org/10.3390/medicina60020300.

Full text
Abstract:
Background and Objectives: Adolescent idiopathic scoliosis (AIS) is a prevalent three-dimensional spinal disorder with a multifactorial pathogenesis, including genetic and environmental factors. Treatment options include non-surgical and surgical treatment. Surgical interventions demonstrate positive outcomes in terms of deformity correction, pain relief, and improvements of cardiac and pulmonary function. Surgical complications, including excessive blood loss and neurologic deficits, are reported in 2.27–12% of cases. Navigation-assisted techniques, such as the O-arm system, have been a recent focus with enhanced precision. This study aims to evaluate the results and complications of one-stage posterior instrumentation fusion in AIS patients assisted by O-arm navigation. Materials and Methods: This retrospective study assesses 55 patients with AIS (12–28 years) who underwent one-stage posterior instrumentation correction supported by O-arm navigation from June 2016 to August 2023. We examined radiological surgical outcomes (initial correction rate, loss of correction rate, last follow-up correction rate) and complications as major outcomes. The characteristics of the patients, intraoperative blood loss, operation time, number of fusion levels, and screw density were documented. Results: Of 73 patients, 55 met the inclusion criteria. The average age was 16.67 years, with a predominance of females (78.2%). The surgical outcomes demonstrated substantial initial correction (58.88%) and a sustained positive radiological impact at the last follow-up (56.56%). Perioperative complications, major and minor, occurred in 18.18% of the cases. Two patients experienced a major complication. Blood loss (509.46 mL) and operation time (402.13 min) were comparable to ranges reported in the literature. Trend analysis indicated improvements in operation time and blood loss over the study period.
Conclusions: O-arm navigation-assisted one-stage posterior instrumentation proves reliable for AIS corrective surgery, achieving significant and sustained positive radiological outcomes, lower correction loss, reduced intraoperative blood loss, and absence of implant-related complications. Despite the challenges, our study demonstrates the efficacy and maturation of this surgical approach.
APA, Harvard, Vancouver, ISO, and other styles
47

Balamurali, S., and M. Usha. "Designing of variables quick switching sampling system by considering process loss functions." Communications in Statistics - Theory and Methods 46, no. 5 (2016): 2299–314. http://dx.doi.org/10.1080/03610926.2015.1041983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Jai H., Yanzhi Chen, and Ignatius N. Tang. "Heterogeneous loss of gaseous hydrogen peroxide in an atmospheric air sampling system." Environmental Science & Technology 25, no. 2 (1991): 339–42. http://dx.doi.org/10.1021/es00014a019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhou, Xiao Hu. "Correcting Synchronous Scanning OMIS Remote Sensing Images Using the Spatial Orientation Data of the Inertial Navigation System." Applied Mechanics and Materials 513-517 (February 2014): 2867–70. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.2867.

Full text
Abstract:
The spatial positioning and orientation data (POS, Position & Orientation System) recorded in flight by the inertial navigation system's IMU (Inertial Measurement Unit) are used to correct synchronously scanned hyperspectral OMIS (Operational Modular Imaging Spectrometer) remote sensing images. Attitude parameters synchronized with the OMIS scan are obtained from the IMU, and coordinate transformation parameters are calculated from the flight attitude. According to a mathematical calibration model based on the OMIS imaging principle, the image pixels are re-sampled to correct the image, achieving better image processing results.
APA, Harvard, Vancouver, ISO, and other styles
50

Madhiarasan, Manoharan. "Implementation of IoT-based energy monitoring and automatic power factor correction system." Thermal Science and Engineering 6, no. 1 (2023): 13. http://dx.doi.org/10.24294/tse.v6i1.1996.

Full text
Abstract:
Energy monitoring facilitates quick access and helps to know the power utilization under normal and abnormal conditions. Nowadays, many applications, and industries in particular, face problems regarding power quality. In the power system, the power factor plays a vital role in power quality. The addition of capacitance counteracts the decay of the power factor and reduces the power loss. This paper aims to build an automatic power factor correction (APFC) system that can monitor the energy consumption of a system and automatically improve its power factor. In the design, an open-source energy monitoring library has been implemented for accurate power calculations. This paper carried out hardware experimentation on energy monitoring and automatic power factor correction using a capacitor bank together with Internet of Things (IoT) technology. A mobile application was built to monitor power more simply and comfortably and to apply correction automatically. The developed hardware model's performance is validated with and without load conditions. The results prove that the designed Raspberry Pi-based energy monitoring and automatic power factor correction system improves the power factor without human interaction by properly switching the capacitor bank. Hence, the power loss, penalty, and power quality-related problems are resolved by the proposed approach. The proposed design is compact, simple, and easy to implement and aids in power system advancement.
APA, Harvard, Vancouver, ISO, and other styles