Journal articles on the topic "Domain Shift Robustness"

To see other types of publications on this topic, follow the link: Domain Shift Robustness.

Consult the top 50 journal articles for your research on the topic "Domain Shift Robustness".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide range of disciplines and format your bibliography correctly.

1

Goodarzi, Payman, Andreas Schütze, and Tizian Schneider. "Comparison of different ML methods concerning prediction quality, domain adaptation and robustness." tm - Technisches Messen 89, no. 4 (February 25, 2022): 224–39. http://dx.doi.org/10.1515/teme-2021-0129.

Abstract:
Nowadays, machine learning methods and data-driven models are used widely in different fields, including computer vision, biomedicine, and condition monitoring. However, these models show performance degradation when confronted with real-life situations. Domain or dataset shift, also referred to as out-of-distribution (OOD) prediction, is cited as the reason for this problem. Especially in industrial condition monitoring, it is not clear when we should be concerned about domain shift and which methods are more robust against this problem. In this paper, prediction results are compared for a conventional machine learning workflow based on feature extraction, selection, and classification/regression (FESC/R) and for deep neural networks on two publicly available industrial datasets. We show that a possible shift in domain can be visualized using feature extraction and principal component analysis. Furthermore, the experimental comparison shows that the cross-domain validated results of FESC/R are comparable to the reported state-of-the-art methods. Finally, we show that results for simple randomly selected validation sets do not correctly represent the model performance in real-world applications.
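The domain-shift visualization described in this abstract is easy to reproduce in outline. Below is a minimal scikit-learn sketch (our illustration, not the authors' code); the feature matrices are synthetic stand-ins for features extracted from source- and target-domain sensor recordings:

```python
# Hypothetical sketch: visualizing a domain shift by projecting features
# from two recording sessions onto shared principal components.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in feature matrices; in practice these come from feature extraction
# on the source- and target-domain sensor recordings.
X_source = rng.normal(0.0, 1.0, size=(200, 30))
X_target = rng.normal(0.8, 1.2, size=(200, 30))  # shifted distribution

scaler = StandardScaler().fit(X_source)          # fit on source only
pca = PCA(n_components=2).fit(scaler.transform(X_source))

Z_s = pca.transform(scaler.transform(X_source))
Z_t = pca.transform(scaler.transform(X_target))

plt.scatter(Z_s[:, 0], Z_s[:, 1], s=8, label="source domain")
plt.scatter(Z_t[:, 0], Z_t[:, 1], s=8, label="target domain")
plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.legend()
plt.title("Separated clusters indicate a domain shift")
plt.show()
```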
2

Xu, Minghao, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. "Adversarial Domain Adaptation with Domain Mixup." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6502–9. http://dx.doi.org/10.1609/aaai.v34i04.6123.

Abstract:
Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake under the guidance of a hard label, whereas it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is conducted jointly at the pixel and feature levels to improve the robustness of the models. Extensive experiments prove that the proposed approach can achieve superior performance on tasks with various degrees of domain shift and data complexity.
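The central mixup operation is compact enough to sketch. A minimal NumPy illustration of pixel-level domain mixup with soft domain scores follows (our reading of the idea; the full DM-ADA training loop, feature-level mixup, and discriminator are omitted):

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=2.0, rng=np.random.default_rng()):
    """Pixel-level mixup of a source batch and a target batch.

    Returns mixed inputs plus soft domain scores in [0, 1], where
    1.0 = purely source and 0.0 = purely target; a discriminator can
    then be trained to regress these soft scores instead of hard labels.
    """
    lam = rng.beta(alpha, alpha, size=(x_src.shape[0], 1, 1, 1))
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam.reshape(-1)

# Toy batches of 8 RGB images, 32x32.
rng = np.random.default_rng(0)
xs = rng.random((8, 3, 32, 32))
xt = rng.random((8, 3, 32, 32))
x_mix, dom_score = domain_mixup(xs, xt, rng=rng)
print(x_mix.shape, dom_score.round(2))
```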
3

Fan, Mengbao, Binghua Cao, and Guiyun Tian. "Enhanced Measurement of Paper Basis Weight Using Phase Shift in Terahertz Time-Domain Spectroscopy." Journal of Sensors 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/3520967.

Abstract:
THz time-domain spectroscopy has evolved into a noncontact, safe, and efficient technique for paper characterization. Our previous work adopted peak amplitude and delay time as features to determine paper basis weight using terahertz time-domain spectroscopy. However, peak amplitude and delay time tend to suffer from noise, resulting in degraded accuracy and robustness. This paper proposes a noise-robust, phase-shift-based method to enhance measurements of paper basis weight. Based on the Fresnel formulae, the physical relationship between phase shift and paper basis weight is formulated theoretically, neglecting multiple reflections, for the case of normal incidence. The established formulation indicates that phase shift correlates linearly with paper basis weight. Subsequently, paper sheets were stacked to fabricate samples with different basis weights, and experimental results verified the developed mathematical formulation. Moreover, a comparison was made between phase shift, peak amplitude, and delay time with respect to linearity, accuracy, and noise robustness. The results show that phase shift is superior to the other features.
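The claimed linearity follows from a standard thin-slab phase argument; the sketch below uses our own simplified notation, not the paper's derivation. For normal incidence and negligible multiple reflections, a sheet of thickness d and refractive index n produces the frequency-domain phase shift

```latex
\Delta\phi(\omega) \approx \frac{(n-1)\,\omega\,d}{c},
\qquad
w = \rho\,d
\;\Longrightarrow\;
\Delta\phi(\omega) \approx \frac{(n-1)\,\omega}{\rho\,c}\,w,
```

where w is the basis weight (mass per unit area), ρ the sheet density, and c the speed of light. For fixed material properties, Δφ is linear in w, matching the experimental finding.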
4

Murala, Kranthi Kumar, M. Kamaraju, and K. Ramanjaneyulu. "Digital Fingerprinting in Encrypted Domain." International Journal of Computers & Technology 12, no. 1 (December 15, 2013): 3138–46. http://dx.doi.org/10.24297/ijct.v12i1.3360.

Abstract:
Digital fingerprinting is a method for protecting multimedia content from illegal redistribution and for identifying the colluders. In copy protection, a content seller embeds a unique identity as a watermark into the content before it is sold to a buyer. When an illegal copy is found, the seller can identify illegal users by extracting the fingerprint. This paper proposes an anonymous fingerprinting scheme based on a homomorphic additive encryption scheme; it presents a construction of anti-collusion codes built using the BIBD (balanced incomplete block design) technique, together with a dither technique that makes use of an LFSR (linear feedback shift register), which are used to improve robustness and security.
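As a concrete illustration of the LFSR building block mentioned above, here is a generic Fibonacci LFSR in Python (a textbook construction with the well-known 16-bit taps, not the authors' specific dither design):

```python
def lfsr_bits(seed, taps, width, n):
    """Generate n pseudo-random bits from a Fibonacci LFSR.

    seed:  nonzero initial register state
    taps:  0-indexed bit positions XORed to form the feedback bit
    width: register width in bits
    """
    state = seed & ((1 << width) - 1)
    for _ in range(n):
        yield state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))

# The classic maximal-length 16-bit example (x^16 + x^14 + x^13 + x^11 + 1),
# here producing a short dither sequence for embedding.
dither = list(lfsr_bits(seed=0xACE1, taps=[0, 2, 3, 5], width=16, n=16))
print(dither)
```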
5

Li, Qingchuan, Jiangxing Zheng, Wenfeng Tan, Xingshu Wang, and Yingwei Zhao. "Traffic Sign Detection: Appropriate Data Augmentation Method from the Perspective of Frequency Domain." Mathematical Problems in Engineering 2022 (December 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/9571513.

Abstract:
This study addresses a challenge faced by CNNs in the task of traffic sign detection: how to achieve robustness to distributional shift. At present, CNN models rely on strong data augmentation methods, such as Mosaic and Mixup, to enrich the training samples and achieve robustness. In this study, we note that these methods do not have similar effects in combating noise. We explore the performance of augmentation strategies against disturbances in different frequency bands and provide an understanding from the Fourier analysis perspective. This understanding can guide the selection of data augmentation strategies for different detection tasks and benchmark datasets.
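The Fourier-analysis perspective can be probed with a simple experiment: perturb inputs with noise concentrated at a single spatial frequency and record the performance drop per frequency (the "Fourier heat map" idea). A minimal NumPy sketch of generating such single-frequency noise is shown below; the detector evaluation loop is omitted and all sizes are illustrative:

```python
import numpy as np

def fourier_basis_noise(h, w, i, j, eps=4.0):
    """Image-sized noise concentrated at spatial frequency (i, j).

    Constructs a real image whose Fourier spectrum is nonzero only at
    (i, j) and its symmetric counterpart, scaled to L2 norm eps.
    """
    spectrum = np.zeros((h, w), dtype=complex)
    spectrum[i, j] = 1.0
    spectrum[-i % h, -j % w] += 1.0   # Hermitian symmetry -> real image
    noise = np.fft.ifft2(spectrum).real
    return eps * noise / np.linalg.norm(noise)

# Sweep (i, j) and evaluate a detector on x + noise per frequency to
# build a sensitivity heat map (evaluation omitted here).
x = np.zeros((32, 32))
perturbed = x + fourier_basis_noise(32, 32, i=3, j=5)
print(perturbed.shape, np.linalg.norm(perturbed))
```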
6

Aryal, Jagannath, and Bipul Neupane. "Multi-Scale Feature Map Aggregation and Supervised Domain Adaptation of Fully Convolutional Networks for Urban Building Footprint Extraction." Remote Sensing 15, no. 2 (January 13, 2023): 488. http://dx.doi.org/10.3390/rs15020488.

Abstract:
Automated building footprint extraction requires Deep Learning (DL)-based semantic segmentation of high-resolution Earth observation images. Fully convolutional networks (FCNs) such as U-Net and ResUNET are widely used for such segmentation. The evolving FCNs suffer from inadequate use of multi-scale feature maps in their convolutional neural network (CNN) backbones. Furthermore, DL methods are not robust in cross-domain settings due to domain-shift problems. Two novel scale-robust networks, MSA-UNET and MSA-ResUNET, are developed in this study by aggregating the multi-scale feature maps of U-Net and ResUNET with partial concepts of the feature pyramid network (FPN). Furthermore, supervised domain adaptation is investigated to minimise the effects of domain shift between the two datasets. The datasets include the benchmark WHU Building dataset and a newly developed dataset with 5× fewer samples, 4× lower spatial resolution, and complex high-rise buildings and skyscrapers. The newly developed networks are compared to six state-of-the-art FCNs using five metrics: pixel accuracy, adjusted accuracy, F1 score, intersection over union (IoU), and the Matthews Correlation Coefficient (MCC). The proposed networks outperform the FCNs in the majority of the accuracy measures on both datasets. Compared to the larger dataset, the network trained on the smaller one shows significantly higher robustness in terms of adjusted accuracy (by 18%), F1 score (by 31%), IoU (by 27%), and MCC (by 29%) during the cross-domain validation of MSA-UNET. MSA-ResUNET shows similar improvements, leading to the conclusion that the proposed networks, when trained using domain adaptation, increase robustness and minimise the domain shift between datasets of different complexity.
7

Garea, Alberto S., Dora B. Heras, and Francisco Argüello. "TCANet for Domain Adaptation of Hyperspectral Images." Remote Sensing 11, no. 19 (September 30, 2019): 2289. http://dx.doi.org/10.3390/rs11192289.

Abstract:
The use of Convolutional Neural Networks (CNNs) to solve Domain Adaptation (DA) image classification problems in the context of remote sensing has been proven to provide good results, but at high computational cost. To avoid this problem, a deep learning network for DA in remote sensing hyperspectral images, called TCANet, is proposed. As in a standard CNN, TCANet consists of several stages built from convolutional filters that operate on patches of the hyperspectral image. Unlike in a standard CNN, the coefficients of the filters are obtained through Transfer Component Analysis (TCA). This approach has two advantages. First, TCANet does not require training based on backpropagation, since TCA is itself a learning method that obtains the filter coefficients directly from the input data. Second, DA is performed on the fly, since TCA, in addition to performing dimensionality reduction, obtains components that minimize the difference in data distributions between the source and target domains. To build an operating scheme, TCANet includes an initial stage that exploits the spatial information by providing patches around each sample as input data to the network. An output stage performing feature extraction, which introduces sufficient invariance and robustness into the final features, is also included. Since TCA is sensitive to normalization, a preliminary unsupervised domain shift minimization algorithm based on conditional correlation alignment (CCA) is applied to reduce the difference between the source and target domains. The results of a classification scheme based on CCA and TCANet show that the proposed DA technique outperforms other more complex DA techniques.
8

Griffiths, Matthew P., André J. M. Pugin, and Dariush Motazedian. "Estimating local slope in the time-frequency domain: Velocity-independent seismic imaging in the near surface." Geophysics 85, no. 5 (July 28, 2020): U99–U107. http://dx.doi.org/10.1190/geo2019-0753.1.

Abstract:
Seismic reflection processing for multicomponent data is very time-consuming. To automatically streamline and shorten this process, a new approach for estimating the local event slope (local static shift) in the time-frequency domain is proposed and tested. The seismic event slope is determined by comparing the local phase content of Stockwell-transformed signals. This calculation allows noninterfering arrivals to be aligned by iteratively correcting trace by trace. Alternatively, the calculation can be used in a velocity-independent imaging framework, with the possibility of exporting the determined times and velocities for each common midpoint gather, which leads to a more robust moveout correction. Synthetic models are used to test the robustness of the calculation and compare it directly to an existing method of local slope estimation. Compared to dynamic time warping, our method is more robust to noise but less robust to large time shifts, which limits it to shorter geophone spacings. We apply the calculation to near-surface shear-wave data and compare it directly to semblance/normal-moveout processing. The examples demonstrate that the calculation yields an accurate local slope estimate and can produce sections of better or equal quality to those processed using the conventional approach, with much less user time input. It also serves as a first example of velocity-independent processing applied to near-surface reflection data.
9

Yang, Fengxiang, Zhun Zhong, Hong Liu, Zheng Wang, Zhiming Luo, Shaozi Li, Nicu Sebe, and Shin'ichi Satoh. "Learning to Attack Real-World Models for Person Re-identification via Virtual-Guided Meta-Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3128–35. http://dx.doi.org/10.1609/aaai.v35i4.16422.

Abstract:
Recent advances in person re-identification (re-ID) have led to impressive retrieval accuracy. However, existing re-ID models are challenged by adversarial examples crafted by adding quasi-imperceptible perturbations. Moreover, re-ID systems face the domain shift issue: the training and testing domains are not consistent. In this study, we argue that learning powerful attackers with high universality that work well on unseen domains is an important step in promoting the robustness of re-ID systems. Therefore, we introduce a novel universal attack algorithm called "MetaAttack" for person re-ID. MetaAttack can mislead re-ID models on unseen domains with a universal adversarial perturbation. Specifically, to capture common patterns across different domains, we propose a meta-learning scheme that seeks the universal perturbation via the gradient interaction between meta-train and meta-test sets formed by two datasets. We also take advantage of a virtual dataset (PersonX), instead of real ones, to conduct the meta-test. This scheme not only enables us to learn with more comprehensive variation factors but also mitigates the negative effects caused by the biased factors of real datasets. Experiments on three large-scale re-ID datasets demonstrate the effectiveness of our method in attacking re-ID models on unseen domains. Our final visualization results reveal some new properties of existing re-ID systems, which can guide us in designing more robust re-ID models. Code and supplemental material are available at https://github.com/FlyingRoastDuck/MetaAttack_AAAI21.
10

Sun, Haidong, Cheng Liu, Hao Zhang, Yanming Cheng, and Yongyin Qu. "Research on a Self-Coupling PID Control Strategy for a ZVS Phase-Shift Full-Bridge Converter." Mathematical Problems in Engineering 2021 (March 8, 2021): 1–9. http://dx.doi.org/10.1155/2021/6670382.

Abstract:
As an important part of a high-frequency switching power supply, the control accuracy of the phase-shift full-bridge converter directly affects the efficiency of the power supply. To improve the stability and anti-disturbance ability of phase-shift control systems, this article presents a dual closed-loop control system based on Self-Coupling PID (SC-PID) control and applies the SC-PID control strategy to the voltage control of the phase-shift full-bridge converter. In response to the inherent contradiction of traditional PID, SC-PID breaks the limitation of PID control by introducing a new control idea instead of a weighted summation of the individual gains, which fundamentally resolves the contradiction between overshoot and rapidity. New tuning rules, based on the dimensional attributes of the gains, are then developed to handle system load disturbances, deviations of the output voltage from the reference value, and other problems, with the aim of ensuring a stable output voltage and improving the control effect. The stability of the whole control system is analyzed in the complex frequency domain. Finally, with the same main circuit and parameters, three types of controllers were built and compared in MATLAB simulations. The simulation results show that the control system based on SC-PID has better steady-state accuracy, faster response, and better robustness, which demonstrates the feasibility of the SC-PID control idea.
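For readers who want the baseline being improved upon, below is a conventional discrete PID voltage loop on a toy first-order plant. This is only the reference point: the self-coupling gain structure of SC-PID itself is not reproduced here, and all plant constants and gains are invented for illustration.

```python
# Conventional discrete PID on a toy first-order plant (illustration only;
# the SC-PID gain coupling from the paper is not reproduced here).
dt, T_end = 1e-4, 0.05
kp, ki, kd = 2.0, 120.0, 1e-4       # hand-picked toy gains
a, b = 50.0, 400.0                  # toy plant: dv/dt = -a*v + b*u
v_ref, v, integ, prev_err = 12.0, 0.0, 0.0, 0.0

t = 0.0
while t < T_end:
    err = v_ref - v
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv   # PID control effort
    v += (-a * v + b * u) * dt               # Euler step of the plant
    prev_err = err
    t += dt

print(f"steady-state output: {v:.3f} V (reference {v_ref} V)")
```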
11

Park, Kwang, Jeungmin Joo, Sungsoo Choi, and Kiseon Kim. "Effects of Timing Jitter in TH-BPSK UWB Systems Applying the FCC-Constraint Pulses under Nakagami-m Fading Channel." Synchroinfo Journal 7, no. 6 (2021): 21–25. http://dx.doi.org/10.36724/2664-066x-2021-7-6-21-25.

Abstract:
Ultra-wideband (UWB) technology has attracted much attention as a strong candidate for short-range indoor wireless communication because of its low power consumption, low-cost implementation, and robustness against multipath fading. It uses trains of short pulses that spread the signal energy widely in the frequency domain. Since such a large bandwidth can cause interference with narrowband communication systems, the Federal Communications Commission (FCC) has restricted not only the operating frequency range (3.1 GHz to 10.6 GHz) but also the transmission power level for commercial use of UWB systems. The effects of timing jitter on time-hopping binary phase shift keying (TH-BPSK) UWB systems applying the FCC-constraint pulses are investigated under a flat Nakagami-m fading channel and additive white Gaussian noise (AWGN). The numerical results show that the two FCC-constraint pulses, PSP and MMNHP, have almost the same sensitivity to timing jitter even though they differ in transceiver complexity. Additionally, the additional power required due to timing jitter increases exponentially, while that due to amplitude fading does not exceed 4 dB.
12

Yang, Yu, Qi Ran, Kang Chen, Cheng Lei, Yusheng Zhang, Han Liang, Song Han, and Cong Tang. "Denoising Seismic Data via a Threshold Shrink Method in the Non-Subsampled Contourlet Transform Domain." Mathematical Problems in Engineering 2022 (August 8, 2022): 1–12. http://dx.doi.org/10.1155/2022/1013623.

Abstract:
In seismic exploration, effective seismic signals can be seriously distorted by noise interference, and the performance of traditional seismic denoising approaches can hardly meet the requirements of high-precision seismic exploration. To markedly enhance signal-to-noise ratios (SNR) and adapt to high-precision seismic exploration, this work exploits the non-subsampled contourlet transform (NSCT) and a threshold shrink method to design a new approach for suppressing seismic random noise. NSCT is an excellent multiscale, multidirectional, and shift-invariant image decomposition scheme, which can not only calculate exact contourlet transform coefficients through multiresolution analysis but also give an almost optimal approximation. It has a better high-frequency response and a stronger ability to describe curves and surfaces. Specifically, we propose to utilize the superior performance of NSCT to decompose the noisy seismic data into various frequency and orientation response sub-bands, obtaining transform high frequencies fine enough to effectively separate signals from noise. Besides, we use an adaptive Bayesian threshold shrink method, instead of a traditional handcrafted threshold scheme, for denoising the high-frequency sub-bands of the NSCT coefficients; this pays more attention to the internal characteristics of the data itself, improves the robustness of the method, and better preserves the structural details of effective signals. The proposed method achieves seismic random noise attenuation while retaining effective signals to the maximum degree. Experimental results reveal that the proposed method is superior to wavelet-based and curvelet-based threshold denoising methods, raising the SNR of synthetic seismic data from −8.2293 dB to 8.6838 dB, which is 11.8084 dB and 9.1072 dB higher than two classic sparse-transform-based methods, respectively. Furthermore, we also apply the proposed method to field data, achieving satisfactory results.
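The adaptive Bayesian threshold mentioned here is commonly computed per sub-band as in BayesShrink; the sketch below makes that assumption and operates on a plain coefficient array, since NSCT itself is not available in standard Python libraries:

```python
import numpy as np

def bayes_shrink_soft(coeffs, noise_sigma=None):
    """Soft-threshold one high-frequency sub-band with BayesShrink.

    threshold = sigma_n^2 / sigma_x, where sigma_n is the noise level
    (estimated here via the median absolute deviation if not given)
    and sigma_x is the estimated signal standard deviation.
    """
    c = np.asarray(coeffs, dtype=float)
    if noise_sigma is None:
        noise_sigma = np.median(np.abs(c)) / 0.6745
    sigma_x = np.sqrt(max(np.var(c) - noise_sigma**2, 1e-12))
    thr = noise_sigma**2 / sigma_x
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

# Toy sub-band: sparse spikes plus Gaussian noise.
rng = np.random.default_rng(0)
band = np.zeros(512); band[::64] = 5.0
noisy = band + rng.normal(0, 0.5, 512)
print(np.abs(bayes_shrink_soft(noisy) - band).max())
```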
13

Li, Guang, Zhushi He, Jingtian Tang, Juzhi Deng, Xiaoqiong Liu, and Huijie Zhu. "Dictionary learning and shift-invariant sparse coding denoising for controlled-source electromagnetic data combined with complementary ensemble empirical mode decomposition." Geophysics 86, no. 3 (April 8, 2021): E185–E198. http://dx.doi.org/10.1190/geo2020-0246.1.

Abstract:
Controlled-source electromagnetic (CSEM) data recorded in industrialized areas are inevitably contaminated by strong cultural noise. Traditional noise attenuation methods are often ineffective against intricate aperiodic noise. To address this problem, we have developed a novel noise isolation method based on the fast Fourier transform, complementary ensemble empirical mode decomposition (CEEMD), and shift-invariant sparse coding (SISC, an unsupervised machine-learning algorithm under a data-driven framework). First, large powerline noise is accurately subtracted in the frequency domain. Then, the CEEMD-based algorithm is used to correct the large baseline drift. Finally, taking advantage of the sparsity of periodic signals, SISC is applied to autonomously learn a feature atom (the useful signal with a length of one period) from the detrended signal and recover the CSEM signal with high accuracy. We evaluate the performance of SISC by comparing it with three other promising signal processing methods: mathematical morphology filtering, soft-threshold wavelet filtering, and K-singular-value decomposition (another dictionary learning method) sparse decomposition. Experimental results illustrate that SISC provides the best performance. Robustness tests indicate that SISC can increase the signal-to-noise ratio of a noisy signal from 0 to more than 15 dB. Case studies of synthetic data and real data collected in the Chinese provinces of Sichuan and Yunnan indicate that our method is capable of effectively recovering the useful signal from observed data contaminated with different kinds of strong ambient noise. The curves of U/I and apparent resistivity improved greatly after applying our method. Moreover, our method performs better than the robust estimation method based on correlation analysis.
14

Ke, Zhanghan, Jiayu Sun, Kaican Li, Qiong Yan, and Rynson W. H. Lau. "MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 1140–47. http://dx.doi.org/10.1609/aaai.v36i1.19999.

Abstract:
Existing portrait matting methods either require auxiliary inputs that are costly to obtain or involve multiple stages that are computationally expensive, making them less suitable for real-time applications. In this work, we present a light-weight matting objective decomposition network (MODNet) for portrait matting in real time with a single input image. The key idea behind our efficient design is to optimize a series of sub-objectives simultaneously via explicit constraints. In addition, MODNet includes two novel techniques for improving model efficiency and robustness. First, an Efficient Atrous Spatial Pyramid Pooling (e-ASPP) module is introduced to fuse multi-scale features for semantic estimation. Second, a self-supervised sub-objectives consistency (SOC) strategy is proposed to adapt MODNet to real-world data and address the domain shift problem common to trimap-free methods. MODNet is easy to train in an end-to-end manner. It is much faster than contemporaneous methods, running at 67 frames per second on a 1080Ti GPU. Experiments show that MODNet outperforms prior trimap-free methods by a large margin on both the Adobe Matting Dataset and a carefully designed photographic portrait matting benchmark (PPM-100) proposed by us. Further, MODNet achieves remarkable results on daily photos and videos.
15

Liaqait, Raja Awais, Shermeen Hamid, Salman Sagheer Warsi, and Azfar Khalid. "A Critical Analysis of Job Shop Scheduling in Context of Industry 4.0." Sustainability 13, no. 14 (July 9, 2021): 7684. http://dx.doi.org/10.3390/su13147684.

Abstract:
Scheduling plays a pivotal role in the competitiveness of a job shop facility. The traditional job shop scheduling problem (JSSP) is centralized or semi-distributed. With the advent of Industry 4.0, there has been a paradigm shift in the manufacturing industry from traditional scheduling to smart distributed scheduling (SDS). The implementation of Industry 4.0 results in increased flexibility, high product quality, short lead times, and customized production. Smart/intelligent manufacturing is an integral part of Industry 4.0. The intelligent manufacturing approach converts renewable and nonrenewable resources into intelligent objects capable of sensing, working, and acting in a smart environment to achieve effective scheduling. This paper aims to provide a comprehensive review of centralized and decentralized/distributed JSSP techniques in the context of the Industry 4.0 environment. Firstly, centralized JSSP models and problem-solving methods along with their advantages and limitations are discussed. Secondly, an overview of associated techniques used in the Industry 4.0 environment is presented. The third phase of this paper discusses the transition from traditional job shop scheduling to decentralized JSSP with the aid of the latest research trends in this domain. Finally, this paper highlights futuristic approaches in the JSSP research and application in light of the robustness of JSSP and the current pandemic situation.
16

Singer, Bension Sh., and Svetlana Atramonova. "Vertical electric source in transient marine CSEM: Effect of 3D inhomogeneities on the late time response." Geophysics 78, no. 4 (July 1, 2013): E173–E188. http://dx.doi.org/10.1190/geo2012-0316.1.

Abstract:
The time-domain marine controlled-source electromagnetic method, based on injection of electric currents into the sea via a vertical cable and measurements of the transient vertical electric field, is characterized by high sensitivity to resistive reservoirs and robustness with respect to distorting effects of lateral heterogeneities. This is due to the fact that a vertical electric dipole induces in a stratified medium only a transverse magnetic (TM) field. In addition, the vertical electric field is not directly contributed by the transverse electric (TE) component of the field scattered by heterogeneities. Nevertheless, a closed-form solution shows that the first order effect of a lateral heterogeneity displays itself as a parallel shift of the late time response curves. The effect is also observed in 3D responses evaluated by numerical solutions of the integral equation of the modified iterative dissipative method. Moreover, in “favorable” conditions, scattering on heterogeneities may change the law of the field decay. The parallel shift of the late time curves is caused by vertical polarization of the scatterer, while its horizontal polarization leads to an abnormally fast decay of the vertical electric field. The latter effect, observed against the background of the general decay of the free electromagnetic field, can be associated with “energy channeling from the TM to TE field.” Neither of the effects necessarily deteriorates the method sensitivity. Unlike the vertical electric field, the horizontal electromagnetic field is contributed by the scattered TE field. As a result, the abnormally fast decay of the vertical electric field is accompanied by an abnormally slow decay of the horizontal components. The transient horizontal electric field may become almost insensitive to the reservoir resistivity. In addition to unrealistically harsh requirements to the transmitter tilt, this may render accurate measurements of the horizontal electromagnetic field of a vertical electric bipole not feasible.
17

Meeks, Shannon L., Alexander M. Sevy, John F. Healey, Wei Deng, P. Clint Spiegel, and Renhao Li. "Cooperative Binding Of Anti-Factor VIII Inhibitors and Induced Conformational Change Detected By Hydrogen-Deuterium Exchange Mass Spectrometry." Blood 122, no. 21 (November 15, 2013): 1088. http://dx.doi.org/10.1182/blood.v122.21.1088.1088.

Abstract:
The development of anti-factor VIII (fVIII) antibodies (inhibitors) is a significant complication in the management of patients with hemophilia A, leading to significant increases in morbidity and treatment cost. Using a panel of anti-fVIII monoclonal antibodies to different epitopes on fVIII, we recently have shown that epitope specificity, inhibitor kinetics, and time to maximum inhibition are more important than inhibitor titer in predicting response to fVIII and the combination of fVIII and recombinant factor VIIa. Thus, the ability to quickly map the epitope spectrum of patient plasma using a clinically feasible assay may fundamentally change how clinicians approach the treatment of high-titer inhibitor patients. To this end, we have characterized the binding epitopes of 4 monoclonal antibodies (MAbs) targeted against the fVIII C2 domain by hydrogen-deuterium exchange coupled with liquid chromatography-mass spectrometry (HDX-MS). The MAbs included both classical (inhibiting binding of fVIII to von Willebrand factor and phospholipid) and non-classical inhibitors (inhibiting activation of fVIII), which target separate regions of the fVIII C2 domain and have distinct inhibitory mechanisms. HDX-MS analysis showed clear differences in binding patterns between classical and non-classical inhibitors, centering on the protruding hydrophobic loop at Met2199. The binding epitopes of classical and non-classical inhibitors mapped by HDX-MS agree well with previously reported ones characterized by structural studies and mutagenesis analysis. Classical and non-classical inhibitors could be distinguished by a limited subset of C2-derived peptides, simplifying analysis significantly. In addition, HDX-MS was able to detect subtle differences in the binding patterns of various classical inhibitors, based on the HDX protection pattern around the hydrophobic loop at Leu2251. Interestingly, two MAbs, G99 and 3E6, exhibited an observable shift in HDX protection when bound to C2 as a ternary complex, as opposed to when bound individually, providing evidence for cooperative binding of these two MAbs (Figure 1). In summary, our results demonstrate the effectiveness and robustness of the HDX-MS method in the rapid epitope mapping of fVIII inhibitors. This method can be expanded to map epitopes of inhibitors against other domains of fVIII, potentially leading to more personalized treatment of hemophilia A patients. Disclosures: No relevant conflicts of interest to declare.
18

Liu, Xiaonan, and Yufei Ma. "Tunable Diode Laser Absorption Spectroscopy Based Temperature Measurement with a Single Diode Laser Near 1.4 μm." Sensors 22, no. 16 (August 15, 2022): 6095. http://dx.doi.org/10.3390/s22166095.

Abstract:
The rapidly changing and wide dynamic range of combustion temperature in scramjet engines presents a major challenge to existing test techniques. Tunable diode laser absorption spectroscopy (TDLAS) based temperature measurement has the advantages of high sensitivity, fast response, and compact structure. In this invited paper, a temperature measurement method based on the TDLAS technique with a single diode laser is demonstrated. A continuous-wave (CW), distributed-feedback (DFB) diode laser with an emission wavelength near 1.4 μm was used for temperature measurement, covering two water vapor (H2O) absorption lines located at 7153.749 cm−1 and 7154.354 cm−1 simultaneously. The output wavelength of the diode laser was calibrated according to the two absorption peaks in the time domain. Using this strategy, the TDLAS system has the advantages of immunity to laser wavelength shift, a simple system structure, reduced cost, and increased robustness. The line intensity of the two target absorption lines at room temperature is about one-thousandth of that at high temperature, which avoids the measurement error caused by H2O in the environment. The system was tested on a McKenna flat-flame burner and a scramjet model engine. Compared with the results measured by the CARS technique and theoretical calculation, this TDLAS system showed less than 4% temperature error on the McKenna flat-flame burner. On the scramjet model engine, the measured results showed that the TDLAS system has an excellent dynamic range and fast response. The TDLAS system reported here could be used in real engines in the future.
19

Sharifi, Ayyoob. "Urban Resilience Assessment: Mapping Knowledge Structure and Trends." Sustainability 12, no. 15 (July 23, 2020): 5918. http://dx.doi.org/10.3390/su12155918.

Abstract:
The literature on urban resilience assessment has grown rapidly over the past two decades. This paper aims to provide a better understanding of the state of knowledge on urban resilience assessment through mapping the knowledge domain and highlighting emerging trends during different periods. The objects of study were 420 papers published in the Web of Science from 1998 to 2020. Science mapping was done using VOSviewer and CiteSpace, two widely known software tools for bibliometrics analysis and scientometric visualization. The results show that research published on urban resilience assessment was very limited and fragmented until 2009, and the focus has mainly been on risk mitigation and vulnerability assessment. The intellectual base grew between 2010 and 2014, when a paradigm shift from approaches based on robustness and reliability toward more adaptation-oriented approaches occurred. Finally, the annual publication trends have grown rapidly over the past five years and there has been more emphasis on climate change adaptation and flood resilience. Overall, in terms of dimensional focus, more attention has been paid to infrastructural, institutional, and environmental aspects at the expense of social and economic dimensions. In addition to information on thematic focus and evolution, this paper also provides other bibliometrics information on the influential authors, institutions, journals, and publications that lay the foundation of the field and can be used by various interested groups as points of reference to gain better knowledge about the structure and thematic evolution of urban resilience assessment. The paper concludes by highlighting gaps and making some recommendations for future improvement of the field. Major gaps are related to assessing resilience against socio-economic and health risks (e.g., economic recession and pandemics such as COVID-19).
20

Ramachandran, K., and Sudhir Voleti. "Business Process Outsourcing (BPO): Emerging Scenario and Strategic Options for IT-enabled Services." Vikalpa: The Journal for Decision Makers 29, no. 1 (January 2004): 49–62. http://dx.doi.org/10.1177/0256090920040105.

Abstract:
The paradigm shift that the Internet has brought about in communication has opened up a plethora of opportunities for outsourcing business processes (BPO) across continents. Success lessons in manufacturing sub-contracting are found to be relevant for understanding the logic of BPO. Outsourcing involves transferring certain value-contributing activities or processes to another firm to save costs and to let the principal focus on its areas of key competence. The possibilities of disaggregating value elements for the purpose of creating value in them at the sub-contractors' premises, with final aggregation and synthesis at the parent organization, are determined by the nature of the industry, limitations of coordination and control, product maturity, and the level of inter-firm competition. IT-enabled services (ITES) include services that can be outsourced using the powers of IT; the extent to which this is possible depends on the industry, location, time, costs, and managerial perception of the risks involved. The Internet has facilitated the execution of several activities, previously done within geographical proximity to the firm, from remote low-labour-cost locations, drawing on both transaction cost and production cost efficiencies. Some of the factors that come in the way of parents setting up their own operations in India, and that have significant implications for the growth trajectory of Indian BPOs, are: the direct cost of operations and scale economies; a long-term assessment of India as a low-cost centre; a cost-benefit assessment of own vs. rented operations; possible loss of control over transactions, and the confidentiality and security of the data if an outsider handles them; brand implications of a perceived drop in quality; and the robustness of existing systems and processes. Many BPO firms do not seem to realize the possible exit barriers and the strategies needed to manage exit, if necessary. What happened in the dot-com era can very well happen in the BPO space too, unless care is taken to manage this rapid growth while retaining productivity and quality. Two key capabilities are required for success in the ITES space: capabilities to understand customer needs in specific domains and acquire business (BD capabilities), and capabilities to execute efficiently (Ops capabilities). ITES firms are likely to bifurcate into two parts based on these two critical success factors. The successful segregation of value elements in a number of processes has enabled value configuration in as many ways as required by customers, for both the product and service components of customer value. The current trend in outsourcing will accelerate when such analysis-synthesis becomes routine, not least because the capabilities required depend not only on technical skills and knowledge in a domain but also on strong process capabilities. The trend of outsourcing is likely to continue to grow in the future, despite temporary political protests, because of the robust arguments outsourcing finds for itself in the economics literature, in terms of both transaction and production cost advantages. Sub-contractors need robust systems and processes, along with adequate domain knowledge and assured physical infrastructure, for this to happen. In any case, Indian BPO firms have to consistently prove their capabilities to deliver and make themselves nearly indispensable to the parent. This will involve not only growing technical and domain expertise but also refinement in systems and practices, while keeping costs under control. In essence, BPO firms have to manage their consolidation and growth challenges simultaneously.
21

Hacimurtazaoglu, Murat, and Kemal Tutuncu. "LSB-based pre-embedding video steganography with rotating & shifting poly-pattern block matrix." PeerJ Computer Science 8 (January 6, 2022): e843. http://dx.doi.org/10.7717/peerj-cs.843.

Abstract:
Background. In the area of data hiding, video steganography is more advantageous than other steganography techniques since it uses video as its cover medium. For any video steganography, a good trade-off among robustness, imperceptibility, and payload must be created and maintained. Even though it has the advantage of capacity, video steganography has a robustness problem, especially when the spatial domain is used to implement it. Transformation operations and statistical attacks can harm the secret data. Thus, the ideal video steganography technique must provide high imperceptibility, high payload, and resistance to visual, statistical, and transformation-based steganalysis attacks. Methods. One of the most common spatial methods for hiding data within the cover medium is the Least Significant Bit (LSB) method. In this study, an LSB-based video steganography application that uses a poly-pattern key block matrix (KBM) as the key is proposed. The key is a 64 × 64 pixel block matrix that consists of 16 sub-pattern blocks of 16 × 16 pixels. To increase the security of the proposed approach, the sub-patterns in the KBM are allowed to shift in four directions and rotate by up to 270° depending on user preference and logical operations. For additional security, XOR and AND logical operations are used to determine whether to choose the next predetermined 64 × 64 pixel block or to jump to another pixel block in the cover video frame when placing the KBM to embed the secret data. The combination of a variable KBM structure and logical operations for embedding the secret data distinguishes the proposed algorithm from previous LSB-based video steganography studies. Results. Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR) were calculated to assess the imperceptibility (resistance against visual attacks) of the proposed algorithm. Depending on the secret message length, the proposed algorithm obtained MSE, SSIM, and PSNR values of 0.00066, 0.99999, and 80.01458 dB for a 42.8 Kb secret message and 0.00173, 0.99999, and 75.72723 dB for a 109 Kb secret message, respectively. These results are better than those of classic LSB and of the LSB-based video steganography approaches in the literature. Since the proposed system embeds an equal amount of data in each video frame, less data will be lost under transformation operations, and the lost portions can be recovered from the surrounding text with natural language processing. The variable structure of the KBM, the logical operations, and the extra security measures make the proposed system more secure and complex, increasing its unpredictability and resistance to statistical attacks. Thus, the proposed method provides high imperceptibility and resistance to visual, statistical, and transformation-based attacks while allowing an acceptable, even high, payload.
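A much-simplified sketch of the two ingredients named above, an LSB write and a rotatable/shiftable key block, assuming an 8-bit grayscale frame as a NumPy array (the full poly-pattern KBM and the XOR/AND block-selection logic are not reproduced):

```python
import numpy as np

def transform_key_block(kbm, rot_quarters=1, shift=(4, 0)):
    """Rotate the key block by 90-degree steps and cyclically shift it."""
    k = np.rot90(kbm, k=rot_quarters)
    return np.roll(k, shift=shift, axis=(0, 1))

def embed_lsb(block, bits, key_block):
    """Write secret bits into the LSBs of pixels selected by the key."""
    out = block.copy()
    positions = np.argwhere(key_block == 1)[: len(bits)]
    for (r, c), bit in zip(positions, bits):
        out[r, c] = (out[r, c] & 0xFE) | bit
    return out

rng = np.random.default_rng(0)
frame_block = rng.integers(0, 256, (64, 64), dtype=np.uint8)
kbm = rng.integers(0, 2, (64, 64), dtype=np.uint8)   # toy key pattern
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(frame_block, secret, transform_key_block(kbm))
# Per-pixel change is at most 1 gray level, hence the imperceptibility.
print(int(np.abs(stego.astype(int) - frame_block.astype(int)).max()))
```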
22

Bai, Haoyue, Rui Sun, Lanqing Hong, Fengwei Zhou, Nanyang Ye, Han-Jia Ye, S. H. Gary Chan, and Zhenguo Li. "DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6705–13. http://dx.doi.org/10.1609/aaai.v35i8.16829.

Abstract:
While deep learning demonstrates its strong ability to handle independent and identically distributed (IID) data, it often suffers from out-of-distribution (OoD) generalization, where the test data come from another distribution (w.r.t. the training one). Designing a general OoD generalization framework for a wide range of applications is challenging, mainly due to different kinds of distribution shifts in the real world, such as the shift across domains or the extrapolation of correlation. Most of the previous approaches can only solve one specific distribution shift, leading to unsatisfactory performance when applied to various OoD benchmarks. In this work, we propose DecAug, a novel decomposed feature representation and semantic augmentation approach for OoD generalization. Specifically, DecAug disentangles the category-related and context-related features by orthogonalizing the two gradients (w.r.t. intermediate features) of losses for predicting category and context labels, where category-related features contain causal information of the target object, while context-related features cause distribution shifts between training and test data. Furthermore, we perform gradient-based augmentation on context-related features to improve the robustness of learned representations. Experimental results show that DecAug outperforms other state-of-the-art methods on various OoD datasets, which is among the very few methods that can deal with different types of OoD generalization challenges.
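The disentangling step can be written in a few lines. A sketch of the gradient orthogonalization as we read it, on plain NumPy vectors standing in for the two losses' gradients w.r.t. intermediate features:

```python
import numpy as np

def orthogonalize(g_category, g_context, eps=1e-12):
    """Remove from the category gradient its component along the
    context gradient, so the two objectives shape disjoint directions."""
    proj = (g_category @ g_context) / (g_context @ g_context + eps)
    return g_category - proj * g_context

g_cat = np.array([1.0, 2.0, 0.5])
g_ctx = np.array([0.0, 1.0, 1.0])
g_cat_perp = orthogonalize(g_cat, g_ctx)
print(g_cat_perp @ g_ctx)  # ~0: category gradient orthogonal to context
```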
23

Gokhale, Tejas, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, and Yezhou Yang. "Attribute-Guided Adversarial Training for Robustness to Natural Perturbations." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7574–82. http://dx.doi.org/10.1609/aaai.v35i9.16927.

Abstract:
While existing work in robust deep learning has focused on small pixel-level norm-based perturbations, this may not account for perturbations encountered in several real world settings. In many such cases although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on adversarial perturbations generated from the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations --- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
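The min-max structure described here has the familiar adversarial-training skeleton. A generic PyTorch sketch of the inner maximization and outer minimization follows (standard pixel-space PGD is shown for concreteness; the paper's attribute-space perturbation generator is not reproduced):

```python
import torch
import torch.nn as nn

def pgd_inner_max(model, x, y, eps=0.1, step=0.02, iters=5):
    """Inner maximization: find a perturbation that maximizes the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.clamp_(-eps, eps)          # stay inside the threat model
        delta.grad.zero_()
    return delta.detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))

delta = pgd_inner_max(model, x, y)           # inner max
opt.zero_grad()
nn.functional.cross_entropy(model(x + delta), y).backward()  # outer min
opt.step()
```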
24

Kim, Jaehun. "Increasing trust in complex machine learning systems." ACM SIGIR Forum 55, no. 1 (June 2021): 1–3. http://dx.doi.org/10.1145/3476415.3476435.

Abstract:
Machine learning (ML) has become a core technology for many real-world applications. Modern ML models are applied to unprecedentedly complex and difficult challenges, including very large and subjective problems. For instance, applications towards multimedia understanding have been advanced substantially. Here, it is already prevalent that cultural/artistic objects such as music and videos are analyzed and served to users according to their preference, enabled through ML techniques. One of the most recent breakthroughs in ML is Deep Learning (DL), which has been immensely adopted to tackle such complex problems. DL allows for higher learning capacity, making end-to-end learning possible, which reduces the need for substantial engineering effort, while achieving high effectiveness. At the same time, this also makes DL models more complex than conventional ML models. Reports in several domains indicate that such more complex ML models may have potentially critical hidden problems: various biases embedded in the training data can emerge in the prediction, extremely sensitive models can make unaccountable mistakes. Furthermore, the black-box nature of the DL models hinders the interpretation of the mechanisms behind them. Such unexpected drawbacks result in a significant impact on the trustworthiness of the systems in which the ML models are equipped as the core apparatus. In this thesis, a series of studies investigates aspects of trustworthiness for complex ML applications, namely the reliability and explainability. Specifically, we focus on music as the primary domain of interest, considering its complexity and subjectivity. Due to this nature of music, ML models for music are necessarily complex for achieving meaningful effectiveness. As such, the reliability and explainability of music ML models are crucial in the field. The first main chapter of the thesis investigates the transferability of the neural network in the Music Information Retrieval (MIR) context. Transfer learning, where the pre-trained ML models are used as off-the-shelf modules for the task at hand, has become one of the major ML practices. It is helpful since a substantial amount of the information is already encoded in the pre-trained models, which allows the model to achieve high effectiveness even when the amount of the dataset for the current task is scarce. However, this may not always be true if the "source" task which pre-trained the model shares little commonality with the "target" task at hand. An experiment including multiple "source" tasks and "target" tasks was conducted to examine the conditions which have a positive effect on the transferability. The result of the experiment suggests that the number of source tasks is a major factor of transferability. Simultaneously, it is less evident that there is a single source task that is universally effective on multiple target tasks. Overall, we conclude that considering multiple pre-trained models or pre-training a model employing heterogeneous source tasks can increase the chance for successful transfer learning. The second major work investigates the robustness of the DL models in the transfer learning context. The hypothesis is that the DL models can be susceptible to imperceptible noise on the input. This may drastically shift the analysis of similarity among inputs, which is undesirable for tasks such as information retrieval. Several DL models pre-trained in MIR tasks are examined for a set of plausible perturbations in a real-world setup. 
Based on a proposed sensitivity measure, the experimental results indicate that all the DL models were substantially vulnerable to perturbations, compared to a traditional feature encoder. They also suggest that the experimental framework can be used to test the pre-trained DL models for measuring robustness. In the final main chapter, the explainability of black-box ML models is discussed. In particular, the chapter focuses on the evaluation of the explanation derived from model-agnostic explanation methods. With black-box ML models having become common practice, model-agnostic explanation methods have been developed to explain a prediction. However, the evaluation of such explanations is still an open problem. The work introduces an evaluation framework that measures the quality of the explanations employing fidelity and complexity. Fidelity refers to the explained mechanism's coherence to the black-box model, while complexity is the length of the explanation. Throughout the thesis, we gave special attention to the experimental design, such that robust conclusions can be reached. Furthermore, we focused on delivering machine learning framework and evaluation frameworks. This is crucial, as we intend that the experimental design and results will be reusable in general ML practice. As it implies, we also aim our findings to be applicable beyond the music applications such as computer vision or natural language processing. Trustworthiness in ML is not a domain-specific problem. Thus, it is vital for both researchers and practitioners from diverse problem spaces to increase awareness of complex ML systems' trustworthiness. We believe the research reported in this thesis provides meaningful stepping stones towards the trustworthiness of ML.
25

Dockal, Michael, Johannes Brandstetter, Martin Ludwiczek, Georg Kontaxis, Markus Fries, M. C. L. G. D. Thomassen, A. Heinzmann, et al. "Peptides Binding to Kunitz Domain 1 of Tissue Factor Pathway Inhibitor (TFPI) Inhibit All Functions of TFPI and Improve Thrombin Generation of Hemophilia Plasma." Blood 118, no. 21 (November 18, 2011): 2245. http://dx.doi.org/10.1182/blood.v118.21.2245.2245.

Abstract:
Blood coagulation is initiated by the tissue factor-factor VIIa (TF-FVIIa) complex, which cleaves and activates coagulation factor X to Xa (FXa). Tissue factor pathway inhibitor (TFPI) controls this key process and thus plays a crucial role in maintaining the delicate balance of pro- and anticoagulant processes. Inhibition of TFPI in hemophilia plasma and in a rabbit model of hemophilia has been shown to improve coagulation and hemostasis (Nordfang et al., Thromb Haemost. 1991;66:464; Erhardsen et al., Blood Coagulation and Fibrinolysis 1995;6:388). TFPI is a Kunitz-type protease inhibitor that inhibits FXa and TF-FVIIa. TFPI is a slow, tight-binding FXa inhibitor which rapidly forms a loose FXa-TFPI complex that slowly isomerises to a tight FXa-TFPI* complex. The FXa-TFPI* complex inhibits TF-FVIIa by formation of a quaternary FXa-TFPI-TF-FVIIa complex. Using a library approach, we selected a peptide which binds and inhibits TFPI. We located the binding site of the antagonistic peptide on TFPI by NMR spectroscopy. Residues of TFPI undergoing the strongest chemical shift changes were found exclusively on Kunitz domain 1 (KD1). The NMR data were confirmed by solving the crystal structure of KD1 in complex with the antagonistic peptide at 2.55 Å resolution. As in related Kunitz domains, the robustness of this approximately 60-amino-acid-long folding module largely depends on stabilization by the three disulfide bonds and a hydrophobic cluster of three phenylalanines. The disulfide bridging of the P2 residue induces conformational constraints on the reactive centre loop (RCL), thereby establishing an extended RCL conformation; consequently, the amino acid side chains flanking the "scissile" peptide bond are exposed to the solvent. This RCL geometry also explains why the distorted, improperly activated scissile peptide bond is hardly cleaved. Whereas Cys-Lys/Arg is a rather conserved P2-P1 motif, reflecting the topological restraints in Kunitz protease inhibitors, proline at position P3 induces an additional conformational constraint on the RCL, which would not be possible in the narrow active site of FXa. Proline at P3 and, to a lesser extent, Lys rather than Arg at P1 thus represent two major specificity determinants of KD1 towards FVIIa over FXa. The structure of the 20-mer peptide can be segmented into (i) an N-terminal anchor; (ii) an Ω-shaped loop; (iii) an intermediate segment; (iv) a tight glycine loop; and (v) a C-terminal α-helix that is anchored to KD1 at its RCL and two-strand β-sheet. The contact surface has an overall hydrophobic character with some charged hot spots, but the major driving force of complex formation is steric surface complementarity. One of the optimized peptides, which binds to KD1 of TFPI, had an affinity for TFPI of <1 nM. In a model system, the peptide blocked both FXa inhibition by TFPI (IC50=5 nM) and inhibition of TF-FVIIa-catalyzed FX activation by TFPI (IC50=5.7 nM). In FVIII-depleted plasma, the peptide enhanced thrombin generation 9-fold (EC50=4 nM). Detailed kinetic analysis in a model system showed that the peptide almost fully inhibited TFPI and prevented the transition from the loose to the tight FXa-TFPI* complex, but did not affect formation of the loose FXa-TFPI complex. Since KD1 binds to the active site of FVIIa and KD2 to the active site of FXa, our kinetic data with the KD1-binding peptide show that KD1 is not only important for FVIIa inhibition but is also required for FXa inhibition, i.e. for the transition from the loose to the tight FXa-TFPI* complex. In line with this mechanism, the peptide did not affect FXa inhibition by the isolated KD2. The peptide was also able to dissociate preformed FXa-TFPI* and FXa-TFPI-TF-FVIIa complexes and liberate active FXa and TF-FVIIa. In summary, we developed a peptide that binds to KD1 of TFPI, prevents FXa-TFPI and FXa-TFPI-TF-FVIIa complex formation, and enhances coagulation under hemophilia conditions. Disclosures: Dockal: Baxter Innovations GmbH: Employment. Brandstetter: University of Salzburg: Employment. Ludwiczek: Baxter Innovations GmbH: Employment. Kontaxis: University of Vienna: Employment. Fries: Baxter Innovations GmbH: Employment. Thomassen: Maastricht University: Employment. Heinzmann: Maastricht University: Employment. Ehrlich: Baxter Innovations GmbH: Employment. Prohaska: Baxter Innovations GmbH: Employment. Hartmann: Baxter Innovations GmbH: Employment. Rosing: Maastricht University: Employment. Scheiflinger: Baxter Innovations GmbH: Employment.
26

Dexl, Jakob, Michaela Benz, Petr Kuritcyn, Thomas Wittenberg, Volker Bruns, Carol Geppert, Arndt Hartmann, Bernd Bischl, and Jann Goschenhofer. "Robust Colon Tissue Cartography with Semi-Supervision." Current Directions in Biomedical Engineering 8, no. 2 (August 1, 2022): 344–47. http://dx.doi.org/10.1515/cdbme-2022-1088.

Abstract:
We explore the task of tissue classification for colon cancer histology in a low-label regime, comparing a semi-supervised and a supervised learning strategy in a series of experiments. Further, we investigate the model robustness w.r.t. distribution shifts in the unlabeled data and domain shifts across different scanners to prove their practicality in a histology context. By utilizing unlabeled data in addition to n_l = 1000 labeled tiles per class, we achieve a substantial increase in accuracy from 89.9% to 91.4%.
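The abstract does not specify which semi-supervised algorithm was used, so the following is only a rough sketch of one widely used low-label strategy, confidence-thresholded pseudo-labeling; the model, tensors, and threshold here are illustrative assumptions, not the paper's actual setup:

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, labeled_x, labeled_y, unlabeled_x, threshold=0.95):
    """Supervised loss plus a pseudo-label loss on confident unlabeled tiles.

    `model`, the tensors and `threshold` are illustrative placeholders, not
    the actual setup used in the cited paper.
    """
    sup_loss = F.cross_entropy(model(labeled_x), labeled_y)

    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep only confident predictions

    unsup_loss = (F.cross_entropy(model(unlabeled_x), pseudo_y,
                                  reduction="none") * mask).mean()
    return sup_loss + unsup_loss
```

Above the threshold, the model's own confident predictions on unlabeled tiles act as extra training targets, which is one way unlabeled data can lift accuracy in a low-label regime.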
27

Nützel, Matthias, Sabine Brinkop, Martin Dameris, Hella Garny, Patrick Jöckel, Laura L. Pan, and Mijeong Park. "Climatology and variability of air mass transport from the boundary layer to the Asian monsoon anticyclone." Atmospheric Chemistry and Physics 22, no. 24 (December 14, 2022): 15659–83. http://dx.doi.org/10.5194/acp-22-15659-2022.

Abstract. Air masses within the Asian monsoon anticyclone (AMA) show anomalous signatures in various trace gases. In this study, we investigate how air masses are transported from the planetary boundary layer (PBL) to the AMA based on multiannual trajectory analyses. In particular, we focus on the climatological perspective and on the intraseasonal and interannual variability. Further, we also discuss the relation of the interannual east–west displacements of the AMA with the transport from the PBL to the AMA. To this end we employ backward trajectories, which were computed for 14 northern summer (June–August) seasons using reanalysis data. Further, we backtrack forward trajectories from a free-running chemistry–climate model (CCM) simulation, which includes parametrized Lagrangian convection. The analysis of 30 monsoon seasons of this additional model data set helps us to carve out robust or sensitive features of transport from the PBL to the AMA with respect to the employed model. Results from both the trajectory model and the Lagrangian CCM emphasize the robustness of the three-dimensional transport pathways from the top of the PBL to the AMA. Air masses are transported upwards on the south-eastern side of the AMA and subsequently recirculate within the full AMA domain, where they are lifted upwards on the eastern side and transported downwards on the western side of the AMA. The contributions of different PBL source regions to AMA air are robust across the two models for the Tibetan Plateau (TP; 17 % vs. 15 %) and the West Pacific (around 12 %). However, the contributions from the Indian subcontinent and Southeast Asia are considerably larger in the Lagrangian CCM data, which might indicate an important role of convective transport in PBL-to-AMA transport for these regions. The analysis of both model data sets highlights the interannual and intraseasonal variability of the PBL source regions of the AMA. Although there are differences in the transport pathways, the interannual east–west displacement of the AMA – which we find to be related to the monsoon Hadley index – is not connected to considerable differences in the overall transport characteristics. Our results from the trajectory model data reveal a strong intraseasonal signal in the transport from the PBL over the TP to the AMA: there is a weak contribution of TP air masses in early June (less than 4 % of the AMA air masses), whereas in August the contribution is considerable (roughly 24 %). The evolution of the contribution from the TP is consistent across the two modelling approaches and is related to the northward shift of the subtropical jet and the AMA during this period. This finding may help to reconcile previous results and further highlights the need of taking the subseasonal (and interannual) variability of the AMA and associated transport into account.
28

Jung, Donghwi, Seungyub Lee, and Joong Hoon Kim. "Robustness and Water Distribution System: State-of-the-Art Review." Water 11, no. 5 (May 9, 2019): 974. http://dx.doi.org/10.3390/w11050974.

Abstract:
The resilience of a water distribution system (WDS) is defined as its ability to prepare for, respond to, and recover from a catastrophic failure event such as an earthquake or intentional contamination. Robustness (ROB), one of the components of resilience, is the ability to maintain functionality to meet customer demands. Recently, the traditional probability-based system performance perspective has begun to shift toward the ROB and system performance variation point of view. This paper provides a state-of-the-art review of WDS ROB-based approaches proposed in three research categories: design, operation, and management. While few pioneering works have been published in the latter two areas, an ROB indicator was proposed and thoroughly investigated for WDS design. Some future works are then recommended in each of the three domains to promote developments in WDS ROB. Finally, a brief summary of this paper is presented, from which the final conclusions of the state-of-the-art review and recommendations are drawn. The new paradigm of WDS ROB-based design, operation, and management is in its infant stage and should be carved out in future studies.
29

Shergadwala, Murtuza N., Himabindu Lakkaraju, and Krishnaram Kenthapadi. "A Human-Centric Perspective on Model Monitoring." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10, no. 1 (October 14, 2022): 173–83. http://dx.doi.org/10.1609/hcomp.v10i1.21997.

Abstract:
Predictive models are increasingly used to make various consequential decisions in high-stakes domains such as healthcare, finance, and policy. It is therefore critical to ensure that these models make accurate predictions, are robust to shifts in the data, do not rely on spurious features, and do not unduly discriminate against minority groups. To this end, several approaches spanning various areas such as explainability, fairness, and robustness have been proposed in recent literature. Such approaches need to be human-centered, as they must support users' understanding of the models. However, there is little to no research on understanding the needs and challenges in monitoring deployed machine learning (ML) models from a human-centric perspective. To address this gap, we conducted semi-structured interviews with 13 practitioners who are experienced with deploying ML models and engaging with customers spanning domains such as financial services, healthcare, hiring, online retail, computational advertising, and conversational assistants. We identified various human-centric challenges and requirements for model monitoring in real-world applications. Specifically, we found that relevant stakeholders would want model monitoring systems to provide clear, unambiguous, and easy-to-understand insights that are readily actionable. Furthermore, our study also revealed that stakeholders desire customization of model monitoring systems to cater to domain-specific use cases.
30

J. Thiagarajan, Jayaraman, Vivek Narayanaswamy, Rushil Anirudh, Peer-Timo Bremer, and Andreas Spanias. "Accurate and Robust Feature Importance Estimation under Distribution Shifts." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7891–98. http://dx.doi.org/10.1609/aaai.v35i9.16963.

Abstract:
With increasing reliance on the outcomes of black-box models in critical applications, post-hoc explainability tools that do not require access to the model internals are often used to enable humans to understand and trust these models. In particular, we focus on the class of methods that can reveal the influence of input features on the predicted outputs. Despite their widespread adoption, existing methods are known to suffer from one or more of the following challenges: computational complexity, large uncertainties and, most importantly, inability to handle real-world domain shifts. In this paper, we propose PRoFILE (Producing Robust Feature Importances using Loss Estimates), a novel feature importance estimation method that addresses all these challenges. Through the use of a loss estimator jointly trained with the predictive model and a causal objective, PRoFILE can accurately estimate the feature importance scores even under complex distribution shifts, without any additional re-training. To this end, we also develop learning strategies for training the loss estimator, namely contrastive and dropout calibration, and find that it can effectively detect distribution shifts. Using empirical studies on several benchmark image and non-image data, we show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
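PRoFILE's full training objective (contrastive and dropout calibration plus a causal objective) cannot be reproduced from the abstract alone; the sketch below only illustrates the core ingredient of a loss estimator trained jointly with the predictive model, with all architectures and hyperparameters as placeholder assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch: a predictor f and a loss estimator g trained jointly.
# Architectures and the joint objective are illustrative placeholders; the
# paper additionally uses contrastive/dropout calibration and a causal term.
f = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))   # predictor
g = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))   # loss estimator
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

x = torch.randn(64, 16)              # toy data standing in for real features
y = torch.randint(0, 3, (64,))

for _ in range(100):
    opt.zero_grad()
    per_sample_loss = F.cross_entropy(f(x), y, reduction="none")
    est = g(x).squeeze(1)
    # predictor loss + regression of the estimator onto the (detached) true loss
    loss = per_sample_loss.mean() + F.mse_loss(est, per_sample_loss.detach())
    loss.backward()
    opt.step()
```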
31

Wen, Fu-Lai, Chun Wai Kwan, Yu-Chiun Wang, and Tatsuo Shibata. "Autonomous epithelial folding induced by an intracellular mechano–polarity feedback loop." PLOS Computational Biology 17, no. 12 (December 6, 2021): e1009614. http://dx.doi.org/10.1371/journal.pcbi.1009614.

Abstract:
Epithelial tissues form folded structures during embryonic development and organogenesis. Whereas substantial efforts have been devoted to identifying mechanical and biochemical mechanisms that induce folding, whether and how their interplay synergistically shapes epithelial folds remains poorly understood. Here we propose a mechano–biochemical model for dorsal fold formation in the early Drosophila embryo, an epithelial folding event induced by shifts of cell polarity. Based on experimentally observed apical domain homeostasis, we couple cell mechanics to polarity and find that mechanical changes following the initial polarity shifts alter cell geometry, which in turn influences the reaction-diffusion of polarity proteins, thus forming a feedback loop between cell mechanics and polarity. This model can induce spontaneous fold formation in silico, recapitulate polarity and shape changes observed in vivo, and confer robustness to tissue shape change against small fluctuations in mechanics and polarity. These findings reveal emergent properties of a developing epithelium under control of intracellular mechano–polarity coupling.
32

Mridha, Muhammad Firoz, Abu Quwsar Ohi, Muhammad Mostafa Monowar, Md Abdul Hamid, Md Rashedul Islam, and Yutaka Watanobe. "U-Vectors: Generating Clusterable Speaker Embedding from Unlabeled Data." Applied Sciences 11, no. 21 (October 27, 2021): 10079. http://dx.doi.org/10.3390/app112110079.

Abstract:
Speaker recognition deals with recognizing speakers by their speech. Most speaker recognition systems are built upon two stages: the first stage extracts low-dimensional correlation embeddings from speech, and the second performs the classification task. The robustness of a speaker recognition system mainly depends on the extraction process of the speech embeddings, which are primarily pre-trained on a large-scale dataset. As the embedding systems are pre-trained, the performance of speaker recognition models greatly depends on the domain adaptation policy and may degrade if trained using inadequate data. This paper introduces a speaker recognition strategy dealing with unlabeled data, which generates clusterable embedding vectors from small fixed-size speech frames. The unsupervised training strategy involves the assumption that a small speech segment should include a single speaker. Based on this assumption, a pairwise constraint is constructed with noise augmentation policies and used to train an AutoEmbedder architecture that generates speaker embeddings. Without relying on a domain adaptation policy, the process produces clusterable speaker embeddings, termed unsupervised vectors (u-vectors), in an unsupervised manner. The evaluation is conducted on two popular English-language speaker recognition datasets, TIMIT and LibriSpeech. A Bengali dataset is also included to illustrate the diversity of the domain shifts for speaker recognition systems. Finally, we conclude that the proposed approach achieves satisfactory performance using pairwise architectures.
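As a loose illustration of the pairwise constraint described above, where two short frames cut from the same segment are assumed to share a speaker, a contrastive-style objective on embedding distances might look as follows; the distance, margin, and names are our assumptions, not the exact AutoEmbedder loss:

```python
import torch
import torch.nn.functional as F

def pairwise_constraint_loss(emb_a, emb_b, same_clip, margin=10.0):
    """Contrastive-style loss on embedding distances.

    `same_clip[i] == 1` encodes the assumption that two short frames cut from
    the same speech segment share a speaker; the margin and distance choices
    here are illustrative, not the paper's exact objective.
    """
    d = torch.norm(emb_a - emb_b, dim=1)
    pos = same_clip * d.pow(2)                           # pull same-speaker pairs together
    neg = (1 - same_clip) * F.relu(margin - d).pow(2)    # push different pairs apart
    return (pos + neg).mean()

# Toy usage with random embeddings and pair labels:
emb_a, emb_b = torch.randn(8, 64), torch.randn(8, 64)
same_clip = torch.randint(0, 2, (8,)).float()
print(pairwise_constraint_loss(emb_a, emb_b, same_clip))
```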
33

Pham, Minh Tuan, Jong-Myon Kim, and Cheol Hong Kim. "Intelligent Fault Diagnosis Method Using Acoustic Emission Signals for Bearings under Complex Working Conditions." Applied Sciences 10, no. 20 (October 12, 2020): 7068. http://dx.doi.org/10.3390/app10207068.

Abstract:
Recent convolutional neural network (CNN) models in image processing can be used as feature-extraction methods to achieve high accuracy as well as automatic processing in bearing fault diagnosis. The combination of deep learning methods with appropriate signal representation techniques has proven its efficiency compared with traditional algorithms. Vital electrical machines require a strict monitoring system, and the accuracy of these machines’ monitoring systems takes precedence over any other factors. In this paper, we propose a new method for diagnosing bearing faults under variable shaft speeds using acoustic emission (AE) signals. Our proposed method predicts not only bearing fault types but also the degradation level of bearings. In the proposed technique, AE signals acquired from bearings are represented by spectrograms to obtain as much information as possible in the time–frequency domain. Feature extraction and classification processes are performed by deep learning using EfficientNet and a stochastic line-search optimizer. According to our various experiments, the proposed method can provide high accuracy and robustness under noisy environments compared with existing AE-based bearing fault diagnosis methods.
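To make the signal-representation step concrete, a minimal sketch of turning a (here synthetic) AE record into a log-scaled spectrogram image, of the kind that could be resized and fed to EfficientNet, might look like this; the sampling rate and window sizes are placeholders, not the paper's settings:

```python
import numpy as np
from scipy import signal

# Illustrative only: a synthetic stand-in for an acoustic emission record.
fs = 1_000_000                       # AE signals are typically sampled very fast
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 150_000 * t) + 0.1 * np.random.randn(t.size)

# Time-frequency representation to be used as a CNN input image.
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=128)
log_spec = 10 * np.log10(Sxx + 1e-12)   # dB-scaled image, e.g. resized for EfficientNet
print(log_spec.shape)
```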
34

Zhong, Jiwei, Ziru Xiang, and Cheng Li. "Synchronized Assessment of Bridge Structural Damage and Moving Force via Truncated Load Shape Function." Applied Sciences 12, no. 2 (January 11, 2022): 691. http://dx.doi.org/10.3390/app12020691.

Abstract:
Moving load and structural damage assessment has always been a crucial topic in bridge health monitoring, as it helps analyze the daily operating status of bridges and provides fundamental information for bridge safety evaluation. However, most studies and research consider these issues as two separate problems. In practice, unknown moving loads and damage usually coexist and influence the bridge vibration synergistically. This paper proposes an innovative synchronized assessment method that determines structural damages and moving forces simultaneously. The method first improves the virtual distortion method, which shifts the structural damage into external virtual forces and hence transforms the damage assessment as well as the moving force identification into a multi-force reconstruction problem. Second, a truncated load shape function (TLSF) technique is developed to solve the forces in the time domain. As the technique smoothens the pulse function via a limited number of TLSFs, the singularity and dimension of the system matrix in the force reconstruction are greatly reduced. A continuous beam and a three-dimensional truss bridge are simulated as examples. Case studies show that the method can effectively identify various speeds and numbers of moving loads, as well as different levels of structural damage. The calculation efficiency and robustness to white noise are also impressive.
35

Pedrini, Giulio. "Varieties of capitalism in Europe: an inter-temporal comparison of HR policies." Personnel Review 45, no. 3 (April 4, 2016): 480–504. http://dx.doi.org/10.1108/pr-04-2014-0069.

Abstract:
Purpose – The purpose of this paper is to analyse the attitude of European firms towards human resource management (HRM) configuration and HRM practices on a country-level basis. Assuming the persistent relevance of institutional framework, the paper investigates the applicability of the varieties of capitalism (VoC) theory to these domains in European countries and their evolution between 1999 and 2005. Design/methodology/approach – The paper selects and groups together variables that are related to both HRM configuration and HRM practices using data coming from the survey performed in 2005 by the Cranfield Network on International HRM. Then, a hierarchical cluster analysis among 16 European countries is performed. Relevant varieties are obtained through the combined application of two stopping rules. Findings – Evidence shows that the evolution of HR policies over time is in line with an extended VoC approach that divides Europe in four VoC. One of these varieties (the “State” model), however, is not validated after a robustness check. Practical implications – For HR managers, the implementation of common personnel policies within the same variety of capitalism could represent a potential fertile ground for beneficial interactions and mutual learning among HR functions. In particular, the classification suggested in the paper does matter if an intervention on HRM practices is accompanied by a change in the participation of the HR department to the decision-making process and/or in the delegation of responsibilities between the HR department and the line management. Originality/value – The authors’ results contribute to the debate on the relationship between HRM and institutional context in two ways. First, they show that an extended VoC framework can explain the differentiation among European countries with regard to HRM domains. Notably, the correlation between the structure of the HR function and the intensity of HRM practices generates a clusterization of European countries based on at least three models of capitalism. Second, it emerges from the analysis that a substantial shift occurred with respect to the previous wave of the survey together with an increase of similarities between countries.
36

Fentaye, Amare Desalegn, Valentina Zaccaria, and Konstantinos Kyprianidis. "Aircraft Engine Performance Monitoring and Diagnostics Based on Deep Convolutional Neural Networks." Machines 9, no. 12 (December 7, 2021): 337. http://dx.doi.org/10.3390/machines9120337.

Abstract:
The rapid advancement of machine-learning techniques has played a significant role in the evolution of engine health management technology. In the last decade, deep-learning methods have received a great deal of attention in many application domains, including object recognition and computer vision. Recently, there has been a rapid rise in the use of convolutional neural networks for rotating machinery diagnostics, inspired by their powerful feature learning and classification capability. However, their application in the field of gas turbine diagnostics is still limited. This paper presents a gas turbine fault detection and isolation method using modular convolutional neural networks preceded by a physics-driven performance-trend-monitoring system. The trend-monitoring system was employed to capture performance changes due to degradation, establish a new baseline when needed, and generate fault signatures. The fault detection and isolation system was trained to detect and classify gas path faults step by step down to the component level, using fault signatures obtained from the physics part. The performance of the proposed method was evaluated on different fault scenarios for a three-shaft turbofan engine, under significant measurement noise to ensure model robustness. Two comparative assessments were also carried out: with a single convolutional-neural-network-architecture-based fault classification method and with a deep long short-term memory-assisted fault detection and isolation method. The results revealed that the proposed method detects and isolates multiple gas path faults with over 96% accuracy. Moreover, sharing diagnostic tasks across modular architectures is seen as relevant to significantly enhance diagnostic accuracy.
37

Tran, Hai, and Tat-Hien Le. "Wavelet deconvolution technique for impact force reconstruction: mutual deconvolution approach." Science & Technology Development Journal - Engineering and Technology 3, SI2 (January 22, 2021): first. http://dx.doi.org/10.32508/stdjet.v3isi2.507.

Abstract:
In the field of impact engineering, one of the central issues is determining the history of an impact force, which is often difficult or impossible to measure directly. In practice, the impact force applied to a structure can be identified indirectly from the corresponding output responses measured on the structure. Namely, by using output responses caused by the unknown impact force, such as acceleration, displacement, or strain, together with the impulse response function, the profile of the unknown impact force can be rebuilt. Such an indirect method is well known as impact force reconstruction or impact force deconvolution. Unfortunately, simple deconvolution techniques for reconstructing impact force often encounter difficulty due to the ill-posed nature of the inversion. Deconvolution thus often yields an unsatisfactory reconstruction in which unavoidable errors are magnified to large values that dominate the profile of the desired impact force. Although some regularization methods have been proposed to mitigate this ill-posedness, most of them operate over the whole time domain, which can make the reconstruction inefficient and inaccurate because the impact force is normally limited to some portion of the impact duration. This work develops a deconvolution technique using the wavelet transform. Exploiting the advantages of wavelets (i.e., localization in time and the possibility of analysis at different scales and shifts), a mutual reconstruction process is proposed and formulated by considering different scales of wavelets. An experiment is conducted to verify the proposed technique. Results demonstrate the robustness of the present technique, reconstructing impact force with more stability and higher accuracy.
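For contrast with the paper's wavelet-domain approach, the classical whole-time-domain baseline can be sketched as follows: the response is modeled as y = H f with H the impulse-response (convolution) matrix, and Tikhonov regularization tames the ill-posed inversion. All names and the regularization weight are illustrative:

```python
import numpy as np

def tikhonov_deconvolve(y, h, lam=1e-2):
    """Whole-time-domain baseline: reconstruct force f from response y = H f.

    H is the discrete convolution matrix built from the impulse response h.
    With lam = 0 this is plain least squares, which is ill-posed and blows up
    measurement noise; that is the difficulty the paper's wavelet-based
    mutual reconstruction is designed to overcome. Names are illustrative.
    """
    n = len(y)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(h):
                H[i, j] = h[i - j]
    # Regularized normal equations: (H^T H + lam I) f = H^T y
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Toy check: half-sine impulse response, short pulse force, noisy response.
h = 0.1 * np.sin(np.linspace(0, np.pi, 20))
f_true = np.zeros(100); f_true[10:15] = 1.0
y = np.convolve(f_true, h)[:100] + 0.01 * np.random.randn(100)
f_rec = tikhonov_deconvolve(y, h, lam=1e-3)
```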
38

Rudakov, Dmytro, and Sebastian Westermann. "Analytical modeling of mine water rebound: Three case studies in closed hard-coal mines in Germany." Mining of Mineral Deposits 15, no. 3 (September 2021): 22–30. http://dx.doi.org/10.33271/mining15.03.022.

Abstract:
Purpose. In this paper we present and validate an analytical model of water inflow and rising level in a flooded mine and examine the model robustness and sensitivity to variations of input data considering the examples of three closed hard-coal mines in Germany. Methods. We used the analytical solution to a boundary value problem of radial groundwater flow to the shaft, treated as a big well, and water balance relations for the series of successive stationary positions of a depression cone to simulate mine water rebound, taking into account the vertical distribution of hydraulic conductivity, the residual volume of underground workings, and natural pores. Findings. The modeling demonstrated very good agreement with the measured data for all the studied mines. The maximum relative deviation for the mine water level during the measurement period did not exceed 2.1%; the deviation for the inflow rate to a mine before its flooding did not exceed 0.8%. Sensitivity analysis revealed the higher significance of the residual working volume and hydraulic conductivity for mine water rebound in the case of thick overburden, and the growing significance of the infiltration rate and the flooded area size in the case of lower overburden thickness. Originality. The developed analytical model allows realistic prediction of transient mine water rebound and inflow into a mine with layered heterogeneity of rocks, an irregular form of the drained area, inflow/outflow to a neighboring mine, and the volume of voids as a distributed parameter, without gridding the flow domain as performed in numerical models. Practical implications. The study demonstrated the advantages of analytical modeling as a tool for preliminary evaluation and prediction of flooding indicators and parameters of mined-out disturbed rocks. In the case of uncertain input data, such modeling can be considered an attractive alternative to the usually applied numerical methods of modeling ground and mine water flow.
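The "big well" idealization in the Methods is commonly expressed through the steady-state radial (Thiem) solution combined with a water balance over each stationary step; a hedged sketch with generic symbols (not necessarily the authors' exact formulation) is:

```latex
% Steady-state radial inflow to a shaft treated as a "big well" (Thiem),
% stepped forward through successive stationary positions of the water level:
Q(h) \;=\; \frac{2\pi T \,\bigl(H - h\bigr)}{\ln\!\bigl(R / r_w\bigr)},
\qquad
h_{k+1} \;=\; h_k + \frac{Q(h_k)\,\Delta t}{S(h_k)}
```

Here T is the transmissivity, H the undisturbed head, R the radius of influence, r_w the effective shaft radius, and S(h) the level-dependent storage contributed by residual workings and natural pores; all symbols are generic placeholders.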
39

Kohlmann, Alexander, Andreas Roller, Andreia Albuquerque, Sabrina Kuznia, Sandra Weissmann, Sabine Jeromin, Wolfgang Kern, Claudia Haferlach, Susanne Schnittger, and Torsten Haferlach. "A 13-Gene Panel Targeted To Investigate CLL By Next-Generation Amplicon Deep-Sequencing Can Be Successfully Implemented In Routine Diagnostics." Blood 122, no. 21 (November 15, 2013): 867. http://dx.doi.org/10.1182/blood.v122.21.867.867.

Abstract:
Introduction: Massively parallel next-generation sequencing (NGS) data have changed the landscape of molecular mutations in chronic lymphocytic leukemia (CLL). The number of molecular markers continues to increase constantly. As such, physicians and laboratories face a great but still unmet need to test panels of genes at a high level of sensitivity. Aim: To develop an assay that is easily adaptable, allowing gene targets and amplicons to be adjusted according to current state-of-the-art evidence regarding the published landscape of mutations in CLL. Methods: We developed a sensitive deep-sequencing assay for routine diagnostics. In total, 13 genes with relevance in CLL, providing in part adverse prognostic information, were chosen: ATM, BIRC3, BRAF (V600), FBXW7, KLHL6, KRAS, NOTCH1 (PEST domain), NRAS, MYD88, POT1, SF3B1 (HEAT repeats), TP53, and XPO1. Targets of interest comprised either complete coding gene regions or hotspots. In summary, 323 amplicons were designed with a median length of 204 bp (range 150-240 bp), representing a total target sequence of 39.36 kb. The sequencing library was constructed starting from 2.2 μg of genomic DNA per patient using a single-plex microdroplet-based assay (RainDance, Lexington, MA). Sequencing data were generated using the MiSeq instrument (Illumina, San Diego, CA), loading up to 10 patients per run. The total turnaround time of the assay was less than 5 days. As a proof-of-principle cohort, 18 clinically well-annotated CLL patients were analyzed during the evaluation phase. The median age was 78 years (range: 52-87 years). Results: Using the 500-cycle sequencing-by-synthesis chemistry, a median of 7.262 million paired-end reads was generated per run. This resulted in a median coverage per gene of 7,476 reads (range: 5,595-10,337). (1) In this cohort of 18 cases, a total of 71 mutation analyses had already been previously performed for eight of the 13 genes using either capillary Sanger sequencing or alternative amplicon deep-sequencing assays (454 Life Sciences or Illumina MiSeq). In detail, for these 8 genes the 71 assays detected 56 known polymorphisms or mutations in ATM (n=8), BIRC3 (n=6), FBXW7 (n=4), MYD88 (n=4), NOTCH1 (n=10), SF3B1 (n=5), TP53 (n=14), and XPO1 (n=4), while 28 analyses revealed a wild-type status. When comparing these results with data obtained using the 13-gene NGS panel, concordant results were obtained in all 84/84 (100%) parallel assessments, underlining the robustness of this assay. (2) Overall, and extending the previous results, the comprehensive 13-gene NGS panel detected in 18/18 patients a total of 46 mutations in 10 of the 13 genes, with a range of 1-5 mutations per case (median: 2). The mutation types comprised 22 missense, 4 nonsense, 16 frame-shift, 3 insertion, and 1 splice-site alterations. In median, the coverage per variant was 10,390-fold, thus enabling sensitive detection of mutations with a lower limit of detection set at 3%. The mutation burden ranged from 3.0% to 62.0%. 18/46 (39.13%) mutations were detected with a clone size <20%, thus being detected only due to the higher sensitivity of this assay in comparison to direct capillary Sanger sequencing. With respect to the technical limit of detecting larger alterations, a 34 bp deletion variant (NOTCH1; c.7403_7436del) was successfully sequenced. Moreover, a common theme in hematological malignancies is the emergence of novel prognostic scoring systems, integrating molecular mutations and cytogenetic lesions into revised survival prediction models. Importantly, a number of patients (14/18) were found to harbor mutations in genes reported to be associated with decreased overall survival, both in high-risk (e.g. TP53, BIRC3) and intermediate-risk (NOTCH1, SF3B1) categories according to Rossi et al., 2013 (Blood;121:1403-12). As such, detecting these adverse somatic alterations may influence the course of therapy for these patients, underlining the utility of such a screening panel. Conclusion: We demonstrated that microdroplet-based sample preparation made it possible to robustly target 13 genes for next-generation sequencing in a routine diagnostics environment. This also included larger gene targets such as ATM, represented by 119 amplicons. Thus, this approach provides the potential to screen for prognostically relevant mutations in all CLL patients in a fast and comprehensive way, providing actionable information suitable to guide therapy. Disclosures: Kohlmann: MLL Munich Leukemia Laboratory: Employment. Roller: MLL Munich Leukemia Laboratory: Employment. Albuquerque: MLL Munich Leukemia Laboratory: Employment. Kuznia: MLL Munich Leukemia Laboratory: Employment. Weissmann: MLL Munich Leukemia Laboratory: Employment. Jeromin: MLL Munich Leukemia Laboratory: Employment. Kern: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Haferlach: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Schnittger: MLL Munich Leukemia Laboratory: Employment, Equity Ownership. Haferlach: MLL Munich Leukemia Laboratory: Employment, Equity Ownership.
40

Heinze-Deml, Christina, and Nicolai Meinshausen. "Conditional variance penalties and domain shift robustness." Machine Learning, November 23, 2020. http://dx.doi.org/10.1007/s10994-020-05924-1.

Abstract:
When training a deep neural network for image classification, one can broadly distinguish between two types of latent features of images that will drive the classification. We can divide latent features into (i) “core” or “conditionally invariant” features C, whose distribution C|Y, conditional on the class Y, does not change substantially across domains, and (ii) “style” features S, whose distribution S|Y can change substantially across domains. Examples of style features include position, rotation, image quality or brightness, but also more complex ones like hair color or posture for images of persons. Our goal is to minimize a loss that is robust under changes in the distribution of these style features. In contrast to previous work, we assume that the domain itself is not observed and is hence a latent variable. We do assume that we can sometimes observe a typically discrete identifier or “ID variable”. In some applications we know, for example, that two images show the same person, and ID then refers to the identity of the person. The proposed method requires only a small fraction of images to have ID information. We group observations if they share the same class and identifier, (Y, ID) = (y, id), and penalize the conditional variance of the prediction or the loss if we condition on (Y, ID). Using a causal framework, this conditional variance regularization (CoRe) is shown to protect asymptotically against shifts in the distribution of the style variables in a partially linear structural equation model. Empirically, we show that the CoRe penalty improves predictive accuracy substantially in settings where domain changes occur in terms of image quality, brightness and color, while we also look at more complex changes such as changes in movement and posture.
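A minimal sketch of the CoRe penalty as described above (group samples sharing the same (Y, ID) pair and penalize the within-group variance of the predictions) could look like the following; the grouping, weighting, and variable names follow the abstract only loosely:

```python
import torch

def core_penalty(preds, y, ids):
    """Conditional variance regularization (CoRe) sketch.

    `preds` may be logits or per-sample losses; `y` and `ids` are integer
    tensors of the same length. Groups with a single member contribute
    nothing, since a variance needs at least two observations. Weighting
    details from the paper are simplified here.
    """
    penalty = preds.new_zeros(())
    keys = torch.stack([y, ids], dim=1)
    for key in keys.unique(dim=0):
        mask = (keys == key).all(dim=1)
        if mask.sum() > 1:
            penalty = penalty + preds[mask].var(dim=0).sum()
    return penalty

# Usage sketch: total_loss = task_loss + lambda_core * core_penalty(logits, y, ids)
logits = torch.randn(16, 3)
y = torch.randint(0, 3, (16,))
ids = torch.randint(0, 5, (16,))
print(core_penalty(logits, y, ids))
```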
41

Guo, Lin Lawrence, Stephen R. Pfohl, Jason Fries, Alistair E. W. Johnson, Jose Posada, Catherine Aftandilian, Nigam Shah, and Lillian Sung. "Evaluation of domain generalization and adaptation on improving model robustness to temporal dataset shift in clinical medicine." Scientific Reports 12, no. 1 (February 17, 2022). http://dx.doi.org/10.1038/s41598-022-06484-1.

Abstract:
Temporal dataset shift associated with changes in healthcare over time is a barrier to deploying machine learning-based clinical decision support systems. Algorithms that learn robust models by estimating invariant properties across time periods for domain generalization (DG) and unsupervised domain adaptation (UDA) might be suitable to proactively mitigate dataset shift. The objective was to characterize the impact of temporal dataset shift on clinical prediction models and benchmark DG and UDA algorithms on improving model robustness. In this cohort study, intensive care unit patients from the MIMIC-IV database were categorized by year groups (2008–2010, 2011–2013, 2014–2016 and 2017–2019). Tasks were predicting mortality, long length of stay, sepsis and invasive ventilation. Feedforward neural networks were used as prediction models. The baseline experiment trained models using empirical risk minimization (ERM) on 2008–2010 (ERM[08–10]) and evaluated them on subsequent year groups. The DG experiment trained models using algorithms that estimated invariant properties on 2008–2016 and evaluated them on 2017–2019. The UDA experiment leveraged unlabelled samples from 2017 to 2019 for unsupervised distribution matching. DG and UDA models were compared to ERM[08–16] models trained on 2008–2016. Main performance measures were the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve and the absolute calibration error. Threshold-based metrics, including false positives and false negatives, were used to assess the clinical impact of temporal dataset shift and its mitigation strategies. In the baseline experiments, dataset shift was most evident for sepsis prediction (maximum AUROC drop, 0.090; 95% confidence interval (CI), 0.080–0.101). Considering a scenario of 100 consecutively admitted patients showed that ERM[08–10] applied to 2017–2019 was associated with one additional false negative among 11 patients with sepsis, when compared to the model applied to 2008–2010. When compared with ERM[08–16], the DG and UDA experiments failed to produce more robust models (range of AUROC difference, −0.003 to 0.050). In conclusion, DG and UDA failed to produce more robust models compared to ERM in the setting of temporal dataset shift. Alternative approaches are required to preserve model performance over time in clinical medicine.
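The baseline ERM protocol described above (train on an early year group, then evaluate AUROC on later ones) can be illustrated with synthetic data; MIMIC-IV is access-controlled, so the cohorts, features, and drift magnitudes below are simulated stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Simulated year-grouped clinical data; `shift` mimics temporal drift."""
    X = rng.normal(size=(n, 10)) + shift
    logits = X[:, 0] - 0.5 * X[:, 1]
    y = (logits + rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_cohort(2000, shift=0.0)                    # e.g. 2008-2010
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)    # ERM baseline

for years, shift in [("2011-2013", 0.2), ("2014-2016", 0.5), ("2017-2019", 1.0)]:
    X_te, y_te = make_cohort(1000, shift=shift)
    auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(years, "AUROC:", round(auroc, 3))
```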
42

Murali, K., and S. Siva Perumal. "Error rate performance analysis of power domain NOMA over AWGN and fading channels with generalized space shift keying in wireless 5G." Journal of Intelligent & Fuzzy Systems, October 26, 2020, 1–6. http://dx.doi.org/10.3233/jifs-189416.

Abstract:
Non-Orthogonal Multiple Access (NOMA) has emerged as a recent solution to the demand for high data rates with excellent reliability and robustness. In this paper, a performance analysis of NOMA under fading channels is presented, with emphasis on error rate calculations. In addition, the focus is on exploring the impact of various modulation techniques such as binary phase shift keying (BPSK), quadrature phase shift keying (QPSK) and generalized space shift keying (GSSK). The simulation study was performed in MATLAB, and the results are analyzed in terms of the error-rate metrics of NOMA.
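As a point of reference for the error-rate analysis, the single-user BPSK-over-AWGN baseline is easy to reproduce; the Monte Carlo estimate below is checked against the closed-form BER, Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0)). This sketch deliberately omits the power-domain superposition and successive interference cancellation that a full NOMA simulation requires:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
n_bits = 200_000

for ebn0_db in [0, 2, 4, 6, 8]:
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                          # BPSK mapping: 0 -> -1, 1 -> +1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n_bits)
    detected = (symbols + noise > 0).astype(int)    # threshold detector
    ber_sim = np.mean(detected != bits)
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))
    print(f"Eb/N0 = {ebn0_db} dB: sim {ber_sim:.4f}, theory {ber_theory:.4f}")
```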
43

KAUR, HARLEEN. "STUDY ON AUDIO AND VIDEO WATERMARKING." International Journal of Communication Networks and Security, January 2013, 34–38. http://dx.doi.org/10.47893/ijcns.2013.1064.

Abstract:
This paper gives an overview of audio and video watermarking. It introduces the basic requirements that shape algorithms for audio and video watermarking: perceptibility, robustness and security. The attacks that cause manipulations of audio and video signals are also discussed. Common groups of attacks on audio and video data include dynamics, filtering, conversion, compression, noise, modulation, time stretch and pitch shift, multiple watermarks, cropping, rotation, etc. The applications of audio and video watermarking include fingerprinting, copyright protection, authentication, copy control, etc. Audio watermarking techniques can be classified into time-domain and frequency-domain methods, and video watermarking techniques are classified into spatial-domain, frequency-domain and format-specific domain methods.
44

"Transform Domain Block Based Watermarking using Spatial Frequency and SVD." International Journal of Innovative Technology and Exploring Engineering 8, no. 9S4 (October 1, 2019): 154–62. http://dx.doi.org/10.35940/ijitee.i1123.0789s419.

Abstract:
Digital image watermarking has been proposed to protect digital multimedia content. The main objectives of a watermarking scheme are robustness, reliability, and security against numerous attacks. To improve the imperceptibility, robustness and capacity of the watermarked image, this paper presents a transform-domain watermarking method using spatial frequency and block SVD. The spatial frequency is used to select appropriate blocks for embedding the watermark image by transforming the SVD coefficients of these blocks of the cover image. First, the cover image is scrambled by zig-zag sequencing and then rearranged. After that, the shift-invariant discrete wavelet transformed (SIDWT) cover image is partitioned into non-overlapping blocks. The spatial frequency of these blocks is then computed, and blocks whose spatial frequency exceeds a threshold are selected for the embedding process. The watermark image is then embedded directly by modifying the SVD coefficients of these blocks, yielding the watermarked image. The inverse process is applied to extract the watermark image from a noisy image. Experimental outcomes show that the proposed scheme is highly imperceptible, robust against various image processing attacks, and produces improved results compared with previously presented schemes.
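Two building blocks of the scheme, the spatial-frequency measure used for block selection and an SVD-based embedding of a watermark into a block's singular values, can be sketched as follows; the block size, embedding strength alpha, and the exact embedding rule are illustrative assumptions rather than the paper's precise algorithm:

```python
import numpy as np

def spatial_frequency(block):
    """Row/column spatial frequency, used to pick texture-rich blocks."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def embed_block_svd(block, wm_block, alpha=0.05):
    """Classic SVD embedding sketch: carry the watermark in the singular
    values of the cover block while keeping the cover's U and V bases."""
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    _, Sw, _ = np.linalg.svd(np.diag(S) + alpha * wm_block, full_matrices=False)
    return U @ np.diag(Sw) @ Vt

cover = np.random.rand(8, 8)
wm = np.random.rand(8, 8)
if spatial_frequency(cover) > 0.1:          # threshold is a placeholder
    marked = embed_block_svd(cover, wm)
    print(np.abs(marked - cover).max())     # small perturbation of the block
```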
45

Azamfar, Moslem, Jaskaran Singh, Xiang Li, and Jay Lee. "Cross-domain gearbox diagnostics under variable working conditions with deep convolutional transfer learning." Journal of Vibration and Control, June 8, 2020, 107754632093379. http://dx.doi.org/10.1177/1077546320933793.

Abstract:
This study proposes a novel 1D deep convolutional transfer learning method that is able to learn the high-dimensional domain-invariant feature from the labeled training dataset and perform diagnosis tasks on the unlabeled testing dataset subjected to a domain shift. To obtain the domain-invariant features, the cross-entropy loss in the source domain classifier and the maximum mean discrepancies between the source and target domain data are minimized simultaneously. To evaluate the performance of the proposed method, an experimental study is conducted on a gearbox under significant speed variation. Because of inherent limitations of the vibration data, in this research, the effectiveness of torque measurement signals has been explored for gearbox fault diagnosis. Comprehensive studies on network parameters and the training sample size are performed to illustrate the robustness and effectiveness of the proposed method. A comparison study is performed on similar techniques to illustrate the superiority and high performance of the proposed diagnosis method. The achieved results illustrate the effectiveness of torque signal in multiclass cross-domain fault diagnosis of gearboxes.
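The domain-alignment term described above is the maximum mean discrepancy between source and target features; a standard biased RBF-kernel estimator (the bandwidth is a placeholder) looks like this:

```python
import torch

def mmd_rbf(x_s, x_t, sigma=1.0):
    """Squared maximum mean discrepancy with a Gaussian kernel.

    A standard biased estimator; in the spirit of the paper it would be
    minimized jointly with the source cross-entropy so that learned
    features become domain-invariant. `sigma` is illustrative.
    """
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x_s, x_s).mean() + k(x_t, x_t).mean() - 2 * k(x_s, x_t).mean()

# Usage sketch: total = cross_entropy(src_logits, src_y) + lam * mmd_rbf(feat_s, feat_t)
feat_s, feat_t = torch.randn(32, 128), torch.randn(32, 128) + 0.5
print(mmd_rbf(feat_s, feat_t))
```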
46

Xue, Ya-Juan, Jun-Xing Cao, Xing-Jian Wang, Hao-Kun Du, Wei Chen, Jia-Chun You, and Feng Tan. "Q factor estimation from surface seismic data in the time–frequency domain: A comparative analysis." GEOPHYSICS, March 19, 2022, 1–72. http://dx.doi.org/10.1190/geo2021-0210.1.

Abstract:
The quality factor Q is generally used to describe seismic attenuation that leads to amplitude decay and wavelet distortion. Time-frequency transforms are commonly used to measure quality factor Q on surface seismic data. These methods capture frequency changes over time using a fixed or variable sliding time window. Other adaptive transforms can also provide time localization and they often are superior for Q estimation. In this study, we compared three time–frequency transforms and showed how the choice of a fixed– or variable–time window or an adaptive transform affects the accuracy and robustness of Q factor estimation. We used the short–time Fourier and continuous wavelet transforms as fixed– and variable–window transforms, respectively. The synchrosqueezed wavelet transform was used as an adaptive transform. We compared four Q factor estimation methods in the time–frequency domain, including the amplitude decay, spectral ratio, centroid frequency shift, and compound time–frequency variable methods. Further, we studied some of the difficulties associated with these estimation methods, such as quantitative attenuation sensitivity, noise robustness, regression bandwidth influence, and key parameter selection for each time–frequency transform. Real data examples were used to investigate the robustness of Q factor estimation with different methods using different time–frequency transforms and the statistics of how well the attenuation measurements match the expected seismic attenuation behavior. Furthermore, in these real data examples, we were able to use the Q estimates to compensate for attenuation through inverse Q filtering.
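Among the compared estimators, the spectral ratio method has a particularly compact form: under the attenuation model A2(f) = G * A1(f) * exp(-pi * f * dt / Q), the log spectral ratio is linear in frequency and Q follows from the fitted slope. A sketch with a synthetic check, omitting the band-selection and weighting details the paper studies:

```python
import numpy as np

def q_spectral_ratio(amp1, amp2, freqs, dt):
    """Estimate Q from two amplitude spectra separated by traveltime dt.

    ln(A2/A1) = ln(G) - pi * f * dt / Q, so the slope of a linear fit in
    frequency gives Q = -pi * dt / slope.
    """
    y = np.log(amp2 / amp1)
    slope, _ = np.polyfit(freqs, y, 1)
    return -np.pi * dt / slope

# Synthetic check with Q = 50 over dt = 0.5 s:
f = np.linspace(10.0, 60.0, 51)
a1 = np.exp(-((f - 30) / 20) ** 2)                # placeholder source spectrum
a2 = 0.8 * a1 * np.exp(-np.pi * f * 0.5 / 50.0)   # attenuated + geometric factor
print(q_spectral_ratio(a1, a2, f, dt=0.5))        # ~50
```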
47

Liu, Yu, and Enming Cui. "Classification of tumor from computed tomography images: A brain-inspired multisource transfer learning under probability distribution adaptation." Frontiers in Human Neuroscience 16 (October 20, 2022). http://dx.doi.org/10.3389/fnhum.2022.1040536.

Abstract:
Preoperative diagnosis of gastric cancer and primary gastric lymphoma is challenging and has important clinical significance. Inspired by the inductive reasoning of the human brain, transfer learning can improve the diagnostic performance on a target task by utilizing knowledge learned from other domains (source domains). However, most studies focus on single-source transfer learning, which may lead to model performance degradation when a large domain shift exists between the single source domain and the target domain. By simulating the multi-modal information learning and transfer mechanism of the human brain, this study designed a multisource transfer learning feature extraction and classification framework, which can enhance the prediction performance of the target model by using multisource medical data (domains). First, this manuscript designs a feature extraction network that takes the maximum mean discrepancy based on the Wasserstein distance as an adaptive measure of probability distribution and extracts the domain-invariant representations shared between source and target domain data. Then, because randomly generated parameters introduce uncertainty into the prediction accuracy and generalization ability of the extreme learning machine network, 1-norm regularization is used to impose sparsity constraints on the output weight matrix and improve the robustness of the model. Finally, experiments are carried out on data from two medical centers. The experimental results show that the areas under the curve (AUCs) of the method are 0.958 and 0.929 in the two validation cohorts, respectively. The method in this manuscript can provide doctors with a better diagnostic reference, which has practical significance.
48

Lübbering, Max, Michael Gebauer, Rajkumar Ramamurthy, Christian Bauckhage, and Rafet Sifa. "Bounding open space risk with decoupling autoencoders in open set recognition." International Journal of Data Science and Analytics, July 16, 2022. http://dx.doi.org/10.1007/s41060-022-00342-z.

Abstract:
One-vs-Rest (OVR) classification aims to distinguish a single class of interest (COI) from other classes. The concepts of novelty detection and robustness to dataset shift become crucial in OVR when the scope of the rest class is extended from the classes observed during training to unseen and possibly unrelated classes, a setting referred to as open set recognition (OSR). In this work, we propose a novel architecture, namely the decoupling autoencoder (DAE), which provides a proven upper bound on the open space risk and minimizes open space risk via a dedicated training routine. Our method is benchmarked within three different scenarios, each isolating different aspects of OSR, namely plain classification, outlier detection, and dataset shift. The results conclusively show that DAE achieves robust performance across all three tasks. This level of cross-task robustness is not observed for any of the seven potent baselines from the OSR, OVR, outlier detection, and ensembling domains, which, apart from ATA (Lübbering et al., From imbalanced classification to supervised outlier detection problems: adversarially trained auto encoders. In: Artificial neural networks and machine learning—ICANN 2020, 2020), tend to fail on at least one of the tasks. Similar to DAE, ATA is based on autoencoders and uses the reconstruction error to predict the inlierness of a sample. Unlike DAE, however, it does not provide any uncertainty scores and therefore lacks rudimentary means of interpretation. Our adversarial robustness and local stability results further support DAE's superiority in the OSR setting, emphasizing its applicability in safety-critical systems.
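The reconstruction-error mechanism attributed to ATA above, which DAE builds on, can be sketched minimally: an autoencoder trained on the class of interest should reconstruct inliers well, so reconstruction error serves as an open-set score. The architecture and data here are placeholders, and DAE's decoupled training routine and open-space-risk bound are not reproduced:

```python
import torch
import torch.nn as nn

# Placeholder autoencoder; in practice it would first be trained to
# reconstruct samples of the class of interest.
ae = nn.Sequential(
    nn.Linear(20, 8), nn.ReLU(),   # encoder
    nn.Linear(8, 20),              # decoder
)

def outlier_score(x):
    """High reconstruction error suggests an open-set (non-COI) sample."""
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1)

x = torch.randn(5, 20)
print(outlier_score(x))
```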
49

Ahmadi, Seyed-Ahmad, Johann Frei, Gerome Vivar, Marianne Dieterich, and Valerie Kirsch. "IE-Vnet: Deep Learning-Based Segmentation of the Inner Ear's Total Fluid Space." Frontiers in Neurology 13 (May 11, 2022). http://dx.doi.org/10.3389/fneur.2022.663200.

Abstract:
Background: In-vivo MR-based high-resolution volumetric quantification methods of the endolymphatic hydrops (ELH) are highly dependent on a reliable segmentation of the inner ear's total fluid space (TFS). This study aimed to develop a novel open-source inner ear TFS segmentation approach using a dedicated deep learning (DL) model. Methods: The model was based on a V-Net architecture (IE-Vnet) and a multivariate (MR scans: T1, T2, FLAIR, SPACE) training dataset (D1, 179 consecutive patients with peripheral vestibulocochlear syndromes). Ground-truth TFS masks were generated in a semi-manual, atlas-assisted approach. IE-Vnet model segmentation performance, generalizability, and robustness to domain shift were evaluated on four heterogeneous test datasets (D2-D5, n = 4 × 20 ears). Results: The IE-Vnet model predicted TFS masks with consistently high congruence to the ground truth in all test datasets (Dice overlap coefficient: 0.9 ± 0.02, Hausdorff maximum surface distance: 0.93 ± 0.71 mm, mean surface distance: 0.022 ± 0.005 mm) without significant differences concerning side (two-sided Wilcoxon signed-rank test, p > 0.05) or dataset (Kruskal-Wallis test, p > 0.05; post-hoc Mann-Whitney U, FDR-corrected, all p > 0.2). Prediction took 0.2 s and was 2,000 times faster than a state-of-the-art atlas-based segmentation method. Conclusion: IE-Vnet TFS segmentation demonstrated high accuracy, robustness toward domain shift, and rapid prediction times. Its output works seamlessly with a previously published open-source pipeline for automatic ELS segmentation. IE-Vnet could serve as a core tool for high-volume trans-institutional studies of the inner ear. Code and pre-trained models are available free and open-source under https://github.com/pydsgz/IEVNet.
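The headline Dice overlap coefficient reported above is straightforward to compute for two binary masks; a self-contained check:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice overlap coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1   # 4 foreground voxels
gt = np.zeros((4, 4), dtype=int);   gt[1:3, 1:4] = 1     # 6 foreground voxels
print(dice(pred, gt))   # 2*4 / (4 + 6) = 0.8
```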
50

Pocevičiūtė, Milda, Gabriel Eilertsen, Sofia Jarkman, and Claes Lundström. "Generalisation effects of predictive uncertainty estimation in deep learning for digital pathology." Scientific Reports 12, no. 1 (May 18, 2022). http://dx.doi.org/10.1038/s41598-022-11826-0.

Abstract:
Deep learning (DL) has shown great potential in digital pathology applications. The robustness of a diagnostic DL-based solution is essential for safe clinical deployment. In this work we evaluate whether adding uncertainty estimates to DL predictions in digital pathology could result in increased value for clinical applications, by boosting the general predictive performance or by detecting mispredictions. We compare the effectiveness of model-integrated methods (MC dropout and deep ensembles) with a model-agnostic approach (test-time augmentation, TTA). Moreover, four uncertainty metrics are compared. Our experiments focus on two domain shift scenarios: a shift to a different medical center and a shift to an underrepresented subtype of cancer. Our results show that uncertainty estimates increase reliability by reducing a model's sensitivity to classification threshold selection as well as by detecting between 70 and 90% of the mispredictions made by the model. Overall, the deep ensembles method achieved the best performance, closely followed by TTA.
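Of the compared methods, TTA is the simplest to sketch: average the softmax over augmented copies of the input and derive an uncertainty score, here predictive entropy, one of several possible metrics. The model and augmentations below are toy placeholders:

```python
import torch

def tta_predict(model, x, augmentations, n_classes):
    """Test-time augmentation: average softmax over augmented copies and
    report predictive entropy as one possible uncertainty metric."""
    probs = torch.zeros(x.shape[0], n_classes)
    for aug in augmentations:
        probs += torch.softmax(model(aug(x)), dim=1)
    probs /= len(augmentations)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs, entropy

# Toy model and flips as augmentations (image tensors in NCHW layout):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 2))
augs = [lambda t: t, lambda t: torch.flip(t, dims=[-1])]
p, u = tta_predict(model, torch.randn(4, 3, 8, 8), augs, n_classes=2)
print(u)   # higher entropy -> less certain prediction
```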
