
Journal articles on the topic 'Batch update'



Consult the top 50 journal articles for your research on the topic 'Batch update.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pereira, Fabio Henrique, Francisco Elânio Bezerra, Diego Oliva, et al. "Forecast Model Update Based on a Real-Time Data Processing Lambda Architecture for Estimating Partial Discharges in Hydrogenerator." Sensors 20, no. 24 (2020): 7242. http://dx.doi.org/10.3390/s20247242.

Full text
Abstract:
The prediction of partial discharges in hydrogenerators depends on data collected by sensors and prediction models based on artificial intelligence. However, forecasting models are trained with a set of historical data that is not automatically updated due to the high cost to collect sensors’ data and insufficient real-time data analysis. This article proposes a method to update the forecasting model, aiming to improve its accuracy. The method is based on a distributed data platform with the lambda architecture, which combines real-time and batch processing techniques. The results show that th
APA, Harvard, Vancouver, ISO, and other styles
2

Cunha, João, Rui Serra, Nuno Lau, Luís Seabra Lopes, and António J. R. Neves. "Batch Reinforcement Learning for Robotic Soccer Using the Q-Batch Update-Rule." Journal of Intelligent & Robotic Systems 80, no. 3-4 (2015): 385–99. http://dx.doi.org/10.1007/s10846-014-0171-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Zhuang, Cheng Wang, Yu Zang, and Jonathan Li. "Mini-batch algorithms with Barzilai–Borwein update step." Neurocomputing 314 (November 2018): 177–85. http://dx.doi.org/10.1016/j.neucom.2018.06.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lambert, Sherwood Lane. "Auto Accessories, Inc.: An Educational Case on Online Transaction Processing (OLTP) and Controls as Compared to Batch Processing and Controls." Journal of Emerging Technologies in Accounting 14, no. 2 (2017): 59–81. http://dx.doi.org/10.2308/jeta-51844.

Full text
Abstract:
The intent of this educational case is to increase students' understanding of online transaction processing (OLTP) and controls as compared to batch processing and controls. Learning concepts about batch processing is important because many entities continue to use batch processing for critical applications such as payroll, credit card processing, and Big Data. Students learn the advantages and disadvantages of batch processing and OLTP. The case provides a Microsoft Access database that includes a working batch program (module) and an online screen (form). Students use the form to up
APA, Harvard, Vancouver, ISO, and other styles
5

Brown, Christopher C., and Erin Elzi. "Revisiting the cataloging of free Internet resources at the University of Denver." Interlending & Document Supply 44, no. 1 (2016): 31–36. http://dx.doi.org/10.1108/ilds-11-2015-0034.

Full text
Abstract:
Purpose – This paper aims to present updated statistics demonstrating the value of cataloging free Internet resources and the challenges of batch loading, vendor records, electronic resource modules and discovery tools, as an update to the 2008 paper in this journal. Design/methodology/approach – Updates the statistics from the URL redirection system for tracking user access to freely available Web publications. Findings – With more projects and bibliographic records included within the scope of the project, users still find and use the links to outbound content. New technologies and management
APA, Harvard, Vancouver, ISO, and other styles
6

Foerster, Klaus-Tycho, Janne H. Korhonen, Ami Paz, Joel Rybicki, and Stefan Schmid. "Input-Dynamic Distributed Algorithms for Communication Networks." Proceedings of the ACM on Measurement and Analysis of Computing Systems 5, no. 1 (2021): 1–33. http://dx.doi.org/10.1145/3447384.

Full text
Abstract:
Consider a distributed task where the communication network is fixed but the local inputs given to the nodes of the distributed system may change over time. In this work, we explore the following question: if some of the local inputs change, can an existing solution be updated efficiently, in a dynamic and distributed manner? To address this question, we define the batch dynamic CONGEST model in which we are given a bandwidth-limited communication network and a dynamic edge labelling defines the problem input. The task is to maintain a solution to a graph problem on the labeled graph under ba
APA, Harvard, Vancouver, ISO, and other styles
7

Dangi, Siddharth, Suraj Gowda, Helene G. Moorman, et al. "Continuous Closed-Loop Decoder Adaptation with a Recursive Maximum Likelihood Algorithm Allows for Rapid Performance Acquisition in Brain-Machine Interfaces." Neural Computation 26, no. 9 (2014): 1811–39. http://dx.doi.org/10.1162/neco_a_00632.

Full text
Abstract:
Closed-loop decoder adaptation (CLDA) is an emerging paradigm for both improving and maintaining online performance in brain-machine interfaces (BMIs). The time required for initial decoder training and any subsequent decoder recalibrations could be potentially reduced by performing continuous adaptation, in which decoder parameters are updated at every time step during these procedures, rather than waiting to update the decoder at periodic intervals in a more batch-based process. Here, we present recursive maximum likelihood (RML), a CLDA algorithm that performs continuous adaptation of a Kal
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Siyuan, Brian D. Hoskins, Matthew W. Daniels, Mark D. Stiles, and Gina C. Adam. "Streaming Batch Gradient Tracking for Neural Network Training (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13813–14. http://dx.doi.org/10.1609/aaai.v34i10.7178.

Full text
Abstract:
Faster and more energy efficient hardware accelerators are critical for machine learning on very large datasets. The energy cost of performing vector-matrix multiplication and repeatedly moving neural network models in and out of memory motivates a search for alternative hardware and algorithms. We propose to use streaming batch principal component analysis (SBPCA) to compress batch data during training by using a rank-k approximation of the total batch update. This approach yields comparable training performance to minibatch gradient descent (MBGD) at the same batch size while reducing overal
APA, Harvard, Vancouver, ISO, and other styles
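
A minimal sketch of the rank-k idea the abstract describes, assuming NumPy and a dense matrix as a stand-in for an accumulated batch gradient update; this is a generic illustration, not the authors' SBPCA implementation.

# Generic sketch (not the paper's SBPCA code): approximate a dense batch
# update matrix by its best rank-k factorization via truncated SVD.
import numpy as np

def rank_k_update(delta_w, k):
    # Best rank-k approximation in Frobenius norm.
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

rng = np.random.default_rng(0)
delta_w = rng.standard_normal((64, 32))    # stand-in for a summed batch gradient update
approx = rank_k_update(delta_w, k=4)
print(np.linalg.norm(delta_w - approx) / np.linalg.norm(delta_w))  # relative error
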
9

Gineste, Michael, and Jo Eidsvik. "Batch seismic inversion using the iterative ensemble Kalman smoother." Computational Geosciences 25, no. 3 (2021): 1105–21. http://dx.doi.org/10.1007/s10596-021-10043-4.

Full text
Abstract:
An ensemble-based method for seismic inversion to estimate elastic attributes is considered, namely the iterative ensemble Kalman smoother. The main focus of this work is the challenge associated with ensemble-based inversion of seismic waveform data. The amount of seismic data is large and, depending on ensemble size, it cannot be processed in a single batch. Instead a solution strategy of partitioning the data recordings in time windows and processing these sequentially is suggested. This work demonstrates how this partitioning can be done adaptively, with a focus on reliable and eff
APA, Harvard, Vancouver, ISO, and other styles
10

Ye, Yongqiang, and Danwei Wang. "Implementation of ILC batch update using a robotic experimental setup." Microprocessors and Microsystems 30, no. 5 (2006): 259–67. http://dx.doi.org/10.1016/j.micpro.2005.11.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, Chien-Hsing. "Comparing batch update with randomized update for identifying salient genes applied to cancer gene expression clustering." Journal of Information Science 40, no. 6 (2014): 835–45. http://dx.doi.org/10.1177/0165551514550141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Kudo, Tsukasa, Yui Takeda, Masahiko Ishino, Kenji Saotome, and Nobuhiro Kataoka. "An Implementation of Concurrency Control between Batch Update and Online Entries." Procedia Computer Science 35 (2014): 1625–34. http://dx.doi.org/10.1016/j.procs.2014.08.247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Bradley, D., T. St Clair, M. Farrellee, et al. "An update on the scalability limits of the Condor batch system." Journal of Physics: Conference Series 331, no. 6 (2011): 062002. http://dx.doi.org/10.1088/1742-6596/331/6/062002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sanzida, Nahid. "Determination of Optimum Drying Temperature Profile by Iterative Learning Control (ILC) Method to Obtain a Desired Moisture Content in Tablets." Chemical Engineering Research Bulletin 20, no. 1 (2018): 1. http://dx.doi.org/10.3329/cerb.v20i1.36923.

Full text
Abstract:
The paper presents an industrial case study example to evaluate the performance of the linear time varying (LTV) perturbation model based iterative learning control (ILC) in a pilot scale batch system. The operating data based strategy applied here is based on utilizing the repetitive nature of batch processes to update the operating trajectories using process knowledge obtained from previous runs and thereby providing a convergent batch-to-batch improvement of the process performance indicator. The method was applied to determine the required drying temperature of Paracetamol granule
APA, Harvard, Vancouver, ISO, and other styles
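
A minimal sketch of the batch-to-batch update pattern that ILC relies on, assuming a simple P-type learning law and a hypothetical first-order plant; the paper's LTV-perturbation-model-based scheme is not reproduced here.

# Generic P-type iterative learning control sketch: the input trajectory is
# corrected from one batch to the next using the previous batch's tracking error.
import numpy as np

def run_batch(u):
    # Hypothetical first-order plant: y[t+1] = 0.3*y[t] + 0.7*u[t]
    y = np.zeros_like(u)
    for t in range(len(u) - 1):
        y[t + 1] = 0.3 * y[t] + 0.7 * u[t]
    return y

y_ref = np.linspace(0.0, 1.0, 50)    # desired output trajectory for each batch
u = np.zeros(50)                      # input trajectory, refined batch to batch
for k in range(30):
    e = y_ref - run_batch(u)          # error measured after the previous batch
    u[:-1] += 1.0 * e[1:]             # P-type ILC update; u[t] influences y[t+1]
print(float(np.abs(y_ref - run_batch(u))[1:].max()))  # small after enough batches
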
15

Yan, Yan, and Yuhong Guo. "Partial Label Learning with Batch Label Correction." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6575–82. http://dx.doi.org/10.1609/aaai.v34i04.6132.

Full text
Abstract:
Partial label (PL) learning tackles the problem where each training instance is associated with a set of candidate labels, among which only one is the true label. In this paper, we propose a simple but effective batch-based partial label learning algorithm named PL-BLC, which tackles the partial label learning problem with batch-wise label correction (BLC). PL-BLC dynamically corrects the label confidence matrix of each training batch based on the current prediction network, and adopts a MixUp data augmentation scheme to enhance the underlying true labels against the redundant noisy labels. In
APA, Harvard, Vancouver, ISO, and other styles
16

Hwang, Hang-Wen, Chien-Chao Tseng, and Ming-Feng Chang. "A batch-update strategy for the distributed HLRs architecture in PCS networks." Wireless Communications and Mobile Computing 3, no. 3 (2003): 311–27. http://dx.doi.org/10.1002/wcm.90.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Guo, Wei, Hua Zhang, Sujuan Qin, et al. "Outsourced dynamic provable data possession with batch update for secure cloud storage." Future Generation Computer Systems 95 (June 2019): 309–22. http://dx.doi.org/10.1016/j.future.2019.01.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Yu, Hao, Sen Yang, and Shenghuo Zhu. "Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5693–700. http://dx.doi.org/10.1609/aaai.v33i01.33015693.

Full text
Abstract:
In distributed training of deep neural networks, parallel minibatch SGD is widely used to speed up the training process by using multiple workers. It uses multiple workers to sample local stochastic gradients in parallel, aggregates all gradients in a single server to obtain the average, and updates each worker’s local model using a SGD update with the averaged gradient. Ideally, parallel mini-batch SGD can achieve a linear speed-up of the training time (with respect to the number of workers) compared with SGD over a single worker. However, such linear scalability in practice is significantly
APA, Harvard, Vancouver, ISO, and other styles
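
A minimal sketch of local SGD with periodic model averaging, the pattern discussed in the abstract, with the workers simulated in a single process and a hypothetical noisy quadratic objective standing in for minibatch gradients.

# Sketch of "parallel restarted SGD": each worker runs several local SGD steps,
# then all worker models are averaged in one communication round.
import numpy as np

rng = np.random.default_rng(1)
n_workers, local_steps, rounds, lr = 4, 10, 20, 0.05
target = np.array([1.0, -2.0, 0.5])             # minimizer of the shared objective
models = [np.zeros(3) for _ in range(n_workers)]

for r in range(rounds):
    for w in range(n_workers):
        for _ in range(local_steps):            # local steps, no communication
            grad = (models[w] - target) + 0.1 * rng.standard_normal(3)
            models[w] = models[w] - lr * grad
    avg = np.mean(models, axis=0)               # single averaging step per round
    models = [avg.copy() for _ in range(n_workers)]

print(avg)  # approaches `target`
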
19

Asadi, Soodabeh, and Manfred Vogel. "A Learning Rate Method for Full-Batch Gradient Descent." Műszaki Tudományos Közlemények 13, no. 1 (2020): 174–77. http://dx.doi.org/10.33894/mtk-2020.13.33.

Full text
Abstract:
In this paper, we present a learning rate method for gradient descent using only first order information. This method requires no manual tuning of the learning rate. We applied this method on a linear neural network built from scratch, along with the full-batch gradient descent, where we calculated the gradients for the whole dataset to perform one parameter update. We tested the method on a moderate sized dataset of housing information and compared the result with that of the Adam optimizer used with a sequential neural network model from Keras. The comparison shows that our method f
APA, Harvard, Vancouver, ISO, and other styles
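
A minimal sketch of full-batch gradient descent on least squares, where each parameter update uses the gradient over the whole dataset; the paper's adaptive learning-rate rule is replaced here by a fixed step size for illustration.

# Full-batch gradient descent: one update per pass over the entire dataset.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
w_true = np.array([1.5, -0.7, 2.0])
y = X @ w_true + 0.01 * rng.standard_normal(200)

w, lr = np.zeros(3), 0.1
for epoch in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient computed over the full batch
    w -= lr * grad
print(w)  # close to w_true
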
20

Sung and Lee. "Implementation of SOH Estimator in Automotive BMSs Using Recursive Least-Squares." Electronics 8, no. 11 (2019): 1237. http://dx.doi.org/10.3390/electronics8111237.

Full text
Abstract:
This paper presents a computationally efficient state-of-health (SOH) estimator that is readily applicable to automotive battery management systems (BMSs). The proposed scheme uses a recursive estimator to improve the original scheme based on a batch estimator. In the batch process, state estimation requires significantly longer CPU time than data measurement, and the original scheme may fail to satisfy real-time guarantees. To prevent this problem, we apply recursive least-squares. By replacing the batch solution of the normal equation with a recursive update, the proposed scheme can spr
APA, Harvard, Vancouver, ISO, and other styles
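
A minimal sketch of a recursive least-squares update with a forgetting factor, the generic mechanism the abstract contrasts with a batch normal-equation solve; the two-dimensional feature vector and data are hypothetical, not the authors' battery model.

# Generic recursive least-squares (RLS): the batch normal-equation solve is
# replaced by a per-sample update of the estimate and its covariance.
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    # One RLS update with forgetting factor lam; x is a feature vector.
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)              # gain vector
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam                  # covariance update
    return theta, P

rng = np.random.default_rng(3)
theta_true = np.array([0.8, -0.3])
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(500):
    x = rng.standard_normal(2)
    y = x @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
print(theta)  # close to theta_true
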
21

Ortiz-Martínez, Daniel. "Online Learning for Statistical Machine Translation." Computational Linguistics 42, no. 1 (2016): 121–61. http://dx.doi.org/10.1162/coli_a_00244.

Full text
Abstract:
We present online learning techniques for statistical machine translation (SMT). The availability of large training data sets that grow constantly over time is becoming more and more frequent in the field of SMT—for example, in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques require very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training sampl
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Weihua, Xiaoyan Liu, and Xiang Li. "The two-stage batch ordering strategy of logistics service capacity with demand update." Transportation Research Part E: Logistics and Transportation Review 83 (November 2015): 65–89. http://dx.doi.org/10.1016/j.tre.2015.08.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ren, Wei, Yan Sun, Hong Luo, and Mohsen Guizani. "BLLC: A Batch-Level Update Mechanism With Low Cost for SDN-IoT Networks." IEEE Internet of Things Journal 6, no. 1 (2019): 1210–22. http://dx.doi.org/10.1109/jiot.2018.2868708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhou, Ningnan, Xuan Zhou, Xiao Zhang, and Shan Wang. "An I/O-Efficient Buffer Batch Replacement Policy for Update-Intensive Graph Databases." Data Science and Engineering 1, no. 4 (2016): 231–41. http://dx.doi.org/10.1007/s41019-016-0026-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Davis, Allan Peter, Cynthia J. Grondin, Robin J. Johnson, et al. "Comparative Toxicogenomics Database (CTD): update 2021." Nucleic Acids Research 49, no. D1 (2020): D1138–D1143. http://dx.doi.org/10.1093/nar/gkaa891.

Full text
Abstract:
The public Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is an innovative digital ecosystem that relates toxicological information for chemicals, genes, phenotypes, diseases, and exposures to advance understanding about human health. Literature-based, manually curated interactions are integrated to create a knowledgebase that harmonizes cross-species heterogeneous data for chemical exposures and their biological repercussions. In this biennial update, we report a 20% increase in CTD curated content and now provide 45 million toxicogenomic relationships for over 16 300
APA, Harvard, Vancouver, ISO, and other styles
26

Florentino, Rodolfo F., and Ma Regina A. Pedro. "Update on Rice Fortification in the Philippines." Food and Nutrition Bulletin 19, no. 2 (1998): 149–53. http://dx.doi.org/10.1177/156482659801900209.

Full text
Abstract:
Rice, the staple in all regions in the Philippines, is an excellent vehicle for fortification. The Food and Nutrition Research Institute developed the technology for the fortification of rice with iron, using ferrous sulphate as the fortificant. A prototype machine was manufactured for the production of iron-fortified premix with a capacity of 200 kg per batch. A study on iron bioavailability showed a significant increase in the amount of iron absorbed with iron-fortified rice. A clinical trial conducted with 173 schoolchildren for six months showed a greater increase in haemoglobin in subject
APA, Harvard, Vancouver, ISO, and other styles
27

Mongillo, Gianluigi, and Sophie Deneve. "Online Learning with Hidden Markov Models." Neural Computation 20, no. 7 (2008): 1706–16. http://dx.doi.org/10.1162/neco.2008.10-06-351.

Full text
Abstract:
We present an online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs). The sufficient statistics required for parameter estimation are computed recursively with time, that is, in an online way instead of using the batch forward-backward procedure. This computational scheme is generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations. The resulting algorithm is equivalent to the batch EM algorithm, for an appropriate discount factor and scheduling of parameter updates. On the other ha
APA, Harvard, Vancouver, ISO, and other styles
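
A minimal sketch of the discounted recursive update of a sufficient statistic, the generic pattern the abstract describes; the paper's full HMM forward recursion and M-step are not reproduced, and the scalar Gaussian stream here is purely illustrative.

# Sufficient statistic updated recursively with a discount factor instead of a batch average.
import numpy as np

rng = np.random.default_rng(4)
gamma = 0.01          # discount factor; larger values forget old data faster
s = 0.0               # running sufficient statistic (here: discounted mean of x)
mu = 0.0              # parameter re-estimated from the statistic
for t in range(5000):
    x = rng.normal(loc=2.0 if t < 2500 else -1.0, scale=0.5)
    s = (1 - gamma) * s + gamma * x    # recursive update in place of a batch sum
    mu = s                             # "M-step": parameter follows the statistic
print(mu)  # tracks the mean of the recent regime (about -1.0 at the end)
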
28

Yamamori, Satoshi, Masayuki Hiromoto, and Takashi Sato. "Efficient Mini-Batch Training on Memristor Neural Network Integrating Gradient Calculation and Weight Update." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101.A, no. 7 (2018): 1092–100. http://dx.doi.org/10.1587/transfun.e101.a.1092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kornmann, Henri, Sergio Valentinotti, Ian Marison, and Urs von Stockar. "Real-time update of calibration model for better monitoring of batch processes using spectroscopy." Biotechnology and Bioengineering 87, no. 5 (2004): 593–601. http://dx.doi.org/10.1002/bit.20153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Haodong, Meng Han, Ni Zhang, Xiaojuan Li, and Le Wang. "A Survey of incremental high-utility pattern mining based on storage structure." Journal of Intelligent & Fuzzy Systems 41, no. 1 (2021): 841–66. http://dx.doi.org/10.3233/jifs-202745.

Full text
Abstract:
Traditional association rule mining has been widely studied, but this is not applicable to practical applications that must consider factors such as the unit profit of the item and the purchase quantity. High-utility itemset mining (HUIM) aims to find high-utility patterns by considering the number of items purchased and the unit profit. However, most high-utility itemset mining algorithms are designed for static databases. In real-world applications (such as market analysis and business decisions), databases are usually updated by inserting new data dynamically. Some researchers have proposed
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Ning, Jian Tao Du, Xi Xian Xie, and Qing Yang Xu. "Dynamic Models for L-Histidine Fed-Batch Fermentation by Corynebacterium glutamicum." Advanced Materials Research 160-162 (November 2010): 1749–55. http://dx.doi.org/10.4028/www.scientific.net/amr.160-162.1749.

Full text
Abstract:
To predict and control fed-batch fermentations of Corynebacterium glutamicum TQ2226, which can produce L-histidine, we use a recurrent neural network model (RNNM) in this paper. The control variables are the limiting substrate and the feeding conditions. The proposed multi-input, multi-output RNNM has twelve inputs, seven outputs, nineteen neurons in the hidden layer, and global and local feedbacks. The weight update learning algorithm designed is a version of the well-known backpropagation through time algorithm directed to the RNNM learning. The RNNM generalization was carried out repro
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Mingbao, Rongrong Ji, Hong Liu, Xiaoshuai Sun, Yongjian Wu, and Yunsheng Wu. "Towards Optimal Discrete Online Hashing with Balanced Similarity." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8722–29. http://dx.doi.org/10.1609/aaai.v33i01.33018722.

Full text
Abstract:
When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes, and simultaneously updates the hash functions to renew codes of the existing dataset. To this end, the existing methods update hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions using a relaxation process in its corresponding approximated continuous space. And it rem
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Tiantian, Xincheng Tian, and Yan Li. "Dimensional Accuracy Enhancement in CNC Batch Grinding through Fractional Order Iterative Learning Compensation." Advances in Mechanical Engineering 6 (January 1, 2014): 260420. http://dx.doi.org/10.1155/2014/260420.

Full text
Abstract:
This paper presents a systematic method to compensate for dimensional errors of workpieces machined in computer numerical control (CNC) batch grinding process. The dimensional error precompensation scheme includes a fractional order compensator, automatic dimensional measuring device, and a comparator. A practical fractional order differential plus low-pass iterative learning approach is used to update the compensation for the next workpiece. An incremental order updating law is proposed for the fractional system order identification, which plays a fundamental role to optimize the performance
APA, Harvard, Vancouver, ISO, and other styles
34

Righini, Simona, Pietro Cassaro, Uwe Bach, et al. "Update on the Multi-Frequency Monitoring of Blazars with Medicina and Noto." Proceedings of the International Astronomical Union 14, S342 (2018): 234–36. http://dx.doi.org/10.1017/s1743921318007962.

Full text
Abstract:
The Medicina and Noto radiotelescopes have been employed for over 14 years to monitor the flux density variations of a vast sample of blazars at different radio frequencies. Radio data are essential components of blazar spectral energy distribution (SED, spanning from radio waves to gamma rays), whose trend with luminosity and shape changes provide decisive information on the physics of extra-galactic jets and, eventually, on the mechanism extracting energy from the central black hole in radio-loud AGN. Observations presently carried out at 5, 8 and 24 GHz have taken advantage of the c
APA, Harvard, Vancouver, ISO, and other styles
35

Bagaev, Dmitry V., Renske M. A. Vroomans, Jerome Samir, et al. "VDJdb in 2019: database extension, new analysis infrastructure and a T-cell receptor motif compendium." Nucleic Acids Research 48, no. D1 (2019): D1057–D1062. http://dx.doi.org/10.1093/nar/gkz874.

Full text
Abstract:
Here, we report an update of the VDJdb database with a substantial increase in the number of T-cell receptor (TCR) sequences and their cognate antigens. The update further provides a new database infrastructure featuring two additional analysis modes that facilitate database querying and real-world data analysis. The increased yield of TCR specificity identification methods and the overall increase in the number of studies in the field has allowed us to expand the database more than 5-fold. Furthermore, several new analysis methods are included. For example, batch annotation of TCR re
APA, Harvard, Vancouver, ISO, and other styles
36

Kenda, Klemen, Jože Peternelj, Nikos Mellios, Dimitris Kofinas, Matej Čerin, and Jože Rožanec. "Usage of statistical modeling techniques in surface and groundwater level prediction." Journal of Water Supply: Research and Technology-Aqua 69, no. 3 (2020): 248–65. http://dx.doi.org/10.2166/aqua.2020.143.

Full text
Abstract:
The paper presents a thorough evaluation of the performance of different statistical modeling techniques in ground- and surface-level prediction scenarios as well as some aspects of the application of data-driven modeling in practice (feature generation, feature selection, heterogeneous data fusion, hyperparameter tuning, and model evaluation). Twenty-one different regression and classification techniques were tested. The results reveal that batch regression techniques are superior to incremental techniques in terms of accuracy and that among them gradient boosting, random forest and
APA, Harvard, Vancouver, ISO, and other styles
37

Yeh, Lo-Yao, Chun-Chuan Yang, Jee-Gong Chang, and Yi-Lang Tsai. "A secure and efficient batch binding update scheme for route optimization of nested NEtwork MObility (NEMO) in VANETs." Journal of Network and Computer Applications 36, no. 1 (2013): 284–92. http://dx.doi.org/10.1016/j.jnca.2012.06.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hu, Jen-Wei, Lo-Yao Yeh, Shih-Wei Liao, and Chu-Sing Yang. "Autonomous and malware-proof blockchain-based firmware update platform with efficient batch verification for Internet of Things devices." Computers & Security 86 (September 2019): 238–52. http://dx.doi.org/10.1016/j.cose.2019.06.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Peris-Díaz, Manuel D., Shannon R. Sweeney, Olga Rodak, Enrique Sentandreu, and Stefano Tiziani. "R-MetaboList 2: A Flexible Tool for Metabolite Annotation from High-Resolution Data-Independent Acquisition Mass Spectrometry Analysis." Metabolites 9, no. 9 (2019): 187. http://dx.doi.org/10.3390/metabo9090187.

Full text
Abstract:
Technological advancements have permitted the development of innovative multiplexing strategies for data independent acquisition (DIA) mass spectrometry (MS). Software solutions and extensive compound libraries facilitate the efficient analysis of MS1 data, regardless of the analytical platform. However, the development of comparable tools for DIA data analysis has significantly lagged. This research introduces an update to the former MetaboList R package and a workflow for full-scan MS1 and MS/MS DIA processing of metabolomic data from multiplexed liquid chromatography high-resolution mass sp
APA, Harvard, Vancouver, ISO, and other styles
40

Younus, Muhammad, Yong Yu, Hu Lu, and Yu Qing Fan. "Study and Execution of Advanced Batch Production Planning System for an Aerospace Industry." Applied Mechanics and Materials 16-19 (October 2009): 748–52. http://dx.doi.org/10.4028/www.scientific.net/amm.16-19.748.

Full text
Abstract:
Modern manufacturing industries are increasingly faced with international competition and fluctuating market conditions in the age of globalization. In striving to remain competitive, manufacturing industries must deliver products to customer at the lowest cost, at the best quality and in the minimum lead time. As a result, it becomes mandatory to design and implement the advance production planning and scheduling system that supports shorter product cycles despite more complex and specialized manufacturing processes. The advance production planning and scheduling system provides the leaner pr
APA, Harvard, Vancouver, ISO, and other styles
41

Mančev, Dejan, and Branimir Todorović. "A primal sub-gradient method for structured classification with the averaged sum loss." International Journal of Applied Mathematics and Computer Science 24, no. 4 (2014): 917–30. http://dx.doi.org/10.2478/amcs-2014-0067.

Full text
Abstract:
We present a primal sub-gradient method for structured SVM optimization defined with the averaged sum of hinge losses inside each example. Compared with the mini-batch version of the Pegasos algorithm for the structured case, which deals with a single structure from each of multiple examples, our algorithm considers multiple structures from a single example in one update. This approach should increase the amount of information learned from the example. We show that the proposed version with the averaged sum loss has at least the same guarantees in terms of the prediction loss as the s
APA, Harvard, Vancouver, ISO, and other styles
42

Sachs, Ulrich, Konstantine A. Fetfatsidis, Josefine Schumacher, et al. "A Friction-Test Benchmark with Twintex PP." Key Engineering Materials 504-506 (February 2012): 307–12. http://dx.doi.org/10.4028/www.scientific.net/kem.504-506.307.

Full text
Abstract:
This paper presents an update on a friction benchmark that was proposed during the 13th ESAFORM conference. The goal is to compare different friction test set-ups [1–4] by determining the coefficient of friction (CoF) for Twintex® PP. The benchmark instructions are based on the ASTM standard D1894 [5] but also account for different friction velocities, pressures and temperatures. At the time of writing five research groups contributed to the benchmark, each with a custom designed test set-up, differing in size, mechanism, force control and temperature regulation. All tests will be conducted w
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Haibin, Zhengge Yi, Ruifeng Li, et al. "Improved Outsourced Provable Data Possession for Secure Cloud Storage." Security and Communication Networks 2021 (July 22, 2021): 1–12. http://dx.doi.org/10.1155/2021/1805615.

Full text
Abstract:
With the advent of data outsourcing, how to efficiently verify the integrity of data stored at an untrusted cloud service provider (CSP) has become a significant problem in cloud storage. In 2019, Guo et al. proposed an outsourced dynamic provable data possession scheme with batch update for secure cloud storage. Although their scheme is very novel, in this paper we find that their proposal is not secure. The malicious cloud server has the ability to forge the authentication labels, and thus it can forge or delete the user’s data but still provide a correct data possession proof. Based on the orig
APA, Harvard, Vancouver, ISO, and other styles
44

Lin, Jerry Chun-Wei, Wensheng Gan, Tzung-Pei Hong, and Jingliang Zhang. "Updating the Built Prelarge Fast Updated Sequential Pattern Trees with Sequence Modification." International Journal of Data Warehousing and Mining 11, no. 1 (2015): 1–22. http://dx.doi.org/10.4018/ijdwm.2015010101.

Full text
Abstract:
Mining useful information or knowledge from a very large database to aid managers or decision makers to make appropriate decisions is a critical issue in recent years. Sequential patterns can be used to discover the purchased behaviors of customers or the usage behaviors of users from Web log data. Most approaches process a static database to discover sequential patterns in a batch way. In real-world applications, transactions or sequences in databases are frequently changed. In the past, a fast updated sequential pattern (FUSP)-tree was proposed to handle dynamic databases whether for sequenc
APA, Harvard, Vancouver, ISO, and other styles
45

Tan, Zijing, Ai Ran, Shuai Ma, and Sheng Qin. "Fast incremental discovery of pointwise order dependencies." Proceedings of the VLDB Endowment 13, no. 10 (2020): 1669–81. http://dx.doi.org/10.14778/3401960.3401965.

Full text
Abstract:
Pointwise order dependencies (PODs) are dependencies that specify ordering semantics on attributes of tuples. POD discovery refers to the process of identifying the set Σ of valid and minimal PODs on a given data set D. In practice D is typically large and keeps changing, and it is prohibitively expensive to compute Σ from scratch every time. In this paper, we make a first effort to study the incremental POD discovery problem, aiming at computing changes ΔΣ to Σ such that Σ ⊕ ΔΣ is the set of valid and minimal PODs on D with a set ΔD of tuple insertion updates. (1) We first propose a novel in
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Tianjun, Shuang Song, Shugang Li, Li Ma, Shaobo Pan, and Liyun Han. "Research on Gas Concentration Prediction Models Based on LSTM Multidimensional Time Series." Energies 12, no. 1 (2019): 161. http://dx.doi.org/10.3390/en12010161.

Full text
Abstract:
Effective prediction of gas concentrations and reasonable development of corresponding safety measures have important guiding significance for improving coal mine safety management. In order to improve the accuracy of gas concentration prediction and enhance the applicability of the model, this paper proposes a long short-term memory (LSTM) cyclic neural network prediction method based on actual coal mine production monitoring data to select gas concentration time series with larger samples and longer time spans, including model structural design, model training, model prediction, and model op
APA, Harvard, Vancouver, ISO, and other styles
47

Dueñas, Montserrat, Irene Muñoz-González, Carolina Cueva, et al. "A Survey of Modulation of Gut Microbiota by Dietary Polyphenols." BioMed Research International 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/850902.

Full text
Abstract:
Dietary polyphenols present in a broad range of plant foods have been related to beneficial health effects. This review aims to update the current information about the modulation of the gut microbiota by dietary phenolic compounds, from a perspective based on the experimental approaches used. After referring to general aspects of gut microbiota and dietary polyphenols, studies related to this topic are presented according to their experimental design: batch culture fermentations, gastrointestinal simulators, animal model studies, and human intervention studies. In general, studies evidence th
APA, Harvard, Vancouver, ISO, and other styles
48

Ma, Junshui, James Theiler, and Simon Perkins. "Accurate On-line Support Vector Regression." Neural Computation 15, no. 11 (2003): 2683–703. http://dx.doi.org/10.1162/089976603322385117.

Full text
Abstract:
Batch implementations of support vector regression (SVR) are inefficient when used in an on-line setting because they must be retrained from scratch every time the training set is modified. Following an incremental support vector classification algorithm introduced by Cauwenberghs and Poggio (2001), we have developed an accurate on-line support vector regression (AOSVR) that efficiently updates a trained SVR function whenever a sample is added to or removed from the training set. The updated SVR function is identical to that produced by a batch algorithm. Applications of AOSVR in both on-line
APA, Harvard, Vancouver, ISO, and other styles
49

Jeon, Hajin, Jeongmin Bae, Sang-Hyun Hwang, et al. "MRPrimerW2: an enhanced tool for rapid design of valid high-quality primers with multiple search modes for qPCR experiments." Nucleic Acids Research 47, no. W1 (2019): W614–W622. http://dx.doi.org/10.1093/nar/gkz323.

Full text
Abstract:
For the best results in quantitative polymerase chain reaction (qPCR) experiments, it is essential to design high-quality primers considering a multitude of constraints and the purpose of experiments. The constraints include many filtering constraints, homology test on a huge number of off-target sequences, the same constraints for batch design of primers, exon spanning, and avoiding single nucleotide polymorphism (SNP) sites. The target sequences are either in database or given as FASTA sequences, and the experiment is for amplifying either each target sequence with each correspondin
APA, Harvard, Vancouver, ISO, and other styles
50

Yao, Jiangchao, Hao Wu, Ya Zhang, Ivor W. Tsang, and Jun Sun. "Safeguarded Dynamic Label Regression for Noisy Supervision." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9103–10. http://dx.doi.org/10.1609/aaai.v33i01.33019103.

Full text
Abstract:
Learning with noisy labels is imperative in the Big Data era since it reduces expensive labor on accurate annotations. Previous method, learning with noise transition, has enjoyed theoretical guarantees when it is applied to the scenario with the class-conditional noise. However, this approach critically depends on an accurate pre-estimated noise transition, which is usually impractical. Subsequent improvement adapts the preestimation in the form of a Softmax layer along with the training progress. However, the parameters in the Softmax layer are highly tweaked for the fragile performance and
APA, Harvard, Vancouver, ISO, and other styles