
Journal articles on the topic 'March algorithms'



Consult the top 50 journal articles for your research on the topic 'March algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Tobing, Fenina Adline Twince, and Prayogo. "Perbandingan Algoritma Convex Hull: Jarvis March dan Graham Scan." Ultimatics: Jurnal Teknik Informatika 12, no. 2 (2020): 114–17. http://dx.doi.org/10.31937/ti.v12i2.1800.

Abstract:
A comparison of algorithms is needed to determine how efficient each algorithm is. Comparing the speed of algorithms that construct a unique shape from a set of points is a problem that can be solved by testing one algorithm against another. Here the problem is addressed with Jarvis March and Graham Scan, both of which build the unique convex hull of a point set; a speed comparison of the two shows that the Graham Scan algorithm generally works faster than Jarvis March.
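As a rough illustration of the first of the two algorithms compared, here is a minimal Python sketch of the gift-wrapping idea behind Jarvis March; the function names and test points are illustrative, not taken from the paper:

```python
# A minimal sketch of Jarvis March (gift wrapping): starting from the
# leftmost point, repeatedly "wrap" to the most clockwise-extreme neighbour.

def cross(o, a, b):
    """Cross product of OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def jarvis_march(points):
    """Convex hull in O(n*h) time, where h is the number of hull vertices."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    hull = []
    p = pts[0]                      # leftmost point is always on the hull
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if cross(p, q, r) < 0:  # r lies clockwise of p->q: wrap to r
                q = r
        p = q
        if p == pts[0]:
            break
    return hull

print(jarvis_march([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]; the interior point (1, 1) is dropped
```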
2

Praneeth, B. V. S. Sai. "Finite State Machine based Programmable Memory Built-in Self-Test." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 3805–9. http://dx.doi.org/10.22214/ijraset.2021.35875.

Abstract:
We propose a methodology to design a Finite State Machine (FSM)-based Programmable Memory Built-In Self-Test (PMBIST): a planned procedure for memory BIST in which a controller selects a test algorithm from a fixed set of algorithms built into the memory BIST. In general, it is not possible to test all the different memory modules present in a System-on-Chip (SoC) with a single test algorithm. Consequently, it is desirable to have a programmable memory BIST controller that can execute multiple test algorithms. The proposed memory BIST controller is designed as an FSM written in Verilog HDL. This scheme greatly simplifies the testing process and achieves good flexibility with a smaller circuit size compared with individual testing designs. We used March test algorithms such as MATS+, March X, and March C- to build the project.
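March C-, one of the algorithms named above, has a well-known element sequence: ⇕(w0); ⇑(r0,w1); ⇑(r1,w0); ⇓(r0,w1); ⇓(r1,w0); ⇕(r0). The Python sketch below shows how such a sequence exercises a toy memory with an injected stuck-at-0 fault; it is an illustration only, not the authors' Verilog controller:

```python
# A toy March C- pass over a fault-injected 1-bit-wide memory.
MARCH_C_MINUS = [
    ("up",   [("w", 0)]),            # ⇕(w0)
    ("up",   [("r", 0), ("w", 1)]),  # ⇑(r0,w1)
    ("up",   [("r", 1), ("w", 0)]),  # ⇑(r1,w0)
    ("down", [("r", 0), ("w", 1)]),  # ⇓(r0,w1)
    ("down", [("r", 1), ("w", 0)]),  # ⇓(r1,w0)
    ("down", [("r", 0)]),            # ⇕(r0)
]

def run_march(memory, elements):
    """Apply march elements in address order; return the failing addresses."""
    fails = set()
    for order, ops in elements:
        addresses = range(len(memory.cells))
        if order == "down":
            addresses = reversed(addresses)
        for addr in addresses:
            for op, value in ops:
                if op == "w":
                    memory.write(addr, value)
                elif memory.read(addr) != value:
                    fails.add(addr)
    return fails

class FaultyMemory:
    """A toy RAM with one stuck-at-0 cell (an invented fault model)."""
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero

    def write(self, addr, value):
        self.cells[addr] = 0 if addr == self.stuck else value

    def read(self, addr):
        return self.cells[addr]

mem = FaultyMemory(16, stuck_at_zero=5)
print("failing cells:", run_march(mem, MARCH_C_MINUS))  # -> {5}
```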
3

Park, Sungju, Donkyu Youn, Taehyung Kim, et al. "Microcode-Based Memory BIST Implementing Modified March Algorithms." Journal of the Korean Physical Society 40, no. 4 (2002): 749. http://dx.doi.org/10.3938/jkps.40.749.

4

Berchtold, André, Ogier Maitre, and Kevin Emery. "Optimization of the Mixture Transition Distribution Model Using the March Package for R." Symmetry 12, no. 12 (2020): 2031. http://dx.doi.org/10.3390/sym12122031.

Abstract:
Optimization of mixture models such as the mixture transition distribution (MTD) model is notoriously difficult because of the high complexity of their solution space. The best approach comprises combining features of two types of algorithms: an algorithm that can explore as completely as possible the whole solution space (e.g., an evolutionary algorithm), and another that can quickly identify an optimum starting from a set of initial conditions (for instance, an EM algorithm). The march package for the R environment is a library dedicated to the computation of Markovian models for categorical variables. It includes different algorithms that can manage the complexity of the MTD model, including an ad hoc hill-climbing procedure. In this article, we first discuss the problems related to the optimization of the MTD model, and then we show how march can be used to solve these problems; further, we provide different syntaxes for the computation of other models, including homogeneous Markov chains, hidden Markov models, and double chain Markov models.
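The two-stage strategy described (broad global exploration feeding a fast local optimizer) can be pictured with a toy sketch in which random restarts stand in for the evolutionary algorithm and a naive hill-climber stands in for EM; the objective function is invented, and nothing here reproduces the march package's routines:

```python
# A toy of global exploration + local refinement under stated assumptions.
import random
from math import sin, exp

def objective(x):
    """A multimodal toy 'log-likelihood' with several local optima."""
    return sin(5 * x) * exp(-x * x)

def hill_climb(x, step=0.05, iters=200):
    """Greedy local search: accept a random nearby point if it improves."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if objective(candidate) > objective(x):
            x = candidate
    return x

random.seed(0)
starts = [random.uniform(-2, 2) for _ in range(20)]    # global exploration
best = max((hill_climb(s) for s in starts), key=objective)
print(round(best, 3), round(objective(best), 3))       # near x ~ 0.3
```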
5

Van de Goor, Ad J., and Yervant Zorian. "Effective march algorithms for testing single-order addressed memories." Journal of Electronic Testing 5, no. 4 (1994): 337–45. http://dx.doi.org/10.1007/bf00972518.

6

Vyšniauskaitė, Laura, and Vydūnas Šaltenis. "A PRIORI FILTRATION OF POINTS FOR FINDING CONVEX HULL." Technological and Economic Development of Economy 12, no. 4 (2006): 341–46. http://dx.doi.org/10.3846/13928619.2006.9637764.

Abstract:
The convex hull is the minimum-area convex polygon containing a planar point set. By now there are many convex hull algorithms (Graham Scan, Jarvis March, QuickHull, Incremental, Divide-and-Conquer, Marriage-before-Conquest, Monotone Chain, Brute Force). When choosing among them, the main attention is paid to running time. To raise the efficiency of all these algorithms, this article presents an idea of a priori filtration of points. In addition, two new algorithms have been created and presented. Experimental research has shown the very good efficiency of these algorithms.
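One widely known filtration of this kind is the Akl-Toussaint heuristic, sketched below; it illustrates the principle only and is not the filtration proposed in the paper:

```python
# Points strictly inside the quadrilateral spanned by the x/y extremes
# cannot lie on the hull, so they may be discarded before any hull search.
from math import atan2

def side(a, b, p):
    """Sign of the cross product: which side of segment a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def prefilter(points):
    """Drop points strictly inside the quadrilateral of x/y extreme points."""
    quad = {min(points), max(points),
            min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])}
    cx = sum(p[0] for p in quad) / len(quad)
    cy = sum(p[1] for p in quad) / len(quad)
    # order the corners counter-clockwise around their centroid
    corners = sorted(quad, key=lambda p: atan2(p[1] - cy, p[0] - cx))
    edges = list(zip(corners, corners[1:] + corners[:1]))
    return [p for p in points
            if not all(side(a, b, p) > 0 for a, b in edges)]

pts = [(0, 2), (2, 0), (4, 2), (2, 4), (2, 2), (1, 2), (3, 2)]
print(prefilter(pts))  # -> [(0, 2), (2, 0), (4, 2), (2, 4)]; interior dropped
```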
7

Yarmolik, V. N., I. Mrozek, and S. V. Yarmolik. "Pseudoexhaustive memory testing based on March A type march tests." Informatics 17, no. 2 (2020): 54–70. http://dx.doi.org/10.37661/1816-0301-2020-17-2-54-70.

Abstract:
The relevance of testing the memory devices of modern computing systems is shown. The methods and algorithms for implementing test procedures based on classical March tests are analyzed. Multiple March tests are highlighted for detecting complex pattern-sensitive memory faults. The necessary condition that test procedures must satisfy to detect such complex faults is substantiated: the formation of a pseudo-exhaustive test for a given number of arbitrary memory cells. We study the effectiveness of single and double application of tests like MATS++, March C– and March A, and also give analytical estimates for different numbers of k ≤ 10 memory cells participating in a malfunction. The applicability of the mathematical model of the combinatorial coupon collector's problem for describing multiple memory testing is substantiated. The average, minimum, and maximum multiplicities of multiple tests needed to provide an exhaustive set of binary combinations for a given number of arbitrary memory cells are presented. The validity of the analytical estimates is shown experimentally, and the high efficiency of forming a pseudo-exhaustive coverage with tests of the March A type is confirmed.
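The coupon-collector view invoked here is easy to simulate: treat each of the 2^k backgrounds on k chosen cells as a coupon and count the runs needed until all have appeared. The sketch below assumes each run leaves a uniformly random pattern, a simplification of the paper's setting:

```python
# Simulated vs. analytic coupon-collector run counts for k memory cells.
import random
from statistics import mean

def runs_to_cover(k, trials=2000):
    """Average number of runs until all 2**k patterns have been seen."""
    counts = []
    for _ in range(trials):
        seen, runs = set(), 0
        while len(seen) < 2 ** k:
            seen.add(random.randrange(2 ** k))  # pattern left on the k cells
            runs += 1
        counts.append(runs)
    return mean(counts)

for k in range(2, 6):
    n = 2 ** k
    harmonic = sum(1.0 / i for i in range(1, n + 1))
    print(f"k={k}: simulated {runs_to_cover(k):6.1f}, "
          f"analytic n*H(n) = {n * harmonic:6.1f}")
```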
8

Quek, Wei Chun, Pang Wai Leong, Chan Kah Yoong, Lee It Ee, and Chung Gwo Chin. "HDL Modelling of Low-Cost Memory Fault Detection Tester." Journal of Engineering Technology and Applied Physics 2, no. 2 (2020): 17–23. http://dx.doi.org/10.33093/jetap.2020.2.2.3.

Abstract:
Memory modules are widely used in various kinds of electronic system design. The capacity of memory modules has increased rapidly over the past few years in order to satisfy high demand from end users. Memory module manufacturers demand more units of automatic test equipment (ATE) to increase the production rate. However, the existing ATE used in industry to carry out memory testing is too costly (at least a million dollars per ATE tester). Low-cost memory testers are urgently needed to increase the production rate of memory modules. This has inspired us to design a low-cost memory tester. A low-cost memory fault detection tester with all the major fault detection algorithms used in industry is modelled using the Very High Speed Integrated Circuit Hardware Description Language (VHDL) in this paper to support the need for a low-cost ATE memory tester. The fault detection algorithms modelled are MATS+ (Modified Algorithm Test Sequence), MATS++, March C, March C-, March X, March Y, zero-one, and checkerboard scan tests. A Perl program is used to analyse the simulation results, and a log file is generated at the end of the memory test. Extensive simulation and experimental test results show that the memory tester modelled covers all the memory test algorithms used in the industry. The low-cost memory fault detection tester designed provides 100% fault detection coverage for all memory defects.
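Of the algorithms listed, the checkerboard scan test is the simplest to sketch: neighbouring cells hold opposite values, so a short between adjacent cells corrupts a readback. The Python below is a rough stand-in for the paper's VHDL model, with an invented row-major geometry:

```python
# A toy checkerboard scan test over a row-major memory of rows x cols cells.
def checkerboard_test(read, write, rows, cols):
    """Run both checkerboard phases; return addresses that read back wrong."""
    fails = set()
    for phase in (0, 1):
        expected = {}
        for r in range(rows):
            for c in range(cols):
                bit = (r + c + phase) % 2      # 0101... then 1010... pattern
                write(r * cols + c, bit)
                expected[r * cols + c] = bit
        for addr, bit in expected.items():
            if read(addr) != bit:
                fails.add(addr)
    return fails

# usage with a toy fault-free memory backed by a plain list
mem = [0] * 64
fails = checkerboard_test(mem.__getitem__, mem.__setitem__, 8, 8)
print("failing addresses:", fails)  # -> set()
```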
9

Lin, Zhiting, Chunyu Peng, and Kun Wang. "A Novel Controllable BIST Circuit for embedded SRAM." Open Electrical & Electronic Engineering Journal 10, no. 1 (2016): 1–10. http://dx.doi.org/10.2174/1874129001610010001.

Abstract:
With increasingly stringent requirements for memory test, the complexity of test algorithms is increasing. This makes the BIST (Built-In Self-Test) circuit more complex and its area larger. This paper proposes a novel controllable BIST circuit, which provides a cost-effective solution that supports a variety of March algorithms and SRAM embedded testing operation modes. It controls the test patterns with three additional input ports, and it indicates the algorithm progress, the test result and the number of fails with three output ports. To achieve test pattern generation, analysis and test result recording, the proposed BIST circuit contains five internal functional modules, which are the Address Generator, Control Generator, Data Generator, Data Comparator and Fail Accumulator. The test patterns of the proposed BIST circuit are controlled by external signals. It is not only suitable for any existing March algorithm but also leaves room for extension if needed.
10

Singh, Balwinder, Arun Khosla, and Sukhleen Bindra Narang. "Area Overhead and Power Analysis of March Algorithms for Memory BIST." Procedia Engineering 30 (2012): 930–36. http://dx.doi.org/10.1016/j.proeng.2012.01.947.

11

Dmitriev, V. "Effective Algorithms of the Programming Tasks." Profession-Oriented School 4, no. 2 (2016): 53–61. http://dx.doi.org/10.12737/19627.

Abstract:
In the article, the questions of finding and implementing effective algorithms for solving ingenious programming tasks are considered. Algorithm efficiency is understood as the fulfillment of the requirements for minimum memory (under the terms of the task) and for the speed of the algorithm. The tasks presented in the article were used by the author to conduct a city programming contest among schoolchildren and students in March 2015. The tasks considered in this work aim to show how problems can be solved effectively using optimal solutions, which often may well remain unfound. Source codes of the programs are listed in the Delphi programming language. The importance of training pupils and students to solve such tasks follows from practical necessity and modern requirements for graduates.
12

Ye, Yizhou, Sudhakar Manne, and Dimitri Bennett. "Identifying Patients With Inflammatory Bowel Diseases in an Administrative Health Claims Database: Do Algorithms Generate Similar Findings?" INQUIRY: The Journal of Health Care Organization, Provision, and Financing 56 (January 2019): 004695801988781. http://dx.doi.org/10.1177/0046958019887816.

Abstract:
Application of selective algorithms to administrative health claims databases allows detection of specific patients and disease or treatment outcomes. This study identified and applied different algorithms to a single data set to compare the numbers of patients with different inflammatory bowel disease classifications identified by each algorithm. A literature review was performed to identify algorithms developed to define inflammatory bowel disease patients, including ulcerative colitis, Crohn’s disease, and inflammatory bowel disease unspecified in routinely collected administrative claims databases. Based on the study population, validation methods, and results, selected algorithms were applied to the Optum Clinformatics® Data Mart database from June 2000 to March 2017. The patient cohorts identified by each algorithm were compared. Three different algorithms were identified from literature review and selected for comparison (A, B, and C). Each identified different numbers of patients with any form of inflammatory bowel disease (323 833, 246 953, and 171 537 patients, respectively). The proportions of patients with ulcerative colitis, Crohn’s disease, and inflammatory bowel disease unspecified were 32.0% to 47.5%, 38.6% to 43.8%, and 8.7% to 26.6% of the total population with inflammatory bowel disease, respectively, depending on the algorithm applied. Only 5.1% of patients with inflammatory bowel disease unspecified were identified by all 3 algorithms. Algorithm C identified the smallest cohort for each disease category except inflammatory bowel disease unspecified. This study is the first to compare numbers of inflammatory bowel disease patients identified by different algorithms from a single database. The differences between results highlight the need for validation of algorithms to accurately identify inflammatory bowel disease patients.
13

Lin, Kun-Jin, and Cheng-Wen Wu. "Testing content-addressable memories using functional fault models and march-like algorithms." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 19, no. 5 (2000): 577–88. http://dx.doi.org/10.1109/43.845082.

14

Salvy, Bruno, Bob Sedgewick, Michele Soria, Wojciech Szpankowski, and Brigitte Vallée. "PHILIPPE FLAJOLET 1 December 1948 – 22 March 2011." Combinatorics, Probability and Computing 20, no. 5 (2011): 647–49. http://dx.doi.org/10.1017/s0963548311000320.

Abstract:
Philippe Flajolet, mathematician and computer scientist extraordinaire, the father of analytic combinatorics, suddenly passed away on 22 March 2011, at the prime of his career. He is celebrated for opening new lines of research in the analysis of algorithms, developing powerful new methods, and solving difficult open problems. His research contributions will have an impact for generations, and his approach to research, based on curiosity, discriminating taste, broad knowledge and interests, intellectual integrity, and a genuine sense of camaraderie, will serve as an inspiration to those who knew him, for years to come.
15

Pinker, Rachel T., Donglian Sun, Meng-Pai Hung, Chuan Li, and Jeffrey B. Basara. "Evaluation of Satellite Estimates of Land Surface Temperature from GOES over the United States." Journal of Applied Meteorology and Climatology 48, no. 1 (2009): 167–80. http://dx.doi.org/10.1175/2008jamc1781.1.

Abstract:
A comprehensive evaluation of split-window and triple-window algorithms to estimate land surface temperature (LST) from Geostationary Operational Environmental Satellites (GOES) that were previously described by Sun and Pinker is presented. The evaluation of the split-window algorithm is done against ground observations and against independently developed algorithms. The triple-window algorithm is evaluated only for nighttime against ground observations and against the Sun and Pinker split-window (SP-SW) algorithm. The ground observations used are from the Atmospheric Radiation Measurement Program (ARM) Central Facility, Southern Great Plains site (April 1997–March 1998); from five Surface Radiation Budget Network (SURFRAD) stations (1996–2000); and from the Oklahoma Mesonet. The independent algorithms used for comparison include the National Oceanic and Atmospheric Administration/National Environmental Satellite, Data and Information Service operational method and the following split-window algorithms: that of Price, that of Prata and Platt, two versions of that of Ulivieri, that of Vidal, two versions of that of Sobrino, that of Coll and others, the generalized split-window algorithm as described by Becker and Li and by Wan and Dozier, and the Becker and Li algorithm with water vapor correction. The evaluation against the ARM and SURFRAD observations indicates that the LST retrievals from the SP-SW algorithm are in closer agreement with the ground observations than are the other algorithms tested. When evaluated against observations from the Oklahoma Mesonet, the triple-window algorithm is found to perform better than the split-window algorithm during nighttime.
16

Kodera, Yuki, Naoki Hayashimoto, Ken Moriwaki, et al. "First-Year Performance of a Nationwide Earthquake Early Warning System Using a Wavefield-Based Ground-Motion Prediction Algorithm in Japan." Seismological Research Letters 91, no. 2A (2020): 826–34. http://dx.doi.org/10.1785/0220190263.

Abstract:
The propagation of local undamped motion (PLUM) algorithm is a wavefield-based method that predicts ground motions using direct observations. In March 2018, the Japan Meteorological Agency (JMA) implemented PLUM into its nationwide earthquake early warning (EEW) system, in order to enhance system robustness for complex earthquake scenarios in which traditional source-based algorithms fail to provide accurate and timely ground-motion predictions. This was the first nationwide EEW system to implement a wavefield-based methodology. Here, we evaluate the performance of PLUM during its first year of implementation in the JMA EEW system, using earthquakes that occurred between March 2018 and March 2019; these include 13 earthquakes that satisfied the public warning issuance criteria. Our analysis shows that PLUM predicted ground motions without significant errors and reduced the number of missed warnings. These findings indicate that introducing the wavefield-based methodology benefits EEW users with high tolerance of false alarms, including the general public.
17

Yeh, Jen-Chieh, Kuo-Liang Cheng, Yung-Fa Chou, and Cheng-Wen Wu. "Flash Memory Testing and Built-In Self-Diagnosis With March-Like Test Algorithms." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 26, no. 6 (2007): 1101–13. http://dx.doi.org/10.1109/tcad.2006.885828.

18

Deepti, K., and P. Aishwarya. "Design and Implementation of FSM Based MBIST using March Algorithm." International Journal for Modern Trends in Science and Technology, no. 8 (August 5, 2020): 18–21. http://dx.doi.org/10.46501/ijmtst060804.

Abstract:
This research article aims at memory testing in static random access memory, which is significant in the deep submicron era. Built-in self-test (BIST) provides the best solution, replacing external automatic test equipment. Built-in self-test is a technique of designing additional hardware and software features into integrated circuits to allow them to perform testing. BIST works in the background, checking memories for faults without interfering with the actual functionality of the memory. The objective of the proposed work is to identify faults associated with the memory and perform test algorithms to detect the faults in a memory BIST architecture. The implementation of the memory BIST is done using a finite state machine model. The design of the memory BIST is accomplished using the Xilinx Vivado IDE for a 32x8 memory.
19

Simpson, Ewurabena, Robert J. Klaassen, Pranesh Chakraborty, et al. "Increasing Incidence and Prevalence of Pathologic Hemoglobinopathies Among Children in Ontario, Canada from 1991-2013." Blood 132, Supplement 1 (2018): 4698. http://dx.doi.org/10.1182/blood-2018-99-110468.

Abstract:
Background: In Ontario, Canada's largest province, population-based health administrative data represents an accessible and useful tool for population surveillance of people with chronic diseases. While hemoglobinopathies can be identified using data from universal hemoglobinopathy screening, which was implemented in November 2006, these data would not contain information on affected immigrants (21.9% of the population). We validated algorithms using provincial health administrative data and newborn screening data to identify children with hemoglobinopathies whether or not they were born in Ontario, thereby creating a population-based surveillance cohort. Objectives: (1) Validate algorithms to identify children with sickle cell disease, thalassemia and other hemoglobinopathies from within health administrative data; and (2) Determine incidence and prevalence of hemoglobinopathies in Ontario children. Methods: For the validation study, a positive reference cohort was established using lists of known hemoglobinopathy patients who were followed at five pediatric hemoglobinopathy treatment sites in Ontario and born between November 24, 2006 and March 31, 2013. Health card numbers of these patients were linked deterministically to unique identification numbers in administrative data, which included data on hospitalizations, physician claims, sociodemographic characteristics, immigration records and cause of death. The negative reference cohort included all children residing in Ontario cities who had never been seen at a hemoglobinopathy centre, and therefore assumed not to have disease. Various combinations of administrative data codes were tested for their ability to identify children <18 years of age with hemoglobinopathies from within the databases, and we selected the algorithms with the highest positive predictive value, while maintaining sensitivity >80%. Using two validated algorithms, we identified all children with hemoglobinopathies born between April 1, 1991 and March 31, 2013. We described the crude incidence and prevalence per 100,000 patient-years (PYs). Results: Two algorithms functioned best to identify incident and prevalent hemoglobinopathy cases (see Table). Among children born between April 1, 1991 to March 31, 2013, 1526 incident hemoglobinopathy patients were identified using Algorithm 1 (crude incidence of 4.85 per 100,000 PYs) and 1660 new hemoglobinopathy patients were identified using Algorithm 2 (crude incidence 5.28 per 100,000 PYs, 95% CI 3.51 to 3.92). In 2013, the overall prevalence of children <18 years living with hemoglobinopathies was 1215-1325 cases. Conclusion: Through an innovative approach using provincial health administrative, immigration and demographic data, this study identified a rising incidence and prevalence of hemoglobinopathies among Ontario children <18 years of age between April 1, 1991 and March 31, 2013, potentially due to increased immigration rates. These findings could be used to inform health services distribution. This surveillance cohort will be used to understand the impact of immigration status on health care inequality for hemoglobinopathy-related health services delivery, as well as to assess outcomes in this important group of chronic diseases.
Disclosures Klaassen: Amgen Inc.: Membership on an entity's Board of Directors or advisory committees; Octapharma AG: Consultancy, Honoraria; Agios Pharmaceuticals Inc.: Consultancy; Novartis: Research Funding; Hoffman-La Roche: Consultancy; Shire: Consultancy; Cangene: Research Funding. Jardine:Pfizer: Other: Advisory board; Bayer: Other: Advisory board; Baxalta: Other: Advisory board.
20

Kavats, Olena, Dmitriy Khramov, Kateryna Sergieieva, and Volodymyr Vasyliev. "Monitoring of Sugarcane Harvest in Brazil Based on Optical and SAR Data." Remote Sensing 12, no. 24 (2020): 4080. http://dx.doi.org/10.3390/rs12244080.

Abstract:
Algorithms for determining sugarcane harvest dates are proposed; the algorithms make it possible to monitor large areas and are based on publicly available Synthetic Aperture Radar (SAR) and optical satellite data. Algorithm 1 uses the NDVI (Normalized Difference Vegetation Index) time series derived from Sentinel-2 data. A sharp and continuous decrease in the NDVI values is the main sign of sugarcane harvest. The NDVI time series makes it possible to determine most harvest dates. The best estimates of the sugarcane areas harvested per month were obtained from March to August 2018, when the cloudy pixel percentage was less than 45% of the image area. Algorithm 2 for harvest monitoring uses the coherence time series derived from Sentinel-1 Single Look Complex (SLC) images together with optical satellite data. Low coherence, with sharp growth upon harvest completion, corresponds to the harvest period. NDVI time-series trends were used to refine the algorithm; a descending NDVI trend is assumed to correspond to harvest. The algorithms were used to identify the harvest dates and calculate the harvested areas of a reference sample of 574 sugarcane parcels with a total area of 3745 ha in the state of São Paulo, Brazil. The harvested areas identified by visual interpretation coincide with the optical-data algorithm (Algorithm 1) by 97%; the coincidence with the algorithm based on SAR and optical data (Algorithm 2) is 90%. The main practical applications of the algorithms are harvest monitoring and identification of harvested fields to estimate the harvested area.
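As a rough illustration of the harvest cue used by Algorithm 1 (a sharp, sustained NDVI decrease), consider the sketch below; the thresholds and the series are invented, not the paper's calibration:

```python
# Flag dates where an NDVI series falls sharply and then stays low.
def harvest_candidates(dates, ndvi, drop=0.3, low=0.4):
    """Return dates where NDVI drops by >= `drop` and stays <= `low`."""
    flagged = []
    for i in range(1, len(ndvi)):
        sharp_fall = ndvi[i - 1] - ndvi[i] >= drop
        stays_low = all(v <= low for v in ndvi[i:i + 3])
        if sharp_fall and stays_low:
            flagged.append(dates[i])
    return flagged

dates = ["2018-03-01", "2018-03-11", "2018-03-21", "2018-03-31", "2018-04-10"]
ndvi  = [0.78, 0.75, 0.32, 0.28, 0.30]
print(harvest_candidates(dates, ndvi))  # -> ['2018-03-21']
```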
21

Löwe, Sedmíková, Natov, Jankovský, Hejcmanová, and Dvořák. "Differences in Timber Volume Estimates Using Various Algorithms Available in the Control and Information Systems of Harvesters." Forests 10, no. 5 (2019): 388. http://dx.doi.org/10.3390/f10050388.

Abstract:
Timber is the most important source of revenue in forestry, and it is therefore necessary to estimate its volume precisely. The share of timber volume produced by harvesters grows annually in many European countries. Suitable harvester settings allow the most accurate volume estimates of the produced timber. In this study, we compared the different methods of log volume estimation applied by the control and information systems of harvesters. The aim was to analyze the price categories that can be set up in the StanForD standard and to determine the differences between the algorithms used for log volume estimation. We obtained the data from *.STM files collected from March 2017 until June 2018 on a medium-size harvester. We analyzed the price categories and found seven different algorithms used to estimate log volumes. Log volume estimates according to Algorithm A2 were considered the standard because these estimates should be closest to the true log volumes. Significant differences, except between Algorithm A2 and Algorithm A3, were found between log volumes estimated by the different algorithms. After categorization of logs into assortments, the results showed significant differences between algorithms in each assortment. In the roundwood assortment, which contains the most valuable logs, a difference of more than 6% was found between the log volumes estimated by Algorithm A5 and Algorithm A2. This is notable because Algorithm A5 is widely used in some Central European countries. To obtain volumes closest to the true volumes, Algorithm A2 should be used for harvester production outputs. The resulting differences between the algorithms can be used to estimate the volume difference between harvester outputs using the different price categories. Understanding this harvester setting and the differences between the price categories will provide users with useful information in applied forest management.
22

Inoue, Isao H. "CREST project focused on neuromorphic devices and networks." Impact 2020, no. 5 (2020): 6–9. http://dx.doi.org/10.21820/23987073.2020.5.6.

Abstract:
The artificial neural network is a type of electronic circuit modelled after the human brain. It contains thousands of artificial neurons and synapses that, in general, assemble to execute algorithms that allow the neural network to incorporate a large amount of input data. One of these algorithms is known as deep learning (DL), a kind of statistical processing that learns and infers features of big data while consuming tremendous energy. A team led by Dr Isao H Inoue of the National Institute of Advanced Industrial Science and Technology (AIST) is working on a five-and-a-half-year CREST project, running until March 2025, to develop a novel neuromorphic architecture that can do the learning and inference without using such an algorithm, and thus with low power consumption.
23

Wandishin, Matthew S., Michael E. Baldwin, Steven L. Mullen, and John V. Cortinas. "Short-Range Ensemble Forecasts of Precipitation Type." Weather and Forecasting 20, no. 4 (2005): 609–26. http://dx.doi.org/10.1175/waf871.1.

Abstract:
Short-range ensemble forecasting is extended to a critical winter weather problem: forecasting precipitation type. Forecast soundings from the operational NCEP Short-Range Ensemble Forecast system are combined with five precipitation-type algorithms to produce probabilistic forecasts from January through March 2002. Thus the ensemble combines model diversity, initial condition diversity, and postprocessing algorithm diversity. All verification numbers are conditioned on both the ensemble and observations recording some form of precipitation. This separates the forecast of type from the yes–no precipitation forecast. The ensemble is very skillful in forecasting rain and snow but it is only moderately skillful for freezing rain and unskillful for ice pellets. However, even for the unskillful forecasts the ensemble shows some ability to discriminate between the different precipitation types and thus provides some positive value to forecast users. Algorithm diversity is shown to be as important as initial condition diversity in terms of forecast quality, although neither has as big an impact as model diversity. The algorithms have their individual strengths and weaknesses, but no algorithm is clearly better or worse than the others overall.
24

Dipaola, Franca, Mauro Gatti, Veronica Pacetti, et al. "Artificial Intelligence Algorithms and Natural Language Processing for the Recognition of Syncope Patients on Emergency Department Medical Records." Journal of Clinical Medicine 8, no. 10 (2019): 1677. http://dx.doi.org/10.3390/jcm8101677.

Abstract:
Background: Enrollment of large cohorts of syncope patients from administrative data is crucial for proper risk stratification but is limited by the enormous amount of time required for manual revision of medical records. Aim: To develop a Natural Language Processing (NLP) algorithm to automatically identify syncope from Emergency Department (ED) electronic medical records (EMRs). Methods: De-identified EMRs of all consecutive patients evaluated at Humanitas Research Hospital ED from 1 December 2013 to 31 March 2014 and from 1 December 2015 to 31 March 2016 were manually annotated to identify syncope. Records were combined in a single dataset and classified. The performance of combined multiple NLP feature selectors and classifiers was tested. Primary Outcomes: NLP algorithms’ accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F3 score. Results: 15,098 and 15,222 records from 2013 and 2015 datasets were analyzed. Syncope was present in 571 records. Normalized Gini Index feature selector combined with Support Vector Machines classifier obtained the best F3 value (84.0%), with 92.2% sensitivity and 47.4% positive predictive value. A 96% analysis time reduction was computed, compared with EMRs manual review. Conclusions: This artificial intelligence algorithm enabled the automatic identification of a large population of syncope patients using EMRs.
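A minimal analogue of the best-performing combination reported (a feature selector feeding a Support Vector Machine) can be assembled with scikit-learn; the Normalized Gini Index selector is the authors' own, so chi-squared selection stands in for it here, and the records are invented:

```python
# A sketch of feature selection + SVM text classification over toy EMR text.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

records = ["patient fainted briefly, spontaneous recovery",
           "transient loss of consciousness at rest",
           "ankle sprain after fall from bicycle",
           "laceration of left hand, sutured"]
labels = [1, 1, 0, 0]  # 1 = syncope mentioned (invented annotations)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("select", SelectKBest(chi2, k=10)),             # keep the 10 best features
    ("svm", LinearSVC()),                            # linear SVM classifier
])
clf.fit(records, labels)
print(clf.predict(["syncope with loss of consciousness"]))
```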
25

Carvalho, Melissa, Aurea Maria Ciotti, Sônia Maria Flores Gianesella, Flávia Marisa Prado Saldanha Corrêa, and Rafael Riani Costa Perinotto. "Bio-Optical Properties of the Inner Continental Shelf off Santos Estuarine System, Southeastern Brazil, and their Implications for Ocean Color Algorithm Performance." Brazilian Journal of Oceanography 62, no. 2 (2014): 71–87. http://dx.doi.org/10.1590/s1679-87592014044506202.

Abstract:
Optical characterization of coastal water masses is an important tool for a better understanding of physical and biochemical processes and aids the optimization of ocean color algorithms. In this study we present three optical classes of water observed during October/2005 and March/2006 on the inner continental shelf adjacent to Santos Bay (Brazil), based on remote sensing reflectance. ANOVA indicated an increasing estuarine influence from class 1 to class 3. Class 3 presented the highest chlorophyll-a and nutrient concentrations and the highest light absorption coefficients. Colored dissolved organic matter (CDOM) dominated light absorption in all classes and was strongly correlated with salinity in October/2005 due to the influence of the La Plata plume. The results indicated that CDOM dynamics on the Santos inner shelf are very complex. The performance of global chlorophyll algorithms was significantly poorer for October/2005 than for March/2006. As inconsistent changes in phytoplankton light absorption spectra were detected between samplings, the results show that future bio-optical algorithms for this region must be optimized preferentially considering CDOM optical parameters.
26

Mata, Carlos. "Technology Focus: Production Monitoring (March 2021)." Journal of Petroleum Technology 73, no. 03 (2021): 48. http://dx.doi.org/10.2118/0321-0048-jpt.

Abstract:
Last year, events accelerated several trends in the energy landscape. Oil and gas prices have remained low, and the industry is focusing more strongly on reducing costs and increasing operational efficiency. Reducing costs is not only about cutting costs today but also about reducing the life-cycle cost per barrel. Implementing innovative technologies that increase recovery requires a small investment but can bring large rewards. Advances in sensor accuracy, computing power, and data analytics unlock innovative use cases for technology for mapping subsurface movements of fluids. Very different technologies provide independent insights into the displacement process in the reservoir—for example, distributed acoustic sensing (DAS) used for periodic 4D seismic, time-lapse borehole microgravity surveys for 4D reservoir fluid mapping, DNA analytics to map interwell connectivity and zonal contributions, and interpreting downhole gauge pressure fluctuations caused by tidal forces to map fluid fronts. The aggregation of these uncorrelated insights helps geologists and reservoir engineers narrow down the number of possible realizations of the reservoir model, better map bypassed resources, and provide forecasts that are more realistic. On the production side, well instrumentation continues to become more affordable over time. For example: distributed temperature sensing, DAS, and downhole gauge lines can be run in a single optoelectric cable, which reduces well complexity and instrumentation costs. Wells can be instrumented at surface for less than $5,000 using wireless technologies. Wireless downhole gauges also can be retrofitted in older completions. There are many ways to leverage technology for improving production and reservoir monitoring and unlocking potential. The OnePetro online library offers a large collection of novel use cases for technology and algorithms applied to production and reservoir monitoring. The challenge now is to transform the way we work to realize the maximum value from such technology.
27

Rojasree, V., et al. "Research Intuitions of Asymmetric Crypto System." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (2021): 5024–33. http://dx.doi.org/10.17762/turcomat.v12i3.2016.

Abstract:
In a fast-moving world full of data exchange and communication technology, with all of an individual's sensitive information virtually available anywhere and anytime, security aspects of the Internet become ever more critical. The areas at risk are attended to and assured safe by means of some sort of crypto mechanism, and the strength and vulnerability of that crypto mechanism define the durability of the system. Encryption on the communication channel can implement either public or private key algorithms depending on the area of application. Public key cryptography is specifically designed to keep the key itself safe between the sender and receiver themselves. There are plenty of public key cryptographic algorithms, but only a few are renowned. This paper aims to collect all possible public key cryptographic methods and analyze their pros and cons so as to find a better algorithm to suit almost all conditions in the Internet communication world and e-commerce. Research in quantum computers is booming now, and it is anticipated that the supremacy of quantum computers will crack the present public key crypto algorithms. This paper highlights the issues and challenges posed by quantum computing and draws the attention of network researchers to march towards research on quantum-safe algorithms.
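As a reminder of how the renowned public-key schemes surveyed in such papers operate, here is the textbook RSA toy with deliberately tiny primes; real deployments use 2048-bit or larger keys with padding, both omitted here:

```python
# Textbook RSA with tiny primes (for illustration only; never use in practice).
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (modular inverse) = 2753

message = 65
cipher = pow(message, e, n)    # encrypt with the public key (e, n)
plain = pow(cipher, d, n)      # decrypt with the private key (d, n)
print(cipher, plain)           # -> 2790 65
```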
28

Chao, Lijun, Ke Zhang, Jingfeng Wang, Jin Feng, and Mengjie Zhang. "A Comprehensive Evaluation of Five Evapotranspiration Datasets Based on Ground and GRACE Satellite Observations: Implications for Improvement of Evapotranspiration Retrieval Algorithm." Remote Sensing 13, no. 12 (2021): 2414. http://dx.doi.org/10.3390/rs13122414.

Abstract:
Evapotranspiration (ET) is a vital part of the hydrological cycle and the water–energy balance. To explore the characteristics of five typical remote sensing evapotranspiration datasets and provide guidance for algorithm development, we used reconstructed evapotranspiration (Recon) data based on ground and GRACE satellite observations as a benchmark and evaluated five remote sensing datasets for 592 watersheds across the continental United States. The Global Land Evaporation Amsterdam Model (GLEAM) dataset (with bias and RMSE values of 23.18 mm/year and 106.10 mm/year, respectively), the process-based land surface evapotranspiration/heat flux (P-LSH) dataset (bias = 22.94 mm/year and RMSE = 114.44 mm/year) and the Penman–Monteith–Leuning (PML) algorithm-generated ET dataset (bias = −17.73 mm/year and RMSE = 108.97 mm/year) showed the best performance on a yearly scale, followed by the model tree ensemble (MTE) dataset (bias = 99.45 mm/year and RMSE = 141.32 mm/year) and the moderate-resolution imaging spectroradiometer (MODIS) dataset (bias = −106.71 mm/year and RMSE = 158.90 mm/year). The P-LSH dataset outperformed the other four ET datasets on a seasonal scale, especially from March to August. Both PML and MTE showed better overall accuracy and could accurately capture the spatial variability of evapotranspiration in arid regions. The P-LSH and GLEAM products were consistent with the Recon data in the middle-value section. MODIS and MTE had larger bias and RMSE values on a yearly scale, with the MODIS and MTE datasets tending to underestimate and overestimate ET values in all sections, respectively. In the future, the aim should be to reduce bias in the MODIS and MTE algorithms and further improve the seasonality of the ET estimation in the GLEAM algorithm, while the estimation accuracy of the P-LSH and MODIS algorithms should be improved in arid regions. Our analysis suggests that combining artificial intelligence or data-driven algorithms with physical process algorithms will further improve the accuracy of ET estimation algorithms and the quality of ET datasets, as well as enhancing their capacity to be applied in different climate regions.
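The two headline metrics of this comparison, bias and RMSE, are straightforward to compute; the sketch below uses invented sample values, not the evaluated datasets:

```python
# Bias (mean error) and RMSE (root mean square error) against a benchmark.
import numpy as np

def bias_and_rmse(estimate, benchmark):
    diff = np.asarray(estimate) - np.asarray(benchmark)
    return diff.mean(), np.sqrt((diff ** 2).mean())

et_product = [620.0, 540.0, 700.0]   # mm/year, e.g. a remote sensing ET
recon      = [600.0, 520.0, 690.0]   # mm/year, the Recon benchmark
bias, rmse = bias_and_rmse(et_product, recon)
print(f"bias = {bias:.1f} mm/year, RMSE = {rmse:.1f} mm/year")
```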
29

Alonso, Javier, Joshué Pérez, Vicente Milanés, Carlos González, and Teresa de Pedro. "EU CyberCars-2 Final Demo Results: IAI-CSIC Standpoint." Journal of Robotics and Mechatronics 22, no. 6 (2010): 702–7. http://dx.doi.org/10.20965/jrm.2010.p0702.

Abstract:
In our work on decision and control algorithms for cooperative driving maneuvers developed by the AUTOPIA group of the IAI-CSIC [1] in EU project CyberCars-2 (CC2), we focused on defining and developing the software architecture and procedures enabling transparent cooperation between cybercars and dual-mode cars in complex maneuvers tested in the final CyberCars-2 demo. After briefly outlining a common architecture definition, a detailed study of cooperative maneuvers and an analysis of decision and control primitives for cooperative driving, we report on the final demonstration, the data it provided and its control algorithms. The main contributors to this work are: IAI-CSIC, INRIA, and TNO. Note that three different vehicles with different architectures and different control can cooperate using the data exchanged and a common decision algorithm to conduct complex cooperative maneuvers. [1] The Industrial Automation Institute (IAI-CSIC) merged with the Polytechnic University of Madrid to become the Robotics and Automation Center (CAR-UPM-CSIC) on March 1, 2010.
30

Parera-Portell, Joan Antoni, Raquel Ubach, and Charles Gignac. "An improved sea ice detection algorithm using MODIS: application as a new European sea ice extent indicator." Cryosphere 15, no. 6 (2021): 2803–18. http://dx.doi.org/10.5194/tc-15-2803-2021.

Abstract:
The continued loss of sea ice in the Northern Hemisphere due to global warming poses a threat to biota and human activities, evidencing the necessity of efficient sea ice monitoring tools. Aiming at the creation of an improved sea ice extent indicator covering the European regional seas, the new IceMap500 algorithm has been developed to classify sea ice and water at a resolution of 500 m at nadir. IceMap500 features a classification strategy built upon previous MODIS sea ice extent algorithms and a new method to reclassify areas affected by resolution-breaking features inherited from the MODIS cloud mask. This approach results in an enlargement of mapped area, a reduction of potential error sources and a better delineation of the sea ice edge, while still systematically achieving accuracies above 90 %, as obtained by manual validation. Swath maps have been aggregated at a monthly scale to obtain sea ice extent with a method that is sensitive to spatio-temporal variations in the sea ice cover and that can be used as an additional error filter. The resulting dataset, covering the months of maximum and minimum sea ice extent (i.e. March and September) over 2 decades (from 2000 to 2019), demonstrates the algorithm's applicability as a monitoring tool and as an indicator, illustrating the sea ice decline at a regional scale. The European sea regions located in the Arctic, NE Atlantic and Barents seas display clear negative trends in both March (−27.98 ± 6.01 × 10³ km² yr⁻¹) and September (−16.47 ± 5.66 × 10³ km² yr⁻¹). Such trends indicate that the sea ice cover is shrinking at a rate of ∼ 9 % and ∼ 13 % per decade, respectively, even though the sea ice extent loss is comparatively ∼ 70 % greater in March.
31

He, Mingjun, Shuangyan He, Xiaodong Zhang, Feng Zhou, and Peiliang Li. "Assessment of Normalized Water-Leaving Radiance Derived from GOCI Using AERONET-OC Data." Remote Sensing 13, no. 9 (2021): 1640. http://dx.doi.org/10.3390/rs13091640.

Abstract:
The geostationary ocean color imager (GOCI), the world's first operational geostationary ocean color sensor, aims at monitoring short-term and small-scale changes of waters over the northwestern Pacific Ocean. Before assessing its capability of detecting subdiurnal changes of seawater properties, a fundamental understanding of the uncertainties of normalized water-leaving radiance (nLw) products introduced by atmospheric correction algorithms is required. This paper presents these uncertainties by assessing GOCI-derived nLw products generated by two commonly used operational atmospheric algorithms, the Korea Ocean Satellite Center (KOSC) standard atmospheric algorithm adopted in the GOCI Data Processing System (GDPS) and the NASA standard atmospheric algorithm implemented in the Sea-Viewing Wide Field-of-View Sensor Data Analysis System (SeaDAS/l2gen package), against nLw data provided by the Aerosol Robotic Network Ocean Color (AERONET-OC). The nLw data acquired from the GOCI sensor based on the two algorithms and from the four AERONET-OC sites of Ariake, Ieodo, Socheongcho, and Gageocho from October 2011 to March 2019 were obtained, matched, and analyzed. The GDPS-generated nLw data are slightly better than those from SeaDAS at visible bands; however, the mean percentage relative errors for both algorithms at blue bands are over 30%. The nLw data derived by GDPS are of better quality in both clear and turbid water, although underestimation is observed at the near-infrared (NIR) band (865 nm) in turbid water. The nLw data derived by SeaDAS are underestimated in both clear and turbid water, and the underestimation worsens toward the short visible bands. Moreover, both algorithms perform better at noon (02 and 03 Coordinated Universal Time (UTC)) and worse in the early morning and late afternoon. It is speculated that the uncertainties in the nLw measurements arise from the aerosol models, the NIR water-leaving radiance correction method, and the bidirectional reflectance distribution function (BRDF) correction method in the corresponding atmospheric correction procedures.
32

Mrozek, Ireneusz, and Vyacheslav Yarmolik. "Two-Run RAM March Testing with Address Decimation." Journal of Circuits, Systems and Computers 26, no. 02 (2016): 1750031. http://dx.doi.org/10.1142/s0218126617500311.

Abstract:
Conventional march memory tests have high fault coverage, especially for simple faults like stuck-at faults (SAFs), transition faults (TFs) or coupling faults (CFs). At the same time, standard march tests, which are based on only one run, are becoming insufficient for complex faults like pattern-sensitive faults (PSFs). To increase fault coverage, multi-run transparent march test algorithms have been used. This solution is especially suitable for built-in self-test (BIST) implementation. The transparent BIST approach has the incomparable advantage of preserving the content of the random access memory (RAM) after testing: we do not need to save the memory content before the test session or to restore it at the end of the session. Therefore, these techniques are widely used in critical applications (medical electronics, railway control, avionics, telecommunications, etc.) for periodic testing in the field. Unfortunately, in many cases there is very limited time for such test sessions. Taking into account the above limitations, we focus on short, two-run march test procedures based on counter address sequences. The contribution of this paper is that it defines requirements that must be taken into account in the address sequence selection process and presents a deep analytical investigation of the optimal address decimation parameter. From the experiments we conclude that the fault coverage of test sessions generated according to the described method is higher than in the case of pseudorandom address sequences. Moreover, a benefit of this solution is the low hardware overhead of implementing an address generator.
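The counter address sequences discussed here are easy to picture: a second run visits addresses decimated by a step d coprime to the memory size, so every cell is still covered but accesses pair up differently. The step below is illustrative, not the paper's optimal decimation parameter:

```python
# Counter vs. decimated address sequences for a two-run march procedure.
from math import gcd

def counter_sequence(n):
    """Plain up-counting address order for the first run."""
    return list(range(n))

def decimated_sequence(n, d):
    """Second-run order: step through addresses with stride d (mod n)."""
    assert gcd(n, d) == 1, "step must be coprime to n to visit every cell"
    return [(i * d) % n for i in range(n)]

print(counter_sequence(8))        # [0, 1, 2, 3, 4, 5, 6, 7]
print(decimated_sequence(8, 3))   # [0, 3, 6, 1, 4, 7, 2, 5]
```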
33

Kim, Seongjai. "3‐D eikonal solvers: First‐arrival traveltimes." GEOPHYSICS 67, no. 4 (2002): 1225–31. http://dx.doi.org/10.1190/1.1500384.

Abstract:
The article is concerned with the development and comparison of three different algorithms for the computation of first-arrival traveltimes: the fast marching method (FMM), the group marching method (GMM), and a second-order finite-difference eikonal solver. GMM is introduced as a variant of FMM. It advances the solution for a selected group of grid points at a time, rather than sorting the solution in the narrow band to march forward a single grid point. The second-order eikonal solver studied in the article is an expanding-box, essentially nonoscillatory scheme whose stability is enforced by the introduction of a down 'n' out marching and a post-sweeping iteration. Techniques such as the maximum angle condition, the average normal velocity, and cache-based implementation are introduced to improve the numerical accuracy and efficiency of the algorithms. The algorithms are implemented for solving the eikonal equation in 3-D isotropic media, and their performances are compared. GMM is numerically verified to be faster than FMM. However, the second-order algorithm turns out to be superior to these first-order level-set methods in both accuracy and efficiency; the incorporation of average normal velocity improves accuracy dramatically for the second-order scheme.
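For orientation, here is a first-order fast marching sketch in two dimensions showing the freeze-the-minimum, Dijkstra-like structure the abstract refers to; the grid and velocity are illustrative, and this is not the paper's 3-D second-order solver:

```python
# First-order 2-D fast marching: freeze the smallest tentative traveltime,
# then update its neighbours with an upwind (one- or two-sided) solve.
import heapq

INF = float("inf")

def fmm(velocity, src, h=1.0):
    """First-arrival traveltimes on a 2-D grid from a point source (i, j)."""
    ny, nx = len(velocity), len(velocity[0])
    t = [[INF] * nx for _ in range(ny)]
    frozen = [[False] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]

    def upwind(cells):
        """Smallest already-frozen traveltime among the given cells."""
        vals = [t[y][x] for y, x in cells
                if 0 <= y < ny and 0 <= x < nx and frozen[y][x]]
        return min(vals, default=INF)

    while heap:
        _, (i, j) = heapq.heappop(heap)
        if frozen[i][j]:
            continue                        # stale heap entry
        frozen[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or frozen[a][b]:
                continue
            tx = upwind([(a, b - 1), (a, b + 1)])   # horizontal upwind value
            ty = upwind([(a - 1, b), (a + 1, b)])   # vertical upwind value
            f = h / velocity[a][b]                  # slowness times spacing
            lo, hi = min(tx, ty), max(tx, ty)
            if hi - lo >= f:                        # one-sided update
                new = lo + f
            else:                                   # two-sided quadratic solve
                new = 0.5 * (lo + hi + (2 * f * f - (hi - lo) ** 2) ** 0.5)
            if new < t[a][b]:
                t[a][b] = new
                heapq.heappush(heap, (new, (a, b)))
    return t

v = [[1.0] * 5 for _ in range(5)]
print(round(fmm(v, (0, 0))[4][4], 3))   # near, but above, the true 5.657
```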
34

Razavi-Termeh, Seyed Vahid, Abolghasem Sadeghi-Niaraki, Farbod Farhangi, and Soo-Mi Choi. "COVID-19 Risk Mapping with Considering Socio-Economic Criteria Using Machine Learning Algorithms." International Journal of Environmental Research and Public Health 18, no. 18 (2021): 9657. http://dx.doi.org/10.3390/ijerph18189657.

Abstract:
The reduction of population concentration in some urban land uses is one way to prevent and reduce the spread of COVID-19 disease. Therefore, the objective of this study is to prepare the risk mapping of COVID-19 in Tehran, Iran, using machine learning algorithms according to socio-economic criteria of land use. Initially, a spatial database was created using 2282 locations of patients with COVID-19 from 2 February 2020 to 21 March 2020 and eight socio-economic land uses affecting the disease—public transport stations, supermarkets, banks, automated teller machines (ATMs), bakeries, pharmacies, fuel stations, and hospitals. The modeling was performed using three machine learning algorithms that included random forest (RF), adaptive neuro-fuzzy inference system (ANFIS), and logistic regression (LR). Feature selection was performed using the OneR method, and the correlation between land uses was obtained using the Pearson coefficient. We deployed 70% and 30% of COVID-19 patient locations for modeling and validation, respectively. The results of the receiver operating characteristic (ROC) curve and the area under the curve (AUC) showed that the RF algorithm, which had a value of 0.803, had the highest modeling accuracy, which was followed by the ANFIS algorithm with a value of 0.758 and the LR algorithm with a value of 0.747. The results showed that the central and the eastern regions of Tehran are more at risk. Public transportation stations and pharmacies were the most correlated with the location of COVID-19 patients in Tehran, according to the results of the OneR technique, RF, and LR algorithms. The results of the Pearson correlation showed that pharmacies and banks are the most incompatible in distribution, and the density of these land uses in Tehran has caused the prevalence of COVID-19.
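A miniature of the evaluation protocol described (a 70/30 split, a random forest, and an ROC AUC score) looks as follows in scikit-learn; the synthetic features merely stand in for the socio-economic land-use criteria:

```python
# A 70/30 split and ROC AUC for a random forest on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))        # 8 proxy "land-use" criteria (invented)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```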
35

Ariesta, Agus Eka, G. K. Gandhiadi, Ni Ketut Tari Tastrawati, and I. Putu Eka Nila Kencana. "Pengklasifikasian Debitur dengan Menggunakan Algoritma Graham Scan dalam Pengaplikasian Convex Hull." E-Jurnal Matematika 2, no. 4 (2013): 46. http://dx.doi.org/10.24843/mtk.2013.v02.i04.p058.

Abstract:
Computational geometry is the mathematical science of computation that uses algorithm analysis to solve problems of geometry. Problems of computational geometry include polygon triangulation, convex hulls, Voronoi diagrams, and motion planning. The convex hull is the set of points that form a convex polygon covering the entire set of points. Algorithms for determining the convex hull include Graham Scan, Jarvis March, and Divide and Conquer. In the two-dimensional case, the Graham Scan algorithm is highly efficient in its time complexity. This article discusses the search for the convex hull of bank debtor data, with part of the data used to examine the classification accuracy of the convex hull formed. The coordinates of all the data are found using principal component analysis. After the data are analyzed, we obtain a classification accuracy of 74%.
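For reference, a compact Python sketch of the Graham Scan idea follows, in Andrew's monotone-chain form rather than the classical angular sort; the debtor data itself is not reproduced:

```python
# Monotone-chain convex hull: sort once, then scan, popping clockwise turns.
def cross(o, a, b):
    """Cross product of OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Convex hull in O(n log n); pops points that make a clockwise turn."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) > 1 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) > 1 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

print(graham_scan([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
# -> [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]; interior (1, 1) is dropped
```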
36

McCoury, John B., Randolph V. Fugit, and Mary T. Bessesen. "2005. Successful Implementation of a Procalcitonin Algorithm Associated with Reduction in Antibiotic Days." Open Forum Infectious Diseases 6, Supplement_2 (2019): S672—S673. http://dx.doi.org/10.1093/ofid/ofz360.1685.

Abstract:
Background Randomized controlled trials of procalcitonin (PCT)-based algorithms for antibacterial therapy have been shown to reduce antimicrobial use and improve survival. Translation of PCT algorithms to clinical settings has often been unsuccessful. Methods We implemented a PCT algorithm, supported by focus groups prior to introduction of the PCT test in April 2016 and clinician training on the PCT algorithm for testing and antimicrobial management after test roll-out. The standard PCT algorithm period (SPAP) was defined as October 1, 2017 to March 31, 2018. The antimicrobial stewardship team (AST) initiated an AST-supported PCT algorithm (ASPA) in August 2018. The AST prospectively evaluated patients admitted to ICU for sepsis and ordered PCT per algorithm if the primary medical team had not ordered them. The ASPA period was defined as October 1, 2018–March 31, 2019. The AST conducted concurrent review and feedback for all antibiotic orders during both periods, using PCT result when available. We compared patient characteristics and outcomes between the two periods. The primary outcome was adherence to the PCT algorithm, with subcomponents of appropriate PCT orders and antimicrobial discontinuation. Secondary outcomes were total antibiotic days, excess antibiotic days avoided, ICU and hospital length of stay (LOS), 30-day readmission and mortality. Continuous variables were analyzed with Student t-test. Categorical variables were analyzed with chi-square or Mann–Whitney test, as appropriate. Results There were 35 cases in the SPAP cohort and 57 cases in the ASPA cohort. There were no differences in demographics or infection site (Table 1). Baseline PCT was ordered in 57% of the SPAP cohort and 90% of the ASPA cohort (P = 0.0006) (Table 2). Follow-up PCT was performed in 23% of SPAP and 76% of ASPA (P < 0.0001). Antibiotics were discontinued per algorithm in 2/35 (7%) in the SPAP cohort and 25/57 (44%) in the ASPA cohort (P < 0.0001). Total antibiotic days was 7 (IQR 4–10) in the SPAP cohort and 5 (IQR 2–7) in the ASPA cohort (P = 0.02). There was no significant difference in LOS, ICU LOS, 30-day readmission, or mortality (Table 4). Conclusion A PCT algorithm successfully implemented by an AST was associated with a significant decrease in total antibiotic days. There were no differences in mortality or LOS. Disclosures All authors: No reported disclosures.
37

Lohn, Jason D., Gregory S. Hornby, and Derek S. Linden. "Human-competitive evolved antennas." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 22, no. 3 (2008): 235–47. http://dx.doi.org/10.1017/s0890060408000164.

Abstract:
We present a case study showing a human-competitive design of an evolved antenna that was deployed on a NASA spacecraft in 2006. We were fortunate to develop our antennas in parallel with another group using traditional design methodologies. This allowed us to demonstrate that our techniques were human-competitive because our automatically designed antenna could be directly compared to a human-designed antenna. The antennas described below were evolved to meet a challenging set of mission requirements, most notably the combination of wide beamwidth for a circularly polarized wave and wide bandwidth. Two evolutionary algorithms were used in the development process: one used a genetic algorithm style representation that did not allow branching in the antenna arms; the second used a genetic programming style tree-structured representation that allowed branching in the antenna arms. The highest performance antennas from both algorithms were fabricated and tested, and both yielded very similar performance. Both antennas were comparable in performance to a hand-designed antenna produced by the antenna contractor for the mission, and so we consider them examples of human-competitive performance by evolutionary algorithms. Our design was approved for flight, and three copies of it were successfully flown on NASA's Space Technology 5 mission between March 22 and June 30, 2006. These evolved antennas represent the first evolved hardware in space and the first evolved antennas to be deployed.
APA, Harvard, Vancouver, ISO, and other styles
38

Alonso-Benito, Alfonso, Lara A. Arroyo, Manuel Arbelo, Pedro Hernández-Leal, and Alejandro González-Calvo. "Pixel and object-based classification approaches for mapping forest fuel types in Tenerife Island from ASTER data." International Journal of Wildland Fire 22, no. 3 (2013): 306. http://dx.doi.org/10.1071/wf11068.

Full text
Abstract:
Four classification algorithms for mapping forest fuel types from Terra-ASTER sensor images have been assessed and compared in a representative area of Tenerife Island (Canary Islands, Spain). A BEHAVE fuel-type map from 2002, together with field data also obtained in 2002 during the Third Spanish National Forest Inventory, was used as reference data. The BEHAVE fuel types of the reference dataset were first converted into the Fire Behaviour Fuel Types described by Scott and Burgan, taking into account the vegetation of the study area. Then, three pixel-based algorithms (Maximum Likelihood, Neural Network, and Support Vector Machine) and an Object-Based Image Analysis were applied to classify the Scott and Burgan fire behaviour fuel types from an ASTER image from 3 March 2003. The performance of the algorithms tested was assessed and compared in terms of quantity disagreement and allocation disagreement. Among the pixel-based classifications, the best results were obtained from the Support Vector Machine algorithm, which showed an overall accuracy of 83%; 14% of the disagreement was due to allocation and 3% to quantity. The Object-Based Image Analysis approach produced the most accurate maps, with an overall accuracy of 95%; 4% of the disagreement was due to allocation and 1% to quantity. The object-based classification thus achieved an overall accuracy 12% above the best results obtained for the pixel-based algorithms tested. The incorporation of context information into the object-based classification allowed better identification of fuel types with similar spectral behaviour.
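The two accuracy components used here follow the standard decomposition of total disagreement into quantity and allocation parts; a minimal sketch of that computation from a confusion matrix:

```python
import numpy as np

def quantity_allocation_disagreement(confusion):
    """Quantity and allocation disagreement from a confusion matrix
    (rows = mapped class, columns = reference class)."""
    p = confusion / confusion.sum()               # proportions of the study area
    total = 1.0 - np.trace(p)                     # total disagreement
    quantity = 0.5 * np.abs(p.sum(axis=1) - p.sum(axis=0)).sum()
    return quantity, total - quantity             # (quantity, allocation)

# Toy 3-class fuel-type matrix, not the paper's data.
cm = np.array([[50, 3, 2],
               [4, 40, 6],
               [1, 5, 39]], dtype=float)
q, a = quantity_allocation_disagreement(cm)
print(f"quantity = {q:.3f}, allocation = {a:.3f}")
```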
APA, Harvard, Vancouver, ISO, and other styles
39

Mugo, Robinson, and Sei-Ichi Saitoh. "Ensemble Modelling of Skipjack Tuna (Katsuwonus pelamis) Habitats in the Western North Pacific Using Satellite Remotely Sensed Data; a Comparative Analysis Using Machine-Learning Models." Remote Sensing 12, no. 16 (2020): 2591. http://dx.doi.org/10.3390/rs12162591.

Full text
Abstract:
To examine skipjack tuna's habitat utilization in the western North Pacific (WNP), we used an ensemble modelling approach, which applied a fisher-derived presence-only dataset and three satellite remote-sensing predictor variables. The skipjack tuna data were compiled from daily point fishing data into monthly composites and re-gridded to a quarter-degree resolution to match the environmental predictor variables, the sea surface temperature (SST), sea surface chlorophyll-a (SSC), and sea surface height anomalies (SSHA), which were also processed at quarter-degree spatial resolution. Using the sdm package operated in RStudio software, we constructed habitat models over a 9-month period, from March to November 2004, using 17 algorithms, with a 70:30 split of training and test data, with bootstrapping and 10 runs as parameter settings for our models. Model performance evaluation was conducted using the area under the curve (AUC) of the receiver operating characteristic (ROC), the point biserial correlation coefficient (COR), the true skill statistic (TSS), and Cohen's kappa (k) metrics. We analyzed the response curves for each predictor variable per algorithm, the variable importance information, and the ROC plots. Ensemble predictions of habitats were weighted with the TSS metric. Model performance varied across algorithms, with the Support Vector Machines (SVM), Boosted Regression Trees (BRT), Random Forests (RF), Multivariate Adaptive Regression Splines (MARS), Generalized Additive Models (GAM), Classification and Regression Trees (CART), Multi-Layer Perceptron (MLP), Recursive Partitioning and Regression Trees (RPART), and Maximum Entropy (MAXENT) showing consistently higher performance than the other algorithms, while the Flexible Discriminant Analysis (FDA), Mixture Discriminant Analysis (MDA), Bioclim (BIOC), Domain (DOM), Maxlike (MAXL), Mahalanobis Distance (MAHA), and Radial Basis Function (RBF) had lower performance. We found inter-algorithm variations in predictor variable responses. We conclude that the multi-algorithm modelling approach enabled us to assess the variability in algorithm performance, providing a data-driven basis for building the ensemble model. Given the inter-algorithm variations observed, the ensemble prediction maps indicated a better map of skipjack tuna habitat utilization than would have been achieved by a single algorithm.
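The TSS-weighted ensemble step can be sketched as follows; the maps and scores are toy values, and the study's actual workflow ran through the sdm package in R rather than Python:

```python
import numpy as np

def tss_weighted_ensemble(prob_maps, tss_scores):
    """Average habitat-suitability maps, weighting each algorithm by its
    true skill statistic (TSS = sensitivity + specificity - 1)."""
    w = np.clip(np.asarray(tss_scores, dtype=float), 0, None)  # drop no-skill models
    return np.tensordot(w, np.asarray(prob_maps), axes=1) / w.sum()

# Three toy suitability maps on a 2x2 grid (illustrative only).
maps = [np.array([[0.9, 0.2], [0.4, 0.7]]),
        np.array([[0.8, 0.3], [0.5, 0.6]]),
        np.array([[0.6, 0.1], [0.2, 0.9]])]
print(tss_weighted_ensemble(maps, tss_scores=[0.7, 0.5, 0.3]))
```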
APA, Harvard, Vancouver, ISO, and other styles
40

Al-Jaishi, Ahmed A., Louise M. Moist, Matthew J. Oliver, et al. "Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency." Journal of Vascular Access 19, no. 6 (2018): 561–68. http://dx.doi.org/10.1177/1129729818762008.

Full text
Abstract:
Background: We assessed the validity of physician billing codes and hospital admission International Classification of Diseases, 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. Methods: We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates of failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. Results: The sensitivity (95% confidence interval) of the best-performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and the specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best-performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and the specificity was 89% (87%, 90%). The algorithms capturing arteriovenous access placement and catheter insertion had positive predictive values greater than 90%, while the arteriovenous access surgical revisions algorithm had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in the administrative databases. Conclusion: Administrative data algorithms have fair to good operating characteristics for identifying vascular access placement and arteriovenous access secondary patency. The low positive predictive value of the surgical revisions algorithm suggests that administrative data should only be used to rule out the occurrence of such events.
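The validation arithmetic behind these estimates is a straightforward 2×2 comparison of algorithm flags against the reference standard; a minimal sketch (toy data, not VASPRO):

```python
def validity_metrics(algorithm_flags, reference_flags):
    """Sensitivity, specificity, and PPV of a coding algorithm against a
    reference standard such as a prospective registry."""
    pairs = list(zip(algorithm_flags, reference_flags))
    tp = sum(1 for a, r in pairs if a and r)
    fp = sum(1 for a, r in pairs if a and not r)
    fn = sum(1 for a, r in pairs if not a and r)
    tn = sum(1 for a, r in pairs if not a and not r)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Toy example: flags for 8 patients.
print(validity_metrics(algorithm_flags=[1, 1, 0, 1, 0, 0, 1, 0],
                       reference_flags=[1, 1, 0, 0, 0, 1, 1, 0]))
```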
APA, Harvard, Vancouver, ISO, and other styles
41

Yadav, V. P., R. Prasad, R. Bala, A. K. Vishwakarma, S. A. Yadav, and S. K. Singh. "A COMPARISON OF MACHINE-LEARNING REGRESSION ALGORITHMS FOR THE ESTIMATION OF LAI USING LANDSAT - 8 SATELLITE DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 1, 2019): 679–83. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-679-2019.

Full text
Abstract:
The leaf area index (LAI) is a key crop variable that plays an important role in agriculture, ecology, and climate change studies, where global circulation models use it to compute energy and water fluxes. In recent research, machine-learning algorithms have provided accurate computational approaches for estimating crop biophysical parameters from remotely sensed data. Three machine-learning algorithms, random forest regression (RFR), support vector regression (SVR), and artificial neural network regression (ANNR), were used to estimate crop LAI in the present study. Landsat-8 satellite images from three dates between January 2017 and March 2017, capturing different crop growth conditions, were used for the Varanasi district, India. The sampling regions were fully covered by major Rabi-season crops such as wheat, barley, and mustard. Of the pooled data, 60% of the samples were used for training the algorithms and the remaining 40% for testing and validating the machine-learning regression algorithms. The highest sensitivity of the normalized difference vegetation index (NDVI) to LAI was found using the RFR algorithm (R² = 0.884, RMSE = 0.404), compared with SVR (R² = 0.847, RMSE = 0.478) and ANNR (R² = 0.829, RMSE = 0.404). The RFR algorithm can therefore be used for accurate estimation of crop LAI from satellite data.
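A minimal scikit-learn sketch of the same three-way comparison on synthetic NDVI–LAI pairs (the toy relationship and hyperparameters are assumptions, not the study's data or settings):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.2, 0.9, size=(200, 1))                 # synthetic NDVI samples
lai = 6.0 * ndvi[:, 0] ** 1.5 + rng.normal(0, 0.3, 200)     # toy NDVI-LAI relationship

X_train, X_test, y_train, y_test = train_test_split(
    ndvi, lai, train_size=0.6, random_state=0)              # 60/40 split as in the study

models = {"RFR": RandomForestRegressor(random_state=0),
          "SVR": SVR(),
          "ANNR": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(y_test, pred):.3f}, RMSE = {rmse:.3f}")
```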
APA, Harvard, Vancouver, ISO, and other styles
42

Sofiev, M., J. Vira, R. Kouznetsov, M. Prank, J. Soares, and E. Genikhovich. "Construction of the SILAM Eulerian atmospheric dispersion model based on the advection algorithm of Michael Galperin." Geoscientific Model Development 8, no. 11 (2015): 3497–522. http://dx.doi.org/10.5194/gmd-8-3497-2015.

Full text
Abstract:
The paper presents the transport module of the System for Integrated modeLling of Atmospheric coMposition SILAM v.5 based on the advection algorithm of Michael Galperin. This advection routine, so far only weakly represented in the international literature, is positively defined, stable at any Courant number, and computationally efficient. We present the rigorous description of its original version, along with several updates that improve its monotonicity and shape preservation, allowing for applications to long-living species in conditions of complex atmospheric flows. The scheme is connected with other parts of the model in a way that preserves the sub-grid mass distribution information that is a cornerstone of the advection algorithm. The other parts include the previously developed vertical diffusion algorithm combined with dry deposition, a meteorological pre-processor, and chemical transformation modules. The quality of the advection routine is evaluated using a large set of tests. The original approach has been previously compared with several classic algorithms widely used in operational dispersion models. The basic tests were repeated for the updated scheme and extended with real-wind simulations and demanding global 2-D tests recently suggested in the literature, which allowed us to position the scheme with regard to sophisticated state-of-the-art approaches. The advection scheme's performance was fully comparable with other algorithms, at a modest computational cost. This work was the last project of Dr. Sci. Michael Galperin, who passed away on 18 March 2008.
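For context on the Courant-number constraint mentioned above, the sketch below implements a plain first-order, flux-form (conservative) upwind step, the kind of classic scheme Galperin's algorithm is compared against; it is emphatically not the Galperin scheme itself, which carries sub-grid mass distribution information and remains stable at any Courant number.

```python
import numpy as np

def upwind_advect(rho, u, dx, dt):
    """One conservative first-order upwind step for d(rho)/dt + u*d(rho)/dx = 0
    on a periodic 1-D grid with constant u > 0. Stable only for Courant <= 1."""
    courant = u * dt / dx
    assert 0.0 <= courant <= 1.0, "first-order upwind requires Courant <= 1"
    flux = u * rho                              # mass flux through cell faces
    return rho - (dt / dx) * (flux - np.roll(flux, 1))

rho = np.zeros(100)
rho[45:55] = 1.0                                # square pulse of tracer mass
for _ in range(50):
    rho = upwind_advect(rho, u=1.0, dx=1.0, dt=0.5)
print(f"total mass after 50 steps: {rho.sum():.6f}")   # conserved at 10.0
```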
APA, Harvard, Vancouver, ISO, and other styles
43

Swain, Richard S., Lockwood G. Taylor, Elisa R. Braver, Wei Liu, Simone P. Pinheiro, and Andrew D. Mosholder. "A systematic review of validated suicide outcome classification in observational studies." International Journal of Epidemiology 48, no. 5 (2019): 1636–49. http://dx.doi.org/10.1093/ije/dyz038.

Full text
Abstract:
Background: Suicidal outcomes, including ideation, attempt, and completed suicide, are an important drug safety issue, though few epidemiological studies address the accuracy of suicidal outcome ascertainment. Our primary objective was to evaluate validated methods for suicidal outcome classification in electronic health care database studies. Methods: We performed a systematic review of PubMed and EMBASE to identify studies that validated methods for suicidal outcome classification published from 1 January 1990 to 15 March 2016. Abstracts and full texts were screened by two reviewers using prespecified criteria. Sensitivity, specificity, and predictive value for suicidal outcomes were extracted by two reviewers. Methods followed PRISMA-P guidelines (PROSPERO protocol 2016: CRD42016042794). Results: We identified 2202 citations, of which 34 validated the accuracy of measuring suicidal outcomes using International Classification of Diseases (ICD) codes or algorithms, chart review, or vital records. ICD E-codes (E950-9) for suicide attempt had 2–19% sensitivity and 83–100% positive predictive value (PPV). ICD algorithms that included events with 'uncertain' intent had 4–70% PPV. The three best-performing algorithms had 74–92% PPV, with improved sensitivity compared with E-codes. Read code algorithms had 14–68% sensitivity and 0–56% PPV. Studies estimated 19–80% sensitivity for chart review, and 41–97% sensitivity and 100% PPV for vital records. Conclusions: Pharmacoepidemiological studies measuring suicidal outcomes often use methodologies with poor sensitivity or predictive value or both, which may result in underestimation of associations between drugs and suicidal behaviour. Studies should validate outcomes or use a previously validated algorithm with high PPV and acceptable sensitivity in an appropriate population and data source.
APA, Harvard, Vancouver, ISO, and other styles
44

Linder, Michael, Alfred Eder, Ulf Schlichtmann, and Klaus Oberlander. "An Analysis of Industrial SRAM Test Results—A Comprehensive Study on Effectiveness and Classification of March Test Algorithms." IEEE Design & Test 31, no. 3 (2014): 42–53. http://dx.doi.org/10.1109/mdat.2013.2279752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

O.S., Nisha, and Sivasankar K. "Architecture for an efficient MBIST using modified March-y algorithms to achieve optimized communication delay and computational speed." International Journal of Pervasive Computing and Communications 17, no. 1 (2021): 135–47. http://dx.doi.org/10.1108/ijpcc-05-2020-0032.

Full text
Abstract:
Purpose: In this work, an efficient architecture for memory built-in self-test (MBIST) is proposed that incorporates a modified March Y algorithm using a concurrent technique and a modified linear feedback shift register (LFSR)-based address generator. Design/methodology/approach: Built-in self-test (BIST) is emerging as an essential ingredient of the system on chip. In today's high-speed, highly sophisticated very large-scale integrated circuits, testing embedded memories is a tedious and challenging job, since area overhead, testing time, and test cost all play an important role. Findings: With the adapted architecture, switching activity is considerably reduced. Because power consumption is directly proportional to switching activity, scaling down the switching of the address generator leads to a reduction in the power consumption of the MBIST. Originality/value: To improve the yield and fault tolerance of on-chip memories without degrading performance, self-repair mechanisms can be implemented on chip.
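As a behavioural reference for the underlying test, the sketch below runs the commonly cited form of March Y, {⇕(w0); ⇑(r0,w1,r1); ⇓(r1,w0,r0); ⇕(r0)}, over a toy memory model with an injected stuck-at-0 cell; the paper's architectural contributions (the concurrent technique and the modified LFSR address generator) are not reproduced here.

```python
class FaultyMemory:
    """Bit-per-address memory model with optional stuck-at-0 cells."""
    def __init__(self, size, stuck_at_zero=()):
        self.size, self.stuck, self.cells = size, set(stuck_at_zero), [0] * size
    def write(self, addr, bit):
        self.cells[addr] = 0 if addr in self.stuck else bit
    def read(self, addr):
        return self.cells[addr]

def march_y(mem):
    """Run March Y and return the addresses whose reads mismatched."""
    up, down = range(mem.size), range(mem.size - 1, -1, -1)
    faults = []
    def expect(addr, bit):
        if mem.read(addr) != bit:
            faults.append(addr)
    for a in up: mem.write(a, 0)                  # element 1: (w0), any order
    for a in up:                                  # element 2: up (r0, w1, r1)
        expect(a, 0); mem.write(a, 1); expect(a, 1)
    for a in down:                                # element 3: down (r1, w0, r0)
        expect(a, 1); mem.write(a, 0); expect(a, 0)
    for a in up: expect(a, 0)                     # element 4: (r0), any order
    return sorted(set(faults))

print(march_y(FaultyMemory(16, stuck_at_zero=[5])))   # -> [5]
```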
APA, Harvard, Vancouver, ISO, and other styles
46

Vinnitskaya, E. V., Yu G. Sandler, I. G. Bakulin, et al. "Important problems in the diagnosis and treatment of autoimmune hepatitis (based on the Russian consensus 2017)." Terapevticheskii arkhiv 90, no. 2 (2018): 12–18. http://dx.doi.org/10.26442/terarkh201890212-18.

Full text
Abstract:
This article analyzes publications devoted to the Russian Consensus on the Diagnosis and Treatment of Autoimmune Hepatitis (AIH), which was considered at the 43rd Annual Scientific Session of the CNIIG, 'From Traditions to Innovation' (March 4, 2017). The availability of clear algorithms and recommendations for the diagnosis and treatment of AIH significantly helps the physician in real clinical practice, but does not exclude a personalized approach to the patient.
APA, Harvard, Vancouver, ISO, and other styles
47

Knepper, B. C., H. Young, T. C. Jenkins, and C. S. Price. "Time-Saving Impact of an Algorithm to Identify Potential Surgical Site Infections." Infection Control & Hospital Epidemiology 34, no. 10 (2013): 1094–98. http://dx.doi.org/10.1086/673154.

Full text
Abstract:
Objective. To develop and validate a partially automated algorithm to identify surgical site infections (SSIs) using commonly available electronic data to reduce manual chart review. Design. Retrospective cohort study of patients undergoing specific surgical procedures over a 4-year period from 2007 through 2010 (algorithm development cohort) or over a 3-month period from January 2011 through March 2011 (algorithm validation cohort). Setting. A single academic safety-net hospital in a major metropolitan area. Patients. Patients undergoing at least 1 included surgical procedure during the study period. Methods. Procedures were identified in the National Healthcare Safety Network; SSIs were identified by manual chart review. Commonly available electronic data, including microbiologic, laboratory, and administrative data, were identified via a clinical data warehouse. Algorithms using combinations of these electronic variables were constructed and assessed for their ability to identify SSIs and reduce chart review. Results. The most efficient algorithm identified in the development cohort combined microbiologic data with postoperative procedure and diagnosis codes. This algorithm resulted in 100% sensitivity and 85% specificity. The time saved by the algorithm amounted to almost 600 person-hours of chart review. The algorithm demonstrated similar sensitivity on application to the validation cohort. Conclusions. A partially automated algorithm to identify potential SSIs was highly sensitive and dramatically reduced the amount of manual chart review required of infection control personnel during SSI surveillance.
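A hedged sketch of the screening idea (flag a case for manual review when a positive post-operative culture or an SSI-related procedure/diagnosis code appears): the code list and the 30-day window below are illustrative assumptions, not the study's validated rule.

```python
def flag_for_ssi_review(patient):
    """Return True if a surgical patient should go to manual SSI chart review."""
    positive_culture = any(c["positive"] and c["days_post_op"] <= 30
                           for c in patient["cultures"])
    ssi_codes = {"998.59", "T81.4"}               # illustrative code list only
    suspicious_code = any(code in ssi_codes for code in patient["post_op_codes"])
    return positive_culture or suspicious_code

patient = {"cultures": [{"positive": True, "days_post_op": 12}],
           "post_op_codes": ["E11.9"]}
print(flag_for_ssi_review(patient))               # True -> send to chart review
```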
APA, Harvard, Vancouver, ISO, and other styles
48

Dymond, Chelsea, Kimberly Ann Hill, Chelsea McCullough, Julia Dixon, and Emilie Calvello-Hynes. "Use of Clinical Algorithms for Evaluation and Management of Pediatric and Adult Sepsis Patients in Low-Resource Clinical Environments." Prehospital and Disaster Medicine 34, s1 (2019): s175. http://dx.doi.org/10.1017/s1049023x19004011.

Full text
Abstract:
Introduction: Acute infection in post-disaster settings is associated with increased morbidity and mortality. Sepsis management in low-resource settings is controversial, with recent research suggesting that aggressive fluid resuscitation may cause greater harm than benefit. However, the vast majority of international sepsis guidelines still suggest large initial fluid boluses as part of sepsis algorithms. Aim: To create adult and pediatric sepsis algorithms to be applied in low-resource clinical settings, as part of a larger project to create clinical algorithms that standardize emergency case management for low-resource clinical environments. Methods: A literature search was performed through PubMed identifying and reviewing fluid resuscitation in adult and pediatric sepsis patients in high- and low-resource clinical environments. The pathways were created based on interpretation of the available evidence-based literature. Focus groups were conducted in Zambia in March 2018 for feedback from local practitioners regarding the feasibility of the pathways. The pathways were then modified, peer-reviewed by experts, and revised. Results: Final pediatric and adult sepsis clinical algorithms were created and posted to the free web-based application AgileMD™. They will be available via app access, an online platform, or printable pathways for use in the clinical environment. Discussion: The study is currently undergoing IRB approval, with a plan for implementation of multiple clinical algorithms at a referral hospital site in Zambia in January 2019. Site direction at Ndola Hospital will be conducted under the leadership of an Emergency Medicine-trained physician, who will assist in implementation of the algorithms and collection of data. An initial data review will be conducted in May 2019, with incremental site visits by the organizing researchers throughout the implementation and data collection period. Statistical analysis will examine sepsis-associated process and outcome indicators pre- and post-intervention to further delineate sepsis management in low-resource clinical environments.
APA, Harvard, Vancouver, ISO, and other styles
49

Bressan, L., and S. Tinti. "Detecting the 11 March 2011 Tohoku tsunami arrival on sea-level records in the Pacific Ocean: application and performance of the Tsunami Early Detection Algorithm (TEDA)." Natural Hazards and Earth System Sciences 12, no. 5 (2012): 1583–606. http://dx.doi.org/10.5194/nhess-12-1583-2012.

Full text
Abstract:
Real-time detection of a tsunami on instrumental sea-level records is an important task for a Tsunami Warning System (TWS), and under alert conditions for an ongoing tsunami it is often performed by visual inspection in operational warning centres. In this paper we stress the importance of automatic detection algorithms and apply the TEDA (Tsunami Early Detection Algorithm) to identify tsunami arrivals of the 2011 Tohoku tsunami in a real-time virtual exercise. TEDA is designed to work at the station level, that is, on sea-level data of a single station, and was calibrated on data from the Adak Island, Alaska, USA, tide-gauge station. Using the parameter configuration devised for the Adak station, TEDA was applied to 123 coastal sea-level records from the coasts of the Pacific Ocean, which enabled us to evaluate the efficiency and sensitivity of the algorithm over a wide range of background conditions and signal-to-noise ratios. The result is that TEDA is able to quickly detect the majority of tsunami signals and therefore proves to have the potential to be a valid tool in operational TWS practice.
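A generic illustration of station-level onset detection (not TEDA's published functions or calibrated coefficients): flag a sample when the latest sea-level change exceeds a multiple of the background variability estimated over a preceding window.

```python
import numpy as np

def detect_onset(levels, window=60, k=4.0):
    """Indices in `levels` where the one-step change exceeds k times the
    standard deviation of changes over the preceding background window."""
    diffs = np.diff(levels)
    alerts = []
    for i in range(window, len(diffs)):
        background = diffs[i - window:i].std()
        if background > 0 and abs(diffs[i]) > k * background:
            alerts.append(i + 1)
    return alerts

rng = np.random.default_rng(1)
t = np.arange(600.0)
sea_level = 0.5 * np.sin(2 * np.pi * t / 300) + 0.01 * rng.normal(size=600)
sea_level[400:] += 0.4                            # sudden step mimicking an arrival
print(detect_onset(sea_level)[:3])                # first alert near sample 400
```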
APA, Harvard, Vancouver, ISO, and other styles
50

McIlquham, Taylor, Anna Sick-Samuels, Carrie Billman, et al. "Use of a Multidisciplinary Incident Command System in Response to Measles Outbreak in Maryland." Infection Control & Hospital Epidemiology 41, S1 (2020): s502—s504. http://dx.doi.org/10.1017/ice.2020.1184.

Full text
Abstract:
Background: Measles is a highly contagious virus that reemerged in 2019 with the highest number of reported cases in the United States since 1992. Beginning in March 2019, The Johns Hopkins Hospital (JHH) responded to an influx of patients with concern for measles as a result of outbreaks in Maryland and the surrounding states. We report the JHH Department of Infection Control and Hospital Epidemiology (HEIC) response to this measles outbreak using a multidisciplinary measles incident command system (ICS). Methods: The JHH HEIC and the Johns Hopkins Office of Emergency Management established the HEIC Clinical Incident Command Center and coordinated a multipronged response to the measles outbreak with partners from occupational health services, microbiology, the adult and pediatric emergency departments, marketing and communication, and local and state public health departments. The multidisciplinary structure rapidly developed, approved, and disseminated tools to improve the ability of frontline providers to quickly identify, isolate, and determine testing needs for patients suspected to have measles infection and to reduce the risk of secondary transmission. The tools included a triage algorithm, visitor signage, staff and patient vaccination guidance and clinics, and standard operating procedures for measles evaluation and testing. The triage algorithms were developed for phone or in-person use and assessed measles exposure history, immune status, and symptoms, and provided guidance regarding isolation and the need for testing. The algorithms were distributed to frontline providers in clinics and emergency rooms across the Johns Hopkins Health System. The incident command team also distributed resources to community providers to reduce patient influx to JHH and staged an outdoor measles evaluation and testing site in the event of a case influx that would exceed emergency department resources. Results: From March 2019 through June 2019, 37 patients presented with symptoms or concern for measles. Using the ICS tools and algorithms, JHH rapidly identified, isolated, and tested 11 patients with high suspicion for measles, 4 of whom were confirmed positive. Of the other 26 patients not tested, none developed measles infection. Exposures were minimized, and there were no secondary measles transmissions among patients. Conclusions: Using the ICS and developing tools and resources to prevent measles transmission, including a patient triage algorithm, the JHH team successfully identified, isolated, and evaluated patients with high suspicion for measles while minimizing exposures and secondary transmission. These strategies may be useful to other institutions and locales in the event of an emerging or reemerging infectious disease outbreak. Funding: None. Disclosures: Aaron Milstone reports consulting for Becton Dickinson.
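The triage flow can be sketched from the three inputs the abstract names (exposure history, immune status, symptoms); the dispositions below are illustrative assumptions, not JHH's approved algorithm.

```python
def measles_triage(exposed, immune, has_symptoms):
    """Toy phone/in-person triage: map the three screening inputs to guidance."""
    if has_symptoms:
        return "airborne isolation; evaluate and test for measles"
    if exposed and not immune:
        return "quarantine guidance; assess for post-exposure prophylaxis"
    if exposed and immune:
        return "symptom self-monitoring instructions"
    return "routine care; offer vaccination if non-immune"

print(measles_triage(exposed=True, immune=False, has_symptoms=False))
```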
APA, Harvard, Vancouver, ISO, and other styles