Academic literature on the topic 'Probabilistic scoring'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Probabilistic scoring.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Probabilistic scoring"

1. Machete, Reason L. "Contrasting probabilistic scoring rules." Journal of Statistical Planning and Inference 143, no. 10 (2013): 1781–90. http://dx.doi.org/10.1016/j.jspi.2013.05.012.
2. Predd, Joel B., Robert Seiringer, Elliott H. Lieb, Daniel N. Osherson, H. Vincent Poor, and Sanjeev R. Kulkarni. "Probabilistic Coherence and Proper Scoring Rules." IEEE Transactions on Information Theory 55, no. 10 (2009): 4786–92. http://dx.doi.org/10.1109/tit.2009.2027573.
3. Hughes, Gareth, and Fiona J. Burnett. "Evaluation of Probabilistic Disease Forecasts." Phytopathology 107, no. 10 (2017): 1136–43. http://dx.doi.org/10.1094/phyto-01-17-0023-fi.
Abstract:
The statistical evaluation of probabilistic disease forecasts often involves calculation of metrics defined conditionally on disease status, such as sensitivity and specificity. However, for the purpose of disease management decision making, metrics defined conditionally on the result of the forecast—predictive values—are also important, although less frequently reported. In this context, the application of scoring rules in the evaluation of probabilistic disease forecasts is discussed. An index of separation with application in the evaluation of probabilistic disease forecasts, described in the clinical literature, is also considered and its relation to scoring rules illustrated. Scoring rules provide a principled basis for the evaluation of probabilistic forecasts used in plant disease management. In particular, the decomposition of scoring rules into interpretable components is an advantageous feature of their application in the evaluation of disease forecasts.
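
The decomposition mentioned here can be made concrete with the classic Murphy decomposition of the Brier score into reliability, resolution, and uncertainty. The Python sketch below is illustrative only (the forecasts and outcomes are invented); it groups forecasts by their distinct values, for which the decomposition is exact:

    import numpy as np

    # invented probability forecasts and binary outcomes (1 = disease)
    f = np.array([0.1, 0.1, 0.5, 0.5, 0.5, 0.9, 0.9, 0.9, 0.9, 0.1])
    y = np.array([0,   0,   1,   0,   1,   1,   1,   0,   1,   0  ])

    bs = np.mean((f - y) ** 2)        # Brier score
    base = y.mean()                   # climatological base rate
    unc = base * (1 - base)           # uncertainty component

    rel = res = 0.0
    for v in np.unique(f):            # group by distinct forecast value
        m = f == v
        ok = y[m].mean()              # observed frequency in the group
        rel += m.sum() * (v - ok) ** 2     # reliability (miscalibration)
        res += m.sum() * (ok - base) ** 2  # resolution
    rel /= len(f)
    res /= len(f)

    print(bs, rel - res + unc)        # the two numbers agree exactly

Smaller reliability and larger resolution are better; the uncertainty term depends only on the event's base rate.
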
4. Mitchell, K., and C. A. T. Ferro. "Proper scoring rules for interval probabilistic forecasts." Quarterly Journal of the Royal Meteorological Society 143, no. 704 (2017): 1597–607. http://dx.doi.org/10.1002/qj.3029.
5. Parry, Matthew. "Linear scoring rules for probabilistic binary classification." Electronic Journal of Statistics 10, no. 1 (2016): 1596–607. http://dx.doi.org/10.1214/16-ejs1150.
6. Bröcker, Jochen, and Leonard A. Smith. "Scoring Probabilistic Forecasts: The Importance of Being Proper." Weather and Forecasting 22, no. 2 (2007): 382–88. http://dx.doi.org/10.1175/waf966.1.
Abstract:
Questions remain regarding how the skill of operational probabilistic forecasts is most usefully evaluated or compared, even though probability forecasts have been a long-standing aim in meteorological forecasting. This paper explains the importance of employing proper scores when selecting between the various measures of forecast skill. It is demonstrated that only proper scores provide internally consistent evaluations of probability forecasts, justifying the focus on proper scores independent of any attempt to influence the behavior of a forecaster. Another property of scores (i.e., locality) is discussed. Several scores are examined in this light. There is, effectively, only one proper, local score for probability forecasts of a continuous variable. It is also noted that operational needs of weather forecasts suggest that the current concept of a score may be too narrow; a possible generalization is motivated and discussed in the context of propriety and locality.
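
A hedged numerical illustration of propriety (invented numbers, not code from the paper): among many candidate reports, the expected ignorance (negative logarithmic) score is minimized by reporting the true distribution, and the score is local because the realized outcome i contributes only -log q[i].

    import numpy as np

    p_true = np.array([0.2, 0.5, 0.3])   # distribution nature draws from

    def expected_ignorance(q):
        # negatively oriented log score: smaller is better
        return -(p_true * np.log(q)).sum()

    rng = np.random.default_rng(0)
    candidates = [p_true] + [rng.dirichlet(np.ones(3)) for _ in range(50_000)]
    print(min(candidates, key=expected_ignorance))   # -> [0.2 0.5 0.3]
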
7. Vecer, Jan. "Dynamic Scoring: Probabilistic Model Selection Based on Utility Maximization." Entropy 21, no. 1 (2019): 36. http://dx.doi.org/10.3390/e21010036.
Abstract:
We propose a novel approach to model selection for probability estimates that may be applied in a time-evolving setting. Specifically, we show that any discrepancy between different probability estimates opens the possibility of comparing them by trading on a hypothetical betting market that trades probabilities. We describe the mechanism of such a market, where agents maximize a utility function that determines the optimal trading volume for given odds. This procedure produces supply and demand functions that determine the size of the bet as a function of the trading probability. These functions are available in closed form for logarithmic and exponential utility functions. Given two probability estimates and the corresponding supply and demand functions, the trade matching these estimates happens at the intersection of the supply and demand functions. We show that an agent using the correct probabilities realizes a profit in expectation when trading against any other set of probabilities. The expected profit realized by the correct view of the market probabilities can be used as a measure of information in terms of statistical divergence.
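
To make the closing claim tangible, here is a minimal sketch of the logarithmic-utility case only (the paper's general supply and demand mechanism is not reproduced): a Kelly bettor holding correct probabilities p and trading at market probabilities q has an expected log-wealth growth equal to the Kullback-Leibler divergence KL(p || q), which is positive whenever p and q differ.

    import numpy as np

    p = np.array([0.6, 0.4])   # correct probabilities
    q = np.array([0.5, 0.5])   # market (trading) probabilities

    # A log-utility agent spreads wealth as p over outcomes paying 1/q[i],
    # so the wealth multiplier on outcome i is p[i] / q[i].
    expected_growth = np.sum(p * np.log(p / q))   # = KL(p || q)
    print(expected_growth)                        # ~0.020 nats per bet
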
8. Choi, Hyungwon, Brett Larsen, Zhen-Yuan Lin, et al. "SAINT: probabilistic scoring of affinity purification–mass spectrometry data." Nature Methods 8, no. 1 (2010): 70–73. http://dx.doi.org/10.1038/nmeth.1541.
9. Diligenti, M., M. Gori, and M. Maggini. "A unified probabilistic framework for web page scoring systems." IEEE Transactions on Knowledge and Data Engineering 16, no. 1 (2004): 4–16. http://dx.doi.org/10.1109/tkde.2004.1264818.
10. Murvai, J., K. Vlahovicek, and S. Pongor. "A simple probabilistic scoring method for protein domain identification." Bioinformatics 16, no. 12 (2000): 1155–56. http://dx.doi.org/10.1093/bioinformatics/16.12.1155.

Dissertations / Theses on the topic "Probabilistic scoring"

1. Namanya, Anitta P. "A Heuristic Featured Based Quantification Framework for Efficient Malware Detection. Measuring the Malicious intent of a file using anomaly probabilistic scoring and evidence combinational theory with fuzzy hashing for malware detection in Portable Executable files." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/15863.
Abstract:
Malware is still one of the most prominent vectors through which computer networks and systems are compromised. A compromised computer system or network provides data and/or processing resources to the world of cybercrime. With cybercrime projected to cost the world $6 trillion by 2021, malware is expected to remain a growing challenge. Statistics on malware growth over the last decade support this theory: malware numbers have increased almost exponentially over the period. Recent reports on the complexity of malware show that the fight against malware, as a means of building a more resilient cyberspace, is an evolving challenge. Compounding the problem is the lack of cyber security expertise to handle the expected rise in incidents. This thesis proposes advancing the automation of static malware analysis and detection to improve the confidence with which a standard computer user can judge a file’s malicious status. The work therefore introduces a framework that relies on two novel approaches to score the malicious intent of a file. The first approach attaches a probabilistic score to heuristic anomalies to calculate an overall file maliciousness score, while the second uses fuzzy hashes and evidence combination theory for more efficient malware detection. The approaches’ resultant quantifiable scores measure the malicious intent of the file. The designed schemes were validated using a dataset of “clean” and “malicious” files. The results obtained show that the framework achieves true-positive/false-positive detection-rate “trade-offs” for efficient malware detection.
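
The thesis's exact scoring scheme is not reproduced here, but one simple way to fuse probabilistic scores from independent heuristic anomaly checks into an overall file score is a noisy-OR combination; the probabilities below are invented:

    def noisy_or(probs):
        # combine independent anomaly probabilities into one overall score
        miss = 1.0
        for p in probs:
            miss *= 1.0 - p       # chance that every check is a false alarm
        return 1.0 - miss

    # e.g. three heuristic checks on a PE file: header, imports, entropy
    print(noisy_or([0.30, 0.10, 0.55]))   # ~0.72
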
2. Krishnan, Sharenya. "Text-Based Information Retrieval Using Relevance Feedback." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-53603.
Abstract:
Europeana, a freely accessible digital library intended to make Europe's cultural and scientific heritage available to the public, was founded by the European Commission in 2008. The goal was to deliver semantically enriched digital content with multilingual access. Even though the content grew, Europeana soon faced the problem of retrieving information that exists in unstructured form. To complement the Europeana portal services, ASSETS (Advanced Search Service and Enhanced Technological Solutions) was introduced, with services that sought to improve the usability and accessibility of Europeana. My contribution is to study different text-based information retrieval models and their relevance feedback techniques, and to implement one simple model. The thesis gives a detailed overview of the information retrieval process along with the implementation of the chosen relevance feedback strategy, which generates automatic query expansion. Finally, the thesis concludes with an analysis of the relevance feedback results, a discussion of the implemented model, and an assessment of the model's future use, both as a continuation of my work and within ASSETS.
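
The abstract does not name the implemented model, so purely as an illustration, here is the classic Rocchio relevance-feedback update, which moves a vector-space query toward judged-relevant documents and away from non-relevant ones, yielding the expanded query for the next retrieval round:

    import numpy as np

    def rocchio(q, rel, nonrel, alpha=1.0, beta=0.75, gamma=0.15):
        # standard Rocchio update with its commonly used default weights
        q_new = alpha * q
        if len(rel):
            q_new += beta * np.mean(rel, axis=0)
        if len(nonrel):
            q_new -= gamma * np.mean(nonrel, axis=0)
        return np.maximum(q_new, 0.0)    # drop negative term weights

    q = np.array([1.0, 0.0, 0.5])        # tf-idf query vector (invented)
    rel = np.array([[0.9, 0.8, 0.0], [1.0, 0.6, 0.1]])
    nonrel = np.array([[0.0, 0.0, 1.0]])
    print(rocchio(q, rel, nonrel))
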
3. Souza, Anderson Luiz de. "Redes probabilísticas de K-dependência para problemas de classificação binária." Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/4560.
Abstract:
Classification consists in the discovery of prediction rules to assist planning and decision-making; it is an indispensable tool and a much-discussed subject in the literature. A special case of classification is credit risk rating, in which the interest lies in identifying good and bad paying customers through binary classification methods. In many application settings, such as finance, several techniques can be used: discriminant analysis, probit analysis, logistic regression and neural networks. However, the technique of probabilistic networks, also known as Bayesian networks, has shown itself to be a practical and convenient classification method, with successful applications in several areas. In this work we demonstrate the application of probabilistic networks to classification, specifically the technique named K-dependence Bayesian networks (KDB networks), and compare its performance with conventional techniques in the context of credit scoring and medical diagnosis. As results, we present applications of the technique to real and artificial datasets, with its performance aided by the bagging procedure.
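
A KDB network lets each feature depend on the class and on up to K other features, so setting K = 0 recovers the naive Bayes classifier. A self-contained sketch of that special case on invented binary credit data (not the datasets used in the dissertation):

    import numpy as np

    X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [1, 1, 0]])
    y = np.array([1, 1, 0, 0, 1])    # 1 = bad payer, 0 = good payer

    def fit_nb(X, y, alpha=1.0):
        # Bernoulli naive Bayes: the K = 0 special case of a KDB network
        priors, conds = {}, {}
        for c in np.unique(y):
            Xc = X[y == c]
            priors[c] = len(Xc) / len(X)
            conds[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        return priors, conds

    def predict(x, priors, conds):
        loglik = lambda c: np.log(priors[c]) + np.sum(
            x * np.log(conds[c]) + (1 - x) * np.log(1 - conds[c]))
        return max(priors, key=loglik)

    priors, conds = fit_nb(X, y)
    print(predict(np.array([1, 0, 0]), priors, conds))

Larger K adds feature-to-feature arcs, trading bias for variance.
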
4. Farooqi, Owais Ehtisham. "An Assessment and Modeling of Copper Plumbing Pipe Failures due to Pinhole Leaks." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33918.
Abstract:
Pinhole leaks in copper plumbing pipes are a major concern for homeowners. The problem is spread across the nation and remains a threat to plumbing systems of all ages. In the absence of a single accepted mechanistic theory, no preventive measure is available to date. Most of the present mechanistic theories are based on analysis of failed pipe samples, but an objective comparison with pipes that did not fail is seldom made. Variability in hydraulic and water quality parameters has made the problem complex and hard to quantify in terms of a plumbing system's susceptibility to pinhole leaks. The present work determines the spatial and temporal spread of pinhole leaks across the United States. Hotspot communities are identified based on repair histories and surveys. An assessment of variability in water quality is presented based on nationwide water quality data. A synthesis of causal factors is presented, and a scoring system for copper pitting is developed using goal programming. A probabilistic model is presented to evaluate the optimal replacement time for plumbing systems. Methodologies for mechanistic modeling based on corrosion thermodynamics and kinetics are presented.
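
The thesis's replacement model is not reproduced here; as a stand-in, the textbook age-replacement policy shows how an optimal replacement time arises: replace at age T for a planned cost, or on failure for a larger cost, and choose T to minimize the long-run cost rate. All numbers (Weibull lifetime parameters, costs) are invented:

    import numpy as np

    k, lam = 2.5, 40.0         # assumed Weibull shape and scale, years
    cp, cf = 1.0, 10.0         # planned vs. failure replacement cost
    R = lambda t: np.exp(-(t / lam) ** k)     # survival function

    def cost_rate(T, n=4000):
        t = np.linspace(0.0, T, n)
        mean_cycle = np.sum(R(t)) * (t[1] - t[0])   # ~ E[min(lifetime, T)]
        return (cp * R(T) + cf * (1 - R(T))) / mean_cycle

    Ts = np.linspace(5, 80, 300)
    best_T = Ts[np.argmin([cost_rate(T) for T in Ts])]
    print(best_T)              # optimal replacement age in years
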
5. Casero Cañas, Ramón. "Left ventricle functional analysis in 2D+t contrast echocardiography within an atlas-based deformable template model framework." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:b17b3670-551d-4549-8f10-d977295c1857.
Abstract:
This biomedical engineering thesis explores the opportunities and challenges of 2D+t contrast echocardiography for left ventricle functional analysis, both clinically and within a computer vision atlas-based deformable template model framework. A database was created for the experiments in this thesis, with 21 studies of contrast Dobutamine Stress Echo, in all 4 principal planes. The database includes clinical variables, human expert hand-traced myocardial contours and visual scoring. First the problem is studied from a clinical perspective. Quantification of endocardial global and local function using standard measures shows expected values and agreement with human expert visual scoring, but the results are less reliable for myocardial thickening. Next, the problem of segmenting the endocardium with a computer is posed in a standard landmark and atlas-based deformable template model framework. The underlying assumption is that these models can emulate human experts in terms of integrating previous knowledge about the anatomy and physiology with three sources of information from the image: texture, geometry and kinetics. Probabilistic atlases of contrast echocardiography are computed, while noting from histograms at selected anatomical locations that modelling texture with just mean intensity values may be too naive. Intensity analysis together with the clinical results above suggests that the lack of external boundary definition may preclude this imaging technique from appropriately measuring myocardial thickening, while endocardial boundary definition is appropriate for evaluation of wall motion. Geometry is presented in a Principal Component Analysis (PCA) context, highlighting issues about Gaussianity, the correlation and covariance matrices with respect to physiology, and analysing different measures of dimensionality. A popular extension of deformable models, Active Appearance Models (AAMs), is then studied in depth. Contrary to common wisdom, it is contended that using a PCA texture space instead of a fixed atlas is detrimental to segmentation, and that PCA models are not convenient for texture modelling. To integrate kinetics, a novel spatio-temporal model of cardiac contours is proposed. The new explicit model does not require frame interpolation, and it is compared to previous implicit models in terms of approximation error when the shape vector changes from frame to frame or remains constant throughout the cardiac cycle. Finally, the 2D+t atlas-based deformable model segmentation problem is formulated and solved with a gradient descent approach. Experiments using the similarity transformation suggest that segmentation of the whole cardiac volume outperforms segmentation of individual frames. A relatively new approach, the inverse compositional algorithm, is shown to decrease running times of the classic Lucas-Kanade algorithm by a factor of 20 to 25, to values that are within real-time processing reach.
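
As a sketch of the PCA step only (random stand-in data, not the thesis database): stack each hand-traced contour's landmarks into a row vector, subtract the mean shape, and keep the leading modes of variation.

    import numpy as np

    rng = np.random.default_rng(1)
    shapes = rng.normal(size=(21, 2 * 30))   # 21 contours, 30 (x, y) points

    mean_shape = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)          # variance per mode
    explained = np.cumsum(var) / var.sum()
    n_modes = int(np.searchsorted(explained, 0.95)) + 1
    print(n_modes)                            # modes for 95% of variance

    # any contour is approximated as mean_shape + Vt[:n_modes].T @ b
    # for a low-dimensional shape vector b
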
6. Jose, Victor Richmond. "The Verification of Probabilistic Forecasts in Decision and Risk Analysis." Diss., 2009. http://hdl.handle.net/10161/1270.
Abstract:
Probability forecasts play an important role in many decision and risk analysis applications. Research and practice over the years have shown that the shift towards distributional forecasts provides a more accurate and appropriate means of capturing risk in models for these applications. This means that mathematical tools for analyzing the quality of these forecasts, whether they come from experts, models, or data, become important to the decision maker. In this regard, strictly proper scoring rules have been widely studied because of their ability to encourage assessors to provide truthful reports. This dissertation contributes to the scoring rule literature in two main areas of assessment: probability forecasts and quantile assessments.

In the area of probability assessment, the scoring rules typically studied in the literature, and commonly used in practice, evaluate probability assessments relative to a default uniform measure. In many applications, the uniform baseline used to represent some notion of ignorance is inappropriate. In this dissertation, we generalize the power and pseudospherical families of scoring rules, two large parametric families of commonly used scoring rules, by incorporating the notion of a non-uniform baseline distribution for both the discrete and continuous cases. With an appropriate normalization and choice of parameters, we show that these new families of scoring rules relate to various well-known divergence measures from information theory and to well-founded decision models when framed in an expected utility maximization context.

In applications where the probability space considered has an ordinal ranking between states, an important property often considered is sensitivity to distance. Scoring rules with this property give higher scores to assessments that allocate higher probability mass to events "closer" to the one that occurs, based on some notion of distance. In this setting, we provide an approach that allows us to generate new sensitive-to-distance strictly proper scoring rules from well-known strictly proper binary scoring rules. Through the use of weighted scoring rules, we also show that these new scores can incorporate a specified baseline distribution, in addition to being strictly proper and sensitive to distance.

In the inverse problem of quantile assessment, scoring rules have not yet been well studied and well developed. We examine the differences between scoring rules for probability and quantile assessments, and demonstrate why the tools that have been developed for probability assessments no longer encourage truthful reporting when used for quantile assessments. In addition, we shed light on new properties and characterizations for some of these rules that could guide decision makers trying to choose an appropriate scoring rule.
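
The dissertation's baseline-weighted generalizations are not reproduced here, but the standard power family it builds on is easy to state and check numerically: for beta > 1 the rule is strictly proper, beta = 2 gives an affine version of the quadratic (Brier) score, and beta -> 1 recovers the logarithmic score. A sketch with an assumed three-outcome truth:

    import numpy as np

    def power_score(q, i, beta):
        # positively oriented power-family score for report q, outcome i
        return beta / (beta - 1) * q[i] ** (beta - 1) - np.sum(q ** beta)

    p = np.array([0.1, 0.3, 0.6])     # assumed true distribution
    expected = lambda q, beta: sum(p[i] * power_score(q, i, beta)
                                   for i in range(3))

    rng = np.random.default_rng(0)
    candidates = [p] + [rng.dirichlet(np.ones(3)) for _ in range(10_000)]
    for beta in (1.5, 2.0, 3.0):
        best = max(candidates, key=lambda q: expected(q, beta))
        print(beta, np.round(best, 3))   # the truthful report wins each time
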

Book chapters on the topic "Probabilistic scoring"

1. Culpepper, Ryan, and Andrew Cobb. "Contextual Equivalence for Probabilistic Programs with Continuous Random Variables and Scoring." In Programming Languages and Systems. Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-662-54434-1_14.
2. Yiu, Kwok-Kwong, Man-Wai Mak, and Sun-Yuan Kung. "Kernel-Based Probabilistic Neural Networks with Integrated Scoring Normalization for Speaker Verification." In Advances in Multimedia Information Processing — PCM 2002. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_77.
3. Hu, Nan, and Haojie Cheng. "Survival Analysis and ROC Analysis in Analyzing Credit Risks." In Analyzing Risk through Probabilistic Modeling in Operations Research. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9458-3.ch016.
Abstract:
As the aim of large banks has shifted toward selecting the customers of highest benefit, it is important for banks to know not only if but also when a customer will default. Survival analysis has been used to estimate the risk over time of default or early payoff, two major risks for banks. The major benefit of this method is that it easily handles censoring and competing risks. The ROC curve, as a statistical tool, has been applied to evaluate credit-scoring systems. Traditional ROC analyses allow banks to evaluate whether a credit-scoring system can correctly classify customers based on their cross-sectional default status, but they fail when assessing a credit-scoring system at a series of future time points, especially when there is censoring or there are competing risks. Time-dependent ROC analysis was introduced by Hu and Zhou to evaluate credit-scoring systems in a time-varying fashion; it allows us to assess credit-scoring systems for predicting default by any time within the study period.
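
The chapter's time-dependent ROC handles censoring and competing risks, which the sketch below does not; it only shows the cross-sectional quantity being generalized, the empirical AUC: the probability that a randomly chosen defaulter receives a higher risk score than a randomly chosen non-defaulter (scores invented):

    import numpy as np

    def auc(pos, neg):
        # pairwise comparison of defaulter vs. non-defaulter scores;
        # ties count one half
        pos, neg = np.asarray(pos)[:, None], np.asarray(neg)[None, :]
        return (pos > neg).mean() + 0.5 * (pos == neg).mean()

    print(auc([0.9, 0.7, 0.6], [0.8, 0.4, 0.3, 0.2]))   # ~0.833
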
4. Jiang, Xia, Shyam Visweswaran, and Richard E. Neapolitan. "Scoring, Searching and Evaluating Bayesian Network Models of Gene-phenotype Association." In Probabilistic Graphical Models for Genetics, Genomics, and Postgenomics. Oxford University Press, 2014. http://dx.doi.org/10.1093/acprof:oso/9780198709022.003.0011.
5. Nelson, Kenric P. "Reduced Perplexity." In Advances in Info-Metrics. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190636685.003.0012.
Abstract:
This chapter introduces a simple, intuitive approach to the assessment of probabilistic inferences. The Shannon information metrics are translated to the probability domain. The translation shows that the negative logarithmic score and the geometric mean are equivalent measures of the accuracy of a probabilistic inference. The geometric mean of forecasted probabilities is thus a measure of forecast accuracy and represents the central tendency of the forecasts. The reciprocal of the geometric mean is referred to as the perplexity and defines the number of independent choices needed to resolve the uncertainty. The assessment method introduced in this chapter is intended to reduce the ‘qualitative’ perplexity relative to the potpourri of scoring rules currently used to evaluate machine learning and other probabilistic algorithms. Use of this assessment will provide insight into designing algorithms with reduced ‘quantitative’ perplexity and thus improved accuracy of probabilistic forecasts. The translation of information metrics to the probability domain incorporates the generalized entropy functions developed by Rényi and Tsallis. Both generalizations translate to the weighted generalized mean. The generalized mean of probabilistic forecasts forms a spectrum of performance metrics referred to as a Risk Profile. The arithmetic mean is used to measure decisiveness, while the −2/3 mean is used to measure robustness.
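
The claimed equivalence is a one-line identity: the geometric mean of the probabilities assigned to the outcomes that actually occurred equals exp of minus the average log score, and its reciprocal is the perplexity. A quick check with invented forecasts:

    import numpy as np

    p = np.array([0.7, 0.5, 0.9, 0.4])     # prob. assigned to what occurred

    neg_log = -np.mean(np.log(p))          # average negative log score
    geo_mean = np.exp(np.mean(np.log(p)))  # geometric mean of the forecasts
    perplexity = 1.0 / geo_mean            # effective number of choices

    assert np.isclose(geo_mean, np.exp(-neg_log))
    print(geo_mean, perplexity)
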
6. Raydugin, Yuri G. "High-level Overview of Project Risk Management (PRM)." In Modern Risk Quantification in Complex Projects. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198844334.003.0002.
Abstract:
Selection of adequate project risk management (PRM) methodologies should be based on a deep understanding of the PRM context of a project, to avoid a mismatch between PRM context and PRM method. Various realizations of bias can impede the selection. All PRM methodologies can be grouped as either conventional (they follow the traditional PRM process steps to identify, evaluate, address, monitor, etc.) or unconventional (they focus mostly on risk assessment). All PRM methodologies, whether deterministic (scoring), probabilistic (Monte Carlo), or parametric, have limitations. Deterministic (scoring) methods can help with the development of risk addressing, although they are useless for developing project contingencies. Monte Carlo methods can be used to develop project contingencies only if they take into account all relevant components of project risk exposure consistently. Parametric methods suffer from biased sampling (convenience and judgement sampling), which undermines their accuracy. Two emerging methodologies, system dynamics and artificial neural networks (ANN), can be considered unconventional.
7. Raydugin, Yuri G. "Overview of Conventional Risk Quantification Methods." In Modern Risk Quantification in Complex Projects. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198844334.003.0004.
Abstract:
Deterministic (scoring) and probabilistic (Monte Carlo) risk quantification methodologies are overviewed as they relate to the risk assessment steps of the PRM process. The role of various realizations of bias, including zealotry, in the selection of conventional methods is reviewed. It is shown that deterministic (scoring) methods are irreplaceable when developing risk-addressing actions (the first PRM goal), although they are notoriously inaccurate when developing project contingencies (the second PRM goal). These methods can handle risks impacting both quantifiable and unquantifiable project objectives and are effective in the selection of project alternatives. Monte Carlo methods are valuable for the development of schedule and cost contingencies only. Various Monte Carlo methodologies may have high systematic errors if they overlook the main components of project risk exposure: activity and line-item ranging methods miss risk events; cost risk analyses miss schedule-driven costs; and all Monte Carlo methods overlook risk interactions as attributes of complex project systems.
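
A minimal sketch of the line-item-ranging style of Monte Carlo contingency setting discussed above, with invented triangular cost items; note that it exhibits exactly the limitations stated, since it contains no discrete risk events, no schedule-driven costs, and no risk interactions:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    items = [(9, 10, 14), (18, 20, 30), (4, 5, 9)]   # (low, mode, high), $M
    total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in items)

    base = sum(mode for _, mode, _ in items)         # deterministic base
    p50, p80 = np.percentile(total, [50, 80])
    print(p80 - base)     # cost contingency at the P80 confidence level
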

Conference papers on the topic "Probabilistic scoring"

1. Veilleux, N. M., and M. Ostendorf. "Probabilistic parse scoring with prosodic information." In Proceedings of ICASSP '93. IEEE, 1993. http://dx.doi.org/10.1109/icassp.1993.319227.
2. Lee, Taesung, Zhongyuan Wang, Haixun Wang, and Seung-won Hwang. "Attribute extraction and scoring: A probabilistic approach." In 2013 29th IEEE International Conference on Data Engineering (ICDE 2013). IEEE, 2013. http://dx.doi.org/10.1109/icde.2013.6544825.
3. Veilleux, N. M., and M. Ostendorf. "Probabilistic parse scoring based on prosodic phrasing." In Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992. http://dx.doi.org/10.3115/1075527.1075630.
4. Harma, Aki, and Rim Helaoui. "Probabilistic scoring of validated insights for personal health services." In 2016 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2016. http://dx.doi.org/10.1109/ssci.2016.7849892.
5. Koksal, Aysun, Aydogan Ozdemir, and Joydeep Mitra. "System-Level RCAM Assessment of Power Transformers Using Fuzzy Multi-Attribute Scoring Procedure." In 2018 IEEE International Conference on Probabilistic Methods Applied to Power Systems (PMAPS). IEEE, 2018. http://dx.doi.org/10.1109/pmaps.2018.8440340.
6. Luk-Pat, Gerard T., Raghava V. Kondepudy, Robert Du, and Ray E. Morgan. "Probabilistic calibration of a simple resist model for simulation-based scoring of mask defects." In SPIE 31st International Symposium on Advanced Lithography, edited by Chas N. Archie. SPIE, 2006. http://dx.doi.org/10.1117/12.656427.
7. Kong, Sheng-yi, and Lin-shan Lee. "Improved Summarization of Chinese Spoken Documents by Probabilistic Latent Semantic Analysis (PLSA) with Further Analysis and Integrated Scoring." In 2006 IEEE Spoken Language Technology Workshop. IEEE, 2006. http://dx.doi.org/10.1109/slt.2006.326808.
8. Muhlbauer, W. Kent, Derek Johnson, Elaine Hendren, and Steve Gosse. "A New Generation of Pipeline Risk Algorithms." In 2006 International Pipeline Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/ipc2006-10178.
Abstract:
While the previous generation of scoring-type algorithms has served the industry well, the associated technical compromises can be troublesome in today’s environment of increasing regulatory and public oversight. Risk analyses often become the centerpiece of legal, regulatory, or public proceedings. This prompts the need for analysis techniques that can produce risk estimates anchored in absolute terms, such as “consequences per mile-year”. Accordingly, a new generation of algorithms has been developed to meet today’s needs without costly revamping of previously collected data or increasing the costs of risk analysis. A simple regrouping of variables into categories of “exposure”, “mitigation”, and “resistance”, along with a few changes in the mathematics of combining variables, transitions older scoring models into the new approach. The advantages of the new algorithms are significant, since they:

• are more intuitive and predictive,
• better model reality,
• lead to better risk management decisions by distinguishing between unmitigated exposure to a threat, mitigation effectiveness, and system resistance,
• eliminate the need for unrealistic and troublesome reweighting or balancing of variables for changes such as new technologies,
• offer flexibility to present results in either absolute (probabilistic) terms or relative terms, depending on the user’s needs.

The challenge is to accomplish this without losing the advantages of earlier approaches. One intent of the new algorithms is to avoid the overly analytic techniques that often accompany more absolute quantifications of risk. This paper showcases this new generation of algorithms to better suit the changing needs of risk analysis within the pipeline industry.
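
A toy illustration of the regrouping described above (all numbers invented, and the multiplicative combination is one plausible reading, not the paper's published formula): unmitigated exposure is reduced by mitigation effectiveness, and whatever remains must still defeat the pipe's resistance, yielding a failure frequency in absolute units.

    # hypothetical values for one pipeline segment and one threat
    exposure = 0.8        # unmitigated hits per mile-year (e.g. third party)
    mitigation = 0.95     # fraction of exposure stopped (patrols, depth, ...)
    resistance = 0.90     # fraction of surviving hits the pipe withstands

    failures = exposure * (1 - mitigation) * (1 - resistance)
    print(failures)       # 0.004 failures per mile-year: absolute, not relative
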
9. Fuge, Mark, Josh Stroud, and Alice Agogino. "Automatically Inferring Metrics for Design Creativity." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12620.
Abstract:
Measuring design creativity is crucial to evaluating the effectiveness of idea generation methods. Historically, there has been a divide between easily-computable metrics, which are often based on arbitrary scoring systems, and human judgement metrics, which accurately reflect human opinion but rely on the expensive collection of expert ratings. This research bridges this gap by introducing a probabilistic model that computes a family of repeatable creativity metrics trained on expert data. Focusing on metrics for variety, a combination of submodular functions and logistic regression generalizes existing metrics, accurately recovering several published metrics as special cases and illuminating a space of new metrics for design creativity. When tasked with predicting which of two sets of concepts has greater variety, our model matches two commonly used metrics to 96% accuracy on average. In addition, using submodular functions allows this model to efficiently select the highest variety set of concepts when used in a design synthesis system.
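
The paper's learned metric is not reproduced here, but the role of submodularity can be sketched with a toy coverage-style variety function (concept-to-cluster labels invented). Greedy selection is near-optimal for monotone submodular objectives, which is what makes efficiently picking a high-variety concept set tractable:

    bins = [0, 0, 1, 2, 2, 3]    # hypothetical concept -> design-bin labels

    def variety(S):
        # number of distinct bins covered: monotone and submodular
        return len({bins[i] for i in S})

    chosen = []
    for _ in range(3):           # greedily pick three concepts
        rest = [i for i in range(len(bins)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: variety(chosen + [i])))
    print(chosen, variety(chosen))   # e.g. [0, 2, 3] covering 3 bins
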
10. Mitchell, James, Steve Jasper, and Jim Mihell. "A Semi-Quantitative Risk Assessment to Support Oil Pipeline Risk-Based Design." In 2014 10th International Pipeline Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/ipc2014-33659.
Abstract:
During the regulatory phase of the Enbridge Northern Gateway Project (Northern Gateway), the Joint Review Panel (JRP) requested information on “how the risk factors resulting from the geotechnical and geographic aspects of the pipeline will be taken into account” and to demonstrate “the integration of risk factors with the environmental and socio-economic consequences from potential hydrocarbon releases”. Furthermore, the JRP required Northern Gateway to identify where a risk-based approach to design would be used to address geotechnical and seismic hazards, valve locations for spill consequence reduction, and risk reduction in consequence areas. [1] To meet this requirement, a semi-quantitative risk assessment (SQRA) was undertaken. Risk was defined as a function of probability and consequence, where the probability (expressed as a frequency) of loss of pipe integrity was quantitatively determined and the consequence of failure was qualitatively determined. The frequency of failure was a probabilistic combination of the calculated probability of failure from reliability methods, historical frequencies, and assessed geo-hazard failure frequency rates. Consequence scoring was based on the intersection of theoretical spills with “consequence areas” for environmental or socio-economic effects. Frequency and consequence were then combined to provide risk scoring and ranking. Failure frequencies were developed using reliability methods where appropriate. The use of reliability methods addresses the primary challenge associated with quantifying risk for new pipelines, as industry failure statistics are not directly applicable to modern pipeline designs, materials, and operating practices. In the pipeline industry, reliability models exist for the most significant threats, including third-party damage, internal corrosion, and external corrosion. In addition, geotechnical threats can be characterized in terms of expected magnitude and associated frequency of occurrence, thereby enabling pipeline reliability to be established for each geo-hazard. Consequence scoring was based on modeling full-bore rupture spill scenarios and determining whether these spills would potentially intersect identified “consequence areas”. Over the course of the application and hearing process, two SQRAs were undertaken. Following the filing of the first SQRA, additional measures were included in the pipeline design to reduce the frequency of failure and to reduce potential consequences. This reduced the calculated overall risk by 84%, primarily through increases in wall thickness, which lowered the likelihood of third-party damage, and through an increased number of valves, which reduced consequences.
