To see the other types of publications on this topic, follow the link: Threshold Discovery.

Journal articles on the topic 'Threshold Discovery'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Threshold Discovery.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Cherenkov, P. A. "At the threshold of discovery." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 248, no. 1 (1986): 1–4. http://dx.doi.org/10.1016/0168-9002(86)90487-0.

2

Takei, Taro, and Hiroki Horita. "Analysis of Business Processes with Automatic Detection of KPI Thresholds and Process Discovery Based on Trace Variants." Research Briefs on Information and Communication Technology Evolution 9 (September 5, 2023): 59–76. http://dx.doi.org/10.56801/rebicte.v9i.157.

Abstract:
One effective way to analyze an event log of a complex business process is to filter the event log by a KPI threshold and then use an analysis algorithm. Event logs filtered by KPI thresholds are simpler and easier to understand than the original business process. KPI thresholds have conventionally had to be set through a trial-and-error process, but existing research has proposed an automatic KPI threshold detection method that reduces the time and effort required to search for a threshold value. In the existing method, an event log is first divided by an arbitrary number k, and a business process model is generated for the divided event log using a process discovery algorithm. By repeatedly aggregating similar business process models, it is possible to analyze how the business process models differ according to KPI values. However, the existing method requires trial-and-error to determine the value of k because the detection threshold and business process model vary depending on the value of k, which is time-consuming. Therefore, this paper proposes a method to automatically detect KPI thresholds by dividing event logs based on trace variants. Experimental results show that the business process models detected by the proposed method are simpler and the threshold detection time is significantly reduced.
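The variant-splitting idea is easy to prototype. Below is a minimal sketch, not the authors' algorithm: it groups an event log by trace variant (the activity sequence of a case), uses case throughput time as the KPI, and places a candidate threshold in the largest gap between variant-level KPI means. The tuple-based log format and the largest-gap heuristic are illustrative assumptions.

```python
# Hypothetical sketch of variant-based KPI threshold detection.
from collections import defaultdict

def kpi_threshold_by_variant(event_log):
    """event_log: list of (case_id, activity, timestamp_s) tuples."""
    cases = defaultdict(list)
    for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
        cases[case_id].append((activity, ts))
    # Group cases into trace variants; KPI = case throughput time.
    variant_kpi = defaultdict(list)
    for events in cases.values():
        variant = tuple(a for a, _ in events)
        variant_kpi[variant].append(events[-1][1] - events[0][1])
    means = sorted(sum(v) / len(v) for v in variant_kpi.values())
    # Put the threshold in the middle of the largest gap between
    # variant means (assumes at least two variants in the log).
    gap, i = max((means[j + 1] - means[j], j) for j in range(len(means) - 1))
    return (means[i] + means[i + 1]) / 2

log = [("c1", "A", 0), ("c1", "B", 50), ("c2", "A", 0), ("c2", "C", 900)]
print(kpi_threshold_by_variant(log))  # 475.0 for this toy log
```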
3

Cao, Xiaowen, Yao Dong, Li Xing, and Xuekui Zhang. "A Genome-Wide Association Study of Dementia Using the Electronic Medical Record." BioMedInformatics 3, no. 1 (2023): 141–49. http://dx.doi.org/10.3390/biomedinformatics3010010.

Abstract:
Dementia is characterized as a decline in cognitive function, including memory, language and problem-solving abilities. In this paper, we conducted a Genome-Wide Association Study (GWAS) using data from the electronic Medical Records and Genomics (eMERGE) network. This study has two aims, (1) to investigate the genetic mechanism of dementia and (2) to discuss multiple p-value thresholds used to address multiple testing issues. Using the genome-wide significance threshold (p ≤ 5×10⁻⁸), we identified four SNPs. Controlling the False Discovery Rate (FDR) level below 0.05 leads to one extra SNP. The five SNPs that we found are also supported by a QQ-plot comparing observed p-values with expected p-values. All five SNPs belong to the TOMM40 gene on chromosome 19. Other published studies independently validate the relationship between TOMM40 and dementia. Some published studies use a relaxed threshold (p ≤ 1×10⁻⁵) to discover SNPs when the statistical power is insufficient. This relaxed threshold is more powerful but cannot properly control false positives in multiple testing. We identified 13 SNPs using this threshold, which led to the discovery of extra genes (such as ATP10A-DT and PTPRM). Other published studies reported these genes as related to brain development or neuro-development, indicating these genes are potential novel genes for dementia. Those novel potential loci and genes may help identify targets for developing new therapies. However, we suggest using them with caution since they were discovered without proper false positive control.
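For readers who want to replicate the three selection rules contrasted above, here is a minimal sketch with made-up p-values. The Benjamini-Hochberg step-up procedure is shown as the FDR-controlling step; the paper does not state which FDR procedure it used, so treat that choice as an assumption.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of discoveries under BH FDR control at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranks = np.arange(1, p.size + 1)
    below = p[order] <= q * ranks / p.size
    keep = np.zeros(p.size, dtype=bool)
    if below.any():
        keep[order[:below.nonzero()[0].max() + 1]] = True  # step-up rule
    return keep

pvals = np.array([2e-9, 4e-8, 3e-7, 8e-6, 0.02, 0.4])  # made-up SNP p-values
print("genome-wide:", pvals <= 5e-8)        # strict 5x10^-8 threshold
print("BH FDR 0.05:", benjamini_hochberg(pvals))
print("suggestive: ", pvals <= 1e-5)        # relaxed, weaker error control
```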
4

Genovese, Christopher R., Nicole A. Lazar, and Thomas E. Nichols. "Threshold determination using the false discovery rate." NeuroImage 13, no. 6 (2001): 124. http://dx.doi.org/10.1016/s1053-8119(01)91467-3.

5

Calabrese, Edward. "Linear Non-Threshold (LNT) historical discovery milestones." La Medicina del Lavoro 113, no. 4 (2022): e2022033. https://doi.org/10.23749/mdl.v113i4.13381.

Abstract:
The present paper provides a summarized identification of critical historical milestones in the discovery of the flawed and corrupt foundations of cancer risk assessment, with particular focus on the LNT Dose Response model. The milestone sequence presented herein is based on a large body of published findings by the author. The history of LNT and cancer response represents what may be the most significant case of scientific misconduct reported in the US, with its revelation severely damaging the scientific credibility and moral authority of leading US regulatory agencies and organizations such as the National Academy of Sciences (NAS) and the journal Science. The consequences of this corrupt history are substantial, affecting cancer risk assessment throughout the world, critical aspects of national economies, the development of critical technologies and public health practices.
6

Hayat, O., R. Ngah, and Yasser Zahedi. "Cooperative GPS and Neighbors Awareness Based Device Discovery for D2D Communication in in-Band Cellular Networks." International Journal of Engineering & Technology 7, no. 2.29 (2018): 700. http://dx.doi.org/10.14419/ijet.v7i2.29.14001.

Abstract:
Device to Device (D2D) communication is a new paradigm for next-generation wireless systems to offload data traffic. A device needs to discover neighbor devices on a certain channel to initiate D2D communication within a minimum period. A device discovery technique based on the Global Positioning System (GPS) and neighbor awareness is proposed for in-band cellular networks. This method, called the network-centric approach, improves device discovery efficiency, accuracy, and channel capacity. A differential code is applied to measure the signal-to-noise ratio (SNR) of each discovered device; if the SNR of two devices is above a specified threshold value, the two devices qualify for D2D communication. Two procedures are explored for device discovery: discovery by the CN (core network) and by eNB (evolved node B) cooperation, with the help of GPS and neighbor awareness. The distance underlying the SNR calculation is obtained using the haversine formula. Results show an increase in channel capacity relative to the SNR obtained for each device.
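The two quantitative ingredients named here, the haversine distance between GPS fixes and the SNR gate, can be sketched in a few lines. The SNR values and the 10 dB threshold below are illustrative assumptions, not values from the paper.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two GPS fixes, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

def d2d_eligible(snr_a_db, snr_b_db, threshold_db=10.0):
    """Both discovered devices must exceed the SNR threshold."""
    return snr_a_db >= threshold_db and snr_b_db >= threshold_db

print(haversine_km(52.52, 13.405, 52.53, 13.42))  # ~1.5 km apart
print(d2d_eligible(14.2, 11.7))                   # True: both above 10 dB
```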
7

Kanwal, Attiya, Sahar Fazal, Sohail Asghar, and Muhammad Naeem. "Subgroup Discovery of the MODY Genes." Professional Medical Journal 20, no. 05 (2013): 644–52. http://dx.doi.org/10.29309/tpmj/2013.20.05.1207.

Abstract:
Background: The pandemic of metabolic disorders is accelerating in the urbanized world, posing a huge burden to health and economy. The key pioneer of most metabolic disorders is Diabetes Mellitus. A newly discovered form of diabetes is Maturity Onset Diabetes of the Young (MODY). MODY is a monogenic form of diabetes, inherited as an autosomal dominant disorder. To date, 11 different MODY genes have been reported. Objective: This study aims to discover subgroups from the biological text documents related to these genes in a public domain database. Data Source: The data set was obtained from PubMed. Period: September-December, 2011. Materials and Methodology: The APRIORI-SD subgroup discovery algorithm is used for the task of discovering subgroups. The well-known association rule learning algorithm APRIORI is first modified into the classification rule learning algorithm APRIORI-C. The APRIORI-C algorithm generates rules from the discretized dataset with the minimum support set to 0.42% and no confidence threshold. A total of 580 rules are generated at the given support. APRIORI-C is further modified by adaptation into APRIORI-SD. Results: Experimental results demonstrate that APRIORI-SD discovers substantially smaller rule sets; each rule has higher support and significance. The rules obtained by APRIORI-C are ordered by weighted relative accuracy. Conclusion: Only the first 66 rules are retained, as they cover the relations between all 11 MODY genes. These 66 rules are further organized into 11 different subgroups. The evaluation of the obtained results against the literature shows that APRIORI-SD is a competitive subgroup discovery algorithm. All the associations among genes proved to be true.
8

Yu, Lele, Wensheng Gan, Zhixiong Chen, and Yining Liu. "IDHUP: Incremental Discovery of High Utility Pattern." Journal of Internet Technology (網際網路技術學刊) 24, no. 1 (2023): 135–47. http://dx.doi.org/10.53106/160792642023012401013.

Abstract:
As a sub-problem of pattern discovery, utility-oriented pattern mining has recently emerged as a focus of researchers' attention and offers broad application prospects. Considering the dynamic characteristics of the input databases, incremental utility mining methods have been proposed, aiming to discover implicit information/patterns whose importance/utility is not less than a user-specified threshold from incremental databases. However, due to the explosive growth of the search space, most existing methods perform unsatisfactorily under a low utility threshold, so there is still room for improvement in running efficiency and pruning capacity. Motivated by this, we provide an effective and efficient method called IDHUP by designing an indexed partitioned utility list structure and employing four pruning strategies. With the proposed data structure, IDHUP can not only dynamically update the utility values of patterns but also avoid visiting non-occurring patterns. Moreover, to further exclude ineligible patterns and avoid unnecessary exploration, we put forward the remaining-utility reducing strategy and three other revised pruning strategies. Experiments on various datasets demonstrated that the designed IDHUP algorithm has the best performance in terms of running time compared to state-of-the-art algorithms.
9

Kim, C. S., Ruben N. Lubowski, Jan Lewandrowski, and Mark E. Eiswerth. "Prevention or Control: Optimal Government Policies for Invasive Species Management." Agricultural and Resource Economics Review 35, no. 1 (2006): 29–40. http://dx.doi.org/10.1017/s1068280500010030.

Abstract:
We present a conceptual, but empirically applicable, model for determining the optimal allocation of resources between exclusion and control activities for managing an invasive species with an uncertain discovery time. This model is used to investigate how to allocate limited resources between activities before and after the first discovery of an invasive species and the effects of the characteristics of an invasive species on limited resource allocation. The optimality conditions show that it is economically efficient to spend a larger share of outlays for exclusion activities before, rather than after, a species is first discovered, up to a threshold point. We also find that, after discovery, more exclusionary measures and fewer control measures are optimal, when the pest population is less than a threshold. As the pest population increases beyond this threshold, the exclusionary measures are no longer optimal. Finally, a comparative dynamic analysis indicates that the efficient level of total expenditures on preventive and control measures decreases with the level of the invasive species stock and increases with the intrinsic population growth rate, the rate of additional discoveries avoided, and the maximum possible pest population.
10

Zhang, Sheng, and Guang Lin. "Robust data-driven discovery of governing physical laws with error bars." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2217 (2018): 20180305. http://dx.doi.org/10.1098/rspa.2018.0305.

Abstract:
Discovering governing physical laws from noisy data is a grand challenge in many science and engineering research areas. We present a new approach to data-driven discovery of ordinary differential equations (ODEs) and partial differential equations (PDEs), in explicit or implicit form. We demonstrate our approach on a wide range of problems, including shallow water equations and Navier–Stokes equations. The key idea is to select candidate terms for the underlying equations using dimensional analysis, and to approximate the weights of the terms with error bars using our threshold sparse Bayesian regression. This new algorithm employs Bayesian inference to tune the hyperparameters automatically. Our approach is effective, robust and able to quantify uncertainties by providing an error bar for each discovered candidate equation. The effectiveness of our algorithm is demonstrated through a collection of classical ODEs and PDEs. Numerical experiments demonstrate the robustness of our algorithm with respect to noisy data and its ability to discover various candidate equations with error bars that represent the quantified uncertainties. Detailed comparisons with the sequential threshold least-squares algorithm and the lasso algorithm are studied from noisy time-series measurements and indicate that the proposed method provides more robust and accurate results. In addition, the data-driven prediction of dynamics with error bars using discovered governing physical laws is more accurate and robust than classical polynomial regressions.
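The sequential threshold least-squares baseline that the authors compare against is compact enough to sketch. The following implements that baseline (not the paper's threshold sparse Bayesian regression), shown on a toy library of polynomial candidate terms; the threshold lam and the toy data are illustrative.

```python
import numpy as np

def sequential_threshold_lstsq(Theta, dXdt, lam=0.1, n_iter=10):
    """Iteratively zero out small coefficients and refit on the survivors."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < lam
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):           # refit each equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from noisy data, library [x, x^2, x^3].
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
Theta = np.column_stack([x, x**2, x**3])
dxdt = (-2 * x + 0.01 * rng.standard_normal(200)).reshape(-1, 1)
print(sequential_threshold_lstsq(Theta, dxdt, lam=0.2).ravel())  # ~[-2, 0, 0]
```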
11

Hyndman, Brendon. "Unlocking the Discovery Threshold: Active Exploration in Physical Education." Journal of Physical Education, Recreation & Dance 92, no. 3 (2021): 26–33. http://dx.doi.org/10.1080/07303084.2020.1866718.

12

Mollamotalebi, Mahdi, Raheleh Maghami, and Abdul Samad Ismail. "THRD: Threshold-based hierarchical resource discovery for Grid environments." Computing 97, no. 5 (2014): 439–58. http://dx.doi.org/10.1007/s00607-014-0427-4.

13

Meshram, Swati, and Kishor P. Wagh. "Effective Maximal Co-location Pattern Mining: A Hybrid Approach for Spatial and Spatiotemporal Data." International Journal of Advances in Soft Computing and its Applications 16, no. 3 (2024): 214–30. https://doi.org/10.15849/ijasca.241130.12.

Abstract:
Spatial co-location patterns refer to sets of distinct spatial features often found in proximity over a study region. Spatial co-location pattern mining is the process of discovering co-location patterns in global and local regions. However, this relationship of co-occurrence is not uniformly observed: some patterns discovered in global regions are not found in local regions, and vice versa. In various previous research works, such pattern discovery is based on a single prevalence threshold value. A single prevalence threshold, however, is not suitable for detecting maximal patterns both globally and locally; it will miss certain patterns either globally or locally because of the non-uniform distribution of data instances. To discover spatial co-location patterns, this paper presents a prevalent region mining algorithm that mines spatial co-locations in global and local regions based on the distribution of the data. Additionally, the effectiveness of this algorithm is demonstrated by comparison with various state-of-the-art algorithms. The algorithm is implemented and evaluated on synthetic and real datasets.
14

Aqra, Iyad, Norjihan Abdul Ghani, Carsten Maple, José Machado, and Nader Sohrabi Safa. "Incremental Algorithm for Association Rule Mining under Dynamic Threshold." Applied Sciences 9, no. 24 (2019): 5398. http://dx.doi.org/10.3390/app9245398.

Abstract:
Data mining is applied to discover new knowledge from a database through an iterative process, and the mining process may be time consuming for massive datasets. A widely used method in the knowledge discovery domain is the association rule mining (ARM) approach, despite its shortcomings in mining large databases, and several approaches have been proposed to address them. Most of the proposed algorithms address data incremental issues, especially when a hefty amount of data is added to the database after the latest mining process. The three basic manipulation operations performed on a database are add, delete, and update, and any method devised for data incremental issues must embed these three operations. The changing threshold is a long-standing problem within the data mining field: since decision making is an active process, the threshold is indeed changeable. Accordingly, the present study proposes an algorithm that resolves the issue of rescanning a database that had been mined previously and allows retrieval of knowledge that satisfies several thresholds without the need to repeat the process from scratch. The proposed approach displayed high accuracy in experimentation, as well as a reduction in processing time of almost two-thirds of the original mining execution time.
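The core idea, keeping mined supports so that later queries at different thresholds need no database rescan, can be illustrated as follows. The exhaustive enumeration is for illustration only; the paper's incremental algorithm is more sophisticated.

```python
from itertools import combinations

def count_supports(transactions, max_len=3):
    """One pass over the database: absolute support of every small itemset."""
    supports = {}
    for t in transactions:
        for k in range(1, max_len + 1):
            for itemset in combinations(sorted(t), k):
                supports[itemset] = supports.get(itemset, 0) + 1
    return supports

def frequent_at(supports, n_transactions, minsup):
    """Re-query the stored counts under a new threshold, no rescan."""
    return {s: c for s, c in supports.items() if c / n_transactions >= minsup}

db = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
supports = count_supports(db)
print(frequent_at(supports, len(db), 0.75))  # stricter threshold
print(frequent_at(supports, len(db), 0.50))  # relaxed, same stored counts
```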
15

Shang, Kun. "Semantic-based service discovery in grid environment." Journal of Intelligent & Fuzzy Systems 39, no. 4 (2020): 5263–72. http://dx.doi.org/10.3233/jifs-189011.

Abstract:
In the process of informatization, new problems have emerged: information cannot be shared and integrated, and distributed resources cannot be used effectively; these problems pose new challenges for the industry. The goal of this paper is to combine grid technology and ontology organically, to build a unified, semantics-based platform for information system integration and interoperation, and thereby to realize information sharing and accelerate the pace of informatization. The overall structure of the system is constructed according to its actual needs. This paper first analyzes the current research status and existing problems of semantic grid service matching, and proposes a semantic layered matching algorithm based on the Massimo Paolucci elastic matching algorithm. To verify the feasibility and effectiveness of the semantics-based hierarchical matching algorithm, a prototype system named SGSM was designed, and its functional model, matching process and performance were studied. Experimental results show that, for the proposed algorithm, the threshold value of service semantic correlation degree is 0.84, the threshold of service basic concept matching degree is 0.89, the threshold of service comprehensive similarity is 0.66, and the threshold of service quality matching degree is 0.78. In the experiments, the recall of the three compared methods was 33%, 62% and 85% respectively, and the precision 29%, 57% and 88% respectively, illustrating that the semantics-based hierarchical matching algorithm is feasible in practical applications and improves significantly on both recall and precision compared with the traditional keyword-based service matching algorithm and the Massimo Paolucci elastic matching algorithm.
16

Shriner, Daniel. "Mapping multiple quantitative trait loci under Bayes error control." Genetics Research 91, no. 3 (2009): 147–59. http://dx.doi.org/10.1017/s001667230900010x.

Abstract:
In mapping of quantitative trait loci (QTLs), performing hypothesis tests of linkage to a phenotype of interest across an entire genome involves multiple comparisons. Furthermore, linkage among loci induces correlation among tests. Under many multiple comparison frameworks, these problems are exacerbated when mapping multiple QTLs. Traditionally, significance thresholds have been subjectively set to control the probability of detecting at least one false positive outcome, although such thresholds are known to result in excessively low power to detect true positive outcomes. Recently, false discovery rate (FDR)-controlling procedures have been developed that yield more power both by relaxing the stringency of the significance threshold and by retaining more power for a given significance threshold. However, these procedures have been shown to perform poorly for mapping QTLs, principally because they ignore recombination fractions between markers. Here, I describe a procedure that accounts for recombination fractions and extends FDR control to include simultaneous control of the false non-discovery rate, i.e. the overall error rate is controlled. This procedure is developed in the Bayesian framework using a direct posterior probability approach. Data-driven significance thresholds are determined by minimizing the expected loss. The procedure is equivalent to jointly maximizing positive and negative predictive values. In the context of mapping QTLs for experimental crosses, the procedure is applicable to mapping main effects, gene–gene interactions and gene–environment interactions.
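The direct posterior probability idea admits a compact illustration: over called loci, sum (1 - p) for the expected false discoveries; over uncalled loci, sum p for the expected false non-discoveries; then pick the cutoff minimizing a weighted sum. A minimal sketch, with the loss weights and posterior values as assumptions:

```python
import numpy as np

def optimal_cutoff(post, w_fd=1.0, w_fnd=1.0):
    """post: posterior probabilities of linkage, one per tested locus."""
    best = (np.inf, 1.0)
    for t in np.unique(post):
        called = post >= t
        e_fd = np.sum(1.0 - post[called])   # expected false discoveries
        e_fnd = np.sum(post[~called])       # expected false non-discoveries
        best = min(best, (w_fd * e_fd + w_fnd * e_fnd, t))
    return best  # (expected loss, data-driven threshold)

post = np.array([0.99, 0.95, 0.80, 0.40, 0.10, 0.02])
print(optimal_cutoff(post))  # threshold 0.80 minimizes the loss here
```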
17

Bouzid, Rachid, Monique T. A. de Beijer, Robbie J. Luijten, et al. "Empirical Evaluation of the Use of Computational HLA Binding as an Early Filter to the Mass Spectrometry-Based Epitope Discovery Workflow." Cancers 13, no. 10 (2021): 2307. http://dx.doi.org/10.3390/cancers13102307.

Abstract:
Immunopeptidomics is used to identify novel epitopes for (therapeutic) vaccination strategies in cancer and infectious disease. Various false discovery rates (FDRs) are applied in the field when converting liquid chromatography-tandem mass spectrometry (LC-MS/MS) spectra to peptides, and large efforts have recently been made to rescue peptides of lower confidence. However, it remains unclear how the FDR threshold relates overall to the percentage of HLA-binders obtained. We here directly evaluated the effect of varying FDR thresholds on the resulting immunopeptidomes of HLA-eluates from human cancer cell lines and primary hepatocyte isolates using HLA-binding algorithms. Additional peptides obtained using less stringent FDR thresholds, although generally derived from poorer spectra, still contained a high proportion of HLA-binders, confirming recently developed tools that tap into this pool of otherwise ignored peptides. Most of these peptides were identified with improved confidence when cell input was increased, supporting the validity and potential of these identifications. Altogether, our data suggest that increasing the FDR threshold for peptide identification, in conjunction with data filtering by HLA-binding prediction, is a valid and highly potent method for exhausting immunopeptidome datasets more efficiently for epitope discovery, and it reveals the extent of peptides that can be rescued by recently developed algorithms.
18

Choi, Moon Sub. "Price Discovery of Cross-listings and Threshold Error Correction Model." INTERNATIONAL BUSINESS REVIEW 15, no. 2 (2011): 109. http://dx.doi.org/10.21739/ibr.2011.06.15.2.109.

19

Santhosh Kumar, R., and R. Bharanidharan. "Neighbor discovery-based security enhancement using threshold cryptography for IP address assigning in network." International Journal of Engineering & Technology 7, no. 1.1 (2017): 439. http://dx.doi.org/10.14419/ijet.v7i1.1.9951.

Abstract:
This work addresses the security threats related to current IP configuration and couples them with Threshold Cryptography (TC). Distributed environments are becoming more prevalent as technologies such as networks aim to enable large-scale collaboration for resource sharing. Secure verification is a key concern for such environments. The proposed system maintains two stages for improving network security at IP-addressing time: the first stage is threshold cryptography, and the second is the Neighbor Discovery Protocol (NDP). The main objective of the proposed work is a Threshold Cryptography-based addressing scheme for the network, with an authorization process that fully depends on Neighbor Discovery Protocol-based frameworks. The goal of the Neighbor Discovery Protocol is to support a Mobile Node (MN) roaming across network domains without communication disturbance or noticeable delay. Moreover, this approach also ensures correct synchronization between nodes that send overlapping data on the network. Finally, because packet loss during data transfer with NDP-TC is very low, network security is correspondingly increased.
20

Hartmann, A., G. Nuernberg, D. Repsilber, et al. "Effects of threshold choice on the results of gene expression profiling, using microarray analysis, in a model feeding experiment with rats." Archives Animal Breeding 52, no. 1 (2009): 65–78. http://dx.doi.org/10.5194/aab-52-65-2009.

Abstract:
Global gene expression studies using microarray technology are widely employed to identify biological processes which are influenced by a treatment, e.g. a specific diet. Affected processes are characterized by a significant enrichment of differentially expressed genes (functional annotation analysis). However, different choices of statistical thresholds to select candidates for differential expression will alter the resulting candidate list. This study was conducted to investigate the effect of applying a False Discovery Rate (FDR) correction and different fold change thresholds in the statistical analysis of microarray data on the diet-affected biological processes identified via a significantly increased proportion of differentially expressed genes. In a model feeding experiment with rats fed genetically modified food additives, animals received a supplement of either lyophilized inactivated recombinant VP60 baculovirus (rBV-VP60) or lyophilized inactivated wild type baculovirus (wtBV). Comparative expression profiling was done in spleen, liver and small intestine mucosa. We demonstrated the extent to which threshold choice can affect the biological processes identified as significantly regulated and thus the conclusions drawn from the microarray data. In our study, the combined application of a moderate fold change threshold (FC≥1.5) and a stringent FDR threshold (q≤0.05) exhibited high reliability of the biological processes identified as differentially regulated. The application of a stringent FDR threshold of q≤0.05 seems to be an essential prerequisite to considerably reduce the number of false positives. Microarray results of selected differentially expressed molecules were validated successfully by using real-time RT-PCR.
21

Bindra, Puneet, Jaswinder Kaur, and Gurjeevan Singh. "Investigation of Optimum TTL Threshold value for Route Discovery in AODV." International Journal of Computer Applications 79, no. 9 (2013): 45–49. http://dx.doi.org/10.5120/13773-1643.

22

Iris Punitha, P., and J. G. R. Sathiaseelan. "A Novel Two Tier Missing at Random Type Missing Data Imputation using Enhanced Linear Interpolation Technique on Internet of Medical Things." Indian Journal of Science and Technology 16, no. 16 (2021): 1192–204. https://doi.org/10.17485/IJST/v16i16.60.

Abstract:
Objectives: Data collection and distribution are essential components for the success of an Internet of Medical Things (IoMT) system. Missing data is the most recurrent problem that impacts overall system performance. Methods: Missing data in IoMT systems can be caused by various factors, including faulty connections, external attacks, or sensing errors. Although missing data is ubiquitous in IoT, missing data imputation is hardly ever studied in an IoMT setting. As a result, performing analytics on IoMT data with missing values degrades the accuracy and dependability of the analysis outputs. To achieve good performance, missing data must be imputed as soon as it occurs in such systems. Therefore, this paper proposes a novel Two Tier Missing Data Imputation (TT-MDI) technique for missing at random (MAR) type missing data in IoMT using an enhanced linear interpolation technique. Findings: The proposed TT-MDI algorithm has two tiers for imputing MAR missing data and was tested on the Kaggle Machine Learning Repository's cStick IoMT dataset. Utilizing the distances between the class centroids and their related data instances, the first tier identifies the imputation threshold; the second tier then uses the identified threshold to impute missing data. According to the experimental findings, the dataset imputed with the TT-MDI technique offers higher accuracy, precision, recall, and F-measure than the dataset with missing data when compared against the original dataset. Novelty: The TT-MDI technique consists of two tiers. The first tier uses Manhattan distances between class centroids and related data instances to discover the imputation threshold; the second tier uses the discovered threshold to impute missing data with the Enhanced Linear Interpolation technique. Keywords: Internet of Medical Things; Imputation of Missing Data; Threshold Discovery; Manhattan Distance
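A rough two-tier sketch following this outline is given below. The column names, the toy data and the mean-distance rule for the threshold are assumptions; the paper's exact formulation may differ.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"hr":   [72, 75, np.nan, 140, 138, np.nan],
                   "spo2": [98, 97, 96, 91, np.nan, 90],
                   "label": [0, 0, 0, 1, 1, 1]})
feats = df[["hr", "spo2"]]

# Tier 1: threshold from Manhattan distances between each class centroid
# and the instances of that class (NaNs are skipped in the sums).
cent = feats.groupby(df["label"]).transform("mean")
dist = (feats - cent).abs().sum(axis=1)
threshold = dist.mean()

# Tier 2: linear interpolation within each class; accept the fill only
# for rows whose centroid distance is within the tier-1 threshold.
filled = df.groupby("label")[["hr", "spo2"]].transform(
    lambda s: s.interpolate(limit_direction="both"))
accept = feats.isna().to_numpy() & (dist <= threshold).to_numpy()[:, None]
df[["hr", "spo2"]] = feats.where(~accept, filled)
print(df)
```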
23

Eldawy, Eman O., Abdeltawab Hendawi, Mohammed Abdalla, and Hoda M. O. Mokhtar. "FraudMove: Fraud Drivers Discovery Using Real-Time Trajectory Outlier Detection." ISPRS International Journal of Geo-Information 10, no. 11 (2021): 767. http://dx.doi.org/10.3390/ijgi10110767.

Abstract:
Taxicabs and rideshare cars are nowadays equipped with GPS devices that enable capturing a large volume of traces, and these GPS traces represent the moving behavior of the car drivers. Discovering fraud drivers early, in real time, is essential for saving passengers' money and ensuring their safety. For this purpose, this paper proposes a novel time-based system, namely FraudMove, to discover fraud drivers in real time by identifying outlier active trips. Mainly, the proposed FraudMove system computes the time of the most probable path of a trip. For trajectory outlier detection, a trajectory is considered an outlier if its time exceeds the time of this computed path by a specified threshold. FraudMove employs a tunable time window parameter to control the number of checks for detecting outlier trips. This parameter allows FraudMove to trade responsiveness for efficiency. Unlike related works that wait until the end of a trip to indicate that it was an outlier, FraudMove discovers outlier trips instantly, during the trip. Extensive experiments conducted on a real dataset confirm the efficiency and effectiveness of FraudMove in detecting outlier trajectories. The experimental results prove that FraudMove reduces the response time of the outlier check process by up to 65% compared to state-of-the-art systems.
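The timing rule is straightforward to sketch: compare elapsed trip time against the expected time of the most probable path at each window boundary. The window spacing and the 1.5x threshold factor below are illustrative assumptions, not FraudMove's tuned values.

```python
def monitor_trip(expected_s, elapsed_at_checks, threshold_factor=1.5):
    """elapsed_at_checks: elapsed seconds at each periodic window boundary."""
    for i, elapsed in enumerate(elapsed_at_checks):
        # Flag the trip mid-journey, without waiting for it to finish.
        if elapsed > threshold_factor * expected_s:
            return f"outlier flagged at check {i}: {elapsed}s vs {expected_s}s expected"
    return "trip completed within the expected time envelope"

# Expected time of the most probable path: 600 s; one check per minute.
print(monitor_trip(600, [240, 480, 720, 960]))
```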
24

Catena, Riccardo, and Vanessa Zema. "Prospects for dark matter signal discovery and model selection via timing information in a low-threshold experiment." Journal of Cosmology and Astroparticle Physics 2022, no. 02 (2022): 022. http://dx.doi.org/10.1088/1475-7516/2022/02/022.

Abstract:
In recent years, many low-threshold dark matter (DM) direct detection experiments have reported unexplained excesses of events at low energies. A case in point is the CRESST experiment, which has detected unidentified events below an energy of about 200 eV, a result hampering the detector performance in the search for GeV-scale DM. In this work, we test the impact of nuclear recoil timing information on the potential for DM signal discovery and model selection in a low-threshold experiment limited by the presence of an unidentified background resembling this population of low-energy events. Among the different targets explored by the CRESST collaboration, here we focus on Al2O3, as a sapphire detector was shown to reach an energy threshold as low as 19.7 eV [1]. We test the ability of a low-threshold experiment to discover a signal above a given background, or to reject the spin-independent interaction in favour of a magnetic dipole coupling, in terms of p-values. We perform our p-value calculations (1) taking timing information into account and (2) assuming that the latter is not available. By comparing the two approaches, we find that under our assumptions timing information has a marginal impact on the potential for DM signal discovery, while it provides more significant results for the selection between the two models considered. For the model parameters explored here, we find that the p-value for rejecting spin-independent interactions in favour of a magnetic dipole coupling is about 0.11 when the experimental exposure is 460 g×year, and smaller (about 0.06) if timing information is available. The conclusion on the role of timing information remains qualitatively unchanged for exposures as large as 1 kg×5 year. At the same time, our results show that a 90% C.L. rejection of spin-independent interactions in favour of a magnetic dipole coupling is within reach of an upgrade of the CRESST experiment [2].
25

Gruenwald, Oskar. "Philosophy as Creative Discovery." Journal of Interdisciplinary Studies 11, no. 1 (1999): 157–74. http://dx.doi.org/10.5840/jis1999111/28.

Abstract:
At the dawn of the Third Millennium, philosophy is at an important crossroads in its role as paideia—philosophy educating humanity. A major challenge for philosophy today is to mediate the emerging science-religion dialogue, and enhance understanding of the relationship between science, ethics, and faith. Curiously, the methodological dilemmas and thorny issues of demarcation between science and religion reflect a new awareness regarding metascientific questions posed by science itself. We are at the threshold of a new Golden Age of scientific discoveries and faith-informed, interdisciplinary, and liberal learning interconnecting once more all areas of knowledge with ethics and faith. The likely key to new discoveries is an interdisciplinary approach seeking interrelatedness between all phenomena. This means also that the restoration of philosophy in the classic sense as sophia or the love of wisdom can only be achieved within the larger framework of dialogue among all disciplines in the quest for truth.
26

Sanford, Emily M., and Justin Halberda. "A Shared Intuitive (Mis)understanding of Psychophysical Law Leads Both Novices and Educated Students to Believe in a Just Noticeable Difference (JND)." Open Mind 7 (2023): 785–801. http://dx.doi.org/10.1162/opmi_a_00108.

Abstract:
Humans are both the scientists who discover psychological laws and the thinkers who behave according to those laws. Oftentimes, when our natural behavior is in accord with those laws, this dual role serves us well: our intuitions about our own behavior can serve to inform our discovery of new laws. But, in cases where the laws that we discover through science do not agree with the intuitions and biases we carry into the lab, we may find it harder to believe in and adopt those laws. Here, we explore one such case. Since the founding of psychophysics, the notion of a Just Noticeable Difference (JND) in perceptual discrimination has been ubiquitous in experimental psychology, even in spite of theoretical advances since the 1950s that argue that there can be no such thing as a threshold in perceiving difference. We find that both novices and psychologically educated students alike misunderstand the JND to mean that, below a certain threshold, humans will be unable to tell which of two quantities is greater (e.g., that humans will be completely at chance when trying to judge which is heavier, a bag with 3000 grains of sand or 3001). This belief in chance performance below a threshold is inconsistent with psychophysical law. We argue that belief in a JND is part of our intuitive theory of psychology and is therefore very difficult to dispel.
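The central claim, that there is no hard threshold below which discrimination falls to chance, can be made concrete with a standard signal detection calculation: two-alternative accuracy is Phi(delta/(sigma*sqrt(2))), which exceeds 0.5 for any positive difference delta. A quick check, with an illustrative noise scale:

```python
from math import erf, sqrt

def p_correct(delta, sigma=1.0):
    """2AFC accuracy under equal-variance Gaussian noise: Phi(delta/(sigma*sqrt(2)))."""
    z = delta / (sigma * sqrt(2))
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

# Even a tiny difference (3000 vs 3001 grains) gives accuracy just above 0.5,
# never exactly chance, contrary to the intuitive hard-threshold reading.
for delta in (0.001, 0.1, 1.0, 3.0):
    print(delta, round(p_correct(delta), 4))
```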
27

Patel, Dharmendra. "Threshold based partial partitioning fuzzy means clustering algorithm (TPPFMCA) for pattern discovery." International Journal of Information Technology 12, no. 1 (2019): 215–22. http://dx.doi.org/10.1007/s41870-019-00343-5.

28

Tarmizi, Seleviawati, Prakash Veeraraghavan, and Somnath Ghosh. "Improvement on the Multihop Shareholder Discovery for Threshold Secret Sharing in MANETs." Journal of Computer Science and Technology 26, no. 4 (2011): 711–21. http://dx.doi.org/10.1007/s11390-011-1170-3.

29

Supriyadin, Dedi. "Pengaruh Discovery Inquiry Dengan Tinjauan Praktek Dan Teoritis Pada Materi Getaran Terhadap Kemampuan Berfikir Kritis [The Effect of Discovery Inquiry with Practical and Theoretical Reviews of Vibration Material on Critical Thinking Skills]." GRAVITY EDU (JURNAL PENDIDIKAN FISIKA) 7, no. 1 (2024): 23–26. https://doi.org/10.33627/ge.v7i1.3244.

Abstract:
This study aims to determine the effect of Discovery Inquiry Learning with practical and theoretical reviews, accompanied by animations, on critical thinking skills in the topic of vibrations and waves during the even semester of the 2016/2017 academic year. The research sample consisted of 58 students, divided into 29 students in the experimental group and 29 students in the control group. The research design used was the Posttest Control Group Design, and the analysis method applied was two-way analysis of variance (ANOVA). The results showed that (1) there was no difference in students' critical thinking between those who used the Discovery Inquiry learning model and those who followed conventional learning methods: for the Discovery Inquiry learning model the significance value obtained was 0.117, which is above the significance threshold (sig > 0.05), while for the conventional learning model the significance value obtained was 0.01, which is below the significance threshold (sig < 0.05).
30

Rotondi, Renata, and Elisa Varini. "Global statistical tests for clustered earthquake pattern discovery in Italy." Research in Geophysics 2, no. 2 (2012): 9. http://dx.doi.org/10.4081/rg.2012.e9.

Abstract:
It is a widely shared opinion that not only secondary earthquakes (aftershocks) but also main earthquakes tend to occur in time-space clusters. The importance of this assumption requires the application of statistical tools to objectively evaluate its coherence with the reality at different scales of size-space-time. Global tests allow us to select the data sets with significant space-time clustering in order to perform more in-depth analyses to detect cluster locations. According to different fixed magnitude thresholds, we perform two global statistical tests, the Knox test and the Jacquez test, based on the space-time distance between pairs of earthquakes under the null hypothesis of uniform distribution in time and space, and evaluate the significance of the possible clusters. We analyze subsets of historical Italian earthquakes drawn from the Parametric Catalog of Italian Earthquakes (CPTI04) with magnitude thresholds 4.5, 5.3 and 6.0, associated with the composite seismogenic sources of the Database of Individual Seismogenic Sources. Each subset is related to one of the eight tectonically homogeneous macroregions in which the Italian territory has been divided. Significant space-time clustering is found for all sets with a magnitude threshold of 4.5. This tendency decreases drastically or disappears when the cut off rises to 5.3, with the exception of two macroregions located in the Eastern Alps and the Calabrian Arc, respectively, where evidence of space-time interaction may refer to stress transfer among consecutive or adjacent faults. The link between clustering effect and tectonic behavior could guide the choice of different stochastic point processes to model the seismic activity.
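The Knox test counts event pairs that are close in both space and time and assesses that count against permutations of the event times. A minimal sketch, with Euclidean distances on projected coordinates and illustrative delta/tau windows:

```python
import numpy as np

def knox_statistic(xy, t, delta, tau):
    """Number of event pairs within distance delta AND time lag tau."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    dt = np.abs(t[:, None] - t[None, :])
    close = (d < delta) & (dt < tau)
    return np.triu(close, k=1).sum()     # count each pair once

def knox_pvalue(xy, t, delta, tau, n_perm=999, seed=0):
    """Monte Carlo p-value: shuffle times to break space-time linkage."""
    rng = np.random.default_rng(seed)
    observed = knox_statistic(xy, t, delta, tau)
    perms = [knox_statistic(xy, rng.permutation(t), delta, tau)
             for _ in range(n_perm)]
    return observed, (1 + sum(p >= observed for p in perms)) / (1 + n_perm)

xy = np.array([[0, 0], [1, 1], [50, 50], [51, 50]], dtype=float)  # km
t = np.array([0.0, 2.0, 100.0, 101.0])                            # days
print(knox_pvalue(xy, t, delta=5.0, tau=5.0))
```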
31

Walsh, Roddy, Rafik Tadros, and Connie R. Bezzina. "When genetic burden reaches threshold." European Heart Journal 41, no. 39 (2020): 3849–55. http://dx.doi.org/10.1093/eurheartj/ehaa269.

Abstract:
Rare cardiac genetic diseases have generally been considered to be broadly Mendelian in nature, with clinical genetic testing for these conditions predicated on the detection of a primary causative rare pathogenic variant that will enable cascade genetic screening in families. However, substantial variability in penetrance and disease severity among carriers of pathogenic variants, as well as the inability to detect rare Mendelian variants in considerable proportions of patients, indicates that more complex aetiologies are likely to underlie these diseases. Recent findings have suggested genetic variants across a range of population frequencies and effect sizes may combine, along with non-genetic factors, to determine whether the threshold for expression of disease is reached and the severity of the phenotype. The availability of increasingly large genetically characterized cohorts of patients with rare cardiac diseases is enabling the discovery of common genetic variation that may underlie both variable penetrance in Mendelian diseases and the genetic aetiology of apparently non-Mendelian rare cardiac conditions. It is likely that the genetic architecture of rare cardiac diseases will vary considerably between different conditions as well as between patients with similar phenotypes, ranging from near-Mendelian disease to models more akin to common, complex disease. Uncovering the broad range of genetic factors that predispose patients to rare cardiac diseases offers the promise of improved risk prediction and more focused clinical management in patients and their families.
32

Elliott, Terry. "Sparseness, Antisparseness and Anything in Between: The Operating Point of a Neuron Determines Its Computational Repertoire." Neural Computation 26, no. 9 (2014): 1924–72. http://dx.doi.org/10.1162/neco_a_00630.

Abstract:
A recent model of intrinsic plasticity coupled to Hebbian synaptic plasticity proposes that adaptation of a neuron's threshold and gain in a sigmoidal response function to achieve a sparse, exponential output firing rate distribution facilitates the discovery of heavy-tailed or supergaussian sources in the neuron's inputs. We show that the exponential output distribution is irrelevant to these dynamics and that, furthermore, while sparseness is sufficient, it is not necessary. The intrinsic plasticity mechanism drives the neuron's threshold large and positive, and we prove that in such a regime, the neuron will find supergaussian sources; equally, however, if the threshold is large and negative (an antisparse regime), it will also find supergaussian sources. Away from such extremes, the neuron can also discover subgaussian sources. By examining a neuron with a fixed sigmoidal nonlinearity and considering the synaptic strength fixed-point structure in the two-dimensional parameter space defined by the neuron's threshold and gain, we show that this space is carved up into sub- and supergaussian-input-finding regimes, possibly with regimes of simultaneous stability of sub- and supergaussian sources or regimes of instability of all sources; a single gaussian source may also be stabilized by the presence of a nongaussian source. A neuron's operating point (essentially its threshold and gain coupled with its input statistics) therefore critically determines its computational repertoire. Intrinsic plasticity mechanisms induce trajectories in this parameter space but do not fundamentally modify it. Unless the trajectories cross critical boundaries in this space, intrinsic plasticity is irrelevant and the neuron's nonlinearity may be frozen with identical receptive field refinement dynamics.
33

Xie, Guo Cai, Yi Ding Fu, and Zhao Hui Li. "A Real-Time Fault Prewarning Approach to Generator Sets Based on Dynamic Threshold." Applied Mechanics and Materials 510 (February 2014): 248–53. http://dx.doi.org/10.4028/www.scientific.net/amm.510.248.

Abstract:
Fault prewarning is important for guaranteeing the safe and stable operation of Generator Sets (GS). In order to generate prewarnings quickly and accurately before failures or faults occur in GS, a real-time fault prewarning approach based on a dynamic threshold is put forward. This approach consists of five steps: operating-condition (or abnormal-event) synchronization, dynamic threshold selection, threshold analysis, fault detection and fault prewarning. The dynamic threshold (closely related to the operating condition or abnormal event of the GS) is the key to this approach, and it can be obtained by means of expert knowledge discovery. The approach can effectively reduce false positives and false negatives for GS faults, and its effectiveness is validated by applications and practice at the Gezhouba power plant.
34

Liao, Weihua, Zhiheng Zhang, and Weiguo Jiang. "Concept Lattice Method for Spatial Association Discovery in the Urban Service Industry." ISPRS International Journal of Geo-Information 9, no. 3 (2020): 155. http://dx.doi.org/10.3390/ijgi9030155.

Abstract:
A relative lag in research methods, technical means and research paradigms has restricted the rapid development of geography and urban computing. Hence, there is a certain gap between urban data and industry applications. In this paper, a spatial association discovery framework for the urban service industry based on a concept lattice is proposed. First, location data are used to form the formal context expressed by 0 and 1. Frequent closed itemsets and a concept lattice are computed on the basis of the formal context of the urban service industry. Frequent closed itemsets filter out redundant information in frequent itemsets, uniquely determine the complete set of all frequent itemsets, and can be orders of magnitude smaller than the latter. Second, spatial frequent closed itemsets and association rules discovery algorithms are designed and built based on the formal context. The inputs of the frequent closed itemsets discovery algorithms are the given formal context and a frequency threshold value, while the outputs are all frequent closed itemsets and the partial order relationships between them. Newly added attributes create new concepts, guaranteeing the uniqueness of each new spatial association concept. The inputs of the spatial association rules discovery algorithms are the frequent closed itemsets and a confidence threshold value, and a rule is confident if and only if its confidence degree is not less than the confidence threshold. Third, the spatial association of the urban service industry in Nanning, China is taken as a case to verify the method. The results are basically consistent with the spatial distribution of the urban service industry in Nanning City. This study enriches the theories and methods of geography as well as urban computing, and these findings can provide guidance for location-based service planning and management of urban services.
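The two thresholds named here, minimum support over the 0/1 formal context and the confidence test for rules, can be illustrated on a toy context; the locations, attributes and threshold values below are made up.

```python
from itertools import combinations

context = {            # location -> attributes present there (the 0/1 context)
    "r1": {"catering", "retail"},
    "r2": {"catering", "retail", "finance"},
    "r3": {"catering", "finance"},
    "r4": {"retail"},
}

def support(itemset):
    """Fraction of locations whose attribute set contains the itemset."""
    return sum(itemset <= attrs for attrs in context.values()) / len(context)

min_sup, min_conf = 0.5, 0.6
items = sorted(set().union(*context.values()))
for a, b in combinations(items, 2):       # one rule direction shown for brevity
    sup = support({a, b})
    if sup >= min_sup:
        conf = sup / support({a})         # rule a -> b is confident iff conf >= min_conf
        if conf >= min_conf:
            print(f"{a} -> {b}  support={sup:.2f}  confidence={conf:.2f}")
```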
35

Raihan, Ahmed Shoyeb, Hamed Khosravi, Srinjoy Das, and Imtiaz Ahmed. "Accelerating material discovery with a threshold-driven hybrid acquisition policy-based Bayesian optimization." Manufacturing Letters 41 (October 2024): 1300–1311. http://dx.doi.org/10.1016/j.mfglet.2024.09.157.

36

Ali, Ahmed, Qin Qin, and Wei Wang. "Discovery potential of stable and near-threshold doubly heavy tetraquarks at the LHC." Physics Letters B 785 (October 2018): 605–9. http://dx.doi.org/10.1016/j.physletb.2018.09.018.

37

Saif-Ur-Rehman, Jawad Ashraf, Asad Habib, and Abdus Salam. "Top-K Miner: top-K identical frequent itemsets discovery without user support threshold." Knowledge and Information Systems 48, no. 3 (2015): 741–62. http://dx.doi.org/10.1007/s10115-015-0907-7.

38

Kruszka, Paul, and Michael Silberbach. "The state of Turner syndrome science: Are we on the threshold of discovery?" American Journal of Medical Genetics Part C: Seminars in Medical Genetics 181, no. 1 (2019): 4–6. http://dx.doi.org/10.1002/ajmg.c.31688.

39

Li, Chao, and Dermot J. Hayes. "Price Discovery on the International Soybean Futures Markets: A Threshold Co-Integration Approach." Journal of Futures Markets 37, no. 1 (2016): 52–70. http://dx.doi.org/10.1002/fut.21794.

40

Dixit, Chandra Prakash, and Nilay Khare. "A Survey of Frequent Subgraph Mining Algorithms." International Journal of Engineering & Technology 7, no. 3.8 (2018): 58. http://dx.doi.org/10.14419/ijet.v7i3.8.15218.

Abstract:
Graphs are a broadly used data structure and are very useful for representing, analyzing and processing real-world data. Evolving graphs are graphs that change frequently: their size increases or decreases, i.e., their number of edges or/and vertices changes. Mining is the process of knowledge discovery in graphs. Detecting patterns that repeat more often than a predefined threshold in a graph is known as frequent subgraph mining (FSM). Graphs representing real-world data can be very large, and handling such graphs requires special mechanisms and algorithms. This review paper surveys present FSM techniques and gives a comparative study of them.
41

Baiden-Amissah, Josephine, Blestmond A. Brako, Gordon Foli, Jonathan Quaye-Ballard, and Simon K. Y. Gawu. "Using GIS as a Spatial Support Tool to Discriminate between True and False Geochemical Anomalies at the Northern Margin of the Asankragwa Gold Belt in the Paleoproterozoic Kumasi Basin, Ghana." Earth Science Malaysia 8, no. 1 (2023): 61–69. https://doi.org/10.26480/esmy.01.2024.61.69.

Abstract:
This study uses Geographical Information Systems (GIS) as a support tool for gold exploration to distinguish between true and false soil geochemical anomalies in the northern segment of the Asankragwa gold belt in the Paleoproterozoic Kumasi Basin, Ghana. The main objective is to identify potentially mineralized zones within the northern segment of the belt by integrating GIS, structural and soil geochemical datasets. To reduce the probability of delineating false anomalies as true anomalies, diverse graphical threshold determination methods, namely the histogram, box plot, QQ plot, mean+2SD, Jenks Natural Break and probability plot, as well as advanced threshold determination methods such as the Mean Absolute Deviation (MAD) and double MAD, were employed. The threshold values established from the graphical methods are 175 ppb, 96 ppb, 335 ppb, 384 ppb and 100 ppb respectively, while the MAD and double MAD methods produced threshold values of 74.5 ppb and 130 ppb respectively. Given the high variability in the threshold values, anomalous areas were delineated using the thresholds of 100 ppb and 130 ppb established from the Jenks Natural Break and probability plot and from the double MAD method, respectively. About 40%, 35% and 20% of the selected anomalous areas are located within soils overlying volcanoclastic, clastic sedimentary and marine volcanoclastic rocks respectively. These anomalies are not lithologically controlled, since they are not confined to a particular rock type. Superimposing the selected anomalies over geological structures and Landsat imagery, 90% of the anomalies can be linked to the NE-SW geological structures. Upon integrating the anomalies with structural data and illegal mining activities and using Boolean analysis, not all anomalies may be true anomalies. True gold anomalies within the Asankragwa gold belt are consistent with the central > northern > southern ordering of the belt's portions. Hence, the discovery of gold in the Asankragwa gold belt has been enhanced using GIS as a spatial support tool.
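Three of the threshold rules named above are one-liners to compute. In the sketch below, MAD is computed as a median absolute deviation scaled by 1.4826, the usual robust convention (the paper's exact variant may differ), and double MAD estimates the upper-tail scale from values above the median only. The sample values and k = 2 are illustrative.

```python
import numpy as np

def mean_2sd(x):
    return x.mean() + 2 * x.std(ddof=1)

def mad_threshold(x, k=2.0):
    med = np.median(x)
    return med + k * 1.4826 * np.median(np.abs(x - med))

def double_mad_upper(x, k=2.0):
    med = np.median(x)
    upper = x[x >= med]                  # scale from the upper half only
    return med + k * 1.4826 * np.median(np.abs(upper - med))

# Illustrative soil Au values (ppb) with one strong outlier.
au_ppb = np.array([12, 18, 25, 30, 41, 55, 60, 75, 90, 410], dtype=float)
print(mean_2sd(au_ppb), mad_threshold(au_ppb), double_mad_upper(au_ppb))
```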
42

May, Susanne, and Carol Bigelow. "Modeling Nonlinear Dose-Response Relationships in Epidemiologic Studies: Statistical Approaches and Practical Challenges." Dose-Response 3, no. 4 (2005): dose-response.0. http://dx.doi.org/10.2203/dose-response.003.04.004.

Abstract:
Non-linear dose response relationships pose statistical challenges for their discovery. Even when an initial linear approximation is followed by other approaches, the results may be misleading and, possibly, preclude altogether the discovery of the nonlinear relationship under investigation. We review a variety of straightforward statistical approaches for detecting nonlinear relationships and discuss several factors that hinder their detection. Our specific context is that of epidemiologic studies of exposure-outcome associations and we focus on threshold and J-effect dose response relationships. The examples presented reveal that no single approach is universally appropriate; rather, these (and possibly other) nonlinearities require for their discovery a variety of both graphical and numeric techniques.
43

Wagler, Jörg, Karsten Meiner, Florian Gattnar, Alexandra Thiere, Michael Stelter, and Alexandros Charitos. "Sodium Dithiocuprate(I) Dodecahydrate [Na3(H2O)12][CuS2], the First Crystal Structure of an Exclusively H-Bonded Dithiocuprate(I) Ion, and Its Formation in the Alkaline Sulfide Treatment of Copper Ore Concentrates." Crystals 15, no. 6 (2025): 501. https://doi.org/10.3390/cryst15060501.

Abstract:
This article presents the single-crystal structure of the complex salt sodium dithiocuprate(I) dodecahydrate Na3CuS2·12(H2O), i.e., [Na3(H2O)12][CuS2], which forms in the high-sulfide concentrations of the alkaline solutions used for arsenic separation from copper concentrates. It features a linear hydrogen-bonded dithiocuprate(I) anion, a novelty in crystallographically characterized thiocuprates. During the study of the alkaline sulfide leaching of Chilean copper concentrates, an analytical investigation of the solution led to the detection of this complex. This study aimed to understand the chemical behavior of the leaching solution by identifying existing ions, which facilitated the discovery of the complex using single-crystal analysis. The newly discovered complex was also synthesized from a modeling solution based on the leaching solution recipe for arsenic removal, allowing for further crystal characterization through Raman and XRD analysis. By estimating the sodium sulfide threshold concentration that enhanced the formation of the copper disulfide complex, this study defined the upper technical threshold limit of sulfide concentration for the economic development of alkaline sulfide leaching to remove arsenic.
APA, Harvard, Vancouver, ISO, and other styles
44

Huynh, M. T., A. Hopkins, R. Norris, et al. "The Completeness and Reliability of Threshold and False-discovery Rate Source Extraction Algorithms for Compact Continuum Sources." Publications of the Astronomical Society of Australia 29, no. 3 (2012): 229–43. http://dx.doi.org/10.1071/as11026.

Full text
Abstract:
The process of determining the number and characteristics of sources in astronomical images is so fundamental to a large range of astronomical problems that it is perhaps surprising that no standard procedure has ever been defined that has well-understood properties with a high degree of statistical rigour on completeness and reliability. The Evolutionary Map of the Universe (EMU) survey with the Australian Square Kilometre Array Pathfinder (ASKAP), a continuum survey of the Southern Hemisphere up to declination +30°, aims to utilise an automated source identification and measurement approach that is demonstrably optimal, to maximise the reliability, utility and robustness of the resulting radio source catalogues. A key stage in source extraction methods is the background estimation (background level and noise level) and the choice of a threshold high enough to reject false sources, yet not so high that the catalogues are significantly incomplete. In this analysis, we present results from testing the SExtractor, Selavy (Duchamp), and SFIND source extraction tools on simulated data. In particular, the effects of background estimation, threshold and false-discovery rate settings are explored. For parameters that give similar completeness, we find the false-discovery rate method employed by SFIND results in a more reliable catalogue compared to the peak threshold methods of SExtractor and Selavy.
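
The false-discovery rate idea that distinguishes SFIND from peak-threshold extractors can be illustrated with a generic Benjamini-Hochberg selection over per-pixel p-values. The sketch below is not the SFIND implementation; it assumes Gaussian noise and known background and noise levels.

```python
import numpy as np
from scipy.stats import norm

def fdr_detect(image, background, noise, alpha=0.05):
    """Flag 'source' pixels with the Benjamini-Hochberg step-up
    procedure on one-sided Gaussian p-values. A generic illustration
    of FDR thresholding, not the SFIND algorithm itself."""
    snr = (image - background) / noise
    p = norm.sf(snr).ravel()                   # one-sided p-values
    order = np.argsort(p)
    m = p.size
    crit = alpha * np.arange(1, m + 1) / m     # BH critical line
    passed = p[order] <= crit
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                     # reject the k smallest p
    return mask.reshape(image.shape)

# Toy image: Gaussian noise plus one injected compact source
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:33, 30:33] += 6.0
print(fdr_detect(img, background=0.0, noise=1.0).sum(), "pixels flagged")
```

Unlike a fixed n-sigma peak cut, the BH criterion adapts the effective threshold to the p-value distribution, bounding the expected fraction of false detections rather than the per-pixel false-positive rate.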
APA, Harvard, Vancouver, ISO, and other styles
45

Fitriyanto, Rachmad, and Mohamad Ardi. "FEATURE SELECTION COMPARATIVE PERFORMANCE FOR UNSUPERVISED LEARNING ON CATEGORICAL DATASET." Jurnal Techno Nusa Mandiri 22, no. 1 (2025): 61–69. https://doi.org/10.33480/techno.v22i1.6512.

Full text
Abstract:
In the era of big data, Knowledge Discovery in Databases (KDD) is vital for extracting insights from extensive datasets. This study investigates feature selection for clustering categorical data in an unsupervised learning context. Given that an insufficient number of features can impede the extraction of meaningful patterns, we evaluate two techniques—Chi-Square and Mutual Information—to refine a dataset derived from questionnaires on college library visitor characteristics. The original dataset, containing 24 items, was preprocessed and partitioned into five subsets: one via Chi-Square and four via Mutual Information using different dependency thresholds (a low-mid-high scheme and dynamic quartile thresholds: Q1toMax, Q2toMax, and Q3toMax). K-Means clustering was applied across nine variations of K (ranging from 2 to 10), with clustering performance assessed using the silhouette score and Davies-Bouldin Index (DBI). Results reveal that while the Mutual Information approach with a Q3toMax threshold achieves an optimal silhouette score at K=7, it retains only 4 features—insufficient for comprehensive analysis based on domain requirements. Conversely, the Chi-Square method retains 18 features and yields the best DBI at K=9, better capturing the intrinsic characteristics of the data. These findings underscore the importance of aligning feature selection techniques with both clustering quality and domain knowledge, and highlight the need for further research on optimal dependency threshold determination in Mutual Information.
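
A minimal sketch of the Mutual Information route described above, assuming the dependency score for each feature is its mean pairwise mutual information with the other features (the abstract does not pin down the exact scoring target), with one-hot encoding as one simple workaround for running K-Means on categorical data:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import (mutual_info_score, silhouette_score,
                             davies_bouldin_score)

def mi_select(df, quantile=0.75):
    """Keep features whose mean pairwise mutual information with all
    other features is at or above the given quantile of the scores
    (a Q3toMax-style cut when quantile=0.75)."""
    scores = pd.Series({
        c: np.mean([mutual_info_score(df[c], df[o])
                    for o in df.columns if o != c])
        for c in df.columns
    })
    return scores[scores >= scores.quantile(quantile)].index.tolist()

# Usage on a hypothetical categorical questionnaire DataFrame `df`:
# kept = mi_select(df, quantile=0.75)
# X = pd.get_dummies(df[kept])        # one-hot encoding for K-Means
# for k in range(2, 11):
#     labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
#     print(k, silhouette_score(X, labels), davies_bouldin_score(X, labels))
```

The quantile parameter reproduces the paper's dynamic-threshold idea: raising it from Q1 to Q3 retains fewer, more strongly dependent features, which is exactly the trade-off the study observed between clustering scores and feature coverage.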
APA, Harvard, Vancouver, ISO, and other styles
46

Ray, Emily Miller, Xinyi Zhang, Lisette Dunham, Xianming Tan, Jennifer Elston Lafata, and Katherine Elizabeth Reeder-Hayes. "Development of a breast cancer-specific prognostic tool using CancerLinQ Discovery." Journal of Clinical Oncology 39, no. 28_suppl (2021): 275. http://dx.doi.org/10.1200/jco.2020.39.28_suppl.275.

Full text
Abstract:
275 Background: Oncologists often struggle to know which patients are near end of life to enable a timely transition to supportive care. We developed a breast cancer-specific prognostic tool, using electronic health record data from CancerLinQ Discovery (CLQD), to help identify patients at high risk of near-term death. We created multiple candidate models with varying thresholds for defining high risk that will be considered for future clinical use. Methods: We included patients with breast cancer diagnosed between 1/1/2000 and 6/1/2020 who had at least one encounter with vital signs and evidence of metastatic breast cancer (MBC). All encounters from 1/1/2000 to 7/5/2020 were included. We used multiple imputation (MI) to impute missing numeric variables and treated missing values as a new level for categorical variables. We sampled one encounter per patient and oversampled encounters within 30 days of death, so that the event rate (death within 30 days of an encounter) was about 10%. We randomly divided these patients into training (70%) and test (30%) datasets. We evaluated candidate predictors of the event using logistic regression with forward variable selection. Candidate predictors included age, vital signs, laboratory values, performance status, pain score, time since chemotherapy, ER/PR/HER2 receptor status, and the change from baseline and rate of change of numeric variables. We obtained a single final model by combining the logistic regression models resulting from the 10 MI training sets, and evaluated this final model on the MI test sets. We varied the alert threshold (i.e., the high-risk proportion) from 5% to 40%. Results: We identified 9,270 patients, representing 586,801 encounters. Significant predictors of mortality were: increased age, decreased age at diagnosis, negative change in body mass index, low albumin, high ALP, high AST, high WBC, low sodium, high creatinine, worse performance status, low pulse oximetry, increased age with increased creatinine, high pain score with no opiates, increased pulse rate, unknown/missing PR, opiate use in the past 3 months, and prior chemotherapy in the past 1 year but not the past 30 days. Candidate models had prediction accuracy of 70-89% and positive predictive value of 31-77%. Conclusions: Demographic and clinical variables can be used to predict the risk of death within 30 days of a clinical encounter for patients with MBC. Next steps include selection of a preferred model for clinical use, balancing performance characteristics and acceptability, followed by implementation and evaluation of the prognostic tool in the clinic. Candidate models vary by threshold, i.e., the percentage of patients assumed to be at high risk, for the outcome of death within 30 days among patients with metastatic breast cancer. [Table: see text]
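
The alert-threshold sweep described above can be sketched generically: given a fitted model's predicted probabilities, flag the top q% of encounters as high risk and tabulate accuracy and positive predictive value as q varies from 5% to 40%. Variable names below are illustrative, not from the study.

```python
import numpy as np

def sweep_alert_thresholds(p_death, died_30d,
                           fractions=np.arange(0.05, 0.41, 0.05)):
    """For each candidate high-risk fraction q, flag the top q% of
    encounters by predicted probability and report accuracy and
    positive predictive value (PPV)."""
    p_death = np.asarray(p_death, dtype=float)
    died_30d = np.asarray(died_30d, dtype=bool)
    rows = []
    for q in fractions:
        cut = np.quantile(p_death, 1.0 - q)    # probability cutoff
        flag = p_death >= cut                  # flagged as high risk
        acc = float(np.mean(flag == died_30d))
        ppv = float(died_30d[flag].mean()) if flag.any() else float("nan")
        rows.append((round(float(q), 2), acc, ppv))
    return rows
```

Lowering the flagged fraction raises PPV at the cost of missing more deaths, which is the performance-versus-acceptability balance the authors leave to the model-selection step.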
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, J. J., C. A. Tsai, H. Moon, H. Ahn, J. J. Young, and C. H. Chen. "Decision threshold adjustment in class prediction." SAR and QSAR in Environmental Research 17, no. 3 (2006): 337–52. http://dx.doi.org/10.1080/10659360600787700.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Yuting, Liqun Tang, Yiping Liu, et al. "An Objective Injury Threshold for the Maximum Principal Strain Criterion for Brain Tissue in the Finite Element Head Model and Its Application." Bioengineering 11, no. 9 (2024): 918. http://dx.doi.org/10.3390/bioengineering11090918.

Full text
Abstract:
Although the finite element head model (FEHM) has been widely utilized to analyze injury locations and patterns in traumatic brain injury, significant controversy persists regarding the selection of a mechanical injury variable and its corresponding threshold. This paper aims to determine an objective injury threshold for maximum principal strain (MPS) through a novel data-driven method, and to validate and apply it. We extract the peak responses from all elements across 100 head impact simulations to form a dataset, and then determine the objective injury threshold by analyzing the relationship between the combined injury degree and the threshold according to the stationary value principle. Using an occipital impact case from a clinical report as an example, we evaluate the accuracy of the injury prediction based on the new threshold. The results show that the injury area predicted by finite element analysis closely matches the main injury area observed in CT images, without the issue of over- or underestimating the injury due to an unreasonable threshold. Furthermore, by applying this threshold to the finite element analysis of designed occipital impacts, we observe, for the first time, supra-tentorium cerebelli injury, which is related to visual memory impairment. This discovery may indicate the biomechanical mechanism of visual memory impairment after occipital impacts reported in clinical cases.
APA, Harvard, Vancouver, ISO, and other styles
49

Tucker, Virginia M. "Threshold concepts and core competences in the library and information science (LIS) domain: Methodologies for discovery." Library and Information Research 41, no. 125 (2018): 61–80. http://dx.doi.org/10.29173/lirg750.

Full text
Abstract:
Researchers have used a variety of methodologies for investigating threshold concepts, and this paper considers these approaches for library and information science (LIS) domains. The focus is on specific benefits of constructivist grounded theory for eliciting evidence of core knowledge, and elements of research design for this purpose are discussed, including the importance of collecting experiences from the learners themselves as well as effective protocols for data gathering and analysis through the use of active tasks and semi-structured interviews. The discussion extends to implications of the research design for how it may be applied to thematic analysis more broadly and to discovering critical knowledge that does not have the characteristics of threshold concepts but which may indicate attributes of core competences in the LIS discipline.
APA, Harvard, Vancouver, ISO, and other styles
50

Milanese, Stefania, Maria Luisa De Giorgi, Luis Cerdán, et al. "Amplified Spontaneous Emission Threshold Dependence on Determination Method in Dye-Doped Polymer and Lead Halide Perovskite Waveguides." Molecules 27, no. 13 (2022): 4261. http://dx.doi.org/10.3390/molecules27134261.

Full text
Abstract:
Nowadays, the search for novel active materials for laser devices is proceeding faster and faster thanks to the development of innovative materials able to combine excellent stimulated emission properties with low-cost synthesis and processing techniques. In this context, amplified spontaneous emission (ASE) properties are typically investigated to characterize the potentiality of a novel material for lasers, and a low ASE threshold is used as the key parameter to select the best candidate. However, several different methods are currently used to define the ASE threshold, hindering meaningful comparisons among various materials. In this work, we quantitatively investigate the ASE threshold dependence on the method used to determine it in thin films of dye-polymer blends and lead halide perovskites. We observe a systematic ASE threshold dependence on the method for all the different tested materials, and demonstrate that the best method choice depends on the kind of information one wants to extract. In particular, the methods that provide the lowest ASE threshold values are able to detect the excitation regime of early-stage ASE, whereas methods that are mostly spread in the literature return higher thresholds, detecting the excitation regime in which ASE becomes the dominant process in the sample emission. Finally, we propose a standard procedure to properly characterize the ASE threshold, in order to allow comparisons between different materials.
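
One of the threshold-determination conventions the paper compares, the crossing point of two linear segments fitted to the sub- and above-threshold regimes of output intensity versus excitation fluence, can be sketched as below. This is a generic illustration under assumed input arrays; other conventions (e.g. tracking the collapse of the emission linewidth) generally return different values, which is the paper's central point.

```python
import numpy as np

def ase_threshold_two_segment(fluence, intensity):
    """Estimate the ASE threshold as the excitation fluence at which
    two line segments, fitted in log-log space to the low- and
    high-excitation regimes, intersect."""
    x, y = np.log10(fluence), np.log10(intensity)
    best_thr, best_sse = None, np.inf
    for i in range(2, len(x) - 2):             # candidate split points
        a1 = np.polyfit(x[:i], y[:i], 1)       # below-threshold segment
        a2 = np.polyfit(x[i:], y[i:], 1)       # above-threshold segment
        if abs(a1[0] - a2[0]) < 1e-12:         # parallel lines: no crossing
            continue
        sse = (np.sum((y[:i] - np.polyval(a1, x[:i])) ** 2)
               + np.sum((y[i:] - np.polyval(a2, x[i:])) ** 2))
        if sse < best_sse:
            x_cross = (a2[1] - a1[1]) / (a1[0] - a2[0])
            best_thr, best_sse = 10.0 ** x_cross, sse
    return best_thr
```

Because this crossing-point method keys on where ASE dominates the total emission, it tends to return a higher value than methods sensitive to the onset of early-stage ASE, consistent with the systematic spread the authors report.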
APA, Harvard, Vancouver, ISO, and other styles