Academic literature on the topic 'Usage constraint inference'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Usage constraint inference.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Usage constraint inference"

1

Zhang, Chi, Bryant Chen, and Judea Pearl. "A Simultaneous Discover-Identify Approach to Causal Inference in Linear Models." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10318–25. http://dx.doi.org/10.1609/aaai.v34i06.6595.

Abstract:
Modern causal analysis involves two major tasks, discovery and identification. The first aims to learn a causal structure compatible with the available data, the second leverages that structure to estimate causal effects. Rather than performing the two tasks in tandem, as is usually done in the literature, we propose a symbiotic approach in which the two are performed simultaneously for mutual benefit; information gained through identification helps causal discovery and vice versa. This approach enables the usage of Verma constraints, which remain dormant in constraint-based methods of discovery, and permit us to learn more complete structures, hence identify a larger set of causal effects than previously achievable with standard methods.
2

Tian, Lu, Rui Wang, Tianxi Cai, and Lee-Jen Wei. "The Highest Confidence Density Region and Its Usage for Joint Inferences about Constrained Parameters." Biometrics 67, no. 2 (September 3, 2010): 604–10. http://dx.doi.org/10.1111/j.1541-0420.2010.01486.x.

3

Peluso, Valentino, Roberto Giorgio Rizzo, and Andrea Calimera. "Efficacy of Topology Scaling for Temperature and Latency Constrained Embedded ConvNets." Journal of Low Power Electronics and Applications 10, no. 1 (March 13, 2020): 10. http://dx.doi.org/10.3390/jlpea10010010.

Abstract:
Embedded Convolutional Neural Networks (ConvNets) are driving the evolution of ubiquitous systems that can sense and understand the environment autonomously. Due to their high complexity, aggressive compression is needed to meet the specifications of portable end-nodes. A variety of algorithmic optimizations are available today, from custom quantization and filter pruning to modular topology scaling, which enable fine-tuning of the hyperparameters and the right balance between quality, performance and resource usage. Nonetheless, the implementation of systems capable of sustaining continuous inference over a long period is still a primary source of concern since the limited thermal design power of general-purpose embedded CPUs prevents execution at maximum speed. Neglecting this aspect may result in substantial mismatches and the violation of the design constraints. The objective of this work was to assess topology scaling as a design knob to control the performance and the thermal stability of inference engines for image classification. To this aim, we built a characterization framework to inspect both the functional (accuracy) and non-functional (latency and temperature) metrics of two ConvNet models, MobileNet and MnasNet, ported onto a commercial low-power CPU, the ARM Cortex-A15. Our investigation reveals that different latency constraints can be met even under continuous inference, yet with a severe accuracy penalty forced by thermal constraints. Moreover, we empirically demonstrate that thermal behavior does not benefit from topology scaling as the on-chip temperature still reaches critical values affecting reliability and user satisfaction.
4

Kang, James Jin, Kiran Fahd, and Sitalakshmi Venkatraman. "An Enhanced Inference Algorithm for Data Sampling Efficiency and Accuracy Using Periodic Beacons and Optimization." Big Data and Cognitive Computing 3, no. 1 (January 16, 2019): 7. http://dx.doi.org/10.3390/bdcc3010007.

Abstract:
Transferring data from a sensor or monitoring device in electronic health, vehicular informatics, or Internet of Things (IoT) networks has had the enduring challenge of improving data accuracy with relative efficiency. Previous works have proposed the use of an inference system at the sensor device to minimize the data transfer frequency as well as the size of data to save network usage and battery resources. This has been implemented using various algorithms in sampling and inference, with a tradeoff between accuracy and efficiency. This paper proposes to enhance the accuracy without compromising efficiency by introducing new algorithms in sampling through a hybrid inference method. The experimental results show that accuracy can be significantly improved, whilst the efficiency is not diminished. These algorithms will contribute to saving operation and maintenance costs in data sampling, where computational and battery resources are constrained and limited, such as in the wireless personal area networks that have emerged with IoT networks.
5

Scioscia, Floriano, Michele Ruta, Giuseppe Loseto, Filippo Gramegna, Saverio Ieva, Agnese Pinto, and Eugenio Di Sciascio. "A Mobile Matchmaker for the Ubiquitous Semantic Web." International Journal on Semantic Web and Information Systems 10, no. 4 (October 2014): 77–100. http://dx.doi.org/10.4018/ijswis.2014100104.

Abstract:
The Semantic Web and Internet of Things visions are converging toward the so-called Semantic Web of Things (SWoT). It aims to enable smart semantic-enabled applications and services in ubiquitous contexts. Due to architectural and performance issues, it is currently impractical to use existing Semantic Web reasoners. They are resource consuming and are basically optimized for standard inference tasks on large ontologies. On the contrary, SWoT use cases generally require quick decision support through semantic matchmaking in resource-constrained environments. This paper presents Mini-ME, a novel mobile inference engine designed from the ground up for the SWoT. It supports Semantic Web technologies and implements both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction, covering) inference services for moderately expressive knowledge bases. In addition to an architectural and functional description, usage scenarios are presented and an experimental performance evaluation is provided both on a PC testbed (against other popular Semantic Web reasoners) and on a smartphone.
6

Antolín-Díaz, Juan, and Juan F. Rubio-Ramírez. "Narrative Sign Restrictions for SVARs." American Economic Review 108, no. 10 (October 1, 2018): 2802–29. http://dx.doi.org/10.1257/aer.20161852.

Abstract:
We identify structural vector autoregressions using narrative sign restrictions. Narrative sign restrictions constrain the structural shocks and/or the historical decomposition around key historical events, ensuring that they agree with the established narrative account of these episodes. Using models of the oil market and monetary policy, we show that narrative sign restrictions tend to be highly informative. Even a single narrative sign restriction may dramatically sharpen and even change the inference of SVARs originally identified via traditional sign restrictions. Our approach combines the appeal of narrative methods with the popularized usage of traditional sign restrictions. (JEL C32, E52, Q35, Q43)
7

Lorenz, David. "Form does not follow function, but variation does: the origin and early usage of possessive have got in English." English Language and Linguistics 20, no. 3 (October 25, 2016): 487–510. http://dx.doi.org/10.1017/s1360674316000332.

Abstract:
This article investigates the emergence and early use of possessive have got in English. Two hypotheses about its emergence are tested on historical data (c.1460–1760). One hypothesis is based on communicative functionality, suggesting that got was inserted as a ‘pattern preserver’ to compensate for the increased reduction of have. The other hypothesis invokes the conventionalization of an invited inference, thus a (non-functional) semantic shift which does not immediately serve to support a communicative function. The diachronic evidence is found to support only the latter hypothesis. In the second part the early stage of the variation of have and have got is investigated (c.1720–50). The results show a strong register difference, but also a division of labour between the variants that can be explained by the syntactic and semantic properties of have got as having emerged out of a present perfect of get. Thus, the variation is organized in a functionally motivated way. It is concluded that in the development of possessive have got functional constraints apply to the variation early on, but do not play an evident role in the emergence of the new variant. This suggests that functional motivations are a directing force but not necessarily a driving force in language change.
8

Braun, Henry I., and Judith D. Singer. "Assessment for Monitoring of Education Systems: International Comparisons." ANNALS of the American Academy of Political and Social Science 683, no. 1 (May 2019): 75–92. http://dx.doi.org/10.1177/0002716219843804.

Abstract:
Over the last two decades, with the increase in both numbers of participating jurisdictions and media attention, international large-scale assessments (ILSAs) have come to play a more salient role in global education policies than they once did. This has led to calls for greater transparency with regard to instrument development and closer scrutiny of the use of instruments in education policy. We begin with a brief review of the history of ILSAs and describe the requirements and constraints that shape ILSA design, implementation, and analysis. We then evaluate the rationales of employing ILSA results for different purposes, ranging from those we argue are most appropriate (comparative description) to least appropriate (causal inference). We cite examples of ILSA usage from different countries, with particular attention to the widespread misinterpretations and misuses of country rankings based on average scores on an assessment (e.g., literacy or numeracy). Looking forward, we offer suggestions on how to enhance the constructive roles that ILSAs play in informing education policy.
9

Ismail, Ali, Taghlub Ryhan, and Zahrra Abdullah. "Usage of Alternative And Complementary Medicine Among Patients With Diabetes Mellitus at Diabetic Clinic in Kirkuk City / Iraq." Al-Kitab Journal for Pure Sciences 3, no. 2 (October 17, 2020): 239–46. http://dx.doi.org/10.32441/kjps.03.02.p22.

Abstract:
There has been limited research on the use of complementary and alternative medicine (CAM) among patients with diabetes mellitus (DM), especially in primary-care settings. This study seeks to understand the prevalence, types, expenditures, attitudes, beliefs, and perceptions of CAM use among patients with DM visiting an outpatient diabetic clinic. Use of CAM has increased in recent years, and we assessed the extent of CAM use by patients with diabetes mellitus despite its limited evidence base. The aim of this study was to determine CAM use among people with diagnosed diabetes mellitus at the diabetic clinic of Azadi Teaching Hospital. A prospective descriptive cross-sectional study used face-to-face interview questionnaires and self-administered anonymous surveys to obtain results from 417 patients attending Azadi Teaching Hospital in Kirkuk city, Iraq. The data were analyzed by cross-tabulation (chi-square test), with a p value of 0.05 or less considered significant. Of the 417 participants surveyed, around two-thirds used some form of CAM, with biologically based treatments the most widely used modalities. The results of a logistic regression analysis showed that the pattern of parallel use was most evident in the groups aged over 40. Likewise, many sociodemographic and health-related characteristics are associated with patterns of parallel CAM use. In conclusion, use of CAM, particularly biologically based CAM therapies, is common and is more likely among those with diabetes mellitus. The evidence is still insufficient to draw definitive conclusions about the efficacy of individual herbs and supplements for diabetes; however, they appear to be generally safe. The available data suggest that several supplements may warrant further investigation.
10

Mannweiler, C., C. Lottermann, A. Klein, J. Schneider, and H. D. Schotten. "Cyber-physical networking for wireless mesh infrastructures." Advances in Radio Science 10 (September 18, 2012): 113–18. http://dx.doi.org/10.5194/ars-10-113-2012.

Abstract:
This paper presents a novel approach for cyber-physical network control. "Cyber-physical" refers to the inclusion of different parameters and information sources, ranging from physical sensors (e.g. energy, temperature, light) to conventional network information (bandwidth, delay, jitter, etc.) to logical data providers (inference systems, user profiles, spectrum usage databases). For a consistent processing, collected data is represented in a uniform way, analyzed, and provided to dedicated network management functions and network services, both internally and, through an according API, to third party services. Specifically, in this work, we outline the design of sophisticated energy management functionalities for a hybrid wireless mesh network (WLAN for both backhaul traffic and access, GSM for access only), disposing of autonomous energy supply, in this case solar power. Energy consumption is optimized under the presumption of fluctuating power availability and considerable storage constraints, thus influencing, among others, handover and routing decisions. Moreover, advanced situation-aware auto-configuration and self-adaptation mechanisms are introduced for an autonomous operation of the network. The overall objective is to deploy a robust wireless access and backbone infrastructure with minimal operational cost and effective, cyber-physical control mechanisms, especially dedicated for rural or developing regions.

Dissertations / Theses on the topic "Usage constraint inference"

1

Saied, Mohamed Aymen. "Inferring API Usage Patterns and Constraints: A Holistic Approach." Thesis, 2016. http://hdl.handle.net/1866/18471.

Abstract:
Software systems increasingly depend on external libraries and frameworks. Software developers reuse the functionality provided by these libraries through their Application Programming Interfaces (APIs). Hence, developers have to cope with the complexity of the APIs needed to accomplish their work and overcome the lack of usage directives in the API documentation. In this thesis, we propose a holistic approach that deals with the library usability problem at three levels of granularity. In the first step, we focus on the method level. We propose to identify usage constraints related to method parameters by analyzing only the library source code. We applied program analysis strategies to detect four critical types of usage constraints. In the second step, we change scale to focus on API usage pattern mining, in order to help developers learn common ways to use complementary API methods. We first propose a client-based technique for mining multilevel API usage patterns that exhibit the co-usage relationships between API methods across interfering usage scenarios. We then propose a library-based technique to overcome the strong constraint of client-program selection; it infers API usage patterns through the analysis of structural and semantic relationships between API methods. Finally, we propose a cooperative usage pattern mining technique that combines client-based and library-based mining, taking advantage at the same time of the precision of the client-based technique and of the generalizability of the library-based technique. As a last contribution of this thesis, we target a higher level of library reuse. We present a novel approach to automatically identify usage patterns of third-party libraries that are commonly used together, helping developers discover reuse opportunities and pick complementary libraries that may be relevant for their projects.
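To make the idea of client-based co-usage mining concrete, the following is a minimal Python sketch, not the thesis's multilevel technique: it takes a set of client usage scenarios, each represented as the set of API methods it calls, and reports method pairs whose co-occurrence support exceeds a threshold. The function name `mine_co_usage_pairs` and the example scenarios are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def mine_co_usage_pairs(scenarios, min_support=0.5):
    """Return API-method pairs that co-occur in at least `min_support`
    of the client usage scenarios (a crude stand-in for pattern mining)."""
    pair_counts = Counter()
    for methods in scenarios:
        for pair in combinations(sorted(set(methods)), 2):
            pair_counts[pair] += 1
    n = len(scenarios)
    return {pair: count / n
            for pair, count in pair_counts.items()
            if count / n >= min_support}

# Hypothetical usage scenarios extracted from client programs.
scenarios = [
    {"Socket.connect", "Socket.send", "Socket.close"},
    {"Socket.connect", "Socket.recv", "Socket.close"},
    {"Socket.connect", "Socket.close"},
]
print(mine_co_usage_pairs(scenarios, min_support=0.9))
# {('Socket.close', 'Socket.connect'): 1.0}
```

In practice, client-based mining of this kind depends on how many and which client programs are available, which is exactly the limitation the library-based and cooperative techniques described above are meant to address.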

Book chapters on the topic "Usage constraint inference"

1

Kahn, David M., and Jan Hoffmann. "Exponential Automatic Amortized Resource Analysis." In Lecture Notes in Computer Science, 359–80. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45231-5_19.

Abstract:
Automatic amortized resource analysis (AARA) is a type-based technique for inferring concrete (non-asymptotic) bounds on a program’s resource usage. Existing work on AARA has focused on bounds that are polynomial in the sizes of the inputs. This paper presents an extension of AARA to exponential bounds that preserves the benefits of the technique, such as compositionality and efficient type inference based on linear constraint solving. A key idea is the use of the Stirling numbers of the second kind as the basis of potential functions, which play the same role as the binomial coefficients in polynomial AARA. To formalize the similarities with the existing analyses, the paper presents a general methodology for AARA that is instantiated to the polynomial version, the exponential version, and a combined system with potential functions that are formed by products of Stirling numbers and binomial coefficients. The soundness of exponential AARA is proved with respect to an operational cost semantics and the analysis of representative example programs demonstrates the effectiveness of the new analysis.
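As a small aside to make the basis functions concrete: Stirling numbers of the second kind satisfy the recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1), analogous to Pascal's rule for the binomial coefficients used in polynomial AARA. The Python sketch below merely computes these numbers and contrasts their growth with the binomial coefficients; it illustrates the mathematical objects, not the paper's type system.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind S(n, k): the number of ways to
    partition an n-element set into k non-empty blocks."""
    if n == k:
        return 1          # covers S(0, 0) = 1 as well
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# For fixed k >= 2, S(n, k) grows exponentially in n, unlike the polynomial
# growth of the binomial coefficients that underpin polynomial AARA.
for n in range(6):
    print(n,
          [stirling2(n, k) for k in range(n + 1)],
          [comb(n, k) for k in range(n + 1)])
```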
2

Scioscia, Floriano, Michele Ruta, Giuseppe Loseto, Filippo Gramegna, Saverio Ieva, Agnese Pinto, and Eugenio Di Sciascio. "Mini-ME Matchmaker and Reasoner for the Semantic Web of Things." In Innovations, Developments, and Applications of Semantic Web and Information Systems, 262–94. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5042-6.ch010.

Abstract:
The Semantic Web of Things (SWoT) aims to support smart semantics-enabled applications and services in pervasive contexts. Due to architectural and performance issues, most Semantic Web reasoners are often impractical to be ported: they are resource consuming and are basically designed for standard inference tasks on large ontologies. On the contrary, SWoT use cases generally require quick decision support through semantic matchmaking in resource-constrained environments. This paper describes Mini-ME (the Mini Matchmaking Engine), a mobile inference engine designed from the ground up for the SWoT. It supports Semantic Web technologies and implements both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction, covering, bonus, difference) inference services for moderately expressive knowledge bases. In addition to an architectural and functional description, usage scenarios and experimental performance evaluation are presented on PC (against other popular Semantic Web reasoners), smartphone and embedded single-board computer testbeds.
3

Scioscia, Floriano, Michele Ruta, Giuseppe Loseto, Filippo Gramegna, Saverio Ieva, Agnese Pinto, and Eugenio Di Sciascio. "A Mobile Matchmaker for the Ubiquitous Semantic Web." In Mobile Computing and Wireless Networks, 994–1017. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8751-6.ch042.

Abstract:
The Semantic Web and Internet of Things visions are converging toward the so-called Semantic Web of Things (SWoT). It aims to enable smart semantic-enabled applications and services in ubiquitous contexts. Due to architectural and performance issues, it is currently impractical to use existing Semantic Web reasoners. They are resource consuming and are basically optimized for standard inference tasks on large ontologies. On the contrary, SWoT use cases generally require quick decision support through semantic matchmaking in resource-constrained environments. This paper presents Mini-ME, a novel mobile inference engine designed from the ground up for the SWoT. It supports Semantic Web technologies and implements both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction, covering) inference services for moderately expressive knowledge bases. In addition to an architectural and functional description, usage scenarios are presented and an experimental performance evaluation is provided both on a PC testbed (against other popular Semantic Web reasoners) and on a smartphone.
4

Bernhard, Stefan, Kristine Al Zoukra, and Christof Schütte. "From Non-Invasive Hemodynamic Measurements towards Patient-Specific Cardiovascular Diagnosis." In Data Mining, 2069–93. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch106.

Abstract:
The past two decades have seen impressive success in medical technology, generating novel experimental data at an unexpected rate. However, current computational methods cannot sufficiently manage the data analysis for interpretation, so clinical application is hindered, and the benefit for the patient is still small. Even though numerous physiological models have been developed to describe complex dynamical mechanisms, their clinical application is limited, because parameterization is crucial, and most problems are ill-posed and do not have unique solutions. However, this information deficit is imminent to physiological data, because the measurement process always contains contamination like artifacts or noise and is limited by a finite measurement precision. The lack of information in hemodynamic data measured at the outlet of the left ventricle, for example, induces an infinite number of solutions to the hemodynamic inverse problem (possible vascular morphologies that can represent the hemodynamic conditions) (Quick, 2001). Within this work, we propose that, despite these problems, the assimilation of morphological constraints, and the usage of statistical prior knowledge from clinical observations, reveals diagnostically useful information. If the morphology of the vascular network, for example, is constrained by a set of time series measurements taken at specific places of the cardiovascular system, it is possible to solve the hemodynamic inverse problem by a carefully designed mathematical forward model in combination with a Bayesian inference technique. The proposed cardiovascular system identification procedure allows us to deduce patient-specific information that can be used to diagnose a variety of cardiovascular diseases in an early state. In contrast to traditional inversion approaches, the novel method produces a distribution of physiologically interpretable models (patient-specific parameters and model states) that allow the identification of disease specific patterns that correspond to clinical diagnoses, enabling a probabilistic assessment of human health condition on the basis of a broad patient population. In the ongoing work we use this technique to identify arterial stenosis and aneurisms from anomalous patterns in signal and parameter space. The novel data mining procedure provides useful clinical information about the location of vascular defects like aneurisms and stenosis. We conclude that the Bayesian inference approach is able to solve the cardiovascular inverse problem and to interpret clinical data to allow a patient-specific model-based diagnosis of cardiovascular diseases. We think that the information-based approach provides a useful link between mathematical physiology and clinical diagnoses and that it will become constituent in the medical decision process in near future.
5

Bernhard, Stefan, Kristine Al Zoukra, and Christof Schütte. "From Non-Invasive Hemodynamic Measurements towards Patient-Specific Cardiovascular Diagnosis." In Quality Assurance in Healthcare Service Delivery, Nursing and Personalized Medicine, 1–25. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-61350-120-7.ch001.

Abstract:
The past two decades have seen impressive success in medical technology, generating novel experimental data at an unexpected rate. However, current computational methods cannot sufficiently manage the data analysis for interpretation, so clinical application is hindered, and the benefit for the patient is still small. Even though numerous physiological models have been developed to describe complex dynamical mechanisms, their clinical application is limited, because parameterization is crucial, and most problems are ill-posed and do not have unique solutions. However, this information deficit is imminent to physiological data, because the measurement process always contains contamination like artifacts or noise and is limited by a finite measurement precision. The lack of information in hemodynamic data measured at the outlet of the left ventricle, for example, induces an infinite number of solutions to the hemodynamic inverse problem (possible vascular morphologies that can represent the hemodynamic conditions) (Quick, 2001). Within this work, we propose that, despite these problems, the assimilation of morphological constraints, and the usage of statistical prior knowledge from clinical observations, reveals diagnostically useful information. If the morphology of the vascular network, for example, is constrained by a set of time series measurements taken at specific places of the cardiovascular system, it is possible to solve the hemodynamic inverse problem by a carefully designed mathematical forward model in combination with a Bayesian inference technique. The proposed cardiovascular system identification procedure allows us to deduce patient-specific information that can be used to diagnose a variety of cardiovascular diseases in an early state. In contrast to traditional inversion approaches, the novel method produces a distribution of physiologically interpretable models (patient-specific parameters and model states) that allow the identification of disease specific patterns that correspond to clinical diagnoses, enabling a probabilistic assessment of human health condition on the basis of a broad patient population. In the ongoing work we use this technique to identify arterial stenosis and aneurisms from anomalous patterns in signal and parameter space. The novel data mining procedure provides useful clinical information about the location of vascular defects like aneurisms and stenosis. We conclude that the Bayesian inference approach is able to solve the cardiovascular inverse problem and to interpret clinical data to allow a patient-specific model-based diagnosis of cardiovascular diseases. We think that the information-based approach provides a useful link between mathematical physiology and clinical diagnoses and that it will become constituent in the medical decision process in near future.

Conference papers on the topic "Usage constraint inference"

1

Du, Changying, Xingyu Xie, Changde Du, and Hao Wang. "Redundancy-resistant Generative Hashing for Image Retrieval." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/696.

Abstract:
By optimizing probability distributions over discrete latent codes, Stochastic Generative Hashing (SGH) bypasses the critical and intractable binary constraints on hash codes. While encouraging results were reported, SGH still suffers from the deficient usage of latent codes, i.e., there often exist many uninformative latent dimensions in the code space, a disadvantage inherited from its auto-encoding variational framework. Motivated by the fact that code redundancy is usually more severe when a more complex decoder network is used, in this paper, we propose a constrained deep generative architecture to simplify the decoder for data reconstruction. Specifically, our new framework forces the latent hashing codes to not only reconstruct data through the generative network but also retain minimal squared L2 difference to the last real-valued network hidden layer. Furthermore, during posterior inference, we propose to regularize the standard auto-encoding objective with an additional term that explicitly accounts for the negative redundancy degree of latent code dimensions. We interpret such modifications as Bayesian posterior regularization and design an adversarial strategy to optimize the generative, the variational, and the redundancy-resisting parameters. Empirical results show that our new method can significantly boost the quality of learned codes and achieve state-of-the-art performance for image retrieval.
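As a rough numerical illustration of the constraint described above (keeping the binary codes close, in squared L2 distance, to the last real-valued hidden layer), here is a short NumPy sketch. The term weights and the redundancy penalty based on per-bit balance are assumptions chosen for illustration, not the paper's exact objective or training procedure.

```python
import numpy as np

def hashing_objective(x, x_recon, codes, hidden, lam_align=1.0, lam_red=0.1):
    """Illustrative composite loss for an SGH-style hashing auto-encoder:
    reconstruction error + squared L2 alignment between binary codes and the
    last real-valued hidden layer + a crude redundancy penalty that pushes
    each code bit away from being (nearly) constant across the batch."""
    recon = np.mean((x - x_recon) ** 2)
    align = np.mean((codes - hidden) ** 2)               # codes vs. real-valued layer
    bit_means = codes.mean(axis=0)                       # fraction of 1s per bit
    redundancy = np.mean((2.0 * bit_means - 1.0) ** 2)   # 0 when bits are balanced
    return recon + lam_align * align + lam_red * redundancy

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
x_recon = x + 0.1 * rng.normal(size=x.shape)
hidden = rng.normal(size=(8, 4))
codes = (hidden > 0).astype(float)                       # crude binarization
print(hashing_objective(x, x_recon, codes, hidden))
```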
2

Tomkins, Sabina, Jay Pujara, and Lise Getoor. "Disambiguating Energy Disaggregation: A Collective Probabilistic Approach." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/398.

Abstract:
Reducing household energy usage is a priority for improving the resiliency and stability of the power grid and decreasing the negative impact of energy consumption on the environment and public health. Relevant and timely feedback about the power consumption of specific appliances can help household residents to reduce their energy demand. Given only a total energy reading, such as that collected from a residential meter, energy disaggregation strives to discover the consumption of individual appliances. Existing disaggregation algorithms are computationally inefficient and rely heavily on high-resolution ground truth data. We introduce a probabilistic framework which infers the energy consumption of individual appliances using a hinge-loss Markov random field (HL-MRF), which admits highly scalable inference. To further enhance efficiency, we introduce a temporal representation which leverages state duration. We also explore how contextual information impacts solution quality with low-resolution data. Our framework is flexible in its ability to incorporate additional constraints; by constraining appliance usage with context and duration we can better disambiguate appliances with similar energy consumption profiles. We demonstrate the effectiveness of our framework on two public real-world datasets, reducing the error relative to a previous state-of-the-art method by as much as 50%.
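To give a feel for the hinge-loss potentials that make HL-MRF inference a convex, scalable problem, here is a tiny Python sketch of one illustrative potential: it penalizes an appliance's inferred consumption for exceeding the observed aggregate reading. The specific rule, names, and weights are assumptions for illustration, not the authors' model.

```python
def hinge_loss_potential(appliance_kw, total_kw, weight=1.0, squared=True):
    """HL-MRF-style potential w * max(0, l(y))^p with the linear function
    l(y) = appliance_kw - total_kw, i.e. 'an appliance cannot use more
    energy than the whole household' expressed as a soft constraint."""
    violation = max(0.0, appliance_kw - total_kw)
    return weight * (violation ** 2 if squared else violation)

# An inferred fridge usage of 0.9 kW against an observed aggregate of 0.6 kW
# incurs a penalty; staying below the aggregate costs nothing.
print(hinge_loss_potential(0.9, 0.6))  # ~0.09 (0.3 squared)
print(hinge_loss_potential(0.4, 0.6))  # 0.0
```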
