Scientific literature on the topic "Interpretative Optimization"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles


Consult the thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Interpretative Optimization".


You can also download the full text of a scholarly publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Interpretative Optimization"

1

Todorovic, Zaklina, Ljubinka Rajakovic, and Antonije Onjia. "Interpretative optimization of the isocratic ion chromatographic separation of anions." Journal of the Serbian Chemical Society 81, no. 6 (2016): 661–72. http://dx.doi.org/10.2298/jsc150927022t.

Abstract:
Interpretive retention modeling was used to optimize the isocratic ion chromatographic (IC) separation of nine anions (formate, fluoride, chloride, nitrite, bromide, nitrate, phosphate, sulfate, oxalate). A carbonate-bicarbonate eluent was used and the separation was performed on a Dionex AS14 ion-exchange column. The combined influence of two mobile phase factors, the total eluent concentration (2-6 mM) and the carbonate/bicarbonate ratio from 1:9 to 9:1 (corresponding to a pH range of 9.35-11.27), on the IC separation was studied. A multiple-species analyte/eluent model that takes into account the ion-exchange equilibria of the eluent and sample anions was used. To estimate the model parameters, non-linear fitting of the retention data, obtained from a two-factor three-level experimental design, was applied. To find the optimal conditions within the experimental design, the normalized resolution product was employed as the chromatographic objective function. This criterion accounts for both the individual peak resolutions and the total analysis time. Good agreement between experimental and simulated chromatograms was obtained.
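
The normalized resolution product criterion named in this abstract is easy to reproduce. Below is a minimal Python sketch that scores simulated chromatograms with this criterion and grid-searches a single eluent factor; the retention parameters, dead time, and peak-width model are invented for illustration and are not the fitted values from the paper.

```python
import numpy as np

def normalized_resolution_product(times, widths):
    """Normalized resolution product r* in [0, 1]; r* = 1 when all
    adjacent-peak resolutions are equal (evenly spread peaks)."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    w = np.asarray(widths, float)[order]
    res = 2.0 * (t[1:] - t[:-1]) / (w[1:] + w[:-1])  # pairwise resolutions
    return np.prod(res / res.mean())

# Hypothetical retention model: ln k = a - b * ln(eluent concentration),
# with made-up (a, b) per anion; NOT the model fitted in the paper.
anions = {"formate": (1.2, 0.6), "chloride": (1.6, 0.8), "nitrate": (2.1, 1.0),
          "sulfate": (2.9, 1.6), "oxalate": (3.1, 1.7)}

best = (-np.inf, None)
for conc in np.linspace(2.0, 6.0, 41):      # total eluent concentration, mM
    t0 = 2.0                                # assumed dead time, min
    times = [t0 * (1 + np.exp(a - b * np.log(conc))) for a, b in anions.values()]
    widths = [0.05 * t for t in times]      # crude constant-plate-count widths
    r = normalized_resolution_product(times, widths)
    if r > best[0]:
        best = (r, conc)
print("best eluent concentration: %.1f mM (r* = %.3f)" % (best[1], best[0]))
```

The paper searches two factors (concentration and carbonate/bicarbonate ratio); the sketch keeps one factor so the scoring step stays in focus.
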
2

Mayer, Katja. "Objectifying Social Structures. Network visualization as means of social optimization." Theory and Psychology 22, no. 2 (2012): 162–78. https://doi.org/10.5281/zenodo.49591.

Abstract:
Social network analysis offers a broad range of formal and interpretative methodologies to deal with social structures, not only by discursive, but also by visual means. Sociograms, depicting social relations as nodes and lines, have played an important part in the reification of social structures since the beginnings of sociometry. This paper brings together two strands of analysis: first, a historical perspective on the development of social network visualization; and, second, exemplary stories of black-boxed technologies that inform not only the depiction, but also the interpretation of social networks. The article aims to reflect upon scientific construction of social structures as knowledge that is appropriated by society not least owing to its easy handling as tool and interface. Drawing social networks is regarded as social technology and therefore as an application within the realms of social engineering.
3

Johnson, Pamela T., Karen M. Horton, and Elliot K. Fishman. "Adrenal Imaging with Multidetector CT: Evidence-based Protocol Optimization and Interpretative Practice." RadioGraphics 29, no. 5 (2009): 1319–31. http://dx.doi.org/10.1148/rg.295095026.

4

Ramanauskienė, Sada, and Aida Norviliene. "OPTIMIZATION OF ACHIEVEMENTS ASSESSMENT OF PRESCHOOL CHILDREN." SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 2 (May 26, 2017): 318. http://dx.doi.org/10.17770/sie2017vol2.2445.

Abstract:
In this article the authors present the system for assessing the achievements of preschool children in Lithuania, teachers' opinions regarding the description of achievements assessment of preschool children, and its adaptation in a preschool education institution. The article presents recommendations for optimizing the assessment of preschool children's achievements. Quantitative research was applied in accordance with an interpretative attitude; the sample comprised a total of 130 preschool education teachers and managers from the Klaipeda region. The results: optimization of the implementation of the description for assessing the achievements of preschool children by preparing a model of the description; motivation of teachers to improve their skills in assessing children's achievements, rationalizing and allocating the cost of labor in the teaching process; and a better education planning process with regard to children's individual needs and abilities, as determined in the model of achievements assessment of preschool children.
5

Sremac, Snežana, Aleksandar Popović, Žaklina Todorović, Đuro Čokeša, and Antonije Onjia. "Interpretative optimization and artificial neural network modeling of the gas chromatographic separation of polycyclic aromatic hydrocarbons." Talanta 76, no. 1 (2008): 66–71. http://dx.doi.org/10.1016/j.talanta.2008.02.004.

6

Jiang, Bin, Hong Jun Yao, Dan Hua Xia, Ying Li, and Hong Xu. "Hierarchical Structure Model of High Speed Milling Hardened Steel." Advanced Materials Research 305 (July 2011): 75–79. http://dx.doi.org/10.4028/www.scientific.net/amr.305.75.

Abstract:
In high speed milling of hardened steel, the cutting performance of the cutter is uncertain because the hardness distribution and machining elements change. Based on experiments in high speed milling of hardened steel, the interaction among cutting force, cutting temperature, cutting vibration, machined surface quality, and machining efficiency is investigated by means of the interpretative structural modeling method, a hierarchical structure model of high speed milling of hardened steel is established, and an optimization design method for high speed milling performance is proposed. Results show that an uneven hardness distribution amplifies the interaction of the machining elements and causes sudden changes in cutting vibration and machined surface quality. The hierarchical structure model reveals the relationship between control variables and performance indices in high speed milling of hardened steel; it supports the optimization of process parameters, restrains cutting vibration in high speed milling of automobile moulds, and improves machined surface quality and machining efficiency.
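
Interpretative structural modeling (ISM), the method this abstract uses to relate the machining factors, rests on a Boolean reachability matrix and an iterative level partition. Here is a minimal, generic Python sketch of those two steps; the three-element influence matrix is hypothetical, not the paper's data.

```python
import numpy as np

def reachability(adj):
    """Boolean transitive closure (Warshall) of a direct-influence matrix."""
    n = len(adj)
    r = np.array(adj, dtype=bool) | np.eye(n, dtype=bool)
    for k in range(n):
        r |= np.outer(r[:, k], r[k, :])
    return r

def ism_levels(adj):
    """Partition elements into hierarchy levels: an element sits at the
    current top level when its reachability set (within the remaining
    elements) is contained in its antecedent set."""
    r = reachability(adj)
    remaining = set(range(len(adj)))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if {j for j in remaining if r[i, j]}
                 <= {j for j in remaining if r[j, i]}]
        levels.append(level)
        remaining -= set(level)
    return levels

# toy chain: element 0 influences 1, which influences 2
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(ism_levels(adj))   # [[2], [1], [0]]
```
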
7

Li, Tiehong, Jin Li, Junbang Jiang, and Xinyu Liu. "Fuzzy PID control based on genetic algorithm optimization inverted pendulum system." Journal of Physics: Conference Series 2816, no. 1 (2024): 012001. http://dx.doi.org/10.1088/1742-6596/2816/1/012001.

Abstract:
For the first-order inverted pendulum control system, a fuzzy PID control system based on genetic algorithm optimization is proposed. Traditional genetic algorithms suffer from the problem that differences in the fuzzy subset parameters reduce the interpretative ability of the fuzzy system, and their main weaknesses are computational complexity and low efficiency. To address this, the paper proposes an improved genetic algorithm that adopts a mutation operator with an adaptively changing mutation index and an elite retention strategy, which solves the premature and local convergence problems of the standard genetic algorithm and is used to optimize the fuzzy system. The experimental results show that the optimized genetic algorithm gives full play to the advantages of fuzzy control in terms of interpretability and robustness while guaranteeing prediction accuracy, providing a new research direction in the field of artificial intelligence control.
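
As a rough illustration of the ingredients named in this abstract, an adaptively shrinking mutation step plus elite retention, here is a small, self-contained genetic algorithm in Python. The objective function is a made-up stand-in for the closed-loop cost of fuzzy PID parameters, not the paper's pendulum model.

```python
import numpy as np
rng = np.random.default_rng(0)

def objective(x):
    # hypothetical stand-in for closed-loop cost of fuzzy PID parameters
    return np.sum((x - np.array([2.0, 0.5, 1.5])) ** 2)

def ga(pop_size=40, dims=3, gens=100, elite=2):
    pop = rng.uniform(0, 5, (pop_size, dims))
    for g in range(gens):
        fit = np.array([objective(p) for p in pop])
        pop = pop[np.argsort(fit)]                      # best individuals first
        sigma = 0.5 * (1 - g / gens)                    # adaptive mutation index
        children = []
        while len(children) < pop_size - elite:
            a, b = pop[rng.integers(0, pop_size // 2, 2)]
            child = np.where(rng.random(dims) < 0.5, a, b)  # uniform crossover
            child += rng.normal(0, sigma, dims)             # Gaussian mutation
            children.append(child)
        pop = np.vstack([pop[:elite], children])        # elite retention
    fit = np.array([objective(p) for p in pop])
    return pop[np.argmin(fit)]

print(ga())   # converges near [2.0, 0.5, 1.5]
```
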
8

Guo, Kexin, Guoqing Peng, Jining Pang, Shuaibing Shen, Xuewen Lin, and Qian Wan. "A dynamic post-evaluation analysis strategy based on Space syntax for optimizing middle school campus layout." E3S Web of Conferences 248 (2021): 03055. http://dx.doi.org/10.1051/e3sconf/202124803055.

Abstract:
Space syntax has injected new vitality into quantitative, parameterized planning and design of middle school campuses. However, there are extensive interpretative problems in space syntax that need to be clarified during the engineering planning and design of a campus. This paper focuses on: (1) clarifying the parameters and models of space syntax from the perspective of planning and design; (2) discussing the specific application of space syntax in campus planning and design across the four stages of preliminary research, scheme analysis, scheme design, and scheme optimization; (3) providing a relatively reliable quantitative, visual, and procedural post-evaluation system for campus planning and design using GIS spatial analysis technology and dynamic simulation.
9

Xu, Bo, Yuan Yao, Xuan Wang, Linsong Sun, Bin Ou, and Yanming Zhang. "A Multi-Input Multi-Output Considering Correlation and Hysteresis Prediction Method for Gravity Dam Displacement with Interpretative Functions." Applied Sciences 15, no. 13 (2025): 7096. https://doi.org/10.3390/app15137096.

Abstract:
The displacement of a concrete gravity dam is a direct manifestation of its deformation. It provides an intuitive reflection of the dam’s overall operational behavior and serves as a key indicator of the dam’s safe operating condition. In this paper, we propose a factor set that considers the hysteresis effects of temperature on displacement and ranks the importance of the features to select the optimal factor sets at different measurement points by the ReliefF method. Then, we realize the simultaneous prediction of the displacements at multiple measurement points by the multi-input multi-output least-squares support vector machine with particle swarm optimization (MIMO-PSO-LSSVM). The case study demonstrates that this method effectively enhances the accuracy and efficiency of gravity dam displacement prediction, thereby providing a novel reference for dam safety monitoring and health service diagnosis.
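
A hedged sketch of the optimization half of this pipeline: particle swarm optimization tuning the two hyperparameters of a kernel ridge regressor, used here as a common stand-in for LSSVM. The paper's actual model, data, and ReliefF feature selection are not reproduced.

```python
import numpy as np
rng = np.random.default_rng(1)

# toy data standing in for displacement records (hypothetical)
X = rng.normal(size=(120, 4))
y = X @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=120)
Xtr, Xva, ytr, yva = X[:80], X[80:], y[:80], y[80:]

def krr_mse(params):
    """Validation MSE of RBF kernel ridge regression (LSSVM-like)."""
    gamma, lam = np.exp(params)                  # keep both positive
    K = np.exp(-gamma * ((Xtr[:, None] - Xtr[None]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    Kva = np.exp(-gamma * ((Xva[:, None] - Xtr[None]) ** 2).sum(-1))
    return np.mean((Kva @ alpha - yva) ** 2)

def pso(f, dims=2, n=20, iters=60, w=0.7, c1=1.4, c2=1.4):
    x = rng.uniform(-3, 3, (n, dims)); v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dims))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]                 # global best particle
    return g, pval.min()

print(pso(krr_mse))
```
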
10

Ankita H. Harkare. "Intelligent Crop Management Optimization using Machine Learning Algorithms: A Linear Analytical Approach." Advances in Nonlinear Variational Inequalities 27, no. 3 (2024): 198–210. http://dx.doi.org/10.52783/anvi.v27.1367.

Abstract:
This research paper explores the practical use of machine learning techniques to enhance crop management recommendations. It leverages historical and real-time data encompassing weather conditions, soil characteristics, growth stages, and past records. A rigorous data preprocessing procedure is employed to generate a well-structured data set of 131,000 records. Various methodologies are used to construct predictive models, whose effectiveness is evaluated using common assessment metrics such as recall, accuracy, precision, and F1-score. To enhance the reliability and transparency of the recommendations, methods including Naive Bayes, support vector machines, decision trees, gradient boosting, random forests, linear regression, and k-nearest neighbors are incorporated along with interpretative techniques. The results provide insights into the different algorithms and predict crucial agricultural activities such as planting, irrigation, fertilization, and pest control. Among all the machine learning models considered, Naive Bayes emerges as the best-performing model, achieving perfect scores of 1.0 in accuracy, precision, recall, and F1-score. The second-best model, random forests, follows closely behind at 0.99. These predictive capabilities have a large impact on agricultural productivity and sustainability. To improve crop management productivity and resource efficiency, this study promotes the use of data-driven decision-making in agriculture.
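
The evaluation protocol described in this abstract, comparing classifiers on accuracy, precision, recall, and F1, looks roughly like the following scikit-learn sketch; the synthetic data set stands in for the real crop-management records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# synthetic stand-in for the crop-management data set (hypothetical features)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("Random Forest", RandomForestClassifier(random_state=0))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(name,
          accuracy_score(yte, pred),
          precision_score(yte, pred, average="macro"),
          recall_score(yte, pred, average="macro"),
          f1_score(yte, pred, average="macro"))
```
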

Theses on the topic "Interpretative Optimization"

1

Yang, Dekun. "An optimization approach to labelling problems in computer vision." Thesis, University of Surrey, 1995. http://epubs.surrey.ac.uk/843153/.

Abstract:
This thesis is concerned with the development of an optimization-based approach to solving labelling problems, which involve the assignment of image entities to interpretation categories in computer vision. Attention is mainly focused on the theoretical basis and computational aspects of continuous relaxation for solving a discrete labelling problem within an optimization framework. First, a theoretical basis for continuous relaxation is presented, which includes the formulation of a discrete labelling problem as a continuous minimization problem and an analysis of the labelling unambiguity associated with continuous relaxation. The main advantage of this formulation over existing ones is the embedding of relational measurements into the specification of a consistent labelling. The analysis provides a sufficient condition for a continuous labelling formulation to ensure that a consistent labelling is unambiguous. Second, a continuous relaxation labelling algorithm based on mean field theory is presented, with the aim of approximating simulated annealing in a deterministic manner. The novelty of the algorithm lies in the use of mean field theory to avoid stochastic optimization when approximating the global optimum of a consistent labelling criterion. This is in contrast to conventional methods, which find a local optimum near an initial estimate of the labelling. A special three-frame discrete labelling problem of establishing trinocular stereo correspondence and a mixed labelling problem of interpreting image entities in terms of cylindrical objects and their locations are also addressed. For the former, two orientation-based geometric constraints are suggested for matching lines among three viewpoints and a method is presented to find a consistent labelling using simulated annealing. For the latter, the image interpretation of 3D cylindrical objects and their 3D locations is achieved using three knowledge sources: the edge map, the region map, and the ground plane constraint. The method differs from existing methods in that it exploits an integrated use of multiple image cues to simplify the interpretation task and improve interpretation performance. Experimental results on both synthetic data and real images are provided throughout the thesis to demonstrate the viability and potential of the proposed methods.
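
For readers unfamiliar with mean-field relaxation labelling, the following toy Python sketch shows the basic idea of the deterministic annealing this thesis builds on: soft label assignments are repeatedly re-normalized against the field produced by the other sites while a temperature parameter is lowered. The unary scores and compatibility matrix are invented, and the neighbourhood is simplified to fully connected; this is not the thesis's formulation.

```python
import numpy as np

def softmax(z, T):
    e = np.exp((z - z.max(axis=1, keepdims=True)) / T)
    return e / e.sum(axis=1, keepdims=True)

def mean_field_labelling(unary, compat, sweeps=30, T0=2.0, cool=0.9, Tmin=0.05):
    """Deterministic (mean field) annealing for a toy consistent-labelling
    problem: every site interacts with every other site through a
    label-compatibility matrix `compat` (n_labels x n_labels)."""
    n, m = unary.shape
    p = np.full((n, m), 1.0 / m)               # soft label assignments
    T = T0
    while T > Tmin:
        for _ in range(sweeps):
            support = p.sum(axis=0) @ compat            # field from all sites
            field = unary + support[None, :] - p @ compat  # drop self-support
            p = softmax(field, T)
        T *= cool                                        # lower the temperature
    return p.argmax(axis=1)

# two labels that reward agreement; three sites with weak unary evidence
unary = np.array([[0.2, 0.0], [0.0, 0.1], [0.05, 0.0]])
compat = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(mean_field_labelling(unary, compat))   # majority label typically wins
```
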
2

Elms, Kim. "Debugging optimised code using function interpretation." Thesis, Queensland University of Technology, 1999.

3

Beard, Jacob. "Developing rich, web-based user interfaces with the statecharts interpretation and optimization engine." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116899.

Abstract:
Today's Web browsers provide a platform for the development of complex, richly interactive user interfaces. But with this complexity come additional challenges for Web developers. The complicated behavioural relationships between user interface components are often stateful, and are difficult to describe, encode, and maintain using conventional programming techniques with ECMAScript, the general-purpose scripting language embedded in all Web browsers. Furthermore, user interfaces must be performant, reacting to user input quickly; if the system responds too slowly, the result is a visible UI "lag" that degrades the user's experience. Statecharts, a visual modelling language created in the 1980s for developing safety-critical embedded systems, can provide solutions to these problems, as it is well suited to describing the reactive, timed, state-based behaviour that often comprises the essential complexity of a Web user interface. The contributions of this thesis are twofold. First, in order to use Statecharts effectively, an interpreter is required to execute the Statecharts models. The primary contribution of this thesis is therefore the design, description, implementation, and empirical evaluation of the Statecharts Interpretation and Optimization eNgine (SCION), a Statecharts interpreter implemented in ECMAScript that can run in all Web browsers, as well as in other ECMAScript environments such as Node.js and Rhino. The thesis first describes a syntax and semantics for SCION that aims to be maximally intuitive for Web developers. Next, test-driven development is used to verify the correct implementation of this semantics. Finally, SCION is optimized and rigorously evaluated to maximize performance and minimize memory usage when run in various Web browser environments. While SCION began as a research project toward the completion of this master's thesis, it has grown into an established open source software project, and is currently used for real work in production environments by several organizations. The secondary contribution of this thesis is the introduction of a new Statecharts-based design pattern called Stateful Command Syntax, which can be used to guide Web user interface development with Statecharts. This design pattern applies to a wide class of Web user interfaces, specifically those whose behaviour comprises a command syntax that varies depending on high-level application state. Its use is illustrated through detailed case studies of two highly interactive, but very different, applications.
4

Richtarik, Pavel. "Rychlý a částečně překládaný simulátor pro aplikačně specifické procesory." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385970.

Abstract:
The major objective of this work is to analyse the possibilities of using simulation in the development of application-specific instruction-set processors, to explore and compare some common simulation techniques, and to use the collected information to design a new simulation tool suitable for use in processor development and optimization. The thesis presents the main requirements on the new simulator and describes the design and implementation of its key parts, with an emphasis on high performance.
5

Jiang, Xulin. "Political justice and Laissez-faire : a consequentialist optimization of Rawl's scheme of justice as fairness." HKBU Institutional Repository, 2009. http://repository.hkbu.edu.hk/etd_ra/983.

6

Shen, Sumin. "Contributions to Structured Variable Selection Towards Enhancing Model Interpretation and Computation Efficiency." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96767.

Abstract:
The advances in data-collecting technologies provide great opportunities to access large sample-size data sets with high dimensionality. Variable selection is an important procedure for extracting useful knowledge from such complex data. In many real-data applications, appropriate selection of variables should facilitate model interpretation and computational efficiency. It is thus important to incorporate domain knowledge of the underlying data generation mechanism to select key variables for improving the model performance. However, general variable selection techniques, such as best subset selection and the Lasso, often do not take the underlying data generation mechanism into consideration. This thesis develops statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computational efficiency. Specifically, it consists of three parts: an additive heredity model with coefficients incorporating multi-level data, a regularized dynamic generalized linear model with piecewise constant functional coefficients, and a structured variable selection method within the best subset selection framework. In Chapter 2, an additive heredity model is proposed for analyzing mixture-of-mixtures (MoM) experiments. The MoM experiment is different from the classical mixture experiment in that the mixture component in MoM experiments, known as the major component, is made up of sub-components, known as the minor components. The proposed model considers an additive structure to inherently connect the major components with the minor components. To enable a meaningful interpretation of the estimated model, we apply the hierarchical and heredity principles by using the nonnegative garrote technique for model selection. The performance of the additive heredity model was compared to several conventional methods in both unconstrained and constrained MoM experiments. The additive heredity model was then successfully applied to a real problem of optimizing the Pringles® potato crisp studied previously in the literature. In Chapter 3, we consider the dynamic effects of variables in generalized linear models such as logistic regression. This work is motivated by an engineering problem with varying effects of process variables on product quality caused by equipment degradation. To address this challenge, we propose a penalized dynamic regression model that is flexible enough to estimate the dynamic coefficient structure. The proposed method models the functional coefficients as piecewise constant functions. Specifically, under the penalized regression framework, the fused lasso penalty is adopted for detecting changes in the dynamic coefficients, and the group lasso penalty is applied to enable a sparse selection of variables. Moreover, an efficient parameter estimation algorithm is developed based on the alternating direction method of multipliers. The performance of the dynamic coefficient model is evaluated in numerical studies and three real-data examples. In Chapter 4, we develop a structured variable selection method within the best subset selection framework. In the literature, many techniques within the Lasso framework have been developed to address structured variable selection issues; however, less attention has been paid to structured best subset selection problems. In this work, we propose a sparse ridge regression method to address them. The key idea of the proposed method is to re-construct the regression matrix from the angle of experimental designs. We employ the estimation-maximization algorithm to formulate the best subset selection problem as an iterative linear integer optimization (LIO) problem, with a mixed integer optimization algorithm as the selection step. We demonstrate the power of the proposed method in various structured variable selection problems. Moreover, the proposed method can be extended to ridge-penalized best subset selection problems. The performance of the proposed method is evaluated in numerical studies.
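
As a toy illustration of the best-subset-with-ridge objective discussed in Chapter 4, here is an exhaustive search over supports of size k. This is not the thesis's mixed integer formulation, which scales far better; brute force is only feasible for a modest number of candidate variables.

```python
import numpy as np
from itertools import combinations

def best_subset_ridge(X, y, k, lam=1e-3):
    """Best subset of size k minimizing residual sum of squares,
    with a small ridge penalty for numerical stability."""
    best = (np.inf, None, None)
    for S in combinations(range(X.shape[1]), k):
        Xs = X[:, S]
        beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(k), Xs.T @ y)
        rss = np.sum((y - Xs @ beta) ** 2)
        if rss < best[0]:
            best = (rss, S, beta)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 2 * X[:, 1] - X[:, 4] + 0.1 * rng.normal(size=100)
rss, support, beta = best_subset_ridge(X, y, k=2)
print(support, beta)   # recovers variables 1 and 4
```
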
7

Saraswati, Anita Thea. "Development of a Numerical Tool for Gravimetry and Gradiometry Data Processing and Interpretation : application to GOCE Observations." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTG077/document.

Abstract:
Nowadays, the scientific community has at its disposal gravity and gravity gradient datasets with unprecedented accuracy and spatial resolution, obtained from ground to satellite measurements, which enhance our knowledge of the Earth's gravitational field at various scales and wavelengths. In parallel with gravimetry, the advancement of satellite observations provides the community with more detailed digital elevation models reflecting the geometry of the Earth's structure. Together, these novel datasets offer a great opportunity to better understand the Earth's structures and dynamics at local, regional, and global scales. The use and interpretation of these high-quality data require refinement of standard approaches in gravity-related data processing and analysis. This thesis consists of a series of studies aiming to improve the precision of the chain of gravity and gravity gradient data processing for geodynamic studies. To that aim, we develop a tool, named GEEC (Gal Eötvös Earth Calculator), to compute precisely the gravity and gravity gradient effects due to any mass body regardless of its geometry and its distance from the measurements. The gravity and gravity gradient effects are computed analytically using the line integral solution of an irregular polyhedron. Validations at local, regional, and global scales confirm the robustness of GEEC's performance, where the resolution of the model, which depends on both the size of the body mass and its distance from the measurement point, strongly controls the accuracy of the results. We present an application assessing the optimum parameters for computing gravity and gravity gradients due to topography variations. Topography makes a major contribution to the Earth's gravitational attraction; therefore, the estimation of topography effects must be carefully considered in the processing of gravity data, especially in areas of rugged topography or in large-scale studies. For high-accuracy gravity studies at a global scale, the topography correction process must consider the topography effect of the entire Earth, but for local to regional applications based on relative variations within the zone, we show that topography truncated at a specific distance can be adequate, although ignoring the topography past this distance can produce errors. To support these arguments, we show the relationships between gravity relative errors, the topography truncation distance, and the extent of the study zone. Lastly, we address the question: are GOCE measurements relevant for obtaining a detailed image of the structure of a subducting plate, including its geometry and lateral variation? The results of gravity gradient forward modelling using synthetic subduction models computed at GOCE's mean altitude (255 km) demonstrate that both subduction edges and lateral variations of the subduction angle produce gravity gradient variations that are detectable with the GOCE dataset (~100 km wavelength and 10 mE amplitude). However, in the application to the real case of the Izu-Bonin-Mariana subduction zone, the second-order geometric features of the subducting plate are difficult to detect due to the presence of remaining crustal effects, caused by the inaccuracy of the existing global crustal model, which leads to inaccurate crustal effect removal.
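
A crude numerical stand-in for what GEEC computes analytically: the vertical gravity effect of a density model, here discretized into point masses instead of using the polyhedron line-integral solution. All geometry and density values below are invented.

```python
import numpy as np
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(centers, volumes, rho, obs):
    """Vertical gravity effect at `obs` of a body discretized into point
    masses (z positive down); a rough numerical stand-in for the analytic
    polyhedron solution implemented in GEEC."""
    r = centers - obs
    d = np.linalg.norm(r, axis=1)
    return G * np.sum(rho * volumes * r[:, 2] / d**3)

# toy body: 1 km cube of density 2670 kg/m^3 centered 2 km below the station
xs = np.linspace(-500.0, 500.0, 10)
X, Y, Z = np.meshgrid(xs, xs, xs + 2000.0)
centers = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
vol = np.full(len(centers), (1000.0 / 10) ** 3)
print(gz_point_masses(centers, vol, 2670.0, np.zeros(3)), "m/s^2")
```

The point-mass approximation degrades close to the body, which is exactly why analytic polyhedron solutions like the one in this thesis matter for near-field terrain corrections.
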
8

Maalej, Kammoun Maroua. "Low-cost memory analyses for efficient compilers." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1167/document.

Abstract:
This thesis was motivated by the emergence of massively parallel processing and supercomputing, which tend to make computer programming extremely performance-driven. Speedup, power consumption, and the efficiency of both software and hardware are nowadays the main concerns of the information systems community. Handling memory in a correct and efficient way is a step toward less complex and more performant programs and architectures. This thesis falls into this context and contributes to the fields of memory analysis and compilation in both theoretical and experimental aspects. Besides a deep study of the current state of the art of memory analyses and their limitations, our theoretical results consist in designing new algorithms that recover part of the imprecision that published techniques still show. Among the present limitations, we focus our research on pointer arithmetic to disambiguate pointers within the same data structure. We develop our analyses in the abstract interpretation framework. The key idea behind this choice is correctness and scalability: two requisite criteria for analyses to be embedded in a compiler. The first alias analysis we design is based on the range lattice of integer variables: given a pair of pointers defined from a common base pointer, they are disjoint if their offsets cannot have values that intersect at runtime. The second pointer analysis we develop is inspired by the Pentagon abstract domain. We conclude that two pointers do not alias whenever we are able to build a strict relation between them, valid at program points where the two variables are simultaneously alive. In a third algorithm, we combine both the first and second analyses and enhance them with a coarse-grained but efficient analysis to deal with unrelated pointers. We implement these analyses on top of the LLVM compiler. We experiment and evaluate their performance based on two metrics: the number of disambiguated pairs of pointers compared to common analyses of the compiler, and the optimizations further enabled thanks to the extra precision they introduce.
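
The first analysis described above has a very compact core idea: two pointers with the same base cannot alias when their offset intervals, widened by the access size, are disjoint. A minimal Python sketch with invented byte ranges (the thesis implements this inside LLVM, not in Python):

```python
from dataclasses import dataclass

@dataclass
class Range:
    lo: int   # inclusive lower bound of a pointer offset, in bytes
    hi: int   # inclusive upper bound

def may_alias(base1, off1: Range, base2, off2: Range, size: int) -> bool:
    """p1 = base + off1 and p2 = base + off2 cannot alias when their access
    windows [off.lo, off.hi + size) are disjoint. For distinct bases we
    conservatively answer 'may alias' here."""
    if base1 is not base2:
        return True
    return not (off1.hi + size <= off2.lo or off2.hi + size <= off1.lo)

buf = object()
print(may_alias(buf, Range(0, 3), buf, Range(8, 15), size=4))   # False: disjoint
print(may_alias(buf, Range(0, 7), buf, Range(4, 11), size=4))   # True: overlap
```
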
9

Li, Bin. "Statistical learning and predictive modeling in data mining." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155058111.

10

Préville-Ratelle, Louis-François. "A Combinatorial Interpretation of Minimal Transitive Factorizations into Transpositions for Permutations with two Disjoint Cycles." Thesis, 2008. http://hdl.handle.net/10012/3524.

Abstract:
This thesis is about minimal transitive factorizations of permutations into transpositions. We focus on finding direct combinatorial proofs for the cases where no such direct combinatorial proofs were known. We give a description of what has been done previously in the subject at the direct combinatorial level and in general. We give some new proofs for the known cases. We then present an algorithm that is a bijection between the set of elements in {1, ..., k} dropped into n cyclically ordered boxes and some combinatorial structures involving trees attached to boxes, where these structures depend on whether k > n, k = n or k < n. The inverse of this bijection consists of removing vertices from trees and placing them in boxes in a simple way. In particular this gives a bijection between parking functions of length n and rooted forests on n elements. Also, it turns out that this bijection allows us to give a direct combinatorial derivation of the number of minimal transitive factorizations into transpositions of the permutations that are the product of two disjoint cycles.
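
For context, these are the classical counts that bijections of this kind connect (standard results, not specific to the thesis):

```latex
\[
  \#\{\text{minimal factorizations of an $n$-cycle into transpositions}\}
  = n^{\,n-2} \quad\text{(D\'enes, 1959)},
\]
\[
  \#\{\text{parking functions of length $n$}\} = (n+1)^{\,n-1}
  = \#\{\text{rooted forests on $n$ labelled vertices}\}.
\]
```

The thesis's algorithm realizes the second equality bijectively, by removing vertices from trees and placing them into boxes.
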

Books on the topic "Interpretative Optimization"

1

International Seminar on Model Optimization in Exploration Geophysics (9th 1991 Berlin, Germany). Geophysical data interpretation by inverse modeling: Proceedings of the ninth International Seminar on Model Optimization in Exploration Geophysics, Berlin, 1991. Vieweg, 1993.

2

Akademiyası, Azärbaycan Milli Elmlär, ed. Akademik Tofiq Mämmäd oğlu Äliyev. Elm, 2011.

3

Wierzbicki, A. P., and M. Grauer. Interactive Decision Analysis: Proceedings of an International Workshop on Interactive Decision Analysis and Interpretative Computer Intelligence Held at the International Institute for Applied Systems Analysis, Laxenburg, Austria, September 20-23, 1983. Springer London, Limited, 2013.

4

Garbi, Madalina. The general principles of echocardiography. Oxford University Press, 2011. http://dx.doi.org/10.1093/med/9780199599639.003.0001.

Abstract:
Knowledge of basic ultrasound principles and current echocardiography technology features is essential for image interpretation and for optimal use of equipment during image acquisition and post-processing. Echocardiography uses ultrasound waves to generate images of cardiovascular structures and to display information regarding the blood flow through these structures. The present chapter starts by presenting the physics of ultrasound and the construction and function of instruments. Image formation, optimization, display, presentation, storage, and communication are explained. Advantages and disadvantages of available imaging modes (M-mode, 2D, 3D) are detailed and imaging artefacts are illustrated. The biological effects of ultrasound and the need for quality assurance are discussed.
5

Garbi, Madalina, Jan D’hooge, and Evgeny Shkolnik. General principles of echocardiography. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198726012.003.0001.

Abstract:
Echocardiography uses ultrasound waves to generate images of cardiovascular structures and to display information regarding the blood flow through these structures. Knowledge of basic ultrasound principles and current technology is essential for image interpretation and for optimal use of equipment during image acquisition and post-processing. This chapter starts by presenting the physics of ultrasound and the construction and function of instruments. Image formation, optimization, display, presentation, storage, and communication are explained. Advantages and disadvantages of available imaging modes (M-mode, two-dimensional, and three-dimensional) are detailed and imaging artefacts are illustrated. The potential biologic effects of ultrasound and the need for quality assurance are discussed.
6

Cook, Byron, and Andreas Podelski. Verification, Model Checking, and Abstract Interpretation: 8th International Conference, VMCAI 2007, Nice, France, January 14-16, 2007, Proceedings. Springer London, Limited, 2007.

7

Emerson, E. Allen, and Kedar S. Namjoshi. Verification, Model Checking, and Abstract Interpretation: 7th International Conference, VMCAI 2006, Charleston, SC, USA, January 8-10, 2006, Proceedings. Springer London, Limited, 2005.


Book chapters on the topic "Interpretative Optimization"

1

Pachecho, Cecilia, Fatima Calouro, and Anabela Andrade. "Interpretative indices for leaf analysis in vineyards of the Portuguese region of Bairrada." In Optimization of Plant Nutrition. Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-017-2496-8_8.

2

Jacoby, Wolfgang, and Peter L. Smilde. "Optimization and Inversion." In Gravity Interpretation. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-85329-9_7.

3

Nagurney, Anna. "Variational Inequalities: Geometric Interpretation, Existence, and Uniqueness." In Encyclopedia of Optimization. Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-030-54621-2_697-1.

4

Nakayama, Hirotaka. "Lagrange Duality and Its Geometric Interpretation." In Mathematics of Multi Objective Optimization. Springer Vienna, 1985. http://dx.doi.org/10.1007/978-3-7091-2822-0_5.

5

Pilgun, Maria, and Nailia Gabdrakhmanova. "Data and Text Interpretation in Social Media: Urban Planning Conflicts." In Data Analysis and Optimization. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-31654-8_18.

6

Duch, Włodzisław, Norbert Jankowski, Krzysztof Grąbczewski, and Rafał Adamczak. "Optimization and Interpretation of Rule-Based Classifiers." In Intelligent Information Systems. Physica-Verlag HD, 2000. http://dx.doi.org/10.1007/978-3-7908-1846-8_1.

7

Traneva, Velichka, and Stoyan Tranev. "On Index-Matrix Interpretation of Interval-Valued Intuitionistic Fuzzy Hamiltonian Cycle." In Recent Advances in Computational Optimization. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82397-9_17.

8

Jacobs, J. A. C., and Th J. A. Kemme. "Integration of Recently Developed Seismic Data Processing and Interpretation Algorithms in an Interactive Seismic Data Interpretation System." In Optimization of the Production and Utilization of Hydrocarbons. Springer Netherlands, 1992. http://dx.doi.org/10.1007/978-94-011-2256-6_5.

9

Nolet, Guust. "Partitioned Nonlinear Optimization for the Interpretation of Seismograms." In Inverse Problems in Wave Propagation. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1878-4_19.

10

Traneva, Velichka, and Stoyan Tranev. "Index-Matrix Interpretation of a Two-Stage Three-Dimensional Intuitionistic Fuzzy Transportation Problem." In Recent Advances in Computational Optimization. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06839-3_10.


Conference proceedings on the topic "Interpretative Optimization"

1

Liu, Baisen, Dongchuan Yang, and Weili Kong. "Interpretation and Optimization of Shallow Convolutional Filters." In 2024 IEEE International Conference on Progress in Informatics and Computing (PIC). IEEE, 2024. https://doi.org/10.1109/pic62406.2024.10892778.

2

Kocak, Onur, Ziya Telatar, and Cansel Ficici. "Identification and Interpretation of Focal Center of Brain Activity from EEG Signal Recordings." In 2025 7th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (ICHORA). IEEE, 2025. https://doi.org/10.1109/ichora65333.2025.11017230.

3

Arroyo-Bonifaz, Luis, Cesar Osorio-Guerra, and Willy Ugarte. "Improving Radiological Interpretation through Optimization of Radiographs using Vision Transformers." In 2025 37th Conference of Open Innovations Association (FRUCT). IEEE, 2025. https://doi.org/10.23919/fruct65909.2025.11007989.

4

Vats, Shaurya, Sai Phani Chatti, Aravind Devanand, Sandeep Krishnan, and Rohit Karanth Kota. "Empowering LLMs for Mathematical Reasoning and Optimization: A Multi-Agent Symbolic Regression System." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.172269.

Abstract:
Understanding data with complex patterns is a significant part of the journey toward accurate data prediction and interpretation. The relationships between input and output variables can unlock diverse advancement opportunities across various processes. However, most AI models attempting to uncover these patterns are not explainable or remain opaque, offering little interpretation. This paper explores an approach in explainable AI by introducing a multi-agent system (MaSR) for extracting equations between features using data. We developed a novel approach to perform symbolic regression by discovering mathematical functions using a multi-agent system of LLMs. This system addresses the traditional challenges of genetic optimization, such as random seed generation, complexity, and the explainability of the final equation. We utilize the in-context learning capabilities of LLMs trained on vast amounts of data to generate accurate equations more quickly. This study presents research on expanding the reasoning capacities of large language models alongside their mathematical understanding. The paper serves as a benchmark in understanding the capabilities of LLMs in mathematical reasoning and can be a starting point for solving numerous complex tasks using LLMs. The MaSR framework can be applied in various areas where the reasoning capabilities of LLMs are tested for complex and sequential tasks. MaSR can explain the predictions of black-box models, develop data-driven models, identify complex relationships within the data, assist in feature engineering and feature selection, and generate synthetic data equations to address data scarcity, which are explored as further directions for future research in this paper.
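
The selection step of any symbolic-regression loop, LLM-driven or not, reduces to fitting each candidate expression's free constants and ranking by error. A minimal sketch with hand-written candidates standing in for the LLM-proposed expressions (the MaSR agents, prompts, and data are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 3.0, 200)
y = 2.0 * x**2 + np.sin(x)           # hidden ground-truth relationship

# candidate forms such as an agent might propose (hypothetical)
candidates = {
    "a*x + b":          lambda x, a, b: a * x + b,
    "a*x**2 + b*sin(x)": lambda x, a, b: a * x**2 + b * np.sin(x),
    "a*exp(b*x)":       lambda x, a, b: a * np.exp(b * x),
}

def score(f):
    """Least-squares fit of the two free constants, then MSE."""
    try:
        popt, _ = curve_fit(f, x, y, p0=[1.0, 1.0], maxfev=5000)
        return np.mean((f(x, *popt) - y) ** 2), popt
    except RuntimeError:             # fit failed to converge
        return np.inf, None

for name, f in candidates.items():
    mse, popt = score(f)
    print(f"{name:20s} mse={mse:.4g} params={popt}")
```
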
5

Westeng, Kjetil, Yann Van Crombrugge, Christian Nilsen Lehre, Peder Aursand, and Tanya Kontsedal. "Data-Driven Petrophysics: An Automated Approach to Parameter Optimization in Well Log Interpretation." In 2024 SPWLA 65th Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2024. https://doi.org/10.30632/spwla-2024-0134.

Abstract:
In petrophysics, big data, comprising both core measurements and dynamic statistics, offers transformative potential for interpreting well logs using empirical evidence. This study investigates the practical applications of these data types, emphasizing their utility in automated interpretation systems. The primary aim is to leverage data analytics to delineate the distribution of petrophysical properties through a comprehensive analysis of dynamic curve statistics and rock databases. Building upon the foundational work of automated shale volume interpretation introduced in 2023 by Westeng et al., this study utilizes lithological knowledge to accurately map individual core measurements to their corresponding lithologies. Subsequently, core measurements, such as grain density, are categorized based on predefined multimeasurement criteria. Our rock and fluid database, detailed in Petersen et al. (2022), serves as the backbone for this endeavor. Concurrently, automatic trend curve analysis for various rock properties, including shale porosity, shale slowness, shale neutron response, shale resistivity, and vertical stress, are derived using the same shale volume interpretations. For other key properties, e.g., water resistivity, our framework incorporates insights from existing interpretations, enabled through a structured and rigorous organization of metadata. We have formulated and implemented an automated framework that cohesively integrates core measurements with statistical curve analytics into petrophysical interpretation. The innovative fusion of expansive core databases with automated log analytics can revolutionize petrophysical interpretation. This approach obviates the need for simplified methodologies and nonrealistic assumptions, enabling more precise, time-efficient interpretations. Importantly, it minimizes the role of subjectivity, promoting data-driven results. The methodology also clarifies when deviations from general data trends are justifiable, thus reducing the chance of interpretative errors. Traditionally, utilizing extensive core data sets for optimal petrophysical parameter assignment has been labor-intensive and reliant on specialized knowledge. Our proposed framework offers a more efficient, high-quality alternative that challenges current industry standards. This disruptive approach enables the effective use of the abundant data available, thereby refining both parameter uncertainty and likelihood estimations. The authors anticipate that the Open Subsurface Data Universe (OSDU) will significantly improve the use of big data analytics of core data and well logs across the industry. The figure depicts the concept of a fully automated formation evaluation workflow. On the left, a log plot contrasts automated predictions of grain density, porosity, saturation, and shale volume with actual core measurements. On the right, a flowchart details the integration of automatic parameter prediction from measured curves, the core/fluid database, and the extraction of contextualized learning into the automatic formation evaluation process.
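
As a point of reference for the kind of per-curve computation such a framework automates, the simplest shale-volume estimate is the linear gamma-ray index. This is a textbook formula, not the authors' automated workflow, and the clean/shale endpoints below are invented.

```python
import numpy as np

def vshale_gr(gr, gr_clean, gr_shale):
    """Linear gamma-ray index: Vsh = (GR - GR_clean) / (GR_shale - GR_clean),
    clipped to [0, 1]. Endpoint picking is exactly the parameter assignment
    that data-driven workflows aim to automate."""
    return np.clip((gr - gr_clean) / (gr_shale - gr_clean), 0.0, 1.0)

gr = np.array([25.0, 60.0, 95.0, 140.0])   # toy log samples, gAPI
print(vshale_gr(gr, gr_clean=20.0, gr_shale=120.0))
```
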
6

Bankert, Raymond J., Vinod K. Singh, and Harindra Rajiyah. "Model Based Diagnostics and Prognosis System for Rotating Machinery." In ASME 1995 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/95-gt-252.

Abstract:
A PC-based automated mechanical vibration diagnostic and prognosis system for rotating machinery is under development, integrating AI expert-system-based interpretative capabilities with rotor-dynamics-based modeling and numerical optimization techniques. Presented here are the details involved in generating a rotor dynamic simulator: a turbine-generator is modeled using the finite element approach to compute the steady-state response. The bearing stiffness and damping properties are computed by numerically solving the Reynolds equation. Optimizer software has been interfaced with the rotor dynamics code for the solution of nonlinear unconstrained function minimization. A finite-element-based closed-form approach has been adopted to compute the gradients of the objective function. The model is perturbed by the optimizer to match the results with simulated field measurements. The model-based optimization technique has been demonstrated on a 120-mass, 6-bearing rotor system. After optimization, the responses and mass unbalance converge to their goal (field) values. To date, feasibility studies show encouraging results in differentiating between mass unbalance and misalignment for realistic systems using this methodology.
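
The model-based matching loop described above, perturbing model parameters until the simulated response reproduces field measurements, can be sketched in a few lines. The single-parameter forced-response surrogate below is invented; the paper uses a 120-mass finite element model with analytic gradients.

```python
import numpy as np
from scipy.optimize import minimize

freqs = np.linspace(10, 100, 30)     # excitation frequencies, Hz (toy)

def model_response(u, freqs):
    """Hypothetical forced-response amplitude of a 1-DOF rotor surrogate
    driven by unbalance u (stand-in for the finite element rotor model)."""
    wn, zeta = 55.0, 0.05
    r = freqs / wn
    return u * r**2 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

measured = model_response(3.2, freqs)          # simulated "field" data

def objective(p):
    # squared mismatch between model prediction and field measurement
    return np.sum((model_response(p[0], freqs) - measured) ** 2)

fit = minimize(objective, x0=[1.0], method="Nelder-Mead")
print(fit.x)   # recovers the unbalance (~3.2) that explains the data
```
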
7

Oliveira, Francisco Fábio de, Diego Pereira, Douglas D. J. de Macedo, Geraldo Pereira Rocha Filho, and Roger Immich. "IrrigaFlow: An IoT Architecture Integrated with Fuzzy Logic for Sustainable Agricultural Irrigation Optimization." In Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2025. https://doi.org/10.5753/sbsi.2025.246547.

Texte intégral
Résumé :
Context: The growing demand for food security and the scarcity of natural resources pose global challenges, particularly in agriculture, which accounts for a significant portion of water consumption. In Brazil, agriculture holds substantial economic importance, but its intensive water use calls for more sustainable practices. Problem: Managing irrigation adaptively and efficiently is challenging due to the complexity of multiple environmental variables. This reduces water-use efficiency and negatively impacts agricultural productivity. Solution: IrrigaFlow is a modular architecture that automates irrigation using IoT, fuzzy logic, and distributed processing. It consists of three layers, namely IoT Module, Network Edge, and Cloud, enabling real-time monitoring and adjustments based on local environmental data. This approach optimizes water usage and improves responsiveness to climatic conditions. Information Systems Theory: Grounded in the Sociotechnical Systems Theory, this proposal balances advanced technology with human and organizational contexts, promoting efficiency and sustainability by dynamically adapting to environmental and operational conditions. Methodology: A qualitative interpretative approach was employed, combining case studies and simulations. Environmental data collection, fuzzy logic processing, and MQTT-based communication were validated to ensure the system’s effectiveness before practical implementation. Results: The system demonstrated efficiency in irrigation management by adjusting water volume and timing based on environmental variables, showcasing its potential to optimize water use in agriculture. Contributions and Impact on Information Systems: From an academic perspective, this work lays a foundation for research on distributed technologies applied to agriculture. For the industry, it offers a replicable model that enhances water efficiency and sustainability
Styles: APA, Harvard, Vancouver, ISO, etc.
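To give a concrete feel for the fuzzy-logic layer described above, here is a minimal Mamdani-style inference sketch in Python. The membership functions, rule base, and output levels are invented for illustration and are not taken from the IrrigaFlow paper.

    # Minimal fuzzy inference sketch for irrigation control. All
    # breakpoints, rules, and watering durations are illustrative.
    def tri(x, a, b, c):
        """Triangular membership: 0 at a and c, 1 at the peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def irrigation_minutes(moisture, temp):
        # Fuzzify sensor readings (percent soil moisture, degrees Celsius)
        dry   = tri(moisture, -1, 10, 40)
        moist = tri(moisture, 30, 50, 70)
        wet   = tri(moisture, 60, 90, 101)
        hot   = tri(temp, 25, 35, 46)
        mild  = tri(temp, 10, 20, 30)

        # Rule base: each rule activates an output level (minutes of
        # watering); min implements fuzzy AND, as in Mamdani inference.
        rules = [
            (min(dry, hot), 30.0),    # dry AND hot   -> water a lot
            (dry, 20.0),              # dry           -> water
            (min(moist, hot), 10.0),  # moist AND hot -> water a little
            (min(moist, mild), 5.0),  # moist AND mild -> brief watering
            (wet, 0.0),               # wet           -> do not water
        ]
        # Defuzzify with a weighted average of the rule outputs
        total = sum(w for w, _ in rules)
        return sum(w * v for w, v in rules) / total if total else 0.0

    print(irrigation_minutes(15, 38))  # dry, hot day  -> ~25 minutes
    print(irrigation_minutes(85, 22))  # wet, mild day -> 0 minutes

In an architecture like the one described, a sensor reading published over MQTT from the IoT layer would pass through such an inference step at the network edge to decide the watering duration.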
8

Kisacanin, Branislav. "Integral Image Optimizations for Embedded Vision Applications." In 2008 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). IEEE, 2008. http://dx.doi.org/10.1109/ssiai.2008.4512315.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Mahfouz, Ahmed, Ahmad Mohammad Ahmad, Shimaa Basheir Abdelkarim, et al. "Marketing Strategies for Smart Buildings." In The 2nd International Conference on Civil Infrastructure and Construction. Qatar University Press, 2023. http://dx.doi.org/10.29117/cic.2023.0030.

Full text
Abstract:
Globally, there is a growing need to optimize the monitoring and facility management of new and existing built facilities. Smart buildings reduce environmental waste, give facility users flexibility, and offer owners optimization opportunities. Several research projects explore the monitoring, management, and maintenance of smart buildings for efficient facility management (FM), but there is a lack of well-defined, effective, and successful marketing schemes for smart buildings. Furthermore, smart buildings draw on the various technological possibilities and advancements in the smart-building business and affect relevant stakeholders such as clients, facility managers, and users. The study therefore aims to develop a marketing strategy for smart buildings, adopting an integrative approach as its underpinning theory. The methodology rests on a robust analysis of different marketing strategies for various building types in the construction industry; in addition, lessons are drawn from building typologies such as sustainable, tall, and green buildings. The proposed marketing strategy comprises four defined phases: segmentation, targeting, positioning, and differentiation. The marketing directions focus on activities, actors, and tools identified through a comprehensive, detailed, and interpretative literature review. The proposed adaptable marketing strategy integrates clients and facility users, focusing on the main drivers for marketing smart buildings. The study is therefore significant for facility managers, developers, and facility users.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Chai, Joyce Y., Pengyu Hong, Michelle X. Zhou, and Zahar Prasov. "Optimization in multimodal interpretation." In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2004. http://dx.doi.org/10.3115/1218955.1218956.

Full text
Styles: APA, Harvard, Vancouver, ISO, etc.

Reports of organizations on the topic "Interpretative Optimization"

1

Borgwardt, Stefan, and Veronika Thost. Temporal Query Answering in EL. Technische Universität Dresden, 2015. http://dx.doi.org/10.25368/2022.214.

Full text
Abstract:
Context-aware systems use data about their environment for adaptation at runtime, e.g., to optimize power consumption or user experience. Ontology-based data access (OBDA) can support the interpretation of the usually large amounts of such data. OBDA augments query answering in databases by dropping the closed-world assumption (i.e., the data is no longer assumed to be complete) and by including domain knowledge provided by an ontology. We focus on a recently proposed temporalized query language that allows conjunctive queries to be combined with the operators of the well-known propositional temporal logic LTL. In particular, we investigate temporalized OBDA w.r.t. ontologies in the DL EL, which allows for efficient reasoning and has been successfully applied in practice. We study both the data and the combined complexity of the query entailment problem.
Styles: APA, Harvard, Vancouver, ISO, etc.
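To illustrate the flavor of such a temporalized query language (the predicate names below are invented, not drawn from the report), a conjunctive query can be combined with LTL's past operators. For instance, a query asking for servers x that overheated at some earlier time point and have been running in throttled mode ever since could use the "since" operator S:

    \[
      \phi(x) \;=\; \bigl(\exists y.\,\mathit{runsIn}(x,y) \wedge \mathit{ThrottledMode}(y)\bigr)\ \mathcal{S}\ \mathit{Overheated}(x)
    \]

Answering such a query over an EL ontology also returns servers whose overheating or throttling is only implied by the domain knowledge rather than stated explicitly in the data.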
2

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Full text
Abstract:
Objective: To compare the performance of popular machine learning (ML) algorithms in mapping the sensorimotor (SM) cortex and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVM), random forest (RF), decision trees (DT), a single-layer perceptron (SLP), and a multilayer perceptron (MLP) against standard logistic regression (LR) in identifying the SM cortex, employing validated features from six minutes of NREM-sleep icEEG data and applying common standard hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features chosen for statistical significance in classical univariate analysis (p < 0.05) and an extended set of 17 features representing the power/coherence of different frequency bands, entropy, and interelectrode distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, the ML algorithms performed comparably with conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed the others in terms of accuracy (MLP: 0.86), area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-score (SLPw: 0.82). SVM achieved the best specificity. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r = 0.71 between the two clinical scenarios across all performance metrics, p < 0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustment improve performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
Styles: APA, Harvard, Vancouver, ISO, etc.
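The evaluation protocol in this abstract (a battery of standard classifiers under common hyperparameters, 10-fold cross-validation, and class-weight adjustment for imbalance) is easy to mirror in outline. The scikit-learn sketch below is a hedged illustration: the synthetic feature matrix stands in for the study's icEEG features, and the SLP variant is omitted because scikit-learn's MLPClassifier requires at least one hidden layer.

    # Hedged sketch of the comparison protocol: standard classifiers,
    # 10-fold cross-validation, and class weighting for imbalance.
    # X and y are synthetic placeholders, not the study's icEEG data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import StratifiedKFold, cross_validate

    X, y = make_classification(n_samples=376, n_features=17,
                               weights=[0.8, 0.2], random_state=0)

    models = {
        "LR":  LogisticRegression(max_iter=1000, class_weight="balanced"),
        "SVM": SVC(class_weight="balanced"),
        "RF":  RandomForestClassifier(class_weight="balanced", random_state=0),
        "DT":  DecisionTreeClassifier(class_weight="balanced", random_state=0),
        # MLPClassifier has no class_weight parameter in scikit-learn
        "MLP": MLPClassifier(max_iter=2000, random_state=0),
    }

    metrics = ["accuracy", "roc_auc", "recall", "precision", "f1"]
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, model in models.items():
        scores = cross_validate(model, X, y, cv=cv, scoring=metrics)
        print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in metrics})

This reproduces only the shape of the protocol (same metrics, same fold structure); the study's actual feature extraction and weight-adjustment scheme would replace the synthetic data and the "balanced" heuristic.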