Academic literature on the topic 'Functional and informational criterion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Functional and informational criterion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Functional and informational criterion"

1

Tufeanu, Daniela, Augustin Semenescu, and Adrian Ioana. "Management Criteria and Principles, Applicable in Education and Scientific Research." Advanced Engineering Forum 34 (October 2019): 277–82. http://dx.doi.org/10.4028/www.scientific.net/aef.34.277.

Full text
Abstract:
In this article we present certain criteria and management principles applicable to education and scientific research, with the aim of optimizing these two important areas. We discuss in detail management criteria such as the methodological, economic, social, informational, organizational, and functional criteria. Among the management principles analyzed are the principle of unity of command, the principle of decision-making, and the principle of balance between centralization and decentralization.
APA, Harvard, Vancouver, ISO, and other styles
2

Shkuropat, O.A., I.V. Shelehov, and M.A. Myronenko. "Intelligence system of artificial vision for unmanned aerial vehicle." Artificial Intelligence 25, no. 4 (2020): 53–58. http://dx.doi.org/10.15407/jai2020.04.053.

Full text
Abstract:
The article considers a method of factor cluster analysis that allows the onboard recognition system of an unmanned aerial system to be retrained automatically. The task of informational synthesis of an onboard frame-identification system is solved within the information-extreme intellectual technology of data analysis, which is based on maximizing the informational capability of the system during machine learning. Based on the functional approach to modeling the cognitive processes inherent to humans when forming and making classification decisions, a categorical model in the form of a directed graph is proposed. Following this model, algorithmic support for information-extreme factor cluster analysis is developed; it allows the system to be retrained automatically when the alphabet of recognition classes expands. Under this algorithm, the onboard recognition system first carries out information-extreme machine learning on a class alphabet of relatively low cardinality. When new classes appear, their unclassified structured vectors of recognition attributes form additional learning matrices. Once an additional learning matrix reaches a representative volume, it is joined to the input learning matrix and the onboard recognition system is retrained. The additional learning matrices of new recognition classes are formed by an agglomerative cluster-analysis algorithm that groups the unclassified vectors by k-means clustering. As the criterion for optimizing the machine-learning parameters, a modified Kullback criterion is used, which is a functional of the accuracy characteristics of classification decisions. To increase the functional efficiency of the factor cluster analysis, it is proposed to increase the depth of machine learning by optimizing the image-frame processing parameters.
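The retraining loop described here (cluster the unclassified vectors, wait for a representative volume, then merge and relearn) can be illustrated with a minimal sketch. Everything below is hypothetical scaffolding rather than the authors' code: the function names, the plain k-means step (the paper combines k-means with an agglomerative scheme), and the REPRESENTATIVE threshold are all assumptions for illustration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means over unclassified recognition-attribute vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest center (squared Euclidean distance).
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

REPRESENTATIVE = 30  # assumed "representative volume" threshold

def accumulate_and_retrain(train_matrices, unclassified, k=2):
    """Cluster vectors that no current class accepted; once a cluster is large
    enough, promote it to an additional learning matrix so the caller can rerun
    information-extreme learning on the enlarged class alphabet."""
    labels, _ = kmeans(unclassified, k)
    leftovers = []
    for j in range(k):
        cluster = unclassified[labels == j]
        if len(cluster) >= REPRESENTATIVE:
            train_matrices.append(cluster)   # joins the input learning matrix
        else:
            leftovers.append(cluster)        # keep accumulating until representative
    return train_matrices, leftovers
```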
APA, Harvard, Vancouver, ISO, and other styles
3

Бушнєв, С. Ю. "ЕКСПЕРИМЕНТАЛЬНА ПЕРЕВІРКА ЕФЕКТИВНОСТІ МОДЕЛІ ФОРМУВАННЯ ГРОМАДЯНСЬКОЇ ПОЗИЦІЇ МАЙБУТНІХ УЧИТЕЛІВ У ПОЗАНАВЧАЛЬНІЙ ДІЯЛЬНОСТІ ПЕДАГОГІЧНОГО КОЛЕДЖУ" [Experimental verification of the effectiveness of a model for forming future teachers' civil position in the extracurricular activity of a pedagogical college]. Засоби навчальної та науково-дослідної роботи, no. 49 (July 16, 2018): 26–40. https://doi.org/10.5281/zenodo.1312952.

Full text
Abstract:
The article describes a pedagogical experiment on forming future teachers' civil position in the extracurricular activities of a pedagogical college, presents the empirical results of the experiment, and outlines conclusions and recommendations for implementing similar activities in other higher education institutions. The purpose of the experiment was to check the content-functional model of forming future teachers' civil position in the extracurricular activity of the pedagogical college under certain pedagogical conditions (teachers' awareness of, and students' need for, raising their own level of civil-position formation; orienting the content of extracurricular activities toward forming future teachers' active civil position; involving students in creating an environment favorable to the formation of future teachers' civil position). In accordance with the structure of future teachers' civil position, criteria with corresponding components of its formation were identified: motivational-valuable, knowledge-informational, and activity-behavioral. The motivational-valuable criterion makes it possible to establish the formation level of future teachers' value orientations and professional qualities and the presence of their cognitive interest in state problems. The knowledge-informational criterion points to the formation level of the knowledge that provides functional civil education, together with the psychological, pedagogical, and methodological knowledge necessary for forming this personality quality. The activity-behavioral criterion makes it possible to assess future teachers' aspiration to manifest social activity and their ability to take and defend an active civil position, and to determine the formation level of skills in implementing measures that promote the development of civil society. Thus, the conducted research substantiates the organizational-pedagogical conditions and the model of forming future teachers' civil position in the extracurricular activity of the pedagogical college.
APA, Harvard, Vancouver, ISO, and other styles
4

Lazarev, Maxim, and Olga Stukalova. "Functional model for evaluation of quality of education in modern informational space." Alma mater (Vestnik vysshei shkoly), no. 8 (August 15, 2019): 87–90. https://doi.org/10.5281/zenodo.6500388.

Full text
Abstract:
The article presents the results of research testing the functional model developed by the authors for evaluating the quality of culturological education in the information space. Higher education institutions of Moscow, Ulyanovsk, Naberezhnye Chelny, and the Leningrad region, as well as colleges of the Moscow region, took part in the experimental work. The model takes into account the requirements of an integrated approach to assessing the quality of education and presents itself as a combination of functional components. It is also aimed at assessing the processes of socialization and inculturation that are immanent to the formation of a "person of culture." The effectiveness of the functional model elaborated by the authors is demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
5

KIM, HYEON SOO, YONG RAE KWON, and IN SANG CHUNG. "RESTRUCTURING PROGRAMS THROUGH PROGRAM SLICING." International Journal of Software Engineering and Knowledge Engineering 04, no. 03 (1994): 349–68. http://dx.doi.org/10.1142/s0218194094000179.

Full text
Abstract:
Software restructuring is recognized as a promising method to improve logical structure and understandability of a software system which is composed of modules with loosely-coupled elements. In this paper, we present methods of restructuring an ill-structured module at the software maintenance phase. The methods identify modules performing multiple functions and restructure such modules. For identifying the multi-function modules, the notion of the tightly-coupled module that performs a single specific function is formalized. This method utilizes information on data and control dependence, and applies program slicing to carry out the task of extracting the tightly-coupled modules from the multi-function module. The identified multi-function module is restructured into a number of functional strength modules or an informational strength module. The module strength is used as a criterion to decide how to restructure. The proposed methods can be readily automated and incorporated in a software tool.
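As a rough illustration of the slicing idea, the toy sketch below (illustrative names and a hand-built dependence graph, not the authors' tool) computes backward slices for two outputs of a module and checks how much they overlap; nearly disjoint slices signal a multi-function module that could be split into functional-strength modules.

```python
# Dependences of a toy module that computes both a total and a maximum:
# each statement maps to the statements it is data/control dependent on.
deps = {
    "s1: total = 0":        [],
    "s2: maxv = items[0]":  [],
    "s3: for x in items":   [],
    "s4: total += x":       ["s1: total = 0", "s3: for x in items"],
    "s5: if x > maxv":      ["s2: maxv = items[0]", "s3: for x in items"],
    "s6: maxv = x":         ["s5: if x > maxv"],
    "s7: return total":     ["s4: total += x"],
    "s8: return maxv":      ["s6: maxv = x"],
}

def backward_slice(deps, criterion):
    # Transitive closure of dependences reachable from the slicing criterion.
    seen, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(deps[node])
    return seen

slice_total = backward_slice(deps, "s7: return total")
slice_max = backward_slice(deps, "s8: return maxv")
# The two slices share only the loop header: the module performs two largely
# independent functions and is a candidate for restructuring into two
# functional-strength modules (or one informational-strength module if the
# shared state must stay together).
print(sorted(slice_total & slice_max))
```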
APA, Harvard, Vancouver, ISO, and other styles
6

Naumenko, Igor, Mykyta Myronenko, and Taras Savchenko. "Information-extreme machine training of on-board recognition system with optimization of RGB-component digital images." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 4 (November 29, 2021): 59–70. http://dx.doi.org/10.32620/reks.2021.4.05.

Full text
Abstract:
This research improves the reliability with which ground natural and infrastructure objects are recognized by the autonomous onboard system of an unmanned aerial vehicle (UAV). An information-extreme machine learning method for an autonomous onboard recognition system with optimization of the RGB components of digital images of ground objects is proposed. The method is developed within the framework of the functional approach to modeling the cognitive processes of natural intelligence in forming and making classification decisions. In contrast to known data-mining methods, including neuro-like structures, this approach gives the recognition system the properties of adaptability to arbitrary initial image-formation conditions and flexibility during retraining. The idea of the proposed method is to maximize the information capacity of the recognition system during machine learning. As the criterion for optimizing the machine-learning parameters, a modified Kullback information measure is used; this information criterion is a functional of the accuracy characteristics. The optimization parameters are the geometric parameters of the hyperspherical containers of the recognition classes and the control tolerances on the recognition attributes, which play the role of quantization levels when the input Euclidean training matrix is transformed into a working binary training matrix. Through admissible transformations of the working training matrix, the proposed machine learning method adapts the input mathematical description of the recognition system so as to maximize the full probability of correct classification decisions. To increase the depth of information-extreme machine learning, the weight coefficients of the RGB components of the brightness spectrum of ground-object images were also optimized according to the information criterion. The results of physical modeling on the example of recognizing ground natural and infrastructure objects confirm the increase in functional efficiency of information-extreme machine learning of the onboard system when the weight coefficients of the RGB components of ground-object image brightness are informationally optimal.
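A minimal sketch of the core machinery may help: binary encoding through control tolerances, a hyperspherical (Hamming-radius) container per class, and a grid search for the radius that maximizes an information criterion. The Kullback form below is one smoothed two-alternative variant of the kind used in this literature, not necessarily the authors' exact formula, and all names and the toy data are assumptions.

```python
import numpy as np

def kullback(d1, alpha, r=2):
    # One smoothed variant of a modified Kullback (J-divergence type) measure:
    # d1 = first reliability (own-class hit rate), alpha = false-alarm rate.
    if d1 < 0.5 or alpha > 0.5:          # outside the criterion's working region
        return 0.0
    s = (1.0 - d1) + alpha               # summed error rates
    eps = 10.0 ** -r                     # regularizer for the degenerate case s = 0
    return (1.0 - s) * np.log2((2.0 - s + eps) / (s + eps))

def binarize(X, base_mean, delta):
    # Control tolerances: a bit is 1 when the feature lies in the corridor
    # [base_mean - delta, base_mean + delta]; delta acts as a quantization level.
    return (np.abs(X - base_mean) <= delta).astype(int)

def container_score(B_own, B_other, radius):
    ref = (B_own.mean(0) >= 0.5).astype(int)               # binary reference vector
    d1 = float(((B_own != ref).sum(1) <= radius).mean())   # own vectors inside
    alpha = float(((B_other != ref).sum(1) <= radius).mean())
    return kullback(d1, alpha)

# Toy two-class data; in the paper the attributes are RGB components of
# ground-object frames, and their weight coefficients are themselves optimized.
rng = np.random.default_rng(1)
X_own, X_other = rng.normal(0.0, 1.0, (100, 16)), rng.normal(1.5, 1.0, (100, 16))
B_own = binarize(X_own, X_own.mean(0), delta=1.0)
B_other = binarize(X_other, X_own.mean(0), delta=1.0)

# Information-extreme step: pick the container radius that maximizes the criterion.
best = max(range(1, 16), key=lambda rad: container_score(B_own, B_other, rad))
print("optimal container radius:", best)
```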
APA, Harvard, Vancouver, ISO, and other styles
7

Samchuk, Zoreslav, and Alona Hurkivska. "Posttruth as an instrument of political narrative of the postmodern era." Political Studies, no. 5 (2023): 163–77. http://dx.doi.org/10.53317/2786-4774-2023-1-9.

Full text
Abstract:
The article provides a multidimensional analysis of the post-truth phenomenon: in its historical aspect; as an instrumental means of achieving informational-communicational goals; as a way of positioning oneself in relation to significant aspects of socio-political reality; as a political technology aimed at securing competitive advantages; and so on. Post-truth is conceived as a specific information and communication product of the postmodern era, which arose in response to certain megatrends, including criterion relativism, the devaluation of meanings and values, and the simulation imperative of socio-political life. Priority research efforts are focused on the meaningful demarcation of post-truth from truth and lies, since this subject field determines the prospects for the conceptual expression of post-truth. The narrative form of representation is essential to the instrumental and functional effectiveness of post-truth. Not polemics and discourse, with their requirement of convincing semantic reliefs and hierarchies, but the narrative, as a certain ordering of words and storytelling not burdened by the requirements of convincing argumentation, provides the most comfortable conditions for the expansion of post-truth in the informational and communicational environment. An important feature is the striking difference between reality and its portrayal by post-truth instruments. An analysis of the challenges and threats of post-truth, which require a priority response from the professional community and public consciousness in general, shows that the avalanche-like use of post-truth tools in the informational environment raises the level of disinformation aimed at creating and spreading false perceptions of reality to serve political interests and needs. It is precisely the post-truth toolkit that serves as an experimental platform for double standards, bias, prejudice, manipulation, and propaganda. The devolution of truth into post-truth objectively and naturally leads to a symmetrical degradation and involution of politics into politicking, which is never guided by public interests but always only by its own, de facto parasitizing on the opportunities and powers it has gained.
APA, Harvard, Vancouver, ISO, and other styles
8

Zinko, Roman, Mykhailo Hlobchak, Andiy Beshley, and Oleksiy Pitrenko. "Algorithm of machine creation using the mechanism of articulated disjunction." Ukrainian Journal of Mechanical Engineering and Materials Science 8, no. 2 (2022): 59–66. http://dx.doi.org/10.23939/ujmems2022.02.059.

Full text
Abstract:
Problem statement. Any methodology is based on knowledge about the problem, and the fullness and orderliness of this knowledge base determine the further scope and possible options for implementing the methodology. Purpose. To analyze the existing ways of creating machines and propose a new, effective algorithm of machine creation using the mechanism of articulated disjunction. Considering the structure of research and the improvement of informational and programming technologies, changes to the traditional methodology of machine creation are made. The existing paradigm: the parametric and functional performance of a machine whose design is improved on the basis of existing machines is a modification of its predecessors' parametric and functional performance. The proposed paradigm: the parametric and functional performance of a new machine, improved on the basis of the subject area of technical solutions, design methods, and operational conditions, coincides more precisely with the technological process it is intended to realize. The proposed hypothesis: there exist design methodologies that make it possible to ensure rational productivity within existing operational conditions. Methodology. A morphological space is used to form the qualitative features of the machine being created. The peculiarity of the proposed methodology is that the set of features of the created machine also contains subsets of the processes and phenomena in which the machine is involved. This makes it possible to assess the correspondence between the process in which the machine participates and the machine's operating processes. Findings (results). An algorithm of machine creation using the mechanism of articulated disjunction is proposed. It allows a new construction to be created through a given sequence of stages, improving it at every step in a given direction on the basis of the determined criteria. Originality (novelty). The method of articulated disjunction is based on the principle of forming the set of necessary properties of the machine's structural elements from a given primary criterion of machine efficiency. The essence of the method is that a sample of elements having common features and properties is reformulated on the basis of a given criterion. The proposed method makes it possible to determine the advantages of one structure, described by various factors, over others on the basis of the criterion set. Practical value. The algorithm of synthesis and improvement of existing machines allows their main quality features to be determined in the initial stages of machine design.
APA, Harvard, Vancouver, ISO, and other styles
9

Kismetova, G., and G. Mambetova. "EXPERIMENTAL VERIFICATION OF THE MODEL OF THE PROCESS OF FORMATION OF COMMUNICATIVE COMPETENCE OF FOREIGN STUDENTS BY MEANS OF INTERACTIVE TECHNOLOGIES." SCIENTIFIC-DISCUSSION, no. 95 (December 16, 2024): 17–19. https://doi.org/10.5281/zenodo.14498984.

Full text
Abstract:
The article is devoted to the description of experimental work aimed at verifying the formation of communicative competence of foreign students by means of interactive technologies. The article reflects the stages of experimental research, presents educational and methodological support and conditions necessary for the implementation of the proposed technology.
APA, Harvard, Vancouver, ISO, and other styles
10

SAMOILENKO, OLEXANDRA, BAI XIAONAN, SUN XIRAN, JIN YUHAN, and CHEN XIAOPAI. "AESTHETIC FOUNDATIONS AND VALUE CRITERIA OF MODERN MUSIC STUDIES." AD ALTA: 13/01-XXXII. 13, no. 1 (2023): 84–86. http://dx.doi.org/10.33543/1301328486.

Full text
Abstract:
The purpose of this study is to analyze and theoretically substantiate the complex multidimensional value-communicative system inherent in musical art in its dynamic movement and holistic form; to clarify its structure and functional formations and its main internal and external relationships; and to establish some ways of influencing the course of artistic processes in society and, to a certain extent, to clarify the possibility of managing them without intruding upon them. It is shown that, for all the subjectivity of perception in the art of music, style favors the formation of conditions for objective value judgments and conclusions. The latter, in turn, form an informational thesaurus that influences the historical dynamics of public musical consciousness.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Functional and informational criterion"

1

Massidda, Davide. "Criteri dell'Informazione e Selezione dei Modelli in Misurazione Funzionale" [Information criteria and model selection in functional measurement]. Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3425317.

Full text
Abstract:
The processes of evaluating environmental stimuli and making decisions are common in everyday life and in many social and economic situations. These processes are generally described in the scientific literature using multi-attribute choice models. Such models assume that the evaluation of a stimulus described by several attributes results from a multi-stage process (Anderson, 1981; Lynch, 1985): evaluation of the attributes, integration of the attribute values, and explicit evaluation of the stimulus. Commonly, in this field, experimental settings require the evaluation of a set of stimuli built by combining several attributes. A subject examines the attributes of each stimulus and, using a "mental" model of choice (Oral & Kettani, 1989), assigns a value to the attributes and formulates an overall judgment. Finally, the subject expresses this opinion as an order ranking, pairwise preference comparisons, values on a rating scale, and so on. This so-called multi-attribute evaluation suffers from a fundamental difficulty: measuring the value of each attribute of a stimulus starting from each subject's overall evaluation. Basically, the problem is to derive each value by decomposing the overall judgment (i.e., the response output). This measurement difficulty is typical of most of the often complementary multi-attribute modeling traditions, such as Conjoint Analysis (Luce & Tukey, 1964; Krantz & Tversky, 1971; Green & Rao, 1971) and Information Integration Theory (IIT: Anderson, 1970, 1981, 1982). According to Anderson's IIT, the cognitive system assigns a subjective value to each characteristic of a stimulus, and these values are combined into an overall judgment using a specific integration function. IIT describes integration modalities using different mathematical rules, and functional measurement is the methodology proposed to determine and measure the integration function. Functional measurement uses factorial experiments, selecting some attributes of a stimulus and combining them in factorial designs. Usually, subjects' evaluations for each cell of the experimental design are reported on a category scale, and each subject replicates each evaluation over several trials. Starting from these evaluations, functional measurement aims to quantify the value of each factor level and its importance in the global judgment, for each subject or group of subjects. Anderson's theory suggests that the most widely used integration rules are of three fundamental and simple kinds: additive, multiplicative, and weighted average. Statistical techniques such as the analysis of variance can be used to detect the integration rule on the basis of goodness of fit. The averaging rule in particular can account for interaction effects between factors, splitting the evaluation into two components, scale value and weight, which can be identified and measured separately (Zalinski & Anderson, 1989). While the scale value represents the location of the attribute level on the response scale, the weight represents its importance in the global judgment. The averaging model provides a useful way to handle interactions between factors, going beyond the independence assumption on which most applications of multi-attribute choice models are based. However, the model presents some critical points concerning estimation, and for this reason its potential has not been fully exploited until now. In this research work, a new method for parameter estimation of the averaging model is proposed.
The method provides a way to select the best set of parameters to fit the data and aims to overcome some problems that have limited the use of the model. According to this new method, named R-Average (Vidotto & Vicentini, 2007; Vidotto, Massidda & Noventa, 2010), the choice of the optimal model is made according to the so-called "principle of parsimony": the best model is the "simplest" one that achieves the best compromise between explanation of the phenomenon (explained variance) and structural complexity (number of distinct weight parameters). The selection process uses two goodness-of-fit indexes in combination: the Akaike Information Criterion (AIC; Akaike, 1974) and the Bayesian Information Criterion (BIC; Schwarz, 1978). Both indexes are derived from the logarithm of the residual variance weighted by the number of observations, penalizing models with additional parameters. AIC and BIC differ in their penalty function, since BIC imposes a larger penalty on complex models than AIC does, and both are very useful for model comparison. In this research work, two versions of the R-Average method are presented. One version is an evolution of the other, and both are structured as a set of procedures that perform the estimation. Basically, R-Average consists of three procedures: the EAM procedure, the DAM procedure, and the Information Criteria (IC) procedure. EAM, DAM, and IC differ in the constraints imposed on the weights during the optimization process. The EAM procedure constrains all the weights within each factor to be equal, estimating an equal-weight averaging model. This model is optimal in terms of parsimony because it has the smallest number of parameters (a single weight for all levels of each factor); indeed, a simple model in which the weights are equal is what is meant here by "parsimonious." By contrast, the DAM procedure imposes no constraints on the weights, leaving them free to vary. Thus, this procedure may converge to a complete differential-weight averaging model, which is the least parsimonious model (i.e., all the weights of all levels of each factor are different). The core of the R-Average method is the Information Criteria procedure. This procedure is based on the idea that, from a psychological point of view, a simple model is more plausible than a complex one. For this reason, the estimation algorithm is not oriented toward finding the parameters that explain the greatest proportion of variance, but rather seeks a compromise between explained variance and model complexity. A complex model is judged better than a simpler one only if it allows a significantly higher degree of explanation of the phenomenon. The IC procedure searches for the model while trying to keep (in the "forward" version) or to make (in the "backward" version) all the weights equal. In the forward version, the procedure starts from the EAM model and spans all possible combinations of weights, modifying them: initially one by one, then two by two, then three by three, and so on. For each combination, the procedure tries to diversify the weights. At each step, using the BIC and AIC indexes, the procedure selects the best set of parameters and, if evidence of improvement is found, takes the selected model as the reference for the following step. In the backward version, the procedure starts from the DAM model and spans all possible combinations of weights, trying to equalize them.
BIC and AIC are used to compare the new models with the reference model: if a new model is found to be better than the reference one, it is used as the new reference for the following steps. Finally, all the models estimated by the procedures are compared, and the best model according to the information criteria is preferred. The original formulation of the averaging model was modified in the evolution of the basic R-Average method. This reformulation considers the weight not simply as a parameter w but as w = exp(t). This exponential transformation leads to a solution of the classical uniqueness problem that affects the averaging formulation (Vidotto, 2011). Furthermore, it justifies the application of cluster-analysis algorithms to the weight values, which is necessary for clustering experimental subjects on the basis of their similarity. In fact, the distance between two t values can be evaluated as a simple difference, whereas the distance between two w values can be evaluated only as a ratio between them. This allows clustering algorithms based on proximity matrices between parameters to be applied to subjects. The performance of R-Average was tested using Monte Carlo studies and practical applications in three different research fields: marketing, economic decision theory, and interpersonal trust. The results of the Monte Carlo studies show a good capability of the method to identify the parameters of the averaging model. Scale parameters are in general well estimated. Weight estimation, by contrast, is somewhat more critical: point estimates of the true weight values are not as precise as those of the scale values, particularly as the standard deviation of the error component in the observed data increases. Nevertheless, the estimates appear reliable, and equalities between weights are identified. Increasing the number of experimental trials can help model selection when the errors have a larger standard deviation. In summary, R-Average appears to be a useful instrument for selecting the best model within the family of averaging models, making it possible to handle particular multi-attribute conditions in functional measurement experiments. The R-Average method was applied in a first study in the marketing field. When buying a product, people express preferences for particular products, so understanding the cognitive processes underlying the formation of consumers' preferences is an important issue. The study was conducted in collaboration with a local pasta manufacturer, the Sgambaro company. The aims of the research were threefold: to understand how consumers formulate judgments about a market product, to test the R-Average method in real conditions, and to provide the Sgambaro company with useful information for marketing its product. Two factors were manipulated: the packaging of Sgambaro's pasta (box with window, box without window, and plastic bag) and the price (0.89€, 0.99€, 1.09€). The analyses started from the participants' evaluations of the product: for each subject, the parameters of the averaging model were estimated. Since the consumer population is presumably not homogeneous in its preferences, the overall sample was split into three clusters (simply named A, B, and C) by a cluster-analysis algorithm. For both the Price and Packaging factors, the clusters showed different ratings. Cluster A expressed judgments positioned at the center of the scale, suggesting that these participants were not particularly attracted by the products.
By contrast, Cluster B expressed positive judgments, and Cluster C expressed globally negative judgments with the exception of the "box with window" packaging. Regarding packaging, the box with window, although not the preferred option in all three clusters, always received positive evaluations, while judgments of the other packagings were inconsistent across groups. Therefore, if the target of potential consumers for the product is the general population, the box with window can be considered the most appreciated packaging. Moreover, in Cluster C the ANOVA showed a significant interaction between Price and Packaging. The estimated parameters of the averaging model show that Cluster C is greatly affected by a high price: in this cluster the highest price had double weight in the final ratings, so the positive influence of the "box with window" packaging on the judgment could be invalidated. It is important to notice that the group most sensitive to a high price is also the one that gave the lowest ratings compared with the other clusters. In a second experiment, the R-Average method was applied to a study in the field of economic decision making under risk. The assumption that motivated the study is that, when a person must evaluate an economic bet in a risky situation, the person cognitively integrates the economic value of the bet and the probability of winning. In the past, Shanteau (1974) showed that the integration of value and probability follows a multiplicative rule. As in Lynch (1979), the study highlighted that when the situation concerns two simultaneous bets, each composed of a value and a probability, the judgment of the double bet differs from the sum of the judgments of the single bets. This observation, named the subadditivity effect, violates the assumptions of Expected Utility Theory. The proposed study analyzes the convenience/satisfaction associated with single and duplex bets. Participants were offered two kinds of bets: a first group of bets involved a good (mobile phones) and the other a service (free SMS per day); each good/service was associated with a probability of obtaining it. Two experimental conditions were defined. In the first condition, subjects judged the bets considering that the phones came from a good company and the SMS service from an untrustworthy provider. In the reverse condition, subjects judged the bets considering that the phones were of low quality and came from an untrustworthy company, while the SMS service came from a strong and trustworthy provider. For the duplex bets, an averaging integration model was hypothesized, and the parameters of the model were estimated for each subject using R-Average. The results show that integration in the presence of a duplex bet is fully compatible with an averaging model: averaging, not adding, appears to be the correct integration rule. In the last experiment, the averaging model and the R-Average methodology were applied to a study of trust beliefs in three contexts of everyday life: interpersonal, institutional, and organizational. Trusting beliefs are a solid persuasion that the trustee has favorable attributes that induce trusting intentions. They are relevant factors in making an individual consider another individual trustworthy, and they modulate the extent to which a trustor feels confident in believing that a trustee is trustworthy. According to McKnight, Cummings & Chervany (1998), the most cited trusting beliefs are benevolence, competence, honesty/integrity, and predictability.
The basic idea underlying the proposed study is that these beliefs might be cognitively integrated into the concept of trustworthiness through some weighting process. The R-Average method was used to identify the parameters of the averaging model for each participant. As the main result, the analysis showed that, in line with McKnight, Cummings & Chervany (1998), the four main beliefs play a fundamental role in judging trust. Moreover, in agreement with information integration theory and functional measurement, an averaging model seems to explain individual responses. The great majority of participants could be referred to the differential-weight case. While the scale values show a neat linear trend with higher slopes for honesty and competence, the weights show differences with higher mean values, again, for honesty and competence. These results are coherent with the idea that different attributes play different roles in the final judgment: indeed, honesty and competence seem to play the major role, while predictability seems less relevant. Another interesting conclusion concerns the high weight of the low level of honesty, which seems to show that a belief related to low integrity plays the most important role in a final negative judgment. Finally, the different tilt of the trends for the attribute levels in the three situational contexts suggests a prominent role of honesty in the interpersonal scenarios and of competence in the institutional scenarios. In conclusion, information integration theory and functional measurement seem to represent an interesting approach for understanding how human judgments are formed. This research work proposes a new method to estimate the parameters of averaging models. The method shows a good capability to identify the parameters and opens new scenarios in information integration theory, providing a good instrument to understand the averaging integration of attributes in more detail.
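To make the selection logic concrete, here is a minimal sketch in the spirit of R-Average (not the thesis code): it fits an equal-weight and a differential-weight averaging model to a toy replicated 3x3 factorial table, with weights parameterized as w = exp(t) as in the reformulation above, and compares the fits by AIC and BIC computed from the log residual variance plus a parameter penalty. All names and the toy data are assumptions, and identifiability issues (the averaging rule is invariant to rescaling all weights) are glossed over.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nA, nB, reps = 3, 3, 4

# Toy "true" subject: equal weights within each factor, linear scale values.
s_true_a, s_true_b = np.array([2., 5., 8.]), np.array([3., 5., 7.])
w_true_a, w_true_b = np.full(nA, 1.0), np.full(nB, 2.0)
truth = (w_true_a[:, None] * s_true_a[:, None] + w_true_b[None, :] * s_true_b[None, :]) \
        / (w_true_a[:, None] + w_true_b[None, :])
data = truth[None] + rng.normal(0, 0.3, (reps, nA, nB))   # replicated ratings
n = data.size

def rss(params, equal):
    s_a, s_b = params[:nA], params[nA:nA + nB]
    t = params[nA + nB:]
    if equal:                                  # EAM: one weight per factor
        w_a, w_b = np.full(nA, np.exp(t[0])), np.full(nB, np.exp(t[1]))
    else:                                      # DAM: one weight per level
        w_a, w_b = np.exp(t[:nA]), np.exp(t[nA:])
    pred = (w_a[:, None] * s_a[:, None] + w_b[None, :] * s_b[None, :]) \
           / (w_a[:, None] + w_b[None, :])     # averaging integration rule
    return float(((data - pred[None]) ** 2).sum())

for label, equal, k_t in [("EAM", True, 2), ("DAM", False, nA + nB)]:
    x0 = np.concatenate([np.full(nA, 5.0), np.full(nB, 5.0), np.zeros(k_t)])
    fit = minimize(rss, x0, args=(equal,), method="Nelder-Mead",
                   options={"maxiter": 50000, "fatol": 1e-10, "xatol": 1e-8})
    k = x0.size                                # number of free parameters
    aic = n * np.log(fit.fun / n) + 2 * k      # log residual variance + penalty
    bic = n * np.log(fit.fun / n) + k * np.log(n)
    print(f"{label}: RSS={fit.fun:.3f}  AIC={aic:.1f}  BIC={bic:.1f}")
# Parsimony principle: prefer the smaller AIC/BIC; since the true weights are
# equal within factors, the EAM should win despite its slightly larger residual.
```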
APA, Harvard, Vancouver, ISO, and other styles
2

McKeone, James P. "Statistical methods for electromyography data and associated problems." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79631/1/James_McKeone_Thesis.pdf.

Full text
Abstract:
This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic for statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case study is the first of its kind and contains interesting results using so-called unit information prior distributions.
APA, Harvard, Vancouver, ISO, and other styles
3

Kurnaz, Guzin. "Covering Sequences And T,k Bentness Criteria." PhD thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610504/index.pdf.

Full text
Abstract:
This dissertation deals with some crucial building blocks of cryptosystems in symmetric cryptography, namely Boolean functions, which produce a single-bit result for each possible value of the m-bit input vector, where m > 1. The objectives of this study are two-fold. The first objective is to develop relations between cryptographic properties of Boolean functions, and the second is to form new concepts that associate coding theory with cryptology. For the first objective, we concentrate on cryptographic properties of Boolean functions such as balancedness, correlation immunity, nonlinearity, resiliency, and propagation characteristics, many of which depend on the Walsh spectrum, which gives the components of the Boolean function along the directions of the linear functions. Another efficient tool for studying Boolean functions is the subject of covering sequences, introduced by Carlet and Tarannikov in 2000. Covering sequences are defined in terms of the derivatives of the Boolean function. Carlet and Tarannikov relate the correlation immunity and balancedness properties of a Boolean function to its covering sequences. We find further relations between the covering sequence and the Walsh spectrum, and present two theorems for the calculation of covering sequences associated with each null frequency of the Walsh spectrum. As for the second objective of this thesis, we have studied linear codes over the rings Z4 and Z8 and their binary images in the Galois field GF(2). We have investigated the best-known examples of nonlinear binary error-correcting codes, such as the Kerdock, Preparata, and Nordstrom-Robinson codes, which are Z4-linear. We have then reviewed Tokareva's studies on Z4-linear codes and extended them to Z8-linear codes. We have defined new classes of bent functions, and we have shown that the newly defined classes, namely Tokareva's k-bent functions and our t,k-bent functions, are affine equivalent to the well-known Maiorana-McFarland class of bent functions. As a cryptological application, we describe the method of cubic cryptanalysis as a generalization of the linear cryptanalysis given by Matsui in 1993. We conjecture that the newly introduced t,k-bent functions are also strong against cubic cryptanalysis, because they lie as far as possible from the functions exploited by it.
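As a small companion to the Walsh-spectrum machinery this abstract relies on, the sketch below (illustrative code, not from the thesis) computes the Walsh spectrum of a Boolean function by brute force and checks the classical bentness condition that every coefficient has absolute value 2^(m/2); the example function x1x2 XOR x3x4 is a standard Maiorana-McFarland bent function on m = 4 variables.

```python
from itertools import product

def walsh_spectrum(f, m):
    # W_f(a) = sum over x in GF(2)^m of (-1)^(f(x) XOR <a, x>)
    xs = list(product((0, 1), repeat=m))
    spectrum = {}
    for a in xs:
        total = 0
        for x in xs:
            dot = sum(ai & xi for ai, xi in zip(a, x)) % 2   # inner product <a, x>
            total += -1 if (f(x) ^ dot) else 1
        spectrum[a] = total
    return spectrum

f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])   # Maiorana-McFarland form
W = walsh_spectrum(f, 4)
print("bent:", all(abs(w) == 4 for w in W.values()))                  # 2^(m/2) = 4
print("nonlinearity:", 2 ** 3 - max(abs(w) for w in W.values()) // 2) # 2^(m-1) - max|W|/2
```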
APA, Harvard, Vancouver, ISO, and other styles
4

Jenkins, Bradlee A., and L. Lee Glenn. "Variability of FEV and Criterion for Acute Pulmonary Exacerbation." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/7465.

Full text
Abstract:
Excerpt: Morgan et al. (1) concluded that children and adolescents with cystic fibrosis (CF) and a high baseline forced expiratory volume (FEV1) were less likely to receive a therapeutic intervention, or to show a slower rate of FEV1 decline, after a single acute decline in FEV1 of 10%. This conclusion is not well supported due to the arbitrary criteria used for defining a pulmonary exacerbation, as explained below.
APA, Harvard, Vancouver, ISO, and other styles
5

Sheng, Zhaohui. "Population invariance as a criterion to evaluate equating relationship for college basic academic subject examination." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5929.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on December 28, 2007). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
6

Бабич, В. О. "Інтелектуальна технологія підтримки прийняття рішень за умов апріорної невизначеності. Метод екзамену з перенавчанням". Master's thesis, Сумський державний університет, 2018. http://essuir.sumdu.edu.ua/handle/123456789/72194.

Full text
Abstract:
An algorithm and software for a computer decision support system (DSS) were developed. The effectiveness of the proposed algorithm was verified on data obtained from monitoring a weakly formalized technological process. The developed algorithm was implemented in software created using an instrumental programming environment.
APA, Harvard, Vancouver, ISO, and other styles
7

Murgue-Varoclier, Paul-Maxence. "Le critère organique en droit administratif français." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE3061.

Full text
Abstract:
In French administrative law, the organic criterion is an instrument of legal qualification that depends upon the presence of a public body in a legal relationship. Its origins date back to the end of the 19th century, in the subjectivation of the rights of public power with which the State is invested and the recognition of the distinction between public and private bodies. Conflated with the criterion of public service at the beginning of the 20th century, the organic criterion gained its autonomy at the time of the "crisis" of the notion of public service in the 1930s-1940s, which consecrated the dissociation of the notions of public body and public service. This criterion, which bears witness to the institutional logic to which French administrative law is attached, serves as the foundation for the construction of the framework notions of this law. However, the organic criterion has been the subject of strong opposition for many years. On the one hand, the "trivialization" movement affecting the law of public bodies reinforces the inadequacy of this criterion in determining the applicable law. On the other hand, contemporary changes to the French administrative model have diminished reliance on this criterion. While the public body appeared in the past as the preferred mode of carrying out public action, the administration is nowadays encouraged to outsource its activities. Despite a phenomenon of "privatization" of administrative action, the judge and the legislator maintain the application of exorbitant rules in the absence of the organic criterion. While the administrative phenomenon now extends beyond public bodies alone, the definition of the organic criterion in administrative law remains firmly linked to the notion of public body. Several factors, however, call for a redefinition of this criterion. The functionalization of public action only imperfectly conceals the links established within the "public sphere" between public bodies and certain private bodies that remain under close public control. It is therefore on the basis of the notion of "public control" that a redefinition of this criterion in administrative law can be undertaken.
APA, Harvard, Vancouver, ISO, and other styles
8

Прилепа, Дмитро Вікторович, Дмитрий Викторович Прилепа та Dmytro Viktorovych Prylepa. "Оптимізація вхідного математичного опису інтелектуальної системи комп'ютерної психодіагностики". Thesis, Сумський державний університет, 2014. http://essuir.sumdu.edu.ua/handle/123456789/39126.

Full text
Abstract:
Forming the input mathematical description of an intelligent system for computer psychodiagnostics (ISCP) is a topical problem, the solution of which must take into account the features of both the subject domain and the intelligent technology used to design the ISCP.
APA, Harvard, Vancouver, ISO, and other styles
9

Sevgili, Fatma Didem. "La responsabilité de l'Etat et des collectivités territoriales. Les problèmes d'imputabilité et de répartition." Thesis, Lyon 3, 2011. http://www.theses.fr/2011LYO30004/document.

Full text
Abstract:
The problem of determining the public body responsible involves two questions: first, finding a debtor to compensate the victim; second, distributing the compensatory burden among those responsible for the damage. The administrative judge uses three criteria to determine the responsible public body: the organic criterion, the functional criterion and the decision criterion. Yet none of them is sufficient to explain all cases of responsibility. In principle, however, responsibility follows competence. It therefore becomes particularly important to delimit precisely the powers of the different public bodies, which in practice is not always perfectly characterised. Concerning the distribution of the compensatory burden, two criteria may be used: the gravity of the respective faults, or the causal role of each co-responsible party in the occurrence of the damage.
APA, Harvard, Vancouver, ISO, and other styles
10

Gnanguenon, guesse Girault. "Modélisation et visualisation des liens entre cinétiques de variables agro-environnementales et qualité des produits dans une approche parcimonieuse et structurée." Electronic Thesis or Diss., Montpellier, 2021. http://www.theses.fr/2021MONTS139.

Full text
Abstract:
The development of digital agriculture makes it possible to observe, in an automated way and sometimes at high frequency, the dynamics of production and its quality as a function of climate. Data from these dynamic observations can be considered as functional data. To analyze this new type of data, it is necessary to extend the usual statistical tools to the functional case or to develop new ones. In this thesis, we propose a new approach (SpiceFP: Sparse and Structured Procedure to Identify Combined Effects of Functional Predictors) to explain the variations of a scalar response variable by two or three functional predictors in a context of joint influence of these predictors. Particular attention was paid to the interpretability of the results through the use of combined interval classes defining a partition of the observation domain of the explanatory factors. Recent developments around LASSO (Least Absolute Shrinkage and Selection Operator) models were adapted to estimate the areas of influence in the partition via a generalized penalized regression. The approach also integrates a double selection, of models (among the possible partitions) and of variables (areas inside a given partition), based on the AIC and BIC information criteria. The methodological description of the approach, its study through simulations, and a case study based on real data are presented in chapter 2 of this thesis. The real data used in this thesis come from a vineyard experiment aimed at understanding the impact of climate change on anthocyanin accumulation in berries. Analysis of these data in chapter 3, using SpiceFP and one extension, identified a negative impact of morning combinations of low irradiance (lower than about 100 µmol/s/m2 or 45 µmol/s/m2 depending on the advanced-delayed state of the berries) and high temperature (higher than about 25°C). A slight difference associated with overnight temperature occurred between these effects identified in the morning. In chapter 4 of this thesis, we propose an implementation of the approach as an R package. This implementation provides a set of functions for building the class intervals according to linear or logarithmic scales, transforming the functional predictors using the joint class intervals, and executing the approach in two or three dimensions. Other functions help to perform post-processing or allow the user to explore models other than those selected by the approach, such as an average of different models. Keywords: penalized regressions, interaction, information criteria, scalar-on-function, interpretable coefficients, grapevine microclimate
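As a reminder of the double selection step described above (standard definitions, not formulas quoted from the thesis), the two information criteria are, in LaTeX notation:

\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L},

where k is the number of estimated parameters (here, the selected areas of a partition), n the number of observations, and \hat{L} the maximized likelihood; the partition and coefficient set with the smallest criterion value are retained.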
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Functional and informational criterion"

1

Lerner, Vladimir S. Information path functional and informational macrodynamics. Nova Science Publishers, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Matsumoto, Kazuko. Intonation units in Japanese conversation: Syntactic, informational and functional structures. Benjamins, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Matsumoto, Kazuko. Intonation units in Japanese conversation: Syntactic, informational, and functional structures. John Benjamins Pub., 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tempelman, Arkady. Ergodic Theorems for Group Actions: Informational and Thermodynamical Aspects. Springer Netherlands, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hall, Alastair. Information criteria for impulse response function matching estimation of DSGE models. Federal Reserve Bank of Atlanta, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Arnold, Johnson L., Rosenthal Lynne, and National Institute of Standards and Technology (U.S.), eds. Criteria for United States Geological Survey (USGS) recognizing certificate issuing organizations functions and requirements part of United States Geological Survey recognition of Spatial Data Transfer Standard (SDTS) Topological Vector Profile (TVP) certification system. U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chistyakova, Guzel, Lyudmila Ustyantseva, Irina Remizova, Vladislav Ryumin, and Svetlana Bychkova. CHILDREN WITH EXTREMELY LOW BODY WEIGHT: CLINICAL CHARACTERISTICS, FUNCTIONAL STATE OF THE IMMUNE SYSTEM, PATHOGENETIC MECHANISMS OF THE FORMATION OF NEONATAL PATHOLOGY. AUS PUBLISHERS, 2022. http://dx.doi.org/10.26526/monography_62061e70cc4ed1.46611016.

Full text
Abstract:
The purpose of the monograph, which contains a modern view of the problem of adaptation of children with extremely low body weight, is to provide a wide range of doctors with basic information about the clinical picture, the functional activity of innate and adaptive immunity, and prognostic criteria of postnatal pathology, based on the authors' own research. The specific features of the immunological reactivity of premature infants of various gestational ages who have developed bronchopulmonary dysplasia (BPD) and retinopathy of newborns (RN), from the moment of birth and after reaching postconceptional age (37-40 weeks), are described separately. The mechanisms of their implementation with the participation of factors of innate and adaptive immunity are considered in detail. Methods for early prediction of BPD and RN with the determination of an integral indicator, and an algorithm for the management of premature infants with a high risk of postnatal complications at the stage of early rehabilitation, are proposed. The information provided makes it possible to personalize treatment, preventive and rehabilitation measures in premature babies. The monograph is intended for obstetricians-gynecologists, neonatologists, pediatricians, allergists-immunologists, doctors of other specialties, residents, and students of the system of continuing medical education. This work was done with financial support from the Ministry of Education and Science, grant of the President of the Russian Federation No. MK-1140.2020.7.
APA, Harvard, Vancouver, ISO, and other styles
8

Yakobson, Zinaida, Nadezhda Baskakova, and Dmitriy Simakov. Production management of the enterprise. INFRA-M Academic Publishing LLC., 2024. http://dx.doi.org/10.12737/1225050.

Full text
Abstract:
The second volume of the textbook examines the content and essence of strategic planning, the leading managerial function of production management, including: setting up strategic and functional management of a manufacturing enterprise; the industrial production system; the organizational structure of the enterprise; and the methodological foundations of a systematic approach in the basic logistics schemes of a metallurgical enterprise. Special attention is paid to the technological organization and planning of the main production, its structuring and optimization according to the criterion of productivity increase. A factor model of the formation of the enterprise's production capacity and its innovative development is presented. The forms of labor organization and the quality of labor rationing are justified in accordance with the level of development of equipment and technology, which is the fundamental condition for achieving high production efficiency. The organization of planning work at the enterprise is considered schematically, in accordance with the principles of a systematic approach, the priority of strategic planning, and the subordination of current planning. Risk management is justified on the basis of the concept of innovative development of the company's production activities. Meets the requirements of the latest-generation federal state educational standards of higher education. For students and teachers of economic universities.
APA, Harvard, Vancouver, ISO, and other styles
9

Information Path Functional and Informational Macrodynamics. Nova Science Publishers, Incorporated, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Intonation Units in Japanese Conversation: Syntactic, Informational and Functional Structures. Benjamins Publishing Company, John, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Functional and informational criterion"

1

Thomas, Erik G. F. "On Prohorov’s Criterion for Projective Limits." In Partial Differential Equations and Functional Analysis. Birkhäuser Basel, 2006. http://dx.doi.org/10.1007/3-7643-7601-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sharadin, Nathaniel P. "Epistemic Correctness and the Minimal Functional Criterion." In Epistemic Instrumentalism Explained. Routledge, 2022. http://dx.doi.org/10.4324/9781003096726-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Suchithra, M., and M. Ramakrishnan. "Non Functional QoS Criterion Based Web Service Ranking." In Proceedings of the International Conference on Soft Computing Systems. Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2674-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rutledge, Lloyd, and Koen van der Kruk. "Prompt Engineering for Analyzing Acceptance Criteria for Functional Requirements." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-98033-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Daniłowicz, Czesław, and Ngoc Thanh Nguyen. "Criteria and Functions for Expert Information Representation Choice." In Advances in Intelligent and Soft Computing. Physica-Verlag HD, 2001. http://dx.doi.org/10.1007/978-3-7908-1813-0_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Wentao. "Reference Materials: A Golden Criterion in Nucleic Acid Identification." In Functional Nucleic Acids Detection in Food Safety. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-1618-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Waltz, Jennifer, and Marsha M. Linehan. "Functional Analysis of Borderline Personality Disorder Behavioral Criterion Patterns." In Treatment of Personality Disorders. Springer US, 1999. http://dx.doi.org/10.1007/978-1-4757-6876-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Petitjean, Caroline, Nicolas Rougon, Françoise Prêteux, Philippe Cluzel, and Philippe Grenier. "Measuring Myocardial Deformations in Tagged MR Image Sequences Using Informational Non-rigid Registration." In Functional Imaging and Modeling of the Heart. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44883-7_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Seshasayee, Aswin Sai Narain. "5. Reading and organising the genome." In Bacterial Genomes. Open Book Publishers, 2025. https://doi.org/10.11647/obp.0446.05.

Full text
Abstract:
The genome is informational rather than functional. This information must be read or “expressed”, eventually producing proteins or functional RNA molecules, for the cell to be active. This is a tightly regulated process orchestrated by a complex network of interactions between regulatory proteins and other molecules. Functional regions on a genome are usually non-randomly positioned, and this, while driven by how the genome is replicated during reproduction, also enables efficient gene expression.
APA, Harvard, Vancouver, ISO, and other styles
10

Matsuda, Yoshitatsu, and Kazunori Yamaguchi. "The InfoMin Criterion: An Information Theoretic Unifying Objective Function for Topographic Mappings." In Artificial Neural Networks and Neural Information Processing — ICANN/ICONIP 2003. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44989-2_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Functional and informational criterion"

1

Patricio, Manuel, Mario Sousa, Paulo Matos, and Pedro Filipe Oliveira. "Unstructuring the sequentiality of commits into a semantic network with higher informational and functional quality." In 2024 International Conference on Engineering and Emerging Technologies (ICEET). IEEE, 2024. https://doi.org/10.1109/iceet65156.2024.10913956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reshetnikova, Irina V., Daniil V. Marshakov, and Sergey V. Sokolov. "Solution of Robust Stochastic Discrete Filtering Problem Based on Minimax Functional Criterion." In 2024 International Russian Automation Conference (RusAutoCon). IEEE, 2024. http://dx.doi.org/10.1109/rusautocon61949.2024.10694280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dugarte, M., and A. A. Sagüés. "Modeling Performance of Galvanic Point Anodes for Cathodic Prevention of Reinforcing Steel in Concrete Repairs." In CORROSION 2013. NACE International, 2013. https://doi.org/10.5006/c2013-02840.

Full text
Abstract:
Abstract This paper addresses projecting the performance of point anodes for patch repair applications as a function of service parameters and anode aging. Input data from a concurrent experimental test program are presented as well. Modeling of a generic patch configuration was implemented by a simplified finite-difference method. The model calculates the throwing distance that could be achieved for a given number of anodes per unit perimeter of the patch, concrete thickness, concrete resistivity, amount of steel, and amount of polarization needed for cathodic prevention. The model projections and aging data suggest that anode performance in likely application scenarios may derate soon, even if a relatively optimistic 100 mV corrosion prevention criterion were assumed. The effect of adopting less conservative criteria proposed in the literature is presented as well, and the need for supporting information is discussed.
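The paper's finite-difference model itself is not reproduced in the abstract; purely as a generic illustration of the technique, a one-dimensional attenuation model solved by relaxation might look like the following Python sketch (the geometry, the attenuation equation and all parameter values are invented for illustration, not taken from the paper):

import numpy as np

def polarization_profile(length_m=2.0, n=101, atten_m=0.5, v_anode_mv=150.0, tol=1e-8):
    # Solve d2V/dx2 = V / atten_m**2 by Jacobi relaxation: V is the polarization
    # shift (mV) along the steel, fixed at the anode, zero-gradient at the far end.
    dx = length_m / (n - 1)
    k = (dx / atten_m) ** 2
    v = np.zeros(n)
    v[0] = v_anode_mv
    for _ in range(500_000):
        v_new = v.copy()
        v_new[1:-1] = (v[:-2] + v[2:]) / (2.0 + k)  # finite-difference update
        v_new[-1] = v_new[-2]                       # insulated far boundary
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return np.linspace(0.0, length_m, n), v

x, v = polarization_profile()
reached = x[v >= 100.0]  # points still meeting a 100 mV polarization criterion
print(f"throwing distance ~ {reached.max():.2f} m")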
APA, Harvard, Vancouver, ISO, and other styles
4

leon, Roberto T. "Functional Recovery and Rehabilitation For New and Existing Infrastructure." In IABSE Congress, San José 2024: Beyond Structural Engineering in a Changing World. International Association for Bridge and Structural Engineering (IABSE), 2024. https://doi.org/10.2749/sanjose.2024.1039.

Full text
Abstract:
&lt;p&gt;The rehabilitation of existing structures and the incorporation of functional recovery principles for new structures are themes of great relevance for structural engineers in the USA today. Sustainability and community resilience paradigms require that building owners carefully assess conditions of existing infrastructure before deciding on whether to rehabilitate or replace the structure. Once that decision is made, another important criterion is whether the structure needs to comply with functional recovery performance goals. The limit state of functional recovery is required in the USA for new government buildings by a series of executive orders and presidential policy directives issued over the last 10 years. Recently, the Building Seismic Safety Council sponsored a workshop for federal agencies interested in these two topics. The conclusions of those workshops indicate that while most agencies are eager to adopt new guidelines, the agencies have little guidance or experience to help them satisfy these new requirements. In addition, it is unclear how new technologies and materials, manpower and training shortages, and cost issues will impact implementation. This paper reports on the background, code framework, and possible solutions to these issues.&lt;/p&gt;
APA, Harvard, Vancouver, ISO, and other styles
5

Hartmann, Mats, and Niklas Johansson. "COMPONENT KILL CRITERIA ESTIMATE FOR A SMALL COMBUSTION ENGINE—EXPERIMENTAL PART." In 34th International Symposium on Ballistics. Destech Publications, Inc., 2025. https://doi.org/10.12783/ballistics25/37155.

Full text
Abstract:
Vulnerability and lethality (V/L) assessments can provide probabilities of a successful engagement or probabilities of kill given hit. The assessments require information about the munition, the target and the engagement conditions. The targets used in V/L assessment software are normally described on a component level, in combination with ballistic protection performance and definitions of the primary functions of the target (e.g. mobility, fire power). In order to evaluate whether a target function or capability is lost after an engagement, it is necessary to determine the status of all the components that contribute to that specific function. A component kill criterion relates the damaging load on the component to a probability of rendering the component non-functional. The functional status of the complete target can then be derived from the status of each individual component by a fault tree analysis. This paper presents the experimental part of a work aiming to strengthen the process of defining component kill criteria. Projectiles were fired against running lawn mower engines. The engines were later disassembled in order to study the internal damage. The results generated here will be used in a subsequent simulation-based study aiming to find a kill criterion.
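The abstract does not give the functional form of a kill criterion; as a hedged illustration only, a common choice in vulnerability modeling is a monotone dose-response curve mapping the damaging load to a kill probability, combined by fault-tree logic (all names and parameter values below are hypothetical and would be fitted to firing-test data such as the engine tests described here):

import math

def p_kill(load, load50=250.0, slope=0.02):
    # Illustrative component kill criterion: logistic probability of rendering
    # a component non-functional; load50 is the load giving P(kill) = 0.5.
    return 1.0 / (1.0 + math.exp(-slope * (load - load50)))

def p_function_lost(p_kills):
    # Fault-tree style combination: the target function is lost if any of the
    # components it needs is killed (series logic; redundancy would use AND gates).
    survive_all = 1.0
    for p in p_kills:
        survive_all *= 1.0 - p
    return 1.0 - survive_all

print(p_function_lost([p_kill(180.0), p_kill(320.0)]))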
APA, Harvard, Vancouver, ISO, and other styles
6

Wesołowski, Mariusz, Krzysztof Blacha, and Piotr Barszcz. "Multi-Criteria Analysis in Assessment of the Degree of Degradation Pavement Elements Functional Airports Made of Cement Concrete." In Environmental Engineering. VGTU Technika, 2017. http://dx.doi.org/10.3846/enviro.2017.125.

Full text
Abstract:
An important factor affecting the safety of flight operations is the proper management of airports, which should be based on systematically obtained information about the state of the pavement of the functional elements. One of the characteristic indicators of the technical condition of an airport is the assessment of the degree of degradation. It should be noted that degradation is a slow process extended in time: external influences lower the properties of the structure, which in turn generates changes in its condition. The assessment of pavement degradation should be conducted periodically, with the period estimated on the basis of information obtained from the operation of aircraft. The downgrading of a pavement is determined from the type and quantity of damage found and repairs made during surveys. The basis for evaluating the degree of pavement degradation is data from surveys conducted using the visual method, together with an inventory of the damage. Research by the visual method, despite its apparent simplicity, is difficult to implement: the assignment of damage or repairs to the appropriate group is often not clear-cut, and therefore the inventory process is described in the documentation of the quality management system. The multi-criteria analysis is a weighted evaluation method supporting the estimation of the degree of degradation of airfield pavements based on data obtained through the inspections performed. It rests on the determination of a number of criteria for selecting a variant, assigning a different weight to each criterion. The value of the indicator characterizing the degree of pavement degradation with respect to the estimated criteria makes it possible to schedule the resources needed to carry out repairs and to plan repairs rationally.
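A weighted multi-criteria indicator of the kind described can be sketched in a few lines of Python (the criterion names, ratings and weights below are invented placeholders, not the paper's values):

def degradation_index(ratings, weights):
    # Normalized weighted sum: each damage/repair criterion contributes its
    # rating (0 = none, 1 = severe) scaled by the weight assigned to it.
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

ratings = {"cracking": 0.6, "spalling": 0.3, "joint_damage": 0.8}
weights = {"cracking": 3.0, "spalling": 2.0, "joint_damage": 1.0}
print(round(degradation_index(ratings, weights), 3))  # -> 0.533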
APA, Harvard, Vancouver, ISO, and other styles
7

Fujita, Kikuo, Naoki Ono, and Yutaka Nomaguchi. "Two-Dimensional Lineup Design of Intermediate Functional Products With Enumerative Optimization of Combinatorial Mini-Max Optimality." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-90899.

Full text
Abstract:
Abstract As mechanical products have become more complicated and diversified, the integrative design of their components, such as motors, fans, and pumps, through commonalization has become essential for shorter lead times and greater cost savings behind the mainstream. This paper names such products intermediate functional products and proposes a lineup design method for them. Scale and coefficient identify their performance. Thus, they are arranged over a two-dimensional space of design requirements to minimize the lead time and production cost while maintaining performance optimality. The former cannot be measured quantitatively before actual production; thus, minimizing process complexity is demanded instead. The latter criterion translates into optimizing the worst-case performance across the requirement space. This understanding leads to a design method that enumerates possible commonalization patterns, exclusively arranges the lineup for each pattern optimally, and investigates the best lineup through trade-off analysis. While the lineup arrangement under a pattern takes a mini-max type formulation, its optimization is performed efficiently based on monotonicity analysis. This paper demonstrates the formulation and procedures through an application to universal motors.
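The enumerate-then-mini-max idea can be illustrated with a toy one-dimensional stand-in (the requirement values and the oversizing objective below are invented; the paper works over a two-dimensional scale-coefficient space with a monotonicity-based inner optimization):

from itertools import combinations

requirements = [10, 14, 19, 25, 33, 44, 58, 76, 100]  # hypothetical demand points

def worst_case_oversize(lineup, reqs):
    # Mini-max inner objective: each requirement is served by the smallest
    # lineup member covering it; return the worst relative oversizing.
    worst = 0.0
    for r in reqs:
        feasible = [p for p in lineup if p >= r]
        if not feasible:
            return float("inf")  # lineup cannot cover this requirement
        worst = max(worst, (min(feasible) - r) / r)
    return worst

def best_lineup(reqs, n_products):
    # Outer enumeration over commonalization patterns: every subset of size
    # n_products containing the largest requirement is a candidate lineup.
    pool = sorted(set(reqs))
    return min(
        (c for c in combinations(pool, n_products) if pool[-1] in c),
        key=lambda c: worst_case_oversize(c, reqs),
    )

lineup = best_lineup(requirements, 3)
print(lineup, worst_case_oversize(lineup, requirements))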
APA, Harvard, Vancouver, ISO, and other styles
8

Jain Sudhir, Prathik, Ravindra Holalu Venkatadas, Naveen Prakash Goravi Vijaya Dev, and Ugrasen Gonchikar. "Estimation and Comparison of Acoustic Emission Parameters and Surface Roughness in Wire Cut Electric Discharge Machining of Stavax Material Using Multiple Regression Analysis and Group Method Data Handling Technique." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-50596.

Full text
Abstract:
Wire Electrical Discharge Machining (WEDM) is a specialized thermal machining process capable of accurately machining parts of varying hardness or complex shapes with sharp edges that are very difficult to machine by mainstream machining processes. The selection of cutting parameters for obtaining higher cutting efficiency or accuracy in WEDM is still not fully solved, even with the most up-to-date CNC WEDM machines. It is widely recognised that Acoustic Emission (AE) is gaining ground as a monitoring method for health diagnosis of rotating machinery. The advantage of AE monitoring over vibration monitoring is that AE monitoring can detect the growth of subsurface cracks, whereas vibration monitoring can detect defects only when they appear on the surface. This study outlines the estimation of AE parameters, viz., signal strength, absolute energy and RMS, in WEDM. Stavax (modified AISI 420) steel was machined using different process parameters based on Taguchi's L16 standard orthogonal array. Among the process parameters, voltage and flush rate were kept constant, while pulse-on time, pulse-off time, current and bed speed were varied. A molybdenum wire with a diameter of 0.18 mm was used as the electrode. Simple functional relationships between the parameters were plotted to extract possible information on surface roughness and AE signals, but these simpler methods of analysis did not provide any information about the status of the work material. Thus, more sophisticated methods capable of integrating information from multiple sensors are required. Hence, Multiple Regression Analysis (MRA) and the Group Method of Data Handling (GMDH) have been applied for the estimation of surface roughness, AE signal strength, AE absolute energy and AE RMS. The GMDH algorithm is designed to learn the process by training the algorithm with the experimental data. The experimental observations are divided into two sets: the training set and the testing set. The training set is used to make the GMDH learn the process, and the testing set checks the performance of the GMDH. Different models can be obtained by varying the percentage of data in the training set, viz., 50%, 62.5% and 75%, and the best model is selected from these. The number of variables selected at each layer is usually taken as a fixed number or a constantly increasing number, usually given as a fractional increase in the number of independent variables present in the previous level. Three different criterion functions, viz., the Root Mean Square (Regularity) criterion, the Unbiased criterion and the Combined criterion, were considered for the estimation. The choice of criterion for node selection is another important parameter for proper modeling. From the results it was observed that the estimated AE parameters and surface roughness values correlate better with GMDH than with MRA.
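The GMDH selection step described above (fit on a training subset, rank candidate models by an external criterion on the testing subset) can be sketched with a toy polynomial stand-in (synthetic data; the actual paper fits multi-input GMDH layers, not one-dimensional polynomials):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 40)
y = 2.0 + 1.5 * x - 0.8 * x**2 + rng.normal(0.0, 0.05, x.size)  # synthetic data

split = int(0.625 * x.size)  # e.g. 62.5% training, one of the splits in the paper
x_tr, y_tr, x_te, y_te = x[:split], y[:split], x[split:], y[split:]

best_degree, best_rms = None, np.inf
for degree in range(1, 6):                   # candidate models of growing complexity
    coeffs = np.polyfit(x_tr, y_tr, degree)  # fit on the training set only
    rms = np.sqrt(np.mean((np.polyval(coeffs, x_te) - y_te) ** 2))  # regularity criterion
    if rms < best_rms:
        best_degree, best_rms = degree, rms

print(best_degree, round(best_rms, 4))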
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Y. S., B. S. Jang, Y. S. Song, Y. S. Yeon, and S. H. Do. "Application of Design Axioms to Marine Design Problems." In ASME 2000 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/detc2000/dtm-14558.

Full text
Abstract:
Abstract The Design Axioms proposed by N. P. Suh consist of the Independence Axiom and the Information Axiom. The Independence Axiom assists a designer in generating good design alternatives by considering the relations between the functions and the physical product through a hierarchical mapping procedure. The Information Axiom, which is related to the probability of achieving the given functional requirements, can be used as a criterion for selecting the best solution among the proposed alternatives in the conceptual or preliminary design stage. In the early stages of marine design, especially ship design, there is much uncertainty because of the size and complexity of a marine vehicle. This uncertainty often leads to a probabilistic rather than a deterministic approach. Ship design is mostly routine design, slightly modifying an existing design case. In this paper, the applicability of the Design Axioms in the marine design field is investigated through three examples. In the conceptual design of a thruster, the Independence Axiom is shown to be useful in examining the independence of functional requirements at each level of the decomposition process. In the main engine selection example, the Information Axiom is used to select the best solution among the given alternatives by estimating their respective information contents under uncertain and ambiguous conditions. In structural design, difficulties arise in maintaining the independence of functional requirements, because in general the number of design parameters is greater than the number of functional requirements. Therefore, generalizing the application of the Design Axioms for structural design is troublesome, especially for the preliminary design stage, where the principal design parameters of a design object have to be determined after its shape is fixed. This paper attempts a generalized approach to similarity-based design, where it is important to select which parameters should be changed and in what order. How to make use of the Design Axioms is shown through a barge design example. However, much research is still needed for the generalized application of the Design Axioms to structural design.
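For reference, the information content used by the Information Axiom to rank alternatives is conventionally written (standard form from axiomatic design, not quoted from this paper), in LaTeX notation:

I = \log_2 \frac{1}{p} = -\log_2 p, \qquad I_{\text{total}} = \sum_{i=1}^{n} I_i \quad \text{(independent FRs)},

where p is the probability that the design satisfies its functional requirement; the alternative with the smallest total information content is preferred.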
APA, Harvard, Vancouver, ISO, and other styles
10

Jahangir, Ebad, and Dan Frey. "Differential Entropy As a Measure of Information Content in Axiomatic Design." In ASME 1999 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/detc99/dtm-8751.

Full text
Abstract:
Abstract Axiomatic design theory aims to put the design process on a more scientific footing and is based on two axioms, viz., the independence axiom and the information axiom. Several quantitative measures of the degree of independence between functional requirements have been defined in the past, and their use illustrated through numerous examples. However, very little work exists on quantifying the information content of a design. In this paper, we outline the existing measures of information content and propose a more general quantitative measure. This measure is based on the concept of entropy from information theory. The cases of discrete and differential entropy are examined in the context of axiomatic design, and differential entropy is proposed as a measure of information content. A case study is presented which demonstrates and compares the use of quantitative measures of information content in a design. It is shown that differential entropy offers a more general measure of information content in a design than Suh's information measure or Taguchi's signal-to-noise ratio and, therefore, may serve as a decision criterion in engineering design.
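The two entropies compared in the paper take their usual forms (standard definitions), in LaTeX notation:

H(X) = -\sum_i p_i \log_2 p_i, \qquad h(X) = -\int_{-\infty}^{\infty} f(x)\,\ln f(x)\,dx,

discrete entropy for a probability distribution \{p_i\} and differential entropy for a density f; the paper argues for h(X) as the more general measure of a design's information content.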
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Functional and informational criterion"

1

Sikora, Yaroslava B., Olena Yu Usata, Oleksandr O. Mosiiuk, Dmytrii S. Verbivskyi, and Ekaterina O. Shmeltser. Approaches to the choice of tools for adaptive learning based on highlighted selection criteria. [б. в.], 2021. http://dx.doi.org/10.31812/123456789/4447.

Full text
Abstract:
The article substantiates the relevance of adaptive learning for students in the modern information society and reveals the essence of such concepts as "adaptability" and "adaptive learning system". It is determined that a necessary condition for adaptive education is an adaptive learning environment that provides opportunities for advanced education, the development of key competencies, and the formation of a flexible personality that is able to respond to changes, solve different problems effectively and achieve results. The authors focus on the technical aspect of adaptive learning. Different classifications of adaptability are analyzed. The described approach to the choice of adaptive learning tools is based on the characteristics of the product quality model defined in the ISO/IEC 25010 standard. The criteria for selecting adaptive learning tools are functional compliance, compatibility, practicality, and support. By means of the expert assessment method, the most important adaptive learning tools were identified and selected, namely: Acrobatiq, Fishtree, Knewton (now Wiley), Lumen, Realize it, Smart Sparrow (now Pearson). Comparative tables for each of the selected adaptive learning tools according to the indicators of the chosen criteria are given.
APA, Harvard, Vancouver, ISO, and other styles
2

Olefirenko, Nadiia V., Ilona I. Kostikova, Nataliia O. Ponomarova, Liudmyla I. Bilousova, and Andrey V. Pikilnyak. E-learning resources for successful math teaching to pupils of primary school. [б. в.], 2019. http://dx.doi.org/10.31812/123456789/3266.

Full text
Abstract:
Ukrainian primary schools are undergoing significant changes under the "New Ukrainian School" reform, reflecting the rapid updating of information technology and the high level of children's informational activity. Primary schools are basically focused on developing subject knowledge and general study skills. One of the ways of developing them is to use tools and apps. The article gives examples of the use of interactive tools and apps for teaching Math to young learners by teachers-to-be, and presents experimental data about training teachers-to-be to use tools and apps. Interactive tools and apps provide real task variability, uniqueness of exercises, operative assessment and correction, adjustment of task difficulty, and a shade of competitiveness and gaming in the exercises. To create their own apps, teachers-to-be use tools that are part of the integrated Microsoft Office package, designing environments, and other simple and convenient programs. The article presents experimental data about the results of training teachers-to-be to create apps. A set of criteria for the creation of apps was developed and checked in the experimental research, such as the ability to develop apps, knowledge and understanding of the functional capabilities of apps, knowledge of tools for creating apps and their functional capabilities, the ability to select and formulate tasks for young learners, and the ability to adequately assess the quality of the developed apps.
APA, Harvard, Vancouver, ISO, and other styles
3

Uchitel, Aleksandr D., Ilona V. Batsurovska, Nataliia A. Dotsenko, Olena A. Gorbenko, and Nataliia I. Kim. Implementation of future agricultural engineers' training technology in the informational and educational environment. [б. в.], 2021. http://dx.doi.org/10.31812/123456789/4440.

Full text
Abstract:
The article presents the implementation of a technology for training future agricultural engineers in an informational and educational environment. To train future agricultural engineers, it is advisable to create tutorials for the study of each discipline in the informational and educational environment. Such tutorials assist in mastering both the theoretical material and course navigation, and present interactive electronic learning tools for performing tasks in the informational and educational environment. Higher education students perform such tasks directly in the classroom with the help of gadgets or personal computers. The final grade is formed from the scores obtained in the classroom and the student's rating while studying in the informational and educational environment. The outlined approach can help improve the quality of learning content. The use of interactive audiovisual online tools allows the theoretical, practical and experimental provisions to be presented clearly, which is important for the training of future agricultural engineers. At the end of the experiment, it can be argued that the developed technology increases the level of motivation and self-incentive to work in the informational and educational environment. The application of the presented technology makes it possible to combine the educational process in the classroom with learning in the informational and educational environment, and forms analytical abilities and competencies for professional activity. The reliability of the obtained results was checked using the λ Kolmogorov-Smirnov criterion. It was determined that when this technology is used in the educational process, the indicators in the experimental group increase, which demonstrates the effectiveness of training bachelors in agricultural engineering in an informational and educational environment.
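The λ (Kolmogorov-Smirnov) check reported in this abstract can be reproduced in outline with scipy (the score samples below are invented placeholders, not the study's data):

from scipy.stats import ks_2samp

control      = [61, 64, 68, 70, 72, 73, 75, 78, 80, 82]  # hypothetical final scores
experimental = [68, 71, 74, 76, 79, 81, 83, 85, 88, 90]

stat, p_value = ks_2samp(control, experimental)  # two-sample Kolmogorov-Smirnov test
print(f"D = {stat:.3f}, p = {p_value:.3f}")      # small p -> the distributions differ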
APA, Harvard, Vancouver, ISO, and other styles
4

Hlushak, Oksana M., Volodymyr V. Proshkin, and Oksana S. Lytvyn. Using the e-learning course “Analytic Geometry” in the process of training students majoring in Computer Science and Information Technology. [б. в.], 2019. http://dx.doi.org/10.31812/123456789/3268.

Full text
Abstract:
As a result of a literature analysis, the expediency of free access for bachelors majoring in Computer Science and Information Technology to modern information educational resources, in particular e-learning courses, in the process of studying mathematical disciplines is substantiated. It was established that an e-learning course is a complex of teaching materials and educational services created for the organization of individual and group training using information and communication technologies. Based on the outlined possibilities of applying an e-learning course, as well as its didactic functions, the structure of the certified e-learning course "Analytic Geometry" based on the Moodle platform was developed and described. The features of applying cloud-oriented resources (Desmos, Geogebra, Wolfram|Alpha, Sage) in the study of the discipline "Analytic Geometry" are considered. The results of the pedagogical experiment on the basis of Borys Grinchenko Kyiv University and A. S. Makarenko Sumy State Pedagogical University are presented. The experiment was conducted to verify the effectiveness of the implementation of the e-learning course "Analytic Geometry". Using the Pearson criterion, it is proved that there are significant differences in the level of mathematical preparation between the experimental and control groups of students. The prospect of further scientific research lies in assessing the effectiveness of the use of e-learning courses for the improvement of additional professional competences of students majoring in Computer Science and Information Technology (specializations "Programming", "Internet of Things").
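The Pearson criterion mentioned here is typically the chi-squared test applied to level-count contingency tables; a minimal sketch with invented counts (not the study's data):

from scipy.stats import chi2_contingency

observed = [[18, 25, 7],   # control group: low / medium / high preparation level
            [9, 24, 17]]   # experimental group

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")  # small p -> groups differ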
APA, Harvard, Vancouver, ISO, and other styles
5

Meidan, Rina, and Robert Milvae. Regulation of Bovine Corpus Luteum Function. United States Department of Agriculture, 1995. http://dx.doi.org/10.32747/1995.7604935.bard.

Full text
Abstract:
The main goal of this research plan was to elucidate the regulatory mechanisms controlling the development and function of the bovine corpus luteum (CL). The CL contains two different steroidogenic cell types, and it was therefore necessary to obtain pure cell populations. A system was developed in which granulosa and theca interna cells, isolated from a preovulatory follicle, acquired characteristics typical of large (LL) and small (SL) luteal cells, respectively, as judged by several biochemical and morphological criteria. Experiments were conducted to determine the effects of granulosa cell removal on subsequent CL function; the results obtained support the concept that granulosa cells make a substantial contribution to the output of progesterone by the cyclic CL but may have a limited role in determining the functional lifespan of the CL. This experimental model was also used to better understand the contribution of follicular granulosa cells to subsequent luteal SCC mRNA expression. The mitochondrial cytochrome side-chain cleavage enzyme (SCC), which converts cholesterol to pregnenolone, is the first and rate-limiting enzyme of the steroidogenic pathway. Experiments were conducted to characterize the gene expression of P450scc in the bovine CL. Levels of P450scc mRNA were higher during the mid-luteal phase than in either the early or late luteal phases. PGF2a injection decreased luteal P450scc mRNA in a time-dependent manner; levels were significantly reduced by 2 h after treatment. CLs obtained from heifers on day 8 of the estrous cycle from which granulosa cells had been removed showed a 45% reduction in the levels of mRNA for SCC enzymes as well as a 78% reduction in the numbers of LL cells. To characterize SCC expression in each steroidogenic cell type we utilized pure cell populations. Upon luteinization, LL expressed 2-3 fold higher amounts of both SCC enzyme mRNAs than SL. Moreover, eight days after stimulant removal, LL retained their P4 production capacity, expressed P450scc mRNA and contained this protein. In our attempts to establish the in vitro luteinization model, we had to select preovulatory, pre-gonadotropin-surge follicles. The often-used estradiol:P4 ratio was unreliable, since P4 levels are high both in atretic follicles and in preovulatory post-gonadotropin follicles. We therefore examined whether oxytocin (OT) levels in follicular fluids could enhance our ability to correctly and easily define follicular status. Based on E2 and OT concentrations in follicular fluids, we could more accurately identify follicles that are preovulatory and post-gonadotropin surge. Next we studied OT biosynthesis in granulosa cells; cells incubated with forskolin contained stores of the precursor, indicating that forskolin (which mimics gonadotropin action) is an effective stimulator of OT biosynthesis and release. While studying in vitro luteinization, we noticed that IGF-I-induced effects were not identical to those induced by insulin, despite the fact that megadoses of insulin were used. This was the first indication that the cells may secrete IGF binding protein(s) which recognize IGFs and not insulin. In a detailed study involving several techniques, we characterized the species of IGF binding proteins secreted by luteal cells. The effects of exogenous polyunsaturated fatty acids and arachidonic acid on the production of P4 and prostanoids by dispersed bovine luteal cells were examined. The addition of eicosapentaenoic acid and arachidonic acid resulted in a dose-dependent reduction in basal and LH-stimulated biosynthesis of P4 and PGI2 and an increase in the production of PGF2a and 5-HETE. In the presence of indomethacin, an inhibitor of the cyclooxygenase pathway of arachidonic acid metabolism, the production of 5-HETE was unaffected. The results of these experiments suggest that the inhibitory effect of arachidonic acid on the biosynthesis of luteal P4 is due either to a direct action of arachidonic acid or to its conversion to 5-HETE via the lipoxygenase pathway of metabolism. The detailed and important information gained by the two labs elucidated the mode of action of factors crucially important to the function of the bovine CL. The data indicate that follicular granulosa cells make a major contribution to the numbers of large luteal cells, OT, basal P4 production, and the content of cytochrome P450scc. Granulosa-derived large luteal cells have distinct features: when luteinized, the cell no longer possesses LH receptors and its cAMP response is diminished, yet P4 synthesis is sustained. This may imply that the maintenance of P4 during critical periods such as pregnancy recognition, even in the absence of a luteotropic signal, is dependent on the proper luteinization and function of the large luteal cell.
APA, Harvard, Vancouver, ISO, and other styles
6

Vélez, Rómulo Andrés, Alejandro Fereño Caceres, Wilson Daniel Bravo Torres, Daniela Astudillo Rubio, and Jacinto José Alvarado Cordero. Primary stability with the osseodensification drilling technique for dental implants in low density bone in humans: a systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2022. http://dx.doi.org/10.37766/inplasy2022.9.0066.

Full text
Abstract:
Review question / Objective: - Does the osseodensification drilling technique increase primary stability in low-density bone? - The aim of the present investigation was to evaluate the primary stability of dental implants in people with low-density bone using the osseodensification technique. Condition being studied: The replacement of missing teeth with dental implants is currently the most widely practiced approach in dental clinics. The main criterion for determining the success of an implant is osseointegration, which is a direct structural and functional connection between vital bone and the prosthetic load-bearing surface of an implant. Likewise, primary stability must be obtained for a good, lasting clinical result of the implant, and to achieve this purpose the bone density must be evaluated where the dental implant is to be placed. Salah Huwais introduced in 2013 a new osteotomy procedure (osseodensification) for site preparation without bone removal, preserving the bone. The osseodensification process produces an autograft layer around the implant along the osteotomy surface; the autologous bone comes into contact with the endosteal device, which accelerates osseointegration due to the nucleation of osteoblasts in the instrumented bone adjacent to the implant and provides greater primary stability due to contact between the device and the bone.
APA, Harvard, Vancouver, ISO, and other styles
7

Christensen, Steen. Documentation of Edcrop – version 1: A Python package to simulate field-scale evapotranspiration and drainage from crop, wetland, or forest. Royal Danish Library, 2024. http://dx.doi.org/10.7146/aul.539.

Full text
Abstract:
Evapotranspiration is one of the major components of Earth's water balance, being the sum of evaporation and plant transpiration from the land and ocean surface. This report documents edcrop, a Python package that uses climate input to simulate field-scale evapotranspiration and drainage from the root zone of an area covered with a crop, a wetland, or a forest. The conceptual model implemented in edcrop is a modification of the Evacrop model by Olesen and Heidmann (2002), which itself builds on the Watcros model by Aslyng and Hansen (1982). The edcrop conceptualization is based on considerations regarding the physical processes that are important for turning precipitation and irrigation into either evaporation, transpiration, or drainage from the root zone: Temperature determines whether precipitation falls as rain or snow, and it determines when snow thaws and infiltrates. The vegetation intercepts part of the precipitation, while the rest infiltrates into the ground. The infiltrated water will either evaporate, be absorbed by plant roots, be stored in the soil, or drain from the root zone. Potential evaporation is distributed between vegetation and soil, where the former part drives evaporation of intercepted water and plant transpiration from the green leaf area, while the latter part drives evaporation from the soil. The soil's ability to store water depends on its field capacity; when the water content exceeds field capacity, water gradually drains downwards. Furthermore, it is assumed that the annual life cycle of crops and wetland vegetation can be described by growing degree-days alone, while for forests the life cycle is described by a calendar. For irrigation, either (i) date and amount are given as input, or (ii) they are determined automatically by edcrop using certain criteria. There are two alternative soil water balance functions to choose between in edcrop. The first alternative is an almost straight copy of the function used in the original Evacrop code by Olesen and Heidmann (2002), simulating flow through the soil profile as flow through two linear reservoirs using daily time steps; however, it can simulate macro-pore drainage, which the original Evacrop cannot. The second alternative simulates flow through the soil profile as flow through four linear or nonlinear reservoirs using daily or sub-daily time steps. For nonlinear reservoirs, edcrop uses Mualem-van Genuchten-like functions. It also simulates gravity-driven macro-pore flow as well as precipitation loss due to surface runoff. As input, given in text files, edcrop requires daily temperature, precipitation, and reference evapotranspiration. It also requires information about the combination(s) of soil type and vegetation type to simulate. One can choose between seven default soil types and fifteen default vegetation types, or one can manually input information for other types of soil or vegetation. In a single model run, edcrop can loop through lists of climate files, soils, and vegetation types. Edcrop can be imported and used in one's own Python script, or it can be executed from the command line as a script.
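The linear-reservoir water balance sketched in this summary can be illustrated in a few lines of Python (a simplification of the Evacrop idea for a single reservoir; the parameter values are illustrative and are not edcrop's defaults):

def step_root_zone(storage, infiltration, et_actual, field_capacity, k_drain=0.3):
    # One daily step of a linear root-zone reservoir: water in excess of field
    # capacity drains at a rate proportional to the excess (linear reservoir).
    storage = storage + infiltration - et_actual
    drainage = max(0.0, k_drain * (storage - field_capacity))
    return storage - drainage, drainage

storage, fc = 60.0, 80.0  # mm of stored water and field capacity (illustrative)
for rain, et in [(12.0, 2.5), (35.0, 1.8), (0.0, 3.2)]:
    storage, drain = step_root_zone(storage, rain, et, fc)
    print(f"storage = {storage:5.1f} mm, drainage = {drain:4.1f} mm")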
APA, Harvard, Vancouver, ISO, and other styles
8

Sedlacek, Guilherme Luis. Is Gender Being Mainstreamed in Bank's Projects? Inter-American Development Bank, 2010. http://dx.doi.org/10.18235/0010556.

Full text
Abstract:
In May 2010, the Bank presented a draft "Operational Policy on Gender Equality in Development" that, if approved, would replace its "Operational Policy on Women in Development" (OP-761). As stated in the draft document, the new Operational Policy intends to overcome a number of challenges that emerged under the current OP-761 Policy and demand urgent attention. One of those challenges is the limited progress made so far in evaluating the results and overall performance of the Bank's Gender Equality (GE) interventions. One of the preconditions for evaluating a project's performance is ensuring the availability of the information needed to ascertain how the intervention functioned, from its very beginning through completion. For that purpose, projects need to establish their evaluation criteria upfront, propose indicators with baseline values, and define targets and milestones for them. This evaluation seeks to contribute to the enhancement of the Bank's work in the field of GE by examining the gender mainstreaming efforts of 21 projects in addressing these conditions. With that aim, it focuses on the projects' efforts to document the distribution of the benefits that they provide and their effects disaggregated by gender.
APA, Harvard, Vancouver, ISO, and other styles
9

Tiku, Sanjay, Amin Eshraghi, Aaron Dinovitzer, and Arnav Rana. PR-214-114500-R01 Fatigue Life Assessment of Dents with and without Interacting Features. Pipeline Research Council International, Inc. (PRCI), 2018. http://dx.doi.org/10.55274/r0011540.

Full text
Abstract:
The long-term integrity of a dented pipeline segment is a complex function of a variety of parameters, including pipe geometry, indenter shape, dent depth, indenter support, and the pressure history at and following indentation. In order to estimate the safe remaining operational life of a dented pipeline, all of these factors must be accounted for in the assessment. The current project provides pipeline operators with a methodology for assessing and managing dent fatigue, thus making it possible to prioritize response and remedial action(s) in an informed manner. The methodology allows users to carry out dent ranking/prioritization and dent fatigue life assessment. In addition to the development of the plain dent fatigue life assessment methodology, dent-weld and dent-metal-loss interaction criteria have been developed to ascertain their effect on the fatigue life of a dent. The dent assessment methodology utilizes dent shape information that can be derived from in-line inspection (ILI) data, operating pressure spectra, and pipeline material grade. A three-level approach has been developed for assessing the fatigue life, or cyclic-pressure-loading-dependent failure, of pipeline dents. All three assessment levels draw upon pipeline operational, material, and mechanical damage data. The assessment level selection and the accuracy of the results depend upon the complexity of the features, the availability of required data, and the level of detail and certainty in the input data. The three levels provide a range of alternatives for integrity management, where the appropriate method to use is dependent on the desired outcome and the available information.
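The report's three-level methodology is not reproduced here, but the general shape of a screening-level fatigue calculation of this kind — accumulating damage from a cyclic pressure spectrum through an S-N curve and Miner's rule — can be sketched in Python as follows. The curve constants, names, and example spectrum are illustrative assumptions, not values from the PRCI methodology.

    # Screening-level fatigue life from a cyclic stress spectrum using a
    # generic S-N curve and Miner's rule. C and m are placeholder constants,
    # not PRCI values; in practice the spectrum would come from rainflow
    # counting of the operating pressure history.
    def cycles_to_failure(stress_range_mpa, C=1e12, m=3.0):
        return C * stress_range_mpa ** (-m)    # generic S-N curve: N = C * S^-m

    def annual_miner_damage(spectrum):
        # spectrum: list of (stress range in MPa, cycles per year) pairs
        return sum(n / cycles_to_failure(s) for s, n in spectrum)

    spectrum = [(80.0, 5000), (40.0, 20000)]   # illustrative annual spectrum
    print(1.0 / annual_miner_damage(spectrum)) # estimated life in years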
APA, Harvard, Vancouver, ISO, and other styles
10

Sánchez-Pájaro, Andrés, Tonatiuh Barrientos-Gutiérrez, and Carolina Pérez-Ferrer. Social and built environment interventions to prevent alcohol, tobacco, and legal cannabis use: a scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2023. http://dx.doi.org/10.37766/inplasy2023.5.0101.

Full text
Abstract:
Eligibility criteria: We will use the following inclusion criteria:
1) Document must mention by name or describe at least one intervention, strategy, program, or policy to prevent alcohol, tobacco, and legal cannabis use.
2) Document must contain enough information for the researchers to determine whether the intervention, strategy, program, or policy was aimed at modifying the social and/or built environment.
3) Intervention, strategy, program, or policy must have been aimed at modifying the social and/or built environment, using the following definitions. Social environment: "…the immediate physical surroundings, social relationships, and cultural milieus within which defined groups of people function and interact… Social environments can be experienced at multiple scales, often simultaneously, including households, kin networks, neighborhoods, towns and cities, and regions…". Built environment: "the surroundings or conditions designed and built through human intervention, where a person lives or operates".
4) Document must mention that the intervention/strategy/program/policy has been implemented within the last 30 years (1992-2022), whatever the setting, time frame, or subpopulation.
5) Document must be within the body of scientific literature (peer-reviewed articles, research journal commentaries, editorials, or perspective pieces), a published book or book chapter, a government, multinational organization, or non-profit organization report, or a dissertation/thesis.
6) Document must not be a conference abstract, public letter, speech transcript, budget report, independent website post or blog, or news article.
7) Document must be in English or Spanish.
8) Document must be open source, publicly available online, or accessible through the INSP's library services.
APA, Harvard, Vancouver, ISO, and other styles