
Dissertations / Theses on the topic 'Logical effort'


1

Wunderlich, Richard Bryan. "CMOS gate delay, power measurements and characterization with logical effort and logical power." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31652.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Paul Hasler; Committee Member: David V Anderson; Committee Member: Saibal Mukhopadhyay. Part of the SMARTech Electronic Thesis and Dissertation Collection.
2

Alegretti, Caio Graco Prates. "Analytical logical effort formulation for local sizing." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/97867.

Abstract:
The microelectronics industry has been relying increasingly upon cell-based design methodology to face the growing complexity in the design of digital integrated circuits, since cell-based integrated circuits are designed faster and more cheaply than full-custom circuits. Nevertheless, in spite of advancements in the field of Electronic Design Automation, cell-based digital integrated circuits show inferior performance compared with full-custom circuits. Therefore, it is desirable to find ways to bring the performance of cell-based circuits closer to that of full-custom circuits without compromising design costs. Bearing this goal in mind, this thesis presents contributions towards an automatic local-optimization flow for cell-based digital circuits. By local optimization is meant circuit optimization within small context windows, in which optimizations are done taking the global context into account. Local optimization may thus include the detection and isolation of critical regions of the circuit and the generation of logic and transistor networks; these networks are sized according to the existing design constraints. Since local optimizations act in a reduced context, several solutions may be obtained under the local constraints, out of which the fittest is chosen to replace the original subcircuit (critical region). The specific contribution of this thesis is a subcircuit sizing method capable of obtaining minimum-active-area solutions while respecting the maximum input capacitance, the output load to be driven, and the imposed delay constraint. The method is based on the logical effort formulation, and the main contribution is to differentiate the area expression analytically to obtain minimum area, instead of differentiating the delay to obtain minimum delay, as in the traditional logical effort formulation. Electrical simulations show that the proposed method is very precise for a first-order approach, with average errors of 1.48% in power dissipation, 2.28% in propagation delay, and 6.5% in transistor sizes.
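Both of the theses above build on the logical effort model, in which the delay of a stage is d = g·h + p (g: logical effort, h: electrical effort, p: parasitic delay, all in units of τ). A minimal numeric sketch of the model and of the classical minimum-delay result, illustrative only and not code from either thesis:

```python
# Minimal sketch of the logical-effort delay model (illustrative, not thesis
# code): stage delay d = g*h + p, with g = logical effort, h = electrical
# effort (Cout/Cin), p = parasitic delay, all normalized to tau.

def stage_delay(g, h, p):
    """Delay of one stage in tau units."""
    return g * h + p

def path_min_delay(gs, ps, path_electrical_effort):
    """Classical result: minimum path delay is reached when every stage
    bears the same stage effort f_hat = F**(1/N), where F = G * H
    (branching effort ignored in this sketch)."""
    n = len(gs)
    G = 1.0
    for g in gs:
        G *= g                              # path logical effort
    F = G * path_electrical_effort          # path effort
    f_hat = F ** (1.0 / n)                  # optimal effort per stage
    return n * f_hat + sum(ps)

# Example: an inverter (g=1, p=1) driving a 2-input NAND (g=4/3, p=2),
# with a path electrical effort H = 9.
print(path_min_delay([1.0, 4.0 / 3.0], [1.0, 2.0], 9.0))
```

Alegretti's contribution replaces the delay derivative in this formulation with an analytical derivative of active area, so that area rather than delay is minimized subject to the delay constraint.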
3

Galvis, Jorge Alberto. "Low-power flip-flop using internal clock gating and adaptive body bias." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001465.

4

Yongyi, Yuan. "Investigation and implementation of data transmission look-ahead D flip-flops." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2529.

Abstract:

This thesis investigates four D flip-flops with data-transmission look-ahead circuits. Logical effort and power-delay products are used to resize all the transistor widths along the critical path in µm CMOS technology. The main goal is to verify that this kind of circuit can be used when the input data have low switching probabilities. Comparing the average energy consumption of normal D flip-flops and D flip-flops with look-ahead circuits shows that the flip-flops with look-ahead circuits consume less power when data switching activity is low.

5

Veřmiřovský, Jakub. "Koevoluce v evolučním návrhu obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255429.

Abstract:
This thesis deals with evolutionary design of digital circuits performed by Cartesian genetic programming and optimized by coevolution. The algorithm coevolves fitness predictors that are optimized for a population of candidate digital circuits. The thesis presents the theoretical basis, especially genetic programming, coevolution in genetic programming, and design of digital circuits, and discusses possibilities for utilizing coevolution in combinational circuit design. On the basis of this proposal, an application for designing and optimizing logic circuits is implemented, and its functionality is verified on five test tasks. Cartesian genetic programming with and without coevolution is compared, and logic circuits evolved by each variant are then compared with conventional design methods. Evolution using coevolution reduced the number of circuit evaluations compared with standard Cartesian genetic programming without coevolution, and in some cases found solutions with better parameters (i.e., fewer logic gates or less delay).
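Cartesian genetic programming, the representation used above, encodes a circuit as a fixed grid of gate nodes, each referencing earlier nodes or primary inputs. A minimal evaluation sketch (illustrative, not the thesis code; the gate set and genotype below are invented for the example):

```python
# Minimal sketch of Cartesian genetic programming (CGP) evaluation.
# A genotype is a list of nodes (func, in1, in2); node indices
# 0..n_inputs-1 denote the circuit inputs, later indices denote nodes.

FUNCS = {
    0: lambda a, b: a & b,    # AND
    1: lambda a, b: a | b,    # OR
    2: lambda a, b: a ^ b,    # XOR
    3: lambda a, b: ~a & 1,   # NOT (second input ignored)
}

def evaluate(genotype, outputs, inputs):
    """Evaluate a CGP genotype on one input vector."""
    values = list(inputs)
    for func, i1, i2 in genotype:
        values.append(FUNCS[func](values[i1], values[i2]))
    return [values[o] for o in outputs]

# Example genotype realizing XOR from AND/OR/NOT: out = (a | b) & ~(a & b)
genotype = [(0, 0, 1),   # node 2: a AND b
            (1, 0, 1),   # node 3: a OR b
            (3, 2, 2),   # node 4: NOT(node 2)
            (0, 3, 4)]   # node 5: node 3 AND node 4
for a in (0, 1):
    for b in (0, 1):
        print(a, b, evaluate(genotype, [5], [a, b]))
```

In the coevolutionary variant described above, a candidate circuit's fitness is evaluated only on the input vectors selected by a coevolved fitness predictor rather than on the full truth table, which is what reduces the number of evaluations.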
6

Kalargaris, Charalampos. "Design methodologies and tools for vertically integrated circuits." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/design-methodologies-and-tools-for-vertically-integrated-circuits(63c9c674-566a-44e5-b6b6-8a277b1adf08).html.

Abstract:
Vertical integration technologies, such as three-dimensional integration and interposers, support high integration densities while offering shorter interconnect lengths compared to planar integration and other packaging technologies. To exploit these advantages, however, several challenges lie across the design, manufacturing, and testing stages of integrated systems. Considering the high complexity of modern microelectronic devices and the diverse features of vertical integration technologies, this thesis sheds light on the circuit design process. New methodologies and tools are offered to assess and improve traditional objectives in circuit design, such as performance, power, and area, for vertically integrated circuits. Interconnects on different interposer materials are investigated, demonstrating the several trade-offs between power, performance, area, and crosstalk. A backend design flow is proposed to capture the performance and power gains from the introduction of the third dimension. Emphasis is also placed on the power consumption of modern circuits, owing to the immense growth of battery-operated devices in the last fifteen years. The effect of scaling the operating voltage in three-dimensional circuits is therefore investigated, as it is one of the most efficient techniques for reducing power while considering the performance of the circuit. Furthermore, a solution to eliminate timing penalties from using voltage scaling at finer circuit granularities is also presented in this thesis.
7

Szalapaj, Peter J. "Logical graphics : logical representation of drawings to effect graphical transformation." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/19334.

8

Rogers, Donna R. B. "The Effect of Dyad Interaction and Marital Adjustment on Cognitive Performance in Everyday Logical Problem Solving." DigitalCommons@USU, 1992. https://digitalcommons.usu.edu/etd/6061.

Abstract:
The theory of formal operations as a final stage of adult development has come under criticism for various reasons, primarily the overemphasis on logical thought processes which are based on invariant and absolute rules within a closed system. Everyday problems, in contrast, are typically "open-ended" and are defined by the context in which they are embedded. The purpose of this study was to investigate cognitive behaviors that occurred between two individuals as they cooperatively worked together to solve logical problems. Of interest were the effects of marital adjustment on cognitive performance, the relation between social behaviors, marital adjustment, and cognition, and the influence of a familiar versus a stranger dyadic problem-solving setting on cognitive behaviors. It was hypothesized that well-adjusted married and stranger dyads would not only demonstrate mastery of problem-solving tasks at the formal operational level, but would also demonstrate more relativistic and/or dialectical problem solving, and more facilitative social behaviors, than poorly adjusted married and stranger dyads. Forty couples between the ages of 35 and 50, who had been married between five and thirty years, were prescreened for verbal intelligence and marital adjustment. They were then randomly assigned to participate in one of four dyadic settings, that is, maritally well versus poorly adjusted couples solving problems in either married or unmarried/stranger dyads. Dyads were administered five formal operational problems. Two of the five were formal logical, or mathematical, in nature, while three problems contained both mathematical and interpersonal, or social, elements. Each dyad was videotaped during the problem-solving process, beginning with the instructions. Participants averaged about 1 hour and 15 minutes to complete the five problems. Analyses of variance were performed on marital adjustment and dyadic setting as related to formal and relativistic cognitions.
There were no marital adjustment or dyadic setting differences in overall ability to use formal operations. However, maritally well adjusted stranger and married dyads evidenced significantly more relativistic cognitions, particularly on problems involving a social/everyday element, than poorly adjusted married and stranger dyads. These differences also held constant across each of three increasingly complex levels of relativistic behaviors. Multivariate analyses were performed on four separate social behavior scales as related to formal and relativistic cognitions, as well as marital adjustment and dyadic setting groups. Again, formal operations did not distinguish between the differing social behaviors; however, the social behavior scales, particularly avoidant versus cooperative behaviors, were strongly related to marital adjustment and relativistic thinking.
9

Rijn, Dirk Hendrik van. "Exploring the limited effect of inductive discovery learning: computational models and model-based analyses." [Amsterdam : EPOS, experimenteel-psychologische onderzoekschool] ; Universiteit van Amsterdam [Host], 2003. http://dare.uva.nl/document/68567.

10

May, Bruce Matthew. "Elementary Logic as a Tool in Proving Mathematical Statements." Thesis, University of the Western Cape, 2008. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_1025_1263170321.

Abstract:

The findings of the study indicate that knowledge of logic does help to improve students' ability to make logical connections (deductions) between and from statements. The results, however, do not indicate that knowledge and understanding of logic translates into an improved ability to prove mathematical statements.

11

Happy, Henri. "Helena : un logiciel convivial de simulation des composants à effets de Champs." Lille 1, 1992. http://www.theses.fr/1992LIL10062.

Abstract:
This work concerns the modelling of field-effect devices in the linear regime, integrated into a user-friendly software package named HELENA, for "Hemt ELEctrical properties and Noise Analysis". HELENA is an easy-to-use program that computes: the charge-control law of the layer structure under the gate (C-V characteristic), the static characteristics of the transistor, the small-signal intrinsic equivalent circuit, the noise performance of the extrinsic transistor, and the S-parameters and the various gains. The modelling is based on the quasi-two-dimensional approach, which is significantly improved here: 1) the exact charge-control law of the structure is taken into account in the device analysis; 2) the parameters of the small-signal equivalent circuit are determined by the active-line method; 3) the noise performance of the transistor is computed using the correlation-matrix formalism coupled with the active-line method; 4) the complete decoupling between the charge-control analysis and the analysis of transistor performance makes it possible to treat various types of HEMT and MESFET devices. The results are obtained with reduced computation times and are in good agreement with experiment over a wide frequency range. HELENA is thus a very useful tool for the optimization of discrete devices and microwave circuits. Commercialization of the software is planned, and contacts have been made to that end.
12

Lu, Weiyun. "Topics in Many-valued and Quantum Algebraic Logic." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35173.

Abstract:
Introduced by C.C. Chang in the 1950s, MV algebras are to many-valued (Łukasiewicz) logics what Boolean algebras are to two-valued logic. More recently, effect algebras were introduced by physicists to describe quantum logic. In this thesis, we begin by investigating how these two structures, introduced decades apart for very different reasons, are intimately related in a mathematically precise way. We survey some connections between MV/effect algebras and more traditional algebraic structures. Then, we look at the categorical structure of effect algebras in depth, and in particular see how the partiality of their operations causes things to be vastly more complicated than for their totally defined classical analogues. In the final chapter, we discuss coordinatization of MV algebras, prove some new theorems, and construct some new concrete examples, connecting these structures (via a detour through effect algebras!) to Boolean inverse semigroups.
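For reference, the two structures the abstract relates can be pinned down by their standard textbook axioms (given here in their usual form; the thesis's own presentation may differ):

```latex
\[
\begin{aligned}
&\text{MV algebra } (A,\oplus,\lnot,0):\\
&\quad x \oplus (y \oplus z) = (x \oplus y) \oplus z, \qquad
   x \oplus y = y \oplus x, \qquad x \oplus 0 = x,\\
&\quad \lnot\lnot x = x, \qquad x \oplus \lnot 0 = \lnot 0, \qquad
   \lnot(\lnot x \oplus y) \oplus y = \lnot(\lnot y \oplus x) \oplus x.\\[6pt]
&\text{Effect algebra } (E,\boxplus,0,1):\ \boxplus \text{ partial, commutative
   and associative where defined;}\\
&\quad \text{every } x \text{ has a unique orthosupplement } x' \text{ with }
   x \boxplus x' = 1;\qquad
   x \boxplus 1 \text{ defined} \ \Rightarrow\ x = 0.
\end{aligned}
\]
```

The relationship the thesis exploits is that an MV algebra yields an effect algebra by restricting ⊕ to pairs with x ≤ ¬y, i.e., by making the total operation partial.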
13

Ninet, Olivier. "Prise en compte du phénomène d'hystérésis dans un logiciel de calcul de champ 2D en magnétostatique : validation expérimentale." Lyon 1, 1996. http://www.theses.fr/1996LYO10324.

Abstract:
This work concerns the development of field-computation software that takes into account the hysteresis of anisotropic magnetic materials. The study is limited to 2D geometries and to static phenomena. Hysteresis is described with the Preisach-Néel model, which is then integrated into a program based on the first-order finite element method. The integration is carried out by modifying the constitutive law, which then involves the coercive field associated with the cycle under consideration. The problems raised by the implementation of the software are highlighted: a modification of the solution algorithm is needed to guarantee the convergence of the method. In a second step, an experimental validation of the software is performed on samples of specific shapes (torus, square with variable cross-section, transformer) made of materials with different properties. Comparisons between computation and measurement illustrate the effectiveness of the software. In parallel, a material characterization technique is developed and applied to these samples.
14

Trang, Si Quoc Viet. "FLOWER, an innovative Fuzzy LOWer-than-best-EffoRt transport protocol." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0029/document.

Abstract:
In this thesis, we look at the possibility of deploying a Lower-than-Best-Effort (LBE) service over long-delay links such as satellite links. The objective is to provide a second priority class dedicated to background or signaling traffic. In the context of long-delay links, an LBE service might also help to optimize the use of the link capacity. In addition, an LBE service can enable low-cost or even free Internet access in remote communities via satellite communication. There exist two possible deployment levels for an LBE approach: either at the MAC layer or at the transport layer. In this thesis, we are interested in an end-to-end approach and thus specifically focus on transport-layer solutions. We first study LEDBAT (Low Extra Delay Background Transport) because of its potential. Indeed, LEDBAT has been standardized by the IETF and is widely deployed within the official BitTorrent client. Unfortunately, the tuning of LEDBAT parameters turns out to depend highly on network conditions. In the worst case, LEDBAT flows can starve other traffic, such as commercial traffic carried over a satellite link. LEDBAT also suffers from an intra-unfairness issue, called the latecomer advantage. All these reasons often prevent operators from allowing the use of such a protocol over wireless and long-delay links, as a misconfiguration can overload the link capacity. Therefore, we design FLOWER, a new delay-based transport protocol, as an alternative to LEDBAT. By using a fuzzy controller to modulate the sending rate, FLOWER aims to solve LEDBAT's issues while fulfilling the role of an LBE protocol. Our simulation results show that FLOWER can carry LBE traffic not only in the long-delay context but also in a wide range of network conditions where LEDBAT usually fails.
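For context, LEDBAT's linear delay-based controller (standardized in RFC 6817), which FLOWER replaces with a fuzzy controller, can be sketched as follows. This is a deliberately simplified illustration (one delay sample per update, no base-delay history or noise filtering, window in MSS units), not FLOWER's or LEDBAT's actual implementation:

```python
# Simplified sketch of LEDBAT's congestion window update (after RFC 6817).
# The window grows while measured queuing delay stays below TARGET and
# shrinks above it, yielding lower-than-best-effort behaviour.

TARGET = 0.100   # target queuing delay in seconds (RFC 6817: <= 100 ms)
GAIN = 1.0       # caps ramp-up at roughly one MSS per RTT

def ledbat_update(cwnd, base_delay, current_delay):
    """One window update for a single one-way delay sample."""
    queuing_delay = current_delay - base_delay
    off_target = (TARGET - queuing_delay) / TARGET   # > 0: below target, grow
    return max(1.0, cwnd + GAIN * off_target / cwnd)

# 20 ms of queuing delay (below the 100 ms target): the window grows slightly.
cwnd = ledbat_update(10.0, 0.300, 0.320)
print(cwnd)
```

The latecomer advantage mentioned above arises because a newly started flow measures an inflated base delay and therefore underestimates its own queuing delay, pushing older flows aside; a misconfigured TARGET or GAIN likewise defeats the low-priority intent, which motivates replacing this fixed linear law with fuzzy control.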
15

Wu, Lei. "An efficient logic fault diagnosis framework based on effect-cause approach." [College Station, Tex.] : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2442.

16

Angeldal, Jacob, and Anton Westin. "Value creation from sustainability efforts : How customers’ value creation is affected by providers’ communication of sustainability efforts." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413758.

Abstract:
As sustainability becomes a more prominent part of people’s lives, firms that embrace sustainability can create more value for customers. Value has traditionally been seen as being determined by the provider. However, recent theorisations have conceptualised value as being created by the customer with interactions as a key component. The primary way for customers to interact with firms is through indirect interaction – such as when reading labels on product packaging or taking part in advertising. In extant literature, there is a lack of research on how customers’ value creation is affected by interactions with firms – and more specifically – by indirect interaction. The purpose of this study has been to explore how customers’ value creation is affected by providers’ communication of sustainability efforts through indirect interaction. To gain this insight, 12 interviews with customers have been conducted and analysed in four dimensions – general, sustainability, communication and the value creation process. The study found that sustainability efforts were mainly communicated through indirect interactions. Sustainability efforts affected all respondents’ lives and consumption process. Customers valued communication of sustainability efforts that they perceived as being honest, could understand and was presented to them at an appropriate time.
17

Ahmed, Elias. "The effect of logic block granularity on deep-submicron FPGA performance and density." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58679.pdf.

18

Yilmazoglu, Candan. "Effect Of Analogy-enhanced Instruction Accompanied With Concept Maps On Understanding Of Acid-base Concept." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/12605247/index.pdf.

Abstract:
This study was conducted to explore the effectiveness of analogy-enhanced instruction accompanied with concept maps, compared with traditionally designed chemistry instruction, on understanding of the acid-base concept and attitude toward chemistry as a school subject. 81 8th-grade students from two classes of a chemistry course taught by the same teacher in Nuh Eskiyapan Primary School in Ankara in the 2003-2004 fall semester were enrolled in the study. During the treatment, students in the control group received only traditionally designed instruction, while students in the experimental group studied with analogy-enhanced instruction accompanied with concept maps through teacher lecture. Both groups were administered the Acid-Base Chemistry Achievement Test and the Attitude Scale toward Chemistry as a School Subject as pre-tests and post-tests. The Logical Thinking Ability Test was given to both groups at the beginning of the study to determine students' logical thinking ability levels. Research data were analyzed using ANCOVA and t-tests (SPSS 12.0). The results clearly showed that analogy-enhanced instruction accompanied with concept maps led to significantly better acquisition of scientific conceptions related to acids and bases, and produced significantly more positive attitudes toward chemistry as a school subject, than the traditionally designed chemistry instruction.
19

Mawfik, Nadia. "Effet du logiciel "geometric supposer" sur l'habileté à conjecturer et l'habileté à argumenter d'élèves-professeurs marocains." Master's thesis, Université Laval, 1987. http://hdl.handle.net/20.500.11794/29319.

20

Nordenmark, Nicklas. "The Effect of using a Trailing Persistent Array to Embed Logic Programming into a Functional Language." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-150823.

Abstract:
Logic programming is an important paradigm because of its declarative nature: a programmer declares values and facts, and the program executes by inferring their consequences via backtracking search and unification. There are many situations where logic programming allows elegant solutions that are difficult to emulate in other paradigms, such as implementing type inference or solving problems that require backtracking search. Unfortunately, it is generally not feasible for a language to be purely based on logic: search spaces are often large or infinite, and greater control is required, normally via constructs that move closer to other, non-logical paradigms. An attractive approach, attempted for example by Felleisen [12] and Seres & Spivey [16], is to embed logic programming into a host language with rich control constructs, such as a functional language. This report describes a new technique for implementing such an embedding that improves on previous embeddings by concealing trailing and reversion with the help of a persistent array data structure proposed by Baker [6]. This structure was recently used in a domain similar to ours, with backtracking, by Conchon & Filliatre [8].
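The trailing mechanism the thesis conceals can be illustrated generically: bindings made after a choice point are recorded on a trail and undone when the search backtracks. A minimal sketch with hypothetical names (an explicit dictionary and trail, not the thesis's Baker-style persistent array, which hides exactly this bookkeeping):

```python
# Generic sketch of a binding trail for backtracking search: every binding
# made since a choice point is recorded so it can be undone on backtracking.

class Store:
    def __init__(self):
        self.bindings = {}   # variable name -> bound value
        self.trail = []      # variables bound, in order

    def bind(self, var, value):
        self.trail.append(var)
        self.bindings[var] = value

    def mark(self):
        """Create a choice point: remember the current trail depth."""
        return len(self.trail)

    def undo_to(self, mark):
        """Backtrack: unbind everything recorded after the mark."""
        while len(self.trail) > mark:
            del self.bindings[self.trail.pop()]

s = Store()
s.bind("X", 1)
m = s.mark()          # choice point before trying an alternative
s.bind("Y", 2)
s.undo_to(m)          # backtrack: Y is unbound again, X survives
print(sorted(s.bindings))   # ['X']
```

A persistent array replaces the explicit undo: each version of the store remains accessible, so "backtracking" is just resuming from an older version, which is what lets the embedding hide trailing and reversion from the user.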
21

Cetin, Gulcan. "The Effect Of Conceptual Change Instruction On Understanding Of Ecology Concepts." Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1260322/index.pdf.

Abstract:
The purpose of this study was to investigate the effects of conceptual change text oriented instruction accompanied by demonstrations in small groups (CCTI) on ninth-grade students' achievement and understanding levels of ecology, attitudes towards biology, and attitudes towards the environment. The instruments used in this study were the Test of Ecological Concepts (TEC), the Attitude Scale towards Biology (ASB), the Attitude Scale towards Environment (ASE), and the Test of Logical Thinking (TOLT). All data were collected from a public high school in Balikesir in the spring semester of 2001-2002. 88 students from four classes and two teachers were included in the study; two of the classes formed the control group and two the experimental group. While the TEC, ASE, and ASB were administered to all students as pre- and post-tests, the TOLT was conducted as a pre-test only. Data from the TEC, ASB, and ASE were analyzed by multivariate analysis of covariance (MANCOVA). The results of the MANCOVA showed a significant effect of the treatment (conceptual change text oriented instruction accompanied by demonstrations in small groups) on the TEC, but no significant effect on attitudes towards biology or attitudes towards the environment.
22

Diaz, Diaz Alberto. "Délaminage des matériaux multicouches : phénomènes, modèles et critères." Marne-la-vallée, ENPC, 2001. http://www.theses.fr/2001ENPC0014.

Abstract:
This thesis studies delamination in multilayer materials. Tests, models, software tools, and criteria are proposed for understanding delamination in laminates. Tests on carbon-epoxy laminates reveal a phenomenon of plastic interface slip before delamination initiation, and a critical slip value appears to govern mode III delamination. A thickness effect on the initiation of this damage mode is observed. Two simplified, so-called multiparticle models are then proposed to evaluate the interface forces responsible for delamination and the interface slip. These models account for possible anelastic fields in the layers and at the interfaces. The model equations are then applied to the tension problem of an arbitrary rectangular laminate with arbitrary boundary conditions on each layer. Programming these equations led to two edge-effect computation codes, and the computations show the absence of singularities in our models. Two types of analysis are then proposed for the study of delamination: one brittle-elastic and the other brittle-plastic. The first adopts empirical maximum-stress criteria, whose application gives very good results for the materials tested; moreover, these criteria capture the thickness effect. An analysis of energy release rates validates the maximum-stress criteria energetically. The second type of analysis computes the interface slip and proposes slip-based delamination criteria, which also give very precise results. Comparing the energies dissipated by delamination as computed by the two analyses shows the criteria to be nearly equivalent.
APA, Harvard, Vancouver, ISO, and other styles
23

Yoosefi, Oraman. "Simulation and design of all-optical logic gates based on photonic crystals." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672369.

Full text
Abstract:
In this thesis, the design and simulation of all-optical logic gates based on different photonic crystals are presented for use in the electronics and telecommunications industries. Optical devices operate faster and with higher efficiency than electrical devices. The design criteria focus on photonic-crystal structures that achieve high transmission power and contrast ratio. The results also gave promising insights toward the development of gas sensors. The proposed structures have small dimensions as well as a wide functional interval. In Chapter 1, before employing the wavelength-division multiplexing (WDM) method, the notion of electromagnetic waves in free space and in conductors is introduced with a description of the governing equations. Chapter 2 is dedicated to the literature and related research, starting with a review of photonic crystals and the photonic band gap. Gates, their characteristics, and design layouts are discussed without using nonlinear materials or optical amplifiers. Chapter 3 describes the proposed structures. In Chapter 4, simulation studies and analyses of six new structures are presented. The procedure first uses the linear NOT, OR, and AND logic gates. These structures have an input waveguide for applying a Gaussian optical pulse at a wavelength of 1550 nm. By varying the radius of the defect, the dimension with the highest transmission is obtained. Afterward, by coupling these gates to obtain NOR and NAND gates, a reasonable contrast ratio and transmission power are reached in each case by adjusting the defect radius, proving the design concept. A full adder based on metal-insulator-metal (MIM) waveguide plasmonics is also presented. We studied the 4-input OR gate to design and simulate a full-adder circuit that uses plasmonic waves to transmit signals; the 4-input gate presented in this study has a simple structure and can be manufactured at low cost.
By optimizing the structure's dimensions, a transmission coefficient of about 0.62 is achieved and the losses are reduced to 25% below the design mentioned in the references. The next proposed structure is a 2D-photonic-crystal (2DPC) based eight-channel demultiplexer, designed using an octagonal ring resonator for WDM applications. The functional parameters investigated are the resonant wavelength, Q factor, channel spacing, spectral width, output efficiency, and crosstalk. Channel selection is carried out by altering the octagonal ring resonator's size. The average transmission efficiency, Q factor, spectral width, and channel spacing of the proposed demultiplexer are 98.65%, 2212, 0.76 nm, and 1.75 nm, respectively. The proposed demultiplexer's crosstalk is low (30 dB), as the even-numbered and odd-numbered channels are dropped separately. The demultiplexer's size is about 752.64 µm², and its functional characteristics meet the requirements of WDM systems; hence this demultiplexer can be incorporated into integrated optics. We have shown that the device is perfectly suitable for communication applications. Chapter 5 concludes the thesis and recommends future studies for industrial purposes. In this thesis, a new photonic-crystal slab for gas-sensing applications is also proposed. Theoretical studies have been done to determine the response of the proposed structure to carbon dioxide. A simple laser with a spectral width of around 1 nm can be used with this device.
Measurements can be done in two steps, which can be carried out simultaneously by using a reference device: step one with synthetic air, then adding known concentrations of CO. The output is referenced to the measurement with synthetic air. Our theoretical results show variations of 17% in the transmission intensity and a clear variation in the central wavelength of the transmission peaks. These results are already promising for the development of gas sensors.
Enginyeria electrònica
APA, Harvard, Vancouver, ISO, and other styles
24

Ndenga, Malanga Kennedy. "Predicting post-release software faults in open source software as a means of measuring intrinsic software product quality." Electronic Thesis or Diss., Paris 8, 2017. http://www.theses.fr/2017PA080099.

Full text
Abstract:
Faulty software has expensive consequences. To mitigate these consequences, software developers have to identify and fix faulty software components before releasing their products. Similarly, users have to gauge the delivered quality of software before adopting it. However, the abstract nature and multiple dimensions of software quality impede organizations from measuring it. Software quality metrics can be used as proxies of software quality. There is a need for a software process metric that can guarantee consistently superior fault prediction performance across different contexts. This research sought to determine a predictor for software faults that exhibits the best prediction performance, requires the least effort to detect software faults, and has a minimum cost of misclassifying components. It also investigated the effect of combining predictors on the performance of software fault prediction models. Experimental data were derived from four OSS projects. Logistic Regression was used to predict bug status, while Linear Regression was used to predict the number of bugs per file. Models built with Change Burst metrics registered overall better performance relative to those built with Change, Code Churn, Developer Networks, and Source Code software metrics. Change Burst metrics recorded the highest values for numerical performance measures, exhibited the highest fault detection probabilities, and had the least cost of misclassification of components. The study found that Change Burst metrics could effectively predict software faults.
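As an illustration of the kind of model this abstract describes, the sketch below trains a minimal logistic-regression classifier on hypothetical per-file change-burst features to predict fault-proneness. The feature values, labels, and function names are invented for illustration and are not taken from the thesis; a real study would use metrics mined from version-control history.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by per-sample gradient descent on cross-entropy."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted fault probability
            err = p - yi                      # gradient of cross-entropy wrt z
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Return 1 (fault-prone) when the predicted probability is >= 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical per-file features: (number of change bursts, largest burst size)
X = [(0, 1), (1, 2), (5, 8), (6, 9), (0, 2), (7, 10)]
y = [0, 0, 1, 1, 0, 1]  # 1 = file had a post-release fault

w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
print(preds)
```

On this tiny separable data set the classifier recovers the labels; the same shape of model, with real change-burst metrics as features, is what the study evaluates.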
APA, Harvard, Vancouver, ISO, and other styles
25

Gaudrat, Véronique. "Quelques méthodes pour l'optimisation de la coulée continue de l'acier dans le cas non stationnaire." Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Amir, Abdulgader. "The spatial logic of pedestrian movement and exploration in the central area of Jeddah : the effect of spatial configuration on shopping behavior." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/23375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Truong, Quang Huy. "Risks and Performance in the Supply Chain -An Empirical Study in Vietnam Construction Sector-." Kyoto University, 2018. http://hdl.handle.net/2433/232209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Kwang-Jin. "The logic of decisions in militarized disputes the effect of regime, power, arms contorol [sic], and airpower on decision-making in militarized disputes /." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4831.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on February 14, 2008). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
29

Ruffoni, Michelle L. "The effect of construct differentiation, biological sex, and locus of control on message design logic and message goal structure in regulative communication situations." Scholarly Commons, 1997. https://scholarlycommons.pacific.edu/uop_etds/2312.

Full text
Abstract:
This study replicates and extends previous research on the relationship between interpersonal construct differentiation and message production in regulative communication situations (O'Keefe & McComack, 1987; O'Keefe, 1988). The research examines whether a subject's use of a particular message design logic (expressive, conventional, or rhetorical) and goal structure (minimal, unifunctional, or multifunctional) is related to his or her level of cognitive complexity, gender, and locus of control. Subjects (n = 160) were asked to complete Crockett's (1966) Role Category Questionnaire (RCQ) and Levenson's (1981) Internal, Powerful Others, and Chance Scale. Subjects were also asked to respond to a hypothetical regulative communication task. Their responses were then classified according to criteria established by O'Keefe. The study found a significant positive relationship between construct differentiation and message design such that less complex subjects wrote expressive messages, moderately complex subjects wrote conventional messages, and highly complex subjects wrote rhetorical messages. There was a significant negative relationship between construct differentiation and goal structure such that less complex respondents sought multifunctional goals while highly complex subjects sought minimal goals. There were no gender-related differences. The locus of control constructs (internality, powerlessness, and chance) were related to message design. Internal, powerful, and low-chance-orientated actors composed conventional or rhetorical messages. External, powerless, and high-chance-orientated respondents wrote expressive messages. Powerlessness was related to goal structure such that powerless actors sought multiple goals while powerful subjects sought minimal goals. The results of the study provide partial support for O'Keefe's (1988) theory of message design.
In particular, the results confirm the premise that construct differentiation is a predictor of message design logic. The findings also identify locus of control as a predictor of message design. The negative relationships identified in the study suggest that there may be conceptual or methodological problems with O'Keefe's model which must be addressed before any additional conclusions can be made.
APA, Harvard, Vancouver, ISO, and other styles
30

Schmitt, Antonin. "Détermination des caractéristiques des patients impliquées dans les toxicités hématologiques consécutives à l'administration de médicaments anticancéreux : apport de la méthodologie de pharmacocinétique/pharmacodynamique de population." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/948/.

Full text
Abstract:
The results of a multicenter clinical trial, whose main objective was to determine carboplatin target exposures, are presented. A bibliographic review of carboplatin and of pharmacokinetic/pharmacodynamic models in oncology constitutes the first part of this thesis. Then the results of the pharmacokinetic analysis are presented, with the validation of the Thomas formula. Finally, the results of the pharmacokinetic/pharmacodynamic modeling of carboplatin hematological toxicity are shown. The main patient characteristics explaining the differences in toxicity are chiefly related to the associated anticancer drugs.
APA, Harvard, Vancouver, ISO, and other styles
31

Mohammad, Azhar. "EMERGING COMPUTING BASED NOVEL SOLUTIONS FOR DESIGN OF LOW POWER CIRCUITS." UKnowledge, 2018. https://uknowledge.uky.edu/ece_etds/125.

Full text
Abstract:
The growing range of applications for IoT devices has spurred the study of low-power circuit design to meet the requirement that devices operate for months without an external power supply. Scaling down conventional CMOS causes various design complications due to intrinsic CMOS properties; therefore, various non-conventional design techniques that overcome these limitations are being proposed. This thesis focuses on some of those emerging and novel low-power design techniques, namely adiabatic logic, and on low-power devices such as the Magnetic Tunnel Junction (MTJ) and the Carbon Nanotube Field-Effect Transistor (CNFET). Circuits used for large computations (multipliers, encryption engines), which account for the largest share of power consumption in a chip, are designed using these novel low-power techniques.
APA, Harvard, Vancouver, ISO, and other styles
32

Ryu, Hyeyeon. "Integrated Circuits Based on Individual Single-Walled Carbon Nanotube Field-Effect Transistors." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-98220.

Full text
Abstract:
This thesis investigates the fabrication and integration of nanoscale field-effect transistors based on individual semiconducting carbon nanotubes. Such devices hold great potential for integrated circuits with large integration densities that can be manufactured on glass or flexible plastic substrates. A process to fabricate arrays of individually addressable carbon-nanotube transistors has been developed, and the electrical characteristics of a large number of transistors have been measured and analyzed. A low-temperature-processed gate dielectric with a thickness of about 6 nm has been developed that allows the transistors and circuits to operate with voltages of about 1.5 V. The transistors show excellent electrical properties, including a large transconductance (up to 10 µS), a large On/Off ratio (>10^4), a steep subthreshold swing (65 mV/decade), and negligible leakage currents (~10^-13 A). For the realization of unipolar logic circuits, monolithically integrated load resistors based on high-resistance metallic carbon nanotubes or vacuum-evaporated carbon films have been developed and analyzed by four-probe and transmission-line measurements. A variety of combinational logic circuits, such as inverters, NAND gates and NOR gates, as well as a sequential logic circuit based on carbon-nanotube transistors and monolithically integrated resistors, have been fabricated on glass substrates, and their static and dynamic characteristics have been measured. Optimized inverters operate with frequencies as high as 2 MHz and switching delay time constants as short as 12 ns
APA, Harvard, Vancouver, ISO, and other styles
33

Praz, Jean. "Négation et Diffraction de la volonté en éducation." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2025/document.

Full text
Abstract:
Between 1880 and 1920 the academic institution of France, pedagogues and professors of pedagogy, teachers writing school reports and worried parents, have called upon the concept of will to explain failure where there is a lack of it, to galvanise energy where it is present and to raise the spirit in the field of asceticism. At the same time, the very conditions for the extinction of the concept of will have been growing quietly, at the margins of the education community. The Modern School Movement, from conference to conference, have been loosening the grip on this debilitating concept. Edouard Claparède suggests the ‘dewilling’ of will. The echo of Jean-Jacques Rousseau defending the blossoming of a rediscovered childhood, adds further weight to the idea of the fading out of the notion of will. How long before the question of will becomes no longer relevant? How can this contradiction between the omnipresent subject of will and its disappearance be explained? There are two approaches: the first analyses will itself, the second describes the educative practices where will plays a role or those where it is absent. The inquiry analyses the semantic components of the word, from its translation in Greek or Latin to French. Alongside this, it investigates the concept behind the word: its distinguishing features, the description of its modalities and its ontological constitution, describing the nature and the elements that make up an action. Four dimensions of will are identified: effort, intention, decision and strength. These dimensions clearly refer to epistemic virtues, the logic of action and the concept of what it is to be human. Put another way, to which anthropological system does will, as a strength, correspond? And in its absence, what idea of human behaviour do we conceive? On the other hand, if will is intention and decision, can it not be assumed that it comes from the logic of action, along with its opposite, akrasia. 
Finally, to identify will in terms of effort is to revisit the epistemic virtues of studiousness, curiosity and attention, stating what they are and how to develop them. Another aspect deserves consideration: will, or at least its opposite, laziness, calls into play the metaphysical bases underpinning human existence. This analysis is correlated, if not with educative practices (the archival record often being absent), at least with theories or with accounts of practice found in the most diverse literary genres. Firstly, Célestin Freinet, who criticised the idea of will as a moral value but maintained the idea of effort, emphasising perhaps the notion of work as a liberating force, in any case as an expression of life. Then Piaget, working within the school of evolutionary theory, who transforms will into an opposite of the path of least resistance. This is followed by Maine de Biran and Pestalozzi, who almost founded a school together, the former identifying effort as the principal characteristic of man, the latter hesitating between the blossoming of the individual that happens outside of will and the essential constraints of any given action. Descartes conceives will as a decision which he places at the centre of his theory that man is characterised by generosity, which he defines as the ability to act judiciously. Lastly, Dewey and Kilpatrick, who substitute interest for will, opposing the idea of education as a game and Herbart's idea that nothing comes from the student and everything is imposed from external sources. The journey finishes with a bringing together of these conceptions of will and certain anthropological features, the aim being to trace a logic in which calling upon will corresponds to an empty subject and in which the fading of will presupposes a subject endowed with an inner life that asks only to be expressed.
An imaginary report of a fictitious philosophy-of-education conference then allows the perspectives of these various thinkers to be taken up again and their styles of thought to be expressed. Within a naturalist vision, this opens the prospect of a subject in the process of constituting itself
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Gefei. "Conception et développement de nouveaux circuits logiques basés sur des spin transistor à effet de champ." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS056.

Full text
Abstract:
The development of Complementary Metal Oxide Semiconductor (CMOS) technology has driven the revolution in integrated circuit (IC) production. Each new CMOS technology generation achieves faster and lower-power operation, largely through the scaling down of device dimensions. However, scaling runs into fundamental physical limits on device switching as CMOS technology enters the sub-10 nm generation, and researchers are seeking other ways to address this limitation. Spintronics is one of the most promising fields for non-charge-based IC applications. Spin-transfer torque magnetic random access memory (STT-MRAM) is one of the successful spintronics-based memory devices and is entering the volume production stage; the related spin-based logic devices still need to be investigated. Our research is in the field of spin field effect transistors (spin-FETs), one of the fundamental spin-based logic devices. The main mechanism for realizing a spin-FET is controlling the spin of the electrons, which serves the objective of power reduction. Moreover, as spin-based devices, spin-FETs can easily be combined with spin-based storage elements such as the magnetic tunnel junction (MTJ) to construct a “non-volatile logic” architecture with high-speed and low-power performance. Our focus in this thesis is to develop a compact model for the spin-FET and to explore its application to logic design and non-volatile logic simulation. Firstly, we proposed a non-local geometry model for the spin-FET to describe electron behaviors such as spin injection and detection and the spin-angle phase shift induced by spin-orbit interaction. We programmed the non-local spin-FET model in the Verilog-A language and validated it by comparing simulations with experimental results. In order to develop an electrical model for circuit design and simulation, we then proposed a local geometry model for the spin-FET based on the non-local model; the local spin-FET model can be used for logic design and transient simulation in a circuit design tool. Secondly, we proposed a multi-gate spin-FET model by improving the aforementioned model. To enhance the performance of the spin-FET, we cascaded the channel using a shared spin injection/detection structure; by designing different channel lengths, the multi-gate spin-FET can act as different logic gates. The performance of these logic gates is analyzed in comparison with conventional CMOS logic. Using the multi-gate spin-FET-based logic gates, we designed and simulated a number of Boolean logic blocks, whose function is demonstrated by transient simulation with the multi-gate spin-FET model. Finally, combining the spin-FET and multi-gate spin-FET models with a model of the MTJ storage element, “non-volatile logic” gates are proposed. Since only the pure spin signal can reach the detection side of the spin-FET, the MTJ receives a pure spin current for the spin transfer; in this case, the switching of the MTJ can be more effective than in the conventional MTJ/CMOS structure. The performance comparison between the hybrid MTJ/spin-FET structure and the hybrid MTJ/CMOS structure is demonstrated by delay and critical-current calculations derived from the Landau-Lifshitz-Gilbert (LLG) equation. Transient simulation verifies the function of the MTJ/spin-FET-based non-volatile logic.
APA, Harvard, Vancouver, ISO, and other styles
35

Westfall, Jonathan E. "Exploring Common Antecedents of Three Related Decision Biases." Connect to full text in OhioLINK ETD Center, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1248468207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Pepitone, Kévin. "Etude de la production, de la propagation et de la focalisation d'un faisceau d'électrons impulsionnel intense." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0158/document.

Full text
Abstract:
The electron beam (500 keV, 30 kA, 100 ns) of the RKA (Relativistic Klystron Amplifier) generator is used to study materials under shocks at low fluences (< 10 cal/cm²). Their response depends on the beam characteristics at the impact location, mainly in terms of spatial homogeneity. We used electrical diagnostics as well as an optical diagnostic in which the visible photons produced by Cerenkov emission in a silica target are collected by fast cameras. Beam homogeneity was studied in the vacuum diode as a function of the materials used for the cathode and the anode. Beam propagation and focusing in a chamber filled with a low-pressure gas were also investigated. Each part of the installation was optimized during this work. We found that, among the tested materials, a velvet cathode with well-aligned fibers is the best emitter. An anode about ten micrometers thick improves beam homogeneity by scattering the electrons. Next, we focused on beam propagation and focusing in the chamber: for example, a 400 keV, 4.2 kA electron beam can be propagated at constant radius in argon at 0.7 mbar. We performed simulations with the Monte Carlo code Geant4 in order to compute the beam's interaction with the Cerenkov target as well as with the anode. Beam emission and propagation were simulated with the PIC code Magic. The good agreement with the experimental results allows us to estimate the electron distributions at any position along the beam path and thus to correctly initialize the computation of beam-material interaction.
APA, Harvard, Vancouver, ISO, and other styles
37

Bellissant, Éric, and J. F. GIUDICELLI. "La modelisation pharmacocinetique-pharmacodynamique en pharmacologie clinique cardiovasculaire : developpement d'un logiciel de modelisation pharmacocinetique-pharmacodynamique, application a l'etude des relations concentration-effet d'un antagoniste calcique et de deux inhibiteurs de l'enzyme de conversion chez le volontaire sain et d'un inhibiteur de l'enzyme de conversion chez l'insuffisant cardiaque." Paris 6, 1994. http://www.theses.fr/1994PA066483.

Full text
Abstract:
The objective of this thesis was to evaluate the feasibility of pharmacokinetic-pharmacodynamic modeling in cardiovascular clinical pharmacology after single administration of a drug. The first part consisted in developing pharmacokinetic-pharmacodynamic modeling software allowing, from data obtained after single administration, the establishment of individual concentration-effect relationships when the effects are quantitative. This software has two essential characteristics: when the concentration-effect relationship shows a hysteresis phenomenon, it allows the study of the function to be optimized in order to eliminate this hysteresis; when the experimental design includes the study of several doses, it allows the concentration-effect relationship to be established from the data of all the doses studied. The second part consisted in carrying out 4 clinical pharmacology studies designed to quantify, in intensity and duration, the effects induced by single oral administration of 4 vasodilators and to simultaneously study the pharmacokinetics of the active molecules. The third part consisted in searching, with the software, for the existence of concentration-effect relationships in these studies. This work shows that it is possible to establish, in humans, relationships between the plasma concentrations of active molecules and certain cardiac, regional hemodynamic, or biological effects produced by these molecules. The information obtained can be important to consider when determining the optimal dose of the molecules studied. 
Our results allow us to conclude that the determination of concentration-effect relationships in cardiovascular clinical pharmacology is feasible in the early phase of drug development, but that to succeed almost certainly it requires at least 10 to 15 effect measurements and, if possible, the study of at least 2 doses, so as to obtain between 20 and 30 experimental points and approach the maximum effects.
APA, Harvard, Vancouver, ISO, and other styles
38

Rachid, Ahmed. "Contribution à la modélisation et à la commande d'un haut-fourneau." Nancy 1, 1986. http://www.theses.fr/1986NAN10016.

Full text
Abstract:
This work deals with the modeling and control of a blast furnace. The data validation problem is posed in terms of balance reconciliation, and a solution using hierarchical computation is proposed. An approach to the dynamic study of the blast furnace leads to a stochastic state-space model and allows the design and development of a control law.
APA, Harvard, Vancouver, ISO, and other styles
39

Alileche, Nassim. "Etude des effets dominos sur une zone industrielle." Thesis, Rouen, INSA, 2015. http://www.theses.fr/2015ISAM0013.

Full text
Abstract:
Domino effects or cascading events in the chemical and process industries have been recognized as credible accident scenarios for three decades. They raise growing concern, as they have the potential to cause catastrophic consequences. The domino effect, as a phenomenon, is still a controversial topic when it comes to its assessment: there is poor agreement on the definition of domino effect and on its assessment procedures, and a number of different definitions and approaches are proposed in technical standards and in the scientific literature. Therefore, one objective of this research is to formalize knowledge of domino effects in order to comprehend their occurrence mechanisms. Thus, the parameters that should be examined to understand the possibility of escalation and to identify domino scenarios were analyzed. The aim is to improve the prevention of domino effect hazards through the development of a methodology for the identification and analysis of domino effects. We developed a method for the analysis of domino accident chains caused by losses of containment. It allows the identification and prioritization of accident propagation paths. The method is user-friendly and helps decision making regarding the prevention of cascading events. The final outcomes of the model are given in the form of quantitative rankings of the equipment involved in domino scenarios, taking into account the effect of meteorological conditions and safety barriers. The rankings give a clear idea of each item of equipment's potential for initiating or continuing cascading events. The methodology is based on a topography of the industrial area of concern, including the characteristics of each unit and accounting for protection and mitigation barriers. It comprises two main stages. The first is the identification of accident propagation paths; for this, the event tree method is used. 
The possible targets are identified by combining escalation thresholds with vulnerability models (to estimate damage probability). This first stage was implemented using MATLAB® and Visual Basic for Applications (VBA) to enable an easy input procedure and output analysis in Microsoft Excel®. The second stage is the identification of the most dangerous equipment. It consists in prioritizing the equipment involved in the propagation paths according to its likelihood to cause or propagate a domino effect; the algorithm that performs this phase was coded in VBA. The method was designed so that it can be used without needing to rely on the results of safety reports; however, if such results are available, some steps of the method can be lightened. It proved easy to apply, as confirmed through projects and student internships.
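The ranking idea of the second stage can be sketched as follows; every number and name below is invented for illustration (in the actual method, the pairwise damage probabilities would come from the escalation thresholds and vulnerability models of the first stage):

```python
# Rough sketch of ranking units by their likelihood to initiate or
# propagate escalation. p[i][j] is the (hypothetical) probability that
# an accident in unit i damages unit j.
p = [
    [0.0, 0.6, 0.1],
    [0.2, 0.0, 0.5],
    [0.0, 0.3, 0.0],
]
units = ["tank A", "tank B", "reactor"]

# "Initiator" score: expected number of neighbours a unit damages directly.
initiator = {u: sum(row) for u, row in zip(units, p)}
# "Propagator" score: how exposed a unit is, times its ability to pass damage on.
propagator = {units[j]: sum(p[i][j] for i in range(len(units))) * sum(p[j])
              for j in range(len(units))}

ranking = sorted(units, key=lambda u: initiator[u] + propagator[u], reverse=True)
```

The combined score is only one possible aggregation; the thesis produces separate rankings that distinguish initiating from propagating behavior.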
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Chun-Hui, and 吳春慧. "Logical Effort Model Extension with Temperature and Voltage Variations." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/72050583009180402403.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Communication Engineering
Academic year 97 (2008)
In integrated circuit design, performance estimation and circuit optimization are two of the most important issues. The logical effort delay model allows designers to quickly estimate delay time and optimize logic paths, but previous variants of the logical effort model do not address how to handle process, voltage, and temperature (PVT) variations appropriately, which may lead to serious misestimation. According to simulation results in a 90 nm process, delay time increases 21% as temperature rises from 0°C to 125°C, and delay time doubles as the supply voltage decreases from 1 V to 0.5 V. Thus a simple linear extension of the logical effort g, 1/g = (m_t · t + b_t) · VDD + C, supporting temperature t and supply voltage VDD variations is presented. The linear form is convenient for designers to calculate with and makes integration of the proposed model with CAD tools easier. The proposed model enables designers to estimate logic path delay and to optimize an N-stage logic network under different temperature and supply voltage conditions. Furthermore, each functional block on a chip can be optimized under different PVT conditions through this simple model. After validation, the accuracy of this extended logical effort model reaches about 90%.
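The linear form above can be evaluated directly; the sketch below shows the shape of the computation with placeholder coefficients (m_t, b_t, and C here are invented for illustration, not the fitted values from the thesis, which are extracted from SPICE data):

```python
# Sketch of the linear PVT extension of logical effort:
# 1/g = (m_t * t + b_t) * VDD + C, with illustrative coefficients.

def logical_effort(t_celsius, vdd, m=-1.0e-4, b=0.35, c=2.0):
    """Return g for a gate at temperature t (Celsius) and supply vdd (volts)."""
    return 1.0 / ((m * t_celsius + b) * vdd + c)

def stage_delay(g, h, p=1.0):
    """Classic logical-effort stage delay d = g*h + p, in units of tau."""
    return g * h + p

# Delay of one stage with electrical effort h = 4 at two PVT corners.
d_fast = stage_delay(logical_effort(0, 1.0), h=4)
d_slow = stage_delay(logical_effort(125, 0.5), h=4)
assert d_slow > d_fast  # the hotter, lower-VDD corner is slower
```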
APA, Harvard, Vancouver, ISO, and other styles
41

Tseng, Yuh-hom, and 曾峪鴻. "A Logical-effort-based Software Toolfor Designing Fast CMOS Circuits." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/03564695379480614765.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 97 (2008)
This thesis is based on the logical effort method; using the C language, a software tool has been implemented that helps in designing fast CMOS circuits. The related research work includes studying the logical effort method, realizing the logical-effort-based software tool in C, and using the tool to design fast CMOS circuits. Finally, CMOS circuits with one- or multi-way branches are used to verify the function of the tool. The output results also show that the software tool can compute the minimum delay along a path in a CMOS circuit and find the optimal transistor sizes for the corresponding logic gates.
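The core computation such a tool performs is the standard logical-effort path optimization: total path effort F = G·B·H, optimal stage effort f_hat = F^(1/N), minimum delay N·f_hat + P, with gate sizes recovered by working back from the load. A minimal sketch (in Python rather than the thesis's C, with a textbook three-stage example not taken from the thesis):

```python
# Standard logical-effort path optimization (Sutherland/Sproull/Harris).

def optimize_path(g, b, p, c_load, c_in):
    """g, b, p: per-stage logical effort, branching effort, parasitic delay.
    Returns (minimum path delay in tau units, per-stage input capacitances)."""
    n = len(g)
    G = B = 1.0
    for gi, bi in zip(g, b):
        G *= gi
        B *= bi
    H = c_load / c_in            # path electrical effort
    F = G * B * H                # total path effort
    f_hat = F ** (1.0 / n)       # optimal effort per stage
    delay = n * f_hat + sum(p)
    # Size gates from the output backwards: Cin_i = g_i * b_i * Cout_i / f_hat
    caps = [0.0] * (n + 1)
    caps[n] = c_load
    for i in range(n - 1, -1, -1):
        caps[i] = g[i] * b[i] * caps[i + 1] / f_hat
    return delay, caps[:-1]

# Three-stage example: inverter, 2-input NAND, inverter driving a 64x load.
delay, sizes = optimize_path(g=[1.0, 4/3, 1.0], b=[1.0, 1.0, 1.0],
                             p=[1.0, 2.0, 1.0], c_load=64.0, c_in=1.0)
```

A useful sanity check is that the back-solved input capacitance of the first stage comes out equal to the given c_in.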
APA, Harvard, Vancouver, ISO, and other styles
42

Esquit, Hernandez Carlos A. "IMPACT OF DYNAMIC VOLTAGE SCALING (DVS) ON CIRCUIT OPTIMIZATION." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-324.

Full text
Abstract:
Circuit designers perform optimization procedures targeting speed and power during the design of a circuit. Gate sizing can be applied to optimize for speed, while Dual-VT and Dynamic Voltage Scaling (DVS) can be applied to optimize for leakage and dynamic power, respectively. Both gate sizing and Dual-VT are design-time techniques, which are applied to the circuit at a fixed voltage. On the other hand, DVS is a run-time technique and implies that the circuit will be operating at a different voltage than that used during the optimization phase at design time. After some analysis, the risk of non-critical paths becoming critical paths at run time is detected under these circumstances. The following questions arise: 1) should we take DVS into account during the optimization phase? 2) does DVS impose any restrictions while performing design-time circuit optimizations? This thesis is a case study of applying DVS to a circuit that has been optimized for speed and power, and aims at answering the previous two questions. We used a 45-nm CMOS design kit and flow. Synthesis, placement and routing, and timing analysis were applied to the ISCAS'85 benchmark circuit c432. Logical Effort and Dual-VT algorithms were implemented and applied to the circuit to optimize for speed and leakage power, respectively. Optimizations were run for the circuit operating at different voltages. Finally, the impact of DVS on circuit optimization was studied based on HSPICE simulations sweeping the supply voltage for each optimization. The results showed that DVS had no impact on gate sizing optimizations, but it did on Dual-VT optimizations. It is shown that we should not optimize at an arbitrary voltage. Moreover, simulations showed that Dual-VT optimizations should be performed at the lowest voltage at which DVS is intended to operate; otherwise non-critical paths will become critical paths at run time.
APA, Harvard, Vancouver, ISO, and other styles
43

Waters, Ronald S. "Total delay optimization for column reduction multipliers considering non-uniform arrival times to the final adder." Thesis, 2014. http://hdl.handle.net/2152/24858.

Full text
Abstract:
Column Reduction Multiplier techniques provide the fastest multiplier designs and involve three steps. First, a partial product array of terms is formed by logically ANDing each bit of the multiplier with each bit of the multiplicand. Second, adders or counters are used to reduce the number of terms in each bit column to a final two; this activity is commonly described as column reduction and occurs in multiple stages. Finally, some form of carry propagate adder (CPA) is applied to the final two terms to produce the final product of the multiplication. Since forming the partial products in the first step is simply forming an array of the logical ANDs of two bits, there is little opportunity for delay improvement there. Much work has been done on optimizing the reduction stages of column multipliers in the second step. All of the reduction approaches of the second step result in non-uniform arrival times at the inputs of the final carry propagate adder in the third step, whereas carry propagate adders have been designed assuming that all input bits arrive at the same time. It is not evident whether the non-uniform arrival times from the columns impact the performance of the multiplier. A thorough analysis of the several column reduction methods, together with the impact of carry propagate adder design on the fastest possible final results across an array of multiplier widths, has not been undertaken. This dissertation investigates the impact of three carry propagate adders with different performance attributes on the final delay of four column reduction multipliers, and suggests general ways to optimize the total delay of the multipliers.
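The three steps can be illustrated with a toy bit-level simulation (not code from the dissertation): partial products from ANDed bit pairs, columns reduced with 3:2 full-adder counters, and a plain addition standing in for the final CPA. Unlike the staged Dadda/Wallace schemes the dissertation analyzes, this sketch reduces each column greedily.

```python
# Toy column-reduction multiplier for unsigned operands of the given width.

def multiply(a, b, width=8):
    # Step 1: partial-product array -- AND of each pair of operand bits,
    # placed in column i + j (one extra column absorbs top-level carries).
    cols = [[] for _ in range(2 * width + 1)]
    for i in range(width):
        for j in range(width):
            cols[i + j].append(((a >> i) & 1) & ((b >> j) & 1))
    # Step 2: reduce every column to at most two bits with 3:2 counters
    # (full adders); each carry moves to the next column.
    for c in range(2 * width):
        while len(cols[c]) > 2:
            x, y, z = cols[c].pop(), cols[c].pop(), cols[c].pop()
            cols[c].append(x ^ y ^ z)                        # sum bit, weight 2^c
            cols[c + 1].append((x & y) | (x & z) | (y & z))  # carry, weight 2^(c+1)
    # Step 3: final carry-propagate addition of the two remaining rows.
    row0 = sum((col[0] if col else 0) << c for c, col in enumerate(cols))
    row1 = sum((col[1] if len(col) > 1 else 0) << c for c, col in enumerate(cols))
    return row0 + row1
```

Each full adder preserves the column-weighted sum (s + 2·carry = x + y + z), so the two final rows always add up to the exact product.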
APA, Harvard, Vancouver, ISO, and other styles
44

Lin, Chen-Hsien, and 林貞嫺. "The Effect of Logical Reasoning Abilities, Creativity and Personalities Traits of Higher Programming." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/06414354844513580831.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Information Management
Academic year 104 (2015)
Recently, there has been a worldwide upsurge in opportunities to learn programming. Various countries have long included code writing in their design course outlines, and even kindergarten-aged children can learn basic programming concepts through game-based programming websites. Faced with this wave of digitalization, we should quickly cultivate the next generation by familiarizing them with, and allowing them to make good use of, information tools such as programming logic and programming languages. They would then have sufficient capacity to face the challenges of, and be more competitive in, the next wave of digitalization. Therefore, based on related literature, this study explored the factors that affect higher grade elementary students in learning game-based programming. The purposes of this study were as follows: (1) to explore differences in participation interest towards game-based programming among higher grade elementary students with different personality traits, creativity, and participation interest in mathematics; (2) to explore differences in game-based programming learning achievements attained by higher grade elementary students with different logical reasoning abilities, learning achievements in mathematics, and participation interest in game-based programming; and (3) to serve as a reference for educational authorities, schools, or teachers responsible for planning and implementing programming courses, based on the results of this study's data analysis. This study adopted a non-random convenience sampling method to acquire a sample of higher grade elementary students from a school in New Taipei City. One hundred and thirteen valid samples were collected. 
This study gathered relevant past literature for use in the questionnaire design, which contained four parts: participation interest in mathematics, personality traits scale, participation interest in game-based programming scale, and personal information. The data collected were analyzed using methods including descriptive statistics, independent sample t-testing, and one-way analysis of variance. The research findings on participation interest indicated that, regarding personality traits, those students with high scores for “agreeableness,” “conscientiousness,” “extroversion,” and “openness” had significantly higher participation interest in game-based programming than those with low scores in these areas. With regard to participation interest in mathematics, those students with high scores in these areas had substantially higher participation interest in game-based programming than those with low scores. For creative thinking activities, those students who scored highly in “openness” and “originality” had significantly higher participation interest in game-based programming than those with low scores in these areas. With regard to creative tendency, those students who scored highly in “adventurous,” “curiosity,” and “imagination” had significantly higher participation interest in game-based programming than those with low scores in these areas. The research findings on learning achievement indicated that, with regard to participation achievement in mathematics, those students with high scores had a substantially higher participation achievements in game-based programming than those with low scores. For logical reasoning abilities, the high-scorers had substantially higher participation achievements in game-based programming than the low-scorers. For participation interest in game-based programming, the high-scorers had substantially higher participation achievements in game-based programming than the low-scorers.
APA, Harvard, Vancouver, ISO, and other styles
45

Chang, Kuei-Nei, and 鄭貴內. "The Effect of Scratch Jr Programming on the Third Grade Students' Logical Reasoning Ability." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/35370671477621972708.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Information Management
104
This thesis evaluates the influence of a programming and design course, using Scratch Jr. software, on third grade students’ logical reasoning abilities. This study evaluates differences in students’ logical reasoning between the pretest and posttest and any correlations with the results of a learning attitude questionnaire. The results are as follows: First, significant differences between the experimental and control groups were found for one category of logical reasoning ability, hypothetical proposition, while no differences were found for the remaining categories. Second, analysis of variance revealed significant differences between low-achieving students in the experimental and control groups for the logical reasoning category of first-order logic, with no significant differences found for the remaining categories or the total score; on average, however, the experimental group performed better than the control group, albeit at a non-significant level. Third, analysis of variance revealed significant differences between high-achieving students in the experimental and control groups for the logical reasoning category of hypothetical proposition, again with no significant differences for the remaining categories or the total score, and again with the experimental group performing better on average at a non-significant level. Fourth, analysis of variance found no significant differences among the small groups within the experimental class in any category of the Logical Reasoning Test. However, the third group, consisting only of girls, showed a decline in scores between the pretest and posttest, while the first group, consisting only of boys, showed the greatest progress. Thus, gender may have played a role in this experiment.
Fifth, a statistically significant, moderate positive correlation was found between learning attitude and logical reasoning ability, suggesting that a positive learning attitude may be associated with better performance in logical reasoning.
APA, Harvard, Vancouver, ISO, and other styles
46

FANG, CHIEH-JEN, and 方玠仁. "Effect of mBot Robotics on Logical Reasoning Ability and Problem Solving Ability of Grade 7 Students." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/23741560117792273594.

Full text
Abstract:
Master's thesis
National Chung Cheng University
In-service Master's Program of Digital Learning in Teaching Professional Development
105
This research studied the effects of an mBot robotics course and of gender on the logical reasoning and problem-solving abilities of seventh grade students. Twenty-two students (8 boys and 14 girls) from the 7th grade of a junior high school in Tainan City participated. A one-group pretest-posttest design was used to conduct eight classes of experimental teaching. The research tools included the “Logical Reasoning Ability Test” and the “Problem-Solving Ability Test for Junior High School Students.” The experimental results were analyzed using dependent-samples and independent-samples t-tests, and the following findings were obtained: I. Impact of the mBot robotics course on logical reasoning ability: 1. The course had significant effects on the “Total Score,” “Disjunctive Proposition,” and “De Morgan's Theorem” indicators of the Logical Reasoning Ability Test. 2. The course had no significant effect on the “Conjunctive Proposition,” “Hypothetical Proposition,” and “First-Order Logic” indicators. II. Impact of gender on logical reasoning ability: Gender produced no significant differences in the “Total Score,” “Conjunctive Proposition,” “Disjunctive Proposition,” “Hypothetical Proposition,” “De Morgan's Theorem,” and “First-Order Logic” indicators of the Logical Reasoning Ability Test. III. Impact of the mBot robotics course on problem-solving abilities: 1. The course had a significant effect on the “Total Score” and “Problem Redefinition” indicators of the Problem-Solving Ability Test. 2. The course had no significant effect on the “Problem Recognition,” “Causal Inference,” “Idea Proposal,” and “Optimal Solution-Seeking” indicators. IV.
Impact of gender on problem-solving abilities: Gender produced no significant differences in the “Total Score,” “Problem Detection,” “Problem Redefinition,” “Causal Inference,” “Idea Proposal,” and “Optimal Solution-Seeking” indicators of the Problem-Solving Ability Test. V. Correlation between logical reasoning and problem-solving ability: Logical reasoning and problem-solving abilities were positively correlated in both the pretest and the posttest. Finally, recommendations based on these findings were proposed to provide teachers and future researchers with a reference for implementing mBot robots in education and for further in-depth study. Keywords: mBot robot, logical reasoning ability, problem-solving abilities, visual programming
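The pretest-posttest comparisons described here reduce to a dependent-samples t-test on each student's pre/post scores, plus an independent-samples t-test on gains by gender. The numbers below are invented stand-ins (the abstract does not publish raw data); only the test structure reflects the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical pre/post logical-reasoning total scores for 22 students,
# assuming a modest average gain after the course.
pre = rng.normal(60, 10, 22)
post = pre + rng.normal(5, 4, 22)

# Dependent-samples t-test: did scores improve from pretest to posttest?
t_dep, p_dep = stats.ttest_rel(post, pre)

# Independent-samples t-test on gains by gender (8 boys, then 14 girls,
# matching the study's group sizes).
gain = post - pre
t_ind, p_ind = stats.ttest_ind(gain[:8], gain[8:])

print(f"paired: t = {t_dep:.2f}, p = {p_dep:.4f}")
```

With a one-group design like this, the paired test carries the main result; the gender comparison is secondary and, as in the study, may well be non-significant.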
APA, Harvard, Vancouver, ISO, and other styles
47

Lo, Pi-Han, and 羅筆韓. "The Effect of Worked-Example Problem-based Learning on University Students'' Logical Problem-solving Performance." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/yypy6a.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Educational Technology
102
In daily life and work, we are expected to solve diverse problems, of which logical problems are the most fundamental. Therefore, to improve students' logical problem-solving performance, Problem-based Learning (PBL) has frequently been adopted to engage students in the problem-solving process. Moreover, prior research indicates that adding worked-out examples to a PBL design facilitates students' learning. This research aimed to explore the effects of Worked-example Problem-based Learning (WPBL) compared with conventional PBL. Sixty students matriculated in information science programs at one private university were recruited and randomly assigned to two groups: a PBL group and a WPBL group. A pretest-posttest experimental design was adopted to determine whether WPBL influenced students' logical problem-solving performance. Two conclusions were drawn from the findings: both worked-example problem-based learning and conventional PBL can effectively enhance logical problem-solving performance; however, no significant difference was found between the effects of WPBL and conventional PBL.
APA, Harvard, Vancouver, ISO, and other styles
48

CHENG, CHIH-JUNG, and 鄭志榮. "The Effect of Code Studio Programming Learning on the Fifth Grade Student’s Logical Reasoning and Problem Solving Ability." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/fxk8s2.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Information Management
106
The main purpose of this study was to investigate the effect of Code Studio programming learning on 5th graders’ logical reasoning and problem-solving abilities. In addition, the students’ learning achievement in, and learning attitudes toward, Code Studio were analyzed. This study adopted a quasi-experimental design. The participants were students taught by the researcher: two classes of forty-six students in total, with twenty-three designated as the experimental group and twenty-three as the control group. The experimental group was arranged into mixed-gender teams, while the control group was arranged into same-gender teams. Both groups received a 5-week Code Studio programming course. The data were collected quantitatively, supplemented by qualitative interview data. The instruments, which included a computer attitude questionnaire, a logical reasoning abilities test, a problem solving abilities questionnaire, a Code Studio learning achievement test, and a Code Studio learning attitude questionnaire, were developed and employed to gather the quantitative data, and a semi-structured interview examined the students’ perceptions of the course. The results were as follows: 1. Code Studio programming learning had significant effects on the disjunction category of logical reasoning ability for both the experimental and control group students. 2. Code Studio programming learning had significant effects on the disjunction and implication categories of logical reasoning ability for students with high computer attitudes in both groups. 3. Code Studio programming learning had no significant effect on logical reasoning ability across students of different learning achievement in the experimental and control groups. 4. Code Studio programming learning showed no significant difference in problem-solving abilities across students of different learning achievement in the experimental and control groups. 5.
There was no significant difference in Code Studio learning achievement between the experimental and control groups. 6. There was no significant difference in Code Studio learning achievement between students of different computer attitudes in the experimental and control groups. 7. Students with low computer attitudes in the experimental and control groups showed significant differences in the total score, satisfaction, and practicality dimensions of Code Studio programming learning attitude. 8. Students of different learning achievement in the experimental and control groups showed no significant difference in Code Studio programming learning attitude.
APA, Harvard, Vancouver, ISO, and other styles
49

Ho, Chung-Jen, and 何仲仁. "The effect of RF interference on logic gates." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/20811253565958824785.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hsu, Kuo-Huang, and 徐國晃. "Dynamic Programmable Logic Arrays Considering Optical Proximity Effect." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/24339169885750027769.

Full text
Abstract:
Master's thesis
Southern Taiwan University of Science and Technology
Department of Electronic Engineering
95
Since the critical dimension (CD) in nano-scale fabrication is far smaller than the exposure wavelength, diffraction can cause serious problems such as the optical proximity effect (OPE) and distort the printed mask patterns. Such distortion can also introduce wire-length errors when the patterns are transferred to the wafer by lithography, lowering production yield. This thesis offers an automated circuit-layout method based on the dynamic programmable logic array (DPLA) that reduces the cost of performing optical proximity correction (OPC). In typical DPLA circuit design, engineers emphasize circuit performance rather than yield; moreover, when OPC is performed, most manufacturers run a full correction over the whole layout, which is time-consuming and costly. We use the proposed algorithm to pre-arrange the product lines in the DPLA, reducing the number of adjacent transistors that couple together and minimizing interference between neighboring prints, without affecting functionality or increasing area. In experiments on 28 different circuits, we obtained a reduction of nearly 31% in the original OPC cost. Moreover, we use the DPLA compiler to automatically generate the DPLA configuration and the corresponding critical path, with the optical simulation software SPLAT providing a more reliable distortion analysis. Finally, the comparative performance analysis of the circuits can serve as a reference for OPC.
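The thesis's actual reordering algorithm and cost model are not reproduced in the abstract. As a rough illustration of the idea, under the hypothetical assumption that OPC cost grows when adjacent product lines place transistors in the same input columns, ordering the lines to minimize that adjacency might look like:

```python
from itertools import permutations

# Hypothetical model: each product line is the set of input columns where it
# places a transistor; adjacent lines sharing columns are assumed to couple
# and raise OPC cost. The thesis's real cost model differs.
lines = {"p0": {0, 1}, "p1": {1, 2}, "p2": {3}, "p3": {0, 3}}

def coupling_cost(order):
    """Sum of shared columns between each pair of neighboring product lines."""
    return sum(len(lines[a] & lines[b]) for a, b in zip(order, order[1:]))

# Exhaustive search is fine for a tiny example; a real tool needs a heuristic,
# since the number of orderings grows factorially with the number of lines.
best = min(permutations(lines), key=coupling_cost)
print(best, coupling_cost(best))
```

Reordering rows of a PLA does not change the logic function it computes, which is why such a pass can trade layout cost against OPC cost "for free" in functional terms.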
APA, Harvard, Vancouver, ISO, and other styles