Dissertations / Theses on the topic 'Binary logit'

1

Mannila, Kranthi Kiran. "ANALYSIS OF VARIOUS CAR-TRUCK CRASH TYPES BASED ON GES AND FARS CRASH DATABASES USING MULTINOMIAL AND BINARY LOGIT MODEL." Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2583.

Abstract:
Each year about 400,000 trucks are involved in motor vehicle crashes. Crashes involving a car and a truck have always been a major concern due to their heavy fatality rates. These crashes account for about 60 percent of all fatal truck crashes and two-thirds of all police-reportable truck crashes. Car-truck crashes therefore need further analysis to identify their trends and to develop countermeasures that reduce them. This study analyzes various types of car-truck crashes and investigates the effects of roadway/environment factors and driver-related variables on these crashes. To examine crash characteristics and identify the significant factors related to car-truck crashes, the study analyzed five years of data (2000-2004) from the General Estimates System (GES) of the National Automotive Sampling System and from the Fatality Analysis Reporting System (FARS) database. All two-vehicle crashes involving a car and a truck (truck-truck cases were excluded because of their low percentage) were obtained from these databases. Based on the five-year GES/FARS data, angle collisions constituted the highest percentage of all car-truck collision types. Furthermore, the 2004 GES data show a clear trend that the frequency of angle collisions increases with driver injury severity. In the GES data, the percentage of angle collisions was the highest, followed by rear-end and sideswipe (same-direction) collisions, respectively. When fatalities were considered (using the FARS database), the percentage of angle collisions was the highest, followed by head-on and rear-end collisions. The nominal multinomial logit model and logistic regression models were utilized for this analysis.
Divided sections, alcohol involvement, adverse weather conditions, dark lighting conditions, and older drivers had a significant effect and were likely to increase the likelihood of a car-truck crash, whereas dark-but-lighted conditions and younger drivers were associated with a lower likelihood of involvement in a car-truck crash. This research provides insight into the various car-truck crash types and into the factors that have affected these crashes. A better understanding of these factors will help in devising better countermeasures, which would in turn reduce car-truck crashes.
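The kind of binary logit model applied in studies like this one can be sketched in a few lines of standard Python. This is a minimal illustration on simulated data, not the thesis's model: a single hypothetical binary predictor (labeled "dark" for dark lighting) is generated with known coefficients and then recovered by gradient ascent on the log-likelihood.

```python
import math
import random

def fit_binary_logit(X, y, lr=0.5, iters=3000):
    """Fit P(y=1|x) = 1 / (1 + exp(-b.x)) by gradient ascent on the
    mean log-likelihood. Each row of X should include a 1.0 intercept."""
    k = len(X[0])
    b = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(bj * xj for bj, xj in zip(b, xi))))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]
        b = [bj + lr * gj / len(X) for bj, gj in zip(b, grad)]
    return b

# Simulate crashes where a hypothetical "dark lighting" indicator raises the
# odds of the outcome (true intercept -1.0, true coefficient +1.5).
random.seed(0)
X, y = [], []
for _ in range(500):
    dark = 1.0 if random.random() < 0.5 else 0.0
    p_true = 1.0 / (1.0 + math.exp(-(-1.0 + 1.5 * dark)))
    X.append([1.0, dark])
    y.append(1 if random.random() < p_true else 0)

b0, b1 = fit_binary_logit(X, y)
print(round(b0, 2), round(b1, 2))  # roughly recovers -1.0 and 1.5
```

A positive fitted coefficient corresponds to the qualitative finding above: a factor such as dark lighting raises the modeled likelihood of a car-truck crash.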
M.S.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering
2

Nti, Frank Kyekyeku. "Climate change vulnerability and coping mechanisms among farming communities in Northern Ghana." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/15116.

Abstract:
Master of Science
Department of Agricultural Economics
Andrew Barkley
This study examines the effect of extreme climatic conditions (drought, flood, and bushfires) on the livelihood of households in the Bawku West district of Ghana. The research identified the mechanisms with which households cope in such situations and analyzed the factors influencing the adoption of coping strategies for flood, drought, and bushfires. Data for the study were collected in selected villages across the district in the aftermath of the 2007/2008 extreme climatic events (a prolonged drought followed by erratic rainfall). A binary logit regression (BLR) model was then specified to estimate the factors that influence the adoption of a given coping mechanism. Results from the BLR model indicate that literacy level, FBO membership, household income, and household location had positive and significant impacts on adaptation to drought. Similarly, seed source, FBO membership, household income, and farm size had a positive and significant influence on adaptation to flood. Adaptation to bushfire was positively influenced by radio ownership, seed source, and income. The main effects of these extreme climatic events on households included destruction of crops, livestock, and buildings; food and water shortages; poor yields; and limited fields for livestock grazing. Government policies should therefore be geared towards creating revenue-generating channels and strengthening the institutions that provide access to farm credit, readily available improved seeds, and extension services. Additionally, policies that expedite information dissemination through radio and other public media will enhance households' adaptive capacity.
3

Kotikalapudi, Siddhartha. "Characteristics and contributory causes related to large truck crashes (phase-II) - all crashes." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/14027.

Abstract:
Master of Science
Department of Civil Engineering
Sunanda Dissanayake
In order to improve the safety of the overall surface transportation system, each of the critical areas needs to be addressed separately with focused attention. Statistics clearly show that large-truck crashes contribute significantly to the percentage of high-severity crashes. It is therefore important for the highway safety community to identify characteristics and contributory causes related to large-truck crashes. During the first phase of this study, fatal crash data from the Fatality Analysis Reporting System (FARS) database were studied to achieve that objective. In this second phase, truck crashes of all severity levels were analyzed with the intention of understanding characteristics and contributory causes and identifying factors contributing to increased severity of truck crashes, which could not be achieved by analyzing fatal crashes alone. Statistical methodologies such as cross-classification analysis and severity models were developed using Kansas crash data, and various driver-, road-, environment-, and vehicle-related characteristics and contributory causes were analyzed. From the cross-classification analysis, severity of truck crashes was found to be related to variables such as road surface (type, character, and condition), accident class, collision type, driver- and environment-related contributory causes, traffic-control type, truck maneuver, crash location, speed limit, light and weather conditions, time of day, functional class, lane class, and Annual Average Daily Traffic (AADT). Other variables, such as age of the truck driver, day of the week, gender of the truck driver, and pedestrian- and truck-related contributory causes, were found to have no relationship with crash severity of large trucks. Furthermore, driver-related contributory causes were found to be more common than any other type of contributory cause for the occurrence of truck crashes.
Failing to give time and attention, being too fast for existing conditions, and failing to yield the right of way were the most dominant truck-driver-related contributory causes, among many others. Through the severity modeling, factors such as truck-driver-related contributory cause, accident class, manner of collision, truck driver under the influence of alcohol, truck maneuver, traffic control device, surface condition, truck driver being too fast for existing conditions, truck driver being trapped, damage to the truck, and light conditions were found to be significantly related to increased severity of truck crashes. The truck driver being trapped had the highest odds of contributing to a more severe crash, with a value of 82.81, followed by collisions resulting in damage to the truck, which had 3.05 times higher odds of increasing the severity of truck crashes. A truck driver under the influence of alcohol had 2.66 times higher odds of contributing to a more severe crash. Besides traditional practices such as providing adequate traffic signs, ensuring proper lane markings, and providing rumble strips and elevated medians, the use of technology to develop and implement intelligent countermeasures was recommended. These include an Automated Truck Rollover Warning System to mitigate truck crashes involving rollovers, Lane Drift Warning Systems (LDWS) to prevent run-off-road collisions, Speed Limiters (SLs) to control truck speed, and connected-vehicle technologies such as Vehicle-to-Vehicle (V2V) integration systems to prevent head-on collisions, among many others. Proper development and implementation of these countermeasures in a cost-effective manner will help reduce the number and severity of truck crashes, thereby improving the overall safety of the transportation system.
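The odds values quoted above (82.81, 3.05, 2.66) are how binary logit results are typically reported: for a binary factor, the odds ratio is exp(coefficient). A brief sketch, with coefficients back-calculated from the reported odds ratios purely for illustration (the thesis's actual coefficients are not listed here):

```python
import math

def odds_ratio(coef):
    """exp(b): the multiplicative change in the odds of a more severe
    crash when a binary factor switches from 0 to 1 in a logit model."""
    return math.exp(coef)

reported = {"truck driver trapped": 82.81,
            "damage to the truck": 3.05,
            "driver under influence of alcohol": 2.66}
for factor, or_value in reported.items():
    coef = math.log(or_value)                  # implied logit coefficient
    print(f"{factor}: b = {coef:.2f}, odds ratio = {odds_ratio(coef):.2f}")
```

A coefficient of zero corresponds to an odds ratio of 1.0, i.e. no effect, which is why significance tests on logit coefficients are tests against zero.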
4

Saúde, Arthur Moreira. "Metodologia de previsão de recessões: um estudo econométrico com aplicações de modelos de resposta binária." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/18221.

Abstract:
This paper aims to create an econometric model capable of anticipating recessions in the United States economy, one year in advance, using not only monetary market variables that are already used by economists, but also capital market variables. Using a data span from 1959 to 2016, it was observed that the yield spread continues to be an explanatory variable with excellent predictive power over recessions. Evidence has also emerged of new variables that have very high statistical significance, and which offer valuable contributions to the regressions. Out-of-sample tests have been conducted which suggest that past recessions would have been predicted with substantially higher accuracy if the proposed Probit model had been used instead of the most widespread model in the economic literature. This accuracy is evident not only in the predictive quality, but also in the reduction of the number of false positives and false negatives in the regression, and in the robustness of the out-of-sample tests.
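The out-of-sample criteria emphasized above, accuracy together with counts of false positives and false negatives, can be made concrete with a small confusion-matrix sketch. The recession flags and model probabilities below are invented toy values, not the thesis's data:

```python
def confusion(y_true, p_hat, threshold=0.5):
    """Classify each predicted probability against a threshold and tally
    true/false positives and negatives, plus overall accuracy."""
    tp = fp = tn = fn = 0
    for y, p in zip(y_true, p_hat):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 1:
            fn += 1
        else:
            tn += 1
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "accuracy": (tp + tn) / len(y_true)}

# Toy example: 0/1 recession flags vs. fitted probabilities one year ahead.
y_toy = [0, 0, 1, 1, 0, 1, 0, 0]
p_toy = [0.1, 0.4, 0.8, 0.6, 0.2, 0.3, 0.1, 0.7]
print(confusion(y_toy, p_toy))  # {'tp': 2, 'fp': 1, 'fn': 1, 'tn': 4, 'accuracy': 0.75}
```

Comparing two models on the same held-out window by these counts, rather than by in-sample fit, is the kind of robustness check the abstract describes.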
5

Rönchen, Philipp. "Constraints of Binary Simple Homogeneous Structures." Thesis, Uppsala universitet, Algebra och geometri, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-361217.

6

Flick, Jason. "Evaluating the Impact of OOCEA's Dynamic Message Signs (DMS) on Travelers' Experience Using a Pre and Post-Deployment Survey." Master's thesis, University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3030.

Abstract:
The purpose of this thesis was to evaluate the impact of dynamic message signs (DMS) on the Orlando-Orange County Expressway Authority (OOCEA) toll road network using a pre- and post-deployment DMS survey analysis. DMS are electronic traffic signs used on roadways to give travelers information about travel times, traffic congestion, accidents, disabled vehicles, AMBER alerts, and special events. The particular DMS referred to in this study are large rectangular signs installed over the travel lanes, not portable trailer-mounted signs. The OOCEA has been working over the past two years to add several fixed DMS to its toll road network. At the time of the pre-deployment survey, only one DMS was installed on the network; at the time of the post-deployment survey, a total of 30 DMS were up and running. Since most travelers on the OOCEA toll roads are from Orange, Osceola, and Seminole counties, this study was limited to these counties. This thesis documents the results of and comparisons between the pre- and post-deployment survey analyses. The instrument used to analyze travelers' perception of DMS was a survey conducted via computer-aided telephone interviews. The pre-deployment survey was conducted in early November 2006, and the post-deployment survey during May 2008. Respondents were asked questions pertaining to awareness of DMS on the OOCEA toll roads, satisfaction with the travel information provided on the network, formatting of the messages, satisfaction with different types of messages, diversion (revealed and stated preferences), and classification/socioeconomic characteristics (such as age, education, most traveled toll road, county of residence, and length of residency).
The results of both the pre- and post-deployment surveys are discussed in this thesis, but it should be noted that the more telling results are those of the post-deployment survey, which show the complete picture of the impact of DMS on travelers' experience on the OOCEA toll road network. The pre-deployment results are included to show increases or decreases in certain aspects of the travel experience in relation to DMS. The pre-deployment analysis showed that 54.4% of OOCEA travelers recalled seeing DMS on the network, while 63.93% recalled seeing DMS in the post-deployment analysis. This increase of almost 10 percentage points between the two surveys demonstrates that people are becoming more aware of DMS on the OOCEA toll road network. Respondents commonly agreed that the DMS were helpful for providing information about hazardous conditions and that the DMS were easy to read. Upon further analysis it was also found that, between the pre- and post-deployment surveys, travelers' satisfaction with the special event information provided on DMS and with the travel time accuracy on DMS increased significantly.
With respect to the formatting of the DMS, the following methods were preferred by the majority of respondents in both the pre- and post-deployment surveys:
• A steady message as the default DMS message format
• A flashing message for abnormal traffic information (94% of respondents would like to be notified of abnormal traffic information)
• The state road number to identify the roadway (for Colonial – SR 50, Semoran – SR 436 and Alafaya – SR 434)
• "I-Drive" as an abbreviation for International Drive
• If the distance to the international airport is shown on a DMS, it is thought to be the distance to the airport exit
The results from the binary logit model for "satisfaction with travel information provided on the OOCEA toll road network" identified the significant variables explaining the likelihood of a traveler being satisfied. This satisfaction model was based on respondents who showed prior knowledge of DMS on OOCEA toll roads. Using a pooled model (a satisfaction model with a total of 1775 responses – 816 from pre-deployment and 959 from post-deployment), it was shown that there was no statistical change between pre- and post-deployment satisfaction based on variables thought to be theoretically relevant. The comparison between the pre- and post-deployment satisfaction models showed that many of the variable coefficients changed significantly. Although some variables were statistically insignificant in one of the two survey models (either the pre- or the post-deployment model), every variable was significant in at least one of the two. The coefficient for the variable corresponding to DMS accuracy was significantly lower in the post-deployment model, while the coefficient for the variable "DMS was helpful for providing special event information" was significantly higher.
The final post-deployment diversion model was based on a total of 732 respondents who answered that they had experienced congestion in the past 6 months. Based on this model, travelers who stated that their most frequently traveled toll road was either SR 408 or SR 417 were more likely to divert. Travelers who stated that they would divert in the case of abnormal travel times displayed on DMS, or that a DMS had influenced their response to congestion, also showed a higher likelihood of diversion; these two variables were added between the pre- and post-deployment surveys. It is also worth noting that travelers who stated they would divert in a hypothetical congestion situation of at least 30 minutes of delay were more likely to divert, showing that they do not contradict themselves between the revealed-preference and stated-preference diversion situations. Based on a comparison between pre- and post-deployment models containing similar variables, commuters were more likely to stay on the toll road, all else being equal to the base case. In the post-deployment model, respondents traveling on SR 408 and SR 417 were more likely to divert, whereas in the pre-deployment model only respondents traveling on SR 408 were more likely to divert. This is an expected result, since during the pre-deployment survey only one DMS was located on SR 408, while during the post-deployment survey DMS were located on all toll roads. Another interesting result is that in the post-deployment survey, commuters who paid tolls with E-pass were more likely to stay on the toll road than commuters who paid with cash. The implications of these results for implementation are discussed in this thesis. DMS should use a flashing message format for abnormal traffic situations, and the state road number should be used to identify a roadway.
DMS messages should pertain to information on roadway hazards when necessary because it was found that travelers find it important to be informed on events that are related to their personal safety. The travel time accuracy on DMS was shown to be significant for traveler information satisfaction because if the travelers observe inaccurate travel times on DMS, they may not trust the validity of future messages. Finally, it is important to meet the travelers' preferences and concerns for DMS.
M.S.C.E.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering MS
7

Pérez, Manríquez Alejandra. "Quivers for semigroup algebras of binary relations of small rank." Thesis, Uppsala universitet, Algebra och geometri, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411481.

8

Buonocore, Chiara. "Development of a model to choose the path of cyclists using GPS data collected via smartphone." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17199/.

Abstract:
Nowadays it is impossible to remain indifferent to urbanization and to sustainable mobility and road infrastructure. The bicycle deserves consideration as a mode of transport for many reasons: it reduces pollution and emissions, improves road safety, requires less space, and is healthy. In big cities it is often the fastest mode of transport, because it is not subject to the traffic bottlenecks that block flow. The Netherlands is the country most oriented toward cycling; it is the place where there are more bikes than cars. In recent years the availability of GPS data has improved markedly in terms of accuracy, continuity, and quality, thanks to the spread of smartphones and applications for self-location and navigation. The main advantage is the ability to obtain information on the routes actually followed by a large sample of cyclists across the entire network, from origin to destination. When GPS tracks can be matched to detailed transport networks, it becomes possible to evaluate the factors that users consider when choosing a specific route. Studying the route choices that cyclists make matters for many reasons. The objective of this thesis is to examine the aspects that cyclists take into account when they choose one route over another. We focus on travel time, considering the average speed and length of each link and the average waiting times at intersections, and on how time influences cyclists' choices. This research investigates which aspects of bicycle infrastructure have greater or lesser repercussions on the paths taken by cyclists, and models their route choices. It explores the link between the routes chosen by cyclists and attributes of the Dutch transport network. The chosen routes are compared with the fastest and shortest routes, calculated with the network analyst in ArcMap for each OD pair.
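For two alternatives, a binary logit route-choice model of the kind developed in work like this reduces to a simple closed form. The sketch below assumes a utility linear in travel time with a hypothetical coefficient (beta = -0.1 per minute); the thesis's actual specification and estimates may differ:

```python
import math

def prob_choose_a(time_a, time_b, beta_time=-0.1):
    """Binary logit probability of choosing route A over route B when
    utility is V = beta_time * travel_time (in minutes)."""
    va, vb = beta_time * time_a, beta_time * time_b
    return math.exp(va) / (math.exp(va) + math.exp(vb))

# Hypothetical: route A takes 12 min, route B 15 min, including waiting
# time at intersections; the shorter route is preferred, but not always.
p = prob_choose_a(12, 15)
print(round(p, 3))  # 0.574
```

The logistic form means a three-minute saving shifts the choice probability only moderately, which is one reason observed GPS routes often deviate from the shortest or fastest path.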
9

Mendonça, Danilo Marques de. "Perfil das famílias tomadoras de crédito no Brasil: caracterização a partir de um modelo desenvolvido com microdados da POF 2008/09." Pontifícia Universidade Católica de São Paulo, 2014. https://tede2.pucsp.br/handle/handle/9236.

Abstract:
After the period of monetary stabilization that began with the Real Plan in 1994, the Brazilian credit market has shown annual growth rates of about 20%. Around 40% of this growth came from the credit market for individuals. This paper analyzes the profile of families who have credit expenses and examines which changes in their characteristics affect their propensity to take credit. For this purpose, a binary logit choice model was applied to microdata from the Household Budget Survey (POF 2008/09) of the IBGE, in an attempt to measure the probability that a family takes a loan. Categorical variables relating to the constitution of families were used, such as the education level, sex, race, and age of the household head, along with other information on the composition of household expenditures found in the POF. The data suggest that the two most important factors increasing the likelihood of family borrowing are the age of the household head and income per capita. Other factors also contribute significantly, such as the existence of spending on financial investments, on home renovation, or on health, as well as the children's age and the sex, race, and education of the household head.
10

Cazor, Laurent. "How to make the most of open data? A travel demand and supply model for regional bicycle paths." Thesis, KTH, Transport och systemanalys, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296219.

Abstract:
The main objective of this Master's thesis is to address a problem set by the Swedish Transport Administration (Trafikverket): a common regional bicycle planning process would make regional plans cheaper and more comparable. Trafikverket currently offers planners a model developed by Kågeson in 2007, which takes the form of a report advising on when to build a bicycle path between cities or places in a region. Still, it is used in only 6 of the 21 Swedish counties. Trafikverket requires a new planning support tool, more interactive and complete than the Kågeson model. Desired new features include the separation of demand per purpose, the inclusion of e-bikes, different trip purposes, and a prioritization of investments. The degree project is to design and implement this tool, also called a Planning Support System (PSS), which compares supply and demand for bicycle paths in order to prioritize infrastructure improvements. A main constraint is that the model must be cheap data-wise, yet as complete and precise as possible. It is based on several open data providers, such as OpenStreetMap, the Swedish National Road Database (NVDB), and travel surveys from Sweden and the Netherlands. The result is a model disaggregated by trip purpose and type of bicycle. The demand-estimation part adapts a classic four-step transportation model to bicycle planning and limited data. For each trip purpose, trips are generated and distributed with an origin-constrained gravity model. Bicycle mode choice is fitted to actual travel behaviour through logistic regression with a binary logit model. The trips are then assigned to the network using the "all-or-nothing" assignment method via Dijkstra's algorithm. To evaluate bicycle supply, a metric called Level of Traffic Stress (LTS) is used, which estimates the potential use of a network link by different parts of the population as a function of road network variables.
The prioritization ranking is then the ratio between the demand and supply metrics. The new tool is implemented with the open-source Geographic Information System (GIS) QGIS and with Python 3, and it is tested on Södermanland County.
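The assignment step described above, shortest paths by Dijkstra's algorithm followed by "all-or-nothing" loading, can be sketched compactly in Python. The graph and OD matrix below are toy values, not the Södermanland network:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs and predecessors from source.
    graph: {node: [(neighbor, cost), ...]}."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def all_or_nothing(graph, od_trips):
    """Assign every OD flow to its single shortest path (no congestion feedback)."""
    link_flow = {}
    for (o, d), trips in od_trips.items():
        _, prev = dijkstra(graph, o)
        node = d
        while node != o:                      # walk the path back to the origin
            link = (prev[node], node)
            link_flow[link] = link_flow.get(link, 0) + trips
            node = prev[node]
    return link_flow

# Tiny hypothetical network: A->B->C (cost 2) beats the direct A->C (cost 3).
g = {"A": [("B", 1.0), ("C", 3.0)], "B": [("C", 1.0)], "C": []}
flows = all_or_nothing(g, {("A", "C"): 100})
print(flows)  # {('B', 'C'): 100, ('A', 'B'): 100}
```

All-or-nothing loading is the cheapest assignment method data-wise, consistent with the model's low-cost design constraint, but it ignores congestion effects by construction.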
11

Heredia, Rico Jobany J. "Simulation and Application of Binary Logic Regression Models." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2455.

Abstract:
Logic regression (LR) is a methodology to identify logic combinations of binary predictors, in the form of intersections (and), unions (or), and negations (not), that are linearly associated with an outcome variable. Logic regression uses the predictors as inputs and identifies important logic combinations of independent variables using a computationally efficient tree-based stochastic search algorithm, unlike classical regression models, which only consider pre-determined conventional interactions (the "and" rules). In this thesis, we focused on LR with a binary outcome in a logistic regression framework. Simulation studies were conducted to examine the performance of LR under the assumptions of independent and correlated observations, respectively, for various characteristics of the data sets and LR search parameters. We found that the proportion of times that LR selected the correct logic rule was usually low when the signal and/or the prevalence of the true logic rule was relatively low. The method performed satisfactorily under easy learning conditions such as high signal, simple logic rules, and/or small numbers of predictors. Given the simulation characteristics and correlation structures tested, we found some, but not statistically significant, differences in performance when LR was applied to dependent observations compared to the independent case. In addition to the simulation studies, an advanced application method was proposed that integrates LR and resampling methods in order to enhance LR performance. The proposed method was illustrated using two simulated data sets as well as a data set from a real-life situation, and it showed some evidence of being effective in discerning the correct logic rule, even under unfavorable learning conditions.
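The logic combinations that LR searches over can be made concrete with a toy rule. The sketch below evaluates one hypothetical rule, (X1 AND NOT X3) OR X2, over all predictor combinations; actual LR discovers such rules via stochastic tree search rather than by enumeration:

```python
from itertools import product

def rule(x1, x2, x3):
    """One candidate logic rule of the kind LR searches over:
    (x1 AND NOT x3) OR x2."""
    return (x1 and not x3) or x2

# The rule acts as a single derived binary covariate; here is its truth table.
table = {(x1, x2, x3): int(bool(rule(x1, x2, x3)))
         for x1, x2, x3 in product([0, 1], repeat=3)}
for combo, out in table.items():
    print(combo, "->", out)
```

In the logistic regression framework described above, the rule's 0/1 output would enter the model as one covariate, so a single coefficient summarizes the whole boolean combination.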
12

Croson, E., J. Howard, and L. Jue. "Binary Decision Machines: Alternative Logic for Telemetry Control." International Foundation for Telemetering, 1987. http://hdl.handle.net/10150/615292.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California
A Binary Decision Machine (BDM) is described as a means of achieving logical control of data acquisition equipment and telemetry systems. The basic architecture of a BDM is presented first, followed by a description of its implementation as a Very Large Scale Integration (VLSI) device. Performance characteristics, programming, and ease of use as a controller are then presented via actual applications. The results of these endeavors led to a means of digitizing and extracting Doppler data in a missile telemetry system.
13

Kelan, Elisabeth Kristina. "Binary logic? : doing gender in information communication technology work." Thesis, London School of Economics and Political Science (University of London), 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429370.

Full text
14

Wu, Nicholas (Nicholas T.). "Inductive logic programming with gradient descent for supervised binary classification." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129926.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 75-76).
As machine learning techniques have become more advanced, interpretability has become a major concern for models making important decisions. In contrast to Local Interpretable Model-Agnostic Explanations (LIME), this thesis seeks to develop an interpretable model using logical rules, rather than explaining existing black-box models. We extend recent inductive logic programming methods developed by Evans and Grefenstette [3] to develop a gradient-descent-based inductive logic programming technique for supervised binary classification. We start by developing our methodology for binary input data, and then extend the approach to numerical data using a threshold-gate-based binarization technique. We test our implementations on datasets with varying pattern structures and noise levels, and select our best-performing implementation. We then present an example where our method generates an accurate and interpretable rule set, whereas the LIME technique fails to generate a reasonable model. Further, we test our original methodology on the FICO Home Equity Line of Credit dataset. We run a hyperparameter search over differing numbers of rules and rule sizes. Our best-performing model achieves 71.7% accuracy, which is comparable to multilayer perceptron and randomized forest models. We conclude by suggesting directions for future applications and potential improvements.
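The threshold-gate binarization step has a simple concrete form. This is a hedged sketch of the general idea, not the thesis's implementation: each numeric column is expanded into one binary column per threshold, and the thresholds here are invented (in practice they might be chosen from training-data quantiles).

```python
import numpy as np

def binarize(X, thresholds):
    """Threshold-gate binarization: each numeric column becomes one binary
    column per threshold, set to 1 when the value exceeds that threshold.

    X          : (n_samples, n_features) numeric array
    thresholds : list of 1-D arrays, thresholds[j] applies to column j
    """
    cols = []
    for j, ts in enumerate(thresholds):
        for t in ts:
            cols.append((X[:, j] > t).astype(int))
    return np.column_stack(cols)

X = np.array([[0.2, 5.0],
              [0.8, 1.0],
              [0.5, 3.0]])
# Hypothetical thresholds for the two columns.
B = binarize(X, [np.array([0.4]), np.array([2.0, 4.0])])
print(B.tolist())  # [[0, 1, 1], [1, 0, 0], [1, 1, 0]]
```

The resulting 0/1 matrix can then be fed to the binary-input version of the method.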
by Nicholas Wu.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
15

Shapiro, Albina. "Interface timing verification using constraint logic programming and binary decision diagrams." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82632.

Full text
Abstract:
The design and verification of high-performance circuits is becoming increasingly challenging due to the complex sets of constraints that must hold to ensure correct operation. The complexity of digital circuits is increasing rapidly, which in turn increases the complexity of the verification process and the effort required of the user. Efficient, easy-to-use tools are therefore required for timed verification.
In this thesis we propose two methods to aid the verification process. Firstly, we introduce a new verification methodology that combines the advantages of several existing successful approaches. In particular, our verification technique uses a combination of untimed, relative timing and timed verification. Secondly, we propose and evaluate a novel method of solving CSPs (constraint satisfaction problems) using BDDs (binary decision diagrams). We investigate two different implementations of a BDD-based CSP solver and their capacity to bridge the gap between untimed and timed verification. Finally, we present two case studies to demonstrate the proposed techniques.
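To make the BDD-based CSP idea concrete, here is a toy sketch, not the thesis's solver: a minimal reduced ordered BDD with the standard apply operation, used to conjoin three constraints over booleans a, b, c and enumerate the satisfying assignments. Production BDD packages add a memoized apply cache, complement edges and garbage collection.

```python
# Terminals are the ints 0 and 1; internal nodes are interned
# (var, low, high) tuples, so structurally equal subgraphs are shared.
class BDD:
    def __init__(self, n_vars):
        self.n = n_vars
        self.unique = {}

    def node(self, var, low, high):
        if low == high:                       # reduction: redundant test
            return low
        key = (var, low, high)
        return self.unique.setdefault(key, key)

    def var(self, i):
        return self.node(i, 0, 1)

    def apply(self, op, u, v):
        """Combine two BDDs by Shannon expansion on the top variable."""
        if isinstance(u, int) and isinstance(v, int):
            return op(u, v)
        uvar = u[0] if isinstance(u, tuple) else self.n
        vvar = v[0] if isinstance(v, tuple) else self.n
        m = min(uvar, vvar)
        u0, u1 = (u[1], u[2]) if uvar == m else (u, u)
        v0, v1 = (v[1], v[2]) if vvar == m else (v, v)
        return self.node(m, self.apply(op, u0, v0), self.apply(op, u1, v1))

    def solutions(self, u, var=0, assign=()):
        """Yield all satisfying assignments as 0/1 tuples."""
        if u == 0:
            return
        if var == self.n:
            yield assign
            return
        if isinstance(u, tuple) and u[0] == var:
            yield from self.solutions(u[1], var + 1, assign + (0,))
            yield from self.solutions(u[2], var + 1, assign + (1,))
        else:                                 # u does not depend on var here
            yield from self.solutions(u, var + 1, assign + (0,))
            yield from self.solutions(u, var + 1, assign + (1,))

AND = lambda x, y: x & y
OR = lambda x, y: x | y
XOR = lambda x, y: x ^ y

bdd = BDD(3)
a, b, c = bdd.var(0), bdd.var(1), bdd.var(2)

# CSP over booleans a, b, c:  (a OR b)  AND  NOT(a AND c)  AND  (b != c)
f = bdd.apply(AND,
              bdd.apply(AND,
                        bdd.apply(OR, a, b),
                        bdd.apply(XOR, bdd.apply(AND, a, c), 1)),  # NOT via XOR 1
              bdd.apply(XOR, b, c))

print(sorted(bdd.solutions(f)))   # [(0, 1, 0), (1, 1, 0)]
```

The conjunction of all constraints is itself a BDD, so satisfiability, counting and enumeration all become graph walks rather than backtracking search.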
16

Cartwright, Peter. "Self organising knowledge based control of a binary distillation column." Thesis, Manchester Metropolitan University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309885.

Full text
17

Whaley, John. "Context-sensitive pointer analysis using binary decision diagrams /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
18

Cabrol, Sébastien. "Les crises économiques et financières et les facteurs favorisant leur occurrence." Thesis, Paris 9, 2013. http://www.theses.fr/2013PA090019.

Full text
Abstract:
Cette étude vise à mettre en lumière les différences et similarités existant entre les principales crises économiques et financières ayant frappé un échantillon de 21 pays avancés depuis 1981. Nous analyserons plus particulièrement la crise des subprimes que nous rapprocherons avec des épisodes antérieurs. Nous étudierons à la fois les années du déclenchement des turbulences (analyse typologique) ainsi que celles les précédant (prévision). Cette analyse sera fondée sur l’utilisation de la méthode CART (Classification And Regression Trees). Cette technique non linéaire et non paramétrique permet de prendre en compte les effets de seuil et les interactions entre variables explicatives de façon à révéler plusieurs contextes distincts explicatifs d’un même événement. Dans le cadre d‘un modèle de prévision, l’analyse des années précédant les crises nous indique que les variables à surveiller sont : la variation et la volatilité du cours de l’once d’or, le déficit du compte courant en pourcentage du PIB et la variation de l’openness ratio et enfin la variation et la volatilité du taux de change. Dans le cadre de l’analyse typologique, l’étude des différentes variétés de crise (année du déclenchement de la crise) nous permettra d’identifier deux principaux types de turbulence d’un point de vue empirique. En premier lieu, nous retiendrons les crises globales caractérisées par un fort ralentissement ou une baisse de l’activité aux Etats-Unis et une faible croissance du PIB dans les pays touchés. D’autre part, nous mettrons en évidence des crises idiosyncratiques propres à un pays donné et caractérisées par une inflation et une volatilité du taux de change élevées
The aim of this thesis is to analyze, from an empirical point of view, both the different varieties of economic and financial crises (typological analysis) and the characteristics of the contexts that could be associated with a likely occurrence of such events. Consequently, we analyze both the years in which a crisis occurs and the years preceding such events (leading-context analysis, forecasting). This study contributes to the empirical literature by focusing exclusively on crises in advanced economies over the last 30 years, by considering several theoretical types of crises and by taking into account a large number of both economic and financial explanatory variables. As part of this research, we also analyze stylized facts related to the 2007/2008 subprime turmoil and our ability to foresee crises from an epistemological perspective. Our empirical results are based on the use of binary classification trees through the CART (Classification And Regression Trees) methodology. This nonparametric and nonlinear statistical technique allows us to manage large data sets and is suitable for identifying threshold effects and complex interactions among variables. Furthermore, this methodology characterizes crises (or contexts preceding a crisis) by several distinct sets of independent variables. Thus, we identify as leading indicators of economic and financial crises: the variation and volatility of both gold prices and nominal exchange rates, as well as the current account balance (as % of GDP) and the change in the openness ratio. Regarding the typological analysis, we identify two main empirical varieties of crises. First, we highlight « global type » crises, characterized by a slowdown in US economic activity (stressing the role and influence of the USA in global economic conditions) and low GDP growth in the countries affected by the turmoil. Second, we find that country-specific high levels of both inflation and exchange rate volatility can be considered evidence of « idiosyncratic type » crises.
19

Jacobi, Ricardo Pezzuol. "A study of the application of binary decision diagrams in multilevel logic synthesis." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1993. http://hdl.handle.net/10183/17646.

Full text
20

Wingfield, James. "Approaches to test set generation using binary decision diagrams." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/20.

Full text
Abstract:
This research pursues the use of powerful BDD-based functional circuit analysis to evaluate some approaches to test set generation. Functional representations of the circuit allow the measurement of information about faults that is not directly available through circuit simulation methods, such as probability of random detection and test-space overlap between faults. I have created a software tool that performs experiments to make such measurements and augments existing test generation strategies with this new information. Using this tool, I explored the relationship of fault model difficulty to test set length through fortuitous detection, and I experimented with the application of function-based methods to help reconcile the traditionally opposed goals of making test sets that are both smaller and more effective.
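The "probability of random detection" mentioned above has a simple definition that a small example makes concrete: the fraction of input vectors on which the faulty circuit differs from the good one. The sketch below is illustrative only (the circuit and fault are invented); it enumerates a 3-input circuit exhaustively, whereas BDD-based analysis obtains the same count from the XOR of the two functions without enumeration.

```python
from itertools import product

def good(a, b, c):
    # Toy circuit: f = (a AND b) OR c
    return (a and b) or c

def faulty(a, b, c):
    # Hypothetical stuck-at-0 fault on the AND-gate output.
    return 0 or c

vectors = list(product([0, 1], repeat=3))
detecting = [v for v in vectors if good(*v) != faulty(*v)]
p_detect = len(detecting) / len(vectors)
print(detecting, p_detect)  # [(1, 1, 0)] 0.125
```

A fault with a low p_detect, like this one, is exactly the kind of "hard" fault for which targeted, function-based test generation pays off over random vectors.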
21

Telfer, David Irwin. "The design and manufacture of a binary decision machine and an attendant workstation /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63875.

Full text
22

Yourst, Matt T. "Peptidal processor enhanced with programmable translation and integrated dynamic acceleration logic /." Diss., Online access via UMI:, 2005.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Department of Computer Science, Thomas J. Watson School of Engineering and Applied Science, 2005.
"This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation)"--ProQuest abstract document view. Includes bibliographical references.
23

Fares, George E. "Probabilistic fault location in combinational logic networks by multistage binary tree classifier algorithm: development, implementation results and efficiency." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5937.

Full text
24

Buchele, Suzanne Fox. "Three-dimensional binary space partitioning tree and constructive solid geometry tree construction from algebraic boundary representations /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
25

Pettersson, Fredrik. "En jämförelse av regressioner med binära utfall." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-451007.

Full text
Abstract:
The purpose of this bachelor thesis was to compare three different methods for regression with binary outcomes: the Linear Probability Model, Logit and Probit. To compare the methods, data gathered from the World Values Survey, most recently conducted in Sweden in 2011, was used. The outcome variable in the models was whether the respondent preferred protecting the environment or economic growth. A Monte Carlo simulation was also performed to strengthen the arguments in the comparison. The differences between the resulting models were very small, but they exist nonetheless. Two examples are the simplicity of interpretation of each model and the errors that argue against using the Linear Probability Model under certain circumstances.
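The three models differ only in the link that maps the linear index to a probability, which a short sketch makes visible (the coefficients are invented, not estimates from the WVS data): the identity link of the Linear Probability Model can produce "probabilities" outside [0, 1], while the logistic and normal CDFs cannot.

```python
import math

# Illustrative coefficients for a single regressor x.
b0, b1 = -1.0, 0.8

def lpm(x):
    return b0 + b1 * x                                   # identity link

def logit(x):
    return 1 / (1 + math.exp(-(b0 + b1 * x)))            # logistic CDF

def probit(x):
    return 0.5 * (1 + math.erf((b0 + b1 * x) / math.sqrt(2)))  # normal CDF

for x in (-3, 0, 3):
    print(x, round(lpm(x), 3), round(logit(x), 3), round(probit(x), 3))
# At x = -3 the index is -3.4, so lpm(-3) is an impossible "probability".
```

The probit tails are thinner than the logit tails, which is why the two usually give near-identical fits except for extreme values of the index.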
26

Monta, Ross Alan. "Symmetrical residue-to-binary conversion algorithm, pipelined FPGA implementation, and testing logic for use in high-speed folding digitizers." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Dec%5FMonta.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 2005.
Thesis Advisor(s): Phillip E. Pace, Douglas Fouts. Includes bibliographical references (p. 65). Also available online.
27

DELFINO, Jair. "Ifá e Odús: interdisciplinaridade, lógica binária, cultura e filosofia africana." www.teses.ufc.br, 2016. http://www.repositorio.ufc.br/handle/riufc/16444.

Full text
Abstract:
DELFINO, Jair. Ifá e Odús: interdisciplinaridade, lógica binária, cultura e filosofia africana. 2016. 106f. – Dissertação (Mestrado) – Universidade Federal do Ceará, Programa de Pós-graduação em Educação Brasileira, Fortaleza (CE), 2016.
Within the study of African and Afro-descendant history and culture, this dissertation makes an innovative contribution within the Ifá tradition. Ifá is a literary and philosophical corpus, descended from a divinity between two worlds, understood as the physical and the spiritual. The aim of the research was to systematize the representations of Ifá, working through questions of binary algebra and the organization of Ifá's meanings. The importance of this topic lies in examining the specific knowledge of a culture and tradition whose educational process is orality and the preservation of an interdisciplinary culture. We add to our proposal the systematic examination of concepts and propositions of life, which encompasses developing collectivity and individuality in learning and understanding through the exercise of the philosophical virtues specific to Yoruba thought, starting from the conception of nature and the divine genealogy of creation within the African tradition inherited from Ancient Egypt. Ifá is a cultural plurality that can be science, religion and sociability. Beyond what has already been explained, we bring to this work geometry, aesthetics and medicine within an inter-relational perspective, in order to show how the absorption of knowledge takes place. We intend to keep our distance from universalist, Eurocentric theoretical foundations, seeking instead a deepening of African philosophy and of interdisciplinarity, so as to contemplate Brazilian cultural and ethnic aspects and to be in accordance with Law No. 10.639/03. Thus, based on the orality present in religions of African origin, and through the literary corpus of Ifá, we embark on the complexity of reason and of metaphysical, timeless logic in order to understand cognition in the aspect of a worldview that institutes values and principles.
Dentro dos estudos de história e cultura africana e afrodescendente a presente dissertação faz uma inserção inovadora dentro da tradição do Ifá. O Ifá é um corpo literário e filosófico, que descende de uma divindade entre dois mundos entendidos como o físico e o espiritual. As sistematizações das representações do Ifá trabalham as questões sobre álgebra binária e a organização dos significados do Ifá foi o objetivo da pesquisa idealizada. A importância deste tema está em examinar conhecimentos específicos de uma cultura e tradição que tem como processo educativo a oralidade e a preservação da cultura interdisciplinar. Adicionamos à nossa proposta o exame sistemático dos conceitos e proposições de vida que abrange desenvolver a coletividade e individualidade, no aprender e entender através do exercício das virtudes filosóficas, específicas do pensar yorubá, partindo da concepção da natureza e da genealogia divina da criação dentro da tradição africana herdada do Antigo Egito. Ifá é pluralidade cultural que pode ser ciência, religião e sociabilidade. Além do que já foi explanado, trazemos para este corpo de trabalho a geometria, estética e medicina dentro do aspecto inter-relacional, a fim de mostrar como acontece a absorção de conhecimentos. Pretendemos ficar distantes da base teórica universalista e eurocentrista buscando o aprofundamento da filosofia africana e a interdisciplinaridade para contemplar os aspectos culturais e étnico-brasileiros bem como estar de acordo com a Lei n° 10.639/03. Assim, com base na oralidade presente nas religiões de matriz africana e, através do corpo literário do Ifá, embarcaremos na complexidade da razão e da lógica metafísica e atemporal para entendermos a cognição no aspecto da cosmovisão institutiva de valores e princípios.
28

Dellaluce, Jason. "Enhancing symbolic AI ecosystems with Probabilistic Logic Programming: a Kotlin multi-platform case study." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23856/.

Full text
Abstract:
As Artificial Intelligence (AI) progressively conquers the software industry at a fast pace, the demand for more transparent and pervasive technologies increases accordingly. In this scenario, novel approaches to Logic Programming (LP) and symbolic AI have the potential to satisfy the requirements of modern software environments. However, traditional logic-based approaches often fail to match present-day planning and learning workflows, which natively deal with uncertainty. Accordingly, Probabilistic Logic Programming (PLP) is emerging as a modern research field that investigates the combination of LP with the probability theory. Although research efforts at the state of the art demonstrate encouraging results, they are usually either developed as proof of concepts or bound to specific platforms, often having inconvenient constraints. In this dissertation, we introduce an elastic and platform-agnostic approach to PLP aimed to surpass the usability and portability limitations of current proposals. We design our solution as an extension of the 2P-Kt symbolic AI ecosystem, thus endorsing the mission of the project and inheriting its multi-platform and multi-paradigm nature. Additionally, our proposal comprehends an object-oriented and pure-Kotlin library for manipulating Binary Decision Diagrams (BDDs), which are notoriously relevant in the context of probabilistic computation. As a Kotlin multi-platform architecture, our BDD module aims to surpass the usability constraints of existing packages, which typically rely on low level C/C++ bindings for performance reasons. Overall, our project explores novel directions towards more usable, portable, and accessible PLP technologies, which we expect to grow in popularity both in the research community and in the software industry over the next few years.
29

Martinelli, Andres. "Advances in Functional Decomposition: Theory and Applications." Doctoral thesis, SICS, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-21180.

Full text
Abstract:
Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique which integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm which is more robust than a purely BDD-based one, and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, by using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally we present two publications that resulted from the many detours we have taken along the winding path of our research.
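A classic test behind simple disjoint decomposition is Ashenhurst's column-multiplicity condition, which the toy sketch below illustrates (it is not the dissertation's BDD-based algorithm, and the example function is invented): f(X) admits a simple disjoint decomposition f(X) = g(h(A), B) for a bound set A exactly when the decomposition chart, with columns indexed by assignments to A and rows by assignments to the free set B, has at most two distinct columns.

```python
from itertools import product

def column_multiplicity(f, n, bound):
    """Count distinct columns of the decomposition chart of an n-variable
    boolean function f for the given bound set (list of variable indices)."""
    free = [i for i in range(n) if i not in bound]
    cols = set()
    for a in product([0, 1], repeat=len(bound)):
        col = []
        for b in product([0, 1], repeat=len(free)):
            x = [0] * n
            for i, v in zip(bound, a):
                x[i] = v
            for i, v in zip(free, b):
                x[i] = v
            col.append(f(x))
        cols.add(tuple(col))
    return len(cols)

# f(a, b, c, d) = (a XOR b) AND (c OR d) decomposes with h = a XOR b.
f = lambda x: (x[0] ^ x[1]) & (x[2] | x[3])
print(column_multiplicity(f, 4, [0, 1]))  # 2: decomposable on bound set {a, b}
print(column_multiplicity(f, 4, [0, 2]))  # 4: not decomposable on {a, c}
```

The exhaustive chart is exponential in the number of variables; the algorithms in the dissertation recover the same information from BDD cuts and circuit-graph structure instead.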
30

Suguitani, Leandro Oliva 1976. "Sobre a lógica e a aritmética das relações." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280386.

Full text
Abstract:
Advisor: Itala Maria Loffredo D'Ottaviano
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: O resumo poderá ser visualizado no texto completo da tese digital
Abstract: The complete abstract is available with the full electronic document.
Doctorate
Philosophy
Doctor of Philosophy
31

Hawash, Maher Mofeid. "Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1090.

Full text
Abstract:
This dissertation describes the development of automated synthesis algorithms that construct reversible quantum circuits for reversible functions with large number of variables. Specifically, the research area is focused on reversible, permutative and fully specified binary and ternary specifications and the applicability of the resulting circuit to the physical limitations of existing quantum technologies. Automated synthesis of arbitrary reversible specifications is an NP hard, multiobjective optimization problem, where 1) the amount of time and computational resources required to synthesize the specification, 2) the number of primitive quantum gates in the resulting circuit (quantum cost), and 3) the number of ancillary qubits (variables added to hold intermediate calculations) are all minimized while 4) the number of variables is maximized. Some of the existing algorithms in the literature ignored objective 2 by focusing on the synthesis of a single solution without the addition of any ancillary qubits while others attempted to explore every possible solution in the search space in an effort to discover the optimal solution (i.e., sacrificed objective 1 and 4). Other algorithms resorted to adding a huge number of ancillary qubits (counter to objective 3) in an effort minimize the number of primitive gates (objective 2). In this dissertation, I first introduce the MMDSN algorithm that is capable of synthesizing binary specifications up to 30 variables, does not add any ancillary variables, produces better quantum cost (8-50% improvement) than algorithms which limit their search to a single solution and within a minimal amount of time compared to algorithms which perform exhaustive search (seconds vs. hours). The MMDSN algorithm introduces an innovative method of using the Hasse diagram to construct candidate solutions that are guaranteed to be valid and then selects the solution with the minimal quantum cost out of this subset. 
I then introduce the Covered Set Partitions (CSP) algorithm, which expands the search space of valid candidate solutions and allows for exploring solutions outside the range of MMDSN. I show a method of subdividing the expansive search landscape into smaller partitions and demonstrate the benefit of focusing on partition sizes that are around half of the number of variables (15% to 25% improvements over MMDSN for functions of fewer than 12 variables, and more than 1000% improvement for functions with 12 and 13 variables). For a function of n variables, the CSP algorithm theoretically requires n times more time to synthesize; however, by focusing on the middle partition sizes it reaches solutions not found by MMDSN and typically yields lower quantum cost. I also show that using a Tabu search for selecting the next set of candidates from the CSP subset results in discovering solutions with even lower quantum costs (up to 10% improvement over CSP with random selection). In Chapters 9 and 10 I question the predominant methods of measuring quantum cost and their applicability to physical implementations of quantum gates and circuits. I counter the prevailing literature by introducing a new standard for measuring the performance of quantum synthesis algorithms by enforcing the Linear Nearest Neighbor Model (LNNM) constraint, which is imposed by today's leading implementations of quantum technology. In addition to enforcing physical constraints, the new LNNM quantum cost (LNNQC) allows for a level comparison amongst all methods of synthesis, from methods which add a large number of ancillary variables to ones that add no additional variables. I show that, when LNNM is enforced, the quantum cost for methods that add a large number of ancillary qubits increases significantly (up to 1200%). I also extend the Hasse-based method to the ternary domain and demonstrate synthesis of specifications of up to 9 ternary variables (compared to 3 ternary variables in the existing literature).
I introduce the concept of ternary precedence order and its implications for the construction of the Hasse diagram and of valid candidate solutions. I also provide a case study comparing the performance of ternary logic synthesis of large functions using both a CUDA graphics processor with 1024 cores and an Intel i7 processor with 8 cores. In the process of exploring large ternary functions I introduce, to the literature, eight families of ternary benchmark functions along with a multiple-valued file specification (the Extended Quantum Specification, XQS). I also introduce a new composite quantum gate, the multiple-valued Swivel gate, which swaps the information of qubits around a centrally located pivot point. In summary, my research objectives are as follows:
* Explore and create automated synthesis algorithms for reversible circuits, both in binary and ternary logic, for large numbers of variables.
* Study the impact of enforcing the Linear Nearest Neighbor Model (LNNM) constraint for every interaction between qubits for reversible binary specifications.
* Advocate for a revised metric for measuring the cost of a quantum circuit in concordance with LNNM, where, on one hand, such a metric would provide a way for balanced comparison between the various flavors of algorithms, and on the other hand, represents a realistic cost of a quantum circuit with respect to an ion trap implementation.
* Establish an open source repository for sharing the results, software code and publications with the scientific community.
With the dwindling expectations for a new lifeline for silicon-based technologies, quantum computation has the potential of becoming the future workhorse of computation. Similar to the automated CAD tools of classical logic, my work lays the foundation for creating automated tools for constructing quantum circuits from reversible specifications.
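The LNNM penalty the abstract describes can be illustrated with a deliberately naive mapping scheme (an assumption for illustration, not the dissertation's method): on a linear qubit array, a two-qubit gate whose operands are d positions apart is preceded and followed by d-1 SWAP gates to bring the operands together and restore the ordering.

```python
def lnn_swap_overhead(gates):
    """Naive LNNM overhead for a circuit on a linear qubit array.

    gates: list of (q1, q2) qubit-index pairs, one per two-qubit gate.
    A gate whose operands are d apart costs 2 * (d - 1) extra SWAPs
    (d - 1 before the gate and d - 1 after, to undo the moves).
    """
    return sum(2 * (abs(q1 - q2) - 1) for q1, q2 in gates)

circuit = [(0, 1), (0, 3), (2, 4)]   # hypothetical two-qubit gate list
print(lnn_swap_overhead(circuit))    # 0 + 4 + 2 = 6
```

Even this crude count shows why methods that add many ancillary qubits fare badly under LNNM: extra qubits stretch the line and increase the typical operand distance d.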
32

Xu, Xiao Mark. "Developing a GIS-Based Decision Support Tool For Evaluating Potential Wind Farm Sites." The University of Waikato, 2007. http://hdl.handle.net/10289/2348.

Full text
Abstract:
In recent years, the popularity of wind energy has grown. It is starting to play a large role in generating renewable, clean energy around the world. In New Zealand, there is increasing recognition and awareness of global warming and the pollution caused by burning fossil fuels, as well as the increased difficulty of obtaining oil from foreign sources and the fluctuating price of non-renewable energy products. This makes wind energy a very attractive alternative for keeping New Zealand clean and green. There are many issues involved in wind farm development. These issues can be grouped into two categories - economic issues and environmental issues. Wind farm developers often use a site selection process to minimise the impact of these issues. This thesis aims to develop GIS-based models that provide an effective decision support tool for evaluating, at a regional scale, potential wind farm locations. The thesis first identifies common issues involved in wind farm development. Then, by reviewing previous research on wind farm site selection, it lists the methods and models used by the academic and corporate sectors to address these issues. Criteria for an effective decision support tool are also discussed. In this case, an effective decision support tool needs to be flexible, easy to implement and easy to use. More specifically, it needs to give users the ability to identify areas that are suitable for wind farm development based on different criteria. Having established the structure and criteria for a wind farm analysis model, a GIS-based tool was implemented in AML code using a Boolean logic model approach. This method uses binary maps for the final analysis. A total of 3645 output maps were produced, based on different combinations of criteria. These maps can be used to conduct sensitivity analysis. 
This research concludes that an effective GIS analysis tool can be developed to provide effective decision support for evaluating wind farm sites.
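The Boolean logic model at the heart of the tool is easy to sketch with binary rasters. The layers below are toy 4x4 maps with invented values; the thesis's tool was implemented in AML, and Python/NumPy is used here purely for illustration.

```python
import numpy as np

# Each layer is a binary map over the study region: 1 = criterion satisfied.
wind_ok   = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
slope_ok  = np.array([[1, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 0], [0, 1, 1, 1]])
not_urban = np.array([[1, 1, 1, 1], [0, 1, 1, 1], [1, 1, 0, 1], [1, 1, 1, 1]])

# Boolean-logic overlay: a cell is suitable only if every criterion holds.
suitable = wind_ok & slope_ok & not_urban
print(suitable)
print("suitable cells:", int(suitable.sum()))  # 6
```

Varying which layers (or which threshold versions of a layer) enter the AND is exactly how the different criteria combinations yield the many output maps used for sensitivity analysis.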
33

Martinelli, Andrés. "Advances in Functional Decomposition: Theory and Applications." Doctoral thesis, KTH, Mikroelektronik och Informationsteknik, IMIT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4135.

Full text
Abstract:
Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique which integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm which is more robust than a purely BDD-based one, and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, by using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally we present two publications that resulted from the many detours we have taken along the winding path of our research.
QC 20100909
APA, Harvard, Vancouver, ISO, and other styles
34

Delfino, Jair. "Ifá e Odús: interdisciplinaridade, lógica binária, cultura e filosofia africana." Universidade Federal do Ceará, 2016. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16737.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado do Ceará
Dentro dos estudos de história e cultura africana e afrodescendente a presente dissertação faz uma inserção inovadora dentro da tradição do Ifá. O Ifá é um corpo literário e filosófico, que descende de uma divindade entre dois mundos entendidos como o físico e o espiritual. As sistematizações das representações do Ifá trabalham as questões sobre álgebra binária e a organização dos significados do Ifá foi o objetivo da pesquisa idealizada. A importância deste tema está em examinar conhecimentos específicos de uma cultura e tradição que tem como processo educativo a oralidade e a preservação da cultura interdisciplinar. Adicionamos à nossa proposta o exame sistemático dos conceitos e proposições de vida que abrange desenvolver a coletividade e individualidade, no aprender e entender através do exercício das virtudes filosóficas, específicas do pensar yorubá, partindo da concepção da natureza e da genealogia divina da criação dentro da tradição africana herdada do Antigo Egito. Ifá é pluralidade cultural que pode ser ciência, religião e sociabilidade. Além do que já foi explanado, trazemos para este corpo de trabalho a geometria, estética e medicina dentro do aspecto inter-relacional, a fim de mostrar como acontece a absorção de conhecimentos. Pretendemos ficar distantes da base teórica universalista e eurocentrista buscando o aprofundamento da filosofia africana e a interdisciplinaridade para contemplar os aspectos culturais e étnico-brasileiros bem como estar de acordo com a Lei nº 10.639/03. Assim, com base na oralidade presente nas religiões de matriz africana e, através do corpo literário do Ifá, embarcaremos na complexidade da razão e da lógica metafísica e atemporal para entendermos a cognição no aspecto da cosmovisão institutiva de valores e princípios.
Within the study of African and Afro-descendant history and culture, this dissertation makes an innovative contribution within the Ifá tradition. Ifá is a literary and philosophical body that descends from a deity between two worlds, understood as the physical and the spiritual. The systematizations of Ifá representations address questions of binary algebra, and the organization of Ifá meanings was the purpose of the research. The importance of this topic lies in examining the specific knowledge of a culture and tradition whose educational process is orality and the preservation of an interdisciplinary culture. We add to our proposal the systematic examination of the concepts and propositions of life that encompass developing collectivity and individuality, in learning and understanding through the exercise of philosophical virtues specific to Yoruba thought, starting from the conception of nature and the divine genealogy of creation within the African tradition inherited from Ancient Egypt. Ifá is a cultural plurality that can be science, religion and sociability. Beyond what has already been explained, we bring to this body of work geometry, aesthetics and medicine within their inter-relational aspect, in order to show how the absorption of knowledge takes place. We intend to stay away from the universalist, Eurocentric theoretical basis, seeking to deepen African philosophy and interdisciplinarity in order to contemplate cultural and ethnic-Brazilian aspects, as well as to comply with Law No. 10.639/03. Thus, based on the orality present in religions of African origin, and through the literary body of Ifá, we embark on the complexity of reason and of metaphysical, timeless logic in order to understand cognition in the aspect of a worldview that institutes values and principles.
APA, Harvard, Vancouver, ISO, and other styles
35

Hojný, Ondřej. "Evoluční návrh kombinačních obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442801.

Full text
Abstract:
This diploma thesis deals with the use of Cartesian Genetic Programming (CGP) for combinational circuit design. The work addresses the optimization of selected logic circuits, arithmetic adders and multipliers, using Cartesian Genetic Programming. The implementation of CGP is performed in the Python programming language with the aid of the NumPy, Numba and Pandas libraries. The method was tested on selected examples and the results are discussed.
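As a rough illustration of how a Cartesian genotype encodes a combinational circuit, the sketch below evaluates a hand-written genome against a truth table. The gate set, genome layout and fitness measure are generic assumptions, not the thesis's actual implementation:

```python
# Gate set: the gate index in a node gene selects a two-input Boolean function.
GATES = [
    lambda a, b: a & b,        # AND
    lambda a, b: a | b,        # OR
    lambda a, b: a ^ b,        # XOR
    lambda a, b: 1 - (a & b),  # NAND
]

def evaluate(genome, inputs):
    """Evaluate a CGP genome: a list of (gate, src1, src2) node genes plus
    a list of output indices, over the combined input+node value array."""
    nodes, outputs = genome
    values = list(inputs)
    for gate, i, j in nodes:
        values.append(GATES[gate](values[i], values[j]))
    return [values[o] for o in outputs]

def fitness(genome, truth_table):
    """Number of output bits matching the target truth table."""
    return sum(
        out == expected
        for ins, target in truth_table
        for out, expected in zip(evaluate(genome, ins), target)
    )

# A hand-written genome computing XOR from OR, NAND and AND:
# node 2 = in0 OR in1, node 3 = in0 NAND in1, node 4 = node2 AND node3.
xor_genome = ([(1, 0, 1), (3, 0, 1), (0, 2, 3)], [4])
table = [((a, b), (a ^ b,)) for a in (0, 1) for b in (0, 1)]
print(fitness(xor_genome, table))  # → 4 (matches all four rows)
```

An evolutionary loop (e.g. a 1+λ strategy) would then repeatedly mutate the gate and connection genes and keep the fittest genome.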
APA, Harvard, Vancouver, ISO, and other styles
36

Ye, Xin. "Model checking self modifying code." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7010.

Full text
Abstract:
Le code auto-modifiant est un code qui modifie ses propres instructions pendant le temps d'exécution. Il est aujourd'hui largement utilisé, notamment dans les logiciels malveillants, pour rendre le code difficile à analyser et à détecter par les anti-virus. Ainsi, l'analyse de tels programmes auto-modifiants est un grand défi. Le Pushdown System (PDS) est un modèle naturel qui est largement utilisé pour l'analyse des programmes séquentiels car il permet de modéliser précisément les appels de procédures et de simuler la pile du programme. Dans cette thèse, nous proposons d'étendre le modèle du PDS avec des règles auto-modifiantes. Nous appelons le nouveau modèle Self-Modifying PushDown System (SM-PDS). Un SM-PDS est un PDS qui peut modifier l'ensemble de ses règles de transition pendant l'exécution. Tout d'abord, nous montrons comment les SM-PDS peuvent être utilisés pour représenter des programmes auto-modifiants et nous fournissons des algorithmes efficaces pour calculer les configurations accessibles précédentes et suivantes des SM-PDS. Ensuite, nous résolvons les problèmes de vérification de propriétés LTL et CTL pour le code auto-modifiant. Nous implémentons nos techniques dans un outil appelé SMODIC. Nous avons obtenu des résultats encourageants. En particulier, notre outil est capable de détecter plusieurs logiciels malveillants auto-modifiants ; il peut même détecter plusieurs logiciels malveillants que d'autres anti-virus bien connus comme McAfee, Norman, BitDefender, Kinsoft, Avira, eScan, Kaspersky, Qihoo-360, Avast et Symantec n'ont pas pu détecter.
A Self modifying code is code that modifies its own instructions during execution time. It is nowadays widely used, especially in malware to make the code hard to analyse and to detect by anti-viruses. Thus, the analysis of such self modifying programs is a big challenge. Pushdown Systems (PDSs) is a natural model that is extensively used for the analysis of sequential programs because it allows to accurately model procedure calls and mimic the program’s stack. In this thesis, we propose to extend the PushDown System model with self-modifying rules. We call the new model Self-Modifying PushDown System (SM-PDS). A SM-PDS is a PDS that can modify its own set of transitions during execution. First, we show how SM-PDSs can be used to naturally represent self-modifying programs and provide efficient algorithms to compute the backward and forward reachable configurations of SM-PDSs. Then, we consider the LTL model-checking problem of self-modifying code. We reduce this problem to the emptiness problem of Self-modifying Büchi Pushdown Systems (SM-BPDSs). We also consider the CTL model-checking problem of self-modifying code. We reduce this problem to the emptiness problem of Self-modifying Alternating Büchi Pushdown Systems (SM-ABPDSs). We implement our techniques in a tool called SMODIC. We obtained encouraging results. In particular, our tool was able to detect several self-modifying malwares; it could even detect several malwares that well-known anti-viruses such as McAfee, Norman, BitDefender, Kinsoft, Avira, eScan, Kaspersky, Qihoo-360, Avast and Symantec failed to detect
APA, Harvard, Vancouver, ISO, and other styles
37

Tufekci, Nesrin. "Gis Based Geothermal Potential Assessment For Western Anatolia." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607651/index.pdf.

Full text
Abstract:
This thesis aims to predict probable undiscovered geothermal systems by investigating the spatial relation between geothermal occurrences and the surrounding geological phenomena in Western Anatolia. In this context, four different public data sets are utilized: an epicenter map, a lineament map, a Bouguer gravity anomaly map, and a magnetic anomaly map. In order to extract the necessary information for each map layer, the raw public data are converted to synthetic data which are directly used in the analysis. The synthetic data employed during the investigation include a Gutenberg-Richter b-value map, a distance-to-lineaments map, and a map of distance to the major grabens present in the area. These three layers, together with the directly used magnetic anomaly map, are combined by means of the Boolean logic model and the Weights of Evidence (WofE) method, which are multicriteria decision methods, in a Geographical Information System (GIS) environment. The Boolean logic model is based on the simple logic of Boolean operators, while the WofE model depends on Bayesian probability. Both methods use binary maps for their analysis; thus, binary map classification is the key point of the analysis. In this study, three different binary map classification techniques are applied, and three output maps are obtained for each method. All resultant maps are evaluated within and among the methods by means of success indices. The findings reveal that the WofE method is a better predictor than the Boolean logic model, and that the third binarization approach, named the optimization procedure in this study, is the best estimator of binary classes according to the obtained success indices. Finally, the three output maps of each method are combined, and favorable areas in terms of geothermal potential are produced.
According to the final maps, the potential sites appear to be Aydin, Denizli and Manisa, the first two of which have already been extensively explored and exploited and are thus unsurprisingly identified as potential sites in the output maps, while Manisa, compared to the first two, is nearly unexplored.
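The Weights of Evidence calculation for a single binary evidence layer can be sketched as follows. The formulas are the standard WofE weights; the cell counts below are invented for illustration and are not from the thesis:

```python
import math

def wofe_weights(n_cells, n_deposits, n_evidence, n_dep_on_evidence):
    """Standard Weights of Evidence for one binary evidence layer B
    and occurrences D:
    W+ = ln[P(B|D)/P(B|~D)], W- = ln[P(~B|D)/P(~B|~D)], contrast C = W+ - W-."""
    p_b_d = n_dep_on_evidence / n_deposits
    p_b_nd = (n_evidence - n_dep_on_evidence) / (n_cells - n_deposits)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus

# Invented example: 100 unit cells, 10 geothermal occurrences,
# 20 cells flagged by the evidence layer, 8 occurrences on those cells.
w_plus, w_minus, contrast = wofe_weights(100, 10, 20, 8)
print(round(w_plus, 3), round(w_minus, 3), round(contrast, 3))
```

A large positive contrast indicates that the binary layer is a strong spatial predictor of the occurrences, which is how the success of each binarization can be compared.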
APA, Harvard, Vancouver, ISO, and other styles
38

Nguyên, Duy-Tùng. "Vérification symbolique de modèles à l'aide de systèmes de ré-écriture dédiés." Phd thesis, Université d'Orléans, 2010. http://tel.archives-ouvertes.fr/tel-00579490.

Full text
Abstract:
This thesis proposes a new type of rewriting system, called functional rewriting systems. We show that our model has the expressive power of rewriting systems and that it is well suited to the study of safety properties and temporal-logic properties of models. We identify a subclass of functional systems, the elementary and right-elementary systems, which preserve the expressive power of functional systems, together with computation acceleration techniques, resulting in an efficient symbolic verification tool. In the experimental part, we compared our tool, on the one hand, with rewriting tools such as Timbuk, Maude and TOM, and, on the other hand, with verification tools such as SPIN, NuSMV, SMART and HSDD. Our benchmarks demonstrate the efficiency of elementary functional systems for model checking.
APA, Harvard, Vancouver, ISO, and other styles
39

Jebelli, Ali. "Design of an Autonomous Underwater Vehicle with Vision Capabilities." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35358.

Full text
Abstract:
In the past decade, the design and manufacturing of intelligent multipurpose underwater vehicles has increased significantly. Among the wide range of studies conducted in this field, the flexibility and autonomy of these devices with respect to their intended performance have been widely investigated. This work concerns the design and manufacturing of a small and lightweight autonomous underwater vehicle (AUV) with vision capabilities allowing it to detect and bypass obstacles. It is indeed an exciting challenge to build a small and light submarine AUV while making tradeoffs between performance, minimum available space, and energy consumption. Due to ever-increasing equipment complexity and performance, designers of AUVs face the issues of limited size and energy consumption. By using a pair of thrusters capable of rotating 360° on their axis and implementing a mass shifter with a control loop inside the vehicle, the latter can efficiently adapt its depth and direction with minimal energy consumption. A prototype was fabricated and successfully tested in real operating conditions (in both pool and ocean). It includes the design and embedding of accurate custom multi-purpose sensors for multi-task operation, as well as an enhanced coordination system between a high-speed processor and customized electrical/mechanical parts of the vehicle, allowing automatic control of its movements. Furthermore, an efficient tracking system was implemented to automatically detect and bypass obstacles. Fuzzy-based controllers were then coupled to the main AUV processor system to provide the best commands to safely get around obstacles with minimum energy consumption. The fabricated prototype was able to work for a period of three hours with object tracking enabled and five hours in a safe environment, at a speed of 0.6 m/s at a depth of 8 m.
APA, Harvard, Vancouver, ISO, and other styles
40

Boudou, Joseph. "Procédures de décision pour des logiques modales d'actions, de ressources et de concurrence." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30145/document.

Full text
Abstract:
Les concepts d'action et de ressource sont omniprésents en informatique. La caractéristique principale d'une action est de changer l'état actuel du système modélisé. Une action peut ainsi être l'exécution d'une instruction dans un programme, l'apprentissage d'un fait nouveau, l'acte concret d'un agent autonome, l'énoncé d'un mot ou encore une tâche planifiée. La caractéristique principale d'une ressource est de pouvoir être divisée, par exemple pour être partagée. Il peut s'agir des cases de la mémoire d'un ordinateur, d'un ensemble d'agents, des différent sens d'une expression, d'intervalles de temps ou de droits d'accès. Actions et ressources correspondent souvent aux dimensions temporelles et spatiales du système modélisé. C'est le cas par exemple de l'exécution d'une instruction sur une case de la mémoire ou d'un groupe d'agents qui coopèrent. Dans ces cas, il est possible de modéliser les actions parallèles comme étant des actions opérant sur des parties disjointes des ressources disponibles. Les logiques modales permettent de modéliser les concepts d'action et de ressource. La sémantique relationnelle d'une modalité unaire est une relation binaire permettant d'accéder à un nouvel état depuis l'état courant. Ainsi une modalité unaire correspond à une action. De même, la sémantique d'une modalité binaire est une relation ternaire permettant d'accéder à deux états. En considérant ces deux états comme des sous-états de l'état courant, une modalité binaire modélise la séparation de ressources. Dans cette thèse, nous étudions des logiques modales utilisées pour raisonner sur les actions, les ressources et la concurrence. Précisément, nous analysons la décidabilité et la complexité du problème de satisfaisabilité de ces logiques. Ces problèmes consistent à savoir si une formule donnée peut être vraie. Pour obtenir ces résultats de décidabilité et de complexité, nous proposons des procédures de décision. 
Ainsi, nous étudions les logiques modales avec des modalités binaires, utilisées notamment pour raisonner sur les ressources. Nous nous intéressons particulièrement à l'associativité. Alors qu'il est généralement souhaitable que la modalité binaire soit associative, puisque la séparation de ressources l'est, cette propriété rend la plupart des logiques indécidables. Nous proposons de contraindre la valuation des variables propositionnelles afin d'obtenir des logiques décidables ayant une modalité binaire associative. Mais la majeure partie de cette thèse est consacrée à des variantes de la logique dynamique propositionnelle (PDL). Cette logiques possède une infinité de modalités unaires structurée par des opérateurs comme la composition séquentielle, l'itération et le choix non déterministe. Nous étudions tout d'abord des variantes de PDL comparables aux logiques temporelle avec branchement. Nous montrons que les problèmes de satisfaisabilité de ces variantes ont la même complexité que ceux des logiques temporelles correspondantes. Nous étudions ensuite en détails des variantes de PDL ayant un opérateur de composition parallèle de programmes inspiré des logiques de ressources. Cet opérateur permet d'exprimer la séparation de ressources et une notion intéressante d'actions parallèle est obtenue par la combinaison des notions d'actions et de séparation. En particulier, il est possible de décrire dans ces logiques des situations de coopération dans lesquelles une action ne peut être exécutée que simultanément avec une autre. Enfin, la contribution principale de cette thèse est de montrer que, dans certains cas intéressants en pratique, le problème de satisfaisabilité de ces logiques a la même complexité que PDL
The concepts of action and resource are ubiquitous in computer science. The main characteristic of an action is to change the current state of the modeled system. An action may be the execution of an instruction in a program, the learning of a new fact, a concrete act of an autonomous agent, a spoken word or a planned task. The main characteristic of resources is to be divisible, for instance in order to be shared. Resources may be memory cells in a computer, performing agents, different meanings of a phrase, time intervals or access rights. Together, actions and resources often constitute the temporal and spatial dimensions of a modeled system. Consider for instance the instructions of a computer executed at memory cells or a set of cooperating agents. We observe that in these cases, an interesting modeling of concurrency arises from the combination of actions and resources: concurrent actions are actions performed simultaneously on disjoint parts of the available resources. Modal logics have been successful in modeling both concepts of actions and resources. The relational semantics of a unary modality is a binary relation which allows to access another state from the current state. Hence, unary modalities are convenient to model actions. Similarly, the relational semantics of a binary modality is a ternary relation which allows to access two states from the current state. By interpreting these two states as substates of the current state, binary modalities allow to divide states. Hence, binary modalities are convenient to model resources. In this thesis, we study modal logics used to reason about actions, resources and concurrency. Specifically, we analyze the decidability and complexity of the satisfiability problem of these logics. These problems consist in deciding whether a given formula can be true in any model. We provide decision procedures to prove the decidability and state the complexity of these problems. 
Namely, we study modal logics with a binary modality used to reason about resources. We are particularly interested in the associativity property of the binary modality. This property is desirable since the separation of resources is usually associative too. But the associativity of a binary modality generally makes the logic undecidable. We propose in this thesis to constrain the valuation of propositional variables in order to make modal logics with an associative binary modality decidable. The main part of the thesis is devoted to the study of variants of Propositional Dynamic Logic (PDL). These logics feature an infinite set of unary modalities representing actions, structured by operators like sequential composition, iteration and non-deterministic choice. We first study branching-time variants of PDL and prove that the satisfiability problems of these logics have the same complexity as the corresponding branching-time temporal logics. Then we thoroughly study extensions of PDL with an operator for parallel composition of actions, called separating parallel composition, based on the semantics of binary modalities. This operator allows reasoning about resources, in addition to actions. Moreover, the combination of actions and resources provides a convenient expression of concurrency. In particular, these logics can express situations of cooperation where some actions can be executed only in parallel with other actions. Finally, our main contribution is to prove that the complexity of the satisfiability problem of a practically useful variant of PDL with separating parallel composition is the same as that of plain PDL.
APA, Harvard, Vancouver, ISO, and other styles
41

Lengál, Ondřej. "Automaty v nekonečně stavové formální verifikaci." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-261279.

Full text
Abstract:
This thesis focuses on finite automata over finite words and finite trees, and on the use of such automata in the formal verification of infinite-state systems. The thesis first extends an existing tree-automata-based approach to the verification of heap-manipulating programs (specifically, programs with dynamic data structures). Several extensions of this approach are proposed, such as its full automation and support for ordered data. The thesis then describes new decision procedures for two logics often used in formal verification: separation logic and weak monadic second-order logic of one successor. Both decision procedures are based on translating the respective problem into the automata domain and on subsequent manipulation in that target domain. The final contribution of this thesis is the development of new algorithms for the efficient manipulation of tree automata, with emphasis on testing language inclusion and on handling automata with large alphabets, together with the implementation of these algorithms in a general-purpose library. The developed algorithms serve as a key technology enabling the practical use of the techniques above.
APA, Harvard, Vancouver, ISO, and other styles
42

Miozzi, Ferrini Francesca. "Experimental study of the Fe-Si-C system and application to carbon rich exoplanet." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS241.pdf.

Full text
Abstract:
Plus de 4000 exoplanètes ont été découvertes, orbitant autour d’étoiles ayant différentes compositions. Ces exoplanètes sont détectées et étudiées par observations indirectes qui, dans de nombreux cas, donnent accès aux propriétés principales des planètes: leurs masses et leurs rayons. Ces paramètres peuvent être calculés à partir d’un modèle et comparés à ceux observés. Toutefois, cela est plus difficile pour des planètes qui orbitent autour d’étoiles ayant une composition chimique différente du Soleil, par exemple enrichie en carbone, car les propriétés physiques des carbures (i.e. carbures de silicium ou de fer) sont inconnues. Dans cette étude les systèmes Si-C et Fe-Si-C ont été étudiés entre 20 et 200 GPa et 300-3000 K, en utilisant la diffraction de rayons x et l’analyse chimique des échantillons récupérés pour déterminer les propriétés physiques dans des conditions extrêmes. Dans le système Si-C les équations d’états et les modèles thermiques pour les deux phases de basse et haute pression ont été déterminés. Les résultats ont ensuite été utilisé pour calculer la relation masse-rayon de planètes synthétiques ayant un noyau de fer et un manteau de SiC. Concernant le système Fe-Si-C le diagramme de phase ternaire a été reconstruit. En faisant l’hypothèse d’une composition Fe-Si-C pour un noyau planétaire, quatre différentes séquences de cristallisation ont été démontrées, déterminant des comportements dynamiques très diffèrent. En conclusion la relation masse-rayon n’est pas suffisante pour déterminer la composition et la structure interne des exoplanètes observées mais des données relatives à la chimie du système planétaire sont requises
More than 4000 exoplanets have been discovered, orbiting stars with a wide variety of compositions. Such planets are detected and studied through indirect methods that in many cases give access to the main properties of the planets: their masses and radii. The same parameters can be calculated from a chosen model and compared to the observed ones. However, this is difficult for planets orbiting stars with compositions very different from our Sun's, for example carbon-enriched ones, as the physical properties of carbides (i.e. silicon carbides and iron carbides) at extreme pressure are unknown. In this work the Si-C and Fe-Si-C systems were studied between 20 and 200 GPa and 300-3000 K employing X-ray diffraction, and chemical analyses of the recovered samples were used to determine the physical properties at extreme conditions. In the Si-C system, the equations of state and thermal models for both the low-pressure and high-pressure phases were determined. The results were then used to model a mass-radius plot for different archetypal planets with an Fe core and a SiC mantle. Regarding the Fe-Si-C system, a ternary phase diagram was reconstructed up to 200 GPa and 3000 K. Assuming Fe-Si-C as the main component of planetary cores, four different crystallization paths are identified, giving rise to very different dynamical behaviours. We conclude that using only mass-radius relations is not sufficient to determine the interior composition and structure of an observed exoplanet, and that further data relative to the chemistry are needed, for example the composition of the host star.
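The abstract does not name the equation of state used, but a form commonly fitted to high-pressure diffraction data of this kind is the third-order Birch-Murnaghan EOS; the sketch below evaluates it with invented parameter values, not the thesis's fitted ones:

```python
def birch_murnaghan_3(v, v0, k0, k0p):
    """Third-order Birch-Murnaghan equation of state: pressure (in the units
    of k0) at volume v, given ambient volume v0, bulk modulus k0 and its
    pressure derivative k0p."""
    x = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * k0 * (x**7 - x**5) * (1.0 + 0.75 * (k0p - 4.0) * (x**2 - 1.0))

# Invented parameters, loosely in the range of stiff carbides:
# v0 normalized to 1, k0 = 220 GPa, k0' = 4.
p = birch_murnaghan_3(0.8, 1.0, 220.0, 4.0)
print(round(p, 1))  # compressing to 80% of ambient volume gives ~77 GPa
```

At v = v0 the pressure is zero by construction, and fitting (v0, k0, k0p) to measured P-V points is what yields the equation of state used in mass-radius modelling.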
APA, Harvard, Vancouver, ISO, and other styles
43

Kocina, Filip. "Moderní metody modelování a simulace elektronických obvodů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-412585.

Full text
Abstract:
This doctoral thesis deals with the simulation of electronic circuits. It describes the Capacitor Substitution Method (CSM) for converting electronic circuits into electric circuits, which can then be solved by numerical methods, in particular the Modern Taylor Series Method (MTSM). This method is distinguished by automatic order selection, step halving when required, and a large region of stability depending on the chosen order. As part of the thesis, the author created specialized software for solving ordinary differential equations by means of the MTSM, with many algorithmic improvements (compared to TKSL/386). These algorithms include the simplification of general expressions into polynomials, parallelization independent of the integration method, etc. The software runs on a Linux server communicating via the TCP/IP protocol. It was successfully used for the simulation of VLSI circuits, whose solution by means of CSM was considerably faster and consumed less memory than state-of-the-art SPICE.
APA, Harvard, Vancouver, ISO, and other styles
44

Chang, Wei-Ming, and 張惟明. "Accounting Ratios and the Prediction of Financial Distress: a Comparison of Binary Logit and Multinomial Logit Analysis." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/27282243671406512437.

Full text
Abstract:
Master's
Tamkang University
Department of Accounting
85
Previous studies on financial distress employed binary models to analyze and predict failing versus non-failing events without considering the degrees of severity of financial distress. This study extends the binary model to construct financial distress as a tri-stage process in terms of trading method: the normal situation, reclassification of listing, and changes in trading. As an ancillary test, the study also examines whether the inclusion of Industry-Relative Ratios (IRRs) can improve the performance of the prediction models with respect to financial distress. The financial data of 62 companies running from 1985 to 1994 are used to estimate the model. The financial data of the holdout sample, including 66 companies for the years 1995 and 1996, are used to test these models' predictive ability. The prediction models include a binary logit to incorporate the conventional failing/non-failing dichotomy and a multinomial logit to incorporate the tri-stage process of financial distress. The prediction models are built one by one for the three-year period prior to the financial distress. The major findings include:
1. The binary analysis suggests that return on assets has the highest predictive ability regarding financial distress, followed by total asset turnover and profit margin. The percentages of correct prediction for year one to year three prior to financial distress are 91.94%, 85.48% and 77.42% for the in-sample test; the corresponding prediction rates for the holdout sample are 90.91%, 84.85% and 62.12%.
2. The percentages of correct prediction for the multinomial analysis are 85%, 75.81% and 70.97% for the in-sample tests and 80.3%, 68.18% and 59.09% for the holdout sample. The predictive ability of the MLA model is not superior to the binary logit model. Given the differential impacts upon stock prices of reclassification of listing and changes in trading, it is not appropriate to compare the relative predictive ability purely based on forecasting accuracy.
3. The non-failing firms have a lower rate of misclassification. Those firms are more likely to be misclassified into the reclassification category than into the changes-in-trading category. The change-in-trading firms have a higher rate of being misclassified as reclassification firms than as non-failing firms.
4. The inclusion of IRRs in the model does not improve the predictive ability, suggesting that the listed companies may diversify their management and business so that the industry effects are diluted.
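A minimal sketch of the binary logit component is given below: a gradient-ascent fit of P(distress | x) on an invented one-ratio toy sample. This is a generic illustration, not the study's data or estimation code:

```python
import numpy as np

def fit_logit(X, y, lr=0.5, iters=5000):
    """Fit a binary logit P(y=1|x) = 1/(1+exp(-b0 - b·x)) by gradient
    ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)  # gradient of the log-likelihood
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)

# Invented sample: distress flag = 1 when return on assets is negative.
roa = np.array([[-0.30], [-0.20], [-0.10], [0.10], [0.20], [0.30]])
distress = np.array([1, 1, 1, 0, 0, 0])
w = fit_logit(roa, distress)
print(predict(w, roa).tolist())  # → [1, 1, 1, 0, 0, 0]
```

The multinomial logit used for the tri-stage process generalizes the same likelihood to three outcome categories via softmax probabilities.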
APA, Harvard, Vancouver, ISO, and other styles
45

Kang, Ya-Hsin, and 康雅欣. "Predicting Airlines Bankruptcies – A Comparison of Binary Logit and LDA Analysis." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/66180951141508012898.

Full text
Abstract:
Master's
Tamkang University
Master's Program, Department of Economics
100
Much has been learned over the past thirty years about predicting corporate bankruptcy using financial ratios. The early work of Altman (1968) set the stage for many subsequent studies of the topic using data from various industries. This study considers a modification of the original Altman model made by Pilarski and Dinh (1999), involving a P-score, to predict airline bankruptcies. The paper uses the most recent financial data on US airlines and includes other factors such as SARS and terrorism. It applies a version of binary logit regression to estimate the probability that a particular airline will go bankrupt. We also compare Binary Logit Analysis and Linear Discriminant Analysis (LDA) to determine which predicts bankruptcies better.
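The logit-versus-LDA comparison the abstract describes can be illustrated on synthetic data. The two financial ratios, group means, and sample sizes below are assumptions for the sketch, not the airlines' actual financials; the point is only that both methods produce a linear classification rule whose accuracy can be compared.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two financial ratios for solvent (0) vs bankrupt (1) airlines.
n = 150
X0 = rng.multivariate_normal([1.0, 1.0], np.eye(2), n)    # solvent group
X1 = rng.multivariate_normal([-1.0, -1.0], np.eye(2), n)  # bankrupt group
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# --- Binary logit via gradient ascent (no intercept, for brevity) ---
beta = np.zeros(2)
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.1 * X.T @ (y - p) / len(y)
logit_acc = np.mean(((X @ beta) > 0) == y)

# --- Fisher LDA: pooled covariance, equal priors ---
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S = (np.cov(X0.T) + np.cov(X1.T)) / 2       # pooled within-group covariance
w = np.linalg.solve(S, mu1 - mu0)           # discriminant direction
c = w @ (mu0 + mu1) / 2                     # midpoint threshold
lda_acc = np.mean(((X @ w) > c) == y)

print(round(logit_acc, 3), round(lda_acc, 3))
```

On clean Gaussian data like this the two rules perform almost identically; logit's advantage, and a common reason it is preferred for bankruptcy work, is that it does not assume normally distributed ratios and yields a probability directly.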
46

Wu, Sih-Huan, and 吳思緩. "Using Leading Indicators to Forecast Recession in the US Business Cycle—A Binary Logit Analysis." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/t2xqe2.

Full text
Abstract:
Master's
Tamkang University
Master's Program, Graduate Institute of the Americas
102
In this paper, we study the relation between leading-indicator data and the U.S. business cycle, using a binary logit model to examine the business cycle index of the National Bureau of Economic Research (NBER) and compare it against the leading indicators of the 50 states of the United States from 1982 to 2013. The variable combinations are determined following the theory of Stock and Watson. Their use of dynamic factor models and principal components to model the current state of the economy has led to a series of coincident and leading indicators, specifically designed to help predict future movements in the state of the economy. The empirical results show that Kentucky's leading index, at lags of either two or three months, has significant predictive ability. The conclusion that can be drawn is that, two and three months ahead, predictions made using the Kentucky leading index are generally more reliable predictors of NBER recessions than those using the national leading indicator. One-month-ahead forecasts are not likely to be very reliable, since the NBER number is not available within one month and the state and national leading indicators will generally not be available that early either. Finally, this paper also discusses the difference between using quarterly and monthly frequencies. When a state's economic activity is similar to the country's, it can forecast the future direction of the business cycle more accurately.
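The lagged-indicator logit described above can be sketched end to end on simulated data. The recession process, the leading index, and the two-month lead built into it are all synthetic assumptions made so the example is self-contained; the study's actual NBER and state-index data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly recession flag (1 = recession), a crude NBER-style chain.
T = 240
state = np.zeros(T, dtype=int)
for t in range(1, T):
    p_stay = 0.95 if state[t - 1] == 0 else 0.85   # persistence of each regime
    state[t] = state[t - 1] if rng.random() < p_stay else 1 - state[t - 1]

# Synthetic leading index: low two months BEFORE a recession month.
index = 1.0 - 2.0 * np.roll(state, -2) + rng.normal(0, 0.5, T)

# Binary logit of the recession flag at t on the index lagged two months.
lag = 2
Xlag = index[:-lag]                  # excludes the wrapped-around tail values
yrec = state[lag:].astype(float)
b0, b1 = 0.0, 0.0
for _ in range(500):                 # gradient ascent on the logit likelihood
    p = 1 / (1 + np.exp(-(b0 + b1 * Xlag)))
    b0 += 0.1 * np.mean(yrec - p)
    b1 += 0.1 * np.mean((yrec - p) * Xlag)

acc = np.mean(((b0 + b1 * Xlag) > 0) == yrec)
print(round(acc, 3))
```

Because the simulated index genuinely leads the cycle by two months, the two-month-lag regression classifies recession months well; shortening the lag below the true lead, as the abstract notes for one-month-ahead forecasts, would degrade this.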
47

Hsie, Meng-Hua, and 謝孟樺. "To Apply Binary Logit Model and GRA to Give Advice for Female Consumers of Used Car Website Content." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/03369698173880975498.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Marketing
101
This research identified, through a literature review, the factors that female consumers weigh when purchasing cars and used cars. We collected 361 valid questionnaires completed by women who had previously bought used cars. The aim of this research is to identify the characteristics of female customers who buy used cars through the Internet and to determine the key factors they consider when purchasing those cars; the outcome can serve as a reference for the used-car industry in selling used cars online. Two methods, the Binary Logit Model and Grey Relational Analysis, are used to find female consumers' characteristics and the critical factors in purchasing used cars online. First, we used seven independent variables (marital status, whether the respondent has children, age, education, occupation, job position, and average monthly income) in the Binary Logit Model across different permutations; a full analysis would have required about 1.1 trillion runs, and because of time constraints we ultimately ran about 790 thousand of them. We classified female customers buying used cars online into nine groups. By consulting the used-car industry, we then determined the characteristics of these female consumers and made the different types of online used-car buyers concrete. Second, we applied Grey Relational Analysis (GRA) to determine the critical factors for each of the nine groups and to identify the potential needs of female consumers. Finally, we summarized the outcomes for the nine groups and matched them with the used-car market's potential demand. Drawing on the concept of adaptive selling behavior, we offer this research as a reference for the used-car industry in selling its products online.
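Grey Relational Analysis, the second method named above, ranks factors by how closely each factor's rating sequence tracks an ideal reference sequence. The tiny rating matrix below is invented for illustration (four respondents, four hypothetical purchase factors); the distinguishing coefficient of 0.5 is the conventional choice in the GRA literature.

```python
import numpy as np

# Hypothetical ratings (rows = respondents, columns = purchase factors,
# e.g. price, warranty, mileage, seller reputation), on a 1-5 scale.
data = np.array([
    [5, 3, 4, 2],
    [4, 4, 5, 3],
    [5, 2, 4, 2],
    [4, 3, 5, 1],
], dtype=float)

ref = data.max(axis=1)                      # ideal (reference) rating per respondent
delta = np.abs(data - ref[:, None])         # deviation of each factor from the ideal
dmin, dmax = delta.min(), delta.max()
zeta = 0.5                                  # distinguishing coefficient (standard value)
gamma = (dmin + zeta * dmax) / (delta + zeta * dmax)   # grey relational coefficients
grade = gamma.mean(axis=0)                  # grey relational grade per factor
print(grade.round(3))
```

Factors with a higher grey relational grade sit closer to the ideal sequence and are read as the more critical purchase factors for that consumer group, which is how the nine groups' key factors were distinguished.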
48

Kang, Chung-yi, and 康崇儀. "Research on Binary Logit Model Analysis of the Impact to Children Magazine Selection Factor Adopted by Elementary School Teacher." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/38221217624737462845.

Full text
Abstract:
Master's
Nanhua University
Graduate Institute of Publishing and Cultural Enterprise Management
98
The low birth rate increases parents' willingness to allocate more educational spending to their children, and children's magazines have become one of the learning tools for children, providing vital extracurricular learning assistance. Since the opinions of elementary school teachers affect parents acting in their children's best interest, teacher recommendations serve as an important reference when parents select children's magazines. Hence, the purpose of this research is to explore teachers' views on the important attributes of children's magazines, along with the various factors that influence the decision to subscribe to them. This research not only analyzes the level of impact of each identified factor on consumers' selection behavior, but also uses binary logit modeling to construct a theoretical framework for elementary school teachers' selection of children's magazines. Furthermore, the research conducts calibrated parameter and elasticity analyses and proposes recommendations based on the findings. The research findings are as follows:
1. Among the attributes of children's magazines, elementary school teachers regarded "cultivating children's reading proclivity", "increasing children's extracurricular knowledge" and "assisting children in knowledge learning and broadening their horizons" as the most important.
2. In the analysis of cognitive variance across demographic groups, ratings of how much children's magazines help children differed significantly by gender, marital status, age and seniority: male respondents rated the help significantly higher than female respondents, married higher than single, those above 40 higher than those under 40, and those with over 10 years of seniority higher than those with less.
3. Consumers were also influenced by variables such as pricing, content and giveaways when selecting children's magazines. In other words, the lower the average price per issue, the closer the content to expectations, and the better the giveaways meet subscribers' requirements, the more likely consumers are to subscribe to that magazine.
4. Teaching seniority and the teacher's current post are also main factors influencing subscription to children's magazines: teachers with over 10 years of seniority and teachers serving part-time in administrative roles are more inclined to subscribe.
5. In the elasticity analysis, the direct elasticity for Top945 was -0.127 and its cross elasticity was 0.171; the direct elasticity for Global Citizen 365 was -0.174 and its cross elasticity was 0.13.
6. Marketing recommendations: it would be appropriate to target elementary school teachers who are female, married, over 40, and with more than 10 years of seniority. To meet customer expectations, publishers could cooperate with advertisers to lower the sales price of children's magazines, focus on the richness of magazine content, and target giveaway promotions to specific groups. In the future, the subscriber base could be expanded through sharing, electronic editions could be developed to reach new market segments, communication, service and add-on values could be enhanced, and past subscribers could be won back and retained over the long term.
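The direct and cross elasticities reported in the elasticity analysis have closed forms in a logit choice model: for a price coefficient beta, the own-price elasticity of choosing alternative A is beta * price_A * (1 - P_A), and the cross elasticity with respect to the rival's price is -beta * price_B * P_B. The sketch below uses an assumed price coefficient and assumed per-issue prices purely to illustrate the computation; it does not reproduce the study's estimates.

```python
import math

# Hypothetical binary choice between two magazines, utility linear in price.
beta_price = -0.08            # assumed (illustrative) price coefficient
price_a, price_b = 3.0, 3.5   # assumed average per-issue prices

v_a, v_b = beta_price * price_a, beta_price * price_b
p_a = math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))  # logit choice probability
p_b = 1 - p_a

# Logit point elasticities of P(choose A):
direct = beta_price * price_a * (1 - p_a)  # own-price elasticity (negative)
cross = -beta_price * price_b * p_b        # cross-price elasticity (positive)
print(round(p_a, 3), round(direct, 4), round(cross, 4))
```

As in the abstract, the own-price elasticity comes out negative (a price rise lowers a magazine's subscription probability) and the cross elasticity positive (a rival's price rise raises it), with magnitudes in the same sub-unity range as the reported -0.127 and 0.171.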
49

Kibedi, Francisco. "Adding a binary modal operator to predicate logic /." 2005.

Find full text
Abstract:
Thesis (M.A.)--York University, 2005. Graduate Programme in Mathematics and Statistics.
Typescript. Includes bibliographical references (leaves 92-94). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR11823
50

WU, JI-GIANG, and 吳繼強. "Signed binary delta modulation-digital filter using multiple valued logic." Thesis, 1987. http://ndltd.ncl.edu.tw/handle/07234416289816591794.

Full text
