
Dissertations / Theses on the topic 'Race car data analysis'


Consult the top 50 dissertations / theses for your research on the topic 'Race car data analysis.'


1

Voung, Jan Wen. "Dataflow analysis for concurrent programs using data-race detection." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3397178.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed March 31, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 134-143).
APA, Harvard, Vancouver, ISO, and other styles
2

Štipák, Patrik. "Vývoj zavěšení kol vozu Formule Student." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231940.

Full text
Abstract:
This master's thesis is devoted to the development and analysis of the suspension of a Formula Student car. The introduction presents the international Formula Student competition, its disciplines, and the history of the TU Brno Racing and UH Racing teams. The theoretical part gives a detailed description of tyre characteristics, the kinematic design of the suspension, and aerodynamics with respect to racing cars. The part devoted to structural design describes the development of the suspension arms, rockers, and rear anti-roll bar. The design of the kinematic characteristics of the Dragon 5 car is described in detail and compared with the previous variant. Besides the kinematics development of the Dragon 5, the development of the UH 18 car is also described. The final chapter analyses in detail data recorded during testing of the Dragon 4 car.
3

Ameri, K. Al, P. Hanson, N. Newell, J. Welker, K. Yu, and A. Zain. "DESIGN OF A RACE CAR TELEMETERING SYSTEM." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607539.

Full text
Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This student paper was produced as part of the team design competition in the University of Arizona course ECE 485, Radiowaves and Telemetry. It describes the design of a telemetering system for race cars. Auto racing is an exciting sport in which the winners are the ones able to optimize the balance between the driver's skill and the racing team's technology. One of the main reasons for this excitement is that the main component, the race car, is traveling at extremely high speeds while constantly making quick maneuvers. To do this continually, the car itself must be constantly monitored and possibly adjusted to ensure proper maintenance and prevent damage. To allow better monitoring of the car's performance by the pit crew and other team members, a telemetering system has been designed that facilitates the constant monitoring and evaluation of various aspects of the car. This telemetering system provides a way for the speed, engine RPM, engine and engine-compartment temperature, oil pressure, tire pressure, fuel level, and tire wear of the car to be measured, transmitted back to the pit, and presented in a form that can be evaluated and used to increase the car's performance and better its chances of winning the race. Furthermore, the system allows the data to be stored for later reference and analysis.
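As an illustration of the channel list the abstract enumerates, the sketch below packs one telemetry sample into a fixed-size binary frame and decodes it on the pit side. The frame layout, channel names, and units are hypothetical, not taken from the paper.

```python
import struct

# Hypothetical frame layout for the channels named in the abstract:
# speed (km/h), engine RPM, engine temp and compartment temp (deg C),
# oil pressure (kPa), tire pressure (kPa), fuel level (%), tire wear (%).
FRAME_FMT = "<8f"  # little-endian, 8 single-precision floats (32 bytes)

def encode_frame(speed, rpm, engine_t, compartment_t, oil_p, tire_p, fuel, wear):
    """Pack one telemetry sample into a fixed-size binary frame."""
    return struct.pack(FRAME_FMT, speed, rpm, engine_t, compartment_t,
                       oil_p, tire_p, fuel, wear)

def decode_frame(frame):
    """Unpack a frame back into named channels on the pit side."""
    names = ("speed", "rpm", "engine_temp", "compartment_temp",
             "oil_pressure", "tire_pressure", "fuel_level", "tire_wear")
    return dict(zip(names, struct.unpack(FRAME_FMT, frame)))

frame = encode_frame(182.0, 9500.0, 94.0, 61.0, 410.0, 180.0, 42.5, 12.0)
sample = decode_frame(frame)
```

A fixed binary layout like this keeps the radio payload small and lets the pit-side logger append frames directly to a file for the later reference and analysis the paper mentions.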
4

Schultz, Aaron. "TELEMETRY AND DATA LOGGING IN A FORMULA SAE RACE CAR." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/627009.

Full text
Abstract:
The problem with designing and simulating a race car entirely through CAD and other computer simulations is that the real-world behavior of the car will differ from the results output by CFD and FEA analysis. One way to learn more about how the car actually handles is through telemetry and data logging from many different sensors on the car while it is running at racing speeds. This data can help the engineering team build new components and tune the many different systems on the car in order to achieve the fastest possible time around a track.
5

Bottiglieri, M. "Data acquisition to analyse the driver and his race car." Thesis, Cranfield University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.559466.

Full text
6

Käck, David, and Eric Lindström. "Analysis of car simulator data." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-95284.

Full text
Abstract:
Simulators are used for a wide variety of purposes, not least vehicle simulation as an aid in driver behaviour analysis. Common measures include reaction times such as brake and steer reaction time. One problem with monitored simulator studies is the limited number of test drivers: with only 50-100 test drivers, the reaction times presented in such studies are often mean values. From a safety-analytic perspective, it would be more interesting to show the entire distribution of values. The unmanned simulator at Universeum attracts a large number of test drivers (~40,000 per year), which makes it possible to investigate entire distributions. Another benefit of the unmanned study at Universeum is that drivers tend to act more naturally than in a monitored study, where they are aware of being observed. On the other hand, the fact that users are not observed leads to a lot of questionable data: drivers explore and do not behave as they would in real traffic. The main objective of this thesis project has been to find algorithms that extract trustworthy data from the simulator. It has been shown that there are large amounts of data that can be used for driver behaviour analysis. Methods to calculate common measures used in traffic-safety analysis have been developed, and an updated simulator software, better adjusted to an unmonitored study, has been developed and installed in the Universeum simulator. Summaries of the different scenarios in the simulator follow.

K2 summary: For quite a few drivers in the K2 scenario, no brake signal was registered at all. This is the main issue with the scenario, and the explanation is that the scenario is very sensitive to the speed kept by the driver. Speeding drivers never experience the situation, and since so many drivers speed, a lot of data is lost. Adapting the speed of the mover to the speed of the driver would reduce this problem. The sensitivity to speed is also noticeable in the plots, as the BRT seems to have very little influence on the results. The BRTs span from ~0.4 seconds to 2 seconds, which is in line with other studies. [P1] Some drivers steer, but they are too few to draw any valid conclusions.

K3 summary: The K3 scenario works much better than the other crossing scenarios, producing measurable data at a much higher rate. This is probably thanks to the MeetAtPoint function, which takes the driver's speed into consideration. The wider tolerance of speeds in the scenario gives better possibilities to analyse the impact of BRT and deceleration on the result. Some drivers steer, but they are too few to present any valid distributions.

K5 summary: This scenario gives the lowest percentage of valid files among the K scenarios, owing to its very high urgency. Together with the fact that the speed of the mover is not adapted to the speed of the driver, this means very few drivers actually experience the scenario at all. Adapting the speed of the mover to the speed of the driver would let drivers experience the situation at a much higher rate than before. Very few drivers steer instead of braking. Interestingly, bearing in mind that there are very few entries to investigate, the steering option has been quite successful in this scenario compared to the other K scenarios.

U1 summary: The percentage of valid files from this scenario is at the same level as the other scenarios, i.e. quite low. The reason for the low percentage, however, is not the speed kept. One big loss is the drivers who choose to overtake the braking lead car and therefore fail to enter the situation as intended. The other problem is the distance between the lead car and the driver: the distance as the lead car brakes varies a lot, and the BRTs therefore also vary. Almost no drivers try to steer away from the car instead of braking.

U4 summary: The percentage of files where a brake signal has been registered is in line with the other scenarios. One explanation for the few registered brake signals is, as in the U1 scenario, that many drivers choose to overtake the lead car and therefore never experience the scenario. Overall, the U4 data and results are questionable: even when normalising the BRT by TTC, no satisfying distributions can be found. The number of files containing the U4 scenario (1,761, or 23% of all scenario files) is also remarkably high, and apart from the files containing a full scenario, many files contain only U4 start messages without any stop messages. This indicates that the U4 trigger points fire even when they are not supposed to; this is investigated and discussed in chapter 3.1.5. A few drivers steer, an option that has proven quite successful in this scenario.

M1 summary: The urgency in the scenario is low, which makes it possible for drivers to avoid a collision without steering. The lane position does not play any significant role in the type of reaction, but many drivers have a lane position to the right. A large share of drivers have an offset that places them more than 0.5 metres out on the road verge. The road verge is 3.2 metres wide, much wider than in reality. One explanation for the high number of drivers on the verge may be that the width of the lane (3.2 metres) corresponds to the width of a road with a speed limit of 70 km/h; drivers may experience the road as narrow at around 90 km/h and therefore sometimes choose to drive on the verge, which looks perfectly fine to drive on. [D] If the verge were narrower and looked less appealing to drive on, this behaviour would probably be reduced. In reality, the road verge on a road with a speed limit of 90 km/h is 50 cm wide. [I5] It was not possible to use the SWRR measure to analyse driver behaviour: the SWRR is higher on parts of the road where it is more difficult to follow the road, as in long turns, and although it increases linearly with speed, the values are too scattered to draw any conclusions from.

M2 summary: The urgency in the scenario is high, and it is hard to avoid a collision without steering. The lane position does not play any significant role in the type of reaction, but many drivers have a lane position to the right. Driving on the road verge is not normal behaviour, and since the meeting situation occurs in a left curve, a lane position to the right is not expected: many drivers would want to cut the curve, giving a lane position to the left. Despite the initial lane position, the most common type of reaction is steering right. 3.6% of all drivers end up on the grass (off the road). This can be considered a high proportion and probably depends on the wide road verge and the flat grass area, which looks perfectly fine to drive on. As in the M1 scenario, the wide, fully driveable road verge is probably the reason why 78% of drivers choose to drive at least 50 cm out on the verge as they try to avoid the situation. Another explanation may be that the width of the lane (3.2 m) corresponds to the width of a road with a speed limit of 70 km/h; drivers may experience the road as narrow at around 90 km/h and therefore sometimes choose to drive on the verge. [D] It was not possible to use the SWRR measure to analyse driver behaviour: the SWRR is higher on parts of the road where it is more difficult to follow the road, as in long turns, and although it increases linearly with speed, the values are too scattered to draw any conclusions from.
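The brake reaction time (BRT) measure discussed in these summaries can be computed from a logged time series as the delay from the scenario trigger to the first brake signal, with files lacking any brake signal discarded as invalid. A minimal sketch follows; the log format and the 10 Hz sample rate are assumptions, not the thesis's actual data format.

```python
def brake_reaction_time(timestamps, brake, trigger_time):
    """Return the brake reaction time in seconds: the delay from the scenario
    trigger to the first brake signal, or None if the driver never braked
    (an invalid file, in the thesis's terminology)."""
    for t, pressed in zip(timestamps, brake):
        if t >= trigger_time and pressed:
            return t - trigger_time
    return None

# Hypothetical 10 Hz log: trigger at t = 3.0 s, braking begins at t = 4.2 s.
ts = [i / 10 for i in range(60)]
brake = [t >= 4.2 for t in ts]
brt = brake_reaction_time(ts, brake, trigger_time=3.0)
```

Collecting this value over thousands of unmanned-simulator files, rather than 50-100 monitored drivers, is what makes the full BRT distributions described above possible.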
7

Sangster, John David. "Naturalistic Driving Data for the Analysis of Car-Following Models." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/76925.

Full text
Abstract:
The driver-specific data from a naturalistic driving study provide car-following events in real-world driving situations, while additionally providing a wealth of information about the participating drivers. Reducing a naturalistic database into discrete car-following events requires significant data reduction, validation, and calibration, often using manual procedures. The data collection performed herein included: the identification of commuting routes used by multiple drivers, the extraction of data along those routes, the identification of potential car-following events from the dataset, the visual validation of each car-following event, and the extraction of pertinent information from the database for each event identified. This thesis applies the developed process to generate car-following events from the 100-Car Study database, and applies the dataset to analyze four car-following models. The Gipps model was found to perform best for drivers with greater amounts of data in congested driving conditions, while the Rakha-Pasumarthy-Adjerid (RPA) model was best for drivers in uncongested conditions. The Gipps model generated the lowest error value in aggregate, with the RPA model error 21 percent greater, and the Gazis-Herman-Rothery (GHR) model and Intelligent Driver Model (IDM) errors 143 percent and 86 percent greater, respectively. Additionally, the RPA model provides the flexibility for a driver to change vehicles without the need to recalibrate parameter values for that driver, and can also capture changes in roadway surface type and condition. With the error values close between the RPA and Gipps models, the additional advantages of the RPA model make it the recommended choice for simulation.
Master of Science
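The per-driver calibration and error comparison the abstract describes can be illustrated with a toy stimulus-response follower, GHR-like in spirit but not any of the four models as actually specified, calibrated by grid search against an observed speed trace. All parameter values here are hypothetical.

```python
def simulate_follower(lead_speed, k, v0, dt=0.1):
    """Toy stimulus-response follower (illustrative only): acceleration is
    proportional, with sensitivity k, to the speed difference to the lead car."""
    v, out = v0, []
    for vl in lead_speed:
        v += k * (vl - v) * dt
        out.append(v)
    return out

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed speeds."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

# Calibrate k against an "observed" trajectory by grid search, mirroring the
# per-driver, per-model calibration described in the abstract.
lead = [20.0] * 50 + [15.0] * 50        # lead car slows from 20 to 15 m/s
observed = simulate_follower(lead, k=0.8, v0=18.0)
best_k = min((round(k * 0.1, 1) for k in range(1, 21)),
             key=lambda k: rmse(simulate_follower(lead, k, 18.0), observed))
```

Comparing the minimized error across model families, rather than across parameter values of one model, is the aggregate-error ranking the thesis reports for Gipps, RPA, GHR, and IDM.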
8

Yaron, Gil. "Trade unions, race and sex discrimination : a theoretical and empirical analysis using UK data." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305183.

Full text
9

Haverås, Daniel. "Data Race Detection for Parallel Programs Using a Virtual Platform." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230189.

Full text
Abstract:
Data races are highly destructive bugs found in concurrent programs. Because of unordered thread interleavings, data races can randomly appear and disappear during the debugging process which makes them difficult to find and reproduce. A data race exists when multiple threads or processes concurrently access a shared memory address, with at least one of the accesses being a write. Such a scenario can cause data corruption, memory leaks, crashes, or incorrect execution. It is therefore important that data races are absent from production software. This thesis explores dynamic data race detection in programs running on Ericsson’s System Virtualization Platform (SVP), a SystemC/TLM-2.0-based virtual platform used for running software on simulated hardware. SVP is a bit-accurate simulator of Ericsson Many-Core Architecture (EMCA) hardware, enabling software and hardware to be developed in parallel, as well as providing unique insight into software execution. This latter property of SVP has been utilized to implement SVPracer, a proof-of-concept dynamic data race detector. SVPracer is based on a happens-before algorithm similar to Google’s ThreadSanitizer v2, but is significantly different in implementation as it relies entirely on instrumenting binary code during runtime without requiring code modification during build time. A set of test programs exhibiting various data races were written and compiled for EOS, the operating system (OS) running on EMCA Digital Signal Processors (DSPs). Similar programs were created for Linux using POSIX APIs, to compare SVPracer against ThreadSanitizer v2. Both SVPracer and ThreadSanitizer v2 correctly detect the data races present in the respective test programs. Further work must be done in SVPracer to eliminate some false positive results, caused by missing support for some OS functionality such as semaphores. 
Still, the present state of SVPracer is sufficient proof that dynamic data race detection is possible using a virtual platform. Future work could involve exploring other data race detection algorithms as well as implementing deadlock/livelock detection in virtual platforms.
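A minimal happens-before detector in the spirit of the vector-clock algorithm mentioned above (the ThreadSanitizer v2 approach) can be sketched over a serialized event trace. This is an illustrative simplification, not SVPracer's implementation; the event encoding is an assumption.

```python
from collections import defaultdict

def find_races(trace, nthreads):
    """Events: ("acq", tid, lock), ("rel", tid, lock), ("rd"/"wr", tid, addr).
    Reports (addr, tid_a, tid_b) when two threads access addr, at least one
    writes, and no happens-before edge orders the accesses."""
    clocks = [defaultdict(int) for _ in range(nthreads)]   # vector clocks
    for tid in range(nthreads):
        clocks[tid][tid] = 1
    lock_clock = {}                  # lock -> clock snapshot at last release
    last_access = defaultdict(list)  # addr -> [(tid, clock snapshot, is_write)]
    races = []

    def happened_before(old, new):
        return all(v <= new.get(k, 0) for k, v in old.items())

    for ev in trace:
        kind, tid = ev[0], ev[1]
        if kind == "acq":            # join the releasing thread's clock
            for k, v in lock_clock.get(ev[2], {}).items():
                clocks[tid][k] = max(clocks[tid][k], v)
        elif kind == "rel":          # publish this thread's clock, then tick
            lock_clock[ev[2]] = dict(clocks[tid])
            clocks[tid][tid] += 1
        else:                        # "rd" or "wr" on address ev[2]
            is_write = kind == "wr"
            for otid, oclock, owrite in last_access[ev[2]]:
                if otid != tid and (is_write or owrite) \
                        and not happened_before(oclock, clocks[tid]):
                    races.append((ev[2], otid, tid))
            last_access[ev[2]].append((tid, dict(clocks[tid]), is_write))
    return races
```

Two unsynchronized writes to the same address are flagged, while the same pair ordered through a lock release/acquire is not, which is exactly the distinction the thesis's test programs exercise.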
10

Fair, Elizabeth L. "Educational Disparities in Early Education: A Critical Race Theory Analysis of ECLS-K: 2011 Data." Thesis, Notre Dame of Maryland University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10784565.

Full text
Abstract:

African American children’s public school education outcomes differ from those of their White, non-Hispanic peers. This dissertation used the data from The Early Childhood Longitudinal Survey for the Kindergarten Class of 2011 (ECLS-K: 2011) to explore the question: What factors during a child’s kindergarten through third-grade years contribute to disparate test scores, opportunities, and outcomes? There is a large body of research citing a gap between African American students and their White, non- Hispanic peers in later years of schooling. This study utilized data collected from students, parents, teachers, and administrators from a child’s entry to kindergarten through the completion of third grade. The results were interpreted through the lens of Critical Race Theory (CRT). Most CRT work has been qualitative. This study aimed to identify areas in which follow-up qualitative work could enrich the findings of the quantitative work and offer insight beyond the deficit models that are routinely provided to explain the gap.

Findings suggest that there is a slight gap between African American students and their White, non-Hispanic peers in reading and math scores on kindergarten entry, and that those differences increased over the 4-year period. The data also suggest that poverty played a role in this disparity. Teachers' and parents' beliefs about kindergarten readiness were aligned, and African American parents' beliefs were more closely aligned than those of the parent population as a whole. Teachers reported closer relationships with White, non-Hispanic students and higher levels of conflict with African American students, although this did not seem to correlate directly with reading and math test scores.

The research results indicate a need for more culturally relevant pedagogical training for preservice and inservice teachers. Early education programs need to be closely examined for practices that exclude or disadvantage children who are not from White, middle-class backgrounds. The curriculum needs to build on the skills the students possess, rather than considering those without the desired skills deficient. Finally, intervention programs need to be evaluated, as the data in the study indicate that the gaps in reading were smaller than those in math.

11

Vadeby, Anna. "Computer based statistical treatment in models with incidental parameters : inspired by car crash data." Doctoral thesis, Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek814s.pdf.

Full text
12

Roemer, Jake. "Practical High-Coverage Sound Predictive Race Detection." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1563505463237874.

Full text
13

Winkler, Jordan M. "Racial Disparity in Traffic Stops: An Analysis of Racial Profiling Data in Texas." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862791/.

Full text
Abstract:
The primary goal of this study was to analyze existing racial profiling data collected and reported by law enforcement agencies in Texas. The internet-based data used were obtained through TCOLE, the state-mandated repository to which all law enforcement agencies must submit their annual racial profiling reports. In analyzing a collection requirement of these reports, this study sought to determine how frequently law enforcement officers know the race or ethnicity of drivers prior to traffic stops. Furthermore, the study sought to determine whether there are differences in the rates at which race or ethnicity is known prior to stops across Texas geographical regions, county population sizes, and agency types, as well as between counties with and without interstate thoroughfares. The analysis covered 3,250,984 traffic stops conducted by 1,186 law enforcement agencies in 2014. Findings revealed that law enforcement officers rarely know the race or ethnicity of drivers prior to traffic stops, a result consistent across all measures. Findings and implications are discussed.
14

Keler, Andreas. "Traffic Pattern Analysis Framework with Emphasis on Floating Car Data (FCD)." Advisor: Jukka M. Krisp. Augsburg : Universität Augsburg, 2017. http://d-nb.info/1143518934/34.

Full text
15

Atkins, Andrew Jarred. "School Shootings: How Race, Income and Class Affect Media Coverage." Kent State University Honors College / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1534157783735381.

Full text
16

Bonilla, Hernández Ana Esther. "Analysis and direct optimization of cutting tool utilization in CAM." Licentiate thesis, Högskolan Väst, Forskningsmiljön produktionsteknik(PTW), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-8672.

Full text
Abstract:
The search for increased productivity and cost reduction in machining can be interpreted as the desire to increase the material removal rate, MRR, and maximize cutting tool utilization. The CNC process is complex and involves numerous limitations and parameters, ranging from tolerances to machinability. A well-managed preparation process creates the foundation for reducing manufacturing errors and machining time. Two different studies conducted during the preparation of the NC program are presented in this thesis. One examined the CAM programming preparation process from a Lean perspective; the other evaluated how the cutting tools are used in terms of MRR and tool utilization. The material removal rate is defined as the product of three variables, namely the cutting speed, the feed, and the depth of cut, which together constitute the cutting data. Tool life is the amount of time that a cutting tool can be used and is mainly dependent on the same variables. Two different combinations of cutting data might provide the same MRR, yet yield different tool life. The difficulty is therefore to select the cutting data so as to maximize both MRR and cutting tool utilization. A model for the analysis and efficient selection of cutting data for maximal MRR and maximal tool utilization has been developed and is presented. The model shortens the time dedicated to optimized cutting-data selection and reduces the iterations needed during program development.
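The trade-off the abstract describes can be made concrete: MRR is the product of cutting speed, feed, and depth of cut, while tool life depends on the same variables. The sketch below models tool life with Taylor's classic equation, which is an assumption of this example (the thesis's own model may differ), with illustrative constants. Two cutting-data sets with equal MRR then yield very different tool life.

```python
def mrr(vc, f, ap):
    """Material removal rate (cm^3/min) in turning for cutting speed vc (m/min),
    feed f (mm/rev), and depth of cut ap (mm): MRR = vc * f * ap."""
    return vc * f * ap

def taylor_tool_life(vc, n=0.25, C=400.0):
    """Taylor's tool life equation vc * T**n = C, solved for T (min).
    The exponent n and constant C are illustrative values, not from the thesis."""
    return (C / vc) ** (1.0 / n)

# Two cutting-data sets (vc, f, ap) with the same MRR but different tool life,
# illustrating why maximizing MRR alone does not maximize tool utilization.
a = (200.0, 0.3, 2.0)
b = (300.0, 0.2, 2.0)
life_a, life_b = taylor_tool_life(a[0]), taylor_tool_life(b[0])
```

Here both sets remove 120 cm^3/min, but the lower-speed set keeps the tool alive far longer, which is exactly the selection problem the thesis's model addresses.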
17

Chowdhury, Mahzabin, and Khan Salam. "Green Race! A Conjoint Analysis in High Involvement Purchase Decision Process ­­­- In Context of Green Cars in Sweden." Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46620.

Full text
Abstract:
The environment and its conservation is one of the key issues across the globe these days, and even more so in the Scandinavian region. Sweden is one of the leading pro-environment nations in the world when it comes to environment-friendly, or green, automobiles. Introducing an emissions tax, a green car rebate, and a congestion tax exemption for green cars in large cities has resulted in a surge of green car sales in Sweden over the past few years. The preferences of Swedish green car consumers are examined in this study. Theories of the consumer decision process and of preferences provide the theoretical grounding of this study, and on that basis Adaptive Choice Based Conjoint Analysis was selected to measure and understand consumer preferences towards green cars. The Swedish green car market was examined as a prerequisite to conducting this study. A review of previous related studies, a small-scale pre-screening survey, and expert interviews were carried out prior to formulating the conjoint experiment to ensure the inclusion of significant components. The collected data were analyzed using analysis software such as SSI Web, SMRT, and SPSS to understand and measure consumer preferences. The findings provide answers to the importance of different attributes in the purchase decision-making for green cars, the effect of each attribute on the decision-making process, the effect of prior purchase experience on the formation of preference, and the relationship between a consumer's green consciousness level and the green decision-making process. This study contributes to the theoretical field of green consumer behavior and to the practical field of marketing green cars. The study also identifies and recommends key areas of interest that warrant further research.
Key Words: High Involvement Purchase, Green Consumer Behavior, Conjoint Analysis, Adaptive Choice Based Conjoint Analysis (ACBC), Green Preference, Green Car.
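One standard way to turn conjoint part-worth utilities into the attribute-importance figures the abstract mentions is to divide each attribute's utility range by the sum of all ranges. The attributes and utility values below are hypothetical, purely for illustration, and this is the generic conjoint calculation rather than the study's specific ACBC estimation.

```python
def attribute_importance(part_worths):
    """Relative importance of each attribute from its part-worth utilities:
    the attribute's utility range divided by the sum of all attribute ranges."""
    ranges = {a: max(u.values()) - min(u.values())
              for a, u in part_worths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

# Hypothetical part-worths for three green-car attributes (illustrative only).
utilities = {
    "price":      {"low": 1.2, "mid": 0.1, "high": -1.3},
    "fuel_type":  {"electric": 0.8, "hybrid": 0.3, "ethanol": -1.1},
    "tax_rebate": {"yes": 0.5, "no": -0.5},
}
importance = attribute_importance(utilities)
```

The resulting shares sum to one, so the attribute with the widest utility range dominates the purchase decision, which is the kind of ranking the study reports for green-car attributes.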
18

Nakade, Radha Vi. "Verification of Task Parallel Programs Using Predictive Analysis." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6176.

Full text
Abstract:
Task parallel programming languages provide a way to create asynchronous tasks that can run concurrently. The advantage of using task parallelism is that the programmer can write code that is independent of the underlying hardware: the runtime determines the number of processor cores available and the most efficient way to execute the tasks. When two or more concurrently executing tasks access a shared memory location and at least one of the accesses is a write, a data race occurs. Data races can introduce non-determinism in program output, which makes data race detection tools important. To detect data races in task parallel programs, a new sound and complete technique based on computation graphs is presented in this work. The data race detection algorithm runs in O(N^2) time, where N is the number of nodes in the graph. A computation graph is a directed acyclic graph that represents the execution of the program; for detecting data races, the computation graph stores the shared heap locations accessed by the tasks. An algorithm for creating computation graphs augmented with the memory locations accessed by the tasks is also described, and runs in O(N) time, where N is the number of operations performed in the tasks. This work also presents an implementation of this technique for the Java implementation of the Habanero programming model. The results of this data race detector are compared to Java Pathfinder's precise race detector extension and its permission-regions-based race detector extension. The results show a significant reduction in the time required for data race detection using this technique.
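The pairwise check over a computation graph can be sketched as follows: two nodes race if neither reaches the other in the DAG and they conflict on a shared location with at least one write. This is an O(N^2)-flavoured illustration of the idea, not the thesis's Habanero implementation; the graph and access sets are hypothetical.

```python
from itertools import combinations

def reachable(graph, src, dst):
    """Depth-first reachability in a DAG given as node -> list of successors."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, ()))
    return False

def graph_races(graph, accesses):
    """accesses: node -> set of (addr, is_write). Two nodes race if neither
    reaches the other and they conflict on an address with at least one write."""
    races = set()
    for a, b in combinations(accesses, 2):
        if reachable(graph, a, b) or reachable(graph, b, a):
            continue  # ordered by the graph: no race possible
        for addr_a, write_a in accesses[a]:
            for addr_b, write_b in accesses[b]:
                if addr_a == addr_b and (write_a or write_b):
                    races.add((addr_a, a, b))
    return races

# Fork-join shape: t1 and t2 run unordered between "fork" and "join".
g = {"fork": ["t1", "t2"], "t1": ["join"], "t2": ["join"]}
acc = {"fork": set(), "t1": {("x", True)}, "t2": {("x", False)}, "join": set()}
found = graph_races(g, acc)
```

The write in t1 and the read in t2 are unordered in the graph, so the pair is reported; accesses ordered through the join are not.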
19

Vassenkov, Phillip. "Contech: a shared memory parallel program analysis framework." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50379.

Full text
Abstract:
We are in the era of multicore machines, where we must exploit thread-level parallelism for programs to run better, smarter, faster, and more efficiently. In order to increase instruction-level parallelism, processors and compilers perform heavy dataflow analyses between instructions. However, little work has been done in the area of inter-thread dataflow analysis. In order to pave the way and find new ways to conserve resources across a variety of domains (i.e., execution speed, chip die area, power efficiency, and computational throughput), we propose a novel framework, termed Contech, to facilitate the analysis of multithreaded programs in terms of their communication and execution patterns. We focus on shared-memory rather than message-passing programs, since it is more difficult to analyze the communication and execution patterns of such programs. Discovering patterns in shared-memory programs has the potential to allow general-purpose computing machines to turn architectural tricks on or off according to application-specific features. Our design of Contech is modular in nature, so we can glean a large variety of information from an architecturally independent representation of the program under examination.
APA, Harvard, Vancouver, ISO, and other styles
20

Andalib, Maryam Alsadat. "Model-based Analysis of Diversity in Higher Education." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/96221.

Full text
Abstract:
U.S. higher education is an example of a large multi-organizational system within the service sector. Its performance regarding workforce development can be analyzed through the lens of industrial and systems engineering. In this three-essay dissertation, we seek the answer to the following question: How can the U.S. higher education system achieve an equal representation of female and minority members in its student and faculty populations? In essay 1, we model the education pipeline with a focus on the system's gender composition from k-12 to graduate school. We use a system dynamics approach to present a systems view of the mechanisms that affect the dynamics of higher education, replicate historical enrollment data, and forecast future trends of higher education's gender composition. Our results indicate that, in the next two decades, women will be the majority of advanced degree holders. In essay 2, we look at the support mechanisms for new-parent, tenure-track faculty in universities with a specific focus on tenure-clock extension policies. We construct a unique data set to answer questions around the effectiveness of removing the stigma connected with automatic tenure-clock policies. Our results show that such policies are successful in removing the stigma and that, overall, faculty members that have newborns and are employed by universities that adopt auto-TCE policies stay one year longer in their positions than other faculty members. In addition, although faculty employed at universities that adopt such policies are generally more satisfied with their jobs, there is no statistically significant effect of auto TCE policies on the chances of obtaining tenure. In essay 3, we focus on the effectiveness of training underrepresented minorities (e.g., African Americans and Hispanics) in U.S. higher education institutions using a Data Envelopment Analysis approach. 
Our results indicate that graduation rates, average GPAs, and post-graduate salaries of minority students are higher in selective universities and those located in more diverse towns/cities. Furthermore, the graduation rate of minority students in private universities and those with affirmative action programs is higher than in other institutions. Overall, this dissertation provides new insights into improving diversity within the science workforce at different organizational levels by using industrial and systems engineering and management sciences methods.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Tianyou. "3D Representation of EyeTracking Data : An Implementation in Automotive Perceived Quality Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291222.

Full text
Abstract:
The importance of perceived quality within the automotive industry has been rapidly increasing in recent years. Since judgment concerning perceived quality is a highly subjective process, eye-tracking technology is one of the best approaches to extract customers' subconscious visual activity during interaction with the product. This thesis aims to find an appropriate solution for representing 3D eye-tracking data for further improvements in the validity and verification efficiency of perceived quality analysis, attempting to answer the question: How can eye-tracking data be presented and integrated into the 3D automobile design workflow as a material that allows designers to understand their customers better? In the study, a prototype system was built for car-interior inspection in a virtual reality (VR) showroom through an explorative research process, including investigations into the acquisition of gaze data in VR, the classification of eye movements from the collected gaze data, and visualizations for the classified eye movements. The prototype system was then evaluated through comparisons between algorithms and feedback from the engineers who participated in the pilot study. As a result, a method combining I-VT (identification with velocity threshold) and DBSCAN (density-based spatial clustering of applications with noise) was implemented as the optimum algorithm for eye movement classification. A modified heat map, a cluster plot, and a convex hull plot, together with textual information, were used to construct the complete visualization of the eye-tracking data. The prototype system has enabled car designers and engineers to examine both the customers' and their own visual behavior in the 3D virtual showroom during a car inspection, followed by the extraction and visualization of the collected gaze data. This paper presents the research process, including an introduction to the relevant theory, the implementation of the prototype system, and its results. Eventually, strengths and weaknesses, as well as future work on both the prototype solution itself and potential experimental use cases, are discussed.
Betydelsen av upplevd kvalitet inom bilindustrin har ökat kraftigt dessa år. Eftersom uppfattningar om upplevd kvalitet är en mycket subjektivt är ögonspårningsteknik en av de bästa metoderna för att extrahera kundernas undermedvetna visuella aktivitet under interaktion med produkten. Denna avhandling syftar till att hitta en lämplig lösning för att representera 3Dögonspårningsdata för ytterligare förbättringar av validitets- och verifieringseffektiviteten hos upplevd kvalitetsanalys, och försöker svara på frågan: Hur kan ögonspårningsdata presenteras och integreras i 3D-arbetsflödet för bildesign som ett material som gör det möjligt för designers att bättre förstå sina kunder? I studien byggdes ett prototypsystem för bilinteriörinspektion i showroomet för virtuell verklighet (VR) genom en explorativ forskningsprocess inklusive undersökningar i förvärv av blickdata i VR, klassificering av ögonrörelse från insamlad blicksdata och visualiseringar för de klassificerade ögonrörelserna. Prototypsystemet utvärderades sedan genom jämförelser mellan algoritmer och återkopplingar från ingenjörerna som deltog i pilotstudien. Följaktligen implementerades en metod som kombinerar I-VT (identifiering med hastighetströskel) och DBSCAN (densitetsbaserad spatial gruppering av applikation med brus) som den optimala algoritmen för ögonrörelseklassificering. En modifierad värmekarta, ett klusterdiagram, en konvex skrovdiagram, tillsammans med textinformation, användes för att konstruera den fullständiga visualiseringen av ögonspårningsdata. Prototypsystemet har gjort det möjligt för bilkonstruktörer och ingenjörer att undersöka både kundernas och deras visuella beteende i det virtuella 3D-utställningsrummet under en bilinspektion, följt av utvinning och visualisering av den insamlade blicken. Denna uppsats presenterar forskningsprocessen, inklusive introduktion till relevant teori, implementeringen av prototypsystemet och dess resultat. 
Så småningom diskuteras styrkor och svagheter, liksom det framtida arbetet i både prototyplösningen och potentiella experimentella användningsfall.
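As a rough illustration of the I-VT step described in the abstract: gaze samples are labeled as fixations or saccades depending on whether the sample-to-sample angular velocity exceeds a threshold. The sample format and the 30 deg/s threshold below are illustrative assumptions, not taken from the thesis.

```python
# Minimal I-VT (velocity-threshold identification) sketch.
def ivt_classify(samples, threshold=30.0):
    """samples: list of (t_seconds, x_deg, y_deg) gaze points.
    Returns one 'fixation'/'saccade' label per inter-sample interval."""
    labels = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt  # deg/s
        labels.append("saccade" if velocity > threshold else "fixation")
    return labels
```

In a full pipeline like the one described, a clustering step such as DBSCAN would then group the fixation samples into spatial clusters for visualization.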
APA, Harvard, Vancouver, ISO, and other styles
22

Pereira, Bani Valério Alves. "Análise de estrutura de carro de corrida (stock-car) pelo método de elementos finitos." Universidade de Taubaté, 2012. http://www.bdtd.unitau.br/tedesimplificado/tde_busca/arquivo.php?codArquivo=346.

Full text
Abstract:
O objetivo deste trabalho é analisar a parte estrutural de um veiculo de stock-car com aplicação de método de elementos finitos. A atenção especial é dedicada a lateral do veiculo, onde ocorrem impactos mais perigosos para a integridade física de piloto. O modelo em questão (chassis tubular) foi modelado no software CATIA VR R20 e a analise numérica realizada no software ABAQUS. A resistência da estrutura tubular, em casos de acidentes, impactos frontais, laterais, traseiros e em caso de capotamento é estudada em termos de tensões mecânicas e deformações provocadas na estrutura tubular soldada. Os resultados permitem determinar níveis perigosos de intensidade de impacto em situações reais de corrida. Verifica-se que há probabilidade de danos severos neste tipo de estrutura. Com base nos resultados obtidos, são sugeridas modificações de projeto de estrutura Stock-Car, focadas no aumento da segurança.
The objective of this work is to analyze a stock-car structure by applying the finite element method. Special attention is dedicated to the lateral side of the vehicle, where the impacts most dangerous to the physical integrity of the pilot occur. The model in question (a tubular chassis) was developed using the software CATIA VR R20 and analyzed using the FEM software ABAQUS. The resistance of the tubular structure in accident cases, such as frontal, lateral and rear impacts, as well as rollover, is studied in terms of mechanical stresses and deformations in the welded tubular structure. The results allow harmful levels of impact intensity in real racing conditions to be determined. It appears that severe damage to this type of structure is possible. Based on these results, design modifications for the stock car, aimed at increasing safety, are suggested.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhou, Liren. "An analysis of journey to work characteristics in Florida using Census 2000 public use microdata sample data files." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Tollefson, John Dietrich. "Identifying the factors that affect the severity of vehicular crashes by driver age." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2285.

Full text
Abstract:
Vehicular crashes are the leading cause of death for young adult drivers; however, very little life course research focuses on drivers in their 20s. Moreover, most analyses of crash data are limited to simple correlation and regression analysis. This thesis proposes a data-driven approach and the use of machine-learning techniques to further enhance the quality of analysis. We examine over 10 years of data from the Iowa Department of Transportation, transforming all the data into a format suitable for data analysis. From there, the ages of the drivers present in each crash are discretized for better analysis. In doing this, we hope to better discover the relationship between driver age and the factors present in a given crash. We use machine learning algorithms to determine important attributes for each age group with the goal of improving the predictivity of individual methods. The general format of this thesis follows a knowledge discovery workflow: preprocessing and transforming the data into a usable state, from which we perform data mining to discover results and produce knowledge. We hope to use this knowledge to improve predictivity for different age groups of drivers, with around 60 variables for most sets and 10 variables for some. We also explore future directions in which this data could be analyzed.
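One common way to "determine important attributes for each age group", as the abstract puts it, is to rank attributes by information gain against crash severity. The sketch below is a generic illustration of that idea; the field names and records are invented, not drawn from the Iowa DOT data.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr, target="severity"):
    """How much knowing `attr` reduces uncertainty about `target`."""
    base = entropy([r[target] for r in records])
    for value in {r[attr] for r in records}:
        subset = [r[target] for r in records if r[attr] == value]
        base -= len(subset) / len(records) * entropy(subset)
    return base
```

Attributes with higher gain would be retained as the "important attributes" for a given age group's subset of crashes.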
APA, Harvard, Vancouver, ISO, and other styles
25

Kelley, Lindsey. "Utilization of Depot Fluphenazine and Haloperidol in a Community Mental Health Center: A 12-Month Retrospective Analysis of Compliance, Hospitalization Data, and Cost of Care for Patients with Schizophrenia." The University of Arizona, 2005. http://hdl.handle.net/10150/624756.

Full text
Abstract:
Class of 2005 Abstract
Objectives: This retrospective study investigated the relationship between compliance, hospitalization rates, and cost of care in an outpatient behavioral health facility over a 12-month period for patients with schizophrenia treated with depot antipsychotic medications. Methods: Databases from COPE Behavioral Health Center in Tucson, AZ were used to obtain data on the administration of depot injections, hospitalizations, and cost of care for patients between July 1, 2003 and June 30, 2004. Results: Records were utilized for 103 patients receiving depot antipsychotics (n = 34 fluphenazine decanoate, n = 69 haloperidol decanoate). An increased number of injections received per year was associated with a lower number of hospitalizations per year (p = 0.025 fluphenazine and p = 0.001 haloperidol). Also, an increased number of hospitalizations was associated with increased total cost of care (p = 0.001 fluphenazine and p < 0.001 haloperidol). Implications: Patients with schizophrenia who received a greater number of depot antipsychotic injections had fewer hospitalizations during the 12-month period. Improved adherence with depot antipsychotics may improve clinical outcomes and reduce the total cost of care in patients with schizophrenia.
APA, Harvard, Vancouver, ISO, and other styles
26

Conroy, Amy. "E-racing the Genetic Family Tree: A Critical Race Analysis of the Impact of Familial DNA Searching on Canada's Aboriginal Peoples." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34285.

Full text
Abstract:
Canada established its National DNA Data Bank (NDDB) in 2000. Since that time, the NDDB has assisted in the solving of numerous criminal investigations. The NDDB has two indexes: the convicted offender index, which holds the identifiable DNA of persons convicted of designated crimes, and the anonymous crime scene index, which holds anonymous DNA collected from crime scenes. A match to a crime scene profile provides criminal investigators with extremely valuable evidence linking a suspect to a crime scene and the NDDB has been used to identify perpetrators in thousands of crimes in Canada. By limiting the identifiable DNA in the NDDB to convicted offenders, Canada has aimed to balance the crime-solving benefits of the data bank with competing rights issues, particularly the individual right to privacy. Some have encouraged expansions to the NDDB scheme in order to increase the number of crimes that can be resolved through the use of DNA evidence. One possible expansion is to introduce familial searching, a technique in DNA analysis that enables suspect identification based on the existence of a partial match between an identifiable DNA profile and an anonymous profile retrieved from the scene of a crime. Where closely matching profiles indicate that a close genetic relationship likely exists between the identifiable offender and an anonymous perpetrator, police will have a useful lead for follow-up and may be able to locate a suspect by testing the DNA of the identified offender’s close relatives. The use of familial searching is controversial. As a crime-solving tool, it has helped solve crimes in other jurisdictions in which it is currently used. At the same time, it introduces legal and ethical questions that have not been fully explored in Canada. 
One of the crucial questions is whether and to what extent familial searching may discriminate against Canada’s Aboriginal peoples, who suffer the effects of systemic bias in the criminal justice system generally and who are likely to be overrepresented in the NDDB. Applied in an inherently unequal system, familial searching would disproportionately impact Aboriginal peoples and perpetuate or possibly worsen this existing inequality. To help inform Canada’s decision about the use of familial searching as part of NDDB operations, this dissertation examines the issue from a Critical Race Theory perspective. It outlines the various ways in which familial searching would disproportionately impact Aboriginal peoples. The dissertation further examines international approaches to familial searching and evaluates the extent to which these policies protect against racial inequality concerns relating to the use of familial searching in each jurisdiction considered. It argues that Canada should prohibit familial searching of NDDB data in order to avoid a situation in which the technique would perpetuate or worsen systemic bias against Aboriginal peoples in the Canadian criminal justice system.
APA, Harvard, Vancouver, ISO, and other styles
27

Barone, Anthony J. "State Level Earned Income Tax Credit’s Effects on Race and Age: An Effective Poverty Reduction Policy." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/cmc_theses/771.

Full text
Abstract:
In this paper, I analyze the effectiveness of state-level Earned Income Tax Credit (EITC) programs at reducing poverty levels. I conducted this analysis for the years 1991 through 2011 using a panel data model with fixed effects. The main independent variables of interest were the state and federal EITC rates, minimum wage, gross state product, population, and unemployment, all by state. I determined that increases to state EITC rates provided only a slight decrease in both the overall white below-poverty population and the corresponding white childhood population under 18, while both the overall and the under-18 black populations in this category realized moderate decreases in their poverty rates over the same time period. I also provide a comparison of the effectiveness of state-level EITCs and the state-level minimum wage over the same time period on these select demographic groups.
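The fixed-effects panel model mentioned above can be sketched, in its simplest one-regressor form, as a within (demeaning) estimator: subtract each state's mean from its observations, then fit a pooled slope. This is a generic illustration, not the paper's actual specification, and the data in the test is invented.

```python
def within_estimator(panel):
    """Fixed-effects (within) estimator for one regressor.
    panel: dict state -> list of (x, y) observations over years.
    Demeans within each state, absorbing state fixed effects,
    then fits a pooled OLS slope on the demeaned data."""
    xs, ys = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        xs += [x - mx for x, _ in obs]
        ys += [y - my for _, y in obs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Because each state's mean is removed, any time-invariant state characteristic drops out, which is what makes the fixed-effects design attractive for cross-state policy comparisons.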
APA, Harvard, Vancouver, ISO, and other styles
28

Ding, Linfang [Verfasser], Liqiu [Akademischer Betreuer] [Gutachter] Meng, Gerd [Gutachter] Buziek, and Gennady [Gutachter] Andrienko. "Visual Analysis of Large Floating Car Data – A Bridge-Maker between Thematic Mapping and Scientific Visualization / Linfang Ding. Betreuer: Liqiu Meng. Gutachter: Liqiu Meng ; Gerd Buziek ; Gennady Andrienko." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/110169517X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Cao, Man. "Efficient, Practical Dynamic Program Analyses for Concurrency Correctness." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492703503634986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Tong. "Designing Practical Software Bug Detectors Using Commodity Hardware and Common Programming Patterns." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96422.

Full text
Abstract:
Software bugs can cost millions and affect people's daily lives. However, many bug detection tools are not practical in reality, which hinders their wide adoption. There are three main concerns regarding existing bug detectors: 1) run-time overhead in dynamic bug detectors, 2) space overhead in dynamic bug detectors, and 3) scalability and precision issues in static bug detectors. With those in mind, we propose to: 1) leverage commodity hardware to reduce run-time overhead, 2) reuse metadata maintained by one bug detector to detect other types of bugs, reducing space overhead, and 3) apply programming idioms to static analyses, improving scalability and precision. We demonstrate the effectiveness of these three approaches on data race bugs, memory safety bugs, and permission check bugs, respectively. First, we leverage commodity hardware transactional memory (HTM) to run the dynamic data race detector selectively, only when necessary, thereby reducing the overhead from 11.68x to 4.65x. We then present a production-ready data race detector, which incurs only a 2.6% run-time overhead, by using performance monitoring units (PMUs) for online memory access sampling and offline unsampled memory access reconstruction. Second, for memory safety bugs, which are more common than data races, we provide practical temporal memory safety on top of the spatial memory safety of the Intel MPX in a memory-efficient manner without additional hardware support. We achieve this by reusing the existing metadata and checks already available in Intel MPX-instrumented applications, thereby offering full memory safety at only 36% memory overhead. Finally, we design a scalable and precise function pointer analysis tool leveraging indirect call usage patterns in the Linux kernel. We applied the tool to the detection of permission check bugs; the detector found 14 previously unknown bugs within a limited time budget.
Doctor of Philosophy
Software bugs have caused many real-world problems, e.g., the 2003 Northeast blackout and the Facebook stock price mismatch. Finding bugs is critical to solving those problems. Unfortunately, many existing bug detectors suffer from high run-time and space overheads as well as scalability and precision issues. In this dissertation, we address the limitations of bug detectors by leveraging commodity hardware and common programming patterns. In particular, we focus on improving the run-time overhead of dynamic data race detectors, the space overhead of a memory safety bug detector, and the scalability and precision of a Linux kernel permission check bug detector. We first present a data race detector built upon commodity hardware transactional memory that achieves a 7x overhead reduction compared to the state-of-the-art solution (Google's TSAN). We then present a very lightweight sampling-based data race detector which re-purposes performance monitoring hardware features for lightweight sampling and uses a novel offline analysis for better race detection capability. Our results show very low overhead (2.6%) with a 27.5% detection probability at a sampling period of 10,000. Next, we present a space-efficient temporal memory safety bug detector that builds on a hardware spatial memory safety mechanism, without additional hardware support. According to experimental results, our full memory safety solution incurs only a 36% memory overhead with a 60% run-time overhead. Finally, we present a permission check bug detector for the Linux kernel. This bug detector leverages indirect call usage patterns in the Linux kernel for scalable and precise analysis. As a result, within a limited time budget (scalable), the detector discovered 14 previously unknown bugs (precise).
APA, Harvard, Vancouver, ISO, and other styles
31

Leandro, Carolina Gonçalves. "Aplicação da análise do sinal do GPR na definição de ambientes costeiros." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/173817.

Full text
Abstract:
Na barreira regressiva da Pinheira, são reconhecidos quatro ambientes deposicionais costeiros, caracterizados por parâmetros geológicos como a análise de litofácies, estruturas sedimentares, grau de compactação e conteúdo de moluscos. Informações que são analisadas em conjunto com imagens de dados geofísicos obtidas com o método do Radar de Penetração no Solo (GPR – Ground Penetrating Radar) para determinar esses ambientes. O presente trabalho visa a caracterização destes ambientes deposicionais através da análise da amplitude do sinal em traços de antenas com frequências centrais de 80, 100, 200 e 400 MHz em conjunto com os dados de compactação e litológicos de um furo de sondagem. E também mostra o comportamento da atenuação do sinal em relação a umidade presente no ambiente. A análise dos traços permitiu a identificação dos contatos entre os ambientes já descritos para barreiras regressivas, mostrando variação no valor das amplitudes (decréscimo ou aumento) em conjunto com a variação no grau de compactação, que evidenciam em subsuperfície a mudança entre os ambientes de cordões litorâneos, backshore/foreshore e shoreface superior e inferior. A interferência da umidade na atenuação do sinal nos dados analisados pode ser observada apenas nos primeiros 0,5 m. Demonstrando que a pluviosidade não é um fator de relevância para atenuação do sinal em ambientes arenosos onde o nível da água é próximo a superfície. A análise dos radargramas para todas as antenas, permitiu a identificação dos padrões de refletores já descritos para os ambientes da área de estudo e a antena com frequência central de 200 MHz apresentou maior resolução para a definição de todos os ambientes.
In the Pinheira regressive barrier, four coastal depositional environments are recognized, characterized by geological parameters such as lithofacies analysis, sedimentary structures, degree of compaction and mollusk content. This information is analyzed together with images of geophysical data obtained with the Ground Penetrating Radar (GPR) method to determine these environments. The present work aims to characterize these depositional environments by analyzing the signal amplitude in traces from antennas with central frequencies of 80, 100, 200 and 400 MHz, in conjunction with the compaction and lithological data of a drill hole. It also shows the behavior of signal attenuation in relation to the humidity present in the environment. The analysis of the traces allowed the identification of the contacts between the environments already described for regressive barriers, showing variation in the value of the amplitudes (decrease or increase) together with variation in the degree of compaction, which evidences in the subsurface the change between the foredune ridge, backshore/foreshore, and upper and lower shoreface environments. The interference of humidity in the attenuation of the signal in the studied data can be observed only in the first 0.5 m. Rainfall was therefore not relevant for signal attenuation in the studied sandy deposits, where the water level is close to the surface. The analysis of the radargrams for all the antennas allowed the identification of the reflector patterns already described for the environments of the study area, and the antenna with a central frequency of 200 MHz showed the highest resolution for the definition of all the environments.
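A small worked example of the kind of amplitude analysis described: two-way signal attenuation between depths is commonly expressed in dB per meter from the ratio of trace amplitudes at those depths. This is a generic illustration with invented values, not the thesis's processing chain.

```python
from math import log10

def attenuation_db_per_m(a1, a2, d1, d2):
    """Attenuation rate between depths d1 < d2 (same trace),
    from the amplitudes a1 at d1 and a2 at d2, in dB/m."""
    return 20 * log10(a1 / a2) / (d2 - d1)
```

For instance, an amplitude drop from 1.0 to 0.1 over 2 m corresponds to 10 dB/m, and comparing such rates across antennas and depth intervals is one way to quantify where humidity affects the signal.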
APA, Harvard, Vancouver, ISO, and other styles
32

Boudali, Selma, and Mattias Olausson. "Venturi Undertray : KTH Bachelor Thesis Report." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-274378.

Full text
Abstract:
This bachelor thesis describes the work performed in designing the undertray for the Kungliga Tekniska Högskolan Formula Student (KTHFS) race car. The goal was to achieve an aerodynamically optimized undertray design that follows the regulations of the competition and the targets set by KTHFS concerning weight, size, the materials needed for its manufacture, and costs. After some research on previous work, the concept we decided the undertray would rely on is Venturi tunnels, inspired by the Aston Martin Valkyrie and chosen for their ability to provide a large amount of downforce with a negligible amount of drag using ”ground effect”. Numerous CAD design models were created in Solid Edge, and a finalized design was then ported over to Siemens NX to be analyzed using Star-CCM+ and its Design Manager feature. The CFD analyses and optimization were performed in Star-CCM+ with regard to pressure gradient, streamline velocity, and downforce. These were done with variable parameters in areas such as expansion height, inlet area, and ride height. Contained within this report is a more detailed description of how the CFD analysis was performed, as well as suggestions for manufacturing said undertray. Given the time constraints and the societal impacts of COVID-19, manufacturing had to be removed from the scope of the project; however, a step-by-step manufacturing guide is provided within. Analysis of our final design showed 428 N of downforce, a weight of 2.55 kg, and a production cost of approximately 2320 SEK. It therefore passes the requirements for weight, cost, and ride-height regulations set by Formula Student, as well as internal KTHFS targets.
Detta kandidatexamensarbete syftar till att beskriva arbetet som utförts för konstruktionsdesignen av Kungliga Tekniska Högskolan Formula Student (KTHFS) racerbils underrede. Målet var att uppnå en aerodynamiskt optimerad underredesdesign som följer de regler och krav fastställda av KTHFS gällande vikt, storlek, material som behövs till tillverkningen och kostnader. Efter en litteraturstudie på tidigare arbete blev Venturi-tunnlar, inspirerade av Aston Martin Valkyrie, konceptet som vi beslutade att underredet skulle bygga på, valda på grund av deras förmåga att förbättra bilens prestanda genom sitt nedkraftsbildande och försumbar mängd drag med hjälp av ”ground effect”. Många CAD-designmodeller skapades i Solid Edge och en slutgiltig design överfördes sedan till Siemens NX för att analyseras med Star-CCM+ och dess Design Manager-funktion. CFD-analyserna och optimeringen utfördes i Star-CCM+ med avseende på tryckgradient, strömlinjehastighet och nedkrafter. Dessa gjordes med variabla parametrar i områden som utvidgningshöjd, inloppsarea och frigångshöjd. I denna rapport finns en mer detaljerad beskrivning av hur CFD-analysen utfördes samt förslag för tillverkning. Med tanke på tidsbegränsningarna och samhällseffekterna av COVID-19 fick vi ta bort tillverkning från projektets omfattning, men en steg-för-steg-tillverkningsguide tillhandahålls i rapporten. Analyser av vår slutgiltiga design visade på 428 N downforce, en vikt på 2,55 kg och en produktionskostnad på cirka 2320 SEK. Den uppfyller därför kraven för vikt, kostnad och frigångshöjd som fastställdes av Formula Student.
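The ground-effect principle the design relies on can be illustrated with a back-of-envelope estimate combining continuity and Bernoulli's equation: accelerating the underbody flow through a narrower section lowers the static pressure there and produces downforce. This is a generic sketch with invented geometry, not the thesis's CFD analysis.

```python
RHO = 1.225  # air density at sea level, kg/m^3

def venturi_downforce(v_free, area_ratio, floor_area):
    """Crude incompressible ground-effect estimate.
    v_free: freestream speed (m/s); area_ratio: inlet area / throat area
    (>1 accelerates the flow); floor_area: area over which the pressure
    drop acts (m^2). Returns an idealized downforce in newtons."""
    v_throat = v_free * area_ratio                    # continuity: A1*v1 = A2*v2
    dp = 0.5 * RHO * (v_throat ** 2 - v_free ** 2)    # Bernoulli pressure drop
    return dp * floor_area
```

Real undertray flows are viscous and three-dimensional, which is why parameters such as expansion height and ride height were swept in CFD rather than estimated this way; the sketch only conveys why a larger inlet-to-throat ratio increases downforce.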
APA, Harvard, Vancouver, ISO, and other styles
33

Grimal, Richard. "L'auto-mobilité au tournant du millénaire : une approche emboîtée, individuelle et longitudinale." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC0056/document.

Full text
Abstract:
L’automobile occupe une place fondamentale dans notre société, au point qu’on a pu parler de « civilisation de l’automobile ». En dépit des critiques qui lui sont régulièrement adressées, celle-ci n’a cessé de se renforcer, avec toujours davantage de voitures par adulte et une proportion croissante de déplacements effectués en voiture. Cependant, depuis le tournant du millénaire, on assiste à un retournement de tendance. Pour la première fois, la mobilité en voiture baisse dans les grandes agglomérations, tandis que la circulation automobile plafonne à l’échelle nationale. Cette évolution, du reste, n’est pas spécifique à la France mais s’observe dans l’ensemble des pays développés, une tendance parfois désignée sous le terme de « peak car (travel) ». Parmi les explications les plus convaincantes de ce retournement, figurent l’augmentation du prix du carburant, suivie de la récession de 2008. La volonté des ménages de maîtriser leurs budgets-temps de transport y contribue également, dans un contexte d’allongement des déplacements vers le travail et de dégradation des vitesses de déplacements. En outre, la diffusion de l’automobile se rapproche de la saturation. Si à long terme, la croissance du kilométrage moyen par adulte est indexée sur le taux de motorisation, cependant à moyen terme l’utilisation des véhicules fluctue en fonction du pouvoir d’achat énergétique, et un modèle basé sur ces deux variables suggère qu’on observerait une réaction normale à une augmentation exceptionnelle du prix du carburant. Les facteurs de croissance du taux de motorisation tiennent eux-mêmes principalement à la succession de générations de plus en plus motorisées, surtout chez les femmes, compte tenu d’un accès de plus en plus large au permis de conduire, à l’activité professionnelle, et d’une urbanisation de plus en plus diffuse, qui ont augmenté le besoin d’une seconde voiture. 
Pour modéliser l’auto-mobilité, on propose une approche emboîtée, individuelle et longitudinale, segmentée en fonction du genre. L’auto-mobilité peut en effet être vue au niveau individuel comme une succession de choix emboîtés, puisque la détention du permis conditionne l’accès à un véhicule personnel, de même que la motorisation conditionne l’usage d’un véhicule. L’avantage d’une approche longitudinale réside dans la possibilité de distinguer entre mesures d’hétérogénéité et de sensibilité, qui ne sont pas équivalentes. Pour chaque niveau de choix, l’approche est structurée autour d’une analyse de type âge-cohorte-période. Globalement, les taux de motorisation sont plus hétérogènes chez les femmes, un résultat qui est susceptible de recevoir une double interprétation, économique ou sociétale. On peut le voir en termes d’inégalités de genre. Mais il peut également s’interpréter comme le reflet d’un statut encore intermédiaire du second véhicule, dont l’opportunité serait davantage évaluée au regard des besoins et des contraintes réels du ménage. A l’inverse, l’usage des véhicules est à la fois plus élevé et plus hétérogène chez les hommes, compte tenu de la fonction collective du véhicule principal et des arbitrages internes aux ménages quant aux choix du lieu de résidence et des lieux de travail des conjoints. Pour finir, on estime à partir de modèles sur données de panel des effets marginaux et des élasticités par rapport au revenu, au prix du carburant et à la densité, qui sont ensuite comparées avec la littérature. Dans l’ensemble, les résultats sont cohérents avec l’analyse descriptive, ainsi qu’avec la littérature. Le modèle permet également de rendre compte du déclin tendanciel des élasticités, traduisant l’approche de la saturation. Pour finir, une évaluation a posteriori confirme l’opportunité d’une modélisation séquentielle, indiquant que les choix de motorisation sont indépendants des niveaux d’usage de la voiture
Car ownership and use are a decisive part of our society, which has sometimes been described as the "civilization of the car". Despite many criticisms, the car has become ever more central to the modern way of life, with an ever-increasing number of cars per adult and proportion of trips made by car. However, from the beginning of the millennium, there was a reversal in the trend towards ever-more car use. For the first time, the average number of daily trips made by car has been falling in French conurbations, and nationwide car traffic is leveling off. This situation, nonetheless, is not specific to France but is common to many developed countries, and is often referred to as the "peak car (travel)". The main explanations for the downturn include rising fuel prices from the late 1990s, followed by the recession in 2008, but also households' willingness to control their travel time budgets, in a context of increasing commuting distances and reduced travel speeds. Besides, the diffusion of car ownership is approaching saturation. While in the long run average car travel per adult is indexed on motorization, mid-term fluctuations of average car use per vehicle are related to fuel purchasing power, and a simple model based on these two variables suggests that the stagnation of car use from the 2000s could be a reaction of a usual kind to an exceptional rise in fuel prices. The growth in motorization is itself principally driven by the succession of ever-more motorized generations, especially among women, given their increasing access to driving licences and jobs, and ever-more diffuse land-use patterns, which have increased the need for a second car within households. In order to model auto-mobility, a nested, individual and longitudinal approach is implemented, segmented by gender.
Auto-mobility can indeed be seen as a sequence of nested choices, as a driving license is necessary for holding a car, while access to a personal vehicle is itself required for car use. The advantage of a longitudinal approach consists in the ability to distinguish between measures of heterogeneity and sensitivity, which can be shown not to be equivalent. For every given level of choice, the approach is based on an age-cohort-period-type analysis. Motorization rates happen to be more heterogeneous among women, a result open to an interpretation of either a social or an economic nature. Under the first interpretation, it should be regarded as an illustration of gender inequalities. However, it could also be regarded as reflecting the still-intermediary status of the second vehicle, whose opportunity is assessed depending upon the household's specific needs and constraints. On the contrary, car use is at the same time higher and more heterogeneous among men, given the collective function of the first vehicle and households' internal trade-offs in residential and job choices. Finally, average partial effects and elasticities are estimated from panel data models, with respect to income, fuel prices and density. Generally, results are consistent with the descriptive part, as well as with the literature. The model also rationally accounts for the decreasing trend in elasticities, which was often noticed in the literature and reflects the approach of saturation. As a conclusion, an a posteriori evaluation of the assumption of a sequential decision process is made, confirming that choices of motorization and car use are mutually independent.
APA, Harvard, Vancouver, ISO, and other styles
34

Stevenson, Clint W. "A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1116.

Full text
Abstract:
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
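With a single binary predictor, the logistic-regression coefficient equals the log odds ratio of the corresponding 2x2 table, which gives a quick way to see how a characteristic such as gender relates to response. A minimal sketch with made-up counts (not the exit-poll data):

```python
import math

def log_odds_ratio(responded_a, refused_a, responded_b, refused_b):
    """Logistic-regression slope for a single binary predictor:
    the log odds ratio of responding in group A versus group B."""
    odds_a = responded_a / refused_a
    odds_b = responded_b / refused_b
    return math.log(odds_a / odds_b)

# Hypothetical counts, not the 2004 exit-poll data:
# group A: 700 responded, 300 refused; group B: 600 responded, 400 refused.
beta = log_odds_ratio(700, 300, 600, 400)
print(round(beta, 4))  # 0.4418
```

A positive coefficient means group A has higher odds of responding; fitting the full model with interviewer and exogenous covariates would require a proper logistic-regression routine.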
APA, Harvard, Vancouver, ISO, and other styles
35

Ling, David. "Dynamická analýza paralelních programů na platformě .NET Framework." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445497.

Full text
Abstract:
The thesis deals with the design and implementation of a dynamic analyser of parallel applications on the .NET Framework platform. The theoretical part discusses the problems of synchronization in parallel applications, the instrumentation of such applications, the testing of parallel applications, and the specifics of these problems for the C# language and the .NET Framework platform. Selected algorithms for the detection of deadlocks (the Goodlock algorithm) and data races (the FastTrack and AtomRace algorithms) are described in detail in this part as well. Requirements for the dynamic analyser are specified and the system design is presented in the following part of the thesis. The thesis also describes the implementation of the proposed solution and the testing of the implemented tool. Last but not least, it presents an example of using the dynamic analyser in a particular application environment.
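The happens-before reasoning behind detectors in the FastTrack family can be sketched with plain vector clocks; this is an illustrative reduction, not the epoch-optimized algorithm described in the thesis:

```python
# Minimal vector-clock happens-before check: a write and another access
# to the same location race iff neither is ordered before the other.
# Illustrative reduction only -- FastTrack replaces most vector clocks
# with compact "epochs" for O(1) common-case checks.

def happens_before(vc_a, vc_b):
    """True if the event with clock vc_a happened before the one with vc_b."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(write_vc, access_vc):
    return not happens_before(write_vc, access_vc) and \
           not happens_before(access_vc, write_vc)

# Two threads, no synchronization between the accesses: a data race.
print(is_race([1, 0], [0, 1]))  # True
# A lock release/acquire orders the write before the read: no race.
print(is_race([1, 0], [2, 1]))  # False
```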
APA, Harvard, Vancouver, ISO, and other styles
36

Lapato, Dana. "Latent Growth Model Approach to Characterize Maternal Prenatal DNA Methylation Trajectories." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5995.

Full text
Abstract:
Background. DNA methylation (DNAm) is a removable chemical modification to the DNA sequence intimately associated with genomic stability, cellular identity, and gene expression. DNAm patterning reflects joint contributions from genetic, environmental, and behavioral factors. As such, differences in DNAm patterns may explain interindividual variability in risk liability for complex traits like major depression (MD). Hundreds of significant DNAm loci have been identified using cross-sectional association studies. This dissertation builds on that foundational work to explore novel statistical approaches for longitudinal DNAm analyses. Methods. Repeated measures of genome-wide DNAm and social and environmental determinants of health were collected up to six times across pregnancy and the first year postpartum as part of the Pregnancy, Race, Environment, Genes (PREG) Study. Statistical analyses were completed using a combination of the R statistical environment, Bioconductor packages, MplusAutomation, and Mplus software. Prenatal maternal DNAm was measured using the Infinium HumanMethylation450 BeadChip. Latent growth curve models were used to analyze repeated measures of maternal DNAm and to quantify site-level DNAm latent trajectories over the course of pregnancy. The purpose was to characterize the location and nature of prenatal DNAm changes and to test the influence of clinical and demographic factors on prenatal DNAm remodeling. Results. Over 1300 sites had DNAm trajectories significantly associated with either maternal age or lifetime MD. Many of the genomic regions overlapping significant results replicated previous age and MD-related genetic and DNAm findings. Discussion. Future work should capitalize on the progress made here integrating structural equation modeling (SEM) with longitudinal omics-level measures.
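The latent growth curve models were fit in Mplus; as a rough illustration of the underlying idea, a per-site trajectory can be summarized by an OLS intercept and slope over gestational weeks. This is a crude fixed-effects stand-in, not the SEM approach of the dissertation, and the beta values below are hypothetical:

```python
def linear_trajectory(weeks, methylation):
    """OLS intercept and slope of DNAm (beta values) on gestational week --
    a crude fixed-effects stand-in for a latent growth curve model."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(methylation) / n
    sxx = sum((x - mean_x) ** 2 for x in weeks)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, methylation))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

# Hypothetical beta values for one CpG site measured at 5 prenatal visits.
intercept, slope = linear_trajectory([8, 16, 24, 32, 40],
                                     [0.60, 0.63, 0.65, 0.68, 0.70])
print(round(slope, 4))  # 0.0031
```

In the latent-variable setting, the intercept and slope become random effects whose variances and covariate associations are estimated jointly across subjects.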
APA, Harvard, Vancouver, ISO, and other styles
37

Mužikovská, Monika. "Podpora pro monitorování procesů za běhu v prostředí ANaConDA." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417233.

Full text
Abstract:
This thesis extends the ANaConDA framework for dynamic analysis of multithreaded programs with the ability to analyse multi-process programs as well. Part of the thesis describes the ANaConDA framework and the mechanisms it uses for monitoring, together with the modifications these mechanisms require given the differences between processes and threads. These include the need for more complex inter-process communication, the need to translate logical addresses to a different unique identifier, and the monitoring of general semaphores. The process-monitoring extension solves these problems on behalf of analyser developers, greatly simplifying the development of analysers. The usefulness of the extension is demonstrated by implementing two race detectors (AtomRace and FastTrack) that until now could only be applied to multithreaded programs. The implementation of the FastTrack algorithm uses a happens-before relation for general semaphores, which was also defined as part of this thesis. Experiments with the analysers on student projects showed that ANaConDA is now able to detect concurrency errors in multi-process programs as well, and can thus help in the development of another class of parallel programs.
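One common way to formalize a happens-before relation for counting semaphores is to order each post before the wait that consumes its token. The FIFO token matching below is an illustrative simplification, not necessarily the thesis's exact rule:

```python
# Toy happens-before edges for a counting semaphore: each post() is
# assumed to happen before the wait() that consumes its token. FIFO
# matching of tokens is an illustrative simplification.

class Semaphore:
    def __init__(self):
        self.pending_posts = []   # event ids of unconsumed post() calls
        self.hb_edges = []        # (post_event, wait_event) pairs

    def post(self, event_id):
        self.pending_posts.append(event_id)

    def wait(self, event_id):
        producer = self.pending_posts.pop(0)  # FIFO token matching
        self.hb_edges.append((producer, event_id))

sem = Semaphore()
sem.post("P1:post")
sem.post("P2:post")
sem.wait("P3:wait")
print(sem.hb_edges)  # [('P1:post', 'P3:wait')]
```

A race detector would feed these edges into its vector clocks so that accesses ordered through the semaphore are not reported as races.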
APA, Harvard, Vancouver, ISO, and other styles
38

Gellner, Pavel. "Měření sil působících za jízdy mezi kolem a vozovkou." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417508.

Full text
Abstract:
The diploma thesis is focused on the measurement of forces acting between the tire and the road. The opening part outlines tires and tire models. In the following part, the rear right suspension of the Formula Student car was fitted with strain gauges, and the data-logging system is described. A multi-body model of the rear axle was also created in Adams/Car and SAMS software; it can calculate the forces acting between the tire and the road from the measured suspension forces, rocker position, and throttle position. After a series of calibrations and verification measurements, measurements were made on the test track, with data analysis focused on the forces acting between the tire and the road.
APA, Harvard, Vancouver, ISO, and other styles
39

Myers, Ron Y. "The Effects of the Use of Technology In Mathematics Instruction on Student Achievement." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/136.

Full text
Abstract:
The purpose of this study was to examine the effects of the use of technology on students’ mathematics achievement, particularly the Florida Comprehensive Assessment Test (FCAT) mathematics results. Eleven schools within the Miami-Dade County Public School System participated in a pilot program on the use of Geometer's Sketchpad (GSP). Three of these schools were randomly selected for this study. Each school sent a teacher to a summer in-service training program on how to use GSP to teach geometry. In each school, the GSP class and a traditional geometry class taught by the same teacher were the study participants. Students’ mathematics FCAT results were examined to determine if the GSP produced any effects. Students’ scores were compared based on assignment to the control or experimental group as well as gender and SES. SES measurements were based on whether students qualified for free lunch. The findings of the study revealed a significant difference in the FCAT mathematics scores of students who were taught geometry using GSP compared to those who used the traditional method. No significant differences existed between the FCAT mathematics scores of the students based on SES. Similarly, no significant differences existed between the FCAT scores based on gender. In conclusion, the use of technology (particularly GSP) is likely to boost students’ FCAT mathematics test scores. The findings also show that the use of GSP may be able to close known gender and SES related achievement gaps. The results of this study promote policy changes in the way geometry is taught to 10th grade students in Florida’s public schools.
APA, Harvard, Vancouver, ISO, and other styles
40

Jannatin, Raditya Derifa, and 張强. "Association Analysis on Passenger Car Recall Archived Data." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/71387575642215369246.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
102
This study analyzed 347 vehicle safety recalls taken from the National Highway Traffic Safety Administration (NHTSA) database for 2011 to 2012. Each recall case, consisting of the vehicle brand, number of affected vehicles, production date, recall date, cause of recall, affected part/component, safety consequence related to the affected component, and corrective action, is coded for further analysis. A coding scheme is designed to classify the affected component, cause of recall, and consequence into useful categories. Phi coefficient analysis is employed to examine the interdependency between affected parts and causes of recall, in order to identify root causes in affected components for future preventive action. Significant associations between affected components and causes of recall indicated that poor lubrication was often found in power-window master switches, electrical defects in lights and air bags, quality-control and corrosion problems in suspension components, failure to comply with safety standards in seats, design dimension changes in ignition switches, component fatigue in transmissions, and improper material handling during fabrication causing braking defects. Significant associations between causes and consequences showed that design defects are highly likely to cause injury and poor lubrication to cause vehicle fires, while significant associations between components and consequences indicated that defective air bags are likely to cause injury and window defects to cause vehicle fires.
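The phi coefficient used for the component-cause associations is computed from a 2x2 contingency table; a minimal sketch with hypothetical counts (not the NHTSA figures):

```python
import math

def phi_coefficient(table):
    """Phi for a 2x2 contingency table [[a, b], [c, d]] --
    the association measure used to link components with recall causes."""
    (a, b), (c, d) = table
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical counts: recalls cross-tabulated by component (rows:
# power-window switch vs. other) and cause (cols: poor lubrication vs. other).
print(round(phi_coefficient([[8, 2], [12, 90]]), 3))  # 0.508
```

Phi ranges from -1 to 1; values near zero indicate no association between the component and the cause category.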
APA, Harvard, Vancouver, ISO, and other styles
41

Camilo, Giancarlo. "Demand analysis and privacy of floating car data." Thesis, 2019. http://hdl.handle.net/1828/11150.

Full text
Abstract:
This thesis investigates two research problems in analyzing floating car data (FCD): automated segmentation and privacy. For the former, we design an automated segmentation method based on the social functions of an area to enhance existing traffic demand analysis. This segmentation is used to create an extension of the traditional origin-destination matrix that can represent origins of traffic demand. The methods are then combined for interactive visualization of traffic demand, using a floating car dataset from a ride-hailing application. For the latter, we investigate the properties in FCD that may lead to privacy leaks. We present an attack on a real-world taxi dataset, showing that FCD, even though anonymized, can potentially leak privacy.
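An origin-destination matrix over the segmented zones can be built by simple aggregation of zone-tagged trips; a sketch with hypothetical records (the zone labels stand in for the social-function segmentation described in the thesis):

```python
from collections import Counter

def od_matrix(trips):
    """Aggregate floating-car trip records into an origin-destination
    matrix keyed by (origin_zone, destination_zone)."""
    return Counter((t["origin"], t["dest"]) for t in trips)

# Hypothetical zone-tagged trips.
trips = [
    {"origin": "residential", "dest": "business"},
    {"origin": "residential", "dest": "business"},
    {"origin": "business", "dest": "leisure"},
]
print(od_matrix(trips)[("residential", "business")])  # 2
```

The thesis's extension additionally attaches demand-origin attributes to each cell; the aggregation step itself stays the same.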
Graduate
APA, Harvard, Vancouver, ISO, and other styles
42

Liao, Hsiang-Yun, and 廖翔允. "A simulation analysis of structure strength for a race car roll-cage." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/71795714356520843363.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Vehicle Engineering
104
With racing cars' performance getting better and faster, driver safety needs to be valued more, and the roll cage is one of the principal means of protecting the driver. When a vehicle overturns, the roll cage reduces deformation of the cockpit; when the vehicle is hit, the impact force can cause the roll cage to deform and break. But the strength of the welds is far lower than that of the roll cage itself, so the welds would break before the roll cage body, affecting the overall strength of the roll cage. The main purpose of this thesis is the analysis of the structural strength of the roll cage and its improvement. This paper applies the numerical simulation software SolidWorks Simulation to simulate impacts on the roll cage at several speeds and to observe whether the welds break first. The stress analysis of the roll cage structure shows that when the roll cage is hit, failure does not start at the welds; instead, the bent tubes of the roll cage break first. Because of this result, the bent tubes must be reinforced in order to improve the strength of the overall body. Comparing the data from before and after the improvements validates that, whether the impact comes from directly above or obliquely, the reinforcements effectively raise the impact velocity the structure can tolerate before damage. These results show that reinforcing the roll cage can effectively increase its overall strength.
APA, Harvard, Vancouver, ISO, and other styles
43

Raman, Raghavan. "Dynamic Data Race Detection for Structured Parallelism." Thesis, 2012. http://hdl.handle.net/1911/71681.

Full text
Abstract:
With the advent of multicore processors and an increased emphasis on parallel computing, parallel programming has become a fundamental requirement for achieving available performance. Parallel programming is inherently hard because, to reason about the correctness of a parallel program, programmers have to consider large numbers of interleavings of statements in different threads in the program. Though structured parallelism imposes some restrictions on the programmer, it is an attractive approach because it provides useful guarantees such as deadlock-freedom. However, data races remain a challenging source of bugs in parallel programs. Data races may occur only in few of the possible schedules of a parallel program, thereby making them extremely hard to detect, reproduce, and correct. In the past, dynamic data race detection algorithms have suffered from at least one of the following limitations: some algorithms have a worst-case linear space and time overhead, some algorithms are dependent on a specific scheduling technique, some algorithms generate false positives and false negatives, some have no empirical evaluation as yet, and some require sequential execution of the parallel program. In this thesis, we introduce dynamic data race detection algorithms for structured parallel programs that overcome past limitations. We present a race detection algorithm called ESP-bags that requires the input program to be executed sequentially and another algorithm called SPD3 that can execute the program in parallel. While the ESP-bags algorithm addresses all the above mentioned limitations except sequential execution, the SPD3 algorithm addresses the issue of sequential execution by scaling well across highly parallel shared memory multiprocessors. Our algorithms incur constant space overhead per memory location and time overhead that is independent of the number of processors on which the programs execute. 
Our race detection algorithms support a rich set of parallel constructs (including async, finish, isolated, and future) that are found in languages such as HJ, X10, and Cilk. Our algorithms for async, finish, and future are precise and sound for a given input. In the presence of isolated, our algorithms are precise but not sound. Our experiments show that our algorithms (for async, finish, and isolated) perform well in practice, incurring an average slowdown of under 3x over the original execution time on a suite of 15 benchmarks. SPD3 is the first practical dynamic race detection algorithm for async-finish parallel programs that can execute the input program in parallel and use constant space per memory location. This takes us closer to our goal of building dynamic data race detectors that can be "always-on" when developing parallel applications.
APA, Harvard, Vancouver, ISO, and other styles
44

Vaseghi, Payam. "Benchmarking of Advertising Efficiency in U.S. Car Market Using Data Envelopment Analysis." Thesis, 2012. http://spectrum.library.concordia.ca/974644/1/Vaseghi_MSc_F2012.pdf.

Full text
Abstract:
Measuring advertising efficiency is an important and challenging issue in marketing. It is important because advertising spending consumes the largest part of a marketing budget, yet many firms have difficulty determining the optimal level of advertising budget and allocating it across different media. And it is challenging because it is difficult to find a methodology that can incorporate the multiple effects of advertising (cognitive, affective and behavioral), measure efficiency in a competitive setting, and provide guidelines for advertising improvement. This thesis explores the usability of an alternative method, data envelopment analysis (DEA), for measuring advertising efficiency. The focus of this research, which comprises two studies, is to benchmark the advertising efficiency of major car models in the U.S. car market by applying DEA. The objective of the first study is to measure the level of over-advertising at the macro level, across the whole industry, and to determine the level of advertising inefficiency in each major medium. The objective of the second study is to measure the advertising inefficiency of each car model in creating different levels of advertising effects, and to investigate the influence of strategy on advertising effects and efficiency.
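Full DEA solves a linear program per decision-making unit; in the single-input/single-output special case the input-oriented CCR score collapses to a simple ratio, which the following hedged sketch illustrates (the car models and figures are made up, and real advertising DEA would use several inputs and outputs):

```python
def ccr_efficiency(dmus):
    """Input-oriented CCR efficiency in the single-input/single-output
    special case, where the linear program reduces to a ratio:
    eff_j = (y_j / x_j) / max_k (y_k / x_k). A sketch, not full DEA."""
    best = max(y / x for x, y in dmus.values())
    return {name: (y / x) / best for name, (x, y) in dmus.items()}

# Hypothetical car models: (advertising spend, awareness effect gained).
eff = ccr_efficiency({"model_a": (2.0, 4.0), "model_b": (4.0, 4.0)})
print(eff)  # {'model_a': 1.0, 'model_b': 0.5}
```

A score of 1.0 marks a unit on the efficient frontier; model_b's 0.5 says it could, in principle, achieve the same output with half the input.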
APA, Harvard, Vancouver, ISO, and other styles
45

YINGHUI, ZHU, and 朱穎慧. "Can Gold Hedge Against The Risks of Exchange Rate? – The Analysis of Intraday Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/j3w847.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Master's Program in Finance, Department of Finance and International Business
106
In this paper, I use intraday data on the gold price and exchange rates as samples and build a model, based on the Realized BiPower Covariance model proposed by Barndorff-Nielsen & Shephard (2004a), to estimate the covariance of the gold return with the exchange rate return. The Median Realized Volatility model proposed by Andersen, Dobrev & Schaumburg (2010) is used to estimate the corresponding realized volatility and to calculate the regression coefficient of the daily gold price on the exchange rate, referred to as "beta". To examine the factors that influence gold's ability to hedge against the devaluation of the USD, we analyse the VIX index, S&P 500, TIPS, CPI and its announcement dates to see their relationship with the realized beta coefficient. We find that different combinations of variables have significant impacts on the beta in the three foreign-exchange markets. Gold-related news and changes in the expected inflation rate on announcement dates significantly affect the beta for the Euro, and the VIX index significantly affects the beta for the British Pound. The return on the stock price index, changes in the daily commodity price index, Treasury inflation-protected securities and the expected inflation rate influence the beta for the Japanese Yen. In threshold regression, we find a gold-related-news threshold effect on the beta for the Euro, and a threshold effect of the stock return on the beta for the Japanese Yen. We also use logistic regression to analyse the factors associated with the beta coefficient being greater than zero, that is, with gold acting as a hedge. The empirical results show that the factors influencing the hedging effect of gold differ across the three exchange markets.
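The realized beta in question is a ratio of realized covariance to realized variance computed from intraday returns; a plain (non-jump-robust) sketch with hypothetical 5-minute returns:

```python
def realized_beta(gold_returns, fx_returns):
    """Daily realized beta of gold on the exchange rate: realized
    covariance over realized variance of intraday returns. Plain
    cross-products are used here; the thesis uses jump-robust
    bipower/median-RV estimators instead."""
    cov = sum(g * f for g, f in zip(gold_returns, fx_returns))
    var = sum(f * f for f in fx_returns)
    return cov / var

# Hypothetical 5-minute returns for one trading day.
g = [0.002, -0.001, 0.003, -0.002]
f = [-0.001, 0.001, -0.002, 0.001]
print(round(realized_beta(g, f), 3))  # -1.571
```

A negative beta, as here, is what one expects of a hedge: gold tends to rise when the dollar-denominated exchange rate falls.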
APA, Harvard, Vancouver, ISO, and other styles
46

Lu, Ching-Wen, and 呂靖文. "Applying data mining techniques on the fuel consumption analysis model of driving a car." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/34672039249650076191.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
98
Factors affecting fuel consumption include driving behavior, traffic conditions, and the condition of the vehicle itself. We use a digital tachograph and OBD-II to collect driver information, including speed, acceleration, angular velocity, and other driving-behavior variables that affect fuel consumption. Applying regression analysis and neural network methods, we derive several formulas for estimating fuel consumption and build the corresponding analysis models, allowing users to estimate fuel consumption from their own actions (braking, accelerating, turning left or right).
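As a toy illustration of estimating fuel use from driver actions, one can average the logged fuel cost per action and sum over a planned sequence of actions. This is a crude stand-in for the regression and neural-network models of the thesis, and all numbers are hypothetical:

```python
from collections import defaultdict

def fit_action_model(events):
    """Average fuel cost per driving action, learned from logged
    (action, fuel) pairs -- a crude stand-in for the thesis's models."""
    totals, counts = defaultdict(float), defaultdict(int)
    for action, fuel in events:
        totals[action] += fuel
        counts[action] += 1
    return {a: totals[a] / counts[a] for a in totals}

def estimate(model, actions):
    """Estimated fuel for a sequence of planned actions."""
    return sum(model[a] for a in actions)

# Hypothetical (action, fuel-per-event) training log from OBD-II data.
model = fit_action_model([("accelerate", 3.0), ("accelerate", 5.0),
                          ("brake", 1.0), ("turn", 2.0)])
print(estimate(model, ["accelerate", "brake"]))  # 5.0
```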
APA, Harvard, Vancouver, ISO, and other styles
47

"Bayesian analysis for time series of count data." Thesis, 2014. http://hdl.handle.net/10388/ETD-2014-07-1589.

Full text
Abstract:
Time series involving count data are present in a wide variety of applications. In many applications, the observed counts are usually small and dependent. Failure to take these facts into account can lead to misleading inferences and may detect false relationships. To tackle such issues, a Poisson parameter-driven model is assumed for the time series at hand. This model can account for the time dependence between observations by introducing an autoregressive latent process. In this thesis, we consider Bayesian approaches for estimating the Poisson parameter-driven model. The main challenge is that the likelihood function for the observed counts involves a high-dimensional integral after integrating out the latent variables. The main contributions of this thesis are threefold. First, I develop a new single-move (SM) Markov chain Monte Carlo (MCMC) method to sample the latent variables one by one. Second, I adapt the idea of the particle Gibbs sampler (PGS) method of Andrieu et al. to our model setting and compare its performance with the SM method. Third, I consider Bayesian composite likelihood methods and compare three different adjustment methods with the unadjusted method and the SM method. The comparisons provide a practical guide to which method to use. We conduct simulation studies to compare the latter two methods with the SM method. We conclude that the SM method outperforms the PGS method for small sample sizes, while they perform almost the same for large sample sizes; however, the SM method is much faster than the PGS method. The adjusted Bayesian composite methods provide results closer to those of the SM method than the unadjusted one. The PGS and the adjustment method selected from the simulation studies are compared with the SM method via a real data example, with similar results: first, the PGS method provides results very close to those of the SM method; second, the adjusted composite likelihood methods provide results closer to those of the SM method than the unadjusted one.
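The data-generating model under study (counts whose log-intensity follows a latent AR(1) process) can be sketched as follows; this simulates the model itself, not the SM or PGS samplers, and the parameter values are arbitrary:

```python
import math
import random

def simulate_poisson_ar1(n, phi, sigma, beta0, seed=1):
    """Simulate a Poisson parameter-driven model: y_t ~ Poisson(exp(beta0 + z_t))
    with latent AR(1) process z_t = phi * z_{t-1} + eps_t, eps_t ~ N(0, sigma^2).
    A sketch of the data-generating process, not of the MCMC methods."""
    rng = random.Random(seed)
    z, ys = 0.0, []
    for _ in range(n):
        z = phi * z + rng.gauss(0.0, sigma)
        rate = math.exp(beta0 + z)
        # crude Poisson draw by CDF inversion (fine for moderate rates)
        u, k, p = rng.random(), 0, math.exp(-rate)
        cdf = p
        while u > cdf:
            k += 1
            p *= rate / k
            cdf += p
        ys.append(k)
    return ys

counts = simulate_poisson_ar1(n=10, phi=0.8, sigma=0.3, beta0=0.5)
print(len(counts))  # 10
```

The samplers compared in the thesis all target the posterior of (z_1, ..., z_n) and the model parameters given such observed counts.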
APA, Harvard, Vancouver, ISO, and other styles
48

"Dynamic Analysis of Embedded Software." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.35414.

Full text
Abstract:
Most embedded applications are constructed with multiple threads to handle concurrent events. For optimization and debugging of the programs, dynamic program analysis is widely used to collect execution information while the program is running. Unfortunately, the non-deterministic behavior of multithreaded embedded software makes the dynamic analysis difficult. In addition, instrumentation overhead for gathering execution information may change the execution of a program, and lead to distorted analysis results, i.e., probe effect. This thesis presents a framework that tackles the non-determinism and probe effect incurred in dynamic analysis of embedded software. The thesis largely consists of three parts. First of all, we discuss a deterministic replay framework to provide reproducible execution. Once a program execution is recorded, software instrumentation can be safely applied during replay without probe effect. Second, a discussion of probe effect is presented and a simulation-based analysis is proposed to detect execution changes of a program caused by instrumentation overhead. The simulation-based analysis examines if the recording instrumentation changes the original program execution. Lastly, the thesis discusses data race detection algorithms that help to remove data races for correctness of the replay and the simulation-based analysis. The focus is to make the detection efficient for C/C++ programs, and to increase scalability of the detection on multi-core machines.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2015
APA, Harvard, Vancouver, ISO, and other styles
49

Singh, A., M. Jenamani, J. J. Thakker, and Nripendra P. Rana. "Propagation of online consumer-perceived negativity: Quantifying the effect of supply chain underperformance on passenger car sales." 2021. http://hdl.handle.net/10454/18456.

Full text
Abstract:
Yes
The paper presents a text analytics framework that analyses online reviews to explore how consumer-perceived negativity corresponding to the supply chain propagates over time and how it affects car sales. In particular, the framework integrates aspect-level sentiment analysis using SentiWordNet, time-series decomposition, and the bias-corrected least squares dummy variable estimator (LSDVc), a panel data estimator. The framework serves the business community by providing a list of consumers’ contemporary interests in the form of frequently discussed product attributes; quantifying the consumer-perceived performance of supply chain (SC) partners and comparing competitors; and modelling various firms’ sales performance. The proposed framework is demonstrated on the automobile supply chain using a review dataset obtained from a renowned car portal in India. Our findings suggest that consumer-voiced negativity is greatest for dealers and smallest for manufacturing- and assembly-related features. Firm age, GDP, and review volume significantly influence car sales, whereas the sentiments corresponding to SC partners do not. The proposed research framework can help manufacturers inspect their SC partners, recognise the critical consumer-cited influencers of car sales, and accurately predict sales, which in turn can support better production planning, supply chain management, marketing, and consumer relationships.
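Aspect-level scoring of the kind described can be sketched with a toy polarity lexicon applied over a context window around each aspect term. The lexicon values and aspect-to-SC-partner map below are hypothetical, not SentiWordNet scores:

```python
# Toy aspect-level sentiment: average the polarity of opinion words
# within a window around each aspect term. Hypothetical lexicon and
# aspect map -- the paper uses SentiWordNet, not these numbers.

LEXICON = {"great": 0.8, "smooth": 0.6, "poor": -0.7, "noisy": -0.5}
ASPECTS = {"engine": "manufacturing", "dealer": "dealer"}

def aspect_sentiment(review, window=3):
    scores = {}
    words = review.lower().split()
    for i, w in enumerate(words):
        if w in ASPECTS:
            context = words[max(0, i - window):i + window + 1]
            polar = [LEXICON[t] for t in context if t in LEXICON]
            if polar:
                scores.setdefault(ASPECTS[w], []).append(sum(polar) / len(polar))
    return {k: sum(v) / len(v) for k, v in scores.items()}

result = aspect_sentiment("great engine but poor dealer service")
print({k: round(v, 2) for k, v in result.items()})
```

The per-aspect scores would then be aggregated over time and fed, after time-series decomposition, into the panel sales model.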
The full-text of this article will be released for public view at the end of the publisher embargo on 21 Oct 2022.
APA, Harvard, Vancouver, ISO, and other styles
50

Poissant, Mathieu. "Statistical methods for insurance fraud detection." Thèse, 2008. http://hdl.handle.net/1866/8191.

Full text
APA, Harvard, Vancouver, ISO, and other styles