
Dissertations / Theses on the topic 'Operating approach'


Consult the top 50 dissertations / theses for your research on the topic 'Operating approach.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Leslie, Rebekah. "A Functional Approach to Memory-Safe Operating Systems." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/499.

Abstract:
Purely functional languages--with static type systems and dynamic memory management using garbage collection--are a known tool for helping programmers to reduce the number of memory errors in programs. By using such languages, we can establish correctness properties relating to memory-safety through our choice of implementation language alone. Unfortunately, the language characteristics that make purely functional languages safe also make them more difficult to apply in a low-level domain like operating systems construction. The low-level features that support the kinds of hardware manipulations required by operating systems are not typically available in memory-safe languages with garbage collection. Those that are provided may have the ability to violate memory- and type-safety, destroying the guarantees that motivate using such languages in the first place. This work demonstrates that it is possible to bridge the gap between the requirements of operating system implementations and the features of purely functional languages without sacrificing type- and memory-safety. In particular, we show that this can be achieved by isolating the potentially unsafe memory operations required by operating systems in an abstraction layer that is well integrated with a purely functional language. The salient features of this abstraction layer are that the operations it exposes are memory-safe and yet sufficiently expressive to support the implementation of realistic operating systems. The abstraction layer enables systems programmers to perform all of the low-level tasks necessary in an OS implementation, such as manipulating an MMU and executing user-level programs, without compromising the static memory-safety guarantees of programming in a purely functional language. A specific contribution of this work is an analysis of memory-safety for the abstraction layer by formalizing a meaning for memory-safety in the presence of virtual-memory using a novel application of noninterference security policies. In addition, we evaluate the expressiveness of the abstraction layer by implementing the L4 microkernel API, which has a flexible set of virtual memory management operations.
2

Penn, Marion Louise. "Developing a multi-methodological approach to hospital operating theatre scheduling." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/366470/.

Abstract:
Operating theatres and surgeons are among the most expensive resources in any hospital, so it is vital that they are used efficiently. Due to the complexity of the challenges involved in theatre scheduling, we split the problem into levels and address the tactical and day-to-day scheduling problems. Cognitive mapping is used to identify the important factors to consider in theatre scheduling and their interactions. This allows development and testing of our understanding with hospital staff, ensuring that the aspects of theatre scheduling they consider important are included in the quantitative modelling. At the tactical level, our model assists hospitals in creating new theatre timetables, taking into account reducing the maximum number of beds required, surgeons’ preferences, surgeons’ availability, variations in types of theatre and their suitability for different types of surgery, limited equipment availability, and varying the length of the cycle over which the timetable is repeated. The weightings given to each of these factors can be varied, allowing exploration of possible timetables. At the day-to-day scheduling level we focus on the advanced booking of individual patients for surgery. Using simulation, a range of algorithms for booking patients is explored, with the algorithms derived from a mixture of scheduling literature and ideas from hospital staff. The most significant result is that more efficient schedules can be achieved by delaying scheduling as close to the time of surgery as possible; however, this must be balanced against the need to give patients adequate warning to make arrangements to attend hospital for their surgery. The different stages of this project present different challenges and constraints, therefore requiring different methodologies. As a whole, this thesis demonstrates that a range of methodologies can be applied to different stages of a problem to develop better solutions.
3

Meinhardt, Johan, and Dennis Kallin. "Designing a company-specific Production System : Developing an appropriate operating approach." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-126802.

Abstract:
To boost operational performance and ultimately competitiveness, firms choose to develop company-specific Production Systems (XPS). The management literature suggests that an XPS must be tailored to the firm's operating context to yield full effect. This explorative case study examines how to design an XPS that provides an appropriate operating approach. Clarifying terminological confusion, the study proposes an XPS framework derived from the literature that encompasses three levels of operating elements: philosophical, principle, and practice. Investigating how to prioritize among these elements, the study empirically validates the importance of tailoring firm operating approaches. In particular, categorizing practices as technical or socio-technical, and internal or external, the study contradicts existing research and posits that (1) socio-technical practices are a prerequisite for the adoption of technical practices and (2) practices classified as internal also have an external dimension. In addition, the results indicate that an XPS must evolve as contextual requirements and prerequisites change, thus making the design of an XPS dynamic. Finally, this study proposes a case-specific production system, tailored to the requirements of the research object's market, organizational, and process context.
4

Sham, Gregory C. (Gregory Chi-Keung). "Developing a data-driven approach for improving operating room scheduling processes." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/73397.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 52).
In the current healthcare environment, the cost of delivering patient care is an important concern for hospitals. As a result, healthcare organizations are being driven to maximize their existing resources, both in terms of infrastructure and human capital. Using a data-driven approach with analytical techniques from operations management can contribute towards this goal. More specifically, this thesis shows, drawing from a recent project at Beth Israel Deaconess Medical Center (BIDMC), that predictive modeling can be applied to operating room (OR) scheduling in order to effectively increase capacity. By examining the current usage of the existing block schedule system at BIDMC and developing a linear regression model, OR time that is expected to go unused can instead be identified in advance and freed for use. Sample model results show that the model is expected to be operationally effective, capturing a large enough portion of OR time for a pooled set of blocks to be useful for advanced scheduling purposes. This analytically determined free time represents an improvement in how the current block system is employed, especially in terms of the nominal block release time. This thesis makes the argument that such a model can integrate into a scheduling system with more efficient and flexible processes, ultimately resulting in more effective usage of existing resources.
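The core predictive step described in this abstract can be illustrated with a short sketch: regress historical unused block minutes on simple block features, then release blocks whose predicted unused time clears a safety margin. Everything here (the features, the toy data, and the 60-minute margin) is an invented illustration, not the BIDMC model from the thesis.

```python
# Hypothetical sketch of regression-based OR block release (not the BIDMC model).
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy historical data: one row per block-day.
# Features: [scheduled_minutes, cases_booked, days_until_surgery]
X = np.array([
    [300, 4, 14], [420, 6, 10], [180, 2, 21],
    [360, 5, 7],  [240, 3, 12], [480, 7, 5],
])
# Target: minutes of the block that ultimately went unused.
y = np.array([120, 30, 240, 60, 150, 15])

model = LinearRegression().fit(X, y)

def releasable_minutes(block_features, margin=60):
    """Return predicted unused minutes if they clear a safety margin, else 0."""
    predicted = float(model.predict(np.asarray(block_features).reshape(1, -1))[0])
    return predicted if predicted >= margin else 0.0

print(releasable_minutes([300, 3, 14]))  # time that could be freed for pooled scheduling
```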
5

Hajdukiewicz, John R. "Development of a structured approach for patient monitoring in the operating room." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0013/MQ34145.pdf.

6

Dong, Yan. "Terminal operating plan and achievability : an approach to improve rail terminal performance." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36506.

7

Yap, Xiang Ling. "A model-based approach to regulating electricity distribution under new operating conditions." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/72905.

Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2012. Vita. Cataloged from PDF version of thesis. Includes bibliographical references (p. 147-152).
New technologies such as distributed generation and electric vehicles are connecting to the electricity distribution grid, a regulated natural monopoly. Existing regulatory schemes were not designed for these new technologies and may not provide distribution companies with adequate remuneration to integrate the new technologies. To investigate how regulation should change in the presence of new technologies, current regulatory schemes and possible improvements to make them suitable for new technologies are reviewed. A Reference Network Model capable of calculating the costs of building a distribution network is utilized to compare the costs of accommodating different penetrations and locations of distributed generation. Results for residential generators with a 3 kW/unit power output show that as the penetration of generators among residential customers increases, costs initially decrease but then increase at higher penetration levels. A comparison of results for residential generators with results for distributed generator conurbations located far away from customers shows that residential and far-away generators require similar investment costs when total distributed generation power output is lower than effective customer demand. However, when total distributed generation power output exceeds effective demand, residential generators necessitate higher investment costs than far-away generators. At all levels of distributed generation power output, residential generators imply lower losses costs than far-away generators. A second Reference Network Model capable of calculating the costs of expanding an existing distribution network is utilized to compare the costs of expanding a network to accommodate new technologies under different technology management approaches. Results show that network investment costs are lower for an actively managed network than for a passively managed network, illustrating the potential benefits of active management. Based on an analysis of the modeling results and the regulatory review, an ex ante schedule of charges for distributed generators that incorporates forecast levels of DG penetration is suggested to improve remuneration adequacy for the costs of integrating distributed generation. To promote active management of distribution networks, measures such as funding pots, outputs-focused regulatory schemes, and regulating total expenditure rather than separating the regulation of capital and operating expenditure are selected as proposals.
8

Kneip, Frank. "Iterative Learning Control for Nonlinear Systems: An Operating Regime Based Approach." Aachen: Shaker, 2007. http://d-nb.info/1166511995/34.

9

Baggerohr, Stephan. "A deep learning approach towards diagnostics of bearings operating under non-stationary conditions." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/73452.

Abstract:
Faults in bearings usually manifest as marginal defects that intensify over time, allowing for well-informed preventative actions with early Fault Detection and Diagnosis (FDD) protocols. Detection of the fault begins with capturing, for example, acceleration signals from a machine. Traditionally, handpicked descriptive statistical features (mean, RMS, skewness, kurtosis, etc.) or spectral diagrams obtained from these signals are then used for FDD. However, machine signals are often generated under non-stationary operating conditions of varying loads and speeds, requiring further intervention. More advanced signal processing techniques (spectral kurtosis or cyclostationary analysis) are hence used to account for the non-stationarity of the signal. This is usually done by separating acceleration signals into deterministic and random components; fault detection in bearings is possible by observing the random components of the signal. A wealth of research has been invested in machine learning-based techniques to circumvent the problems associated with non-stationary signals. Many of these methods require vast amounts of historical data to train. Machines typically spend most of their life operating in a healthy condition, so most historical data comes from a healthy machine state; training these methods is therefore difficult, owing to the shortage of data from machines running in an unhealthy condition. Furthermore, well-performing machine learning algorithms still require a domain expert to extract features that are known to be fault sensitive. Deep learning is a recent approach in data analysis whereby feature extraction is incorporated within the training of the algorithm: the algorithm is given the ability to find and extract its own features, and its architecture allows for the extraction of complex hierarchical non-linear features. To the author’s knowledge, no attempt has been made to make full use of the power of deep learning together with the known structure of bearing acceleration signals to perform FDD. In this work, a bearing FDD methodology is developed using deep learning approaches. A model based on Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) is used to learn a lower-dimensional representation of an acceleration signal. A regularization strategy based on information maximization is used, which allows deterministic and random components of the signals to be learned separately. This representation is subsequently used to perform bearing FDD. The algorithm is trained in a completely unsupervised manner on exclusively healthy data and requires no preprocessing of that data. Furthermore, no auxiliary signal, such as a shaft encoder containing information about the machine operating condition, is required for the algorithm to work. The methodology was tested on well-known benchmark datasets and shown to be robust against non-stationary operating conditions. The algorithm can learn its own fault metric and, by observing the trajectory of the signal representation, diagnose the type of fault. Dissertation (MEng)--University of Pretoria, 2019.
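As a rough illustration of the train-on-healthy-data-only idea above, the sketch below substitutes a PCA "healthy subspace" for the thesis's GAN/VAE representation and flags windows the subspace reconstructs poorly. The synthetic signals, the 8-component basis, and the 99th-percentile threshold are all assumptions for illustration.

```python
# Simplified stand-in for the unsupervised FDD idea: learn a model of healthy
# vibration windows only, then flag windows the model reconstructs poorly.
# PCA replaces the GAN/VAE representation here purely for brevity.
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 64))               # healthy acceleration windows
faulty = healthy[:5] + 4.0 * np.sin(np.linspace(0, 8, 64))   # synthetic fault pattern

# Fit a low-dimensional "healthy subspace" via PCA on healthy data alone.
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
basis = vt[:8]                                               # top 8 principal directions

def anomaly_score(window):
    """Reconstruction error w.r.t. the healthy subspace; high means fault-like."""
    centered = window - mean
    recon = (centered @ basis.T) @ basis
    return float(np.linalg.norm(centered - recon))

threshold = np.percentile([anomaly_score(w) for w in healthy], 99)
print([anomaly_score(w) > threshold for w in faulty])        # expect mostly True
```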
10

Goddard, Timothy Glenn. "A systematic approach to the design of weaving preparation systems operating under variable demand." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/8526.

11

Kalidindi, Srinivas R. "Linux Operating System Configuration Management Framework: A Scalable and Efficient Approach Using Open Source Utilities." Ohio : Ohio University, 2007. http://www.ohiolink.edu/etd/view.cgi?ohiou1193950374.

12

Gong, Xiao Yan. "Identifying and minimising preventable delay within the operating theatre management process: an adapted lean thinking approach." University of Southern Queensland, Faculty of Business, 2009. http://eprints.usq.edu.au/archive/00006196/.

Abstract:
This study examines how preventable delay could be identified and minimised by using adapted lean thinking within the Operating Theatre Management Process (OTMP). The study uses the operating theatre of a regional hospital in Toowoomba (Queensland, Australia) as a case study. The theoretical framework for this study comprised socio-technical system theory and coordination theory. From the perspective of socio-technical system theory, each activity within the OTMP has two types of elements: social elements and technical elements. Coordination theory, on the other hand, considers the coordination between the various elements of the activities. Time and motion study has been employed to analyse activities in terms of operation, transportation, delay, and monitoring within the operating rooms. Subsequently, adapted lean thinking has been employed as an integrating approach to identify preventable delay and disruption within both value-added and non-value-added activities. Identifying preventable delay within the value-added activities inside the operating room is one of the most important contributions of this study. This research uses an exploratory qualitative case study. The focus of this research is to study activities inside the operating rooms, rather than the whole OTMP. Notwithstanding the limited time available to the researcher within a Master's degree, the study sought to establish the direct link between the activities inside the operating rooms and patients’ waiting time. Data were collected from 22 surgery cases through direct observations. In each surgery, the research team followed patient progress from the pre-operative holding area through to discharge. The researcher observed and recorded the timing of all the activities inside the operating rooms, capturing as much detail as possible to allow identification of problems. Moreover, initial observation results were verified, and additional information was collected as necessary, through communications and interviews with medical staff (surgeons, scrub nurses, technicians, etc.) and review of documents. The study indicates that coordination, motion economy, consent forms, protocol policy, and surgeon preference sheets were the major areas impacting preventable delay in the operating theatre suite activities. With the application of lean thinking, the results suggest that preventable delay and disruption within both value-added and non-value-added activities could be eliminated or minimised through better work organization, motion economy training, and better coordination of tasks. For further study, a benchmarking-based study could be conducted to see whether similar sets of preventable delay are observed in other healthcare institutions. In addition, examination of other related sections in a hospital is highly desirable to identify the wide range of preventable delay within the OTMP. This, in turn, will help to improve OTMP efficiency and, accordingly, reduce waiting-list times.
13

Cao, Nan. "Approach to determine and evaluate the homogeneity of operating programs in railway operation based on infrastructure occupancy." Supervisor: Ullrich Martin. Stuttgart: Universitätsbibliothek der Universität Stuttgart, 2017. http://d-nb.info/1144955750/34.

14

Cobb, Nicholas. "A Problem Solving Approach to Enterprise FileVault 2 Management and Integration." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1296.

Abstract:
Consumer technology adoption into large enterprise environments is occurring at an unprecedented rate. Employees require the flexibility and efficiency of using operating systems, computers, and mobility products they are familiar with and that enable their productivity. Due to this industry phenomenon, one large shipping enterprise must work to create solutions to integrate Apple’s OS X operating system into its traditional Windows-based operating environment. This level of integration must take place carefully to enable usability and foster the continued data security of enterprise assets. This paper describes the steps and methodology taken, as well as the rationale used, to accomplish the task of integrating Apple’s FileVault 2 full disk encryption technology into existing McAfee management infrastructure and traditional deployment and support workflows. Using a combination of industry and community solutions and techniques, a low-cost software solution named EscrowToEPO is created to facilitate the secure and user-friendly adoption of FileVault 2 as a full disk encryption solution. This paper also includes the success/failure rate of adoption and implications as to how the adoption of similar solutions can occur to support future operating systems or other environments.
15

Nordlund, Fredrik Hans. "Enabling Network-Aware Cloud Networked Robots with Robot Operating System : A machine learning-based approach." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160877.

Abstract:
In recent years, a new area called Cloud Networked Robotics (CNR) has evolved from conventional robotics, thanks to the increasing availability of cheap robot systems and steady improvements in the area of cloud computing. Cloud networked robots are robots with the ability to offload computation-heavy modules to a cloud, in order to make use of storage, scalable computation power, and other functionalities enabled by a cloud, such as knowledge shared between robots on a global level. However, these cloud robots face a problem with the reachability and QoS of crucial modules that are offloaded to the cloud when operating in unstable network environments. Under such conditions, the robots might lose the connection to the cloud at any moment, in the worst case leaving the robots “brain-dead”. This thesis project proposes a machine-learning-based, network-aware framework for a cloud robot that can choose the most efficient module placement based on location, task, and network condition. The proposed solution was implemented on a cloud robot prototype based on the TurtleBot 2 robot development kit, running Robot Operating System (ROS). A continuous experiment was conducted in which the cloud robot was ordered to execute a simple task in the laboratory corridor under various network conditions. The proposed solution was evaluated by comparing the results from the continuous experiment with measurements taken from the same robot doing the same task with all modules placed locally. The results show that the proposed framework can potentially decrease battery consumption by 10% while improving the efficiency of the task by 2.4 seconds (2.8%). However, there is an inherent bottleneck in the proposed solution: each new robot would need 2 months to accumulate enough data for the training set in order to show good performance. The proposed solution could benefit the area of CNR if connected and integrated with a shared-knowledge platform, which would enable new robots to skip the training phase by downloading existing knowledge from the cloud.
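A minimal sketch of the placement decision at the heart of such a framework might look as follows, with k-nearest-neighbour regressors standing in for the thesis's learning component; the network features, cost figures, and simple two-way local/cloud split are invented for illustration.

```python
# Sketch of network-aware module placement: predict task cost for each placement
# from past runs and pick the cheaper one. Numbers are illustrative only; the
# real framework also weighs factors such as battery consumption.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Past runs: [bandwidth_mbps, latency_ms] -> task completion time in seconds.
history_x = np.array([[50, 10], [40, 20], [5, 120], [2, 300], [30, 40], [1, 500]])
cloud_cost = np.array([80.0, 84.0, 140.0, 220.0, 95.0, 300.0])    # modules offloaded
local_cost = np.array([110.0, 110.0, 112.0, 111.0, 110.0, 113.0]) # modules on robot

cloud_model = KNeighborsRegressor(n_neighbors=2).fit(history_x, cloud_cost)
local_model = KNeighborsRegressor(n_neighbors=2).fit(history_x, local_cost)

def choose_placement(bandwidth_mbps, latency_ms):
    x = [[bandwidth_mbps, latency_ms]]
    return "cloud" if cloud_model.predict(x)[0] < local_model.predict(x)[0] else "local"

print(choose_placement(45, 15))  # good network -> offload
print(choose_placement(2, 400))  # poor network -> keep modules local
```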
16

Poprawa, Stefan. "Statistical approach to payload capability forecasting for large commercial aircraft operating payload range limited routes." Thesis, University of Pretoria, 2019. http://hdl.handle.net/2263/72847.

Abstract:
Large commercial aircraft by design are typically not capable of transporting maximum fuel capacity and maximum payload simultaneously; maximum payload range remains less than maximum range. When an aircraft is operated on a route that may exceed its maximum payload range capability, environmental conditions can vary the payload capability by as much as 20%. An airline’s commercial department needs to know of such restrictions well in advance, to restrict booking levels accordingly. Current forecasting approaches use monthly average performance, typically at the 85% probability level, to determine such payload capability. Such an approach can be overly restrictive in an industry where yields are marginal, resulting in sellable seats remaining empty. The analysis of operational flight plans for a particular ultra-long routing revealed that trip fuel requirements are near-exclusively predictable by the average wind component for a given route, at a correlation of over 98%. For this to hold, the route must be influenced primarily by global weather patterns rather than localised weather phenomena. To improve on the current monthly stepped approach, the average wind components were modelled through a sinusoidal function, reflecting the annual repetitiveness of weather patterns. Long-term changes in weather patterns were also considered. Monte Carlo simulation principles were then applied to model the variance around the mean predicted by the sinusoidal function. Monte Carlo simulation was also used to model expected payload demand. The resulting forecasting model thus combines supply with demand, allowing the risk of demand exceeding supply to be assessed on a daily basis. Payload restrictions can then be imposed accordingly, to reduce the risk of demand exceeding supply to a required risk level. With payload demand varying from day of week to seasonally, restricting payload only became necessary in rare cases, except for one particular demand peak period where supply was also most restricted by adverse wind conditions. Repeated application of the forecasting model as the day of flight approaches minimises the risk of seats not sold, or of passengers denied boarding. Thesis (PhD)--University of Pretoria, 2019.
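The supply side of this forecast can be sketched as follows: fit a sinusoid to daily average wind components, then Monte Carlo the day-to-day scatter around it to estimate the probability that the wind exceeds a payload-limiting threshold. The coefficients, the normal noise model, and the headwind limit below are illustrative assumptions, not the thesis's fitted values.

```python
# Sketch: seasonal sinusoid for the mean wind component plus Monte Carlo scatter.
import numpy as np

rng = np.random.default_rng(1)
day = np.arange(365)
# Synthetic daily average headwind (kt): seasonal sinusoid plus noise.
observed = 20 + 15 * np.sin(2 * np.pi * (day - 30) / 365) + rng.normal(0, 6, 365)

# Least-squares fit of w(d) = a + b*sin(2*pi*d/365) + c*cos(2*pi*d/365).
design = np.column_stack([np.ones_like(day),
                          np.sin(2 * np.pi * day / 365),
                          np.cos(2 * np.pi * day / 365)])
coef, *_ = np.linalg.lstsq(design, observed, rcond=None)
resid_sd = np.std(observed - design @ coef)

def payload_risk(day_of_year, headwind_limit_kt, n=10000):
    """P(headwind exceeds the limit beyond which payload must be restricted)."""
    mean = coef @ [1, np.sin(2 * np.pi * day_of_year / 365),
                   np.cos(2 * np.pi * day_of_year / 365)]
    draws = rng.normal(mean, resid_sd, n)
    return float((draws > headwind_limit_kt).mean())

print(payload_risk(200, 35.0))  # risk of exceeding the planning headwind limit
```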
17

Zhou, Kefei. "A unified approach to nonparametric comparison of receiver operating characteristic curves for longitudinal and clustered data." Diss., Restricted to subscribing institutions, 2006. http://proquest.umi.com/pqdweb?did=1273096611&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

18

Muthu, Srinivas. "A Context-Aware Approach to Android Memory Management." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1449665506.

19

Hancox, Jennie Elizabeth. "Examination of the social-environmental and motivational processes operating in dance contexts : a self-determination theory approach." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/5162/.

Abstract:
Grounded in self-determination theory (SDT; Deci & Ryan, 1985, 2000) and achievement goal theory (Nicholls, 1989), this thesis had the broad aim of expanding current knowledge and theoretical understanding of motivational processes. This was achieved via four studies in dance addressing contemporary conceptual and methodological issues raised in the motivation literature. The studies aimed to progress the conceptualisation and measurement of key motivational constructs (i.e., the teacher-created motivational climate and motivation regulations) by examining their application in dance using quantitative and qualitative methods. Additionally, the studies aimed to enhance understanding of the motivational processes via which the motivational climate, as a multi-dimensional construct, predicts dancers’ psychological well- and ill-being. Specifically, the mediating roles of basic psychological needs and motivation regulations between dancers’ perceptions of the motivational climate and affective states were examined. Collectively the thesis supports the central features of the SDT framework, including Duda’s (2013) conceptualisation of the motivational climate as multi-dimensional and basic psychological need thwarting (as detailed in Bartholomew, Ntoumanis, Ryan, & Thøgersen-Ntoumani, 2011b). The studies in this thesis substantiate the need for, and inform, theoretically-grounded interventions which aim to educate teachers in how they can support dancers’ psychological well-being in a variety of dance settings.
20

Stahl, Katharina. "Online anomaly detection for reconfigurable self-X real-time operating systems: a danger theory-inspired approach." Paderborn: Universitätsbibliothek, 2016. http://d-nb.info/1102255106/34.

21

Wei, Chongfeng. "A finite element based approach to characterising flexible ring tire (FTire) model for extended range of operating conditions." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5825/.

Abstract:
In order to accurately predict vehicle dynamic properties when tires impact high obstacles or large bumps, appropriate tire models need to be developed and characterised. The Flexible Ring Tire (FTire) model is one such model for predicting transient dynamic responses when traversing obstacles. In this thesis, a combination of experimental tests and Finite Element (FE) modelling is used to derive FTire models for different levels of tire/road interaction severity. An FE tire model is built to characterise tire properties including static properties, steady-state rolling properties, and transient dynamic rolling properties. A 235/60 R18 tire is cut so that the tire cross-section can be captured and the tire rubber and reinforcement components can be extracted. A detailed method for the determination of the geometrical and material properties of tires has been developed for tire modelling. The 2D and 3D models for static and dynamic analysis are both developed using the commercial FE code ABAQUS. The parameters of the FTire model are derived from the experimental data and FE simulation data, and different FTire models are derived for different operating conditions. Multi-body dynamic analysis is carried out using these FTire models, and the transient dynamic responses obtained with the different FTire models are compared with each other. It is shown that FE modelling can be used to accurately characterise the behaviour of a tire where limitations in experimental facilities prevent tire characterisation using the required level of input severity in physical tests.
22

Koh, Younggyun. "Kernel service outsourcing: an approach to improve performance and reliability of virtualized systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34700.

Abstract:
Virtualization environments have become basic building blocks in consolidated data centers and cloud computing infrastructures. By running multiple virtual machines (VMs) on a shared physical machine, virtualization achieves high utilization of hardware resources and provides strong isolation between virtual machines. This dissertation discusses the implementation and evaluation of an approach, called kernel service outsourcing, which improves the performance and reliability of guest systems in virtualized, multi-kernel environments. Kernel service outsourcing allows applications to exploit OS services from an external kernel existing in the shared system, rather than limiting application OS service requests to the local kernel. Because external kernels may provide more efficient services than the local kernel does, kernel service outsourcing gives applications in the guest OS new opportunities for better performance. In addition, we apply the kernel service outsourcing technique to implement natural diversity, improving the reliability of virtualized systems. We present two major benefits of kernel service outsourcing. First, we show that I/O service outsourcing can significantly improve the I/O performance of guest OSes, by up to several times. In some important cases, the performance of network applications in the guest OS using network outsourcing was comparable to that of the native OS (Linux). We also apply kernel service outsourcing between Windows and Linux, and show that it is viable even with two heterogeneous OS kernels. In addition, we study further performance optimization techniques that can be applied in the external kernel when certain OS services are outsourced to it. The second benefit of kernel service outsourcing is improved system reliability through the natural diversity created by combining different kinds of OS kernel implementations. Because OS services can be outsourced to different versions or even heterogeneous types of OS kernel for equivalent functions, malicious attacks that aim to exploit certain vulnerabilities in specific versions of OS kernels do not succeed in the outsourced kernels. Our case studies with Windows and Linux show that kernel service outsourcing was able to prevent malicious attacks designed to exploit implementation-dependent vulnerabilities in the OSes from succeeding in the outsourced systems.
23

Luk, Kin-man Ricardo (陸建文). "An integral approach in applying information technology to reduce operating cost and enhance efficiency in Hong Kong housing management industry." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31969446.

24

Meissner, Roy, Kurt Junghanns, and Michael Martin. "A Decentralized and Remote Controlled Webinar Approach, Utilizing Client-side Capabilities: To Increase Participant Limits and Reduce Operating Costs." ScitePress, 2018. https://ul.qucosa.de/id/qucosa%3A32159.

Abstract:
We present a concept and implementation for increasing the efficiency of webinar software through a remote-control approach using the technology WebRTC. This technology enables strong security and privacy, is cross-device usable, builds on open-source technology, and enables a new level of interactiveness in webinars. We used SlideWiki, WebRTC, and browser speech-to-text engines to provide innovative accessibility features like multilingual presentations and live subtitles. Our solution was rated for real-world usage aspects and tested within the SlideWiki project, and we determined its technological limits. Such measurements were not previously available; they show that our approach outperforms open-source market competitors in efficiency and cost.
25

Luk, Kin-man Ricardo. "An integral approach in applying information technology to reduce operating cost and enhance efficiency in Hong Kong housing management industry." E-thesis, The University of Hong Kong (HKUTO), 2004. http://sunzi.lib.hku.hk/hkuto/record/B31969446.

26

Cilliers, Anthonie Christoffel. "A deterministic approach for establishing a narrow band dynamic operating envelope to detect and locate hardware deterioration in nuclear power plants." Thesis, North-West University, 2013. http://hdl.handle.net/10394/9001.

Abstract:
Being able to detect and describe hardware deterioration in nuclear power plants benefits the nuclear industry tremendously, as it enables appropriate outage and maintenance planning; it also assists in fault analysis of nuclear power plants. This thesis describes the development of a narrow-band dynamic operating envelope that makes use of real-time simulated plant measurements and control operations, which are compared with actual plant measurements and control operations. By simulating the plant behaviour in real time while comparing it with the real-time transient the plant is following, a second set of plant measurements is generated. The newly generated plant measurements represent the plant measurements that would be seen if the control system did not introduce control operations to nullify the effect of the fault. This enables the calculation of the unknown disturbance introduced into the plant as a fault condition. The benefit of such a system is that plant faults that are too small to detect (especially during transients, when the plant operating point is moving around) can be identified. The behaviour of the control system is also continuously predicted, so the effect of the control system compensating for fault effects (which in most cases hides the fault condition) is used to characterise the fault condition in terms of magnitude, position, and the subsystem being affected. The combination of fault detection and fault characterisation produces a complete fault identification system. The approach is verified by means of an implementation of the fault identification system on a simulated plant. Typical faults (small enough to go undetected for an extended period of time during a typical transient) are introduced into the virtual plant and continuously compared with another plant simulation producing the same transient without the introduction of the fault. A comparison is done to evaluate the speed and detail provided by the fault identification system as opposed to the conventional plant protection system. Using the described methodology, the fault is detected and characterised before plant design limitations are reached or the fault is detected by the conventional protection system. In addition to the fault identification system, this research develops the functional requirements for a full-scope engineering and training simulator that would allow the simulator to be fully utilised to simulate postulated accident scenarios, plan plant modification procedures, and provide an in-transient real-time reference for plant diagnostic systems. To ensure practical implementation of the system in the regulated nuclear industry, an implementation framework that keeps the conventional plant protection system intact is created. It allows a narrow-band dynamic operating envelope to operate within the conventional operating envelope. The framework allows the implementation of the developed fault identification system and other plant diagnostic systems on existing nuclear power plants without impacting on existing nuclear power plant licences or the licensing process of new nuclear power plants. Thesis (PhD (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2013.
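A toy version of the residual comparison underlying the narrow-band envelope is sketched below: a measured signal is compared with its fault-free simulated twin, and samples whose residual leaves a tight band raise an alarm well before a conventional wide trip limit would. The signals, the injected drift, and the 3-sigma band are invented for illustration.

```python
# Toy residual check in the spirit of the narrow-band dynamic envelope.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 1001)
simulated = 300 + 20 * np.sin(0.05 * t)               # fault-free reference transient
measured = simulated + rng.normal(0, 0.5, t.size)     # plant with sensor noise
measured[600:] += 0.2 * (t[600:] - t[600])            # slow drift: a small fault

residual = measured - simulated
band = 3 * residual[:300].std()                       # envelope half-width from early, healthy data

alarms = np.flatnonzero(np.abs(residual) > band)
print(f"envelope half-width = {band:.2f}")
if alarms.size:
    print(f"first violation at t = {t[alarms[0]]:.1f}")  # flagged long before a wide trip limit
```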
27

Bettaieb, Luc Alexandre. "A Deep Learning Approach To Coarse Robot Localization." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1493646936728041.

28

Li, Wenming. "Group-EDF: A New Approach and an Efficient Non-Preemptive Algorithm for Soft Real-Time Systems." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5317/.

Abstract:
Hard real-time systems in robotics, space and military missions, and control devices are specified with stringent and critical time constraints. On the other hand, soft real-time applications arising from multimedia, telecommunications, Internet web services, and games are specified with more lenient constraints. Real-time systems can also be distinguished in terms of their implementation into preemptive and non-preemptive systems. In preemptive systems, tasks are often preempted by higher priority tasks. Non-preemptive systems are gaining interest for implementing soft real-time applications on multithreaded platforms. In this dissertation, I propose a new algorithm that uses a two-level scheduling strategy for scheduling non-preemptive soft real-time tasks. Our goal is to improve the success ratio of the well-known earliest deadline first (EDF) approach when the load on the system is very high, and to improve the overall performance in both underloaded and overloaded conditions. Our approach, known as group-EDF (gEDF), is based on dynamic grouping of tasks with deadlines that are very close to each other, and on using a shortest job first (SJF) technique to schedule tasks within a group. I believe that grouping tasks dynamically by similar deadlines and utilizing secondary criteria, such as minimizing the total execution time, can lead to new and more efficient real-time scheduling algorithms. I present results comparing gEDF with other real-time algorithms, including EDF, best-effort, and guarantee schemes, using randomly generated tasks with varying execution times, release times, deadlines, and tolerances to missing deadlines, under varying workloads. Furthermore, I implemented the gEDF algorithm in the Linux kernel and evaluated it for scheduling real applications.
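A hedged sketch of the two-level gEDF idea follows: take the earliest-deadline task, group all tasks whose deadlines fall within a window of it, and order the group shortest-job-first. The tuple layout and the fixed grouping window are simplifications of the dissertation's actual group definition.

```python
# Sketch of group-EDF (gEDF) ordering: EDF between groups, SJF within a group.
import heapq

def gedf_order(tasks, window=5.0):
    """tasks: list of (deadline, exec_time, name); returns a non-preemptive run order."""
    heap = list(tasks)
    heapq.heapify(heap)                 # min-heap on deadline
    order = []
    while heap:
        earliest = heap[0][0]
        group = []
        # Pull every task whose deadline lies within `window` of the earliest one.
        while heap and heap[0][0] <= earliest + window:
            group.append(heapq.heappop(heap))
        group.sort(key=lambda t: t[1])  # SJF within the deadline group
        order.extend(g[2] for g in group)
    return order

tasks = [(10.0, 4.0, "A"), (12.0, 1.0, "B"), (11.0, 2.0, "C"), (30.0, 3.0, "D")]
print(gedf_order(tasks))  # ['B', 'C', 'A', 'D'] -- shortest jobs first inside the group
```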
29

Dolhikh, Volodymyr Mykolaiovych (Долгіх, Володимир Миколайович). "Непараметрическое исследование относительной эффективности украинских банков в 2005–2011 годах" [A nonparametric study of the relative efficiency of Ukrainian banks in 2005–2011]. Thesis, ДИАЙПИ, 2012. http://essuir.sumdu.edu.ua/handle/123456789/64137.

Abstract:
The banking system is a vital sector of the economy, providing it with credit by attracting the working capital of enterprises and the savings of the population. Poor-quality banking management and an inadequate risk policy negatively affect both the efficiency of the banking system and the efficiency of the country's economy.
30

Gao, Ying. "A Digital Signal Processing Approach for Affective Sensing of a Computer User through Pupil Diameter Monitoring." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/132.

Abstract:
Recent research has indicated that the pupil diameter (PD) in humans varies with their affective states. However, this signal has not been fully investigated for affective sensing purposes in human-computer interaction systems. This may be due to the dominant separate effect of the pupillary light reflex (PLR), which shrinks the pupil when light intensity increases. In this dissertation, an adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a processed MPD (PMPD) signal, from which a classification feature, PMPDmean, was extracted. This feature was used to train and test a support vector machine (SVM), for the identification of stress states in the subject from whom the pupil diameter signal was recorded, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of stress and relaxation states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. The discriminating potential of each individual feature extracted from GSR, BVP and PD was studied by analysis of its receiver operating characteristic (ROC) curve. The ROC curve found for the PMPDmean feature encompassed the largest area (0.8546) of all the single-feature ROCs investigated. The encouraging results seen in affective sensing based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. Therefore, these results confirmed the benefits of using the AIC implementation with the HITV adaptive algorithm to isolate the PAR and the potential of using PD monitoring to sense the evolving affective states of a computer user.
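The AIC structure described here can be sketched with a normalized LMS update standing in for the H∞ time-varying (HITV) algorithm; the synthetic pupil and light signals, the tap count, and the step size below are assumptions for illustration only.

```python
# Simplified adaptive interference canceller: illumination is the reference
# input, and the canceller output is the modified pupil diameter (MPD), i.e.
# the PD signal with the light-reflex (PLR) component suppressed.
import numpy as np

rng = np.random.default_rng(3)
n = 4000
light = rng.uniform(0, 1, n)                         # illumination reference
affect = 0.3 * np.sin(np.linspace(0, 6 * np.pi, n))  # slow affective component
pd_signal = 4.0 - 1.5 * light + affect + rng.normal(0, 0.02, n)

light_c = light - light.mean()                       # work with centered signals
pd_c = pd_signal - pd_signal.mean()

taps, mu, eps = 4, 0.5, 1e-6
w = np.zeros(taps)
mpd = np.zeros(n)
for i in range(taps - 1, n):
    x = light_c[i - taps + 1:i + 1][::-1]            # current + 3 past light samples
    e = pd_c[i] - w @ x                              # MPD sample: PD minus PLR estimate
    w += mu * e * x / (x @ x + eps)                  # normalized LMS weight update
    mpd[i] = e

print(round(np.corrcoef(mpd[1000:], light[1000:])[0, 1], 3))   # ~0: reflex cancelled
print(round(np.corrcoef(mpd[1000:], affect[1000:])[0, 1], 3))  # ~1: affect retained
```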
31

Sha, Shi. "The Thermal-Constrained Real-Time Systems Design on Multi-Core Platforms -- An Analytical Approach." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3713.

Abstract:
Over the past decades, the shrinking transistor size, benefiting from the advancement of IC technology, enabled more and more transistors to be integrated into an IC chip to achieve higher computing performance. However, the semiconductor industry is now reaching a saturation point of Moore’s Law, largely due to soaring power consumption and heat dissipation, among other factors. High chip temperature not only significantly increases packaging/cooling cost and degrades system performance and reliability, but also increases energy consumption and can even damage the chip permanently. Although designing 2D and even 3D multi-core processors helps to lower the power/thermal barrier of single-core architectures by exploiting thread/process-level parallelism, the higher power density and longer heat removal path have made the thermal problem substantially more challenging, surpassing the heat dissipation capability of traditional cooling mechanisms such as cooling fans, heat sinks, heat spreaders, etc., in the design of new generations of computing systems. As a result, dynamic thermal management (DTM), i.e. controlling the thermal behavior by dynamically varying computing performance and workload allocation on an IC chip, has been well recognized as an effective strategy to deal with these thermal challenges. Different from many existing DTM heuristics that are based on simple intuitions, we seek to address the thermal problems through a rigorous analytical approach, to achieve the high predictability required in real-time system design. In this regard, we have made a number of important contributions. First, we develop a series of lemmas and theorems that are general enough to uncover the fundamental principles and characteristics of the thermal model, peak temperature identification, and peak temperature reduction, which are key to thermal-constrained real-time computer system design. Second, we develop a design-time frequency and voltage oscillating approach on multi-core platforms, which can greatly enhance the system throughput and its service capacity. Third, different from the traditional workload balancing approach, we develop a thermal-balancing approach that can substantially improve energy efficiency and task partitioning feasibility, especially when the system utilization is high or the temperature constraint is tight. The significance of our research is that not only can our proposed algorithms on throughput maximization and energy conservation outperform existing work significantly, as demonstrated in our extensive experimental results, but the theoretical results of our research are very general and can greatly benefit other thermal-related research.
32

Ren, Peng. "Off-line and On-line Affective Recognition of a Computer User through A Biosignal Processing Approach." FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/838.

Abstract:
Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not been investigated fully yet. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment (“relaxation” vs. “stress”) are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw Pupil Diameter (PD) signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for the affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, which are based on the paired t-test and subject self-evaluation, respectively. In addition, five different kinds of the classifiers are implemented on the selected data, which achieve average accuracies up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluation of the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is implemented first in order to remove the eye blinks from the PD signal and then a moving average window is utilized to obtain the representative value PDr for every one-second time interval of PD. There are three main steps for the on-line affective assessment algorithm, which are preparation, feature-based decision voting and affective determination. The final results show that the accuracies are 72.30% and 73.55% for the data subsets, which were respectively chosen using two types of data selection methods (paired t-test and subject self-evaluation). In order to further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered as one of the most powerful physiological signals to involve in future automated real-time affective recognition systems, especially for detecting the “relaxation” vs. “stress” states.
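The on-line preprocessing described at the end of this abstract (hard-threshold blink removal, then a one-second representative value PDr) can be sketched as follows; the sampling rate, blink threshold, and synthetic signal are assumptions, and the Walsh-transform feature PDWalsh is omitted.

```python
# Sketch of the on-line PD preprocessing: drop blink samples with a hard
# threshold, then reduce each one-second interval to a representative value PDr.
import numpy as np

rng = np.random.default_rng(4)
fs = 60                                        # assumed samples per second
pd_signal = 4.0 + 0.1 * rng.standard_normal(fs * 30)
pd_signal[rng.integers(0, pd_signal.size, 40)] = 0.2   # blinks: pupil momentarily vanishes

BLINK_THRESHOLD = 1.5                          # mm; below this a sample is treated as a blink

def pdr_series(signal, fs):
    """Representative PD value per one-second interval, blinks excluded."""
    out = []
    for start in range(0, signal.size - fs + 1, fs):
        window = signal[start:start + fs]
        valid = window[window > BLINK_THRESHOLD]
        out.append(valid.mean() if valid.size else np.nan)
    return np.array(out)

pdr = pdr_series(pd_signal, fs)
features = {"PDmean": np.nanmean(pdr), "PDmax": np.nanmax(pdr)}  # two of the three features
print(features)
```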
33

Ruiz, Miguel Angel Vilaplana. "Co-operative conflict resolution in autonomous aircraft operations using a multi-agent approach." Thesis, University of Glasgow, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249988.

34

Nitayaprapha, Sasiphan. "A systemic interpretation of the soft complexity existing in the managerial process of information systems using a soft systems thinking approach : a case study of the telecommunication companies operating in Thailand." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505392.

Abstract:
Since an understanding of 'culture' positively affects the way an information systems practitioner influences and initiates actions in an organization, this research aims to gain insight into an organizational information systems management culture. In this research, a case study has been carried out in the context of the four major telecommunication companies operating in Thailand. The information systems management culture has been interpreted in terms of value systems embedded in the organization's managerial process of information systems and the manifestation of Thai values in such value systems. As such, the research domain of study is defined as 'the soft socio-technical aspect of the managerial process of information systems of the telecommunication companies operating in Thailand'. To tackle this domain, the research embraces a 'soft systems' concept and adopts a 'soft', socio-technical approach to information systems, because the research context involves human-machine interconnections, pluralism, and the multiple facets of a problem situation. The research enquiry process is formed by the systemic tools and techniques available in the socio-technical methodology SISTeM (Atkinson 1997; Atkinson and Brooks 2008) and 'Generative Systemic Metaphor' (Atkinson and Checkland 1988; Atkinson 2003). In order to obtain an in-depth understanding of the research domain, the constructed enquiry process is used to interpretatively analyse the managerial process of information systems of the telecommunication companies operating in Thailand, the value systems embedded in that process, and the influences of Thai culture on the identified value systems. Because there is no previous 'soft systems' research within the Thai information systems research community, it could be argued that the research carried out in this thesis opens up an arena for further 'soft systems' approaches to information systems research, particularly those relevant to the soft socio-technical aspect of information systems.
35

Clukey, David S. "A district approach to countering Afghanistan's insurgency." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Dec/09Dec%5FClukey.pdf.

Full text
Abstract:
Thesis (M.S. in Defense Analysis)--Naval Postgraduate School, December 2009. Thesis Advisor(s): Borer, Douglas A. Second Reader: Rothstein, Hy S. "December 2009." Description based on title screen as viewed on January 26, 2010. Author(s) subject terms: Afghanistan, Operation Enduring Freedom, counterinsurgency, foreign internal defense, unconventional warfare, International Security and Assistance Force, U.S. Special Operations Forces. Includes bibliographical references (p. 109-116). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
36

Emery, Norman E., and Robert S. Earl. "Terrorist approach to information operations." Thesis, Monterey, California. Naval Postgraduate School, 2003. http://hdl.handle.net/10945/993.

Full text
Abstract:
Approved for public release; distribution is unlimited. This thesis provides insight into how terrorist organizations exploit the information environment to achieve their objectives. The study establishes an analytical IO framework by integrating US military doctrine with a fundamental approach to IO theory. The framework proves useful in examining the IO tools terrorists have assembled and how they implement them to influence their target audiences. The thesis shows that terrorists are, indeed, naturally linked to the information environment by their nature and strategy. Generally speaking, all terrorists employ IO tactically to enhance their operations. However, many organizations have a profound understanding of the information environment and also have the ability to manipulate information to achieve their objectives. Since terrorist organizations are militarily weaker than the states they face and cannot rely on physical attacks to accomplish their goals, they must adopt an information strategy to achieve their objectives. This thesis emphasizes three primary conclusions: first, terrorists conduct violent attacks in the physical environment to enable operations in the information environment; second, terrorists integrate offensive and defensive IO to survive and appear legitimate to potential supporters and to the state; finally, terrorists intentionally target four different audiences (opposing, uncommitted, sympathetic, and active) to influence their perceptions. Major, United States Army.
APA, Harvard, Vancouver, ISO, and other styles
37

Lettovsky, Ladislav. "Airline operations recovery : an optimization approach." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/24326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hoke, Carson S., William M. Grieshaber, Bradley M. Carr, and Edward M. Lopacienski. "Influence operations : redefining the indirect approach." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5611.

Full text
Abstract:
Approved for public release; distribution is unlimited. Across today's spectrum of contemporary warfare, the human terrain is routinely recognized as the center of gravity, but disconnects exist between how states or power holders seek to influence target audiences and how insurgents, terrorist groups, and similar nonstate actors fight to seize the population's cognitive terrain. Insurgents and nonstate actors increasingly seek the influence advantage through grassroots processes to subvert populations and establish asymmetric advantages against the United States and other state actors. U.S. policy recognizes the need to influence the behavior, perceptions, and attitudes of foreign audiences through an indirect approach, but its influence methods in reality remain tied to Cold War constructs unable to generate the effects needed against current and future threats. This thesis examines case studies of insurgent and nonstate actor influence operations to analyze their effects on the perceptions and attitudes of various disparate audiences at a grassroots level. The analysis then identifies methodology, vulnerabilities, and opportunities to engage these asymmetric threats within their own influence safe havens.
APA, Harvard, Vancouver, ISO, and other styles
39

Earl, Robert S. Emery Norman E. "A terrorist approach to information operations /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FEarl.pdf.

Full text
Abstract:
Thesis (M.S. in Defense Analysis)--Naval Postgraduate School, June 2003. Thesis advisor(s): Dorothy Denning, Raymond Buettner. Includes bibliographical references (p. 141-148). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
40

Blanco, Martin Laura. "Étude théorique et expérimentale du boulonnage à ancrage réparti sous sollicitations axiales." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00711490.

Full text
Abstract:
Fully grouted rockbolts and cable bolts are two ground reinforcement techniques commonly used in the mining industry and in civil engineering. This research examines the response of these elements under axial tensile loading in the static regime. Under these conditions, experience shows that failure most frequently occurs at the bar-grout interface, through a decohesion process that begins as soon as the force on the bar reaches a limit value. The objective is to better understand the behaviour of this interface before and after failure. First, the state of the art is reviewed in order to understand the work done to date and the aspects not yet mastered. Second, analytical tools are described that explain the response of a fully grouted rockbolt or cable bolt subjected to a tensile force. The experimental studies carried out in the laboratory and in situ are then presented: pull-out tests were performed to determine the main factors governing the interface response. Finally, the results of the laboratory tests on the bolts are analysed. After obtaining the necessary variables, a semi-empirical interface model is proposed, which will have to be validated by complementary tests. This perspective and other improvements are also presented.
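The abstract does not give the form of the proposed semi-empirical interface model. Purely as an illustrative stand-in, the sketch below evaluates a generic tri-linear bond-slip law (elastic rise, softening, residual friction) of the kind often fitted to pull-out test data; every parameter value is an invented placeholder.

```python
def trilinear_bond_stress(slip, tau_peak=2.5, s1=1.0, s2=4.0, tau_res=0.8):
    """Generic tri-linear bond-slip law for a bolt-grout interface.

    slip in mm, stresses in MPa: linear elastic rise up to slip s1
    (peak bond stress tau_peak), linear softening down to tau_res at
    slip s2, constant residual friction beyond.  All values are
    illustrative placeholders, not fitted parameters.
    """
    if slip <= 0.0:
        return 0.0
    if slip <= s1:                       # elastic branch
        return tau_peak * slip / s1
    if slip <= s2:                       # softening branch
        return tau_peak + (tau_res - tau_peak) * (slip - s1) / (s2 - s1)
    return tau_res                       # residual friction

# Bond stress mobilized at a few slip values (mm).
print([round(trilinear_bond_stress(s), 2) for s in (0.5, 1.0, 2.5, 6.0)])
# -> [1.25, 2.5, 1.65, 0.8]
```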
APA, Harvard, Vancouver, ISO, and other styles
41

Watson, Robert Nicholas Maxwell. "New approaches to operating system security extensibility." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Deretic, Goran. "Manöverkrigföringens indirekta och direkta metod : en studie i fördelningen av dessa vid flyginsatser under Operation Desert Storm." Thesis, Swedish National Defence College, Swedish National Defence College, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-93.

Full text
Abstract:
A fundamental part of the manoeuvre warfare used by the Swedish Armed Forces is the indirect and direct approach. Since Sweden has not been at war for a very long time, I decided to write this essay with the purpose of creating a basis for a better understanding of how to use the indirect and direct approach in manoeuvre warfare. To do this, I have tried to answer the following question: How are the indirect and direct approach divided in air raids in wars during the 1990s? To answer this question, I made a case study of Operation Desert Storm, where I first analysed which approach the allied forces used against the different targets. After that, I examined how many attacks and missions were carried out against each target. The results showed, among other things, that even though most of the targets were critical vulnerabilities, against which the indirect approach was used, most of the attacks were made using the direct approach.
APA, Harvard, Vancouver, ISO, and other styles
43

Martonosi, Susan Elizabeth. "An Operations Research approach to aviation security." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33671.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. Includes bibliographical references (p. 151-163). Since the terrorist attacks of September 11, 2001, aviation security policy has remained a focus of national attention. We develop mathematical models to address some prominent problems in aviation security. We explore first whether securing aviation deserves priority over other potential targets. We compare the historical risk of aviation terrorism to that posed by other forms of terrorism and conclude that the focus on aviation might be warranted. Secondly, we address the usefulness of passenger pre-screening systems to select potentially high-risk passengers for additional scrutiny. We model the probability that a terrorist boards an aircraft with weapons, incorporating deterrence effects and potential loopholes. We find that despite the emphasis on the pre-screening system, of greater importance is the effectiveness of the underlying screening process. Moreover, the existence of certain loopholes could occasionally decrease the overall chance of a successful terrorist attack. Next, we discuss whether proposed explosives detection policies for cargo, airmail and checked luggage carried on passenger aircraft are cost-effective. We define a threshold time such that if an attempted attack is likely to occur before this time, it is cost-effective to implement the policy, otherwise not. We find that although these three policies protect against similar types of attacks, their cost-effectiveness varies considerably. Lastly, we explore whether dynamically assigning security screeners at various airport security checkpoints can yield major gains in efficiency. We use approximate dynamic programming methods to determine when security screeners should be switched between checkpoints in an airport to accommodate stochastic queue imbalances. We compare the performance of such dynamic allocations to that of pre-scheduled allocations. We find that unless the stochasticity in the system is significant, dynamically reallocating servers might reduce only marginally the average waiting time. Without knowing certain parameter values or understanding terrorist behavior, it can be difficult to draw concrete conclusions about aviation security policies. Nevertheless, these mathematical models can guide policy-makers in adopting security measures, by helping to identify parameters most crucial to the effectiveness of aviation security policies, and helping to analyze how varying key parameters or assumptions can affect strategic planning. by Susan Elizabeth Martonosi. Ph.D.
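The threshold-time idea lends itself to a back-of-the-envelope sketch: under the simplifying assumption that a policy costing a fixed amount per year prevents a single expected loss, the break-even horizon is just their ratio. The figures below are invented for illustration and are not from the thesis.

```python
def threshold_time(annual_cost, prevented_loss):
    """Break-even horizon in years for a security policy.

    Toy model: implementing the policy is cost-effective if an attempted
    attack is expected before the time at which cumulative policy cost
    (t * annual_cost) equals the loss the policy would prevent.
    """
    return prevented_loss / annual_cost

# Illustrative numbers only: $200M/year screening cost vs. a $10B loss.
print(threshold_time(200e6, 10e9))  # -> 50.0 years
```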
APA, Harvard, Vancouver, ISO, and other styles
44

Yusof, Yusri. "A STEP-compliant approach to turning operations." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/35164.

Full text
Abstract:
There is no doubt that manufacturing today is more competitive and challenging than ever before in trying to respond to "production on demand". Companies from east and west, all over the world, face changing rules of business and need to collaborate beyond geographic boundaries, supported by rapid advances in the information technology associated with manufacturing. To satisfy customers' demands for product variety and the industrial need for high precision, numerically controlled machining with multiple axes and sophisticated machine tools is required. Because of the complexity of programming such machines, there is a need to model their process capability in order to improve the interoperable manufacturing capability of machines such as turning centres. This thesis focuses on the use of the new standard, ISO 14649 (STEP-NC), to address the process planning and machining of discrete turned components.
APA, Harvard, Vancouver, ISO, and other styles
45

Pachamanova, Dessislava A. (Dessislava Angelova) 1975. "A robust optimization approach to finance." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8509.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002. Includes bibliographical references (p. 137-141). An important issue in real-world optimization problems is how to treat uncertain coefficients. Robust optimization is a modeling methodology that takes a deterministic view: the optimal solution is required to remain feasible for any realization of the uncertain coefficients within prescribed uncertainty sets. The focus of this thesis is on robust linear programming problems in which the uncertainty sets are polytopes. The assumption of polyhedral uncertainty leads to compact, efficiently solvable linear formulations. In the first part of the thesis, we study special types of polyhedral uncertainty sets that allow for incorporating moment information about the distribution of the uncertain coefficients, and for controlling the tradeoff between robustness and optimality. We provide probabilistic guarantees on the feasibility of optimal solutions obtained with such uncertainty sets for any realization of the uncertain coefficients. We then illustrate the versatility of robust polyhedral formulations by studying three financial applications: single period portfolio optimization, multiperiod portfolio management, and credit risk estimation. In the area of single period portfolio optimization, we propose ways of modeling inaccuracy in parameter estimates, and explore the benefits of robust optimal strategies through computational experiments with the statistical estimation of a particular measure of portfolio risk, sample shortfall. We emphasize the advantages of linear, as opposed to nonlinear, robust formulations in large portfolio problems with integrality constraints. In the area of multiperiod portfolio management, we propose robust polyhedral formulations that use some minimal information about long-term direction of movement of asset returns to make informed decisions about portfolio rebalancing over the short term. The suggested formulations allow for including considerations of transaction costs and taxes while keeping the dimension of the problem low. In the area of credit risk estimation, we propose a model for estimating the survival probability distribution and the fair prices of credit risky bonds from market prices of similar credit risky securities. We address the issue of uncertainty in key parameters of the model, such as discount factors, by using robust optimization modeling. We also suggest a method for classification of credit risky bonds based on integer programming techniques. by Dessislava A. Pachamanova. Ph.D.
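The simplest polyhedral uncertainty set is a box (interval) set, for which the robust counterpart of a linear constraint remains linear. The sketch below illustrates this with invented data; it is not a formulation from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize 3*x1 + 2*x2 subject to a·x <= 10, x >= 0,
# where a is uncertain in the box [a_bar - delta, a_bar + delta].
a_bar = np.array([2.0, 1.0])
delta = np.array([0.5, 0.2])

# For x >= 0 the worst case of a·x over the box is (a_bar + delta)·x,
# so the robust counterpart stays a plain linear program.
res = linprog(c=[-3.0, -2.0],                 # linprog minimizes, so negate
              A_ub=[(a_bar + delta).tolist()],
              b_ub=[10.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                        # robust-optimal x and objective
```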
APA, Harvard, Vancouver, ISO, and other styles
46

Epstein, Christina (Christina Lynn). "An analytics approach to hypertension treatment." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91299.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 67-68). Hypertension is a major public health issue worldwide, affecting more than a third of the adult population and increasing the risk of myocardial infarction, heart failure, stroke, and kidney disease. Current clinical guidelines have yet to achieve consensus and continue to rely on expert opinion for recommendations lacking a sufficient evidence base. In practice, trial and error is typically required to discover a medication combination and dosage that works to control blood pressure for a given patient. We propose an analytics approach to hypertension treatment: applying visualization, predictive analytics methods, and optimization to existing electronic health record data to (1) find conjectures parallel and potentially orthogonal to guidelines, (2) hasten response time to therapy, and/or (3) optimize therapy selection. This thesis presents work toward these goals including data preprocessing and exploration, feature creation, the discovery of clinically-relevant clusters based on select blood pressure features, and three development spirals of predictive models and results. by Christina Epstein. S.M.
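As an illustration of the clustering step, the sketch below groups patients by two assumed blood-pressure features with k-means; the features, the data, and the choice of three clusters are all placeholders, not the thesis's actual design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy per-patient features: [mean systolic BP (mmHg), BP standard deviation].
X = np.array([[150, 12], [165, 20], [128, 6], [142, 15], [131, 7], [170, 22]])

# Standardize the features, then form three illustrative clusters.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(labels)  # cluster assignment per patient
```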
APA, Harvard, Vancouver, ISO, and other styles
47

Lai, Kuan-Yu, and 賴貫郁. "A Multi-Objective Evolutionary Approach to Optimize Operating Room Scheduling Problems." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/79056640078768159434.

Full text
Abstract:
Master's thesis, Chung Hua University, Department of Computer Science and Information Engineering, academic year 100 (2011). The operating room is one of the most expensive parts of a hospital's operating costs, and those costs are closely related to operating room scheduling. In this thesis, operating rooms, surgical teams and patient information are integrated into a mathematical optimization model whose objective is to plan a one-week operating room schedule. A multi-objective genetic algorithm (MOGA) is applied to automatically assign patients and surgical teams to operating rooms. The approach optimizes four cost objective functions at the same time: minimizing surgical team costs, minimizing operating room costs, matching the preferences of the surgical teams, and reducing patient waiting days. The experimental results show that the proposed approach provides a good tool for hospital decision-makers and indirectly improves healthcare efficiency and quality while reducing unnecessary waste.
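At the core of any multi-objective GA of this kind is a Pareto-dominance test over the cost vectors of candidate schedules. The sketch below illustrates it with four placeholder objectives (team cost, room cost, preference penalty, waiting days), all minimized; the data and names are assumptions, not taken from the thesis.

```python
def dominates(u, v):
    """True if cost vector u Pareto-dominates v (all objectives minimized):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(population):
    """Non-dominated schedules among (schedule, cost-vector) pairs."""
    return [p for p in population
            if not any(dominates(q[1], p[1]) for q in population if q is not p)]

# Toy cost vectors: (team cost, room cost, preference penalty, waiting days).
pop = [("s1", (5, 3, 2, 7)), ("s2", (4, 4, 2, 6)), ("s3", (6, 5, 3, 9))]
print([s for s, _ in pareto_front(pop)])  # s3 is dominated by both s1 and s2
```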
APA, Harvard, Vancouver, ISO, and other styles
48

Chun-Chi, Chen, and 陳俊錡. "Evaluating Operating Systems of Smart Phones through a Fuzzy MCDM Approach." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/73363924565528533681.

Full text
Abstract:
Master's thesis, Southern Taiwan University of Science and Technology, Graduate Institute of Technology Management, academic year 98 (2009). The market share of smart phones has been increasing. The operating system (OS) is the heart of a smart phone: it influences the efficiency of overall operation and user preference. The selection of a smart phone OS has therefore become a very important issue. Many criteria must be considered when evaluating a smart phone OS. Some criteria are quantitative, such as the number of applications; some are qualitative, such as the security of the OS. Furthermore, different decision makers may have different opinions on the relative importance of the criteria. The purpose of this research is to propose a fuzzy MCDM model for the evaluation and selection of smart phone operating systems. The membership functions of the final fuzzy evaluation values in the model are derived, and a ranking approach based on maximizing and minimizing areas is suggested to produce defuzzification values and obtain the ranking of the alternatives. Finally, a numerical example demonstrates the feasibility of the proposed model.
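The thesis's maximizing-area/minimizing-area ranking is not reproduced here; as a simpler illustrative stand-in, the sketch below ranks triangular fuzzy evaluation values by their centroid, a standard defuzzification. The alternatives and scores are invented.

```python
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Toy fuzzy evaluation values for three hypothetical smart phone OS alternatives.
scores = {"OS-A": (0.55, 0.70, 0.85),
          "OS-B": (0.50, 0.65, 0.90),
          "OS-C": (0.40, 0.60, 0.75)}
ranking = sorted(scores, key=lambda k: centroid(scores[k]), reverse=True)
print(ranking)  # best alternative first: ['OS-A', 'OS-B', 'OS-C']
```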
APA, Harvard, Vancouver, ISO, and other styles
49

Chien, Chi-Ying, and 簡綺瑩. "Using DEA Approach to Analyze the Operating Efficiency of Industrial PC Companies." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/53784855579190635890.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science, academic year 96 (2008). Taiwanese firms play an indispensable role in the industrial PC (IPC) industry. This research uses the Data Envelopment Analysis (DEA) approach to evaluate the relative efficiency of 13 Taiwanese and 2 foreign IPC firms over the period from 2002 to 2006. The purpose is to examine the operating efficiency of IPC firms and to explore whether performance improves via strategic alliances. In addition, improvement guidance and suggestions are provided for the less efficient operating units. A Tobit regression analysis then identifies factors that affect the operating performance of IPC companies. The findings are as follows: 1. The operating efficiency of domestic IPC firms is superior to that of foreign ones. 2. Small IPC firms focusing on niche markets have enjoyed high profits. 3. Big IPC firms grow via integration. 4. IPC firms can find growth opportunities through alliances. 5. According to the Tobit model, firms with larger capital, stronger brand equity and vertical integration perform better.
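The DEA building block is a small linear program solved once per decision-making unit (DMU). The sketch below solves the standard input-oriented CCR envelopment model with scipy on invented data; the firm figures are placeholders, not the thesis's dataset.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k via the envelopment LP.

    X: (m, n) input matrix, Y: (s, n) output matrix for n DMUs.
    Minimizes theta s.t. sum_j lambda_j x_ij <= theta * x_ik (inputs),
    sum_j lambda_j y_rj >= y_rk (outputs), lambda >= 0.
    Returns theta in (0, 1]; theta = 1 means DMU k is on the frontier."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # variables: [theta, lambda_1..n]
    A_in = np.c_[-X[:, [k]], X]              # input constraints as <= 0 rows
    A_out = np.c_[np.zeros((s, 1)), -Y]      # output constraints, negated
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    bounds = [(None, None)] + [(0, None)] * n  # theta free, lambdas >= 0
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

# Toy data: 3 firms, 2 inputs (assets, staff), 1 output (revenue).
X = np.array([[4.0, 6.0, 8.0], [3.0, 2.0, 5.0]])
Y = np.array([[2.0, 3.0, 3.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])
```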
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Yi-Ta, and 黃以達. "Evaluation of Operating Characteristics of Hybrid Approach of Calculating Value at Risk." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/58723883298178276671.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Mathematics, academic year 95 (2007). VaR (Value at Risk) is a method of assessing risk that uses standard statistical techniques routinely used in other technical fields. This thesis focuses on the characteristics of the hybrid approach proposed by Boudoukh, Richardson and Whitelaw (1998), a nonparametric approach to estimating VaR. Under some regularity conditions, we prove that the resulting estimator is not consistent. We then propose a modified approach, called the modified hybrid approach, to increase its precision. We also demonstrate the pros and cons of the hybrid approach and the modified hybrid approach using several evaluation criteria under various models and empirical data.
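The hybrid approach itself is compact enough to sketch: each historical return is weighted by an exponentially decaying factor of its age, the returns are sorted, and VaR is read off where the cumulative weight reaches the target tail probability. The decay factor 0.98 and the simulated data below are illustrative assumptions.

```python
import numpy as np

def hybrid_var(returns, confidence=0.99, lam=0.98):
    """BRW (1998) hybrid VaR: exponentially weighted historical simulation.

    returns: 1-D array of portfolio returns, most recent observation last.
    A return observed i periods ago gets weight lam**i * (1-lam)/(1-lam**n);
    VaR is minus the return at which the cumulative weight of the
    ascending-sorted sample first reaches 1 - confidence."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    age = np.arange(n)[::-1]                     # most recent return has age 0
    w = lam ** age * (1 - lam) / (1 - lam ** n)  # weights sum to 1
    order = np.argsort(r)                        # worst losses first
    cum = np.cumsum(w[order])
    cutoff = np.searchsorted(cum, 1 - confidence)
    return -r[order][cutoff]

# Toy example on simulated daily returns (illustrative only).
rng = np.random.default_rng(0)
print(hybrid_var(rng.normal(0.0, 0.01, 250)))
```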
APA, Harvard, Vancouver, ISO, and other styles