
Dissertations / Theses on the topic 'System analysis and design'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'System analysis and design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Eriksson, Daniel. "Diagnosability analysis and FDI system design for uncertain systems." Licentiate thesis, Linköpings universitet, Fordonssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-89947.

Full text
Abstract:
Our society depends on advanced and complex technical systems and machines, for example, cars for transportation, industrial robots in production lines, satellites for communication, and power plants for energy production. Consequences of a fault in such a system can be severe and result in human casualties, environmentally harmful emissions, high repair costs, or economic losses caused by unexpected stops in production lines. Thus, a diagnosis system is important, and in some applications also required by legislation, to monitor the system health in order to take appropriate preventive actions when a fault occurs. Important properties of diagnosis systems are their capability of detecting and identifying faults, i.e., their fault detectability and isolability performance. This thesis deals with quantitative analysis of fault detectability and isolability performance when taking model uncertainties and measurement noise into consideration. The goal is to analyze diagnosability performance given a mathematical model of the system to be monitored before a diagnosis system is developed. A measure of fault diagnosability performance, called distinguishability, is proposed based on the Kullback-Leibler divergence. For linear descriptor models with Gaussian noise, distinguishability gives an upper limit for the fault to noise ratio of any linear residual generator. Distinguishability is used to analyze fault detectability and isolability performance of a non-linear mean value engine model of gas flows in a heavy-duty diesel engine by linearizing the model around different operating points. It is also shown how distinguishability can be used to determine sensor placement, i.e., where sensors should be placed in a system to achieve a required fault diagnosability performance. The sensor placement problem is formulated as an optimization problem, where minimum required diagnosability performance is used as a constraint. Results show that the required diagnosability performance greatly affects which sensors to use, which is not captured unless model uncertainties and measurement noise are taken into consideration. Another problem considered here is the on-line sequential test selection problem. Distinguishability is used to quantify the performance of the different test quantities. The set of test quantities is changed on-line, depending on the output of the diagnosis system. Instead of using all test quantities the whole time, changing the set of active test quantities can be used to maintain a required diagnosability performance while reducing the computational cost of the diagnosis system. Results show that the number of used test quantities can be greatly reduced while maintaining a good fault isolability performance. A quantitative diagnosability analysis has been used during the design of an engine misfire detection algorithm based on the estimated torque at the flywheel. Decisions during the development of the misfire detection algorithm are motivated using quantitative analysis of the misfire detectability performance. Related to the misfire detection problem, a flywheel angular velocity model for misfire simulation is presented. An evaluation of the misfire detection algorithm shows good detection performance as well as a low false alarm rate.
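The distinguishability measure described in this abstract is based on the Kullback-Leibler divergence between the distributions that a residual follows in the fault-free and faulty cases. As a minimal, hedged illustration of that idea (not the thesis's actual formulation), the Python snippet below evaluates the closed-form KL divergence between two univariate Gaussians; the means and variances are invented placeholder values.

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL divergence D(N(mu0, var0) || N(mu1, var1)) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

# Illustrative residual distributions: nominal (no fault) vs. a faulty case.
# The numbers are made up; in practice they would come from the linearized
# model and the noise covariances at a given operating point.
mu_nofault, var_nofault = 0.0, 1.0
mu_fault,   var_fault   = 2.5, 1.2

# A larger divergence means the fault is easier to detect/isolate in this residual.
print(kl_gaussian(mu_fault, var_fault, mu_nofault, var_nofault))
```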
APA, Harvard, Vancouver, ISO, and other styles
2

Gemino, Andrew C. "Empirical comparisons of system analysis modeling techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0015/NQ46346.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Uddin, Amad. "Development of an integrated interface modelling methodology to support system architecture analysis." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/15905.

Full text
Abstract:
This thesis presents the development and validation of a novel interface modelling methodology integrated with a system architectural analysis framework that emphasises the need to manage the integrity of deriving and allocating requirements across multiple levels of abstraction in a structured manner. The state-of-the-art review in this research shows that there is no shared or complete interface definition model that could integrate diverse interaction viewpoints for defining system requirements with complete information. Furthermore, while existing system modelling approaches define system architecture with functions and their allocation to subsystems to meet system requirements, they do not robustly address the importance of considering well-defined interfaces in an integrated manner at each level of the systems hierarchy. This results in decomposition and integration issues across the multiple levels of the systems hierarchy. Therefore, this thesis develops and validates the following:
- Interface Analysis Template: a systematic tool that integrates diverse interaction viewpoints for modelling system interfaces with intensive information for deriving requirements.
- Coupling Matrix: an architecture analysis framework that not only allocates functions to subsystems to meet requirements but also promotes consistent consideration of well-defined interfaces at each level of the design hierarchy.
Insights from the validation of the developed approach with engineering case studies within an automotive OEM are discussed, reflecting on the effectiveness, efficiency and usability of the methods.
APA, Harvard, Vancouver, ISO, and other styles
4

Gregory, Frank Hutson. "A logical analysis of soft systems modelling : implications for information system design and knowledge based system design." Thesis, University of Warwick, 1993. http://wrap.warwick.ac.uk/2888/.

Full text
Abstract:
The thesis undertakes an analysis of the modelling methods used in the Soft Systems Methodology (SSM) developed by Peter Checkland and Brian Wilson. The analysis is undertaken using formal logic and work drawn from modern Anglo-American analytical philosophy, especially work in the areas of philosophical logic, the theory of meaning, epistemology and the philosophy of science. The ability of SSM models to represent causation is found to be deficient, and improved modelling techniques suitable for cause and effect analysis are developed. The notional status of SSM models is explained in terms of Wittgenstein's language game theory. Modal predicate logic is used to solve the problem of mapping notional models onto the real world. The thesis presents a method for extending SSM modelling into a system for the design of a knowledge based system. This six-stage method comprises: systems analysis, using SSM models; language creation, using logico-linguistic models; knowledge elicitation, using empirical models; knowledge representation, using modal predicate logic; codification, using Prolog; and verification, using a type of non-monotonic logic. The resulting system is constructed in such a way that built-in inductive hypotheses can be falsified, as in Karl Popper's philosophy of science, by particular facts. As the system can learn what is false, it has some artificial intelligence capability. A variant of the method can be used for the design of other types of information system, such as a relational database.
APA, Harvard, Vancouver, ISO, and other styles
5

Moolman, G. Chris. "A relational database management systems approach to system design /." This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-07102009-040421/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Terblanche, Riaan. "Requirements for efficient commercial systems analysis and design." Thesis, Cape Technikon, 1996. http://hdl.handle.net/20.500.11838/1357.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Technikon, Cape Town, 1996.
Systems Analysis and Design (also known as Systems Development) is the systematic process of problem identification, problem definition, analysis of the causes of the problem, the design of alternative solutions to the problem, and the eventual implementation of the selected solution as a computer system (if possible). During this systems development process there are quite a large number of principles and approaches that need to be considered. The systems analyst is also required to have certain skills and to know how to apply a number of tools and techniques. The purpose of this research is to determine which approaches, principles, skills, tools and techniques are required by the industry for the development of commercial computer systems, so that prospective systems analysts can be properly trained in those aspects. This means that the course content of training institutions should be updated accordingly. The aspects that form part of the course offered by the training institution where the researcher works are discussed and identified from the literature. The course content is compared with what is required by the industry by means of an empirical study consisting of a questionnaire and frequency and regression analyses. The results from the research indicate that the following aspects of systems development must be emphasized:
1. Phases of systems development: implementation, analysis, planning, and design.
2. Important tools and techniques: data modelling, process modelling, presentations, input and output design, observation (for fact-finding), and cross-referencing between the data and process models.
3. Skills: interpersonal relations, third-generation programming languages, a basic understanding of CASE, and a working knowledge of CASE.
APA, Harvard, Vancouver, ISO, and other styles
7

He, Zhijun. "System And Algorithm Design For Varve Image Analysis System." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/196015.

Full text
Abstract:
This dissertation describes the design and implementation of a computer vision based varve image analysis system. The primary issues covered are software engineering design, varve image calibration, varve image enhancement, varve Dynamic Spatial Warping (DSW) profile generation, varve core image registration, varve identification, boundary identification and varve thickness measurement. A varve DSW matching algorithm is described to generate the DSW profile and to register two core images. Wavelet Multiple Resolution Analysis (MRA) is also used to perform the core image registrations. By allowing an analyst to concentrate on other research work while the VARVES software analyzes a sample, much of the tedious varve analysis work is eliminated, potentially increasing productivity. Additionally, by using new computer vision techniques, the VARVES system is able to perform some varve analyses that would be impossible to handle manually.
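Dynamic Spatial Warping, as used for the DSW profiles mentioned above, is closely related to classic dynamic time warping: two one-dimensional core profiles are aligned by a minimum-cost monotone matching. The sketch below implements the generic dynamic-warping recursion in Python as an illustration only; the toy profiles are invented and this is not the dissertation's own algorithm.

```python
import numpy as np

def dynamic_warp_distance(a, b):
    """Minimum cumulative alignment cost between two 1-D profiles (classic DTW recursion)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local mismatch cost
            D[i, j] = cost + min(D[i - 1, j],        # skip a sample in profile a
                                 D[i, j - 1],        # skip a sample in profile b
                                 D[i - 1, j - 1])    # match the two samples
    return D[n, m]

# Toy grey-level profiles from two sediment cores (illustrative values only).
core1 = np.array([0.1, 0.8, 0.3, 0.9, 0.2, 0.7])
core2 = np.array([0.2, 0.7, 0.9, 0.25, 0.75])
print(dynamic_warp_distance(core1, core2))
```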
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Chenjie. "The design exploration method for adaptive design systems." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28084.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Janet K. Allen; Committee Member: Benjamin Klein; Committee Member: Farrokh Mistree; Committee Member: Seung-Kyum Choi.
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Sabin M. (Sabin Mammen). "A system analysis of improvements in machine learning." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100386.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, February 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 50-51).
Machine learning algorithms used for natural language processing (NLP) currently take too long to complete their learning function. This slow learning performance tends to make the models ineffective for the increasing requirement for real-time applications such as voice transcription, language translation, text summarization, topic extraction and sentiment analysis. Moreover, current implementations are run in an offline batch-mode operation and are unfit for real-time needs. Newer machine learning algorithms are being designed that make better use of sampling and distributed methods to speed up the learning performance. In my thesis, I identify unmet market opportunities where machine learning is not employed in an optimal fashion, and I provide system-level suggestions and analyses that could improve the performance, accuracy and relevance.
by Sabin M. Thomas.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
10

Moolman, George Christiaan. "A relational database management systems approach to system design." Thesis, Virginia Tech, 1992. http://hdl.handle.net/10919/43628.

Full text
Abstract:
Systems are developed to fulfill certain requirements. Several system design configurations usually can fulfill the technical requirements, but at different equivalent life-cycle costs. The problem is how to manipulate and evaluate different system configurations so that the required system effectiveness can be achieved at a minimum equivalent cost. It is also important to have a good definition of all the major consequences of each design configuration. For each alternative configuration considered, it is useful to know the number of units to deploy, the inventory and other logistic requirements, as well as the sensitivity of the system to changes in input variable values. An intelligent relational database management system is defined to solve the problem described. Table structures are defined to maintain the required data elements and algorithms are constructed to manipulate the data to provide the necessary information. The methodology is as follows: Customer requirements are analyzed in functional terms. Feasible design alternatives are considered and defined as system design configurations. The reliability characteristics of each system configuration are determined, initially from a system-level allocation, and later determined from test and evaluation data. A maintenance analysis is conducted to determine the inventory requirements (using reliability data) and the other logistic requirements for each design configuration. A vector of effectiveness measures can be developed for each customer, depending on objectives, constraints, and risks. These effectiveness measures, consisting of a combination of performance and cost measures, are used to aid in objectively deciding which alternative is preferred. Relationships are defined between the user requirements, the reliability and maintainability of the system, the number of units deployed, the inventory level, and other logistic characteristics of the system. A heuristic procedure is developed to interactively manipulate these parameters to obtain a good solution to the problem with technical performance and cost measures as criteria. Although it is not guaranteed that the optimal solution will be found, a feasible solution close to the optimal will be found. Eventually the user will have, at any time, the ability to change the value of any parameter modeled. The impact on the total system will subsequently be made visible.
Master of Science
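The abstract above describes table structures that hold design configurations, reliability characteristics, and logistic data, which are then manipulated to compare alternatives. The fragment below is only a speculative sketch of what such a relational approach might look like, using SQLite through Python; the table names, columns, and figures are assumptions made for the example, not the thesis's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Illustrative schema: candidate configurations and their reliability data.
cur.executescript("""
CREATE TABLE configuration (
    config_id  INTEGER PRIMARY KEY,
    name       TEXT,
    unit_cost  REAL
);
CREATE TABLE reliability (
    config_id  INTEGER REFERENCES configuration(config_id),
    mtbf_hours REAL,            -- mean time between failures
    mttr_hours REAL             -- mean time to repair
);
""")

cur.execute("INSERT INTO configuration VALUES (1, 'Design A', 120000.0)")
cur.execute("INSERT INTO configuration VALUES (2, 'Design B', 95000.0)")
cur.execute("INSERT INTO reliability VALUES (1, 2000.0, 4.0)")
cur.execute("INSERT INTO reliability VALUES (2, 1500.0, 6.0)")

# A simple effectiveness-style query: inherent availability per configuration.
for row in cur.execute("""
    SELECT c.name, r.mtbf_hours / (r.mtbf_hours + r.mttr_hours) AS availability, c.unit_cost
    FROM configuration c JOIN reliability r USING (config_id)
    ORDER BY availability DESC"""):
    print(row)
```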
APA, Harvard, Vancouver, ISO, and other styles
11

Kromwijk, Katinka. "Conference information system - analysis design and implementation." Zurich : ETH, Swiss Federal Institute of Technology Zurich, Department of Computer Science, Institute of Information Systems, Global Information Systems Group, 2009. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Thomas, Peter James. "Conversation analysis in interactive computer system design." Thesis, University of Hull, 1990. http://hydra.hull.ac.uk/resources/hull:3895.

Full text
Abstract:
Chapter one discusses the rationale for, and the aims of, this study. The design of interactive computer systems is an enterprise quite distinct from the design of other artefacts: design, or inventing a pattern, for interactive computer systems is a matter of design for use. HCI research has recognised the need for a user-centred approach to design, and has correspondingly drawn upon a variety of disciplines. However, the dominance of psychological theory and method has led to the exclusion of a body of applicable findings and methods from disciplines which deal with human interaction, and to a failure to systematically investigate the links between human interaction and human-computer interaction. Prospectively, conversation analysis provides the resources for the design of more natural interactive systems, and represents the possibility of design guidance which avoids the problems inherent in current design guidelines. The methods and findings of conversation analysis, this chapter has proposed, will provide a principled approach both to the investigation of human-computer interaction, and to the design of interactive systems. Within the general aim of investigating the applicability of conversation analysis to HCI, the remainder of this study addresses the theoretical issues and illustrates the practical outcomes in relation to an empirical study of user-system interaction. Chapter two examines in greater detail the perspective of ethnomethodology and the findings of conversation analysis. The expository materials, such as exist in these fields, are recognised as being difficult, especially so for those who may be approaching these topics for the first time, and from other than sociological backgrounds. Accordingly, the discussion concentrates upon only their more central assumptions and findings. Chapter one observes that conversation analysis and ethnomethodology have not yet found expression in HCI research largely because of the divergence between their methods and those of psychology. The exact nature of those methods, and their advantages for HCI research, are explored in chapter three. This discussion concerns the practical methodology adopted in this study, the relationship between experimental and non-experimental investigative methods, and the practical applicability of the methods of conversation analysis in the investigation of human-computer interaction. An empirical study of human-computer interaction is undertaken in chapter four. The examination of videotaped sequences of human-computer interaction through conversation analytic methods is combined with the findings of conversation analysis to formulate design guidelines and recommendations. Finally, chapter five attempts to assess the significance of this approach to HCI research and design. The promising route which conversation analysis provides for the investigation of user-system interaction, and the possibility that it can inform the design of future interactive systems, is explored.
APA, Harvard, Vancouver, ISO, and other styles
13

Greenwood, Rob. "Semantic analysis for system level design automation." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020216/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Chandra, Saumya. "Variability-aware system-level design and analysis." Diss., [La Jolla, Calif.] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3344744.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed April 3, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 147-154).
APA, Harvard, Vancouver, ISO, and other styles
15

Lajevardi, Payam. "Design and analysis of reconfigurable analog system." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/63071.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 147-152).
A highly configurable analog system is presented. A prototype chip is fabricated, and ADC and filter functionalities are demonstrated. The chip consists of eight identical programmable stages. In an ADC configuration, the first five stages are programmed to implement a 10-bit ADC. The ADC has an ENOB of 8 bits at 50 MSPS. The ENOB improves to 8.5 bits if the sampling rate is lowered to 30 MSPS. The ADC has an FOM of 150 fJ/conversion-step, which is very competitive with state-of-the-art ADCs. The first stage is responsible for 75% of the input-referred noise power. The sampling noise is responsible for 40% of the total noise power and the zero-crossing detector is responsible for 60%. The chip is tested in two different filter configurations. In one test, the first two stages of the chip are configured as a second-order Butterworth filter and the third stage is configured as an amplifier. In another test, the first three stages of the chip are programmed as a third-order Butterworth filter. The desired filter functionalities are demonstrated in both configurations. It is shown that in a third-order Butterworth filter, more than 90% of the noise is due to the zero-crossing detector of the last stage. This is mainly due to the fact that the noise of earlier stages is filtered by the filter transfer function, but the noise of the last stage is not filtered. The ZCBC architecture has been used to avoid stability problems and to scale power consumption with sampling frequency. A new technique is introduced to implement the terminating resistors in a ladder filter. This technique does not have any area or power overhead. Asymmetric differential signaling is also introduced. This method improves the dynamic range of the output signals, which is particularly important in new technology nodes with low supply voltage.
by Payam Lajevardi.
Ph.D.
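The figure of merit quoted in the abstract is consistent with the conventional Walden definition, FOM = P / (2^ENOB × f_s). As a small, assumption-laden consistency check (not data from the thesis), the snippet below back-solves the power consumption implied by an FOM of 150 fJ/conversion-step at 8 ENOB and 50 MSPS.

```python
# Walden figure of merit: FOM = P / (2**ENOB * fs)
fom = 150e-15          # 150 fJ per conversion-step
enob = 8.0             # effective number of bits at 50 MSPS
fs = 50e6              # sampling rate in samples/s

power = fom * (2 ** enob) * fs   # implied power consumption
print(f"Implied ADC power: {power * 1e3:.2f} mW")   # ~1.92 mW
```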
APA, Harvard, Vancouver, ISO, and other styles
16

Bromley, II Michael William. "Pneumatic Particulate Collection System Analysis and Design." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/33562.

Full text
Abstract:
A pneumatic particulate collection system harnesses the energy associated with the release of a compressed gas to transport particulate to a collection chamber. In an effort to improve the efficiency of a previously designed collection system, high speed imaging in conjunction with computational fluid dynamics (CFD) was utilized to highlight design deficiencies. Areas of recirculation within the collection device as well as impingement of the sampling surface were observed through the testing and CFD analysis. The basis of the improved collection system was conceived through research of pneumatic transport and the deficiencies found through testing and simulation. An improved rectangular-duct-styled system was designed in three main stages. A variety of filters used to contain the desired particulate were characterized through testing for use in simulations as well as fluids calculations. The improved system was then analyzed utilizing compressible and incompressible flow calculations and design iterations were conducted with CFD to determine the final parameters. The final design was simulated with a multiphase flow model to examine the particulate entrainment performance. The improved collection system efficiently expanded and developed the gas flow prior to the collection area to employ the particulate entrainment process. The final design was constructed with an additive manufacturing process and experimentally tested to validate the simulations and flow calculations. The testing proved that the final design operated purely on particulate entrainment and collected only the top layer of particles as simulated. The improved collection system eliminated all areas of flow recirculation and impingement of the particle bed to provide a more efficient sampling device.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
17

Nyberg, Karl-Johan. "Performance Analysis of Detection System Design Algorithms." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/41789.

Full text
Abstract:
Detection systems are widely used in industry. Designers, operators and users of these systems need to choose an appropriate design, based on the intended usage and the operating environment. The purpose of this research is to analyze the effect of various system design variables (controllable) and system parameters (uncontrollable) on the performance of detection systems. To optimize system performance one must manage the tradeoff between two errors that can occur. A False Alarm occurs if the detection system falsely indicates a target is present, and a False Clear occurs if the detection system falsely fails to indicate a target is present. Given a particular detection system and a pre-specified false clear (or false alarm) rate, there is a minimal false alarm (or false clear) rate that can be achieved. Earlier research has developed methods that address this false alarm, false clear tradeoff problem (FAFCT) by formulating a Neyman-Pearson hypothesis problem, which can be solved as a Knapsack problem. The objective of this research is to develop guidelines that can be of help in designing detection systems. For example, what system design variables must be implemented to achieve a certain false clear standard for a parallel 2-sensor detection system for Salmonella detection? To meet this objective, an experimental design is constructed and an analysis of variance is performed. Computational results are obtained using the FAFCT methodology and the results are presented and analyzed using ROC (Receiver Operating Characteristic) curves and an analysis of variance. The research shows that sample size (i.e., the size of the test data set used to estimate the distribution of sensor responses) has very little effect on the FAFCT compared to other factors. The analysis clearly shows that correlation has the most influence on the FAFCT. Negatively correlated sensor responses outperform uncorrelated and positively correlated sensor responses with large margins, especially for strict FC-standards (the FC-standard is defined as the maximum allowed False Clear rate). The FC-standard is the second most influential design variable, followed by grid size. Suggestions for future research are also included.
Master of Science
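The false alarm / false clear tradeoff studied in this thesis can be illustrated with a simple threshold sweep over assumed Gaussian sensor scores: each threshold yields a (false alarm, false clear) pair, and one can pick the threshold that minimizes one error subject to a standard on the other. The score distributions and FC-standard below are invented for the illustration and are not the thesis's detection-system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sensor scores: target absent vs. target present.
absent  = rng.normal(0.0, 1.0, 100_000)
present = rng.normal(1.5, 1.0, 100_000)

fc_standard = 0.05                      # maximum allowed false clear rate (assumed)
best = None
for thr in np.linspace(-3, 5, 400):
    false_alarm = np.mean(absent >= thr)    # declare a target when none is present
    false_clear = np.mean(present < thr)    # miss a target that is present
    if false_clear <= fc_standard and (best is None or false_alarm < best[1]):
        best = (thr, false_alarm, false_clear)

print("threshold, FA rate, FC rate:", best)
```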
APA, Harvard, Vancouver, ISO, and other styles
18

Liu, Huazhou. "Digital Direction Finding System Design and Analysis." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060976413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ukhov, Ivan. "System-Level Analysis and Design under Uncertainty." Doctoral thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140758.

Full text
Abstract:
One major problem for the designer of electronic systems is the presence of uncertainty, which is due to phenomena such as process and workload variation. Very often, uncertainty is inherent and inevitable. If ignored, it can lead to degradation of the quality of service in the best case and to severe faults or burnt silicon in the worst case. Thus, it is crucial to analyze uncertainty and to mitigate its damaging consequences by designing electronic systems in such a way that uncertainty is effectively and efficiently taken into account. We begin by considering techniques for deterministic system-level analysis and design of certain aspects of electronic systems. These techniques do not take uncertainty into account, but they serve as a solid foundation for those that do. Our attention revolves primarily around power and temperature, as they are of central importance for attaining robustness and energy efficiency. We develop a novel approach to dynamic steady-state temperature analysis of electronic systems and apply it in the context of reliability optimization. We then proceed to develop techniques that address uncertainty. The first technique is designed to quantify the variability in process parameters, which is induced by process variation, across silicon wafers based on indirect and potentially incomplete and noisy measurements. The second technique is designed to study diverse system-level characteristics with respect to the variability originating from process variation. In particular, it allows for analyzing transient temperature profiles as well as dynamic steady-state temperature profiles of electronic systems. This is illustrated by considering a problem of design-space exploration with probabilistic constraints related to reliability. The third technique that we develop is designed to efficiently tackle the case of sources of uncertainty that are less regular than process variation, such as workload variation. This technique is exemplified by analyzing the effect that workload units with uncertain processing times have on the timing-, power-, and temperature-related characteristics of the system under consideration. We also address the issue of runtime management of electronic systems that are subject to uncertainty. In this context, we perform an early investigation into the utility of advanced prediction techniques for the purpose of fine-grained long-range forecasting of resource usage in large computer systems. All the proposed techniques are assessed by extensive experimental evaluations, which demonstrate the superior performance of our approaches to analysis and design of electronic systems compared to existing techniques.
APA, Harvard, Vancouver, ISO, and other styles
20

Descalzi, Douglas Hall. "System design of a satellite radio frequency interference analysis system /." This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02232010-020027/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Descalzi, Doug. "System design of a satellite radio frequency interference analysis system." Master's thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/41244.

Full text
Abstract:
The conceptual and preliminary design of the Dynamic Link Tool (DLTool) System are presented. The DLTool System performs moderate fidelity, satellite radio frequency (RF) interference and communications analysis. The primary customer of the DLTool System is Program 607 of Lockheed Martin Corporation (LMC) in Valley Forge, Pennsylvania. The system will dramatically improve the existing satellite RF interference analysis capabilities of Program 607, which are currently inadequate. Most importantly, the DLTool System will reduce the required analysis time for satellite RF interference problems from approximately 6 hours to 1 hour. The DLTool System is intended to provide the capability to quickly predict or anticipate potential periods of RF interference. The design of the DLTool System includes an analysis of the needs of the customer, a feasibility study, the definition and allocation of operational and maintenance requirements, and the functional analysis of the system. The system is designed for a workstation-based local area network with simultaneous users, and includes a graphical user interface for input and output. The core component of the DLTool System is customized C++ code that performs the computational analysis of user-defined satellite-ground station scenarios. The primary users of the system are communication engineers who will use the DLTool System to study RF interference issues for their customer.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Lu. "New Method for Robotic Systems Architecture Analysis, Modeling, and Design." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1562595008913311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wu, Hao. "Analysis and Design of Vehicular Networks." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7639.

Full text
Abstract:
Advances in computing and wireless communication technologies have increased interest in smart vehicles, vehicles equipped with significant computing, communication and sensing capabilities to provide services to travelers. Smart vehicles can be exploited to improve driving safety and comfort as well as optimize surface transportation systems. Wireless communications among vehicles and between vehicles and roadside infrastructures represent an important class of vehicle communications. One can envision creating an integrated radio network leveraging various wireless technologies that work together in a seamless fashion. Based on cost-performance tradeoffs, different network configurations may be appropriate for different environments. An understanding of the properties of different vehicular network architectures is absolutely necessary before services can be successfully deployed. Based on this understanding, efficient data services (e.g., data dissemination services) can be designed to accommodate application requirements. This thesis examines several research topics concerning both the evaluation and design of vehicular networks. We explore the properties of vehicle-to-vehicle (v2v) communications. We study the spatial propagation of information along the road using v2v communications. Our analysis identifies the vehicle traffic characteristics that significantly affect information propagation. We also evaluate the feasibility of propagating information along a highway. Several design alternatives exist to build infrastructure-based vehicular networks. Their characteristics have been evaluated in a realistic vehicular environment. Based on these evaluations, we have developed some insights into the design of future broadband vehicular networks capable of adapting to varying vehicle traffic conditions. Based on the above analysis, opportunistic forwarding that exploit vehicle mobility to overcome vehicular network partitioning appears to be a viable approach for data dissemination using v2v communications for applications that can tolerate some data loss and delay. We introduce a methodology to design enhanced opportunistic forwarding algorithms. Practical algorithms derived from this methodology have exhibited different performance/overhead tradeoffs. An in-depth understanding of wireless communication performance in a vehicular environment is necessary to provide the groundwork for realizing reliable mobile communication services. We have conducted an extensive set of field experiments to uncover the performance of short-range communications between vehicles and between vehicles and roadside stations in a specific highway scenario.
APA, Harvard, Vancouver, ISO, and other styles
24

Franco, Nicola. "Distributed Observer Analysis and Design." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19642/.

Full text
Abstract:
A distributed observer design is described for estimating the state of a continuous-time, input-free, linear system. This thesis explains how to construct the local estimators, which comprise the observer inputs and outputs, and shows what the requirements are for dealing with this structure. Every agent senses an output signal from the system and distributes it across a fixed-time network to its neighbors. The information flow increases the capability of each agent to estimate the state of the system and uses collaboration to improve the quality of the data. The proposed solution has several positive features compared to recent results in the literature, which include milder assumptions on the network connectivity, and the maximum dimension of the state of each observer does not exceed the order of the plant. The conditions are reduced to certain detectability requirements for each cluster of agents in the network, where a cluster is identified as a subset of agents that satisfy specific properties. Instead, the dimension of each observer is reduced to the number of possible observable states of the system, collected by the agent and by the neighbors.
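One common way to realize a distributed observer of the kind described above is to give each agent a local Luenberger-style observer driven by its own measurement plus a consensus term that pulls its estimate towards the estimates received from neighbors. The discrete-time toy below illustrates that structure for a two-agent, two-state system in which neither agent is locally detectable but the pair is jointly observable; the matrices, gains, and coupling weight are illustrative assumptions, not the thesis's design.

```python
import numpy as np

dt, steps, gamma = 0.01, 5000, 2.0          # step size, horizon, consensus gain (illustrative)
A = np.diag([0.1, 0.2])                     # unstable, input-free plant (toy example)
C = [np.array([[1.0, 0.0]]),                # agent 1 only senses x1 ...
     np.array([[0.0, 1.0]])]                # ... agent 2 only senses x2 (jointly observable)
L = [np.array([[1.1], [0.0]]),              # local output-injection gains (illustrative)
     np.array([[0.0], [1.2]])]

x = np.array([1.0, -1.0])                   # true state
xhat = [np.zeros(2), np.zeros(2)]           # each agent's local estimate

for _ in range(steps):
    x = x + dt * (A @ x)                    # plant update
    new = []
    for i in (0, 1):
        j = 1 - i                           # the single neighbour in this two-agent network
        innovation = (L[i] @ (C[i] @ x - C[i] @ xhat[i])).ravel()
        consensus = gamma * (xhat[j] - xhat[i])
        new.append(xhat[i] + dt * (A @ xhat[i] + innovation + consensus))
    xhat = new

# Neither agent can reconstruct the full state alone, yet both estimates converge
# thanks to the consensus coupling.
print([float(np.linalg.norm(x - xh)) for xh in xhat])
```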
APA, Harvard, Vancouver, ISO, and other styles
25

Sarshar, Marjan. "A prototype decision support system for Perturbation Analysis of Flexible Manufacturing Systems." Thesis, University of Manchester, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Inada, Kenichiro. "Analysis of Japanese Software Business." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59244.

Full text
Abstract:
Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 94-96).
Today, our society is surrounded by information systems, computers, and software. It is no exaggeration to say that our daily life depends on software and its functions. Accordingly, the business of software has seen miraculous growth in the last two decades and is playing a significant role in various industries. In accordance with the growing business needs for effective software and information systems, various firms in various countries have entered the business of software seeking prosperity. Some have succeeded, some have failed. What distinguishes these firms is their ability to manage and deliver quality products on demand, on time, at a low cost. To achieve such a goal, software firms have thought out different methods and tools, striving to establish their practice. Nevertheless, many software firms around the globe are struggling to satisfy their clients and achieve business success. With no exception, Japanese software firms are facing difficulties in managing software projects. While their ability to deliver high-quality products is well acknowledged within the software industry, their high cost structure and schedule delays are thought of as serious problems. Moreover, some of the transitions in the industry are forcing Japanese software firms to seek new opportunities. Therefore, it is important for Japanese software firms to establish more productive ways of developing software products and effective business strategies. The primary objective of this paper is to analyze the present conditions of Japanese software firms and to derive some recommendations which could enhance their current situation. It also includes a discussion of software development practices in US and Indian firms to better understand the strengths and weaknesses of Japanese firms and to capture some important concepts which can be applied to improve current practice.
by Kenichiro Inada.
S.M. in System Design and Management
APA, Harvard, Vancouver, ISO, and other styles
27

Jensen, Deron Eugene. "System-wide Performance Analysis for Virtualization." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1789.

Full text
Abstract:
With the current trend in cloud computing and virtualization, more organizations are moving their systems from a physical host to a virtual server. Although this can significantly reduce hardware, power, and administration costs, it can increase the cost of analyzing performance problems. With virtualization, there is an initial performance overhead, and as more virtual machines are added to a physical host the interference between the various guest machines increases. When this interference occurs, a virtualized guest application may not perform as expected. Little or no information about the interference is available to the virtual OS, and the current performance tools in the guest are unable to show this interference. We examine the interference that has been shown in previous research, and relate that to existing tools and research in root cause analysis. We show that in virtualization there are additional layers which need to be analyzed, and design a framework to determine whether degradation is occurring from an external virtualization layer. Additionally, we build a virtualization test suite with Xen and PostgreSQL and run multiple tests to create I/O interference. We show that our method can distinguish between a problem caused by interference from external systems and a problem from within the virtual guest.
APA, Harvard, Vancouver, ISO, and other styles
28

Burgess, Cheri Nicole Markt. "Value based analysis of acquisition portfolios." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59225.

Full text
Abstract:
Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010.
Vita. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 113-119).
Currently, program-funding allocation is based on program performance. Funding cuts commonly lead to a poor reflection on the program management assigned to the given program. If additional factors such as program risk and benefit are objectively factored in, this may lead to a more effective exit strategy for program capabilities that are no longer required. An enterprise architecture analysis and an applied framework case study were carried out to develop a methodology to quantify system-level value for the Office of the Assistant Secretary of the Air Force for Acquisition Research, Development, Test and Evaluation portfolio. Portfolio value is quantified in order to transition from a single-program, single-stakeholder value analysis to a program portfolio and stakeholder system composite analysis. This methodology is developed based on interviews, official organization literature, and a case study. The results of the applied framework case study on a portfolio of seven programs showed a positive correlation between quantitative capability, execution and risk data at the portfolio level and access to a more informed and objective identification of the programs of greatest interest and concern, as compared to a qualitative program-by-program analysis, when allocating Air Force Acquisition resources. This system includes 17 stakeholder categories, which significantly influence the allocation of resources for a portfolio worth roughly 0.4% of the US GDP. Interviewees include high-ranking leadership, including two 3-star Generals in the US Air Force.
by Cheri Nicole Markt Burgess.
S.M. in System Design and Management
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, Heechang. "An analysis of the impact of datacenter temperature on energy efficiency." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76590.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program; in conjunction with the SDM Fellows Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 65-66).
Optimizing the air temperature of datacenters is one way to improve the energy efficiency of datacenter cooling systems. Many datacenter owners have been interested in raising the room temperature as a quick and simple method to increase energy efficiency. The purpose of this paper is both to provide recommendations on maximizing the energy efficiency of datacenters by optimizing the datacenter temperature setpoint, and to understand the drivers of datacenter costs. The potential energy savings in the cooling system can drive higher energy use in IT equipment, so this optimization may not be a good trade-off. For this reason, this paper provides a detailed look at the overall effect of temperature changes on energy in order to determine the optimal datacenter temperature setpoint. Since this optimal temperature range varies by equipment and other factors in the datacenter, each datacenter should identify its appropriate temperature based on the optimization calculation in this paper. Sensitivity analysis is used to identify the drivers of the cost of ownership in a datacenter and to identify opportunities for datacenter efficiency improvement. The model is also used to evaluate potential datacenter efficiency.
by Heechang Lee.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
30

Pabolu, Venkata Krishna Rao. "DFM – Weldability analysis and system development." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Datorstödd konstruktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-29316.

Full text
Abstract:
This thesis work is mainly focused on the processes involved in manufacturing aircraft engine components, in particular welding and welding methods. The welding fundamentals and the thesis support have been provided by GKN Aerospace Sweden AB, a global aerospace product supplier. The basic objective of this thesis work is to improve the usability of an automation system developed for evaluating the weldability of a part. The long-run maintainability of this automation system has been considered. The thesis work addresses the problems arising during the use of a computerised automated system, such as process transparency, recognisability and detail traceability, and other maintenance aspects such as maintainability and upgradability of the system over time. The action research methodology has been used to address these problems. Different approaches have been tried to find solutions to those problems. An attempt has been made to use a rule-based manufacturability analysis system to analyse the weldability of a component in terms of different welding techniques. The software "Howtomation" has been used to improve the transparency of this analysis system. User recognisability and detail traceability have been taken into account during the use of the rule-based analysis system. System attributes such as maintainability, upgradability and adaptability to modern welding methods have been addressed, as has the system's suitability for large-scale analysis.
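The abstract does not spell out the weldability rules themselves, so the following is only a speculative sketch of what a small rule-based check might look like: each welding method is guarded by simple predicates on part attributes, and a method is reported as applicable when all of its rules hold. The methods, attribute names, and limits are invented for the example.

```python
# Minimal sketch of a rule-based weldability check (all rules and limits are illustrative).
RULES = {
    "laser welding": [
        lambda p: p["thickness_mm"] <= 6.0,
        lambda p: p["joint_gap_mm"] <= 0.2,
    ],
    "TIG welding": [
        lambda p: p["thickness_mm"] <= 10.0,
        lambda p: p["material"] in {"Ti-6Al-4V", "Inconel 718", "stainless steel"},
    ],
}

def weldability(part):
    """Return the welding methods whose rules are all satisfied by the part."""
    return [method for method, rules in RULES.items()
            if all(rule(part) for rule in rules)]

part = {"material": "Inconel 718", "thickness_mm": 4.5, "joint_gap_mm": 0.1}
print(weldability(part))   # -> ['laser welding', 'TIG welding']
```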
APA, Harvard, Vancouver, ISO, and other styles
31

Nagle, Brian J. (Brian James). "System analysis and design of a low-cost micromechanical seeker system." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47894.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008.
Includes bibliographical references (p. 137-140).
Precision guided targeting systems have been in use by the U.S. military for the last half-century. The desire for high targeting accuracies while maintaining minimal collateral damage has driven the implementation of guidance systems on a myriad of different platforms. Current seeker systems using global positioning system (GPS)-aided technology offer good accuracy, but are limited by an adversary's signal jamming capabilities and the dynamic nature of the military target environment. Furthermore, ultra-accurate inertial measurement units (IMU) that serve as stand-alone guidance systems are very expensive and offer no terminal guidance enhancement. As a result, it is cost prohibitive to equip some platforms with precision guidance capability. The demand for high accuracy at low cost has prompted substantial recent development of micro-electromechanical systems (MEMS) IMU's and optical focal plane arrays (FPA). The resulting decreasing device size and production costs coupled with higher unit performance have created opportunities for implementing seeker-enabled systems on platforms previously deemed impractical. As a result, the author proposes a design methodology to develop a low-cost system while satisfying stringent performance requirements. The methodology is developed within the context of a strap-down seeker system for tactical applications. The design tenets of the optical sensor, the inertial sensor, and projectile flight dynamics were analyzed in-depth for the specific scenario. The results of each analysis were combined to formulate a proposed system.
The system was then modeled to produce system miss distance estimates for differing engagement situations. The system demonstrated 3σ miss distance estimates that were less than the maximum allowable error in each case. The system cost was tabulated and a production price was approximated. Using current technology and pricing information for the main components, the analysis shows that a system with a 3σ miss distance of 0.801 m could be built for a unit price in the range of $11,730 - $19,550, depending on production costs. Design limitations are discussed, as well as strategies to improve the analysis for future consideration.
by Brian J. Nagle.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Qiang S. M. Massachusetts Institute of Technology. "Analysis of oil market fundamental using a system dynamics approach." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/105312.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 124-129).
An unexpected, substantial reduction of the oil price since June 2014 has drawn great attention from governments, financial institutions and the oil industry, because oil supply has been a critical factor influencing the energy markets, economic development and geopolitics worldwide. From a system perspective, the oil price results from the interactions of multiple entities and forces in the world oil market, and the impact of the low price has started propagating through the whole value chain of the industry, resulting in a reduction of investment plans. Therefore, it is necessary to re-evaluate the key factors of the system and analyze how the oil industry would evolve when those factors vary. System dynamics modeling has proved to be an efficient tool for capturing the dynamics of a complex system such as the world oil market, and a system dynamics model is constructed in this thesis to understand how the world oil market would react to various disturbances. Based on a thorough review of the oil industry and the world oil market, key players are identified for major suppliers (OPEC, US, Non-OPEC, and Rest of the World - ROW) and major consumers (US, China, and ROW), and correlations among those players are established in the system dynamics model. Different scenarios are created and simulated to explore the dynamics of the world oil market. Starting from an initial equilibrium state, different scenarios simulate the impact of changes in OPEC oil production, US oil demand, and China's oil demand, respectively. Then the consequences of combining the changes from the previous scenarios are discussed. The constructed system dynamics model is able to capture the fundamental dynamics of the world oil market. Specifically, simulations addressing the booming of unconventional oil, changing oil production of OPEC, and the slowing down of China's economic development, which reflect the real situation in the current oil market, confirm the reduction of the oil price and estimate how long the low oil price would last in different scenarios. Although the oil price predictions have to be taken with a great degree of caution, the developed model is able to provide insightful implications for industry analysts and policy makers. The major challenges lie in how to balance the relationship between market shares and financial loss for oil producers, and energy security for major consumers.
by Qiang Li.
S.M. in Engineering and Management
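System dynamics models of commodity markets are typically built from stocks (such as inventory), flows (production and consumption), and feedback through price. The toy loop below shows only that general mechanism with made-up parameters and elasticities; it is not the thesis's model of OPEC, US, and Chinese players, but it reproduces the qualitative behaviour of a price decline after a supply boom.

```python
# Toy system-dynamics loop: inventory stock, price feedback, supply/demand flows.
# All parameters and elasticities are illustrative and not taken from the thesis.
dt, years = 0.01, 10.0
price, inventory = 100.0, 1.0            # $/bbl, normalized inventory stock
ref_price, ref_inventory = 100.0, 1.0

for _ in range(int(years / dt)):
    supply = 1.05 * (price / ref_price) ** 0.1     # 5% extra supply (e.g., an unconventional-oil boom)
    demand = 1.0 * (price / ref_price) ** -0.05    # demand is nearly price-inelastic
    inventory += dt * (supply - demand)            # the stock accumulates the net flow
    # Price adjusts downward when inventory builds above its reference level.
    price += dt * (-50.0 * (inventory - ref_inventory))

print(f"price after {years:.0f} years: {price:.1f} $/bbl")
```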
APA, Harvard, Vancouver, ISO, and other styles
33

McCoy, Kathleen Marie LCDR. "Design and analysis of US Navy shipbuilding contract architecture." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100110.

Full text
Abstract:
Thesis: Nav. E., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 92-95).
Contracting for US Navy ship procurement is complex due to several factors such as budgetary and political concerns, sole or near-sole source environments, and long lead-time construction. In the current climate of shrinking budgets, it is especially important to set programs up for financial success. One potential area for cost management improvement in acquisition programs is the initial contract and incentive structure. If shipbuilding contracts could be described in engineering architectural terms, then perhaps that architecture could provide better clarity of contract options. Further, if contracting can be described as an engineering architecture, then perhaps that architecture could be optimized for a given result. These are the central questions of this thesis. To answer them, interviews were conducted with several experienced individuals from both industry and the government. Additionally, past shipbuilding contracts in both the US and Canada were examined. These insights were then used to form a contract architecture concept in accordance with the tradespace engineering paradigm. From the concept definition came the design vector definition, which included variables such as shareline definition, incentives, and contracted profit percentage. The tradespace was then populated by manipulating the design vector parameters. The Palisade tool @Risk was used to conduct the design vector manipulation and tradespace population. @Risk is an Excel plug-in that allows uncertain variables to be defined by probability distributions. The tradespace of contract outcomes was then evaluated against utilities such as cost, profit, and risk. Although the factors affecting the contracting environment are complex, and not all are modeled, quantitative modeling allows the architect to roughly evaluate different approaches, rather than just basing the contract on past models. It also gives the government the ability to check whether shipbuilder-furnished predicted costs are reasonable for a given contract structure.
by Kathleen Marie McCoy.
Nav. E.
S.M. in Engineering and Management
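One concrete element of such a contract architecture is the shareline of a fixed-price-incentive contract, under which cost overruns and underruns are shared between the government and the shipbuilder up to a price ceiling. The Monte Carlo fragment below evaluates an assumed shareline against an uncertain actual cost using plain NumPy rather than the Palisade @Risk plug-in; the target cost, share ratio, ceiling, and cost distribution are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed fixed-price-incentive contract geometry (illustrative numbers, $M).
target_cost, target_profit = 700.0, 70.0
share_ratio = 0.5                      # contractor's share of any overrun/underrun
ceiling_price = 850.0                  # the government pays no more than this

# Uncertain actual cost, modelled here as a lognormal spread around the target.
actual_cost = target_cost * rng.lognormal(mean=0.0, sigma=0.12, size=100_000)

profit = target_profit + share_ratio * (target_cost - actual_cost)
price = np.minimum(actual_cost + profit, ceiling_price)   # ceiling caps what the government pays
profit = price - actual_cost                              # contractor absorbs costs above the ceiling

print(f"mean price to government: {price.mean():.0f} $M")
print(f"probability contractor loses money: {np.mean(profit < 0):.1%}")
```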
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Hyun Seop. "Requirement analysis framework of naval military system for expeditionary warfare." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50403.

Full text
Abstract:
Military systems are getting more complex due to the demands of various types of missions, rapidly evolving technologies, and budgetary constraints. In order to support complex military systems, there is a need to develop a new naval logistic asset that can respond to global missions effectively. This development is based on requirements which must be satisfiable within the budgetary constraints, address pressing real-world needs, and allow designers to innovate. This research is conducted to produce feasible and viable requirements for naval logistic assets in complex military systems. The process of finding these requirements involves diverse uncertainties about logistics, the environment and the missions. To understand and address these uncertainties, this research includes instability analysis, operational analysis, sea state analysis and disembarkation analysis. Through adaptive Monte Carlo simulation with maximum entropy, uncertainties are considered with their corresponding probability distributions. From the Monte Carlo simulation, the concept of Probabilistic Logistic Utility (PLU) was created as a measure of logistic ability. To demonstrate the usability of this research, the procedure is applied to a Medium Exploratory Connector (MEC), which is an Office of Naval Research (ONR) innovative naval prototype. Finally, the preliminary design and multi-criteria decision-making method become capable of incorporating requirements that account for uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
35

Bayar, Cevdet. "Optical Design And Analysis Of A Riflescope System." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12611093/index.pdf.

Full text
Abstract:
Today, riflescope systems are widely used, mostly by military forces. In this study, a riflescope working in the visible range (400-700 nm) is designed. The riflescope has a 3-degree field of view and a maximum total track of 15 cm. The total design length is limited to 15 cm because a short riflescope is more stable than a long one with respect to thermal instability and vibrational effects. Taking the cost factor into account, only two types of glass are used in the design: N-BK7, a crown glass, and N-F2, a flint glass. Moreover, a Schmidt-Pechan prism is used to produce an erect image. The optical performance analysis of the design is also carried out for a production-ready riflescope system.
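The abstract names the two glasses but not the element powers; purely as a first-order illustration, the standard thin-lens achromat split shows how an N-BK7/N-F2 pair can cancel primary axial colour. The Abbe numbers are nominal catalogue values and the 100 mm objective focal length is an assumption, not a figure from the thesis:

```python
# Thin-lens achromatic doublet split for the objective: choose element powers
# so that primary axial colour cancels:  phi_1/V_1 + phi_2/V_2 = 0.
V_BK7, V_F2 = 64.2, 36.4      # nominal Abbe numbers for N-BK7 (crown) and N-F2 (flint)
f_objective = 100.0           # mm, assumed objective focal length (not from the abstract)

phi = 1.0 / f_objective
phi_crown = phi * V_BK7 / (V_BK7 - V_F2)     # positive crown element
phi_flint = -phi * V_F2 / (V_BK7 - V_F2)     # negative flint element

print(f"crown element focal length: {1 / phi_crown:.1f} mm")
print(f"flint element focal length: {1 / phi_flint:.1f} mm")
print(f"residual: {phi_crown / V_BK7 + phi_flint / V_F2:.2e}")  # ~0 -> achromatic condition met
```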
APA, Harvard, Vancouver, ISO, and other styles
36

Faanes, Audun. "Controllability analysis for process and control system design." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Natural Sciences and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-35.

Full text
Abstract:

Controllability is the ability of a process to achieve acceptable performance, and in this thesis we use controllability analysis in the design of buffer tanks, feedforward controllers, and multivariable controllers such as model predictive control (MPC).

There is still increasing pressure on the process industry, both from competitors (price and quality) and from society (safety and pollution), and one important contribution is smooth and stable production. Thus, it is important to dampen the effect of uncontrolled variations (disturbances) that the process may experience.

The process itself often dampens high-frequency disturbances, and feedback controllers are installed to handle the low-frequency part of the disturbances, including at steady state if integral action is applied. However, there may be an intermediate frequency range where neither of these two dampens the disturbances sufficiently. In the first part of this thesis we present methods for the design of buffer tanks based on this idea. Both mixing tanks for quality disturbances and surge tanks with “slow” level control for flow-rate variations are addressed.

Neutralization is usually performed in one or several mixing tanks, and we give recommendations for tank sizes and the number of tanks. With local PI or PID control, we recommend equal tanks, and give a simple formula for the total volume. We also give recommendations for design of buffer tanks for other types of processes. We propose first to determine the required transfer function to achieve the required performance, and thereafter to find a physical realization of this transfer function.
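A minimal numerical sketch of the idea above — first decide how much attenuation is needed at the frequency where feedback no longer helps, then realize it physically as n equal mixing tanks in series. The attenuation factor, frequency, and flow rate below are assumptions for illustration, not the thesis's worked numbers:

```python
import math

def total_buffer_volume(k, omega, q, n):
    """Total volume of n equal mixing tanks in series needed to attenuate a
    sinusoidal quality disturbance by a factor k at frequency omega [rad/s],
    assuming each tank acts as a first-order filter 1/(tau*s + 1).
    """
    tau = math.sqrt(k ** (2.0 / n) - 1.0) / omega   # required time constant per tank
    return n * tau * q                               # volume = residence time * flow rate

q = 0.01          # m^3/s feed flow (assumed)
omega = 0.05      # rad/s, frequency where feedback control stops helping (assumed)
for n in (1, 2, 3, 4):
    print(n, f"{total_buffer_volume(k=100.0, omega=omega, q=q, n=n):.1f} m^3")
```

With these assumed numbers the total volume drops sharply as more (smaller) tanks are used in series, which is the trade-off the tank-number recommendation addresses.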

Alternatively, if measurements of the disturbances are available, one may apply feedforward control to handle the intermediate frequency range. Feedforward control is based mainly on a model, and we study the effect of model errors on the performance. We define feedforward sensitivities. For some model classes we provide rules for when the feedforward controller is effective, and we also design robust controllers such as μ-optimal feedforward controllers.

Multivariable controllers, such as model predictive control (MPC), may use both feedforward and feedback control, and the differences between these two also manifest themselves in a multivariable controller. We use the class of serial processes to gain insight into how a multivariable controller works. For one specific MPC we develop a state-space formulation of the controller and its state estimator under the assumption that no constraints are active. Thus, for example, the gains of each channel of the MPC (from measurements to the control inputs) can be found, which gives further insight into the controller. Both a neutralization process example and an experiment are used to illustrate the ideas.


Paper 4 reprinted with kind permission of the Research Council of Norway. Paper 8 reprinted with kind permission of Elsevier Publishing, http://www.sciencedirect.com
APA, Harvard, Vancouver, ISO, and other styles
37

Nicol, Robert A. (Robert Arthur) 1969. "Design and analysis of an enterprise metrics system." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/82686.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaf 82).
by Robert A. Nicol.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
38

Mohammad, Fouad Abbas. "Analysis and design of the LR55 track system." Thesis, Liverpool John Moores University, 1998. http://researchonline.ljmu.ac.uk/4952/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Aull, Mark J. "Airborne Wind Energy System Analysis and Design Optimization." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592168644639446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

LEE, JHIH-REN, and 李志仁. "Systems Analysis and Design to Construction Temporary Workers Statistics Analysis System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/s9275m.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Construction Engineering
105
With the decline of the construction industry in Taiwan and the high population density of this small country, many contractors tend to submit low bids, causing construction-site profits to decrease steadily. The management fee included in the total construction cost must cover the cost of temporary resources, and the industry now typically budgets this as a fixed proportion of the total construction cost. However, the actual expenditure usually far exceeds the estimate, and the gap is most obvious in the cost of temporary workers. As cloud computing and handheld mobile devices mature, many industries are combining information technology to promote industrial integration, and the construction industry needs this most of all. To increase industrial competitiveness, the aim of this study was to address the statistics and analysis of work sign-in sheets and the centralization of this process in the management of temporary workers. The study therefore adopted system analysis and design methods to build a statistical and analytical system for temporary workers suitable for use by general contractors. By filling out digital forms on site, the data are stored in an integrated database for each contractor, reducing the complicated paperwork of traditional temporary-worker management. With the system's analytical functions, the cost of temporary workers can be estimated from the database during tendering, and during construction the number of workers for different work items can be decided according to the statistical and analytical reports. The study was divided into three stages. First, the real needs of site users were captured through a user requirements analysis, and the system architecture and function modules were developed from the analytical results. The Entity-Relationship (ER) model was then used to build the relationships between the database and its tables. Finally, an easy-to-use user interface was designed so that users can input data intuitively. The design methods of this study are intended to serve as a blueprint for firms that need to develop similar systems.
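A minimal sketch of the sign-in-sheet data model and cost roll-up described above, using SQLite; the table names, columns, and sample rows are illustrative assumptions, not the thesis's actual ER design:

```python
import sqlite3

# Minimal sketch of a sign-in data model; names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project  (project_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE work_item(item_id    INTEGER PRIMARY KEY, project_id INTEGER REFERENCES project,
                       description TEXT);
CREATE TABLE sign_in  (sign_in_id INTEGER PRIMARY KEY, item_id INTEGER REFERENCES work_item,
                       worker TEXT, work_date TEXT, hours REAL, hourly_rate REAL);
""")
conn.execute("INSERT INTO project VALUES (1, 'Office tower')")
conn.execute("INSERT INTO work_item VALUES (1, 1, 'Formwork'), (2, 1, 'Rebar')")
conn.executemany(
    "INSERT INTO sign_in(item_id, worker, work_date, hours, hourly_rate) VALUES (?,?,?,?,?)",
    [(1, 'Chen', '2017-03-01', 8, 180), (1, 'Lin', '2017-03-01', 8, 180),
     (2, 'Wu', '2017-03-01', 4, 200)])

# Roll-up used for cost estimation during tendering: labour hours and cost per work item.
for row in conn.execute("""SELECT w.description, SUM(s.hours) AS hours,
                                  SUM(s.hours * s.hourly_rate) AS cost
                           FROM sign_in s JOIN work_item w USING(item_id)
                           GROUP BY w.description"""):
    print(row)
```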
APA, Harvard, Vancouver, ISO, and other styles
41

Gershwin, Stanley B., Nicola Maggio, Andrea Matta, Tullio Tolio, and Loren M. Werner. "Efficient Methods for Manufacturing System Analysis and Design." 2002. http://hdl.handle.net/1721.1/4028.

Full text
Abstract:
The goal of the research described here is to develop tools to assist the rapid analysis and design of manufacturing systems. The methods we describe are based on mathematical models of production systems. We combine earlier work on the decomposition method for factory performance prediction and design with the hedging point method for scheduling. We propose an approach that treats design and operation in a unified manner. The models we study take many of the most important features and phenomena in factories into account, including random failures and repairs of machines, finite buffers, random demand, production lines, assembly and disassembly, imperfect yield, and token-based control policies.
Singapore-MIT Alliance (SMA)
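The hedging-point scheduling policy combined with the decomposition method can be illustrated by a minimal single-machine sketch; the surplus dynamics, rates, and hedging level below are assumptions for the example, not the paper's model:

```python
def hedging_point_control(surplus, demand_rate, max_rate, hedging_point):
    """Single-part, single-machine hedging-point policy (sketch):
    run flat out while the production surplus is below the hedging point,
    track demand exactly at the hedging point, and stop above it.
    """
    if surplus < hedging_point:
        return max_rate
    if surplus == hedging_point:
        return demand_rate
    return 0.0

# Toy trajectory: surplus x(t+dt) = x(t) + (u - d) * dt while the machine is up.
x, d, u_max, z, dt = -5.0, 1.0, 2.5, 3.0, 0.1
for _ in range(100):
    u = hedging_point_control(x, d, u_max, z)
    x = min(x + (u - d) * dt, z)   # clamp so the discrete step does not overshoot the hedging point
print(f"surplus settles at the hedging point: {x:.1f}")
```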
APA, Harvard, Vancouver, ISO, and other styles
42

"Manufacturing system design : tradeoff curve analysis." Alfred P. Sloan School of Management, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/2570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

詹森雄. "Design and Analysis of Pendant System." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/41423350965046652381.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Power Mechanical Engineering
95
A pendant system is frequently used equipment in the operating room; its main function is to integrate the various gas, electrical, and signal supply lines and to support the operating equipment. It gives nursing staff a convenient way to use that equipment and prevents them from stumbling over supply lines. A pendant system usually has a positioning capability so that equipment can be placed at a suitable position in the operating room, and it needs an adequate load capacity for the installed equipment. The most frequently used pendant systems fall into two groups: single-arm and dual-arm types. The single-arm type usually has a high load capacity, but it can only position equipment on the circumference defined by its arm radius; the dual-arm type can position equipment anywhere within the full circle of its arm radius, but has a comparatively low load capacity. Therefore, when choosing a pendant there is a trade-off between these two requirements (load capacity and positioning ability). In this study, Quality Function Deployment is used to derive the specifications and to design a pendant system with a single arm and full-circle positioning capability.
APA, Harvard, Vancouver, ISO, and other styles
44

Tsao, Ju-Hsuan, and 曹如璇. "Systems Analysis and Design to Construction Project Schedule Management System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/qxpgux.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Construction Engineering
105
Progress control is one of the most important categories in the construction industry, but many situations and factors make schedules hard to manage. This study therefore adopted a system analysis and design approach to an integrated information system that aims to solve problems arising at the construction site. With a web-based system, the goal is to avoid duplicated input, shorten the time of information delivery, and simplify operating processes, thereby minimizing the time lag in construction-site management. For the reasons outlined above, the study designed a schedule-management prototype system by applying the concepts of system analysis and design. First, the users' demands were analyzed, and the system architecture and functional modules were designed according to the analysis results. The entity-relationship model was then used to create the relationships between the tables and the database. Finally, a user interface was designed. The schedule-management prototype system based on the methods and analysis above is expected to improve progress control and help the project manager make essential decisions.
APA, Harvard, Vancouver, ISO, and other styles
45

Hsu, Ming-Shan, and 許銘珊. "Systems Analysis and Design to Construction Project Scheduling Managment System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/28286556142512247981.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Construction Engineering
104
In recent years, cloud computing and mobile devices have matured, and industries have promoted e-commerce platforms or used information technology to integrate large amounts of information; the construction industry is no exception. A review of the literature shows that the construction industry currently encounters many problems in progress control, and the main purpose of this study is to systematize these control processes. This study therefore develops a prototype schedule-control system for construction projects. The study first improved the traditional implementation framework for project management before introducing the improved framework into the system as its basic workflow. Then, user needs were carefully analyzed by applying concepts from information system analysis and design, and the results served as the basis for the system architecture and functional modules. This was followed by modeling the system's data flows using data flow diagrams in a graphics tool. Data relationships were then established by referring to the database design and represented in an entity-relationship diagram. Finally, a prototype user interface was designed; a graphical display lets users try the future operation panel of the system, minimizing major changes when it goes live and eliminating the need to purchase additional commercial project-management hardware and software. This study contributes to the systemization of progress-control processes by discussing the concepts and logic in the system and clearly explaining their definitions, such as critical information, user-need analysis, and system architecture design. The schedule-control system developed in this study is expected to improve the accuracy of progress control by integrating related operations with information technology and to provide more decision options for project management.
APA, Harvard, Vancouver, ISO, and other styles
46

Uddin, Amad, I. Felician Campean, and M. Khurshid Khan. "Development of an Interface Analysis Template for System Design Analysis." 2015. http://hdl.handle.net/10454/8083.

Full text
Abstract:
Interface definition is an essential and integral part of systems engineering. In current practice, interface requirements or control documents are generally used to define systems or subsystems interfaces. One of the challenges with the use of such documents in product development process is the diversity in their types, methodology, contents coverage, and structure across various design levels and across multidisciplinary teams, which often impedes the design process. It is important that interface information is described with appropriate detail and minimal or no ambiguity at each design level. The purpose of this paper is to present an interface analysis template (IAT) as a structured tool and coherent methodology, built upon a critical review of existing literature concepts, with the aim of using and implementing the same template for capturing interface requirements at various levels of design starting from stakeholders' level down to component level analysis. The proposed IAT is illustrated through a desktop case study of an electric pencil sharpener, and two examples of application to automotive systems.
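A rough sketch of how such a template might be represented in code; the design levels, exchange categories, and example record below are generic interface-analysis assumptions, not necessarily the fields of the IAT proposed in the paper:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FlowType(Enum):
    # Generic interface exchange categories; assumed, not necessarily the IAT's own taxonomy.
    ENERGY = "energy"
    MATERIAL = "material"
    INFORMATION = "information"
    SPATIAL = "spatial"

@dataclass
class InterfaceRecord:
    """One row of a simplified interface analysis template (illustrative only)."""
    design_level: str          # e.g. "stakeholder", "system", "subsystem", "component"
    from_element: str
    to_element: str
    flow: FlowType
    requirement: str

@dataclass
class InterfaceAnalysisTemplate:
    system: str
    records: List[InterfaceRecord] = field(default_factory=list)

    def at_level(self, level: str) -> List[InterfaceRecord]:
        return [r for r in self.records if r.design_level == level]

# Hypothetical entry for the pencil-sharpener case study mentioned in the abstract.
iat = InterfaceAnalysisTemplate("pencil sharpener")
iat.records.append(InterfaceRecord("system", "motor", "cutter assembly", FlowType.ENERGY,
                                   "Transmit sufficient torque at the cutter shaft (value assumed)"))
print(len(iat.at_level("system")))
```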
APA, Harvard, Vancouver, ISO, and other styles
47

KUO-CHUAN, WANG, and 王國權. "Design and Analysis of Zoom Lens System." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/26303979572214954276.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Institute of Electro-Optical Technology
91
Because the crucial issues of the digital still camera have largely been solved, digital still cameras have been widely accepted by consumers in recent years. Among the manufacturing expenses of a digital still camera, the zoom lens accounts for most of the prime cost. Designing an optical zoom system is more difficult than designing a fixed-focus lens because the image plane must remain fixed during zooming; thus, canceling not only the monochromatic Seidel aberrations but also the chromatic aberrations becomes very complicated. Traditionally, the optical designer follows the customer's requirements and completes the zoom system by first solving the Gaussian (first-order) solution of thin-lens theory, i.e., the slide trajectories, lens powers, and magnifications. If the first-order solution cannot meet the system requirements, the Gaussian solution has to be worked out again. In this thesis, the zoom theory is introduced and analyzed, progressing from one-group to two-group and finally three-group structures. The concepts of aberration balance and of freeing the system from chromatic aberration are also discussed, together with a fast solution of the first-order properties of the zoom system using the merit function provided by optical design software. In this project, a 200-megapixel zoom lens with a 3x zoom ratio and a - + + power arrangement is presented. There are eight lens elements in the system: six glass lenses and two aspheric plastic lenses. The second group acts as the zooming group that changes the system's effective focal length, while the first group operates as the compensator. The lens achieves the characteristics of a high-end digital camera: its MTF is larger than 30% at 150 lp/mm, and its distortion is smaller than
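The first-order (Gaussian) behaviour discussed above can be illustrated with the two-thin-lens combination formula φ = φ₁ + φ₂ − d·φ₁·φ₂; the focal lengths and separations below are assumptions chosen to show the focal length changing with group separation, not values from the thesis:

```python
def two_lens_first_order(f1, f2, d):
    """First-order combination of two thin lenses separated by d (same length units).

    Returns (effective focal length, back focal distance from the second lens).
    """
    P = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)    # combined power
    efl = 1.0 / P
    bfd = efl * (1.0 - d / f1)                  # distance from lens 2 to the image-side focal point
    return efl, bfd

# Toy "- +" pair: a negative front group and a positive moving group (values assumed).
# Moving the second group changes the focal length; a compensator (third group,
# not modelled here) would hold the image plane fixed.
f1, f2 = -40.0, 25.0   # mm
for d in (5.0, 10.0, 15.0):
    efl, bfd = two_lens_first_order(f1, f2, d)
    print(f"d = {d:4.1f} mm  ->  EFL = {efl:6.1f} mm,  BFD = {bfd:6.1f} mm")
```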
APA, Harvard, Vancouver, ISO, and other styles
48

Chang, Hwey Chung, and 章惠群. "Design of Electrogastric Record And Analysis System." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/72510285369470908284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

陳信榮. "Integration of structural analysis and design system." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/79617208637326007634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Su, Chia-Wen, and 蘇家雯. "Design and Analysis of RFID System Antenna." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/27368349178720546972.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Communications and Navigation Engineering
95
The development of communication systems provides people with more convenience in daily life, such as saving manpower and time. Beyond voice and image communication over wired and wireless networks, RFID applied to inventory management is a new and growing market that can handle everyday affairs. Because of its wide range of applications, the antenna is studied here in terms of operating frequency, product outline, transmission distance and properties, and the matching design for the chip. From the application point of view, RFID tags used for stocking, billing, and identification share the characteristics of being short-lived and non-recycled. Hence, metal is avoided so that the antennas can be mass-produced at low cost and easily surface-bonded. Polyvinyl chloride (PVC) and PET, with dielectric constants of about three, are considered for the RFID antenna design; for experimental convenience, FR-4 PCB is used to verify the design and to measure the bandwidth, radiation pattern, VSWR, and so on. In this thesis, a dipole structure is adopted for the reader antenna and a folded structure for the tag antenna, because of their wide beam, bandwidth, and unbalanced input. Based on the theory of the shortened dipole and the effective antenna length, a dipole reader antenna is designed with a 50-ohm system input covering 866 MHz, 915 MHz, and 953 MHz; its impedance bandwidth is about 100 MHz (S11 < -10 dB). A quarter-wavelength impedance transformer is designed to provide a 50-ohm match between the antenna and the reader chip, and the area of the end loading is varied to fine-tune the imaginary part of the impedance so that it changes slowly. The tag antenna adopts a folded structure with an effective antenna length: the folded structure controls the imaginary part of the impedance, while the effective length determines the real part.
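Two of the relations mentioned above — the half-wave dipole length at the three bands and the quarter-wave transformer impedance — can be checked with a short sketch; the 0.95 shortening factor and the 75-ohm antenna impedance are assumptions, not measurements from the thesis:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_length(freq_hz, shortening=0.95):
    """Physical length of a thin half-wave dipole; the ~0.95 end-effect
    shortening factor is a typical rule of thumb, not a value from the thesis."""
    return shortening * C / (2.0 * freq_hz)

def quarter_wave_transformer(z_in, z_load):
    """Characteristic impedance of a quarter-wavelength matching section."""
    return math.sqrt(z_in * z_load)

for f in (866e6, 915e6, 953e6):
    print(f"{f / 1e6:.0f} MHz: dipole length ~ {half_wave_dipole_length(f) * 100:.1f} cm")

# Example: matching a 50-ohm reader port to an (assumed) 75-ohm antenna input.
print(f"transformer impedance ~ {quarter_wave_transformer(50.0, 75.0):.1f} ohm")
```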
APA, Harvard, Vancouver, ISO, and other styles
