
Dissertations / Theses on the topic 'Data Flow Design'


Consult the top 50 dissertations / theses for your research on the topic 'Data Flow Design.'


1

Lo, I.-Lung. "Data flow description with VHDL." Thesis, Monterey, California: Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA246211.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Lee, Chin-Hwa. Second Reader: Cotton, Mitchell L. "December 1990." Description based on title screen as viewed on April 1, 2010. DTIC Identifier(s): Computer Aided Design, High Level Languages, Computerized Simulation, Theses, VHSIC (Very High Speed Integrated Circuits), VHDL (VHSIC Hardware Description Language). Author(s) subject terms: W-4 Computer, PC, TAR, RAM, ACC, ALU, B_REG, IR, Controller, Test_Bench, VHDL. Includes bibliographical references (p. 113). Also available in print.
2

Malayattil, Sarosh Aravind. "Design of a Multibus Data-Flow Processor Architecture." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31379.

Abstract:
General-purpose microcontrollers have been used as computational elements in various spheres of technology. Because of the distinct requirements of specific application areas, however, general-purpose microcontrollers are not always the best solution. There is a need for specialized processor architectures for specific application areas. This thesis discusses the design of such a specialized processor architecture targeted at event-driven sensor applications. It presents an augmented multibus dataflow processor architecture and an automation framework suitable for executing a range of event-driven applications in an energy-efficient manner. The energy efficiency of the multibus processor architecture is demonstrated by comparing the energy usage of the architecture with that of a PIC12F675 microcontroller.
Master of Science
3

Huang, Henna Priscilla. "Hybrid flow data center network architecture design and analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108998.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 127-132).
In this thesis, we propose a hybrid flow network architecture for future data centers. The hybrid flow architecture has its origins in the early 1990s with studies on all-optical networks and fiber-optic computer networks. Research in optical flow switching has spanned over two decades. Our contribution to the study of all-optical networks concerns the performance of hybrid flow data center networks. We compare the delay performance of hybrid flow architectures and traditional packet-switched networks in future data centers. We present a simplified data center traffic model, where data center traffic is categorized into mice traffic and elephant flows. The electronic packet-switched architecture allows for low overhead and complexity for small transactions. However, mice traffic suffers as the size, fraction, and arrival rates of elephant flows increase. In the hybrid flow architecture, elephant flows are transmitted on an all-optical flow-switched data plane, where wavelength channels are reserved for the duration of a flow. In addition, the hybrid flow architecture allows for the dynamic allocation of optical wavelengths. In electronic packet-switched networks, wavelength assignments are static, and traditional networking protocols do not consider the optical domain in routing decisions. We show that the hybrid flow architecture allows for superior delay performance compared to the electronic packet-switched architecture as data rates and data volume increase in future data center networks.
by Henna Huang.
Ph. D.
4

Huang, Henna Priscilla. "Transport layer protocol design over flow-switched data networks." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75711.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 135-136).
In this work, we explore transport layer protocol design for an optical flow-switched network. The objective of the protocol design is to guarantee the reliable delivery of data files over an all-optical end-to-end flow-switched network, which is modeled as a burst-error channel. We observe that the Transmission Control Protocol (TCP) is not best suited for Optical Flow Switching (OFS). Specifically, flow control and fair resource allocation through windowing in TCP are unnecessary in an OFS network. Moreover, TCP has poor throughput and delay performance at high transfer rates due to window flow control and window closing with missing or dropped packets. In OFS, flows are scheduled and congestion control is performed by a scheduling algorithm. Thus, we focus on defining a more efficient transport protocol for optical flow-switched networks that is neither a modification of TCP nor derived from TCP. The main contribution of this work is to optimize the throughput and delay performance of OFS using file segmentation and reassembly, forward error correction (FEC), and frame retransmission. We analyze the throughput and delay performance of four example transport layer protocols: the Simple Transport Protocol (STP), the Simple Transport Protocol with Interleaving (STPI), the Transport Protocol with Framing (TPF), and the Transport Protocol with Framing and Interleaving (TPFI). First, we show that a transport layer protocol without file segmentation and without interleaving and FEC (STP) results in poor throughput and delay performance and is not well suited for OFS. Instead, we found that interleaving across a large file (STPI) results in the best theoretical delay performance, though the large code lengths and interleaver sizes in this scheme would be hard to implement. Also, in the unlikely case that a file experiences an uncorrectable error, STPI requires extra network resources equal to those of an entire transaction for file retransmission and adds significantly to the delay of the transaction. For these reasons, we propose the segmentation of a file into large frames combined with FEC, interleaving, and retransmission of erroneous frames (TPFI) as the protocol of choice for an OFS network. In TPFI, interleaving combined with FEC and frame retransmission allows a file to be segmented into large frames (>100 Mbits). In addition, TPFI incurs less processing and file segmentation and reassembly overhead than a transport layer protocol that does not include interleaving and FEC (TPF).
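To make the frame-based recovery idea concrete, here is a minimal sketch of the send-and-retransmit loop. It is hypothetical, not the thesis's actual TPFI specification: the frame size constant, the single-probability error model, and all function names are illustrative assumptions.

```python
import random

FRAME_BITS = 100_000_000  # ~100 Mbit frames, the scale suggested in the abstract

def segment(file_bits, frame_bits=FRAME_BITS):
    """Split a file into fixed-size frames (the last frame may be shorter)."""
    return [min(frame_bits, file_bits - start)
            for start in range(0, file_bits, frame_bits)]

def frame_delivered(p_uncorrectable=0.05):
    """Toy burst-error channel: FEC plus interleaving corrects a frame
    except with some residual probability, modeled as one Bernoulli trial."""
    return random.random() > p_uncorrectable

def send_file(file_bits):
    """TPFI-style delivery: send every frame, then retransmit only the
    frames that came through uncorrectable, until none remain."""
    pending = segment(file_bits)
    rounds = 0
    while pending:
        rounds += 1
        pending = [f for f in pending if not frame_delivered()]
    return rounds

print(send_file(10 * FRAME_BITS))  # rounds needed to deliver a 1 Gbit file
```

The point of the sketch is the contrast with STPI: an uncorrectable error costs one frame's worth of retransmission, not the whole file.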
by Henna Priscilla Huang.
S.M.
5

Falk, Joachim. "A Clustering-Based MPSoC Design Flow for Data Flow-Oriented Applications." München: Verlag Dr. Hut, 2015. http://d-nb.info/1075409497/34.

6

Narváez, Guarnieri Paolo L. (Paolo Lucas). "Design and analysis of flow control algorithms for data networks." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/42717.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 110-112).
by Paolo L. Narváez Guarnieri.
M.S.
7

Nejad-Sattary, Mohammad. "An extended data flow diagram notation for specification of real-time systems." Thesis, City University London, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276150.

8

Lakshmanan, Karthick. "Design of an Automation Framework for a Novel Data-Flow Processor Architecture." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/34193.

Abstract:
Improved process technology has resulted in the integration of computing elements into multiple application areas. General-purpose micro-controllers are designed to assist in this integration through a flexible design. The application areas, however, are so diverse in nature that general-purpose micro-controllers may not provide a suitable abstraction for all classes of applications. There is a need for specially designed architectures in application areas where general-purpose micro-controllers suffer from inefficiencies. This thesis focuses on the design of a processor architecture that provides a suitable design abstraction for a class of periodic, event-driven embedded applications such as sensor-monitoring systems. The design principles of the processor architecture are driven by the target application requirements: an event-driven nature, concurrent task execution, and deterministic timing behavior. Additionally, to reduce the design complexity of applications on this novel architecture, an automation framework has been implemented. This thesis presents the design of the processor architecture and the automation framework, explaining the suitability of the designed architecture for the target applications. The energy use of the novel architecture is compared with that of a PIC12F675 micro-controller to demonstrate the energy efficiency of the designed architecture.
Master of Science
9

Nguyen Quang Do, Lisa. "User-centered tool design for data-flow analysis." Paderborn: Universitätsbibliothek, 2019. http://d-nb.info/119830782X/34.

10

Barackman, Martin Lee. "Diverging flow tracer tests in fractured granite: equipment design and data collection." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/191896.

Abstract:
Down-hole injection and sampling equipment was designed and constructed in order to perform diverging-flow tracer tests. The tests were conducted at a field site about 8 km southeast of Oracle, Arizona, as part of a project sponsored by the U.S. Nuclear Regulatory Commission to study mass transport of fluids in saturated, fractured granite. The tracer injection system was designed to provide a steady flow of water or tracer solution to a packed-off interval of the borehole and allow for monitoring of down-hole tracer concentration and pressure in the injection interval. The sampling system was designed to collect small-volume samples from multiple points in an adjacent borehole. Field operation of the equipment demonstrated the importance of prior knowledge of the location of interconnecting fractures before tracer testing and the need for down-hole mixing of the tracer solution in the injection interval. The field tests were designed to provide data that could be analyzed to yield estimates of dispersivity and porosity of the fractured rock. Although analysis of the data is beyond the scope of this thesis, the detailed data are presented in four appendices.
11

Karlsson, Mikael. "An Evaluation of the Predictable System-on-a-Chip Automated System Design Flow." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186378.

Abstract:
Although hard real-time embedded systems often appear simple, modern embedded system designs often incorporate features such as multiple processors and complex inter-processor communication. In situations where safety is critical, such as in many automotive applications, great demands are placed on developers to prove correctness. The ForSyDe research project aims to remedy this problem by providing a design philosophy based on the theory of models of computation, which aims to formally ensure predictability and correctness by design. A system designed with the ForSyDe design methodology consists of a well-defined system model which can be refined by design transformations until it is mappable onto an application-specific predictable hardware template. This thesis evaluates one such hardware template, called the predictable System-on-a-Programmable-Chip (PSOPC). This hardware template was developed during the work on a master's thesis by Marcus Mikulcak [7] in 2013. The evaluation was done by creating many simple dual-processor systems using the automated design flow provided by PSOPC. Using these systems, the time to communicate data between the processors was measured and compared against the claims made in [7]. The results presented in this thesis suggest that the current implementation of the PSOPC platform is not yet mature enough for production use. Data collected from many different configurations show that many of the generated systems exhibit unacceptable anomalies. Some systems would not start at all, and some systems could not communicate the data properly. Although this thesis does not propose solutions to the problems found herein, it serves to show that further work on the PSOPC platform is necessary before it can be incorporated into the larger context of the ForSyDe platform. However, it is the author's genuine hope that the reader will gain appreciation for PSOPC as an idea, and that this work can instil interest in further perfecting it, so that it can serve as a part of ForSyDe in the future.
12

Bruneau, Phillippe Roger Paul, and T. W. von Backström. "The design of a single rotor axial flow fan for a cooling tower application." Thesis, Stellenbosch: University of Stellenbosch, 1994. http://hdl.handle.net/10019.1/15528.

Abstract:
Thesis (MEng (Mechanical Engineering))--University of Stellenbosch, 1994.
213 leaves printed on single pages, preliminary pages i-xix and numbered pages 1-116. Includes bibliography, list of tables, list of figures and nomenclature.
Digitized at 600 dpi grayscale to pdf format (OCR), using a Bizhub 250 Konica Minolta Scanner.
A design methodology for low-pressure-rise, rotor-only, ducted axial flow fans is formulated, implemented, and validated using the operating point specifications of a 1/6th scale model fan as a reference. Two experimental fans are designed by means of the design procedure and tested in accordance with British Standard 848, Type A. The design procedure makes use of the simple radial equilibrium equations, embodied in a suite of computer programs. The experimental fans have the same hub-tip ratio and vortex distribution but differ in the profile section used. The first design utilises the well-known Clark-Y aerofoil profile, whilst the second takes advantage of the high-lift characteristics of the more modern NASA LS series. The characteristics of the two designs are measured over the entire operating envelope and compared to the reference fan, from which the utility and accuracy of the design procedure are assessed. The performance of the experimental fans compares well with both the reference fan and the design intent.
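For context, the "simple radial equilibrium" relation such design procedures integrate across the blade span is commonly written as follows (a standard textbook form, not quoted from the thesis):

```latex
\frac{1}{\rho}\frac{dp}{dr} = \frac{c_\theta^{2}}{r}
```

Given a chosen vortex distribution c_θ(r) (for example a free vortex, r·c_θ = const), integrating this balance between the radial pressure gradient and the swirl velocity yields the spanwise pressure variation, and with it the axial velocity profile used to select the blade sections.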
13

Jargård, Anna, and Robert Kindwall. "Improving Flow Rate with Funnel-shaped Space Design using Crowd Simulation for Large Crowds." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259008.

Abstract:
Crowd simulation is a technique used to model real people as agents in a computer-generated environment. Generating simulated crowds can support the research process of testing agents in multiple scenarios without having to use real people. The flow of large crowds varies with the design of the space, and this thesis examines flow rates in funnel-shaped constructions and bottleneck constructions with the help of computer-generated crowd simulations. These flows can affect individuals to the extent that they are hurt by misdirected forces. Two different variables are defined for the funnel-shaped construction: corridor width and funnel angle. For the bottleneck construction, there is only a corridor width property. Agents moving through an environment move in one direction only, with no possibility of a head-on collision. Timing the agents that walk through the environments and examining the forces they form shows that the funnels had a better flow and took less time than the bottlenecks. Across the different corridor widths, the wider constructions were faster than all of the narrower constructions, except that the narrow 15-degree funnel was faster than the wide bottleneck. Introducing a funnel-shaped construction improves the flow rate, with lower funnel angles giving better results. Applications of the funnel-shaped construction include urban planning and architecture for spaces that have a capacity for large crowds, where an improvement in flow rate can make the space safer in an evacuation.
14

Meyer-Spradow, Jennis, Timo Ropinski, Jörg Mensmann, and Klaus Hinrichs. "Interactive Design and Debugging of GPU-based Volume Visualizations." Linköpings universitet, Medie- och Informationsteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92878.

Abstract:
There is a growing need for custom visualization applications to deal with the rising amounts of volume data to be analyzed in fields like medicine, seismology, and meteorology. Visual programming techniques have been used in visualization and other fields to analyze and visualize data in an intuitive manner. However, this additional step of abstraction often results in a performance penalty during the actual rendering. In order to prevent this impact, a careful modularization of the required processing steps is necessary, which provides flexibility and good performance at the same time. In this paper, we will describe the technical foundations as well as the possible applications of such a modularization for GPU-based volume ray-casting, which can be considered the state-of-the-art technique for interactive volume rendering. Based on the proposed modularization on a functional level, we will show how to integrate GPU-based volume ray-casting in a visual programming environment in such a way that a high degree of flexibility is achieved without any performance impact.
15

Blitz, John Leonard. "The design and construction of a power compensation heat flow calorimeter for the study of fermentation processes." Thesis, London South Bank University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235584.

16

Bharath, Karthik. "The logic of information flow: a graded approach." Diss., Online access via UMI, 2008.

Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2008.
Includes bibliographical references.
17

Wen, Renhua. "The design and implementation of an accurate array data-flow analyzer in the HPC compiler." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23304.

Abstract:
Parallelizing compilers are increasingly relying on accurate data dependence information to exploit parallelism. This thesis addresses the problem of array data-flow analysis which provides more accurate information than traditional scalar data-flow analysis and array data dependence analysis. We describe the design and implementation of an array data-flow analyzer which has been integrated into our HPC compiler. The HPC compiler automatically parallelizes programs written in the HPC language, an extension of the C language, and generates SPMD codes for distributed memory machines. Two other important parts of the compiler, the automatic data and computation decomposition tool and the SPMD code generator, use the array data-flow information.
We introduce a new array data-flow representation, the source tree, which allows us to compute array data-flow information more effectively and efficiently. We perform array data-flow computation from the highest dependence level to the lowest dependence level and sequence the writes by lexicographical order in the presence of multiple writes. Checking a source tree against the coverage conditions allows us to avoid any further computation once the sources have been found.
We introduce our global array data-flow analysis to compute array data-flow information beyond a single loop nest. This is useful for advanced program transformation and optimizations in the scope of a cluster of loop nests.
We have investigated the pragmatic issues of applying Feautrier's method in the HPC compiler framework. We have performed experiments with both local and global array data-flow analysis on seven programs from the Perfect Benchmarks, on the platform of the HPC compiler. The experimental results show that our method has acceptable performance for analyzing scientific and engineering applications.
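To make the "lexicographically last write" notion above concrete, the sketch below is a deliberately naive, brute-force stand-in; it is not the thesis's symbolic source-tree algorithm, and all names in it are illustrative. For each array read, the source is the lexicographically last write instance that touched the same element.

```python
def last_write_source(writes, read_element):
    """writes: list of (iteration_vector, element_index) pairs collected in
    program order. Returns the lexicographically last iteration vector that
    wrote the element a given read consumes, or None if the value flows in
    from outside the analyzed region."""
    candidates = [it for (it, elem) in writes if elem == read_element]
    return max(candidates) if candidates else None

# Toy loop "for i in 0..3: A[i % 2] = ...", followed by a read of A[1]:
writes = [((i,), i % 2) for i in range(4)]
print(last_write_source(writes, 1))  # (3,) -- iteration i=3 last wrote A[1]
```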
18

Cai, Wenlong. "Application of network flow and zero-one programming to open pit mine design problems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184797.

Abstract:
An algorithm which adopts a moving cone approach but is guided by maximal network flow principles is developed. This study argues that, from a network flow point of view, the re-allocation problem is a major obstacle that prevents a simulation-oriented pit design algorithm from reaching the optimum solution. A simulation-oriented pit design algorithm cannot resolve the re-allocation problem entirely without explicit definition of predecessors and successors. In order to preserve the advantages of the moving cone algorithm while improving it, the new algorithm tries to avoid re-allocation situations. Theoretical proof indicates that the new algorithm can consistently generate higher profit than the popular moving cone algorithm. A case study indicates that the new algorithm improved on the moving cone algorithm (1% more profit). Also, the difference between the new algorithm and the rigorous Lerchs-Grossmann algorithm in terms of generated profit is very insignificant (0.015% less). The new algorithm is only 2.08 times slower than the extremely fast moving cone algorithm. This study also presents a multi-period 0-1 programming mine sequencing model. Once pushbacks are generated and the materials between a series of cutoffs are available for each bench of every pushback, the model can quickly answer, period by period, what is the best (maximum or minimum) that can be expected for any one of these four items: mineral contents, ore tonnages, waste tonnages, and stripping ratios. This answer is based on a selected cutoff and considers the production capacity defined by the ore tonnage, the desired stripping ratio, and the precedence constraints among benches and pushbacks. The maximization of mineral contents is suggested as the direct mine sequencing objective when it is permissible. Suggestions also are provided on how to reduce the number of decision variables and precedence constraints. A case study reveals that the model is fast and operational. The maximization of mineral contents increases the average grades in early planning periods.
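As a rough illustration of the moving cone idea, here is a greedy sketch on a toy 2-D block model under an assumed 1:1 slope constraint. It reproduces neither the network-flow guidance nor the re-allocation handling discussed above; the block values and geometry are invented.

```python
import numpy as np

def cone_blocks(model, i, j):
    """Blocks that must be removed to mine block (i, j) under 1:1 slopes
    in a 2-D section: row r contributes columns j-(i-r) .. j+(i-r)."""
    return [(r, c) for r in range(i + 1)
            for c in range(j - (i - r), j + (i - r) + 1)
            if 0 <= c < model.shape[1]]

def moving_cone(model):
    """Greedy moving-cone pit: repeatedly mine any cone with positive value."""
    mined = np.zeros(model.shape, dtype=bool)
    profit = 0.0
    changed = True
    while changed:
        changed = False
        for i in range(model.shape[0]):
            for j in range(model.shape[1]):
                cone = [b for b in cone_blocks(model, i, j) if not mined[b]]
                value = sum(model[b] for b in cone)
                if cone and value > 0:
                    for b in cone:
                        mined[b] = True
                    profit += value
                    changed = True
    return profit, mined

# Toy section: every waste block costs 1; one rich block at depth 2 pays 10.
model = -np.ones((3, 5))
model[2, 2] = 10.0
print(moving_cone(model)[0])  # 2.0: cone of 9 blocks, 10 - 8 units of waste
```

The re-allocation difficulty the abstract refers to arises exactly when such greedily mined cones overlap, which this sketch sidesteps by marking blocks as mined.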
19

Devlin, Steve. "Telemetry Data Processing: A Modular, Expandable Approach." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615091.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
The growing complexity of missile, aircraft, and space vehicle systems, along with the advent of fly-by-wire and ultra-high performance unstable airframe technology, has created an exploding demand for real-time processing power. Recent VLSI developments have allowed addressing these needs in the design of a multi-processor subsystem supplying 10 MIPS and 5 MFLOPS per processor. To provide up to 70 MIPS, a Digital Signal Processing subsystem may be configured with up to 7 processors. Multiple subsystems may be employed in a data processing system to give the user virtually unlimited processing power. Within the DSP module, communication between cards is over a high-speed, arbitrated Private Data bus. This prevents the saturation of the system bus with intermediate results and allows a multiple-processor configuration to make full use of each processor. Design goals for a single processor included executing number system conversions, data compression algorithms, and 1st-order polynomials in under 2 microseconds, and 5th-order polynomials in under 4 microseconds. The processor design meets or exceeds all of these goals. Recently upgraded VLSI is available and makes possible a performance enhancement to 11 MIPS and 9 MFLOPS per processor with reduced power consumption. Design tradeoffs and example applications are presented.
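For a back-of-envelope sense of those polynomial timing goals: in Horner form, a 5th-order polynomial costs five multiplies and five adds, i.e. 10 floating-point operations, which at 5 MFLOPS works out to 2 microseconds, comfortably inside the stated 4-microsecond goal. A minimal sketch of the evaluation scheme (illustrative only, not the subsystem's actual firmware):

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients [a_n, ..., a_0] using
    Horner's rule: n multiply-accumulates for an n-th order polynomial."""
    acc = 0.0
    for a in coeffs:
        acc = acc * x + a  # one multiply + one add per coefficient
    return acc

# 5th-order polynomial: 5 multiplies + 5 adds = 10 flops.
# At 5 MFLOPS that is 10 / 5e6 s = 2 microseconds.
print(horner([1, 0, 0, 0, 0, -1], 2.0))  # x^5 - 1 at x = 2 -> 31.0
```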
20

Nattanmai, Ganesh Babu Goushik. "Assembly flow design and process data digitalization for Process Industries: Flow and Line balancing with Simulation for Paper and Pulp Instruments with digital Information management." Thesis, KTH, Industriell produktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286008.

Abstract:
ABB Lorentzen & Wettre in Kista produces paper and pulp testing instruments for paper mills worldwide. The unit has a product range of over 140 instruments. Because the instruments are assembled in a fixed-position layout, throughput times in production are significantly longer than the demand takt. The company has therefore focused on its order control and production planning strategies. This project addresses assembly flow design with line and flow balancing for the paper and pulp instruments. It was performed for four instruments, evaluating various assembly methodologies and layouts to determine a feasible balancing of the work content. The second objective is lead time analysis and throughput time optimization for the above-mentioned products, with recommendations for reduction, aligned with the challenges in sourcing and net working capital for the instruments. The third objective is the analysis of shop floor information data. The fourth objective is the digitalization of process information through work instructions for the process data used by assembly, integrated vertically with the ERP system. Several assembly design strategies were evaluated, including a traditional U-shaped line, mixed product lines, two-sided assembly, and a rabbit-chasing cell layout, using simulation with input on assembly precedence, order flow, and resource routing. The strategies were evaluated based on operator utilization, throughput, and lead times for the instruments with line balancing, and further validated in flow simulation for the dynamic scenario of product routing, scheduling, and resource allocation in the assembly. The assembly design chosen was thus the strategy with the shortest lead time and throughput time for the products. The process data digitalization was performed for one of the instruments as a pilot project, and vertical integration has been proposed as an information model. The results from the assembly balancing show that the proposed assembly strategies influence the assembly in terms of operator utilization and product routing in ABB's manual assembly. The process data digitalization is closely aligned with the assembly design, since the transformation from the fixed-position layout to assembly cells or lines has a significant influence on product knowledge across the resources in the assembly. Lead times can be reduced by 32% on average, from 12 weeks to 8 weeks, by implementing the proposals evaluated for the production system. The conclusion of the research is that it would be beneficial to implement the methodologies and deploy them across the other instruments for a more effective and smarter production process.
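Two standard formulas behind this kind of line-balancing work are takt time and balance efficiency; the sketch below uses generic industrial engineering relations with made-up numbers, not figures from the thesis.

```python
def takt_time(available_minutes, demand_units):
    """Takt time: available production time per unit of customer demand."""
    return available_minutes / demand_units

def balance_efficiency(task_times, n_stations, cycle_time):
    """Classic line-balance efficiency: total work content divided by
    the capacity of the allocated stations."""
    return sum(task_times) / (n_stations * cycle_time)

# Illustrative numbers only (not taken from the thesis):
tasks = [6, 4, 7, 5, 8]         # minutes of assembly work content per task
ct = takt_time(480, 40)         # 480 min shift, 40 instruments -> 12 min takt
print(ct, balance_efficiency(tasks, 3, ct))  # 12.0, 30/36 ~ 0.83
```

Balancing work content so that no station exceeds the takt time is what allows a fixed-position layout to be converted into the cells or lines evaluated above.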
21

Li, Zhiyong. "Data-Driven Adaptive Reynolds-Averaged Navier-Stokes k-ω Models for Turbulent Flow-Field Simulations." UKnowledge, 2017. http://uknowledge.uky.edu/me_etds/93.

Abstract:
Data-driven adaptive algorithms are explored as a means of increasing the accuracy of Reynolds-averaged turbulence models. This dissertation presents two new data-driven adaptive computational models for simulating turbulent flow where partial but incomplete measurement data are available. These models automatically adjust (i.e., adapt) the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and a set of prescribed measurement data. The first approach is the data-driven adaptive RANS k-ω (D-DARK) model. It is validated with three canonical flow geometries: pipe flow, the backward-facing step, and flow around an airfoil. For all three test cases, the D-DARK model improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses standard values of the closure coefficients. The second approach is the Retrospective Cost Adaptation (RCA) k-ω model. The key enabling technology is retrospective cost adaptation, which was developed for real-time adaptive control but is used in this work for data-driven model adaptation. The algorithm conducts an optimization that seeks to minimize a surrogate cost, and by extension the real flow-field error. The advantage of the RCA approach over the D-DARK approach is that it is capable of adapting to unsteady measurements. The RCA-RANS k-ω model is verified with a statistically steady test case (pipe flow) as well as two unsteady test cases: vortex shedding from a surface-mounted cube and flow around a square cylinder. The RCA-RANS k-ω model effectively adapts to both averaged steady and unsteady measurement data.
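The core loop of such data-driven adaptation can be caricatured in a few lines: run a model, compare against the available measurements, and nudge a closure coefficient to shrink the mismatch. The sketch below is purely illustrative (a finite-difference gradient step on a stand-in exponential "solver"), not the D-DARK or RCA update laws.

```python
import numpy as np

def surrogate(beta, x):
    """Stand-in for a RANS solve: maps one closure coefficient to a
    predicted profile. A real adaptation step would run the k-omega
    solver here instead."""
    return np.exp(-beta * x)

def adapt(beta, x, measured, lr=0.05, iters=200):
    """Nudge the coefficient down the finite-difference gradient of the
    squared mismatch with the (partial) measurement data."""
    err = lambda b: np.sum((surrogate(b, x) - measured) ** 2)
    for _ in range(iters):
        grad = (err(beta + 1e-4) - err(beta - 1e-4)) / 2e-4
        beta -= lr * grad
    return beta

x = np.linspace(0.0, 2.0, 20)
measured = np.exp(-0.9 * x)          # synthetic "experiment" with beta = 0.9
print(round(adapt(0.5, x, measured), 3))  # converges toward 0.9
```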
22

Höglund, Britta, and Oskar Öberg. "Vad gör en webbsida tilltalande? : En jämförelse mellan tre stora teorier inom webbdesign." Thesis, Örebro University, Swedish Business School at Örebro University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-5086.

Abstract:

The purpose of this study is to determine to what degree different theories affect how web pages are perceived, in order to support the development of more appealing web pages. We chose to examine six websites: Aftonbladet, Nerikes Allehanda (NA), Dagens Nyheter (DN), IKEA, Hennes & Mauritz (H&M), and NetOnNet. The study is divided into three parts: a questionnaire survey, an expert evaluation, and a comparative analysis. The questionnaire was sent to students at Örebro University via the student email system, and the results were compiled into a number of frequency tables. The expert evaluations were then carried out on the basis of the literature study. The theories against which we chose to analyze the websites are usability, graphic design, and flow. After the survey and the expert evaluation had been completed, a comparative analysis of patterns and relationships between the respondents' answers and the expert evaluations was performed. We concluded that what mattered most to the user was the graphic design, but that to create a really good website one should apply several theories. We also found that the existing guidelines for the different theories work and help in creating a better website.

23

Bittner, Ray Albert Jr. "Wormhole Run-Time Reconfiguration: Conceptualization and VLSI Design of a High Performance Computing System." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30499.

Abstract:
In the past, various approaches to the high performance numerical computing problem have been explored. Recently, researchers have begun to explore the possibilities of using Field Programmable Gate Arrays (FPGAs) to solve numerically intensive problems. FPGAs offer the possibility of customization to any given application, while not sacrificing applicability to a wide problem domain. Further, the implementation of data flow graphs directly in silicon makes FPGAs very attractive for these types of problems. Unfortunately, current FPGAs suffer from a number of inadequacies with respect to the task. They have lower transistor densities than ASIC solutions, and hence less potential computational power per unit area. Routing overhead generally makes an FPGA solution slower than an ASIC design. Bit-oriented computational units make them unnecessarily inefficient for implementing tasks that are generally word-oriented. And finally, in large volumes, FPGAs tend to be more expensive per unit due to their lower transistor density. To combat these problems, researchers are now exploiting the unique advantage that FPGAs exhibit over ASICs: reconfigurability. By customizing the FPGA to the task at hand, as the application executes, it is hoped that the cost-performance product of an FPGA system can be shown to be a better solution than a system implemented by a collection of custom ASICs. Such a system is called a Configurable Computing Machine (CCM). Many aspects of the design of the FPGAs available today hinder the exploration of this field. This thesis addresses many of these problems and presents the embodiment of those solutions in the Colt CCM. By offering word grain reconfiguration and the ability to partially reconfigure at computational element resolution, the Colt can offer higher effective utilization over traditional FPGAs. Further, the majority of the pins of the Colt can be used for both normal I/O and for chip reconfiguration. This provides higher reconfiguration bandwidth contrasted with the low percentage of pins used for reconfiguration of FPGAs. Finally, Colt uses a distributed reconfiguration mechanism called Wormhole Run-Time Reconfiguration (RTR) that allows multiple data ports to simultaneously program different sections of the chip independently. Used as the primary example of Wormhole RTR in the patent application, Colt is the first system to employ this computing paradigm.
Ph. D.
24

Hanikat, Marcus. "Towards a Correct-by-Construction design flow : A case-study from railway signaling systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299362.

Abstract:
As technological advancements and manufacturing techniques continue to bring us more complex and powerful hardware, software engineers struggle to keep up with this rapid progress and reap the benefits brought by this hardware. In the field of safety-critical system development, where a thorough understanding and deterministic behavior of the hardware are often required, the cost of development is closely related to the complexity of the hardware used. For software developers to be able to reap the benefits of the technological advancement in hardware design, a Correct-by-Construction approach with a model-based design flow seems promising. Even though there appear to be significant benefits in using a Correct-by-Construction workflow for developing safety-critical systems, it is far from exclusively used within the industry. Therefore, this thesis illustrates how a model-based design flow should be applied when developing safety-critical systems for usage in the rail transport sector. It also explores the benefits Correct-by-Construction can bring to the development process of safety-critical systems. Within this thesis, two different modeling tools, ForSyDe and Simulink, were used to achieve a model-based design flow. The functionality of these tools is investigated to see how they can be used for developing safety-critical systems that meet the EN 50128 standard. The result presented is an example of how these tools can be used within a model-based design flow which meets the EN 50128 standard for developing Safety Integrity Level (SIL) 4 systems. The thesis also compares the tools investigated and highlights their differences. Finally, future work required to create a complete Correct-by-Construction workflow that complies with the EN 50128 standard requirements for system development is identified.
25

Charfi, Manel. "Declarative approach for long-term sensor data storage." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI081/document.

Abstract:
Nowadays, sensors are cheap, easy to deploy, and immediate to integrate into applications. These thousands of sensors are increasingly invasive and constantly generate enormous amounts of data that must be stored and managed for the proper functioning of the applications that depend on them. Sensor data, in addition to being of major interest in real-time applications (e.g. building control or health supervision), are also important for long-term reporting applications (e.g. reporting, statistics, or research data). Whenever a sensor produces data, two dimensions are of particular interest: the temporal dimension, to stamp the produced value at a particular time, and the spatial dimension, to identify the location of the sensor. Both dimensions have different granularities that can be organized into hierarchies specific to the application context concerned. In this PhD thesis, we focus on applications that require long-term storage of sensor data issued from sensor data streams. Since a huge amount of sensor data can be generated, our main goal is to select only relevant data to be saved for further usage, in particular long-term query facilities. More precisely, our aim is to develop an approach that controls the storage of sensor data by keeping only the data considered relevant according to the spatial and temporal granularities representative of the application requirements. In such cases, approximating data in order to reduce the quantity of stored values enhances the efficiency of those queries. Our key idea is to borrow the declarative approach developed in the seventies for database design from constraints and to extend functional dependencies with spatial and temporal components in order to revisit the classical database schema normalization process. Given sensor data streams, we consider both spatio-temporal granularity hierarchies and Spatio-Temporal Functional Dependencies (STFDs) as first-class citizens for designing sensor databases on top of any RDBMS. We propose a specific axiomatisation of STFDs and the associated attribute closure algorithm, leading to a new normalization algorithm. We have implemented a prototype of this architecture to deal with both database design and data loading. We conducted experiments with synthetic and real-life data streams from intelligent buildings. We compared our solution with the baseline solution and obtained promising results in terms of query performance and memory usage. We also studied the trade-off between data reduction and data approximation.
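Since the attribute closure algorithm is the engine of any such normalization procedure, here is the textbook version it extends, as a generic sketch with made-up sensor attributes; the thesis's STFD variant additionally rewrites attributes along the spatio-temporal granularity hierarchies before applying the rules.

```python
def closure(attrs, fds):
    """Textbook attribute closure: fds is a list of (lhs, rhs) frozensets.
    Repeatedly fire every dependency whose left-hand side is already
    contained in the result until a fixpoint is reached."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# Illustrative dependencies: sensor -> room, (room, day) -> avg_temp
fds = [(frozenset({"sensor"}), frozenset({"room"})),
       (frozenset({"room", "day"}), frozenset({"avg_temp"}))]
print(sorted(closure({"sensor", "day"}, fds)))
# ['avg_temp', 'day', 'room', 'sensor']
```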
26

Briones, Maria. "Validating the Accuracy of Neatwork, a Rural Gravity Fed Water Distribution System Design Program, Using Field Data in the Comarca Ngöbe-Bugle, Panama." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7268.

Abstract:
Despite the Sustainable Development Goals to increase access to improved water, there are still 884 million people in the world without access to an improved water source (WHO, 2017). One method to improve access to water in rural, mountainous areas is the construction of gravity-fed water distribution systems. These systems should be designed based upon fundamental principles of hydraulics. One way of doing so in a time-efficient manner with minimal engineering knowledge is to utilize a downloadable computer program such as Neatwork, which aids in the design of rural, gravity-fed water distribution systems and has been used by volunteers in Peace Corps Panama for years. The goal of this research was to validate the results of the Neatwork program by comparing the flow results produced in the simulation program with flow results measured at tap stands of a rural gravity-fed water distribution system in the community of Alto Nube, Comarca Ngöbe-Bugle, Panama. The author measured flow under the default Neatwork condition of 40% of faucets open in the system (in the field, an equivalent of 8 taps) to establish an initial basis for whether the Neatwork program and field conditions yielded corresponding flows. The second objective was to vary the number of taps open if the default condition did not produce comparable results between the field and the simulation, to pinpoint whether under a certain proportion of open faucets the two methods would agree. The author did this by measuring flow at varying combinations of 10-100% of the open taps in the system (2-20 taps). Lastly, for the case of two taps open, the author compared the flow differences between the Neatwork program and the field flows when the elevation of water in the reservoir is set to the Neatwork default, where the water elevation is at the tank outlet (at the bottom of the tank), versus when the elevation is set at the tank overflow (at the top of the tank). The author used paired t-tests to test for statistical difference between Neatwork and field-produced flows. She found that for the default condition of 40% of taps open, and for all other combinations between 30-80% of taps open, the field and Neatwork flows did not produce statistically similar results; Neatwork in fact tended to overestimate flows. The author also found that the change in water elevation in the storage tank from outlet to overflow increased the flow at the two taps measured by 0.140 l/s and 0.145 l/s and, in this case, did not change whether the flows at these taps were within the desired range (0.1-0.3 l/s). Changing the elevation of the water level in the tank in the Neatwork program to correspond to a "full" tank condition is not recommended, as assuming an empty tank will account for seasonal changes or other imperfections in topographical surveying that could reduce the available head at each tap. The author also found that the orifice coefficients, θ, of 0.62 and 0.68 did not demonstrate more or less accurate results that coincided with field measurements, but rather showed the tendency of particular faucets to prefer one coefficient over the other, regardless of the combination of other taps open in the system. This study demonstrates a consistent overestimation in flow using the computer program Neatwork.
Further analysis of the comparisons between field and Neatwork flow results at each individual faucet shows that the variations were a result of variables dependent upon the tap, such as flow reducers or errors in surveying. Flow reducers are installed before taps to distribute flow equally among homes over varying distances and elevations, and are fabricated using different diameter orifices depending on the location of the tap. While Neatwork allows the user to simulate the effect of these flow reducers on tap flow, it may not account for the imperfect orifices made by the simple methods used in the field to fabricate such flow reducers. The author recommends further investigation of field flow versus Neatwork-simulated flow using other methods of flow reducer fabrication which produce varying degrees of accuracy in orifice sizing. The author also recommends executing these field measurements over a greater sample size of faucets and more randomized combinations of open/closed taps to verify the results of this research. More work should be done to come up with a practical solution for poor and rural communities to fabricate and/or obtain more precisely sized flow reducers. A full sensitivity analysis of the input variables to the Neatwork program should be performed to understand the sensitivity of the results to varying each input.
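Since the comparison method named above is the paired t-test, a minimal sketch of that test looks as follows; the flow numbers are invented for illustration, not the thesis's measurements.

```python
from scipy import stats

# Illustrative paired flows (l/s) at the same eight taps -- made-up values.
neatwork = [0.24, 0.21, 0.27, 0.22, 0.25, 0.23, 0.26, 0.20]
field    = [0.19, 0.17, 0.22, 0.18, 0.21, 0.18, 0.22, 0.16]

# Paired t-test on per-tap differences: a small p-value means the simulated
# and measured flows differ statistically (here, Neatwork overestimating).
t, p = stats.ttest_rel(neatwork, field)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Pairing matters here because each simulated flow is compared against the measurement at the same tap, removing tap-to-tap variation from the test.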
APA, Harvard, Vancouver, ISO, and other styles
27

Eidukynaitė, Vilma. "Vartotojo sąsajos modeliavimas duomenų srautų specifikacijos pagrindu." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060529_134451-87134.

Full text
Abstract:
The user interface is the direct mediator between the user and the system. It is one of the main factors determining how smoothly, and with what expenditure of time, a system can be integrated into a business process and how quickly its deployment can be performed. The user interface is one of the most important concerns in software design, because it determines the quality and pace of project implementation. Software design methodologies based on the Unified Modeling Language (UML) and Oracle CASE, as introduced by C. Finkelstein, D. J. Anderson, V. Balasubramanian, A. Granlund, D. Lafreniere, and D. Carr, are analyzed in this paper. A user interface modeling method based on data flow specification is presented, and a software prototype for modeling user interfaces with this method is implemented.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhang, Lei. "Network management and control of flow-switched optical networks : joint architecture design and analysis of control plane and data plane with physical-layer impairments." Thesis (Ph. D.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. http://hdl.handle.net/1721.1/100879.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-178).
Optical Flow Switching (OFS), which employs agile end-to-end lightpath switching for users with large transactions, has been shown to be cost-effective and energy-efficient. However, whether it is possible to coordinate lightpath switching and scheduling at a global scale on a per-session basis, and how the control plane and data plane performance correlate, remained unanswered. In this thesis, we have addressed the network management and control aspect of OFS and designed a network architecture enabling both a scalable control plane and an efficient data plane. We give an overview of the essential network management and control entities and functionalities. We focus on the scheduling problem of OFS because its processing power and generated control traffic increase with traffic demand and network size and correlate closely with the data network architecture, while other routine-maintenance control plane functionalities contribute either a fixed amount or negligibly to the total effort. We considered two possible Wide Area Network architectures, meshed and tunneled, and developed a unified model of data plane performance to provide a common platform for comparing control plane performance. The results showed that with aggregation of at least two wavelengths of traffic, and allowing about two transactions per wavelength to be scheduled into the future, the tunneled architecture provides data plane performance comparable to the meshed architecture. We developed a framework to analyze the processing complexity and traffic of the control plane as functions of network architecture and traffic demand. To guarantee lightpath quality in the presence of physical-layer impairments, we developed models of the quality of EDFA-amplified optical links and impairment-aware scheduling algorithms for two cases: a) the known worst case of channel quality is when there is no "on" channel in a fiber, and b) the detailed channel configuration of a fiber is needed to determine channel quality. Without physical-layer impairments, the tunneled architecture reduces control plane traffic and processing complexity by orders of magnitude. With impairment-aware scheduling, reporting detailed channel configuration information leads to heavy control traffic (~250 Gbps/edge), while the known-worst-case approach with tunneling leads to manageable control traffic (~36 Gbps/edge) and processing power (1-4 i7 CPUs).
by Lei Zhang.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Ana Wansul. "Diretrizes para projetos de edifícios de escritórios." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3146/tde-19102010-163058/.

Full text
Abstract:
The complexity of office building design development is related to difficulties in reconciling the interests of all the players involved (owners, designers, contractors and end-users) and to the increasing diversity and specialization of the design disciplines involved. Clarity about the key points that must be defined during the conceptual design phase, and about who should define them, is imperative for the technical, constructive and commercial feasibility of the project, and design process management must have complete control of these aspects. The aim is to investigate what critical information from the several design disciplines should be defined during this conceptual phase, and its correct insertion sequence in the design process. To achieve this, the research is based on a literature review and on the case study method, whose studied object has distinctive conditions: the design team contractor is a real estate company that fully understands office building market needs, holds an experienced technical team able to evaluate and select constructive solutions, and is also a facility manager operating the completed building. Because of this, its design decisions actually focus on the cost of the project over its entire life cycle, which is not common in the Brazilian market. In conclusion, the development of an information flow for the conceptual design phase is proposed, indicating when each piece of information is needed in the design process; this helps elucidate the correct role of each player involved and constitutes a highly useful tool for design management.
APA, Harvard, Vancouver, ISO, and other styles
30

Johansson, Gustav. "Dubbeldesign och dess påverkan på spelupplevelse." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15361.

Full text
Abstract:
This study explores a design technique that may be called dual-purpose design ("dubbeldesign"). The technique consists of a single game mechanic serving several purposes. The goal is to measure what difference the use of this technique makes to the player experience. The aspects of the player experience examined are flow and positive affect. Values for flow and positive affect are obtained by having testers play a prototype in two versions and answer questions. One version of the game uses dual-purpose design to a large extent, while the other does not use the technique at all. The results show that both flow and positive affect were higher for the version that does not use dual-purpose design. A strong link was found between use of the game's mechanics and the result: those who used many of a version's mechanics generally had a better player experience in that version. That players choose not to use game mechanics affected by dual-purpose design may be due to a lack of understanding or a desire to conserve resources.
APA, Harvard, Vancouver, ISO, and other styles
31

Tachtler, Franziska Maria. "Best way to go? Intriguing citizens to investigate what is behind smart city technologies." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22303.

Full text
Abstract:
The topic of smart cities is growing in importance. However, a field study in the city of Malmö, Sweden shows that there is a discrepancy between, on the one hand, the ongoing activities of urban planners and companies using analytical and digital tools to interpret humans' behavior and preferences, and, on the other, the visibility of these developments in public spaces. Citizens are affected by the invisible data and software not only when they use an application, but also when their living space is transformed. Through Research through Design, this thesis examines ways of triggering discussion about smart city issues that are hidden in software and code. A specific solution is developed: a public, tangible, and interactive visualization in the form of an interactive signpost. The final, partly functioning prototype is mountable in public places and points in the direction of the most beautiful walking path. The design refers to a smart city application that analyzes geo-tagged locative media and thereby predicts the beauty and security of a place. The aim is to trigger discussion about the contradictory notion of software interpreting the beauty of a place. Through its tangible, non-digital, and temporary character, the interactive representation encourages passers-by to interact with the prototype.
APA, Harvard, Vancouver, ISO, and other styles
32

Hermant, Laurent Fernand Leon. "Video data collection method for pedestrian movement variables & development of a pedestrian spatial parameters simulation model for railway station environments." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20148.

Full text
Abstract:
Thesis (PhD)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: The design of railway station environments in South Africa, and to a certain extent internationally, is based on rules of thumb. These rules, which use general macroscopic principles for determining peak passenger loads, are inadequate and misleading for detailed design purposes. The principles advocated in local design guideline documents are erroneous and ignore the highly variable flow nature, or “micro-peaking” effects, that typically occur within railway station environments. Furthermore, no procedures are proposed in these guideline documents, which leads to ambiguous assessment techniques being used by practitioners in the determination of pedestrian spatial areas. It is evident that the knowledge of pedestrian movement contained within the design guidance is far from comprehensive. Without a reliable method for estimating pedestrian levels-of-service and capacities, the design of new facilities does not follow a uniform process, resulting in high levels of uncertainty in determining whether the time, money and resources invested in upgrading facilities will actually cater to the demand. The situation is further exacerbated by current industry thinking on pedestrian modelling in South Africa, where it is perceived by both clients and practitioners to be more cost-effective to use macroscopic techniques and to design infrastructure according to a “one-level-up” level-of-service method. Working with architects confirmed that the area of circulation design was lacking in data and guidance and that associated quantified assessments of pedestrian movement were rarely, if ever, carried out. Towards addressing these issues, the development of a Spatial Parameter (SP)-model spreadsheet application became the main objective of the study. The model contributes towards addressing the needs of individual station users based on the trade-off between level-of-service and infrastructure costs. The output of the model allows the designer to avoid both the under-provision (detrimental to operations) and the oversizing of railway station infrastructure (with obvious financial implications). The author recognised the lack of pedestrian movement data in South Africa and addressed this by conducting extensive video-based pedestrian observations aimed at exploring the macroscopic fundamental relationships and the ways in which these relationships might be influenced by the various personal, situational and environmental factors that characterise the context in which pedestrians move. The movement trajectories of 24,410 pedestrians were investigated over three infrastructure environments at Maitland and Bonteheuwel stations in Cape Town, carefully selected to incorporate the cultural diversity common in South Africa. Tracking of pedestrians was achieved via an in-house developed “video annotator” software tool. Boarding and alighting rates of 7,426 passengers were also observed at these stations, incorporating contributory attributes such as age, gender, body size, encumbrance, group size, time of day, and location. The research makes a number of significant advances in the understanding of pedestrian flow behaviour within railway station environments and provides recommendations to industry on what issues to consider. The empirical study has provided comprehensive pedestrian movement characteristics incorporating the relationships between density, speed and flow, including the effect of culture and other context factors unique to the local South African environment.
New methods for determining spatial requirements are proposed, together with new and unique empirical data for use by the local industry. A calibrated spreadsheet SP-model for assessing the design of concourse-type railway stations is developed and presented in the study. The advance in local pedestrian flow knowledge, together with the SP-model, is shown to be practical through application to two real railway station case study projects. The results of this study constitute an important contribution to local pedestrian flow knowledge and are considered a valuable resource for those developing pedestrian models in practice. It is expected that the results will be useful in the planning and design of pedestrian environments in South African railway stations and can be applied to other African metro railway stations with similar pedestrian characteristics. Overall, this research has succeeded in advancing the approach to railway station design, and the empirical data, knowledge and methods held within the local engineering industry. However, the contribution of this study and the associated conference papers is an early step in changing perceptions in this country towards ensuring fully informed and appropriate performance-based spatial designs.
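The macroscopic fundamental relationships mentioned above link density k, speed v and flow q through q = k·v. The sketch below assumes a Greenshields-style linear speed-density relation with illustrative walking parameters; it is not the thesis's calibrated SP-model:

```python
def speed(k, v_free=1.4, k_jam=5.0):
    """Greenshields-style relation v = v_free * (1 - k / k_jam).

    k      -- pedestrian density (ped/m^2)
    v_free -- free-flow walking speed (m/s), illustrative value
    k_jam  -- jam density at which movement stops (ped/m^2), illustrative
    """
    return max(0.0, v_free * (1.0 - k / k_jam))

def flow(k):
    """Fundamental relation of traffic flow: q = k * v (ped/m/s)."""
    return k * speed(k)

# Under the linear model, flow peaks at half the jam density.
for k in (0.5, 1.0, 2.5, 4.0, 5.0):
    print(f"k = {k:.1f} ped/m^2 -> v = {speed(k):.2f} m/s, q = {flow(k):.2f} ped/m/s")
```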
APA, Harvard, Vancouver, ISO, and other styles
33

Ungureanu, George. "Automatic Software Synthesis from High-Level ForSyDe Models Targeting Massively Parallel Processors." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-127832.

Full text
Abstract:
In the past decade we have witnessed an abrupt shift to parallel computing, following increasing demands for performance and functionality that can no longer be satisfied by conventional paradigms. As a consequence, the abstraction gap between applications and the underlying hardware has increased, prompting several research directions in both industry and academia. This thesis project analyzes some of these directions in order to offer a solution for bridging the abstraction gap between the description of a problem at a functional level and its implementation on a heterogeneous parallel platform, using ForSyDe, a formal design methodology. The report treats applications employing data-parallel and time-parallel computation, and regards NVIDIA CUDA-enabled GPGPUs as the main backend platform. It proposes a heuristic transformation-and-refinement process, based on analysis methods and design decisions, to automate and aid a correct-by-design backend code synthesis. Its purpose is to identify potential data parallelism and time parallelism in a high-level system. Furthermore, based on a basic platform model, the algorithm load-balances and maps the execution onto the best computation resources in an automated design flow. This design flow is embedded into an already existing tool, f2cc (ForSyDe-to-CUDA C), and tested for correctness on an industrial-scale image processing application aimed at monitoring inkjet print-head reliability.
APA, Harvard, Vancouver, ISO, and other styles
34

Pettersson, Emma, and Johanna Karlsson. "Design för översikt av kontinuerliga dataflöden : En studie om informationsgränssnitt för energimätning till hjälp för fastighetsbolag." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78510.

Full text
Abstract:
Software and interfaces are today a natural part of our everyday lives. Developing usable and successful interfaces is in companies' interest, as it can lead to more, and more satisfied, customers. The problem addressed in this report is based on user surveys conducted on an energy-monitoring information interface used by people in the real estate industry. The company that owns the software conducted a survey indicating that the software's usability needed improvement, and the project team was assigned to develop it further. The further development is based on DeLone and McLean's (2003) Information System Success Model as well as the concepts of information design, usability, and featuritis. These formed the theoretical background used as the basis for the qualitative interviews and questionnaires, as well as for the interface proposals finally presented in the project (see Figure 6). The results of the survey showed that users and support staff had relatively different experiences of the software. The other conclusions that could be drawn about how an information interface should be designed to support the user were the following: it should follow conventional design patterns; the design should be consistent throughout the software; it should use adapted and clear language; and it should either be so clear and intuitive that anyone can understand the software or offer a clear manual.
APA, Harvard, Vancouver, ISO, and other styles
35

Weigel, Martin. "Návrh mobilní aplikace pro portál Hlidani.eu." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2016. http://www.nusl.cz/ntk/nusl-241598.

Full text
Abstract:
The master's thesis focuses on the design of a mobile application for the web portal Hlidani.eu on the Android platform. The theoretical part of the thesis analyzes problems and terms concerning mobile applications. The thesis then uses selected analytical methods to analyze the current state of the web portal Hlidani.eu. Based on these results, the application itself is designed.
APA, Harvard, Vancouver, ISO, and other styles
36

Jovanovic, Petar. "Requirement-driven design and optimization of data-intensive flows." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/400139.

Full text
Abstract:
Data have become the number one asset of today's business world. Thus, their exploitation and analysis have attracted the attention of people from different fields and with different technical backgrounds. Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. However, designing and optimizing such data flows to satisfy both users' information needs and agreed quality standards is known to be a burdensome task, typically left to the manual efforts of a BI system designer. These tasks have become even more challenging for next-generation BI systems, where data flows typically need to combine data from in-house transactional storage with data coming from external sources in a variety of formats (e.g., social media, governmental data, news feeds). Moreover, to have an impact on business outcomes, data flows are expected to answer unanticipated analytical needs of a broader set of business users and deliver valuable information in near real-time (i.e., at the right time). These challenges clearly indicate a need to boost the automation of the design and optimization of data-intensive flows. This PhD thesis aims at providing automatable means for managing the lifecycle of data-intensive flows. The study first analyzes the remaining challenges to be solved in the field of data-intensive flows by surveying the current literature and envisioning an architecture for managing the lifecycle of data-intensive flows. Following the proposed architecture, we focus on providing automatic techniques covering the different phases of the data-intensive flows' lifecycle. In particular, the thesis first proposes an approach (CoAl) for the incremental design of data-intensive flows by means of multi-flow consolidation. CoAl not only facilitates the maintenance of data flow designs in the face of changing information needs, but also supports the multi-flow optimization of data-intensive flows by maximizing their reuse. Next, in the data warehousing (DW) context, we propose a complementary method (ORE) for the incremental design of the target DW schema, along with systematic tracing of evolution metadata, which can further facilitate the design of back-end data-intensive flows (i.e., ETL processes). The thesis then studies the problem of implementing data-intensive flows in the deployable formats of different execution engines, and proposes the BabbleFlow system for translating logical data-intensive flows into executable formats spanning single or multiple execution engines. Lastly, the thesis focuses on managing the execution of data-intensive flows on distributed data processing platforms and, to this end, proposes an algorithm (H-WorD) for supporting the scheduling of data-intensive flows by workload-driven redistribution of data in computing clusters. The overall outcome of this thesis is an end-to-end platform for managing the lifecycle of data-intensive flows, called Quarry. The techniques proposed in this thesis, plugged into the Quarry platform, greatly reduce the manual effort required and assist users of different technical skills in their analytical tasks. Finally, the results of this thesis contribute substantially to the field of data-intensive flows in today's BI systems, and advocate further attention by both academia and industry to the problems of design and optimization of data-intensive flows.
APA, Harvard, Vancouver, ISO, and other styles
37

Mannapperuma, Chanaka. "Tangible Social Network System : Visual Markers for Social Network." Thesis, Umeå University, Department of Informatics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34927.

Full text
Abstract:

Tangible social network system is a home-based communication solution specifically designed for elders. Previous research indicates that insufficient communication among elders causes several challenges in their daily activities, such as social isolation, loneliness, depression and decreased appetite. In addition, lack of social participation increases the risk of Alzheimer´s (Ligt Enid, 1990). The major cause of these challenges is that elders are increasingly removed from communication technologies such as email, text messaging, social network systems and mobile phones due to cognitive and physical difficulties. To overcome this problem, the proposed social network system incorporates photo-frame and photo-album based interaction, which allows instantaneous participation in the social network. In designing the new social network system, I tried to create an easier venue for more active cross-generational communication between elders and younger family members. This paper discusses the early results of the marker-based social networking system, aiming to propose digital technologies that enhance the social life of older people who live alone at home. A prototype combining a touch screen, photo frame and camera is described. It allows older people to manage their participation in the social network system and get in touch with their loved ones. This paper demonstrates a User Sensitive Inclusive Design (USID) process from the generation of user needs to prototype evaluation. A key theme of the tangible social network system is how usable and emotional design, derived from a user-inclusive design process, can encourage elders to adopt modern technology. A first evaluation has shown the usability as well as the good acceptance of this system.


APA, Harvard, Vancouver, ISO, and other styles
38

Dours, Daniel. "Conception d'un systeme multiprocesseur traitant un flot continu de donnees en temps reel pour la realisation d'une interface vocale intelligente." Toulouse 3, 1986. http://www.theses.fr/1986TOU30107.

Full text
Abstract:
A series of syntactic and semantic transformations for parallelizing an application is defined in the second chapter. This yields a representation of the application in terms of networks of nested modules. A reconfigurable modular architecture adapted to this type of representation is described in the third chapter. To map the application onto this architecture, an appropriate language is defined, and a set of means and methods is described for building an interactive software tool that searches for the optimal configuration of the multiprocessor system executing the given application. The purpose of the last part is to show the close fit between the multiprocessor system thus designed and the modular organization of a voice terminal, and to take a prospective look at the use of such a system in other application domains, in particular vision systems and intelligent robots.
APA, Harvard, Vancouver, ISO, and other styles
39

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Full text
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplate a wide range of influencing design factors and ensure appropriate consideration of all related aspects. Since the majority of technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit of the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
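As a rough illustration of random-set propagation, the sketch below pushes interval-valued focal elements with probability masses through the standard hydropower output formula P = ρ·g·Q·H·η. The intervals and masses are invented for illustration and are not values from the thesis:

```python
from itertools import product

RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def power_mw(q, h, eta):
    """Hydropower output P = rho * g * Q * H * eta, expressed in MW."""
    return RHO * G * q * h * eta / 1e6

# Random sets: focal intervals with basic probability masses summing to 1.
# Illustrative design discharge Q (m^3/s), head H (m) and plant efficiency.
Q_set   = [((18.0, 22.0), 0.6), ((15.0, 25.0), 0.4)]
H_set   = [((95.0, 105.0), 0.7), ((90.0, 110.0), 0.3)]
eta_set = [((0.85, 0.92), 1.0)]

# Each combination of focal elements yields an output interval with the
# product mass; P is monotone increasing in Q, H and eta, so the interval
# endpoints map directly to the output bounds.
for (q_iv, qm), (h_iv, hm), (e_iv, em) in product(Q_set, H_set, eta_set):
    lo = power_mw(q_iv[0], h_iv[0], e_iv[0])
    hi = power_mw(q_iv[1], h_iv[1], e_iv[1])
    print(f"P in [{lo:6.2f}, {hi:6.2f}] MW with mass {qm * hm * em:.2f}")
```

Belief and plausibility bounds for a statement such as "P exceeds 15 MW" then follow by summing the masses of the output intervals lying entirely, or partly, above the threshold.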
APA, Harvard, Vancouver, ISO, and other styles
40

Fahey, Mark. "Assessment of the suitability of CFD for product design by analysing complex flows around a domestic oven." University of Otago. Department of Design Studies, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070417.111809.

Full text
Abstract:
Competitive global markets are increasing the commercial pressure on manufacturing companies to develop better products in less time. To meet these demands, the appliance manufacturer Fisher & Paykel has considered the use of computer simulation of fluid flows to assist in product design. This technology, known as Computational Fluid Dynamics (CFD), has the potential to provide rewarding insight into the behaviour of designs involving fluids. However, the investment in CFD is not without risk. This thesis investigates the use of CFD in oven design expressly to evaluate the numerical accuracy and suitability of CFD in the context of oven product development. CFD was applied to four cases related to oven design, along with detailed experimental investigations, and resulted in a number of relevant findings. In a study of an impinging jet, the SST turbulence model was found to produce better results than the k-ε turbulence model. Measurements indicated that the flow was unsteady, but CFD struggled to reproduce this behaviour. The synergy between experimental and numerical techniques was highlighted in the simulation of a two-pane oven door, which resulted in temperatures on the outer surface of the door predicted by CFD to within 2% of measured values. In the third study, a CFD simulation of a tangential fan failed to deliver acceptable steady-state results; however, a transient simulation showed promise. The final case examined the flows through the door and cooling circuit of the Titan oven. Velocities predicted by CFD compared well against measurements in some regions, such as the potential core of the jet at the outlet vent, but poorly in others, such as the entrained air. Temperatures were predicted to within an average of 2% of measured values. It is found that limited accuracy does not necessarily prevent CFD from delivering engineering value to the product development process. The engineering value delivered by CFD is instead more likely to be limited by the abilities of the user. Incompatibilities between CFD and the product development process can reduce the potential value of CFD, but the effects can be minimised by appropriate management action. The benefits of CFD are therefore found to be sufficient to merit its use in the product development process, provided its integration into the organisation is managed effectively and the tool is used with discernment. Recommendations for achieving this are provided.
APA, Harvard, Vancouver, ISO, and other styles
41

Czudek, Aleš. "Simulace přestupu tepla v nízkonapěťovém rozváděči MNS." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221077.

Full text
Abstract:
The thesis covers diagnostics of the temperature field of an industrial low-voltage switchgear. The place of origin, flow and transfer of heat are important aspects in the design of the switchgear, especially in terms of proper equipment layout. The correctness of the switchgear design is verified in practice by measuring the temperature field during testing or in operation. To determine the temperature profile, it is necessary to measure the temperature at various points of the switchgear, using either a contact or a contactless method. Measurements are performed on standardized low-voltage switchboards in which the power elements are located. The goal is to replace costly and time-consuming switchgear testing with efficient simulation of the temperature field using a mathematical model of the developed switchboard.
APA, Harvard, Vancouver, ISO, and other styles
42

Piecek, Adam. "Modul pro sledování politiky sítě v datech o tocích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403192.

Full text
Abstract:
The aim of this master's thesis is to design a language through which it would be possible to monitor a stream of network flows in order to detect network policy violations in the local network. An analysis of the languages used in data stream management systems and an analysis of tasks submitted by a potential administrator were both carried out. The analysis resulted in the design of a language representing a pipeline of filtering and aggregation operations. These operations can be clearly defined and managed within security rules. This thesis also delivers the Policer module, integrated into the NEMEA system, which is able to apply the main commands of the proposed language. Finally, the module meets the requirements of the specified tasks and may be used for further development in the area of monitoring network policies.
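The filtering-plus-aggregation pipelining that such a language expresses can be illustrated with a toy example over flow records. The record fields, rule and threshold below are invented and do not reflect the Policer module's actual syntax or NEMEA's data format:

```python
from collections import defaultdict

# Hypothetical flow records (illustrative fields only).
flows = [
    {"src": "10.0.0.5", "dst_port": 22, "bytes": 1200},
    {"src": "10.0.0.5", "dst_port": 22, "bytes": 900},
    {"src": "10.0.0.7", "dst_port": 80, "bytes": 5000},
]

# Filter stage: keep only flows matched by the security rule (SSH traffic).
ssh_flows = (f for f in flows if f["dst_port"] == 22)

# Aggregation stage: total bytes per source address.
totals = defaultdict(int)
for f in ssh_flows:
    totals[f["src"]] += f["bytes"]

# Policy check: report sources exceeding a per-window byte threshold.
THRESHOLD = 1500
for src, total in totals.items():
    if total > THRESHOLD:
        print(f"policy violation: {src} sent {total} bytes over SSH")
```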
APA, Harvard, Vancouver, ISO, and other styles
43

Sobotka, Josef. "Aplikační možnosti programovatelného zesilovače LNVGA." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220430.

Full text
Abstract:
This thesis deals with the theoretical description of the qualitative characteristics and parameters of some modern active elements, and also discusses the theory of signal flow graphs at a level applicable to the frequency filter design methods that follow. The thesis further discusses, in general terms, modeling in the PSpice circuit simulator and the modeling of voltage amplifiers at six basic levels. The practical part of the work is divided into two parts. The first is dedicated to the design of four levels of a simulation model of the LNVGA element. The second contains detailed theoretical proposals for three circuit structures implementing second-order frequency filters (based on the basic OTA-C structure) using signal flow graphs, with options for configuring Q and fm through the parameters of the active elements in the surrounding structure, and their verification against the prepared LNVGA model layers.
APA, Harvard, Vancouver, ISO, and other styles
44

Chang, Chih-ming. "Micro data flow processor design." Thesis, 1993. http://hdl.handle.net/1957/35635.

Full text
Abstract:
Computers have evolved rapidly during the past several decades in terms of implementation technology; their architecture, however, has not changed dramatically since the von Neumann (control flow) computer model emerged in the 1940s. One main reason is that the performance of this kind of computer was able to satisfy the requirements of most users. Another reason may be that the engineers who designed them were more familiar with this model. However, recent solutions to the problem of parallelizing sequential instructions on a von Neumann machine complicate both the compiler and the controller design. Therefore, another computer model, namely the data flow model, has regained attention, since this model of computation naturally exposes the parallelism inherent in a program. In terms of implementation methodology, we currently use synchronous sequential logic, which is clock-controlled for synchronization within circuits. This design philosophy becomes hard to follow due to clock skew as clock frequencies go higher and higher. One way to eliminate these clock-related problems is to use the self-timed (asynchronous) implementation methodology. It features advantages such as freedom from clock skew, low power consumption, composability and so forth. Since the data flow (data-driven) computation model executes instructions asynchronously, it is natural to implement a data flow processor using self-timed circuits. In this thesis, micropipelines, one of the self-timed implementation methodologies, are used to implement a preliminary version of a general-purpose static data flow processor. Some interesting observations are addressed in this thesis. An example program of a general recursive difference equation is given to test the correctness and performance of this processor. We hope to gain more insight into how to design and implement self-timed systems in the future.
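The data-driven execution that motivates such a processor can be sketched as a firing rule: a node executes as soon as a token is present on each of its inputs. The toy interpreter below illustrates static dataflow semantics only and is not the processor design itself:

```python
# Dataflow graph: each node names its input arcs, an operation, and an
# output arc. A node fires when every input arc holds at least one token.
graph = {
    "add": {"inputs": ["a", "b"], "op": lambda x, y: x + y, "out": "s"},
    "dbl": {"inputs": ["s"],      "op": lambda x: 2 * x,    "out": "d"},
}
tokens = {"a": [3], "b": [4], "s": [], "d": []}

fired = True
while fired:
    fired = False
    for name, node in graph.items():
        if all(tokens[arc] for arc in node["inputs"]):
            args = [tokens[arc].pop(0) for arc in node["inputs"]]
            tokens[node["out"]].append(node["op"](*args))
            print(f"{name} fired -> {node['out']} = {tokens[node['out']][-1]}")
            fired = True

print("result:", tokens["d"])  # [14]
```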
Graduation date: 1994
APA, Harvard, Vancouver, ISO, and other styles
45

Merani, Lalit T. "A micro data flow (MDF) : a data flow approach to self-timed VLSI system design for DSP." Thesis, 1993. http://hdl.handle.net/1957/36301.

Full text
Abstract:
Synchronization is one of the important issues in digital system design. While other approaches have been intriguing, up until now a globally clocked timing discipline has been the dominant design philosophy. However, with advances in technology, we have reached the point where other options should be given serious consideration. VLSI promises great processing power at low cost. This increase in computational power has been obtained by scaling the digital IC process. But as this scaling continues, it is doubtful that the advantages of faster devices can be fully exploited, because clock periods are becoming much smaller relative to interconnect propagation delays, even within a single chip and certainly at the board and backplane level. In this thesis, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital computational system design and digital communication system design. The latter field is relevant because large propagation delays have always been a dominant consideration in its design methods. Asynchronous design gives better performance than comparable synchronous design in situations where global synchronization with a high-speed clock becomes a constraint on greater system throughput. Asynchronous circuits with unbounded gate delays, or self-timed digital circuits, can be designed by employing either of two request-acknowledge protocols: 4-cycle and 2-cycle. We also present an alternative approach to the problem of mapping computation algorithms directly into asynchronous circuits. Data flow graphs or languages are used to describe the computation algorithms. The data flow primitives have been designed using both the 2-cycle and 4-cycle signaling schemes, which are compared in terms of performance and transistor count. The 2-cycle implementations prove to be better than their 4-cycle counterparts. A promising application of self-timed design is in high-performance DSP systems. Since there is no global constraint of clock distribution, localized forward-only connections allow computation to be extended and sped up using pipelining. A decimation filter was designed and simulated to check the system-level performance of the two protocols. Simulations were carried out using VHDL for high-level definition of the design. The simulation results demonstrate not only the efficacy of our synthesis procedure but also the improved efficiency of the 2-cycle scheme over the 4-cycle scheme.
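The difference between the two request-acknowledge disciplines can be made concrete by counting wire transitions per data transfer: a 4-cycle (return-to-zero) handshake takes four transitions, a 2-cycle (transition-signaling) handshake only two. A minimal model of the protocols, not of the thesis's circuits:

```python
def four_cycle(n):
    """4-cycle handshake: req up, ack up, req down, ack down per transfer."""
    events = []
    for _ in range(n):
        events += ["req=1", "ack=1", "req=0", "ack=0"]
    return events

def two_cycle(n):
    """2-cycle handshake: one toggle each of req and ack per transfer."""
    events, req, ack = [], 0, 0
    for _ in range(n):
        req ^= 1; events.append(f"req={req}")
        ack ^= 1; events.append(f"ack={ack}")
    return events

n = 3
print(len(four_cycle(n)), "wire transitions for 4-cycle")  # 12
print(len(two_cycle(n)), "wire transitions for 2-cycle")   # 6
```

Fewer transitions per transfer is one intuition for the lower power and higher throughput reported for the 2-cycle implementations.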
Graduation date: 1994
APA, Harvard, Vancouver, ISO, and other styles
46

Deng, An-Te. "Flexible ASIC design using the Block Data Flow Paradigm (BDFP)." 2002. http://www.lib.ncsu.edu/theses/available/etd-05152002-131155/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Jun-Rong (李濬榮). "Service Oriented Architecture Design for Engineering Data Analysis Flow Management System." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/35446004508545522391.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Information Management
95
SOA is a school of IS design thought. The idea is to analyze an enterprise's overall business activities, find reusable and composable functions, and build these functions into services using IT that adopts widely used, standardized protocols and specifications. Services with these features and capabilities can be widely reused, composed and integrated in both intra- and inter-corporate business activities. The key features of the EDA (engineering data analysis) system are a complex analysis flow and dispersed manufacturing data. Given the complex analysis flow, highly reusable and composable analysis functions are useful to data analyzers. Manufacturing data are dispersed across the kinds of equipment and the different monitoring and management IS used in manufacturing processes, and these IS are usually developed in heterogeneous environments. Since the above SOA features and capabilities closely match the needs of the EDA IS, this paper discusses the major issues in SOA design for an EDA flow management system, proposes solutions, discusses the mainstream IT of SOA, and proposes a service architecture for the EDA flow management system.
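The reuse-and-composition idea can be sketched with a toy registry in which analysis functions sit behind a uniform interface and are chained into a flow. The names and data shapes below are invented for illustration and are not the system's actual service design:

```python
from typing import Callable, Dict, List

Service = Callable[[Dict], Dict]
REGISTRY: Dict[str, Service] = {}

def service(name: str):
    """Register a function as a named, reusable analysis service."""
    def wrap(fn: Service) -> Service:
        REGISTRY[name] = fn
        return fn
    return wrap

@service("clean")
def clean(data: Dict) -> Dict:
    # Drop missing measurements before analysis.
    data["values"] = [v for v in data["values"] if v is not None]
    return data

@service("summarize")
def summarize(data: Dict) -> Dict:
    # Derive a simple statistic from the cleaned values.
    data["mean"] = sum(data["values"]) / len(data["values"])
    return data

def run_flow(steps: List[str], data: Dict) -> Dict:
    """Compose registered services, in order, into an analysis flow."""
    for step in steps:
        data = REGISTRY[step](data)
    return data

print(run_flow(["clean", "summarize"], {"values": [1.0, None, 3.0]}))
```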
APA, Harvard, Vancouver, ISO, and other styles
48

"Network design: districting and multi-commodity flow problems." 2002. http://library.cuhk.edu.hk/record=b6073407.

Full text
Abstract:
by Ng Suk Fung.
"February 18, 2002."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2002.
Includes bibliographical references (p. 215-222).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
49

WANG, MEI-LING (王美玲). "Design of a testable multiplier in GF (21YS) based on data flow concept." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/97993889207661040429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Tuan-Pao, Chiu (邱團寶). "Transformation Of Object Oriented Analysis Into Data Flow Oriented Analysis and Design of Related CASE Tool." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/02575257231107544055.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
81
OOA (Object-Oriented Analysis) is an alternative to structured analysis. Because OOP (Object-Oriented Programming) has become so popular, OOA is gradually gaining importance. The paper presents an OOA method whose procedure contains four steps: stating the problem, establishing the information model, establishing the state model, and establishing the process model. The first step, the problem statement, describes the system requirements. In the information model, we identify objects and adopt relationship diagrams to present the associations between them. In the state model, state transition diagrams are used to describe the behavior of objects. The process model describes the actions of the state transition diagrams by means of data flow diagrams. To analyze a system, the method suggests first identifying the domains of the system. After OOA, OOD (Object-Oriented Design) begins, derived from the OOA; we therefore present a method to transform OOA into OOD, following the approach presented by Shlaer and Mellor. Structured analysis has been in use for years, and its tools, such as the DFD (data flow diagram), are popular; many CASE (Computer-Aided Software Engineering) tools based on structured analysis have been built. The paper presents a method to transform OOA into DFDs, whose purpose is to help developers use the OOA method in the analysis phase and CASE tools based on structured analysis in the other phases. Finally, the paper presents a rough specification of a CASE tool based on the object-oriented model, focusing on its OOA part; some of the tool's formats are presented. The transformation of OOA to OOD and the transformation of OOA to DFD can be automated. This paper uses an ATM (Automatic Teller Machine) as an example to explain the methods and transformations.
APA, Harvard, Vancouver, ISO, and other styles