
Dissertations / Theses on the topic 'Data Fabric'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 dissertations / theses for your research on the topic 'Data Fabric.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Zou, Haichuan. "Investigation of hardware and software configuration on a wavelet-based vision system--a case study." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thomas, Howard LaVann. "Analysis of defects in woven fabrics : development of the knowledge base." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/9185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Júlio, Fábio José Correia. "A layer 2 multipath fabric using a centralized controller." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11139.

Full text
Abstract:
Dissertation submitted for the Master's degree in Electrical and Computer Engineering. Ethernet is the most widely used L2 protocol in modern datacenter networks. These networks often serve as the underlying infrastructure for highly virtualised cloud computing services. To support such services, the underlying network needs to support host mobility and multi-tenant isolation for a large number of hosts, while using the available bandwidth efficiently and keeping costs low. These important properties are not ensured by Ethernet protocols. Bandwidth is wasted because the spanning tree protocol is used to calculate paths, and scalability can be an issue because the MAC learning process is based on frame flooding. At layer 3 some of these problems can be solved, but layer 3 is harder to configure, poses difficulties for host mobility and is more expensive. Recent efforts try to bring the advantages of layer 3 to layer 2. Most of them are based on some form of Equal-Cost Multipath (ECMP) to calculate paths in the data center network. The solution proposed in this document uses a different approach. Paths are calculated using a non-ECMP, policy-based control plane implemented in an OpenFlow controller. OpenFlow is a protocol developed to help researchers test new ideas on real networks without interfering with production traffic. To do that, OpenFlow has to be supported by the network's switches. Communication between systems is done over SSL, and all switch features are available to the controller. The non-ECMP, policy-based algorithm is a different way to do routing: instead of using unit metrics on each link, one policy is chosen for each link. The use of policies makes it possible to treat very different paths as having the same forwarding preference, increasing the number of usable paths. Our approach uses the recent Provider Backbone Bridging (PBB) standard, which adds extra header information to the Ethernet frame and provides isolation between customer and network address spaces, improving scalability.
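As an aside for readers, the non-ECMP idea lends itself to a small illustration. The Python sketch below contrasts ECMP-style pruning to equal-cost shortest paths with a policy-based selection that admits unequal-length paths of the same forwarding class; the six-switch topology and its link policy labels are invented, and the thesis's actual control plane is an OpenFlow controller application, not this fragment.

```python
# Hypothetical topology: each link carries a policy class label instead of
# a unit metric. "gold" marks links admitted to the forwarding class;
# the "silver" direct link is excluded from it.
LINKS = {
    ("s1", "s4"): "silver",
    ("s1", "s2"): "gold", ("s2", "s4"): "gold",
    ("s1", "s3"): "gold", ("s3", "s4"): "gold",
    ("s1", "s5"): "gold", ("s5", "s6"): "gold", ("s6", "s4"): "gold",
}

def neighbours(node):
    for (a, b) in LINKS:
        if a == node:
            yield b
        elif b == node:
            yield a

def simple_paths(src, dst, path=None):
    """Depth-first enumeration of loop-free paths."""
    path = path or [src]
    if path[-1] == dst:
        yield list(path)
        return
    for nxt in neighbours(path[-1]):
        if nxt not in path:
            yield from simple_paths(src, dst, path + [nxt])

def link_policy(a, b):
    return LINKS.get((a, b)) or LINKS.get((b, a))

paths = list(simple_paths("s1", "s4"))
shortest = min(len(p) for p in paths)
ecmp = [p for p in paths if len(p) == shortest]          # equal cost only
gold = [p for p in paths
        if all(link_policy(a, b) == "gold" for a, b in zip(p, p[1:]))]

print(f"ECMP uses {len(ecmp)} path(s); policy-based uses {len(gold)} path(s)")
```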
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Yuan. "The Fabric of Entropy: A Discussion on the Meaning of Fractional Information." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538775/.

Full text
Abstract:
Why is the term information in English an uncountable noun, whereas in information theory it is a well-defined quantity? Since the amount of information can be quantified, what is the meaning of a fraction of that amount? This dissertation introduces a quasi-entropy matrix, developed from Claude Shannon's information measure, as an analytical tool for behavioral studies. The matrix emphasizes the role of relative characteristics of individual-level data across different collections. The real challenge in the big data era is never the size of the dataset, but how data lead scientists to individuals rather than arbitrarily divided statistical groups. The proposed matrix, when combined with other statistical measures, provides a new and easy-to-use method for identifying patterns in a well-defined system, because it is built on the idea that uneven probability distributions lead to a decrease in system entropy. Although the matrix is not superior to classical correlation techniques, it allows an interpretation not available with traditional standard statistics. Finally, the matrix connects heterogeneous datasets because it is a frequency-based method that works on the modes of data rather than the means of values. It also visualizes clustering in data, although this type of clustering is not measured by the squared Euclidean distance of the numerical attributes.
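As a worked illustration of the principle the abstract builds on (uneven probability distributions lower entropy), here is a minimal sketch computing Shannon's measure for a uniform and a skewed distribution; it does not reproduce the thesis's quasi-entropy matrix itself.

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum p_i * log2(p_i), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain: 2.0 bits
skewed  = [0.70, 0.20, 0.05, 0.05]   # uneven distribution: ~1.26 bits

print(shannon_entropy(uniform))      # 2.0
print(shannon_entropy(skewed))       # ~1.257, lower system entropy
```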
APA, Harvard, Vancouver, ISO, and other styles
5

Comstedt, Erik. "Increasing the trust between automotive actors using a Hyperledger Fabric blockchain." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263100.

Full text
Abstract:
It is a well-established phenomenon that blockchain technology can be applied to reach consensus between entities that do not trust one another. Moreover, blockchain technology allows these trustless entities to agree on a shared ledger. Through its trustless consensus and shared-ledger properties, blockchain technology can provide trust between trustless parties. The present-day automotive industry suffers from several trust issues between the involved parties during a vehicle's life cycle. This thesis evaluates whether a blockchain-based solution can address these trust issues. A proof of concept is implemented using Hyperledger Fabric. To evaluate whether the proposed solution improves trust, a centralized database approach is also implemented as a baseline representing the traditional solution, and the two solutions are compared. The evaluated aspects are security, performance, and usefulness, with security considered the most vital. The experiments show that the blockchain-based solution achieves a higher degree of both security and usefulness, whereas the baseline database solution achieves better performance. The overall conclusion of our experiments is that the blockchain-based solution is significantly more trustworthy than the traditional database implementation, motivated by the fact that it is superior in terms of both security and usefulness.
APA, Harvard, Vancouver, ISO, and other styles
6

Lek-Uthai, J. "Real-time data monitoring on circular knitting to improve process efficiency and fabric quality." Thesis, University of Manchester, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488368.

Full text
Abstract:
The quality of knitted fabric is an important factor in the knitwear industry. On circular knitting machines, fabric quality is improved by using a positive storage feeding device, which delivers yarn to the needles so that stitches are formed at a constant rate. Yarn tension, yarn properties and yarn elongation are important parameters influencing course length in the knitted fabric, and course length is the most important parameter determining the dimensions of knitted fabrics. Therefore, a mathematical analysis was carried out to study the feasibility of using yarn length measurement to improve fabric quality and process efficiency. A PC-based system was created to monitor important information such as yarn run-in length, running machine condition, yarn breakage, needle breakage, and machine and time performance during normal operation. An additional sensor was installed to determine the above parameters. Special hardware and software were developed for monitoring and analysing the knitting process parameters in real time.
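A rough sketch of the kind of run-in check such a monitoring system performs is given below; the nominal course length and tolerance are invented numbers, not values from the thesis.

```python
# Compare measured yarn run-in per course against a nominal value and raise
# an alarm outside a tolerance band (simulated sensor readings).
NOMINAL_RUN_IN_M = 4.20   # hypothetical nominal yarn length per course (m)
TOLERANCE = 0.02          # +/- 2 % allowed deviation

def check_course(measured_run_in_m):
    deviation = (measured_run_in_m - NOMINAL_RUN_IN_M) / NOMINAL_RUN_IN_M
    status = "OK" if abs(deviation) <= TOLERANCE else "ALARM"
    return deviation, status

for sample in [4.21, 4.19, 4.33]:
    dev, status = check_course(sample)
    print(f"run-in {sample:.2f} m  deviation {dev:+.1%}  {status}")
```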
APA, Harvard, Vancouver, ISO, and other styles
7

Ponnakanti, Hari Priya. "A Hyperledger based Secure Data Management and Disease Diagnosis Framework Design for Healthcare." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627662565879478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yan, Zhaohui. "Performance Analysis of A Banyan Based ATM Switching Fabric with Packet Priority." PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5199.

Full text
Abstract:
Since the emergence of the Asynchronous Transfer Mode (ATM) concept, various switching architectures have been proposed, among them multistage interconnection networks. In this thesis, we propose a new model for the performance analysis of an ATM switching fabric based on a single-buffered Banyan network. In this model, we use a three-state Markov chain, with states "empty", "new" and "blocked", to describe the behavior of the buffer within a switching element. In addition to traditional statistical analysis including throughput and delay, we also examine the delay variation. Performance results show that the proposed model is more accurate in describing the switch behavior under a uniform traffic environment than the two-state Markov chain model developed by Jenq et al. [4][6]. Based on the three-state model, we study a packet priority scheme which gives a blocked packet higher priority to be routed forward during contention. It is found that the standard deviation of the network delay is reduced by about 30%.
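For illustration, the sketch below computes the stationary distribution of such a three-state "empty"/"new"/"blocked" chain by power iteration; the thesis derives the transition probabilities from the traffic model, so the matrix here is a placeholder chosen only to make the rows sum to 1.

```python
import numpy as np

# Hypothetical transition matrix over the buffer states (empty, new, blocked).
P = np.array([
    [0.5, 0.5, 0.0],   # empty   -> empty/new
    [0.3, 0.5, 0.2],   # new     -> empty/new/blocked
    [0.2, 0.3, 0.5],   # blocked -> empty/new/blocked
])

# Stationary distribution: the left eigenvector of P with eigenvalue 1,
# found here by simple power iteration.
pi = np.ones(3) / 3
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()

print(dict(zip(["empty", "new", "blocked"], pi.round(3))))
```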
APA, Harvard, Vancouver, ISO, and other styles
9

DeBenedetto, Louis J. "A Survey of Scalable Real-Time Architectures for Data Acquisition Systems." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/606834.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada. Today's large-scale signal processing systems impose massive bandwidth requirements on both internal and external communication systems. Most often, these bandwidth requirements are met by scalable input/output architectures built around high-performance, standards-based technology. Several such technologies are available and in common use as internal and/or external communication mechanisms. This paper provides an overview of some of the more common scalable technologies used for internal and external communications in real-time data acquisition systems. With respect to internal communication mechanisms, this paper focuses on three ANSI-standard switched fabric technologies: RACEway (ANSI/VITA 5-1994), SKYchannel (ANSI/VITA 10-1995) and Myrinet (ANSI/VITA 26-1998). The discussion then turns to how Fibre Channel, HiPPI, and ATM are used to provide scalable external communications in real-time systems. Finally, a glimpse of how these technologies are evolving to meet tomorrow's requirements is provided.
APA, Harvard, Vancouver, ISO, and other styles
10

PETROVICH, EUGENIO. "THE FABRIC OF KNOWLEDGE. TOWARDS A DOCUMENTAL HISTORY OF LATE ANALYTIC PHILOSOPHY." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/613334.

Full text
Abstract:
The dissertation presents an innovative approach (called «documental history») to the study of the history of contemporary philosophy, focusing on the case of Late Analytic Philosophy (LAP). The methodological innovation consists in applying citation analysis techniques, drawn from the field of scientometrics, to the analysis of the structure and dynamics of LAP. The main empirical results are presented in four scientometric analyses of LAP, which focus, respectively, on the scientometric distributions of LAP, the co-citation mapping of LAP, the epistemological function of citations within LAP, and the aging of LAP literature. The main theoretical result is the «feedback hypothesis», according to which the «documental space» of LAP shapes the intellectual behavior of analytic philosophers. Thus, the documental space of LAP should be regarded as a factor of philosophical change, alongside the traditional intellectual and sociological factors. A key aspect of the dissertation is the interdisciplinary integration of distant fields, such as scientometrics, history of philosophy, and philosophy of science.
APA, Harvard, Vancouver, ISO, and other styles
11

LeBlanc, Robert-Lee Daniel. "Analysis of Data Center Network Convergence Technologies." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4150.

Full text
Abstract:
The networks in traditional data centers have remained unchanged for decades and have grown large, complex and costly. Many data centers have a general-purpose Ethernet network and one or more additional specialized networks for storage or high-performance, low-latency applications. Network convergence promises to lower the cost and complexity of the data center network by virtualizing the different networks onto a single wire. There is little evidence, aside from vendors' claims, validating that network convergence actually achieves these goals. This work defines a framework for creating a series of unbiased tests to validate converged technologies and compare them to traditional configurations. A case study involving two different network convergence technologies was developed to validate the defined methodology and framework. The study also shows that these two technologies do indeed perform similarly to a non-virtualized network, reduce costs, cabling and power consumption, and are easy to operate.
APA, Harvard, Vancouver, ISO, and other styles
12

Yeo, Yong-Kee. "Dynamically Reconfigurable Optical Buffer and Multicast-Enabled Switch Fabric for Optical Packet Switching." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14615.

Full text
Abstract:
Optical packet switching (OPS) is one of the more promising solutions for meeting the diverse needs of broadband networking applications of the future. By virtue of its small data traffic granularity as well as its nanosecond switching speed, OPS can be used to provide connection-oriented or connectionless services for different groups of users with very different networking requirements. The optical buffer and the switch fabric are two of the most important components in an OPS router. In this research, novel designs for the optical buffer and switch fabric are proposed and experimentally demonstrated. In particular, an optical buffer based on a folded-path delay-line tree architecture is discussed. This buffer is the most compact non-recirculating optical delay-line buffer to date, and it uses an array of high-speed ON-OFF optical reflectors to dynamically reconfigure its delay within several nanoseconds. A major part of this research is devoted to the design and performance optimization of these high-speed reflectors. Simulations and measurements are used to compare different reflector designs as well as to determine their optimal operating conditions. Another important component in the OPS router is the switch fabric, which is used to perform space switching for the optical packets. Optical switch fabrics are used to overcome the limitations imposed by conventional electronic switch fabrics: high power consumption and dependency on the modulation format and bit-rate of the signals. Currently, only those fabrics that are based on the broadcast-and-select architecture can provide truly non-blocking multicast services to all input ports. However, a major drawback of these fabrics is that they are implemented using a large number of optical gates based on semiconductor optical amplifiers (SOAs). This results in a large component count and high energy consumption. In this research, a new multicast-capable switch fabric which does not require any SOA gates is proposed. This fabric relies on a passive all-optical gate based on the four-wave mixing (FWM) wavelength conversion process in a highly nonlinear fiber. By using this new switch architecture, a significant reduction in component count can be expected.
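A toy model of the reconfigurable-delay idea: assuming a k-stage binary delay-line cascade in which each stage is either included or bypassed by its reflector, k stages yield 2^k distinct delays. The stage delays below are invented, and the thesis's folded-path tree is a more compact realization of the same principle.

```python
from itertools import product

UNIT_DELAY_NS = 25.0                                # assumed shortest stage
STAGES = [UNIT_DELAY_NS * 2**i for i in range(3)]   # 25, 50, 100 ns

# Every ON/OFF reflector configuration selects a subset of stage delays.
delays = sorted({sum(d for d, on in zip(STAGES, cfg) if on)
                 for cfg in product([False, True], repeat=len(STAGES))})
print(delays)   # [0.0, 25.0, 50.0, 75.0, 100.0, 125.0, 150.0, 175.0]
```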
APA, Harvard, Vancouver, ISO, and other styles
13

Hammes, Daniel Markus [Verfasser]. "Data processing, 3D grain boundary modelling and analysis of in-situ deformation experiments using an automated fabric analyser microscope / Daniel Markus Hammes." Mainz : Universitätsbibliothek Mainz, 2017. http://d-nb.info/114906577X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Gruber, Jakub. "Využití podnikových dat k zabezpečování kvality výrobku." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442862.

Full text
Abstract:
The thesis provides a theoretical analysis and description of the use of company data, with emphasis on a systematic analysis of the problem. A specific production process and the data available from it are evaluated to support a technical and economic assessment.
APA, Harvard, Vancouver, ISO, and other styles
15

Nguyen, Trang Pham Ngoc. "A privacy preserving online learning framework for medical diagnosis applications." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2503.

Full text
Abstract:
Electronic health records are an important part of a digital healthcare system. Due to their significance, electronic health records have become a major target for hackers, and hospitals/clinics prefer to keep the records at local sites protected by adequate security measures. This introduces challenges in sharing health records. Sharing health records, however, is critical in building an accurate online diagnosis framework. Most local sites have small datasets, and machine learning models developed locally on small datasets have no knowledge of the datasets and learning models used at other sites. The work in this thesis coordinates blockchain technology with an online training mechanism to address the concerns of privacy and security in a methodical manner. Specifically, it integrates online learning with a permissioned blockchain network, using transaction metadata to broadcast part of the models while keeping patient health information private. This framework can handle different types of machine learning models using the same distributed dataset. The study also outlines the advantages and drawbacks of using blockchain technology to tackle the privacy-preserving predictive modeling problem and to improve interoperability amongst institutions. This study implements the proposed solution for skin cancer diagnosis as a representative case and shows promising results in preserving security and providing high detection accuracy. The experiments were performed on the ISIC dataset, and the results were 98.57, 99.13, 99.17 and 97.18 in terms of precision, accuracy, F1-score and recall, respectively.
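For reference, the reported figures are standard confusion-matrix metrics; the sketch below shows how precision, recall, accuracy and F1 are derived, with invented counts unrelated to the ISIC experiments.

```python
# Worked example of the reported metrics from a confusion matrix.
tp, fp, fn, tn = 950, 14, 28, 1008

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
accuracy  = (tp + tn) / (tp + fp + fn + tn)
f1        = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.4f} recall={recall:.4f} "
      f"accuracy={accuracy:.4f} F1={f1:.4f}")
```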
APA, Harvard, Vancouver, ISO, and other styles
16

Sinander, Pierre, and Tomas Issa. "Sign Language Translation." Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296169.

Full text
Abstract:
The purpose of the thesis was to create a data glove that can translate ASL by reading finger and hand movements. Furthermore, the applicability of conductive fabric as stretch sensors was explored. To read the hand gestures, stretch sensors constructed from conductive fabric were attached to each finger of the glove to distinguish how much it was bent. The hand movements were registered using a 3-axis accelerometer mounted on the glove. The sensor values were read by an Arduino Nano 33 IoT mounted at the wrist of the glove, which processed the readings and translated them into the corresponding sign. The microcontroller then wirelessly transmitted the result to another device through Bluetooth Low Energy. The glove was able to correctly translate all the signs of the ASL alphabet with an average accuracy of 93%. It was found that signs with small differences in hand gestures, such as S and T, were harder to distinguish between, resulting in an accuracy of 70% for these specific signs.
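A minimal sketch of the gesture-classification step, assuming five normalized flex-sensor readings per sign and invented calibration vectors (the thesis glove additionally uses the 3-axis accelerometer data):

```python
import math

# Invented calibration vectors: one normalized flex reading per finger.
CALIBRATION = {
    "A": [0.9, 0.9, 0.9, 0.9, 0.1],   # fist, thumb out
    "B": [0.1, 0.1, 0.1, 0.1, 0.8],   # fingers straight, thumb folded
    "S": [0.9, 0.9, 0.9, 0.9, 0.9],   # fist, thumb over fingers
}

def classify(reading):
    """Nearest-neighbour match of a sensor reading to a calibrated sign."""
    return min(CALIBRATION,
               key=lambda sign: math.dist(CALIBRATION[sign], reading))

print(classify([0.85, 0.92, 0.88, 0.90, 0.15]))   # -> "A"
```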
APA, Harvard, Vancouver, ISO, and other styles
17

Talele, Suraj Harish. "Comparative Study of Thermal Comfort Models Using Remote-Location Data for Local Sample Campus Building as a Case Study for Scalable Energy Modeling at Urban Level Using Virtual Information Fabric Infrastructure (VIFI)." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404602/.

Full text
Abstract:
The goal of this dissertation is to demonstrate that data from a remotely located building can be utilized for energy modeling of a similar type of building, and to demonstrate how to use such remote data without physically moving it from one server to another using Virtual Information Fabric Infrastructure (VIFI). To achieve this goal, an EnergyPlus model was first created for the Greek Life Center, a campus building at the University of North Texas in Denton, Texas, USA. Three thermal comfort models, the Fanger model, the Pierce two-node model and the KSU two-node model, were compared to find which of the three most accurately predicts occupant thermal comfort. This study shows that Fanger's model is the most accurate. Secondly, experimental data pertaining to lighting usage and occupancy in a single-occupancy office from Carnegie Mellon University (CMU) was used to perform an energy analysis of the Greek Life Center, assuming that occupants in this building's offices behave similarly to occupants at CMU. Thirdly, the different data types, data formats and data sources required to develop a city-scale urban building energy model (CS-UBEM) were identified. Two workflows were created, one for an individual-scale building energy model and another for a CS-UBEM. A new infrastructure, the Virtual Information Fabric Infrastructure (VIFI), is introduced in this dissertation. The workflows proposed in this study will demonstrate in future work that by using the VIFI infrastructure to develop building energy models there is a potential to use data from remote servers without actually moving the data. It has been successfully demonstrated in this dissertation that data located at a remote location can be used credibly to predict the energy consumption of a newly built building. When the remote experimental data on both lighting and occupancy were implemented, 4.57% energy savings were achieved in the Greek Life Center energy model.
APA, Harvard, Vancouver, ISO, and other styles
18

Karlsson, Daniel. "Modelling and Analysis of Swedish Heavy Industry Supply Chain Data Management to Improve Efficiency and Security." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291230.

Full text
Abstract:
Product certificates are sent throughout the supply chain of Swedish heavy industry to show the provenance and physical characteristics of objects such as screws. The data management of the certificates has been, and still is, a very manual process, requiring extensive work to maintain a correct record of the certificates. In particular, tracing causes of errors and establishing compliance takes considerable time and effort. The company Chaintraced is developing an application to automate the process by acting as a third party that digitalizes and manages the certificates. Introducing a third party into a business-to-business process requires that data integrity is preserved and that information reaches its expected destination. Recent research has indicated that distributed ledger technologies show promise in fulfilling these requirements. In particular, blockchain-based systems offer immutability and traceability of data, and can reduce the trust needed between different parties by relying on cryptographic primitives and consensus mechanisms. This thesis investigates the application of distributed ledger technology to further automate the Swedish heavy industry supply chain and reduce the trust needed in a third party managing the certificates. Requirements for an industrial-strength system are set up and several distributed ledger technology solutions are considered for the use case of Swedish heavy industry. A proof of concept based on the findings is implemented, tested and compared with a centralized database to explore its possible usage in the supply chain with regard to feasibility, immutability, traceability and security. The investigation resulted in a prototype based on Hyperledger Fabric to store product certificates. The solution provides certain guarantees of immutability and security while being developed with feasibility of deployment in mind. The proposed solution is shown to be slow compared to a centralized solution, but it scales linearly with the number of certificates and is considered within bounds for the use case. The results also show that the proposed solution is more trustworthy than a centralized solution, but that adopting blockchain technology is an extensive task. In particular, the trustworthiness and guarantees provided by the solution are highly dependent on the feasibility aspect, and the investigation concludes that adoption of blockchain technology within the Swedish heavy industry must take this into consideration.
APA, Harvard, Vancouver, ISO, and other styles
19

Tian, Xuwen, and 田旭文. "Data-driven textile flaw detection methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hdl.handle.net/10722/196091.

Full text
Abstract:
This research develops three efficient textile flaw detection methods to facilitate automated textile inspection for the textile-related industries. Their novelty lies in detecting flaws with knowledge directly extracted from textile images, unlike existing methods which detect flaws with empirically specified texture features. The first two methods treat textile flaw detection as a texture classification problem, and consider that defect-free images of a textile fabric normally possess common latent images, called basis-images. The inner product of a basis-image and an image acquired from this fabric is a feature value of this fabric image. As the defect-free images are similar, their feature values gather in a cluster, whose boundary can be determined by using the feature values of known defect-free images. A fabric image is considered defect-free if its feature values lie within this boundary. These methods extract the basis-images from known defect-free images in a training process, and require less consideration than existing methods of how well a textile matches the texture features specified for it. One method uses matrix singular value decomposition (SVD) to extract basis-images containing the spatial relationship of pixels in rows or in columns. The alternative method uses tensor decomposition to find the relationship of pixels in both rows and columns within each training image and the common relationship of these training images. Tensor decomposition is found to be superior to matrix SVD in finding the basis-images needed to represent these defect-free images, because extracting and decomposing the tri-lateral relationship usually generates better basis-images. The third method solves the textile flaw detection problem by means of texture segmentation, and is suitable for online detection because it does not require texture features specified by experience or found from known defect-free images. The method detects the presence of flaws by using the contrast between regions in the feature images of a textile image. These feature images are the output of a filter bank consisting of Gabor filters with different scales and rotations. This method selects the feature image with maximal image contrast, and partitions this image into regions with the morphological watershed transform to facilitate faster searching of defect-free regions and to remove isolated pixels with exceptional feature values. Regions with no flaws have similar statistics, e.g. similar means. Regions with significantly dissimilar statistics may contain flaws and are removed iteratively from the set which initially contains all regions. Region removal uses thresholds determined by the Neyman-Pearson criterion and updated along with the remaining regions in the set. This procedure continues until the set contains only defect-free regions. The occurrence of the removed regions indicates the presence of flaws, whose extents are decided by pixel classification using the thresholds derived from the defect-free regions. A prototype textile inspection system was built to demonstrate the automatic textile inspection process. The developed methods are proved reliable and effective by testing them with a variety of defective textile images. These methods also have several advantages, e.g. less empirical knowledge of textiles is needed for selecting texture features.
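A toy stand-in for the first, SVD-based method is sketched below: defect-free training images are stacked as vectors and basis-images are extracted with SVD. For brevity the acceptance test here uses the reconstruction residual against those basis-images rather than the thesis's cluster boundary on inner-product features, and a synthetic 16x16 texture stands in for real fabric images.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(flaw=False):
    img = np.tile(np.sin(np.arange(16.0)), (16, 1))   # periodic "weave"
    img += 0.05 * rng.standard_normal((16, 16))       # acquisition noise
    if flaw:
        img[4:8, 4:8] += 2.0                          # simulated local defect
    return img.ravel()

train = np.stack([make_image() for _ in range(20)])
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:3]                                        # leading basis-images

def residual(img):
    """Distance from the subspace spanned by the basis-images."""
    centred = img - mean
    return np.linalg.norm(centred - basis.T @ (basis @ centred))

threshold = 1.5 * max(residual(x) for x in train)

print(residual(make_image()) <= threshold)            # True: defect-free
print(residual(make_image(flaw=True)) <= threshold)   # False: flaw detected
```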
APA, Harvard, Vancouver, ISO, and other styles
20

Hassen, Fadoua. "Multistage packet-switching fabrics for data center networks." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/17620/.

Full text
Abstract:
Recent applications have imposed stringent requirements on Data Center Network (DCN) switches in terms of scalability, throughput and latency. In this thesis, the architectural design of the packet-switches is tackled in different ways to enable expansion in both the number of connected endpoints and traffic volume. A cost-effective Clos-network switch with partially buffered units is proposed and two packet scheduling algorithms are described. The first algorithm adopts many simple and distributed arbiters, while the second approach relies on a central arbiter to guarantee ordered packet delivery. For improved scalability, the Clos switch is built using a Network-on-Chip (NoC) fabric instead of the common crossbar units. The Clos-UDN architecture, made with Input-Queued (IQ) Uni-Directional NoC modules (UDNs), simplifies the input line cards and obviates the need for the costly Virtual Output Queues (VOQs). It also avoids the need for complex, synchronized scheduling processes, and offers speedup, load balancing, and good path diversity. Under skewed traffic, reliable micro load-balancing contributes to boosting the overall network performance. Taking advantage of the NoC paradigm, a wrapped-around multistage switch with fully interconnected Central Modules (CMs) is proposed. The architecture operates with a congestion-aware routing algorithm that proactively distributes the traffic load across the switching modules, and enhances the switch performance under critical packet arrivals. The implementation of small on-chip buffers has been made perfectly feasible with current technology. This motivated the implementation of a large switching architecture with an Output-Queued (OQ) NoC fabric. The design merges assets of output queuing and NoCs to provide high throughput and smooth latency variations. An approximate analytical model of the switch performance is also proposed. To further exploit the potential of the NoC fabrics and their modularity features, a high-capacity Clos switch with Multi-Directional NoC (MDN) modules is presented. The Clos-MDN switching architecture exhibits a more compact layout than the Clos-UDN switch. It scales better and faster in port count and traffic load. Results achieved in this thesis demonstrate the high performance, expandability and programmability of the proposed packet-switches, which make them promising candidates for the next-generation data center networking infrastructure.
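As a sizing aside, the sketch below applies the classic strict-sense nonblocking condition for a three-stage Clos(m, n, r) network, m >= 2n - 1, and compares crosspoint counts with a single crossbar; the port counts are arbitrary, and in the thesis the crossbar units are replaced by NoC modules.

```python
# Three-stage Clos(m, n, r): r ingress modules of n ports each feed m
# central modules of size r x r, mirrored on the egress side.
def clos_crosspoints(n, r, m):
    return 2 * r * (n * m) + m * (r * r)   # ingress + egress + central

ports = 256
n, r = 16, 16                  # 16 modules of 16 ports -> 256 ports
m = 2 * n - 1                  # smallest strict-sense nonblocking m (Clos, 1953)

print("nonblocking m:", m)                          # 31
print("Clos crosspoints:", clos_crosspoints(n, r, m))  # 23808
print("single crossbar :", ports * ports)              # 65536
```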
APA, Harvard, Vancouver, ISO, and other styles
21

Bayona, Viveros Jose [Verfasser], Fabrice [Akademischer Betreuer] Cotton, Danijel [Akademischer Betreuer] Schorlemmer, Fabrice [Gutachter] Cotton, Corné [Gutachter] Kreemer, and David [Gutachter] Marsan. "Constructing global stationary seismicity models from the long-term balance of interseismic strain measurements and earthquake-catalog data / Jose Bayona Viveros ; Gutachter: Fabrice Cotton, Corné Kreemer, David Marsan ; Fabrice Cotton, Danijel Schorlemmer." Potsdam : Universität Potsdam, 2021. http://d-nb.info/1236786521/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Keränen, Saara. ""Vaddå bygga en fabrik i datorn, liksom?" : Om användandet av simulering inom pappers- och massaindustrin." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4719.

Full text
Abstract:
This thesis examines the use of simulation technology in the pulp and paper industry. The aim is to investigate which factors can affect its use. Other questions in focus are: How is simulation technology used today? How is information about simulation disseminated? How can the use of simulation be integrated into continuous operations? How is simulation developing? Data was collected through a case study, interviews, telephone interviews, and by attending a discussion on the suitability of simulation technology. The participants are people in various positions in the pulp and paper industry, consultants, and others relevant to the study. In total, 13 people took part in the study. Five mills are represented, three of which use simulation. The results show that the use of simulation is mainly affected by the economic, organisational, staffing, technical and knowledge resources at the mills, by the poor integration between existing technology and simulation technology, and by the way information about simulation is disseminated. Above all, the knowledge and interest of the mill staff are of great importance for its use. Simulation is used, for example, to produce dimensioning data and to increase understanding of the processes in the mill. Its use spreads mainly through personal contacts, for example at conferences and seminars and through consultants. Examples of use are also significant for its spread. A prerequisite for simulation to become an integrated tool at the mills appears to be that the model and simulation expertise reside at the mills, not at consulting firms. Continuous use can be facilitated by appointing a role with overall responsibility for the application and by improving the model's ability to be updated. The respondents believe that the development of simulation is mainly a matter of moving towards more comprehensive models and of improvements to both the simulation tools and the ability to integrate simulation technology with the existing technology at the mills.
APA, Harvard, Vancouver, ISO, and other styles
23

Gigioli, George William Jr. "Optimization and tolerancing of nonlinear Fabry Perot etalons for optical computing systems." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184537.

Full text
Abstract:
Since the discovery of optical bistability, a considerable amount of research activity has been aimed toward the realization of general-purpose all-optical computers. The basic premise for most of this work is the widely held notion that a reliable optical switch can be fabricated from a piece of optically bistable material. To date, only a very small number of published articles have addressed the engineering issues (that is, the optimization and tolerancing) of these optical switches. This dissertation is a systematic treatment of these issues. Starting from Maxwell's equations, a simple model of optically bistable Fabry-Perot etalons is outlined, in which the material is assumed to be a pure Kerr medium having linear absorption. This model allows for a relatively straightforward optical switch optimization procedure, applicable to optimizing any number of switch parameters. The emphasis in this dissertation is on the optimization of the contrast of the switch's output signals, with the other parameters (switching energy, tolerance sensitivity) assuming a secondary yet critical role. Following the optimization of the optical switch is a tolerance analysis which addresses the manufacturability and noise immunity of the optimized switch. In the first part of this analysis, equations describing the propagation of errors through a large-scale system of like devices are derived from the truth tables of the switches themselves. From these equations, worst-case tolerances are established for the optical switch's transfer function parameters. In the second part of the tolerance analysis, the bistability model is used to arrive at tolerances on the physical parameters of the switch. These tolerances are what determine the manufacturability of the optical switches. The major conclusion of the dissertation is that, within the range of validity of the model and the other simplifying assumptions, optically bistable Fabry-Perot etalons cannot be used reliably as logic gates in large-scale computing systems.
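A minimal numerical sketch of the dispersive bistability behind such a switch, assuming the textbook lossless Airy transmission and a cavity phase that shifts linearly with transmitted intensity; the reflectivity, detuning and Kerr coefficient are illustrative, not the optimized parameters derived in the dissertation.

```python
import numpy as np

R = 0.5                  # mirror reflectivity
DELTA0 = -2.0            # initial cavity detuning (radians)
GAMMA = 4.0              # Kerr phase shift per unit transmitted intensity

def airy(delta):
    """Lossless Fabry-Perot transmission."""
    return (1 - R) ** 2 / ((1 - R) ** 2 + 4 * R * np.sin(delta / 2) ** 2)

def transmitted(i_in, i_t):
    """Damped fixed-point solve of I_t = I_in * T(delta0 + gamma * I_t)."""
    for _ in range(300):
        i_t = 0.8 * i_t + 0.2 * i_in * airy(DELTA0 + GAMMA * i_t)
    return i_t

drive = np.linspace(0.0, 2.0, 201)
i_t, up, down = 0.0, [], []
for i_in in drive:                 # sweep input power upward...
    i_t = transmitted(i_in, i_t)
    up.append(i_t)
for i_in in drive[::-1]:           # ...then back down
    i_t = transmitted(i_in, i_t)
    down.append(i_t)

# At the same drive level the two sweeps sit on different branches:
# the hysteresis that makes the etalon usable as a latching optical switch.
idx = 60                           # I_in = 0.6
print(f"I_in={drive[idx]:.2f}: up-sweep I_t={up[idx]:.3f}, "
      f"down-sweep I_t={down[::-1][idx]:.3f}")
```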
APA, Harvard, Vancouver, ISO, and other styles
24

Dobler, Jeremy Todd. "Novel Alternating Frequency Doppler Lidar Instrument for Wind Measurements in the Lower Troposphere." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1358%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wonjiga, Amir Teshome. "User-centric security monitoring in cloud environments." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S080.

Full text
Abstract:
Migrating to the cloud results in losing full control of the physical infrastructure, as the cloud service provider (CSP) is responsible for managing the infrastructure, including its security. As this requires tenants to rely on CSPs for the security of their information system, it creates a trust issue. CSPs acknowledge the trust issue and provide a guarantee through Service Level Agreements (SLAs). An SLA describes the provided service and the penalties for cases of violation. Almost all existing SLAs address only the functional features of the cloud and thus do not guarantee the security of tenants' hosted services. Security monitoring is the process of collecting and analyzing indicators of potential security threats, then triaging these threats with appropriate action. It is highly desirable for CSPs to provide user-specific security monitoring services based on the requirements of a tenant. In this thesis we present our contribution to including user-centric security monitoring terms in cloud SLAs. This requires performing different tasks in the cloud service life-cycle, from before the actual service deployment until the end of the service. Our contributions are as follows: we design extensions to an existing SLA language called Cloud SLA (CSLA). Our extension, called Extended CSLA (ECSLA), allows tenants to describe their security monitoring requirements in terms of vulnerabilities. More precisely, a security monitoring service is described as a relation between user requirements expressed as vulnerabilities, a software product having the vulnerabilities, and an infrastructure where the software is running. To offer security monitoring SLAs, CSPs need to measure the performance of their security monitoring capability under different configurations. We propose a solution that reduces the required number of evaluations compared to the number of possible configurations. The proposed solution introduces two new ideas. First, we design a knowledge-base building method that uses clustering to group related vulnerabilities using heuristics. Second, we propose a model to quantify the interference between operations monitoring different vulnerabilities. Using these two methods we can estimate the performance of a monitoring device with few evaluations compared to the naive approach. The metrics used in our SLA terms take into account the operational environment of the security monitoring devices. In order to account for non-deterministic operational environment parameters, we propose an estimation mechanism where the performance of a monitoring device is measured using known values of these parameters and the result is used to model its performance and estimate it for unknown values. An SLA definition contains the model, which can be used whenever the measurement is performed. We propose an in situ evaluation method for the security monitoring configuration, which can evaluate the performance of a security monitoring setup in a production environment. The method uses an attack injection technique, but injected attacks do not affect the production virtual machines. We have implemented and evaluated the proposed method.
The method can be used by either of the parties to compute the required metric; however, it requires cooperation between tenants and CSPs. In order to reduce the dependency between tenants and CSPs while performing verification, we propose to use a logical secure component. The proposed use of a logical secure component for verification is illustrated in an SLA addressing data integrity in clouds. The method uses a secure, trusted and distributed ledger (blockchain) to store evidence of data integrity, and it allows checking data integrity without relying on the other party. If there is any conflict between tenants and CSPs, the evidence can be used to resolve it.
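A minimal sketch of the evidence idea, with a simple hash-chained in-memory log standing in for the distributed ledger; the class and method names are invented for illustration.

```python
import hashlib
import json
import time

class EvidenceLedger:
    """Integrity digests chained with hashes, so neither tenant nor
    provider can silently rewrite past entries. A real deployment would
    use a distributed ledger, not this single in-memory list."""

    def __init__(self):
        self.entries = []

    def record(self, object_id, data):
        digest = hashlib.sha256(data).hexdigest()
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"object_id": object_id, "digest": digest,
                "timestamp": time.time(), "prev": prev}
        # Hash the entry body (before the hash field is added) to chain it.
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self, object_id, data):
        """Check current data against the latest recorded digest."""
        digest = hashlib.sha256(data).hexdigest()
        latest = [e for e in self.entries if e["object_id"] == object_id][-1]
        return latest["digest"] == digest

ledger = EvidenceLedger()
ledger.record("vm-42/disk.img", b"tenant data v1")
print(ledger.verify("vm-42/disk.img", b"tenant data v1"))   # True
print(ledger.verify("vm-42/disk.img", b"tampered data"))    # False
```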
APA, Harvard, Vancouver, ISO, and other styles
26

KLINGA, PETTER, and ERIK STORÅ. "Vilka utmaningar och hinder möter större tillverkande företag vid implementering av digital och smart teknik samt hur kan dessa åtgärdas? : En studie kring den pågående digitala transformationen av tillverkningsindustrin." Thesis, KTH, Industriell produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233206.

Full text
Abstract:
The global industry has during the last decade undergone a considerable digital transformation, and the application of digital and smart technology within companies has never been more relevant. In November 2011, the term Industry 4.0 was presented in an article by the German government describing a technology-intensive strategy for the year 2020; it signifies what is today defined as the fourth industrial revolution.
Industry 4.0 largely consists of the integration process between technology and the remaining operations within a manufacturing company, which enables technologies such as automation, augmented reality, simulations, intelligent manufacturing processes and other process-industrial IT tools and systems. Several research studies have suggested that Industry 4.0 technologies have the potential to revolutionize the way companies manufacture products; however, since the concept is relatively new, abstract and composed of various complex technologies and components, implementing these within a manufacturing environment is one of the largest challenges that manufacturing companies are facing. This study therefore aims to highlight the challenges and difficulties that large manufacturing companies face when implementing digital and smart technology, as well as provide solutions for how they can be overcome. The overall goal is to deliver useful results for active companies within the manufacturing industry, serving as support when analyzing and discussing possible implementation strategies and investments related to Industry 4.0, and also to give surrounding stakeholders an understanding of the subject. At the commencement of the project, a literature study was performed to develop an overview of how Industry 4.0 has been discussed in previous theses and research studies and to find previously identified difficulties in the implementation process. Finally, a field study was performed at Scania, Atlas Copco and the technology consulting firm Knightec. The main purpose was to gain a more realistic perspective on how digitalization and Industry 4.0 systems are applied and to verify that the information from our theoretical study is relevant and applicable within an actual manufacturing company. The study revealed that the identified difficulties and challenges can be found within multiple organizational areas of a manufacturing company, the most distinct aspects being strategy, leadership, customers, culture, employees, legal governance and technology. The results showed that companies were characterized by an overall lack of strategy for implementing new technologies, conflicts with employees during implementation, difficulties integrating customer orders with production, a lack of technical skills among staff, legal issues regarding data storage, and difficulties integrating new and old technologies.
APA, Harvard, Vancouver, ISO, and other styles
27

Abily-Donval, Lénaïg. "Exploration des mécanismes physiopathologiques des mucopolysacharidoses et de la maladie de Fabry par approches "omiques" et modulation de l'autophagie. Urinary metabolic phenotyping of mucopolysaccharidosis type I combining untargeted and targeted strategies with data modeling Unveiling metabolic remodeling in mucopolysaccharidosis type III through integrative metabolomics and pathway analysis." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR108.

Full text
Abstract:
Lysosomal diseases are caused by a quantitative or qualitative defect of a hydrolase or a transporter and induce potentially severe multiorgan features. Some specific symptomatic treatments are available, but they do not cure patients. The pathophysiological bases of lysosomal diseases are poorly understood and cannot be attributed to storage alone; a better knowledge of these pathologies could improve their overall management. The first aim of this work was to apply "omics" strategies to two groups of diseases: the mucopolysaccharidoses and Fabry disease. This thesis allowed the implementation of an untargeted metabolomic methodology based on a multidimensional analytical strategy combining high-resolution mass spectrometry with ultra-high-performance liquid chromatography and ion mobility. In the mucopolysaccharidoses, analysis of metabolic pathways showed a major remodeling of several amino acid metabolisms as well as of the oxidative glutathione system. In Fabry disease, changes were observed in the expression of interleukin 7 and of the growth factor FGF2. The second part of the work focused on the modulation of autophagy in Fabry disease. Our study showed a decrease of autophagic flux and a delay in enzyme targeting to the lysosome in Fabry cells. Inhibition of autophagy reduced the accumulation of the stored substrate (Gb3) and improved the efficiency of enzyme replacement therapy. In conclusion, this work provided a better understanding of the pathophysiological mechanisms implicated in lysosomal diseases and showed the complexity of lysosome function. These data could improve diagnostic and therapeutic strategies for these diseases and bring hope for patients.
APA, Harvard, Vancouver, ISO, and other styles
28

Nessle, Åsbrink Marcus. "A case study of how Industry 4.0 will impact on a manual assembly process in an existing production system : Interpretation, enablers and benefits." Thesis, KTH, Industriell produktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288514.

Full text
Abstract:
The term Industry 4.0, sometimes referred to as a buzzword, is today on everyone's tongue, and its benefits undeniably seem promising, with the potential to revolutionize the manufacturing industry. But what does it really mean? From a high-level business perspective, the concept most often promises operational efficiency and new business models, but studies show that many companies either lack understanding of the concept and how it should be implemented or are dissatisfied with the progress of already implemented solutions. Further, there is a perception that it is difficult to implement the concept without interfering with the current production system. The purpose of this study is to interpret and outline the main characteristics and key components of the concept of Industry 4.0, and further to break down and conclude the potential benefits and enablers for a manufacturing company within the heavy automotive industry. To this end, a case study was performed at a manual final-assembly production unit within the heavy automotive industry. Accordingly, the study intends to give a deeper understanding of the concept and, specifically, of how manual assembly within an existing manual production system will be affected, and thus to outline the crucial enablers for successfully implementing Industry 4.0 and being prepared to adapt to the future challenges of the industry. The case study, performed through observations and interviews, approaches the issue from two perspectives: the current state and the desired state. A theoretical framework is then used as a basis for analysis of the results in order to present the findings and conclusions of the study. Lastly, two proofs of concept are performed to exemplify and support the findings. The study shows that succeeding with the implementation of Industry 4.0 is not only about the related technology itself. Equally important are the integration into the existing production system and the design and purpose of the manual assembly process. Finally, the study shows that creating understanding and commitment in the organization through strategy, leadership, culture and competence is of the greatest importance for success.
APA, Harvard, Vancouver, ISO, and other styles
29

Sylvan, Andreas. "Internet of Things in Surface Mount TechnologyElectronics Assembly." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209243.

Full text
Abstract:
Currently, manufacturers in the European Surface Mount Technology (SMT) industry see production changeover, machine downtime and process optimization as their biggest challenges. They also see a need for collecting data and sharing information between the machines, people and systems involved in the manufacturing process. Internet of Things (IoT) technology provides an opportunity to make this happen. This research project answers the question of what the potentials and challenges of IoT implementation are in European SMT manufacturing. First, key IoT concepts are introduced. Then, through interviews with experts working in SMT manufacturing, the current standpoint of the SMT industry is established. The study pinpoints obstacles to IoT implementation in SMT and proposes a solution. Firstly, local data collection and sharing need to be achieved through the use of standardized IoT protocols and APIs. Secondly, because SMT manufacturers do not trust that sensitive data will remain secure in the cloud, a separation of proprietary data and statistical data is needed in order to take the next step and collect Big Data in a cloud service. This will allow new services to be offered by equipment manufacturers.
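As a rough illustration of the proprietary/statistical data separation proposed above, the following Python sketch shows an edge-side split in which machine-identifying fields stay on premises and only anonymised aggregates are forwarded to a cloud store. The field names, the sensitive-key list and the aggregate format are all illustrative assumptions, not part of the thesis.

    # Illustrative sketch: split SMT machine records into proprietary data
    # (kept on premises) and statistical aggregates (safe to push to a cloud
    # Big Data store). Field names and the aggregate format are assumptions.
    from statistics import mean

    PROPRIETARY_FIELDS = {"machine_id", "recipe", "operator"}  # assumed sensitive keys

    def split_record(record):
        """Return (local_part, cloud_part) for one telemetry record."""
        local = {k: v for k, v in record.items() if k in PROPRIETARY_FIELDS}
        cloud = {k: v for k, v in record.items() if k not in PROPRIETARY_FIELDS}
        return local, cloud

    def aggregate(records, field):
        """Statistical summary suitable for cloud-side analytics."""
        values = [r[field] for r in records]
        return {"field": field, "n": len(values),
                "mean": mean(values), "min": min(values), "max": max(values)}

    records = [
        {"machine_id": "P&P-07", "recipe": "boardA", "operator": "x",
         "placements_per_hour": 31200, "downtime_s": 112},
        {"machine_id": "P&P-07", "recipe": "boardA", "operator": "y",
         "placements_per_hour": 29800, "downtime_s": 340},
    ]
    cloud_parts = [split_record(r)[1] for r in records]
    print(aggregate(cloud_parts, "downtime_s"))  # only statistics leave the factory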
APA, Harvard, Vancouver, ISO, and other styles
30

Nigam, Aakash. "Information Centric Data Collection and Dissemination Fabric for Smart Infrastructures." Thesis, 2013. http://hdl.handle.net/1807/43274.

Full text
Abstract:
Evolving smart infrastructures require both content distribution and event notification and processing support. Content Centric Networking (CCN), built around named data, is a clean-slate network architecture for supporting future applications. Due to its focus on content distribution, CCN does not inherently support publish-subscribe event notification, a fundamental building block in computer-mediated systems and a critical requirement for smart infrastructure applications. While the semantics of content distribution and event notification require different support from the underlying network infrastructure, the two can still be united by leveraging similarities in the routing infrastructure. Our Extended-CCN architecture (X-CCN) realizes this to provide a lightweight content-based pub-sub service at the network layer, which is used to provide advanced publish/subscribe services at higher layers. Lightweight content-based pub-sub and CCN communication at the network layer, together with advanced publish/subscribe, are presented as the data fabric for smart infrastructure applications.
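To make the layering concrete, here is a minimal Python sketch of the content-based publish/subscribe idea described above: subscriptions are predicates over named data, and a forwarding node notifies every matching subscriber. The class, the name scheme and the predicate format are illustrative assumptions and do not reflect the actual X-CCN packet formats.

    # Illustrative sketch of content-based pub/sub over named data:
    # subscribers register predicates on hierarchical names and attributes;
    # a forwarding node notifies every matching subscriber. This models the
    # service, not the actual X-CCN protocol or its wire format.

    class PubSubNode:
        def __init__(self):
            self.subscriptions = []  # (name_prefix, predicate, callback)

        def subscribe(self, name_prefix, predicate, callback):
            self.subscriptions.append((name_prefix, predicate, callback))

        def publish(self, name, attributes):
            for prefix, predicate, callback in self.subscriptions:
                if name.startswith(prefix) and predicate(attributes):
                    callback(name, attributes)

    node = PubSubNode()
    node.subscribe("/smartgrid/meter",
                   lambda a: a.get("kw", 0) > 5.0,        # content condition
                   lambda n, a: print("alert:", n, a))
    node.publish("/smartgrid/meter/42", {"kw": 7.3})      # matches, triggers alert
    node.publish("/smartgrid/meter/43", {"kw": 1.1})      # filtered out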
APA, Harvard, Vancouver, ISO, and other styles
31

Eichelbaum, Sebastian, Mario Hlawitschka, Bernd Hamann, and Gerik Scheuermann. "Fabric-like Visualization of Tensor Field Data on Arbitrary Surfaces in Image Space." 2012. https://ul.qucosa.de/id/qucosa%3A32502.

Full text
Abstract:
Tensors are of great interest to many applications in engineering and in medical imaging, but proper analysis and visualization remain challenging. It has already been shown that, by employing the metaphor of a fabric structure, tensor data can be visualized precisely on surfaces, where the two eigendirections in the plane are illustrated as thread-like structures. This leads to a continuous visualization of the most salient features of the tensor data set. We introduce a novel approach to computing such a visualization from tensor field data that is motivated by image-space line integral convolution (LIC). Although our approach can be applied to arbitrary, non-self-intersecting surfaces, the main focus lies on surfaces following important features, such as surfaces aligned to the neural pathways in the human brain. By adding a postprocessing step, we are able to enhance the visual quality of the results, which improves the perception of the major patterns.
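For readers unfamiliar with LIC, the following Python sketch shows the core idea in its simplest 2-D form: white noise is smeared along a direction field (here a synthetic circular field standing in for an eigendirection field), producing the thread-like streaks the abstract describes. It is a toy flat-image version and assumes nothing about the paper's image-space surface formulation or its postprocessing step.

    # Minimal line integral convolution (LIC) sketch: sample a noise texture
    # along short streamlines of a 2-D direction field so coherent streaks
    # appear along the (eigen)directions. Toy version for illustration only.
    import numpy as np

    def lic(noise, vx, vy, steps=10, h=0.8):
        ny, nx = noise.shape
        out = np.zeros_like(noise)
        ys, xs = np.mgrid[0:ny, 0:nx].astype(float)
        for sign in (+1.0, -1.0):                 # integrate both directions
            x, y = xs.copy(), ys.copy()
            for _ in range(steps):
                ix = np.clip(x.round().astype(int), 0, nx - 1)
                iy = np.clip(y.round().astype(int), 0, ny - 1)
                out += noise[iy, ix]              # accumulate noise samples
                x += sign * h * vx[iy, ix]        # Euler step along the field
                y += sign * h * vy[iy, ix]
        return out / (2 * steps)

    n = 128
    noise = np.random.rand(n, n)
    yy, xx = np.mgrid[0:n, 0:n]
    angle = np.arctan2(yy - n / 2, xx - n / 2) + np.pi / 2   # circular field
    img = lic(noise, np.cos(angle), np.sin(angle))           # streaked image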
APA, Harvard, Vancouver, ISO, and other styles
32

Belli, Romina. "Replicate palaeoclimate multi-proxy data series from different speleothems from N. Italy: reproducibility of the data and new methodologies." Thesis, 2013. http://hdl.handle.net/1959.13/1037787.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Changes in the geochemical and physical properties of speleothems are considered to be accurate proxies of climate variability. However, the climate signal is modified by the internal dynamics of the whole karst system. The aim of the research was to obtain reproducible data, extracted by established and non-conventional techniques, from two coeval speleothems removed from Grotta Savi cave (Italy), to gain information about regional climate responses across the Last Glacial Maximum to Holocene transition. Different past hydrological regimes for the two stalagmites' drips were reconstructed on the basis of the stalagmites' physical characteristics, and this helped to disentangle global from local phenomena. This non-conventional approach was applied here for the first time to a fossil sample, resulting in a benchmark for interpreting the chemical proxies and enabling assessment of the calcite formation environment, hitherto not possible. The interpretation of δ¹⁸O values as reflecting past hydrology was then validated by using the Hydrology Index. The Index, developed in this study, considers two independent proxies: the Mg concentrations and the fraction of Sr uptake that is not dictated by growth rates. The method allowed recognition of a non-hydrological component encapsulated in the δ¹⁸O values, interpreted as changes of air-mass provenance and rainfall seasonality. The δ¹³C was chiefly driven by the temperature-dependent soil respiration rate. However, a hydrological component was also detected in the δ¹³C by using the dead carbon proportion (dcp) and ⁸⁷Sr/⁸⁶Sr ratios. Increases in ⁸⁷Sr/⁸⁶Sr ratios suggest increases in aeolian dust deflated from proximal subalpine periglacial regions, facilitated by vegetation-cover reduction, soil destabilisation and windier conditions, which in turn enhanced the drier conditions. Although the dcp trend was likely related to a local, faster soil organic matter turnover enhanced by warmer conditions, episodes of high dcp values were possibly hydrologically induced, as a result of wetter conditions. Furthermore, the Hydrology Index and the δ¹³C signal allowed the reconstruction of wet conditions during climate cooling, an improvement relative to the state of the art of δ¹³C interpretation, in which wet conditions more commonly occur during warming. The comparison of the δ¹³C trend at Savi with another stalagmite with similar physical characteristics, but from a cave (Sofular) located in Turkey, revealed a common trend, despite the impact of the last glaciation having been drastic at Savi (no speleothem growth). Such δ¹³C similarity could be related to global phenomena and points to an intriguing possibility, which needs future testing, that speleothems may encode information about the C cycle, similar to soil carbonates. The palaeoclimate interpretation extracted from the Savi records between 15 and 9 ka indicates that the Younger Dryas (YD) was a dramatic climate reversal. In the northern Adriatic, the YD is characterised by high hydrological variability, strong winds and a cooling, which resulted in a decrease of vegetation cover and an increase of soil erosion. The wind regime was possibly orographically induced, with the Alps acting as a barrier, deviating westerly winds and causing increased windiness in the northern Adriatic region. The Savi records also reveal a significant Early Holocene anomaly (10.4 ka), whose drier and colder conditions were probably amplified by a local synoptic framework.
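The abstract does not state how the Hydrology Index is computed, so the following Python sketch is only a guess at its spirit rather than the thesis's formula: the growth-rate-dependent part of Sr is removed by a simple linear regression, and the residual is combined with standardised Mg concentrations. The regression form, the equal weighting and the data are all assumptions.

    # Hypothetical reconstruction of a two-proxy "hydrology index": Mg plus
    # the part of Sr not explained by growth rate. NOT the thesis's formula.
    import numpy as np

    def hydrology_index(mg, sr, growth_rate):
        A = np.vstack([growth_rate, np.ones_like(growth_rate)]).T
        coef, *_ = np.linalg.lstsq(A, sr, rcond=None)
        sr_residual = sr - A @ coef              # Sr fraction not tied to growth
        z = lambda v: (v - v.mean()) / v.std()   # standardise each proxy
        return (z(mg) + z(sr_residual)) / 2      # assumed equal weighting

    rng = np.random.default_rng(0)
    growth = rng.uniform(0.5, 2.0, 200)
    sr = 0.8 * growth + rng.normal(0, 0.1, 200)  # growth-driven + other variance
    mg = rng.normal(1.0, 0.2, 200)
    print(hydrology_index(mg, sr, growth)[:5])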
APA, Harvard, Vancouver, ISO, and other styles
33

Yadav, Akash. "Experimental Study and Data Analysis of Water Transport and Their Initial Fate in Through Unsaturated or Dry Bioreactor Columns Filled with Different Porous Media." Thesis, 2013. http://hdl.handle.net/10919/24266.

Full text
Abstract:
The electro-kinetic characteristics of bioreactor columns filled with different materials for treating water and wastewater are experimentally studied. Separate columns of unsaturated gravel (~6 mm) and ball clay were assessed for electro-kinetic characteristics by dosing water at hydraulic loading rates of 50 ml/min and 10 ml/min. Locally available organic materials such as sawdust, Moringa oleifera sheets and textile cloth pieces were likewise empirically analyzed, and the size effects of the bioreactor columns were also studied. The effluent from the textile cloth and gravel reactors showed an increase in pH, while a decrease in pH was observed in the effluent of the Moringa oleifera and sawdust reactors, possibly due to leaching of acidic organic components from the sawdust and Moringa oleifera. In gravel, effluent pH decreased with increasing flow rate, but the general trend of the effluent pH curve showed an initial improvement before it levelled off to an asymptote for a given constant dosage and column height. A multi-parameter stochastic linear model was derived for the change in pH as a function of column height, dosage rate, time for a specific volume discharge, and the change in electrical conductivity between influent and effluent. A general stochastic model was also developed to characterize the pH change in any bioreactor irrespective of the material media. Thirty centimetres of gravel exhibited an increase in conductivity with increasing flow rate, while conductivity dipped with increasing flow rate when the gravel column height was halved. The measure of organic compounds in the water decreased with increasing percolation rate through gravel, and the chemical oxygen demand ratio within the gravel approached unity, showing increased containment of organic compounds with time. The organic textile cloth reactor also showed increased conductivity with increasing flow, but conductivity dipped with increasing column height. For the Moringa oleifera reactors, dosing water at 10 ml/min showed a significant improvement in conductivity with increasing column height. An initial dip in the temperature curve was observed within the clay and gravel reactors; with increasing depth there was an increase in temperature within the gravel as saturation by water improved, a trend not seen in the sawdust reactors. A birth-process model is proposed to simulate temperature within a bioreactor as a function of time, irrespective of the specific material used as the reactor media.
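The abstract names the regressors of the stochastic linear pH model but not its coefficients, so the Python sketch below only shows how such a multi-parameter model would be fitted by ordinary least squares. The data, the "true" coefficients and the units are invented for the demonstration.

    # Fit delta_pH ~ b0 + b1*height + b2*dosage + b3*time + b4*delta_EC by
    # ordinary least squares. Synthetic data; only the regressor set follows
    # the abstract, the coefficients are not from the thesis.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 150
    height = rng.uniform(15, 60, n)        # column height, cm
    dosage = rng.choice([10.0, 50.0], n)   # ml/min
    time = rng.uniform(1, 30, n)           # min for a fixed discharge volume
    d_ec = rng.normal(0, 5, n)             # uS/cm change, influent -> effluent

    # synthetic "truth", purely for demonstration
    d_ph = (0.02 * height - 0.005 * dosage + 0.01 * time + 0.03 * d_ec
            + rng.normal(0, 0.05, n))

    X = np.column_stack([np.ones(n), height, dosage, time, d_ec])
    beta, *_ = np.linalg.lstsq(X, d_ph, rcond=None)
    print("fitted coefficients:", np.round(beta, 3))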
APA, Harvard, Vancouver, ISO, and other styles
34

Antolín, Tomas Borja [Verfasser]. "Tectonic evolution of the Tethyan Himalaya in SE Tibet deduced from magnetic fabric, structural, metamorphic and paleomagnetic data / vorgelegt von Borja Antolín Tomas." 2010. http://d-nb.info/1004223420/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bastos, Pedro. "Sistema de predição de avarias em máquinas de unidades fabris globalmente dispersas." Doctoral thesis, 2015. http://hdl.handle.net/10198/11866.

Full text
Abstract:
In recent years we have witnessed several profound changes in industrial manufacturing. Many industrial processes are now automated in order to ensure production quality and to minimize costs. Manufacturing companies have been collecting and storing increasingly large amounts of accurate and relevant production data. The stored data offer enormous potential, providing a source of new knowledge; however, the volume and complexity of the data often outstrip the capacity to analyse them, making automated analysis techniques necessary. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases, presenting an opportunity to increase significantly the rate at which large volumes of data can be turned into useful information. So far, the use of accumulated data has been limited, which has led to the "rich data but poor information" problem. In this work, the data mining tool RapidMiner is used to create and apply different prediction algorithms to maintenance and condition-monitoring data from equipment on different production lines, and the algorithms are compared with respect to their accuracy in the discovery of patterns and in the predictions made. The tool is integrated into an online system which collects data using automatic agents and presents the results to the maintenance teams in a comprehensible way. The remote data collection is based on a system of distributed agents which, given its nature, is responsible for remote data collection through a functional architecture. The purpose of the prediction algorithms is to forecast future values based on present records, in order to estimate the possibility of a machine breakdown and thereby support maintenance teams in planning appropriate measures to avoid failures or to mitigate their effects. The main contributions of this work are (i) the definition of the architecture of a functional failure-prediction system and (ii) the creation of a data mining prototype using the tool RapidMiner v5.3.15.
Fundação para a Ciência e a Tecnologia (FCT)
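The thesis builds its prediction workflows graphically in RapidMiner; as a rough text-based analogue of the accuracy comparison it describes, the Python sketch below trains two classifiers on synthetic condition-monitoring features and compares their test accuracy. The features, the failure rule and the model choice are illustrative assumptions, not the thesis's workflow.

    # Rough analogue of the described workflow: learn to predict machine
    # failure from condition-monitoring data and compare model accuracies.
    # Synthetic data; the thesis itself uses RapidMiner v5.3 workflows.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(2)
    n = 1000
    X = np.column_stack([
        rng.normal(60, 10, n),    # bearing temperature
        rng.normal(4, 1, n),      # vibration rms
        rng.uniform(0, 5000, n),  # hours since last maintenance
    ])
    # failures more likely when hot, vibrating and overdue (synthetic rule)
    p = 1 / (1 + np.exp(-(0.08 * (X[:, 0] - 70)
                          + 0.9 * (X[:, 1] - 5)
                          + 0.0008 * (X[:, 2] - 3000))))
    y = rng.random(n) < p

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    for model in (DecisionTreeClassifier(), RandomForestClassifier()):
        model.fit(Xtr, ytr)
        acc = accuracy_score(yte, model.predict(Xte))
        print(type(model).__name__, round(acc, 3))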
APA, Harvard, Vancouver, ISO, and other styles
36

Tong, Shidong. "A computerized data acquisition and control system for Fabry-Perot interferometry /." 1990. http://collections.mun.ca/u?/theses,51067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Becker, Jens Karl. "The emplacement of the Chinamora Batholith (Zimbabwe) inferred from field observations, magnetic- and microfabrics." Doctoral thesis, 2000. http://hdl.handle.net/11858/00-1735-0000-0006-B35F-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bastos, Pedro Miguel Lopes. "Sistema de predição de avarias em máquinas de unidades fabris globalmente dispersas." Doctoral thesis, 2015. http://hdl.handle.net/1822/38505.

Full text
Abstract:
Doctoral thesis in the field of Industrial and Systems Engineering.
In recent years we have witnessed several profound changes in industrial manufacturing. Many industrial processes are now automated in order to ensure production quality and to minimize costs. Manufacturing companies have been collecting and storing increasingly large amounts of accurate and relevant production data. The stored data offer enormous potential, providing a source of new knowledge; however, the volume and complexity of the data often outstrip the capacity to analyse them, making automated analysis techniques necessary. Data mining has emerged as an important tool for knowledge acquisition from manufacturing databases, presenting an opportunity to increase significantly the rate at which large volumes of data can be turned into useful information. So far, the use of accumulated data has been limited, which has led to the "rich data but poor information" problem. In this work, the data mining tool RapidMiner is used to create and apply different prediction algorithms to maintenance and condition-monitoring data from equipment on different production lines, and the algorithms are compared with respect to their accuracy in the discovery of patterns and in the predictions made. The tool is integrated into an online system which collects data using automatic agents and presents the results to the maintenance teams in a comprehensible way. The remote data collection is based on a system of distributed agents which, given its nature, is responsible for remote data collection through a functional architecture. The purpose of the prediction algorithms is to forecast future values based on present records, in order to estimate the possibility of a machine breakdown and thereby support maintenance teams in planning appropriate measures to avoid failures or to mitigate their effects. The main contributions of this work are (i) the definition of the architecture of a functional failure-prediction system and (ii) the creation of a data mining prototype using the tool RapidMiner v5.3.15.
Fundação para a Ciência e a Tecnologia (FCT), through the PROTEC grant (a programme supporting the advanced training of polytechnic higher-education teachers).
APA, Harvard, Vancouver, ISO, and other styles
39

Pizzo, Federica. "Estensione della definizione di Malattia di Fabry in base ai dati ottenuti dallo studio dell'attività enzimatica e degli aplotipi genetici." Doctoral thesis, 2011. http://hdl.handle.net/10447/95123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Caranto, Neil Raymund Yu. "Birefringent Fabry-Perot sensors." Thesis, 1998. https://vuir.vu.edu.au/15754/.

Full text
Abstract:
This thesis describes the development and performance of short-cavity polarisation-maintaining (birefringent) in-fibre Fabry-Perot interferometric sensors for the measurement of temperature or strain. By using a highly birefringent optical fibre in their cavity, these sensors form two interferometers, one for each polarisation axis. As in the case of other fibre interferometers, each axial interferometer is extremely sensitive to temperature and strain, but this high sensitivity, together with the periodic nature of each interferometer output, yields a limited unambiguous measurand range (UMR). However, the differential phase between the axial interferometric phase responses of the birefringent sensors developed in this work can be exploited to provide an extended UMR.
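As a numerical illustration of the extended-UMR idea: if the two polarisation-axis interferometers have slightly different phase sensitivities, each wrapped phase repeats over a short measurand interval, while their difference repeats only over the much longer beat interval. The sensitivity values in this Python sketch are invented for illustration, not taken from the thesis.

    # Two axial interferometers with close (assumed) sensitivities k1, k2:
    # each wrapped phase is ambiguous with period 2*pi/k, while the
    # differential phase is ambiguous only with period 2*pi/(k1 - k2).
    import numpy as np

    k1, k2 = 10.0, 9.5              # rad per unit measurand (illustrative)
    x = np.linspace(0, 5, 1000)     # measurand (e.g. strain, arbitrary units)

    phi1 = np.mod(k1 * x, 2 * np.pi)          # wrapped axial phase, axis 1
    phi2 = np.mod(k2 * x, 2 * np.pi)          # wrapped axial phase, axis 2
    dphi = np.mod(phi1 - phi2, 2 * np.pi)     # differential phase

    print("UMR of one axis:    ", 2 * np.pi / k1)          # ~0.63 units
    print("UMR of differential:", 2 * np.pi / (k1 - k2))   # ~12.6 units
    # dphi grows slowly over the whole range, so it identifies the fringe
    # order of phi1/phi2 and thereby extends the usable measurand range.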
APA, Harvard, Vancouver, ISO, and other styles
41

Kaddu, Ssenyomo Charles. "Low coherence fibre optic Fabry-Perot sensors suitable for multiplexed strain measurement." Thesis, 1995. https://vuir.vu.edu.au/15756/.

Full text
Abstract:
This thesis contains an investigation of the potential of low-finesse in-fibre Fabry-Perot interferometer (FFPI) sensors for the measurement of strain. In a number of areas of modern engineering there is a need for an alternative to conventional resistive strain gauges, particularly where a number of sensors can be multiplexed onto a common carrier so that one system can be used for multi-point strain measurements; hence the emphasis is on low-finesse sensors which are suitable for multiplexing. The thesis concentrates on the use of white light interferometry (WLI) techniques to measure the optical path changes produced in the sensors by the application of strain. Since thermal effects also produce phase changes in the FFPI which are indistinguishable from strain changes, the investigations have included both the thermal and strain responses of the sensors.
APA, Harvard, Vancouver, ISO, and other styles
42

Vasiliev, Mikhail. "Low coherence fibre interferometry with a multi-wavelength light source." Thesis, 2001. https://vuir.vu.edu.au/15753/.

Full text
Abstract:
This thesis describes the results of a study of multi-wavelength low-coherence fibre interferometry. The development and performance of all-fibre interferometric measurement systems utilising multi-wavelength combination sources are described. The principal aspects of the low-coherence sensor interrogation scheme are analysed in order to substantiate the development of fibre sensors with a particular configuration that employs a reflection-type Fabry-Perot as the sensing interferometer and a reflection-type Michelson as the receiver interferometer. The background to the choice of an optimised signal processing scheme and data processing algorithms is given. Predictions are made regarding the achievable measurement resolution and speed, together with an analysis of the limitations of the chosen measurement scheme.
APA, Harvard, Vancouver, ISO, and other styles
43

Ingraham, Patrick. "Détection et caractérisation de naines brunes et exoplanètes avec un filtre accordable pour applications dans l'espace." Thèse, 2013. http://hdl.handle.net/1866/9194.

Full text
Abstract:
This thesis determines the capability of detecting faint companions in the presence of speckle noise when performing space-based high-contrast imaging through spectral differential imaging (SDI) using a low-order Fabry-Perot etalon as a tunable filter. The performance of such a tunable filter is illustrated through the Tunable Filter Imager (TFI), an instrument designed for the James Webb Space Telescope (JWST). Using a TFI prototype etalon and a custom-designed test bed, the etalon's ability to perform speckle suppression through SDI is demonstrated experimentally. Improvements in contrast vary with separation, ranging from a factor of 10 at working angles greater than 11 lambda/D up to a factor of 60 at 5 lambda/D. These measurements are consistent with a Fresnel optical propagation model which shows that the speckle-suppression capability is limited by the test bed and not by the etalon. This result demonstrates that a tunable filter is an attractive option for high-contrast imaging through SDI. To explore the capability of space-based SDI using an etalon, we perform an end-to-end Fresnel propagation of JWST and TFI. Using this simulation, a contrast improvement ranging from a factor of 7 to 100 is predicted, depending on the instrument's configuration. The performance of roll subtraction is simulated and compared to that of SDI, and the SDI capability of the Near-Infrared Imager and Slitless Spectrograph (NIRISS), the science instrument module replacing TFI in the JWST Fine Guidance Sensor, is also determined. Using low-resolution, multi-band (0.85-2.4 um) multi-object spectroscopy, 104 objects towards the central region of the Orion Nebula Cluster have been assigned spectral types, including 7 new brown dwarfs and 4 new planetary-mass candidates. These objects are useful for determining the substellar initial mass function and for testing evolutionary and atmospheric models of young stellar and substellar objects. Using the measured H-band magnitudes combined with our determined extinction values, the classified objects are used to create a Hertzsprung-Russell diagram for the cluster. Our results indicate a single epoch of star formation beginning 1 Myr ago. The initial mass function of the cluster is derived and found to be consistent with the values determined for other young clusters and for the galactic disk.
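A toy numerical sketch of the SDI principle exploited here: speckles are diffractive and so scale radially with wavelength, while a real companion stays put; rescaling one image to the other's speckle scale and subtracting cancels the speckles but leaves a companion signature. The 1-D profiles, wavelengths and speckle model in this Python sketch are invented for illustration.

    # Toy 1-D illustration of spectral differential imaging: speckle
    # structure dilates radially with wavelength, a companion does not.
    import numpy as np

    lam1, lam2 = 1.60, 1.64                  # microns (illustrative)
    r = np.linspace(1.0, 20.0, 2000)         # separation, lambda/D-like units

    def speckles(u):                         # quasi-static speckle halo model
        return (1 + np.cos(3 * u) * np.cos(0.7 * u)) / u

    companion = 0.5 * np.exp(-0.5 * ((r - 11) / 0.3) ** 2)   # fixed position

    img1 = speckles(r) + companion                   # image at lam1
    img2 = speckles(r * lam1 / lam2) + companion     # speckles dilated at lam2

    img2_aligned = np.interp(r * lam2 / lam1, r, img2)  # undo the dilation
    residual = img1 - img2_aligned   # speckles cancel; the companion survives
                                     # as a characteristic +/- pair
    print("speckle rms :", np.std(speckles(r)))
    print("residual rms:", np.std(residual))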
APA, Harvard, Vancouver, ISO, and other styles
44

Silva, João Pedro Mendonça de Assunção da. "Automatic and intelligent integration of manufacture standardized specifications to support product life cycle - an ontology based methodology." Doctoral thesis, 2009. http://hdl.handle.net/1822/10154.

Full text
Abstract:
Doctoral thesis in Production Technologies.
In recent decades, globalization has introduced significant changes in the product life cycle. A worldwide market offers a vast range of products, in terms of both variety and quality; in consequence, markets progressively demand highly customized products with short life cycles. Computational resources have made an important contribution to maintaining manufacturing competitiveness and to a rapid adaptation to the paradigm change from mass production to mass customization. In this environment, enterprise and product modeling have been the best response to new requirements such as flexibility, agility and intensely dynamic behavior. Enterprise modeling enabled the convergence of production into an integrated virtual process: several enterprises clearly assumed new formats, such as extended enterprises or virtual/agile enterprises, to guarantee product and resource coordination and management within the organization and with volatile external partners. Product modeling, on the other hand, evolved as traditional human-oriented resources (like technical drawings) migrated to more capable computational product models (like CAD or CAE models). Product modeling, together with an advanced information structure, has been recognized by the academic and industrial communities as the best way to integrate and coordinate the various aspects of the product life cycle in the early design stages. Settling product specifications early and accurately is a direct consequence of enriching product models with additional features. Manufacture specifications, long embedded in technical drawings or text-based notes, therefore need to adapt to this reality, notably because they lack integration automation and computational support. Recent enhancements in standard product models (like the ISO 10303 STEP product data models) have made a significant contribution towards product knowledge capture and information integration. Nevertheless, computational integration issues arise because multiple terminologies are in use along the product life cycle, owing to different team backgrounds; moreover, the advent of the Internet demanded semantic capabilities in standard product models for better integration with enterprise agents. Ontologies facilitate computational understanding, communication and seamless interoperability between people and organizations: they allow the key concepts and terms relevant to a given domain to be identified and defined in an open and unambiguous computational way, and thus facilitate the use and exchange of data, information and knowledge among interdisciplinary teams and heterogeneous systems, towards intelligent systems integration. This work proposes a methodology to support the development of a harmonized reference ontology for a group of enterprises sharing a business domain. The methodology is based on the concept of a Mediator Ontology (MO), which assists the semantic transformations between each enterprise's ontology and the reference one, making it possible for each organization to keep its own terminology, glossary and ontological structures while communicating and interacting seamlessly with the others. The methodology fosters the reuse of data and knowledge incorporated in standard product models, effectively supporting collaborative engineering teams in the process of evaluating product manufacturability and anticipating the validity of manufacture specifications.
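To show the mediation idea in miniature, the Python sketch below maps two enterprises' local terms onto shared reference concepts through a mediator table, so each side keeps its own vocabulary. The terms, mappings and record format are invented; a real Mediator Ontology would also carry structure and semantics, not just a name table.

    # Minimal sketch of semantic mediation: each enterprise keeps its own
    # vocabulary; a mediator maps local terms to shared reference concepts
    # so data can be exchanged without renaming anything locally.
    # All terms and mappings below are invented for illustration.

    MEDIATOR = {  # (enterprise, local term) -> reference concept
        ("EnterpriseA", "bore_dia"): "HoleDiameter",
        ("EnterpriseB", "drill_size"): "HoleDiameter",
        ("EnterpriseA", "surf_finish"): "SurfaceRoughness",
        ("EnterpriseB", "Ra"): "SurfaceRoughness",
    }

    def to_reference(enterprise, record):
        """Translate one enterprise's record into reference-ontology terms."""
        return {MEDIATOR[(enterprise, k)]: v for k, v in record.items()}

    def from_reference(enterprise, record):
        """Translate a reference-ontology record into local terms."""
        inverse = {v: k for (e, k), v in MEDIATOR.items() if e == enterprise}
        return {inverse[c]: v for c, v in record.items()}

    spec_a = {"bore_dia": 8.0, "surf_finish": 1.6}
    shared = to_reference("EnterpriseA", spec_a)
    print(shared)                                 # reference-ontology view
    print(from_reference("EnterpriseB", shared))  # EnterpriseB's local view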
APA, Harvard, Vancouver, ISO, and other styles