Academic literature on the topic 'Fluidic devices – Design – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fluidic devices – Design – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fluidic devices – Design – Data processing"

1

Mendez, Miguel Alfonso, Maria Teresa Scelzo, Adriana Enache, and Jean-Marie Buchlin. "Fluidic Vectoring of a Planar Incompressible Jet Flow." EPJ Web of Conferences 180 (2018): 02065. http://dx.doi.org/10.1051/epjconf/201818002065.

Full text
Abstract:
This paper presents an experimental, numerical, and theoretical analysis of the performance of a fluidic vectoring device for controlling the direction of a turbulent, two-dimensional, low-Mach-number (incompressible) jet flow. The investigated design is co-flow secondary injection with a Coanda surface, which allows vectoring angles of up to 25° with no need for moving mechanical parts. A simple empirical model of the vectoring process is presented and validated against experimental and numerical data. The experiments consist of flow visualization and image processing for the automatic detection of the jet centerline; the numerical simulations solve the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations closed with the k-ω SST turbulence model, using the PisoFoam solver from OpenFOAM. The experimental validation on three different geometrical configurations has shown that the model provides a fast and reliable evaluation of the device performance as a function of the operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
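
The experiments above rely on flow visualization plus image processing to detect the jet centerline automatically and to estimate the vectoring angle. As a rough illustration of that kind of post-processing (a minimal sketch, not the authors' pipeline; the synthetic image, the noise threshold, and the linear fit are assumptions), the following Python snippet takes the intensity-weighted centroid of each image column as the local jet position and fits a line to recover a deflection angle.

```python
import numpy as np

def jet_centerline(intensity, threshold=0.1):
    """Estimate a jet centerline from a 2D grayscale flow-visualization image.

    For every streamwise column, the transverse jet position is taken as the
    intensity-weighted centroid of pixels above a noise threshold. Returns one
    centerline y-position (in pixels) per column; NaN where no jet is visible.
    """
    img = np.asarray(intensity, dtype=float)
    img = np.where(img >= threshold * img.max(), img, 0.0)   # suppress background
    rows = np.arange(img.shape[0])[:, None]                  # transverse pixel index
    weight = img.sum(axis=0)
    centroid = (rows * img).sum(axis=0) / np.maximum(weight, 1e-12)
    return np.where(weight > 0, centroid, np.nan)

# Synthetic example: a jet deflected linearly across the image (a vectoring angle).
ny, nx = 200, 400
y = np.arange(ny)[:, None]
x = np.arange(nx)[None, :]
fake_jet = np.exp(-((y - (100 + 0.2 * x)) ** 2) / (2 * 8.0 ** 2))
cl = jet_centerline(fake_jet)
slope = np.polyfit(np.arange(nx), cl, 1)[0]
print(f"estimated vectoring angle: {np.degrees(np.arctan(slope)):.1f} degrees")
```
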
2

Gomes, Paulo Veloso, António Marques, João Donga, Catarina Sá, António Correia, and Javier Pereira. "Adaptive Model for Biofeedback Data Flows Management in the Design of Interactive Immersive Environments." Applied Sciences 11, no. 11 (May 30, 2021): 5067. http://dx.doi.org/10.3390/app11115067.

Full text
Abstract:
The interactivity of an immersive environment arises from the relationship established between the user and the system. This relationship results in a set of data exchanges between human and technological actors. Real-time biofeedback devices allow the biodata generated by the user during the exhibition to be collected in real time. The analysis, processing, and conversion of these biodata into multimodal data make it possible to relate the stimuli to the emotions they trigger. This work describes an adaptive model for biofeedback data flow management used in the design of interactive immersive systems. An affective algorithm identifies the types of emotions felt by the user and their respective intensities. The mapping between stimuli and emotions creates a set of biodata that can be used as elements of interaction that readjust the stimuli generated by the system. The real-time interaction generated by the evolution of the user's emotional state and the stimuli generated by the system allows users to adapt their attitudes and behaviors to the situations they face.
APA, Harvard, Vancouver, ISO, and other styles
3

Bureneva, Olga, Mikhail Kupriyanov, and Nikolay Safyannikov. "Bit Streaming Processing Algorithms for Intelligent Hardware Converters." Applied Sciences 11, no. 11 (May 26, 2021): 4899. http://dx.doi.org/10.3390/app11114899.

Full text
Abstract:
The need to move primary data conversions close to the sensors, to the endpoints of monitoring systems, and into IoT terminal devices makes the development of new approaches to computing and the design of appropriate algorithms relevant. The article presents stream processing algorithms that provide functional transformations of signals presented simultaneously in bit-stream form (single-pulse streams, PWM signal streams) and as binary codes. In such algorithms, the computational process is based on discretization, pulse-frequency and pulse-width sweeps of codes, and the organization of parallel-serial processing. The suggested principles of algorithm organization rest on the idea that computation is treated not as a single calculation event but as a continuous process of result formation. The transition to the algorithmic representations proposed by the authors makes it possible to obtain universal behavioral descriptions, independent of the specific hardware on which they are implemented.
APA, Harvard, Vancouver, ISO, and other styles
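
The entry above treats computation on signals carried as single-pulse (bit) streams, with the result forming continuously rather than in one calculation step. The sketch below is a generic pulse-stream (stochastic-computing style) illustration of that idea, not the authors' hardware converters: two values encoded as pulse densities are multiplied by AND-ing their streams, and the running mean of the output stream converges to the product as more bits arrive. The stream length and random seed are arbitrary choices.

```python
import numpy as np

def to_pulse_stream(value, n_bits, rng):
    """Encode a value in [0, 1] as a random single-pulse (bit) stream whose
    average pulse density equals the value."""
    return (rng.random(n_bits) < value).astype(np.uint8)

def streaming_product(stream_a, stream_b):
    """Multiply two pulse-density-encoded values by AND-ing their streams.

    The running mean of the AND-ed stream converges to a*b, illustrating
    computation as a continuous process of result formation: every additional
    pulse refines the current estimate instead of producing it in one event.
    """
    anded = stream_a & stream_b
    return np.cumsum(anded) / np.arange(1, anded.size + 1)

rng = np.random.default_rng(0)
a, b = 0.6, 0.5
est = streaming_product(to_pulse_stream(a, 4096, rng), to_pulse_stream(b, 4096, rng))
print(f"after  128 bits: {est[127]:.3f}")
print(f"after 4096 bits: {est[-1]:.3f}  (exact product = {a * b})")
```
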
4

Ney, Peter, Lee Organick, Jeff Nivala, Luis Ceze, and Tadayoshi Kohno. "DNA Sequencing Flow Cells and the Security of the Molecular-Digital Interface." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (April 27, 2021): 413–32. http://dx.doi.org/10.2478/popets-2021-0054.

Full text
Abstract:
Abstract DNA sequencing is the molecular-to-digital conversion of DNA molecules, which are made up of a linear sequence of bases (A,C,G,T), into digital information. Central to this conversion are specialized fluidic devices, called sequencing flow cells, that distribute DNA onto a surface where the molecules can be read. As more computing becomes integrated with physical systems, we set out to explore how sequencing flow cell architecture can affect the security and privacy of the sequencing process and downstream data analysis. In the course of our investigation, we found that the unusual nature of molecular processing and flow cell design contributes to two security and privacy issues. First, DNA molecules are ‘sticky’ and stable for long periods of time. In a manner analogous to data recovery from discarded hard drives, we hypothesized that residual DNA attached to used flow cells could be collected and re-sequenced to recover a significant portion of the previously sequenced data. In experiments we were able to recover over 23.4% of a previously sequenced genome sample and perfectly decode image files encoded in DNA, suggesting that flow cells may be at risk of data recovery attacks. Second, we hypothesized that methods used to simultaneously sequence separate DNA samples together to increase sequencing throughput (multiplex sequencing), which incidentally leaks small amounts of data between samples, could cause data corruption and allow samples to adversarially manipulate sequencing data. We find that a maliciously crafted synthetic DNA sample can be used to alter targeted genetic variants in other samples using this vulnerability. Such a sample could be used to corrupt sequencing data or even be spiked into tissue samples, whenever untrusted samples are sequenced together. Taken together, these results suggest that, like many computing boundaries, the molecular-to-digital interface raises potential issues that should be considered in future sequencing and molecular sensing systems, especially as they become more ubiquitous.
APA, Harvard, Vancouver, ISO, and other styles
5

Chang, Tien Li, Chieh Fu Chang, Ya Wei Lee, Chun Hu Cheng, Cheng Ying Chou, and Meng Chi Huang. "Design of Self-Alignment Devices with Fluidic Self-Assembly for Flip Chip Packages in Batch Processing." Advanced Materials Research 918 (April 2014): 79–83. http://dx.doi.org/10.4028/www.scientific.net/amr.918.79.

Full text
Abstract:
An advanced LED multi-die-bonding integration using a fluidic self-assembly technique is proposed in the field of flip chip packages. Different from the conventional pick-and-place methods for single LED die bonding, the fluidic approach is a relatively new design and a batch process, which can achieve not only die self-alignment but also die self-assembly. Here, the LED die is a 1-mm-square chip with a thickness of 0.3 mm. Because of the small size of the LED die, the die-bonding process is still in need of a suitable approach and a breakthrough. In this study, our design of the fluidic self-assembly device is based on experimental tests and simulation results. The device design consists of gas-flow channels combined with magnetism. The width, height, and length of each gas-flow channel are 1.1 mm, 0.5 mm, and 1 cm, respectively. With the restriction of the channel width, this structure can effectively control die self-alignment. In addition, two circular structures in the channel form a flat rim to achieve die self-assembly. This fluidic mechanism can be useful for LED die self-alignment and self-assembly in future batch processing.
APA, Harvard, Vancouver, ISO, and other styles
6

Castro-Martin, Ana Pamela, Horacio Ahuett-Garza, Darío Guamán-Lozada, Maria F. Márquez-Alderete, Pedro D. Urbina Coronado, Pedro A. Orta Castañon, Thomas R. Kurfess, and Emilio González de Castilla. "Connectivity as a Design Feature for Industry 4.0 Production Equipment: Application for the Development of an In-Line Metrology System." Applied Sciences 11, no. 3 (February 1, 2021): 1312. http://dx.doi.org/10.3390/app11031312.

Full text
Abstract:
Industry 4.0 (I4.0) is built upon the capabilities of Internet of Things technologies that facilitate the recollection and processing of data. Originally conceived to improve the performance of manufacturing facilities, the field of application for I4.0 has expanded to reach most industrial sectors. To make the best use of the capabilities of I4.0, machine architectures and design paradigms have had to evolve. This is particularly important as the development of certain advanced manufacturing technologies has been passed from large companies to their subsidiaries and suppliers from around the world. This work discusses how design methodologies, such as those based on functional analysis, can incorporate new functions to enhance the architecture of machines. In particular, the article discusses how connectivity facilitates the development of smart manufacturing capabilities through the incorporation of I4.0 principles and resources that in turn improve the computing capacity available to machine controls and edge devices. These concepts are applied to the development of an in-line metrology station for automotive components. The impact on the design of the machine, particularly on the conception of the control, is analyzed. The resulting machine architecture allows for measurement of critical features of all parts as they are processed at the manufacturing floor, a critical operation in smart factories. Finally, this article discusses how the I4.0 infrastructure can be used to collect and process data to obtain useful information about the process.
APA, Harvard, Vancouver, ISO, and other styles
7

Ardeleanu, Mihǎiţǎ Nicolae, Simona Mihai, Ruxandra Vidu, Emil Mihai Diaconu, and Ileana Nicoleta Popescu. "Design of Microfluidic Device and Measurements of MPWM for Single Cell /Particle Manipulation." Scientific Bulletin of Valahia University - Materials and Mechanics 17, no. 16 (May 1, 2019): 39–43. http://dx.doi.org/10.2478/bsmm-2019-0006.

Full text
Abstract:
A microfluidic device designed for the measurement of fluidic flows of different viscosities, as required in systems for trapping and releasing cells/particles, has been developed. We use a new concept, Microfluidic Pulse Width Modulation (MPWM), for controlling the transport of a single cell/particle. Image processing supported the measurement of nano-hydraulic volumes and flow rates through innovative tracking methods, with the aim of building a flow sensor. The device opens a unique opportunity for single-cell studies, with applications in biomedical devices, tools for biochemistry, and analytical systems.
APA, Harvard, Vancouver, ISO, and other styles
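
The abstract above combines pulse-width-modulated (MPWM) flow control with image-based tracking for flow measurement. The snippet below is a minimal, hypothetical sketch of those two ideas only; the valve behaviour, channel dimensions, and flow values are invented for illustration and are not taken from the paper.

```python
# A generic sketch of (1) setting a mean flow rate by pulse-width modulating an
# on/off micro-valve and (2) estimating the delivered flow rate from image-based
# tracking of a particle or liquid front. All numbers are hypothetical.

def mpwm_duty_cycle(target_flow_ul_min, max_flow_ul_min):
    """Duty cycle (0..1) so that the time-averaged flow matches the target."""
    if not 0.0 <= target_flow_ul_min <= max_flow_ul_min:
        raise ValueError("target flow must lie between 0 and the fully-open flow")
    return target_flow_ul_min / max_flow_ul_min

def flow_from_tracking(displacement_um, dt_s, channel_area_um2):
    """Flow rate (uL/min) from the displacement of a tracked particle/front
    travelling with the mean velocity in a channel of known cross-section."""
    velocity_um_s = displacement_um / dt_s
    flow_um3_s = velocity_um_s * channel_area_um2
    return flow_um3_s * 60.0 / 1e9            # um^3/s -> uL/min (1 uL = 1e9 um^3)

duty = mpwm_duty_cycle(target_flow_ul_min=0.2, max_flow_ul_min=1.0)
measured = flow_from_tracking(displacement_um=150.0, dt_s=0.5, channel_area_um2=50 * 20)
print(f"duty cycle = {duty:.0%}, flow estimated from tracking = {measured:.3f} uL/min")
```
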
8

Catterton, Megan A., Alexander G. Ball, and Rebecca R. Pompano. "Rapid Fabrication by Digital Light Processing 3D Printing of a SlipChip with Movable Ports for Local Delivery to Ex Vivo Organ Cultures." Micromachines 12, no. 8 (August 20, 2021): 993. http://dx.doi.org/10.3390/mi12080993.

Full text
Abstract:
SlipChips are two-part microfluidic devices that can be reconfigured to change fluidic pathways for a wide range of functions, including tissue stimulation. Currently, fabrication of these devices at the prototype stage requires a skilled microfluidic technician, e.g., for wet etching or alignment steps. In most cases, SlipChip functionality requires an optically clear, smooth, and flat surface that is fluorophilic and hydrophobic. Here, we tested digital light processing (DLP) 3D printing, which is rapid, reproducible, and easily shared, as a solution for fabrication of SlipChips at the prototype stage. As a case study, we sought to fabricate a SlipChip intended for local delivery to live tissue slices through a movable microfluidic port. The device was comprised of two multi-layer components: an enclosed channel with a delivery port and a culture chamber for tissue slices with a permeable support. Once the design was optimized, we demonstrated its function by locally delivering a chemical probe to slices of hydrogel and to living tissue with up to 120 µm spatial resolution. By establishing the design principles for 3D printing of SlipChip devices, this work will enhance the ability to rapidly prototype such devices at mid-scale levels of production.
APA, Harvard, Vancouver, ISO, and other styles
9

Walsh, E. J., C. King, R. Grimes, A. Gonzalez, and D. Ciobanu. "Compatibility of Segmenting Fluids in Continuous-Flow Microfluidic PCR." Journal of Medical Devices 1, no. 4 (September 12, 2007): 241–45. http://dx.doi.org/10.1115/1.2812426.

Full text
Abstract:
Continuous flow offers notable advantages over batch processing for analytical applications such as gene expression profiling of biological material, which demands very high processing throughput. The technology of choice for future genetic analyzers will most likely use the polymerase chain reaction (PCR); therefore, high-throughput, high-speed PCR devices have raised enormous interest. Continuous-flow, biphasic PCR can meet these requirements, but segmenting/carrier fluids chemically compatible with the PCR are needed. The present paper compares several fluids in terms of compatibility with PCR and fluid dynamics in a continuous, two-phase-flow microfluidic device, and PCR efficiency was assessed quantitatively. The results represent the first step toward rational fluid design for biphasic continuous PCR.
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Xing Cun, Gang Fu, and Ren Long Li. "Design and Implementation of Telemetry Data Post-Processing System." Advanced Materials Research 989-994 (July 2014): 4165–68. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4165.

Full text
Abstract:
This paper presents a system for the post-hoc analysis of telemetry data. It introduces the design ideas and the composition of the system, and discusses in detail the design and implementation of the key technologies involved, namely the demodulation processing module and the frame processing module. The system records telemetry data and performs post-hoc analysis on it, addressing the high maintenance costs of current telemetry signal reception and playback devices, and therefore offers some economic benefit.
APA, Harvard, Vancouver, ISO, and other styles
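
The telemetry post-processing system above includes a demodulation module and a frame module that operate on recorded data. As a generic illustration of the frame-handling step only (the 0xEB90 sync word and the 64-byte frame length are hypothetical placeholders, not values from the paper), the sketch below scans a recorded byte stream for a sync pattern, slices out fixed-length frames, and resynchronizes after dropouts.

```python
# Generic sketch of the "frame module" side of telemetry post-processing:
# locate a sync word in a recorded byte stream and slice out fixed-length frames.

SYNC = bytes.fromhex("EB90")   # hypothetical frame sync pattern
FRAME_LEN = 64                 # hypothetical bytes per frame, including the sync word

def extract_frames(recording: bytes, sync: bytes = SYNC, frame_len: int = FRAME_LEN):
    """Yield fixed-length frames that start with the sync pattern."""
    i = recording.find(sync)
    while i != -1 and i + frame_len <= len(recording):
        yield recording[i:i + frame_len]
        # Tolerate dropouts: if the next sync is not exactly one frame away,
        # resynchronize at wherever it is actually found.
        i = recording.find(sync, i + frame_len)

# Tiny self-test: two well-formed frames separated by junk bytes.
payload = bytes(range(FRAME_LEN - len(SYNC)))
stream = b"\x00\x11" + SYNC + payload + b"\xff\xff\xff" + SYNC + payload
print(sum(1 for _ in extract_frames(stream)), "frames recovered")
```
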
More sources

Dissertations / Theses on the topic "Fluidic devices – Design – Data processing"

1

Li, Chung-lun (李仲麟). "Conceptual design of single and multiple state mechanical devices: an intelligent CAD approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31237332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Almaghrawi, Ahmed Almaamoun. "Collaborative design in electromagnetics." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103363.

Full text
Abstract:
We present a system architecture and a set of control techniques that allow heterogeneous software design tools to collaborate intelligently and automatically. One of their distinguishing features is the ability to perform concurrent processing. Systems based on this architecture are able to effectively solve large electromagnetic analysis problems, particularly those that involve loose coupling between several areas of physics. The architecture can accept any existing software analysis tool, without requiring any modification or customization of the tool. This characteristic is produced in part by our use of a neutral virtual representation for storing problem data, including geometry and material definitions. We construct a system based on this architecture, using several circuit and finite-element analysis tools, and use it to perform electromagnetic analyses of several different devices. Our results show that our architecture and techniques do allow practical problems to be solved effectively by heterogeneous tools.
APA, Harvard, Vancouver, ISO, and other styles
3

Ray, Subhasis. "Multi-objective optimization of an interior permanent magnet motor." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116021.

Full text
Abstract:
In recent years, due to growing environmental awareness regarding global warming, green cars, such as hybrid electric vehicles, have gained a lot of importance. With the decreasing cost of rare earth magnets, brushless permanent magnet motors, such as the Interior Permanent Magnet Motor, have found usage as part of the traction drive system in these types of vehicles. As a design issue, building a motor with a performance curve that suits both city and highway driving has been treated in this thesis as a multi-objective problem; matching specific points of the torque-speed curve to the desired performance output. Conventionally, this has been treated as separate problems or as a combination of several individual problems, but doing so gives little information about the trade-offs involved. As a means of identifying the compromising solutions, we have developed a stochastic optimizer for tackling electromagnetic device optimization and have also demonstrated a new innovative way of studying how different design parameters affect performance.
APA, Harvard, Vancouver, ISO, and other styles
4

Wong, Ing Hoo. "Design of a realtime high speed recognizer for unconstrained handprinted alphanumeric characters." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25135.

Full text
Abstract:
This thesis presents the design of a recognizer for unconstrained handprinted alphanumeric characters. The design is based on a thinning process that is capable of producing thinned images with well defined features that are considered essential for character image description and recognition. By choosing the topological points of the thinned ('line') character image as these desired features, the thinning process achieves not only a high degree of data reduction but also transforms a binary image into a discrete form of line drawing that can be represented by graphs. As a result powerful graphical analysis techniques can be applied to analyze and classify the image. The image classification is performed in two stages. Firstly, a technique for identifying the topological points in the thinned image is developed. These topological points represent the global features of the image and because of their invariance to elastic deformations, they are used for image preclassification. Preclassification results in a substantial reduction in the entropy of the input image. The subsequent process can concentrate only on the differentiation of images that are topologically equivalent. In the preclassifier simple logic operations localized to the immediate neighbourhood of each pixel are used. These operations are also highly independent and easy to implement using VLSI. A graphical technique for image extraction and representation called the chain coded digraph representation is introduced. The technique uses global features such as nodes and the Freeman's chain codes for digital curves as branches. The chain coded digraph contains all the information that is present in the thinned image. This avoids using the image feature extraction approach for image description and data reduction (a difficult process to optimize) without sacrificing speed or complexity. After preclassification, a second stage of the recognition process analyses the chain coded digraph using the concept of attributed relational graph (ARG). ARG representation of the image can be obtained readily through simple transformations or rewriting rules from the chain coded digraph. The ARG representation of an image describes the shape primitives in the image and their relationships. Final classification of the input image can be made by comparing its ARG with the ARGs of known characters. The final classification involves only the comparison of ARGs of a predetermined topology. This information is crucial to the design of a matching algorithm called the reference guided inexact matching procedure, designed for high speed matching of character image ARGs. This graph matching procedure is shown to be much faster than other conventional graph matching procedures. The designed recognizer is implemented in Pascal on the PDP11/23 and VAX 11/750 computer. Test using Munson's data shows a high recognition rate of 91.46%. However, the recognizer is designed with the aim of an eventual implementation using VLSI and also as a basic recognizer for further research in reading machines. Therefore its full potential is yet to be realized. Nevertheless, the experiments with Munson's data illustrates the effectiveness of the design approach and the advantages it offers as a basic system for future research.
Faculty of Applied Science, Department of Electrical and Computer Engineering.
APA, Harvard, Vancouver, ISO, and other styles
5

Parrish, Janet Yvonne. "Using the computer to motivate at-risk students as writers." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Beu, Jesse Garrett. "Design of heterogeneous coherence hierarchies using manager-client pairing." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47710.

Full text
Abstract:
Over the past ten years, the architecture community has witnessed the end of single-threaded performance scaling and a subsequent shift in focus toward multicore and manycore processing. While this is an exciting time for architects, with many new opportunities and design spaces to explore, this brings with it some new challenges. One area that is especially impacted is the memory subsystem. Specifically, the design, verification, and evaluation of cache coherence protocols becomes very challenging as cores become more numerous and more diverse. This dissertation examines these issues and presents Manager-Client Pairing as a solution to the challenges facing next-generation coherence protocol design. By defining a standardized coherence communication interface and permissions checking algorithm, Manager-Client Pairing enables coherence hierarchies to be constructed and evaluated quickly without the high design-cost previously associated with hierarchical composition. Further, Manager-Client Pairing also allows for verification composition, even in the presence of protocol heterogeneity. As a result, this rapid development of diverse protocols is ensured to be bug-free, enabling architects to focus on performance optimization, rather than debugging and correctness concerns, while comparing diverse coherence configurations for use in future heterogeneous systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Kunzi, Tekweme. "Design and construction of a microwave cavity for oil-water emulsion separation." Thesis, 2012. http://hdl.handle.net/10210/4803.

Full text
Abstract:
M. Tech.
The mechanism of microwave-assisted destabilisation of waste oil and water emulsions is analysed. The broad overall features of the microwave system for breaking emulsions are established, and the design specifications of each constituent component are set. A suitable type of microwave cavity is then selected; after analysis, the multimode cavity proves to be the appropriate choice. The overall dimensions of the cavity are established and used to visualize the field pattern in the treatment chamber. MATLAB routines are written to enhance the visualization of the field pattern inside the microwave cavity: the individual excited modes are visualised first, followed by the resultant electric field. The detailed drawings are then finalised and the microwave system constituents selected. The cavity is manufactured and tested for leakage, and the test shows that it is watertight.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhu, Xiaoliang. "Systems Engineering for Silicon Photonic Devices." Thesis, 2015. https://doi.org/10.7916/D87D2TRW.

Full text
Abstract:
The increasing integration of digital information with our daily lives has led to the rise of big data, cloud computing, and the internet of things. The growth in these categories will lead to an exponential increase in the required capacity for data centers and high performance computation. Meanwhile, due to bottlenecks in data access caused by the limited energy and bandwidth scalability of electrical interconnects, computational speedup can no longer scale with demand. A better solution is necessary in order to increase computational performance and reduce the carbon footprint of our digital future. People have long thought of photonic interconnects, which can offer higher bandwidth, greater energy efficiency, and orders-of-magnitude distance scalability compared to electrical interconnects, as a solution to the data access bottleneck in chip, board, and datacenter scale networks. Over the past three decades we have seen impressive growth of photonic technology from theoretical predictions to high-performance commercially available devices. However, the dream of an all-optical interconnection network for use in CPU, Memory, and rack-to-rack datacenter interconnects is not yet realized. Many challenges and obstacles still have to be addressed. This work investigates these challenges and describe some of the ways to overcome them. First we will first examine the pattern sensitivity of microring modulators, which are likely to be found as the first element in an optical interconnect. My work will illustrate the advantage of using depletion mode modulators compared to injection mode modulators as the number of consecutive symbols in the data pattern increases. Next we will look at the problem of thermal initialization for microring demultiplexers near the output of the optical interconnect. My work demonstrates the fastest achieved initialization speed to-date for a microring based demultiplexer. I will also explore an thermal initialization and control method for microrings based on temperature measurement using a pn-junction. Finally, we will look at how to control and initialize microring and MZI based optical switch fabrics, which is the second element found in a optical interconnect. Work here will show the possibility of switching high-speed WDM datastreams through microring based switches, as well as methods to deal with the complexities inherent in control and initialization of high-radix switch topologies. Through these demonstrations I hope to show that the challenges facing optical interconnects, although very real, are surmountable using reasonable engineering efforts.
APA, Harvard, Vancouver, ISO, and other styles
9

Labay, Vladimir A. "Computer-aided design of passive microwave components nonstandard rectangular waveguide technology." Thesis, 1995. http://hdl.handle.net/1828/6605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"On-line Chinese character recognition." 1997. http://library.cuhk.edu.hk/record=b1962412.

Full text
Abstract:
by Jian-Zhuang Liu.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (p. 183-196).
Microfiche. Ann Arbor, Mich.: UMI, 1998. 3 microfiches ; 11 x 15 cm.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Fluidic devices – Design – Data processing"

1

Grätz, Florian Manfred. Teilautomatische Generierung von Stromlauf- und Fluidplänen für mechatronische Systeme. München: Herbert Utz, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wyatt, Clair L. Radiometric system design. New York: Macmillan, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

The design warrior's guide to FPGAs: Devices, tools, and flows. Boston: Newnes/Elsevier, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Klein, Lawrence A. Millimeter-wave and infrared multisensor design and signal processing. Boston: Artech House, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pellerin, David. Digital design using ABEL. Englewood Cliffs, N.J: PTR Prentice Hall, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Electronic devices and circuits using MICRO-CAP III. New York: Merrill, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Berube, R. H. Electronic devices and circuits using MICRO-CAP III. New York: Merrill, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Berube, R. H. Electronic devices and circuits using MICRO-CAP III. New York: Merrill, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hitschfeld, Nancy. Grid generation for three-dimensional non-rectangular semiconductor devices. Konstanz: Hartung-Gorre, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Electronic devices and circuits using MICRO-CAP II. New York: Merrill, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Fluidic devices – Design – Data processing"

1

Murmann, Patrick. "Eliciting Design Guidelines for Privacy Notifications in mHealth Environments." In Research Anthology on Privatizing and Securing Data, 1909–28. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8954-0.ch093.

Full text
Abstract:
The possibilities of employing mobile health (mhealth) devices for the purpose of self-quantification and fitness tracking are increasing; yet few users of online mhealth services possess proven knowledge of how their personal data are processed once the data have been disclosed. Ex post transparency-enhancing tools (TETs) can provide such insight and guide users in making informed decisions with respect to intervening with the processing of their personal data. At present, however, there are no suitable guidelines that aid designers of TETs in implementing privacy notifications that reflect their recipients' needs in terms of what they want to be notified about and the level of guidance required to audit their data effectively. Based on an analysis of gaps related to TETs, the findings of a study on privacy notification preferences, and the findings on notifications and privacy notices discussed in the literature, this paper proposes a set of guidelines for the human-centred design of privacy notifications that facilitate ex post transparency.
APA, Harvard, Vancouver, ISO, and other styles
2

Ciufudean, Calin. "Innovative Formalism for Biological Data Analysis." In Encyclopedia of Information Science and Technology, Fourth Edition, 1814–24. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2255-3.ch158.

Full text
Abstract:
Modern medical devices involve information technology (IT) based on electronic structures for data and signal sensing and gathering, data and signal transmission, as well as data and signal processing, in order to assist and help the medical staff to diagnose, cure, and monitor the evolution of patients. By focusing on biological signal processing we may notice that numerical processing of the information delivered by sensors has a significant importance for a fair and optimum design and manufacture of modern medical devices. We consider for this approach fuzzy sets as a formalism for the analysis of biological signal processing, and we propose to accomplish this goal by developing fuzzy operators for filtering the noise of biological signal measurements. We exemplify this approach on neurological measurements performed with an electro-encephalograph (EEG).
APA, Harvard, Vancouver, ISO, and other styles
3

Ciufudean, Calin. "Innovative Formalism for Biological Data Analysis." In Advances in Computer and Electrical Engineering, 390–402. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7598-6.ch029.

Full text
Abstract:
Modern medical devices involve information technology (IT) based on electronic structures for data and signals sensing and gathering, data and signals transmission, as well as data and signals processing in order to assist and help the medical staff to diagnose, cure, and to monitor the evolution of patients. By focusing on biological signals processing we may notice that numerical processing of information delivered by sensors has a significant importance for a fair and optimum design and manufacture of modern medical devices. The authors consider for this approach fuzzy set as a formalism of analysis of biological signals processing and they propose to accomplish this goal by developing fuzzy operators for filtering the noise of biological signals measurement. The authors exemplify this approach on neurological measurements performed with an electro-encephalograph (EEG).
APA, Harvard, Vancouver, ISO, and other styles
4

Lagerlund, Terrence D. "Digital Signal Processing." In Clinical Neurophysiology, 222–35. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190259631.003.0015.

Full text
Abstract:
Digital computers can perform types of signal processing not readily available with analog devices, such as ordinary electrical circuits. This includes making the process of obtaining, storing, retrieving, and viewing clinical neurophysiology data easier; aiding in extracting information from waveforms that is not readily obtainable with visual analysis alone; and improving quantification of key features of waveforms. These processes are useful in accurate clinical diagnosis of electroencephalographic (EEG), electromyographic (EMG), and evoked potential studies, and they also lend themselves to serial comparisons between studies performed on the same subject at different times or between two groups of subjects in scientific investigations. Digital computers may also partially automate the interpretation of clinical neurophysiology studies. This chapter reviews the principles of digitization, the design of digitally based instruments for clinical neurophysiology, and several common uses of digital processing, including averaging, digital filtering, and some types of time-domain and frequency-domain analysis.
APA, Harvard, Vancouver, ISO, and other styles
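
The chapter above surveys averaging and digital filtering of digitized neurophysiology signals. The following sketch is not taken from the chapter (the sampling rate, trial count, and amplitudes are invented); it simply illustrates the two classic operations it refers to: time-locked epoch averaging, which reduces uncorrelated noise by roughly the square root of the number of trials, and a simple moving-average FIR low-pass filter.

```python
import numpy as np

def epoch_average(signal, trigger_indices, epoch_len):
    """Average fixed-length epochs time-locked to triggers (evoked-potential style).

    Averaging N epochs leaves the time-locked response intact while reducing
    uncorrelated background noise by roughly a factor of sqrt(N).
    """
    epochs = [signal[i:i + epoch_len] for i in trigger_indices
              if i + epoch_len <= len(signal)]
    return np.mean(epochs, axis=0)

def moving_average(signal, width):
    """A very simple FIR low-pass filter: the running mean over `width` samples."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Synthetic demonstration: a small evoked response buried in noise.
rng = np.random.default_rng(1)
fs, epoch_len, n_trials = 1000, 300, 200            # Hz, samples, trials (hypothetical)
t = np.arange(epoch_len) / fs
response = 2e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)  # 2 uV peak at 100 ms
triggers = np.arange(n_trials) * epoch_len
raw = np.tile(response, n_trials) + 10e-6 * rng.standard_normal(n_trials * epoch_len)
avg = moving_average(epoch_average(raw, triggers, epoch_len), width=5)
print(f"peak of averaged response: {avg.max() * 1e6:.2f} uV (true peak 2 uV)")
```
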
5

Mahmud, Umar, Shariq Hussain, Arif Jamal Malik, Sherjeel Farooqui, and Nazir Ahmed Malik. "Realizing IoE for Smart Service Delivery." In Smart Systems Design, Applications, and Challenges, 186–215. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2112-0.ch010.

Full text
Abstract:
The widespread use of numerous hand-held smart devices has opened new avenues in computing. The Internet of Things (IoT) is the next big thing, driving the fourth industrial revolution. Coupling IoT with data collection, storage, and processing leads to the Internet of Everything (IoE). This work outlines the concept of the smart device and presents an IoE ecosystem. Characteristics of the IoE ecosystem, together with a review of contemporary research, are also presented, and a comparison table summarizes the research findings. To realize IoE, an object-oriented context-aware model based on the Unified Modelling Language (UML) is presented. A case study of a museum guide system illustrates how IoE can be implemented. The contributions of this chapter include a review of contemporary IoE systems, a detailed comparison, a context-aware IoE model, and a case study to review the concepts.
APA, Harvard, Vancouver, ISO, and other styles
6

Alkadi, Ihssan. "Assessing Security with Regard to Cloud Applications in STEM Education." In Advances in Educational Technologies and Instructional Design, 260–76. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9924-3.ch017.

Full text
Abstract:
There are many steps involved with securing a cloud system and its applications (SaaS) and developed ones in (PaaS). Security and privacy issues represent the biggest concerns to moving services to external clouds (Public). With cloud computing, data are stored and delivered across the Internet. The owner of the data does not have control or even know where their data are being stored. Additionally, in a multi-tenant environment, it may be very difficult for a cloud service provider to provide the level of isolation and associated guarantees that are possible with an environment dedicated to a single customer. Unfortunately, to develop a security algorithm that outlines and maps out the enforcement of a security policy and procedure can be a daunting task. A good security algorithm presents a strategy to counter the vulnerabilities in a cloud system. This chapter covers the complete overview, comparative analysis of security methods in Cloud Applications in STEM Education and the introduction of a new methodology that will enforce cloud computing security against breaches and intrusions. Much light will be shed on existing methodologies of security on servers used for cloud applications in STEM education and storage of data, and several methods will be presented in addition to the newly developed method of security in cloud-based servers, such as the MIST (Alkadi). Not only can cloud networks be used to gather sensitive information on multiple platforms, also there are needs to prevent common attacks through weak password recovery, retrieval, authentication, and hardening systems; otherwise hackers will spread cyber mayhem. Discussion of current security issues and algorithms in a real world will be presented. Different technologies are being created and in constant competition to meet the demands of users who are generally “busy”. The selling point of these technologies is the ability to address these demands without adding more to any workloads. One of the demands often discussed is that users want to have their digital information accessible from anywhere at any time. This information includes documents, audio libraries, and more. Users also demand the ability to manage, edit and update this information regardless of physical location. Somewhat recently, mobile devices such as laptops, tablets, and smartphones have provided these abilities. This is no small feat as vendors and providers have reduced the size of these devices to increase mobility. However, as the amount of personal information that users are wanting to access has grown exponentially, manipulation and storage of it require more capable devices. To meet increased demands, increasing the capabilities of mobile devices may be impractical. Making mobile devices more powerful without technological advancement would require that the device be larger and use more resources such as battery life and processing power to function properly. Storing all of a user's information on a mobile device that travels everywhere also adds vulnerability risks. The best technical solution to having a user's information accessible is some sort of online storage where there is the convenience to store, manipulate and retrieve data. This is one of the most practical applications for the concept of cloud computing in STEM education. As storage capabilities and Internet bandwidth has increased, so has the amount of personal data that users store online. And today, the average user has billions of bytes of data online. 
Access is everywhere and whenever is needed. As everyone started doing so, people want their data safe and secure to maintain their privacy. As the user base grew in size, the number of security issues of the personal data started to become increasingly important. As soon as someone's data are in the remote server, unwanted users or “hackers” can have many opportunities to compromise the data. As the online server needs to be up and running all the time, the only way to secure the cloud server is by using better passwords by every user. By the same token, the flaws in the password authentication and protection system can also help unwanted users to get their way to other people's personal data. Thus, the password authentication system should also be free from any loopholes and vulnerabilities.
APA, Harvard, Vancouver, ISO, and other styles
7

R., Stephen, Ayshwarya B., R. Shantha Mary Joshitta, and Hubert B. J. Shanthan. "Internet of Things (IoT)." In Cases on Edge Computing and Analytics, 55–72. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4873-8.ch003.

Full text
Abstract:
Naturally, an IoT network consists of constrained devices with limited power, storage, and processing capabilities. However, IoT faces internal and external challenges, for instance security issues, connectivity complexity, and data management. In such a setting, IoT cannot be supported by a heavyweight protocol suite, so developing a lightweight protocol suite is the challenge for the IoT environment. This chapter describes protocols that fulfill these IoT requirements. In fact, an IoT environment consists of a huge number of devices; controlling all of these devices is one issue, and data analytics across different devices is also considered a major challenge. Under these constraints, this chapter focuses on designing a standard protocol suite for the IoT environment.
APA, Harvard, Vancouver, ISO, and other styles
8

El-Nasr, Magy Seif, and Athanasios V. Vasilakos. "Ambient Intelligence on the Dance Floor." In Transdisciplinary Advancements in Cognitive Mechanisms and Human Information Processing, 116–33. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-553-7.ch007.

Full text
Abstract:
With the evolution of intelligent devices, sensors, and ambient intelligent systems, it is not surprising to see many research projects starting to explore the design of intelligent artifacts in the area of art and technology; these projects take the form of art exhibits, interactive performances, and multi-media installations. In this paper, we seek to propose a new architecture for an ambient intelligent dance performance space. Dance is an art form that seeks to explore the use of gesture and body as means of artistic expression. This paper proposes an extension to the medium of expression currently used in dance—we seek to explore the use of the dance environment itself, including the stage lighting and music, as a medium for artistic reflection and expression. To materialize this vision, the performance space will be augmented with several sensors: physiological sensors worn by the dancers, as well as pressure sensor mats installed on the floor to track dancers’ movements. Data from these sensors will be passed into a three layered architecture: a layer analyzes sensor data collected from physiological and pressure sensors. Another layer intelligently adapts the lighting and music to portray the dancer’s physiological state given artistic patterns authored through specifically developed tools; and, lastly, a layer for presenting the music and lighting changes in the physical dance environment.
APA, Harvard, Vancouver, ISO, and other styles
9

Acharya, Subrata. "PITWALL." In Situational Awareness in Computer Network Defense, 320–43. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0104-8.ch018.

Full text
Abstract:
The continuous growth in the Internet’s size, the amount of data traffic, and the complexity of processing this traffic give rise to new challenges in building high performance network devices. Such an exponential growth, coupled with the increasing sophistication of attacks, is placing stringent demands on the performance of network Information Systems. These challenges require new designs, architecture, and algorithms for raising situational awareness, and hence, providing performance improvements on current network devices and cyber systems. In this research, the author focuses on the design of architecture and algorithms for optimization of network defense systems, specifically firewalls, to aid not only adaptive and real-time packet filtering but also fast content based routing (differentiated services) for today’s data-driven networks.
APA, Harvard, Vancouver, ISO, and other styles
10

Y. Kariduraganavar, Mahadevappa, Radha V. Doddamani, Balachandar Waddar, and Saidi Reddy Parne. "Nonlinear Optical Responsive Molecular Switches." In Nonlinear Optics - From Solitons to Similaritons. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.92675.

Full text
Abstract:
Nonlinear optical (NLO) materials have gained much attention during the last two decades owing to their potentiality in the field of optical data storage, optical information processing, optical switching, and telecommunication. NLO responsive macroscopic devices possess extensive applications in our day to day life. Such devices are considered as assemblies of several macroscopic components designed to achieve specific functions. The extension of this concept to the molecular level forms the basis of molecular devices. In this context, the design of NLO switches, that is, molecules characterized by their ability to alternate between two or more chemical forms displaying contrasts in one of their NLO properties, has motivated many experimental and theoretical works. Thus, this chapter focuses on the rational design of molecular NLO switches based on stimuli and materials with extensive examples reported in the literature. The factors affecting the efficiency of optical switches are discussed. The device fabrication of optical switches and their efficiency based on the optical switch, internal architecture, and substrate materials are described. In the end, applications of switches and future prospectus of designing new molecules with references are suitably discussed.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Fluidic devices – Design – Data processing"

1

Shevchenko, Viktor, Oleksiy Bychkov, Alina Shevchenko, and Denis Berestov. "Dynamic data processing for emergency monitoring by mobile devices." In 2017 XIIIth International Conference on Perspective Technologies and Methods in MEMS Design (MEMSTECH). IEEE, 2017. http://dx.doi.org/10.1109/memstech.2017.7937547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Zhengbing, and Vitaly Deibuk. "New design of reversible/quantum devices for ternary arithmetic." In 2016 IEEE First International Conference on Data Stream Mining & Processing (DSMP). IEEE, 2016. http://dx.doi.org/10.1109/dsmp.2016.7583512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lushpa, Igor, Konstantin Novikov, and Sergey Polesskiy. "The Reliability Characteristics of the Data Processing Centers Cooling Systems." In 2019 International Seminar on Electron Devices Design and Production (SED). IEEE, 2019. http://dx.doi.org/10.1109/sed.2019.8798415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Greisukh, Grigoriy I., and Sergei A. Stepanov. "Design of optical systems with diffractive and gradient-index elements." In Coherent Measuring and Data Processing Methods and Devices: Selected Papers, edited by Valery I. Mandrosov. SPIE, 1993. http://dx.doi.org/10.1117/12.155066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

van der Bent, Vincent, Amin Amin, and Timothy Jadot. "The Application of Data Validation and Reconciliation to Upstream Production Measurement Integration and Surveillance – Field Study." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/205934-ms.

Full text
Abstract:
Abstract With the advent of increased measurements and instrumentation in oil and gas upstream production infrastructure; in the wellbore, in subsea and on surface processing facilities, data integration from all sources can be used more effectively in producing consistent and robust production profiles. The proposed data integration methodology aims at identifying the sources of measurement and process errors and removing them from the system. This ensures quasi error-free data when driving critical applications such as well rate determination from virtual and multiphase meters, and production allocation schemes, to name few. Confidence in the data is further enhanced by quantifying the uncertainty of each measured and unmeasured variable. Advanced Data Validation and Reconciliation (DVR) methodology uses data redundancy to correct measurements. As more data is ingested in a modeling system the statistical aspect attached to each measurement becomes an important source of information to further improve its precision. DVR is an equation-based calculation process. It combines data redundancy and conservation laws to correct measurements and convert them into accurate and reliable information. The methodology is used in upstream oil & gas, refineries and gas plants, petrochemical plants as well as power plants including nuclear. DVR detects faulty sensors and identifies degradation of equipment performance. As such, it provides more robust inputs to operations, simulation, and automation processes. The DVR methodology is presented using field data from a producing offshore field. The discussion details the design and implementation of a DVR system to integrate all available field data from the wellbore and surface facilities. The integrated data in this end-to-end evaluation includes reservoir productivity parameters, downhole and wellhead measurements, tuned vertical lift models, artificial lift devices, fluid sample analysis and thermodynamic models, and top facility process measurements. The automated DVR iterative runs solve all conservation equations simultaneously when determining the production flowrates "true values" and their uncertainties. The DVR field application is successfully used in real-time to ensure data consistency across a number of production tasks including the continual surveillance of the critical components of the production facility, the evaluation and validation of well tests using multiphase flow metering, the virtual flow metering of each well, the modeling of fluid phase behavior in the well and in the multistage separation facility, and performing the back allocation from sales meters to individual wells.
APA, Harvard, Vancouver, ISO, and other styles
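
The paper above applies data validation and reconciliation (DVR): redundant measurements are corrected, weighted by their uncertainties, so that conservation equations are satisfied exactly, and the reconciled values come with reduced uncertainties. Below is a minimal sketch of the standard linear weighted-least-squares formulation (a textbook form, not the vendor's implementation; the well-rate numbers are hypothetical).

```python
import numpy as np

def reconcile(measured, stdev, A):
    """Classical linear data validation & reconciliation (weighted least squares).

    Finds the reconciled values closest to the measurements (weighted by
    1/stdev^2) that exactly satisfy the linear conservation constraints A @ x = 0.
    Returns the reconciled values and their reduced standard deviations.
    """
    x = np.asarray(measured, dtype=float)
    Sigma = np.diag(np.asarray(stdev, dtype=float) ** 2)
    K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)   # gain matrix
    x_hat = x - K @ (A @ x)                            # enforce the balances
    Sigma_hat = Sigma - K @ A @ Sigma                  # reconciled covariance
    return x_hat, np.sqrt(np.diag(Sigma_hat))

# Toy example (hypothetical numbers): two well rates that should sum to the
# export-meter rate, i.e. q1 + q2 - q_export = 0.
A = np.array([[1.0, 1.0, -1.0]])
measured = [510.0, 980.0, 1460.0]      # Sm3/d, inconsistent by 30 Sm3/d
stdev    = [ 25.0,  40.0,   10.0]      # the export meter is the most accurate
x_hat, s_hat = reconcile(measured, stdev, A)
print("reconciled rates:", np.round(x_hat, 1))
print("reduced 1-sigma: ", np.round(s_hat, 1))
```

Note how the imbalance is distributed in proportion to each meter's variance, so the export meter, with the smallest stated uncertainty, is adjusted least.
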
6

Kelly, Jesse. "GPU-Accelerated Simulation of Two-Phase Incompressible Fluid Flow Using a Level-Set Method for Interface Capturing." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-13330.

Full text
Abstract:
Computational fluid dynamics has seen a surge of popularity as a tool for visual effects animators over the past decade since Stam’s seminal Stable Fluids paper [1]. Complex fluid dynamics simulations can often be prohibitive to run due to the time it takes to perform all of the necessary computations. This project proposes an accelerated two-phase incompressible fluid flow solver implemented on programmable graphics hardware. Modern graphics-processing units (GPUs) are highly parallel computing devices, and in problems with a large potential for parallel computation the GPU may vastly out-perform the CPU. This project will use the potential parallelism in the solution of the Navier-Stokes equations in writing a GPU-accelerated flow solver. NVIDIA’s Compute-Unified-Device-Architecture (CUDA) language will be used to program the parallel portions of the solver. CUDA is a C-like language introduced by the NVIDIA Corporation with the goal of simplifying general-purpose computing on the GPU. CUDA takes advantage of data-parallelism by executing the same or near-same code on different data streams simultaneously, so the algorithms used in the flow solver will be designed to be highly data-parallel. Most finite difference-based fluid solvers for computer graphics applications have used the traditional staggered marker-and-cell (MAC) grid, introduced by Harlow and Welsh [2]. The proposed approach improves upon the programmability of solvers such as these by using a non-staggered (collocated) grid. An efficient technique is implemented to smooth the pressure oscillations that often result from the use of a collocated grid in the simulation of incompressible flows. To be appropriate for visual effects use, a fluid solver must have some means of tracking fluid interfaces in order to have a renderable fluid surface. This project uses the level-set method [3] for interface tracking. The level set is treated as a scalar property, and so its propagation in time is computed using the same transport algorithm used in the main fluid flow solver.
APA, Harvard, Vancouver, ISO, and other styles
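
The paper above couples a CUDA flow solver with level-set interface capturing on a collocated grid. Keeping to Python like the other examples in this list, the sketch below is only a heavily simplified NumPy illustration of one ingredient, a first-order upwind advection step for the level-set field under a prescribed velocity with periodic edges; it omits the pressure solve, reinitialization, and all GPU aspects, and is not the paper's implementation.

```python
import numpy as np

def advect_level_set(phi, u, v, dx, dt):
    """One first-order upwind step of d(phi)/dt + u*d(phi)/dx + v*d(phi)/dy = 0.

    phi  : 2D level-set field (phi = 0 marks the fluid interface)
    u, v : velocity components on the same collocated, square grid (dy = dx)
    Edges are handled periodically via np.roll, which is fine for this demo.
    """
    dpx_m = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in x
    dpy_m = (phi - np.roll(phi, 1, axis=0)) / dx
    dpy_p = (np.roll(phi, -1, axis=0) - phi) / dx
    # Upwinding: use the one-sided difference that lies upstream of the flow.
    phi_x = np.where(u > 0, dpx_m, dpx_p)
    phi_y = np.where(v > 0, dpy_m, dpy_p)
    return phi - dt * (u * phi_x + v * phi_y)

# Demo: a circular interface carried to the right by a uniform velocity field.
n = 128
dx = 1.0 / n
dt = 0.5 * dx                # CFL number 0.5 for |u| = 1
y, x = np.mgrid[0:n, 0:n] * dx
phi = np.sqrt((x - 0.3) ** 2 + (y - 0.5) ** 2) - 0.15   # signed distance to a circle
u = np.full_like(phi, 1.0)
v = np.zeros_like(phi)
for _ in range(100):         # total time ~0.39, so the centre moves about 0.39
    phi = advect_level_set(phi, u, v, dx, dt)
centre_col = int(np.argmin(phi[n // 2]))   # most negative phi ~ circle centre
print(f"circle centre moved from x = 0.30 to x ~= {x[n // 2, centre_col]:.2f}")
```
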
7

Kim, Nam Sung, and Pankaj Mehra. "Practical Near-Data Processing to Evolve Memory and Storage Devices into Mainstream Heterogeneous Computing Systems." In DAC '19: The 56th Annual Design Automation Conference 2019. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3316781.3323484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vijayaraghavan, Athulan, Stephen V. Jayanathan, Moneer M. Helu, and David A. Dornfeld. "Design and Fabrication of a Roller Imprinting Device for Microfluidic Device Manufacturing." In ASME 2008 International Manufacturing Science and Engineering Conference collocated with the 3rd JSME/ASME International Conference on Materials and Processing. ASMEDC, 2008. http://dx.doi.org/10.1115/msec_icmp2008-72202.

Full text
Abstract:
Microfluidic devices are gaining popularity in a variety of applications, ranging from molecular biology to bio-defense. However, the widespread adoption of this technology is constrained by the lack of efficient and cost-effective manufacturing processes. This paper focuses on the roller imprinting process, which is being developed to rapidly and inexpensively fabricate micro-fluidic devices. In this process, a cylindrical roll with raised features on its surface creates imprints by rolling over a fixed workpiece substrate and mechanically deforming it. Roller imprinting aims to replace processes that were developed for laboratory scale prototyping which tend to not be scalable and have high equipment requirements and overheads. We discuss the limitations of PDMS soft lithography in large-scale manufacture of microfluidic devices. We also discuss the design, fabrication, and testing of a simple roller imprinting device. This imprinter has been developed based on the principles of precision machine design and is implemented using a three-axis machine tool for actuation and position measurement. A framework for the micro-machining of precision imprint rolls is also presented.
APA, Harvard, Vancouver, ISO, and other styles
9

Gonzalez-Domenzain, Walter, and Ashwin A. Seshia. "Rapid Prototyping of PDMS Devices With Applications to Protein Crystallization." In ASME 2007 5th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2007. http://dx.doi.org/10.1115/icnmm2007-30046.

Full text
Abstract:
This paper describes a microfabrication process for constructing three-dimensional microfluidic structures in polydimethylsiloxane (PDMS). Rapid prototyping of microfluidic devices is possible starting from ink-jet printed masks and by utilising replica molding to create fluidic structures in PDMS from SU-8 and SPR-220 masters pre-patterned on a silicon or glass substrate. Multi-layer bonded and stacked alignment of up to 13 different functional polymer microfluidic layers with through-layer fluidic interconnects has been demonstrated. Pneumatically actuated valves have also been demonstrated for the regulation of sub-10 nL of fluid volumes. The geometric design of the valves is described with experimental verification conducted on rounded and vertical channel profiles to examine the effects of channel geometry on valve leak rates. The PDMS-based technology allows for the fabrication of devices with extremely small reaction volumes and parallel sample processing, making these devices ideally suited to applications which require high throughput processing and the ability to conduct parallel assays with very limited volumes of reagent and sample. We describe the applications of this technology to protein crystallization in particular.
APA, Harvard, Vancouver, ISO, and other styles
10

Lovell, John R., Omar Kulbrandstad, Sai Madem, and Daniel Meza. "Real-Time Digital Chemistry Offshore Transforms Flow Assurance Management." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31121-ms.

Full text
Abstract:
Abstract Managing asphaltene accumulation in offshore Gulf-of-Mexico wells is a significant challenge. Until recently there was no real-time chemical monitoring that could advise on whether chemical inhibition was making a particular well more, or less, stable. This changed with the development of real-time hardware that directly measures the ratio of asphaltene flowing in the oil. A new generation of that hardware has now been launched which meets all of the Qualification and HSE requirements for deployment on offshore platforms. A microwave resonator was designed to receive fluid at wellhead conditions, i.e., without a reduction in pressure or temperature, and the parameters of that resonator were optimized to maximize microwave intensity for typical oilfield fluids. The microwave circuitry is incorporated in an explosion-proof container with Class 1 Div 2 rated electrical and fluid connections. By combining that resonator with a solenoid that can generate a large magnetic field around a flowline, the resulting device resonates electrons within asphaltene molecules to create a unique signature that is proportional to the total asphaltene count. Estimates of oil-water cut and gas-oil ratio are also obtained as part of the processing and this combination gives the percentage of asphaltene within the oil. The use of this hardware with controlling software and cloud processing creates a unique Internet-of-Things device which can be used to optimize asphaltene-related flow assurance challenges offshore. Pressure testing up to 5ksi and 120C gives the device a working envelope well exceeding typical offshore production hardware requirements. For a fixed fluid, the computation of asphaltene ratio was shown to be independent of applied pressure. Conversely, it was found that in a live well chemical properties of fluids can change over the course of a few hours even when the surface pressure and flow-rates stay the same. In one well, the surface asphaltene percentage within an oil was seen to vary from 0.3% to 3% because of alternating deposition and erosion of an asphaltene layer that had been forming along the ID of production tubing. Over the course of a series of tests in the Middle East, it was observed that those wells with uniform asphaltene percentage were seen as less troublesome to manage compared to wells with a higher deviation. In two Permian fields subject to CO2 flooding, a geographic variation in asphaltene percentage which correlated to the long-term exposure to injected gas was observed. It has long been standard for chemical properties of fluids to be obtained by sending samples to a lab. This paper demonstrates additional value that can be obtained from getting that data in real-time, especially when viewed in the context of an overall chemical management program.
APA, Harvard, Vancouver, ISO, and other styles