To see the other types of publications on this topic, follow the link: Complex Machinery.

Dissertations / Theses on the topic 'Complex Machinery'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Complex Machinery.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bentley, Darren. "Intelligent control of complex soil tillage machinery." Thesis, Cranfield University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lin, Tsan-hwan. "Operation analysis and design of large complex conveyor networks." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/24300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Heian, Mats Johan. "Factors Influencing Machinery System Selection for Complex Operational Profiles." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25881.

Full text
Abstract:
BackgroundThe environmental consequences, caused by global emission of green house gases (GHG), have received increasingly concern in recent years. CO2 emissions from the maritime sector represent 3,3% of the world’s total CO2 emissions and are forecast to increase the next decades. To meet the new and stricter regulations on emissions, the International Maritime Organization (IMO) has announced that by 2015 the regulations known as “Tier III” will take effect in the emission controlled areas (ECA), and globally by the year 2020. They are currently debating technical, operational and market-based measures for reducing GHG emissions from the shipping industry. Hybrid power systems, which is a power system combining power production and energy storage, have been used in several industries and have received particular interest in the power production and car industries. Introducing it to power production onboard ships, the performance of the vessel can be improved and the emission of GHG can be reduced. Overall Aim and FocusThe overall objective of this thesis is to identify the cutting point between selecting hybrid power systems which combine energy production with energy storage capacity and diesel electric systems based on operational profiles and external influences such as cost, route, distance, weather conditions, maneuvering and type of operations. MethodThis thesis is a research of the influencing factors on selection of the most efficient machinery solutions for vessels with complex operational profiles. The first part of the report is a review of diesel electric power and propulsion systems, and state of the art configurations of it, which have been proven able to handle variations in load in an efficient manner. The influencing factors, such as operations to perform, weather conditions, emissions and rules and regulations, have been evaluated and described in order to create a platform for supporting the decisions made when selecting a machinery system based on the planned operational profile of the vessel. A stepwise method has been made to evaluate the influencing factors up against the operational profile. The first step is a simple overview of the operational profile to evaluate the degree of variation in operations and loads. Further on, the power demands in the planned operations are evaluated with focus on the possible dynamic loads. Finally, a screening process is presented to evaluate whether a hybrid of diesel electric and energy storage configuration is beneficial for a type of ship, compared to a pure diesel electric system. To estimate the potential economical benefits of hybridization compared to a diesel electric configuration, a simple calculation on savings in lifetime costs is conducted as the fourth step.ResultsThe ability of the vessels machinery to handle the dynamic loading picture during specialized operations in an efficient matter is of the utmost importance. Especially regarding operating costs, not only for the fuel consumption, but also the maintenance cost and emissions of green house gases. To illustrate the selection process, a case, where selecting the best-suited machinery system for a PSV is the goal, was made and run through the method. Two scenarios were simulated, one where the fuel and battery prices were held constant over a period of 25 years. The other scenario, which is believed to be the most likely due to the last year’s trend in fuel prices, has a 2% annual increase in fuel price and a 20% decrease in battery prices every 10 years. 
The calculations show, in both scenarios, that a hybrid of diesel electric and batteries as energy storage will give reductions in costs compared to pure diesel electric. The payback time of hybridization is less than five years in both cases which indicates that, in addition to reduce emissions, this might be a good investment for the ship-owner. However, the savings and potential benefits turned out to be larger for the case with varying prices. This had a potentially 10% reduction in cost after 25 years of operation. For vessels operating with a large amount of variations in loads and which experience several transients and low loads, a hybrid system with an energy storage unit will assist the engines in handling the load peaks and troughs, which lead to a more efficient operation compared to diesel electric. Diesel electric system has been a preferred choice of machinery for ships with complex operational profiles in recent years. However, despite the higher installation cost, the hybrid system turns out to be a more profitable choice in the future.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Wenyu. "A Probabilistic Approach for Prognostics of Complex Rotary Machinery Systems." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1423581651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kriama, Abdulbast. "3D complex shaped- dissolvable multi level micro/nano mould fabrication." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2405/.

Full text
Abstract:
There is growing interest in the development of fabrication techniques to cost effectively mass-produce high-resolution (micro/nano) 3D structures in a range of materials. Biomedical applications are particularly significant. This work demonstrates a novel technique to simultaneously fabricate a sacrificial mould having the inverse shape of the desired device structure and also create the desired device structure using electroplating deposition techniques. The mould is constructed of many thin layers using a photoresist material that is dissolvable and sensitive to UV light. At the same time the device is created in the emerging mould layers using Gold electroplating deposition technique. Choosing to fabricate the mould and the 3D structures in multiple thin layers allows the use of UV light and permits the potential cost-effective realization of 3D curved surfaces, the accuracy and geometric details of which are related to the number of layers used. In this work I present a novel idea to improve the LIGA process when using many masks to deposit multi thin layer over each other. Moreover, this technique can be utilized to produce a curved surface in the vertical direction with any diameter. Practically, a 2 µm thickness of layer is applied in the proposed technique. However, a layer of 0.5 µm or less can be deposited. An example is provided to explain the novel fabrication process and to outline the resulting design and fabrication constraints. With this technique, any structure could be made and any material used. The work employs conventional techniques to produce a 3D complex shape. By using conventional techniques with multi layers to produce a 3D structure, many problems are expected to occur during the process. Those problems were mentioned by many researchers in general but have not been addressed correctly. Most researchers have covered those problems by leaving the conventional and using a new technique they invented to produce the required product. However, in my work I have addressed those problems for the first time and I offered a new and effective technique to improve the MEMS technology and make this technology cheaper. This was achieved by using a research methodology requiring a rigorous review of existing processes, as outlined above, then by proposing a concept design for an improved process. This novel proposed process was then tested and validated by a series of experiments involving the manufacture of demo-devices. The conclusion is that this new process has the potential to be developed into a commercially implementable process.
APA, Harvard, Vancouver, ISO, and other styles
6

McInally, Stephen Geoffrey. "A novel approach to eliciting requirements in the process of designing complex instrument systems." Thesis, University College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lahudkar, Shweta L. "REGULATION OF EUKARYOTIC GENE EXPRESSION BY mRNA CAP BINDING COMPLEX AND CAPPING MACHINERY." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/834.

Full text
Abstract:
A characteristic feature of gene expression in eukaryotes is the addition of a 5' terminal 7-methylguanine "cap" to nascent pre-messenger RNA (mRNA) in the nucleus. It is the 5'capping process, which proves vital to creating a mature mRNA. The synthesis of an mRNA followed by its capping is a complex undertaking which requires a set of protein factors. The capped mRNA is then exclusively bound by a cap-binding complex (CBC). CBC shields mRNA from exonucleases as well as regulates downstream post-transcriptional events, translational initiation and nonsense mediated mRNA decay (NMD). Any misregulation during capping or in the binding of CBC can lead a number of diseases/disorders. Thus, the process and regulation of capping and CBC binding to mRNA are important fields to study the control of gene expression. Over the years, capping apparatus and CBC have been implicated in post-transcriptional regulation. However, it is not yet known whether CBC plays any role in controlling transcriptional initiation or elongation. Thus, the major research focus in my thesis had been to analyze the role of CBC and capping enzymes in regulation of transcriptional initiation and elongation. The results have revealed the role of CBC in stimulating the formation of pre-initiation complex (PIC) at the promoter in vivo via Mot1p (modifier of transcription). Subsequently, we have demonstrated the roles of CBC in transcription elongation, splicing and nuclear export of mRNA. Interestingly, we find that the capping enzyme, Cet1p, decreases promoter proximal accumulation of RNA polymerase II. These results support that Cet1p promotes the release of paused-RNA polymerase II to get engaged into elongating form for productive transcription. Such function of Cet1p appears to be mediated via the Facilitates chromatin transcription (FACT) complex. We find that FACT is targeted to the active gene by the N-terminal domain of Cet1p independently of its capping activity. In the absence of Cet1p, recruitment of FACT to the active gene is impaired, leading to paused-RNA polymerase II. Collectively, the results of my thesis work provide significant insight on the regulation of gene expression by CBC and capping enzyme, Cet1p.
APA, Harvard, Vancouver, ISO, and other styles
8

Buzza, Matthew. "An Evaluation of Classification Algorithms for Machinery Fault Diagnosis." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1490702571145903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

El, Hayek Mustapha Mechanical &amp Manufacturing Engineering Faculty of Engineering UNSW. "Optimizing life-cycle maintenance cost of complex machinery using advanced statistical techniques and simulation." Awarded by:University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/24955.

Full text
Abstract:
Maintenance is constantly challenged with increasing productivity by maximizing up-time and reliability while at the same time reducing expenditure and investment. In the last few years it has become evident through the development of maintenance concepts that maintenance is more than just a non-productive support function, it is a profit- generating function. In the past decades, hundreds of models that address maintenance strategy have been presented. The vast majority of those models rely purely on mathematical modeling to describe the maintenance function. Due to the complex nature of the maintenance function, and its complex interaction with other functions, it is almost impossible to accurately model maintenance using mathematical modeling without sacrificing accuracy and validity with unfeasible simplifications and assumptions. Analysis presented as part of this thesis shows that stochastic simulation offers a viable alternative and a powerful technique for tackling maintenance problems. Stochastic simulation is a method of modeling a system or process (on a computer) based on random events generated by the software so that system performance can be evaluated without experimenting or interfering with the actual system. The methodology developed as part of this thesis addresses most of the shortcomings found in literature, specifically by allowing the modeling of most of the complexities of an advanced maintenance system, such as one that is employed in the airline industry. This technique also allows sensitivity analysis to be carried out resulting in an understanding of how critical variables may affect the maintenance and asset management decision-making process. In many heavy industries (e.g. airline maintenance) where high utilization is essential for the success of the organization, subsystems are often of a rotable nature, i.e. they rotate among different systems throughout their life-cycle. This causes a system to be composed of a number of subsystems of different ages, and therefore different reliability characteristics. This makes it difficult for analysts to estimate its reliability behavior, and therefore may result in a less-than-optimal maintenance plan. Traditional reliability models are based on detailed statistical analysis of individual component failures. For complex machinery, especially involving many rotable parts, such analyses are difficult and time consuming. In this work, a model is proposed that combines the well-established Weibull method with discrete simulation to estimate the reliability of complex machinery with rotable subsystems or modules. Each module is characterized by an empirically derived failure distribution. The simulation model consists of a number of stages including operational up-time, maintenance down-time and a user-interface allowing decisions on maintenance and replacement strategies as well as inventory levels and logistics. This enables the optimization of a maintenance plan by comparing different maintenance and removal policies using the Cost per Unit Time (CPUT) measure as the decision variable. Five different removal strategies were tested. These include: On-failure replacements, block replacements, time-based replacements, condition-based replacements and a combination of time-based and condition-based strategies. Initial analyses performed on aircraft gas-turbine data yielded an optimal combination of modules out of a pool of multiple spares, resulting in an increased machine up-time of 16%. 
In addition, it was shown that condition-based replacement is a cost-effective strategy; however, it was noted that the combination of time and condition-based strategy can produce slightly better results. Furthermore, a sensitivity analysis was performed to optimize decision variables (module soft-time), and to provide an insight to the level of accuracy with which it has to be estimated. It is imperative as part of the overall reliability and life-cycle cost program to focus not only on reducing levels of unplanned (i.e. breakdown) maintenance through preventive and predictive maintenance tasks, but also optimizing inventory of spare parts management, sometimes called float hardware. It is well known that the unavailability of a spare part may result in loss of revenue, which is associated with an increase in system downtime. On the other hand increasing the number of spares will lead to an increase in capital investment and holding cost. The results obtained from the simulation model were used in a discounted NPV (Net Present Value) analysis to determine the optimal number of spare engines. The benefits of this methodology are that it is capable of providing reliability trends and forecasts in a short time frame and based on available data. In addition, it takes into account the rotable nature of many components by tracking the life and service history of individual parts and allowing the user to simulate different combinations of rotables, operating scenarios, and replacement strategies. It is also capable of optimizing stock and spares levels as well as other related key parameters like the average waiting time, unavailability cost, and the number of maintenance events that result in extensive durations due to the unavailability of spare parts. Importantly, as more data becomes available or as greater accuracy is demanded, the model or database can be updated or expanded, thereby approaching the results obtainable by pure statistical reliability analysis.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, S. "Experimental testing and numerical investigation of materials with embedded systems during indentation and complex loading conditions." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8981/.

Full text
Abstract:
In this work, parametric FE (Finite Element) modelling has been developed and used to study the deformation of soft materials with different embedded systems under indentation and more complex conditions. The deformation of a soft material with an embedded stiffer layer under cylindrical flat indenter was investigated through FE modelling. A practical approach in modelling embedded system is evaluated and presented. The FE results are correlated with an analytical solution for homogenous materials and results from a mathematical approach for embedded systems in a half space. The influence of auxeticity on the indentation stiffness ratio and the de-formation of the embedded system under different conditions (indenter size, thickness and embedment depth of the embedded layer) was established and key mechanisms of the Poisson’s ratio effect are highlighted. The results show that the auxeticity of the matrix has a direct influence on the indentation stiffness of the system with an embedded layer. The enhancement of indentation resistance due to embedment increases, as the matrix Poisson’s ratio is decreased to zero and to negative values. The indentation stiffness could be increased by over 30% with a thin inextensible shell on top of a negative Poisson’s ratio matrix. The deformation of the embedded layer is found to be significantly influenced by the auxeticity of the matrix. Selected case studies show that the modelling approach developed is effective in simulating piezoelectrical sensors, and force sensitive resistor, as well as investigating the deformation and embedded auxetic meshes. A full scale parametric FE foot model is developed to simulate the deformation of the human foot under different conditions including soles with embedded shells and negative Poisson’s ratio. The models used a full bone structure and effective embedded structure method to increase the modelling efficiency. A hexahedral dominated meshing scheme was applied on the surface of the foot bones and skin. An explicit solver (Abaqus/Explicit) was used to simulate the transient landing process. Navicular drop tests have been performed and the displacement of the Navicular bone is measured using a 3D image analysing system. The experimental results show a good agreement with the numerical models and published data. The detailed deformation of the Navicular bone and factors affecting the Navicular bone displacement and measurement is discussed. The stress level and rate of stress increase in the Metatarsals and the injury risk in the foot between forefoot strike (FS) and rearfoot (RS) is evaluated and discussed. A detailed full parametric FE foot model is developed and validated. The deformation and internal energy of the foot and stresses in the metatarsals are comparatively investigated. The results for forefoot strike tests showed an overall higher average stress level in the metatarsals during the entire landing cycle than that for rearfoot strike. The increased rate of the metatarsal stress from the 0.5 body weight (BW) to 2 BW load point is 30.76% for forefoot strike and 21.39% for rearfoot strike. The maximum rate of stress increase among the five metatarsals is observed on the 1st metatarsal in both landing modes. The results indicate that high stress level during forefoot landing phase may increase potential of metatarsal injuries. The FE was used to evaluate the effect of embedded shell and auxetic materials on the foot-shoe sole interaction influencing both the contact area and the pressure. 
The work suggests that application of the auxetic matrix with embedded shell can reinforce the indentation resistance without changing the elastic modulus of the material which can optimise the wearing experience as well as providing enough support for wearers. . Potential approaches of using auxetic structures and randomly distributed 2D inclusion embedded in a soft matrix for footwear application is discussed. The design and modelling of foot prosthetic, which resembles the human foot structure with a rigid structure embedded in soft matrix is also presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
11

Cheng, Guilong. "Unraveling Macro-Molecular Machinery by Mass Spectrometry: from Single Proteins to Non-Covalent Protein Complexes." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195466.

Full text
Abstract:
Presented in this dissertation are studies of protein dynamics and protein/protein interactions using solution phase hydrogen/deuterium exchange in combination with mass spectrometry (HXMS). In addition, gas phase fragmentation behaviors of deuterated peptides are investigated, with the purpose of increasing resolution of the HXMS. In the area of single protein dynamics, two protein systems are studied. Studies on the cytochrome c2 from Rhodobacter capsulatus indicate its domain stability to be similar to that of the horse heart cytochrome c. Further comparison of the exchange kinetics of the cytochrome c2 in its reduced and oxidized state reveals that the so-called hinge region is destabilized upon oxidation. We also applied a similar approach to investigate the conformational changes of photoactive yellow protein when it is transiently converted from the resting state to the signaling state. The central β-sheet of the protein is shown to be destabilized upon photoisomerization of the double bond in the chromophore. Another equally important question when it comes to understanding how proteins work is the interactions between proteins. To this end, two protein complexes are subjected to studies by solution phase hydrogen deuterium exchange and mass spectrometry. In the case of LexA/RecA interaction, both proteins show decreases in their extents of exchange upon complex formation. The potential binding site in LexA was further mapped to the same region that the protein uses to cleave itself upon interacting with RecA. In the sHSP/MDH system, hydrogen/deuterium exchange experiments revealed regions within sHSP-bound MDH that were significantly protected against exchange under heat denaturing condition, indicative of a partially unfolded state. Hydrogen/deuterium exchange therefore provides a way of probing low resolution protein structure within protein complexes that have a high level of heterogeneity. Finally, the feasibility of increasing resolution of HXMS by gas phase peptide fragmentation is investigated by using a peptide with three prolines near the C-terminus. Our data show that deuterium migration indeed occurs during the collision activated dissociation process. Caution is required when interpreting the MS/MS spectra as a way of pinpointing the exact deuterium distribution within peptides.
APA, Harvard, Vancouver, ISO, and other styles
12

Ismail, Hafizul. "Intelligent model-based control of complex multi-link mechanisms." Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/97374/.

Full text
Abstract:
Complex under-actuated multilink mechanism involves a system whose number of control inputs is smaller than the dimension of the configuration space. The ability to control such a system through the manipulation of its natural dynamics would allow for the design of more energy-efficient machines with the ability to achieve smooth motions similar to those found in the natural world. This research aims to understand the complex nature of the Robogymnast, a triple link underactuated pendulum built at Cardiff University with the purpose of studying the behaviour of non-linear systems and understanding the challenges in developing its control system. A mathematical model of the robot was derived from the Euler-Lagrange equations. The design of the control system was based on the discrete-time linear model around the downward position and a sampling time of 2.5 milliseconds. Firstly, Invasive Weed Optimization (IWO) was used to optimize the swing-up motion of the robot by determining the optimum values of parameters that control the input signals of the Robogymnast’s two motors. The values obtained from IWO were then applied to both simulation and experiment. The results showed that the swing-up motion of the Robogymnast from the stable downward position to the inverted configuration to be successfully achieved. Secondly, due to the complex nature and nonlinearity of the Robogymnast, a novel approach of modelling the Robogymnast using a multi-layered Elman neural ii network (ENN) was proposed. The ENN model was then tested with various inputs and its output were analysed. The results showed that the ENN model to be capable of providing a better representation of the actual system compared to the mathematical model. Thirdly, IWO is used to investigate the optimum Q values of the Linear Quadratic Regulator (LQR) for inverted balance control of the Robogymnast. IWO was used to obtain the optimal Q values required by the LQR to maintain the Robogymnast in an upright configuration. Two fitness criteria were investigated: cost function J and settling time T. A controller was developed using values obtained from each fitness criteria. The results showed that LQRT performed faster but LQRJ was capable of stabilizing the Robogymnast from larger deflection angles. Finally, fitness criteria J and T were used simultaneously to obtain the optimal Q values for the LQR. For this purpose, two multi-objective optimization methods based on the IWO, namely the Weighted Criteria Method IWO (WCMIWO) and the Fuzzy Logic IWO Hybrid (FLIWOH) were developed. Two LQR controllers were first developed using the parameters obtained from the two optimization methods. The same process was then repeated with disturbance applied to the Robogymnast states to develop another two LQR controllers. The response of the controllers was then tested in different scenarios using simulation and their performance was evaluated. The results showed that all four controllers were able to balance the Robogymnast with the fastest settling time achieved by WMCIWO with disturbance followed by in the ascending order: FLIWOH with disturbance, FLIWOH, and WCMIWO.
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Xiang. "Molecular dissection of the Sec62/63p complex, a member of protein translocation machinery of the endoplasmic reticulum membrane /." Karlsruhe : Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft, 2005. http://bibliothek.fzk.de/zb/berichte/FZKA7163.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wang, Xian. "Molecular dissection of the Sec62/63p complex, a member of protein translocation machinery of the endoplasmic reticulum membrane." Karlsruhe : FZKA, 2005. http://bibliothek.fzk.de/zb/berichte/FZKA7163.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Smith, John P. "Effective and efficient non-destructive testing of large and complex shaped aircraft structures." Thesis, University of Central Lancashire, 2004. http://clok.uclan.ac.uk/7646/.

Full text
Abstract:
The main aim of the research described within this thesis is to develop methodologies that enhance the defect detection capabilities of nondestructive testing (NDT) for the aircraft industry. Modem aircraft non-destructive testing requires the detection of small defects in large complex shaped components. Research has therefore focused on the limitations of ultrasonic, radioscopic and shearographic methods and the complimentary aspects associated with each method. The work has identified many parameters that have significant effect on successful defect detection and has developed methods for assessing NDT systems capabilities by noise analysis, excitation performance and error contributions attributed to the positioning of sensors. The work has resulted in 1. The demonstration that positional accuracy when ultrasonic testing has a significant effect on defect detection and a method to measure positional accuracy by evaluating the compensation required in a ten axis scanning system has revealed limitsio the achievable defect detection when using complex geometry scanning systems. 2. A method to reliably detect 15 micron voids in a diffusion bonded joint at ultrasonic frequencies of 20 MHz and above by optimising transducer excitation, focussing and normalisation. 3. A method of determining the minimum detectable ultrasonic attenuation variation by plotting the measuring error when calibrating the alignment of a ten axis scanning system. 4. A new formula for the calculation of the optimum magnification for digital radiography. The formula is applicable for focal spot sizes less than 0.1 mm. 5. A practical method of measuring the detection capabilities of a digital radiographic system by calculating the modulation transfer function and the noise power spectrum from a reference image. 6. The practical application of digital radiography to the inspection of super plastically formed ditThsion bonded titanium (SPFDB) and carbon fibre composite structure has been demonstrated but has also been supported by quantitative measurement of the imaging systems capabilities. 7. A method of integrating all the modules of the shearography system that provides significant improvement in the minimum defect detection capability for which a patent has been granted. 8. The matching of the applied stress to the data capture and processing during a shearographic inspection which again contributes significantly to the defect detection capability. 9. The testing and validation of the Parker and Salter [1999] temporal unwrapping and laser illumination work has led to the realisation that producing a pressure drop that would result in a linear change in surface deformation over time is difficult to achieve. 10. The defect detection capabilities achievable by thermal stressing during a shearographic inspection have been discovered by applying the pressure drop algorithms to a thermally stressed part. 11. The minimum surface displacement measurable by a shearography system and therefore the defect detection capabilities can be determined by analysing the signal to noise ratio of a transition from a black (poor reflecting surface) to white (good reflecting surface). The quantisation range for the signal to noise ratio is then used in the Hung [1982] formula to calculate the minimum displacement. Many of the research aspects contained within this thesis are cuffently being implemented within the production inspection process at BAE Samlesbury.
APA, Harvard, Vancouver, ISO, and other styles
16

Elwany, Alaa H. "Sensor-based prognostics and structured maintenance policies for components with complex degradation." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37198.

Full text
Abstract:
We propose a mathematical framework that integrates low-level sensory signals from monitoring engineering systems and their components with high-level decision models for maintenance optimization. Our objective is to derive optimal adaptive maintenance strategies that capitalize on condition monitoring information to update maintenance actions based upon the current state of health of the system. We refer to this sensor-based decision methodology as "sense-and-respond logistics". As a first step, we develop and extend degradation models to compute and periodically update the remaining life distribution of fielded components using in situ degradation signals. Next, we integrate these sensory updated remaining life distributions with maintenance decision models to; (1) determine, in real-time, the optimal time to replace a component such that the lost opportunity costs due to early replacements are minimized and system utilization is increased, and (2) sequentially determine the optimal time to order a spare part such that inventory holding costs are minimized while preventing stock outs. Lastly, we integrate the proposed degradation model with Markov process models to derive structured replacement and spare parts ordering policies. In particular, we show that the optimal maintenance policy for our problem setting is a monotonically non-decreasing control limit type policy. We validate our methodology using real-world data from monitoring a piece of rotating machinery using vibration accelerometers. We also demonstrate that the proposed sense-and-respond decision methodology results in better decisions and reduced costs compared to other traditional approaches.
APA, Harvard, Vancouver, ISO, and other styles
17

Marigo, Michele. "Discrete element method modelling of complex granular motion in mixing vessels : evaluation and validation." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3402/.

Full text
Abstract:
In recent years, it has been recognised that a better understanding of processes involving particulate material is necessary to improve manufacturing capabilities and product quality. The use of Discrete Element Modeling (DEM) for more complicated particulate systems has increased concordantly with hardware and code developments, making this tool more accessible to industry. The principal aim of this project was to study DEM capabilities and limitations with the final goal of applying the technique to relevant Johnson Matthey operations. This work challenged the DEM numerical technique by modelling a mixer with a complex motion, the Turbula mixer. The simulations revealed an unexpected trend for rate of mixing with speed, initially decreasing between 23 rpm and 46 rpm, then increasing between 46 rpm and 69 rpm. The DEM results were qualitatively validated with measurements from Positron Emission Particle Tracking (PEPT), which revealed a similar pattern regarding the mixing behaviour for a similar system. The effect of particle size and speed on segregation were also shown, confirming comparable results observed in the literature. Overall, the findings illustrated that DEM could be an effective tool for modelling and improving processes related to particulate material.
APA, Harvard, Vancouver, ISO, and other styles
18

Malik, Imran Tarik [Verfasser]. "Modulation of the Clp protease by agonist molecules as a tool to investigate the functional properties of the complex machinery / Imran Tarik Malik." Tübingen : Universitätsbibliothek Tübingen, 2021. http://d-nb.info/1236994221/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Menk, Alexander. "Simulation of complex microstructural geometries using X-FEM and the application to solder joint lifetime prediction." Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2519/.

Full text
Abstract:
In electronic devices solder joints form a mechanical as well as an electrical connection between the circuit board and the component (e.g. a chip or a resistor). Temperature variations occurring during field use cause crack initiation and crack growth inside the joints. Accurate prediction of the lifetime requires a method to simulate the damage process based on microstructural properties. Numerical simulation of developing cracks and microstructural entities such as grain boundaries and grain junctions gives rise to several problems. The solution contains strong and weak discontinuities as well as weak singularities. To obtain reasonable solutions with the finite element method (FEM) the element edges have to align with the cracks and the grain boundaries, which imposes geometrical restrictions on the mesh choice. Additionally, a large number of elements has to be used in the vicinity of the singularities which increases the computational effort. Both problems can be circumvented with the extended finite element method (X-FEM) by using appropriate enrichment functions. In this thesis the X-FEM will be developed for the simulation of complex microstructural geometries. Due to the anisotropy of the different grains forming a joint and the variety of different microstructural configurations it is not always possible to write the enrichment functions in a closed form. A procedure to determine enrichment functions numerically is explained and tested. As a result, a very simple meshing scheme, which will be introduced here, can be used to simulate developing cracks in solder joint microstructures. Due to the simplicity of the meshing algorithm the simulation can be automated completely. A large number of enrichment functions must be used to realize this. Well-conditioned equation systems, however, cannot be guaranteed for such an approach. To improve the condition number of the X-FEM stiffness matrix and thus the robustness of the solution process a preconditioning technique is derived and applied. This approach makes it possible to develop a new and fully automated procedure for addressing the reliability of solder joints numerically. The procedure relies on the random generation of microstructures. Performing crack growth calculations for a series of these structures makes it possible to address the influence of varying microstructures on the damage process. Material parameters describing the microstructure are determined in an inverse procedure. It will be shown that the numerical results correspond well with experimental observations.
APA, Harvard, Vancouver, ISO, and other styles
20

Ferreira, Ramos Ana Sofia. "Inhibitors of the mRNA capping machinery and structural studies on macro domains from alphaviruses." Thesis, Aix-Marseille, 2019. http://theses.univ-amu.fr.lama.univ-amu.fr/190708_FERREIRARAMOS_112plefdq222vlt303lhj860uuajmi_TH.pdf.

Full text
Abstract:
Les alphavirus comme le virus Chikungunya et le virus de l'encéphalite équine vénézuélienne sont des arbovirus (ré)-émergents. Ils possèdent un mécanisme de coiffe de l’ARNm non conventionnel catalysé par nsP1 et nsP2 pour former une structure cap-0 (m7GpppN-) qui est cruciale pour la réplication. Le coiffage constitue une cible antivirale attractive. NsP1 catalyse trois activités: méthyltransférase, guanylylation de nsP1 (GT), et transfert sur l’ARNm. Nous avons développé un test pour cribler la chimiothèque de composés de Prestwick Chemical® contre l’activité GT de nsP1. 18 composés sont ressortis de ce crible et trois séries de composés ont été sélectionnées pour une caractérisation plus poussée. Ces composés inhibent peu une MTase cellulaire suggérant leur spécificité vis-à-vis de nsP1. Des analyses de relations structure/activité (SAR) ont également été initiées pour identifier les pharmacophores actifs. Ce travail montre que notre test permet de sélectionner des composés spécifiques ciblant le coiffage de l’ARNm des alphavirus. NsP3 consiste en un Macro domaine, un domaine de liaison au zinc et une région hypervariable. Le Macro domaine est essentiel à la réplication virale en fixant l’ADP-ribose (ADPr) et en dé-ribosylatant des protéines cellulaires. Nous avons effectué une étude structurale et fonctionnelle du Macro domaine du virus Getah (GETV) dont la séquence de la boucle catalytique présente des particularités. Cette étude a permis de caractériser plusieurs poses adoptées par l’ADPr dans le site actif. Ces poses peuvent représenter plusieurs instantanés du mécanisme de l’ADP-ribosylhydrolase, et mettent en lumière de nouveaux résidus à caractériser
Alphaviruses such as Chikungunya virus and Venezuelan equine encephalitis virus (VEEV) are (re-)emerging arboviruses. They own an unconventional mRNA capping catalysed by nsP1 and nsP2 leading to the formation of a cap-0 structure (m7GpppN-), which is crucial for virus replication and constitutes an attractive antiviral target. NsP1 catalyses three activities: methyltransferase (MTase), guanylylation (GT) and guanylyltransferase (GTase). A high throughput ELISA was developed to monitor the GT reaction and screen the Prestwick Chemical library®. The IC50 was determined for 18 selected hit compounds. Three series of compounds were selected for further characterization. These compounds poorly inhibit a cellular MTase suggesting their specificity against nsP1. Analogue search and structural activity relationships (SAR) were also initiated to identify the active pharmacophore features. The results show that our strategy is a convenient way to select specific hit compounds targeting the mRNA capping of alphaviruses. NsP3 consists in a Macro domain at the N-terminal, a zinc binding domain and a C-terminal hypervariable region. The Macro domain is essential for the replication through ADP-ribose (ADPr) binding and de-ribosylation of cellular proteins. In order to better understand this mechanism, we initiated a structure-based study of Getah virus (GETV) Macro domain, which contains a peculiar substitution in the catalytic loop. By crystallographic studies we characterized several poses adopted by ADPr in the binding site. Together, these poses may represent several snapshots of the ADP-ribosylhydrolase mechanism, highlighting new residues to be further characterised
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Xian [Verfasser]. "Molecular dissection of the Sec62/63p complex, a member of protein translocation machinery of the endoplasmic reticulum membrane / Forschungszentrum Karlsruhe GmbH, Karlsruhe. Xian Wang." Karlsruhe : FZKA, 2005. http://d-nb.info/977282295/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Le, Flohic Julien. "Vers une commande basée modèle des machines complexes : application aux machines-outils et machines d'essais mécaniques." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22551/document.

Full text
Abstract:
De nos jours, les exigences de productivité et de maîtrise des coûts ont incité les industriels à développer de nouvelles machines, et avec elles, de nouveaux enjeux sont apparus : souplesse de la structure, vibration, effets dynamiques non-négligeables, etc. Pourtant, leur mise en œuvre est toujours issue de méthodes employées pour les machines conventionnelles. Ces travaux s’intéressent donc à la définition de stratégies globales englobant la prise en compte de la structure utilisée et de la tâche à réaliser, appliquée à deux contextes d’illustration. Dans le contexte de l’usinage, nous proposons un réglage des machines basé sur le modèle comportemental de la structure qui ne nécessite que peu de modifications manuelles et permettant un gain de temps pour la mise en œuvre. Une nouvelle loi de commande en couple calculé est également proposé, elle permet de réduire les phénomènes vibratoires lors de phases dynamiquement exigeantes. Dans le contexte des essais mécaniques, l’objectif est de montrer la faisabilité de l’utilisation de machines parallèles à 6 degrés de liberté dans le cadre d’essais dont la gestion des conditions aux limites est critique. Nous proposons une instrumentation et un schéma de commande qui permettent de respecter les consignes avec une erreur maximale de l’ordre de 0.40μm, même dans le cas d’éprouvettes très rigide (en béton par exemple)
Nowadays, the requirements in productivity and costs mastering have forced the industrial manufacturers to develop new kind of mechanisms. Thus, the complexity of the machine-tools structures and machining processes has increased and new challenges have emerged : flexible structure, vibration, non-negligible dynamic effects, etc ... However, their implementation still comes from methods used for conventional machines. These works are thus about defining overall strategies including consideration of the kind of structure used and the task to realise. Two illustrative contexts are used. In the context of machining, we propose a generic tuning method based on kinematic and dynamic model of machine-tools structure that requires only a few manual modifications, in order to save time for implementation. A new computed torque control law is proposed, it reduces vibration phenomena in dynamical demanding phases. In the context of the mechanical tests, the objective is to demonstrate the feasibility of using parallel machines with 6 degrees of freedom in the context of mechanical tests, whereas the boundary conditions are perfectly controlled. We propose an instrumentation and control scheme that is able to perform mechanical tests with a maximum error of about 0.40 mu m, even in the case of very rigid specimen (concrete for example)
APA, Harvard, Vancouver, ISO, and other styles
23

Cherrak, Yassine. "Caractérisation structurale et inhibition d’une nanomachine impliquée dans la compétition bactérienne : le T6SS." Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0232.

Full text
Abstract:
Le T6SS est un appareil de sécrétion retrouvé chez près d'un tiers des bactéries à Gram-négative. Il est responsable de la translocation de toxines dont l'activité permet l’élimination de microorganismes compétiteurs et ainsi facilite l’accès des bactéries pathogènes vers des environnements essentiels à l’homéostasie de l’Homme. Il est ancré à la paroi bactérienne par l’intermédiaire d’un complexe membranaire sur lequel est liée une plateforme d’assemblage, ou baseplate, permettant la polymérisation d’une queue contractile. Pendant ma thèse, je me suis intéressé à la baseplate du T6SS chez notre modèle d’étude Escherichia coli entéro-agrégative (EAEC). Après avoir participé à la caractérisation de TssK et dévoilé son rôle de connecteur, nous avons révélé la stœchiométrie, l’ordre d’assemblage ainsi que la structure des autres protéines constituant la baseplate. Ces travaux ont permis d’augmenter nos connaissances fondamentales sur le fonctionnement de cette machinerie mais également de dévoiler une interface d’interaction dont l’interférence altère le fonctionnement du T6SS. En parallèle, je me suis intéressé au complexe membranaire afin d’étudier sa connexion avec la baseplate de symétrie différente. Cette étude a abouti à la caractérisation structurale du complexe membranaire du T6SS et à la mise en évidence d’un rôle prépondérant de la lipoprotéine TssJ qui, étonnamment, est absente chez certaines bactéries. L’une d’entre elles, A. baumannii, possède un T6SS caractérisé par la présence de protéines membranaires de fonction inconnue dont l’étude, entamée durant ma dernière année de thèse, suggère un mode d’ancrage et d’assemblage différent de celui d’EAEC
The type VI secretion system (T6SS) is a contractile nanomachine found in one third of Gram-negative and translocating toxins into both prokaryotic and eukaryotic cells. This device is used by pathogenic strains to induce virulence and/or to compete with other bacteria, fostering environments colonization including the human gut microbiota.The T6SS assembles a cytoplasmic bacteriophage-related-tail structure anchored to the cell envelope by a membrane complex. The tail is composed of an inner tube wrapped by a sheath whose contraction is thought to translocate the tube, the tip proteins and puncture the prey’s cell wall. The tail is built from an assembly platform, the baseplate, connected to the membrane complex and hence used as an evolutionary adaptor. During my thesis, I have characterized the poorly studied baseplate complex in our model enteroaggregative E. coli (EAEC). After describing the structural properties of TssK and its role as connector, we revealed the assembly pathway, the stoichiometry and the structure of the other baseplate proteins. These works increased significantly our comprehension of the T6SS dynamic and highlighted a key interface we targeted through an interfering peptide. Meanwhile, I studied the membrane complex and its connection with the baseplate complex. This study lead to the high-resolution description of the membrane complex of EAEC and revealed a major role of the lipoprotein TssJ which, surprisingly, is absent in other bacteria such as Acinetobacter baumannii. The investigation of the non-canonical T6SS membrane complex of A. baumannii during my last PhD year suggests an anchoring and assembly mechanism different from EAEC’s
APA, Harvard, Vancouver, ISO, and other styles
24

Arruda, Guilherme Ferraz de. "Mineração de dados em redes complexas: estrutura e dinâmica." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-25062013-085958/.

Full text
Abstract:
A teoria das redes complexas é uma área altamente interdisciplinar que oferece recursos para o estudo dos mais variados tipos de sistemas complexos, desde o cérebro até a sociedade. Muitos problemas da natureza podem ser modelados como redes, tais como: as interações protéicas, organizações sociais, o mercado financeiro, a Internet e a World Wide Web. A organização de todos esses sistemas complexos pode ser representada por grafos, isto é, vértices conectados por arestas. Tais topologias têm uma influencia fundamental sobre muitos processos dinâmicos. Por exemplo, roteadores altamente conectados são fundamentais para manter o tráfego na Internet, enquanto pessoas que possuem um grande número de contatos sociais podem contaminar um grande número de outros indivíduos. Ao mesmo tempo, estudos têm mostrado que a estrutura do cérebro esta relacionada com doenças neurológicas, como a epilepsia, que está ligada a fenômenos de sincronização. Nesse trabalho, apresentamos como técnicas de mineração de dados podem ser usadas para estudar a relação entre topologias de redes complexas e processos dinâmicos. Tal estudo será realizado com a simulação de fenômenos de sincronização, falhas, ataques e propagação de epidemias. A estrutura das redes será caracterizada através de métodos de mineração de dados, que permitirão classificar redes de acordo com um conjunto de modelos e determinar padrões de conexões presentes na organização de diferentes tipos de sistemas complexos. As análises serão realizadas com aplicações em neurociências, biologia de sistemas, redes sociais e Internet
The theory of complex networks is a highly interdisciplinary reseach area offering resources for the study of various types of complex systems, from the brain to the society. Many problems of nature can be modeled as networks, such as protein interactions, social organizations, the financial market, the Internet and World Wide Web. The organization of all these complex systems can be represented by graphs, i.e. a set of vertices connected by edges. Such topologies have a fundamental influence on many dynamic processes. For example, highly connected routers are essential to keep traffic on the Internet, while people who have a large number of social contacts may infect many other individuals. Indeed, studies have shown that the structure of brain is related to neurological conditions such as epilepsy, which is relatad to synchronization phenomena. In this text, we present how data mining techniques data can be used to study the relation between complex network topologies and dynamic processes. This study will be conducted with the simulation of synchronization, failures, attacks and the epidemics spreading. The structure of the networks will be characterized by data mining methods, which allow classifying according to a set of theoretical models and to determine patterns of connections present in the organization of different types of complex systems. The analyzes will be performed with applications in neuroscience, systems biology, social networks and the Internet
APA, Harvard, Vancouver, ISO, and other styles
25

Sooksawat, Dhassida. "Transition metal complex-based molecular machines." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10045.

Full text
Abstract:
Inspired by the performance and evolutionarily-optimised natural molecular machines that carry out all the essential tasks contributing to the molecular basis of life, chemists aim towards fabricating synthetic molecular machines that mimic biological nanodevices. The use of rotaxanes as a prototype for molecular machines has emerged as a result of their ability to undergo translational motion between two or more co-conformations. Although biological machines are capable of complex and intricate functions, their inherent stability and operational conditions are restricted to in vivo. Synthetic systems offer a limitless number of building blocks and a range of interactions to be manipulated. Transition metal-ligand interactions are utilised as one strategy to control the directional movement of submolecular components in artificial machines due to their well-defined geometric requirements and significant strength. This thesis presents new externally addressable and switchable molecular elements for transition metal complexed-molecular machines involving an acid-base switch. The proton input that induces changes to cyclometallated platinum complexes can be exploited to control exchange between different coordination modes. The development of the pH-switchable metal-ligand motif for the stimuli-responsive platinum-complexed molecular shuttle has also been explored. The metal-directed self-assembly of tubular complexes were studied in order to develop self-assembled rotaxanes. A series of metal building blocks was explored to extend the scope for a tube self-assembly.
APA, Harvard, Vancouver, ISO, and other styles
26

Enshaeifar, Shirin. "Eigen-based machine learning techniques for complex and hyper-complex processing." Thesis, University of Surrey, 2016. http://epubs.surrey.ac.uk/811040/.

Full text
Abstract:
One of the earlier works on eigen-based techniques for the hyper-complex domain of quaternions was on “quaternion principal component analysis of colour images”. The results of this work are still instructive in many aspects. First, it showed how naturally the quaternion domain accounts for the coupling between the dimensions of red, blue and green of an image, hence its suitability for multichannel processing. Second, it was clear that there was a lack of eigen-based techniques for such a domain, which explains the non-trivial gap in the literature. Third, the lack of such eigen-based quaternion tools meant that the scope and the applications of quaternion signal processing were quite limited, especially in the field of biomedicine. And fourth, quaternion principal component analysis made use of complex matrix algebra, which reminds us that the complex domain lays the building blocks of the quaternion domain, and therefore any research endeavour in quaternion signal processing should start with the complex domain. As such, the first contribution of this thesis lies in the proposition of complex singular spectrum analysis. That research provided a deep understanding and an appreciation of the intricacies of the complex domain and its impact on the quaternion domain. As the complex domain offers one degree of freedom over the real domain, the statistics of a complex variable x has to be augmented with its complex conjugate x*, which led to the term augmented statistics. This recent advancement in complex statistics was exploited in the proposed complex singular spectrum analysis. The same statistical notion was used in proposing novel quaternion eigen-based techniques such as the quaternion singular spectrum analysis, the quaternion uncorrelating transform, and the quaternion common spatial patterns. The latter two methods highlighted an important gap in the literature – there were no algebraic methods that solved the simultaneous diagonalisation of quaternion matrices. To address this issue, this thesis also presents new fundamental results on quaternion matrix factorisations and explores the depth of quaternion algebra. To demonstrate the efficacy of these methods, real-world problems mainly in biomedical engineering were considered. First, the proposed complex singular spectrum analysis successfully addressed an examination of schizophrenic data through the estimation of the event-related potential of P300. Second, the automated detection of the different stages of sleep was made possible using the proposed quaternion singular spectrum analysis. Third, the proposed quaternion common spatial patterns facilitated the discrimination of Parkinsonian patients from healthy subjects. To illustrate the breadth of the proposed eigen-based techniques, other areas of applications were also presented, such as in wind and financial forecasting, and Alamouti-based communication problems. Finally, a preliminary work is made available to suggest that the next step from this thesis is to move from static models (eigen-based models) to dynamic models (such as tracking models).
APA, Harvard, Vancouver, ISO, and other styles
27

Khadir, Lahouari. "Étude du phénomène de résonance des pièces complexes en aluminium /." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2007. http://theses.uqac.ca.

Full text
Abstract:
Thesis (M.Eng.) -- Université du Québec à Chicoutimi, 2007.
The title page also reads: Thesis presented to the Université du Québec à Chicoutimi in partial fulfilment of the requirements for the master's degree in engineering. Bibliography: leaves 117-120. Electronic document also available in PDF format.
APA, Harvard, Vancouver, ISO, and other styles
28

Stamp, D. I. "Machine learning approaches to complex time series." Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399317.

Full text
Abstract:
It has been noted that there are numerous similarities between the behaviour of chaotic and stochastic systems. The theoretical links between chaotic and stochastic systems are investigated through the evolution of the density of the dynamics and an equivalence relationship based on the invariant measure of an ergodic system. It is shown that for simple chaotic systems an equivalent stochastic model can be analytically derived when the initial position in state space is known only to a limited precision. Based on this, a new methodology is proposed for modelling complex nonlinear time series that display chaotic behaviour with stochastic models. It consists of using a stochastic model to learn the evolution of the density of the dynamics of the chaotic system by estimating initial and transitional density functions directly from a time series. A number of models utilising this methodology are proposed, based on Markov chains and hidden Markov models. These are implemented, and their performance and characteristics are compared with several standard techniques using computer simulation.
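As a rough illustration of the estimation step described above (not the thesis's own models), the sketch below discretises the state space of a chaotic series, here a logistic map chosen purely for illustration, and estimates a Markov transition matrix directly from the time series.

```python
# Sketch: approximate a chaotic series by a finite-state Markov chain,
# estimating the transition matrix directly from the time series.
import numpy as np

# Logistic map as a stand-in chaotic system (illustrative choice only)
n_steps, r = 20000, 4.0
x = np.empty(n_steps)
x[0] = 0.1234
for t in range(n_steps - 1):
    x[t + 1] = r * x[t] * (1.0 - x[t])

n_bins = 20
states = np.minimum((x * n_bins).astype(int), n_bins - 1)  # discretise [0, 1]

counts = np.zeros((n_bins, n_bins))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1

row_sums = counts.sum(axis=1, keepdims=True)
transition = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(transition[:5].round(2))   # estimated transition probabilities for the first bins
```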
APA, Harvard, Vancouver, ISO, and other styles
29

Lemoine, Marie-Pierre. "Coopération hommes-machines dans les procédés complexes : Modèles techniques et cognitifs pour le contrôle de trafic aérien." Valenciennes, 1998. https://ged.uphf.fr/nuxeo/site/esupversions/0821b192-7376-49d6-ba14-abc99ab0917a.

Full text
Abstract:
Cooperation is a meeting point for many disciplines. Control engineering describes human-machine cooperation through the assistance tools it is able to develop, while the human sciences study it through behaviour in collective settings. This thesis belongs to this multidisciplinary current and proposes criteria and means for designing support systems that meet both human and industrial requirements. The dynamic allocation of tasks between a human operator and a support system is a form of cooperation that covers a wide range of supervision and control activity in complex processes. This dissertation provides an example, since the research is applied to air traffic control. Given the profusion of definitions of cooperation, our research attempts to bring these scattered points of view together and proposes a synthesis based on the notions of know-how and know-how-to-cooperate. Describing cooperative agents in terms of these notions makes it easier to identify gaps in the organisation of the agents as well as in the tasks assigned to them. In particular, an evaluation methodology is presented. It relies on a cognitive model of the human operator and on performance criteria weighted by criteria on the quality of cooperation. One of the solutions proposed following the experiments is the design of a common work space. In addition to a human-machine interface, it is equipped with intelligent modules that facilitate or automate the identification of the cooperative agents' plans. The support system then has precise and objective data to drive the task distributor and allocator. An example of a common work space is presented; it responds to the growth of air traffic by providing active support for cooperation between air traffic controllers, pilots, and support systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Little, Claire. "Machine learning for understanding complex, interlinked social data." Thesis, Manchester Metropolitan University, 2018. http://e-space.mmu.ac.uk/622001/.

Full text
Abstract:
With the growing availability of 'big' data, increasing computer power, and improved data storage capacities, machine learning techniques are now frequently employed to make sense of data. Yet the social sciences have been slow to adopt these techniques, and there is little evidence of their use in some academic fields. This thesis explores the methods most commonly utilised in social science research, namely linear regression and null hypothesis significance testing, in order to identify how machine learning methods might complement these more established approaches. A case study of the Troubled Families (TF) programme provides a practical example of how machine learning techniques can be utilised on complex, interlinked social data to provide deeper understanding and more insight into the data. Eleven different types of families were identified using cluster analysis, and further analysis was performed to understand how the families' lives changed after joining the TF programme compared to before. The analysis provided insight into the various types of families that existed and the problems that they had. It also highlighted that, had the data been analysed only at an overall global level, it would have been prone to an averaging effect whereby many of the changes that occurred were not apparent; analysis at the cluster level resulted in the identification of cluster-level patterns and a greater understanding of the data. This thesis demonstrates that machine learning techniques, such as cluster analysis and decision tree learning, can be effectively utilised on complex 'real-life' social science datasets. These methods can identify hidden groups, relationships, and important predictors in a dataset, provide a better understanding of the structure of the data, and aid in generating research questions and hypotheses.
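To illustrate the cluster-then-compare approach described in this abstract, here is a minimal sketch on synthetic stand-in data (the variable names are invented, not drawn from the Troubled Families dataset), showing how per-cluster changes can differ from the overall average.

```python
# Sketch: cluster families and compare an outcome before/after per cluster,
# to avoid the averaging effect described in the abstract. Synthetic data only.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "n_children": rng.integers(1, 6, 500),
    "debt": rng.gamma(2.0, 1000.0, 500),
    "school_absence_before": rng.uniform(0, 0.5, 500),
})
df["school_absence_after"] = df["school_absence_before"] * rng.uniform(0.6, 1.1, 500)

features = StandardScaler().fit_transform(
    df[["n_children", "debt", "school_absence_before"]]
)
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Change per cluster often differs from the (near-flat) overall change
change = df["school_absence_after"] - df["school_absence_before"]
print(change.groupby(df["cluster"]).mean())
print("overall:", change.mean())
```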
APA, Harvard, Vancouver, ISO, and other styles
31

Eagle, Nathan Norfleet. "Machine perception and learning of complex social systems." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32498.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Includes bibliographical references (p. 125-136).
The study of complex social systems has traditionally been an arduous process, involving extensive surveys, interviews, ethnographic studies, or analysis of online behavior. Today, however, it is possible to use the unprecedented amount of information generated by pervasive mobile phones to provide insights into the dynamics of both individual and group behavior. Information such as continuous proximity, location, communication and activity data has been gathered from the phones of 100 human subjects at MIT. Systematic measurements from these 100 people over the course of eight months have generated one of the largest datasets of continuous human behavior ever collected, representing over 300,000 hours of daily activity. In this thesis we describe how this data can be used to uncover regular rules and structure in the behavior of both individuals and organizations, infer relationships between subjects, verify self-report survey data, and study social network dynamics. By combining theoretical models with rich and systematic measurements, we show it is possible to gain insight into the underlying behavior of complex social systems.
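As a toy illustration of the relationship-inference step mentioned in this abstract (the numbers and threshold below are invented, not taken from the MIT dataset), one can build a proximity-weighted graph and read strong ties off it.

```python
# Sketch: inferring likely relationships from phone-sensed proximity counts,
# in the spirit of the behavioral data described here (toy numbers only).
import networkx as nx

proximity_hours = {            # hours two subjects' phones were co-located
    ("A", "B"): 120, ("A", "C"): 4, ("B", "C"): 95,
    ("C", "D"): 2,   ("A", "D"): 1,
}
G = nx.Graph()
for (u, v), hours in proximity_hours.items():
    G.add_edge(u, v, weight=hours)

# Edges with many co-location hours are taken as likely relationships
friends = [(u, v) for u, v, d in G.edges(data=True) if d["weight"] > 50]
print("inferred strong ties:", friends)
print("weighted degree:", dict(G.degree(weight="weight")))
```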
by Nathan Norfleet Eagle.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
32

Gogia, Sumit. "Insight : interactive machine learning for complex graphics selection." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106021.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 89-91).
Modern vector graphics editors support the creation of a wonderful variety of complex designs and artwork. Users produce highly realistic illustrations, stylized representational art, even nuanced data visualizations. In light of these complex graphics, selections (representations of the sets of objects that users want to manipulate) become more complex as well. Direct manipulation tools that artists and designers find accessible and useful for editing graphics such as logos and icons do not have the same applicability in these more complex cases. Given that selection is the first step for nearly all editing in graphics, it is important to enable artists and designers to express these complex selections. This thesis explores the use of interactive machine learning techniques to improve direct selection interfaces. To investigate this approach, I created Insight, an interactive machine learning selection tool for making a relevant class of complex selections: visually similar objects. To make a selection, users iteratively provide examples of selection objects by clicking on them in the graphic. Insight infers a selection from the examples at each step, allowing users to quickly understand the results of their actions and reactively shape the complex selection. The interaction resembles the direct manipulation interactions artists and designers have found accessible, while helping express complex selections by inferring many parameter changes from simple actions. I evaluated Insight in a user study of digital designers and artists, finding that it enabled users to effectively and easily make complex selections not supported by state-of-the-art vector graphics editors. My results contribute to existing work by both indicating a useful approach for providing complex representation access to artists and designers, and showing a new application for interactive machine learning.
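The inference loop described above can be caricatured in a few lines. In this sketch the feature space and distance threshold are hypothetical (they are not Insight's actual model): the selection is inferred as every object close to some clicked example in a normalised feature space.

```python
# Sketch (hypothetical feature space): infer a "visually similar objects"
# selection from the examples a user has clicked so far.
import numpy as np

# Each row is one graphic object; columns are illustrative features [hue, saturation, area]
objects = np.array([
    [0.02, 0.90, 10.0],
    [0.03, 0.80, 12.0],
    [0.55, 0.70, 11.0],
    [0.60, 0.60, 90.0],
    [0.01, 0.85,  9.5],
])
clicked = [0, 4]          # indices the user has clicked as examples

# Normalise features, then select anything close to some clicked example
lo, hi = objects.min(axis=0), objects.max(axis=0)
normed = (objects - lo) / (hi - lo + 1e-9)
dists = np.linalg.norm(normed[:, None, :] - normed[None, clicked, :], axis=2)
selected = np.where(dists.min(axis=1) < 0.35)[0]
print(selected)           # -> the clicked objects plus the similar ones
```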
by Sumit Gogia.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
33

Verri, Filipe Alves Neto. "Collective dynamics in complex networks for machine learning." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18102018-113054/.

Full text
Abstract:
Machine learning enables machines to learn automatically from data. In the literature, graph-based methods have received increasing attention due to their ability to learn from both local and global information. In these methods, each data instance is represented by a vertex and is linked to other vertices according to a predefined affinity rule. However, they usually have an infeasible time cost for large problems. To overcome this, techniques can employ a heuristic to find suboptimal solutions in a feasible time. Early heuristic optimization methods exploit nature-inspired collective processes, such as ants looking for food sources and swarms of bees. Nowadays, advances in the field of complex systems provide powerful tools to assess and to understand dynamical systems. Complex networks, which are graphs with nontrivial topology, are among these theoretical tools capable of describing the interplay of topology, structure, and dynamics of complex systems. Therefore, machine learning methods based on complex networks and collective dynamics have been proposed. They encompass three steps. First, a complex network is constructed from the input data. Then, the simulation of a distributed collective system in the network generates rich information. Finally, the collected information is used to solve the learning problem. The coordination of the individuals in the system makes it possible to achieve dynamics that are far more complex than the behavior of single individuals. In this research, I have explored collective dynamics in machine learning tasks, in both unsupervised and semi-supervised scenarios. Specifically, I have proposed a new collective system of competing particles that shifts the traditional vertex-centric dynamics to a more informative edge-centric one. Moreover, it is the first particle competition system applied to machine learning tasks that has deterministic behavior. Results show several advantages of the edge-centric model, including the ability to acquire more information about overlapping areas, better exploration behavior, and faster convergence time. I have also proposed a new network formation technique that is not based on similarity and has low computational cost. Since the addition and removal of samples in the network is cheap, it can be used in real-time applications. Finally, I have conducted analytical investigations of a flocking-like system, which were needed to guarantee the expected behavior in community detection tasks. In conclusion, the results of the research contribute to many areas of machine learning and complex systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Cupertino, Thiago Henrique. "Machine learning via dynamical processes on complex networks." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-25032014-154520/.

Full text
Abstract:
Extracting useful knowledge from data sets is a key concept in modern information systems. Consequently, the need for efficient techniques to extract the desired knowledge has been growing over time. Machine learning is a research field dedicated to the development of techniques capable of enabling a machine to "learn" from data. Many techniques have been proposed so far, but there are still issues to be unveiled, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and functions of the system it represents, and thus is capable of capturing the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. The random walk dynamical process is used to characterize the access of unlabeled data to data classes, defining a new heuristic, which we call ease of access, for the supervised paradigm. We also propose a classification technique which combines the high-level view of the data, via network topological characterization, and the low-level relations, via similarity measures, in a general framework. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify multiple observation sets, and an evolving network construction method is applied to the dimensionality reduction problem. The semi-supervised paradigm is covered by extending the ease of access heuristic to the cases in which just a few labeled data samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and stability analysis via a Lyapunov function. Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure used to cluster data. The data is represented by a connected and sparse network in which nodes are dynamical elements. Simulations on benchmark data sets and comparisons to well-known machine learning techniques are provided for all proposed techniques. Advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
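The random-walk idea behind the "ease of access" heuristic can be sketched with an absorbing-chain formulation. The code below is only loosely inspired by the description above and is not the thesis's exact model: a kNN graph is built over the data, labelled points are treated as absorbing states, and each unlabelled point is assigned to the class it reaches most easily.

```python
# Sketch of a random-walk "ease of access" style classifier (illustrative
# absorbing-chain formulation, not the thesis's exact heuristic).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import kneighbors_graph

X, y = make_blobs(n_samples=60, centers=2, random_state=0)
labelled = np.arange(10)                 # pretend only the first 10 labels are known
unlabelled = np.arange(10, 60)

W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
W = np.maximum(W, W.T)                   # symmetrise the kNN graph
P = W / W.sum(axis=1, keepdims=True)     # random-walk transition matrix

Q = P[np.ix_(unlabelled, unlabelled)]    # walks among unlabelled nodes
R = P[np.ix_(unlabelled, labelled)]      # steps from unlabelled to labelled nodes
B = np.linalg.solve(np.eye(len(unlabelled)) - Q, R)   # absorption probabilities

# Accumulate absorption probability per class and pick the most accessible class
classes = np.unique(y[labelled])
access = np.stack([B[:, y[labelled] == c].sum(axis=1) for c in classes], axis=1)
pred = classes[access.argmax(axis=1)]
print("accuracy on unlabelled points:", (pred == y[unlabelled]).mean())
```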
APA, Harvard, Vancouver, ISO, and other styles
35

El, Kaliouby Rana Ayman. "Mind-reading machines : automated inference of complex mental states." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Spiegler, Sebastian Reiner. "Machine learning for the analysis of morphologically complex languages." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Silva, Thiago Christiano. "Machine learning in complex networks: modeling, analysis, and applications." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19042013-104641/.

Full text
Abstract:
Machine learning is a research area whose main purpose is to develop computational methods capable of learning from previously acquired experience. Although a large number of machine learning techniques have been proposed and successfully applied in real systems, there are still many challenging issues which need to be addressed. In recent years, an increasing interest in techniques based on complex networks (large-scale graphs with nontrivial connection patterns) has been observed. This emergence is explained by the inherent advantages provided by the complex network representation, which is able to capture the spatial, topological and functional relations of the data. In this work, we investigate the new features and possible advantages offered by complex networks in the machine learning domain, and we show that the network-based approach brings interesting features to supervised, semisupervised, and unsupervised learning. Specifically, we reformulate a previously proposed particle competition technique for both unsupervised and semisupervised learning using a stochastic nonlinear dynamical system. Moreover, an analytical treatment is supplied, which enables one to predict the behavior of the proposed technique. In addition, data reliability issues are explored in semisupervised learning. Such matters have practical importance but have received little investigation in the literature. With the goal of validating these techniques for solving real problems, simulations on broadly accepted databases are conducted. We also propose a hybrid supervised classification technique that combines low-level and high-level learning. The low-level term can be implemented by any classification technique, while the high-level term is realized by the extraction of features of the underlying network constructed from the input data. Thus, the former classifies the test instances by their physical features, while the latter measures the compliance of the test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to the semantic meaning of the data, but is also able to improve the performance of traditional classification techniques. Finally, it is expected that this study will contribute, in a relevant manner, to the machine learning area.
APA, Harvard, Vancouver, ISO, and other styles
38

Venkatesan, Vaidehi. "Cuisines as Complex Networks." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1321969310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Shen, Xueying. "Complex lot Sizing problem with parallel machines and setup carryover." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED057/document.

Full text
Abstract:
In this thesis, we study two production planning problems motivated by challenging real-world applications. First, a production planning problem for an apparel manufacturing project is studied and an optimization tool is developed to tackle it. Second, a restricted version of the capacitated lot sizing problem with sequence-dependent setups is explored. Various mathematical formulations are developed and a complexity analysis is performed to offer a first analysis of the problem.
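For readers unfamiliar with the model family, the following toy sketch (written with the PuLP library, covering only a single item on a single machine with setup and holding costs) shows the basic structure of a capacitated lot sizing formulation; the thesis itself studies a much richer variant with parallel machines, sequence-dependent setups and setup carryover.

```python
# Toy capacitated lot-sizing sketch with PuLP (single item, single machine);
# only illustrates the basic model structure, not the thesis's formulations.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

T = 4
demand   = [40, 60, 30, 50]
capacity = 80
setup_cost, hold_cost = 100.0, 1.0

m = LpProblem("lot_sizing", LpMinimize)
x = [LpVariable(f"prod_{t}", lowBound=0) for t in range(T)]      # production quantity
s = [LpVariable(f"stock_{t}", lowBound=0) for t in range(T)]     # end-of-period inventory
y = [LpVariable(f"setup_{t}", cat=LpBinary) for t in range(T)]   # setup indicator

m += lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
for t in range(T):
    prev = s[t - 1] if t > 0 else 0
    m += prev + x[t] == demand[t] + s[t]       # inventory flow balance
    m += x[t] <= capacity * y[t]               # produce only if set up, within capacity

m.solve()
print([v.value() for v in x], [v.value() for v in y])
```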
APA, Harvard, Vancouver, ISO, and other styles
40

Yuan, Weifeng. "Greedy tool heuristic for rough milling of complex pockets /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?IEEM%202002%20YUAN.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 48-52). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
41

Hwang, Jung-Taik. "A fragmentation technique for parsing complex sentences for machine translation." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10204.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 110-111).
by Jung-Taik Hwang.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
42

Malasky, Jeremy S. "Human machine collaborative decision making in a complex optimization system." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32514.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005.
Includes bibliographical references (p. 149-151).
Numerous complex real-world applications are either theoretically intractable or unable to be solved in a practical amount of time. Researchers and practitioners are forced to implement heuristics in solving such problems, which can lead to highly sub-optimal solutions. Our research focuses on inserting a human "in the loop" of the decision-making or problem-solving process in order to generate solutions in a timely manner that improve upon those generated either solely by a human or solely by a computer. We refer to this as Human-Machine Collaborative Decision-Making (HMCDM). The typical design process for developing human-machine approaches either starts with a human approach and augments it with decision support, or starts with an automated approach and augments it with operator input. We provide an alternative design process by presenting an HMCDM methodology that addresses collaboration from the outset of the design of the decision-making approach. We apply this design process to a complex military resource allocation and planning problem which selects, sequences, and schedules teams of unmanned aerial vehicles (UAVs) to perform sensing (Intelligence, Surveillance, and Reconnaissance - ISR) and strike activities against enemy targets. Specifically, we examined varying degrees of human-machine collaboration in the creation of variables in the solution of this problem. We also introduce an HMCDM method that combines traditional goal decomposition with a model formulation into an Iterative Composite Variable Approach for solving large-scale optimization problems. Finally, we show through experimentation the potential for improvement in the quality and speed of solutions that can be achieved through the use of an HMCDM approach.
by Jeremy S. Malasky.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
43

Banfield, Robert E. "Learning on complex simulations." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Pashike, Amitesh Kumar Singam and Venkat Raj Reddy. "Low Complex Blind Video Quality Predictor based on Support Vector Machines." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3654.

Full text
Abstract:
Objective video quality assessment plays a vital role in visual processing systems, especially in the mobile communication field, where video applications have boosted interest in robust methods for quality assessment. Among existing methods for video quality analysis, No-Reference (NR) video quality assessment is the one most needed in situations where a reference video is not available. Our challenge lies in formulating and melding effective features into one model based on the characteristics of human visual perception. Our research explores the trade-off between quality prediction accuracy and system complexity. We therefore implemented a support vector regression algorithm as an NR video quality metric (VQM) for quality estimation with simplified input features. The features are extracted from H.264 bitstream data at the decoder side of the network. Our metric achieves a Pearson correlation coefficient of 0.99 with SSIM, 0.98 with PEVQ, 0.96 with subjective scores and 0.94 with PSNR. In terms of prediction accuracy, the proposed model therefore correlates well with all deployed metrics, and the obtained results demonstrate the robustness of our approach. The proposed metric has a good correlation with subjective scores, which suggests that it can be employed in real time, since subjective scores are considered the true or reference values of video quality.
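The core regression step described above can be sketched as follows. The features and the quality target below are synthetic stand-ins (the thesis uses features extracted from H.264 bitstreams and scores such as SSIM or PEVQ); the sketch only shows how an SVR-based predictor is trained and evaluated with a Pearson correlation.

```python
# Sketch: train a support vector regressor to map bitstream-level features to a
# quality score (synthetic stand-in for SSIM), then report Pearson correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 400
qp      = rng.uniform(20, 45, n)       # quantisation parameter (illustrative feature)
bitrate = rng.uniform(0.2, 6.0, n)     # Mbit/s
motion  = rng.uniform(0.0, 1.0, n)     # normalised motion intensity
X = np.column_stack([qp, bitrate, motion])
ssim = 1.0 - 0.012 * (qp - 20) - 0.05 * motion + 0.01 * bitrate + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, ssim, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(X_tr, y_tr)
r, _ = pearsonr(model.predict(X_te), y_te)
print(f"Pearson correlation with (synthetic) SSIM: {r:.3f}")
```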
APA, Harvard, Vancouver, ISO, and other styles
45

Breve, Fabricio Aparecido. "Aprendizado de máquina em redes complexas." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-21092010-104722/.

Full text
Abstract:
Complex networks is a recent and active scientific research field that studies large-scale networks with non-trivial topological structure, such as computer networks, telecommunication networks, transport networks, social networks and biological networks. Many of these networks are naturally divided into communities or modules and, therefore, uncovering their structure is one of the main problems in the study of complex networks. This problem is related to the machine learning field, which is concerned with the design and development of algorithms and techniques that allow computers to learn, or to improve their performance, through experience. Some of the problems identified in traditional learning techniques include: difficulties in identifying irregular forms in the attribute space; uncovering overlapping structures of groups or classes, which occur when elements belong to more than one group or class; and the high computational complexity of some models, which prevents their application to larger data sets. In this work, we address these problems through the development of new machine learning models using complex networks and spatiotemporal dynamics, capable of handling overlapping clusters and classes and of providing membership degrees for each element of the network with respect to each cluster or class. The developed models have performance similar to that of state-of-the-art algorithms, while presenting a lower order of computational complexity than most of them.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhuo, Yue. "Solution studies of protein complexes of the endocytic machinery : a dissertation /." San Antonio : UTHSC, 2007. http://proquest.umi.com/pqdweb?did=1310415421&sid=2&Fmt=2&clientId=70986&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Amil, Marletti Pablo. "Machine learning methods for the characterization and classification of complex data." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/668842.

Full text
Abstract:
This thesis presents novel methods for the analysis and classification of medical images and, more generally, complex data. First, an unsupervised machine learning method is proposed to order anterior chamber OCT (Optical Coherence Tomography) images according to a patient's risk of developing angle-closure glaucoma. In a second study, two outlier finding techniques are proposed to improve the results of the above-mentioned machine learning algorithm; we also show that they are applicable to a wide variety of data, including fraud detection in credit card transactions. In a third study, the topology of the vascular network of the retina is analyzed as a complex tree-like network, and we show that structural differences reveal the presence of glaucoma and diabetic retinopathy. In a fourth study, we use a model of a laser with optical injection that presents extreme events in its intensity time series to evaluate machine learning methods for forecasting such extreme events.
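As a small illustration of the kind of outlier finding mentioned in the second study (the data and the choice of an isolation forest are illustrative assumptions, not the thesis's actual techniques), unusual transaction-like records can be flagged as follows.

```python
# Sketch: unsupervised outlier detection on synthetic transaction-like data
# (amount, hour of day) using an isolation forest; illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal  = np.column_stack([rng.gamma(2.0, 20.0, 500), rng.normal(14, 3, 500)])
unusual = np.array([[950.0, 3.0], [700.0, 4.5]])      # large, late-night amounts
X = np.vstack([normal, unusual])

scores = IsolationForest(random_state=0).fit(X).decision_function(X)
print(np.argsort(scores)[:2])    # the lowest scores flag the injected outliers
```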
APA, Harvard, Vancouver, ISO, and other styles
48

George, David Frederick James. "Reconfigurable cellular automata computing for complex systems on the SPACE machine." University of Western Australia. School of Computer Science and Software Engineering, 2006. http://theses.library.uwa.edu.au/adt-WU2006.0020.

Full text
Abstract:
Many complex natural and man-made systems are inherently concurrent, consisting of many autonomous parts that interact with each other. Cellular automata allow the concurrency and interactions of these complex systems to be modelled. Using a reconfigurable computing platform for running cellular automata models allows the natural concurrency of digital electronics to be directly exploited by the system being modelled. This thesis investigates methods and philosophies for developing cellular automata models on a reconfigurable computing platform, the SPACE machine. Modelling and verification techniques are developed using a process algebra, Circal. These techniques allow the desired behaviour of a system to be specified and simulated. The model is then translated into a digital design, which can be verified as correct against the behavioural model using the Circal system. Three cellular automata systems are used to develop the methods and philosophies. The Game of Life is used to investigate how to model and implement cellular automata on the SPACE machine, and the philosophies and techniques developed for it are used in the following systems. More complex cellular automata models of road traffic are used to further develop the modelling techniques established with the Game of Life. A user interface, originally created for viewing the outputs of the Game of Life, is extended to allow cellular automata cells to be dynamically placed and moved about on the computing surface, allowing the user to observe and modify experiments in real time. A cellular-automata-based cryptography system is then used to further enhance the techniques developed, and particularly to explore the production of dynamically reconfigured circuits as the inputs to the system change. The thesis concludes that there are many real-life complex systems, such as road traffic simulation and cryptography, that require high-performance systems to run on. The methods and philosophies developed in this thesis allow cellular automata systems to be modelled using process algebra and run directly in digital hardware, allowing the natural concurrency of the hardware to be fully exploited.
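Since the Game of Life is the running example in this abstract, a minimal software sketch of one synchronous update is given below (plain numpy on a toroidal grid; it says nothing about the SPACE machine's hardware implementation).

```python
# Sketch: one synchronous update of Conway's Game of Life, the first cellular
# automaton discussed in this abstract (software version, not the hardware one).
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """Apply one Game of Life update on a toroidal grid of 0/1 cells."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours, or is alive with 2
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(life_step(life_step(glider)))   # the glider moves diagonally across the grid
```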
APA, Harvard, Vancouver, ISO, and other styles
49

George, David Frederick James. "Reconfigurable cellular automata computing for complex systems on the SPACE machine /." Connect to this title, 2005. http://theses.library.uwa.edu.au/adt-WU2006.0020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Alabdulkareem, Ahmad. "Analyzing cities' complex socioeconomic networks using computational science and machine learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119325.

Full text
Abstract:
Thesis: Ph. D. in Computational Science & Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 133-141).
By 2050, it is expected that 66% of the world population will be living in cities. The urban growth explosion in recent decades has raised many questions concerning the evolutionary advantages of urbanism, with several theories delving into the multitude of benefits of such efficient systems. This thesis focuses on one important aspect of cities: their social dimension, and in particular the social aspect of their complex socioeconomic fabric (e.g. labor markets and social networks). Economic inequality is one of the greatest challenges facing society today, in tandem with the imminent impact of automation, which can exacerbate this issue. The social dimension plays a significant role in both, with many hypothesizing that social skills will be the last bastion of differentiation between humans and machines, and thus that jobs will become mostly dominated by social skills. Using data-driven tools from network science, machine learning, and computational science, the first question I aim to answer is the following: what role do social skills play in today's labor markets, on both a micro and a macro scale (e.g. individuals and cities)? Second, how could the effects of automation lead to various labor dynamics, and what role would social skills play in combating those effects? Specifically, what is the relation of social skills to career mobility? The answer would inform strategies to mitigate the negative effects of automation and off-shoring on employment. Third, given the importance of the social dimension in cities, what theoretical model can explain such results, and what are its consequences? Finally, given the vulnerabilities to invasion of individuals' privacy demonstrated in previous chapters, how does highlighting those results affect people's interest in privacy preservation, and what are some possible solutions to combat this issue?
by Ahmad Alabdulkareem.
Ph. D. in Computational Science & Engineering
APA, Harvard, Vancouver, ISO, and other styles