Dissertations / Theses on the topic 'Nett proceeds'


1

Turnas, Daniel. "Next generation software process improvement." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FTurnas.pdf.

Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, June 2003.
Thesis advisor(s): Mikhail Auguston, Christopher D. Miles. Includes bibliographical references (p. 59-61). Also available online.
2

Gains, Francesca. "Understanding department : next steps agency relationships." Thesis, University of Sheffield, 1999. http://etheses.whiterose.ac.uk/6028/.

Abstract:
This thesis examines the establishment of 'Next Steps' agencies in government and how they were intended to allow the delivery of government goals at arm's length. The research is concerned with how changes in relationships at the heart of Government can be understood. It seeks to address the impact of these changes on the policy process. It does so by examining the nature of the relationship between departments and agencies and asking why some relationships appeared to have worked well and others have not. These questions are not adequately addressed in the existing literature on agencies. The thesis takes a multiple case study approach and draws on the concepts of historical institutionalism, power dependency and policy networks to approach these questions. It is argued that the introduction and development of agencies changed the formal and informal institutional 'rules of the game', affecting the roles actors expected to play and radically altered the distribution of resources in central government. The changed distribution of resources led to the development of new power dependent networks between departments and agencies. Path dependency in the development of the Next Steps concept led to a tension between the idea of agencies operating at 'arm's length' with the continuation of traditional accountability arrangements. The key argument presented is that, where department-agency networks are based on shared values, goals and institutional support, they will be able to manage the tension created by the new institutional arrangements and are able to successfully deliver government goals. In concluding, it is suggested that understanding department-agency relationships as power dependent networks presents three implications. 
First, these concern the applicability of this analytical framework to other 'institutional arrangements'; second, policy making in the core executive; and finally, insights on normative issues of accountability and autonomy in contemporary governance.
3

Kummailil, John. "Process models for laser engineered net shaping." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-103828.

Abstract:
Thesis (Ph. D.)--Worcester Polytechnic Institute.
Keywords: rapid prototyping; solid freeform fabrication; LENS; laser engineered net shaping; laser; titanium. Includes bibliographical references (p. 83-85).
4

Du, Dechuan [Verfasser]. "Nucleon-nucleon scattering process in Lattice Chiral Effective Field Theory approach up to next-to-next-to-next-to-leading order / Dechuan Du." Bonn : Universitäts- und Landesbibliothek Bonn, 2018. http://d-nb.info/1160594236/34.

5

Chang, Ai-Fu. "Process Modeling of Next-Generation Liquid Fuel Production - Commercial Hydrocracking Process and Biodiesel Manufacturing." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/58043.

Abstract:
This dissertation includes two process modeling studies: (1) predictive modeling of large-scale integrated refinery reaction and fractionation systems from plant data (hydrocracking processes); and (2) integrated process modeling and product design of biodiesel manufacturing.
1. Predictive Modeling of Large-Scale Integrated Refinery Reaction and Fractionation Systems from Plant Data (Hydrocracking Processes): This work presents a workflow to develop, validate, and apply a predictive model for rating and optimization of large-scale integrated refinery reaction and fractionation systems from plant data. We demonstrate the workflow with two commercial processes: a medium-pressure hydrocracking unit with a feed capacity of 1 million tons per year and a high-pressure hydrocracking unit with a feed capacity of 2 million tons per year, both in the Asia Pacific region. This work details the procedure for data acquisition to ensure accurate mass balances, and for implementing the workflow using Excel spreadsheets and a commercial software tool, Aspen HYSYS from Aspen Technology, Inc. The workflow includes special tools to facilitate an accurate transition from the lumped kinetic components used in reactor modeling to the boiling-point-based pseudo-components required in the rigorous tray-by-tray distillation simulation. Two to three months of plant data are used to validate the models' predictability. The resulting models accurately predict unit performance, product yields, and fuel properties from the corresponding operating conditions.
2. Integrated Process Modeling and Product Design of Biodiesel Manufacturing: This work first presents a comprehensive review of the published literature on integrated process modeling and product design of biodiesel manufacturing, and identifies deficient areas for further development. It then presents new modeling tools and a methodology for the integrated process modeling and product design of an entire biodiesel manufacturing train. We demonstrate the methodology by simulating an integrated process to predict reactor and separator performance, stream conditions, and product qualities with different feedstocks. The results show that the methodology is effective not only for the rating and optimization of an existing biodiesel manufacturing process, but also for the design of a new process to produce biodiesel with specified fuel properties.
Ph. D.
6

Jetavat, Dhavalsinh. "Near net shape preforming by 3D weaving process." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/near-net-shape-preforming-by-3d-weaving-process(bb697182-f424-480b-963a-dc49b84425c6).html.

Abstract:
A significant proportion of composite components is currently produced using prepregs cured in an autoclave, which is an expensive and time-consuming process. Dry textile preforms, in conjunction with liquid moulding techniques, can lead to significant reductions in material costs, manufacturing costs, and cycle times. These dry preforms are typically 2D woven or braided fabrics, which also require lay-up and have low interlaminar properties. Through-thickness reinforcement provides a solution to this problem, as it gives better interlaminar properties as well as near net shape preforming. Various 3D preforming methods are discussed and reviewed in this research, from which 3D weaving emerges as the ideal process for developing near net shape preforms with greater efficiency and better material performance. This research highlights the advantages and limitations of conventional 3D weaving processes. A number of approaches for improving the flexibility of the 3D weaving process are presented, including changing the fibre architecture in different sections of the preform, tapering in the width and thickness directions, and finally changing the fibre orientation. It is concluded that multi-step and tapered fabrics can be produced on conventional weaving machinery with some modifications. Furthermore, a novel 3D weaving machine is designed and developed, after a review of various patents and weaving methods, to overcome the limitations of conventional weaving machines. Key criteria arising from the limitations of conventional weaving processes, such as multiple weft insertion, limited warp stuffer movement, and linear take-up, are considered and modified in developing the 3D weaving machine. In order to achieve an isotropic material, two textile technologies are combined to meet the final requirements: 3D weaving provides fibres in the 0° and 90° directions with through-thickness reinforcement, whereas braiding satisfies the requirement for bias-direction fibres. Near net shape preforms, such as tapered and multi-step preforms, are produced and laminated. Preliminary testing is performed on these laminates to evaluate the fibre architectures. Further work is required in terms of machine modification to provide weave-design flexibility for exploring various multilayer weave architectures. Thorough testing is required to evaluate and define structural performance and the effect of fibre damage during the weaving process.
7

Ljungqvist, Ebba, and Johansson Sofia Stegs. "Development of Next Generation Rollator." Thesis, KTH, Maskinkonstruktion (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-182897.

Abstract:
Today the county councils in Sweden prescribe rollators, but in the near future the retail market for rollators is expected to grow, along with increasing pressure from customer requirements. Among users there is a growing awareness of, and desire for, a rollator that keeps pace with the development of society, which the current rollators on the market do not. Customer demands have therefore started to change and will increase with the coming generations, a result of the availability of all types of information on the subject. This means there is a growing gap in the market, which leaves room for developing a new product that keeps up with technological progress. The goal was to deliver a sound concept proposal for a new rollator that, with further development, could be taken on to the next phase of development and production. The work also aims to clarify the next generation's customer demands, identify the issues surrounding today's prescription of rollators, and clarify the values embedded in the product. The work has been based on a method called Pretzel, a product development process created by the company Scalae. Mapping of society's values and norms connected to rollators shows that the biggest problem among people is late acceptance of the aid, which results in late prescription of rollators. This means that avoidable injuries often occur, which is unfortunate not only for the injured person but also a large cost for society, especially in comparison with the prescription cost of a rollator. These problems are deeply rooted in values closely connected to the slow development of rollators. Technological development today is extremely fast, and new products are launched on the market at ever-increasing speed. For example, it would be very unusual to have the same model of mobile phone as your mother had when she was young, but when it comes to rollators it is quite possible that you will get the same model as your grandmother. To decrease resistance to rollators, the final concept is inspired by existing products on the market that are already accepted by society, although in other contexts, such as strollers and shopping carts. By using design and functions that the user is already familiar with, acceptance is reached faster and the risk of injury is reduced. The final concept is called the Dramator and is a hybrid between a shopping cart and a rollator.
8

Ramidan, Marco Antonio da Silva. "The gully process study next to the Itumbiara hydroelectric complex - GO." Pontifícia Universidade Católica do Rio de Janeiro, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4372@1.

Abstract:
This dissertation, developed within the context of the PRONEX Project of the Environmental Geotechnical Center of PUC-Rio and through a FURNAS-PUC-Rio agreement, presents a contribution towards the identification and comprehension of the mechanisms involved in erosion processes, considering their geological and geotechnical aspects as well as preventive repair measures in the case of a specific gully formation. The evolution of the studied erosive process has its origin in the mechanical removal of some five meters of a clayey soil layer from a borrow area exploited at the time of the construction of a homogeneous embankment dam belonging to the Itumbiara Hydroelectric Complex. The main aspects of the studied area, such as its location, climatic conditions, and soil and vegetation types, were considered in the development of the work, besides regional geological aspects and geological-geotechnical features of the area affected by the erosion processes. Taking as a basis the unsaturated weathering profile identified by inspection of the walls of the gully formation, as well as samples from SPT (standard penetration test) boreholes, four types of soil layers were taken as representative of site conditions. Aiming at the identification, classification, and definition of the erodibility potential of these materials, specimens from undisturbed block samples (as well as remolded ones) were submitted to laboratory investigations comprising: conventional soil characterization tests; the MCT characterization test (mini-MCV); the crumb test; the disaggregation test; the pinhole test; the Inderbitzen test; permeability tests (also performed in the field); chemical analysis of both soil and pore water; mineralogical analysis (X-ray diffraction); tensile strength tests (under different saturation conditions); and the filter paper test (for definition of soil-moisture characteristic curves). Based on the results of the laboratory tests, field observations, and data from pore-pressure monitoring (through piezometers installed at the site), the erosion mechanisms that may prevail in the area were identified as micro-rills, rills, gullies, and others. At the end of the work, suggestions are also presented for remediation of the site, considering corrective measures used within the context of conventional geotechnical practice, and a methodology to be followed in further investigations related to the characterization of the site and the development of erosion processes.
9

Etheridge, Tom. "A structural design process for a next generation aerospace design environment." Thesis, University of Southampton, 2005. https://eprints.soton.ac.uk/72036/.

Abstract:
The current structural sizing process used to design military aircraft was developed when the emphasis was on designing the most advanced products possible, with the customer bearing the associated development risks. The marketplace, however, has evolved into one where the customer expects 'better, cheaper, faster' products at a lower degree of risk, and it is not clear whether the current structural design processes meet the needs of this type of market. This work argues that the current proprietary process should be replaced by one that is more flexible, allowing the company to adapt its structural sizing process to the needs of a particular product. It includes a study of the current and future engineering environment within a 'typical' airframe design organisation, examines the current use of structural optimisation technology throughout the design lifecycle, and identifies barriers to the potential benefits of wider use. Two existing elements of the organisation's in-house toolset were adapted to size components and the results compared against the literature, providing insight into the toolset and the development of proprietary tools. Finally, a multilevel 'global-local' sizing approach was developed and studied as an alternative to the current, more tightly coupled, somewhat 'monolithic' sizing system. Strength, stability, and stiffness design criteria were considered. Automation of the process was also considered and compared against the existing sizing process. It was found that the current labour-intensive sizing process could be improved upon using some simple techniques. Based on this, a future structural sizing process is suggested which could be implemented using in-house or commercially available tools.
10

Carpio, Alvarez Gustavo Andrés, and Espinoza Denis Rolando Lopez. "Mejora en el proceso de reparación de vehículos en la empresa Carpio SAC implementando la metodología Lean Logistcs para mejorar el Net Promoter Score del servicio de Postventa." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/656649.

Abstract:
This research was carried out at the company Carpio SAC, where an in-depth analysis determined that it had a low NPS of 68%. The areas most affected were the technical service (workshop) and customer service, and a search was conducted for the main causes of the problem. In order to prepare proposals to eliminate these causes, a literature review was carried out to identify the main tools for mitigating the impacts on time, productivity, and inventory management in the automotive sector. Once the concepts were defined, the contribution of the improvement proposal was established: the creation of a lean logistics methodology supported by tools such as 5S, inventory management, and standardized work. The proposal was then implemented in the aforementioned areas, and the initial results were compared against the final results after implementation, in order to evaluate whether the NPS indicator exceeded the 95% target set by the parent company, Volkswagen.
Thesis
11

Trapletti, Adrian, Friedrich Leisch, and Kurt Hornik. "On the ergodicity and stationarity of the ARMA (1,1) recurrent neural network process." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/652/1/document.pdf.

Abstract:
In this note we consider the autoregressive moving average recurrent neural network ARMA-NN(1,1) process. We show that, in contrast to the pure autoregressive case, there exist simple ARMA-NN processes which are not irreducible. We prove that controllability of the linear part of the process is sufficient for irreducibility. For the irreducible process, it is essentially the shortcut weight corresponding to the autoregressive part that determines whether the overall process is ergodic and stationary.
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
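As a rough illustration of the kind of process this paper studies, the following Python sketch simulates an ARMA-NN(1,1)-style recurrence. The exact functional form, parameter names, and values here are assumptions for illustration, not taken from the paper; the point is that with the linear shortcut weight below one in absolute value, the simulated path stays bounded, consistent with the stabilizing role the abstract attributes to that weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def arma_nn_11(n, shortcut, beta, gamma, theta, sigma=1.0):
    # Hypothetical ARMA-NN(1,1)-style recurrence (illustrative form):
    #   x_t = shortcut * x_{t-1} + beta * tanh(gamma * x_{t-1})
    #         + theta * e_{t-1} + e_t
    # 'shortcut' is the linear autoregressive weight that the abstract
    # says governs ergodicity and stationarity.
    x = np.zeros(n)
    e_prev = 0.0
    for t in range(1, n):
        e = rng.normal(0.0, sigma)
        x[t] = (shortcut * x[t - 1] + beta * np.tanh(gamma * x[t - 1])
                + theta * e_prev + e)
        e_prev = e
    return x

# With |shortcut| < 1 the simulated path stays bounded (stationary regime).
path = arma_nn_11(5000, shortcut=0.6, beta=0.5, gamma=1.0, theta=0.3)
print(path[-5:])
```

Setting the shortcut weight above one in this toy model makes the path diverge, mirroring the ergodicity boundary discussed in the paper.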
12

Vupputuri, Naga Durga Murali Mohan. "Catalyst for accelerating recruitment process." Kansas State University, 2011. http://hdl.handle.net/2097/10721.

Abstract:
Master of Science
Department of Computing and Information Sciences
Mitchell L. Neilsen
The hiring process in an organization is crucial, and it consumes significant cost and time. I developed a catalyst to accelerate the recruiting process while ensuring optimal use of time and cost. Keeping track of all applicants throughout the recruiting process is cumbersome. This catalyst for accelerating the recruitment process is an interactive web application that helps multinational corporations and organizations keep track of the recruiting steps for different positions, the reasons for creating a new position, the history of all potential applicants, and the employer's feedback, along with email communication to all the people involved. A major emphasis of the web application is ensuring that no applicant in the recruiting process is terminated unconditionally due to manual errors or miscommunication. Initially, a position is created by a line manager or a branch manager, specifying the requirements and duties. Initial applications are shortlisted by the recruiter, and interviews are scheduled with panelists. The web application is not involved in decision making; rather, it provides a framework and sequence to follow in order to extend an offer to a potential candidate. The web application follows a 3-tier architecture and is developed with ASP.NET. ASP.NET web forms, HTML, CSS, and JavaScript provide a rich front end; VB.NET classes provide the business logic; and Microsoft SQL Server serves as the data layer. In addition, an SMTP mail server is used to send mail to the HR manager, line manager, panelists, and candidates.
13

Tlolka, Martin. "Systém pro podporu procesního řízení." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236551.

Abstract:
Process management is used by companies more frequently than ever, and there are many systems for process support, some free and some requiring payment. This thesis is about creating the core of a process management system. It contains a theoretical analysis of process management and a comparison of several process management systems. The main part of the thesis consists of the design, implementation, and testing of a process management system. The contribution of this work is the core of a process management system and a web application that enables user testing of the system.
14

Hesketh, Martin. "Synthesis and axiomatisation for structural equivalences in the Petri Box Calculus." Thesis, University of Newcastle Upon Tyne, 1998. http://hdl.handle.net/10443/1993.

Abstract:
The Petri Box Calculus (PBC) consists of an algebra of box expressions, and a corresponding algebra of boxes (a class of labelled Petri nets). A compositional semantics provides a translation from box expressions to boxes. The synthesis problem is to provide an algorithmic translation from boxes to box expressions. The axiomatisation problem is to provide a sound and complete axiomatisation for the fragment of the calculus under consideration, which captures a particular notion of equivalence for boxes. There are several alternative ways of defining an equivalence notion for boxes, the strongest one being net isomorphism. In this thesis, the synthesis and axiomatisation problems are investigated for net semantic isomorphism, and a slightly weaker notion of equivalence, called duplication equivalence, which can still be argued to capture a very close structural similarity of the concurrent systems the boxes are supposed to represent. In this thesis, a structured approach to developing a synthesis algorithm is proposed, and it is shown how this may be used to provide a framework for the production of a sound and complete axiomatisation. This method is used for several different fragments of the Petri Box Calculus, and for generating axiomatisations for both isomorphism and duplication equivalence. In addition, the algorithmic problems of checking equivalence of boxes and box expressions, and generating proofs of equivalence are considered as extensions to the synthesis algorithm.
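For readers unfamiliar with the Petri nets underlying boxes, the standard token-game semantics of an ordinary place/transition net can be sketched in a few lines. This is the generic firing rule, not the PBC-specific box semantics, and the place and transition names are purely illustrative:

```python
# Minimal token-game sketch for an ordinary place/transition net.

def enabled(marking, pre):
    # A transition is enabled iff every input place carries at least
    # as many tokens as its pre-condition requires.
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Firing consumes the pre-condition tokens and produces the
    # post-condition tokens, returning the successor marking.
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

m0 = {"p1": 1, "p2": 0}
t_pre, t_post = {"p1": 1}, {"p2": 1}
assert enabled(m0, t_pre)
m1 = fire(m0, t_pre, t_post)
print(m1)  # {'p1': 0, 'p2': 1}
```

Equivalences such as net isomorphism and duplication equivalence compare the structure of such nets, not just the markings they can reach.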
15

Dallas, P. S. "Computer control of continuous and batch processes using a Petri-net interpreter." Thesis, University of Bradford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375108.

16

Goodrick, James W. "T-NET and the disciple-making process at Troutdale Community Church." Theological Research Exchange Network (TREN), 1996. http://www.tren.com.

17

Folkesson, John. "Projection of a Markov Process with Neural Networks." Thesis, KTH, Centrum för Autonoma System, CAS, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183498.

Abstract:
In this work we examine an application from the insurance industry. We first reformulate it as a problem of projecting a Markov process, and then develop a method of carrying out the projection many steps into the future using a combination of neural networks trained with a maximum entropy principle. This methodology improves on the current industry-standard solution in four key areas: variance, bias, confidence-level estimation, and the use of inhomogeneous data. The neural network aspects of the methodology include the use of a generalization error estimate that does not rely on a validation set. We also develop our own approximation to the Hessian matrix, which appears to be significantly better than assuming it to be diagonal and much faster than calculating it exactly; this Hessian is used in the network pruning algorithm. The parameters of a conditional probability distribution were generated by a neural network trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks, we devised a scheme to decorrelate input dimensions completely, even nonlinear correlations, which should be of general interest in its own right. The results indicate that the bias inherent in the current industry-standard projection technique is very significant; this work may be the only accurate measurement of this important source of error.
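The baseline notion of projecting a Markov process can be sketched as repeated application of a fixed transition kernel; the thesis's contribution is to learn the conditional distribution with neural networks rather than fix a matrix. A toy Python illustration (the three-state transition matrix is invented for the example):

```python
import numpy as np

# Toy three-state chain; the transition matrix is invented for this example.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])

def project(pi0, P, steps):
    # Project the state distribution 'steps' transitions ahead:
    # pi_k = pi_0 @ P^k.  The thesis replaces the fixed matrix P with a
    # neural-network-parameterised conditional distribution.
    pi = np.asarray(pi0, dtype=float)
    for _ in range(steps):
        pi = pi @ P
    return pi

# Start with all mass in state 0 and project ten transitions ahead.
pi10 = project([1.0, 0.0, 0.0], P, 10)
print(pi10)  # still a probability vector: nonnegative, sums to 1
```

Errors in an estimated kernel compound over many steps, which is why the variance and bias of the projection method matter so much in this setting.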
APA, Harvard, Vancouver, ISO, and other styles
18

Rodríguez, César. "Verification based on unfoldings of Petri nets with read arcs." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00927064.

Full text
Abstract:
Humans make mistakes, especially when faced with complex tasks, such as the construction of modern hardware or software. This thesis focuses on machine-assisted techniques to guarantee that computers behave correctly. Modern computer systems are large and complex. Automated formal verification stands as an alternative to testing or simulation for ensuring their reliability. It essentially proposes to employ computers to exhaustively check the system behavior. Unfortunately, automated verification suffers from the state-space explosion problem: even relatively small systems can reach a huge number of states. Using the right representation for the system behavior seems to be a key step to tackle the inherent complexity of the problems that automated verification solves. The verification of concurrent systems poses additional issues, as their analysis requires to evaluate, conceptually, all possible execution orders of their concurrent actions. Petri net unfoldings are a well-established verification technique for concurrent systems. They represent behavior by partial orders, which not only is natural but also efficient for automatic verification. This dissertation focuses on the verification of concurrent systems, employing Petri nets to formalize them, and studies two prominent verification techniques: model checking and fault diagnosis. We investigate the unfoldings of Petri nets extended with read arcs. The unfoldings of these so-called contextual nets seem to be a better representation for systems exhibiting concurrent read access to shared resources: they can be exponentially smaller than conventional unfoldings in these cases. Theoretical and practical contributions are made. We first study the construction of contextual unfoldings, introducing algorithms and data structures that enable their efficient computation.
We integrate contextual unfoldings with merged processes, another representation of concurrent behavior that alleviates the explosion caused by non-determinism. The resulting structure, called contextual merged processes, is often orders of magnitude smaller than unfoldings, as we experimentally demonstrate. Next, we develop verification techniques based on unfoldings. We define SAT encodings for the reachability problem in contextual unfoldings, thus solving the problem of detecting cycles of asymmetric conflict. Also, an unfolding-based decision procedure for fault diagnosis under fairness constraints is presented, in this case only for conventional unfoldings. Finally, we implement our verification algorithms, aiming at producing a competitive model checker intended to handle realistic benchmarks. We subsequently evaluate our methods over a standard set of benchmarks and compare them with existing unfolding-based techniques. The experiments demonstrate that reachability checking based on contextual unfoldings outperforms existing techniques in a wide range of cases. This suggests that contextual unfoldings, and asymmetric event structures in general, have a rightful place in research on concurrency, also from an efficiency point of view.
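The role of read arcs can be illustrated with a minimal marking-based interpreter; the net below is a made-up example, not one drawn from the dissertation:

```python
# A minimal Petri-net interpreter with read arcs, sketched for illustration.
# A read arc requires a token in its place to enable the transition but does
# not consume it -- modeling concurrent read access to a shared resource.
from dataclasses import dataclass, field

@dataclass
class Transition:
    consume: dict                            # place -> tokens consumed
    produce: dict                            # place -> tokens produced
    read: set = field(default_factory=set)   # places only inspected

def enabled(marking, t):
    return (all(marking.get(p, 0) >= n for p, n in t.consume.items())
            and all(marking.get(p, 0) >= 1 for p in t.read))

def fire(marking, t):
    assert enabled(marking, t)
    m = dict(marking)
    for p, n in t.consume.items():
        m[p] -= n
    for p, n in t.produce.items():
        m[p] = m.get(p, 0) + n
    return m

# Two readers share a config token via read arcs; neither consumes it.
reader1 = Transition(consume={"idle1": 1}, produce={"done1": 1}, read={"config"})
reader2 = Transition(consume={"idle2": 1}, produce={"done2": 1}, read={"config"})
m0 = {"config": 1, "idle1": 1, "idle2": 1}
m1 = fire(m0, reader1)
m2 = fire(m1, reader2)   # still enabled: the config token was only read
print(m2)
```

Because neither firing removes the `config` token, the two reads are causally unordered, which is exactly the concurrency that contextual unfoldings represent compactly.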
APA, Harvard, Vancouver, ISO, and other styles
19

Power, Yvonne. "The development of an integrated process operation management system." PhD thesis, Murdoch University, 2004. http://researchrepository.murdoch.edu.au/266/.

Full text
Abstract:
This project details the development of a new framework known as the Coordinated Knowledge Management method to enable complete task integration of all low- and mid-level tasks for process industries. The framework overcomes past problems of task integration, which made a fully integrated system impossible, with integration limited to data acquisition, regulatory control and occasionally supervisory control. The main component of the project is the use of hierarchically structured timed place Petri nets, which have not previously been used for integrating tasks in intelligent process operations management. Tasks which have been integrated include all low-level tasks such as data acquisition, regulatory control and data reconciliation, and all mid-level tasks including supervisory control and, most significantly, the integration of process monitoring, fault detection and diagnosis. The Coordinated Knowledge Management method makes use of hierarchical timed place Petri nets to (i) coordinate tasks, (ii) monitor the system, (iii) activate tasks, (iv) send requests for data updates and (v) receive notice when tasks are complete. Visualization of the state of the system is achieved through the moving tokens in the Petri net. The integration Petri nets are generic enough to be applied to any plant for integration using existing modules, thus allowing the integration of different tasks which use different problem-solving methodologies. Integrating tasks into an intelligent architecture has been difficult to achieve in the past, since the developed framework must be able to take into account information flow and timing in a continuously changing environment. In this thesis Petri nets have been applied to continuous process operations rather than to batch processes as in the past.
In a continuous process, raw materials are fed and products are delivered continuously at known flow-rates and the plant is generally operated at steady state (Gu and Bahri, 2002). However, even in a continuous process, data is received from the distributed control system (DCS) at discrete time intervals. By transforming this data into process events, a Petri net can be used for overseeing process operations. The use of hierarchical Petri nets as the coordination mechanism introduces inherent hierarchy without the rigidity of previous methods. Petri nets are used to model the conditions and events occurring within the system and modules. This enables the development of a self-monitoring system which takes into account information flow and timing in a continuously changing environment. Another major obstacle to the integration of tasks in the past has been the presence of faults in the process. The project included the integration of fault detection and diagnosis, a component not integrated into current systems but one that is necessary to prevent abnormal plant operation. A novel two-step supervisory fault detection and diagnosis framework was developed and tested for the detection and diagnosis of faults in large-scale systems, using condition-event nets for fault detection and Radial Basis Function neural networks for fault diagnosis. This fault detection and diagnosis methodology detects and diagnoses faults in the early stages of fault occurrence, before fault symptoms propagate throughout the plant. The Coordinated Knowledge Management method and the newly developed fault diagnosis module were developed in G21 and applied and tested on the Separation and Heating sections of the Pilot plant for the Bayer process at the School of Engineering Science, Murdoch University. Testing indicated that the use of an intelligent system comprising Petri nets for the integration of tasks results in improved plant performance and makes the plant easier to monitor, increasing profits.
The fault detection and diagnosis module was found to be useful in detecting faults very early on and diagnosing the exact location of faults, which would otherwise prove to be difficult to detect. This would also increase plant safety, reduce wastage and improve environmental considerations of the plant.
APA, Harvard, Vancouver, ISO, and other styles
20

Slifko, Matthew D. "The Cauchy-Net Mixture Model for Clustering with Anomalous Data." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93576.

Full text
Abstract:
We live in the data explosion era. The unprecedented amount of data offers a potential wealth of knowledge but also brings about concerns regarding ethical collection and usage. Mistakes stemming from anomalous data have the potential for severe, real-world consequences, such as when building prediction models for housing prices. To combat anomalies, we develop the Cauchy-Net Mixture Model (CNMM). The CNMM is a flexible Bayesian nonparametric tool that employs a mixture between a Dirichlet Process Mixture Model (DPMM) and a Cauchy distributed component, which we call the Cauchy-Net (CN). Each portion of the model offers benefits, as the DPMM eliminates the limitation of requiring a fixed number of components and the CN captures observations that do not belong to the well-defined components by leveraging its heavy tails. Through isolating the anomalous observations in a single component, we simultaneously identify the observations in the net as warranting further inspection and prevent them from interfering with the formation of the remaining components. The result is a framework that allows for simultaneously clustering observations and making predictions in the face of the anomalous data. We demonstrate the usefulness of the CNMM in a variety of experimental situations and apply the model for predicting housing prices in Fairfax County, Virginia.
Doctor of Philosophy
We live in the data explosion era. The unprecedented amount of data offers a potential wealth of knowledge but also brings about concerns regarding ethical collection and usage. Mistakes stemming from anomalous data have the potential for severe, real-world consequences, such as when building prediction models for housing prices. To combat anomalies, we develop the Cauchy-Net Mixture Model (CNMM). The CNMM is a flexible tool for identifying and isolating the anomalies, while simultaneously discovering cluster structure and making predictions among the nonanomalous observations. The result is a framework that allows for simultaneously clustering and predicting in the face of the anomalous data. We demonstrate the usefulness of the CNMM in a variety of experimental situations and apply the model for predicting housing prices in Fairfax County, Virginia.
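The intuition behind the Cauchy "net" can be sketched in a few lines: far from a cluster's center the heavy-tailed Cauchy density dominates a Gaussian density, so extreme observations fall into the net. This is only a simplification of the idea, not the full Bayesian nonparametric CNMM:

```python
import math

# Far from the center, the Cauchy density dominates a Gaussian cluster
# density, so outliers are captured by the heavy-tailed "net" component.
def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    return 1.0 / (math.pi * gamma * (1 + ((x - x0) / gamma) ** 2))

def assign(x):
    """Hard-assign a point to the Gaussian cluster or the Cauchy net."""
    return "cluster" if normal_pdf(x) > cauchy_pdf(x) else "net"

print(assign(0.5))   # near the center
print(assign(8.0))   # extreme observation
```

In the full model this comparison is soft (posterior component probabilities) and the number of Gaussian components is unbounded, but the tail-dominance mechanism is the same.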
APA, Harvard, Vancouver, ISO, and other styles
21

Jalali, Amin. "Foundation of Aspect Oriented Business Process Management." Thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-118871.

Full text
Abstract:
Reducing the complexity in information systems is a main concern for researchers. Separation of concerns, also known as the principle of ‘divide and conquer’, has long been a strategy for dealing with complexity. Two examples of the application of this principle in the area of information system design are the breaking out of data management into Database Management Systems (DBMSs) and the separation of business logic from application logic into Business Process Management Systems (BPMSs). However, separation of cross-cutting concerns from the core concern of a business process is not yet supported in the Business Process Management (BPM) area. The aspect-oriented principle recommends such a separation. When looking into a business process, several concerns, such as security and privacy, can be identified. Therefore, a formal model that provides a foundation for enabling BPMSs to support separation of concerns in the BPM area is needed. This thesis provides a formal model for dealing with separation of concerns in the BPM area. Implementing this model in BPMSs would facilitate the design and implementation of business processes with a lower level of complexity, which in turn would reduce the costs associated with BPM projects. The thesis starts with a literature review on aspect orientation both in programming and in the BPM area. Based on this study, a list of requirements for an aspect-oriented service for BPMSs is compiled. Then a formal model for such a service, fulfilling a set of these requirements, is designed using Coloured Petri Nets and implemented in CPN Tools. The model is evaluated through the execution of a number of scenarios. The solution is also validated through an industrial case study. The results of the case study are presented and the direction for future work is outlined. The case study demonstrates that separation of concerns through aspect orientation does indeed reduce the complexity of business process models.
APA, Harvard, Vancouver, ISO, and other styles
22

La, Rosa Marcello. "Managing variability in process-aware information systems." Queensland University of Technology, 2009. http://eprints.qut.edu.au/20531/.

Full text
Abstract:
Configurable process models are integrated representations of multiple variants of a process model in a given domain, e.g. multiple variants of a shipment-to-delivery process in the logistics domain. Configurable process models provide a basis for managing variability and for enabling reuse of process models in Process-Aware Information Systems. Rather than designing process models from scratch, analysts can derive process models by configuring existing ones, thereby reusing proven practices. This thesis starts with the observation that existing approaches for capturing and managing configurable process models suffer from three shortcomings that affect their usability in practice. Firstly, configuration in existing approaches is performed manually and as such it is error-prone. In particular, analysts are left with the burden of ensuring the correctness of the individualized models. Secondly, existing approaches suffer from a lack of decision support for the selection of configuration alternatives. Consequently, stakeholders involved in the configuration of process models need to possess expertise both in the application domain and in the modeling language employed. This assumption represents an adoption obstacle in domains where users are unfamiliar with modeling notations. Finally, existing approaches for configurable process modeling are limited in scope to control-flow aspects, ignoring other equally important aspects of process models such as object flow and resource management. Following a design science research method, this thesis addresses the above shortcomings by proposing an integrated framework to manage the configuration of process models. 
The framework is grounded on three original and interrelated contributions: (i) a conceptual foundation for correctness-preserving configuration of process models; (ii) a questionnaire-driven approach for process model configuration, providing decision support and abstraction from modeling notations; (iii) a meta-model for configurable process models covering control-flow, data objects and resources. While the framework is language-independent, an embodiment of the framework in the context of a process modeling language used in practice is also developed in this thesis. The framework was formally defined and validated using four scenarios taken from different domains. Moreover, a comprehensive toolset was implemented to support the validation of the framework.
APA, Harvard, Vancouver, ISO, and other styles
23

Tiwari, Railesha. "A Decision-Support Framework for Design of Non-Residential Net-Zero Energy Buildings." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73301.

Full text
Abstract:
Designing Net-Zero Energy Buildings (NZEB) is a complex and collaborative team process involving knowledge sharing of experts leading to the common goal of meeting the Net-Zero Energy (NZE) project objectives. The decisions made in the early stages of design drastically affect the final outcome of design and energy goals. The Architecture, Engineering and Construction (AEC) industry is pursuing ways to improve the current building design process and project delivery methods for NZEBs. To enable the building industry to improve the building design process, it is important to identify the gaps, ways of improvement and potential opportunities to structure the decision-making process for the purpose of NZE performance outcome. It is essential to identify the iterative phases of design decisions between the integrated team of experts for the design processes conducted in these early stages to facilitate the decision-making of NZEB design. The lack of a structured approach to help the AEC industry in making informed decisions for the NZEB context establishes the need to evaluate the argumentation of the NZEB design decision process. The first step in understanding the NZEB design decision process is to map the current processes in practice that have been successful in achieving the NZE goal. Since the energy use performance goal drives the design process, this research emphasizes first the need to document, in detail, and investigate the current NZEB design process with knowledge mapping techniques to develop an improved process specific to NZEB context. In order to meet this first objective, this research qualitatively analyzed four NZEB case studies that informed decision-making in the early design phases. 
The four components that were studied in the early design phases included (1) key stakeholders involved (roles played), (2) phases of assessments (design approach), (3) processes (key processes, sub-processes and design activities affecting performance) and (4) technology (knowledge type and flow). A series of semi-structured, open-ended interviews were conducted with the key decision-makers and decision facilitators to identify their roles in the early design processes, the design approach adopted, rationale for decision-making, types of evaluations performed, and tools used for analysis. The qualitative data analysis was performed through content analysis and cognitive mapping techniques. Through this process, the key phases of decision-making were identified that resulted in understanding of the path to achieving NZE design goal and performance outcome. The second objective of this research was to identify the NZE decision nodes through a comparative investigation of the case studies. This research also explored the key issues specific to each stakeholder group. The inter-relationships between the project objectives, decision context, occupants usage patterns, strategies and integrated systems, building operation and renewable energy production was identified through a series of knowledge maps and visual process models leading to the identification of the key performance indicators. This research reviewed the similarities and differences in the processes to identify significant opportunities that can improve the early building design process for NZEBs. This research identifies the key decision phases used by the integrated teams and describes the underlying structure that can change the order of key phases.
A process mapping technique was adapted to capture the practice-based complex NZEB design approach and draw insights into the teamwork and interdisciplinary communication to enable more comprehensive understanding of linkages between processes, sub-processes and design activities, knowledge exchange, and decision rationale. Key performance indicators identified for the early design of NZEBs resulted in a decision-support process model that can help the AEC industry in making informed decisions. This dissertation helps improve understanding of linkages between processes, decision nodes and decision rationale to enable industry-wide NZEB design process assessment and improvement. This dissertation discusses the benefits the proposed NZEB design process model brings to the AEC industry and explores future development efforts.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Tang, Man. "Statistical methods for variant discovery and functional genomic analysis using next-generation sequencing data." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104039.

Full text
Abstract:
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data, allowing the identification of biomarkers in early disease diagnosis and driving the transformation of most disciplines in biology and medicine. Greater focus is needed on developing novel, powerful, and efficient tools for NGS data analysis. This dissertation focuses on modeling ``omics'' data in various NGS applications with a primary goal of developing novel statistical methods to identify sequence variants, find transcription factor (TF) binding patterns, and decode the relationship between TF and gene expression levels. Accurate and reliable identification of sequence variants, including single nucleotide polymorphisms (SNPs) and insertion-deletion polymorphisms (INDELs), plays a fundamental role in NGS applications. Existing methods for calling these variants often make a simplifying assumption of positional independence and fail to leverage the dependence of genotypes at nearby loci induced by linkage disequilibrium. We propose vi-HMM, a hidden Markov model (HMM)-based method for calling SNPs and INDELs in mapped short read data. Simulation experiments show that, under various sequencing depths, vi-HMM outperforms existing methods in terms of sensitivity and F1 score. When applied to the human whole genome sequencing data, vi-HMM demonstrates higher accuracy in calling SNPs and INDELs. One important NGS application is chromatin immunoprecipitation followed by sequencing (ChIP-seq), which characterizes protein-DNA relations through genome-wide mapping of TF binding sites. Multiple TFs, binding to DNA sequences, often show complex binding patterns, which indicate how TFs with similar functionalities work together to regulate the expression of target genes. To help uncover the transcriptional regulation mechanism, we propose a novel nonparametric Bayesian method to detect the clustering pattern of multiple-TF bindings from ChIP-seq datasets.
A simulation study demonstrates that our method performs best with regard to precision, recall, and F1 score, in comparison to traditional methods. We also apply the method on real data and observe several TF clusters that have been recognized previously in mouse embryonic stem cells. Recent advances in ChIP-seq and RNA sequencing (RNA-Seq) technologies provide more reliable and accurate characterization of TF binding sites and gene expression measurements, which serve as a basis to study the regulatory functions of TFs on gene expression. We propose a log-Gaussian Cox process with a wavelet-based functional model to quantify the relationship between TF binding site locations and gene expression levels. Through the simulation study, we demonstrate that our method performs well, especially with large sample size and small variance. It also shows a remarkable ability to distinguish real local features in the function estimates.
Doctor of Philosophy
The development of high-throughput next-generation sequencing (NGS) techniques produces massive amounts of data and drives innovation in biology and medicine. Greater focus is needed on developing novel, powerful, and efficient tools for NGS data analysis. In this dissertation, we mainly focus on three problems closely related to NGS and its applications: (1) how to improve variant calling accuracy, (2) how to model transcription factor (TF) binding patterns, and (3) how to quantify the contribution of TF binding to gene expression. We develop novel statistical methods to identify sequence variants, find TF binding patterns, and explore the relationship between TF binding and gene expression. We expect our findings will be helpful in promoting a better understanding of disease causality and facilitating the design of personalized treatments.
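The HMM machinery underlying variant callers such as vi-HMM can be illustrated with a textbook Viterbi decoder; the two-state model, emissions, and probabilities below are invented for the example, not taken from the dissertation:

```python
import math

# Textbook Viterbi decoding over a toy two-state "genotype" HMM. Persistence
# in the transition matrix loosely mimics linkage-disequilibrium-style
# dependence between nearby loci; all numbers are illustrative.
states = ("ref", "variant")
start = {"ref": 0.95, "variant": 0.05}
trans = {"ref": {"ref": 0.99, "variant": 0.01},
         "variant": {"ref": 0.10, "variant": 0.90}}
emit = {"ref": {"match": 0.95, "mismatch": 0.05},
        "variant": {"match": 0.30, "mismatch": 0.70}}

def viterbi(obs):
    """Most likely hidden state path for an observation sequence."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            col[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][o])
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

obs = ["match", "match", "mismatch", "mismatch", "match"]
print(viterbi(obs))
```

The point of decoding jointly rather than per-position is visible here: isolated mismatches are explained away by the sticky `ref` state instead of being called as variants one locus at a time.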
APA, Harvard, Vancouver, ISO, and other styles
25

Salas, Sánchez Julio. "El valor neto negativo del bloque patrimonial que se transfiere en los procesos de reorganización societaria." IUS ET VERITAS, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/122445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

ONOGI, Katsuaki, Tomoyuki YAJIMA, Susumu HASHIZUME, and Takashi ITO. "Integration between Scheduling and Design of Batch Systems Based on Petri Net Models." Institute of Electronics, Information and Communication Engineers, 2005. http://hdl.handle.net/2237/14964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Silva, Luciane de Fátima. "Detecção e correção de situações de deadlock em workflow nets interorganizacionais." Universidade Federal de Uberlândia, 2014. https://repositorio.ufu.br/handle/123456789/12555.

Full text
Abstract:
In this work, an approach based on deadlock avoidance in Interorganizational WorkFlow nets is proposed. Interorganizational business processes are modeled by Interorganizational WorkFlow nets. Deadlock situations in interorganizational business processes are generally related to losses during message exchanges between several business processes. Within Petri net theory, a deadlock situation is characterized by the presence of a siphon that can become empty. After detecting and controlling the siphon structures that lead to deadlock situations in Interorganizational WorkFlow nets, a method for the design of deadlock-free Interorganizational WorkFlow nets is proposed. In particular, the basic principle is to define new WorkFlow nets shared among the original workflow processes that allow one to remove the scenarios responsible for the deadlocks.
Master in Computer Science
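The structural notion at the heart of this analysis, a siphon that once empty stays empty, can be checked by brute force on small nets. A minimal sketch over a made-up two-party message-passing net, not one of the thesis's workflows:

```python
from itertools import combinations

# A siphon is a set of places S with preset(S) a subset of postset(S):
# every transition that feeds S also consumes from S, so once S loses all
# its tokens it can never regain them -- a structural deadlock witness.
# Transitions are given as (input places, output places).
transitions = {
    "send":    ({"p_wait_a"}, {"p_msg"}),
    "receive": ({"p_msg", "p_wait_b"}, {"p_done"}),
}
places = {"p_wait_a", "p_wait_b", "p_msg", "p_done"}

def is_siphon(S):
    for ins, outs in transitions.values():
        if outs & S and not (ins & S):  # feeds S without consuming from S
            return False
    return bool(S)

siphons = [set(c) for r in range(1, len(places) + 1)
           for c in combinations(sorted(places), r) if is_siphon(set(c))]
print(siphons)
```

Enumeration over all subsets is exponential, which is why structural control methods like the one in the dissertation focus on the problematic siphons rather than listing them all.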
APA, Harvard, Vancouver, ISO, and other styles
28

Hölscher, Karsten. "Autonomous units as a rule based concept for the modeling of autonomous and cooperating processes." Berlin Logos, 2008. http://d-nb.info/992076374/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ziegert, Kristina. "Everyday Life among Next of Kin of Haemodialysis Patients." Doctoral thesis, Linköping : Halmstad : Dept. of Medicine and Care, Linköping University ; Scool of Social and Health Sciences, Halmstad University, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/med926s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zumaeta, Liza Helen Fiorela. "Implementación del nivel 1 y 2 del MoProSoft en Net Factory." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas - UPC, 2013. http://hdl.handle.net/10757/273752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Roy, Mousumi. "Front-end considerations for next generation communication receivers." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/frontend-considerations-for-next-generation-communication-receivers(636dc047-7772-46c3-b049-183d3af2a7bb).html.

Full text
Abstract:
The ever-increasing diversity in communication systems has created a demand for constant improvements in receiver components. This thesis describes the design and characterisation of front-end receiver components for various challenging applications, including characterisation of low noise foundry processes, LNA design and multi-band antenna design. It also includes a new theoretical analysis of noise coupling in low noise phased array receivers. In LNA design much depends on the choice of the optimum active devices. A comprehensive survey of the performance of low noise transistors is therefore extremely beneficial. To this end a comparison of the DC, small-signal and noise behaviours of 10 state-of-the-art GaAs and InP based pHEMT and mHEMT low noise processes has been carried out. Their suitability in LNA designs has been determined, with emphasis on the SKA project. This work is part of the first known detailed investigation of this kind. Results indicate the superiority of mature GaAs-based pHEMT processes, and highlight problems associated with the studied mHEMT processes. Two of the more promising processes have then been used to design C-band and UHF-band MMIC LNAs. A new theoretical analysis of coupled noise between antenna elements of a low noise phased array receiver has been carried out. Results of the noise wave analysis, based on fundamental principles of noisy networks, suggest that the coupled noise contribution to system noise temperatures should be smaller than had previously been suggested for systems like the SKA. The principles are applicable to any phased array receiver. Finally, a multi-band antenna has been designed and fabricated for a severe operating environment, covering the three extremely crowded frequency bands, the 2.1 GHz UMTS, the 2.4 GHz ISM and the 5.8 GHz ISM bands. Measurements have demonstrated excellent performance, exceeding that of equivalent commercial antennas aimed at similar applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Makem, J. E. "Virtual Net-Shape Forging of Aerofoil Blades - Dimensional Inspection and Shape Sensitivity to Process Variables." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Avenäs, Sebastian. "Why the bear kicked the hornet’s nest : Causal processes of Russian foreign policy on Syria." Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-313994.

Full text
Abstract:
This paper examines causal mechanisms of the process leading up to the Russian military intervention in Syria that began in September 2015. It aims to concretize the causal processes of three different hypotheses that are based on commonplace assumptions of Russian foreign policy on Syria. It thoroughly explores three different causal paths, mapping events that may have had implications for the apparent change of heart within the Russian leadership. The paper analyses the relevance of these processes through a rational choice theory framework.
APA, Harvard, Vancouver, ISO, and other styles
34

Faltis, Jan. "Analýza a řešení systému pro monitoring ISIR a vybraných registrů ARES." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114002.

Full text
Abstract:
This diploma thesis deals with the analysis of the requirements, design and implementation of a system for monitoring the insolvency register and the commercial register from the administrative register of economic subjects. The system has an extendable design which allows further sources to be added at a later date to suit the client's needs. The design process is oriented toward small-business customers and emphasises the need for simplicity and understandability of the marketed solution. System requirements are analysed first; subsequently, several models with decreasing levels of abstraction are formulated, and the system is then implemented according to these models through several functional increments.
APA, Harvard, Vancouver, ISO, and other styles
35

Bouhouche, Salah. "Contribution to quality and process optimisation in continuous casting using mathematical modelling." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:swb:105-6900128.

Full text
Abstract:
Mathematical modelling using an advanced approach based on neural networks has been applied to control and quality optimisation in the main steelwork processes, such as ladle metallurgical treatment and continuous casting. Particular importance has been given to the improvement of the breakout prediction system and the reduction of the false alarm rate generated by the conventional breakout detection system. Prediction of the chemical composition and temperature of liquid steel in the ladle has been achieved by neural networks and a linear model; this prediction can be considered a soft sensor. Slab surface temperature stabilisation on the basis of casting events has been controlled by a neural network algorithm, which reduces surface temperature fluctuation in comparison with the conventional control system based on a PID controller. Quality monitoring and classification is also achieved by a neural network related to the breakout detection system; this technique classifies different defects based on the different alarm signals given by the breakout prediction system. Fault detection and process monitoring are developed using neural network modelling. All models are developed on the basis of a practical operating database obtained from the iron and steel industry.
APA, Harvard, Vancouver, ISO, and other styles
36

Ribas, Maristella. "A Petri net decision model for cloud services adoption." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15609.

Full text
Abstract:
Cloud services are now widely used, especially Infrastructure as a Service (IaaS), with big players offering several purchasing options and expanding the range of offered services almost daily. Cost reduction is a major factor promoting cloud services adoption. However, qualitative factors need to be evaluated as well, making the decision process of cloud services adoption a non-trivial task for managers. In this work, we propose a Petri net-based multi-criteria decision-making (MCDM) framework to evaluate a cloud service against a similar on-premises offer. The evaluation of both options considers cost and qualitative issues in a novel and simple method that incorporates best practices from academia and IT specialists. Furthermore, the use of Petri net models allows powerful extensions to perform deeper analysis of specific factors as needed. The framework can help IT managers decide between the two options, and can be used for any type of cloud service (IaaS, SaaS, PaaS). Since cost is one of the most important factors promoting cloud adoption, we proceed with a deeper analysis of one important cost factor: we propose a Petri net to model cost savings using the spot instances purchasing option of public clouds. Through extensive simulations in several scenarios we conclude that spot instances can be a very interesting option for savings in the auto-scaling process, even in simple business applications using only a few servers. Exploring different purchasing options for cloud services can make the difference in the decision-making process.
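As an aside, the kind of spot-versus-on-demand saving the abstract points to can be illustrated with a toy cost simulation. This is only a rough sketch, not the thesis's Petri net model; the prices, interruption probability, and fallback rule below are invented for illustration:

```python
import random

# Illustrative figures only -- real spot prices and interruption rates vary.
ON_DEMAND_PRICE = 0.10    # $/hour per server
SPOT_PRICE_MEAN = 0.03    # average spot price, $/hour per server
INTERRUPTION_PROB = 0.05  # chance a spot server is reclaimed in a given hour

def simulate_cost(hours, servers, use_spot, seed=42):
    """Rough cost model: an interrupted spot server is assumed to be
    replaced by an on-demand one for the remainder of that hour."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(hours):
        for _ in range(servers):
            if not use_spot:
                total += ON_DEMAND_PRICE
            elif rng.random() < INTERRUPTION_PROB:
                total += ON_DEMAND_PRICE          # fallback replacement
            else:
                total += rng.uniform(0.5, 1.5) * SPOT_PRICE_MEAN
    return total

on_demand = simulate_cost(720, 4, use_spot=False)  # 4 servers, ~1 month
spot = simulate_cost(720, 4, use_spot=True)
print(f"on-demand: ${on_demand:.2f}, spot: ${spot:.2f}")
```

Even this crude model shows spot capacity costing a fraction of the on-demand price, consistent with the abstract's conclusion that spot instances can pay off even for a handful of servers.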
APA, Harvard, Vancouver, ISO, and other styles
37

Jana, Santhanu Shakti Pada [Verfasser]. "Numerical predictions of misruns in development of near-net shape casting process / Santhanu Shakti Pada Jana." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2015. http://d-nb.info/1071688812/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ford, Derek William. "The next generation planning board : a visible solution for effective manufacturing planning and control for a process manufacturing environment." Thesis, Cranfield University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.393696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Passos, Lígia Maria Soares. "Formalização de workflow nets utilizando lógica linear: análise qualitativa e quantitativa." Universidade Federal de Uberlândia, 2009. https://repositorio.ufu.br/handle/123456789/12473.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work presents a method for qualitative and quantitative analysis of WorkFlow nets based on the proof trees of linear logic, and an approach for the verification of workflow specifications in UML through the transformation of UML Activity Diagrams into WorkFlow nets. The qualitative analysis concerns the proof of the soundness correctness criterion defined for WorkFlow nets. The quantitative analysis is based on the computation of symbolic dates for planning the resources used to handle each task of the workflow process modelled by a t-Time WorkFlow net. For the verification of workflow process specifications mapped into UML Activity Diagrams, formal rules are presented to transform these diagrams into WorkFlow nets. In this context, the analysis and correction of critical points in UML Activity Diagrams through the analysis of linear logic proof trees is also proposed. The advantages of such an approach are diverse. Working with linear logic permits one to prove the soundness correctness criterion in linear time without constructing the reachability graph, considering the structure of the WorkFlow net itself instead of the corresponding automaton. Moreover, the computation of symbolic dates for the execution of each task mapped into the t-Time WorkFlow net permits planning the utilisation of the resources involved in the activities of the workflow process, through formulas that can be used for any case handled by the corresponding workflow process, without re-examining the process to recalculate, for each new case, the start and completion dates of the activities involved. Regarding the verification of workflow processes mapped into UML Activity Diagrams, the major advantage of this approach is the transformation of a semi-formal model into a formal model, on which properties such as soundness can be formally verified.
Master's in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
40

Russell, Nicholas Charles. "Foundations of process-aware information systems." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16592/.

Full text
Abstract:
Over the past decade, the ubiquity of business processes and their need for ongoing management in the same manner as other corporate assets has been recognized through the establishment of a dedicated research area: Business Process Management (or BPM). There are a wide range of potential software technologies on which a BPM offering can be founded. Although there is significant variation between these alternatives, they all share one common factor - their execution occurs on the basis of a business process model - and consequently, this field of technologies can be termed Process-Aware Information Systems (or PAIS). This thesis develops a conceptual foundation for PAIS based on the results of a detailed examination of contemporary offerings including workflow and case handling systems, business process modelling languages and web service composition languages. This foundation is based on 126 patterns that identify recurrent core constructs in the control-flow, data and resource perspectives of PAIS. These patterns have been used to evaluate some of the leading systems and business process modelling languages. It also proposes a generic graphical language for defining exception handling strategies that span these perspectives. On the basis of these insights, a comprehensive reference language - newYAWL - is developed for business process modelling and enactment. This language is formally defined and an abstract syntax and operational semantics are provided for it. An assessment of its capabilities is provided through a comprehensive patterns-based analysis which allows direct comparison of its functionality with other PAIS. newYAWL serves as a reference language and many of the ideas embodied within it are also applicable to existing languages and systems. The ultimate goal of both the patterns and newYAWL is to improve the support and applicability of PAIS.
APA, Harvard, Vancouver, ISO, and other styles
41

Rogge-Solti, Andreas, Ronny S. Mans, der Aalst Wil M. P. van, and Mathias Weske. "Repairing event logs using stochastic process models." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6679/.

Full text
Abstract:
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and attracted the attention of practitioners, tool-vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised due to manual logging. For example, hospital staff may need to manually enter information regarding the patient’s treatment. As a result, events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models, and provide a method to repair missing events in the logs. This way, we facilitate analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
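To make the idea of log repair concrete, here is a deliberately simplified sketch: a missing timestamp is interpolated between its neighbours, weighted by historical mean activity durations. This is only a crude stand-in for the paper's stochastic-Petri-net/Bayesian method; the trace, activity names, and durations below are invented:

```python
from datetime import datetime, timedelta

# Hypothetical trace with one lost timestamp and (invented) mean durations
trace = [("register", datetime(2013, 5, 1, 9, 0)),
         ("examine", None),                    # timestamp missing in the log
         ("decide", datetime(2013, 5, 1, 11, 0))]
mean_dur = {"register": 30, "examine": 60}     # minutes, from historic logs

def repair(trace, mean_dur):
    """Place each missing event inside the gap between its neighbours,
    proportionally to the mean duration of the preceding activity."""
    out = list(trace)
    for i, (act, ts) in enumerate(out):
        if ts is None:
            t0, t1 = out[i - 1][1], out[i + 1][1]
            w = mean_dur[out[i - 1][0]]
            frac = w / (w + mean_dur[act])     # share of the gap elapsed
            out[i] = (act, t0 + (t1 - t0) * frac)
    return out

print(repair(trace, mean_dur)[1])
```

With 30- versus 60-minute means, "examine" is placed a third of the way into the two-hour gap, at 09:40.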
APA, Harvard, Vancouver, ISO, and other styles
42

Propes, Nicholas Chung. "Hybrid Systems Diagnosis and Control Reconfiguration for Manufacturing Systems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5150.

Full text
Abstract:
A methodology for representing and analyzing manufacturing systems in a hybrid systems framework for control reconfiguration purposes in the presence of defects and failures at the product and system levels is presented. At the top level, a supervisory Petri net directs parts/jobs through the manufacturing system. An object-based hybrid systems model that incorporates both Petri nets at the event-driven level and differential equations at the time-driven level describes the subsystems. Rerouting capabilities utilizing this model at the product and operation levels were explained. Simulations were performed on a testbed model for optimal time and mode transition cost to determine the route for parts. The product level reconfiguration architecture utilizes an adaptive network-based fuzzy inference system (ANFIS) to map histogram comparison metrics to set-point adjustments when product defects were detected. Tests were performed on good and defective plastic parts from a plastic injection molding machine. In addition, a mode identification architecture was described that incorporates both time- and event-driven information to determine the operating mode of a system from measured sensor signals. Simulated data representing the measured process signals from a Navy ship chiller system were used to verify that the appropriate operating modes were detected.
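The event-driven layer of such a model boils down to the classical Petri net token game. A minimal sketch, with place names and arc weights invented to illustrate part routing:

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from pre-places, produce in post-places."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical routing fragment: a part moves from a buffer onto a free machine
marking = {"buffer": 2, "machine_free": 1, "machine_busy": 0}
pre = {"buffer": 1, "machine_free": 1}
post = {"machine_busy": 1}
marking = fire(marking, pre, post)
print(marking)  # {'buffer': 1, 'machine_free': 0, 'machine_busy': 1}
```

A supervisory net of this kind decides routing at the event level, while differential equations govern the time-driven behaviour within each mode.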
APA, Harvard, Vancouver, ISO, and other styles
43

Sha, Sha. "Performance Modelling and Analysis of Handover and Call Admission Control Algorithm for Next Generation Wireless Networks." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5509.

Full text
Abstract:
The next generation wireless system (NGWS) has been conceived as a ubiquitous wireless environment. It integrates existing heterogeneous access networks, as well as future networks, and will offer high-speed data, real-time applications (e.g. Voice over IP, videoconferencing) and real-time multimedia (e.g. real-time audio and video) with a certain Quality of Service (QoS) level to mobile users. Mobile nodes are required to be capable of selecting the services offered by each provider and determining the best path through the various networks. Efficient radio resource management (RRM) is one of the key issues required to support global roaming of mobile users among the different network architectures of the NGWS, and a precise call admission control (CAC) scheme satisfies the requirements of high network utilisation, cost reduction, minimum handover latency and high-level QoS for all connections. This thesis describes an adaptive class-based CAC algorithm, which prioritises arriving channel resource requests based on the user's classification and the channel allocation policy. The proposed CAC algorithm is coupled with Fuzzy Logic (FL) and Pre-emptive Resume (PR) theories to manage and improve the performance of the integrated wireless network system. The novel algorithm is assessed using a mathematical analytic method, measuring performance by evaluating the handover dropping probability and the system utilisation.
APA, Harvard, Vancouver, ISO, and other styles
44

Bouhouche, Salah. "Contribution to quality and process optimisation in continuous casting using mathematical modelling." Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=966041208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Júnior, Roberto Nicolas De Jardin. "Modelagem matemática de um processo industrial de produção de cloro e soda por eletrólise de salmoura visando sua otimização." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-08122006-141824/.

Full text
Abstract:
The present work consists of the development of a mathematical model of an industrial chlorine and sodium hydroxide production plant, aiming at the optimisation of production efficiency and at cost savings concerning electrical energy and vapor consumption. Two process steps were considered in the study: electrolysis and NaOH-liquor concentration by evaporation. Since there are no adequate models reported in the literature for simulating electrolysis-based processes like the one considered, empirical models for the different types of electrolysis cells were developed by fitting neural networks to operational data from industrial operation. In this case, feedforward neural networks containing three neuron layers were fitted to the data. The raw data obtained from industrial operation at the Carbocloro plant, in Cubatão, SP, were first treated by means of multivariate statistical techniques, with the purpose of detecting and eliminating data containing gross errors and outliers, as well as identifying correlations among variables and different operational regimes of the industrial plant. Although material and energy balances for the evaporation step were initially adopted, this approach could not be used in simulations due to the lack of valid models to predict liquid-vapor equilibria for the specific system. Thus, a neural network model was also fitted to data from operation of the evaporation step. Fitting of the neural network models resulted in good agreement between model predictions and measured values of the model output variables, which enabled their use in simulation studies for the electrolysis and evaporation process steps. The neural network-based mathematical model was utilised in process optimisation studies aiming at the best financial gain under given operational conditions.
APA, Harvard, Vancouver, ISO, and other styles
46

Vincent, Deirdre Kathleen [Verfasser]. "NEC is a NETs dependent process and markers of NETosis are predictive of NEC in mice and humans / Deirdre Kathleen Vincent." Hamburg : Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky, 2020. http://d-nb.info/1236695100/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hashmi, Jahanzeb Maqbool. "Designing High Performance Shared-Address-Space and Adaptive Communication Middlewares for Next-Generation HPC Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1588038721555713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cândido, Renato Markele Ferreira 1988. "Filtros de partículas aplicados a sistemas max plus." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259747.

Full text
Abstract:
Advisor: Rafael Santos Mendes
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This thesis proposes, as its main contribution, particle filtering algorithms for discrete event systems in which synchronization phenomena are prevalent. This class of systems can be described by systems of linear equations in a non-conventional algebra commonly known as the Max Plus algebra. Particle filters are suboptimal Bayesian algorithms that perform sequential Monte Carlo sampling to construct a discrete approximation of the probability density of the states, based on a set of particles with associated weights. A review of discrete event systems, nonlinear filtering and particle filters in general is presented. After this theoretical background, two particle filtering algorithms applied to Max Plus systems are proposed. Finally, some simulation results are presented, confirming the accuracy of the designed filters.
Master's degree
Automation
Master in Electrical Engineering
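For intuition, a bootstrap particle filter over max-plus dynamics can be sketched in a few lines. This is not the thesis's algorithm; the system matrix, noise levels, and particle count below are invented for illustration:

```python
import math
import random

def mp_mul_vec(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (A[i][j] + x[j])."""
    return [max(A[i][j] + x[j] for j in range(len(x))) for i in range(len(A))]

def pf_step(particles, A, obs, noise_std, jitter, rng):
    """One predict/update/resample cycle of a bootstrap particle filter
    for x(k+1) = A ⊗ x(k) plus random timing jitter."""
    preds = [[v + rng.uniform(0, jitter) for v in mp_mul_vec(A, x)]
             for x in particles]
    # Weight by Gaussian likelihood of the observed firing date of event 0
    ws = [math.exp(-0.5 * ((obs - p[0]) / noise_std) ** 2) for p in preds]
    total = sum(ws) or 1e-300
    idx = rng.choices(range(len(preds)), weights=[w / total for w in ws],
                      k=len(preds))
    return [preds[i][:] for i in idx]           # multinomial resampling

# Toy timed event graph with two transitions (hypothetical holding times)
A = [[2.0, 5.0],
     [3.0, 2.0]]
rng = random.Random(0)
truth = [0.0, 0.0]
particles = [[0.0, 0.0] for _ in range(200)]
for _ in range(10):
    truth = [v + rng.uniform(0, 0.3) for v in mp_mul_vec(A, truth)]
    obs = truth[0] + rng.gauss(0, 0.2)          # noisy firing-date sensor
    particles = pf_step(particles, A, obs, 0.2, 0.3, rng)

est = sum(p[0] for p in particles) / len(particles)
print(f"true x0 = {truth[0]:.2f}, filtered estimate = {est:.2f}")
```

The max operation models synchronization (an event fires only once all its predecessors have finished), which is exactly the phenomenon the abstract singles out.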
APA, Harvard, Vancouver, ISO, and other styles
49

McCann, David. "Implementing T-net disciple-making in the Evangelical Free Churches of New England--critical factors in preparation and process." Columbia, SC : Columbia Theological Seminary, 2008. http://dx.doi.org/10.2986/tren.023-0214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Meeuw, H., V. K. Wisniewski, U. Köpke, A. S. Nia, A. R. Vázquez, M. R. Lohe, X. Feng, and B. Fiedler. "In-line monitoring of carbon nanoparticle epoxy dispersion processes: Insights into the process via next generation three roll mills and impedance spectroscopy." Springer Nature Switzerland, 2019. https://tud.qucosa.de/id/qucosa%3A34561.

Full text
Abstract:
The new generation of three roll mills is able to monitor the process loads occurring during dispersion. This paper focuses on the interpretation of the gathered data to find criteria quantifying the dispersion state online, with the aim of reducing process time. We used impedance spectroscopy to identify the dispersion state and correlated it with the occurring process loads. The dispersion process of a wide spectrum of carbon-based nanoparticles was investigated, namely carbon black, single-walled carbon nanotubes, multi-walled carbon nanotubes, a few-layer graphene powder, electrochemically exfoliated graphite and a functionalized electrochemically exfoliated graphite. The filler content was varied around the material's electrical percolation threshold. The criteria found led to a reduction of processing time and revealed the prevalent mechanisms during dispersion.
APA, Harvard, Vancouver, ISO, and other styles