Dissertations / Theses on the topic 'Process Model Matching'

Consult the top 15 dissertations / theses for your research on the topic 'Process Model Matching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Klinkmüller, Christopher. "Adaptive Process Model Matching." Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-224884.

Full text
Abstract:
Process model matchers automate the detection of activities that represent similar functionality in different models. Thus, they provide support for various tasks related to the management of business processes including model collection management and process design. Yet, prior research primarily demonstrated the matchers’ effectiveness, i.e., the accuracy and the completeness of the results. In this context (i) the size of the empirical data is often small, (ii) all data is used for the matcher development, and (iii) the validity of the design decisions is not studied. As a result, existing matchers yield a varying and typically low effectiveness when applied to different datasets, as among others demonstrated by the process model matching contests in 2013 and 2015. With this in mind, the thesis studies the effectiveness of matchers by separating development from evaluation data and by empirically analyzing the validity and the limitations of design decisions. In particular, the thesis develops matchers that rely on different sources of information. First, the activity labels are considered as natural-language descriptions and the Bag-of-Words Technique is introduced which achieves a high effectiveness in comparison to the state of the art. Second, the Order Preserving Bag-of-Words Technique analyzes temporal dependencies between activities in order to automatically configure the Bag-of-Words Technique and to improve its effectiveness. Third, expert feedback is used to adapt the matchers to the domain characteristics of process model collections. Here, the Adaptive Bag-of-Words Technique is introduced which outperforms the state-of-the-art matchers and the other matchers from this thesis.
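The core matching idea lends itself to a compact illustration. The sketch below is not Klinkmüller's implementation: it assumes a simple character-level word similarity (the thesis uses more refined word-similarity measures and automated configuration), and the labels and threshold are invented.

```python
from difflib import SequenceMatcher

def word_sim(w1: str, w2: str) -> float:
    # Character-level stand-in for the word-similarity measures of the thesis.
    return SequenceMatcher(None, w1.lower(), w2.lower()).ratio()

def bag_of_words_sim(label1: str, label2: str) -> float:
    # Average best word-to-word match, computed symmetrically over both bags.
    bag1, bag2 = label1.split(), label2.split()
    if not bag1 or not bag2:
        return 0.0
    s1 = sum(max(word_sim(w, v) for v in bag2) for w in bag1) / len(bag1)
    s2 = sum(max(word_sim(v, w) for w in bag1) for v in bag2) / len(bag2)
    return (s1 + s2) / 2

def match(activities_a, activities_b, threshold=0.6):
    # Report activity pairs whose label similarity clears the threshold.
    return [(a, b) for a in activities_a for b in activities_b
            if bag_of_words_sim(a, b) >= threshold]

print(match(["Check invoice", "Ship goods"],
            ["Verify the invoice", "Dispatch goods"]))
```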
APA, Harvard, Vancouver, ISO, and other styles
2

Kuss, Elena [Verfasser], and Heiner [Akademischer Betreuer] Stuckenschmidt. "Evaluation of process model matching techniques / Elena Kuss ; Betreuer: Heiner Stuckenschmidt." Mannheim : Universitätsbibliothek Mannheim, 2019. http://d-nb.info/1183572700/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kuss, Elena [Verfasser], and Heiner [Akademischer Betreuer] Stuckenschmidt. "Evaluation of process model matching techniques / Elena Kuss ; Betreuer: Heiner Stuckenschmidt." Mannheim : Universitätsbibliothek Mannheim, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:180-madoc-492194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Al Hajri, Abdullah Said (Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW). "Logistics technology transfer model." University of New South Wales, Mechanical & Manufacturing Engineering, 2008. http://handle.unsw.edu.au/1959.4/41469.

Full text
Abstract:
A series of studies of logistics technology adoption since 1988 has revealed that logistics organizations are not at the forefront of adopting new technology, and this delayed adoption creates an information gap. With the advent of supply chain management and the strategic positioning of logistics, the need for accurate and timely information to support logistics executives has become more important than ever. Given the integrative nature of logistics technology, failure to implement the technology successfully could result in writing off major investments in developing and implementing the technology, or even in abandoning the strategic initiatives underpinned by these innovations. Consequently, the need to employ effective strategies and models to cope with these uncertainties is crucial. This thesis addresses uncertainty in implementation success through process and factor research models. The process research approach focuses on the sequence of events in the technology transfer process that occurs over time, and explains the degree of association between these sequences and implementation success. Through content analysis, this research gathers, extracts, and categorizes process data from actual accounts of logistics technology adoption and implementation in organizations, as published in the literature. The extracted event sequences are then analyzed using optimal matching, a technique borrowed from the natural sciences, and grouped using cluster analysis. Four patterns that organizations follow to transfer logistics technology were revealed, namely: formal minimalist, mutual adaptation, development concerned, and organizational roles dispenser. Factors that contribute to successful implementation in each pattern were defined as the crucial and necessary events that characterized and differentiated each pattern from the others. The factor approach identifies potential predictors of successful technology implementation and tests the empirical association between predictors and outcomes. This research develops a logistics technology success model. In developing the model, various streams of research were investigated, including logistics, information systems, and organizational psychology. The model is tested using a questionnaire survey study. The data were collected from Australian companies which have recently adopted and implemented logistics technology. The results of partial least squares structural equation modeling provide strong support for the model constructs and valuable insights for logistics/supply chain managers. The last study reports a convergent triangulation study using a multiple-case study of three Australian companies which have implemented logistics technology. A within-case and a cross-case analysis of the three cases provide cross-validation for the results of the other two studies. The results provided high predictive validity for the two models. Furthermore, the case study approach was particularly beneficial in explaining and contextualizing the linkages of the factor-based model and in confirming the importance of the crucial events in the process-based model. The thesis concludes with a chapter on research and managerial implications, devoted to logistics/supply chain managers and researchers.
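The optimal-matching step of the process approach aligns coded event sequences by edit distance before clustering. A toy sketch, with invented event codes and cases:

```python
# Event sequences are coded as strings (one character per event type) and
# compared by a dynamic-programming alignment cost (Levenshtein variant);
# the resulting distance matrix would then feed a cluster analysis.

def edit_distance(a: str, b: str, indel: int = 1, subst: int = 1) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + indel,          # delete ca
                            curr[j - 1] + indel,      # insert cb
                            prev[j - 1] + (0 if ca == cb else subst)))
        prev = curr
    return prev[-1]

# A=acquire, P=pilot, T=train, R=roll out, E=evaluate (invented coding)
cases = {"case1": "APTRE", "case2": "ATRE", "case3": "APRTE"}
for n1 in cases:
    for n2 in cases:
        if n1 < n2:
            print(n1, n2, edit_distance(cases[n1], cases[n2]))
```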
APA, Harvard, Vancouver, ISO, and other styles
5

Belhoul, Yacine. "Graph-based Ad Hoc Networks Topologies and Business Process Matching." Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10202.

Full text
Abstract:
A mobile ad hoc network (MANET) is a wireless network formed dynamically by a set of users equipped with mobile terminals, without any pre-existing infrastructure or centralized administration. The devices used in MANETs are limited in battery capacity, computing power and bandwidth, and users are free to move, which leads to topologies that change over time. In this thesis we are interested in graph-based approaches to some known challenges in networking, namely graph topologies of MANETs and process model matchmaking in large-scale web services. In the first part we propose: (1) a generic mechanism that uses node mobility information to maintain a graph topology of the network; in particular, we show how to use the prediction of future node positions to maintain a connected dominating set of a given MANET; (2) distributed algorithms to construct minimal global offensive alliance and global defensive alliance sets in MANETs, together with several heuristics to obtain a better approximation of the cardinality of the alliance sets, a desirable property for practical purposes; (3) a framework to facilitate the design and evaluation of topology control protocols in MANETs, built on a common schema for topology control based on the NS-2 simulator and inspired by the commonalities between the components of topology control algorithms in MANETs. In the second part, we focus on process model matchmaking and propose two graph-based solutions for inexact process model matching that address the high computational cost of existing work in the literature. In the first solution, we decompose the process models into their possible execution sequences and then propose generic graph techniques that use string comparator metrics for process model matchmaking based on this decomposition. In order to further optimize the execution time and to handle process model matching in large-scale web services, the second solution combines spectral graph matching with the proposed structural and semantic approaches; it uses an eigendecomposition projection technique that makes the runtime faster.
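The first of the two matching solutions, decomposing models into execution sequences and comparing them with string metrics, can be sketched briefly. The code below is an illustration under simplifying assumptions (acyclic models, invented activity names), not the thesis's algorithm:

```python
from difflib import SequenceMatcher

def execution_sequences(graph: dict, start: str, end: str) -> list:
    # Enumerate activity sequences from start to end (acyclic models only).
    if start == end:
        return [(end,)]
    return [(start,) + rest
            for succ in graph.get(start, [])
            for rest in execution_sequences(graph, succ, end)]

def model_similarity(seqs_a, seqs_b) -> float:
    # Best-match average similarity between two sets of execution sequences,
    # using a string comparator on the concatenated activity names.
    def sim(s, t):
        return SequenceMatcher(None, " ".join(s), " ".join(t)).ratio()
    best_a = sum(max(sim(s, t) for t in seqs_b) for s in seqs_a) / len(seqs_a)
    best_b = sum(max(sim(t, s) for s in seqs_a) for t in seqs_b) / len(seqs_b)
    return (best_a + best_b) / 2

g1 = {"receive order": ["check stock"], "check stock": ["ship", "reject"],
      "ship": ["end"], "reject": ["end"]}
g2 = {"receive order": ["verify stock"], "verify stock": ["ship", "end"],
      "ship": ["end"]}
s1 = execution_sequences(g1, "receive order", "end")
s2 = execution_sequences(g2, "receive order", "end")
print(round(model_similarity(s1, s2), 2))
```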
APA, Harvard, Vancouver, ISO, and other styles
6

Harding, Bradley. "A Single Process Model of the Same-Different Task." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38329.

Full text
Abstract:
The Same-Different task has a long and controversial history in cognitive psychology. For over five decades, researchers have had difficulty modelling this simple task, in which participants must respond as quickly and as accurately as possible whether two stimuli are the “Same” or “Different”. The main difficulty stems from the fact that “Same” decisions are much faster than a single-process model can predict without resorting to post-hoc processes, a finding since dubbed the fast-same phenomenon. In this thesis, I evaluate the strengths and shortcomings of past modelling endeavours, deconstruct the fast-same phenomenon while exploring the role of priming as its possible mechanism, investigate coactivity as a possible architecture underlying both decision modalities, and present an accumulator model, whose assumptions and parameters stem from these results, that predicts Same-Different performance (both response times and accuracies) using a single process, a feat deemed near impossible by Sternberg (1998).
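A generic single-process accumulator of the kind discussed can be simulated in a few lines. This is not Harding's model: the larger drift on 'same' trials is an assumed stand-in for a priming-like mechanism, and all parameters are invented.

```python
import random

def trial(drift: float, threshold: float = 20.0, noise: float = 1.0,
          max_steps: int = 10_000):
    # Random walk between two boundaries: the positive boundary yields a
    # 'same' response, the negative one 'different'; RT is the step count.
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        x += drift + random.gauss(0.0, noise)
        t += 1
    return ("same" if x > 0 else "different"), t

random.seed(1)
same_rts = [trial(drift=+0.8)[1] for _ in range(1000)]   # primed 'same' trials
diff_rts = [trial(drift=-0.5)[1] for _ in range(1000)]   # 'different' trials
print("mean RT same:", sum(same_rts) / len(same_rts))
print("mean RT different:", sum(diff_rts) / len(diff_rts))
```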
APA, Harvard, Vancouver, ISO, and other styles
7

Klinkmüller, Christopher [Verfasser], André [Akademischer Betreuer] Ludwig, and Stefan [Gutachter] Sackmann. "Adaptive Process Model Matching : Improving the Effectiveness of Label-Based Matching through Automated Configuration and Expert Feedback / Christopher Klinkmüller ; Gutachter: Stefan Sackmann ; Betreuer: André Ludwig." Leipzig : Universitätsbibliothek Leipzig, 2017. http://d-nb.info/1241064075/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kreacic, Eleonora. "Some problems related to the Karp-Sipser algorithm on random graphs." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:3b2eb52a-98f5-4af8-9614-e4909b8b9ffa.

Full text
Abstract:
We study certain questions related to the performance of the Karp-Sipser algorithm on the sparse Erdős-Rényi random graph. The Karp-Sipser algorithm, introduced by Karp and Sipser [34], is a greedy algorithm which aims to obtain a near-maximum matching on a given graph. The algorithm evolves through a sequence of steps. In each step, it picks an edge according to a certain rule, adds it to the matching and removes it from the remaining graph. The algorithm stops when the remaining graph is empty. In [34], the performance of the Karp-Sipser algorithm on the Erdős-Rényi random graphs G(n, M = [cn/2]) and G(n, p = c/n), c > 0, is studied. It is proved there that the algorithm behaves near-optimally, in the sense that the difference between the size of a matching obtained by the algorithm and a maximum matching is at most o(n), with high probability as n → ∞. The main result of [34] is a law of large numbers for the size of a maximum matching in G(n, M = cn/2) and G(n, p = c/n), c > 0. Aronson, Frieze and Pittel [2] further refine these results. In particular, they prove that for c < e, the Karp-Sipser algorithm obtains a maximum matching, with high probability as n → ∞; for c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching of G(n, M = cn/2) is of order n^{1/5} up to logarithmic factors, with high probability as n → ∞. They further conjecture a central limit theorem for the size of a maximum matching of G(n, M = cn/2) and G(n, p = c/n) for all c > 0. As noted in [2], the central limit theorem for c < 1 is a consequence of the result of Pittel [45]. In this thesis, we prove a central limit theorem for the size of a maximum matching of both G(n, M = cn/2) and G(n, p = c/n) for c > e. (We do not analyse the case 1 ≤ c ≤ e.) Our approach is based on further analysis of the Karp-Sipser algorithm. We use the results from [2] and refine them. For c > e, the difference between the size of a matching obtained by the algorithm and the size of a maximum matching is of order n^{1/5} up to logarithmic factors, with high probability as n → ∞, and the study [2] suggests that this difference is accumulated at the very end of the process. The question of how the Karp-Sipser algorithm evolves in its final stages for c > e motivated us to consider the following problem in this thesis. We study a model for the destruction of a random network by fire. Let us assume that we have a multigraph with minimum degree at least 2 and real-valued edge-lengths. We first choose a uniform random point along the length and set it alight. The edges burn at speed 1. If the fire reaches a node of degree 2, it is passed on to the neighbouring edge. On the other hand, a node of degree at least 3 passes the fire either to all its neighbours or to none, each with probability 1/2. If the fire extinguishes before the graph is burnt, we again pick a uniform point and set it alight. We study this model in the setting of a random multigraph with N nodes of degree 3 and α(N) nodes of degree 4, where α(N)/N → 0 as N → ∞. We assume the edges to have i.i.d. standard exponential lengths. We are interested in the asymptotic behaviour of the number of fires we must set alight in order to burn the whole graph, and the number of points which are burnt from two different directions. Depending on whether α(N) ≫ √N or not, we prove that after suitable rescaling these quantities converge jointly in distribution to either a pair of constants or to (complicated) functionals of Brownian motion.
Our analysis supports the conjecture that the difference between the size of a matching obtained by the Karp-Sipser algorithm and the size of a maximum matching of the Erdős-Rényi random graph G(n, M = cn/2) for c > e, rescaled by n^{1/5}, converges in distribution.
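The algorithm itself is short. The sketch below states the commonly used pendant-first greedy rule (tie-breaking and edge-selection details vary across analyses), applied to a sparse G(n, p = c/n) sample:

```python
import random

def karp_sipser(adj: dict) -> list:
    # Greedy Karp-Sipser matching on {vertex: set-of-neighbours}: prefer an
    # edge at a degree-1 (pendant) vertex; otherwise pick a random edge.
    # The input adjacency structure is consumed.
    matching = []
    while True:
        pendant = next((v for v, nb in adj.items() if len(nb) == 1), None)
        if pendant is not None:
            u, v = pendant, next(iter(adj[pendant]))
        else:
            live = [w for w, nb in adj.items() if nb]
            if not live:
                return matching
            u = random.choice(live)
            v = random.choice(sorted(adj[u]))
        matching.append((u, v))
        for w in (u, v):                       # remove both matched endpoints
            for x in adj.pop(w, set()):
                if x in adj:
                    adj[x].discard(w)

random.seed(0)
n, c = 500, 3.0                                # sparse G(n, p = c/n)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < c / n:
            adj[u].add(v); adj[v].add(u)
print("matching size:", len(karp_sipser(adj)), "of at most", n // 2)
```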
APA, Harvard, Vancouver, ISO, and other styles
9

Logemann, Karsten. "Sensitivity analysis for an assignment incentive pay in the United States Navy enlisted personnel assignment process in a simulation environment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FLogemann.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Höffl, Marc. "A new programming model for enterprise software : Allowing for rapid adaption and supporting maintainability at scale." Thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215103.

Full text
Abstract:
Companies are under constant pressure to adapt and improve their processes to stay competitive. Since most of their processes are handled by software, it also needs to change constantly. Those improvements and changes add up over time and increase the complexity of the system, which in turn prevents the company from further adaptation. In order to change and improve existing business processes and their implementation within software, several stakeholders have to go through a long process. Current IT methodologies are not suitable for such a dynamic environment. The analysis of this change process shows that four software characteristics are important to speed it up: transparency, adaptability, testability and reparability. Transparency refers to the user's capability to understand what the system is doing, where and why. Adaptability is a mainly technical characteristic that indicates the capability of the system to evolve or change. Testability allows automated testing and validation for correctness without requiring manual checks. The last characteristic is reparability, which describes the possibility to bring the system back into a consistent and correct state, even if erroneous software was deployed. An architecture and software development patterns are evaluated to build an overall programming model that provides these software characteristics. The overall architecture is based on microservices, which facilitates decoupling and maintainability for the software as well as organizations. Command Query Responsibility Segregation (CQRS) decouples read from write operations and makes data changes explicit. With Event Sourcing, the system stores not only the current state, but all historic events. It provides a built-in audit trail and is able to reproduce different scenarios for troubleshooting and testing. A demo process is defined and implemented within multiple prototypes. The design of the prototypes is based on the programming model. They are built in Javascript and implement microservices, CQRS and Event Sourcing. The prototypes show and validate how the programming model provides the software characteristics. Software built with the programming model allows companies to iterate faster at scale. Since the programming model is suited for complex processes, the main limitation is that the validation is based on a demo process that is simpler, and the benefits are hard to quantify.
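The CQRS/Event Sourcing combination described here is easy to sketch. The thesis prototypes are in Javascript; for consistency with the other sketches on this page, the toy below is Python, with all names invented: the write side appends events to an append-only log, and the read side derives current state by folding over the log.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    payload: dict

@dataclass
class EventStore:
    # Append-only log: the write side stores events, never current state.
    log: list = field(default_factory=list)

    def append(self, event: Event):
        self.log.append(event)

def replay_balance(log) -> int:
    # Read-side projection: state is derived by folding over the log, which
    # also yields a built-in audit trail and time travel for debugging.
    balance = 0
    for e in log:
        if e.kind == "deposited":
            balance += e.payload["amount"]
        elif e.kind == "withdrawn":
            balance -= e.payload["amount"]
    return balance

store = EventStore()
store.append(Event("deposited", {"amount": 100}))   # command side writes events
store.append(Event("withdrawn", {"amount": 30}))
print(replay_balance(store.log))                    # query side folds them -> 70
```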
APA, Harvard, Vancouver, ISO, and other styles
11

Denis, Yvan. "Implémentation de PCM (Process Compact Models) pour l’étude et l’amélioration de la variabilité des technologies CMOS FDSOI avancées." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT045/document.

Full text
Abstract:
Recently, the race for miniaturization has slowed because of the technological challenges it entails. These obstacles include the growing impact of local and process variability, which stems from the increasing complexity of the manufacturing process and from miniaturization itself, in addition to the difficulty of further reducing the channel length. To address these challenges, new architectures that differ substantially from the traditional bulk architecture have been proposed. However, these new architectures require more effort to industrialize: increasing complexity and development time demand larger financial investments, so there is a real need to improve the development and optimization of devices. This work offers some directions towards these goals. The idea is to reduce the number of trials required to find the optimal manufacturing process, i.e., the process that yields a device whose performance and dispersion reach predefined targets. The approach developed in this thesis combines TCAD tools and compact models to build and calibrate what is called a PCM (Process Compact Model). A PCM is an analytical model that links the process parameters of a MOSFET to its electrical parameters. It draws on the benefits of both TCAD (since it relates process parameters directly to electrical parameters) and compact models (since the model is analytical and therefore fast to evaluate). A sufficiently predictive and robust PCM can be used to optimize the performance and global variability of the transistor through an appropriate optimization algorithm. This approach differs from classical development methods, which rely heavily on scientific expertise and successive trials to improve the device; it provides a deterministic and robust mathematical framework for the problem. The concept was developed, tested and applied to 28 nm and 14 nm FD-SOI transistors and to TCAD simulations. The results are presented, together with the recommendations needed to implement the technique at industrial scale. Some perspectives and applications are likewise suggested.
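The PCM idea, an analytical surrogate calibrated against TCAD and then handed to an optimizer, can be illustrated generically. The sketch below is not the thesis's PCM: the "TCAD" mapping, the polynomial form, the parameter names and the 0.33 V target are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcad_stand_in(x):
    # Invented stand-in for TCAD runs: maps two normalized process
    # parameters to a threshold-voltage-like electrical response.
    return 0.35 + 0.08 * x[..., 0] - 0.05 * x[..., 1] + 0.02 * x[..., 0] * x[..., 1]

# Calibrate the analytical PCM: a second-order polynomial response surface
# fitted by least squares to a handful of sampled "TCAD" points.
X = rng.uniform(-1, 1, size=(40, 2))
y = tcad_stand_in(X)
F = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)

def pcm(x0, x1):
    # Analytical PCM: cheap to evaluate, so it can drive an optimizer.
    return coef @ np.array([1, x0, x1, x0**2, x1**2, x0 * x1])

# Crude optimization: scan the process window for the point closest to an
# invented 0.33 V target (a real flow would add variability terms and use
# a proper optimization algorithm).
grid = np.linspace(-1, 1, 101)
best = min((abs(pcm(a, b) - 0.33), a, b) for a in grid for b in grid)
print("best process point:", best[1:], "error:", round(best[0], 4))
```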
APA, Harvard, Vancouver, ISO, and other styles
12

Bozanic, Mladen. "Design methods for integrated switching-mode power amplifiers." Thesis, University of Pretoria, 2011. http://hdl.handle.net/2263/26616.

Full text
Abstract:
While much time and many resources have been invested in transceiver design, the design of a power amplifier is, owing to the pace of a conventional engineering design process, often completed using scattered resources, not always in a methodological manner, and frequently even by an iterative trial-and-error process. In this thesis, a research question is posed that enables investigation of the possibility of streamlining the design flow for power amplifiers. After a thorough theoretical investigation of existing power amplifier design methods and modelling, the inductors inevitably used in power amplifier design were identified as a major obstacle to efficient design, even when example inductors are packaged in design HIT-Kits. The main contribution of this research is the engineering of an inductor design process, which in effect contributes towards enhancing conventional power amplifiers. This inductance search algorithm finds the highest-quality-factor configuration of a single-layer square spiral inductor within a certain tolerance, using formulae for the inductance and inductor parasitics of the traditional single-π inductor model. A further contribution of this research is a set of algorithms for the complete design of switch-mode (Class-E and Class-F) power amplifiers and their output matching networks. These algorithms make use of classic deterministic design equations so that component values can be calculated from input parameters including the required output power, centre frequency, supply voltage, and choice of class of operation. The hypothesis was satisfied for the SiGe BiCMOS S35 process from Austriamicrosystems (AMS). Several metal-3 and thick-metal inductors were designed using the abovementioned algorithm and compared with experimental results provided by AMS. Correspondence was established between designed, experimental and EM-simulated results, enabling inductors other than those with experimental results available from AMS to be qualified by means of EM simulations, with average relative errors of 3.7% for inductance and 21% for the Q-factor at its peak frequency. For a wide range of inductors, Q-factors of 10 and more were readily achieved. Furthermore, to complete the hypothesis testing, simulations were performed for a number of Class-E and Class-F amplifier configurations with HBTs with f_T greater than 60 GHz and a total emitter area of 96 μm² as driving transistors. For the complete PA system design (including inductors), simulations showed that switch-mode power amplifiers for a 50 Ω load at a 2.4 GHz centre frequency can be designed using the streamlined method of this research, with an output power about 6 dB below the target. This power loss was expected, since it can be attributed to non-ideal properties of the driving transistor and Q-factor limitations of the integrated inductors, which the routine's computations assumed ideal. Although these results were obtained for a single micro-process, it was further speculated that the outcome of this research is a general contribution, since the streamlined method can be used with a much wider range of CMOS and BiCMOS processes when power amplifiers operating in the low-gigahertz range are needed. This theory was confirmed by means of simulation and fabrication in a 180 nm BiCMOS process from IBM, the results of which are also presented. The work presented here was combined with algorithms for SPICE netlist extraction and spiral inductor layout extraction (CIF and GDSII formats). This secondary research outcome further contributed to the completeness of the design flow. All the above showed that the routine developed here is substantially better than the cut-and-try methods for power amplifier design found in the existing body of knowledge.
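The inductance search described above can be sketched generically. The code below is not Bozanic's routine: it uses the modified Wheeler inductance formula for a square planar spiral (Mohan et al., 1999) and a crude first-order Q estimate instead of the full single-π model with parasitics, and the geometry ranges and sheet resistance are invented.

```python
from math import pi

MU0 = 4e-7 * pi

def wheeler_inductance(n_turns, d_out, d_in):
    # Modified Wheeler formula for a square planar spiral (Mohan et al., 1999):
    # L = K1 * mu0 * n^2 * d_avg / (1 + K2 * rho), K1 = 2.34, K2 = 2.75.
    d_avg = (d_out + d_in) / 2
    rho = (d_out - d_in) / (d_out + d_in)
    return 2.34 * MU0 * n_turns**2 * d_avg / (1 + 2.75 * rho)

def estimate_q(n_turns, d_out, d_in, width, freq, r_sheet=0.01):
    # First-order Q = wL/R with a DC sheet-resistance loss estimate; a real
    # flow would add skin effect and substrate parasitics (single-pi model).
    length = 4 * n_turns * (d_out + d_in) / 2          # rough trace length
    r_series = r_sheet * length / width
    return 2 * pi * freq * wheeler_inductance(n_turns, d_out, d_in) / r_series

def search(l_target, tol, freq):
    # Exhaustive scan: best-Q square spiral within tolerance of the L target.
    best = None
    for n in range(1, 8):
        for d_out in (x * 1e-6 for x in range(100, 401, 20)):
            for fill in (0.3, 0.5, 0.7):
                d_in = d_out * (1 - fill)
                l_val = wheeler_inductance(n, d_out, d_in)
                if abs(l_val - l_target) / l_target <= tol:
                    q = estimate_q(n, d_out, d_in, width=10e-6, freq=freq)
                    if best is None or q > best[0]:
                        best = (q, n, d_out, d_in, l_val)
    return best

print(search(l_target=2e-9, tol=0.1, freq=2.4e9))
```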
Thesis (PhD(Eng))--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
13

Júnior, Luís Antônio Guimarães Bitencourt. "Modelagem do processo de falha em materiais cimentícios reforçados com fibras de aço." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3144/tde-16112015-150922/.

Full text
Abstract:
This work presents a numerical strategy developed using the Finite Element Method (FEM) to simulate the failure process of Steel Fiber Reinforced Cementitious Composites (SFRCCs). The material is described as a composite made up of three phases: a cementitious matrix (paste, mortar or concrete), discrete discontinuous fibers, and a fiber-matrix interface. A novel coupling scheme for non-matching finite element meshes has been developed to couple the independently generated meshes of the bulk cementitious matrix and of a cloud of discrete discontinuous fibers, based on the use of specially developed finite elements termed Coupling Finite Elements (CFEs). Using this approach, a non-rigid coupling procedure is proposed for modeling the complex nonlinear behavior of the fiber-matrix interface, adopting an appropriate constitutive damage model to describe the relation between the shear (adherence) stress and the relative sliding between the matrix and each fiber individually. This scheme has also been adopted to account for the presence of regular reinforcing bars in the analysis of reinforced concrete structural elements. The steel fibers are modeled using two-node finite elements (truss elements) with a one-dimensional elastoplastic constitutive model. They are positioned using an isotropic uniform random distribution, considering the wall effect of the mold. Continuous and discontinuous approaches are developed to model the brittle behavior of the bulk cementitious matrix. For the former, an isotropic damage model with two independent scalar damage variables is used to describe the composite behavior under tension and compression. The discontinuous approach is based on a mesh fragmentation technique that places degenerated solid finite elements between all regular (bulk) elements. In this case, a tensile damage constitutive model, compatible with the Continuum Strong Discontinuity Approach (CSDA), is proposed to predict crack propagation. To speed up the computation and increase the robustness of the continuum damage models used to simulate the failure processes in both strategies, an implicit-explicit integration scheme is used. Numerical analyses are performed throughout the presentation of the work. Initially, numerical examples with a single reinforcement are presented to validate the technique and to investigate the influence of the fibers' geometrical properties and their position relative to the crack surface. Then, more complex examples involving a cloud of steel fibers are considered. In these cases, special attention is given to the influence of the fiber distribution on the composite behavior relative to the cracking process. Comparisons with experimental results demonstrate that the application of the numerical tool for modeling the behavior of SFRCCs is very promising and that it may constitute an important tool for better understanding the effects of the different aspects involved in the failure process of this material.
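The implicit-explicit integration mentioned above can be shown in one dimension. This is a toy of the IMPL-EX idea, not the thesis's formulation: stress at each step is evaluated with an extrapolated internal variable, which keeps the step linear and robust, while the internal variable itself is updated implicitly; all material parameters are invented.

```python
E, r0 = 30e9, 1e-4                    # stiffness and damage threshold (invented)

def d_of(r):
    # Simple damage evolution: d grows once the strain-like internal
    # variable r exceeds the threshold r0.
    return 0.0 if r <= r0 else 1.0 - r0 / r

r_prev, r_prev2 = r0, r0              # history of the internal variable
for step in range(1, 11):
    strain = step * 2e-5                       # monotonically increasing load
    r_impl = max(r_prev, strain)               # implicit update of r
    r_extra = r_prev + (r_prev - r_prev2)      # IMPL-EX: extrapolated value
    stress = (1.0 - d_of(r_extra)) * E * strain  # stress uses extrapolation
    print(f"step {step}: strain={strain:.1e}  stress={stress/1e6:.2f} MPa")
    r_prev2, r_prev = r_prev, r_impl
```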
APA, Harvard, Vancouver, ISO, and other styles
14

Klinkmüller, Christopher. "Adaptive Process Model Matching: Improving the Effectiveness of Label-Based Matching through Automated Configuration and Expert Feedback." Doctoral thesis, 2016. https://ul.qucosa.de/id/qucosa%3A15640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

"ADAPTIVE LEARNING OF NEURAL ACTIVITY DURING DEEP BRAIN STIMULATION." Master's thesis, 2015. http://hdl.handle.net/2286/R.I.29727.

Full text
Abstract:
Parkinson's disease is a neurodegenerative condition diagnosed in patients with a clinical history and motor signs of tremor, rigidity and bradykinesia; the estimated number of patients living with Parkinson's disease around the world is seven to ten million. Deep brain stimulation (DBS) provides substantial relief of the motor signs of Parkinson's disease patients. It is an advanced surgical technique that is used when drug therapy is no longer sufficient. DBS alleviates the motor symptoms of Parkinson's disease by targeting the subthalamic nucleus with high-frequency electrical stimulation. This work proposes a behavior recognition model for patients with Parkinson's disease. In particular, an adaptive learning method is proposed to classify behavioral tasks of Parkinson's disease patients using local field potential and electrocorticography signals that are collected during DBS implantation surgeries. Unique patterns exhibited by these signals in a matched feature space lead to a distinction between motor and language behavioral tasks. Unique features are first extracted from deep brain signals in the time-frequency space using the matching pursuit decomposition algorithm. The Dirichlet process Gaussian mixture model then uses the extracted features to cluster the different behavioral signal patterns, without training or any prior information. The performance of the method is compared with other machine learning methods, and the advantages of each method are discussed under different conditions.
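The clustering step can be approximated with off-the-shelf tools. The sketch below is not the thesis pipeline: scikit-learn's variational Dirichlet-process Gaussian mixture stands in for the model, and the matching-pursuit features are replaced by synthetic two-dimensional data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-ins for time-frequency features from two behavioural
# conditions (illustrative only).
motor = rng.normal(loc=[2.0, 0.5], scale=0.3, size=(100, 2))
language = rng.normal(loc=[0.5, 2.0], scale=0.3, size=(100, 2))
features = np.vstack([motor, language])

# Dirichlet-process GMM: the truncated stick-breaking prior lets the model
# infer the effective number of clusters rather than fixing it in advance.
dpgmm = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(features)
labels = dpgmm.predict(features)
print("clusters actually used:", np.unique(labels))
```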
Masters Thesis Electrical Engineering 2015
APA, Harvard, Vancouver, ISO, and other styles