
Dissertations / Theses on the topic 'Algorithme du simplexe'


Consult the top 50 dissertations / theses for your research on the topic 'Algorithme du simplexe.'


1

Keraghel, Abdelkrim. "Étude adaptative et comparative des principales variantes dans l'algorithme de Karmarkar." PhD thesis, Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00332749.

Full text
Abstract:
After a description of Karmarkar's method, it is shown that the value of the displacement step can be substantially improved. The main practical difficulties of the method are discussed, in particular the assumption that the optimal value of the objective is known at the outset. Various extensions and variants are studied with the aim of relaxing this assumption.
2

Bouarfa, Abdelkader. "Méthodes de commande par allocation de convertisseurs statiques polyphasés, multi-niveaux : de la modélisation à la mise en oeuvre temps-réel." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30261/document.

Full text
Abstract:
In our work, we are interested in the control of power converters with a large number of switches. The development of multileg, multilevel topologies has opened access to high power and high harmonic quality. Beyond this rise in power, the special control of these devices brings the converter advanced abilities that are increasingly requested, such as active harmonic filtering, fault tolerance, active and reactive power management, High Voltage Direct Current (HVDC) links, etc. However, a higher number of switches in a conversion structure leads to a sharp growth in the number of control variables and degrees of freedom, and to a combinatorial explosion of the number of possible configurations. Synthesizing control laws with approaches traditionally designed for classical topologies, such as space vector modulation methods based on a geometrical representation of the converter, quickly becomes tedious for the new, more complex topologies. Moreover, the surplus switches bring strong control redundancies that are not necessarily exploited, or only arbitrarily. We propose a new control approach that is intended to be less dependent on the number of switches and free of the limitations inherent in geometrical modulation methods. Our approach first consists in the algebraic formulation of control problems that are generally under-determined, reflecting the presence of redundancies and degrees of freedom, and constrained, because the limitations on duty cycles are taken into account. Interestingly, these problems are similar to the so-called control allocation problems encountered in flight control, marine applications, and robotics.
Secondly, in order to compute a unique, optimized control solution at each switching period, we develop new allocation methods for power converters based on online numerical optimization using linear programming techniques. Consequently, the duty cycles are automatically optimized to satisfy the voltage references while respecting saturations and exploiting the available redundancies according to the current state of the converter. We highlight the properties naturally offered by our methods; in particular, all our modulation solutions maximally extend the linearity range of the converter. We propose allocation methods for the voltage or current control of various topologies: the four-leg two-level inverter, the multicellular flying-capacitor inverter, and the modular multilevel converter. For multicellular converters, our allocation methods automatically use the available degrees of freedom to provide very fast active balancing of the flying-capacitor voltages. Also, thanks to the algebraic formulation of the control constraints, our algorithms can take a switch fault into account, giving the converter fault tolerance from the control point of view.
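The allocation idea in this abstract, duty cycles as an under-determined, constrained linear program, can be sketched in a few lines. The sketch below is illustrative and not the thesis's formulation: the voltage references, the four-leg structure, and the objective of centring duty cycles around 0.5 (one way to maximize the linearity margin) are assumptions, solved here with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Voltage references expressed as duty-cycle differences d_k - d_n
# (three phase legs a, b, c against the fourth leg n). Illustrative values.
refs = np.array([0.4, -0.2, 0.1])

# Variables: x = [d_a, d_b, d_c, d_n, t], where t bounds the worst
# deviation of any duty cycle from 0.5; minimizing t centres the duty
# cycles and keeps the largest possible margin to the saturations.
c = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Equality constraints: d_k - d_n = refs[k] for the three phases.
A_eq = np.array([[1, 0, 0, -1, 0],
                 [0, 1, 0, -1, 0],
                 [0, 0, 1, -1, 0]], dtype=float)

# Inequalities encoding |d_i - 0.5| <= t for each of the four legs.
A_ub, b_ub = [], []
for i in range(4):
    row = [0.0] * 5; row[i] = 1.0; row[4] = -1.0
    A_ub.append(row); b_ub.append(0.5)        #  d_i - t <= 0.5
    row = [0.0] * 5; row[i] = -1.0; row[4] = -1.0
    A_ub.append(row); b_ub.append(-0.5)       # -d_i - t <= -0.5

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=refs,
              bounds=[(0, 1)] * 4 + [(0, None)], method="highs")
```

Minimizing the auxiliary variable t picks, among all feasible common-mode offsets (the free degree of freedom d_n), the one that keeps every duty cycle as far as possible from its saturation limits.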
3

Szeto, Raymond W. L. "Clamping-simplex methods : improved direct search simplex algorithms." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86829.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaf 66). By Raymond W. L. Szeto.
4

CORDEIRO, JOAQUIM PEDRO DE V. "NETWORK SIMPLEX: ALGORITHM AND IMPLEMENTATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=13219@1.

Full text
Abstract:
The current work intends to develop the Network Simplex Method for solving Minimum Cost Flow problems. The method is an adaptation of the primal Simplex Method in which the specific characteristics of the network underlying the problem are exploited by searching for the optimal solution within a finite number of spanning trees. The optimal spanning tree is obtained iteratively through successive structural improvements to each tree formed. The higher efficiency of the Network Simplex lies both in the fewer iterations needed to reach the optimum and in the higher speed of these iterations; it is therefore a powerful method for solving Minimum Cost Flow problems. Practical aspects of implementing the algorithm are also discussed, as well as an application of the algorithm, implemented in VBA (Visual Basic for Applications), to a practical problem by way of example.
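A full network simplex needs spanning-tree bookkeeping, so as a compact illustration of the minimum-cost flow problem it solves, here is the classical successive-shortest-path method instead, a deliberately simpler technique than the one in the dissertation (and not its VBA implementation); the example graph is made up.

```python
def min_cost_max_flow(n, edges, s, t):
    """Successive shortest paths: repeatedly push flow along the cheapest
    residual s-t path (Bellman-Ford handles negative residual costs)."""
    graph = [[] for _ in range(n)]

    def add(u, v, cap, cost):
        # store [target, residual capacity, cost, index of reverse edge]
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    for u, v, cap, cost in edges:
        add(u, v, cap, cost)

    flow = total_cost = 0
    while True:
        INF = float("inf")
        dist = [INF] * n; dist[s] = 0
        prev = [None] * n                      # (node, edge index) back-pointers
        for _ in range(n - 1):                 # Bellman-Ford over residual graph
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:                     # no augmenting path left
            break
        push, v = INF, t                       # bottleneck capacity on the path
        while v != s:
            u, i = prev[v]; push = min(push, graph[u][i][1]); v = u
        v = t
        while v != s:                          # apply the augmentation
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += dist[t] * push
    return flow, total_cost

# Toy network: (from, to, capacity, unit cost)
flow, cost = min_cost_max_flow(
    4, [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 3), (1, 2, 1, 1), (2, 3, 2, 1)],
    0, 3)
```

On this small network every arc is forced at the optimum, so the answer (flow 3 at cost 10) can be checked by hand.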
5

Aggarwal, Charu C., Haim Kaplan, and Robert E. Tarjan. "A Faster Primal Network Simplex Algorithm." Massachusetts Institute of Technology, Operations Research Center, 1996. http://hdl.handle.net/1721.1/5266.

Full text
Abstract:
We present a faster implementation of the polynomial-time primal simplex algorithm due to Orlin [23]. His algorithm requires O(nm min{log(nC), m log n}) pivots and O(n^2 m min{log(nC), m log n}) time. The bottleneck operations in his algorithm are performing the relabeling operations on nodes, selecting entering arcs for pivots, and performing the pivots. We show how to speed up these operations so as to yield an algorithm whose running time is O(nm log n) per scaling phase. We show how to extend the dynamic tree data structure in order to implement these algorithms. The extension may have other applications as well.
6

Luangpaiboon, Pongchanun. "A comparison of algorithms for automatic process optimisation." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324922.

Full text
7

Yan, Ping. "Theory of simple genetic algorithms." Thesis, University of Macau, 2000. http://umaclib3.umac.mo/record=b1446649.

Full text
8

Valkanova, Elena. "Algorithms for simple stochastic games." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003070.

Full text
9

Costa, Rogério Rodrigues Lacerda. "Algoritmo genético aplicado à formulação de ração para frangos de corte." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/74/74131/tde-23112017-134715/.

Full text
Abstract:
The objective of this project was to implement software for the formulation of broiler chicken feed using a Genetic Algorithm (GA). The generation of the initial population was directed, preventing the production of individuals with restrictive characteristics. Three experiments were carried out: the first to define the population size, the number of generations, and the parent selection method; the second to compare ration formulation using the GA with the Simplex method; and the third to verify the variability of the GA's results. Experiment 1 was performed in a completely randomized design, with treatments arranged in a 2 x 5 x 19 factorial scheme, assessing the following factors: parent selection method (roulette-wheel and tournament of three), population size (200, 360, 500, 1,000 and 1,500 individuals), and number of generations (10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 600, 700, 800, 900 and 1,000), totaling 190 treatments with 20 repetitions each, resulting in 3,800 observations. At each observation, the recorded fitness was submitted to analysis of variance and, when significant (P < 0.05), the Scott-Knott test (5%) was applied. In experiment 2, three rations were formulated: one by the Simplex method and two by the GA.
The feeds formulated with the GA used the population size, parent selection method, and number of generations defined in experiment 1. The GA produced rations with a mean deviation from the nutritional requirements of 0.34% for the ration formulated with the roulette-wheel method and 0.16% with the tournament of three; these differences are small and probably do not affect animal performance or carcass characteristics. The variability of the GA's results, due to its heuristic nature, was tested in experiment 3 through 100 runs of each parent selection method, roulette-wheel and tournament of three, using the same population size and number of generations as the rations formulated in experiment 2. The results show low dispersion in the data. In conclusion, the GA is an efficient optimization strategy for the formulation of broiler chicken feeds, since it closely approximates the exact nutritional requirement, with small variation and minimum cost.
10

Vernersson, Ante. "Adaptive Backprojection : Developing a simple reconstruction algorithm." Thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34098.

Full text
Abstract:
This group project aims to investigate the possibility of constructing a prototype micro-CT scanner, hopefully furthering the advancement in the field of small CT scanners, with the ultimate goal to provide a structure usable in an ambulance, to scan and send images to the hospital, so that treatment can commence directly upon arrival. This report deals with the matter of reconstructing the taken images to construct a three-dimensional model of the scanned object. An algorithm is developed in a manner understandable to those not familiar with traditional reconstruction techniques, simplifying the understanding of CT backprojection while also providing a useful algorithm to use along with the scanner setup. The working principles of the reconstruction algorithm are explained along with its development process, and the project ends with clear results in the form of an "Adaptive Backprojection" algorithm.
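The core idea of backprojection, smearing each measured projection back across the image plane and accumulating, can be shown with just two axis-aligned projections (a far cruder setup than any real CT geometry, and not the thesis's algorithm):

```python
import numpy as np

# A toy 2-D "object": a single bright voxel in an otherwise empty slice.
img = np.zeros((32, 32))
img[20, 11] = 1.0

# Two axis-aligned projections (0 and 90 degrees): row sums and column sums.
proj_rows = img.sum(axis=1)   # one value per row
proj_cols = img.sum(axis=0)   # one value per column

# Unfiltered backprojection: smear each projection back across the slice
# along the direction it was taken, then accumulate the two smears.
recon = proj_rows[:, None] + proj_cols[None, :]

peak = np.unravel_index(np.argmax(recon), recon.shape)
```

Even with only two angles the accumulated smears already peak at the object's position; real reconstructions use many angles plus filtering to suppress the star-shaped smearing artifacts.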
11

Koduru, Praveen. "Multi-objective GA-simplex hybrid algorithm for gene differential equation modeling /." Search for this dissertation online, 2006. http://wwwlib.umi.com/cr/ksu/main.

Full text
12

Msimanga, Ntombiyomusa D. G. "The resolution of the Gaussian fused peaks using the simplex algorithm." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1985. http://digitalcommons.auctr.edu/dissertations/3687.

Full text
Abstract:
The simplex algorithm is used to perform curve resolution of fused peak systems. The Gaussian function is chosen as the peak model. The algorithm basically involves moving from a region of poor response to the region of best response. The movement is controlled by changing the guessed parameter values of the function and using the sum of the squares of the residuals as the calculated response. Resolution is achieved when the deviation of the observed fused peak from the calculated model peak is minimal. The program GAUSSEX, which implements the simplex algorithm, was tested with synthesized data as well as experimental data. The fast Fourier transform technique was used for digital filtering when the peak array was noisy. GAUSSEX plots the synthesized or experimental data, the smoothed data, the fitted data, and the components of each spectrum. The results for both synthesized and experimental spectra were good; the deviations between the actual parameters and the computed parameters were less than 5%.
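A modern recreation of GAUSSEX's core step, fitting a sum of Gaussians by minimizing the sum of squared residuals with the Nelder-Mead simplex, might look like this; the peak parameters, starting guess, and tolerances are invented for illustration, and SciPy's optimizer stands in for the original program.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic fused peak: two overlapping Gaussians, noise-free for clarity.
x = np.linspace(0.0, 10.0, 400)
true_params = (1.0, 4.0, 0.6, 0.8, 5.5, 0.7)   # amp, mu, sigma for each peak
y = gaussian(x, *true_params[:3]) + gaussian(x, *true_params[3:])

def response(p):
    # The "calculated response": sum of squared residuals between the
    # observed fused peak and the two-Gaussian model.
    model = gaussian(x, *p[:3]) + gaussian(x, *p[3:])
    return np.sum((y - model) ** 2)

# Guessed parameter values, refined by the downhill simplex search.
guess = np.array([0.9, 3.7, 0.5, 0.7, 5.8, 0.9])
res = minimize(response, guess, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12,
                        "maxiter": 20000, "maxfev": 20000})
```

With a reasonable starting guess and noise-free data the simplex walks the residual surface down to essentially zero, recovering both peak positions.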
13

NAYAK, VARUN R. "A SURVEY ON ALGORITHMS FOR SOLVING LINEAR INTEGER TYPE CONSTRAINTS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1022176838.

Full text
14

Goward, Russell A. "A simple algorithm for principalization of monomial ideals /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3012972.

Full text
15

Donner, Quentin. "Correction de l'atténuation et du rayonnement diffusé en tomographie d'émission à simples photons." Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10155.

Full text
Abstract:
The aim of single photon emission tomography is to produce a functional image of an organ. To do so, a radioelement that fixes in the organ is administered to the patient, and the distribution of the radioelement is then computed from measurements of the emitted radiation. A non-negligible proportion of this radiation, however, interacts with matter. The goal of this work is to take these interactions into account in order to reconstruct more accurately. After describing the main characteristics of the acquisition system, we address the reconstruction of the emission distribution from attenuated projections, in particular for a cone-beam acquisition geometry. Several prediction-correction methods are frequently used, but their convergence has not been established. In fact, we show theoretically that the simplest of these methods diverges in a particular case. Attenuation correction can also be handled with the preconditioned gradient method. We propose several preconditioners, which lead to different reconstruction algorithms. These algorithms are closely analogous to the prediction-correction algorithms, but have the advantage of being convergent. We conclude this chapter by validating a prediction-correction method, that of Morozumi, on simulated and experimental data. We then address the problem of determining the attenuating object following two approaches. The first relies on measurements of the direct radiation: the emission is first reconstructed without attenuation correction, and an ellipsoid is then fitted to the outer contours of this image. The second approach is based on measurements of the scattered radiation: we study the possibility of using these measurements to establish an attenuation map and we highlight the difficulties associated with this problem.
We finish by examining how the attenuation map can be used to correct for the effects of scattered radiation. We propose a prediction-correction method, discuss its convergence, and compare it with a classical method on experimental data.
16

Ochoa, Gabriela. "Error thresholds and optimal mutation rates in genetic algorithms." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341070.

Full text
17

Sheng, Yi Qian. "Modifications to the SIMPLE method for buoyancy-driven flows /." *McMaster only, 1999.

Find full text
18

Aziz, Haris. "Algorithmic and complexity aspects of simple coalitional games." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3113/.

Full text
Abstract:
Simple coalitional games are a fundamental class of cooperative games and voting games which are used to model coalition formation, resource allocation and decision making in computer science, artificial intelligence and multiagent systems. Although simple coalitional games are well studied in the domain of game theory and social choice, their algorithmic and computational complexity aspects have received less attention till recently. The computational aspects of simple coalitional games are of increased importance as these games are used by computer scientists to model distributed settings. This thesis fits in the wider setting of the interplay between economics and computer science which has led to the development of algorithmic game theory and computational social choice. A unified view of the computational aspects of simple coalitional games is presented here for the first time. Certain complexity results also apply to other coalitional games such as skill games and matching games. The following issues are given special consideration: influence of players, limit and complexity of manipulations in the coalitional games and complexity of resource allocation on networks. The complexity of comparison of influence between players in simple games is characterized. The simple games considered are represented by winning coalitions, minimal winning coalitions, weighted voting games or multiple weighted voting games. A comprehensive classification of weighted voting games which can be solved in polynomial time is presented. An efficient algorithm which uses generating functions and interpolation to compute an integer weight vector for target power indices is proposed. Voting theory, especially the Penrose Square Root Law, is used to investigate the fairness of a real life voting model. Computational complexity of manipulation in social choice protocols can determine whether manipulation is computationally feasible or not. 
The computational complexity and bounds of manipulation are considered from various angles including control, false-name manipulation and bribery. Moreover, the computational complexity of computing various cooperative game solutions of simple games in different representations is studied. Certain structural results regarding least-core payoffs extend to the general monotone cooperative game. The thesis also studies a coalitional game called the spanning connectivity game. It is proved that whereas computing the Banzhaf values and Shapley-Shubik indices of such games is #P-complete, there is a polynomial-time combinatorial algorithm to compute the nucleolus. The results have interesting significance for optimal strategies in the wiretapping game, which is a noncooperative game defined on a network.
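For a sense of the objects involved, the normalized Banzhaf index of a small weighted voting game can be computed by brute-force enumeration; this naive sketch is exponential in the number of players, unlike the generating-function methods discussed in the thesis.

```python
from itertools import combinations

def banzhaf(quota, weights):
    """Normalized Banzhaf indices of the weighted voting game [quota; weights],
    by enumerating every coalition (only feasible for small player counts)."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                w = sum(weights[j] for j in coalition)
                # player i "swings" if the coalition loses without i, wins with i
                if w < quota <= w + weights[i]:
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

idx = banzhaf(51, [49, 49, 2])
```

For [51; 49, 49, 2] all three players turn out to be equally powerful despite the very uneven weights, a standard illustration of why raw weights do not measure influence.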
19

Rafique, Emran. "On asymptotic algorithms for approximating arbitrary polygons by simpler convex polygons." Carleton University dissertation (Computer Science), Ottawa, 1993.

Find full text
20

Júnior, José Elievam Bessa. "Caracterização do fluxo de tráfego em rodovias de pista simples do Estado de São Paulo." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18144/tde-04122009-150455/.

Full text
Abstract:
The goal of this research was to characterize the fundamental relationships of traffic flow on two-lane rural highways in the state of São Paulo through models based on parameters that reflect the quality of service and that can be obtained from direct observation of traffic flows. To reach this goal, data were obtained from field observations of traffic and from detectors installed on the roads. The data collected from direct observation were used to calibrate and validate a microscopic traffic simulation model, using an automatic procedure based on a genetic algorithm. The recalibrated model was found to be able to reproduce traffic sensor data as well as the traffic flow characteristics observed in the 11 road segments studied in this research. A procedure for synthetic data generation, which uses a microsimulation model and a genetic algorithm, was proposed. Synthetic data obtained through this procedure were used to develop models that describe the relationships between flow rate, traffic stream speed and percent time spent following (PTSF) for two-lane roads in the state of São Paulo. These models could replace those used in the HCM-2000 for quality-of-service analysis of two-lane roads in São Paulo. New fundamental relationships that better reflect the operational conditions on local two-lane roads were also studied: a concave speed-flow relationship and an exponential PTSF-flow model. Five alternatives to PTSF were studied and correlated with one-way and two-way flows.
Among these, a novel definition of PTSF, based on the ratio of the average number of headways within platoons to the average number of headways between platoons, was found to be the most accurate (90% of cases), adopting the HCM-2000 criteria. Thus, this new measure could be used to evaluate the quality of service on two-lane rural highways.
21

Sandberg, Roland. "Generation of floating islands using height maps." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3183.

Full text
Abstract:
A floating island was generated using two height maps, one for the bottom of the island and one for the top. The island's vertices were displaced along the Y axis using an algorithm called Simplex noise. To avoid the texture stretching that appears when displacing the vertices, an algorithm called "tri-planar projection" was implemented. After the vertices had been displaced, the normals were smoothed using normal smoothing. An algorithm here called "the AVG algorithm" was created to smooth the jagged edges that appear around the island after it has been generated. The results of the AVG algorithm were judged by a group of 26 participants to see whether any jagged edges were perceived. They answered a survey containing seven pictures, each picture having one more iteration of the AVG algorithm than the last, starting from 0 iterations. Five iterations proved to be the best.
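The abstract does not give the AVG algorithm's exact formula; one plausible reading, averaging each rim height with its ring neighbours over several iterations, is sketched below. The three-point kernel and the ring representation are assumptions, not the thesis's definition.

```python
def avg_smooth(heights, iterations):
    """Hypothetical 'AVG' pass: each iteration replaces every rim height
    with the mean of itself and its two neighbours on the closed ring."""
    h = list(heights)
    n = len(h)
    for _ in range(iterations):
        # Jacobi-style update: the new ring is built from the old values.
        h = [(h[i - 1] + h[i] + h[(i + 1) % n]) / 3.0 for i in range(n)]
    return h

rim = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # maximally jagged rim profile
smooth = avg_smooth(rim, 5)
```

On this worst-case profile each pass cuts the neighbour-to-neighbour jump by a factor of three while preserving the mean height, consistent with the survey's finding that a handful of iterations is enough.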
APA, Harvard, Vancouver, ISO, and other styles
22

SABNIS, SUDEEP SUHAS. "AN APPROACH TO FACILITATING VERIFICATION OF LINEAR CONSTRAINTS." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1066407869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Gunning, Robin. "A performance comparison of coverage algorithms for simple robotic vacuum cleaners." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229676.

Full text
Abstract:
Cheap automatic robotic vacuum cleaners keep their cost down by having fewer sensors, and many of them rely on a single frontal bumper sensor; this makes it important to get good coverage of a room with no prior knowledge of that room. This paper investigates whether the boustrophedon algorithm is enough to get good coverage of a simple room with a maximum of two pieces of furniture and only 90-degree corners. A graphical simulation was made to test the algorithms commonly used in cheap automatic robotic vacuum cleaners and to compare them with the result of using only boustrophedon. The results show that the best algorithms are the non-deterministic random walk and the combination of all algorithms. Boustrophedon tends to get stuck when the room is not empty and cleans only half the room when starting in the middle, while being the fastest and achieving the most coverage in an empty room when starting in a corner.
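The boustrophedon pattern itself is easy to state on an occupancy grid; the sketch below only illustrates the sweep order (a real low-cost robot has no map and only a bumper sensor, so this is not the on-robot implementation):

```python
def boustrophedon(grid):
    """Sweep a rectangular occupancy grid row by row, alternating direction
    on each row (the 'ox-plough' pattern), skipping obstacle cells marked 1.
    Returns the visited free cells in sweep order."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        for c in cols:
            if row[c] == 0:
                path.append((r, c))
    return path
```

On an empty grid this visits every cell exactly once, which matches the paper's observation that boustrophedon is fastest in an empty room.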
APA, Harvard, Vancouver, ISO, and other styles
24

Matos, Jody Maick Araujo de. "Graph based algorithms to efficiently map VLSI circuits with simple cells." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174523.

Full text
Abstract:
This thesis introduces a set of graph-based algorithms for efficiently mapping VLSI circuits using simple cells. The proposed algorithms first focus on effectively minimizing the number of logic elements implementing the synthesized circuit. Then, a significant effort is devoted to minimizing the number of inverters in between these logic elements. Finally, this logic representation is mapped into a circuit comprised of only two-input NANDs and NORs, along with the inverters. Two-input XORs and XNORs can also be optionally considered. As sequential circuits are also considered in this work, flip-flops are taken into account as well. Additionally, with high-effort optimization on the number of logic elements, the generated circuits may contain some cells with unfeasible fanout for current technology nodes. In order to fix these occurrences, we propose an area-oriented, level-aware algorithm for fanout limitation. The proposed algorithms were applied over a set of benchmark circuits and the obtained results have shown the usefulness of the method. We show that efficient implementations in terms of inverter count, transistor count, area, power and delay can be generated from circuits with a reduced number of both simple cells and inverters, combined with XOR/XNOR-based optimizations. 
The proposed buffering algorithm can handle all unfeasible fanout occurrences, while (i) optimizing the number of added inverters; and (ii) assigning cells to the inverter tree based on their level criticality. When comparing with academic and commercial approaches, we are able to simultaneously reduce the average number of inverters, transistors, area, power dissipation and delay up to 48%, 5%, 5%, 5%, and 53%, respectively. As the adoption of a limited set of simple standard cells have been showing benefits for a variety of modern VLSI circuits constraints, such as layout regularity, routability constraints, and/or ultra low power constraints, the proposed methods can be of special interest for these applications. Additionally, some More-than-Moore applications, such as printed electronics designs, can also take benefit from the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
25

Júnior, José Elievam Bessa. "Medidas de desempenho para avaliação da qualidade de serviço em rodovias de pista simples no Brasil." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18144/tde-10062015-102520/.

Full text
Abstract:
The Highway Capacity Manual 2010 uses Percent-Time-Spent-Following (PTSF) and Average Travel Speed (ATS) to estimate the level of service on two-lane rural highways. As it is almost impossible to observe PTSF directly in the field, the literature suggests alternative measures of effectiveness (MOEs) that can be obtained from traffic stream parameters. The objective of this thesis was to analyze MOEs that could adequately describe the quality of service on two-lane rural highways in Brazil. Traffic data collected on several roads in the state of São Paulo were used to calibrate and validate the traffic simulation model CORSIM, using a Genetic Algorithm (GA). The recalibrated CORSIM was used to create a synthetic set of traffic data, comprising a wide range of traffic flows and road geometries. Using this synthetic data, several models relating PTSF to "directly observable" traffic parameters were developed: percent following (PF), as in the HCM2010; the shockwave theory model proposed by Pursula (1995); the Laval (2006) moving bottleneck model; the Polus and Cohen (2009) queueing model; and a polynomial model based on PF and other variables. PTSF estimates produced by these models diverged significantly from PTSF values produced by CORSIM, suggesting the need for a new measure of effectiveness. Thus, nine alternative MOEs were analyzed and models relating these MOEs to directional traffic flow were fitted, using the synthetic traffic data set. 
Comparisons between the values obtained from these models and from the field indicated that three MOEs (average travel speed of cars, density for cars and follower density) could be used to create level of service criteria for two-lane rural highways in Brazil.
APA, Harvard, Vancouver, ISO, and other styles
26

Lima, Júnior Marcelo Eustáquio Soares de. "As diferentes interpretações do algoritmo simplex na resolução de problemas de programação linear." reponame:Repositório Institucional da UnB, 2015. http://repositorio.unb.br/handle/10482/18973.

Full text
Abstract:
Dissertação (mestrado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Matemática, Programa de Mestrado Profissional em Matemática em Rede Nacional, 2015.<br>The main idea of this work was to apply linear programming content in secondary education in a contextualized way, using the software GeoGebra and Excel. The work adopts a problem-solving methodology, starting from examples that seek optimization through linear programming. The experiment's main objective is to describe the approach used in the implementation carried out from these problems.
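The graphical method taught through GeoGebra can be mirrored in code: for a two-variable linear program, the optimum (when it exists and is bounded) lies at a vertex of the feasible polygon, so enumerating constraint intersections and keeping the feasible ones suffices. A minimal sketch, not taken from the dissertation; the constraint format is an assumption:

```python
from itertools import combinations

def lp_vertices(constraints):
    """Two-variable LP by vertex enumeration: each constraint (a, b, c)
    means a*x + b*y <= c.  Returns the corners of the feasible polygon."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel boundary lines
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule for the crossing
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            pts.append((x, y))
    return pts

def maximize(objective, constraints):
    """Evaluate the objective (px, py) at each feasible vertex and return
    the best vertex and its value."""
    px, py = objective
    best = max(lp_vertices(constraints), key=lambda p: px * p[0] + py * p[1])
    return best, px * best[0] + py * best[1]
```

For instance, maximizing 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0 gives the vertex (2, 2) with value 10, the same answer the graphical method finds.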
APA, Harvard, Vancouver, ISO, and other styles
27

Modtkoski, Heloisa Milena. "Conceito matemático X algoritmo : construção do conhecimento ou simples mecanização?" reponame:Repositório Institucional da UFPR, 2016. http://hdl.handle.net/1884/46146.

Full text
Abstract:
Orientadora: Profª. Drª. Ettiéne Cordeiro Guérios<br>Dissertação (mestrado) - Universidade Federal do Paraná, Setor de Educação, Programa de Pós-Graduação em Educação. Defesa: Curitiba, 23/03/2016<br>Inclui referências: f. 153-158<br>Abstract: This study presents the results of qualitative research of an interpretative nature, whose aim is to identify whether students at an elementary school in Curitiba conceptually understand the syllabus content of 1st- and 2nd-degree polynomial equations or solve them mechanically by understanding only their algorithm. The instruments for data collection are lists of exercises, one involving the direct application of 1st- and 2nd-degree equations and the other involving problems on the same equations; they were built on theoretical elements organized along axes called construction of knowledge, mathematical concept and algorithm, problem solving, and school algebra. The categories for data analysis come from the theoretical content of the algebraic field and from our theoretical references. We analyzed the data according to the following provisional categories, organized from Bernard and Cohen (1995), for 1st-degree equations: method of undoing, operational inverses, reversibility of a process, invertible steps, arithmetic resolutions and algebraic resolutions. For 2nd-degree equations, the categories are: completing the square, Bhaskara's formula and the relation between the coefficients. The instruments were applied to 29 students, followed by interviews with some of them, according to the solutions they presented. 
After this application, the empirical data were analyzed in the light of David Ausubel's Theory of Meaningful Learning, elements of the Theory of Complexity in the perspective of Edgar Morin (2010), and the concept of experience of Jorge Larrosa (2002) and Ettiène Guérios (2002), in which fragmentation, the part-whole relation and circularity-linearity constituted points of interpretive approximation between the categories. Keywords: Mathematical concept. Algorithm. Knowledge construction. Mechanization.
APA, Harvard, Vancouver, ISO, and other styles
28

Leydold, Josef, and Wolfgang Hörmann. "A Sweep-Plane Algorithm for Generating Random Tuples in Simple Polytopes." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/476/1/document.pdf.

Full text
Abstract:
A sweep-plane algorithm by Lawrence for convex polytope computation is adapted to generate random tuples on simple polytopes. In our method an affine hyperplane is swept through the given polytope until a random fraction (sampled from a proper univariate distribution) of the volume of the polytope is covered. Then the intersection of the plane with the polytope is a simple polytope with smaller dimension. In the second part we apply this method to construct a black-box algorithm for log-concave and T-concave multivariate distributions by means of transformed density rejection. (author's abstract)<br>Series: Preprint Series / Department of Applied Statistics and Data Processing
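The sweep-plane idea is easiest to see in the smallest example, a triangle: the fraction of area covered by the sweep line x = t has a closed form, so inverting it at a uniform random fraction gives the swept coordinate, and the second coordinate is then uniform on the cut segment. A minimal sketch (the triangle and function names are illustrative, not the authors' code):

```python
import random

def sample_triangle(rng=random):
    """Sweep-plane sampling of a uniform point in the triangle with vertices
    (0,0), (1,0), (0,1).  The line x = t covers the area fraction
    F(t) = 2t - t^2; inverting F at a uniform u gives t = 1 - sqrt(1 - u),
    and y is then drawn uniformly on the intersection segment [0, 1 - t]."""
    u = rng.random()
    t = 1.0 - (1.0 - u) ** 0.5
    y = rng.random() * (1.0 - t)
    return t, y
```

In the paper's general method the cut is itself a lower-dimensional simple polytope, so the same two steps recurse over the dimensions.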
APA, Harvard, Vancouver, ISO, and other styles
29

Martin, Andrew K. "A simple primal algorithm for intersecting 3-polyhedra in linear time." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30114.

Full text
Abstract:
This thesis presents, in full, a simple linear-time algorithm for intersecting two convex 3-polyhedra P and Q. It differs from the first such algorithm, due to Chazelle, in that it operates entirely in primal space, whereas Chazelle's algorithm relies heavily on duality transforms. We use the hierarchical representations of polyhedra due to Dobkin and Kirkpatrick to induce cell complexes between coarse approximations P_k and P^k of P satisfying P_k ⊆ P ⊆ P^k, and similar approximations Q_k and Q^k of Q satisfying Q_k ⊆ Q ⊆ Q^k. We show that the structure of such complexes allows intersection queries to be answered efficiently. In particular, the sequence of cells intersected by a ray can be identified in time proportional to the length of the sequence. The algorithm operates by recursively computing the intersections P_k ∩ Q_k and P^k ∩ Q^k. Then the edges of the union of the approximations P ∩ Q^k and Q ∩ P^k are traversed by tracing their intersection with the two cell complexes. We show that each such edge can be traversed in constant time. In the process, most of the edges of P ∩ Q which lie simultaneously on the boundary of P and Q will be traced. We show that the total time needed to construct those which remain is linear in the size of P and Q. Though based on the same general principles, the algorithm presented here is somewhat simpler than that described by Chazelle, which uses only the cell complexes induced by the inner hierarchical representations of P and Q. By extending Chazelle's search structure to the space exterior to the given polyhedra, we avoid having to operate simultaneously in primal and dual spaces. This permits us to conceptualise the algorithm as traversing the edges of the boundary of (P ∩ Q^k) ∪ (Q ∩ P^k). As a side effect, we avoid one half of Chazelle's recursive calls, which leads to a modest improvement in the asymptotic constants.
APA, Harvard, Vancouver, ISO, and other styles
30

Bottura, Heitor Miranda. "Uma família de algoritmos hermitianos para a integração direta das equações de dinâmica das estruturas." Universidade de São Paulo, 1997. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-20032018-105806/.

Full text
Abstract:
A family of one-step methods for direct numerical integration in structural dynamic analysis is derived. Asymptotic annihilation and arbitrary truncation-error order are attained. Hermitian expressions are used in the difference equations involved in the problem description. Unconditionally stable members, from first up to eighth order, are presented. A spectral analysis is performed for these cases and a single-degree-of-freedom problem is solved. The solution is compared with those given by other methods, and the results suggest good performance in practical applications.
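The hermitian family itself is not reproduced here, but its two target properties, unconditional stability and asymptotic annihilation, are both exhibited by the simplest one-step implicit scheme, backward Euler, sketched below for an undamped single-degree-of-freedom oscillator (an illustration, not the thesis's algorithm):

```python
def backward_euler_sdof(x0, v0, omega, dt, steps):
    """One-step implicit integration of x'' + omega^2 x = 0 in first-order
    form.  Backward Euler is unconditionally stable and asymptotically
    annihilating (its spectral radius tends to 0 as dt grows), the two
    properties targeted by the algorithm family above.  Solving the implicit
    2x2 system x_{n+1} = x_n + dt*v_{n+1}, v_{n+1} = v_n - dt*omega^2*x_{n+1}
    gives the closed-form update below."""
    x, v = x0, v0
    w2 = omega * omega
    det = 1.0 + dt * dt * w2
    history = [(x, v)]
    for _ in range(steps):
        x_new = (x + dt * v) / det
        v_new = (v - dt * w2 * x) / det
        x, v = x_new, v_new
        history.append((x, v))
    return history
```

The numerical damping is visible immediately: the discrete "energy" x² + v² shrinks by the factor 1/det every step, which is exactly the annihilation behaviour (here at first-order accuracy only; the thesis's higher-order members damp far less per step at small dt).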
APA, Harvard, Vancouver, ISO, and other styles
31

Sagaser, Michael Bernard. "A computational comparison of the primal simplex and relaxation algorithms for solving minimum cost flow networks." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/26936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Antoni, Vinicius de. "An asynchronous algorithm to improve scheduling quality in the multiagent simple temporal problem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/89798.

Full text
Abstract:
In order to schedule an activity that depends on other people, we very often end up wasting precious time trying to find compatible times and evaluating whether they are accepted by everyone involved. Even though modeling and solving multiagent scheduling problems seem completely understood and several algorithms can be found in the literature, one limitation still stands: how to find a compatible time slot for an activity shared by many users without requiring the users themselves to spend time going through their calendars and choosing time slots until everybody agrees. 
The main contribution of this work is an algorithm called Asynchronous Time Finder (ATF), based on Asynchronous Backtracking (ABT), that enables applications to find compatible times when scheduling shared activities among several users while requiring minimal user interaction. This dissertation starts by revisiting the Simple Temporal Problem (STP) and its multiagent version (MaSTP), then shows how they can be used to solve the problem of managing agendas, and finally presents the ATF, along with an experimental evaluation and an analysis of its complexity.
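The Simple Temporal Problem underlying this approach has a classical consistency check: encode each interval constraint in a distance graph and look for negative cycles with all-pairs shortest paths. A minimal sketch of that standard technique (not the ATF itself):

```python
def stp_consistent(n, constraints):
    """Simple Temporal Problem check over n time points.  Each constraint
    (i, j, lo, hi) means lo <= t_j - t_i <= hi.  Build the distance graph
    (edge i->j of weight hi, edge j->i of weight -lo) and run
    Floyd-Warshall; the STP is consistent iff no negative cycle appears,
    i.e. no diagonal entry goes negative."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)     # t_j - t_i <= hi
        d[j][i] = min(d[j][i], -lo)    # t_i - t_j <= -lo
    for k in range(n):
        for a in range(n):
            for b in range(n):
                if d[a][k] + d[k][b] < d[a][b]:
                    d[a][b] = d[a][k] + d[k][b]
    return all(d[i][i] >= 0 for i in range(n))
```

In a calendar setting the time points are event start/end instants, so two meetings that each demand the same one-hour slot show up as a negative cycle.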
APA, Harvard, Vancouver, ISO, and other styles
33

Yue, Liming. "A simple algorithm for designing control systems and its applications in robotics." Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176233375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Skjelnes, Mika. "Implementation and Specification of a Simple Polygon Triangulation Algorithm in Isabelle/HOL." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448536.

Full text
Abstract:
Unexpected behaviour in software can be both expensive and time-consuming to resolve. Unit testing is a common method used to gain confidence in the correctness of implementations; the basic idea is to simulate a finite set of input values and check whether the program produces the expected outputs. This method is far from perfect, as bugs remain present and a great concern in software. Formal verification has in some cases been successfully used to prove correctness of programs when exhaustive unit testing was infeasible. This approach relies on formal methods to prove whether the I/O behaviour of the implementation is equivalent to its formal specification. Computational geometry is a field that has become increasingly relevant in industry. Numerous algorithms in this field have a multitude of applications and are used to help solve real-life problems. As the systems using geometry algorithms scale, the need for their components to function properly grows more critical. Formal verification can be a great tool for this task, as it can prove the correctness of the components, essentially eliminating implementation bugs. Little research is, however, being done on formal verification in computational geometry, even though it could supplement the flaws of unit testing, as seen in some other fields. In this project, the first few steps of formal verification are performed. A program that triangulates a simple polygon is implemented in Isabelle/HOL, followed by a formal specification of the program. In other words, the pre- and postconditions for simple polygon triangulation are specified, which could be used for verification of the implementation. The results show that formal verification is applicable to this program, as the expected I/O behaviour of the program was naturally translated into a formal specification.
APA, Harvard, Vancouver, ISO, and other styles
35

Yue, Liming. "A simple algorithm for designing control systems and its applications in robotics." Ohio University / OhioLINK, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1176233375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Woods, Walt. "The Design of a Simple, Spiking Sparse Coding Algorithm for Memristive Hardware." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2721.

Full text
Abstract:
Calculating a sparse code for signals with high dimensionality, such as high-resolution images, takes substantial time to compute on a traditional computer architecture. Memristors present the opportunity to combine storage and computing elements into a single, compact device, drastically reducing the area required to perform these calculations. This work focused on the analysis of two existing sparse coding architectures, one of which utilizes memristors, as well as the design of a new, third architecture that employs a memristive crossbar. These architectures implement either a non-spiking or spiking variety of sparse coding based on the Locally Competitive Algorithm (LCA) introduced by Rozell et al. in 2008. Each architecture receives an arbitrary number of input lines and drives an arbitrary number of output lines. Training of the dictionary used for the sparse code was implemented through external control signals that approximate Oja's rule. The resulting designs were capable of representing input in real-time: no resets would be needed between frames of a video, for instance, though some settle time would be needed. The spiking architecture proposed is novel, emphasizing simplicity to achieve lower power than existing designs. The architectures presented were tested for their ability to encode and reconstruct 8 x 8 patches of natural images. The proposed network reconstructed patches with a normalized, root-mean-square error of 0.13, while a more complicated CMOS-only approach yielded 0.095, and a non-spiking approach yielded 0.074. Several outputs competing for representation of the input was shown to improve reconstruction quality and preserve more subtle components in the final encoding; the proposed algorithm lacks this feature. Steps to address this were proposed for future work by scaling input spikes according to the current expected residual, without adding much complexity. 
The architectures were also tested with the MNIST digit database, passing a sparse code onto a basic classifier. The proposed architecture scored 81% on this test, a CMOS-only spiking variant scored 76%, and the non-spiking algorithm scored 85%. Power calculations were made for each design and compared against other publications. The overall findings showed great promise for spiking memristor-based ASICs, consuming only 28% of the power used by non-spiking architectures and 6.6% as much power as a CMOS-only spiking architecture on this task. The spike-based nature of the novel design was also parameterized into several intuitive parameters that could be adjusted to prefer either performance or power efficiency. The design and analysis of architectures for sparse coding should greatly reduce the amount of future work needed to implement an end-to-end classification pipeline for images or other signal data. When lower power is a primary concern, the proposed architecture should be considered as it surpassed other published algorithms. These pipelines could be used to provide low-power visual assistance, highlighting objects within high-definition video frames in real-time. The technology could also be used to help self-driving cars identify hazards more quickly and efficiently.
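The LCA dynamics these architectures implement can be stated compactly; below is a hedged, dense-Python sketch of the non-spiking variant (the parameter values and the tiny dictionary are illustrative, and a crossbar would of course compute the inner products in analog):

```python
def lca(dictionary, signal, lam=0.1, tau=10.0, steps=200):
    """Locally Competitive Algorithm (Rozell et al., 2008), dense sketch:
    membrane potentials u follow  du = (1/tau) * (b - u - G a),
    with drive b = D s and lateral inhibition G = D D^T - I for a
    dictionary D whose rows are atoms; a = T_lam(u) is soft thresholding.
    Returns the sparse code a."""
    m, n = len(dictionary), len(signal)
    b = [sum(dictionary[i][k] * signal[k] for k in range(n)) for i in range(m)]
    G = [[sum(dictionary[i][k] * dictionary[j][k] for k in range(n))
          - (1.0 if i == j else 0.0) for j in range(m)] for i in range(m)]
    u = [0.0] * m
    a = [0.0] * m
    for _ in range(steps):
        # soft threshold: active outputs inhibit their competitors
        a = [ui - lam if ui > lam else (ui + lam if ui < -lam else 0.0)
             for ui in u]
        for i in range(m):
            inhibition = sum(G[i][j] * a[j] for j in range(m))
            u[i] += (b[i] - u[i] - inhibition) / tau
    return a
```

With an orthonormal dictionary the inhibition term vanishes and each coefficient simply relaxes to the soft-thresholded projection, which makes the competition mechanism easy to check in isolation.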
APA, Harvard, Vancouver, ISO, and other styles
37

Le, Thanh Tam. "Geometry-Aware Learning Algorithms for Histogram Data Using Adaptive Metric Embeddings and Kernel Functions." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/204594.

Full text
Abstract:
Kyoto University (京都大学)<br>0048<br>新制・課程博士<br>博士(情報学)<br>甲第19417号<br>情博第596号<br>新制||情||104(附属図書館)<br>32442<br>京都大学大学院情報学研究科知能情報学専攻<br>(主査)教授 山本 章博, 教授 黒橋 禎夫, 教授 鹿島 久嗣, 准教授 Cuturi Marco<br>学位規則第4条第1項該当
APA, Harvard, Vancouver, ISO, and other styles
38

Talačkaitė, Simona. "Daugiamačių simpleksinių Lipšico algoritmų su nežinoma Lipšico konstanta ir įvairiais simplekso centrais kūrimas ir eksperimentinis tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140724_105124-61903.

Full text
Abstract:
Global optimization methods based on computing Lipschitz bounds are widely used to solve a variety of optimization problems. However, Lipschitz methods usually rely on the assumption that the Lipschitz constant is known in advance, which is rarely the case in practical applications. This master's thesis therefore investigates the relevant and important issues that arise when implementing Lipschitz methods that make no prior assumptions about the Lipschitz constant. The main goal of this practically oriented thesis is to investigate how the efficiency of multidimensional simplicial global optimization algorithms with an unknown Lipschitz constant depends on the simplex center used. To reach this goal the following tasks were set: to review the recent literature on Lipschitz methods with an unknown Lipschitz constant; to derive, for the general case, the computation of various multidimensional simplex centers and implement them in the Matlab environment; to extend the simplicial global optimization algorithm DISIMPL with subroutines computing these simplex centers; and to evaluate experimentally the practicality of the proposed approach on test optimization problems. The experimental study also covers the multidimensional DIRECT algorithm and its subdivision rules, comparing the simplex centers on eight test functions while varying the number of iterations and the permitted error.
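As an illustration of the kind of computation the thesis compares, here is a small sketch (my own, in NumPy rather than the thesis's Matlab) of two common simplex centers in general dimension, the centroid and the circumcenter:

```python
import numpy as np

def simplex_centers(V):
    """Centroid and circumcenter of a d-simplex.

    V is a (d+1, d) array whose rows are the vertices. The circumcenter c
    is equidistant from all vertices, which yields the d linear equations
    2 (v_i - v_0) . c = |v_i|^2 - |v_0|^2  for i = 1..d.
    """
    centroid = V.mean(axis=0)
    A = 2.0 * (V[1:] - V[0])
    b = (V[1:] ** 2).sum(axis=1) - (V[0] ** 2).sum()
    circumcenter = np.linalg.solve(A, b)
    return centroid, circumcenter
```

For the right triangle with vertices (0,0), (1,0), (0,1) this gives centroid (1/3, 1/3) and circumcenter (1/2, 1/2), the latter equidistant from all three vertices.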
APA, Harvard, Vancouver, ISO, and other styles
39

Coutinho, Demetrios Araújo Magalhães. "Implementação paralela escalável e eficiente do algoritmo simplex padrão em arquitetura multicore." Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15502.

Full text
Abstract:
Made available in DSpace on 2014-12-17T14:56:18Z (GMT). No. of bitstreams: 1 DemetriusAMC_DISSERT.pdf: 2429364 bytes, checksum: 57aaf24560c189720b218dbca0ef1a56 (MD5). Previous issue date: 2014-01-24. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. This work presents a scalable and efficient parallel implementation of the standard Simplex algorithm on a multicore architecture to solve large-scale linear programming problems. We present a general scheme explaining how each step of the standard Simplex algorithm was parallelized, indicating some important points of the parallel implementation. Performance was analyzed by comparing against the sequential times of the Simplex tableau and of IBM's CPLEX® Simplex. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX®, the proposed parallel algorithm achieved an efficiency up to 16 times higher.
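For readers unfamiliar with the tableau method being parallelized, a minimal serial sketch (my own illustrative code, not the dissertation's implementation) of the standard Simplex with slack variables:

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for: max c @ x  s.t.  A @ x <= b, x >= 0, b >= 0.

    Assumes the problem is feasible and bounded; no anti-cycling rule.
    """
    m, n = A.shape
    # Build the tableau with slack variables; the last row is the objective.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= -1e-9:            # no negative reduced cost: optimal
            break
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios))       # minimum-ratio test picks the pivot row
        T[row] /= T[row, col]
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    for i, j in enumerate(basis):
        x[j] = T[i, -1]
    return x[:n], T[-1, -1]
```

The row-update loop of each pivot is the natural place to split work across cores, since every row is updated independently of the others; this is the kind of step the dissertation parallelizes.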
APA, Harvard, Vancouver, ISO, and other styles
40

Gum, Teren. "A simple linear algorithm for computing edge-to-edge visibility in a polygon /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Abbott, Walter D. "A simple, low overhead data compression algorithm for converting lossy processes to lossless." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277905.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering), Naval Postgraduate School, December 1993. Thesis advisor(s): Ron J. Pieper. "December 1993." Cover title: A simple, ... lossy compression processes ... Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
42

Gusmão, Andrezza Almeida. "Metodo robusto e simples de compressão de imagens baseado no algoritmo EZW." [s.n.], 2002. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259617.

Full text
Abstract:
Advisor: Max Henrique Machado Costa. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Made available in DSpace on 2018-08-02T08:36:15Z (GMT). No. of bitstreams: 1 Gusmao_AndrezzaAlmeida_M.pdf: 1547214 bytes, checksum: cc9e0a40cf5f63688a8dc84924409c11 (MD5). Previous issue date: 2002. Master's degree.
APA, Harvard, Vancouver, ISO, and other styles
43

Židek, Stanislav. "Implementace algoritmů Teorie her." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236778.

Full text
Abstract:
Game theory has become a very powerful tool for modelling the decision-making of rational players. However, practical applications are strongly limited by the size of the particular game relative to the computational power of today's computers. The aim of this master's thesis is to design and implement a library able to find correlated equilibria in as complex non-cooperative games as possible.
APA, Harvard, Vancouver, ISO, and other styles
44

Cervelin, Bruno Henrique 1988. "Sobre um método de minimização irrestrita baseado em derivadas simplex." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306038.

Full text
Abstract:
Advisor: Maria Aparecida Diniz Ehrhardt. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Made available in DSpace on 2018-08-22T15:48:00Z (GMT). No. of bitstreams: 1 Cervelin_BrunoHenrique_M.pdf: 1935510 bytes, checksum: 91d17dd60bdd280c9eddd301cb3d2c24 (MD5). Previous issue date: 2013. The aim of this work is to present and compare some derivative-free methods for unconstrained minimization, such as Nelder-Mead, pattern search and SID-PSM. We also consider the problem of optimizing algorithmic parameters, and apply SID-PSM to find optimal parameters for SID-PSM itself with respect to the number of function evaluations the method performs. The numerical experiments performed show that SID-PSM is more robust and more efficient than the classical derivative-free methods (pattern search and Nelder-Mead). Further experiments show the potential of optimizing algorithmic parameters to improve both the efficiency and the robustness of the methods. Master's degree in Applied Mathematics.
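The pattern search idea compared in the dissertation can be sketched very compactly (an illustrative version of my own, not the SID-PSM code): poll the objective along each coordinate direction and shrink the step when no poll point improves.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, shrink=0.5, max_iter=1000):
    """Basic coordinate-direction pattern search (derivative-free).

    Polls f along +/- each coordinate direction; when no poll point
    improves, the step (mesh size) is shrunk until it falls below tol.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break                  # accept and move to the next axis
        if not improved:
            step *= shrink                 # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx
```

SID-PSM extends this basic poll step with simplex-derivative information to order and prune the poll directions, which is where its efficiency gains come from.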
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Wei. "Study of Central Configurations and Relative Equilibria in the Problem of Four Bodies." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin974472331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Schmid, Andreas [Verfasser], and Kurt [Akademischer Betreuer] Mehlhorn. "Plane and simple : using planar subgraphs for efficient algorithms / Andreas Schmid ; Betreuer: Kurt Mehlhorn." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/1202039928/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Guenounou, Ouahib. "Méthodologie de conception de contrôleurs intelligents par l'approche génétique : application à un bioprocédé." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/546/.

Full text
Abstract:
In this work, the problem of designing fuzzy controllers is studied. In a first part, we present a state of the art of the techniques used, namely genetic algorithms and their variants, neural networks, fuzzy logic, and their hybridizations. Building on this state of the art, we propose a first method for designing Mamdani fuzzy controllers with simple genetic algorithms. This method is then improved by the use of hierarchical genetic algorithms. Through the structure of their chromosomes, these algorithms allow better optimization of the controller parameters while eliminating the incoherent rules which, as with the first method, can remain at the end of the optimization process. The last method proposed concerns the synthesis of Sugeno fuzzy controllers. It is based on a hybrid learning procedure that proceeds in two stages. In the first stage, the fuzzy controller is represented as a multi-layer neural network whose parameters are optimized by the backpropagation algorithm. In the second stage, the parameters obtained at the end of the first stage are extracted and optimized by NSGA-II using a hierarchical coding. All of these methods are applied to the control of an alcoholic fermentation process operating in continuous mode.
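The "simple genetic algorithm" building block used in the first design method can be sketched as follows (a generic real-coded GA of my own for illustration; the thesis's encoding of fuzzy rule bases and membership functions is more elaborate):

```python
import numpy as np

def simple_ga(f, bounds, pop_size=40, generations=60, mut_sigma=0.1, seed=0):
    """Minimal real-coded genetic algorithm minimizing f over a box.

    Uses binary tournament selection, arithmetic crossover, Gaussian
    mutation, and elitism. `bounds` is a (lower, upper) pair of arrays.
    """
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(generations):
        fit = np.array([f(x) for x in pop])
        new = [pop[np.argmin(fit)]]                  # elitism: carry the best over
        while len(new) < pop_size:
            i, j, k, l = rng.integers(pop_size, size=4)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament selection
            p2 = pop[k] if fit[k] < fit[l] else pop[l]
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2              # arithmetic crossover
            child = child + rng.normal(0.0, mut_sigma, lo.size)  # mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([f(x) for x in pop])
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])
```

Hierarchical GAs, as used in the improved method, replace the flat chromosome here with control genes that can switch rule genes on and off, which is how incoherent rules get eliminated.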
APA, Harvard, Vancouver, ISO, and other styles
48

Lorriaux, Etienne. "Etude de méthodes métaheuristiques appliquées à l'optimisation aérodynamique ferroviaire." Valenciennes, 2007. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/ec593272-c953-4cfa-8769-3eb3b82c3fa7.

Full text
Abstract:
Improving the quality of railway transport requires higher operational speeds without degrading security and comfort levels. Under these conditions aerodynamic effects play an important role and can imply conflicting design constraints. This work lays the basis of a global optimization method. It relies on numerical simulations of train aerodynamics, which demand substantial computing resources. The complexity of the search space to be explored imposes the use of a flexible and highly efficient optimization process. The study concerns metaheuristic methods, and particularly a genetic algorithm relying on a fully automatic process for the flow simulations. Hybrid methods, which couple a local search method with the general algorithm, are advantageous but difficult to set up. An original solution is proposed, consisting in incorporating the simplex method into the generation process of a genetic algorithm. This method, called Targeted Generation Simplex, combines the advantages of the genetic algorithm with the accuracy of the local search and does not need any transition between the two methods. The Targeted Generation Simplex was first validated on classical examples, then applied to two-dimensional profiles representative of railway shapes. Sensitivity with respect to the genetic algorithm characteristics and to the estimator was studied. Finally, the method was successfully applied to a three-dimensional single-objective problem to demonstrate its feasibility.
APA, Harvard, Vancouver, ISO, and other styles
49

Alves, Luís M. "Paralelização em mecânica dos fluidos computacional usando HPF." Master's thesis, Universidade do Porto, 2000. http://hdl.handle.net/10198/1703.

Full text
Abstract:
In this work, a Computational Fluid Dynamics program based on the SIMPLE algorithm was parallelized using the HPF (High Performance Fortran) language. HPF is an extension of Fortran 90 that uses the data-parallel programming paradigm and allows parallelization for SMP (Symmetric Multi-Processing) systems and SMP clusters. The simulations were performed on the Beowulf cluster at the Faculdade de Engenharia da Universidade do Porto. This cluster consists of 23 Intel PIII processors (the master running at 550 MHz and the remaining 22 at 450 MHz) connected by a dedicated Ethernet network; each processor has a 100 Mbit/s card. HPF proved to be an easy-to-use language, allowing sequential codes to be quickly adapted into parallel codes. Unlike other parallelization techniques, in HPF the communications required to execute a parallel code are implemented by the compiler, so the level of parallelization (the number of processors used) can be changed by simply modifying one parameter in the source code. HPF is thus a parallelization technique with low maintenance costs, making it very easy to adapt the code to new computational settings. However, HPF proved less efficient than, for example, domain decomposition using message passing, and is not the ideal tool when the main goal of parallelization is to maximize performance. In particular, in the simulation of the flow in a two-dimensional lid-driven cavity, speed-ups of 0.42 were obtained for runs with two processors and an 800 x 800 node mesh. The highest efficiency obtained in the present work was 67% for 2 processors.
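The performance figures quoted above follow the usual definitions of parallel speed-up and efficiency, sketched here for reference (illustrative helper functions, not code from the thesis):

```python
def speedup(t_serial, t_parallel):
    """Parallel speed-up S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E = S / p, the fraction of ideal scaling achieved."""
    return speedup(t_serial, t_parallel) / p
```

A speed-up below 1, as in the 0.42 case reported, means the two-processor run was actually slower than the serial one.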
APA, Harvard, Vancouver, ISO, and other styles
50

Soames, Kieron, and Jonas Lind. "Detecting Cycles in GraphQL Schemas." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156174.

Full text
Abstract:
GraphQL is a database handling API created by Facebook that provides an effective alternative to REST-style architectures. GraphQL provides the ability for a client to specify exactly what data it wishes to receive. A problem with GraphQL is that the freedom of creating customized requests allows data to be included several times in the response, growing the response's size exponentially. The thesis contributes to the field of GraphQL analysis by studying the prevalence of simple cycles in GraphQL schemas. We have implemented a locally-run tool and a webtool, using Tarjan's and Johnson's algorithms, that parses the schemas, creates a directed graph and enumerates all simple cycles in the graph. A collection of schemas was analysed with the tool to collect empirical data. It was found that 39.73% of the total 2094 schemas contained at least one simple cycle, with the average number of cycles per schema being 4. The runtime was found to be on average 11 milliseconds, most of which consisted of the time for parsing the schemas. It was found that 44 of the considered schemas could not be enumerated due to containing a staggering number of simple cycles. It can be concluded that it is possible to test schemas for cyclicity and enumerate all simple cycles in a given schema efficiently.
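The enumeration task the tool performs can be sketched for small graphs as follows (my own naive version; it uses the same root-ordering idea as Johnson's algorithm but omits the blocking machinery that makes Johnson's algorithm fast, which is what the thesis's tool relies on):

```python
def simple_cycles(adj):
    """Enumerate all simple (elementary) cycles of a small directed graph.

    adj maps each node with outgoing edges to its successors; nodes must
    be mutually comparable (e.g. all ints). Each cycle is reported once,
    rooted at its smallest node.
    """
    cycles = []

    def dfs(start, v, path, on_path):
        for w in adj.get(v, ()):
            if w == start:
                cycles.append(path[:])            # closed a simple cycle
            elif w > start and w not in on_path:  # only visit nodes > root
                on_path.add(w)
                path.append(w)
                dfs(start, w, path, on_path)
                path.pop()
                on_path.remove(w)

    for s in sorted(adj):
        dfs(s, s, [s], {s})
    return cycles
```

For the graph 0 -> 1 -> 2 -> 0 with an extra edge 2 -> 1, this reports the two elementary cycles [0, 1, 2] and [1, 2]; the count can grow exponentially with graph size, which is why some schemas resisted full enumeration.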
APA, Harvard, Vancouver, ISO, and other styles
