Dissertations / Theses on the topic 'Deterministic and nondeterministic algorithm'


Consult the top 40 dissertations / theses for your research on the topic 'Deterministic and nondeterministic algorithm.'


1

Merryman, William Patrick. "Animating the conversion of nondeterministic finite state automata to deterministic finite state automata." Thesis, Montana State University, 2007. http://etd.lib.montana.edu/etd/2007/merryman/MerrymanW0507.pdf.

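The conversion animated in this thesis is the classical subset construction, in which each DFA state is a set of NFA states. A minimal sketch in Python (the example automaton is illustrative and not taken from the thesis; ε-transitions are omitted for brevity):

```python
from collections import deque

def nfa_to_dfa(states, alphabet, delta, start, accepting):
    """Subset construction: each DFA state is a frozenset of NFA states.
    delta maps (state, symbol) -> set of successor NFA states."""
    start_set = frozenset([start])
    dfa_delta, worklist, seen = {}, deque([start_set]), {start_set}
    while worklist:
        current = worklist.popleft()
        for sym in alphabet:
            # union of successors of every NFA state in the current subset
            nxt = frozenset(s for q in current for s in delta.get((q, sym), ()))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    # a subset is accepting if it contains any accepting NFA state
    dfa_accepting = {S for S in seen if S & accepting}
    return seen, dfa_delta, start_set, dfa_accepting

# Example NFA over {0,1}: accepts strings whose second-to-last symbol is 1
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"},
         ("b", "0"): {"c"}, ("b", "1"): {"c"}}
states, d, s0, acc = nfa_to_dfa({"a", "b", "c"}, "01", delta, "a", {"c"})
```

Running it on the three-state example NFA above yields a four-state DFA.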
2

D'Souza, Sammy Raymond. "Parallelizing a nondeterministic optimization algorithm." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3084.

Abstract:
This research explores the idea that for certain optimization problems there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to a metaheuristic, Ant Colony Optimization (ACO), to solve the well-known Traveling Salesman Problem (TSP) on a cluster running Message Passing Interface (MPI). The results of both serial and parallel execution are compared using test datasets from TSPLIB.
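The metaheuristic that this work parallelizes can be sketched serially. Below is a toy serial Ant Colony Optimization for the TSP, with generic parameter names (alpha, beta, rho) rather than the thesis's actual configuration; the parallel shortcutting machinery is omitted:

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Toy serial Ant Colony Optimization for the TSP.
    dist is a symmetric matrix of positive inter-city distances."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = sorted(unvisited)
                # attractiveness = pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit pheromone proportional to tour quality
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

On a four-city square instance this reliably recovers the perimeter tour of length 4.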
3

Kopřiva, Jan. "Srovnání algoritmů při řešení problému obchodního cestujícího." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-222126.

Abstract:
The Master's thesis deals with innovation of the logistics module of an ERP information system. The innovation is based on the implementation of heuristic algorithms that solve the Traveling Salesman Problem (TSP). The software MATLAB is used for analysis and testing of these algorithms. The goal of the thesis is the comparison of selected algorithms suitable for economic purposes (accuracy of solution, speed of calculation, and memory demands).
4

Schardl, Tao Benjamin. "Design and analysis of a nondeterministic parallel breadth-first search algorithm." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61575.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 75-77).
I have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. My PBFS program on a single processor runs as quickly as a standard C++ breadth-first search implementation. PBFS achieves high work efficiency by using a novel implementation of a multiset data structure, called a "bag," in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices (a condition met by many real-world graphs), PBFS demonstrates good speedup with the number of processing cores. Since PBFS employs a nonconstant-time "reducer" (a "hyperobject" feature of Cilk++), the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. I provide a general method for analyzing nondeterministic programs that use reducers. PBFS is also nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, I show that for a graph G = (V, E) with diameter D and bounded out-degree, this data-race-free version of PBFS runs in time O((V + E)/P + D lg³(V/D)) on P processors, which means that it attains near-perfect linear speedup if P < (V + E)/(D lg³(V/D)).
by Tao Benjamin Schardl.
M.Eng.
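The serial skeleton that PBFS parallelizes is a layer-synchronous BFS: vertices within one frontier are mutually independent, so each layer's loop can be distributed across cores. A sketch in Python, with a plain set standing in for the thesis's pennant-based bag:

```python
def layered_bfs(adj, source):
    """Layer-synchronous BFS: computes distances via frontier expansion.
    Each frontier (a 'bag') could be processed in parallel, since the
    vertices within a layer are independent of one another."""
    dist = {source: 0}
    frontier = {source}
    d = 0
    while frontier:
        d += 1
        next_frontier = set()
        for u in frontier:            # this loop is the parallelizable part
            for v in adj.get(u, ()):
                if v not in dist:     # a benign race in the parallel version
                    dist[v] = d
                    next_frontier.add(v)
        frontier = next_frontier
    return dist

# tiny illustrative DAG
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
```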
5

Xu, Didi. "Bandwidth extension algorithm for multiple deterministic systems /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MECH%202006%20XU.

6

Zhang, Hao. "Nondeterministic Linear Static Finite Element Analysis: An Interval Approach." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-08232005-020145/.

Abstract:
Thesis (Ph. D.)--Civil and Environmental Engineering, Georgia Institute of Technology, 2006.
White, Donald, Committee Member ; Will, Kenneth, Committee Member ; Zureick, Abdul Hamid, Committee Member ; Hodges, Dewey, Committee Member ; Muhanna, Rafi, Committee Chair ; Haj-Ali, Rami, Committee Member.
7

Domingues, Riaal. "A polynomial time algorithm for prime recognition." Diss., Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08212007-100529.

8

Shabana, H. M. D. "Synchronization of partial and non-deterministic automata: a sat-based approach : dissertation for the degree of candidate of physical and mathematical sciences : 05.13.17." Thesis, б. и, 2020. http://hdl.handle.net/10995/83662.

9

Wang, Bo Yu. "Deterministic annealing EM algorithm for robust learning of Gaussian mixture models." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493309.

10

Moon, Kyoung-Sook. "Adaptive Algorithms for Deterministic and Stochastic Differential Equations." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3586.

11

Panning, Thomas D. "Deterministic Parallel Global Parameter Estimation for a Model of the Budding Yeast Cell Cycle." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33360.

Abstract:
Two parallel deterministic direct search algorithms are combined to find improved parameters for a system of differential equations designed to simulate the cell cycle of budding yeast. Comparing the model simulation results to experimental data is difficult because most of the experimental data is qualitative rather than quantitative. An algorithm to convert simulation results to mutant phenotypes is presented. Vectors of the 143 parameters defining the differential equation model are rated by a discontinuous objective function. Parallel results on a 2200 processor supercomputer are presented for a global optimization algorithm, DIRECT, a local optimization algorithm, MADS, and a hybrid of the two. A second formulation is presented that uses a system of smooth inequalities to evaluate the phenotype of a mutant. Preliminary results of this formulation are given.
Master of Science
12

Ackland, Patrik. "Fast and Scalable Static Analysis using Deterministic Concurrency." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210927.

Abstract:
This thesis presents an algorithm for solving a subset of static analysis data flow problems known as Interprocedural Finite Distributive Subset (IFDS) problems. The algorithm, called IFDS-RA, is an implementation of the IFDS algorithm, which solves such problems, built on Reactive Async, a deterministic concurrent programming model. The scalability of IFDS-RA is compared to the state-of-the-art Heros implementation of the IFDS algorithm and evaluated using three different taint analyses on one to eight processing cores. The results show that IFDS-RA performs better than Heros when using multiple cores. Additionally, the results show that Heros does not take advantage of multiple cores even when they are available on the system.
13

Fakeih, Adnan M. "A deterministic approach for identifying the underlying states of multi-stationary systems using neural networks." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287251.

14

Anderson, Robert Lawrence. "An Exposition of the Deterministic Polynomial-Time Primality Testing Algorithm of Agrawal-Kayal-Saxena." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd869.pdf.

15

Pajak, Dominik. "Algorithms for Deterministic Parallel Graph Exploration." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2014. http://tel.archives-ouvertes.fr/tel-01064992.

Abstract:
In this thesis we study the problem of parallel graph exploration by multiple synchronized mobile agents. Each agent is an individual entity that can, independently of the other agents, visit the vertices of the graph or traverse its edges. The goal of the collection of agents is to visit all vertices of the graph. We first study graph exploration in a model where each agent is equipped with internal memory, but the nodes have no memory. In this model the agents are allowed to communicate with each other by exchanging messages. We present algorithms that run in the minimum possible time for a polynomial number of agents (polynomial in the number of vertices of the graph). We also study the impact of different communication methods: algorithms where the agents can communicate at arbitrary distance, and algorithms where communication is possible only between agents located at the same vertex. In both cases we present efficient algorithms, and we obtain lower bounds that closely match their performance. We also consider graph exploration in which the agents' moves are determined by the so-called rotor-router mechanism. From the point of view of a fixed vertex, the rotor-router sends out agents to visit the neighbouring vertices in round-robin fashion. We study the speedup, defined as the ratio between the worst-case exploration time of a single agent and that of multiple agents. For general graphs, we show that the speedup of the multi-agent rotor-router is always between logarithmic and linear in the number of agents. We also present tight results on the speedup of the multi-agent rotor-router for cycles, expanders, random graphs, cliques and tori of fixed dimension, and an almost tight analysis for hypercubes.
Finally, we consider collision-free exploration, where each agent must explore the graph independently, with the additional constraint that two agents may never occupy the same vertex. In the case where the agents are given a map of the graph, we present an optimal algorithm for trees and an asymptotically optimal algorithm for general graphs. We also present algorithms for collision-free exploration of trees and of general graphs in the situation where the agents do not know the graph. We close the thesis with final remarks and a discussion of related open problems in the field of graph exploration.
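The rotor-router mechanism described above is simple to simulate: each vertex keeps a pointer that cycles through its neighbours in round-robin order. A single-agent sketch in Python (illustrative; the thesis's speedup results concern the multi-agent variant):

```python
def rotor_router_cover_time(adj, start):
    """Single-agent rotor-router walk: each vertex serves its outgoing
    edges in round-robin order. Returns the number of steps until all
    vertices have been visited. Assumes the graph is connected; adj maps
    each vertex to a list of its neighbours."""
    pointer = {v: 0 for v in adj}        # next neighbour index per vertex
    visited = {start}
    current, steps = start, 0
    while len(visited) < len(adj):
        nbrs = adj[current]
        nxt = nbrs[pointer[current]]     # follow the rotor...
        pointer[current] = (pointer[current] + 1) % len(nbrs)  # ...advance it
        current = nxt
        visited.add(current)
        steps += 1
    return steps
```

On a 5-cycle started at vertex 0, the walk covers all vertices in 4 steps.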
16

Swaminathan, Anand. "An Algorithm for Influence Maximization and Target Set Selection for the Deterministic Linear Threshold Model." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49381.

Abstract:
The problem of influence maximization has been studied extensively with applications that include viral marketing, recommendations, and feed ranking. The optimization problem, first formulated by Kempe, Kleinberg and Tardos, is known to be NP-hard. Thus, several heuristics have been proposed to solve this problem. This thesis studies the problem of influence maximization under the deterministic linear threshold model and presents a novel heuristic for finding influential nodes in a graph with the goal of maximizing contagion spread that emanates from these influential nodes. Inputs to our algorithm include edge weights and vertex thresholds. The threshold difference greedy algorithm presented in this thesis takes into account both the edge weights as well as vertex thresholds in computing influence of a node. The threshold difference greedy algorithm is evaluated on 14 real-world networks. Results demonstrate that the new algorithm performs consistently better than the seven other heuristics that we evaluated in terms of final spread size. The threshold difference greedy algorithm has tuneable parameters which can make the algorithm run faster. As a part of the approach, the algorithm also computes the infected nodes in the graph. This eliminates the need for running simulations to determine the spread size from the influential nodes. We also study the target set selection problem with our algorithm. In this problem, the final spread size is specified and a seed (or influential) set is computed that will generate the required spread size.
Master of Science
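Under the deterministic linear threshold model used above, the contagion spread from a seed set can be computed exactly by iterating to a fixpoint, which is why no simulations are needed. A minimal sketch in Python (illustrative; this is the spread computation, not the thesis's threshold difference greedy algorithm):

```python
def linear_threshold_spread(in_edges, weight, threshold, seeds):
    """Deterministic linear threshold model: a vertex activates once the
    total weight of its active in-neighbours reaches its threshold.
    Iterates to a fixpoint and returns the set of activated vertices.
    in_edges maps v -> list of in-neighbours; weight maps (u, v) -> w."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in in_edges.items():
            if v in active:
                continue
            influence = sum(weight[(u, v)] for u in nbrs if u in active)
            if influence >= threshold[v]:
                active.add(v)
                changed = True
    return active
```

For example, on the path a → b → c with unit weights and unit thresholds, seeding {a} activates all three vertices; raising c's threshold to 2 blocks the spread at b.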
17

Mudgal, Apurva. "Worst-case robot navigation in deterministic environments." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33924.

Abstract:
We design and analyze algorithms for the following two robot navigation problems: 1. TARGET SEARCH. Given a robot located at a point s in the plane, how will it navigate to a goal t in the presence of unknown obstacles? 2. LOCALIZATION. A robot is "lost" in an environment with a map of its surroundings. How will it find its true location by traveling the minimum distance? Since efficient algorithms for these two problems would make a robot completely autonomous, they have held the interest of both the robotics and computer science communities. Previous work has focused mainly on designing competitive algorithms, where the robot's performance is compared to that of an omniscient adversary. For example, a competitive algorithm for target search will compare the distance traveled by the robot with the shortest path from s to t. We analyze these problems from the worst-case perspective, which, in our view, is a more appropriate measure. Our results are: 1. For target search, we analyze an algorithm called Dynamic A* (D*). The robot continuously moves to the goal on the shortest path, which it recomputes on the discovery of obstacles. A variant of this algorithm has been employed in Mars Rover prototypes. We show that D* takes O(n log n) time on planar graphs and also show a comparable bound on arbitrary graphs. Thus, our results show that D* combines the optimistic possibility of reaching the goal very soon while competing with depth-first search within a logarithmic factor. 2. For the localization problem, worst-case analysis compares the performance of the robot with the optimal decision tree over the set of possible locations. No approximation algorithm had been known. We give a polylogarithmic approximation algorithm and also show a near-tight lower bound for the grid graphs commonly used in practice. The key idea is to plan travel on a "majority-rule map," which eliminates uncertainty and permits a link to the half-Group Steiner problem. We also extend the problem to polygonal maps by discretizing the domain using novel geometric techniques.
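The replan-on-discovery behaviour analysed above can be illustrated with a naive grid version, in which BFS stands in for A* (a sketch of the general idea, not the thesis's D* algorithm):

```python
from collections import deque

def shortest_path(free, start, goal):
    """BFS shortest path over the cells currently believed free
    on a 4-connected grid; returns the path or None."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in prev:
                prev[nxt] = cur
                frontier.append(nxt)
    return None

def navigate(width, height, obstacles, start, goal):
    """Replan-on-discovery navigation: plan optimistically on the
    freespace assumption, move one step, and replan whenever the next
    step turns out to be blocked. Returns the travelled route."""
    assumed_free = {(x, y) for x in range(width) for y in range(height)}
    route, pos = [start], start
    while pos != goal:
        path = shortest_path(assumed_free, pos, goal)
        if path is None:
            return None                   # goal unreachable
        step = path[1]
        if step in obstacles:             # obstacle discovered by sensing
            assumed_free.discard(step)    # update the map and replan
        else:
            pos = step
            route.append(pos)
    return route
```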
18

Shwayder, Kobey. "The best binary split algorithm a deterministic method for dividing vowel inventories into contrastive distinctive features /." Waltham, Mass. : Brandeis University, 2009. http://dcoll.brandeis.edu/handle/10192/23254.

19

Moon, Kyoung-Sook. "Convergence rates of adaptive algorithms for deterministic and stochastic differential equations." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1382.

20

Fazeli, Amir. "The application of a novel deterministic algorithm for controlling the power flow through real time dispatch of the available distributed energy resources within a microgrid." Thesis, University of Nottingham, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646027.

21

KRISHNAN, RAJESH. "DEVELOPMENT OF A MODULAR SOFTWARE SYSTEM FOR MODELING AND ANALYZING BIOLOGICAL PATHWAYS." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1181709492.

22

Joseph, Binoy. "Clustering For Designing Error Correcting Codes." Thesis, Indian Institute of Science, 1994. http://hdl.handle.net/2005/66.

Abstract:
In this thesis we address the problem of designing codes for specific applications. To do so we make use of the relationship between clusters and codes. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over the particular dimensional space. In the literature a number of algorithms are available for clustering. We have examined the performance of a number of such algorithms, such as Linde-Buzo-Gray, Simulated Annealing, Simulated Annealing with Linde-Buzo-Gray, Deterministic Annealing, etc., for the design of codes. But all these algorithms make use of the Euclidean squared error distance measure for clustering. This distance measure does not match the distance measure of interest in the error correcting scenario, namely, Hamming distance. Consequently we have developed an algorithm that can be used for clustering with Hamming distance as the distance measure. Also, it has been observed that stochastic algorithms such as Simulated Annealing fail to produce optimum codes due to very slow convergence near the end. As a remedy, we have proposed a modification based on the code structure for such algorithms for code design, which makes it possible to converge to the optimum codes.
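The Hamming-distance clustering idea can be illustrated with a Lloyd-style iteration in which the Euclidean centroid is replaced by the bitwise majority word, the natural "center" under Hamming distance. A sketch in Python (illustrative; not the thesis's algorithm):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def assign_clusters(words, codebook):
    """Assign each word to its nearest codeword under Hamming distance."""
    return [min(range(len(codebook)),
                key=lambda i: hamming(w, codebook[i])) for w in words]

def update_codebook(words, labels, k, n):
    """Recompute each codeword as the bitwise majority of its cluster --
    the Hamming analogue of the Euclidean centroid (ties go to 1)."""
    codebook = []
    for i in range(k):
        members = [w for w, l in zip(words, labels) if l == i]
        if not members:
            codebook.append(tuple([0] * n))   # empty cluster: keep a dummy
            continue
        codebook.append(tuple(int(sum(col) * 2 >= len(members))
                              for col in zip(*members)))
    return codebook
```

Alternating the two steps until the labels stop changing gives a k-means-like clustering that respects Hamming distance.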
23

Basupi, Innocent. "Adaptive water distribution system design under future uncertainty." Thesis, University of Exeter, 2013. http://hdl.handle.net/10871/14722.

Abstract:
A water distribution system (WDS) design deals with achieving the desired network performance. WDS design can involve new and / or existing network redesigns in order to keep up with the required service performance. Very often, WDS design is expensive, which encourages cost effectiveness in the required investments. Moreover, WDS design is associated with adverse environmental implications such as greenhouse gas (GHG) emissions due to energy consumption. GHGs are associated with global warming and climate change. Climate change is generally understood to reduce the water available at the sources and to increase water demand. Urbanization, which takes into account factors such as demographics (population ageing, household occupancy rates, etc.) and other activities, is associated with water demand changes. In addition to the aforementioned issues, the challenge of meeting the required hydraulic performance of WDSs is worsened by the uncertainties that are associated with WDS parameters (e.g., future water demand). With all the factors mentioned here, mitigation and adaptive measures are considered essential to improve WDS performance over the long-term planning horizon. In this thesis, different formulations of WDS design methodologies aimed at mitigating or adapting the systems to the effects of future changes, such as those of climate change and urbanization, are explored. Cost-effective WDS designs that mitigate climate change by reducing GHG emissions have been investigated. Also, water demand management (DM) intervention measures, i.e., domestic rainwater harvesting (RWH) systems and water saving appliance schemes (WSASs), have been incorporated into the design of WDSs in an attempt to mitigate, adapt to or counteract the likely effects of future climate change and urbanization. Furthermore, flexibility has been introduced into long-term WDS design under future uncertainty.
The flexible methodology is adaptable to uncertain WDS parameters (i.e., future water demand in this thesis), thereby improving the WDS economic cost and hydraulic performance (resilience). The methodology is also complemented by strategically incorporating DM measures to further enhance WDS performance under water demand uncertainty. The new methodologies presented in this thesis were successfully tested on case studies. Finally, conclusions and recommendations for possible further research work are made. There are potential benefits (e.g., cost savings, additional resilience, and lower GHG emissions) of incorporating an environmental objective and DM interventions in WDS design. Flexibility and DM interventions add value in the design of WDSs under uncertainty.
24

Tan, Jun Liang. "Development of a pitch based wake optimisation control strategy to improve total farm power production." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-304705.

Abstract:
In this thesis, the effect of pitch-based optimisation was explored for an 80-turbine wind farm. Using a modified Jensen wake model and the Particle Swarm Optimisation (PSO) model, a pitch optimisation strategy was created for the dominant turbulence and atmospheric condition of the wind farm. As the wake model was based on the FLORIS model developed by P. M. O. Gebraad et al., the wake and power model was compared with the FLORIS model and a -0.090% difference was found. To determine the dynamic predictive capability of the wake model, measurement values across a 10-minute period for a 19-turbine array were used, and the wake model under-predicted the power production by 17.55%. Despite its poor dynamic predictive capability, the wake model was shown to accurately match the AEP production of the wind farm when compared to a CFD simulation done in FarmFlow, giving only a 3.10% over-prediction. When the optimisation model was applied with 150 iterations and particles, the AEP production of the wind farm increased by 0.1052%, showing that the pitch optimisation method works for the examined wind farm. When the iterations and particles used for the optimisation were increased to 250, the improvement rose to 0.1144% at a 222.5% increase in computational time, suggesting that the solution had yet to fully converge. While the solutions did not fully converge, they converged sufficiently, and further increases in iterations gave diminishing returns. From the results, the pitch optimisation model was found to give a significant increase in power production, especially for wake-intensive wind directions. However, the dynamic predictive capabilities will have to be improved before the control strategy can be applied to an operational wind farm.
25

Hůlka, Tomáš. "Stabilizace chaosu: metody a aplikace." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-320103.

Abstract:
This thesis focuses on deterministic chaos and selected methods of chaos control. It briefly describes deterministic chaos and presents commonly used tools for the analysis of dynamical systems exhibiting chaotic behavior. A list of frequently studied chaotic systems is presented, followed by a description of methods of chaos control and the optimization of these methods. The practical part is dedicated to the stabilization of two model systems and one real system using the described methods.
26

Jenča, Pavol. "Identifikace parametrů elektrických motorů metodou podprostorů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219678.

Abstract:
The identification of electrical motor parameters is addressed in this master's thesis using subspace-based methods. The electrical motors, specifically a permanent magnet DC motor and a permanent magnet synchronous motor, are simulated in the Matlab/Simulink interactive environment. The identification is developed in the Matlab environment. Different types of subspace algorithms are used for the estimation of parameters, and the results of subspace parameter estimation are compared with least-squares parameter estimation. The thesis describes the subspace method, the types of subspace algorithms, the electrical motors used, a nonlinear identification approach, and a comparison of the parameter identification methods.
27

Stanek, Timotej. "Automatické shlukování regulárních výrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-235531.

Abstract:
This project concerns the security of computer networks using Intrusion Detection Systems (IDS). IDS contain detection rules expressed as regular expressions, which are represented for matching by finite-state automata. The complexity of this detection with non-deterministic and deterministic finite-state automata is explained. This complexity can be reduced with the help of regular-expression grouping. A grouping algorithm and approaches for its speedup and improvement are introduced. One of the approaches is a genetic algorithm, which can work in real time. Finally, a random-search algorithm for grouping regular expressions is presented. Experimental results with these approaches are shown and compared with one another.
28

Grymel, Martin-Thomas. "Error control with binary cyclic codes." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/error-control-with-binary-cyclic-codes(a5750b4a-e4d6-49a8-915b-3e015387ad36).html.

Abstract:
Error-control codes provide a mechanism to increase the reliability of digital data being processed, transmitted, or stored under noisy conditions. Cyclic codes constitute an important class of error-control code, offering powerful error detection and correction capabilities. They can easily be generated and verified in hardware, which makes them particularly well suited to the practical use as error detecting codes. A cyclic code is based on a generator polynomial which determines its properties including the specific error detection strength. The optimal choice of polynomial depends on many factors that may be influenced by the underlying application. It is therefore advantageous to employ programmable cyclic code hardware that allows a flexible choice of polynomial to be applied to different requirements. A novel method is presented in this thesis to realise programmable cyclic code circuits that are fast, energy-efficient and minimise implementation resources. It can be shown that the correction of a single-bit error on the basis of a cyclic code is equivalent to the solution of an instance of the discrete logarithm problem. A new approach is proposed for computing discrete logarithms; this leads to a generic deterministic algorithm for analysed group orders that equal Mersenne numbers with an exponent of a power of two. The algorithm exhibits a worst-case runtime in the order of the square root of the group order and constant space requirements. This thesis establishes new relationships for finite fields that are represented as the polynomial ring over the binary field modulo a primitive polynomial. With a subset of these properties, a novel approach is developed for the solution of the discrete logarithm in the multiplicative groups of these fields.
This leads to a deterministic algorithm for small group orders that has linear space and linearithmic time requirements in the degree of the defining polynomial, enabling an efficient correction of single-bit errors based on the corresponding cyclic codes.
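For contrast with the constant-space, square-root-time algorithm described above, the classical square-root-time approach to the discrete logarithm is baby-step giant-step, shown here for the multiplicative group modulo a prime (illustrative; requires Python 3.8+ for the modular inverse via pow):

```python
from math import isqrt

def bsgs(g, h, p):
    """Baby-step giant-step discrete log: find x with g**x % p == h,
    in O(sqrt(p)) time but also O(sqrt(p)) space -- unlike the
    constant-space approach for Mersenne group orders discussed above."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps: h * g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None                                  # no logarithm exists
```

For example, with the generator 2 of the multiplicative group modulo 101, bsgs recovers the exponent of any power of 2.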
29

Liu, Zhixin. "Capacity allocation and rescheduling in supply chains." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1187883767.

30

Caron, Jérôme. "Etude et validation clinique d'un modèle aux moments entropique pour le transport de particules énergétiques : application aux faisceaux d'électrons pour la radiothérapie externe." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0452/document.

Abstract:
In radiotherapy field, dose deposition simulations in patients are performed on Treatment Planning Systems (TPS) equipped with specific algorithms that differ in the way they model the physical interaction processes of electrons and photons. Although those clinical TPS are fast, they show significant discrepancies in the neighbooring of inhomogeneous tissues. My work consisted in validating for clinical electron beams an entropic moments based algorithm called M1. Develelopped in CELIA for warm and dense plasma simulations, M1 relies on the the resolution of the linearized Boltzmann kinetic equation for particles transport according to a moments decomposition. M1 equations system requires a closure based on H-Theorem (entropy maximisation). M1 dose deposition maps of 9 and 20 MeV electron beams simulations were compared to those extracted from reference codes simulations : clinical macro Monte-Carlo (eMC) and full Monte-carlo (GEANT4-MCNPX) codes and from experimental data as well. The different test cases consisted in homogeneous et complex inhomogeneous fantoms with bone and lung inserts. We found that M1 model provided a dose deposition accuracy better than some Pencil Beam Kernel algorithm and close of those furnished by clinical macro and academic full Monte-carlo codes, even in the worst inhomogeneous cases. Time calculation performances were also investigated and found better than the Monte-Carlo codes
APA, Harvard, Vancouver, ISO, and other styles
31

Joly, Jean-Luc. "Contributions à la génération aléatoire pour des classes d'automates finis." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2012/document.

Full text
Abstract:
The concept of automaton, central to language theory, is the natural and efficient tool for apprehending many practical problems. The intensive use of finite automata in algorithmic frameworks is illustrated by numerous research works. Correctness and performance evaluation are the two fundamental issues of algorithmics; a classic evaluation method relies on the controlled random generation of inputs. The work described in this thesis lies within this context, and more specifically within the field of uniform random generation of finite automata. The thesis first proposes the construction of a random generator of deterministic, real-time pushdown automata. The construction builds on the symbolic method; theoretical results and an experimental study are given. A random generator of nondeterministic automata then illustrates the flexibility of Markov Chain Monte Carlo (MCMC) methods, as well as an implementation of the Metropolis-Hastings algorithm for sampling up to isomorphism. A result on the mixing time is given in the general setting. MCMC sampling raises the problem of evaluating the mixing time of the chain; drawing on earlier work to build a random generator of partially ordered automata, the thesis shows how various statistical tools can be brought to bear on this problem.
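As a hedged illustration of the MCMC idea invoked in this abstract, the sketch below runs a plain Metropolis sampler whose target is the uniform distribution over a toy space of NFA transition tables. The move, the validity predicate and all names (`metropolis_uniform`, `propose_flip`) are illustrative inventions; the thesis's actual sampling up to isomorphism via Metropolis-Hastings is not reproduced here.

```python
import random

def metropolis_uniform(initial, propose, is_valid, steps, rng):
    """Plain Metropolis sampler targeting the uniform distribution over
    the set of states where is_valid(s) is True. Because the proposal is
    symmetric and the target is uniform, the acceptance probability is 1
    for valid candidates and 0 otherwise."""
    state = initial
    for _ in range(steps):
        candidate = propose(state, rng)
        if is_valid(candidate):
            state = candidate
    return state

# Toy state space: transition tables of an NFA with n states over a
# binary alphabet, encoded as sets of (source, letter, target) triples.
def propose_flip(edges, rng, n=4, alphabet=(0, 1)):
    """Symmetric move: toggle one uniformly chosen transition."""
    e = (rng.randrange(n), rng.choice(alphabet), rng.randrange(n))
    return set(edges) ^ {e}

def nonempty(edges):
    return len(edges) > 0    # stand-in validity predicate

rng = random.Random(7)
sample = metropolis_uniform({(0, 0, 1)}, propose_flip, nonempty, 10_000, rng)
print(len(sample))   # size of one sampled transition table
```

Because toggling an edge is its own inverse, the proposal is symmetric, which is what makes the uniform target this easy to accept against.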
APA, Harvard, Vancouver, ISO, and other styles
32

Caprile, Bruno, and Federico Girosi. "A Nondeterministic Minimization Algorithm." 1990. http://hdl.handle.net/1721.1/6560.

Full text
Abstract:
The problem of minimizing a multivariate function recurs in many disciplines, such as physics, mathematics, engineering and, of course, computer science. In this paper we describe a simple nondeterministic algorithm based on the idea of adaptive noise, which proved particularly effective in minimizing a class of multivariate, continuous-valued, smooth functions associated with a recent extension of regularization theory by Poggio and Girosi (1990). Results obtained with this method are also compared with those of a more traditional gradient descent technique.
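The adaptive-noise idea described in the abstract can be sketched roughly as follows; the acceptance rule and the noise schedule here are plausible assumptions, not the authors' actual algorithm.

```python
import random

def adaptive_noise_minimize(f, x0, steps=5000, sigma=1.0,
                            grow=1.5, shrink=0.9, seed=0):
    """Gradient-free minimization with adaptive noise: perturb the current
    point with Gaussian noise, keep improvements, and let the noise
    amplitude grow on success and shrink on failure. The schedule
    constants are illustrative guesses, not the paper's."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= grow      # success: explore more boldly
        else:
            sigma *= shrink    # failure: search more locally
    return x, fx

# Smooth multivariate test function: a quadratic bowl centred at (3, 3).
quad = lambda v: sum((vi - 3.0) ** 2 for vi in v)
xmin, fmin = adaptive_noise_minimize(quad, [0.0, 0.0])
print(fmin)   # far below the starting value quad([0, 0]) == 18.0
```

The noise amplitude self-tunes to the local scale of the landscape: near a minimum, repeated failures shrink it until small improving steps become likely again.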
APA, Harvard, Vancouver, ISO, and other styles
33

Prasetyo, Dimas, and Dimas Prasetyo. "Implementation on Deterministic and Metaheuristic Algorithm for Electricity Generation Problem." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/6s7zbh.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
Academic year 107
In electricity generation scheduling, a common objective for all power system operators is to ensure that sufficient generation is available for the hours and days ahead of the operation time. The on-off states of the generation units, the "commitment decision", provide the first step toward the optimal solution. The unit commitment decision indicates, for each point in time over the scheduling horizon, which generating units are to be used. The most economic dispatch, i.e. the distribution of load across generating units at each point in time, is then determined to meet system load and reserve requirements. Various approaches to the solution of the UC problem have been proposed, ranging from simple to sophisticated methods. The electricity generation problem belongs to the class of complex combinatorial optimization problems, and several mathematical programming techniques have been proposed to solve this time-dependent problem. Recent mathematical developments and advances in computing technology have made the problem readily solvable. The application of hybrid systems to power system problems has been advanced in recent literature, and it still represents a future trend in power systems research. This research combines a deterministic method with a metaheuristic to improve the computation involved in solving the electricity generation problem; the specific algorithms used are dynamic programming and particle swarm optimization.
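As a rough illustration of the metaheuristic half of this approach, here is a minimal particle swarm optimization sketch applied to a toy two-unit economic dispatch; the cost coefficients, the penalty weight and the PSO constants are all illustrative assumptions, not the thesis's formulation.

```python
import random

def pso_dispatch(cost, demand, n_units, swarm=30, iters=200, seed=1):
    """Minimal PSO for a toy economic dispatch: choose output levels for
    n_units generators so that total cost is minimal while total output
    meets demand (enforced here by a quadratic penalty)."""
    rng = random.Random(seed)
    lo, hi = 0.0, demand

    def fitness(x):
        return cost(x) + 1e3 * (sum(x) - demand) ** 2  # penalize imbalance

    X = [[rng.uniform(lo, hi) for _ in range(n_units)] for _ in range(swarm)]
    V = [[0.0] * n_units for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal bests
    pbest = [fitness(x) for x in X]
    g = min(range(swarm), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best

    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n_units):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fit = fitness(X[i])
            if fit < pbest[i]:
                P[i], pbest[i] = X[i][:], fit
                if fit < gbest:
                    G, gbest = X[i][:], fit
    return G, gbest

# Two units with quadratic fuel costs; demand of 100 MW.
cost = lambda x: 0.01 * x[0] ** 2 + 2 * x[0] + 0.02 * x[1] ** 2 + 1.5 * x[1]
out, best = pso_dispatch(cost, demand=100.0, n_units=2)
print(round(sum(out), 1))   # close to 100.0: demand is (nearly) met
```

A full hybrid in the spirit of the thesis would let dynamic programming fix the on-off commitment and use a swarm like this one only for the continuous dispatch.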
APA, Harvard, Vancouver, ISO, and other styles
34

Hsu, Jung-Hua, and 許榮華. "A Pattern Matching Algorithm Using Deterministic Finite Automata with Infixes Checking." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/42273068520390912376.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Institute of Computer Science
Academic year 93
This thesis presents two string matching algorithms. The first constructs a string matching finite automaton (called mc-DFA) for multiple classes of keystrings with infix checking, and then applies this mc-DFA to process a given text in a single pass, reporting the occurrences of the multiple keystrings in the text concurrently. An algorithm to minimize the mc-DFA is also given and studied. The second algorithm applies the failure function to detect the occurrence of a prefix square as a string is input online. Properties of both algorithms are studied to establish their time complexities.
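The mc-DFA construction itself is not given in the abstract; as a stand-in, the sketch below builds a classic Aho-Corasick-style matching automaton that likewise reports occurrences of multiple keystrings in a single pass over the text (the thesis's infix-checking and minimization steps are not reproduced).

```python
from collections import deque

def build_automaton(keystrings):
    """Aho-Corasick-style automaton: goto transitions, failure links,
    and output sets, built with a trie insertion plus a BFS pass."""
    goto, fail, out = [{}], [0], [set()]
    for k in keystrings:
        s = 0
        for ch in k:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(k)
    q = deque(goto[0].values())
    while q:                          # BFS to compute failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            cand = goto[f].get(ch, 0)
            fail[t] = cand if cand != t else 0
            out[t] |= out[fail[t]]    # inherit matches ending here
    return goto, fail, out

def match(text, keystrings):
    """Report (end_position, keystring) for every occurrence, one pass."""
    goto, fail, out = build_automaton(keystrings)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for k in out[s]:
            hits.append((i, k))
    return hits

print(sorted(match("ushers", ["he", "she", "his", "hers"])))
# → [(3, 'he'), (3, 'she'), (5, 'hers')]
```

Note how overlapping matches ("he" inside "she") are reported concurrently, which is the behaviour the abstract attributes to the mc-DFA.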
APA, Harvard, Vancouver, ISO, and other styles
35

Lin, Ching-Hung, and 林璟鴻. "A Heuristic Algorithm for Deterministic Test Pattern Based on Continuous Scan." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/27592773142439796416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lu, Songjian. "Randomized and Deterministic Parameterized Algorithms and Their Applications in Bioinformatics." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7222.

Full text
Abstract:
Parameterized NP-hard problems are NP-hard problems that are associated with special variables called parameters. One example is the problem of finding simple paths of length k in a graph, where the integer k is the parameter. We call this problem the p-path problem. The p-path problem is the parameterized version of the well-known NP-complete problem, the longest simple path problem. There are two main reasons why we study parameterized NP-hard problems. First, many application problems are naturally associated with certain parameters, hence we need to solve these parameterized NP-hard problems. Second, if the parameters take only small values, we can take advantage of them to design very effective algorithms. If a parameterized NP-hard problem can be solved by an algorithm with running time of the form f(k)·n^O(1), where k is the parameter, f(k) is independent of n, and n is the input size of the problem instance, we say that this parameterized NP-hard problem is fixed-parameter tractable (FPT). If a problem is FPT and the parameter takes only small values, the problem can be solved efficiently (almost in polynomial time). In this dissertation, we first introduce several techniques that can be used to design efficient algorithms for parameterized NP-hard problems. These techniques include branch and bound, divide and conquer, color coding and dynamic programming, iterative compression, iterative expansion and kernelization. Then we present our results on how to use these techniques to solve parameterized NP-hard problems, such as the p-path problem and the pd-feedback vertex set problem. In particular, we designed the first algorithm with running time of the form f(k)·n^O(1) for the pd-feedback vertex set problem, thus settling an outstanding open problem, namely whether the pd-feedback vertex set problem is FPT.
Finally, we will introduce how to use parameterized algorithm techniques to solve the signaling pathway problem and the motif finding problem from bioinformatics.
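The color-coding technique named above can be sketched for the p-path problem as follows; the graph encoding and the trial count are illustrative choices. Each trial colors vertices uniformly with k colors, and a "colorful" path (all colors distinct, hence all vertices distinct) is found by dynamic programming over color subsets; a fixed k-vertex path survives a trial with probability k!/k^k, so O(e^k) trials suffice for constant success probability.

```python
import random

def colorful_path(adj, k, trials=500, seed=3):
    """Randomized color-coding test for a simple path on k vertices.
    adj maps each vertex 0..n-1 to a list of neighbours."""
    rng = random.Random(seed)
    n = len(adj)
    for _ in range(trials):
        color = [rng.randrange(k) for _ in range(n)]
        # reach[v]: color-sets S such that some colorful path using
        # exactly the colors in S ends at v.
        reach = [{frozenset([color[v]])} for v in range(n)]
        for _size in range(1, k):
            new = [set() for _ in range(n)]
            for u in range(n):
                for S in reach[u]:
                    for v in adj[u]:
                        if color[v] not in S:   # extend without repeating a color
                            new[v].add(S | {color[v]})
            reach = new
            if not any(reach):
                break
        if any(len(S) == k for v in range(n) for S in reach[v]):
            return True
        # unlucky coloring this trial; re-randomize and retry
    return False

# Path graph 0-1-2-3-4 plus an isolated vertex: contains a 5-vertex path.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3], 5: []}
print(colorful_path(adj, k=5))   # True with overwhelming probability
```

The per-trial dynamic programming runs in O(2^k · |E|) time, which is the source of the f(k)·n^O(1) shape of the overall running time.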
APA, Harvard, Vancouver, ISO, and other styles
37

Joseph, Ajin George. "Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings." Thesis, 2017. http://etd.iisc.ernet.in/2005/3645.

Full text
Abstract:
Optimization is a very important field with diverse applications in physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is being deeply studied both theoretically and experimentally and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the "absolute optimum" or solutions close to it, which makes the task even more challenging. In this thesis, we propose an optimization algorithm which is "gradient-free", i.e., it does not employ any knowledge of the gradient or higher order derivatives of the objective function, but rather utilizes the objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well known cross entropy (CE) method. The CE method is a model based search method to solve continuous/discrete multi-extremal optimization problems, where the objective function has minimal structure. The proposed method searches the statistical manifold of the parameters which identify the probability distribution/model defined over the input space to find the degenerate distribution concentrated on the global optima (assumed to be finite in number).
In the early part of the thesis, we propose a novel stochastic approximation version of the CE method to the unconstrained optimization problem, where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which is probabilistically dependent on the past history, where we reuse all the previous samples obtained in the process till the current instant based on discounted averaging. This approach can save the overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that the proposed algorithm exhibits global optimum convergence for a particular class of objective functions. Further, we extend the algorithm to solve the simulation/stochastic optimization problem. In stochastic optimization, the objective function possesses a stochastic characteristic, where the underlying probability distribution in most cases is hard to comprehend and quantify. This begets a more challenging optimization problem, where the ostentatious nature is primarily due to the hardness in computing the objective function values for various input parameters with absolute certainty. In this case, one can only hope to obtain noise corrupted objective function values for various input parameters. Settings of this kind can be found in scenarios where the objective function is evaluated using a continuously evolving dynamical system or through a simulation. We propose a multi-timescale stochastic approximation algorithm, where we integrate an additional timescale to accommodate the noisy measurements and decimate the effects of the gratuitous noise asymptotically. 
We found that if the objective function and the noise involved in the measurements are well behaved and the timescales are compatible, then our algorithm can generate high quality solutions. In the later part of the thesis, we propose algorithms for reinforcement learning/Markov decision processes using the optimization techniques we developed in the early stage. MDP can be considered as a generalized framework for modelling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method in terms of computational and storage cost scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed in the earlier part of the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach. In the final part of the thesis, we consider a modified version of the control problem in a model free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. The contemporary methods also presume access to a generative model/simulator of the MDP with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. In this thesis, we consider a modified version, where the cost function to be optimized is a real-valued performance function (possibly non-convex) of the value function. 
Additionally, one has to seek the optimal policy without presuming access to the generative model. In this thesis, we propose a stochastic approximation algorithm for this peculiar control problem. The only information, we presuppose, available to the algorithm is the sample trajectory generated using a priori chosen behaviour policy. The algorithm is data (sample trajectory) efficient, stable, robust as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a high performing policy relative to the behaviour policy.
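For orientation, here is the plain cross-entropy method that the thesis builds on, sketched for continuous minimization; the Gaussian model, the population sizes and the variance floor are illustrative, and the thesis's stochastic-approximation and multi-timescale variants are not reproduced.

```python
import random
import statistics

def cross_entropy_minimize(f, dim, iters=60, pop=100, elite=10,
                           mu0=0.0, sigma0=5.0, seed=2):
    """Plain cross-entropy method: sample a population from a Gaussian
    model, keep the `elite` best points, and refit the model to them,
    so that the model gradually degenerates onto a (near-)optimal point."""
    rng = random.Random(seed)
    mu = [mu0] * dim
    sigma = [sigma0] * dim
    for _ in range(iters):
        samples = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                   for _ in range(pop)]
        samples.sort(key=f)
        elites = samples[:elite]
        for d in range(dim):                        # refit model to elites
            col = [e[d] for e in elites]
            mu[d] = statistics.fmean(col)
            sigma[d] = statistics.pstdev(col) + 1e-12   # keep sigma > 0
    return mu, f(mu)

sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
x, fx = cross_entropy_minimize(sphere, dim=3)
print(fx)   # the model has concentrated near the optimum at (1, 1, 1)
```

The thesis's variant replaces the full refit with an incremental, discounted-averaging update of the model parameters, which is what makes it a stochastic approximation scheme.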
APA, Harvard, Vancouver, ISO, and other styles
38

Gupta, Saurabh. "Development Of Deterministic And Stochastic Algorithms For Inverse Problems Of Optical Tomography." Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2608.

Full text
Abstract:
Stable and computationally efficient reconstruction methodologies are developed to solve two important medical imaging problems which use near-infrared (NIR) light as the source of interrogation, namely, diffuse optical tomography (DOT) and one of its variations, ultrasound-modulated optical tomography (UMOT). Since in both these imaging modalities the system matrices are ill-conditioned owing to insufficient and noisy data, the emphasis in this work is to develop robust stochastic filtering algorithms which can handle measurement noise and also account for inaccuracies in forward models through an appropriate assignment of a process noise. However, we start with a demonstration of the speeding up of a Gauss-Newton (GN) algorithm for DOT so that a video-rate reconstruction from data recorded on a CCD camera is rendered feasible. Towards this, a computationally efficient linear iterative scheme is proposed to invert the normal equation of a Gauss-Newton scheme in the context of recovery of the absorption coefficient distribution from DOT data, which involves the singular value decomposition (SVD) of the Jacobian matrix appearing in the update equation. This has sufficiently speeded up the inversion that a video-rate recovery of a time-evolving absorption coefficient distribution is demonstrated from experimental data. The SVD-based algorithm has reduced the number of operations in image reconstruction to O(N²) rather than O(N³). The rest of the algorithms are based on different forms of stochastic filtering wherein we arrive at a mean-square estimate of the parameters by computing their joint probability distributions conditioned on the measurements up to the current instant. Under this, the first algorithm developed uses a Bootstrap particle filter which also uses a quasi-Newton direction within it.
Since keeping track of the Newton direction necessitates repetitive computation of the Jacobian, for all particle locations and for all time steps, to make the recovery computationally feasible, we devised a faster update of the Jacobian. It is demonstrated, through analytical reasoning and numerical simulations, that the proposed scheme, not only accelerates convergence but also yields substantially reduced sample variance in the estimates vis-à-vis the conventional BS filter. Both accelerated convergence and reduced sample variance in the estimates are demonstrated in DOT optical parameter recovery using simulated and experimental data. In the next demonstration a derivative free variant of the pseudo-dynamic ensemble Kalman filter (PD-EnKF) is developed for DOT wherein the size of the unknown parameter is reduced by representing of the inhomogeneities through simple geometrical shapes. Also the optical parameter fields within the inhomogeneities are approximated via an expansion based on the circular harmonics (CH) (Fourier basis functions). The EnKF is then used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the Pseudo-Dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed ‘measurement’ equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. 
In our numerical simulations we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as μa = 0.01 mm⁻¹ and μs′ = 1.0 mm⁻¹ respectively. We also assume μa = 0.02 mm⁻¹ within the inhomogeneity (for the single-inhomogeneity case) and μa = 0.02 and 0.03 mm⁻¹ (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those through a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. The superiority of a modified version of the PD-EnKF, which uses an ensemble square root filter, is also demonstrated in the context of UMOT by recovering the distribution of the mean-squared amplitude of vibration, related to the Young's modulus, in the ultrasound focal volume. Since the ability of a coherent light probe to pick up the overall optical path-length change is limited to modulo an optical wavelength, the individual displacements suffered owing to the US forcing should be very small, say within a few angstroms. The sensitivity of the modulation depth to changes in these small displacements could be very small, especially when the ROI is far removed from the source and detector. The contrast recovery of the unknown distribution in such cases could be seriously impaired whilst using a quasi-Newton scheme (e.g. the GN scheme) which crucially makes use of the derivative information. The derivative-free gain-based Monte Carlo filter not only remedies this deficiency, but also provides a regularization-insensitive and computationally competitive alternative to the GN scheme.
The inherent ability of a stochastic filter to accommodate the model error owing to a diffusion approximation of the correlation transport may be cited as an added advantage in the context of the UMOT inverse problem. Finally, to speed up the forward solve of the partial differential equation (PDE) modeling photon transport in the context of UMOT, for which the PDE has time as a parameter, a spectral decomposition of the PDE operator is demonstrated. This allows the computation of the time-dependent forward solution in terms of the eigenfunctions of the PDE operator, which has speeded up the forward solve and in turn rendered the UMOT parameter recovery computationally efficient.
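The derivative-free character of the ensemble Kalman update can be illustrated with a scalar toy; the forward map, the noise levels and the ensemble size below are assumptions, and the PD-EnKF's pseudo-dynamic construction is not reproduced.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, h, obs_var, rng):
    """One scalar ensemble Kalman filter analysis step. The Kalman gain is
    estimated from ensemble covariances, so no derivative of the forward
    map h is needed -- the 'derivative-free' property exploited above.
    This toy is scalar; the thesis estimates whole parameter fields."""
    hx = [h(x) for x in ensemble]
    mx, mh = statistics.fmean(ensemble), statistics.fmean(hx)
    c_xy = statistics.fmean((x - mx) * (z - mh) for x, z in zip(ensemble, hx))
    c_yy = statistics.fmean((z - mh) ** 2 for z in hx)
    gain = c_xy / (c_yy + obs_var)
    # each member assimilates a perturbed copy of the observation
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - z)
            for x, z in zip(ensemble, hx)]

# Recover theta = 2.0 from noisy observations through a forward map.
rng = random.Random(11)
h = lambda x: 3.0 * x - 1.0        # illustrative forward model
truth, obs_var = 2.0, 0.05
ensemble = [rng.uniform(0.0, 4.0) for _ in range(200)]
for _ in range(20):
    y = h(truth) + rng.gauss(0.0, obs_var ** 0.5)
    ensemble = enkf_update(ensemble, y, h, obs_var, rng)
print(round(statistics.fmean(ensemble), 1))   # ≈ 2.0
```

Repeated assimilation of fresh noisy observations shrinks the ensemble spread and pulls its mean toward the true parameter, with only forward evaluations of h.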
APA, Harvard, Vancouver, ISO, and other styles
39

Karjee, Jyotirmoy. "Spatially Correlated Data Accuracy Estimation Models in Wireless Sensor Networks." Thesis, 2013. http://hdl.handle.net/2005/3087.

Full text
Abstract:
One of the major applications of wireless sensor networks is to sense accurate and reliable data from the physical environment, with or without a priori knowledge of data statistics. To extract accurate data from the physical environment, we investigate spatial data correlation among sensor nodes to develop data accuracy models. We propose three data accuracy models, namely the Estimated Data Accuracy (EDA) model, the Cluster based Data Accuracy (CDA) model and the Distributed Cluster based Data Accuracy (DCDA) model, with a priori knowledge of data statistics. Due to the deployment of a high density of sensor nodes, observed data are highly correlated among sensor nodes, which form distributed clusters in space. We describe two clustering algorithms, called the Deterministic Distributed Clustering (DDC) algorithm and the Spatial Data Correlation based Distributed Clustering (SDCDC) algorithm, implemented under the CDA model and the DCDA model respectively. Moreover, due to data correlation in the network, there is redundancy in the data collected by sensor nodes. Hence, it is not necessary for all sensor nodes to transmit their highly correlated data to the central node (sink node or cluster head node). Even an optimal set of sensor nodes is capable of measuring accurate data and transmitting accurate, precise data to the central node. This reduces data redundancy, energy consumption and data transmission cost, increasing the lifetime of sensor networks. Finally, we propose a fourth accuracy model, called the Adaptive Data Accuracy (ADA) model, that does not require any a priori knowledge of data statistics. The ADA model can sense a continuous data stream at regular time intervals to estimate accurate data from the environment and select an optimal set of sensor nodes for data transmission to the network.
Data transmission can be further reduced for these optimal sensor nodes by transmitting a subset of sensor data using a methodology called Spatio-Temporal Data Prediction (STDP) model under data reduction strategies. Furthermore, we implement data accuracy model when the network is under a threat of malicious attack.
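A much-simplified sketch of the idea that correlated sensors need not all transmit: greedily keep only sensors whose readings are not strongly correlated with an already-kept sensor. The threshold, the Pearson-correlation criterion and the data are illustrative, not the thesis's accuracy models.

```python
import math

def correlation(x, y):
    """Pearson correlation of two equal-length reading sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def select_representatives(readings, threshold=0.95):
    """Greedy selection of a transmitting subset: a sensor is redundant
    if its readings correlate above `threshold` with a kept sensor."""
    kept = []
    for sid, series in readings.items():
        if all(abs(correlation(series, readings[k])) < threshold for k in kept):
            kept.append(sid)
    return kept

# Three sensors: s0 and s1 observe nearly the same field; s2 differs.
readings = {
    "s0": [20.1, 20.4, 21.0, 21.5, 22.0],
    "s1": [20.0, 20.5, 21.1, 21.4, 22.1],   # tracks s0 closely
    "s2": [30.0, 29.0, 31.5, 28.0, 32.0],   # unrelated pattern
}
print(select_representatives(readings))   # → ['s0', 's2']
```

Only the representatives transmit; the sink can approximate a dropped sensor from its highly correlated representative, which is the energy-saving mechanism the abstract describes.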
APA, Harvard, Vancouver, ISO, and other styles
40

Groparu-Cojocaru, Ionica. "A class of bivariate Erlang distributions and ruin probabilities in multivariate risk models." Thèse, 2012. http://hdl.handle.net/1866/8947.

Full text
Abstract:
In this contribution, we introduce a new class of bivariate distributions of Marshall-Olkin type, called bivariate Erlang distributions. The Laplace transform, product moments and conditional densities are derived, and potential applications of bivariate Erlang distributions in life insurance and finance are considered; maximum likelihood estimators of the parameters are computed via the Expectation-Maximization algorithm. Further, our research project is devoted to the study of multivariate risk processes, which may be useful in analyzing ruin problems for insurance companies with a portfolio of dependent classes of business. We apply results from the theory of piecewise deterministic Markov processes in order to derive the exponential martingales needed to establish computable upper bounds on the ruin probabilities, as their exact expressions are intractable.
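As a hedged companion to the ruin discussion, the sketch below estimates by Monte Carlo a finite-horizon ruin probability for the classical univariate Cramér-Lundberg process; the thesis's multivariate model and martingale bounds are not reproduced, and all parameters are illustrative.

```python
import random

def ruin_probability(u, premium_rate, lam, claim_sampler,
                     horizon=100.0, n_paths=10000, seed=5):
    """Monte Carlo estimate of the finite-horizon ruin probability of a
    classical Cramér-Lundberg risk process: capital earns premiums at
    `premium_rate` and pays i.i.d. claims arriving as a Poisson(lam)
    process. Ruin can only occur at claim instants, so it suffices to
    simulate from claim to claim."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        t, capital = 0.0, u
        while True:
            dt = rng.expovariate(lam)        # wait for the next claim
            t += dt
            if t > horizon:
                break                        # survived the horizon
            capital += premium_rate * dt     # premiums collected meanwhile
            capital -= claim_sampler(rng)    # pay the claim
            if capital < 0:
                ruins += 1
                break
    return ruins / n_paths

# Exponential(1) claims, Poisson intensity 1, premium rate 1.2 (20% loading).
est = ruin_probability(u=5.0, premium_rate=1.2, lam=1.0,
                       claim_sampler=lambda rng: rng.expovariate(1.0))
print(est)   # roughly one third for these illustrative parameters
```

Upper bounds of the kind derived in the thesis give guarantees that such a simulation can only estimate, but the simulation is a cheap sanity check on them.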
APA, Harvard, Vancouver, ISO, and other styles
