Dissertations / Theses on the topic 'Randomization'

Consult the top 50 dissertations / theses for your research on the topic 'Randomization'.

1

Wu, Huayue. "Randomization and Restart Strategies." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2923.

Abstract:
The runtime of systematic backtracking search for constraint satisfaction problems (CSP) and propositional satisfiability problems (SAT) has been shown to exhibit great variability. Randomization with restarts is an effective technique for reducing this variability and achieving better expected performance. Several restart strategies have been proposed and studied in previous work, showing differing degrees of empirical effectiveness.

The first topic in this thesis is the extension of analytical results on restart strategies through the introduction of physically based assumptions. In particular, we study the performance of two of the restart strategies on Pareto runtime distributions. We show that the geometric strategy provably removes the heavy tail. We also examine several factors that arise during implementation and their effects on existing restart strategies.

The second topic concerns the development of a new hybrid restart strategy in a realistic problem setting. Our work adapts an existing general approach to dynamic strategies but implements more sophisticated machine learning techniques. The resulting hybrid strategy shows superior performance and improved robustness compared to existing static strategies.
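As an illustration of the kind of strategy discussed above, the sketch below simulates a randomized solver with Pareto-distributed runtimes, with and without a geometrically growing restart cutoff; the distribution parameters, cutoff schedule and solver model are assumptions chosen for illustration and are not taken from the thesis.

```python
import random

def pareto_runtime(alpha=0.8, xm=1.0):
    # Heavy-tailed Pareto runtimes; for alpha <= 1 the mean is infinite.
    u = 1.0 - random.random()          # u in (0, 1]
    return xm / (u ** (1.0 / alpha))

def geometric_restart_runtime(base_cutoff=1.0, factor=2.0, max_restarts=200):
    # Run the randomized solver, abort at the cutoff, restart with a larger cutoff.
    total, cutoff = 0.0, base_cutoff
    for _ in range(max_restarts):
        t = pareto_runtime()
        if t <= cutoff:                # solved within the current cutoff
            return total + t
        total += cutoff                # pay the cutoff and restart
        cutoff *= factor
    return total + pareto_runtime()    # fallback: run to completion

random.seed(0)
n = 20_000
plain = sum(pareto_runtime() for _ in range(n)) / n
restarted = sum(geometric_restart_runtime() for _ in range(n)) / n
print(f"mean runtime, no restarts:        {plain:8.1f}  (unstable, heavy tail)")
print(f"mean runtime, geometric restarts: {restarted:8.1f}")
```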
2

Palmer, Thomas M. "Extensions to Mendelian randomization." Thesis, University of Leicester, 2009. http://hdl.handle.net/2381/7617.

Abstract:
The Mendelian randomization approach is concerned with the causal pathway between a gene, an intermediate phenotype and a disease. The aim of the approach is to estimate the causal association between the phenotype and the disease when confounding or reverse causation may affect the direct estimate of this association. The approach represents the use of genes as instrumental variables in epidemiological research and is justified through Mendel's second law. Instrumental variable analysis was developed in econometrics as an alternative to regression analyses affected by confounding and reverse causation. Methods such as two-stage least squares are appropriate for instrumental variable analyses in which the phenotype and disease are continuous. However, case-control and cohort studies typically report binary outcomes, and instrumental variable methods for these studies are less well developed. For a binary outcome study, three estimators of the phenotype-disease log odds ratio are compared. An adjusted instrumental variable estimator is shown to have the least bias of the three estimators. However, significance tests based on the adjusted estimator are shown to have an inflated type I error rate, so the standard estimator, which had the correct type I error rate, could be used for testing. A single study may not have adequate statistical power to detect a causal association in a Mendelian randomization analysis, so meta-analysis models that extend existing approaches are investigated. The ratio of coefficients approach is applied within the meta-analysis models, and a Taylor series approximation is used to investigate its finite sample bias. The increasing awareness of the Mendelian randomization approach has made researchers aware of the need for instrumental variable methods appropriate for epidemiological study designs. The work in this thesis, viewed in the context of research into instrumental variable analysis in other areas of biostatistics, such as non-compliance in clinical trials, and in other subject areas, such as econometrics and causal inference, contributes to the development of methods for Mendelian randomization analyses.
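To make the instrumental-variable idea concrete, here is a minimal simulated sketch of the ratio-of-coefficients (Wald) estimator with a genotype as instrument; the data-generating model, effect sizes and variable names are illustrative assumptions, not the analyses carried out in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
g = rng.binomial(2, 0.3, n)                 # genotype, coded 0/1/2 (the instrument)
u = rng.normal(size=n)                      # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)        # intermediate phenotype
y = 0.3 * x + u + rng.normal(size=n)        # outcome; true causal effect is 0.3

# Direct (confounded) estimate: regression of y on x.
beta_direct = np.cov(x, y)[0, 1] / np.var(x)

# Ratio-of-coefficients (Wald) IV estimate: (G -> Y effect) / (G -> X effect).
beta_iv = (np.cov(g, y)[0, 1] / np.var(g)) / (np.cov(g, x)[0, 1] / np.var(g))

print(f"direct estimate (confounded): {beta_direct:.3f}")
print(f"IV ratio estimate:            {beta_iv:.3f}")   # close to the true 0.3
```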
3

Batidzirai, Jesca Mercy. "Randomization in a two armed clinical trial: an overview of different randomization techniques." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/395.

Abstract:
Randomization is the key element of any sensible clinical trial. It is the only way to ensure that patients have been allocated to the treatment groups without bias and that the treatment groups are comparable before the start of the trial. The randomization scheme used to allocate patients to the treatment groups plays a central role in achieving this goal. This study uses SAS simulations to carry out categorical data analysis and to compare two main classes of randomization schemes, unrestricted and restricted randomization, represented here by simple randomization and the minimization method respectively, in dental studies with small samples. Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak at balancing prognostic factors. Nevertheless, simple randomization can, by chance, also produce balanced groups even in small samples. Statistical power is also higher with minimization than with simple randomization, but bigger samples might be needed to boost the power.
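For readers unfamiliar with the minimization method compared in this abstract, the sketch below implements a basic Pocock-Simon-style allocation over two prognostic factors; the factors, tie-breaking rule and simulated patient stream are assumptions made purely for illustration, not the thesis's SAS code.

```python
import random

def minimization_assign(patients, factors, arms=("A", "B")):
    # counts[arm][factor][level] = patients already in 'arm' with that factor level
    counts = {arm: {f: {} for f in factors} for arm in arms}
    allocation = []
    for p in patients:
        def imbalance(arm):
            # Total marginal imbalance if this patient were added to 'arm'.
            total = 0
            for f in factors:
                level = p[f]
                in_arm = counts[arm][f].get(level, 0) + 1
                in_others = sum(counts[a][f].get(level, 0) for a in arms if a != arm)
                total += abs(in_arm - in_others)
            return total
        scores = {arm: imbalance(arm) for arm in arms}
        best = min(scores.values())
        choice = random.choice([a for a in arms if scores[a] == best])  # random tie-break
        for f in factors:
            counts[choice][f][p[f]] = counts[choice][f].get(p[f], 0) + 1
        allocation.append(choice)
    return allocation

random.seed(1)
patients = [{"sex": random.choice("MF"), "age": random.choice(["<40", ">=40"])}
            for _ in range(20)]
print(minimization_assign(patients, factors=("sex", "age")))
```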
4

LaValley, Jason. "Next Generation RFID Randomization Protocol." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20471.

Abstract:
Radio Frequency IDentification (RFID) is a wireless communications technology which allows companies to secure their assets and increase the portability of information. This research was motivated by the increased commercial use of RFID technology. Existing security protocols with high levels of security have high computation requirements, while less intensive protocols can allow a tag to be tracked. The techniques proposed in this thesis increase the number of available ciphertexts without a significant increase in processing power or storage requirements. The addition of random inputs to the generation of ciphertexts increases the number of possible results without requiring a more advanced encryption algorithm or an increased number of stored encryption keys. Four methods of altering the plaintext/ciphertext pair (random block, set pattern, random pattern, and indexed placement) are analyzed to determine the effectiveness of each method. The number of ciphertexts generated, the generation time, and the generation errors were recorded to determine which of the four proposed methods would be the most beneficial in an RFID system. The comparison of these characteristics determined that the set pattern placement method provided the best solution. The thesis also discusses how RFID transmissions appear to attackers and explains how the random inputs reduce the effectiveness of current attacks on such systems. In addition to improving the anonymity of RFID tag transmissions, the concept of authenticating random inputs is also introduced in this thesis. These methods help prevent an adversary from easily associating a tag with its transmissions, thus increasing the security of the RFID system.
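The toy sketch below illustrates the underlying idea of mixing a random block into the plaintext so that the same tag ID yields a different ciphertext on every query; the keyed hash stands in for the tag's cipher and is an assumption for illustration, not one of the four placement methods evaluated in the thesis.

```python
import hashlib, secrets

KEY = secrets.token_bytes(16)

def toy_encrypt(plaintext: bytes) -> bytes:
    # Stand-in for the tag's cipher: a keyed hash, deterministic for a fixed input.
    return hashlib.blake2s(plaintext, key=KEY).digest()

tag_id = b"TAG-000042"

# Without random input, the tag emits the same ciphertext on every query,
# so it can be tracked even though the ID itself is never revealed.
static = {toy_encrypt(tag_id) for _ in range(1000)}

# Prepending a short random block yields a fresh ciphertext per query,
# multiplying the ciphertext space without extra keys or a stronger cipher.
randomized = {toy_encrypt(secrets.token_bytes(4) + tag_id) for _ in range(1000)}

print("distinct ciphertexts, fixed plaintext:   ", len(static))      # 1
print("distinct ciphertexts, random block added:", len(randomized))  # ~1000
```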
5

Pobbathi, Venkatesh Paneesh Kumar. "Randomization Based Verification for Microprocessors." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177438.

Abstract:
Verification of microprocessors is a vital phase in their development and accounts for the majority of the time and cost of a microprocessor project. Verification can be split into two parts: coverage and checking. In coverage we try to find out whether all desired conditions are exercised, whereas in checking we try to find out whether the behaviour of the DUT is as expected. This thesis concentrates on coverage. The test bench should be able to cover all the cases, so methodologies have to be used which not only reduce the total time of the project but also achieve maximum coverage to increase the chances of detecting bugs. Random simulation helps to quickly reach corner cases that would not be found by traditional directed testing. In this thesis, functional verification of the M6802 microprocessor was implemented. A few verification approaches were implemented to assess their feasibility. It was found that random generation had many advantages over directed testing, but both approaches failed to attain good coverage in reasonable time. To overcome this, other implementations were explored, such as coverage-driven verification and machine learning. Machine learning showed significant improvement over the other methods for coverage; on the flip side, it required a lot of setup time. It was concluded that a combination of these approaches has to be used to reduce the setup time and obtain maximum coverage. The method to be selected depends on the complexity of the processor and the functional coverpoints.
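As a minimal illustration of why random stimulus closes functional coverage that directed tests miss, consider the sketch below; the coverage model (opcode x carry flag) and the instruction list are assumptions for illustration, not the thesis's coverage plan for the M6802.

```python
import itertools, random

OPCODES = ["ADDA", "SUBA", "LDAA", "STAA", "CMPA", "BNE"]
COVER_POINTS = set(itertools.product(OPCODES, [0, 1]))   # opcode x carry-flag value

# A hand-written directed suite covers only the cases the engineer thought of.
directed = {("ADDA", 0), ("SUBA", 0), ("LDAA", 0)}
print(f"directed tests cover {len(directed)} of {len(COVER_POINTS)} cover points")

# Random stimulus keeps hitting new combinations until the model is closed.
random.seed(0)
covered, transactions = set(), 0
while covered != COVER_POINTS and transactions < 10_000:
    covered.add((random.choice(OPCODES), random.randint(0, 1)))
    transactions += 1
print(f"random stimulus covers all {len(COVER_POINTS)} points after {transactions} transactions")
```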
6

Loukas, Vasileios. "Efficient Cache Randomization for Security." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-417725.

Abstract:
The effectiveness of cache hierarchies is undeniably of crucial importance, since they essentially constitute the solution to the disparity between fast processors and high memory latency. Nevertheless, security research spanning more than the last decade has critically exposed the vulnerabilities of cache hierarchies, creating a need for countermeasures. Through conflict-based attacks, the access pattern of a co-running application may be inferred, which in turn can be used to leak sensitive information from the application, such as encryption keys. Consequently, different ways of securing cache memories against conflict-based attacks have emerged, ideally incurring neither large storage overhead nor requiring any operating system support, yet providing both high performance and strong security. Prior work in the field has shown that a static encryption scheme is practically insufficient, and dynamic remapping policies have therefore been introduced, so that the eviction sets re-form periodically, making it much harder for an adversary to recognize them. In this thesis project, a randomization technique that leverages the indexing function of a 3-level cache hierarchy (RASCAL), together with a smooth dynamic remapping policy that further narrows the resulting performance gap, has been designed and implemented. The performance overhead incurred by this intervention on a typical cache hierarchy is identified and compared to two other remapping policies that were implemented, showing that it is feasible for a cache to be randomized and dynamically remapped at a sensible security-wise interval with a performance decrease of less than 1% in terms of miss ratio.
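The sketch below shows the general shape of a keyed randomized cache index function with periodic re-keying; the keyed hash, set count and remapping period are illustrative assumptions and not the RASCAL design itself.

```python
import hashlib, secrets

NUM_SETS = 1024          # assumed cache geometry, for illustration only

class RandomizedIndex:
    """Keyed randomized set-index function with periodic re-keying."""

    def __init__(self):
        self.key = secrets.token_bytes(16)
        self.accesses = 0

    def index(self, addr: int) -> int:
        # Map a line address to a cache set through a keyed hash, so an attacker
        # cannot predict which addresses collide without knowing the key.
        digest = hashlib.blake2b(addr.to_bytes(8, "little"),
                                 key=self.key, digest_size=4).digest()
        return int.from_bytes(digest, "little") % NUM_SETS

    def access(self, addr: int, remap_period: int = 100_000) -> int:
        # Dynamic remapping: re-key after a fixed number of accesses, so any
        # eviction set an attacker has profiled quickly becomes stale.
        # (A real design would also migrate or flush lines placed by the old key.)
        self.accesses += 1
        if self.accesses % remap_period == 0:
            self.key = secrets.token_bytes(16)
        return self.index(addr)

cache = RandomizedIndex()
print(cache.access(0x7f00_dead_b000), cache.access(0x7f00_dead_b040))
```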
7

Berry, Eric Dean. "Randomization testing of machine induced rules." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA304271.

Abstract:
Thesis (M.S. in Information Technology Management), Naval Postgraduate School, September 1995. Thesis advisor(s): B. Ramesh, William J. Haga. Includes bibliographical references. Also available online.
8

Vishnoi, Nisheeth Kumar. "Theoretical Aspects of Randomization in Computation." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/6424.

Abstract:
Randomness has proved to be a powerful tool in all of computation. It is pervasive in areas such as networking, machine learning, computer graphics, optimization, computational number theory and is "necessary" for cryptography. Though randomized algorithms and protocols assume access to "truly" random bits, in practice, they rely on the output of "imperfect" sources of randomness such as pseudo-random number generators or physical sources. Hence, from a theoretical standpoint, it becomes important to view randomness as a resource and to study the following fundamental questions pertaining to it: Extraction: How do we generate "high quality" random bits from "imperfect" sources? Randomization: How do we use randomness to obtain efficient algorithms? Derandomization: How (and when) can we "remove" our dependence on random bits? In this thesis, we consider important problems in these three prominent and diverse areas pertaining to randomness. In randomness extraction, we present extractors for "oblivious bit fixing sources". In (a non-traditional use of) randomization, we have obtained results in machine learning (learning juntas) and proved hardness of lattice problems. While in derandomization, we present a deterministic algorithm for a fundamental problem called "identity testing". In this thesis we also initiate a complexity theoretic study of Hilbert's 17th problem. Here identity testing is used in an interesting manner. A common theme in this work has been the use of tools from areas such as number theory in a variety of ways, and often the techniques themselves are quite interesting.
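As a pointer to the flavour of problem mentioned under derandomization, the sketch below shows the standard randomized (Schwartz-Zippel) test for polynomial identity testing; it is textbook material included for orientation, not the deterministic algorithm developed in the dissertation.

```python
import random

FIELD = 2_147_483_647          # a large prime modulus

def probably_identical(p, q, num_vars, trials=30):
    # Schwartz-Zippel: if p != q, a random evaluation point over a large field
    # detects the difference with probability at least 1 - deg/FIELD, so a few
    # independent trials make an error vanishingly unlikely.
    for _ in range(trials):
        point = [random.randrange(FIELD) for _ in range(num_vars)]
        if p(*point) % FIELD != q(*point) % FIELD:
            return False       # certainly different
    return True                # identical with high probability

p = lambda x, y: (x + y) ** 2
q = lambda x, y: x * x + 2 * x * y + y * y
r = lambda x, y: x * x + x * y + y * y

random.seed(0)
print(probably_identical(p, q, 2))   # True: the polynomials are identical
print(probably_identical(p, r, 2))   # False (with overwhelming probability)
```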
9

Johnston, Robert S. "Modeling the effects of restricted randomization." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0003/NQ31993.pdf.

10

Letsou, Christina. "Preferences for Randomization in Social Choice:." Thesis, Boston College, 2020. http://hdl.handle.net/2345/bc-ir:108719.

Abstract:
Thesis advisor: Uzi Segal
This dissertation consists of three chapters analyzing preferences for randomization in social choice problems. The first two chapters are related and in the fields of distributive justice and social choice. They concern the allocation of an indivisible good in social choice problems where efficiency is at odds with equality. The last chapter addresses a social choice problem from an individual's perspective using decision theoretical analysis. In this dissertation I demonstrate why randomization may be an attractive policy in social choice problems and how individuals may have preferences over the precise method of randomization. The first chapter is titled "Live and Let Die." This paper discusses how to allocate an indivisible good by social lottery when agents have asymmetric claims. Intuition suggests that there may exist agents who should receive zero probability in the optimal social lottery. In such a case, I say that these agents have weak claims to the good. This paper uses a running example of allocating an indivisible medical treatment to individuals with different survival rates and reactions to the treatment in order to provide conditions for consistency of weak claims. As such, I develop two related assumptions on a social planner's preferences over lotteries. The first, survival rate scaling, states that if an individual has a weak claim, then his claim is also weak when survival rates increase proportionally. The second, independence of weak claims, states that if an individual has a weak claim, then his removal does not affect others' probabilities of receiving the treatment. These assumptions imply that a compatible social welfare function must exhibit constant elasticity of substitution, which results in potentially degenerate weighted lotteries. The second chapter is titled "Why is Six Afraid of Seven? Bringing the 'Numbers' to Economics." This chapter discusses the numbers problem: the question of whether the numbers of people involved should be used to determine whether to help one group of people or another. I discuss the main solutions that have been proposed: flipping a coin, saving the greater number, and proportionally weighted lotteries. Using the economic tools of social choice, I then show how the model of the previous chapter, "Live and Let Die," can be extended to address numbers problems, and compare the implications of prominent social welfare functions for numbers problems. I argue that potentially degenerate weighted lotteries can assuage the main concerns discussed in the literature, and I show that both the Nash product social welfare function and constant elasticity of substitution (CES) social welfare functions are compatible with this solution. Finally, I discuss a related problem known as "probability cases," in which individuals differ in survival chances rather than in the numbers of individuals at risk. When the model is extended to allow for both asymmetries in survival chances and numbers of individuals in groups, CES results in potentially degenerate weighted lotteries whereas the Nash product does not. The third chapter is titled "All Probabilities are Equal, but Some Probabilities are More Equal than Others," which is joint work with Professor Uzi Segal of the Economics Department at Boston College and Professor Shlomo Naeh of the Departments of Talmud and Jewish Thought at The Hebrew University of Jerusalem. In this chapter we compare preferences for different procedures of selecting people randomly.
A common procedure for selecting people is to have them draw balls from an urn in turn. Modern and ancient stories (for example, by Graham Greene and the Talmud) suggest that such a lottery may not be viewed by the individuals as "fair." In this paper, we compare this procedure with several alternatives. These procedures give all individuals an equal chance of being selected, but have different structures. We analyze these procedures as multi-stage lotteries. In line with previous literature, our analysis is based on the observation that multi-stage lotteries are not considered indifferent to their probabilistic one-stage representations. As such, we use a non-expected utility model to understand the preferences of risk-averse individuals over these procedures and show that they may not be indifferent between them.
Thesis (PhD) — Boston College, 2020
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Economics
11

Sariaydin, Selin. "Randomization for Efficient Nonlinear Parametric Inversion." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/83451.

Abstract:
Nonlinear parametric inverse problems appear in many applications in science and engineering. We focus on diffuse optical tomography (DOT) in medical imaging. DOT aims to recover an unknown image of interest, such as the absorption coefficient in tissue to locate tumors in the body. Using a mathematical (forward) model to predict measurements given a parametrization of the tissue, we minimize the misfit between predicted and actual measurements up to a given noise level. The main computational bottleneck in such inverse problems is the repeated evaluation of this large-scale forward model, which corresponds to solving large linear systems for each source and frequency at each optimization step. Moreover, to efficiently compute derivative information, we need to solve, repeatedly, linear systems with the adjoint for each detector and frequency. As rapid advances in technology allow for large numbers of sources and detectors, these problems become computationally prohibitive. In this thesis, we introduce two methods to drastically reduce this cost. To efficiently implement Newton methods, we extend the use of simultaneous random sources to reduce the number of linear system solves to include simultaneous random detectors. Moreover, we combine simultaneous random sources and detectors with optimized ones that lead to faster convergence and more accurate solutions. We can use reduced order models (ROM) to drastically reduce the size of the linear systems to be solved in each optimization step while still solving the inverse problem accurately. However, the construction of the ROM bases still incurs a substantial cost. We propose to use randomization to drastically reduce the number of large linear solves needed for constructing the global ROM bases without degrading the accuracy of the solution to the inversion problem. We demonstrate the efficiency of these approaches with 2-dimensional and 3-dimensional examples from DOT; however, our methods have the potential to be useful for other applications as well.
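As a rough numerical illustration of the simultaneous-random-sources idea, the sketch below estimates a sum-of-squares quantity over many right-hand sides using only a few solves against random combinations; the operator, the sizes and the Rademacher weights are assumptions for illustration and not the DOT forward model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sources, n_random = 400, 100, 5

A = np.eye(n) + 0.01 * rng.standard_normal((n, n))    # stand-in forward operator
B = rng.standard_normal((n, n_sources))               # one column per source

# Full cost: one solve per source.
X_full = np.linalg.solve(A, B)
true_misfit = np.sum(X_full ** 2)

# Reduced cost: a handful of solves against random +/-1 combinations of sources,
# which estimate the same sum of squares without solving for every source.
W = rng.choice([-1.0, 1.0], size=(n_sources, n_random))
X_rand = np.linalg.solve(A, B @ W)
estimated_misfit = np.sum(X_rand ** 2) / n_random

print(f"{n_sources} solves -> {true_misfit:10.1f}")
print(f"{n_random} solves -> {estimated_misfit:10.1f}  (unbiased randomized estimate)")
```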
Ph. D.
12

Deng, Shuoqing. "Robust finance : a model randomization approach." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED005.

Abstract:
This PhD dissertation presents three research topics. The first two topics are related to the domain of robust finance and the last is related to a numerical method applied in the risk management of insurance companies. In the first part, we focus on the problem of super-replication duality for American options in discrete-time financial models. We consider the robust framework with a family of non-dominated probability measures, where the trading strategies are dynamic on the stocks and static on the options. We use two different ways to obtain the pricing-hedging duality. The first insight is that we can reformulate American options as European options on an enlarged space. The second insight is to consider a fictitious extension of the market on which all the assets are traded dynamically. We then show that the general results apply in two important examples of the robust framework. In the second part, we consider the problem of super-replication and utility maximization with proportional transaction costs in a discrete-time financial market with model uncertainty. Our key technique is to convert the original problem to a frictionless problem on an enlarged space by using a randomization technique together with the minimax theorem. For the super-replication problem, we obtain the duality results well known in the classical dominated context. For the utility maximization problem, we are able to prove the existence of the optimal strategy and the convex duality theorem in our context with transaction costs. In the third part, we present a numerical method based on a sparse grid approximation to compute the loss distribution of the balance sheet of an insurance company. We compare the new numerical method with the traditional nested simulation approach and study the convergence of both methods for estimating the risk indicators under consideration.
13

Saldanha, Izabel Cristina Correa. "Randomization in Design of Experiments: A Case Study." Pontifícia Universidade Católica do Rio de Janeiro, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12395@1.

Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This work presents guidelines for the execution of factorial experiments with restrictions on randomization, showing the importance of identifying such restrictions, based on the views of several authors and on the application of a case study. The study was provided by Companhia Siderúrgica Nacional - CSN and is presented through a comparison of two models whose analyses reveal the differences that arise when the restriction on the randomization of the experiment is taken into account in order to obtain an optimized response. As the literature shows, few authors address the importance of resetting the factor levels in an industrial designed experiment. Resetting the factor levels, together with the need to randomize the order of the experimental runs, validates the hypothesis that the observations obtained in the experiment are independently distributed random variables. When complete randomization of the experiment cannot be achieved, it falls to the experimenter to design the experiment in a way that ensures the correct statistical analysis and, consequently, the validation of the model. By identifying whether the experiment has restrictions on randomization, classifying it, identifying the factors that are easy and hard to reset, and analyzing the experiment correctly, mistaken or incomplete assessments, such as those discussed in this work, are avoided. Finally, the analysis, which takes into account the restriction on executing a completely randomized experiment and the presence of two error terms in the model, allowed the identification of the experimental conditions that guarantee the minimization of the response for the case study.
14

Willenson, Daniel M. "Preventing injection attacks through automated randomization of keywords." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77451.

Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 47-48).
SQL injection attacks are a major security issue for database-backed web applications, yet the most common approaches to prevention require a great deal of programmer effort and attention. Even one unchecked vulnerability can lead to the compromise of an entire application and its data. We present a fully automated system for securing applications against SQL injection which can be applied at runtime. Our system mutates SQL keywords in the program's string constants as they are loaded, and instruments the program's database accesses so that we can verify that all keywords in the final query string have been properly mutated, before passing it to the database. We instrument other method calls within the program to ensure correct program operation, despite the fact that its string constants have been mutated. Additionally, we instrument places where the program generates user-visible output to ensure that randomized keyword mutations are never revealed to an attacker.
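A toy sketch of the keyword-randomization idea described above follows; the keyword list, suffix scheme and regular-expression check are assumptions for illustration, not the runtime instrumentation performed by the system in the thesis.

```python
import re, secrets

SUFFIX = secrets.token_hex(4)                         # per-process random tag
KEYWORDS = ("SELECT", "FROM", "WHERE", "UNION", "OR", "AND")
KEYWORD_RE = r"\b(" + "|".join(KEYWORDS) + r")\b"

def mutate(trusted_sql: str) -> str:
    # Applied to the program's own string constants as they are loaded.
    return re.sub(KEYWORD_RE, lambda m: m.group(1) + "_" + SUFFIX, trusted_sql)

def check_and_strip(query: str) -> str:
    # Any bare keyword in the final query must have come from untrusted input.
    if re.search(KEYWORD_RE, query, flags=re.IGNORECASE):
        raise ValueError("possible SQL injection: unmutated keyword found")
    return query.replace("_" + SUFFIX, "")            # restore the real keywords

template = mutate("SELECT name FROM users WHERE id = '%s'")
print(check_and_strip(template % "42"))               # benign input passes
try:
    check_and_strip(template % "42' OR '1'='1")       # injected OR is caught
except ValueError as err:
    print(err)
```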
by Daniel M. Willenson.
M.Eng. and S.B.
15

Nadeem, Muhammad Hassan. "Linux Kernel Module Continuous Address Space Re-Randomization." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/104685.

Abstract:
Address space layout randomization (ASLR) is a technique employed to prevent exploitation of memory corruption vulnerabilities in user-space programs. While this technique is widely studied, its kernel space counterpart known as kernel address space layout randomization (KASLR) has received less attention in the research community. KASLR, as it is implemented today is limited in entropy of randomization. Specifically, the kernel image and its modules can only be randomized within a narrow 1GB range. Moreover, KASLR does not protect against memory disclosure vulnerabilities, the presence of which reduces or completely eliminates the benefits of KASLR. In this thesis, we make two major contributions. First, we add support for position-independent kernel modules to Linux so that the modules can be placed anywhere in the 64-bit virtual address space and at any distance apart from each other. Second, we enable continuous KASLR re-randomization for Linux kernel modules by leveraging the position-independent model. Both contributions increase the entropy and reduce the chance of successful ROP attacks. Since prior art tackles only user-space programs, we also solve a number of challenges unique to the kernel code. Our experimental evaluation shows that the overhead of position-independent code is very low. Likewise, the cost of re-randomization is also small even at very high re-randomization frequencies.
Master of Science
Address space layout randomization (ASLR) is a computer security technique used to prevent attacks that exploit memory disclosure and corruption vulnerabilities. ASLR works by randomly arranging the locations of key areas of a process such as the stack, heap, shared libraries and base address of the executable in the address space. This prevents an attacker from jumping to vulnerable code in memory and thus making it hard to launch control flow hijacking and code reuse attacks. ASLR makes it impossible for the attacker to leverage return-oriented programming (ROP) by pre-computing the location of code gadgets. Unfortunately, ASLR can be defeated by using memory disclosure vulnerabilities to unravel static randomization in an attack known as Just-In-Time ROP (JIT-ROP) attack. There exist techniques that extend the idea of ASLR by continually re-randomizing the program at run-time. With re-randomization, any leaked memory location is quickly obsoleted by rapidly and continuously rearranging memory. If the period of re-randomization is kept shorter than the time it takes for an attacker to create and launch their attack, then JIT-ROP attacks can be prevented. Unfortunately, there exists no continuous re-randomization implementation for the Linux kernel. To make matters worse, the ASLR implementation for the Linux kernel (KASLR) is limited. Specifically, for x86-64 CPUs, due to architectural restrictions, the Linux kernel is loaded in a narrow 1GB region of the memory. Likewise, all the kernel modules are loaded within the 1GB range of the kernel image. Due to this relatively low entropy, the Linux kernel is vulnerable to brute-force ROP attacks. In this thesis, we make two major contributions. First, we add support for position-independent kernel modules to Linux so that the modules can be placed anywhere in the 64-bit virtual address space and at any distance apart from each other. Second, we enable continuous KASLR re-randomization for Linux kernel modules by leveraging the position-independent model. Both contributions increase the entropy and reduce the chance of successful ROP attacks. Since prior art tackles only user-space programs, we also solve a number of challenges unique to the kernel code. We demonstrate the mechanism and the generality of our proposed re-randomization technique using several different, widely used device drivers, compiled as re-randomizable modules. Our experimental evaluation shows that the overhead of position-independent code is very low. Likewise, the cost of re-randomization is also small even at very high re-randomization frequencies.
16

Wang, Hui. "Response Adaptive Randomization using Surrogate and Primary Endpoints." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4517.

Abstract:
In recent years, adaptive designs in clinical trials have been attractive due to their efficiency and flexibility. Response-adaptive randomization procedures in phase II or III clinical trials are proposed to address ethical concerns by skewing the probability of patient assignments based on the responses obtained thus far, so that more patients are assigned to the superior treatment group. General response-adaptive randomization usually assumes that the primary endpoint can be obtained quickly after the treatment. In real clinical trials, however, the primary outcome is delayed, making it unusable for adaptation. Therefore, we utilize surrogate and primary endpoints simultaneously to adaptively assign subjects between treatment groups in clinical trials with continuous responses. We explore two types of primary endpoints commonly used in clinical trials: a normally distributed outcome and a time-to-event outcome. We establish a connection between the surrogate and primary endpoints through a Bayesian model, and then update the allocation ratio based on the accumulated data. Through simulation studies, we find that our proposed response-adaptive randomization is more effective at assigning patients to better treatments than equal allocation randomization and standard response-adaptive randomization based solely on the primary endpoint.
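The sketch below gives a much-simplified flavour of response-adaptive allocation, tilting the assignment probability toward the arm that currently looks better after a burn-in of equal allocation; the arm effects, the tilting rule and the probability caps are assumptions for illustration and stand in for the Bayesian surrogate/primary model developed in the thesis.

```python
import random, statistics

true_mean = {"A": 1.0, "B": 1.4}            # assumed effects; B is the better arm
outcomes = {"A": [], "B": []}
assignments = []

random.seed(2)
for i in range(200):
    if i < 20 or not outcomes["A"] or not outcomes["B"]:
        p_B = 0.5                            # burn-in: equal allocation
    else:
        diff = statistics.mean(outcomes["B"]) - statistics.mean(outcomes["A"])
        p_B = min(0.9, max(0.1, 0.5 + 0.5 * diff))   # skew, but keep both arms open
    arm = "B" if random.random() < p_B else "A"
    outcomes[arm].append(random.gauss(true_mean[arm], 1.0))
    assignments.append(arm)

print("patients allocated to the better arm B:",
      assignments.count("B"), "of", len(assignments))
```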
17

Lee, Joseph Jiazong. "Extensions of Randomization-Based Methods for Causal Inference." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17463974.

Abstract:
In randomized experiments, the random assignment of units to treatment groups justifies many of the traditional analysis methods for evaluating causal effects. Specifying subgroups of units for further examination after observing outcomes, however, may partially nullify any advantages of randomized assignment when data are analyzed naively. Some previous statistical literature has treated all post-hoc analyses homogeneously as entirely invalid and thus uninterpretable. Alternative analysis methods and the extent of the validity of such analyses remain largely unstudied. Here Chapter 1 proposes a novel, randomization-based method that generates valid post-hoc subgroup p-values, provided we know exactly how the subgroups were constructed. If we do not know the exact subgrouping procedure, our method may still place helpful bounds on the significance level of estimated effects. Chapter 2 extends the proposed methodology to generate valid posterior predictive p-values for partially post-hoc subgroup analyses, i.e., analyses that compare existing experimental data --- from which a subgroup specification is derived --- to new, subgroup-only data. Both chapters are motivated by pharmaceutical examples in which subgroup analyses played pivotal and controversial roles. Chapter 3 extends our randomization-based methodology to more general randomized experiments with multiple testing and nuisance unknowns. The results are valid familywise tests that are doubly advantageous, in terms of statistical power, over traditional methods. We apply our methods to data from the United States Job Training Partnership Act (JTPA) Study, where our analyses lead to different conclusions regarding the significance of estimated JTPA effects. In all chapters, we investigate the operating characteristics and demonstrate the advantages of our methods through a series of simulations.
Statistics
18

Ding, Peng. "Exploring the Role of Randomization in Causal Inference." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467349.

Abstract:
This manuscript includes three topics in causal inference, all of which are under the randomization inference framework (Neyman, 1923; Fisher, 1935a; Rubin, 1978). This manuscript contains three self-contained chapters. Chapter 1. Under the potential outcomes framework, causal effects are defined as comparisons between potential outcomes under treatment and control. To infer causal effects from randomized experiments, Neyman proposed to test the null hypothesis of zero average causal effect (Neyman’s null), and Fisher proposed to test the null hypothesis of zero individual causal effect (Fisher’s null). Although the subtle difference between Neyman’s null and Fisher’s null has caused lots of controversies and confusions for both theoretical and practical statisticians, a careful comparison between the two approaches has been lacking in the literature for more than eighty years. I fill in this historical gap by making a theoretical comparison between them and highlighting an intriguing paradox that has not been recognized by previous researchers. Logically, Fisher’s null implies Neyman’s null. It is therefore surprising that, in actual completely randomized experiments, rejection of Neyman’s null does not imply rejection of Fisher’s null for many realistic situations, including the case with constant causal effect. Furthermore, I show that this paradox also exists in other commonly-used experiments, such as stratified experiments, matched-pair experiments, and factorial experiments. Asymptotic analyses, numerical examples, and real data examples all support this surprising phenomenon. Besides its historical and theoretical importance, this paradox also leads to useful practical implications for modern researchers. Chapter 2. Causal inference in completely randomized treatment-control studies with binary outcomes is discussed from Fisherian, Neymanian and Bayesian perspectives, using the potential outcomes framework. A randomization-based justification of Fisher’s exact test is provided. Arguing that the crucial assumption of constant causal effect is often unrealistic, and holds only for extreme cases, some new asymptotic and Bayesian inferential procedures are proposed. The proposed procedures exploit the intrinsic non-additivity of unit-level causal effects, can be applied to linear and non-linear estimands, and dominate the existing methods, as verified theoretically and also through simulation studies. Chapter 3. Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference; Neyman, for example, avoided such variation through his focus on the average treatment effect and his definition of the confidence interval. In this chapter, I extend the Neymanian framework to explicitly allow both for treatment effect variation explained by covariates, known as the systematic component, and for unexplained treatment effect variation, known as the idiosyncratic component. This perspective enables estimation and testing of impact variation without imposing a model on the marginal distributions of potential outcomes, with the workhorse approach of regression with interaction terms being a special case. My approach leads to two practical results.
First, I combine estimates of systematic impact variation with sharp bounds on overall treatment variation to obtain bounds on the proportion of total impact variation explained by a given model—this is essentially an R² for treatment effect variation. Second, by using covariates to partially account for the correlation of potential outcomes problem, I exploit this perspective to sharpen the bounds on the variance of the average treatment effect estimate itself. As long as the treatment effect varies across observed covariates, the resulting bounds are sharper than the current sharp bounds in the literature. I apply these ideas to a large randomized evaluation in educational research, showing that these results are meaningful in practice.
Statistics
19

Georgii Hellberg, Kajsa-Lotta, and Andreas Estmark. "Fisher's Randomization Test versus Neyman's Average Treatment Test." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385069.

Abstract:
The following essay describes and compares Fisher's randomization test and Neyman's average treatment effect test, with the aim of providing an easily understood blueprint for how the tests are carried out in practice and for the conditions surrounding them. Focus is also directed towards the tests' different implications for statistical inference and how the design of a study, in relation to its assumptions, affects the external validity of the results. The essay is structured so that the tests are first presented and evaluated, then their respective advantages and limitations are weighed against each other before they are applied to a data set as a practical example. Lastly, the results obtained from the data set are compared in the Discussion section. The example used in this paper, which compares cigarette consumption after treating one group with nicotine patches and another with fake nicotine patches, shows a decrease in cigarette consumption under both tests. The tests differ, however, in that the result from the Neyman test can be made valid for the population of interest. Fisher's test, on the other hand, only identifies the effect within the sample, and consequently cannot draw conclusions about the population of heavy smokers. In short, the findings of this paper suggest that a combined use of the two tests would be the most appropriate way to test for a treatment effect: one could first use the Fisher test to check whether any effect at all exists in the experiment, and then use the Neyman test to complement the findings of the Fisher test, for example by estimating an average treatment effect.
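To show the mechanics of the Fisher randomization test described above, here is a small sketch on made-up numbers (not the essay's nicotine-patch data): under the sharp null of no effect for any unit, every re-assignment of treatment labels is equally likely, and the p-value is the share of assignments giving a difference at least as extreme as the observed one.

```python
import itertools, statistics

treated = [12, 9, 11, 8, 10]        # e.g. cigarettes smoked, nicotine-patch group
control = [14, 13, 12, 15, 11]      # placebo-patch group (illustrative values)
observed = statistics.mean(treated) - statistics.mean(control)

pooled = treated + control
n_t = len(treated)
at_least_as_extreme = 0
total = 0
for idx in itertools.combinations(range(len(pooled)), n_t):
    t = [pooled[i] for i in idx]
    c = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = statistics.mean(t) - statistics.mean(c)
    if diff <= observed:             # one-sided: treatment lowers consumption
        at_least_as_extreme += 1
    total += 1

print(f"observed difference: {observed:.1f}")
print(f"randomization p-value: {at_least_as_extreme / total:.3f}")
```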
20

Basler, Georg. "Mass-balanced randomization : a significance measure for metabolic networks." PhD thesis, Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6203/.

Abstract:
Complex networks have been successfully employed to represent different levels of biological systems, ranging from gene regulation to protein-protein interactions and metabolism. Network-based research has mainly focused on identifying unifying structural properties, including small average path length, large clustering coefficient, heavy-tail degree distribution, and hierarchical organization, viewed as requirements for efficient and robust system architectures. Existing studies estimate the significance of network properties using a generic randomization scheme, a Markov-chain switching algorithm, which generates unrealistic reactions in metabolic networks, as it does not account for the physical principles underlying metabolism. Therefore, it is unclear whether the properties identified with this generic approach are related to the functions of metabolic networks. Within this doctoral thesis, I have developed an algorithm for mass-balanced randomization of metabolic networks, which runs in polynomial time and samples networks almost uniformly at random. The properties of biological systems result from two fundamental origins: ubiquitous physical principles and a complex history of evolutionary pressure. The latter determines the cellular functions and abilities required for an organism’s survival. Consequently, the functionally important properties of biological systems result from evolutionary pressure. By employing randomization under physical constraints, the salient structural properties, i.e., the small-world property, degree distributions, and biosynthetic capabilities, of six metabolic networks from all kingdoms of life are shown to be independent of physical constraints, and thus likely to be related to the evolution and functional organization of metabolism. This stands in stark contrast to the results obtained from the commonly applied switching algorithm. In addition, a novel network property is devised to quantify the importance of reactions by simulating the impact of their knockout. The relevance of the identified reactions is verified by the findings of existing experimental studies demonstrating the severity of the respective knockouts. The results suggest that the novel property may be used to determine the reactions important for the viability of organisms. Next, the algorithm is employed to analyze the dependence between mass balance and thermodynamic properties of Escherichia coli metabolism. The thermodynamic landscape in the vicinity of the metabolic network reveals two regimes of randomized networks: those with thermodynamically favorable reactions, similar to the original network, and those with less favorable reactions. The results suggest that there is an intrinsic dependency between thermodynamic favorability and evolutionary optimization. The method is further extended to optimizing metabolic pathways by introducing novel chemically feasible reactions. The results suggest that, in three organisms of biotechnological importance, introduction of the identified reactions may allow for optimizing their growth. The approach is general and allows identifying chemical reactions which modulate the performance with respect to any given objective function, such as the production of valuable compounds or the targeted suppression of pathway activity. These theoretical developments can find applications in metabolic engineering or disease treatment.
The developed randomization method proposes a novel approach to measuring the significance of biological network properties, and establishes a connection between large-scale approaches and biological function. The results may provide important insights into the functional principles of metabolic networks, and open up new possibilities for their engineering.
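For orientation, the sketch below implements the generic Markov-chain switching (edge-swap) randomization that the abstract refers to as the conventional null model; it preserves node degrees but ignores mass balance, which is exactly the shortcoming the thesis addresses. The toy edge list is an assumption for illustration.

```python
import random

def switch_randomize(edges, n_switches=10_000, seed=0):
    """Degree-preserving edge switching: the conventional null model."""
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    for _ in range(n_switches):
        (a, b), (c, d) = rng.sample(edges, 2)
        # Propose rewiring a->b, c->d into a->d, c->b.
        if a == d or c == b or (a, d) in edge_set or (c, b) in edge_set:
            continue                       # would create a self-loop or a duplicate
        i, j = edges.index((a, b)), edges.index((c, d))
        edges[i], edges[j] = (a, d), (c, b)
        edge_set.difference_update({(a, b), (c, d)})
        edge_set.update({(a, d), (c, b)})
    return edges

# Toy substrate->product edge list; a mass-balanced scheme would additionally
# require every rewired reaction to conserve atoms, which this null model ignores.
toy = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "fbp"), ("fbp", "dhap"), ("atp", "adp")]
print(switch_randomize(toy, n_switches=50))
```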
21

Williams-King, David. "Binary shuffling : defeating memory disclosure attacks through re-randomization." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/48600.

Abstract:
Software that is in use and under development today still contains as many bugs as ever. These bugs are often exploitable by attackers using advanced techniques such as Return-Oriented Programming (ROP), where pieces of legitimate code are stitched together to form a malicious exploit. One class of defenses against these attacks is Address-Space Layout Randomization (ASLR), which randomly selects the base addresses of legitimate code. However, it has recently been shown that this randomization can be unravelled with memory disclosure attacks, which divulge the contents of memory at a given address. In this work, we strengthen code randomization against memory disclosure attacks, in order to make it a viable defense in the face of Return-Oriented Programming. We propose a technique called binary shuffling, which dynamically re-randomizes the position of code blocks at runtime. While a memory disclosure may reveal the contents of a memory address (thus unravelling the randomization), this information is only valid for a very short time. Our system, called Shuffler, operates on program binaries without access to source code, and can re-randomize the position of all code in a program in as little as ten milliseconds. We show that this is fast enough to defeat any attempt at Return-Oriented Programming, even when armed with a memory disclosure attack. Shuffler adds only 10 to 21% overhead on average, making it a viable defense against these types of attack.
Science, Faculty of
Computer Science, Department of
Graduate
22

Chou, Remi. "Information-theoretic security under computational, bandwidth, and randomization constraints." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53837.

Abstract:
The objective of the proposed research is to develop and analyze coding schemes for information-theoretic security which could bridge a gap between theory and practice. We focus on two fundamental models for information-theoretic security: secret-key generation for a source model and secure communication over the wire-tap channel. Many results for these models only establish the existence of codes, and few attempts have been made to design practical schemes. The schemes we would like to propose should account for practical constraints. Specifically, we formulate the following constraints to avoid oversimplifying the problems: (1) we assume computationally bounded legitimate users and do not rely solely on proofs showing the existence of codes with complexity exponential in the block-length; (2) we assume a rate-limited public communication channel for the secret-key generation model, to account for bandwidth constraints; (3) we assume a non-uniform and rate-limited source of randomness at the encoder for the wire-tap channel model, since a perfectly uniform and rate-unlimited source of randomness might be an expensive resource. Our work focuses on developing schemes for secret-key generation and the wire-tap channel that satisfy subsets of the aforementioned constraints.
23

Wang, Xiaofei. "Randomization test and correlation effects in high dimensional data." Kansas State University, 2012. http://hdl.handle.net/2097/14039.

Abstract:
Master of Science
Department of Statistics
Gary Gadbury
High-dimensional data (HDD) have been encountered in many fields and are characterized by a “large p, small n” paradigm that arises in genomic, lipidomic, and proteomic studies. This report used a simulation study that employed basic block-diagonal covariance matrices to generate correlated HDD. Quantities of interest in such data include, among others, the number of ‘significant’ discoveries. This number can be highly variable when data are correlated. This project compared randomization tests with the usual t-tests for testing significant effects across two treatment conditions. Of interest was whether the variance of the number of discoveries is better controlled in a randomization setting than with a t-test. The results showed that the randomization tests produced results similar to those of the t-tests.
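The sketch below reproduces the general shape of such a simulation: correlated features are generated from a block-diagonal covariance under the null, and the number of 'discoveries' at the 5% level is counted for a two-sample t-test and for a permutation (randomization) test. The dimensions, block size and correlation are assumed values, not those used in the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, n_per_group, block = 500, 10, 25

# Block-diagonal covariance: unit variances, correlation 0.7 within each block.
sigma_block = 0.7 * np.ones((block, block)) + 0.3 * np.eye(block)
L = np.linalg.cholesky(sigma_block)

def sample(n):
    z = rng.standard_normal((n, p))
    return np.hstack([z[:, i:i + block] @ L.T for i in range(0, p, block)])

x, y = sample(n_per_group), sample(n_per_group)       # null case: no true effects

# Discoveries at alpha = 0.05 from ordinary two-sample t-tests.
_, pvals_t = stats.ttest_ind(x, y, axis=0)
print("t-test discoveries:", int(np.sum(pvals_t < 0.05)))

# Discoveries from a permutation (randomization) test on the mean difference.
obs = np.abs(x.mean(axis=0) - y.mean(axis=0))
pooled = np.vstack([x, y])
exceed = np.zeros(p)
n_perm = 500
for _ in range(n_perm):
    idx = rng.permutation(2 * n_per_group)
    diff = pooled[idx[:n_per_group]].mean(axis=0) - pooled[idx[n_per_group:]].mean(axis=0)
    exceed += np.abs(diff) >= obs
print("permutation-test discoveries:", int(np.sum(exceed / n_perm < 0.05)))
```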
24

Pemberton, Haley. "Analyzing Math to Mastery through Randomization of Intervention Components." Thesis, Southern Illinois University at Edwardsville, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10807966.

Abstract:

This study examined the effect of Math to Mastery and a randomized version of Math to Mastery at increasing digits correct per minute (DCPM) for three elementary-aged students. All three students received the standard and randomized versions of the math fact fluency intervention, and progress was monitored using an adapted alternating treatments design. Data were collected and student progress was monitored to examine whether the randomized version of Math to Mastery would be just as effective as, or more effective than, the standard version of Math to Mastery. Results of the study indicated that the standard version of Math to Mastery was more effective than the randomized version for all three students at increasing digits correct per minute.

APA, Harvard, Vancouver, ISO, and other styles
25

Morris, David Dry. "Randomization analysis of experimental designs under non standard conditions." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/53649.

Full text
Abstract:
Often the basic assumptions of the ANOVA for an experimental design are not met or the statistical model is incorrectly specified. Randomization of treatments to experimental units is expected to protect against such shortcomings. This paper uses randomization theory to examine the impact on the expectations of mean squares, treatment means, and treatment differences for two model mis-specifications: systematic response shifts and correlated experimental units. Systematic response shifts are presented in the context of the randomized complete block design (RCBD). In particular, fixed shifts are added to the responses of experimental units in the initial and final positions of each block. The fixed shifts are called border shifts. It is shown that the RCBD is an unbiased design under randomization theory when border shifts are present. Treatment means are biased but treatment differences are unbiased. However, the estimate of error is biased upwards and the power of the F test is reduced. Alternative designs to the RCBD under border shifts are the Latin square, semi-Latin square, and two-column designs. Randomization analysis demonstrates that the Latin square is an unbiased design with an unbiased estimate of error and of treatment differences. The semi-Latin square has each of the t treatments occurring only once per row and column, but t is a multiple of the number of rows or columns. Thus each row-column combination contains more than one experimental unit. The semi-Latin square is a biased design with a biased estimate of error even when no border shifts are present. Row-column interaction is responsible for the bias. Border shifts do not contaminate the expected mean squares or treatment differences, and thus the semi-Latin square is a viable alternative when the border shift overwhelms the row-column interaction. The two columns of the two-column design correspond to the border and interior experimental units respectively. Results similar to those for the semi-Latin square are obtained. Simulation studies for the RCBD and its alternatives indicate that the power of the F test is reduced for the RCBD when border shifts are present. When no row-column interaction is present, the semi-Latin square and two-column designs provide good alternatives to the RCBD. Similar results are found for the split-plot design when border shifts occur in the subplots. A main effects plan is presented for situations when the number of whole-plot units equals the number of subplot units per whole plot. The analysis of designs in which the experimental units occur in a sequence and exhibit correlation is considered next. The Williams Type II(a) design is examined in conjunction with the usual ANOVA and with the method of first differencing. Expected mean squares, treatment means, and treatment differences are obtained under randomization theory for each analysis. When only adjacent experimental units have non-negligible correlation, the Type II(a) design provides an unbiased error estimate for the usual ANOVA. However, the expectation of the treatment mean square is biased downwards for a positive correlation. First differencing results in a biased test and a biased error estimate. The test is approximately unbiased if the correlation between units is close to a half.
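As a hedged illustration of the kind of randomization argument involved, the Python sketch below simulates an RCBD with fixed border shifts added to the first and last plots of each block and checks, over many random assignments, that estimated treatment differences remain unbiased. The numbers of treatments and blocks, the shift size, and the error distribution are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
t, b = 4, 6                              # treatments, blocks (illustrative)
tau = np.array([0.0, 1.0, 2.0, 3.0])     # true treatment effects
shift = 5.0                              # fixed border shift at end positions

diffs = []                               # estimated (treatment 2 - treatment 1)
for _ in range(2000):                    # repeated randomizations
    y = np.zeros((b, t))
    trt = np.zeros((b, t), dtype=int)
    for i in range(b):
        order = rng.permutation(t)       # randomize treatments within block
        trt[i] = order
        y[i] = tau[order] + rng.normal(0, 1, t)
        y[i, 0] += shift                 # border shift, first plot of block
        y[i, -1] += shift                # border shift, last plot of block
    means = [y[trt == k].mean() for k in range(t)]
    diffs.append(means[1] - means[0])

# Average estimated difference is close to tau[1] - tau[0] = 1:
# treatment differences are unbiased even with border shifts.
print(np.mean(diffs))
```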
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
26

Hu, Xianghong. "Statistical methods for Mendelian randomization using GWAS summary data." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/639.

Full text
Abstract:
Mendelian Randomization (MR) is a powerful tool for assessing the causal effect of an exposure on an outcome using genetic variants as instrumental variables. Much of the recent development has been propelled by the increasing availability of GWAS summary data. However, the accuracy of MR causal effect estimates can be challenged when the MR assumptions are violated. Sources of bias include weak effects arising from polygenicity, the presence of horizontal pleiotropy, and other biases such as selection bias. In this thesis, we propose two methods to deal with these issues. In the first part, we propose a method named Bayesian Weighted Mendelian Randomization (BWMR) for causal inference using summary statistics from GWAS. BWMR not only takes into account the uncertainty of weak effects owing to the polygenicity of the human genome but also models weak horizontal pleiotropic effects. Moreover, BWMR adopts a Bayesian reweighting strategy for the detection of large pleiotropic outliers. An efficient algorithm based on variational inference was developed to make BWMR computationally efficient and stable. Because variational inference tends to underestimate the variance, we further derived a closed-form variance estimator inspired by a linear response method. We conducted several simulations to evaluate the performance of BWMR, demonstrating its advantage over other methods. We then applied BWMR to assess causality between 126 metabolites and 90 complex traits, revealing novel causal relationships. In the second part, we further developed BWMR-C: statistical correction of selection bias for Mendelian randomization based on a Bayesian weighted method. Building on the framework of BWMR, the probability model in BWMR-C is built conditional on the IV selection criteria. In this way, BWMR-C reduces the influence of the selection process on the causal effect estimates while preserving the good properties of BWMR. To make the causal inference computationally stable and efficient, we developed a variational EM algorithm. We conducted several comprehensive simulations to evaluate the performance of BWMR-C for the correction of selection bias. We then applied BWMR-C to seven body fat distribution related traits and 140 UK Biobank traits. Our results show that BWMR-C achieves satisfactory performance in correcting selection bias. Keywords: Mendelian randomization, polygenicity, horizontal pleiotropy, selection bias, variational inference.
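BWMR itself is a Bayesian model fitted with variational inference; as a hedged point of reference, the sketch below implements only the simple inverse-variance weighted (IVW) estimator from GWAS summary statistics that such methods extend. The function name and the toy summary data are illustrative.

```python
import numpy as np

def ivw_mr(beta_exp, se_exp, beta_out, se_out):
    """Inverse-variance weighted Mendelian randomization estimate.

    beta_exp, se_exp : per-SNP effects (and SEs) on the exposure
    beta_out, se_out : per-SNP effects (and SEs) on the outcome
    Returns the IVW causal estimate and its approximate SE, using
    first-order weights that ignore the uncertainty in beta_exp.
    """
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratio = beta_out / beta_exp               # per-SNP Wald ratio estimates
    weights = (beta_exp / se_out) ** 2        # inverse variance of each ratio
    estimate = np.sum(weights * ratio) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return estimate, se

# Toy summary data for three independent SNPs (illustrative numbers).
est, se = ivw_mr([0.10, 0.08, 0.12], [0.01, 0.01, 0.02],
                 [0.05, 0.04, 0.06], [0.02, 0.02, 0.03])
print(est, se)
```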
APA, Harvard, Vancouver, ISO, and other styles
27

Vilakati, S. E. "Inference Following Two-Stage Randomization Designs with Survival Endpoints." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3423158.

Full text
Abstract:
Treatment of complex diseases such as cancer, HIV, leukemia and depression usually follows complex treatment sequences. In two-stage randomization designs, patients are randomized to first-stage treatments, and upon response, a second randomization to the second-stage treatments is done. The clinical goal in such trials is to achieve a response such as complete remission of leukemia, 50% shrinkage of solid tumor or increase in CD4 count in HIV patients. These responses are presumed to predict longer survival. The focus in two-stage randomization designs with survival endpoints is on estimating survival distributions and comparing different treatment policies. In this thesis, we make contributions in these two areas. A simulation study is conducted to compare three non-parametric methods for estimating survival distributions. A parametric method is proposed for estimating survival distributions in time-varying SMART designs. The proposed estimator is studied using simulations and also applied to a clinical trial dataset. Thirdly, we propose a method for comparing different treatment policies. The new method works well even if the survival curves from the treatment policies cross. Simulation studies show that the new method has better statistical power than the weighted log-rank test in cases where survival curves cross. The last part of this thesis focuses on analyzing adverse events data from two-stage randomization designs. We develop a methodology for analyzing adverse events data in the competing risk setting which has been applied to a leukemia clinical trial dataset.
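As a hedged illustration of the design being analyzed (not of the estimators proposed in the thesis), the following Python sketch simulates a two-stage randomized trial: a first-stage randomization, assessment of response, and a second randomization of responders. All probabilities and treatment labels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Stage 1: randomize patients to first-stage treatments A1 or A2.
a1 = rng.integers(0, 2, size=n)            # 0 -> A1, 1 -> A2

# Response to the first-stage treatment (illustrative probabilities).
p_resp = np.where(a1 == 0, 0.5, 0.4)
responder = rng.random(n) < p_resp

# Stage 2: responders are re-randomized to maintenance treatments B1 or B2;
# non-responders receive a fixed salvage treatment in this toy example.
b = np.full(n, -1)
b[responder] = rng.integers(0, 2, size=responder.sum())

# A treatment policy such as "A1 then, on response, B1" is the unit of
# comparison; survival under each policy is typically estimated with
# weighting methods because patients contribute to more than one policy.
print((a1 == 0).sum(), responder.sum(), (b == 0).sum())
```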
APA, Harvard, Vancouver, ISO, and other styles
28

Davison, Jennifer J. "Response surface designs and analysis for bi-randomization error structures." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-10042006-143852/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Bernard, Anthony Joseph. "Robust I-Sample Analysis of Means Type Randomization Tests for Variances." UNF Digital Commons, 1999. http://digitalcommons.unf.edu/etd/90.

Full text
Abstract:
The advent of powerful computers has brought about the randomization technique for testing statistical hypotheses. Randomization tests are based on shuffles or rearrangements of the (combined) sample. Putting each of the I samples "in a bowl" forms the combined sample. Drawing samples "from the bowl" forms a shuffle. Shuffles can be made with or without replacement. In this thesis, analysis of means type randomization tests will be presented to solve the homogeneity of variance problem. An advantage of these tests is that they allow the user to graphically present the results via a decision chart similar to a Shewhart control chart. The focus is on finding tests that are robust to departures from normality. The proposed tests will be compared against commonly used nonrandomization tests. The type I error stability across several nonnormal distributions and the power of each test will be studied via Monte Carlo simulation.
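As a hedged sketch of an I-sample randomization test for homogeneity of variances, the Python snippet below shuffles the combined sample ("the bowl") into the original group sizes and compares an ANOM-style variance statistic against its shuffle distribution. The choice of statistic, group sizes, and number of shuffles are illustrative assumptions, not the thesis's procedures.

```python
import numpy as np

def randomization_variance_test(groups, n_perm=5000, seed=None):
    """Randomization test of homogeneity of variances for I samples.

    Test statistic: maximum absolute deviation of a group's log sample
    variance from the mean log variance. Shuffles are drawn without
    replacement from the combined sample.
    """
    rng = np.random.default_rng(seed)
    sizes = [len(g) for g in groups]
    combined = np.concatenate(groups)

    def stat(parts):
        logv = np.log([np.var(p, ddof=1) for p in parts])
        return np.max(np.abs(logv - logv.mean()))

    observed = stat(groups)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(combined)
        parts = np.split(combined, np.cumsum(sizes)[:-1])
        count += stat(parts) >= observed
    return count / n_perm

rng = np.random.default_rng(3)
samples = [rng.normal(0, s, 20) for s in (1.0, 1.0, 2.0)]
print(randomization_variance_test(samples, seed=4))
```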
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Li. "Recommendations for Design Parameters for Central Composite Designs with Restricted Randomization." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28794.

Full text
Abstract:
In response surface methodology, the central composite design is the most popular choice for fitting a second-order model. The choice of the distance for the axial runs, alpha, in a central composite design is crucial to the performance of the design. In the literature, there are many discussions and recommendations for the choice of alpha, among which a rotatable alpha and an orthogonal-blocking alpha receive the greatest attention. Box and Hunter (1957) discuss and calculate the values of alpha that achieve rotatability, which is a way to stabilize the prediction variance of the design. They also give the values of alpha that make the design orthogonally blocked, where the estimates of the model coefficients remain the same even when block effects are added to the model. In the last ten years, researchers have begun to recognize the importance of a split-plot structure in industrial experiments, and constructing response surface designs with a split-plot structure is now an active research area. In this dissertation, Box and Hunter's choices of alpha for rotatability and orthogonal blocking are extended to central composite designs with a split-plot structure. By assigning different values to the axial run distances of the whole-plot factors and the subplot factors, we propose two-strata rotatable split-plot central composite designs and orthogonally blocked split-plot central composite designs. Since the construction of the two-strata rotatable split-plot central composite design involves an unknown variance components ratio d, we further study the robustness of the two-strata rotatability to d through simulation. Our goal is to provide practical recommendations for the value of the design parameter alpha based on the philosophy of traditional response surface methodology.
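For reference, in a standard (non-split-plot) central composite design the rotatable axial distance depends only on the number of factorial runs F; the split-plot extension in the dissertation generalizes this. A brief LaTeX note of the classical relationship, with a worked value for three factors (the k = 3 example is my own illustration):

```latex
% Classical (non-split-plot) CCD: rotatability fixes the axial distance at
% the fourth root of the number of factorial runs F.
\[
  \alpha_{\mathrm{rot}} = F^{1/4},
  \qquad\text{e.g. } k = 3,\ F = 2^{3} = 8
  \;\Rightarrow\; \alpha_{\mathrm{rot}} = 8^{1/4} \approx 1.682 .
\]
```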
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
31

Parker, Peter A. "Response Surface Design and Analysis in the Presence of Restricted Randomization." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/26555.

Full text
Abstract:
Practical restrictions on randomization are commonplace in industrial experiments due to the presence of hard-to-change or costly-to-change factors. Employing a split-plot design structure minimizes the number of required experimental settings for the hard-to-change factors. In this research, we propose classes of equivalent estimation second-order response surface split-plot designs for which the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. Designs that possess the equivalence property enjoy the advantages of best linear unbiased estimates and design selection that is robust to model misspecification and independent of the variance components. We present a generalized proof of the equivalence conditions that enables the development of several systematic design construction strategies and provides the ability to verify numerically that a design provides equivalent estimates, resulting in a broad catalog of designs. We explore the construction of balanced and unbalanced split-plot versions of the central composite and Box-Behnken designs. In addition, we illustrate the utility of numerical verification in generating D-optimal and minimal point designs, including split-plot versions of the Notz, Hoke, Box and Draper, and hybrid designs. Finally, we consider the practical implications of analyzing a near-equivalent design when a suitable equivalent design is not available. By simulation, we compare methods of estimation to provide a practitioner with guidance on analysis alternatives when a best linear unbiased estimator is not available. Our goal throughout this research is to develop practical experimentation strategies for restricted randomization that are consistent with the philosophy of traditional response surface methodology.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Chang, Sin-ting Cynthia, and 張倩婷. "Randomization of recrystallization textures in an experimental Al-5%Mgalloy and AA6111." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B36375561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

MARTELOTTE, MARCELA COHEN. "USING LINEAR MIXED MODELS ON DATA FROM EXPERIMENTS WITH RESTRICTION IN RANDOMIZATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16422@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
This dissertation presents an application of linear mixed models to data from an experiment with restrictions on randomization. The experiment used in this study aimed to identify which control factors in the cold-rolling process most affected the thickness of the material used to make cans for carbonated beverages. From the experiment, data were obtained to model the mean and the variance of the thickness of the material. The goal of the modeling was to identify which factors were significant for the thickness to reach the desired value (0.248 mm). Furthermore, it was necessary to identify which combination of levels of these factors produced the minimum variance in the thickness of the material. There were replications of this experiment, but they were not performed in random order; in addition, the factor levels were not reset between runs. Because of these restrictions, mixed models were used to model the mean and the variance of the thickness. The models showed a good fit to the data, indicating that mixed models are a suitable choice when there are restrictions on randomization.
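As a hedged sketch of this kind of analysis, the Python snippet below fits a linear mixed model with a random intercept for the replicate whose factor levels were not reset, using the statsmodels formula interface. The data frame, column names, factor codings, and all numeric values are invented for illustration and do not come from the dissertation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a 2x2 factorial run in 4 replicates whose factor levels
# were not fully reset, so runs within a replicate share a random effect.
rng = np.random.default_rng(0)
design = pd.DataFrame(
    [(rep, s, t) for rep in range(1, 5) for s in (-1, 1) for t in (-1, 1)],
    columns=["replicate", "speed", "tension"],
)
rep_effect = rng.normal(0, 0.002, 4)[design["replicate"] - 1]
design["thickness"] = (0.248 + 0.001 * design["speed"]
                       - 0.0005 * design["tension"]
                       + rep_effect + rng.normal(0, 0.001, len(design)))

# A random intercept per replicate accounts for the restricted randomization.
model = smf.mixedlm("thickness ~ speed + tension", design,
                    groups=design["replicate"])
print(model.fit().summary())
```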
APA, Harvard, Vancouver, ISO, and other styles
34

Wolford, Katherine Anne. "Effects of item randomization and applicant instructions on distortion on personality measures." Bowling Green, Ohio : Bowling Green State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=bgsu1245555713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chang, Sin-ting Cynthia. "Randomization of recrystallization textures in an experimental Al-5%Mg alloy and AA6111." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B36375561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Thålin, Felix. "A Random Bored : How randomization in cooperative board games create replayability and tension." Thesis, Uppsala universitet, Institutionen för speldesign, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-258207.

Full text
Abstract:
This paper examines five cooperative board games from the perspective of how randomization is used and how it affects replayability and player strategy, with the intent to identify and categorize elements that contribute to replayability and tension and use randomization to do so. Each element that is directly affected by, or causes, randomization is identified, explained (what it does, how it works, and what it affects), and categorized based on where in the game the randomization originates, in an effort to give game designers a better understanding of randomization, whether and how they can use it, and which methods of using it can be useful for their own designs. The thesis discusses the impact of using certain randomization elements and draws some conclusions based on how they relate to the replayability and tension of games that use those elements.
APA, Harvard, Vancouver, ISO, and other styles
37

Jiang, Bo, and 姜博. "Effective and efficient regression testing and fault localization through diversification, prioritization, and randomization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46541214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Smoot, Melissa C. "An analysis of noise reduction in variable reluctance motors using pulse position randomization." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/25675.

Full text
Abstract:
The design and implementation of a control system to introduce randomization into the control of a variable reluctance motor (VRM) is presented. The goal is to reduce the noise generated by radial vibrations of the stator. Motor phase commutation angles are dithered by 1 or 2 mechanical degrees to investigate the effect of randomization on acoustic noise. VRM commutation points are varied using a uniform probability density function and a 4-state Markov chain, among other methods. The theory of VRM and inverter operation is developed, along with a derivation of the major source of acoustic noise. The experimental results show the effects of randomization: uniform dithering and Markov-chain dithering both tend to spread the noise spectrum, reducing peak noise components. No clear evidence is found to determine which is the optimum randomization scheme, and the benefit of commutation angle randomization in reducing VRM loudness as perceived by humans is found to be questionable.
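As a hedged illustration of uniform dithering of commutation points, the Python sketch below perturbs nominal turn-on and turn-off angles by up to ±1 mechanical degree on each stroke. The nominal angles and the dither range are placeholders; the thesis's actual controller, and its Markov-chain scheme, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)
nominal_on, nominal_off = 30.0, 55.0   # nominal commutation angles (degrees)
dither = 1.0                           # +/- 1 mechanical degree

def dithered_angles(n_strokes):
    """Uniformly dither the on/off commutation angles for each stroke."""
    on = nominal_on + rng.uniform(-dither, dither, n_strokes)
    off = nominal_off + rng.uniform(-dither, dither, n_strokes)
    return on, off

on, off = dithered_angles(5)
print(np.column_stack([on, off]))   # one (on, off) pair per stroke
```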
APA, Harvard, Vancouver, ISO, and other styles
39

Smoot, Melissa C. (Melissa Cannon). "An analysis of noise reduction in variable reluctance motors using pulse position randomization." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36501.

Full text
Abstract:
Thesis (Nav. E.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1994, and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1994.
Includes bibliographical references (leaves 80-82).
by Melissa C. Smoot.
M.S.
Nav.E.
APA, Harvard, Vancouver, ISO, and other styles
40

Eskandari, Aram, and Benjamin Tellström. "Analysis of the Performance Impact of Black-box Randomization for 7 Sorting Algorithms." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231089.

Full text
Abstract:
Can black-box randomization change the performance of algorithms? The problem of worst-case behaviour in algorithms is difficult to handle, and black-box randomization is one method that has not been rigorously tested. If it could be used to mitigate worst-case behaviour for our chosen algorithms, black-box randomization should be seriously considered for active use in more algorithms. We have found variables that can be put through a black-box randomizer while the algorithm still gives correct output. These variables were perturbed, and a qualitative manual analysis was carried out to observe the performance impact under black-box randomization. This analysis was done for 7 different sorting algorithms using Java openJDK 8. Our results show signs of improvement after black-box randomization; however, our experiments showed clear uncertainty when conducting time measurements for sorting algorithms.
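The thesis used Java openJDK 8; purely as a hedged stand-in, the Python sketch below shows the general shape of such an experiment: time a sort on an input, then on the same input after a small random perturbation. The perturbation scheme (swapping a small fraction of elements) and all sizes are illustrative assumptions, not the thesis's black-box randomizer.

```python
import random
import time

def time_sort(data):
    """Return wall-clock time (seconds) to sort a copy of `data`."""
    copy = list(data)
    start = time.perf_counter()
    copy.sort()
    return time.perf_counter() - start

def perturb(data, fraction=0.01, seed=None):
    """Randomly swap a small fraction of elements (a crude perturbation)."""
    rng = random.Random(seed)
    copy = list(data)
    for _ in range(int(len(copy) * fraction)):
        i, j = rng.randrange(len(copy)), rng.randrange(len(copy))
        copy[i], copy[j] = copy[j], copy[i]
    return copy

baseline_input = list(range(100_000, 0, -1))   # reverse-sorted stress input
print("original :", time_sort(baseline_input))
print("perturbed:", time_sort(perturb(baseline_input, seed=1)))
```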
APA, Harvard, Vancouver, ISO, and other styles
41

Huang, Zhengli. "Privacy and utility analysis of the randomization approach in Privacy-Preserving Data Publishing." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

González-Martin, Sergio. "Applications of Biased Randomization and Simheuristic Algorithms to Arc Routing and Facility Location Problems." Doctoral thesis, Universitat Oberta de Catalunya, 2015. http://hdl.handle.net/10803/306605.

Full text
Abstract:
Most metaheuristics contain a randomness component, which is usually based on uniform randomization, i.e., the use of the uniform probability distribution to make random choices. The Multi-start biased Randomization of classical Heuristics with Adaptive local search (MIRHA) framework, however, proposes the use of biased (non-uniform) randomization for the design of alternative metaheuristics, i.e., the use of skewed probability distributions such as the geometric or triangular ones. In some scenarios, this non-uniform randomization has been shown to provide faster convergence to near-optimal solutions. The MIRHA framework also includes a local search step for improving the incumbent solutions generated during the multi-start process. It also allows the addition of tailored local search components, such as cache (memory) or splitting (divide-and-conquer) techniques, which enable the generation of competitive (near-optimal) solutions. The algorithms designed using the MIRHA framework obtain high-quality solutions to realistic problems in reasonable computing times. Moreover, they tend to use a reduced number of parameters, which makes them simple to implement and configure in most practical applications. This framework has been successfully applied to many routing and scheduling problems. One of the main goals of this thesis is to develop new algorithms, based on the aforementioned framework, for solving combinatorial optimization problems that can be of interest to the telecommunication industry.
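As a hedged sketch of the core idea (not of the thesis's specific algorithms), the Python snippet below selects from a cost-sorted candidate list using a truncated geometric distribution, so better-ranked candidates are chosen more often than under uniform randomization. The parameter beta and the toy candidate list are illustrative.

```python
import math
import random

def geometric_index(n, beta, rng):
    """Pick an index in [0, n) with a (truncated) geometric bias toward 0."""
    idx = int(math.log(1.0 - rng.random()) / math.log(1.0 - beta))
    return min(idx, n - 1)

def biased_randomized_choice(candidates, cost, beta=0.3, rng=None):
    """Choose a candidate: sort by cost, then sample with geometric bias.

    Larger beta behaves more like the greedy heuristic; smaller beta
    behaves more like uniform randomization.
    """
    rng = rng or random.Random()
    ranked = sorted(candidates, key=cost)
    return ranked[geometric_index(len(ranked), beta, rng)]

rng = random.Random(5)
cands = [("a", 3.2), ("b", 1.1), ("c", 2.4), ("d", 5.0)]
pick = biased_randomized_choice(cands, cost=lambda c: c[1], beta=0.3, rng=rng)
print(pick)
```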
APA, Harvard, Vancouver, ISO, and other styles
43

Shepherd, Bryan E. "Causal inference in HIV vaccine trials : comparing outcomes in a subset chosen after randomization /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/9608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Burgess, Stephen. "Statistical issues in Mendelian randomization : use of genetic instrumental variables for assessing causal associations." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/242184.

Full text
Abstract:
Mendelian randomization is an epidemiological method for using genetic variation to estimate the causal effect of the change in a modifiable phenotype on an outcome from observational data. A genetic variant satisfying the assumptions of an instrumental variable for the phenotype of interest can be used to divide a population into subgroups which differ systematically only in the phenotype. This gives a causal estimate which is asymptotically free of bias from confounding and reverse causation. However, the variance of the causal estimate is large compared to traditional regression methods, requiring large amounts of data and necessitating methods for efficient data synthesis. Additionally, if the association between the genetic variant and the phenotype is not strong, then the causal estimates will be biased due to the "weak instrument" in finite samples in the direction of the observational association. This bias may convince a researcher that an observed association is causal. If the causal parameter estimated is an odds ratio, then the parameter of association will differ depending on whether viewed as a population-averaged causal effect or a personal causal effect conditional on covariates. We introduce a Bayesian framework for instrumental variable analysis, which is less susceptible to weak instrument bias than traditional two-stage methods, has correct coverage with weak instruments, and is able to efficiently combine gene–phenotype–outcome data from multiple heterogeneous sources. Methods for imputing missing genetic data are developed, allowing multiple genetic variants to be used without reduction in sample size. We focus on the question of a binary outcome, illustrating how the collapsing of the odds ratio over heterogeneous strata in the population means that the two-stage and the Bayesian methods estimate a population-averaged marginal causal effect similar to that estimated by a randomized trial, but which typically differs from the conditional effect estimated by standard regression methods. We show how these methods can be adjusted to give an estimate closer to the conditional effect. We apply the methods and techniques discussed to data on the causal effect of C-reactive protein on fibrinogen and coronary heart disease, concluding with an overall estimate of causal association based on the totality of available data from 42 studies.
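As a hedged reminder of the basic instrumental-variable machinery underlying Mendelian randomization (not the Bayesian framework developed in the thesis), the Python sketch below simulates a confounded exposure-outcome pair with a single genetic instrument and compares the ratio (Wald) estimator, its two-stage least squares equivalent, and the naive regression. All coefficients and sample sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 50_000

g = rng.binomial(2, 0.3, n)                 # genetic instrument (0/1/2 alleles)
u = rng.normal(0, 1, n)                     # unobserved confounder
x = 0.3 * g + u + rng.normal(0, 1, n)       # phenotype (exposure)
y = 0.5 * x + u + rng.normal(0, 1, n)       # outcome; true causal effect = 0.5

# Ratio (Wald) estimator: gene-outcome slope over gene-phenotype slope.
beta_gx = np.polyfit(g, x, 1)[0]
beta_gy = np.polyfit(g, y, 1)[0]
print("ratio estimate:", beta_gy / beta_gx)

# Two-stage least squares: regress x on g, then y on the fitted values.
x_hat = np.polyval(np.polyfit(g, x, 1), g)
print("2SLS estimate :", np.polyfit(x_hat, y, 1)[0])

# Naive regression of y on x is biased away from 0.5 by the confounder u.
print("naive estimate:", np.polyfit(x, y, 1)[0])
```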
APA, Harvard, Vancouver, ISO, and other styles
45

Pfrommer, Timo [Verfasser], and Jürgen [Akademischer Betreuer] Dippon. "Randomization and companion algorithms in stochastic approximation with semimartingales / Timo Pfrommer ; Betreuer: Jürgen Dippon." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2018. http://d-nb.info/1162497289/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Matheja, Christoph [Verfasser], Joost-Pieter [Akademischer Betreuer] Katoen, and Radu [Akademischer Betreuer] Iosif. "Automated reasoning and randomization in separation logic / Christoph Matheja ; Joost-Pieter Katoen, Radu Iosif." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1216175748/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

OLMASTRONI, ELENA. "USE OF MENDELIAN RANDOMIZATION STUDIES TO IDENTIFY POSSIBLE PHARMACOLOGICAL TARGETS IN THE CARDIOVASCULAR AREA." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/915802.

Full text
Abstract:
Different statistical approaches have been implemented to overcome the limitations that typically, and differently, affect randomized clinical trials and observational studies. Mendelian randomization studies, in which functional genetic variants serve as tools ("instrumental variables") to approximate modifiable environmental exposures, have been developed and implemented in the context of observational epidemiological studies to strengthen causal inference in non-experimental situations. Since genetic variants are randomly transferred from parents to offspring at the time of gamete formation, they can realistically mimic the random allocation of treatment in a randomized clinical trial, offering a strategy to eliminate, or at least reduce, the residual confounding that typically affects observational studies and thus allowing generalizable results for the entire population. If correctly conducted and carefully interpreted, Mendelian randomization studies can provide useful scientific evidence to support or reject causal hypotheses on the association between modifiable exposures and diseases. This kind of evidence may help identify new potential drug targets, with a higher probability of success than approaches based on animal models or in vitro studies. This thesis summarizes the history and context of Mendelian randomization, the main features of the study design, the assumptions required for its correct use, and the advantages and disadvantages of the approach. In addition, an overview of what the Mendelian randomization technique has contributed to date in the cardiovascular field is presented. The methods and techniques discussed have also been applied in several studies conducted thanks to a collaboration established with Professor Brian A. Ference from the Cardiovascular Epidemiology Unit of the Department of Public Health and Primary Care (University of Cambridge). This agreement has given access to the UK Biobank, a prospective cohort study with deep genetic, physical, and health data collected on about 500,000 volunteer participants recruited throughout the UK. Access to this large-scale biomedical database has been fundamental to carrying out the projects presented in this thesis, which have provided key evidence to improve our knowledge of cardiovascular disease. First, we found that an increase in measured body mass index is a much stronger risk factor for type 2 diabetes than polygenic predisposition, leading to reversible metabolic changes that do not accumulate over time. Therefore, most cases of diabetes can potentially be prevented or reversed, leading to a major reduction in the prevalence of one of the most impactful risk factors for the development of cardiovascular disease. Second, we found that parental family history of coronary heart disease provides information that is independent of, complementary to, and additive to individual polygenic predisposition in defining inherited genetic variation, as well as to LDL cholesterol exposure in estimating lifetime cardiovascular risk.
To develop a simple but powerful algorithm for defining who will need to be treated, it is essential to collect information on parental family history of heart disease and individual polygenic predisposition to coronary artery disease, in addition to measuring all the other well-known cardiovascular risk factors, especially LDL cholesterol levels. Finally, we obtained three important pieces of evidence regarding lipoprotein(a), an independent risk factor for the development of coronary and cerebral atherosclerosis: (i) the cumulative lifetime risk of major coronary events is comparable for genetically and clinically determined Lp(a) concentrations, meaning that, in terms of cardiovascular risk prediction, it is reasonable to rely on measured levels regardless of genotype; (ii) there is no significant association between high Lp(a) concentrations and the occurrence of venous thromboembolism events; (iii) an extra reduction in LDL cholesterol can overcome the extra cardiovascular risk due to high Lp(a) levels, and we quantitatively defined the additional LDL cholesterol reduction needed to abolish this risk. At the end of the dissertation, the potential use of Mendelian randomization to inform the design of randomized controlled trials is also presented, as well as the possibility of using this approach to anticipate trial results in terms of predicting treatment efficacy and adverse effects, and to inform the potential repurposing of drugs.
APA, Harvard, Vancouver, ISO, and other styles
48

Assimes, Themistocles L., and Robert Roberts. "Genetics: Implications for Prevention and Management of Coronary Artery Disease." ELSEVIER SCIENCE INC, 2016. http://hdl.handle.net/10150/623131.

Full text
Abstract:
An exciting new era has dawned for the prevention and management of coronary artery disease (CAD) utilizing genetic risk variants. The recent identification of over 60 susceptibility loci for CAD confirms not only the importance of established risk factors, but also the existence of many novel causal pathways that are expected to improve our understanding of the genetic basis of CAD and facilitate the development of new therapeutic agents over time. Concurrently, Mendelian randomization studies have provided intriguing insights into the causal relationships between CAD-related traits, and highlight the potential benefits of long-term modification of risk factors. Lastly, genetic risk scores for CAD may serve not only as prognostic but also as predictive markers, and carry the potential to considerably improve the delivery of established prevention strategies. This review summarizes the evolution and discovery of genetic risk variants for CAD and their current and future clinical applications.
APA, Harvard, Vancouver, ISO, and other styles
49

Carper, Benjamin Alan. "Assessing Multivariate Heritability through Nonparametric Methods." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2565.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Levin, Joel R., John M. Ferron, and Boris S. Gafurov. "Additional comparisons of randomization-test procedures for single-case multiple-baseline designs: Alternative effect types." PERGAMON-ELSEVIER SCIENCE LTD, 2017. http://hdl.handle.net/10150/625957.

Full text
Abstract:
A number of randomization statistical procedures have been developed to analyze the results from single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of the various procedures revealed distinct differences among them in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the powers of all of the procedures were severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we additionally demonstrated that if researchers are able to successfully anticipate the specific alternative effect types, it is possible for them to formulate adjusted versions of the original randomization-test procedures that can recapture substantial proportions of the lost powers.
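As a hedged sketch of a randomized-intervention-start-point test for a single case (not the specific multiple-baseline procedures compared in the article), the Python snippet below computes a phase-mean difference at the actual start point and compares it with its value at every other admissible start point. The toy series and the admissible range are invented for illustration.

```python
import numpy as np

def start_point_randomization_test(y, actual_start, admissible_starts):
    """One-case randomization test with a randomized intervention start point.

    y                : observed series (baseline phase then intervention phase)
    actual_start     : index at which the intervention actually began
    admissible_starts: start points that were equally likely a priori
    Returns the one-sided p-value for an increase after intervention.
    """
    y = np.asarray(y, dtype=float)

    def effect(start):
        return y[start:].mean() - y[:start].mean()

    observed = effect(actual_start)
    dist = np.array([effect(s) for s in admissible_starts])
    return np.mean(dist >= observed)

# Toy series with an immediate abrupt shift at observation 10.
series = [3, 4, 3, 5, 4, 3, 4, 5, 4, 3, 8, 9, 8, 9, 10, 9, 8, 9]
print(start_point_randomization_test(series, 10, admissible_starts=range(5, 15)))
```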
APA, Harvard, Vancouver, ISO, and other styles