
Dissertations / Theses on the topic 'Physics problems'



Consult the top 50 dissertations / theses for your research on the topic 'Physics problems.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Coleman, Elaine B. "Problem-solving differences between high and average performers on physics problems." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63961.

2

Bürger, Steven. "Inverse Autoconvolution Problems with an Application in Laser Physics." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-211850.

Abstract:
Convolution and, as a special case, autoconvolution of functions are important in many branches of mathematics and have found many applications in physics, statistics, image processing, and elsewhere. While it is a relatively easy task to determine the autoconvolution of a function (at least from the numerical point of view), the inverse problem, which consists of reconstructing a function from its autoconvolution, is ill-posed. Hence such an inverse autoconvolution problem cannot be solved by a simple algebraic operation. Instead, the problem has to be regularized, which means that it is replaced by a well-posed problem that is close to the original one in a certain sense. The outline of this thesis is as follows: In the first chapter we give an introduction to the type of inverse problems we consider, including some basic definitions and some important examples of regularization methods for these problems. At the end of the introduction we briefly present some general results on the convergence theory of Tikhonov regularization. The second chapter is concerned with the autoconvolution of square-integrable functions defined on the interval [0, 1]. This leads us to the classical autoconvolution problems, where the term “classical” means that no kernel function is involved in the autoconvolution operator. For the data situation we distinguish two cases, namely data on [0, 1] and data on [0, 2]. We present some well-known properties of the classical autoconvolution operators. Moreover, we investigate nonlinearity conditions, which are required to show the applicability of certain regularization approaches or which lead to convergence rates for Tikhonov regularization. For the inverse autoconvolution problem with data on the interval [0, 1], we show that a convergence rate cannot be established using the standard convergence-rate theory.
If the data are given on the interval [0, 2], we can show a convergence rate for Tikhonov regularization provided the exact solution satisfies a sparsity assumption. After these theoretical investigations we present various approaches for solving inverse autoconvolution problems. Here we focus on a discretized Lavrentiev regularization approach, for which a convergence rate can also be shown. Finally, we present numerical examples for the regularization methods discussed. In the third chapter we describe a physical measurement technique, the so-called SD-Spider, which leads to an inverse problem of autoconvolution type. The SD-Spider method is an approach to measuring ultrashort laser pulses (laser pulses with durations in the femtosecond range). To this end, we first present some very basic concepts of nonlinear optics and then describe the method in detail. We then show how this approach, starting from the wave equation, leads to a kernel-based equation of autoconvolution type. The aim of chapter four is to investigate the equation, and the corresponding problem, derived in chapter three. As a generalization of the classical autoconvolution, we define the kernel-based autoconvolution operator and show that many properties of the classical autoconvolution operator carry over to this new situation. Moreover, we consider inverse problems with the kernel-based autoconvolution operator, which reflect the data situation of the physical problem. It turns out that these inverse problems may be locally well-posed if all possible data are taken into account, and locally ill-posed if one particular part of the data is not available. Finally, we introduce reconstruction approaches for solving these inverse problems numerically and test them on real and artificial data.
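As a small numerical illustration of the ill-posedness discussed in this abstract (a toy sketch with our own rectangle-rule discretization, not code from the thesis), the autoconvolution on [0, 1] can be inverted exactly from clean data by forward substitution, yet noise of size eps in the data produces reconstruction errors of order eps/h:

```python
import numpy as np

def autoconv(x, h):
    """Discrete autoconvolution y_i = h * sum_{j<=i} x_j * x_{i-j}, data on [0, 1] only."""
    n = len(x)
    return np.convolve(x, x)[:n] * h

def naive_invert(y, h):
    """Layer-stripping inversion of y = autoconv(x, h), assuming x(0) > 0."""
    n = len(y)
    x = np.zeros(n)
    x[0] = np.sqrt(y[0] / h)
    for i in range(1, n):
        s = np.dot(x[1:i], x[i - 1:0:-1])  # sum_{j=1}^{i-1} x_j x_{i-j}
        x[i] = (y[i] / h - s) / (2.0 * x[0])
    return x

h = 1.0 / 256
t = np.arange(256) * h
x_true = 1.0 + t                      # a smooth positive test function
y = autoconv(x_true, h)

x_clean = naive_invert(y, h)          # essentially exact for clean data
rng = np.random.default_rng(0)
x_noisy = naive_invert(y + 5e-4 * rng.standard_normal(len(y)), h)  # noise amplified

err_clean = np.max(np.abs(x_clean - x_true))
err_noisy = np.max(np.abs(x_noisy - x_true))
print(err_clean, err_noisy)
```

The amplification of the data noise by roughly a factor 1/h is precisely why a regularization method such as Tikhonov or Lavrentiev has to replace the naive algebraic inversion.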
3

Kitic, Srdan. "Cosparse regularization of physics-driven inverse problems." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S152/document.

Abstract:
Inverse problems related to physical processes are of great importance in practically every field related to signal processing, such as tomography, acoustics, wireless communications, and medical and radar imaging, to name only a few. At the same time, many of these problems are quite challenging due to their ill-posed nature. On the other hand, signals originating from physical phenomena are often governed by laws expressible through linear partial differential equations (PDEs) or, equivalently, through integral equations and the associated Green's functions. In addition, these phenomena are usually induced by sparse singularities, appearing as sources or sinks of a vector field. In this thesis we primarily investigate the coupling of such physical laws with a prior assumption on the sparse origin of a physical process. This gives rise to a “dual” regularization concept, formulated either as sparse analysis (cosparse) regularization, yielded by a PDE representation, or as the equivalent sparse synthesis regularization, if the Green's functions are used instead. We devote a significant part of the thesis to the comparison of these two approaches. We argue that, despite nominal equivalence, their computational properties are very different. Indeed, due to the inherited sparsity of the discretized PDE (embodied in the analysis operator), the analysis approach scales much more favorably than the equivalent problem regularized by the synthesis approach. Our findings are demonstrated on two applications: acoustic source localization and epileptic source localization in electroencephalography. In both cases, we verify that the cosparse approach exhibits superior scalability, even allowing for full (time-domain) wavefield interpolation in three spatial dimensions. Moreover, in the acoustic setting, the analysis-based optimization benefits from an increased amount of observation data, resulting in processing times that are orders of magnitude faster than with the synthesis approach. Numerical simulations show that the developed methods in both applications are competitive with state-of-the-art localization algorithms in their corresponding areas. Finally, we present two sparse analysis methods for blind estimation of the speed of sound and acoustic impedance, simultaneously with wavefield interpolation. This is an important step toward practical implementation, where most physical parameters are unknown beforehand. The versatility of the approach is demonstrated on the “hearing behind walls” scenario, in which traditional localization methods necessarily fail. Additionally, by means of a novel algorithmic framework, we address the audio declipping problem regularized by sparsity or cosparsity. Our method is highly competitive against the state of the art and, in the cosparse setting, allows for an efficient (even real-time) implementation.
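The computational asymmetry claimed in this abstract — a sparse discretized PDE (analysis) operator versus a dense Green's-function (synthesis) dictionary — is easy to see on a toy example (our own illustration with a 1-D Dirichlet Laplacian; none of this is code from the thesis):

```python
import numpy as np

n = 200
A = np.zeros((n, n))               # discretized 1-D Laplacian: the analysis operator
for i in range(n):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0

G = np.linalg.inv(A)               # discrete Green's function: the synthesis dictionary

nnz_A = np.count_nonzero(A)        # tridiagonal: 3n - 2 nonzeros
nnz_G = int(np.sum(np.abs(G) > 1e-6))  # fully dense: n^2 nonzeros
print(nnz_A, nnz_G)
```

The analysis operator has O(n) nonzeros while its inverse is full, which is the root of the scalability gap between the cosparse and synthesis formulations.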
4

Sverin, Tomas. "Open-ended problems in physics : Upper secondary technical program students’ ways of approaching outdoor physics problems." Thesis, Umeå universitet, Institutionen för naturvetenskapernas och matematikens didaktik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-52486.

Abstract:
This study reports on technical program students' approaches to solving open-ended problems during an introductory physics course in a Swedish upper secondary school. The study used case study methodology to investigate students' activities in an outdoor context. The findings come from observations and audio recordings of students solving three different open-ended problems. The results showed that the students had difficulties formulating 'solvable' problems and performing the necessary 'at home' preparations needed to solve them. Furthermore, students preferred to use a single solution method even when several solution methods were possible. This behavior can be attributed to their previous experience of solving practical problems in physics education. The results also indicated a need for different levels of guidance to support the students in their problem-solving process. A tentative conclusion is that open-ended problems have educational potential for developing students' understanding of scientific inquiry and of problem-solving strategies in the process of performing practical outdoor activities.
5

Dou, Lixin. "Procedures for basic inverse problems: Black body radiation problem and phonon density of states problem." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7544.

Abstract:
Two numerical procedures, the regularization method and the maximum entropy method, have been investigated and developed to solve some basic inverse problems in theoretical physics. Both are applied to the inverse black body radiation problem and the inverse phonon density of states problem. The inverse black body radiation problem is concerned with determining the area temperature distribution of a black body source from spectral measurements of its radiation. The phonon density of states problem is defined as determining the phonon density of states function from the measured lattice specific heat at constant volume. These problems are ill-posed and can be expressed as Fredholm integral equations of the first kind. Both the regularization method and the maximum entropy method prove successful in solving the two ill-posed problems. In general, the two procedures can be applied to any inverse problem belonging to the class of Fredholm integral equations of the first kind.
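The instability of first-kind Fredholm equations, and the stabilizing effect of regularization, can be sketched in a few lines (a toy problem of our own construction — the Gaussian kernel, noise level, and alpha are illustrative assumptions, not values from the thesis):

```python
import numpy as np

# Discretized Fredholm equation of the first kind, g = K f, with a smoothing kernel.
n = 100
s = np.linspace(0.0, 1.0, n)
K = np.exp(-30.0 * (s[:, None] - s[None, :]) ** 2) / n   # severely ill-conditioned
f_true = np.sin(np.pi * s)
g = K @ f_true

rng = np.random.default_rng(0)
g_noisy = g + 1e-4 * rng.standard_normal(n)              # tiny measurement noise

# Unregularized least-squares solution: noise is hugely amplified.
f_naive = np.linalg.lstsq(K, g_noisy, rcond=None)[0]

# Tikhonov regularization: minimize ||K f - g||^2 + alpha ||f||^2.
alpha = 1e-6
f_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_noisy)

err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
err_tik = np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true)
print(err_naive, err_tik)
```

The naive solution amplifies the data noise by the reciprocal of the smallest retained singular value, while the Tikhonov solution stays close to f_true; the maximum entropy method plays an analogous stabilizing role.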
6

Zdeborová, Lenka. "Statistical physics of hard optimization problems." Paris 11, 2008. http://www.theses.fr/2008PA112080.

Abstract:
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult: it is believed that, in the worst cases, the number of operations required to minimize the cost function grows exponentially with the system size. However, even for an NP-complete problem, the instances arising in practice may in fact be easy to solve. The principal question we address in this thesis is: how do we recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems: random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we call "locked" constraint satisfaction problems: their statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than canonical satisfiability.
7

Sanzeni, A. "THEORETICAL PHYSICS MODELING OF NEUROLOGICAL PROBLEMS." Doctoral thesis, Università degli Studi di Milano, 2016. http://hdl.handle.net/2434/390620.

Abstract:
In this thesis we approach different problems in neurobiology using methods from theoretical physics. The first topic we studied is the mechanoelectrical transduction at the basis of touch sensation, i.e. the process by which a mechanical signal conveyed during touch is transformed into an electric signal. We investigate how the neural response is generated in C. elegans and propose a channel-gating mechanism to explain the activation of touch receptor neurons by mechanical stimuli. The second part of the thesis relates to our ability to orient ourselves and navigate in space. The neural system underlying this ability has been extensively characterized in rats, where the activity of different types of neurons has been found to be correlated with the spatial position of the animal. Grid cells in the rat entorhinal cortex are part of this “neural map” of space; they form regular triangular lattices whose geometrical properties have a modular distribution among the population of neurons. We show that some of the features observed in the system may be explained by assuming that grid cells provide an efficient representation of space. We predict a scaling law connecting the number of neurons within a module and the spatial period of the associated grids. The last problem discussed in this thesis concerns the neurodegenerative Parkinson's disease. Limb tremor caused by the disease is currently treated by administering drugs and by fixed-frequency deep brain stimulation. The latter interferes directly with the brain dynamics by delivering electrical impulses to neurons in the subthalamic nucleus. We develop a theory to describe the onset of the anomalous oscillations in neural activity that are at the origin of the characteristic tremor. We propose a new feedback-controlled stimulation procedure and show that it could outperform the standard protocol.
8

Rawal, Sonal. "Application of statistical physics to social problems." Thesis, Brunel University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422411.

9

Johnston, John C. "Bayesian analysis of inverse problems in physics." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337737.

10

Muddle, Richard Louden. "Parallel block preconditioning for multi-physics problems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/parallel-block-preconditioning-for-multiphysics-problems(2efc63e4-f426-4be9-b48a-4016365e08b8).html.

Abstract:
In this thesis we study efficient parallel iterative solution algorithms for multi-physics problems. In particular, we consider fluid-structure interaction (FSI) problems, a type of multi-physics problem in which a fluid and a deformable solid interact. All computations were performed in Oomph-Lib, a finite element library for the simulation of multi-physics problems. In Oomph-Lib, the constituent problems in a multi-physics problem are coupled monolithically, and the resulting system of non-linear equations is solved with Newton's method. This requires the solution of sequences of large, sparse linear systems, for which optimal solvers are essential. The linear systems arising from the monolithic discretisation of multi-physics problems are natural candidates for solution with block-preconditioned Krylov subspace methods. We developed a generic framework for the implementation of block preconditioners within Oomph-Lib, parallelised to facilitate the efficient solution of very large problems. This framework enables the reuse of all of Oomph-Lib's existing linear algebra infrastructure and preconditioners (including block preconditioners). We demonstrate that a wide range of block preconditioners can be seamlessly implemented in this framework, leading to optimal iterative solvers with good parallel scaling. We concentrate on the development of an effective preconditioner for an FSI problem formulated in an arbitrary Lagrangian-Eulerian (ALE) framework with pseudo-solid node updates (for the deforming fluid mesh). We begin by considering the pseudo-solid subsidiary problem: the deformation of a solid governed by the equations of large-displacement elasticity, subject to a prescribed boundary displacement imposed with Lagrange multipliers.
We present a robust, optimal, augmented-Lagrangian-type preconditioner for the resulting saddle-point linear system and prove analytically tight bounds for the spectrum of the preconditioned operator with respect to the discrete problem size. This pseudo-solid preconditioner is incorporated into a block preconditioner for the full FSI problem. One key feature of the FSI preconditioner is that existing optimal single-physics preconditioners (such as the well-known Navier-Stokes Least Squares Commutator preconditioner) can be employed to approximately solve the linear systems associated with the constituent sub-problems. We evaluate its performance on selected 2D and 3D problems. The preconditioner is optimal for most problems considered. In cases where sub-optimality is detected, we explain the reasons for such behaviour and suggest potential improvements.
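A classical result behind this kind of block preconditioning for saddle-point systems (due to Murphy, Golub and Wathen; demonstrated here on a random toy system of our own, not on the thesis' FSI matrices) is that the block-diagonal preconditioner built from the (1,1) block and the exact Schur complement clusters the spectrum of the preconditioned operator onto exactly three values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))    # full-rank constraint block

# Saddle-point matrix K and block-diagonal preconditioner P = diag(A, S).
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)    # Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

# Eigenvalues of P^{-1} K: exactly 1, (1 + sqrt(5))/2 and (1 - sqrt(5))/2.
eig = np.linalg.eigvals(np.linalg.solve(P, K))
vals = np.unique(np.round(eig.real, 6))
print(vals)
```

In practice the exact Schur complement is replaced by a cheap approximation — which is what the single-physics preconditioners mentioned above provide — and the spectrum then clusters near these three values rather than hitting them exactly.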
11

Jaksic, Vojkan. "Solutions to some problems in mathematical physics." Diss., Pasadena, Calif. : California Institute of Technology, 1992. http://resolver.caltech.edu/CaltechETD:etd-09122005-162352.

12

Bouchard, Josée. "Physics students' approaches to learning and cognitive processes in solving physics problems." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100325.

Abstract:
This study examined traditional instruction and problem-based learning (PBL) approaches to teaching and the extent to which they foster the development of desirable cognitive processes, including metacognition, critical thinking, physical intuition, and problem solving among undergraduate physics students. The study also examined students' approaches to learning and their perceived role as physics students. The research took place in the context of advanced courses of electromagnetism at a Canadian research university. The cognitive science, expertise, physics and science education, instructional psychology, and discourse processes literature provided the framework and background to conceptualize and structure this study. A within-stage mixed-model design was used, and a number of instruments, including a survey, observation grids, and problem sets, were developed specifically for this study. A special one-week-long problem-based learning (PBL) intervention was also designed. Interviews with the instructors participating in the study provided complementary data.

Findings include evidence that students in general engage in metacognitive processes in the organization of their personal study time. However, this potential, including the development of other cognitive processes, might not be stimulated as much as it could be in the traditional lecture instructional context. The PBL approach was deemed more empowering for the students. An unexpected finding came from the realisation that a simple exposure to a structured problem-solving exercise (pre-test) was sufficient to produce superior planning and solving strategies on a second exposure (post-test), even for students who had not been exposed to any special treatment. Maturation was ruled out as a potential threat to the validity of this finding.
Another promising finding appears to be that the problem-based learning (PBL) intervention tends to foster the development of cognitive competencies, particularly physical intuition, even if it was only implemented for a short period of time. Other findings relate to the nature of the cognitive actions and activities that the students engage in when learning to solve electromagnetism problems in a PBL environment for the first time and the tutoring actions that guide students in this context.
13

Lukaschewitsch, Michael. "Geoelectrical conductivity problems on unbounded domains." Universität Potsdam, 1998. http://opus.kobv.de/ubp/volltexte/2007/1470/.

Abstract:
This paper deals with the electrical conductivity problem in geophysics. It is formulated as an elliptic boundary value problem of second order for a large class of bounded and unbounded domains. A special boundary condition, the so called "Complete Electrode Model", is used. Poincaré inequalities are formulated and proved in the context of weighted Sobolev spaces, leading to existence and uniqueness statements for the boundary value problem. In addition, a parameter-to-solution operator arising from the inverse conductivity problem in medicine (EIT) and geophysics is investigated mathematically and is shown to be smooth and analytic.
14

Woo, Jung Min. "Two mathematical problems in disordered systems." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/289124.

Abstract:
Two mathematical problems in disordered systems are studied: geodesics in first-passage percolation and conductivity of random resistor networks. In first-passage percolation, we consider a translation-invariant ergodic family {t(b): b a bond of Z²} of nonnegative random variables, where t(b) represents the passage time of bond b. Geodesics are paths in Z², infinite in both directions, each of whose finite segments is time-minimizing. We prove part of the conjecture that geodesics do not exist in any fixed half-plane and that they have to intersect all straight lines with rational slopes. In random resistor networks, we consider an independent and identically distributed family {C(b): b a bond of a hierarchical lattice H} of nonnegative random variables, where C(b) represents the conductivity of bond b. A hierarchical lattice H is a sequence {H(n): n = 0, 1, 2, ...} of lattices generated in an iterative manner. We prove a central limit theorem for the sequence x(n) of effective conductivities, each defined on the lattice H(n), when the system is in a percolating regime. At the critical point, non-Gaussian behavior is expected.
15

Wu, Songyan. "Theoretical study of two problems in polymer physics." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/10327.

Abstract:
(1) Static Structure Factor and Shape of Reptating Telechelic Ionomers in Electric Fields: We calculate the static structure factor of reptating block copolymers which have a neutral middle block and charged ends. In the presence of an electric field, these telechelic ionomers reptate randomly in their "tubes", but the latter tend to orient along the field axis. If the two ends carry different charges, competition between many length scales occurs. The resulting scattering function shows unusual features that are normally characteristic of highly polydisperse mixtures (results published in Macromolecules, volume 26, number 8, pages 1905-1913). (2) Reptation, Entropic Trapping, Percolation and Rouse Dynamics of Polymer Chains in "Random" Environments: We report a simulation study of the dynamics of linear polymer chains in two-dimensional periodic arrays of obstacles from which a fraction 1-c of the obstacles has been removed. We find Rouse dynamics when c is small, reptation dynamics when c = 1, as well as two other regimes between these two limits. Surprisingly, the diffusion coefficient actually decreases when we start removing obstacles. A study of the sites visited by the polymer molecules indicates that the latter are then entropically trapped in large but isolated voids. When about 60% of the obstacles are removed, the large voids form a percolation path, and diffusion becomes easier as further obstacles are removed. Our results thus predict that the diffusion coefficient can vary in a non-monotonic way with concentration.
16

Schülke, Christophe. "Statistical physics of linear and bilinear inference problems." Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC058.

Abstract:
The recent development of compressed sensing has led to spectacular advances in the understanding of sparse linear estimation problems, as well as in algorithms to solve them. It has also triggered a new wave of developments in the related fields of generalized linear and bilinear inference problems. These problems have in common that they combine a linear mixing step and a nonlinear, probabilistic sensing step, producing indirect measurements of a signal of interest. Such a setting arises in problems such as medical or astronomical imaging. The aim of this thesis is to propose efficient algorithms for this class of problems and to perform their theoretical analysis. To this end, it uses belief propagation, thanks to which high-dimensional distributions can be sampled efficiently, thus making a Bayesian approach to inference tractable. The resulting algorithms undergo phase transitions that can be analyzed using the replica method, initially developed in the statistical physics of disordered systems. The analysis reveals phases in which inference is easy, hard, or impossible, corresponding to different energy landscapes of the problem. The main contributions of this thesis fall into three categories. First, the application of known algorithms to concrete problems: community detection, superposition codes, and an innovative imaging system. Second, a new, efficient message-passing algorithm for blind sensor calibration, which could be used in signal processing for a large class of measurement systems. Third, a theoretical analysis of achievable performance in matrix compressed sensing and of instabilities in Bayesian bilinear inference algorithms.
17

Olofsson, Rikard. "Problems in Number Theory related to Mathematical Physics." Doctoral thesis, Stockholm : Engineering sciences, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9514.

Full text
18

BELFATTO, BENEDETTA. "Flavour problems and new physics at TeV scale." Doctoral thesis, Gran Sasso Science Institute, 2020. http://hdl.handle.net/20.500.12571/10042.

Full text
Abstract:
There is no explanation in the Standard Model for the replication of fermion families, nor for the mass hierarchy between them, nor for the structure of the Yukawa coupling matrices, which remain arbitrary. The key towards understanding the fermion masses and mixing pattern may lie in symmetry principles. Masses of fermions can in fact have a dynamical origin, following the spontaneous breaking of gauge "horizontal" symmetries unifying families, acting differently on left and right species. U(3) or SU(3) "horizontal" family symmetries are an intuitive hypothesis to consider. Fermion masses would then be induced by higher-order operators containing flavon scalars, emerging from renormalizable interactions via a 'universal seesaw' mechanism after integrating out some heavy fields, scalars or vector-like fermions. In this case the fermion mass hierarchy and mixing among families can be related to the pattern of spontaneous breaking of the gauge SU(3) symmetry. The corresponding gauge bosons have flavour-nondiagonal couplings to fermions which in principle can induce flavour-changing phenomena, so strong lower limits on the flavour symmetry breaking scales are expected. However, for special choices of horizontal symmetries there is a natural suppression of flavour-changing effects due to a custodial symmetry, and the gauge bosons can have masses in the TeV range without contradicting the existing experimental limits. Meanwhile, an unexpected anomaly shows up in quark mixing: after the recent high-precision determinations of Vus and Vud, the first row of the CKM matrix deviates from unitarity by about 4σ. The existence of a gauge symmetry SU(3)l acting between lepton families can recover unitarity if the symmetry is broken at a scale of about 6 TeV.
In fact, the gauge bosons of this symmetry contribute to muon decay in interference with the Standard Model, so that the Fermi constant is slightly smaller than the muon decay constant and unitarity is restored. Alternatively, extra vector-like quarks can be considered as a solution to the CKM unitarity problem. The extra species should exhibit a large mixing with the first family in order to recover unitarity, so their mass should be no more than 6 TeV or so. The implications of such a large mixing must be examined, in order to understand whether it can actually exist without contradicting experimental results on flavour-changing neutral current processes and Standard Model observables. In principle an extra weak isodoublet can solve all the discrepancies between independent determinations of the CKM elements in the first row. However, not all the discrepancies can be entirely resolved without contradicting experimental constraints. One can then consider the existence of two or more vector-like doublets, or a vector-like isodoublet together with a down-type or up-type isosinglet. In these scenarios unitarity can be restored and flavour changing can be avoided by setting to zero some couplings of the extra species to Standard Model families. If the anomalies in the determination of CKM mixing angles are confirmed by future experiments with greater precision, there would be a strong indication of physics beyond the Standard Model at the TeV scale, such as flavour-changing gauge bosons and vector-like fermions with masses of a few TeV. This new physics could be testable in the next runs of the high-luminosity LHC or, more effectively, at future accelerators.
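The first-row unitarity deficit discussed above can be checked with a few lines of arithmetic. The central values and uncertainties below are illustrative assumptions in the spirit of 2020-era determinations, not the thesis's own inputs; with these numbers the deficit comes out at roughly the 3σ level (the exact significance depends on the inputs used):

```python
# Illustrative (assumed) central values and uncertainties for the first CKM row.
V_ud, dV_ud = 0.97370, 0.00014
V_us, dV_us = 0.22450, 0.00080
V_ub = 0.00382                     # |V_ub|^2 is negligible at this precision

row = V_ud ** 2 + V_us ** 2 + V_ub ** 2
deficit = 1.0 - row                # unitarity predicts deficit = 0
# Linear error propagation on the row sum (the V_ub term is neglected):
sigma = ((2 * V_ud * dV_ud) ** 2 + (2 * V_us * dV_us) ** 2) ** 0.5
significance = deficit / sigma
```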
19

Corn, John Russell. "Optimization Problems in Hilbert Space with POSS Complexes." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1381.

Full text
Abstract:
Beginning with a survey of functional variation methods in classical physics, we derive the Hartree-Fock theory from canonical quantization. Following a development of density functional theory, many-body perturbation theory, and other techniques of computational condensed matter physics, we perform a systematic study of metal-polyhydride impurities in T8 and T12 polyhedral oligomeric silsesquioxane (POSS) cage molecules. Second-quantized methods motivate the derivations throughout.
20

O'Brien, Stephen Brian Gerard. "Free boundary problems from industry." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314860.

Full text
21

Feng, Wenying. "Semilinear problems and spectral theory." Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360167.

Full text
22

Dicken, Volker, and Peter Maaß. "Wavelet-Galerkin methods for ill-posed problems." Universität Potsdam, 1995. http://opus.kobv.de/ubp/volltexte/2007/1389/.

Full text
Abstract:
Projection methods based on wavelet functions combine optimal convergence rates with algorithmic efficiency. The proofs in this paper utilize the approximation properties of wavelets and results from the general theory of regularization methods. Moreover, adaptive strategies can be incorporated still leading to optimal convergence rates for the resulting algorithms. The so-called wavelet-vaguelette decompositions enable the realization of especially fast algorithms for certain operators.
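The regularization methods this abstract refers to can be illustrated with the simplest of them, Tikhonov regularization, on a classically ill-conditioned model problem. The Hilbert matrix, the noise level, and the choice alpha = 1e-8 are assumptions for this sketch, not taken from the paper:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularised solution x_alpha = (A^T A + alpha*I)^(-1) A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(3)
y = A @ x_true + 1e-6 * rng.standard_normal(n)    # slightly noisy data

x_naive = np.linalg.solve(A, y)                   # ill-posed: noise is hugely amplified
x_reg = tikhonov(A, y, alpha=1e-8)                # regularised: noise is damped
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Wavelet-Galerkin methods replace the penalty above by a projection onto wavelet subspaces, but the stabilising role of the regularization parameter is the same.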
23

Bhatt, Ronak Jayant. "Inverse problems in elliptic charged-particle beams." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37055.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2006. Includes bibliographical references (p. [155]-159).

The advantages of elliptic (or sheet) beams have been known for many years, but their inherent three-dimensional nature presents significant theoretical, design, and experimental challenges in the development of elliptic beam systems. The present work provides a framework for the design of elliptic cross-section charged-particle beam formation and transport systems. An effective mathematical formalism for describing accelerating elliptic cross-section beams is developed in which the particle distribution function for an elliptic beam is associated with a hyperellipsoid in phase space, and the evolution equations for the particle distribution hyperellipsoid are obtained. A novel methodology is presented for the design of elliptic beam-forming diodes utilizing an analytic prescription for the surfaces of three-dimensional electrodes which generate, accelerate, and confine a highly laminar elliptic beam. Three-dimensional simulations and tolerance studies are performed, confirming the theoretical predictions that a near-ideal beam can be produced. Focusing systems are described for elliptic beams in coasting, accelerating, and compressing regions with analytic prescriptions for the applied electric and magnetic fields required to maintain a laminar flow profile for particles within the beam. Numerical phase-space evolution and 3D simulations confirm that self-consistent laminar flow profiles are maintained by the theoretically-designed applied fields. The traditional approach to charged-particle dynamics problems involves extensive numerical optimization over the space of initial and boundary conditions in order to obtain desired charged-particle trajectories. The approach taken in the present work is to obtain analytic inverses wherever possible in order to minimize any necessary numerical optimization. Desired trajectories are assumed, and the applied fields and electrode geometries are then determined in a manner consistent with the assumed trajectories.

by Ronak Jayant Bhatt. Ph.D.
24

Duka, E. D. "Bifurcation problems in finite elasticity." Thesis, University of Nottingham, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384747.

Full text
25

Mueller, Thibaut. "Solving hierarchy problems in the LHC era." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708391.

Full text
26

Tzou, Leo. "Linear and nonlinear analysis and applications to mathematical physics /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/5761.

Full text
27

Donertas, Sule. "Role Of Thought Experiments In Solving Conceptual Physics Problems." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12614025/index.pdf.

Full text
Abstract:
The purpose of this study was to contribute to the science education literature by describing how thought experiments vary in terms of their nature, their purpose of use, and the reasoning resources behind them during the solution of conceptual physics problems. Three groups of participants were selected according to the participants' level of physics knowledge (low-, medium-, and high-level groups) in order to capture the variation. The methodology of phenomenographic research was adapted for this study. Think-aloud and retrospective questioning strategies were used throughout the individually conducted problem-solving sessions. The analysis of the data showed that thought experiments were frequently used cognitive tools for participants at all levels while working on the problems. Four different thought experiment structures were observed, categorized as limiting case, extreme case, simple case, and familiar case. It was also observed that participants conducted thought experiments for different purposes such as prediction, proof, and explanation. The reasoning resources running behind the thought experiment processes were classified in terms of observed facts, intuitive principles, and scientific concepts. The results of the analysis suggested that thought experiments, used as a creative reasoning instrument for theory formation or hypothesis testing by scientists, can also be used by students during inquiry processes as well as problem solving in instructional settings. It was also argued that instructional practices can be developed according to the outcomes of thought experiments, which illuminate the thinking processes of students and display hidden or missing components of their reasoning.
28

Chan, Terence. "Stochastic differential equations and related problems inspired by physics." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335063.

Full text
29

Kozlowska, Katarzyna. "Riemann-Hilbert problems and their applications in mathematical physics." Thesis, University of Reading, 2017. http://centaur.reading.ac.uk/73488/.

Full text
Abstract:
The aim of this thesis is to present the reader with the very effective and rigorous Riemann-Hilbert approach to solving asymptotic problems. We consider a transition problem for a Toeplitz determinant whose symbol depends on an additional parameter t. When t > 0, the symbol has one Fisher-Hartwig singularity at an arbitrary point z1 ≠ 1 on the unit circle (with associated strengths α1, β1 ∈ C), and as t → 0 a new Fisher-Hartwig singularity emerges at the point z0 = 1 (with strengths α0, β0 ∈ C). The asymptotics we present for the determinant are uniform for sufficiently small t. The location of the β-parameters leads to the consideration of two cases, both of which are addressed in this thesis. In the first case, when | Re β0 − Re β1| < 1, we see a transition between two asymptotic regimes, both given by the same result of Ehrhardt, but with different parameters, thus producing different asymptotics. In the second case, when | Re β0 − Re β1| = 1, the symbol has several Fisher-Hartwig representations at t = 0, and the asymptotics are given by the Tracy-Basor conjecture. These double scaling limits are used to explain the transition in the theory of XY spin chains between different regions in the phase diagram across critical lines.
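As a numerical aside, hedged and not drawn from the thesis: for a smooth symbol with no Fisher-Hartwig singularities, the Toeplitz determinant asymptotics reduce to the strong Szegő limit theorem, which a few lines of code can check. For f(θ) = exp(a·cos θ) the Fourier coefficients are the modified Bessel values I_k(a), and the determinants converge to exp(a²/4):

```python
import math
import numpy as np

def bessel_i(k, a, terms=40):
    """Modified Bessel function I_k(a) via its power series."""
    return sum((a / 2) ** (2 * m + k) / (math.factorial(m) * math.factorial(m + k))
               for m in range(terms))

def toeplitz_det(n, a):
    """det of the n x n Toeplitz matrix of the symbol f(theta) = exp(a*cos(theta)),
    whose Fourier coefficients are I_{j-k}(a) (symmetric in the index)."""
    c = [bessel_i(j, a) for j in range(n)]
    T = np.array([[c[abs(j - k)] for k in range(n)] for j in range(n)])
    return np.linalg.det(T)

# Strong Szego limit for this smooth symbol: D_n -> exp(a^2 / 4).
a = 1.0
d20 = toeplitz_det(20, a)
szego = math.exp(a * a / 4)
```

Fisher-Hartwig singularities break exactly the smoothness assumption behind this limit, which is what makes the transition asymptotics in the thesis delicate.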
30

Mahault, Benoît. "Outstanding problems in the statistical physics of active matter." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS250/document.

Full text
Abstract:
Active matter, i.e. nonequilibrium systems composed of many particles capable of exploiting the energy present in their environment in order to produce systematic motion, has attracted much attention from the statistical mechanics and soft matter communities in the past decades. Active systems indeed cover a large variety of examples that range from biological to granular. This Ph.D. thesis focusses on the study of minimal models of dry active matter (when the fluid surrounding the particles is neglected), such as the Vicsek model: point-like particles moving at constant speed and locally aligning their velocities with those of their neighbors in the presence of noise, which defines a nonequilibrium universality class for the transition to collective motion. Four current issues have been addressed. First, the definition of a new universality class of dry active matter with polar alignment and apolar motion, showing a continuous transition to quasi-long-range polar order with continuously varying exponents, analogous to the equilibrium XY model, but not belonging to the Kosterlitz-Thouless universality class. Second, the study of the faithfulness of kinetic theories for simple Vicsek-style models and their comparison with results obtained at the microscopic and hydrodynamic levels. Third, a quantitative assessment of Toner and Tu theory, which has allowed the exponents characterizing fluctuations in the flocking phase of the Vicsek model to be computed from large-scale numerical simulations of the microscopic dynamics. Finally, the establishment of a formalism allowing for the derivation of hydrodynamic field theories for dry active matter models in three dimensions, and their study at the linear level.
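A minimal sketch of the Vicsek update rule described above; all parameter values are illustrative assumptions, and in the noiseless, all-to-all limit used in the test the flock aligns after a single step:

```python
import numpy as np

def vicsek_step(pos, theta, box, r, v0, eta, rng):
    """One update of the 2D Vicsek model: align with neighbours within radius r
    (periodic box), add uniform angular noise of amplitude eta, move at speed v0."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                 # minimum-image convention
    neigh = (d ** 2).sum(-1) <= r ** 2           # includes the particle itself
    new_theta = np.empty_like(theta)
    for i in range(len(theta)):
        s = np.sin(theta[neigh[i]]).mean()       # mean heading of the neighbourhood
        c = np.cos(theta[neigh[i]]).mean()
        new_theta[i] = np.arctan2(s, c) + eta * rng.uniform(-np.pi, np.pi)
    vel = v0 * np.c_[np.cos(new_theta), np.sin(new_theta)]
    return (pos + vel) % box, new_theta

def polarization(theta):
    """Global order parameter: 1 for perfect alignment, ~0 when disordered."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

rng = np.random.default_rng(1)
pos = rng.uniform(0, 5.0, size=(50, 2))
theta = rng.uniform(-np.pi, np.pi, size=50)
for _ in range(5):                               # noiseless, all-to-all: flocks at once
    pos, theta = vicsek_step(pos, theta, box=5.0, r=10.0, v0=0.1, eta=0.0, rng=rng)
```

At finite noise and finite interaction radius this same rule exhibits the order-disorder transition whose universality class the thesis investigates.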
31

Guler, Marifi. "Strong coupling problems in gauge theories." Thesis, University of Edinburgh, 1986. http://hdl.handle.net/1842/17001.

Full text
32

Han, Song. "Computational Methods for Multi-dimensional Neutron Diffusion Problems." Licentiate thesis, KTH, Physics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-11298.

Full text
33

Sears, G. J. "Finite element solution of unbounded field problems." Thesis, University of Salford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356188.

Full text
34

Oberem, Graham Edmund. "An intelligent computer-based tutor for elementary mechanics problems." Thesis, Rhodes University, 1987. http://hdl.handle.net/10962/d1001997.

Full text
Abstract:
ALBERT, an intelligent problem-solving monitor and coach, has been developed to assist students solving problems in one-dimensional kinematics. Students may type in kinematics problems directly from their textbooks. ALBERT understands the problems, knows how to solve them, and can teach students how to solve them. The program is implemented in the TUTOR language and runs on the Control Data mainframe PLATO system. A natural language interface was designed to understand kinematics problems stated in textbook English. The interface is based on a pattern recognition system which is intended to parallel a cognitive model of language processing. The natural language system has understood over 60 problems taken directly from elementary physics textbooks. Two problem-solving routines are included in ALBERT. One is goal-directed and solves the problems using the standard kinematic equations. The other uses the definition of acceleration and the relationship between displacement and average velocity to solve the problems. It employs a forward-directed problem-solving strategy. The natural language interface and both the problem-solvers are fast and completely adequate for the task. The tutorial dialogue system uses a modified version of the natural language interface which operates in a two-tier fashion. First an attempt is made to understand the input with the pattern recognition system, and if that fails, a keyword matching system is invoked. The result has been a fairly robust language interface. The tutorial is driven by a tutorial management system (embodying a tutorial model) and a context model. The context model consists of a student model, a tutorial status model and a dynamic dialogue model. ALBERT permits a mixed initiative dialogue in the discussion of a problem. The system has been tested by physics students in more than 80 problem-solving sessions and the results have been good. The response of the students has been very favourable.
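The relations manipulated by ALBERT's goal-directed solver are the standard constant-acceleration kinematic equations. A minimal sketch of those equations (an illustration, not the tutor's actual TUTOR/PLATO implementation):

```python
def kinematics_1d(u, a, t):
    """Final velocity and displacement under constant acceleration, from the
    standard equations v = u + a*t and s = u*t + a*t**2 / 2."""
    v = u + a * t
    s = u * t + 0.5 * a * t * t
    return v, s

# Example: a ball dropped from rest with g = 9.8 m/s^2 for 2 s.
v, s = kinematics_1d(u=0.0, a=9.8, t=2.0)
```

A goal-directed solver of ALBERT's kind picks whichever such equation links the quantities named in the parsed problem statement to the unknown being asked for.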
35

Cox, Jürgen 1970. "Solution of sign and complex action problems with cluster algorithms." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8646.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2001. Includes bibliographical references (p. [105]-109) and index.

Two kinds of models are considered which have a Boltzmann weight which is either not real or real but not positive, so that standard Monte Carlo methods are not applicable. These sign or complex action problems are solved with the help of cluster algorithms. In each case improved estimators for the Boltzmann weight are constructed which are real and positive. The models considered belong to two classes: fermionic and non-fermionic models. An example of a non-fermionic model is the Potts model approximation to QCD at non-zero baryon density. The three-dimensional three-state Potts model captures the qualitative features of this theory. It has a complex action and so the Boltzmann weight cannot be interpreted as a probability. The complex action problem is solved by using a cluster algorithm. The improved estimator for the complex phase of the Boltzmann factor is real and positive and is used for importance sampling. The first-order deconfinement transition line is investigated and the universal behavior at its critical endpoint is studied. An example of a fermionic model with a sign problem is given by staggered fermions with 2 flavors in 3+1 dimensions. Here the sign is connected to the permutation sign of fermion world lines and is of nonlocal nature. Cluster flips change the topology of the fermion world lines and they have a well-defined effect on the permutation sign independent of the other clusters. The sign problem is solved by suppressing those clusters whose contribution to the partition function and observables of interest would be zero. We confirm that the universal critical behavior of the finite-temperature chiral phase transition is that of the three-dimensional Ising model. We also study staggered fermions with one flavor in 2+1 dimensions and confirm that the chiral phase transition then belongs to the universality class of the two-dimensional Ising model.

by Jürgen Cox. Ph.D.
36

Lundberg, Erik. "Problems in Classical Potential Theory with Applications to Mathematical Physics." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3220.

Full text
Abstract:
In this thesis we are interested in some problems regarding harmonic functions. The topics are divided into three chapters. Chapter 2 concerns singularities developed by solutions of the Cauchy problem for a holomorphic elliptic equation, especially Laplace's equation. The principal motivation is to locate the singularities of the Schwarz potential. The results have direct applications to Laplacian growth (or the Hele-Shaw problem). Chapter 3 concerns the Dirichlet problem when the boundary is an algebraic set and the data is a polynomial or a real-analytic function. We pursue some questions related to the Khavinson-Shapiro conjecture. A main topic of interest is analytic continuability of the solution outside its natural domain. Chapter 4 concerns certain complex-valued harmonic functions and their zeros. The special cases we consider apply directly in astrophysics to the study of multiple-image gravitational lenses.
37

Nicholson, Colin. "Modernity and the irrational : reciprocal problems for physics and culture." Thesis, King's College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299972.

Full text
38

Rodrigues, Sérgio da Silva. "Methods of nonlinear control theory in problems of mathematical physics." Doctoral thesis, Universidade de Aveiro, 2008. http://hdl.handle.net/10773/2931.

Full text
Abstract:
Doctorate in Mathematics

We consider the Navier-Stokes equation on a two-dimensional domain and study its approximate controllability and its controllability on projections onto finite-dimensional subspaces of vector fields. We consider body controls taking values in a finite-dimensional space. More precisely, we look for a finite-dimensional subspace of divergence-free vector fields that allows us to control the equation approximately using controls taking values in that subspace. Using some continuity properties of the equation in the initial data, namely the continuity of the solution when the control varies in the so-called relaxation metric, we reduce the controllability issues to the existence of a so-called saturating set. Both Navier and no-slip boundary conditions are considered. We present some examples of domains and respective saturating sets. For the special case of Lions boundary conditions - a particular case of Navier boundary conditions - through a technique involving analytic perturbation of metrics, we transfer the so-called controllability on observed coordinate spaces from one metric to (many) others.
39

GUASTAVINO, SABRINA. "Learning and inverse problems: from theory to solar physics applications." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/998315.

Full text
Abstract:
The problem of approximating a function from a set of discrete measurements has been extensively studied since the seventies. Our theoretical analysis proposes a formalization of the function approximation problem which allows dealing with inverse problems and supervised kernel learning as two sides of the same coin. The proposed formalization takes into account arbitrary noisy data (deterministically or statistically defined) and arbitrary loss functions (possibly seen as a log-likelihood), handling both direct and indirect measurements. The core idea of this part relies on the analogy between statistical learning and inverse problems. One of the main pieces of evidence for the connection between these two areas is that regularization methods, usually developed for ill-posed inverse problems, can be used for solving learning problems. Furthermore, the spectral regularization convergence rate analyses provided in these two areas share the same source conditions but are carried out with either an increasing number of samples in learning theory or a decreasing noise level in inverse problems. More generally, regularization via sparsity-enhancing methods is widely used in both areas, and it is possible to apply well-known $\ell_1$-penalized methods for solving both learning and inverse problems. In the first part of the Thesis, we analyze such a connection at three levels: (1) at an infinite-dimensional level, we define an abstract function approximation problem from which the two problems can be derived; (2) at a discrete level, we provide a unified formulation according to a suitable definition of sampling; and (3) at a convergence rates level, we provide a comparison between convergence rates given in the two areas, by quantifying the relation between the noise level and the number of samples. In the second part of the Thesis, we focus on a specific class of problems where measurements are distributed according to a Poisson law.
We provide a data-driven, asymptotically unbiased, and globally quadratic approximation of the Kullback-Leibler divergence and we propose Lasso-type methods for solving sparse Poisson regression problems, named PRiL for Poisson Reweighted Lasso, and an adaptive version of this method, named APRiL for Adaptive Poisson Reweighted Lasso, proving consistency properties in estimation and variable selection, respectively. Finally we consider two problems in solar physics: 1) the problem of forecasting solar flares (a learning application) and 2) the desaturation problem of solar flare images (an inverse problem application). The first application concerns the prediction of solar storms using images of the magnetic field on the sun, in particular physics-based features extracted from active regions from data provided by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). The second application concerns the reconstruction problem of Extreme Ultra-Violet (EUV) solar flare images recorded by a second instrument on board SDO, the Atmospheric Imaging Assembly (AIA). We propose a novel sparsity-enhancing method SE-DESAT to reconstruct images affected by saturation and diffraction, without using any a priori estimate of the background solar activity.
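A hedged sketch of the kind of $\ell_1$-penalized Poisson regression discussed above, solved here by plain proximal gradient descent rather than the reweighting schemes the thesis develops; the data sizes, step size, and penalty weight are illustrative assumptions:

```python
import numpy as np

def poisson_lasso(X, y, lam, step=1e-3, n_iters=3000):
    """Proximal gradient for the l1-penalised Poisson negative log-likelihood
    sum_i [exp(x_i.b) - y_i * (x_i.b)] + lam * ||b||_1."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iters):
        g = X.T @ (np.exp(X @ b) - y)             # gradient of the Poisson NLL
        b = b - step * g
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft threshold
    return b

def objective(X, y, b, lam):
    eta = X @ b
    return np.sum(np.exp(eta) - y * eta) + lam * np.abs(b).sum()

rng = np.random.default_rng(2)
n, p = 200, 8
X = rng.normal(0, 0.4, size=(n, p))
b_true = np.array([1.0, -1.0, 0, 0, 0, 0, 0, 0])  # sparse ground truth
y = rng.poisson(np.exp(X @ b_true))               # Poisson-distributed counts
b_hat = poisson_lasso(X, y, lam=1.0)
```

The reweighted variants (PRiL/APRiL) replace this fixed penalty with data-driven weights, which is what yields the consistency guarantees the abstract mentions.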
40

DI, GIOACCHINO ANDREA. "EUCLIDEAN CORRELATIONS IN COMBINATORIAL OPTIMIZATION PROBLEMS: A STATISTICAL PHYSICS APPROACH." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/693073.

Full text
Abstract:
In this thesis I discuss combinatorial optimization problems from the statistical physics perspective. The starting point is the set of motivations that brought physicists together with computer scientists and mathematicians to work on this beautiful and deep topic. I give some elements of complexity theory, and I motivate why the point of view of statistical physics, although different from the one adopted in standard complexity theory, leads to many interesting results, as well as new questions. I discuss the connection between combinatorial optimization problems and spin glasses. Finally, I briefly review some topics of large deviation theory, as a way to go beyond average quantities. As a concrete example of this, I show how the replica method can be used to explore the large deviations of a well-known toy model of spin glasses, the p-spin spherical model. In the second chapter I specialize in Euclidean combinatorial optimization problems. In particular, I explain why these problems, when embedded in a finite-dimensional Euclidean space, are difficult to deal with. I analyze several problems (the matching and assignment problems, the traveling salesman problem, and the 2-factor problem) in one dimension to explain a quite general technique to deal with one-dimensional Euclidean combinatorial optimization problems. Whenever possible, and in a detailed way for the traveling salesman problem, I also discuss how to proceed in two (and also more) dimensions. In the last chapter I outline a promising approach to tackle hard combinatorial optimization problems: quantum computing.
After giving a quick overview of the paradigm of quantum computation (and its differences with respect to the classical one), I discuss in detail the application of the so-called quantum annealing algorithm to a specific case of the matching problem, also by providing a comparison between the performance of a recent quantum annealer machine (the D-Wave 2000Q) and a classical super-computer equipped with a heuristic algorithm (an implementation of parallel tempering). Finally, I draw the conclusions of my work and I suggest some interesting directions for future studies.
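The Euclidean matching problem studied in the thesis can be stated in a few lines. The sketch below computes the exact minimum-weight perfect matching of a small 2D point set by brute-force recursion, which is feasible only for tiny instances; the statistical-physics and annealing methods discussed above are precisely what replaces this enumeration at scale:

```python
import math

def min_matching_cost(points):
    """Exact minimum-weight perfect matching on an even set of 2D points,
    by recursively pairing the first unmatched point with every candidate."""
    pts = list(points)
    if not pts:
        return 0.0
    first, rest = pts[0], pts[1:]
    best = math.inf
    for i, q in enumerate(rest):
        d = math.dist(first, q)                   # Euclidean edge weight
        best = min(best, d + min_matching_cost(rest[:i] + rest[i + 1:]))
    return best

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
cost = min_matching_cost(square)                  # optimum: two unit edges
```

The number of perfect matchings grows as (N-1)!! with the number of points N, which is why heuristics such as parallel tempering or quantum annealing become relevant.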
41

Geyer, Jani. "Superconductivity problems with multiple Ginzburg-Landau order parameters." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/17916.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2011.
ENGLISH ABSTRACT: Two problems in the field of materials-based condensed matter physics, specifically in the field of superconductivity, are studied theoretically. In both problems, each of current experimental interest, an extension of Ginzburg-Landau theory is used to describe a physical system, with focus on the energy associated with the interface(s) occurring in the respective systems. The first physical system under consideration is that of a two-band superconductor. Using Ginzburg-Landau theory for two-band superconductors, the interface energy σs between normal and superconducting states coexisting at the thermodynamic critical magnetic field is determined. From the theoretical and numerical analysis of the interface energy, it is found that close to the transition temperature, where the Ginzburg-Landau theory is applicable, the two-band problem maps onto an effective single-band problem. This finding puts into question the possibility of intermediate, so-called type-1.5 superconductivity, in the regime where the Ginzburg-Landau theory applies. The second physical system is that of a system with competing superconductivity and anti-ferromagnetism. From Ginzburg-Landau theory for such competing systems in a thermodynamic critical magnetic field, it is shown that two possible interfaces can occur: an interface between a pure anti-ferromagnetic state and a pure superconducting state; and an interface between a state with coexisting superconductivity and anti-ferromagnetism and a pure anti-ferromagnetic state. The energy associated with both these interfaces is analysed theoretically and numerically, from which the boundary between type-I and type-II superconductivity is obtained for certain specific cases.
AFRIKAANSE OPSOMMING: Twee probleme in die veld van materiaal-gebaseerde gekondenseerde materie fisika, spesifiek in die veld van supergeleiding, word teoreties bestudeer. In beide probleme, albei tans van eksperimentele belang, word ’n fisiese sisteem beskryf deur ’n uitbreiding van enkel-band Ginzburg-Landau teorie, met fokus op die energie geassosieer met die koppelvlak(ke) wat in die onderskeie sisteme aangetref word. Die eerste fisiese sisteem wat beskou word is die van ’n twee-band supergeleier. Deur van Ginzburg-Landau teorie vir twee-band supergeleiers gebruik te maak, word die koppelvlak energie σs tussen die gelyktydig bestaande normaal- en supergeleidende toestand in die termodinamiese kritieke magneetveld bepaal. Deur beide teoretiese en numeriese analieses word bepaal dat na aan die oorgangstemperatuur, waar Ginzburg-Landau teorie geldig is, die twee-band probleem op ’n effektiewe een-band probleem afbeeld. Hierdie bevinding bevraagteken dus die moontlikheid van onkonvensionele, of sogenaamde tipe-1.5 supergeleiding, vir gevalle waar Ginzburg-Landau teorie geldig is. Die tweede fisiese sisteem wat beskou word is ’n sisteem met kompeterende supergeleiding en anti-ferromagnetisme. Met behulp van Ginzburg-Landau teorie vir sulke sisteme in ’n termodinamiese kritiese magneetveld word gewys dat daar twee moontlike koppelvlakke kan ontstaan: ’n koppelvlak tussen ’n uitsluitlik anti-ferromagnetiese toestand en ’n uitsluitlik supergeleidende toestand; sowel as ’n koppelvlak tussen ’n uitsluitlik anti-ferromagnetiese toestand en ’n toestand van beide supergeleiding en anti-ferromagnetisme. Die energie geassosieer met beide hierdie koppelvlakke word teoreties en numeries geanaliseer wat lei tot ’n beskrywing van die grenslyn tussen tipe-I en tipe-II supergeleiding in sekere spesifieke gevalle.
APA, Harvard, Vancouver, ISO, and other styles
42

McKay, Barry. "Wrinkling problems for non-linear elastic membranes." Thesis, University of Glasgow, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Larnder, Chris. "Observer problems in multifractals : the example of rain." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22752.

Full text
Abstract:
Non-linear phenomena exhibit extreme variability over a wide range of scales and intensities. In multifractal processes, variability increases algebraically with resolution: as we approach the small scale limit, we develop a highly singular field of diverging and vanishing densities. Even over a finite range of scales, the variability can readily exceed the finite signal-response range capabilities of measuring devices. In the face of such extreme behavior, one can no longer consider the problems of observing such processes as "merely" experimental ones.
Detectors will rarely be capable of handling the full dynamic range of intensities, missing either the extreme events or the small input signal. Therefore it is of fundamental importance to understand what multifractals "look like" when observed through a detector having only a finite dynamic range.
Limitations on the observable dynamic range affect intensities at nearby scales, breaking the scale invariance and imposing a limit on the range of scales over which scaling behaviour can be observed.
A simple model of a threshold-type problem, in which a detector has a (finite) minimum detectable signal level, is solved in the multifractal framework. Results include a breaking of the scaling symmetry for scales particularly close to the scale corresponding to the resolution of the detector. The scaling improves as we degrade further to lower resolutions. It also improves as we move to higher moment statistics.
Rainfall time series from time scales of 180 years to 5 minutes are analysed, revealing, in particular, a break in the spectral scaling behavior near 2.4 hours. Some of the theoretical results are used to show that this break is likely to be caused by instrumental problems at low signal intensities. The correct scaling behavior is successfully recovered from the low resolution information.
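The threshold effect described in the abstract can be illustrated on a toy multiplicative cascade: clipping the field at a detector's minimum detectable level distorts the statistics, most strongly at the finest scales, while coarse-grained averages rise above the threshold and are less affected. This is only a sketch of the general idea; the cascade parameters, the 20% threshold quantile, and the moment order q = 2 below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# build a 1-D multiplicative cascade: repeatedly split each cell in two
# and multiply by independent lognormal weights (a toy multifractal field)
levels = 12
field = np.ones(1)
for _ in range(levels):
    w = rng.lognormal(mean=-0.05, sigma=0.3, size=2 * field.size)
    field = np.repeat(field, 2) * w

def trace_moment(f, scale, q):
    # q-th moment of the field coarse-grained over blocks of `scale` cells
    coarse = f.reshape(-1, scale).mean(axis=1)
    return np.mean(coarse ** q)

# the detector's minimum detectable level clips the weak-signal tail
threshold = np.quantile(field, 0.2)
observed = np.where(field < threshold, threshold, field)

scales = [1, 4, 16, 64]
exact = [trace_moment(field, s, 2) for s in scales]
clipped = [trace_moment(observed, s, 2) for s in scales]
# clipping only raises values, so the observed moments sit above the true
# ones; the distortion is concentrated at the finest (scale = 1) resolution
print(exact)
print(clipped)
```

Plotting log(moment) against log(scale) for both fields would show the clipped curve bending away from the true scaling line near the detector's resolution, the symmetry breaking described in the abstract.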
APA, Harvard, Vancouver, ISO, and other styles
44

Thomson, Laura C. "Inverse techniques : problems in optics and gas sensing." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1617/.

Full text
Abstract:
In this thesis, two, seemingly different, classes of problems are discussed: locating gas sources from downwind gas concentration measurements and designing diffractive optics (i.e. computer generated holograms), which on illumination will produce a desired light beam in the far field. The similarity between these problems is that they are both “inverse problems” and we discuss the use of inverse techniques to solve them. In many instances within science, it is possible to calculate accurately a set of consequences which result from defined events. In most cases, however, it is mathematically impossible to analytically calculate the unique set of events which led to the observed consequences. Such problems are termed “inverse problems”. Taking the example of gas dispersion, one sees that a known source leads to a calculable set of downwind concentrations. However, given a single concentration measurement it is impossible to distinguish a specific source and location from a larger, more distant source that would have given the same measured concentration. This is an example of the same consequence resulting from two, or more, different events. Key to solving inverse problems are iterative algorithms which randomly trial different possible events to find those which best describe the observed consequences. Such algorithms use a search method to postulate possible events, apply a forward model to calculate the anticipated consequences and then use a cost function to compare the postulated with the known consequences. The process is iterated until the optimum value of the cost function is found, at which point the current set of postulated events is taken to be the best estimate of the real events. In this thesis I apply similar iterative algorithms to solve the two classes of problem. The current demand on the world’s oil resources has encouraged the development of new prospecting techniques. 
LightTouch is one such solution which is discussed in this thesis and was developed with Shell Global Solutions. LightTouch uses the fact that oil reserves, through microseepages, leak hydrocarbons to their surface. Detection of these hydrocarbons can indicate the presence of oil reserves. LightTouch measures Ethane to sub-part-per-billion sensitivity at multiple positions across a survey area. Locating the source of the Ethane from the sparse downwind concentration measurements is an inverse problem and we deploy algorithms of the type discussed above to locate the Ethane sources. The algorithm is written in LabView and the software, Recon, is currently used by Shell Global Solutions to solve this problem. In appendix B the Recon user interface is shown. We investigate both the impact of choice of cost function (chapter 3) and forward model (chapter 4), which in this inverse problem is a gas dispersion model, on the algorithm’s ability to locate the gas sources. We find that the choice of cost function is more important to the success of the algorithm than the choice of forward model. Optical tweezers trap and manipulate particles with light beams. In order to manipulate the particles in a desired way it is necessary for the shape and position of the light beam to be controlled. One way of achieving a desired light beam is to use a spatial light modulator (SLM) which displays a phase pattern (referred to as a computer generated hologram), off which the light is diffracted. Calculating the phase pattern which will result in the desired light beam is an inverse problem and is referred to as holographic light shaping. The forward model in this case is a Fourier transform. In this thesis we use an algorithm similar to that used to solve the gas location problem and the Gerchberg-Saxton algorithm to calculate phase patterns with applications in optical tweezers. 
Within an optical tweezers system the highest trap resolution (the smallest distance between neighboring traps) that can be achieved is conventionally dictated by the diffraction limit. In this thesis we investigate two possible ways of beating the diffraction limit: superresolution and evanescent waves. In chapter 5 we investigate the application of inverse techniques to calculating phase patterns which produce superresolution optical traps. We calculate theoretically the improvements to both relative trap stiffness and trap resolution using the superresolution optical traps. Although both are improved, this comes at a cost in trap strength. In chapter 7 we simulate evanescent wave fields and demonstrate shaping three-dimensional evanescent optical traps. Similar light shaping techniques are used in chapter 6 to shape light beams which, after being disturbed, will self-reconstruct.
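The Gerchberg-Saxton algorithm named in the abstract alternates between the SLM plane and the far-field (Fourier) plane, imposing the known constraint in each: phase-only modulation at the SLM, the target amplitude in the far field. A minimal sketch with FFTs follows; the uniform illumination, square target spot, array size, and iteration count are all invented for the illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
target_amp = np.zeros((n, n))
target_amp[28:36, 28:36] = 1.0            # desired far-field amplitude: a square spot
target_amp /= np.linalg.norm(target_amp)  # unit total power

illum = np.ones((n, n)) / n               # uniform illumination amplitude, unit power

phase = rng.uniform(0, 2 * np.pi, (n, n)) # random starting hologram phase
for _ in range(50):
    field = illum * np.exp(1j * phase)              # field leaving the SLM
    far = np.fft.fft2(field)                        # propagate to the Fourier plane
    far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude, keep phase
    near = np.fft.ifft2(far)                        # propagate back to the SLM plane
    phase = np.angle(near)                          # keep only the phase (SLM constraint)

# quality metric: overlap of achieved and target far-field amplitudes (1 = perfect)
achieved = np.abs(np.fft.fft2(illum * np.exp(1j * phase)))
achieved /= np.linalg.norm(achieved)
overlap = float(np.sum(achieved * target_amp))
print(overlap)
```

The same alternating-projection structure, swap the forward model (here a Fourier transform) and the cost/constraint, is what links this problem to the gas-source location algorithm described earlier in the abstract.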
APA, Harvard, Vancouver, ISO, and other styles
45

Tucker, Adam Philip. "Local moment phases in quantum impurity problems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:538d2d83-963e-4a51-81cd-4235e9761da4.

Full text
Abstract:
This thesis considers quantum impurity models that exhibit a quantum phase transition (QPT) between a Fermi liquid strong coupling (SC) phase, and a doubly-degenerate non-Fermi liquid local moment (LM) phase. We focus on what can be said from exact analytic arguments about the LM phase of these models, where the system is characterized by an SU(2) spin degree of freedom in the entire system. Conventional perturbation theory about the non-interacting limit does not hold in the non-Fermi liquid LM phase. We circumvent this problem by reformulating the perturbation theory using a so-called 'two self-energy' (TSE) description, where the two self-energies may be expressed as functional derivatives of the Luttinger-Ward functional. One particular paradigmatic model that possesses a QPT between SC and LM phases is the pseudogap Anderson impurity model (PAIM). We use infinite-order perturbation theory in the interaction, U, to self-consistently deduce the exact low-energy forms of both the self-energies and propagators in each of the distinct phases of the model. We analyse the behaviour of the model approaching the QPT from each phase, focusing on the scaling of the zero-field single-particle dynamics using both analytical arguments and detailed numerical renormalization group (NRG) calculations. We also apply two 'conserving' approximations to the PAIM. First, second-order self-consistent perturbation theory and second, the fluctuation exchange approximation (FLEX). Within the FLEX approximation we develop a numerical algorithm capable of self-consistently and coherently describing the QPT coming from both distinct phases. Finally, we consider a range of static spin susceptibilities that each probe the underlying QPT in response to coupling to a magnetic field.
APA, Harvard, Vancouver, ISO, and other styles
46

Kernan, Peter John. "Two astroparticle physics problems : solar neutrinos, and primordial ⁴Helium." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487846354482934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Esparza, David. "Elasticity problems in domains with nonsmooth boundaries." Thesis, University of Liverpool, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wood, Gerard Paul. "Some problems in nonlinear diffusion." Thesis, University of Nottingham, 1996. http://eprints.nottingham.ac.uk/12721/.

Full text
Abstract:
In this thesis we investigate mathematical models for a number of topics in the field of nonlinear diffusion, using similarity, asymptotic and numerical methods and focussing on the time-asymptotic behaviour in most cases. Firstly, we consider 'fast' diffusion in the vicinity of a mask-edge, with application to dopant diffusion into a semiconductor. A variety of approaches are used to determine concentration contours and aspect ratios. Next we consider flow by curvature. Using group analysis, we determine a number of new symmetries for the governing equations in two and three dimensions. By tracking a moving front numerically, we also construct single and double spiral patterns (reminiscent of those observed in the Belousov-Zhabotinskii chemical reaction), and classify the types of behaviour that can occur. Finally, we analyse travelling wave solutions and the behaviour near to extinction for closed loops. We next consider relaxation waves in a system that can be used to model target patterns, also observed in the Belousov-Zhabotinskii reaction. Numerical and asymptotic results are presented, and a number of new cases of front behaviour are obtained. Finally, we investigate a number of systems using an approach based on the WKB method, analysing the motion of invasive fronts and also the form of the pattern left behind. For Fisher's equation, we demonstrate how modulated travelling waves can be obtained by prescribing an oscillatory initial profile. The method is then extended, firstly to Turing systems and then to oscillatory systems, for which we use an additional periodic plane wave argument to determine the unequal front and pattern speeds, as well as the periodicity. Finally, we illustrate how these methods apply to a recently-used 'chaotic' model from ecology.
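The invasive fronts mentioned for Fisher's equation can be reproduced with a short numerical experiment: in nondimensional form u_t = u_xx + u(1 − u), pulled fronts invading the unstable state u = 0 approach the asymptotic speed 2. The finite-difference scheme, grid sizes, and run times below are invented for this sketch and are not the thesis's method.

```python
import numpy as np

# explicit finite-difference integration of Fisher's equation u_t = u_xx + u(1 - u)
L, nx = 200.0, 2000
dx = L / nx
dt = 0.2 * dx**2                      # well inside the explicit stability limit dx^2/2
x = np.linspace(0.0, L, nx)
u = np.where(x < 10.0, 1.0, 0.0)      # step initial data launches a front moving right

def front_position(u):
    # position where u first drops below 1/2
    return x[np.argmax(u < 0.5)]

def step(u):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2] # crude no-flux boundaries
    return u + dt * (lap + u * (1.0 - u))

steps_per_unit = int(round(1.0 / dt))
for _ in range(30 * steps_per_unit):  # integrate to t = 30
    u = step(u)
x1 = front_position(u)
for _ in range(30 * steps_per_unit):  # integrate on to t = 60
    u = step(u)
x2 = front_position(u)

speed = (x2 - x1) / 30.0
print(speed)                          # close to 2, approached from below as t grows
```

Measuring the speed over a late time window hides most of the slow O(1/t) transient; an oscillatory initial profile, as in the abstract, would instead produce the modulated travelling waves discussed there.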
APA, Harvard, Vancouver, ISO, and other styles
49

Kogan, V. R. "Method of quasiclassical green function in different problems of mesoscopic physics." [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968833500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Strachan, John Paul 1978. "Instrumentation and algorithms for electrostatic inverse problems." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/18015.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2001.
Includes bibliographical references (leaf 89).
This thesis describes tracking objects with low-level electric fields. A physical model is presented that describes the important interactions and the required mathematical inversions. Sophisticated hardware used to perform the measurements is described in detail. Finally, the myriad applications of electric field sensing are discussed. The main application goal for this thesis is to make an efficient 3D mouse using electric field sensing technology.
by John Paul Strachan.
S.B.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles