To see the other types of publications on this topic, follow the link: Invariance scales.

Dissertations / Theses on the topic 'Invariance scales'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Invariance scales.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Herst, David Evan Loran. "Cross-Cultural Measurement Invariance Of Work/Family Conflict Scales Across English-Speaking Samples." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000181.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Uzdavines, Alexander William. "Test 'Em All and Let God Get Sorted Out: Re-Validating, Modifying, and Integrating God Health Locus of Control Scales." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1591819824966006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Brand-Labuschagne, Lelani. "Development and validation of new scales for psychological fitness and work characteristics of blue collar workers / Lelani Brand-Labuschagne." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4429.

Full text
Abstract:
Over the last decade the focus has shifted to ensuring a holistic view of employee well-being in organisations by focusing on both physical and psychological well-being. Previous research suggests that work characteristics and psychological work-related well-being influence both individual outcomes (i.e. health) and organisational outcomes (i.e. commitment, safety, productivity, etc.). Moreover, the increasing importance of focusing on the work-related psychological well-being of employees is evident in legislation from around the world. In South Africa the Occupational Health and Safety legislation, specifically the Construction Regulations, also recognises the importance of the psychological well-being of employees and refers to it as "psychological fitness". However, no clear definition or instrument for psychological fitness exists. Similarly, no instrument exists to measure the work characteristics of blue-collar workers. The objectives of this research were 1) to propose a definition for psychological fitness of blue-collar employees, 2) to propose a theoretical framework to better our understanding of psychological fitness, 3) to develop a psychological fitness instrument for blue-collar employees that is suitable for the South African context, 4) to test the psychometric properties of the newly developed psychological fitness instrument, 5) to develop a work characteristics questionnaire for blue-collar mine workers to gain insight into their work experiences, and 6) to evaluate the psychometric properties of the newly developed job demands-resources scale for blue-collar mine workers. The empirical study consisted of two phases. During the first phase, following an extensive literature review, a definition and theoretical framework for psychological fitness were proposed. Thereafter, a new instrument for measuring psychological fitness was developed and tested. An instrument for measuring the work characteristics of blue-collar mine workers was also developed to further the understanding of their work experiences. During the second phase, the psychometric properties of the newly developed psychological fitness instrument were tested (i.e. factorial validity, factorial invariance, reliability and external validity; N = 2769). Furthermore, the psychometric properties of the newly developed job demands-resources scale for blue-collar workers were also investigated (i.e. factorial validity, reliability and the relationship with theoretically relevant external variables; N = 361). During the conceptualisation process, a definition of psychological fitness was proposed based on previous work-related well-being literature. The work-related well-being concepts distress and eustress were proposed as indicators of psychological fitness. Psychological fitness was therefore defined as a state in which an employee displays high levels of emotional and mental energy and high levels of psychological motivation to be able to work and act safely. The dimensions of burnout and engagement were proposed as possible indicators of psychological fitness and included exhaustion, mental distance, cognitive weariness, vitality and work devotion. Furthermore, the underlying work-related well-being theories and models were identified as the theoretical framework to enable the development of a questionnaire for psychological fitness. In order to ensure that low-literacy employees understand the meaning of each questionnaire, close attention was paid to the development of the items.
Firstly, the psychological fitness instrument (SAPFI) for blue-collar employees has been translated into all the official languages of South Africa following a multistage translation process. Secondly, the job demands-resources scale for blue collar mine workers (JDRSM) has been translated into the three most commonly spoken languages (Sesotho, isiXhosa and Setswana) by employees working in this specific mine. During this phase various problematic items were identified and eliminated from both questionnaires using the Rasch measurement model. The final phase included the validation study where the psychometric properties of both the new instruments were investigated. The SAPFI results provided evidence for factorial validity, factorial invariance, reliability and significant relations with external variables of the distress scale. Although evidence was provided for the factorial validity, reliability and external validity of the eustress scale, factorial invariance could not be confirmed. Furthermore, the JDRSM results provided evidence for the factorial validity, reliability (except for the workload scale) and external validity. Recommendations for future research were made.
Thesis (Ph.D. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2011.
APA, Harvard, Vancouver, ISO, and other styles
4

van Kolck, U. "Unitarity and Discrete Scale Invariance." SPRINGER WIEN, 2017. http://hdl.handle.net/10150/626026.

Full text
Abstract:
While the complexity of some many-body systems may stem from a profusion of distinct scales, as we approach two-body unitarity (through experimental control or as a theoretical limit) rich structures exist even though there is no more than one essential scale. I comment, from the point of view of effective field theory, on some current problems in the transition from few to many bodies in bosonic and multi-state fermion systems, where order emerges from the discrete scale invariance associated with a single, contact three-body force.
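For orientation, the discrete scale invariance referred to above is commonly illustrated by the Efimov spectrum of three identical bosons at two-body unitarity, in which the bound-state energies form a geometric tower. The following relations are standard (the universal constant s_0 and the energies E_n are textbook notation, not taken from this thesis):

    \[
      E_{n+1} = e^{-2\pi/s_0}\, E_n \approx \frac{E_n}{515}, \qquad s_0 \approx 1.00624,
    \]
    \[
      \text{so observables are invariant only under the discrete rescaling } \lambda \to e^{\pi/s_0}\lambda \approx 22.7\,\lambda .
    \]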
APA, Harvard, Vancouver, ISO, and other styles
5

Currie, Warren J. S. "Scale-invariance and patchiness in the plankton." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ61972.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Meinke, Alexander. "Applications of the Extremal Functional Bootstrap." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-26112018-120129/.

Full text
Abstract:
The study of conformal symmetry is motivated through an example in statistical mechanics and then rigorously developed in quantum field theories in general spatial dimensions. In particular, primary fields are introduced as the fundamental objects of such theories and then studied in the formalism of radial quantization. The implications of conformal invariance on the functional form of correlation functions are studied in detail. Conformal blocks are defined and various approaches to their analytical and numerical calculation are presented with a special emphasis on the one-dimensional case. Building on these preliminaries, a modern formulation of the conformal bootstrap program and its various extensions are discussed. Examples are given in which bounds on the scaling dimensions in a one-dimensional theory are derived numerically. Using these results I motivate the technique of using the extremal functional bootstrap which I then develop in more detail. Many technical details are discussed and examples shown. After a brief discussion of conformal field theories with a boundary I apply numerical methods to find constraints on the spectrum of the 3D Ising model. Another application is presented in which I study the 4-point function on the boundary of a particular theory in Anti-de-Sitter space in order to approximate the mass spectrum of the theory.
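For readers new to the bootstrap program mentioned above, its central constraint is crossing symmetry of the four-point function of a scalar primary. In standard notation (cross-ratios u and v, conformal blocks g_{Δ,ℓ}, squared OPE coefficients λ_O², external dimension Δ_φ; this is the generic textbook form rather than a result of the thesis):

    \[
      \sum_{\mathcal{O}} \lambda_{\mathcal{O}}^{2}
      \left[ v^{\Delta_\phi}\, g_{\Delta,\ell}(u,v) - u^{\Delta_\phi}\, g_{\Delta,\ell}(v,u) \right] = 0 .
    \]

Acting on this sum rule with suitable linear (extremal) functionals is what produces the numerical bounds on scaling dimensions described in the abstract.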
APA, Harvard, Vancouver, ISO, and other styles
7

Dewar, R. C. "Configurational studies of scaling phenomena." Thesis, University of Edinburgh, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Font, Clos Francesc. "On the Scale Invariance of certain Complex Systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/308310.

Full text
Abstract:
Complexity Science is an interdisciplinary field of study that applies ideas and methods mostly from statistical physics and critical phenomena to a variety of systems in almost any other field, from Biology to Economics, to Geology or even Sports Science. In essence, it attempts to challenge the reductionist approach to scientific inquiry by claiming that "the total is more than the sum of its parts" and that, therefore, reductionism shall ultimately fail: When a problem or system is analyzed by studying its constituent units, and these units are subsequently analyzed in terms of even simpler units, and so on, then a descending hierarchy of realms of study is formed. And while the system might be somewhat understood in terms of different concepts at each different level, from the coarser description down to its most elementary units, there is no guarantee of a successful bottom-up, comprehensive "reconstruction" of the system. Reductionism only provides a way down the hierarchy of theories, i.e., towards those supposedly more basic and elementary; Complexity aims at finding a way back home, that is, from the basic elementary units up to the original object of study. Scale invariance is the property of being invariant under a scale transformation. Thus, scale-invariant systems lack characteristic scales, as rescaling their variables leaves them unchanged.
This is considered of importance in Complexity Science, because it provides a bridge between different realms of physics, linking the microscopic world with the macroscopic world. This Thesis studies the scale invariant properties of the frequency-count representation of Zipf's law in natural languages, the type-token growth curve of general Zipf's systems and the distribution of event durations in a thresholded birth-death process. It is shown that some properties of these systems can be expressed as scaling laws, and are therefore scale-invariant. The associated scaling exponents and scaling functions are determined.
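As background for the scaling laws mentioned above, Zipf's rank-frequency law and a generic finite-size scaling form can be written as follows (standard forms with unspecified exponents α, γ, ν and scaling function g, quoted here only for orientation):

    \[
      f(r) \propto r^{-\alpha} \quad (\alpha \approx 1 \text{ for natural language}),
      \qquad
      F(x, L) = L^{\gamma}\, g\!\left( x / L^{\nu} \right),
    \]

where the second relation expresses scale invariance: rescaling x and the system size L together leaves the collapse onto the single function g unchanged.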
APA, Harvard, Vancouver, ISO, and other styles
9

Pflug, Karen. "Generalized scale invariance, differential rotation and cloud texture." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61076.

Full text
Abstract:
The standard 2D/3D picture of atmospheric dynamics of two distinct isotropic regimes separated by a "meso-scale gap" has been seriously questioned in recent years. Using satellite cloud images and the formalism of generalized scale invariance (GSI), we test the contrary hypothesis that cloud radiance fields are scaling in the range 1-1000 km.
Using a two-dimensional representation of GSI and three new analysis techniques, we test the following relation for each picture: $\langle \vert F(\lambda^{\tilde G} \vec k) \vert^2 \rangle = \lambda^{-s} \langle \vert F(\vec k) \vert^2 \rangle$, where $F(\vec k)$ is the Fourier amplitude at wavenumber $\vec k$, $\lambda$ is the scale ratio and $\tilde G$ is the generator of the semi-group of scale changes in Fourier space. Since we test only the linear approximation to GSI, $\tilde G$ is approximated here as a matrix.
For the three texturally and meteorologically very different images analyzed, we find three different generators that generally well reproduce the Fourier space anisotropy. These results show that linear GSI is a workable approximation for studying the atmosphere and that GSI can be used for cloud classification and modeling over this important mesoscale range.
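In the isotropic special case where the generator $\tilde G$ is the identity, the relation above reduces to a pure power law for the angle-averaged power spectrum. The sketch below is a minimal numpy illustration of that special case only, not the anisotropic GSI analysis of the thesis; the function name and the binning parameter are illustrative choices:

    import numpy as np

    def isotropic_spectral_slope(field, n_bins=30):
        """Estimate s in <|F(k)|^2> ~ |k|^(-s) for a 2D field, i.e. the
        isotropic special case of the GSI relation (G = identity)."""
        f = np.fft.fftshift(np.fft.fft2(field - field.mean()))
        power = np.abs(f) ** 2
        ny, nx = field.shape
        ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                             np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
        k = np.hypot(kx, ky)
        mask = k > 0
        bins = np.logspace(np.log10(k[mask].min()), np.log10(k[mask].max()), n_bins)
        which = np.digitize(k[mask], bins)
        # Average |F|^2 in logarithmic wavenumber bins, skipping empty bins.
        k_mean = np.array([k[mask][which == i].mean()
                           for i in range(1, n_bins) if np.any(which == i)])
        p_mean = np.array([power[mask][which == i].mean()
                           for i in range(1, n_bins) if np.any(which == i)])
        slope, _ = np.polyfit(np.log(k_mean), np.log(p_mean), 1)
        return -slope  # s such that power ~ k^(-s)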
APA, Harvard, Vancouver, ISO, and other styles
10

Liang, Jie Ying. "The phenomenology and cosmological implications of scale invariance." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/18157.

Full text
Abstract:
The discovery of the Higgs boson completes the Standard Model (SM) and confirms the mass generation mechanism through spontaneous electroweak symmetry breaking (EWSB). Meanwhile, the SM is only a low-energy effective theory (EFT) that is unable to explain problems such as the origin of dark matter and neutrino masses, and thus "new physics" is expected to arise at some scale Λ above the electroweak (EW) scale. In this case a hierarchy problem arises due to the quadratic sensitivity of the quantum corrections of the Higgs mass to the high scale Λ. As an alternative to supersymmetry and technicolour models, both of which are still lacking experimental evidence, this thesis considers scale invariance (SI) as a symmetry for solving this hierarchy problem and examines its cosmological implications. Considering the minimal SM as an EFT of a fundamental theory with spontaneously broken SI, a study was performed on the effective theory that exhibits classical SI through the dilaton field. Next, a scale-invariant model was built with dynamical EWSB via top condensation, within the minimal top condensate see-saw framework. In both models, the scalar masses are generated via dimensional transmutation and depend on the (radiatively stable) hierarchy between the EW scale and the large dilaton VEV that defines Λ in this thesis. The cosmological phase transition (PT) is studied within the first aforementioned model, which predicts that the transition of the universe into the chiral symmetry breaking vacuum is of first order, with experimental signals. Finally, a new class of natural inflation models based on hidden SI was introduced, where inflation proceeds without the need for unnatural fine-tuning and the cosmological observables derived in the conformal limit are consistent with current measurements. These models generically predict the existence of a light scalar dilaton, and searches for such a particle may reveal a fundamental role of scale symmetry in Nature.
APA, Harvard, Vancouver, ISO, and other styles
11

Tremblay, Luc 1969. "Can scale invariance be realized as a quantum symmetry?" Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=55399.

Full text
Abstract:
The theoretical prediction of the value of the cosmological constant is far from being in accord with observations. One way to try to solve this problem is to use a scale invariant theory; however, no standard Lagrangian is truly scale invariant. In this thesis, we try to obtain a scale invariant Lagrangian by adding a suitable counterterm to the Lagrangian for the $\lambda \phi^4$ model. We then apply this technique to different models to see if this method can be applied in very different situations. We also check if flat directions in a scale invariant potential get lifted by renormalization.
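For context, the $\lambda \phi^4$ Lagrangian referred to above is classically scale invariant in four dimensions, while quantum corrections break the symmetry through the running of the coupling (standard facts quoted for orientation, not results of this thesis; a denotes the dilatation parameter):

    \[
      \mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi - \frac{\lambda}{4!}\,\phi^4 ,
      \qquad
      x \to a\,x, \quad \phi(x) \to a^{-1}\,\phi(x/a),
    \]
    \[
      \beta(\lambda) = \frac{3\lambda^2}{16\pi^2} + \dots \neq 0 \ \text{at one loop, so the classical symmetry is anomalous.}
    \]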
APA, Harvard, Vancouver, ISO, and other styles
12

Levi, D. M., David J. Whitaker, and A. Provost. "Amblyopia masks the scale invariance of normal human vision." ARVO, 2009. http://hdl.handle.net/10454/4549.

Full text
Abstract:
In normal vision, detecting a kink (a change in orientation) in a line is scale invariant: it depends solely on the length/width ratio of the line (D. Whitaker, D. M. Levi, & G. J. Kennedy, 2008). Here we measure detection of a change in the orientation of lines of different length and blur and show that strabismic amblyopia is qualitatively different from normal foveal vision, in that: 1) stimulus blur has little effect on performance in the amblyopic eye, and 2) integration of orientation information follows a different rule. In normal foveal vision, performance improves in proportion to the square root of the ratio of line length to blur (L:B). In strabismic amblyopia improvement is proportional to line length. Our results are consistent with a substantial degree of internal neural blur in first-order cortical filters. This internal blur results in a loss of scale invariance in the amblyopic visual system. Peripheral vision also shows much less effect of stimulus blur and a failure of scale invariance, similar to the central vision of strabismic amblyopes. Our results suggest that both peripheral vision and strabismic amblyopia share a common bottleneck in having a truncated range of spatial mechanisms, a range that becomes more restricted with increasing eccentricity and depth of amblyopia.
Leverhulme Trust, Wellcome Trust, NIH
APA, Harvard, Vancouver, ISO, and other styles
13

Kerr, Dermot. "Autonomous Scale Invariant Feature Extraction." Thesis, University of Ulster, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Christou, Alexis. "Dynamics on scale-invariant structures." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:15fd6e54-0ac4-4f4d-8115-0ee51ad74504.

Full text
Abstract:
We investigate dynamical processes on random and regular fractals. The (static) problem of percolation in the semi-infinite plane introduces many pertinent ideas including real space renormalisation group (RSRG) fugacity transformations and scaling forms. We study the percolation probability to determine the surface critical behaviour and to establish exponent relations. The fugacity approach is generalised to study random walks on diffusion-limited aggregates (DLA). Using regular and random models, we calculate the walk dimensionality and demonstrate that it is consistent with a conjecture by Aharony and Stauffer. It is shown that the kinetically grown DLA is in a distinct dynamic universality class from lattice animals. Similarly, the speculation of Helman-Coniglio-Tsallis regarding diffusion on self-avoiding walks (SAWs) is shown to be incorrect. The results are corroborated by an exact enumeration analysis of the internal structure of SAWs. A 'spin' and field theoretic Hamiltonian formulation for the conformational and resistance properties of random walks is presented. We consider Gaussian random walks, SAWs, spiral SAWs and valence walks. We express resistive susceptibilities as correlation functions and hence ε-expansions are calculated for the resistance exponents. For SAWs, the local crosslinks are shown to be irrelevant and we calculate corrections to scaling. A scaling description is introduced into an equation-of-motion method in order to study spin wave damping in d-dimensional isotropic Heisenberg ferro-, antiferro- and ferri-magnets near p_c. Dynamic scaling is shown to be obeyed by the Lorentzian spin wave response function and lifetime. The ensemble of finite clusters and multicritical behaviour is also treated. In contrast, the relaxational dynamics of the dilute anisotropic Heisenberg model is shown to violate conventional dynamic scaling near the percolation bicritical point but instead to satisfy a singular scaling behaviour arising from activation of Bloch walls over percolation cluster energy barriers.
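The "walk dimensionality" computed above can be defined through the anomalous growth of a random walker's mean-square displacement on the fractal substrate (a standard definition included only for orientation, not a result reproduced from the thesis):

    \[
      \langle r^2(t) \rangle \propto t^{2/d_w}, \qquad
      d_w = 2 \ \text{for ordinary diffusion}, \quad d_w > 2 \ \text{on fractal substrates such as DLA.}
    \]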
APA, Harvard, Vancouver, ISO, and other styles
15

Idiculla, Thomaskutty B. "Gender Invariance of Behavior and Symptom Identification Scale Factor Structure." Thesis, Boston College, 2008. http://hdl.handle.net/2345/1815.

Full text
Abstract:
Thesis advisor: Thomas O'Hare
The Behavior and Symptom Identification Scale 24 (BASIS-24) is a psychiatric outcome measure used for inpatient and outpatient populations. This 24-item measure comprises six subscales: depression/functioning; interpersonal relationships; self-harm; emotional lability; psychosis; and substance abuse. Earlier studies examined the reliability and validity of the BASIS-24, but none empirically examined its factor structure across gender. The purpose of this study was therefore to assess the construct validity of the BASIS-24 six-factor model and find evidence of configural, metric, strong and strict factorial invariance across gender. The sample consisted of 1398 psychiatric inpatients who completed the BASIS-24 at admission and discharge at 11 facilities nation-wide. Confirmatory factor analyses were used to test measurement invariance of the BASIS-24 six-factor model across males and females. The single confirmatory factor analysis showed that the original six-factor model of the BASIS-24 provided an acceptable fit to the male sample at admission (RMSEA=0.058, SRMR=0.070, CFI=0.975, NNFI=0.971 and GFI=0.977) and at discharge (RMSEA=0.059, SRMR=0.078, CFI=0.977, NNFI=0.972, and GFI=0.969). The goodness-of-fit indices for the female group at admission (RMSEA=0.055, SRMR=0.067, CFI=0.980, NNFI=0.976, and GFI=0.983) and at discharge (RMSEA=0.055, SRMR=0.079, CFI=0.98, NNFI=0.977, and GFI=0.971) also revealed that the six-factor model fit the data reasonably well. The goodness-of-fit indices between the unconstrained and constrained models showed that all four multi-group models were equivalent for both male and female samples at admission and discharge in terms of goodness-of-fit examined through the ΔCFI, and that all of them showed an acceptable fit to the data. The decrease in CFI was <0.008 for the admission sample and <0.003 for the discharge sample, and both fell below the 0.01 cut-off. This indicates that configural, metric, strong and strict factorial invariance of the BASIS-24 exist across males and females. The two important contributions of the present study are: 1) the BASIS-24 can be used as a reliable and valid symptom measurement tool for assessing psychiatric inpatient populations, allowing quantitative differences in the magnitude of patient symptoms and functioning to be compared across genders; 2) the current study provides an example of useful statistical methodology for examining specific questions related to factorial invariance of the BASIS-24 instrument across gender. Implications for social work practice and research are discussed.
Thesis (PhD) — Boston College, 2008. Submitted to: Boston College. Graduate School of Social Work. Discipline: Social Work.
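The ΔCFI decision rule used above is easy to state in code. A minimal sketch follows (the function name and the fit values are hypothetical illustrations; it assumes the CFI of each nested invariance model has already been obtained from whatever SEM software is used):

    def invariance_supported(cfi_less_constrained, cfi_more_constrained, cutoff=0.01):
        """The more constrained model is treated as equivalent to the less
        constrained one if CFI drops by less than the cut-off (0.01 here,
        matching the criterion quoted in the abstract)."""
        return (cfi_less_constrained - cfi_more_constrained) < cutoff

    # Hypothetical CFI values for configural -> metric -> strong -> strict models.
    cfis = {"configural": 0.975, "metric": 0.972, "strong": 0.969, "strict": 0.968}
    steps = list(cfis)
    for previous, current in zip(steps, steps[1:]):
        print(current, "invariance supported:",
              invariance_supported(cfis[previous], cfis[current]))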
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Jue Computer Science & Engineering Faculty of Engineering UNSW. "Vehicle tracking using scale invariant features." Publisher: University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/41290.

Full text
Abstract:
Object tracking is an active research topic in computer vision and has application in several areas, such as event detection and robotics. Vehicle tracking is used in Intelligent Transport System (ITS) and surveillance systems. Its reliability is critical to the overall performance of these systems. Feature-based methods that are used to represent distinctive content in visual frames are one approach to vehicle tracking. Existing feature-based tracking systems can only track vehicles under ideal conditions. They have difficulties when used under a variety of conditions, for example, during both the day and night. They are highly dependent on stable local features that can be tracked for a long time period. These local features are easily lost because of their local property and image noise caused by factors such as headlight reflections and sun glare. This thesis presents a new approach, addressing the reliability issues mentioned above, tracking whole feature groups composed of feature points extracted with the Scale Invariant Feature Transform (SIFT) algorithm. A feature group includes several features that share a similar property over a time period and can be tracked to the next frame by tracking individual feature points inside it. It is lost only when all of the features in it are lost in the next frame. We create these feature groups by clustering individual feature points using distance, velocity and acceleration information between two consecutive frames. These feature groups are then hierarchically clustered by their inter-group distance, velocity and acceleration information. Experimental results show that the proposed vehicle tracking system can track vehicles with an average accuracy of over 95%, even when the vehicles have complex motions in noisy scenes. It generally works well even in difficult environments, such as for rainy days, windy days, and at night. We are surprised to find that our tracking system locates and tracks motor bikes and pedestrians. This could open up wider opportunities, and further investigation and experiments are required to confirm the tracking performance for these objects. Further work is also required to track more complex motions, such as rotation and articulated objects with different motions on different parts.
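As a rough illustration of the feature-grouping idea (not the thesis's full hierarchical scheme, which also uses velocity and acceleration over several frames), the sketch below detects SIFT keypoints in two consecutive grayscale frames with OpenCV, keeps distinctive matches via the ratio test, and clusters them by displacement so that features moving together fall into the same group; the function name and the clustering parameters eps and min_samples are illustrative choices:

    import cv2
    import numpy as np
    from sklearn.cluster import DBSCAN

    def group_sift_matches(frame_a, frame_b, eps=5.0, min_samples=3):
        """Cluster SIFT matches between two consecutive (grayscale) frames by
        their displacement vectors, a crude stand-in for per-vehicle groups."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(frame_a, None)
        kp_b, des_b = sift.detectAndCompute(frame_b, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [pair[0] for pair in matches
                if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
        disp = np.array([np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
                         for m in good])
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(disp)
        return good, labels  # label -1 marks unclustered (likely noisy) features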
APA, Harvard, Vancouver, ISO, and other styles
17

Tonge, Ashwini Kishor. "Object Recognition Using Scale-Invariant Chordiogram." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984116/.

Full text
Abstract:
This thesis describes an approach to object recognition using the chordiogram shape-based descriptor. Global shape representations are highly susceptible to clutter generated by the background or other irrelevant objects in real-world images. To overcome this problem, we aim to extract a precise object shape using superpixel segmentation, perceptual grouping, and connected components. The employed shape descriptor, the chordiogram, is based on geometric relationships of chords generated from pairs of boundary points of an object. The chordiogram descriptor captures holistic properties of the shape and has also proven suitable for object detection and digit recognition. Additionally, it is translation invariant and robust to shape deformations. In spite of these excellent properties, the chordiogram is not scale-invariant. To this end, we propose scale-invariant chordiogram descriptors and aim to achieve similar performance before and after applying scale invariance. Our experiments show that we achieve similar performance with and without scale invariance for silhouettes and real-world object images. We also show experiments at different scales to confirm that we obtain scale invariance for the chordiogram.
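One simple way to obtain the scale invariance discussed above is to normalize every chord length by the shape diameter before histogramming, so that uniformly rescaling the boundary leaves the descriptor unchanged. The sketch below illustrates only that normalization idea and is not the exact descriptor of the thesis (a full chordiogram also uses information about the boundary at the chord endpoints); the function name and bin counts are illustrative:

    import numpy as np

    def scale_invariant_chordiogram(boundary, n_len_bins=8, n_ang_bins=12):
        """Histogram of chords between boundary-point pairs, with chord lengths
        normalized by the shape diameter so the descriptor is unchanged under
        uniform rescaling of the shape."""
        pts = np.asarray(boundary, dtype=float)           # (N, 2) boundary points
        diff = pts[None, :, :] - pts[:, None, :]          # all pairwise chords
        iu = np.triu_indices(len(pts), k=1)
        chords = diff[iu]
        lengths = np.linalg.norm(chords, axis=1)
        diameter = lengths.max()                          # largest chord = diameter
        angles = np.arctan2(chords[:, 1], chords[:, 0])   # chord orientation
        hist, _, _ = np.histogram2d(lengths / diameter, angles,
                                    bins=[n_len_bins, n_ang_bins],
                                    range=[[0, 1], [-np.pi, np.pi]])
        return hist / hist.sum()                          # normalize to a distribution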
APA, Harvard, Vancouver, ISO, and other styles
18

Pecknold, Sean. "Modeling anisotropic geophysical fields using generalized scale invariance and universal multifractals." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/NQ64638.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Pecknold, Sean. "Modeling anisotropic geophysical fields using generalized scale invariance and universal multifractals." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36673.

Full text
Abstract:
This thesis focuses on a universal multifractal, Fractionally Integrated flux model. Although this type of model is one of the first continuous multiplicative cascade models developed, and in fact the first chapter of this thesis dates back six years, it still remains an excellent model for a wide range of systems. Using the framework of GSI, extended in this thesis, it has been shown here that sea ice, cloud radiances and topography are all fields that can be modeled by universal multifractals. The multifractal parameters of such fields are given, and classification of anisotropies using GSI parameters is made. In addition, the anisotropic universal multifractal model is used for magnetization, and the implications for magnetic anomaly field are examined. The model agrees not only with magnetization data, but with the magnetic field data, and correctly predicts what could otherwise be considered an anomalous scale break. This theory is easily generalized to gravitational potential fields and rock density, and has applications in the modeling of other physical systems.
APA, Harvard, Vancouver, ISO, and other styles
20

Lindeberg, Tony. "Scale Selection Properties of Generalized Scale-Space Interest Point Detectors." KTH, Beräkningsbiologi, CB, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101220.

Full text
Abstract:
Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results of the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.
Image descriptors and scale-space theory for spatial and spatio-temporal recognition
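For orientation, the classical scale-selection idea underlying this framework is that interest points are space-scale extrema of scale-normalized differential operators. The sketch below implements only the basic scale-normalized Laplacian-of-Gaussian case with scipy, not the paper's generalized operators or weighted scale-selection mechanisms; the function name, sigma values and threshold are illustrative:

    import numpy as np
    from scipy.ndimage import gaussian_laplace, maximum_filter

    def log_interest_points(image, sigmas=(1.6, 2.3, 3.2, 4.5, 6.4), thresh=0.03):
        """Scale-normalized LoG detector: keep points that are local maxima of
        |sigma^2 * LoG(image)| over both space and scale."""
        img = image.astype(float)
        img /= img.max()
        stack = np.stack([s**2 * np.abs(gaussian_laplace(img, s)) for s in sigmas])
        # Keep a point if it is the maximum of its 3x3x3 space-scale neighbourhood.
        local_max = (stack == maximum_filter(stack, size=3)) & (stack > thresh)
        scale_idx, ys, xs = np.nonzero(local_max)
        return [(x, y, sigmas[i]) for i, y, x in zip(scale_idx, ys, xs)]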
APA, Harvard, Vancouver, ISO, and other styles
21

Le, Thi-Tinh-Minh. "Modélisation dynamique des réseaux d'énergie électrique tenant compte des propriétés d'invariance d'échelle." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT020/document.

Full text
Abstract:
L'arrivée massive de la production décentralisée, l'intégration de technologies d'information et de communication et de convertisseurs d'électronique de puissance permettent aux réseaux électriques de devenir plus flexibles, plus accessibles, plus efficaces. Mais ils deviennent aussi plus complexes et plus difficiles à modéliser, à analyser et à dimensionner. Dans cette thèse, nous allons nous focaliser sur le problème de la modélisation dynamique du réseau électrique. En effet, la complexité du fonctionnement du réseau électrique moderne rend encore plus indispensable de comprendre comment il se comporte suite à des perturbations ou tout simplement à des changements de son état de fonctionnement. C'est cette compréhension qui doit permettre d'éviter que le réseau perde sa stabilité. Grâce aux modèles développés dans la thèse, on veut notamment retrouver des liens de connaissance forts entre le comportement dynamique et les propriétés topologiques du réseau. On espère ainsi pouvoir fournir à termes des préconisations pour l'évolution des topologies de réseaux ou de leurs modes d'exploitation. Pour mener à bien ce travail, l'invariance d'échelle d'un réseau électrique est tout d'abord explorée. Pour cela, des méthodes issues de la géométrie fractale sont exposées et appliquées à des réseaux réalistes. Partant du constat que les réseaux électriques étudiés présentent une invariance d'échelle sur une plage d'observation importante, une nouvelle modélisation dynamique est proposée. Cette modélisation a l'intérêt d'une représentation plus parcimonieuse que les représentations classiques par des approches boite noire et permet de conserver des liens de connaissance avec entre la topologie et les propriétés dynamiques<br>The influx of distributed generation, the integration of information as well as communication technologies and the integration of electronic power converters allows electrical grids to become more flexible, more accessible and more effective. However they become at the same time more complex thus making the modeling, analyzing and sizing more difficult. This thesis will focus on the problem of dynamic modeling of electrical networks. Indeed, the operation's complexity of the modern power grid makes it even more essential to understand how it behaves after disturbances or just simply after changes in operation condition. It is this understanding that should allow one to prevent the case that the system loses its stability. With models developed in this thesis, we particularly want to find strong links between the dynamic behavior and the topological properties of the network. It is hoped to provide eventually propositions for evolution of topology or operation modes of networks. To carry through this study, the scale invariance of an electrical network is first explored. For this purpose, methods issued from fractal geometry are presented and applied to realistic networks. Noting that the considered electrical networks exhibit scale invariance over a large observation range, a new dynamic modeling is proposed
APA, Harvard, Vancouver, ISO, and other styles
22

Neil, Geoffrey. "Shape recognition using fractal geometry." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Davis, Anthony. "Radiation transport in scale invariant optical media." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39357.

Full text
Abstract:
We focus primarily on the bulk response to external illumination of conservatively scattering thick inhomogeneous media (or simply "clouds") which are exactly or statistically scale invariant; these radiative properties are compared to those of homogeneous media with the same shapes and masses. Also considered are the ensemble-average responses of multifractal distributions of optical thicknesses and the closely related spatially averaged responses obtained within the "independent pixel" approximation to inhomogeneous transfer. In all cases, the nonlinearity of the radiation/density field coupling induces systematic and specific variability effects. Generally speaking, the details of the scattering process and of the boundary shape affect only prefactors whereas "anomalous" scaling exponents are found for extreme forms of internal variability which, moreover, are different for different physical transport models (e.g., kinetic versus diffusion approaches). Finally, detailed numerical computations of radiation flows inside a log-normal multifractal illustrate the basic inhomogeneous transport mechanism of "channeling."
APA, Harvard, Vancouver, ISO, and other styles
24

Saad, Elhusain Salem. "Defocus Blur-Invariant Scale-Space Feature Extractions." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1418907974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Bergström, Johannes. "Signatures of Unparticle Self-Interactions at the Large Hadron Collider." Thesis, KTH, Teoretisk partikelfysik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-79613.

Full text
Abstract:
Unparticle physics is the physics of a hidden sector which is conformal in the infrared and coupled to the Standard Model. The concept of unparticle physics was introduced by Howard Georgi in 2007 and has since then received a lot of attention, including many studies of its phenomenology in different situations. After a review of the necessary background material, the implications of the self-interactions of the unparticle sector for LHC physics are studied. More specifically, analyses of four-body final states consisting of photons and leptons are performed. The results are upper bounds on the total cross sections as well as distributions of transverse momentum.
APA, Harvard, Vancouver, ISO, and other styles
26

Plascencia-Contreras, Alexis David. "Theory and phenomenology of classical scale invariance, dark matter and ultralight axions." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12867/.

Full text
Abstract:
The Standard Model of particle physics does not provide a complete description of nature; many questions remain unsolved. In this work, we study the theory and phenomenology of different models beyond the Standard Model that address some of its shortcomings. Motivated by naturalness arguments, we discuss the idea of classical scale invariance, where all the fundamental scales are generated dynamically via quantum effects. We apply this approach to an extension of the inert doublet model and present a model that addresses dark matter, neutrino masses and the baryon asymmetry of the Universe simultaneously. We then study a set of simplified models of dark matter to address the effects of three-point interactions between the dark matter particle, its dark coannihilation partner, and the Standard Model degree of freedom, which we take to be the tau lepton. In these models, the contributions from dark matter coannihilation channels are highly relevant for a determination of the correct relic abundance. Firstly, we investigate these effects as well as the discovery potential for dark matter coannihilation partners at the LHC via searches for long-lived electrically charged particles. Secondly, we study the sensitivity that future linear electron-positron colliders will have to these models in the region of parameter space where the coannihilation partner decays promptly. Lastly, we discuss an observable for the detection of ultralight axions. In the presence of an ultralight axion, a cloud of these particles will form around a rotating black hole through the mechanism of superradiance. This inhomogeneous pseudo-scalar field configuration behaves like an optically active medium. Consequently, as light passes through the axion cloud it experiences polarisation-dependent bending; we argue that for some regions in the parameter space of axion-like particles this effect can be observed by current radio telescope arrays.
APA, Harvard, Vancouver, ISO, and other styles
27

Michieli, Niccolò. "Innovative Plasmonic Nanostructures Based on Translation or Scale Invariance for Nano-Photonics." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423502.

Full text
Abstract:
In this thesis innovative plasmonic nanostructures have been studied under many aspects, from the synthesis to the characterization and finite elements modeling. We focused on two kinds of ordered nanostructures: (i) those exhibiting bidimensional (2D) translational invariance and (ii) those possessing autosimilarity and fractal character. Three kinds of nanostructures characterized by 2D periodicity have been analyzed: nanoprism arrays (NPA), nanohole arrays (NHA) and quasishell arrays (QSA), whose building blocks are, respectively, metallic prisms with triangular-like base, holes passing through a metal thin film and metallic non-closed shells around a dielectric core. The first kind is the base for biosensors, and in this case an optimization study has been performed to maximize sensitivity. The second one is the key for a fine control of the emission from excited Erbium ions, which overcomes the previous results obtained without nano-patterning. The third one is based on a novel approach to bi-metallic nanostructures fabrication, enabling the realization of plasmonic and magneto-plasmonic materials. The patterning at nano scales has been made cost-effective, as all these periodic systems are based on a cheap synthesis technique. Finally, nanostructures showing scale invariance, fractals, have been synthesized and thoroughly studied, both experimentally and with simulations. As a result, a universal role of correlation has been recognized in these plasmonic systems. Overall, this thesis gives insights on the physics underlying the plasmonic response of nanostructures which base their outstanding properties on symmetry, either on scaling or on translation.
APA, Harvard, Vancouver, ISO, and other styles
28

Federici, Bruno. "Interactions between large-scale invariants in infinite graphs." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/108882/.

Full text
Abstract:
This thesis is devoted to the study of a number of properties of graphs. Our first main result clarifies the relationship between hyperbolicity and non-amenability for plane graphs of bounded degree. This generalises a known result for Cayley graphs to bounded degree graphs. The second main result provides a counterexample to a conjecture of Benjamini asking whether a transient, hyperbolic graph must have a transient subtree. In Chapter 4 we endow the set of all graphs with two pseudometrics and we compare metric properties arising from each of them. The two remaining chapters deal with bi-infinite paths in ℤ² and geodetic Cayley graphs.
APA, Harvard, Vancouver, ISO, and other styles
29

Colledani, D. "A contribution toward the validation of the Junior Eysenck Personality Questionnaire-Revised (JEPQ-R) in the Italian context. Functioning and meaning of the Lie scale: Social desirability bias, social conformity, and religiosity." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424522.

Full text
Abstract:
The aim of the thesis was to provide a contribution toward the validation of the Junior Eysenck Personality Questionnaire-Revised (JEPQ-R) in the Italian context, providing in addition further evidence about the meaning and functioning of its Lie scale (social desirability scale). The first theoretical and introductory chapters of the essay are devoted to describing the main theories in the field of personality psychology. Great attention has been addressed to traits theories and to the development of personality. Furthermore special attention has been devoted to the Eysenck model, well-known as "Giants Three" or PEN model, because of the names of the three main dimensions (or traits) at the basis of the theory: Psychoticism (or tough-mindedness), Extraversion (as opposed to introversion ) and Neuroticism (as opposed to emotional stability). The experimental part, instead, has been organized into four main studies. The first, specifically, was aimed to provide a contribution toward the validation of the Junior Eysenck Personality Questionnaire-Revised (JEPQ-R) in the Italian context. To this purpose in the first step of the research the instrument was translated from English to Italian and afterward back-translated by a native English speaker, for the assessment of linguistic and cultural equivalence. Finally the questionnaire was administered to a large sample of adolescents (N = 595) aged between 13 and 17, and data were analyzed in order to test the metric characteristics of the instrument. Specifically reliability, validity, factor structure and its metric invariance (across genders and two age classes: 13-15 and 16-17) were tested; results supported the adequacy of the metric characteristics of the instrument as well as its invariance. Analyses suggested that scales have the same meaning across groups and reliability coefficients were in line with the results of the original version. Moreover validity coefficients of PEN-L scale, assessed in relation to another well-known validated questionnaire, such as: BFQ-2, provided support to the adequacy of the questionnaire. Further studies, moreover, were performed in order to better understand the functioning and meaning of the Lie scale of the questionnaire. Specifically, the second study analyzed in detail the factor structure of the scale and its strong invariance across two conditions: standard and "fake-good" instructions. Results supported the one-factor solution and its invariance. The third study was, instead, aimed to verify the effectiveness of Lie scale in identifying dissimulation tendencies. In this study the abbreviated version of the questionnaire (JEPQR-Abbreviated), comprising 24 items only (six items for each scale: PEN-Lie), was used. In the first part of the research the adequacy of the metric characteristics of the questionnaire (reliability and factor structure) was evaluated, while, subsequently some analyses were performed in order to test the effectiveness of the scale as fake-detector. Analyses were performed comparing self and informant-report and results suggested a limited effectiveness of the scale in assessing dissimulation tendencies, providing, on the contrary, some support for an interpretation more tied to a social conformity disposition. This suggestion was finally tested in the fourth study. 
In this research a structural equation model was tested in order to explore the relations between three religiosity facets (intrinsic, extrinsic, and quest orientations), the PEN traits and the Lie scale, conceived as a social conformity measure. The relationship between social desirability scales and religiosity, even though rather controversial, is in fact well known in the literature. In this study it was therefore suggested that this curious relationship could be better explained by conceiving the Lie scale as the measure of a social acquiescence disposition. Specifically, it was assumed that the relationship between the PEN-L traits and religiosity could be mediated by the four sets of values described in the Schwartz model (second-order factors: openness to change, conservatism, self-transcendence, and self-enhancement). In particular, it was hypothesized that the Lie scale, representing a social conformity measure, would show strong relations with conservatism-related values (security, tradition, conformity), which in turn were expected to play a role in religious experience. These assumptions were substantially supported by the empirical data of the present work, and some contributions were also provided regarding the controversial relations, described in the literature, between the PEN traits and religiosity. The thesis ends with a summary of the main results and with a comprehensive and systematic discussion of the main findings obtained in the research.
APA, Harvard, Vancouver, ISO, and other styles
30

Whitaker, David J., D. M. Levi, and Graeme J. Kennedy. "Integration across time determines path deviation discrimination for moving objects." PLoS, 2008. http://hdl.handle.net/10454/4521.

Full text
Abstract:
Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects or else avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale invariant' model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.
Wellcome Trust, Leverhulme Trust, NIH
APA, Harvard, Vancouver, ISO, and other styles
31

Cáncer Castillo, Víctor. "Non-linear elastic response of scale invariant solids." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671059.

Full text
Abstract:
The goal of this thesis is to apply modern field theory methods to understand the non-linear elastic (NLE) response of solids. The NLE response contains a large number of low-energy observable quantities that are not always easy to derive from the microscopic composition of the material. Essential actors in the elastic response are the phonons, which can be described as the Goldstone bosons of spontaneously broken spacetime symmetries. As such, their low-energy dynamics (including non-linearities) can be captured systematically by standard low-energy Effective Field Theory (EFT) methods. This naturally offers a novel approach to tackling NLE phenomenology. One main conclusion is that these low-energy effective methods do provide non-trivial information, such as relations among different NLE observables. We illustrate this by obtaining bounds on the maximum deformation that a material can tolerate, which can be expressed as a function of other NLE observables. A case of special interest is that of scale invariant (SI) solids. This includes two distinct sub-cases, since SI can be realized either as a manifest symmetry or as a spontaneously broken symmetry. The former case corresponds to a nontrivial fixed point and requires the use of holographic (AdS/CFT) techniques. The latter case can instead be described with more standard EFT methods. We compare the results obtained in the two cases and find that the resulting elasticity bounds differ significantly between the two sub-cases.
APA, Harvard, Vancouver, ISO, and other styles
32

Badri, Hicham. "Sparse and Scale-Invariant Methods in Image Processing." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0139/document.

Full text
Abstract:
In this thesis, we present new techniques based on the notions of sparsity and scale invariance to design fast and efficient image processing applications. Instead of using the popular l1-norm to model sparsity, we focus on the use of non-convex penalties that promote more sparsity. We propose a first-order approximation to estimate a solution of non-convex proximal operators, which makes it possible to use a wide range of penalties. We also address the problem of multi-sparsity, when the minimization problem is composed of several sparse terms, which typically arises in problems that require both a robust estimation to reject outliers and a sparse prior. These techniques are applied to various important problems in low-level computer vision such as edge-aware smoothing, image separation, robust integration and image deconvolution. We also propose to go beyond sparsity models and learn a non-local spectral mapping, with application to image denoising. Scale invariance is another notion that plays an important role in our work. Using this principle, a precise definition of edges can be derived which can be complementary to sparsity. More precisely, we can extract invariant features for classification from sparse representations in a deep convolutional framework. Scale invariance also permits extracting the pixels that carry the information needed for reconstruction, and improving optical flow estimation on turbulent images by imposing a sparse regularization on the local singularity exponents instead of regular gradients.
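For reference, the convex baseline that such non-convex penalties generalize is the l1 proximal operator, whose closed form is the soft-thresholding rule. The sketch below illustrates only that textbook baseline; the function name and test values are ours, not taken from the thesis.

import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||.||_1: shrinks each entry toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Example: denoise a sparse signal corrupted by Gaussian noise.
rng = np.random.default_rng(0)
signal = np.zeros(100)
signal[rng.choice(100, 5, replace=False)] = 3.0
noisy = signal + 0.3 * rng.standard_normal(100)
estimate = soft_threshold(noisy, 0.5)   # most small noisy entries become exactly zero
print(np.count_nonzero(estimate))

Non-convex penalties sharpen this shrinkage (smaller bias on large coefficients), which is why the thesis's first-order approximation of their proximal operators is useful.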
APA, Harvard, Vancouver, ISO, and other styles
33

Ross, Zachary E. "Probabilistic Fault Displacement Hazard Analysis For Reverse Faults and Surface Rupture Scale Invariance." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/457.

Full text
Abstract:
A methodology is presented for evaluating the potential surface fault displacement on reverse faults in a probabilistic manner. This methodology follows the procedures put forth for Probabilistic Fault Displacement Hazard Analysis (PFDHA). Empirical probability distributions that are central to performing a PFDHA are derived from field investigations of reverse faulting events. Statistical analyses are used to test previously assumed properties of scale invariance with respect to magnitude for normalized displacement. It is found that normalized displacement is statistically invariant with respect to magnitude and focal mechanism, allowing for the combination of a large number of events into a single dataset for regression purposes. An empirical relationship is developed using this single dataset to be used as a fault displacement prediction equation. A PFDHA is conducted on the Los Osos fault zone in central California and a hazard curve for fault displacement is produced. A full sensitivity analysis is done using this fault as a reference, to test for the sources of variability in the PFDHA methodology. The influence of the major primary variables is quantified to provide a future direction for PFDHA.
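As a rough illustration of how a displacement hazard curve of this kind is assembled, the toy Monte Carlo sketch below combines an assumed magnitude distribution with an assumed lognormal displacement model. Every distribution, rate and coefficient here is a placeholder of our own choosing, not the empirical regression derived in the thesis.

import numpy as np

rng = np.random.default_rng(1)
n_events = 100_000
annual_rate = 0.01                       # assumed rate of surface-rupturing earthquakes (per year)

# Placeholder magnitude and displacement models (NOT the thesis regression):
mags = rng.uniform(6.0, 7.5, n_events)
median_disp = 10 ** (-4.5 + 0.7 * mags)  # hypothetical median displacement in metres
disp = median_disp * rng.lognormal(mean=0.0, sigma=0.6, size=n_events)

# Hazard curve: annual rate of exceeding each displacement level d.
levels = np.logspace(-2, 1, 30)
exceed_rate = [annual_rate * np.mean(disp > d) for d in levels]
for d, r in zip(levels[::6], exceed_rate[::6]):
    print(f"d = {d:7.3f} m  rate = {r:.2e} /yr")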
APA, Harvard, Vancouver, ISO, and other styles
34

Huang, Zhijia. "Topology and congestion invariant in global internet-scale networks." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/519.

Full text
Abstract:
Infrastructures like telecommunication systems, power transmission grids and the Internet are complex networks that are vulnerable to catastrophic failure. A common mechanism behind this kind of failure is an avalanche-like breakdown of the network's components. If a component fails due to overload, its load will be redistributed, causing other components to overload and fail. This failure can propagate throughout the entire network. From studies of catastrophic failures in different technological networks, the consensus is that the occurrence of a catastrophe is due to the interaction between the connectivity and the dynamical behaviour of the networks' elements. The research in this thesis focuses particularly on packet-oriented networks. In these networks the traffic (dynamics) and the topology (connectivity) are coupled by the routing mechanisms. The interactions between the network's topology and its traffic are complex, as they depend on many parameters, e.g. Quality of Service, congestion management (queuing), link bandwidth, link delay, and types of traffic. It is not straightforward to predict whether a network will fail catastrophically or not. Furthermore, even for a very simplified version of packet networks, there are still fundamental questions about catastrophic behaviour that have not been studied, such as: will a network become unstable and fail catastrophically as its size increases; do catastrophic networks have specific connectivity properties? One of the main difficulties when studying these questions is that, in general, we do not know in advance if a network is going to fail catastrophically. In this thesis we study how to build catastrophic networks. The motivation behind the research is that once we have constructed networks that will fail catastrophically, we can study their behaviour before the catastrophe occurs, for example the dynamical behaviour of the nodes before an imminent catastrophe. Our theoretical and algorithmic approach is based on the observation that for many simple networks there is a topology-traffic invariant for the onset of congestion. We have extended this approach to consider cascading congestion, and we have developed two methods to construct catastrophes. The main results in this thesis are that there is a family of catastrophic networks that possess a scale invariant; hence at the break point it is possible to predict the behaviour of large networks by studying a much smaller network. The results also suggest that if the traffic on a network increases exponentially, then there is a maximum size that a network can have, after which the network will always fail catastrophically. To verify whether catastrophic networks built using our algorithmic approach can reflect real situations, we evaluated the performance of a small catastrophic network. By building the scenario using the open source network simulation software OMNeT++, we were able to simulate a router network using the Open Shortest Path First routing protocol and carrying User Datagram Protocol traffic. Our results show that this kind of network can collapse as a cascade of failures. Furthermore, the recent failure of Google Mail routers [1] confirms that this kind of catastrophic failure does occur in real situations.
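The avalanche mechanism described here can be illustrated with a minimal load-redistribution simulation in the style of the Motter-Lai cascade model. The sketch below is a generic illustration only (capacities tied to initial betweenness load), not the topology-traffic invariant construction developed in the thesis.

import networkx as nx

def cascade(G, alpha=0.2, trigger=0):
    # Capacity = (1 + alpha) * initial load, with betweenness centrality as a proxy for packet load.
    load = nx.betweenness_centrality(G)
    capacity = {n: (1 + alpha) * load[n] for n in G}
    G = G.copy()
    G.remove_node(trigger)                      # initial failure
    failed = 1
    while True:
        load = nx.betweenness_centrality(G)
        overloaded = [n for n in G if load[n] > capacity[n]]
        if not overloaded:
            return failed
        G.remove_nodes_from(overloaded)         # overloaded routers fail and traffic reroutes
        failed += len(overloaded)

G = nx.barabasi_albert_graph(200, 2, seed=3)
print("nodes lost in cascade:", cascade(G, alpha=0.1))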
APA, Harvard, Vancouver, ISO, and other styles
35

Alexander-Nunneley, Lisa Pamela. "The minimal scale invariant extension of the standard model." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/the-minimal-scale-invariant-extension-of-the-standard-model(9fdbf3a3-ed27-428a-9b3c-80a3a1b3b9e6).html.

Full text
Abstract:
The Minimal Scale Invariant extension of the Standard Model (MSISM) is a model of low-energy particle physics which is identical to the Standard Model except for the inclusion of an additional complex singlet scalar and tree-level scale invariance. Scale invariance is a classical symmetry which is explicitly broken by quantum corrections, whose interplay with the quartic couplings can be used to trigger electroweak symmetry breaking. The scale invariant Standard Model suffers from a number of problems; however, the inclusion of a complex singlet scalar results in a perturbative and phenomenologically viable theory. We present a thorough and systematic investigation of the MSISM for a number of representative scenarios along two of its three classified types of flat direction. In these scenarios we determine the permitted quartic coupling parameter space, using both theoretical and experimental constraints, and apply these limits to make predictions of the scalar mass spectrum and the energy scale at which scale invariance is broken. We calculate the one-loop effective potential and the one-loop beta functions of the pertinent couplings of the MSISM specifically for this purpose. We also discuss the phenomenological implications of these scenarios, in particular whether they realise explicit or spontaneous CP violation, contain neutrino masses or provide dark matter candidates. Of particular importance is the discovery of a new minimal scale invariant model which provides maximal spontaneous CP violation, can naturally incorporate neutrino masses, produces a massive stable scalar dark matter candidate and can remain perturbative up to the Planck scale. It can be argued that the last property, along with classical scale invariance, can potentially solve the gauge hierarchy problem for this model.
APA, Harvard, Vancouver, ISO, and other styles
36

Cherry, Donna J., and John G. Orme. "Validation Study of a Co-Parenting Scale for Foster Couples." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etsu-works/7643.

Full text
Abstract:
This study examined the Casey Foster Applicant Inventory-Applicant-Co-Parenting Scale (CFAI-CP), a new scale developed to measure foster parent applicants' co-parenting potential. Also, this study illustrates statistical methods used to analyze the psychometric properties of dyadic data. Factor structure and measurement invariance were tested with 111 approved foster couples. Mplus was used to accommodate ordinal-level data. Exploratory factor analysis supported a 10-item, unidimensional measure with excellent internal consistency reliability (.88 for fathers, .89 for mothers). Confirmatory factor analysis supported scalar measurement invariance but not structural invariance, as expected. Good construct validity was evident. Findings support the CFAI-CP as an empirically sound measure to assess foster parent co-parenting.
APA, Harvard, Vancouver, ISO, and other styles
37

Mantovani Sarti, Valentina. "Scaled chiral quark-solitons for nuclear matter." Doctoral thesis, Università degli studi di Ferrara, 2012. http://hdl.handle.net/11392/2389448.

Full text
Abstract:
One of the most challenging problems in hadronic and nuclear physics is to study nuclear matter at finite density using a scheme which includes one of the fundamental properties of QCD, namely chiral symmetry. The problem of studying nuclear matter with chiral Lagrangians is not trivial; for instance, models based on the linear σ-model fail to describe nuclear matter already at ρ ∼ ρ0 because the normal solution, in which chiral symmetry is broken, becomes unstable with respect to the Lee-Wick phase. The main problems in these models are due to the constraints on the scalar field dynamics imposed by the Mexican hat potential [1]. The interaction terms of the σ and π fields in the linear realization of chiral symmetry allow the chiral fields to move away from the chiral circle as the density rises and to reach, already at ρ0, the local maximum where σv = 0 and chiral symmetry is restored. The difficulty of a too early restoration of chiral symmetry at finite density can be overcome in two different ways. One could implement chiral symmetry in the Lagrangian through a non-linear realization [2], where the scalar fields are forced to stay on the chiral circle. The other approach is still based on a linear realization of chiral symmetry but with a new potential, which includes terms not present in the Mexican hat potential. A possible guideline in building such a potential is scale invariance, which is spontaneously broken in QCD due to the presence of the parameter ΛQCD coming from the renormalization process and is strictly connected to a non-vanishing gluon condensate. This fundamental symmetry of QCD can be implemented in the Lagrangian at mean-field level, following the approaches in [3, 4], through the introduction of a new scalar field, the dilaton field, whose dynamics is regulated by a potential chosen in order to reproduce the scale divergence of QCD. In this work we adopt the Chiral Dilaton Model (CDM), which also includes scale invariance, introduced by the nuclear physics group of the University of Minnesota [5–8]. It has already been shown that a hadronic model based on this dynamics provides a good description of nuclear physics at densities about ρ0 and describes the gradual restoration of chiral symmetry at higher densities [9]. In the same work the authors have shown a phase diagram where the interplay between chiral and scale invariance restoration leads to a scenario similar to that proposed by McLerran and Pisarski in [10]. It is therefore tempting to explore the scenario presented in [9] at a more microscopic level. The new idea we develop in this work is to interpret the fermions as quarks, to build the hadrons as solitonic solutions of the field equations as in [11] and, finally, to explore the properties of the soliton at finite density using the Wigner-Seitz approximation. Similar approaches to a finite density system have been investigated in the past [12–17]. A problem of those works is that the solitonic solutions are unstable and disappear already at moderate densities when, e.g., the linear σ-model is adopted [16]. We are therefore facing an instability similar to the one discussed and solved when studying nuclear matter with hadronic chiral Lagrangians. The first aim of this thesis is to check whether, just by modifying the meson interaction with the inclusion of scale invariance, the new logarithmic potential allows the soliton crystal to reach higher densities.
Next, since the CDM also takes into account the presence of vector mesons, the second and more important aim is to check whether the inclusion of vector mesons in the dynamics of the quarks can provide saturation for chiral matter. We should remark that no calculation, neither in vacuum nor at finite density, exists at the moment for the CDM with quarks and vector mesons.
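For reference, the Mexican hat potential of the linear σ-model referred to in this abstract has the standard form (our notation):

\[ V(\sigma, \vec{\pi}) = \frac{\lambda}{4}\left(\sigma^2 + \vec{\pi}^{\,2} - v^2\right)^2 , \]

whose minimum lies on the chiral circle \(\sigma^2 + \vec{\pi}^{\,2} = v^2\); the chiral dilaton approach replaces this construction with a scale-invariant, logarithmic potential involving the additional dilaton field.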
APA, Harvard, Vancouver, ISO, and other styles
38

Caurio, Maurizio. "Simulazione della creazione e della stabilità di una scale-free network." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1838/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Woo, Myung Chul. "Biologically-inspired translation, scale, and rotation invariant object recognition models /." Online version of thesis, 2007. http://hdl.handle.net/1850/3933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hagan, Scott. "Scale invariant and topological approaches to the cosmological constant problem." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=39926.

Full text
Abstract:
The cosmological constant is historically reviewed from its introduction in classical and relativistic cosmology through its modern quantum guise, where it appears as a vacuum energy density. Limits on the empirical value are in glaring contradiction to the expectations of field theoretical calculations.

Motivated by the natural connection between dilatation invariance and the extinction of the vacuum energy density, a phenomenological realization of a global scale symmetry is constructed. A complete treatment of such a realization in the context of a supergravitational toy model is calculated to one loop using an effective potential formalism. Particular attention is paid to the quantization of both supersymmetric and general coordinate gauges and to the concomitant ghost structure, since traditional treatments have introduced non-local operators in the ghost Lagrangian and generating functional. Contributions to the effective potential from the gravity sector are thus determined that contradict the literature. A particular class of tree-level scalar potentials that includes the 'no-scale' case is studied in the flat space limit. While it is found that scale invariance can be maintained at the one-loop level and the cosmological constant made to vanish for all potentials in the class, this is directly attributable to supersymmetry. A richer form of the Kähler potential or an enlarged particle content may facilitate the breaking of supersymmetry.

Phenomenological consequences of supergravity are investigated through a one-loop calculation of the electromagnetic form factor of the gravitino. Should such a form factor exist, a signature of the gravitino might be found in processes with unlabeled products such as $e^+e^- \to$ nothing. It is found that the form factor vanishes to this order, the Lorentz structures generated being too impoverished to withstand a constraining set of polarization conditions.

Finally, the wormhole solution to the cosmological constant problem is examined in a semiclassical approximation. The notion that scalar field wormholes must have associated conserved charges is questioned, and a model of massive scalar field wormholes is delineated and proven to provide a counterexample. As the model allows baby universes nucleated with a certain eigenvalue of the scalar field momentum to classically evolve to a different value, competing semiclassical paths contribute to the same transition amplitude. Numerical simulations demonstrate that the novel semiclassical paths available to massive solutions cannot be overlooked in approximating the tunneling amplitude.
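For orientation, one-loop effective potentials of the kind computed here are usually organized in the standard Coleman-Weinberg form (schematic; the scheme-dependent constants and the supergravity field content are specific to the thesis):

\[ V_{\text{1-loop}}(\phi) = V_{\text{tree}}(\phi) + \frac{1}{64\pi^2} \sum_i (-1)^{F_i} n_i\, m_i^4(\phi) \left[ \ln\frac{m_i^2(\phi)}{\mu^2} - c_i \right], \]

where the sum runs over field-dependent masses \(m_i(\phi)\), \(n_i\) counts degrees of freedom, \(F_i\) distinguishes bosons from fermions, and the constants \(c_i\) depend on spin and renormalization scheme.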
APA, Harvard, Vancouver, ISO, and other styles
41

Lewis, Gregory. "The scale invariant generator technique and scaling anisotropy in geophysics /." Thesis, McGill University, 1993. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=68198.

Full text
Abstract:
Recently, there has been a dramatic increase in the use of scale invariance in the study of geophysical fields. However, very little attention has been paid to the anisotropy that is invariably present in these fields, in the form of stratification, differential rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of Generalized Scale Invariance (GSI) was developed. Until now, only a single analysis technique has been developed which incorporates this formalism and which can be used to study the differential rotation of fields.

Using a two-dimensional representation of the linear approximation to GSI, a new, greatly improved technique for quantifying anisotropic scale invariance in geophysical fields is developed: the Scale Invariant Generator technique (SIG).

The ability of the technique to yield valid estimates is tested by performing the analysis on multifractal (scale invariant) simulations. It was found that SIG yields reasonable estimates for fields with a diversity of anisotropic and statistical characteristics. The analysis is also performed on three satellite cloud radiances and three sea ice SAR reflectivities to test the applicability of the technique. SIG also produced reasonable estimates in these cases.
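In linear GSI the family of scale changes is commonly written as T_lambda = lambda^(-G) for a generator matrix G. The short sketch below shows how such an anisotropic "zoom" is computed numerically; the generator values are a toy example of ours, not one estimated by SIG.

import numpy as np
from scipy.linalg import expm

# Toy generator: isotropic part (identity-like diagonal) plus stratification and rotation terms.
G = np.array([[1.0, 0.2],
              [-0.2, 0.8]])

def scale_change(lam, G):
    # T_lambda = lambda^(-G) = expm(-G * ln(lambda))
    return expm(-G * np.log(lam))

v = np.array([1.0, 0.0])          # a unit displacement vector
for lam in (2.0, 4.0, 8.0):
    print(lam, scale_change(lam, G) @ v)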
APA, Harvard, Vancouver, ISO, and other styles
42

Grimes, Catherine Alison. "Neural network techniques for position and scale invariant image classification." Thesis, Open University, 1998. http://oro.open.ac.uk/57866/.

Full text
Abstract:
This research is concerned with the application of neural network techniques to the problem of classifying images in a manner that is invariant to changes in position and scale. In addition to the goal of invariant classification, the network has to classify the objects in a hierarchical manner, in which complex features are constructed from simpler features, and to use unsupervised learning. The resultant hierarchical structure should be able to classify the image by having an internal representation that models the structure of the image. After finding existing neural network techniques unsuitable, a new type of neural network was developed that differed from the conventional multi-layer perceptron type of architecture. This network was constructed from neurons that were grouped into feature detectors. These neurons were taught in an unsupervised manner that used a technique based on Kohonen learning. A number of novel techniques were developed to improve the learning and classification performance of the network. The network was able to retain the spatial relationship of the classified features; this inherent property resulted in the capability for position and scale invariant classification. As a consequence, an additional invariance filter was not required. In addition to achieving the invariance property, the developed techniques enabled multiple objects in an image to be classified. When the network had learned the spatial relationships between the lower level features, names could be assigned to the identified features. As part of the classification process, the system was able to identify the positions of the classified features in all layers of the network. A software model of an artificial retina was used to test the grey scale classification performance of the network and to assess the response of the retina to changes in brightness. Like the Neocognitron, the resulting network was developed solely for image classification. Although the Neocognitron is not designed for scale or position invariance, it was chosen for comparison purposes because it has structural similarities and the ability to accommodate slight changes in the image. This type of network could be used as the basis for a 2D-scene analysis neural network, in which the inherent parallelism of the neural network would provide simultaneous classification of the objects in the image.
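The Kohonen-based unsupervised learning mentioned here can be sketched generically as a winner-take-all weight update. The code below illustrates only that textbook rule, with placeholder data; it is not a reconstruction of the thesis's hierarchical network.

import numpy as np

def kohonen_train(patches, n_units=16, lr=0.1, epochs=20, seed=0):
    # patches: (n_samples, n_inputs) array of image patches; each unit becomes a feature detector.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_units, patches.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patches:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
            w[winner] += lr * (x - w[winner])                   # move the winner toward the input
            w[winner] /= np.linalg.norm(w[winner])
    return w

patches = np.random.default_rng(1).standard_normal((500, 25))   # stand-in for 5x5 image patches
detectors = kohonen_train(patches)
print(detectors.shape)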
APA, Harvard, Vancouver, ISO, and other styles
43

Shen, Yao. "Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc84275/.

Full text
Abstract:
Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristics of the object, for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise and illumination. Scale-invariant features have wide applications in image processing, such as image classification, object recognition and object tracking. In this thesis, color features and SIFT (scale-invariant feature transform) are considered as scale-invariant features. The classification, recognition and tracking results were evaluated with a novel evaluation criterion and compared with some existing methods. I also studied different types of scale-invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I demonstrate the performance of the developed algorithms on several scene analysis tasks, including object tracking, video stabilization, medical video segmentation and scene classification.
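As a concrete illustration of the scale-invariant features discussed above, the snippet below uses OpenCV's SIFT implementation to match keypoints between a training image and a test image. It is a generic usage example with placeholder file names, not the adaptive feature-combination algorithm proposed in the thesis.

import cv2

train = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
test = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(train, None)
kp2, des2 = sift.detectAndCompute(test, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches out of {len(matches)}")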
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Bo. "Interest Curves : Concept, Evaluation, Implementation and Applications." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-111175.

Full text
Abstract:
Image features play important roles in a wide range of computer vision applications, such as image registration, 3D reconstruction, object detection and video understanding. These image features include edges, contours, corners, regions, lines, curves, interest points, etc. However, the research is fragmented in these areas, especially when it comes to line and curve detection. In this thesis, we aim to discover, integrate, evaluate and summarize past research as well as our contributions in the area of image features. This thesis provides a comprehensive framework of concept, evaluation, implementation, and applications for image features. Firstly, this thesis proposes the novel concept of interest curves. Interest curves is a concept derived and extended from interest points. Interest curves are significant lines and arcs in an image that are repeatable under various image transformations. Interest curves bring clear guidelines and structures for future curve and line detection algorithms and related applications. Secondly, this thesis presents an evaluation framework for detecting and describing interest curves. The evaluation framework provides a new paradigm for comparing the performance of state-of-the-art line and curve detectors under image perturbations and transformations. Thirdly, this thesis proposes an interest curve detector (Distinctive Curves, DICU), which unifies the detection of edges, corners, lines and curves. DICU represents our state-of-the-art contribution in the areas concerning the detection of edges, corners, curves and lines. Our research efforts cover the most important attributes required by these features with respect to robustness and efficiency. Interest curves preserve richer geometric information than interest points. This advantage opens new ways of solving computer vision problems. We propose a simple description method for curve matching applications, and we have found that our proposed interest curve descriptor outperforms all state-of-the-art interest point descriptors (SIFT, SURF, BRISK, ORB, FREAK). Furthermore, we design a novel object detection algorithm that utilizes only DICU geometries, without using local feature appearance. We organize image objects as curve chains and, to detect an object, we search for this curve chain in the target image using dynamic programming. The curve chain matching is scale- and rotation-invariant as well as robust to image deformations. These properties make it possible to resolve the rotation-variance problem in object detection applications. In our face detection experiments, the curve chain matching method proves to be scale- and rotation-invariant and very computationally efficient.
INTRO – INteractive RObotics research network
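The dynamic-programming search over curve chains mentioned in the abstract can be illustrated with a generic alignment recursion. The sketch below aligns two sequences of per-curve descriptors by cumulative matching cost; it is a plain DTW-style recursion of our own, not the actual DICU matcher.

import numpy as np

def chain_match_cost(chain_a, chain_b):
    # chain_a, chain_b: (n, d) and (m, d) arrays of per-curve descriptors.
    n, m = len(chain_a), len(chain_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(chain_a[i - 1] - chain_b[j - 1])
            # Allow matching a pair of curves, or skipping a curve in either chain.
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))
b = a + 0.05 * rng.standard_normal((8, 16))      # a slightly deformed copy of the same chain
print(chain_match_cost(a, b), chain_match_cost(a, rng.standard_normal((8, 16))))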
APA, Harvard, Vancouver, ISO, and other styles
45

Caycho-Rodríguez, Tomás, Patricia Sancho, José M. Tomás, et al. "Validity and invariance of measurement of the satisfaction with love life scale in older adults." South-West University "Neofit Rilski", 2020. http://hdl.handle.net/10757/655588.

Full text
Abstract:
In recent years, interest in satisfaction with love life (SWLL) has increased. Empirical evidence shows that SWLL favors subjective well-being, physical and mental health, and marital quality and stability. In this regard, the study aimed to examine evidence of internal structure validity, reliability, and measurement invariance of the Peruvian version of the Satisfaction with Love Life Scale (SWLLS). The participants were 323 older adults recruited from the region of San Martin (Peru), with an average age of 68.73 years (SD = 7.17). The sample comprised 49.5% women and 50.5% men. The results supported the one-dimensional model and adequate reliability of the SWLLS. A multi-group analysis provided evidence of configural, metric, and scalar invariance across genders. The findings verify the validity and reliability of the Peruvian version of the SWLLS, which can be used to measure SWLL.
Universidad del Norte
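The levels of invariance tested here follow the usual nested hierarchy for a one-factor CFA. Schematically, for item i in group g, with intercepts tau, loadings lambda and factor eta (standard notation, not taken from the article):

\[ x_{ig} = \tau_{ig} + \lambda_{ig}\,\eta_g + \varepsilon_{ig}, \]

where configural invariance fixes only the factor pattern across groups, metric invariance adds \(\lambda_{i1} = \lambda_{i2}\), and scalar invariance additionally requires \(\tau_{i1} = \tau_{i2}\), which is what licenses comparisons of latent means across genders.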
APA, Harvard, Vancouver, ISO, and other styles
46

Ramos, Raymundo Alberto. "Solving Problems of the Standard Model through Scale Invariance, Dark Matter, Inflation and Flavor Symmetry." W&M ScholarWorks, 2016. https://scholarworks.wm.edu/etd/1477068273.

Full text
Abstract:
Through beyond-Standard-Model formulations we are able to suggest solutions to some of the current shortcomings of the Standard Model. In this thesis we focus in particular on inflation, the hierarchy of fermion masses, scale invariant extensions and dark matter candidates. First we present a model of hybrid natural inflation based on the discrete group S_3, the smallest non-Abelian group. The S_3 potential has an accidental symmetry whose breaking results in a pseudo-Goldstone boson with the appropriate potential for a slow-rolling inflaton. The hybrid adjective comes from the fact that inflation is ended by additional scalar fields interacting with the inflaton. At some point during inflation, this interaction forces the additional scalars to develop vacuum expectation values; they then fall to a global minimum and inflation ends. We continue with another inflation model, in this case involving a two-field potential. This potential comes from the breaking of a flavor symmetry, the one that yields the hierarchy of fermion masses. Depending on the choice of parameters, the path followed by the inflaton may or may not reach a point where inflation ends by a hybrid mechanism. For every model presented we study the field content and the parameters to demonstrate that there are solutions that satisfy the constraints from current experimental observations. Then we move to classically scale invariant extensions of the Standard Model. We proceed with a model where one-loop corrections break a non-Abelian gauge symmetry in a dark sector. This breaking provides an origin for the electroweak scale and gives mass to the gauge multiplet. For some parameter regions this massive gauge boson is a dark matter candidate. To finish, we also develop a model where we employ strong interactions in the dark sector. The particular dynamics of this model set the electroweak scale and generate a massive pseudo-Goldstone boson appropriate for dark matter.
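For context, the pseudo-Goldstone potential in natural-inflation constructions typically takes the schematic form below, with slow roll controlled by the usual parameters (generic formulas, not the specific S_3 potential of the thesis):

\[ V(\phi) \simeq \Lambda^4 \left[ 1 + \cos\!\left(\frac{\phi}{f}\right) \right], \qquad \epsilon = \frac{M_P^2}{2}\left(\frac{V'}{V}\right)^2, \qquad \eta = M_P^2\,\frac{V''}{V}, \]

with inflation proceeding while \(\epsilon, |\eta| \ll 1\) and ending, in the hybrid case, when the additional (waterfall) fields acquire their vacuum expectation values.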
APA, Harvard, Vancouver, ISO, and other styles
47

Caycho-Rodríguez, Tomás, José M. Tomás, José Ventura-León, et al. "Factorial validity and invariance analysis of the five items version of Mindful Awareness Attention Scale in older adults." Routledge, 2020. http://hdl.handle.net/10757/652458.

Full text
Abstract:
Objective: Mindfulness, or the state of full attention, is a factor that contributes to successful aging. This study aims to evaluate evidence of validity, on the basis of internal structure, convergent and discriminant validity, reliability and factorial invariance across gender, for the five-item Mindful Attention Awareness Scale (MAAS-5) within a sample of older adults. Methods: The participants were 323 Peruvian older adults, 160 women and 163 men, whose average ages were 68.58 (SD = 7.23) and 68.91 years (SD = 7.12), respectively. In addition to the MAAS-5, the Satisfaction with Life Scale and the Patient Health Questionnaire-2 were administered. Results: The confirmatory factor analysis indicates that the one-factor structure of the MAAS-5 presents adequate fit for the total sample (χ2 = 11.24, df = 5, χ2/df = 2.25, CFI = .99, RMSEA = .06 [90% CI: .01, .11]; SRMR = .025), as well as for the sub-samples of men and women. This one-factor solution presents adequate internal consistency (ω = .80 [95% CI: .76-.82]) and is invariant across gender. Regarding convergent validity, higher scores on the MAAS-5 are associated with greater satisfaction with life (r = .88, p < .01 [95% CI: .85, .95]) and less depression (r = −.56, p < .01 [95% CI: −.48, −.77]) in older adults. Conclusions: The preliminary results support the use of the MAAS-5 as a self-report measure of mindfulness with an adequate unifactorial structure that is reliable and invariant across gender for measuring the state of full attention in elderly Peruvians.
APA, Harvard, Vancouver, ISO, and other styles
48

Butavicius, Marcus A. "Recognition of planar rotated and scaled forms : normalization versus invariant features /." Title page, table of contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09PH/09phb982.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Friday, Brian Matthew. "VANISHING LOCAL SCALAR INVARIANTS ON GENERALIZED PLANE WAVE MANIFOLDS." CSUSB ScholarWorks, 2019. https://scholarworks.lib.csusb.edu/etd/884.

Full text
Abstract:
Characterizing a manifold up to isometry is a challenging task. A manifold is a topological space. One may equip a manifold with a metric, and, generally speaking, this metric determines how the manifold “looks”. An example of this is the unit sphere in R^3. While we typically envision the standard metric on this sphere to give it its familiar shape, one could define a different metric on this set of points, distorting distances within this set to make it seem perhaps more ellipsoidal, something not isometric to the standard round sphere. In an effort to distinguish manifolds up to isometry, we wish to compute meaningful invariants. For example, the Riemann curvature tensor and its surrogates are invariants one could construct. Since these objects are generally too complicated to compare and are not real valued, we construct scalar invariants from these objects instead. This thesis explores these invariants and exhibits a special family of manifolds that are not flat on which all of these invariants vanish. We go on to properly define, and give examples of, manifolds, metrics, tangent vector fields, and connections. We show how to compute the Christoffel symbols that define the Levi-Civita connection, how to compute curvature, and how to raise and lower indices so that we can produce scalar invariants. In order to construct the curvature operator and curvature tensor, we use the miracle of pseudo-Riemannian geometry, i.e., the Levi-Civita connection, the unique torsion-free and metric-compatible connection on a manifold. Finally, we examine Generalized Plane Wave Manifolds and show that all scalar invariants of Weyl type on these manifolds vanish, despite the fact that many of these manifolds are not flat.
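The objects named in this abstract have compact standard formulas; in local coordinates the Levi-Civita connection and the curvature it determines read (standard expressions, in one common sign convention):

\[ \Gamma^{k}{}_{ij} = \tfrac{1}{2}\, g^{kl}\left(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\right), \qquad R^{\rho}{}_{\sigma\mu\nu} = \partial_\mu \Gamma^{\rho}{}_{\nu\sigma} - \partial_\nu \Gamma^{\rho}{}_{\mu\sigma} + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma} - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma}, \]

with the Ricci tensor \(R_{\sigma\nu} = R^{\mu}{}_{\sigma\mu\nu}\) and the scalar curvature \(R = g^{\sigma\nu} R_{\sigma\nu}\) as the simplest scalar invariant obtained by raising indices and contracting.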
APA, Harvard, Vancouver, ISO, and other styles
50

Caycho-Rodríguez, Tomás, Lindsey W. Vilca, Thomas G. Plante, et al. "Spanish version of the Santa Clara Brief Compassion Scale: evidence of validity and factorial invariance in Peru." Springer, 2020. http://hdl.handle.net/10757/655482.

Full text
Abstract:
The Santa Clara Brief Compassion Scale (SCBCS) is a brief measure of compassion, created in English and translated into Brazilian Portuguese. Nonetheless, to date, no study has assessed the psychometric evidence of its Spanish translation. This study examines the evidence of validity, reliability, and factorial invariance across gender of a Spanish version of the SCBCS. Participants included 273 Peruvian university students (50.9% women) with an average age of 21.23 years (SD = 3.24), divided into groups of men and women to conduct the factorial invariance analysis. Other measures of mindfulness, well-being, empathy, and anxiety were applied along with the SCBCS. The confirmatory factor analysis (CFA) indicated that a unifactorial model showed an adequate fit to the data (χ2 = 12.127, df = 5, p = .033, χ2/df = 2.42, CFI = .998, RMSEA = .072 [90% CI: .019, .125]; SRMR = .030, WRMR = .551) and presented good reliability (α = .90 [95% CI: .88-.92]; ω = .91). Moreover, correlations between the SCBCS and measures of mindfulness (r = .53, p < .05), cognitive empathy (r = .55, p < .05), affective empathy (r = .56, p < .05), well-being (r = .55, p < .05), and anxiety (r = −.46, p < .05) supported the convergent and discriminant validity. Likewise, the multiple-group CFA supported the factorial invariance of the SCBCS across gender. Results indicate that the SCBCS possesses evidence of validity, reliability, and invariance between men and women for measuring compassion toward others in Peruvian undergraduate students. The SCBCS is expected to be used by researchers, healthcare professionals, teachers, and others as a useful measure of compassion in college students.
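The internal-consistency coefficient α reported here has a simple closed form; the snippet below computes it from a raw item-response matrix. The data are simulated placeholders, not the study's sample.

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) matrix of item scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(42)
true_score = rng.normal(size=(300, 1))
responses = true_score + 0.8 * rng.normal(size=(300, 5))   # five noisy indicators of one trait
print(round(cronbach_alpha(responses), 2))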
APA, Harvard, Vancouver, ISO, and other styles