Academic literature on the topic 'Negative squares'

Generate an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Negative squares.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Negative squares"

1. Fendley, Paul, Kareljan Schoutens, and Hendrik van Eerten. "Hard squares with negative activity." Journal of Physics A: Mathematical and General 38, no. 2 (2004): 315–22. http://dx.doi.org/10.1088/0305-4470/38/2/002.

2. Berg, C., J. P. R. Christensen, and P. H. Maserick. "Sequences with finitely many negative squares." Journal of Functional Analysis 79, no. 2 (1988): 260–87. http://dx.doi.org/10.1016/0022-1236(88)90014-6.
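The "negative squares" of a sequence in the sense of Berg, Christensen, and Maserick can be made concrete numerically. A minimal sketch, assuming the standard setup via the Hermitian Hankel forms sum_{j,k} s_{j+k} c_j c_k, is to count negative eigenvalues of the truncated Hankel matrices:

```python
import numpy as np

def negative_squares(seq, max_order=None):
    """Largest number of negative eigenvalues among the truncated Hankel
    matrices H_n = (s_{j+k}), i.e. the number of negative squares of the
    quadratic forms sum_{j,k} s_{j+k} c_j c_k."""
    if max_order is None:
        max_order = (len(seq) + 1) // 2
    worst = 0
    for n in range(1, max_order + 1):
        H = np.array([[seq[j + k] for k in range(n)] for j in range(n)], float)
        tol = 1e-9 * max(1.0, np.abs(H).max())  # ignore numerically-zero eigenvalues
        worst = max(worst, int(np.sum(np.linalg.eigvalsh(H) < -tol)))
    return worst

# s_n = 1 for all n gives positive semidefinite Hankel forms: 0 negative squares.
print(negative_squares([1.0] * 7))                  # -> 0
# Flipping the sign of s_0 introduces exactly one negative square here.
print(negative_squares([-1.0, 1, 1, 1, 1, 1, 1]))   # -> 1
```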
3. Li, Yifeng, and Alioune Ngom. "Classification approach based on non-negative least squares." Neurocomputing 118 (October 2013): 41–57. http://dx.doi.org/10.1016/j.neucom.2013.02.012.

4. Arsene, Gr., T. Constantinescu, and A. Gheondea. "Lifting of operators and prescribed numbers of negative squares." Michigan Mathematical Journal 34, no. 2 (1987): 201–16. http://dx.doi.org/10.1307/mmj/1029003552.

5. Paatero, Pentti. "Least squares formulation of robust non-negative factor analysis." Chemometrics and Intelligent Laboratory Systems 37, no. 1 (1997): 23–35. http://dx.doi.org/10.1016/s0169-7439(96)00044-5.

6. Behrndt, Jussi, and Carsten Trunk. "On the negative squares of indefinite Sturm–Liouville operators." Journal of Differential Equations 238, no. 2 (2007): 491–519. http://dx.doi.org/10.1016/j.jde.2007.01.026.

7. Behrndt, Jussi, Roland Möws, and Carsten Trunk. "Eigenvalue estimates for operators with finitely many negative squares." Opuscula Mathematica 36, no. 6 (2016): 717. http://dx.doi.org/10.7494/opmath.2016.36.6.717.

8. Rajkó, Róbert, and Yu Zheng. "Distance algorithm based procedure for non-negative least squares." Journal of Chemometrics 28, no. 9 (2014): 691–95. http://dx.doi.org/10.1002/cem.2625.
9. Elphick, Clive, and William Linz. "Symmetry and asymmetry between positive and negative square energies of graphs." Electronic Journal of Linear Algebra 40 (May 13, 2024): 418–32. http://dx.doi.org/10.13001/ela.2024.8447.

Abstract:
The positive and negative square energies of a graph, $s^+(G)$ and $s^-(G)$, are the sums of squares of the positive and negative eigenvalues of the adjacency matrix, respectively. The first results on square energies revealed symmetry between $s^+(G)$ and $s^-(G)$. This paper reviews examples of asymmetry between these parameters, for example using large random graphs and the ratios $s^+/s^-$ and $s^-/s^+$, as well as new examples of symmetry. Some questions previously asked about $s^{+}$ and $s^{-}$ are answered and several further avenues of research are suggested.
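The two parameters defined in this abstract are easy to compute directly; a minimal numpy sketch:

```python
import numpy as np

def square_energies(A):
    """s+(G) and s-(G): sums of squares of the positive and negative
    eigenvalues of the adjacency matrix A."""
    eig = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    return float(np.sum(eig[eig > 0] ** 2)), float(np.sum(eig[eig < 0] ** 2))

# Complete graph K4 has spectrum {3, -1, -1, -1}: s+ = 9, s- = 3,
# and s+ + s- = trace(A^2) = 2 * (number of edges) = 12.
A = np.ones((4, 4)) - np.eye(4)
s_plus, s_minus = square_energies(A)
print(s_plus, s_minus)  # 9 and 3, up to rounding
```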
10. Loconte, Lorenzo, Stefan Mengel, and Antonio Vergari. "Sum of Squares Circuits." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19077–85. https://doi.org/10.1609/aaai.v39i18.34100.

Abstract:
Designing expressive generative models that support exact and efficient inference is a core question in probabilistic ML. Probabilistic circuits (PCs) offer a framework where this tractability-vs-expressiveness trade-off can be analyzed theoretically. Recently, squared PCs encoding subtractive mixtures via negative parameters have emerged as tractable models that can be exponentially more expressive than monotonic PCs, i.e., PCs with positive parameters only. In this paper we provide a more precise theoretical characterization of the expressiveness relationships among these models. First, we prove that squared PCs can be less expressive than monotonic ones. Second, we formalize a novel class of PCs – sum of squares PCs – that can be exponentially more expressive than both squared and monotonic PCs. Around sum of squares PCs, we build an expressiveness hierarchy that allows us to precisely unify and separate different tractable model classes such as Born Machines and PSD models, and other recently introduced tractable probabilistic models by using complex parameters. Finally, we empirically show the effectiveness of sum of squares circuits in performing distribution estimation.

Dissertations / Theses on the topic "Negative squares"

1. Copeland, Dylan Matthew. "Negative-norm least-squares methods for axisymmetric Maxwell equations." Texas A&M University, 2006. http://hdl.handle.net/1969.1/3837.

Abstract:
We develop negative-norm least-squares methods to solve the three-dimensional Maxwell equations for static and time-harmonic electromagnetic fields in the case of axial symmetry. The methods compute solutions in a two-dimensional cross section of the domain, thereby reducing the dimension of the problem from three to two. To achieve this dimension reduction, we work with weighted spaces in cylindrical coordinates. In this setting, approximation spaces consisting of low order finite element functions and bubble functions are analyzed. In contrast to other methods for axisymmetric Maxwell equations, our least-squares methods allow for discontinuous coefficients with large jumps and non-convex, irregular polygonal domains discretized by unstructured meshes. The resulting linear systems are of modest size, are symmetric positive definite, and can be solved very efficiently. Computations demonstrate the robustness of the methods with respect to the coefficients and domain shape.
2. Kolev, Tzanio Valentinov. "Least-squares methods for computational electromagnetics." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1115.

Abstract:
The modeling of electromagnetic phenomena described by Maxwell's equations is of critical importance in many practical applications. The numerical simulation of these equations is challenging and much more involved than initially believed. Consequently, many discretization techniques, most of them quite complicated, have been proposed. In this dissertation, we present and analyze a new methodology for approximation of the time-harmonic Maxwell's equations. It is an extension of the negative-norm least-squares finite element approach which has been applied successfully to a variety of other problems. The main advantages of our method are that it uses simple, piecewise polynomial, finite element spaces, while giving quasi-optimal approximation, even for solutions with low regularity (such as the ones found in practical applications). The numerical solution can be efficiently computed using standard and well-known tools, such as iterative methods and eigensolvers for symmetric and positive definite systems (e.g. PCG and LOBPCG) and preconditioners for second-order problems (e.g. multigrid). Additionally, approximation of varying polynomial degrees is allowed and spurious eigenmodes are provably avoided. We consider the following problems related to Maxwell's equations in the frequency domain: the magnetostatic problem, the electrostatic problem, the eigenvalue problem, and the full time-harmonic system. For each of these problems, we present a natural (very) weak variational formulation assuming minimal regularity of the solution. In each case, we prove error estimates for the approximation with two different discrete least-squares methods. We also show how to deal with problems posed on domains that are multiply connected or have multiple boundary components. Besides the theoretical analysis of the methods, the dissertation provides various numerical results in two and three dimensions that illustrate and support the theory.
3. Santiago, Claudio Prata. "On the nonnegative least squares." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31768.

Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010. Committee Chair: Earl Barnes; Committee Members: Arkadi Nemirovski, Faiz Al-Khayyal, Guillermo H. Goldsztein, and Joel Sokol. Part of the SMARTech Electronic Thesis and Dissertation Collection.
4. Zigic, Ljiljana. "Direct L2 Support Vector Machine." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4274.

Abstract:
This dissertation introduces a novel model for solving the L2 support vector machine dubbed Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM represents a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints. The devised system of linear equations has a symmetric positive definite matrix and a solution vector has to be nonnegative. Furthermore, this dissertation introduces a novel algorithm dubbed Non-Negative Iterative Single Data Algorithm (NN ISDA) which solves the underlying DL2 SVM's constrained system of equations. This solver shows significant speedup compared to several other state-of-the-art algorithms. The training time improvement is achieved at no cost, in other words, the accuracy is kept at the same level. All the experiments that support this claim were conducted on various datasets within the strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training time on both medium and large datasets. In addition to a comprehensive DL2 SVM model we introduce and derive its three variants. Three different solvers for the DL2's system of linear equations with nonnegativity constraints were implemented, presented and compared in this dissertation.
5. Shen, Chong. "Topic Analysis of Tweets on the European Refugee Crisis Using Non-negative Matrix Factorization." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1388.

Abstract:
The ongoing European Refugee Crisis has been one of the most popular trending topics on Twitter for the past 8 months. This paper applies topic modeling on bulks of tweets to discover the hidden patterns within these social media discussions. In particular, we perform topic analysis through solving Non-negative Matrix Factorization (NMF) as an Inexact Alternating Least Squares problem. We accelerate the computation using techniques including tweet sampling and augmented NMF, compare NMF results with different ranks and visualize the outputs through topic representation and frequency plots. We observe that supportive sentiments maintained a strong presence while negative sentiments such as safety concerns have emerged over time.
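The NMF-as-alternating-least-squares formulation mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the author's accelerated implementation: each factor is updated by solving exact non-negative least-squares subproblems with `scipy.optimize.nnls`:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_als(X, rank, iters=100, seed=0):
    """Rank-r NMF X ~ W @ H by alternating non-negative least squares:
    with H fixed, each row of W solves an NNLS problem, and vice versa."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    H = rng.random((rank, n))
    for _ in range(iters):
        W = np.vstack([nnls(H.T, X[i, :])[0] for i in range(m)])
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
    return W, H

# An exactly rank-2 non-negative matrix should be fit almost perfectly.
rng = np.random.default_rng(1)
X = rng.random((8, 2)) @ rng.random((2, 6))
W, H = nmf_als(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)  # small relative residual
```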
6. Moda, Hari Priya. "Non-Negative Least Square Optimization Model for Industrial Peak Load Estimation." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/36003.

Abstract:
Load research is the study of load characteristics on a power distribution system, which helps planning engineers make decisions about equipment ratings and future expansion. As it is expensive to collect and maintain data across the entire system, data is collected only for a sample of customers, where the sample is divided into groups based upon customer class. These sample measurements are used to calculate load research factors such as kWh-to-peak-kW conversion factors, diversity factors, and 24-hour average consumption as a function of class, month, and day type. These factors are applied to the commonly available monthly billing kW data to estimate load on the system. Among the various customers on a power system, industrial customers form an important group for study, as their annual kWh consumption is among the highest; the errors with which the estimates are calculated are also highest for this class. Hence we choose the industrial class to demonstrate the Lawson-Hanson Non-Negative Least Squares (NNLS) optimization technique to minimize the residual squared error between the estimated loads and the SCADA currents on the system. Five feeders with predominantly industrial customers are chosen to demonstrate the improvement provided by the NNLS model. The results showed significant improvement over the Nonlinear Load Research Estimation (NLRE) method. Master of Science.
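For reference, the Lawson-Hanson NNLS solver named in this abstract is available as `scipy.optimize.nnls`. A toy sketch with made-up numbers (not the thesis data), recovering non-negative coefficients from aggregate measurements:

```python
import numpy as np
from scipy.optimize import nnls  # Lawson-Hanson active-set algorithm

# Hypothetical illustration: recover non-negative per-class contributions x
# from aggregate measurements b = A @ x, with A a tall design matrix.
A = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.5],
              [2.0, 0.5, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([0.7, 0.0, 1.2])
b = A @ x_true

x_hat, rnorm = nnls(A, b)
print(x_hat, rnorm)  # x_hat ~ [0.7, 0.0, 1.2], rnorm ~ 0
```

Because the consistent solution here is itself non-negative and A has full column rank, the constrained solver returns it exactly; with noisy data, NNLS instead clips would-be-negative coefficients to zero.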
7. Nguyen, Thi Thanh. "Algorithmes gloutons orthogonaux sous contrainte de positivité." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0133/document.

Abstract:
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms such as OMP and OLS is a challenging issue, since the unconstrained least-squares subproblems are replaced by non-negative least-squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow, and some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that the latter L0 minimization problem is NP-hard. The second contribution is a unified framework on NNOG algorithms together with an exact and fast implementation, where the non-negative least-squares subproblems are solved using the active-set algorithm with warm-start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
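The kind of non-negative orthogonal greedy scheme described here can be sketched in a few lines. This is a simplified illustration, not the thesis implementation: atoms are selected by largest positive correlation with the residual, then all selected coefficients are refit by an exact NNLS subproblem; an orthonormal dictionary is used in the demo so that exact support recovery is guaranteed:

```python
import numpy as np
from scipy.optimize import nnls

def nn_omp(D, y, k):
    """Non-negative orthogonal greedy sketch: select the atom with the
    largest positive correlation with the residual, then refit all selected
    coefficients by non-negative least squares."""
    support, x = [], np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(k):
        corr = D.T @ r
        corr[support] = -np.inf            # never reselect an atom
        j = int(np.argmax(corr))
        if corr[j] <= 0:                   # no atom can reduce the residual
            break
        support.append(j)
        coef, _ = nnls(D[:, support], y)   # NNLS subproblem on the support
        x[:] = 0.0
        x[support] = coef
        r = y - D @ x
    return x

# Orthonormal dictionary -> exact recovery of a 2-sparse non-negative vector.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((10, 10)))
x0 = np.zeros(10)
x0[[2, 7]] = [1.5, 0.8]
x_hat = nn_omp(D, D @ x0, k=2)
print(np.nonzero(x_hat)[0])  # -> [2 7]
```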
8. Nguyen, Thi Thanh. "Algorithmes gloutons orthogonaux sous contrainte de positivité." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0133.

Abstract:
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms such as OMP and OLS is a challenging issue, since the unconstrained least-squares subproblems are replaced by non-negative least-squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow, and some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that the latter L0 minimization problem is NP-hard. The second contribution is a unified framework on NNOG algorithms together with an exact and fast implementation, where the non-negative least-squares subproblems are solved using the active-set algorithm with warm-start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
9. Tejkal, Martin. "Vybrané transformace náhodných veličin užívané v klasické lineární regresi." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318798.

Abstract:
Classical linear regression and the hypothesis tests derived from it are based on the assumptions that the dependent variables are normally distributed with equal variances. When the normality assumptions are violated, transformations of the dependent variables are usually applied. The first part of this thesis deals with variance-stabilizing transformations. Considerable attention is given to random variables with Poisson and negative binomial distributions, for which generalized variance-stabilizing transformations containing additional parameters in the argument are studied, and optimal values of these parameters are determined. The aim of the second part is to compare the transformations introduced in the first part with other commonly used transformations. The comparison is carried out within the analysis of variance, testing the hypothesis of equal means of p independent random samples by means of the F test. First, the properties of the F test are studied under the assumptions of equal and unequal variances across samples. Then the power functions of the F test are compared when it is applied to p samples from the Poisson distribution transformed by the square root, logarithmic, and Yeo-Johnson transformations, and from the negative binomial distribution transformed by the inverse hyperbolic sine, logarithmic, and Yeo-Johnson transformations.
10. Wondim, Yonas kassaw. "Hyperspectral Image Analysis Algorithm for Characterizing Human Tissue." Thesis, Linköpings universitet, Biomedicinsk instrumentteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75156.

Abstract:
In the field of biomedical optics, the measurement of tissue optical properties, such as the absorption, scattering, and reduced scattering coefficients, has gained importance for therapeutic and diagnostic applications. Accuracy in determining the optical properties is of vital importance for quantitatively determining chromophores in tissue. There are different techniques used to quantify tissue chromophores. Reflectance spectroscopy is one of the most common methods to rapidly and accurately characterize the blood amount and oxygen saturation in the microcirculation. With a hyperspectral imaging (HSI) device it is possible to capture images with spectral information that depends both on tissue absorption and scattering. To analyze these data, software that accounts for both absorption and scattering events needs to be developed. In this thesis work an HSI algorithm, capable of assessing tissue oxygenation while accounting for both tissue absorption and scattering, is developed. The complete imaging system comprises a light source, a liquid crystal tunable filter (LCTF), a camera lens, a CCD camera, control units and power supplies for the light source and filter, and a computer. This work also presents a graphics processing unit (GPU) implementation of the developed HSI algorithm, which is found to be computationally demanding; the GPU implementation outperforms the Matlab "lsqnonneg" function by a factor of 5-7x. Finally, the HSI system and the developed algorithm are evaluated in two experiments. In the first experiment the concentration of chromophores is assessed while occluding the fingertip. In the second experiment the skin is provoked by UV light, and Erythema development is checked by analyzing the oxyhemoglobin image at different points in time; the change in melanin concentration is also checked at different times from exposure. The results match theory for the time-dependent changes in oxyhemoglobin and deoxyhemoglobin; however, the melanin results do not correspond to the theoretically expected result.

Books on the topic "Negative squares"

1. Rendel, Martin, Cathrine Cheng, Markus Schaden, Richard Reisen, Yang Shu, and Gérard A. Goodrow, eds. Negatives. Verlag Kettler, 2015.
2. San Francisco (Calif.). Planning Dept. and Cocoa Development Associates, eds. Preliminary mitigated negative declaration: [900 North Point Street, Ghirardelli Square Rehabilitation and Hotel]. Planning Dept., 2005.
3. Negative exposures: Knowing what not to know in contemporary China. Duke University Press, 2020.
4. Square Root of Negative Forty-Two. Lulu Press, Inc., 2011.
5. Kritz, Mary M., and Douglas T. Gurak. International Student Mobility. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198815273.003.0011.

Abstract:
This chapter examines the role that sending country structural factors play in influencing the proportion of tertiary students studying abroad. It examines how the outbound mobility ratio (OMR) responds to sending country supply and demand for tertiary education, population size, per capita GDP, development, education expenditures, and other factors. In all Ordinary Least Squares (OLS) and fixed-effect model specifications, the OMR had a negative relationship to tertiary supply. While countries with larger populations send more students abroad, they have smaller OMRs. Fixed-effects models also showed that changes in tertiary supply and the percentage of GDP spent on tertiary education were negatively related to OMRs. The chapter reviews government scholarship programmes sponsored by Global South countries and the practices they pursue to encourage student return and strengthen tertiary capacity in science, technology, engineering, and mathematics (STEM). These programmes in developing countries in Africa, Asia, and Latin America are changing international student flows.
6. Aimar, Oussama El. Imaginary Notes - Imaginary Number, Square Root Negative One - Funny Hilarious Geek Gift: I = Square Root Negative One. Independently Published, 2019.
7. Smith, Andrew, Guy Osborn, and Bernadette Quinn, eds. Festivals and the City: The Contested Geographies of Urban Events. University of Westminster Press, 2022. http://dx.doi.org/10.16997/book64.

Abstract:
This book explores how festivals and events affect urban places and public spaces, with a particular focus on their role in fostering inclusion. The ‘festivalisation’ of culture, politics and space in cities is often regarded as problematic, but this book examines the positive and negative ways that festivals affect cities by examining festive spaces as contested spaces. The book focuses on Western European cities, a particularly interesting context given the social and cultural pressures associated with high levels of in-migration and concerns over the commercialisation and privatisation of public spaces. The key themes of this book are the quest for more inclusive urban spaces and the contested geographies of festival spaces and places. Festivals are often used by municipal authorities to break down symbolic barriers that restrict who uses public spaces and what those spaces are used for. However, the rise of commercial festivals and ticketed events means that they are also responsible for imposing physical and financial obstacles that reduce the accessibility of city parks, streets and squares. Alongside addressing the contested effects of urban festivals on the character and inclusivity of public spaces, the book addresses more general themes including the role of festivals in culture-led regeneration. Several chapters analyse festivals and events as economic development tools, and the book also covers contested representations of festival cities and the ways related images and stories are used in place marketing. A range of cases from Western Europe are used to explore these issues, including chapters on some of the world’s most significant and contested festival cities: Venice, Edinburgh, London and Barcelona. The book covers a wide range of festivals, including those dedicated to music and the arts, but also events celebrating particular histories, identities and pastimes. 
A series of fascinating cases is discussed, from the Venice Biennale and the Dublin Festival of History to Rotterdam's music festivals and craft beer festivals in Manchester. The diverse and innovative qualities of the book are also evident in the range of urban spaces covered: obvious examples of public spaces, such as parks, streets, squares and piazzas, are addressed, but the book also includes chapters on enclosed public spaces (e.g., libraries) and urban blue spaces (waterways). This reflects the interpretation of public spaces as socio-material entities: they are produced informally through their use (including for festivals and events), as well as through their formal design and management.
8. Levesque, Roger J. R. The Science and Law of School Segregation and Diversity. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190633639.001.0001.

Abstract:
The law does not square with people’s experiences of segregation and diversity. An empirical look at the legal system’s effectiveness in addressing school segregation reveals, from a practical perspective, that segregation persists and even surpasses levels before the civil rights movement. Yet, the legal system continues as though segregation is a thing of the past. Even more bizarre, the negative effects of racial and ethnic disparities in schooling are well documented, and the legal system compels itself to ignore much of them. To exacerbate matters, legal analysts increasingly interpret the law as a system that operates in a different world than the one documented by researchers who describe disparities and what could be done about them. For their part, researchers pervasively continue to document experiences without considering the legal system’s basic concerns. This book details the source of these gaps, evaluates their empirical and legal foundation, explains why they persist, and reveals what can be done about them.
9. Alexander, Peter D. G., and Malachy O. Columb. Presentation and handling of data, descriptive and inferential statistics. Edited by Jonathan G. Hardman. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199642045.003.0028.

Abstract:
The need for any doctor to comprehend, assimilate, analyse, and form an opinion on data cannot be overestimated. This chapter examines the presentation and handling of such data and its subsequent statistical analysis. It covers the organization and description of data, measures of central tendency such as mean, median, and mode, measures of dispersion (standard deviation), and the problems of missing data. Theoretical distributions, such as the Gaussian distribution, are examined and the possibility of data transformation discussed. Inferential statistics are used as a means of comparing groups, and the rationale and use of parametric and non-parametric tests and confidence intervals is outlined. The analysis of categorical variables using the chi-squared test and assessing the value of diagnostic tests using sensitivity, specificity, positive and negative predictive values, and a likelihood ratio are discussed. Measures of association are covered, namely linear regression, as is time-to-event analysis using the Kaplan–Meier method. Finally, the chapter discusses the statistical analysis used when comparing clinical measurements—the Bland and Altman method. Illustrative examples, relevant to the practice of anaesthesia, are used throughout and it is hoped that this will provide the reader with an outline of the methodologies employed and encourage further reading where necessary.
APA, Harvard, Vancouver, ISO, and other styles
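The diagnostic-test quantities named in the abstract above (sensitivity, specificity, predictive values, and the likelihood ratio) all follow directly from a 2×2 table. A minimal sketch in Python; the counts are entirely made up for illustration:

```python
# Hypothetical 2x2 diagnostic-test table (counts invented for illustration):
#                 disease present   disease absent
# test positive        TP = 90          FP = 30
# test negative        FN = 10          TN = 870
TP, FP, FN, TN = 90, 30, 10, 870

sensitivity = TP / (TP + FN)   # P(test positive | disease present)
specificity = TN / (TN + FP)   # P(test negative | disease absent)
ppv = TP / (TP + FP)           # positive predictive value
npv = TN / (TN + FN)           # negative predictive value
lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio

print(sensitivity, specificity, ppv, npv, lr_positive)
```

Note that sensitivity and specificity are properties of the test, while the predictive values also depend on how common the disease is in the sampled population.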

Book chapters on the topic "Negative squares"

1

Sasvári, Zoltán. "On the number of negative squares of certain functions." In Contributions to Operator Theory in Spaces with an Indefinite Metric. Birkhäuser Basel, 1998. http://dx.doi.org/10.1007/978-3-0348-8812-7_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Mu, and Xiaoge Wang. "A Jump-Start of Non-negative Least Squares Solvers." In High-Performance Scientific Computing. Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2437-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Constantinescu, Tiberiu. "Schur Analysis for Matrices with a Finite Number of Negative Squares." In Advances in Invariant Subspaces and Other Results of Operator Theory. Birkhäuser Basel, 1986. http://dx.doi.org/10.1007/978-3-0348-7698-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Franc, Vojtěch, Václav Hlaváč, and Mirko Navara. "Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11556121_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Berg, Christian. "On the uniqueness of minimal definitizing polynomials for a sequence with finitely many negative squares." In Harmonic Analysis. Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0086590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Heinrich, Mattias P., Matthias Wilms, and Heinz Handels. "Multi-atlas Segmentation Using Patch-Based Joint Label Fusion with Non-Negative Least Squares Regression." In Patch-Based Techniques in Medical Imaging. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28194-0_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Araki, Ryuichiro, and Ichiro Nashimoto. "Multicomponent Analysis of Near-Infrared Spectra of Anesthetized Rat Head: (II) Quantitative Multivariate Analysis of Hemoglobin and Cytochrome Oxidase by Non-Negative Least Squares Method." In Oxygen Transport to Tissue XI. Springer US, 1989. http://dx.doi.org/10.1007/978-1-4684-5643-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mulama, Olga Nekesa, and Caroline Wanjiru Kariuki. "Panel Analysis of the Relationship Between Weather Variability and Sectoral Output in Kenya." In African Handbook of Climate Change Adaptation. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_77.

Full text
Abstract:
Climate change and economic growth are closely connected. Climate change has the potential to reduce economic growth in developing countries due to their limited ability to respond to the negative impacts of a changing climate. A better understanding of weather variability can enhance climate change policies, which would help to support economic growth in these countries. As such, this research sought to examine whether there is a long-run relationship between sectoral output and weather variables (temperature and rainfall) and to analyze the effect of weather variability on sectoral output using a panel of 13 sectors in Kenya. A Pedroni cointegration test was carried out to find out whether there exists a long-run relationship among the variables and, thereafter, a fully modified ordinary least squares regression was conducted to establish the effect of weather variability on sectoral output. The results indicate that there is a long-run relationship between temperature and sectoral output. Moreover, temperature has a larger effect on sectoral output than rainfall. With the evidence gathered from this research, it can be concluded that weather variability has an economic effect on sectoral output in Kenya. Given this, the Kenyan government needs to take a keen interest in understanding the effect of weather variability on the economy and, in the broader picture, take steps to mitigate climate change.
APA, Harvard, Vancouver, ISO, and other styles
9

Xin Wei, Sha. "The Square Root of Negative One Is Imaginary." In Ontogenesis Beyond Complexity. Routledge, 2021. http://dx.doi.org/10.4324/9781003146858-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Beziau, Jean-Yves. "Round Squares Are No Contradictions (Tutorial on Negation Contradiction and Opposition)." In Springer Proceedings in Mathematics & Statistics. Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2719-9_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
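Several of the chapters above (Wang and Wang; Franc, Hlaváč, and Navara; Heinrich, Wilms, and Handels) concern the non-negative least squares (NNLS) problem: minimize ||Ax − b||² subject to x ≥ 0. As an illustration only, and not the exact algorithm of any listed chapter, a coordinate-wise sketch in Python/NumPy in the spirit of the sequential coordinate-wise approach: each pass sets one coordinate to its exact non-negative minimizer while the others are held fixed.

```python
import numpy as np

def nnls_coordinatewise(A, b, n_iter=500):
    """Minimize ||A x - b||^2 subject to x >= 0 by cycling over
    coordinates; each coordinate is moved to its unconstrained
    one-dimensional minimizer, then clipped at zero."""
    H = A.T @ A          # Hessian of the quadratic objective
    f = A.T @ b
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for k in range(len(x)):
            # exact minimizer along coordinate k, projected onto x_k >= 0
            x[k] = max(0.0, x[k] + (f[k] - H[k] @ x) / H[k, k])
    return x

# Tiny check: with A = I the solution is simply max(b, 0) elementwise.
A = np.eye(3)
b = np.array([1.0, -2.0, 3.0])
print(nnls_coordinatewise(A, b))  # approximately [1. 0. 3.]
```

Each coordinate update is cheap (one row of H times x), which is why coordinate-wise schemes are attractive for large sparse problems; production code would add a convergence test instead of a fixed iteration count.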

Conference papers on the topic "Negative squares"

1

Sun, Qianqian, Zhentian Zhang, Jian Dang, and Zaichen Zhang. "Non-Negative Least Squares Exploiting Multiple Measurements in Unsourced Random Access." In 2024 16th International Conference on Wireless Communications and Signal Processing (WCSP). IEEE, 2024. https://doi.org/10.1109/wcsp62071.2024.10827794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tamas, Anca. "THE IMPACT OF DEMOGRAPHICS ON ECONOMIC DEVELOPMENT OF THE BRICS COUNTRIES." In 11th SWS International Scientific Conferences on SOCIAL SCIENCES - ISCSS 2024. SGEM WORLD SCIENCE, 2024. https://doi.org/10.35603/sws.iscss.2024/s03/24.

Full text
Abstract:
The aim of this paper is to assess the impact of demographics on the economic development of the BRICS countries during 2009-2021. The BRICS countries (Brazil, Russia, India, China and South Africa) have been known as a group since 2009 (2010 for South Africa); they are fast-growing economies, emerging markets in the second stage of demographic transition, and highly heterogeneous countries in most socioeconomic features. Because they account for more than 40% of the global population, the demographic impact on economic development should be significant. EViews 8 was used for linear regression with Ordinary Least Squares, where the dependent variable is GDP per capita and the independent variables are age dependency rate, death rate, fertility rate and labor force population 15-64. Correlations, a Unit Root Test, a Collinearity Test and Granger Causality tests were performed as well. The findings: There is a strong, positive correlation between GDP per capita and labor force 15-64, and negative correlations between GDP per capita and fertility rate and age dependency rate respectively, which is in line with previous studies. Among the demographic indicators, fertility rate has a negative influence on GDP per capita, while age dependency rate, death rate and labor force have a positive influence. Death rate was found to have a positive influence on GDP per capita, in contradiction with previous studies. There is Granger causality between labor force and GDP, as well as between GDP per capita and life expectancy.
APA, Harvard, Vancouver, ISO, and other styles
3

Tamas, Anca. "DETERMINANTS OF THE TRADE BETWEEN BRICS COUNTRIES A GRAVITY MODEL APPROACH." In 11th SWS International Scientific Conferences on SOCIAL SCIENCES - ISCSS 2024. SGEM WORLD SCIENCE, 2024. https://doi.org/10.35603/sws.iscss.2024/s03/22.

Full text
Abstract:
The aim of this paper is to find out the main determinants of the bilateral trade between the BRICS countries. The BRICS countries are fast-growing emerging markets, relatively recently known as a group, that account for more than a third of the world's land and more than 40% of the world's population. The augmented gravity model was used because it is the model that explains most international bilateral trade, and a panel approach was chosen to address heteroskedasticity issues. A linear regression using the Panel Least Squares Method was performed, with bilateral trade as the dependent variable. The findings: GDP, population, common language and trade openness have a positive influence on bilateral trade, which is in line with previous studies. Geographical distance (as a proxy for trade costs), the exchange rate, the market concentration index, inflation and the export market penetration index had a negative impact on the bilateral trade of the BRICS countries in the 2009-2021 period.
APA, Harvard, Vancouver, ISO, and other styles
4

Yun, Joosun, Byongjin Ma, Guesuk Lee, et al. "Extracting Time-Constant Spectra by the Subspace Barzilai and Borwein Non-Negative Least Square Algorithm." In 2024 30th International Workshop on Thermal Investigations of ICs and Systems (THERMINIC). IEEE, 2024. http://dx.doi.org/10.1109/therminic62015.2024.10732283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yaghoobi, Mehrdad, and Mike E. Davies. "Fast non-negative orthogonal least squares." In 2015 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015. http://dx.doi.org/10.1109/eusipco.2015.7362429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Elvander, Filip, Stefan Ingi Adalbjornsson, and Andreas Jakobsson. "Robust non-negative least squares using sparsity." In 2016 24th European Signal Processing Conference (EUSIPCO). IEEE, 2016. http://dx.doi.org/10.1109/eusipco.2016.7760210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Ran, Tian Gao, and Mingqiang Guo. "Reconstructing Negative Survey with Least Squares Criterion." In 2022 4th International Conference on Data Intelligence and Security (ICDIS). IEEE, 2022. http://dx.doi.org/10.1109/icdis55630.2022.00024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Anuar, N., and C. Marianno. "Gamma Source Imaging Using Non-Negative Least Squares." In Tranactions - 2019 Winter Meeting. AMNS, 2019. http://dx.doi.org/10.13182/t30720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sardarabadi, A. M., M. Coutino, F. Uysal, S. E. Kotti, and L. Anitori. "Radar sparse signal processing by non-negative least-squares estimation." In International Conference on Radar Systems (RADAR 2022). Institution of Engineering and Technology, 2022. http://dx.doi.org/10.1049/icp.2022.2339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yanık, Bahar, and Ayşe Kalaycı Önaç. "Effects of Sound Perception in Square Design." In 7th International Students Science Congress. Izmir International guest Students Association, 2023. http://dx.doi.org/10.52460/issc.2023.051.

Full text
Abstract:
Public spaces contribute to people's psychological and mental well-being by enabling them to interact socially with others. Public squares serve a social communication purpose by enabling people from different social, cultural and economic levels to come together in a common area. Squares are spaces open to everyone, with different belongings and different social perspectives. This social interaction allows people to perceive others, themselves and the environment. Surrounded by other structures, these spaces have a balancing function in today's crowded and congested urban fabric. These urban voids function as intersections connecting roads and areas. In addition, squares should allow people to see, be seen, relax and communicate with others. Therefore, it is important to consider the user-attracting effects and environmental conditions of squares. Although auditory perception carries less information than visual perception, it is richer in emotion. In addition to natural and cultural data, soundscape perception data should also be considered in the design of a square. In this study, the effects of sound on square users are investigated. Sound types with positive and negative effects were examined. Design recommendations have been developed to eliminate the negative effects of sound.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Negative squares"

1

Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.

Full text
Abstract:
In this paper, least squares and maximum likelihood estimates are determined under univariate and multivariate linear models with a priori information about maximum effects in the models. Both loss functions (least squares and negative log-likelihood) and the constraints are convex, so convex optimization theory can be utilized to obtain the estimates, which in this paper are called Safety belt estimates. In particular, the complementary slackness condition, common in convex optimization, implies two alternative types of solutions, strongly dependent on the data and the restriction. It is experimentally shown that, despite the similarity to ridge regression estimation under the univariate linear model, the Safety belt estimates usually behave better than estimates obtained via ridge regression. Moreover, for the multivariate model, the proposed technique represents a completely novel approach.
APA, Harvard, Vancouver, ISO, and other styles
2

Koechlin, Valerie, and Gianmarco León. International Remittances and Income Inequality: An Empirical Investigation. Inter-American Development Bank, 2006. http://dx.doi.org/10.18235/0010865.

Full text
Abstract:
The aim of this paper is to provide comprehensive empirical evidence on the relationship between international remittances and income inequality. In simple cross-country regressions we find a non-monotonic link between these two variables when using ordinary least squares and instrumental variables; we also test our hypothesis using dynamic panel data methods. We provide evidence in support of existing theoretical work on network effects, which describes how, in the first stages of a migration history, remittances have an inequality-increasing effect. Then, as these network effects lower the opportunity cost of migrating, remittances sent to those households have a negative impact on inequality. We also show how education and the development of the financial sector can help countries reach the inequality-decreasing section of the curve more quickly. Our results are robust to several empirical specifications, as well as to a wide variety of inequality measures.
APA, Harvard, Vancouver, ISO, and other styles
3

Saha, Amrita. The Welfare Effects of Trade Preferences Removal: Evidence for UK–India Trade. Institute of Development Studies, 2025. https://doi.org/10.19088/ids.2025.006.

Full text
Abstract:
This paper examines the welfare effects of the unilateral trade preferences scheme of the Generalized System of Preferences (GSP) for United Kingdom (UK)–India trade on households in India. The design of unilateral trade preference schemes has been linked to significant uncertainty about preferential market access, and the removal of trade preferences requires adjustments that raise trade costs, with corresponding effects for workers in export-reliant sectors in beneficiary countries. I investigate India’s sectoral graduations from the European Union (EU) GSP in 2014 and 2017 that were applicable to the UK until 2020. I also predict the welfare effects of the UK’s Developing Countries Trading Scheme (DCTS) as a hypothetical scenario. These scenarios are important because they reflect the combined effect of higher tariffs and the uncertainty attached to unilateral preference schemes, both of which could be addressed by a move to a free trade agreement (FTA). Also, there is limited empirical evidence on how unexpected changes in unilateral preferences such as the GSP may specifically affect employment and wage dynamics. Given that the UK aims to balance trade preferences with social equity, this lack of evidence should be of particular importance to the UK. I compute a trade exposure measure and estimate its differential welfare effects using an ordinary least squares (OLS) and quantile regression approach with UK–India trade data and granular high-frequency household-level data for India from 2014–19. I find considerable differences in the effects of district-level trade exposure to the UK on household welfare in India. The negative welfare effect of exposure to graduated sectors appears in wage incomes, especially for lower-income households in urban areas. The overall results show GSP removals affecting the poorest, while the benefits concentrate at higher income levels.
The DCTS predictions suggest that these sectoral graduations are arguably designed in a more targeted way, as the negative effects appear across all income levels and the benefits also span the income distribution.
APA, Harvard, Vancouver, ISO, and other styles
4

Verdisco, Aimee, Jennelle Thompson, and Santiago Cueto. Early Childhood Development: Wealth, the Nurturing Environment and Inequality First Results from the PRIDI Database. Inter-American Development Bank, 2016. http://dx.doi.org/10.18235/0011753.

Full text
Abstract:
This paper presents findings from the Regional Project on Child Development Indicators, PRIDI for its acronym in Spanish. PRIDI created a new tool, the Engle Scale, for evaluating development in children aged 24 to 59 months in four domains: cognition, language and communication, socio-emotional and motor skills. It also captures and identifies factors associated with child development. The Engle Scale was applied in nationally representative samples in four Latin American countries: Costa Rica, Nicaragua, Paraguay and Peru. The results presented here are descriptive, but they offer new insight regarding the complexity of child development in Latin America. The basic message emerging from this study is that child development in Latin America is unequal. Inequality in results appears as early as 24 months and increases with age. There is variation in this inequality. For example, correlations with the socio-economic characteristics of the home and maternal education are stronger for cognition, and for language and communication, than for motor development. The environment within which children develop and the adult-child interactions predominant within it, referred to in this study as the nurturing environment, are important for all domains of child development used in this study, although stronger associations appear for cognition, language and communication, and socio-emotional development. For all domains measured by the Engle Scale, the nurturing environment bears a statistically stronger correlation than the socio-economic endowment of the home or maternal education. Gaps between the development of children at the top and bottom extremes of these factors matter. By 59 months, the development of a poor and under-nurtured child will lag by as much as 18 months behind that of her richer and more nurtured peers. For this child it will be more difficult to recognize basic shapes like triangles or squares, count to 20, or understand temporal sequences.
She will also have gaps in her basic executive functioning and socio-emotional skills, including empathy and autonomy. She will not likely be ready for school and may not have success once there. Notably, however, if this same child, in the same poor household, were to benefit from a nurturing environment, her level of development would rise and would start to approach levels found in children in richer but less nurtured households. The nurturing environment thus appears to mitigate the negative association lower levels of wealth have with the domains of development included in this study.
APA, Harvard, Vancouver, ISO, and other styles
5

Yung, P. INDUSTRIAL HYGIENE ASBESTOS NEGATIVE EXPOSURE ASSESSMENT Class III Wallboard penetration and removal up to 10 square feet of wallboard with asbestos containing joint compound. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1598112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Smith, Donald L., Denise Neudecker, and Roberto Capote Noy. Investigation of the Effects of Probability Density Function Kurtosis on Evaluated Data Results. IAEA Nuclear Data Section, 2018. http://dx.doi.org/10.61092/iaea.yxma-3y50.

Full text
Abstract:
In two previous investigations that are documented in this IAEA report series, we examined the effects of non-Gaussian, non-symmetric probability density functions (PDFs) on the outcomes of data evaluations. Most of this earlier work involved considering just two independent input data values and their respective uncertainties. They were used to generate one evaluated data point. The input data are referred to, respectively, as the mean value and standard deviation pair (y0,s0) for a prior PDF p0(y) and a second mean value and standard deviation pair (ye,se) for a likelihood PDF pe(y). Conceptually, these input data could be viewed as resulting from theory (subscript “0”) and experiment (subscript “e”). In accordance with Bayes Theorem, the evaluated mean value and standard deviation pair (ysol,ssol) corresponds to the posterior PDF p(y) which is related to p0(y) and pe(y) by p(y) = Cp0(y)pe(y). The prior and likelihood PDFs are both assumed to be normalized so that they integrate to unity for all y ≥ 0. Negative values of y are viewed as non-physical so they are not permitted. The product function p0(y)pe(y) is not normalized, so a positive multiplicative constant C is required to normalize p(y). In the earlier work, both normal (Gaussian) and lognormal functions were considered for the prior PDF. The likelihood functions were all taken to be Gaussians. Gaussians are symmetric, with zero skewness, and they always possess a fixed kurtosis of 3. Lognormal functions are inherently skewed, with widely varying values of skewness and kurtosis that depend on the function parameters. In order to explore the effects of kurtosis, distinct from skewness, the present work constrains the likelihood function to be Gaussian, and it considers three distinct, inherently symmetric prior PDF types: Gaussian (kurtosis = 3), Continuous Uniform (kurtosis = 1.8), and Laplace (kurtosis = 6). A product of two Gaussians produces a Gaussian even if ye ≠ y0. 
The product of a Gaussian PDF and a Uniform PDF, or a Laplace PDF, yields a symmetric PDF with zero skewness only when ye = y0. A pure test of the effect of kurtosis on an evaluation is provided by considering combinations of s0 and se with ye = y0. The present work also investigates the extent to which p(y) exhibits skewness when ye ≠ y0, again by considering various values for s0 and se. The Bayesian results from numerous numerical examples have been compared with corresponding least-squares solutions in order to arrive at some general conclusions regarding how the evaluated result (ysol,ssol) depends on various combinations of the input data y0, s0, ye, and se as well as on prior-likelihood PDF combinations: Gaussian-Gaussian, Uniform-Gaussian, and Laplace-Gaussian.
APA, Harvard, Vancouver, ISO, and other styles
7

Smith, Donald L., Denise Neudecker, and Roberto Capote Noy. Investigation of the Effects of Probability Density Function Kurtosis on Evaluated Data Results. IAEA Nuclear Data Section, 2020. http://dx.doi.org/10.61092/iaea.nqsh-f02d.

Full text
Abstract:
In two previous investigations that are documented in this IAEA report series, we examined the effects of non-Gaussian, non-symmetric probability density functions (PDFs) on the outcomes of data evaluations. Most of this earlier work involved considering just two independent input data values and their respective uncertainties. They were used to generate one evaluated data point. The input data are referred to, respectively, as the mean value and standard deviation pair (y0,s0) for a prior PDF p0(y) and a second mean value and standard deviation pair (ye,se) for a likelihood PDF pe(y). Conceptually, these input data could be viewed as resulting from theory (subscript “0”) and experiment (subscript “e”). In accordance with Bayes Theorem, the evaluated mean value and standard deviation pair (ysol,ssol) corresponds to the posterior PDF p(y) which is related to p0(y) and pe(y) by p(y) = Cp0(y)pe(y). The prior and likelihood PDFs are both assumed to be normalized so that they integrate to unity for all y ≥ 0. Negative values of y are viewed as non-physical so they are not permitted. The product function p0(y)pe(y) is not normalized, so a positive multiplicative constant C is required to normalize p(y). In the earlier work, both normal (Gaussian) and lognormal functions were considered for the prior PDF. The likelihood functions were all taken to be Gaussians. Gaussians are symmetric, with zero skewness, and they always possess a fixed kurtosis of 3. Lognormal functions are inherently skewed, with widely varying values of skewness and kurtosis that depend on the function parameters. In order to explore the effects of kurtosis, distinct from skewness, the present work constrains the likelihood function to be Gaussian, and it considers three distinct, inherently symmetric prior PDF types: Gaussian (kurtosis = 3), Continuous Uniform (kurtosis = 1.8), and Laplace (kurtosis = 6). A product of two Gaussians produces a Gaussian even if ye ≠ y0. 
The product of a Gaussian PDF and a Uniform PDF, or a Laplace PDF, yields a symmetric PDF with zero skewness only when ye = y0. A pure test of the effect of kurtosis on an evaluation is provided by considering combinations of s0 and se with ye = y0. The present work also investigates the extent to which p(y) exhibits skewness when ye ≠ y0, again by considering various values for s0 and se. The Bayesian results from numerous numerical examples have been compared with corresponding least-squares solutions in order to arrive at some general conclusions regarding how the evaluated result (ysol,ssol) depends on various combinations of the input data y0, s0, ye, and se as well as on prior-likelihood PDF combinations: Gaussian-Gaussian, Uniform-Gaussian, and Laplace-Gaussian.
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, D. L., D. Neudecker, and R. Capote Noy. Investigation of the Effects of Probability Density Function Kurtosis on Evaluated Data Results. IAEA Nuclear Data Section, 2020. http://dx.doi.org/10.61092/iaea.3ar5-xmp8.

Full text
Abstract:
In two previous investigations that are documented in this IAEA report series, we examined the effects of non-Gaussian, non-symmetric probability density functions (PDFs) on the outcomes of data evaluations. Most of this earlier work involved considering just two independent input data values and their respective uncertainties. They were used to generate one evaluated data point. The input data are referred to, respectively, as the mean value and standard deviation pair (y0,s0) for a prior PDF p0(y) and a second mean value and standard deviation pair (ye,se) for a likelihood PDF pe(y). Conceptually, these input data could be viewed as resulting from theory (subscript “0”) and experiment (subscript “e”). In accordance with Bayes Theorem, the evaluated mean value and standard deviation pair (ysol,ssol) corresponds to the posterior PDF p(y) which is related to p0(y) and pe(y) by p(y) = Cp0(y)pe(y). The prior and likelihood PDFs are both assumed to be normalized so that they integrate to unity for all y ≥ 0. Negative values of y are viewed as non-physical so they are not permitted. The product function p0(y)pe(y) is not normalized, so a positive multiplicative constant C is required to normalize p(y). In the earlier work, both normal (Gaussian) and lognormal functions were considered for the prior PDF. The likelihood functions were all taken to be Gaussians. Gaussians are symmetric, with zero skewness, and they always possess a fixed kurtosis of 3. Lognormal functions are inherently skewed, with widely varying values of skewness and kurtosis that depend on the function parameters. In order to explore the effects of kurtosis, distinct from skewness, the present work constrains the likelihood function to be Gaussian, and it considers three distinct, inherently symmetric prior PDF types: Gaussian (kurtosis = 3), Continuous Uniform (kurtosis = 1.8), and Laplace (kurtosis = 6). A product of two Gaussians produces a Gaussian even if ye ≠ y0. 
The product of a Gaussian PDF and a Uniform PDF, or a Laplace PDF, yields a symmetric PDF with zero skewness only when ye = y0. A pure test of the effect of kurtosis on an evaluation is provided by considering combinations of s0 and se with ye = y0. The present work also investigates the extent to which p(y) exhibits skewness when ye ≠ y0, again by considering various values for s0 and se. The Bayesian results from numerous numerical examples have been compared with corresponding least-squares solutions in order to arrive at some general conclusions regarding how the evaluated result (ysol,ssol) depends on various combinations of the input data y0, s0, ye, and se as well as on prior-likelihood PDF combinations: Gaussian-Gaussian, Uniform-Gaussian, and Laplace-Gaussian.
APA, Harvard, Vancouver, ISO, and other styles
9

Fee, Kyle D. Income Inequality and Economic Growth in United States Counties: 1990s, 2000s and 2010s. Federal Reserve Bank of Cleveland, 2025. https://doi.org/10.26509/frbc-wp-202505.

Full text
Abstract:
Using a common reduced-form regional growth model framework, an expanded geographic classification of counties, additional years of data, a trio of income inequality metrics, and multiple empirical specifications, this analysis confirms and builds upon the notion that the nature of the relationship between income inequality and economic growth varies across geography (Fallah and Partridge, 2007). A positive relationship between an income Gini coefficient and per capita income growth is observed only in central metro counties with population densities greater than 915 people per square mile, or in about 5 percent of all counties, whereas previous research found a positive relationship in all metropolitan counties (27 percent of counties) and a negative relationship in nonmetropolitan counties. Where the inequality lies in the distribution is also shown to affect this relationship. Inequality in the top and bottom halves of the income distribution has a positive relationship with growth within this 5 percent of counties. However, in most locations (the other 95 percent of counties), inequality in the bottom half of the income distribution has either no statistical relationship with growth or a positive relationship, while inequality in the top half of the income distribution tends to have a negative relationship. These patterns are relatively stable over time but tend not to be robust to the inclusion of county fixed effects. These results provide some evidence that the mechanisms explaining how this relationship varies across places are more likely associated with agglomeration and market incentives than with social cohesion. This analysis also highlights the need for a robust research agenda focused on further refining the growth model, along with incorporating new data sources and concepts of income inequality.
APA, Harvard, Vancouver, ISO, and other styles
10

Dashtey, Ahmed, Patrick Mormile, Sandra Pedre, Stephany Valdaliso, and Walter Tang. Prediction of PFOA and PFOS Toxicity through Log P and Number of Carbon with CompTox and Machine Learning Tools. Florida International University, 2024. http://dx.doi.org/10.25148/ceefac.2024.00202400.

Full text
Abstract:
Perfluorooctanoic acid (PFOA) and perfluorooctane sulfonic acid (PFOS), two major groups of PFAS, will be subject to the Maximum Contaminant Level (MCL) of 4 ng/l in drinking water to be implemented by the U.S. EPA by 2025. Accurately predicting the toxicity of PFAS of varied carbon chain length is important for treatment and subsequent removal from drinking water. This study presents Quantitative Structure-Activity Relationship (QSAR) models developed through both linear regression and second-order regression. Log P is compiled from references and carbon content is counted for each molecule; bioconcentration potential is predicted from CompTox. The results suggest that as log P and carbon content increase, the bioconcentration potential of PFCAs also increases. In other words, larger PFCA molecules tend to bioaccumulate more readily in living organisms. This finding is crucial because bioconcentration refers to the accumulation of substances from water directly into living organisms through passive diffusion across cell membranes. On the other hand, the 96-hour fathead minnow LC50 has an inverse relationship, with higher LC50 values associated with lower log P and fewer carbons. The varying R-squared values across methods indicate differing degrees of correlation, underscoring the impact of compound structure on aquatic toxicity. Similarly, for oral rat LD50 and 48-hour D. magna LC50, the R-squared values reflect moderate to strong correlations with log P and the number of carbons. As log P and carbon content decrease, the toxicity expressed as LC50 or LD50 increases. This relationship underscores the role of chemical properties in influencing the toxicity of PFCAs across different organisms and exposure routes. For instance, the negative correlation between log P and aquatic toxicity (96-hour fathead minnow LC50 and 48-hour D.
magna LC50) suggests that compounds with higher hydrophobicity (higher log P) and more carbons may exhibit lower acute toxicity to aquatic organisms.
APA, Harvard, Vancouver, ISO, and other styles
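For the Gaussian prior and Gaussian likelihood case described in the three IAEA reports above, the posterior p(y) = C p0(y) pe(y) is again Gaussian, with a precision-weighted mean and summed precisions. This is the standard closed form; the sketch below ignores the reports' truncation to y ≥ 0, which only matters when appreciable probability mass sits near y = 0.

```python
import math

def gaussian_posterior(y0, s0, ye, se):
    """Combine a Gaussian prior N(y0, s0^2) with a Gaussian likelihood
    N(ye, se^2): the normalized product is Gaussian with
    precision-weighted mean and combined precision."""
    w0, we = 1.0 / s0**2, 1.0 / se**2      # precisions (inverse variances)
    ysol = (w0 * y0 + we * ye) / (w0 + we) # posterior mean
    ssol = math.sqrt(1.0 / (w0 + we))      # posterior standard deviation
    return ysol, ssol

# With equal uncertainties the evaluated mean is the midpoint,
# and the evaluated standard deviation shrinks below either input.
print(gaussian_posterior(1.0, 1.0, 3.0, 1.0))
```

This also makes the reports' setup concrete: only when the prior or likelihood is non-Gaussian (uniform, Laplace, lognormal) does the posterior shape, and hence kurtosis or skewness, depart from this simple result.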