
Dissertations / Theses on the topic 'Information aggregation'

Consult the top 50 dissertations / theses for your research on the topic 'Information aggregation.'

1

Schulte, Elisabeth. "Information aggregation in organizations." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:180-madoc-13540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mulanda, Chilongo D. "Social network effects on information aggregation." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/55264.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57-58).
In this thesis, we investigated how sociometric information can be used to improve different methods of aggregating dispersed information. We compared four approaches to information aggregation: a vanilla opinion poll, opinion polls where sociometric data is inferred from the population's own perception of social connectivity, opinion polls where sociometric data is obtained independently of the population's beliefs, and data aggregation using market mechanisms. On comparing the entropy of the error between each method's prediction and the truth, preliminary results suggest that sociometric data does indeed improve information aggregation. The results also raise interesting questions about the relevance and application of different kinds of sociometric data, as well as the somewhat surprising efficiency of information market mechanisms.
by Chilongo D. Mulanda.
M.Eng.
3

Han, Simeng. "Statistical Methods for Aggregation of Indirect Information." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11348.

Abstract:
Properly aggregating indirect information is increasingly important. In this dissertation, we present two aspects of the issue: indirect comparison of treatment effects and aggregation of order-based rank data.
Statistics
4

Lobel, Ilan. "Social networks : rational learning and information aggregation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54232.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 137-140).
This thesis studies the learning problem of a set of agents connected via a general social network. We address the question of how dispersed information spreads in social networks and whether the information is efficiently aggregated in large societies. The models developed in this thesis allow us to study the learning behavior of rational agents embedded in complex networks. We analyze the perfect Bayesian equilibrium of a dynamic game where each agent sequentially receives a signal about an underlying state of the world, observes the past actions of a stochastically-generated neighborhood of individuals, and chooses one of two possible actions. The stochastic process generating the neighborhoods defines the network topology (social network). We characterize equilibria for arbitrary stochastic and deterministic social networks and characterize the conditions under which there will be asymptotic learning -- that is, the conditions under which, as the social network becomes large, the decisions of the individuals converge (in probability) to the right action. We show that when private beliefs are unbounded (meaning that the implied likelihood ratios are unbounded), there will be asymptotic learning as long as there is some minimal amount of expansion in observations. This result therefore establishes that, with unbounded private beliefs, there will be asymptotic learning in almost all reasonable social networks. Furthermore, we provide bounds on the speed of learning for some common network topologies. We also analyze when learning occurs when the private beliefs are bounded.
(cont.) We show that asymptotic learning does not occur in many classes of network topologies, but, surprisingly, it happens in a family of stochastic networks that has infinitely many agents observing the actions of neighbors that are not sufficiently persuasive. Finally, we characterize equilibria in a generalized environment with heterogeneity of preferences and show that, contrary to a naive intuition, greater diversity (heterogeneity) facilitates asymptotic learning when agents observe the full history of past actions. In contrast, we show that heterogeneity of preferences hinders information aggregation when each agent observes only the action of a single neighbor.
by Ilan Lobel.
Ph.D.
5

Wang, John (John Michael) 1976. "Information aggregation and dissemination in simulated markets." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80140.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (leaf 39).
by John Wang.
S.B. and M.Eng.
6

Kotronis, Stelios. "Information aggregation in dynamic markets under ambiguity." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/411958/.

Abstract:
Does ambiguity affect the efficiency of information aggregation in dynamic markets? To date, only a sparse and fragmented literature points towards an answer. This thesis studies dynamic markets under ambiguity and examines under what conditions information gets aggregated. Three perspectives are investigated: i) Does information get aggregated when traders are myopic and ambiguity averse? The first chapter proves that information gets aggregated only when a 'separable under ambiguity' security is traded; if the security is not 'separable under ambiguity', then there exist markets in which information does not get aggregated. The class of 'separable under ambiguity' securities is proved to be non-trivial. Finally, it is proved that even if the security is not 'separable under ambiguity', traders will reach an agreement about the price of the security. ii) Does information get aggregated when traders are strategic and ambiguity averse? By appropriately defining an equilibrium concept for infinite-horizon games of incomplete information in a setting with ambiguity, it is proved that in a market with a 'separable under ambiguity' security, information gets aggregated in every equilibrium in pure strategies. The second chapter concludes by proving that when the security is not 'separable under ambiguity', there exists an equilibrium in which information does not get aggregated. iii) Are the previous theoretical predictions met in real life? The third chapter presents a laboratory experiment whose design follows the theoretical models of the first two chapters. The results of the experiment provide significant evidence in favor of the results of the first two chapters.
7

Tam, Wing-yan. "Quality of service routing with path information aggregation." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36782956.

8

Tam, Wing-yan, and 譚泳茵. "Quality of service routing with path information aggregation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36782956.

9

Suen, Benny (Benny Hung Kit) 1975. "Internet information aggregation using the Context Interchange framework." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46187.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
by Benny Suen.
B.S.
M.Eng.
10

Ozkes, Ali Ihsan. "Essays on hyper-preferences, polarization and information aggregation." Palaiseau, Ecole polytechnique, 2014. https://tel.archives-ouvertes.fr/pastel-01071827/document.

Abstract:
In this thesis, some important problems and properties of collective decision-making are studied. First, a stability property of preference aggregation rules is introduced and some well-known classes of rules are tested in this regard. Second, the measurement of preferential polarization is studied, both theoretically and empirically. Finally, strategic behavior in information aggregation situations is investigated in light of a bounded rationality model, both theoretically and experimentally. The stability notion studied in the first part of the thesis is imposed on social welfare functions in particular, and requires that the outcome of these functions be robust to the reduction in preference submission that arguably takes place when individuals submit a ranking of alternatives and outcomes are also restricted to be rankings. Given the preference profile of a society, that is, a collection of rankings of alternatives, a compatible collection of rankings of rankings is extracted, and the outcomes of social welfare functions at these two levels are compared. It turns out that no scoring rule gives consistent results, although there may exist Condorcet-type rules that do. Polarization measures studied in the second part take the form of aggregations of pairwise antagonisms in a society. Public opinion polarization in the United States over the last three decades is analyzed in light of this view, using a well-known measure of polarization introduced in the income inequality literature. The conclusion is that no significant trend in public opinion polarization can be claimed to exist over the last several decades. Also, an adaptation of the same measure is shown to satisfy desirable properties for ordinal preference profiles when three alternatives are considered. Furthermore, a measure that is the aggregation of pairwise differences among individuals' preferences is characterized by a set of axioms.
In the final part of the thesis, information aggregation situations such as the one described in the Condorcet jury model are studied in light of the cognitive hierarchy approach to bounded rationality. Specifically, a laboratory experiment is run to test the theoretical predictions of the symmetric Bayesian Nash equilibrium concept. It is observed that behavior in the lab is not correctly captured by this concept, which assumes a strong notion of rationality and homogeneity among individuals' behaviors. To better describe the findings of the experiment, a novel model of cognitive hierarchy is developed and shown to perform better than both the strong rationality approach and previous cognitive hierarchy models. This endogenous cognitive hierarchy model is compared theoretically to previous models of cognitive hierarchy and shown to improve upon them in certain classes of games.
11

Li, Hui. "A configurable online reputation aggregation system." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27998.

Abstract:
Online reputation systems are currently receiving increased attention while online interactions are flourishing. However, they lack one important feature: globality. Users are allowed to build a reputation within one online community, and sometimes several reputations within several independent online communities, but each reputation is only valid within the corresponding community. Moreover, such reputation is usually aggregated by the provider of the online reputation system, giving the querying agent no say in the process. This thesis presents a novel solution to this problem. We conduct a literature review on existing trust and reputation models and classify these models using proper criteria. We introduce an online reputation system that collects reputation information about a ratee from several online communities and allows for this information to be aggregated according to the inquiring agent's own requirements. We propose a configurable aggregation method for local and global reputation based on a discrete statistical model, taking into account several factors and parameters that qualify the reputation. We also implement a prototype of the proposed reputation computation model.
12

Thornton, Michael Alan. "Information and aggregation : The econometrics of dynamic models of consumption under cross-sectional and temporal aggregation." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510508.

13

Pickard, Galen. "The use of domain knowledge in optimal information aggregation." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37080.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaf 35).
In this thesis, I present some novel results pertaining to the relationship between two popular and interesting information aggregation methods: the Condorcet and Borda tallies. I present numerical results showing how the much simpler Borda tally can be used to approximate the outcome of the Condorcet tally with high probability in certain circumstances, a proof that there exist classes of problems for which the two tallies can never agree, and an extension of these results to small-world graphs, which have been of great interest recently due to their practical applicability to many complex problems.
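The two tallies compared above can be illustrated on a toy preference profile; the ballots, candidate names, and helper functions below are an illustrative sketch, not code from the thesis.

```python
# Five ballots, each ranking candidates from most to least preferred.
# This profile is a classic case where the two tallies disagree.
ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2

def borda_winner(ballots):
    # Borda: a candidate earns (n - 1 - position) points on each ballot.
    scores = {}
    for b in ballots:
        for pos, cand in enumerate(b):
            scores[cand] = scores.get(cand, 0) + len(b) - 1 - pos
    return max(scores, key=scores.get)

def condorcet_winner(ballots):
    # Condorcet: the winner must beat every rival in pairwise majority
    # contests; no such candidate may exist (a Condorcet cycle).
    cands = ballots[0]
    for c in cands:
        if all(sum(b.index(c) < b.index(r) for b in ballots) > len(ballots) / 2
               for r in cands if r != c):
            return c
    return None
```

On this profile the Borda winner is B while the Condorcet winner is A, which is why agreement between the two tallies is a non-trivial question.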
by Galen Pickard.
M.Eng.
14

Rose, Harry. "Mechanism design for information aggregation within the smart grid." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/349119/.

Abstract:
The introduction of a smart electricity grid enables a greater amount of information exchange between consumers and their suppliers. This can be exploited by novel aggregation services to save money by more optimally purchasing electricity for those consumers. Now, if the aggregation service pays consumers for said information, then both parties could benefit. However, any such payment mechanism must be carefully designed to encourage the customers (say, home-owners) to exert effort in gathering information and then to truthfully report it to the aggregator. This work develops a model of the information aggregation problem where each home has an autonomous home agent, which acts on its behalf to gather information and report it to the aggregation agent. The aggregator has its own historical consumption information for each house under its service, so it can make an imprecise estimate of the future aggregate consumption of the houses for which it is responsible. However, it uses the information sent by the home agents in order to make a more precise estimate and, in return, gives each home agent a reward whose amount is determined by the payment mechanism in use by the aggregator. There are three desirable properties of a mechanism that this work considers: budget balance (the aggregator does not reward the agents more than it saves), incentive compatibility (agents are encouraged to report truthfully), and finally individual rationality (the payments to the home agents must outweigh their incurred costs). In this thesis, mechanism design is used to develop and analyse two mechanisms. The first, named the uniform mechanism, divides the savings made by the aggregator equally among the houses. This is Nash incentive compatible, strongly budget balanced, and individually rational. However, the agents' rewards are not fair insofar as each agent is rewarded equally irrespective of that agent's actual contribution to the savings.
This results in a smaller incentive for agents to produce precise reports. Moreover, it encourages undesirable behaviour from agents who are able to make the loads placed upon the grid more volatile such that they are harder to predict. To resolve these issues, a novel scoring rule-based mechanism named sum of others' plus max is developed, which uses the spherical scoring rule to more fairly distribute rewards to agents based on the accuracy and precision of their individual reports. This mechanism is weakly budget balanced, dominant strategy incentive compatible and individually rational. Moreover, it encourages agents to make their loads less volatile, such that they are more predictable. This has obvious advantages to the electricity grid. For example, the amount of spinning reserve generation can be reduced, reducing the carbon output of the grid and the cost per unit of electricity. This work makes use of both theoretical and empirical analysis in order to evaluate the aforementioned mechanisms. Theoretical analysis is used in order to prove budget balance, individual rationality and incentive compatibility. However, theoretical evaluation of the equilibrium strategies within each of the mechanisms quickly becomes intractable. Consequently, empirical evaluation is used to further analyse the properties of the mechanisms. This analysis is first performed in an environment in which agents are able to manipulate their reports. However, further analysis is provided which shows the behaviour of the agents when they are able to make themselves harder to predict. Such a scenario has thus far not been discussed within mechanism design literature. Empirical analysis shows the sum of others' plus max mechanism to provide greater incentives for agents to make precise predictions. Furthermore, as a result of this, the aggregator increases its utility through implementing the sum of others' plus max mechanism over the uniform mechanism and over implementing no mechanism. 
Moreover, in settings which allow agents to manipulate the volatility of their loads, it is shown that the uniform mechanism causes the aggregator to lose utility in comparison to using no mechanism, whereas in comparison to no mechanism, the sum of others' plus max mechanism causes an increase in utility to the aggregator.
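The spherical scoring rule used in the abstract above is a strictly proper scoring rule: an agent maximizes its expected reward by reporting its true belief. A minimal sketch follows; the two-outcome probabilities are made-up numbers, not values from the thesis.

```python
import math

def spherical_score(report, outcome):
    # Spherical scoring rule: reward p[outcome] / ||p||_2 for a reported
    # probability vector p once the outcome is observed.
    norm = math.sqrt(sum(p * p for p in report))
    return report[outcome] / norm

def expected_score(belief, report):
    # Expected reward of submitting `report` when `belief` is the agent's
    # true distribution over outcomes.
    return sum(q * spherical_score(report, i) for i, q in enumerate(belief))

# Truthful reporting beats distorting the belief in either direction.
belief = [0.7, 0.3]
assert expected_score(belief, belief) > expected_score(belief, [0.9, 0.1])
assert expected_score(belief, belief) > expected_score(belief, [0.5, 0.5])
```

This strict propriety is what lets a scoring-rule-based payment reward precise, truthful reports more than the uniform split does.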
15

Cheng, Kit-hung. "Top-k aggregation of ranked inputs." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B35506519.

16

Cheng, Kit-hung, and 鄭傑雄. "Top-k aggregation of ranked inputs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B35506519.

17

Rutberg, David. "Aggregation and visualization of test data." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-11926.

18

Olshevsky, Alexander. "Efficient information aggregation strategies for distributed control and signal processing." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62427.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 129-136).
This thesis will be concerned with distributed control and coordination of networks consisting of multiple, potentially mobile, agents. This is motivated mainly by the emergence of large scale networks characterized by the lack of centralized access to information and time-varying connectivity. Control and optimization algorithms deployed in such networks should be completely distributed, relying only on local observations and information, and robust against unexpected changes in topology such as link failures. We will describe protocols to solve certain control and signal processing problems in this setting. We will demonstrate that a key challenge for such systems is the problem of computing averages in a decentralized way. Namely, we will show that a number of distributed control and signal processing problems can be solved straightforwardly if solutions to the averaging problem are available. The rest of the thesis will be concerned with algorithms for the averaging problem and its generalizations. We will (i) derive the fastest known averaging algorithms in a variety of settings and subject to a variety of communication and storage constraints (ii) prove a lower bound identifying a fundamental barrier for averaging algorithms (iii) propose a new model for distributed function computation which reflects the constraints facing many large-scale networks, and nearly characterize the general class of functions which can be computed in this model.
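The decentralized averaging primitive the abstract builds on can be sketched as a synchronous iteration in which every agent mixes its value with its neighbors' values; the ring topology and the 1/2 mixing weight below are illustrative assumptions, not the algorithms derived in the thesis.

```python
def consensus_round(values, neighbors):
    # Lazy averaging step: keep half your own value, take half the average
    # of your neighbors. On a regular graph this update matrix is doubly
    # stochastic, so the global mean is preserved at every round.
    return [0.5 * v + 0.5 * sum(values[j] for j in neighbors[i]) / len(neighbors[i])
            for i, v in enumerate(values)]

# Four agents on a ring, each starting from a local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
values = [0.0, 4.0, 8.0, 4.0]
for _ in range(60):
    values = consensus_round(values, neighbors)
# All values have converged (numerically) to the global average 4.0.
```

Each node only ever uses its neighbors' values, which is the locality constraint that motivates the speed and lower-bound questions in the thesis.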
by Alexander Olshevsky.
Ph.D.
19

Klemens, Ben Jackson Matthew O. "Information aggregation, with application to monotone ordering, advocacy, and conviviality /." [Pasadena, Calif. : California Institute of Technology], 2003. http://www.fluff.info/klemens/.

20

Li, Mingyang. "Multi-Level Information Aggregation for Reliability Assurance of Hierarchical Systems." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/560825.

Abstract:
Reliability assurance of hierarchical systems is crucial for their health management in many mission-critical industries. Owing to the limited or absent reliability data and engineering knowledge available at the system level, and to complex system structure, system-level reliability assurance is challenging. To meet these challenges, the dissertation proposes a generic, flexible and recursive multi-level information aggregation framework that systematically utilizes multi-level reliability information throughout a system structure to improve the performance of a variety of system reliability assurance tasks. Specifically, the aggregation approach is first presented for aggregating complex reliability data structures (e.g., failure time data with covariates and different censoring) under fewer distributional assumptions, to improve the accuracy of system-level reliability modeling. The system structure is mainly restricted to hierarchical series-parallel systems with independent intra-level components and/or sub-systems. The aggregation approach is then extended to accommodate multi-state hierarchical systems by considering both probabilistic inter-level failure relationships and cascading intra-level failure dependency. Last, the aggregation approach is incorporated into the design of system-level reliability demonstration testing to achieve potential sample size reductions. Demonstration testing strategies with and without information aggregation are comprehensively compared, with closed-form conditions obtained. A series of case studies demonstrates that the proposed aggregation methodology can improve system reliability modeling accuracy and precision, and improve the cost-effectiveness of system reliability demonstration tests.
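For the simplest case of a hierarchical series-parallel system with independent components and known component reliabilities, the bottom-up aggregation reduces to two closed-form rules. The sketch below illustrates this; the two-level example system and its numbers are hypothetical, not taken from the dissertation.

```python
def series(*rels):
    # A series block works only if every element works.
    r = 1.0
    for x in rels:
        r *= x
    return r

def parallel(*rels):
    # A parallel block fails only if every element fails.
    q = 1.0
    for x in rels:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical two-level system: two redundant pumps (0.9 each) in
# parallel, in series with a controller (0.95).
# Aggregated system reliability: (1 - 0.1 * 0.1) * 0.95 = 0.9405.
system = series(parallel(0.9, 0.9), 0.95)
```

Applying the two rules recursively up the hierarchy yields a system-level estimate from component-level data, which is the starting point the dissertation generalizes to richer data structures and dependencies.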
21

Hsu, Tiffany. "Data preservation in intermittently connected sensor networks via data aggregation." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1527382.

Abstract:

Intermittently connected sensor networks are a subset of wireless sensor networks that have a high data volume and suffer from the problem of infrequent data offloading. When the generated data exceeds the storage capacity of the network between offloading opportunities, there must be some method of data preservation. This requires two phases: compressing the data and redistributing it. The use of data aggregation with consideration given to minimizing total energy is examined as a solution for the compression problem. Simulations of the optimal solution and an approximation heuristic are compared.

22

Tran-Thi-Thuy, Trang. "Secure data aggregation for wireless sensor network." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10912.

Abstract:
Like conventional networks, wireless sensor networks face serious security concerns. However, security in this type of network faces not only typical but also new challenges. Constrained devices, changing topology, and susceptibility to new security threats such as node capture and node compromise have kept developers from applying conventional security solutions to wireless sensor networks. Hence, developing security solutions for wireless sensor networks requires not only careful security analysis but also low power and processing consumption. In this thesis, we implemented a security solution targeting IRIS sensor motes. In our implementation, a public-key-based key exchange is used to establish shared secret keys between sensor nodes. These secret keys are used to provide authenticity, integrity and freshness for transmitted data. Our implementation ensures flexibility in integration with the available TinyOS operating system. Additionally, the thesis evaluates the performance of the solution in wireless sensor networks in terms of both memory and energy consumption.
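The authenticity, integrity, and freshness guarantees described above are commonly obtained by MACing each packet together with a monotonically increasing counter. The sketch below uses HMAC-SHA256 from the Python standard library as a stand-in for whatever primitives the thesis implements on the IRIS motes; the key, counter width, and tag truncation are illustrative choices, not details from the thesis.

```python
import hashlib
import hmac

TAG_LEN = 8  # truncated MAC tag, a common trade-off on constrained nodes

def protect(key, counter, payload):
    # Prepend a 4-byte counter (freshness) and append a MAC over
    # counter + payload (authenticity and integrity).
    msg = counter.to_bytes(4, "big") + payload
    tag = hmac.new(key, msg, hashlib.sha256).digest()[:TAG_LEN]
    return msg + tag

def verify(key, packet, last_counter):
    # Accept only packets with a valid tag and a strictly newer counter.
    msg, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(key, msg, hashlib.sha256).digest()[:TAG_LEN]
    counter = int.from_bytes(msg[:4], "big")
    ok = hmac.compare_digest(tag, expected) and counter > last_counter
    return ok, counter
```

A replayed packet fails the `counter > last_counter` check even though its tag is still valid, which is what the freshness guarantee amounts to.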
23

Xu, Jian. "Iterative Aggregation of Bayesian Networks Incorporating Prior Knowledge." Miami University / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=miami1105563019.

24

Eriksson, Linda. "Sequential Aggregation of Textual Features for Domain Independent Author Identification." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-156304.

Abstract:
In the area of Author Identification, many approaches have been made to identify the author of a written text. By identifying the individual variation that can be found in texts, features can be calculated. These feature values are commonly calculated by normalizing the values to an average value over the whole text. When using this kind of Simple feature, much of the variation that can be found in texts will not get captured. This project intends to use the sequential nature of the text to define Sequential features at sentence level. The theory is that the Sequential features will be able to capture more of the variation that can be found in the texts, compared to the Simple features. To evaluate these features, a classification of authors was made on several different datasets. The results showed that the Sequential features perform better than the Simple features in some cases; however, the difference was not large enough to confirm the theory that they are better than the Simple features.
APA, Harvard, Vancouver, ISO, and other styles
25

Rata, Cristina. "Voting and information aggregation. Theories and experiments in the tradition of condorcet." Doctoral thesis, Universitat Autònoma de Barcelona, 2002. http://hdl.handle.net/10803/4039.

Full text
Abstract:
Esta tesis ofrece una justificación para el uso de la pluralidad como una manera óptima de agregar información en las sociedades compuestas por individuos con intereses comunes pero con información diversa.
El motivo de esta tesis es seguir una línea de investigación sobre la elección social que se remonta al matemático y filósofo político francés Jean-Antoine-Nicolas de Caritat, Marqués de Condorcet (1743-1794). En su Essai sur l'application de l'analyse à la probabilité des decisions rendues à la pluralité des voix (1785), Condorcet afirmó que se garantizaría la justicia social si las naciones adoptaran constituciones políticas que facilitaran el juicio correcto del grupo y argumentó que la votación por mayoría sería la herramienta constitucional más probable para alcanzar este objetivo.
Siguiendo esta línea de investigación, la primera parte de esta tesis estudia las condiciones bajo las cuales la pluralidad proporciona a la sociedad el método más adecuado para llegar a decisiones de grupo. Aquí, como en el estudio de Condorcet, supondremos que los votantes actúan honradamente.
El desarrollo natural de la teoría de votación, que ha introducido los temas de incentivos e interacción estratégica en la toma de decisiones de grupos, ha sido utilizado para cuestionar la suposición de votación honesta. Austen-Smith y Banks (1996) fueron los primeros en observar que la combinación de información privada e intereses comunes en el sistema propuesto por Condorcet podría crear incentivos para los votantes para actuar estratégicamente. Esta observación les condujo a plantear si la votación honesta sería compatible con el comportamiento de equilibrio de Nash en el juego inducido por la mayoría. La segunda parte de esta tesis expone esta problemática estudiando el comportamiento de los votantes en el juego inducido por la pluralidad.
El interés en las instituciones del mundo real, para las cuales la votación es un elemento importante, ha hecho plantear desde hace tiempo la cuestión de si los votantes se comportan tal y como pronostican los modelos teóricos. Otra cuestión ha sido cómo tratar la complejidad del entorno estratégico. La segunda parte de esta tesis pide respuestas a estas preguntas. Puesto que la literatura sobre experimentos de votación parece proporcionar respuestas razonables a estas preguntas, la tercera parte de esta tesis utiliza experimentos de laboratorio para verificar las implicaciones de la segunda parte.
This thesis offers a justification for the use of plurality rule as an optimal way to aggregate information for societies composed of individuals with common interests but diverse information. The motivation of this thesis follows a line of research in social choice that dates back to the French mathematician and political philosopher Jean-Antoine-Nicolas de Caritat, Marquis de Condorcet (1743-1794). In his Essai sur l'application de l'analyse à la probabilité des decisions rendues à la pluralité des voix (1785), Condorcet posited that social justice would be secured if nations would adopt political constitutions that facilitate accurate group judgments, and argued that the majority rule would be the most likely constitutional tool to achieve this goal.
Following this line of research, the first part of this thesis discusses the conditions under which plurality rule provides the society with the most likely method to reach accurate group judgments. In this part, as in Condorcet's work, it is assumed that voters act honestly.
Natural developments in the theory of voting, that brought in the issues of incentives and strategic interaction in group decision making, were used to challenge the assumption of honest voting. Austen-Smith and Banks (1996) were the first to notice that the combination of private information and common interests in the framework proposed by Condorcet might create an incentive for voters to act strategically. This observation led them to ask the question of whether honest voting is compatible with the Nash equilibrium behavior in the game induced by majority rule. The second part of this thesis advances this problematic by studying voters' behavior in the game induced by plurality rule.
The interest in real-world institutions, for which voting is an important element, has long raised the question of whether voters behave as predicted by theoretical models. Another question is how to deal with the complexity of the strategic environment. The second part of this thesis calls for answers to these types of questions. Since the literature on voting experiments seems to provide reasonable answers to these questions, the third part of this thesis uses laboratory experiments to test the implications of the second part.
APA, Harvard, Vancouver, ISO, and other styles
26

Siemroth, Christoph [Verfasser], and Hans Peter [Akademischer Betreuer] Grüner. "On information aggregation in financial markets / Christoph Siemroth. Betreuer: Hans Peter Grüner." Mannheim : Universitätsbibliothek Mannheim, 2015. http://d-nb.info/1100396136/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Gao, Xi. "Eliciting and Aggregating Truthful and Noisy Information." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13067680.

Full text
Abstract:
In the modern world, making informed decisions requires obtaining and aggregating relevant information about events of interest. For many political, business, and entertainment events, the information of interest only exists as opinions, beliefs, and judgments of dispersed individuals, and we can only get a complete picture by putting the separate pieces of information together. Thus, an important first step towards decision making is motivating the individuals to reveal their private information and coalescing the separate pieces of information together. In this dissertation, I study three information elicitation and aggregation methods, prediction markets, peer prediction mechanisms, and adaptive polling, using both theoretical and applied approaches. These methods mainly differ by their assumptions on the participants' behavior, namely whether the participants possess noisy or perfect information and whether they strategically decide on what information to reveal. The first two methods, prediction markets and peer prediction mechanisms, assume that the participants are strategic and have perfect information. Their primary goal is to use carefully designed monetary rewards to incentivize the participants to truthfully reveal their private information. As a result, my studies of these methods focus on understanding to what extent these methods are incentive compatible in theory and in practice. The last method, adaptive polling, assumes that the participants are not strategic and have noisy information. In this case, our goal is to accurately and efficiently estimate the latent ground truth given the noisy information, and we aim to evaluate experimentally whether this goal can be achieved by using this method. I make four main contributions in this dissertation.
First, I theoretically analyze how the participants' knowledge of one another's private information affects their strategic behavior when trading in a prediction market with a finite number of participants. Each participant may trade multiple times in the market, and hence may have an incentive to withhold or misreport his information in order to mislead other participants and capitalize on their mistakes. When the participants' private information is unconditionally independent, we show that the participants reveal their information as late as possible at any equilibrium, which is arguably the worst outcome for the purpose of information aggregation. We also provide insights on the equilibria of such prediction markets when the participants' private information is both conditionally and unconditionally dependent given the outcome of the event. Second, I theoretically analyze the participants' strategic behavior in a prediction market when a participant has outside incentives to manipulate the market probability. The presence of such outside incentives would seem to damage the information aggregation in the market. Surprisingly, when the existence of such incentives is certain and common knowledge, we show that there exist separating equilibria where all the participants' private information is revealed and fully aggregated into the market probability. Although there also exist pooling equilibria with information loss, we prove that certain separating equilibria are more desirable than many pooling equilibria because the separating equilibria satisfy domination-based belief refinements, maximize the social welfare of the setting, or maximize either participant's total expected payoff. When the existence of the outside incentives is uncertain, trust cannot be established and the separating equilibria no longer exist.
Third, I experimentally investigate participants' behavior towards the peer prediction mechanisms, which were proposed to elicit information without observable ground truth. While peer prediction mechanisms promise to elicit truthful information by rewarding participants with carefully constructed payments, they also admit uninformative equilibria where coordinating participants provide no useful information. We conduct the first controlled online experiment of the Jurca and Faltings peer prediction mechanism, engaging the participants in a multiplayer, real-time and repeated game. Using a hidden Markov model to capture players' strategies from their actions, our results show that participants successfully coordinate on uninformative equilibria and the truthful equilibrium is not focal, even when some uninformative equilibria do not exist or result in lower payoffs. In contrast, most players are consistently truthful in the absence of peer prediction, suggesting that these mechanisms may be harmful when truthful reporting has similar cost to strategic behavior. Finally, I design and experimentally evaluate an adaptive polling method for aggregating small pieces of imprecise information together to produce an accurate estimate of a latent ground truth. In designing this method, we make two main contributions: (1) Our method aggregates the participants' noisy information by using a theoretical model to account for the noise in the participants' contributed information. (2) Our method uses an active learning inspired approach to adaptively choose the query for each participant. We apply this method to the problem of ranking a set of alternatives, each of which is characterized by a latent strength parameter. At each step, adaptive polling collects the result of a pairwise comparison, estimates the strength parameters from the pairwise comparison data, and adaptively chooses the next pairwise comparison question to maximize expected information gain. 
Our MTurk experiment shows that our adaptive polling method can effectively incorporate noisy information and improve the estimate accuracy over time. Compared to a baseline method, which chooses a random pairwise comparison question at each step, our adaptive method can generate more accurate estimates with less cost.
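The adaptive polling method described above estimates latent strength parameters from pairwise comparisons and adaptively chooses the next question. A minimal sketch of that idea, assuming a Bradley-Terry model fit by gradient ascent and a simple closest-to-even-odds query heuristic (the dissertation's actual expected-information-gain criterion, and all function names here, are not taken from the source):

```python
import math

def bt_prob(s_i, s_j):
    # Bradley-Terry model: probability that item i beats item j,
    # given their latent strength parameters.
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))

def fit_strengths(n, comparisons, steps=2000, lr=0.05):
    # Maximum-likelihood strengths from a list of (winner, loser) pairs,
    # via plain gradient ascent on the log-likelihood.
    s = [0.0] * n
    for _ in range(steps):
        for w, l in comparisons:
            g = 1.0 - bt_prob(s[w], s[l])  # d(log-likelihood)/ds_w
            s[w] += lr * g
            s[l] -= lr * g
    return s

def next_query(s):
    # Crude uncertainty heuristic: ask about the pair whose predicted
    # outcome is closest to even odds (a stand-in for information gain).
    n = len(s)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return min(pairs, key=lambda ij: abs(bt_prob(s[ij[0]], s[ij[1]]) - 0.5))
```

With comparison data in which item 0 repeatedly beats item 1 and item 1 beats item 2, the fitted strengths recover the ordering of the latent parameters, and the next query targets the least-resolved pair.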
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
28

Cabanillas, Macias Cristina, Anne Baumgrass, and Ciccio Claudio Di. "A Conceptual Architecture for an Event-based Information Aggregation Engine in Smart Logistics." Gesellschaft für Informatik e.V, 2015. https://dl.gi.de/handle/20.500.12116/2040.

Full text
Abstract:
The field of Smart Logistics is attracting interest in several areas of research, including Business Process Management. A wide range of research works are carried out to enhance the capability of monitoring the execution of ongoing logistics processes and predicting their likely evolvement. In order to do this, it is crucial to have in place an IT infrastructure that provides the capability of automatically intercepting the digitalised transportation-related events stemming from widespread sources, along with their elaboration, interpretation and dispatching. In this context, we present here the service-oriented software architecture of such an event-based information engine. In particular, we describe the requisites that it must meet. Thereafter, we present the interfaces and subsequently the service-oriented components that are in charge of realising them. The outlined architecture is being utilised as the reference model for an ongoing European research project on Smart Logistics, namely GET Service.
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Kai. "Mitigating Congestion by Integrating Time Forecasting and Realtime Information Aggregation in Cellular Networks." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/412.

Full text
Abstract:
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel estimator based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs' traffic patterns with the historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville City. Experimental results illustrate that the proposed scheme is able to significantly reduce both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high-quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error that exists in the estimated distance. A direction filter is developed to clean joints that have a negative influence on localization accuracy. Synthetic experiments in urban, suburban and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the cellular probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed for overcoming the time efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique.
The matching process is transformed into a 1-dimensional (1-D) curve matching problem and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product Moment Correlation Co-efficient (PMCC) algorithm in order to achieve the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing the computational cost without affecting the accuracy of the matching process.
APA, Harvard, Vancouver, ISO, and other styles
30

Gomes, Rahul. "Incorporating Sliding Window-Based Aggregation for Evaluating Topographic Variables in Geographic Information Systems." Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/29913.

Full text
Abstract:
The resolution of spatial data has increased over the past decade, making it more accurate in depicting landform features. From 60 m resolution Landsat imagery to near-meter resolution data provided by Unmanned Aerial Systems, the number of pixels per area has increased drastically. Topographic features derived from high-resolution remote sensing are relevant to measuring agricultural yield. However, conventional algorithms in Geographic Information Systems (GIS) used for processing digital elevation models (DEM) have severe limitations. Typically, 3-by-3 window sizes are used for evaluating the slope, aspect and curvature. Since this window size is very small compared to the resolution of the DEM, the data are mostly resampled to a lower resolution to match the size of typical topographic features and decrease processing overheads. This results in low accuracy and limits the predictive ability of any model using such DEM data. In this dissertation, the landform attributes were derived over multiple scales using the concept of sliding window-based aggregation. Using aggregates from the previous iteration increases the efficiency from linear to logarithmic, thereby addressing scalability issues. The usefulness of DEM-derived topographic features within Random Forest models that predict agricultural yield was examined. The model utilized these derived topographic features and achieved the highest accuracy of 95.31% in predicting the Normalized Difference Vegetation Index (NDVI), compared to 51.89% for window size 3-by-3 in the conventional method. The efficacy of partial dependence plots (PDP) in terms of interpretability was also assessed. This aggregation methodology could serve as a suitable replacement for conventional landform evaluation techniques, which mostly rely on reducing the DEM data to a lower resolution prior to data processing.
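The sliding window-based aggregation idea, where each larger window is built from the aggregates of the previous iteration, can be illustrated in one dimension. This is a simplified sketch with a hypothetical `windowed_sums` function (the dissertation operates on 2-D DEM rasters); it only shows why building windows by doubling makes the cost grow logarithmically in the window size rather than linearly:

```python
def windowed_sums(values, max_window):
    # Sliding-window sums for window sizes 1, 2, 4, ... up to max_window.
    # Each level is computed from the previous one: a window of size 2w
    # is the sum of two adjacent windows of size w, so reaching a window
    # of size W takes O(log W) passes instead of O(W) additions per cell.
    n = len(values)
    level = list(values)        # window size 1: the raw cells
    sums = {1: level}
    w = 1
    while 2 * w <= max_window:
        level = [level[i] + level[i + w] for i in range(n - 2 * w + 1)]
        w *= 2
        sums[w] = level
    return sums
```

For an elevation profile `[1, 2, 3, 4, 5]`, the size-4 windows are computed from two size-2 aggregates each, never touching the raw cells again; windowed means (and, with a second accumulator, variances or slopes) follow directly from these sums.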
National Science Foundation (Award OIA-1355466)
APA, Harvard, Vancouver, ISO, and other styles
31

Pekkanen, Linus, and Patrik Johansson. "Simulating Broadband Analog Aggregation for Federated Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-295616.

Full text
Abstract:
With increasing amounts of data coming from connecting progressively more devices, new machine learning models have arisen. For wireless networks, the idea of using a distributed approach to machine learning has gained increasing popularity, where all nodes in the network participate in creating a global machine learning model by training with the local data stored at each node; an example of this approach is called federated learning. However, traditional communication protocols have been proven inefficient. This opens up opportunities to design new machine-learning-specific communication schemes. The concept of over-the-air computation is built on the fact that a wireless communication channel can naturally compute some linear functions, for instance the sum. If all nodes in a network transmit simultaneously to a server, the signals are aggregated before reaching the server.
I takt med den ökande datamängden från allt fler uppkopplade enheter har nya modeller för maskininlärning dykt upp. För trådlösa nätverk har idén att applicera decentraliserade maskininlärningsmodeller ökat i popularitet, där alla noder i nätverket bidrar till en global maskininlärningsmodell genom att träna på den data som finns lokalt på varje nod. Ett exempel på en sådan metod är Federated Learning. Traditionella metoder för kommunikation har visat sig vara ineffektiva, vilket öppnar upp möjligheten för att designa nya maskininlärningsspecifika kommunikationsscheman. Konceptet over-the-air computation utnyttjar det faktum att en trådlös kommunikationskanal naturligt kan beräkna vissa funktioner, som exempelvis en summa. Om alla noder i nätverket sänder till en server samtidigt aggregeras signalerna genom interferens innan de når servern.
Kandidatexjobb i elektroteknik 2020, KTH, Stockholm
APA, Harvard, Vancouver, ISO, and other styles
32

Deretic, Momcilo. "information aggregation, psychological biases and efficiency of prediction markets in selection of innovation projects." Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX24023.

Full text
Abstract:
Ma thèse de doctorat traite de la sélection de projets d'innovation en entreprises, en utilisant les marchés de prédiction comme mécanisme de sélection alternatif. Le processus d'innovation et son évaluation sont des activités ayant des répercussions sur la croissance et le développement. L'évidence montre que les méthodes habituelles d'évaluation et de sélection de projets d'innovation, comme le processus en entonnoir, ne sont pas rentables. Proposer une méthode plus efficace contribuera de manière significative à une meilleure allocation des ressources. Dans la première partie de ma thèse, je teste les prévisions du marché de prédiction contre celles des experts. Dans la deuxième, j'examine les aspects comportementaux de la prise de décision sur le marché de prédiction entrepreneurial, notamment comment le biais d'optimisme influence les décisions des traders. J'ai mené pour ces parties des expériences avec des sujets humains. Dans la troisième partie, j'examine les propriétés et éléments clés des marchés de prédiction et fournis une chronique et une classification d'articles sur les contributions les plus importantes de la littérature sur ce sujet
My PhD thesis deals with selection of corporate and entrepreneurial innovation projects, using prediction markets as an alternative selection mechanism. Innovation process and its evaluation are two very important economic activities with repercussions for growth and development. Available evidence strongly suggests that conventional evaluation and selection methods, such as development funnel in corporate setting or decisions of Venture Capital firms in entrepreneurial one, do not yield cost-effective results. Coming up with an efficient and cost-effective method would contribute significantly to better resource allocation and social welfare. In the first part of the thesis, I test the prediction market predictions against experts’. In the second part, I examine behavioral aspects of decision-making in entrepreneurial prediction market setting, particularly how optimism bias influences traders’ decisions in prediction market. I conducted experiments with human subjects for the first two parts. In the third part of the thesis, I examine the most important elements and properties of prediction markets and provide a survey of most important contributions to prediction market literature, together with the classification and list of articles in major categories
APA, Harvard, Vancouver, ISO, and other styles
33

Evans, Julian Claude. "Group-foraging and information transfer in European shags, Phalacrocorax aristotelis." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/18537.

Full text
Abstract:
Many animals, including marine mammals and several seabird species, dive in large groups, but the impacts that social interactions can have on diving behaviour are poorly understood. There are several potential benefits to social diving, such as access to social information or reduced predation risk. In this body of research I explore the use of social information by groups of diving animals by studying the behaviour of European shags (Phalacrocorax aristotelis) diving in "foraging rafts" in the Isles of Scilly. Using GPS tracking I establish where shags regularly forage in relation to bathymetry and areas where foraging rafts frequently formed. Using these data I show that the foraging ranges of different colonies overlap and that foraging ranges of individual shags are often predictable. This suggests that social information will be of less value while searching for foraging patches. However, using observational studies to further explore the conditions and areas in which foraging rafts formed, I show that advantages such as anti-predation or hydrodynamic benefits are unlikely to be the main drivers of rafting behaviour in the Scillies. I therefore suggest that access to social information from conspecifics at a foraging patch may be one of the main benefits of diving in groups. Using a dynamic programming model I show that individuals diving in a group benefit from using social information, even when unable to assess conspecific foraging success. Finally, I use video analysis to extract the positions and diving behaviour of individuals within a foraging raft and compare this to simulated data of collective motion and diving behaviour. The results of these studies indicate that being able to use the dives of conspecifics to inform one's own diving decisions may be one of the main advantages of social diving.
APA, Harvard, Vancouver, ISO, and other styles
34

Perumal, Murugan Ananda Sentraya. "A Study of NoSQL and NewSQL databases for data aggregation on Big Data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143345.

Full text
Abstract:
Sensor data analysis at Scania deals with large amounts of data collected from vehicles. Each time a Scania vehicle enters a workshop, a large number of variables are collected and stored in an RDBMS at high speed. Sensor data is numeric and is stored in a data warehouse. Ad-hoc analyses are performed on this data using Business Intelligence (BI) tools like SAS. The challenges in using a traditional database are studied to identify improvement areas. Sensor data is huge and is growing at a rapid pace; it can be categorized as Big Data for Scania. This problem is studied to define ideal properties for a high-performance and scalable database solution. Distributed database products are studied to find interesting products for the problem. A desirable solution is a distributed computational cluster, where most of the computations are done locally in storage nodes to fully utilize the local machine's memory and CPU and to minimize network load. There is a plethora of distributed database products categorized under NoSQL and NewSQL. There is a large variety of NoSQL products that manage organizations' data in a distributed fashion. NoSQL products typically have advantages such as improved scalability and disadvantages such as lacking BI tool support and weaker consistency. There is an emerging category of distributed databases known as NewSQL databases; these are relational data stores designed to meet the demand for high performance and scalability. In this paper, an exploratory study was performed to find suitable products among these two categories. One product from each category was selected based on a comparative study for practical implementation, and the production data was imported into the solutions. Performance for a common use case (median computation) was measured and compared. Based on these comparisons, recommendations were provided for a suitable distributed product for sensor data analysis.
APA, Harvard, Vancouver, ISO, and other styles
35

Cannalire, Pietro. "Geo-distributed multi-layer stream aggregation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230217.

Full text
Abstract:
Standard processing architectures are enough to satisfy many applications by employing existing stream processing frameworks that are able to manage distributed data processing. In some specific cases, having geographically distributed data sources requires distributing the processing even further over a large area by employing a geographically distributed architecture. The issue addressed in this work is the reduction of the data movement across the network that continuously flows in a geo-distributed architecture from streaming sources to the processing location and among processing entities within the same distributed cluster. Reducing data movement can be critical for decreasing bandwidth costs, since accessing links placed in the middle of the network can be costly, and these costs grow as the amount of data exchanged increases. In this work we want to create a different concept for deploying geographically distributed architectures by relying on Apache Spark Structured Streaming and Apache Kafka. The features needed for an algorithm to run on a geo-distributed architecture are provided. The algorithms to be executed on this architecture apply windowing and data synopses techniques to produce summaries of the input data and to address issues of the geographically distributed architecture. The computation of the average and the Misra-Gries algorithm are then implemented to test the designed architecture. This thesis work contributes a new model for building geographically distributed architectures. The experimental results show that, for the algorithms running on top of the geo-distributed architecture, the computation time is reduced on average by 70% compared to the distributed setup. Similarly, the amount of data exchanged across the network is reduced on average by 99% compared to the distributed setup.
Standardbehandlingsarkitekturer är tillräckliga för att uppfylla behoven av många tillämpningar genom användning av befintliga ramverk för flödesbehandling med stöd för distribuerad databehandling. I specifika fall kan geografiskt fördelade datakällor kräva att databehandlingen fördelas över ett stort område med hjälp av en geografiskt distribuerad arkitektur. Problemet som behandlas i detta arbete är minskningen av kontinuerlig dataöverföring i ett nätverk med geo-distribuerad arkitektur. Minskad dataöverföring kan vara avgörande för minskade bandbreddskostnader då åtkomst av länkar placerade i mitten av ett nätverk kan vara dyr och öka ytterligare med tilltagande dataöverföring. I det här arbetet vill vi skapa ett nytt koncept för att upprätta geografiskt distribuerade arkitekturer med hjälp av Apache Spark Structured Streaming och Apache Kafka. Funktioner och förutsättningar som behövs för att en algoritm ska kunna köras på en geografiskt distribuerad arkitektur tillhandahålls. Algoritmerna som ska köras på denna arkitektur tillämpar "windowing"- och "data synopses"-tekniker för att framställa en sammanfattning av ingående data samt behandla problem beträffande den geografiskt fördelade arkitekturen. Beräkning av medelvärdet och Misra-Gries-algoritmen implementeras för att testa den konstruerade arkitekturen. Denna avhandling bidrar till att tillhandahålla en ny modell för att bygga geografiskt distribuerade arkitekturer. Experimentella resultat visar att beräkningstiden reduceras med i genomsnitt 70 % för de algoritmer som körs ovanpå den geo-distribuerade arkitekturen jämfört med den distribuerade konfigurationen. På liknande sätt reduceras mängden data som utväxlas över nätverket med i genomsnitt 99 % jämfört med den distribuerade konfigurationen.
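The Misra-Gries algorithm mentioned above is a standard data-synopsis technique for finding frequent items in a stream with bounded memory. A textbook sketch (not the thesis's Spark/Kafka implementation):

```python
def misra_gries(stream, k):
    # Misra-Gries summary: keeps at most k-1 candidate counters, so any
    # item occurring more than n/k times in a stream of n items is
    # guaranteed to survive in the summary (counts are underestimates).
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No free slot: decrement every counter, dropping zeros.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Because each node only ships its small counter dictionary instead of the raw stream, summaries like this are what allow the geo-distributed setup described above to cut the data exchanged across the network so drastically.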
APA, Harvard, Vancouver, ISO, and other styles
36

Chaudhry, Omair. "Modelling geographic phenomena at multiple levels of detail : a model generalisation approach based on aggregation." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2385.

Full text
Abstract:
Considerable interest remains in capturing geographical information once at the fine scale, and from this, automatically deriving information at various levels of detail and scale via the process of map generalisation. This research aims to develop a methodology for transformation of geographic phenomena at a high level of detail directly into geographic phenomena at higher levels of abstraction. Intuitive and meaningful interpretation of geographical phenomena requires their representation at multiple levels of detail. This is due to the scale dependent nature of their properties. Prior to the cartographic portrayal of that information, model generalisation is required in order to derive higher order phenomena typically associated with the smaller scales. This research presents a model generalisation approach able to support the derivation of phenomena typically present at 1:250,000 scale mapping, directly from a large scale topographic database (1:1250/1:2500/1:10,000). Such a transformation involves creation of higher order or composite objects, such as settlement, forest, hills and ranges, from lower order or component objects, such as buildings, trees, streets, and vegetation, in the source database. In order to perform this transformation it is important to model the meaning of and relationships among source database objects rather than to consider the objects in terms of their geometric primitives (points, lines and polygons). This research focuses on two types of relationships: taxonomic and partonomic. These relationships provide different but complementary strategies for transformation of source database objects into required target database objects. The proposed methodology highlights the importance of partonomic relations for transformation of spatial databases over large changes in levels of detail. The proposed approach involves identification of these relationships and then utilising them to create higher order objects.
The utility of the results obtained, via the implementation of the proposed methodology, is demonstrated using spatial analysis techniques and the creation of ‘links’ between objects at different representations needed for multiple representation databases. The output database can then act as input to cartographic generalisation in order to create maps (digital or paper). The results are evaluated using manually generalised datasets.
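The partonomic transformation described in this abstract can be sketched in miniature (an illustrative reconstruction, not the thesis's actual algorithm; the object names, attributes and part-of table below are invented for the example):

```python
# Components (e.g. buildings, trees) are grouped into composite objects
# (e.g. settlements, forests) via an explicit part-of relation, and a
# simple attribute (area) is aggregated upward. All data is invented.

def build_composites(components, part_of):
    """Derive higher-order objects from components using part-of links."""
    composites = {}
    for obj in components:
        parent = part_of.get(obj["id"])
        if parent is None:
            continue  # no higher-order object at the target level of detail
        comp = composites.setdefault(parent, {"id": parent, "parts": [], "area": 0.0})
        comp["parts"].append(obj["id"])
        comp["area"] += obj["area"]
    return composites

components = [
    {"id": "building_1", "area": 120.0},
    {"id": "building_2", "area": 95.0},
    {"id": "tree_1", "area": 10.0},
]
part_of = {"building_1": "settlement_A", "building_2": "settlement_A",
           "tree_1": "forest_B"}
composites = build_composites(components, part_of)
```

In practice the part-of relation would itself be derived (e.g. by spatial clustering), but the sketch shows how composite objects inherit and aggregate the properties of their parts.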
APA, Harvard, Vancouver, ISO, and other styles
37

Bengtsson, Mattias. "Mathematical foundation needed for development of IT security metrics." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9766.

Full text
Abstract:

IT security metrics are used to assess certain parts of the IT security environment. There is neither a consensus on the definition of an IT security metric nor a natural scale type for IT security, which makes the interpretation of IT security values difficult. To accomplish a comprehensive IT security assessment, one must aggregate the IT security values into compound values.

When developing IT security metrics, it is important that only permissible mathematical operations are applied, so that the information is maintained all the way through the metric. A sound mathematical foundation is needed for this.

The main results produced by the efforts in this thesis are:

• Identification of activities needed for IT security assessment when using IT security metrics.

• A method for selecting a set of security metrics with respect to goals and criteria, which is also used to aggregate security values generated from a set of security metrics into compound higher-level security values.

• A mathematical foundation needed for development of security metrics.
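The need for permissible operations can be illustrated with a small sketch (an assumption-laden illustration, not the thesis's actual foundation; the scale names and the ordinal 1-5 scoring are invented): the scale type of a metric determines which aggregation operators preserve its information, e.g. ordinal scores admit the median but not the arithmetic mean.

```python
# Aggregate lower-level security values into a compound value, choosing
# an operation permissible for the metric's scale type. The scale names
# and example values are assumptions made for this illustration.
from statistics import mean, median

def aggregate(values, scale_type):
    if scale_type == "ordinal":
        return median(values)  # order-based statistics are permissible
    if scale_type == "ratio":
        return mean(values)    # the arithmetic mean is meaningful here
    raise ValueError(f"unsupported scale type: {scale_type}")

ordinal_value = aggregate([3, 4, 2, 5, 4], "ordinal")  # 1-5 severity scores
ratio_value = aggregate([0.8, 0.6, 0.9], "ratio")      # e.g. patch coverage
```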

APA, Harvard, Vancouver, ISO, and other styles
38

Schuster, Alfons. "Supporting data analysis and the management of uncertainty in knowledge-based systems through information aggregation processes." Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264825.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jolly, Richard Donald. "The Role of Feedback in the Assimilation of Information in Prediction Markets." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/468.

Full text
Abstract:
Leveraging the knowledge of an organization is an ongoing challenge that has given rise to the field of knowledge management. Yet, despite spending enormous sums of organizational resources on Information Technology (IT) systems, executives recognize there is much more knowledge to harvest. Prediction markets are emerging as one tool to help extract this tacit knowledge and make it operational. Yet, prediction markets, like other markets, are subject to pathologies (e.g., bubbles and crashes) which compromise their accuracy and may discourage organizational use. The techniques of experimental economics were used to study the characteristics of prediction markets. Empirical data was gathered from an on-line asynchronous prediction market. Participants allocated tickets based on private information and, depending on the market type, public information indicative of how prior participants had allocated their tickets. The experimental design featured three levels of feedback (no-feedback, percentages of total allocated tickets and frequency of total allocated tickets) presented to the participants. The research supported the hypothesis that information assimilation in feedback markets is composed of two mechanisms - information collection and aggregation. These are defined as: Collection - The compilation of dispersed information - individuals using their own private information make judgments and act accordingly in the market. Aggregation - The market's judgment on the implications of this gathered information - an inductive process. This effect comes from participants integrating public information with their private information in their decision process. Information collection was studied in isolation in no feedback markets and the hypothesis that markets outperform the average of their participants was supported. The hypothesis that with the addition of feedback, the process of aggregation would be present was also supported. 
Aggregation was shown to create agreement in markets (as measured by entropy) and drive market results closer to correct values (the known probabilities). However, the research also supported the hypothesis that aggregation can lead to information mirages, creating a market bubble. The research showed that the presence and type of feedback can be used to modulate market performance. Adding feedback, or more informative feedback, increased the market's precision at the expense of accuracy. The research supported the hypotheses that these changes were due to the inductive aggregation process which creates agreement (increasing precision), but also occasionally generates information mirages (which reduces accuracy). The way individual participants use information to make allocations was characterized. In feedback markets the fit of participants' responses to various decision models demonstrated great variety. The decision models ranged from little use of information (e.g., MaxiMin), use of only private information (e.g., allocation in proportion to probabilities), use of only public information (e.g., allocating in proportion to public distributions) and integration of public and private information. Analysis of all feedback market responses using multivariate regression also supported the hypothesis that public and private information were being integrated by some participants. The subtle information integration results are in contrast to the distinct differences seen in markets with varying levels of feedback. This illustrates that the differences in market performance with feedback are an emergent phenomenon (i.e., one that could not be predicted by analyzing the behavior of individuals in different market situations). The results of this study have increased our collective knowledge of market operation and have revealed methods that organizations can use in the construction and analysis of prediction markets. 
In some situations markets without feedback may be a preferred option. The research supports the hypothesis that information aggregation in feedback markets can be simultaneously responsible for beneficial information processing as well as harmful information mirage induced bubbles. In fact, a market subject to mirage prone data resembles a Prisoner's Dilemma where individual rationality results in collective irrationality.
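The entropy-based agreement measure mentioned in this abstract can be sketched as follows (a simplified illustration; the allocation numbers are invented and the study's exact computation may differ):

```python
# Shannon entropy of a ticket-allocation distribution: low entropy means
# allocations are concentrated on few outcomes, i.e. high agreement.
import math

def allocation_entropy(tickets):
    total = sum(tickets)
    probabilities = [t / total for t in tickets if t > 0]
    return -sum(p * math.log2(p) for p in probabilities)

dispersed = [25, 25, 25, 25]  # participants disagree: tickets spread out
converged = [85, 5, 5, 5]     # market converging on a single outcome

# Aggregation that creates agreement shows up as an entropy drop.
assert allocation_entropy(converged) < allocation_entropy(dispersed)
```

Note that the same drop occurs whether the market converges on the correct outcome or on a mirage, which is why the study pairs the entropy measure with a comparison against the known probabilities.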
APA, Harvard, Vancouver, ISO, and other styles
40

Schaller, Jean-Pierre. "Multiple criteria decision aid under incomplete information : a partial aggregation method based on the theory of hints /." Lausanne : Payot, 1991. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=003426691&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Andersson, Christofer, and Lotta Mähönen. "Managerial use of accounting information : A study on how managers use business reports at NCC." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-226799.

Full text
Abstract:
There is a need to learn more about how managers use accounting information. This thesis investigates how managers make use of business reports, as reports are one of the ways managers receive information. Previous research was found to broadly correspond to four important aspects affecting how managers make use of business reports: aggregation, timeliness, flexibility and dimensions. A case study was conducted at NCC Construction. The main findings from this study are that managers can view information in the reports at their desired level of specificity and are not concerned about the issue of timeliness. Furthermore, they are satisfied with the flexibility of the reports but wish for more capabilities, and they do not desire non-financial information in reports. The four aspects are therefore found to no longer hinder managers' use of business reports as much as previous studies would suggest. Technological developments and business practices are found to have changed managerial work: reporting has become faster and reflects real-world operations more accurately, making business reports more useful to managers.
APA, Harvard, Vancouver, ISO, and other styles
42

Hjelm, Daniel, Emanuel Wreeby, and Anton Sjöström. "Aggregation and power forecasting for the CoordiNet power flexibility market in Uppsala." Thesis, Uppsala universitet, Institutionen för materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445610.

Full text
Abstract:
In the region of Uppland, a shortage of electric power during cold days has emerged in recent years as a result of the electrification of society and industry in general. In response, a power flexibility market managed by the CoordiNet project has been launched with the aim of creating a more reliable, eco-friendly and accessible electricity supply. Uppsala kommun wishes to participate in the market but needs a solution for communication between the market and its technology, as well as smart control methods. In this project, the proposed solution, consisting of a mobile app, API, database, server and deep learning model, almost meets the requirements for participating in the market this autumn. With more time and resources, the product can hopefully be completed, enabling both economic and urban growth in the region.
APA, Harvard, Vancouver, ISO, and other styles
43

Elers, Andreas. "Continual imitation learning: Enhancing safe data set aggregation with elastic weight consolidation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-256074.

Full text
Abstract:
The field of machine learning currently draws massive attention due to advancements and successful applications announced in the last few years. One of these applications is self-driving vehicles. A machine learning model can learn to drive through behavior cloning. Behavior cloning uses an expert's behavioral traces as training data. However, the model's steering predictions influence the succeeding input to the model, and thus the model's input data will vary depending on earlier predictions. Eventually the vehicle may deviate from the expert's behavioral traces and fail due to encountering data it has not been trained on. This is the problem of sequential predictions. DAGGER and its improvement SafeDAGGER are algorithms that enable training models in the sequential prediction domain. Both algorithms iteratively collect new data, aggregate new and old data, and retrain models on all data to avoid catastrophically forgetting previous knowledge. The aggregation of data leads to increasing model training times and memory requirements, and requires that previous data be maintained forever. This thesis's purpose is to investigate whether SafeDAGGER can be improved with continual learning to create a more scalable and flexible algorithm. This thesis presents an improved algorithm called EWC-SD that uses the continual learning algorithm EWC to protect a model's previous knowledge and thereby train only on new data. Training only on new data allows EWC-SD to have lower training times and memory requirements, and to avoid storing old data forever, compared to the original SafeDAGGER. The different algorithms are evaluated in the context of self-driving vehicles on three tracks in the VBS3 simulator. The results show that EWC-SD, when trained on new data only, does not reach the performance of SafeDAGGER. Adding a rehearsal buffer containing only 23 training examples to EWC-SD allows it to outperform SafeDAGGER by reaching the same performance in half as many iterations. The conclusion is that EWC-SD with rehearsal solves the problems of increasing model training times, memory requirements and required access to all previous data imposed by data aggregation.
Fältet för maskininlärning drar för närvarande massiv uppmärksamhet på grund av framsteg och framgångsrika applikationer som meddelats under de senaste åren. En av dessa applikationer är självkörande fordon. En maskininlärningsmodell kan lära sig att köra ett fordon genom beteendekloning. Beteendekloning använder en experts beteendespår som träningsdata. En modells styrförutsägelser påverkar emellertid efterföljande indata till modellen och således varierar modellens indata utifrån tidigare förutsägelser. Så småningom kan fordonet avvika från expertens beteendespår och misslyckas på grund av att modellen stöter på indata som den inte har tränats på. Det här är problemet med sekventiella förutsägelser. DAGGER och dess förbättring SafeDAGGER är algoritmer som möjliggör att träna modeller i domänen sekventiella förutsägelser. Båda algoritmerna samlar iterativt nya data, aggregerar nya och gamla data och tränar om modeller på alla data för att undvika att katastrofalt glömma tidigare kunskaper. Aggregeringen av data leder till problem med ökande träningstider, ökande minneskrav och kräver att man behåller åtkomst till all tidigare data för alltid. Avhandlingens syfte är att undersöka om SafeDAGGER kan förbättras med stegvis inlärning för att skapa en mer skalbar och flexibel algoritm. Avhandlingen presenterar en förbättrad algoritm som heter EWC-SD, som använder stegvis inlärningsalgoritmen EWC för att skydda en modells tidigare kunskaper och därigenom enbart träna på nya data. Att endast träna på nya data gör det möjligt för EWC-SD att ha lägre träningstider, ökande minneskrav och undvika att lagra gamla data för evigt jämfört med den ursprungliga SafeDAGGER. De olika algoritmerna utvärderas i kontexten självkörande fordon på tre banor i VBS3-simulatorn. Resultaten visar att EWC-SD tränad enbart på nya data inte uppnår prestanda likvärdig SafeDAGGER. 
Ifall en lägger till en repeteringsbuffert som innehåller enbart 23 träningsexemplar till EWC-SD kan den överträffa SafeDAGGER genom att uppnå likvärdig prestanda i hälften så många iterationer. Slutsatsen är att EWC-SD med repeteringsbuffert löser problemen med ökande träningstider, ökande minneskrav samt kravet att alla tidigare data ständigt är tillgängliga som påtvingas av dataaggregering.
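The EWC mechanism that EWC-SD relies on can be sketched in a few lines (an illustrative toy in plain Python; the parameter values, Fisher estimates and λ below are invented, and the thesis's implementation will differ):

```python
# EWC regularizes training on new data with a quadratic penalty that
# anchors parameters important to old data (large Fisher value) near
# their previously learned values, so old knowledge is protected
# without keeping the old data around.

def ewc_loss(new_data_loss, params, old_params, fisher, lam=1.0):
    """Total loss = loss on new data + EWC penalty on parameter drift."""
    penalty = sum(f * (p - p_old) ** 2
                  for f, p, p_old in zip(fisher, params, old_params))
    return new_data_loss + (lam / 2.0) * penalty

# The same drift is penalized heavily for an important parameter
# (fisher=10.0) and barely at all for an unimportant one (fisher=0.1).
drift_important = ewc_loss(0.0, [1.5], [1.0], [10.0])
drift_unimportant = ewc_loss(0.0, [1.5], [1.0], [0.1])
assert drift_important > drift_unimportant
```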
APA, Harvard, Vancouver, ISO, and other styles
44

Flynn, John Michael. "Locally significant content on regional television : a case study of North Queensland commercial television before and after aggregation." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/16697/1/John_Michael_Flynn_Thesis.pdf.

Full text
Abstract:
This thesis is an exploration of the fate which has befallen the regional commercial television industry in North Queensland in the wake of the aggregation policy introduced by the Federal Labor Government in 1990. More specifically, it examines the effectiveness of policy outcomes which stem from the Australian Broadcasting Authority's 2001 inquiry into the adequacy of regional and rural commercial television news and information services. The research is primarily concerned with the quality of local content provided by regional commercial broadcasters in response to the implementation of the Australian Communications and Media Authority's points system for broadcast of matters of local significance. The policy outcomes are balanced against an historical context, which traces the regional commercial television industry in North Queensland back to its very beginning. Regulatory reform has resulted in a basic level of news content being maintained. However the significance of elements of this news content to local viewers is minimal. The reduction in local information content, despite being identified in the earliest stages of the ABA investigation, has not been adequately addressed by the reform process.
APA, Harvard, Vancouver, ISO, and other styles
45

Flynn, John Michael. "Locally significant content on regional television : a case study of North Queensland commercial television before and after aggregation." Queensland University of Technology, 2008. http://eprints.qut.edu.au/16697/.

Full text
Abstract:
This thesis is an exploration of the fate which has befallen the regional commercial television industry in North Queensland in the wake of the aggregation policy introduced by the Federal Labor Government in 1990. More specifically, it examines the effectiveness of policy outcomes which stem from the Australian Broadcasting Authority's 2001 inquiry into the adequacy of regional and rural commercial television news and information services. The research is primarily concerned with the quality of local content provided by regional commercial broadcasters in response to the implementation of the Australian Communications and Media Authority's points system for broadcast of matters of local significance. The policy outcomes are balanced against an historical context, which traces the regional commercial television industry in North Queensland back to its very beginning. Regulatory reform has resulted in a basic level of news content being maintained. However the significance of elements of this news content to local viewers is minimal. The reduction in local information content, despite being identified in the earliest stages of the ABA investigation, has not been adequately addressed by the reform process.
APA, Harvard, Vancouver, ISO, and other styles
46

Izadkhast, Seyedmahdi. "Aggregation of Plug-in Electric Vehicles in Power Systems for Primary Frequency Control." Doctoral thesis, KTH, Elkraftteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205286.

Full text
Abstract:
The number of plug-in electric vehicles (PEVs) is likely to increase in the near future and these vehicles will probably be connected to the electric grid for most of the day. PEVs are interesting options to provide a wide variety of services, such as primary frequency control (PFC), because they are able to quickly control their active power using electronic power converters. However, to evaluate the impact of PEVs on PFC, one should either carry out complex and time-consuming simulations involving a large number of PEVs or formulate and develop aggregate models which could efficiently reduce simulation complexity and time while maintaining accuracy. This thesis proposes aggregate models of PEVs for PFC. The final aggregate model has been developed gradually through the following steps. First of all, an aggregate model of PEVs for PFC has been developed where various technical characteristics of PEVs, such as operating modes (i.e., idle, disconnected, and charging) and the PEV's state of charge, have been formulated and incorporated. Secondly, some technical characteristics of distribution networks have been added to the previous aggregate model of PEVs for PFC. For this purpose, the power consumed in the network during PFC as well as the maximum allowed current of the lines and transformers have been taken into account. Thirdly, the frequency stability margins of power systems including PEVs have been evaluated and a strategy to design the frequency-droop controller of PEVs for PFC has been described. The controller designed guarantees stability margins in the worst-case scenario similar to those of the system without PEVs. Finally, a method to evaluate the positive economic impact of PEV participation in PFC has been proposed.
En el futuro cercano se espera un notable incremento en el número de vehículos eléctricos enchufables (PEVs), los cuales están conectados a la red eléctrica durante la mayor parte del día. Los PEVs constituyen una opción interesante a la hora de proporcionar una amplia variedad de servicios, tales como el control primario de frecuencia (PFC), dado que tienen la capacidad de controlar rápidamente el flujo de potencia activa a través de convertidores electrónicos de potencia. Sin embargo, para evaluar el impacto de los PEVs sobre el PFC se debe llevar a cabo una simulación computacionalmente compleja y con un largo tiempo de simulación en la que se considere un gran número de PEVs. Otra opción sería la formulación y desarrollo de modelos agregados, los cuales podrían reducer eficazmente la complejidad y tiempo de simulación manteniendo una alta precisión. Esta tesis propone modelos agregados de PEVs para PFC. El modelo agregado definitivo ha sido desarrollado de manera gradual a través de los siguientes pasos. En primer lugar, se ha desarrollado un modelo agregado de PEVs para PFC en el cual son incorporadas varias características técnicas de los PEVs, tales como los modos de operación (inactivo, desconectado y cargando), y la formulación del estado de carga de los PEVs. En segundo lugar, ciertas características técnicas de las redes de distribución han sido consideradas en el modelo agregado de PEVs para PFC previamente propuesto. Para este fin, la potencia consumida por la red durante el PFC, así como la corriente máxima permitida en las líneas y transformadores han sido consideradas. En tercer lugar, se han evaluado los márgenes de estabilidad en la frecuencia de los sistemas de potencia que incluyen PEVs y se ha descrito una estrategia para diseñar un control de frecuencia-droop de PEVs para PFC. El controlador diseñado garantiza márgenes de estabilidad similares, en el peor de los casos, a aquellos de un sistema sin PEVs. 
Finalmente, se ha propuesto un método para evaluar el impacto económico positivo de la participación de los PEVs en el PFC.
Inom en snar framtid förväntas antalet laddbara bilar (laddbilar) öka kraftig, vilka tidvis kommer att vara anslutna till elnätet. Då laddbilar snabbt kan styra och variera sin aktiva laddningseffekt med hjälp av kraftelektroniken i omriktaren kan dessa fordon erbjuda en rad systemtjänster, såsom primär frekvensregleringen. Att utvärdera hur laddbilarna kan påverka den primära frekvensreglering är utmanande då en stor mängd laddbilar måste beaktas vilket kräver komplexa och tidskrävande simuleringar. Ett effektivt sätt att minska komplexiteten men bibehålla noggrannheten är genom att utforma och använda aggregerade modeller. Syftet med denna avhandling är att ta fram aggregerade modeller för laddbilars påverkan på primär frekvensreglering. Modellen har gradvis utvecklats genom följande steg. I första steget har en aggregerad modell av hur laddbilar kan användas för primär frekvensreglering utvecklats där olika tekniska detaljer så som laddbilars tillstånd (d.v.s. inkopplade, urkopplade eller laddas) och laddningnivån beaktats. I andra steget har en modell av distributionsnätet integrerats i den aggregerade modellen. Här tas hänsyn till effektflöden i elnätet samt begränsningar i överföringskapacitet i transformatorer och ledningar i distributionsnätet. I ett tredje steg har frekvensstabiliteten i ett elnät med laddbilar utvärderats och en strategi för hur en frekvensregulator kan designas för att tillhandahålla primär frekvensreglering med hjälp av laddbilar har utvecklats. Designen garanterar samma stabilitetsmarginal för styrsystemet både med och utan laddbilar. Dessutom föreslås en metod för att utvärdera de ekonomiska effekterna av att använda laddbilar för primär frekvensreglering.
Het aantal elektrische voertuigen (EV’s) zal zeer waarschijnlijk toenemen in de nabije toekomst en deze voertuigen zullen vermoedelijk gedurende het grootste deel van de dag aan het elektriciteitsnetwerk aangesloten zijn. EV’s zijn interessante opties om een grote verscheidenheid van diensten te leveren, zoals bijvoorbeeld primaire frequentieregeling, omdat ze snel hun actieve vermogen kunnen aanpassen met behulp van elektronische vermogensomvormers. Echter, om de invloed van EV’s en primaire frequentieregeling te kunnen evalueren, moet men complexe en tijdrovende simulaties met een groot aantal EVs uitvoeren of verzamelmodellen formuleren en ontwikkelen die de complexiteit en duur van de simulaties kunnen reduceren zonder nauwkeurigheid te verliezen. Dit onderzoek presenteert verzamelmodellen voor EV’s en primaire frequentieregeling. Het uiteindelijke verzamelmodel is geleidelijk ontwikkeld door de volgende stappen te nemen. Ten eerste is een verzamelmodel voor EV’s en primaire frequentieregeling ontwikkeld waar verscheidene technische karakteristieken van EV’s, zoals bedieningsmodi (bijv. Inactief, losgekoppeld en ladend) en de actuele laadtoestand in zijn geformuleerd en geïntegreerd. Ten tweede zijn enkele technische karakteristieken van distributienetwerken toegevoegd aan het eerdere verzamelmodel van EV’s voor primaire frequentieregeling. Hiervoor zijn de vermogensconsumptie in het network gedurende primaire frequentieregeling en de maximaal toegestane stroomsterkte van de kabels meegerekend. Ten derde zijn de marges voor de frequentiestabiliteit van elektriciteitssystemen met EV’s geëvalueerd en is een strategie voor het ontwerpen van de frequentie-droop regeling van de EV’s voor primaire frequentieregeling beschreven. De ontworpen controller garandeert soortgelijke stabiliteitsmarges in het slechtste scenario, als voor het systeem zonder EV’s. 
Ten slotte is er een methode voorgesteld om de positieve economische invloed van EV-participatie in primaire frequentieregeling te evaluëren.
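The frequency-droop behaviour at the heart of such PFC provision can be sketched as follows (a generic textbook-style droop, not the controller designed in the thesis; the 50 Hz base, 5 % droop, dead band and 3.7 kW converter limit are all assumptions):

```python
# Each PEV adjusts its charging power in proportion to the frequency
# deviation, within a dead band and the converter's power limit.
# A positive return value means charging is reduced, i.e. power is
# freed up to support an under-frequency grid.

def droop_power_kw(delta_f_hz, f_base_hz=50.0, droop=0.05,
                   p_max_kw=3.7, dead_band_hz=0.01):
    if abs(delta_f_hz) <= dead_band_hz:
        return 0.0  # small deviations are ignored
    delta_p = -(delta_f_hz / f_base_hz) / droop * p_max_kw
    return max(-p_max_kw, min(p_max_kw, delta_p))  # converter saturation

# Aggregate response of a fleet of 1000 identical PEVs to a -0.2 Hz event:
fleet_response_kw = 1000 * droop_power_kw(-0.2)
```

An aggregate model replaces the per-vehicle loop with a single equivalent characteristic, which is what makes system-level simulation tractable.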

“SETS Joint Doctorate Programme

The Erasmus Mundus Joint Doctorate in Sustainable Energy Technologies and Strategies (SETS), the SETS Joint Doctorate, is an international programme run by six institutions in cooperation:

• Comillas Pontifical University, Madrid, Spain

• Delft University of Technology, Delft, the Netherlands

• Florence School of Regulation, Florence, Italy

• Johns Hopkins University, Baltimore, USA

• KTH Royal Institute of Technology, Stockholm, Sweden

• University Paris-Sud 11, Paris, France

The Doctoral Degrees provided upon completion of the programme are issued by Comillas Pontifical University, Delft University of Technology, and KTH Royal Institute of Technology.

The Degree Certificates are giving reference to the joint programme. The doctoral candidates are jointly supervised, and must pass a joint examination procedure set up by the three institutions issuing the degrees.

This Thesis is a part of the examination for the doctoral degree.

The invested degrees are official in Spain, the Netherlands and Sweden respectively.

SETS Joint Doctorate was awarded the Erasmus Mundus excellence label by the European Commission in year 2010, and the European Commission’s Education, Audiovisual and Culture Executive Agency, EACEA, has supported the funding of this programme

The EACEA is not to be held responsible for contents of the Thesis.”  QC 20170412

APA, Harvard, Vancouver, ISO, and other styles
47

Piri, E. (Esa). "Improving heterogeneous wireless networking with cross-layer information services." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208213.

Full text
Abstract:
Substantially growing data traffic over wireless networks poses increased challenges for mobile network operators in deploying sufficient network resources and managing user mobility. This dissertation addresses these challenges to providing satisfactory Quality of Service (QoS) for end-users and studies solutions for better utilization of the heterogeneous network environment. First, the dissertation examines what solutions mobile devices and network management entities can use to dynamically collect valid cross-layer information from different network entities. Cross-layer information allows monitoring of the condition of the network in multiple layers on a user and application basis. The second research topic considers the techniques the network management entities can use to improve resource usage in wireless networks based on the collected cross-layer information. The IEEE 802.21 standard, specified to facilitate handovers between heterogeneous networks, is used as the basis for cross-layer information delivery. This dissertation also focuses on utilization of the standard beyond inter-access-technology handovers. In order to improve resource usage in wireless networks dynamically, event delivery enhancements are proposed for the standard so that it better meets the requirements of different techniques. Such techniques are traffic priority adjustment, traffic adaptation, packet aggregation, and network protocol header compression. The results show that when a handover is not feasible, these techniques effectively allow sharing of the limited radio resources for user data according to applications' importance and type. Mobility management is studied in terms of the network information service, one of the main services of IEEE 802.21. The thesis proposes enhancing the information service with a base station cell coverage area database. 
The database provides significant improvements for the selection of a handover target in a dense base station environment. Taken together, the results of the dissertation provide mobile network operators with various means to improve the usage of wireless networks on the basis of applications' varying QoS requirements.
Tiivistelmä Voimakkaasti kasvava langattomien tietoverkkojen dataliikenne aiheuttaa verkko-operaattoreille haasteita tarjota riittävät verkkoresurssit ja hallita käyttäjien liikkuvuutta. Väitöskirja huomioi nämä haasteet tarjota loppukäyttäjille tyydyttävä palvelunlaatu (QoS) ja tutkii ratkaisuja, joilla heterogeenistä verkkoympäristöä voidaan hyödyntää tehokkaammin. Aluksi väitöskirja tutkii, mitä ratkaisuja päätelaitteet ja verkkohallintatoimijat voivat käyttää keräämään protokollakerrosten välistä (cross-layer) tietoa eri verkkotoimijoilta. Protokollakerrosten välinen tieto mahdollistaa verkon tilan seuraamisen usealla eri kerroksella käyttäjä- ja sovelluskohtaisesti. Toinen tutkimusaihe tarkastelee protokollakerrosten välistä tietoa hyödyntäviä tekniikoita, joita verkonhallintatoimijat voivat käyttää tehostamaan resurssien käyttöä langattomissa verkoissa. IEEE 802.21-standardia, joka on määritetty helpottamaan verkonvaihtoja heterogeenisten verkkojen välillä, käytetään pohjana protokollakerrosten välisen tiedon jakelulle. Väitöskirjassa keskitytään standardin hyödyntämiseen myös muussa kuin verkkoteknologioiden välisen verkonvaihdon yhteydessä. Väitöskirja ehdottaa parannuksia standardin tapahtumatietovälitykseen, jotta se täyttäisi paremmin eri tekniikoiden asettamat vaatimukset dynaamisesti toteutettavista toimista langattomien verkkojen resurssikäytön tehostamiseksi. Nämä tekniikat ovat liikenteen prioriteetin muutokset, liikenteen adaptointi, pakettien yhdistäminen ja verkkoprotokollaotsikoiden pakkaus. Tulokset osoittavat, että kun tukiasema- tai verkonvaihto ei ole mahdollinen, nämä tekniikat mahdollistavat rajattujen verkkoresurssien jakamisen tehokkaasti sovellusten tärkeyden ja tyypin mukaan. Liikkuvuudenhallintaa tutkitaan verkkoinformaatiopalvelun, joka on myös yksi IEEE 802.21-standardin pääpalveluista, kautta. Väitöskirja ehdottaa, että informaatiopalvelua tehostetaan liittämällä siihen tietokanta tukiasemasolujen peittoalueista. 
Tietokanta tehostaa huomattavasti verkonvaihdon kohteen valintaa tiheissä tukiasemaympäristöissä. Kun väitöskirjan tulokset huomioidaan kokonaisuutena, väitöskirja tarjoaa verkko-operaattoreille useita tapoja tehostaa langattomien verkkojen käyttöä sovellusten vaihtelevien palvelunlaatuvaatimusten perusteella
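The benefit of one of the techniques listed, packet aggregation, can be illustrated with a back-of-the-envelope overhead calculation (the 40-byte header and 20-byte payload figures are assumptions for the illustration, not numbers from the dissertation):

```python
# Aggregating several small payloads into one frame amortizes the
# per-frame header overhead over the whole batch.

HEADER_BYTES = 40  # assumed per-frame protocol overhead

def overhead_ratio(payload_bytes, packets_per_frame):
    """Fraction of transmitted bytes that is header overhead."""
    payload_total = payload_bytes * packets_per_frame
    return HEADER_BYTES / (HEADER_BYTES + payload_total)

no_aggregation = overhead_ratio(20, 1)  # one tiny payload per frame
aggregated = overhead_ratio(20, 10)     # ten payloads share one header
assert aggregated < no_aggregation
```

The trade-off, which motivates doing this dynamically based on cross-layer information, is that aggregation adds queueing delay while packets wait to fill a frame.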
APA, Harvard, Vancouver, ISO, and other styles
48

Sereevinyayut, Piya. "On estimate aggregation : studies of how decision makers aggregate quantitative estimates in three different cases." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/125062.

Full text
Abstract:
This dissertation examines how people aggregate quantitative advice to reach their own estimates. Each chapter explores a different situation that could affect how advice is evaluated and, consequently, how it is combined. The first chapter demonstrates that people gauge how extreme a piece of advice is by anchoring on the median of the advice set. It also shows that, unlike multiplicative scaling, additive scaling of advice affects how outliers are perceived. The second chapter deals with advice that is obtained serially. The results reveal that whether people aggregate sequentially or only once at the end of the series affects how an outlier in the series is detected and combined. The third chapter studies how people revise their own estimates in light of the advice of others, and finds that people revise more when they perceive dissensus. Consequently, having multiple pieces of advice can attenuate the effect of egocentricity and improve the accuracy of revisions compared with having only a single piece of advice.
APA, Harvard, Vancouver, ISO, and other styles
49

Tang, Dawei. "Container Line Supply Chain security analysis under complex and uncertain environment." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/container-line-supply-chain-security-analysis-under-complex-and-uncertain-environment(2b058744-e0fc-4b4f-9222-6a4b41cf7348).html.

Full text
Abstract:
The Container Line Supply Chain (CLSC), which transports cargo in containers and accounts for approximately 95 percent of world trade, is the dominant mode of world cargo transportation due to its high efficiency. However, the operation of a typical CLSC, which may involve as many as 25 different organizations spread all over the world, is very complex, and at the same time it is estimated that only 2 percent of imported containers are physically inspected in most countries. This complexity, together with insufficient prevention measures, makes the CLSC vulnerable to many threats, such as cargo theft, smuggling, stowaways, terrorist activity, piracy, etc. Furthermore, as disruptions caused by a security incident at a certain point along a CLSC may also cause disruptions to other organizations involved in the same CLSC, the consequences of security incidents may be severe. Therefore, security analysis becomes essential to ensure the smooth operation of CLSCs and, more generally, the smooth development of the world economy. The literature review shows that research on CLSC security only began recently, especially after the terrorist attack on September 11th, 2001, and most of the research either focuses on developing policies, standards, regulations, etc. to improve CLSC security from a general perspective, or discusses specific security issues in CLSCs in a descriptive and subjective way. There is a lack of analytical security research that provides specific, feasible and practical assistance for people in governments, organizations and industries to improve CLSC security. Given this situation, this thesis intends to develop a set of analytical models for security analysis in CLSCs to provide practical assistance to people in maintaining and improving CLSC security.
In addition, through the development of the models, the thesis also intends to provide methodologies for general risk/security analysis problems under complex and uncertain environments, and for some general complex decision problems under uncertainty. Specifically, the research conducted in the thesis is mainly aimed at answering two questions: how to assess the security level of a CLSC in an analytical and rational way, and, according to the security assessment result, how to develop balanced countermeasures to improve the security level of a CLSC under the constraints of limited resources. For security assessment, factors influencing CLSC security as a whole are identified first and then organized into a general hierarchical model according to the relations among the factors. The general model is then refined for security assessment of a port storage area along a CLSC against cargo theft. Further, according to the characteristics of CLSC security analysis, the belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) is selected as the tool to assess CLSC security, due to its capabilities in accommodating and handling different forms of information with different kinds of uncertainty involved in both the measurement of the factors identified and the measurement of the relations among them. To build a basis for the application of RIMER, a new process is introduced to generate belief degrees in Belief Rule Bases (BRBs), with the aim of reducing bias and inconsistency in the generation process. Based on the results of CLSC security assessment, a novel resource allocation model for security improvement is also proposed within the framework of RIMER to optimally improve CLSC security under the constraints of available resources.
In addition, the security assessment process reveals that RIMER has limitations in dealing with the different information aggregation patterns identified in the proposed security assessment model, and in dealing with different kinds of incompleteness in CLSC security assessment. Correspondingly, under the framework of RIMER, novel methods are proposed to accommodate and handle different information aggregation patterns, as well as different kinds of incompleteness. To validate the models proposed in the thesis, several case studies are conducted using data collected from different ports in both the UK and China. From a methodological point of view, the ideas, processes and models proposed in the thesis regarding BRB generation, optimal resource allocation based on security assessment results, information aggregation pattern identification and handling, and incomplete information handling can be applied not only to CLSC security analysis, but also to other risk and security analysis problems and, more generally, to some complex decision problems. From a practical point of view, the models proposed in the thesis can help people in governments, organizations and industries related to CLSCs develop best practices to ensure secure operation, assess the security levels of organizations involved in a CLSC and of the whole CLSC, and allocate limited resources to improve the security of organizations in a CLSC. The potential beneficiaries of the research may include governmental organizations, international/regional organizations, industrial organizations, classification societies, consulting companies, companies involved in a CLSC, companies with cargo to be shipped, individual researchers in relevant areas, etc.
APA, Harvard, Vancouver, ISO, and other styles
50

Llinares, Philippe. "L'agrégation financière territorialisée en région Provence-Alpes-Côte d'Azur." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX32097.

Full text
Abstract:
The winning regions are those where real decentralization, particularly regional, exists, with local centres of decision and strong synergies between the different social and economic partners within a common identity. Within the PACA region, the territorialized financial aggregate should constitute a dynamic tool for democratic transparency. The aggregate can be defined as the sum of certain financial data across several levels of local government in a given territory. The content of the aggregate information is chosen by the local decision centres depending on their needs. It can be global or partial, concerning all or only certain community competences. Our aim is to make the exact financial situation of the different sectors of its territory available to the PACA region. At present, information on the situation of a region can only be obtained from the accounts of the regional council. The regions do carry out audits to be informed of the situation of particular sectors, but these audits are isolated, incomplete and do not reflect the global financial reality of the territory. It is thus necessary to provide a tool capable of giving an exact and complete report on the different sectors. The themes studied concern spatial economics, as well as territorial public finance and the diffusion of information. The research took place over five years and presents 2008 as the year of reference. In addition to an aggregate presentation of the region, a scientific method is devised to achieve it.
APA, Harvard, Vancouver, ISO, and other styles