
Dissertations / Theses on the topic 'Balanced partitioning'


Consult the top 21 dissertations / theses for your research on the topic 'Balanced partitioning.'


1

Elander, Aman Johan. "Distributed balanced edge-cut partitioning of large graphs having weighted vertices." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175145.

Abstract:
Large-scale graphs are sometimes too big to store and process on a single machine. Instead, these graphs have to be divided into smaller parts and distributed over several machines, while minimizing the dependency between the different parts. This is known as the graph partitioning problem, which has been shown to be NP-complete. The problem is well studied; however, most solutions are either not suitable for a distributed environment or unable to do balanced partitioning of graphs having weighted vertices. This thesis presents an extension to the distributed balanced graph partitioning algorithm JA-BE-JA that solves the balanced partitioning problem for graphs having weighted vertices. The extension, called wJA-BE-JA, is implemented both in the Spark framework and in Scala. The two main contributions of this report are the algorithm and a comprehensive evaluation of its performance, including a comparison with the recognized METIS graph partitioner. The evaluation shows that a random sampling policy in combination with the simulated annealing technique gives good results. It further shows that the algorithm is competitive with METIS, as the extension outperforms METIS in 17 of 20 tests.
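To give a feel for the family of methods this abstract refers to, here is a minimal, sequential Python sketch of a JA-BE-JA-style swap heuristic with random peer sampling and simulated annealing. It is only an illustration: the weighted-balance rule, the acceptance criterion and all names are assumptions, not the thesis's wJA-BE-JA implementation.

```python
import math
import random

def weighted_jabeja_sketch(adj, weights, k, iters=20000, t0=2.0, cooling=0.999):
    """Toy JA-BE-JA-style partitioner (illustrative only).

    Each vertex holds a partition label; pairs of randomly sampled vertices swap
    labels when the swap lowers the locally estimated edge cut, and simulated
    annealing occasionally accepts worse swaps early on.  Restricting swaps to
    similarly weighted vertices keeps per-partition weight roughly balanced,
    which is the property a weighted variant has to protect."""
    verts = list(adj)
    part = {v: random.randrange(k) for v in verts}      # random initial labels
    temp = t0

    def mismatch(v, label):
        # neighbours of v that would sit in a different partition than `label`
        return sum(1 for u in adj[v] if part[u] != label)

    for _ in range(iters):
        v, u = random.choice(verts), random.choice(verts)   # random sampling policy
        if part[v] == part[u]:
            continue
        # only swap vertices of (nearly) equal weight so balance is preserved
        if abs(weights[v] - weights[u]) > 0.1 * max(weights[v], weights[u]):
            continue
        old_cost = mismatch(v, part[v]) + mismatch(u, part[u])
        new_cost = mismatch(v, part[u]) + mismatch(u, part[v])
        delta = new_cost - old_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            part[v], part[u] = part[u], part[v]             # accept the swap
        temp = max(temp * cooling, 1e-3)
    return part
```

In the actual algorithm each vertex acts on local information only, which is what makes this style of heuristic suitable for distributed frameworks such as Spark.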
2

Alkhelaiwi, Ali Mani Turki. "Network partitioning techniques based on network natural properties for power system application." Thesis, Brunel University, 2002. http://bura.brunel.ac.uk/handle/2438/5065.

Abstract:
In this thesis, the problem of partitioning a network into interconnected sub-networks is addressed. The goal is to achieve a partitioning which satisfies a set of specific engineering constraints, imposed in this case by the requirements of decomposed state-estimation (DSE) in electrical power systems. The network-partitioning problem is classified as an NP-hard problem. Although many heuristic algorithms have been proposed for its solution, these often lack directness and computational simplicity. In this thesis, three new partitioning techniques are described which (i) satisfy the DSE constraints, and (ii) simplify the NP-hard problem by using the natural graph properties of a network. The first technique is based on partitioning a spanning tree optimally using the natural property of the spanning tree branches. As with existing heuristic techniques, information on the partitioning is obtained only at the end of the partitioning process. The study of the DSE constraints leads to the definition of conditions for an ideal balanced partitioning. This enables data on the balanced partitioning to be obtained, including the numbers of boundary nodes and cut-edges. The second partitioning technique is designed to obtain these data for a given network by finding the minimum covering set of nodes with maximum nodal degree. Further simplification is then possible if additional graph-theoretical properties are used. A new natural property entitled the 'edge state phenomenon' is defined. The edge state phenomenon may be exploited to generate new network properties. In the third partitioning technique, two of these, the 'network external closed path' and the 'open internal paths', are used to identify the balanced partitioning, and hence to partition the network. Examples of the application of all three methods to network partitioning are provided.
3

McHenry, Bailey Marie. "Balanced nutrition and crop production practices for the study of grain sorghum nutrient partitioning and closing yield gaps." Thesis, Kansas State University, 2016. http://hdl.handle.net/2097/32725.

Abstract:
Master of Science, Agronomy. Ignacio Ciampitti; P. V. Vara Prasad. Midwest grain sorghum (Sorghum bicolor (L.) Moench) producers are currently obtaining yields much lower than attainable across varying environments; closing yield gaps will therefore be important. Yield gaps are the difference between maximum economic attainable yield and current on-farm yields. Maximum economic yield can be achieved by optimizing the choice of genotypes and management practices for the specific site-environment (soil-weather) combination. This research project examines several management factors in order to quantify complex farming interactions for maximizing sorghum yields and studying nutrient partitioning. The factors tested include narrow row spacing (37.5 cm) vs. standard wide row spacing (76 cm), high (197,600 seeds ha⁻¹) and low (98,800 seeds ha⁻¹) seeding rates, balanced nutrient management practices including applications of NPKS and micronutrients (Fe and Zn), crop protection with fungicide and insecticide, the use of a plant growth regulator, and the use of precision-ag technology (GreenSeeker for N application). The project was implemented at four sites in Kansas during the 2014 (Rossville, Scandia, Ottawa, and Hutchinson) and 2015 (Topeka, Scandia, Ottawa, Ashland Bottoms) growing seasons. Results from both years indicate that irrigation helped to minimize yield variability and boost yield potential across all treatments, though other factors affected final yield. In 2014, the greatest significant yield difference under irrigation in Rossville, KS (1.32 Mg ha⁻¹) was documented between the 'low-input' and 'high-input' treatments. The treatment difference in grain sorghum yields in 2014 was not statistically significant. In 2014, the Ottawa site experienced drought stress during reproductive stages of plant development, which resulted in low yields that were not influenced by the cropping system approach. In 2015 the treatment effects were significant, and in Ottawa, narrow row spacing at a lower seeding rate maximized yield for this generally low-yielding environment (<6 Mg ha⁻¹) (treatment two at 6.26 vs. treatment ten at 4.89 Mg ha⁻¹). Across several sites, including Rossville, Hutchinson, Scandia, Topeka, and Ashland, a similar trend of narrow row spacing promoting greater yields was documented. Additionally, when water was not limiting sorghum yields (i.e., under irrigation), a balanced nutrient application and optimization of production practices did increase grain sorghum yields ('high-input' vs. 'low-input'; the greatest differences were seen in 2014 in Rossville, 1.2 Mg ha⁻¹, and in 2015 in Ashland, 1.98 Mg ha⁻¹). In the evaluation of nutrient uptake and partitioning in different plant fractions, there was variability across site-years that did not always follow the same patterns as yield; however, the low-input treatment showed significantly lower nutrient uptake for all the nutrients evaluated (N, P, K, S, Fe, Zn) and across most fractions and sampling times. The objectives of this project were to identify management factors that contributed to high sorghum yields in diverse environments, and to investigate nutrient uptake and partitioning under different environments and crop production practices.
4

Echbarthi, Ghizlane. "Big Graph Processing : Partitioning and Aggregated Querying." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1225/document.

Abstract:
With the advent of big data, repercussions have been felt across every field of information technology, calling for innovative solutions that strike the best compromise between cost and accuracy. In graph theory, where graphs provide a powerful modelling support that allows problems ranging from the simplest to the most complex to be formalised, research on NP-complete or NP-hard problems turns towards approximate solutions, bringing approximation algorithms and heuristics to the fore, since exact solutions become extremely costly and impossible to use. This thesis addresses two main problems. First, graph partitioning is approached from a big-data perspective, in which massive graphs are partitioned in a streaming fashion. We study and propose several streaming partitioning models and evaluate their performance both theoretically and empirically. Second, we focus on querying distributed/partitioned graphs. In this context, we study the problem of aggregated graph search, which aims to answer queries that span several graph fragments and is responsible for reconstructing the final answer so that it constitutes an approximate matching of the initial query.
5

Nottingham, Andrew Thomas. "The carbon balance of tropical forest soils : partitioning sources of respiration." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608423.

6

Villalobos, Jorge Alejandro. "DEVELOPMENT AND IMPLEMENTATION OF THE MULTI-RESOLUTION AND LOADING OF TRANSPORTATION ACTIVITIES (MALTA) SIMULATION BASED DYNAMIC TRAFFIC ASSIGNMENT SYSTEM, RECURSIVE ON-LINE LOAD BALANCE FRAMEWORK (ROLB)." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/203446.

Abstract:
The Multi-resolution Assignment and Loading of Transport Activities (MALTA) system is a simulation-based Dynamic Traffic Assignment model that exploits the advantages of multi-processor computing via the use of the Message Passing Interface (MPI) protocol. Spatially partitioned transportation networks are utilized to estimate travel time via alternate routes on mega-scale network models, while the concurrently run shortest-path and assignment procedures evaluate traffic conditions and re-assign traffic in order to achieve traffic assignment goals such as User Optimal and/or System Optimal conditions. Performance gain is obtained via the spatial partitioning architecture that allows the simulation domains to distribute the workload based on a specially designed Recursive On-line Load Balance model (ROLB). The ROLB development describes how the transportation network is transformed into an ordered node network, which serves as the basis for a minimum-cost heuristic, solved using shortest paths, for a multi-objective NP-hard binary optimization problem. The approach contains a least-squares formulation that attempts to balance the computational load of each of the mSim domains as well as to minimize the inter-domain communication requirements. The model is developed from its formal formulation to the heuristic utilized to quickly solve the problem. As a component of the balancing model, a load-forecasting technique, Fast Sim, is used to determine the link loading of the future network in order to estimate average future link speeds, enabling a good solution for the ROLB method. The runtime performance of the MALTA model is described in detail. A 94% reduction in runtime was achieved on the Maricopa Association of Governments (MAG) network with the use of 33 CPUs: runtime was reduced from over 60 minutes on one machine to less than 5 minutes on the 33 CPUs. The results also showed how the individual runtimes on each of the simulation domains could vary drastically with naïve partitioning methods, as opposed to the balanced runtimes obtained using the ROLB method, confirming the need for a load-balancing technique in MALTA.
7

Britton, Kevin John. "Application of a Mass Balance Partitioning Model of Ra-226, Pb-210 and Po-210 to Freshwater Lakes and Streams." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38459.

Abstract:
The objectives of this thesis were: (1) to develop a mass balance partitioning model of the natural uranium-238 series comprising radium-226, lead-210 and polonium-210, and (2) to apply the model to estimate the source and fate of these radionuclides in freshwater lakes and streams. Samples were collected from Ottawa River watershed tributaries and measured for lead-210 and polonium-210 content to determine the water concentrations that were input to the model. The radium-226 partitioning model was developed by reconstructing and analyzing Quantitative Water, Air, Sediment Interaction (QWASI) models of lead for Lake Ontario and Hamilton Harbour and selecting parameters for an updated QWASI model of lead for a Lake Ontario basin. This study gave insight into model basis definition and into partition coefficient and sediment particle constraints. The radium-226 series model was formulated by connecting separate QWASI modules for radium-226, lead-210 and polonium-210 with decay and ingrowth terms. The radium-226 model was applied to studies of Crystal Lake, Wisconsin; Bickford Pond, Massachusetts; and Clinton River, Michigan, using parameters reported in these and other studies. Model error was evident in the applications: for Crystal Lake due to underlying lake heterogeneity, for Bickford Pond due to unidentified sources of lead-210 from sediment diffusion or watershed runoff, and for Clinton River due to watershed runoff. The model was applied to seven Laurentian Shield lakes in the Ottawa River watershed using the sample measurements as the basis for water concentration inputs. The application showed that hydrologic flushing rate may be a factor in the proportion of watershed atmospheric deposition and overall Pb-210 input to the water. Laurentian Shield lakes with the lowest hydrologic flushing rates (<3 a⁻¹) had proportions of Pb-210 losses to sediment greater than 85%. In another application, to Judge Sissons Lake, Nunavut, the model indicated that the watershed was the source of about 85% of the Pb-210 and 98% of the Po-210 input to the water, and that a significant geologic component of Pb-210 input to the lake was likely. The model indicated that most of the Pb-210 in Judge Sissons Lake was lost to outflow, and that most of the Po-210 was lost to sediment. The model showed that sedimentation is a better proxy measurement for atmospheric deposition of Pb-210 to the Laurentian lakes than originally estimated. The model also showed that watershed contributions to Judge Sissons Lake could explain the observed background concentrations of Pb-210 and Po-210.
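As background on how decay and ingrowth terms couple separate radionuclide modules, the water-column activities A can be linked in the generic textbook form below. This is not the thesis's QWASI formulation; in particular, the short-lived intermediates between Ra-226 and Pb-210 are folded into the ingrowth term here as an illustrative simplification.

```latex
% Generic decay-ingrowth coupling of water-column activities A, with S a source
% term, k a first-order loss rate (outflow + settling) and \lambda the decay
% constant of each radionuclide.
\frac{dA_{\mathrm{Ra}}}{dt} = S_{\mathrm{Ra}} - (k_{\mathrm{Ra}} + \lambda_{\mathrm{Ra}})\,A_{\mathrm{Ra}}
\qquad
\frac{dA_{\mathrm{Pb}}}{dt} = S_{\mathrm{Pb}} + \lambda_{\mathrm{Pb}}\,A_{\mathrm{Ra}} - (k_{\mathrm{Pb}} + \lambda_{\mathrm{Pb}})\,A_{\mathrm{Pb}}
\qquad
\frac{dA_{\mathrm{Po}}}{dt} = S_{\mathrm{Po}} + \lambda_{\mathrm{Po}}\,A_{\mathrm{Pb}} - (k_{\mathrm{Po}} + \lambda_{\mathrm{Po}})\,A_{\mathrm{Po}}
```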
8

DeVita-McBride, Amy Kathleen. "Analysis of the Acid-Base Balance of Mainstream Tobacco Smoke and its Effect on the Gas/Particle Partitioning of Nicotine." PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/4156.

Abstract:
Tobacco smoke particulate matter (PM) is a complex mixture of condensed organic compounds, with about 5 to 10% water. Its general properties are similar in some respects to those of atmospheric organic aerosol PM, and it thus provides a useful surrogate when studying atmospheric PM. Owing to its ability to undergo acid-base chemistry, nicotine is of particular interest in the tobacco smoke system. The gas/particle partitioning of nicotine depends on the protonation state of nicotine in the particles, so the distribution of nicotine between these phases provides a means of understanding the acid-base balance in the tobacco smoke system. The goal of this work is to develop an acid-base balance for mainstream tobacco smoke that accounts for the extent of protonation of nicotine. Samples of extracted smoke particulate matter from seven brands of cigarettes were analyzed by ion chromatography (IC) and by titration with both acid (HCl) and base (lithium phenoxide) for comparison with nicotine data collected by colleagues. IC analysis was used to quantify tracers of known acidic and basic species in tobacco smoke. Anion tracers for acids included glycolate, acetate, formate, lactate, chloride, nitrite, sulfate, and nitrate. The cation tracers for bases were ammonium, sodium, and potassium. The tobacco smoke extracts were also analyzed after acidification by the HCl titrant for changes in ammonia and organic acid concentrations, to determine whether "bound" forms of these compounds were present in the PM. The titration data provided total concentrations of weak acids and bases in the samples. These titration data were compared with the concentrations of the tracers for weak acids and bases (along with the quantification of total nicotine by colleagues) to determine whether the IC analyses were accounting for all of the important species. The results of this comparison show that these analyses missed relevant species in the tobacco smoke system. As tobacco smoke PM is a complex organic mixture, the ability of acid species to protonate nicotine differs from that in aqueous media. The acidic species of interest were assumed to be either strong or weak, with the strong species assumed to be fully ionized after protonation of nicotine. Some portion of the weak acid species could then protonate any available nicotine. An electroneutrality equation (ENE) was developed for the tobacco smoke PM and populated using the IC data and the nicotine data obtained by colleagues. Using this ENE, the extent of ionization of the weak acid species (α1A) and the net reaction constant for the protonation of nicotine by these weak acids (K*) were estimated. However, interpretation of the results was complicated by the underrepresentation of the pertinent weak acid species in our IC analyses. This study concluded that further work is needed to identify the missing weak acid and base species to obtain a better representation of the acid-base balance in tobacco smoke PM and to understand the ability of these weak acid species to protonate nicotine.
9

Fisher, Russ James. "Partitioning of nitrogen by lactating cows fed diets varying in nonfibrous carbohydrate and rumen undegradable protein." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-10042006-143902/.

10

Dubbert, Maren [Verfasser], and Christiane [Akademischer Betreuer] Werner. "Water balance and productivity of a Mediterranean oak woodland: Quantifying understory vegetation impacts by development of a stable oxygen isotope partitioning approach / Maren Dubbert. Betreuer: Christiane Werner." Bayreuth : Universität Bayreuth, 2014. http://d-nb.info/1060010348/34.

11

Campoe, Otávio Camargo. "Ecologia da produção e da competição intra-específica do Eucalyptus grandis ao longo de um gradiente de produtividade no estado de São Paulo." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/11/11150/tde-17042012-105858/.

Abstract:
The productivity of eucalypt plantations in Brazil has increased significantly over the last decades, due to advances in breeding and silviculture. However, wood production represents only a fraction of gross primary production (GPP). Assessing carbon (C) fluxes and partitioning among forest components, and evaluating the use and use efficiency of the available resources, is essential to understand the mechanisms driving the productivity of intensively managed plantations. This study quantified the fluxes and partitioning of C and the light use efficiency for stem production (LUE) in 12 plots across a natural gradient of productivity during the seventh year of a commercial Eucalyptus grandis plantation. Within these plots, at the tree level, growth dominance, stem production and LUE were evaluated, identifying the contribution of dominant and suppressed trees to stand productivity. The study of the C budget and the application of production ecology theory at different scales aimed to increase knowledge about the processes driving forest productivity. The spatial heterogeneity of soil attributes and topography across the experimental site strongly influenced the component fluxes of GPP and their partitioning, generating a gradient of productivity. Stem production ranged from 554 gC m⁻² year⁻¹ at the lowest-GPP plot to 923 gC m⁻² year⁻¹ at the highest-GPP plot. Total belowground carbon flux ranged from 497 gC m⁻² year⁻¹ to 1235 gC m⁻² year⁻¹, with no significant relationship to GPP. The partitioning of GPP to stem production increased from 0.19 to 0.23, with a trend of increase with GPP (R²=0.30, p=0.07). LUE increased by 66% (from 0.25 gC MJ⁻¹ to 0.42 gC MJ⁻¹) with GPP, as a result of the increased flux and partitioning of C to stem production. Across the gradient of productivity, plots with the highest canopy quantum efficiency also showed the highest LUE. Growth dominance among trees had a strong impact on stand productivity: the 20% largest trees held on average 38% of the stem biomass and accounted for 47% of stem production. Leaf characteristics suggested that the greater productivity of dominant trees, relative to suppressed ones, may result from differences in stomatal control rather than in photosynthetic capacity. Production ecology at the tree level showed that dominant individuals produced more wood because they absorbed more radiation and used light more efficiently than suppressed trees. On average, a suppressed tree grew 1.2 kg year⁻¹ of stem, absorbed 2.9 GJ year⁻¹ of radiation and had an LUE of 0.4 g MJ⁻¹, whereas a dominant tree grew 37 kg year⁻¹, absorbed 38 GJ year⁻¹ and was more than twice as efficient (1.01 g MJ⁻¹). Studies of carbon balance and production ecology at different scales are essential to improve knowledge of the processes controlling wood production and carbon uptake, and to refine ecophysiological models.
12

Gou, Changjiang. "Task Mapping and Load-balancing for Performance, Memory, Reliability and Energy." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN047.

Abstract:
This thesis focuses on multi-objective optimization problems arising when running scientific applications on high-performance computing platforms and streaming applications on embedded systems. These optimization problems are all proven to be NP-complete, hence our efforts are mainly on designing efficient heuristics for general cases and proposing optimal solutions for special cases. Some scientific applications are commonly modeled as rooted trees. Due to the size of temporary data, processing such a tree may exceed the local memory capacity. A practical solution on a multiprocessor system is to partition the tree into many subtrees and run each on a processor equipped with a local memory. We studied how to partition the tree into several subtrees such that each subtree fits in local memory and the makespan is minimized, when communication costs between processors are accounted for. Then, a practical tree-scheduling problem arising in parallel sparse matrix solvers is examined. The objective is to minimize the factorization time by exhibiting good data locality and load balancing. The proportional mapping technique is a widely used approach to solve this resource-allocation problem. It achieves good data locality by assigning the same processors to large parts of the task tree; however, it may limit load balancing in some cases. Based on proportional mapping, a dynamic scheduling algorithm is proposed. It relaxes the data locality criterion to improve load balancing. The performance of our approach has been validated by extensive experiments with the parallel sparse matrix direct solver PaStiX. Streaming applications often appear in the video and audio domains. They are characterized by a series of operations on streaming data and a high throughput. A Multi-Processor System on Chip (MPSoC) is a multi/many-core embedded system that integrates many specific cores through a high-speed interconnect on a single die. Such systems are widely used for multimedia applications. Many MPSoCs are battery-operated, and such a tight energy budget intrinsically calls for an efficient schedule to meet intensive computation demands. Dynamic Voltage and Frequency Scaling (DVFS) can save energy by decreasing the frequency and voltage, at the price of increased failure rates. Another technique to reduce the energy cost and meet the reliability target consists in running multiple copies of tasks. We first model applications as linear chains and study how to minimize the energy consumption under throughput and reliability constraints, using DVFS and the duplication technique on MPSoC platforms. Then, in a following study with the same optimization goal, we model streaming applications as series-parallel graphs, which are more complex than simple chains and more realistic. The target platform has a hierarchical communication system with two levels, which is common in embedded systems and high-performance computing platforms. Reliability is guaranteed either by running tasks at the maximum speed or by triplication of tasks. Several efficient heuristics are proposed to tackle this NP-complete optimization problem.
13

Sabajo, Clifton. "Changements dans l’utilisation des terres et de la couverture terrestre en Asie du sud-est : les effets de la transformation sur les paramètres de la surface en Indonésie." Thesis, Paris, AgroParisTech, 2018. http://www.theses.fr/2018AGPT0005.

Abstract:
Over the last decades, Indonesia has experienced dramatic land transformations, with an expansion of oil palm plantations at the expense of tropical forests. Indonesia is currently one of the regions with the highest transformation rate of the land surface worldwide, related to the expansion of oil palm plantations and other cash crops replacing forests on large scales. As vegetation modifies the climate near the ground, these large-scale land transformations have major impacts on surface biophysical variables such as land surface temperature (LST), albedo and vegetation indices (e.g. the normalized difference vegetation index, NDVI), and on the surface energy balance and energy partitioning. Despite the large historic land transformation in Indonesia toward oil palm and other cash crops and governmental plans for future expansion, this is the first study so far to quantify the impacts of land transformation on biophysical variables in Indonesia. To assess such changes at the regional scale, remote sensing data are needed. As a key driver of many ecological functions, LST is directly affected by land cover changes. We analyze LST from the thermal band of a Landsat image and produce a high-resolution (30 m) surface temperature map for the lowlands of the Jambi province in Sumatra (Indonesia), a region which experienced large land transformation towards oil palm and other cash crops over the past decades. The comparison of LST, albedo, NDVI and evapotranspiration (ET) between seven land cover types (forest, urban areas, clear-cut land, young and mature oil palm plantations, acacia and rubber plantations) shows that forests have lower surface temperatures than the other land cover types, indicating a local warming effect after forest conversion. LST differences were up to 10.1 ± 2.6 °C (mean ± SD) between forest and clear-cut land. The differences in surface temperature are explained by an evaporative cooling effect, which offsets an albedo warming effect. Young and mature oil palm plantations differed in their biophysical variables. To study the development of surface biophysical variables during the 20-25 year rotation cycle of oil palm plantations, we used three Landsat images from the Jambi province covering a chronosequence of oil palm plantations. Our results show that differences between oil palm plantations at different stages of the rotation cycle are reflected in differences in the surface energy balance, energy partitioning and biophysical variables. During the oil palm plantation life cycle, the surface temperature differences to forest gradually decrease and approach zero around the mature plantation stage of about 10 years. Concurrently, NDVI increases and albedo decreases, approaching values typical of forests. The surface energy balance and energy partitioning show development patterns related to the biophysical variables and the age of the oil palm plantations. Newly established and young plantations (<5 years) have less net radiation available than mature oil palm plantations, yet have higher surface temperatures. The changes in biophysical variables, energy balance and energy partitioning during the oil palm rotation cycle can be explained by the previously identified evaporative cooling effect, in which the albedo warming effect is offset. A main determinant in this mechanism is the vegetation cover during the different phases of the oil palm rotation cycle. NDVI, as a proxy for vegetation cover, showed a consistent inverse relation with the LST of different-aged oil palm plantations, a trend that is also observed for the different land use types in this study. (The full summary continues in the thesis.)
14

Shen, Yi-Siang, and 沈逸翔. "A Balanced-Partitioning Algorithm for Handling Data Skew in MapReduce." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/7ef9g8.

Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electronic Engineering, 105. Recently, with the advance of smartphones and the internet, big data has become an increasingly prominent topic, and big data analysis and cloud computing have grown in popularity with it. Apache Hadoop is one of the main big data analysis platforms. As people's lifestyles change, data types become more complex, and many datasets exhibit skew. In parallel processing, skewed data may lead to load imbalance, and load imbalance causes the reducers that carry more load to take more time. The purpose of this thesis is therefore to handle data skew in MapReduce so as to achieve load balancing. We use a MapReduce program (the Balanced-Partitioning algorithm) as a pre-processing step: the mappers count the frequency of every key, a reducer merges all mapper outputs, and our cut-sub-partition and bucket-packing methods assign every key to the proper partition. In this way, the reducers of the actual application are load balanced.
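To make the idea concrete, here is a minimal Python sketch of the kind of frequency-counting and greedy bucket-packing step the abstract describes. It is an illustration, not the thesis's code; the function names are assumptions, and the cutting of very heavy keys into sub-partitions is omitted.

```python
from collections import Counter
import heapq

def plan_partitions(keys, num_reducers):
    """Pre-processing sketch: count key frequencies (the mappers' job in the
    abstract), then greedily pack keys into reducer buckets so that the
    per-reducer load is as even as possible (the bucket-packing step)."""
    freq = Counter(keys)                               # key -> record count
    buckets = [(0, r) for r in range(num_reducers)]    # (current load, reducer id)
    heapq.heapify(buckets)
    assignment = {}
    # place heavy keys first; each key goes to the currently lightest reducer
    for key, count in freq.most_common():
        load, r = heapq.heappop(buckets)
        assignment[key] = r
        heapq.heappush(buckets, (load + count, r))
    return assignment                                  # used by a custom partitioner

# Example: skewed keys, 3 reducers
print(plan_partitions(["a"] * 90 + ["b"] * 30 + ["c"] * 25 + ["d"] * 20, 3))
```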
15

Chen, Syu-Huan, and 陳旭洹. "A Balanced Partitioning Mechanism with Condensed, Collapsed Trie in MapReduce." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/49897a.

Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electronic Engineering, 106. MapReduce has emerged as an efficient platform for coping with big data. It achieves this goal by decoupling the data and then distributing the workloads to multiple reducers for processing in a fully parallel manner. For skewed data, the hash function of MapReduce usually generates unbalanced workloads across the reducers. These unbalanced workloads degrade the performance of MapReduce significantly, because the overall running time of a map-reduce cycle is determined by the longest-running reducer. Thus, developing a balanced partitioning algorithm which distributes the workload evenly over all the reducers is an important issue. The aim of this thesis is to propose a balanced partitioning mechanism with a condensed trie in MapReduce, which evenly distributes the data to the reducers. We then propose a quasi-optimal packing algorithm to assign sub-partitions to the reducers evenly, reducing the total execution time. The proposed partitioning mechanism requires a reasonable amount of memory and incurs a small running overhead. Experiments using inverted indexing on several real-world datasets are conducted to evaluate the performance of the proposed partitioning mechanism.
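As an aside for readers unfamiliar with the data structure named in the title, a condensed (path-compressed) trie stores key-frequency counts while collapsing single-child chains, so that groups of keys sharing a prefix can be handled as one sub-partition. The sketch below is illustrative only; the class and function names are assumptions, it is not the thesis's implementation, and exact-key terminals are not marked.

```python
class CollapsedTrieNode:
    """Trie node whose edge label may hold a whole substring (path compression),
    with a count of how many records fall under this prefix."""
    def __init__(self, label=""):
        self.label = label          # (possibly multi-character) edge label
        self.count = 0              # records whose key passes through this node
        self.children = {}          # first character of child label -> child node

def insert(root, key, count=1):
    """Insert a key, splitting a compressed edge when labels diverge."""
    node = root
    node.count += count
    while key:
        first = key[0]
        child = node.children.get(first)
        if child is None:
            leaf = CollapsedTrieNode(key)
            leaf.count = count
            node.children[first] = leaf
            return
        # longest common prefix between the remaining key and the edge label
        lcp = 0
        while lcp < min(len(key), len(child.label)) and key[lcp] == child.label[lcp]:
            lcp += 1
        if lcp < len(child.label):
            # split the compressed edge at the divergence point
            split = CollapsedTrieNode(child.label[:lcp])
            split.count = child.count
            child.label = child.label[lcp:]
            split.children[child.label[0]] = child
            node.children[first] = split
            child = split
        child.count += count
        node, key = child, key[lcp:]

# Example: after these inserts, root.count == 4 and the 'ap...' keys share one subtree
root = CollapsedTrieNode()
for k in ["apple", "apply", "apt", "banana"]:
    insert(root, k)
```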
16

"Load-balanced Range Query Workload Partitioning for Compressed Spatial Hierarchical Bitmap (cSHB) Indexes." Master's thesis, 2018. http://hdl.handle.net/2286/R.I.51718.

Abstract:
Spatial databases are used to store geometric objects such as points, lines and polygons, and querying such complex spatial objects is a challenging task. Index structures are used to improve the lookup performance of the stored objects, but traditional index structures do not perform well for spatial databases. A significant amount of research has addressed ingesting, indexing and querying spatial objects for different types of spatial queries, such as range, nearest-neighbor and join queries. The compressed Spatial Hierarchical Bitmap (cSHB) index structure is one such indexing and querying approach that supports spatial range query workloads (sets of queries). cSHB indexes and many other approaches lack parallel computation. The massive amount of spatial data requires a lot of computation, and traditional methods are insufficient to address these issues. Other existing parallel processing approaches lack load balancing of parallel tasks, which leads to resource-overloading bottlenecks. In this thesis, I propose novel spatial partitioning techniques, Max Containment Clustering and Max Containment Clustering with Separation, to create load-balanced partitions of a range query workload. Each partition takes a similar amount of time to process its spatial queries and reduces the response latency by minimizing the disk access cost and optimizing the bitmap operations. The partitions created are processed in parallel using cSHB indexes. The proposed techniques utilize the block-based organization of bitmaps in the cSHB index and improve the performance of the cSHB index for processing a range query workload. Dissertation/Thesis. Master's Thesis, Computer Science, 2018.
17

Lai, Pin-Cheng, and 賴品丞. "Multi-Layer Partitioning with Consideration of Area Balance and Thermal Distribution." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/979ugq.

Abstract:
Master's thesis. National Taipei University of Technology, Graduate Institute of Electrical Engineering, 102. Nowadays, integrated circuits are getting larger and incorporate more complex functions. As the density of components increases, the routing between components becomes relatively long, which may increase latency and cause routing failures. Three-dimensional integrated circuits are viewed as a new alternative solution to this problem, and through-silicon vias (TSVs) are widely used to connect the different layers. In current three-dimensional integrated circuit design, heat is an important issue. This thesis introduces a simulated-annealing-based method, applied after layer assignment, to further minimize the number of TSVs while considering area balance and power. Experimental results show that, with our method, the intermediate layers can have acceptable power density, while area balance is achieved and the number of TSVs is minimized.
18

"Precipitation Phase Partitioning with a Psychrometric Energy Balance: Model Development and Application." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-10-1279.

Abstract:
Precipitation phase is fundamental to a catchment's hydrological response to precipitation events in cold regions and is especially variable over time and space in complex topography. Phase is controlled by the microphysics of the falling hydrometeor, but microphysical calculations require detailed atmospheric information that is often unavailable and lacking from hydrological analyses. In hydrology, many methods have been developed to estimate phase, but most are regionally calibrated, and many depend on air temperature (Ta) and use daily time steps. Phase is related not only to Ta but also to other meteorological variables such as humidity. In addition, precipitation events are dynamic, adding uncertainty to the use of daily indices to estimate phase. To better predict precipitation phase with respect to meteorological conditions, the combined mass and energy balance of a falling hydrometeor was calculated and used to develop a model to estimate precipitation phase. Precipitation phase and meteorological data were observed at multiple elevations in a small Canadian Rockies catchment, Marmot Creek Research Basin, at 15-minute intervals over several years to develop and test the model. The mass and energy balance model was compared to other methods over varying time scales, seasons, elevations and topographic exposures. The results indicate that the psychrometric energy balance model performs much better than Ta methods and that this improvement increases as the calculation time interval decreases. The uncertainty that differing phase methods introduce into hydrological process estimation was assessed with the Cold Regions Hydrological Model (CRHM). The rainfall/total precipitation ratio, runoff, discharge and snowpack accumulation were calculated using a single and a double Ta threshold method and the proposed physically based mass and energy balance model. Intercomparison of the hydrological responses of the methods highlighted differences between Ta-based and psychrometric approaches. The uncertainty of hydrological processes, as established by simulating a wide range of Ta methods, reached up to 20% for the rain ratio, 1.5 mm for mean daily runoff, 0.4 mm for mean daily discharge and 160 mm for peak snow water equivalent. The range of Ta methods showed that snowcover duration, snow-free date and peak discharge date could vary by up to 36, 26 and 10 days respectively. The greatest hydrological uncertainty due to precipitation phase methods was found for sub-alpine and sub-arctic headwater basins, and the least uncertainty was found for a small prairie basin.
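For orientation, a psychrometric energy balance sets the sensible heat the air supplies to a falling hydrometeor equal to the latent heat consumed by evaporation or sublimation at its surface. In a generic textbook form (not necessarily the exact formulation used in this thesis):

```latex
% Psychrometric balance for the hydrometeor surface temperature T_i:
% sensible heat gained from air at T_a equals latent heat of evaporation/sublimation,
% with c_p the specific heat of air, L the latent heat and q the specific humidity.
c_p \,(T_a - T_i) \;=\; L \,\bigl(q_{\mathrm{sat}}(T_i) - q_a\bigr)
```

Solving iteratively for T_i gives an ice-bulb/wet-bulb-like temperature, so humidity enters the phase decision directly rather than through an air-temperature threshold alone.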
19

Chen, Chin-Fu, and 陳進福. "3D IC Layer Partitioning with Area Balance and Number of TSVs Consideration." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/p5dnv3.

Abstract:
Master's thesis. National Taipei University of Technology, Graduate Institute of Electrical Engineering, 101. As the complexity of integrated circuits has grown rapidly in recent years, interconnect delay and power consumption have become the bottleneck of the traditional 2-D architecture. The emerging 3-D architecture uses the technique of through-silicon vias (TSVs) to connect the layers; it can effectively reduce chip area, interconnect delay and power consumption. In this thesis, we propose a methodology for 3-D partitioning. A KL-based algorithm, named NTAB, is proposed for circuit partitioning. Instead of simply minimizing cut size, our approach considers not only the number of TSVs but also the area balance between the different layers. In addition, we provide a cloud platform through which clients can use our program remotely. In the experiments, we use the GSRC benchmarks as test circuits. The experimental results reveal that our approach can derive a 3-D partitioning with better area balance and fewer TSVs.
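To illustrate the kind of trade-off a KL-style 3-D partitioner evaluates, the sketch below scores a candidate move by its reduction in estimated TSVs under a simplified model and filters it with an area-balance test. This is a generic illustration, not the NTAB algorithm from the thesis; all names and the balance rule are assumptions.

```python
def net_tsvs(layers):
    """TSVs needed by one net under a simple model: the net must cross every
    layer boundary between its lowest and highest layer."""
    return max(layers) - min(layers)

def move_gain(cell, nets, layer_of, target_layer):
    """Reduction in estimated TSV count if `cell` moves to `target_layer`.
    `nets` maps a net name to the set of cells it connects."""
    gain = 0
    for cells in nets.values():
        if cell not in cells:
            continue
        now = [layer_of[c] for c in cells]
        after = [target_layer if c == cell else layer_of[c] for c in cells]
        gain += net_tsvs(now) - net_tsvs(after)
    return gain

def move_keeps_balance(cell, cell_area, layer_area, src, dst, tolerance=0.10):
    """Accept a move only if both affected layers stay within (1 +/- tolerance)
    of the average layer area."""
    mean = sum(layer_area.values()) / len(layer_area)
    new_src = layer_area[src] - cell_area[cell]
    new_dst = layer_area[dst] + cell_area[cell]
    return (abs(new_src - mean) <= tolerance * mean and
            abs(new_dst - mean) <= tolerance * mean)
```

A KL-style pass would pair such moves into swaps, lock moved cells, and keep the best prefix of the move sequence; only the scoring and balance test are sketched here.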
20

Geremew, Eticha Birdo. "Modelling the soil water balance to improve irrigation management of traditional irrigation schemes in Ethiopia." Thesis, 2009. http://hdl.handle.net/2263/24932.

Abstract:
Traditional irrigation has been practiced in Ethiopia since time immemorial. Despite this, water productivity in the sector has remained low. A survey of the Godino irrigation scheme revealed that farmers used the same amount of water and the same intervals regardless of crop species and growth stage. In an effort to improve water productivity, two traditional irrigation scheduling methods were compared with two scientific methods, using furrow irrigation. The growth performance and tuber yield of potato (cv. Awash) revealed that irrigation scheduling using a neutron probe significantly outperformed the traditional methods, followed by the SWB model Irrigation Calendar. Since the neutron probe method involves high initial cost and skill requirements, the use of the SWB Calendar is suggested as a replacement for the traditional methods. SWB is a generic crop growth model that requires crop-specific parameters, determined experimentally, before it can be used for irrigation scheduling. It also accurately describes deficit irrigation strategies where water supply is limited. Field trials were conducted to evaluate four potato cultivars for growth performance and assimilate partitioning, and to identify the onion growth stages most critical to water stress. Crop-specific parameters were also generated. Potato and onion crops are widely grown at the Godino scheme, where water scarcity is a major constraint. These crop-specific parameters were used to calibrate and evaluate SWB model simulations. Results revealed that SWB model simulations of top dry matter (TDM), harvestable dry matter (HDM), leaf area index (LAI), soil water deficit (SWD) and fractional interception (FI) fitted the measured data well, with a high degree of statistical accuracy. The response of onions to water stress showed that the bulb development (70-110 DATP) and bulb maturity (110-145) stages were most critical to water stress, which resulted in a significant reduction in onion growth and bulb yields. SWB also showed that onion yield was most sensitive to water stress during these two stages. An irrigation calendar, using the SWB model, was developed for five different schemes in Ethiopia, using long-term weather data and crop-specific parameters for potatoes and onions. The calendars revealed that water depth varied depending on climate, crop type and growth stage. Thesis (PhD), University of Pretoria, 2009. Plant Production and Soil Science. Unrestricted.
21

Krämer, Inga. "Rainfall partitioning and soil water dynamics along a tree species diversity gradient in a deciduous old-growth forest in Central Germany." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B692-C.
