
Theses on the topic "Benchmark"



Consult the 50 best theses for your research on the topic "Benchmark".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Eugster, Manuel J. A. "Benchmark Experiments". Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-129904.

Full text
2

Nascimento, Samara Martins do. "Spatial Star Schema Benchmark – um benchmark para data warehouse geográfico". Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/12421.

Full text
Abstract
The experimental technique for performance evaluation used in database applications and systems consists mainly of benchmarking: a set of experimental tests that are defined in advance and then executed to obtain performance results. Geographic Data Warehouses (GDWs) store the geometries of objects representing locations on the Earth's surface and support analytical, multidimensional query processing. The TPC-D, TPC-H and SSB benchmarks are used to evaluate the performance of conventional data warehouses, and the Spadawan benchmark to evaluate the performance of geographic data warehouses. However, these benchmarks cannot be considered comprehensive because of their limited workloads. In this dissertation we therefore propose a new benchmark, called the Spatial Star Schema Benchmark (Spatial SSB), designed specifically for evaluating query performance in GDW environments. The main contributions of the Spatial SSB are concentrated in three points. First, the Spatial SSB uses three types of geometric data (i.e. points, lines and polygons) in a hybrid schema, and it controls selectivity, i.e. the number of rows returned from the fact table for each spatial query in the benchmark's workload. Second, the Spatial SSB controls data generation and distribution over the extent, as well as the variation of data volume, both by increasing the complexity of the spatial objects and by increasing the number of spatial objects through a larger scale factor. Third, the Spatial SSB obtains the number of objects intersected by ad hoc query windows that overlap a user-defined percentage of the extent.
Experimental results showed that these characteristics significantly degrade query performance over GDWs.
3

Brewer, Chris L., Nick Sexton, Julian Mintzis and Abhay Bansal. "uLocal Benchmark Evaluation". Thesis, The University of Arizona, 2009. http://hdl.handle.net/10150/192309.

Full text
4

Islam, Mohammad Nazrul. "Extending WCET benchmark programs". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13929.

Full text
Abstract
Today, traditional mechanical and electrical systems are being replaced with special ICT (information and communication technology) based solutions, and with the invention of new technologies this trend is increasing further. This special ICT-based domain is called real-time systems, and today's drive-by-wire, electronic stability programs in cars and control software in vehicles are just a few examples. The task is a fundamental element of the software in a real-time system, and it is always necessary to know the longest execution time of a task, since missing a task's deadline is not allowed in a time-critical hard real-time system. The longest execution time of a task, or the Worst-Case Execution Time (WCET), is estimated by WCET analysis. This estimate should be tight and safe to ensure the proper timing behavior of the real-time system. But WCET analysis is not always easy to perform, as the execution time of a task can vary with software characteristics such as program flow or input data, and with hardware characteristics such as CPU speed, cache and pipeline. There are several methods and tools for WCET analysis; some are commercial products and others are research prototypes. To verify and validate WCET analysis tools, evaluations of the tools' properties are important, and thus WCET benchmark programs have emerged in recent years, intended for comparison of these tools' properties and associated methods. The Mälardalen WCET benchmark suite has been maintained to evaluate the properties of various tool sets. In this thesis these benchmark programs have been analyzed with SWEET (Swedish WCET Analysis Tool), a research prototype for WCET analysis and the main tool used in this thesis. The main goal of this thesis work was to extend existing benchmark programs for WCET tools.
It was obvious that most of the workload would be on benchmark program extension, and at the beginning the work started by analyzing different small WCET benchmark programs. The evaluation of SWEET's properties was then taken further by analyzing another benchmark program called PapaBench, a free real-time benchmark from the Paparazzi project that represents a real-time application developed to be embedded on different Unmanned Aerial Vehicles (UAVs). A lot of time was required to complete the analysis of PapaBench. The main reason behind this extensive work was that we decided to participate with SWEET in WCET Challenge 2011 (WCC 2011), so the purpose of the thesis ultimately turned into analyzing PapaBench instead of extending the WCET benchmark programs. The result of the thesis work is therefore mainly the analysis results from PapaBench, which were reported to WCC 2011. The results from WCC 2011 are included in a paper presented at the WCET 2011 workshop, which took place in July 2011 in Porto, Portugal. Another part of the work was to examine real-time train control software provided by Bombardier. The main reason for obtaining this industrial code was to possibly add new benchmark programs to the Mälardalen WCET benchmark suite. A thorough manual study of this code was performed to find out whether new benchmark programs could be extracted from it. However, due to its structure and size, we decided that this code was not suitable to add to the Mälardalen WCET benchmark suite.
5

Bhikadiya, Ruchit Anilbhai. "Hybrid Vehicle Control Benchmark". Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171586.

Full text
Abstract
The new emission regulations for new trucks were introduced to decrease CO2 emissions by 30% from 2020 to 2030. One of the solutions is hybridizing the truck powertrain with 48 V or 600 V systems that can recover brake energy with electrical machines and batteries. Control of this hybrid powertrain is key to increasing fuel efficiency. The idea behind this approach is to combine two different power sources, an internal combustion engine and a battery-driven electric machine, and use both to provide tractive force to the vehicle. This approach requires a HEV controller to manage the power flow within the system. The HEV controller, which contains an energy management strategy, is the key to maximizing fuel savings. It uses knowledge of the road profile ahead from GPS and maps, and interacts strongly with the control of cruise speed, automated gear shifts, powertrain modes and state of charge. In this master thesis, dynamic programming is used as the predictive energy management strategy for a hybrid electric truck in a forward-facing simulation environment. An analysis of predictive energy management is then done for receding and full horizon lengths on flat and hilly drive cycles, with fuel consumption and recuperated energy regarded as the primary factors. Another important factor considered is the powertrain mode of the vehicle with different penalty values. The results of the horizon study indicate that a long receding horizon has the benefit of storing more recuperated energy. Fuel consumption decreased for all drive cycles in comparison with Volvo's existing strategy.
6

Wang, Yie. "An electronic commerce Web benchmark". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39706.pdf.

Full text
7

Dasarathan, Dinesh. "Benchmark Characterization of Embedded Processors". NCSU, 2005. http://www.lib.ncsu.edu/theses/available/etd-05152005-170108/.

Full text
Abstract
The design of a processor is an iterative process, with many cycles of simulation, performance analysis and subsequent changes. The inputs to these cycles of simulation are generally a selected subset of standard benchmarks. To help reduce the number of cycles involved in design, one can characterize these selected benchmarks and use those characteristics to arrive at a good initial design that will converge faster. Methods and systems to characterize benchmarks for conventional processors have been designed and implemented. This thesis extends these approaches and defines an abstract system to characterize benchmarks for embedded processors, taking into consideration architectural requirements, power constraints and code compressibility. To demonstrate this method, around 25 benchmarks are characterized (10 from SPEC, and 15 from standard embedded benchmark suites, Mediabench and Netbench) and compared. Moreover, the similarities between these benchmarks are also analyzed and presented.
8

Deckert, Arthur Allen Jr. "Benchmark evaluation of PC SIMSCRIPT". Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21177.

Full text
9

Zu, Yige. "Developing a practicable benchmark VAT". Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/22448/.

Full text
Abstract
This thesis develops a practicable benchmark VAT that bridges the gap between theory and practice in VAT design and provides concrete guidance for countries to evaluate, assess, and, where appropriate, reform their VATs. The potential use of the practicable benchmark in devising a reform agenda is illustrated by means of a case study based on the Chinese VAT, a tax that is at odds with the theoretical model in many respects. Experience has shown that real-world VATs most often deviate substantially from the theoretical VAT model, revealing the disconnect between the theoretical model, based primarily on economic criteria, and actual VAT designs, which recognise the administrative, political and technical constraints encountered in the real world. A single-rate, broad-based VAT is not readily achievable in many countries, and VAT designers are further faced with issues that are not addressed directly in the model, including the application of VAT to small businesses, non-resident businesses, financial supplies, low-value imports, and cross-border services, as well as the challenges of devising workable arrangements for VAT systems in a federal or economic-community setting. The thesis applies a tax expenditure analysis to evaluate the effectiveness, efficiency implications and revenue impact of VAT concessions. The negative consequences of concessions could be reduced with better targeting if the removal of concessions is politically unattainable. The registration threshold should be set at a level where the revenue costs are offset by the administrative savings from excluding small businesses from the VAT. Small business regimes often do not achieve their intended objectives and moreover yield efficiency and revenue costs.
The best option to bring the financial and insurance sectors into full taxation is to use a separate (reduced) rate approach to tax intermediary loan services, a cash-flow model to tax insurance services and to categorise the issue and transfer of financial securities as zero-rated supplies. Effective collection of VAT on cross-border B2C imports of low value goods and services and removal of VAT from business acquisitions by non-resident businesses could be achieved with a higher level of international cooperation through bilateral treaties and a clearing house mechanism. No single benchmark is possible in terms of the design of VAT sharing in federations or economic communities because appropriate design relies heavily on the political and structural factors in federations. The clearing house model appears to be the best option to distribute VAT revenue in most circumstances where sub-central jurisdictions have their own VATs. The benchmark needs to be modified to accommodate local factors when applied to any particular country. It nevertheless provides a starting point for countries to evaluate and reform their VATs. The case study of China shows the process of applying the benchmark to an ill-designed real-world VAT. VAT design often reflects features of predecessor taxes and in this respect China may have an advantage notwithstanding the significant deviation of its current VAT from the benchmark. Some features inherited from the predecessor tax may make reform, particularly in respect of financial supplies, easier than in counterparts that evolved from European turnover taxes.
10

Albaaj, Hassan and Victor Berggren. "Benchmark av Containers och Unikernels". Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50214.

Full text
Abstract
Purpose – The purpose of this paper is to explore the possibility of making local networks and databases more efficient using unikernels, and to compare this with containers. This could also apply to the reliability of executing programs the same way on different hardware in software development. Method – Two experiments were performed to explore whether the purpose could be realized; quantitative data were gathered and displayed in both cases. Python scripts were used to start C programs acting as client and server. Algorithms were timed running in unikernels as well as in containers, along with compared measurements of memory use in multiple simultaneous instantiations. Findings – Intermittent response-time spikes made the data hard to parse correctly. Containers had a lower average response time when running lighter algorithms. The average response time of unikernels dives below that of containers when heavier programs are simulated. A few minor bugs were discovered in Unikraft unikernels. Implications – Unikernels have characteristics that make them more suitable for certain tasks compared to their counterpart; the same is true for containers. Unikraft unikernels are unstable, which makes it seem as though containers are faster during lighter simulations. Unikernels are only faster and more secure if the tools used to build them do so in a manner that makes them stable. Limitations – The lack of standards, the lack of a support community, and the fact that unikernels are a small and niche field mean that unikernels have a relatively high learning curve. Keywords – Unikraft, Unikernels, Docker, Container
11

Fang, Qijun. "Hierarchical Bayesian Benchmark Dose Analysis". Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/316773.

Full text
Abstract
An important objective in statistical risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to hierarchical Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indeed, for the few existing forms of Bayesian BMDs, informative prior information is seldom incorporated. Here, a new method is developed by using reparameterized quantal-response models that explicitly describe the BMD as a target parameter. This potentially improves BMD/BMDL estimation by combining elicited prior belief with the observed data in the Bayesian hierarchy. The large variety of candidate quantal-response models available for applying these methods, however, leads to questions of model adequacy and uncertainty. Facing this issue, the Bayesian estimation technique is further enhanced by applying Bayesian model averaging to produce point estimates and (lower) credible bounds. Implementation is facilitated via a Monte Carlo-based adaptive Metropolis (AM) algorithm to approximate the posterior distribution. Performance of the method is evaluated via a simulation study. An example from carcinogenicity testing illustrates the calculations.
12

Werner, Sarah. "Internationalisierung von Universitäten Eine Benchmark-Analyse /". St. Gallen, 2006. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/03605649001/$FILE/03605649001.pdf.

Full text
13

Liu, Jingyu. "Autologistic Modeling in Benchmark Risk Analysis". Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/626166.

Full text
Abstract
An important objective in statistical risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). With this context, a quantitative methodology is developed to characterize vulnerability among 132 U.S. urban centers ('cities') to terrorist events, applying a place-based vulnerability index to a database of terrorist incidents and related human casualties. A centered autologistic regression model is employed to relate urban vulnerability to terrorist outcomes and also to adjust for autocorrelation in the geospatial data. Risk-analytic BMDs are then estimated from this modeling framework, wherein levels of high and low urban vulnerability to terrorism are identified. This new, translational adaptation of the risk-benchmark approach, including its ability to account for geospatial autocorrelation, is seen to operate quite flexibly in this socio-geographic setting. Further, alternative definitions for neighborhoods are considered to extend the autologistic benchmark paradigm to non-spatial settings. All 3108 counties in the contiguous 48 U.S. states are studied to identify a benchmark dose variable as the number of hazards. This is employed to benchmark billion-dollar losses across each county. County-level resilience is used as a potential characteristic for defining the neighborhood structure within the autologistic model.
14

Li, Xi. "Benchmark generation in a new framework /". View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?IELM%202007%20LI.

Full text
15

Lotter, Norman Owen. "Statistical benchmark surveying of production concentrators". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100646.

Full text
Abstract
The sampling and analysis of sulphide mineral processing plants is addressed in this study. A review of the published literature has shown that the foundations of this topic were laid in the 1970s, but typically a single sampling test was performed, and its representativity was accepted provided its metallurgical balance closed without excessive adjustments. No mention was made of quality control or equivalent tests of the representativity of the feed material during sampling tests, and no recognition was given to the effect(s) of ore grade on metallurgical performance.
In this study, a quantitative model, called a statistical benchmark survey, is presented. Multiple surveys are completed over a limited time; the corresponding stream samples of the surveys deemed acceptable are combined to obtain high confidence composite samples. The head grade of each survey is compared to two distributions to test its acceptability, typically at a 95% confidence level. These distributions are called the Internal Reference Distribution and the External Reference Distribution.
The first test---on the Internal Reference Distribution---uses the Sichel t-estimator, a lognormal model designed for use on small data sets, on the set of six survey unit head grades. The associated confidence limits of this mean grade are equivalent to two standard errors of the distribution, but are skewed about the sample mean. The second test, this time by the External Reference Distribution, also uses a lognormal platform, designed by Krige, but uses larger data sets from 1-3 months of shift sample head grades. The associated confidence limits of this second model are also skewed, but are wider than for the Sichel model, and are equivalent to two standard deviations of the sample mean. This outlier rejection model produces ore grade estimates that are in good agreement with the more robust External Reference Distribution means.
The Raglan Mine case study is used to illustrate that ore grades in situ are highly lognormal; this lognormality is also present in the time domain in head samples (taken at the cyclone overflow), but is less pronounced (i.e. residual).
Two survey models are presented. The benchmark model describes typical operations. The campaign model specifically chooses ore types that are mined and milled in a specific week of operations for predictive or diagnostic purposes.
The multiple mineral hosting of nickel across three orders of magnitude extends this problem into that of a compound distribution. The construction and use of an External Reference Distribution to estimate the mean and associated skew confidence limits of this compound distribution is shown for both drill core and ore milled (the latter in a case of residual lognormality). A trial decomposition of the spatial External Reference Distribution is discussed. The heterogeneous nickel mineral hosting in ore, after processing, becomes an artificially controlled final concentrate, containing most of the economic nickel sulphides in a normal distribution, and most of the uneconomic nickel minerals in a final tailing with a residually bimodal lognormal distribution.
The presence of bimodal lognormality in final tailing data may have historical or predictive uses: at Raglan, flowsheet improvements and more seasoned operations contributed to a decrease in the mean of both the low-grade and high-grade modes, and to an increase in the contribution of the low-grade mode.
16

Sakalis, Christos. "Correctly Synchronised POSIX-threads Benchmark Applications". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-268183.

Full text
Abstract
With the future of high performance computing quickly moving towards a higher and higher count of CPU cores, the need for efficient memory coherence models is becoming more and more prevalent. Strict memory models, while convenient for the programmer, limit the scalability and overall performance of multi- and manycore systems. For this reason, relaxed memory models are looked into, both in academia and in the industry. Applications written for stronger memory models often contain data races, which cause unexpected behaviour in more relaxed models, many of which rely on data race free code to work. At the same time, some of the most widely used programming languages now require data race free code. For these reasons, the need for benchmarks based on properly synchronised code is bigger than ever. In this thesis, we will identify data races in major benchmark suites, remove them, and then quantify and compare the performance differences between the unmodified and the properly synchronised versions.
17

Hormi, K. (Kari). "Qt benchmark suite for embedded devices". Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201710112978.

Full text
Abstract
Embedded computing systems appear everywhere nowadays. Because their power has been increasing constantly, the demand to make these systems more intelligent and interactive has also been increasing. Part of improving system intelligence and interactivity is creating user interfaces that are responsive even in the most complex environments. This, when coupled with a need to minimize hardware costs, creates the challenge of finding a good balance between the resource usage of rich-content user interfaces and minimal hardware requirements. Benchmarking has traditionally provided good answers to these kinds of questions. However, most modern benchmarks measuring the graphical power of computing systems are mainly targeted at desktop or mobile platforms and do not offer alternatives for pure embedded systems. In this thesis, a benchmark suite for embedded systems using Qt, a cross-platform application framework, is created and evaluated. The suite is mostly targeted at users interested in finding out the graphical performance of their embedded hardware. The evaluation revealed strengths and weaknesses of the devices measured. It also revealed strengths and weaknesses of the benchmark suite itself.
18

Guo, Meng. "Benchmark, Explain, and Model Urban Commuting". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354597241.

Full text
19

Černý, Jan. "Benchmark nástrojů pro řízení datové kvality". Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-198427.

Full text
Abstract
Companies all around the world are wasting money due to poor data quality. As the volume of processed data increases, the volume of erroneous data increases too. This diploma thesis explains what data quality is about, the causes of data quality errors, the impact of poor data, and the ways data quality can be measured. If you can measure it, you can improve it. This is where data quality tools are used. Some vendors offer commercial data quality tools, and others offer open-source ones. Comparing DataCleaner (an open-source tool) with DataFlux (a commercial tool) using defined criteria, this diploma thesis shows that the two tools can be considered equal in terms of data profiling, data enhancement and data monitoring. DataFlux is slightly better at standardization and data validation. Data deduplication is not included in the tested version of DataCleaner, although DataCleaner's vendor claimed it should be. One of the biggest obstacles to companies buying data quality tools could be price. At this moment, DataCleaner can be considered an inexpensive solution for companies looking for a data profiling tool. If Human Inference added data deduplication to DataCleaner, it could also be considered an inexpensive solution covering the whole data quality process.
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Peterson, Ross Jordan. "LANDS' END: OWL TOWELS BENCHMARK ANALYSIS". Thesis, The University of Arizona, 2009. http://hdl.handle.net/10150/192563.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Sales, Jon Kyle. "OWL TOWELS BENCHMARK FIRM: L.L. BEAN". Thesis, The University of Arizona, 2009. http://hdl.handle.net/10150/192957.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Hasan, I. (Irtiza). "Benchmark evaluation of object segmentation proposal". Master's thesis, University of Oulu, 2015. http://jultika.oulu.fi/Record/nbnfioulu-201508291934.

Texto completo
Resumen
Abstract. In this research, we provide an in-depth analysis and evaluation of four recent segmentation proposal algorithms on the PASCAL VOC benchmark. The principal goal of this study is to investigate these object detection proposal methods in an unbiased evaluation framework. Despite their widespread application, the strengths and weaknesses of different segmentation proposal methods relative to each other are mostly unclear in previous work. This thesis provides additional insights into segmentation proposal methods. In order to evaluate the quality of proposals, we plot recall as a function of the average number of regions per image. The PASCAL VOC 2012 object categories where the methodologies show high performance, and the instances where these algorithms suffer low recall, are also discussed in this work. Experimental evaluation reveals that, despite being different in operational nature, all segmentation proposal methods generally share similar strengths and weaknesses. The analysis also shows how one could select a proposal generation method based on object attributes. Finally, we show that an improvement in recall can be obtained by merging the proposals of different algorithms. Experimental evaluation shows that this merging approach outperforms the individual algorithms in terms of both precision and recall.
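The evaluation protocol described above (recall as a function of proposals kept per image, with a match counted at IoU ≥ 0.5) can be sketched as follows. The boxes, helper names and numbers are invented for illustration and are not from the thesis.

```python
# Hypothetical sketch: proposal recall at IoU >= 0.5 as a function of the
# number of ranked proposals kept per image.

def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def recall_at_k(ground_truths, proposals, k, thresh=0.5):
    # fraction of ground-truth boxes covered by at least one of the
    # top-k proposals with IoU >= thresh, over all images
    hits = 0
    for gt_boxes, props in zip(ground_truths, proposals):
        for g in gt_boxes:
            if any(iou(g, p) >= thresh for p in props[:k]):
                hits += 1
    total = sum(len(gt_boxes) for gt_boxes in ground_truths)
    return hits / float(total)

# toy example: one image, two objects, three ranked proposals
gts = [[(0, 0, 10, 10), (20, 20, 30, 30)]]
props = [[(1, 1, 10, 10), (50, 50, 60, 60), (19, 19, 31, 31)]]
print(recall_at_k(gts, props, k=1))  # only the first object is covered
print(recall_at_k(gts, props, k=3))  # the third proposal covers the second
```

Sweeping `k` over a range and averaging across a dataset yields the recall-versus-proposals curve the abstract refers to.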
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Riffel, Judith Louise Seelig. "Effect of principal leadership strategies on teachers' use of data in benchmark and non-benchmark middle schools". Click here to access dissertation, 2007. http://www.georgiasouthern.edu/etd/archive/fall2007/judith_l_riffel/riffel_judith_200708_edd.pdf.

Texto completo
Resumen
Thesis (Ed.D.)--Georgia Southern University, 2007.
"A dissertation submitted to the Graduate Faculty of Georgia Southern University in partial fulfillment of the requirements for the degree Doctor of Education." Education Administration, under the direction of Walter S. Polka. ETD. Electronic version approved: December 2007. Includes bibliographical references (p. 88-98) and appendices.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Kihlström, Andreas y Joakim Weivert. "RR PLC Application Code : a Benchmark Study". Thesis, Karlstad University, Karlstad University, Division for Engineering Sciences, Physics and Mathematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-4579.

Texto completo
Resumen

This degree thesis was performed at Rolls-Royce in the Control Systems division.

The assignment is to compare two different PLC development tools and determine what a transition to a new PLC development tool would involve. The programs to be compared are the current tool, AutoCAD with the ACG extension, and CoDeSys.

A transition from AutoCAD to CoDeSys is realizable but will take considerable time and effort. The easiest way to achieve this is, during the transition, to generate an export file from the existing drawings in AutoCAD which can then be imported into CoDeSys. CoDeSys can thereby be used as a development platform. This is fully realizable because ACG, which generates the C code, can be modified to generate almost any export file.

Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Willschütz, Hans-Georg. "CFD-Calculations to a Core Catcher Benchmark". Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-30419.

Texto completo
Resumen
There are numerous experiments exploring corium spreading behaviour, but comparable data have not been available up to now on the long-term behaviour of corium spread in a core catcher. The difficulty consists in the experimental simulation of the decay heat, which can be neglected for short-term events like relocation and spreading but must be considered when investigating the long-term behaviour. Therefore the German GRS, together with Battelle Ingenieurtechnik, defined a benchmark problem in order to determine particular problems and differences between CFD codes simulating spread corium and, from this, requirements for reasonable measurements in experiments to be performed later. First the finite-volume codes Comet 1.023, CFX 4.2 and CFX-TASCflow were used. To enable comparison with a finite-element code, calculations are now performed at the Institute of Safety Research at the Forschungszentrum Rossendorf with the code ANSYS/FLOTRAN. For the benchmark calculations of stage 1, a pure liquid melt with internal heat sources was assumed, uniformly distributed over the area of the planned core catcher of an EPR plant. Using the standard k-ε turbulence model and assuming an initial state of a motionless superheated melt, several large convection rolls establish within the melt pool. The temperatures at the surface do not sink to the solidification level due to the enhanced convective heat transfer. The temperature gradients at the surface are relatively flat, while there are steep gradients at the ground where the no-slip condition is applied. But even at the ground no solidification temperatures are observed. Although the problem in the ANSYS calculations is handled two-dimensionally, not three-dimensionally as in the finite-volume codes, there are no fundamental deviations from the results of the other codes.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Lam, Mary. "Benchmark of Probabilistic Methods for Fault Diagnosis". Thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106235.

Texto completo
Resumen
To be able to take the correct action when a fault is detected, the fault isolation part must be precise and run in real time during operation of the process. In many cases it can be difficult to decide exactly where the fault is localized. In those cases, the isolation algorithm must rank the faults according to their probability of being the cause of the behaviour. This master's thesis project examines probabilistic methods and algorithms for fault isolation in embedded systems. Different kinds of Bayesian networks have been compared in this report, and the comparison has been done on a benchmark system defined in the literature. The Bayesian network models implemented for fault isolation are: 1. Manually constructed (on the basis of physical representations) 2. Two-layer structure with continuous signals and discrete signals 3. Via a temporal causal graph (dynamic network) The algorithms are compared in the following areas: computational complexity, isolation performance, and the degree of difficulty of constructing the network from data. The evaluated algorithms showed good results. Even though the system data used in the Bayesian networks are not very accurate in the first place, they manage to give a fairly precise isolation of the faults. The continuous Bayesian network shows good isolation performance for different types of faults, and the dynamic Bayesian network found most of the faults even for a rather complex network.
This thesis deals with probability-based methods for fault isolation. When a fault occurs on board a Scania truck it can be detected. In the best case a specific component can be pointed out as the cause, but often there will be a number of components that could be the cause. In many cases it is difficult to find exactly where the fault is. To handle these situations, one wants methods for computing the probability that different components are broken. To compute the probability one can use a probabilistic model, i.e. a Bayesian network. In this work, different methods for creating Bayesian networks have been compared. The comparison is done on a benchmark problem well defined in the literature: diagnosing a two-tank system. The types of Bayesian network models implemented for fault isolation are: 1. Manually (based on a physical model) 2. Two-layer structure with continuous signals and discrete signals 3. Via bond graphs (dynamic network) The issues investigated included the difficulty of building the network from data, computational complexity and isolation performance. A comparison between the Bayesian methods for fault isolation and the existing standard methods has also been made. The examined algorithms showed good results. Despite the lack of data, the algorithms showed promising results. The two-layer Bayesian network showed good isolation performance for different component faults, and the dynamic Bayesian network detected most faults even though it was a rather complex network.
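The ranking idea described above, ordering candidate faults by their probability of being the cause, can be sketched with a toy single-fault Bayesian computation. The fault names, priors and likelihoods below are made up for illustration and are not the thesis's models.

```python
# Toy sketch of probabilistic fault ranking: posterior over single faults
# given one observed symptom, with invented priors and likelihoods.
priors = {"leak_tank1": 0.01, "leak_tank2": 0.01, "sensor_bias": 0.02}
# P(observed symptom | fault) -- illustrative numbers only
likelihood = {"leak_tank1": 0.90, "leak_tank2": 0.40, "sensor_bias": 0.70}

def rank_faults(priors, likelihood):
    # posterior is proportional to prior * likelihood (single-fault assumption)
    joint = {f: priors[f] * likelihood[f] for f in priors}
    z = sum(joint.values())
    posterior = {f: p / z for f, p in joint.items()}
    return sorted(posterior.items(), key=lambda kv: -kv[1])

for fault, p in rank_faults(priors, likelihood):
    print(fault, round(p, 3))
```

A full Bayesian network generalizes this by factoring the joint distribution over many signals, but the ranking step at the end is the same.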
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Östberg, Mikael. "UTS: A Portable Benchmark for Erlang/OTP". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26323.

Texto completo
Resumen
In this paper the benchmark Unbalanced Tree Search (UTS) is ported to the functional programming language Erlang and evaluated. The purpose is to provide a portable benchmark that scales with the number of cores in a system. Since Erlang is a language built around concurrency, its speedup as the number of cores rises should prove interesting compared to its competitors. This paper describes how the algorithm works as well as how it performed on a few different systems at SICS, and presents the conclusions that can be drawn from the results. Some questions remain unanswered, however, such as how well the benchmark performed on the Tilera64, because of technical difficulties during the project. The results also proved quite odd, since there are possible bottlenecks in the performance, making the speedup per added processor core somewhat limited. As a consequence of this strange behaviour of the software, some of the conclusions drawn in this thesis are mostly speculation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Saune, Naibuka Uluilakeba Accounting Australian School of Business UNSW. "A re-examination of benchmark beating evidence". Awarded by:University of New South Wales. Accounting, 2009. http://handle.unsw.edu.au/1959.4/44565.

Texto completo
Resumen
This thesis examines the extent to which benchmark beating by Australian firms around the earnings level and earnings changes thresholds can be reliably interpreted as evidence of earnings management. A number of recent academic papers challenge the earnings management explanation for the observed kinks in the distribution of net income. In response to these criticisms, this thesis is motivated to conduct tests of earnings management with a refined methodology that selects a subset of firms immediately above the threshold that have a priori incentives to achieve the benchmark. This approach allows investigations to focus on benchmark beating observations where earnings manipulation would be more prevalent, and thereby provides a powerful test for the existence of opportunistic reporting. The thesis uses a number of unexpected accruals measures, including the Kothari et al. (2005) performance-matched models. In testing the hypotheses, this thesis utilises two approaches: the regression approach and the test of difference of means approach. Based on a broad sample drawn from all listed Australian firms for the years 1995-2007, small profit firms and small increase firms with high price-to-sales ratios were found to show evidence consistent with opportunistic benchmark beating behaviour. Similar results are also documented for benchmark beating firms with low book-to-market (high market-to-book) ratios. This thesis also finds that firms with equity offering incentives that reported improvements in earnings display unexpected accruals consistent with earnings management. In addition, the accounting behaviour of firms which previously incurred a loss is consistent with the earnings management explanation. Firms with long strings of earnings increases also appear to use accounting discretion in order to avoid earnings deterioration. Similarly, evidence of earnings management is also displayed by small profit firms which have consistently reported negative earnings.
Finally, this thesis provides evidence resolving the apparent paradox, showing that benchmark beating is evidence of earnings management that withstands the statistical artefact argument posited by Durtschi and Easton (2005) and Durtschi and Easton (2008).
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Hothorn, Torsten, Friedrich Leisch, Achim Zeileis y Kurt Hornik. "The design and analysis of benchmark experiments". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2003. http://epub.wu.ac.at/758/1/document.pdf.

Texto completo
Resumen
The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms on a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point estimates into account have been suggested. Most of the recently proposed inference procedures are based on special variance estimators for the cross-validated performance. We introduce a theoretical framework for inference problems in benchmark experiments and show that standard statistical test procedures can be used to test for differences in performance. The theory is based on well-defined distributions of performance measures which can be compared with established tests. To demonstrate the usefulness in practice, the theoretical results are applied to benchmark studies in a supervised learning situation based on artificial and real-world data.
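The core idea, comparing per-fold performance estimates of two learners with a standard statistical test, can be sketched as follows. The fold scores are invented for illustration, and a simple paired t-statistic stands in for the paper's more general framework.

```python
# Illustrative sketch: treat per-fold performance estimates as draws from a
# distribution and apply a standard paired test to compare two algorithms.
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(scores_a, scores_b):
    # paired t-statistic on per-fold performance differences
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# hypothetical 10-fold accuracy estimates for two competing learners
alg1 = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.81, 0.80, 0.84, 0.79]
alg2 = [0.76, 0.77, 0.75, 0.78, 0.74, 0.77, 0.76, 0.75, 0.78, 0.76]
t = paired_t_statistic(alg1, alg2)
print(round(t, 2))  # well above conventional critical values here
```

A large |t| relative to the t-distribution with n-1 degrees of freedom indicates a performance difference beyond fold-to-fold variability.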
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Guo, Pinglei. "Benchhub| Store Database Benchmark Result in Database". Thesis, University of California, Santa Cruz, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10749082.

Texto completo
Resumen

Benchmarking is an essential part of evaluating database performance. However, the procedure of setting up the environment and collecting results is time-consuming and not well defined. The uncertainty in the benchmark procedure leads to low reproducibility. Furthermore, many benchmark results are highly compressed and only published in unstructured formats like documents and graphs. Without a structured format and the context of a benchmark, comparing benchmark results across sources is time-consuming and often biased.

In this thesis, BenchHub is presented to remedy those problems. It defines a job specification that covers the life cycle of running a database benchmark in a distributed environment. A reference implementation of the infrastructure is provided and will be hosted as a public service. Using this service, database developers can focus on analyzing benchmark results instead of gathering them. BenchHub stores metrics like latency in time series databases and puts aggregated results, along with benchmark context, in relational databases. Users can query results directly using SQL and compare results across sources without extra preprocessing.

BenchHub integrates time series workloads like Xephon-B and standard database workloads like TPC-C. Comparisons between open-source databases are made to demonstrate its usability. BenchHub is open-sourced under the MIT license and hosted on GitHub.
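The idea of keeping aggregated benchmark results in a relational database and comparing them with plain SQL can be sketched as below. The schema, column names and numbers are invented for illustration and are not BenchHub's actual schema.

```python
# Minimal sketch: store aggregated benchmark results relationally and
# compare systems with a single SQL query, no extra preprocessing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    db_name TEXT, workload TEXT, throughput_ops REAL, p99_latency_ms REAL)""")
rows = [
    ("db_a", "tpcc", 1200.0, 45.0),
    ("db_b", "tpcc", 950.0, 30.0),
    ("db_a", "xephon", 8000.0, 12.0),
    ("db_b", "xephon", 9100.0, 9.0),
]
conn.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", rows)

# which system has the highest throughput on one workload?
best = conn.execute(
    "SELECT db_name FROM results WHERE workload = 'tpcc' "
    "ORDER BY throughput_ops DESC LIMIT 1").fetchone()[0]
print(best)
```

The same table answers latency questions by ordering on `p99_latency_ms` instead, which is the kind of cross-source comparison the abstract describes.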

Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Willschütz, Hans-Georg. "CFD-Calculations to a Core Catcher Benchmark". Forschungszentrum Rossendorf, 1999. https://hzdr.qucosa.de/id/qucosa%3A21868.

Texto completo
Resumen
There are numerous experiments exploring corium spreading behaviour, but comparable data have not been available up to now on the long-term behaviour of corium spread in a core catcher. The difficulty consists in the experimental simulation of the decay heat, which can be neglected for short-term events like relocation and spreading but must be considered when investigating the long-term behaviour. Therefore the German GRS, together with Battelle Ingenieurtechnik, defined a benchmark problem in order to determine particular problems and differences between CFD codes simulating spread corium and, from this, requirements for reasonable measurements in experiments to be performed later. First the finite-volume codes Comet 1.023, CFX 4.2 and CFX-TASCflow were used. To enable comparison with a finite-element code, calculations are now performed at the Institute of Safety Research at the Forschungszentrum Rossendorf with the code ANSYS/FLOTRAN. For the benchmark calculations of stage 1, a pure liquid melt with internal heat sources was assumed, uniformly distributed over the area of the planned core catcher of an EPR plant. Using the standard k-ε turbulence model and assuming an initial state of a motionless superheated melt, several large convection rolls establish within the melt pool. The temperatures at the surface do not sink to the solidification level due to the enhanced convective heat transfer. The temperature gradients at the surface are relatively flat, while there are steep gradients at the ground where the no-slip condition is applied. But even at the ground no solidification temperatures are observed. Although the problem in the ANSYS calculations is handled two-dimensionally, not three-dimensionally as in the finite-volume codes, there are no fundamental deviations from the results of the other codes.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Wu, Xiaolong. "Synthesizing a Hybrid Benchmark Suite with BenchPrime". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85332.

Texto completo
Resumen
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and selects the best clustering algorithm and the optimized number of clusters in an automated and metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites. As a case study, this work compares DenBench with MediaBench and MiBench for the first time, using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures to what degree DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means for identifying an essential subset from the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites.
Master of Science
Representative benchmarks are widely used in research to achieve an accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average towards redundant characteristics, overestimating the benefit of any proposed research. This work proposes a machine learning-based framework, BenchPrime, which generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks the benchmark suites in terms of their application set diversity and estimates how unique each benchmark suite is compared to the other suites.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Abbott, Renata. "Benchmark de resiliencia a través del mundo". Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/164046.

Texto completo
Resumen
THESIS SUBMITTED FOR THE DEGREE OF MASTER IN ECONOMIC ANALYSIS
In this thesis, resilience is understood as the capacity of countries, and therefore of their households, to withstand and recover from negative shocks. This concept is used as a framework to evaluate how effective countries' policies have been in achieving greater risk tolerance. Overall resilience is analyzed, as well as resilience measured for five areas: Human Capital, Financial and Physical Assets, Social Capital, State Aspects and Macroeconomic Aspects. In addition, and in order to make an adequate comparison between countries, the analysis controls for exogenous variables that are not affected by policy but nevertheless explain resilience. A latent variable MIMIC model is used to combine indicators and causes, obtaining a cross-sectional resilience ranking for a sample of 99 countries for the years 2005-2015.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Hofbauer, Jens. "Leistungsbewertung von Workstations mit SPEC-SFS-Benchmarks fuer den Einsatz als Fileserver". [S.l. : s.n.], 1996. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10324483.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Yi, Sheng. "Earnings Management to Achieve the Peer Performance Benchmark". FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2619.

Texto completo
Resumen
Beyond the three extensively researched earnings thresholds of avoiding earnings declines, avoiding negative earnings and avoiding negative earnings surprises (Burgstahler and Dichev 1997; Degeorge, Patel, and Zeckhauser 1999), peer performance is an additional threshold that is often mentioned in news reports, compensation contracts and analysts' reports, while largely ignored in academic research. Thus, I examine whether firms manage earnings to achieve peer performance. First, I examine accruals-based earnings management to achieve peer performance. The empirical results show that firms exhibit more income-increasing accruals management in the current year under the following situations: 1) when a firm's prior year performance is below that of its peer group; 2) when a firm's average performance over the prior two years is below that of its peer group; 3) when a firm's expected performance is below its peer group's expected performance. In addition, firms with cumulative performance lower than that of their peer group through the first three quarters of the fiscal year exhibit more upward accruals management in the fourth quarter. Second, I investigate real activities manipulation to achieve peer performance. The empirical results show that firms exhibit more income-increasing real activities manipulation in the current year under the following situations: 1) when a firm's prior year performance is below that of its peer group; 2) when a firm's average performance over the prior two years is below that of its peer group. Third, firms that are under pressure to achieve peer performance benchmarks tend to restate financial statements in subsequent years.
Specifically, firms in the following four situations are more likely to restate current earnings in the future: 1) the firm's prior year performance is below that of its peer group; 2) the firm's average performance over the prior two years is below that of its peer group; 3) the firm's expected performance is below that of its peer group; and 4) the firm's cumulative performance for the first three fiscal quarters is below that of its peer group. The influence of peer performance on earnings management behavior implies that relative performance evaluation can induce income-increasing earnings management and subsequent restatements.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Danielsson, Mattias. "A benchmark of algorithms for the Professor’s Cube". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168198.

Texto completo
Resumen
Rubik's Cube is a well-known puzzle that has entertained for decades. This thesis studies two relatively new phenomena: the 5×5×5 cube, also known as the Professor's Cube, and the practice of speedcubing. This report presents two existing algorithms developed by speedcubers that enable human solvers to quickly solve the puzzle. The algorithms are implemented and their results compared in order to show which one requires fewer steps to solve the puzzle. The conclusion is that it is faster to use the original Davenport algorithm than the newer version by Monroe if the number of twists is the limiting factor.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Ma, Shaohua. "Development of a Web-based transaction processing benchmark". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ31615.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Naz, Sabiha. "Benchmark criticality calculations for one speed neutron transport". Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5927.

Texto completo
Resumen
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on October 17, 2007) Vita. Includes bibliographical references.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Arens, Kai. "Numerische Berechnung thermoakustischer Instabilität einer 3D-Benchmark-Brennkammer /". Düsseldorf : VDI-Verl, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015055872&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Andrén, August y Patrik Hagernäs. "Data-parallel Acceleration of PARSEC Black-Scholes Benchmark". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128607.

Texto completo
Resumen
The way programmers have relied on processor improvements to gain speedup in their applications is no longer applicable in the same fashion. Programmers usually have to parallelize their code to utilize the CPU cores in the system to gain a significant speedup. To accelerate parallel applications further, a couple of techniques are available. One technique is to vectorize some of the parallel code. Another is to move parts of the parallel code to the GPGPU and utilize this highly multithreaded unit of the system. The main focus of this report is to accelerate the data-parallel workload Black-Scholes of the PARSEC benchmark suite. We compare three accelerations of this workload: using vector instructions in the CPU, using the GPGPU, and using a combination of both. The two fundamental aspects are the speedup obtained and the programming effort each technique requires. To accelerate with vectorization in the CPU we use SSE & AVX, and to accelerate the workload on the GPGPU we use OpenACC.
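The data-parallel kernel in question can be sketched array-at-a-time: the Black-Scholes call price formula evaluated over whole arrays of options, which is the same elementwise arithmetic the SSE/AVX version maps onto SIMD lanes. This NumPy sketch is illustrative, not the PARSEC implementation; the array sizes and parameters are invented.

```python
# Vectorized Black-Scholes European call pricing, array-at-a-time.
import math
import numpy as np

_erf = np.vectorize(math.erf)  # elementwise erf; base NumPy has no erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + _erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # closed-form Black-Scholes call price; all arithmetic broadcasts
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm_cdf(d1) - K * np.exp(-r * T) * norm_cdf(d2)

S = np.full(1000, 100.0)             # spot prices
K = np.linspace(80.0, 120.0, 1000)   # strikes
prices = bs_call(S, K, T=1.0, r=0.05, sigma=0.2)
print(prices.shape)  # one price per option, computed in bulk
```

The hand-vectorized CPU version replaces this broadcasting with explicit SSE/AVX intrinsics, and the OpenACC version offloads the same loop body to the GPGPU.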
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Randall, Ryan Nicole. "Experimental phylogenetics: a benchmark for ancestral sequence reconstruction". Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/48998.

Texto completo
Resumen
The field of molecular evolution has benefited greatly from the use of ancestral sequence reconstruction as a methodology to better understand the molecular mechanisms associated with functional divergence. The method of ancestral sequence reconstruction has never been experimentally validated despite the method being exploited to generate high profile publications and gaining wider use in many laboratories. The failure to validate such a method is a consequence of 1) our inability to travel back in time to document evolutionary transitions and 2) the slow pace of natural evolutionary processes that prevent biologists from ‘witnessing’ evolution in action (pace viruses). In this thesis research, we have generated an experimentally known phylogeny of fluorescent proteins in order to benchmark ancestral sequence reconstruction methods. The tips/leaves of the fluorescent protein experimental phylogeny are used to determine the performances of various ASR methods. This is the first example of combining experimental phylogenetics and ancestral sequence reconstruction.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Okehielem, Nelson. "A benchmark for impact assessment of affordable housing". Thesis, University of Wolverhampton, 2011. http://hdl.handle.net/2436/138925.

Texto completo
Resumen
There is a growing recognition in the built environment of the significance of benchmarking. It is recognized as a key driver for measuring success criteria in the built environment sector. In spite of the wide application of this technique in this and other sectors, very little is known of it in the affordable housing sub-sector, and where it has been used, components of housing quality were not holistically considered. This study addresses this deficiency by developing a benchmark for assessing affordable housing quality impact factors. As part of this study, samples of 4 affordable housing projects were examined, two each originally selected under 5 categories of 'operational quality standards' within the United Kingdom. Samples of 10 projects were extracted from a total of 80 identified UK affordable housing projects. An investigative study was conducted on these projects, showing varying impact factors and constituent parameters responsible for their quality. The impact criteria identified in these projects were mapped against a unifying set standard and weighted with a 'relative importance index'. Adopting the quality function deployment (QFD) technique, a quality matrix was developed from these quality standards groupings with their impact factors. An affordable housing quality benchmark and a related toolkit evolved from the resultant quality matrix of the project case studies and a questionnaire served on practitioners' performance. Whereas the toolkit was empirically tested for reliability and construct validity, the benchmark was subjected to refinement with the use of a project case study.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

麥志華 y Chi-wah Mak. "Nas benchmark evaluation of HKU cluster of workstations". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B29872984.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Carter, Kirk. "An AI performance benchmark for the ncube 2". Honors in the Major Thesis, University of Central Florida, 1993. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/110.

Full text
Abstract
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf. You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering
Electrical Engineering
APA, Harvard, Vancouver, ISO, etc. styles
45

Milne, Andrew Steven. "A benchmark fault coverage metric for analogue circuits". Thesis, University of Huddersfield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285669.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
46

Peters, Teresa Baker 1981. "Finite element comparison for a geologically motivated benchmark". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/114317.

Full text
Abstract
Thesis: S.B., Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2003.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 22).
Geologic deformation in three dimensions can be modeled using finite element analysis. In choosing the elements used to solve a model it is important to consider the accuracy of the solution and the computational intensity. The results for models using six element types and six element side lengths are compared for the accuracy of the displacements calculated by the solution and the number of nodes required, as a proxy for computational intensity. Elements that allow higher-order solutions are much more accurate than elements that only allow linear interpolation of the stresses and displacements between nodes; however, the number of nodes required is five times greater. Free-form meshes do not significantly improve the performance of tetrahedra for the models tested, but could be accurate enough to model curved problem geometries. Comparisons for other models, such as a thrust fault, can be made using a two-dimensional simplification of the three-dimensional problem. If three-dimensional comparisons are required, it is important to choose a model that has an analytical solution.
by Teresa Baker.
S.B.
APA, Harvard, Vancouver, ISO, etc. styles
47

Bordin, Maycon Viana. "A benchmark suite for distributed stream processing systems". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/163441.

Full text
Abstract
A datum by itself has no value unless it is interpreted, contextualized, and aggregated with other data; only then does it acquire value and become information. In some classes of applications the value lies not only in the information but also in the speed with which that information is obtained. High-frequency trading (HFT) is a good example, where profitability is directly proportional to latency (LOVELESS; STOIKOV; WAEBER, 2013). With the evolution of hardware and data processing tools, many applications that once took hours to produce results must now produce them in minutes or seconds (BARLOW, 2013). Besides the need for real-time or near-real-time processing, this type of application is characterized by the continuous ingestion of large and unbounded amounts of data in the form of tuples or events. The growing demand for applications with these requirements led to the creation of systems that provide a programming model abstracting details such as scheduling, fault tolerance, processing, and query optimization. These systems are known as Stream Processing Systems (SPS), Data Stream Management Systems (DSMS) (CHAKRAVARTHY, 2009), or Stream Processing Engines (SPE) (ABADI et al., 2005). Recently these systems have adopted a distributed architecture as a way to handle ever-increasing amounts of data (ZAHARIA et al., 2012). Among them are S4, Storm, Spark Streaming, Flink Streaming, and more recently Samza and Apache Beam. These systems model data processing as a dataflow graph, with vertices representing operators and edges representing data streams. The similarities do not go much further, however, since each system has its own particularities regarding fault-tolerance and recovery mechanisms, operator scheduling and parallelism, and communication patterns.
In this scenario it would be useful to have a tool for comparing these systems under different workloads, to help select the most suitable platform for a specific job. This work proposes a benchmark composed of applications from different areas, as well as a framework for the development and evaluation of distributed SPSs.
Recently a new application domain characterized by the continuous and low-latency processing of large volumes of data has been gaining attention. The growing number of applications of this kind has led to the creation of Stream Processing Systems (SPSs), systems that abstract the details of real-time applications from the developer. More recently, the ever-increasing volumes of data to be processed gave rise to distributed SPSs. There are currently several distributed SPSs on the market, but the existing benchmarks designed for evaluating this kind of system cover only a few applications and workloads, while these systems support a much wider set of applications. In this work a benchmark for stream processing systems is proposed. Based on a survey of several papers on real-time and stream applications, the most used applications and areas were outlined, as well as the metrics most used in the performance evaluation of such applications. With this information the metrics of the benchmark were selected, along with a list of possible applications to be part of the benchmark; these went through a workload characterization in order to select a diverse set of applications. To ease the evaluation of SPSs, a framework was created with an API to generalize application development and collect metrics, with the possibility of extending it to support other platforms in the future. To prove the usefulness of the benchmark, a subset of the applications was executed on Storm and Spark using the Azure platform, and the results demonstrated the usefulness of the benchmark suite in comparing these systems.
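The dataflow model described in this abstract, where vertices are operators and edges are data streams, can be illustrated with a toy linear pipeline. This is a minimal self-contained sketch, not the thesis framework or any real SPS API; the operator names are hypothetical:

```python
from collections import Counter

class Tokenize:
    """Operator vertex: splits each incoming line into word tuples."""
    def process(self, line):
        yield from line.lower().split()

class WordCount:
    """Operator vertex: emits a running (word, count) tuple per word."""
    def __init__(self):
        self.counts = Counter()
    def process(self, word):
        self.counts[word] += 1
        yield (word, self.counts[word])

def _edge(op, stream):
    # An edge: lazily feeds each tuple from the upstream stream into `op`.
    for item in stream:
        yield from op.process(item)

def run(source, operators):
    """Execute a linear dataflow graph over a finite source of tuples."""
    stream = iter(source)
    for op in operators:
        stream = _edge(op, stream)
    return list(stream)

results = run(["to be or not to be"], [Tokenize(), WordCount()])
```

Real distributed SPSs such as Storm or Flink differ precisely in what this toy omits: operator parallelism, scheduling across machines, and fault-tolerant state, which is what a benchmark of this kind sets out to compare.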
APA, Harvard, Vancouver, ISO, etc. styles
48

Souza, De Oliveira Roberto. "Méthode d'élaboration d'un benchmark pour les SGBD relationnels". Toulouse, ENSAE, 1987. http://www.theses.fr/1987ESAE0012.

Full text
Abstract
The study of a system's workload, its performance, and its basic algorithms relies heavily on the application of mathematical tools. For Relational Database Management Systems (RDBMS), data analysis provides tools that prove very important in the development of such studies. By applying classification methods to the attributes belonging to the relations of a database schema, the workload of the RDBMS (due to that database) is defined in terms of the representations of the operands (attributes, groups of attributes, relations) composing its queries. This method was applied to a real database evolving over time, and a synthesis of the results obtained is presented.
APA, Harvard, Vancouver, ISO, etc. styles
49

Kimer, Tomáš. "Benchmark pro zařízení s podporou OpenGL ES 3.0". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236024.

Full text
Abstract
This thesis deals with the development of a benchmark application for OpenGL ES 3.0 devices using realistic real-time rendering of 3D scenes. The first part covers the history and new features of the OpenGL ES 3.0 graphics library. The next part briefly describes selected algorithms for realistic real-time rendering of 3D scenes that can be implemented using the new features of the discussed library. The design of the benchmark application is covered next, including the design of an online result database containing detailed device specifications. The last part covers the implementation on the Android and Windows platforms and the testing on mobile devices after publishing the application on Google Play. Finally, the results and possibilities of further development are discussed.
APA, Harvard, Vancouver, ISO, etc. styles
50

Guha, Subharup. "Benchmark estimation for Markov Chain Monte Carlo samplers". The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1085594208.

Full text
APA, Harvard, Vancouver, ISO, etc. styles