
Dissertations / Theses on the topic 'Tire load'



Consult the top 50 dissertations / theses for your research on the topic 'Tire load.'




1

Dhasarathy, Deepak. "Estimation of vertical load on a tire from contact patch length and its use in vehicle stability control." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/33559.

Full text
Abstract:
The vertical load on a moving tire was estimated using accelerometers attached to the tire's inner liner. The acceleration signal was processed to obtain the length of the contact patch created by the tire on the road surface, and an appropriate equation relating patch length to vertical load was then used to calculate the load. To obtain the needed data, tests were performed on a flat-track test machine at the Goodyear Innovation Center in Akron, Ohio; tests were also conducted on the road using a trailer setup at the Intelligent Transportation Laboratory in Danville, Virginia. During the tests, a number of different loads were applied; the tire-wheel setup was run at different speeds with the tire inflated to two different pressures. Tests were also conducted with camber applied to the wheel. An algorithm was developed to estimate load from the collected data.
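The patch-length-to-load step described above can be sketched as follows. The calibration coefficients, acceleration threshold, and function names are hypothetical placeholders for illustration, not values or methods from the thesis; real coefficients would be fitted from flat-track test data.

```python
import numpy as np

# Hypothetical calibration: vertical load (N) as a quadratic in contact
# patch length (m). Coefficients are illustrative, not fitted values.
def estimate_load(patch_len_m, coeffs=(0.0, 2.0e4, 5.0e5)):
    a0, a1, a2 = coeffs
    return a0 + a1 * patch_len_m + a2 * patch_len_m ** 2

# The contact patch shows up as the fraction of a wheel revolution where
# the radial acceleration collapses; its length is that fraction of the
# tire circumference. The threshold is an illustrative value.
def patch_length_from_accel(radial_accel, circumference_m, threshold=-0.5):
    in_patch = np.asarray(radial_accel) < threshold
    return circumference_m * in_patch.mean()
```

In this sketch, one revolution of inner-liner acceleration samples yields a patch length, which the calibration curve converts to a load estimate.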

It was then shown how the estimated load could be used in a control algorithm that applies a suitable control input to maintain the yaw stability of a moving vehicle. A two degree of freedom bicycle model was used for developing the control strategy. A linear quadratic regulator (LQR) was designed for the purpose of controlling the yaw rate and maintaining vehicle stability.
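An LQR design for a two degree of freedom bicycle model, as described above, can be sketched as below. All vehicle parameters and weighting matrices are illustrative placeholders, not the thesis's vehicle data or tuning.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder bicycle-model parameters: mass (kg), yaw inertia (kg m^2),
# CG-to-axle distances (m), cornering stiffnesses (N/rad), speed (m/s).
m, Iz, lf, lr, Cf, Cr, vx = 1500.0, 2500.0, 1.2, 1.6, 8.0e4, 8.0e4, 20.0

# States: sideslip angle and yaw rate; input: front steering angle.
A = np.array([
    [-(Cf + Cr) / (m * vx), (lr * Cr - lf * Cf) / (m * vx ** 2) - 1.0],
    [(lr * Cr - lf * Cf) / Iz, -(lf ** 2 * Cf + lr ** 2 * Cr) / (Iz * vx)],
])
B = np.array([[Cf / (m * vx)], [lf * Cf / Iz]])

Q = np.diag([1.0, 10.0])  # weight yaw-rate error more heavily
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # control law: u = -K x
```

With any stabilizable (A, B) pair, the resulting closed-loop matrix A - BK has eigenvalues in the left half-plane, which is what maintains yaw stability.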
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
2

Hlavatý, Jiří. "Měření tvaru zatížené pneumatiky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231715.

Full text
Abstract:
This thesis focuses on measuring the shape of a loaded tire and finding dependencies between inner tire pressure, load, and the resulting shape of the tire. Data for these dependencies were obtained using a purpose-built measuring stand and 3D optical technology. The dependencies found describe the change in tire shape with specific mathematical functions and served as the basis for a parametric model of the tire. The main finding of this thesis is that the tire behaves according to dependencies described by polynomial functions of varying degree.
3

Trinkūnas, Aistis. "Padangų riedėjimo pasipriešinimo lauko sąlygomis tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130621_141605-60535.

Full text
Abstract:
The Master's thesis presents findings of a study of tire rolling resistance under field conditions with different types of support surface (grassland, stubble, dry gravel road), different vertical tire loads (from 1.4 kN to 5.9 kN) and different tire inflation pressures (from 0.5 bar to 2.0 bar). The object of the thesis is the tire rolling resistance coefficient f and the change of its value with different tire, load and support-surface parameters. The study was carried out using the tire BELCHINA 7.50L – 16 ФБел – 253. Methods: to carry out the tire study, a mobile test stand was designed, manufactured and attached to the self-propelled chassis T–16M. Results: air pressure in the tire and the magnitude of the vertical load influenced the rolling resistance coefficient on all support surfaces (grassland, stubble, dry gravel road). As tire pressure decreased, the rolling resistance force was directly proportional to the vertical load and inversely proportional to the tire pressure. The lowest rolling resistance force Fp on grassland was obtained at a pressure of 1.0 bar and a vertical load of 1.4 kN (Fp = 14.375 kN). On stubble it occurred at a pressure of 0.5 bar and a load of 1.4 kN (Fp = 22.283 kN). When the test was performed on dry gravel road, the minimal rolling resistance force Fp occurred when the tire carried a vertical load of 1.4 kN and an air pressure of... [to full text]
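The proportionality reported in the abstract can be written as a one-line sketch. The coefficient k is a hypothetical constant for illustration, not a value fitted in the thesis.

```python
# Hedged sketch: rolling resistance force grows with vertical load and
# falls with inflation pressure (k is an illustrative constant).
def rolling_resistance_force(load_kN, pressure_bar, k=1.0):
    return k * load_kN / pressure_bar

# Lower pressure at the same load yields a higher resistance force.
low_p = rolling_resistance_force(1.4, 0.5)
high_p = rolling_resistance_force(1.4, 2.0)
```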
4

Öhrling, Emil. "Brandrisker i däckhotell : Är samhällets krav på byggnadstekniskt brandskydd tillräckligt?" Thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-83256.

Full text
Abstract:
The aim of the thesis is to investigate the risks that exist in case of fire in tire hotels and to evaluate whether society's requirements for fire protection in buildings are sufficient to manage this level of risk. Society's requirements in the study are Boverket's building regulations, BFS 2011:6 with amendments up to BFS 2020:4 (BBR), and the requirements that apply in accordance with simplified design. The requirements in BBR have been quantified to enable a comparison between BBR and the actual conditions found in tire hotels. The study treats only three of the five items on which BBR is based: development and spread of fire and smoke within the construction works shall be limited, spread of fire to adjacent construction works shall be limited, and consideration shall be given to the rescue team's safety in case of fire. Some experience of fires in tire storage exists, but there is no general picture of how a tire hotel should be designed according to the building regulations, nor of whether current practice is compatible with the regulations' intention or the rescue services' practical experience of carrying out an operation. How, then, should fire protection be designed in tire hotels so that society's requirements are fulfilled, when thousands of tires may be stored in one tire hotel at the same time? The method was to carry out several analyses based on real tire hotels, together with research and studies in the area, so that the questions could be answered quantitatively and/or qualitatively. All questions, however, required assumptions to be answered. To ground these assumptions, a case study of real tire hotels was conducted, and five geometric models were created based on the buildings' volume, construction materials and ventilation openings. The case study also showed a great variation in the number of tires stored in the hotels.
Even with this variation, it can be stated that the fire load in a tire hotel exceeds 1600 MJ/m2 of floor area. The building's construction material has a large impact on the fire temperature in the room. Tire hotels of concrete construction provide better conditions for the fire-separating components to maintain their limiting function than a construction of metal sheets with an insulation core. A fire compartment boundary exposed to the temperature rise that occurs in a metal structure may not limit the spread of fire to other rooms for the intended time. BBR specifies two protective barriers to limit fire spread to adjacent buildings: safety distances, or designing an exterior wall as a fire compartment boundary, where the boundary's function depends on the building's construction material. For a safety distance to work, openings in the facade must be limited and no larger than a normal garage door; otherwise the purpose of protecting adjacent buildings is not fulfilled. The safety distance should be related to the area of possible openings instead of being a fixed value; if it remains fixed, the area of openings should be regulated, or the critical radiation level that may occur on an adjacent building should be limited. The examination of the rescue team's safety compared BBR with interviews on how a rescue operation could be carried out: what risks the fire and the building pose, and how these risks can affect the operation. When fire protection is designed according to simplified design, fire-technical arrangements that account for the rescue team's safety do not exist to the required extent.
Without early detection, the risk is imminent that the fire becomes too large for indoor fire hydrants to be used for containment. The single most important arrangement for the rescue team's safety, however, is to ensure access to a sufficient volume of extinguishing water at the building. Tire hotels placed in containers are the only building geometry that can be designed according to simplified design. This storage method provides the best opportunity for a successful rescue operation with a low level of risk, and containers are the only geometric model where fire compartment boundaries would clearly fulfill their purpose in both class EI 30 and EI 60, since containers normally have no windows or other equivalent openings.
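The fire-load threshold stated in the abstract can be illustrated with a back-of-envelope density check. The tire count, energy content per tire, and floor area below are assumptions chosen for illustration only, not figures from the thesis.

```python
# Assumed values: number of stored tires, energy content per tire (MJ),
# and floor area (m2). All three are illustrative, not thesis data.
tires = 2000
mj_per_tire = 1100.0
floor_area_m2 = 500.0

# Fire load density in MJ per m2 of floor area.
q = tires * mj_per_tire / floor_area_m2
```

Even with conservative assumptions, a packed tire store easily clears the 1600 MJ/m2 level the abstract reports.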
5

Ricker, Timothy J. Cowan Nelson. "Cognitive load and time based forgetting." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/6470.

Full text
Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on Feb 18, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Thesis advisor: Dr. Nelson Cowan. Includes bibliographical references.
6

SILVA, HELIO FRANCISCO DA. "ON ADDRESSING IRREGULARITIES IN ELECTRICITY LOAD TIME-SERIES AND SHORT TERM LOAD FORECASTING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2001. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=1737@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
As a result of the continuing privatization process within the energy sector, electricity load forecasting is a critical tool for decision-making in the industry. Reliable forecasts are now needed not only for developing strategies for business planning and short-term operational scheduling, but also to define the spot-market electricity price. The forecasting process is data-intensive, and interest has been driven to shorter and shorter intervals. Large investments are being made in modernizing and improving metering systems, so as to make more data available to the forecaster. However, the forecaster is still faced with irregular time series. Gaps, missing values, spurious information or repeated values in the series can result from transmission errors or small failures in the recording process. These so-called irregularities have led to research focused either on iterative processes, like the Kalman filter and the EM algorithm, or on applications of the statistical literature on the treatment of missing values and outliers. Nevertheless, these methods often produce large forecast errors when confronted with consecutive failures in the data. Moreover, minute-by-minute series contain so many points that the one-day-ahead forecast horizon becomes too large to handle with conventional methods. In this context, we propose an alternative way to detect and replace values, and present a methodology for forecasting that uses other information in the time series relating to variability and seasonality, which are commonly encountered in electricity load-forecasting data. We illustrate the method as part of a wider project that aims at developing an automatic online system for tracking the operation of the Brazilian interlinked electric network and performing short-term load forecasting. The data were collected by ONS / ELETROBRAS - Brazil.
We concentrate on 10-minute data for the years 1997-1999 from Light Serviços de Eletricidade S.A. (Rio de Janeiro and its surroundings).
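A simple seasonal-profile repair of long gaps, as an alternative to the iterative Kalman/EM approaches mentioned above, might look like the sketch below. The period and the averaging rule are illustrative choices, not the thesis's actual method.

```python
import numpy as np

def repair_series(y, period):
    """Replace NaN values with the mean of same-phase observations.

    For load data with strong periodicity, each missing point is filled
    with the average of all valid points at the same phase of the cycle,
    which still works across long consecutive gaps.
    """
    y = np.asarray(y, dtype=float).copy()
    for phase in range(period):
        idx = np.arange(phase, len(y), period)
        vals = y[idx]
        good = ~np.isnan(vals)
        if good.any():
            vals[~good] = vals[good].mean()
            y[idx] = vals
    return y
```

For 10-minute data with a daily cycle, `period` would be 144; the example below uses a tiny period of 2 only to show the mechanics.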
Las alteraciones en la legislación del Sector de Energía Elétrica Brasilero a finales del milenio pasado, provocó profundos cambios en el planificación de la Operación del Sistema y en la Comercialización de energía eléctrica en Brasil. La desarticulación de las actividades de generación, de transmisión y de distribuición de energía eléctrica creó nuevas características en el comportamiento de los Agentes Concesionarios. Así, las previsiones de demanda por energía eléctrica, que siempre fueron una herramienta importante, por ejemplo, en la programación de la operación, pasaron a ser indispensables también en la comercialización de energía eléctrica en el mercado libre. En este nuevo escenario, la obtención y almacenamiento de datos confiables pasó a ser parte integrante del patrimonio de las Empresas y un sistema eficiente de previsiones de carga constituye un diferencial en la mesa de negociaciones. Los Agentes concesionarios y el Operador Nacional del Sistema Eléctrico han invertido en el perfeccionamiento de sus sistemas de adquisición de datos. Sin embargo, en sistemas de multipuntos algunas fallas imprevistas durante la sincronización de la telemedición pueden ocurrir, provocando defectos en las series. En las series de minuto en minuto, por ejemplo, una falla de algunas horas trae consigo centenas de registros defectuosos y las principales publicaciones sobre modelos de series temporales para tratamiento de datos no abordan las dificuldades encontradas frente a grandes fallas consecutivas en los datos.
7

El-Khatib, Khalil M. "Dynamic load balancing for clustered time warp." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=27311.

Full text
Abstract:
In this thesis, we consider the problem of dynamic load balancing for parallel discrete event simulation. We focus on the optimistic synchronization protocol, Time Warp.
A distributed load balancing algorithm was developed, which makes use of the active process migration in Clustered Time Warp. Clustered Time Warp is a hybrid synchronization protocol; it uses an optimistic approach between the clusters and a sequential approach within the clusters. As opposed to the centralized algorithm developed by H. Avril for Clustered Time Warp, the presented load balancing algorithm is a distributed token-passing one.
We present two metrics for measuring the load: processor utilization and processor advance simulation rate. Different models were simulated and tested: VLSI models and queuing network models (pipeline and distributed networks). Results show that improving the performance of the system depends a great deal on the nature of the simulated model.
For the VLSI model, we also examined the effect of the dynamic load balancing algorithm on the total number of processed messages per unit time. Performance results show that by dynamically balancing the load, the throughput of the simulation was improved by more than 100%.
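The distributed token-passing idea described above can be sketched as follows. The migration rule and metric are hypothetical simplifications for illustration, not the algorithm's exact details.

```python
# Sketch: a token circulates the ring of hosts, recording each host's
# load metric (e.g. processor utilization). When it returns, the holder
# plans a migration of a cluster from the heaviest to the lightest host.
def plan_migration(loads):
    src = max(range(len(loads)), key=lambda i: loads[i])
    dst = min(range(len(loads)), key=lambda i: loads[i])
    return (src, dst) if loads[src] > loads[dst] else None
```

For example, `plan_migration([0.9, 0.4, 0.2])` would suggest moving work from host 0 to host 2, while a perfectly balanced system yields no migration.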
8

El-Khatib, Khalil M. "Dynamic load balancing for clustered time warp." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0003/MQ29686.pdf.

Full text
9

Soon, Wilson Wei-Chwen. "Near real-time extract, transform and load." [Denver, Colo.] : Regis University, 2007. http://165.236.235.140/lib/WSoon2007.pdf.

Full text
10

Huang, Simon. "Load time optimization of JavaScript web applications." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17931.

Full text
Abstract:
Background. Websites are getting larger each year: the median page size increased from 1479.6 kilobytes to 1699.0 kilobytes on desktop and from 1334.3 kilobytes to 1524.1 kilobytes on mobile. Several methods can be used to decrease the size; this experiment uses tree shaking, code splitting, gzip, bundling, and minification. Objectives. To investigate how each method separately affects loading time, and to conduct a survey of participants who work as JavaScript developers in the field. Methods. Vue was used to create a website, and Lighthouse tests were run against it, all within two Docker containers to ease reproducibility. Interviews with JavaScript developers were conducted to find out whether they use these methods in their work. Results. The best result comes from using all of the methods in combination: gzip, minification, tree shaking, code splitting, and bundling. If gzip is the only option available to the developer, loading time decreases by around 60%. The interviews showed that most developers did not use, or did not know of, tree shaking and code splitting. Some frameworks have these methods built in to work automatically, so developers may not know they are being utilized. Conclusions. Since tree shaking and code splitting are two relatively new techniques, few scientifically measured values are available. From the results, we conclude that using all of the mentioned methods gives the best loading time. All of the methods affect loading time, and using gzip alone decreases it by about 60%.
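The large gzip effect reported above follows from how repetitive JavaScript text is. The snippet below is a rough, self-contained illustration using synthetic source text, not the thesis's actual bundle.

```python
import gzip

# JavaScript bundles are highly repetitive text, so gzip routinely
# shrinks them by well over half. Synthetic source for illustration:
source = b"function add(a, b) { return a + b; }\n" * 200
compressed = gzip.compress(source)
ratio = len(compressed) / len(source)  # far below 1.0 for repetitive text
```

In practice a web server applies this transparently (`Content-Encoding: gzip`), so the browser downloads the compressed bytes and decompresses them before parsing.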
11

Ghosh, Sushmita. "Real time data acquisition for load management." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/45726.

Full text
Abstract:
Demand for data transfer between computers has increased ever since the introduction of the personal computer (PC). Data communication on the personal computer is much more productive, as it is an intelligent terminal that can connect to various hosts on the same I/O hardware circuit as well as execute processes on its own as an isolated system. Yet the PC on its own is useless for data communication: it requires a hardware interface circuit and software for controlling the handshaking signals and setting up communication parameters. Often the data are distorted by noise in the line; such transmission errors are embedded in the data and require careful filtering. The thesis deals with the development of a data acquisition system that collects real-time load and weather data and stores them as a historical database for use in a load forecast algorithm in a load management system. A filtering technique has been developed that checks for transmission errors in the raw data. The microcomputers used in this development are the IBM PC/XT and the AT&T 3B2 supermicro computer.
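A minimal transmission-error filter of the kind described might flag samples that deviate sharply from a local median. The window size and threshold below are illustrative choices, not the thesis's filtering technique.

```python
import numpy as np

def despike(x, window=5, k=3.0):
    # Replace any sample deviating more than k * MAD from the local
    # median with that median: a crude transmission-error filter.
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        med = np.median(x[lo:hi])
        mad = np.median(np.abs(x[lo:hi] - med)) or 1.0  # guard zero MAD
        if abs(x[i] - med) > k * mad:
            x[i] = med
    return x
```

A lone spike of 100 in an otherwise flat load series is rejected, while genuine slow variation passes through because it moves the local median too.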
Master of Science
12

Zimmerman, Nicole P. "Time-Variant Load Models of Electric Vehicle Chargers." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2297.

Full text
Abstract:
In power distribution system planning, it is essential to understand the impacts that electric vehicles (EVs), and the non-linear, time-variant loading profiles associated with their charging units, may have on power distribution networks. This research presents a design methodology for the creation of both analytical and behavioral models for EV charging units within a VHDL-AMS simulation environment. Voltage and current data collected from Electric Avenue, located on the Portland State University campus, were used to create harmonic profiles of the EV charging units at the site. From these profiles, generalized models for both single-phase (Level 2) and three-phase (Level 3) EV chargers were created. Further, these models were validated within a larger system context utilizing the IEEE 13-bus distribution test feeder system. Results from the model's validation are presented for various charger and power system configurations. Finally, an online tool that was created for use by distribution system designers is presented. This tool can aid designers in assessing the impacts that EV chargers have on electrical assets, and assist with the appropriate selection of transformers, conductor ampacities, and protection equipment & settings.
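A harmonic profile of a charger current like those described above can be sketched with an FFT. The waveform, sample rate, and harmonic amplitudes below are synthetic placeholders, not data collected at Electric Avenue.

```python
import numpy as np

fs, f0 = 3840.0, 60.0               # sample rate (Hz), fundamental (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)   # one second of synthetic current
i = (np.sin(2 * np.pi * f0 * t)
     + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)   # 3rd harmonic
     + 0.1 * np.sin(2 * np.pi * 5 * f0 * t))  # 5th harmonic

# With exactly 1 s of data, bin n of the rFFT sits at n Hz.
spec = np.abs(np.fft.rfft(i)) / (len(i) / 2)  # per-component amplitudes
fund = spec[60]                                # fundamental amplitude
thd = np.sqrt(spec[180] ** 2 + spec[300] ** 2) / fund  # 3rd + 5th only
```

A behavioral charger model would replay such a harmonic profile as a current source at each harmonic frequency when attached to a distribution feeder model.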
13

Dädeby, Oskar. "Dynamic Blast Load Analysis using RFEM : Software evaluation." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-84784.

Full text
Abstract:
The purpose of this Master's thesis is to evaluate the RFEM software and determine whether it can be used for dynamic analyses of blast loads from explosions. Determining the blast resistance of a structure is a growing market, so it would benefit Sweco Eskilstuna if RFEM could be used for this type of work. The verification compared the RFEM software against a real experiment consisting of a set of blast-tested reinforced concrete beams. Using the structural properties and setup from the experiment, the same structure was replicated in RFEM. RFEM then simulated a dynamic analysis with the same dynamic load measured in the experiment, in two different dynamic load cases caused by two differently sized explosions. The structural response from the experiment, consisting of displacement and acceleration time diagrams, could then be compared with the response simulated by RFEM. By analysing the displacement and acceleration of both the experiment and the RFEM simulation, the accuracy was determined, along with how well RFEM performed the analysis for this specific situation. The comparison was considered acceptable if the maximum displacement was consistent with the experimental result within the same time frame, and if the initial acceleration was consistent with the experimental result. These criteria needed to be met to verify that RFEM can simulate a dynamic analysis. If the software managed to complete a dynamic analysis for two dynamic load cases, the software could then be evaluated on whether post-blast effects could be determined and whether the modelling method was reliable.
The acceleration from RFEM was in good agreement with the experimental test in the initial part of the blast, reaching a close comparison for both load cases after 3 ms; thereafter the RFEM acceleration behaved chaotically, with no similarity for the remainder of the blast. The displacement achieved a close match to the maximum displacement, within a margin of 0.5 mm for both load cases and within a 1 ms time margin. In conclusion, RFEM managed to simulate a blast load analysis; the displacement and acceleration gave acceptable results according to the criteria. The chosen method achieved a fast simulation, and the same model complying with two different load cases indicated that the first result was not a coincidence. The steps in the modelling method were straightforward, but two parameters were found to reduce reliability. The first was the material model chosen for the concrete, a plastic material model: the two alternatives, linear elastic and non-linear elastic, both caused failed simulations. A better choice would have been a diagram model ensuring that the concrete loses its tensile capacity at maximum capacity, but this was not available in a dynamic analysis with multiple load increments, which is why a plastic material model was chosen. The second was the movement of the beam in the supports. This was not recorded in the experiment but was determined to be a contributing factor, and it produced large differences in the result depending on how much the beam could move. In the end, the settings giving the best possible result for the first load case were chosen, and the same RFEM model was used in the second test.
The second load case showed results just as good as the first, but the large variation in results depending on the movement of the beam at the supports left this part unclear. For the evaluation, the question of whether RFEM could provide a post-blast analysis needed to be addressed; the answer is no. The failure mode was chosen to comply with the modelling method, which required analysis of the plastic strain in the reinforcement bars. This information was not available through the add-on module DYNAM-PRO, which therefore could not answer whether the modelled structure resisted the blast. Future work following this Master's thesis would be to build a model that gives a more detailed post-blast analysis, as this thesis was made to test the software. For this, more work would be necessary by the creators, Dlubal, to further improve the add-on module, including more extractable results and more detailed tools for dynamic load cases, since some important functionality is only usable in a static load case. Other than that, RFEM managed to complete the dynamic analysis, and with further improvement of the modelling method a more detailed analysis could be made and used in real projects in the future.
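A dynamic blast analysis of the kind described can be reduced to a minimal sketch: an idealized single-degree-of-freedom model of the beam, integrated with the central-difference method under a triangular blast pulse. All parameter values below are illustrative assumptions, not taken from the experiment or from RFEM.

```python
# Minimal SDOF model of a beam under a blast load, integrated with a
# central-difference scheme (backward-difference velocity). Illustrative
# parameter values only; not the RFEM model described above.

def sdof_blast_response(mass, stiffness, damping, load, dt):
    """Return the displacement history u(t) for a load history p(t)."""
    u_prev, u = 0.0, 0.0
    history = [u]
    for p in load[1:]:
        # m*a + c*v + k*u = p(t), with v approximated by (u - u_prev)/dt
        a = (p - damping * (u - u_prev) / dt - stiffness * u) / mass
        u_next = 2 * u - u_prev + a * dt * dt
        u_prev, u = u, u_next
        history.append(u)
    return history

def triangular_pulse(peak, t_d, dt, n_steps):
    """Idealized blast pulse: peak force decaying linearly to zero at t_d."""
    return [max(0.0, peak * (1 - i * dt / t_d)) for i in range(n_steps)]
```

For a short pulse relative to the natural period, the peak displacement approaches the impulsive-load limit, which makes a quick sanity check of the integration possible.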
APA, Harvard, Vancouver, ISO, and other styles
14

Nigrini, L. B., and G. D. Jordaan. "Short term load forecasting using neural networks." Journal for New Generation Sciences, Vol 11, Issue 3: Central University of Technology, Free State, Bloemfontein, 2013. http://hdl.handle.net/11462/646.

Full text
Abstract:
Published Article
Several forecasting models are available for research in predicting the shape of electric load curves. Developments in Artificial Intelligence (AI), especially Artificial Neural Networks (ANNs), can be applied to model short-term load forecasting. Because of their input-output mapping ability, ANNs are well suited for load forecasting applications. ANNs have been used extensively as time series predictors; these can include feed-forward networks that make use of a sliding window over the input data sequence. Using a combination of a time series and a neural network prediction method, the past events of the load data can be explored and used to train a neural network to predict the next load point. In this study, an investigation into the use of ANNs for short-term load forecasting for Bloemfontein, Free State, has been conducted with the MATLAB Neural Network Toolbox, where ANN capabilities in load forecasting, using only load history as input values, are demonstrated.
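The sliding-window idea described above can be sketched in a few lines: pair each window of past load values with the value that follows it, then fit a predictor to those pairs. The single linear neuron below is a simplified stand-in for the MATLAB ANN used in the study; the series values and hyperparameters are invented for illustration.

```python
# Sliding-window preparation of a load series, plus a single linear
# neuron trained by gradient descent as a minimal stand-in for an ANN.

def make_windows(series, width):
    """Pair each sliding window with the load value that follows it."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

def train_linear_neuron(pairs, lr=0.1, epochs=1000):
    """LMS-style training of one linear neuron on (window, next) pairs."""
    width = len(pairs[0][0])
    w = [0.0] * width
    b = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, window):
    return sum(wi * xi for wi, xi in zip(w, window)) + b
```

On a strictly periodic toy series the neuron learns to reproduce the cycle; real load data would of course need a hidden layer and careful validation.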
APA, Harvard, Vancouver, ISO, and other styles
15

Joshipura, Sanket Manjul. "Scalable object-based load balancing in multi-tier architectures." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106554.

Full text
Abstract:
An exponential growth in internet usage and penetration amongst the general population has led to an ever-increasing demand for e-commerce applications and other internet-based services. E-commerce applications must provide high levels of service, including reliability, low response times and scalability. Most e-commerce applications follow a multi-tier architecture. As they are highly dynamic and data-intensive, the database is often a bottleneck in the whole system: most systems deploy multiple application servers in the replicated application tier while deploying only a single database, as managing a replicated database is not a trivial task. Hence, in order to achieve scalability, caching of data at the application server is an attractive option. In this thesis, we develop effective load balancing and caching strategies for read-only transaction workloads that help scale multi-tier architectures and improve their performance. Our strategies have several special features. Firstly, they take into account statistics about the objects in the cache, such as access frequency. Secondly, our algorithms that generate the strategies, despite being object-aware, are generic in nature and thus not limited to any specific type of application. The main objective is to direct a request to an appropriate application server so that there is a high probability that the objects required to serve that request can be accessed from the cache, avoiding a database access. We have developed a whole suite of strategies, which differ in the way they assign objects and requests to application servers. We use distributed caching so as to make better utilization of the aggregate cache capacity of the application servers. Experimental results show that our strategies are promising and help to improve performance.
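One member of this strategy family can be sketched as follows: route each request to the application server whose cache already holds the most of the objects the request needs, breaking ties by current load. This is a simplified illustration of object-aware dispatching, not the thesis's actual algorithms; the class and names are hypothetical.

```python
# Object-aware request routing: prefer the server with the most cached
# objects for this request, then the least-loaded server. Caches are
# kept in MRU order with simple eviction of the oldest entries.

class AppServer:
    def __init__(self, name, cache_size):
        self.name = name
        self.cache = []              # object ids, most recently used last
        self.cache_size = cache_size
        self.load = 0

    def touch(self, objects):
        for obj in objects:
            if obj in self.cache:
                self.cache.remove(obj)
            self.cache.append(obj)
        self.cache = self.cache[-self.cache_size:]   # evict oldest

def route(servers, requested_objects):
    def score(s):
        hits = sum(1 for o in requested_objects if o in s.cache)
        return (hits, -s.load)       # prefer cache hits, then low load
    best = max(servers, key=score)
    best.load += 1
    best.touch(requested_objects)
    return best
```

Repeated requests for the same objects stick to one server's cache, while cold requests flow to idle servers, so the aggregate cache capacity is used instead of duplicated.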
APA, Harvard, Vancouver, ISO, and other styles
16

Liljeroth, Henrik. "Measuring and Analysing Execution Time in an Automotive Real-Time Application." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51691.

Full text
Abstract:

Autoliv has developed the Night Vision system, which is a safety system for use in cars to improve the driver's situational awareness during night conditions. It is a real-time system that is able to detect pedestrians in the traffic environment and issue warnings when there is a risk of collision. The timing behaviour of programs running on real-time systems is vital information when developing and optimising both hardware and software. As a part of further developing their Night Vision system, Autoliv wanted to examine detailed timing behaviour of a specific part of the Night Vision algorithm, namely the Tracking module, which tracks detected pedestrians. Parallel to this, they also wanted a reliable method to obtain timing data that would work for other parts of that system as well, or even other applications.

A preliminary study was conducted in order to determine the most suitable method of obtaining the timing data desired. This resulted in a measurement-based approach using software profiling, in which the Tracking module was measured using various input data. The measurements were performed on simulated hardware using both a cycle-accurate simulator and measurement tools from the system CPU manufacturer, as well as tools implemented specifically to handle input and output data.

The measurements resulted in large amounts of data used to compile performance statistics. Using different scenarios in the input data, we were able to obtain timing characteristics for several typical situations the system may encounter during operation. By manipulating the input data we were also able to observe general behaviour and achieve artificially high execution times, which serve as indications of how the system responds to irregular and unexpected input data.

The method used for collecting timing information was well suited for this particular project. It provided the possibility to analyse behaviour in a better way than other, more theoretical, approaches would have. The method is also easily adaptable to other parts of the Night Vision system, or other systems, with only minor adjustments to the measurement environment and tools.

APA, Harvard, Vancouver, ISO, and other styles
17

Nandorf, Joel. "Responsive Web Design – Evaluation of Techniques to Optimize Load Time." Thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-81028.

Full text
Abstract:
Responsive Web Design has, in a short time, become a common method to create websites that automatically adapt their layout to different screen sizes. However, critique has been raised about poor performance due to large page size and a high number of requests to the web server. The aim of this study is to evaluate techniques to optimize the load time of a responsive website. This is done by creating a prototype in the form of a responsive website, which is used as a base for the optimization. Tests are performed by measuring the page size and the number of requests when using four different optimization techniques. The results show that combining optimization techniques can dramatically reduce the page size and the number of requests. Consequently, this has a positive impact on the load time of the website. Furthermore, the Mobile First approach is important in responsive web design as it prioritizes the use of mobile devices and, as a result, highlights the significance of web performance. It is also suggested to set a Performance Budget early in web development projects in order to avoid slow websites and spread awareness about the importance of performance.
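A Performance Budget of the kind recommended above can be enforced with a simple build-time check: fail the build when total page weight or request count exceeds agreed limits. The thresholds and asset list below are invented for illustration.

```python
# Build-time performance budget check: returns a list of violations so a
# CI step can fail when the responsive site grows past its budget.

def check_budget(assets, max_bytes, max_requests):
    """assets: list of (name, size_in_bytes) pairs for one page load."""
    violations = []
    total = sum(size for _, size in assets)
    if total > max_bytes:
        violations.append(f"page weight {total} B exceeds {max_bytes} B")
    if len(assets) > max_requests:
        violations.append(f"{len(assets)} requests exceed limit {max_requests}")
    return violations
```

An empty return value means the page is within budget; anything else can be printed and turned into a non-zero exit code.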
APA, Harvard, Vancouver, ISO, and other styles
18

Tickoo, Neeraj. "Cache aware load balancing for scaling of multi-tier architectures." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103731.

Full text
Abstract:
To keep pace with the increasing user base and resulting processing requirements, enterprise and e-commerce applications need constant innovation in their application design and system architecture. Scalability and availability are basic principles that must be adhered to by businesses that want to retain and expand their customer base. The most popular design that provides both availability and scalability is one in which the application tier is replicated. In it, all the application servers share a single database, and to prevent the database from becoming the bottleneck in a high-volume scenario, caching layers are deployed in each application server. By serving requests from the local cache instead of going to the database, response times are reduced and the load at the database is kept low. Thus, caching is a critical component of such architectures. In this thesis, we focus on object caches at the application tier, which cache Java EE entities. Our target applications are e-commerce applications, which are database-driven and resource-intensive. In this thesis we design a cache-aware load balancing solution that makes effective use of the caching layer. This results in a more scalable application tier of a multi-tier architecture. Most of the load balancing solutions present in the literature are cache-agnostic when making the dispatching decision. Example solutions like round-robin cause duplication of the same cache content across all the application servers. In contrast, we present a cache-aware load balancing algorithm, which makes a best-possible effort to prevent the duplication of cached entries across the different caches in the cluster, enabling more efficient use of the cache space available to us. This, in turn, results in fewer cache evictions.
We also extend our cache-aware load balancing algorithm to take into account the dynamic nature of the application server cluster, where nodes can come up and shut down as the system is running. The evaluation of our implementation shows improvements in response time and throughput of a well-known e-commerce benchmark compared to existing strategies.
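One common way to get the two properties this abstract asks for, no duplicated cache entries across servers and tolerance of servers joining or leaving at runtime, is consistent hashing: each entity id is owned by exactly one server on a hash ring, so requests for it always land on the same cache, and removing a server only remaps that server's entities. The sketch below is a generic illustration of that idea, not the algorithm developed in the thesis.

```python
# Consistent-hash ring for cache-aware dispatch. Each server is placed
# at many points on the ring; an entity is owned by the first server
# point clockwise of its hash.

import hashlib

def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = []                    # sorted (point, server) pairs
        for s in servers:
            self.add(s)

    def add(self, server):
        for i in range(self.replicas):
            self.ring.append((_h(f"{server}#{i}"), server))
        self.ring.sort()

    def remove(self, server):
        self.ring = [(p, s) for p, s in self.ring if s != server]

    def owner(self, entity_id):
        point = _h(entity_id)
        for p, s in self.ring:            # first point clockwise of the key
            if p >= point:
                return s
        return self.ring[0][1]            # wrap around
```

Because only the departed server's ring points disappear, entities owned by the remaining servers keep their owners, so their caches stay warm through membership changes.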
APA, Harvard, Vancouver, ISO, and other styles
19

Khan, Asif H. "Analysis of time varying load for minimum loss distribution reconfiguration." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171313/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Christophersen, Jon Petter. "Battery state-of-health assessment using a near real-time impedance measurement technique under no-load and load conditions." Diss., Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/christophersen/ChristophersenJ0511.pdf.

Full text
Abstract:
The reliability of battery technologies has become a critical issue as the United States seeks to reduce its dependence on foreign oil. One of the significant limitations of in-situ battery health and reliability assessments, however, has been the inability to rapidly acquire information on power capability during aging. The Idaho National Laboratory has been collaborating with Montana Tech of the University of Montana and Qualtech Systems, Incorporated, on the development of a Smart Battery Status Monitor. This in-situ device will track changes in battery performance parameters to estimate its state-of-health and remaining useful life. A key component of this onboard monitoring system will be rapid, in-situ impedance measurements from which the available power can be estimated. A novel measurement technique, known as Harmonic Compensated Synchronous Detection, has been developed to acquire a wideband impedance spectrum based on an input sum-of-sines signal that contains frequencies separated by octave harmonics and has a duration of only one period of the lowest frequency. For this research, studies were conducted with high-power lithium-ion cells to examine the effectiveness and long-term impact of in-situ Harmonic Compensated Synchronous Detection measurements. Cells were cycled using standardized methods with periodic interruptions for reference performance tests to gauge degradation. The results demonstrated that in-situ impedance measurements were benign and could be successfully implemented under both no-load and load conditions. The acquired impedance spectra under no-load conditions were highly correlated to the independently determined pulse resistance growth and power fade. Similarly, the impedance measurements under load successfully reflected changes in cycle-life pulse resistance at elevated test temperatures. However, both the simulated and measured results were corrupted by transient effects and, for the under-load spectra, a bias voltage error. 
These errors mostly influenced the impedance at low frequencies, while the mid-frequency charge transfer resistance was generally retained regardless of current level. It was further demonstrated that these corrupting influences could be minimized with additional periods of the lowest frequency. Therefore, the data from these studies demonstrate that Harmonic Compensated Synchronous Detection is a viable in-situ impedance measurement technique that could be implemented as part of the overall Smart Battery Status Monitor.
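The measurement idea can be illustrated in miniature: excite with a sum of sines at octave-separated frequencies lasting one period of the lowest frequency, then correlate the response with quadrature references to recover magnitude and phase at each frequency. This is a bare sketch of synchronous detection on synthetic data, not the Harmonic Compensated Synchronous Detection implementation described in the abstract.

```python
# Sum-of-sines excitation with octave-separated frequencies, and a
# lock-in style synchronous detector that recovers each frequency's
# magnitude and phase from the combined signal.

import math

def sum_of_sines(freqs, t):
    return sum(math.sin(2 * math.pi * f * t) for f in freqs)

def detect(samples, dt, freq):
    """Correlate with quadrature references; return (magnitude, phase)."""
    i_sum = q_sum = 0.0
    for n, y in enumerate(samples):
        t = n * dt
        i_sum += y * math.sin(2 * math.pi * freq * t)
        q_sum += y * math.cos(2 * math.pi * freq * t)
    scale = 2.0 / len(samples)
    return (math.hypot(i_sum * scale, q_sum * scale),
            math.atan2(q_sum * scale, i_sum * scale))
```

Over a full period of the lowest frequency the sinusoids are orthogonal, so each excitation frequency is recovered cleanly while absent frequencies detect as zero; in the real technique the same correlation is applied to the battery's voltage response to estimate impedance.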
APA, Harvard, Vancouver, ISO, and other styles
21

Cunningham, Ian Joseph. "Load balancing schemes for distributed real-time interactive virtual world simulations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/MQ56681.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Taherifard, Ershad. "Load and Demand Forecasting in Iraqi Kurdistan using Time series modelling." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260260.

Full text
Abstract:
This thesis examines the concept of time series forecasting. More specifically, it predicts the load and power demand in Sulaymaniyah, Iraqi Kurdistan, which today experiences frequent power shortages. This study applies a commonly used time series model, the autoregressive integrated moving average model, which is compared to the naïve method. Several key model properties are inspected to evaluate model accuracy. The model is then used to forecast the load and the demand on a daily, weekly and monthly basis. The forecasts are evaluated by examining the residual metrics. Furthermore, the quantitative results and the answers collected from interviews are used as a basis to investigate the conditions of capacity planning, in order to determine a suitable strategy to minimize the unserved power demand. The findings indicate an unsustainable overconsumption of power in the region due to low tariffs and subsidized energy. A suggested solution is to manage power demand by implementing better strategies, such as increasing tariffs, and to use demand forecasts to supply power accordingly. The monthly supply forecast in this study outperforms the baseline method, but the demand forecast does not. On a weekly basis, both the load and the demand models underperform. The daily forecasts perform equally well or worse than the baseline. Overall, the supply predictions are more precise than the demand predictions. However, there is room for improvement regarding the forecasts. For instance, better model selection and data preparation could result in more accurate forecasts.
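The naïve baselines that such a study compares an ARIMA model against are simple to state in code: repeat the last observed value, or repeat the last full season, and score both with mean absolute error on a held-out tail. The series below is synthetic and the season length is an assumption for illustration.

```python
# Naïve and seasonal-naïve forecast baselines with a mean-absolute-error
# score, the kind of benchmark a fitted ARIMA model must beat.

def naive_forecast(history, horizon):
    """Repeat the last observed value over the forecast horizon."""
    return [history[-1]] * horizon

def seasonal_naive_forecast(history, horizon, season):
    """Repeat the last full season over the forecast horizon."""
    return [history[-season + (i % season)] for i in range(horizon)]

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

On a strictly weekly-periodic series the seasonal-naïve baseline is exact while the plain naïve method incurs error, which is why the choice of baseline matters when judging a model's skill.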
APA, Harvard, Vancouver, ISO, and other styles
23

Sebitosi, A. B. "Load compensation : design of a real time analysis and control device." Master's thesis, University of Cape Town, 2001. http://hdl.handle.net/11427/5107.

Full text
Abstract:
Includes bibliographical references.
The aim of this thesis is to produce a load compensator for a three-phase system. It should be simple, accurate and affordable. The three-phase load compensator design is based on a more recent definition of power factor. Attempts to establish a universally acceptable definition can be traced back as early as 1920, to the 36th Annual Convention of the American Institute of Electrical Engineers. Subsequently, a number of definitions have been adopted by different scholars. Each definition can lead to a different compensator solution. This problem, for example, is illustrated by Eammanuel [25].
APA, Harvard, Vancouver, ISO, and other styles
24

Eneman, Rasmus. "Improving load time of SPAs : An evaluation of three performance techniques." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-54474.

Full text
Abstract:
The code size of single-page web applications is constantly growing, which has a negative effect on load time. Previous research has shown that load time is important to users and that a slow application will lose potential customers even before it has loaded. In this paper, three architecturally far-reaching techniques are measured to see how they can improve load time and to help decide whether an application should be built with one or more of the tested techniques, which are HTTP2 push, Code Splitting and Isomorphism. The experiment shows that Isomorphism can provide a big improvement in the time to first paint and that Code Splitting can be a useful technique for large code bases on mobile phones.
APA, Harvard, Vancouver, ISO, and other styles
25

Taghinezhadbilondy, Ramin. "Extending Use of Simple for Dead Load and Continuous for Live Load (SDCL) Steel Bridge System to Seismic Areas." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2986.

Full text
Abstract:
The steel bridge system referred to as Simple for Dead load and Continuous for Live load (SDCL) has gained popularity in non-seismic areas of the country. It offers many advantages, including enhanced service life and lower inspection and maintenance costs as compared to conventional steel systems. To date, no research studies have been carried out to evaluate the behavior of the SDCL steel bridge system in seismic areas. The main objective of this research was to extend the application of SDCL to seismic areas. The concept of the SDCL system was developed at the University of Nebraska-Lincoln, and a complete summary of the research is provided in five AISC Engineering Journal papers. The SDCL system is providing steel bridges with new horizons and opportunities for developing economical bridge systems, especially in cases for which accelerating the construction process is a priority. The SDCL steel bridge system also provides an attractive alternative for use in seismic areas. The SDCL concept for seismic areas needed a suitable connection between the girder and pier. In this research, an integral SDCL bridge system was considered for further investigation. The structural behavior and force-resistance mechanism of the proposed seismic detail were considered through an analytical study. The proposed connection was evaluated under push-up, push-down, inverse and axial loading to find the sequence of failure modes. The global and local behavior of the system under push-down forces was largely similar to the non-seismic detail. The nonlinear time history analysis indicated a high probability that the bottom flange sustains tension forces under seismic events. The finite element model was subjected to push-up forces to simulate the response of the system under the vertical component of seismic loads; however, the demand-capacity ratio was low for vertical excitation of seismic loads.
In addition, finite element results showed that continuity of the bottom flange increased the ductility and capacity of the system. When the bottom flange was not continuous, tie bars helped the system increase its ultimate moment capacity. To model the longitudinal effect of earthquake loads, the model was subjected to inverse forces as well as axial forces at one end. In this scenario, dowel bars were the most critical elements of the system. Several finite element analyses were performed to investigate the role of each component of the preliminary and revised details. All the results demonstrated that continuity of the bottom flange, the bolt area (in the preliminary detail) and the tie bars over the bottom flange (in the revised detail) were not able to provide more moment capacity for the system. The only component that increased the moment capacity was the dowel bars. In fact, increasing the volume ratio of dowel bars was able to increase the moment capacity and prevent premature failure of the system. This project was Phase I of an envisioned effort to culminate in the development of a set of details and associated design provisions for a version of the SDCL steel bridge system suitable for seismic application. Phase II of this project is ongoing, and currently the component specimen design and test setup are under consideration. The test specimen is going to be constructed and tested in the structures lab of Florida International University. A cyclic loading will be applied to the specimen to investigate the possible damage and load-resistance mechanism. These results will be compared with the analysis results. In the next step, as Phase III, a complete bridge with all its components will be constructed in the structures lab at the University of Nevada-Reno. The connection between steel girders will be an SDCL connection, and the bridge will be subjected to a shake table test to study the real performance of the connection under earthquake excitation.
APA, Harvard, Vancouver, ISO, and other styles
26

Joubert, Adriaan Wolfgang. "Parallel methods for systems of nonlinear equations applied to load flow analysis." Thesis, Queen Mary, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Gillam, David A. "Airloads on a finite wing in a time dependent incompressible freestream." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/12371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Davis, A. G. W. "A transputer ring network for real time distributed control applications." Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sarkar, Falguni. "Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation." Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc279308/.

Full text
Abstract:
Discrete event simulation is an important tool for modeling and analysis. Some of the simulation applications such as telecommunication network performance, VLSI logic circuits design, battlefield simulation, require enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (Ip) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence. In order for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintain this ordering. In a conservative system, no lp executes an event unless it is certain that all events with earlier time-stamps have been executed. Such systems are prone to deadlock. In an optimistic system on the other hand, simulation progresses disregarding this ordering and saves the system states regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. There is another approach in which all the lps participate in the computation of a safe time-window and all events with time-stamps within this window are processed concurrently. In optimistic simulation systems, there is a global virtual time (GVT), which is the minimum of the time-stamps of all the events existing in the system. The system can not rollback to a state prior to GVT and hence all such states can be discarded. GVT is used for memory management, load balancing, termination detection and committing of events. However, GVT computation introduces additional overhead. In optimistic systems, large number of rollbacks can degrade the system performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of them on a network of workstations. 
The remaining algorithm has been analyzed probabilistically. The reason for choosing a network of workstations is its low cost and the availability of efficient message-passing software such as PVM and MPI. All of these load balancing algorithms piggyback on the existing GVT computation algorithms and try to balance the speed of simulation across the different LPs. We have also designed an optimal GVT computation algorithm for hypercubes and studied its performance with respect to other GVT computation algorithms by simulating a hypercube in our network cluster. We use the topological properties of a star network to design an algorithm for computing a safe time-window for parallel discrete event simulation. We have analyzed and simulated the behavior of an open queuing network resembling such an architecture. Our algorithm is also extended to hierarchical stars and to recursive window computation.
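The GVT mechanism this abstract relies on can be sketched in a few lines. The data layout below (per-LP lists of pending events and in-transit messages) is purely illustrative and not from the thesis; it only shows why states older than GVT are safe to discard:

```python
# Illustrative sketch: GVT is the minimum timestamp over all unprocessed and
# in-transit events; saved states older than GVT can never be rolled back to,
# so they are "fossil-collected".

def compute_gvt(lps):
    """GVT = min timestamp over every LP's pending events and in-flight messages."""
    stamps = []
    for lp in lps:
        stamps.extend(e["ts"] for e in lp["pending"])
        stamps.extend(m["ts"] for m in lp["in_transit"])
    return min(stamps) if stamps else float("inf")

def fossil_collect(lp, gvt):
    """Discard saved states with timestamps before GVT (kept simple here;
    a real simulator also keeps the newest state at or before GVT)."""
    lp["saved_states"] = [s for s in lp["saved_states"] if s["ts"] >= gvt]
    return lp

lps = [
    {"pending": [{"ts": 12}], "in_transit": [{"ts": 9}],
     "saved_states": [{"ts": 2}, {"ts": 8}, {"ts": 11}]},
    {"pending": [{"ts": 15}], "in_transit": [],
     "saved_states": [{"ts": 5}, {"ts": 10}]},
]
gvt = compute_gvt(lps)  # min(12, 9, 15) -> 9
survivors = [len(fossil_collect(lp, gvt)["saved_states"]) for lp in lps]
```

The in-transit messages must be included in the minimum, otherwise a message still in flight could force a rollback past the computed GVT.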
APA, Harvard, Vancouver, ISO, and other styles
30

Chouman, Mustapha M. "The effect of additional reinforcement on time-dependent behaviour of partially prestressed concrete." Thesis, University of Leeds, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bjerke, Hanne. "Revealing Causes of Restrictions by Signatures in Real-Time Hook Load Signals." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22906.

Full text
Abstract:
Downhole restrictions cause non-productive time, which represents large economic losses. Knowing the cause of a restriction, in order to implement the correct remedies, is crucial for preventing extensive cleaning activities or even stuck pipe. Using the hook load signal to find signatures specific to the different causes of restriction could enable quick recognition of the restriction type. In the wells studied, 22 cases of restriction were found, which could be divided into 5 main groups of causes: unstable wellbore, ledges, cuttings accumulation, differential sticking and local dogleg. One incident from each group was chosen for an extensive post-event analysis for the purpose of strengthening the hypothesis about its cause. The results of the study showed that it was necessary to simplify the analysis into two main types of hook load restriction signatures: fixed and moveable. For the physical interpretation of the two signatures, hook load signals from a ledge and from a cuttings bed were used respectively. These were assumed to be good representatives of the two main types, and the signals proved to coincide with the physical explanation valid for them. The two groups were created based on the clear differences, visible in the post-event analysis, between the hook load signals of ledges and of cuttings accumulation. Both of these causes of restriction have a very clear physical explanation: when the drill string encounters a ledge, it stops moving and is thereby fixed to one position in the well; cuttings downhole, on the other hand, are moveable and can travel along with the drill string. It became clear that dividing signatures into groups based on causes of restriction was not the best approach; rather, they should be divided into groups based on the physical explanation, such as fixed and moveable. From that point on, each cause of restriction is related to one or both of the two main groups.
The goal of distinguishing hook load signals from different causes of restriction was reached to some extent. It was found that by recognizing from the hook load signal whether the restriction was fixed or moveable, 4 out of 5 causes of restriction were distinguishable. This was possible because an unstable wellbore was recognized by exhibiting both fixed and moveable restrictions, and differential sticking was recognized by occurring at the beginning of a stand pulled.
APA, Harvard, Vancouver, ISO, and other styles
32

Maredia, Rizwan. "Automated application profiling and cache-aware load distribution in multi-tier architectures." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106556.

Full text
Abstract:
Current business applications use a multi-tier architecture where business processing is done in a cluster of application servers, all querying a single shared database server, making it a performance bottleneck. A prevalent solution to reduce the load on the database is to cache database results in the application servers as business entities. Since each in-memory application cache is small and independent of the others, a naïve load balancing algorithm such as round-robin would result in cache redundancy and lead to cache evictions. By clustering these caches, we get a distributed cache with a larger aggregate capacity, where an object is retrieved from a remote cache if it is not found in the local cache. This approach eliminates redundancy and reduces the load on the database to a great extent. However, accessing remote objects incurs network latency, affecting response time. In this thesis, we transform the distributed cache into a hybrid one that supports replication, so that popular requests can be served locally by multiple application servers. We take advantage of this hybrid cache by developing a holistic caching infrastructure, comprised of an application monitoring tool and an analysis framework that work continuously alongside the live application to generate content-aware request distribution and caching policies. The policies are generated by request-centric strategies that aim to localize popular requests to specific servers in order to reduce remote calls. These strategies are flexible and can easily be adapted to various workloads and application needs. Experimental results show that we indeed derive a substantial performance gain using our infrastructure. Our strategies resulted in faster response times under normal workload and scaled much better with higher throughput than existing approaches under peak workload.
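The local-then-remote-then-database lookup order and the popularity-driven replication described in this abstract can be sketched with a toy class. The class name, the hit-count threshold and the keys below are hypothetical, not the thesis's actual design:

```python
class HybridCache:
    """Toy sketch of a hybrid distributed cache: look locally first, then in
    peer caches, then in the database, replicating popular entries locally."""

    def __init__(self, peers, db, hot_threshold=3):
        self.local = {}
        self.peers = peers              # other servers' caches (dicts here)
        self.db = db                    # shared backing store
        self.hits = {}                  # per-key popularity counter
        self.hot_threshold = hot_threshold
        self.db_queries = 0

    def get(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.local:                       # 1. local cache (fastest)
            return self.local[key]
        for peer in self.peers:                     # 2. remote peer caches
            if key in peer:
                value = peer[key]
                # replicate popular entries locally to avoid future remote calls
                if self.hits[key] >= self.hot_threshold:
                    self.local[key] = value
                return value
        self.db_queries += 1                        # 3. database (slowest)
        value = self.db[key]
        self.local[key] = value
        return value

peer = {"order:42": "shipped"}
cache = HybridCache(peers=[peer], db={"user:1": "alice", "order:42": "shipped"})
cache.get("user:1")                 # miss everywhere -> one DB query, cached locally
for _ in range(3):
    cache.get("order:42")           # served by the peer; replicated once popular
```

The point of the threshold is the trade-off the abstract describes: pure clustering maximizes aggregate capacity but pays network latency, while replicating hot keys trades a little redundancy for local hits.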
APA, Harvard, Vancouver, ISO, and other styles
33

Branch, Perry L. "Development of real time non-intrusive load monitor for shipboard fluid systems." Thesis, (3 MB), 2008. http://handle.dtic.mil/100.2/ADA488243.

Full text
Abstract:
Thesis (Degrees of Naval Engineer and M.S. in Engineering and Management)--Massachusetts Institute of Technology, June 2008.
"June 2008." Description based on title screen as viewed on August 26, 2009. DTIC Descriptor(s): Shipboard, Electrical Loads, Diagnostic Equipment, Monitoring, Prototypes, Reverse Osmosis, Graphical User Interface, Shipbuilding, Cost Effectiveness, Theses, Field Tests, Computer Aided Diagnosis, Maintenance Management. DTIC Identifier(s): Condition Based Maintenance Systems, Health Monitoring Systems, Reverse Osmosis Systems, Spectral Analysis, NILM (Nonintrusive Load Monitor), Fourier-Series Analysis Equations, Spectral Envelopes, Transient Electrical Behavior. Includes bibliographical references (p. 84-85). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
34

COSTA, FELIPPE MORAES SILVA. "CYCLE COUNTING METHODS FOR LOAD-TIME-HISTORIES TYPICAL FOR POWER PLANT APPLICATIONS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26265@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
PROGRAMA DE EXCELENCIA ACADEMICA
BOLSA NOTA 10
Structural components of power plants are subjected to thermal transients during their operational life. These thermal transients generate unequal temperature distributions across the components' wall thickness, causing severe thermal stresses. The repetition of the thermal transients and, consequently, of the stress and strain variations is responsible for fatigue damage of the structural components. In such cases, fatigue damage is assessed by calculating the cumulative usage factor, or CUF. CUF calculations are based on the stress and strain histories, on experimental fatigue curves and fatigue damage models, and on algorithms used to determine the number of times a given stress or strain range occurs during the life period considered. This thesis presents and discusses fatigue damage models and their association with existing cycle counting models that are applicable to power plant components. A selection of combinations of damage models and cycle-counting methods was used in two case-study examples.
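In its simplest textbook form, the CUF calculation this abstract refers to reduces to Miner's linear damage rule applied to the output of a cycle-counting method. The Basquin-type S-N curve constants below are invented for illustration and are not from the dissertation:

```python
def cycles_to_failure(stress_range, C=2.0e12, m=3.0):
    """Illustrative Basquin-type S-N curve: N = C / S^m (C and m are made up;
    a real assessment uses the code's experimental fatigue curves)."""
    return C / stress_range ** m

def cumulative_usage_factor(counted_cycles):
    """Miner's linear damage rule: CUF = sum over ranges of n_i / N_i.
    CUF >= 1 indicates the allowable fatigue usage is exhausted."""
    return sum(n / cycles_to_failure(s) for s, n in counted_cycles)

# (stress range in MPa, counted cycles), as a cycle-counting method
# such as rainflow would produce from the load-time history:
history = [(200.0, 500), (100.0, 5000), (50.0, 20000)]
cuf = cumulative_usage_factor(history)  # 0.002 + 0.0025 + 0.00125 = 0.00575
```

The choice of cycle-counting method only changes the `history` table; the damage model then turns each counted range into a usage fraction.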
APA, Harvard, Vancouver, ISO, and other styles
35

Branch, Perry L. (Perry Lamar). "Development of real time non-intrusive load monitor for shipboard fluid systems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44846.

Full text
Abstract:
Thesis (Nav. E.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2008.
Includes bibliographical references (p. 84-85).
Since the year 2000, the United States Navy has spent an average of half a billion dollars over the congressionally approved budget for shipbuilding. Additionally, most experts project that in order to meet the Chief of Naval Operations' goal of a 313-ship Navy, the annual shipbuilding budget will have to increase by about two thirds. Exacerbating this problem is the rising cost of maintaining the current inventory of ships. The U.S. Navy has long used a requirements-driven maintenance program to reduce the number of total system failures by conducting routine maintenance and inspections whether they are needed or not. To combat this problem, the Navy will inevitably have to turn to a condition-based maintenance system. The Non-Intrusive Load Monitor (NILM) is a system that can greatly enhance the ability to monitor the health of engineering systems while incurring a low acquisition cost and low technology risk. This research focuses on the development of a real-time user interface for the current NILM architecture in order to provide useful system information to an operator. Additionally, this research has shown that the NILM can be used effectively and reliably to monitor equipment health, recognize and indicate abnormal operating conditions and casualties, and provide invaluable information for training operators, diagnosing problems and troubleshooting. The NILM is an inexpensive and promising platform for monitoring equipment and reducing maintenance costs.
by Perry L. Branch.
S.M.
Nav.E.
APA, Harvard, Vancouver, ISO, and other styles
36

Macqueen, Christopher Neil. "Time based load-flow analysis and loss costing in electrical distribution systems." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/1700/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

ALWAN, HAYDER O. "Load Scheduling with Maximum Demand and Time of Use pricing for Microgrids." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5954.

Full text
Abstract:
Several demand side management (DSM) techniques and algorithms have been used in the literature. These algorithms show that by adopting DSM and Time-of-Use (TOU) price tariffs, electricity cost decreases significantly and optimal load scheduling is achieved. However, the purpose of DSM is not only to lower the electricity cost, but also to avoid the peak load even when electricity prices are low. To address this concern, this dissertation starts with a brief literature review of the existing DSM algorithms and schemes. These algorithms can be suitable for Direct Load Control (DLC) schemes, Demand Response (DR), and load scheduling strategies. Secondly, the dissertation compares two DSM algorithms to show their performance in terms of cost minimization, voltage fluctuation, and system power loss (see Chapter 5). The results show the importance of balancing objectives such as electricity cost minimization, peak load occurrence, and voltage fluctuation while simultaneously optimizing the cost.
APA, Harvard, Vancouver, ISO, and other styles
38

Gedda, Emil, and Anders Eriksson. "Practical analysis of the Precision Time Protocol under different types of system load." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208493.

Full text
Abstract:
The existence of distributed real-time systems calls for protocols for high-accuracy time synchronization between devices. One such protocol, the Precision Time Protocol (PTP), reaches sub-microsecond synchronization precision. PTP can be implemented both in hardware and in software. This study aimed to analyze how system stress affects the accuracy and precision of software-implemented PTP between two devices. This was done using two Intel Galileo Generation 2 boards running Linux. Software was used to simulate CPU, I/O, network, and OS load. Data was extracted from software logs, summarized in charts, and then analyzed. The results showed that PTP synchronization accuracy and precision do suffer under certain types of system load, most notably under heavy I/O load. However, the results might not be applicable to real-world scenarios due to limitations in the hardware, and because the synthetic stress tests do not correspond to real-world usage. Further research is required to analyze why and how different types of system load affect PTP's accuracy and precision.
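The sub-microsecond synchronization PTP achieves rests on a four-timestamp Sync/Delay_Req exchange; the standard offset and delay formulas, which assume a symmetric network path, can be sketched as:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588-style offset/delay computation.
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the forward and reverse path delays are equal, as PTP does."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Example: slave clock running 15 us ahead, 5 us symmetric path delay
# (timestamps in seconds, chosen to be consistent with those parameters):
offset, delay = ptp_offset_and_delay(t1=0.0, t2=20e-6, t3=30e-6, t4=20e-6)
```

The asymmetry assumption is exactly why heavy I/O or network load degrades a software implementation: queuing delays skew one direction of the exchange and show up directly as offset error.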
APA, Harvard, Vancouver, ISO, and other styles
39

Somasundaram, Meena Sivalingam. "Pulsed power and load-pull measurements for microwave transistors." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Brzostek, Richard J. "The relation of time perception to task load, job satisfaction, and organizational commitment /." View abstract, 2001. http://library.ccsu.edu/ccsu%5Ftheses/showit.php3?id=1636.

Full text
Abstract:
Thesis (M.A.)--Central Connecticut State University, 2001.
Thesis advisor: James Conway. " ... in partial fulfillment of the requirements for the degree of Master of Arts in Psychology." Includes bibliographical references (leaves 30-33). Also available via the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
41

Wallentinsson, Emma. "Multiple Time Series Forecasting of Cellular Network Traffic." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154868.

Full text
Abstract:
The mobile traffic in cellular networks is increasing at a steady rate as we go into a future where we are connected to the internet practically all the time in one way or another. To map the mobile traffic and the volume pressure on the base station during different time periods, it is useful to be able to predict the traffic volumes within cellular networks. The data in this work consists of 4G cellular traffic data spanning a coherent 7-day period, collected from cells in a moderately large city. The proposed method in this work is ARIMA modeling, in both its original form and with an extension where the coefficients of the ARIMA model are re-estimated by introducing some user characteristic variables. The re-estimated coefficients produce slightly lower forecast errors in general than an isolated ARIMA model where the volume forecasts depend only on time. This implies that the forecasts can be somewhat improved when we allow the influence of these variables to be part of the model, and not only the time series itself.
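As a rough illustration of the fit-then-forecast idea, here is a plain AR(1) stand-in, far simpler than the ARIMA models the thesis actually uses; the hourly traffic numbers are made up:

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t] ~= c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic hourly traffic volumes (illustrative, not the thesis's data):
traffic = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.5, 15.0]
c, phi = fit_ar1(traffic)
ahead = forecast(traffic, steps=3, c=c, phi=phi)
```

A full ARIMA model adds differencing and moving-average terms, and the thesis's extension additionally re-estimates the coefficients with user-characteristic regressors; the skeleton of fitting to history and iterating forward is the same.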
APA, Harvard, Vancouver, ISO, and other styles
42

Phung, Kent, and Charles Chu. "Adhesives for Load-Bearing Timber-Glass Elements : Elastic, plastic and time dependent properties." Thesis, Linnéuniversitetet, Institutionen för bygg- och energiteknik (BE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-27386.

Full text
Abstract:
This thesis work is part of an on-going project on load-bearing timber-glass composites within the EU programme WoodWisdom-Net. One major scope of that project is the adhesive material between the glass and timber parts. The underlying importance of the bonding material is related to the transfer of stress between the two materials, i.e. the influence of the adhesive's stiffness and ductility on the possibility of obtaining uniform stress distributions. In this study the mechanical properties of two different adhesives are investigated: an epoxy (3M DP490) and an acrylate (SikaFast 5215). The adhesives differ in stiffness, strength and viscous behaviour. In long-term load-carrying design it is important to understand the material's behaviour under a constant load, since a permanent displacement within the structure can have major consequences. The main aim of this project is therefore to identify the adhesives' strength, deformation capacity and possible viscous (time-dependent) effects. Because of limitations of equipment and time, this study is restricted to three types of tensile tests: monotonic, cyclic and relaxation tests. The results of the experiments show that 3M DP490 has a higher strength and a smaller deformation capacity than SikaFast 5215; thus, SikaFast 5215 is more ductile. 3M DP490 exhibits a lower loss of strength under constant strain (in relaxation), while SikaFast 5215 also showed a strong dependence of the relaxation stress loss on the strain level.
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Ran. "Load profiling on time and spectral domain : from big data to smart data." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665434.

Full text
Abstract:
With the promotion of demand side responses (DSRs) and low carbon technologies (LCTs), there is a growing interest in visualising demand information at the individual consumer and low voltage (LV) network level, where demands are less aggregated and highly volatile. Yet traditional load profiling techniques, which are carried out on small data, struggle to meet the requirements on accuracy and granularity. This thesis contributes to this area by extending traditional load profiling to a big-data context, where refined load profiles (smart data) can be extracted by two novel load profiling techniques for LV networks and individual consumers. The refined load profiles aim to: i) economically visualise LV networks with limited smart-grid monitoring data; ii) transform smart metering data into a high-detail granular representation of the customers' daily demand. For the LV networks, this thesis develops a novel concept, LV network templates, which aim to visualise LV networks in a cost-effective manner. A novel three-stage load profiling method is proposed: clustering, classification and scaling. Using statistical time-series analysis, three steps are undertaken: i) cluster a vast amount of load data according to their load shapes; ii) classify un-monitored substations to the most similar cluster without sample metering; iii) scale them to the right magnitude, also without sample metering. Through this method, limited representative monitoring data can be used to develop a library of typical load profiles for un-monitored networks, thus saving the cost of extensive monitoring of every single substation. In addition, it is the first load profiling method that can accurately express both load shapes and magnitudes for LV networks.

Regarding the representation of an individual customer's demand, the developed time-series analysis needs to be updated due to the volatile and uncertain nature of smart metering data, which involves inter-related factors such as overall load shapes, sudden spikes and magnitudes. Therefore, an innovative spectral load profiling is proposed to decompose these factors into different spectral levels, characterised by spectral features. By analysing the extracted features on each spectral level separately through multi-resolution analysis, interference among the different factors can effectively be prevented. The proposed method, for the first time, is able to fully capture the energy characteristics at the household level. The developed LV network load templates provide an economical but straightforward way to quantify the available headroom of unmonitored substations over time, providing quantitative information for distribution network operators to integrate LCTs at minimal cost. The spectral load profiling gives an insight into customers' energy behaviours with high granularity and accuracy. It can support customer-specific DSR, tariff design, smart metering validation and load forecasting.
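The three-stage clustering/classification/scaling idea can be sketched with a nearest-centroid toy example. The template names, the profiles and the peak value below are all invented for illustration:

```python
def normalize(profile):
    """Peak-normalize so that only the load *shape* is compared."""
    peak = max(profile)
    return [v / peak for v in profile]

def classify(shape, templates):
    """Stage 2: assign an un-monitored substation to the nearest template
    shape (squared Euclidean distance on peak-normalized profiles)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(shape, templates[name]))

# Stage 1 output: a template library from clustering monitored substations
templates = {
    "domestic":   normalize([20, 15, 30, 80, 100, 60]),
    "commercial": normalize([40, 90, 100, 95, 50, 30]),
}
# Un-monitored substation: a shape estimate, e.g. from its customer mix
observed_shape = normalize([18, 14, 28, 75, 100, 55])
label = classify(observed_shape, templates)
# Stage 3: scale the template shape to an assumed peak magnitude (kW)
estimated_peak_kw = 250.0
estimated_profile = [v * estimated_peak_kw for v in templates[label]]
```

Separating shape (stages 1-2) from magnitude (stage 3) is what lets limited representative monitoring stand in for metering every substation.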
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Wenqu. "Direct load monitoring in rolling element bearing by using ultrasonic time of flight." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/10572/.

Full text
Abstract:
Rolling element bearings find widespread use in numerous machines, and they are one of the key components of the systems involved. Bearing failures can cause catastrophic events if they are not detected in time, and they result in increased downtime and maintenance cost. The need for longer endurance life at lower cost drives research on bearing condition monitoring.

Load monitoring provides significant information for bearing design and residual service life prediction, as the load applied by each rolling element on a bearing raceway controls friction and wear. It is possible to infer bearing load from load cells or strain gauges on the shaft or bearing housing. However, this is not always simply and uniquely related to the real load transmitted by rolling elements directly to the raceway. Firstly, the load sharing between rolling elements in the raceway is statically indeterminate. Secondly, in a machine with non-steady loading the load path is complex and highly transient, being subject to the dynamic behaviour of the transmission. This project develops a non-invasive, safe and portable technique to measure the load transmitted directly by a rolling element to the raceway by using ultrasound.

The technique works by monitoring the time-of-flight (ToF) of ultrasound that travels in a raceway and reflects back from the contact face. A piezoelectric sensor was permanently bonded onto the external surface of the stationary raceway of a rolling element bearing. The ToF of an ultrasonic pulse from the sensor to the raceway-rolling element contact was measured, which depends on the wave speed and the thickness of the raceway.

The speed of an ultrasonic wave in a component changes with the state of stress, known as the acoustoelastic effect. The thickness of the element varies as deflection occurs when the contacting surfaces are subjected to load. Therefore, the ultrasonic ToF in a raceway is load dependent.

In practical measurements, it was found that the phase of the wave reflected from rolling contacts varied with the contact conditions. The phase is determined by the contact stiffness and, in a simple peak-to-peak measurement, this appeared as a change in the ToF. For typical rolling contacts, the ToF changes caused by deflection and the acoustoelastic effect are of the order of nanoseconds, and the apparent time shift from the phase-change effect is of the same order.

Although the phase change affects the reflected signals, it does not affect their envelope. In this work the Hilbert transform was used to calculate the envelope of the reflected pulses, and thus this contact-dependent phase shift was eliminated. The time difference between the envelopes of reflected pulses in the unloaded and loaded states was therefore a result of the load effect alone.

Ultrasonic measurements have been carried out on a model line contact formed between a steel plate and a cylindrical bearing steel roller, on line contacts in a cylindrical roller bearing used for the planet gear of a wind turbine epicyclic gearbox, and on elliptical contacts in a radially loaded (deep groove) ball bearing. The ToF changes under different contact loads were recorded and used to determine the deflection of the raceway. This was then related to load using a simple elastic contact model. The load measured from the ultrasonic reflection was compared with the load applied to the contact, and good agreement was achieved. The ultrasonic ToF technique shows promise as an effective method for load monitoring in real bearing applications.
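The pulse-echo geometry behind the ToF measurement can be illustrated as follows. The numbers are invented, the wave speed is merely a typical value for longitudinal waves in steel, and the acoustoelastic contribution to the ToF change, which the thesis separates out alongside the envelope analysis, is ignored here:

```python
def deflection_from_tof(tof_unloaded, tof_loaded, wave_speed=5900.0):
    """Pulse-echo sketch: the pulse crosses the raceway wall twice, so a
    thickness change dh changes the time of flight by 2 * dh / wave_speed.
    wave_speed = 5900 m/s is a typical longitudinal speed in steel
    (illustrative). The acoustoelastic change in wave speed under stress
    is neglected in this toy calculation."""
    return wave_speed * (tof_unloaded - tof_loaded) / 2.0

# A 3 ns reduction in ToF under load corresponds to about 8.85 um of
# raceway deflection at this wave speed (times in seconds):
dh = deflection_from_tof(tof_unloaded=3.3900e-6, tof_loaded=3.3870e-6)
```

The nanosecond-scale ToF shifts quoted in the abstract translate into micrometre-scale deflections, which is why the contact-stiffness phase artefact of similar magnitude had to be removed via the envelope before the load could be inferred.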
APA, Harvard, Vancouver, ISO, and other styles
45

Kotriwala, Arzam Muzaffar. "Load Forecasting for Temporary Power Installations : A Machine Learning Approach." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211554.

Full text
Abstract:
Sports events, festivals, construction sites, and film sets are examples of cases where power is required temporarily, and often away from the power grid. Temporary Power Installations refer to systems set up for a limited amount of time, with power typically generated on-site. Most load forecasting research has centered on settings with a permanent supply of power (such as residential buildings). In contrast, this work proposes machine learning approaches to accurately forecast load for Temporary Power Installations. In practice, these systems are typically powered by diesel generators that are over-sized and, consequently, operate at low, inefficient load levels. In this thesis, a ‘Pre-Event Forecasting’ approach is proposed to address this inefficiency by classifying a new Temporary Power Installation into a cluster of installations with similar load patterns. By doing so, the sizing of generators and power generation planning can be optimized, thereby improving system efficiency. Load forecasting is also useful while a Temporary Power Installation is operational. A ‘Real-Time Forecasting’ approach is proposed that uses monitored load data streamed to a server to forecast load two hours or more ahead in time. By doing so, practical measures can be taken in real time to meet unexpected high and low power demands, thereby improving system reliability.
APA, Harvard, Vancouver, ISO, and other styles
46

Donati, Elena. "Extensometers for real-time detection of the elements' weight in an integrated security system." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Ya. "High Efficiency Optimization of LLC Resonant Converter for Wide Load Range." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/30990.

Full text
Abstract:
As information technology advances, so does the demand for power management of telecom and computing equipment. High efficiency and high power density remain the key technology drivers for power management in these applications. To save energy, in 2005 the U.S. Environmental Protection Agency (EPA) announced the first draft of its proposed revision to the ENERGY STAR specification for computers. The draft specification separately addresses efficiency requirements for laptop, desktop, workstation, and server computers, and proposes a minimum power supply efficiency of 80% for PCs and 75% to 83% for desktop-derived servers, depending on loading condition and server type. Furthermore, some industry companies have recently set a much higher efficiency target for the whole AC/DC front-end converter over a wide load range. Distributed power systems are widely adopted in telecom and computing applications for their high performance and high reliability. As one of the key building blocks in distributed power systems, the DC/DC converter in the front-end converter is also under pressure to increase efficiency and power density. Due to the hold-up time requirement, PWM DC/DC converters designed for a wide input voltage range cannot achieve high efficiency. As a promising topology for this application, the LLC resonant converter can achieve both high efficiency and wide input voltage range capability because of its voltage gain characteristics and small switching loss. However, the efficiency of an LLC resonant converter with a diode rectifier still cannot meet the recent efficiency targets from industry. To further improve the efficiency of LLC resonant converters, synchronous rectification must be used. A complete solution for synchronous rectification of LLC resonant converters is discussed in this thesis.
The synchronous rectifier (SR) can be driven by sensing its drain-source voltage Vds; the turn-on of the SR is triggered by body-diode conduction. With a Vds compensation network, the voltage drop across Rds_on can be sensed precisely, so the SR can be turned off at the right time. Moreover, efficiency optimization at normal operation over a wide load range is discussed. It is shown that power loss at normal operation is determined solely by the magnetizing inductance, which in turn is designed according to the dead-time (td) selection. Mathematical equations relating power loss to dead time are developed and, for the first time, this relationship is used as a tool for efficiency optimization. With this tool, the LLC resonant converter can be optimized according to the efficiency requirement over a wide load range. To also achieve high efficiency at ultra-light load, green-mode operation of LLC resonant converters is addressed: the root cause of the issue with the conventional control algorithm is revealed, and a preliminary solution is proposed.
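The link between dead time and magnetizing inductance that the abstract alludes to is commonly expressed in LLC design practice through the ZVS condition Lm ≤ td·Ts / (16·Ceq), where Ceq is the equivalent capacitance at the half-bridge switch node. The sketch below evaluates it for illustrative component values; the frequency and capacitance are assumptions, not figures from the thesis:

```python
# Dead-time / magnetizing-inductance trade-off: a longer dead time permits a
# larger Lm (less circulating magnetizing current, lower conduction loss),
# while too long a dead time increases body-diode conduction loss.

def max_magnetizing_inductance(td, fs, ceq):
    # ZVS condition commonly used for LLC design: Lm <= td * Ts / (16 * Ceq)
    ts = 1.0 / fs                  # switching period
    return td * ts / (16.0 * ceq)

fs  = 1.0e6     # 1 MHz switching frequency (assumed)
ceq = 1.0e-9    # 1 nF equivalent switch-node capacitance (assumed)

for td_ns in (50, 100, 200):
    lm = max_magnetizing_inductance(td_ns * 1e-9, fs, ceq)
    print(f"td = {td_ns:3d} ns -> Lm_max = {lm * 1e6:.2f} uH")
```

This is why the abstract can treat normal-operation power loss as a function of dead-time selection: choosing td fixes the admissible Lm, and Lm sets the circulating-current loss.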
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
48

Giacometto, Torres Francisco Javier. "Adaptive load consumption modelling on the user side: contributions to load forecasting modelling based on supervised mixture of experts and genetic programming." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/457631.

Full text
Abstract:
This research work proposes three main contributions to the load forecasting field: improved forecasting accuracy, improved model adaptiveness, and automation of the execution of the implemented load forecasting strategies. For the accuracy contribution, learning algorithms have been implemented based on machine learning, computational intelligence, evolvable networks, expert systems, and regression approaches. Options for increasing forecasting quality, through minimization of the forecasting error and exploitation of hidden insights and miscellaneous properties of the training data, are likewise explored in the form of feature-based specialized base learners inside an ensemble modelling structure. Preprocessing and knowledge discovery algorithms are also implemented, to boost accuracy through the cleaning of variables and to enhance the autonomy of the modelling algorithm via unsupervised intelligent algorithms, respectively. Adaptiveness has been enhanced by implementing three components inside an ensemble learning strategy. The first is resampling techniques, which ensure the replication of the global probability distribution over multiple independent training subsets and, consequently, the training of base learners on representative spaces of occurrences. The second is multi-resolution and cyclical analysis techniques: through the decomposition of endogenous variables into their time-frequency components, major insights are acquired and applied to the definition of the ensemble structure layout. The third is self-organized modelling algorithms, which provide fully customized base learners. Autonomy is achieved by combining automatic procedures so as to minimize the interaction of an expert user in the forecasting procedure. 
Experimental results obtained from applying the proposed load forecasting strategies have demonstrated the suitability of the implemented techniques and methodologies, especially in the case of the novel ensemble learning strategy.
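The first adaptiveness component — resampling so that each independent training subset mirrors the global distribution — can be sketched with bootstrap sampling feeding a trivial averaging ensemble. The (hour, load) data and the mean-predictor base learner below are hypothetical stand-ins for the thesis's actual specialized experts:

```python
import random

def bootstrap_sample(data, rng):
    # Resample with replacement; each subset approximates the global
    # probability distribution of the full training set.
    return [rng.choice(data) for _ in data]

def fit_base_learner(sample):
    # Toy base learner: predicts the mean load of its training subset.
    mean = sum(y for _, y in sample) / len(sample)
    return lambda x: mean

def ensemble_predict(learners, x):
    # Combine the experts by averaging their outputs.
    return sum(model(x) for model in learners) / len(learners)

rng = random.Random(0)                        # fixed seed for reproducibility
data = [(h, 100 + 5 * h) for h in range(24)]  # hypothetical (hour, load-kW) pairs
learners = [fit_base_learner(bootstrap_sample(data, rng)) for _ in range(10)]
print(round(ensemble_predict(learners, 12), 1))
```

In the thesis the base learners are far richer (and feature-specialized), but the structural point is the same: each expert sees a statistically independent, representative resample of the training data.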
APA, Harvard, Vancouver, ISO, and other styles
49

Gupta, Varun. "Stochastic Models and Analysis for Resource Management in Server Farms." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/544.

Full text
Abstract:
Server farms are popular architectures for computing infrastructures such as supercomputing centers, data centers, and web server farms. As server farms become larger and their workloads more complex, designing efficient policies for managing their resources via trial and error becomes intractable. In this thesis, we employ stochastic modeling and analysis techniques to understand the performance of such complex systems and to guide the design of policies that optimize their performance. There is a rich literature on applying stochastic modeling to diverse application areas such as telecommunication networks, inventory management, production systems, and call centers, but there are numerous disconnects between the workloads and architectures of these traditional applications of stochastic modeling and how compute server farms operate, necessitating new analytical tools. To cite a few: (i) Unlike call durations, supercomputing jobs and file sizes have high variance in service requirements, and this critically affects the optimality and performance of scheduling policies. (ii) Most existing analysis of server farms focuses on the First-Come-First-Served (FCFS) scheduling discipline, while time-sharing servers (e.g., web and database servers) are better modeled by the Processor-Sharing (PS) scheduling discipline. (iii) Time-sharing systems typically exhibit thrashing (resource contention), which limits the achievable concurrency level, but traditional models of time-sharing systems ignore this fundamental phenomenon. (iv) Recently, minimizing energy consumption has become an important metric in managing server farms. State-of-the-art servers come with multiple knobs to control energy consumption, but traditional queueing models do not take energy consumption into account. In this thesis we attempt to bridge some of these disconnects by bringing the stochastic modeling and analysis literature closer to the realities of today’s compute server farms. 
We introduce new queueing models for computing server farms, develop new stochastic analysis techniques to evaluate and understand these queueing models, and use the analysis to propose resource management algorithms to optimize their performance.
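The interplay of points (i) and (ii) — service-time variance hurting FCFS but not PS — follows from two textbook M/G/1 results (standard queueing theory, not specific to this thesis): the Pollaczek-Khinchine formula for FCFS, and the insensitivity of PS to the service distribution:

```python
# Mean response times in an M/G/1 queue under FCFS vs Processor-Sharing.

def mg1_fcfs_response(lam, es, es2):
    # Pollaczek-Khinchine: E[T] = E[S] + lam * E[S^2] / (2 * (1 - rho))
    rho = lam * es
    return es + lam * es2 / (2.0 * (1.0 - rho))

def mg1_ps_response(lam, es):
    # PS is insensitive to the service distribution: E[T] = E[S] / (1 - rho)
    rho = lam * es
    return es / (1.0 - rho)

lam = 0.8               # arrival rate (jobs/s), so utilization rho = 0.8
es = 1.0                # mean service requirement
for c2 in (1.0, 25.0):  # squared coefficient of variation: low vs high variance
    es2 = es**2 * (1.0 + c2)   # E[S^2] = E[S]^2 * (1 + C^2)
    print(f"C^2={c2:5.1f}  FCFS={mg1_fcfs_response(lam, es, es2):7.2f}  "
          f"PS={mg1_ps_response(lam, es):5.2f}")
```

At C² = 25 (heavy-tailed jobs, as in supercomputing workloads) the FCFS mean response time is over ten times that of PS, while the PS value is unchanged — precisely why the choice of scheduling discipline matters under high-variance service requirements.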
APA, Harvard, Vancouver, ISO, and other styles
50

Desouky, Azza Ahmed El. "Accurate fast weather dependent load forecast for optimal generation scheduling in real time application." Thesis, University of Bath, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
