Academic literature on the topic 'Distribution factor method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distribution factor method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Distribution factor method"

1

Syarifuddin, Nojeng, Syamsir, Jaya Arif, Syarifuddin Andi, and Yusri Hassan Mohammad. "Distribution Factor Method Modified for Determine of Load Contribute based on the Power Factor in Transmission Line." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (2018): 1236–42. https://doi.org/10.11591/ijeecs.v11.i3.pp1236-1242.

Full text
Abstract:
This paper proposes a modification of distribution factor methods for identifying the load contribution in transmission open access, with regard to the load power factor. The method may be considered the first pricing strategy proposed for bilateral transactions for transmission usage based on the actual use of the transmission network. Its merit lies in incorporating the load power factor into GLDF methods, so that the transmission cost is allocated not only on the amount of power flow but also on the load characteristics. A case study on the IEEE 30-bus system was conducted to illustrate the contribution of the proposed method in allocating transmission usage to users in a fair manner.
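As an illustration of the allocation idea this abstract describes, here is a minimal Python sketch; the flow shares, power factors and the weighting rule are hypothetical illustrations, not the authors' exact formulation:

    # Illustrative sketch: allocate a line's cost to loads in proportion to their
    # distribution-factor share of the line flow, re-weighted by load power factor.
    # All numbers and the weighting rule below are hypothetical.
    def allocate_line_cost(line_cost, usage_share, power_factor, alpha=1.0):
        """usage_share[k]: fraction of the line flow attributed to load k
        (e.g. from generalized load distribution factors, GLDF).
        power_factor[k]: displacement power factor of load k (0..1].
        Loads with poorer power factor carry a proportionally larger share."""
        weights = {k: usage_share[k] * (1.0 / power_factor[k]) ** alpha
                   for k in usage_share}
        total = sum(weights.values())
        return {k: line_cost * w / total for k, w in weights.items()}

    # Example: one line costing 100 units, two loads with equal flow shares
    # but different power factors.
    print(allocate_line_cost(100.0,
                             {"load_A": 0.5, "load_B": 0.5},
                             {"load_A": 0.95, "load_B": 0.80}))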
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, L. H. "Backward Monte Carlo Method Based on Radiation Distribution Factor." Journal of Thermophysics and Heat Transfer 18, no. 1 (2004): 151–53. http://dx.doi.org/10.2514/1.2555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rufin, Bidounda, Michel Koukouatikissa Diafouka, Réolie Foxie Mizélé Kitoti, and Dominique Mizère. "The Bivariate Extended Poisson Distribution of Type 1." European Journal of Pure and Applied Mathematics 14, no. 4 (2021): 1517–29. http://dx.doi.org/10.29020/nybg.ejpam.v14i4.4151.

Full text
Abstract:
In this paper, we will construct the bivariate extended Poisson distribution whichgeneralizes the univariate extended Poisson distribution. This law will be obtained by the method of the product of its marginal laws by a factor. This method was demonstrated in [7]. Thus we call the bivariate extended Poisson distribution of type 1 the bivariate extended Poisson distribution obtained by the method of the product of its marginal distributions by a factor. We will show that this distribution belongs to the family of bivariate Poisson distributions and and will highlight the conditions relating to the independence of the marginal variables. A simulation study was realised.
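The "product of the marginal laws by a factor" construction mentioned here can be sketched in a general, hedged form (the exact factor used by the authors may differ):

    % "Product of the marginals by a factor": the bracketed factor has zero
    % expectation under independence, so f_1 and f_2 remain the exact marginals.
    f(x,y) = f_1(x)\, f_2(y)\left[1 + \alpha\left(e^{-x} - \mathbb{E}[e^{-X}]\right)\left(e^{-y} - \mathbb{E}[e^{-Y}]\right)\right],
    \qquad \alpha \ \text{chosen so that } f(x,y) \ge 0 .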
APA, Harvard, Vancouver, ISO, and other styles
4

Wulandari, Rinjani Tri, and Anton Mulyono Azis. "The Saving Matrix Method for Improving Distribution Efficiency." Jurnal Manajemen Indonesia 22, no. 2 (2022): 217. http://dx.doi.org/10.25124/jmi.v22i2.4239.

Full text
Abstract:
The delivery route for the Bandung 40400 Mail Processing Center is determined based on estimation using the zoning system, and this may trigger delays. This happens because each vehicle can only deliver to one distribution center at a time, which keeps the load factor of vehicle utilization below 20%. This study aims to obtain the optimal route and delivery schedule to improve distribution efficiency and increase the load factor of vehicle capacity utilization. This quantitative research uses the Saving Matrix algorithm on all distribution channels. The results indicate that the optimal delivery routes consist of 5 to 6 clusters. This saves mileage of up to 144 km per route per day and saves distribution costs of up to Rp. 39,146.50 per delivery. In addition, the load factor increased by 18.37% for dropping 1 and 16.49% for dropping 2. By using the proposed route, a delivery schedule is obtained with clearer departure and arrival times and time savings of up to 275 minutes. Furthermore, expanding the scope of the research and comparing the level of effectiveness between distribution centers in various regions can be carried out as a follow-up study.
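For context, a minimal Python sketch of the savings-matrix idea (Clarke-Wright style) underlying the study; the distances are hypothetical and the paper's clustering, capacity and scheduling details are not reproduced:

    # Savings-matrix sketch: the saving of serving stops i and j on one route
    # instead of two separate depot round trips is
    # S(i, j) = d(depot, i) + d(depot, j) - d(i, j). Distances are hypothetical.
    from itertools import combinations

    depot_dist = {"A": 12, "B": 9, "C": 15}                  # depot -> stop (km)
    pair_dist = {("A", "B"): 7, ("A", "C"): 10, ("B", "C"): 6}

    savings = {}
    for i, j in combinations(depot_dist, 2):
        d_ij = pair_dist.get((i, j), pair_dist.get((j, i)))
        savings[(i, j)] = depot_dist[i] + depot_dist[j] - d_ij

    # Merge stops greedily in order of decreasing saving (capacity checks omitted).
    for (i, j), s in sorted(savings.items(), key=lambda kv: -kv[1]):
        print(f"saving for serving {i} and {j} on one route: {s} km")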
 Keywords—Saving matrix algorithm; Delivery routes; Load factor; Scheduling
APA, Harvard, Vancouver, ISO, and other styles
5

Abdul Saleem, Shaik, J. V. G. Rama Rao, and Reddy Ch Lokeshwar. "Rehabilitation and Techno design of Distribution transformer and 11kV feeder of a radial distribution system." E3S Web of Conferences 472 (2024): 01019. http://dx.doi.org/10.1051/e3sconf/202447201019.

Full text
Abstract:
In any transmission and distribution system, performance depends mainly on voltage regulation and efficiency. Hence, for better operation of the system, it is necessary to maintain constant voltage regulation with a good power factor and low power loss throughout the feeder. This can be achieved by various methods; among them, placing a capacitor bank in densely loaded areas is the best-proven method. This research work considered feeder-5 of the Al-Uwainath primary substation, which was suffering from low-voltage and poor-regulation problems. The research modeled and analyzed three different rehabilitation techniques for this low-voltage, poor-regulation and power-factor problem and recommended the most efficient of the three. As part of this project, the same site conditions were simulated in ETAP software after collecting feeder data from the site, and performance was checked by calculating voltage regulation, power losses and power factor for all three methods; the most efficient method of feeder-5 rehabilitation was then recommended by comparing the results of the three methods with the existing system.
APA, Harvard, Vancouver, ISO, and other styles
6

Zulkifli, Akhmad, Meisarah Riandini, B. Herawan Hayadi, and Elyandri Prasiwiningrum. "Application Method Certainty Factor in Electrical Damage." JOURNAL OF ICT APLICATIONS AND SYSTEM 2, no. 1 (2023): 25–28. http://dx.doi.org/10.56313/jictas.v2i1.236.

Full text
Abstract:
Electricity is a basic need of human life and is used for many kinds of activity, such as lighting and cooking; almost all daily activities use it, and almost every home in Indonesia, whether in a city or a village, is already connected to the grid. To carry and distribute electricity to distant homes, offices and institutions, distribution transformers are needed. A distribution transformer has the specific purpose of stepping a high voltage down to a low voltage, so that the voltage supplied matches the ratings of the customers' electrical equipment or the load in general. To help handle distribution transformer damage, a branch of computer science called expert systems is needed. An expert system is a computer-based system that uses knowledge, facts and reasoning techniques to solve problems that could usually only be solved by an expert in a particular field (Putri, 2020). The method used in this research is the Certainty Factor. This study applies the certainty factor method to diagnosing electrical damage. For the selected case of damage P1 (transformer oil leaking out of the transformer body), the obtained decision accuracy level was 5.650198%, indicating that the certainty-factor expert system can handle the damage and deliver a good diagnosis of the electrical fault.
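For context, a minimal Python sketch of the usual certainty-factor combination rule; the rule CF values are hypothetical and the paper's knowledge base is not reproduced:

    # Certainty factor (CF) combination for positive evidences supporting the
    # same hypothesis: CF_comb = CF1 + CF2 * (1 - CF1), applied cumulatively.
    def combine_cf(cf_values):
        combined = 0.0
        for cf in cf_values:          # assumes all CFs are in [0, 1]
            combined = combined + cf * (1.0 - combined)
        return combined

    # Hypothetical rule CFs for symptoms pointing to damage P1
    # (transformer oil leaking from the transformer body).
    print(combine_cf([0.4, 0.6, 0.2]))   # -> 0.808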
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Shu Zhi. "Improved Moment Distribution Method and its Application." Applied Mechanics and Materials 353-356 (August 2013): 3163–66. http://dx.doi.org/10.4028/www.scientific.net/amm.353-356.3163.

Full text
Abstract:
This paper changes the usual way the moment distribution method is applied to problems: rather than repeatedly fixing and releasing joints to balance the moments, the bending stiffness is corrected and the moment carryover factor is deduced, so that after distributing the moment once and carrying it over twice, the exact moment solution is obtained. The approach is particularly suitable for calculating the internal forces of continuous beams and for determining envelope diagrams and influence lines of beams.
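For context, a minimal Python sketch of the classical distribution and carryover factors that the paper modifies; the member stiffnesses and the joint moment are hypothetical:

    # Classical moment distribution at a joint: each member takes a share of the
    # unbalanced moment equal to its distribution factor DF_i = k_i / sum(k),
    # and carries half of it over to its far (fixed) end.
    member_stiffness = {"beam_left": 4.0, "beam_right": 6.0}   # hypothetical 4EI/L values
    total_k = sum(member_stiffness.values())
    distribution_factors = {m: k / total_k for m, k in member_stiffness.items()}

    unbalanced_moment = -50.0   # kN*m at the joint, hypothetical
    for m, df in distribution_factors.items():
        distributed = -unbalanced_moment * df      # balancing moment taken by the member
        carried_over = 0.5 * distributed           # standard carryover factor of 1/2
        print(m, round(df, 3), round(distributed, 1), round(carried_over, 1))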
APA, Harvard, Vancouver, ISO, and other styles
8

Koukouatikissa Diafouka, M., C. G. Louzayadio, R. O. Malouata, N. R. Ngabassaka, and R. Bidounga. "ON A BIVARIATE KATZ’S DISTRIBUTION." Advances in Mathematics: Scientific Journal 11, no. 10 (2022): 955–68. http://dx.doi.org/10.37418/amsj.11.10.11.

Full text
Abstract:
In this paper, we propose the bivariate distribution of the univariate Katz distribution [7] using the technique of the product of marginal distributions by a multiplicative factor. This method has been examined in [11] and used in [9] to construct a bivariate Poisson distribution. The obtained model is a good way to unify bivariate Poisson, bivariate binomial and bivariate negative binomial distributions and has interesting properties. Among others, the correlation coefficient of the obtained model can be either positive, negative, or null, and the necessary condition of zero correlation is a necessary and sufficient condition for independence. We used two methods to estimate the parameters: the method of moments and the maximum likelihood method. An application to concrete insurance data has been made. This data concerns natural events insurance in the USA and third-party liability automobile insurance in France [13].
APA, Harvard, Vancouver, ISO, and other styles
9

Jiří, Brychta, Janeček Miloslav, and Walmsley Alena. "Crop-management factor calculation using weights of spatio-temporal distribution of rainfall erosivity." Soil and Water Research 13, No. 3 (2018): 150–60. http://dx.doi.org/10.17221/100/2017-swr.

Full text
Abstract:
Inappropriate integration of the USLE or RUSLE equations with GIS tools and Remote Sensing (RS) data has caused many simplifications and distortions of their original principles. Many methods of C and R factor estimation were developed owing to the lack of optimal data for calculations according to the original methodology. This paper focuses on crop-management factor (C) evaluation weighted by a fully distributed form of the rainfall erosivity factor (R) distribution throughout the year. We used high-resolution (1-min) data from 31 ombrographic stations (OS) in the Czech Republic (CR) to create monthly R maps. All steps of the relatively time-consuming C calculation were automated in a GIS environment with an innovative procedure for determining the R factor weight of each agro-technical phase by the geographic location of the land parcel. Very high spatial and temporal variability of rainfall erosivity within each month and throughout the year can be observed in our results. This highlights the importance of calculating the C factor with a correctly applied method that emphasizes the geographic location of the given land parcels.
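A minimal Python sketch of the weighting idea described above; the soil loss ratios and erosivity shares are hypothetical, and the authors' GIS automation is not reproduced:

    # Crop-management factor C as a weighted average of soil loss ratios (SLR),
    # weighted by the share of annual rainfall erosivity (R) falling in each
    # agro-technical period. All numbers below are hypothetical.
    periods = [
        # (soil loss ratio for the period, share of annual R in the period)
        (0.45, 0.10),   # seedbed
        (0.30, 0.35),   # establishment
        (0.15, 0.40),   # full canopy
        (0.25, 0.15),   # residue / after harvest
    ]
    c_factor = sum(slr * r_share for slr, r_share in periods)
    print(round(c_factor, 3))   # weighted C for the parcel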
APA, Harvard, Vancouver, ISO, and other styles
10

Ihaddadene, Razika, Nabila Ihaddadene, and Marouane Mostefaoui. "Estimation of monthly wind speed distribution basing on hybrid Weibull distribution." World Journal of Engineering 13, no. 6 (2016): 509–15. http://dx.doi.org/10.1108/wje-09-2016-0084.

Full text
Abstract:
Purpose – The purpose of this paper is to analyze and compare four numerical methods in order to identify the most suitable one for describing the wind speed distribution of M’Sila, a province of northern Algeria. Design/methodology/approach – The site chosen in this investigation is characterized by calm winds; in this case, the appropriate wind speed distribution is the hybrid Weibull. Findings – The four numerical methods used in the present paper are the maximum likelihood method, the graphical method, the moment method and the energy pattern factor method. The hybrid Weibull distributions obtained with these approaches are compared with the measured data via three statistical parameters, namely the correlation coefficient, the root mean square error and the Chi-square error. Originality/value – The results showed that the moment method is the most suitable one for describing the monthly and annual hybrid Weibull wind speed parameters of this region.
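A minimal Python sketch of two of the estimators named in the abstract, applied to the non-zero wind speeds of a hybrid Weibull model; the wind speed records are hypothetical and the calm probability is simply the observed fraction of zero readings:

    # Hybrid Weibull = probability of calms (v == 0) plus a Weibull for v > 0.
    # Moment method: k from the coefficient of variation (common approximation).
    # Energy pattern factor method: Epf = mean(v^3)/mean(v)^3, k = 1 + 3.69/Epf^2.
    import math
    import statistics

    speeds = [0.0, 0.0, 2.1, 3.4, 4.0, 5.2, 6.1, 2.8, 0.0, 3.9]   # hypothetical m/s
    calm_prob = speeds.count(0.0) / len(speeds)
    v = [s for s in speeds if s > 0.0]

    mean_v = statistics.mean(v)
    std_v = statistics.stdev(v)

    # Moment (empirical) method:
    k_moment = (std_v / mean_v) ** -1.086
    c_moment = mean_v / math.gamma(1.0 + 1.0 / k_moment)

    # Energy pattern factor method:
    epf = statistics.mean([x ** 3 for x in v]) / mean_v ** 3
    k_epf = 1.0 + 3.69 / epf ** 2
    c_epf = mean_v / math.gamma(1.0 + 1.0 / k_epf)

    print(calm_prob, (k_moment, c_moment), (k_epf, c_epf))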
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Distribution factor method"

1

Costa, Bárbara Cristina Alves da. "Load measurement error influence on friction factor calibration of pipe water distribution networks through the reverse transient method and genetic algorithm." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13621.

Full text
Abstract:
The study of hydraulic networks, whether for operation or for assessing the viability of expanding or rehabilitating them, begins with calibration, understood here as the identification of parameters such as the friction factor, roughness and diameter. The Inverse Transient Method combined with a genetic algorithm proves efficient in this task. The method employs the Method of Characteristics to solve the equations of motion for transient flow in network pipes, and the optimization of solutions is based on evolutionary theory and evaluated by an objective function, which in this study is the sum of the absolute differences between the heads measured and those calculated by the model for each set of solutions. Since the purpose of developing mathematical models for calibrating hypothetical networks is to use them on real networks, where head data collection is subject to measurement errors caused by equipment defects, unfavourable environmental conditions or other random effects, and given the relevance of pipe friction factors through their relationship with the head losses that must be controlled for optimal network operation and a continuous supply in adequate quantity and operating conditions, this work verifies how transient head measurement errors interfere with the identification of friction factors in two hypothetical hydraulic networks. The networks differ in size in terms of the number of loops, nodes and pipes, and each is fed by one reservoir. The transient conditions are generated by the manoeuvre of a valve installed at one node of each network. Head data collection is restricted to 20% of the nodes of each network, one of which is the node where the valve is located. The observation time of the hydraulic transient is restricted to the duration of the valve manoeuvre, 20 s, sampled at intervals of 0.1 s, giving 200 head records. The steady-state condition of the networks is initially unknown; knowledge about it is restricted to the reservoir heads, nodal demands and pipe diameters, while the friction factors are initially stipulated. The steady-state and transient conditions and the friction factors are determined with a hydraulic model, producing transient heads that are conventionally taken as true; these then receive increments of various systematic and random errors, generating new heads that are treated as data collected with measurement errors. From these new heads, friction factors are identified and compared with those obtained for the ideal case of heads without measurement errors. The comparison is made using the mean relative error and the optimal objective function. The results show that measurement errors interfere with the identification of the friction factors, although it was not possible to establish a clear relationship between them.
APA, Harvard, Vancouver, ISO, and other styles
2

Jamali, Shojaeddin. "Assessing load carrying capacity of existing bridges using SHM techniques." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134484/1/Shojaeddin_Jamali_Thesis.pdf.

Full text
Abstract:
This research provides a multi-tier framework for load carrying capacity assessment of bridges using structural health monitoring techniques. In this framework, four tiers are developed ranging from simplified to detailed tiers for holistic bridge assessment. Performance of each tier has been validated using various numerical and experimental examples of bridges and beam-like structures.
APA, Harvard, Vancouver, ISO, and other styles
3

Brunet, Laurence. "Répartition spatiale de la densité électronique moléculaire en composantes atomiques in situ" [Spatial partitioning of the molecular electron density into in situ atomic components]. Paris 6, 1987. http://www.theses.fr/1987PA066042.

Full text
Abstract:
An original modelling of the molecular electron density as a superposition of spherical atomic densities centred on the nuclei, obtained from molecular orbital calculations, with three invariants: the total charge, the electron-nucleus potential and the dipole moment. The numerical method for determining the densities associated with the atoms in a basis of Gaussian orbitals is described. Information is obtained on the spatial redistribution of the electronic charge upon bond formation and on the effective atomic structure factors, after Fourier transformation, taking the effect of the environment into account. The method is extended to the decomposition of the molecular electron density into component densities other than atomic ones and not necessarily centred on the nuclei. Beyond applications to the study of molecular structure and electronic properties, its use is proposed for the calculation of (linear or nonlinear) optical polarizabilities, for the analysis of crystallographic data obtained by X-ray diffraction, and for the prediction of the structure of molecular crystals.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Runtong. "Measurement of effective diffusivity : chromatographic method (pellets & monoliths)." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608352.

Full text
Abstract:
This thesis aims to find the effective diffusivity (Deff) of a porous material, γ-alumina, using an unsteady-state method with two inert gases at ambient conditions and no reactions. For porous materials, Deff is important because it determines the amount of reactant that transfers to the surface of the pores. When Deff is known, the apparent tortuosity factor of γ-alumina is calculated using the parallel pore model. The apparent tortuosity factor is important because: (a) it can be used to back-calculate Deff at reacting conditions; (b) once Deff with reactions is known, the Thiele modulus can be calculated and hence the global reaction rate can be found; (c) the apparent tortuosity factor is also important for modelling purposes (e.g. modelling a packed-bed column or a catalytic combustion reactor packed with porous γ-alumina in various shapes and monoliths). Experimental measurements were performed to determine the effective diffusivity of a binary pair of non-reacting gases (He in N2, and N2 in He) in spherical γ-alumina pellets (1 mm diameter), and in γ-alumina washcoated monoliths (washcoat thickness 20 to 60 µm, on 400 cpsi (cells per square inch) cordierite support). The method used is based on the chromatographic technique, where a gas flows through a tube packed with the sample to be tested. A pulse of tracer gas is injected (e.g. using sample loops of 0.1, 0.2, 0.5 ml) and, by using an on-line mass spectrometer, the response at the outlet of the packed bed is monitored over time. For the spherical pellets, the tube i.d. = 13.8 mm and the packed bed depths were 200 and 400 mm. For the monoliths, the tube i.d. = 7 mm and the packed lengths were 500 and 1000 mm. When the chromatographic technique was applied to the monoliths, it was observed that experimental errors can be significant and it is very difficult to interpret the data. However, the technique worked well with the spherical pellets, and the effective diffusivity of He in N2 was 0.75–1.38 × 10⁻⁷ m² s⁻¹, and for N2 in He was 1.81–3.10 × 10⁻⁷ m² s⁻¹. Using the parallel pore model to back-calculate the apparent tortuosity factor, a value between 5 and 9.5 was found for the pellets.
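A minimal Python sketch of the parallel pore model back-calculation mentioned above; the porosity and in-pore diffusivity values are hypothetical, not the thesis's measured data:

    # Parallel pore model: D_eff = (porosity / tortuosity) * D_pore,
    # so the apparent tortuosity factor is tau = porosity * D_pore / D_eff.
    porosity = 0.6          # hypothetical gamma-alumina pellet porosity
    d_pore = 1.2e-6         # m^2/s, hypothetical combined molecular + Knudsen diffusivity
    d_eff = 1.0e-7          # m^2/s, order of the measured effective diffusivities

    tau = porosity * d_pore / d_eff
    print(round(tau, 1))    # apparent tortuosity factor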
APA, Harvard, Vancouver, ISO, and other styles
5

來海, 博央 (Hirohisa Kimachi), 田中, 拓, et al. "モードⅠき裂を有する長繊維強化複合材料における塑性領域の弾塑性有限要素法解析" [Elastic-plastic finite element analysis of the plastic zone in long-fibre-reinforced composite materials with a Mode I crack]. 日本機械学会 (The Japan Society of Mechanical Engineers), 2000. http://hdl.handle.net/2237/9173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Horsák, Libor. "Optimalizace tvaru háků v pecích petrochemického průmyslu" [Optimization of hanger design in petrochemical industry heaters]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-228912.

Full text
Abstract:
The master's thesis "Optimization of hanger design in petrochemical industry heaters" describes a procedure and the means leading to better hanger designs in various cases. It covers several problems that must be solved in hanger design. A technical assessment is carried out on hangers of various designs, and the optimization procedure is demonstrated on one chosen hanger design.
APA, Harvard, Vancouver, ISO, and other styles
7

Häggblom, Johan, and Jonathan Jerner. "Photovoltaic Power Production and Energy Storage Systems in Low-Voltage Power Grids." Thesis, Linköpings universitet, Fordonssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156875.

Full text
Abstract:
In recent years, photovoltaic (PV) power production has seen an increase, and PV power systems are often located in the distribution grids close to the consumers. Since distribution grids are rarely designed for power production, investigation of its effects is needed. This thesis shows that PV power production will cause voltages to rise, potentially to levels exceeding the limits that grid owners have to abide by. A model of a distribution grid is developed in MathWorks MATLAB. The model contains a transformer, cables, households, energy storage systems (ESS:s) and photovoltaic power systems. The system is simulated by implementing a numerical Forward Backward Sweep Method, solving for powers, currents and voltages in the grid. PV power systems are added in different configurations along with different configurations of ESS:s. The results are analysed, primarily concerning voltages and voltage limits. It is concluded that adding PV power production in the distribution grid affects voltages, more or less depending on where in the grid the systems are placed and what peak power they have. It is also concluded that having energy storage systems in the grid, changing the power factor of the inverters of the PV systems, or lowering the transformer secondary-side voltage can bring the voltages down. (Report no. LiTH-ISY-EX--19/5194--SE)
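A minimal Python sketch of a forward-backward sweep iteration for a small radial feeder; the two line sections and loads are hypothetical, and the thesis's full model with transformer, households, PV and energy storage is not reproduced:

    # Forward-backward sweep for a 3-node radial feeder (source - node 1 - node 2).
    # Backward sweep: accumulate load currents from the feeder end towards the source.
    # Forward sweep: update node voltages from the source using the branch currents.
    source_v = 230.0 + 0j                          # slack voltage (V), hypothetical
    line_z = [0.05 + 0.02j, 0.08 + 0.03j]          # branch impedances (ohm), hypothetical
    loads_s = [2000 + 500j, 1500 + 300j]           # constant-power loads (VA), hypothetical

    v = [source_v, source_v, source_v]             # node 0 = source, nodes 1..2 = loads
    for _ in range(20):                            # fixed number of sweeps for simplicity
        # backward sweep: load currents, then branch currents
        i_load = [(s / v[k + 1]).conjugate() for k, s in enumerate(loads_s)]
        i_branch = [i_load[0] + i_load[1], i_load[1]]
        # forward sweep: voltage drop along each branch
        v[1] = v[0] - line_z[0] * i_branch[0]
        v[2] = v[1] - line_z[1] * i_branch[1]

    print([round(abs(x), 2) for x in v])           # node voltage magnitudes (V)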
APA, Harvard, Vancouver, ISO, and other styles
8

Sadykova, Saltanat. "Electric microfield distributions and structure factors in dense plasmas." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2011. http://dx.doi.org/10.18452/16316.

Full text
Abstract:
The electric microfield distributions (EMDs) and their tails have been studied for an electron one-component plasma (OCP) and for electron-positron, hydrogen and singly ionized alkali two-component plasmas (TCP) within different pseudopotential models (PM), and compared with Molecular Dynamics (MD) and Monte Carlo simulations as well as with experiments. The theoretical methods used for calculating the EMDs are a coupling-parameter integration technique (CPIT) developed by C. A. Iglesias for OCP and the generalized CPIT proposed by J. Ortner et al. for TCP. We studied the EMDs within the screened Kelbg, Deutsch, and Hellmann-Gurskii-Krasko (HGK) PMs, which take into account quantum-mechanical effects, screening effects, and the ion shell structure (HGK) due to the Pauli exclusion principle. The screening effects were introduced on the basis of the Bogoljubov-Born-Green-Kirkwood-Yvon method. We used the screened HGK pseudopotential in the Debye approximation as well as in a moderately coupled plasma approximation. The influence of the plasma coupling parameter on the EMD, along with the ion shell structure, was investigated. We determined different types of asymptotic behaviour of the EMD tails in dependence on the plasma type, parameters and radiator. Comparison of a synthetic Li2+ Lyman spectrum as well as of a synthetic Li II 548 nm line with experimental data allows us to conclude that the EMD obtained with the CPIT method for OCP within the HGK PM and with MD provides good agreement with experiment.
We have calculated the partial and charge-charge static structure factors (SSF) for alkali and Be2+ plasmas using the method described by G. Gregori et al. We have calculated the dynamic structure factors (DSF) for alkali plasmas using the method of moments developed by V. M. Adamyan et al. In both methods the screened HGK pseudopotential has been used.
APA, Harvard, Vancouver, ISO, and other styles
9

Yuan, Miao. "Corporate Default Predictions and Methods for Uncertainty Quantifications." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81905.

Full text
Abstract:
Regarding quantifying uncertainties in prediction, two projects with different perspectives and application backgrounds are presented in this dissertation. The goal of the first project is to predict corporate default risks based on large-scale time-to-event and covariate data in the context of controlling credit risks. Specifically, we propose a competing risks model to incorporate exits of companies due to default and other reasons. Because of the stochastic and dynamic nature of the corporate risks, we incorporate both company-level and market-level covariate processes into the event intensities. We propose a parsimonious Markovian time series model and a dynamic factor model (DFM) to efficiently capture the mean and correlation structure of the high-dimensional covariate dynamics. For estimating parameters in the DFM, we derive an expectation maximization (EM) algorithm in explicit forms under necessary constraints. For multi-period default risks, we consider both the corporate-level and the market-level predictions. We also develop prediction interval (PI) procedures that synthetically take uncertainties in the future observation, parameter estimation, and the future covariate processes into account. In the second project, to quantify the uncertainties in the maximum likelihood (ML) estimators and compute the exact tolerance interval (TI) factors regarding the nominal confidence level, we propose algorithms for two-sided control-the-center and control-both-tails TI for complete or Type II censored data following the (log)-location-scale family of distributions. Our approaches are based on pivotal properties of ML estimators of parameters for the (log)-location-scale family and utilize Monte Carlo simulations. For Type I censored data, only approximate pivotal quantities exist, so an adjusted procedure is developed to compute the approximate factors. The observed CP is shown to be asymptotically accurate by our simulation study. Our proposed methods are illustrated using real-data examples.
APA, Harvard, Vancouver, ISO, and other styles
10

Kriegler, Benjamin Jacobus. "Probabilistic analysis of monthly peak factors in a regional water distribution system." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85738.

Full text
Abstract:
Thesis (MScEng), Stellenbosch University, 2013.
The design of a water supply system relies on knowledge of the water demands of its specific end-users. It is also important to understand the end-users' temporal variation in water demand. Failure of the system to provide the required volume of water at the required flow-rate is deemed a system failure. The system therefore needs to be designed with sufficient capacity to ensure that it is able to supply the required volume of water during the highest demand periods. In practice, bulk water supply systems do not have to cater for the high-frequency, short-duration peak demand scenarios of the end-user, such as the peak hour or peak day events, as the impact of these events is reduced by the provision of water storage capacity at the off-take from the bulk supply system. However, for peak demand scenarios with durations longer than an hour or a day, depending on the situation, the provision of sufficient storage capacity to reduce the impact on the bulk water system becomes impractical and could lead to potential water quality issues during low demand periods. It is therefore a requirement that bulk water systems be designed to meet the peak weekly or peak monthly end-user demands. These peak demand scenarios usually occur only during a certain portion of the year, generally concentrated in a two- to three-month period during the drier months. Existing design guidelines usually follow a deterministic design approach, whereby a suitable DPF (design peak factor) is applied to the average annual daily system demand in order to determine the expected peak demand on the system. This DPF does not account for the potential variability in end-user demand profiles, or the impact that end-storage has on the required peak design factor of the bulk system. This study investigated the temporal variations of end-user demand on two bulk water supply systems located in the winter rainfall region of the Western Cape province of South Africa. The data analysed were the monthly measured consumption figures of different end-users supplied from the two systems, extending over 14 years. Actual monthly peak factors were extracted from these data and used in deterministic and probabilistic methods to determine the expected monthly peak factor for both the end-user and the system design. The probabilistic method made use of a Monte Carlo analysis, whereby the actual recorded monthly peak factors for each end-user per bulk system were used as inputs into discrete probability functions. The Monte Carlo analysis executed 1 500 000 iterations in order to produce probability distributions of the monthly peak factors for each system. The deterministic and probabilistic results were compared with the actual monthly peak factors as calculated from the existing water use data, as well as with current DPFs as published in guidelines used in the industry. The study demonstrated that the deterministic method would overstate the expected peak system demand and result in an oversized system. The probabilistic method yielded good results and compared well with the actual monthly peak factors; it is thus deemed an appropriate tool for determining the required DPF of a bulk water system for a chosen reliability of supply. The study also indicated the DPFs proposed by current guidelines to be too low.
The study identified a potential relationship between the average demand of an end-user and the expected maximum monthly peak factor, whereas in current guidelines peak factors are not indicated as being influenced by the end-user average demand.
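A minimal Python sketch of the Monte Carlo step described above; the end-user demands and observed monthly peak factors are hypothetical, and the iteration count is reduced from 1 500 000:

    # Monte Carlo sampling of system monthly peak factors from per-end-user
    # empirical distributions of recorded monthly peak factors.
    import random

    # end-user: (average demand, observed monthly peak factors) - hypothetical
    end_users = {
        "town_A": (40.0, [1.30, 1.50, 1.40, 1.60, 1.45]),
        "town_B": (25.0, [1.20, 1.70, 1.50, 1.35, 1.60]),
        "farm_C": (10.0, [1.80, 2.10, 1.90, 2.40, 2.00]),
    }
    total_avg = sum(avg for avg, _ in end_users.values())

    random.seed(1)
    system_pf = []
    for _ in range(100_000):
        system_peak = sum(avg * random.choice(pfs) for avg, pfs in end_users.values())
        system_pf.append(system_peak / total_avg)

    system_pf.sort()
    print(system_pf[int(0.98 * len(system_pf))])   # peak factor at 98% supply reliability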
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Distribution factor method"

1

Tanabe, K., and United States National Aeronautics and Space Administration, eds. A new method of determining acid base strength distribution and a new acidity-basicity scale for solid catalysts: The strongest point, Ho. National Aeronautics and Space Administration, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reznik, Galina. Marketing. INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1242303.

Full text
Abstract:
The textbook, now in its fourth edition, contains a detailed presentation of the topics of the discipline 'Marketing'. The key concepts of the discipline are considered in an accessible and understandable form. In particular, the reader will get an idea of the essence of marketing, its types, principles, functions and basic elements, as well as the marketing environment and the conditions in which marketing can be applied. The textbook covers the concept of the market, its types, capacity and segmentation; competition, its types, and the role of the enterprise in competing to achieve key success factors. Considerable attention is paid to the concepts of 'goods' and 'product' and their distinctive features. The essence of product distribution is explained, and the features of marketing logistics as a method of managing product promotion channels are given. The textbook also includes a bibliographic list, questions for self-control, and tests, which allow the course 'Marketing' to be studied more fully. It meets the requirements of the latest generation of federal state educational standards of higher education. For bachelors studying in the field of training 38.03.02 'Management'.
APA, Harvard, Vancouver, ISO, and other styles
3

Naumov, Vladimir. Consumer behavior. INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1014653.

Full text
Abstract:
The book describes the basic issues of consumer behavior on the basis of modeling the purchase decision-making process of customers in the sales area of a store and on shopping Internet sites. It classifies models of consumer behavior, drawing on research in economic, social and psychological theories and on empirical evidence regarding consumers' decision-making when purchasing goods, including from online stores. Methods of qualitative and quantitative research on consumer behavior and the fundamentals of statistical processing of empirical data are presented. Attention is paid to how consumers perceive brands and advertising messages, to the basic rules for the display of goods (merchandising) and its impact on consumer decisions, and to recommendations on using the psychology of consumer behavior in personal sales. An integrated model of consumer behavior in the Internet environment is presented, together with the process by which a visitor perceives the company and the factors influencing consumer choice of goods online. The book is intended for the preparation of bachelors in the fields of training 38.03.02 'Management' and 38.03.06 'Trading business', can be used for training bachelors in the field 43.03.01 'Service', and will also be useful for professionals working in marketing, distribution and sales.
APA, Harvard, Vancouver, ISO, and other styles
4

Machine Hour Rate Method of Distribution of Factory Indirect Expense. Creative Media Partners, LLC, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Machine Hour Rate Method of Distribution of Factory Indirect Expense. Creative Media Partners, LLC, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cookson, Richard, Susan Griffin, Ole F. Norheim, and Anthony J. Culyer, eds. Distributional Cost-Effectiveness Analysis. Oxford University Press, 2020. http://dx.doi.org/10.1093/med/9780198838197.001.0001.

Full text
Abstract:
Distributional cost-effectiveness analysis aims to help healthcare and public health organizations make fairer decisions with better outcomes. Standard cost-effectiveness analysis provides information about total costs and effects. Distributional cost-effectiveness analysis provides additional information about fairness in the distribution of costs and effects—who gains, who loses, and by how much. It can also provide information about the trade-offs that sometimes occur between efficiency objectives such as improving total health and equity objectives such as reducing unfair inequality in health. This is a practical guide to a flexible suite of economic methods for quantifying the equity consequences of health programmes in high-, middle-, and low-income countries. The methods can be tailored and combined in various ways to provide useful information to different decision makers in different countries with different distributional equity concerns. The handbook is primarily aimed at postgraduate students and analysts specializing in cost-effectiveness analysis but is also accessible to a broader audience of health sector academics, practitioners, managers, policymakers, and stakeholders. Part I is an introduction and overview for research commissioners, users, and producers. Parts II and III provide step-by-step technical guidance on how to simulate and evaluate distributions, with accompanying hands-on spreadsheet training exercises. Part IV concludes with discussions about how to handle uncertainty about facts and disagreement about values, and the future challenges facing this young and rapidly evolving field of study.
APA, Harvard, Vancouver, ISO, and other styles
7

Ballon, Paola, and Jorge Dávalos. Inequality and the changing nature of work in Peru. UNU-WIDER, 2020. http://dx.doi.org/10.35188/unu-wider/2020/925-9.

Full text
Abstract:
This paper identifies the socioeconomic drivers of earnings inequality in Peru in the period 2004–18. Using the ENAHO household surveys and data on routine task content of occupations, we apply inequality decomposition methods to the real earnings distribution, its quantiles, and the Gini index. We find that in this period inequality has reduced, with great improvement attributed to reductions in the gender wage gap and macroeconomic factors. However, we did not find strong evidence for factors related to changes in workers’ attributes or shifts in job characteristics, except for a slight enhancing effect of the task content of occupations, which increases in importance as we move from ‘poorer’ to ‘richer’ deciles.
APA, Harvard, Vancouver, ISO, and other styles
8

Filippi, Massimo, and Maria A. Rocca. Multiple Sclerosis: White Matter versus Gray Matter Involvement (The Cause of Disability in MS). Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199937837.003.0083.

Full text
Abstract:
The classic view of multiple sclerosis (MS) as a chronic, inflammatory-demyelinating condition affecting solely the white matter (WM) of the central nervous system (CNS) has been challenged by the demonstration, from pathologic and magnetic resonance imaging (MRI) studies, of an extensive and diffuse involvement of the gray matter (GM). This observation has driven the application of modern MR technology and methods of analysis to quantify the extent and distribution of damage to the different compartments of the CNS, with the ultimate goal of improving our understanding of the factors associated with the accumulation of clinical disability and cognitive impairment in these patients.
APA, Harvard, Vancouver, ISO, and other styles
9

Harper, Sarah. Demography: A Very Short Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780198725732.001.0001.

Full text
Abstract:
Demography—the study of people—addresses the size, distribution, composition, and density of populations, and considers the impact these factors have on individual lives and the changing structure of human populations. Each generation’s demographic composition influences a person’s life chances; the economic and political structures within which that life is lived; the person’s access to social and natural resources; and life expectancy. Demography: A Very Short Introduction considers how the global population has evolved over time and space and discusses the theorists, theories, and methods involved in studying population trends and movements. It also looks at the emergence of new demographic sub-disciplines and addresses some of the future population challenges.
APA, Harvard, Vancouver, ISO, and other styles
10

Kosakowska-Berezecka, Natasza, Magdalena Żadkowska, Brita Gjerstad, et al. Changing Country, Changing Gender Roles. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190265076.003.0005.

Full text
Abstract:
This chapter explores family lives of couples who migrate from a less gender-egalitarian (i.e., Poland) to a more egalitarian nation (i.e., Norway). The authors present selected results from a large-scale mixed-methods study drawing from interviews conducted longitudinally with couples in Poland and in Norway, interviews with public-sector servants and employers in Norway, and surveys on attitudes toward gender equality and men’s and women’s practices concerning division of parental roles and household duties in both Poland and Norway. The chapter examines the distribution of domestic responsibilities and potential change within cultural norms and practices in the context of migration and highlights cultural and psychological factors that facilitate social change from gender inequality to equality and from nonegalitarianism to egalitarianism among Polish migrant couples in Norway. The evolution of new, de-gendered family roles is discussed, with attention on factors enabling men to be more active in family life.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Distribution factor method"

1

Cheng, Jing, Ziyao He, Zhong Liu, and Lei Zhang. "Slope Reliability Analysis Based on Nonlinear Stochastic Finite Element Method." In Advances in Frontier Research on Engineering Structures. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8657-4_30.

Full text
Abstract:
In slope stability reliability analysis, a deterministic analysis method is usually used to calculate the safety factor that measures the stability of the slope, but the traditional deterministic analysis method cannot fully consider and describe the natural spatial variability of soil, so the calculated failure probability of the slope is not accurate enough. Addressing the spatial variability of soil mechanical parameters in slope stability analysis, this paper proposes a stochastic finite element method for calculating the distribution of the FS (factor of safety) of dam slopes, and an MC (Monte Carlo) strength-reduction combined method and an MC direct method are proposed to calculate the reliability of the slope. Taking an isotropic two-dimensional slope as an example: first, the random field is sampled to obtain the corresponding random field of material properties, and the slope displacement, stress and plastic zone results are calculated; then, on the basis of N_MC samplings of the random field: (i) combined method (M1): the strength reduction method is used to obtain the reduction coefficient for each sample, and its distribution, the slope failure probability and the reliability index are then calculated; (ii) MC direct method (M2): the viscoplastic method is used to solve for and judge slope instability, and the instability cases over all samples are counted to obtain the failure probability and reliability index of the slope. The results show that slope stability analysis considering the random field of material properties can give realistic and reliable results by comprehensively evaluating slope safety through the mean value, variance, distribution and reliability index of the slope safety factor.
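A minimal Python sketch of the Monte Carlo reliability step described above; the factor-of-safety samples come from a placeholder lognormal model rather than from a stochastic finite element analysis:

    # Slope failure probability and a simple reliability index from sampled
    # factors of safety (FS). The FS samples stand in for stochastic-FEM results.
    import random
    import statistics
    from math import log

    random.seed(0)
    fs_samples = [random.lognormvariate(log(1.3), 0.15) for _ in range(50_000)]

    p_failure = sum(fs < 1.0 for fs in fs_samples) / len(fs_samples)
    mu, sigma = statistics.mean(fs_samples), statistics.stdev(fs_samples)
    beta = (mu - 1.0) / sigma          # simple mean-value reliability index
    print(p_failure, round(beta, 2))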
APA, Harvard, Vancouver, ISO, and other styles
2

Hou, Shengjun, Gaojin Zhao, Yongfeng Yang, Fengjiao Fu, and Qilin Li. "A Method for Determining the Pile Location of Pile Based on the Point Safety Factor Distribution of Reinforced Slope." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1748-8_14.

Full text
Abstract:
Anti-slide piles are one of the supporting structures commonly used in landslide treatment, but the determination of the pile location is usually empirical. Taking a highway landslide in Yunnan Province as a study case, this paper proposes a method to determine the anti-slide pile location based on the point safety factor distribution along the sliding surface. The study found that the local sliding surface has large point safety factor values in the anti-slide section. As the proportion of the anti-slide section increases, the anti-sliding capacity of the slide surface can be fully utilized, and the reinforcement effect of the anti-slide pile is greater. Using the point safety factor to determine the pile location is a quantitative method, which enriches the design theory of landslide support structures.
APA, Harvard, Vancouver, ISO, and other styles
3

Feng, D. C. "An Accurate Method for Calculating Load Distribution Factor Kβ of Involute Gears." In Progress of Precision Engineering and Nano Technology. Trans Tech Publications Ltd., 2007. http://dx.doi.org/10.4028/0-87849-430-8.458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cao, Shicong, and Hao Zheng. "A POI-Based Machine Learning Method for Predicting Residents’ Health Status." In Proceedings of the 2021 DigitalFUTURES. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-5983-6_13.

Full text
Abstract:
A healthy environment is a key factor in public health. Since people's health depends largely on their lifestyle, a built environment that supports healthy living is becoming more important. With the right urban planning decisions, it is possible to encourage healthier living and save healthcare expenditure for society. However, a quantitative relationship has not yet been established between urban planning decisions and the health status of residents. With the abundance of data and computing resources, this research explores this relationship with a machine learning method. The data come from both OpenStreetMap and the American Centers for Disease Control and Prevention (CDC). By modeling the Point of Interest data and the geographic distribution of health-related outcomes, the research quantitatively explores the key factors in urban planning that could influence the health status of residents. It informs how to create a built environment that supports health and opens up possibilities for other data-driven methods in this field.
APA, Harvard, Vancouver, ISO, and other styles
5

Cha, Hyun Rok, K. S. Lee, Cheol Ho Yun, Hyeon Taek Son, and Tea Uk Jung. "A Considering Method of Density Distribution Factor for the Ceramic Motor with Soft Magnetic Composite Core." In High-Performance Ceramics V. Trans Tech Publications Ltd., 2008. http://dx.doi.org/10.4028/0-87849-473-1.211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Van Duong, Binh, Igor K. Fomenko, Denis N. Gorobtsov, et al. "An Integration of the Fractal Method and the Statistical Index Method for Mapping Landslide Susceptibility." In Progress in Landslide Research and Technology, Volume 3 Issue 1, 2024. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-55120-8_30.

Full text
Abstract:
Appropriate land use planning and the sustainable development of residential communities play a crucial role in the development of mountainous provinces in Vietnam. Because these regions are especially prone to natural disasters, including landslides, landslide studies can provide valuable data for determining the evolution of the landslide process and assessing landslide risk. This study was conducted to assess landslide susceptibility in Muong Khoa commune, Son La province, Vietnam, using the Statistical Index method (SI) and the integration of the Fractal method and Statistical Index method (FSI). To produce landslide susceptibility zonation (LSZ) maps, eight causative factors, including elevation, slope aspect, slope, distance to roads, distance to drainage, distance to faults, distance to geological boundaries, and land use, were considered. Using SI and FSI models, two landslide susceptibility zonation (LSZ) maps were produced in ArcGIS, and the study territory was categorized into five susceptibility zones: very low, low, moderate, high, and very high. The area percentage of susceptibility zones predicted by the SI model is 10.11, 18.49, 29.71, 28.59, and 13.10%, respectively. Meanwhile, the susceptibility map generated by the FSI model divided the study area into zones with corresponding area proportions of 18.92, 18.71, 20.01, 22.94, and 19.42%. Using the ROC method, the prediction performance of the two models was determined to be AUC = 71.18% (SI model) and AUC = 75.18% (FSI model). The AUC > 70% indicated that the models established a good relationship between the spatial distribution of past landslides and causative factors. In addition, the two models accurately predicted the occurrence of landslides in the study area. The FSI model has improved prediction performance by identifying the role of each factor in the landslide occurrences in the study area and, therefore, may be effectively utilized in other regions and contribute to Vietnam’s landslide prevention strategy.
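The Statistical Index weight used in this kind of susceptibility mapping is commonly computed as the natural log of the landslide density in a factor class relative to the density over the whole map. A small sketch with made-up class areas and landslide counts (not the study's data) is shown below.

```python
import numpy as np

# Hypothetical data for one causative factor (e.g., slope classes).
class_area   = np.array([12.0, 25.0, 30.0, 20.0, 13.0])  # km^2 per class
class_slides = np.array([1, 4, 10, 9, 6])                 # landslides per class

class_density = class_slides / class_area
map_density = class_slides.sum() / class_area.sum()

# Statistical Index (SI) weight per class: ln(class density / map density).
si = np.log(class_density / map_density)
print("SI weights:", np.round(si, 3))
# A cell's susceptibility score is then the sum of the SI weights of the
# classes it falls into, summed over all causative factors.
```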
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Chenyu, Anlin Wang, and Xiaotian Li. "Thermal Robustness Redesign of Electromagnet Under Multi-Physical Field Coupling." In Lecture Notes in Mechanical Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1876-4_21.

Full text
Abstract:
To address the durability problem of the proportional electromagnet used in proportional valves of engineering machinery, and to improve its resistance to thermal failure under random load conditions, a parametric redesign model of the proportional electromagnet is proposed based on multi-physics coupling theory and robust optimization theory. The article takes a proportional electromagnet with a basin-type suction structure as the research object. The parametric model was verified through steady-state electromagnet tests and temperature distribution tests, and, while preserving the accuracy of the calculated electromagnetic force, the conductivity and heat transfer parameters of uncertain magnitude in the system were calibrated. Taking the key structural parameters of the electromagnet and coil as control factors, and the enameled wire diameter of the coil, which varies with production process conditions, as the noise factor, an orthogonal experiment was designed based on the Taguchi method, and a thermal robustness redesign evaluation function of the proportional electromagnet was defined in a multi-factor weighted form. The thermal load of the electromagnet obtained from an excavator field test was used as the response for calculating the heat source. Under the constraint of an allowable temperature rise that does not cause coil insulation failure, a redesign method for the key structural parameters that minimizes changes in the system response under noise interference is given. The studies show that coil length and number of turns are the main factors affecting the thermal robustness of proportional electromagnets, and that the window shape of the coil, which is determined by the winding process, governs the magnetic properties and heat transfer capability of the system. The proposed thermal robustness redesign method has engineering reference value for the customized design of electromechanical products under magneto-thermal coupling conditions.
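Taguchi-style robustness evaluation of the kind mentioned here typically scores each orthogonal-array run with a signal-to-noise ratio; for a temperature rise, the smaller-the-better form applies. The sketch below uses invented responses and is not the authors' evaluation function, which combines several factors in a weighted form.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical temperature-rise responses (K) of each orthogonal-array run,
# repeated over noise levels (e.g., wire-diameter tolerance).
runs = {
    "run1": [62.0, 65.5, 63.8],
    "run2": [58.1, 59.0, 58.6],
    "run3": [71.3, 75.9, 73.0],
}
for name, y in runs.items():
    print(name, "S/N =", round(sn_smaller_the_better(y), 2), "dB")
# The run (and hence factor-level combination) with the highest S/N ratio is
# the most robust to the noise factor under this criterion.
```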
APA, Harvard, Vancouver, ISO, and other styles
8

Nynäs, Peter, Ariela Keysar, and Martin Lagerström. "Who Are They and What Do They Value? – The Five Global Worldviews of Young Adults." In The Diversity Of Worldviews Among Young Adults. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94691-3_3.

Full text
Abstract:
In this chapter, we present five distinct worldview profiles that describe ways of being religious, spiritual, and secular. The findings emerge from our international study of young adults in twelve countries, which is based on the Faith Q-Sort (FQS) and Q-methodology. The FQS is a novel way to assess worldviews through so-called prototypes derived from a factor analysis of how people respond to a set of statements. We implemented the FQS as part of our mixed-method approach, and results from the survey component allow us to explore the five prototypes more closely. How do the worldviews differ from each other in terms of national distribution, demographic data, measures of religiosity, basic values, life satisfaction, sources of information, and aspects of trust? Since the FQS is a new instrument in the study of religions, the mixed-method investigation helps us to evaluate its usefulness and quality as a method for assessing ways of being (non)religious.
APA, Harvard, Vancouver, ISO, and other styles
9

Calleo, Yuri, and Simone Di Zio. "Unsupervised spatial data mining for the development of future scenarios: a Covid-19 application." In Proceedings e report. Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-461-8.33.

Full text
Abstract:
In the context of Futures Studies, the scenario development process allows assumptions to be made about what the futures might be in order to support better decisions today. In the initial stages of scenario building (the Framing and Scanning phases), the process requires much time and effort to scan data and information (reading documents, reviewing literature, and consulting experts) to understand more about the object of the foresight study. The daily use of social networks causes an exponential increase in data, and for this reason we address the problem of speeding up and optimizing the Scanning phase by applying a new combined method based on the analysis of tweets with unsupervised classification models, text mining, and spatial data mining techniques. To obtain a qualitative overview, we applied the bag-of-words model and sentiment analysis with the Afinn and Vader algorithms. Then, in order to extract the influence factors and the relevant key factors (Kayser and Blind, 2017; 2020), Latent Dirichlet Allocation (LDA) was used (Tong and Zhang, 2016). Furthermore, to acquire spatial information, we used spatial data mining techniques to extract georeferenced data, from which a geographic analysis of the data was obtained. To showcase our method, we provide an example using Covid-19 tweets (Uhl and Schiebel, 2017), from which 5 topics and 6 key factors were extracted. Finally, for each influence factor, a cartogram was created from the relative frequencies in order to map the spatial distribution of the users discussing each particular topic. The results fully answer the research objectives, and the model used could be a new approach offering benefits in the scenario development process.
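The bag-of-words plus LDA step described here can be sketched in a few lines with scikit-learn; the example tweets are invented, the number of topics is an assumption, and this is not the authors' implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented example tweets standing in for a scraped Covid-19 corpus.
tweets = [
    "vaccine rollout slow in rural areas",
    "hospital capacity under pressure again",
    "remote work policy extended by employer",
    "new variant drives travel restrictions",
    "school closures affect working parents",
    "mask mandate lifted in several regions",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(tweets)                     # bag-of-words matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```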
APA, Harvard, Vancouver, ISO, and other styles
10

Erüz, A. Orhun, M. Hulusi Özkul, Özlem Akalın, and Muhammed Maraşlı. "The Effects of Modified Andreassen Particle-Packing Model on Polymer Modified Self-Leveling Heavy-Weight Mortar." In Springer Proceedings in Materials. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72955-3_62.

Full text
Abstract:
Heavy-weight concretes are known for the high unit weight inherited from their aggregates and are developed mainly for radiation shielding. Therefore, minimal porosity, in addition to high unit weight, is a desired property. Numerous particle-packing theories have been put forward to decrease porosity by means of an ideal reference curve; the modified Andreassen model is one such approach, based on the size distribution of the ingredients and used to adjust the fineness. This research investigates the effect of this method on heavy-weight mortars. Combinations of cement, micro- and nano-silica, and their polymer-modified mixtures were used as binders in the specimens, along with barite and finely ground magnetite aggregates. In this work, the aggregate size limit was set to 3 mm, and the fineness factor q was chosen as 0.22 or 0.25, depending on the mixture. To achieve a self-levelling consistency, the w/c ratio was kept constant at 0.40, and a superplasticizer was added to maintain workability. The specimens were subsequently examined for unit weight, compressive strength, and capillary water absorption, and the differences between groups were compared according to their composition.
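The modified Andreassen (Dinger-Funk) target curve referred to here has a simple closed form: the cumulative percent finer than size d follows (d^q - d_min^q)/(d_max^q - d_min^q). The sketch below evaluates it for the 3 mm upper limit and the two q values given in the abstract; the lower size limit and sieve sizes are assumptions.

```python
import numpy as np

def modified_andreassen_cpft(d, d_min, d_max, q):
    """Cumulative percent finer than (CPFT) target curve, Dinger-Funk form."""
    d = np.asarray(d, dtype=float)
    return 100.0 * (d ** q - d_min ** q) / (d_max ** q - d_min ** q)

# Illustrative sieve sizes in mm, with the 3 mm upper limit from the study and
# an assumed 0.001 mm lower limit for the finest binder particles.
sizes = np.array([0.001, 0.01, 0.1, 0.5, 1.0, 3.0])
for q in (0.22, 0.25):
    print(f"q = {q}:", np.round(modified_andreassen_cpft(sizes, 0.001, 3.0, q), 1))
# A real mix design then adjusts ingredient proportions so the combined particle
# size distribution follows this target curve as closely as possible.
```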
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Distribution factor method"

1

Yu, Zhongjun, Boran Fan, and Xin Zhao. "Discrepancies of Amplification Factor Method in Stability Design of Tall Buildings." In IABSE Symposium, Tokyo 2025: Environmentally Friendly Technologies and Structures: Focusing on Sustainable Approaches. International Association for Bridge and Structural Engineering (IABSE), 2025. https://doi.org/10.2749/tokyo.2025.0453.

Full text
Abstract:
As building height increases, structural design constraint shifts from strength and displacement limitations to stability requirements. High-rise buildings experience growing displacements and internal forces under lateral and vertical loads, leading to instability that differs from buckling caused by gravity loads alone. Stability constraints, derived via the amplification factor method, depend on lateral stiffness and mass distribution. Using a simplified layer model, this paper compares Chinese and American codes on stability performance under lateral loads and validates displacement increments through finite element analyses, including linear first-order and nonlinear second-order methods.
APA, Harvard, Vancouver, ISO, and other styles
2

Huatian, Xu, Bi Wuxi, Zhang Lianjun, and Liu Zhenbin. "Research on Current and Potential Distribution of Horizontal Directional Drilling Pipeline." In CORROSION 2018. NACE International, 2018. https://doi.org/10.5006/c2018-11274.

Full text
Abstract:
Abstract The Horizontal Directional Drilling (HDD) process represents a significant improvement over traditional cut and cover methods for installing pipelines beneath obstructions. However, cathodic protection monitoring at HDD locations is typically limited to the entry/exit extremities, with protection levels in the intervening span either assumed or speculated. In this essay, the theoretical calculation method of current distribution on HDD will be shared. The laboratory and field experiment results show that the soil resistivity is the main influencing factor of the cathodic protection current distribution of the HDD. This article also gives a test method for the effectiveness of cathodic protection of HDD.
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Yao, Takeshi Hanji, Kazuo Tateishi, and Masaru Shimizu. "Evaluation of Through-Thickness Residual Stress at T-Joint Weld Toes Using Slitting Method." In IABSE Symposium, Tokyo 2025: Environmentally Friendly Technologies and Structures: Focusing on Sustainable Approaches. International Association for Bridge and Structural Engineering (IABSE), 2025. https://doi.org/10.2749/tokyo.2025.2459.

Full text
Abstract:
Residual stress in welds is a critical factor influencing the fatigue strength of welded joints. Understanding the residual stress distribution, both on the surface and within the interior of the plates, is essential for improving the accuracy of fatigue strength evaluations in welded joints. This study aims to numerically evaluate the through-thickness residual stress at the weld toe of T-joints using the slitting method. This approach involves welding simulation, slitting simulation, and fracture mechanics to investigate the relationship between the internal residual stress and the strain changes on the plate surface near the weld toe, which occur due to stress relief during the slitting process. The results indicate that the through-thickness residual stress distributions estimated through the slitting method are consistent with those initially present in the welded joint, even under varying conditions such as different initial residual stress distributions, geometrical parameters of the joint (e.g., plate thickness, weld size), and slitting conditions (e.g., slit width and location, cutting depth ratio). Furthermore, this study identifies an optimal slitting strategy, which enhances the accuracy of residual stress evaluation.
APA, Harvard, Vancouver, ISO, and other styles
4

Jiang Fengyi, Shen Shuhong, and Jiang Yunji. "Method for improving the power factor of motor product." In 2008 China International Conference on Electricity Distribution (CICED 2008). IEEE, 2008. http://dx.doi.org/10.1109/ciced.2008.5211805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xiao, Songqing, Weiqing Tao, Lin Li, and Jun Cao. "Distribution Fault Diagnosis Method Based on Comprehensive Fault Factor." In 2019 6th International Conference on Information Science and Control Engineering (ICISCE). IEEE, 2019. http://dx.doi.org/10.1109/icisce48695.2019.00209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhuang, Songlin, Yue Zhang, and Yanping Huang. "New method for measuring phase factor." In OSA Annual Meeting. Optica Publishing Group, 1988. http://dx.doi.org/10.1364/oam.1988.fo2.

Full text
Abstract:
We propose a new optical method for determining the optical phase term from the intensity distribution. The basic idea of the new method is as follows: Consider a simple Fourier optical setup or 4f system. An object f(x) = |f(x)| exp[jθ(x)] is put in the object plane, and the detector is inserted in the image plane. Obviously the intensity distribution I(x′) determined here does not include any phase information. However, if we use a spatial filter to modulate the spectrum distribution of the object function, the new intensity I(x′) will be related not only to the amplitude term |f(x)| but also to the phase term θ(x). The relationship between I(x′) and θ(x) is also changed by using differential filters. Therefore, the phase factor θ(x) can be obtained from a series of intensity measurements. It can be shown that the phase term of the object function can be uniquely determined by using two different modulated plates.
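A numerical analogue of this 4f argument can be sketched with discrete Fourier transforms: filtering the object spectrum makes the image-plane intensity depend on the object phase. The object, filter, and grid below are all invented for illustration and do not reproduce the authors' optical setup.

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
amplitude = np.exp(-x**2 / 0.2)           # |f(x)|, an assumed smooth envelope
phase = 2.0 * np.sin(6 * np.pi * x)        # theta(x), the quantity of interest
obj = amplitude * np.exp(1j * phase)

def image_intensity(obj, filt):
    spectrum = np.fft.fftshift(np.fft.fft(obj))
    return np.abs(np.fft.ifft(np.fft.ifftshift(spectrum * filt)))**2

no_filter = np.ones(n)
halfplane = (np.arange(n) >= n // 2).astype(float)  # a crude knife-edge filter

i_plain = image_intensity(obj, no_filter)
i_filtered = image_intensity(obj, halfplane)
# Without filtering, the intensity only reflects |f(x)|^2; with the filter it
# also varies with the phase term, which is what the measurement exploits.
print(np.allclose(i_plain, amplitude**2, atol=1e-6), i_filtered[:3])
```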
APA, Harvard, Vancouver, ISO, and other styles
7

Nojeng, Syarifuddin, Arif Jaya, Sugianto, Syamsir, and Mohammad Yusri Hassan. "Transmission usage allocation based on power factor using distribution factor method for deregulated electricity supply." In 2016 International Seminar on Application for Technology of Information and Communication (ISemantic). IEEE, 2016. http://dx.doi.org/10.1109/isemantic.2016.7873817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ma, Rutao, Youzhe Ji, Hui Wang, and Fei Han. "Dual-Factor Method for Calculating Weight Distribution in Reaming While Drilling." In IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition. Society of Petroleum Engineers, 2012. http://dx.doi.org/10.2118/155834-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nie, Ding, Litao Fan, Ke Wang, Youle Song, Yiyi Gao, and Gang Miao. "Research on AHP-based Multi-factor Medium Voltage Distribution Network Line Risk Quantitative Assessment Method." In 2021 China International Conference on Electricity Distribution (CICED). IEEE, 2021. http://dx.doi.org/10.1109/ciced50259.2021.9556691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

KangJian Zhang, Li Guo Gang, Feng-chu Xu, and Li Yuhua. "Probability distribution of knock factor and knock-band method for knock detection." In 2010 International Conference on Mechanic Automation and Control Engineering (MACE). IEEE, 2010. http://dx.doi.org/10.1109/mace.2010.5536754.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Distribution factor method"

1

Kong, Zhihao, Aritro Roy Mitra, and Luna Lu. Developing AI-Assisted In-Situ NDT Method for Air-Void Distribution Testing in Fresh and Hardened Concrete. Purdue University, 2024. http://dx.doi.org/10.5703/1288284317745.

Full text
Abstract:
Understanding the air void content in concrete is crucial since it significantly influences the durability and strength of the material, especially in environments susceptible to freeze-thaw cycles. This report introduces an advanced nondestructive testing (NDT) method for the in-situ detection of air voids in concrete by employing diffusive ultrasound. Focusing on the ultrasound attenuation coefficient, this research established a strong correlation with key air void metrics, including the volumetric ratio and spacing factor, as outlined in ASTM C457. The study also undertook a comparative analysis of ASTM C457 methods B and C, revealing the instrument-dependent variability in measuring air voids. One pivotal discovery was that ultrasound attenuation in concrete is mainly influenced by air voids and aggregates, with a relatively minor contribution from cement. This methodology not only offers a novel approach for accurately assessing air void content but also enables visualization of air void distribution in concrete infrastructures like pavements. The findings of this research offer insights for enhancing concrete quality control and ensuring structural integrity in construction, particularly when in-place air-void conditions are of interest.
APA, Harvard, Vancouver, ISO, and other styles
2

Ravazdezh, Faezeh, Julio A. Ramirez, and Ghadir Haikal. Improved Live Load Distribution Factors for Use in Load Rating of Older Slab and T-Beam Reinforced Concrete Bridges. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317303.

Full text
Abstract:
This report describes a methodology for demand estimation through improved load distribution factors for reinforced concrete flat-slab and T-beam bridges. The proposed distribution factors are supported by three-dimensional (3D) finite element (FE) analysis tools. The Conventional Load Rating (CLR) method currently in use by INDOT relies on a two-dimensional (2D) analysis based on beam theory. This approach may overestimate bridge demand because it neglects the parapets and sidewalks present in these bridges. The 3D behavior and response of a bridge can be better captured with a 3D computational model that includes the participation of all elements. This research investigates the potential effect of railings, parapets, sidewalks, and end-diaphragms on demand evaluation for rating reinforced concrete flat-slab and T-beam bridges using 3D finite element analysis. The project goal is to improve the current lateral load distribution factor by addressing the limitations of the 2D analysis and its neglect of non-structural components. Through a parametric study of slab and T-beam bridges in Indiana, the impact of selected parameters on demand estimates was quantified, and modifications to the current AASHTO load distribution factors were proposed.
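One simplified convention for backing a live-load distribution factor out of a 3D FE model is to take each girder's share of the total moment carried by all girders, scaled by the number of loaded lanes. The sketch below uses made-up moments and a simplified convention; it is not the report's procedure or the AASHTO formula.

```python
# Hypothetical peak girder moments (kip-ft) from a 3D FE run with two loaded lanes.
fe_girder_moments = [410.0, 655.0, 620.0, 390.0]
loaded_lanes = 2

total = sum(fe_girder_moments)
# Simplified convention: a girder's share of the total moment, scaled by the
# number of loaded lanes, is taken as its live-load distribution factor.
dfs = [loaded_lanes * m / total for m in fe_girder_moments]

for i, df in enumerate(dfs, start=1):
    print(f"girder {i}: DF = {df:.3f}")
print("governing DF =", round(max(dfs), 3))
```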
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Xingyu, Matteo Ciantia, Jonathan Knappett, and Anthony Leung. Micromechanical study of potential scale effects in small-scale modelling of sinker tree roots. University of Dundee, 2021. http://dx.doi.org/10.20933/100001235.

Full text
Abstract:
When testing a 1:N geotechnical structure in the centrifuge, it is desirable to choose a large scale factor (N) so that the small-scale model fits in the model container and unwanted boundary effects are avoided; however, this in turn may cause scale effects when the structure is overscaled. This is more significant in small-scale modelling of sinker root-soil interaction, where the root-particle size ratio is much lower. In this study the Distinct Element Method (DEM) is used to investigate this problem. The sinker root of a model root system under axial loading was analysed, with both upward and downward behaviour compared with the Finite Element Method (FEM), in which the soil is modelled as a continuum and particle-size effects are therefore not taken into consideration. Based on the scaling law, with the same prototype scale and particle size distribution, different scale factors/g-levels were applied to quantify the effect of the ratio of root diameter (𝑑𝑟) to mean particle size (𝐷50) on the root-soil interaction.
APA, Harvard, Vancouver, ISO, and other styles
4

O'Shea, Dónal. Estimating Ireland’s labour share. ESRI, 2024. https://doi.org/10.26504/rn20240401.

Full text
Abstract:
The labour share of income is a crucial economic indicator that captures income distribution between the factors of production. Its importance as a parameter in macroeconomic models motivates this detailed study of methods for estimating the Irish labour share. International comparisons of the labour share that rely on distorted measures of Irish national income are misleading. Modified gross national income (GNI*) should be used as the denominator for the Irish labour share when conducting international comparisons. The numerator of the labour share is a measure of total compensation for labour, including the labour income of the self-employed. This note evaluates existing methods for imputing the labour income of the self-employed and proposes a new method, which applies a sectoral approach to the common assumption of equal total earnings between employees and the self-employed. Using the proposed method, there is no evidence of a decline in the labour share since 1998.
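The adjustment described here can be written down compactly: labour income of the self-employed is imputed (for example, by assuming self-employed workers in a sector earn the same average labour income as employees in that sector) and added to compensation of employees before dividing by GNI*. The sectoral figures below are invented and serve only to show the arithmetic; they are not the note's data or results.

```python
# Invented sectoral data (billions of euro) purely to illustrate the arithmetic.
sectors = {
    #              (compensation of employees, employees, self-employed)
    "industry":    (60.0, 400_000, 30_000),
    "services":    (90.0, 900_000, 120_000),
    "agriculture": (3.0, 40_000, 60_000),
}
gni_star = 230.0  # modified gross national income, also invented

labour_income = 0.0
for coe, n_emp, n_self in sectors.values():
    avg_emp_income = coe / n_emp
    # Sectoral imputation: self-employed assumed to earn the sectoral average.
    labour_income += coe + avg_emp_income * n_self

print(f"labour share of GNI* = {labour_income / gni_star:.1%}")
```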
APA, Harvard, Vancouver, ISO, and other styles
5

Mort, A. Controls on the distribution and composition of gas and condensate in the Montney resource play. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329790.

Full text
Abstract:
The Montney resource play has evolved from a peripheral conventional play to one of the most important hydrocarbon-producing unconventional resource plays in North America and has remained resilient throughout the economic challenges of recent years. Despite more than 15 years of unconventional development and research, there are still aspects of the play that are not fully de-risked, and prediction of fluid quality remains haphazard due to the complex interplay of geological and engineering factors. Among these are the delineation of structural and stratigraphic barriers and conduits, the identification of enigmatic source rocks that defy traditional methods, the evaluation of fluid-migration effects, and the difficulty of predicting phase behavior in a tight but open system. This study uses a combined approach, leveraging geochemical tools together with spatial and stratigraphic analysis, in an attempt to improve the current understanding of these issues.
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Anjali. Estimating the Chiasma Frequency in Diplotene-Diakinesis Stage. ConductScience, 2020. http://dx.doi.org/10.55157/cs20200925.

Full text
Abstract:
A chiasma is the point of crossing over, the site where genetic material is exchanged between two homologous, non-sister chromatids. The crossover occurs in the pachytene stage; however, it is observed in the diplotene stage of meiosis I [2]. The crossover between the two homologs also creates new combinations of parental genes, forming recombinants. This recombination of genes causes variation in the population and exerts a profound effect on genomic diversity and evolution. Meiotic recombination and population variation have therefore long interested scientists seeking to understand the impact and significance of crossing over in a population. Over time, various techniques, such as immunolocalization and electron microscopy of recombination nodules [2], were developed for analyzing meiotic recombination and quantifying crossing over. However, estimation of chiasma frequency remains the traditional and widely followed method for studying the phenomenon. Chiasma frequency is defined as an estimate of the level of genetic recombination in a population and is especially effective for estimating recombination in organisms in which genetic analysis is difficult or impossible to perform [2]. This article therefore lays out the origin of the concept of chiasmata, the factors affecting chiasma frequency and its distribution along chromosomes, and the procedure for estimating chiasma frequency in plants as well as animals.
APA, Harvard, Vancouver, ISO, and other styles
7

Galili, Naftali, Roger P. Rohrbach, Itzhak Shmulevich, Yoram Fuchs, and Giora Zauberman. Non-Destructive Quality Sensing of High-Value Agricultural Commodities Through Response Analysis. United States Department of Agriculture, 1994. http://dx.doi.org/10.32747/1994.7570549.bard.

Full text
Abstract:
The objectives of this project were to develop nondestructive methods for detection of internal properties and firmness of fruits and vegetables. One method was based on a soft piezoelectric film transducer developed at the Technion for analysis of fruit response to low-energy excitation. The second method was a dot-matrix piezoelectric transducer developed at North Carolina State University for contact-pressure analysis of fruit during impact. Two research teams, one in Israel and the other in North Carolina, coordinated their research effort according to the specific objectives of the project, to develop and apply the two complementary methods for quality control of agricultural commodities. In Israel: An improved firmness testing system was developed and tested with tropical fruits. The new system included an instrumented fruit-bed of three flexible piezoelectric sensors and miniature electromagnetic hammers, which served as fruit support and low-energy excitation device, respectively. Resonant frequencies were detected for determination of the firmness index. Two new acoustic parameters were developed for evaluation of fruit firmness and maturity: a damping ratio and a centroid of the frequency response. Experiments were performed with avocado and mango fruits. The internal damping ratio, which may indicate fruit ripeness, increased monotonically with time, while resonant frequencies and firmness indices decreased with time. Fruit samples were tested daily by a destructive penetration test. A fairly high correlation was found in tropical fruits between the penetration force and the new acoustic parameters; a lower correlation was found between these parameters and the conventional firmness index. Improved table-top firmness testing units (Firmalon), with a data-logging system and on-line data analysis capacity, were built. The new device was used for the full-scale experiments in the next two years, ahead of the original program and BARD timetable. Close cooperation was initiated with local industry for development of both off-line and on-line sorting and quality control of more agricultural commodities. Firmalon units were produced and operated in major packaging houses in Israel, Belgium and Washington State, on mango, avocado, apples, pears, tomatoes, melons and some other fruits, to gain field experience with the new method. The accumulated experimental data from all these activities is still being analyzed, to improve firmness sorting criteria and shelf-life prediction curves for the different fruits. The test program in commercial CA storage facilities in Washington State included seven apple varieties (Fuji, Braeburn, Gala, Granny Smith, Jonagold, Red Delicious, and Golden Delicious) and the D'Anjou pear variety. FI master-curves could be developed for the Braeburn, Gala, Granny Smith and Jonagold apples. These fruits showed a steady ripening process during the test period. Yet, more work should be conducted to reduce the scatter of the data and to determine the confidence limits of the method. The nearly constant FI in Red Delicious and the fluctuations of FI in Fuji apples should be re-examined. Three sets of experiments were performed with Flandria tomatoes. Despite the complex structure of the tomatoes, the acoustic method could be used for firmness evaluation and to follow the ripening evolution with time. Close agreement was achieved between the auction expert evaluation and that of the nondestructive acoustic test, where a firmness index of 4.0 or more indicated grade-A tomatoes.
More work is being performed to refine the sorting algorithm and to develop a general ripening scale for automatic grading of tomatoes for the fresh fruit market. Galia melons were tested in Israel, in simulated export conditions. It was concluded that the Firmalon is capable of detecting the ripening of melons nondestructively and of sorting out defective fruits from the export shipment. The cooperation with local industry resulted in the development of an automatic on-line prototype of the acoustic sensor, which may be incorporated into the export quality control system for melons. More interesting is the development of the remote firmness sensing method for sealed CA cool-rooms, where most of the full-year fruit yield is stored for off-season consumption. Hundreds of ripening monitor systems have been installed in major fruit storage facilities and are now being evaluated by the consumers. If successful, the new method may cause a major change in long-term fruit storage technology. More uses of the acoustic test method have been considered: monitoring fruit maturity and harvest time, testing fruit samples or each individual fruit when entering the storage facilities, packaging house and auction, and in the supermarket. This approach may result in a full line of equipment for nondestructive quality control of fruits and vegetables, from the orchard or the greenhouse, through the entire sorting, grading and storage process, up to the consumer table. The developed technology offers a tool to determine the maturity of the fruits nondestructively by monitoring their acoustic response to mechanical impulse on the tree. A special device was built and preliminarily tested on mango fruit. Further development is needed to produce a portable, hand-operated sensing method for this purpose. In North Carolina: An analysis method based on an Auto-Regressive (AR) model was developed for detecting the first resonance of fruit from their response to mechanical impulse. The algorithm included a routine that detects the first resonant frequency from as many sensors as possible. Experiments on Red Delicious apples were performed and their firmness was determined. The AR method allowed the detection of the first resonance. The method could be fast enough to be utilized in a real-time sorting machine. Yet, further study is needed to improve the search algorithm of the method. An impact contact-pressure measurement system and Neural Network (NN) identification method were developed to investigate the relationships between surface pressure distributions on selected fruits and their respective internal textural qualities. A piezoelectric dot-matrix pressure transducer was developed for the purpose of acquiring time-sampled pressure profiles during impact. The acquired data were transferred to a personal computer, and accurate visualization of the animated data was presented. A preliminary test with 10 apples was performed. Measurements were made by the contact-pressure transducer in two different positions. Complementary measurements were made on the same apples by using the Firmalon and Magness Taylor (MT) testers. A three-layer neural network was designed. 2/3 of the contact-pressure data were used as training input data and corresponding MT data as training target data. The remaining data were used as NN checking data. Six samples randomly chosen from the ten measured samples and their corresponding Firmalon values were used as the NN training and target data, respectively.
The remaining four samples' data were input to the NN. The NN results were consistent with the Firmness Tester values. With more training data, the output should be more accurate. In addition, the Firmness Tester values were not consistent with the MT firmness tester values. The NN method developed in this study appears to be a useful tool to emulate the MT firmness test results without destroying the apple samples. To get a more accurate estimation of MT firmness, a much larger training data set is required. When the larger sensitive area of the pressure sensor being developed in this project becomes available, the entire contact 'shape' will provide additional information and the neural network results will be more accurate. It has been shown that the impact information can be utilized in the determination of internal quality factors of fruit. Until now,
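The AR-based resonance detection mentioned for the North Carolina work can be illustrated in a few lines: fit an autoregressive model to the recorded response, then read the resonant frequency from the angle of the dominant complex root of the AR characteristic polynomial. The synthetic signal, sampling rate, and model order below are assumptions for illustration; this is not the project's algorithm.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

fs = 2000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic impulse response: a decaying 85 Hz resonance plus noise.
signal = np.exp(-6 * t) * np.sin(2 * np.pi * 85.0 * t) + 0.02 * rng.normal(size=t.size)

order = 8
ar_fit = AutoReg(signal, lags=order).fit()
# Roots of the AR characteristic polynomial z^p - a1 z^(p-1) - ... - ap.
poly = np.r_[1.0, -np.asarray(ar_fit.params)[1:]]   # params[0] is the intercept
roots = np.roots(poly)
freqs = np.abs(np.angle(roots)) * fs / (2 * np.pi)
dominant = freqs[np.argmax(np.abs(roots))]           # root closest to the unit circle
print(f"estimated first resonance ~ {dominant:.1f} Hz")
```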
APA, Harvard, Vancouver, ISO, and other styles
8

Mathew, Sonu, Srinivas S. Pulugurtha, and Sarvani Duvvuri. Modeling and Predicting Geospatial Teen Crash Frequency. Mineta Transportation Institute, 2022. http://dx.doi.org/10.31979/mti.2022.2119.

Full text
Abstract:
This research project 1) evaluates the effect of road network, demographic, and land use characteristics on road crashes involving teen drivers, and, 2) develops and compares the predictability of local and global regression models in estimating teen crash frequency. The team considered data for 201 spatially distributed road segments in Mecklenburg County, North Carolina, USA for the evaluation and obtained data related to teen crashes from the Highway Safety Information System (HSIS) database. The team extracted demographic and land use characteristics using two different buffer widths (0.25 miles and 0.5 miles) at each selected road segment, with the number of crashes on each road segment used as the dependent variable. The generalized linear models with negative binomial distribution (GLM-based NB model) as well as the geographically weighted negative binomial regression (GWNBR) and geographically weighted negative binomial regression model with global dispersion (GWNBRg) were developed and compared. This research relied on data for 147 geographically distributed road segments for modeling and data for 49 segments for validation. The annual average daily traffic (AADT), light commercial land use, light industrial land use, number of household units, and number of pupils enrolled in public or private high schools are significant explanatory variables influencing the teen crash frequency. Both methods have good predictive capabilities and can be used to estimate the teen crash frequency. However, the GWNBR and GWNBRg better capture the spatial dependency and spatial heterogeneity among road teen crashes and the associated risk factors.
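As a rough sketch of the global model described here, a negative binomial GLM of segment crash counts on covariates can be fit with statsmodels; the variable names, synthetic data, and dispersion value are invented and this is not the study's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 147
aadt = rng.uniform(5_000, 60_000, n)
school_pupils = rng.poisson(300, n)
households = rng.poisson(800, n)

# Synthetic teen-crash counts loosely tied to exposure and land-use covariates.
mu = np.exp(-6.0 + 0.00004 * aadt + 0.0008 * school_pupils + 0.0003 * households)
crashes = rng.poisson(mu)

X = sm.add_constant(np.column_stack([aadt, school_pupils, households]))
nb_model = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb_model.summary())
```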
APA, Harvard, Vancouver, ISO, and other styles
9

Beavers. L51557 Pressure Losses in Compressor Station Yard Pipework - Phase II. Pipeline Research Council International, Inc. (PRCI), 1987. http://dx.doi.org/10.55274/r0010277.

Full text
Abstract:
The economic assessment of piping layout in compressor station yards relies on accurate prediction of pressure losses within the network. Methods currently used to predict pressure losses in station pipe work are unreliable. As a result inadequate and inaccurate information is being used when making economic assessments of piping layout and in the prediction of operating costs. By improving the design process substantial economic advantages may be gained in balancing pressure losses and compressor inlet flow conditions against investment in piping and components. Currently the existing data concentrate on isolated component losses and there is a lack of reliable data on interaction of adjacent components frequently present in compressor yard layouts. Thus, in order to produce a comprehensive guide to compressor yard losses, there was considerable incentive to quantify these interactions. This report details the experimental work to provide reliable pressure loss data for an engineer's design handbook. The tee tests include the effect of branch to run radius and two area ratios. A total of 36 bend/tee combinations were tested. Results are presented as overall bend/tee pressure loss coefficients and interaction corrections. The latter are used in the design handbook. The factors affecting bend and tee performance are discussed. Bend/tee interactions are explained qualitatively in terms of interaction of the pressure and flow distributions within the components. The work covers pressure losses in bends, close coupled bend/bend combinations, tees (combining and dividing) and tee/bend combinations.
APA, Harvard, Vancouver, ISO, and other styles
10

Zandiatashbar, Ahoura, Jochen Albrecht, and Hilary Nixon. A Bike System for All in Silicon Valley: Equity Assessment of Bike Infrastructure in San José, CA. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2162.

Full text
Abstract:
Investing in sustainable, multimodal infrastructure is of increasing importance throughout the United States and worldwide. Cities are increasingly making strategic capital investment decisions about bicycle infrastructure, decisions that require planning efforts that accurately assess the equity aspects of developments, achieve an equitable distribution of infrastructure, and draw upon accurate assessment methods. Toward these efforts, this project uses a granular bike network dataset with statistical and geospatial analyses to quantify a bike infrastructure availability score (i.e., bike score) that accounts for the safety and comfort differences among bike path classes in San José, California. San José is the 10th largest U.S. city and a growing tech hub with a booming economy, factors that correlate with increased traffic congestion if adequate multimodal and active transportation infrastructure is not in place. Therefore, San José has been keen on becoming “one of the most bike-friendly communities in North America.” The City’s new plan, which builds on its first bike plan adopted in 2009, envisions a 557-mile network of all-ages-and-abilities bikeways to support a 20% bicycle mode split (i.e., 20% of all trips to be made by bike) by 2050. Hence, San José makes a perfect study area for piloting this project’s methodology for accurately assessing the equity of urban bike plans and infrastructure. The project uses the above-mentioned bike score (representing bike infrastructure supply) and San José residents’ bike travel patterns (representing bike trip demand), derived from StreetLight data, to answer the following questions: (1) Where are San José's best (bike paradise) and worst (bike desert) regions for cycling? (2) How different are the socioeconomic attributes of San José’s bike desert and bike paradise residents? (3) Has San José succeeded in achieving an equitable infrastructure distribution and, if so, to what extent? And (4) has the availability of infrastructure attracted riders from underserved communities and, if so, to what extent? Using the bike infrastructure availability score, this research measures and maps the City of San José's best and worst regions for cycling through geospatial analyses to answer Question 1. Further spatial and statistical analyses, including t-tests, pairwise Pearson correlation analysis, descriptive analysis, spatial visualization, principal component analysis (PCA), and multiple regression models, are used to answer Questions 2, 3, and 4. In addition to this report, the findings are used to develop an open-access web tool, the San José Bike Equity Web Map (SJ-BE iMap). This research contributes to the critical assessment and planning of sustainable, multimodal infrastructure in California and beyond.
APA, Harvard, Vancouver, ISO, and other styles