Academic literature on the topic 'Capacity spectrum-based methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Capacity spectrum-based methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Capacity spectrum-based methods"

1

Chopra, Anil K., and Rakesh K. Goel. "Capacity-Demand-Diagram Methods Based on Inelastic Design Spectrum." Earthquake Spectra 15, no. 4 (1999): 637–56. http://dx.doi.org/10.1193/1.1586065.

Full text
Abstract:
An improved capacity-demand-diagram method that uses the well-known constant-ductility design spectrum for the demand diagram is developed and illustrated by examples. This method estimates the deformation of inelastic SDF systems consistent with the selected inelastic design spectrum, while retaining the attraction of graphical implementation of the ATC-40 Nonlinear Static Procedure. One version of the improved method is graphically similar to ATC-40 Procedure A whereas the second version is graphically similar to ATC-40 Procedure B. However, the improved procedures differ from ATC-40 procedures in one important sense. The demand diagram used is different: the constant-ductility demand diagram for inelastic systems in the improved procedure versus the elastic demand diagram in ATC-40 for equivalent linear systems. The improved method can be conveniently implemented numerically if its graphical features are not important to the user. Such a procedure, based on equations relating the yield strength reduction factor, Ry, and ductility factor, μ, for different period, Tn, ranges, has been presented, and illustrated by examples using three different Ry - μ - Tn relations.
APA, Harvard, Vancouver, ISO, and other styles
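To make the idea above concrete, the following minimal Python sketch builds one point of a constant-ductility demand diagram from an elastic pseudo-acceleration ordinate using a simplified Newmark–Hall-type Ry–μ–Tn relation. The relation, the corner period Tc, and the numerical values are illustrative assumptions; the paper itself compares three published Ry–μ–Tn relations that are not reproduced here.

```python
import numpy as np

def ry_newmark_hall(mu, Tn, Tc=0.5):
    """Simplified Newmark-Hall-type yield-strength reduction factor.
    Very short periods: Ry -> 1; intermediate: Ry = sqrt(2*mu - 1); long: Ry = mu.
    Illustrative only; the paper uses three different published Ry-mu-Tn relations."""
    if Tn < 0.03:
        return 1.0
    if Tn < Tc:
        return np.sqrt(2.0 * mu - 1.0)
    return mu

def constant_ductility_point(Tn, mu, Sa_elastic, g=9.81):
    """Return one (D, A) point of the constant-ductility demand diagram.
    Sa_elastic is the elastic pseudo-acceleration (in g) at period Tn."""
    Ry = ry_newmark_hall(mu, Tn)
    Ay = Sa_elastic / Ry                      # yield-level pseudo-acceleration (g)
    Dy = Ay * g * (Tn / (2.0 * np.pi)) ** 2   # yield deformation (m)
    D = mu * Dy                               # peak deformation of the inelastic SDF system
    return D, Ay

# Example: Tn = 1.0 s, mu = 4, elastic Sa = 0.8 g
print(constant_ductility_point(1.0, 4.0, 0.8))
```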
2

Wan, Hai Tao, and Lin Yang. "Method of Performance-Based Design." Applied Mechanics and Materials 438-439 (October 2013): 1603–6. http://dx.doi.org/10.4028/www.scientific.net/amm.438-439.1603.

Full text
Abstract:
To overcome deficiencies of conventional design, American earthquake and structural engineering experts drew profound conclusions from previous earthquakes, improved bearing-capacity design methods, and put forward the theory of performance-based design. Methods of performance-based design mainly include the displacement coefficient method, the direct displacement-based design method, the capacity spectrum method, and the improved capacity spectrum method. Understanding these main methods enables a better grasp of performance-based design and thereby improves civil engineering design.
APA, Harvard, Vancouver, ISO, and other styles
3

Lantada, Nieves, Luis G. Pujades, and Alex H. Barbat. "Vulnerability index and capacity spectrum based methods for urban seismic risk evaluation. A comparison." Natural Hazards 51, no. 3 (2008): 501–24. http://dx.doi.org/10.1007/s11069-007-9212-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Priestley, M. J. N. "Does capacity design do the job?" Bulletin of the New Zealand Society for Earthquake Engineering 36, no. 4 (2003): 276–92. http://dx.doi.org/10.5459/bnzsee.36.4.276-292.

Full text
Abstract:
Current provisions in the New Zealand Loadings code for dynamic amplification of moment and shear force in cantilever wall buildings are critically examined. Based on time-history analyses of six wall structures, from two- to twenty-storeys, it is found that higher mode effects are inadequately represented by either the equivalent lateral force or modal response spectrum design methods. The time-history results indicate that dynamic amplification is dependent on both initial period, and expected displacement ductility level.
 Two different methods for consideration of higher mode effects in cantilever walls are proposed. The first is based on a simple modification of the modal response spectrum method, while the second is appropriate for single-mode design approaches such as the equivalent lateral force method. Both are found to give excellent representation of expected response. It is shown that providing capacity protection at the design seismic intensity does not ensure against undesirable failure modes at intensities higher than the design level. This has significance for the design of critical facilities, such as hospitals.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Jing, Duixian Gao, and Zhiwei Chen. "The Equivalent Damping Ration in Capacity Spectrum Method." Advanced Materials Research 163-167 (December 2010): 4290–94. http://dx.doi.org/10.4028/www.scientific.net/amr.163-167.4290.

Full text
Abstract:
This paper studies the calculation of the equivalent damping ratio in the capacity spectrum method. The features of several internationally used methods for calculating the equivalent damping ratio are analyzed, and in a numerical example their results are compared with those of time-history analysis. The following conclusions are reached: the secant stiffness method overestimates the equivalent damping ratio for steel structures and therefore underestimates the peak seismic response, and the degree of underestimation is related to the yield strength coefficient; under moderate earthquakes, all the methods predict the peak seismic response fairly well, but under strong earthquakes only the Kwan (EP) method gives the best result. These conclusions are valuable for improving performance-based seismic design methods.
APA, Harvard, Vancouver, ISO, and other styles
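As a rough companion to the abstract above, the sketch below evaluates one widely used secant-stiffness estimate of the equivalent viscous damping of a bilinear SDOF system at a given ductility (an ATC-40-type energy-per-cycle formula). It is an assumed, generic formulation; the specific variants compared in the paper, including the Kwan (EP) method, are not reproduced, and ATC-40 additionally scales the hysteretic term by a κ factor that is omitted here.

```python
import numpy as np

def equivalent_damping_secant(mu, alpha=0.0, xi0=0.05):
    """Equivalent viscous damping of a bilinear SDOF system at ductility mu,
    based on secant stiffness and energy dissipated per cycle.
    alpha = post-yield stiffness ratio, xi0 = inherent elastic damping."""
    if mu <= 1.0:
        return xi0
    # hysteretic part: energy dissipated per cycle / (4*pi * strain energy at peak)
    xi_hyst = (2.0 / np.pi) * ((mu - 1.0) * (1.0 - alpha)) / (mu * (1.0 + alpha * (mu - 1.0)))
    return xi0 + xi_hyst

for mu in (1.5, 2.0, 4.0, 6.0):
    print(mu, round(equivalent_damping_secant(mu), 3))
```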
6

Zhou, Dao Chuan, Guo Rong Chen, and Li Ying Nie. "Displacement-Based Design for RC Bridge Columns Based on Chinese Code." Advanced Materials Research 243-249 (May 2011): 3808–19. http://dx.doi.org/10.4028/www.scientific.net/amr.243-249.3808.

Full text
Abstract:
A comprehensive study of displacement-based design for reinforced concrete bridge columns is conducted. The section analysis software UC-Fyber is used to analyze the bending moment and curvature behavior of column sections, and on this basis a new method for calculating the target displacement of RC bridge columns is derived. Elastic displacement response spectra, inelastic displacement response spectra, and inelastic demand spectra are derived from the acceleration spectra of the Chinese code JTG/T B02-01-2008, and three simplified methods for determining the displacement demand are developed. An example of the displacement-based design of a bridge column is studied and checked by dynamic inelastic time-history analysis to verify the reasonableness of the developed methods. The research shows that the target displacement of RC bridge columns depends on the concrete strength grade, longitudinal reinforcement ratio, column height, section form, and other factors; the equivalent linearization method and the inelastic displacement response spectrum method, both based on the design response spectrum, can reach the target displacement while satisfying structural safety requirements; and the demand spectrum method is a simple, direct, graphical way to carry out the design, with the shortcoming that the structural capacity spectrum curve obtained from pushover analysis can differ from reality.
APA, Harvard, Vancouver, ISO, and other styles
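The conversion of a code acceleration spectrum into an elastic displacement spectrum, one step described in the abstract above, follows Sd = (T/2π)² Sa. The sketch below assumes a generic plateau-plus-decay spectral shape with placeholder parameters; the actual JTG/T B02-01-2008 spectral shape and site coefficients are not reproduced.

```python
import numpy as np

def sa_design(T, a_max=0.2, Tg=0.45):
    """Hypothetical plateau-plus-decay acceleration design spectrum (in g).
    Stand-in only; the paper uses the JTG/T B02-01-2008 spectral shape."""
    T = max(T, 1e-3)
    if T <= Tg:
        return 2.25 * a_max
    return 2.25 * a_max * (Tg / T)

def sd_elastic(T, g=9.81):
    """Elastic displacement spectrum ordinate (m) via Sd = (T / 2pi)^2 * Sa."""
    return (T / (2.0 * np.pi)) ** 2 * sa_design(T) * g

periods = np.arange(0.2, 4.01, 0.2)
Sd = [sd_elastic(T) for T in periods]
print(list(zip(periods.round(1), np.round(Sd, 4))))
```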
7

Yang, Min, and Ming Yan Jiang. "Hybrid Spectrum Access and Power Allocation Based on Improved Hopfield Neural Networks." Advanced Materials Research 588-589 (November 2012): 1490–94. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.1490.

Full text
Abstract:
This paper aims to solve the optimization power allocation problem based on cognitive radio network system. We propose a Hybrid Spectrum Access (HSA) method which considers the total transmit power constraint, the peak power constraint and the primary users’ tolerance. In order to solve this combinational optimization problem and achieve the global optimal solution, we derived a Simulated Annealing-Hopfield neural networks (SA-HNN). The simulation results of the optimized ergodic capacity shows that the proposed optimization problem can be solved more efficiently and better by SA-HNN than HNN or Simulated Annealing (SA), and the proposed HSA method by SA-HNN can achieve a better ergodic capacity than the traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
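For the optimization problem sketched in the abstract above, a plain simulated-annealing loop over the power vector already illustrates the constrained search; the sketch below maximizes a sum-capacity objective under total and peak power constraints. It is a generic stand-in with made-up channel gains and constraint values, not the SA-HNN hybrid of the paper, and it ignores the primary users' interference tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_capacity(p, gains, noise=1.0):
    """Sum of log2(1 + SNR) over subchannels for power vector p."""
    return np.sum(np.log2(1.0 + p * gains / noise))

def anneal_power(gains, P_total=10.0, p_peak=4.0, iters=20000, T0=1.0):
    """Generic simulated annealing for constrained power allocation
    (illustrative stand-in; the paper couples SA with a Hopfield network)."""
    n = len(gains)
    p = np.full(n, min(P_total / n, p_peak))
    best, best_val = p.copy(), sum_capacity(p, gains)
    for k in range(iters):
        T = T0 * (1.0 - k / iters) + 1e-6
        q = np.clip(p + rng.normal(0.0, 0.2, n), 0.0, p_peak)
        if q.sum() > P_total:                 # project back onto the total-power budget
            q *= P_total / q.sum()
        dv = sum_capacity(q, gains) - sum_capacity(p, gains)
        if dv > 0 or rng.random() < np.exp(dv / T):
            p = q
            if sum_capacity(p, gains) > best_val:
                best, best_val = p.copy(), sum_capacity(p, gains)
    return best, best_val

gains = rng.uniform(0.2, 2.0, 8)   # made-up channel gains
print(anneal_power(gains))
```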
8

Li, Shanshan, Ping Xiang, Biao Wei, Lu Yan, and Ye Xia. "A Nonlinear Static Procedure for the Seismic Design of Symmetrical Irregular Bridges." Shock and Vibration 2020 (September 26, 2020): 1–16. http://dx.doi.org/10.1155/2020/8899705.

Full text
Abstract:
Displacement-based seismic design methods support the performance-based seismic design philosophy, widely regarded as the most advanced seismic design theory. This paper examines one common type of irregular continuous bridge and studies the prediction of its elastoplastic displacement demand using a new nonlinear static procedure, which facilitates displacement-based seismic design. Three irregular continuous bridges are analyzed to develop the equivalent SDOF system, build the capacity spectrum and the inelastic spectrum, and formulate the new nonlinear static analysis. The proposed approach simplifies the prediction of the elastoplastic displacement demand and is validated by parametric analysis. The new nonlinear static procedure is also used to carry out the displacement-based seismic design procedure. A design example shows that, after several combinations of the capacity spectrum (obtained by pushover analysis) and the inelastic demand spectrum, a simplified displacement-based seismic design of common irregular continuous bridges can be achieved, and the seismic damage to the structure is effectively controlled by this design.
APA, Harvard, Vancouver, ISO, and other styles
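A minimal, N2-style version of the "capacity spectrum versus inelastic demand spectrum" combination mentioned above is sketched below for an equivalent bilinear SDOF system: the strength reduction factor follows from the elastic demand and the yield strength, a simple R–μ–T relation converts it to ductility, and the product μ·Dy gives the displacement demand. The bilinear parameters, corner period, and R–μ–T relation are illustrative assumptions, not the procedure calibrated in the paper.

```python
import numpy as np

def performance_point(Dy, Ay, Sa_el, Tc=0.5, g=9.81):
    """N2-style estimate of the performance point for an equivalent SDOF system.
    Dy, Ay : yield displacement (m) and yield pseudo-acceleration (g) of the
             bilinear capacity spectrum (placeholder values, not the paper's bridges).
    Sa_el  : elastic spectral acceleration (g) at the elastic period.
    Tc     : corner period between the short- and medium-period ranges."""
    Te = 2.0 * np.pi * np.sqrt(Dy / (Ay * g))   # elastic period of the equivalent SDOF
    R = Sa_el / Ay                              # required strength reduction factor
    if R <= 1.0:                                # system stays elastic
        return (Te / (2 * np.pi)) ** 2 * Sa_el * g, Sa_el
    if Te >= Tc:
        mu = R                                  # equal-displacement rule
    else:
        mu = (R - 1.0) * Tc / Te + 1.0          # short-period R-mu-T relation (N2 form)
    Sd_p = mu * Dy                              # inelastic displacement demand
    return Sd_p, Ay                             # Sa on the (near-)plastic branch, hardening ignored

print(performance_point(Dy=0.04, Ay=0.25, Sa_el=0.6))
```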
9

Schoo, Adrian M., Rosalie A. Boyce, Lee Ridoutt, and Teresa Santos. "Workload capacity measures for estimating allied health staffing requirements." Australian Health Review 32, no. 3 (2008): 548. http://dx.doi.org/10.1071/ah080548.

Full text
Abstract:
Workforce planning methodologies for the allied health professions are acknowledged as rudimentary despite the increasing importance of these professions to health care across the spectrum of health services settings. The objectives of this study were to (i) identify workload capacity measures and methods for profiling allied health workforce requirements from a systematic review of the international literature; (ii) explore the use of these methods in planning workforce requirements; (iii) identify barriers to applying such methods; and (iv) recommend further action. Future approaches to workforce planning were explored through a systematic review of the literature, interviews with key stakeholders and focus group discussions with representatives from the different professional bodies and health agencies in Victoria. Results identified a range of methods used to calculate workload requirements or capacity. In order of increasing data demands and costliness to implement, workload capacity methods can be broadly classified into four groups: ratio-based, procedure-based, categories of care-based and diagnostic or casemix-based. Despite inherent limitations, the procedure-based measurement approach appears to be most widely accepted. Barriers to more rigorous workforce planning methods are discussed and future directions explored through an examination of the potential of casemix and mixed-method approaches.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Li Yong, Xiao Li Xu, and Guo Xin Wu. "Grey Modeling Method of Wear Prediction for Reciprocating Machinery Based on Spectrum Analysis." Applied Mechanics and Materials 120 (October 2011): 582–86. http://dx.doi.org/10.4028/www.scientific.net/amm.120.582.

Full text
Abstract:
Oil spectrum analysis is an important method for studying mechanical wear without disassembly. Because oil analysis samples are limited in number and the sample data change randomly, grey theory is applied to predicting the wear of reciprocating engines. In grey modeling, if the sample data do not satisfy the smoothness condition, the data are first parameterized to weaken the randomness of the original random sequence; an equal-interval treatment of the initial data sequence is then carried out, and the sequence of grey values is determined according to the variation law of the sample data; finally, unequal-interval processing is performed on four-dimensional data using a timing-fitting method. Tests confirm that grey modeling with the timing-fitting method, in which the grey-value points are determined from the wear law, can significantly increase the accuracy of wear prediction.
APA, Harvard, Vancouver, ISO, and other styles
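The grey-model forecasting idea referred to above can be illustrated with the basic GM(1,1) algorithm: accumulate the series, fit the grey differential equation by least squares, and restore forecasts by first differences. The sketch below uses equal-interval data and made-up wear values; the paper's smoothness checks, parameterization, and unequal-interval timing-fitting refinements are not reproduced.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Basic GM(1,1) grey model: fit on the accumulated series, forecast `steps` ahead.
    x0 : 1-D array of (equal-interval) wear measurements."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # mean generating sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # grey development / control coefficients
    def x1_hat(k):                                   # accumulated prediction, k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    n = len(x0)
    fitted_x1 = np.array([x1_hat(k) for k in range(n + steps)])
    restored = np.concatenate([[x0[0]], np.diff(fitted_x1)])   # inverse accumulation
    return restored

wear = [2.87, 3.28, 3.34, 3.62, 3.86]                # illustrative ppm values from oil analysis
print(gm11_forecast(wear))
```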

Dissertations / Theses on the topic "Capacity spectrum-based methods"

1

Patankar, Digvijay Babasaheb. "Capacity Spectrum Method : Energy Based Approach." Thesis, 2011. https://etd.iisc.ac.in/handle/2005/1978.

Full text
Abstract:
The capacity spectrum method is a very popular tool in the performance-based earthquake-resistant design of structures. Although it involves only nonlinear static analysis, it can be used to predict the dynamic behaviour of a building under earthquake load. Since the analysis is static rather than dynamic, it is well suited to design offices and low-end computer terminals, in contrast to dynamic analysis, which is very resource-intensive. There are several methods, and variations of methods, for performing the nonlinear static analysis, popularly known as pushover analysis, and converting it to a capacity spectrum: displacement-based pushover analysis, force-based pushover analysis, modal pushover analysis, energy-based pushover analysis, and others. A few attempts have been made to account for the change in mode shape, but all of these methods are silent about the change in frequency due to the formation of hinges in the structure. The available codes for building design, such as ATC-40, provide some guidelines for obtaining the capacity spectrum but do not yet account properly for ductility when converting the pushover curve to a capacity spectrum. The present study addresses the above issues while proposing a new energy-based approach for drawing the capacity spectrum. Chapter 1 introduces the concepts of pushover analysis and the capacity spectrum; the different approaches for obtaining these curves, their theoretical background, variations, and limitations are discussed as a quick review. Chapter 2 reviews the literature on these topics. It is found that most past studies of pushover analysis have been carried out on framed buildings, and in the last few years attempts have also been made to consider the effect of torsion. Summarising the various contributions to date, it may be concluded that even in the earlier multimode pushover analyses, the effect of the different modes was considered only in the static force distribution. Further, the spectral acceleration is obtained as the ratio of the base shear to α times the weight of the building, where α is the modal mass coefficient. Only the first-mode frequency could be used to convert the maximum displacement at the top into the spectral acceleration, and the corresponding maximum potential energy (P.E.) could be used for the equivalence of the MDOF and SDOF systems. In Chapter 3, which follows, this limitation is removed, as explained below. Chapter 3 proposes the new methodology, based on energy equivalence, step by step. For a given multistorey building, a displacement profile proportional to the effective mode shape is applied to the building. The effective mode shape can be the first mode shape or a combination of the first few mode shapes. In the present study, two cases are considered: in the first case, the effective mode shape is taken as the first mode shape itself, whereas in the second case it is taken as a linear combination of the first three modes weighted by the corresponding participation factors. A nonlinear static analysis is then performed on the structure with the above displacement profile. Owing to this imposed displacement profile, the structure yields at a few locations. The yielded structure is analysed again for eigenvalues and mode shapes, and the first three mode shapes are extracted along with their participation factors.
The deflected structure is then subjected again to a deflection proportional to the effective mode shape, and the analysis is continued until collapse. The chapter also describes the details of the model used for simulation. Two kinds of simulation are performed on the model: one considering only a single mode of vibration, and the other considering multiple modes (three in this case). Chapter 4 discusses the results of the simulations performed on the model; the single-mode and multimode cases are treated and discussed separately. The proposed method is in its nascent stage, and considerable modification and validation work is still needed before it can be considered acceptable. Chapter 5 summarises the overall outcome of the present study and outlines the scope for further work.
APA, Harvard, Vancouver, ISO, and other styles
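The core conversion discussed in the thesis, turning a pushover history into a capacity spectrum with an energy-equivalent displacement coordinate, can be sketched as below: at each step the work done by the lateral forces, divided by the base shear, gives the increment of spectral displacement, while the spectral acceleration follows from the base shear and an effective modal mass. The assumed force pattern, the effective-mass estimate, and the omission of the thesis's re-evaluation of mode shapes and frequencies after hinge formation are all simplifying assumptions of this sketch.

```python
import numpy as np

def energy_based_capacity_spectrum(V, U, masses, g=9.81):
    """Convert a pushover history to a capacity spectrum using work increments.
    V      : (n_steps,) base shear at each pushover step
    U      : (n_steps, n_floors) floor displacements at each step
    masses : (n_floors,) floor masses
    Returns (Sd, Sa): energy-based spectral displacement (m) and acceleration (g)."""
    V = np.asarray(V, float)
    U = np.asarray(U, float)
    m = np.asarray(masses, float)
    Sd = np.zeros(len(V))
    for k in range(1, len(V)):
        # assume the lateral force pattern is proportional to mass times the current
        # displacement profile, scaled to the base shear (an illustrative assumption)
        phi = U[k] / max(U[k, -1], 1e-12)
        F = m * phi
        F = F * V[k] / F.sum()
        dE = F @ (U[k] - U[k - 1])          # incremental work done by the lateral forces
        Sd[k] = Sd[k - 1] + dE / max(V[k], 1e-12)
    phi = U[-1] / max(U[-1, -1], 1e-12)     # effective (first-mode-like) mass from final profile
    M_eff = (m @ phi) ** 2 / (m @ phi ** 2)
    Sa = V / (M_eff * g)
    return Sd, Sa
```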
2

Nettis, Andrea. "Seismic fragility and risk assessment of large bridge portfolios: efficient mechanical approaches based on multi-source data collection and integration." Doctoral thesis, 2021. http://hdl.handle.net/11589/229598.

Full text
Abstract:
In earthquake-prone countries, most of the existing bridges were designed in the past without appropriate anti-seismic regulations and can induce important direct or indirect losses if subjected to severe seismic ground shaking. The main challenges in the extensive seismic risk assessment of existing bridges are related to the large number of structures to be inspected and the limited available resources. Therefore, time- and cost-saving approaches for providing seismic risk metrics on existing bridges are needed. This dissertation investigates efficient methodologies for bridge-specific seismic risk assessment within portfolio analysis by using multi-source data integration and simplified mechanical approaches. A methodology for multi-source data collection is described. The applicability of remote-sensing data in populating inventory for structural analysis purposes is discussed. A procedure for using Remotely Piloted Aircraft Systems and photogrammetry to retrieve exhaustive structural information is presented. The effectiveness of displacement-based assessment approaches to be used together with the capacity spectrum method (CSM) for seismic performance assessment is analysed, considering continuous-deck reinforced-concrete (RC) and steel truss multi-span bridges. A fragility analysis methodology based on cloud analysis using the CSM results is also presented. The CSM is applied with real (i.e. recorded) ground-motion spectra (as opposed to code-based conventional spectra) to explicitly consider record-to-record variability. A seismic risk assessment framework combining the proposed efficient data collection and simplified probabilistic seismic assessment methodologies is finally presented. It accounts for the influence of knowledge-based uncertainties associated with an initial incomplete data collection. The proposed approach is applied and tested on eight simply-supported RC bridges of the Basilicata national road network.
APA, Harvard, Vancouver, ISO, and other styles
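The cloud-analysis fragility step mentioned in the abstract can be illustrated with the standard lognormal recipe: regress the log of the displacement demand (here, one CSM result per recorded ground-motion spectrum) on the log of the intensity measure, then convert the regression and a demand threshold into a median and dispersion. This is the generic cloud-analysis formulation, offered as an assumption about the workflow rather than the thesis's exact implementation.

```python
import numpy as np
from scipy.stats import norm

def cloud_fragility(im, edp, edp_limit):
    """Lognormal fragility from cloud analysis.
    im        : intensity measure of each record (e.g. Sa at a reference period)
    edp       : displacement demand obtained with the CSM for each record
    edp_limit : demand threshold defining the damage state
    Returns (theta, beta): median IM and lognormal dispersion of the fragility curve."""
    x, y = np.log(im), np.log(edp)
    b, a = np.polyfit(x, y, 1)                      # ln(EDP) = a + b * ln(IM), b assumed > 0
    sigma = np.std(y - (a + b * x), ddof=2)         # record-to-record dispersion
    theta = np.exp((np.log(edp_limit) - a) / b)     # median capacity in IM terms
    beta = sigma / b
    return theta, beta

def p_exceed(im_grid, theta, beta):
    """Probability of exceeding the damage state at each IM value."""
    return norm.cdf(np.log(np.asarray(im_grid) / theta) / beta)
```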

Book chapters on the topic "Capacity spectrum-based methods"

1

Thakur, Ajay Singh, and Tanmay Gupta. "Analytical Investigation of Moment Resisting Frame Structure—A Case Study on Performance-Based Capacity Spectrum Method." In Lecture Notes in Civil Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6557-8_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kusunoki, K. "Damage Assessment in Japan and Potential Use of New Technologies in Damage Assessment." In Springer Tracts in Civil Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68813-4_2.

Full text
Abstract:
Right after an earthquake, it is quite important to evaluate the damage level of the buildings in the affected area. In Japan, a rapid inspection is conducted to evaluate the risk of collapse due to an aftershock. If any damage is detected, a damage classification must be conducted, which takes time but places the damage into five damage categories. Japan has standards for both the rapid inspection and the damage classification; they are outlined in this chapter. Similarly to the damage classification, the loss of the house and home contents is assessed for earthquake insurance, and the method used for earthquake insurance is also introduced. Since these procedures are based on visual inspection, it is quite difficult to investigate the damage of high-rise buildings and of buildings covered by finishing. Recently, much research has been conducted on using sensors for automatic and real-time damage classification. A structural health monitoring method with accelerometers based on the capacity spectrum method, which is currently installed in more than 40 buildings, is also introduced.
APA, Harvard, Vancouver, ISO, and other styles
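A capacity-spectrum-based monitoring scheme of the kind mentioned above tracks a representative SDOF response computed from measured floor motions. The sketch below uses a mass-weighted first-mode equivalence for the representative displacement and a base-shear-over-total-mass estimate for the representative acceleration; both reductions are generic assumptions and may differ from the formulation actually installed in the instrumented buildings.

```python
import numpy as np

def representative_sdof(masses, rel_disp, abs_acc):
    """Reduce measured floor histories to a representative SDOF (Sd, Sa) history.
    masses   : (n_floors,) floor masses
    rel_disp : (n_steps, n_floors) floor displacements relative to the ground
    abs_acc  : (n_steps, n_floors) absolute floor accelerations"""
    m = np.asarray(masses, float)
    d = np.asarray(rel_disp, float)
    a = np.asarray(abs_acc, float)
    eps = 1e-9
    Sd = (d ** 2 @ m) / (d @ m + eps)       # mass-weighted representative displacement
    Sa = (a @ m) / m.sum()                  # base shear from inertia forces / total mass
    return Sd, Sa                           # plotting Sa against Sd traces the capacity curve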
3

Shimomura, Suguru. "Reservoir Computing Based on Iterative Function Systems." In Photonic Neural Networks with Spatiotemporal Dynamics. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5072-0_11.

Full text
Abstract:
Various approaches have been proposed to construct reservoir computing systems. However, the network structure and information processing capacity of these systems are often tied to their individual implementations, which typically become difficult to modify after physical setup. This limitation can hinder performance when the system is required to handle a wide spectrum of prediction tasks. To address this limitation, it is crucial to develop tunable systems that can adapt to a wide range of problem domains. This chapter presents a tunable optical computing method based on the iterative function system (IFS). The tuning capability of IFS provides adjustment of the network structure and optimizes the performance of the optical system. Numerical and experimental results show the tuning capability of the IFS reservoir computing. The relationship between tuning parameters and reservoir properties is discussed. We further investigate the impact of optical feedback on the reservoir properties and present the prediction results.
APA, Harvard, Vancouver, ISO, and other styles
4

Nugroho, Bayu Adi, Aufa Hanif Abiyyu Sulthon, Ahadin Banu Muflih, Edy Purwanto, and Stefanus Adi Kristiawan. "Seismic Assessment of Non-engineered Residential Frame Building Based on Performance Point of Capacity Spectrum Method: A Case Study in Pacitan Regency, Indonesia." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-2143-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xu, Dabin, Yi Huang, Guilin Du, and Xiaoyang Zhao. "Study of Aqueduct Based on Dynamic Time History Method and Response Spectrum Method." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde240645.

Full text
Abstract:
In this paper, an anti-seepage aqueduct project is taken as the research object, and a full-bridge model is established with the finite element software MIDAS FEA NX to analyze its mechanical performance. This type of aqueduct is a cast-in-situ U-shaped structure; its construction is relatively simple, construction quality can be controlled effectively, construction progress is accelerated, and it therefore has reference value for related projects. Based on the structural characteristics of this type of aqueduct, the overall calculation shows that the stiffness, bearing capacity, and strength of the bridge meet the requirements of the code and can effectively satisfy the needs of vehicle traffic and water crossing. At present, seismic analysis of aqueducts is generally based on response spectrum analysis, in which eigenvalue analysis is used to determine the natural vibration periods and mass participation coefficients; typical natural vibration periods and mass participation coefficients are then combined with the seismic response spectrum acceleration to simulate the stress characteristics of the overall bridge structure under earthquake conditions. Based on these design ideas, this paper puts forward the dynamic time-history analysis method as a comparison with the response spectrum method, which can provide a reference for similar bridge designs.
APA, Harvard, Vancouver, ISO, and other styles
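On the response-spectrum side of the comparison described above, peak modal responses are combined statistically. The sketch below shows the SRSS combination of modal displacement maxima at a single node, with placeholder periods, participation factors, and a flat demo spectrum instead of the aqueduct model and code spectrum used in the paper.

```python
import numpy as np

def srss_modal_response(periods, gammas, phis_at_node, sa_of_T):
    """Peak response at one node by SRSS combination of modal maxima.
    periods      : modal periods (s)
    gammas       : modal participation factors
    phis_at_node : mode-shape ordinates at the node of interest
    sa_of_T      : callable returning spectral acceleration (m/s^2) at a period."""
    peaks = []
    for T, G, phi in zip(periods, gammas, phis_at_node):
        Sd = (T / (2 * np.pi)) ** 2 * sa_of_T(T)   # modal spectral displacement
        peaks.append(G * phi * Sd)                 # peak modal displacement at the node
    return np.sqrt(np.sum(np.square(peaks)))

# illustrative use with a flat 2.5 m/s^2 plateau spectrum and made-up modal data
print(srss_modal_response([0.8, 0.25, 0.12], [1.3, 0.45, 0.2],
                          [1.0, -0.6, 0.3], lambda T: 2.5))
```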
6

Sreeja, P. C., Rekha Sharma, and Sapna Nehra. "Divergent Applications of Hydrotalcite-Based Materials." In Hydrotalcite-based Materials: Synthesis, Characterization and Application. BENTHAM SCIENCE PUBLISHERS, 2024. http://dx.doi.org/10.2174/9789815256116124010005.

Full text
Abstract:
Hydrotalcites (HDTL) are layered double hydroxides of the anionic clay family. They possess a large surface area, ability to accommodate divalent and trivalent metallic ions, anion exchange capacity and intercalation ability. HDTL play a vital role in nanotechnology, specifically in various nanomaterial production, functionalization, and applications. HDTL nanohybrids with unique properties are created through intercalation with various compounds like inorganic anions, organic anions, biomolecules, active pharmaceutical ingredients, and dyes. Their adaptive layered charge density and chemical combination constitute HDTL as resourceful materials befitting for a broad spectrum of applications. There are a variety of methods for preparing HDTL based nanomaterials, including co-precipitation, sol gel method, ion exchange method, intercalation method and microwave assisted methods. The morphologies of HDTL materials are characterised using technologies like X-ray powder diffraction (XRD), Fourier Transform Infrared Spectroscopy (FTIR), Thermogravimetry coupled (TGA) with Differential Scanning Calorimetry (DSC), Scanning Electron Microscopy (SEM), and Transmission Electron Microscopy (TEM). The nanocomposites of HDTL are widely used in the field of fine chemical synthesis, pharmaceutical field, water purification, and agriculture. Biocompatible HDTL nanostructures enticed remarkable attention in therapeutic and diagnostic functions. HDTL nanohybrids are prominent bio reservoirs for drug and delivery systems and used in cancer therapy. These materials have been utilised by bioimaging techniques such as MRI and CT. The HDTL-based nanomaterials are effective adsorbents and find widespread application in the water treatment industry. These are used for the amelioration of polluted water by removing heavy metals, dyes, and other impurities. These materials are also used as flame retardants, in porous ceramics, carbon dioxide adsorption and deodorants. This chapter describes in detail about the preparation methods, properties, structural characterisation, and wide applications of HDTL based nanohybrids.
APA, Harvard, Vancouver, ISO, and other styles
7

Kang, Lei, Fei Yang, Kai Wang, et al. "GRIF-DM: Generation of Rich Impression Fonts Using Diffusion Models." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240492.

Full text
Abstract:
Fonts are integral to creative endeavors, design processes, and artistic productions. The appropriate selection of a font can significantly enhance artwork and endow advertisements with a higher level of expressivity. Despite the availability of numerous diverse font designs online, traditional retrieval-based methods for font selection are increasingly being supplanted by generation-based approaches. These newer methods offer enhanced flexibility, catering to specific user preferences and capturing unique stylistic impressions. However, current impression font techniques based on Generative Adversarial Networks (GANs) necessitate the utilization of multiple auxiliary losses to provide guidance during generation. Furthermore, these methods commonly employ weighted summation for the fusion of impression-related keywords. This leads to generic vectors with the addition of more impression keywords, ultimately lacking in detail generation capacity. In this paper, we introduce a diffusion-based method, termed GRIF-DM, to generate fonts that vividly embody specific impressions, utilizing an input consisting of a single letter and a set of descriptive impression keywords. The core innovation of GRIF-DM lies in the development of dual cross-attention modules, which process the characteristics of the letters and impression keywords independently but synergistically, ensuring effective integration of both types of information. Our experimental results, conducted on the MyFonts dataset, affirm that this method is capable of producing realistic, vibrant, and high-fidelity fonts that are closely aligned with user specifications. This confirms the potential of our approach to revolutionize font generation by accommodating a broad spectrum of user-driven design requirements. Our code is publicly available at https://github.com/leitro/GRIF-DM.
APA, Harvard, Vancouver, ISO, and other styles
8

Simonin, Gilles, and Barry O'Sullivan. "Optimisation for the Ride-Sharing Problem: a Complexity-based Approach." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2014. https://doi.org/10.3233/978-1-61499-419-0-831.

Full text
Abstract:
The dial-a-ride problem is a classic challenge in transportation and continues to be relevant across a large spectrum of applications, e.g. door-to-door transportation services, patient transportation, etc. Recently a new variant of the dial-a-ride problem, called ride-sharing, has received attention due to emergence of the use of smartphone-based applications that support location-aware transportation services. The general dial-a-ride problem involves complex constraints on a time-dependent network. In ride-sharing riders (resp. drivers) specify transportation requests (resp. offers) between journey origins and destinations. The two sets of participants, namely riders and drivers, have different constraints; the riders have time windows for starting and finishing the journey, while drivers have a starting time window, a destination, and a vehicle capacity. The challenge is to maximise the overall utility of the participants in the system which can be defined in a variety of ways. In this paper we study variations of the ride-sharing problem, under different notions of utility, from a computational complexity perspective, and identify a number of tractable and intractable cases. These results provide a basis for the development of efficient methods and heuristics for solving problems of real-world scale.
APA, Harvard, Vancouver, ISO, and other styles
9

Sharon, S. Miraclin. "CARDIAC REHABILITATION." In Futuristic Trends in Medical Sciences Volume 3 Book 9. Iterative International Publisher, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bfms9p1ch20.

Full text
Abstract:
Cardiac rehabilitation (CR) is an extensive program designed to manage the mortality risk associated with cardiovascular disease and enhance the overall function of the cardiovascular system while elevating the individual's quality of life. This comprehensive initiative primarily revolves around physical exercise, the cultivation of a healthy lifestyle, the incorporation of cardio-active medications, provision of educational support, and comprehensive psychical and psychological evaluations. These integrated components collectively offer safety and substantial benefits, leading to remarkable improvements in lifestyle quality, functional capacity, mortality rates, and reduced hospital readmissions. Current guidelines strongly endorse the application of cardiac rehabilitation across a wide spectrum of cardiac conditions. Notably, exercise-based CR is recognized as a pivotal element in the comprehensive management of coronary artery disease (CAD). It's imperative that exercise is tailored to each individual's unique characteristics, optimizing the rehabilitation program's effectiveness. Cardiac rehabilitation encompasses an overview of recommended components for an effective cardiac rehabilitation or secondary prevention program. It also delves into varied delivery methods for these services, suggests avenues for future research, and rationalizes each program element. Notably, exercise training is underscored as a crucial focus. The conventional challenges inherent in center-based CR could potentially be resolved through the integration of digital technology, thereby enhancing care delivery. The American Heart Association's science advisory serves as a guide for the development and implementation of digital cardiac rehabilitation interventions, specifically designed for clinical settings. This initiative aims to amplify health outcomes and promote health equity. However, the comprehension of these interventions as a digital approach to CR is still in its infancy, with much to explore. The realm of digital health technologies, encompassing internet-based platforms, wearable devices, and mobile applications, holds the potential to mitigate challenges associated with traditional facility-based CR programs.
APA, Harvard, Vancouver, ISO, and other styles
10

Sharma, Sourabh, Mr Ashish, Ashok Kumar, and O. P. Thakur. "PEROVSKITE BiFeO3: EMERGING MATERIAL FOR WASTEWATER TREATMENT." In Futuristic Trends in Chemical Material Sciences & Nano Technology Volume 3 Book 12. Iterative International Publishers, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3becs12p1ch6.

Full text
Abstract:
This chapter delves into the promising applications of BiFeO3, a perovskite oxide, in the field of wastewater treatment. As global concerns regarding water pollution escalate, innovative solutions are imperative to address this critical issue. This chapter focuses on the unique properties and versatile characteristics of BiFeO3 that position it as a novel contender for advanced wastewater treatment processes. It begins by elucidating the fundamental properties of perovskite materials and their relevance to environmental remediation. It then delves into the synthesis methods and structural modifications of BiFeO3 to enhance its performance in pollutant removal. The multifunctional nature of BiFeO3, including its photocatalytic, adsorptive, and catalytic attributes, is explored in depth, shedding light on its efficacy in degrading a spectrum of organic pollutants, heavy metals, and even emerging contaminants. Furthermore, it critically examines the factors influencing the photocatalytic efficiency and adsorption capacity of BiFeO3, such as crystal structure, morphology, and surface area. Rare earth and transition metal substituted BiFeO3 and the integration of BiFeO3 into various hybrid nanocomposites and its synergistic effects for enhanced wastewater treatment are also discussed, highlighting the role of nanotechnology in advancing environmental remediation strategies. Real-world applications and case studies showcase the successful utilization of BiFeO3-based materials in treating wastewater from industrial, agricultural, and municipal sources. The material's scalability, cost-effectiveness, and potential for regeneration contribute to its appeal as a sustainable solution for diverse wastewater treatment challenges. In conclusion, "Perovskite BiFeO3: Emerging Material for Wastewater Treatment" underscores the paradigm shift toward harnessing advanced materials like BiFeO3 for tackling contemporary water pollution issues. Its comprehensive exploration of synthesis techniques, material properties, and application strategies provides valuable insights for researchers, engineers, and policymakers engaged in developing efficient and eco-friendly solutions to the global water crisis. As an emerging frontrunner in the realm of wastewater treatment, BiFeO3 holds the promise of revolutionizing the way we approach water purification and environmental conservation.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Capacity spectrum-based methods"

1

Tian, Xiang, Wei Zhang, Li Huang, Qin Li, Erheng Yang, and Liang Xiao. "Comparison of Methods for Predicting Formation Permeability from Electrical Imaging Logging in Fractured Tight Reservoirs." In GOTECH. SPE, 2024. http://dx.doi.org/10.2118/219165-ms.

Full text
Abstract:
Permeability is of great importance in indicating formation filtration capacity and deliverability, and it therefore plays a key role in the evaluation of exploration and development wells. However, accurately predicting reservoir permeability has been a key problem that has puzzled petrophysicists for decades. The common methods, established on multivariate statistics and widely applied, lose their effectiveness, and nuclear magnetic resonance (NMR)-based models, e.g., the Schlumberger Doll Research (SDR) model and the Timur-Coates model, cannot be used reliably because saturated hydrocarbon or methane gas (CH4) affects the NMR response, especially in tight reservoirs whose complicated pore structure weakens the relationships between permeability and other parameters. In addition, fractures play an important role in connecting intergranular pores and increasing permeability, yet conventional and NMR logging responses cannot capture this improvement. Since the birth of electrical imaging logging in the late 1980s, quantitative characterization of fractured tight reservoirs has become possible. In this study, to characterize the role of fractures in improving the filtration capacity and permeability of fractured tight reservoirs, the Palaeogene tight reservoirs of the Huizhou Depression, eastern South China Sea Basin, are used as an example; two new models for predicting permeability from electrical imaging logging are proposed, and their reliability and accuracy are compared. In the first model, two parameters are extracted from the porosity frequency spectrum, defined as the logarithmic geometric mean value (φmv) and the golden section point variance (σg), and a relationship is established that connects formation permeability (K) with porosity (φ), φmv, and σg. Based on this relationship, the permeability of fractured tight reservoirs can be predicted from the porosity frequency spectrum in intervals where electrical imaging logging has been acquired. In the second model, the classical hydraulic flow unit (HFU) approach is improved, and a new model is established to predict the flow zone indicator (FZI) from electrical imaging logging in order to classify fractured formations. In both models, all the coefficients involved are calibrated using experimental results from 118 core samples. Finally, the two models are extended to field applications to predict permeability continuously from electrical imaging logging, and the predicted permeabilities are compared with core-derived results. The good consistency between them shows that the two proposed models are both usable in the target Palaeogene fractured tight reservoirs of the Huizhou Depression, especially the HFU-based model, which can be used in all three kinds of formations; the average relative error between permeabilities predicted with the HFU-based model and core-derived results is only 14.37%. If the classical models are applied directly to the target formations, however, the permeability curve is underestimated.
APA, Harvard, Vancouver, ISO, and other styles
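The abstract defines the two spectrum-derived parameters only loosely, so the sketch below should be read as a guess at their form: φmv is taken as the log-domain (geometric) mean of the porosity frequency spectrum, σg as a variance taken about the golden-section quantile of the cumulative spectrum, and the permeability model is a placeholder power law whose coefficients would have to be calibrated against core data, as the paper does. None of these definitions or coefficients come from the paper itself.

```python
import numpy as np

def spectrum_parameters(phi_bins, freq):
    """Summary parameters of an electrical-imaging porosity frequency spectrum.
    phi_bins : bin-centre porosities (fractions)
    freq     : frequency of each bin (will be normalised to sum to 1)
    The definitions of phi_mv and sigma_g below are assumptions inferred from the abstract."""
    phi_bins = np.asarray(phi_bins, float)
    w = np.asarray(freq, float)
    w = w / w.sum()
    phi_mv = np.exp(np.sum(w * np.log(np.maximum(phi_bins, 1e-6))))   # logarithmic geometric mean
    cum = np.cumsum(w)
    phi_gs = np.interp(0.618, cum, phi_bins)                          # golden-section point (assumed)
    sigma_g = np.sum(w * (phi_bins - phi_gs) ** 2)                    # variance about that point
    return phi_mv, sigma_g

def permeability(phi_total, phi_mv, sigma_g, a=1.0, b=2.0, c=1.0, d=0.5):
    """Placeholder power-law model K = a * phi^b * phi_mv^c * sigma_g^d (arbitrary units);
    the coefficients would be calibrated against core measurements in practice."""
    return a * phi_total ** b * phi_mv ** c * sigma_g ** d
```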
2

Moretić, Antonela, Mislav Stepinac, and Karlo Ožić. "A STATE-OF-THE-ART REVIEW: SEISMIC VULNERABILITY ASSESSMENT METHODS FOR MASONRY STRUCTURES." In 3rd Croatian Conference on Earthquake Engineering. University of Zagreb Faculty of Civil Engineering, 2025. https://doi.org/10.5592/co/3crocee.2025.4.

Full text
Abstract:
Seismic vulnerability assessment is a comprehensive process that involves evaluating the susceptibility of structures to potential damage or failure during seismic events. This assessment is crucial for understanding and mitigating the impact of earthquakes on structures. Still, it is necessary to emphasize that vulnerability assessment is not a singular procedure: it varies based on the scale of analysis and the specific goals of the assessment. Conducting such assessments requires significant technical expertise as well as human and financial resources, which are often limited. These challenges are particularly evident in historic urban areas due to the unique characteristics of historical buildings, including complex materials, construction techniques, and the difficulty of quantifying their cultural value and significance. Recent decades have seen significant advancements in vulnerability assessment methods, driven by technological progress in seismic event detection and structural analysis software. Previously, earthquake information was largely limited to qualitative descriptions in historical records. Methods for assessing structural vulnerability can be categorized into empirical, analytical, expert-based, or hybrid approaches. Initially, macroseismic methods were the most widely used. These methods are based on observed earthquake damage, making them relatively straightforward to apply, even on a larger scale. In contrast, analytical methods focus on determining the structural response, which requires more computational and time effort. Analytical methods are further divided into detailed and simplified approaches, with the latter split into three subgroups: collapse mechanism-based methods, capacity spectrum-based methods, and fully displacement-based methods. The paper discusses the existing vulnerability assessment methods presented by various authors, emphasizing the advantages and disadvantages of each and examining their practical applications.
APA, Harvard, Vancouver, ISO, and other styles
3

P, Geetha. "Design Optimization of Solar Energy Harvesting Using Perovskite Solar Cell for Electric Vehicles Using Finite Element Method." In International Conference on Advances in Design, Materials, Manufacturing and Surface Engineering for Mobility. SAE International, 2023. http://dx.doi.org/10.4271/2023-28-0095.

Full text
Abstract:
The excellent charge-carrier mobilities and lifetimes of perovskite materials give them exceptional light absorption capacity, providing improved device potential and performance with a low-cost, commercially feasible technology. The challenges in handling perovskite cells are their strength and their environmental compatibility; resolving these issues would allow perovskite-based technology to realise its innovative potential for rapid terawatt-scale solar power deployment. Along these lines, organic photovoltaics is a fast-developing PV technology with improved cell efficiency and lifetime performance. Because organic photovoltaic cells are available in multiple colours and can be used to build transparent devices, they find application in building-integrated organic photovoltaics. Optimization of the device physics, charge-transport methods, charge-separation procedures, and interfacial effects would enable the development of stable, more effective device architectures. To this end, multi-physics simulation software based on the Finite Element Method (FEM) is used to determine the electrical performance of the device, built on materials with enhanced energy-level alignment, spectral responsiveness, and carrier transport properties, leading to the design of more effective and reliable device architectures. In this work, a hybrid perovskite semiconductor-based 2D organic photovoltaic cell is developed using the finite element method, which can be applied on the roofs of electric vehicles for photovoltaic energy generation.
APA, Harvard, Vancouver, ISO, and other styles
4

Isaeva, Oksana, Marina Boronenko, and Pavel Gulyaev. "Measuring the Velocity and Temperature of Particles in a Low-Temperature Plasma Flow." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001628.

Full text
Abstract:
The improvement of optical methods for diagnosing fast processes in plasma deposition technologies is associated with the solution of a well-known contradiction between an increase in the speed of recording a track of a moving particle and a decrease in the signal level to a critical noise threshold. Improving the accuracy of measuring the temperature of the condensed phase of the flow, under the conditions of plasma background radiation, is possible with the transition from the brightness pyrometry of individual particles to spectral methods for determining the temperature distribution of a large group of particles by their thermal spectrum. The purpose of this study is to experimentally verify the effectiveness of using microchannel photomultipliers and nanosecond electron-optical switches to improve the accuracy and speed of time-of-flight anemometry and brightness pyrometry methods. The experimental technique for detecting tracks of self-luminous heated particles in plasma is based on the use of specialized high-speed video cameras with parallel signal reading. The technical possibilities of using high-speed video cameras for registration of particles in the technological process of plasma spraying of coatings are shown. The use of an optical shutter with a nanosecond resolution makes it possible to measure the particle velocity in the range from 10 to 350 m/s with an accuracy that ensures the calculation of the dynamic parameters of particle acceleration in the jet. The use of a microchannel photomultiplier makes it possible to measure the brightness temperature of particles even at high speeds. The set of experimental data makes it possible to determine the form of the fundamental diagram of a two-phase plasma jet by the value of the particle transfer velocity in the idle mode and the maximum load capacity of the flow. The considered experimental technique makes it possible to measure the dynamic constants of motion and heating of both individual particles in a plasma flow and the fundamental diagram of interaction during collective motion. The proposed diagnostic method is recommended to be used to study the load capacity of two-phase flows, as well as an indicator of the limiting technological state of the plasma torch and the transition to unstable spraying modes.
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Gaoren, Ronghui Yan, Tianding Liu, Xiaoping Sun, Jiaqi Li, and Liang Xiao. "A New Approach of Predicting Fractured Tight Sandstone Reservoir Permeability Based on Electrical Image Logging." In SPE Canadian Energy Technology Conference and Exhibition. SPE, 2024. http://dx.doi.org/10.2118/218025-ms.

Full text
Abstract:
Permeability is an important input parameter in tight reservoir characterization and evaluation, so precise prediction of formation permeability is indispensable. However, permeability prediction faces great challenges in tight sandstone reservoirs: empirical statistical methods and nuclear magnetic resonance (NMR)-based models lose their effectiveness because of the complicated pore structure and the effect of methane gas (CH4) or hydrocarbon on NMR responses. In addition, fractures play an important role in improving tight reservoir permeability, yet current logging responses other than electrical image logging cannot characterize this improvement. In this study, to quantitatively characterize the improvement that fractures bring to the filtration capacity of tight sandstone reservoirs and to predict permeability accurately, the Triassic Chang 63 Member of the Jiyuan Region, northwestern Ordos Basin, is used as an example, and a novel model for predicting permeability from electrical image logging is proposed. In this model, the porosity frequency spectra are first extracted to characterize the pore structure of the fractured tight sandstones. Two parameters, defined as the logarithmic geometric mean value (φmv) and the golden section point variance (σg) of the porosity frequency spectrum, are then extracted to characterize the contribution of fractures to permeability. By comparing the shape of the porosity frequency spectrum with permeability, φmv, and σg, the quality of the target fractured tight sandstone reservoirs is quantified, and relationships among permeability, φmv, and σg are established; high-quality reservoirs exhibit a wide porosity frequency spectrum and high values of φmv and σg, and vice versa. Three parameters, namely the formation total porosity, φmv, and σg, are chosen to establish a novel permeability prediction model for fractured tight sandstone reservoirs. The input parameters involved in this model are calibrated using routine experiments on 35 core samples. Finally, the model is applied in the field to calculate permeability continuously in intervals where electrical image logging has been acquired. Comparison of the predicted permeability with core-derived results shows that the proposed model is usable in the Chang 63 Member of the Jiyuan Region; the average relative error between the two kinds of permeability is only 16.54% across 12 wells. This research provides a novel technique for calculating permeability in fractured tight reservoirs. It avoids the effect of CH4 or hydrocarbon on conventional and NMR logging responses and will play an important role in permeability prediction and formation characterization for unconventional reservoirs.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Gaoren, Ronghui Yan, Tianding Liu, Xiaoping Sun, Peixian Wang, and Liang Xiao. "A Novel Method of Predicting Fractured Tight Sandstone Reservoir Permeability from Electrical Image Logging." In Mediterranean Offshore Conference. SPE, 2024. http://dx.doi.org/10.2118/223179-ms.

Full text
Abstract:
Permeability is an important input parameter in tight reservoir characterization and evaluation, so precise prediction of formation permeability is indispensable. However, permeability prediction faces great challenges in tight sandstone reservoirs: empirical statistical methods and nuclear magnetic resonance (NMR)-based models lose their effectiveness because of the complicated pore structure and the effect of methane gas (CH4) or hydrocarbon on NMR responses. In addition, fractures play an important role in improving tight reservoir permeability, yet current logging responses other than electrical image logging cannot characterize this improvement. In this study, to quantitatively characterize the improvement that fractures bring to the filtration capacity of tight sandstone reservoirs and to predict permeability accurately, the Triassic Chang 63 Member of the Jiyuan Region, northwestern Ordos Basin, is used as an example, and a novel model for predicting permeability from electrical image logging is proposed. In this model, the porosity frequency spectra are first extracted to characterize the pore structure of the fractured tight sandstones. Two parameters, defined as the logarithmic geometric mean value (φmv) and the golden section point variance (σg) of the porosity frequency spectrum, are then extracted to characterize the contribution of fractures to permeability. By comparing the shape of the porosity frequency spectrum with permeability, φmv, and σg, the quality of the target fractured tight sandstone reservoirs is quantified, and relationships among permeability, φmv, and σg are established; high-quality reservoirs exhibit a wide porosity frequency spectrum and high values of φmv and σg, and vice versa. Three parameters, namely the formation total porosity, φmv, and σg, are chosen to establish a novel permeability prediction model for fractured tight sandstone reservoirs. The input parameters involved in this model are calibrated using routine experiments on 35 core samples. Finally, the model is applied in the field to calculate permeability continuously in intervals where electrical image logging has been acquired. Comparison of the predicted permeability with core-derived results shows that the proposed model is usable in the Chang 63 Member of the Jiyuan Region; the average relative error between the two kinds of permeability is only 16.54% across 12 wells. This research provides a novel technique for calculating permeability in fractured tight reservoirs. It avoids the effect of CH4 or hydrocarbon on conventional and NMR logging responses and will play an important role in permeability prediction and formation characterization for unconventional reservoirs.
APA, Harvard, Vancouver, ISO, and other styles
7

Dobri, Mirona Letitia, Alina-Ioana Voinea, Constantin Marcu, Eva Maria Elkan, Ionuț-Dragoș Rădulescu, and Petronela Nechita. "MINDFULNESS: A PSYCHOTHERAPEUTIC METHOD OF ACCEPTANCE AND CENTERING OF THE MENTAL FRAMEWORK." In The European Conference of Psychiatry and Mental Health "Galatia". Archiv Euromedica, 2023. http://dx.doi.org/10.35630/2022/12/psy.ro.29.

Full text
Abstract:
The term mindfulness comes from Buddhist traditions and translates as awareness, concentration or remembrance. Western neuroscientists define mindfulness practices as a combination of emotional and attentional training regimes that help cultivate physical and psychological well-being and improve emotional regulation, while noting neurobiological changes in the brain. The formal introduction of Eastern ways of thinking into Western philosophy, psychology and medicine happened decades ago, generating a large spectrum of discussions and scientific works concerning the therapeutic applications of mindfulness practice. Basing our presentation on a thorough study of scientific papers, we propose a synthesis of the theoretical aspects related to mindfulness and a new perspective regarding its applications in clinical psychiatric care. Modern Western approaches to the practice have been adapted into the methods used in mindfulness-based cognitive therapy. The benefits of formal practice demonstrated from a neurological perspective are the result of a less reactive autonomic nervous system; regulation of attention, body awareness, regulation of emotions and an increased capacity for adaptation are just a few of the mechanisms involved. Mindfulness is therefore integrated into Western psychotherapy as an adjunctive or alternative treatment for several psychiatric conditions, including depression, anxiety, substance use, smoking, and insomnia. In conclusion, mindfulness shows great promise in clinical application, and the hope is that it will be used in the future to improve mental and physical wellbeing and quality of life.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Bo-Jen, C. S. Tsai, L. L. Chung, and Tsu-Cheng Chiang. "Applications of Capacity Spectrum Method for Buildings With Metallic Yielding Dampers." In ASME 2005 Pressure Vessels and Piping Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/pvp2005-71163.

Full text
Abstract:
The 921 Chi-Chi earthquake was one of the most destructive earthquakes in Taiwan in the twentieth century, causing severe damage to, or the collapse of, residential and public structures. Metallic yielding dampers are a sensible choice for retrofitting damaged structures and for enhancing the earthquake resistance of new ones. In this paper, in order to facilitate the design of metallic yielding dampers, an improved nonlinear static analysis iteration procedure based on the capacity spectrum method is proposed for buildings equipped with such dampers. Numerical results for buildings with metallic yielding dampers obtained from the proposed iteration procedure and from nonlinear dynamic analysis are compared and verified. The comparison shows that the proposed capacity-spectrum-based iteration procedure can reasonably predict the seismic responses of buildings with metallic yielding dampers during earthquakes.
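For readers unfamiliar with the underlying procedure, the sketch below illustrates a generic capacity-spectrum-method iteration: a bilinear capacity curve in spectral acceleration-displacement coordinates, equivalent viscous damping estimated from the hysteretic loop, a damping-reduced elastic demand spectrum, and a fixed-point search for the performance point. It is not the authors' improved procedure for buildings with metallic yielding dampers; the spectrum shape, damping reduction factor and all parameter values are assumptions for illustration.

# Generic capacity-spectrum-method iteration, for illustration only; it is not
# the authors' improved procedure for damper-equipped buildings. Capacity curve,
# design spectrum, damping model and all parameter values are assumptions.
import math

A_Y, D_Y = 0.25, 0.04     # assumed yield point of the capacity spectrum (g, m)
K_POST = 0.05             # assumed post-yield stiffness ratio

def capacity(sd):
    """Bilinear capacity spectrum: spectral acceleration (g) vs displacement (m)."""
    k_el = A_Y / D_Y
    return k_el * sd if sd <= D_Y else A_Y + K_POST * k_el * (sd - D_Y)

def reduced_demand(sd, beta_eq, sa0=0.8, tc=0.6):
    """Elastic demand at the secant period, reduced for equivalent damping."""
    sa_cap = capacity(sd)
    t = 2.0 * math.pi * math.sqrt(sd / (sa_cap * 9.81))     # secant period (s)
    sa_el = sa0 if t <= tc else sa0 * tc / t                # simple design spectrum
    b = (3.21 - 0.68 * math.log(100.0 * beta_eq)) / 2.12    # ATC-40-type reduction
    return sa_el * max(min(b, 1.0), 0.3)

def performance_point(kappa=0.7, beta0=0.05, tol=1e-4):
    """Iterate displacement until capacity meets the damping-reduced demand."""
    sd = D_Y
    for _ in range(100):
        sa = capacity(sd)
        # Equivalent hysteretic damping of the bilinear loop; a metallic damper
        # would show up here as extra hysteretic energy (larger kappa).
        beta_h = (2.0 / math.pi) * (A_Y * sd - D_Y * sa) / (sa * sd) if sd > D_Y else 0.0
        beta_eq = beta0 + kappa * max(beta_h, 0.0)
        sd_new = sd * reduced_demand(sd, beta_eq) / sa      # move toward intersection
        if abs(sd_new - sd) < tol:
            return sd_new, capacity(sd_new), beta_eq
        sd = 0.5 * (sd + sd_new)                            # damped update
    return sd, capacity(sd), beta_eq

print(performance_point())                                  # (Sd, Sa, beta_eq)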
APA, Harvard, Vancouver, ISO, and other styles
9

Juanjuan, Zhao, and Zhao Peikun. "A spectrum access method based on capacity utility maximization for cognitive mesh networks." In 2013 25th Chinese Control and Decision Conference (CCDC). IEEE, 2013. http://dx.doi.org/10.1109/ccdc.2013.6561656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kuramoto, Akisue, Masaya Noguchi, and Motomu Nakashima. "Biomechanical modeling of subjective fatigue during high-frequency repetitive manual-handling tasks." In 2024 AHFE International Conference on Human Factors in Design, Engineering, and Computing (AHFE 2024 Hawaii Edition). AHFE International, 2024. http://dx.doi.org/10.54941/ahfe1005720.

Full text
Abstract:
Accumulation of muscle fatigue and subjective fatigue is a significant cause of decline in individual performance. These forms of fatigue can also lead to work errors and to associated musculoskeletal disorders, so a quantitative evaluation of fatigue accumulation during work is required to manage the risk of industrial accidents. Most current methods for assessing workload are based on observing and scoring the range of joint motion and the work frequency at a single point in time; in other words, they do not fully consider the continuous accumulation of fatigue. However, even while repeating the same task, muscle fatigue-recovery states and work movements change over time. For risk management of industrial accidents, it is therefore important to objectively evaluate the subjective sense of strain and muscle fatigue from work movement data. This study aims to biomechanically model muscle fatigue and subjective fatigue during high-frequency repetitive manual-handling tasks. In an experiment, participants were asked to repeatedly lift a bottle weighing approximately 1 kg, containing salt as ballast, from a chest-height shelf to an eye-level shelf every two seconds for ten minutes. Both the start and end points were set at approximately 80% of the upper-limb length from the shoulders, at those heights, in the midsagittal plane. During the experiment, whole-body motion was measured using an inertial-sensor-based motion capture system. In addition to body motion, electromyograms and subjective evaluations of the upper limb based on the Borg CR10 scale were recorded. The measured body motion data were applied to a human musculoskeletal model to simulate muscle activity at each sampling time in the experiment. These results were then applied to the Xia and Frey-Law muscle fatigue model to simulate each muscle's residual capacity and fatigue at each sampling time during the experiment. The ratio of the simulated muscle activity to the simulated residual capacity was defined as the substantial muscle activity rate (SMAR). Changes in the SMAR during the experiment were compared with changes in subjective fatigue and in EMG median frequency. Throughout the task, the upper arm was kept in slight abduction and forward flexion; we therefore focused our discussion on the deltoid muscle, which was likely the most heavily loaded during the experiment. Frequency analysis of the electromyograms indicated that the power spectrum of the medial deltoid shifted to a lower frequency band in the first few minutes and remained generally constant for the rest of the task. The residual capacity of the medial deltoid simulated by the muscle fatigue model declined nonlinearly in the first few minutes and was almost constant after that. These results indicate that the muscle fatigue model adequately represented fatigue at the medial deltoid. The muscle activity rate simulated by the musculoskeletal model was almost the same throughout the experiment. The SMAR, on the other hand, declined in the first few minutes and continued at a higher range than the muscle activity rate. This changing trend of the SMAR was similar to the time course of the subjective fatigue of the shoulder.
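The SMAR defined in this abstract can be illustrated with a simplified three-compartment fatigue model in the spirit of Xia and Frey-Law: active, resting and fatigued motor-unit pools exchange at assumed fatigue and recovery rates, residual capacity is the non-fatigued share, and SMAR is the demanded activity divided by that residual capacity. The Python sketch below uses illustrative parameter values and a notional 2-second lifting cycle; it is not the authors' model or data.

# Hedged illustration of the SMAR: a simplified three-compartment fatigue model
# in the spirit of Xia and Frey-Law (active, resting, fatigued pools in % MVC).
# Parameter values and the 2-second lifting load profile are assumptions.
F_RATE, R_RATE = 0.01, 0.002   # assumed fatigue / recovery rates (1/s)
LD = LR = 5.0                  # assumed activation development / relaxation rates (1/s)

def simulate(target_load, dt=0.1, duration=600.0):
    """target_load(t): demanded muscle activity in % of maximum (MVC)."""
    ma, mr, mf = 0.0, 100.0, 0.0          # active, resting, fatigued pools
    trace = []
    for i in range(int(duration / dt)):
        t = i * dt
        tl = target_load(t)
        # Controller: recruit resting units toward the target, or relax extras.
        c = LD * min(tl - ma, mr) if ma < tl else LR * (tl - ma)
        ma += dt * (c - F_RATE * ma)
        mr += dt * (-c + R_RATE * mf)
        mf += dt * (F_RATE * ma - R_RATE * mf)
        residual = ma + mr                # residual capacity = 100 - fatigued
        smar = min(tl, residual) / residual if residual > 0 else 1.0
        trace.append((t, residual, smar))
    return trace

# Repetitive 2-s lifting cycle: a brief 30 % MVC effort followed by near rest.
load = lambda t: 30.0 if (t % 2.0) < 0.8 else 5.0
for t, residual, smar in simulate(load)[::600]:   # one sample per minute
    print("t=%4.0f s  residual=%5.1f %%MVC  SMAR=%.3f" % (t, residual, smar))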
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Capacity spectrum-based methods"

1

Levisohn, Sharon, Mark Jackwood, and Stanley Kleven. New Approaches for Detection of Mycoplasma iowae Infection in Turkeys. United States Department of Agriculture, 1995. http://dx.doi.org/10.32747/1995.7612834.bard.

Full text
Abstract:
Mycoplasma iowae (Mi) is a pathogenic avian mycoplasma which causes mortality in turkey embryos and, as such, has clinical and economic significance for the turkey breeder industry. Control of Mi infection is severely hampered by the lack of adequate diagnostic tests, together with resistance to most antibiotics and resilience in the environment. A markedly high degree of intra-species antigenic variation also contributes to difficulties in detection and control of infection. In this project we designed an innovative gene-based diagnostic test based on specific amplification of the 16S rRNA gene of Mi. This reaction, designated the Multi-species PCR-RFLP test, also amplifies the DNA of the pathogenic avian mycoplasmas M. gallisepticum (Mg) and M. synoviae (Ms). The test detects DNA equivalent to about 300 cfu of Mi or of either of the other two target mycoplasmas, individually or in mixed infection. It is a quick test, applicable to a wide variety of clinical samples, such as allantoic fluid or tracheal or cloacal swab suspensions. Differential diagnosis is carried out by gel electrophoresis of the PCR amplicon digested with selected restriction enzymes (restriction fragment length polymorphism). This can also be readily accomplished using a simple dot-blot hybridization assay with digoxigenin-labeled oligonucleotide probes reacting specifically with unique Mi, Mg or Ms sequences in the PCR amplicon. The PCR/OLIGO test increased sensitivity by at least 10-fold, with a capacity for rapid testing of large numbers of samples. Experimental infection trials were carried out to evaluate the diagnostic tools and to study the pathogenesis of Mi infection. Field studies and experimental infection of embryonated eggs indicated both synergistic and competitive interactions of mycoplasma pathogens in mixed infection. The value of the PCR diagnostic tests for following the time course of egg transmission was shown. A workable serological test (dot immunobinding assay) was also developed, but there was no clear-cut evidence that infected turkeys develop an immune response. Typing of a wide spectrum of Mi field isolates by a variety of gene-based molecular techniques indicated a higher degree of genetic homogeneity than predicted on the basis of the phenotypic variability. All known strains of Mi were detected by the method developed. Together with an M. meleagridis PCR test based on the same gene, the Multi-species PCR test is a highly valuable tool for diagnosis of pathogenic mycoplasmas in single or mixed infection. The further application of this rapid and specific test as part of Mi and overall mycoplasma control programs will depend on developments in the turkey industry.
APA, Harvard, Vancouver, ISO, and other styles
2

Lunn, Pete, Marek Bohacek, Jason Somerville, Áine Ní Choisdealbha, and Féidhlim McGowan. PRICE Lab: An Investigation of Consumers’ Capabilities with Complex Products. ESRI, 2016. https://doi.org/10.26504/bkmnext306.

Full text
Abstract:
Executive Summary This report describes a series of experiments carried out by PRICE Lab, a research programme at the Economic and Social Research Institute (ESRI) jointly funded by the Central Bank of Ireland, the Commission for Energy Regulation, the Competition and Consumer Protection Commission and the Commission for Communications Regulation. The experiments were conducted with samples of Irish consumers aged 18-70 years and were designed to answer the following general research question: At what point do products become too complex for consumers to choose accurately between the good ones and the bad ones? BACKGROUND AND METHODS PRICE Lab represents a departure from traditional methods employed for economic research in Ireland. It belongs to the rapidly expanding area of ‘behavioural economics’, which is the application of psychological insights to economic analysis. In recent years, behavioural economics has developed novel methods and generated many new findings, especially in relation to the choices made by consumers. These scientific advances have implications both for economics and for policy. They suggest that consumers often do not make decisions in the way that economists have traditionally assumed. The findings show that consumers have limited capacity for attending to and processing information and that they are prone to systematic biases, all of which may lead to disadvantageous choices. In short, consumers may make costly mistakes. Research has indeed documented that in several key consumer markets, including financial services, utilities and telecommunications, many consumers struggle to choose the best products for themselves. It is often argued that these markets involve ‘complex’ products. The obvious question that arises is whether consumer policy can be used to help them to make better choices when faced with complex products. Policies are more likely to be successful where they are informed by an accurate understanding of how real consumers make decisions between products. To provide evidence for consumer policy, PRICE Lab has developed a method for measuring the accuracy with which consumers make choices, using techniques adapted from the scientific study of human perception. The method allows researchers to measure how reliably consumers can distinguish a good deal from a bad one. A good deal is defined here as one where the product is more valuable than the price paid. In other words, it offers good value for money or, in the jargon of economics, offers the consumer a ‘surplus’. Conversely, a bad deal offers poor value for money, providing no (or a negative) surplus. PRICE Lab’s main experimental method, which we call the ‘Surplus Identification’ (S-ID) task, allows researchers to measure how accurately consumers can spot a surplus and whether they are prone to systematic biases. Most importantly, the S-ID task can be used to study how the accuracy of consumers’ decisions changes as the type of product changes. For the experiments we report here, samples of consumers arrived at the ESRI one at a time and spent approximately one hour doing the S-ID task with different kinds of products, which were displayed on a computer screen. They had to learn to judge the value of one or more products against prices and were then tested for accuracy. As well as people’s intrinsic motivation to do well when their performance on a task like this is tested, we provided an incentive: one in every ten consumers who attended PRICE Lab won a prize, based on their performance. 
Across a series of these experiments, we were able to test how the accuracy of consumers’ decisions was affected by the number and nature of the product’s characteristics, or ‘attributes’, which they had to take into account in order to distinguish good deals from bad ones. In other words, we were able to study what exactly makes for a ‘complex’ product, in the sense that consumers find it difficult to choose good deals. FINDINGS Overall, across all ten experiments described in this report, we found that consumers’ judgements of the value of products against prices were surprisingly inaccurate. Even when the product was simple, meaning that it consisted of just one clearly perceptible attribute (e.g. the product was worth more when it was larger), consumers required a surplus of around 16-26 per cent of the total price range in order to be able to judge accurately that a deal was a good one rather than a bad one. Put another way, when most people have to map a characteristic of a product onto a range of prices, they are able to distinguish at best between five and seven levels of value (e.g. five levels might be thought of as equivalent to ‘very bad’, ‘bad’, ‘average’, ‘good’, ‘very good’). Furthermore, we found that judgements of products against prices were not only imprecise, but systematically biased. Consumers generally overestimated what products at the top end of the range were worth and underestimated what products at the bottom end of the range were worth, typically by as much as 10-15 per cent and sometimes more. We then systematically increased the complexity of the products, first by adding more attributes, so that the consumers had to take into account two, three, then four different characteristics of the product simultaneously. One product might be good on attribute A, not so good on attribute B and available at just above the average price; another might be very good on A, middling on B, but relatively expensive. Each time the consumer’s task was to judge whether the deal was good or bad. We would then add complexity by introducing attribute C, then attribute D, and so on. Thus, consumers had to negotiate multiple trade-offs. Performance deteriorated quite rapidly once multiple attributes were in play. Even the best performers could not integrate all of the product information efficiently – they became substantially more likely to make mistakes. Once people had to consider four product characteristics simultaneously, all of which contributed equally to the monetary value of the product, a surplus of more than half the price range was required for them to identify a good deal reliably. This was a fundamental finding of the present experiments: once consumers had to take into account more than two or three different factors simultaneously their ability to distinguish good and bad deals became strikingly imprecise. This finding therefore offered a clear answer to our primary research question: a product might be considered ‘complex’ once consumers must take into account more than two or three factors simultaneously in order to judge whether a deal is good or bad. Most of the experiments conducted after we obtained these strong initial findings were designed to test whether consumers could improve on this level of performance, perhaps for certain types of products or with sufficient practice, or whether the performance limits uncovered were likely to apply across many different types of product.
An examination of individual differences revealed that some people were significantly better than others at judging good deals from bad ones. However the differences were not large in comparison to the overall effects recorded; everyone tested struggled once there were more than two or three product attributes to contend with. People with high levels of numeracy and educational attainment performed slightly better than those without, but the improvement was small. We also found that both the high level of imprecision and systematic bias were not reduced substantially by giving people substantial practice and opportunities to learn – any improvements were slow and incremental. A series of experiments was also designed to test whether consumers’ capability was different depending on the type of product attribute. In our initial experiments the characteristics of the products were all visual (e.g., size, fineness of texture, etc.). We then performed similar experiments where the relevant product information was supplied as numbers (e.g., percentages, amounts) or in categories (e.g., Type A, Rating D, Brand X), to see whether performance might improve. This question is important, as most financial and contractual information is supplied to consumers in a numeric or categorical form. The results showed clearly that the type of product information did not matter for the level of imprecision and bias in consumers’ decisions – the results were essentially the same whether the product attributes were visual, numeric or categorical. What continued to drive performance was how many characteristics the consumer had to judge simultaneously. Thus, our findings were not the result of people failing to perceive or take in information accurately. Rather, the limiting factor in consumers’ capability was how many different factors they had to weigh against each other at the same time. In most of our experiments the characteristics of the product and its monetary value were related by a one-to-one mapping; each extra unit of an attribute added the same amount of monetary value. In other words, the relationships were all linear. Because other findings in behavioural economics suggest that consumers might struggle more with non-linear relationships, we designed experiments to test them. For example, the monetary value of a product might increase more when the amount of one attribute moves from very low to low, than when it moves from high to very high. We found that this made no difference to either the imprecision or bias in consumers’ decisions provided that the relationship was monotonic (i.e. the direction of the relationship was consistent, so that more or less of the attribute always meant more or less monetary value respectively). When the relationship involved a turning point (i.e. more of the attribute meant higher monetary value but only up to a certain point, after which more of the attribute meant less value) consumers’ judgements were more imprecise still. Finally, we tested whether familiarity with the type of product improved performance. In most of the experiments we intentionally used products that were new to the experimental participants. This was done to ensure experimental control and so that we could monitor learning. 
In the final experiment reported here, we used two familiar products (Dublin houses and residential broadband packages) and tested whether consumers could distinguish good deals from bad deals any better among these familiar products than they could among products that they had never seen before, but which had the same number and type of attributes and price range. We found that consumers’ performance was the same for these familiar products as for unfamiliar ones. Again, what primarily determined the amount of imprecision and bias in consumers’ judgments was the number of attributes that they had to balance against each other, regardless of whether these were familiar or novel. POLICY IMPLICATIONS There is a menu of consumer polices designed to assist consumers in negotiating complex products. A review, including international examples, is given in the main body of the report. The primary aim is often to simplify the consumer’s task. Potential policies, versions of which already exist in various forms and which cover a spectrum of interventionist strength, might include: the provision and endorsement of independent, transparent price comparison websites and other choice engines (e.g. mobile applications, decision software); the provision of high quality independent consumer advice; ‘mandated simplification’, whereby regulations stipulate that providers must present product information in a simplified and standardised format specifically determined by regulation; and more strident interventions such as devising and enforcing prescriptive rules and regulations in relation to permissible product descriptions, product features or price structures. The present findings have implications for such policies. However, while the experimental findings have implications for policy, it needs to be borne in mind that the evidence supplied here is only one factor in determining whether any given intervention in markets is likely to be beneficial. The findings imply that consumers are likely to struggle to choose well in markets with products consisting of multiple important attributes that must all be factored in when making a choice. Interventions that reduce this kind of complexity for consumers may therefore be beneficial, but nothing in the present research addresses the potential costs of such interventions, or how providers are likely to respond to them. The findings are also general in nature and are intended to give insights into consumer choices across markets. There are likely to be additional factors specific to certain markets that need to be considered in any analysis of the costs and benefits of a potential policy change. Most importantly, the policy implications discussed here are not specific to Ireland or to any particular product market. Furthermore, they should not be read as criticisms of existing regulatory regimes, which already go to some lengths in assisting consumers to deal with complex products. Ireland currently has extensive regulations designed to protect consumers, both in general and in specific markets, descriptions of which can be found in Section 9.1 of the main report. Nevertheless, the experiments described here do offer relevant guidance for future policy designs. For instance, they imply that while policies that make it easier for consumers to switch providers may be necessary to encourage active consumers, they may not be sufficient, especially in markets where products are complex. 
In order for consumers to benefit, policies that help them to identify better deals reliably may also be required, given the scale of inaccuracy in consumers’ decisions that we record in this report when products have multiple important attributes. Where policies are designed to assist consumer decisions, the present findings imply quite severe limits in relation to the volume of information consumers can simultaneously take into account. Good impartial consumer advice may limit the volume of information and focus on ensuring that the most important product attributes are recognised by consumers. The findings also have implications for the role of competition. While consumers may obtain substantial potential benefits from competition, their capabilities when faced with more complex products are likely to reduce such benefits. Pressure from competition requires sufficient numbers of consumers to spot and exploit better value offerings. Given our results, providers with larger market shares may face incentives to increase the complexity of products in an effort to dampen competitive pressure and generate more market power. Where marketing or pricing practices result in prices or attributes with multiple components, our findings imply that consumer choices are likely to become less accurate. Policymakers must of course be careful in determining whether such practices amount to legitimate innovations with potential consumer benefit. Yet there is a genuine danger that spurious complexity can be generated that confuses consumers and protects market power. The results described here provide backing for the promotion and/or provision by policymakers of high-quality independent choice engines, including but not limited to price comparison sites, especially in circumstances where the number of relevant product attributes is high. A longer discussion of the potential benefits and caveats associated with such policies is contained in the main body of the report. Mandated simplification policies are gaining in popularity internationally. Examples include limiting the number of tariffs a single energy company can offer or standardising health insurance products, both of which are designed to simplify the comparisons between prices and/or product attributes. The present research has some implications for what might make a good mandate. Consumer decisions are likely to be improved where a mandate brings to the consumer’s attention the most important product attributes at the point of decision. The present results offer guidance with respect to how many key attributes consumers are able simultaneously to trade off, with implications for the design of standardised disclosures. While bearing in mind the potential for imposing costs, the results also suggest benefits to compulsory ‘meta-attributes’ (such as APRs, energy ratings, total costs, etc.), which may help consumers to integrate otherwise separate sources of information. FUTURE RESEARCH The experiments described here were designed to produce findings that generalise across multiple product markets. However, in addition to the results outlined in this report, the work has resulted in new experimental methods that can be applied to more specific consumer policy issues. This is possible because the methods generate experimental measures of the accuracy of consumers’ decision-making. As such, they can be adapted to assess the quality of consumers’ decisions in relation to specific products, pricing and marketing practices.
Work is underway in PRICE Lab that applies these methods to issues in specific markets, including those for personal loans, energy and mobile phones.
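As a purely illustrative companion to the Surplus Identification task described above, the toy simulation below models a noisy, slightly biased observer judging whether value exceeds price and estimates the surplus, as a share of the price range, needed to reach 75% accuracy as the number of product attributes grows. The noise and bias parameters are assumptions, and the numbers it produces are not the report's findings.

# Toy simulation, not PRICE Lab's protocol or findings: a noisy, slightly
# biased observer judges whether a product's value exceeds its price, and we
# estimate the surplus (share of the price range) needed for 75 % accuracy.
import numpy as np

rng = np.random.default_rng(1)
PRICE_RANGE = 100.0
NOISE_SD = 20.0            # assumed perceptual noise per attribute (price units)
BIAS = 0.10                # assumed pull of judgements toward the range midpoint

def accuracy(surplus, n_attributes, trials=20000):
    """Share of trials in which the observer labels the deal correctly."""
    price = rng.uniform(0.0, PRICE_RANGE, trials)
    good = rng.random(trials) < 0.5
    value = price + np.where(good, surplus, -surplus)
    noise = rng.normal(0.0, NOISE_SD * np.sqrt(n_attributes), trials)
    perceived = value + noise + BIAS * (PRICE_RANGE / 2.0 - value)
    return np.mean((perceived > price) == good)

def threshold(n_attributes, target=0.75):
    """Smallest surplus (% of price range) reaching the target accuracy."""
    for pct in range(1, 80):
        if accuracy(pct / 100.0 * PRICE_RANGE, n_attributes) >= target:
            return pct
    return float("nan")

for k in (1, 2, 3, 4):
    print("%d attribute(s): ~%s%% of price range for 75%% accuracy" % (k, threshold(k)))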
APA, Harvard, Vancouver, ISO, and other styles
