Academic literature on the topic 'Means absolute value'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Means absolute value.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Means absolute value"

1

Dangalchev, C. A. "Partially-Linear Transportation Problems, Obtained by Means of Absolute Value Functions." International Transactions in Operational Research 4, no. 4 (1997): 259–71. http://dx.doi.org/10.1111/j.1475-3995.1997.tb00082.x.

2

Tang, Yuyao. "An Optimized 4-bits Absolute Value Detector." Transactions on Computer Science and Intelligent Systems Research 5 (August 12, 2024): 345–54. http://dx.doi.org/10.62051/0jwh7s16.

Abstract:
The binary absolute value detector is crucial in today's computing domain, especially in computer storage and analysis systems, where it ensures the integrity and accuracy of data. This paper therefore proposes an optimized design of a 4-bit absolute value detector aimed at finding the circuit with the lowest energy consumption. Firstly, the paper introduces a design different from the traditional absolute value calculator, reducing the total number of stages in the circuit. Secondly, it calculates the delay of the main path using the logical effort formula and determines the specific gate sizing value for each logic component. Finally, it adjusts the sizing and the supply voltage ratio to achieve the lowest energy consumption at 1.5 times the minimum delay. The detector exhibits fewer critical-path gates, resulting in lower average delay compared to traditional designs, and features an even number of gates, which means lower delay fluctuation across different inputs.
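For orientation, a minimal Python sketch (an editorial illustration, not taken from the paper) of the operation such a detector implements on a 4-bit two's-complement word: conditional inversion controlled by the sign bit, followed by an increment.

```python
def abs4(x: int) -> int:
    """Absolute value of a 4-bit two's-complement word (illustrative only)."""
    x &= 0xF                  # keep 4 bits
    if (x >> 3) & 1:          # MSB set: the value is negative
        x = (~x + 1) & 0xF    # two's-complement negation (invert, add one)
    return x

assert abs4(0b1011) == 5      # -5 -> 5
assert abs4(0b0011) == 3      #  3 -> 3
# Caveat of the representation itself: -8 (0b1000) has no 4-bit positive
# counterpart, so abs4(0b1000) wraps back to 0b1000.
```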
3

Fang, Zhaoben, and Taizhong Hu. "Developments on MTP2 properties of absolute value multinormal variables with nonzero means." Acta Mathematicae Applicatae Sinica 13, no. 4 (1997): 376–84. http://dx.doi.org/10.1007/bf02009546.

4

Cui, Jifeng, Zhiliang Lin, and Yinlong Zhao. "Limit Cycles of Nonlinear Oscillator Equations with Absolute Value by Means of the Homotopy Analysis Method." Zeitschrift für Naturforschung A 70, no. 3 (2015): 193–202. http://dx.doi.org/10.1515/zna-2014-0353.

Abstract:
An analytic approach based on the homotopy analysis method is proposed to obtain the limit cycles of highly nonlinear oscillating equations with absolute value terms. The non-smoothness of the absolute value terms is handled by means of an iteration approach with Fourier expansion. Two typical examples are employed to illustrate the validity and flexibility of this approach. The approach is general and thus can be used to solve many other highly nonlinear oscillating systems with this kind of non-smoothness.
5

Cui, Jifeng, Hang Xu, and Zhiliang Lin. "Homotopy Analysis Method for Nonlinear Periodic Oscillating Equations with Absolute Value Term." Mathematical Problems in Engineering 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/132651.

Abstract:
Based on the homotopy analysis method (HAM), an analytic approach is proposed for highly nonlinear periodic oscillating equations with absolute value terms. The non-smoothness of the absolute value terms is handled by means of Fourier expansion, and the convergence is accelerated by using an iteration method. Two typical examples, which cannot be solved by the averaging method of perturbation techniques, are employed to illustrate the validity and flexibility of this approach; rather accurate approximations are obtained using the HAM-based approach. The proposed approach is general and thus can be used to solve many highly nonlinear periodic oscillating systems with this type of non-smooth absolute value term.
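As background, a standard Fourier identity (not a formula quoted from either paper) shows how the non-smoothness disappears once the absolute value of a periodic response is replaced by its trigonometric series:

```latex
\[
  \lvert \cos t \rvert \;=\; \frac{2}{\pi}
  \;+\; \frac{4}{\pi} \sum_{k=1}^{\infty}
        \frac{(-1)^{k+1}}{4k^{2}-1}\,\cos(2kt),
\]
```

after which every term is smooth and machinery for smooth oscillators can be applied term by term.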
6

Damarikhsan, Rahadian, and Sumirat Erman. "The Financial Performance and Stock Valuation of Coal Mining Company in Indonesia (Case Study: Pt. Abm Investama Tbk (ABMM))." International Journal of Current Science Research and Review 05, no. 12 (2022): 4533–55. https://doi.org/10.5281/zenodo.7432463.

Abstract:
Indonesia, the world's largest supplier of coal, may profit from the uncertainties surrounding the present geopolitical situation. According to experts, the currently high price of coal could remain stable through the end of 2022 before declining moderately in 2023, while staying well above its five-year average. Investing in coal companies' stock right now is therefore attractive. Theoretically, investing in the stock of any firm whose primary business is in the coal industry will result in a profit; the issue is deciding which stock to purchase to increase the portfolio's return. Value investing, often known as finding an undervalued firm with great potential for growth, is the main goal of the research. The first step of this study is to observe the coal mining sector and analyze the problems that occur. Then, a simple screening valuation using PBV and PER is conducted to choose the appropriate company to evaluate. Next, external factor analysis using PESTEL analysis and Porter's Five Forces analysis is conducted. Afterward, internal factor analysis using financial ratios and the F-Score is conducted to evaluate the problems that lie within the company. Finally, a SWOT analysis is conducted to summarize the advantages and disadvantages of the company's business environment, providing a thorough knowledge of the company's competitive advantages. Furthermore, an absolute valuation is carried out to produce the company's intrinsic value. The financial performance of ABMM, as seen in its financial statements from 2017 to 2021, increased strongly in 2021 but was stagnant from 2017 to 2020. Across three absolute valuation methods, the normalized earnings valuation shows an upside of IDR 2,317 or 77%, the DCF valuation shows an upside of IDR 2,193 or 73%, and the Monte Carlo simulation shows an upside of IDR 2,233 or 74%. From all three methods, ABMM's current stock price is therefore considered undervalued. Based on the findings, this study advises purchasing ABMM shares: the present stock price was found to be undervalued using three absolute valuation methods, which means that anyone who purchases the stock at the current price of IDR 3,000 will see a capital gain on their investment.
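For clarity, the quoted percentage upsides are simple arithmetic on the figures above (the IDR upsides relative to the IDR 3,000 market price), not an additional result:

```latex
\[
  \text{upside} = \frac{V_{\text{intrinsic}} - P}{P}:
  \qquad
  \frac{2{,}317}{3{,}000} \approx 77\%,\quad
  \frac{2{,}193}{3{,}000} \approx 73\%,\quad
  \frac{2{,}233}{3{,}000} \approx 74\%.
\]
```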
7

Mili, L., M. G. Cheniae, N. S. Vichare, and P. J. Rousseeuw. "Robustification of the least absolute value estimator by means of projection statistics [power system state estimation]." IEEE Transactions on Power Systems 11, no. 1 (1996): 216–25. http://dx.doi.org/10.1109/59.486098.

8

Wei, Xing, and Xian Mei. "Poly Real Estate Value Evaluation Research." Applied Mechanics and Materials 687-691 (November 2014): 5075–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.5075.

Abstract:
This paper performs a valuation study of Poly Real Estate, combining the company's actual situation with the special factors affecting real estate industry assessment, using two relative valuation methods (the P/E ratio method and the price-to-book ratio method) and, as an absolute valuation method, the discounted cash flow method. The valuation results of the three methods differ little from the actual price and are in line with the actual situation. Two of the methods yield valuations slightly below the actual price, which means that Poly still has some room to rise. From the overall analysis, the discounted cash flow method is more rational than the relative valuation methods.
9

Barani, A., S. Barani, and S. S. Dragomir. "Refinements of Hermite-Hadamard Inequalities for Functions When a Power of the Absolute Value of the Second Derivative Is P-Convex." Journal of Applied Mathematics 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/615737.

Abstract:
We extend some estimates of the right-hand side of Hermite-Hadamard-type inequalities for functions whose second derivatives' absolute values are P-convex. Applications to some special means are considered.
10

Randolph, KaDonna C. "Comparison of the Arithmetic and Geometric Means in Estimating Crown Diameter and Crown Cross-Sectional Area." Southern Journal of Applied Forestry 34, no. 4 (2010): 186–89. http://dx.doi.org/10.1093/sjaf/34.4.186.

Abstract:
The use of the geometric and arithmetic means for estimating tree crown diameter and crown cross-sectional area was examined for trees with crown width measurements taken at the widest point of the crown and perpendicular to the widest point. The average difference between the geometric and arithmetic mean crown diameters was less than 0.2 ft in absolute value. The mean difference between crown cross-sectional areas based on the geometric and arithmetic mean crown diameters was less than 6.0 ft² in absolute value. At the plot level, the average difference between cumulative crown cross-sectional areas based on the geometric and arithmetic mean crown diameters amounted to less than 2.5% of the total plot area. The practical significance of these differences will depend on the final application in which the mean crown diameters are used.
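A toy worked example with made-up crown widths (the formulas are the standard arithmetic and geometric means; the numbers are illustrative only):

```python
import math

w1, w2 = 22.0, 18.0                # hypothetical crown widths (ft)

arith = (w1 + w2) / 2              # arithmetic mean diameter: 20.0 ft
geom = math.sqrt(w1 * w2)          # geometric mean diameter: ~19.9 ft

def area(d):
    """Crown cross-sectional area, treating the crown outline as a circle."""
    return math.pi * d ** 2 / 4

print(round(arith - geom, 3))              # ~0.1 ft difference in diameter
print(round(area(arith) - area(geom), 2))  # 3.14 ft^2 difference in area
```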

Dissertations / Theses on the topic "Means absolute value"

1

Wang, Wei. "Sample Average Approximation of Risk-Averse Stochastic Programs." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19784.

Abstract:
Sample average approximation (SAA) is a well-known solution methodology for traditional stochastic programs which are risk neutral in the sense that they consider optimization of expectation functionals. In this thesis we establish sample average approximation methods for two classes of non-traditional stochastic programs. The first class is that of stochastic min-max programs, i.e., min-max problems with expected value objectives, and the second class is that of expected value constrained stochastic programs. We specialize these SAA methods for risk-averse stochastic problems with a bi-criteria objective involving mean and mean absolute deviation, and those with constraints on conditional value-at-risk. For the proposed SAA methods, we prove that the results of the SAA problem converge exponentially fast to their counterparts for the true problem as the sample size increases. We also propose implementation schemes which return not only candidate solutions but also statistical upper and lower bound estimates on the optimal value of the true problem. We apply the proposed methods to solve portfolio selection and supply chain network design problems. Our computational results reflect good performance of the proposed SAA schemes. We also investigate the effect of various types of risk-averse stochastic programming models in controlling risk in these problems.
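A minimal sketch of the mean plus mean-absolute-deviation criterion mentioned above (an editorial illustration with simulated data, not the thesis code; the risk weight is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sampled portfolio returns for one candidate decision; in an
# SAA scheme these would come from simulating the random problem data.
returns = rng.normal(loc=0.05, scale=0.10, size=10_000)

lam = 0.5                              # risk-aversion weight (assumed)
mean = returns.mean()
mad = np.abs(returns - mean).mean()    # mean absolute deviation

# Sample-average estimate of the bi-criteria objective (to be maximized):
objective = mean - lam * mad
print(round(objective, 4))
```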
2

Chen, Hsiang-Chung, and 陳祥重. "A RPL Scheme Based on Expected Value and Mean Absolute Deviation of Transmission Counts in Low Power Wireless Networks." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/t2abmb.

Abstract:
Master's thesis, National Central University, Department of Communication Engineering (in-service master's program), ROC year 107. In this era of the Internet of Things, a large number of devices need to be connected to the Internet, from smart home applications to smart cities and Industry 4.0. This seems an infinite business opportunity, but in reality there is no uniform standard. The 6LoWPAN working group under the Internet Engineering Task Force (IETF) defines an interface between IPv6 and IEEE 802.15.4, enabling a large number of wireless devices to connect directly over IPv6. The ROLL working group defines the RPL protocol to build low-power wireless networks on IPv6: not only low-cost and low-power, but also suitable for large deployments, compatible with existing Internet protocols, and with standards already in place to follow. The RPL protocol uses an "objective function" to define networking behavior: each node must select its parent node according to path cost so as to form a network with the lowest cost. Traditionally, path cost is computed by simple accumulation, summing the cost of each hop on the path to obtain the total cost of reaching the root node. However, an accumulated metric does not truly reflect the actual cost of the path. For example, a path with total cost 6 can consist of different sets of hop costs, such as {2, 2, 2}, {5, 1}, or {3, 3}. Assuming that hop cost is proportional to communication distance, including a higher-cost segment in the set adversely affects communication quality, yet the accumulated metric cannot distinguish whether a parent node's path contains such long hops. Child nodes then frequently replace their parent nodes, flooding the network with control messages and worsening packet loss. This thesis determines the path cost of a parent node from a statistical point of view, using the concept of approximate mean and standard deviation to analyze the path set. Because computing the standard deviation has high time complexity, the mean absolute deviation is used instead, and the cost metric is multicast to neighbor nodes so that they have enough information to choose a parent node. Simulations show that, compared with the two officially defined objective functions, the proposed scheme achieves a lower packet loss rate, slightly lower delay, and slightly higher throughput under high-density node deployments, a clear step forward for overall wireless network reliability.
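A minimal sketch of the idea (an editorial illustration with hypothetical ETX-style hop costs, not the thesis' actual objective function):

```python
def path_cost(hop_costs, beta=1.0):
    """Total cost plus a mean-absolute-deviation penalty, so uneven paths
    containing long hops score worse than evenly split ones."""
    mean = sum(hop_costs) / len(hop_costs)
    mad = sum(abs(c - mean) for c in hop_costs) / len(hop_costs)
    return sum(hop_costs) + beta * mad

# All three candidates accumulate the same total cost of 6 ...
for hops in ([2, 2, 2], [5, 1], [3, 3]):
    print(hops, path_cost(hops))   # 6.0, 8.0, 6.0
# ... but {5, 1} is penalized for containing one long, low-quality hop.
```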

Books on the topic "Means absolute value"

1

Cárdenas, Paniel Reyes. Semiotic Theory of Community. The Rowman & Littlefield Publishing Group, 2023. https://doi.org/10.5040/9781666983319.

Abstract:
This book shows the unity and novelty of Josiah Royce’s philosophy, one that he called an Absolute Pragmatism. The development of Royce’s thought led him to propose a synthetic-semiotic view of community that constitutes a unique and unparalleled metaphysical vision in a world in great need of integration. Royce’s proposal also fosters the prominent value of loyalty and reconnects the individual human being to its more radical needs of transcendence. A Semiotic Theory of Community: Josiah Royce’s Absolute Pragmatism explores the mediation provided by community as a means by which to respond to the big questions entertained by humans at all times: through an ongoing and always open process of interpretation towards the Absolute, the community of interpretation in all its different forms provides an ideal of loyalty. Paniel Reyes Cárdenas argues that by undertaking the process of interpretation and recognition of ourselves and the Other, we become true persons, and that we get a hold of the sense of purpose and loyalty we crave—both individually and in a universal unlimited community.
2

Romagnoli, Stefano, and Giovanni Zagli. Blood pressure monitoring in the ICU. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0131.

Abstract:
Two major systems are available for measuring blood pressure (BP): the indirect cuff method and direct arterial cannulation. In critically-ill patients admitted to the intensive care unit, invasive blood pressure measurement is the ‘gold standard’, as tight control of BP values and their change over time is important for choosing therapies and titrating drugs. Since artefacts due to inappropriate dynamic responses of the fluid-filled monitoring systems may lead to clinically relevant differences between actual and displayed pressure values, before considering the displayed BP value reliable the critical caregiver should carefully evaluate the presence or absence of artefacts (over- or under-damping/resonance). After the arterial pressure waveform quality has been verified, observation of each component of the arterial wave (systolic upstroke, peak, systolic decline, small pulse of reflected pressure waves, dicrotic notch) may provide a good deal of useful haemodynamic information, since changes in the arterial pulse contour are due to the interaction between the heart beat and the properties of the whole vascular tree. Vasoconstriction, vasodilatation, shock states (cardiogenic, hypovolaemic, distributive, obstructive), valve diseases (aortic stenosis, aortic regurgitation), ventricular dysfunction, and cardiac tamponade are associated with particular arterial waveform characteristics that may suggest to the physician an underlying condition requiring proper investigation. Finally, the effects of positive-pressure mechanical ventilation on heart-lung interaction may suggest the existence of an absolute or relative hypovolaemia by means of the so-called dynamic indices of fluid responsiveness.
3

Higham, Philip A., Katarzyna Zawadzka, and Maciej Hanczakowski. Internal Mapping and Its Impact on Measures of Absolute and Relative Metacognitive Accuracy. Edited by John Dunlosky and Sarah (Uma) K. Tauber. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199336746.013.15.

Abstract:
Research in decision making and metacognition has long investigated the calibration of subjective probabilities. To assess calibration, mean ratings on a percentage scale (e.g., subjective likelihood of recalling an item) are typically compared directly to performance percentages (e.g., actual likelihood of recall). Means that are similar versus discrepant are believed to indicate good versus poor calibration, respectively. This chapter argues that this process is incomplete: it examines only the mapping between the overt scale values and objective performance (mapping 2), while ignoring the process by which the overt scale values are first assigned to different levels of subjective evidence (mapping 1). The chapter demonstrates how ignoring mapping 1 can lead to conclusions about calibration that are misleading. It proposes a signal detection framework that not only provides a powerful method for analyzing calibration data, but also offers a variety of measures of relative metacognitive accuracy (resolution).
4

Adelstein, Richard. Criminal Liability. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190694272.003.0009.

Abstract:
This chapter distinguishes torts from crimes in terms of the moral costs created by crimes, discusses the nature and incidence of these costs and the problems of assigning liability prices to compensate for them, and describes criminal liability as organized vengeance, a means of inflicting visible, proportioned suffering on offenders as compensation for the moral costs imposed by crimes. The ideas of retribution and deterrence are illustrated in the case of competitive market prices, which also separate efficient from inefficient cost imposition through retribution. Criminal entitlements are defined and distinguished from tortious entitlements, and the differences and connections between tort and criminal liability are explored. In seeking punishment that fits the crime in every case, criminal liability also seeks corrective justice, in this context called proportional punishment, rather than absolute deterrence, and through retributive liability pricing effectively encourages crimes whose value to the perpetrator exceeds the moral costs they impose.
5

Steane, Andrew. What Must Be Embraced, Not Derived. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198824589.003.0011.

Abstract:
The chapter discusses the subject of values and moral judgement. This begins with what is meant by values, and whether or not they can be objective and absolute. The main business of the chapter is to present a philosophical argument about the nature of this area. The argument shows that the existence of a standard which can properly command the allegiance of all free agents can be neither proved nor disproved using the tools of reason and logic. It is argued that the absence of such a standard would tend towards isolation of individuals from one another. Finally, it is pointed out that what people are most drawn to and value highest is not well captured in terms of purely impersonal abstractions. This is a pointer towards the journey beyond atheism. The interplay of reason and faith is then discussed.
6

Gazis, George Alexander. Homer and the Poetics of Hades. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198787266.001.0001.

Abstract:
This book examines Homer’s use of Hades as a poetic resource. By portraying Hades as a realm where vision is not possible, Homer creates a unique poetic environment where social constraints and divine prohibitions are not applicable. The resulting narrative emulates that of the Muses but is markedly distinct from it, as in Hades experimentation with and alteration of epic forms and values can be pursued, giving rise to a ‘poetics of Hades’. In the Iliad, Homer shows how this alternative poetics works through the visit of Patroclus’ shade in Achilles’ dream. The recollection offered by the shade reveals an approach to its past in which regret, self-pity, and a lingering memory of intimate and emotional moments displace an objective tone and a traditional exposition of heroic values. The potential of Hades for providing alternative means of commemorating the past is more fully explored in the ‘Nekyia’ of Odyssey 11; there, Odysseus’ extraordinary ability to see (idein) the dead in Hades allows him to meet and interview the shades of heroines and heroes of the epic past. The absolute confinement of Hades allows the shades to recount their stories from their own viewpoint. The poetic implications of this are important since by visiting Hades and hearing the shades’ stories, Odysseus—and Homer—gains access to a tradition in which epic values associated with gender roles and even divine law are suspended in favour of a more immediate and personally inflected approach to the epic past.
7

Adelstein, Richard. Tort Liability. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190694272.003.0007.

Abstract:
Torts are involuntary seizures of entitlements of a certain kind in a particular exchange environment, and tort liability attempts to ensure that tortfeasors compensate their victims for the costs these takings impose. Liability is the law’s answer to externality. It doesn’t seek to deter torts absolutely, but to control them through the principle of corrective justice, which separates efficient from inefficient torts by liability prices and deters only the latter. This chapter examines how these involuntary exchanges are governed by tort liability to do corrective justice and imperfectly completed through individual and class action tort suits for compensatory damages. Tort liability is shown to effectively encourage efficient torts, in which the value of the unlawful cost imposition to the tortfeasor exceeds the external costs of the tort, and thus provide a means to move entitlements to higher-valuing owners in an environment of involuntary takings by private takers.

Book chapters on the topic "Means absolute value"

1

Lavasa, Eleni, Christos Chadoulos, Athanasios Siouras, et al. "Toward Explainable Metrology 4.0: Utilizing Explainable AI to Predict the Pointwise Accuracy of Laser Scanning Devices in Industrial Manufacturing." In Artificial Intelligence in Manufacturing. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-46452-2_27.

Abstract:
The field of metrology, which focuses on the scientific study of measurement, is grappling with a significant challenge: predicting the measurement accuracy of sophisticated 3D scanning devices. These devices, though transformative for industries like manufacturing, construction, and archeology, often generate complex point cloud data that traditional machine learning models struggle to manage effectively. To address this problem, we proposed a PointNet-based model, inherently designed to navigate point cloud data complexities, thereby improving the prediction of the scanning devices' measurement accuracy. Our model not only achieved superior performance in terms of mean absolute error (MAE) across all three axes (X, Y, Z) but also provided a visually intuitive means to understand errors through 3D deviation maps. These maps quantify and visualize the predicted and actual deviations, which enhances the model's explainability as well. This level of explainability offers a transparent tool to stakeholders, assisting them in understanding the model's decision-making process and ensuring its trustworthy deployment. Therefore, our proposed model offers significant value by elevating the level of precision, reliability, and explainability in any field that utilizes 3D scanning technology. It promises to mitigate costly measurement errors, enhance manufacturing precision, improve architectural designs, and preserve archeological artifacts with greater accuracy.
2

Chaiyakan, Songkomkrit, and Phantipa Thipwiwatpotjana. "Mean Absolute Deviation Portfolio Frontiers with Interval-Valued Returns." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14815-7_19.

3

Romano, Maurizio, Francesco Mola, and Claudio Conversano. "Decomposing tourists’ sentiment from raw NL text to assess customer satisfaction." In Proceedings e report. Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.29.

Abstract:
The importance of Word of Mouth is growing day by day on many topics. This phenomenon is evident in everyday life, e.g., the rise of influencers and social media managers. If more people debate specific products positively, then even more people are encouraged to buy them, and vice versa. This effect is directly affected by the relationship between the potential customer and the reviewer. Considering the negative reporting bias as well, it is evident that Word of Mouth analysis is of absolute interest in many fields. We propose an algorithm to extract the sentiment from a natural language text corpus. Combining Neural Networks, which have high predictive power but are harder to interpret, with simpler but informative models allows us to quantify a sentiment with a numeric value and to predict whether a sentence has a positive (negative) sentiment. The assessment of an objective quantity improves the interpretation of the results in many fields. For example, it is possible to identify crucial specific sectors that require intervention, improving the company's services while finding the strengths of the company itself (useful for advertising campaigns). Moreover, since time information is usually available in textual data of web origin, trends on macro/micro topics can be analyzed. After showing how to properly reduce the dimensionality of the textual data with a data-cleaning phase, we show how to combine WordEmbedding, K-Means clustering, SentiWordNet, and a threshold-based Naïve Bayes classifier. We apply this method to Booking.com and TripAdvisor.com data, analyzing the sentiment of people discussing a particular issue and providing an example of customer satisfaction assessment.
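A minimal sketch of the final scoring step under stated assumptions (function name and threshold are hypothetical; the chapter's pipeline additionally involves WordEmbedding, K-Means clustering, and SentiWordNet):

```python
def classify(sentence_scores, threshold=0.05):
    """Map numeric sentence sentiment scores (e.g., aggregated SentiWordNet
    polarities) to labels via a decision threshold."""
    return ["positive" if s > threshold else "negative"
            for s in sentence_scores]

print(classify([0.42, -0.17, 0.01]))   # ['positive', 'negative', 'negative']
```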
4

Tomaselli, Venera, and Giulio Giacomo Cantone. "Multipoint vs slider: a protocol for experiments." In Proceedings e report. Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.19.

Abstract:
Since the broad diffusion of computer-assisted survey tools (i.e., web surveys), a lively debate about innovative scales of measure has arisen among social scientists and practitioners. The implications are relevant for applied statistics and evaluation research, since traditional scales collect ordinal observations while data from sliders can be interpreted as continuous. The literature, however, reports excessive task completion times for sliders in web surveys. This experimental protocol is aimed at testing hypotheses on the accuracy in prediction and the dispersion of estimates from anonymous participants who are recruited online and randomly assigned to tasks of recognizing shades of colour. The treatment variable is the scale: a traditional 0–10 multipoint scale vs. a 0–100 slider. Shades have a unique parametrisation (true value), and participants have to guess the true value through the scale. These tasks are designed to recreate situations of uncertainty among participants while minimizing the subjective component of a perceptual assessment and maximizing information about scale-driven differences and biases. We propose to test statistical differences in the treatment variable on (i) the mean absolute error from the true value and (ii) the time of completion of the task. To correct biases due to variance in the number of completed tasks among participants, data about participants can be collected through both pre-task acceptance of web cookies and post-task explicit questions.
5

Werling, Dorina, Maximilian Beichter, Benedikt Heidrich, Kaleb Phipps, Ralf Mikut, and Veit Hagenmeyer. "Automating Value-Oriented Forecast Model Selection by Meta-learning: Application on a Dispatchable Feeder." In Energy Informatics. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48649-4_6.

Abstract:
To successfully increase the share of renewable energy sources in the power system and to counteract their fluctuating nature with a view to system stability, forecasts are required that suit downstream applications, such as demand side management or the management of energy storage systems. However, whilst many models for creating these forecasts exist, selecting the forecast model best suited to the respective downstream application can be challenging. The selection is commonly based on quality measures (such as mean absolute error), but these quality measures do not consider the value of the forecast in the downstream application. Thus, we introduce a meta-learning framework for forecast model selection, which automatically selects the forecast model leading to the forecast with the highest value in the downstream application. More precisely, we use a meta-learning approach that treats the selection task as a classification problem. Furthermore, we empirically evaluate the proposed framework on the downstream application of a smart building's photovoltaic-battery management problem, known as the dispatchable feeder, at building level with a data set containing time series from 300 buildings. The results of our evaluation demonstrate that the proposed framework reduces cost and improves accuracy compared to existing forecast model selection heuristics. Furthermore, compared to a manual forecast model selection, it requires noticeably less computational effort and leads to comparable results.
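A minimal sketch of meta-learning-as-classification under stated assumptions (the meta-features, labels, and classifier choice here are hypothetical, not the paper's):

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical meta-features per load time series (e.g., mean level and
# variability); each label is the index of the forecast model that gave the
# lowest downstream (dispatchable-feeder) cost for that building.
X = [[0.30, 1.2], [0.80, 0.4], [0.35, 1.1], [0.75, 0.5]]
y = [0, 1, 0, 1]

meta = RandomForestClassifier(random_state=0).fit(X, y)
print(meta.predict([[0.32, 1.15]]))   # e.g. [0]: pick model 0 for this series
```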
6

Robertz, Leon, Lassi Rieppo, Seppo Korkala, Tommi Jaako, and Simo Saarakkala. "Inter- and Intra-Day Precision of a Low-Cost and Wearable Bioelectrical Impedance Analysis Device." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59091-7_29.

Abstract:
Bioimpedance analysis (BIA) is a non-invasive and safe method to measure body composition. Nowadays, due to technological progress, smaller and cheaper devices allow the implementation of BIA into wearable devices. In this pilot study, we analyzed the measurement precision of a cheap BIA solution for wearable devices. Intra-session, intra-day, and inter-day reproducibility of raw impedance values from three subjects at three different body locations (hand-to-hand, hand-to-torso, torso-to-torso), and for three different frequencies (6, 54, and 500 kHz), were analyzed using the coefficient of variation (CV%). Hand-to-hand and hand-to-torso measurements resulted, on average, in high intra-session (CV% = 0.14% and CV% = 0.11%, respectively), intra-day (CV% = 1.67% and CV% = 1.26%, respectively), and inter-day (CV% = 1.53% and CV% = 1.31%) precision. Absolute impedance values for the torso-to-torso measurements showed a larger mean variation (intra-session CV% = 0.68%; intra-day CV% = 5.53%; inter-day CV% = 3.13%). Overall, this cheap BIA solution shows high precision and promising usability for further integration into a wearable measurement environment.
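For reference, the precision statistic used above has the standard definition (stated editorially, not quoted from the chapter):

```latex
\[
  \mathrm{CV}\% \;=\; \frac{\sigma}{\bar{x}} \times 100,
\]
```

i.e., the standard deviation of repeated impedance readings expressed as a percentage of their mean, so lower values indicate higher precision.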
7

Jeyaratnam, S., and S. Panchapakesan. "An Integrated Formulation for Selecting the Best From Several Normal Populations in Terms of the Absolute Values of Their Means: Common Known Variance Case." In Advances in Statistical Decision Theory and Applications. Birkhäuser Boston, 1997. http://dx.doi.org/10.1007/978-1-4612-2308-5_19.

8

Stanchina, Gabriella. "Conclusions: Facets of Self across Cultures." In The Art of Becoming Infinite. Open Book Publishers, 2025. https://doi.org/10.11647/obp.0442.06.

Abstract:
In the concluding chapter, I address in a systematic way the challenge posed to us by the contraposition between a vertical and a horizontal self. The key argument raised by Mou Zongsan, i.e., that it is Confucianism that opens the door to the subjective universe, may seem controversial. Indeed, since its foundation, Western thought seems to have revolved precisely around the subjective dimension. It is therefore necessary to analyze the three main characteristics of self—interiority, solitude, and reflection—and to see how these characteristics present themselves in Western epistemologically-oriented thought, and in that moral-metaphysical dimension which is Mou's original contribution to the philosophy of self. Interiority: starting from Plato and Descartes, the realm of the self is presented as a kind of fortified citadel, an ideal refuge for spiritual hermitage and contemplative life. The premise for the constitution of this inner space is that the subject withdraws from any relationship with the universe outside itself, and this operation of radical isolation and detachment causes the subject to play the role of spectator rather than of participant in the world. The “hard problem of consciousness,” i.e., the enigmatic connection between the subjective qualia and the objective structure of the brain, is the latest formulation of this topological divide between interiority and exteriority. According to Mou, interiority can be reformulated as inherence. Human beings possess in themselves an autonomous principle of actualization and concretization. If the mind's interiority represents its structural capacity to embody the creativity of heaven and Earth and extends this creative capacity to all things, giving them meaning and value, then interiority is both centripetal and centrifugal. The Chinese traditional concepts of “inner resonance” (ganying 感應) or “inherent connection” (gantong 感通) efficaciously condense humans' ability to extend the sphere of their own mind-body oneness to all things, experiencing others and the universe as a sensible part of themselves. Solitude: analyzing both philosophical and poetical renditions of the singularity of the individual self, we may conclude that the loneliness of the contemplative subject is an inevitable consequence of the withdrawal, isolation, and rupture of all relations wrought by the subject. On the contrary, the solitude pursued by Confucians through the practice of “vigilance in solitude” (shendu 慎獨) is positive and revelatory of the common origin of the self and the universe. The state of solitude in a vertical dimension is no longer a condition of aloneness and seclusion, but a return to the auroral and silent state of mind, when subjective understanding and objective universe have not yet been divided and contraposed. Paradoxically, in a moral-metaphysical dimension, solitude as silent resonance makes possible maximum alertness and responsivity. Reflection: how can we formulate the idea of self-reflection in a way that avoids a reduction to the centripetal tension, self-objectification, and self-enclosure that we have seen to be dominant in Western thought? According to Mou, the mind, through its being implicated in and morally participating in the affairs of the world, reveals itself to itself and becomes self-conscious. Thus, the “self” already contains the dynamism of resonating with the things of the world and responding and corresponding to them.
Bringing myself to completion implies bringing all things to completion; through turning back, I go back to the origin of the co-creation of mind and world. This unity of the human heart and the universe is not the mystical contemplation of a static One but the timely practical realization of becoming one body shared by self and universe. Indeed, the second responsibility of the moral mind is to emancipate things from the domain of the useful and the exploitable and to look at them as an absolute finality, that is, as that “thing-in-itself” which Kant could not attain through his merely cognitive quest. Finally, Mou Zongsan’s moral metaphysics implies that “self” is only fully realized in a vertical dimension, which means that absorbing and processing information is not the primary function of the subject. When I speak of a “self,” I mean a dynamism of uninterrupted self-transcendence and a desire to ascend to a higher level of realization. Furthermore, it is not the will to knowledge that unlocks the dimension of interiority, but the manifestation of the all-encompassing moral mind through my practical action.
9

Li, Zeguang, Laiping Li, and Wei Huang. "Comparative Analysis of Mathematical Models of Hydro-pneumatic Suspension Damping." In Lecture Notes in Mechanical Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1876-4_66.

Abstract:
At present, in studies of the dynamic characteristics of vehicle hydro-pneumatic suspensions, the elastic force is mainly modeled by the variable gas equation of state, and the damping force by thin-walled orifice theory, which considers only turbulent flow. Here, based on expressing the whole flow field, including laminar flow, transition flow, and turbulence, with a piecewise function, the turbulence region is modeled by the Blasius formula and by thin-walled orifice theory respectively. By applying vibration signals collected from real roads, the responses of the two piecewise-function damping force models and the traditional thin-walled orifice model of a 1/4 suspension system are calculated in the time domain and the frequency domain. The mean absolute error (MAE) and root mean square error (RMSE) are used to compare them with the real upper fulcrum data of the suspension cylinder. The results show that the different models can simulate suspension vibration well in the low-frequency range but have obvious deficiencies in the middle- and high-frequency range, while the short-hole flow theoretical model in piecewise-function form is closer to the real value in the frequency domain.
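For reference, the two error measures have their standard definitions (stated editorially, not quoted from the chapter):

```latex
\[
  \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\bigl\lvert \hat{y}_i - y_i \bigr\rvert,
  \qquad
  \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}_i - y_i\bigr)^{2}},
\]
```

where the y_i are the measured upper-fulcrum responses and the ŷ_i the model predictions; RMSE penalizes large excursions more heavily than MAE.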
10

Schächtele, Alexander, Björn Hardebusch, Kerstin Krätschmer, Karin Tschiggfrei, Theresa Zwickel, and Rainer Malisch. "Analysis and Quality Control of WHO- and UNEP-Coordinated Human Milk Studies 2000–2019: Polybrominated Diphenyl Ethers, Hexabromocyclododecanes, Chlorinated Paraffins and Polychlorinated Naphthalenes." In Persistent Organic Pollutants in Human Milk. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-34087-1_6.

Abstract:
Four different analytical methods were used for the determination of (1) polybrominated diphenyl ethers (PBDE), (2) hexabromocyclododecanes (HBCDD), (3) chlorinated paraffins (CP) and (4) polychlorinated naphthalenes (PCN) in human milk samples of the WHO/UNEP-coordinated exposure studies. As a laboratory accredited according to EN ISO/IEC 17025, a comprehensive quality control program was applied to assure the reliability of results. This included procedural blanks, the use of numerous quality control samples as in-house reference materials and the participation in proficiency tests (PTs). Trueness was estimated from the PT samples using the assigned values. The mean absolute deviation of the sum parameters ∑PBDE6 and ∑PBDE7 from the assigned values of 53 PT samples analysed between 2006 and 2021 was 12% and 14%, respectively. For α-HBCDD as the most abundant diastereomer and the sum of α-, β- and γ-HBCDD, deviations of the reported value from the assigned value of the proficiency tests (31 samples, analysed between 2007 and 2021) were in most cases below 40% over a large concentration range, e.g., for α-HBCDD, between 0.0084 and 19 ng/g fw. For concentrations above 0.5 ng/g lipid, the deviation was in the range of approximately 0–30%. For short-chain and medium-chain CP (SCCP and MCCP) all z-scores achieved in interlaboratory comparisons during 2017–2020 were within ±2 z and therefore satisfactory (13 PT samples were analysed for ΣCP, ΣSCCP and ΣMCCP using the GC-ECNI-Orbitrap-HRMS method, eight results achieved for ΣCP using the GC-EI-MS/MS method). Due to the lack of available proficiency tests for PCN at the time of measuring the human milk samples of the 2016–2019 period, an external validation for control of the trueness was performed through an interlaboratory comparison with an independent laboratory. The deviation of the ΣPCN13 in five test samples between the external laboratory and CVUA Freiburg was in the range from 3 to 20%. At a later stage (in 2021), the laboratory participated successfully in the first interlaboratory comparison study on PCN congeners in cod liver oil. The z-scores for seven congeners and two sum parameters were within ±2 z and therefore satisfactory. The results for the others of the altogether 26 PCN congeners were also in accordance with the median values reported by all participants. As a result, the determination of PBDE, HBCDD, CP and PCN in human milk samples of the WHO/UNEP-coordinated exposure studies followed the strict rules of the accreditation system and the general criteria for the operation of testing laboratories as laid down in EN ISO/IEC 17025.

Conference papers on the topic "Means absolute value"

1

Cvetkov, Vasil. "LINE DISCREPANCIES AS WEIGHTS IN THE ADJUSTMENT OF PRECISE LEVELLING NETWORKS." In 24th SGEM International Multidisciplinary Scientific GeoConference 2024. STEF92 Technology, 2024. https://doi.org/10.5593/sgem2024/2.1/s09.26.

Abstract:
Precise levelling networks play an important role in the research of many scientific processes, e.g. recent crustal movements, changes of ocean and sea levels, the geoid or quasi-geoid, etc. Their appropriate adjustment depends on the choice of suitable weights, which should make physical sense and minimize the standard errors of the adjusted heights. The conventional weights are taken to be inversely proportional to the lengths of the levelling lines, due to the assumption that the accumulation of errors in geometric levelling is proportional to the square root of the distance between benchmarks. On the other hand, it is a well-known fact that the more accurate random observations are, the smaller their spread around the expected value. Thus, the absolute values of the discrepancies in the lines of levelling networks seem reasonable and natural candidates for weights. In order to check the 'supremacy' of the conventional weights in geometric levelling, we adjusted the Third Levelling Network of Finland in two variants. In the first variant, we used the traditional weights, inversely proportional to the lengths of the levelling lines. In the second variant, we used weights inversely proportional to the absolute values of the discrepancies in the lines. A paired-samples t-test for means, applied to the samples of standard errors yielded by the two variants, revealed that the second variant produced statistically significantly better results at a significance level greater than 99%.
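A minimal sketch of the paired comparison described above, with made-up standard errors for the same adjusted heights under the two weighting variants (not the paper's data):

```python
from scipy import stats

se_length_weights = [1.32, 1.10, 1.45, 1.21, 1.38, 1.27]       # variant 1
se_discrepancy_weights = [1.18, 1.02, 1.30, 1.15, 1.25, 1.16]  # variant 2

t, p = stats.ttest_rel(se_length_weights, se_discrepancy_weights)
print(t, p)   # a small p-value favours the discrepancy-based weights
```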
2

Bogaert-Alvarez, Ricardo J., Yonghyun Kim, Jaclyn M. Sekula, and James LaBuz. "Relationships between Electrochemical Noise, Pit Depth, and Pit Density in Metallic Alloys Exposed to 3.5 Wt% NaCl Solutions." In CORROSION 2002. NACE International, 2002. https://doi.org/10.5006/c2002-02332.

Abstract:
Electrochemical noise experiments with pairs of electrodes were performed in 3.5 wt % NaCl aqueous solutions using four alloys: 430 stainless steel (SS) (UNS 43000), 304L SS (UNS 30403), 316L SS (UNS 31603), and alloy 276 (UNS N10276). With a zero-resistance ammeter (ZRA), data were collected on the following parameters: root-mean-square (RMS) current and the absolute maximum current. Through microscopic observations, the following surface results were recorded: pit average depth, pit maximum depth, pit density, and differences in pit densities. Except for the latter result, the surface results are the average of both electrodes in an experiment. This paper examines the relationships between the noise currents and the surface results. Direct comparisons of the electrochemical parameters with the surface parameters yielded poor correlations. When the average values of the parameters for a given alloy were employed for the correlations, two main conclusions were drawn: 1) larger pit densities produce higher absolute maximum currents; 2) greater maximum penetration depth resulted in higher RMS currents.
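For reference, the RMS noise current has the standard definition (stated editorially, not quoted from the paper):

```latex
\[
  I_{\mathrm{RMS}} = \sqrt{\frac{1}{N}\sum_{k=1}^{N} i_k^{2}},
\]
```

where the i_k are the N current samples from the zero-resistance ammeter; sustained larger fluctuations raise the RMS value, consistent with the reported link between deeper maximum pit penetration and higher RMS currents.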
3

Finckenor, Jeff, Michael Chandler, Holly Evans, and Aniekan Ruffin. "Six Nines Reliable Army Fatigue Critical Component Using Operational Loads Data." In Vertical Flight Society 80th Annual Forum & Technology Display. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1113.

Abstract:
This paper describes the work performed to determine a 0.999999 ('six nines') reliability fatigue-critical component life using field-monitored loads. The Tie Bar of the MH-47 is substantiated by Centrifugal Force (CF), which is a direct function of rotor speed (Nr), a parameter monitored by the Structural Usage Monitoring System (SUMS). Six nines of reliability has been the Army target for component reliability, and it is generally assumed that legacy safe-life methods are near this level of reliability. With monitored loads it is possible to develop a statistical model for loads and determine an actual reliability value. This paper presents multiple methods for the Army's first attempt at establishing a retirement time using an absolute component reliability. Reliability is gained using a reduction of the Endurance Limit and the means and standard deviations of binned loads across multiple aircraft. Most notably, fatigue lives can vary widely if the independent variables' reliability contributions are assigned arbitrarily.
4

Li, Jing, and Baoming Qiao. "A Novel Differential Evolution Algorithm with K-Means and Simplex Search Method for Absolute Value Equations." In 2015 11th International Conference on Computational Intelligence and Security (CIS). IEEE, 2015. http://dx.doi.org/10.1109/cis.2015.72.

5

Li, Xinfang, Dongsheng Zhu, Gang Chen, and Xianju Wang. "Influence of SDBS on Stability of Copper Nano-Suspensions." In 2007 First International Conference on Integration and Commercialization of Micro and Nanosystems. ASMEDC, 2007. http://dx.doi.org/10.1115/mnc2007-21091.

Abstract:
The dispersion and stability of Cu nano-suspensions with a dispersant are an important basis for the study of rheology and heat transfer enhancement of the suspensions. This paper presents a procedure for preparing a nanofluid, a suspension consisting of nanophase powders and a base liquid. By means of the procedure, Cu-H2O nanofluids with and without dispersant were prepared, and their sedimentation photographs are given to illustrate the stability and evenness of the suspension with dispersant. The dispersion and stability of Cu nanoparticles in water were studied at different pH values and concentrations of the sodium dodecylbenzenesulfonate (SDBS) dispersant by the methods of zeta potential, absorbency, and sedimentation photographs. The results show that zeta potential corresponds closely with absorbency: the higher the absolute value of the zeta potential and the absorbency, the better the dispersion and stability of the system. The absolute value of the zeta potential and the absorbency are higher at pH 9.5. SDBS can significantly increase the absolute value of the zeta potential of the particle surfaces through electrostatic repulsion, which enhances the stability of the Cu suspensions. The optimal concentration of SDBS in the 0.1% copper nano-suspension is 0.07%, which gives the best dispersion results.
6

Zhang, Junfeng, and Daniel Y. Kwok. "On the Slip Mechanism of Microfluidics by Means of a Mean-Field Free-Energy Lattice Boltzmann Method." In ASME 3rd International Conference on Microchannels and Minichannels. ASMEDC, 2005. http://dx.doi.org/10.1115/icmm2005-75170.

Abstract:
In this paper, we studied liquid-solid slip by employing a mean-field free-energy lattice Boltzmann approach proposed recently [Zhang et al., Phys. Rev. E 69, 032602, 2004]. With a general bounce-back no-slip boundary condition applied to the interface, liquid slip was observed because of the specific fluid-solid interaction. The slip length is clearly related to the interaction strength: the stronger the interaction, the less hydrophobic the surface and hence the less slip. Unlike other lattice Boltzmann models, contact angle values between 0° and 180° can be generated here without using a less realistic repulsive fluid-solid interaction. We found that system size does not affect the absolute slip magnitude; however, the ratio of the slip length to system size increases quickly as the system becomes smaller, illustrating that slip becomes important in smaller-scale systems. A small negative slip length can also be produced with a strong fluid-solid attraction. These results are in qualitative agreement with those from experimental and molecular dynamics studies.
7

Chan, Yen Pinng, Muhammad Yazuwan Sallij Bin Muhammad Yasin, Ahgheelan Sella Thurai, Hurul Ain Binti Abdul Nasher, and Rahimah Bt Binti A Halim. "Marginal Field Value Maximization by Going Back-To-Basics with Design-To-Value Enhancement." In Offshore Technology Conference. OTC, 2023. http://dx.doi.org/10.4043/32379-ms.

Abstract:
Value is defined by the quantum of functions or returns received from the resources invested. This is challenging for extremely marginal oil and gas fields, where returns hardly cover the invested resources profitably; stranded, widely scattered resources add to the complication. This paper shares a complete revamp of notional wells and facilities development concept generation for such fields, going back to basics and shifting the mindset towards designing to value to rigorously improve project viability. Conventionally, wells and facilities are designed to economically match recoverable resource volumes. This strategy is largely unsuccessful when it comes to marginal, untapped, scattered, shallow oil resources of less than 5 MMboe. A case study was initially designed to have multiple drill centers with a handful of pipeline tiebacks, requiring a large wellhead platform (WHP), which led to a highly unfavorable economic outcome. Going back to basics, coupled with a Design-to-Value (D2V) approach, has proven effective as a systematic way to develop an optimum concept for maximum value realization. This agile approach questions every aspect of concept development to achieve the minimum technical requirements while providing better clarity on cost-risk trade-offs through concept evaluation in a step-up, staircase manner. For wells, adopting the back-to-basics cum D2V approach means initiating development via a vertical-well-only concept, with well placement that gives the highest well deliverability: it starts with one vertical well at the reservoir layer with the highest recovery potential before further wells are added. For facilities, the approach means challenging the absolute minimum of lightweight structure (LWS) design by identifying the topside's minimal functional requirements. For the case study, evaluation starts with 1 well, 1 LWS, and 2 pipelines; the incremental impact of the stepwise addition of wells, with or without another LWS and/or pipeline, is subsequently assessed. From all iterations, a value curve is then plotted to ascertain the concept that delivers maximum value. As a result, the development concept was revised from 12 costly deviated wells at 1 WHP to 5 individual, low-cost vertical wells from multiple LWS. With that, up to 80% of the hydrocarbon volume originally to be recovered remains achievable while attaining a 50% improvement in Unit Technical Cost. This approach demonstrates a great improvement in project economic viability. The back-to-basics cum D2V approach is therefore recommended in development strategy formulation for marginal, untapped, scattered, shallow oil resources, as it managed to establish a minimum business case for further optimization and value improvement. Aggressive value engineering efforts such as D2V are key to successfully unlocking stranded, marginal fields. Readiness to deviate from technical design norms is also important. Trade-offs are expected in realizing projects in these trying times. That said, resource owners can still strive to balance technical safety design requirements against economic targets by adopting suitable concept generation and design approaches.
8

Foreman, Geoff, Steven Bott, Jeffrey Sutherland, and Stephan Tappert. "The Development and Use of an Absolute Depth Size Specification in ILI-Based Crack Integrity Management of Pipelines." In 2016 11th International Pipeline Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/ipc2016-64224.

Abstract:
To provide a more insightful and accurate feature description from crack in-line inspection (ILI) reporting, as per the Fitness-For-Service analysis in API 1176, individual crack dimensions must be established to a given accuracy. PII Pipeline Solutions established an absolute depth sizing specification conforming to the dig verification processes of API 1163. This change represented a significant shift from the traditional reporting format of depth sizing in 'bands' of 1–2 mm, 2–3 mm, and > 3 mm within crack ILI inspection reporting. When assessing features with characteristics stated in a sizing band, the pipeline integrity assessment approach required the conservative assumption that all features in a band be treated as if they are at the deepest value of that band. The implication was that the specification created only three sizes of crack depth: 1–2 mm, 2–3 mm, and > 3 mm (± 0.5 mm tolerance at 90% certainty). In practical terms, a large quantity of features in the significant 2–3 mm band had to be treated as potential dig candidates with a depth of at least 3 mm, leaving length as the only basis for severity ranking in priority dig selection. Previous attempts at establishing absolute depth sizing for crack inspection required a series of calibration digs. Here, the large sample size over multiple inspection runs and pipeline sections allowed a statistical specification algorithm to be developed as part of the analysis process, so no additional reporting time or excavation cost was involved. The new absolute sizing algorithm has provided operators with a means of prioritizing digs based on individual feature lengths and depths. Replacing the traditional depth bands with feature-specific peak depths is thereby a major step forward in achieving a cost-effective process of prioritizing crack mitigation in pipelines. Following the dig verification process in API 1163, significant populations of in-field NDE results were utilized on a variety of pipeline sections of different diameters. The predicted absolute depth estimation accuracy was determined for specific feature types, thereby creating a depth tolerance with statistical certainty levels that match those available and recognized in metal loss ILI. This paper describes the process and the means by which an absolute depth crack ILI specification was established using characteristics from a significant set of real features. It also describes the benefits, realized within pipeline integrity engineering, of moving to such a new reporting protocol.
APA, Harvard, Vancouver, ISO, and other styles
9

Sekiguchi, Tomohiro, Tatsuya Kawaguchi, Isao Satoh, and Takushi Saito. "Electric Charge of Micro and Nano Bubbles by Interferometric Laser Imaging Technique." In ASME/JSME 2011 8th Thermal Engineering Joint Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/ajtec2011-44612.

Full text
Abstract:
In this paper, we investigated the ζ potential of microbubbles by the electrophoresis method. The individual bubble diameters were measured by means of the interferometric laser imaging technique. The experimental results showed that the different methods for producing microbubbles, namely the pressurizing dissolution method, the swiveling gas-liquid two-phase flow method, and the electrolysis method, did not make a difference in the resultant values of ζ potential. In order to understand the charging characteristics of microbubbles, we changed the pH of the deionized water and added alcohols or a surfactant to it. The results showed that the absolute value of the ζ potential increased as the pH increased. The ζ potential was drastically changed by alcohols and surfactants, which are easily adsorbed on the air-water interface. Moreover, simultaneous measurement of the ζ potential and diameter of shrinking microbubbles showed that bubbles whose diameter is less than 3 μm could be observed using Mie theory.
APA, Harvard, Vancouver, ISO, and other styles
10

Silva, Thiago Geraldo, Luis Kin Miyatake, Rafael Madeira Barbosa, et al. "AI Based Water-in-Oil Emulsions Rheology Model for Value Creation in Deepwater Fields Production Management." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31173-ms.

Full text
Abstract:
This work aims to present a new paradigm in the Exploration & Production (E&P) segment, using artificial intelligence for rheological mapping of produced fluids and forecasting their properties throughout the production life cycle. The expected gain is to accelerate the process of prioritizing target fields for the application of flow improvers and, as a consequence, to generate anticipation of revenue and value creation. Rheological data from laboratory analyses of water-in-oil emulsions from different production fields, collected over the years, are used in a machine learning framework that enables modeling based on supervised learning. The artificial intelligence infers the emulsion viscosity as a function of input parameters such as API gravity, water cut, and dehydrated oil viscosity. Conventional modeling of emulsified fluids uses correlations that, in general, do not represent the emulsion viscosity suitably. Currently, an improvement over empirical correlations can be achieved via rheological characterization using tests from onshore laboratories, which have been building a database for different Petrobras reservoirs over the years. The dataset used in the artificial intelligence framework yields a machine learning model with generalization ability, showing a good match between experimental and calculated data in both the training and test datasets. The model was tested on a large number of oils from different reservoirs, over an extensive range of API gravity, presenting a suitable mean absolute percentage error. In addition, the result preserves the expected physical behavior of the emulsion viscosity curve. Consequently, this approach eliminates frequent sampling requirements, which means lower logistical costs and faster decision making with respect to flow improver injection. Moreover, by embedding the AI model into numerical flow simulation software, the overall flow model can estimate production curves more reliably due to a better representation of the rheological fluid characteristics.
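The abstract names the inputs (API gravity, water cut, dehydrated oil viscosity) and the error measure (mean absolute percentage error) but not the model family; a minimal supervised-learning sketch in Python with scikit-learn, using a gradient-boosted regressor on synthetic data, could look like the following. The data-generating rule and all parameter values are invented for illustration; the actual Petrobras dataset and model are not public.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the inputs named in the abstract
api_gravity = rng.uniform(14, 35, n)
water_cut = rng.uniform(0.0, 0.6, n)
dead_oil_visc = rng.uniform(10, 500, n)  # dehydrated oil viscosity, cP
X = np.column_stack([api_gravity, water_cut, dead_oil_visc])
# Toy target: emulsion viscosity grows with water cut and dead-oil viscosity
y = dead_oil_visc * (1 + 2.5 * water_cut) ** 2 * rng.lognormal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```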
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Means absolute value"

1

Fisher, Andmorgan, Taylor Hodgdon, and Michael Lewis. Time-series forecasting methods : a review. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49450.

Full text
Abstract:
Time-series forecasting techniques are of fundamental importance for predicting future values by analyzing past trends. The techniques assume that future trends will be similar to historical trends. Forecasting involves using models fit on historical data to predict future values. Time-series models have wide-ranging applications, from weather forecasting to sales forecasting, and are among the most effective methods of forecasting, especially when making decisions that involve uncertainty about the future. To evaluate forecast accuracy and to compare among models fitted to a time series, three performance measures were used in this study: mean absolute error (MAE), mean square error (MSE), and root-mean-square error (RMSE).
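As a rough illustration of the three error measures named in this abstract, a minimal Python sketch using the standard textbook definitions might look as follows; the series, the forecast values, and the function name are invented for illustration and are not taken from the report.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Compute MAE, MSE, and RMSE between observed and forecast values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = actual - predicted
    mae = np.mean(np.abs(errors))   # mean absolute error
    mse = np.mean(errors ** 2)      # mean square error
    rmse = np.sqrt(mse)             # root-mean-square error
    return mae, mse, rmse

# Hypothetical monthly series and a simple forecast of it
observed = [112, 118, 132, 129, 121, 135]
forecast = [110, 120, 128, 131, 124, 130]
print(forecast_errors(observed, forecast))
```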
APA, Harvard, Vancouver, ISO, and other styles
2

Ruosteenoja, Kimmo. Applicability of CMIP6 models for building climate projections for northern Europe. Finnish Meteorological Institute, 2021. http://dx.doi.org/10.35614/isbn.9789523361416.

Full text
Abstract:
In this report, we have evaluated the performance of nearly 40 global climate models (GCMs) participating in Phase 6 of the Coupled Model Intercomparison Project (CMIP6). The focus is on the northern European area, but the ability to simulate southern European and global climate is discussed as well. Model evaluation started with a technical check: completely unrealistic values in the GCM output files were identified by seeking the absolute minimum and maximum values. In this stage, one GCM was rejected entirely, as were individual output files from two other GCMs. In evaluating the remaining GCMs, the primary tool was the Model Climate Performance Index (MCPI), which combines the RMS errors calculated for the different climate variables into one index. The index takes into account both the seasonal and spatial variations in climatological means. Here, MCPI was calculated for the period 1981–2010 by comparing GCM output with the ERA-Interim reanalyses. The climate variables explored in the evaluation were surface air temperature, precipitation, sea level air pressure, and incoming solar radiation at the surface. Besides MCPI, we studied RMS errors in the seasonal course of the spatial means by examining each climate variable separately. Furthermore, the evaluation procedure considered model performance in simulating past trends in the global-mean temperature, the compatibility of future responses to different greenhouse-gas scenarios, and the number of available scenario runs. Daily minimum and maximum temperatures were likewise explored in a qualitative sense, but owing to missing data from multiple GCMs, these variables were not incorporated in the quantitative validation. Four of the 37 GCMs that had passed the initial technical check were regarded as wholly unusable for scenario calculations: in two GCMs the responses to the different greenhouse-gas scenarios were contradictory, and in two others data were missing for one of the four key climate variables. Moreover, to reduce inter-GCM dependencies, no more than two variants of any individual GCM were included; this led to the abandonment of one further GCM. The remaining 32 GCMs were divided into three quality classes according to the assessed performance. Users of model data can utilize this grading to select a subset of GCMs for elaborating climate projections for Finland or adjacent areas. Annual-mean temperature and precipitation projections for Finland proved to be nearly identical regardless of whether they were derived from the entire ensemble or by ignoring the models that had obtained the lowest scores. Solar radiation projections were somewhat more sensitive.
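The MCPI described above combines per-variable RMS errors into a single score; a simplified Python sketch might look like the following. Normalizing each variable's RMS error by the spread of its reference field, so that variables in different units are comparable, is an assumption here; the report's exact weighting is not reproduced.

```python
import numpy as np

def mcpi(model_fields, reference_fields):
    """Toy performance index: mean of normalized RMS errors over climate
    variables, computed across all seasons and grid points."""
    scores = []
    for var, ref in reference_fields.items():
        sim = model_fields[var]
        rmse = np.sqrt(np.mean((sim - ref) ** 2))
        scores.append(rmse / np.std(ref))  # normalize so variables are comparable
    return np.mean(scores)

# Hypothetical seasonal-mean fields (season x lat x lon) for two variables
rng = np.random.default_rng(1)
ref = {"tas": rng.normal(5, 10, (4, 20, 30)), "pr": rng.gamma(2.0, 1.5, (4, 20, 30))}
sim = {k: v + rng.normal(0, 1, v.shape) for k, v in ref.items()}
print("MCPI:", mcpi(sim, ref))
```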
APA, Harvard, Vancouver, ISO, and other styles
3

Shawler, Justin, Aleksandra Ostojic, Joseph Harwood, Christopher Macon, Molly Reif, and Aaron Schad. Geomorphic Monitoring of Coastal Marsh Restoration Sites: Insights from Field and Remote Sensing Approaches in Louisiana, USA. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/48401.

Full text
Abstract:
Restoration of coastal marshes presents unique geomorphic monitoring challenges because these sites are often remote or inaccessible, and time and financial resources for field-based geomorphic monitoring may be limited. Yet the geomorphic trajectory of coastal marshes controls the overall system health and longevity. The expansion of Unmanned Aircraft System (UAS) technology and new satellite platforms offers opportunities to complement ground-based geomorphic monitoring and overcome the challenges associated with traditional field methods. Here, we compare field-based and remote sensing approaches for monitoring two restored coastal wetlands in Louisiana. At Spanish Pass, a restored marsh and ridge system, methods for measuring site elevation, quantifying shoreline position, and classifying shoreline geomorphic types were compared. Field elevations measured with RTK GNSS were highly correlated (R² = 0.97, RMSE = 0.08 m) with site elevations from the UAS lidar digital elevation model (DEM). Both UAS RGB imagery and satellite imagery collected within the same month as the field data were within 3 m of field shoreline positions (mean absolute error 2.88 m for UAS RGB imagery; 2.86 m for October 2022 Google Earth imagery), though datum- and slope-based shorelines derived from the UAS lidar DEM had the lowest error relative to the field (2.09 m datum; 2.32 m slope). National Agriculture Imagery Program (March 2021) and Google Earth (January 2021) imagery collected more than 18 months before the field data collection had the least accuracy compared to the field shorelines (mean absolute error 3.13 m and 5.08 m, respectively), likely reflecting shoreline change since those dates. In general, field classifications and remotely sensed geomorphic shoreline classifications did not compare well (similarity values 0.4 to 0.6), which supports previous results from the literature showing large differences between field and remote sensing classifications. At La Branche, the second marsh restoration site, access was limited by thick vegetation and ground-based RTK transects were not collected. Thus, UAS lidar DEM elevation data were compared to elevations measured by airborne lidar (2017) and ground surveys (2013) to extend the monitoring record and track elevation change through time. The results demonstrated that minimal measurable elevation change occurred during that period. The airborne lidar data showed consistently lower elevations than the field data (within the 0.2 m vegetated accuracy of the lidar), but the UAS lidar elevations matched the field data, with some spots at slightly higher elevations due to the lack of bathymetric lidar data as well as possible accretion of the marsh platform. Overall, the utility and accuracy of both satellite and UAS remote sensing techniques were demonstrated for monitoring shoreline positions and platform elevations. However, marsh shoreline classifications can be improved with additional detail and/or quantification using elevation profile data (e.g., slopes, tidal datums). Practitioners and researchers monitoring coastal marsh restoration sites can use the information presented in this study to assess the trade-offs and benefits of the various methods, including a multi-method approach as resource and monitoring needs change through time.
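The shoreline comparisons above reduce to a mean absolute error between matched field-surveyed and remotely sensed positions; a minimal sketch with hypothetical cross-shore offsets in metres (all values invented) is:

```python
import numpy as np

def shoreline_mae(field_positions, sensed_positions):
    """Mean absolute error (m) between matched shoreline positions,
    e.g. cross-shore offsets measured along common transects."""
    field = np.asarray(field_positions, dtype=float)
    sensed = np.asarray(sensed_positions, dtype=float)
    return np.mean(np.abs(field - sensed))

# Hypothetical transect offsets (m): field RTK vs. UAS-derived shorelines
field = [12.1, 10.4, 15.8, 9.7, 13.2]
uas = [13.9, 9.1, 17.0, 11.5, 12.0]
print(f"MAE: {shoreline_mae(field, uas):.2f} m")
```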
APA, Harvard, Vancouver, ISO, and other styles