
Journal articles on the topic 'Value aggregation and capture'



Consult the top 50 journal articles for your research on the topic 'Value aggregation and capture.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Ziemba, Paweł, Aneta Becker, and Jarosław Becker. "A Consensus Measure of Expert Judgment in the Fuzzy TOPSIS Method." Symmetry 12, no. 2 (February 1, 2020): 204. http://dx.doi.org/10.3390/sym12020204.

Full text
Abstract:
In the case of many complex, real-world decision problems solved with the participation of a group of experts, it is important to capture the uncertainty of opinions and preferences expressed. In such situations, one can use many modifications of the technique for order preference by similarity to the ideal solution (TOPSIS) method, for example, based on fuzzy numbers. In fuzzy TOPSIS, two aggregation methods of fuzzy expert opinions dominate, the first based on the average value technique and the second one extended by the minimum and maximum functions for determining the support of the aggregated fuzzy number. An important disadvantage of both techniques is the fact that the agreement degree of expert opinions is not taken into account. This article proposes the inclusion of the modified procedure for aggregating individual expert opinions, taking into account the degree of agreement of their opinions (called the similarity aggregation method—SAM) and the ranking of experts into the fuzzy TOPSIS method. The fuzzy TOPSIS method extended in this way was used to solve the decision problem of recruiting employees by a group of experts. As part of the solution, the modified SAM was compared with aggregation procedures based on the average value and min-max (minimum and maximum) support. The results of the conducted research indicate that SAM allows fuzzy numbers to be obtained, characterized by less imprecision and greater stability than the other two considered aggregation procedures.
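The two baseline aggregation schemes contrasted in this abstract can be illustrated with a minimal sketch for triangular fuzzy ratings; this is not the paper's SAM procedure, and the function names and example values are hypothetical.

```python
# Minimal sketch (not the paper's SAM): two baseline ways to aggregate
# triangular fuzzy ratings (l, m, u) given by several experts, as contrasted
# in the abstract above.
import numpy as np

def aggregate_average(ratings):
    """Average-value aggregation: element-wise mean of (l, m, u)."""
    return tuple(np.mean(ratings, axis=0))

def aggregate_minmax(ratings):
    """Min-max support aggregation: the support spans all experts,
    while the modal value is the mean of the individual modal values."""
    r = np.asarray(ratings, dtype=float)
    return (r[:, 0].min(), r[:, 1].mean(), r[:, 2].max())

# Three experts rate one alternative on one criterion (hypothetical values).
expert_ratings = [(2.0, 3.0, 4.0), (3.0, 4.0, 5.0), (2.5, 3.5, 4.5)]
print(aggregate_average(expert_ratings))   # averaged, narrower support
print(aggregate_minmax(expert_ratings))    # widest support, hence more imprecision
```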
APA, Harvard, Vancouver, ISO, and other styles
2

Padmaja, P., and G. V. Marutheswar. "Certain Investigation on Secured Data Transmission in Wireless Sensor Networks." International Journal of Mobile Computing and Multimedia Communications 8, no. 1 (January 2017): 48–61. http://dx.doi.org/10.4018/ijmcmc.2017010104.

Full text
Abstract:
Wireless Sensor Networks (WSNs) need to be more secure while transmitting data and should be deployed properly to reduce redundancy and energy consumption. WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture and the use of insecure wireless communication channels. These constraints make security in WSNs a challenge. In this paper, a survey of security issues in WSNs is presented and a new algorithm, TESDA, is proposed, which is an optimized, energy-efficient, secured data aggregation technique. The cluster head is rotated based on residual energy after each round of aggregation so that network lifetime increases. A trust weight is assigned based on the calculated deviation factor: the greater the deviation, the lower the trust value. Simulation results were obtained using NS-2 and analyzed with the network animator and x-graphs. Among all the protocols compared, TESDA is an energy-efficient, secured data aggregation method.
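The abstract's deviation-based trust weighting can be illustrated with a toy sketch; this is only an interpretation of the stated idea (more deviation, less trust), not the TESDA algorithm, and all names and values are hypothetical.

```python
# Illustrative sketch only (not the TESDA algorithm): assign each sensor a
# trust weight that shrinks with its deviation from the cluster median, then
# aggregate readings at the cluster head as a trust-weighted mean.
import numpy as np

def trust_weighted_aggregate(readings):
    readings = np.asarray(readings, dtype=float)
    deviation = np.abs(readings - np.median(readings))   # deviation factor
    trust = 1.0 / (1.0 + deviation)                       # more deviation -> less trust
    weights = trust / trust.sum()
    return float(np.sum(weights * readings))

print(trust_weighted_aggregate([21.0, 21.4, 20.9, 35.0]))  # the outlier 35.0 is down-weighted
```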
APA, Harvard, Vancouver, ISO, and other styles
3

Gajewski, Byron J., Shawn M. Turner, William L. Eisele, and Clifford H. Spiegelman. "Intelligent Transportation System Data Archiving: Statistical Techniques for Determining Optimal Aggregation Widths for Inductive Loop Detector Speed Data." Transportation Research Record: Journal of the Transportation Research Board 1719, no. 1 (January 2000): 85–93. http://dx.doi.org/10.3141/1719-11.

Full text
Abstract:
Although most traffic management centers collect intelligent transportation system (ITS) traffic monitoring data from local controllers in 20-s to 30-s intervals, the time intervals for archiving data vary considerably from 1 to 5, 15, or even 60 min. Presented are two statistical techniques that can be used to determine optimal aggregation levels for archiving ITS traffic monitoring data: the cross-validated mean square error and the F-statistic algorithm. Both techniques seek to determine the minimal sufficient statistics necessary to capture the full information contained within a traffic parameter distribution. The statistical techniques were applied to 20-s speed data archived by the TransGuide center in San Antonio, Texas. The optimal aggregation levels obtained by using the two algorithms produced reasonable and intuitive results—both techniques calculated optimal aggregation levels of 60 min or more during periods of low traffic variability. Similarly, both techniques calculated optimal aggregation levels of 1 min or less during periods of high traffic variability (e.g., congestion). A distinction is made between conclusions about the statistical techniques and how the techniques can or should be applied to ITS data archiving. Although the statistical techniques described may not be disputed, there is a wide range of possible aggregation solutions based on these statistical techniques. Ultimately, the aggregation solutions may be driven by nonstatistical parameters such as cost (e.g., “How much do we/the market value the data?”), ease of implementation, system requirements, and other constraints.
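A simplified sketch of the underlying idea, assuming a split-half cross-validated MSE per candidate aggregation width; the published algorithms' exact statistics and thresholds are not reproduced, and the data here are synthetic.

```python
# Simplified sketch of the idea: for each candidate aggregation width, split
# the 20-s speed observations inside each window into halves and score how
# well one half's mean predicts the other (a cross-validated MSE). Smaller
# widths win when traffic is variable; larger widths suffice when it is stable.
import numpy as np

def cv_mse_for_width(speeds_20s, width_in_samples):
    errors = []
    for start in range(0, len(speeds_20s) - width_in_samples + 1, width_in_samples):
        window = speeds_20s[start:start + width_in_samples]
        half_a, half_b = window[0::2], window[1::2]
        errors.append((np.mean(half_a) - np.mean(half_b)) ** 2)
    return float(np.mean(errors))

rng = np.random.default_rng(0)
speeds = 60 + rng.normal(0, 2, size=3 * 60 * 3)   # three hours of synthetic 20-s speeds
for minutes in (1, 5, 15, 60):
    width = minutes * 3                            # three 20-s samples per minute
    print(minutes, "min:", round(cv_mse_for_width(speeds, width), 3))
```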
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Wuhuan, Xiaopu Shang, Jun Wang, and Weizi Li. "A Novel Approach to Multi-Attribute Group Decision-Making based on Interval-Valued Intuitionistic Fuzzy Power Muirhead Mean." Symmetry 11, no. 3 (March 25, 2019): 441. http://dx.doi.org/10.3390/sym11030441.

Full text
Abstract:
This paper focuses on the multi-attribute group decision-making (MAGDM) process in which attributes are evaluated in terms of interval-valued intuitionistic fuzzy (IVIF) information. More explicitly, this paper introduces new aggregation operators for IVIF information and further proposes a new IVIF MAGDM method. The power average (PA) operator and the Muirhead mean (MM) are two powerful and effective information aggregation technologies. The most attractive advantage of the PA operator is its power to combat the adverse effects of extreme evaluation values on the information aggregation results. The prominent characteristic of the MM operator is that it can flexibly capture the interrelationship among any number of arguments, making it more powerful than the Bonferroni mean (BM), Heronian mean (HM), and Maclaurin symmetric mean (MSM). To absorb the virtues of both PA and MM, we combine them to aggregate IVIF information and propose the IVIF power Muirhead mean (IVIFPMM) operator and the IVIF weighted power Muirhead mean (IVIFWPMM) operator. We investigate their properties to show their strength and flexibility. Furthermore, a novel approach to MAGDM problems with IVIF decision-making information is introduced. Finally, a numerical example is provided to show the performance of the proposed method.
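For readers unfamiliar with the Muirhead mean, a crisp (non-fuzzy) version shows how the parameter vector P controls which arguments interact; the paper's IVIFPMM and IVIFWPMM operators extend this definition to interval-valued intuitionistic fuzzy numbers and are not implemented here.

```python
# Crisp Muirhead mean as a reference point: the parameter vector P controls
# how many arguments interact in each product term.
from itertools import permutations
from math import factorial, prod

def muirhead_mean(values, P):
    n = len(values)
    total = sum(prod(values[s] ** p for s, p in zip(sigma, P))
                for sigma in permutations(range(n)))
    return (total / factorial(n)) ** (1.0 / sum(P))

a = [0.6, 0.7, 0.8, 0.5]
print(muirhead_mean(a, [1, 0, 0, 0]))   # reduces to the arithmetic mean
print(muirhead_mean(a, [1, 1, 0, 0]))   # pairwise interrelationships (Bonferroni-like)
print(muirhead_mean(a, [1, 1, 1, 1]))   # all arguments interact (geometric-mean-like)
```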
APA, Harvard, Vancouver, ISO, and other styles
5

Qin, Jindong, and Xinwang Liu. "2-tuple linguistic Muirhead mean operators for multiple attribute group decision making and its application to supplier selection." Kybernetes 45, no. 1 (January 11, 2016): 2–29. http://dx.doi.org/10.1108/k-11-2014-0271.

Full text
Abstract:
Purpose – The purpose of this paper is to develop some 2-tuple linguistic aggregation operators based on the Muirhead mean (MM) for multiple attribute group decision making (MAGDM) and to apply the proposed MAGDM model to supplier selection under a 2-tuple linguistic environment. Design/methodology/approach – The supplier selection problem can be regarded as a typical MAGDM problem, in which the decision information should be aggregated. In this paper, the authors investigate MAGDM problems with 2-tuple linguistic information based on the traditional MM operator. The MM operator is a well-known mean-type aggregation operator, which has some particular advantages for aggregating multi-dimensional arguments. The prominent characteristic of the MM operator is that it can capture the whole interrelationship among the multi-input arguments. Motivated by this idea, the authors develop the 2-tuple linguistic Muirhead mean (2TLMM) operator and the 2-tuple linguistic dual Muirhead mean (2TLDMM) operator for aggregating 2-tuple linguistic information. Some desirable properties and special cases are discussed in detail. On this basis, two approaches to deal with MAGDM problems under a 2-tuple linguistic information environment are developed. Finally, a numerical example concerning the supplier selection problem is provided to illustrate the effectiveness and feasibility of the proposed methods. Findings – The results show that the proposed methods can solve MAGDM problems within the context of 2-tuple linguistic information in which interaction phenomena exist among the attributes. Some 2-tuple aggregation operators based on the MM have been developed. A case study of supplier selection is provided to illustrate the effectiveness and feasibility of the proposed methods. The results show that the proposed methods are useful for aggregating linguistic decision information in which the attributes are not independent, so as to select the most suitable supplier. Practical implications – The proposed methods can solve 2-tuple linguistic MAGDM problems in which interactions exist among the attributes. Therefore, they can be used for supplier selection problems and other similar management decision problems. Originality/value – The paper develops some 2-tuple aggregation operators based on the MM and further presents two methods based on the proposed operators for solving MAGDM problems. It is useful for dealing with multiple attribute interaction decision-making problems and is suitable for solving a variety of management decision-making applications.
APA, Harvard, Vancouver, ISO, and other styles
6

Krech, Carol A., Frauke Rüther, and Oliver Gassmann. "Profiting from Invention: Business Models of Patent Aggregating Companies." International Journal of Innovation Management 19, no. 03 (May 27, 2015): 1540005. http://dx.doi.org/10.1142/s1363919615400058.

Full text
Abstract:
Patent aggregating companies are institutions that aggregate patents for different purposes. From a managerial perspective as well as a theoretical perspective, it is interesting to understand what value such novel business models provide to inventing companies. In this paper we focus on the question how patent holders can use patent aggregating companies as means to capture value from their inventions. Therefore the business models of patent aggregating companies need to be understood. Existing literature lacks a systematic and comprehensive analysis of the patent aggregating companies' business models. The empirical data presented and discussed in this article was collected over a five-year (2009–2014) period in semi-structured interviews with patent aggregating companies' incorporating personnel and in an extensive analysis of secondary data. We conclude our study by identifying four groups of patent aggregating companies based on the values provided to the original patent holders: the guarders, the shielders, the funders and the earners.
APA, Harvard, Vancouver, ISO, and other styles
7

Bu, Qiong, Elena Simperl, Adriane Chapman, and Eddy Maddalena. "Quality assessment in crowdsourced classification tasks." International Journal of Crowd Science 3, no. 3 (September 2, 2019): 222–48. http://dx.doi.org/10.1108/ijcs-06-2019-0017.

Full text
Abstract:
Purpose Ensuring quality is one of the most significant challenges in microtask crowdsourcing tasks. Aggregation of the collected data from the crowd is one of the important steps to infer the correct answer, but the existing study seems to be limited to the single-step task. This study aims to look at multiple-step classification tasks and understand aggregation in such cases; hence, it is useful for assessing the classification quality. Design/methodology/approach The authors present a model to capture the information of the workflow, questions and answers for both single- and multiple-question classification tasks. They propose an adapted approach on top of the classic approach so that the model can handle tasks with several multiple-choice questions in general instead of a specific domain or any specific hierarchical classifications. They evaluate their approach with three representative tasks from existing citizen science projects in which they have the gold standard created by experts. Findings The results show that the approach can provide significant improvements to the overall classification accuracy. The authors’ analysis also demonstrates that all algorithms can achieve higher accuracy for the volunteer- versus paid-generated data sets for the same task. Furthermore, the authors observed interesting patterns in the relationship between the performance of different algorithms and workflow-specific factors including the number of steps and the number of available options in each step. Originality/value Due to the nature of crowdsourcing, aggregating the collected data is an important process to understand the quality of crowdsourcing results. Different inference algorithms have been studied for simple microtasks consisting of single questions with two or more answers. However, as classification tasks typically contain many questions, the proposed method can be applied to a wide range of tasks including both single- and multiple-question classification tasks.
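As a baseline for comparison, per-question majority voting illustrates the input/output shape of the aggregation problem; the adapted inference algorithms studied in the paper are more sophisticated, and the task and answers below are hypothetical.

```python
# Baseline sketch only: aggregate crowd answers to a multi-question
# classification task by per-question majority vote. Per-worker answers go in,
# one consensus label per question comes out.
from collections import Counter

def majority_vote(task_answers):
    """task_answers: {question_id: [answers from individual workers]}"""
    return {q: Counter(answers).most_common(1)[0][0]
            for q, answers in task_answers.items()}

answers = {
    "q1_is_animal_present": ["yes", "yes", "no", "yes"],
    "q2_species":           ["zebra", "zebra", "wildebeest", "zebra"],
}
print(majority_vote(answers))
```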
APA, Harvard, Vancouver, ISO, and other styles
8

Qiu, Hong, Genhua Hu, Yuhong Yang, Jeffrey Zhang, and Ting Zhang. "Modeling the Risk of Extreme Value Dependence in Chinese Regional Carbon Emission Markets." Sustainability 12, no. 19 (September 24, 2020): 7911. http://dx.doi.org/10.3390/su12197911.

Full text
Abstract:
In this study, we analyze the risk of extreme value dependence in Chinese regional carbon emission markets. After filtering the daily return data of six carbon markets in China using a generalized autoregressive conditional heteroscedasticity (GARCH) model, we obtain the standardized residual series. Next, the dependence structures in the markets are captured by the Copula function and the Extreme Value theory (EVT). We report high peaks, heavy tails and fluctuation aggregation in the logarithm return series of the markets, as well as significant dependent structures. There are significant extreme value risks in Chinese regional carbon markets, but the risks can be mitigated through appropriate portfolio diversification.
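A simplified sketch of the marginal modeling step, assuming an EWMA volatility proxy in place of the fitted GARCH model and a generalized Pareto fit to the standardized loss tail; the copula step and the actual market data are not reproduced.

```python
# Simplified sketch: standardize returns with an EWMA volatility proxy
# (standing in for the GARCH(1,1) filter) and fit a generalized Pareto
# distribution to the lower-tail exceedances (the EVT part). The copula
# linking the markets' residuals is not shown.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=1500) * 0.01    # toy daily log returns

# EWMA variance as a crude conditional-volatility filter.
lam, var = 0.94, np.var(returns)
sigma2 = np.empty_like(returns)
for t, r in enumerate(returns):
    sigma2[t] = var
    var = lam * var + (1 - lam) * r ** 2
std_resid = returns / np.sqrt(sigma2)

# Peaks-over-threshold fit on the loss tail.
losses = -std_resid
threshold = np.quantile(losses, 0.95)
excesses = losses[losses > threshold] - threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
print("GPD shape (xi):", round(shape, 3), "scale:", round(scale, 3))
```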
APA, Harvard, Vancouver, ISO, and other styles
9

Zeng, Ping, Jing Dai, Siyi Jin, and Xiang Zhou. "Aggregating multiple expression prediction models improves the power of transcriptome-wide association studies." Human Molecular Genetics 30, no. 10 (February 22, 2021): 939–51. http://dx.doi.org/10.1093/hmg/ddab056.

Full text
Abstract:
Abstract Transcriptome-wide association study (TWAS) is an important integrative method for identifying genes that are causally associated with phenotypes. A key step of TWAS involves the construction of expression prediction models for every gene in turn using its cis-SNPs as predictors. Different TWAS methods rely on different models for gene expression prediction, and each such model makes a distinct modeling assumption that is often suitable for a particular genetic architecture underlying expression. However, the genetic architectures underlying gene expression vary across genes throughout the transcriptome. Consequently, different TWAS methods may be beneficial in detecting genes with distinct genetic architectures. Here, we develop a new method, HMAT, which aggregates TWAS association evidence obtained across multiple gene expression prediction models by leveraging the harmonic mean P-value combination strategy. Because each expression prediction model is suited to capture a particular genetic architecture, aggregating TWAS associations across prediction models as in HMAT improves accurate expression prediction and enables subsequent powerful TWAS analysis across the transcriptome. A key feature of HMAT is its ability to accommodate the correlations among different TWAS test statistics and produce calibrated P-values after aggregation. Through numerical simulations, we illustrated the advantage of HMAT over commonly used TWAS methods as well as ad hoc P-value combination rules such as Fisher’s method. We also applied HMAT to analyze summary statistics of nine common diseases. In the real data applications, HMAT was on average 30.6% more powerful compared to the next best method, detecting many new disease-associated genes that were otherwise not identified by existing TWAS approaches. In conclusion, HMAT represents a flexible and powerful TWAS method that enjoys robust performance across a range of genetic architectures underlying gene expression.
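The p-value aggregation step can be sketched as a weighted harmonic mean of the per-model TWAS p-values; HMAT's calibrated significance assessment relies on the harmonic mean p-value's asymptotic theory, which this sketch omits, and the example p-values are hypothetical.

```python
# Sketch of the aggregation step only: combine one gene's TWAS p-values from
# several expression prediction models by a weighted harmonic mean. Calibrated
# significance is then assessed via the HMP framework's asymptotic null
# distribution, which is omitted here.
import numpy as np

def harmonic_mean_pvalue(pvals, weights=None):
    pvals = np.asarray(pvals, dtype=float)
    if weights is None:
        weights = np.full(pvals.shape, 1.0 / len(pvals))
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / np.sum(weights / pvals)

# One gene tested with four prediction models (hypothetical p-values).
print(harmonic_mean_pvalue([0.002, 0.40, 0.75, 0.09]))   # dominated by the strongest signal
```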
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Jie, Guiwu Wei, and Hui Gao. "Approaches to Multiple Attribute Decision Making with Interval-Valued 2-Tuple Linguistic Pythagorean Fuzzy Information." Mathematics 6, no. 10 (October 13, 2018): 201. http://dx.doi.org/10.3390/math6100201.

Full text
Abstract:
The Maclaurin symmetric mean (MSM) operator is a classical mean-type aggregation operator used in modern information fusion theory, which is suitable for aggregating numerical values. The prominent characteristic of the MSM operator is that it can capture the interrelationship among multi-input arguments. Motivated by the ideal characteristic of the MSM operator, in this paper, we extend the MSM operator, the generalized MSM (GMSM) operator, and the dual MSM (DMSM) operator with interval-valued 2-tuple linguistic Pythagorean fuzzy numbers (IV2TLPFNs) to propose the interval-valued 2-tuple linguistic Pythagorean fuzzy MSM (IV2TLPFMSM) operator, the interval-valued 2-tuple linguistic Pythagorean fuzzy weighted MSM (IV2TLPFWMSM) operator, the interval-valued 2-tuple linguistic Pythagorean fuzzy GMSM (IV2TLPFGMSM) operator, the interval-valued 2-tuple linguistic Pythagorean fuzzy weighted GMSM (IV2TLPFWGMSM) operator, the interval-valued 2-tuple linguistic Pythagorean fuzzy DMSM (IV2TLPFDMSM) operator, and the interval-valued 2-tuple linguistic Pythagorean fuzzy weighted DMSM (IV2TLPFWDMSM) operator. Then, multiple attribute decision making (MADM) methods are developed with these operators. Finally, an example of green supplier selection is used to show the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
11

Díaz-Curbelo, Alina, Ángel M. Gento, Alfonso Redondo, and Faisal Aqlan. "A Fuzzy-Based Holistic Approach for Supply Chain Risk Assessment and Aggregation Considering Risk Interdependencies." Applied Sciences 9, no. 24 (December 6, 2019): 5329. http://dx.doi.org/10.3390/app9245329.

Full text
Abstract:
Supply chain risk management requires dealing with uncertainty, interrelations, and subjectivity inherent in the risk assessment process. This paper proposes a holistic approach for risk management that considers the impact on multiple performance objectives, the relation between risk agents, and the risk event interdependencies. An aggregated risk score is proposed to capture the cascading effects of common risk triggers and quantify the aggregated score by risk agent and objective. The approach also uses fuzzy logic to allow for the treatment of vague and ambiguity data as input parameters to the model from different domains and scales, according to knowledge and criteria nature. The integration of the balanced scorecard tool improves the analysis and prioritization of mitigation strategies in decision-making, both by risk agent and by strategic objective. A case study of a telecommunication company is presented to illustrate the applicability of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
12

Poisson, Laila, M. C. M. Kouwenhoven, James Snyder, Kristin Alfaro-Munoz, Manpreet Kaur, Amanda Bates, Roel Verhaak, et al. "EPID-23. PURSUIT OF AN INTERNATIONAL LANGUAGE OF GLIOMA RESEARCH: COMMON DATA ELEMENTS FOR THE LONGITUDINAL STUDY OF ADULT MALIGNANT GLIOMA." Neuro-Oncology 21, Supplement_6 (November 2019): vi79. http://dx.doi.org/10.1093/neuonc/noz175.323.

Full text
Abstract:
Abstract As an uncommon cancer, clinical and translational studies of glioma rely on multi-center collaborations, confirmatory studies, and meta-analyses. Unfortunately, interpretation of results across studies is hampered by the absence of uniformly coded clinical data. Common Data Elements (CDE) represent a set of clinical features for which the language has been standardized for consistent data capture across studies, institutions and registries. We constructed CDE for the longitudinal study of adult malignant glioma. To identify the minimum set of CDE needed to describe the clinical course of glioma, we surveyed clinical standards, ongoing trials, published studies, and data repositories for frequently used data elements. We harmonized the identified clinical variables, filled in gaps, and structured them in a modular schema, defining CDE for patient demographics, medical history, diagnosis, surgery, chemotherapy, radiotherapy, other treatments, and outcomes. Multidisciplinary experts from the Glioma Longitudinal AnalySiS (GLASS) consortium, representing clinical, molecular, and data research perspectives, were consulted regarding CDE. The validity and capture feasibility of the CDE were assessed through harmonization across published studies, then validated with single institution retrospective chart abstraction. The refined CDE library is implemented in the Research Electronic Data Capture (REDCap) System, a secure web application for building and managing online surveys and databases. The work was motivated by the GLASS consortium, which supports the aggregation and analysis of complex genetic datasets used to define molecular trajectories for glioma. The goal is that modular REDCap implementation of CDE allows broad adoption in glioma research. To accommodate novel aspects, the CDE sets can be expanded through additional modules. In contrast, for efficient initiation of focused studies, subsets of CDE can be selected. Broad adoption of CDE will improve the ability to compare results and share data between studies, thereby maximizing the value of existing data sources and small patient populations.
APA, Harvard, Vancouver, ISO, and other styles
13

Zeng, Yining, Rongxing Duan, Shujuan Huang, and Tao Feng. "Reliability analysis for complex systems based on generalized stochastic petri nets and EDA approach considering common cause failure." Engineering Computations 37, no. 5 (December 19, 2019): 1513–30. http://dx.doi.org/10.1108/ec-05-2019-0241.

Full text
Abstract:
Purpose This paper aims to deal with the problems of failure dependence and common cause failure (CCF) that arise in the reliability analysis of complex systems. Design/methodology/approach Firstly, a dynamic fault tree (DFT) is used to capture the dynamic failure behaviours and is converted into an equivalent generalized stochastic petri net (GSPN) for quantitative analysis. Secondly, an efficient decomposition and aggregation (EDA) theory is combined with the GSPN to deal with the CCF problem, which exists in redundant systems. Finally, the Birnbaum importance measure (BIM) is calculated based on the EDA approach and the GSPN model, and it is used to make decisions for system improvement and fault diagnosis. Findings In this paper, a new reliability evaluation method for dynamic systems subject to CCF is presented based on DFT analysis and the GSPN model. The GSPN model readily captures the dynamic failure behaviours of complex systems, and the movement of tokens in the GSPN model represents the changes in the state of the systems. The proposed method takes advantage of the GSPN model and incorporates the EDA method into the GSPN, which simplifies the reliability analysis process. Meanwhile, simulation results under different conditions show that CCF has a considerable impact on the reliability analysis of complex systems, which indicates that CCF should not be ignored in reliability analysis. Originality/value The proposed method combines the EDA theory with the GSPN model to improve the efficiency of the reliability analysis.
APA, Harvard, Vancouver, ISO, and other styles
14

Krishankumar, Ravichandran, Ahmed, Kar, and Peng. "Interval-Valued Probabilistic Hesitant Fuzzy Set Based Muirhead Mean for Multi-Attribute Group Decision-Making." Mathematics 7, no. 4 (April 9, 2019): 342. http://dx.doi.org/10.3390/math7040342.

Full text
Abstract:
As a powerful generalization to fuzzy set, hesitant fuzzy set (HFS) was introduced, which provided multiple possible membership values to be associated with a specific instance. But HFS did not consider occurrence probability values, and to circumvent the issue, probabilistic HFS (PHFS) was introduced, which associates an occurrence probability value with each hesitant fuzzy element (HFE). Providing such a precise probability value is an open challenge and as a generalization to PHFS, interval-valued PHFS (IVPHFS) was proposed. IVPHFS provided flexibility to decision makers (DMs) by associating a range of values as an occurrence probability for each HFE. To enrich the usefulness of IVPHFS in multi-attribute group decision-making (MAGDM), in this paper, we extend the Muirhead mean (MM) operator to IVPHFS for aggregating preferences. The MM operator is a generalized operator that can effectively capture the interrelationship between multiple attributes. Some properties of the proposed operator are also discussed. Then, a new programming model is proposed for calculating the weights of attributes using DMs’ partial information. Later, a systematic procedure is presented for MAGDM with the proposed operator and the practical use of the operator is demonstrated by using a renewable energy source selection problem. Finally, the strengths and weaknesses of the proposal are discussed in comparison with other methods.
APA, Harvard, Vancouver, ISO, and other styles
15

Thomas, Olubusola O., Rajagopal S. Raghavan, and Thomas N. Dixon. "Effect of Scaleup and Aggregation on the Use of Well Tests To Identify Geological Properties." SPE Reservoir Evaluation & Engineering 8, no. 03 (June 1, 2005): 248–54. http://dx.doi.org/10.2118/77452-pa.

Full text
Abstract:
Summary This paper discusses specific issues encountered when pressure tests are analyzed in reservoirs with complex geological properties. These issues relate to questions concerning the methodology of scaleup, the degree of aggregation, and the reliability of conventional methods of analysis. The paper shows that if we desire to use pressure-transient analysis to determine more complex geological features such as connectivity and widths of channels, we need a model that incorporates reservoir heterogeneity. This complexity can lead to significantly more computational effort in the analysis of the pressure transient. The paper demonstrates that scaleup criteria, based on steady-state procedures, are inadequate to capture transient pressure responses. Furthermore, the number of layers needed to match the transient response may be significantly greater than the number of layers needed for a reservoir-simulation study. The use of models without a sufficient number of layers may lead to interpretations that are in significant error. The paper compares various vertical aggregation methods to coarsen the fine-grid model. The pressure-derivative curve is used as a measure of evaluating the adequacy of the scaleup procedure. Neither the use of permeability at a wellbore nor the average layer permeability as criteria for the aggregation was adequate to reduce the number of layers significantly. Introduction The objectives of this paper are to demonstrate the impact of the detailed and small-scale heterogeneities of a formation on the flow characteristics that are obtained from a pressure test and how those heterogeneities affect the analysis of the pressure test. The literature recognizes that special scaleup procedures are required in the vicinity of wells located in heterogeneous fields. Our work demonstrates that these procedures apply only to rather small changes in pressure over time and are usually inadequate to meet objectives for history-matching well tests. Using a fine-scale geological model derived by geological and geophysical techniques, this work systematically examines the interpretations obtained by various aggregation and scaleup techniques. We will demonstrate that unless care is taken, the consequences of too much aggregation may lead to significant errors on decisions concerning the value of a reservoir. Current scaleup techniques presume that spatial (location of boundaries, location of faults, etc.) variables are maintained. In analyzing a well test, however, one of our principal objectives is to determine the relationship between the well response and geometrical variables. We show that a limited amount of aggregation will preserve the spatial and petrophysical relationships we wish to determine. At this time, there appears to be no method available to determine the degree of scaleup a priori. Because the objective of well testing is to estimate reservoir properties, the scaleup process needs to be made a part of the history-matching procedure. By assuming a truth case, we show that too much vertical aggregation may lead to significant errors. Comparisons with traditional analyses based on analytical techniques are made. Whenever an analytical model is used in the analysis, unless otherwise stated, we use a single-layer-reservoir solution.
APA, Harvard, Vancouver, ISO, and other styles
16

Jacinto, R., N. Grosso, E. Reis, L. Dias, F. D. Santos, and P. Garrett. "Continental Portuguese Territory Flood Susceptibility Index – contribution to a vulnerability index." Natural Hazards and Earth System Sciences 15, no. 8 (August 26, 2015): 1907–19. http://dx.doi.org/10.5194/nhess-15-1907-2015.

Full text
Abstract:
This work defines a national flood susceptibility index for the Portuguese continental territory by proposing the aggregation of different variables which represent natural conditions for permeability, runoff and accumulation. This index is part of the national vulnerability index developed in the scope of the Flood Maps in Climate Change Scenarios (CIRAC) project, supported by the Portuguese Association of Insurers (APS). This approach expands on previous works by trying to bridge the gap between different flood mechanisms (e.g. progressive and flash floods) occurring at different spatial scales in the Portuguese territory through (a) selecting homogeneously processed data sets and (b) aggregating their values to better translate the spatially continuous and cumulative influence in floods at multiple spatial scales. Results show a good ability to capture, in the higher susceptibility classes, different flood types: fluvial floods and flash floods. Lower values are usually related to mountainous areas, low water accumulation potential and more permeable soils. Validation with independent flood data sets confirmed these index characteristics, although some overestimation can be seen in the southern region of Alentejo where, due to a dense hydrographic network and an overall low slope, floods are not as frequent as a result of lower mean precipitation values. Future work will focus on (i) including extreme precipitation data sets to represent the triggering factor, (ii) improving the representation of smaller and steeper basins, (iii) optimizing the variable weight definition process and (iv) developing more robust independent flood validation data sets.
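The aggregation idea behind such an index can be sketched generically as a weighted sum of min-max-normalized factors per grid cell; the CIRAC variables, weights and data are not reproduced here, and the factor names below are placeholders.

```python
# Generic sketch of the aggregation idea: min-max normalize each susceptibility
# factor and combine the factors as a weighted sum into a 0-1 index per grid cell.
import numpy as np

def susceptibility_index(factors, weights):
    """factors: dict name -> 2D array; weights: dict name -> float (summing to 1)."""
    index = np.zeros_like(next(iter(factors.values())), dtype=float)
    for name, grid in factors.items():
        g = grid.astype(float)
        normalized = (g - g.min()) / (g.max() - g.min())
        index += weights[name] * normalized
    return index

rng = np.random.default_rng(2)
factors = {"runoff_potential": rng.random((4, 4)),
           "flow_accumulation": rng.random((4, 4)),
           "soil_impermeability": rng.random((4, 4))}
weights = {"runoff_potential": 0.4, "flow_accumulation": 0.4, "soil_impermeability": 0.2}
print(susceptibility_index(factors, weights).round(2))
```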
APA, Harvard, Vancouver, ISO, and other styles
17

Jacinto, R., N. Grosso, E. Reis, L. Dias, F. D. Santos, and P. Garrett. "Continental Portuguese Territory Flood Susceptibility Index – contribution for a vulnerability index." Natural Hazards and Earth System Sciences Discussions 2, no. 12 (December 15, 2014): 7521–52. http://dx.doi.org/10.5194/nhessd-2-7521-2014.

Full text
Abstract:
This work defines a national flood susceptibility index for the Portuguese continental territory by proposing the aggregation of different variables which represent natural conditions for permeability, runoff and accumulation. This index is part of the national vulnerability index developed in the scope of the Flood Maps in Climate Change Scenarios (CIRAC) project, supported by the Portuguese Association of Insurers (APS). This approach expands on previous works by trying to bridge the gap between different flood mechanisms (e.g. progressive and flash floods) occurring at different spatial scales in the Portuguese territory through: (a) selecting homogeneously processed datasets, and (b) aggregating their values to better translate the spatially continuous and cumulative influence in floods at multiple spatial scales. Results show a good ability to capture, in the higher susceptibility classes, different flood types: progressive floods and flash floods. Lower values are usually related to mountainous areas, low water accumulation potential and more permeable soils. Validation with independent flood datasets confirmed these index characteristics, although some overestimation can be seen in the southern region of Alentejo where, due to a dense hydrographic network and an overall low slope, floods are not as frequent as a result of lower mean precipitation values. Future work will focus on: (i) including extreme precipitation datasets to represent the triggering factor, (ii) improving the representation of smaller and steeper basins, (iii) optimizing the variable weight definition process, and (iv) developing more robust independent flood validation datasets.
APA, Harvard, Vancouver, ISO, and other styles
18

Piovano, S., GE Lemons, A. Ciriyawa, A. Batibasaga, and JA Seminoff. "Diet and recruitment of green turtles in Fiji, South Pacific, inferred from in-water capture and stable isotope analysis." Marine Ecology Progress Series 640 (April 23, 2020): 201–13. http://dx.doi.org/10.3354/meps13287.

Full text
Abstract:
Green turtles Chelonia mydas are listed as Endangered on the IUCN Red List, yet in the South Pacific few conservation-relevant data are available for the species, especially relating to foraging and habitat use. Here, in situ observations and stable isotope analysis (δ13C and δ15N) were used to evaluate green turtle diet and recruitment patterns at Yadua Island and Makogai Island, Fiji. Juvenile green turtles (N = 110) were hand-captured, measured, and sampled. Stable isotope analysis was performed on skin samples and on putative prey items. ‘Resident’ turtles versus ‘recent recruits’ were classified based on their bulk skin tissue isotope values, which were compared with stable isotope values of local prey items and analyzed via cluster analysis. Green turtle diet composition was estimated using MixSIAR, a Bayesian mixing model. Recent recruits were characterized by ‘low δ13C/high δ15N’ values and ranged in curved carapace length (CCL) from 25.5 to 60.0 cm (mean ± SD = 48.5 ± 5.7 cm). Recruitment mostly occurred in summer. Green turtles identified as ‘residents’ had CCLs ranging from 43.5 to 89.0 cm (mean ± SD = 57.4 ± 9.0 cm) and were characterized by ‘high δ13C/low δ15N’ values; mixing model results indicate they fed primarily on invertebrates (40%), fishes (31%), and marine plants (29%). This study confirms the value of seagrass pastures as both an essential habitat and a primary food source for green turtles, and can serve as a baseline for evaluations of natural and anthropogenic changes in local green turtle aggregations.
APA, Harvard, Vancouver, ISO, and other styles
19

Jassim, Esam. "Particle Entrainment and Deposition Scenario in Sublayer Region of Variable Area Conduit." E3S Web of Conferences 162 (2020): 03006. http://dx.doi.org/10.1051/e3sconf/202016203006.

Full text
Abstract:
The study presents the particle deposition and aggregation phenomena by introducing a new parameter called the Particle Deposition Number (PDN), defined as the ratio of the particle's instantaneous velocity to its capture value. The particle's capture or rebound fate can be decided from this number. The study employs a new scheme of particle deposition in the sublayer region which involves the balancing of four forces. Moreover, a bouncing model is also considered for the particle fate decision. The study examines the variation of particle velocity in a variable-area tube and the critical velocity below which a particle will tend to stick. The results show that the threshold velocity decreases exponentially with increasing particle size. Capture of particles is shown to be enhanced as the conduit converges, owing to the increase in the PDN. The analysis of the deposition also investigates the impact of the particle size on the PDN. At low flow velocity, the PDN has a V-shaped trend as particle size increases; however, the trend veers toward a constant PDN value as the flow velocity is augmented. Finally, small-sized particles experience rebound because their impact energy prevails over the adhesion energy before impact with the surface. The dissipation of particle energy during impact causes a large-sized particle to lose a greater amount of energy compared to a small-sized one, resulting in the domination of the adhesion part, which leads to deposition on the surface.
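Read directly from the abstract, the deposition criterion can be sketched as follows; the full four-force balance and bouncing model are not reproduced, and the velocities below are hypothetical.

```python
# Sketch of the criterion as described in the abstract: the Particle Deposition
# Number (PDN) is the ratio of the particle's instantaneous velocity to its
# capture (threshold) velocity; particles slower than the threshold tend to stick.
def particle_fate(instantaneous_velocity, capture_velocity):
    pdn = instantaneous_velocity / capture_velocity
    return pdn, ("deposit" if pdn < 1.0 else "rebound")

print(particle_fate(0.8, 1.2))   # PDN < 1 -> deposit
print(particle_fate(2.4, 1.2))   # PDN > 1 -> rebound
```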
APA, Harvard, Vancouver, ISO, and other styles
20

Björnsson, Björn. "Fish aggregating sound technique (FAST): how low-frequency sound could be used in fishing and ranching of cod." ICES Journal of Marine Science 75, no. 4 (January 18, 2018): 1258–68. http://dx.doi.org/10.1093/icesjms/fsx251.

Full text
Abstract:
Abstract In marine fisheries, considerable development has occurred in capture technology. Yet, some of the current fishing methods impact the environment by large greenhouse gas emission, harmful effects to benthic communities, and/or high bycatch of juvenile and unwanted species. It is proposed that for some fish species these deficiencies could be mitigated by classical conditioning using sound and food reward to concentrate wild fish before capture with environmentally friendly fishing gear. Atlantic cod (Gadus morhua), which globally is among the fish species with the highest landed value, can be acoustically trained. In a sea cage, it takes about a week to train a group of naïve cod to associate low frequency (250 Hz) sound with food, whereas the training of a group of naïve cod accompanied with one trained cod takes less than a day. In inshore areas, it takes a few weeks to attract thousands of cod to stations where food is regularly delivered. These conditioned cod wait at the stations for their meals and do not mingle much with the unconditioned cod which hunt for wild prey. It is suggested that by calling acoustically conditioned fish between stations, a much larger number of naïve fish can be gathered. This so-called fish aggregating sound technique (FAST) may thus facilitate the accumulation of wild fish and expedite their capture with a purse seine or a trap in a way that minimizes fuel consumption and mortality of juveniles and unwanted species. The operation of FAST requires exclusive rights of a designated fishing area. The exclusivity makes it possible to on-grow the fish in free-ranging schools and sea cages for several months to increase their size and food quality before capture.
APA, Harvard, Vancouver, ISO, and other styles
21

Wu, Zongning, Hongbo Cai, Ruining Zhao, Ying Fan, Zengru Di, and Jiang Zhang. "A Topological Analysis of Trade Distance: Evidence from the Gravity Model and Complex Flow Networks." Sustainability 12, no. 9 (April 25, 2020): 3511. http://dx.doi.org/10.3390/su12093511.

Full text
Abstract:
As a classical trade model, the gravity model plays an important role in the trade policy-making process. However, physical distance fails to capture the effects of globalization and even ignores the multilateral resistance of trade. Here, we propose a general model describing the effective distance of trade according to multilateral trade path information and the structure of the trade flow network. Quantifying effective trade distance aims to identify the hidden resistance information from trade network data and then describe trade barriers. The results show that flow distance, which combines multi-path constraints with the international trade network, contributes to the forecasting of trade flows. Meanwhile, we also analyze the role of flow distance in international trade from the two perspectives of network science and the econometric model. At the econometric model level, flow distance can collapse to the predicting results of geographic distance for the proper time-lagged variable, which also reflects that flow distance contains geographical factors. At the international trade network level, community structure detection by flow distances and flow space embedding indicated that the formation of international trade networks is a tradeoff between international specialization in the trade value chain and geographical aggregation. The methodology and results can be generalized to the study of all kinds of product trade systems.
APA, Harvard, Vancouver, ISO, and other styles
22

LEE, SANG-HEE. "FLOCK FORAGING EFFICIENCY IN RELATION TO FOOD SENSING ABILITY AND DISTRIBUTION: A SIMULATION STUDY." International Journal of Modern Physics B 27, no. 25 (September 12, 2013): 1350145. http://dx.doi.org/10.1142/s0217979213501452.

Full text
Abstract:
Flocking may be an advantageous strategy for acquiring food resources. The degree of advantage is related to two factors: the ability of flock members to detect food resources and patterns of food distribution in the environment. To understand foraging efficiency as a function of these factors, I constructed a two-dimensional (2D) flocking model incorporating the two factors. At the start of the simulation, food particles were heterogeneously distributed. The heterogeneity, H, was characterized as a value ranging from 0.0 to 1.0. For each flock member, food sensing ability was defined by two variables: sensing distance, R and sensing angle, θ. Foraging efficiency of a flock was defined as the time, τ, required for a flock to consume all the available food resources. Simulation results showed that flock foraging is most efficient when individuals had an intermediate sensing ability (R = 60), but decreased for low (R < 60) and high (R > 60) sensing ability. When R > 60, patterns in foraging efficiency with increasing sensing distance and food resource aggregation were less consistent. This inconsistency was due to instability of the flock and a higher rate of individuals failing to capture target food resources. In addition, I briefly discuss the benefits obtained by foraging in flocks from an evolutionary perspective.
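The sensing rule described by R and θ can be sketched as a simple cone test around an agent's heading; the full flocking dynamics and food consumption rules of the model are not reproduced, and the example positions are hypothetical.

```python
# Geometry sketch only: test whether a food particle falls inside an agent's
# sensing region, defined by a sensing distance R and a sensing angle theta
# measured around the agent's heading.
import numpy as np

def can_sense(agent_pos, agent_heading, food_pos, R, theta):
    offset = np.asarray(food_pos, dtype=float) - np.asarray(agent_pos, dtype=float)
    distance = np.linalg.norm(offset)
    if distance > R or distance == 0.0:
        return False
    heading = np.asarray(agent_heading, dtype=float)
    heading = heading / np.linalg.norm(heading)
    angle = np.arccos(np.clip(np.dot(offset / distance, heading), -1.0, 1.0))
    return angle <= theta / 2.0

print(can_sense((0, 0), (1, 0), (30, 5), R=60, theta=np.pi / 2))   # True: within the cone
print(can_sense((0, 0), (1, 0), (0, 50), R=60, theta=np.pi / 2))   # False: outside the cone
```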
APA, Harvard, Vancouver, ISO, and other styles
23

Stanchi, S., G. Falsone, and E. Bonifacio. "Soil aggregation, erodibility, and erosion rates in mountain soils (NW Alps, Italy)." Solid Earth 6, no. 2 (April 20, 2015): 403–14. http://dx.doi.org/10.5194/se-6-403-2015.

Full text
Abstract:
Erosion is a relevant soil degradation factor in mountain agrosilvopastoral ecosystems that can be enhanced by the abandonment of agricultural land and pastures left to natural evolution. The on-site and off-site consequences of soil erosion at the catchment and landscape scale are particularly relevant and may affect settlements at the interface with mountain ecosystems. RUSLE (Revised Universal Soil Loss Equation) estimates of soil erosion consider, among others, the soil erodibility factor (K), which depends on properties involved in structure and aggregation. A relationship between soil erodibility and aggregation should therefore be expected. However, erosion may limit the development of soil structure; hence aggregates should not only be related to erodibility but also partially mirror soil erosion rates. The aim of the research was to evaluate the agreement between aggregate stability and erosion-related variables and to discuss the possible reasons for discrepancies in the two kinds of land use considered (forest and pasture). Topsoil horizons were sampled in a mountain catchment under two vegetation covers (pasture vs. forest) and analyzed for total organic carbon, total extractable carbon, pH, and texture. Soil erodibility was computed, RUSLE erosion rate was estimated, and aggregate stability was determined by wet sieving. Aggregation and RUSLE-related parameters for the two vegetation covers were investigated through statistical tests such as ANOVA, correlation, and regression. Soil erodibility was in agreement with the aggregate stability parameters; i.e., the most erodible soils in terms of K values also displayed weaker aggregation. Despite this general observation, when estimating K from aggregate losses the ANOVA conducted on the regression residuals showed land-use-dependent trends (negative average residuals for forest soils, positive for pastures). Therefore, soil aggregation seemed to mirror the actual topsoil conditions better than soil erodibility. Several hypotheses for this behavior were discussed. A relevant effect of the physical protection of the organic matter by the aggregates that cannot be considered in K computation was finally hypothesized in the case of pastures, while in forests soil erodibility seemed to keep trace of past erosion and depletion of finer particles. A good relationship between RUSLE soil erosion rates and aggregate stability occurred in pastures, while no relationship was visible in forests. Therefore, soil aggregation seemed to capture aspects of actual vulnerability that are not visible through the erodibility estimate. Considering the relevance and extension of agrosilvopastoral ecosystems partly left to natural colonization, further studies on litter and humus protective action might improve the understanding of the relationship among erosion, erodibility, and structure.
APA, Harvard, Vancouver, ISO, and other styles
24

Lawson, John R., Corey K. Potvin, Patrick S. Skinner, and Anthony E. Reinhart. "The Vice and Virtue of Increased Horizontal Resolution in Ensemble Forecasts of Tornadic Thunderstorms in Low-CAPE, High-Shear Environments." Monthly Weather Review 149, no. 4 (April 2021): 921–44. http://dx.doi.org/10.1175/mwr-d-20-0281.1.

Full text
Abstract:
Tornadoes have Lorenzian predictability horizons O(10) min, and convection-allowing ensemble prediction systems (EPSs) often provide probabilistic guidance of such events to forecasters. Given the O(0.1)-km length scale of tornadoes and O(1)-km scale of mesocyclones, operational models running at horizontal grid spacings (Δx) of 3 km may not capture narrower mesocyclones (typical of the southeastern United States) and certainly do not resolve most tornadoes per se. In any case, it requires O(50) times more computer power to reduce Δx by a factor of 3. Herein, to determine value in such an investment, we compare two EPSs, differing only in Δx (3 vs 1 km), for four low-CAPE, high-shear cases. Verification was grouped as 1) deterministic, traditional methods using pointwise evaluation, 2) a scale-aware probabilistic metric, and 3) a novel method via object identification and information theory. Results suggest 1-km forecasts better detect storms and any associated rapid low- and midlevel rotation, but at the cost of weak–moderate reflectivity forecast skill. The nature of improvement was sensitive to the case, variable, forecast lead time, and magnitude, precluding a straightforward aggregation of results. However, the distribution of object-specific information gain over all cases consistently shows greater average benefit from the 1-km EPS. We also reiterate the importance of verification methodology appropriate for the hazard of interest.
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Ting, Jan van Dalen, and Pieter Jan van Rees. "More than just Noise? Examining the Information Content of Stock Microblogs on Financial Markets." Journal of Information Technology 33, no. 1 (March 2018): 50–69. http://dx.doi.org/10.1057/s41265-016-0034-2.

Full text
Abstract:
Scholars and practitioners alike increasingly recognize the importance of stock microblogs as they capture the market discussion and have predictive value for financial markets. This paper examines the extent to which stock microblog messages are related to financial market indicators and the mechanism leading to efficient aggregation of information. In particular, this paper investigates the information content of stock microblogs with respect to individual stocks and explores the effects of social influences on an interday and intraday basis. We collected more than 1.2 million stock-related messages (i.e., tweets) related to S&P 100 companies over a period of 7 months. Using methods from computational linguistics, we went through an elaborate process of message feature reduction, spam detection, language detection, and slang removal, which has led to an increase in classification accuracy for sentiment analysis. We analyzed the data on both a daily and a 15-min basis and found that the sentiment of messages is positively affected with contemporaneous daily abnormal stock returns and that message volume predicts 15-min follow-up returns, trading volume, and volatility. Disagreement in microblog messages positively influences stock features, both in interday and intraday analysis. Notably, if we give a greater share of voice to microblog messages depending on the social influence of microbloggers, this amplifies the relationship between bullishness and abnormal returns, market volume, and volatility. Following knowledgeable investors advice results in more power in explaining changes in market features. This offers an explanation for the efficient aggregation of information on microblogging platforms. Furthermore, we simulated a set of trading strategies using microblog features and the results suggest that it is possible to exploit market inefficiencies even when transaction costs are included. To our knowledge, this is the first study to comprehensively examine the association between the information content of stock microblogs and intraday stock market features. The insights from the study permit scholars and professionals to reliably identify stock microblog features, which may serve as valuable proxies for market sentiment and permit individual investors to make better investment decisions.
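The paper's exact aggregation formula is not reproduced here, but a common log-ratio bullishness measure with optional follower-based weights illustrates how giving influential microbloggers a greater share of voice changes the aggregate; all message data below are hypothetical.

```python
# Illustrative only: a log-ratio bullishness measure over an interval,
# optionally weighting each message by its author's follower count so that
# influential microbloggers receive a greater share of voice.
import math

def bullishness(messages, weight_by_followers=False):
    """messages: list of (sentiment, follower_count), sentiment in {+1, -1}."""
    bull = bear = 0.0
    for sentiment, followers in messages:
        w = math.log1p(followers) if weight_by_followers else 1.0
        if sentiment > 0:
            bull += w
        else:
            bear += w
    return math.log((1.0 + bull) / (1.0 + bear))

msgs = [(+1, 50), (+1, 20000), (-1, 10), (-1, 30)]
print(bullishness(msgs))                             # unweighted: balanced, near zero
print(bullishness(msgs, weight_by_followers=True))   # influence-weighted: more bullish
```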
APA, Harvard, Vancouver, ISO, and other styles
26

Arifin, Muhammad Z., Emil Reppie, and Johnny Budiman. "The analysis of capture fisheries performance in Lembeh Island, Bitung City, North Sulawesi." AQUATIC SCIENCE & MANAGEMENT 5, no. 2 (July 27, 2019): 42. http://dx.doi.org/10.35800/jasm.5.2.2017.24569.

Full text
Abstract:
Title (Bahasa Indonesia): Analisis keragaan perikanan tangkap Pulau Lembeh, Kota Bitung, Sulawesi Utara. According to the 2014 statistics of Bitung City, the fisheries production value of the city increased. This needs to be analyzed in order to establish whether the development provides economic benefit to the society and whether the fishing activities are efficient. The objectives of the study were to study the capture fisheries performance and its impact on responsible fisheries management in Lembeh (as part of Bitung City); to determine the efficiency level of the fishing activities carried out by the fishermen of Lembeh; and to analyze the economic benefit of the capture fisheries to the fishermen. The data were analyzed using descriptive methods. Results found that the dominant fishing gears used by Lembeh fishermen were handlines (multi-hook handline, octopus handline, squid handline, and tuna handline), nets (beach seine and gill net), mini purse seine, and fish aggregating devices (such as light boats). These fishing gears were distributed in all villages of Lembeh. Efficient fishing activities are those with an efficiency value of 1. Of the 111 fishermen respondents in Lembeh, 29 carried out efficient fishing operations. The investment needed for each fishing gear varied among the different types. The mean fishermen's exchange rate of Lembeh Island was 1.29, meaning that fishing activities had a positive impact on the fulfillment of families' daily needs.
APA, Harvard, Vancouver, ISO, and other styles
27

Anderson, Katie Elson. "Ask me anything: what is Reddit?" Library Hi Tech News 32, no. 5 (July 6, 2015): 8–11. http://dx.doi.org/10.1108/lhtn-03-2015-0018.

Full text
Abstract:
Purpose – This article aims to provide an overview of the popular website Reddit. Design/methodology/approach – In many cases, items that are popular on Reddit become viral, appearing on other social media sites and in the news several days later. Findings – The site’s content is supplied by its users, and the popularity of that content is also determined by the membership. While the site can be described simply as an aggregator of user provided content, this simple description does not adequately capture the essence of the Reddit community and the impact of that community and its generated content on society. Originality/value – Librarians should be aware of this impact and the potential that Reddit possesses for connecting with a number of different communities.
APA, Harvard, Vancouver, ISO, and other styles
28

Mullins, Oliver C. "Review of the Molecular Structure and Aggregation of Asphaltenes and Petroleomics." SPE Journal 13, no. 01 (March 1, 2008): 48–57. http://dx.doi.org/10.2118/95801-pa.

Full text
Abstract:
Summary Tremendous strides have been made recently in asphaltene science. Many advanced analytical techniques have been applied recently to asphaltenes, elucidating many asphaltene properties. The inability of certain techniques to provide correct asphaltene parameters has also been clarified. Longstanding controversies have been resolved. For example, molecular structural issues of asphaltenes have been resolved; in particular, asphaltene molecular weight is now known. The primary aggregation threshold has recently been established by a variety of techniques. Characterization of asphaltene interfacial activity has advanced considerably. The hierarchy of asphaltene aggregation has emerged into a fairly comprehensive picture, essentially in accord with the Yen model with the additional inclusion of certain constraints. Crude oil and asphaltene science is now poised to develop proper structure-function relations that are the defining objective of the new field: petroleomics. The purpose of this paper is to review these developments in order to present a more clear and accessible picture of asphaltenes, especially considering that the asphaltene literature is a bit opaque. Introduction The asphaltenes are a very important class of compounds in crude oils (Chilingarian and Yen 1978; Bunger and Li 1981; Sheu and Mullins 1995; Mullins and Sheu 1998; Mullins et al. 2007c). The asphaltenes represent a complex mixture of compounds and are defined by their solubility characteristics, not by a specific chemical classification. A common (laboratory) definition of asphaltenes is that they are toluene soluble, n-heptane insoluble. Other light alkanes are sometimes used to isolate asphaltenes. This solubility classification is very useful for crude oils because it captures the most aromatic portion of crude oil. As we will see, this solubility defintion also captures those molecular components of asphaltene that aggregate. Other carbonaceous materials such as coal do possess an asphaltene fraction, but that often will not correspond to the most aromatic fraction. Petroleum asphaltenes, the subject of this paper, can undergo phase transitions that are an impediment in the production of crude oil. Fig. 1 shows a picture of an asphaltene deposit in a pipeline; obviously, asphaltene deposition is detrimental to the production of oil. Immediately it becomes evident that different operational definitions apply for the term asphaltene in the field vs. the lab. Indeed, the field deposit is very enriched in n-heptane-insoluble, toluene-soluble materials, but this field asphaltene deposit is not identically the standard laboratory solubility class. It is common knowledge that a pressure drop on certain live crude oils (containing dissolved gas) can cause asphaltene flocculation, the first step in creating deposits that are seen in Fig. 1. Highly compressible, very undersaturated crude oils are most susceptible to asphaltene deposition problems with a pressure drop (de Boer et al. 1995). In depressurization flocculation, the character of the asphaltene flocs is dependent on the extent of pressure drop, suggesting some variations in the corresponding chemical composition (Hammami et al. 2000; Joshi et al. 2001). Comingling different oils can result in asphaltene precipitation that can resemble solvent precipitation. Asphaltenes are hydrogen-deficient compared to alkanes; thus, either hydrogen must be added or coke removed in crude oil refining to generate transportation fuels. 
Thus, asphaltene content lowers the economic value of crude oil. Increasing asphaltene content is associated with dramatically increasing viscosity, especially at room temperature; again, this is of operational concern. The strong temperature dependence of viscosity of asphaltic materials is one of their important properties that make them useful for paving and coating; application of asphaltic materials is facile at moderately high temperatures, while desired rheological properties are obtained at ambient temperatures.
APA, Harvard, Vancouver, ISO, and other styles
29

Moloney, Coleen L., Astrid Jarre, Hugo Arancibia, Yves-Marie Bozec, Sergio Neira, Jean-Paul Roux, and Lynne J. Shannon. "Comparing the Benguela and Humboldt marine upwelling ecosystems with indicators derived from inter-calibrated models." ICES Journal of Marine Science 62, no. 3 (January 1, 2005): 493–502. http://dx.doi.org/10.1016/j.icesjms.2004.11.009.

Full text
Abstract:
Abstract Large-scale, mass-balance trophic models have been developed for northern and southern regions of both the Benguela and Humboldt upwelling ecosystems. Four of these Ecopath models were compared and calibrated against one another. A common model structure was established, and a common basis was used to derive poorly known parameter values. The four resulting models represent ecosystems in which the main commercial fish species have been moderately to heavily fished: central-southern Chile (1992), northern-central Peru (1973–1981), South Africa (1980–1989), and Namibia (1995–2000). Quantitative ecosystem indicators derived from these models were compared. Indicators based on large flows (involving low trophic levels) or top predators were not well estimated, because of aggregation problems. Many of the indicators could be contrasted on the basis of differences between the Benguela and Humboldt systems, rather than on the basis of fishing impact. These include integrated values relating to total catches, and trophic levels of key species groups. Indicators based on integrated biomass, total production, and total consumption tended to capture differences between the model for Namibia (where fish populations were severely reduced) and the other models. We conclude that a suite of indicators is required to represent ecosystem state, and that interpretation requires relatively detailed understanding of the different ecosystems.
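
As an aside on the kind of quantitative ecosystem indicators compared above, the sketch below shows one of the simplest: a catch-weighted mean trophic level. All species names and numbers are hypothetical placeholders, not values from the four Ecopath models.

```python
# Catch-weighted mean trophic level: one simple ecosystem indicator of the
# kind compared across the four models. All values below are hypothetical.
catches = {"anchovy": 1200.0, "sardine": 800.0, "hake": 300.0}    # tonnes per year
trophic_levels = {"anchovy": 2.5, "sardine": 2.4, "hake": 4.0}    # assumed trophic levels

total_catch = sum(catches.values())
mean_tl = sum(catches[s] * trophic_levels[s] for s in catches) / total_catch
print(f"total catch: {total_catch:.0f} t, mean trophic level of catch: {mean_tl:.2f}")
```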
APA, Harvard, Vancouver, ISO, and other styles
30

Doud, Carl W., and Thomas W. Phillips. "Responses of Red Flour Beetle Adults, Tribolium castaneum (Coleoptera: Tenebrionidae), and Other Stored Product Beetles to Different Pheromone Trap Designs." Insects 11, no. 11 (October 27, 2020): 733. http://dx.doi.org/10.3390/insects11110733.

Full text
Abstract:
A series of laboratory and field experiments were performed to assess the responses of Tribolium castaneum (Herbst) and other stored-product beetles to pheromone-baited traps and trap components. A commercial Tribolium pitfall trap called the Flit-Trak M2, the predecessor to the Dome trap, was superior in both laboratory and field experiments over the other floor trap designs assessed at capturing walking T. castaneum. In field experiments, Typhaea stercorea (L.) and Ahasverus advena (Stephens) both preferred a sticky trap to the pitfall trap. Although the covered trap is effective at capturing several other species of stored product beetles, the synthetic Tribolium aggregation pheromone lure is critical for the pitfall trap’s efficacy for T. castaneum. Although the food-based trapping oil used in the pitfall trap was not found to be attractive to T. castaneum when assayed alone, it had value as an enhancer of the pheromone bait when the two were used together in the trap. A dust cover modification made to go over the pitfall trap was effective in protecting the trap from dust, although the trap was still vulnerable to dust contamination from sanitation techniques that used compressed air to blow down the mill floors. Capture of T. castaneum in the modified trap performed as well as the standard trap design in a non-dusty area of a flour mill, and was significantly superior over the standard trap in a dusty area. T. castaneum responded in flight outside a flourmill preferentially to multiple funnel traps with pheromone lures compared to traps without pheromone.
APA, Harvard, Vancouver, ISO, and other styles
31

Mukhopadhyay, Anish Kumar, Pinaki Chakrabarti, and Sugata Marjit. "Measuring Gender Discrimination: The Indian Experience and a new Index." Journal of Interdisciplinary Economics 21, no. 1 (May 2009): 53–68. http://dx.doi.org/10.1177/02601079x09002100105.

Full text
Abstract:
The problem of gender disparity exists in India as in many other developing and developed countries. There is a growing concern about the falling female-male ratio (FMR), a very important indicator of this inequality. Overall evaluation of the quality of life in terms of averages puts little weight on the reality of a falling sex ratio. The standard gender development measures capture this inequality inadequately. The literature records a number of contradictory claims and findings on the subject. Strikingly, this falling FMR over time reflects gross deprivation of nearly fifty percent of our population, and, given that these deprivations are rising, the increasing value of the gender development index is highly misleading, possibly concealing a deteriorating quality of life for females. The basic objective of this paper is to construct an aggregative index that is able to resolve this problem.
APA, Harvard, Vancouver, ISO, and other styles
32

Barone, Giovanni Davide, Damir Ferizović, Antonino Biundo, and Peter Lindblad. "Hints at the Applicability of Microalgae and Cyanobacteria for the Biodegradation of Plastics." Sustainability 12, no. 24 (December 14, 2020): 10449. http://dx.doi.org/10.3390/su122410449.

Full text
Abstract:
Massive plastic accumulation has been taking place across diverse landscapes since the 1950s, when large-scale plastic production started. Nowadays, societies struggle with continuously increasing concerns about the subsequent pollution and environmental stresses that have accompanied this plastic revolution. Degradation of used plastics is highly time-consuming and causes volumetric aggregation, mainly due to their high strength and bulky structure. The size of these agglomerations in marine and freshwater basins increases daily. Exposure to weather conditions and environmental microflora (e.g., bacteria and microalgae) can slowly corrode the plastic structure. As has been well documented in recent years, plastic fragments are widespread in marine basins and partially in main global rivers. These are potential sources of negative effects on global food chains. Cyanobacteria (e.g., Synechocystis sp. PCC 6803, and Synechococcus elongatus PCC 7942), which are photosynthetic microorganisms and were previously identified as blue-green algae, are currently under close attention for their abilities to capture solar energy and the greenhouse gas carbon dioxide for the production of high-value products. In the last few decades, these microorganisms have been exploited for different purposes (e.g., biofuels, antioxidants, fertilizers, and ‘superfood’ production). Microalgae (e.g., Chlamydomonas reinhardtii, and Phaeodactylum tricornutum) are also suitable for environmental and biotechnological applications based on the exploitation of solar light. Can photosynthetic bacteria and unicellular eukaryotic algae play a role for further scientific research in the bioremediation of plastics of different sizes present in water surfaces? In recent years, several studies have been targeting the utilization of microorganisms for plastic bioremediation. Among the different phyla, the employment of wild-type or engineered cyanobacteria may represent an interesting, environmentally friendly, and sustainable option.
APA, Harvard, Vancouver, ISO, and other styles
33

Shang, Tiantian, Xiaoming Miao, and Waheed Abdul. "A historical review and bibliometric analysis of disruptive innovation." International Journal of Innovation Science 11, no. 2 (June 3, 2019): 208–26. http://dx.doi.org/10.1108/ijis-05-2018-0056.

Full text
Abstract:
Purpose The purpose of this paper is to demonstrate visually the knowledge structure and evolution of disruptive innovation. The paper used CiteSpace III to analyze 1,570 disruptive innovation records from the Web of Science database between 1997 and 2016. Design/methodology/approach Initially, this paper offers a comprehensive overview of papers, countries, journals, scholars and application areas. Subsequently, a time zone view of high-frequency keywords is presented, emphasizing the course of evolution of the study hotspots. Finally, a visualization map of cited references and co-citation analysis are provided to detect the knowledge base at the forefront of disruptive innovation. Findings The findings are as follows: the number of papers shows exponential growth. The USA has the largest contribution and the strongest center. The Netherlands shows the largest burst, followed by Japan. Journal of Production Innovation Management and Research Policy are the most important journals. Hang CC has the largest number of articles. Walsh ST is identified as a high-yielding scholar. Christensen CM is the most authoritative scholar. Engineering electrical electronic is the most widely used research category, followed by management and business. The evolutionary course of the study hotspots is divided into five stages, namely, start, burst, aggregation, dispersion and not yet formed. Eight key streams in the literature are extracted to summarize the knowledge base at the forefront of disruptive innovation. Originality/value This paper explores the whole picture of disruptive innovation research and demonstrates a visual knowledge structure and the evolution of disruptive innovation. It provides an important reference for scholars to capture the current situation and influential trends in this field.
APA, Harvard, Vancouver, ISO, and other styles
34

Alwan, Zaid, and Barry J. Gledson. "Towards green building performance evaluation using asset information modelling." Built Environment Project and Asset Management 5, no. 3 (July 6, 2015): 290–303. http://dx.doi.org/10.1108/bepam-03-2014-0020.

Full text
Abstract:
Purpose – The purpose of this paper is to provide a unique conceptual framework for integrated asset management strategy that includes making use of available facility assessment methods and tools such as BREEAM In-Use and Leadership in Energy and Environmental Design (LEED), and proposes areas of commonality between these and the use of as-built Building Information Modelling, which ultimately becomes the Asset Information Model (AIM). This framework will consider the emerging requirements for the capture of Building Performance Attribute Data (BPAD), and how these can be managed in order to assist with effective post-construction building performance evaluation. Design/methodology/approach – A review of the current process relevant to the development of as-built BIMs and AIMs was undertaken, which included a discussion of BIM standards and of the COBie process. This was combined with data provided by industry practitioners. This led to the concept of BPADs being developed, to be used within the existing green building tool BREEAM In-Use, COBie, and FM/asset management methods. In turn these methodologies were used to identify possible synergies and areas of integration in AIM-enabled environments. Findings – Recognising the cyclical nature of asset management and BIM, a conceptual model was generated. It was found that BPADs could be aggregated within an AIM model, which could influence the delivery of effective facilities and asset management. The model considers the use of existing Building Management Systems (BMS) and Computer Aided Facility Management Systems (CAFMs) and identifies issues associated with the overall sustainability strategy. Originality/value – A conceptual framework is generated that proposes the use of effective information management and aggregation of BPAD within an AIM.
APA, Harvard, Vancouver, ISO, and other styles
35

Talley, Linda B., Jeffrey Hooper, Brian Jacobs, Cathie Guzzetta, Robert McCarter, Anne Sill, Sherry Cain, and Sally L. Wilson. "Cardiopulmonary Monitors and Clinically Significant Events in Critically Ill Children." Biomedical Instrumentation & Technology 45, s1 (March 1, 2011): 38–45. http://dx.doi.org/10.2345/0899-8205-45.s1.38.

Full text
Abstract:
Abstract Cardiopulmonary monitors (CPMs) generate false alarm rates ranging from 85% to 99%, with few of these alarms actually representing serious clinical events. The overabundance of clinically insignificant alarms in hospitals desensitizes the clinician to true-positive alarms and poses significant safety issues. In this IRB-approved, externally funded study, we sought to assess the clinical conditions associated with true and false-positive CPM alarms and attempted to define optimal alarm parameters that would reduce false-positive alarm rates (as they relate to clinically significant events) and thus improve overall CPM performance in critically ill children. Prior to the study, clinically significant events (CSEs) were defined and validated. Over a seven-month period in 2009, critically ill children underwent evaluation of CSEs while connected to a CPM. Comparative CPM and CSE data were analyzed with an aim to estimate sensitivity, specificity, and positive and negative predictive values for CSEs. CPM and CSE data were evaluated in 98 critically ill children. Overall, 2,245 high-priority alarms were recorded, with 68 CSEs noted in 45 observational days. During the course of the study, the team developed a firm understanding of CPM functionality, including the pitfalls associated with aggregation and analysis of CPM alarm data. The inability to capture all levels of CPM alarms represented a significant study challenge. Selective CPM data can be easily queried with standard reporting; however, the default settings of this reporting exclude critical information necessary in compiling a coherent study denominator database. Although the association between CPM alarms and CSEs could not be comprehensively evaluated, preliminary analysis reflected poor CPM alarm specificity. This study provided the necessary considerations for the proper design of a future study that improves the positive predictive value of CPM alarms. In addition, this investigation has resulted in improved awareness of CPM alarm parameter settings and associated false-positive alarms. This information has been incorporated into nursing educational programs.
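
For readers unfamiliar with the alarm-performance metrics named in this abstract, here is a minimal sketch of how sensitivity, specificity, and the predictive values are derived from a 2x2 table of alarms versus clinically significant events; the counts used are invented, not the study's data.

```python
# Minimal sketch of the alarm-performance metrics described above.
# The counts below are hypothetical placeholders, not data from the study.

def alarm_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    alarms (positive = alarm fired) vs. clinically significant events."""
    return {
        "sensitivity": tp / (tp + fn),   # events that triggered an alarm
        "specificity": tn / (tn + fp),   # non-events with no alarm
        "ppv": tp / (tp + fp),           # alarms that were true events
        "npv": tn / (tn + fn),           # silent periods that were truly event-free
    }

# Example with made-up numbers: many alarms, few true events.
print(alarm_metrics(tp=60, fp=2185, fn=8, tn=5000))
```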
APA, Harvard, Vancouver, ISO, and other styles
36

Durán Bustamante, Mario, Adrian Hernandez del Valle, and Ambrosio Ortiz Ramírez. "The Google trends effect on the behavior of the exchange rate Mexican peso - US dollar." Contaduría y Administración 64, no. 2 (October 23, 2018): 103. http://dx.doi.org/10.22201/fca.24488410e.2018.1710.

Full text
Abstract:
We show the advantage of using Google search engine trends to forecast the volatility of the short-term (weekly) exchange rate between the Mexican peso and United States dollar. We perform a comparison of models in the literature that have used Google Trends to examine explanatory variables. Some of the models are based on time series, whereas others are based on the similarity function, which captures the cognitive form of human reasoning. For example, an investor who needs to know the value that a variable will take in the future will take into account relevant, known, and available information, and weigh it to calculate the forecast. We conclude that taking into account the Google Trends variable helps to partially explain the behaviour of volatility, and it is necessary to incorporate more aggregation levels. Moreover, to the best of our knowledge, literature on the subject of using Google Trends to explain relevant economic variables is relatively scarce.
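
As a rough illustration of the simpler, time-series end of the model families mentioned (not the authors' specification, and not their similarity-based approach), the sketch below regresses weekly volatility on its own lag and a hypothetical Google Trends index using synthetic data.

```python
# Sketch: weekly volatility regressed on its own lag plus a Google-Trends
# search index. The data below are synthetic placeholders; the authors'
# actual models and their similarity-based approach differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                  # weeks
trends = rng.uniform(0, 100, n)          # hypothetical search-interest index
vol = np.empty(n)
vol[0] = 0.02
for t in range(1, n):                    # synthetic volatility with a trends effect
    vol[t] = 0.005 + 0.6 * vol[t - 1] + 0.0001 * trends[t - 1] + rng.normal(0, 0.002)

y = vol[1:]
X = sm.add_constant(np.column_stack([vol[:-1], trends[:-1]]))
fit = sm.OLS(y, X).fit()
print(fit.params)                        # constant, lagged volatility, trends coefficient
```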
APA, Harvard, Vancouver, ISO, and other styles
37

Liao, Ding-An. "Photoelectric Detection System and Machine Learning Recognition Method of Its Detection Images." Journal of Nanoelectronics and Optoelectronics 16, no. 1 (January 1, 2021): 80–88. http://dx.doi.org/10.1166/jno.2021.2907.

Full text
Abstract:
In the photoelectric detection system, the photoelectric detector can convert the optical signal to be measured into a current signal, and the current amplifier transforms the current signal output by the detector into a voltage signal for amplification. In this study, the photo-multiplier tube (PMT) is selected as the photoelectric detector. Compared with other photoelectric detectors, it can obtain higher internal gain, higher sensitivity, and better response performance. The current amplifier is composed of a pre-amplifier and a voltage amplifier. To capture photoelectric signals well, a large-format scanning system is set up, and each component module, control module, and host computer module of the system is designed. In addition, a machine learning-based algorithm, namely a semi-supervised manifold image recognition algorithm, is proposed to identify photoelectric detection images. In the test process, a printed circuit board (PCB) and sapphire are first used as substrates of the current amplifier, and their influences on the circuit are compared. The peak value of the output noise of each substrate circuit is around 2.8 mV when the input of the current amplifier is short-circuited. Then, the signal gain and signal bandwidth of the photoelectric detection system remain stable when there is no optical signal input. During the process of changing the system signal gain ratio, the noise output of the system is the lowest when the voltage of PMT is 0.50 V and the current amplifier gain is set to 2.2 × 10^5 V/A. The proposed recognition algorithm can identify different types of targets well. After the image is projected into a two-dimensional space by the algorithm, the distance between classes increases and the targets within each class aggregate more tightly, thereby enhancing the identifiability of the samples.
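
The paper's semi-supervised manifold algorithm is not spelled out in the abstract, but the general idea of projecting images into a low-dimensional space and checking class separation can be sketched with an off-the-shelf embedding; the code below uses scikit-learn's Isomap and the bundled digits dataset purely as stand-ins for the detection images.

```python
# Sketch of the general idea (project images to 2D, check class separation).
# Uses a generic off-the-shelf embedding, NOT the paper's semi-supervised
# manifold algorithm; the digits data stand in for detection images.
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X2 = Isomap(n_neighbors=10, n_components=2).fit_transform(X)  # 2D projection

Xtr, Xte, ytr, yte = train_test_split(X2, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("accuracy in the 2D embedding:", clf.score(Xte, yte))
```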
APA, Harvard, Vancouver, ISO, and other styles
38

Peng, Zhi Qing, Xiaozhou Xin, Jin Jun Jiao, Ti Zhou, and Qinhuo Liu. "Remote sensing algorithm for surface evapotranspiration considering landscape and statistical effects on mixed pixels." Hydrology and Earth System Sciences 20, no. 11 (November 2, 2016): 4409–38. http://dx.doi.org/10.5194/hess-20-4409-2016.

Full text
Abstract:
Abstract. Evapotranspiration (ET) plays an important role in surface–atmosphere interactions and can be monitored using remote sensing data. However, surface heterogeneity, including the inhomogeneity of landscapes and surface variables, significantly affects the accuracy of ET estimated from satellite data. The objective of this study is to assess and reduce the uncertainties resulting from surface heterogeneity in remotely sensed ET using Chinese HJ-1B satellite data, which is of 30 m spatial resolution in VIS/NIR bands and 300 m spatial resolution in the thermal-infrared (TIR) band. A temperature-sharpening and flux aggregation scheme (TSFA) was developed to obtain accurate heat fluxes from the HJ-1B satellite data. The IPUS (input parameter upscaling) and TRFA (temperature resampling and flux aggregation) methods were used to compare with the TSFA in this study. The three methods represent three typical schemes used to handle mixed pixels from the simplest to the most complex. IPUS handles all surface variables at coarse resolution of 300 m in this study, TSFA handles them at 30 m resolution, and TRFA handles them at 30 and 300 m resolution, which depends on the actual spatial resolution. Analyzing and comparing the three methods can help us to get a better understanding of spatial-scale errors in remote sensing of surface heat fluxes. In situ data collected during HiWATER-MUSOEXE (Multi-Scale Observation Experiment on Evapotranspiration over heterogeneous land surfaces of the Heihe Watershed Allied Telemetry Experimental Research) were used to validate and analyze the methods. ET estimated by TSFA exhibited the best agreement with in situ observations, and the footprint validation results showed that the R2, MBE, and RMSE values of the sensible heat flux (H) were 0.61, 0.90, and 50.99 W m−2, respectively, and those for the latent heat flux (LE) were 0.82, −20.54, and 71.24 W m−2, respectively. IPUS yielded the largest errors in ET estimation. The RMSE of LE between the TSFA and IPUS methods was 51.30 W m−2, and the RMSE of LE between the TSFA and TRFA methods was 16.48 W m−2. Furthermore, additional analysis showed that the TSFA method can capture the subpixel variations of land surface temperature and the influences of various landscapes within mixed pixels.
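
To make the scale mismatch concrete, the following sketch shows only the flux-aggregation step that distinguishes the schemes discussed above: fluxes estimated on a 30 m grid are block-averaged into 300 m pixels. The array is random placeholder data, and the full TSFA scheme additionally sharpens the 300 m thermal band, which is not shown here.

```python
# Simplified illustration of the aggregation step only: fluxes computed on a
# 30 m grid are block-averaged into 300 m pixels (10 x 10 fine cells per
# coarse cell). The actual TSFA scheme also sharpens the 300 m TIR band.
import numpy as np

fine_flux = np.random.default_rng(1).uniform(50, 450, size=(300, 300))  # W m-2 on a 30 m grid (placeholder)
factor = 10                                                              # 300 m / 30 m

coarse_flux = fine_flux.reshape(
    fine_flux.shape[0] // factor, factor,
    fine_flux.shape[1] // factor, factor,
).mean(axis=(1, 3))                                                      # 300 m pixel means

print(fine_flux.shape, "->", coarse_flux.shape)                          # (300, 300) -> (30, 30)
```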
APA, Harvard, Vancouver, ISO, and other styles
39

Gupta, Rakesh, Kirthivasan Sathya Narayan, Sowmya Sharma, and Roshni Sunny. "WudStay and the Houseboat Sector in India." Asian Case Research Journal 23, no. 01 (June 2019): 91–117. http://dx.doi.org/10.1142/s0218927519500044.

Full text
Abstract:
This case deals with the challenges and dilemma faced by WudStay, a start-up engaged in the aggregation of different accommodation options in India. The company wants to evaluate the attractiveness and the challenges facing the houseboat sector and decide whether to enter this sector. The case illustrates the present state of the houseboat sector and captures the direction in which it is heading. Since its inception, the sector has remained unorganized, resulting in unsatisfactory customer service standards. The short-sighted approach of houseboat owners to maximizing profits, and their refusal to adopt technological solutions, has cast doubt on the long-term health of this sector, with many tourists looking for better alternatives. The sector is concentrated in two states in India: Jammu and Kashmir (J&K) and Kerala. The differences in the external macro environment have taken the sector in these two states in completely different directions. In Kashmir, the deteriorating security situation has a negative impact on tourist inflows, thus reducing the demand for houseboats. In Kerala, the sector is on an upswing, with the government playing a catalytic role in promoting tourism and incentivizing houseboats through subsidies. Against this background, WudStay wants to evaluate the sector's long-term attractiveness and the opportunity to create a sustainable value proposition in it.
APA, Harvard, Vancouver, ISO, and other styles
40

Milleret, Cyril, Pierre Dupont, Henrik Brøseth, Jonas Kindberg, J. Andrew Royle, and Richard Bischof. "Using partial aggregation in spatial capture recapture." Methods in Ecology and Evolution 9, no. 8 (June 13, 2018): 1896–907. http://dx.doi.org/10.1111/2041-210x.13030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Agafonow, Alejandro. "Value Creation, Value Capture, and Value Devolution." Administration & Society 47, no. 8 (November 24, 2014): 1038–60. http://dx.doi.org/10.1177/0095399714555756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Risse, Verena. "Welfare as political morality: right-based, duty-based or goal-based?" International Journal of Social Economics 42, no. 5 (May 11, 2015): 424–33. http://dx.doi.org/10.1108/ijse-02-2014-0034.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate new welfare indicators, which no longer rely solely on the gross domestic product but provide a more holistic understanding of welfare encompassing aspects such as health status, social inclusion or environmental quality. So far, however, it remains questionable to what degree these new indicators can serve as an actual political morality. Design/methodology/approach – To assess this question, this paper proposes to turn to the distinction between right-based, duty-based and goal-based approaches. Assessing welfare in these terms not only suggests itself because of the consequentialist connotations of those alternative formulations that call for happiness or well-being, but also because the distinction allows them to be considered in view of some of the central social goods and concerns. Findings – The analysis shows mixed results. It first shows that welfare as political morality is best captured in terms of goals. Still, whatever new indicator one chooses, it must not be conceived as a mere aggregation of particular interests, nor should individuals be sacrificed for the sake of an overall good. This makes it important that subjective rights that function as a counterweight are strengthened. Originality/value – The assessment of the new welfare indicators in these terms has not been undertaken so far, although they fit the purpose ideally. Thus, the originality of the findings follows from the originality of the method, so that the analysis provides neat categories and conclusions.
APA, Harvard, Vancouver, ISO, and other styles
43

Trambauer, P., S. Maskey, M. Werner, F. Pappenberger, L. P. H. van Beek, and S. Uhlenbrook. "Identification and simulation of space-time variability of past hydrological drought events in the Limpopo river basin, Southern Africa." Hydrology and Earth System Sciences Discussions 11, no. 3 (March 6, 2014): 2639–77. http://dx.doi.org/10.5194/hessd-11-2639-2014.

Full text
Abstract:
Abstract. Droughts are widespread natural hazards and in many regions their frequency seems to be increasing. A finer resolution version (0.05° x 0.05°) of the continental scale hydrological model PCR-GLOBWB was set up for the Limpopo river basin, one of the most water stressed basins on the African continent. An irrigation module was included to account for large irrigated areas of the basin. The finer resolution model was used to analyse droughts in the Limpopo river basin in the period 1979–2010 with a view to identifying severe droughts that have occurred in the basin. Evaporation, soil moisture, groundwater storage and runoff estimates from the model were derived at a spatial resolution of 0.05° (approximately 5 km) on a daily time scale for the entire basin. PCR-GLOBWB was forced with daily precipitation, temperature and other meteorological variables obtained from the ERA-Interim global atmospheric reanalysis product from the European Centre for Medium-Range Weather Forecasts. Two agricultural drought indicators were computed: the Evapotranspiration Deficit Index (ETDI) and the Root Stress Anomaly Index (RSAI). Hydrological drought was characterised using the Standardized Runoff Index (SRI) and the Groundwater Resource Index (GRI), which make use of the streamflow and groundwater storage resulting from the model. Other more widely used drought indicators, such as the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evaporation Index (SPEI) were also computed for different aggregation periods. Results show that a carefully set up process-based model that makes use of the best available input data can successfully identify hydrological droughts even if the model is largely uncalibrated. The indicators considered are able to represent the most severe droughts in the basin and to some extent identify the spatial variability of droughts. Moreover, results show the importance of computing indicators that can be related to hydrological droughts, and how these add value to the identification of droughts/floods and the temporal evolution of events that would otherwise not have been apparent when considering only meteorological indicators. In some cases, meteorological indicators alone fail to capture the severity of the drought. Therefore, a combination of some of these indicators (e.g. SPEI-3, SRI-6, SPI-12) is found to be a useful measure for identifying hydrological droughts in the Limpopo river basin. Additionally, it is possible to make a characterisation of the drought severity, indicated by its duration and intensity.
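
As a hint at how the aggregation periods behind indicators such as SRI-6 or SPI-12 work, the sketch below computes a simple standardized index from a synthetic monthly runoff series. The operational SPI fits a distribution (typically gamma) per calendar month before standardizing; this placeholder uses a plain z-score instead.

```python
# Sketch of a standardized drought index over an aggregation period
# (SRI-6 / SPI-12 style). Operational SPI fits a distribution per calendar
# month; here a plain z-score is used, and the runoff series is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("1979-01", "2010-12", freq="MS")
runoff = pd.Series(rng.gamma(shape=2.0, scale=30.0, size=len(idx)), index=idx)

def standardized_index(series: pd.Series, months: int) -> pd.Series:
    agg = series.rolling(months).sum()                 # aggregation period
    grouped = agg.groupby(agg.index.month)             # standardize per calendar month
    return (agg - grouped.transform("mean")) / grouped.transform("std")

sri6 = standardized_index(runoff, 6)
print(sri6.tail())                                     # negative values indicate drought
```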
APA, Harvard, Vancouver, ISO, and other styles
44

Afuah, Allan, and Christopher L. Tucci. "Value Capture and Crowdsourcing." Academy of Management Review 38, no. 3 (July 2013): 457–60. http://dx.doi.org/10.5465/amr.2012.0423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Anderson, B., and S. Yantis. "Value-Driven Oculomotor Capture." Journal of Vision 12, no. 9 (August 10, 2012): 372. http://dx.doi.org/10.1167/12.9.372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Anderson, B. A., P. A. Laurent, and S. Yantis. "Value-driven attentional capture." Proceedings of the National Academy of Sciences 108, no. 25 (June 6, 2011): 10367–71. http://dx.doi.org/10.1073/pnas.1104047108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

An, Sang Bu, Kwangmo Yang, Chang Won Kim, Si Ho Choi, Eunji Kim, Sung Dae Kim, and Jae Soo Koh. "Longitudinal Imaging of Liver Cancer Using MicroCT and Nanoparticle Contrast Agents in CRISPR/Cas9-Induced Liver Cancer Mouse Model." Technology in Cancer Research & Treatment 20 (January 1, 2021): 153303382110164. http://dx.doi.org/10.1177/15330338211016466.

Full text
Abstract:
Introduction: Micro-computed tomography with nanoparticle contrast agents may be a suitable tool for monitoring the time course of the development and progression of tumors. Here, we suggest a practical and convenient experimental method for generating and longitudinally imaging murine liver cancer models. Methods: Liver cancer was induced in 6 experimental mice by injecting clustered regularly interspaced short palindromic repeats/clustered regularly interspaced short palindromic repeats-associated protein 9 (CRISPR/Cas9) plasmids causing mutations in genes expressed by hepatocytes. Nanoparticle agents are captured by Kupffer cells and detected by micro-computed tomography, thereby enabling longitudinal imaging. A total of 9 mice were used for the experiment. Six mice were injected with both plasmids and contrast, two with contrast alone, and one with neither agent. Micro-computed tomography images were acquired every 2 weeks, up to 14 weeks after cancer induction. Results: Liver cancer was first detected by micro-computed tomography at 8 weeks. The mean value of hepatic parenchymal attenuation remained almost unchanged over time, although the standard deviation of attenuation, reflecting heterogeneous contrast enhancement of the hepatic parenchyma, increased slowly over time in all mice. Histopathologically, heterogeneous distribution and aggregation of Kupffer cells were more prominent in the experimental group than in the control group. Heterogeneous enhancement of hepatic parenchyma, which could cause image quality deterioration and image misinterpretation, was observed and could be due to variation in Kupffer cell distribution. Conclusion: Micro-computed tomography with nanoparticle contrast is useful in evaluating the induction and characteristics of liver cancer, determining the appropriate size of liver cancer for testing, and confirming therapeutic response.
APA, Harvard, Vancouver, ISO, and other styles
48

Henkel, Joachim, and Alexander Hoffmann. "Value capture in hierarchically organized value chains." Journal of Economics & Management Strategy 28, no. 2 (August 2018): 260–79. http://dx.doi.org/10.1111/jems.12278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hoffmann, Alexander, and Joachim Henkel. "Value Capture In Hierarchically Organized Value Chains." Academy of Management Proceedings 2015, no. 1 (January 2015): 14194. http://dx.doi.org/10.5465/ambpp.2015.14194abstract.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chatain, Olivier, and Peter Zemsky. "Value creation and value capture with frictions." Strategic Management Journal 32, no. 11 (June 1, 2011): 1206–31. http://dx.doi.org/10.1002/smj.939.

Full text
APA, Harvard, Vancouver, ISO, and other styles