Academic literature on the topic 'Primary-secondary domain approach'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Primary-secondary domain approach.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Primary-secondary domain approach"

1

Holbrook, Jack. "A Context-Based Approach to Science Teaching." Journal of Baltic Science Education 13, no. 2 (April 25, 2014): 152–54. http://dx.doi.org/10.33225/jbse/14.13.152.

Full text
Abstract:
It has been traditional to educate students in school, especially secondary schools, through subject domains and within lessons named according to the domain. Today in most countries, science lessons are offered in the curriculum, specified as science, or one of its sub-components e.g. biology, chemistry, physics, or perhaps a combination of these e.g. physical science. It does not have to be this way, of course, as can be amplified by the concept of an integrated day, implemented at the primary level in a number of countries.
APA, Harvard, Vancouver, ISO, and other styles
2

Ayadi, Mariem, Rayda Ben Ayed, Rim Mzid, Sami Aifa, and Mohsen Hanana. "Computational Approach for Structural Feature Determination of Grapevine NHX Antiporters." BioMed Research International 2019 (January 9, 2019): 1–13. http://dx.doi.org/10.1155/2019/1031839.

Full text
Abstract:
Plant NHX antiporters are responsible for monovalent cation/H+ exchange across cellular membranes and therefore play a critical role in cellular pH regulation, Na+ and K+ homeostasis, and salt tolerance. Six members of the grapevine NHX family (VvNHX1-6) have been structurally characterized. Phylogenetic analysis revealed their organization in two groups: VvNHX1-5 belonging to group I (vacuolar) and VvNHX6 belonging to group II (endosomal). Conserved domain analysis of these VvNHXs indicates the presence of different kinds of domains. Out of these, two domains function as monovalent cation-proton antiporters and one as an aspartate-alanine exchanger; the remaining ones do not yet have a defined function. Overall, VvNHX proteins are typically made of 11-13 putative transmembrane (TM) regions at their N-terminus, which contain the consensus amiloride-binding domain in the 3rd TM domain and a cation-binding site between the 5th and 6th TM domains, followed by a hydrophilic C-terminus that is the target of several diverse regulatory posttranslational modifications. Using a combination of primary structure analysis, secondary structure alignments, and tertiary structural models, the VvNHXs revealed mainly 18 α-helices and no β-sheets. Homology modeling of the 3D structure showed that VvNHX antiporters are similar to the bacterial sodium-proton antiporters MjNhaP1 (Methanocaldococcus jannaschii) and PaNhaP (Pyrococcus abyssi).
3

Qiao, Yanru, and Yang Xiao. "A Spatial Coding Approach for MIMO Cognitive Radio Networks’ Bandwidth Sharing." WSEAS TRANSACTIONS ON SIGNAL PROCESSING 17 (December 31, 2021): 123–26. http://dx.doi.org/10.37394/232014.2021.17.17.

Full text
Abstract:
The existing cognitive network cannot work together with the licensed (primary) users' network in the same frequency-time domain, so secondary users (SUs) of the cognitive network can only wait for the frequency band occupied by primary users (PUs) to become free. To solve this problem, this paper proposes a spatial coding approach for a MIMO cognitive network, in which a MIMO base station with six antennas provides three different spatial codes for three users (one PU and two SUs), so that the SUs can share the bandwidth of the PU. The design of the spatial codes' encoding and decoding vectors is provided. Simulation results verify the proposed approach.
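The band-sharing idea in this abstract can be illustrated with a toy example (hypothetical code vectors and symbols, not the paper's actual design): each of the three users is assigned a spatial code orthogonal to the others, so one PU and two SUs occupy the same frequency-time resource and each receiver recovers its own symbol by projection.

```python
# Illustrative sketch only: three mutually orthogonal spatial codes let one
# primary user and two secondary users share the same band, because each
# receiver can project the superposed signal onto its own code.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical orthogonal codes for a six-antenna base station:
# here each user is simply assigned a disjoint pair of antennas.
codes = [
    [1, 1, 0, 0, 0, 0],  # primary user (PU)
    [0, 0, 1, 1, 0, 0],  # secondary user 1 (SU1)
    [0, 0, 0, 0, 1, 1],  # secondary user 2 (SU2)
]

symbols = [3.0, -2.0, 5.0]  # one symbol per user

# Superpose all users onto the shared band.
tx = [sum(s * c[i] for s, c in zip(symbols, codes)) for i in range(6)]

# Each receiver recovers its symbol by projecting onto its own code.
recovered = [dot(tx, c) / dot(c, c) for c in codes]
print(recovered)  # [3.0, -2.0, 5.0]
```

Because the codes are orthogonal, the projections separate the users exactly; the paper's actual design additionally handles channel effects, which this sketch omits.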
4

Ramello, Maria C., Ismahène Benzaïd, Brent M. Kuenzi, Maritza Lienlaf-Moreno, Wendy M. Kandell, Daniel N. Santiago, Mibel Pabón-Saldaña, et al. "An immunoproteomic approach to characterize the CAR interactome and signalosome." Science Signaling 12, no. 568 (February 12, 2019): eaap9777. http://dx.doi.org/10.1126/scisignal.aap9777.

Full text
Abstract:
Adoptive transfer of T cells that express a chimeric antigen receptor (CAR) is an approved immunotherapy that may be curative for some hematological cancers. To better understand the therapeutic mechanism of action, we systematically analyzed CAR signaling in human primary T cells by mass spectrometry. When we compared the interactomes and the signaling pathways activated by distinct CAR-T cells that shared the same antigen-binding domain but differed in their intracellular domains and their in vivo antitumor efficacy, we found that only second-generation CARs induced the expression of a constitutively phosphorylated form of CD3ζ that resembled the endogenous species. This phenomenon was independent of the choice of costimulatory domains, or the hinge/transmembrane region. Rather, it was dependent on the size of the intracellular domains. Moreover, the second-generation design was also associated with stronger phosphorylation of downstream secondary messengers, as evidenced by global phosphoproteome analysis. These results suggest that second-generation CARs can activate additional sources of CD3ζ signaling, and this may contribute to more intense signaling and superior antitumor efficacy that they display compared to third-generation CARs. Moreover, our results provide a deeper understanding of how CARs interact physically and/or functionally with endogenous T cell molecules, which will inform the development of novel optimized immune receptors.
5

Tran, Van C., Corina Graif, Alison D. Jones, Mario L. Small, and Christopher Winship. "Participation in Context: Neighborhood Diversity and Organizational Involvement in Boston." City & Community 12, no. 3 (September 2013): 187–210. http://dx.doi.org/10.1111/cico.12028.

Full text
Abstract:
We use unique data from the Boston Non-Profit Organizations Study, an innovative survey containing rich information on organizational participation across seven social domains in two Boston neighborhoods, to examine the relationship between ethnic diversity and participation in local organizations. In particular, we identify neighborhood-based social ties as a key mechanism mediating the initial negative association between diversity and participation. In contrast to previous work, we measure participation using both the domain-based and group-based approach, with the former approach uncovering a wider range of organizational connections that are often missed in the latter approach. We also investigate the relationship between interpersonal ties and organizational ties, documenting how primary involvement with an organization facilitates the development of further interpersonal ties and secondary forms of organizational involvement. We then discuss implications of our findings for urban poverty research.
6

Mao, Deqiang, and André Revil. "Induced polarization response of porous media with metallic particles — Part 3: A new approach to time-domain induced polarization tomography." GEOPHYSICS 81, no. 4 (July 2016): D345–D357. http://dx.doi.org/10.1190/geo2015-0283.1.

Full text
Abstract:
The secondary voltage associated with time-domain induced polarization data of disseminated metallic particles (such as pyrite and magnetite) in a porous material can be treated as a transient self-potential problem. This self-potential field is associated with the generation of a secondary-source current density. This source current density is proportional to the gradient of the chemical potentials of the [Formula: see text]- and [Formula: see text]-charge carriers in the metallic particles or ionic charge carriers in the pore water including in the electrical double layer coating the surface of the metallic grains. This new way to treat the secondary voltages offers two advantages with respect to the classical approach. The first is a gain in terms of acquisition time. Indeed, the target can be illuminated with a few primary current sources, all the other electrodes being used simultaneously to record the secondary voltage distribution. The second advantage is with respect to the inversion of the obtained data. Indeed, the secondary (source) current is linearly related to the secondary voltage. Therefore, the inverse problem of inverting the secondary voltages is linear with respect to the source current density, and the inversion can be done in a single iteration. Several iterations are, however, required to compact the source current density distribution, still obtaining a tomogram much faster than inverting the apparent chargeability data using the classical Gauss-Newton approach. We have performed a sandbox experiment with pyrite grains locally mixed to sand at a specific location in the sandbox to demonstrate these new concepts. A method initially developed for self-potential tomography is applied to the inversion of the secondary voltages in terms of source current distribution. The final result compares favorably with the classical inversion of the time-domain induced polarization data in terms of chargeability, but it is much faster to perform.
7

Hu, Chien-An, Chia-Ming Chen, Yen-Chun Fang, Shinn-Jye Liang, Hao-Chien Wang, Wen-Feng Fang, Chau-Chyun Sheu, et al. "Using a machine learning approach to predict mortality in critically ill influenza patients: a cross-sectional retrospective multicentre study in Taiwan." BMJ Open 10, no. 2 (February 2020): e033898. http://dx.doi.org/10.1136/bmjopen-2019-033898.

Full text
Abstract:
Objectives: Current mortality prediction models used in the intensive care unit (ICU) have a limited role for specific diseases such as influenza, and we aimed to establish an explainable machine learning (ML) model for predicting mortality in critically ill influenza patients using a real-world severe influenza data set.
Study design: A cross-sectional retrospective multicentre study in Taiwan.
Setting: Eight medical centres in Taiwan.
Participants: A total of 336 patients requiring ICU admission for virology-proven influenza at eight hospitals during an influenza epidemic between October 2015 and March 2016.
Primary and secondary outcome measures: We employed extreme gradient boosting (XGBoost) to establish the prediction model, compared the performance with logistic regression (LR) and random forest (RF), demonstrated the feature importance categorised by clinical domains, and used SHapley Additive exPlanations (SHAP) for visualised interpretation.
Results: The data set contained 76 features of the 336 patients with severe influenza. The severity was apparently high, as shown by the high Acute Physiology and Chronic Health Evaluation II score (22, 17 to 29) and pneumonia severity index score (118, 88 to 151). The XGBoost model (area under the curve (AUC): 0.842; 95% CI 0.749 to 0.928) outperformed RF (AUC: 0.809; 95% CI 0.629 to 0.891) and LR (AUC: 0.701; 95% CI 0.573 to 0.825) for predicting 30-day mortality. To give clinicians an intuitive understanding of feature exploitation, we stratified features by clinical domain. The cumulative feature importance in the fluid balance, ventilation, laboratory data, demographic and symptom, management, and severity score domains was 0.253, 0.113, 0.177, 0.140, 0.152, and 0.165, respectively. We further used SHAP plots to illustrate associations between features and 30-day mortality in critically ill influenza patients.
Conclusions: We used a real-world data set and applied an ML approach, mainly XGBoost, to establish a practical and explainable mortality prediction model in critically ill influenza patients.
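For readers unfamiliar with the AUC figures quoted in this abstract, a minimal sketch (toy labels and scores, not the study's data) computes the AUC via its Mann-Whitney formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one.

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the fraction of (positive,
    negative) pairs where the positive is scored higher; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # toy data: 1 = 30-day mortality
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]  # hypothetical model risk scores
print(roc_auc(labels, scores))  # 0.8888888888888888
```

An AUC of 1.0 means perfect ranking of positives above negatives; 0.5 is chance level, which is why the study reports 0.842 for XGBoost as a meaningful improvement over 0.701 for LR.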
8

Ghosh, Bikramaditya, and Emira Kozarević. "Identifying explosive behavioral trace in the CNX Nifty Index: a quantum finance approach." Investment Management and Financial Innovations 15, no. 1 (March 3, 2018): 208–23. http://dx.doi.org/10.21511/imfi.15(1).2018.18.

Full text
Abstract:
The financial markets are found to be a finite Hilbert space, inside which the stocks display their wave-particle duality. The Reynolds number, an age-old fluid mechanics concept, has been redefined in the investment finance domain to identify possible explosive moments in the stock exchange. The CNX Nifty Index, a well-known index on the National Stock Exchange of India Ltd., has been put to the test under this framework. The Reynolds number (its financial version) has been predicted, as well as connected with plausible behavioral rationale. While predicting, both econometric and machine-learning approaches have been put into use. The primary objective of this paper is to set up an efficient econophysics proxy for stock exchange explosion. The secondary objective of the paper is to predict the Reynolds number for the future. Last but not least, this paper aims to trace back the behavioral links as well.
9

Alvarado, Carlos, Erik Stahl, Karissa Koessel, Andrew Rivera, Brian R. Cherry, Surya V. S. R. K. Pulavarti, Thomas Szyperski, William Cance, and Timothy Marlowe. "Development of a Fragment-Based Screening Assay for the Focal Adhesion Targeting Domain Using SPR and NMR." Molecules 24, no. 18 (September 14, 2019): 3352. http://dx.doi.org/10.3390/molecules24183352.

Full text
Abstract:
The Focal Adhesion Targeting (FAT) domain of Focal Adhesion Kinase (FAK) is a promising drug target since FAK is overexpressed in many malignancies and promotes cancer cell metastasis. The FAT domain serves as a scaffolding protein, and its interaction with the protein paxillin localizes FAK to focal adhesions. Various studies have highlighted the importance of FAT-paxillin binding in tumor growth, cell invasion, and metastasis. Targeting this interaction through high-throughput screening (HTS) provides a challenge due to the large and complex binding interface. In this report, we describe a novel approach to targeting FAT through fragment-based drug discovery (FBDD). We developed two fragment-based screening assays—a primary SPR assay and a secondary heteronuclear single quantum coherence nuclear magnetic resonance (HSQC-NMR) assay. For SPR, we designed an AviTag construct, optimized SPR buffer conditions, and created mutant controls. For NMR, resonance backbone assignments of the human FAT domain were obtained for the HSQC assay. A 189-compound fragment library from Enamine was screened through our primary SPR assay to demonstrate the feasibility of a FAT-FBDD pipeline, with 19 initial hit compounds. A final total of 11 validated hits were identified after secondary screening on NMR. This screening pipeline is the first FBDD screen of the FAT domain reported and represents a valid method for further drug discovery efforts on this difficult target.
10

Huang, Xin, Changchun Yin, Colin G. Farquharson, Xiaoyue Cao, Bo Zhang, Wei Huang, and Jing Cai. "Spectral-element method with arbitrary hexahedron meshes for time-domain 3D airborne electromagnetic forward modeling." GEOPHYSICS 84, no. 1 (January 1, 2019): E37–E46. http://dx.doi.org/10.1190/geo2018-0231.1.

Full text
Abstract:
Mainstream numerical methods for 3D time-domain airborne electromagnetic (AEM) modeling, such as the finite-difference (FDTD) or finite-element (FETD) methods, are quite mature. However, these methods have limitations in terms of their ability to handle complex geologic structures and their dependence on quality meshing of the earth model. We have developed a time-domain spectral-element (SETD) method based on the mixed-order spectral-element (SE) approach for space discretization and the backward Euler (BE) approach for time discretization. The mixed-order SE approach can contribute an accurate result by increasing the order of polynomials and suppress spurious solutions. The BE method is an unconditionally stable technique without limitations on time steps. To deal with the rapid variation of the fields close to the AEM transmitting loop, we separate a secondary field from the primary field and simulate the secondary field only, for which the primary field is calculated in advance. To obtain a block diagonal mass matrix and hence minimize the number of nonzero elements in the system of equations to be solved, we apply Gauss-Lobatto-Legendre integral techniques of reduced order. A direct solver is then adopted for the system of equations, which allows for efficient treatment of the multiple AEM sources. To check the accuracy of our SETD algorithm, we compare our results with the semianalytical solution for a layered earth model. Then, we analyze the modeling accuracy and efficiency for different 3D models using deformed physical meshes and compare them against results from 3D FETD codes, to further show the flexibility of SETD for AEM forward modeling.
More sources

Dissertations / Theses on the topic "Primary-secondary domain approach"

1

Su, G. H. "A new development in domain decomposition techniques for analysis of plates with mixed edge supports." Thesis, University of Western Sydney, Nepean, School of Civic Engineering and Environment, 2000. http://handle.uws.edu.au:8081/1959.7/277.

Full text
Abstract:
The importance of plates with discontinuities in boundary supports in aeronautical and marine structures has led to various techniques for solving plate problems with mixed edge support conditions. The domain decomposition method is one of the most effective of these techniques, providing accurate numerical solutions. This method is used to investigate the vibration and buckling of flat, isotropic, thin, and elastic plates with mixed edge support conditions. Two practical approaches have been developed as extensions of the domain decomposition method, namely, the primary-secondary domain (PSD) approach and the line-domains (LD) approach. The PSD approach decomposes a plate into one primary domain and one or two secondary domains. The LD approach considers interconnecting boundaries as dominant domains whose basic functions take a higher edge restraint from the neighbouring edges. Convergence and comparison studies are carried out on a number of selected rectangular plate cases. Extensive practical plate problems with various shapes, combinations of mixed boundary conditions, and different in-plane loading conditions have been solved by the PSD and LD approaches.
Master of Engineering (Hons)
2

Su, Guo. "A new development in domain decomposition techniques for analysis of plates with mixed edge supports." Thesis, 2000. http://handle.uws.edu.au:8081/1959.7/277.

Full text
Abstract:
The importance of plates with discontinuities in boundary supports in aeronautical and marine structures has led to various techniques for solving plate problems with mixed edge support conditions. The domain decomposition method is one of the most effective of these techniques, providing accurate numerical solutions. This method is used to investigate the vibration and buckling of flat, isotropic, thin, and elastic plates with mixed edge support conditions. Two practical approaches have been developed as extensions of the domain decomposition method, namely, the primary-secondary domain (PSD) approach and the line-domains (LD) approach. The PSD approach decomposes a plate into one primary domain and one or two secondary domains. The LD approach considers interconnecting boundaries as dominant domains whose basic functions take a higher edge restraint from the neighbouring edges. Convergence and comparison studies are carried out on a number of selected rectangular plate cases. Extensive practical plate problems with various shapes, combinations of mixed boundary conditions, and different in-plane loading conditions have been solved by the PSD and LD approaches.

Books on the topic "Primary-secondary domain approach"

1

Kaasa, Stein, and Jon Håvard Loge. Quality of life in palliative care: principles and practice. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199656097.003.0197.

Full text
Abstract:
To improve or sustain patients' health-related quality of life (HRQOL) is the main goal of palliative care. In health care, HRQOL encompasses a range of components that are measurable and related to health, disease, illness, and medical interventions. Another term, patient-reported outcome (PRO), is used and understood as any measure that collects responses directly from the patients, covering any aspect of patients' health status that is reported by the patients without interpretation by health-care providers or family members. The selection of PRO instruments (questionnaires) is recommended to follow a sequential approach: define the overall aim(s), define the research question(s), agree upon the key outcome(s), and select the appropriate set of questions/questionnaires guided by the primary and secondary outcomes. In general, it is recommended to use an HRQOL measure of generic or disease-specific character and supplement it with domain-specific measure(s) (such as measurement of fatigue, pain, anxiety, depression, etc.) reflecting the purpose(s) of the data collection.
2

Tran, Thanh, Tam Nguyen, and Keith Chan. Developing Cross-Cultural Measurement in Social Work Research and Evaluation. Oxford University Press, 2018. http://dx.doi.org/10.1093/acprof:oso/9780190496470.001.0001.

Full text
Abstract:
Given the demographic changes and the reality of cultural diversity in the United States and other parts of the world today, social work researchers are increasingly aware of the need to conduct cross-cultural research and evaluation, whether for hypothesis testing or for outcome evaluation. This book's aims are twofold: to provide an overview of issues and techniques relevant to the development of cross-cultural measures and to provide readers with a step-by-step approach to the assessment of cross-cultural equivalence of measurement properties. There is no discussion of statistical theory and principles underlying the statistical techniques presented in this book. Rather, this book is concerned with applied theories and principles of cross-cultural research, and draws information from existing work in the social sciences, public domain secondary data, and primary data from the author's research. In this second edition, several changes have been made throughout the book and a new chapter on item response theory has been added. The chapter on developing new cross-cultural instruments has also been expanded with a concrete example.
3

Murano, Dana M., and Richard D. Roberts. Traversing the Gap Between College and Workforce Readiness: Anything But a “Bridge Too Far”! Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199373222.003.0014.

Full text
Abstract:
This chapter reviews Chapters 11–13. Each chapter offers possible solutions for bridging the apparent gap between college and workforce readiness while inherently highlighting ways in which these two readiness domains are analogous. Across the chapters, an integrative framework for studying noncognitive skills across putative domains remains elusive, although it is possible. The authors also discuss various approaches to the measurement of noncognitive skills and both practical and policy implications. This chapter focuses on next steps that can be taken in an effort to resolve issues surrounding measurement and the organizational framework. It also advocates for social–emotional learning programs at the primary, secondary, and tertiary education levels to foster these skills. Juxtaposed, these chapters elucidate the current state of college and workforce readiness, potential pathways through which measurement of necessary skills can be improved, and a compelling means by which to bridge the gap between college and workforce readiness.

Book chapters on the topic "Primary-secondary domain approach"

1

Guha, Sutirtha Kumar, Anirban Kundu, and Rana Dattagupta. "Domain-Based Dynamic Ranking." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 262–79. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8676-2.ch017.

Full text
Abstract:
In this chapter, a domain-based ranking methodology is proposed for the cloud environment. Web pages from the cloud are clustered into a 'Primary Domain' and a 'Secondary Domain'. 'Primary Domain' Web pages are fetched based on direct matching with the keywords and are ranked by a Relevancy Factor (RF) and a Turbulence Factor (TF). The 'Secondary Domain' is constructed from Nearest Keywords and Similar Web pages: Nearest Keywords are keywords similar to the matched keywords, and Similar Web pages are Web pages containing those Nearest Keywords. Matched Web pages of the 'Primary' and 'Secondary' domains are ranked separately. A wide range of Web pages from the cloud would be available and ranked more efficiently by this proposed approach.
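A minimal sketch of the chapter's two-tier idea (hypothetical page data, with a plain keyword count standing in for the chapter's RF/TF factors, which are not specified in the abstract):

```python
# Hypothetical sketch: pages containing a query keyword form the 'Primary
# Domain'; pages containing only a related ('nearest') keyword form the
# 'Secondary Domain'. The two tiers are ranked separately.

def rank_domains(pages, keywords, nearest):
    primary, secondary = [], []
    for url, text in pages.items():
        words = text.lower().split()
        rf = sum(words.count(k) for k in keywords)   # direct matches
        if rf > 0:
            primary.append((rf, url))
            continue
        nf = sum(words.count(k) for k in nearest)    # related matches only
        if nf > 0:
            secondary.append((nf, url))
    # Rank each domain separately, best match first.
    primary.sort(reverse=True)
    secondary.sort(reverse=True)
    return [u for _, u in primary], [u for _, u in secondary]

pages = {
    "page-a": "cloud ranking methods for cloud data",
    "page-b": "general survey with no match",
    "page-c": "grid computing overview",
}
primary, secondary = rank_domains(pages, keywords=["cloud"], nearest=["grid"])
print(primary, secondary)  # ['page-a'] ['page-c']
```

Keeping the two tiers separate, as the chapter proposes, lets direct keyword matches always outrank pages that match only through related keywords.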
2

Lisboa, Isabel C., Joana Vieira, Sandra Mouta, Sara Machado, Nuno Ribeiro, Estêvão Silva, Rita A. Ribeiro, and Alfredo F. Pereira. "An MCDM Approach to the Selection of Novel Technologies for Innovative In-Vehicle Information Systems." In Human Performance Technology, 667–80. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8356-1.ch034.

Full text
Abstract:
Driving a car is a complex skill that includes interacting with multiple systems inside the vehicle. Today's challenge in the automotive industry is to produce innovative In-Vehicle Information Systems (IVIS) that are pleasant to use and satisfy customers' needs while simultaneously maintaining the delicate balance of the primary task vs. secondary tasks while driving. The authors report an MCDM approach for rank-ordering a large heterogeneous set of human-machine interaction technologies; the final set consisted of one hundred and one candidates. They measured candidate technologies on eight qualitative criteria that were defined by domain experts, using a group decision-making approach. The main objective was ordering alternatives by their decision score, not the selection of one or a small set of them. The authors' approach assisted decision makers in exploring the characteristics of the most promising technologies, and they focused on analyzing the technologies in the top quartile, as measured by their MCDM model. Further, a clustering analysis of the top quartile revealed the presence of important criteria trade-offs.
3

Loge, Jon Håvard, and Stein Kaasa. "Quality of life and patient-reported outcome measures." In Oxford Textbook of Palliative Medicine, edited by Nathan I. Cherny, Marie T. Fallon, Stein Kaasa, Russell K. Portenoy, and David C. Currow, 1318–27. Oxford University Press, 2021. http://dx.doi.org/10.1093/med/9780198821328.003.0125.

Full text
Abstract:
To improve or sustain patients' quality of life (QoL) is the main goal of palliative care. In palliative care, as in healthcare in general, QoL is commonly conceptualized as health-related quality of life (HRQoL), which is the self-perceived health status of an individual and encompasses measurable components that are related to health, disease, illness, and medical interventions. Patient-reported outcome (PRO) measures is the term presently used for any measure that collects responses directly from the patients and includes measures of QoL, HRQoL, functions, and symptoms. In spite of substantial evidence on the positive outcomes of using PRO instruments (questionnaires) in the clinic, such use still faces barriers from the health system and the healthcare providers. The content and the measurement capabilities of present PRO instruments can also be a barrier. The selection of PRO instruments is recommended to follow a sequential approach: define the overall aim(s), define the research question(s), agree upon the key outcome(s), and select the appropriate set of questions/questionnaires guided by the primary and secondary outcomes. In general, it is recommended to use a generic or a disease-specific questionnaire and supplement it with domain-specific questionnaire(s) for measurement of fatigue, pain, anxiety, depression, or other symptoms/functions reflecting the purpose(s) of the data collection.
4

Ardagna, Claudio Agostino, Fulvio Frati, and Gabriele Gianini. "Open Source in Web-Based Applications." In Integrated Approaches in Information Technology and Web Engineering, 83–97. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-418-7.ch006.

Full text
Abstract:
Business and recreational activities on the global communication infrastructure are increasingly based on the use of remote resources and services, and on the interaction between different, remotely located parties. In such a context, Single Sign-On technologies simplify the log-on process allowing automatic access to secondary domains through a unique log-on operation to the primary domain. In this paper, we evaluate different Single Sign-On implementations focusing on the central role of Open Source in the development of Web-based systems. We outline requirements for Single Sign-On systems and evaluate four existing Open Source implementations in terms of degree of fulfilment of those requirements. Finally we compare those Open Source systems with respect to some specific Open Source community patterns.
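The primary/secondary-domain log-on flow described in this abstract can be sketched as a toy token scheme (hypothetical key and account store, not any of the surveyed Open Source implementations): one log-on to the primary domain yields a signed token that secondary domains verify instead of asking for credentials again.

```python
import hashlib
import hmac

# Hypothetical shared key: in a real deployment each secondary domain
# would obtain this through a secure channel.
SECRET = b"shared-sso-secret"

def log_on(user, password, accounts):
    """Single log-on at the primary domain: returns a token on success."""
    if accounts.get(user) != password:
        return None
    # Token = user id plus a MAC over it; the password never leaves here.
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{mac}"

def secondary_accepts(token):
    """A secondary domain verifies the token instead of re-authenticating."""
    user, mac = token.split(":", 1)
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

token = log_on("alice", "pw", {"alice": "pw"})
print(secondary_accepts(token))  # True
```

Real Single Sign-On systems add expiry, session revocation, and asymmetric signatures, but the essential pattern is the same: authenticate once at the primary domain, then present proof of that authentication to every secondary domain.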
5

Blum, Bruce I. "Participatory Design." In Beyond Programming. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195091601.003.0014.

Full text
Abstract:
The theme of the book now becomes clearer. Design is the conscious modification of the human environment. As with all self-conscious change, there will be benefits—both projected and fortuitous—and deficiencies—both expected and unanticipated. In the modern world, change is unavoidable; thus, if we are to enter into a new era of design, we should seek methods and tools that maximize the benefits as they minimize the deficiencies. Of course, in the real world of systems there will be neither maxima nor minima. Here we can only measure qualitatively, not quantitatively. Consequently, we must rely on collective judgments and accept that any reference points will become obscured by the dynamics of change. Thus, few of our problems will be amenable to a static, rational solution; most will be soft, open, wicked, and, of course, context and domain specific. This final chapter of Part II explores design in-the-world with particular emphasis on how it affects, and is affected by, the stakeholders. I use the title "Participatory Design" to distinguish this orientation from the historical approach to product development—what I have called "technological design." In technological design, we assume that an object is to be created and, moreover, that the essential description of that object exists in a specification. The design and fabrication activities, therefore, are directed to realizing the specification. How well the specified object fits into the real world is secondary to the design process; the primary criterion for success is the fidelity of the finished product with respect to its specification. We have seen from the previous chapter, however, that this abstract model of technological design seldom exists in practice. Even in architecture, where a building must conform to its drawings, we find excellence associated with flexibility and accommodation. Thus, in reality, technological and participatory design are complementary projections of a single process.
Although I will emphasize computer-based information systems in this chapter, I open the discussion with an examination of a typical hardware-oriented system.
APA, Harvard, Vancouver, ISO, and other styles
6

Weber-Jahnke, Jens H. "The Canadian Health Record Interoperability Infrastructure." In Encyclopedia of Healthcare Information Systems, 188–93. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-889-5.ch026.

Full text
Abstract:
Countries around the globe are struggling with the rising cost of delivering health care. In the developed world, this trend is enforced by aging demographics and emerging forms of expensive medical interventions. Disease prevention, early disease detection, and evidence-based disease management are key for keeping health care systems sustainable. Electronic information management has been recognized as a central enabler for increasing the quality of health care while controlling the cost of delivering it. Secondary care facilities (e.g., hospitals) and laboratories have made use of electronic information systems for decades. However, the primary care sector has only recently begun to adopt such systems on a broader scale. The benefit provided by each system in isolation is limited since citizens generally receive their care from a multitude of providers. Health care information systems need to interoperate in order to enable integrated health information management and consequently attain the declared qualitative and economic objectives. Many industrial countries have begun to create common infrastructures for such an integrated electronic health record (EHR) (Blobel, 2006). Different approaches exist, ranging from centralized databases to highly distributed collections of mediated provider-based systems. This chapter describes the architecture of the Canadian infrastructure for health information management, which can be seen as a compromise between a fully centralized and a fully distributed solution. While in Canada the delivery of health care is a matter of provincial/territorial authority, the health ministers of all provinces and the federation have created a joint organization called Canada Health Infoway with the mandate to develop an architecture for and foster implementation of a joint interoperability infrastructure for EHRs in Canada. The second major version of this architecture has now been released, and provinces have begun to implement it.
The solution is based on the paradigm of a service-oriented architecture (SOA) (Erl, 2004) and embraces a range of domain-specific and technical standards. It leverages and integrates existing investments in health information systems by making them available through standards-conformant interface adapters. The Canadian EHR architecture has received attention beyond the Canadian context. This chapter reports on this architecture, its enabling technology paradigms, experiences with its implementation, and its limitations.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Primary-secondary domain approach"

1

Labbé, Pierre, and Thuong-Anh Nguyen. "Modified Response Spectrum Accounting for Seismic Load Categorization as Primary or Secondary in Multi-Modal Piping Systems." In ASME 2021 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/pvp2021-61395.

Full text
Abstract:
This paper presents a practical implementation of a theoretical approach that was introduced at the PVP 2020 conference. Analysis of the seismically induced ductility demand in elastic-plastic oscillators of variable frequencies and hardening slope is carried out by running time response analyses. The input motion consists of 1000 stochastic process samples of central frequency fc. Oscillators of natural frequencies f0 (0.1 fc ≤ f0 ≤ 10 fc) are submitted to a wide range of ductility demand, up to 20. It turns out that seismic loads should be regarded as secondary for flexible oscillators (f0 < fc) and primary for very stiff oscillators (f0 > fcut, the cut-off frequency of the input motion), with intermediate situations in between. On this basis, we derive a strongly f0/fc-dependent evaluation of the primary part of the seismically induced inertial stresses. In the frame of the conventional linear modal analysis method, practical implementation consists of reducing the input response spectrum: spectral ordinates are reduced in the low-frequency domain by a factor that depends on the ductile capacity and hardening slope, left unchanged at the ZPA frequency, and varied linearly in the medium-frequency domain. This approach is tested against nonlinear time-response analysis of a multimodal piping system.
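The spectrum-reduction rule summarized in the abstract can be sketched as follows. The function name, the low-frequency bound `f_low`, the reduction factor `k`, and linear interpolation in frequency are illustrative assumptions; the paper derives its factor from the ductile capacity and hardening slope.

```python
def reduced_spectrum(sa, f0, f_low, f_zpa, k):
    """Scale one spectral ordinate sa at natural frequency f0.

    k: assumed low-frequency reduction factor (k < 1).
    Below f_low the ordinate is multiplied by k; at and above the
    ZPA frequency f_zpa it is unchanged; in between the factor
    varies linearly with frequency (an illustrative choice).
    """
    if f0 <= f_low:
        factor = k
    elif f0 >= f_zpa:
        factor = 1.0
    else:
        factor = k + (1.0 - k) * (f0 - f_low) / (f_zpa - f_low)
    return factor * sa
```

A flexible mode (f0 below f_low) thus sees its spectral demand cut by k, while a stiff mode near the ZPA frequency is analyzed with the unreduced spectrum.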
APA, Harvard, Vancouver, ISO, and other styles
2

Leone, R. C., M. Nole, G. E. Hammond, and P. C. Lichtner. "Multiple Continuum Approach to Modeling Radionuclide Transport in Fractured Networks." In 3rd International Discrete Fracture Network Engineering Conference. ARMA, 2022. http://dx.doi.org/10.56952/arma-dfne-22-0045.

Full text
Abstract:
Traditional discrete fracture models implementing matrix diffusion can be computationally expensive and only applicable to simplified transport problems. Upscaling to a continuum model can reduce the computational burden, but models based on only a primary continuum neglect fracture-matrix interaction. PFLOTRAN, a subsurface flow and reactive transport code, simulates a secondary continuum (matrix) coupled to the primary continuum (fracture), modeled as a disconnected one-dimensional domain, using a method known as the Dual Continuum Disconnected Matrix (DCDM) model. This work presents several benchmarks to compare PFLOTRAN's DCDM model to analytical solutions, as well as a large-scale test problem in a one-cubic-kilometer fractured domain modeling a conservative tracer with diffusion of the tracer into the rock matrix. The tracer was modeled using two different methods: first, with a Discrete Fracture Network (DFN) representation, and second, using the DCDM in PFLOTRAN. We find that the DCDM representation of the upscaled fracture network produces results comparable to the DFN and analytical solutions where available, verifying this method. We then apply the DCDM model to a fractured domain considering radionuclide isotope sorption, partitioning, decay, and ingrowth and find that radionuclide retardation is enhanced when considering these additional mechanisms.
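Verification against analytical matrix-diffusion solutions, as mentioned above, typically uses the textbook benchmark of a constant tracer concentration in the fracture diffusing into a semi-infinite 1-D matrix. A minimal sketch of that classical solution (not PFLOTRAN's implementation) is:

```python
import math

def matrix_concentration(c_frac, x, D, t):
    """Tracer concentration at depth x [m] into the rock matrix for a
    constant fracture concentration c_frac held since t = 0, with
    effective matrix diffusivity D [m^2/s] (semi-infinite 1-D benchmark):
        C(x, t) = c_frac * erfc(x / (2 * sqrt(D * t)))
    """
    if t <= 0.0:
        return 0.0
    return c_frac * math.erfc(x / (2.0 * math.sqrt(D * t)))
```

The concentration equals the fracture value at the interface (x = 0) and decays monotonically into the matrix, which is the behavior a DCDM secondary continuum must reproduce.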
APA, Harvard, Vancouver, ISO, and other styles
3

dela Cruz, MEJV, and MRD Ching. "DISCOVERING THE PRIMARY AND SECONDARY FACTORS THAT INFLUENCED THE FIRST PHILIPPINE ACADEMIC INSTITUTION TO ADOPT ENTERPRISE ARCHITECTURE." In The 7th International Conference on Education 2021. The International Institute of Knowledge Management, 2021. http://dx.doi.org/10.17501/24246700.2021.7157.

Full text
Abstract:
Academic institutions utilize various ICT mechanisms to manage institution data, retrieve information, sustain financial activities, and deal with digital culture to create the learning and teaching setting. Thus, Enterprise Architecture (EA) is the ICT strategy employed in the domain to engage with radical changes and shifting trends. The purpose of EA is to organize and standardize Information Technology (IT) components to align with the institution's goals. A qualitative analysis was conducted to discover the factors that instigated the first academic institution in the Philippines to adopt EA as an ICT tool for its long-term strategy. The approach was an exploratory research design to closely examine data through thematic analysis, focusing on inductive reasoning that emphasizes the data gathered from semi-structured interviews with open-ended questions. The results of the interviews were graded as primary and secondary factors which influenced the adoption process. Primary factors are the elements that drive EA adoption, such as the organizational structure and human traits, while secondary factors consist of the characteristics of the transformation, specifically the intended techniques, proposed transformation capabilities, transformation obstacles, and institutional perspectives. The purpose of this enquiry is to disseminate the primary and secondary factors, discovered from the first academic institution in the Philippines, to various academic institutions and other sectors with similar settings, as a learning ground and bedrock of future possibilities for EA adoption. Thus, the challenge for subsequent EA adopters is to utilize and strengthen the primary and secondary factors to boost the success of transformation for competitive advantage. Future research should gravitate towards factors of EA non-adoption in academic institutions and other sectors, as EA is still emerging slowly, particularly in the Philippines.
Keywords: Academic Institution; Enterprise Architecture; Adoption Factors; Digitalization; Knowledge Management; Transformation Capabilities; Transformation Obstacles
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Hong-Jun, Xin Chen, and De-Tao Zheng. "An Approach to Product Family Planning Based on Hypergraph Theory." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59321.

Full text
Abstract:
A widespread method employed in product planning is quality function deployment (QFD), which provides a means of translating a single customer's needs into a product's design requirements. Mass-customization-oriented product family planning, however, must translate a group of different customers' needs into all kinds of engineering characteristics across a product family life cycle by mapping the complicated information between adjacent stages of the product family development cycle, so that the customer group's needs can be carried out in a product family; this requires developing a batch of products at the same time. Unfortunately, at present there is no tool supporting product family planning. Aiming at this problem, this paper proposes a new approach that extends the house of quality (HoQ) in order to fulfill product family planning for mass customization. Firstly, hypergraph (HG) theory and QFD are brought together, and the processes of information mapping between adjacent stages of a product family development cycle are described through a relational hypergraph (RH). Secondly, primary-input driven paths in the relational hypergraph, which represent the different information at the same domain in a HoQ, are placed on the left or at the bottom of the HoQ, while secondary-input paths, which represent the different information at the adjacent domain, are placed in the middle array of the HoQ; the extended HoQ (EHoQ) obtained in this way provides a tool for planning a batch of products at the same time. Finally, the process of translating a customer group's needs into engineering characteristics is illustrated by means of an EHoQ.
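The HoQ mapping that the EHoQ extends can be illustrated by the standard QFD roll-up, in which customer-need weights propagate through the relationship matrix into engineering-characteristic importances. The function name and data below are hypothetical, and the paper's hypergraph machinery is not reproduced:

```python
def characteristic_weights(need_weights, relation):
    """Classic HoQ roll-up: the importance of each engineering
    characteristic is the need-weighted column sum of the relationship
    matrix (rows: customer needs, columns: characteristics)."""
    n_char = len(relation[0])
    return [sum(w * row[j] for w, row in zip(need_weights, relation))
            for j in range(n_char)]
```

With the usual 9/3/1 relationship scores, two equally weighted needs relating strongly to the first characteristic yield a correspondingly higher importance for it.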
APA, Harvard, Vancouver, ISO, and other styles
5

Sonawat, Arihant, Abdus Samad, and Afshin Goharzadeh. "Numerical Analysis of Flare Gas Recovery Ejector." In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21409.

Full text
Abstract:
Flaring and venting contribute significantly to greenhouse gas emissions and environmental pollution in the upstream oil and gas industry. The present work focuses on a horizontal-flow, multiphase ejector used for recovery of these flared gases. The ejector typically handles these gases, which are entrained by high-pressure wellhead fluid, and a comprehensive understanding is necessary to design and operate such a recovery system. A CFD-based analysis of the flow through the ejector is reported in this paper. The flow domain was meshed, and the mass and momentum equations for fluid flow were solved using the commercial software CFX (v14.5). The Euler-Euler multiphase approach was used to model the different phases. The entrainment behavior of the ejector was investigated and compared for different fluid flow conditions. It was observed that for a fixed primary fluid flow rate, the entrained (secondary) flow rate decreased linearly with an increase in the pressure difference between the exit and suction pressures. The higher the primary flow rate, the greater the suction created ahead of the primary nozzle and the greater the amount of energy added to the entrained fluid.
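The reported linear decrease of entrained flow with back-pressure difference can be extracted from simulated operating points with an ordinary least-squares fit; a sketch follows, with purely illustrative data in the usage (the paper's actual flow rates and pressures are not reproduced):

```python
def fit_linear(dps, qs):
    """Ordinary least-squares fit of the trend q = a - b * dp,
    returning (a, b): a is the entrained flow extrapolated to zero
    pressure difference, b > 0 is the linear rate of decrease."""
    n = len(dps)
    mx = sum(dps) / n
    my = sum(qs) / n
    sxx = sum((x - mx) ** 2 for x in dps)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dps, qs))
    b = -sxy / sxx        # slope of q vs dp, negated so b > 0
    a = my + b * mx       # intercept: entrained flow at dp = 0
    return a, b
```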
APA, Harvard, Vancouver, ISO, and other styles
6

Tillou, Julien, Julien Leparoux, Jérome Dombard, Eleonore Riber, and Bénédicte Cuenot. "Evaluation and Validation of Two-Phase Flow Numerical Simulations Applied to an Aeronautical Injector Using a Lagrangian Approach." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15612.

Full text
Abstract:
Non-reactive Lagrangian two-phase flow Large-Eddy Simulations (LES) of an industrial aeronautical injector are carried out with the compressible AVBP code and compared with an experimental database in an industrial context. While most papers focus on simplex atomizers with only one fuel passage, we propose to account for specific industrial configurations based on a duplex atomizer where both the primary and the secondary passages operate. For the second passage, the fuel spray angle is wider, leading to spray/wall interactions and airblast atomization. The computational domain consists of the experimental mock-up without the fuel atomizer part. The liquid-injection boundary condition is applied through the phenomenological FIM-UR model, which prescribes droplet velocities and diameter distribution at the atomizer tip based on both the atomizer characteristics and the liquid mass flow rate. No specific models are used for spray/wall interaction, and droplets are assumed to slip on the walls. The numerical results are compared with the experimental database for Jet-A1 fuel, built through Phase Doppler Anemometry instrumentation, allowing access to local information regarding the droplet velocity components. Three LES are performed for pressure losses ranging from 1 to 3%, covering an important part of the engine operating conditions, from high-altitude relight to the cruise operating point. Mean and fluctuating velocity profiles show a relatively good agreement with measurements for all the operating points. This confirms that the spray/wall interaction, airblast, and secondary breakup models may be neglected as a first approximation for configurations where only a relatively small amount of fuel impacts the wall.
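Point-particle Lagrangian tracking of the kind used here integrates each droplet's velocity toward the carrier-gas velocity. A minimal sketch under a Stokes-drag assumption follows; production solvers such as AVBP use more elaborate drag laws, and the parameter values in the usage are hypothetical:

```python
import math

def droplet_velocity(u_gas, u0, d, rho_d, mu_gas, t):
    """Velocity of a small droplet relaxing toward the carrier-gas
    velocity u_gas under Stokes drag (point-particle tracking):
        tau = rho_d * d**2 / (18 * mu_gas)   (relaxation time)
        u(t) = u_gas + (u0 - u_gas) * exp(-t / tau)
    d: droplet diameter [m], rho_d: liquid density [kg/m^3],
    mu_gas: gas dynamic viscosity [Pa.s]."""
    tau = rho_d * d ** 2 / (18.0 * mu_gas)
    return u_gas + (u0 - u_gas) * math.exp(-t / tau)
```

Small droplets (short tau) follow the gas almost immediately, which is why PDA velocity statistics of the finest size classes approximate the gas-phase field.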
APA, Harvard, Vancouver, ISO, and other styles
7

Malik, M. Afzaal, Badar Rashid, and Shahab Khushnood. "Dynamic Analysis of Fluid Flowing Through Micro Porous Filters Using Bondgraph Approach." In ASME 2006 2nd Joint U.S.-European Fluids Engineering Summer Meeting Collocated With the 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/fedsm2006-98397.

Full text
Abstract:
Delivery of optimized fuel injection pressure to the combustion chamber of an engine assembly leads to optimum torque and horsepower. A contaminant-free supply of fuel without compromising volume flow rate is the most important design requirement. Incorporation of very fine fuel filters having less than 10 micron rating reduces the volume flow rate at the injection nozzles, whereas a fuel filter with a larger pore size stabilizes the injection pressure but may result in failure of the fuel injection pump assembly due to scuffing produced by the fuel contaminant between the plunger and sleeve of the hydraulic head of the fuel injection pump. The fuel flows from the fuel tank through the low-pressure injection line, primary and secondary fuel filters, fuel transfer pump, fuel injection pump, high-pressure injection line, and injector nozzles. Modeling and simulation of volume flow rate vis-à-vis fuel injection pressure together with a micro-porous fuel filter poses a formidable challenge. The Bondgraph method (BGM) is ideally suited for the modeling and simulation of such a multi-domain dynamic system. The aim of this research is to apply BGM to model and simulate the optimized fuel injection pressure and to analyze filters with different micro-porosity and their effect on volume flow rate. Fuel filter porosity, inlet and outlet pressures of the transfer pump, fuel injection pump, and low/high pressure injection line pressures have been determined experimentally. These experimentally determined parameters are then used as input to our Bondgraph model for the dynamic analysis of fuel injection pressure incorporating micro-porous filters.
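In a bondgraph model, a porous filter element appears as a resistive one-port, and its constitutive law can be taken from Darcy's law for flow through porous media. A minimal sketch follows; the parameter values in the usage are hypothetical, not the paper's measured data:

```python
def darcy_flow(k, area, dp, mu, length):
    """Volume flow rate [m^3/s] through a porous filter element treated
    as a single resistive one-port (Darcy's law):
        Q = k * A * dP / (mu * L)
    k: permeability [m^2] (set by filter micro-porosity),
    A: filter face area [m^2], dP: pressure drop [Pa],
    mu: fuel dynamic viscosity [Pa.s], L: medium thickness [m]."""
    return k * area * dp / (mu * length)
```

A finer filter (smaller k) lowers Q for the same pressure drop, which is exactly the flow-rate/porosity trade-off the study examines.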
APA, Harvard, Vancouver, ISO, and other styles
8

Zwiener, Kim, Cassie Carpenter, and Justin Hodges. "Conjugate Heat Transfer Simulation of an Industrial Gas Turbine Blade With Harmonic Balance Method." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-60276.

Full text
Abstract:
The performance of turbomachines is often dependent on the unsteady flow fields they naturally produce, owing primarily to row-to-row interactions from both moving and stationary components, as well as the unsteady nature of the turbulent flow. When it comes to computational fluid dynamics, a disparity exists between steady-state and transient simulation as far as accuracy is concerned, although the computational cost of transient simulation on fully complex industrial hardware can be overwhelming. This study bridges the gap by presenting a harmonic balance conjugate heat transfer simulation approach in Simcenter STAR-CCM+ to model the unsteady flow phenomena while also providing accurate temperature predictions throughout the gas turbine blade solid bodies. The harmonic balance method used is a mixed time-domain and frequency-domain technique, which is suitable for periodic unsteady flows and is much less expensive than transient simulation. With this method, the impact of capturing these unsteady flow structures, such as the wake interactions and secondary cooling flows, is quantified on the resulting metal temperature distribution. This is investigated and characterized throughout an industrial gas turbine blade with fully complex internal cooling passages, as well as film cooling for the external blade surface. Comparisons to steady and transient simulation are also made to quantify the relative fidelity of each approach. Regarding the final resulting blade heat transfer, analysis is also provided to differentiate between important sources: the unsteadiness in the primary gas path flow and the classical unsteady nature of turbulence. Often these effects are lumped together when analyzing the resulting heat transfer, which is incorrect and can be better understood with more detailed analysis.
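The harmonic balance method assumes the flow is periodic and solves for it at a small set of equally spaced time instants coupled through a truncated Fourier series. The sketch below only illustrates that time-sampling idea for a single scalar; the names and setting are illustrative, not STAR-CCM+'s formulation:

```python
import math

def hb_samples(coeffs, omega, n_harm):
    """Evaluate a truncated Fourier series at the 2*n_harm + 1 equally
    spaced instants over one period that a harmonic balance solver
    treats simultaneously.
    coeffs: (a0, [(a_k, b_k), ...]) of the assumed periodic field,
    omega: fundamental angular frequency [rad/s]."""
    a0, ab = coeffs
    n_t = 2 * n_harm + 1
    period = 2.0 * math.pi / omega
    samples = []
    for i in range(n_t):
        t = i * period / n_t
        u = a0
        for k, (a, b) in enumerate(ab, start=1):
            u += a * math.cos(k * omega * t) + b * math.sin(k * omega * t)
        samples.append(u)
    return samples
```

Because the series is exact at these instants, the mean of the samples recovers the time-average a0, which is the quantity a steady simulation would approximate.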
APA, Harvard, Vancouver, ISO, and other styles
9

Hadley, Isabel, and Simon Smith. "Effects of Mechanical Loading on Residual Stress and Fracture: Part II — Validation of the BS 7910:2013 Rules." In ASME 2014 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/pvp2014-28092.

Full text
Abstract:
Failure of welded structures due to the presence of flaws is typically driven by a mixture of applied and residual stresses, yet in most cases only the former are known accurately. In as-welded structures, a typical assumption is that the magnitude of welding residual stress is bounded by the room temperature yield strength of the parent material. The UK flaw assessment procedure BS 7910:2013 also assumes that mechanical loading (either as a result of proof testing or during the initial loading of an as-welded structure) will bring about a relaxation in residual stress. Conversely, the UK structural assessment code for nuclear structures, R6, contains a warning on the ‘limited validation’ of the BS 7910 approaches for stress relaxation and suggests that they should be used ‘with caution’. The aim of this study was therefore to review the basis of the BS 7910 clauses on stress relaxation with a view to harmonising the BS 7910 and R6 rules for cases in which the original welding residual stress distribution is not known. A companion paper describes the history of the residual stress relaxation clauses of BS 7910. A considerable programme of work was carried out in the late 1980s to justify and validate the clauses, using a range of experimental and numerical work. This included analysis of work carried out by the UK power industry and used in the validation of the R6 procedure. The full underlying details of the work have not hitherto been available in the public domain, although the principles were published in 1988. The approach proposed in BS 7910 combines ‘global’ relaxation of residual stress (Qm) under high mechanical load with ‘local’ enhancement of crack tip driving force through the adoption of a simplified primary/secondary stress interaction factor, ρ. This is different from the method adopted by R6, but appears to be equivalent to allowing negative values of ρ under conditions of high primary stress. 
A re-analysis of the original TWI work, using the current version of BS 7910, has shown nothing to contradict the approach, which represents a workable engineering solution to the problem of how to analyse residual stress effects in as-welded structures rapidly and reasonably realistically when the as-welded stress distribution is unknown.
APA, Harvard, Vancouver, ISO, and other styles
10

Saini, Rohit, and Ashoke De. "Simulations of Non-Reacting Transient N-Dodecane Spray in a High-Pressure Combustion Vessel." In ASME 2015 Gas Turbine India Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/gtindia2015-1278.

Full text
Abstract:
In many combustion systems, fuel atomization and the spray breakup process play an important role in determining combustion characteristics and emission formation. Due to the ever-rising need for better fuel efficiency and lower emissions, the development of a fundamental understanding of this process is essential and remains a challenging task. The Spray-A case of the Engine Combustion Network (ECN) is considered in the study, in which liquid n-Dodecane (Spray-A) is injected at 1500 bar through a nozzle diameter of 90 μm into a constant volume vessel with an ambient density of 22.8 kg/m3 and an ambient temperature of 900 K. The unsteady Reynolds-averaged Navier-Stokes (URANS) approach in conjunction with the k-ε turbulence model is used to investigate the flow physics in a two-dimensional axisymmetric computational domain. A reduced chemical mechanism from Wang et al. [1] with 100 species and 432 reactions is invoked to represent the kinetics. The gas and liquid phases are modeled using a coupled Eulerian-Lagrangian approach. The present model is validated against the experimental data as well as the computational data of Pei et al. [2]. Initially, the effects of various turbulence models with modified constants are examined without introducing breakup phenomena into the computational physics. Later on, primary and secondary breakup processes of the liquid fuel are taken into account. In the present study, we examine the effects of secondary breakup modeling on the spray under high-pressure conditions using different breakup models, including the Wave, Kelvin-Helmholtz and Rayleigh-Taylor (KH-RT), and Stochastic Secondary Droplet (SSD) models. It is observed that the KH-RT model is dominant in such high-pressure sprays and predicts the physics more accurately than the other models. The convection- and diffusion-controlled vaporization model is also found to outperform the purely diffusion-controlled model.
The investigations at different fuel injection pressures are also modeled and validated with the experimental data [3]. The results strongly suggest that applying higher injection pressure leads to higher injection velocity and momentum, which enhances air entrainment near the injector region and the mixing process.
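Secondary breakup models such as KH-RT are typically triggered by the gas Weber number of each droplet. A sketch of that standard criterion follows; the regime thresholds are approximate literature values, not those of any specific model in the paper, and the usage values (a 90 μm droplet in Spray-A-like gas, with an assumed surface tension of 0.025 N/m) are illustrative:

```python
def weber_number(rho_gas, rel_vel, diameter, sigma):
    """Gas Weber number We = rho_g * U_rel^2 * d / sigma, the ratio of
    disruptive aerodynamic force to restoring surface tension that
    governs secondary droplet breakup."""
    return rho_gas * rel_vel ** 2 * diameter / sigma

def breakup_regime(we):
    """Rough breakup regime map; threshold values are approximate and
    vary across the literature."""
    if we < 12:
        return "no breakup / vibrational"
    if we < 80:
        return "bag breakup"
    if we < 350:
        return "stripping / shear breakup"
    return "catastrophic breakup"
```

At Spray-A-like gas densities and velocities the Weber number is far above all thresholds, which is consistent with stripping-type KH breakup dominating near the nozzle.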
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Primary-secondary domain approach"

1

Ginzberg, Idit, Richard E. Veilleux, and James G. Tokuhisa. Identification and Allelic Variation of Genes Involved in the Potato Glycoalkaloid Biosynthetic Pathway. United States Department of Agriculture, August 2012. http://dx.doi.org/10.32747/2012.7593386.bard.

Full text
Abstract:
Steroidal glycoalkaloids (SGAs) are secondary metabolites being part of the plant defense response. The two major SGAs in cultivated potato (Solanum tuberosum) are α-chaconine and α-solanine, which exhibit strong cellular lytic properties and inhibit acetylcholinesterase activity, and are poisonous at high concentrations for humans. As SGAs are not destroyed during cooking and frying commercial cultivars have been bred to contain low levels, and their content in tubers should not exceed 20 mg/100 g fresh weight. However, environmental factors can increase tuber SGA content above the safe level. The focus of the proposed research was to apply genomic approaches to identify candidate genes that control potato SGA content in order to develop tools for potato improvement by marker-assisted selection and/or transgenic approaches. To this end, the objectives of the proposal included identification of genes, metabolic intermediates and allelic variations in the potato SGAbiosynthetic pathway. The SGAs are biosynthesized by the sterol branch of the mevalonic acid/isoprenoid pathway. Transgenic potato plants that overexpress 3-hydroxy-3-methylglutaryl-CoA reductase 1 (HMG1) or squalene synthase 1 (SQS1), key enzymes of the mevalonic acid/isoprenoid pathway, exhibited elevated levels of solanine and chaconine as well as induced expression of genes downstream the pathway. These results suggest of coordinated regulation of isoprenoid (primary) metabolism and SGA secondary metabolism. The transgenic plants were further used to identify new SGA-related candidate genes by cDNA-AFLP approach and a novel glycosyltransferase was isolated. In addition, genes involved in phytosterol biosynthesis may have dual role and synthesize defense-related steroidal metabolites, such as SGAs, via lanosterol pathway. Potato lanosterol synthase sequence (LAS) was isolated and used to prepare transgenic plants with overexpressing and silencing constructs. 
Plants are currently being analyzed for SGA content. The dynamics of SGA accumulation in the various organs of a potato species with high SGA content gave insights into the general regulation of SGA abundance. Leaf SGA levels in S. chacoense were 10 to 20-fold greater than those of S. tuberosum. The leptines, SGAs with strong antifeedant properties against Colorado potato beetles, were present in all aerial tissues except for early and mid-developmental stages of above-ground stolons, and accounted for the high SGA content of S. chacoense. These results indicate the presence of regulatory mechanisms in most tissues except in stolons that limit the levels of α-solanine and α-chaconine and confine leptine accumulation to the aerial tissues. The genomes of cultivated and wild potato contain a 4-member gene family coding for SQS. Three orthologs were cloned as cDNAs from S. chacoense and heterologously expressed in E. coli. Squalene accumulated in all E. coli lines transformed with each of the three gene constructs. Differential transcript abundance in various organs and amino acid sequence differences in the conserved domains of the three isoenzymes indicate subfunctionalization of SQS activity and triterpene/sterol metabolism. Because S. chacoense and S. phureja differ so greatly for presence and accumulation of SGAs, we selected four candidate genes from different points along the biosynthetic pathway to determine if chc- or phu-specific alleles were associated with SGA expression in a segregating interspecific diploid population. For two of the four genes (HMG2 and SGT2), F2 plants with chc alleles expressed significantly greater total SGAs compared with heterozygotes and those with phu alleles. Although there are other determinants of SGA biosynthesis and composition in potato, the ability of allelic states at two genes to affect SGA levels confirms some of the above transgenic work where chc alleles at two other loci altered SGA expression in Desiree.
Present results reveal new opportunities to manipulate triterpene/sterol biosynthesis in more targeted ways with the objective of altering SGA content for both human health concerns and natural pesticide content without disrupting the essential metabolism and function of the phytosterol component of the membranes and the growth regulating brassinosteroids.
APA, Harvard, Vancouver, ISO, and other styles
