Academic literature on the topic 'Always-feasible designs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Always-feasible designs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Always-feasible designs"

1

Rose, Roderick A., and Natasha K. Bowen. "Difference-in-Differences as an Alternative to Pretest–Posttest Regression for Social Work Intervention Evaluation and Research." Social Work Research 43, no. 4 (November 21, 2019): 247–58. http://dx.doi.org/10.1093/swr/svz017.

Full text
Abstract:
Nonrandomized evaluation designs are an important part of social work research because randomization is not always feasible in social work settings. Although randomly assigned groups are assumed to be equivalent, nonrandomly assigned groups are not. In nonrandomized settings, designs with multiple waves are ideal, but two-wave designs are still widely used. A common method for estimating a treatment effect in nonrandom two-wave designs is the pretest–posttest model. However, depending on relationships among participants and the method of assignment to treatment groups, researchers should consider a difference-in-differences approach to testing treatment effects. Authors describe and compare the pretest–posttest and difference-in-differences approaches and assumptions and offer guidelines, developed from a literature review, about the conditions under which each model is likely to be best. Authors also demonstrate the decision-making process and application of the methods in an evaluation of an elementary school intervention program.
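The contrast the authors draw between a post-test-only comparison and difference-in-differences can be sketched with toy group means (the numbers below are illustrative assumptions, not data from the article):

```python
# Hypothetical pre/post means for a nonrandomized two-wave study.
pre_treat, post_treat = 40.0, 55.0   # treatment group
pre_comp, post_comp = 42.0, 50.0     # comparison group

# A post-test-only contrast ignores baseline non-equivalence of the groups.
post_only = post_treat - post_comp                        # 5.0

# Difference-in-differences subtracts each group's own baseline, removing
# time-invariant group differences: (treatment change) - (comparison change).
did = (post_treat - pre_treat) - (post_comp - pre_comp)   # 15.0 - 8.0 = 7.0

print(post_only, did)
```

In a regression framing, the same estimate is the coefficient on the treatment-by-time interaction; the pretest–posttest approach instead conditions on the pretest score, which is why the two can disagree when assignment is related to baseline levels.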
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Yirui, and Donald R. Hoover. "Non-randomized and randomized stepped-wedge designs using an orthogonalized least squares framework." Statistical Methods in Medical Research 27, no. 4 (July 10, 2016): 1202–18. http://dx.doi.org/10.1177/0962280216657852.

Abstract:
Randomized stepped-wedge (R-SW) designs are increasingly used to evaluate interventions targeting continuous longitudinal outcomes measured at T fixed time points. Typically, all units start out untreated, and randomly chosen units switch to intervention at sequential time points until all receive intervention. As randomization is not always feasible, non-randomized stepped-wedge (NR-SW) designs (units switching to intervention are not randomly chosen) have attracted researchers. We develop an orthogonalized generalized least squares framework for both R-SW and NR-SW designs. The variance of the intervention effect estimate depends on the number of steps (S), length of step sizes (t_s), and number of units (n_s) switched at each step (s = 1, …, S). If all other design parameters are equal, this variance is higher for the NR-SW than for the equivalent R-SW design (particularly if the intercepts of non-randomly stepped switching strata are analyzed as fixed effects). We focus on balanced stepped-wedge (BR-SW, BNR-SW) designs (where t_s and n_s remain constant across s) to obtain insights into optimality for variance of the estimated intervention effect. As previously observed for the BR-SW, the optimal choice for number of time points at each step is also [Formula: see text] for the BNR-SW. In our examples, when compared to BR-SW designs, equivalent BNR-SW designs even with intercepts of non-randomly stepped switching strata analyzed using fixed effects sacrifice little efficiency given an intra-unit repeated measure correlation [Formula: see text]. Compared to traditional difference-in-differences designs, optimal BNR-SW designs are more efficient with the ratio of variances of these designs converging to 0.75 when T > 10. We illustrate these findings using longitudinal outcomes in long-term care facilities.
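The design parameters the abstract names (S steps, step length t_s, n_s units per step) can be made concrete by generating a balanced stepped-wedge treatment-indicator matrix. This is a minimal sketch under our own naming, not the authors' estimation framework:

```python
import numpy as np

def balanced_stepped_wedge(S, t_s, n_s, baseline=1):
    """Treatment-indicator matrix for a balanced stepped-wedge design:
    S steps, each lasting t_s time points, n_s units switching per step,
    after `baseline` untreated time points. Rows are units, columns are
    time points; a 1 marks a period under intervention."""
    T = baseline + S * t_s
    X = np.zeros((S * n_s, T), dtype=int)
    for s in range(S):
        switch = baseline + s * t_s          # step s switches here
        X[s * n_s:(s + 1) * n_s, switch:] = 1
    return X

# 3 steps, 1 time point per step, 2 units switching at each step.
X = balanced_stepped_wedge(S=3, t_s=1, n_s=2)
print(X)
```

The characteristic "staircase" of ones in the printed matrix is what distinguishes a stepped-wedge from a parallel-groups design; in the non-randomized variant, which units occupy which rows is not decided by chance.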
3

Wu, Linlin, Ruiying Zhao, Yuyu Li, and Ye-Hwa Chen. "Optimal Design of Adaptive Robust Control for the Delta Robot with Uncertainty: Fuzzy Set-Based Approach." Applied Sciences 10, no. 10 (May 18, 2020): 3472. http://dx.doi.org/10.3390/app10103472.

Abstract:
An optimal control design for the uncertain Delta robot is proposed in the paper. The uncertain factors of the Delta robot include the unknown dynamic parameters, the residual vibration disturbances and the nonlinear joints friction, which are (possibly fast) time-varying and bounded. A fuzzy set theoretic approach is creatively used to describe the system uncertainty. With the fuzzily depicted uncertainty, an adaptive robust control, based on the fuzzy dynamic model, is established. It designs an adaptation mechanism, consisting of the leakage term and the dead-zone, to estimate the uncertainty information. An optimal design is constructed for the Delta robot and solved by minimizing a fuzzy set-based performance index. Unlike the traditional fuzzy control methods (if-then rules-based), the proposed control scheme is deterministic and fuzzily optimized. It is proven that the global solution in the closed form for this optimal design always exists and is unique. This research provides the Delta parallel robot a novel optimal control to guarantee the system performance regardless of the uncertainty. The effectiveness of the proposed control is illustrated by a series of simulation experiments. The results reveal that the further applications in other robots are feasible.
4

Deng, Shuwen, Xudong Shao, Banfu Yan, Yan Wang, and Huihui Li. "On Flexural Performance of Girder-To-Girder Wet Joint for Lightweight Steel-UHPC Composite Bridge." Applied Sciences 10, no. 4 (February 16, 2020): 1335. http://dx.doi.org/10.3390/app10041335.

Abstract:
Joints are always the focus of the precast structure for accelerated bridge construction. In this paper, a girder-to-girder joint suitable for steel-ultra-high-performance concrete (UHPC) lightweight composite bridge (LWCB) is proposed. Two flexural tests were conducted to verify the effectiveness of the proposed T-shaped girder-to-girder joint. The test results indicated that: (1) The T-shaped joint has a better cracking resistance than the traditional I-shaped joint; (2) The weak interfaces of the T-shaped joint are set in the areas with relatively lower negative bending moment, and thus the cracking risk could be decreased drastically; (3) The natural curing scheme for the joint is feasible, and the reinforcement has a very large inhibitory effect on the UHPC material shrinkage. The joint interface is the weak region of the LWCB, which requires careful consideration in future designs. Based on the experimental test results, the design and calculation methods for the deflection, crack width, and ultimate flexural capacity in the negative moment region of LWCB were presented.
5

Gahlot, Manisha, Vandana Bhandari, Laimayum Jogeeta Devi, and Anita Rani. "Traditional arts and crafts: Responsible lifestyle products design through heat transfer printing." International Journal for Modern Trends in Science and Technology 06, no. 9S (October 16, 2020): 234–41. http://dx.doi.org/10.46501/ijmtst0609s34.

Abstract:
Sustainability is the key to responsible production and conservation of environment, which is the need of the hour. Indian motifs based on traditional textile arts and crafts have always been a source of inspiration not only to Indian designers but also have intrigued global designers. These motifs can be adapted into lifestyle products through modern techniques of surface enrichment. Lifestyle products hold a lucrative market in the textile sector. Apron is one such lifestyle product which falls under the category of accessories. This study explores how traditional knowledge of Indian arts and crafts can open up avenues for responsible designing of lifestyle products. In the present study, fifty motifs/designs from textile and architectural sources of Manipur were collected from secondary sources, adapted and simplified for application in kitchen apron using CorelDraw X3 software. Ten adapted designs were selected through visual inspection by a panel of thirty judges. The design arrangements were developed for kitchen apron by preparing line patterns, motifs/designs layout and colourways, respectively. The outcome of every step was visually evaluated by the same panel of thirty judges, except for the line patterns, on five point scale. The prototype scoring highest weighted mean score i.e., rank I was selected for further developing the following consequent steps. The finalized designs were printed on the paper using disperse dyes. The printed papers were then used to transfer designs on the constructed and finished apron made of polyester/cotton blended fabric. The cost of apron was estimated Rs. 244/- which can be reduced if produced in bulk. Consumer assessment was carried out for the printed apron on various aesthetic parameters. 
Consumers’ acceptance for the printed apron was found high which reflected its marketability owing to uniqueness of the motifs, traditional values associated with the traditional motifs of Manipur, sharpness of design lines, the clarity of prints and the reasonable price. Thus, study outcome revealed that the designs inspired from traditional textile arts and crafts of Manipur can be successfully rejuvenated into lifestyle products through heat transfer printing which is environmentally feasible, socially acceptable and economically viable.
6

Lloyd, Blair P., Kayla R. Randall, Emily S. Weaver, Johanna L. Staubitz, and Naomi Parikh. "An Initial Evaluation of a Concurrent Operant Analysis Framework to Identify Reinforcers for Work Completion." Behavioral Disorders 45, no. 2 (May 14, 2019): 85–102. http://dx.doi.org/10.1177/0198742919837647.

Abstract:
Although functional analysis is a powerful tool for testing the function of challenging behavior, it is not always feasible or appropriate to include as a component of functional behavior assessment (FBA). Alternative experimental analysis methods are needed to inform individualized interventions in schools, particularly for students who engage in passive forms of problem behavior. We evaluated a concurrent operant analysis (COA) framework to identify reinforcers for appropriate replacement behaviors for four students referred for FBA and reported by teachers to engage in low levels of work completion. After completing two COAs per student (researcher-as-therapist and teacher-as-therapist), we used alternating treatments designs to compare the effects of an intervention matched with COA outcomes to intervention conditions that were not matched to COA outcomes on levels of work completion and task engagement. COA outcomes corresponded across therapists for three of four participants and intervention results validated COA outcomes for two of these participants. Although results of this initial investigation seem promising, more research on COA frameworks is needed to determine their utility to guide reinforcement-based interventions in schools.
7

Deng, Hong-Wen. "Characterization of Deleterious Mutations in Outcrossing Populations." Genetics 150, no. 2 (October 1, 1998): 945–56. http://dx.doi.org/10.1093/genetics/150.2.945.

Abstract:
Deng and Lynch recently proposed estimating the rate and effects of deleterious genomic mutations from changes in the mean and genetic variance of fitness upon selfing/outcrossing in outcrossing/highly selfing populations. The utility of our original estimation approach is limited in outcrossing populations, since selfing may not always be feasible. Here we extend the approach to any form of inbreeding in outcrossing populations. By simulations, the statistical properties of the estimation under a common form of inbreeding (sib mating) are investigated under a range of biologically plausible situations. The efficiencies of different degrees of inbreeding and two different experimental designs of estimation are also investigated. We found that estimation using the total genetic variation in the inbred generation is generally more efficient than employing the genetic variation among the mean of inbred families, and that higher degree of inbreeding employed in experiments yields higher power for estimation. The simulation results of the magnitude and direction of estimation bias under variable or epistatic mutation effects may provide a basis for accurate inferences of deleterious mutations. Simulations accounting for environmental variance of fitness suggest that, under full-sib mating, our extension can achieve reasonably well an estimation with sample sizes of only ∼2000-3000.
8

Januschowski, Kai, Frank R. Ihmig, Timo Koch, Thomas Velten, and Annekatrin Rickmann. "Context-sensitive smart glasses monitoring wear position and activity for therapy compliance—A proof of concept." PLOS ONE 16, no. 2 (February 19, 2021): e0247389. http://dx.doi.org/10.1371/journal.pone.0247389.

Abstract:
Purpose: To improve the acceptance and compliance of treatment of amblyopia, the aim of this study was to show that it is feasible to design an electronic frame for context-sensitive liquid crystal glasses, which can measure the state of wear position in a robust manner and detect distinct motion patterns for activity recognition. Methods: Different temple designs with integrated temperature and capacitive sensors were developed to realize the detection of the state of wear position to distinguish three states (correct position/wrong position/glasses taken off). The electronic glasses frame was further designed as a tool for accelerometer data acquisition, which was used for algorithm development for activity classification. For this purpose, training data of 20 voluntary healthy adult subjects (5 females, 15 males) were recorded and a 10-fold cross-validation was computed for classifier selection. In order to perform functional testing of the electronic glasses frame, a proof of concept study was performed in a small group of healthy adults. Four healthy adult subjects (2 females, 2 males) were included to wear the electronic glasses frame and to protocol their activities in their everyday life according to a defined test protocol. Individual and averaged results for the precision of the state of wear position detection and of the activity recognition were calculated. Results: Context-sensitive control algorithms were developed which detected the state of wear position and activity in a proof of concept. The pilot study revealed an average of 91.4% agreement of the detected states of wear position. The activity recognition match was 82.2% when applying an additional filter criterion. Removing the glasses was always detected 100% correctly. Conclusion: The principles investigated are suitable for detecting the glasses’ state of wear position and for recognizing the wearer’s activity in a smart glasses concept.
9

Green, J. D., and L. E. Black. "Status of preclinical safety assessment for immunomodulatory biopharmaceuticals." Human & Experimental Toxicology 19, no. 4 (April 2000): 208–12. http://dx.doi.org/10.1191/096032700678815864.

Abstract:
Scientists from academia, industry, FDA, European and Japanese regulatory groups met to discuss key considerations that are central to the safe and expeditious development of novel biologic agents that are thought to act by modulation of the host immune system. In the presentations and case studies, particular attention was given to the current clinical experience with immunosuppressant agents. Many new biologic agents (such as humanized monoclonal antibodies) have been developed to interact in a highly specific manner with their target. However, their pharmacologic properties may be more complex than originally appreciated, impacting on clinical trial designs. The goal of preclinical safety assessment should be to provide some assurance that patients will be protected from any unacceptable risks by defining “safe” and “active” doses. For immunomodulatory molecules, particular attention is paid to defining potential for increased risks of lymphoproliferative disorders, opportunistic infections, and immune impairment. To address these issues, a wide variety of preclinical studies, mainly in non-human primates, have been performed for the purpose of assessing the potential risk of drug-induced, human immunotoxicity. Case studies presented at this symposium showed the feasibility of assessing humoral and cell-mediated aspects of the immune system, using antigen and neoantigen challenges, immunohistochemical, and flow cytometric (FACS) methods. In some cases, homologous forms of the biologic agent and “humanized” transgenic models have been used to assess potential clinical risks. These data have been useful in providing some assurance that severe adverse effects would not be induced in patients.
Despite these limitations, it is important that industry sponsors provide information to regulatory authorities, the clinical investigator, and patients that provides the best feasible basis for risk assessment, safe clinical trial design, informed consent, and eventually, appropriate labeling. It is recognized that existing preclinical models often have significant limitations. Consequently, the sponsor's and regulatory authority's experienced judgement has determined whether or not the purported benefits of the novel therapeutic agent are balanced by the potential short- and long-term risks. In this field of development, preclinical models often need to reflect recent technology innovations; therefore, these models are not always “validated” in a conventional sense. Experience to date suggests that improved methods and approaches are needed as these agents are developed for use in lower or moderate risk patient populations. Consequently, there is an increased need for an industry/regulatory partnership in order to achieve progress in these risk assessment areas.
10

Rushmer, Rosemary K., Mandy Cheetham, Lynda Cox, Ann Crosland, Joanne Gray, Liam Hughes, David J. Hunter, et al. "Research utilisation and knowledge mobilisation in the commissioning and joint planning of public health interventions to reduce alcohol-related harms: a qualitative case design using a cocreation approach." Health Services and Delivery Research 3, no. 33 (August 2015): 1–182. http://dx.doi.org/10.3310/hsdr03330.

Abstract:
Background: Considerable resources are spent on research to establish what works to improve the nation’s health. If the findings from this research are used, better health outcomes can follow, but we know that these findings are not always used. In public health, evidence of what works may not ‘fit’ everywhere, making it difficult to know what to do locally. Research suggests that evidence use is a social and dynamic process, not a simple application of research findings. It is unclear whether it is easier to get evidence used via a legal contracting process or within unified organisational arrangements with shared responsibilities. Objective: To work in cocreation with research participants to investigate how research is utilised and knowledge mobilised in the commissioning and planning of public health services to reduce alcohol-related harms. Design, setting and participants: Two in-depth, largely qualitative, cross-comparison case studies were undertaken to compare real-time research utilisation in commissioning across a purchaser–provider split (England) and in joint planning under unified organisational arrangements (Scotland) to reduce alcohol-related harms. Using an overarching realist approach and working in cocreation, case study partners (stakeholders in the process) picked the topic and helped to interpret the findings. In Scotland, the topic picked was licensing; in England, it was reducing maternal alcohol consumption. Methods: Sixty-nine interviews, two focus groups, 14 observations of decision-making meetings, two local feedback workshops (n = 23 and n = 15) and one national workshop (n = 10) were undertaken. A questionnaire (n = 73) using a Behaviourally Anchored Rating Scale was issued to test the transferability of the 10 main findings. Given the small numbers, care must be taken in interpreting the findings. Findings: Not all practitioners have the time, skills or interest to work in cocreation, but when there was collaboration, much was learned.
Evidence included professional and tacit knowledge, and anecdotes, as well as findings from rigorous research designs. It was difficult to identify evidence in use and decisions were sometimes progressed in informal ways and in places we did not get to see. There are few formal evidence entry points. Evidence (prevalence and trends in public health issues) enters the process and is embedded in strategic documents to set priorities, but local data were collected in both sites to provide actionable messages (sometimes replicating the evidence base). Conclusions: Two mid-range theories explain the findings. If evidence has saliency (relates to ‘here and now’ as opposed to ‘there and then’) and immediacy (short, presented verbally or visually and with emotional appeal) it is more likely to be used in both settings. A second mid-range theory explains how differing tensions pull and compete as feasible and acceptable local solutions are pursued across stakeholders. Answering what works depends on answering for whom and where simultaneously to find workable (if temporary) ‘blends’. Gaining this agreement across stakeholders appeared more difficult across the purchaser–provider split, because opportunities to interact were curtailed; however, more research is needed. Funding: This study was funded by the Health Services and Delivery Research programme of the National Institute for Health Research.
More sources

Dissertations / Theses on the topic "Always-feasible designs"

1

Mecham, Bradley R. "Modeling and Optimization of Space Use and Transportation for a 3D Walkable City." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3693.

Abstract:
This thesis presents an investigation of a new three-dimensional urban form where walking distances are less than a half-mile and congestion is minimal. The car-free urban form investigated herein is a city composed of skyscrapers massively interconnected with skybridges at multiple levels. The investigation consists of optimizing space use arrangement, skybridge presence or absence, and elevator number to simultaneously minimize total travel time, skybridge light blockage, and elevator energy usage in the city. These objectives are evaluated using three objective functions, the most significant of which involves a three-dimensional, pedestrian-only, three-step version of the traditional four-step planning model. Optimal and diverse designs are discovered with a genetic algorithm that generates always-feasible designs and uses the maximum fitness function. The space use arrangements and travel times of four extreme designs are analyzed and discussed, and the overall results of the investigation are presented. Conclusions suggest that skybridges are beneficial in reducing travel time and that travel times are shorter in cities wherein space use is mixed vertically as well as horizontally.
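The "always-feasible" idea — every candidate the genetic algorithm produces satisfies the constraints by construction — can be sketched with a repair operator on a toy budget-constrained selection problem. The costs, values, budget, and operators below are illustrative assumptions, not the thesis's city model or its actual fitness function:

```python
import random

COSTS = [4, 3, 6, 2, 5]    # hypothetical resource cost per design element
VALUES = [7, 4, 9, 3, 6]   # hypothetical benefit per design element
BUDGET = 10

def cost(bits):
    return sum(c for b, c in zip(bits, COSTS) if b)

def fitness(bits):
    return sum(v for b, v in zip(bits, VALUES) if b)

def repair(bits):
    """Switch off the lowest value-per-cost elements until the budget holds,
    so every individual entering the population is feasible by construction."""
    bits = bits[:]
    order = sorted((i for i in range(len(bits)) if bits[i]),
                   key=lambda i: VALUES[i] / COSTS[i])
    for i in order:
        if cost(bits) <= BUDGET:
            break
        bits[i] = 0
    return bits

random.seed(0)
pop = [repair([random.randint(0, 1) for _ in COSTS]) for _ in range(8)]
for _ in range(20):                 # simple steady-state GA loop
    parent = max(pop, key=fitness)
    child = parent[:]
    child[random.randrange(len(child))] ^= 1   # bit-flip mutation
    pop.append(repair(child))                  # repair keeps it feasible
    pop = sorted(pop, key=fitness, reverse=True)[:8]

best = max(pop, key=fitness)
assert cost(best) <= BUDGET         # always-feasible: no candidate violates
print(best, fitness(best))
```

The design choice repair illustrates is that no fitness penalty or rejection step is needed: infeasible offspring never enter the population, so every evaluation the GA spends goes to a usable design.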

Book chapters on the topic "Always-feasible designs"

1

Fonseca, José, and Marco Vieira. "A Survey on Secure Software Development Lifecycles." In Software Design and Development, 17–33. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4301-7.ch002.

Abstract:
This chapter presents a survey on the most relevant software development practices that are used nowadays to build software products for the web, with security built in. It starts by presenting three of the most relevant Secure Software Development Lifecycles, which are complete solutions that can be adopted by development companies: the CLASP, the Microsoft Secure Development Lifecycle, and the Software Security Touchpoints. However, it is not always feasible to change ongoing projects or replace the methodology in place. So, this chapter also discusses other relevant initiatives that can be integrated into existing development practices, which can be used to build and maintain safer software products: the OpenSAMM, the BSIMM, the SAFECode, and the Securosis. The main features of these security development proposals are also compared according to their highlights and the goals of the target software product.
2

Alsmadi, Izzat. "How Much Automation can be done in Testing?" In Software Design and Development, 1828–49. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4301-7.ch087.

Abstract:
It is widely acknowledged that the software testing stage is a time- and resource-consuming stage of the software project. In addition, this stage comes late in the project, usually at a time when pressure of delivery is high. Those are some reasons why major research projects in testing focus on methods to automate one or more of the activities in this stage. In this chapter, a description of all sub stages in software testing is given along with possible methods to automate activities in each sub stage. The focus in this chapter is on the user interface of the software as it is one of the major components that receives a large percentage of testing. A question always raised in testing is whether full test automation is possible, and whether it can be feasible and applicable. While 100% test automation is theoretic and impractical, given all types of activities that may occur in testing, it is hoped that a maximum coverage in test automation will be visible soon.
3

Fonseca, José, and Marco Vieira. "A Survey on Secure Software Development Lifecycles." In Software Development Techniques for Constructive Information Systems Design, 57–73. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3679-8.ch003.

Abstract:
This chapter presents a survey on the most relevant software development practices that are used nowadays to build software products for the web, with security built in. It starts by presenting three of the most relevant Secure Software Development Lifecycles, which are complete solutions that can be adopted by development companies: the CLASP, the Microsoft Secure Development Lifecycle, and the Software Security Touchpoints. However, it is not always feasible to change ongoing projects or replace the methodology in place. So, this chapter also discusses other relevant initiatives that can be integrated into existing development practices, which can be used to build and maintain safer software products: the OpenSAMM, the BSIMM, the SAFECode, and the Securosis. The main features of these security development proposals are also compared according to their highlights and the goals of the target software product.
4

Sandor, Christian, and Gudrun Klinker. "Lessons Learned in Designing Ubiquitous Augmented Reality User Interfaces." In Human Computer Interaction, 629–44. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-87828-991-9.ch042.

Abstract:
In recent years, a number of prototypical demonstrators have shown that augmented reality has the potential to improve manual work processes as much as desktop computers and office tools have improved administrative work (Azuma et al., 2001; Ong & Nee, 2004). Yet, it seems that the “classical concept” of augmented reality is not enough (see also http://www.ismar05.org/IAR). Stakeholders in industry and medicine are reluctant to adopt it wholeheartedly due to current limitations of head-mounted display technology and due to the overall dangers involved in overwhelming a user’s view of the real world with virtual information. It is more likely that moderate amounts of augmented reality will be integrated into a more general interaction environment with many displays and devices, involving tangible, immersive, wearable, and hybrid concepts of ubiquitous and wearable computing. We call this emerging paradigm ubiquitous augmented reality (UAR) (MacWilliams, 2005; Sandor, 2005; Sandor & Klinker, 2005). It is not yet clear which UAR-based human-computer interaction techniques will be most suitable for users to simultaneously work within an environment that combines real and virtual elements. Their success is influenced by a large number of design parameters. The overall design space is vast and difficult to understand. In Munich, we have worked on a number of applications for manufacturing, medicine, architecture, exterior construction, sports, and entertainment (a complete list of projects can be found at http://ar.in.tum.de/Chair/ProjectsOverview). Although many of these projects were designed in the short-term context of one semester student courses or theses, they provided insight into different aspects of design options, illustrating trade-offs for a number of design parameters.
In this chapter, we propose a systematic approach toward identifying, exploring, and selecting design parameters at the example of three of our projects, PAARTI (Echtler et al., 2003), FataMorgana (Klinker et al., 2002), and a monitoring tool (Kulas, Sandor, & Klinker, 2004). Using a systematic approach of enumerating and exploring a defined space of design options is useful, yet not always feasible. In many cases, the dimensionality of the design space is not known a priori but rather has to be determined as part of the design process. To cover the variety of aspects involved in finding an acceptable solution for a given application scenario, experts with diverse backgrounds (computer science, sensing and display technologies, human factors, psychology, and the application domain) have to collaborate. Due to the highly immersive nature of UAR-based user interfaces, it is difficult for these experts to evaluate the impact of various design options without trying them. Authoring tools and an interactively configurable framework are needed to help experts quickly set up approximate demonstrators of novel concepts, similar to “back-of-the-envelope” calculations and sketches. We have explored how to provide such first-step support to teams of user interface designers (Sandor, 2005). In this chapter, we report on lessons learned on generating authoring tools and a framework for immersive user interfaces for UAR scenarios. By reading this chapter, readers should understand the rationale and the concepts for defining a scheme of different classes of design considerations that need to be taken into account when designing UAR-based interfaces. Readers should see how, for classes with finite numbers of design considerations, systematic approaches can be used to analyze such design options. For less well-defined application scenarios, the chapter presents authoring tools and a framework for exploring interaction concepts.
Finally, a report on lessons learned from implementing such tools and from discussing them within expert teams of user interface designers is intended to provide an indication of progress made thus far and next steps to be taken.
APA, Harvard, Vancouver, ISO, and other styles
5

Kuofie, Mattew, and Sonika Suman. "Future Opportunities for Using Gamification in Management Education." In Handbook of Research on Future Opportunities for Technology Management Education, 155–77. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8327-2.ch010.

Full text
Abstract:
Gamification as a potent pedagogic tool existed even in ancient periods and in different geographical regions. It was observed that using gamification to teach learners was so powerful and useful that it helped the optimal utilization of resources. However, with the advent of virtual reality and augmented reality making inroads into education in general and management education in particular, it is now feasible to use gamification for management education. It is often found that management learners are comparatively brighter and are selected after strict competitive examinations. They easily become blasé about traditional methods of pedagogy. They have always demanded challenging curricula, deep content, and exciting pedagogy. It is in this context that the gamification of learning has been introduced to motivate and challenge learners by using video game design and game elements in learning environments. These games are meant to maximise enjoyment and engagement by influencing the interests of learners and inspiring them to continue with their learning process. Gamification, in its practical use in management educational spaces and corporate training spaces, has made a substantial impact all across the globe. The future opportunities for gamification, both in the content space and the structural space, are going to be far greater than can be imagined, given the explosion taking place in technology.
6

Blow, David. "Accuracy of the final model." In Outline of Crystallography for Biologists. Oxford University Press, 2002. http://dx.doi.org/10.1093/oso/9780198510512.003.0018.

Full text
Abstract:
The result of all the work described in the previous chapters will be a set of coordinates and other data suitable for deposit in the Protein Data Bank. You or I may use these coordinates, and we need to have some insight into their accuracy and reliability. In the previous chapters, indicators have been described, which may suggest aspects of the data or interpretation procedures that might lead to problems. But as the determination of protein crystal structures becomes more routine, many of these indicators are omitted from publications. Fortunately, crystallographic procedures are self-checking to a large extent. It is rare for a major error of interpretation to lead right through to a published refined structure. A high Rfree factor is a warning, especially if coupled with departures from the requirements of correct bond lengths, angles, and acceptable dihedral angles. On the other hand, there will always be a desire to squeeze more results from the data. All interpretations are subject to error; nearly all protein crystals have regions that are less ordered, where accurate interpretation is less feasible; and the structure may be overrefined, using too many variables for the data. If the majority of the molecule is correctly interpreted, a reasonable R factor may be obtained even though some small regions are completely wrong. During refinement it is usual to restrain the bond lengths and bond angles to be near their theoretical values, as described in Chapter 12. The extent to which bond lengths and bond angles depart from these values is often quoted as an indicator of accuracy. These departures are, however, difficult to interpret because they depend on how tightly the restraints have been applied. The same applies to the restraint of certain coordinates to lie in a plane. This difficulty illustrates a general problem. Designers of refinement procedures are understandably anxious to improve their procedures to lead directly to a well-refined structure. 
Every aspect of structure that can be recognized as having a regularity could, in principle, be expressed as a restraint which enforces it during refinement.
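The bond-length-departure statistic the abstract discusses is easy to state concretely. Below is a minimal sketch of how such an RMS deviation is computed; the atom names and target lengths are hypothetical illustrations, not PDB-derived, and, as the abstract cautions, the resulting number is damped by how tightly the same bonds were restrained during refinement.

```python
import math

def bond_length_rmsd(coords, bonds, ideal):
    """RMS deviation of model bond lengths from their target values.

    coords: {atom: (x, y, z)}; bonds: [(a, b)]; ideal: {(a, b): length}.
    """
    sq = 0.0
    for a, b in bonds:
        d = math.dist(coords[a], coords[b])    # modelled bond length
        sq += (d - ideal[(a, b)]) ** 2         # departure from target
    return math.sqrt(sq / len(bonds))

# Hypothetical two-bond fragment built exactly at its target geometry,
# so the RMSD is zero; perturbing any coordinate makes it nonzero.
coords = {"CA": (0.0, 0.0, 0.0), "C": (1.52, 0.0, 0.0), "N": (1.52, 1.33, 0.0)}
bonds = [("CA", "C"), ("C", "N")]
ideal = {("CA", "C"): 1.52, ("C", "N"): 1.33}
rmsd = bond_length_rmsd(coords, bonds, ideal)
```

A small RMSD from a tightly restrained refinement and the same RMSD from a loosely restrained one carry very different amounts of information, which is the difficulty of interpretation the abstract points out.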

Conference papers on the topic "Always-feasible designs"

1

Jiang, Tianyu, Hui Xiao, and Xu Chen. "An Inverse-Free Disturbance Observer for Adaptive Narrow-Band Disturbance Rejection With Application to Selective Laser Sintering." In ASME 2017 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/dscc2017-5184.

Full text
Abstract:
Selective laser sintering (SLS) is an additive manufacturing (AM) process that builds 3-dimensional (3D) parts by scanning a laser beam over powder materials in a layerwise fashion. Due to its capability of processing a broad range of materials, the rapidly developing SLS has attracted wide research attention. Increasing demands on part quality and repeatability are urging the application of customized controls in SLS. In this work, a Youla-Kucera parameterization based forward-model selective disturbance observer (FMSDOB) is proposed for flexible servo control with application to SLS. The proposed method employs the advantages of a conventional disturbance observer but avoids the need for an explicit inversion of the plant, which is not always feasible in practice. Advanced filter designs are proposed to control the waterbed effect. In addition, a parameter adaptation algorithm is constructed to identify the disturbance frequencies online. Simulation and experimentation are conducted on a galvo scanner in an SLS system.
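The FMSDOB and its adaptation law are specific to the paper, but the narrow-band rejection it targets can be illustrated with a plain second-order IIR notch filter. The sketch below is an assumption-laden toy, with made-up frequencies and pole radius and no connection to the paper's controller; it simply shows a tone at the notch frequency being suppressed after a short transient, with the pole radius r trading notch width against the waterbed effect the abstract mentions.

```python
import math

def notch_filter(x, w0, r=0.95):
    """Second-order IIR notch at normalized frequency w0 (rad/sample).

    Zeros on the unit circle at +/-w0 cancel a narrow-band tone; poles
    at radius r < 1 set the notch width (sharper notches spread their
    amplification elsewhere, i.e. the waterbed effect).
    """
    c = 2.0 * math.cos(w0)
    y = []
    xm1 = xm2 = ym1 = ym2 = 0.0
    for xn in x:
        yn = xn - c * xm1 + xm2 + r * c * ym1 - r * r * ym2
        y.append(yn)
        xm1, xm2, ym1, ym2 = xn, xm1, yn, ym1
    return y

# Illustrative narrow-band disturbance at 5% of the sample rate.
w0 = 2.0 * math.pi * 0.05
x = [math.sin(w0 * n) for n in range(2000)]
y = notch_filter(x, w0)
```

After the transient (which decays roughly like r^n) the residual at the notch frequency is negligible; an adaptive scheme like the paper's would additionally estimate w0 online rather than assume it known.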
2

Hettiger, Christof. "Applied Structural Simulation in Railcar Design." In 2017 Joint Rail Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/jrc2017-2330.

Full text
Abstract:
Fifty years ago, the railcar industry relied entirely on classical analysis methods using fundamental solid mechanics theory to establish design and manufacturing protocols. While this method produced working designs, the assumptions required by this type of analysis often led to overdesigned railcars. In the 1950s, the generalized mathematical approach of Finite Element Analysis (FEA) was developed to model the structural behaviors of mechanical systems. FEA involves creating a numerical model by discretizing a continuous system into a finite system of grid divisions. Each grid division, or element, has an inherent geometric shape, and each element comprises points which are referred to as nodes. The connected pattern of nodes and elements is called a mesh. A solver organizes the mesh into a matrix of differential equations and computes the displacements using linear algebraic operations, from which strains and stresses are obtained. The rapid development of computing technology provided the catalyst to drive FEA from research into industry. FEA is currently the standard approach for improving product design cycle times that were previously achieved by trial and error. Moreover, simulation has improved design efficiency, allowing for greater advances in weight, strength, and material optimization. While FEA had its roots planted in the aerospace industry, competitive market conditions have driven simulation into many other professional fields of engineering. For the last few decades, FEA has become essential to the submittal of new railcar designs for unrestricted interchange service across North America. All new railcar designs must be compliant with a list of structural requirements mandated by the Association of American Railroads (AAR), which are listed in its MSRP (Manual of Standards and Recommended Practices), in addition to recommended practices in Finite Element (FE) modeling procedures.
The MSRP recognizes that these guidelines are not always feasible to completely simulate, allowing the analyst to justify situations where deviations are necessary. Benefits notwithstanding, FEA has inherent challenges. It is understood that FEA does not provide exact solutions, only approximations. While FEA can provide meaningful insight into actual physical behavior leading to shorter development times and lower costs, it can also create bogus solutions that lead to potential safety and engineering risks. Regardless of how appropriate the FEA assumptions may be, engineering judgment is required to interpret the accuracy and significance of the results. A constant balance is made between model fidelity and computational solve time. The purpose of this paper is to discuss the FEA approach to railcar analysis that is used by BNSF Logistics, LLC (BNSFL) in creating AAR compliant railcar designs. Additionally, this paper will discuss the challenges inherent to FEA using experiences from actual case studies in the railcar industry. These challenges originate from assumptions that are made for the analysis including element types, part connections, and constraint locations for the model. All FEA terminology discussed in this paper is written from the perspective of an ANSYS Mechanical user. Closing remarks will be given about where current advances in FEA technology may be able to further improve railcar industry standards.
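As a deliberately tiny counterpart to the mesh-assemble-solve pipeline described above, the sketch below builds and solves a 1-D bar FEA model in pure Python. It is a toy under assumed values (bar dimensions, modulus, and load are invented), nothing like a railcar model, but it shows the node/element/stiffness-matrix machinery, and its tip displacement can be checked against the classical closed form u = FL/AE, the kind of sanity check that the engineering judgment discussed here calls for.

```python
def solve_bar_fea(n_elems, length, area, modulus, tip_load):
    """Minimal 1-D bar FEA: assemble element stiffness matrices into a
    global matrix, fix the left node, apply an axial tip load, solve."""
    h = length / n_elems                 # element length
    k = area * modulus / h               # element axial stiffness A*E/h
    n = n_elems + 1                      # number of nodes
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):             # scatter each 2x2 element matrix
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = tip_load
    # Constrain node 0 (u = 0) by deleting its row and column.
    A = [row[1:] for row in K[1:]]
    b = f[1:]
    m = len(b)
    # Gaussian elimination (no pivoting; K is symmetric positive definite).
    for i in range(m):
        for j in range(i + 1, m):
            r = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= r * A[i][c]
            b[j] -= r * b[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):
        u[i] = (b[i] - sum(A[i][j] * u[j] for j in range(i + 1, m))) / A[i][i]
    return [0.0] + u                     # prepend the fixed node

# Assumed values: a 2 m bar, A = 1e-2 m^2, E = 200 GPa, 1 kN tip load.
u = solve_bar_fea(8, 2.0, 0.01, 200e9, 1000.0)
```

For this uniform bar the FE solution matches the analytical answer exactly; real models lack such closed forms, which is why the fidelity-versus-solve-time judgment described above matters.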
3

Bunker, Ronald S. "Evolution of Turbine Cooling." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63205.

Full text
Abstract:
Turbine cooling is a battle between the desire for greater hot section component life and the techno-economic demands of the marketplace. Surprisingly little separates the haves from the have-nots. The evolution of turbine cooling is loosely analogous to that of the Darwinian theory of evolution for animals, starting from highly simplistic forms and progressing to increasingly more complex designs having greater capabilities. Yet even with the several generations of design advances, limitations are becoming apparent as complexity sometimes leads to less robust outcomes in operation. Furthermore, the changing environment for operation and servicing of cooled components, both the natural and the imposed environments, are resulting in new failure modes, higher sensitivities, and more variability in life. The present paper treats the evolution of turbine cooling in three broad aspects including the background development, the current state-of-the-art, and the prospects for the future. Unlike the Darwinian theory of evolution however, it is not feasible to implement thousands of small incremental design changes, random or not, to determine the fittest for survival and advancement. Instead, innovation and experience are utilized to direct the evolution. Over the last approximately 50 years, advances have led to an overall increase in component cooling effectiveness from 0.1 to 0.7. Innovation and invention aside, the performance of the engine has always dictated which technologies advance and which do not. Cooling technologies have been aided by complementary and substantial advancements in materials and manufacturing. The state-of-the-art now contains dozens of internal component cooling methods with their many variations, yet still relies mainly on only a handful of basic film cooling forms that have been known for 40 years. Even so, large decreases in coolant usage, up to 50%, have been realized over time in the face of increasing turbine firing temperatures.
The primary areas of greatest impact for the future of turbine cooling are discussed, these being new engine operating environments, component and systems integration effects, revolutionary turbine cooling, revolutionary manufacturing, and the quantification of unknowns. One key will be the marriage of design and manufacturing to bring about the concurrent use of engineered micro cooling or transpiration, with the ability of additive manufacturing. If successful, this combination could see a further 50% reduction in coolant usage for turbines. The other key element concerns the quantification of unknowns, which directly impacts validation and verification of current state-of-the-art and future turbine cooling. Addressing the entire scope of the challenges will require future turbine cooling to be of robust simplicity and stability, with freeform design, much as observed in the “designs” of nature.
4

Henderson, Daniel, Thomas Booth, Kathryn Jablokow, and Neeraj Sonalkar. "Best Fits and Dark Horses: Can Design Teams Tell the Difference?" In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22589.

Full text
Abstract:
Abstract Design teams are often asked to produce solutions of a certain type in response to design challenges. Depending on the circumstances, they may be tasked with generating a solution that clearly follows the given specifications and constraints of a problem (i.e., a Best Fit solution), or they may be encouraged to provide a higher risk solution that challenges those constraints, but offers other potential rewards (i.e., a Dark Horse solution). In the current research, we investigate: what happens when design teams are asked to generate solutions of both types at the same time? How does this request for dual and conflicting modes of thinking impact a team’s design solutions? In addition, as concept generation proceeds, are design teams able to discern which solution fits best in each category? Rarely, in design research, do we prompt design teams for “normal” designs or ask them to think about both types of solutions (boundary preserving and boundary challenging) at the same time. This leaves us with the additional question: can design teams tell the difference between Best Fit solutions and Dark Horse solutions? In this paper, we present the results of an exploratory study with 17 design teams from five different organizations. Each team was asked to generate both a Best Fit solution and a Dark Horse solution in response to the same design prompt. We analyzed these solutions using rubrics based on familiar design metrics (feasibility, usefulness, and novelty) to investigate their characteristics. Our assumption was that teams’ Dark Horse solutions would be more novel, less feasible, but equally useful when compared with their Best Fit solutions. Our analysis revealed statistically significant results showing that teams generally produced Best Fit solutions that were more useful (met client needs) than Dark Horse solutions, and Dark Horse solutions that were more novel than Best Fit solutions. 
When looking at each team individually, however, we found that Dark Horse concepts were not always more novel than Best Fit concepts for every team, despite the general trend in that direction. Some teams created equally novel Best Fit and Dark Horse solutions, and a few teams generated Best Fit solutions that were more novel than their Dark Horse solutions. In terms of feasibility, Best Fit and Dark Horse solutions did not show significant differences. These findings have implications for both design educators and design practitioners as they frame design prompts and tasks for their teams of interest.
5

Kestner, Brian K., Hernando Jimenez, Christopher A. Perullo, Jeff S. Schutte, and Dimitri N. Mavris. "Identifying Key Technology Areas to Fundamentally Reduce Risk in Engine Performance and Noise for Future Commercial Applications." In ASME Turbo Expo 2013: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/gt2013-95695.

Full text
Abstract:
There have been many studies in the past which have evaluated the potential fuel savings benefits of various engine technologies applied simultaneously to a gas turbine engine. Coupled with the inclusion of new engine technology is almost always a change to the engine cycle design in order to maximize the full benefits of the technology. Current research focuses on the trends in benefits due to potential technologies and cycle design changes and identifies which specific technology areas can be targeted to reduce risk relative to meeting performance goals. The results are used to identify high priority technologies that increase the chances of meeting performance metrics. Such investigations can add value to government and industry entities that are engaging in aviation technology development programs attempting to simultaneously meet reduction goals for noise and fuel burn. Jet and fan noise are primary sources of noise on current engine architectures. Noise suppression technologies can be applied to current engine designs to suppress sources of noise thereby reducing the noise impact of the engine. Similarly, the engine cycle can be changed to reduce noise and fuel burn. The critical link is that feasible engine cycles are tightly coupled to the subsystem technologies applied to various engine components. This leads to two critical questions. First, what part of the engine cycle is key to simultaneously meeting noise and fuel burn goals? This inevitably leads to one of three areas, the propulsor (BPR), the core (OPR), or the transmission mechanism between the core and propulsor. The other question is: which subsystem technologies are crucial to achieve the necessary core, propulsor, or power transmission improvements? To answer these questions, this study examines and quantifies the potential benefits in technologies developed for both noise source reduction and thermal / propulsive efficiency increases. 
Trade studies on the potential impacts of each of the technologies are performed to capture the sensitivity on fuel burn and noise from changes to assumed benefits of the engine cycle and applied subsystem technologies. It will be demonstrated that a geared transmission is the primary enabler to simultaneously meeting performance goals. Should the geared transmission fall short, other areas of improvement simply cannot overcome the shortfall in fuel burn and noise.
6

Schutte, Jeffrey, Jimmy Tai, Jonathan Sands, and Dimitri Mavris. "Cycle Design Exploration Using Multi-Design Point Approach." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-69334.

Full text
Abstract:
The focus of this study is to compare the aerothermodynamic cycle design space of a gas turbine engine generated using two on-design approaches. The traditional approach uses a single design point (SDP) for on-design cycle analysis, where off-design cycle analysis must be performed at other operating conditions of interest. A multi-design point (MDP) method performs on-design cycle analysis at all operating conditions where performance requirements are specified. Effects on the topography of the cycle design space as well as the feasibility of the space are examined. The impacts that performance requirements and cycle assumptions have on the bounds and topography of the feasible space are investigated. The deficiencies of a SDP method in determining an optimum gas turbine engine will be shown for a given set of requirements. Analysis will demonstrate that the MDP method, unlike the SDP method, always obtains a properly sized engine for a set of given requirements and cycle design variables, resulting in an increased feasible region of the aerothermodynamic cycle design space from which the optimum performance engine can be obtained.
7

Wang, Y., and E. Sandgren. "A New Dynamic Basis Algorithm for Solving Linear Programming Problems for Engineering Design." In ASME 1988 Design Technology Conferences. American Society of Mechanical Engineers, 1988. http://dx.doi.org/10.1115/detc1988-0011.

Full text
Abstract:
Abstract A new linear programming algorithm is proposed which has significant advantages compared to the traditional simplex method. The search direction generated, which is always along a common edge of the active constraint set, is used to locate candidate constraints and can be used to modify the current basis. The dimension of the basis begins at one and dynamically increases but remains less than or equal to the number of design variables. This is true regardless of the number of inequality constraints present, including upper and lower bounds. The proposed method can operate equally well from a feasible or infeasible point. The pivot operation and artificial variable strategy of the simplex method are not used. Examples are presented and results are compared with a traditional revised simplex method.
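Neither the proposed dynamic-basis algorithm nor the revised simplex code is reproduced here, but the geometry both exploit (optima at vertices of the feasible polytope, searches that move along edges of the active constraint set) can be made concrete with a deliberately naive baseline: enumerate every vertex of a small 2-variable problem and pick the best. The function name, constraints, and objective below are invented for illustration.

```python
from itertools import combinations

def lp_by_vertex_enumeration(c, A, b, tol=1e-9):
    """Brute-force 2-variable LP: maximize c.x subject to A x <= b.

    Intersects every pair of constraint boundaries, keeps the feasible
    intersection points (the vertices), and returns the best one.
    Edge-following methods such as the simplex visit only a path of
    these vertices rather than all of them.
    """
    best_val, best_pt = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:                       # parallel boundaries
            continue
        x = (b1 * a2[1] - b2 * a1[1]) / det      # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + tol for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best_val is None or val > best_val:
                best_val, best_pt = val, (x, y)
    return best_val, best_pt

# Made-up example: maximize 3x + 2y with x + y <= 4, x <= 2, y <= 3, x, y >= 0.
val, pt = lp_by_vertex_enumeration(
    (3, 2), [(1, 1), (1, 0), (0, 1), (-1, 0), (0, -1)], [4, 2, 3, 0, 0])
```

With m constraints this checks all C(m, 2) intersections, which is exactly the combinatorial explosion that basis-maintaining methods like the one in this paper are designed to avoid.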
8

Piipponen, Samuli, Eero Hyry, Jukka Tuomela, and Andreas Müller. "Tailored Formulations of Geometric Constraints for Algebraic Analysis of Mechanism Kinematics." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46094.

Full text
Abstract:
Algebraic analysis of the kinematics of large and complicated mechanisms is hindered by the fact that the computational complexity grows very quickly as a function of the number of variables. However, for mechanisms which contain only revolute and/or cylindrical joints, one can always simplify the original constraint ideal by choosing the right prime component of the ideal of the revolute or cylindrical joint. By making these choices one automatically discards components which are irrelevant to the problem, and hence the resulting ideal is simpler than the original. In this way one can immediately analyze some systems whose analysis in the original form is not feasible. As an example we illustrate our results using the spherical 4-bar mechanism.
9

Pasha, Akber, Andrew S. Ragland, and Suichu Sun. "Thermal and Economic Considerations for Optimizing HRSG Design." In ASME Turbo Expo 2002: Power for Land, Sea, and Air. ASMEDC, 2002. http://dx.doi.org/10.1115/gt2002-30250.

Full text
Abstract:
The design, operation, and usage of Heat Recovery Steam Generators (HRSGs) have undergone considerable changes in the last 30 years. Nowadays, instead of being an optional item, HRSGs are a major part of the combined cycle power plant. This makes it necessary to optimize the design and operation of the HRSG so that it can be integrated with the total plant. However, because of the complexity, it is not always feasible to evaluate all possible configurations for selecting the optimum one within the given time constraints. An attempt is made here to present the parametric effect of various variables through descriptive graphs. These graphs are developed for general cases but can be applied to specific cases to give the trend rather than absolute values. Cycle designers can use these to narrow down the candidate HRSG configurations. Plant operators may be able to use them to improve performance by simple additions or modifications.
10

Schmidt-Eisenlohr, Uwe L. H., and Oliver E. Kosing. "Turbo Fans With Very High Bypass Ratio but Acceptable Dimensions." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-50817.

Full text
Abstract:
Turbofan engines for commercial aircraft will have to improve substantially in fuel burn, noise level, and NOx emissions to fulfil the ACARE 2020 environmental goals. No doubt very high bypass ratios (VHBR) will be necessary to achieve the ambitious noise reduction targets. But the weight and drag of the nacelle cannot increase further, and under-wing installation must always be feasible. Several innovative design options, such as counter-rotating turbo components, an air-breathing nacelle, and an off-axis core, are presented and discussed in the paper; these could help to overcome the increasing dimensions of the fan and the nacelle with increasing bypass ratio.