
Journal articles on the topic 'Pugh Decision Matrix'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 49 journal articles for your research on the topic 'Pugh Decision Matrix.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Olabanji, Olayinka Mohammed, and Khumbulani Mpofu. "Fusing Multi-Attribute Decision Models for Decision Making to Achieve Optimal Product Design." Foundations of Computing and Decision Sciences 45, no. 4 (2020): 305–37. http://dx.doi.org/10.2478/fcds-2020-0016.

Full text
Abstract:
Manufacturers need to select the best design from alternative design concepts in order to meet customer demand and win a larger share of a competitive market flooded with multifarious designs. Evaluation of conceptual design alternatives can be modelled as a Multi-Criteria Decision Making (MCDM) process because it involves conflicting design features with different sub-features. Hybridization of Multi-Attribute Decision Making (MADM) models has been applied in various fields of management, science, and engineering to make decision-making more robust, but the extension of these hybridized MADM models to decision making in engineering design still requires attention. In this article, an integrated MADM model comprising Fuzzy Analytic Hierarchy Process (FAHP), Fuzzy Pugh Matrix, and Fuzzy VIKOR was developed and applied to evaluate conceptual designs of a liquid spraying machine. The fuzzy AHP was used to determine weights of the design features and sub-features by virtue of its fuzzified comparison matrix and synthetic extent evaluation. The fuzzy Pugh matrix provides a methodical structure for determining performance using all the design alternatives in turn as the basis and obtaining aggregates for the designs using the weights of the sub-features. The fuzzy VIKOR generates the decision matrix from the aggregates of the fuzzified Pugh matrices and determines the best design concept from the defuzzified performance index. In the end, the optimal design concept is determined for the liquid spraying machine.
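The weighted Pugh aggregation this abstract builds on can be sketched in a few lines. This is a minimal crisp (non-fuzzy) illustration, not the authors' FAHP/fuzzy-VIKOR pipeline; the criteria, weights, and scores below are hypothetical.

```python
# A minimal crisp (non-fuzzy) Pugh decision matrix sketch. Each alternative
# is judged against a datum design per criterion as better (+1), same (0),
# or worse (-1), and the signed scores are combined with criterion weights.
# The criteria, weights, and scores here are hypothetical.

CRITERIA_WEIGHTS = {"cost": 0.4, "reliability": 0.35, "ease_of_use": 0.25}

# Per-criterion comparison of each concept against the datum concept.
PUGH_SCORES = {
    "concept_A": {"cost": +1, "reliability": 0, "ease_of_use": -1},
    "concept_B": {"cost": -1, "reliability": +1, "ease_of_use": +1},
}

def weighted_pugh_total(scores: dict, weights: dict) -> float:
    """Weighted sum of the +1/0/-1 comparisons against the datum."""
    return sum(weights[c] * s for c, s in scores.items())

totals = {name: weighted_pugh_total(s, CRITERIA_WEIGHTS)
          for name, s in PUGH_SCORES.items()}
best = max(totals, key=totals.get)  # the datum itself scores 0 by definition
```

In the fuzzy variant the paper describes, the crisp +1/0/-1 entries and the weights would instead be fuzzy numbers, with the totals defuzzified before ranking.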
APA, Harvard, Vancouver, ISO, and other styles
2

Zhu, Tian-Lu, Ya-Jun Li, Ceng-Juan Wu, Han Yue, and Yi-Qian Zhao. "Research on the Design of Surgical Auxiliary Equipment Based on AHP, QFD, and PUGH Decision Matrix." Mathematical Problems in Engineering 2022 (October 22, 2022): 1–13. http://dx.doi.org/10.1155/2022/4327390.

Full text
Abstract:
To improve the efficiency of medical staff in surgical operations and meet the physiological and psychological needs of surgeons, nurses, and patients during operations, surgical auxiliary equipment is designed. This paper builds a design research model based on AHP (analytic hierarchy process), QFD (quality function deployment), and the Platts conceptual decision matrix (PUGH decision matrix). Firstly, the user requirements are weighted through AHP analysis, and the design elements are prioritized based on the weight values. Then, QFD is used to analyze the design features of surgical auxiliary equipment in terms of structure, function, and shape, and a house of quality is established to obtain the significance of the design features. Finally, the PUGH decision matrix is constructed to screen and evaluate multiple schemes, and the optimal design scheme is obtained. From the perspective of user requirements and product design characteristics, the significance of design elements is analyzed and calculated, which guides design practice in completing the innovative design of surgical auxiliary equipment. The combination of AHP, QFD, and the PUGH decision matrix is introduced into the innovative design of surgical auxiliary equipment, effectively avoiding subjective factors in product design, improving the scientific rigour of the design, and providing new methods and ideas for the design and research of surgical auxiliary equipment and similar products.
3

Thakker, A., J. Jarvis, M. Buggy, and A. Sahed. "3DCAD conceptual design of the next-generation impulse turbine using the Pugh decision-matrix." Materials & Design 30, no. 7 (2009): 2676–84. http://dx.doi.org/10.1016/j.matdes.2008.10.011.

Full text
4

Raghavendra, Devadas, and G. Cholli Nagaraj. "PUGH Decision Trapezoidal Fuzzy and Gradient Reinforce Deep Learning for Large Scale Requirement Prioritization." Indian Journal of Science and Technology 15, no. 12 (2022): 542–53. https://doi.org/10.17485/IJST/v15i12.1757.

Full text
Abstract:
Objective: To prioritize requirements for large-scale software projects in good time, accounting for uncertainty in the opinions of different stakeholders. Methods: We propose the Pugh Trapezoidal Fuzzy and Gradient Reinforce Learning (PTF-GRL) method for large-scale software requirement prioritization. A Pugh Decision-based Trapezoidal Fuzzy Requirement Selection model is designed, taking as input the functional and non-functional requirements of the corresponding stakeholders. With the assistance of Trapezoidal Fuzzy Inference, the qualitative factors are mapped to corresponding numeric factors, which increases computational efficiency. Findings: Performance is analyzed on four parameters. The first is accuracy, where our method showed improvements of 4%, 7%, and 3% over JRD-SCRUM, IFS, and SRPTackle respectively. The second is prioritization time, where our method reduced time by 30%, 37%, and 39% compared with the existing methods. The third is precision, where our method improves precision by 6%, 10%, and 5% compared with the same methods. The final parameter is test suite execution, where our method showed improvements of 12%, 19%, and 5% over the existing methods. Novelty/Applications: This work demonstrates better performance along with optimal test suite execution, even under uncertainty, compared with existing similar methods. Keywords: Software Project; Pugh Decision Matrix; Trapezoidal Fuzzy Inference; Gradient Orientation; Reinforce Learning
5

Li, Hong, and Li Shi. "Applied research on the design of protective clothing based on the Kano-QFD-PUGH method." PLOS ONE 19, no. 10 (2024): e0312045. http://dx.doi.org/10.1371/journal.pone.0312045.

Full text
Abstract:
In order to improve the user experience of protective clothing for healthcare workers and reduce the design blindness and subjectivity of developers, we propose a research methodology that combines the Kano model, QFD quality function deployment, and PUGH decision-making scheme to develop conceptual solutions for medical protective clothing design. Firstly, we use the Kano model to identify the user requirements of healthcare workers and construct a hierarchy of functional requirements for protective clothing. Secondly, we use the QFD method to weigh the protective clothing design elements, convert user requirements into design elements, establish a relationship matrix between user requirements and design elements, and generate four conceptual design solutions based on the results. Finally, we use the PUGH decision-making method to filter and select the best concept solution for protective clothing design, and validate the design evaluation. Our results show that the protective clothing solutions designed using the combined Kano-QFD-PUGH system approach have a higher level of satisfaction compared to traditional protective clothing design. This method accurately explores the mapping relationship between user requirements and design functional elements and can be used as a general reliability design method. It helps to improve the development efficiency of designers and the decision-making role for design concept solution preference. Overall, our research methodology provides a comprehensive approach to developing medical protective clothing, which can be useful for designers and decision-makers in the healthcare industry.
6

Wang, Nan, and Yin Zhao. "Research on the design of growable children’s beds based on combined hierarchical analyses." BioResources 19, no. 4 (2024): 8084–102. http://dx.doi.org/10.15376/biores.19.4.8084-8102.

Full text
Abstract:
Although the market share of domestic children’s furniture is increasing annually, some potential problems limit its long-term, stable development, and there is still a gap between China and foreign countries. This study focused on demand preferences for growable children’s beds and examined the design features that influence these preferences. It introduces a combination of the Analytic Hierarchy Process (AHP), Quality Function Deployment (QFD), and the Platts conceptual decision matrix (PUGH) into an innovative design research model for children’s furniture (AHP-QFD-PUGH). The study screened and classified the decision-making indicators obtained from the research, ranked their importance by quantitative calculation, and finally proposed an optimal design solution. Additionally, to further study the structural characteristics, the function-behavior-structure (FBS) model served as a supplementary analysis tool to effectively circumvent subjective factors in product design. This integrated model accurately explored user needs and product characteristics, providing substantial guidance and new ideas for optimizing the design of growable children’s beds and enhancing the growth of the children’s furniture industry.
7

Fang, Meichen, Wei Yang, Hui Li, and Younghwan Pan. "Enhancing User Experience through Optimization Design Method for Elderly Medication Reminder Mobile Applications: A QFD-Based Research Approach." Electronics 12, no. 13 (2023): 2860. http://dx.doi.org/10.3390/electronics12132860.

Full text
Abstract:
Poor medication adherence among older adults is a widespread problem worldwide. As the population ages, the design of smartphone medication management apps is critical to improving medication adherence among older adults. Taking the design of an elderly medication reminder app as an example, this study proposes a sustainable design research method that integrates the KANO model, Analytic Hierarchy Process (AHP), Quality Function Deployment (QFD), and the PUGH decision matrix. The method collects user demands through in-depth interviews and applies the KANO model to classify these demands. The hierarchical structure of user needs is established using AHP, and the priorities are sorted according to the weight and importance determined by the judgment matrix. QFD is used to translate user needs into design requirements, and the house of quality matrix identifies the key design requirements. Finally, design alternatives are evaluated using Pugh’s concept selection method. The results of this study demonstrate that the integrated KANO-AHP-QFD-PUGH method can serve as a sustainable, optimal design approach for the user experience of a medication reminder application for the elderly. This integrated method not only facilitates innovative optimization and sustainability of application design methods but also provides valuable theoretical and practical insights for future medication-assistance design for elderly users.
8

Tan, Wee Choon, Lim Eng Aik, Teoh Thean Hin, et al. "Conceptual design for smart organic waste recycling system." Journal of Physics: Conference Series 2051, no. 1 (2021): 012042. http://dx.doi.org/10.1088/1742-6596/2051/1/012042.

Full text
Abstract:
Composting can reduce the disposal of food waste, but it is a complex and time-consuming process. To shorten the composting time, a machine that can provide the optimum conditions for the decomposition of food material is needed. This paper presents the development of a conceptual design for a smart organic waste recycling system. Customer requirements are identified and translated into engineering characteristics. A total of 5 conceptual designs are generated. From the Pugh selection chart and the weighted decision matrix, concept 1 is selected.
9

Grønningsæter, Magnus Sjøholt, and Siv Engen. "Conceptual Modeling for Early‐Phase Decision‐Making in the Maritime Industry: A Case Study of Power Generation System Concept Selection." INCOSE International Symposium 34, no. 1 (2024): 90–105. http://dx.doi.org/10.1002/iis2.13134.

Full text
Abstract:
This study examines the use of conceptual modeling to aid in selecting a power generation system concept in the maritime industry. The research objective is to understand how conceptual modeling can enhance decision‐making during the early phases of concept evaluation. The study was conducted at a world‐leading maritime technology company to address the need for more formal processes to support decision‐making in complex development projects. A conceptual modeling approach was applied in an industry case to facilitate decision‐making in the early phases of a development project. The study shows that conceptual modeling is effective in supporting early‐phase decision‐making in such projects: conceptual models manage complexity, enhance understanding, and enable clear communication among team members. By combining conceptual modeling with a Pugh matrix, informed decision‐making is facilitated, aligning with stakeholders' objectives. Overall, conceptual modeling provides a structured representation of the problem domain, guiding early‐phase decision‐making.
10

Andriamanampisoa, Tsiry Angelos, Eva Nivonirina Andriamanampisoa, Karl Zimmermann, Harry Chaplin, and Edouard Andrianarison. "Identification Du Modèle De Distillateur Solaire À Effet De Serre Adapté Au Sud De Madagascar : Approche Par Matrice De Pugh." International Journal of Progressive Sciences and Technologies 45, no. 1 (2024): 543. https://doi.org/10.52155/ijpsat.v45.1.6358.

Full text
Abstract:
Southern Madagascar is experiencing severe water scarcity. To address this issue effectively, it is crucial to select the appropriate technology for desalinating seawater or brackish water, taking into account local contextual factors. This study presents the application of the Pugh matrix as a decision-support tool for selecting the solar distiller model best suited to the context of the region. The study area and target population were described, and seven solar distiller models were evaluated based on cost, sustainability, technology, productivity, and quality of the distilled water obtained. The Pugh matrix was employed to rank and compare the designs, taking into account the quantitative and qualitative results related to each criterion. According to the tool, the double-slope solar distiller was deemed the most suitable. This finding demonstrates that the Pugh matrix can aid in combating water scarcity by providing a systematic approach to selecting solar distiller models. However, the study acknowledged its limitations, emphasizing the need for further research on the consideration of other selection criteria and on the optimization of solar distillers. In conclusion, this study offers valuable information to policymakers, practitioners, and researchers working on sustainable water management in Southern Madagascar, thus contributing to the fight against water scarcity and the advancement of renewable technologies.
11

OKUYUCU, Ş. Ebru, and Demet TANIK. "Designing Products with an Evaluating– Eliminating–Updating Loop Developed with the Pugh Decision Matrix Method: Design Studio Exercises." PLANARCH - Design and Planning Research 7, no. 2 (2023): 116–29. http://dx.doi.org/10.5152/planarch.2023.23173.

Full text
12

Baležentis, Tomas, Mangirdas Morkūnas, Agnė Žičkienė, Artiom Volkov, Erika Ribašauskienė, and Dalia Štreimikienė. "Policies for Rapid Mitigation of the Crisis’ Effects on Agricultural Supply Chains: A Multi-Criteria Decision Support System with Monte Carlo Simulation." Sustainability 13, no. 21 (2021): 11899. http://dx.doi.org/10.3390/su132111899.

Full text
Abstract:
This paper proposes an integrated approach towards rapid decision-making in the agricultural sector aimed at improvement of its resilience. Methodologically, we seek to devise a framework that is able to take the uncertainty regarding policy preferences into account. Empirically, we focus on the effects of COVID-19 on agriculture. First, we propose a multi-criteria decision-making framework following the Pugh matrix approach for group decision-making. The Monte Carlo simulation is used to check the effects of the perturbations in the criteria weights. Then, we identify the factors behind agricultural resilience and organize them into the three groups (food security, agricultural viability, decent jobs). The expert survey is carried out to elicit the ratings in regard to the expected effects of the policy measures with respect to dimensions of agricultural resilience. The case of Lithuania is considered in the empirical analysis. The existing and newly proposed agricultural policy measures are taken into account. The measures related to alleviation of the financial burden (e.g., credit payment deferral) appear to be the most effective in accordance with the expert ratings.
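The weight-perturbation check this abstract describes can be illustrated with a small Monte Carlo sketch. This is not the authors' model: the policy measures, Pugh scores, base weights, and jitter range below are all invented for illustration.

```python
import random

# Illustrative sketch (not the authors' model) of a Monte Carlo robustness
# check on a Pugh-matrix ranking: each trial jitters the criteria weights,
# renormalizes them, and records which measure wins. The measures, scores,
# base weights, and jitter range are all hypothetical.

random.seed(42)

BASE_WEIGHTS = [0.5, 0.3, 0.2]   # e.g. food security, viability, decent jobs
SCORES = {                        # per-criterion Pugh scores vs. the datum
    "credit_deferral": [+1, +1, 0],
    "direct_payments": [+1, 0, -1],
    "tax_relief":      [0, -1, +1],
}

def winner(weights):
    totals = {m: sum(w * s for w, s in zip(weights, sc))
              for m, sc in SCORES.items()}
    return max(totals, key=totals.get)

wins = {m: 0 for m in SCORES}
for _ in range(1000):
    jittered = [max(w + random.uniform(-0.1, 0.1), 0.01) for w in BASE_WEIGHTS]
    total = sum(jittered)
    wins[winner([w / total for w in jittered])] += 1

# The fraction of trials each measure wins indicates how stable the top
# rank is under weight perturbations.
```

A measure that wins across nearly all perturbed-weight trials, as credit deferral does in this toy setup, can be recommended with more confidence than one whose rank flips under small weight changes.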
13

Cherd, Teoh Vil, Nor Hazadura Hamzah, Wong Kah Poh, et al. "Conceptual Design of a Bike Towing Trailer with Smart Towing Trailer." Journal of Physics: Conference Series 2051, no. 1 (2021): 012017. http://dx.doi.org/10.1088/1742-6596/2051/1/012017.

Full text
Abstract:
The roads in rural places are narrow and bumpy, so large vehicles such as cars, trucks, and lorries have no access to them. Most villagers therefore choose light vehicles such as bicycles and motorcycles as their primary means of delivering goods using a towing trailer. The towing trailers found in many rural areas are very traditional and inconvenient. This paper focuses on the conceptual design of a smart bike towing trailer equipped for multipurpose towing based on customer requirements. A total of 5 conceptual designs are produced. The Pugh selection chart and weighted decision matrix are used, and concept 4 is found to have the best design. A detailed 3D model of concept 4 is developed for future testing and analysis of the design prior to fabrication.
14

Rajesh R and Nandhini N. "Concept Development and RULA Analysis of a Virtual Prototype Wheelchair for Physically Challenged Persons." Kongunadu Research Journal 11, no. 1 (2024): 1–8. https://doi.org/10.26524/krj.2024.1.

Full text
Abstract:
A wheelchair is one of the most common indoor locomotion vehicles used by people with disabilities or illness whose functions are limited; it requires human force to move it and to transfer the person from a bed to the wheelchair. This study aims to design and develop a wheelchair using concept generation, concept design, and selection of a concept using different selection methods. Various designs of the different parts of a wheelchair were generated, and suitable designs were combined to generate concepts. The Pugh chart and weighted decision matrix were used to select the best concept based on the criteria. The selected concept was modeled and analyzed using the CATIA RULA module. Human comfort was analyzed using RULA analysis. The current virtual prototype was identified as safe and comfortable for the caretaker and patient.
15

Seechurn, Yashwantraj, and Ritish Boodhun. "Optimum Redesign of an Agricultural Water Bowser." Designs 2, no. 4 (2018): 45. http://dx.doi.org/10.3390/designs2040045.

Full text
Abstract:
There are many types of agricultural water bowsers on the market, which vary in geometry and size. However, in all such bowsers there are “unused spaces” between the bottom of the tank and the axle. The objective of this research was to design an agricultural water bowser with improved capacity by exploiting the “unused spaces”. This would allow a sufficient amount of water to be supplied to wide areas in a short time. Each concept of agricultural water bowser was generated as an integrated chassis water tank to be hitched to a tractor, and the best concept was chosen using a multi-criteria decision-making methodology (house of quality matrix and Pugh selection matrix). The selected design consisted of a U-shaped angle-bent bottom sheet welded to a circular top sheet. The European ADR standard (the agreement on the international carriage of dangerous goods by road) was used for the sizing of the bowser, and the selected material was S275 steel. The resultant forces on the shell of the bowser were calculated using analytical methods. A 3-D model of the bowser was developed in SolidWorks 2015, and the static structural analysis tool was used to examine stresses on the body for various types of loading, roads, and driving maneuvers. The shape and size of the bottom part of the proposed bowser increased the capacity of the tank by 20.3%.
16

Camillus, J., E. Mat Tokit, M. A. Mohd Rosli, F. A. Z. Mohd Sa’at, S. G. Herawan, and S. I. A. Syahputra. "Conceptual Design of Gravitational Water Vortex Turbine for Green Energy Generation." Journal of Physics: Conference Series 2051, no. 1 (2021): 012005. http://dx.doi.org/10.1088/1742-6596/2051/1/012005.

Full text
Abstract:
In this study, the conceptual design of a gravitational water vortex turbine has been developed. The conceptual turbine was designed to extract energy from the house rooftop and generate small-power electricity using design principles. Through the House of Quality concept, respondent requirements were extracted and matched with the scientific view in the design of the turbine. The data from the respondents were then linked to the product characteristics by decomposing the function of the turbine. Three divergent conceptual designs were developed by cross-linking part selections through a morphological chart. Of all the designs, the final turbine design was selected by concept screening through the Pugh method and concept scoring with a weighted decision matrix. Design 3 was selected as the final product of this conceptual design study, with an inlet opening angle of 60°, a pre-rotational angle of 30°, three blades, and polyvinyl chloride as the body material.
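The screen-then-score sequence this abstract describes (Pugh screening followed by weighted-decision-matrix scoring) can be sketched as follows; the criteria, weights, Pugh scores, and ratings are hypothetical and do not reproduce the paper's data.

```python
# Hypothetical sketch of a two-stage concept selection: a Pugh screening
# pass drops concepts scoring below the datum, then a weighted decision
# matrix ranks the survivors. Criteria, weights, Pugh scores, and ratings
# are all invented for illustration.

WEIGHTS = {"efficiency": 0.5, "cost": 0.3, "manufacturability": 0.2}

PUGH = {  # +1 better than the datum, 0 same, -1 worse
    "design_1": {"efficiency": +1, "cost": -1, "manufacturability": 0},
    "design_2": {"efficiency": -1, "cost": -1, "manufacturability": -1},
    "design_3": {"efficiency": +1, "cost": 0, "manufacturability": +1},
}

RATINGS = {  # 1-5 performance ratings used in the scoring stage
    "design_1": {"efficiency": 4, "cost": 2, "manufacturability": 3},
    "design_3": {"efficiency": 5, "cost": 3, "manufacturability": 4},
}

# Stage 1: screening -- keep concepts whose net Pugh score is non-negative.
survivors = [d for d, s in PUGH.items() if sum(s.values()) >= 0]

# Stage 2: scoring -- weighted decision matrix over the survivors.
scores = {d: sum(WEIGHTS[c] * r for c, r in RATINGS[d].items())
          for d in survivors if d in RATINGS}
winner = max(scores, key=scores.get)
```

Screening with unweighted +1/0/-1 sums keeps the first pass cheap; the finer-grained weighted ratings are only elicited for the concepts that survive it.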
17

Picardi, Giacomo, Mauro De Luca, Giovanni Chimienti, Matteo Cianchetti, and Marcello Calisti. "User-Driven Design and Development of an Underwater Soft Gripper for Biological Sampling and Litter Collection." Journal of Marine Science and Engineering 11, no. 4 (2023): 771. http://dx.doi.org/10.3390/jmse11040771.

Full text
Abstract:
Implementing manipulation and intervention capabilities in underwater vehicles is of crucial importance for commercial and scientific reasons. Mainstream underwater grippers are designed for the heavy-load tasks typical of the industrial sector; however, due to the lack of alternatives, they are frequently used in biological sampling applications to handle irregular, delicate, and deformable specimens, with a consequent high risk of damage. To overcome this limitation, the design of grippers for marine science applications should explicitly account for the requirements of end-users. In this paper, we aim to take a step forward and propose to account systematically for the needs of end-users by resorting to design tools used in industry for the conceptualization of new products, which can yield great benefits to both applied robotic research and marine science. After generating the concept design for the gripper using a reduced version of the House of Quality and the Pugh decision matrix, we report on its mechanical design, construction, and preliminary testing. The paper covers the full design pipeline, from requirements collection to preliminary testing, with the aim of fostering and providing structure to fruitful interdisciplinary collaborations at the interface of robotics and marine science.
18

Bourgeois, Austin, Brian Rice, and Chung-Hyun Goh. "Design Optimization of the Lift Mechanism in the Robotic Walking Training Device Using the Engineering Design Methodology." Applied Sciences 14, no. 1 (2023): 327. http://dx.doi.org/10.3390/app14010327.

Full text
Abstract:
Partial paralysis caused by spinal cord injury (SCI) or stroke are two of the most prevalent forms of physical disability. Through proper gait training, people with incomplete SCI have more potential to retain or regain the ability to walk than those with complete SCI. To help patients who have these disabilities regain the function of walking unassisted, the robotic walking training device (RWTD) has been developed to perform gait rehabilitation. This research plays a pivotal role in advancing medical robotic technology and gait rehabilitation by conducting a comprehensive evaluation and comparison of three lift mechanisms. Specifically, the lift mechanisms are designed to reposition a patient, using the RWTD, from a supine to a vertical position. Addressing a crucial gap in supporting and placing patients in gait rehabilitation devices, design optimization was performed using the engineering design process. This approach utilizes sophisticated techniques, including CAD modeling, motion analysis, structural analysis using finite element analysis, and a Pugh decision matrix. The findings offer valuable insights for optimizing lift mechanisms for the RWTD, contributing to the enhancement of patient-centric care. This research ensures a focus on safety, efficiency, and comfort in the gait rehabilitation process, with broader implications for the evolution of medical robotic devices.
19

Wang, Ding, Hongfei Zhan, Junhe Yu, and Rui Wang. "Knowledge Push Method Based on Clustering Algorithm and DSM Matrix." Journal of Physics: Conference Series 2025, no. 1 (2021): 012057. http://dx.doi.org/10.1088/1742-6596/2025/1/012057.

Full text
Abstract:
In the business execution of traditional enterprises, traditional knowledge push struggles to provide accurate knowledge support for decision-making because the business problems involved are complicated. This paper therefore proposes a knowledge-push model for business execution. Firstly, the business problems in the business process are clustered with a hierarchical clustering algorithm to ensure the cohesion of the clustered business problems. Then a DSM matrix is used to represent the coupling of the clustered business problems. Finally, knowledge is pushed for the partitioned business problems to provide accurate knowledge support for business executives making decisions about them.
20

Kidoido, Michael M., Komi Mensah Agboka, Frank Chidawanyika, et al. "Spatial Spillover Effects of Smallholder Households’ Adoption Behaviour of Soil Management Practices Among Push–Pull Farmers in Rwanda." Sustainability 16, no. 23 (2024): 10349. http://dx.doi.org/10.3390/su162310349.

Full text
Abstract:
Push–pull technology (PPT) integrates maize with the legume fodder Desmodium sp. and the border crop Brachiaria sp., aiming to enhance maize production in Rwanda. Despite its potential, the adoption of complementary soil management practices (SMP), vital for PPT’s success, remains low. This study employs spatial econometric methods to evaluate the determinants of SMP adoption and the interdependencies in decision-making among PPT-practicing farmers. We constructed a spatial weight matrix based on a global Moran’s I index and identified optimal model parameters through principal component analysis. Utilizing a spatial Durbin probit model (SDPM), we assessed the spatial interdependence of SMP adoption decisions among maize farmers. Our findings reveal significant spatial dependence in SMP adoption within a 1.962 km radius, with improved seed usage, household income, yield, farmer group membership and size of land cultivated being key factors positively influencing adoption. We propose a “nonequilibrium promotion strategy” to enhance SMP adoption, emphasizing the establishment of pilot regions to broaden outreach. Additionally, fostering technical training and selecting farmers with adequate resources as demonstration leaders can enhance spatial spillover effects. This research provides insights for developing policies to scale up push–pull technology in Rwanda and across Sub-Saharan Africa.
21

Reyes Uribe, Ana Cecilia. "No Good-Bye and No Thanks: A Case Study on the Experience of Mexican University Professors Retiring in Times of Pandemic." Journal of Behavior, Health & Social Issues 14, no. 2 (2022): 11–22. http://dx.doi.org/10.22201/fesi.20070780e.2022.14.2.81310.

Full text
Abstract:
The pandemic has left its mark on virtually every aspect of human life, including retirement. The main objective of this paper was to explore the role of the pandemic in the retirement decision of a group of Mexican university professors. A qualitative research methodology was chosen; 10 women and four men were interviewed. The theoretical framework that guided this work was the 3D model in retirement decision-making, which presents three primary elements in decision making: finances, health, and psychological well-being. Also, the model uses 6 categories: push factors, pull factors, barriers, enablers, triggers, and overrides. A matrix based on the elements and categories of the 3D model was elaborated, and results indicate that the pandemic did play a role in the decision to retire of all participants. Among the main factors for retiring were the long-term after-effects of professors who contracted COVID-19, stress due to training needs for distance education, changes in the family dynamics, the return to face-to-face classes, guilt, uncertainty, and recognition of the fragility of life.
APA, Harvard, Vancouver, ISO, and other styles
22

Canumalla, Ramachandra, and Tanjore V. Jayaraman. "Decision Science Driven Selection of High-Temperature Conventional Ti Alloys for Aeroengines." Aerospace 10, no. 3 (2023): 211. http://dx.doi.org/10.3390/aerospace10030211.

Full text
Abstract:
Near-α Ti alloys find themselves in advanced aeroengines for applications of up to 600 °C, mainly as compressor components owing to their superior combination of ambient- and elevated-temperature mechanical properties and oxidation resistance. We evaluated, ranked, and selected near-α Ti alloys in the current literature for high-temperature applications in aeroengines driven by decision science by integrating multiple attribute decision making (MADM) and principal component analysis (PCA). A combination of 12 MADM methods ranked a list of 105 alloy variants based on the thermomechanical processing (TMP) conditions of 19 distinct near-α Ti alloys. PCA consolidated the ranks from various MADMs and identified top-ranked alloys for the intended applications as: Ti-6.7Al-1.9Sn-3.9Zr-4.6Mo-0.96W-0.23Si, Ti-4.8Al-2.2Sn-4.1Zr-2Mo-1.1Ge, Ti-6.6Al-1.75Sn-4.12Zr-1.91Mo-0.32W-0.1Si, Ti-4.9Al-2.3Sn-4.1Zr-2Mo-0.1Si-0.8Ge, Ti-4.8Al-2.3Sn-4.2Zr-2Mo, Ti-6.5Al-3Sn-4Hf-0.2Nb-0.4Mo-0.4Si-0.1B, Ti-5.8Al-4Sn-3.5Zr-0.7Mo-0.35Si-0.7Nb-0.06C, and Ti-6Al-3.5Sn-4.5Zr-2.0Ta-0.7Nb-0.5Mo-0.4Si. The alloys have the following metallurgical characteristics: bimodal matrix, aluminum equivalent preferably ~8, and nanocrystalline precipitates of Ti3Al, germanides, or silicides. The analyses, driven by decision science, make metallurgical sense and provide guidelines for developing next-generation commercial near-α Ti alloys. The investigation not only suggests potential replacement or substitute for existing alloys but also provides directions for improvement and development of titanium alloys over the current ones to push out some of the heavier alloys and thus help reduce the engine’s weight to gain advantage.
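The rank-consolidation step described above (rankings from many MADM methods reduced by PCA to a single ordering) can be illustrated with a minimal sketch. The rank matrix below is invented and `consensus_ranks` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def consensus_ranks(rank_matrix):
    """Consolidate ranks from several MADM methods (rows = alternatives,
    columns = methods) into one ordering via the first principal component."""
    centered = rank_matrix - rank_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]
    # Orient the component so it correlates positively with the mean rank.
    if np.corrcoef(scores, rank_matrix.mean(axis=1))[0, 1] < 0:
        scores = -scores
    return scores.argsort().argsort() + 1  # consensus rank, 1 = best
```

The first principal component captures the direction along which the different MADM rankings agree most, so its scores serve as a consensus ordering.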
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Xudong, Changming Hu, Jing Liang, Juan Wang, and Siyuan Dong. "A Study of the Factors Influencing the Construction Risk of Steel Truss Bridges Based on the Improved DEMATEL–ISM." Buildings 13, no. 12 (2023): 3041. http://dx.doi.org/10.3390/buildings13123041.

Full text
Abstract:
To enhance the safety management of steel-truss-bridge construction, an evaluation method based on the improved DEMATEL–ISM was proposed to analyze the risk factors involved in such construction. Decision Making Trial and Evaluation Laboratory (DEMATEL) is a method for systematic factor analysis that utilizes graph theory and matrix tools, allowing for the assessment of the existence and strength of relationships between elements by analyzing the logical and direct impact relationships among various elements in a system. The distinctive feature of Interpretative Structural Modeling (ISM) is that it decomposes a complex system into several subsystems (elements) and organizes the system into a multi-level hierarchical structural model through algebraic operations. Specifically, triangular fuzzy numbers are first introduced to improve the direct influence matrix in the DEMATEL method, thereby reducing the subjectivity of expert evaluations. The degree of influence, influenced degree, centrality degree, and causality degree of each influencing factor are determined and ranked based on the above analysis. In response to the characteristics of top-push construction, 20 key factors were selected from four aspects: “human, material, environment, and management”. The top five identified influencing factors are displacement during pushing (X10), safety-management qualification (X18), local buckling (X14), overturning of steel beams (X13), and collision with bridge piers during guide beam installation (X7). Subsequently, corresponding solutions were proposed for different influencing factors. The results of the study offer targeted measures to enhance the safety management of steel truss bridge construction and provide a reference for accident prevention.
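The fuzzy-DEMATEL computation outlined above can be sketched as follows, assuming centroid defuzzification of the triangular fuzzy judgments and the standard total-relation matrix T = N(I - N)^-1; the example judgments in the test are invented, not the paper's expert data:

```python
import numpy as np

def dematel(fuzzy_judgments):
    """fuzzy_judgments: (n, n, 3) array of triangular fuzzy numbers (l, m, u)
    giving the direct influence of factor i on factor j."""
    crisp = fuzzy_judgments.mean(axis=2)       # centroid defuzzification (l+m+u)/3
    norm = crisp / crisp.sum(axis=1).max()     # normalized direct-influence matrix N
    total = norm @ np.linalg.inv(np.eye(len(crisp)) - norm)  # T = N(I - N)^-1
    d = total.sum(axis=1)                      # degree of influence
    r = total.sum(axis=0)                      # influenced degree
    return d + r, d - r                        # centrality, causality
```

Factors with positive causality are net causes; centrality ranks overall importance, which is how the abstract's top-five factors would be identified.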
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Jinkai, Wenming Ma, En Zhang, and Xuchen Xia. "Time-Aware Dual LSTM Neural Network with Similarity Graph Learning for Remote Sensing Service Recommendation." Sensors 24, no. 4 (2024): 1185. http://dx.doi.org/10.3390/s24041185.

Full text
Abstract:
Technological progress has led to significant advancements in Earth observation and satellite systems. However, some services associated with remote sensing face issues related to timeliness and relevance, which affect the application of remote sensing resources in various fields and disciplines. The challenge now is to help end-users make precise decisions and recommendations for relevant resources that meet the demands of their specific domains from the vast array of remote sensing resources available. In this study, we propose a remote sensing resource service recommendation model that incorporates a time-aware dual LSTM neural network with similarity graph learning. We further use the stream push technology to enhance the model. We first construct interaction history behavior sequences based on users’ resource search history. Then, we establish a category similarity relationship graph structure based on the cosine similarity matrix between remote sensing resource categories. Next, we use LSTM to represent historical sequences and Graph Convolutional Networks (GCN) to represent graph structures. We construct similarity relationship sequences by combining historical sequences to explore exact similarity relationships using LSTM. We embed user IDs to model users’ unique characteristics. By implementing three modeling approaches, we can achieve precise recommendations for remote sensing services. Finally, we conduct experiments to evaluate our methods using three datasets, and the experimental results show that our method outperforms the state-of-the-art algorithms.
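The category-similarity graph construction described above can be illustrated with a short sketch. The thresholding and symmetric normalization are common GCN preprocessing choices assumed here for illustration, not details taken from the paper:

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Cosine similarity between category feature vectors (rows)."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def similarity_graph(features, threshold=0.5):
    """Threshold the similarity matrix into an adjacency (self-loops come
    for free since each row has similarity 1 with itself), then apply the
    symmetric normalization D^-1/2 A D^-1/2 typically fed to a GCN layer."""
    sim = cosine_similarity_matrix(features)
    adj = (sim >= threshold).astype(float)
    deg = adj.sum(axis=1)
    return adj / np.sqrt(np.outer(deg, deg))
```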
APA, Harvard, Vancouver, ISO, and other styles
25

Korchagina, Irina, and Olga Sychyova-Peredero. "Technological Entrepreneurship Potential as a Diversification Factor of the Territory’s Economy." Regionalnaya ekonomika. Yug Rossii, no. 4 (December 2019): 4–12. http://dx.doi.org/10.15688/re.volsu.2019.4.1.

Full text
Abstract:
The goal of the research is to determine the potential impact of technological entrepreneurship on the diversification processes of regional economies (regions and large municipal entities). The paper specifies the main types and diversification factors of regional economy. The authors determine the key limitations of modern diversification theories. The researchers propose the hypothesis of a “technological push” as a way of overcoming the effect of dependence on previous economic development. Technological entrepreneurship makes it possible to create fundamentally new types of economic activity for a region that exploit the temporary monopoly effect. This determines the advantages of technological entrepreneurship in attracting investors to weak types of economic activity. The paper substantiates the main provisions displaying the impact of technological entrepreneurship on the dynamics of the regional economy’s structure. The authors classify the characteristics of technological entrepreneurship influencing the diversification of the economy. A matrix of the correlation between the types of technological entrepreneurship and types of economic diversification is developed, on the basis of which the main directions and limitations of diversification are determined. The researchers identify and classify the factors of direct impact (creation of new industries and enterprises) and the indirect positive effects of technological entrepreneurship in the context of creating a balanced structure of the economy (development of human and social capital, growth of the spatial importance of the territory). Technological entrepreneurship is a factor of economic transformation, and even under conditions of inertial development it allows minimizing the investment and market barriers to diversification when resources are limited.
The results of the study can be used as a basis for further empirical research on the correlation between technological entrepreneurship and economic structure, and also in substantiating the decisions of executive authorities and developing documents of strategic planning.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhu, Guicun, Meihui Hao, Changlong Zheng, and Linlin Wang. "Design of Knowledge Graph Retrieval System for Legal and Regulatory Framework of Multilevel Latent Semantic Indexing." Computational Intelligence and Neuroscience 2022 (July 19, 2022): 1–11. http://dx.doi.org/10.1155/2022/6781043.

Full text
Abstract:
Latent semantic analysis (LSA) is a natural language statistical model, which is considered a method to acquire, generalize, and represent knowledge. Compared with other retrieval models based on concept dictionaries or concept networks, the retrieval model based on LSA has the advantages of strong computability and less human participation. LSA establishes a latent semantic space through truncated singular value decomposition. Words and documents in the latent semantic space are projected onto the dimension representing the latent concept, and then the semantic relationship between words can be extracted to present the semantic structure in natural language. This paper designs the system architecture of the public prosecutorial knowledge graph. Combining graph data storage technology and the characteristics of the public domain ontology, a knowledge graph storage method is designed. By building a prototype system, the functions of knowledge management, knowledge query, and knowledge push are realized. A named entity recognition method based on bidirectional long short-term memory (bi-LSTM) combined with conditional random field (CRF) is proposed. Bi-LSTM-CRF performs named entity recognition based on character-level features. CRF can use the transition matrix to further obtain the relationship between each position label, so that bi-LSTM-CRF not only retains the context information but also considers the influence between the current position and the previous position. The experimental results show that the LSTM-entity-context method proposed in this paper improves the representation ability of text semantics compared with other algorithms. However, this method only introduces relevant entity information to supplement the semantic representation of the text; the order of events in a case is often ignored, especially when it comes to the time series of case characteristics, and this “order problem” may eventually affect the final prediction result.
The knowledge graph of legal documents of theft cases based on ontology can be updated and maintained in real time. The knowledge graph can conceptualize, share, and perpetuate knowledge related to procuratorial organs and can also reasonably utilize and mine many useful experiences and knowledge to assist in decision-making.
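The role the transition matrix plays in bi-LSTM-CRF decoding, as described in the abstract above, can be illustrated with a toy Viterbi decoder over per-character emission scores. This is a generic sketch of the standard algorithm, not the system's code, and the scores in the test are invented:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label sequence given per-character emission scores (T x L),
    such as a bi-LSTM would produce, and a label-transition score matrix
    (L x L) as learned by the CRF layer."""
    n_steps, n_labels = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n_steps, n_labels), dtype=int)
    for t in range(1, n_steps):
        # cand[i, j]: best score ending in label j after coming from label i.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n_steps - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Because each step adds a transition score, the decoder couples each position's label to its predecessor, which is exactly the context the abstract credits to the CRF layer.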
APA, Harvard, Vancouver, ISO, and other styles
27

Mbatha, Nhlanhla Cyril, and Joan Roodt. "Recent internal migration and labour market outcomes: Exploring the 2008 and 2010 national income dynamics study (NIDS) panel data in South Africa." South African Journal of Economic and Management Sciences 17, no. 5 (2014): 653–72. http://dx.doi.org/10.4102/sajems.v17i5.515.

Full text
Abstract:
We began with the premise that South African recent migrants from rural to urban areas experience relatively lower rates of participation in formal labour markets compared to local residents in urban communities, and that these migrants are overrepresented in the informal labour market and in the unemployment sector. This means that rural to urban migrants are less likely than locals to be found in formal employment and more likely to be found in informal employment and among the unemployed. Using perspectives from Development Economics we explore the South African National Income Dynamics Study (NIDS) panel datasets of 2008 and 2010, which only provide a perspective on what has happened between 2008 and 2010. We find that while migrants in general experience positive outcomes in informal labour markets, they also experience positive outcomes in formal markets, which is contrary to expectations. We also find that there are strong links between other indicators of performance in the labour market. Earned incomes are closely associated with migration decisions and educational qualifications (e.g. a matric certificate) for respondents between the ages of 30 and 60 years. The youth (15 to 30 years old) and senior respondents (over the age of 60) are the most disadvantaged in the labour market. The disadvantage is further reflected in lower earned incomes. This is the case even though the youth are most likely to migrate. We conclude that migration is motivated by both push (to seek employment) and pull (existing networks or marriage at destination) factors. For public policy, the emerging patterns – indicative and established – are important for informing strategies aimed at creating employment and developing skills for the unemployed, migrants and especially the youth. Similar policy strategies are embodied in the National Development Plan (NDP), the National Skills Development Strategy (NSDS), etc.
APA, Harvard, Vancouver, ISO, and other styles
28

Meng, Bin, Haibo Kuang, Erxuan Niu, Jing Li, and Zhenhui Li. "Research on the Transformation Path of the Green Intelligent Port: Outlining the Perspective of the Evolutionary Game “Government–Port–Third-Party Organization”." Sustainability 12, no. 19 (2020): 8072. http://dx.doi.org/10.3390/su12198072.

Full text
Abstract:
While promoting the global economy and trade, ports impose serious pollution on the global ocean and atmosphere. Therefore, the development of ports is restrained by the policies and measures of governments and international organizations used to cope with climate change and environmental protection. With the development of information technology, the operation and expansion of ports is facing forms of green and intelligent reform. This research aims to link the development of green intelligent ports, government policies, and third-party organizations to find the most suitable evolutionary path for the development of green intelligent ports. This paper assumes that governments will push ports to transform into green intelligent ports from the perspective of benefiting long-term interests, that the goal of ports is to maximize their profits, and that third-party organizations will actively promote the development of green intelligent ports. Based on these assumptions, this paper has established an evolutionary game theory model of “government–port–third-party organization” regarding the development of green intelligent ports. The Jacobian matrix of the game theory system was constructed by using the replicator dynamic equation, and local stability analysis was performed to obtain the equilibrium stability point of the entire system. This research reveals the limitations of the development of green intelligent ports without government involvement and explores the ability of third-party organizations to promote the implementation of policies, confirming the role of government regulation and control in promoting the development of green intelligent ports. This paper may be helpful for the development of green intelligent ports in the future. 
The results show that: (1) The main factors affecting the choice of port strategy are the benefits of building a green intelligent port, the intensity of government regulation, and the quantitative influence of third-party evaluation results on the port strategy selection. (2) Government decision-making plays an important role in port transformation. If the relevant government chooses the wrong strategy, then the transformation of the port will be delayed. (3) Government regulation and control need to change with the change of the evolution stage. (4) Compared with the macro-control policies of the government, the influence of the third-party organization on the port is significantly smaller.
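The stability analysis described above (replicator dynamics with local stability judged from the Jacobian's eigenvalues) can be sketched generically. The helper names are hypothetical, and the one-dimensional vector field in the test is a stand-in replicator equation, not the paper's "government–port–third-party organization" payoff model:

```python
import numpy as np

def jacobian(f, point, eps=1e-6):
    """Numerical Jacobian of the vector field f (the replicator dynamic
    equations) at a candidate equilibrium point, by central differences."""
    point = np.asarray(point, dtype=float)
    n = len(point)
    J = np.zeros((n, n))
    for i in range(n):
        step = np.zeros(n)
        step[i] = eps
        J[:, i] = (np.asarray(f(point + step)) - np.asarray(f(point - step))) / (2 * eps)
    return J

def is_asymptotically_stable(f, point):
    """An equilibrium is a locally stable evolutionary outcome if every
    eigenvalue of the Jacobian has a negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian(f, point)).real < 0))
```

For the single-population replicator equation x' = x(1 - x)(0.5 - x), the interior point x = 0.5 is stable while x = 0 is not, mirroring how equilibrium stability points are screened in the tripartite game.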
APA, Harvard, Vancouver, ISO, and other styles
29

Donaires, Omar Sacilotto, Luciana Oranges Cezarino, Adriana Cristina Ferreira Caldana, and Lara Liboni. "Sustainable development goals – an analysis of outcomes." Kybernetes 48, no. 1 (2019): 183–207. http://dx.doi.org/10.1108/k-10-2017-0401.

Full text
Abstract:
Purpose The concept of sustainability evokes a multiplicity of meanings, depending on the field. Some authors have criticized the concept for its vagueness. Notwithstanding this criticism, worldwide efforts to meet the sustainable development goals (SDGs) are in progress and are expected to yield results by 2030. This paper aims to address two issues and make two primary contributions. First, the concept of sustainability is revisited to develop its integrative understanding. This concept is built on systems thinking – specifically, on the concepts of synergy, emergence, recursion and self-organization. Second, an approach is developed to help determine whether the efforts being made towards the SDGs can be expected to be effective (i.e., whether the world can hope to soon be a system that self-organizes towards sustainability). Design/methodology/approach Based on the assumption that the SDGs and their respective targets are systemically interrelated, the data on the progress towards the SDGs are correlated and the outcome is analysed. Findings The emerging pattern of correlations reflected the systemic coherence of the efforts as an indication of self-organization towards sustainability. This pattern also revealed that the efforts are still spotty and that the systemic synergy has not yet taken place. This correlation approach is then applied to Brazil. The data about Brazil’s progress towards the SDGs from the World Bank’s World Development Indicators (WDI) database are gathered. The outcomes indicated that Brazil as a whole cannot yet be seen as a self-organizing system that is evolving towards sustainability. Research limitations/implications To enable the calculation of the correlation matrix, the data series were not allowed to have missing values. Some of the WDI data series had many missing values and had to be eliminated. This unfortunately reduced the variability of the original data.
In addition, the missing values in the remaining data series had to be calculated by means of interpolation or extrapolation. There are alternative algorithms to perform such functions. The impact of the interpolation and extrapolation of the missing values on the study, as well as the pros and cons of different algorithms, required investigation. It is important to remark that the WDI series was the only global and open data set that aligned with the SDGs. Social implications In Brazil, it is important to maintain the public policies that affect SDG 1-6, but it is necessary to develop policies geared towards SDG 12. Environmental goals also need more public policies (SDGs 14 and 15). To achieve this 2030 Agenda, much effort will be required for SDG 17, which is related to greater synergy through partnerships. Originality/value Three qualitatively distinct levels of efforts to sustainability are identified: individual, organizational and world activities. At the individual level, progress regarding sustainability depends on personal attitudes, including the willingness to abandon a self-centred lifestyle in favour of a more cooperative way of living and making decisions, and to embrace a new approach to ethics, which replaces self-interest by self-denial and self-sacrifice (de Raadt & de Raadt, 2014). At the organizational level, a paradox of the need to internalize environmental and social costs into generic strategies and the sustainability strategy that involves core businesses are challenges for systems working towards sustainability. At the global level, in this paper, the authors tried to make a contribution to push forward the frontier of knowledge by proposing an approach to understand whether the progress made towards the SDGs in the past 25 years indicates that the world is, after all, organizing for sustainability (Schwaninger, 2015).
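The correlation analysis over gap-filled indicator series, as the limitations section describes, can be sketched as follows. Linear interior interpolation with nearest-value extrapolation at the ends is just one of the alternative algorithms the authors mention; the series in the test are invented:

```python
import numpy as np

def fill_series(y):
    """Fill NaN gaps in one yearly indicator series by linear interpolation
    of interior gaps plus nearest-value extrapolation at the ends."""
    y = np.asarray(y, dtype=float)
    idx = np.arange(len(y))
    valid = ~np.isnan(y)
    return np.interp(idx, idx[valid], y[valid])

def sdg_correlations(series_matrix):
    """Correlation matrix across indicator series (rows = indicators,
    columns = years), after gap filling."""
    filled = np.array([fill_series(row) for row in series_matrix])
    return np.corrcoef(filled)
```

A coherent pattern of strong positive correlations across goals is what the paper reads as evidence of self-organization towards sustainability.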
APA, Harvard, Vancouver, ISO, and other styles
30

Wesseling, Jelle. "Abstract F1-2: Clonal evolution of DCIS to invasion." Cancer Research 83, no. 5_Supplement (2023): F1–2—F1–2. http://dx.doi.org/10.1158/1538-7445.sabcs22-f1-2.

Full text
Abstract:
Ductal carcinoma in situ (DCIS) is the most common form of preinvasive breast cancer and, despite treatment, a small fraction (5-10%) of DCIS patients develop subsequent invasive breast cancer (IBC). If not treated, at least 3 out of 4 women with DCIS will not develop IBC1-3. This implies many women with non-progressive, low-risk DCIS are likely to carry the burden of overtreatment. To solve this DCIS dilemma, two fundamental questions need to be answered. The first question is how the subsequent IBC is related to the initial DCIS lesion. The second question is how to distinguish high- from low-risk DCIS at the time of diagnosis. This is essential to take well-informed DCIS management decisions, i.e., surgery, followed by radiotherapy in case of breast conserving treatment with or without subsequent endocrine treatment, or test whether active surveillance for low-risk DCIS is safe. How is the subsequent IBC related to the initial DCIS? The high genomic concordance in DNA aberrations between DCIS and IBC suggests that most driver mutations and CNA events are acquired at the earliest stages of DCIS initiation. It has therefore been assumed that most solid tumours arise from a single cell and that the probability of two independent tumours arising from the same tissue is low4-6. However, lineage tracing and genomic studies strongly suggest both direct and independent clonal lineages during the initiation of DCIS and evolution to IBC. In these processes, mammary stem cells have been implicated in DCIS initiation. Role of mammary stem cells in DCIS initiation Lineage tracing mouse model experiments have shown the fate of individual cells and lineages that acquire mutations before a tumour is established7-9. This is also relevant for DCIS initiation10,11, as different pools of MaSCs drive the growth and development of the ductal network and are considered the cell of origin for breast cancers9,10.
The ductal trees remain quiescent until puberty, during which extension, branching and termination of terminal end buds (TEBs) leads to its expansion throughout the fat pad7,12,13. Any oncogenic mutation that occurs in a fetal MaSC will spread throughout the ductal network to a large part of the ductal tree, leading to sick lobes9. By contrast, oncogenic mutations acquired by a single MaSC during puberty spread to a smaller number of offspring located in small clusters in a part of the ductal network8,14. Direct lineage models for DCIS progression Direct lineage models postulate that DCIS has a single cell of origin that acquires mutations and progresses to IBC15-18. This is also supported by the high genomic concordance of CNAs and mutations in synchronous DCIS–IBC regions6,15,17,19-21 and the results of a recent large longitudinal study that profiled pure DCIS and recurrent IBC using multiple sequencing techniques, which estimated direct clonal lineages in approximately 80% of patients18. Two distinct direct lineage models have been proposed: the evolutionary bottleneck model and the multiclonal invasion model. In the evolutionary bottleneck model, a single clone (or a limited number of clones) with an invasive genotype is selected and breaks through the basement membrane to migrate into surrounding tissues15,16,22, while other clones are unable to escape the ducts21-28. The multiclonal invasion model posits that most or all subclones can escape the basement membrane, establishing invasive disease6,16,17,20. The multiclonal model has not been studied widely in pure DCIS and recurrent IBC samples. Independent lineage model for DCIS progression DCIS lesions and IBCs can arise from different initiating cells in the same breast independently5,20,29-32.
An analysis of sequential DCIS–IBC pairs in a unique, large-scale, in-depth study of 95 matched pure DCIS and recurrent IBC showed that ~20% of the IBC recurrences were indeed clonally unrelated to the primary DCIS18, as is also supported by some mathematical model studies33. The potential role of a field effect IBC can develop in the same breast as an initial DCIS even after treatment, which could be explained by the presence of a field effect34-37. Alternatively, the sick lobe hypothesis proposes that a single lobe harbours first-hit mutations, acquired in utero or during early mammary development37-42. This could also explain the restriction of IBC to the ipsilateral side of the breast39,43,44. Germline mutations may also explain the emergence of independent lineages in DCIS and IBC patients, lowering the threshold for cancer development32,43-46. Convergent evolution model of DCIS progression A third model for the emergence of IBC from DCIS is convergent evolution, in which the same mutations and CNA are selected and expanded during tumour growth such that environmental factors fuel competition between distinct clones and push them towards a similar genotype. Ultimately, two independent clonal lineages from different ancestral cells then happen to share multiple genomic aberrations or driver mutations across regions47-49. Although independent lineages are considered uncommon (~20%) in ipsilateral recurrences, they occur at much higher frequencies in contralateral recurrences (>80%), in which single-nucleotide polymorphism and comparative genomic hybridization microarrays show few (or no) genomic alterations shared in tumours from the contralateral breast cancer18,50,51. How to distinguish high- from low-risk DCIS at the time of diagnosis? The genomic and transcriptomic profile present at the time of DCIS diagnosis may contain crucial information on the risk of progression of DCIS to IBC.
Thus far, it has been unclear whether prognostic gene expression markers can be used to separate indolent DCIS from potentially progressive DCIS. To this end, microarrays and RNA-seq have been applied for the comparison of bulk RNA from microdissected DCIS and IBC tissue. In synchronous DCIS–IBC, a limited number of transcriptional differences have been found and the few events discovered often varied extensively across different tumours52-56. Although these differences were strong, the added value of these studies is uncertain as they are often confounded by small sample size, lack of matched receptor status data, and low sample purity. Despite these limitations, these studies have implicated the epithelial-mesenchymal transition (EMT) and extracellular matrix (ECM) remodelling pathways as potentially relevant for the progression of DCIS to IBC55-62. We studied two large DCIS cohorts: the Sloane cohort, a prospective breast screening cohort from the UK (median follow-up of 12.5 years), and a Dutch population-based cohort (NKI, median follow-up of 13 years). FFPE tissue specimens from patients with pure primary DCIS after breast-conserving surgery (BCS) +/- RT that did develop a subsequent ipsilateral event (DCIS or invasive) were considered as cases, whereas patients that did not develop any form of recurrence up to the last follow-up or death were considered as controls. We performed copy number analysis (CNA) and RNAseq analysis on 229 cases (149 IBC recurrences and 80 DCIS recurrences) and 344 controls. We classified DCIS into the PAM50 subtypes using RNAseq data which revealed an enrichment of luminal A phenotype in DCIS that did not recur (P = 0.01, Fisher Exact test). No single copy number aberration was more common in cases compared to controls. RNAseq data did not reveal any genes significantly over/under expressed in cases versus controls after false discovery rate (FDR) correction. 
However, by limiting the analysis to samples that had not had RT and excluding pure DCIS recurrences we developed a penalized Cox model from RNAseq data. The model was trained on weighted samples (to correct for the biased sampling of the case control dataset) from the NKI series with double loop cross validation. Using this predicted hazard ratio, the samples were split into high, medium and low risk quantiles, with a recurrence risk of 20%, 9% and 2.5%, respectively, at 5 years (p < 0.001, Wald test). The NKI-trained predictor was independently validated in the Sloane No RT cohort (p = 0.02, Wald test). GSEA analysis revealed proliferation hallmarks enriched in the recurrence predictor (FDR = 0.058). The NKI-RNAseq predictor was more predictive of invasive recurrence than PAM50, clinical features (Grade, Her2 and ER) and the 12-gene Oncotype DCIS score (p < 0.001, permutation test using the Wald statistic) in both the NKI and Sloane series. In the methylation analysis, 50 controls were compared with 35 cases. We could identify Variably Methylated Regions (VMRs) and Differentially Methylated Regions (DMRs) between cases and controls. Interestingly, VMRs were enriched in cell adhesion pathways. Conclusion The recently acquired knowledge described above on how often the subsequent IBC is directly related to the initial DCIS and on molecular markers predicting the risk of DCIS progression is essential for accurate DCIS risk assessment. This is essential to aid accurate clinical decision making to personalize DCIS management in the near future. References 1. Falk, R. S., Hofvind, S., Skaane, P. & Haldorsen, T. Second events following ductal carcinoma in situ of the breast: a register-based cohort study. Breast Cancer Res Treat 129, 929-938, doi:10.1007/s10549-011-1531-1 (2011). 2. Ryser, M. D. et al. Cancer Outcomes in DCIS Patients Without Locoregional Treatment. Jnci J National Cancer Inst 111, 952-960, doi:10.1093/jnci/djy220 (2019). 3. Maxwell, A.
J. et al. Unresected screen detected Ductal Carcinoma in Situ: outcomes of 311 women in the Forget-me–not 2 study. Breast 61, 145-155, doi:10.1016/j.breast.2022.01.001 (2022). 4. Hanahan, D. & Weinberg, R. A. Hallmarks of cancer: the next generation. Cell 144, 646-674, doi:10.1016/j.cell.2011.02.013 (2011). 5. Kim, H., Kim, C. Y., Park, K. H. & Kim, A. Clonality analysis of multifocal ipsilateral breast carcinomas using X-chromosome inactivation patterns. Hum Pathol 78, 106-114, doi:10.1016/j.humpath.2018.04.016 (2018). 6. Bergholtz, H. et al. Comparable cancer-relevant mutation profiles in synchronous ductal carcinoma in situ and invasive breast cancer. Cancer Rep (Hoboken) 3, e1248, doi:10.1002/cnr2.1248 (2020). 7. Giraddi, R. R. et al. Stem and progenitor cell division kinetics during postnatal mouse mammary gland development. Nat Commun 6, 8487, doi:10.1038/ncomms9487 (2015). 8. Scheele, C. L. et al. Identity and dynamics of mammary stem cells during branching morphogenesis. Nature 542, 313-317, doi:10.1038/nature21046 (2017). 9. Ying, Z. & Beronja, S. Embryonic Barcoding of Equipotent Mammary Progenitors Functionally Identifies Breast Cancer Drivers. Cell Stem Cell 26, 403-419.e404, doi:10.1016/j.stem.2020.01.009 (2020). 10. Zhou, J. et al. Stem Cells and Cellular Origins of Breast Cancer: Updates in the Rationale, Controversies, and Therapeutic Implications. Front Oncol 9, 820, doi:10.3389/fonc.2019.00820 (2019). 11. Watson, C. J. & Khaled, W. T. Mammary development in the embryo and adult: new insights into the journey of morphogenesis and commitment. Development 147, doi:10.1242/dev.169862 (2020). 12. Williams, J. M. & Daniel, C. W. Mammary ductal elongation: differentiation of myoepithelium and basal lamina during branching morphogenesis. Dev Biol 97, 274-290, doi:10.1016/0012-1606(83)90086-6 (1983). 13. Silberstein, G. B. & Daniel, C. W.
Glycosaminoglycans in the basal lamina and extracellular matrix of serially aged mouse mammary ducts. Mech Ageing Dev 24, 151-162, doi:10.1016/0047-6374(84)90067-8 (1984). 14. Davis, F. M. et al. Single-cell lineage tracing in the mammary gland reveals stochastic clonal dispersion of stem/progenitor cell progeny. Nat Commun 7, 13053, doi:10.1038/ncomms13053 (2016). 15. Hernandez, L. et al. Genomic and mutational profiling of ductal carcinomas in situ and matched adjacent invasive breast cancers reveals intra-tumour genetic heterogeneity and clonal selection. J Pathol 227, 42-52, doi:10.1002/path.3990 (2012). 16. Casasent, A. K., Edgerton, M. &amp; Navin, N. E. Genome evolution in ductal carcinoma in situ: invasion of the clones. J Pathol 241, 208-218, doi:10.1002/path.4840 (2017). 17. Casasent, A. K. et al. Multiclonal Invasion in Breast Tumors Identified by Topographic Single Cell Sequencing. Cell 172, 205-217 e212, doi:10.1016/j.cell.2017.12.007 (2018). 18. Lips, E. H. et al. Genomic analysis defines clonal relationships of ductal carcinoma in situ and recurrent invasive breast cancer. Nat Genet 54, 850–860, doi:10.1038/s41588-022-01082-3 (2022). 19. Miron, A. et al. PIK3CA mutations in in situ and invasive breast carcinomas. Cancer Res 70, 5674-5678, doi:10.1158/0008-5472.CAN-08-2660 (2010). 20. Yates, L. R. et al. Subclonal diversification of primary breast cancer revealed by multiregion sequencing. Nat Med 21, 751-759, doi:10.1038/nm.3886 (2015). 21. Pareja, F. et al. Whole-Exome Sequencing Analysis of the Progression from Non-Low-Grade Ductal Carcinoma In Situ to Invasive Ductal Carcinoma. Clin Cancer Res 26, 3682-3693, doi:10.1158/1078-0432.CCR-19-2563 (2020). 22. Trinh, A. et al. Genomic Alterations during the In Situ to Invasive Ductal Breast Carcinoma Transition Shaped by the Immune System. Mol Cancer Res 19, 623-635, doi:10.1158/1541-7786.MCR-20-0949 (2021). 23. Poste, G. &amp; Fidler, I. J. The pathogenesis of cancer metastasis. 
Nature 283, 139-146, doi:10.1038/283139a0 (1980). 24. Greaves, M. &amp; Maley, C. C. Clonal evolution in cancer. Nature 481, 306-313, doi:10.1038/nature10762 (2012). 25. Kroigard, A. B. et al. Clonal expansion and linear genome evolution through breast cancer progression from pre-invasive stages to asynchronous metastasis. Oncotarget 6, 5634-5649, doi:10.18632/oncotarget.3111 (2015). 26. Martelotto, L. G. et al. Whole-genome single-cell copy number profiling from formalin-fixed paraffin-embedded samples. Nat Med 23, 376-385, doi:10.1038/nm.4279 (2017). 27. Walens, A. et al. Adaptation and selection shape clonal evolution of tumors during residual disease and recurrence. Nat Commun 11, 5017, doi:10.1038/s41467-020-18730-z (2020). 28. Welter, L. et al. Treatment response and tumor evolution: lessons from an extended series of multianalyte liquid biopsies in a metastatic breast cancer patient. Cold Spring Harb Mol Case Stud 6, doi:10.1101/mcs.a005819 (2020). 29. Maggrah, A. et al. Paired ductal carcinoma in situ and invasive breast cancer lesions in the D-loop of the mitochondrial genome indicate a cancerization field effect. Biomed Res Int 2013, 379438, doi:10.1155/2013/379438 (2013). 30. Desmedt, C. et al. Uncovering the genomic heterogeneity of multifocal breast cancer. J Pathol 236, 457-466, doi:10.1002/path.4540 (2015). 31. Visser, L. L. et al. Discordant Marker Expression Between Invasive Breast Carcinoma and Corresponding Synchronous and Preceding DCIS. Am J Surg Pathol 43, 1574-1582, doi:10.1097/PAS.0000000000001306 (2019). 32. McCrorie, A. D. et al. Multifocal breast cancers are more prevalent in BRCA2 versus BRCA1 mutation carriers. J Pathol Clin Res 6, 146-153, doi:10.1002/cjp2.155 (2020). 33. Sontag, L. &amp; Axelrod, D. E. Evaluation of pathways for progression of heterogeneous breast tumors. J Theor Biol 232, 179-189, doi:10.1016/j.jtbi.2004.08.002 (2005). 34. Mai, K. T. 
Morphological evidence for field effect as a mechanism for tumour spread in mammary Paget’s disease. Histopathology 35, 567-576, doi:10.1046/j.1365-2559.1999.00788.x (1999). 35. Foschini, M. P. et al. Genetic clonal mapping of in situ and invasive ductal carcinoma indicates the field cancerization phenomenon in the breast. Hum Pathol 44, 1310-1319, doi:10.1016/j.humpath.2012.09.022 (2013). 36. Asioli, S., Morandi, L., Cavatorta, C., Cucchi, M. C. &amp; Foschini, M. P. The impact of field cancerization on the extent of duct carcinoma in situ (DCIS) in breast tissue after conservative excision. Eur J Surg Oncol 42, 1806-1813, doi:10.1016/j.ejso.2016.07.005 (2016). 37. Tan, M. P. Integration of ’sick lobe hypothesis’ with concept of field cancerisation for a personalised surgical margin for breast conserving surgery. J Surg Oncol 116, 954-955, doi:10.1002/jso.24728 (2017). 38. Going, J. J. &amp; Mohun, T. J. Human breast duct anatomy, the ’sick lobe’ hypothesis and intraductal approaches to breast cancer. Breast Cancer Res Treat 97, 285-291, doi:10.1007/s10549-005-9122-7 (2006). 39. Tot, T. The theory of the sick breast lobe and the possible consequences. Int J Surg Pathol 15, 369-375, doi:10.1177/1066896907302225 (2007). 40. Dooley, W., Bong, J. &amp; Parker, J. Redefining lumpectomy using a modification of the "sick lobe" hypothesis and ductal anatomy. Int J Breast Cancer 2011, 726384, doi:10.4061/2011/726384 (2011). 41. Tan, M. P. &amp; Tot, T. The sick lobe hypothesis, field cancerisation and the new era of precision breast surgery. Gland Surg 7, 611-618, doi:10.21037/gs.2018.09.08 (2018). 42. Petrova, S. C. et al. Regulation of breast cancer oncogenesis by the cell of origin’s differentiation state. Oncotarget 11, 3832-3848, doi:10.18632/oncotarget.27783 (2020). 43. Knudson, A. G., Jr. Heredity and human cancer. Am J Pathol 77, 77-84 (1974). 44. Park, S., Supek, F. &amp; Lehner, B. 
Systematic discovery of germline cancer predisposition genes through the identification of somatic second hits. Nat Commun 9, 2601, doi:10.1038/s41467-018-04900-7 (2018). 45. Konishi, H. et al. Mutation of a single allele of the cancer susceptibility gene BRCA1 leads to genomic instability in human breast epithelial cells. Proc Natl Acad Sci U S A 108, 17773-17778, doi:10.1073/pnas.1110969108 (2011). 46. Mazzola, E., Cheng, S. C. &amp; Parmigiani, G. The penetrance of ductal carcinoma in situ among BRCA1 and BRCA2 mutation carriers. Breast Cancer Res Treat 137, 315-318, doi:10.1007/s10549-012-2345-5 (2013). 47. Tegze, B. et al. Parallel evolution under chemotherapy pressure in 29 breast cancer cell lines results in dissimilar mechanisms of resistance. PLoS One 7, e30804, doi:10.1371/journal.pone.0030804 (2012). 48. Gao, Y. et al. Single-cell sequencing deciphers a convergent evolution of copy number alterations from primary to circulating tumor cells. Genome Res 27, 1312-1322, doi:10.1101/gr.216788.116 (2017). 49. Wang, F. et al. MEDALT: single-cell copy number lineage tracing enabling gene discovery. Genome Biol 22, 70, doi:10.1186/s13059-021-02291-5 (2021). 50. Brommesson, S. et al. Tiling array-CGH for the assessment of genomic similarities among synchronous unilateral and bilateral invasive breast cancer tumor pairs. BMC Clin Pathol 8, 6, doi:10.1186/1472-6890-8-6 (2008). 51. Regitnig, P., Ploner, F., Maderbacher, M. &amp; Lax, S. F. Bilateral carcinomas of the breast with local recurrence: analysis of genetic relationship of the tumors. Mod Pathol 17, 597-602, doi:10.1038/modpathol.3800089 (2004). 52. Ma, X. J. et al. Gene expression profiles of human breast cancer progression. Proc Natl Acad Sci U S A 100, 5974-5979, doi:10.1073/pnas.0931261100 (2003). 53. Porter, D. et al. Molecular markers in ductal carcinoma in situ of the breast. Mol Cancer Res 1, 362-375 (2003). 54. Castro, N. P. et al. 
Evidence that molecular changes in cells occur before morphological alterations during the progression of breast ductal carcinoma. Breast Cancer Research 10, doi:ARTN R87 10.1186/bcr2157 (2008). 55. Dettogni, R. S. et al. Potential biomarkers of ductal carcinoma in situ progression. BMC Cancer 20, 119, doi:10.1186/s12885-020-6608-y (2020). 56. Song, G. et al. Identification of aberrant gene expression during breast ductal carcinoma in situ progression to invasive ductal carcinoma. J Int Med Res 48, 300060518815364, doi:10.1177/0300060518815364 (2020). 57. Abba, M. C. et al. Transcriptomic changes in human breast cancer progression as determined by serial analysis of gene expression. Breast Cancer Res 6, R499-513, doi:10.1186/bcr899 (2004). 58. Schuetz, C. S. et al. Progression-specific genes identified by expression profiling of matched ductal carcinomas in situ and invasive breast tumors, combining laser capture microdissection and oligonucleotide microarray analysis. Cancer Res 66, 5278-5286, doi:10.1158/0008-5472.CAN-05-4610 (2006). 59. Lee, S. et al. Differentially expressed genes regulating the progression of ductal carcinoma in situ to invasive breast cancer. Cancer Res 72, 4574-4586, doi:10.1158/0008-5472.CAN-12-0636 (2012). 60. Coradini, D., Boracchi, P., Ambrogi, F., Biganzoli, E. &amp; Oriana, S. Cell polarity, epithelial-mesenchymal transition, and cell-fate decision gene expression in ductal carcinoma in situ. Int J Surg Oncol 2012, 984346, doi:10.1155/2012/984346 (2012). 61. Knudsen, E. S. et al. Progression of ductal carcinoma in situ to invasive breast cancer is associated with gene expression programs of EMT and myoepithelia. Breast Cancer Res Treat 133, 1009-1024, doi:10.1007/s10549-011-1894-3 (2012). 62. Krstic, M. et al. TBX3 promotes progression of pre-invasive breast cancer cells by inducing EMT and directly up-regulating SLUG. Journal of Pathology 248, 191-203, doi:10.1002/path.5245 (2019). Citation Format: Jelle Wesseling. 
Clonal evolution of DCIS to invasion [abstract]. In: Proceedings of the 2022 San Antonio Breast Cancer Symposium; 2022 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2023;83(5 Suppl):Abstract nr F1-2.
APA, Harvard, Vancouver, ISO, and other styles
31

Naik, Sujitkumar V., Anupam Saxena, Ashok Kumar Rai, and B. V. S. Nagendra Reddy. "How to Choose From a Synthesized Set of Path-Generating Mechanisms." Journal of Mechanical Design 133, no. 9 (2011). http://dx.doi.org/10.1115/1.4004608.

Full text
Abstract:
Partially compliant mechanisms inherit the attributes of fully compliant and rigid-body linkages and offer simpler, compact design alternatives to accomplish complex kinematic tasks such as tracing large nonsmooth paths. This paper describes qualitative and quantitative criteria that can be employed to select the linkage configuration. The proposed criteria are categorized as general or specific. General criteria pertain to often-used kinematic attributes whereas specific criteria address the application at hand. The veracity and viability of each mechanism are evaluated with respect to compactness, design simplicity, static and dynamic failure, number of rigid-body joints, relative ease of fabrication, and other relevant criteria. Three decision-making techniques, namely, Pugh decision matrix, analytic hierarchy process, and a variant of the Pugh decision matrix are used to perform the evaluation. An example of a displacement-delimited gripper with a prescribed large nonsmooth path is used to illustrate linkage selection.
APA, Harvard, Vancouver, ISO, and other styles
32

Olabanji, Olayinka, and Khumbulani Mpofu. "Pugh matrix and aggregated by extent analysis using trapezoidal fuzzy number for assessing conceptual designs." Decision Science Letters, 2020, 21–36. http://dx.doi.org/10.5267/j.dsl.2019.9.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bundschuh, Jana, Herbert Rappel, Andreas Bock, et al. "Effects of queen excluders on the colony dynamics of honeybees (Apis mellifera L.) under biodynamic management." Apidologie 55, no. 1 (2024). http://dx.doi.org/10.1007/s13592-023-01041-9.

Full text
Abstract:
AbstractThe evaluation of beekeeping management practices (BMPs) is important for beekeepers worldwide because their choice affects health and survival of managed honeybee (A. mellifera L.) colonies and touches ethical and economic questions. This study focusses on queen excluders, a common hive addition in contemporary beekeeping. Its impacts are controversially discussed but have not been studied scientifically yet. Within a 4-year participatory on-farm experiment, we assessed the effects on colony dynamics in 64 hives in 8 apiaries during one season in Germany using the Liebefeld estimation method. We found no significant deviation for parameters of colony dynamics between hives managed with and without queen excluders. A qualitative decision-making tool (Pugh decision matrix) facilitated concept selection only for specific beekeepers.
APA, Harvard, Vancouver, ISO, and other styles
34

S. S., Sreejith. "CONTINUOUS PERFORMANCE EVALUATION OF EMPLOYEES USING AHP AND MODIFIED PUGH MATRIX METHOD: CONTRASTING WITH TOPSIS, PROMETHEE AND VIKOR." International Journal of the Analytic Hierarchy Process 16, no. 1 (2024). http://dx.doi.org/10.13033/ijahp.v16i1.1129.

Full text
Abstract:
Applications of the AHP for employee performance evaluation in organizations are widely discussed in the literature. Contemporary organizations are increasingly discarding traditional periodic appraisal systems and moving towards a real-time, continuous process of evaluation. Existing multi-criteria decision making (MCDM)-based employee performance evaluations are not suitable for such continuous evaluations due to the complexity of the MCDM methods. The current appraisal system is notoriously difficult to administer, which prevents organizations from using it as an ongoing evaluation. There is a need for a simple yet robust multi-criteria decision making method for continuous performance evaluation of employees (CPEE). In this article, a modified version of the Pugh Matrix Method (MPMM) is proposed as a robust outranking method. The MPMM in combination with the AHP can function as an effective tool for CPEE. The MPMM is compared with other established and popular methods including TOPSIS, PROMETHEE and VIKOR. A statistical comparison using correlation validates the evaluation by the MPMM; there appears to be no significant difference between the evaluation of the MPMM and the other MCDM methods. Owing to its robustness and ease of use, the MPMM can easily be adopted by organizations for CPEE. The managerial implications and an agenda for future research are also discussed.
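For context, the AHP weighting step that methods like the MPMM are paired with derives criterion weights from a pairwise comparison matrix. A minimal pure-Python sketch using power iteration follows; the criteria names and comparison values are hypothetical illustrations, not taken from the article.

```python
# Sketch of AHP priority weights from a pairwise comparison matrix via
# power iteration. The 3x3 judgments below are hypothetical examples.
def ahp_weights(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    # Estimate the principal eigenvalue for Saaty's consistency check
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    cr = ci / 0.58          # random index RI = 0.58 for n = 3
    return w, cr

# Hypothetical criteria: "quality" vs "timeliness" vs "teamwork"
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
weights, cr = ahp_weights(A)   # weights sum to 1; cr < 0.1 indicates acceptable consistency
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; otherwise the comparison matrix should be revised.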
APA, Harvard, Vancouver, ISO, and other styles
35

T S, Mohan Kumar, Sharnappa Joladarashi, and S. M. Kulkarni. "Comprehensive review of modeling and material selection for hybrid sandwich composites for ballistic impact application using Six Sigma DMAIC methodology." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, January 28, 2025. https://doi.org/10.1177/09544062251315327.

Full text
Abstract:
Natural fiber-based polymer matrix composites (PMCs) have recently become increasingly popular because of their reduced product weight, low material costs, and renewable sources. Hybrid composites with different combinations of fibers and matrices are attracting interest from many manufacturing industries and researchers for various applications because of their specialized mechanical and impact properties. Hybridization is one of the most essential and indispensable strategies for improving composite material performance. Hybrid sandwich composites are reviewed with a view to enhancing mechanical properties, concentrating mainly on improving impact properties with increased energy absorption and penetration resistance, making them suitable for advanced applications. The most time-consuming and challenging task is identifying the suitable composite material for a specific application. Selecting a suitable fiber and matrix is difficult for impact applications because impact can cause severe damage to composites used in structural applications. The main objective of this review is to select suitable fiber and matrix combinations for impact applications by exploring the gap in the literature. The Six Sigma DMAIC methodology provides a distinct approach to material selection. The benefit of this methodology is that the choice of material is made through a twofold decision-making process that provides an accurate result. In addition, blending the qualitative approach (Pugh method) with the quantitative approach (analytic hierarchy process) produces more accurate results during the comparison process, making it easier to choose the best material.
APA, Harvard, Vancouver, ISO, and other styles
36

Boghani, Hitesh C., Ramakrishnan Ambur, Marcelo Blumenfeld, et al. "Sensitivity enriched multi-criterion decision making process for novel railway switches and crossings − a case study." European Transport Research Review 13, no. 1 (2021). http://dx.doi.org/10.1186/s12544-020-00467-x.

Full text
Abstract:
Abstract Background Despite their important role in railway operations, switches and crossings (S&amp;C) have changed little since their conception over a century ago. It stands now that the existing designs for S&amp;C are reaching their maximum point of incremental performance improvement, and only a radical redesign can overcome the constraints that current designs impose on railway network capacity. This paper describes the process of producing novel designs for next generation switches and crossings, as part of the S-CODE project. Methods Given the many aspects that govern a successful S&amp;C design, it is critical to adopt multi-criteria decision making (MCDM) processes to identify a specific solution for the next generation of switches and crossings. However, a common shortcoming of these methods is that their results can be heavily influenced by external factors, such as uncertainty in criterion weighting or bias of the evaluators. This paper therefore proposes a process based on the Pugh Matrix method that reduces such biases by using sensitivity analysis to investigate them and improve the reliability of decision making. Results In this paper, we analysed the influence of three different external factors, measuring the sensitivity of the ranking to (a) weightings, (b) organisational bias and (c) discipline bias. The order of preference of the results was only minimally disturbed, while small influences of bias were detected. Conclusions Through this case study, we believe that the paper demonstrates an effective quantitative process that can improve the reliability of decision making.
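The weighting-sensitivity check described in this abstract can be sketched in a few lines: perturb each criterion weight, re-rank the concepts, and measure how often the top choice changes. The concepts, scores, and weights below are hypothetical placeholders, not the S-CODE data.

```python
import random

# Sketch of weight-sensitivity analysis for a Pugh-style ranking.
# Concepts, scores, and weights are hypothetical illustrations.
def ranking(scores_by_concept, weights):
    totals = {c: sum(w * s for w, s in zip(weights, scores))
              for c, scores in scores_by_concept.items()}
    return sorted(totals, key=totals.get, reverse=True)

concepts = {
    "Design A": [1, 0, -1],   # +1/0/-1 judgments per criterion vs. the datum
    "Design B": [1, 1, 0],
    "Design C": [-1, 1, 1],
}
base_weights = [3.0, 2.0, 1.0]
base_rank = ranking(concepts, base_weights)

# Perturb each weight by up to +/-20% and count how often the winner changes
random.seed(0)
trials, changes = 1000, 0
for _ in range(trials):
    perturbed = [w * random.uniform(0.8, 1.2) for w in base_weights]
    if ranking(concepts, perturbed)[0] != base_rank[0]:
        changes += 1
stability = 1 - changes / trials   # fraction of trials keeping the same winner
```

A stability close to 1 indicates the preferred concept is robust to weighting uncertainty, which is the kind of evidence the paper uses to argue the ranking is reliable.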
APA, Harvard, Vancouver, ISO, and other styles
38

Chang, Jing-Rong, Venkateswarlu Nalluri, Long-Sheng Chen, and Shih-Hsun Chen. "Winning customers' hearts and minds using DFSS in the insurance industry." TQM Journal, November 1, 2022. http://dx.doi.org/10.1108/tqm-05-2022-0171.

Full text
Abstract:
Purpose: This study aims to simultaneously examine customer complaints through the proposed novel Design for Six Sigma (DFSS) model, which incorporates creating new insurance services to win customers' hearts and minds in the insurance industry. Design/methodology/approach: A novel DFSS research methodology was proposed that includes the theory of inventive problem solving (TRIZ), Pugh concept selection, the creative product analysis matrix and the importance–satisfaction model (I–S Model). In addition, a real insurance company case was studied to illustrate the effectiveness of the proposed DFSS model. Findings: The novel DFSS model not only can establish new services, but can also dramatically reduce the cost of resolving customer complaints. Practical implications: The findings of this study are useful for insurance companies and other related service providers in devising tailored strategies to offer quality and suitable services to their customers. Originality/value: This study addresses the paucity of research and marketing gaps through the proposed novel DFSS model for the first time in the insurance industry. These findings would enable researchers and practitioners to formulate strategies for solving customer complaints effectively and to develop new services over time.
APA, Harvard, Vancouver, ISO, and other styles
39

"An IoT-Based Ovitrap System Applied for Aedes Mosquito Surveillance." International Journal of Engineering and Advanced Technology 9, no. 1 (2019): 5752–58. http://dx.doi.org/10.35940/ijeat.a3058.109119.

Full text
Abstract:
Since dengue fever has become endemic in Malaysia, there is an urgent need for rapid and efficient estimation of mosquito populations, particularly for early detection and control measures. Ovitrap surveillance is used to determine the density of Aedes mosquitoes and is one of the implemented methods for vector control. In this study, a prototype IoT-based ovitrap system was developed to automatically and simultaneously detect Aedes mosquitoes using NodeMCU as the main IoT platform. The existing sticky ovitrap was modified to integrate the selected IoT components and to ensure its functionality for automatic detection. The study was conducted in two phases: phase 1 evaluated which IoT components should be selected and applied for automatic detection, and phase 2 integrated the selected components, modified the existing ovitrap, and produced the final revised design. SWOT analysis and Pugh chart analysis (also known as the decision matrix method) were used to select the best IoT components and the final ovitrap design. Prototype D was observed to be the best design and was able to detect adult mosquitoes. The lessons learned in developing the IoT-based ovitrap are discussed so that it can be employed for Aedes mosquito surveillance in the future.
APA, Harvard, Vancouver, ISO, and other styles
40

Goodenough, Bryant, Alexander Czarnecki, Darrell Robinette, et al. "Propulsion Electrification Architecture Selection Process and Cost of Carbon Abatement Analysis for Heavy-Duty Off-Road Material Handler." SAE International Journal of Commercial Vehicles 17, no. 3 (2024). http://dx.doi.org/10.4271/02-17-03-0014.

Full text
Abstract:
The heavy-duty off-road industry continues to expand efforts to reduce fuel consumption and CO2e (carbon dioxide equivalent) emissions. Many manufacturers are pursuing electrification to decrease fuel consumption and emissions. Future policies will likely require electrification for CO2e savings, as seen in light-duty on-road vehicles. Electrified architectures vary widely in the heavy-duty off-road space, with parallel hybrids in some applications and series hybrids in others. The diverse applications for different types of equipment mean different electrified configurations are required. Companies must also determine the value in pursuing electrified architectures; this work analyzes a range of electrified architectures, from micro hybrids to parallel hybrids to series hybrids to a BEV, looking at the total cost, total CO2e, and cost per CO2e (cost of carbon abatement, or cost of carbon reduction) using data for the year 2021. This study is focused on a heavy-duty off-road material handler, the Pettibone Cary-Lift 204i. This machine’s specialty application, including events like unloading large oil pipes from a railcar, requires a unique electrified architecture that suits its specific needs. However, the results from this study may be extrapolated to similar machinery to inform fuel savings options across the heavy-duty off-road industry. In this study, a unique electrified architecture is determined for the Cary-Lift. This architecture is informed by multiple rounds of a Pugh matrix decision analysis to select a shortened list of desirable electrified architectures. The shortened list is modeled and simulated to determine CO2e, cost, and cost per CO2e. A final architecture is determined as a plug-in series hybrid that reduces fuel consumption by 65%, targeting the large fuel and CO2e savings that are likely to be required for the future of the heavy-duty off-road industry.
APA, Harvard, Vancouver, ISO, and other styles
41

Collins, Gary, and Lucky Ditaunyane. "A Pugh Matrix framework for selecting effective CALL software in South African schools." Journal for Language Teaching 58, no. 2 (2024). http://dx.doi.org/10.56285/jltvol58iss2a6457.

Full text
Abstract:
In recent years, digital technology integration in education, including language learning, has become widespread. South African schools are increasingly acknowledging the benefits of computer-assisted language learning (CALL) applications. Digital tools in educational materials offer advantages such as time efficiency, enhanced accessibility, flexible learning methods, and inclusivity for disabled individuals. However, there is a lack of appropriate guidelines for assessing and selecting CALL software due to the unique complexities it presents. To address this gap, the study developed a systematic framework, utilising a Pugh Matrix, tailored for South African educational contexts and beyond. This matrix was informed by guiding principles derived from iterative developmental research. A Likert scale survey validated these principles, influencing the weighting of the assessment criteria. These criteria encompass curriculum alignment, feedback mechanisms, socio-cultural relevance, affordability, technical considerations, and pedagogical approaches. The developed Pugh Matrix serves as a comprehensive and objective tool for CALL software selection and evaluation. It empowers schools to make informed decisions aligned with their educational goals, instructional methods, technical needs, and budgetary constraints. Keywords: digital technology integration, computer-assisted language learning (CALL), educational technology, selection and evaluation, Pugh Matrix, language learning, South African schools
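As a rough illustration of the scoring scheme such a framework builds on (the criteria, weights, and candidate packages below are hypothetical, not drawn from the study), a weighted Pugh matrix totals +1/0/-1 judgments of each alternative against a datum:

```python
# Minimal sketch of weighted Pugh matrix scoring. The criteria, weights,
# and candidate packages here are hypothetical placeholders.
def pugh_scores(ratings, weights):
    """ratings maps concept -> {criterion: -1, 0, or +1 vs. the datum}."""
    return {concept: sum(weights[c] * r for c, r in marks.items())
            for concept, marks in ratings.items()}

weights = {"curriculum": 3, "affordability": 2, "technical": 1}
ratings = {
    "Software A": {"curriculum": +1, "affordability": -1, "technical": 0},
    "Software B": {"curriculum": +1, "affordability": +1, "technical": -1},
}
scores = pugh_scores(ratings, weights)   # {"Software A": 1, "Software B": 4}
best = max(scores, key=scores.get)       # "Software B"
```

The weighting step is where survey-derived priorities (such as the Likert-validated principles described above) would enter in practice.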
APA, Harvard, Vancouver, ISO, and other styles
42

Nobari, Niloofar, Ali Mobini Dehkordi, Morteza Akbari, and Hamid Padash. "Innovation intelligence and its role in environmental uncertainty management: a conceptual framework." VINE Journal of Information and Knowledge Management Systems ahead-of-print, ahead-of-print (2020). http://dx.doi.org/10.1108/vjikms-06-2020-0109.

Full text
Abstract:
Purpose: Organizations need actionable knowledge to cope with environmental uncertainty, make effective decisions and develop innovation strategies. Since innovation evolves through generations, the present study aims to unravel and define innovation intelligence in light of this transformation, and to discuss how environmental uncertainty is resolved in each generation. Design/methodology/approach: This article is a conceptual paper that employs a typology and model approach in its research design. Findings: Contexts are categorized into ordered and unordered (according to the Cynefin framework), in which intelligence with prediction and control approaches, respectively, is applied for uncertainty management. In the first three generations of innovation management, namely technology push, market pull and a combination of the two (hybrid), intelligence benefits from a prediction approach, while in the networked (collaborative) generation, intelligence takes advantage of a control approach. Research limitations/implications: The conceptual approach adopted in this research is limited to, and focused on, understanding intelligence and innovation intelligence and presenting preliminary insights into their relationship with uncertainty management. Practical implications: This research guides decision-makers to adopt the appropriate intelligence approach to manage uncertainty during their innovation management process, illustrated by the industry uncertainty matrix and the COVID-19 pandemic situation. Originality/value: This study proposes a typology of intelligence based on different knowledge pyramids. It also introduces innovation intelligence and its relation to knowledge management and environmental uncertainty management, which has not yet been clearly addressed in the literature. Moreover, it determines the uncertainty management approaches for each variant of innovation intelligence.
APA, Harvard, Vancouver, ISO, and other styles
43

Sundari, P. Shanmuga, and M. Subaji. "A comparative study to recognize fake ratings in recommendation system using classification techniques." Intelligent Decision Technologies, September 9, 2021, 1–8. http://dx.doi.org/10.3233/idt-200195.

Full text
Abstract:
The recommendation system is vulnerable to attacks when users are given the liberty to rate items based on their impression of the product or service. Malicious users or competitors may inject fake ratings to degrade the standing of items that are favored by many users. Attacks on the rating matrix are not executed by a single profile: a group of user profiles is injected into the rating matrix to decrease performance. It is highly complex to extract the fake ratings from the mixture of genuine profiles, as they exhibit the same patterns. Identifying the attacked profile and the target item of the fake rating is a challenging task in the big data environment. This paper proposes a unique method to identify attacks in collaborative filtering. The process of extracting fake ratings is carried out in two phases. During the initial phase, doubtful user profiles are identified from the rating matrix. In the following phase, the target item is analysed using the push attack count to reduce the false positive rates from the doubtful user profiles. The proposed model is evaluated on detection rate and false positive rate, considering the filler size and attack size. The experiment was conducted with 6%, 8% and 10% filler sizes and with attack sizes ranging from 0%–100%. Various classification techniques such as decision tree, logistic regression, SVM and random forest methods are used to classify the fake ratings. From the results, it is observed that the SVM model works better with the random and bandwagon attack models, at an average of 4% higher accuracy. Similarly, the decision tree method performs better, at an average of 3%, on the average attack model.
44

Fouquet, Stephan. "Beat the elite or concede defeat? Populist problem (re-)representations of international financial disputes." Review of International Studies, November 26, 2024, 1–24. http://dx.doi.org/10.1017/s0260210524000743.

Full text
Abstract:
Abstract Many populist leaders politicise disputes with external financial ‘elites’, but most are forced by economic pressures to fundamentally change their ‘people-versus-elite’ problem representations and ‘concede defeat’. Notable exceptions are Hungary’s Viktor Orbán, Argentina’s Cristina Kirchner, and Turkey’s Recep Tayyip Erdoğan, who sooner or later resisted strong push-back and defied the IMF or distressed-debt funds. These instances of prolonged populist defiance differ widely across commonly used structural and agential explanatory factors at international, domestic, and individual levels. To explain how Orbán, Kirchner, and Erdoğan managed to ‘beat the elite’, this paper clusters several root causes into a parsimonious framework of two intervening variables. The temporality of strong ‘elite’ push-back and the openness of advisory systems are theorised as shaping distinct cognitive mechanisms of representational continuity or change, through which ‘people-versus-elite’ input is either preserved until – or discarded before – feeding into decision outputs. As the two-by-two matrix of early/later external shocks and open/closed inner circles explains, Orbán did not move beyond marginal representational adjustments; Kirchner’s contingent representational fluidity benefited from opportune situational developments; and Erdoğan defied significant socio-economic cost with fundamental representational continuity. These insights highlight the potential of studying populism at the intersection of foreign policy analysis and international political economy.
45

Fouquet, Stephan. "Beat the elite or concede defeat? Populist problem (re-)representations of international financial disputes." Review of International Studies, November 26, 2024. https://doi.org/10.1017/S0260210524000743.

Full text
Abstract:
Many populist leaders politicise disputes with external financial ‘elites’, but most are forced by economic pressures to fundamentally change their ‘people-versus-elite’ problem representations and ‘concede defeat’. Notable exceptions are Hungary’s Viktor Orbán, Argentina’s Cristina Kirchner, and Turkey’s Recep Tayyip Erdoğan, who sooner or later resisted strong push-back and defied the IMF or distressed-debt funds. These instances of prolonged populist defiance differ widely across commonly used structural and agential explanatory factors at international, domestic, and individual levels. To explain how Orbán, Kirchner, and Erdoğan managed to ‘beat the elite’, this paper clusters several root causes into a parsimonious framework of two intervening variables. The temporality of strong ‘elite’ push-back and the openness of advisory systems are theorised as shaping distinct cognitive mechanisms of representational continuity or change, through which ‘people-versus-elite’ input is either preserved until – or discarded before – feeding into decision outputs. As the two-by-two matrix of early/later external shocks and open/closed inner circles explains, Orbán did not move beyond marginal representational adjustments; Kirchner’s contingent representational fluidity benefited from opportune situational developments; and Erdoğan defied significant socio-economic cost with fundamental representational continuity. These insights highlight the potential of studying populism at the intersection of foreign policy analysis and international political economy.
46

Stockwell, Stephen. "The Manufacture of World Order." M/C Journal 7, no. 6 (2005). http://dx.doi.org/10.5204/mcj.2481.

Full text
Abstract:
Since the fall of the Berlin Wall, and most particularly since 9/11, the government of the United States has used its security services to enforce the order it desires for the world. The US government and its security services appreciate the importance of creating the ideological environment that allows them full scope in their activities. To these ends they have turned to the movie industry, which has not been slow in accommodating the purposes of the state. In establishing the parameters of the War Against Terror after 9/11, one of the Bush Administration’s first stops was Hollywood. White House strategist Karl Rove called what is now described as the Beverley Hills Summit on 19 November 2001, where top movie industry players, including the chairman of the Motion Picture Association of America, Jack Valenti, met to discuss ways in which the movie industry could assist in the War Against Terror. After a ritual assertion of Hollywood’s independence, the movie industry’s powerbrokers signed up to the White House’s agenda: “that Americans must be called to national service; that Americans should support the troops; that this is a global war that needs a global response; that this is a war against evil” (Cooper 13). Good versus evil is, of course, a staple commodity for the movie industry, but storylines never require the good guys to fight fair, so with this statement the White House got what it really wanted: Hollywood’s promise to stay on the big picture in black and white while studiously avoiding the troubling detail in the exercise of extra-judicial force and state-sanctioned murder. This is not to suggest that the movie industry is a monolithic ideological enterprise. Alternative voices like Mike Moore and Susan Sarandon still find space to speak.
But the established economics of the scenario trade are too strong for the movie industry to resist: producers gain access to expensive weaponry to assist production if their story-lines are approved by Pentagon officials (‘Pentagon provides for Hollywood’); the Pentagon finances movie and gaming studios to provide original story formulas to keep their war-gaming relevant to emerging conditions (Lippman); and the Central Intelligence Agency’s “entertainment liaison officer” assists producers in story development and production (Gamson). In this context, the moulding of story-lines to the satisfaction of the Pentagon and CIA is not even an issue, and protestations of Hollywood’s independence is meaningless, as the movie industry pursues patriotic audiences at home and seeks to garner hearts and minds abroad. This is old history made new again. The Cold War in the 1950s saw movies addressing the disruption of world order not so much by Communists as by “others”: sci-fi aliens, schlock horror zombies, vampires and werewolves and mad scientists galore. By the 1960s the James Bond movie franchise, developed by MI5 operative Ian Fleming, saw Western secret agents ‘licensed to kill’ with the justification that such powers were required to deal with threats to world order, albeit by fanciful “others” such as the fanatical scientist Dr. No (1962). The Bond villains provide a catalogue of methods for the disruption of world order: commandeering atomic weapons and space flights, manipulating finance markets, mind control systems and so on. As the Soviet Union disintegrated, Hollywood produced a wealth of material that excused the paranoid nationalism of the security services through the hegemonic masculinity of stars such as Sylvester Stallone, Arnold Schwarzenegger, Steven Seagal and Bruce Willis (Beasley). Willis’s Die Hard franchise (1988/1990/1995) characterised US insouciance in the face of newly created terrorist threats. 
Willis personified the strategy of the Reagan, first Bush and Clinton administrations: a willingness to up the ante, second guess the terrorists and cower them with the display of firepower advantage. But the 1997 instalment of the James Bond franchise saw an important shift in expectations about the source of threats to world order. Tomorrow Never Dies features a media tycoon bent on world domination, manipulating the satellite feed, orchestrating conflicts and disasters in the name of ratings, market share and control. Having dealt with all kinds of Cold War plots, Bond is now confronted with the power of the media itself. As if to mark this shift, Austin Powers: International Man of Mystery (1997) made a mockery of the creatively bankrupt conventions of the spy genre. But it was the politically corrupt use to which the security services could be put that was troubling a string of big-budget filmmakers in the late 90s. In Enemy of the State (1998), an innocent lawyer finds himself targeted by the National Security Agency after receiving evidence of a political murder motivated by the push to extend the NSA’s powers. In Mercury Rising (1998), a renegade FBI agent protects an autistic boy who cracks a top-secret government code and becomes the target for assassins from an NSA-like organisation. Arlington Road (1999) features a college professor who learns too much about terrorist organisations and has his paranoia justified when he becomes the target of a complex operation to implicate him as a terrorist. The attacks on September 11 and subsequent Beverley Hills Summit had a major impact on movie product. Many film studios edited films (Spiderman) or postponed their release (Schwarzenegger’s Collateral Damage) where they were seen as too close to actual events but insufficiently patriotic (Townsend). The Bond franchise returned to its staple of fantastical villains. In Die Another Day (2002), the bad guy is a billionaire with a laser cannon. 
The critical perspective on the security services disappeared overnight. But the most interesting development has been how fantasy has become the key theme in a number of franchises dealing with world order that have had great box-office success since 9/11, particularly Lord of the Rings (2001/2/3) and Harry Potter (2001/2/4). While deeply entrenched in the fantasy genre, each of these franchises also addresses security issues: geo-political control in the Rings franchise; the subterfuges of the Ministry for Muggles in the Potter franchise. Looking at world order through the supernatural lens has particular appeal to audiences confronted with everyday threats. These fantasies follow George Bush’s rhetoric of the “axis of evil” in normalising the struggle for world order in terms of black and white, with the expectation that childish innocence and naïve ingenuity will prevail. Only now, with three years’ hindsight since September 11, can we begin to see a certain amount of self-reflection by disenchanted security staff return to the cinema. In Man on Fire (2004) the burned-out ex-CIA assassin has given up on life but regains some hope while guarding a child, only to have everything disintegrate when the child is killed and he sets out on remorseless revenge. Spartan (2004) features a special forces officer who fails to find a girl and resorts to remorseless revenge as he becomes lost in a maze of security bureaucracies and chance events. Security service personnel once again have their doubts but only find redemption in violence and revenge without remorse. From consideration of films before and after September 11, it becomes apparent that the movie industry has delivered on its promises to the Bush administration. The first response has been the shift to fantasy that, in historical terms, will be seen as akin to the shift to musicals in the Depression.
The flight to fantasy makes the point that complex situations can be reduced to simple moral decisions under the rubric of good versus evil, which is precisely what the US administration requested. The second, more recent response has been to accept disenchantment with the personal costs of the War on Terror but still validate remorseless revenge. Quentin Tarantino’s Kill Bill franchise (2003/4) seeks to do both. Thus the will to world order being fought out in the streets of Iraq is sublimated into fantasy or excused as a natural response to a world of violence. It is interesting to note that television has provided more opportunities for the productive consideration of world order and the security services than the movies since September 11. While programs that have had input from the CIA’s “entertainment liaison officer”, such as the teen-oriented, Buffy-inspired Alias and the quasi-authentic The Agency, provide a no-nonsense justification for the War on Terror (Gamson), others such as 24, West Wing and Threat Matrix have confronted the moral problems of torture and murder in the War on Terrorism. 24 uses reality TV conventions of real-time plot, split-screen exposition, unexpected interventions and a close focus on personal emotions to explore the interactions between a US President and an officer in the Counter Terrorism Unit. The CTU officer does not hesitate to summarily behead a criminal or kill a colleague for operational purposes, and the president takes only a little longer to begin torturing recalcitrant members of his own staff. Similarly, the president in West Wing orders the extra-judicial death of a troublesome player and the team in Threat Matrix are ready to exceed their powers. But in these programs the characters struggle with the moral consequences of their violent acts, particularly as family members are drawn into the plot.
A running theme of Threat Matrix is the debate within the group over the choice between gung-ho militarism and peaceful diplomacy: the consequences of a simplistic, hawkish approach are explored when an Arab-American college professor is wrongfully accused of supporting terrorists and driven towards the terrorists by his very ordeal of wrongful accusation. The world is not black and white. Almost half the US electorate voted for John Kerry. Television still must cater for liberal, and wealthy, demographics who welcome the extended format of weekly television that allows a continuing engagement with questions of good or evil and whether there is any difference between them any more. Against the simple world view of the Bush administration we have the complexities of the real world. References Beasley, Chris. “Reel Politics.” Australian Political Studies Association Conference, University of Adelaide, 2004. Cooper, Marc. “Lights! Cameras! Attack!: Hollywood Enlists.” The Nation 10 December 2001: 13-16. Gamson, J. “Double Agents.” The American Prospect 12.21 (3 December 2001): 38-9. Lippman, John. “Hollywood Casts About for a War Role.” Wall Street Journal 9 November 2001: A1. “Pentagon Provides for Hollywood.” USA Today 29 March 2001. http://www.usatoday.com/life/movies/2001-05-17-pentagon-helps-hollywood.htm. Townsend, Gary. “Hollywood Uses Selective Censorship after 9/11.” e.press 12 December 2002. http://www.scc.losrios.edu/~express/021212hollywood.html.
47

Faroldi, Emilio. "The architecture of differences." TECHNE - Journal of Technology for Architecture and Environment, May 26, 2021, 9–15. http://dx.doi.org/10.36253/techne-11023.

Full text
Abstract:
Following in the footsteps of the protagonists of the Italian architectural debate is a mark of culture and proactivity. The synthesis deriving from the artistic-humanistic factors, combined with the technical-scientific component, comprises the very root of the process that moulds the architect as an intellectual figure capable of governing material processes in conjunction with their ability to know how to skilfully select schedules, phases and actors: these are elements that – when paired with that magical and essential compositional sensitivity – have fuelled this profession since its origins. The act of X-raying the role of architecture through the filter of its “autonomy” or “heteronomy”, at a time when the hybridisation of different areas of knowledge and disciplinary interpenetration is rife, facilitates an understanding of current trends, allowing us to bring the fragments of a debate carved into our culture and tradition up to date. As such, heteronomy – as a condition in which an acting subject receives the norm of its action from outside itself: the matrix of its meaning, coming from ancient Greek, the result of the fusion of the two terms ἕτερος éteros “different, other” and νόμος nómos “law, ordinance” – suggests the existence of a dual sentiment now pervasive in architecture: the sin of self-reference and the strength of depending on other fields of knowledge. Difference, interpreted as a value, and the ability to establish relationships between different points of observation become moments of a practice that values the process and method of affirming architecture as a discipline. The term “heteronomy”, used in opposition to “autonomy”, has – from the time of Kant onwards – taken on a positive value connected to the mutual respect between reason and creativity, exact science and empirical approach, contamination and isolation, introducing the social value of its existence every time that it returns to the forefront. At
the 1949 conference in Lima, Ernesto Nathan Rogers spoke on combining the principle of “Architecture is an Art” with the demands of a social dimension of architecture: «Alberti, in the extreme precision of his thought, admonishes us that the idea must be translated into works and that these must have a practical and moral purpose in order to adapt harmoniously ‘to the use of men’, and I would like to point out the use of the plural of ‘men’, society. The architect is neither a passive product nor a creator completely independent of his era: society is the raw material that he transforms, giving it an appearance, an expression, and the consciousness of those ideals that, without him, would remain implicit. Our prophecy, like that of the farmer, already contains the seeds for future growth, as our work also exists between heaven and earth. Poetry, painting, sculpture, dance and music, even when expressing the contemporary, are not necessarily limited within practical terms. But we architects, who have the task of synthesising the useful with the beautiful, must feel the fundamental drama of existence at every moment of our creative process, because life continually puts practical needs and spiritual aspirations at odds with one another. 
We cannot reject either of these necessities, because a merely practical or moralistic position denies the full value of architecture to the same extent that a purely aesthetic position would: we must mediate one position with the other» (Rogers, 1948). Rogers discusses at length the relationship between instinctive forces and knowledge acquired through culture, along with his thoughts on the role played by study in an artist’s training. It is in certain debates that have arisen within the “International Congresses of Modern Architecture” that the topic of architecture as a discipline caught between self-sufficiency and dependence acquires a certain centrality within the architectural context: in particular, in this scenario, the theme of the “autonomy” and “heteronomy” of pre-existing features of the environment plays a role of strategic importance. Arguments regarding the meaning of form in architecture and the need for liberation from heteronomous influences did not succeed in undermining the idea of an architecture capable of influencing the governing of society as a whole, thanks to an attitude very much in line with Rogers’ own writings. The idea of a project as the result of the fusion of an artistic idea and pre-existing features of an environment formed the translation of the push to coagulate the antithetical forces striving for a reading of the architectural work that was at once autonomous and heteronomous, as well as linked to geographical, cultural, sociological and psychological principles. The CIAM meeting in Otterlo was attended by Ignazio Gardella, Ernesto Nathan Rogers, Vico Magistretti and Giancarlo De Carlo as members of the Italian contingent: the architects brought one project each to share with the conference and comment on as a manifesto.
Ernesto Nathan Rogers, who presented the Velasca Tower, and Giancarlo De Carlo, who presented a house in Matera in the Spine Bianche neighbourhood, were openly criticised as none of the principles established by the CIAM were recognisable in their work any longer, and De Carlo’s project represented a marked divergence from a consolidated method of designing and building in Matera. In this cultural condition, Giancarlo De Carlo – in justifying the choices he had made – even went so far as to say: «my position was not at all a flight from architecture, for example in sociology. I cannot stand those who, paraphrasing what I have said, dress up as politicians or sociologists because they are incapable of creating architecture. Architecture is – and cannot be anything other than – the organisation and form of physical space. It is not autonomous, it is heteronomous» (De Carlo, 2001). Even more so than in the past, it is not possible today to imagine an architecture encapsulated entirely within its own enclosure, autoimmune, averse to any contamination or relationships with other disciplinary worlds: architecture is the world and the world is the sum total of our knowledge. Architecture triggers reactions and phenomena: it is not solely and exclusively the active and passive product of a material work created by man. «We believed in the heteronomy of architecture, in its necessary dependence on the circumstances that produce it, in its intrinsic need to exist in harmony with history, with the happenings and expectations of individuals and social groups, with the arcane rhythms of nature.
We denied that the purpose of architecture was to produce objects, and we argued that its fundamental role was to trigger processes of transformation of the physical environment that are capable of contributing to the improvement of the human condition» (De Carlo, 2001). Productive and cultural reinterpretations place the discipline of architecture firmly at the centre of the critical reconsideration of places for living and working. Consequently, new interpretative models continue to emerge which often highlight the instability of built architecture with the lack of a robust theoretical apparatus, demanding the sort of “technical rationality” capable of restoring the centrality of the act of construction, through the contribution of actions whose origins lie precisely in other subject areas. Indeed, the transformation of the practice of construction has resulted in direct changes to the structure of the nature of the knowledge of it, to the role of competencies, to the definition of new professional skills based on the demands emerging not just from the production system, but also from the socio-cultural system. The architect cannot disregard the fact that the making of architecture does not burn out by means of some implosive dynamic; rather, it is called upon to engage with the multiple facets and variations that the cognitive act of design itself implies, bringing into play a theory of disciplines which – to varying degrees and according to different logics – offer their significant contribution to the formation of the design and, ultimately, the work. As Álvaro Siza claims, «The architect is not a specialist. The sheer breadth and variety of knowledge that practicing design encompasses today – its rapid evolution and progressive complexity – in no way allow for sufficient knowledge and mastery.
Establishing connections – pro-jecting [from Latin proicere, ‘to stretch out’] – is their domain, a place of compromise that is not tantamount to conformism, of navigation of the web of contradictions, the weight of the past and the weight of the doubts and alternatives of the future, aspects that explain the lack of a contemporary treatise on architecture. The architect works with specialists. The ability to chain things together, to cross bridges between fields of knowledge, to create beyond their respective borders, beyond the precarity of inventions, requires a specific education and stimulating conditions. [...] As such, architecture is risk, and risk requires impersonal desire and anonymity, starting with the merging of subjectivity and objectivity. In short, a gradual distancing from the ego. Architecture means compromise transformed into radical expression, in other words, a capacity to absorb the opposite and overcome contradiction. Learning this requires an education in search of the other within each of us» (Siza, 2008). We are seeing the coexistence of contrasting, often extreme, design trends aimed at recementing the historical and traditional mould of construction by means of the constant reproposal of the characteristics of “persistence” that long-established architecture, by its very nature, promotes, and at decrypting the evolutionary traits of architecture – markedly immaterial nowadays – that society promotes as phenomena of everyday living. Speed, temporariness, resilience, flexibility: these are just a few fragments. In other words, we indicate a direction which immediately composes and anticipates innovation as a characterising element, describing its stylistic features, materials, languages and technologies, and only later on do we tend to outline the space that these produce: what emerges is a largely anomalous path that goes from “technique” to “function” – by way of “form” – denying the circularity of the three factors at
play. The threat of a short-circuit deriving from discourse that exceeds action – in conjunction with a push for standardisation aimed at asserting the dominance of construction over architecture, once again echoing the ideas posited by Rogers – may yet be able to find a lifeline cast through the attempt to merge figurative research with technology in a balanced way, in the wake of the still-relevant example of the Bauhaus or by emulating the thinking of certain masters of modern Italian architecture who worked during that post-war period so synonymous with physical – and, at the same time, moral – reconstruction. These architectural giants’ aptitude for technical and formal transformation and adaptation can be held up as paradigmatic examples of methodological choice consistent with their high level of mastery over the design process and the rhythm of its phases. In all this exaltation of the outcome, the power of the process is often left behind in a haze: in the uncritical celebration of the architectural work, the method seems to dissolve entirely into the finished product. Technical innovation and disciplinary self-referentiality would seem to deny the concepts of continuity and transversality by means of a constant action of isolation and an insufficient relationship with itself: conversely, the act of designing, as an operation which involves selecting elements from a vast heritage of knowledge, cannot exempt itself from dealing in the variables of a functional, formal, material and linguistic nature – all of such closely intertwined intents – that have over time represented the energy of theoretical formulation and of the works created. For years, the debate in architecture has concentrated on the synergistic or contrasting dualism between cultural approaches linked to venustas and firmitas.
Kenneth Frampton, with regard to the interpretative pair of “tectonics” and “form”, notes the existence of a dual trend that is both identifiable and contrasting: namely the predisposition to favour the formal sphere as the predominant one, rejecting all implications on the construction, on the one hand; and the tendency to celebrate the constructive matrix as the generator of the morphological signature – emphasised by the ostentation of architectural detail, including that of a technological matrix – on the other. The design of contemporary architecture is enriched with sprawling values that are often fundamental, yet at times even damaging to the successful completion of the work: it should identify the moment of coagulation within which the architect goes in pursuit of balance between all the interpretative categories that make it up, espousing the Vitruvian meaning, according to which practice is «the continuous reflection on utility» and theory «consists of being able to demonstrate and explain the things made with technical ability in terms of the principle of proportion» (Vitruvius Pollio, 15 BC). Architecture will increasingly be forced to demonstrate how it represents an applied and intellectual activity of a targeted synthesis, of a complex system within which it is not only desirable, but indeed critical, for the cultural, social, environmental, climatic, energy-related, geographical and many other components involved in it to interact proactively, together with the more spatial, functional and material components that are made explicit in the final construction itself through factors borrowed from neighbouring fields that are not endogenous to the discipline of architecture alone. Within a unitary vision that exists parallel to the transcalarity that said vision presupposes, the technology of architecture – as a discipline often called upon to play the role of a collagen of skills, binding them together – acts as an instrument of
domination within which science and technology interpret the tools for the translation of man’s intellectual needs, expressing the most up-to-date principles of contemporary culture. Within the concept of tradition – as inferred from its evolutionary character – form, technique and production, in their historical “continuity” and not placed in opposition to one another, make up the fields of application by which, in parallel, research proceeds with a view to ensuring a conforming overall design. The “technology of architecture” and “technological design” give the work of architecture its personal hallmark: a sort of DNA to be handed down to future generations, in part as a discipline dedicated to amalgamating the skills and expertise derived from other dimensions of knowledge. In the exercise of design, the categories of urban planning, composition, technology, structure and systems engineering converge, the result increasingly accentuated by multidisciplinary nuances in search of a sense of balance between the parts: a setup founded upon simultaneity and heteronomous logic in the study of variables, by means of translations, approaches and skills as expressions of multifaceted identities. «Architects can influence society with their theories and works, but they are not capable of completing any such transformation on their own, and end up being the interpreters of an overbearing historical reality under which, if the strongest and most honest do not succumb, that therefore means that they alone represent the value of a component that is algebraically added to the others, all acting in the common field» (Rogers, 1951). Construction, in this context, identifies the main element of the transmission of continuity in architecture, placing the “how” at the point of transition between past and future, rather than making it independent of any historical evolution.
Architecture determines its path within a heteronomous practice of construction through an effective distinction between the strength of the principles and codes inherent to the discipline – long consolidated thanks to sedimented innovations – and the energy of experimentation in its own right.

Architecture will have to seek out and affirm its own identity, its validity as a discipline that is at once scientific and poetic, its representation in the harmonies, codes and measures that history has handed down to us, along with the pressing duty of updating them in a way that is long overdue. The complexity of the architectural field occasionally expresses restricted forms of treatment bound to narrow disciplinary areas or, conversely, others that are excessively frayed, tending towards an eclecticism so vast that it prevents the tracing of any discernible cultural perimeter.

In spite of the complex phenomenon that characterises the transformations that involve the status of the project and the figure of the architect themselves, it is a matter of urgency to attempt to renew the interpretation of the activity of design and architecture as a coherent system rather than a patchwork of components. «Contemporary architecture tends to produce objects, even though its most concrete purpose is to generate processes. This is a falsehood that is full of consequences because it confines architecture to a very limited band of its entire spectrum; in doing so, it isolates it, exposing it to the risks of subordination and delusions of grandeur, pushing it towards social and political irresponsibility. The transformation of the physical environment passes through a series of events: the decision to create a new organised space, detection, obtaining the necessary resources, defining the organisational system, defining the formal system, technological choices, use, management, technical obsolescence, reuse and – finally – physical obsolescence. 
This concatenation is the entire spectrum of architecture, and each link in the chain is affected by what happens in all the others.

It is also the case that the cadence, scope and intensity of the various bands can differ according to the circumstances and in relation to the balances or imbalances within the contexts to which the spectrum corresponds. Moreover, each spectrum does not conclude at the end of the chain of events, because the signs of its existence – ruins and memory – are projected onto subsequent events. Architecture is involved with the entirety of this complex development: the design that it expresses is merely the starting point for a far-reaching process with significant consequences» (De Carlo, 1978).

The contemporary era proposes the dialectic between specialisation, the coordination of ideas and actions, the relationship between actors, phases and disciplines: the practice of the organisational culture of design circumscribes its own code in the coexistence and reciprocal exploitation of specialised fields of knowledge and the discipline of synthesis that is architecture.

With the revival of the global economy on the horizon, the dematerialisation of the working practice has entailed significant changes in the productive actions and social relationships that coordinate the process. Despite a growing need to implement skills and means of coordination between professional actors, disciplinary fields and sectors of activity, architectural design has become the emblem of the action of synthesis. 
This is a representation of society which, having developed over the last three centuries, from the division of social sciences that once defined it as a “machine”, an “organism” and a “system”, is now defined by the concept of the “network” or, more accurately, by that of the “system of networks”, in which a person’s desire to establish relationships places them within a multitude of social spheres.

The “heteronomy” of architecture, between “hybridisation” and “contamination of knowledge”, is to be seen not only as an objective fact, but also, crucially, as a concept aimed at providing the discipline with new and broader horizons, capable of putting it in a position of serenity, energy and courage allowing it to tackle the challenges that the cultural, social and economic landscape is increasingly throwing at the heart of our contemporary world.
48

Kustritz, Anne. "Transmedia Serial Narration: Crossroads of Media, Story, and Time." M/C Journal 21, no. 1 (2018). http://dx.doi.org/10.5204/mcj.1388.

Full text
Abstract:
The concept of transmedia storyworlds unfolding across complex serial narrative structures has become increasingly important to the study of modern media industries and audience communities. Yet, the precise connections between transmedia networks, serial structures, and narrative processes often remain underdeveloped. The dispersion of potential story elements across a diverse collection of media platforms and technologies prompts questions concerning the function of seriality in the absence of fixed instalments, the meaning of narrative when plot is largely a personal construction of each audience member, and the nature of storytelling in the absence of a unifying author, or when authorship itself takes on a serial character. This special issue opens a conversation on the intersection of these three concepts and their implications for a variety of disciplines, artistic practices, and philosophies. By re-thinking these concepts from fresh perspectives, the collection challenges scholars to consider how a wide range of academic, aesthetic, and social phenomena might be productively thought through using the overlapping lenses of transmedia, seriality, and narrativity. Thus, the collection gathers scholars from life-writing, sport, film studies, cultural anthropology, fine arts, media studies, and literature, all of whom find common ground at this fruitful crossroads. This breadth also challenges the narrow use of transmedia as a specialized term to describe current developments in corporate mass media products that seek to exploit the affordances of hybrid digital media environments. Many prominent scholars, including Marie-Laure Ryan and Henry Jenkins, acknowledge that a basic definition of transmedia as stories with extensions and reinterpretations in numerous media forms includes the oldest kinds of human expression, such as the ancient storyworlds of Arthurian legend and The Odyssey. 
Yet, what Jenkins terms “top-down” transmedia—that is, pre-planned and often corporate transmedia—has received a disproportionate share of scholarly attention, with modern franchises like The Matrix, the Marvel universe, and Lost serving as common exemplars (Flanagan, Livingstone, and McKenny; Hadas; Mittell; Scolari). Thus, many of the contributions to this issue push the boundaries of what has commonly been studied as transmedia as well as the limits of what may be considered a serial structure or even a story. For example, these papers imagine how an autobiography may also be a digital concept album unfolding in reverse, how participatory artistic performances may unfold in unpredictable instalments across physical and digital space, and how studying sports fandom as a long series of transmedia narrative elements encourages scholars to grapple with the unique structures assembled by audiences of non-fictional story worlds. Setting these experimental offerings into dialogue with entries that approach the study of transmedia in a more established manner provides the basis for building bridges between such recognized conversations in new media studies and potential collaborations with other disciplines and subfields of media studies.

This issue builds upon papers collected from four years of the International Transmedia Serial Narration Seminar, which I co-organized with Dr. Claire Cornillon, Assistant Professor (Maîtresse de Conférences) of comparative literature at Université de Nîmes. The seminar held sessions in Paris, Le Havre, Rouen, Amsterdam, and Utrecht, with interdisciplinary speakers from the USA, Australia, France, Belgium, and the Netherlands. As a transnational, interdisciplinary project intended to cross both theoretical and physical boundaries, the seminar aimed to foster exchange between academic conversations that can become isolated not only within disciplines, but also within national and linguistic borders. 
The seminar thus sought to enhance academic mobility between both people and ideas, and the digital, open-access publication of the collected papers alongside additional scholarly interlocutors serves to broaden the seminar’s goals of creating a border-crossing conversation. After two special issues primarily collecting the French language papers in TV/Series (2014) and Revue Française des Sciences de l’Information et de la Communication (2017), this issue seeks to share the Transmedia Serial Narration project with a wider audience by publishing the remaining English-language papers, accompanied by several other contributions in dialogue with the seminar’s themes. It is our hope that this collection will invite a broad international audience to creatively question the meaning of transmedia, seriality, and narrativity both historically and in the modern, rapidly changing, global and digital media environment.

Several articles in the issue illuminate existing debates and common case studies in transmedia scholarship by comparing theoretical models to the much more slippery reality of a media form in flux. Thus, Mélanie Bourdaa’s feature article, “From One Medium to the Next: How Comic Books Create Richer Storylines,” examines theories of narrative complexity and transmedia by scholars including Henry Jenkins, Derek Johnson, and Jason Mittell to then propose a new typology of extensions to accommodate the lived reality expressed by producers of transmedia. Because her interviews with artists and writers emphasize the co-constitutive nature of economic and narrative considerations in professionals’ decisions, Bourdaa’s typology can offer researchers a tool to clarify the marketing and narrative layers of transmedia extensions. 
As such, her classification system further illuminates what is particular about forms of corporate transmedia with a profit orientation, which may not be shared by non-profit, collective, and independently produced transmedia projects.

Likewise, Radha O’Meara and Alex Bevan map existing scholarship on transmedia to point out the limitations of deriving theory only from certain forms of storytelling. In their article “Transmedia Theory’s Author Discourse and Its Limitations,” O’Meara and Bevan argue that scholars have preferred to focus on examples of transmedia with a strong central author-figure or that they may indeed help to rhetorically shore up the coherency of transmedia authorship through writing about transmedia creators as auteurs. Tying their critique to the established weaknesses of auteur theory associated with classic commentaries like Roland Barthes’ “Death of the Author” and Foucault’s “What is an Author?”, O’Meara and Bevan explain that this focus on transmedia creators as authority figures reinforces hierarchical, patriarchal understandings of the creative process and excludes from consideration all those unauthorized transmedia extensions through which audiences frequently engage and make meaning from transmedia networks. They also emphasize the importance of constructing academic theories of transmedia authorship that can accommodate collaborative forms of hybrid amateur and professional authorship, as well as tolerate the ambiguities of “authorless” storyworlds that lack clear narrative boundaries. 
O’Meara and Bevan argue that such theories will help to break down gendered power hierarchies in Hollywood, which have long allowed individual men to “claim credit for the stories and for all the work that many people do across various sectors and industries.”

Dan Hassler-Forest likewise considers existing theory and a corporate case study in his examination of analogue echoes within a modern transmedia serial structure by mapping the storyworld of Twin Peaks (1990). His article, “‘Two Birds with One Stone’: Transmedia Serialisation in Twin Peaks,” demonstrates the push-and-pull between two contemporary TV production strategies: first, the use of transmedia elements that draw viewers away from the TV screen toward other platforms, and second, the deployment of strategies that draw viewers back to the TV by incentivizing broadcast-era appointment viewing. Twin Peaks offers a particularly interesting example of the manner in which these strategies intertwine partly because it already offered viewers an analogue transmedia experience in the 1990s by splitting story elements between TV episodes and books. Unlike O’Meara and Bevan, who elucidate the growing prominence of transmedia auteurs who lend rhetorical coherence to dispersed narrative elements, Hassler-Forest argues that this older analogue transmedia network capitalized upon the dilution of authorial authority, due to the distance between TV and book versions, to negotiate tensions between the producers’ competing visions. 
Hassler-Forest also notes that the addition of digital soundtrack albums further complicates the serial nature of the story by using the iTunes and TV distribution schedules to incentivize repeated sequential consumption of each element, thus drawing modern viewers to the TV screen, then the computer screen, and then back again.

Two articles offer a concrete test of these theoretical perspectives by utilizing ethnographic participant-observation and interviewing to examine how audiences actually navigate diffuse, dispersed storyworlds. For example, Céline Masoni’s article, “From Seriality to Transmediality: A Socio-narrative Approach of a Skilful and Literate Audience,” documents fans’ highly strategic participatory practices. From her observations of and interviews with fans, Masoni theorizes the types of media literacy and social as well as technological competencies cultivated through transmedia fan practices. Olivier Servais and Sarah Sepulchre’s article similarly describes a long-term ethnography of fan transmedia activity, including interviews with fans and participant-observation of the MMORPG (Massively Multiplayer Online Role-Playing Game) Game of Thrones Ascent (2013). Servais and Sepulchre find that most people in their interviews are not “committed” fans, but rather casual readers and viewers who follow transmedia extensions sporadically. By focusing on this group, they widen the existing research which often focuses on or assumes a committed audience like the skilful and literate fans discussed by Masoni.

Servais and Sepulchre’s results suggest that these viewers may be less likely to seek out all transmedia extensions but readily accept and adapt unexpected elements, such as the media appearances of actors, to add to their serial experiences of the storyworld. 
In a parallel research protocol observing the Game of Thrones Ascent MMORPG, Servais and Sepulchre report that the most highly-skilled players exhibit few behaviours associated with immersion in the storyworld, but the majority of less-skilled players use their gameplay choices to increase immersion by, for example, choosing a player name that evokes the narrative. As a result, Servais and Sepulchre shed light upon the activities of transmedia audiences who are not necessarily deeply committed to the entire transmedia network, and yet who nonetheless make deliberate choices to collect their preferred narrative elements and increase their own immersion.

Two contributors elucidate forms of transmedia that upset the common emphasis on storyworlds with film or TV as the core property or “mothership” (Scott). In her article “Transmedia Storyworlds, Literary Theory, Games,” Joyce Goggin maps the history of intersections between experimental literature and ludology. As a result, she questions the continuing dichotomy between narratology and ludology in game studies to argue for a more broadly transmedia strategy, in which the same storyworld may be simultaneously narrative and ludic. Such a theory can incorporate a great deal of what might otherwise be unproblematically treated as literature, opening up the book to interrogation as an inherently transmedial medium.

L.J. Maher similarly examines the serial narrative structures that may take shape in a transmedia storyworld centred on music rather than film or TV. In her article “You Got Spirit, Kid: Transmedial Life-Writing Across Time and Space,” Maher charts the music, graphic novels, and fan interactions that comprise the Coheed and Cambria band storyworld. In particular, Maher emphasizes the importance of autobiography for Coheed and Cambria, which bridges between fictional and non-fictional narrative elements. 
This interplay remains undertheorized within transmedia scholarship, although a few have begun to explicate the use of transmedia life-writing in an activist context (Cati and Piredda; Van Luyn and Klaebe; Riggs). As a result, Maher widens the scope of existing transmedia theory by more thoroughly connecting fictional and autobiographical elements in the same storyworld and considering how serial transmedia storytelling structures may differ when the core component is music.

The final three articles take a more experimental approach that actively challenges the existing boundaries of transmedia scholarship. Catherine Lord’s article, “Serial Nuns: Michelle Williams Gamaker’s The Fruit Is There to Be Eaten as Serial and Trans-serial,” explores the unique storytelling structures of a cluster of independent films that traverse time, space, medium, and gender. Although not a traditional transmedia project, since the network includes a novel and film adaptations and extensions by different directors as well as real-world locations and histories, Lord challenges transmedia theorists to imagine storyworlds that include popular history, independent production, and spatial performances and practices. Lord argues that the main character’s trans identity provides an embodied and theoretical pivot within the storyworld, which invites audiences to accept a position of radical mobility where all fixed expectations about the separation between categories of flora and fauna, centre and periphery, the present and the past, as well as authorized and unauthorized extensions, dissolve.

In his article “Non-Fiction Transmedia: Seriality and Forensics in Media Sport,” Markus Stauff extends the concept of serial transmedia storyworlds to sport, focusing on an audience-centred perspective. For the most part, transmedia has been theorized with fictional storyworlds as the prototypical examples. 
A growing number of scholars, including Arnau Gifreu-Castells and Siobhan O'Flynn, enrich our understanding of transmedia storytelling by exploring non-fiction examples, but these are commonly restricted to the documentary genre (Freeman; Gifreu-Castells, Misek, and Verbruggen; Karlsen; Kerrigan and Velikovsky). Very few scholars comment on the transmedia nature of sport coverage and fandom, and when they do so it is often within the framework of transmedia news coverage (Gambarato, Alzamora, and Tárcia; McClearen; Waysdorf). Stauff’s article thus provides a welcome addition to the existing scholarship in this field by theorizing how sport fans construct a user-centred serial transmedia storyworld by piecing together narrative elements across media sources, embodied experiences, and the serialized ritual of sport seasons. In doing so, he points toward ways in which non-fiction transmedia may significantly differ from fictional storyworlds, but he also enriches our understanding of an audience-centred perspective on the construction of transmedia serial narratives.

In his artistic practice, Robert Lawrence may most profoundly stretch the existing parameters of transmedia theory. Lawrence’s article, “Locate, Combine, Contradict, Iterate: Serial Strategies for PostInternet Art,” details his decades-long interrogation of transmedia seriality through performative and participatory forms of art that bridge digital space, studio space, and public space. While theatre and fine arts have often been considered through the theoretical lens of intermediality (Bennett, Boenisch, Kattenbelt, Vandsoe), the nexus of transmedia, seriality, and narrative enables Lawrence to describe the complex, interconnected web of planned and unplanned extensions of his hybrid digital and physical installations, which often last for decades and incorporate a global scope. 
Lawrence thus takes the strategies of engagement that are perhaps more familiar to transmedia theorists from corporate viral marketing campaigns and turns them toward civic ends (Anyiwo, Bourdaa, Hardy, Hassler-Forest, Scolari, Sokolova, Stork). As such, Lawrence’s artistic practice challenges theorists of transmedia and intermedia to consider the kinds of social and political “interventions” that artists and citizens can stage through the networked possibilities of transmedia expression and how the impact of such projects can be amplified through serial repetition.

Together, the whole collection opens new pathways for transmedia scholarship, more deeply explores how transmedia narration complicates understandings of seriality, and constructs an international, interdisciplinary dialogue that brings often isolated conversations into contact. In particular, this issue enriches the existing scholarship on independent, artistic, and non-fiction transmedia, while also proposing some important limitations, exceptions, and critiques to existing scholarship featuring corporate transmedia projects with a commercial, top-down structure and a strong auteur-like creator. These diverse case studies and perspectives enable us to understand more inclusively the structures and social functions of transmedia in the pre-digital age, to theorize more robustly how audiences experience transmedia in the current era of experimentation, and to imagine more broadly a complex future for transmedia seriality wherein professionals, artists, and amateurs all engage in an iterative, inclusive process of creative and civic storytelling, transcending artificial borders imposed by discipline, nationalism, capitalism, and medium.

References

Anyiwo, U. Melissa. "It’s Not Television, It’s Transmedia Storytelling: Marketing the ‘Real’ World of True Blood." True Blood: Investigating Vampires and Southern Gothic. Ed. Brigid Cherry. New York: IB Tauris, 2012. 157-71.
Barthes, Roland. "The Death of the Author." Image, Music, Text. Trans. Stephen Heath. Basingstoke: Macmillan, 1988. 142-48.
Bennett, Jill. "Aesthetics of Intermediality." Art History 30.3 (2007): 432-450.
Boenisch, Peter M. "Aesthetic Art to Aisthetic Act: Theatre, Media, Intermedial Performance." (2006): 103-116.
Bourdaa, Melanie. "This Is Not Marketing. This Is HBO: Branding HBO with Transmedia Storytelling." Networking Knowledge: Journal of the MeCCSA Postgraduate Network 7.1 (2014).
Cati, Alice, and Maria Francesca Piredda. "Among Drowned Lives: Digital Archives and Migrant Memories in the Age of Transmediality." a/b: Auto/Biography Studies 32.3 (2017): 628-637.
Flanagan, Martin, Andrew Livingstone, and Mike McKenny. The Marvel Studios Phenomenon: Inside a Transmedia Universe. New York: Bloomsbury Publishing, 2016.
Foucault, Michel. "Authorship: What Is an Author?" Screen 20.1 (1979): 13-34.
Freeman, Matthew. "Small Change – Big Difference: Tracking the Transmediality of Red Nose Day." VIEW Journal of European Television History and Culture 5.10 (2016): 87-96.
Gambarato, Renira Rampazzo, Geane C. Alzamora, and Lorena Peret Teixeira Tárcia. "2016 Rio Summer Olympics and the Transmedia Journalism of Planned Events." Exploring Transmedia Journalism in the Digital Age. Hershey, PA: IGI Global, 2018. 126-146.
Gifreu-Castells, Arnau. "Mapping Trends in Interactive Non-fiction through the Lenses of Interactive Documentary." International Conference on Interactive Digital Storytelling. Berlin: Springer, 2014.
Gifreu-Castells, Arnau, Richard Misek, and Erwin Verbruggen. "Transgressing the Non-fiction Transmedia Narrative." VIEW Journal of European Television History and Culture 5.10 (2016): 1-3.
Hadas, Leora. "Authorship and Authenticity in the Transmedia Brand: The Case of Marvel's Agents of SHIELD." Networking Knowledge: Journal of the MeCCSA Postgraduate Network 7.1 (2014).
Hardy, Jonathan. "Mapping Commercial Intertextuality: HBO’s True Blood." Convergence 17.1 (2011): 7-17.
Hassler-Forest, Dan. "Skimmers, Dippers, and Divers: Campfire’s Steve Coulson on Transmedia Marketing and Audience Participation." Participations 13.1 (2016): 682-692.
Jenkins, Henry. “Transmedia 202: Further Reflections.” Confessions of an Aca-Fan. 31 July 2011. <http://henryjenkins.org/blog/2011/08/defining_transmedia_further_re.html>.
———. “Transmedia Storytelling 101.” Confessions of an Aca-Fan. 21 Mar. 2007. <http://henryjenkins.org/blog/2007/03/transmedia_storytelling_101.html>.
———. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006.
Johnson, Derek. Media Franchising: Creative License and Collaboration in the Culture Industries. New York: New York UP, 2013.
Karlsen, Joakim. "Aligning Participation with Authorship: Independent Transmedia Documentary Production in Norway." VIEW Journal of European Television History and Culture 5.10 (2016): 40-51.
Kattenbelt, Chiel. "Theatre as the Art of the Performer and the Stage of Intermediality." Intermediality in Theatre and Performance 2 (2006): 29-39.
Kerrigan, Susan, and J. T. Velikovsky. "Examining Documentary Transmedia Narratives through The Living History of Fort Scratchley Project." Convergence 22.3 (2016): 250-268.
Van Luyn, Ariella, and Helen Klaebe. "Making Stories Matter: Using Participatory New Media Storytelling and Evaluation to Serve Marginalized and Regional Communities." Creative Communities: Regional Inclusion and the Arts. Intellect Press, 2015. 157-173.
McClearen, Jennifer. "‘We Are All Fighters’: The Transmedia Marketing of Difference in the Ultimate Fighting Championship (UFC)." International Journal of Communication 11 (2017): 18.
Mittell, Jason. "Playing for Plot in the Lost and Portal Franchises." Eludamos: Journal for Computer Game Culture 6.1 (2012): 5-13.
O'Flynn, Siobhan. "Documentary's Metamorphic Form: Webdoc, Interactive, Transmedia, Participatory and Beyond." Studies in Documentary Film 6.2 (2012): 141-157.
Riggs, Nicholas A. "Leaving Cancerland: Following Bud at the End of Life." Storytelling, Self, Society 10.1 (2014): 78-92.
Ryan, Marie-Laure. “Transmedial Storytelling and Transfictionality.” Poetics Today 34.3 (2013): 361-388. <https://doi.org/10.1215/03335372-2325250>.
Scolari, Carlos Alberto. "Transmedia Storytelling: Implicit Consumers, Narrative Worlds, and Branding in Contemporary Media Production." International Journal of Communication 3 (2009).
Scott, Suzanne. “Who’s Steering the Mothership: The Role of the Fanboy Auteur in Transmedia Storytelling.” The Participatory Cultures Handbook. Eds. Aaron Delwiche and Jennifer Henderson. New York: Routledge, 2013. 43-53.
Sokolova, Natalia. "Co-opting Transmedia Consumers: User Content as Entertainment or ‘Free Labour’? The Cases of STALKER. and Metro 2033." Europe-Asia Studies 64.8 (2012): 1565-1583.
Stork, Matthias. "The Cultural Economics of Performance Space: Negotiating Fan, Labor, and Marketing Practice in Glee's Transmedia Geography." Transformative Works & Cultures 15 (2014).
Waysdorf, Abby. "My Football Fandoms, Performance, and Place." Transformative Works & Cultures 18 (2015).
Vandsoe, Anette. "Listening to the World. Sound, Media and Intermediality in Contemporary Sound Art." SoundEffects – An Interdisciplinary Journal of Sound and Sound Experience 1.1 (2011): 67-81.
49

Holleran, Samuel. "Better in Pictures." M/C Journal 24, no. 4 (2021). http://dx.doi.org/10.5204/mcj.2810.

Full text
Abstract:
While the term “visual literacy” has grown in popularity in the last 50 years, its meaning remains nebulous. It is described variously as: a vehicle for aesthetic appreciation, a means of defence against visual manipulation, a sorting mechanism for an increasingly data-saturated age, and a prerequisite to civic inclusion (Fransecky 23; Messaris 181; McTigue and Flowers 580). Scholars have written extensively about the first three subjects but there has been less research on how visual literacy frames civic life and how it might help the public as a tool to address disadvantage and assist in removing social and cultural barriers. This article examines a forerunner to visual literacy in the push to create an international symbol language born out of popular education movements, a project that fell short of its goals but still left a considerable impression on graphic media. This article, then, presents an analysis of visual literacy campaigns in the early postwar era. These campaigns did not attempt to invent a symbolic language but posited that images themselves served as a universal language in which students could receive training. Of particular interest is how the concept of visual literacy has been mobilised as a pedagogical tool in design, digital humanities and in broader civic education initiatives promoted by Third Space institutions. Behind the creation of new visual literacy curricula is the idea that images can help anchor a world community, supplementing textual communication.

Figure 1: Visual Literacy Yearbook. Montebello Unified School District, USA, 1973.

Shedding Light: Origins of the Visual Literacy Frame

The term “visual literacy” came to the fore in the early 1970s on the heels of mass literacy campaigns. The educators, creatives and media theorists who first advocated for visual learning linked this aim to literacy, an unassailable goal, to promote a more radical curricular overhaul. 
They challenged a system that had hitherto only acknowledged a very limited pathway towards academic success; pushing “language and mathematics”, courses “referred to as solids (something substantial) as contrasted with liquids or gases (courses with little or no substance)” (Eisner 92). This was deemed “a parochial view of both human ability and the possibilities of education” that did not acknowledge multiple forms of intelligence (Gardner). This change not only integrated elements of mass culture that had been rejected in education, notably film and graphic arts, but also encouraged the critique of images as a form of good citizenship, assuming that visually literate arbiters could call out media misrepresentations and manipulative political advertising (Messaris, “Visual Test”). This movement was, in many ways, reactive to new forms of mass media that began to replace newspapers as key forms of civic participation. Unlike simple literacy (being able to decipher letters as a mnemonic system), visual literacy involves imputing meanings to images where meanings are less fixed, yet still with embedded cultural signifiers. Visual literacy promised to extend enlightenment metaphors of sight (as in the German Aufklärung) and illumination (as in the French Lumières) to help citizens understand an increasingly complex marketplace of images. The move towards visual literacy was not so much a shift towards images (and away from books and oration) but an affirmation of the need to critically investigate the visual sphere. It introduced doubt to previously upheld hierarchies of perception. Sight, to Kant the “noblest of the senses” (158), was no longer the sense “least affected” by the surrounding world but an input centre that was equally manipulable. In Kant’s view of societal development, the “cosmopolitan” held the key to pacifying bellicose states and ensuring global prosperity and tranquillity. 
The process of developing a cosmopolitan ideology rests, according to Kant, on the gradual elimination of war and “the education of young people in intellectual and moral culture” (188-89). Transforming disparate societies into “a universal cosmopolitan existence” that would “at last be realised as the matrix within which all the original capacities of the human race may develop” would take well-funded educational institutions and, potentially, a new framework for imparting knowledge (Kant 51). To some, the world of the visual presented a baseline for shared experience.

Figure 2: Exhibition by the Gesellschafts- und Wirtschaftsmuseum in Vienna, photograph c. 1927.

An International Picture Language

The quest to find a mutually intelligible language that could “bridge worlds” and solder together all of humankind goes back to the late nineteenth century and the Esperanto movement of Ludwig Zamenhof (Schor 59). The expression of this ideal in the world of the visual picked up steam in the interwar years with designers and editors like Fritz Kahn, Gerd Arntz, and Otto and Marie Neurath. Their work transposing complex ideas into graphic form has been rediscovered as an antecedent to modern infographics, but the symbols they deployed were intended not merely to explain, but also to aid education and build international fellowship unbounded by spoken language. The Neuraths in particular are celebrated for their international picture language, or Isotype. These pictograms (sometimes viewed as proto-emojis) can be used to represent data without text. Taken together they are an “intemporal, hieroglyphic language” that Neurath hoped would unite working-class people the world over (Lee 159). The Neuraths’ work was done in the explicit service of visual education with a popular socialist agenda and incubated in the social sphere of Red Vienna at the Gesellschafts- und Wirtschaftsmuseum (Social and Economic Museum), where Otto served as Director.
The Wirtschaftsmuseum was an experiment in popular education, with multiple branches and late opening hours to accommodate “the working man [who] has time to see a museum only at night” (Neurath 72-73). The Isotype contained universalist aspirations for the “making of a world language, or a helping picture language—[that] will give support to international developments generally” and “educate by the eye” (Neurath 13).

Figure 3: Gerd Arntz Isotype Images. (Source: University of Reading.)

The Isotype was widely adopted in the postwar era in pre-packaged sets of symbols used in graphic design and wayfinding systems for buildings and transportation networks, but with the socialism of the Neuraths peeled away, leaving only the system of logos that we are familiar with from airport washrooms, charts, and public transport maps. Much of the uptake in this symbol language could be traced to increased mobility and tourism, particularly in countries that did not make use of a Roman alphabet. The 1964 Olympics in Tokyo helped pave the way when organisers, fearful of jumbling too many scripts together, opted instead for black and white icons to represent the program of sports that summer. The new focus on the visual was technologically mediated—cheaper printing and broadcast technologies made the diffusion of images increasingly possible—but also ideologically supported by a growing emphasis on projects that transcended linguistic, ethnic, and national borders. The Olympic symbols gradually morphed into Letraset icons, and, later, symbols in the Unicode Standard, which are the basis for today’s emojis. Wordless signs helped facilitate interconnectedness, but only in the most literal sense; their application was limited primarily to sports mega-events, highway maps, and “brand building”, and they never fulfilled their role as an educational language “to give the different nations a common outlook” (Neurath 18).
Universally understood icons, particularly in the form of emojis, point to a rise in visual communication, but they have fallen short as a cosmopolitan project, supporting neither the globalisation of Kantian ethics nor the transnational socialism of the Neuraths.

Figure 4: Symbols in use. Women's bathroom. 1964 Tokyo Olympics. (Source: The official report of the Organizing Committee.)

Counter Education

By mid-century, the optimism of a universal symbol language seemed dated, and focus shifted from distillation to discernment. New educational programs presented ways to study images, increasingly reproducible with new technologies, as a language in and of themselves. These methods had their roots in the fin-de-siècle educational reforms of John Dewey, Helen Parkhurst, and Maria Montessori. As early as the 1920s, progressive educators were using highly visual magazines, like National Geographic, as the basis for lesson planning, with the hopes that they would “expose students to edifying and culturally enriching reading” and “develop a more catholic taste or sensibility, representing an important cosmopolitan value” (Hawkins 45). The rise in imagery from previously inaccessible regions helped pupils to see themselves in relation to the larger world (although this connection always came with the presumed superiority of the reader). “Pictorial education in public schools” taught readers—through images—to accept a broader world but, too often, they saw photographs as a “straightforward transcription of the real world” (Hawkins 57). The images of cultures and events presented in Life and National Geographic for the purposes of education and enrichment were now the subject of greater analysis in the classroom, not just as “windows into new worlds” but as cultural products in and of themselves.
The emerging visual curriculum aimed to do more than just teach with previously excluded modes (photography, film and comics); it would investigate how images presented and mediated the world. This gained wider appeal with new analytical writing on film, like Raymond Spottiswoode's Grammar of the Film (1950), which sought to formulate the grammatical rules of visual communication (Messaris 181), influenced by semiotics and structural linguistics; the emphasis on grammar can also be seen in far earlier writings on design systems such as Owen Jones’s 1856 The Grammar of Ornament, which also advocated for new, universalising methods in design education (Sloboda 228). The inventorying impulse is on display in books like Donis A. Dondis’s A Primer of Visual Literacy (1973), a text that meditates on visual perception but also functions as an introduction to line and form in the applied arts, picking up where the Bauhaus left off. Dondis enumerates the “syntactical guidelines” of the applied arts with illustrations that are in keeping with 1920s books by Kandinsky and Klee and that analyse pictorial elements. However, at the end of the book she shifts focus with two chapters that examine “messaging” and visual literacy explicitly. Dondis predicts that “an intellectual, trained ability to make and understand visual messages is becoming a vital necessity to involvement with communication. It is quite likely that visual literacy will be one of the fundamental measures of education in the last third of our century” (33), and she presses for more programs that incorporate the exploration and analysis of images in tertiary education.

Figure 5: Ideal spatial environment for the Blueprint charts, 1970. (Image: Inventory Press.)

Visual literacy in education arrived in earnest with a wave of publications in the mid-1970s.
They offered ways for students to understand media processes and for teachers to use visual culture as an entry point into complex social and scientific subject matter, tapping into the “visual consciousness of the ‘television generation’” (Fransecky 5). Visual culture was often seen as inherently democratising, a break from stuffiness, the “artificialities of civilisation”, and the “archaic structures” that set sensorial perception apart from scholarship (Dworkin 131-132). Many radical university projects and community education initiatives of the 1960s made use of new media in novel ways: from Maurice Stein and Larry Miller’s fold-out posters accompanying Blueprint for Counter Education (1970) to Emory Douglas’s graphics for The Black Panther newspaper. Blueprint’s text- and image-dense wall charts were made via assemblage, and they were imagined less as charts and more as a “matrix of resources” that could be used—and added to—by youth to undertake their own counter education (Cronin 53). These experiments in visual learning helped to break down old hierarchies in education, but their aim was influenced more by countercultural notions of disruption than the universal ideals of cosmopolitanism.

From Image as Text to City as Text

For a brief period in the 1970s, thinkers like Marshall McLuhan (McLuhan et al., Massage) and artists like Bruno Munari (Tanchis and Munari) collaborated fruitfully with graphic designers to create books that mixed text and image in novel ways. Using new compositional methods, they broke apart traditional printing lock-ups to superimpose photographs, twist text, and bend narrative frames. The most famous work from this era is, undoubtedly, The Medium Is the Massage (1967), McLuhan’s team-up with graphic designer Quentin Fiore, but it was followed by dozens of other books intended to communicate theory and scientific ideas with popularising graphics.
Following in the footsteps of McLuhan, many of these texts sought not just to explain an issue but to self-consciously reference their own method of information delivery. These works set the precedent for visual aids (and, to a lesser extent, audio) that launched a diverse, non-hierarchical discourse that was nonetheless bound to tactile artefacts. In 1977, McLuhan helped develop a media textbook for secondary school students called City as Classroom: Understanding Language and Media. It is notable for its direct address style and its focus on investigating spaces outside of the classroom (provocatively, a section on the third page begins with “Should all schools be closed?”). The book follows with a fine-grained analysis of advertising forms in which students are asked to first bring advertisements into class for analysis and later to go out into the city to explore “a man-made environment, a huge warehouse of information, a vast resource to be mined free of charge” (McLuhan et al., City 149). As a document, City as Classroom is critical of existing teaching methods, in line with the radical “in the streets” pedagogy of its day. McLuhan’s theories proved particularly salient for the counter education movement, in part because they tapped into a healthy scepticism of advertisers and other image-makers. They also dovetailed with growing discontent with the ad-strewn visual environment of cities in the 1970s. Budgets for advertising had mushroomed in the 1960s, and outdoor advertising “cluttered” cities with billboards and neon, generating “fierce intensities and new hybrid energies” that threatened to throw off the visual equilibrium (McLuhan 74). Visual literacy curricula brought in experiential learning focussed on the legibility of cities, mapping, and the visualisation of urban issues with social justice implications.
The Detroit Geographical Expedition and Institute (DGEI), a “collective endeavour of community research and education” that arose in the aftermath of the 1967 uprisings, is the most storied of the groups that suffused the collection of spatial data with community engagement and organising (Warren et al. 61). The following decades would see a tamed approach to visual literacy that, while still pressing for critical reading, did not upend traditional methods of educational delivery.

Figure 6: Beginning a College Program-Assisting Teachers to Develop Visual Literacy Approaches in Public School Classrooms. 1977. ERIC.

Searching for Civic Education

The visual literacy initiatives formed in the early 1970s affirmed existing civil society institutions while also asserting the need to better inform the public. Most of the campaigns were sponsored by universities, major libraries, and international groups such as UNESCO, which published its “Declaration on Media Education” in 1982. They noted that “participation” was “essential to the working of a pluralistic and representative democracy” and the “public—users, citizens, individuals, groups ... were too systematically overlooked”. Here, the public is conceived as both “targets of the information and communication process” and users who “should have the last word”. To that end, their “continuing education” should be ensured (Study 18). Programs consisted primarily of cognitive “see-scan-analyse” techniques (Little et al.) for younger students, but some also sought to bring visual analysis to adult learners via continuing education (often through museums eager to engage more diverse audiences) and more radical popular education programs sponsored by community groups. By the mid-80s, scores of modules had been built around the comprehension of visual media and had become standard educational fare across North America, Australasia, and, to a lesser extent, Europe.
There was an increasing awareness of the role of data and image presentation in decision-making, as evidenced by the surprising commercial success of Edward Tufte’s 1982 book, The Visual Display of Quantitative Information. Visual literacy—or at least image analysis—was now enmeshed in teaching practice and needed little active advocacy. Scholarly interest in the subject went into a brief period of hibernation in the 1980s and early 1990s, only to be reborn with the arrival of new media distribution technologies (CD-ROMs and then the internet) in classrooms and the widespread availability of digital imaging technology starting in the late 1990s; companies like Adobe distributed free and reduced-fee licences to schools and launched extensive teacher training programs. Visual literacy was reanimated, but primarily within a circumscribed academic field of education and data visualisation.

Figure 7: Visual Literacy; What Research Says to the Teacher, 1975. National Education Association. USA.

Part of the shifting frame of visual literacy has to do with institutional imperatives, particularly in places where austerity measures forced strange alliances between disciplines. What had been a project in alternative education morphed into an uncontested part of the curriculum and a dependable budget line. This shift was already forecast in 1972 by Harun Farocki, who, writing in Filmkritik, noted that funding for new film schools would be difficult to obtain but money might be found for “training in media education … a discipline that could persuade ministers of education, that would at the same time turn the budget restrictions into an advantage, and that would match the functions of art schools” (98). Nearly 50 years later, educators are still using media education (rebranded as visual or media literacy) to make the case for fine arts and humanities education.
While earlier iterations of visual literacy education were often too reliant on the idea of cracking the “code” of images, they did promote ways of learning that were a deep departure from the rote methods of previous generations. Next-gen curricula frame visual literacy as largely supplemental—a resource, but not a program. By the end of the 20th century, visual literacy had changed from a scholarly interest to a standard resource in the “teacher’s toolkit”, entering into school programs and influencing museum education, corporate training, and the development of public-oriented media (Literacy). An appreciation of image culture was seen as key to creating empathetic global citizens, but its scope was increasingly limited. With rising austerity in the education sector (a shift that preceded the 2008 recession by decades in some countries), art educators, museum enrichment staff, and design researchers needed to make a case for why their disciplines were relevant in pedagogical models increasingly aimed at “skills-based” and “job ready” teaching. Arts educators worked hard to insert their fields into learning goals for secondary students as visual literacy, with the hope that “literacy” would carry the weight of an educational imperative and not a supplementary field of study.

Conclusion

For nearly a century, educational initiatives have sought to inculcate a cosmopolitan perspective with a variety of teaching materials and pedagogical reference points. Symbolic languages, like the Isotype, looked to unite disparate people with shared visual forms, while educational initiatives aimed to train the eyes of students to make them more discerning citizens. The term ‘visual literacy’ emerged in the 1960s and has since been deployed in programs with a wide variety of goals.
Countercultural initiatives saw it as a prerequisite for popular education from the ground up, but, in the years since, it has been formalised and brought into more staid curricula, often as a sort of shorthand for learning from media and pictures. The grand cosmopolitan vision of a complete ‘visual language’ has been scaled back considerably, but still exists in trace amounts. Processes of globalisation require images to universalise experiences, commodities, and more for people without shared languages. Emoji alphabets and globalese (brands and consumer messaging that are “visual-linguistic” amalgams “increasingly detached from any specific ethnolinguistic group or locality”) are a testament to a mediatised banal cosmopolitanism (Jaworski 231). In this sense, becoming “fluent” in global design vernacular means familiarity with firms and products, an understanding that is aesthetic, not critical. It is very much the beneficiaries of globalisation—both state and commercial actors—who have been able to harness increasingly image-based technologies for their benefit. To take a humorous but nonetheless consequential example, Spanish culinary boosters were able to successfully lobby for a paella emoji (Miller) rather than having a food symbol from a less wealthy country, such as a Senegalese jollof or a Moroccan tagine. This trend has gone even further as new forms of visual communication are increasingly streamlined and managed by for-profit media platforms. The ubiquity of these forms of communication and their global reach have made visual literacy more important than ever, but they have also fundamentally shifted the endeavour from a graphic sorting practice to a critical piece of social infrastructure with tremendous political ramifications. Visual literacy campaigns hold out the promise of educating students in an image-based system with the potential to transcend linguistic and cultural boundaries.
This cosmopolitan political project has not yet been realised, as the visual literacy frame has drifted into specialised silos of art, design, and digital humanities education. It can help bridge the “incomplete connections” of an increasingly globalised world (Calhoun 112), but it does not have a program in and of itself. Rather, an evolving visual literacy curriculum might be seen as a litmus test for how we imagine the role of images in the world.

References

Brown, Neil. “The Myth of Visual Literacy.” Australian Art Education 13.2 (1989): 28-32.
Calhoun, Craig. “Cosmopolitanism in the Modern Social Imaginary.” Daedalus 137.3 (2008): 105-114.
Cronin, Paul. “Recovering and Rendering Vital Blueprint for Counter Education at the California Institute for the Arts.” Blueprint for Counter Education. Inventory Press, 2016. 36-58.
Dondis, Donis A. A Primer of Visual Literacy. MIT P, 1973.
Dworkin, M.S. “Toward an Image Curriculum: Some Questions and Cautions.” Journal of Aesthetic Education 4.2 (1970): 129-132.
Eisner, Elliot. Cognition and Curriculum: A Basis for Deciding What to Teach. Longmans, 1982.
Farocki, Harun. “Film Courses in Art Schools.” Trans. Ted Fendt. Grey Room 79 (Apr. 2020): 96-99.
Fransecky, Roger B. Visual Literacy: A Way to Learn—A Way to Teach. Association for Educational Communications and Technology, 1972.
Gardner, Howard. Frames of Mind. Basic Books, 1983.
Hawkins, Stephanie L. “Training the ‘I’ to See: Progressive Education, Visual Literacy, and National Geographic Membership.” American Iconographic. U of Virginia P, 2010. 28-61.
Jaworski, Adam. “Globalese: A New Visual-Linguistic Register.” Social Semiotics 25.2 (2015): 217-35.
Kant, Immanuel. Anthropology from a Pragmatic Point of View. Cambridge UP, 2006.
Kant, Immanuel. “Perpetual Peace.” Political Writings. Ed. H. Reiss. Cambridge UP, 1991 [1795]. 116-130.
Kress, G., and T. van Leeuwen. Reading Images: The Grammar of Visual Design. Routledge, 1996.
Lee, Jae Young. “Otto Neurath's Isotype and the Rhetoric of Neutrality.” Visible Language 42.2: 159-180.
Literacy Teaching Toolkit: Visual Literacy. Department of Education and Training (DET), State of Victoria. 29 Aug. 2018. 30 Sep. 2020 <https://www.education.vic.gov.au:443/school/teachers/teachingresources/discipline/english/literacy/readingviewing/Pages/litfocusvisual.aspx>.
Little, D., et al. Looking and Learning: Visual Literacy across the Disciplines. Wiley, 2015.
McLuhan, Marshall. Understanding Media: The Extensions of Man. McGraw-Hill, 1964.
McLuhan, Marshall, Quentin Fiore, and Jerome Agel. The Medium Is the Massage. Bantam Books, 1967.
McLuhan, Marshall, Kathryn Hutchon, and Eric McLuhan. City as Classroom: Understanding Language and Media. Agincourt, Ontario: Book Society of Canada, 1977.
McTigue, Erin, and Amanda Flowers. “Science Visual Literacy: Learners' Perceptions and Knowledge of Diagrams.” Reading Teacher 64.8: 578-89.
Messaris, Paul. “Visual Literacy vs. Visual Manipulation.” Critical Studies in Mass Communication 11.2: 181-203. DOI: 10.1080/15295039409366894.
———. “A Visual Test for Visual ‘Literacy.’” The Annual Meeting of the Speech Communication Association. 31 Oct. to 3 Nov. 1991. Atlanta, GA. <https://files.eric.ed.gov/fulltext/ED347604.pdf>.
Miller, Sarah. “The Secret History of the Paella Emoji.” Food & Wine, 20 June 2017. <https://www.foodandwine.com/news/true-story-paella-emoji>.
Munari, Bruno. Square, Circle, Triangle. Princeton Architectural Press, 2016.
Neurath, Otto. International Picture Language: The First Rules of Isotype. K. Paul, Trench, Trubner, 1936.
Newfield, Denise. “From Visual Literacy to Critical Visual Literacy: An Analysis of Educational Materials.” English Teaching: Practice and Critique 10 (2011): 81-94.
Schor, Esther. Bridge of Words: Esperanto and the Dream of a Universal Language. Henry Holt and Company, 2016.
Sloboda, Stacey. “‘The Grammar of Ornament’: Cosmopolitanism and Reform in British Design.” Journal of Design History 21.3 (2008): 223-36.
Study of Communication Problems: Implementation of Resolutions 4/19 and 4/20 Adopted by the General Conference at Its Twenty-First Session; Report by the Director-General. UNESCO, 1983.
Tanchis, Aldo, and Bruno Munari. Bruno Munari: Design as Art. MIT P, 1987.
Warren, Gwendolyn, Cindi Katz, and Nik Heynen. “Myths, Cults, Memories, and Revisions in Radical Geographic History: Revisiting the Detroit Geographical Expedition and Institute.” Spatial Histories of Radical Geography: North America and Beyond. Wiley, 2019. 59-86.