Academic literature on the topic 'Multiple arrival sources'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiple arrival sources.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multiple arrival sources"

1

Baumann, Nadine, and Martin Skutella. "Earliest Arrival Flows with Multiple Sources." Mathematics of Operations Research 34, no. 2 (2009): 499–512. http://dx.doi.org/10.1287/moor.1090.0382.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gordon, Michael S., Darlene Edewaard, and Matthew Pacailler. "Time-to-arrival discrimination of multiple sound sources." Journal of the Acoustical Society of America 133, no. 5 (2013): 3511. http://dx.doi.org/10.1121/1.4806265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mišković, Milan, Nenad Vukmirović, Dragan Golubović, and Miljko Erić. "Method for Direct Localization of Multiple Impulse Acoustic Sources in Outdoor Environment." Electronics 11, no. 16 (2022): 2509. http://dx.doi.org/10.3390/electronics11162509.

Full text
Abstract:
A method for the direct outdoor localization of multiple impulse acoustic sources by a distributed microphone array is proposed. This localization problem is of great interest for the detection and localization of gunshots, firecrackers and explosions in a civil environment, as well as of guns, mortars, small arms, artillery and snipers in military battlefield monitoring systems. This kind of localization is a complicated technical problem in many respects. In such a scenario, the permutation of impulse arrivals on distributed microphones occurs, so the application of classical two-step localization methods, such as time-of-arrival (TOA), time-difference-of-arrival (TDOA), angle-of-arrival (AOA), fingerprint methods, etc., is faced with the so-called association problem, which is difficult to solve. The association problem does not exist in the proposed method for direct (one-step) localization, so the proposed method is more suitable for localization in the given acoustic scenario than the mentioned two-step localization methods. Furthermore, in the proposed method, direct localization is performed impulse by impulse. The observation interval used for the localization cannot be chosen arbitrarily; it is limited by the duration of the impulses. In the mathematical model formulated in the paper, atmospheric factors in acoustic signal propagation (temperature, pressure, etc.) are included. The results of simulations show that by using the proposed method, centimeter localization accuracy can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
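For readers new to the area, the pairwise delay estimation underlying the two-step TDOA methods this abstract contrasts with can be sketched in a few lines. This is a generic illustration, not code from the paper:

```python
import numpy as np

def tdoa_cross_correlation(sig_a, sig_b, fs):
    """Estimate the time-difference-of-arrival (in seconds) of sig_b
    relative to sig_a by locating the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)  # lag in samples
    return lag / fs

# Synthetic check: the same impulse arrives 5 samples later at mic B.
fs = 8000
mic_a = np.zeros(256)
mic_a[100] = 1.0
mic_b = np.roll(mic_a, 5)
delay = tdoa_cross_correlation(mic_a, mic_b, fs)  # 5 / 8000 s
```

In a classical two-step pipeline, such pairwise delays feed a geometric solver; the abstract's point is that with multiple overlapping impulses, associating each delay with the right source is the hard part, which direct (one-step) localization avoids.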
4

Ghogho, M., O. Besson, and A. Swami. "Estimation of directions of arrival of multiple scattered sources." IEEE Transactions on Signal Processing 49, no. 11 (2001): 2467–80. http://dx.doi.org/10.1109/78.960395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shane, Phil, Paul Froggatt, Ian Smith, and Murray Gregory. "Multiple Sources for Sea-Rafted Loisels Pumice, New Zealand." Quaternary Research 49, no. 3 (1998): 271–79. http://dx.doi.org/10.1006/qres.1998.1968.

Full text
Abstract:
Sea-rafted Loisels Pumice is one of the few stratigraphic markers used to correlate late Holocene coastal deposits in New Zealand. Along with underlying sea-rafted products of the local Taupo eruption of ca. 1800 yr B.P., these events have been used to bracket the first arrival of humans at New Zealand. Loisels Pumice is dacitic to rhyolitic (SiO2 63–78 wt%) in composition, but individual clasts are homogeneous (SiO2 range ± 1 wt%). Characteristics include very low K2O (0.5–1.75 wt%) and Rb (<25 ppm) and a mineralogy dominated by calcic and mafic xenocrysts. Similar features are shared by pumices of the Tonga–Kermadec arc, suggesting a common tholeiitic oceanic source. Interclast diversity of Loisels Pumice suggests that it is the product of several eruptive events from different volcanoes. The differences in glass and mineral compositions found at various sites can be explained if the deposits are from different events. A multisource origin can also partially explain the discrepancy in reported 14C ages (ca. 1500–600 yr B.P.) from different localities. Therefore, the value of Loisels Pumice as a stratigraphic marker is questionable, and it does not constrain the arrival of humans. The predominant westward drift of historic Tonga–Kermadec arc pumices and prevailing ocean currents suggest a long anticlockwise semicircular transport route into the Tasman Sea before sea-rafted pumice arrival in New Zealand. The diversity of the pumices indicates that silicic eruptions frequently occur from the predominantly basic oceanic volcanoes.
APA, Harvard, Vancouver, ISO, and other styles
6

Zheng, Hong, Yi-Chang Chiu, and Pitu B. Mirchandani. "A Heuristic Algorithm for the Earliest Arrival Flow with Multiple Sources." Journal of Mathematical Modelling and Algorithms in Operations Research 13, no. 2 (2013): 169–89. http://dx.doi.org/10.1007/s10852-013-9226-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Yonggang, Wei Wang, Zehong Gu, Tianpeng Mao, Xuchen Zhu, and Jiabao Jin. "Closed-form multiple source direction-of-arrival estimator under reverberant environments." Journal of the Acoustical Society of America 154, no. 4 (2023): 2349–64. http://dx.doi.org/10.1121/10.0021873.

Full text
Abstract:
Accurate direction-of-arrival (DOA) estimation of multiple sources, simultaneously active in a reverberant environment, remains a challenge, as the multi-path acoustic reflections and overlapped periods dramatically distort the direct-path wave propagation. This article proposes a prominent solution localizing multiple sources in a reverberant environment using closed-form estimates, circumventing any exhaustive search over the two-dimensional directional space. Apart from a low complexity cost, the algorithm has robustness to reverberant, inactive, and overlapped periods and an ease of operation in practice, achieving sufficient accuracy compared to state-of-the-art approaches. Specifically, this algorithm localizes an unknown number of sources through four steps: (i) decomposing the frequency domain signals on a spherical array to the spherical harmonics domain; (ii) extracting the first-order relative harmonic coefficients as the input features; (iii) achieving direct-path dominance detection and localization using closed-form estimation; and (iv) estimating the number of sources and their DOAs based on those pass the direct-path detection. Experimental results, using extensive simulated and real-life recordings, confirm the algorithm with a significantly reduced computational complexity, while preserving competitive localization accuracy as compared to the baseline approaches. Additional tests confirm this low-complexity algorithm even with a potential capacity for online DOA tracking of multiple moving sources.
APA, Harvard, Vancouver, ISO, and other styles
8

Koole, Ger, and Zhen Liu. "Stochastic Bounds for Queueing Systems with Multiple On–Off Sources." Probability in the Engineering and Informational Sciences 12, no. 1 (1998): 25–48. http://dx.doi.org/10.1017/s0269964800005040.

Full text
Abstract:
Consider a queueing system where the input traffic consists of background traffic, modeled by a Markov Arrival Process, and foreground traffic modeled by N ≥ 1 homogeneous on–off sources. The queueing system has an increasing and concave service rate, which includes as a particular case multiserver queueing systems. Both the infinite-capacity and the finite-capacity buffer cases are analyzed. We show that the queue length in the infinite-capacity buffer system (respectively, the number of losses in the finite-capacity buffer system) is larger in the increasing convex order sense (respectively, the strong stochastic order sense) than the queue length (respectively, the number of losses) of the queueing system with the same background traffic and MN homogeneous on–off sources of the same total intensity as the foreground traffic, where M is an arbitrary integer. As a consequence, the queue length and the loss with a foreground traffic of multiple homogeneous on–off sources is upper bounded by that with a single on–off source and lower bounded by a Poisson source, where the bounds are obtained in the increasing convex order (respectively, the strong stochastic order). We also compare N ≥ 1 homogeneous arbitrary two-state Markov Modulated Poisson Process sources. We prove the monotonicity of the queue length in the transition rates and its convexity in the arrival rates. Standard techniques could not be used due to the different state spaces that we compare. We propose a new approach for the stochastic comparison of queues using dynamic programming which involves initially stationary arrival processes.
APA, Harvard, Vancouver, ISO, and other styles
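The intuition behind these bounds, that burstier input with the same mean rate produces longer queues, can be illustrated (though of course not proved) with a toy discrete-time simulation. The source models and parameters below are illustrative assumptions, not taken from the paper:

```python
import random

def simulate_queue(arrival_fn, service_prob, n_slots, seed=0):
    """Discrete-time single-server queue: each slot, arrival_fn(rng)
    returns 0 or 1 arrivals, then one customer departs with probability
    service_prob. Returns the time-averaged queue length."""
    rng = random.Random(seed)
    q = 0
    area = 0
    for _ in range(n_slots):
        q += arrival_fn(rng)
        if q > 0 and rng.random() < service_prob:
            q -= 1
        area += q
    return area / n_slots

def make_on_off(p_switch=0.01, lam_on=0.9):
    """Two-state on-off source: arrivals at rate lam_on while 'on',
    none while 'off'; the state flips each slot with prob p_switch
    (stationary fraction 'on' = 0.5, so mean rate lam_on / 2)."""
    state = {"on": True}
    def arrivals(rng):
        if rng.random() < p_switch:
            state["on"] = not state["on"]
        return 1 if state["on"] and rng.random() < lam_on else 0
    return arrivals

def make_bernoulli(lam=0.45):
    """Memoryless source with the same mean rate as the on-off source."""
    return lambda rng: 1 if rng.random() < lam else 0

# With matched mean rates (0.45 arrivals/slot) and service prob 0.6,
# the bursty on-off source yields a much larger time-averaged queue.
bursty = simulate_queue(make_on_off(), 0.6, 200_000, seed=1)
smooth = simulate_queue(make_bernoulli(), 0.6, 200_000, seed=2)
```

This mirrors the paper's ordering qualitatively: the single on-off source is the worst case and the Poisson-like memoryless source the best case among inputs of equal intensity.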
9

Hursky, Paul. "Correlating shipping noise on multiple beams." Journal of the Acoustical Society of America 153, no. 3_supplement (2023): A65. http://dx.doi.org/10.1121/10.0018178.

Full text
Abstract:
There has been much interest in using transiting surface ships as sources of opportunity for tomography and geoacoustic inversion. In tomography, it would be valuable to increase the ranges at which ship signatures can be processed by measuring times of arrival at the output of beamforming processes, which provide spatial processing gain. This additional gain may enable fainter multipath arrivals to be identified and exploited as additional paths to sample the ocean water column. This seems readily achievable when cross-correlating multiple vertical beams formed on a single vertical line array, for example. However, cross-correlating beams from multiple arrays, vertical or horizontal, raises questions. In both cases, different beams may have different Doppler, which is manageable with a correlation process that includes Doppler compensation. But observing a ship from different vantage points may be problematic if the ship signature is due to horizontally displaced noise sources distributed around the ship. We will present results of processing surface ships observed on single and multiple arrays and assess the use of such processing in ambient noise tomography.
APA, Harvard, Vancouver, ISO, and other styles
10

Pu, Henglin, Chao Cai, Menglan Hu, Tianping Deng, Rong Zheng, and Jun Luo. "Towards Robust Multiple Blind Source Localization Using Source Separation and Beamforming." Sensors 21, no. 2 (2021): 532. http://dx.doi.org/10.3390/s21020532.

Full text
Abstract:
Multiple blind sound source localization is the key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously due to the limitation imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than the number of microphones in an array. Specifically, we propose a novel microphone layout, enabling salient multiple source separation while still preserving their arrival time information. We then perform source localization via beamforming using each demixed source. Such a design allows minimizing mutual interference from different sound sources, thereby enabling finer AoA estimation. To further enhance localization performance, we design a new spectral weighting function that can enhance the signal-to-noise ratio, allowing a relatively narrow beam and thus finer angle-of-arrival estimation. Simulation experiments under typical indoor situations demonstrate a maximum angle-of-arrival error of only 4° even with up to 14 sources.
APA, Harvard, Vancouver, ISO, and other styles
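As a rough sketch of the beamforming step in such pipelines (a generic delay-and-sum scan, not the paper's SPBF design), angle of arrival can be estimated by steering a uniform linear array over candidate angles and picking the angle of highest output power. The array geometry and signal parameters here are illustrative assumptions:

```python
import numpy as np

def delay_and_sum_aoa(signals, mic_spacing, fs, c=343.0, n_angles=181):
    """Scan candidate angles with a delay-and-sum beamformer on a
    uniform linear array; return the angle (degrees, -90..90) whose
    steered output has the highest power.
    signals: array of shape (n_mics, n_samples)."""
    n_mics, n_samples = signals.shape
    angles = np.linspace(-90.0, 90.0, n_angles)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for theta in angles:
        # Per-mic delay of a plane wave arriving from angle theta
        delays = np.arange(n_mics) * mic_spacing * np.sin(np.radians(theta)) / c
        # Undo those delays via phase shifts in the frequency domain, then sum
        steered = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        powers.append(np.sum(np.abs(steered.sum(axis=0)) ** 2))
    return angles[int(np.argmax(powers))]
```

Running the scan on each separated (demixed) source, rather than on the raw mixture, is what the abstract credits with reducing mutual interference between sources.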

Dissertations / Theses on the topic "Multiple arrival sources"

1

Swartling, Mikael. "Direction of Arrival Estimation and Localization of Multiple Speech Sources in Enclosed Environments." Doctoral thesis, Blekinge Tekniska Högskola, Avdelningen för elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00520.

Full text
Abstract:
Speech communication is gaining in popularity in many different contexts as technology evolves. With the introduction of mobile electronic devices such as cell phones and laptops, and fixed electronic devices such as video and teleconferencing systems, more people are communicating which leads to an increasing demand for new services and better speech quality. Methods to enhance speech recorded by microphones often operate blindly without prior knowledge of the signals. With the addition of multiple microphones to allow for spatial filtering, many blind speech enhancement methods have to operate blindly also in the spatial domain. When attempting to improve the quality of spoken communication it is often necessary to be able to reliably determine the location of the speakers. A dedicated source localization method on top of the speech enhancement methods can assist the speech enhancement method by providing the spatial information about the sources. This thesis addresses the problem of speech-source localization, with a focus on the problem of localization in the presence of multiple concurrent speech sources. The primary work consists of methods to estimate the direction of arrival of multiple concurrent speech sources from an array of sensors and a method to correct the ambiguities when estimating the spatial locations of multiple speech sources from multiple arrays of sensors. The thesis also improves the well-known SRP-based methods with higher-order statistics, and presents an analysis of how the SRP-PHAT performs when the sensor array geometry is not fully calibrated. The thesis is concluded by two envelope-domain-based methods for tonal pattern detection and tonal disturbance detection and cancelation which can be useful to further increase the usability of the proposed localization methods. The main contribution of the thesis is a complete methodology to spatially locate multiple speech sources in enclosed environments. 
New methods and improvements to the combined solution are presented for the direction-of-arrival estimation, the location estimation and the location ambiguity correction, as well as a sensor array calibration sensitivity analysis.
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xipeng. "CONSTANT FALSE ALARM RATE PERFORMANCE OF SOUND SOURCE DETECTION WITH TIME DELAY OF ARRIVAL ALGORITHM." UKnowledge, 2017. http://uknowledge.uky.edu/ece_etds/105.

Full text
Abstract:
Time Delay of Arrival (TDOA) based algorithms and Steered Response Power (SRP) based algorithms are the two most commonly used methods for sound source detection and localization. SRP is more robust under high reverberation and multi-target conditions, while TDOA is less computationally intensive. This thesis introduces a modified TDOA algorithm, TDOA delay table search (TDOA-DTS), that has more stable performance than the original TDOA and requires only 4% of the SRP computation load for a 3-dimensional space of a typical room. A 2-step adaptive thresholding procedure is applied, based on a Weibull distribution for noise peaks in the cross-correlations and a binomial distribution for combining potential peaks over all microphone pairs into the final detection. The first threshold limits the potential target peaks in the microphone-pair cross-correlations with a user-defined false-alarm (FA) rate. The initial false-positive peak rate can be set to a higher level than desired for the final FA target rate, so that high accuracy is not required of the probability distribution model (model errors have little impact on FA rates because the threshold need not be set deep into the tail of the curve). The final FA rate can be lowered to the actual desired value using an M-out-of-N (MON) rule on significant correlation peaks from different microphone pairs associated with a point in the space of interest. The algorithm is tested with simulated and real recorded data to verify that the resulting FA rates are consistent with the user-defined rates down to 10^-6.
APA, Harvard, Vancouver, ISO, and other styles
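The M-out-of-N fusion step and its binomial false-alarm model can be sketched as follows. This is a generic illustration under an independence assumption across microphone pairs, not the thesis code:

```python
from math import comb

import numpy as np

def m_of_n_detect(peak_flags, m):
    """Declare a detection where at least m of the n microphone pairs
    report a significant cross-correlation peak for that candidate.
    peak_flags: boolean array of shape (n_candidates, n_pairs)."""
    return np.count_nonzero(peak_flags, axis=1) >= m

def fused_false_alarm_rate(p_fa, n, m):
    """Per-candidate false-alarm probability after M-of-N fusion,
    assuming independent pair-level false alarms each with probability
    p_fa: P(Binomial(n, p_fa) >= m)."""
    return sum(comb(n, k) * p_fa**k * (1 - p_fa) ** (n - k)
               for k in range(m, n + 1))

# A pair-level FA rate of 0.1 drops below 1% after a 3-of-5 rule.
rate = fused_false_alarm_rate(0.1, 5, 3)  # 0.00856
```

This shows why the first-stage threshold can tolerate a loose noise model: the fusion stage drives the final FA rate down geometrically, so the user-defined target is met without placing the per-pair threshold deep in the distribution tail.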

Books on the topic "Multiple arrival sources"

1

Mitchell, Pablo R. Understanding Latino History. ABC-CLIO, LLC, 2017. http://dx.doi.org/10.5040/9798216029557.

Full text
Abstract:
This Latino history textbook is an outstanding reference source that covers many different Latino groups within a single comprehensive narrative. Latinos make up a vibrant, expanding, and extremely diverse population with a history of being in the Americas that dates back to the early 16th century. Today, Latinos represent the largest ethnic minority group in the United States, yet the history of Latinos is largely unknown to the wider nation. This book tells the larger "story" of Latinos in the United States and describes how they represent a breadth of ethnicities, addressing not only those in very large numbers from countries such as Mexico, Cuba, Puerto Rico, and El Salvador, but also Latino people from Peru, Argentina, Venezuela, Panama, and Costa Rica, as well as indigenous Oaxacans and Mixtecos, among others. Organized chronologically, the book's coverage begins with the arrival of the Spanish in the Americas around 1500 and stretches to the present. Each chapter discusses a particular time period and addresses multiple Latino groups in the United States together in the same narrative. The text is supplemented with interesting sidebars that spotlight topics such as Latino sports figures, authentic recipes, and Latino actors and pop stars. These sidebars help to engage readers and assist them in better understanding the wide range of "the Latino American experience" in the modern context.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Multiple arrival sources"

1

Bergmann, Jonas. "Sierra: Rural-to-Urban Migration and Immobility Related to Water Scarcity in Peru’s Highlands." In Studien zur Migrations- und Integrationspolitik. Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-42298-1_5.

Full text
Abstract:
The research interest in the Lima Region in Peru’s Central Highlands was in longer-distance, rural-to-urban migration from two villages, both harmed by two types of gradual climate impacts: glacier retreat and rainfall changes. The villages V1 and V2 were the starting points to trace migrants in the regional and national capitals Huancayo and Lima. This dynamic is of interest for two reasons: first, water scarcity due to glacier retreat and rainfall changes is already salient across the highlands (Sierra). Migration from the Sierra can be shaped by both such rainfall changes and glacier retreat. Second, while rainfall projections are uncertain, future glacier loss will be severe even for low emission scenarios, with likely strong impacts on water security and migration. In the first section of this chapter, I provide information on the geographical context, measured and projected climate change trends and impacts, exposure, vulnerabilities, local coping and adaptation, and hazard-related migration in Peru’s highlands. Afterwards, I describe the empirical results of the new case study, discuss them, and induce propositions on broader well-being impacts of climate (im)mobilities. The findings suggest that the climate-related migration occurred under relatively adverse structural conditions. Movements ranged from more voluntary to clearly forced instances, often with relatively limited agency. Among those remaining, age differentials influenced future migration aspirations. At the destinations in Lima and Huancayo, migrants’ well-being results were on average more net-negative than net-positive. Two factions can be identified. The first group covers most of the interviewees who had left the increasingly adverse situations in their home villages in recent years under distress conditions and arrived with limited baseline resources and skills in the oversaturated, large urban agglomerations.
These migrants experienced great challenges for their development from a secure base and a space to live better, while gradually reaching similar or better social relatedness as before migration. Although they were subsisting in the cities, multiple deprivations restricted their development prospects. These migrants with multiple unmet needs subjectively suffered from deprivation in their cognitive satisfaction and emotions; only few reported partial and gradual adjustment. By contrast, the second group comprises a small number of migrants who achieved urban upward mobility, in particular those who had moved voluntarily and a longer time ago. In often-arduous processes, they have used urban education and job opportunities to improve their well-being and long-term prospects. This small group evaluated its needs as mostly fulfilled. Nonetheless, both the large number of migrants losing OWB and the few ones gaining OWB held mostly negative views of the future. Using these empirical findings from Peru’s highlands, I conclude by inducing broader propositions on the well-being of climate migrants in cities and stayers in rural source areas.
APA, Harvard, Vancouver, ISO, and other styles
2

Tiwari, Rajesh, and Bimal Anjum. "Role of Tourism in Economic Growth of India." In Corporate Social Responsibility in the Hospitality and Tourism Industry. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9902-1.ch017.

Full text
Abstract:
This chapter evaluates the role of tourism in the economic growth of India. The tourism and hospitality sector contributes 6.8% of India's GDP. For a country with 30 world heritage sites and a rich culture, the tourism and hospitality industry has great potential to enhance tourist flow and accelerate economic growth due to the multiplier effect of tourism on job creation. A descriptive research design is used for the study. The study examines the correlation of economic variables with foreign tourist arrivals. The chapter evaluates the success stories of Kerala and Gujarat as cases of a focused approach towards tourism. Secondary sources of data are used to examine the trend of tourism and its role in the economic growth of the country. Gross domestic capital formation was found to have the highest correlation with foreign tourist arrivals. The growth rate of foreign tourist arrivals is found to have a significant positive correlation with the growth rate of the service sector in India and the gross state domestic product of Kerala and Gujarat.
APA, Harvard, Vancouver, ISO, and other styles
3

Priya Muthaiah, Gnana Ruba, Motamarri Venkata Naga Lalitha Chaitanya, Seema Sajjan Singh Rathore, Maida Engels S.E., and Vishnu Nayak Badavath. "Importance of In silico Tools in Anticancer Drug Discovery from Nature." In Alternative Remedies and Natural Products for Cancer Therapy: An Integrative Approach. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815124699123010010.

Full text
Abstract:
Currently, cancer has become one of the most dreadful diseases threatening human health. Natural plant sources play a vital role in the development of several anti-cancer drugs such as vincristine, vinblastine, vinorelbine, docetaxel, paclitaxel, camptothecin, etoposide, teniposide, etc. Various chemotherapies fail due to adverse reactions, target specificity, and drug resistance of some types of drugs. Researchers are attentive to developing drugs that overcome the problems stated above by using natural compounds that may affect multiple targets with reduced adverse effects and that are effective against several cancer types. The development of a new drug is a highly complex, expensive, and time-consuming endeavour. In the traditional drug discovery process, ending with a new medicine ready for the market can take up to 15 years and cost more than one billion dollars. Fortunately, this situation has changed with the recent arrival of novel approaches. Many new technologies and methodologies have been developed to increase the efficiency of the drug discovery process, and computational methodologies utilise the existing data to generate knowledge that affords valuable understanding for addressing current complications and guiding the further research and development of new naturally derived drugs. Consequently, the application of in silico techniques and optimization algorithms in drug discovery ventures can provide versatile solutions to understand the molecular-level interactions of chemical constituents and identify the hits. Lead optimization techniques such as ligand-based or structure-based drug design are widely used in many discovery efforts. In this chapter, we first introduce the concepts of CADD, in silico tools, etc. We then describe how this virtual screening has been successfully applied.
Furthermore, we review the concept of natural product anticancer therapies and present some of the most representative examples of molecules identified through this method.
APA, Harvard, Vancouver, ISO, and other styles
4

Abusharaf, Rogaia Mustafa. "Diasporic Circularities." In Mobility and Forced Displacement in the Middle East. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197531365.003.0005.

Full text
Abstract:
The experience of the Omani-Zanzibaris who were forced to migrate from Zanzibar to Oman in 1964 has received relatively little attention, particularly as seen from the arriving/returning Omani-Zanzibaris’ emic perspectives. As we will see in this chapter, Oman’s identity as a cosmopolitan empire offers a variety of pathways for understanding its present-day culture and politics, as well as its responses to the large wave of arrivals from postcolonial Zanzibar. The chapter seeks to arrive at a better understanding of the forced migrations by telling the story of this period from the theoretical stance of hybridity, which challenges the prevailing essentialism of the historical narratives of the 1964 events as an African uprising against Omani colonizers. To expound the experiences of Omani-Zanzibaris, this project gathered multiple accounts drawn from multi-sited ethnographic research carried out in the first round of fieldwork in Oman and Zanzibar together with extensive conversations held in Zanzibar and Muscat in 2016 and 2017. Life-history collections, memoirs (both published and in private family possession in Arabic, English, and Swahili), archival materials in London and Muscat, and digital sources were also researched.
APA, Harvard, Vancouver, ISO, and other styles
5

Pal, Kamalendu. "Quality Assurance Issues for Big Data Applications in Supply Chain Management." In Research Anthology on Agile Software, Software Development, and Testing. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3702-5.ch070.

Full text
Abstract:
Heterogeneous data types, widely distributed data sources, huge data volumes, and large-scale business-alliance partners describe typical global supply chain operational environments. Mobile and wireless technologies are putting an extra layer of data source in this technology-enriched supply chain operation. This environment also needs to provide access to data anywhere, anytime to its end-users. This new type of data set originating from the global retail supply chain is commonly known as big data because of its huge volume, resulting from the velocity with which it arrives in the global retail business environment. Such environments empower and necessitate decision makers to act or react quicker to all decision tasks. Academics and practitioners are researching and building the next generation of big-data-based application software systems. This new generation of software applications is based on complex data analysis algorithms (i.e., on data that does not adhere to standard relational data models). The traditional software testing methods are insufficient for big-data-based applications. Testing big-data-based applications is one of the biggest challenges faced by modern software design and development communities because of lack of knowledge on what to test and how much data to test. Big-data-based applications developers have been facing a daunting task in defining the best strategies for structured and unstructured data validation, setting up an optimal test environment, and working with non-relational databases testing approaches. This chapter focuses on big-data-based software testing and quality-assurance-related issues in the context of Hadoop, an open source framework. 
It includes discussion about several challenges with respect to massively parallel data generation from multiple sources, testing methods for validation of pre-Hadoop processing, software application quality factors, and some of the software testing mechanisms for this new breed of applications.
APA, Harvard, Vancouver, ISO, and other styles
6

Pal, Kamalendu. "Quality Assurance Issues for Big Data Applications in Supply Chain Management." In Predictive Intelligence Using Big Data and the Internet of Things. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6210-8.ch003.

Full text
Abstract:
Heterogeneous data types, widely distributed data sources, huge data volumes, and large-scale business-alliance partners describe typical global supply chain operational environments. Mobile and wireless technologies are putting an extra layer of data source in this technology-enriched supply chain operation. This environment also needs to provide access to data anywhere, anytime to its end-users. This new type of data set originating from the global retail supply chain is commonly known as big data because of its huge volume, resulting from the velocity with which it arrives in the global retail business environment. Such environments empower and necessitate decision makers to act or react quicker to all decision tasks. Academics and practitioners are researching and building the next generation of big-data-based application software systems. This new generation of software applications is based on complex data analysis algorithms (i.e., on data that does not adhere to standard relational data models). The traditional software testing methods are insufficient for big-data-based applications. Testing big-data-based applications is one of the biggest challenges faced by modern software design and development communities because of lack of knowledge on what to test and how much data to test. Big-data-based applications developers have been facing a daunting task in defining the best strategies for structured and unstructured data validation, setting up an optimal test environment, and working with non-relational databases testing approaches. This chapter focuses on big-data-based software testing and quality-assurance-related issues in the context of Hadoop, an open source framework. 
It includes discussion about several challenges with respect to massively parallel data generation from multiple sources, testing methods for validation of pre-Hadoop processing, software application quality factors, and some of the software testing mechanisms for this new breed of applications.
APA, Harvard, Vancouver, ISO, and other styles
7

"What the 3Vs Acronym Didn't Put Into Perspective?" In Big Data Analytics for Entrepreneurial Success. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7609-9.ch002.

Full text
Abstract:
Data sizes have been growing exponentially within many companies. Given data of this size—meta-tagged piecemeal, produced in real time, and arriving in continuous streams from multiple sources—analyzing it to spot patterns and extract useful information is harder still. This includes the ever-changing landscape of data and their associated characteristics, evolving data analysis paradigms, challenges of computational infrastructure, data quality, complexity, and protection, in addition to data sharing and access, and—crucially—our ability to integrate data sets and their analysis toward an improved understanding. In this context, this second chapter covers the issues and challenges hiding behind the 3Vs phenomenon. It builds on the first chapter and proceeds to different big data issues and challenges and how to tackle them in the dynamic processes.
APA, Harvard, Vancouver, ISO, and other styles
8

Stein, Louise K. "Carpio and the Integration of Opera in Public Life, Naples 1683–87." In The Marqués, the Divas, and the Castrati. Oxford University Press, New York, 2024. http://dx.doi.org/10.1093/oso/9780197681848.003.0005.

Full text
Abstract:
Carpio arrived in Naples as viceroy in January 1683 and soon announced that he intended to raise the quality of opera production there. His intervention and restructuring of the mechanisms of production were foundational for Naples’ rise to prominence as an opera center before the eighteenth century. His renovation of the production system 1683–87 is explained through study of multiple sources, with attention to the theaters, finances, casts, and politics of production, as well as many extant arias. His new production team included Alessandro Scarlatti and Philipp Schor, as well as some of Italy’s best female singers and castrati. The recruitment process for collaborative casts is here traced in some detail. Many arias now found in little-known manuscript collections are analyzed with musical examples to retrieve lost voices and reconstruct otherwise lost operas.
APA, Harvard, Vancouver, ISO, and other styles
9

"Clarifying the Confusing Terminology of Digital Textbooks." In Advances in Educational Technologies and Instructional Design. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8300-6.ch003.

Full text
Abstract:
Generically, a digital textbook serves as a source of knowledge in a digital learning environment. It can be used anytime and anywhere on almost any device optimized for digital learning. Users of digital textbooks are teacher(s) and student(s), including life-long learners who use digital devices for learning. The recent challenges indicate that digital textbook use and development have become a hot area of cross-disciplinary research. The research problems arise, first of all, from controversies between traditional curricula and access to global content, which involves the availability of more diverse forms of information, new technologies, interactive assessment, and open source textbooks. However, the “digital textbook” concept does not yet have one established meaning. Rather, multiple partly consistent, partly contradictory definitions and usages exist. This chapter provides a framework for clarifying the confusing terminology of digital textbook initiatives. To arrive at this framework, the author explores the interdependences between textbook, digital (text)book and educational software concepts and proposes a synthetic definition.
APA, Harvard, Vancouver, ISO, and other styles
10

Rajendiran, Kishore, Kumar Kannan, and Yongbin Yu. "Applications of Machine Learning in Cyber Forensics." In Advances in Digital Crime, Forensics, and Cyber Terrorism. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4900-1.ch002.

Full text
Abstract:
Nowadays, individuals and organizations experience an increase in cyber-attacks. Combating such cybercrimes has become the greatest struggle for individuals and organizations. Furthermore, the battle has heightened as cybercriminals have gone a step ahead, employing complicated cyber-attack techniques. These techniques are minute and unobtrusive in nature and habitually disguised as authentic requests and commands. To combat these attacks, cyber-security professionals and digital forensic investigators collect large and complex pools of data to reveal potential digital evidence (PDE), which helps investigators arrive at particular conclusions and/or decisions. In cyber forensics, a challenging issue is that it is hard for investigators to draw conclusions, as the big data often comes from multiple sources and in different file formats. The objective is to explore the possible applications of machine learning (ML) in cyber forensics and to discuss the various research issues, the solutions of which will serve to provide better predictions for cyber forensics.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multiple arrival sources"

1

Gordon, Michael S., Darlene Edewaard, and Matthew Pacailler. "Time-to-arrival discrimination of multiple sound sources." In ICA 2013 Montreal. ASA, 2013. http://dx.doi.org/10.1121/1.4800214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Suleesathira, R. "Close direction of arrival estimation for multiple narrowband sources." In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Suleesathira, R. "Close direction of arrival estimation for multiple narrowband sources." In Proceedings of 2003 International Conference on Neural Networks and Signal Processing. IEEE, 2003. http://dx.doi.org/10.1109/icnnsp.2003.1281104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Meyer, Florian, Alessandra Tesei, and Moe Z. Win. "Localization of multiple sources using time-difference of arrival measurements." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baumann, Nadine, and Martin Skutella. "Solving Evacuation Problems Efficiently--Earliest Arrival Flows with Multiple Sources." In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). IEEE, 2006. http://dx.doi.org/10.1109/focs.2006.70.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mularzuk, Roman. "Monopulse estimation of direction of arrival in case of multiple incoherent sources." In 2018 22nd International Microwave and Radar Conference (MIKON). IEEE, 2018. http://dx.doi.org/10.23919/mikon.2018.8405304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Florio, Antonello, Gianfranco Avitabile, and Ka Lok Man. "Estimating the Angle of Arrival from Multiple RF Sources using Phase Interferometry." In 2022 19th International SoC Design Conference (ISOCC). IEEE, 2022. http://dx.doi.org/10.1109/isocc56007.2022.10031322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Adavanne, Sharath, Archontis Politis, and Tuomas Virtanen. "Direction of Arrival Estimation for Multiple Sound Sources Using Convolutional Recurrent Neural Network." In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018. http://dx.doi.org/10.23919/eusipco.2018.8553182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Moore, Alastair H., Christine Evers, and Patrick A. Naylor. "2D direction of arrival estimation of multiple moving sources using a spherical microphone array." In 2016 24th European Signal Processing Conference (EUSIPCO). IEEE, 2016. http://dx.doi.org/10.1109/eusipco.2016.7760442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cai, Xiuzhang, and Kamal Sarabandi. "A Fast Analytic Multiple-Sources Angle-of-Arrival Estimation Algorithm for Automotive MIMO Radars." In 2020 IEEE International Symposium on Antennas and Propagation and North American Radio Science Meeting. IEEE, 2020. http://dx.doi.org/10.1109/ieeeconf35879.2020.9330287.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Multiple arrival sources"

1

LeDuc, Jamie, Ryan Maki, Tom Burri, et al. Voyageurs National Park interior lakes status and impact assessment. National Park Service, 2022. http://dx.doi.org/10.36967/nrr-2289923.

Full text
Abstract:
Voyageurs National Park (VNP) is a water-based park that includes part or all of four large lakes and twenty-six smaller lakes commonly referred to as “interior lakes”. The 26 interior lakes of VNP are important aquatic resources with differing size and depth, water chemistry, trophic status, fish communities, and visitor use. Despite the remote location of these lakes, they have been impacted by multiple stressors, including contaminants and non-native species. This assessment provides key information in support of a science-based management plan for these lakes. The objectives of this assessment were to: 1) assess the status of each of the interior lakes and categorically rank the lakes from least to most impacted; 2) use the results to make specific science-based management and restoration recommendations for the most impacted lakes; and 3) make general recommendations that may be useful in managing all of Voyageurs National Park’s interior lakes. When all factors considered in this analysis (water quality, fish mercury concentrations, and fish community characteristics) were combined and assessed, thirteen lakes were classified as most impacted, nine as moderately impacted, and three as least impacted. Although nearly half of the lakes were classified as most impacted in this assessment, many of these lakes are in excellent condition compared to other lakes in the region. The factor-by-factor impact status of the lakes was considered to help describe the condition of the lakes and the potential for restoration. No local management options were identified to mitigate the mercury contamination in these lakes, as research has shown that the contamination arrives from regional and global sources.
Similarly, no management actions were identified to reduce nutrient concentrations in the lakes ranked most impacted for nutrient conditions as these are remote lakes with minimal or no development within their watersheds, and it was also noted that paleolimnological studies have shown that the nutrient status of the interior lakes was relatively unchanged from pre-European settlement conditions. Finally, for the lakes in which presumed introduced fish species are present, piscicides were considered as a potential management action. Piscicides were not recommended as a strategy to eradicate introduced fish species as it was determined that genetic conservation of the populations of native fish species still present in these lakes was more valuable than eradicating the introduced species.
APA, Harvard, Vancouver, ISO, and other styles
2

Buesseler, Ken, Daniele Bianchi, Fei Chai, et al. Paths forward for exploring ocean iron fertilization. Woods Hole Oceanographic Institution, 2023. http://dx.doi.org/10.1575/1912/67120.

Full text
Abstract:
We need a new way of talking about global warming. UN Secretary General António Guterres underscored this when he said the “era of global boiling” has arrived. Although we have made remarkable progress on a very complex problem over the past thirty years, we have a long way to go before we can keep the global temperature increase to below 2°C relative to the pre-industrial times. Climate models suggest that this next decade is critical if we are to avert the worst consequences of climate change. The world must continue to reduce greenhouse gas emissions, and find ways to adapt and build resilience among vulnerable communities. At the same time, we need to find new ways to remove carbon dioxide from the atmosphere in order to chart a “net negative” emissions pathway. Given their large capacity for carbon storage, the oceans must be included in consideration of our multiple carbon dioxide removal (CDR) options. This report focused on ocean iron fertilization (OIF) for marine CDR. This is by no means a new scientific endeavor. Several members of ExOIS (Exploring Ocean Iron Solutions) have been studying this issue for decades, but the emergence of runaway climate impacts has motivated this group to consider a responsible path forward for marine CDR. That path needs to ensure that future choices are based upon the best science and social considerations required to reduce human suffering and counter economic and ecological losses, while limiting and even reversing the negative impacts that climate change is already having on the ocean and the rest of the planet. Prior studies have confirmed that the addition of small amounts of iron in some parts of the ocean is effective at stimulating phytoplankton growth. Through enhanced photosynthesis, carbon dioxide can not only be removed from the atmosphere but a fraction can also be transferred to durable storage in the deep sea. 
However, prior studies were not designed to quantify how effective this storage can be, or how wise OIF might be as a marine CDR approach. ExOIS is a consortium that was created in 2022 to consider what OIF studies are needed to answer critical questions about the potential efficiency and ecological impacts of marine CDR (http://oceaniron.org). Owing to concerns surrounding the ethics of marine CDR, ExOIS is organized around a responsible code of conduct that prioritizes activities for the collective benefit of our planet with an emphasis on open and transparent studies that include public engagement. Our goal is to establish open-source conventions for implementing OIF for marine CDR that can be assessed with appropriate monitoring, reporting, and verification (MRV) protocols, going beyond just carbon accounting, to assess ecological and other non-carbon environmental effects (eMRV). As urgent as this is, it will still take 5 to 10 years of intensive work and considerable resources to accomplish this goal. We present here a “Paths Forward’’ report that stems from a week-long workshop held at the Moss Landing Marine Laboratories in May 2023 that was attended by international experts spanning atmospheric, oceanographic, and social sciences as well as legal specialists (see inside back cover). At the workshop, we reviewed prior OIF studies, distilled the lessons learned, and proposed several paths forward over the next decade to lay the foundation for evaluating OIF for marine CDR. Our discussion very quickly resulted in a recommendation for the need to establish multiple “Ocean Iron Observatories’’ where, through observations and modeling, we would be able to assess with a high degree of certainty both the durable removal of atmospheric carbon dioxide—which we term the “centennial tonne”—and the ecological response of the ocean. In a five-year phase I period, we prioritize five major research activities: 1. 
Next generation field studies: Studies of long-term (durable) carbon storage will need to be longer (year or more) and larger (>10,000 km2) than past experiments, organized around existing tools and models, but with greater reliance on autonomous platforms. While prior studies suggested that ocean systems return to ambient conditions once iron infusion is stopped, this needs to be verified. We suggest that these next field experiments take place in the NE Pacific to assess the processes controlling carbon removal efficiencies, as well as the intended and unintended ecological and geochemical consequences. 2. Regional, global and field study modeling: Incorporation of new observations and model intercomparisons are essential to accurately represent how iron cycling processes regulate OIF effects on marine ecosystems and carbon sequestration, to support experimental planning for large-scale MRV, and to guide decision making on marine CDR choices. 3. New forms of iron and delivery mechanisms: Rigorous testing and comparison of new forms of iron and their potential delivery mechanisms is needed to optimize phytoplankton growth while minimizing the financial and carbon costs of OIF. Efficiency gains are expected to generate responses closer to those of natural OIF events. 4. Monitoring, reporting, and verification: Advances in observational technologies and platforms are needed to support the development, validation, and maintenance of models required for MRV of large-scale OIF deployment. In addition to tracking carbon storage and efficiency, prioritizing eMRV will be key to developing regulated carbon markets. 5. Governance and stakeholder engagement: Attention to social dimensions, governance, and stakeholder perceptions will be essential from the start, with particular emphasis on expanding the diversity of groups engaged in marine CDR across the globe.
This feedback will be a critical component underlying future decisions about whether to proceed, or not, with OIF for marine CDR. Paramount in the plan is the need to move carefully. Our goal is to conduct these five activities in parallel to inform decisions steering the establishment of ocean iron observatories at multiple locations in phase II. When completed, this decadal plan will provide a rich knowledge base to guide decisions about if, when, where, and under what conditions OIF might be responsibly implemented for marine CDR. The consensus of our workshop and this report is that now is the time for actionable studies to begin. Quite simply, we suggest that some form of marine CDR will be essential to slow down and reverse the most severe consequences of our disrupted climate. OIF has the potential to be one of these climate mitigation strategies. We have the opportunity and obligation to invest in the knowledge necessary to ensure that we can make scientifically and ethically sound decisions for the future of our planet.
APA, Harvard, Vancouver, ISO, and other styles
3

Lunn, Pete, Marek Bohacek, Jason Somerville, Áine Ní Choisdealbha, and Féidhlim McGowan. PRICE Lab: An Investigation of Consumers’ Capabilities with Complex Products. ESRI, 2016. https://doi.org/10.26504/bkmnext306.

Full text
Abstract:
Executive Summary: This report describes a series of experiments carried out by PRICE Lab, a research programme at the Economic and Social Research Institute (ESRI) jointly funded by the Central Bank of Ireland, the Commission for Energy Regulation, the Competition and Consumer Protection Commission and the Commission for Communications Regulation. The experiments were conducted with samples of Irish consumers aged 18-70 years and were designed to answer the following general research question: At what point do products become too complex for consumers to choose accurately between the good ones and the bad ones? BACKGROUND AND METHODS: PRICE Lab represents a departure from traditional methods employed for economic research in Ireland. It belongs to the rapidly expanding area of ‘behavioural economics’, which is the application of psychological insights to economic analysis. In recent years, behavioural economics has developed novel methods and generated many new findings, especially in relation to the choices made by consumers. These scientific advances have implications both for economics and for policy. They suggest that consumers often do not make decisions in the way that economists have traditionally assumed. The findings show that consumers have limited capacity for attending to and processing information and that they are prone to systematic biases, all of which may lead to disadvantageous choices. In short, consumers may make costly mistakes. Research has indeed documented that in several key consumer markets, including financial services, utilities and telecommunications, many consumers struggle to choose the best products for themselves. It is often argued that these markets involve ‘complex’ products. The obvious question that arises is whether consumer policy can be used to help them to make better choices when faced with complex products.
Policies are more likely to be successful where they are informed by an accurate understanding of how real consumers make decisions between products. To provide evidence for consumer policy, PRICE Lab has developed a method for measuring the accuracy with which consumers make choices, using techniques adapted from the scientific study of human perception. The method allows researchers to measure how reliably consumers can distinguish a good deal from a bad one. A good deal is defined here as one where the product is more valuable than the price paid. In other words, it offers good value for money or, in the jargon of economics, offers the consumer a ‘surplus’. Conversely, a bad deal offers poor value for money, providing no (or a negative) surplus. PRICE Lab’s main experimental method, which we call the ‘Surplus Identification’ (S-ID) task, allows researchers to measure how accurately consumers can spot a surplus and whether they are prone to systematic biases. Most importantly, the S-ID task can be used to study how the accuracy of consumers’ decisions changes as the type of product changes. For the experiments we report here, samples of consumers arrived at the ESRI one at a time and spent approximately one hour doing the S-ID task with different kinds of products, which were displayed on a computer screen. They had to learn to judge the value of one or more products against prices and were then tested for accuracy. As well as people’s intrinsic motivation to do well when their performance on a task like this is tested, we provided an incentive: one in every ten consumers who attended PRICE Lab won a prize, based on their performance. Across a series of these experiments, we were able to test how the accuracy of consumers’ decisions was affected by the number and nature of the product’s characteristics, or ‘attributes’, which they had to take into account in order to distinguish good deals from bad ones. 
In other words, we were able to study what exactly makes for a ‘complex’ product, in the sense that consumers find it difficult to choose good deals. FINDINGS: Overall, across all ten experiments described in this report, we found that consumers’ judgements of the value of products against prices were surprisingly inaccurate. Even when the product was simple, meaning that it consisted of just one clearly perceptible attribute (e.g. the product was worth more when it was larger), consumers required a surplus of around 16-26 per cent of the total price range in order to be able to judge accurately that a deal was a good one rather than a bad one. Put another way, when most people have to map a characteristic of a product onto a range of prices, they are able to distinguish at best between five and seven levels of value (e.g. five levels might be thought of as equivalent to ‘very bad’, ‘bad’, ‘average’, ‘good’, ‘very good’). Furthermore, we found that judgements of products against prices were not only imprecise, but systematically biased. Consumers generally overestimated what products at the top end of the range were worth and underestimated what products at the bottom end of the range were worth, typically by as much as 10-15 per cent and sometimes more. We then systematically increased the complexity of the products, first by adding more attributes, so that the consumers had to take into account two, three, then four different characteristics of the product simultaneously. One product might be good on attribute A, not so good on attribute B and available at just above the average price; another might be very good on A, middling on B, but relatively expensive. Each time the consumer’s task was to judge whether the deal was good or bad. We would then add complexity by introducing attribute C, then attribute D, and so on. Thus, consumers had to negotiate multiple trade-offs.
Performance deteriorated quite rapidly once multiple attributes were in play. Even the best performers could not integrate all of the product information efficiently – they became substantially more likely to make mistakes. Once people had to consider four product characteristics simultaneously, all of which contributed equally to the monetary value of the product, a surplus of more than half the price range was required for them to identify a good deal reliably. This was a fundamental finding of the present experiments: once consumers had to take into account more than two or three different factors simultaneously their ability to distinguish good and bad deals became strikingly imprecise. This finding therefore offered a clear answer to our primary research question: a product might be considered ‘complex’ once consumers must take into account more than two or three factors simultaneously in order to judge whether a deal is good or bad. Most of the experiments conducted after we obtained these strong initial findings were designed to test whether consumers could improve on this level of performance, perhaps for certain types of products or with sufficient practice, or whether the performance limits uncovered were likely to apply across many different types of product. An examination of individual differences revealed that some people were significantly better than others at judging good deals from bad ones. However the differences were not large in comparison to the overall effects recorded; everyone tested struggled once there were more than two or three product attributes to contend with. People with high levels of numeracy and educational attainment performed slightly better than those without, but the improvement was small. We also found that both the high level of imprecision and systematic bias were not reduced substantially by giving people substantial practice and opportunities to learn – any improvements were slow and incremental. 
A series of experiments was also designed to test whether consumers’ capability was different depending on the type of product attribute. In our initial experiments the characteristics of the products were all visual (e.g., size, fineness of texture, etc.). We then performed similar experiments where the relevant product information was supplied as numbers (e.g., percentages, amounts) or in categories (e.g., Type A, Rating D, Brand X), to see whether performance might improve. This question is important, as most financial and contractual information is supplied to consumers in a numeric or categorical form. The results showed clearly that the type of product information did not matter for the level of imprecision and bias in consumers’ decisions – the results were essentially the same whether the product attributes were visual, numeric or categorical. What continued to drive performance was how many characteristics the consumer had to judge simultaneously. Thus, our findings were not the result of people failing to perceive or take in information accurately. Rather, the limiting factor in consumers’ capability was how many different factors they had to weigh against each other at the same time. In most of our experiments the characteristics of the product and its monetary value were related by a one-to-one mapping; each extra unit of an attribute added the same amount of monetary value. In other words, the relationships were all linear. Because other findings in behavioural economics suggest that consumers might struggle more with non-linear relationships, we designed experiments to test them. For example, the monetary value of a product might increase more when the amount of one attribute moves from very low to low, than when it moves from high to very high. We found that this made no difference to either the imprecision or bias in consumers’ decisions provided that the relationship was monotonic (i.e. 
the direction of the relationship was consistent, so that more or less of the attribute always meant more or less monetary value respectively). When the relationship involved a turning point (i.e. more of the attribute meant higher monetary value but only up to a certain point, after which more of the attribute meant less value) consumers’ judgements were more imprecise still. Finally, we tested whether familiarity with the type of product improved performance. In most of the experiments we intentionally used products that were new to the experimental participants. This was done to ensure experimental control and so that we could monitor learning. In the final experiment reported here, we used two familiar products (Dublin houses and residential broadband packages) and tested whether consumers could distinguish good deals from bad deals any better among these familiar products than they could among products that they had never seen before, but which had the same number and type of attributes and price range. We found that consumers’ performance was the same for these familiar products as for unfamiliar ones. Again, what primarily determined the amount of imprecision and bias in consumers’ judgments was the number of attributes that they had to balance against each other, regardless of whether these were familiar or novel. POLICY IMPLICATIONS: There is a menu of consumer policies designed to assist consumers in negotiating complex products. A review, including international examples, is given in the main body of the report. The primary aim is often to simplify the consumer’s task. Potential policies, versions of which already exist in various forms and which cover a spectrum of interventionist strength, might include: the provision and endorsement of independent, transparent price comparison websites and other choice engines (e.g.
mobile applications, decision software); the provision of high quality independent consumer advice; ‘mandated simplification’, whereby regulations stipulate that providers must present product information in a simplified and standardised format specifically determined by regulation; and more strident interventions such as devising and enforcing prescriptive rules and regulations in relation to permissible product descriptions, product features or price structures. The present findings have implications for such policies. However, while the experimental findings have implications for policy, it needs to be borne in mind that the evidence supplied here is only one factor in determining whether any given intervention in markets is likely to be beneficial. The findings imply that consumers are likely to struggle to choose well in markets with products consisting of multiple important attributes that must all be factored in when making a choice. Interventions that reduce this kind of complexity for consumers may therefore be beneficial, but nothing in the present research addresses the potential costs of such interventions, or how providers are likely to respond to them. The findings are also general in nature and are intended to give insights into consumer choices across markets. There are likely to be additional factors specific to certain markets that need to be considered in any analysis of the costs and benefits of a potential policy change. Most importantly, the policy implications discussed here are not specific to Ireland or to any particular product market. Furthermore, they should not be read as criticisms of existing regulatory regimes, which already go to some lengths in assisting consumers to deal with complex products. Ireland currently has extensive regulations designed to protect consumers, both in general and in specific markets, descriptions of which can be found in Section 9.1 of the main report. 
Nevertheless, the experiments described here do offer relevant guidance for future policy designs. For instance, they imply that while policies that make it easier for consumers to switch providers may be necessary to encourage active consumers, they may not be sufficient, especially in markets where products are complex. In order for consumers to benefit, policies that help them to identify better deals reliably may also be required, given the scale of inaccuracy in consumers’ decisions that we record in this report when products have multiple important attributes. Where policies are designed to assist consumer decisions, the present findings imply quite severe limits in relation to the volume of information consumers can simultaneously take into account. Good impartial consumer advice may limit the volume of information and focus on ensuring that the most important product attributes are recognised by consumers. The findings also have implications for the role of competition. While consumers may obtain substantial potential benefits from competition, their capabilities when faced with more complex products are likely to reduce such benefits. Pressure from competition requires sufficient numbers of consumers to spot and exploit better value offerings. Given our results, providers with larger market shares may face incentives to increase the complexity of products in an effort to dampen competitive pressure and generate more market power. Where marketing or pricing practices result in prices or attributes with multiple components, our findings imply that consumer choices are likely to become less accurate. Policymakers must of course be careful in determining whether such practices amount to legitimate innovations with potential consumer benefit. Yet there is a genuine danger that spurious complexity can be generated that confuses consumers and protects market power.
The results described here provide backing for the promotion and/or provision by policymakers of high-quality independent choice engines, including but not limited to price comparison sites, especially in circumstances where the number of relevant product attributes is high. A longer discussion of the potential benefits and caveats associated with such policies is contained in the main body of the report.

Mandated simplification policies are gaining in popularity internationally. Examples include limiting the number of tariffs a single energy company can offer or standardising health insurance products, both of which are designed to simplify comparisons between prices and/or product attributes. The present research has some implications for what might make a good mandate. Consumer decisions are likely to be improved where a mandate brings the most important product attributes to the consumer’s attention at the point of decision. The present results offer guidance on how many key attributes consumers are able to trade off simultaneously, with implications for the design of standardised disclosures. While bearing in mind the potential for imposing costs, the results also suggest benefits to compulsory ‘meta-attributes’ (such as APRs, energy ratings, total costs, etc.), which may help consumers to integrate otherwise separate sources of information.

FUTURE RESEARCH

The experiments described here were designed to produce findings that generalise across multiple product markets. However, in addition to the results outlined in this report, the work has resulted in new experimental methods that can be applied to more specific consumer policy issues. This is possible because the methods generate experimental measures of the accuracy of consumers’ decision-making. As such, they can be adapted to assess the quality of consumers’ decisions in relation to specific products, pricing and marketing practices.
Work is underway in PRICE Lab that applies these methods to issues in specific markets, including those for personal loans, energy and mobile phones.
APA, Harvard, Vancouver, ISO, and other styles