Academic literature on the topic 'Audification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Audification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Audification"

1

Groß-Vogt, Katharina, Matthias Frank, and Robert Höldrich. "Focused Audification and the optimization of its parameters." Journal on Multimodal User Interfaces 14, no. 2 (2019): 187–98. http://dx.doi.org/10.1007/s12193-019-00317-8.

Abstract:
We present a sonification method, which we call Focused Audification (FA; previously: Augmented Audification), that allows pure audification to be expanded in a flexible way. It is based on a combination of single-sideband modulation and a pitch modulation of the original data stream. Based on two free parameters, the sonification’s frequency range is adjustable to the human hearing range and allows the listener to interactively zoom into the data set at any scale. The parameters have been adjusted in a multimodal experiment on cardiac data by laypeople. Following from these results, we suggest a procedure for parameter optimization to achieve an optimal listening range for any data set, adjusted to human speech.
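
The abstract above hinges on shifting an audified data stream into a comfortable listening range. As a rough, hypothetical sketch of that idea (not the authors' implementation, and showing only a single-sideband shift, not the pitch-modulation component), the following Python fragment treats a one-dimensional data series as a waveform, forms its analytic signal, and translates it upward by a chosen shift frequency. The function name, sample rate, and shift amount are illustrative assumptions; varying them corresponds only loosely to the paper's two free parameters.

import numpy as np
from scipy.signal import hilbert
from scipy.io import wavfile

def ssb_audification(data, fs=8000, shift_hz=200.0):
    # Hypothetical sketch: audify a 1-D data series and translate its spectrum
    # upward by shift_hz via a single-sideband shift of the analytic signal.
    x = np.asarray(data, dtype=float)
    x = x - x.mean()
    x = x / (np.abs(x).max() + 1e-12)                       # centre and normalise
    t = np.arange(len(x)) / fs
    shifted = np.real(hilbert(x) * np.exp(2j * np.pi * shift_hz * t))
    return np.int16(shifted / (np.abs(shifted).max() + 1e-12) * 32767)

# Example: a slow synthetic oscillation, shifted into the audible range.
fs = 8000
t = np.arange(fs * 5) / fs
series = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)
wavfile.write("focused_audification_sketch.wav", fs, ssb_audification(series, fs))
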
2

Ikeshiro, Ryo. "Audification and Non-Standard Synthesis in Construction in Self." Organised Sound 19, no. 1 (2014): 78–89. http://dx.doi.org/10.1017/s1355771813000435.

Abstract:
The author's Construction in Self (2009) belongs to the interdisciplinary context of auditory display/music. Its use of data at audio rate could be described as both audification and non-standard synthesis. The possibilities of audio-rate data use and the relation between the above descriptions are explored, and then used to develop a conceptual and theoretical basis of the work. Vickers and Hogg's term ‘indexicality’ is used to contrast audio with control rate. The conceptual implications of its use within the digital medium and the possibility for the formation of higher-order structures are discussed. Grond and Hermann's notion of ‘familiarity’ is used to illustrate the difference between audification and non-standard synthesis, and the contexts of auditory display and music respectively. Familiarity is given as being determined by Dombois and Eckel's categories of data. Kubisch's Electrical Walks, Xenakis's GENDYN and the audification of seismograms are used as examples. Bogost's concept of the alien is introduced, and its relevance to the New Aesthetic and Algorave are discussed. Sound examples from Construction in Self are used to demonstrate the varying levels of familiarity or noise possible and suggested as providing a way of bridging the divide between institutional and underground electronic music.
3

Perkis, Tim, and Gregory Kramer. "Auditory Display: Sonification, Audification, and Auditory Interfaces." Computer Music Journal 19, no. 2 (1995): 110. http://dx.doi.org/10.2307/3680606.

4

Rangayyan, Rangaraj M. "Audification and sonification of texture in images." Journal of Electronic Imaging 10, no. 3 (2001): 690. http://dx.doi.org/10.1117/1.1382811.

5

Son, Jeong, and Lee. "An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning." Sensors 19, no. 22 (2019): 5035. http://dx.doi.org/10.3390/s19225035.

Abstract:
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance.
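
For orientation only, the following Python skeleton mirrors the three-module structure named in the abstract above (data collection, audification, visualization). The class and method names are my own shorthand, and the speech component is a placeholder where a real STT/TTS engine would be plugged in; the paper's deep-learning models are not reproduced here.

import time

class DataCollectionModule:
    # Sketch of the DCMM: stores and manages data collected from the vehicle.
    def __init__(self):
        self.records = []
    def collect(self, reading):
        self.records.append({"time": time.time(), **reading})

class AudificationModule:
    # Sketch of the ACM: in the paper this wraps STT and text-to-wave submodules.
    def text_to_wave(self, text):
        # A real system would call a TTS engine here; we only report what would be spoken.
        print(f"[TTS placeholder] would speak: {text}")

class DataVisualizationModule:
    # Sketch of the DVM: turns collected readings into a displayable summary.
    def render(self, records):
        return "; ".join(f"{k}={v}" for r in records for k, v in r.items() if k != "time")

# Minimal pipeline: collect a fault-diagnosis reading, visualise it, audify the summary.
dcmm, acm, dvm = DataCollectionModule(), AudificationModule(), DataVisualizationModule()
dcmm.collect({"speed_kmh": 42, "fault": "tire pressure low"})
acm.text_to_wave(dvm.render(dcmm.records))
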
6

Davies, T. Claire, Shane D. Pinder, and Catherine M. Burns. "How far is that wall? Judging distance with audification." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 53, no. 17 (2009): 1091–95. http://dx.doi.org/10.1177/154193120905301710.

7

Grond, Florian. "Safety Certificate: an audification performance of high-speed trains." AI & SOCIETY 27, no. 2 (2011): 293–95. http://dx.doi.org/10.1007/s00146-011-0351-5.

8

Pinder, Shane D., and T. Claire Davies. "AUDEO: Audification of Ultrasound for the Detection of Environmental Obstacles." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 53, no. 21 (2009): 1647–51. http://dx.doi.org/10.1177/154193120905302103.

9

Rodet, Xavier. "Sound and Music from Chua's Circuit." Journal of Circuits, Systems and Computers 3, no. 1 (1993): 49–61. http://dx.doi.org/10.1142/s0218126693000058.

Abstract:
Nonlinear Dynamics have been very inspiring for musicians, but have rarely been considered specifically for sound synthesis. We discuss here the signals produced by Chua's circuit from an acoustical and musical point of view. We have designed a real-time simulation of Chua's circuit on a digital workstation allowing for easy experimentation with the properties and behaviors of the circuit and of the sounds. A surprisingly rich and novel family of musical sounds has been obtained. The audification of the local properties of the parameter space allows for easy determination of very complex structures which could not be computed analytically and would not be simple to determine by other methods. Finally, we have found that the time-delayed Chua's circuit can model the basic behavior of an interesting class of musical instruments.
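
For readers who want to hear this kind of signal, a small hypothetical Python sketch (not Rodet's real-time system) is given below: it integrates the standard dimensionless Chua equations with a fixed-step Euler scheme and writes the x variable straight to a WAV file, i.e. a direct audification of the circuit's trajectory. The parameter values are a commonly used double-scroll set and are assumed, not taken from the paper.

import numpy as np
from scipy.io import wavfile

# Dimensionless Chua's circuit with the piecewise-linear diode characteristic.
alpha, beta = 15.6, 28.0
m0, m1 = -8.0 / 7.0, -5.0 / 7.0            # typical double-scroll parameters (assumed)

def f(x):
    # Chua diode characteristic
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

def euler_step(state, h):
    x, y, z = state
    dx = alpha * (y - x - f(x))
    dy = x - y + z
    dz = -beta * y
    return x + h * dx, y + h * dy, z + h * dz

fs, seconds = 44100, 5
substeps, h = 5, 0.004                      # several small Euler steps per audio sample
state = (0.7, 0.0, 0.0)
samples = np.empty(fs * seconds)
for n in range(samples.size):
    for _ in range(substeps):
        state = euler_step(state, h)
    samples[n] = state[0]                   # audify the x variable directly

samples /= np.abs(samples).max()
wavfile.write("chua_audification_sketch.wav", fs, np.int16(samples * 32767))
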
10

Davies, T. Claire, Shane D. Pinder, George Dodd, and Catherine M. Burns. "Where did that sound come from? Comparing the ability to localise using audification and audition." Disability and Rehabilitation: Assistive Technology 7, no. 2 (2011): 130–38. http://dx.doi.org/10.3109/17483107.2011.602172.


Dissertations / Theses on the topic "Audification"

1

Jackson, Judith. "Generative Processes for Audification." Oberlin College Honors Theses / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1528280288385596.

2

Jondell, Karl Johannes. "Radio Diabetes: En studie av kollektiv sonifiering [Radio Diabetes: A study of collective sonification]." Thesis, Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kmh:diva-3987.

Abstract:
A web installation that uses blood glucose levels uploaded by diabetics to create music. Available at https://radiodiabetes.eu
3

Davies, Theresa Claire. "Audification of Ultrasound for Human Echolocation." Thesis, 2008. http://hdl.handle.net/10012/3878.

Abstract:
Individuals with functional blindness must often utilise assistive aids to enable them to complete tasks of daily living. One of these tasks, locomotion, poses considerable risk. The long white cane is often used to perform haptic exploration, but cannot detect obstacles that are not ground-based. Although devices have been developed to provide information above waist height, these do not provide auditory interfaces that are easy to learn. Development of such devices should adapt to the user, not require adaptation by the user. Can obstacle avoidance be achieved through direct perception? This research presents an auditory interface that has been designed with the user as the primary focus. An analysis of the tasks required has been taken into account resulting in an interface that audifies ultrasound. Audification provides intuitive information to the user to enable perceptive response to environmental obstacles. A device was developed that provides Doppler shift signals that are audible as a result of intentional aliasing. This system provides acoustic flow that is evident upon initiation of travel and has been shown to be effective in perceiving apertures and avoiding environmental obstacles. The orientation of receivers on this device was also examined, resulting in better distance perception and centreline accuracy when oriented outward as compared to forward. The design of this novel user interface for visually impaired individuals has also provided a tool that can be used to evaluate direct perception and acoustic flow in a manner that has never been studied before.
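
The central trick in this abstract, making Doppler shifts audible "as a result of intentional aliasing", can be illustrated numerically. The toy Python sketch below (my own construction, not the thesis hardware) simulates a 40 kHz echo offset by an assumed 250 Hz Doppler shift, then resamples it at 40 kHz without anti-alias filtering, so the carrier folds to 0 Hz and the Doppler offset lands directly in the audible band.

import numpy as np

f_carrier = 40_000.0     # ultrasonic transmit frequency (Hz)
f_doppler = 250.0        # assumed Doppler shift from an approaching surface (Hz)
fs_hi = 400_000          # "analogue" simulation rate
fs_audio = 40_000        # deliberate undersampling rate (no low-pass filter)

t = np.arange(int(fs_hi * 0.5)) / fs_hi                  # half a second of received echo
echo = np.cos(2 * np.pi * (f_carrier + f_doppler) * t)

decim = fs_hi // fs_audio
aliased = echo[::decim]                                  # intentional aliasing by decimation

# The 40 kHz carrier aliases to 0 Hz at fs_audio = 40 kHz, so the dominant
# audible component sits at the Doppler shift itself (~250 Hz here).
spectrum = np.abs(np.fft.rfft(aliased))
freqs = np.fft.rfftfreq(aliased.size, d=1.0 / fs_audio)
print(f"dominant audible frequency: {freqs[spectrum.argmax()]:.1f} Hz")
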

Books on the topic "Audification"

1

Kramer, Gregory, Santa Fe Institute (Santa Fe, N.M.), and International Conference on Auditory Display (1st: 1992: Santa Fe, N.M.), eds. Auditory Display: Sonification, Audification, and Auditory Interfaces. Addison-Wesley, 1994.

2

Kramer, Gregory, and International Conference on Auditory Display, eds. Auditory Display: Sonification, Audification, and Auditory Interfaces (Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Volume 18). Addison-Wesley Publishing Company, 1993.

3

Santa Fe Institute (Santa Fe, N.M.) and International Conference on Auditory Display (1992: Santa Fe, N.M.). Auditory Display: Sonification, Audification, and Auditory Interfaces: Proceedings (Santa Fe Institute Studies in the Sciences of Complexity Proceedings). Addison-Wesley Publishing Company, 1994.


Book chapters on the topic "Audification"

1

Worrall, David. "Audification Experiments: Market Data Correlation." In Human–Computer Interaction Series. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-01497-1_7.

2

Søndergaard, Morten, and Anette Vandsø. "Sonification and Audification as Means of Representing Data." In The Aesthetics of Scientific Data Representation. Routledge, 2017. http://dx.doi.org/10.4324/9781315563411-7.

3

Hermann, Thomas, Stella Paschalidou, Dirk Beckmann, and Helge Ritter. "Gestural Interactions for Multi-parameter Audio Control and Audification." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11678816_37.


Conference papers on the topic "Audification"

1

Fang, Zhigang, and Ting Li. "Audification-based Electronic Travel Aid system." In 2010 International Conference on Computer Design and Applications (ICCDA 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccda.2010.5541522.

2

Temko, Andriy, William Marnane, Geraldine Boylan, John M. O'Toole, and Gordon Lightbody. "Neonatal EEG audification for seizure detection." In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2014. http://dx.doi.org/10.1109/embc.2014.6944612.

3

Cowden, Patrick, and Luke Dosiek. "Auditory Displays of Electric Power Grids." In The 24th International Conference on Auditory Display. The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.013.

Abstract:
This paper presents auditory displays of power grid voltage. Due to the constantly changing energy demands experienced by a power system, the voltage varies slightly about nominal, e.g., 120±2 V at 60±0.04 Hz. These variations are small enough that any audible effects, such as transformer hum, appear to have constant volume and pitch. Here, an audification technique is derived that amplifies the voltage variations and shifts the nominal frequency from 60 Hz to a common musical note. Sonification techniques are presented that map the voltage magnitude and frequency to MIDI velocity and pitch, and create a sampler trigger from frequency deviation. Several examples, including audio samples, are given under a variety of power system conditions. These results culminate in a multi-instrument track generated from the sonification of time-synchronized geographically widespread power grid measurements. In addition, an inexpensive Arduino-based device is detailed that allows for real-time sonification of wall outlet voltage.
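
To make the described mapping concrete, here is a small hedged Python sketch in the spirit of the audification above (not the authors' code): it tracks the instantaneous frequency of a simulated outlet voltage via the analytic signal, exaggerates the deviation around 60 Hz, and re-synthesises the waveform around 220 Hz (A3). The gain factors and the simulated wander are illustrative assumptions.

import numpy as np
from scipy.signal import hilbert
from scipy.io import wavfile

fs = 8000
t = np.arange(fs * 5) / fs

# Simulated wall-outlet voltage: 60 Hz nominal with small frequency/amplitude wander.
freq_dev = 0.04 * np.sin(2 * np.pi * 0.2 * t)            # +/- 0.04 Hz wander (assumed)
amp = 1.0 + 0.02 * np.sin(2 * np.pi * 0.5 * t)           # +/- 2 % amplitude wander (assumed)
v = amp * np.cos(2 * np.pi * np.cumsum(60.0 + freq_dev) / fs)

# Estimate instantaneous frequency and amplitude from the analytic signal.
analytic = hilbert(v)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
inst_freq = np.append(inst_freq, inst_freq[-1])
inst_amp = np.abs(analytic)

# Exaggerate deviations and re-centre the carrier on 220 Hz (A3).
k_freq, k_amp = 400.0, 20.0                              # illustrative gains (assumed)
f_audio = 220.0 + k_freq * (inst_freq - 60.0)
a_audio = 1.0 + k_amp * (inst_amp - 1.0)
y = a_audio * np.cos(2 * np.pi * np.cumsum(f_audio) / fs)

wavfile.write("grid_audification_sketch.wav", fs, np.int16(y / np.abs(y).max() * 32767))
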
4

Hermann, Thomas. "Wave Space Sonification." In The 24th International Conference on Auditory Display. The International Community for Auditory Display, 2018. http://dx.doi.org/10.21785/icad2018.026.

Abstract:
This paper introduces Wave Space Sonification (WSS), a novel class of sonification techniques for time- (or space-) indexed data. WSS doesn’t fall into the classes of Audification, Parameter-Mapping Sonification or Model-based Sonification and thus constitutes a novel class of sonification techniques. It realizes a different link between data and their auditory representation, by scanning a scalar field – defined as wave space – along a data-driven trajectory. This allows both the highly controlled definition of the auditory representation for any area of interest, as well as subtle yet acoustically complex sound variations as the overall pattern changes. To illustrate Wave Space Sonification (WSS), we introduce three different WSS instances, (i) the Static Canonical WSS, (ii) Data-driven Localized WSS and (iii), Granular Wave Space Sonification (GWSS), and we demonstrate the different methods with sonification examples from various data domains. We discuss the technique and its relation to other sonification approaches and finally outline productive application areas.
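
As one rough reading of the core idea (a sketch under my own assumptions, not Hermann's implementation): fix a scalar "wave space" W(u, v), let time and the data drive a trajectory through it, and take the field values along that trajectory as the output samples. The particular field and the synthetic data series below are hand-picked for illustration.

import numpy as np
from scipy.io import wavfile

def wave_space(u, v):
    # A hand-picked oscillatory scalar field; any smooth field would do.
    return (np.sin(2 * np.pi * 200 * u) * np.cos(2 * np.pi * 3 * v)
            + 0.3 * np.sin(2 * np.pi * 410 * u + 4 * v))

fs = 8000
duration = 4.0
t = np.arange(int(fs * duration)) / fs

# Time-indexed data (here a decaying slow oscillation) drives the second coordinate.
data = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.2 * t)

u = t                                    # scan position advances with time
v = data                                 # data-driven excursion through the wave space
samples = wave_space(u, v)

samples /= np.abs(samples).max()
wavfile.write("wave_space_sonification_sketch.wav", fs, np.int16(samples * 32767))
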