2024

Time of exposure for a reliable pupil dilation response to unexpected sounds (Poster)

Authors

Amanda Saksida, Niccolò Granieri and Eva Orzan

Abstract

Introduction: Pupil dilation can serve as a measure of auditory attention and as an additional measure of hearing threshold. Studies in infants and adults show a difference in responses to speech and other sounds. It is unknown, however, how much exposure is needed to reliably observe this difference at comfortable intensity levels, how reliable the pupil diameter response (PDR) measure is in individuals at various intensity levels, and whether there are systematic differences in the response to specific types of deviant sounds.
Methods: We observed the PDR to tone and speech (Ling-6 sounds) stimuli during passive listening at different intensities in two groups of young adults (N = 24, mean age = 29 years, SD = 3.9, 11 females). An oddball paradigm with 20% deviant sounds was used in both experiments. The time windows in which a deviant sound elicited a PDR relative to the standard sound across intensity levels were estimated with cluster-based statistics using permuted likelihood ratio tests. The averaged values of these time windows were used to model group responses and predict individual performance.
Results: In both groups, an augmented PDR was associated with deviant sound stimuli. At the highest tested intensity level (70 dB, reported as comfortable by all participants), the analysis of 10 deviant and 10 standard trials (but not smaller amounts of data) yielded reliable model predictions (tones: sensitivity = 0.83, specificity = 0.75, positive predictive value (PPV) = 0.77; speech: sensitivity = 0.83, specificity = 0.5, PPV = 0.63). Averaged raw data per participant yielded even higher PPVs (0.92 and 0.83). Further analysis revealed that in the tone experiment only high-frequency deviant tones (2 and 4 kHz) elicited a significant change in PDR, whereas in the speech experiment consonants (/s/ and /sh/) but not vowels (/i/, /u/) elicited a significant change in PDR.
Discussion: This study measured the minimal amount of exposure to tone and speech stimuli at a comfortable hearing level needed to fit a regression model and reliably predict performance in individual participants. This is a necessary step towards a PDR-based adaptive procedure for measuring auditory attention. We also show that the PDR depends not only on the type of sound (speech, noise, tones) but also on internal categories (e.g. vowels vs. voiceless consonants).
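The cluster-based analysis described in the Methods can be sketched in a few lines of Python. The sketch below assumes two arrays of baseline-corrected pupil traces (participants x time bins, one per condition) and, for brevity, substitutes paired t-statistics for the permuted likelihood ratio tests used in the study; the array shapes, number of permutations, and sign-flipping scheme are illustrative assumptions rather than the authors' pipeline.

# Minimal sketch of a cluster-based permutation test on pupil-diameter time
# courses. Each input array has shape (participants, time_bins). Paired
# t-statistics stand in for the likelihood ratio tests used in the study.
import numpy as np
from scipy import stats

def clusters(mask):
    """Return (start, stop) index pairs of contiguous True runs in a boolean mask."""
    edges = np.flatnonzero(np.diff(np.concatenate(([0], mask.astype(np.int8), [0]))))
    return list(zip(edges[::2], edges[1::2]))

def cluster_perm_test(deviant, standard, n_perm=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_subj, _ = deviant.shape
    diff = deviant - standard                          # paired differences
    t_crit = stats.t.ppf(1 - alpha / 2, n_subj - 1)

    def cluster_masses(d):
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_subj))  # paired t per time bin
        runs = clusters(np.abs(t) > t_crit)
        return runs, np.array([np.abs(t[a:b]).sum() for a, b in runs])

    obs_runs, obs_mass = cluster_masses(diff)
    null = np.zeros(n_perm)                            # max cluster mass under H0
    for i in range(n_perm):
        signs = rng.choice([-1, 1], size=(n_subj, 1))  # flip each participant's sign
        _, mass = cluster_masses(diff * signs)
        null[i] = mass.max() if mass.size else 0.0
    return [(run, (null >= m).mean()) for run, m in zip(obs_runs, obs_mass)]

# Fabricated example: 24 participants, 200 time bins, deviant effect after bin 120.
rng = np.random.default_rng(1)
standard = rng.normal(0.0, 1.0, (24, 200))
deviant = rng.normal(0.0, 1.0, (24, 200)) + np.where(np.arange(200) >= 120, 1.0, 0.0)
print(cluster_perm_test(deviant, standard))  # expect significant cluster(s) near bins 120-200

Significant windows found this way would then be averaged, as in the abstract, to model the group response and predict individual performance.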

Downloads

BibTeX

@inproceedings{SaksidaEtAl:2024,
        title = {Time of exposure for a reliable pupil dilation response to unexpected sounds},
        author = {Saksida, Amanda and Granieri, Niccolò and Orzan, Eva},	
        date = {2024},
        booktitle = {Proceedings of the 15th Speech in Noise Workshop (SPIN2024)},
        url = {https://zenodo.org/records/10521736},
        language = {eng}
    }

2023

Ubiquitous Multimodality as a Tool in Violin Performance Classification (Conference paper)

Authors

William Wilson, Niccolò Granieri and Islah Ali-MacLachlan

Abstract

Through integrated sensors, wearable devices such as fitness trackers and smartwatches provide convenient interfaces by which multimodal time-series data may be recorded. Fostering multimodality in data collection allows recorded actions, exercises, or performances to be observed with consideration of multiple concurrent aspects. This paper details an exploration of machine-learning-based classification on a dataset of audio-gestural violin recordings, collated through a purpose-built smartwatch application. This interface allowed synchronous gestural and audio data to be recorded, which proved well suited to classification by deep neural networks (DNNs). Recordings were segmented into individual bow strokes, which were classified through three tasks: Participant Recognition, Articulation Recognition, and Scale Recognition. Higher participant-classification accuracies were observed using gestural data alone, while multi-input deep neural networks (MI-DNNs) achieved varying increases in accuracy on the latter two tasks by concatenating separate audio and gestural subnetworks. Across tasks and network architectures, test-classification accuracies ranged between 63.83% and 99.67%. Articulation Recognition accuracies were consistently high, averaging 99.37%.
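Read as an architecture sketch, the multi-input idea described above (separate audio and gestural subnetworks whose embeddings are concatenated before a shared classification head) might look as follows in PyTorch. Feature dimensions, layer widths, and the number of classes are placeholders, not values reported in the paper.

# Hypothetical multi-input classifier: audio and gestural feature vectors are
# embedded by separate subnetworks, concatenated, and classified jointly.
import torch
import torch.nn as nn

class MultiInputStrokeClassifier(nn.Module):
    def __init__(self, audio_dim=128, gesture_dim=18, n_classes=3):
        super().__init__()
        self.audio_net = nn.Sequential(       # audio subnetwork
            nn.Linear(audio_dim, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
        self.gesture_net = nn.Sequential(     # gestural subnetwork
            nn.Linear(gesture_dim, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())
        self.head = nn.Sequential(            # shared head after concatenation
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, audio, gesture):
        z = torch.cat([self.audio_net(audio), self.gesture_net(gesture)], dim=1)
        return self.head(z)

# One forward pass on a dummy batch of 8 bow strokes.
model = MultiInputStrokeClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 18))
print(logits.shape)                            # torch.Size([8, 3])

A single-input baseline corresponds to dropping one subnetwork and feeding the head from the other alone, which is how the lone-gestural participant-recognition result above can be read.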

Downloads

BibTeX

@inproceedings{WilsonEtAl:2023b,
        title = {Ubiquitous Multimodality as a Tool in Violin Performance Classification},
        author = {Wilson, William and Granieri, Niccolò and Ali-MacLachlan, Islah},
        date = {2023},
        booktitle = {Proceedings of the 4th International Symposium on the Internet of Sounds (IS2 2023)},
        url = {https://ieeexplore.ieee.org/abstract/document/10335435},
        doi = {10.1109/IEEECONF59510.2023.10335435},
        language = {eng}
    }
Abitudini vocali degli insegnanti all’università e nelle scuole secondarie di secondo grado (Poster)

Authors

Niccolò Granieri, Edoardo Carini, Roberta Rebesco, Valeria Gambacorta, Alessia Fabbri, Elisa Morini, Giampietro Ricci, Mirella Damiani, Elena Magni, Eva Orzan

Abstract

Prolonged use of a raised voice combined with vocal effort is a major risk factor for the onset of voice disorders, particularly among teachers. There is also a correlation between the vocal intensity used during lessons and the poor acoustic conditions of Italian classrooms: background noise, reverberation time and signal-to-noise ratio, despite being fundamental for optimal acoustic treatment, are often ignored. Within the A.Ba.Co project (Abbattimento delle Barriere Comunicative), promoted by the Regione Friuli Venezia Giulia and the Office for Disabilities of the Presidency of the Council of Ministers, we present the results of a questionnaire that investigated the vocal fatigue caused by exposure to noise in school settings among teaching staff of the Università degli Studi di Perugia and of four upper secondary schools in Friuli. A total of 2606 people answered the questionnaire: from the Università degli Studi di Perugia, 1087 students (41.71%) and 224 lecturers (8.6%), and from the upper secondary schools of Friuli Venezia Giulia, 1141 students (43.78%) and 154 teachers (5.9%). The first observation emerging from the analysis of the responses is that the perception of using a raised voice is present in 40% of the university lecturers and is significantly more frequent among upper secondary school teachers (55.76%, p < 0.05). Although the two populations differ significantly in the perceived use of a raised voice, the same cannot be said of vocal fatigue, which is perceived as high in both subgroups: 50.44% of university lecturers report straining their voice, and similarly 60.57% of secondary school teachers report the same. Students rate their teachers' vocal effort higher than the teachers themselves declare (73.3%), with a significant difference for secondary school students (83.25%, p < 0.05). The results of this work underline, once again, the importance of good classroom acoustics, also for the sake of teachers' vocal hygiene.
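As a rough, self-contained check of the significance claim above (40% of 224 university lecturers vs 55.76% of 154 secondary-school teachers reporting a raised voice, p < 0.05), the 2x2 contingency table can be rebuilt from the reported percentages and tested with a chi-square test. The counts below are rounded reconstructions, not the authors' raw data.

# Reconstruct approximate counts from the reported percentages and test them.
from scipy.stats import chi2_contingency

uni_yes = round(0.40 * 224)     # ~90 of 224 university lecturers
sec_yes = round(0.5576 * 154)   # ~86 of 154 secondary-school teachers
table = [[uni_yes, 224 - uni_yes],
         [sec_yes, 154 - sec_yes]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")        # p < 0.05, consistent with the poster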

Downloads

BibTeX

@inproceedings{GranieriEtAl:2023,
        author = {Granieri, Niccolò and Carini, Edoardo and Rebesco, Roberta and Gambacorta, Valeria and Fabbri, Alessia and Morini, Elisa and Ricci, Giampietro and Damiani, Mirella and Magni, Elena and Orzan, Eva},
        title = {Abitudini vocali degli insegnanti all’università e nelle scuole secondarie di secondo grado},
        year = {2023},
        month = {nov},
        booktitle = {S.I.A.F. 2023},
        language = {ita},
        address = {Polo Didattico della Memoria San Rossore, Pisa, Italy},
    }
Le difficoltà di ascolto degli studenti universitari e delle scuole secondarie di secondo grado (Poster)

Authors

Edoardo Carini, Niccolò Granieri, Roberta Rebesco, Valeria Gambacorta, Alessia Fabbri, Elisa Morini, Giampietro Ricci, Mirella Damiani, Elena Magni, Eva Orzan

Abstract

For students attending upper secondary school and university, even a mild hearing loss can negatively affect speech intelligibility during lessons and therefore learning: for these hard-of-hearing students, listening conditions in the classroom must be particularly favourable. Unfortunately, most Italian classrooms do not meet optimal listening conditions. The UNI 11532-2 standard for school buildings, introduced in March 2020, established the minimum acoustic requirements for school environments (S/N > 15 dB, background noise 40 < 30 dB and reverberation time 0.8 < 0.4 s), and the listening difficulties perceived by students demonstrate this. Within the A.Ba.Co project (Abbattimento delle Barriere Comunicative), promoted by the Regione Friuli Venezia Giulia and the Office for Disabilities of the Presidency of the Council of Ministers, we present the results of a questionnaire that investigated the listening difficulties of students and teaching staff of the Università degli Studi di Perugia and of four upper secondary schools in Friuli. A total of 2606 people answered the questionnaire: from the Università degli Studi di Perugia, 1087 students (41.71%) and 224 lecturers (8.6%), and from the upper secondary schools of Friuli Venezia Giulia, 1141 students (43.78%) and 154 teachers (5.9%). The analysis of the responses shows that 8.89% of university students and 8.38% of secondary school students report some kind of hearing difficulty; among university lecturers the prevalence rises to 12.05%, and it grows further to 26.92% among secondary school teachers. The reported difficulties are of various kinds and can be summarised in the following categories: poor speech discrimination (0.92%), unilateral hearing loss (1.34%), bilateral hearing loss (0.77%), tinnitus (0.77%), recruitment (0.31%) and hyperacusis (0.58%). We therefore consider the reasons for the high frequency of hearing problems in the school environment, cross-referencing the data with the responses concerning the suboptimal acoustic conditions of Italian classrooms. Finally, it is worth underlining that everyone, including people without hearing loss, can benefit from a communication environment designed around their needs.

Downloads

BibTeX

@inproceedings{CariniEtAl:2023,
        author = {Carini, Edoardo and Granieri, Niccolò and Rebesco, Roberta and Gambacorta, Valeria and Fabbri, Alessia and Morini, Elisa and Ricci, Giampietro and Damiani, Mirella and Magni, Elena and Orzan, Eva},
        title = {Le difficoltà di ascolto degli studenti universitari e delle scuole secondarie di secondo grado},
        year = {2023},
        month = {nov},
        booktitle = {S.I.A.F. 2023},
        language = {ita},
        address = {Polo Didattico della Memoria San Rossore, Pisa, Italy}
    }
Time's Up for the Myo? - The Smartwatch as an Alternative for Audio Gestural Analyses (Poster)

Authors

William Wilson, Niccolò Granieri and Islah Ali-MacLachlan

Abstract

Wearable gestural sensors have proved integral components of many past NIMEs. Previous implementations have typically made use of specialist IMU- and EMG-based gestural technologies. Few have proved, singularly, as popular as the Myo armband. An informal review of the NIME archives found that the Myo has featured in 21 NIME publications since an initial declaration of the Myo's promise as "a new standard controller in the NIME community".

Downloads

BibTeX

@inproceedings{WilsonEtAl:2023a,
        author = {Wilson, William and Granieri, Niccolò and Ali-MacLachlan, Islah},
        title = {Time's Up for the Myo? - The Smartwatch as an Alternative for Audio Gestural Analyses},
        year = {2023},
        month = {jun},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2023)},
        language = {eng},
        address = {Mexico City, Mexico},
    }

2022

Combining Gestural and Audio Approaches to the Classification of Violin (Conference paper)

Authors

William Wilson, Islah Ali-MacLachlan and Niccolò Granieri

Abstract

This paper details a brief exploration of methods by which gestural and audio-based approaches may be used in the classification of violin performances, based upon a multimodal dataset. Onsets are derived from audio signals and used to segment synchronous gestural recordings, allowing for the classification of individual bow strokes using data of either type, or both. Classification accuracies for participant identification ranged between 71.06% and 91.35% for various data-type combinations. Classification accuracies for the identification of bowing technique were typically lower, ranging between 53.33% and 77.35%. The findings inform a number of recommendations for future work, to be considered in the development of a principally similar dataset for the analysis of traditional fiddle-playing styles.
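The onset-based segmentation step can be illustrated with a short Python sketch: onsets are detected in the audio, and the onset times are used to cut both the audio and the time-aligned gestural stream into per-stroke windows. The file name, the 50 Hz gesture rate and the stand-in gestural array are assumptions for the example, not details taken from the paper.

# Segment a recording into candidate bow strokes using audio onsets, then use
# the same time boundaries on a synchronously recorded gestural stream.
import numpy as np
import librosa

audio, sr = librosa.load("violin_take.wav", sr=None, mono=True)   # placeholder file name
onset_times = librosa.onset.onset_detect(y=audio, sr=sr, units="time")

gesture_rate = 50.0                                    # assumed IMU sample rate (Hz)
n_gest = int(len(audio) / sr * gesture_rate)
gesture = np.random.randn(n_gest, 6)                   # stand-in for real IMU data

bounds = np.append(onset_times, len(audio) / sr)       # close the final segment
strokes = []
for start, end in zip(bounds[:-1], bounds[1:]):
    a = audio[int(start * sr):int(end * sr)]
    g = gesture[int(start * gesture_rate):int(end * gesture_rate)]
    strokes.append((a, g))
print(f"segmented {len(strokes)} candidate bow strokes")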

Downloads

BibTeX

@inproceedings{WilsonEtAl:2022,
        author = {Wilson, William and Ali-MacLachlan, Islah and Granieri, Niccolò},
        title = {Combining Gestural and Audio Approaches to the Classification of Violin},
        year = {2022},
        month = {jun},
        booktitle = {Proceedings of the 10th International Workshop on Folk Music Analysis (FMA 2022)},
        url = {https://zenodo.org/record/7100288#.Y4nl5RTP2Ul},
        doi = {10.5281/zenodo.7100287},
        language = {eng},
        address = {Sheffield, United Kingdom}
    }

2021

Retaining Pianistic Virtuosity in #MIs: Exploring Pre-Existing Gestural Nuances for Live Sound Modulation through a Comparative Study (Book chapter)

Authors

Niccolò Granieri, James Dooley and Tychonas Michailidis

Abstract

This paper focuses on Reach, a keyboard-based gesture recognition system for live piano sound modulation, and the comparative user testing conducted to evaluate it. Reach is a system built using the Leap Motion Orion SDK, a custom C++ OSC mapper and a Pure Data environment. It provides control over the sound modulation of a live piano feed, taking advantage of pre-existing gestural nuances and offering a touch-free experience to the pianist.

The user testing compared the Reach system with two commercially available keyboard-based systems for augmented live sound modulation: Seaboard and TouchKeys. The approach taken during the user tests is illustrated and test results are discussed. The results that emerged suggest an underlying importance of recognising and utilising the musician’s existing technique when designing Digital and Augmented Musical Instruments (#MIs), and the potential of reducing the requirement to learn additional instrumental technique. The comparative user testing discussed in this paper is part of a larger research project that seeks to study and understand how a low degree of invasiveness in digital systems for live sound modulation can reduce the learning curve of new systems, allowing greater access to music making with technology.

Downloads

BibTeX

@inbook{GranieriEtAl:2021,
        author = {Granieri, Niccolò and Dooley, James and Michailidis, Tychonas},
        booktitle = {Innovation in Music. Future Opportunities},
        title = {Retaining Pianistic Virtuosity in #MIs: Exploring Pre-Existing Gestural Nuances for Live Sound Modulation through a Comparative Study},
        year = {2021},
        month = {jan},
        day = {22},
        isbn = {9780367363352},
        publisher = {Focal Press},
        address = {London, United Kingdom}
    }

2020

Augmenting the experience of playing the piano: controlling audio processing through ancillary gestures (PhD thesis)

Authors

Niccolò Granieri

Abstract

Pianists spend many years practicing on their instrument. As a result, alongside their pianistic technique they develop a set of gestural nuances that enable them to perform expressively and establish their own acoustic signature on the piano. This "mute layer" of nuanced gestures is rarely taken into consideration when developing new keyboard-based gestural interfaces, which often require new gestural vocabularies to be learned, resulting in a disruptive experience for the pianist. The main objective of this research is to investigate how new keyboard-based gestural interfaces can enable musicians to control and transform live piano sound through the gestural nuances embedded in their technique; specifically, how keyboard interfaces with nuanced gestural control can extend the creative possibilities available to classically trained pianists, thus stimulating new approaches to building intuitive interfaces for musical expression, and new ways of learning and playing digital instruments. Towards this goal, interviews, user tests and case studies were conducted with a range of pianists from different musical backgrounds, and Reach, an augmented instrument for live sound modulation controlled by gestural nuances embedded in pianistic technique, was developed.

Downloads

BibTeX

@phdthesis{Granieri:2020,
        author = {Granieri, Niccolò},
        title = {Augmenting the experience of playing the piano: controlling audio processing through ancillary gestures},
        school = {Royal Birmingham Conservatoire, Birmingham City University},
        year = {2020},
        month = {jul},
        day = {21},
        url = {https://www.open-access.bcu.ac.uk/id/eprint/12131}
    }
NIME Publication Ecosystem Workshop (Workshop)

Authors

Alexander Refsum Jensenius, Andrew McPherson, Anna Xambó Sedó, Charles Patrick Martin, Jack Armitage, Niccolò Granieri, Rebecca Fiebrink and Luiz Naveda

Abstract

How can we develop an open, future-oriented, multimedia-rich, and institutionally recognised publication ecosystem for NIME practitioners and researchers? This workshop will continue previous discussions about the need for a NIME journal, and for solutions to share ideas, hardware designs, code, scores, and performances, systematically. Concerns about COVID-19, climate change, and accessibility make these discussions urgent and demand reimagining our expectations for a publication venue. What solutions can we start implementing right away, and which goals do we have as a community? This open workshop will lay the ground for concrete experimentation in the year(s) to come.

Downloads

-

BibTeX

@inproceedings{JenseniusEtAl:2020,
        author = {Jensenius, Alexander Refsum and McPherson, Andrew and Xambó Sedó, Anna and Martin, Charles Patrick and Armitage, Jack and Granieri, Niccolò and Fiebrink, Rebecca and Naveda, Luiz},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2020)},
        title = {NIME Publication Ecosystem Workshop},
        year = {2020},
        month = {jul},
        address = {Birmingham, United Kingdom}
    }

2019

Microgestural implementation for the creation of an expressive keyboard interface (Book chapter)

Autori

Niccolò Granieri, Tychonas Michailidis and James Dooley

Abstract

Musicians spend a great deal of time practising their instrument. As a result, they develop a unique set of microgestures that define their personal sound: their acoustic signature. This personal palette of gestures has been identified as one of the most distinctive aspects of piano playing and varies from musician to musician, making their sound unique and enabling them to expressively convey their music.

By using radar millimetre waves to capture micromotions and microgestures, it is possible to achieve a high level of expression without the need to modify the keyboard instrument itself or to require additional technique. The aim of this research is to build on existing instrumental technique and remove the steep learning curve typically found when performing digital or augmented musical instruments. This approach enables the pianist to retain and focus on his or her technical control and musical freedom, resulting in a less disruptive experience.

The paper describes, through the implementation of microgestural sound control, how performers can gain wide control over digital sound processing using their existing technique. The study also aims to identify which musicians will benefit most from the interface by analysing their musical background, level of expertise on the instrument, and familiarity with digital instruments and music environments.

Downloads

BibTeX

@inbook{GranieriEtAl:2019,
        author = {Granieri, Niccolò and Michailidis, Tychonas and Dooley, James},
        title = {Harnessing Ancillary Microgestures in Piano Technique. Implementing Microgestural Control Into an Expressive Keyboard-Based Hyper-Instrument},
        booktitle = {Innovation in Music. Performance, Production, Technology, and Business},
        pages = {269-282},
        year = {2019},
        month = {jul},
        day = {8},
        isbn = {9781138498198},
        publisher = {Routledge},
        address = {London, United Kingdom}
    }
Reach: a keyboard-based gesture recognition system for live piano sound modulation (Paper / Demonstration)

Authors

Niccolò Granieri and James Dooley

Abstract

This paper presents Reach, a keyboard-based gesture recognition system for live piano sound modulation. Reach is a system built using the Leap Motion Orion SDK, Pure Data and a custom C++ OSC mapper. It provides control over the sound modulation of an acoustic piano using the pianist's ancillary gestures.

The system was developed using an iterative design process, incorporating research findings from two user studies and several case studies. The results that emerged show the potential of recognising and utilising the pianist's existing technique when designing keyboard-based DMIs, reducing the requirement to learn additional techniques.
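A minimal sketch of the mapping stage, assuming a Pure Data patch listening for OSC on port 9000: a gesture parameter (here a fabricated hand-height value standing in for Leap Motion tracking data) is scaled to 0-1 and sent as an OSC message. The address pattern, port and scaling are illustrative assumptions; the original Reach mapper is a custom C++ application.

# Forward a (simulated) gesture parameter to Pure Data over OSC.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)            # Pd patch assumed on port 9000

for step in range(200):
    hand_height_mm = 150 + 100 * math.sin(step / 20)   # stand-in for tracked hand height
    wet_dry = min(max((hand_height_mm - 100) / 200, 0.0), 1.0)  # scale to 0..1
    client.send_message("/reach/wet_dry", wet_dry)      # hypothetical address pattern
    time.sleep(0.01)

On the Pure Data side, a [netreceive -u -b 9000] feeding [oscparse] (or the mrpeach OSC objects) would route such a message to the sound-modulation parameter.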

Downloads

BibTeX

@inproceedings{GranieriDooley:2019,
        author = {Granieri, Niccolò and Dooley, James},
        title = {Reach: a keyboard-based gesture recognition system for live piano sound modulation},
        pages = {375--376},
        booktitle = {Proceedings of the International Conference on New Interfaces for Musical Expression (NIME19)},
        editor = {Queiroz, Marcelo and Xambó Sedó, Anna},
        year = {2019},
        month = {jun},
        day = {1},
        address = {Porto Alegre, Brazil},
        doi = {10.5281/zenodo.3673000},
        url = {https://zenodo.org/record/3673000#.X9PceS1Q2CM}
    }

2018

Reach – Designing Keyboard Instruments with pianists in mind (Poster)

Authors

Niccolò Granieri

Abstract

This poster focuses on the comparative user testing conducted to evaluate Reach, a gesture recognition system for live piano sound modulation. The user testing compares the Reach system with two existing keyboard-based systems for live sound modulation: ROLI Seaboard (Lamb and Robertson, 2011) and TouchKeys (McPherson, 2012). The study analyses ease of use, learnability and creative freedom based on participants performing two jazz improvisations on each of the three systems, presented alongside user experience questionnaire (UEQ) data. The poster illustrates results from the test, focusing on the relationship between the learning curve and the creative barrier in digital instruments, and showing promising results for touch-free digital musical instruments (DMIs) such as Reach.

The comparative user testing analysed here is part of a larger research project that seeks to investigate how a low degree of invasiveness in digital systems for live sound modulation can reduce the learning curve and, ultimately, make electronic music more accessible.

Downloads

BibTeX

@inproceedings{Granieri:2018,
        author = {Granieri, Niccolò},
        title = {Reach – Designing Keyboard Instruments with pianists in mind.},
        booktitle = {Sound, Image and Interaction Design Symposium},
        year = {2018},
        month = {oct},
        day = {4},
        address = {Madeira, Portugal}
    }
Improvising through the senses: a performance approach with the indirect use of technology (Article / Publication)

Autori

Tychonas Michailidis, James Dooley, Niccolò Granieri and Balandino Di Donato

Abstract

This article explores and proposes new ways of performing in a technology-mediated environment. We present a case study that examines feedback loop relationships between a dancer and a pianist. Rather than using data from sensor technologies to directly control and affect musical parameters, we captured data from a dancer’s arm movements and mapped them onto a bespoke device that stimulates the pianist’s tactile sense through vibrations. The pianist identifies and interprets the tactile sensory experience, with his improvised performance responding to the changes in haptic information received. Our system presents a new way of technology-mediated performer interaction through tactile feedback channels, enabling the user to establish new creative pathways. We present a classification of vibrotactile interaction as a means of communication, and we conclude that users experience multi-point vibrotactile feedback as one holistic experience rather than as a collection of discrete feedback points.
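As an illustration of the indirect mapping described above, the sketch below turns a stream of accelerometer samples from the dancer's arm into drive levels for a vibration motor worn by the pianist, rather than into musical parameters. The sensor units, smoothing constant and 0-255 output scale are assumptions made for the example, not details from the article.

# Map arm-movement energy to 0-255 vibrotactile drive levels.
import numpy as np

def movement_to_vibration(accel_xyz, smoothing=0.2, rest_g=1.0):
    """Convert accelerometer samples (in g) into 0-255 vibration levels."""
    levels, envelope = [], 0.0
    for sample in accel_xyz:
        energy = abs(np.linalg.norm(sample) - rest_g)  # motion energy above gravity
        envelope += smoothing * (energy - envelope)    # simple low-pass smoothing
        levels.append(int(np.clip(envelope / 1.5, 0, 1) * 255))
    return levels

# Fabricated data: calm motion followed by a vigorous gesture.
rng = np.random.default_rng(0)
calm = rng.normal([0.0, 0.0, 1.0], 0.05, (100, 3))
vigorous = rng.normal([0.0, 0.0, 1.0], 0.8, (100, 3))
print(movement_to_vibration(np.vstack([calm, vigorous]))[::50])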

Downloads

BibTeX

@article{MichailidisEtAl:2018,
        author = {Michailidis, Tychonas and Dooley, James and Granieri, Niccolò and Di Donato, Balandino},
        title = {Improvising through the senses: a performance approach with the indirect use of technology},
        journal = {Digital Creativity},
        volume = {29},
        number = {2-3},
        pages = {149-164},
        year  = {2018},
        month = {aug},
        day = {27},
        publisher = {Routledge},
        doi = {10.1080/14626268.2018.1511600},
        url = {https://doi.org/10.1080/14626268.2018.1511600}
    }

2017

From piano, to piano (Poster)

Authors

Niccolò Granieri

Downloads

BibTeX

@unpublished{Granieri:2017,
        author = {Granieri, Niccolò},
        title = {From piano, to piano},
        booktitle = {Birmingham City University's Research Conference (RESCON17)},
        year = {2017},
        month = {apr},
        day = {5},
        address = {Birmingham, United Kingdom}
    }
Expressing through gesture nuances: Bridging the analog and digital divide (Performance)

Authors

Niccolò Granieri

Abstract

This piano performance has been composed to explore bridging the gap between acoustic instruments and the digital world. The audience will be placed in front of a musician who is stripped, at first, of all his human traits and gestural capabilities, forced to play the instrument through machine-like objects and movements. Wooden sticks will be used to strike the piano keys, making the act of playing mechanical, binary. Throughout the short piece, he will slowly regain control over all of his musical gestures, abandoning the objects that constrained him and finding a different instrument in front of him, one that transcends the classical concept of a piano.

He will explore this new instrument and slowly realise that his technique is being enhanced by the instrument itself and that the explorable sound landscape is far vaster than he thought. The sound coming from the piano will be processed and transformed following the pianist's sound-accompanying gestures: what is usually done in response to the sound here becomes responsible for the sound itself.

The steep learning curve of digital interfaces often poses a barrier to musical creativity. New digital interfaces require years of practice to attain a certain fluency, pushing away instrumentalists who have spent a lifetime perfecting their own instrument and technique. The border between these two worlds is clear, and it is one that this research aims to dissolve.

The goal is to create an interface that takes advantage of the gestures and technique of classically trained pianists and enhances the sound possibilities of the instrument through non-invasive technology. The performance is meant to make the audience question whether technology can enhance a performance without being obtrusive to either the audience or the musician.

Downloads

-

2015

Segnale indesiderato (Thesis)

Authors

Niccolò Granieri

Downloads

BibTeX

@unpublished{Granieri:2015,
        author = {Granieri, Niccolò},
        title = {Segnale indesiderato - Utilizzo e manipolazione di eventi sonori considerati comunemente indesiderati tramite algoritmi di generazione casuale},
        year = {2015},
        month = {jul},
        address = {Trieste, Italy}
    }