Published in: Indian Journal of Otolaryngology and Head & Neck Surgery, Supplement 3/2022

21.07.2021 | Other Articles

Future Solutions for Voice Rehabilitation in Laryngectomees: A Review of Technologies Based on Electrophysiological Signals

Authors: Nithin Prakasan Nair, Vidhu Sharma, Abhinav Dixit, Darwin Kaushal, Kapil Soni, Bikram Choudhury, Amit Goyal


Abstract

Loss of voice is a serious concern for a laryngectomee and should be addressed before the procedure is planned. Patients must be educated about voice rehabilitation options before surgery. Even though many devices are in use, each has its limitations. We searched for probable future technologies for voice rehabilitation in laryngectomees, with the aim of familiarising the ENT fraternity with them. We performed a bibliographic search of Medline, CINAHL, EMBASE, Web of Science and Google Scholar, using title/abstract searches and Medical Subject Headings (MeSH) where appropriate, for publications from January 1985 to January 2020. Results with scope for the development of a device for speech rehabilitation were included in the review. A total of 1036 articles were identified and screened; after careful scrutiny, 40 articles were included in this study. The silent speech interface is one of the most extensively studied topics. It is based on various electrophysiological biosignals, such as non-audible murmur, electromyography, ultrasound characteristics of the vocal folds, optical imaging of the lips and tongue, electromagnetic articulography and electroencephalography. Electromyographic signals have been studied in laryngectomised patients. The silent speech interface may be the answer for the future of voice rehabilitation in laryngectomees. However, all these technologies are still in their early stages, and each has the potential to be developed into a speech device.
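To make the silent-speech-interface idea concrete, the sketch below shows one minimal way an electromyographic biosignal could drive word recognition: windowed root-mean-square (RMS) energy features are extracted from a signal and matched against per-word templates with a nearest-centroid rule. Everything here is invented for illustration — the synthetic "EMG" bursts, the two "silent words", and the classifier are assumptions, not the method of any system cited in this review; real EMG-to-speech systems use multichannel recordings and far richer models.

```python
# Hypothetical sketch of EMG-based silent-word classification.
# Synthetic amplitude-modulated noise stands in for surface EMG;
# windowed RMS features feed a nearest-centroid classifier.
import math
import random

def rms_features(signal, window=50):
    """RMS energy per non-overlapping window (a classic EMG envelope feature)."""
    return [math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
            for i in range(0, len(signal) - window + 1, window)]

def synth_emg(burst_pattern, length=500, noise=0.1, rng=None):
    """Toy EMG: Gaussian noise whose amplitude follows a muscle-burst pattern."""
    rng = rng or random
    return [burst_pattern[(i * len(burst_pattern)) // length] * rng.gauss(0, 1)
            + rng.gauss(0, noise)
            for i in range(length)]

def nearest_centroid(train, sample_feats):
    """Label whose mean feature vector is closest (Euclidean) to the sample."""
    best, best_d = None, float("inf")
    for label, feats_list in train.items():
        centroid = [sum(col) / len(col) for col in zip(*feats_list)]
        d = math.dist(centroid, sample_feats)
        if d < best_d:
            best, best_d = label, d
    return best

rng = random.Random(0)
# Two hypothetical "silent words" with distinct activation envelopes.
patterns = {"yes": [1.0, 0.2, 1.0, 0.2, 1.0],
            "no":  [0.2, 1.0, 1.0, 1.0, 0.2]}
train = {word: [rms_features(synth_emg(p, rng=rng)) for _ in range(10)]
         for word, p in patterns.items()}
probe = rms_features(synth_emg(patterns["yes"], rng=rng))
print(nearest_centroid(train, probe))
```

The point of the sketch is only the pipeline shape — biosignal, envelope features, pattern matcher — which recurs across the non-audible-murmur, ultrasound and EEG approaches discussed in the literature, with the signal source and model swapped out.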
Metadata
Title
Future Solutions for Voice Rehabilitation in Laryngectomees: A Review of Technologies Based on Electrophysiological Signals
Authors
Nithin Prakasan Nair
Vidhu Sharma
Abhinav Dixit
Darwin Kaushal
Kapil Soni
Bikram Choudhury
Amit Goyal
Publication date
21.07.2021
Publisher
Springer India
Published in
Indian Journal of Otolaryngology and Head & Neck Surgery / Issue Supplement 3/2022
Print ISSN: 2231-3796
Electronic ISSN: 0973-7707
DOI
https://doi.org/10.1007/s12070-021-02765-9
