Milos Cernak
2020 – today
- 2024
- [c60] Jozef Coldenhoff, Andrew Harper, Paul Kendrick, Tijana Stojkovic, Milos Cernak: Multi-Channel Mosra: Mean Opinion Score and Room Acoustics Estimation Using Simulated Data and A Teacher Model. ICASSP 2024: 381-385
- [c59] Lingjun Meng, Jozef Coldenhoff, Paul Kendrick, Tijana Stojkovic, Andrew Harper, Kiril Ratmanski, Milos Cernak: On Real-Time Multi-Stage Speech Enhancement Systems. ICASSP 2024: 10241-10245
- [i29] Siyi Wang, Siyi Liu, Andrew Harper, Paul Kendrick, Mathieu Salzmann, Milos Cernak: Diffusion-based Speech Enhancement with Schrödinger Bridge and Symmetric Noise Schedule. CoRR abs/2409.05116 (2024)
- [i28] Jozef Coldenhoff, Niclas Granqvist, Milos Cernak: OpenACE: An Open Benchmark for Evaluating Audio Coding Performance. CoRR abs/2409.08374 (2024)
- [i27] Jozef Coldenhoff, Milos Cernak: Semi-intrusive audio evaluation: Casting non-intrusive assessment as a multi-modal text prediction task. CoRR abs/2409.14069 (2024)
- 2023
- [c58] Karl El Hajal, Zihan Wu, Neil Scheidwasser-Clow, Gasser Elbanna, Milos Cernak: Efficient Speech Quality Assessment Using Self-Supervised Framewise Embeddings. ICASSP 2023: 1-5
- [c57] Robert P. Spang, Karl El Hajal, Sebastian Möller, Milos Cernak: Personalized Task Load Prediction in Speech Communication. ICASSP 2023: 1-5
- [c56] Zihan Wu, Neil Scheidwasser-Clow, Karl El Hajal, Milos Cernak: Speaker Embeddings as Individuality Proxy for Voice Stress Detection. INTERSPEECH 2023: 1838-1842
- [c55] Bohan Wang, Damien Ronssin, Milos Cernak: ALO-VC: Any-to-any Low-latency One-shot Voice Conversion. INTERSPEECH 2023: 2073-2077
- [c54] Philipp Schilk, Niccolò Polvani, Andrea Ronco, Milos Cernak, Michele Magno: In-Ear-Voice: Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms. IoTDI 2023: 1-12
- [c53] Philipp Schilk, Niccolò Polvani, Andrea Ronco, Milos Cernak, Michele Magno: Demo Abstract: In-Ear-Voice - Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms. IoTDI 2023: 488-490
- [i26] Robert P. Spang, Karl El Hajal, Sebastian Möller, Milos Cernak: Personalized Task Load Prediction in Speech Communication. CoRR abs/2303.00630 (2023)
- [i25] Bohan Wang, Damien Ronssin, Milos Cernak: ALO-VC: Any-to-any Low-latency One-shot Voice Conversion. CoRR abs/2306.01100 (2023)
- [i24] Zihan Wu, Neil Scheidwasser-Clow, Karl El Hajal, Milos Cernak: Speaker Embeddings as Individuality Proxy for Voice Stress Detection. CoRR abs/2306.05915 (2023)
- [i23] Philipp Schilk, Niccolò Polvani, Andrea Ronco, Milos Cernak, Michele Magno: In-Ear-Voice: Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms. CoRR abs/2309.02393 (2023)
- [i22] Boris Bergsma, Marta Brzezinska, Oleg V. Yazyev, Milos Cernak: Cluster-based pruning techniques for audio data. CoRR abs/2309.11922 (2023)
- [i21] Jozef Coldenhoff, Andrew Harper, Paul Kendrick, Tijana Stojkovic, Milos Cernak: Multi-Channel MOSRA: Mean Opinion Score and Room Acoustics Estimation Using Simulated Data and a Teacher Model. CoRR abs/2309.11976 (2023)
- 2022
- [c52] Neil Scheidwasser-Clow, Mikolaj Kegler, Pierre Beckmann, Milos Cernak: SERAB: A Multi-Lingual Benchmark for Speech Emotion Recognition. ICASSP 2022: 7697-7701
- [c51] Boris Bergsma, Minhao Yang, Milos Cernak: PEAF: Learnable Power Efficient Analog Acoustic Features for Audio Recognition. INTERSPEECH 2022: 381-385
- [c50] Gasser Elbanna, Alice Biryukov, Neil Scheidwasser-Clow, Lara Orlandic, Pablo Mainar, Mikolaj Kegler, Pierre Beckmann, Milos Cernak: Hybrid Handcrafted and Learnable Audio Representation for Analysis of Speech Under Cognitive and Physical Load. INTERSPEECH 2022: 386-390
- [c49] Damien Ronssin, Milos Cernak: Application for Real-time Personalized Speaker Extraction. INTERSPEECH 2022: 1955-1956
- [c48] Karl El Hajal, Milos Cernak, Pablo Mainar: MOSRA: Joint Mean Opinion Score and Room Acoustics Speech Quality Assessment. INTERSPEECH 2022: 3313-3317
- [c47] Alexandru Mocanu, Benjamin Ricaud, Milos Cernak: Fast accuracy estimation of deep learning based multi-class musical source separation. NLDL 2022
- [i20] Gasser Elbanna, Alice Biryukov, Neil Scheidwasser-Clow, Lara Orlandic, Pablo Mainar, Mikolaj Kegler, Pierre Beckmann, Milos Cernak: Hybrid Handcrafted and Learnable Audio Representation for Analysis of Speech Under Cognitive and Physical Load. CoRR abs/2203.16637 (2022)
- [i19] Karl El Hajal, Milos Cernak, Pablo Mainar: MOSRA: Joint Mean Opinion Score and Room Acoustics Speech Quality Assessment. CoRR abs/2204.01345 (2022)
- [i18] Gasser Elbanna, Neil Scheidwasser-Clow, Mikolaj Kegler, Pierre Beckmann, Karl El Hajal, Milos Cernak: BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. CoRR abs/2206.12038 (2022)
- [i17] Karl El Hajal, Zihan Wu, Neil Scheidwasser-Clow, Gasser Elbanna, Milos Cernak: Efficient Speech Quality Assessment using Self-supervised Framewise Embeddings. CoRR abs/2211.06646 (2022)
- [i16] Niccolò Polvani, Damien Ronssin, Milos Cernak: BC-VAD: A Robust Bone Conduction Voice Activity Detection. CoRR abs/2212.02996 (2022)
- 2021
- [c46] Damien Ronssin, Milos Cernak: AC-VC: Non-Parallel Low Latency Phonetic Posteriorgrams Based Voice Conversion. ASRU 2021: 710-716
- [c45] Pierre Beckmann, Mikolaj Kegler, Milos Cernak: Word-Level Embeddings for Cross-Task Transfer Learning in Speech Processing. EUSIPCO 2021: 446-450
- [c44] Gasser Elbanna, Neil Scheidwasser-Clow, Mikolaj Kegler, Pierre Beckmann, Karl El Hajal, Milos Cernak: BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. HEAR@NeurIPS 2021: 25-47
- [c43] Natalia Nessler, Milos Cernak, Paolo Prandoni, Pablo Mainar: Non-Intrusive Speech Quality Assessment with Transfer Learning and Subject-Specific Scaling. Interspeech 2021: 2406-2410
- [c42] Paula Sánchez López, Paul Callens, Milos Cernak: A Universal Deep Room Acoustics Estimator. WASPAA 2021: 356-360
- [i15] Paula Sánchez López, Paul Callens, Milos Cernak: A Universal Deep Room Acoustics Estimator. CoRR abs/2109.14436 (2021)
- [i14] Neil Scheidwasser-Clow, Mikolaj Kegler, Pierre Beckmann, Milos Cernak: SERAB: A multi-lingual benchmark for speech emotion recognition. CoRR abs/2110.03414 (2021)
- [i13] Boris Bergsma, Minhao Yang, Milos Cernak: Power efficient analog features for audio recognition. CoRR abs/2110.03715 (2021)
- [i12] Damien Ronssin, Milos Cernak: AC-VC: Non-parallel Low Latency Phonetic Posteriorgrams Based Voice Conversion. CoRR abs/2111.06601 (2021)
- 2020
- [c41] Oriol Barbany Mayor, Milos Cernak: FastVC: Fast Voice Conversion with non-parallel data. Blizzard Challenge / Voice Conversion Challenge 2020
- [c40] Giorgia Dellaferrera, Flavio Martinelli, Milos Cernak: A Bin Encoding Training of a Spiking Neural Network Based Voice Activity Detection. ICASSP 2020: 3207-3211
- [c39] Flavio Martinelli, Giorgia Dellaferrera, Pablo Mainar, Milos Cernak: Spiking Neural Networks Trained With Backpropagation for Low Power Neuromorphic Implementation of Voice Activity Detection. ICASSP 2020: 8544-8548
- [c38] Mikolaj Kegler, Pierre Beckmann, Milos Cernak: Deep Speech Inpainting of Time-Frequency Masks. INTERSPEECH 2020: 3276-3280
- [i11] Oriol Barbany Mayor, Milos Cernak: FastVC: Fast Voice Conversion with non-parallel data. CoRR abs/2010.04185 (2020)
- [i10] Alexandru Mocanu, Benjamin Ricaud, Milos Cernak: Fast accuracy estimation of deep learning based multi-class musical source separation. CoRR abs/2010.09453 (2020)
- [i9] Paul Callens, Milos Cernak: Joint Blind Room Acoustic Characterization From Speech And Music Signals Using Convolutional Recurrent Neural Networks. CoRR abs/2010.11167 (2020)
2010 – 2019
- 2019
- [c37] Thibault Viglino, Petr Motlícek, Milos Cernak: End-to-End Accented Speech Recognition. INTERSPEECH 2019: 2140-2144
- [c36] Tomás Arias-Vergara, Juan Rafael Orozco-Arroyave, Milos Cernak, Sandra Gollwitzer, Maria Schuster, Elmar Nöth: Phone-Attribute Posteriors to Evaluate the Speech of Cochlear Implant Users. INTERSPEECH 2019: 3108-3112
- [c35] Niccolò Sacchi, Alexandre Nanchen, Martin Jaggi, Milos Cernak: Open-Vocabulary Keyword Spotting with Audio and Text Embeddings. INTERSPEECH 2019: 3362-3366
- [c34] Berkay Inan, Milos Cernak, Helmut Grabner, Helena Peic Tukuljac, Rodrigo C. G. Pena, Benjamin Ricaud: Evaluating Audiovisual Source Separation in the Context of Video Conferencing. INTERSPEECH 2019: 4579-4583
- [p1] Ivan Himawan, Srikanth R. Madikeri, Petr Motlícek, Milos Cernak, Sridha Sridharan, Clinton Fookes: Voice Presentation Attack Detection Using Convolutional Neural Networks. Handbook of Biometric Anti-Spoofing, 2nd Ed. 2019: 391-415
- [i8] Mikolaj Kegler, Pierre Beckmann, Milos Cernak: Deep speech inpainting of time-frequency masks. CoRR abs/1910.09058 (2019)
- [i7] Pierre Beckmann, Mikolaj Kegler, Hugues Saltini, Milos Cernak: Speech-VGG: A deep feature extractor for speech processing. CoRR abs/1910.09909 (2019)
- [i6] Flavio Martinelli, Giorgia Dellaferrera, Pablo Mainar, Milos Cernak: Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection. CoRR abs/1910.09993 (2019)
- [i5] Giorgia Dellaferrera, Flavio Martinelli, Milos Cernak: A Bin Encoding Training of a Spiking Neural Network-based Voice Activity Detection. CoRR abs/1910.12459 (2019)
- 2018
- [j11] Juan Rafael Orozco-Arroyave, Juan Camilo Vásquez-Correa, Jesús Francisco Vargas-Bonilla, Raman Arora, Najim Dehak, Phani S. Nidadavolu, Heidi Christensen, Frank Rudzicz, Maria Yancheva, Hamid R. Chinaei, Alyssa Vann, Nikolai Vogler, Tobias Bocklet, Milos Cernak, Julius Hannink, Elmar Nöth: NeuroSpeech: An open-source software for Parkinson's speech analysis. Digit. Signal Process. 77: 207-221 (2018)
- [j10] Juan Rafael Orozco-Arroyave, Juan Camilo Vásquez-Correa, Jesús Francisco Vargas-Bonilla, Raman Arora, Najim Dehak, Phani S. Nidadavolu, Heidi Christensen, Frank Rudzicz, Maria Yancheva, Hamid R. Chinaei, Alyssa Vann, Nikolai Vogler, Tobias Bocklet, Milos Cernak, Julius Hannink, Elmar Nöth: NeuroSpeech. SoftwareX 8: 69-70 (2018)
- [j9] Milos Cernak, Afsaneh Asaei, Alexandre Hyafil: Cognitive Speech Coding: Examining the Impact of Cognitive Speech Processing on Speech Compression. IEEE Signal Process. Mag. 35(3): 97-109 (2018)
- [c33] Milos Cernak, Sibo Tong: Nasal Speech Sounds Detection Using Connectionist Temporal Classification. ICASSP 2018: 5574-5578
- [c32] Juan Camilo Vásquez-Correa, Nicanor García-Ospina, Juan Rafael Orozco-Arroyave, Milos Cernak, Elmar Nöth: Phonological Posteriors and GRU Recurrent Units to Assess Speech Impairments of Patients with Parkinson's Disease. TSD 2018: 453-461
- [c31] Nicanor García-Ospina, Tomás Arias-Vergara, Juan Camilo Vásquez-Correa, Juan Rafael Orozco-Arroyave, Milos Cernak, Elmar Nöth: Phonological i-Vectors to Detect Parkinson's Disease. TSD 2018: 462-470
- 2017
- [j8] Milos Cernak, Stefan Benus, Alexandros Lazaridis: Speech vocoding for laboratory phonology. Comput. Speech Lang. 42: 100-121 (2017)
- [j7] Milos Cernak, Juan Rafael Orozco-Arroyave, Frank Rudzicz, Heidi Christensen, Juan Camilo Vásquez-Correa, Elmar Nöth: Characterisation of voice quality of Parkinson's disease using differential phonological posterior features. Comput. Speech Lang. 46: 196-208 (2017)
- [j6] Afsaneh Asaei, Milos Cernak, Hervé Bourlard: Perceptual Information Loss due to Impaired Speech Production. IEEE ACM Trans. Audio Speech Lang. Process. 25(12): 2433-2443 (2017)
- [c30] Juan Camilo Vásquez-Correa, Juan Rafael Orozco-Arroyave, Raman Arora, Elmar Nöth, Najim Dehak, Heidi Christensen, Frank Rudzicz, Tobias Bocklet, Milos Cernak, Hamid R. Chinaei, Julius Hannink, Phani Sankar Nidadavolu, Maria Yancheva, Alyssa Vann, Nikolai Vogler: Multi-view representation learning via gcca for multimodal analysis of Parkinson's disease. ICASSP 2017: 2966-2970
- [c29] Milos Cernak, Elmar Nöth, Frank Rudzicz, Heidi Christensen, Juan Rafael Orozco-Arroyave, Raman Arora, Tobias Bocklet, Hamidreza Chinaei, Julius Hannink, Phani Sankar Nidadavolu, Juan Camilo Vásquez-Correa, Maria Yancheva, Alyssa Vann, Nikolai Vogler: On the impact of non-modal phonation on phonological features. ICASSP 2017: 5090-5094
- [c28] Milos Cernak, Alain Komaty, Amir Mohammadi, André Anjos, Sébastien Marcel: Bob Speaks Kaldi. INTERSPEECH 2017: 2030-2031
- 2016
- [j5] Milos Cernak, Afsaneh Asaei, Hervé Bourlard: On structured sparsity of phonological posteriors for linguistic parsing. Speech Commun. 84: 36-45 (2016)
- [j4] Milos Cernak, Alexandros Lazaridis, Afsaneh Asaei, Philip N. Garner: Composition of Deep and Spiking Neural Networks for Very Low Bit Rate Speech Coding. IEEE ACM Trans. Audio Speech Lang. Process. 24(12): 2301-2312 (2016)
- [c27] Tamás Gábor Csapó, Géza Németh, Milos Cernak, Philip N. Garner: Modeling unvoiced sounds in statistical parametric speech synthesis with a continuous vocoder. EUSIPCO 2016: 1338-1342
- [c26] Milos Cernak, Afsaneh Asaei, Pierre-Edouard Honnet, Philip N. Garner, Hervé Bourlard: Sound Pattern Matching for Automatic Prosodic Event Detection. INTERSPEECH 2016: 170-174
- [c25] Milos Cernak, Philip N. Garner: PhonVoc: A Phonetic and Phonological Vocoding Toolkit. INTERSPEECH 2016: 988-992
- [c24] Afsaneh Asaei, Gil Luyet, Milos Cernak, Hervé Bourlard: Phonetic and Phonological Posterior Search Space Hashing Exploiting Class-Specific Sparsity Structures. INTERSPEECH 2016: 1873-1877
- [c23] Alexandros Lazaridis, Milos Cernak, Philip N. Garner: Probabilistic Amplitude Demodulation Features in Speech Synthesis for Improving Prosody. INTERSPEECH 2016: 2298-2302
- [c22] Ramya Rasipuram, Milos Cernak, Mathew Magimai-Doss: HMM-Based Non-Native Accent Assessment Using Posterior Features. INTERSPEECH 2016: 3137-3141
- [c21] Alexandros Lazaridis, Milos Cernak, Pierre-Edouard Honnet, Philip N. Garner: Investigating Spectral Amplitude Modulation Phase Hierarchy Features in Speech Synthesis. SSW 2016: 32-37
- [i4] Sucheta Ghosh, Milos Cernak, Sarbani Palit, B. B. Chaudhuri: An Analysis of Rhythmic Staccato-Vocalization Based on Frequency Demodulation for Laughter Detection in Conversational Meetings. CoRR abs/1601.00833 (2016)
- [i3] Milos Cernak, Afsaneh Asaei, Hervé Bourlard: On Structured Sparsity of Phonological Posteriors for Linguistic Parsing. CoRR abs/1601.05647 (2016)
- [i2] Milos Cernak, Stefan Benus, Alexandros Lazaridis: Speech vocoding for laboratory phonology. CoRR abs/1601.05991 (2016)
- [i1] Milos Cernak, Alexandros Lazaridis, Afsaneh Asaei, Philip N. Garner: Composition of Deep and Spiking Neural Networks for Very Low Bit Rate Speech Coding. CoRR abs/1604.04383 (2016)
- 2015
- [j3] Milos Cernak, Philip N. Garner, Alexandros Lazaridis, Petr Motlícek, Xingyu Na: Incremental Syllable-Context Phonetic Vocoding. IEEE ACM Trans. Audio Speech Lang. Process. 23(6): 1019-1030 (2015)
- [c20] Milos Cernak, Blaise Potard, Philip N. Garner: Phonological vocoding using artificial neural networks. ICASSP 2015: 4844-4848
- [c19] Afsaneh Asaei, Milos Cernak, Hervé Bourlard: On compressibility of neural network phonological features for low bit rate speech coding. INTERSPEECH 2015: 418-422
- [c18] Milos Cernak, Pierre-Edouard Honnet: An empirical model of emphatic word detection. INTERSPEECH 2015: 573-577
- [c17] Ramya Rasipuram, Milos Cernak, Alexandre Nanchen, Mathew Magimai-Doss: Automatic accentedness evaluation of non-native speech using phonetic and sub-phonetic posterior probabilities. INTERSPEECH 2015: 648-652
- [c16] Alexandre Hyafil, Milos Cernak: Neuromorphic based oscillatory device for incremental syllable boundary detection. INTERSPEECH 2015: 1191-1195
- [c15] Tamás Gábor Csapó, Géza Németh, Milos Cernak: Residual-Based Excitation with Continuous F0 Modeling in HMM-Based Speech Synthesis. SLSP 2015: 27-38
- 2014
- [c14] Petr Motlícek, David Imseng, Milos Cernak, Namhoon Kim: Development of bilingual ASR system for MediaParl corpus. INTERSPEECH 2014: 1391-1394
- [c13] Milos Cernak, Alexandros Lazaridis, Philip N. Garner, Petr Motlícek: Stress and accent transmission in HMM-based syllable-context very low bit rate speech coding. INTERSPEECH 2014: 2799-2803
- 2013
- [j2] Philip N. Garner, Milos Cernak, Petr Motlícek: A Simple Continuous Pitch Estimation Algorithm. IEEE Signal Process. Lett. 20(1): 102-105 (2013)
- [c12] Lakshmi Saheer, Milos Cernak: Automatic Staging of Audio with Emotions. ACII 2013: 705-706
- [c11] Milos Cernak, Petr Motlícek, Philip N. Garner: On the (UN)importance of the contextual factors in HMM-based speech synthesis and coding. ICASSP 2013: 8140-8143
- [c10] Milos Cernak, Xingyu Na, Philip N. Garner: Syllable-based pitch encoding for low bit rate speech coding with recognition/synthesis architecture. INTERSPEECH 2013: 3449-3452
- 2012
- [c9] Milos Cernak, David Imseng, Hervé Bourlard: Robust triphone mapping for acoustic modeling. INTERSPEECH 2012: 1910-1913
- [c8] Arthur Kantor, Milos Cernak, Jirí Havelka, Sean Huber, Jan Kleindienst, Doris B. Gonzalez: Reading companion: the technical and social design of an automated reading tutor. WOCCI 2012: 53-59
- 2011
- [c7] Sakhia Darjaa, Milos Cernak, Marián Trnka, Milan Rusko, Róbert Sabo: Effective Triphone Mapping for Acoustic Modeling in Speech Recognition. INTERSPEECH 2011: 1717-1720
- [c6] Sakhia Darjaa, Milos Cernak, Stefan Benus, Milan Rusko, Róbert Sabo, Marián Trnka: Rule-Based Triphone Mapping for Acoustic Modeling in Automatic Speech Recognition. TSD 2011: 268-275
- 2010
- [j1] Milos Cernak: A Comparison of Decision Tree Classifiers for Automatic Diagnosis of Speech Recognition Errors. Comput. Informatics 29(3): 489-501 (2010)
- [c5] Milos Cernak: Diagnostics for Debugging Speech Recognition Systems. TSD 2010: 251-258
2000 – 2009
- 2006
- [c4] Milos Cernak, Christian Wellekens: Diagnostics of speech recognition using classification phoneme diagnostic trees. Computational Intelligence 2006: 343-348
- [c3] Milos Cernak: Unit Selection Speech Synthesis in Noise. ICASSP (1) 2006: 761-764
- 2005
- [c2] Thierry Dutoit, Milos Cernak: TTSBOX: a MATLAB toolbox for teaching text-to-speech synthesis. ICASSP (5) 2005: 537-540
- 2004
- [c1] Milan Rusko, Marián Trnka, Sachia Darzágín, Milos Cernak: Slovak Speech Database for Experiments and Application Building in Unit-Selection Speech Synthesis. TSD 2004: 457-464