Yuta Nakashima
2020 – today
- 2024
- [j40]Bowen Wang, Jiaxin Zhang, Ran Zhang, Yunqin Li, Liangzhi Li, Yuta Nakashima:
Improving facade parsing with vision transformers and line integration. Adv. Eng. Informatics 60: 102463 (2024) - [j39]Tianwei Chen, Noa Garcia, Liangzhi Li, Yuta Nakashima:
Exploring Emotional Stimuli Detection in Artworks: A Benchmark Dataset and Baselines Evaluation. J. Imaging 10(6): 136 (2024) - [j38]Yankun Wu, Yuta Nakashima, Noa Garcia:
GOYA: Leveraging Generative Art for Content-Style Disentanglement. J. Imaging 10(7): 156 (2024) - [c93]Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima:
Would Deep Generative Models Amplify Bias in Future Models? CVPR 2024: 10833-10843 - [c92]Warren Leu, Yuta Nakashima, Noa Garcia:
Auditing Image-based NSFW Classifiers for Content Filtering. FAccT 2024: 1163-1173 - [c91]Tianwei Chen, Noa Garcia, Liangzhi Li, Yuta Nakashima:
Retrieving Emotional Stimuli in Artworks. ICMR 2024: 515-523 - [c90]Yankun Wu, Yuta Nakashima, Noa Garcia, Sheng Li, Zhaoyang Zeng:
Reproducibility Companion Paper: Stable Diffusion for Content-Style Disentanglement in Art Analysis. ICMR 2024: 1228-1231 - [c89]Liyun Zhang, Zhaojie Luo, Shuqiong Wu, Yuta Nakashima:
MicroEmo: Time-Sensitive Multimodal Emotion Recognition with Subtle Clue Dynamics in Video Dialogues. MRAC@MM 2024: 110-115 - [c88]Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara:
Revisiting Pixel-Level Contrastive Pre-Training on Scene Images. WACV 2024: 1773-1782 - [c87]Jiahao Zhang, Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara:
Instruct Me More! Random Prompting for Visual In-Context Learning. WACV 2024: 2585-2594 - [i60]Tianwei Chen, Yusuke Hirota, Mayu Otani, Noa Garcia, Yuta Nakashima:
Would Deep Generative Models Amplify Bias in Future Models? CoRR abs/2404.03242 (2024) - [i59]Wanqing Zhao, Yuta Nakashima, Haiyuan Chen, Noboru Babaguchi:
Enhancing Fake News Detection in Social Media via Label Propagation on Cross-modal Tweet Graph. CoRR abs/2406.09884 (2024) - [i58]Yusuke Hirota, Ryo Hachiuma, Chao-Han Huck Yang, Yuta Nakashima:
From Descriptive Richness to Bias: Unveiling the Dark Side of Generative Image Caption Enrichment. CoRR abs/2406.13912 (2024) - [i57]Yusuke Hirota, Jerone T. A. Andrews, Dora Zhao, Orestis Papakyriakopoulos, Apostolos Modas, Yuta Nakashima, Alice Xiang:
Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes. CoRR abs/2407.03623 (2024) - [i56]Bowen Wang, Liangzhi Li, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara:
Explainable Image Recognition via Enhanced Slot-attention Based Classifier. CoRR abs/2407.05616 (2024) - [i55]Bowen Wang, Jiuyang Chang, Yiming Qian, Guoxin Chen, Junhao Chen, Zhouqiang Jiang, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara:
DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models. CoRR abs/2408.01933 (2024) - [i54]Yusuke Hirota, Min-Hung Chen, Chien-Yi Wang, Yuta Nakashima, Yu-Chiang Frank Wang, Ryo Hachiuma:
SANER: Annotation-free Societal Attribute Neutralizer for Debiasing CLIP. CoRR abs/2408.10202 (2024) - [i53]Junhao Chen, Bowen Wang, Zhouqiang Jiang, Yuta Nakashima:
Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter. CoRR abs/2408.10573 (2024) - [i52]Yankun Wu, Yuta Nakashima, Noa Garcia:
Gender Bias Evaluation in Text-to-image Generation: A Survey. CoRR abs/2408.11358 (2024)
- 2023
- [j37]Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Match them up: visually explainable few-shot image classification. Appl. Intell. 53(9): 10956-10977 (2023) - [j36]Shoichiro Fujisawa, Katsuya Sato, Kazuyuki Minami, Kazuaki Nagayama, Ryo Sudo, Hiromi Miyoshi, Yuta Nakashima, Kennedy Omondi Okeyo, Tasuku Nakahara:
Special Issue on Bio-MEMS. J. Robotics Mechatronics 35(5): 1121-1122 (2023) - [j35]Haruhiko Takemoto, Keito Sonoda, Kanae Ike, Yoichi Saito, Yoshitaka Nakanishi, Yuta Nakashima:
Development of Cell Micropatterning Technique Using Laser Processing of Alginate Gel. J. Robotics Mechatronics 35(5): 1185-1192 (2023) - [j34]Yuta Kishimoto, Sachiko Ide, Toyohiro Naito, Yuta Nakashima, Yoshitaka Nakanishi, Noritada Kaji:
Development of a Microfluidic Ion Current Measurement System for Single-Microplastic Detection. J. Robotics Mechatronics 35(5): 1193-1202 (2023) - [j33]Bowen Wang, Liangzhi Li, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Real-time estimation of the remaining surgery duration for cataract surgery using deep convolutional neural networks and long short-term memory. BMC Medical Informatics Decis. Mak. 23(1): 80 (2023) - [j32]Zekun Yang, Yuta Nakashima, Haruo Takemura:
Multi-modal humor segment prediction in video. Multim. Syst. 29(4): 2389-2398 (2023) - [j31]Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi:
ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation. Proc. ACM Comput. Graph. Interact. Tech. 6(3): 35:1-35:17 (2023) - [c86]Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima:
Uncurated Image-Text Datasets: Shedding Light on Demographic Bias. CVPR 2023: 6957-6966 - [c85]Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara:
Learning Bottleneck Concepts in Image Classification. CVPR 2023: 10962-10971 - [c84]Mayu Otani, Riku Togashi, Yu Sawai, Ryosuke Ishigami, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Shin'ichi Satoh:
Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation. CVPR 2023: 14277-14286 - [c83]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Model-Agnostic Gender Debiased Image Captioning. CVPR 2023: 15191-15200 - [c82]Yankun Wu, Yuta Nakashima, Noa Garcia:
Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis. ICMR 2023: 199-208 - [c81]Guillaume Habault, Minh-Son Dao, Michael Alexander Riegler, Duc-Tien Dang-Nguyen, Yuta Nakashima, Cathal Gurrin:
ICDAR'23: Intelligent Cross-Data Analysis and Retrieval. ICMR 2023: 674-675 - [c80]Wanqing Zhao, Yuta Nakashima, Haiyuan Chen, Noboru Babaguchi:
Enhancing Fake News Detection in Social Media via Label Propagation on Cross-modal Tweet Graph. ACM Multimedia 2023: 2400-2408 - [c79]Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara:
Contrastive Losses Are Natural Criteria for Unsupervised Video Summarization. WACV 2023: 2009-2018 - [e1]Guillaume Habault, Minh-Son Dao, Michael Alexander Riegler, Duc-Tien Dang-Nguyen, Yuta Nakashima, Cathal Gurrin:
Proceedings of the 4th ACM Workshop on Intelligent Cross-Data Analysis and Retrieval, ICDAR 2023, Thessaloniki, Greece, June 12-15, 2023. ACM 2023 [contents] - [i51]Hugo Lemarchant, Liangzi Li, Yiming Qian, Yuta Nakashima, Hajime Nagahara:
Inference Time Evidences of Adversarial Attacks for Forensic on Transformers. CoRR abs/2301.13356 (2023) - [i50]Mayu Otani, Riku Togashi, Yu Sawai, Ryosuke Ishigami, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Shin'ichi Satoh:
Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation. CoRR abs/2304.01816 (2023) - [i49]Noa Garcia, Yusuke Hirota, Yankun Wu, Yuta Nakashima:
Uncurated Image-Text Datasets: Shedding Light on Demographic Bias. CoRR abs/2304.02828 (2023) - [i48]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Model-Agnostic Gender Debiased Image Captioning. CoRR abs/2304.03693 (2023) - [i47]Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara:
Learning Bottleneck Concepts in Image Classification. CoRR abs/2304.10131 (2023) - [i46]Yankun Wu, Yuta Nakashima, Noa Garcia:
Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis. CoRR abs/2304.10278 (2023) - [i45]Bowen Wang, Jiaxing Zhang, Ran Zhang, Yunqin Li, Liangzhi Li, Yuta Nakashima:
Improving Facade Parsing with Vision Transformers and Line Integration. CoRR abs/2309.15523 (2023) - [i44]Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi:
ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation. CoRR abs/2309.16162 (2023) - [i43]Jiahao Zhang, Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara:
Instruct Me More! Random Prompting for Visual In-Context Learning. CoRR abs/2311.03648 (2023) - [i42]Amelia Katirai, Noa Garcia, Kazuki Ide, Yuta Nakashima, Atsuo Kishimoto:
Situating the social issues of image generation models in the model life cycle: a sociotechnical approach. CoRR abs/2311.18345 (2023) - [i41]Yankun Wu, Yuta Nakashima, Noa Garcia:
Stable Diffusion Exposed: Gender Bias from Prompt to Image. CoRR abs/2312.03027 (2023)
- 2022
- [j30]Chenhui Chu, Vinícius Oliveira, Felix Giovanni Virgo, Mayu Otani, Noa Garcia, Yuta Nakashima:
The semantic typology of visually grounded paraphrases. Comput. Vis. Image Underst. 215: 103333 (2022) - [j29]Sudhakar Kumawat, Manisha Verma, Yuta Nakashima, Shanmuganathan Raman:
Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 44(9): 4839-4851 (2022) - [j28]Felix Giovanni Virgo, Chenhui Chu, Takaya Ogawa, Koji Tanaka, Kazuki Ashihara, Yuta Nakashima, Noriko Takemura, Hajime Nagahara, Takao Fujikawa:
Information Extraction from Public Meeting Articles. SN Comput. Sci. 3(4): 285 (2022) - [j27]Koji Tanaka, Chenhui Chu, Tomoyuki Kajiwara, Yuta Nakashima, Noriko Takemura, Hajime Nagahara, Takao Fujikawa:
Corpus Construction for Historical Newspapers: A Case Study on Public Meeting Corpus Construction Using OCR Error Correction. SN Comput. Sci. 3(6): 489 (2022) - [c78]Manisha Verma, Yuta Nakashima, Noriko Takemura, Hajime Nagahara:
Multi-label Disengagement and Behavior Prediction in Online Learning. AIED (1) 2022: 633-639 - [c77]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Quantifying Societal Bias Amplification in Image Captioning. CVPR 2022: 13440-13449 - [c76]Riku Togashi, Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Tetsuya Sakai:
AxIoU: An Axiomatically Justified Measure for Video Moment Retrieval. CVPR 2022: 21044-21053 - [c75]Mayu Otani, Riku Togashi, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Shin'ichi Satoh:
Optimal Correction Cost for Object Detection Evaluation. CVPR 2022: 21075-21083 - [c74]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Gender and Racial Bias in Visual Question Answering Datasets. FAccT 2022: 1280-1292 - [c73]Haruya Suzuki, Sora Tarumoto, Tomoyuki Kajiwara, Takashi Ninomiya, Yuta Nakashima, Hajime Nagahara:
Emotional Intensity Estimation based on Writer's Personality. AACL/IJCNLP 2022 (Student Research Workshop) 2022: 1-7 - [c72]Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi:
Deep Gesture Generation for Social Robots Using Type-Specific Libraries. IROS 2022: 8286-8291 - [c71]Haruya Suzuki, Yuto Miyauchi, Kazuki Akiyama, Tomoyuki Kajiwara, Takashi Ninomiya, Noriko Takemura, Yuta Nakashima, Hajime Nagahara:
A Japanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain. LREC 2022: 7022-7028 - [c70]Anh-Khoa Vo, Yuta Nakashima:
Tone Classification for Political Advertising Video using Multimodal Cues. ICDAR@ICMR 2022: 17-21 - [c69]Minh-Son Dao, Michael Alexander Riegler, Duc-Tien Dang-Nguyen, Cathal Gurrin, Yuta Nakashima, Mianxiong Dong:
ICDAR'22: Intelligent Cross-Data Analysis and Retrieval. ICMR 2022: 690-691 - [c68]Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, David Baumert, Hiroshi Kawasaki, Katsushi Ikeuchi:
Integration of Gesture Generation System Using Gesture Library with DIY Robot Design Kit. SII 2022: 361-366 - [i40]Mayu Otani, Riku Togashi, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Shin'ichi Satoh:
Optimal Correction Cost for Object Detection Evaluation. CoRR abs/2203.14438 (2022) - [i39]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Quantifying Societal Bias Amplification in Image Captioning. CoRR abs/2203.15395 (2022) - [i38]Riku Togashi, Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Tetsuya Sakai:
AxIoU: An Axiomatically Justified Measure for Video Moment Retrieval. CoRR abs/2203.16062 (2022) - [i37]Yusuke Hirota, Yuta Nakashima, Noa Garcia:
Gender and Racial Bias in Visual Question Answering Datasets. CoRR abs/2205.08148 (2022) - [i36]Tianwei Chen, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Hajime Nagahara:
Learning More May Not Be Better: Knowledge Transferability in Vision and Language Tasks. CoRR abs/2208.10758 (2022) - [i35]Hitoshi Teshima, Naoki Wake, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi:
Deep Gesture Generation for Social Robots Using Type-Specific Libraries. CoRR abs/2210.06790 (2022) - [i34]Zongshang Pang, Yuta Nakashima, Mayu Otani, Hajime Nagahara:
Contrastive Losses Are Natural Criteria for Unsupervised Video Summarization. CoRR abs/2211.10056 (2022)
- 2021
- [j26]Wenjian Dong, Mayu Otani, Noa Garcia, Yuta Nakashima, Chenhui Chu:
Cross-Lingual Visual Grounding. IEEE Access 9: 349-358 (2021) - [j25]Bowen Wang, Liangzhi Li, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara, Yasushi Yagi:
Noisy-LSTM: Improving Temporal Awareness for Video Semantic Segmentation. IEEE Access 9: 46810-46820 (2021) - [j24]Noboru Babaguchi, Isao Echizen, Junichi Yamagishi, Naoko Nitta, Yuta Nakashima, Kazuaki Nakamura, Kazuhiro Kono, Fuming Fang, Seiko Myojin, Zhenzhong Kuang, Huy H. Nguyen, Ngoc-Dung T. Tieu:
Preventing Fake Information Generation Against Media Clone Attacks. IEICE Trans. Inf. Syst. 104-D(1): 2-11 (2021) - [j23]Isao Echizen, Noboru Babaguchi, Junichi Yamagishi, Naoko Nitta, Yuta Nakashima, Kazuaki Nakamura, Kazuhiro Kono, Fuming Fang, Seiko Myojin, Zhenzhong Kuang, Huy H. Nguyen, Ngoc-Dung T. Tieu:
Generation and Detection of Media Clones. IEICE Trans. Inf. Syst. 104-D(1): 12-23 (2021) - [j22]Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, Haruo Takemura:
A comparative study of language transformers for video question answering. Neurocomputing 445: 121-133 (2021) - [c67]Jules Samaran, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima:
Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers. ACL (student) 2021: 81-86 - [c66]Tianran Wu, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Haruo Takemura:
Transferring Domain-Agnostic Knowledge in Video Question Answering. BMVC 2021: 301 - [c65]Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
MTUNet: Few-Shot Image Classification With Visual Explanations. CVPR Workshops 2021: 2294-2298 - [c64]Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition. ICCV 2021: 1026-1035 - [c63]Zechen Bai, Yuta Nakashima, Noa Garcia:
Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation. ICCV 2021: 5402-5412 - [c62]Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye:
Visual Question Answering with Textual Representations for Images. ICCVW 2021: 3147-3150 - [c61]Manisha Verma, Yuta Nakashima, Hirokazu Kobori, Ryota Takaoka, Noriko Takemura, Tsukasa Kimura, Hajime Nagahara, Masayuki Numao, Kazumitsu Shinohara:
Learners' Efficiency Prediction Using Facial Behavior Analysis. ICIP 2021: 1084-1088 - [c60]Akihiko Sayo, Diego Thomas, Hiroshi Kawasaki, Yuta Nakashima, Katsushi Ikeuchi:
PoseRN: A 2D Pose Refinement Network For Bias-Free Multi-View 3D Human Pose Estimation. ICIP 2021: 3233-3237 - [c59]Yoshiyuki Shoji, Kenro Aihara, Noriko Kando, Yuta Nakashima, Hiroaki Ohshima, Shio Takidaira, Masaki Ueta, Takehiro Yamamoto, Yusuke Yamamoto:
Museum Experience into a Souvenir: Generating Memorable Postcards from Guide Device Behavior Log. JCDL 2021: 120-129 - [c58]Cheikh Brahim El Vaigh, Noa Garcia, Benjamin Renoust, Chenhui Chu, Yuta Nakashima, Hajime Nagahara:
GCNBoost: Artwork Classification by Label Propagation through a Knowledge Graph. ICMR 2021: 92-100 - [c57]Bowen Wang, Liangzhi Li, Yuta Nakashima, Takehiro Yamamoto, Hiroaki Ohshima, Yoshiyuki Shoji, Kenro Aihara, Noriko Kando:
Image Retrieval by Hierarchy-aware Deep Hashing Based on Multi-task Learning. ICMR 2021: 486-490 - [c56]Yiming Qian, Cheikh Brahim El Vaigh, Yuta Nakashima, Benjamin Renoust, Hajime Nagahara, Yutaka Fujioka:
Built Year Prediction from Buddha Face with Heterogeneous Labels. SUMAC @ ACM Multimedia 2021: 5-12 - [c55]Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara:
WRIME: A New Dataset for Emotional Intensity Estimation with Subjective and Objective Annotations. NAACL-HLT 2021: 2095-2104 - [c54]Yuta Kayatani, Zekun Yang, Mayu Otani, Noa Garcia, Chenhui Chu, Yuta Nakashima, Haruo Takemura:
The Laughing Machine: Predicting Humor in Video. WACV 2021: 2072-2081 - [i33]Vinay Damodaran, Sharanya Chakravarthy, Akshay Kumar, Anjana Umapathy, Teruko Mitamura, Yuta Nakashima, Noa Garcia, Chenhui Chu:
Understanding the Role of Scene Graphs in Visual Question Answering. CoRR abs/2101.05479 (2021) - [i32]Kiichi Goto, Taikan Suehara, Tamaki Yoshioka, Masakazu Kurata, Hajime Nagahara, Yuta Nakashima, Noriko Takemura, Masako Iwasaki:
Development of a Vertex Finding Algorithm using Recurrent Neural Network. CoRR abs/2101.11906 (2021) - [i31]Cheikh Brahim El Vaigh, Noa Garcia, Benjamin Renoust, Chenhui Chu, Yuta Nakashima, Hajime Nagahara:
GCNBoost: Artwork Classification by Label Propagation through a Knowledge Graph. CoRR abs/2105.11852 (2021) - [i30]Yusuke Hirota, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Ittetsu Taniguchi, Takao Onoye:
A Picture May Be Worth a Hundred Words for Visual Question Answering. CoRR abs/2106.13445 (2021) - [i29]Akihiko Sayo, Diego Thomas, Hiroshi Kawasaki, Yuta Nakashima, Katsushi Ikeuchi:
PoseRN: A 2D pose refinement network for bias-free multi-view 3D human pose estimation. CoRR abs/2107.03000 (2021) - [i28]Yiming Qian, Cheikh Brahim El Vaigh, Yuta Nakashima, Benjamin Renoust, Hajime Nagahara, Yutaka Fujioka:
Built Year Prediction from Buddha Face with Heterogeneous Labels. CoRR abs/2109.00812 (2021) - [i27]Zechen Bai, Yuta Nakashima, Noa Garcia:
Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation. CoRR abs/2109.05743 (2021) - [i26]Tianran Wu, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima, Haruo Takemura:
Transferring Domain-Agnostic Knowledge in Video Question Answering. CoRR abs/2110.13395 (2021)
- 2020
- [j21]Kazuki Ashihara, Cheikh Brahim El Vaigh, Chenhui Chu, Benjamin Renoust, Noriko Okubo, Noriko Takemura, Yuta Nakashima, Hajime Nagahara:
Improving topic modeling through homophily for legal documents. Appl. Netw. Sci. 5(1): 77 (2020) - [j20]Noa Garcia, Benjamin Renoust, Yuta Nakashima:
ContextNet: representation and exploration for painting classification and retrieval in context. Int. J. Multim. Inf. Retr. 9(1): 17-30 (2020) - [j19]Mayu Otani, Chenhui Chu, Yuta Nakashima:
Visually grounded paraphrase identification via gating and phrase localization. Neurocomputing 404: 165-172 (2020) - [c53]Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima:
KnowIT VQA: Answering Knowledge-Based Questions about Videos. AAAI 2020: 10826-10834 - [c52]Sora Ohashi, Tomoyuki Kajiwara, Chenhui Chu, Noriko Takemura, Yuta Nakashima, Hajime Nagahara:
IDSOU at WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets. W-NUT@EMNLP 2020: 428-433 - [c51]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä:
Uncovering Hidden Challenges in Query-Based Video Moment Retrieval. BMVC 2020 - [c50]Manisha Verma, Sudhakar Kumawat, Yuta Nakashima, Shanmuganathan Raman:
Yoga-82: A New Dataset for Fine-grained Classification of Human Poses. CVPR Workshops 2020: 4472-4479 - [c49]Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, Teruko Mitamura:
A Dataset and Baselines for Visual Question Answering on Art. ECCV Workshops (2) 2020: 92-108 - [c48]Nikolai Huckle, Noa Garcia, Yuta Nakashima:
Demographic Influences on Contemporary Art with Unsupervised Style Embeddings. ECCV Workshops (2) 2020: 126-142 - [c47]Noa Garcia, Yuta Nakashima:
Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions. ECCV (18) 2020: 581-598 - [c46]Koji Tanaka, Chenhui Chu, Haolin Ren, Benjamin Renoust, Yuta Nakashima, Noriko Takemura, Hajime Nagahara, Takao Fujikawa:
Constructing a Public Meeting Corpus. LREC 2020: 1934-1940 - [c45]Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Joint Learning of Vessel Segmentation and Artery/Vein Classification with Post-processing. MIDL 2020: 440-453 - [c44]Zhiqiang Guo, Huigui Liu, Zhenzhong Kuang, Yuta Nakashima, Noboru Babaguchi:
Privacy Sensitive Large-Margin Model for Face De-Identification. NCAA 2020: 488-501 - [c43]Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, Haruo Takemura:
BERT Representations for Video Question Answering. WACV 2020: 1545-1554 - [c42]Liangzhi Li, Manisha Verma, Yuta Nakashima, Hajime Nagahara, Ryo Kawasaki:
IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks. WACV 2020: 3645-3654 - [i25]Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima:
Knowledge-Based Visual Question Answering in Videos. CoRR abs/2004.08385 (2020) - [i24]Manisha Verma, Sudhakar Kumawat, Yuta Nakashima, Shanmuganathan Raman:
Yoga-82: A New Dataset for Fine-grained Classification of Human Poses. CoRR abs/2004.10362 (2020) - [i23]Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Joint Learning of Vessel Segmentation and Artery/Vein Classification with Post-processing. CoRR abs/2005.13337 (2020) - [i22]Noa Garcia, Yuta Nakashima:
Knowledge-Based Video Question Answering with Unsupervised Scene Descriptions. CoRR abs/2007.08751 (2020) - [i21]Sudhakar Kumawat, Manisha Verma, Yuta Nakashima, Shanmuganathan Raman:
Depthwise Spatio-Temporal STFT Convolutional Neural Networks for Human Action Recognition. CoRR abs/2007.11365 (2020) - [i20]Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui Chu, Yuta Nakashima, Teruko Mitamura:
A Dataset and Baselines for Visual Question Answering on Art. CoRR abs/2008.12520 (2020) - [i19]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä:
Uncovering Hidden Challenges in Query-Based Video Moment Retrieval. CoRR abs/2009.00325 (2020) - [i18]Liangzhi Li, Bowen Wang, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
SCOUTER: Slot Attention-based Classifier for Explainable Image Recognition. CoRR abs/2009.06138 (2020) - [i17]Nikolai Huckle, Noa Garcia, Yuta Nakashima:
Demographic Influences on Contemporary Art with Unsupervised Style Embeddings. CoRR abs/2009.14545 (2020) - [i16]Chenhui Chu, Yuto Takebayashi, Mishra Vipul, Yuta Nakashima:
Constructing a Visual Relationship Authenticity Dataset. CoRR abs/2010.05185 (2020) - [i15]Bowen Wang, Liangzhi Li, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara, Yasushi Yagi:
Noisy-LSTM: Improving Temporal Awareness for Video Semantic Segmentation. CoRR abs/2010.09466 (2020) - [i14]Liangzhi Li, Manisha Verma, Bowen Wang, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Grading the Severity of Arteriolosclerosis from Retinal Arterio-venous Crossing Patterns. CoRR abs/2011.03772 (2020) - [i13]Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara:
Match Them Up: Visually Explainable Few-shot Image Classification. CoRR abs/2011.12527 (2020)
2010 – 2019
- 2019
- [c41]Kazuki Ashihara, Chenhui Chu, Benjamin Renoust, Noriko Okubo, Noriko Takemura, Yuta Nakashima, Hajime Nagahara:
Legal Information as a Complex Network: Improving Topic Modeling Through Homophily. COMPLEX NETWORKS (2) 2019: 28-39 - [c40]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä:
Rethinking the Evaluation of Video Summaries. CVPR 2019: 7596-7604 - [c39]Manisha Verma, Hirokazu Kobori, Yuta Nakashima, Noriko Takemura, Hajime Nagahara:
Facial Expression Recognition with Skip-Connection to Leverage Low-Level Features. ICIP 2019: 51-55 - [c38]Noa Garcia, Benjamin Renoust, Yuta Nakashima:
Context-Aware Embeddings for Automatic Art Analysis. ICMR 2019: 25-33 - [c37]Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Noa Garcia, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka:
Historical and Modern Features for Buddha Statue Classification. SUMAC @ ACM Multimedia 2019: 23-30 - [c36]Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka:
BUDA.ART: A Multimodal Content Based Analysis and Retrieval System for Buddha Statues. ACM Multimedia 2019: 1062-1064 - [c35]Takahiro Yamaguchi, Hajime Nagahara, Ken'ichi Morooka, Yuta Nakashima, Yuki Uranishi, Shoko Miyauchi, Ryo Kurazume:
3D Image Reconstruction from Multi-focus Microscopic Images. PSIVT Workshops 2019: 73-85 - [c34]Akihiko Sayo, Hayato Onizuka, Diego Thomas, Yuta Nakashima, Hiroshi Kawasaki, Katsushi Ikeuchi:
Human Shape Reconstruction with Loose Clothes from Partially Observed Data by Pose Specific Deformation. PSIVT 2019: 225-239 - [i12]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä:
Rethinking the Evaluation of Video Summaries. CoRR abs/1903.11328 (2019) - [i11]Noa Garcia, Benjamin Renoust, Yuta Nakashima:
Context-Aware Embeddings for Automatic Art Analysis. CoRR abs/1904.04985 (2019) - [i10]Noa Garcia, Benjamin Renoust, Yuta Nakashima:
Understanding Art through Multi-Modal Retrieval in Paintings. CoRR abs/1904.10615 (2019) - [i9]Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka:
Historical and Modern Features for Buddha Statue Classification. CoRR abs/1909.12921 (2019) - [i8]Benjamin Renoust, Matheus Oliveira Franca, Jacob Chan, Van Le, Ayaka Uesaka, Yuta Nakashima, Hajime Nagahara, Jueren Wang, Yutaka Fujioka:
BUDA.ART: A Multimodal Content-Based Analysis and Retrieval System for Buddha Statues. CoRR abs/1909.12932 (2019) - [i7]Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima:
KnowIT VQA: Answering Knowledge-Based Questions about Videos. CoRR abs/1910.10706 (2019) - [i6]Liangzhi Li, Manisha Verma, Yuta Nakashima, Hajime Nagahara, Ryo Kawasaki:
IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks. CoRR abs/1912.05763 (2019)
- 2018
- [j18]Mayu Otani, Atsushi Nishida, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Finding Important People in a Video Using Deep Neural Networks with Conditional Random Fields. IEICE Trans. Inf. Syst. 101-D(10): 2509-2517 (2018) - [j17]Takahiro Tanaka, Norihiko Kawai, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Iterative applications of image completion with CNN-based failure detection. J. Vis. Commun. Image Represent. 55: 56-66 (2018) - [j16]Antonio Tejero-de-Pablos, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya, Marko Linna, Esa Rahtu:
Summarization of User-Generated Sports Video by Using Deep Action Recognition Features. IEEE Trans. Multim. 20(8): 2000-2011 (2018) - [c33]Chenhui Chu, Mayu Otani, Yuta Nakashima:
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image. COLING 2018: 3479-3492 - [c32]Ryosuke Kimura, Akihiko Sayo, Fabian Lorenzo Dayrit, Yuta Nakashima, Hiroshi Kawasaki, Ambrosio Blanco, Katsushi Ikeuchi:
Representing a Partially Observed Non-Rigid 3D Human Using Eigen-Texture and Eigen-Deformation. ICPR 2018: 1043-1048 - [i5]Chenhui Chu, Mayu Otani, Yuta Nakashima:
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image. CoRR abs/1806.04284 (2018) - [i4]Ryosuke Kimura, Akihiko Sayo, Fabian Lorenzo Dayrit, Yuta Nakashima, Hiroshi Kawasaki, Ambrosio Blanco, Katsushi Ikeuchi:
Representing a Partially Observed Non-Rigid 3D Human Using Eigen-Texture and Eigen-Deformation. CoRR abs/1807.02632 (2018)
- 2017
- [j15]Fabian Lorenzo Dayrit, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Increasing pose comprehension through augmented reality reenactment. Multim. Tools Appl. 76(1): 1291-1312 (2017) - [j14]Mayu Otani, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Video summarization using textual descriptions for authoring video blogs. Multim. Tools Appl. 76(9): 12097-12115 (2017) - [j13]Norihiko Kawai, Tomokazu Sato, Yuta Nakashima, Naokazu Yokoya:
Augmented Reality Marker Hiding with Texture Deformation. IEEE Trans. Vis. Comput. Graph. 23(10): 2288-2300 (2017) - [c31]Yuta Nakashima, Fumio Okura, Norihiko Kawai, Ryosuke Kimura, Hiroshi Kawasaki, Katsushi Ikeuchi, Ambrosio Blanco:
Realtime Novel View Synthesis with Eigen-Texture Regression. BMVC 2017 - [c30]Thiwat Rongsirigul, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Novel view synthesis with light-weight view-dependent texture mapping for a stereoscopic HMD. ICME 2017: 703-708 - [c29]Kenshiro Nakatake, Yuta Nakashima, Ryo Iwamoto, Ayase Tashima, Yusuke Kitamura, Keiichiro Yasuda, Masaaki Iwatsuki, Hideo Baba, Toshihiro Ihara, Yoshitaka Nakanishi:
Fabrication of three-dimensional deformable microfilter for capturing target cells. MHS 2017: 1-3 - [c28]Koki Yamasaki, Yuta Nakashima, Tairo Yokokura, Yoshitaka Nakanishi:
Calcium signaling response of osteoblastic cells received with compressive stimuli. MHS 2017: 1-4 - [c27]Fabian Lorenzo Dayrit, Ryosuke Kimura, Yuta Nakashima, Ambrosio Blanco, Hiroshi Kawasaki, Katsushi Ikeuchi, Tomokazu Sato, Naokazu Yokoya:
ReMagicMirror: Action Learning Using Human Reenactment with the Mirror Metaphor. MMM (1) 2017: 303-315 - [i3]Antonio Tejero-de-Pablos, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya, Marko Linna, Esa Rahtu:
Summarization of User-Generated Sports Video by Using Deep Action Recognition Features. CoRR abs/1709.08421 (2017)
- 2016
- [j12]Antonio Tejero-de-Pablos, Yuta Nakashima, Naokazu Yokoya, Francisco Javier Díaz Pernas, Mario Martínez-Zarzuela:
Flexible human action recognition in depth video sequences using masked joint trajectories. EURASIP J. Image Video Process. 2016: 20 (2016) - [j11]Yuta Nakashima, Tomoaki Ikeno, Noboru Babaguchi:
Evaluating Protection Capability for Visual Privacy Information. IEEE Secur. Priv. 14(1): 55-61 (2016) - [j10]Yuta Nakashima, Noboru Babaguchi, Jianping Fan:
Privacy Protection for Social Video via Background Estimation and CRF-Based Videographer's Intention Modeling. IEICE Trans. Inf. Syst. 99-D(4): 1221-1233 (2016) - [c26]Hikari Takehara, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
3D shape template generation from RGB-D images capturing a moving and deforming object. 3D Image Processing, Measurement (3DIPM), and Applications 2016: 1-7 - [c25]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Naokazu Yokoya:
Video Summarization Using Deep Semantic Features. ACCV (5) 2016: 361-377 - [c24]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Naokazu Yokoya:
Learning Joint Representations of Videos and Sentences with Web Image Search. ECCV Workshops (1) 2016: 651-667 - [c23]Antonio Tejero-de-Pablos, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Human action recognition-based video summarization for RGB-D personal sports video. ICME 2016: 1-6 - [i2]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Naokazu Yokoya:
Learning Joint Representations of Videos and Sentences with Web Image Search. CoRR abs/1608.02367 (2016) - [i1]Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, Naokazu Yokoya:
Video Summarization using Deep Semantic Features. CoRR abs/1609.08758 (2016)
- 2015
- [j9]Noboru Babaguchi, Yuta Nakashima:
Protection and Utilization of Privacy Information via Sensing. IEICE Trans. Inf. Syst. 98-D(1): 2-9 (2015) - [j8]Yuta Nakashima, Yusuke Uno, Norihiko Kawai, Tomokazu Sato, Naokazu Yokoya:
AR image generation using view-dependent geometry modification and texture mapping. Virtual Real. 19(2): 83-94 (2015) - [c22]Yuta Nakashima, Tatsuya Koyama, Naokazu Yokoya, Noboru Babaguchi:
Facial expression preserving privacy protection using image melding. ICME 2015: 1-6 - [c21]Mayu Otani, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Textual description-based video summarization for video blogs. ICME 2015: 1-6 - [c20]Norihiko Kawai, Tomokazu Sato, Yuta Nakashima, Naokazu Yokoya:
AR Marker Hiding with Real-Time Texture Deformation. ISMAR Workshops 2015: 26-31 - [c19]Tairo Yokokura, Yuta Nakashima, Yukihiro Yonemoto, Yuki Hikichi, Yoshitaka Nakanishi:
Measurement of cell mechanical properties by cell compression microdevice. MHS 2015: 1-4
- 2014
- [j7]Norihiko Kawai, Naoya Inoue, Tomokazu Sato, Fumio Okura, Yuta Nakashima, Naokazu Yokoya:
Background Estimation for a Single Omnidirectional Image Sequence Captured with a Moving Camera. Inf. Media Technol. 9(3): 361-365 (2014) - [j6]Norihiko Kawai, Naoya Inoue, Tomokazu Sato, Fumio Okura, Yuta Nakashima, Naokazu Yokoya:
Background Estimation for a Single Omnidirectional Image Sequence Captured with a Moving Camera. IPSJ Trans. Comput. Vis. Appl. 6: 68-72 (2014) - [c18]Fabian Lorenzo Dayrit, Yuta Nakashima, Tomokazu Sato, Naokazu Yokoya:
Free-viewpoint AR human-motion reenactment based on a single RGB-D video stream. ICME 2014: 1-6 - [c17]Yuta Nakashima, Kohichi Tsusu, Yuki Hikichi, Tairo Yokokura, Kazuyuki Minami, Yoshitaka Nakanishi:
Evaluation of cell-cell or cell-substrate adhesion effect on cellular differentiation using a microwell array having convertible culture surface. MHS 2014: 1-4
- 2013
- [c16]Yuta Nakashima, Naokazu Yokoya:
Inferring what the videographer wanted to capture. ICIP 2013: 191-195 - [c15]Tatsuya Koyama, Yuta Nakashima, Noboru Babaguchi:
Real-time privacy protection system for social videos using intentionally-captured persons detection. ICME 2013: 1-6 - [c14]Yuta Nakashima, Tomokazu Sato, Yusuke Uno, Naokazu Yokoya, Norihiko Kawai:
Augmented reality image generation with virtualized real objects using view-dependent texture and geometry. ISMAR 2013: 1-6
- 2012
- [j5]Yuta Nakashima, Noboru Babaguchi, Jianping Fan:
Intended human object detection for automatically protecting privacy in mobile video surveillance. Multim. Syst. 18(2): 157-173 (2012) - [c13]Tatsuya Koyama, Yuta Nakashima, Noboru Babaguchi:
Markov random field-based real-time detection of intentionally-captured persons. ICIP 2012: 1377-1380 - [c12]Yuta Nakashima, Kouichi Tsusu, Kazuyuki Minami:
Development of a dynamic conversion technique of cell culture surface using alginate thin film. MHS 2012: 482-487
- 2011
- [j4]Yuta Nakashima, Ryosuke Kaneto, Noboru Babaguchi:
Indoor Positioning System Using Digital Audio Watermarking. IEICE Trans. Inf. Syst. 94-D(11): 2201-2211 (2011) - [c11]Yuta Nakashima, Noboru Babaguchi, Jianping Fan:
Automatic generation of privacy-protected videos using background estimation. ICME 2011: 1-6 - [c10]Yuta Nakashima, Yin Yang, Kazuyuki Minami:
Fabrication of a dynamic compression stimulus microdevice to cells for evaluating real-time cellular response. MHS 2011: 174-179 - [c9]Yuta Nakashima, Noboru Babaguchi:
Extracting intentionally captured regions using point trajectories. ACM Multimedia 2011: 1417-1420
- 2010
- [c8]Yuta Nakashima, Noboru Babaguchi, Jianping Fan:
Detecting intended human objects in human-captured videos. CVPR Workshops 2010: 33-40 - [c7]Ryosuke Kaneto, Yuta Nakashima, Noboru Babaguchi:
Real-Time User Position Estimation in Indoor Environments Using Digital Watermarking for Audio Signals. ICPR 2010: 97-100 - [c6]Hiroshi Uegaki, Yuta Nakashima, Noboru Babaguchi:
Discriminating Intended Human Objects in Consumer Videos. ICPR 2010: 4380-4383 - [c5]Takumi Takehara, Yuta Nakashima, Naoko Nitta, Noboru Babaguchi:
Digital Diorama: Sensing-Based Real-World Visualization. IPMU (2) 2010: 663-672 - [c4]Yuta Nakashima, Noboru Babaguchi, Jianping Fan:
Automatically protecting privacy in consumer generated videos using intended human object detector. ACM Multimedia 2010: 1135-1138
2000 – 2009
- 2009
- [j3]Yusuke Aiba, Koji Tomioka, Yuta Nakashima, Koichi Hamashita, Bang-Sup Song:
A Fifth-Order Gm-C Continuous-Time ΔΣ Modulator With Process-Insensitive Input Linear Range. IEEE J. Solid State Circuits 44(9): 2381-2391 (2009) - [j2]Yuta Nakashima, Ryuki Tachibana, Noboru Babaguchi:
Watermarked Movie Soundtrack Finds the Position of the Camcorder in a Theater. IEEE Trans. Multim. 11(3): 443-454 (2009)
- 2007
- [c3]Yuta Nakashima, Ryuki Tachibana, Masafumi Nishimura, Noboru Babaguchi:
Determining Recording Location Based on Synchronization Positions of Audiowatermarking. ICASSP (2) 2007: 253-256 - [c2]Yuta Nakashima, Ryuki Tachibana, Noboru Babaguchi:
Maximum-Likelihood Estimation of Recording Position Based on Audio Watermarking. IIH-MSP 2007: 255-258
- 2006
- [c1]Yuta Nakashima, Ryuki Tachibana, Masafumi Nishimura, Noboru Babaguchi:
Estimation of recording location using audio watermarking. MM&Sec 2006: 108-113
- 2005
- [j1]Yuta Nakashima, Takashi Yasuda:
Fabrication of a Microfluidic Device for Axonal Guidance. J. Robotics Mechatronics 17(2): 158-163 (2005)