DyCC-Net: Dynamic Context Collection Network for Input-Aware Drone-View Object Detection
Abstract
1. Introduction
- (1)
- We present a drone-view detector supporting input-aware inference, called “DyCC-Net”, which skips or executes its Context Collector module depending on the complexity of the input, thereby improving inference efficiency by avoiding unnecessary computation. To the best of our knowledge, this work is the first study to explore dynamic neural networks in a drone-view detector.
- (2)
- We design the core Dynamic Context Collector (DyCC) module and adopt the Gumbel–Softmax function to address the difficulty of training networks with discrete gating decisions; a minimal sketch of this gating mechanism is given after this list.
- (3)
- We propose a pseudo-labelling-based semi-supervised learning strategy, called “Pseudo Learning”, which guides the network to allocate an appropriate amount of computation to each input and thereby achieves a favourable speed-accuracy trade-off.
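To make the skip-or-execute idea concrete, the following PyTorch sketch shows one way an input-aware gate can wrap a context module and be trained with the Gumbel–Softmax relaxation. The module names (`DynamicGate`, `GatedContextCollector`), the pooled-feature gating input, and all hyper-parameters are illustrative assumptions rather than the exact DyCC-Net design described in Section 4; only the use of `torch.nn.functional.gumbel_softmax` mirrors the paper's strategy for training discrete gates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGate(nn.Module):
    """Input-aware gate deciding whether to execute or skip a module.

    A hypothetical sketch; the actual gating-network design of DyCC-Net
    is described in Section 4.2.
    """
    def __init__(self, channels: int, tau: float = 1.0):
        super().__init__()
        self.tau = tau
        # Lightweight gating head on globally pooled features.
        self.fc = nn.Linear(channels, 2)  # logits for [skip, execute]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)   # (B, C)
        logits = self.fc(pooled)                          # (B, 2)
        if self.training:
            # Gumbel-Softmax gives a differentiable, near-discrete choice;
            # hard=True applies the straight-through estimator.
            gate = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        else:
            # Deterministic decision at inference time.
            gate = F.one_hot(logits.argmax(dim=-1), num_classes=2).float()
        return gate[:, 1]                                 # 1 = execute, 0 = skip

class GatedContextCollector(nn.Module):
    """Wraps a context module so it is only executed for 'hard' inputs."""
    def __init__(self, channels: int, context_module: nn.Module):
        super().__init__()
        self.gate = DynamicGate(channels)
        self.context = context_module  # assumed to preserve the feature shape

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x).view(-1, 1, 1, 1)                # (B, 1, 1, 1)
        if self.training:
            # Always run the module during training and blend by the hard
            # gate so gradients reach the gating network.
            return g * self.context(x) + (1.0 - g) * x
        # At inference, skip the context branch entirely for easy inputs.
        if torch.all(g < 0.5):
            return x
        return self.context(x)
```

Skipping the context branch for easy inputs at inference time is what removes the redundant computation quantified later in the ablation studies.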
2. Related Work
2.1. Dynamic Neural Networks
2.2. Drone-View Object Detection
3. Preliminaries
3.1. Feature Pyramid Network
3.2. Context Collector
4. Methodology
4.1. Overview
4.2. Dynamic Gate
4.2.1. Designs of Gating Network
4.2.2. Gumbel–Softmax Gating Activation Function
4.3. Pseudo Learning
5. Experiments
5.1. Datasets and Models
- (1)
- VisDrone2021 [47]: The VisDrone2021 dataset contains ten object categories, such as pedestrian, person, and car. Every image in the dataset is annotated with object categories and bounding boxes. VisDrone2021 is split into three subsets: 6471 images for training, 548 images for validation, and 3190 images for testing.
- (2)
- UAVDT [48]: The UAVDT dataset contains three object categories: bus, truck, and car. Each image is likewise annotated with object categories and bounding boxes. UAVDT is split into two subsets: 23,258 images for training and 15,069 images for testing.
5.2. Implementation and Evaluation Metrics
- (1)
- Implementation: All the experiments are conducted on a single NVIDIA RTX 3090 GPU, and DyCC-Net is implemented with PyTorch 1.8.1. During training, a pre-trained YOLOv5 [52] model is used as the backbone. DyCC-Net is trained with the Stochastic Gradient Descent (SGD) optimizer, and the learning rate follows a cosine decay schedule. The long side of the input images is resized to 1536 pixels, following TPH-YOLOv5 [53]; a brief configuration sketch of this setup is given after this list.
- (2)
- Evaluation Metrics: The detection performance of the proposed DyCC-Net is evaluated using the same metrics as PASCAL VOC [55], i.e., mean Average Precision (mAP) and Average Precision (AP), which are defined by

  $\mathrm{AP} = \int_{0}^{1} P(R)\,\mathrm{d}R, \qquad \mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{AP}_{i},$

  where $P(R)$ is the precision-recall curve and $N$ is the number of object categories. Here, R is Recall, which measures how well the detector finds the positives and is the percentage of true positive predictions among all positive samples, and P is Precision, which measures how accurate the predictions are and is the percentage of correct positive predictions among all predicted positives. P and R are defined as follows:

  $P = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad R = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$

  where TP, FP, and FN are the numbers of true positives, false positives, and false negatives, respectively. A small numerical example of the AP computation is also given after this list.
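As a point of reference for the implementation details above, the snippet below shows how an SGD optimizer with a cosine learning-rate schedule is typically set up in PyTorch 1.8. The initial learning rate, momentum, weight decay, and epoch count are placeholders (the exact values are not reproduced here), and the one-layer model stands in for the YOLOv5-based DyCC-Net.

```python
import torch
import torch.nn as nn

# Placeholder hyper-parameters: the initial learning rate, momentum,
# weight decay, and epoch count are illustrative, not the paper's values.
INIT_LR, EPOCHS = 1e-2, 100

model = nn.Conv2d(3, 16, 3)  # stand-in for the YOLOv5-based DyCC-Net
optimizer = torch.optim.SGD(model.parameters(), lr=INIT_LR,
                            momentum=0.937, weight_decay=5e-4)
# Cosine learning-rate schedule over the whole training run.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    # Dummy step; the real loop iterates over drone images resized to a
    # long side of 1536 pixels.
    loss = model(torch.randn(2, 3, 64, 64)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The AP and mAP definitions above can likewise be made concrete with a small numerical sketch. The function below computes AP as the area under the precision-recall curve from a list of scored detections; the detections in the toy example are hypothetical, and the all-points integration used here is a simplification of the interpolated PASCAL VOC procedure.

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    """AP as the area under the precision-recall curve for one class.

    scores : confidence of each detection
    is_tp  : 1 if the detection matches a ground-truth box, else 0
    num_gt : total number of ground-truth (positive) objects
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / num_gt                 # R = TP / (TP + FN)
    precision = tp_cum / (tp_cum + fp_cum)   # P = TP / (TP + FP)
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):      # integrate P(R) over recall
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Toy example with hypothetical detections for a single class;
# mAP is the mean of the per-class AP values.
print(average_precision(scores=[0.9, 0.8, 0.6, 0.4],
                        is_tp=[1, 0, 1, 1], num_gt=4))
```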
5.3. Ablation Studies
5.3.1. The Effectiveness of CC
5.3.2. The Effectiveness of DyCC
5.3.3. The Effectiveness of Pseudo Learning
5.3.4. Generation of Pseudo Labels
5.3.5. Analysis of Performance Gain and Complexity of DyCC-Net
5.4. Comparison with SOTA Models
- (1)
- Results on VisDrone2021: Table 5 compares the detection results of several detectors on VisDrone2021, including the one-stage detectors SSD [51] and YOLOv5 [52] and the two-stage detector FRCNN [50] with FPN [44]. DyCC-Net achieves an AP of 40.07%, an AP50 of 59.72%, and an AP75 of 42.14%, outperforming these baseline detectors. Table 5 also compares DyCC-Net with the SOTA detectors specially designed for aerial images, namely UFPMP-Det [18], TPH-YOLOv5 [53], DSHNet [54], CRENet [49], GLSAN [12], and ClusDet [17]. DyCC-Net outperforms UFPMP-Det [18] by 0.87% in AP and 1.94% in AP75. Figure 9 shows the detection results on aerial images. Note that we do not rely on tricks such as model ensembles or oversized backbones, which are commonly adopted in existing models for drone-captured images.
- (2)
- Results on UAVDT: Table 5 also reports results on UAVDT, where DyCC-Net achieves an AP of 26.91%, an AP50 of 39.63%, and an AP75 of 31.44%, surpassing all the compared detectors; in particular, it exceeds UFPMP-Det [18] by 2.31% in AP.
- (3)
- Overall Complexity: To evaluate the time efficiency of DyCC-Net, we compare its inference time with that of ClusDet [17], CRENet [49], UFPMP-Det [18], and TPH-YOLOv5 [53]. All models are evaluated on a GTX 1080Ti GPU, except for CRENet [49], which is evaluated on an RTX 2080Ti GPU. Table 6 shows that DyCC-Net reduces redundant computation through input-aware inference and thus achieves a significantly faster inference speed. Moreover, UFPMP-Det performs inference in a coarse-to-fine fashion: a coarse detector first finds sub-regions containing small, densely distributed objects, and a fine detector is then applied to these regions to locate the small targets. To reach detection performance comparable to DyCC-Net, UFPMP-Det therefore has to spend considerably more time. A sketch of a typical GPU latency-measurement routine is given below.
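For context, per-image latency figures such as those in Table 6 are usually measured with warm-up iterations and explicit GPU synchronisation; the snippet below illustrates this common PyTorch pattern. The stand-in model, input resolution, and iteration counts are assumptions for illustration, not the exact benchmarking protocol used in the paper.

```python
import time
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3).cuda().eval()   # stand-in for a detector
x = torch.randn(1, 3, 1536, 1536, device="cuda")

with torch.no_grad():
    for _ in range(10):                     # warm-up: stabilise clocks/caches
        model(x)
    torch.cuda.synchronize()                # CUDA kernels run asynchronously
    start = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()
print(f"mean latency: {(time.time() - start) / 100 * 1000:.2f} ms/image")
```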
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Avola, D.; Cinque, L.; Diko, A.; Fagioli, A.; Foresti, G.L.; Mecca, A.; Pannone, D.; Piciarelli, C. MS-Faster R-CNN: Multi-stream backbone for improved Faster R-CNN object detection and aerial tracking from UAV images. Remote Sens. 2021, 13, 1670.
- Stojnić, V.; Risojević, V.; Muštra, M.; Jovanović, V.; Filipi, J.; Kezić, N.; Babić, Z. A method for detection of small moving objects in UAV videos. Remote Sens. 2021, 13, 653.
- Jin, P.; Mou, L.; Xia, G.S.; Zhu, X.X. Anomaly Detection in Aerial Videos with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5628213.
- Moon, J.; Lim, S.; Lee, H.; Yu, S.; Lee, K.B. Smart Count System Based on Object Detection Using Deep Learning. Remote Sens. 2022, 14, 3761.
- Yang, X.; Yan, J.; Liao, W.; Yang, X.; Tang, J.; He, T. Scrdet++: Detecting small, cluttered and rotated objects via instance-level feature denoising and rotation loss smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 2022.
- Yang, X.; Yang, J.; Yan, J.; Zhang, Y.; Zhang, T.; Guo, Z.; Sun, X.; Fu, K. Scrdet: Towards more robust detection for small, cluttered and rotated objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 8232–8241.
- Yang, C.; Huang, Z.; Wang, N. QueryDet: Cascaded sparse query for accelerating high-resolution small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 13668–13677.
- Li, G.; Liu, Z.; Zeng, D.; Lin, W.; Ling, H. Adjacent context coordination network for salient object detection in optical remote sensing images. IEEE Trans. Cybern. 2022.
- Ye, C.; Li, X.; Lai, S.; Wang, Y.; Qian, X. Scale adaption-guided human face detection. Knowl.-Based Syst. 2022, 253, 109499.
- Qi, G.; Zhang, Y.; Wang, K.; Mazur, N.; Liu, Y.; Malaviya, D. Small Object Detection Method Based on Adaptive Spatial Parallel Convolution and Fast Multi-Scale Fusion. Remote Sens. 2022, 14, 420.
- Nan, F.; Jing, W.; Tian, F.; Zhang, J.; Chao, K.M.; Hong, Z.; Zheng, Q. Feature super-resolution based Facial Expression Recognition for multi-scale low-resolution images. Knowl.-Based Syst. 2022, 236, 107678.
- Deng, S.; Li, S.; Xie, K.; Song, W.; Liao, X.; Hao, A.; Qin, H. A global-local self-adaptive network for drone-view object detection. IEEE Trans. Image Process. 2020, 30, 1556–1569.
- Xie, X.; Li, L.; An, Z.; Lu, G.; Zhou, Z. Small Ship Detection Based on Hybrid Anchor Structure and Feature Super-Resolution. Remote Sens. 2022, 14, 3530.
- Jiao, L.; Gao, J.; Liu, X.; Liu, F.; Yang, S.; Hou, B. Multi-Scale Representation Learning for Image Classification: A Survey. IEEE Trans. Artif. Intell. 2021.
- Qiao, S.; Chen, L.C.; Yuille, A. DetectoRS: Detecting objects with recursive feature pyramid and switchable atrous convolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10213–10224.
- Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7373–7382.
- Yang, F.; Fan, H.; Chu, P.; Blasch, E.; Ling, H. Clustered object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8311–8320.
- Huang, Y.; Chen, J.; Huang, D. UFPMP-Det: Toward accurate and efficient object detection on drone imagery. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 1026–1033.
- Xi, Y.; Jia, W.; Miao, Q.; Liu, X.; Fan, X.; Li, H. FiFoNet: Fine-Grained Target Focusing Network for Object Detection in UAV Images. Remote Sens. 2022, 14, 3919.
- Han, Y.; Huang, G.; Song, S.; Yang, L.; Wang, H.; Wang, Y. Dynamic Neural Networks: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7436–7456.
- Yang, B.; Bender, G.; Le, Q.V.; Ngiam, J. Condconv: Conditionally parameterized convolutions for efficient inference. Adv. Neural Inf. Process. Syst. 2019, 32.
- Wang, Y.; Lv, K.; Huang, R.; Song, S.; Yang, L.; Huang, G. Glance and focus: A dynamic approach to reducing spatial redundancy in image classification. Adv. Neural Inf. Process. Syst. 2020, 33, 2432–2444.
- Li, Y.; Song, L.; Chen, Y.; Li, Z.; Zhang, X.; Wang, X.; Sun, J. Learning dynamic routing for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8553–8562.
- Mullapudi, R.T.; Mark, W.R.; Shazeer, N.; Fatahalian, K. Hydranets: Specialized dynamic architectures for efficient inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8080–8089.
- Cai, S.; Shu, Y.; Wang, W. Dynamic routing networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2021; pp. 3588–3597.
- Bolukbasi, T.; Wang, J.; Dekel, O.; Saligrama, V. Adaptive neural networks for efficient inference. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 527–536.
- Wang, X.; Yu, F.; Dou, Z.Y.; Darrell, T.; Gonzalez, J.E. Skipnet: Learning dynamic routing in convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 409–424.
- Ma, J.; Zhao, Z.; Yi, X.; Chen, J.; Hong, L.; Chi, E.H. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1930–1939.
- Shazeer, N.; Mirhoseini, A.; Maziarz, K.; Davis, A.; Le, Q.; Hinton, G.; Dean, J. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv 2017, arXiv:1701.06538.
- Li, M.; Chen, S.; Shen, Y.; Liu, G.; Tsang, I.W.; Zhang, Y. Online Multi-Agent Forecasting with Interpretable Collaborative Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15.
- Wu, Z.; Nagarajan, T.; Kumar, A.; Rennie, S.; Davis, L.S.; Grauman, K.; Feris, R. Blockdrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8817–8826.
- Lin, J.; Rao, Y.; Lu, J.; Zhou, J. Runtime neural pruning. Adv. Neural Inf. Process. Syst. 2017, 30.
- Zhang, Y.; Zong, R.; Kou, Z.; Shang, L.; Wang, D. On streaming disaster damage assessment in social sensing: A crowd-driven dynamic neural architecture searching approach. Knowl.-Based Syst. 2022, 239, 107984.
- Xi, Y.; Jia, W.; Zheng, J.; Fan, X.; Xie, Y.; Ren, J.; He, X. DRL-GAN: Dual-stream representation learning GAN for low-resolution image classification in UAV applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1705–1716.
- Li, J.; Liang, X.; Wei, Y.; Xu, T.; Feng, J.; Yan, S. Perceptual generative adversarial networks for small object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1222–1230.
- Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. Sod-mtgan: Small object detection via multi-task generative adversarial network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 206–221.
- Hu, P.; Ramanan, D. Finding tiny faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 951–959.
- Bell, S.; Zitnick, C.L.; Bala, K.; Girshick, R. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2874–2883.
- Qiu, H.; Li, H.; Wu, Q.; Meng, F.; Xu, L.; Ngan, K.N.; Shi, H. Hierarchical context features embedding for object detection. IEEE Trans. Multimed. 2020, 22, 3039–3050.
- Wang, Q.; Liu, Y.; Xiong, Z.; Yuan, Y. Hybrid Feature Aligned Network for Salient Object Detection in Optical Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768.
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
- Li, J.; Zhu, S.; Gao, Y.; Zhang, G.; Xu, Y. Change Detection for High-Resolution Remote Sensing Images Based on a Multi-Scale Attention Siamese Network. Remote Sens. 2022, 14, 3464.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Jang, E.; Gu, S.; Poole, B. Categorical Reparameterization with Gumbel-Softmax. In Proceedings of the ICLR, Toulon, France, 24–26 April 2017.
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 658–666.
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and Tracking Meet Drones Challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7380–7399.
- Du, D.; Qi, Y.; Yu, H.; Yang, Y.; Duan, K.; Li, G.; Zhang, W.; Huang, Q.; Tian, Q. The unmanned aerial vehicle benchmark: Object detection and tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 370–386.
- Wang, Y.; Yang, Y.; Zhao, X. Object detection using clustering algorithm adaptive searching regions in aerial images. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 651–664.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
- Jocher, G. YOLOv5 Source Code. Available online: https://github.com/ultralytics/yolov5 (accessed on 1 August 2022).
- Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-Captured Scenarios. In Proceedings of the ICCV Workshops, Beijing, China, 21 October 2021; pp. 2778–2788.
- Yu, W.; Yang, T.; Chen, C. Towards resolving the challenge of long-tail distribution in UAV images for object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2021; pp. 3258–3267.
- Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vis. 2015, 111, 98–136.
- Chalavadi, V.; Jeripothula, P.; Datla, R.; Ch, S.B. mSODANet: A Network for Multi-Scale Object Detection in Aerial Images using Hierarchical Dilated Convolutions. Pattern Recognit. 2022, 126, 108548.
| k = 1, d = 1 | k = 3, d = 1 | k = 3, d = 2 | k = 3, d = 3 | k = 3, d = 4 | k = 3, d = 5 |  |
|---|---|---|---|---|---|---|
| ✓ |  |  |  |  |  | 39.2 |
| ✓ | ✓ |  |  |  |  | 39.9 |
| ✓ | ✓ | ✓ |  |  |  | 40.4 |
| ✓ | ✓ | ✓ | ✓ |  |  | 40.8 |
| ✓ | ✓ | ✓ | ✓ | ✓ |  | 41.0 |
| ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 41.1 |
| Method | FLOPs (G), s | FLOPs (G), l | FLOPs (G), x | AP50 (%), s | AP50 (%), l | AP50 (%), x |
|---|---|---|---|---|---|---|
| YOLOv5 | 10.09 | 68.28 | 129.03 | 33.50 | 42.40 | 43.94 |
| YOLOv5 + CC | 13.15 | 79.41 | 146.08 | 34.43 | 43.51 | 44.89 |
| YOLOv5 + DyCC w/o PL | 13.50 | 80.87 | 148.36 | 34.41 | 43.50 | 44.89 |
| YOLOv5 + DyCC w/ PL | 11.31 | 72.12 | 134.47 | 34.39 | 43.47 | 44.87 |
| GateNet | FLOPs (G), s | FLOPs (G), l | FLOPs (G), x | AP50 (%), s | AP50 (%), l | AP50 (%), x |
|---|---|---|---|---|---|---|
| GateNet-I | 0.30 | 1.19 | 1.87 | 34.36 | 43.45 | 44.86 |
| GateNet-II | 1.50 | 5.96 | 9.31 | 34.42 | 43.51 | 44.90 |
| GateNet-III | 0.38 | 1.49 | 2.33 | 34.39 | 43.47 | 44.87 |
| Method | Image Size | Recall (%) | AP50 (%) | FLOPs (G) | Training Time (h) |
|---|---|---|---|---|---|
| YOLOv5 | - | 41.59 | 42.40 | 68.28 | 8.3 |
| YOLOv5 | - | 53.76 | 55.60 | 392.89 | 23.0 |
| YOLOv5 + tinyHead | - | 56.34 | 58.59 | 440.08 | 32.7 |
| DyCC-Net w/o DyCC | - | 57.17 | 59.98 | 505.46 | 60.0 |
| DyCC-Net | - | 57.01 | 59.72 | 456.17 | 91.7 |
| Method | Reference | AP (VisDrone2021) | AP50 (VisDrone2021) | AP75 (VisDrone2021) | AP (UAVDT) | AP50 (UAVDT) | AP75 (UAVDT) |
|---|---|---|---|---|---|---|---|
| SSD [51] | ECCV16 | - | 15.20 | - | 9.30 | 21.40 | 6.70 |
| FRCNN [50] + FPN [44] | CVPR17 | 21.80 | 41.80 | 20.10 | 11.00 | 23.40 | 8.40 |
| YOLOv5 [52] | GitHub21 | 24.90 | 42.40 | 25.10 | 19.10 | 33.90 | 19.60 |
| DSHNet [54] | WACV21 | 30.30 | 51.80 | 30.90 | 17.80 | 30.40 | 19.70 |
| GLSAN [12] | TIP20 | 30.70 | 55.60 | 29.90 | 19.00 | 30.50 | 21.70 |
| ClusDet [17] | ICCV19 | 32.40 | 56.20 | 31.60 | 13.70 | 26.50 | 12.50 |
| CRENet [49] | ECCV20 | 33.70 | 54.30 | 33.50 | - | - | - |
| TPH-YOLOv5 [53] | ICCVW21 | 35.74 | 57.31 | - | - | - | - |
| mSODANet [56] | PR22 | 36.89 | 55.92 | 37.41 | - | - | - |
| UFPMP-Det [18] | AAAI22 | 39.20 | 65.30 | 40.20 | 24.60 | 38.70 | 28.00 |
| DyCC-Net | Ours | 40.07 | 59.72 | 42.14 | 26.91 | 39.63 | 31.44 |