Just arrived in Vienna for #ICML2024! I will be presenting our work on improving GNNs' generalization ability with ensemble learning (https://lnkd.in/gimXe5fS) at Oral 5A (Thu 25 Jul, 10:45-11:00 a.m.) and Poster 2407 (Thu 25 Jul, 11:30 a.m.-1:00 p.m.). Paper title: From Coarse to Fine: Enable Comprehensive Graph Self-supervised Learning with Multi-granular Semantic Ensemble. A heartfelt thank you to all my collaborators for their incredible support; this work would not have been possible without them. I am also looking forward to reconnecting with old friends and making new ones. If you are attending ICML or are interested in my research, please join me during my presentation sessions. Additionally, please feel free to reach out to exchange ideas if you are interested in topics such as (1) improving graph model generalization across different datasets, and (2) foundation models for time series forecasting. See you in Vienna! 🇦🇹 #ICML2024 #MachineLearning #ArtificialIntelligence
-
We are excited to share that our paper, "Online Learning of Decision Trees with Thompson Sampling," co-authored by Ayman CHAOUKI, Jesse Read, and Albert Bifet, has been selected as a recipient of the "Outstanding Student Paper" award at the 27th International Conference on Artificial Intelligence and Statistics (AISTATS 2024). Our research addresses the suboptimality of traditional Decision Tree algorithms like C4.5, ID3, and CART, which rely on greedy splits and can lead to unnecessarily complex models. We introduce a novel Monte Carlo Tree Search algorithm, Thompson Sampling Decision Trees (TSDT), tailored specifically for online learning: it not only converges to the optimal Decision Tree but also outperforms existing algorithms on several benchmarks. We are proud of this recognition and look forward to further advancing the field of Machine Learning. Read more about our work here: https://lnkd.in/gWQ3N2jJ #AISTATS #MachineLearning #TSDT
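For readers curious about the core mechanism, here is a minimal sketch of Beta-Bernoulli Thompson Sampling applied to picking a decision split from streaming data. This is an illustration of the bandit machinery TSDT builds on, not the authors' algorithm (which searches tree space with MCTS); the candidate splits and data stream are invented for the example.

```python
# Thompson Sampling over candidate splits for a decision stump, online.
# NOT the authors' TSDT -- just the Beta-Bernoulli idea it builds on.
import random

class ThompsonSplitSelector:
    def __init__(self, candidate_splits):
        # candidate_splits: list of (feature_index, threshold) pairs (assumed given)
        self.splits = candidate_splits
        # Beta(1, 1) prior on each split's probability of classifying correctly
        self.alpha = [1.0] * len(candidate_splits)
        self.beta = [1.0] * len(candidate_splits)

    def choose(self):
        # Sample a success rate per arm from its posterior; play the argmax
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, correct):
        # Bayesian update of the chosen arm from the observed outcome
        if correct:
            self.alpha[arm] += 1.0
        else:
            self.beta[arm] += 1.0

# Synthetic stream: label is 1 iff x > 0.3, so split (0, 0.3) should win.
selector = ThompsonSplitSelector([(0, 0.1), (0, 0.3), (0, 0.7)])
for _ in range(2000):
    x = [random.random()]
    y = int(x[0] > 0.3)
    arm = selector.choose()
    feat, thr = selector.splits[arm]
    selector.update(arm, int(x[feat] > thr) == y)
print("posterior means per split:",
      [round(a / (a + b), 2) for a, b in zip(selector.alpha, selector.beta)])
```

Over time the posterior concentrates on the best split while still exploring the others, which is exactly the exploration-exploitation trade-off an online tree learner needs.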
-
🚀 Day 13 of #30DaysOfFLCode 🌟 Today, I delved into the fascinating research on FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heterogeneity in Federated Learning by Jianqing Zhang et al. (arXiv:2401.03230). Highlights:
- Tackles the challenges of Heterogeneous Federated Learning (HtFL), particularly the high communication costs and privacy concerns associated with sharing model parameters.
- Proposes prototype-based HtFL to share only class representatives (prototypes), minimizing communication while safeguarding client privacy.
- Introduces FedTGP, leveraging Adaptive-margin-enhanced Contrastive Learning (ACL) to refine global prototypes, improving their separability and semantic consistency.
- Achieves state-of-the-art accuracy improvements of up to 9.08% across twelve heterogeneous models, all while maintaining efficiency and privacy.
Check out the paper (https://lnkd.in/gUTe2xvJ) and the code (https://lnkd.in/gNJ_44dJ). Federated Learning continues to amaze with innovative solutions for real-world heterogeneity challenges. Onward to Day 14! 💡 #FederatedLearning #MachineLearning #AIResearch #HtFL #AdaptiveLearning
GitHub - TsingZ0/FedTGP: AAAI 2024 accepted paper, FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heterogeneity in Federated Learning (github.com)
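To make the communication pattern concrete, here is a hedged sketch of prototype-based HtFL: clients send only per-class mean embeddings rather than model weights. FedTGP's actual server step trains the global prototypes with ACL; the naive averaging below is a simplified stand-in so the data flow stays visible.

```python
# Prototype-based federated learning, simplified. The server-side ACL
# refinement from FedTGP is replaced here by plain averaging (assumption).
import torch

def client_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                      num_classes: int) -> torch.Tensor:
    """Return a [num_classes, dim] tensor of per-class mean embeddings."""
    protos = torch.zeros(num_classes, embeddings.shape[1])
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return protos

def server_aggregate(client_protos):
    """Naive stand-in for FedTGP's trainable global prototypes: average."""
    return torch.stack(client_protos).mean(dim=0)

# Two clients with heterogeneous encoders can still agree on a shared
# embedding dimension (here 8) and exchange only [num_classes, 8] tensors,
# which is far cheaper than exchanging full model parameters.
torch.manual_seed(0)
c1 = client_prototypes(torch.randn(32, 8), torch.randint(0, 4, (32,)), 4)
c2 = client_prototypes(torch.randn(32, 8), torch.randint(0, 4, (32,)), 4)
global_protos = server_aggregate([c1, c2])
print(global_protos.shape)  # torch.Size([4, 8])
```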
-
Free #MAXQDA webinars for this week:
🇬🇧 Using the Summary Grid to create Summary Tables (Part 1)
🇬🇧 Surveys in MAXQDA, from research questions to analysis
🇹🇷 Word-Focused Analysis with MAXQDA: MAXDictio (Advanced)
🇪🇸 Artificial Intelligence and MAXQDA 24 (Advanced)
🇩🇪 Visualizing Data and Results with MAXQDA (Advanced)
🇬🇧 Coding with a System: Organizing, changing, and documenting your Code System
Research Session: In Search of the Meaning of European Identity: A Mixed-Methods Study
🔗 Save your spot now: https://ow.ly/t9ou50U47Kf
-
Can you believe I got a Grade A in Part 3 of Foundations of Modern Machine Learning while others struggled?! #IIIT #FMML nagababu molleti, CHINTAMANENI NAGA PRASANNA, thanks for your support! 🎓
-
🎉 Excited to share that our paper titled "Context Conquers Parameters: Outperforming Proprietary LLM in Commit Message Generation" has been directly accepted in the second cycle of ICSE 2025 (10% direct acceptance)! I express my deepest gratitude to my amazing co-authors, Dr. Iftekhar Ahmed and Dr. Mohammad Moshirpour, for their invaluable support and remarkable contributions. 🤔 LLM4SE papers typically employ the most recent proprietary LLMs. However, this practice renders such innovations impractical for industrial adoption due to privacy and cost concerns. Specifically, the state-of-the-art automated commit message generation approach that uses GPT-4 leads to a $75,000 annual cost for a medium-sized company. ✨ Our proposed commit message generation technique, OMEGA, operates locally and utilizes a 4-bit quantized Llama3 8B to generate commit messages that surpass the quality of GPT-4, as assessed by practitioners. OMEGA is free and guarantees users’ privacy by running locally! We have uploaded the pre-print to arXiv: https://lnkd.in/eNRz2wHW #ICSE2025 #LLM4SE #AI4SE #rigormeetsrelevance
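As a rough illustration of the deployment pattern described above (not OMEGA itself), here is a sketch of running a 4-bit quantized Llama 3 8B locally to draft a commit message from a staged diff. The checkpoint id, prompt, and decoding settings are illustrative assumptions; the paper's context-construction technique is what actually closes the gap with GPT-4.

```python
# Local, 4-bit quantized commit-message drafting. Prompt and model id are
# assumptions for illustration; OMEGA's real context strategy is richer.
import subprocess
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint (gated)

quant = BitsAndBytesConfig(load_in_4bit=True,
                           bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=quant, device_map="auto")

# Pull the staged diff straight from git; everything stays on-device.
diff = subprocess.run(["git", "diff", "--cached"],
                      capture_output=True, text=True).stdout
prompt = ("Write a one-line commit message for this diff:\n"
          f"{diff}\nCommit message:")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True).strip())
```

Because nothing leaves the machine, the privacy and recurring-cost objections to proprietary-API pipelines disappear by construction.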
-
Paper Alert: Our work towards the design of deeper and competitive Forward-Forward networks has been published in Transactions on Machine Learning Research (#TMLR). https://lnkd.in/evSifERq This is joint work with Thomas Dooms and Ing Jyh Tsang. Kudos to Thomas, as this was work he conducted as part of his Master's thesis project at the Department of Computer Science at the University of Antwerp IDLab (UGent - UAntwerpen - imec). #LocalLearning #ForwardForward #FF #deeplearning
The Trifecta: Three simple techniques for training deeper... (openreview.net)
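For context, here is a minimal sketch of the layer-local Forward-Forward objective this line of work builds on (Hinton's FF): each layer pushes its "goodness" (sum of squared activations) above a threshold for positive data and below it for negative data, with no backprop between layers. The Trifecta's specific techniques for training deeper FF stacks are in the paper and not reproduced here.

```python
# One Forward-Forward layer with its own local optimizer; gradients never
# cross layer boundaries. Sizes, threshold, and lr are illustrative.
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only direction, not the previous layer's goodness,
        # is passed forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness, positive pass
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness, negative pass
        # Push positive goodness above threshold, negative goodness below.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay inside this layer
        self.opt.step()
        # Detached outputs become the next layer's training inputs.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

torch.manual_seed(0)
layer = FFLayer(16, 32)
x_pos, x_neg = torch.randn(64, 16) + 1.0, torch.randn(64, 16) - 1.0
for _ in range(100):
    layer.train_step(x_pos, x_neg)
```

Deeper networks are built by stacking such layers and feeding each layer's detached outputs to the next, which is precisely where training signal tends to degrade and where the paper's techniques come in.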
-
📚 Excited to announce that, during my postdoctoral research journey, my research results have been published in the Q1 journal IEEE Access, with an AI & XAI focus: https://lnkd.in/dAqNN3rM !! My research paper is "TCLPI: Machine Learning-Driven Framework for Hybrid Learning Mode Identification." 🚀🔬 In this study, we introduce a framework that utilizes machine learning with explainable AI to automate the identification of hybrid learning mode for Theory Class and Lab Practice (TCLPI). This research found that student-teacher interaction decreased during lab practice, which is a crucial concern. Internet disconnections, a lack of support during technological malfunctions, and the likelihood of cheating in unmonitored exams are also issues. We also found that students were accepting of hybrid learning for theory classes; each model's intrinsic feature relevance and SHAP values supported this finding. Our results show that hybrid learning works for theory classes but is not sufficient for lab practice. 🌐 Special thanks to Eötvös Loránd University Faculty of Informatics (ELTE IK) and NRDIO - National Research Development and Innovation Office (NKFIH) for the support! Thanks to Dr. Deepak Mehta for his support and encouragement during this journey; his contribution is also acknowledged in the article. Also, Dr. Zoltan Illes has always supported me as a mentor and colleague. #PostdocLife #Research #AcademicAchievement #Q1Journal #Science #Innovation #Gratitude #NextChapter #Publication #ai #education #ELTE #IK
TCLPI: Machine Learning-Driven Framework for Hybrid Learning Mode Identification (ieeexplore.ieee.org)
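As a hedged sketch of the explainability step described above: train a classifier on survey-style features, then use SHAP values to see which features drive the predicted suitability of hybrid learning. The feature names and data below are synthetic placeholders, not the TCLPI dataset or its models.

```python
# Global SHAP feature importance for a classifier (synthetic stand-in data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["teacher_interaction", "internet_stability",
                 "tech_support", "exam_monitoring"]  # illustrative only
X = rng.random((300, 4))
# Toy rule: hybrid mode "works" when interaction and connectivity are high.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the shap version this is a list (one array per class) or a
# single (samples, features, classes) array; reduce to class-1 attributions.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
mean_abs = np.abs(vals).mean(axis=0)
for name, val in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```

Mean absolute SHAP values give the kind of global feature ranking the post refers to, while per-sample values explain individual predictions.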
-
If you are attending #iclr2024, please visit our poster on "𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗖𝗼𝗺𝗺𝘂𝘁𝗮𝘁𝗶𝘃𝗲 𝗜𝗻𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲𝘀 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗡𝗼𝗻-𝗖𝗼𝗺𝗺𝘂𝘁𝗮𝘁𝗶𝘃𝗶𝘁𝘆" on Wednesday 8 May, 10:45-12:45, at Halle B #224. In this work, our core focus lies in 𝗶𝗻𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴, which forms the heart of the robust representations extensively applied to tasks like 𝗱𝗼𝗺𝗮𝗶𝗻 𝗮𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻, 𝗰𝗮𝘂𝘀𝗮𝗹 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆, 𝗼𝘂𝘁-𝗼𝗳-𝗱𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻 𝗴𝗲𝗻𝗲𝗿𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻, etc. We recognize that when a task has to be performed in a target domain known a priori (the condition), the learned invariant should preserve domain-specific information about the target. We show that a provably optimal and sample-efficient way of learning invariants of this form is to relax the invariance criterion to be non-commutatively directed towards the target domain, an approach we call 𝗻𝗼𝗻-𝗰𝗼𝗺𝗺𝘂𝘁𝗮𝘁𝗶𝘃𝗲 𝗶𝗻𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 (𝗡𝗖𝗜). This allows us to derive tighter conditional target risk bounds and provide well-defined optimality criteria for any 𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝗮𝗹 𝗶𝗻𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 learner, both commutative and non-commutative. In our experiments, we relaxed the adversarial objective of DANNs through NCI and attained SOTA multi-source domain adaptation performance across a number of standard benchmarks. Looking at invariance learning through the lens of commutativity also allows a number of open problems around the uniqueness of learning algorithms, emergent behavior, etc. to be treated with the machinery of commutative algebra and measure theory. Paper: https://lnkd.in/d6bsAEXw Code: https://lnkd.in/d7nJS43e Huge thanks to all the co-authors, Abhra Chaudhuri and Serban Georgescu.
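For readers unfamiliar with the DANN machinery the experiments start from, here is a sketch of the standard gradient reversal layer (GRL) that makes a feature extractor adversarially confuse a domain classifier, yielding domain-invariant features. NCI's non-commutative relaxation modifies the objective built on top of this; that part is in the paper and not reproduced here, and the layer sizes below are placeholders.

```python
# Standard DANN gradient reversal layer: identity forward, negated
# (scaled) gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Feature extractor, label head, and domain head (sizes are placeholders).
features = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU())
label_head = torch.nn.Linear(64, 10)
domain_head = torch.nn.Linear(64, 2)

x = torch.randn(8, 20)
z = features(x)
class_logits = label_head(z)                   # trained to predict labels
domain_logits = domain_head(grad_reverse(z))   # GRL makes z domain-confusing
```

Minimizing the domain loss through the GRL maximizes it with respect to the features, which is the commutative invariance baseline that NCI directs towards the target domain.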
-
🎙 Introducing the paper presented by Yuxiang Wang from Wuhan University at #ICDE2024: Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning. In this paper, the team from Wuhan University, Centre for Perceptual and Interactive Intelligence (CPII) Limited, OceanBase, and Peking University proposed a graph contrastive masked autoencoder (#GCMAE) framework that combines a masked autoencoder (#MAE) with contrastive learning (#CL) to enhance graph self-supervised learning (#GSSL). The results show that #GCMAE consistently provides good accuracy, with a maximum improvement of up to 3.2% over the best-performing baseline. 🏄♂️ Learn more: https://lnkd.in/gXgy2az6
Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning (arxiv.org)
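A hedged sketch of the complementary objective described above: a single encoder trained with both a masked-reconstruction (MAE-style) loss and an InfoNCE contrastive loss between two views. Real GCMAE operates on graphs with a GNN encoder; plain MLPs on node features, the noise-based second view, and the loss weighting are all simplifying assumptions here.

```python
# Joint MAE-style reconstruction + InfoNCE contrastive objective (simplified).
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU())
decoder = torch.nn.Linear(64, 32)

def info_nce(z1, z2, tau=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau           # pairwise cosine similarities
    targets = torch.arange(z1.shape[0])  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

x = torch.randn(128, 32)                 # stand-in for node features
mask = torch.rand(128, 32) < 0.5         # MAE-style random masking
z_masked = encoder(x * ~mask)
# Generative term: reconstruct only the masked-out entries.
recon_loss = F.mse_loss(decoder(z_masked)[mask], x[mask])

# Contrastive term: a second, lightly perturbed view of the same inputs.
z_view = encoder(x + 0.1 * torch.randn_like(x))
contrastive_loss = info_nce(z_masked, z_view)

loss = recon_loss + 0.5 * contrastive_loss  # weighting is an assumption
loss.backward()
```

The intuition behind the paper's title shows up directly in the loss: the reconstruction term forces the encoder to keep local detail, while the contrastive term spreads representations apart, and one encoder benefits from both signals.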