Great work is often not recognized or appreciated in the moment. Such was the case with the support vector machine (SVM), the crown jewel of classical machine learning. Here is the story ...

Way back in 1964, in the then-U.S.S.R., Vladimir Vapnik and Alexey Chervonenkis invented a classification method based on maximum-margin hyperplanes. That was a whopping four decades before the fast GPU processors that ushered in the deep learning era. In 1990, Vapnik immigrated to the United States to work at AT&T Bell Labs. There, he collaborated with Bernhard Boser and Isabelle Guyon to apply the "kernel trick" to his 1964 maximum-margin hyperplane method, thereby yielding the powerful support vector machine we know today.

Excited about this great new discovery, in 1992 Vapnik submitted three papers describing the SVM to NIPS (now NeurIPS), the premier machine learning conference. All three papers were swiftly rejected by the reviewers. Today, the SVM stands as a towering monument in the world of classical machine learning, bridging powerful ideas, frameworks, and eras. It turned out that the reviewers were squarely wrong.

Great work is often not recognized or appreciated in the moment. If you are working on something today and are not getting the recognition and support you believe your work deserves, think of Vapnik and keep going. You are in good company.

----

Stay tuned for the entire video series of all 20 chapters from my book "The Foundational Mathematics of Artificial Intelligence." See comments for details.

#ArtificialIntelligence #Mathematics
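For anyone curious what the kernel trick adds on top of a maximum-margin hyperplane, here is a minimal illustrative sketch (not from the book or the post) using scikit-learn's SVC. The dataset and hyperparameters are arbitrary assumptions chosen only to show that a kernelized SVM can separate data that no single hyperplane in the original space can.

```python
# Minimal sketch (illustrative only): maximum-margin hyperplane vs. kernel SVM.
# Assumes scikit-learn is installed; dataset and hyperparameters are arbitrary.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The 1964 idea: a maximum-margin separating hyperplane (linear kernel).
linear_svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# The kernel trick: implicitly map the data into a higher-dimensional space
# where a separating hyperplane does exist (here, via an RBF kernel).
kernel_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print("Linear SVM accuracy:    ", linear_svm.score(X_test, y_test))  # near chance
print("RBF-kernel SVM accuracy:", kernel_svm.score(X_test, y_test))  # near 1.0
```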
Excellent lecture Stephen Odaibo, MD, MS(Math), MS(Comp. Sci.). I just read up on the kernel trick this week, but you made it look like bread and butter in no time. I can see that you have a book for most of this, but do you have a course, a YouTube channel, or any other place where I can watch all your videos? I wouldn't mind diving deeper.
The issue seems to be that most reviewers care more about promoting their own work than what is actually best. There is a severe case of that in finance and economics research.
Stephen Odaibo, MD, MS(Math), MS(Comp. Sci.) Sadly, it's still not recognised. We have poor implementations of SVM that have made it go out of favor (but what many folks don't understand is that the last layer of any DNN is a classification layer).
I fell in love with the kernel trick at the European Symposium on Artificial Neural Networks (ESANN) 2002. It was explained so well that I got it. It had the same impact on me that Jensen's inequality had when I learned it in statistics.
This is interesting. Had no clue
Thanks for the great SVM story.
NeurIPS is not a premier conference.
Really insightful. Thank you Stephen Odaibo, MD, MS(Math), MS(Comp. Sci.) for sharing.
Interesting and insightful, Stephen Odaibo, MD, MS(Math), MS(Comp. Sci.), thank you for sharing.
CEO & Founder RETINA-AI Health, Inc. | Retina Specialist | Healthcare AI expert | Math/Comp Sci. | AI Engineer
For comprehensive coverage of the Foundational Mathematics of AI, you can check out my book.

BOOK CHAPTERS:
1. The True History of AI
2. The Building Blocks (Calculus & Linear Algebra)
3. Linear Regression
4. Logistic Regression
5. Constrained Optimization
6. Support Vector Machines
7. Fourier Transforms
8. Eigenvalue Decomposition
9. Singular Value Decomposition
10. Principal Component Analysis
11. Neural Networks
12. Deep Reinforcement Learning
13. Generative Adversarial Networks
14. Variational Autoencoders
15. Diffusion Denoising Probabilistic Models
16. Word Embedding: Word2Vec
17. Attention Mechanism
18. Transformer Architecture
19. Contrastive Learning Image Pretraining (CLIP)
20. Large Language Models

https://www.amazon.com/gp/aw/d/0997116323/ref=tmm_pap_swatch_0?ie=UTF8&qid=&sr=