I just hit "submit" on the final project for the last required class in my AI major, AI 574: Natural Language Processing! I haven't officially graduated yet, and I still have a capstone and an independent study to figure out. I'll keep my focus on generative AI and NLP, because what I've seen in academia and professionally is mind-blowing, innovative, and fun. It's pretty cool to study a field that is evolving right now. I'll be continuing to work with some great minds at Penn State, so if you have questions or ideas for projects, hit me up. #ai #nlp #rag #graphRAG #genAI #km #psu
-
🖥️ Excited to share my latest article on Medium: "Unlocking the Secrets of Natural Language Processing: A Beginner's Guide!" In this post, I explore the fundamentals of NLP, its importance in today's digital landscape, and key concepts like text preprocessing, text representation, and common techniques. Whether you're new to AI or looking to deepen your understanding, this guide is for you! 🔗 Read the full article here https://lnkd.in/dpaATWub Let's embark on this NLP journey together! I'd love to hear your thoughts and questions in the comments. #NLP #ArtificialIntelligence #MachineLearning #NaturalLanguageProcessing #TechForGood
-
Just completed the DeepLearning.AI Natural Language Processing Specialization. While I appreciated reviewing foundational approaches like BoW, HMMs, and LSTMs, I would rather have spent more time on cutting-edge advancements in NLP, particularly a deeper dive into the capabilities of Hugging Face Transformers. These certifications are fun, but I often feel let down: they are either too focused on building from scratch, not practical enough for professional work, or not centered on the current state of the art. What are your thoughts? #LLM #NLP #deeplearning
-
In my work on detecting AI-generated sentences, I combine traditional machine learning models with natural language processing. Here's how NLP enhances my approach:
- Feature Extraction: Bag of Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) transform sentences into numerical features the models can work with.
- Text Preprocessing: Removing stop words cleans and normalizes the text data, improving feature quality.
- Contextual Representation: Incorporating n-grams captures more context, which improves detection.
- Sentiment Analysis: Sentiment scores (derived with the Brown corpus) add another signal for interpreting each sentence.
- Model Selection: SVM and Naive Bayes classify sentences into the appropriate categories based on the engineered features.
Using these methods together has further improved detection performance. Can't wait for the challenges that are next in line! #NLP #DataScience #CTP #MachineLearning
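A minimal sketch of what such a hybrid pipeline could look like, assuming scikit-learn; the sample sentences and labels are hypothetical placeholders, not data from the project:

```python
# Hypothetical sketch: TF-IDF features (unigrams + bigrams, stop words removed)
# feeding two classical classifiers. The texts and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = [
    "The quarterly results exceeded expectations.",        # assumed human-written
    "As an AI language model, I can summarize the text.",  # assumed AI-generated
]
labels = [0, 1]  # 0 = human, 1 = AI-generated

def make_pipeline(classifier):
    # Stop-word removal and n-gram context both happen inside the vectorizer.
    return Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english", ngram_range=(1, 2))),
        ("clf", classifier),
    ])

for name, clf in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]:
    model = make_pipeline(clf)
    model.fit(texts, labels)
    print(name, model.predict(["This response was generated to assist you."]))
```

Both classifiers accept the same TF-IDF features, which makes it easy to compare them side by side on the engineered representation.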
-
#NLP, or Natural Language Processing, is one of the most captivating domains I have ventured into! The way it connects speech, writing, and computer science is truly astonishing. In our latest publication at Synapse Journal, I explored NLP with the aim of digging deeper into this fascinating field. Alongside the article, you can access the corresponding code on my #Kaggle account through this link: https://lnkd.in/dASY5Vqk
-
🔍 Staying updated with the latest in machine learning research, I came across a recent study by Lovish Madaan from GenAI, Meta, in partnership with researchers from Stanford University. The study focuses on understanding and quantifying variance in evaluation benchmarks for large language models.
- Research goal: Investigate the variance in evaluation benchmarks and provide recommendations to mitigate its effects.
- Research methodology: The researchers analyzed 13 popular NLP benchmarks using over 280 models, including both publicly available and custom-trained models, to measure different types of variance, such as seed variance and monotonicity during training.
- Key findings: The study revealed significant variance in benchmark scores due to factors like random seed changes. Simple changes, such as framing choice tasks as completion tasks, reduced variance for smaller models, while traditional methods like item response theory were less effective.
- Practical implications: These findings encourage practitioners to account for variance when comparing model performances and suggest techniques to reduce it. This is particularly relevant in academic research, industry R&D, and any application involving the development and assessment of AI models.
Stay tuned for more updates as I continue to share the latest from the world of ML and data science! #LabelYourData #TechNews #DeepLearning #NLP #MachineLearning #Innovation #AIResearch #MLResearch
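As a rough illustration of the kind of seed-variance analysis described (not the paper's actual code), one could evaluate the same model under several random seeds and summarize the spread per benchmark; the numbers below are invented placeholders:

```python
# Illustrative only: quantify how much a benchmark score moves when only the
# random seed changes. Scores are made up, not taken from the cited study.
import statistics

# benchmark -> accuracy of the same model evaluated with five different seeds
scores_by_seed = {
    "hellaswag": [0.712, 0.705, 0.718, 0.709, 0.714],
    "arc_easy":  [0.631, 0.644, 0.628, 0.639, 0.633],
}

for bench, scores in scores_by_seed.items():
    mean = statistics.mean(scores)
    std = statistics.stdev(scores)        # seed variance, reported as std deviation
    spread = max(scores) - min(scores)    # worst-case gap from seed choice alone
    print(f"{bench}: mean={mean:.3f}, seed std={std:.3f}, spread={spread:.3f}")
```

If the spread is larger than the gap between two models you are comparing, the ranking may just be noise from the seed.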
-
🎉 Course Completion Announcement 🎉 I am excited to share that I have successfully completed the Introduction to NLP course from Great Learning! 🚀 This course provided me with a solid foundation in Natural Language Processing, where I learned key concepts like text preprocessing, tokenization, lemmatization, stemming, and more. NLP plays a vital role in shaping AI technologies, and I’m thrilled to have deepened my understanding of this fascinating field. I’m grateful for the opportunity to enhance my skills and look forward to applying my knowledge in real-world projects. #NLP #NaturalLanguageProcessing #GreatLearning #CourseCompletion #AI #MachineLearning #TechSkills
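For anyone curious what those preprocessing steps look like in code, here is a small sketch using NLTK (one common choice; not necessarily the library used in the course, and the example sentence is arbitrary):

```python
# Minimal preprocessing sketch with NLTK: tokenization, stemming, lemmatization.
# Assumes NLTK is installed; resource downloads are done inline for convenience.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases
nltk.download("wordnet", quiet=True)

sentence = "The researchers were studying how languages evolve."
tokens = word_tokenize(sentence)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print("tokens:", tokens)
print("stems: ", [stemmer.stem(t) for t in tokens])
print("lemmas:", [lemmatizer.lemmatize(t) for t in tokens])
```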
-
Last month, I submitted my undergraduate thesis book, in which we researched "Automated Literature Review Generation using NLP Techniques & Retrieval Augmented Generation utilizing LLM". Our main contributions were:
1. Developing a system that automatically generates the Literature Review section of a research paper using only the PDFs of the related papers as input.
2. Comparing results across several natural language processing approaches:
- Transformer: SimpleT5 model
- Frequency-based approach: spaCy library
- Large Language Model: GPT-3.5-TURBO-0125
Watch the video for a conceptual tool demo. Currently, it is hosted locally with basic features. We plan to deploy the web tool in the future with additional customization options and model choices, such as BERT, Gemini, and LLaMA. I want to especially thank our supervisor, T Gopi Krishna Sir, for his guidance, and my thesis mates Mahdi Mohtasim and Shakil Mosharrof for their contributions. #NLP #AI #LLM
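As a rough illustration of the frequency-based approach mentioned above (not the thesis code itself), a spaCy-based extractive scorer might rank sentences by how many high-frequency content words they contain; the sample text is a placeholder:

```python
# Sketch of a frequency-based extractive summarizer with spaCy.
# Assumes the small English model is installed: python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def frequency_summary(text: str, n_sentences: int = 2) -> str:
    doc = nlp(text)
    # Count content words (no stop words or punctuation), lowercased.
    freqs = Counter(
        tok.text.lower() for tok in doc if not tok.is_stop and not tok.is_punct
    )
    # Score each sentence by the summed frequency of its words, keep the top ones.
    scored = sorted(
        doc.sents,
        key=lambda s: sum(freqs.get(t.text.lower(), 0) for t in s),
        reverse=True,
    )
    top = sorted(scored[:n_sentences], key=lambda s: s.start)  # restore original order
    return " ".join(s.text for s in top)

# Placeholder text standing in for a related paper's abstract.
sample = ("Retrieval augmented generation grounds model output in retrieved documents. "
          "It reduces hallucination. Literature reviews summarize prior work. "
          "Our tool builds reviews from PDFs.")
print(frequency_summary(sample, n_sentences=2))
```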
-
🚀 Very happy to share my latest blog post on quite an interesting topic. In this article, I dive into the advanced techniques that I used to create and supercharge a question-answering system by integrating the Kolmogorov-Arnold Theorem with state-of-the-art NLP models. By combining Dense Passage Retrieval (DPR), Kolmogorov-Arnold Networks, and T5 models, we create a powerful RAG pipeline that enhances the accuracy and relevance of generated answers.
🔍 Key Highlights:
- Enhanced Representation Learning: Leveraging KAN for better question embedding transformations.
- Improved Retrieval Accuracy: More precise context retrieval using FAISS.
- Superior Generation Quality: Fine-tuning T5 for higher quality answer generation.
Whether you're an NLP enthusiast or a professional in AI, this post provides valuable insights into building and optimizing advanced question-answering systems. Check out the full article on Medium! 👇 #NLP #AI #MachineLearning #DeepLearning #RAG #KolmogorovArnoldNetwork #DataScience #ArtificialIntelligence
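A hedged skeleton of a DPR + FAISS + T5 retrieval pipeline of the kind described; it omits the Kolmogorov-Arnold component, uses public Hugging Face checkpoints, and a toy two-passage corpus, so treat it as an outline rather than the article's implementation:

```python
# Outline of DPR retrieval over a FAISS index followed by T5 answer generation.
# Not the article's code: the KAN embedding transformation is left out, and the
# checkpoints/passages below are generic assumptions. Requires sentencepiece for T5.
import faiss
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
    T5ForConditionalGeneration, T5Tokenizer,
)

passages = ["Paris is the capital of France.", "The Seine flows through Paris."]

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

with torch.no_grad():
    ctx_emb = ctx_enc(**ctx_tok(passages, return_tensors="pt", padding=True)).pooler_output

index = faiss.IndexFlatIP(ctx_emb.shape[1])  # inner-product index, matching DPR's dot-product training
index.add(ctx_emb.numpy())

question = "What is the capital of France?"
with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
_, idx = index.search(q_emb.numpy(), 1)      # retrieve the best-matching passage

t5_tok = T5Tokenizer.from_pretrained("t5-small")
t5 = T5ForConditionalGeneration.from_pretrained("t5-small")
prompt = f"question: {question} context: {passages[idx[0][0]]}"
out = t5.generate(**t5_tok(prompt, return_tensors="pt"), max_new_tokens=32)
print(t5_tok.decode(out[0], skip_special_tokens=True))
```

In the approach the post describes, the question embedding would pass through a KAN layer before the FAISS search; that step is where this sketch deliberately stops short.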
-
Exciting news to share: I just obtained a new certification in ✨ Building Transformer-Based Natural Language Processing Applications from NVIDIA! Through this workshop, I gained a deep understanding of how Transformers are used as the basic building blocks of 📓 modern LLMs for NLP applications. Additionally, I learned how self-supervision improves upon the Transformer architecture in 🎈 BERT, Megatron, and other LLM variants for superior NLP results. I'm now able to leverage pretrained, modern LLM models to solve multiple NLP tasks such as 💡 text classification, NER, and question answering. And, I can manage inference challenges and deploy refined models for live applications. Looking forward to applying this knowledge to future projects! #NLP #Transformers #Certification
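A minimal sketch of applying pretrained transformers to those three tasks with Hugging Face pipelines (default public checkpoints; the workshop's NVIDIA-specific tooling and deployment stack are not reproduced here, and the example inputs are arbitrary):

```python
# Hedged sketch: pretrained transformer models applied to text classification,
# NER, and question answering via Hugging Face pipelines with default checkpoints.
from transformers import pipeline

classifier = pipeline("text-classification")  # e.g. sentiment-style classification
ner = pipeline("token-classification", aggregation_strategy="simple")  # named entity recognition
qa = pipeline("question-answering")

print(classifier("The model deployment went smoothly."))
print(ner("Megatron and BERT were developed at NVIDIA and Google."))
print(qa(question="Who developed BERT?",
         context="BERT was developed by researchers at Google in 2018."))
```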
-
🌟 Attention is All You Need 🌟 I'm thrilled to share that I've created a transformer from scratch using TensorFlow for text summarization! 🎉🚀 Feel free to check it out and let me know your thoughts! 🔗 [Transformer from Scratch for Text Summarizer](https://lnkd.in/gGyDcv4p) The Transformer, introduced in "Attention Is All You Need", is a deep learning architecture designed for sequence-to-sequence tasks. It has become the foundation for many state-of-the-art natural language processing models, like GPT and BERT. This project has been an incredible learning experience. Here are some key takeaways:
- Gained an in-depth understanding of seq2seq problems and various strategies to tackle them
- Mastered the concept of attention and how it revolutionizes seq2seq problem-solving
- Developed insights into the fundamental building blocks of large language models
This journey has not only enhanced my technical skills but also ignited my passion for NLP even further. What areas of AI and NLP are you most excited about? Let's connect and discuss! #MachineLearning #NLP #Transformer #TensorFlow #TextSummarization #DeepLearning
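For readers who want a taste of the core building block, here is a generic scaled dot-product attention sketch in TensorFlow; it is illustrative and not taken from the linked repository:

```python
# Generic scaled dot-product attention in TensorFlow -- the core operation behind
# "Attention Is All You Need". Illustrative only; not code from the linked project.
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (..., seq_len, depth)
    scores = tf.matmul(q, k, transpose_b=True)      # (..., seq_q, seq_k) similarity scores
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(dk)              # scale by sqrt(d_k) to stabilize gradients
    if mask is not None:
        scores += mask * -1e9                       # suppress masked (disallowed) positions
    weights = tf.nn.softmax(scores, axis=-1)        # attention distribution over keys
    return tf.matmul(weights, v), weights           # weighted sum of values

# Tiny smoke test with random tensors: batch of 1, 4 tokens, depth 8 (self-attention).
q = tf.random.normal((1, 4, 8))
out, attn = scaled_dot_product_attention(q, q, q)
print(out.shape, attn.shape)  # (1, 4, 8) (1, 4, 4)
```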
Virtual Chief Information Security Officer (vCISO) @ Thrive | CISSP | MBA | FullStack DevOps SecOps SecDev
https://github.com/ian-IBCIRL/AskAncestor — this has been in my imagination for a while, since the mid-noughties, but I reckon the tech might now make it work, if it isn't too fraught.