We're thrilled to announce that Audino 2.0, our powerful audio annotation tool, is now open-source and available on GitHub! 🎧✨

Key Highlights:
- Enhanced Performance: Leveraging CVAT's robust backend.
- Supported by HUMAN Protocol: Advancing our mission to democratise AI development.
- Versatile Tool: Ideal for Voice Activity Detection, Speaker Diarization, ASR, and more.

Audino empowers researchers, developers, and organisations to create high-quality labelled datasets for audio machine-learning tasks.

Features Include:
✅ Multi-language support
✅ Emoji annotations
✅ Flexible label creation
✅ User-level project management
✅ Various export formats (Common Voice, LibriSpeech, and many more)

We're excited to see how the community will use and contribute to Audino 2.0. Check out our GitHub repository to get started! https://lnkd.in/gjZbSw5X

Thanks to CVAT.AI and the HUMAN Protocol Foundation for their support.

#OpenSource #AudioAnnotation #MachineLearning #AI #ArtificialIntelligence #AudioProcessing #TechInnovation #DataScience #AIResearch #VoiceRecognition #DeepLearning #AudioML #TechForGood #OpenSourceCommunity #GitHub #Innovation #DataAnnotation #MLCommunity #AIForAll #StartupTech #AIDevelopment #FutureOfAI #AICommunity #AudioTech #AIProjects