Jens Skakkebaek
Copenhagen, Capital Region, Denmark
2K followers
500+ connections
Activity
-
Prediction for 2024: email will make a come-back for business 📧 What?!?! Why? We have reached peak focus on speed and always on, with never…
Jens Skakkebaek likes this
-
I am thrilled to announce that I have been nominated for the Nordic DAIR Award in the category “AI Professional of the Year 2023”. This recognition…
Jens Skakkebaek likes this
-
A reindeer under a beach umbrella? Let’s make sure they have their natural habitat. Learn more about how we’re enabling a sustainable…
Jens Skakkebaek likes this
More activity by Jens
-
Just spoke with a founder who's shutting down their company. Over the last few years, the company built a few products that people oohed and ahhed…
Jens Skakkebaek likes this
Other similar profiles
- Aparna Sinha, Palo Alto, CA
- Abhishek Singhal, Palo Alto, CA
- Manish Sainani 🤫, Greater Seattle area
- Tript Singh Lamba, Cupertino, CA
- Babak Pahlavan, Palo Alto, CA
- Maggie Mae, Greater Seattle area
- Craig Wiley, Greater Seattle area
- Roy Frenkiel, San Francisco Bay Area
- Sudhir Hasbe, Mercer Island, WA
- Andy Bird, Frisco, TX
- Josh Siegel, Los Angeles, CA
- Nitin Julka, Beachwood, OH
- Raghav Singh, Northfield, MN
- Sandeep Thakur, San Francisco Bay Area
- Rene Kolga, San Jose, CA
- Ziggy Lin, Greater Seattle area
- Anoop Sreenivasan, Sunnyvale, CA
- Travis Bowie, San Francisco Bay Area
- Robby Stein, San Francisco Bay Area
See more posts
-
Cansu Canca, Ph.D.
Join me for a breakfast discussion on #ResponsibleAI tomorrow at the #GenAI World conference! What are the real-world challenges leaders face with #RAI? How do you navigate the practical hurdles of integrating RAI into your business? We'll share insights, strategies, and experiences over some coffee ☕ #AIethics #TechEthics #WomenInAI
492 comments -
Dan Smith
One of the best things about generative AI is when it suggests things you didn't expect. Creativity is awesome, but many use cases being implemented right now still expect specific content or formatting in the output, or for the generated text to adhere to the persona/GPT/custom instructions. When we introduce changes to configuration values, model versions, or other factors, how will we determine if existing use cases that customers rely on today will continue to work? Testing, of course! We can write test cases based on current outputs or align on the definition of a "good" response by outlining key points we expect to see included in the generated response. Some examples:
- Code Requests: a code block in the requested language that solves the prompt, adheres to security standards (OWASP, NIST, ISO, etc.), modifies only the requested code parts, and includes an explanation.
- Document Summaries: major points of the source document, commentary, and at least one thought-provoking follow-up question.
- Test Case Generation: the requested number of cases with a variety of scenarios.
What metrics will we use to measure our testing? Once we establish key points like the above that we can expect in a generated response, we need a way to measure them that matches the probabilistic, non-deterministic nature of gen AI systems.
1. Precision measures the accuracy of the relevant information provided in the response. True Positives (TP): things that should be there. False Positives (FP): things that should not be there (irrelevant or incorrect). Example: if the response correctly includes 7 out of 8 expected key points but also includes 3 irrelevant details, the precision is 70%.
2. Recall measures completeness, the degree to which the response captures all relevant information. True Positives (TP): things that should be there. False Negatives (FN): things expected but missing. Example: if the response correctly includes 7 out of 8 key points and misses 1 key point, the recall is 87.5%.
3. F1 Score is the harmonic mean of precision and recall. It provides a fair balance between the two, especially when there is an uneven class distribution or when one metric is much lower than the other. Example: using the precision and recall above, the F1 score is 0.778 (77.8%). The highest possible F-score is 1.0 (100%), indicating perfect precision and recall; the lowest is 0, when precision or recall is zero.
We could look at the average F1 across X generated test case examples to know whether a given test case has improved or declined after a recent change or model update. It is a fair, balanced measure if we can only look at one value to decide whether we have improved or regressed. There are other facets of AI output that can and should be tested, like creativity, diversity of responses, and effectiveness of bias prevention. What metrics are you monitoring in your tools?
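The arithmetic in those worked examples can be sketched in a few lines of Python (the function and parameter names here are mine, not from any testing framework):

```python
def precision_recall_f1(n_expected, n_found, n_extras):
    """Score one generated response against a list of expected key points.

    n_expected: number of key points the response should contain
    n_found:    how many of them actually appeared
    n_extras:   irrelevant or incorrect items the response added
    """
    tp, fp, fn = n_found, n_extras, n_expected - n_found
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# The worked example above: 7 of 8 key points found, plus 3 irrelevant details.
p, r, f1 = precision_recall_f1(n_expected=8, n_found=7, n_extras=3)
# p = 0.70, r = 0.875, f1 ≈ 0.778
```

Averaging `f1` over a batch of generated responses gives the single improved-or-regressed signal described above.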
8 -
Sanjay Venkatraman
As we invest more time in evaluating AI technologies and speak with businesses about their challenges, we are seeing a pattern emerge about productivity and how this can be enhanced with AI. The following could be one version of how we see the future evolving over the next year or two…
161 comments -
Edgar Bermudez, PhD
Large language models (LLMs) are increasingly being explored for their potential in medicine, but applying them effectively to complex clinical scenarios remains a challenge. The paper "MDAgents: an adaptive collaboration of LLMs for medical decision-making" by Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Marzyeh Ghassemi, Cynthia Breazeal, and Hae Won Park (NeurIPS 2024) introduces a novel framework aimed at addressing this gap. MDAgents structures collaboration among LLMs dynamically, assigning roles—either independent or group-based—based on the complexity of the medical task. This design mimics real-world clinical decision-making processes, where collaboration is tailored to the case at hand. Evaluated on 10 medical benchmarks, MDAgents achieved leading performance on 7 tasks and demonstrated an 11.8% accuracy boost when incorporating external medical knowledge and moderator reviews. A case study further illustrates how the framework synthesizes differing perspectives among LLMs to reach accurate, consensus-driven diagnoses. This framework stands out for its adaptability and its emphasis on collaboration. Ablation studies underscore the importance of the system’s components, from its ability to classify medical complexity to the integration of multi-modal reasoning. By reflecting the dynamic, consultative nature of clinical decision-making, MDAgents provides a thoughtful approach to enhancing LLM-assisted medical diagnosis. To me, this paper is interesting because it takes a practical step toward aligning AI capabilities with the intricacies of healthcare. By emulating real-world medical collaboration, MDAgents not only improves performance but also lays the groundwork for more robust and trustworthy AI systems in clinical settings. What are your thoughts on how AI frameworks can better reflect real-world decision-making processes in fields like healthcare? 
paper: https://lnkd.in/gd5fX4Ef #artificialintelligence #AIhealthcare #LLMs #AIAgents
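As a toy illustration of the adaptive idea (this is not the authors' code; the tiers and role names are invented for illustration), the routing step could look like:

```python
def route_case(complexity: str) -> dict:
    """Map an assessed task complexity to a collaboration structure,
    echoing MDAgents' solo-vs-group design. Tiers and roles are invented."""
    if complexity == "low":
        # simple cases: a single LLM answers independently
        return {"mode": "solo", "agents": ["generalist_llm"]}
    if complexity == "moderate":
        # harder cases: a small group with a moderator to synthesize views
        return {"mode": "group",
                "agents": ["specialist_llm_a", "specialist_llm_b", "moderator_llm"]}
    # hardest cases: a larger team plus moderator review
    return {"mode": "team",
            "agents": [f"specialist_llm_{i}" for i in range(5)] + ["moderator_llm"]}
```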
11 -
Gabriel Westman
The Läkemedelsverket pilot secure processing environment (SPE) is coming together nicely. We now have a first iteration of the technical and legal framework in place, and I can proudly announce that we are onboarding the first of our national agency partners in the system. Sharing compute, foundation models, code and (when possible) data makes sense in many ways, but requires hard work for the synergies to become apparent. Starting from a bare metal DGX H100, we have virtualised the hardware resources and orchestrated them through a Kubernetes setup to allow dynamic allocation of compute in line with requirements for each user, carefully separating data layers from system setup and personal configuration. Together with E‑hälsomyndigheten, Folkhälsomyndigheten, Socialstyrelsen and Tandvårds- och läkemedelsförmånsverket (TLV), we will further develop the framework to include an agreement on personal data protection and confidentiality classification, to expand the potential and gather further experience on how a national resource for secure processing of EHDS data could be designed.
1015 comments -
Benjamin Flores
𝐓𝐡𝐨𝐮𝐠𝐡𝐭𝐬 𝐟𝐨𝐫 𝐓𝐡𝐮𝐫𝐬𝐝𝐚𝐲 𝘉𝘶𝘪𝘭𝘥𝘪𝘯𝘨 𝘍𝘰𝘶𝘯𝘥𝘢𝘵𝘪𝘰𝘯𝘢𝘭 𝘋𝘢𝘵𝘢 If you want to build foundational data, build the foundation to enable your scientists. Your scientists will ask the questions to enable AI. As AI is pushing forward, the concept of foundational data is increasing in popularity. I think it is great, but at the same time, it is reactive in nature to enable AI. Way before AI, the scientists that I worked with were trying to ask questions of the data... "If I increase (specific component) in my feed, will it reduce my end of run lactate spike?" We had put the data into several types of database and neural network technologies to attempt to ask questions of the data. Granted, we were ahead of our time because we put the business of the business before the newest thing, but we had gained an understanding of "what goes where". Santha Ramakrishnan PhD mentions that we should "Plan for access and integration to data so models are not just built on PowerPoint. Think of how data will be used multimodally when you plan for management of select data domains". We must take a step back from our own "world" and understand that our data ties into a bigger picture, from research, to development, to the vivarium, to the clinic, to commercial... through the cloning, through the cycles and functional groups, to pharmacokinetics and pharmacodynamics. When you are building your foundation... Build it for your scientists... Build it for the patients; at the end of the day, we are the patients too. Then build it for AI.
11 -
Justin H. Johnson
🎵 Coding the Soundtrack of Tomorrow: An AI-Powered Journey Through Music and Tech 🎵 "From Algorithms to Airwaves" I am excited to share my latest blog chronicling an unexpected journey from AI-generated music to building a full-fledged digital music distribution platform! 🚀 Discover how I leveraged cutting-edge tools like: - Suno for AI music generation - next.js and Vercel for web development - Cursor.ai for AI-assisted coding - Supabase for backend solutions - Freecords for global music distribution 👉 Read how these technologies democratized the process, allowing me to create and distribute AI-generated music worldwide on a shoestring budget through imbusion studios (https://imbusion.io). 🎧 Listen to the results on major platforms: Spotify: https://lnkd.in/dQmbwuKh Apple Music: https://lnkd.in/d5anB-HW This journey showcases the power of modern tech in transforming ideas into reality. Whether you're a developer, musician, or tech enthusiast, there's something in this story for you. #AIMusic #AI #DataScience #TechInnovation
4 -
Edgar Bermudez, PhD
One of the main limitations of using LLMs in certain applications is when their outputs are not “truthful” to the sources they were trained on. One option is to require the LLM to cite the source alongside the output. In “Source-aware training enables knowledge attribution in language models” by Khalifa et al., 2024, the authors present a post-pretraining approach to get an LLM to cite its pretraining sources when prompted. The approach consists of training the LLM to associate source identifiers with the knowledge in each document and to cite the associated pretraining source. To me, this is important because, according to the authors, the approach can be applied to off-the-shelf LLMs to achieve responses with “truthful” attribution to their pretraining sources without compromising model response quality. Enjoy! Paper: https://lnkd.in/g_umKBMX #LLM #AI
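A minimal sketch of the data-preparation side of that idea, assuming the simplest possible scheme of wrapping each pretraining document in source-identifier tokens (the tag format and names here are mine, not the paper's):

```python
def tag_with_source(doc_id: str, text: str) -> str:
    """Wrap a document in source-identifier tokens so the model can learn
    to associate its content with doc_id and later emit it as a citation."""
    return f"<src:{doc_id}> {text} </src:{doc_id}>"

# Hypothetical corpus keyed by document identifier.
corpus = {"pubmed_42": "Aspirin irreversibly inhibits cyclooxygenase."}
train_set = [tag_with_source(doc_id, text) for doc_id, text in corpus.items()]
```

After training on such tagged text, the model can be fine-tuned or prompted to emit the `<src:…>` identifier along with any claim it reproduces.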
8 -
Caiming Xiong
A time-series foundation model is valuable because time-series data is ubiquitous across industries, yet traditional modeling approaches often fail to fully exploit the complex temporal dependencies and cross-domain similarities present in such data. We introduce Moirai-MoE: the first mixture-of-experts time-series foundation model for universal forecasting, with state-of-the-art results. Takeaways: 1. Autonomous Specialization: the model autonomously achieves token-level specialization, enhancing efficiency and performance. 2. Performance Boost: delivers a remarkable 17% improvement over its predecessor, Moirai, without increasing the model size. 3. Two model variants: 117M parameters with 11M activated parameters, and 935M parameters with 86M activated parameters. 4. Limitations of Existing LLMs: current large language models (LLMs) struggle with time-series forecasting tasks. Moirai-MoE outperforms GPT-4o, a model 1000+ times its size, on time-series tasks. 5. We need more time-series foundation models! Paper: https://bit.ly/3O1yiRQ Code: https://bit.ly/48FAF6i Models: https://bit.ly/3YNozDY
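The gap between activated and total parameters comes from sparse routing: each token runs only the top-k experts. A minimal NumPy sketch of that generic mechanism (this is the standard MoE pattern, not Moirai-MoE's actual code):

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route one token embedding x through the top-k of n experts.
    Only k expert networks execute, so activated parameters stay far
    below the total parameter count."""
    logits = gate_w @ x                    # gating scores, shape (n_experts,)
    top = np.argsort(logits)[-k:]          # indices of the k best experts
    w = np.exp(logits[top])
    w /= w.sum()                           # softmax over the selected experts only
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

# Tiny worked example: 3 "experts" that just scale their input.
x = np.ones(4)
gate_w = np.array([[1., 0., 0., 0.], [0., 2., 0., 0.], [0., 0., 3., 0.]])
experts = [lambda v: 1 * v, lambda v: 2 * v, lambda v: 3 * v]
out = moe_layer(x, gate_w, experts)  # experts 1 and 2 are selected
```

Token-level specialization then emerges from training the gate so that different token types consistently route to different experts.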
24617 comments -
Nick Tarazona, MD
👉🏼 Accuracy and Repeatability of ChatGPT Based on a Set of Multiple-Choice Questions on Objective Tests of Hearing 🤓 Krzysztof Kochanek 👇🏻 https://lnkd.in/exkWTN8g 🔍 Focus on data insights: - ChatGPT 4 showed higher accuracy (65-69%) compared to ChatGPT 3.5 (48-49%). - Short-term repeatability was assessed over four separate days, showing consistent performance improvements. - Percent agreement and Cohen's Kappa were used to evaluate response consistency over time. 💡 Main outcomes and implications: - ChatGPT 4 demonstrated superior accuracy and repeatability over ChatGPT 3.5. - Variability in responses raises concerns about professional applications of both versions. - The study highlights the importance of continuous improvement in AI models for reliable outcomes. 📚 Field significance: - Advances in AI technology like ChatGPT can enhance diagnostic processes in various fields. - Understanding the limitations and strengths of AI tools is crucial for informed decision-making in healthcare and other industries. 🗄️: [#accuracy #repeatability #AI #healthcare #diagnostics]
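Percent agreement and Cohen's Kappa, the two consistency measures used in the study, are straightforward to compute; a quick sketch (the data and labels below are invented for illustration):

```python
from collections import Counter

def agreement_and_kappa(run_a, run_b):
    """Percent agreement and Cohen's Kappa between two runs of the same
    multiple-choice test (equal-length lists of graded answers)."""
    n = len(run_a)
    po = sum(a == b for a, b in zip(run_a, run_b)) / n                 # observed agreement
    ca, cb = Counter(run_a), Counter(run_b)
    pe = sum(ca[l] * cb[l] for l in set(run_a) | set(run_b)) / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Two hypothetical test days, answers graded correct ('c') / incorrect ('i'):
po, kappa = agreement_and_kappa(list("ccic"), list("ciic"))
```

Kappa corrects raw agreement for what two runs would agree on by chance, which is why it is the better signal for day-to-day model consistency.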
-
Joseph Bastante
Happy Monday. In case you missed the post below, definitely one to review if you're building solutions with Claude. I know many are concerned about the cost of running AI solutions. Prompt caching, if understood and used in the right way, can definitely help reduce costs. Take a look. I imagine other frontier model providers will also offer caching in the coming months.
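For reference, Anthropic's documented prompt caching works by marking a large, stable prefix (typically the system prompt) with a `cache_control` field; a sketch of the request body follows (model id and field layout per the docs at the time of writing, so verify against the current API reference before relying on it):

```python
def cached_request(system_prompt: str, user_msg: str) -> dict:
    """Build a Messages API request body whose system prompt is marked
    cacheable, so repeated calls reuse the processed prefix instead of
    paying full input-token price for it each time."""
    return {
        "model": "claude-3-5-sonnet-20241022",   # example model id
        "max_tokens": 1024,
        "system": [{
            "type": "text",
            "text": system_prompt,               # large, stable prefix worth caching
            "cache_control": {"type": "ephemeral"},
        }],
        "messages": [{"role": "user", "content": user_msg}],
    }
```

The savings come only when the cached prefix is byte-identical across calls, so keep volatile content (timestamps, user data) out of the cached block.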
115 comments -
Pramod Rao, MBA, PMP, CPM
Meta's case for open-source LLMs - https://lnkd.in/eZY6q7eC What impressed me about Llama 3.1 was its "multi-token prediction". Most models respond one word at a time, which takes a lot of compute power to process all the information needed to generate the final outcome. Meta changed this: now, with Llama, you can output multiple tokens at a time, making inference a lot faster! Love this new development. #GenerativeAI
-
Daniel Grahn, PhD
Do you remember the last time you called a company and a human answered? I don't mean after a menu and a 15-minute wait. In a world increasingly mediated by technology, true human interaction can feel like a luxury. As companies race to implement Generative AI, there will be a core choice: do we show our hand to our clients? I fear that companies who are too obvious about their use of GenAI may devalue their brands. Clever companies will be subtle in their use of GenAI, accelerating behind-the-scenes tasks to maximize human interaction. #DontShowYourHand #GenAI #LLM #AI #ML
20 -
Anish Agarwal
OpenAI is set to release an autonomous AI agent called “Operator” in January as a research preview and developer tool. Unlike standard AI models, the Operator will be able to independently control computers and perform tasks on its own. OpenAI has kept the release date for a consumer version under wraps, but its move signals a major shift toward AI that can actively engage with digital interfaces. #ai #genai #generativeai #openai https://lnkd.in/g8-zy2y8
32 -
Justin H. Johnson
🚀 Exciting Advances in Sustainable AI! 🌟 A recent article from VentureBeat highlights a development in AI technology: a new transformer architecture designed to enable powerful large language models (LLMs) without relying on GPUs. This innovation could change the way we deploy and scale AI systems, making them more accessible and efficient. 📝 The referenced paper titled "ECO: Environmentally Conscious Optimizations for Transformer-Based Models" delves into optimizing transformer models to reduce their environmental impact. The authors present methods to significantly lower energy consumption and computational costs while maintaining high performance. 📄 Read the full journal article for an in-depth understanding of these technologies: - [VentureBeat Article](https://lnkd.in/e8QZrwwB) - [ECO Paper on arXiv](https://lnkd.in/eFXGfEgT) #AI #MachineLearning #Innovation #Sustainability #TechNews
19 -
Dr. Rebecca Portnoff
Today’s kick-off call with the EU AI Office for the drawing-up of the first General-Purpose AI Code of Practice highlighted the significant work that has already gone into this effort, and that still remains, to ensure we’re building #trustworthyAI and #responsibleAI. Cross-sector stakeholders coming together will be an important part of success here, and I’m honored to be included in the expert group assisting in this effort. https://lnkd.in/eK5fegz5
301 comments -
Amarda Shehu
There is often an unspoken assumption by ML researchers that all we need to make progress and model every aspect of our world is data. We do not subscribe to this in my lab. In fact, we believe that data will never be enough. Our experiences with the nuances and complexities of scientific problems have alerted us to the insufficiency of data to capture continuous physical processes, which after all govern our biological and physical world. An example of this is this series of two papers led by my wonderful PhD student, Anowarul Kabir, and advanced by a precious multi-year collaboration of my lab with Los Alamos National Lab: Anowarul Kabir, Manish Bhattarai, Kim Rasmussen, Amarda Shehu, Anny Usheva, Alan R Bishop, and Boian S Alexandrov. Examining DNA Breathing with pyDNA-EPBD. Bioinformatics 39(11):btad699, 2023. https://lnkd.in/gZaHE4hS Anowarul Kabir, Manish Bhattarai, Selma Peterson, Yonatan Najman-Licht, Kim Ø Rasmussen, Amarda Shehu, Alan R Bishop, Boian Alexandrov, Anny Usheva. DNA Breathing Integration with Deep Learning Foundational Model Advances Genome-wide Binding Prediction of Human Transcription Factors. Nucleic Acids Research: gkae783, 2024. https://lnkd.in/gfTTqcPy Our goal: advance an exceptionally challenging problem in molecular biology, prediction of transcription factor binding sites. Our first step: capture the underlying physics that is missing in the data. Our second step: integrate that with the data we have in a foundation model for predicting transcription factor binding sites. Performance improves. Most importantly, when we look at the sequence motifs that constitute "signatures" of what makes a transcription factor binding site, we obtain answers. All in the open, nothing opaque.
331 comments -
Olivier Koch
I am releasing the Handbook of Applied AI teams 🚀 - How to spot a great scientist in 3 minutes! - The seven pillars of Applied AI - The six-week rule - And really, everything you wanted to know about AI teams without daring to ask! https://lnkd.in/eW9TiG8v #ai #machinelearning
828 comments