BJ Fogg, PhD
Stanford, California, United States
12K followers
500+ connections
Websites
- Personal Website: http://www.bjfogg.com
- Company Website: http://tinyhabits.com
- Company Website: http://behaviormodel.org
About
I split my time between Stanford University and industry. At Stanford I teach and run a…
Experience
Education
- Stanford University
Trained as an experimental psychologist, I investigated how computers -- from websites to mobile phones -- can motivate and persuade people. I once called this domain "captology" but I don't use that term much these days.
Languages
- Spanish (Professional working proficiency)
Other similar profiles
-
Alexa Clay
New York, NY -
Dennis Rebelo, Ph.D.
Wakefield, RI -
Kevin L McCrudden - Int'l Author - Speaker - Coach - Thought Leader - College Lecturer - Mentor
New York City Metropolitan Area -
Joe Eyerman
Leader and Innovator. Data and measurement, strategic planning, public sector research, applied social research, technology integration, public perceptions and security, extreme statistics
Arlington, VA -
Ajay Kapur
Stevenson Ranch, CA -
Abdiqani Farah
Author & Researcher
Nugaal, Somalia -
Marie Grinstead, Ph.D.
Botanical Extraction Scientist
Louisville, KY -
Steve VanderVeen
Ph.D. business prof. Experiential educator. Start-Up AcademE, Inc. founder. Historian. Co-discovering and co-developing faithful entrepreneurial leaders.
Holland, MI -
Dr. Steven Greer
Afton, VA -
Luke Williams
New York, NY -
Alex Soojung-Kim Pang
Helping create a million new years of free time. Author of REST; WORK LESS DO MORE; SHORTER; THE DISTRACTION ADDICTION; and the next book.
Menlo Park, CA -
April Ursula Fox
Las Vegas, NV -
Dr. SHIVA Ayyadurai, PhD (M.I.T.)
Cambridge, MA -
Kevin Van Den Wymelenberg
Eugene, OR -
Daniel Burrus
Technology Futurist Keynote Speaker, Business Strategist and Disruptive Innovation Expert
San Diego, CA -
Christina Wodtke
Palo Alto, CA -
Fay Cobb Payton, Ph.D., MBA
Atlanta, GA -
Whitney Brodsky
Edgewater, NJ -
Michael Hendron
Director of the BYU Rollins Center for Entrepreneurship & Technology
Provo, UT
Explore more posts
-
Morgan Cheatham
There appears to be a "Superstar Effect" in healthcare and biomedical AI research right now. A small percentage of publications are truly novel methods contributions that introduce new capabilities or behaviors, whereas a majority of contemporary research publications are focused on implementation science to assess the real-world utility and performance of new methods. This dynamic seems different from prior eras, when the translational infrastructure for implementing novel AI methods was largely unavailable. While methods research often captures more attention and accolades, both types of research are critical – novel methods without implementation science lack translatability; implementation science without novel methods lacks novelty. If our goal is to invent and deploy transformative AI methods in real-world healthcare and life sciences contexts, we ought to continue encouraging and rewarding stellar implementation science work alongside breakthrough methods development.
903 Comments -
Stedman Hood
"Move fast and break things? Not in healthcare." A hospital CEO raised his hand and said from the crowd. (I was addressing a room full of providers on AI's potential in healthcare.) As a YC alum and multi-time founder, I've seen tech revolutionize industries. But healthcare? The stakes are higher. Lives are on the line. The CEO's concern echoed across the room: "We can't afford Silicon Valley's 'move fast and break things' mentality when millions of lives are at stake." Fair point. But here's where it gets interesting. I asked, "Who here wants to learn more about AI in healthcare?" Every hand shot up. The enthusiasm was palpable. These professionals weren't resistant to change. They were hungry for knowledge. So, I flipped the script: "What if AI could help you move carefully and fix things?" Silence. Then a flood of questions. They asked how AI could: - Automate routine tasks, to free up clinicians to spend more time with patients - Improve diagnostic accuracy - Streamline administrative processes — for doctors, RCM, and patient access teams But the key? Safety and security at every step. I shared our approach: - Rigorous testing protocols (thanks to our CTO's background in self-driving cars) - Always-on human oversight - HIPAA compliance, HITRUST certification, and end-to-end encryption The skepticism turned to intrigue. This interaction showed me that healthcare innovation requires a delicate balance. We need to address people's concerns while showcasing possibilities. The future of healthcare isn't just about new technology. It's about building trust, ensuring safety, and keeping sight of what matters most: the patients we serve.
302 Comments -
Bill Aulet
From brain implant research in the labs to a new startup ... how do you do that?!?!? Learn how from MJ Antonini and Nicolette Driscoll -- fascinating, with many lessons to learn. Lab-To-Market has never been a hotter topic, and we've never seen so much progress being made in it, but we still have a long way to go. Macauley Kenney is leading our efforts at the Martin Trust Center for MIT Entrepreneurship https://lnkd.in/eX_kTg_z
693 Comments -
Eric Ries
In the next episode of The Eric Ries show, I talk with Yancey Strickler, co-founder and former CEO of Kickstarter, and founder of Metalabel, a platform for releasing collective work. He's also the author of This Could Be Our Future: A Manifesto for a More Generous World. Yancey, who started out as a music journalist, believes that creativity and humanity are implicitly connected. He's been a forerunner in thinking about how to build companies that bring good things into the world and are also successful without devolving into extractive behaviors. He's also had the rare experience of, as he says, "watching an idea go from impossible to explain to ubiquitous and instantly understood." We talked about the founding and growth of Kickstarter, which has been profitable since its 14th month, the power of humility, past mistakes and future hopes, why he started Metalabel, and more. You can listen to or watch the episode here: - YouTube: https://lnkd.in/g76Ez6B5 - Spotify: https://lnkd.in/gSwg9kKb - Apple: https://lnkd.in/gJk5ng6H Meanwhile, here are a few takeaways from our conversation: 1. There are downsides and upsides to originating a new category: Crowdsourcing seems totally normal to us now, but only because Kickstarter led the way. At the beginning, there were years of struggle to even get people to understand the idea of conditional purchasing. But once they got the site built and people could see what it was, the excitement and adoption of this genuinely new platform were "instantaneous – people knew they got it, they got what it was, they got how to use it." 2. Maximizing growth isn't always the right tactic. "We always had this thought of 'we're not trying to be as big as possible. We're trying to be what feels right, what feels faithful in some nebulous way'," Yancey says of Kickstarter. Staying true to their passion and their mission at the start ultimately led to success. 3. Crisis can be good. In many cases, it leads to uncovering the truth about what's really going on. Instead of just trying to figure out how to make whatever's going wrong go away, disaster can bring about real change. 4. Organizations are their own entities. They exert force on the people who found and run them just as much as those people guide the organization. Learning to co-exist in that relationship is an art. "[Metalabel] exists beyond us without question. It has its own physics, its own personality that you need to listen to and be humble before. You're supporting it, you're serving it. And there's just so much art and humility and really hard work that comes with seeing it through." — Lastly, a huge thank you to our sponsors. Without them, this podcast wouldn't be possible: Mercury DigitalOcean Neo4j
442 Comments -
Phillip Rhodes
And as a follow-up to my last post[1], since I mentioned Neuro-Symbolic AI... the 3rd Annual Neuro-Symbolic AI Summer School is coming up in September, 2024. The event is fully remote and participation is free. For more details, see: https://lu.ma/xxyof4bg [1]: https://lnkd.in/et8zeD2m #AI #ArtificialIntelligence #NeuroSymbolicAI
1 -
Jeff Barson
For Anyone In Healthcare AI. 👈 Colorado shook up the healthcare AI game with the Colorado AI Act (CAIA), setting a new national standard. For digital health innovators nationwide, this means a more regulated landscape to try and ensure fairness and transparency in AI use. CAIA applies tough rules to "high-risk artificial intelligence systems" (HRAI Systems) that have a big say in healthcare or insurance decisions. It's all about preventing bias and ensuring these systems play fair. Developers and deployers now need to lay out all the cards—how their AI learns, its limitations, and how they're tackling any bias. It's a hefty checklist, but necessary for playing by the rules. Why does this matter outside Colorado? Well, other states will be eyeing CAIA like a playbook. Expect similar laws to drop in other states soon, so having a trusted system in place beats trying to DIY your way through compliance. Clinicians, it's time to start thinking ahead. Get that AI governance plan sorted, because staying ahead of the regulatory curve isn't just smart—it's necessary. You'll want to stop using ChatGPT in your personal account and move to a real system before you get your hand smacked. Disclaimer: I'm founder of Storyline (https://lnkd.in/g2FfZFnh), the leading behavioral health AI platform for clinical care. Rebecca E. Gwilt wrote an excellent post on the subject here: https://lnkd.in/gt6DYNaT
3 -
Ben Guo
For NY Tech Week we'll be demo-ing Substrate at the South Park Commons Science Faire on Thursday: https://lnkd.in/e5JwDKKX There are so many incubator-shaped things out there these days, but there's nothing quite like SPC. It's like a chill, small-batch, locally-sourced alternative to YC, where you can either hang out as a member forever - or choose the VC-funded adventure. The -1 to 0 phase [0] they talk about at SPC is real, and it made a real difference for us to have a home at SPC. It takes a lot to quit your job and do something new, especially when it isn't entirely clear what you're doing. (1) Do it – take the leap! and (2) consider joining SPC. Though the core vision has remained the same, many of the important turns in our squiggle at Substrate (including the graph framework) came from those early days at SPC. If you're here for tech week and can't catch us at SPC on Thursday, DM me if you want to meet up! [0] https://lnkd.in/e6KEMr9V
331 Comment -
Eric Best
Hey friends making important decisions (FMID)! When you're calculating mission-critical metrics like #channelsales, #ROAS, #LTVCAC... or any other KPI for your WBR, why reinvent the wheel? SoundCommerce offers prebuilt dimensional BI models with dbt Labs SQL source code libraries that you can immediately load and run in your Snowflake, Google Cloud #BigQuery or Databricks cloud -- all populated automatically by Reactor • Intelligent Data Pipeline with prebuilt entities like products and services, orders, and customers. Stop spending data engineering $$$ on plumbing. Stop building dimensional BI models from scratch. Stop reinventing the wheel!
5 -
Morgan Cheatham
In case you missed it, last week the Coalition for Health AI (CHAI) released a draft consensus framework for responsible health AI. Developed with input from over 100 contributors representing a diverse network of healthcare stakeholders, this guide proposes actionable evaluation criteria throughout the AI lifecycle—from identifying use cases to deployment and monitoring. Key examples outlined in the guide include: • Predictive EHR Risk Use Case (Pediatric Asthma Exacerbation) • Imaging Diagnostic Use Case (Mammography) • Generative AI Use Case (EHR Query and Extraction) • Claims-Based Outpatient Use Case (Care Management) • Clinical Ops & Administration Use Case (Prior Authorization with Medical Coding) • Genomics Use Case (Precision Oncology with Genomic Markers) The draft framework is open for public review and comment for the next sixty days. Please use this form to submit feedback: https://lnkd.in/eYughFYC Kudos to the CHAI team led by CEO & President, Dr. Brian Anderson, MD, on this milestone. Eric Horvitz Jennifer Goldsack John Halamka, M.D., M.S. Michael Pencina Micky Tripathi Nigam Shah Suchi Saria Troy Tazbaz #healthcare #ai #artificialintelligence #generativeai https://lnkd.in/e7VRv6fD
1251 Comment -
Jon Irwin
🚨🚨🚨🚨🚨 REALLY COOL INSIGHT INTO AI!!! Peeking Inside the Black Box: New Advances in Understanding How AIs Think The article from Anthropic discusses their research on deciphering the inner workings of their conversational AI system, Claude. Using a technique called sparse dictionary learning, they were able to identify millions of semantic building blocks or "features" that Claude uses for reasoning and generating text. For example, they found distinct features for concepts like the Golden Gate Bridge, computer code, famous people, and geography. By analyzing and intervening on these features, they gained insights into how Claude represents knowledge and makes inferences. The discovery of abstract, interpretable features sheds light on the representations and computations happening inside Claude's neural network "black box." The article continues to build on the approach by scaling it up dramatically to extract features from Anthropic's latest model, Claude 3 Sonnet. By training larger sparse autoencoders with more compute power, they were able to find even more sophisticated features corresponding to complex, multilingual, and multimodal concepts. The researchers analyze these features in depth to understand what they represent, how they generalize, and how they enable model capabilities. Intriguingly, they also find features that appear relevant to AI safety, like detecting deception or security flaws in code. While preliminary, this demonstrates how interpretability could eventually help ensure AI systems behave safely and reliably. This represents exciting progress in elucidating the mechanisms by which large language models operate. Methodically decoding these models promises to enhance our ability to build more robust, trustworthy, and beneficial AI. (A toy sketch of the sparse-autoencoder idea appears after the last post in this feed.) #AI #artificalintelligence #SRED #RD #innovation #funding #fundingexpert #grants #JonIrwin #futuretech https://lnkd.in/gRVVCppY
1 -
JT Benton
This is just incredible. Neal Ghosh's demonstration of how #VentureIQ takes deeply technical and complicated content across a range of industrial use cases and ages/stages and turns it into an AI-generated conversational podcast (a la #notebookLM) is really mind-blowing. Check out his post below, and if you haven't seen VentureIQ yet, let's get you set up to learn more. You can visit VentureIQ.ai or DM any of us over here to book a demo!
2 -
Umbereen S. Nehal, MD, MPH, MBA
Professor Hossein Rahnama is an expert and seasoned #tech innovator who was critical to our finding the right #businessmodel for HER Heard while we took the AI for Impact class with MIT Media Lab Professor Ramesh Raskar, a class with an incredible network of advisors who are industry experts and founders themselves. Professor Rahnama's words of wisdom on #AI are worth listening to. Access to the support and advice of globally recognized experts is exactly why I chose to do my #MBA at Massachusetts Institute of Technology. If you are the sum of the people you spend the most time with, then put yourself in rooms where everyone around you is smarter than you in some way and whose combination of optimism and strategic thinking helps to get things done.
71 Comment -
Brandon Peele
Meaningful work might seem like a mystery or a nice-to-have, but it's actually quite explainable and achievable. According to researchers at MIT, key elements of meaningful work include its self-transcendent nature, episodic high-impact moments, personal relevance, and often poignant experiences. However, meaningfulness can be undermined by dehumanizing factors, such as a disconnect between personal values and organizational goals, lack of recognition, meaningless tasks, unfair treatment, and isolation. To cultivate meaningful work, organizations should focus on these four strategies: 1. Organizational Meaningfulness: Clearly communicate how individual roles contribute to the organization's broader purpose and societal benefits. 2. Job Meaningfulness: Help employees understand how their specific roles and tasks support the organization's mission, acknowledging that challenging jobs can be meaningful. 3. Task Meaningfulness: Provide context for repetitive or tedious tasks, showing how they fit into the larger goals. 4. Interactional Meaningfulness: Foster supportive relationships and positive interactions with beneficiaries of work. If you'd like to explore how to make your career or organization more meaningful, join me and my colleagues, Bea Boccalandro and Laszlo Karafiath, for office hours (the next one is September 24th at 11am PT): https://lnkd.in/dszSbcgV Dive deeper into the research here: https://lnkd.in/dmH9cC7E
72 Comments -
Tony Siebers
What do you think about AI in senior care? Personally, I think we have to be careful with it. When AI presents itself as human, it can be confusing for seniors who may not have strong digital literacy skills. In fact, if they think they're talking to a person and not an AI chatbot, it can be dangerous if they're having a medical emergency or issue. However, there are some helpful use cases. I think Kristen Fischer's exploration of AI here is really interesting. "Artificial intelligence may be a useful communication tool to help older adults with cancer communicate with their doctors and have more of a say in their treatment," she says. AI can provide more context or explanation about certain treatment plans and translate the patient’s needs to clinicians in situations where the patient doesn't feel understood or heard. AI will never replace professionals, but it could help patients and clinicians better understand each other. Feel free to share your thoughts on this—I'd love to know if you think AI would be an effective tool for these situations. #AI #SeniorCare #FutureOfHealthcare
31 Comment -
Marley Rosario, MPP
Here's how I made an AI Marley Rosario, MPP, reading The Digital Polity newsletter in 23 minutes ⬇ 1️⃣ You can find the written version of Forrest Alonso Haydon's newsletter, The Digital Polity, at this link: https://lnkd.in/gJFhi5aF 2️⃣ Here are the steps to making this video: → Used ElevenLabs to make a clone of my voice. → Went to HeyGen and uploaded a 2 min video of myself talking to my iPhone and then created an instant avatar. → Went into the HeyGen AI studio and edited the video by adding the visuals of the newsletter to the background. I did all of this with a $20/month sub to HeyGen and the free ElevenLabs acct. AI is going to eat the world.
216 Comments -
Jay Matthew
Trying to explain the nuances of a healthcare ecosystem to data scientists so that they have an understanding as close as possible to that of domain experts like healthcare managers and clinicians is like explaining the beauty of Tchaikovsky's Piano Concerto No. 1 to someone who does not understand a thing about classical music! Even if we knew where to start, figuring out why to start hardly seems worth the effort to begin with! Nonetheless, in my dealings with my data science team, I have had to teach them the intricacies of one health ecosystem, that of this dichotomous system we have in South Africa, and despite being healthcare users of this very system, they themselves were still left with many questions. Therefore, in light of this, I have developed my own symphony, the multi-S framework, to explain the various levels of a healthcare ecosystem, as relevant and seen through the lens of data science. Voila!
10 -
Yubin Park, PhD
Google as a Provider Directory? Who said self-reported data is unreliable? A study funded by The Commonwealth Fund, "𝗜𝗺𝗽𝗿𝗼𝘃𝗶𝗻𝗴 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝗿 𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆: 𝗖𝗮𝗻 𝗠𝗮𝗰𝗵𝗶𝗻𝗲-𝗥𝗲𝗮𝗱𝗮𝗯𝗹𝗲 𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝗶𝗲𝘀 𝗛𝗲𝗹𝗽?" by Michael Adelberg, Austin Frakt, Daniel Polsky, and Michelle Kitchman Strollo, says otherwise [1,2]. According to the paper, "𝘗𝘳𝘰𝘷𝘪𝘥𝘦𝘳 𝘥𝘪𝘳𝘦𝘤𝘵𝘰𝘳𝘺 𝘱𝘩𝘰𝘯𝘦 𝘯𝘶𝘮𝘣𝘦𝘳𝘴 𝘸𝘦𝘳𝘦 𝘮𝘰𝘳𝘦 𝘭𝘪𝘬𝘦𝘭𝘺 𝘵𝘰 𝘢𝘭𝘪𝘨𝘯 𝘸𝘪𝘵𝘩 𝘎𝘰𝘰𝘨𝘭𝘦 𝘥𝘢𝘵𝘢 𝘵𝘩𝘢𝘯 𝘸𝘪𝘵𝘩 𝘵𝘩𝘦 𝘥𝘪𝘳𝘦𝘤𝘵𝘰𝘳𝘺 𝘧𝘰𝘳 𝘵𝘩𝘦 𝘴𝘢𝘮𝘦 𝘤𝘰𝘮𝘱𝘢𝘯𝘺'𝘴 𝘩𝘦𝘢𝘭𝘵𝘩 𝘱𝘭𝘢𝘯𝘴 𝘪𝘯 𝘰𝘵𝘩𝘦𝘳 𝘮𝘢𝘳𝘬𝘦𝘵𝘴." Last year, a white paper prepared for the U.S. Department of Health and Human Services (HHS) by RTI International, "𝗦𝘁𝗮𝘁𝗲 𝗘𝗳𝗳𝗼𝗿𝘁𝘀 𝘁𝗼 𝗖𝗼𝗼𝗿𝗱𝗶𝗻𝗮𝘁𝗲 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝗿 𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆: 𝗙𝗶𝗻𝗮𝗹 𝗥𝗲𝗽𝗼𝗿𝘁," stated a reason for the provider directory inaccuracy as follows: "𝘛𝘩𝘦𝘳𝘦 𝘢𝘳𝘦 𝘮𝘢𝘯𝘺 𝘳𝘦𝘢𝘴𝘰𝘯𝘴 𝘸𝘩𝘺 𝘩𝘦𝘢𝘭𝘵𝘩 𝘱𝘭𝘢𝘯𝘴' 𝘱𝘳𝘰𝘷𝘪𝘥𝘦𝘳 𝘥𝘪𝘳𝘦𝘤𝘵𝘰𝘳𝘪𝘦𝘴 𝘢𝘳𝘦 𝘪𝘯𝘢𝘤𝘤𝘶𝘳𝘢𝘵𝘦. 𝘖𝘯𝘦 𝘳𝘦𝘢𝘴𝘰𝘯 𝘪𝘴 𝘵𝘩𝘢𝘵 𝘩𝘦𝘢𝘭𝘵𝘩 𝘱𝘭𝘢𝘯𝘴, 𝘰𝘳 𝘵𝘩𝘦𝘪𝘳 𝘷𝘦𝘯𝘥𝘰𝘳𝘴, 𝘵𝘺𝘱𝘪𝘤𝘢𝘭𝘭𝘺 𝘳𝘦𝘭𝘺 𝘰𝘯 𝙥𝙝𝙤𝙣𝙚 𝙘𝙖𝙡𝙡𝙨, 𝙛𝙖𝙭𝙚𝙨, 𝙖𝙣𝙙 𝙚𝙢𝙖𝙞𝙡𝙨 𝘵𝘰 𝘩𝘦𝘢𝘭𝘵𝘩 𝘤𝘢𝘳𝘦 𝘱𝘳𝘰𝘷𝘪𝘥𝘦𝘳𝘴 𝘵𝘰 𝘶𝘱𝘥𝘢𝘵𝘦 𝘢𝘯𝘥 𝘢𝘵𝘵𝘦𝘴𝘵 𝘵𝘰 𝘱𝘳𝘰𝘷𝘪𝘥𝘦𝘳 𝘥𝘪𝘳𝘦𝘤𝘵𝘰𝘳𝘺 𝘪𝘯𝘧𝘰𝘳𝘮𝘢𝘵𝘪𝘰𝘯 𝘭𝘪𝘬𝘦 𝘭𝘰𝘤𝘢𝘵𝘪𝘰𝘯, 𝘰𝘧𝘧𝘪𝘤𝘦 𝘩𝘰𝘶𝘳𝘴, 𝘸𝘩𝘦𝘵𝘩𝘦𝘳 𝘵𝘩𝘦𝘺 𝘢𝘳𝘦 𝘢𝘤𝘤𝘦𝘱𝘵𝘪𝘯𝘨 𝘯𝘦𝘸 𝘱𝘢𝘵𝘪𝘦𝘯𝘵𝘴, 𝘢𝘯𝘥 𝘸𝘩𝘦𝘵𝘩𝘦𝘳 𝘵𝘩𝘦𝘺 𝘢𝘤𝘤𝘦𝘱𝘵 𝘢 𝘩𝘦𝘢𝘭𝘵𝘩 𝘱𝘭𝘢𝘯'𝘴 𝘪𝘯𝘴𝘶𝘳𝘢𝘯𝘤𝘦 𝘱𝘳𝘰𝘥𝘶𝘤𝘵." Does Google make phone calls to check a business's location and office hours? I have maintained a few Google Business profiles, but I have never received a phone or fax call from Google. I voluntarily updated information because I cared and knew people would look for it. Many businesses care about the information on Google, as what Google would show to their (potential) customers matters a lot. If Google is more accurate than any other sources, should Google do a provider directory business? Or should they provide the data source for other start-ups to build a more healthcare-focused solution? Would Google even be interested in this market? [1] https://lnkd.in/e2cXQ_sH [2] https://lnkd.in/eZCtAT9d [3] https://lnkd.in/eWHaeMaw
7621 Comments -
Shahid Azim
Join C10 Labs and their AI Venture Studio for a roundtable discussion on building AI-first ventures. This is an opportunity to meet founders from the 1st cohort, network with other applicants from the 2nd cohort, and connect with early-stage entrepreneurs in the AI community. Don't miss out on the chance to learn from the experts and build your own AI venture. Register now! #AI #entrepreneurship #C10Labs #CIC
481 Comment -
david o. houwen
Collective intelligence: A unifying concept for integrating biology across scales and substrates | by Patrick McMillen & Michael Levin | Nature A defining feature of biology is the use of a multiscale architecture, ranging from molecular networks to cells, tissues, organs, whole bodies, and swarms. Crucially however, biology is not only nested structurally, but also functionally: each level is able to solve problems in distinct problem spaces, such as physiological, morphological, and behavioral state space. Percolating adaptive functionality from one level of competent subunits to a higher functional level of organization requires collective dynamics: multiple components must work together to achieve specific outcomes. Here we overview a number of biological examples at different scales which highlight the ability of cellular material to make decisions that implement cooperation toward specific homeodynamic endpoints, and implement collective intelligence by solving problems at the cell, tissue, and whole-organism levels. We explore the hypothesis that collective intelligence is not only the province of groups of animals, and that an important symmetry exists between the behavioral science of swarms and the competencies of cells and other biological systems at different scales. We then briefly outline the implications of this approach, and the possible impact of tools from the field of diverse intelligence for regenerative medicine and synthetic bioengineering. https://lnkd.in/ewMJCtxz #collective #intelligence #biological #systems #multiscale #architecture #biology #unification #regenerative #medicine #synthetic #bioengineering
-
Abass Toriola
Struggling to choose between ChatGPT-4 and AnthropicAI's Claude 3 Opus for your AI needs? 🤔 After 60 days of testing both tools side-by-side, I've uncovered key differences that can help you make the right choice. Here are the most important factors to consider: - Versatility: ChatGPT-4 offers more features like image generation, file downloads, and web page reading. It also has a massive library of 3M+ Custom GPTs for extra functionality. 🌟 - Response Time: ChatGPT-4 starts generating responses within 3 seconds, while Claude 3 Opus can take 10+ seconds to begin. ⏱️ - Output Length: Claude 3 Opus produces longer 1000+ word articles on the first try. ChatGPT-4 usually maxes out around 700 words. 📏 - Prompt Compliance: Claude 3 Opus captures more instructions from detailed prompts and mimics writing styles better. ChatGPT-4 tends to skip more prompts. 🎯 - Usage Limits: ChatGPT-4 allows 40 messages per 3 hours. Claude 3 Opus has less clear limits based on output length. 🚦 - Ethical Standards: Claude 3 Opus is stricter about inserting promotional links and references. ChatGPT-4 is more lenient. 🙏 In the end, both tools have strengths and weaknesses. The choice depends on your specific needs and priorities. But one thing is certain - AI writing tools like these are game-changers that can skyrocket your productivity! 🚀 Read the full comparison for more details on how these two AI powerhouses stack up: https://lnkd.in/dJpY5euM #chatgpt #chatgtp4 #claudeai #claude3opus #chatgptvsclaude
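For anyone who wants to run the kind of head-to-head test described in the post above programmatically rather than in the chat interfaces, here is a minimal sketch in Python. It is an assumption-laden illustration, not the author's methodology: it assumes the official openai and anthropic SDKs, API keys in the OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables, and placeholder model identifiers, and it measures total completion latency and word count (the time-to-first-token figure quoted above would require streaming).

```python
# Hypothetical side-by-side harness: send the same prompt to both models and
# record total latency and output length. Model names are placeholders; swap
# in whichever GPT-4-class and Claude-3-Opus-class models you actually test.
import time

import anthropic
from openai import OpenAI

openai_client = OpenAI()                  # reads OPENAI_API_KEY from the environment
anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_gpt4(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4",  # placeholder model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_claude_opus(prompt: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-3-opus-20240229",  # placeholder model identifier
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def compare(prompt: str) -> None:
    for name, fn in [("GPT-4", run_gpt4), ("Claude 3 Opus", run_claude_opus)]:
        start = time.perf_counter()
        text = fn(prompt)
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.1f}s total, {len(text.split())} words")

if __name__ == "__main__":
    compare("Write a 1000-word article on building better habits.")
```

Running the same prompt set through a loop like this is one way to put rough numbers behind the response-time, output-length, and prompt-compliance impressions described in the post.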
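And tying back to the Anthropic interpretability post above (Jon Irwin's), here is a toy sketch of the general sparse-autoencoder / dictionary-learning idea it describes: learn an overcomplete set of "features" whose sparse, non-negative activations reconstruct a model's internal activations. This is an illustrative outline, not Anthropic's code; the dimensions, the L1 coefficient, and the random inputs are stand-ins for real transformer activations.

```python
# Toy sparse autoencoder over model activations (dictionary-learning sketch).
# Everything here is illustrative: in real use, `acts` would be activations
# captured from a transformer layer, and n_features would be much larger.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, n_features: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))  # sparse, non-negative feature activations
        recon = self.decoder(feats)             # rebuild the activation from the feature dictionary
        return recon, feats

def sae_loss(acts, recon, feats, l1_coeff: float = 1e-3):
    # reconstruction error plus an L1 penalty that drives most features to zero
    return ((acts - recon) ** 2).mean() + l1_coeff * feats.abs().mean()

sae = SparseAutoencoder()
acts = torch.randn(64, 512)   # stand-in for captured activations
recon, feats = sae(acts)
loss = sae_loss(acts, recon, feats)
loss.backward()               # one step of the training signal
```

Interpreting a learned feature then amounts to finding the inputs that activate it most strongly, and intervening means clamping or ablating that feature before decoding, which is the kind of feature-level intervention the post refers to.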