One key part of any technological hype cycle is an initial peak in interest, followed by the "trough of disillusionment," where users realize that the new technology will not, in fact, solve all of their problems or put everyone out of work. I just saw my first news article claiming this for AI - expect a lot more. The press releases that every company had to issue to show that they too were investing in AI are turning out to lead to some expensive capital projects. Some of those costs are already being incurred, and the ROI looks a little less rosy now that the engineers are reporting back on the projects. https://lnkd.in/gvwAcXy6
Brian A.’s Post
-
I liked this short article on AI. Since AI is a hot topic in business, we see a lot of hype about the technology itself, while finding the right applications requires more work and research. "The question they should ask themselves is, how can gen AI support me in solving a business question?" I am interested to see where this technology takes us in the AEC community. Already, as I learn more about truss design, I can see potential for predictive algorithms and software that learns from a group of users to help a single user solve a particular problem. I need to learn more about AI in general, but I am already looking forward to working with the next generation of software.
The Best Way to Work with AI? A Study May Reveal the Answers
wsj.com
-
Chatting with Alani
Question: What specific performance metrics or benchmarks were used to evaluate the LLM's performance on Groq's platform?
Based on the context documents provided, Groq's platform performance for LLMs is evaluated using the following metrics:
Latency: Groq achieved a median latency of 0.054 ms per inference when running one model instance, demonstrating unparalleled latency performance [1].
Throughput: Groq achieved a throughput of 471,585 inferences per second when running eight model instances in parallel [1].
Tokens per second: Groq has shown record-breaking performance with Meta AI's open-source foundational LLM, Llama-2 70B, generating language at over 280 tokens per second per user [4]. Additionally, ArtificialAnalysis.ai has benchmarked Groq achieving 814 tokens per second with the Gemma 7B model [5].
Hardware efficiency: Groq's language processor is purpose-built and software-driven, with ultra-low latency and ultra-fast performance. When dealing with massive LLMs, speed is a major factor, and Groq's automatic mapping of models to hardware, without manual intervention, provides better performance and higher efficiency than general-purpose AI chips [6].
Sources [1], [4], [5], and [6] directly support these performance metrics.
https://lnkd.in/efvn64tG
AI in Business Conversation
alani.ai
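The latency and throughput figures above can be illustrated with a minimal benchmark harness. This is a generic Python sketch, not Groq's tooling: `infer` is a placeholder for any model call, and the toy workload stands in for real inference.

```python
import time

def benchmark(infer, n_requests=1000):
    """Measure median latency and aggregate throughput of an
    inference callable over n_requests sequential calls."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    median_latency = latencies[len(latencies) // 2]
    throughput = n_requests / elapsed  # inferences per second
    return median_latency, throughput

# Toy stand-in for a real model call.
median, ips = benchmark(lambda: sum(range(100)))
print(f"median latency: {median * 1e6:.1f} us, throughput: {ips:.0f} inf/s")
```

Real platform benchmarks would also run instances in parallel and count generated tokens per second per user, but the per-request timing idea is the same.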
-
💡Innovation lies at the heart of our firm. Emerging technologies, like Generative AI, present exciting opportunities for us, but they need to be explored responsibly. We have been engaging with clients and educating our people on the benefits and risks of GenAI, leading to a thoughtful implementation process across the firm. This is an intriguing and evolving space with more to come – please consider exploring what we have been up to so far by clicking below. 👇 #Technology #GenAI
Our approach to Generative AI
linklaters.com
-
In the ever-evolving landscape of technology, the power of #AI stands as a transformative force, particularly within the realm of Microsoft's #PowerPlatform. However, crafting effective prompts is the key to unlocking the true potential of these AI tools. To that end, Withum's Joseph Hickey delves into prompt engineering best practices and strategies to maximize the efficacy of AI, setting the stage for a deeper exploration of leveraging these tools to their fullest advantage. Read more here: https://lnkd.in/giSRAVHH #PromptEngineering #PromptEngineers #NaturalLanguage #GenAI #AIAdoption #AIBestPractices
Prompt Engineering Best Practices: Crafting Effective Prompts for Your Power Platform AI Tools
withum.com
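The article's core point about crafting effective prompts can be sketched in a few lines. This is a generic illustration, not Withum's or Microsoft's guidance; the Role/Context/Task/Format fields are one common convention, and the scenario is made up.

```python
# A vague prompt leaves the model to guess audience, scope, and format.
vague_prompt = "Summarize this report."

# A structured prompt states role, context, task, and output format.
structured_prompt = "\n".join([
    "Role: You are a financial analyst writing for non-specialists.",
    "Context: The attached report covers Q3 revenue for a mid-size retailer.",
    "Task: Summarize the three most important findings.",
    "Format: A bulleted list, one sentence per bullet, in plain language.",
])

print(structured_prompt)
```

The structured version constrains the answer space up front, which tends to produce more consistent, reviewable output than iterating on a one-liner.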
-
AI is a game-changer—but it can’t replace your judgment. While AI excels at automating repetitive tasks, processing vast datasets, and identifying patterns, it lacks the human capacity for ethical decision-making, empathy, and contextual understanding. This makes your role in guiding AI’s outputs critical, especially in areas requiring nuanced choices. The best use of AI? Leverage it as a tool to enhance your productivity and decision-making, freeing you to focus on creativity, strategy, and connection. By working with AI, you’ll stay ahead in today’s evolving professional landscape. Learn more here. https://lnkd.in/emdeGZk2
Here's the one thing you should never outsource to an AI model
https://venturebeat.com
-
Artificial Intelligence (#AI) is revolutionizing the business landscape across industries and technologies. While it presents opportunities for improvement, it also poses risks of malicious use. In this blog, Thierry Caminel, CTO for AI at Eviden, delves into the future of AI and the emerging key trends. #CTO #Innovation #artificialintelligence #trends #eviden
2025 and beyond: Three questions on how AI is shaping the future of business
eviden.com
-
The narrative around AI is often filled with tales of bigger and more complex models. But is 'bigger' truly 'better'? 🤔 In the world of AI, efficiency can often outshine sheer size. Smaller, efficiently trained models are proving that using less, but higher-quality data, can produce results on par—or sometimes superior—to their larger counterparts. This not only cuts computing costs but also curbs energy consumption, a crucial factor as AI expands its reach. 🔋💡 Exciting innovations, such as OpenAI's strides in complex reasoning and compact open-source AI models, challenge the notion that more is always better. These developments highlight a shift towards efficiency and effectiveness over grandeur. 🚀 Think about the ethical and practical implications too. As AI becomes a fixture in daily life, addressing issues like bias, transparency, and privacy becomes imperative. Balancing technological prowess with societal needs is essential for sustainable progress. ⚖️ So, what's your take? Do we focus on refining what we have or pushing the limits further? Share your thoughts! 👇 #AI #MachineLearning #Innovation #Sustainability #Efficiency
-
While enthusiasm for #generativeAI is crucial for igniting initial momentum, an excess of excitement can lead to distractions and unrealistic expectations, particularly among senior management. Companies that prioritize starting their explorations from the lens of business needs rather than the latest technology trends tend to maintain focus and derive the maximum value from all technologies.

"You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If only. Even before the latest splashes — most notably OpenAI’s ChatGPT and other generative AI tools — the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword “AI” goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.

Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics. This means real value, so long as you eschew false hype that it is “highly accurate,” like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It’s practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML."
The AI Hype Cycle Is Distracting Companies
hbr.org
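The quoted churn example above reduces to a very small decision rule: a model scores customers, and a business-chosen threshold turns scores into actions. A minimal sketch, with made-up probabilities and an illustrative 0.7 cutoff:

```python
# Hypothetical model outputs: probability each customer cancels.
churn_probability = {
    "alice": 0.85,
    "bob": 0.12,
    "carol": 0.73,
    "dave": 0.40,
}

RETENTION_THRESHOLD = 0.7  # chosen by the business, not learned

def customers_to_incentivize(scores, threshold=RETENTION_THRESHOLD):
    """Turn model scores into an actionable list of customers
    who should receive a retention offer."""
    return sorted(name for name, p in scores.items() if p >= threshold)

print(customers_to_incentivize(churn_probability))  # → ['alice', 'carol']
```

The value lives in the operational decision (who gets the offer), not in the sophistication of the model that produced the scores, which is exactly the article's point.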
-
🚀AI is going to change the way we work in ways that haven’t been seen since the Industrial Revolution. I think there will be significant challenges to address in relation to AI model bias, hallucinations, and guardrails to prevent bad actors from using this technology for harmful purposes. The debate over who configures these guardrails will likely rage for years. However, in my view, AI can be used immediately today to help with tasks that have definite answers, such as code generation or AI-assisted document generation. In these cases, it is still up to the human operator to verify that the code works as intended and that the document contents are accurate. The key is that the human operator can now enhance their productivity tenfold. I know from my own experience using AI that I can write better code and handle significantly greater complexity than was previously possible for me.
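The verification step described above can be as simple as human-written tests run against the generated code. A hedged sketch: `slugify` is a hypothetical helper an assistant might produce, and the assertions are the operator's check that it behaves as intended.

```python
def slugify(title):
    """Hypothetical AI-generated helper: turn a title into a
    lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

# Human-written verification: the operator states the intended
# behavior explicitly and confirms the generated code meets it.
assert slugify("Hello World") == "hello-world"
assert slugify("  AI   Assisted  Docs ") == "ai-assisted-docs"
print("all checks passed")
```

Writing the assertions yourself, rather than asking the model whether its own code is correct, is what keeps the human operator in the loop.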