“I had the pleasure of working with Murat, during which I observed his determination in addressing various challenges as a Co-founder and CTO. His "win-win" mindset not only fosters collaborative relationships but also drives innovative solutions that deliver swift, highly effective results. Murat's impressive track record of translating concepts into actionable, high-quality outcomes is a testament to his visionary leadership and strategic acumen. His dedication to pushing boundaries, coupled with his precise execution, makes him an asset to any team or partnership. Murat's blend of resilience, strategic insight, and genuine empathy for others makes him well-positioned to leave a lasting imprint on the entrepreneurial landscape.”
Murat Derya Ozen
London, England, United Kingdom
2K followers
500+ connections
About
Entrepreneurial Product & Engineering, AI Lead. Currently leading Smart Kiwi where we…
Experience
Recommendations received
1 person has recommended Murat Derya
Other similar profiles
- Ivan V. (Bromley)
- Aliya R. (United Kingdom)
- Eldar Isayev (Guildford)
- Siddhant Kumar (London)
- Archit Garg (London)
- James Bedford (San Francisco, CA)
- Rui Malheiro (Rugby)
- Anuar Serikov (London Area, United Kingdom)
- Ilinca Marin (United Kingdom)
- Vlad Chiriacescu (Greater London)
- Radhika Satish (United Kingdom)
- Arslan Ul Haq Qureshi (London)
- Nick Sukhanov (United Kingdom)
- Madiyar Aitbayev (London)
- Mostafa Yassin (Greater London)
- Sean Dark (Greater Derby Area)
- Neil Kakkar (United Kingdom)
- Iftikhar Seayam (Birkenhead)
- Ziad Hendawy (United Kingdom)
- Apurva Singh (Menlo Park, CA)
Explore more posts
- Baris Aksoy
The LinkedIn team (Juan Pablo Bottaro and Karthik Ramgopal) shared a helpful deep dive into their deployment of an LLM-powered job search assistant. Highly recommend this read! A few highlights: 💡 A RAG-based pipeline is relatively easy to set up. They achieved 80% of the basic experience in 1 month, then took 4 more months to reach the 95% mark. 💡 Evaluating the quality of application outputs is hard. I hear this from many enterprises developing LLM applications. (check out Okareo by Matthew Wyman and Boris Selitser) 💡 Balancing latency with quality is important. While techniques like Chain of Thought can improve accuracy and reduce hallucinations, they increase response times, which can affect user experience. https://lnkd.in/gtGGwf5P #largelanguagemodels #RAG #EBR #LLM #ml #ai #hallucinations #evaluations
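For readers unfamiliar with the pattern, here is a minimal sketch of the retrieve-then-generate flow such a pipeline is built on. This is an illustrative toy, not LinkedIn's system: the bag-of-words "embedding" stands in for a real embedding model, and the final prompt would be sent to an LLM.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # toy bag-of-words "embedding"; a real pipeline uses a learned embedding model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # retrieval step: rank documents by similarity to the query
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # "augmented generation": stuff retrieved context into the LLM prompt
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Remote data science jobs in London.",
    "Senior machine learning engineer openings at seed-stage startups.",
    "Barista positions available in Paris cafes.",
]
# in a real assistant this prompt would be sent to an LLM for the final answer
print(build_prompt("machine learning engineer jobs", docs))
```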
- Nikhil Sonti
One of the best blogs I've come across on quantization, with outstanding visualizations. Highly recommend a read for anyone looking to understand LLM quantization in more depth! 📐 - https://lnkd.in/e_WTgq6x At Felafax AI (YC S24), we are building infra to make full-precision multi-GPU fine-tuning easier. #AI #Infra #LLM
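As a taste of what such posts cover, here is a minimal sketch of symmetric absmax int8 quantization, one of the simplest schemes. Real formats (e.g. the GGUF quantizations mentioned elsewhere in this feed) quantize blockwise with per-block scales; this toy uses a single scale per tensor:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # map the largest-magnitude weight to 127, scale everything else linearly
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # recover approximate float weights; the gap is the quantization error
    return q.astype(np.float32) * scale

w = np.random.randn(8).astype(np.float32)
q, s = quantize_int8(w)
print("int8 weights:", q)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```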
- Neal Ghosh
📢 📢 📢 Being innovative and being busy rarely, if ever, coincide. Back in 2020, I faced this tension at Amazon when I was managing a data science and ML team while also trying to innovate on new products and strategies. The result was mostly failure, because I didn't know how to do both at once. Turns out it's not particularly natural for anyone. Since then, I've worked on methods to do this better, and my current process mostly boils down to two elements: time and space.
Time:
➡️ You have to book time for the raw inputs to innovation: research, ideating, experimenting, prototyping, etc., just as if they were meetings. If you don't, your busy work will crowd out all your time.
➡️ The innovation process is not linear. You can't report on daily progress in standup, so don't bother. Give ideas and concepts time to wander (and sometimes regress) so you can get to the right place.
➡️ Recalibrate time-to-value. The broader the innovative idea, the longer it takes to fully realize its value. Set expectations that the benefits of today's innovation work won't come for a while, and that's OK.
Space:
➡️ You can't context switch between busy work and ideating multiple times a day. Creating space between the two yields better efficiency in both. My preference is 3-4 hours of "chunk" time for either one.
➡️ Innovation has its own mandate, as does ops. Juggling both is hard, and impossible if you try to blend them. Keeping innovation workflows separate from daily operations -- splitting teams, processes, physical or digital locations -- mitigates cross-over and dilution.
➡️ Give yourself the emotional space to fail. It's cliche now to "celebrate failure" but the reality is that it is uncomfortable and stressful no matter how hard you shield against it. Without guardrails on the aversion to failure, innovation regresses back to ops pretty fast.
What are your approaches to balancing the two? #innovation #innovationmindset https://lnkd.in/eqSsJ4Jk
- Triin Uustalu
Paris-based startup Mistral AI has just closed a staggering €600M funding round at a €5.8bn valuation. Their rapid rise showcases the explosive growth and potential of AI. As a heavy user of AI tools in my work at Glafos, I find this AI race very exciting. Thanks to new tech and the tooling built on top of it, we can keep our team lean while efficiently writing code, creating content, optimizing customer support, and more. AI enables us to test our ideas, deploy quickly, and maintain a swift learning loop, which is important for an early-stage startup. However, this rapid technological progress comes with significant societal implications. It's estimated that up to 40% of the workforce will need retraining in the next three years. This pressing issue has not been adequately addressed by EU politicians, especially during the recent elections, and that worries me. https://lnkd.in/dzHjEj67
- Hamza Farooq
Trying to run LLMs locally? 🤔 Our course, Enterprise RAG and MultiAgents Applications, covers all that and much more. Ever wondered how to use quantized LLMs like TheBloke's GGUFs on platforms like Hugging Face? 🧐 Here's the deal: quantizing a model compresses it, making inference faster with fewer bits. However, the more heavily quantized the model, the lower the accuracy and sophistication it can provide. 🤓 Good news! Ollama lets you load a local GGUF model effortlessly. Simply obtain the GGUF files, store them, write a Modelfile, and voilà! You're all set to use it on Ollama. 🚀 From loading local models to becoming an (LLM) hero, our notebooks and lectures in Enterprise RAG and MultiAgents Applications are now available on Maven. Join us there to learn more! 📚💻 Sign up for our course: https://lnkd.in/g7Z-7SAU #LLM #QuantizedModels #Ollama #EnterpriseRAG #MultiAgentsApplications #Maven
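The Modelfile workflow described above is short enough to show in full; a minimal sketch, where the GGUF filename and model name are hypothetical placeholders:

```
# Modelfile: point Ollama at a locally stored GGUF file
FROM ./mistral-7b-instruct.Q4_K_M.gguf
PARAMETER temperature 0.7
SYSTEM You are a concise, helpful assistant.
```

Then register and run it from the shell:

```
ollama create my-local-model -f Modelfile
ollama run my-local-model "Hello!"
```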
- Vinod Singh
Generalisation is Key to AGI 🗝️
I keep hearing that we're on the verge of achieving AGI, but I believe current approaches won't get us there. Right now, we're placing all our hopes on neural networks, thinking that by pouring in more compute and data, we'll evolve them into AGI.
Neural networks are effective tools for pattern recognition and prediction but often struggle with out-of-distribution generalisation. This means they excel in tasks similar to their training data but can falter when encountering novel or unfamiliar scenarios. The root of this issue lies in their focus on learning specific patterns rather than broader, underlying principles that could be applied to diverse contexts.
For example, a neural network trained to identify cat breeds might perform well on familiar images of cats but could struggle with recognising a cat in unusual situations, such as wearing a costume or captured from a unique angle. This challenge in generalisation can impact their ability to reason and plan effectively, given the complexity and unpredictability of real-world scenarios.
These limitations also have significant ethical implications. The demand for extensive data can lead to privacy concerns and amplify existing biases within the data. Furthermore, the potential for failures in unfamiliar contexts, such as in autonomous vehicles or medical diagnostics, underscores the risks associated with current neural network technologies.
To move toward Artificial General Intelligence, it is crucial to explore alternative approaches that can better generalise to new and diverse data. These methods should focus on understanding fundamental principles and relationships rather than merely memorising patterns from training sets. #ArtificialGeneralIntelligence #DataEthics #TechInnovation
- Karthik Kalyanaraman
The best way to understand how GenAI adoption will evolve over time is by looking at how mature AI teams at big tech companies are applying it today. A couple of weeks ago, I launched a fully automated podcast, AI Arxiv, that focuses exactly on this.
I have built an automated pipeline that does the research for this podcast by sifting through various sources such as engineering blogs, research papers, and YouTube videos, extracting high-signal content, categorizing it, and feeding it to Google's NotebookLM for audio generation before it is published on public RSS feeds, Spotify, and Apple Podcasts. I continue to be fascinated by how AI is being applied in novel ways beyond chatbots to solve unique use cases.
The podcast has already crossed 40 episodes, 50 subscribers, and 500 listens. Some of the more popular episodes include:
- How Meta uses LLMs for efficient incident response
- How Salesforce operationalizes models at scale
- How DoorDash built a high-quality RAG for dasher support
- How Pinterest built text-to-SQL
I was glad to hear from some folks that this was helping them stay on top of novel GenAI use cases and stretch their thinking beyond chatbots. The nice thing is, these podcast episodes are semi-automated, with me in the loop directing and approving workflows. As a result, it takes only a few minutes to generate and publish a new episode.
For the tech stack, I used CrewAI for content research, DSPy for content metadata and podcast notes/sponsor messages, and Langtrace for experimentation and testing. I will share a separate post going over the details of this entire system along with code samples so you can create your own podcasts too. If you are someone actively tinkering with LLMs or even generally curious about the GenAI space, do check it out. Link in the comment. 👇
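The post names the stack but not the wiring, so here is a hedged sketch of the human-in-the-loop flow it describes: gather sources, extract and categorize, get approval, generate audio, publish. Every helper is a hypothetical stand-in, not the author's actual CrewAI/DSPy code:

```python
# Hypothetical outline of a semi-automated podcast pipeline.
# Each step would call a real tool (scraper, LLM, NotebookLM, RSS publisher);
# here they are stubs that show only the control flow.

def gather_sources() -> list[str]:
    # engineering blogs, research papers, YouTube transcripts, ...
    return ["Meta: LLMs for incident response", "Pinterest: text-to-SQL"]

def extract_and_categorize(items: list[str]) -> list[dict]:
    # an LLM would pull out high-signal content and tag it by theme
    return [{"title": i, "category": "applied-genai"} for i in items]

def human_approves(episode: dict) -> bool:
    # the author reviews and directs each episode before publishing
    return True

def generate_audio(episode: dict) -> str:
    # stand-in for handing the content to an audio-generation service
    return f"{episode['title']}.mp3"

def publish(audio_file: str) -> None:
    print(f"published {audio_file} to RSS/Spotify/Apple Podcasts")

for episode in extract_and_categorize(gather_sources()):
    if human_approves(episode):
        publish(generate_audio(episode))
```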
- Matt Ober
Similarweb acquires 42matters, an app intelligence data company. Smart acquisition, and I would expect Similarweb to use their public stock to acquire aggressively; lots of data and data technology would make sense as part of their company. 42matters' expertise in app store, engagement, and mobile SDK data promises to deliver more in-depth insights into app performance and user engagement. The integration of these advanced solutions is expected to benefit app owners by providing a clearer view of their apps' positioning against competitors and user interaction. The 42matters dataset includes information from 12 app store platforms, over 2.1 million publishers, more than 2,600 SDKs, and upwards of 20 million apps. The combination of the two companies should benefit from Similarweb's distribution.
- Lisa Dolan
My updated resources list for founders for the fall (links to all the resources in the Comments section). I was recently asked by a founder: "Other than companies I admire, do you have any resources (blogs, books, talks, etc.) I could take a look at that might get me better oriented within this space? I'm still very new to the startup world, so any help is greatly appreciated!" To answer that question, my fave books/resources that I believe founders will find the most valuable are:
The Hard Thing About Hard Things - https://lnkd.in/gnF65-ed
The Lean Startup - https://lnkd.in/gCZ_57yk
The Four Steps to the Epiphany - https://lnkd.in/ghVTHdCh
How Google Works - https://lnkd.in/gSsyt_9e
The Startup Owner's Manual - https://lnkd.in/ggczcf3E
Zero to One - https://lnkd.in/eF4yjeWb
Built to Last (not startup-specific) - https://lnkd.in/eRWqEGky
Harvard Business Review case studies - https://lnkd.in/ezWzF39S
The Information (it's a great blog/paper) - https://lnkd.in/eC73wfwW
Scott Galloway (I love him) - https://lnkd.in/eRy6r7jN
Any suggestions you'd make, let me know in the Comments section below! Link Ventures #mondaymotivation #venturecapital #foundersjourney
- Rohit Agarwal
"The AI landscape isn't a monopoly—it's a vibrant ecosystem." This isn't just an opinion. It's backed by real data from Portkey's AI Gateway. We've analyzed the AI providers used by a sample set of organisations using LLMs in production over the last 30 days. Here's what we found: 📊 Key insights (from Portkey's data sample of ~500 organizations): - OpenAI leads with 45.58% market share, but doesn't dominate completely - Anthropic (12.19%) and Azure-OpenAI (7.85%) are strong contenders - Google holds 6.51%, closely trailing Azure-OpenAI - 14.86% use "Other" providers, hinting at a diverse long tail What's intriguing: - OpenAI's significant lead is balanced by strong competition - The field is diverse, with 7 named providers plus others - Newer entrants like Anthropic have quickly gained substantial ground 🤔 This raises some thought-provoking questions: - How will this distribution evolve over the next year? - What factors are driving organizations to choose alternatives to OpenAI? - How does this diversity impact AI implementation strategies across industries? #AITrends #MachineLearning #TechStrategy What's your take on these findings?
- Sam Lee
Part II of our blog is out! In this installment, Abde Tambawala and I explored usage metrics and pricing models across different AI modalities and discussed how traditional user-based subscriptions will need to be supplemented with usage metrics. Take a look and let us know what you think! #pricingstrategy #ai #monetization
- Lucas Dickey
A plea to media types writing about the cost of AI: recognize that there is a difference between the huge GPU/TPU clusters needed to train foundation models and the inference costs to run most apps on top of foundation models. The two are not synonymous, so you can dial back some of the doom and gloom about AI startups, and stop commingling the two different AI business types.
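To make the distinction concrete, here is a back-of-envelope comparison; every number below is a hypothetical placeholder, chosen only to show that the two costs live on entirely different scales:

```python
# Hypothetical back-of-envelope numbers: training is a one-off capital
# expense on a large cluster, inference is a small per-request operating cost.
gpu_hours_to_train = 1_000_000      # hypothetical cluster time for pretraining
dollars_per_gpu_hour = 2.0          # hypothetical rental rate
training_cost = gpu_hours_to_train * dollars_per_gpu_hour

tokens_per_request = 1_000          # chat-sized request (assumed)
dollars_per_1k_tokens = 0.002       # hypothetical API inference price
inference_cost = (tokens_per_request / 1_000) * dollars_per_1k_tokens

print(f"one-off training cost:      ${training_cost:>12,.0f}")
print(f"per-request inference cost: ${inference_cost:>12.4f}")
```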
- Rahul Reddy
If you're wondering how AI will be monetized, and how we'll come to pay for it, this article by Abde Tambawala and https://lnkd.in/gYJxgPSS is worth a read. If you're building an AI-driven product and are thinking about how you'll price and package it, this is a must-read.
- Lance Hasson
Two thoughts on the OpenAI o1 launch:
1. Introducing "Thinking" is a very smart UX choice. When building ML systems, a key challenge is setting user expectations. "Thinking" turns long wait times and high latency into a positive: the model is working on giving you a higher-quality answer.
2. What does the productivity curve of o1 look like? Humans have variable productivity - we spend time thinking, get distracted, rest, refuel, etc. It's easy to think that LLMs have a linear productivity curve - the token output is roughly constant. How does "thinking" affect "productivity" in terms of the successful output of the model? How often does the thinking fail, leading to unproductive compute?
https://openai.com/o1/
- Anji Ismail
💥 Finnt is now publicly available! 🌎 Building AI-native software is a whole new beast - that's actually why there are so many great demos but so few amazing experiences in production. The nature of generative AI makes it difficult to reproduce identical, deterministic outcomes: where a traditional feature has two statuses, "functional" vs. "bugged", in the GenAI world a slightly different prompt or setup can drastically change the output. But this is also what makes AI so powerful, and why we are so excited to be building in this space. 🧠 As a reminder, Finnt specializes in extracting crucial data from large, unstructured, and complex documents — including OMs, PPMs, 10-Q/10-K filings, as well as prospectuses, reports, leases, loans, etc. It then integrates these findings into AI-generated and templated memos, enriching them with real-time market data. 🔑 Key facts:
- Ingest any large, complex, and unstructured files
- Design custom frameworks tailored to your specific use cases
- Select your preferred LLMs, from closed to open-source options
- Customize the goals, structures, and formats of your output reports
- Augment your reports with third-party sources and live market data
- Ask your files anything and transform responses into insights
- Seamlessly integrate Finnt into your existing workflows
👉 Check the comments for the signup link. PS: you might need a custom onboarding, but we are nice people 😅
- Jerry Wu
AI teams I've spoken to are spending way too much money building massive internal benchmarking datasets for their product before releasing to a single user 🙁 Among the hundreds of AI engineers we spoke with, the most effective ground truth datasets were only between 50 and 150 examples in size. Common features? Small quantity, high quality, and extremely iterative. Only two companies had ground truth datasets with 1000+ examples, and both threw it all away once real users started trying the product. "We threw our benchmarking data away after we saw real user queries" - AI Engineer, Tech Consulting Firm. ❗ Benchmarking datasets are inherently flawed because they NEVER correlate with how users actually use your AI agent product. They also change as new user behaviors, requirements, and product features are introduced. Therefore, don't overthink it. The best practice we've seen is deploying early and collecting real user queries and responses over time as part of a continuous benchmark creation process. The dataset is a living thing (just like any product), and spending too much time crafting the perfect benchmark set before releasing is wasted effort. Another emerging practice is using evaluation models (model as a judge) to give first-pass heuristics on performance. We're really excited about this approach because evaluation models are highly flexible and decrease the cost of and reliance on large benchmarking datasets. 🚀 Let me know if this experience resonates! Twitter: https://lnkd.in/eAsQvtiu
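For readers new to the "model as a judge" idea, here is a minimal sketch of the loop: score logged user interactions with a grading model instead of a hand-built benchmark. The `judge` function is a hypothetical stand-in for a real LLM call:

```python
# Sketch of model-as-a-judge evaluation over real logged queries.
# judge() is a placeholder; in practice it would send the rubric prompt to
# an evaluator LLM and parse a numeric score from its reply.

def judge(query: str, response: str) -> float:
    rubric = (f"On a scale of 0 to 1, how well does this answer the "
              f"question?\nQ: {query}\nA: {response}\nScore:")
    # a real implementation sends `rubric` to an evaluator model here
    return 0.8  # placeholder score

# a "living" benchmark: real user queries collected after deploying early
logged = [
    ("how do I reset my password", "Go to Settings > Security > Reset."),
    ("can I cancel my order", "Orders can be cancelled within 24 hours."),
]

scores = [judge(q, r) for q, r in logged]
print(f"mean judge score over {len(scores)} logged queries: "
      f"{sum(scores) / len(scores):.2f}")
```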
- Chia Jeng Yang
Some quick Q1 reflections on Knowledge Graphs and RAG, with high-level principles to think about when developing Graph RAG systems:
1. The market for RAG is huge, because white-collar work is largely an information retrieval process.
2. RAG is a process, not a system, and should be automated piecemeal, since almost every question can be regarded as its own information retrieval process. Group questions and tackle RAG accordingly.
3. Small Graphs and Minimum Viable Graphs, not large Graphs, are the future of Graph RAG.