Hard tech is hard. For Anduril Industries, Palantir’s Warp Speed makes it 200x easier.
At #DevCon1, Analytics Engineer Callum McCann explained how Anduril is harnessing the power of the ontology to accelerate material resource planning within Arsenal OS, their integrated digital manufacturing platform, transforming processes that once took 2-4 hours into 30 second tasks.
“Foundry has helped Anduril take the next step into the realm of real-time decision making. This isn’t just about technology - it’s how we use it to cut through complexity and stay several steps ahead in a very quick-moving industry. At the heart of all of this is the Foundry ontology, which I believe is the most fully-featured semantic layer on the market. It takes all of the skillsets that we’ve been developing as data, analytics, and software engineers and allows us to unify that into one platform.”
Hello, everyone. My name is Callum McCann. I'm an Analytics Engineer at Anduril. I've spent the majority of my career in the data ecosystem, most recently at dbt Labs. I joined Anduril just about a year ago and have spent the last year focused on our rollout of Foundry, and more specifically on our usage of Foundry for demand and supply planning. Before I get into that, though, some of you may not know who Anduril is or what we do. Anduril is a global defense technology company. Our mission is to transform national security capabilities with advanced technology. We're differentiated by a few principles: first, software is at the core of everything we do; second, the US and our allies will gain asymmetric advantage through affordable mass; and third, a new business model is critical to moving quickly. If you're curious about what any of those actually mean, please pull me aside later in the conference; I'm more than happy to talk your ear off about them. But with that, let's dig into the actual problem statement. I want to begin by saying hardware is very hard, and Anduril is a great example of why. I came from the software industry, so a lot of this has been learning for me as well. Anduril is a highly complicated organization that deals in an environment of uncertainty and changing needs. For example, product needs can change based on evolving conditions around the world, or our supply chains might be impacted by global conflict. And that is to say nothing of how complicated those supply chains are in the first place: sourcing all the components for world-class hardware is difficult regardless of industry. So let's start at the beginning of this problem and walk through the process, starting with building our demand signal. All hardware production needs an accurate picture of what you need to build in order to prepare for it: to get all your components, hire the right people, secure the right facilities, and so on.
When we first started at Anduril, we built our demand plans manually. At the time it was the right solution: we relied on people's expertise, we didn't waste resources trying to automate things, and we could do everything by hand. However, it was a very painful process. The planners had to aggregate information from across the company: growth information from our CRM, development information from our engineers, maintenance information from deployments. Then they had to consolidate all of it into a specific format that could go into our ERP to drive supply processes. At the time, with fewer product lines than we have today, the entire process took about one to two weeks and was an acceptable burden. As our company has scaled, we have more demand (a good problem to have) and we've reached higher levels of complexity. We reached the point where the manual processes weren't meeting our needs anymore, to say nothing of at-risk demand and the financial risks we need to take on in order to procure that hardware. It was at this point that we were first piloting Foundry, and we quickly realized we could use it to help our planners quickly and accurately build their demand plan. To solve these problems, we started developing a suite of planning tools that we call Mercury. The first part is probably familiar to most of you in this room: ingesting, transforming, and modeling data into the Foundry ontology. Once we had created the foundation with objects like items, inventory, and bills of materials (I can talk your ear off about bills of materials), we spent time digging into the nuance of the demand workflow specifically and settled on the following ontology: demand events as the atomic unit. A demand event is a specific piece of hardware on a particular opportunity for a particular date.
This data is largely sourced from our CRM on an hourly refresh rate, with a lot of nuance packed into that little line as well. Then we have the demand scenario as a container for all of the demand events. This gives users the ability to create demand plans for each month with different risk profiles: if they want a high-risk scenario and a low-risk scenario, they can do all of that. So now let's dig into a demo. We start on the demand planning module. Each row within this table represents a specific demand event tied to a demand scenario, in this instance the DevCon scenario I've created with dummy data. Each of these two rows is dummy data for an upcoming delivery. Let's put ourselves in the shoes of our demand planners and say that something about this demand may be wrong. In that case, we'd be able to edit all the critical properties in case the information is not up to date: the item revision may have changed, the production line may have been moved to a new facility, or the quantity of the demand may need to be updated based on customer feedback. Once we finalize what we want included, we can create a draft upload. This runs through a number of validation checks and produces the output of what is actually going to be uploaded into the ERP. Right now you'll see nothing on the screen because it's a demo, but in a real-life scenario you'd see a number of demand events flagged for violations, be it improper item types, invalid dates, and so on. If any demand events don't pass our validations, you can click in and make edits right then and there to fix whatever the issue is. As soon as we feel confident in our upload and all the tests have passed, we can push our demand plan directly into our ERP through our integration. This upload then powers all of the supply planning processes, some of which I'll talk about later.
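The workflow described above, demand events as the atomic unit, demand scenarios as containers, and pre-upload validation checks, can be sketched roughly as follows. This is a minimal illustration only: all class and field names here are hypothetical, since Anduril's actual ontology objects and validation rules are not public.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DemandEvent:
    """Atomic unit: a specific item, on a specific opportunity, for a specific date."""
    item_id: str
    opportunity_id: str
    need_date: date
    quantity: int
    facility: str

@dataclass
class DemandScenario:
    """Container for demand events, e.g. a monthly plan or a high/low-risk profile."""
    name: str
    events: list = field(default_factory=list)

    def validate(self, known_items: set) -> list:
        """Pre-upload checks: return human-readable violations the planner
        can click into and fix, instead of failing the whole upload."""
        violations = []
        for e in self.events:
            if e.item_id not in known_items:
                violations.append(f"{e.opportunity_id}: unknown item {e.item_id}")
            if e.quantity <= 0:
                violations.append(f"{e.opportunity_id}: non-positive quantity")
            if e.need_date < date.today():
                violations.append(f"{e.opportunity_id}: need date in the past")
        return violations
```

A planner could then build a "DevCon" scenario, run `validate`, edit any flagged events, and only push to the ERP once the violations list comes back empty.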
But let's say you're a business line lead and you want to understand what exactly demand you're committed to. In order to understand this at the highest level, we can flip over to the demand plan overview tab where we built dashboarding presentations you can. Understand what the current demand plan is broken down by business line. This module is used to lead reviews within the business line, get production line, or get production leads us all on sort of the same page and ensure that everyone understands what demand is. Not the most technically hard problem, but I'm sure everyone in this room can agree that getting 10 people aligned on something is actually a lot harder than expected. So now that we've gotten that alignment, we now get to take advantage of what I believe to be the greatest power of Foundry, which is the network effects of a strong data model. The benefit of the Foundry ontology, once you've defined it for a critical process, the lift to build the next process is reduced. Each derivative use case is easier. And it was around this time that we were finishing, finishing the first version of Mercury that we decided to partner with Palantir. Together on warp speed, they're manufacturing operating system together we've built. About a simulation module that takes in the demand that we have defined in Mercury and quickly runs through a material requirement, material requirements planning algorithm. For those of you who aren't familiar what this is, this is a process that traverses the entire bill of materials for every single piece of demand and determines what you need to build, what you need to purchase and on what time in order to meet that demand. In our ERP today, this process takes about two hours, two to four hours actually, and requires managing state. In two different locations, both the ERP and in foundry. 
Additionally, it abstracts away the individual pieces of demand, saying simply "we need 10 of this unit" instead of "we need 5 for customer A and 5 for customer B." In MRP Speed, the name of Foundry's simulation engine, this takes about 30 seconds and ties every single output to a specific piece of demand. We begin by selecting our DevCon scenario and then a supply scenario. The supply scenario allows us to simulate changes to the supply position and save them across different runs. Once we hit the run button, you can see that it completes very quickly, though that's only because there are just two demand events. Upon completion, we can see a high-level picture of all of our demand and how well we're able to meet it. In this example, you can see a quantity required of 50 and a quantity on time of 50. In reality this would never occur; you'd have thousands of distinct demand events and a varying range that you'd be able to meet. So let's say there was an event with some sort of complication: you'd then be able to click into violations. This is again empty in our example, but in a real-life use case you'd see lead time violations, inventory stock violations, and shortages. Then we look at our planned orders. These are the outputs from MRP Speed that tell us what we need to do. For example, this second row tells me that I need a work order for 100 units on November 24th in order to meet that top-level demand. Most importantly, you can click into it and see the exact demand that it ties to. We can click into the allocations and see what top-level or intermediate demand this order contributes to and unlocks. This is a huge unlock for our supply planners, because now they can understand exactly what each order feeds into, how much revenue it might drive, and how they need to prioritize.
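The core of the MRP step described above, traversing the bill of materials for each piece of demand and pegging every requirement back to the demand that drives it, can be sketched in a few lines. This is a simplified illustration with a hypothetical BOM; a real MRP run (as in the ERP or MRP Speed) would also net against on-hand inventory, offset by lead times, and schedule work and purchase orders.

```python
from collections import defaultdict

# Hypothetical single-level BOM: parent item -> [(child item, qty per parent)]
BOM = {
    "unit":     [("airframe", 1), ("motor", 4)],
    "airframe": [("carbon_panel", 6)],
}

def explode(item, qty, demand_id, requirements):
    """Recursively traverse the BOM, accumulating component requirements
    and pegging each one back to the top-level demand that drives it."""
    requirements[item][demand_id] += qty
    for child, per_parent in BOM.get(item, []):
        explode(child, qty * per_parent, demand_id, requirements)

def run_mrp(demand_events):
    """demand_events: list of (demand_id, item, qty).
    Returns {item: {demand_id: required_qty}}, so planners can see
    exactly which piece of demand each planned order unlocks."""
    requirements = defaultdict(lambda: defaultdict(int))
    for demand_id, item, qty in demand_events:
        explode(item, qty, demand_id, requirements)
    return requirements
```

For instance, `run_mrp([("customer_A", "unit", 5), ("customer_B", "unit", 5)])` reports 20 motors and 30 carbon panels per customer rather than a single anonymous total, which is the pegging behavior that lets a planner trace an order back to the demand it serves.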
The biggest unlock here is from a workflow perspective: if you think about the change from 2-4 hours to 30 seconds, that is at bare minimum a 200x increase in productivity and in planners' ability to iterate through their workflows and make decisions. So, to close out: Foundry has helped Anduril take the next step into the realm of real-time decision making. This isn't just about technology; it's how we use it to cut through complexity and stay several steps ahead in a very quick-moving industry. At the heart of all of this is the Foundry ontology, which I believe is the most fully featured semantic layer on the market. Even beyond the network effects and the unified analytics and operations, the big unlock for us is that it takes all of the skill sets that we've been developing as data, analytics, and software engineers throughout our careers and allows us to unify them in one platform. So I urge you, as you reflect on today's talks, as you listen to everyone speak and think about your next project: think big. Because you may just be able to accomplish something that you didn't think was previously possible.
CEO & Co-founder Social Links (USA) | DarksideAI & BrightsideAI | Open Data & Big Data | Forbes Technology Council Member | Talks about #entrepreneurship #businessdevelopment #osint
Yep, agree. We are doing exactly the same: we want to unlock innovation and performance in the economy with Open Data (publicly available information) & AI.
200x Easier: 2-4 hours to Just 30 seconds
All companies sourcing best-in class materials, here are some takeaways:
Old way (1-2 weeks):
❌ Manual demand planning
❌ Aggregate information across the company (growth information from the CRM, development information from engineers, maintenance information from deployments)
❌ Consolidate information and put into ERP
Problem: Expertise-dependent, unscalable, no continuous learning
New way (hours):
✅ Integrate diverse data sources (ERP, MES, PLM, etc.), human input, and demand-impacting events (risks, demand surges, etc.)
✅ Align 10+ stakeholders on demand forecasts using common business terminology
✅ Utilize AI/ML with real-time data for optimal purchasing decisions
Advantage: Faster, more informed decision-making across all levels and departments
"This isn’t just about technology - it’s how we use it to cut through complexity and stay several steps ahead in a very quick-moving industry. At the heart of all of this is the Foundry ontology, which I believe is the most fully-featured semantic layer on the market. It takes all of the skillsets that we’ve been developing as data, analytics, and software engineers and allows us to unify that into one platform."
Palantir Technologies | Anduril Industries #warpspeed #palantir #digitaltransformation #materialharmonization #hardware #software
It's incredible what software and the intelligent people who provide it can achieve in terms of productivity.
This is about the defense sector, but these workflows and procedures can increase the efficiency of all kinds of industries.
I was imagining calculating and documenting this by hand; I think we are all better off with software like this!
Respect to Anduril Industries for this cooperation, development and implementation.
#AI #Anduril #Palantir #Futurethinking #Innovation #Workflow #Optimization #Efficiency
**Driving the Future of Problem Solving: Optilogic and Gurobi Extend Partnership**
In the realm of software development and AI, collaborations are key to unlocking unprecedented possibilities. Exciting news has emerged as Optilogic and Gurobi extend their partnership, paving the way for groundbreaking advancements. Together, they are exploring innovative applications of Mixed Integer Linear Programming (MILP) and artificial intelligence to tackle some of the most complex and significant challenges faced today.
This partnership leverages the robust optimization capabilities of Gurobi and the strategic insights of Optilogic, aiming to revolutionize industries that rely heavily on data-driven decision making. By enhancing MILP with AI, these companies aim to develop solutions that are not only efficient but also highly adaptable to dynamic environments.
Key highlights of this partnership include:
- Advanced algorithms that integrate MILP and AI for optimal solutions
- Enhanced capabilities for sectors such as logistics, manufacturing, and finance
- A collaborative approach that fosters innovation and rapid problem-solving
As these two industry leaders join forces, they are set to drive substantial progress in how we approach optimization and AI solutions. Stay tuned for the incredible developments that will arise from this powerful collaboration.
📩 DM me if you want to access the best software development team!
#Optilogic #Gurobi #AI #MILP #Innovation #SoftwareDevelopment #Optimization #Partnership #FutureTech #ProblemSolving
Introducing RedEye by Accruent, the newest jewel in our tech crown at Kinsmen Group! 💎
In a world where engineering information can feel like a labyrinth, RedEye is the guiding light you've been searching for. Did you know? 71% of Engineers struggle to find the right document—a staggering statistic that underscores the urgent need for a solution like RedEye.
No more endless searches or sleepless nights wondering:
🔍 What and where is it?
🔍 How can I find it?
🔍 Is this the latest version?
🔍 What else am I missing?
Say goodbye to the chaos of multiple locations and hello to streamlined access and clarity.
But wait, there's more! Behind the scenes, our proprietary Kinsmen AI uses machine learning and AI to ensure that every piece of data and document in RedEye is pristine. From deduplication to data cleansing, metadata export, and identifying missing information for an asset, Kinsmen AI ensures that your RedEye experience is nothing short of perfection.
At Kinsmen, we don't just deliver technology; we sculpt solutions that redefine industry standards. With our award-winning expertise and unwavering commitment to your success, we're not just partners—we're catalysts for your triumph.
Let's pioneer together! 🚀 Contact us for a demo.
#KinsmenAI #RedEye #DigitalTransformation #DataMastery #ProcessInnovation
Business Analyst | Data Analyst | Power BI & Excel Specialist | Digital Marketing Consultant | Project Management Professional | Driving Business Growth Through Data-Driven Insights
I've been deep into operationalizing GenAI capabilities on IntelCenter's data now for five months. Three observations:
1) Periodically one of our GenAIs will write something and I'll look at it and think that can't be right. It must have it completely wrong. I'll then spend a good amount of time researching it only to discover that I'm the one who is wrong. (This just happened five minutes ago and thus this post.)
2) It's not always me. 😀 Sometimes I do find the GenAI is wrong. However, the number of times the GenAI is wrong is nothing compared to a green analyst or even a much more seasoned one who is incredibly overtasked. Also, a good percentage of those times the GenAI is wrong has more to do with how the data it's given is presented rather than its thinking being flawed.
3) There are many cases where it's not about being right or wrong. The GenAI can present thought-provoking points that lead human analysts to explore new areas or inspire different ways of thinking. Here the GenAI response, whether spot-on, flawed, or completely missing the mark, is just acting as that initial spark, and its contribution is valuable if it results in a valuable human output.
Factor in that the GenAI is doing tasks in seconds that would take a team weeks to complete and you start to get the game-changing power this brings to the table.
Everything said here though is predicated on the GenAI having access to quality data that is properly presented. Give the smartest GenAI in the world bad data and you will get bad responses.
#genai #intelligenceanalysis #threatanalysis #osint #cai
HFS Research's recent report shows that only 5% of global enterprises have successfully deployed GenAI. This could mean two things:
One: Companies are taking a cautious approach to scaling, given how new the tech is.
Two: The use cases just aren't that impactful, or companies aren't picking the right use cases to create the largest impact.
I think it's number 2.
#GenAI is not yet a burning platform. It’s a smoldering platform! Just 5% of global enterprises have successfully deployed genAI!
HFS Research | Phil Fersht #hfsdisrupt
It was an interesting session by Arun Chandrasekaran, where he talked about the 10 best practices for scaling #generativeai across the enterprise. It emphasized that the near-term future is slimmed-down models and domain models. We couldn't agree more. Our CTO Amitabh Patil has been pioneering in #GenAI for the past 6 years, and from day one our focus has been on building smaller, domain-specific models for the #lifesciences domain using OSS (open-source models).
Great example of tech enabling industry transformation. Foundry's semantic layer seems like a big leap for efficient workflows.