Our very own Adam Cimarosti flew to Austin to give a talk about #QuestDB at Data Council's annual conference on #Data and #AI. He delves into the internals of QuestDB and what makes it so fast and efficient with time-series data, briefly explores some example #SQL queries, and closes with our vision for QuestDB: 💡 On the way in, stream time-series data from the source or #Kafka in real time. On the way out, produce #Parquet files that can be deposited on object stores such as #AWS S3 for infinite scale. 🔥 After that, querying Parquet files directly from QuestDB becomes a reality. Bypassing QuestDB entirely and depositing Parquet on S3 will also be possible, with the option to leverage the QuestDB engine to query this data in an open format. In short, QuestDB's #database is moving away from proprietary formats and monoliths. ⚡ Thank you Pete Soderling for the invite, it was a blast! https://lnkd.in/ebSbSqNV
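The talk's SQL examples center on QuestDB's time-series queries, such as downsampling ticks into time buckets. As a rough, stdlib-only sketch (hypothetical data, not QuestDB's actual engine), this is the kind of aggregation a one-minute downsampling query computes:

```python
from datetime import datetime, timedelta, timezone

def sample_by(rows, bucket_seconds):
    """Average 'value' per time bucket -- an illustrative stand-in for a
    QuestDB-style downsampling query (e.g. averaging per one-minute bucket)."""
    buckets = {}
    for ts, value in rows:
        key = int(ts.timestamp()) // bucket_seconds  # which bucket this tick falls in
        buckets.setdefault(key, []).append(value)
    return {
        datetime.fromtimestamp(k * bucket_seconds, tz=timezone.utc): sum(v) / len(v)
        for k, v in sorted(buckets.items())
    }

# Hypothetical tick data: two ticks in minute 0, one tick in minute 1
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
rows = [
    (t0, 10.0),
    (t0 + timedelta(seconds=30), 20.0),
    (t0 + timedelta(minutes=1), 30.0),
]
print(sample_by(rows, 60))  # one averaged row per minute bucket
```

In the real database the same result comes from a single SQL statement over the designated timestamp column; the sketch only shows the bucketing-and-averaging semantics.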
Nicolas Hourcard, CFA’s Post
More Relevant Posts
-
If you missed QuestDB at Data Council's conference in Austin, check out Adam talking about the inner workings of #QuestDB, its speed, and its efficiency, with example #SQL queries. #opensource #timeseries #database
Optimizing Time Series Data in Mixed Architectures with QuestDB
https://www.youtube.com/
-
🚀 Excited to share my latest project: Building a machine learning pipeline with Airflow to forecast time-series data using ARIMA models! In this project, I've designed a robust pipeline in Airflow that reads datasets from an S3 bucket, trains ARIMA models to make accurate predictions, and stores the trained models back to S3 for easy access and scalability. Check out my detailed blog post where I dive deep into the technical aspects of this project and share insights into the challenges and solutions: https://lnkd.in/gdVbtvkS #Airflow #DataEngineering #DataPipeline #MachineLearning #DataScience #ARIMA #S3 #TimeSeriesForecasting #MachineLearningPipeline #MLOps
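The pipeline's core step is fitting an autoregressive model to a series and forecasting forward. The blog post presumably uses a full ARIMA implementation (e.g. statsmodels) inside an Airflow task; as a stdlib-only toy sketch of that train-then-forecast step, here is a least-squares AR(1) fit on hypothetical data:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1] -- a toy AR(1),
    standing in for the ARIMA training step of the pipeline."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast(series, steps, model):
    """Roll the fitted recurrence forward from the last observation."""
    c, phi = model
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Hypothetical series generated exactly by x[t] = 1 + 0.5 * x[t-1]
series = [4.0]
for _ in range(9):
    series.append(1 + 0.5 * series[-1])

model = fit_ar1(series)       # recovers c ~= 1.0, phi ~= 0.5
print(forecast(series, 3, model))
```

In the real pipeline this fit would run inside an Airflow task, with the series pulled from S3 beforehand and the pickled model pushed back to S3 afterwards.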
-
#DBRX is a new general-purpose and open source #LLM trained from scratch using the #Databricks Data Intelligence Platform. While it has 132B total parameters, with its fine-grained MoE architecture, DBRX only uses 36B at any given time. That makes it great for enterprises that want to efficiently build and train LLMs on their own data, with full governance to support it. Learn more about how the Databricks Mosaic Research team built #DBRX and benchmarked its performance.
Introducing DBRX: A New State-of-the-Art Open LLM
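The reason only 36B of the 132B parameters are used per token is the mixture-of-experts router: each token is sent to a few top-scoring experts, so most expert weights stay idle. As a toy sketch of that top-k gating (illustrative numbers, not DBRX's actual code):

```python
import math
import random

def top_k_route(scores, k):
    """Pick the k experts with the highest router scores and
    softmax-normalize their weights (toy MoE gating)."""
    idx = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(scores[i]) for i in idx]
    z = sum(exps)
    return {i: e / z for i, e in zip(idx, exps)}

# Toy layer with 16 experts, 4 active per token -- a fine-grained MoE
# in the spirit of DBRX, where only a fraction of the experts (and
# hence parameters) does work for any given token.
random.seed(0)
scores = [random.gauss(0, 1) for _ in range(16)]
active = top_k_route(scores, 4)
print(sorted(active))        # indices of the 4 active experts
print(sum(active.values()))  # mixing weights sum to 1.0
```

Each token's output is then the weighted sum of just those active experts' outputs, which is why active compute scales with k rather than with the total expert count.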
-
🌟Unity Catalog breaks barriers with seamless interoperability across data formats and compute engines, opening the door to a more flexible and open data architecture. Our latest blog breaks down its impacts and why there’s never been a better time to embrace a more open approach to your infrastructure. https://lnkd.in/g2q-g9iN #DataAnalytics #DataEngineering #DataLakeAnalytics #DataLake #DataLakeHouse
Build a More Open Lakehouse With Unity Catalog
starrocks.io
-
Great conversation between EDB's Marc Linster and Ayse Bilge Ince on how EDB is stepping up to tackle AI challenges. With AI increasingly relying on complex, non-traditional data, Bilge explains how we’re enhancing Postgres to accommodate new methods of storing and managing vast amounts of multimodal data in a disaggregated lakehouse architecture. Check out this clip, and read the blog to learn more >> https://lnkd.in/eKrtqThp #EDBPostgresAI #JustSolveItWithPostgres #DataLakehouse #AIChallenges #MultimodalData #Postgres17 #PostgreSQL17 #DataArchitecture
-
#DBRX is a new general-purpose LLM trained from scratch using the Databricks Data Intelligence Platform. While it has 132B total parameters, with its fine-grained MoE architecture, DBRX only uses 36B at any given time. It’s great for enterprises that want to efficiently build and train LLMs on their own data. Learn more about how the Databricks Mosaic Research team built #DBRX & benchmarked its performance https://lnkd.in/gYKpyuDM
-
Breaking news on the LLM front! Databricks has released DBRX, our new standard for building open source, customizable LLMs using data from our Data Intelligence Platform. With DBRX, customers get 132B total parameters in a fine-grained MoE architecture while only ever using 36B at any given time. This was a tool long requested by our larger customers, and DBRX is the way for those enterprises to efficiently and cost-effectively build and train LLMs on their own data. Learn more about how the Databricks Mosaic Research team built #DBRX & benchmarked its performance.
Introducing DBRX: A New State-of-the-Art Open LLM