Time is money, right? You can now save more time by analyzing your financial data from Snowflake, MySQL, or SQLite directly in Terminal Pro. This means you can combine the power of our features with the value of your own data. What native integrations would you like to see next in Terminal Pro? 🤔 You can try Terminal Pro with a 3-week free trial 🦋 Check out the comments to sign up.
More Relevant Posts
-
🔍 Meet Anyquery — a powerful CLI tool for running SQL queries on any data source, from files and APIs to logs and local apps. 🔗 https://anyquery.dev/ It supports JSON, CSV, Parquet, Airtable, Google Sheets, Notion databases, Gmail, and more. With SQLite under the hood, it can even serve as a MySQL server for BI tools. #DataTools #SQL #TechInnovation
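The appeal is that wildly different sources all end up looking like ordinary SQL tables. As a purely illustrative sketch (the table names below are made up; the real ones depend on which connectors you configure), a cross-source join could look something like this:

```sql
-- Illustrative only: hypothetical table names standing in for
-- a local CSV and a Google Sheet exposed as SQL tables.
SELECT c.customer_id,
       c.name,
       SUM(o.amount) AS total_spent
FROM csv_orders       AS o   -- e.g. orders.csv exposed as a table
JOIN sheets_customers AS c   -- e.g. a Google Sheet exposed as a table
  ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.name
ORDER BY total_spent DESC
LIMIT 10;
```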
-
Finally got the OneLake data hub working with Power BI Desktop and Service, tunnelling into an on-premise server. MS certainly does not make it super easy.

I've created a .NET library covering about 70% of PropertyWare's new REST API. Sometimes I think about the world without Newtonsoft; did I ever tell you about the time when I thought I could hack some JSON by hand and it wound up eating all of my time without the results I wanted?

Also wrote a document puller that walks the WorkOrders collection and pulls all documents into a ZIP. https://lnkd.in/gWNR_Xm4
-
Are you interested in enhancing performance and manageability with partitioning in SQL Server? Then take a look at my latest blog post: https://lnkd.in/eq2Sdnds
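The post has the details; as a rough reminder of the moving parts (a partition function, a partition scheme, and a table created on that scheme), here is a minimal sketch with hypothetical names, not code from the blog post:

```sql
-- Minimal sketch: monthly partitioning of a hypothetical sales table
-- by order date in SQL Server.
CREATE PARTITION FUNCTION pf_OrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales
(
    SaleID    bigint         NOT NULL,
    OrderDate date           NOT NULL,
    Amount    decimal(18, 2) NOT NULL
) ON ps_OrderDate (OrderDate);
```

Queries that filter on OrderDate can then be eliminated down to the relevant partitions, and maintenance (switching, truncating, rebuilding) can be done per partition.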
-
🚀 𝐔𝐧𝐯𝐞𝐢𝐥𝐢𝐧𝐠 𝐂𝐥𝐢𝐜𝐤𝐇𝐨𝐮𝐬𝐞: 𝐀 𝐆𝐚𝐦𝐞 𝐂𝐡𝐚𝐧𝐠𝐞𝐫 𝐟𝐨𝐫 𝐎𝐋𝐀𝐏 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐬! (For all the data enthusiasts and tech professionals out there)

I recently came across 𝐂𝐥𝐢𝐜𝐤𝐇𝐨𝐮𝐬𝐞, and I'm thoroughly impressed by its performance! This open-source columnar database is revolutionizing how we handle large-scale data analytics. Here's why I'm so excited about ClickHouse:

1. 𝐁𝐥𝐚𝐳𝐢𝐧𝐠 𝐅𝐚𝐬𝐭 𝐒𝐩𝐞𝐞𝐝: ClickHouse processes millions of rows in milliseconds. I ran a test with 15 million records, and it processed the data in under a second! In comparison, PostgreSQL took several minutes to handle the same task.

2. 𝐔𝐧𝐦𝐚𝐭𝐜𝐡𝐞𝐝 𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞: Published benchmarks claim it can be up to 1000 times faster than MySQL, 300 times faster than PostgreSQL, and 60 times faster than Elasticsearch on analytical workloads. That kind of speed is a game-changer for real-time analytics and business intelligence.

3. 𝐒𝐜𝐚𝐥𝐚𝐛𝐢𝐥𝐢𝐭𝐲 𝐚𝐧𝐝 𝐅𝐥𝐞𝐱𝐢𝐛𝐢𝐥𝐢𝐭𝐲: ClickHouse's columnar storage is designed for OLAP operations, making it ideal for data warehousing and big data analytics. Whether you're handling terabytes of data or running complex queries, ClickHouse scales seamlessly.

4. 𝐂𝐨𝐬𝐭-𝐄𝐟𝐟𝐞𝐜𝐭𝐢𝐯𝐞: As an open-source solution, ClickHouse provides enterprise-level performance without the hefty price tag. It's a powerful tool for startups and established enterprises alike.

Want to experience this for yourself? Try ClickHouse on the 𝐂𝐥𝐢𝐜𝐤𝐇𝐨𝐮𝐬𝐞 𝐏𝐥𝐚𝐲𝐠𝐫𝐨𝐮𝐧𝐝, where you can play with large datasets without any setup: [ClickHouse Playground](https://lnkd.in/daZmiWmx)

Related Links:
- [ClickHouse Official Website](https://clickhouse.com)
- [ClickHouse Benchmark Results](https://lnkd.in/dezANpVz)

#ClickHouse #DataAnalytics #BigData #OLAP #Database #TechInnovation #Performance

P.S. Have you used ClickHouse? Share your experiences and insights in the comments below! Let's discuss how this powerhouse is transforming data analytics.
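To make the columnar/OLAP point concrete, here is a minimal sketch of the kind of table and query ClickHouse is designed for; the schema is a hypothetical event log, not the 15-million-row test mentioned above:

```sql
-- Minimal sketch: a MergeTree table laid out for analytical scans.
CREATE TABLE events
(
    event_date Date,
    user_id    UInt64,
    event_type LowCardinality(String),
    value      Float64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, user_id);

-- A typical aggregation that benefits from columnar storage:
-- only the referenced columns are read from disk.
SELECT event_type, count() AS events, avg(value) AS avg_value
FROM events
WHERE event_date >= today() - 30
GROUP BY event_type
ORDER BY events DESC;
```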
-
See this gif, in which I mount a 32GB dataset from Kaggle as a local directory and then immediately query it using DuckDB. With cloudzip (link to GitHub in comments), I only ever downloaded a handful of MBs to make that work.

This is functionally equivalent to downloading the 32GB zip file, extracting it, and then querying the files inside it, but that would take an order of magnitude more time.

So how does it work? cloudzip leverages 3 simple facts:

1. Kaggle datasets are distributed via their API as zip files.
2. Zip files allow random-access reads: they have a "central directory" at their footer describing the files contained inside, complete with offsets for each file.
3. Object stores (and many HTTP servers) allow using `Range` headers to fetch only parts of a remote object. 😎

cloudzip issues a Range request to fetch the footer where that central directory is stored (typically KBs in size, even for a 32GB archive), and then fetches files according to their offsets only as they are read by the user.

To accomplish this, the `cz mount` command above actually spins up an NFS server listening on localhost and mounts the data_dir against that NFS server. `cz umount` unmounts and kills that server. This works with any of the supported remote storage locations (Kaggle, but also S3, HTTP and lakeFS).

* cloudzip is a pet project, mostly for my own research. See the GitHub link in comments, where you are free to use/contribute/flame me as you please.
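For the curious, this is roughly what that first DuckDB query looks like once the archive is mounted; the file and column names below are hypothetical (the post does not say which dataset it is), and only the files the query actually reads get pulled from the remote zip:

```sql
-- Minimal sketch: querying the mounted dataset with DuckDB.
-- 'data_dir' is the cz mount point; the NFS layer fetches file
-- contents from the remote archive on demand as DuckDB reads them.
SELECT passenger_count,
       avg(fare_amount) AS avg_fare
FROM read_parquet('data_dir/*.parquet')
GROUP BY passenger_count
ORDER BY passenger_count;
```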
-
In this video, we'll examine how to sync user data from Clerk.com to our database using webhooks. We will use Prisma as our ORM to connect to MongoDB for end-to-end type safety. https://lnkd.in/gbVQUiQd
-
Hot Take: RAG is **easy** for 80% of use cases I've seen: - You can do just fine with *any* of the vector databases (including the postgres extension) - You can get reasonable results with *any* LLM provider's embeddings - Please don't say "scale". You can have millions of docs on your desktop and the lookup time is fine. You need a billion docs? Great, put it on a server somewhere.
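As a sketch of the "Postgres extension" route (presumably pgvector), this is roughly all the database setup a basic RAG lookup needs; the embedding dimension and schema below are assumptions, not a recommendation of one provider or layout over another:

```sql
-- Minimal sketch using pgvector in plain Postgres.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE docs (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)   -- whatever dimension your embedding model emits
);

-- Optional approximate index (pgvector 0.5+); at the scale of
-- "millions of docs on your desktop", an exact scan is often fine too.
CREATE INDEX ON docs USING hnsw (embedding vector_cosine_ops);

-- Top-5 nearest documents to a query embedding ($1 is the query vector).
SELECT id, content
FROM docs
ORDER BY embedding <=> $1
LIMIT 5;
```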
-
SQL Server: What are virtual log files and why should you care about them? Get the answers in my recent blog post:
SQL-Server: What are VLF’s and why should I care about them?
https://www.dbi-services.com/blog
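Not a replacement for reading the post, but if you want a quick look at VLF counts on your own instance, these DMVs (available from SQL Server 2016 SP2 onward) report them:

```sql
-- How many VLFs does the current database's log have?
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- Per-database overview across the instance.
SELECT d.name AS database_name,
       ls.total_vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
ORDER BY ls.total_vlf_count DESC;
```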
-
Wow, I've just learned something interesting. It looks like PostgreSQL has a concept of general event triggers that can run a function on certain events, like DDL executions, dropping objects, creating objects, altering objects, etc. Here are the docs: https://lnkd.in/dzJuXe8B

That means you can write a custom event handler that prevents you from accidentally dropping an important table with important data (pictured). You can temporarily disable that event trigger if you really need to drop that table, but in general, as a person who creates and drops a lot of temporary tables, this is a life saver.

Not that I ever needed it, though. Like, ever. It never happened, all right? Trust me, bro. Never! But it's good to know, nevertheless.
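For anyone who wants to try the "save me from myself" setup described above, here is a minimal sketch (the table name is hypothetical, and the linked docs cover the details). The trigger fires on sql_drop, and raising an exception rolls the drop back:

```sql
-- Minimal sketch: block accidental drops of one precious table.
CREATE OR REPLACE FUNCTION guard_important_table()
RETURNS event_trigger
LANGUAGE plpgsql AS
$$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        IF obj.object_type = 'table'
           AND obj.schema_name = 'public'
           AND obj.object_name = 'important_data'
        THEN
            RAISE EXCEPTION 'Refusing to drop %.%', obj.schema_name, obj.object_name;
        END IF;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER guard_drops
    ON sql_drop
    EXECUTE FUNCTION guard_important_table();

-- When you really do need to drop it:
--   ALTER EVENT TRIGGER guard_drops DISABLE;
--   DROP TABLE public.important_data;
--   ALTER EVENT TRIGGER guard_drops ENABLE;
```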
-
In the process of building out a Job Application Tracking System using the MEN stack. Running into issues with user authentication and data relationships in MongoDB. I know MongoDB isn't the best fit for relational data, but I'm currently researching how best to use its embedded or referenced relationships between documents. If you'd like to see the current state of the application, here's the GitHub link: https://lnkd.in/eGHCiYPE