DBeaver 24.3 is out! This release brings a number of updates to improve your workflow: features for analyzing data from external files, improvements for secure networks, and additional database drivers.

Highlights of DBeaver 24.3:
- Flat file drivers: load XLSX, CSV, and Parquet files and work with their data as you would with a relational database. Use joins and aggregate functions to analyze data across multiple files.
- DBeaver Tunnel: a new Team Edition desktop feature that lets users securely connect to databases hosted on closed networks.
- Improved data comparison: composite keys are now supported, which improves data consistency checking between any two tables.
- New database drivers: we added support for libSQL and DolphinDB, expanding support for replicated and time-series data. For libSQL, we created the first open-source JDBC driver, now available on GitHub and Maven.

To evaluate all the new features, update to DBeaver 24.3 today. Learn more about this release: https://hubs.li/Q02_sqjy0
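To illustrate the kind of cross-file analysis the flat file drivers enable, here is a minimal sketch using only Python's stdlib sqlite3. The `orders` and `customers` tables stand in for two hypothetical flat files (e.g. orders.csv and customers.xlsx); the data and column names are invented for illustration, not taken from DBeaver.

```python
import sqlite3

# Stand-ins for two flat files (e.g. orders.csv and customers.xlsx).
orders = [(1, "alice", 120.50), (2, "bob", 80.00), (3, "alice", 35.25)]
customers = [("alice", "DE"), ("bob", "FR")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.execute("CREATE TABLE customers (name TEXT, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)
conn.executemany("INSERT INTO customers VALUES (?, ?)", customers)

# The same join + aggregate you would run across two files in DBeaver:
rows = conn.execute("""
    SELECT c.country, SUM(o.amount) AS total
    FROM orders o JOIN customers c ON o.customer = c.name
    GROUP BY c.country ORDER BY c.country
""").fetchall()
print(rows)  # [('DE', 155.75), ('FR', 80.0)]
```

The point is that once file contents are presented as tables, ordinary SQL (joins, GROUP BY, aggregates) works across them unchanged.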
DBeaver’s Post
More Relevant Posts
-
Two more connectors, a lot more convenience | MS SQL and Oracle connectors are now live (https://lnkd.in/gvvJek9Z)

We've been closely attuned to our clients' needs, and one request has been resoundingly clear: seamless integration with every possible database out there. By adding support for Oracle and Microsoft SQL Server, alongside existing integrations with MySQL, MongoDB, PostgreSQL, Snowflake, and Redshift, Nected is taking a significant step towards becoming a comprehensive no-code platform for backend automation and business logic building. This expanded database connectivity lets users offload integration work and focus on building their applications, without worrying about the underlying data sources.

To reiterate, Nected's mission is to revolutionise how businesses automate dynamic, data-driven flows. By providing a unified interface to a wide range of databases, Nected enables users to unlock the full potential of their data, regardless of where it resides. This seamless integration allows for more efficient and effective business processes, ultimately driving greater productivity and innovation.

Next up: Google BigQuery. This ongoing commitment to connectivity ensures that Nected remains a powerful and versatile platform for businesses of all sizes and industries.

Try it out yourself for free: https://lnkd.in/gvvJek9Z
-
It's real: the world's first Aerospike source connector capable of creating a unified real-time data pipeline to any major RDBMS, NoSQL store, event-streaming service, or data lake. This is not one of the many Kafka toys: it is part of a strongly consistent data replication suite for serious players. Let's see how many of you are looking to bring data to #snowflake now... 🥂
Announcing today: the Aerospike source connector for Gluesync is now GA 📣 Create new real-time data pipelines to connect your Aerospike database to any target, ranging from Oracle, S3, MS SQL Server, and PostgreSQL to many more. Keep your Aerospike data in sync with existing databases thanks to Gluesync's unique built-in bi-directional sync, now 100% compatible with Aerospike. Read more in our announcement below 👇🏻
Introducing the groundbreaking Aerospike Source Connector in Gluesync
https://molo17.com
-
One connector, multiple destinations 🙂 Check out the Aerospike Gluesync source connector for real-time data streaming with scalable performance and consistency. #Aerospike #Molo17 #Gluesync #realtime_data_streaming #CDC #data_engineering
-
🔍 **The Missing Links in Open Table Formats with Open Source Catalogs**

As the data ecosystem continues to evolve, open table formats like Apache Iceberg, Delta Lake, and Apache Hudi have gained significant traction. However, when paired with open-source catalogs, several critical features are still missing:

1. **RBAC (Role-Based Access Control):** Security is paramount, but many open-source catalogs lack robust RBAC support, making it challenging to control access to sensitive data. Polaris and Unity Catalog are working on this, but it is still far from production ready, even if it is great work in progress. And will RBAC include row-level security, not only table-level? That is a real need for enterprises.

2. **Automatic Maintenance:** Automated tasks like compaction, vacuuming, and optimization are essential for maintaining performance over time. Yet these features are often absent, or require complex configuration, in open-source catalogs. Will maintenance be run by the catalogs themselves, or will they integrate with external engines that trigger these procedures? A question for discussion.

3. **Column Masking:** Protecting sensitive data at the column level is crucial, especially in industries dealing with personal or financial data. Unfortunately, straightforward column masking capabilities are missing in these environments.

4. **Schema Evolution Management:** While some open formats handle schema evolution well, open-source catalogs often struggle, leading to potential data inconsistencies.

5. **Governance and Auditing:** Comprehensive data governance and auditing capabilities are still lacking in many open-source tools, making it difficult to track data lineage and ensure compliance.

These missing features highlight areas where the open data community needs to focus as we strive to build more robust and secure data infrastructures. Addressing these gaps will be key to unlocking the full potential of open table formats. Would love to hear your thoughts.
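Column masking (point 3) is often approximated today with a view over the raw table, exposed to analysts in place of the table itself. A minimal sketch in Python's stdlib sqlite3, with a hypothetical `users` table and SSN column; real deployments would pair the view with engine-level grants, which SQLite does not have:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ana', '123-45-6789')")

# Workaround: expose a view that masks the sensitive column; analysts
# would be granted access to the view only (grants omitted here,
# since SQLite has no GRANT/RBAC machinery).
conn.execute("""
    CREATE VIEW users_masked AS
    SELECT id, name, 'XXX-XX-' || substr(ssn, -4) AS ssn
    FROM users
""")
masked = conn.execute("SELECT * FROM users_masked").fetchall()
print(masked)  # [(1, 'Ana', 'XXX-XX-6789')]
```

The view-based workaround is exactly the kind of manual plumbing that first-class column masking in catalogs would make unnecessary.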
-
Your database is slowing you down. Here's why:

It's not about code. It's not about your server speed. It's the way your data gets stored.

Databases work like this: data gets broken down into pages, and each page has a size limit, usually around 8K to 16K.

But here's the problem:
↳ Pages fill up fast.
↳ Pages get messy with scattered data.
↳ Queries slow down because the engine works harder to extract data.

Your data isn't just data. It's JSON. It's documents. It all gets dumped into pages. So when you save that data, here's what really happens:
↳ The storage engine serializes it into bytes.
↳ Bytes get written onto a page.
↳ Pages fill up, and your system gets slower.

Think about it. Every slow load time. Every lagging page. It's not just poor code; it's your database. How do you optimize your database?
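The arithmetic behind "pages fill up fast" is easy to sketch. Assuming an 8 KB page plus some per-page and per-row overhead (the overhead numbers below are illustrative, roughly in line with PostgreSQL's defaults, not exact for any engine):

```python
PAGE_SIZE = 8192      # 8 KB page, PostgreSQL's default page size
PAGE_HEADER = 24      # fixed per-page header (illustrative figure)
ROW_OVERHEAD = 28     # per-row header + item pointer (illustrative figure)

def rows_per_page(payload_bytes: int) -> int:
    """How many rows of a given payload size fit on one page."""
    usable = PAGE_SIZE - PAGE_HEADER
    return usable // (payload_bytes + ROW_OVERHEAD)

# A slim 60-byte row vs. a 2 KB JSON document stored per row:
print(rows_per_page(60))    # 92 rows per page
print(rows_per_page(2048))  # only 3 rows per page
```

Fat rows mean far fewer rows per page, so the same query touches many more pages: that is the mechanism behind the slowdown described above.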
-
Data Services Manager version 2.1 now available "The most noticeable change in Data Services Manager version 2.1 is the installation experience. The number of deployment steps have been considerably reduced by including the database templates in the DSM appliance itself. This means that there is now no need to pull down the database images manually, nor is there a need to upload the images manually to an S3 compatible object storage bucket..." (Cormac Hogan) https://lnkd.in/gYnuXP3f
Data Services Manager version 2.1 now available - CormacHogan.com
https://cormachogan.com
-
DELETEs are difficult Your database is ticking along nicely - until a simple DELETE brings it to its knees. What went wrong? While we tend to focus on optimizing SELECT and INSERT operations, we often overlook the hidden complexities of DELETE. Yet, removing unnecessary data is just as critical. Outdated or irrelevant data can bloat your database, degrade performance, and make maintenance a nightmare. Worse, retaining some types of data without valid justification might even lead to compliance issues. #DatabaseOptimization #DeleteOperations #DatabasePerformance #DataMaintenance #SQLPerformance #DatabaseBloat #DataCompliance #DataRetention #DatabaseEngineering #SoftwareDevelopment #TechInnovation #DatabaseManagement #QueryOptimization #DatabaseTuning #SQLBestPractices https://lnkd.in/gktxTbXK
DELETEs are difficult
notso.boringsql.com
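A common mitigation for heavy DELETEs (not specific to the linked article) is to remove rows in small batches rather than in one massive statement, so each transaction stays short and lock and log pressure stay bounded. A minimal sketch with Python's stdlib sqlite3; the `events` table and cutoff are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO events (created) VALUES (?)",
                 [(day,) for day in range(1000)])

def delete_before(conn, cutoff, batch=100):
    """Delete old rows in small batches; each batch commits separately."""
    total = 0
    while True:
        with conn:  # one transaction per batch keeps locks short-lived
            cur = conn.execute(
                "DELETE FROM events WHERE id IN ("
                "  SELECT id FROM events WHERE created < ? LIMIT ?)",
                (cutoff, batch))
        if cur.rowcount == 0:
            return total
        total += cur.rowcount

removed = delete_before(conn, cutoff=500)
print(removed)  # 500 rows removed, in batches of 100
```

The same pattern (delete by primary key via a LIMITed subquery, commit per batch) carries over to PostgreSQL and MySQL, where a single huge DELETE is exactly the scenario the post warns about.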
-
#PostgreSQL's aggregate functions enable efficient in-database analysis, allowing users to compute averages, variances, and correlations directly, which enhances performance and supports data-driven insights across various applications. #PostgreSQLPerformance @MinervaDB #dba https://zurl.co/GmCp
Comprehensive Guide to Aggregate Functions in PostgreSQL
https://minervadb.xyz