#8 - Docker Volumes for SQL Server to persist data In this eighth lecture of our course, Understanding Docker in 2024, we will look at how Docker volumes persist SQL Server data https://lnkd.in/gafjqrie #executeautomation #docker #dockerimages #containers
-
An Introduction to Distributed Databases #distributed #database #distributeddatabase #backend #software #engineering #backenddevelopment #softwareengineering #cockroachdb #cassandradb #sql #nosql #replica #replication #fragmentation #dbms
-
The SQL Server CAST function converts a value from one data type to another, which is essential for clean data manipulation and for reliable comparisons and joins across mismatched types. Let's explore: https://lnkd.in/gtUXpFvm Please follow on FB: https://lnkd.in/gMVNxsdC #administration #DBA #Admin #powerbideveloper #powerbidashboard #powerbidesktop #data #madesimplemssql #CodeNewbies #code #azure #mssql #computerscience #coder #sqldba #developer #sqlinjection #software
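For illustration, a minimal sketch of CAST in practice, using a hypothetical Orders table (the table and column names are assumptions, not from the linked article):

```sql
-- A minimal sketch of common CAST conversions; dbo.Orders and its columns are hypothetical.
SELECT
    CAST(OrderDate   AS date)           AS OrderDay,     -- datetime trimmed to a date
    CAST(TotalAmount AS decimal(10, 2)) AS TotalRounded, -- float to fixed-precision decimal
    CAST(OrderId     AS varchar(20))    AS OrderIdText   -- int to text, e.g. for concatenation
FROM dbo.Orders;
```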
-
🌟 Day 6 of #100DaysOfCode 🌟

Today, I ventured into the world of SQL, starting with the foundational DDL (Data Definition Language) commands:

1️⃣ CREATE - Defining new tables and structures 🛠️
2️⃣ ALTER - Modifying existing table structures 🔄
3️⃣ DROP - Deleting tables and structures 🗑️
4️⃣ TRUNCATE - Removing all data from a table efficiently ✨

After mastering the DDL commands, I focused on Data Integrity concepts:

1️⃣ Constraints - Enforcing rules on data integrity ⚖️
2️⃣ Transactions - Ensuring data consistency and reliability 💾
3️⃣ Normalization - Organizing data to reduce redundancy and improve efficiency 📊

Next, I dove deeper into Constraints, which are essential for maintaining the accuracy and reliability of the data:

1️⃣ NOT NULL - Ensuring that a column cannot have a NULL value 🚫
2️⃣ UNIQUE - Ensuring all values in a column are distinct 🔑
3️⃣ PRIMARY KEY - Uniquely identifying each row in a table 🏷️
4️⃣ AUTO INCREMENT - Automatically generating unique values for a column 🔄
5️⃣ CHECK - Ensuring that values in a column meet a specific condition ✅
6️⃣ DEFAULT - Setting a default value for a column if none is provided 📝
7️⃣ FOREIGN KEY - Linking data between tables for relational integrity 🌐

Finally, I explored Referential Actions that help maintain the relationships between tables:

1️⃣ RESTRICT - Preventing changes if they would break a relationship 🚫
2️⃣ CASCADE - Automatically updating or deleting related rows in other tables 🌊
3️⃣ SET NULL - Setting the foreign key value to NULL when the related record is deleted 🚫0️⃣
4️⃣ SET DEFAULT - Assigning a default value when a related record is deleted 🔄

SQL is a powerful tool, and I’m excited to keep expanding my knowledge as I move forward. Stay tuned for more updates as I build my skills and take on new challenges! 🚀💻

YOUTUBE: https://lnkd.in/gZ-PbEpb

#CodingJourney #Programming #SQL #DataIntegrity #DatabaseManagement #TechTransition #FromMechanicalToSoftware #LearningToCode #100DaysOfCode
Session 31 - SQL DDL Commands | DSMP 2023
https://www.youtube.com/
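For reference, a combined sketch of the DDL commands, constraints, and referential actions listed above, using hypothetical customers/orders tables and MySQL-style AUTO_INCREMENT syntax (not taken from the session itself):

```sql
-- Combined sketch: DDL commands, constraints, and referential actions.
-- Table and column names are hypothetical; AUTO_INCREMENT is MySQL-style syntax.
CREATE TABLE customers (
    customer_id INT AUTO_INCREMENT PRIMARY KEY,   -- PRIMARY KEY + AUTO INCREMENT
    email       VARCHAR(255) NOT NULL UNIQUE,     -- NOT NULL + UNIQUE
    country     VARCHAR(50)  DEFAULT 'Unknown',   -- DEFAULT
    age         INT CHECK (age >= 18)             -- CHECK
);

CREATE TABLE orders (
    order_id    INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)  -- FOREIGN KEY
        ON DELETE SET NULL   -- referential action: orphaned orders keep a NULL customer
        ON UPDATE CASCADE    -- referential action: key changes propagate to child rows
);

ALTER TABLE orders ADD COLUMN order_total DECIMAL(10,2) DEFAULT 0;  -- ALTER
TRUNCATE TABLE orders;   -- TRUNCATE: remove all rows, keep the structure
DROP TABLE orders;       -- DROP: remove the table entirely
```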
-
Very useful tips on transaction log files
SQL Server Transaction Log Architecture-Overview
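As a small companion, a hedged sketch of two built-in ways to check transaction log usage in SQL Server (assuming sufficient permissions; not taken from the linked overview):

```sql
-- Hedged sketch: inspect transaction log usage and reuse status in SQL Server.
DBCC SQLPERF(LOGSPACE);           -- log file size and percent used, per database

SELECT name, log_reuse_wait_desc  -- reason the log cannot currently be truncated/reused
FROM sys.databases;
```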
-
Dynamic SQL can be complex; the trick is to make it look easy. This simple pattern for executing SQL held in a variable from another SQL command in Snowflake helped my current project generate views over standard tables on the fly, reducing development and QA effort and supporting automated deployments. #snowflake #sql
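For context, a minimal sketch of the underlying Snowflake pattern: build the statement in a session variable, then run it with EXECUTE IMMEDIATE (the schema, table, and view names are hypothetical, not the project's actual formula):

```sql
-- Hedged sketch: hold a statement in a session variable, then execute it.
-- analytics.v_orders and raw.orders are hypothetical names.
SET view_ddl = 'CREATE OR REPLACE VIEW analytics.v_orders AS SELECT * FROM raw.orders';
EXECUTE IMMEDIATE $view_ddl;
```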
-
Apache Spark queries with different syntax (DataFrame API or parameterized SQL) can have the same performance, as the physical plan is identical. Thus, the choice between DataFrame API and spark.sql() depends on the following:

🔹 Familiarity: Use spark.sql() if your team prefers SQL syntax. Use the DataFrame API if chained method calls are more intuitive for your team.
🔹 Complexity of Transformations: The DataFrame API is more flexible for complex manipulations, while SQL is more concise for simpler queries.

#PySpark #ApacheSpark #SQL #DataFrame
-
This is how it should be: bring your own language! Developers should have the choice to use what they are comfortable with. However, organisations also need to take into consideration the skillset of the wider team and the future maintainability of the code. #spark #sql
-
When you accidentally write a SQL query that loads the entire database 😂 #justunderstandingdata #dev #developer #codingjokes #codingmemes