Google #CloudSpanner is expanding its ability to natively interact with JSON documents. Check out the new functions in the link below. The syntax is the same as #BigQuery, making it easy to work with the two (federated!) platforms. Try them now: ⬇️⬇️ https://lnkd.in/dbc4X_mV
Francesco Cogno’s Post
More Relevant Posts
-
Enforcing a JSON CHECK constraint or creating a calculated column with an index on it are examples of the many features that the JSON functions bring to Spanner.
JSON functions in GoogleSQL | Spanner | Google Cloud
cloud.google.com
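To ground the post, here is a minimal sketch of querying JSON stored in Spanner with the new GoogleSQL functions via the Python client. The instance, database, table, and column names are hypothetical:

```python
# Minimal sketch: JSON_VALUE / JSON_QUERY over a Spanner JSON column.
# All identifiers (instance, database, table, column) are hypothetical.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

sql = """
SELECT order_id,
       JSON_VALUE(attributes, '$.customer.email') AS email,
       JSON_QUERY(attributes, '$.items')          AS items
FROM   orders
WHERE  JSON_VALUE(attributes, '$.status') = 'shipped'
"""

with database.snapshot() as snapshot:
    for row in snapshot.execute_sql(sql):
        print(row)  # each row is [order_id, email, items]
```

JSON_VALUE returns a scalar string while JSON_QUERY returns a JSON subtree, which is exactly the BigQuery-compatible behavior the post highlights.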
-
When you are working with an unfamiliar JSON string and need to identify all of its keys, the JSON_KEYS extractor can be extremely useful. #bigquery https://lnkd.in/eY3nKTE6
JSON functions | BigQuery | Google Cloud
cloud.google.com
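A minimal sketch of that workflow, assuming the JSON_KEYS function behaves as the linked docs describe (exact key order and path format may vary):

```python
# Minimal sketch: surface every key in an unfamiliar JSON value with
# JSON_KEYS, run through the BigQuery Python client.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT JSON_KEYS(JSON '{"user": {"id": 1, "name": "Ada"}, "active": true}') AS key_list
"""

row = next(iter(client.query(sql).result()))
print(row["key_list"])  # array of key paths discovered in the JSON value
```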
-
🚀 Excited to share my latest project: Watch the Drop! 📉💻

"Watch the Drop" is a price-scraping website that helps users track and get notified about price drops on their favorite products. Simply paste an Amazon link or type a product name in the search bar, and the platform does the rest! 🌟 Try it out: https://lnkd.in/gN_8vfEh

1. Amazon Link Scraping: When a user pastes an Amazon product link into the website, the backend uses Bright Data to scrape detailed product information from Amazon.
2. Custom Google Search Engine: To generate a comprehensive price history graph, the system uses a custom Google search engine. Combined with Google Dorking techniques, it scrapes historical price data from various sources across the web.
3. Product Name Search: If a user types a product name in the search bar, the platform performs a Google search for that product, collecting all relevant shopping results.
4. Data Aggregation: The system aggregates all price information, displaying the highest and lowest prices and tracking price changes over time.
5. Database Storage: All scraped data is stored in MongoDB, ensuring robust and scalable data management.
6. Price Drop Notifications: Users can submit their email to track a product. The system monitors price changes and sends an email notification when a price drop is detected.
7. Rate Limiting: Upstash with Redis manages API request limits efficiently (a sketch of the idea follows after this post).
8. Cron Job: A cron job regularly updates the price history and sends email notifications to users when price drops are detected.

The entire project is built with Next.js for the frontend, a serverless backend for handling requests and processing data, Shadcn UI for a sleek and modern user interface, and MongoDB for reliable data storage.

Also, thanks to my team !404 for supporting me in creating this v2 version of Watch the Drop: Krishna Bansal Lakshya Sharma PRIYANSHU SAINI Madhur Rastogi

#WebDevelopment #NextJS #BrightData #Serverless #ShadcnUI #MongoDB #PriceTracking #HackathonWinner #WatchTheDrop
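Of the steps above, the rate limiting in step 7 is the most transferable. A minimal sketch of the fixed-window counter idea, written in Python for illustration (the project itself calls Upstash Redis from a Next.js serverless backend); the connection details and limits are assumptions:

```python
# Fixed-window rate limiter sketch: at most `limit` requests per user
# per window. INCR + EXPIRE keeps the whole check inside Redis.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def allow_request(user_id: str, limit: int = 10, window_seconds: int = 60) -> bool:
    key = f"ratelimit:{user_id}"
    count = r.incr(key)                # atomically bump this user's counter
    if count == 1:
        r.expire(key, window_seconds)  # first hit starts the window
    return count <= limit

if allow_request("user-42"):
    print("handle the request")
else:
    print("429 Too Many Requests")
```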
-
#15 Post on BigQuery | Answer: "Managed service, no lower limit." Because BigQuery is fully managed, there is no minimum or maximum infrastructure to provision. BigQuery is a fully managed, AI-ready data platform that helps you manage and analyze your data with built-in features like machine learning, search, geospatial analysis, and business intelligence. BigQuery's serverless architecture lets you use languages like SQL and Python to answer your organization's biggest questions with zero infrastructure management (a minimal Python example follows below). Ramanathan Manickam Shivendra Ghadge Gajanan Kawtikwar https://lnkd.in/dS4Sq7Hr
BigQuery overview | Google Cloud
cloud.google.com
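To make the "zero infrastructure management" point concrete, here is a minimal query against one of BigQuery's public datasets using the Python client; nothing is provisioned or sized beforehand:

```python
# Minimal sketch: run SQL on BigQuery from Python. The dataset is public;
# only default credentials and a project are needed.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT name, SUM(number) AS total
FROM `bigquery-public-data.usa_names.usa_1910_2013`
GROUP BY name
ORDER BY total DESC
LIMIT 5
"""

for row in client.query(sql).result():
    print(row.name, row.total)
```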
-
The final frontier... or at least the next 15 years. Edge caching is going to eat the world. Keeping track of everything that gets #cached is harder than it sounds, but there is a relentless #ecosystem of companies and #developers working on just that. https://lnkd.in/ehGnQQT9 #edge #caching #WASM #WASI #V8Javascript #GraphQL #DynamoDB
Why edge is eating the world
https://venturebeat.com
-
🚀 𝐅𝐫𝐨𝐦 𝟐𝐬 𝐭𝐨 𝟐𝟎𝟎𝐦𝐬: 𝐇𝐨𝐰 𝐈 𝐒𝐥𝐚𝐬𝐡𝐞𝐝 𝐀𝐏𝐈 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐞 𝐓𝐢𝐦𝐞𝐬 𝐰𝐢𝐭𝐡 𝐍𝐨𝐝𝐞.𝐣𝐬 𝐚𝐧𝐝 𝐂𝐚𝐜𝐡𝐢𝐧𝐠

Here's a story about turning milliseconds into magic. We recently tackled a challenge where our API response times were hitting 2 seconds under load, far from ideal for our real-time application. Here's how we optimized it down to a blazing 200ms using a mix of profiling, caching, and data restructuring.

🔍 𝐏𝐫𝐨𝐟𝐢𝐥𝐢𝐧𝐠 𝐭𝐡𝐞 𝐂𝐮𝐥𝐩𝐫𝐢𝐭 → Using tools like Chrome DevTools and New Relic, we identified database calls as the bottleneck. Nearly 80% of the response time was spent on redundant queries fetching the same data for multiple users.

🗄️ 𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐢𝐧𝐠 𝐂𝐚𝐜𝐡𝐢𝐧𝐠 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬 → Enter Redis. By caching frequently accessed data and implementing expiration policies, we reduced the need for repeated database lookups. This alone cut response times by over 1 second (the pattern is sketched below).

📊 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐢𝐧𝐠 𝐃𝐚𝐭𝐚 𝐒𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞𝐬 → Some data was being parsed inefficiently. A simple change from nested arrays to hash maps reduced lookup times drastically.

⚙️ 𝐏𝐫𝐞-𝐟𝐞𝐭𝐜𝐡𝐢𝐧𝐠 𝐚𝐧𝐝 𝐁𝐚𝐭𝐜𝐡𝐢𝐧𝐠 → We batch-processed database requests and used a background worker to prefetch data that users were most likely to request. This smoothed out traffic spikes and kept response times consistent.

💡 𝐏𝐫𝐨 𝐓𝐢𝐩: Always measure before you optimize. Profiling tools are your best friend in diagnosing where your code is "wasting time."

The impact? Faster responses, happier users, and a scalable solution ready for growth. What's your go-to approach for tackling performance issues? Let's share strategies and insights! 👇

#PerformanceOptimization #SoftwareEngineering #NodeJS #Caching #API #DevOps #TechTips
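The caching step carries most of the win, so here is a minimal sketch of the cache-aside pattern with a TTL. The post's stack is Node.js; this Python version is only an illustration, and fetch_user_from_db stands in for the slow query the profiler flagged:

```python
# Cache-aside sketch: check Redis first, fall back to the database,
# then cache the result with an expiration policy.
import json
import redis

r = redis.Redis(decode_responses=True)
TTL_SECONDS = 300  # stale entries age out automatically

def fetch_user_from_db(user_id: str) -> dict:
    # placeholder for the expensive, redundant query
    return {"id": user_id, "name": "Ada"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                       # hit: skip the database
        return json.loads(cached)
    user = fetch_user_from_db(user_id)           # miss: do the real work
    r.setex(key, TTL_SECONDS, json.dumps(user))  # store for next time
    return user
```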
-
🚨 Attention #Elasticsearch & #OpenSearch users! 🚨 Did you know there are query limits that can impact your performance and stability? Whether you're optimizing your search queries or troubleshooting slowdowns, understanding these limits is crucial. In our latest blog post, we break down the key limits you should know about, and how to take them into account for optimal performance. 🚀 🔍 Don't miss out on this essential guide to keep your search engine running smoothly! 👉 Read now: https://lnkd.in/de_e5Skw
Elasticsearch and OpenSearch Query Limits
bigdataboutique.com
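As one concrete example of the kind of limit the article covers (my example, not necessarily theirs): Elasticsearch caps from + size pagination at index.max_result_window, 10,000 hits by default. A minimal sketch against a hypothetical local cluster:

```python
# Sketch: hit the default from+size ceiling, then raise it via the
# settings API. Endpoint and index names are hypothetical.
import requests

ES = "http://localhost:9200"

# Paging past the window fails with an HTTP 400 "result window is too large".
resp = requests.get(
    f"{ES}/my-index/_search",
    json={"from": 10_000, "size": 10, "query": {"match_all": {}}},
)
print(resp.status_code)

# The limit can be raised per index, at the cost of heap; for deep
# pagination, search_after is usually the better answer.
requests.put(
    f"{ES}/my-index/_settings",
    json={"index": {"max_result_window": 20_000}},
)
```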
-
That's impressive progress! Implementing a URL shortener project along with user authentication features like signup and login using Node.js, Express.js, JWT token authentication, and MongoDB is a substantial achievement. How did you find the process of integrating authentication into your project?

What is a URL shortener? A URL shortener is a tool or service that converts long URLs into shorter, more manageable ones. These shorter URLs redirect users to the original, longer URL when clicked. They are commonly used to make lengthy URLs more shareable, particularly on platforms like social media where character limits may restrict the length of posts. URL shorteners also provide additional features such as click metrics and analytics (a minimal sketch of the core idea follows below). #mernstack #backend #backendproject #database #authentication #nodejs #expressjs #mongodb #jwt #learning
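A minimal sketch of the core mapping idea, using Python with an in-memory dict for brevity; the project described above uses Node.js, Express.js, and MongoDB, and the short domain here is hypothetical:

```python
# URL shortener sketch: random short code -> original URL, plus lookup.
# A real service would persist `store` (e.g. in MongoDB) and return a
# 301/302 redirect from the web layer. Requires Python 3.10+.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits
store: dict[str, str] = {}

def shorten(long_url: str, code_length: int = 7) -> str:
    code = "".join(secrets.choice(ALPHABET) for _ in range(code_length))
    store[code] = long_url
    return f"https://sho.rt/{code}"  # hypothetical short domain

def resolve(code: str) -> str | None:
    return store.get(code)

short = shorten("https://example.com/a/very/long/path?with=params")
print(short, "->", resolve(short.rsplit("/", 1)[-1]))
```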
-
🔍 Ever found yourself juggling multiple browser tabs trying to find the best Azure VM pricing across regions? I sure did! As a cloud engineer, I was getting frustrated with the slow loading times of the Azure pricing calculator and the tedious process of comparing prices across different regions and pricing models. I knew there had to be a better way!

So, I rolled up my sleeves and built a solution: the Azure VM Price Checker ⚡

🛠️ How I built it:
- Leveraged the Azure Retail Prices API (https://lnkd.in/gJuP3pt3)
- Built with Python + Streamlit for a fast, responsive web interface
- Used concurrent API calls to fetch prices across regions simultaneously (see the sketch after this post)
- Implemented client-side filtering and sorting for instant results

🎯 What it does:
- Quick SKU search with instant price comparisons
- Compares prices across ALL Azure regions in one view
- Shows prices for different models (Spot, Reserved Instances, Dev/Test) side by side
- Finds the cheapest option instantly
- Clean, intuitive interface that actually loads fast!

💡 The best part? You don't need any Azure credentials! The Retail Prices API is public, making this tool accessible to everyone. I've deployed it here: https://lnkd.in/gAKW-uYS

The tool has cut my VM pricing research time from 15+ minutes to just seconds. No more tab juggling or slow loading times!

🤔 Are you also tired of the slow Azure pricing calculator? Give this tool a try and let me know what you think! I'd love to hear your feedback and suggestions for improvement.

⚙️ Tech Stack: Python, Streamlit, Azure Retail Prices API, ThreadPoolExecutor for concurrent API calls, Pandas for data manipulation

#CloudComputing #Azure #DevTools #Automation #CloudCost #AzureVM #CloudEngineering #OpenSource #Python

P.S. The code is written in Python using Streamlit. If you're interested in how it works or want to contribute, feel free to reach out! Together we can make cloud cost optimization easier for everyone.
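For the curious, here is a minimal sketch of the concurrent-calls idea, not the tool's actual code: it queries the public Azure Retail Prices API for one SKU across a few regions in parallel. The SKU, regions, and fields follow the public API docs but should be treated as assumptions:

```python
# Sketch: fan out requests to the Azure Retail Prices API (no credentials
# required) and report the cheapest retail price per region. First result
# page only, for brevity.
from concurrent.futures import ThreadPoolExecutor

import requests

API = "https://prices.azure.com/api/retail/prices"

def prices_for_region(sku: str, region: str) -> list[dict]:
    flt = (
        f"serviceName eq 'Virtual Machines' "
        f"and armSkuName eq '{sku}' "
        f"and armRegionName eq '{region}'"
    )
    return requests.get(API, params={"$filter": flt}).json().get("Items", [])

regions = ["eastus", "westeurope", "southeastasia"]
with ThreadPoolExecutor(max_workers=len(regions)) as pool:
    results = pool.map(lambda reg: prices_for_region("Standard_D2s_v3", reg), regions)

for region, items in zip(regions, results):
    cheapest = min((item["retailPrice"] for item in items), default=None)
    print(region, cheapest)
```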
-
⚖️ Tech Comparison: #Elasticsearch vs #Druid https://lnkd.in/gCb7JMg8 Elasticsearch is fast for full-text search and is often used with JSON data. However, when it is used for analytics at scale, users cite ingestion, performance, and stability issues.
SQL server Blackbelt | ex-Googler | (I)-[:LOVE]->(graph) | Database migration and modernization, Partner GCP
8mo
This will allow companies to enforce a constraint to make sure that a value entered in a STRING column is valid JSON.
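A sketch of what that comment describes, with caveats: it assumes Spanner accepts PARSE_JSON inside a CHECK constraint (the post above notes the syntax mirrors BigQuery), and every identifier is hypothetical. If PARSE_JSON instead raises on invalid input, the write is still rejected:

```python
# Sketch: add a CHECK constraint so a STRING column only accepts valid
# JSON. Assumes PARSE_JSON is usable in constraints; identifiers are
# hypothetical.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

ddl = """
ALTER TABLE events
ADD CONSTRAINT payload_is_json CHECK (PARSE_JSON(payload) IS NOT NULL)
"""

database.update_ddl([ddl]).result()  # blocks until the schema change lands
```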