West LA Coding

Technology, Information and Internet

West LA Coding is a Southern California Tech Community dedicated to Comp Sci Education, Networking & Social Mixers.

About us

Join our Discord! -> https://discord.gg/u4TEX4PV

West LA Coding is a Southern California Tech Community dedicated to Comp Sci Education, Networking & Social Mixers.

Upcoming Summer 2024:
  • Full Stack Development Walkthroughs
  • AWS Certification Study Groups
  • LeetCode Interview Prep
  • Networking Events (Online/In Person)

Thank you, and won't you join our amazing, growing tech community?

Industry
Technology, Information and Internet
Company size
2-10 employees
Type
Privately Held

Employees at West LA Coding

Updates

  • West LA Coding reposted this

    View profile for Nikki Siapno

    Engineering Manager at Canva | Co-Founder of Level Up Coding

    How to use Big O to ace your technical interviews.

    Firstly, what is Big O notation? Big O describes an algorithm's runtime or memory consumption without the interference of contextual variables like RAM and CPU. It gives programmers a way to compare algorithms and identify the most efficient solution.

    Big O answers one straightforward question: "How much does runtime or memory consumption grow as the size of the input increases, in the worst-case scenario?"

    To utilize Big O, you'll need to know the possible values and how they compare with each other. Use the attached photo for a quick reference.

    So, how do you apply Big O in your technical interviews? Here are a few scenarios where Big O can be used:

    🔸 Live coding challenges
    🔸 Code walk-throughs
    🔸 Discussions about projects/solutions you've built
    🔸 Discussions about your approach to programming & problem-solving

    When any of these scenarios come up, be sure to mention the Big O of your solution and how it compares to alternative approaches. This is especially useful in live coding challenges where you have to compare solutions on the spot: remember to think out loud!

    Tip: When comparing solutions, pay attention to the problem's requirements. For example, linear time complexity may be completely fine when the input can never be too large. But if you're dealing with big data, you'll want to opt for something more efficient.

    Of course, the goal is to get the correct Big O notation that applies to your solution. But don't worry about getting it wrong! Your interviewer will probably correct you when you do. The point is to show that you are thinking about the efficiency and performance of your solution.

    Do this and you'll be able to showcase an important trait that technical hiring managers look for: the ability to consider a solution's viability beyond whether it works or not. This shows maturity in your decision-making and approach to programming.

    Thank you to our partner Postman, who keeps our content free to the community. The 2024 State of the API Report just got released. Check it out for key trends and insights in API development: https://lnkd.in/gGYishJz
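    As an illustration of the kind of comparison you might talk through out loud in an interview, here is a small, hypothetical Python sketch (the function and variable names are ours, not from the post): checking membership in a list scans elements one by one, which is O(n) in the worst case, while a set lookup is O(1) on average, trading extra memory for speed.

```python
def contains_list(items, target):
    # Linear scan: the worst case touches every element -> O(n) time.
    for item in items:
        if item == target:
            return True
    return False

def contains_set(items_set, target):
    # Hash-based lookup: O(1) average time,
    # at the cost of O(n) extra memory for the set itself.
    return target in items_set

data = list(range(1_000_000))
data_set = set(data)  # one-time O(n) build cost

# Both calls return the same answer; the Big O tells you how each
# scales as the input grows, which is exactly what to say out loud.
print(contains_list(data, 999_999))  # O(n) scan
print(contains_set(data_set, 999_999))  # O(1) average lookup
```

    As the post's tip notes, for small inputs the O(n) version may be perfectly fine; the set only pays off when lookups are frequent or the data is large.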

  • West LA Coding reposted this

    View profile for Raul Junco

    Simplifying System Design

    A good cache is critical for high performance. Here are 6 cache strategies you need to know for that System Design interview.

    Caching helps make data access faster. To use caching effectively, you must decide how to handle reads (getting data) and writes (saving data) separately. Here are the most popular strategies.

    Cache-aside (Lazy Loading)
    How it works: The application checks the cache first. If the data isn't in the cache, it fetches it from the database, populates the cache, and returns.
    Use case: Best when reads are infrequent but data consistency is key.
    Drawback: Cache-miss penalty on the initial read.

    Cache-through
    How it works: The cache is tightly integrated with the database. Reads and writes are managed via the cache, often with an underlying persistence mechanism.
    Use case: Useful for a consistent caching layer linked to the database.
    Drawback: May add complexity if the cache layer is unavailable.

    Refresh-ahead
    How it works: Predictively refreshes cache entries before they expire.
    Use case: Suitable when cache misses are costly and data freshness is predictable.
    Drawback: May consume resources refreshing data that is never used.

    Write-through (Synchronous)
    How it works: Writes go to the cache and the underlying database at the same time.
    Use case: Ensures data consistency between the cache and database.
    Drawback: Write latency is tied to both cache and database performance.

    Write-behind (Asynchronous)
    How it works: Writes are first made to the cache and then asynchronously persisted to the database.
    Use case: Useful for optimizing write performance when data does not need to be immediately available in persistent storage.
    Drawback: Possibility of data loss if the cache fails before writing to the database.

    Write-around
    How it works: Writes go directly to the database without immediately updating the cache. The cache gets updated on a subsequent read.
    Use case: Best for use cases with infrequent reads where cache churn isn't beneficial.
    Drawback: Can lead to cache inconsistency, as the cache won't reflect the latest write until a read triggers an update.

    Summary
    Read strategies balance how and when data is loaded into the cache. Write strategies determine how changes in data propagate between cache and persistent storage, balancing performance and consistency needs.

    Save this for your next interview. Happy caching!
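    To make the read/write split concrete, here is a minimal, hypothetical Python sketch of three of the strategies above (cache-aside, write-through, and write-around). Plain dicts stand in for the cache and the database; all names are illustrative and not any particular library's API.

```python
cache = {}   # stands in for a fast in-memory cache
db = {}      # stands in for the slower persistent database

def read_cache_aside(key):
    # Cache-aside: check the cache first...
    if key in cache:
        return cache[key]
    # ...on a miss, fetch from the database, populate the cache, return.
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value

def write_through(key, value):
    # Write-through: update cache and database together, so reads stay
    # consistent (at the cost of write latency covering both stores).
    cache[key] = value
    db[key] = value

def write_around(key, value):
    # Write-around: write only to the database; the cache gets populated
    # later, by the next cache-aside read of this key.
    db[key] = value

write_through("user:1", "Ada")
print(read_cache_aside("user:1"))   # "Ada", served from the cache

write_around("user:2", "Grace")
print("user:2" in cache)            # False: cache not updated yet
print(read_cache_aside("user:2"))   # "Grace": miss -> db fetch -> cache fill
```

    Note how write-around's drawback shows up directly: right after the write, the cache does not know about `user:2` until a read triggers the fill.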

