TrendyTech

Professional Training and Coaching

Bengaluru, Karnataka 47,328 followers

TrendyTech offers one of the top-quality Big Data online master's courses, with over 20,000 learners globally.

About us

Industry-oriented Big Data Hadoop & Spark training. This course is specially designed to help you crack interviews, and the best part is that both Hadoop and Spark are covered in a single course. The course is well structured: it runs for 20 weeks with more than 120 hours of content, plenty of assignments, and a large set of interview questions. Real-time projects are also discussed towards the end of the course. Let's begin our Big Data journey now for a successful career ahead.

Industry
Professional Training and Coaching
Company size
11-50 employees
Headquarters
Bengaluru, Karnataka
Type
Self-Owned

Locations

  • Primary

    Tvaksa Technologies, Plot No-116, EPIP 1st Stage, 3rd Floor,

    Shailendra Tech Park, Whitefield,

    Bengaluru, Karnataka 560066, IN

Updates

  • View organization page for TrendyTech

    Learning and growing has been an integral part of Jitendra Mahato's career. Here is what he shares on joining Wipro as Technical Lead - Data Engineer:

    "The field of data has always piqued my interest. Working on SQL-related projects during my initial time at HCL, I started learning new technologies. On my friend's suggestion, I enrolled in Sumit Mittal Sir's Big Data program."

    The best things about Sumit Sir's course, in his words:
    > His style of teaching the most difficult concepts in an easy way, which has no substitute.
    > The right mix of theoretical and practical learning.

    "Committing 10 hours weekly for 5 months, I learnt and refreshed my knowledge of PySpark and the Azure and AWS clouds, along with managing the end-to-end pipeline. Things will keep blocking your way forward, but never stop following your ambition. Thank you, Sumit Mittal Sir and TrendyTech, for helping me pursue my ambition."

    P.S.: A new batch of the Cloud Focused Big Data Program is starting on 14th November. Check out more details at https://lnkd.in/gkXgKRJx

    #bigdata #dataengineering #dataengineer #sumitteaches

  • View organization page for TrendyTech

    Success story of Neha Goyal, a brilliant TrendyTech student, on joining IBM. Here is what she shares about her journey:

    "Starting my career at TCS in an Informatica profile, there was little chance to grow. Looking for ways to get into the data domain, I finally enrolled in Sumit Mittal Sir's Big Data Program. After learning Big Data fundamentals and PySpark, I joined Iqvia as a Data Engineer. This was my first move into the Big Data domain."

    I again reached out to Sumit Sir for learning the Azure and AWS clouds, where I mastered:
    > Azure Databricks, Data Factory, Data Warehouse, Data Lake
    > AWS services like EMR, Redshift, Athena, Glue

    "Also learning data modeling and Structured Streaming, it was now time to use these skills to the fullest. I recently joined IBM as Senior Data Engineer. Looking back at these 5 years in Big Data, with the skills and growth I carry, I couldn't have imagined it without upskilling. Thank you, Sumit Sir and TrendyTech, for preparing me beyond interviews, for life."

    #bigdata #dataengineering #dataengineer #sumitteaches

  • View organization page for TrendyTech

    Determination, hard work and self-belief led to a 180-degree career switch. Sai Vamsy Dhulipala shares his story on joining Risika, Denmark as a Data Engineer:

    "I always believed in trying things out without thinking much about success or failure. I did my CA Intermediate and started working in the finance domain. Going through various clients' financial data, I started imagining the change data can bring to decision making. I worked as an Analyst and later pursued a Post Graduation in Data Science."

    "Seeing more demand and wider scope in Data Engineering, I started learning Big Data from Sumit Mittal Sir's program. This was when I became sure about having a long-term career in this domain. In 6 months, I mastered PySpark, AWS cloud services, Azure cloud, creating an end-to-end pipeline and much more."

    Thank you, Sumit Mittal Sir and the TrendyTech team, for opening new doors for me!

    #bigdata #dataengineering #dataengineer #sumitteaches

  • View organization page for TrendyTech

    Complete roadmap for transitioning to Data Engineer: 8 months of this intensive program makes you equivalent to a Data Engineer with 3 years of experience, and you can start giving interviews after week 14.

    Enrol for Sumit Mittal Sir's Cloud Focused Big Data Master Program. The batch starts today, 9th November 2024, at 6:00 PM (IST). For more details visit https://lnkd.in/gkXgKRJx

    #bigdata #dataengineering #dataengineer #sumitteaches

  • TrendyTech reposted this

    View profile for Sumit Mittal

    Founder & CEO of Trendytech | Big Data Trainer | Ex-Cisco | Ex-VMware | MCA @ NIT Trichy | #SumitTeaches | New Batch Starting November 16th, 2024

    My very hard-working student joined Atlassian recently. Wishing Madhukar Jaiswal a great career ahead on joining Atlassian as Senior Data Engineer. This is what he has to say:

    "I always ensured I take out time for learning, apart from whatever I am working on. This rule of life has given me a direction to grow and stay updated. After working at companies like MAQ Software and Paytm, I wanted to be the best at my skill set. Joining Sumit Sir's Big Data Program helped me strengthen my fundamentals so that I can handle complex projects with ease. The way Sumit Sir teaches is amazing, which helped me master Azure and AWS cloud services along with PySpark internals and a lot more. It is the best Big Data course I have come across till now, focusing on exactly what the industry needs."

    I am proud to say that I have multiple students working at Atlassian, one of the top product-based companies. 7-8 months of well-planned effort can take you a few years ahead in your career. Always remember one thing: your current circumstances don't define your future; your present actions do!

    My new Ultimate Big Data batch is starting tomorrow. DM to know more!

  • View organization page for TrendyTech

    Dreams do come true, says Utkarsh Singh on joining VISA. This is what he shares:

    "Don't just focus on clearing interviews; work on building a strong foundation. The opportunities will follow. From swiping VISA cards to working at VISA, I made it happen!"

    "As a Data Engineer, I was always looking to strengthen my foundation and enhance my skills. To achieve this, I joined Sumit Mittal Sir's Big Data Program at TrendyTech. The learning was boundless, covering everything from PySpark to optimization techniques, along with mastery of the Azure and AWS cloud services. The program's focus on practical knowledge and real-world readiness helped me build a strong foundation, giving me the confidence to crack multiple interviews. A heartfelt thank you to Sumit Sir and the TrendyTech team for their guidance and support."

    #bigdata #dataengineering #dataengineer #sumitteaches

  • TrendyTech reposted this

    View profile for Sumit Mittal

    Founder & CEO of Trendytech | Big Data Trainer | Ex-Cisco | Ex-VMware | MCA @ NIT Trichy | #SumitTeaches | New Batch Starting November 16th, 2024

    Internal working of Apache Spark - one of my most-liked writeups.

    Let's say you have a 20-node Spark cluster. Each node is of size 16 CPU cores / 64 GB RAM, and each node runs 3 executors, with each executor of size 5 CPU cores / 21 GB RAM.

    1. What's the total capacity of the cluster?
    We have 20 * 3 = 60 executors.
    Total CPU capacity: 60 * 5 = 300 CPU cores.
    Total memory capacity: 60 * 21 = 1260 GB RAM.

    2. How many parallel tasks can run on this cluster?
    With 300 CPU cores, we can run 300 parallel tasks.

    3. Let's say you requested 4 executors; how many parallel tasks can run then?
    The capacity we got is 20 CPU cores and 84 GB RAM, so a total of 20 parallel tasks can run.

    4. Let's say we read a 10.1 GB CSV file stored in a data lake and have to do some filtering; how many tasks will run?
    If we create a dataframe out of the 10.1 GB file, we get 81 partitions (I will cover how the number of partitions is decided in my next post). So we have 81 partitions of 128 MB each, with the last partition a bit smaller, and our job will have 81 total tasks. But we have only 20 CPU cores. Say each task takes around 10 seconds to process 128 MB of data: the first 20 tasks run in parallel, then the next 20 once those finish, and so on, for 5 waves in the most ideal scenario. That is 10 sec + 10 sec + 10 sec + 10 sec + 8 sec: the first 4 waves process 80 tasks of 128 MB each, and the last 8 seconds process just one task of around 100 MB, so it finishes a little faster, but 19 CPU cores sit idle during that final wave.

    5. Is there a possibility of an out-of-memory error in the above scenario?
    Each executor has 5 CPU cores and 21 GB RAM. This 21 GB is divided into several parts: 300 MB of reserved memory; 40% user memory, which stores user-defined variables/data (for example a hashmap); and 60% Spark memory, which is split 50:50 between storage memory and execution memory. Execution memory therefore comes to roughly 28% of the total memory allotted, so about 6 GB of the 21 GB. Per CPU core we have 6 GB / 5 cores = 1.2 GB of execution memory, meaning a task can roughly handle around 1.2 GB of data. Since we are handling 128 MB, we are well under this limit.

    I hope you liked the explanation :) Do mention in the comments what you want me to cover in my next post! If you want to experience learning like never before and want to make a career in Big Data, DM me. A new batch is starting tomorrow.

    #bigdata #career #dataengineering #apachespark
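    To make the arithmetic easy to replay, here is a small Python sketch that reproduces the back-of-the-envelope numbers from the post above. The cluster shape, the 128 MB partition size, and the memory fractions are the assumptions stated in the post, not values read from a live cluster.

        import math

        # Assumed cluster shape, taken from the post above.
        NODES = 20
        EXECUTORS_PER_NODE = 3
        CORES_PER_EXECUTOR = 5
        MEM_PER_EXECUTOR_GB = 21

        # 1. Total capacity of the cluster.
        executors = NODES * EXECUTORS_PER_NODE            # 60 executors
        total_cores = executors * CORES_PER_EXECUTOR      # 300 cores
        total_mem_gb = executors * MEM_PER_EXECUTOR_GB    # 1260 GB RAM

        # 2. One task per core, so 300 parallel tasks at full capacity.
        # 3. With only 4 executors we get 20 cores (and 84 GB RAM).
        cores_requested = 4 * CORES_PER_EXECUTOR

        # 4. Tasks for a 10.1 GB file split into 128 MB partitions.
        file_mb = 10.1 * 1024
        partitions = math.ceil(file_mb / 128)             # 81 partitions = 81 tasks
        waves = math.ceil(partitions / cores_requested)   # 5 waves of execution

        # 5. Rough executor memory breakdown (unified memory model):
        # 300 MB reserved, 40% user memory, 60% Spark memory,
        # with Spark memory split 50:50 between storage and execution.
        usable_gb = MEM_PER_EXECUTOR_GB - 300 / 1024
        execution_gb = usable_gb * 0.60 * 0.50            # ~6.2 GB execution memory
        per_core_gb = execution_gb / CORES_PER_EXECUTOR   # ~1.2 GB per task

        print(executors, total_cores, total_mem_gb)           # 60 300 1260
        print(partitions, waves)                              # 81 5
        print(round(execution_gb, 1), round(per_core_gb, 1))  # 6.2 1.2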

  • TrendyTech reposted this

    View profile for Akshay Bhushan

    Data Analyst at Bank of Ireland

    Another week of Big Data learning from the UK/Ireland region 🚀

    Week 7 has been a deep dive into Spark caching, persistence, and performance optimization. It's been an exciting week of hands-on learning and practical application, focused on supercharging Spark's efficiency for real-world data challenges. Here's a glimpse into the key takeaways:

    1. Spark UI and resource allocation
    The Spark UI proved invaluable for monitoring job execution and resource allocation. Understanding the benefits of dynamic executor allocation showed how efficiently Spark can optimize resources based on workload demands, ensuring smooth data processing.

    2. Caching and persistence: the key to performance
    Caching frequently accessed data, whether from RDDs, DataFrames, or Spark tables, significantly reduces computation time. This week explored the different caching strategies, such as memory-only, disk-only, and hybrid methods, helping tailor storage choices to dataset size and processing needs.
    • The difference between cache() and persist() was highlighted, showing how each plays a role in performance optimization based on data access frequency and resource availability.
    • Cache invalidation was another key focus, with insights into keeping data fresh with the right refresh techniques.

    3. Practical optimizations
    Hands-on experience with caching and persistence strategies brought clarity to how these optimizations play out in practice. Performance improvements were clearly visible, especially in speeding up wide transformations and reducing query times by caching frequently accessed data.

    4. Spark SQL and caching
    Incorporating caching strategies into Spark SQL and using the REFRESH TABLE command ensured that query results were always up to date. Optimizing query execution while managing resources effectively was a highlight of this week's learning. (A short sketch of these techniques follows below.)

    Big thanks to Sumit Mittal and the TrendyTech team for their continuous support. Looking forward to more Big Data learning ahead! 🚀

    #BigData #SparkSQL #ApacheSpark #DataFrames #Caching #DistributedProcessing #CloudComputing #DataEngineering #LearningJourney #TrendyTech #UKTech #IrelandTech #TechCommunity #TechNetworking #CareerGrowth #Innovation #DataScienceUK #UKBigData #IrelandBigData
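    For anyone who wants to try these ideas hands-on, here is a minimal PySpark sketch of the caching techniques the post mentions. The file path, table name, and column names are hypothetical placeholders, not details from the original post.

        from pyspark import StorageLevel
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("caching-demo").getOrCreate()

        # Hypothetical input path and columns, for illustration only.
        df = spark.read.parquet("/data/events")

        # cache() on a DataFrame is shorthand for persist() with the
        # MEMORY_AND_DISK storage level; caching is lazy until an action runs.
        hot = df.filter(df.country == "IE").cache()
        hot.count()  # materializes the cached partitions

        # persist() lets you choose a storage level explicitly, e.g. disk-only
        # for datasets too large to hold in executor memory.
        warm = df.groupBy("country").count().persist(StorageLevel.DISK_ONLY)
        warm.show()

        # Spark SQL tables can be cached too, and REFRESH TABLE re-reads the
        # underlying files so cached query results do not go stale.
        df.createOrReplaceTempView("events")
        spark.sql("CACHE TABLE events")
        spark.sql("REFRESH TABLE events")

        # Free cached blocks when they are no longer needed.
        hot.unpersist()
        warm.unpersist()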

