#hiring #aws #dataengineer #mountainview #california #onsite #hybrid
#[email protected]
Backfill requirement: AWS Data Engineer, Mountain View, CA (Hybrid)
Must have: #AWS, #Terraform, #Spark, #Python
Expertise in any of the following object-oriented languages: Java/J2EE, C#, VB.NET, Python, or occasionally C++; Java and Python preferred.
Expertise with AWS (IAM, VPC), Spark, and Terraform is preferred; expertise with Databricks is a strong plus.
Skills
Multi-cloud data exploration
Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Starburst, Privacera, and Databricks) and Intuit services (LDAP, data decryption)
Testing user flows for data analysis, processing, and visualization with PySpark notebooks and SQL running on distributed compute to join data between AWS S3 and GCP BigQuery (a minimal sketch follows this group of items)
Developing data pipelines in PySpark or SQL to push structured enterprise-tool telemetry to our data lake
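
To make the cross-cloud notebook flow above concrete, here is a minimal PySpark sketch; the bucket paths, BigQuery table, and join key are illustrative assumptions, and it presumes the spark-bigquery-connector plus S3 and GCP credentials are already configured on the cluster.

# Minimal sketch: join telemetry in the AWS data lake with reference data in BigQuery.
# All paths, table names, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3-bigquery-join-sketch")
    .getOrCreate()
)

# Telemetry events landed in the AWS data lake (hypothetical S3 location).
events = spark.read.parquet("s3a://example-data-lake/tool-telemetry/")

# Reference data served from GCP BigQuery via the spark-bigquery-connector (hypothetical table).
accounts = (
    spark.read.format("bigquery")
    .option("table", "example-project.analytics.accounts")
    .load()
)

# Join across clouds and write the result back to the lake for downstream analysis.
joined = events.join(accounts, on="account_id", how="left")
joined.write.mode("overwrite").parquet("s3a://example-data-lake/joined-telemetry/")
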
Fine-grained access control for data exploration
Terraform infrastructure-as-code for managing AWS infrastructure and deep integration between enterprise tools (Databricks and Privacera)
Evaluating Databricks capabilities to sync Hive, Glue, and Unity Catalogs
Evaluating Privacera capabilities or building new capabilities (AWS Lambda with Python) to sync Intuit access policies with Unity Catalog (see the sketch after this list)
Testing user flows for data analysis, processing, and visualization with PySpark notebooks on distributed compute or Databricks’ serverless SQL runtime
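
For the Privacera-to-Unity-Catalog item above, a rough sketch of what such a Lambda could look like in Python is below; the endpoint path, payload shape, and environment variables are assumptions to verify against the Databricks Unity Catalog permissions API and your workspace setup.

# Hypothetical policy-sync Lambda: applies access-policy changes to Unity Catalog.
import json
import os
import urllib.request

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # in practice, injected from a secrets store

def grant_select(table_full_name: str, principal: str) -> None:
    # Assumed Unity Catalog permissions endpoint; confirm the exact path and payload
    # against the Databricks REST API docs for your workspace.
    url = f"{DATABRICKS_HOST}/api/2.1/unity-catalog/permissions/table/{table_full_name}"
    body = json.dumps({"changes": [{"principal": principal, "add": ["SELECT"]}]}).encode()
    req = urllib.request.Request(url, data=body, method="PATCH")
    req.add_header("Authorization", f"Bearer {DATABRICKS_TOKEN}")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        resp.read()

def lambda_handler(event, context):
    # Each record is assumed to carry one policy change exported from Privacera.
    policies = event.get("policies", [])
    for record in policies:
        grant_select(record["table"], record["principal"])
    return {"synced": len(policies)}
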