Job posting has expired
Solutions Architect (Multiple Positions)
San Francisco, California, United States
Solutions Architect (Multiple Positions), Databricks, Inc., San Francisco, CA. Responsible for architecting, designing and implementing advanced, customer-specific big data analytics solutions using advanced knowledge of data architecture, data mining, data engineering, machine learning, computer programming and emerging open source technologies. Duties include:
- Understand customer requirements and evaluate various Databricks technologies to fulfill them.
- Create initial proofs of concept for building end-to-end solutions that satisfy customer requirements using Databricks’ technology platform.
- Identify and improve new plans that help customers turn their data into relevant value, and align these plans with customer outcomes for continued success.
- Architect production-level workloads, including end-to-end pipeline load performance testing and optimization.
- Analyze strategic customers’ existing pain points and develop code to meet scalability, performance and stability requirements.
- Build reference architectures, how-tos and demo applications of Databricks products for customer use.

Telecommuting permitted. (DBxCA013)
40 hrs/week, Mon-Fri, 8:30 a.m. - 5:30 p.m. Salary range: $197,995 - $220,800/yr.
MINIMUM REQUIREMENTS:
Master’s degree or foreign equivalent in Computer Science, Engineering, Data Science or a related field and two (2) years of experience in data engineering, data platforms or data analytics.
In the alternative, employer will accept a Bachelor’s degree or foreign equivalent in Computer Science, Engineering, Data Science or a related field and five (5) years of post-bachelor’s, progressive experience in data engineering, data platforms or data analytics.
Qualifying experience must include two (2) years in at least five (5) of the following which may be gained concurrently:
- Apache Spark and Spark runtime internals;
- CI/CD for production deployments;
- Design and deployment of performant end-to-end data architectures;
- Technical project delivery, including managing scope, timelines and client relationships;
- Designing and implementing big data technologies, including at least one of the following: Apache Spark, Hadoop, Cassandra, NoSQL, MPP, OLTP or OLAP;
- Programming experience with at least one of the following languages: Python, Scala or Java; and
- Designing solutions on at least one of the following cloud infrastructures/services: AWS, Azure or GCP.
To apply, please send resumes to USapplications@databricks.com and reference job code DBxCA013.