Data Platform Engineer
Generis Tek Inc.
Plano, TX (Onsite)
Full-Time
We have a contract role, Databricks DevOps Engineer / Data Platform Engineer (Hybrid), for our client in Dallas, TX. Please let me know if you or any of your friends would be interested in this position.
Position Details:
Databricks DevOps Engineer / Data Platform Engineer (Hybrid) - Dallas, TX
Location: Dallas, TX 75201 (Hybrid)
Project Duration: 4+ months (contract)
Job Description:
Must Have:
- Databricks Administration: Manage users, groups, clusters, jobs, notebooks, and monitor performance within Databricks workspaces.
- AWS Infrastructure: Provision and manage AWS resources like S3, EC2, VPCs, IAM, Lambda, and CloudWatch to support Databricks.
- Infrastructure as Code (IaC): Implement and maintain infrastructure using tools like Terraform or CloudFormation for automated deployments.
- Automation & CI/CD: Develop automation scripts (Python, SQL) and integrate with CI/CD pipelines (Jenkins, GitHub Actions) for efficient deployments.
- Security: Harden the platform, manage access controls (IAM), and ensure compliance with security best practices.
- Performance Optimization: Right-size clusters, optimize Spark jobs, manage caching, and monitor costs (DBUs, storage).
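For illustration, here is a minimal Python sketch of the administration and automation work the items above describe: listing workspace clusters through the Databricks REST API. The /api/2.0/clusters/list endpoint is part of the public Databricks Clusters API; the environment-variable names for the host and token are placeholder assumptions, not details from this posting.

import os

import requests

# Placeholder assumptions: how the workspace URL and token are supplied.
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

def list_clusters():
    """Return metadata for every cluster in the workspace."""
    resp = requests.get(
        f"{DATABRICKS_HOST}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("clusters", [])

if __name__ == "__main__":
    # Print a one-line summary per cluster for monitoring purposes.
    for cluster in list_clusters():
        print(cluster["cluster_id"], cluster["state"], cluster["cluster_name"])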
Nice To Have:
- Data Pipelines: Design, build, and optimize scalable data pipelines and ETL/ELT processes using Spark and Delta Lake.
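As a rough illustration of the Spark/Delta Lake pipeline work named above, here is a minimal PySpark sketch. The S3 paths and column names (order_id, order_ts) are hypothetical, not taken from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in object storage (placeholder path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: deduplicate, enforce types, and drop records without a key.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: append into a Delta table, which adds ACID guarantees on top of S3.
clean.write.format("delta").mode("append").save("s3://example-bucket/delta/orders/")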
Role Summary:
As a Data Platform Engineer, you will be responsible for the design, development, and maintenance of our high-scale, cloud-based data platform, treating data as a strategic product. You will lead the implementation of robust, optimized data pipelines using PySpark and the Databricks Unified Analytics Platform, leveraging its full ecosystem for Data Engineering, Data Science, and ML workflows. You will also establish best-in-class DevOps practices using CI/CD and GitHub Actions to ensure automated deployment and reliability. This role demands expertise in large-scale data processing and a commitment to modern, scalable data engineering and AWS cloud infrastructure practices.
Key Responsibilities:
- Platform Development: Design, build, and maintain scalable, efficient, and reliable ETL/ELT data pipelines to support data ingestion, transformation, and integration across diverse sources.
- Big Data Implementation: Serve as the subject matter expert for the Databricks environment, developing high-performance data transformation logic primarily using PySpark and Python. This includes utilizing Delta Live Tables (DLT) for declarative pipeline construction and ensuring governance through Unity Catalog.
- Cloud Infrastructure Management: Configure, maintain, and secure the underlying AWS cloud infrastructure required to run the Databricks platform, including virtual private clouds (VPCs), network endpoints, storage (S3), and cross-account access mechanisms.
- DevOps & Automation (CI/CD): Own and enforce Continuous Integration/Continuous Deployment (CI/CD) practices for the data platform. Specifically, design and implement automated deployment workflows using GitHub Actions and modern infrastructure-as-code concepts to deploy Databricks assets (Notebooks, Jobs, DLT Pipelines, and Repos).
- Data Quality & Testing: Design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with architectural standards.
- Performance Optimization: Optimize data workflows and cluster configurations for performance, cost efficiency, and scalability across massive datasets.
- Technical Leadership: Provide technical guidance on data principles, patterns, and best practices (e.g., Medallion Architecture, ACID compliance) to promote team capabilities and maturity. This includes leveraging Databricks SQL for high-performance analytics.
- Documentation & Review: Draft and review architectural diagrams, design documents, and interface specifications to ensure clear communication of data solutions and technical requirements.
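To make the Delta Live Tables and data-quality responsibilities above concrete, here is a minimal DLT sketch in Python. It assumes execution inside a Databricks DLT pipeline, where the dlt module and the spark session are provided by the runtime; the table names, storage path, and expectation rule are hypothetical.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage (placeholder path).")
def orders_raw():
    # `spark` is provided by the Databricks runtime inside a DLT pipeline.
    return spark.read.json("s3://example-bucket/raw/orders/")

@dlt.table(comment="Cleansed orders for downstream consumption.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # quality gate
def orders_clean():
    return dlt.read("orders_raw").withColumn("order_ts", F.to_timestamp("order_ts"))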
Required Qualifications:
- Experience: 5+ years of professional experience in Data Engineering, focusing on building scalable data platforms and production pipelines.
- Big Data Expertise: Minimum 3+ years of hands-on experience developing, deploying, and optimizing solutions within the Databricks ecosystem. Deep expertise required in:
  - Delta Lake (ACID transactions, time travel, optimization).
  - Unity Catalog (data governance, access control, metadata management).
  - Delta Live Tables (DLT) (declarative pipeline development).
  - Databricks Workspaces, Repos, and Jobs.
  - Databricks SQL for analytics and warehouse operations.
- AWS Infrastructure & Security: Proven, hands-on experience (3+ years) with core AWS services and infrastructure components, including:
  - Networking: Configuring and securing VPCs, VPC Endpoints, Subnets, and Route Tables for private connectivity.
  - Security & Access: Defining and managing IAM Roles and Policies for secure cross-account access and least privilege access to data.
  - Storage: Deep knowledge of Amazon S3 for data lake implementation and governance.
- Programming: Expert proficiency (4+ years) in Python for data manipulation, scripting, and pipeline development.
- Spark & SQL: Deep understanding of distributed computing and extensive experience (3+ years) with PySpark and advanced SQL for complex data transformation and querying.
- DevOps & CI/CD: Proven experience (2+ years) designing and implementing CI/CD pipelines, including proficiency with GitHub Actions or similar tools (e.g., GitLab CI, Jenkins) for automated testing and deployment.
- Data Concepts: Full understanding of ETL/ELT, Data Warehousing, and Data Lake concepts.
- Methodology: Strong grasp of Agile principles (Scrum).
- Version Control: Proficiency with Git for version control.
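For reference, the Delta Lake features named above (time travel, optimization) look roughly like the following from PySpark. The table path is a placeholder, and OPTIMIZE/VACUUM are Databricks Delta SQL commands.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "s3://example-bucket/delta/orders/"  # placeholder table location

# Time travel: read the table as it existed at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# Maintenance: compact small files, then remove stale files past retention.
spark.sql(f"OPTIMIZE delta.`{path}`")
spark.sql(f"VACUUM delta.`{path}` RETAIN 168 HOURS")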
Preferred Qualifications:
- AWS Data Ecosystem Experience: Familiarity and experience with AWS cloud-native data services, such as AWS Glue, Amazon Athena, Amazon Redshift, Amazon RDS, and Amazon DynamoDB.
- Knowledge of real-time or near-real-time streaming technologies (e.g., Kafka, Spark Structured Streaming).
- Experience in developing feature engineering pipelines for machine learning (ML) consumption.
- Background in performance tuning and capacity planning for large Spark clusters.
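As a sketch of the streaming technologies listed above, here is a minimal Spark Structured Streaming job reading from Kafka into a Delta table. The broker address, topic, and storage paths are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Source: subscribe to a Kafka topic (placeholder broker and topic names).
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers bytes; cast the message value to a string payload.
events = stream.selectExpr("CAST(value AS STRING) AS payload")

# Sink: append to a Delta table with a checkpoint for recovery on restart.
query = (
    events.writeStream
          .format("delta")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .outputMode("append")
          .start("s3://example-bucket/delta/orders_stream/")
)
query.awaitTermination()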
To discuss this amazing opportunity, reach out to our Talent Acquisition Specialist Siddhant Singh by email at # or by phone at 630-576-1906.
About Generis Tek: Generis Tek is a boutique IT/professional staffing firm based in Chicagoland. We offer both contingent labor and permanent placement services to several Fortune 500 clients nationwide. Our philosophy is based on delivering long-term value and building lasting relationships with our clients, consultants, and employees. Our fundamental success lies in understanding our clients' specific needs and working very closely with our consultants to create the right fit for both sides. We aspire to be our clients' most trusted business partner.