
Data Engineer

(Confidential)

Job Description

Title: Data Engineer

City: Plano, TX

Duration: 12+ Months

Responsibilities:

  •  Develop sustainable, data-driven solutions with current new-generation data technologies to meet the needs of our organization and business customers
  •  Help develop solutions for streaming, real-time, and search-driven analytics
  •  Apply a firm understanding of delivering large-scale data set solutions and SDLC best practices
  •  Transform complex analytical models into scalable, production-ready solutions
  •  Utilize programming languages such as Java, Scala, and Python
  •  Manage the development pipeline of distributed-computing Big Data applications using open-source frameworks such as Apache Spark, Scala, and Kafka on AWS, and cloud-based data warehousing services such as Snowflake
  •  Leverage DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, using tools such as Jenkins, Maven, Nexus, Terraform, Git, and Docker

Basic qualifications:

  •  Bachelor's degree
  •  At least 5 years of experience with the Software Development Life Cycle (SDLC)
  •  At least 3 years of experience working on a big data platform
  •  At least 2 years of experience working with unstructured datasets
  •  At least 2 years of experience developing microservices: Python, Java, or Scala
  •  At least 1 year of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
  •  At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
  •  At least 1 year of Agile experience
  •  At least 1 year of experience with a streaming data platform, including Apache Kafka and Spark

Preferred qualifications:

  •  1+ years of experience with Identity & Access Management, including familiarity with principles such as least privilege and role-based access control
  •  Understanding of microservices architecture and RESTful web service frameworks
  •  1+ years of experience with JSON, Parquet, or Avro formats
  •  1+ years of experience with RDS, NoSQL, or graph databases
  •  1+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS, and Amazon EMR

Regards,

Charan V V D

Resource Development Manager

Cell: +1-972-792-9969 (best reached via text, WhatsApp, or email)

Email: charan@infovision.com

WhatsApp: https://wa.me/19727929969

LinkedIn: www.linkedin.com/in/charanvvd



Job Snapshot

Location: US-VA-Richmond
Employment Type: Contract to Hire
Pay Type: Year
Pay Rate: N/A
Store Type: IT & Technical

