
Cloud Data Engineer (GCP) - Consultant

Nashville, TN
Classification: Contract-To-Hire
Contract Length: 12 months
Job ID: 15750002
 
At CereCore, our heart for healthcare is interconnected with our knowledge of technical solutions, creating a vital link that ultimately drives the delivery of high-quality care. 
CereCore is a wholly owned subsidiary of Hospital Corporation of America (HCA) Healthcare.

CereCore is seeking a Cloud Data Engineer - Consultant to join our team.


Summary:
Data Engineers within HCA’s Information and Analytics organization are responsible for defining and implementing data management practices across the enterprise. This contract position will focus primarily on enterprise data management and the migration of data to the cloud. The role requires working closely with different data teams and calls for self-starters who are proficient problem solvers, capable of bringing clarity to complex situations.

Data Engineers are expected to source and incorporate new data sources into the Enterprise Data Ecosystem. Responsibilities include writing, testing, and reviewing the ETL pipelines that implement these data management practices across the enterprise.

Responsibilities:
  • Implement data migration pipelines from Teradata to the cloud.
  • Implement enterprise data management practices, standards, and frameworks for data integration.
  • Develop, manage, and own the full data lifecycle, from raw data acquisition through transformation to end-user consumption.
  • Analyze requirements, design data pipelines, and integrate those solutions into customer environments.
  • Apply a solid understanding of fundamental cloud computing concepts.
  • Translate business requirements into technical design specifications.
  • Collaborate closely with team members to execute development initiatives using Agile practices and principles.
  • Maintain a holistic view of information assets by creating and maintaining artifacts that illustrate how information is stored, processed, and accessed.
  • Provide guidance on technology choices and design considerations for migrating data to the cloud.
  • Build consumable data lakes, analytics applications, and tools.
  • Design the cloud environment holistically, ensuring that it satisfies the company’s needs.
  • Perform deployment, maintenance, monitoring, and management activities within the established cloud environment.
  • Work closely with individuals across the technology organization to help promote awareness of the data architecture and ensure that enterprise assets and competencies are leveraged.

Requirements:
  • Cloud data engineering experience with GCP.
  • Extensive experience with ETL and big data tools such as Spark, Kafka, and Hadoop.
  • Teradata ETL experience using BTEQ and SQL scripts.
  • Extensive experience with relational database management systems; Teradata, Oracle, or SQL Server preferred.
  • Knowledge of ETL tools such as StreamSets, Cloud Dataflow, and Connect ETL.
  • Advanced SQL skills, including the ability to write, tune, and interpret SQL queries; tool-specific experience with the RDBMSs listed above is ideal, and experience with GCP BigQuery is preferred.
  • Scripting experience with Unix/Linux.
  • Experience with Git and GitHub version control.
  • Experience with relational databases such as Teradata and public cloud technologies such as GCP BigQuery, GCP Data Catalog, and Azure Databricks preferred.
  • Experience with Cloud Dataflow, Airflow, Cloud Composer, Cloud Data Fusion, Data Catalog, Kafka, Dataproc, StreamSets, or managing streaming data strongly preferred.
  • Ability to troubleshoot, maintain, reverse engineer, and optimize existing ETL pipelines.
  • Experience with NoSQL (HBase, Cassandra, MongoDB), in-memory, columnar, and other emerging database technologies.
  • Ability to analyze and interpret complex data and offer solutions to complex clinical problems.
  • Ability to work independently on assigned tasks.
  • Strong written and verbal communication skills, including the ability to explain complex technical issues in a way that non-technical people can understand.
  • Excellent problem-solving and critical thinking skills.
  • Knowledge of IT governance and operations.

CereCore was formed in 2001 as a shared service business within a large hospital operator.  We focus solely on helping healthcare organizations align business and IT strategies to improve processes and patient care. 
 
Our Commitment to Diversity and Inclusion
We believe excellence in healthcare starts with a foundation of inclusion, compassion and respect for our patients and each other.  We are committed to fostering a culture of inclusion across all areas of our organization.  We are an equal opportunity employer and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
 
Connect with us on LinkedIn, Twitter, and Facebook.