GE is the world's Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. Through our people, leadership development, services, technology and scale, GE delivers better outcomes for global customers by speaking the language of industry. GE offers a great work environment, professional development, challenging careers, and competitive compensation. GE is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
The Staff Data ETL Architect will have an exciting opportunity to work with one of the world’s largest datasets (petabyte range). Big data presents unique challenges across data acquisition, data enrichment, and analytics scheduling and operationalization, driving high-visibility, cross-division Aviation outcomes.
Architect data ingestion jobs in tools such as Talend to acquire, stage, and aggregate data in technologies like HAWQ, Hive, Spark, HDFS, and Greenplum
Design ETL jobs using both built-in Talend components and custom scripts written in Python, Java, and Spark
Devise big data ingestion jobs in Talend for relational, big data, streaming, IoT, flat-file, JSON, API, and many other data sources
Write SQL that transforms, pivots, and stitches big data sets, both relational and non-relational
Prove out data ingestion ETL flows using message queueing
Optimize Talend jobs to reduce memory and storage consumption
Operate in an Agile framework as the Technical Architect, creating user stories and tasks from customer requirements to track project progress
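As a rough illustration of the acquire-stage-aggregate flow the responsibilities above describe (the feed, field names, and record shape here are hypothetical, and a production job would run in Talend or Spark rather than plain Python), a minimal sketch:

```python
import json
from collections import defaultdict

# Hypothetical raw feed: JSON lines as they might arrive from an API or flat file.
RAW_FEED = """
{"engine_id": "E100", "metric": "egt_degC", "value": 612.5}
{"engine_id": "E100", "metric": "egt_degC", "value": 618.0}
{"engine_id": "E200", "metric": "egt_degC", "value": 595.2}
"""

def extract(feed):
    """Parse each JSON line into a record, skipping blank lines."""
    return [json.loads(line) for line in feed.strip().splitlines() if line.strip()]

def transform(records):
    """Stage and aggregate: average each metric per engine."""
    sums, counts = defaultdict(float), defaultdict(int)
    for r in records:
        key = (r["engine_id"], r["metric"])
        sums[key] += r["value"]
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

aggregates = transform(extract(RAW_FEED))
print(aggregates)  # per-(engine, metric) averages
```

In a real pipeline the extract step would read from a queue, HDFS path, or API rather than an inline string, and the aggregates would land in a store such as Hive or Greenplum.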
Bachelor’s Degree in Computer Science / Engineering, IT, or Data Analytics from an accredited university or college
Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job
Must be willing to travel
Must be willing to work out of an office located in Evendale, Ohio
Experience with ETL architecture and developing ETL jobs
Experience working with databases
Experience architecting data ingestion design within a production environment
Experience with Talend Data Integration, including implementing integration jobs in a high-volume environment
Experience working with Hadoop Big Data ecosystems and tools such as Hive, Spark, etc.
Excellent written and verbal communication skills, especially with product owners
Self-driven to learn new technology and build technical skill sets
Experience with Agile project delivery frameworks and participating in Scrum ceremonies
Experience with CDC-based enterprise data ingestion technologies (Talend or HVR)
Hands-on experience with streaming data architecture (Spark Streaming, message queues, etc.)
Hands-on proficiency in one or more scripting languages (e.g., Java, Python, Scala, R, Shell scripting)
Experience with Hadoop data flow tools such as Sqoop, Flume, and NiFi
Experience with Hadoop workflow tools such as Spark and Spark Streaming
Experience consuming REST and WSDL services
Experience working with databases and hands-on proficiency in SQL
Familiarity with Greenplum data modeling, MPP-style data warehouses, or PostgreSQL
Perform development and operations duties, sometimes requiring support during off-work hours
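To make the "transform, pivot, and stitch" SQL skill concrete (the tables, columns, and values below are hypothetical, and the same pattern would typically run on HAWQ, Hive, or Greenplum rather than SQLite), a small self-contained sketch:

```python
import sqlite3

# Hypothetical tables: pivot long-format sensor readings into one row per engine,
# then stitch (join) them with a reference table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (engine_id TEXT, metric TEXT, value REAL);
INSERT INTO readings VALUES
  ('E100', 'egt', 612.5), ('E100', 'n1', 94.1), ('E200', 'egt', 595.2);
CREATE TABLE engines (engine_id TEXT, model TEXT);
INSERT INTO engines VALUES ('E100', 'GE90'), ('E200', 'GEnx');
""")

rows = conn.execute("""
SELECT e.engine_id,
       e.model,
       -- Conditional aggregation pivots the 'metric' rows into columns.
       MAX(CASE WHEN r.metric = 'egt' THEN r.value END) AS egt,
       MAX(CASE WHEN r.metric = 'n1'  THEN r.value END) AS n1
FROM readings r
JOIN engines  e ON e.engine_id = r.engine_id
GROUP BY e.engine_id, e.model
ORDER BY e.engine_id
""").fetchall()

for row in rows:
    print(row)
```

The conditional-aggregation pivot shown here is portable across most SQL dialects, which is why it is a common interview and on-the-job exercise for this kind of role.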
We are in the process of transitioning to an improved job application system; in the interim we are operating with two systems. Have your Job ID ready (from the email you received when you applied) to log in and check your application status.