- You will be a Data Engineer on a large team of onshore and offshore developers using best-in-class Big Data and relational data warehouse technologies, including the MapR Big Data environment, Talend, Teradata, and Informatica
- You will be prototyping data solutions to enable faster access to data for the analytics use case developers
- You will work closely with Marketing and Analytics & Business Insights (ABI) business teams on the implementation of Data Solutions
- You will be developing re-usable data solution patterns to enable quick to market data assets
- You will be analyzing and profiling business data in relational and Big Data environments
- You will be a liaison between the data SMEs and analytics use case developers to facilitate rapid response from GDT
- You’ll have the opportunity to grow in responsibility, work on exciting and challenging projects, train on emerging technologies and help set the future of the Data Solution Delivery team
What you are good at
- Total minimum experience of 6+ years in Data / Software engineering.
- Experience with all aspects of data systems, including database design, ETL, aggregation strategy, and performance optimization; minimum 4 years of experience in traditional data warehousing
- Hands-on development experience in Informatica 9.x/10.x or any enterprise-class ETL tool; minimum 3 years
- Understanding of best practices for building and designing ETL code
- Strong SQL experience with the ability to develop, tune, and debug complex SQL applications is required
- Expertise in schema design, developing data models, and a proven ability to work with complex data is required
- Proven experience working in large environments such as RDBMS, EDW, NoSQL, etc.
- Hands-on development experience with parallel-processing databases such as Teradata; minimum 4 years
What you have
- Experience with cloud platforms like GCP or AWS, preferably in data warehousing
- Experience with Google BigQuery or Snowflake
- Experience in Big Data technologies as a Developer
- Experience developing applications on Big Data platform for both batch and real time ingestion
- Hands-on experience with massive data processing frameworks like Apache Spark
- Knowledge of Big Data ETL such as Informatica BDM and/or Talend tools is preferred
- Hands-on experience with a programming language such as Java, Python, or Scala
- Hands-on experience with Linux and shell scripting
- Schwab systems experience
- Hands-on experience with CI/CD tools like Bamboo, Jenkins, Bitbucket, etc.
Target Total Compensation in New York City: $106,900 - $192,500
Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.
Why work for us?
Own Your Tomorrow embodies everything we do! We are committed to helping our employees ignite their potential and achieve their dreams. Our employees get to play a central role in reinventing a multi-trillion-dollar industry, creating a better, more modern way to build and manage wealth.
Benefits: We offer a competitive and flexible package designed to help you make the most of your life at work and at home, today and in the future.
Schwab is committed to building a diverse and inclusive workplace where everyone feels valued. As an Equal Opportunity Employer, our policy is to provide equal employment opportunities to all employees and applicants without regard to any status that is protected by law.
Schwab is an affirmative action employer, focused on advancing women, racial and ethnic minorities, veterans, and individuals with disabilities in the workplace. If you have a disability and require reasonable accommodations in the application process, contact Human Resources at email@example.com or call 800-275-1281.
TD Ameritrade, a subsidiary of Charles Schwab, is an Equal Opportunity Employer. At TD Ameritrade we believe People Matter. We value diversity and believe that it goes beyond all protected classes, thoughts, ideas, and perspectives.