- Design and implement ETL pipelines using PySpark, shell scripting, and Teradata Parallel Transporter (TPT) for seamless data integration and processing.
- Load data from Google Cloud Storage into Hadoop, using shell scripting for orchestration and PySpark for data transformation tasks.
- Transfer data from Hadoop to Teradata using TPT scripts for efficient loading into Teradata tables.
- Perform data quality checks and apply data cleansing techniques as part of the ETL process.
- Use BTEQ and TPT scripts to execute complex SQL queries against Teradata, load data efficiently, and manage Teradata database tasks.
- Collaborate with database administrators (DBAs) to optimize query performance, monitor database health, and troubleshoot issues as needed.
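The Hadoop-to-Teradata step could use a TPT job script along these lines: a DataConnector producer reading delimited export files and a LOAD operator writing to a staging table. All names here (TdpId, table names, paths, credentials) are placeholders, not real values.

```
DEFINE JOB load_sales
DESCRIPTION 'Load exported sales files into a Teradata staging table'
(
  DEFINE SCHEMA sales_schema
  (
    sale_id VARCHAR(20),
    sale_dt VARCHAR(10),
    amount  VARCHAR(18)
  );

  /* Producer: read comma-delimited files exported from HDFS */
  DEFINE OPERATOR file_reader
  TYPE DATACONNECTOR PRODUCER
  SCHEMA sales_schema
  ATTRIBUTES
  (
    VARCHAR DirectoryPath = '/staging/sales/',
    VARCHAR FileName      = 'sales_*.csv',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = ','
  );

  /* Consumer: bulk-load into the staging table */
  DEFINE OPERATOR td_load
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'tdprod',
    VARCHAR UserName     = 'etl_user',
    VARCHAR UserPassword = '********',
    VARCHAR TargetTable  = 'sales_db.sales_stg',
    VARCHAR LogTable     = 'sales_db.sales_stg_log',
    VARCHAR ErrorTable1  = 'sales_db.sales_stg_e1',
    VARCHAR ErrorTable2  = 'sales_db.sales_stg_e2'
  );

  APPLY ('INSERT INTO sales_db.sales_stg (:sale_id, :sale_dt, :amount);')
  TO OPERATOR (td_load)
  SELECT * FROM OPERATOR (file_reader);
);
```

A script like this would typically be launched from the shell wrapper with `tbuild -f load_sales.tpt`; the error tables capture rejected rows for the quality checks described above.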
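The BTEQ side of the Teradata work could look like the following fragment: merge staged rows into the target and refresh statistics, quitting with a nonzero return code on failure so the calling shell script can detect it. Logon string and table names are placeholders.

```
.LOGON tdprod/etl_user,********;

/* Merge staged rows into the target table (hypothetical names) */
MERGE INTO sales_db.sales AS tgt
USING sales_db.sales_stg AS src
  ON tgt.sale_id = src.sale_id
WHEN MATCHED THEN UPDATE SET amount = src.amount
WHEN NOT MATCHED THEN INSERT (sale_id, sale_dt, amount)
  VALUES (src.sale_id, src.sale_dt, src.amount);

.IF ERRORCODE <> 0 THEN .QUIT 8

/* Keep optimizer statistics current after the load */
COLLECT STATISTICS ON sales_db.sales COLUMN (sale_id);

.LOGOFF;
.QUIT 0
```

Surfacing the BTEQ return code to the shell wrapper is what lets the overall pipeline fail cleanly instead of silently continuing after a bad load.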