VIJAY KUMAR

Email: kothakv01@gmail.com | Phone: (609) 779-2524

LICENSES AND CERTIFICATIONS

Teradata 14 Certified Professional

Professional Summary:

An Azure Cloud Data Platform Lead, currently with Caesars Entertainment Corporation, with a strong consulting background and 14 years of hands-on experience in Big Data, Azure cloud data engineering, and enterprise applications.

Senior Lead Azure Data Engineer

Caesars Entertainment Corporation, Las Vegas, NV / Jan 2018 – Present

  • Develop a deep understanding of the data sources, implement data standards, and maintain data quality and master data management.
  • Expert in building Databricks notebooks that extract data from source systems such as DB2 and Teradata and perform data cleansing, data wrangling, and ETL processing before loading to Azure SQL DB.
  • Expert in building ephemeral Databricks notebooks (wrapper, driver, and config) for processing data and back-feeding it to DB2 using a multiprocessing thread pool.
  • Expert in developing JSON definitions for deploying data-processing pipelines in Azure Data Factory (ADF).
  • Expert in using Databricks with Azure Data Factory (ADF) to process large volumes of data.
  • Performed ETL operations in Azure Databricks by connecting to different relational database source systems using JDBC connectors.
  • Developed Python scripts to do file validations in Databricks and automated the process using ADF.
  • Analyzed SQL scripts and redesigned them in PySpark SQL for faster performance.
  • Worked on reading and writing multiple data formats such as JSON, Parquet, and Delta from various sources using PySpark.
  • Developed an automated process in the Azure cloud that ingests data daily from a web service and loads it into Azure SQL DB.
  • Expert in optimizing PySpark jobs to run on different clusters for faster data processing.
  • Developed Spark applications in Python (PySpark) on a distributed environment to load large numbers of CSV files with differing schemas into PySpark DataFrames, process them, and reload them into Azure SQL DB tables (sketched after this list).
  • Analyzed data in place by mounting Azure Data Lake and Blob Storage to Databricks.
  • Used Logic Apps to take conditional actions based on the workflow and developed custom alerts using Azure Data Factory, Azure SQL DB, and Logic Apps.
  • Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
  • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
  • Good knowledge of and exposure to Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors, and tasks.
  • Involved in performance tuning of Spark applications: setting the right batch interval, the correct level of parallelism, and memory tuning.
  • Expert in understanding the current production state of an application and determining the impact of new implementations on existing business processes.
  • Involved in migration of data from on-premises servers to cloud databases (Azure Synapse Analytics (DW) and Azure SQL DB).
  • Hands-on experience setting up Azure infrastructure such as storage accounts, integration runtimes, service principal IDs, and app registrations to support scalable, optimized analytics for business users in Azure.
  • Expert in ingesting streaming data with Databricks Delta tables and Delta Lake to enable ACID transaction logging.
  • Expert in building Delta Lake on top of the data lake and performing transformations in Delta Lake.
  • Expert in implementing a distributed stream-processing platform with low latency and seamless integration with data and analytics services inside and outside Azure to build complete big data pipelines.
  • Expert in performance tuning of Delta Lake implementations (OPTIMIZE, rollback, cloning, time travel); see the Delta Lake sketch after this list.
  • Developed complex SQL queries using stored procedures, common table expressions (CTEs), and temporary tables to support Power BI reports.
  • Development-level experience in Microsoft Azure, providing data movement and scheduling functionality for cloud-based technologies such as Azure Blob Storage and Azure SQL Database.
  • Independently manage development of ETL processes, from development to delivery.
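
A minimal PySpark sketch of the CSV-loading pattern referenced above: read CSV files with differing schemas into DataFrames, align them on shared columns, and write the result to an Azure SQL DB table over JDBC. The paths, table, and connection values are hypothetical placeholders, not details from the original engagement.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv_to_azure_sql").getOrCreate()

# Read each landing folder separately because the file schemas differ.
sales_df = spark.read.option("header", True).csv("/mnt/datalake/landing/sales/")
returns_df = spark.read.option("header", True).csv("/mnt/datalake/landing/returns/")

# Align the frames on a common set of columns before combining them.
common_cols = [c for c in sales_df.columns if c in returns_df.columns]
combined_df = sales_df.select(common_cols).unionByName(returns_df.select(common_cols))

# Write to Azure SQL DB over JDBC; in Databricks the credentials would
# normally come from a secret scope rather than being hard-coded.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"
(combined_df.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.staging_sales")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("overwrite")
    .save())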
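
A minimal Delta Lake sketch, assuming a Databricks runtime with Delta Lake available, of the maintenance features mentioned above (OPTIMIZE, time travel, rollback); the table path and version number are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_maintenance").getOrCreate()

# Write a DataFrame as a Delta table on top of the data lake.
df = spark.range(1000).withColumnRenamed("id", "customer_id")
df.write.format("delta").mode("overwrite").save("/mnt/datalake/delta/customers")

# Compact small files for faster reads.
spark.sql("OPTIMIZE delta.`/mnt/datalake/delta/customers`")

# Time travel: read the table as of an earlier version.
previous_df = (spark.read.format("delta")
               .option("versionAsOf", 0)
               .load("/mnt/datalake/delta/customers"))

# Roll back to that earlier version if a bad load needs to be undone.
spark.sql("RESTORE TABLE delta.`/mnt/datalake/delta/customers` TO VERSION AS OF 0")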

Azure Data Engineer/ETL Developer Lead

Wells Fargo, New Brunswick, NJ / Oct 2014 – Dec 2017

  • Involved in requirements gathering, business analysis, design and development, testing, and implementation of business rules.
  • Understand business use cases and integration needs; write business and technical requirements documents, logic diagrams, process flow charts, and other application-related documents.
  • Used pandas in Python for data cleansing and validating the source data.
  • Designed and developed an ETL pipeline in the Azure cloud that gets customer data from an API and processes it into Azure SQL DB.
  • Orchestrated all data pipelines using Azure Data Factory and built a custom alerting platform for monitoring.
  • Created custom alert queries in Log Analytics and used webhook actions to automate custom alerts.
  • Created Databricks job workflows that extract data from SQL Server and upload the files to SFTP using PySpark and Python (see the SFTP sketch after this list).
  • Used Azure Key Vault as the central repository for secrets and referenced the secrets in Azure Data Factory and in Databricks notebooks.
  • Built Teradata ELT frameworks that ingest data from different sources using Teradata legacy load utilities.
  • Built a common SFTP download/upload framework using Azure Data Factory and Databricks.
  • Maintain and support Teradata architectural environment for EDW Applications.
  • Involved in the full lifecycle of projects, including requirements gathering, system design, application development, enhancement, deployment, maintenance, and support.
  • Involved in physical database design, data sourcing and data transformation, data loading, SQL and performance tuning.
  • Provided project development estimates to the business and, upon agreement, delivered projects accordingly.
  • Created appropriate Teradata Primary Indexes (PIs), taking into consideration both planned access of data and even distribution of data across all available AMPs. Considering the business requirements and these factors, created appropriate Teradata NUSIs for fast, easy access to data.
  • Developed data extraction, transformation, and loading jobs from flat files, Oracle, SAP, and Teradata sources into Teradata using BTEQ, FastLoad, FastExport, MultiLoad, and stored procedures.
  • Designed process-oriented UNIX scripts and ETL processes for loading data into the data warehouse.
  • Developed mappings in Informatica to load data from various sources into the data warehouse, using transformations such as Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, and Joiner.
  • Worked on advanced Informatica concepts, including implementation of Informatica pushdown optimization and pipeline partitioning.
  • Performed bulk data loads from multiple data sources (Oracle 8i, legacy systems) to the Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.
  • Used various transformations such as Source Qualifier, Aggregator, Lookup, Filter, Sequence Generator, Router, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, and Union.
  • Used Informatica PowerExchange to handle change data capture (CDC) data from the source and load it into the data mart following the slowly changing dimension (SCD) Type II process.
  • Used PowerCenter Workflow Manager to create workflows and sessions, and used various tasks such as Command, Event Wait, Event Raise, and Email.
  • Designed, created and tuned physical database objects (tables, views, indexes, PPI, UPI, NUPI, and USI) to support normalized and dimensional models.
  • Created a cleanup process for removing all the intermediate temp files that were used prior to the loading process.
  • Used volatile tables and derived queries to break complex queries into simpler ones.
  • Responsible for performance monitoring, resource and priority management, space management, user management, index management, access control, and execution of disaster recovery procedures.
  • Used Python and shell scripts to automate Teradata ELT and admin activities (see the Teradata automation sketch after this list).
  • Performed application-level DBA activities such as creating tables and indexes, and monitored and tuned Teradata BTEQ scripts using the Teradata Visual Explain utility.
  • Performance tuning, monitoring, UNIX shell scripting, and physical and logical database design.
  • Developed UNIX scripts to automate different tasks involved as part of loading process.
  • Worked with Tableau for reporting needs.
  • Created several Tableau dashboard reports and heat map charts, and supported numerous dashboards, pie charts, and heat map charts built on the Teradata database.
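
A minimal sketch of the Databricks-to-SFTP job pattern referenced above: pull passwords from a Key Vault-backed secret scope, extract a SQL Server table with PySpark, and upload the resulting file to an SFTP server with paramiko. The secret scope, server, table, and path names are hypothetical placeholders.

import paramiko
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver_to_sftp").getOrCreate()

# dbutils is available implicitly inside Databricks notebooks and jobs; the
# secret scope is assumed to be backed by Azure Key Vault.
sql_password = dbutils.secrets.get(scope="kv-scope", key="sqlserver-password")
sftp_password = dbutils.secrets.get(scope="kv-scope", key="sftp-password")

# Extract the source table over JDBC and land it as a single CSV file in DBFS.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://<server>:1433;database=<db>")
      .option("dbtable", "dbo.daily_extract")
      .option("user", "etl_user")
      .option("password", sql_password)
      .load())
df.coalesce(1).write.mode("overwrite").option("header", True).csv("/tmp/daily_extract")

# Locate the part file Spark produced and upload it over SFTP.
part = [f for f in dbutils.fs.ls("/tmp/daily_extract") if f.name.endswith(".csv")][0]
local_path = part.path.replace("dbfs:", "/dbfs")  # local file API view of DBFS

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="etl_user", password=sftp_password)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put(local_path, "/inbound/daily_extract.csv")
sftp.close()
transport.close()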
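
A minimal Teradata automation sketch, using the teradatasql Python driver, of the kind of ELT and admin scripting mentioned above: collecting statistics after a load and checking database space. The host, credentials, and object names are hypothetical placeholders.

import teradatasql

# Connect with the teradatasql driver; credentials would normally come from a
# vault or environment variables rather than literals.
with teradatasql.connect(host="tdprod.example.com", user="etl_admin",
                         password="***") as con:
    with con.cursor() as cur:
        # Refresh statistics on a staging table after the nightly load.
        cur.execute(
            "COLLECT STATISTICS ON stage_db.customer_stage COLUMN (customer_id)"
        )

        # Report current vs. maximum perm space for the staging database.
        cur.execute(
            "SELECT DatabaseName, SUM(CurrentPerm), SUM(MaxPerm) "
            "FROM DBC.DiskSpaceV "
            "WHERE DatabaseName = 'stage_db' "
            "GROUP BY DatabaseName"
        )
        for database_name, current_perm, max_perm in cur.fetchall():
            print(database_name, current_perm, max_perm)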

ETL Developer/Teradata Consultant

Cognizant Technology Solutions, Hyderabad, Telangana / Dec 2009 – Sep 2014

  • Maintain and support Teradata architectural environment for EDW Applications.
  • Interacted with business community and gathered requirements based on changing needs.
  • Developed mappings/scripts to extract data from Oracle, flat files, SQL Server, and DB2 and load it into the data warehouse using the Mapping Designer, BTEQ, FastLoad, and MultiLoad.
  • Exported data from the Teradata database using FastExport and BTEQ (see the BTEQ sketch after this list).
  • Wrote SQL queries, triggers, procedures, macros, packages, and shell scripts to apply and maintain the business rules.
  • Coded and implemented PL/SQL packages to perform batch job scheduling.
  • Performed Teradata and Informatica tuning to improve the performance of the Load.
  • Performed error handling using error tables and log files.
  • Used Informatica Designer to create complex mappings using transformations such as Filter, Router, Connected and Unconnected Lookup, Stored Procedure, Joiner, Update Strategy, Expression, and Aggregator to pipeline data into the data warehouse.
  • Performed DML and DDL operations with the help of the SQL transformation in Informatica.
  • Collaborated with the Informatica admin on the Informatica upgrade from PowerCenter 7.1 to PowerCenter 8.1.
  • Used the SQL transformation for sequential loads in Informatica PowerCenter ETL processes.
  • Worked closely with the business analysts' team to resolve problem tickets and service requests, and helped the 24/7 production support team.
  • Developed mappings in Informatica to load data from various sources into the data warehouse, using transformations such as Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, and Joiner.
  • Worked on advanced Informatica concepts, including implementation of Informatica pushdown optimization and pipeline partitioning.
  • Performed bulk data loads from multiple data sources (Oracle 8i, legacy systems) to the Teradata RDBMS using BTEQ, MultiLoad, and FastLoad.
  • Used various transformations such as Source Qualifier, Aggregator, Lookup, Filter, Sequence Generator, Router, Update Strategy, Expression, Sorter, Normalizer, Stored Procedure, and Union.
  • Used Informatica PowerExchange to handle change data capture (CDC) data from the source and load it into the data mart following the slowly changing dimension (SCD) Type II process.
  • Used PowerCenter Workflow Manager to create workflows and sessions, and used various tasks such as Command, Event Wait, Event Raise, and Email.
  • Designed, created and tuned physical database objects (tables, views, indexes, PPI, UPI, NUPI, and USI) to support normalized and dimensional models.
  • Created a cleanup process for removing all the intermediate temp files that were used prior to the loading process.
  • Used volatile tables and derived queries to break complex queries into simpler ones.
  • Worked with Tableau for reporting needs.
  • Created several Tableau dashboard reports and heat map charts, and supported numerous dashboards, pie charts, and heat map charts built on the Teradata database.
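
A minimal sketch of automating a BTEQ export from Python, in the spirit of the FastExport/BTEQ work referenced above: the script is fed to the bteq utility on stdin and the return code is checked so failures surface to the scheduler. The host, credentials, file, and table names are hypothetical placeholders.

import subprocess

# BTEQ script: log on, export a one-day delta of the customer dimension, log off.
bteq_script = """
.LOGON tdprod.example.com/etl_user,***;
.EXPORT REPORT FILE = /tmp/customer_extract.txt;
SELECT customer_id, customer_name, last_update_dt
FROM edw_db.customer_dim
WHERE last_update_dt >= DATE - 1;
.EXPORT RESET;
.LOGOFF;
.QUIT;
"""

result = subprocess.run(["bteq"], input=bteq_script, text=True, capture_output=True)
if result.returncode != 0:
    # BTEQ writes its messages to stdout; surface them so the job can alert.
    raise RuntimeError("BTEQ export failed:\n" + result.stdout)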
