I have more than 22 years of highly diversified experience in the software industry. I have excellent exposure to Data Science with Python (pandas, scikit-learn, TensorFlow, etc.), Big Data (Hadoop, PySpark, Hive, etc.), cloud solutions on AWS, and cloud automation using Terraform for AWS, Azure, and GCP.
Hadoop core (HDFS), Pig, Hive, Spark (PySpark, Scala), and relational and non-relational data with Hadoop.
--- C, C++, Java
--- Hive, Pig, YARN, Sqoop, Flume, HBase, Impala, Oozie, ZooKeeper, Kafka
--- InfluxDB and Grafana with Spark
--- PySpark
--- Spark SQL, Spark Streaming, MLlib, GraphX
--- Integration with Hadoop resource managers such as YARN
--- Python and PySpark in the shell and in Jupyter Notebook
--- MySQL and Oracle databases
--- PySpark for CSV, JSON, text file, and Parquet file processing (a short sketch follows this list)
--- In-depth understanding of Spark architecture: Spark Core, Spark SQL, RDDs, DataFrames, Spark Streaming, Spark MLlib, data sharing and caching
--- Excellent exposure to the Cloudera and Hortonworks distributions
--- Parquet file analysis using PyArrow (a short sketch follows this list)
--- Hadoop cluster programming
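As a flavour of the file-format work mentioned above, here is a minimal PySpark sketch; the file names and options are illustrative assumptions, not part of any specific course material.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file-formats-demo").getOrCreate()

# Read a CSV file with a header row, letting Spark infer column types.
csv_df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Read newline-delimited JSON records.
json_df = spark.read.json("events.json")

# Read a plain text file; each line becomes a row with a single 'value' column.
text_df = spark.read.text("logs.txt")

# Write the CSV data out as Parquet and read it back.
csv_df.write.mode("overwrite").parquet("sales.parquet")
parquet_df = spark.read.parquet("sales.parquet")

parquet_df.printSchema()
parquet_df.show(5)

spark.stop()
```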
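And a small PyArrow sketch for inspecting a Parquet file without Spark; again, the file name is an assumed example.

```python
import pyarrow.parquet as pq

# Open the file lazily to look at metadata without loading the data.
pf = pq.ParquetFile("sales.parquet")
print(pf.metadata)       # row count, row groups, created-by, etc.
print(pf.schema_arrow)   # the Arrow schema of the columns

# Read the whole file into an Arrow Table, then into pandas for analysis.
table = pq.read_table("sales.parquet")
df = table.to_pandas()
print(df.describe())
```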
All of my students are very successful now and are enjoying their work. I have my own excellent course material, which differentiates me from others and makes learning fun and easy. My learning material is highly informative. I provide excellent practical exposure, as I believe knowledge without practical experience is only 20% done.
Experience
No experience mentioned.
Fee details
₹2,000–4,000/hour
(US$23.78–47.57/hour)