In the developer series Behind The Code, we reach out to developers from the community to learn how their journey in data science began, which tools and skills they use, and what they consider essential. For this week's column, we caught up with Paritosh Gupta, Associate Data Scientist at JPMorgan Chase, one of the leading global financial services firms.
Currently, Gupta works as an Associate Data Scientist at JPMorgan Chase, where his work revolves around understanding business problems, converting them into data-science-driven insights, and communicating the results back to the business.
Gupta started his journey in 2012 as a Java Developer at TCS, where he worked with SAS programming and provided analytics consulting services to clients. This phase helped him learn statistics and modelling in parallel.
A passion for more analytical work and for learning maths and statistics helped Gupta make the transition from Java developer to data scientist. During the transition, however, he faced challenges in understanding the mathematics behind different algorithms and how to implement them in the real world. He overcame this by learning simple algorithms first and then focusing on the complex ones.
Code, Toolkit & Skillsets
A postgraduate in Data Science from IIT Hyderabad, Gupta worked with several tech stacks during his university days — R, Python (mostly), and CUDA for GPU programming. For research and deep learning, he used Google Colab and Jupyter Notebook.
Currently, he has been solving problems with regression algorithms. Beyond regression, he also works with ensemble methods such as Random Forest, GBM and XGBoost, and for deep learning he uses CNNs, RNNs, LSTMs, and other architectures.
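As an illustrative sketch (not Gupta's actual code), the jump from a regression baseline to an ensemble method can be shown in a few lines of scikit-learn; the synthetic dataset and hyperparameters here are assumptions for demonstration only:

```python
# Compare a linear regression baseline with a Random Forest ensemble
# on a synthetic regression problem. Data and parameters are illustrative.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic problem: 500 samples, 10 features, with noise
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    # R^2 score on held-out data
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```

The same `fit`/`score` pattern carries over to GBM and XGBoost models, which is part of why scikit-learn-style APIs make it easy to swap techniques.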
Talking about his developer toolkit, Gupta said that it comprises Jupyter Notebook and the Anaconda distribution because of their simplicity and elegance, along with VS Code, a lightweight IDE, for Python production coding and debugging. He also suggested learning Python for data science (pandas, NumPy, scikit-learn, etc.) and SQL as the query language; once those are covered, it is time to move on to Spark and cloud platforms.
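To make the pandas-plus-SQL recommendation concrete, here is a minimal sketch of a pandas aggregation that mirrors a SQL GROUP BY; the column names and values are invented for illustration:

```python
# Tiny illustration of the pandas workflow mentioned above;
# the data and column names are made up for demonstration.
import pandas as pd

df = pd.DataFrame({
    "branch": ["A", "A", "B", "B"],
    "loan_amount": [100.0, 150.0, 80.0, 120.0],
})

# Aggregate per branch -- the pandas equivalent of:
#   SELECT branch, AVG(loan_amount), SUM(loan_amount)
#   FROM loans GROUP BY branch;
summary = df.groupby("branch")["loan_amount"].agg(["mean", "sum"])
print(summary)
```

Being able to express the same aggregation in both pandas and SQL is often the practical test that both basics are covered before moving on to Spark.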
We asked him what he considers the best learning resources, and Gupta replied, “One should follow texts or videos as per his/her understanding and not get overwhelmed by information overload. My preferences are NPTEL videos (Mitesh Khapra, IIT Madras), Andrew Ng’s ML and DL courses on Coursera, and The Elements of Statistical Learning book. Udemy courses are also good and cheap.”
We also asked him what he has learnt recently, and he added, “Currently, I am learning to stay sane and not get overwhelmed by the ML and AI hype. It always comes back to basics, be it life principles or data science. I am learning the basics of deep neural networks from the NPTEL IITM course.”
AI In Finance
When we asked about the state of AI and data science in finance, Gupta replied that data science in finance is a tricky business, and one needs domain knowledge along with data science knowledge. AI in finance will make banking products more customer-centric and customisable. In the future, the jobs of bank tellers and advisors will be done by AI bots, which will help reduce agent and third-party costs in nearly every banking product.
What The Future Holds
Gupta’s personal endeavour over the next few years is to work on more challenging AI and ML problems and to learn more about data science. He wants to help others transition into the data science field and also wishes to pursue a PhD in the coming years.