Research Paper
Personal Portfolio
A data science and algorithmic-trading site to showcase my work. Welcome to my portfolio.
Abstract
This research paper presents a comprehensive portfolio analysis documenting my journey from aspiring data scientist to quantitative trading candidate. Over the past three years, I have systematically developed competencies in time-series modeling, statistical analysis, and operational excellence through academic research, leadership roles, and real-world applications. My experience spans six major projects, three teaching assistantships serving 80+ students, founding and scaling Forbes Student Research (1,000+ applicants, 125 selected, 100% Ivy League acceptance rate), and currently serving as President of the Data Science Association and Research Assistant in the Economics Department at the University of San Francisco.
The paper details my transition from initial interest in traditional data science to discovering quantitative trading through extensive research (500+ industry professionals contacted in one month), systematic skill development through published research (two papers on ResearchGate in mathematics education), and practical application of forecasting methodologies (6-year air quality time-series analysis presented to Indore Municipal Corporation). I demonstrate how each role has built specific competencies relevant to quantitative trading: mathematical rigor from research projects, systematic evaluation from teaching 80+ students across three courses (CS111 Java, DS100 R), operational scalability from managing 40-person teams, and rapid prototyping from building real-time housing market prediction systems.
1. Who Am I? The Complete Story
1.1 The Beginning: High School and the Data Science Dream
My name is Krrish Ghindani. I am currently a sophomore studying Data Science at the University of San Francisco, but my journey began much earlier, back in high school in India. Like many students passionate about math and coding, I initially set my sights on becoming a data scientist. The field seemed perfect: it combined statistical thinking with programming, offered real-world applications, and was experiencing explosive growth. I spent countless hours teaching myself Python, working through machine learning tutorials, and building small projects on Kaggle.
But somewhere along the way, something felt incomplete. While I enjoyed building models and analyzing data, I found myself increasingly drawn to a specific subset of problems: those involving temporal dependencies, sequential decision-making, and quantitative evaluation under uncertainty. I noticed that my favorite projects were always the ones dealing with time-series data, forecasting, and optimization. When I built a simple stock price predictor as a learning exercise, I became fascinated not by the model itself, but by the underlying market mechanisms, the challenge of signal extraction from noise, and the mathematical rigor required to make defensible predictions in adversarial environments.
1.2 The Pivot: Discovering Quantitative Trading
The turning point came during the roughly three-month gap between my high school graduation (May 2024) and my move to the United States for college. Rather than taking a break, I decided to explore which career path would truly align with my interests in mathematics, coding, and systematic problem-solving. This led to one of the most intensive research periods of my life.
Over the course of one month, I reached out to over 500 professionals across finance, technology, and academia through LinkedIn, email, and referrals. My goal was simple: understand what people actually do in different quantitative roles, what skills matter most, and where someone with my background could add the most value. I didn't just send generic messages. I researched each person's background, read their publications or posts, and asked specific questions about their work. Out of 500+ contacts, I received approximately 180 substantive responses.
Through these conversations, I discovered quantitative trading. Multiple professionals pointed me toward this field, noting that it combined several elements I was passionate about: mathematical rigor (probability, statistics, stochastic processes), coding intensity (building and testing systematic strategies), and rapid feedback loops (you know quickly if your models work). More importantly, several quants told me something that resonated deeply: in trading, you cannot hide behind vague metrics or cherry-picked results. The market is the ultimate adversarial test. Either your model generates alpha, or it doesn't. Either your risk management works, or you lose money. This level of accountability and intellectual honesty appealed to me immensely.
I spent the remaining two months of my gap period reading everything I could find about quantitative trading. I went through academic papers on market microstructure, read blogs from successful quant funds, studied time-series econometrics, and worked through problems in probability and stochastic calculus. I read news articles about algorithmic trading systems, studied case studies of successful (and failed) quant strategies, and tried to understand the actual day-to-day work of quantitative researchers and traders. By the time I arrived at USF, I had made my decision: I wanted to build a career in quantitative and algorithmic trading.
1.3 Why Should You Hire Me? The Value Proposition
I understand that I am an early-career candidate competing against many talented individuals. So let me be direct about what I bring to the table and why I believe I can contribute meaningfully to a quantitative trading team.
First, I have proven ability to work with noisy, real-world time-series data. During my gap period, I worked with Prof. Saurabh Kumar at the Indian Institute of Management Indore on a six-year air quality forecasting project. This was not clean, pre-processed data from a textbook. It was messy, incomplete, real-world data with missing values, measurement errors, seasonal patterns, non-stationarity, and multiple confounding factors. I built ARIMA models to forecast AQI levels and conducted sensitivity analysis to identify key drivers. The work culminated in a presentation to the Indore Municipal Corporation. This experience taught me how to handle the kind of data challenges that are ubiquitous in financial markets: dealing with noise, identifying true signals, testing for robustness, and communicating model limitations honestly.
Second, I have demonstrated operational excellence and scalability. I founded Forbes Student Research with a simple mission: provide merit-based research opportunities to talented high school students without the $10,000+ price tags that predatory "research programs" charge. What started as an idea grew into an organization that processed over 1,000 applications in approximately 1.5 years, managed a team of 40 volunteers across research, marketing, HR, and operations, and successfully placed 125 students across 6 cohorts. The most powerful validation of our work: every single accepted student received admission to at least one Ivy League institution, and on average 2-3 top-30 U.S. universities. This required building systematic evaluation criteria, designing scalable interview processes, managing distributed teams, and making consistent decisions under resource constraints. These are exactly the kinds of operational challenges that trading desks face when scaling from prototype strategies to production systems.
Third, I have exceptional communication and teaching abilities. In my freshman year at USF (Spring 2025), I served as a Teaching Assistant for two courses simultaneously: CS111 (Foundations of Program Design in Java) and DS100 (Introduction to Data Science with R). This meant supporting approximately 80 students across two completely different programming paradigms and problem domains. I held regular office hours, created supplementary materials, debugged student code, and helped students understand core concepts in object-oriented programming and statistical computing. The results speak for themselves: students I worked with increased their major GPAs by an average of 0.5 points, transitioned smoothly into advanced Java courses, and many specifically requested sections where I was the TA because they felt comfortable learning from me. This ability to explain complex technical concepts clearly is crucial in trading, where you must communicate model assumptions, risk factors, and strategy logic to both technical and non-technical stakeholders.
Fourth, I have demonstrated leadership and community building at scale. In Fall 2024, I was elected President of the Data Science Association at USF. This is not a casual club. We built an ambitious, highly selective community with direct connections to Google, Microsoft, Salesforce, and several other leading tech companies. Our focus areas include machine learning, deep learning, neural networks, and real-world applications. We run technical workshops, interview preparation sessions, resume reviews, and speaker series featuring professors and industry professionals. Additionally, I serve as a mentor in two selective campus programs: MAGIS (leadership development) and STEP (personal transformation), where I mentor over 50 students in total. We review 70+ applications and select only 25 participants, then work intensively with them on developing leadership skills, personal growth, and community impact.
Fifth, I can prototype quickly and iterate systematically. One project I am particularly proud of is a real-time housing market prediction system I built with my team in the Data Science Association. We scraped real-time housing data, built time-series forecasting models, and created a tool that tells users which houses are good buys, predicts whether prices will rise or fall, and suggests optimal timing for purchases. While this is not a trading system per se, it demonstrates the same core skills: acquiring and cleaning real-time data, building predictive models under uncertainty, communicating probabilistic forecasts to end users, and iterating based on model performance. In trading, the ability to move from hypothesis to tested prototype quickly is invaluable, and this is an area where I excel.
Finally, I am intellectually honest and comfortable with uncertainty. One of the most important things I learned from my research and teaching experiences is the importance of knowing what you don't know. In my air quality project, I was upfront with the Municipal Corporation about model limitations and the assumptions underlying my forecasts. In teaching, I never pretended to have all the answers; when students asked questions I couldn't answer immediately, I said so, figured it out, and came back with a better explanation. In quantitative trading, where overconfidence can be extraordinarily expensive, this kind of intellectual humility is not just a virtue—it is a survival skill.
2. Academic Background and Research Experience
2.1 Published Research: Mathematics Education and AI (Dec 2022 - Jan 2023)
Before diving into time-series forecasting, I conducted foundational research in mathematics education and artificial intelligence applications. I published two papers on ResearchGate that demonstrate my early interest in systematic evaluation and data-driven decision making.
Paper 1: "Evolutionary Perspective for Developing Beyond Abacus" (December 2022)
This research challenged the traditional computational focus of mathematics education. I argued that mathematics should be taught as a science of pattern recognition and creation, not merely as a calculation tool. The paper proposed treating mathematics as a way of interpreting environmental patterns, scientific phenomena, business problems, and everyday life. This work was fundamentally about developing better mental models and frameworks for understanding complex systems—skills directly applicable to market analysis and trading strategy development. The ability to see patterns in noise, recognize structural relationships, and think abstractly about quantitative problems is at the core of quantitative trading.
Paper 2: "Maths & AI Parallel yet Intersecting" with Ms. Snehal Moghe and Moghe Prakash (January 2023)
This research proposal explored using AI and machine learning to diagnose students' mathematical proficiency levels and provide personalized learning paths. We developed a framework using P-VAE (Probabilistic Variational Autoencoders) to fill gaps in student data, classify learning patterns, and recommend curated content. The project involved understanding different user profiles based on performance patterns, demographics, and question-answering behaviors. This experience with classification, pattern matching, and probabilistic modeling directly informed my later work in time-series forecasting and predictive modeling. More importantly, it taught me how to think about model evaluation, training data quality, and the importance of validation in machine learning systems.
2.2 Air Quality Forecasting: Time-Series Analysis with Prof. Saurabh Kumar (Summer 2024)
This project represents my most substantial quantitative research to date and directly demonstrates skills relevant to quantitative trading. After finishing high school in May 2024, I had approximately three months before moving to the United States. Rather than treating this as vacation time, I decided to pursue rigorous research experience.
Finding the Right Advisor: I sent cold emails to 50 professors across India working in big data, machine learning, and AI domains. I received 48 responses, which was both overwhelming and exciting. I carefully researched each professor's work, read their recent publications, and evaluated where I could learn the most. I ultimately chose to work with Prof. Saurabh Kumar, a director-level faculty member at the Indian Institute of Management Indore, whose research focused on time-series econometrics and environmental data analysis.
The Problem: Indore, a major city in central India, had been struggling with deteriorating air quality. AQI levels consistently ranged between 90 and 100 (unhealthy for sensitive groups), and city officials wanted to understand (1) what was driving these high levels, (2) whether current trends would continue, and (3) what interventions might help bring AQI down toward healthier levels around 50.
The Data: I worked with approximately six years of daily air quality measurements (2018-2024), including:
- AQI (Air Quality Index) values
- PM2.5 and PM10 particulate matter concentrations
- Meteorological factors (temperature, humidity, wind speed, wind direction)
- Temporal features (day of week, month, season)
- External factors (traffic patterns, industrial activity, agricultural burning)
The data was far from clean. There were missing values, measurement inconsistencies across different monitoring stations, outliers from sensor malfunctions, and complex seasonal patterns overlaid with long-term trends.
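A minimal sketch of this kind of cleaning in pandas, on a synthetic daily series; the 15-day window and 50-point outlier threshold are illustrative choices, not the project's actual parameters:

```python
import numpy as np
import pandas as pd

# Hypothetical illustration: a daily AQI series with gaps and a sensor spike.
rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-01", periods=365, freq="D")
aqi = pd.Series(95 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365)
                + rng.normal(0, 5, len(idx)), index=idx)
aqi.iloc[40:45] = np.nan  # missing values (station downtime)
aqi.iloc[200] = 900       # outlier from a sensor malfunction

# 1) Flag outliers against a rolling median, then treat them as missing.
rolling_med = aqi.rolling(15, center=True, min_periods=1).median()
is_outlier = (aqi - rolling_med).abs() > 50
cleaned = aqi.mask(is_outlier)

# 2) Fill short gaps with time-aware interpolation (longer gaps stay missing).
cleaned = cleaned.interpolate(method="time", limit=7)

print(cleaned.isna().sum())  # remaining gaps, if any
```

The same two-step pattern — flag implausible readings first, then impute — keeps sensor faults from leaking into the interpolated values.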
Methodology: I built ARIMA (AutoRegressive Integrated Moving Average) models to forecast AQI levels. The modeling process involved:
- Exploratory Data Analysis: Identifying trends, seasonality, autocorrelation structure, and potential exogenous variables
- Stationarity Testing: Using Augmented Dickey-Fuller tests and differencing to achieve stationarity
- Model Selection: Testing different (p,d,q) parameter combinations using AIC/BIC criteria
- Feature Engineering: Creating lagged variables, rolling averages, seasonal indicators, and interaction terms
- Validation: Time-series cross-validation with expanding windows to prevent look-ahead bias
- Sensitivity Analysis: Identifying which factors had the strongest impact on AQI levels
- Scenario Planning: Modeling different intervention scenarios (reduced traffic, controlled industrial emissions, etc.)
Results: The model achieved reasonably accurate short-term forecasts (1-7 days ahead) and identified PM2.5 concentrations, wind patterns, and seasonal agricultural burning as the primary drivers of poor air quality. More importantly, the sensitivity analysis provided actionable insights: reducing PM2.5 sources could have outsized impact compared to other interventions.
Presentation to Indore Municipal Corporation: I presented findings to city officials, including forecasts, driver analysis, and scenario planning results. This was a humbling experience because I had to explain statistical models to non-technical stakeholders, communicate uncertainty honestly, and make clear what the model could and could not tell us. I learned that even sophisticated models are only useful if you can explain them clearly and acknowledge their limitations.
Relevance to Quantitative Trading: This project directly mirrors the challenges of quantitative trading:
- Working with noisy, non-stationary time-series data (just like financial markets)
- Building predictive models with limited signal-to-noise ratios
- Conducting rigorous validation to avoid overfitting
- Performing sensitivity analysis to understand model drivers (analogous to factor analysis in trading)
- Communicating probabilistic forecasts and model uncertainty
- Making decisions with incomplete information under time constraints
2.3 Current Research: Economics Department Research Assistant (Fall 2024 - Present)
I am currently working as a Research Assistant in the Economics Department at USF under the supervision of the department chair. This is a complex, long-term project with significant data engineering and statistical components.
Project Overview: We are building a comprehensive, decade-by-decade dataset of Catholic priests and their assignments in the United States from 1888 to the present. This dataset enables two intertwined research streams:
- Abuse-Impact Paper (Economics/Demography): Using longitudinal data to estimate the social harms of predator priests, particularly effects on teen suicide rates and compounding harms caused by organizational responses like reassignments.
- Sociology of the Priesthood: Studying the priesthood as a channel of assimilation and community formation in American religious life, tracking who served where and how patterns changed over time across different immigrant waves.
My Role: I lead the entire data pipeline:
- Digitization: Building OCR pipelines to extract data from historical PDF directories (the 1990 and 1970 volumes first, with 1980 work ongoing)
- Entity Resolution: Cleaning names, deduplicating priests across decades, linking assignments over time
- Geocoding: Mapping parishes and dioceses to precise locations for spatial analysis
- Validation: Cross-referencing sources and handling ambiguous cases
- Documentation: Maintaining clean, commented code and logging all methodological decisions (regex rules, matching thresholds, geocoding heuristics)
- Analysis: Generating decade-by-decade snapshots, time-series summaries, and cohort analyses
Technical Stack: Python (pandas, regex, fuzz for entity matching), OCR libraries (Tesseract, PyPDF2), spatial analysis tools (GeoPandas, geocoding APIs), version control (Git), and statistical analysis (R, statsmodels).
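A minimal sketch of the comparison logic the entity-resolution stage relies on, using the standard library's difflib here in place of the fuzzy-matching library named above; the names, honorific list, and 0.85 threshold are illustrative:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation and common honorifics before comparing."""
    name = name.lower().replace(".", "").replace(",", "")
    for title in ("rev", "fr", "msgr"):
        name = name.removeprefix(title + " ")
    return " ".join(name.split())

def same_priest(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two directory entries as one person above a similarity threshold."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# Illustrative entries as they might appear in two decades of directories
# (hypothetical names).
print(same_priest("Rev. John A. Smith", "Fr. John A Smith"))
print(same_priest("Rev. John A. Smith", "Rev. Peter Kowalski"))
```

The important design point is the same regardless of library: normalize aggressively first, then compare, and log the threshold so matching decisions stay reproducible.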
Why This Matters for Quant: This project demonstrates several critical skills:
- Large-scale data engineering and cleaning (essential for working with alternative data in trading)
- Entity resolution and record linkage (similar to matching trades, reconciling data sources)
- Handling ambiguity and making defensible methodological choices under uncertainty
- Building reproducible, well-documented pipelines (critical for production trading systems)
- Time-series analysis and longitudinal data handling
3. Teaching Experience: Communicating Complex Concepts
3.1 Teaching Assistant, CS111: Foundations of Program Design in Java (Spring 2025)
In my first semester as a Teaching Assistant, I supported approximately 40 students learning object-oriented programming in Java. This was a foundational course covering core programming concepts: data types, control flow, functions, classes, objects, inheritance, polymorphism, and basic algorithms.
Responsibilities:
- Holding weekly office hours (4 hours per week)
- Grading assignments and providing detailed feedback
- Debugging student code and explaining errors
- Creating supplementary examples and practice problems
- Answering questions on course forums
Impact: Students I worked with closely showed measurable improvement. On average, they increased their major GPAs by 0.5 points by the end of the semester. More importantly, they transitioned smoothly into CS211 (the next Java course), and many told me they felt significantly more confident in their programming abilities. Several students specifically sought out sections where I was the TA in subsequent semesters, which I take as strong validation of my teaching effectiveness.
Key Learning: Teaching Java to beginners taught me how to break down complex systems into understandable pieces, identify where students get stuck, and explain the same concept multiple ways until it clicks. These are exactly the skills you need when explaining a trading strategy to portfolio managers, debugging a live system with your team, or presenting research findings to stakeholders.
3.2 Teaching Assistant, DS100: Introduction to Data Science with R (Spring 2025)
Simultaneously, I served as a TA for DS100, supporting approximately 40 students learning statistical computing with R. This course covered data manipulation (dplyr, tidyr), visualization (ggplot2), statistical inference, hypothesis testing, regression analysis, and basic machine learning.
Unique Challenge: This was harder than the Java course because students came from diverse backgrounds. Some had strong programming skills but weak statistics. Others understood the math but struggled with R syntax. I had to adapt my explanations to meet each student where they were.
Teaching Philosophy: I emphasized conceptual understanding over memorizing code. When a student asked how to do something in R, I would first make sure they understood what they were trying to accomplish statistically, then show them the code. This approach led to deeper learning and better retention.
Relevance to Quant: Data science with R is directly relevant to quantitative research. I was teaching students how to clean messy data, visualize distributions, test hypotheses, build regression models, and interpret statistical output. These are the daily tasks of quantitative researchers.
3.3 Teaching Assistant, CS111 (Fall 2025)
In Fall 2025, I returned as a TA for CS111, this time with more experience and refined teaching methods. The student feedback was overwhelmingly positive. Many students joined sections specifically because I was the TA, which demonstrates that I had built a reputation as an effective, approachable instructor who genuinely cared about student success.
What This Shows: The fact that students actively sought me out shows strong communication skills and the ability to build trust. In quantitative trading, where you must collaborate with researchers, developers, traders, and risk managers, the ability to communicate clearly and build credibility is just as important as technical skills.
4. Leadership and Operational Excellence
4.1 Founder and CEO, Forbes Student Research (2023-Present)
Forbes Student Research started from a place of frustration. During my own high school research journey, I encountered numerous "research programs" that charged $8,000-$12,000 to connect students with professors. These programs promised publication, mentorship, and career advancement, but in reality, many were pay-to-play schemes that provided minimal value. I believed there had to be a better way: merit-based, accessible, and genuinely focused on developing student research skills.
The Model: We built a selective program matching talented high school students with faculty mentors for genuine research projects. Students applied with research proposals, academic transcripts, and letters of recommendation. We evaluated applications holistically, looking for intellectual curiosity, work ethic, and genuine interest in research—not ability to pay.
Scale and Growth:
- Applications: Over 1,000 applications in approximately 1.5 years
- Team: 40 volunteers across research coordination, operations, marketing, social media, and HR/interviews
- Cohorts: 6 cohorts totaling 125 selected students
- Outcomes: 100% of accepted students received admission to at least one Ivy League institution; average of 2-3 top-30 U.S. university acceptances per student
Operational Challenges: Managing 1,000 applications with 40 volunteers required building systematic processes:
- Application Review: Developed standardized rubrics for evaluating research proposals, academic records, and recommendation letters
- Interview Process: Designed structured interviews with consistent evaluation criteria
- Team Coordination: Built communication protocols, task assignment systems, and accountability mechanisms for distributed volunteers
- Quality Control: Implemented review processes to ensure consistent decision-making across multiple evaluators
- Matching Algorithm: Created a system to match students with appropriate faculty mentors based on research interests, backgrounds, and mentor availability
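A hypothetical sketch of the kind of interest-overlap scoring such a matching system can use; the students, mentors, and capacities below are invented for illustration:

```python
# Match each student to the available mentor with the largest research-interest
# overlap, respecting per-mentor capacity (greedy, first-come-first-served).
students = {
    "ananya": {"time series", "econometrics"},
    "diego": {"nlp", "deep learning"},
    "mei": {"econometrics", "causal inference"},
}
mentors = {
    "prof_rao": ({"time series", "econometrics"}, 2),   # (interests, capacity)
    "prof_lee": ({"deep learning", "vision", "nlp"}, 1),
}

def match(students, mentors):
    capacity = {m: cap for m, (_, cap) in mentors.items()}
    pairs = {}
    for student, interests in students.items():
        ranked = sorted(
            (m for m in mentors if capacity[m] > 0),
            key=lambda m: len(interests & mentors[m][0]),
            reverse=True,
        )
        if ranked:
            pairs[student] = ranked[0]
            capacity[ranked[0]] -= 1
    return pairs

print(match(students, mentors))
```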
Why This Matters for Trading: This experience demonstrates operational scalability under resource constraints. Trading desks face similar challenges: evaluating many signals with limited capital, coordinating teams across research/development/trading, maintaining consistency in decision-making, and scaling processes without sacrificing quality. The skills are directly transferable.
4.2 President, Data Science Association at University of San Francisco (August 2024-Present)
In Fall 2024, I was elected President of the Data Science Association. This is not a casual club—it is an ambitious, selective community of data science innovators with direct connections to leading tech companies.
Vision: Build USF's premier technical organization focused on machine learning, deep learning, neural networks, and real-world applications. Create a powerful network that extends beyond campus into the Bay Area tech ecosystem.
Selection Process: We run rigorous interviews for all positions (E-board and general membership). We look for exceptional commitment, innovative thinking, leadership potential, and technical skills. This is intentional—we want members who will push each other to excellence.
Industry Connections: We have established direct relationships with Google, Microsoft, Salesforce, and several other tech companies. These partnerships provide:
- Company visits and site tours
- Guest speakers from industry
- Networking events
- Resume workshops tailored for tech applications
- Technical interview preparation
Technical Projects: One highlight is our real-time housing market prediction system. We scrape housing data, build time-series models, and predict optimal buy/sell timing. The system tells users:
- Which houses are currently undervalued
- Whether prices in specific neighborhoods will rise or fall
- Optimal timing for purchases based on predicted trends
This project demonstrates rapid prototyping, real-time data handling, forecasting under uncertainty, and user-facing product development—all relevant to algorithmic trading systems.
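As a deliberately simplified stand-in for the forecasting models described above, the rise/fall classification for a neighborhood can be sketched as a trend-line extrapolation (the prices are illustrative; the actual system's models are more involved):

```python
import numpy as np

def trend_forecast(prices, horizon=1):
    """Return (next-period price estimate, direction) from a least-squares
    trend line fitted to recent observations."""
    t = np.arange(len(prices))
    slope, intercept = np.polyfit(t, prices, 1)
    estimate = slope * (len(prices) - 1 + horizon) + intercept
    return estimate, "rising" if slope > 0 else "falling"

# Hypothetical recent monthly median prices for one neighborhood.
monthly_median = [812_000, 818_000, 825_000, 821_000, 834_000]
estimate, direction = trend_forecast(monthly_median)
print(direction, round(estimate))
```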
Attendance Policy: We track attendance but allow up to 3 absences per year (exceptions for medical emergencies and exams). This creates accountability while respecting academic commitments—similar to how trading teams balance flexibility with operational reliability.
4.3 MAGIS and STEP Mentor (Fall 2024-Present)
I serve as a mentor in two selective leadership development programs at USF:
MAGIS Leadership Program: A rigorous program focused on personal growth and leadership development. We receive 70+ applications and select only 25 participants. I was part of the selection committee, reviewing all applications and making admission decisions. Now, I mentor approximately 25 students, helping them develop leadership skills, navigate challenges, and create positive impact in their communities.
STEP Program: A personal transformation program helping students bring change into their lives. I mentor approximately 25 additional students, providing guidance on goal-setting, habit formation, and personal development.
Combined Impact: Mentoring 50+ students simultaneously requires strong organizational skills, empathy, time management, and the ability to tailor advice to individual needs. These are precisely the soft skills that distinguish great quants from merely good ones—the ability to mentor junior team members, collaborate effectively, and build strong working relationships.
4.4 Resident Assistant, Gillson Residence Hall (August 2025-Present)
In August 2025, I became a Resident Assistant for Gillson Residence Hall. I oversee 42 residents on my floor, providing support, community building, and crisis response.
Responsibilities:
- Ensuring residents feel welcomed and integrated into campus life
- Organizing community-building activities and floor events
- Providing academic and personal support
- Handling roommate conflicts and mediation
- Responding to emergencies and enforcing residential policies
- Connecting students with campus resources (counseling, academic support, etc.)
Key Skills Developed: Crisis management, conflict resolution, empathy, community building, and responsibility for others' wellbeing. These skills translate to trading environments where team dynamics, stress management, and supporting colleagues under pressure are critical.
5. Technical Skills and Quantitative Toolkit
5.1 Programming and Software Engineering
- Python: NumPy, pandas, scikit-learn, statsmodels, matplotlib, seaborn, Jupyter
- R: dplyr, tidyr, ggplot2, statistical modeling packages
- Java: Object-oriented programming, data structures, algorithms
- SQL: Database querying, joins, aggregations
- Git/GitHub: Version control, collaborative development
- Data Engineering: OCR, web scraping, data cleaning pipelines
5.2 Statistics and Machine Learning
- Time-Series Analysis: ARIMA, SARIMA, stationarity testing, autocorrelation analysis
- Statistical Inference: Hypothesis testing, confidence intervals, p-values, power analysis
- Regression Analysis: Linear regression, logistic regression, regularization (Ridge, Lasso)
- Machine Learning: Classification, supervised learning, model evaluation, cross-validation
- Probability Theory: Distributions, Bayes theorem, conditional probability, stochastic processes
5.3 Domain Knowledge
- Quantitative Trading Concepts: Market microstructure, alpha generation, factor models, backtesting, risk management
- Financial Mathematics: Probability, stochastic calculus (learning), option pricing theory (learning)
- Economics: Microeconomics, econometrics, causal inference
6. Why Quantitative Trading? The Perfect Fit
After three years of research, teaching, and leadership, I am more convinced than ever that quantitative trading is the right career path for me. Here is why:
Intellectual Honesty: Markets provide immediate, unbiased feedback. You cannot hide behind vague metrics or cherry-picked results. Either your model generates alpha, or it doesn't. This level of accountability appeals to me deeply.
Mathematical Rigor: Trading demands deep understanding of probability, statistics, and stochastic processes. These are areas where I have demonstrated strength through academic research and practical applications.
Rapid Iteration: Unlike academic research where publication cycles take years, trading provides fast feedback loops. You can test hypotheses, measure results, and iterate quickly. I thrive in this kind of environment.
Interdisciplinary Nature: Trading combines mathematics, computer science, economics, psychology, and domain knowledge. My diverse background—from teaching to research to leadership—positions me well for this interdisciplinary challenge.
Scalability Challenges: Taking a strategy from prototype to production involves operational challenges similar to what I faced scaling Forbes Student Research: systematic processes, quality control, team coordination, and performance monitoring.
Continuous Learning: Markets evolve constantly. What works today may not work tomorrow. This requirement for continuous learning and adaptation matches my personality perfectly. I am someone who reached out to 500+ professionals to understand a field, taught myself time-series modeling, and continuously seeks new challenges.
7. What Do I Want From My Next Role?
I am actively seeking internships and early-career positions in quantitative trading and algorithmic trading. Specifically, I am looking for roles where I can:
- Learn from exceptional people: I want to work with quants who are better than me, who can teach me advanced techniques, challenge my thinking, and push me to grow.
- Contribute quickly: I am not looking for a passive learning experience. I want to add value from day one, whether that is building data pipelines, testing strategies, or improving existing models.
- Build real systems: I want to work on production trading systems, not just academic exercises. I want to see my code run in live markets and experience the challenges of reliability, latency, and risk management.
- Work in a culture of intellectual honesty: I want to join a team that values rigorous thinking, admits mistakes, and focuses on what actually works rather than what sounds impressive.
What I Bring:
- Proven ability to work with noisy, real-world time-series data
- Strong mathematical foundation and statistical thinking
- Excellent programming skills (Python, R, SQL, Java)
- Operational scalability and systematic process design
- Clear communication and teaching ability
- Leadership experience managing teams and projects
- Intellectual curiosity and commitment to continuous learning
- Rapid prototyping and iteration skills
If you evaluate candidates on signal over noise, I would love to talk. I can walk you through my code, explain my thinking, show you my models, and we can figure out if there is a fit. Reach me at kaghindani@dons.usfca.edu.
8. Personal Note: Who Am I Outside the Resume?
I believe strongly that who you are outside work matters just as much as your technical skills. Here is a bit about me beyond the projects and grades:
8.1 Tennis: Discipline, Focus, and Handling Pressure
I have played tennis for over 10 years. I competed at the national level in India and currently hold an NTRP 5.0+ rating in doubles, ranked #1 on the U.S. East Coast. Tennis is more than a hobby for me; it has been one of my most important teachers.
Tennis taught me how to focus intensely for extended periods, handle pressure in high-stakes situations, bounce back after losses, and maintain discipline in training even when progress feels slow. These lessons carry over directly to quantitative trading, where mental toughness, emotional control, and consistent execution matter just as much as analytical skills.
There is also a beautiful parallel between tennis strategy and trading strategy. In tennis, you must constantly assess probabilities (where is my opponent likely to hit?), adapt your game plan based on observed patterns, manage risk (when to go for winners vs. play safe), and stay disciplined in executing your strategy even under pressure. These are exactly the skills required in systematic trading.
8.2 Music: Pattern Recognition and Cultural Curiosity
I am a huge fan of reggaeton, and I have been listening to Bad Bunny for over 6 years. I can sing along to almost every song—despite not speaking Spanish. My friends find this hilarious, but to me, it represents curiosity and the willingness to engage deeply with things even when you don't fully understand them at first. This mindset has served me well in quantitative fields: you don't need to understand everything to start engaging with it, and deep engagement leads to understanding over time.
8.3 Values: What Matters to Me
If I had to summarize my core values, they would be:
- Intellectual honesty: Admit what you don't know. Be upfront about uncertainty and limitations.
- Systematic thinking: Build processes, measure outcomes, iterate based on evidence.
- Continuous learning: Never stop asking questions. Reach out to 500 people if that's what it takes to understand something.
- Helping others: Whether through teaching, mentoring, or building accessible programs like Forbes Student Research, I care about lifting others up.
- High standards: Do things right, even when it is harder. Quality matters.
9. Visual Portfolio Summary
9.1 Advanced Quantitative Trading Skills Matrix
- Mathematical Foundations
- Quantitative Programming & Libraries
- Quantitative Finance & Trading
9.2 Project Timeline
- Math Education Research: published paper on AI in mathematics education
- Forbes Student Research: founded the organization; 1,000+ applications, 125 selected
- ARIMA Air Quality Project: 6-year dataset, presented to city officials
- Teaching Assistant: CS111 (Java), DS100 (R), 80+ students
- DSA President: Data Science Association leadership
9.3 Advanced Quantitative Code Examples
Black-Scholes Option Pricing with Greeks
import numpy as np
from scipy.stats import norm

class BlackScholesPricer:
    def __init__(self, S, K, T, r, sigma):
        self.S = S          # Spot price
        self.K = K          # Strike price
        self.T = T          # Time to maturity (years)
        self.r = r          # Risk-free rate
        self.sigma = sigma  # Volatility

    def d1_d2(self):
        """Calculate d1 and d2 for the Black-Scholes formula"""
        d1 = (np.log(self.S / self.K) + (self.r + 0.5 * self.sigma**2) * self.T) / (self.sigma * np.sqrt(self.T))
        d2 = d1 - self.sigma * np.sqrt(self.T)
        return d1, d2

    def call_price(self):
        """European call option price"""
        d1, d2 = self.d1_d2()
        return self.S * norm.cdf(d1) - self.K * np.exp(-self.r * self.T) * norm.cdf(d2)

    def put_price(self):
        """European put option price"""
        d1, d2 = self.d1_d2()
        return self.K * np.exp(-self.r * self.T) * norm.cdf(-d2) - self.S * norm.cdf(-d1)

    def delta(self):
        """Call delta (the put delta is this value minus 1)"""
        d1, _ = self.d1_d2()
        return norm.cdf(d1)

    def gamma(self):
        """Gamma (identical for calls and puts)"""
        d1, _ = self.d1_d2()
        return norm.pdf(d1) / (self.S * self.sigma * np.sqrt(self.T))

    def theta(self):
        """Call theta, per year"""
        d1, d2 = self.d1_d2()
        return (-self.S * norm.pdf(d1) * self.sigma / (2 * np.sqrt(self.T))
                - self.r * self.K * np.exp(-self.r * self.T) * norm.cdf(d2))

# Example usage
bs = BlackScholesPricer(S=100, K=105, T=0.25, r=0.05, sigma=0.2)
print(f"Call Price: ${bs.call_price():.2f}")
print(f"Delta: {bs.delta():.4f}")
print(f"Gamma: {bs.gamma():.4f}")
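A standard sanity check on a Black-Scholes pricer is put-call parity, C − P = S − K·e^(−rT). The sketch below recomputes both prices independently with the same illustrative parameters as the example above and verifies the identity holds to machine precision.

```python
import numpy as np
from scipy.stats import norm

S, K, T, r, sigma = 100, 105, 0.25, 0.05, 0.2
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)

call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
put = K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Put-call parity: C - P should equal S - K*exp(-rT) exactly.
parity_gap = (call - put) - (S - K * np.exp(-r * T))
print(f"Call: {call:.4f}, Put: {put:.4f}, parity gap: {parity_gap:.2e}")
```

Parity holds regardless of the volatility input, so it catches formula bugs without needing a reference price.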
Monte Carlo VaR Calculation
import numpy as np
import pandas as pd
from scipy.stats import norm

class MonteCarloVaR:
    def __init__(self, returns, confidence_level=0.05):
        self.returns = returns
        self.confidence_level = confidence_level  # 0.05 -> 95% VaR
        self.mean_return = returns.mean()
        self.volatility = returns.std()

    def parametric_var(self, portfolio_value=1000000):
        """Parametric VaR assuming normally distributed returns"""
        z_score = norm.ppf(self.confidence_level)
        var = portfolio_value * (self.mean_return + z_score * self.volatility)
        return abs(var)

    def monte_carlo_var(self, portfolio_value=1000000, n_simulations=100000):
        """VaR from Monte Carlo simulation of one-period returns"""
        np.random.seed(42)
        # Generate random returns and the resulting portfolio values
        random_returns = np.random.normal(self.mean_return, self.volatility, n_simulations)
        portfolio_values = portfolio_value * (1 + random_returns)
        # VaR is the loss at the chosen percentile of simulated outcomes
        var_percentile = np.percentile(portfolio_values, self.confidence_level * 100)
        var = portfolio_value - var_percentile
        return var, portfolio_values

    def expected_shortfall(self, portfolio_value=1000000, n_simulations=100000):
        """Expected Shortfall (CVaR): average loss beyond the VaR threshold"""
        var, portfolio_values = self.monte_carlo_var(portfolio_value, n_simulations)
        losses_beyond_var = portfolio_values[portfolio_values < (portfolio_value - var)]
        return portfolio_value - losses_beyond_var.mean()

# Example with historical market data
returns = pd.read_csv('sp500_returns.csv')['returns']
var_calculator = MonteCarloVaR(returns, confidence_level=0.05)
parametric_var = var_calculator.parametric_var()
monte_carlo_var, _ = var_calculator.monte_carlo_var()
expected_shortfall = var_calculator.expected_shortfall()
print(f"Parametric VaR (95%): ${parametric_var:,.2f}")
print(f"Monte Carlo VaR (95%): ${monte_carlo_var:,.2f}")
print(f"Expected Shortfall (95%): ${expected_shortfall:,.2f}")
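Alongside the parametric and Monte Carlo approaches above, historical simulation is the usual third method: rank the observed returns directly and read the loss off the empirical percentile, with no distributional assumption. A minimal sketch on synthetic daily returns (the data here is an assumption, standing in for a real return series):

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.0005, 0.012, 2500)  # synthetic daily returns, ~10 years
portfolio_value = 1_000_000

# Historical VaR (95%): the loss at the 5th percentile of observed returns.
hist_var = -portfolio_value * np.percentile(returns, 5)
print(f"Historical VaR (95%): ${hist_var:,.2f}")
```

Comparing historical, parametric, and Monte Carlo estimates on the same data is a quick check on how sensitive the number is to the normality assumption.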
High-Frequency Trading Signal Generation
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
import talib

class HFTStrategyEngine:
    def __init__(self, data):
        self.data = data
        self.signals = pd.DataFrame(index=data.index)

    def generate_technical_signals(self):
        """Generate technical analysis signals"""
        # Moving averages
        self.signals['sma_20'] = talib.SMA(self.data['close'], timeperiod=20)
        self.signals['sma_50'] = talib.SMA(self.data['close'], timeperiod=50)
        # RSI
        self.signals['rsi'] = talib.RSI(self.data['close'], timeperiod=14)
        # MACD
        macd, macd_signal, macd_hist = talib.MACD(self.data['close'])
        self.signals['macd'] = macd
        self.signals['macd_signal'] = macd_signal
        self.signals['macd_hist'] = macd_hist
        # Bollinger Bands
        bb_upper, bb_middle, bb_lower = talib.BBANDS(self.data['close'])
        self.signals['bb_upper'] = bb_upper
        self.signals['bb_lower'] = bb_lower
        return self.signals

    def generate_momentum_signals(self):
        """Generate momentum-based signals"""
        # Price momentum
        self.signals['momentum_5'] = self.data['close'].pct_change(5)
        self.signals['momentum_10'] = self.data['close'].pct_change(10)
        # Volume-weighted average price over a rolling 20-bar window
        self.signals['vwap'] = ((self.data['close'] * self.data['volume']).rolling(20).sum()
                                / self.data['volume'].rolling(20).sum())
        self.signals['price_vs_vwap'] = (self.data['close'] - self.signals['vwap']) / self.signals['vwap']
        # Volatility
        self.signals['volatility'] = self.data['close'].rolling(20).std()
        return self.signals

    def generate_ml_signals(self):
        """Generate machine-learning-based signals"""
        # Feature engineering
        features = pd.DataFrame(index=self.data.index)
        features['returns'] = self.data['close'].pct_change()
        features['volume_ratio'] = self.data['volume'] / self.data['volume'].rolling(20).mean()
        features['high_low_ratio'] = (self.data['high'] - self.data['low']) / self.data['close']
        # Technical indicators as features
        features['rsi'] = talib.RSI(self.data['close'])
        features['macd'] = talib.MACD(self.data['close'])[0]
        # Target variable (next-period return)
        features['target'] = features['returns'].shift(-1)
        features = features.dropna()
        # Split features and target
        X = features.drop('target', axis=1)
        y = features['target']
        # Train a Random Forest (note: fit and predicted in-sample here)
        rf_model = RandomForestRegressor(n_estimators=100, random_state=42)
        rf_model.fit(X, y)
        self.signals['ml_signal'] = pd.Series(rf_model.predict(X), index=features.index)
        return self.signals

    def combine_signals(self):
        """Combine all signals into trading decisions"""
        self.generate_technical_signals()
        self.generate_momentum_signals()
        self.generate_ml_signals()
        # Weighted vote across RSI, MACD, momentum, and the ML signal
        self.signals['combined_signal'] = (
            0.3 * np.where(self.signals['rsi'] < 30, 1, np.where(self.signals['rsi'] > 70, -1, 0)) +
            0.3 * np.where(self.signals['macd'] > self.signals['macd_signal'], 1, -1) +
            0.2 * np.where(self.signals['momentum_5'] > 0.02, 1,
                           np.where(self.signals['momentum_5'] < -0.02, -1, 0)) +
            0.2 * np.sign(self.signals['ml_signal'])
        )
        return self.signals

# Example usage
market_data = pd.read_csv('market_data.csv', index_col=0, parse_dates=True)
strategy = HFTStrategyEngine(market_data)
signals = strategy.combine_signals()
print("Trading signals generated successfully!")
print(f"Signal distribution: {signals['combined_signal'].value_counts()}")
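One caveat on the ML signal above: the Random Forest is trained and evaluated on the same rows, so its predictions are in-sample. A hedged sketch of the fix is a chronological train/test split, fitting only on the past and predicting only the future. The feature names, synthetic data, and 70/30 split here are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 600
features = pd.DataFrame({
    "returns": rng.normal(0, 0.01, n),
    "volume_ratio": rng.lognormal(0, 0.2, n),
})
features["target"] = features["returns"].shift(-1)  # next-period return
features = features.dropna()

# Chronological split: never shuffle time-series rows.
split = int(len(features) * 0.7)
train, test = features.iloc[:split], features.iloc[split:]

model = RandomForestRegressor(n_estimators=50, random_state=42)
model.fit(train.drop(columns="target"), train["target"])
preds = model.predict(test.drop(columns="target"))
print(f"Out-of-sample predictions: {len(preds)}")
```

Rolling this split forward window by window gives walk-forward validation, the standard way to estimate how such a signal would have performed live.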
10. Conclusion: Ready to Contribute
This portfolio represents three years of systematic skill-building across quantitative research, teaching, and leadership. I have worked with noisy time-series data, built scalable operations, taught complex concepts to diverse audiences, led teams, and continuously sought to improve my understanding of quantitative trading.
I am not claiming to be an expert. I am claiming to be someone with a strong foundation, proven learning ability, operational discipline, and genuine passion for quantitative trading. I am ready to contribute, learn from exceptional people, and build a career in this field.
If you are building a team that values rigorous thinking, systematic evaluation, and intellectual honesty, let's talk. I will show you my work, explain my thinking, and we can figure out if there is a fit.
Contact: kaghindani@dons.usfca.edu | GitHub | LinkedIn
References
- Ghindani, K. (2022). Evolutionary Perspective for Developing Beyond Abacus. ResearchGate.
- Ghindani, K., Moghe, M.S., & Prakash, M. (2023). Maths & AI Parallel yet Intersecting. ResearchGate.
- Ghindani, K., & Kumar, S. (2024). Air Quality Forecasting Using ARIMA Time-Series Models: A Case Study of Indore, India. Research Project, Indian Institute of Management Indore. Presented to Indore Municipal Corporation.
- Ghindani, K. (2025). Forbes Student Research: Building Merit-Based Research Access for High School Students. Founder's Report.
- Ghindani, K. (2025). Real-Time Housing Market Prediction System. Data Science Association, University of San Francisco.
- Ghindani, K. (2025). Teaching Assistant Performance Report: CS111 and DS100. University of San Francisco.
- University of San Francisco Data Science Association. (2025). Annual Report: Technical Projects and Industry Partnerships.
- Ghindani, K. (2025). Longitudinal Database Construction for Economic and Sociological Research. Research Assistant Report, Economics Department, University of San Francisco.
Paper Information
- Pages: 18
- Word Count: 8,500+
- Research Projects: 6
- Published Papers: 2 (ResearchGate)
- Teaching Experience: 3 courses, 80+ students
- Leadership Roles: 5
- Data Points: 6 years AQI, 1,000+ applications
- Outreach: 500+ professionals contacted
- References: 8
- Last Updated: January 2025