AI Operations Lab · AI Product & Business Analyst

Project systems

Case studies built around business questions, data systems, and decision outputs.

Each project supports links to GitHub, Google Sheets, SQL files, Python scripts, dashboards, documents, and future product pages.

Flagship system

BudgetDB Operations Analytics Warehouse

Flagship

A Postgres-backed operations analytics system that turns budget, vendor, software spend, and headcount data into trusted executive views.

Business question

Where is operational spend concentrated, and how should leadership prioritize the next planning cycle?

Creates a reusable decision layer: cleaned tables, composable views, spend allocation, QA checks, and executive-ready outputs.

PostgreSQL · Budget Analysis · Operations Analytics · Executive Reporting
Open system profile
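The concentration question above can be sketched in pandas. This is an illustrative sketch only: the category names and spend figures below are made-up assumptions, not warehouse data, and the production system answers this with Postgres views.

```python
import pandas as pd

# Hypothetical spend-by-category totals; the real system reads these
# from cleaned Postgres tables and views.
spend = pd.DataFrame({
    "cost_category": ["Software", "Vendors", "Headcount", "Facilities"],
    "total_spend_2025": [420_000, 310_000, 1_250_000, 180_000],
})

# Rank categories and compute each one's cumulative share of total spend,
# which shows where spend is concentrated for the next planning cycle.
spend = spend.sort_values("total_spend_2025", ascending=False).reset_index(drop=True)
spend["share"] = spend["total_spend_2025"] / spend["total_spend_2025"].sum()
spend["cumulative_share"] = spend["share"].cumsum()
print(spend)
```

Sorting before the cumulative sum is what makes the concentration readable: the first rows of the output are the categories leadership should look at first.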

Dashboards

Executive KPI Dashboards

Live

Dashboard views for contact center performance, revenue mix, product trends, and executive operating reviews.

Business question

What does leadership need to see quickly to understand performance, variance, and where to investigate?

Makes KPI patterns easier to scan and turns analysis into a review-ready operating surface.

Tableau · KPIs · Revenue · Support Operations
Open system profile
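The variance logic behind a review-ready dashboard can be sketched in pandas. The KPI names, figures, and the 5% flag threshold below are illustrative assumptions; the live dashboards are built in Tableau.

```python
import pandas as pd

# Hypothetical KPI actuals vs. plan; real values come from governed sources.
kpis = pd.DataFrame({
    "kpi": ["Revenue", "Support Cost", "Handle Time (min)"],
    "actual": [1_030_000, 215_000, 6.2],
    "plan": [1_000_000, 200_000, 5.5],
})

# Variance and variance % flag which KPIs leadership should investigate first.
kpis["variance"] = kpis["actual"] - kpis["plan"]
kpis["variance_pct"] = kpis["variance"] / kpis["plan"]
kpis["flag"] = kpis["variance_pct"].abs() > 0.05  # illustrative 5% threshold
print(kpis)
```

The flag column is the "where to investigate" answer in miniature: small variances stay quiet, large ones surface for the operating review.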

Analytics engineering

Cost Model QA Checks

Live

SQL validation checks comparing source totals to modeled fact totals by cost category, with variance and pass/fail status.

Business question

How can messy operational data become reliable product and business reporting we can trust?

Adds confidence gates before dashboards and executive summaries are used for decisions.

SQL · PostgreSQL · Data QA · Reconciliation
Open system profile
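The pass/fail reconciliation pattern can be mirrored in pandas. The categories, totals, and 1% tolerance below are illustrative assumptions; the production checks run as SQL against Postgres.

```python
import pandas as pd

# Hypothetical source totals vs. modeled fact totals by cost category.
source = pd.DataFrame({"cost_category": ["Software", "Vendors"],
                       "source_total": [420_000.0, 310_000.0]})
modeled = pd.DataFrame({"cost_category": ["Software", "Vendors"],
                        "fact_total": [420_000.0, 302_000.0]})

# Join on category, compute variance, and gate on a small tolerance:
# only categories whose modeled total matches the source total pass.
checks = source.merge(modeled, on="cost_category")
checks["variance"] = checks["fact_total"] - checks["source_total"]
checks["status"] = checks["variance"].abs().le(0.01 * checks["source_total"]).map(
    {True: "PASS", False: "FAIL"}
)
print(checks)
```

A FAIL row blocks the downstream dashboard refresh until the variance is explained, which is the confidence gate the card above describes.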

Python modeling

Customer Churn Predictive Analysis

Live

A reproducible Python workflow for churn analysis: loading, target conversion, missing-value handling, encoding, baseline modeling, and reporting.

Business question

Which customers are most likely to churn, and how can we monitor the pattern repeatedly?

Frames predictive analysis as a repeatable business workflow instead of a one-off notebook.

Python · Pandas · Scikit-learn · Predictive Modeling
Open system profile

Prototype in development

Chorus Agent Learning App

In development

An iPhone app and web landing-page concept for learning AI agent fundamentals, comparing platforms, and saving reusable skills.

Business question

How can people learn agent workflows, compare the right platforms, and reuse practical skills in one place?

Shows product thinking beyond portfolio work: mobile UX, content model, Supabase schema, launch plan, web page, and launch video assets.

AI Product · iPhone Prototype · Agent Education · Supabase
Open system profile

Code evidence

Reusable code blocks provide the primary evidence for each project.

Screenshots are used only as supporting visuals. The primary project evidence is SQL/Python logic, notes, and linked source artifacts.

BudgetDB snippets

```sql
CREATE VIEW analytics.v_software_cost_per_employee_company_2025 AS
WITH employee_count AS (
  SELECT COUNT(*)::int AS total_employees_2025
  FROM analytics.dim_employee
  WHERE (start_date IS NULL OR start_date <= '2025-12-31'::date)
    AND (end_date IS NULL OR end_date >= '2025-01-01'::date)
),
company_spend AS (
  SELECT SUM(COALESCE(total_spend_2025, 0))::numeric AS total_software_spend_2025
  FROM analytics.vendors_2025_clean
)
SELECT
  c.total_software_spend_2025,
  e.total_employees_2025,
  ROUND(c.total_software_spend_2025 / NULLIF(e.total_employees_2025, 0), 2)
    AS software_cost_per_employee_2025
FROM company_spend c
CROSS JOIN employee_count e;
```

Churn workflow snippets

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder


def build_model(data: pd.DataFrame) -> None:
    data = data.copy()
    data["Churn"] = data["Churn"].replace({"No": 0, "Yes": 1})

    X = data.drop(columns=["customerID", "Churn"])
    y = data["Churn"]

    # Split feature names by dtype so categorical and numeric columns
    # are routed to the right preprocessing step.
    cat_cols = X.select_dtypes(include="object").columns.tolist()
    numeric_cols = X.select_dtypes(exclude="object").columns.tolist()

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )

    preprocess = ColumnTransformer(
        transformers=[
            ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
            ("num", "passthrough", numeric_cols),
        ]
    )

    # Linear-probability baseline; predictions are clipped into [0, 1].
    pipe = Pipeline(steps=[("prep", preprocess), ("model", LinearRegression())])
    pipe.fit(X_train, y_train)
    pred = np.clip(pipe.predict(X_test), 0, 1)
```