Arif committed on
Commit 59d62b1 · 1 Parent(s): 947a915

Updated readme

Files changed (1)
  1. README.md +1 -27
README.md CHANGED
@@ -48,30 +48,6 @@ The **RAG Observability Platform** is a production-grade Retrieval-Augmented Gen
48
 
49
  ---
50
 
51
- ## How to Frame This in Your Resume
52
-
53
- ### Option 1: Technical Project Statement (Comprehensive)
54
- **RAG Observability Platform** – Senior AI Engineer Portfolio Project
55
- *Technologies: Python, MLX, LangChain, Docker, MLflow, Hugging Face Spaces, ChromaDB*
56
-
57
- Engineered a hybrid RAG platform combining local Apple Silicon optimization with cloud deployment:
58
- - Developed custom MLX LLM wrapper for LangChain LCEL, achieving 50+ tokens/sec inference on M4 GPU (vs. 5-10 on CPU)
59
- - Implemented cross-platform device detection, enabling automatic fallback from MPS (Mac) to CPU (Linux)
60
- - Built production-grade ingestion pipeline with experiment tracking via MLflow on Dagshub
61
- - Containerized application with Docker for HF Spaces deployment; optimized Python 3.12 base image to resolve dependency conflicts
62
- - Managed complex dependency isolation using UV package manager (excluding MLX from cloud builds)
63
-
64
- **Impact:** Demonstrates full-stack ML deployment: optimization, observability, and reproducibility across environments.
65
-
66
- ---
67
-
68
- ### Option 2: Concise Resume Bullet
69
- **Hybrid RAG Platform (Python, MLX, LangChain, Docker, MLflow)**
70
- - Built and deployed a full-stack RAG system leveraging Apple Silicon GPU locally (MLX) and scaling to cloud (HF Spaces)
71
- - Integrated MLflow experiment tracking with Dagshub for centralized observability and version control
72
- - Implemented fallback inference logic to maintain functionality across platforms (MPS → CPU)
73
-
74
- ---
75
 
76
  ### Option 3: For a Data Science/ML Portfolio Section
77
  **"RAG Observability Platform"** – *Demonstrates MLOps maturity and cross-platform ML engineering*
@@ -81,7 +57,7 @@ Engineered a hybrid RAG platform combining local Apple Silicon optimization with
81
 
82
  ---
83
 
84
- ## Key Learnings to Highlight in Interviews
60
+ ## Key Highlight
85
 
86
  1. **GPU Optimization**: Understand when to use specialized tools (MLX for Apple Silicon) vs. standard libraries (PyTorch)
87
  2. **Cross-Platform Development**: Device abstraction, graceful fallbacks, testing on multiple architectures
@@ -128,8 +104,6 @@ rag-observability-platform/
128
 
129
  ---
130
 
131
- ## Interview Talking Points
132
-
133
  1. **"Why MLX instead of PyTorch?"**
134
  - MLX is optimized for Apple Silicon; PyTorch CPU mode is 10x slower on M4
135
 
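For reference, the "custom MLX LLM wrapper for LangChain LCEL" mentioned in the removed bullets can be sketched roughly as follows. This is a minimal illustration, assuming LangChain's custom-LLM base class (`langchain_core.language_models.llms.LLM`) and the `mlx_lm` `load`/`generate` helpers; the class name `MLXLLM`, its fields, and the model identifier are hypothetical, not taken from the repository.

```python
# Minimal sketch (illustrative, not the repository's actual wrapper):
# wrap an mlx_lm model so it can be used as a LangChain LLM in LCEL chains.
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM
from mlx_lm import generate, load  # Apple-Silicon-only dependency


class MLXLLM(LLM):
    """LangChain-compatible wrapper around a loaded MLX model (hypothetical name)."""

    mlx_model: Any        # model returned by mlx_lm.load()
    mlx_tokenizer: Any    # tokenizer returned by mlx_lm.load()
    max_tokens: int = 256

    @property
    def _llm_type(self) -> str:
        return "mlx"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,
    ) -> str:
        # Delegate generation to mlx_lm on the Apple Silicon GPU.
        # Stop sequences are ignored in this minimal sketch.
        return generate(self.mlx_model, self.mlx_tokenizer, prompt=prompt, max_tokens=self.max_tokens)


# Usage sketch (model name is a placeholder):
#   model, tokenizer = load("mlx-community/<some-quantized-model>")
#   llm = MLXLLM(mlx_model=model, mlx_tokenizer=tokenizer)
#   chain = prompt_template | llm   # plugs into an LCEL pipeline like any other LLM
```

Because only `_call` and `_llm_type` are implemented, such a wrapper drops into an LCEL pipeline the same way a built-in LangChain LLM would.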
 
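The "automatic fallback from MPS (Mac) to CPU (Linux)" bullet comes down to a device check along these lines; the sketch assumes PyTorch-style detection, and the helper name `pick_device` is illustrative.

```python
# Illustrative device-selection fallback: prefer Apple's Metal backend (MPS)
# on a Mac, fall back to CPU elsewhere (e.g. a Linux container on HF Spaces).
import torch


def pick_device() -> torch.device:  # hypothetical helper name
    if torch.backends.mps.is_available():
        return torch.device("mps")  # Apple Silicon GPU
    return torch.device("cpu")      # portable fallback


device = pick_device()
print(f"Running inference on: {device}")
```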
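Likewise, "experiment tracking via MLflow on Dagshub" amounts to pointing MLflow's tracking URI at the Dagshub-hosted server; the URI, credentials, and logged values below are placeholders rather than the project's actual configuration.

```python
# Sketch of pointing MLflow at a Dagshub-hosted tracking server
# (URI, credentials, and logged values are placeholders).
import os

import mlflow

os.environ.setdefault("MLFLOW_TRACKING_USERNAME", "<dagshub-username>")
os.environ.setdefault("MLFLOW_TRACKING_PASSWORD", "<dagshub-token>")
mlflow.set_tracking_uri("https://dagshub.com/<user>/<repo>.mlflow")

with mlflow.start_run(run_name="ingestion-pipeline"):
    mlflow.log_param("chunk_size", 512)       # example ingestion parameter
    mlflow.log_metric("docs_indexed", 1234)   # example pipeline metric
```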