import streamlit as st


def display_projects():
    st.title('My Projects')

    # Define tab titles
    tab_titles = [
        "Semantic Video Search",
        "Video-QA System",
        "AdHubby: AI-Powered Marketing Campaign Generator",
        "Resume & CV Crafter",
        "Multi-Agent Job Search",
        "Resume Easz",
        "Job Easz",
        "Bitcoin Lightning Optimization",
        "National Infrastructure Monitoring",
        "Stock Market Analysis",
        "Twitter Trend Analysis",
        "Restaurant Recommendation",
        "ASL Translator",
        "Squat Easy"
    ]

    # Create tabs
    tabs = st.tabs(tab_titles)

    # Add content to each tab
    with tabs[0]:
        st.subheader("🏆 Semantic Video Search")
        st.markdown("**Winner - Liquid AI Hackathon** | Nov 2025")
        st.markdown("""
- **Description**: Multimodal app for AI-driven video analysis and semantic search using LiquidAI's LFM2 models
- **Key Features**:
  • Frame-by-frame analysis with LFM2-VL-450M
  • Smart filtering to skip redundant frames
  • Semantic search with transformer embeddings
  • Clip extraction around matched frames
  • YouTube URL + video upload support
- **Technical Implementation**:
  • Temporal context modeling (t-1, t, t+1)
  • 384-d sentence transformer embeddings (MiniLM-L6-v2)
  • Top-K retrieval with cosine similarity
  • Real-time progress tracking
- **Technologies**: LFM2-VL-450M, MiniLM-L6-v2, PyTorch, OpenCV, Streamlit, yt-dlp
- **Skills**: Computer Vision, Vision-Language Models, Semantic Search
""")
    # Video-QA System
    with tabs[1]:
        st.subheader("Video-QA System")
        st.markdown("**E2B + Groq + MCP + Notion** | Nov 2025")
        st.markdown("""
- **Description**: Video Question-Answering system that processes videos in an E2B sandbox, generates frame summaries via a Groq-hosted VLM, and stores the resulting knowledge in Notion through MCP
- **Key Features**:
  • Frame extraction + VLM analysis in an isolated E2B VM
  • MCP integration for structured video knowledge storage
  • Chat interface for context-based Q&A
  • Timestamp-accurate answers from video context
  • YouTube links + local upload support
- **Motivation**: Makes complex YouTube walkthroughs easy to revisit by turning videos into searchable, chat-ready knowledge bases
- **Technologies**: E2B, Groq API, Model Context Protocol (MCP), Notion API, VLM
- **Demo**: [YouTube Demo](https://youtu.be/u08LN_Gh4HE)
- **Reference**: [GitHub](https://github.com/niharpalem/VideoQA)
""")
    with tabs[2]:
        st.subheader("AdHubby: AI-Powered Marketing Campaign Generator")
        st.markdown("""
- **Description**: Multi-agent multimodal AI system that creates comprehensive marketing campaigns by combining strategic analysis, visual generation, and local landmark integration for small businesses
- **Key Features**:
  • Smart Marketing Intelligence with AI-powered briefing generation and competitive analysis
  • 9 Professional Marketing Formats (social carousels, billboards, email newsletters, print brochures, etc.)
  • Local Integration automatically incorporating 5 relevant landmarks into marketing visuals
  • Visual Style Engine with 5 artistic styles (Anime, Cartoon, Minimalist, Watercolor, Vintage)
- **Technical Achievements**:
  • Engineered multi-agent architecture: Strategic Agent (Tenstorrent-powered Llama-3.1-8B accessed through Koyeb's cloud platform), Visual Agent (Stable Diffusion XL), QA Agent (LLM-as-judge)
  • Implemented advanced prompting with domain context injection and a two-stage LLM judging system
  • Built comprehensive workflow delivering ready-to-deploy marketing materials in under 8 hours (hackathon timeframe)
  • Developed innovative geo-targeted visual asset generation for location-based marketing
- **Technologies**: Streamlit, Tenstorrent (via Koyeb), Llama-3.1-8B, Stable Diffusion XL, Hugging Face API, Multi-Agent Architecture
- **Achievement**: 🏆 **Tenstorrent Multi-Agent Hackathon Recognition** - Built in **5** hours
- **Reference**: [GitHub Repository](https://github.com/niharpalem/AdHubby.com)
""")
    with tabs[3]:
        st.subheader("LLM-powered Resume & CV Crafter")
        st.markdown("""
- **Description**: Developed AI platform combining LLaMA-3 70B and Deepseek R1 with low-temperature settings for stable, tailored resume and CV generation
- **Key Features**:
  • Smart Matching Algorithm analyzing profiles against job requirements
  • LaTeX-Powered Resumes with professional formatting
  • Automated 4-paragraph Cover Letter Generation
  • Performance Metrics evaluating match quality
- **Technical Achievements**:
  • Implemented dual-agent architecture: LLaMA-3 8B for profile analysis and 70B for LaTeX generation
  • Engineered JSON schema validation system for error-free template integration
  • Achieved 5,000+ LinkedIn impressions with 80% reduction in creation time
- **Technologies**: Streamlit, GROQ API (LLaMA-3 70B), LaTeX, JSON Schema
- **Reference**: [Link to Project](https://huggingface.co/spaces/Niharmahesh/Resume_and_CV_crafter)
""")
    with tabs[4]:
        st.subheader("Multi-Agent Job Search System")
        st.markdown("""
- **Description**: Built an AI-powered job search assistant using dual-LLaMA architecture for comprehensive job matching and analysis
- **Key Features**:
  • Real-time scraping across LinkedIn, Glassdoor, Indeed, ZipRecruiter
  • Advanced resume parsing and job matching
  • Intelligent compatibility scoring system
- **Technical Achievements**:
  • Developed batch processing pipeline handling 60+ positions/search
  • Reduced job search time by 80% through accurate matching
  • Implemented specialized agents for input processing, scraping, and analysis
- **Technologies**: GROQ API, jobspy, Streamlit, Pandas, LLMOps
- **Reference**: [Link to Project](https://huggingface.co/spaces/Niharmahesh/Multi_Agent_Job_search_and_match)
""")
    with tabs[5]:
        st.subheader("Resume Easz")
        st.markdown("""
- **Description**: Created an AI-driven resume analysis and enhancement tool using LLaMA 3.3 model
- **Key Features**:
  • Quick and in-depth resume analysis options
  • Comprehensive skill gap analysis
  • ATS compatibility optimization
  • Multiple output formats (DOCX, HTML, TXT)
- **Technical Implementation**:
  • Integrated GROQ API for advanced language processing
  • Built visual diff system for resume changes
  • Developed custom prompt engineering pipeline
- **Technologies**: GROQ API, Streamlit, Python, LLM
- **Reference**: [Link to Project](https://resume-easz.streamlit.app/)
""")
    with tabs[6]:
        st.subheader("Job Easz")
        st.markdown("""
- **Description**: Engineered comprehensive job aggregation platform for data roles with advanced analytics
- **Technical Achievements**:
  • Designed Airflow pipeline with exponential backoff retry (120-480s intervals)
  • Optimized concurrent processing, reducing runtime from 2h to 40min
  • Processes ~3000 daily job listings across various data roles
- **Key Features**:
  • Daily updates with comprehensive job role coverage
  • Custom filtering by role and location
  • Interactive dashboard for market trends
  • Automated ETL pipeline
- **Technologies**: Python, Airflow, ThreadPoolExecutor, Hugging Face Datasets
- **Reference**: [Link to Project](https://huggingface.co/spaces/Niharmahesh/job_easz)
""")
    with tabs[7]:
        st.subheader("Bitcoin Lightning Path Optimization")
        st.markdown("""
- **Description**: Advanced payment routing optimization system for the Bitcoin Lightning Network
- **Technical Achievements**:
  • Developed ML classifiers achieving 98.77-99.10% accuracy
  • Implemented tri-model consensus system for optimal routing
  • Engineered ensemble models with 0.98 F1-scores
- **Implementation Details**:
  • Created simulation environment for multi-channel transactions
  • Optimized graph-based algorithms for payment routing
  • Integrated with Lightning payment interceptor
- **Technologies**: XGBoost, Random Forest, AdaBoost, Graph Algorithms
""")
    with tabs[8]:
        st.subheader("National Infrastructure Monitoring")
        st.markdown("""
- **Description**: Developed satellite imagery analysis system for infrastructure change detection
- **Technical Achievements**:
  • Fine-tuned ViT+ResNet-101 ensemble on 40GB satellite dataset
  • Achieved 85% accuracy in change detection
  • Implemented 8 parallel GPU threads for enhanced performance
- **Key Features**:
  • Temporal analysis with 1km resolution
  • Interactive map interface with bounding box selection
  • Automatic image chipping for 256x256 inputs
  • Contrast adjustment optimization
- **Technologies**: Change ViT Model, Google Earth Engine, PyTorch, Computer Vision
- **Reference**: [Link to Project](https://huggingface.co/spaces/Niharmahesh/Data298)
""")
    with tabs[9]:
        st.subheader("Stock Market Analysis with OpenAI Integration")
        st.markdown("""
- **Description**: Created comprehensive stock market analysis system with multilingual capabilities
- **Technical Achievements**:
  • Built Spark streaming pipeline with 30% efficiency improvement
  • Orchestrated Airflow Docker pipeline for Snowflake integration
  • Developed bilingual GPT-3.5 chatbot for SQL query generation
- **Key Features**:
  • Real-time financial metric calculations
  • Custom indicator generation
  • Multilingual query support
  • Automated data warehousing
- **Technologies**: PySpark, Apache Airflow, Snowflake, OpenAI GPT-3.5
""")
    with tabs[10]:
        st.subheader("Twitter Trend Analysis")
        st.markdown("""
- **Description**: Engineered comprehensive Twitter analytics platform using GCP services
- **Technical Achievements**:
  • Developed GCP pipeline processing 40k tweets
  • Achieved 40% efficiency improvement through custom Airflow operators
  • Implemented real-time trend analysis algorithms
- **Key Features**:
  • Automated ETL workflows
  • Interactive Tableau dashboards
  • Viral metrics tracking
  • Engagement rate calculations
- **Technologies**: Google Cloud Platform, BigQuery, Apache Airflow, Tableau
""")
    with tabs[11]:
        st.subheader("Restaurant Recommendation System")
        st.markdown("""
- **Description**: Built hybrid recommendation system combining multiple filtering approaches
- **Technical Achievements**:
  • Created hybrid TF-IDF and SVD-based filtering system
  • Achieved 43% improvement in recommendation relevance
  • Reduced computation time by 65%
- **Key Features**:
  • Location-based suggestions
  • Personalized recommendations
  • Interactive web interface
  • Efficient matrix factorization
- **Technologies**: Collaborative Filtering, Content-Based Filtering, Flask, Folium
""")
    with tabs[12]:
        st.subheader("ASL Translator")
        st.markdown("""
- **Description**: Developed real-time American Sign Language translation system
- **Technical Achievements**:
  • Achieved 95% accuracy in real-time gesture interpretation
  • Implemented adaptive hand skeleton GIF generator
  • Optimized MediaPipe integration for point detection
- **Key Features**:
  • Real-time hand tracking
  • Visual feedback system
  • Intuitive gesture recognition
  • Accessible interface
- **Technologies**: MediaPipe Hand Detection, Random Forest, Hugging Face Platform
- **Reference**: [Link to Project](https://huggingface.co/spaces/Niharmahesh/slr-easz)
""")
    with tabs[13]:
        st.subheader("Squat Easy")
        st.markdown("""
- **Description**: Developed deep learning system for squat form analysis and error detection
- **Technical Achievements**:
  • Engineered custom BiLSTM architecture in PyTorch
  • Achieved 81% training and 75% test accuracy
  • Implemented CUDA-based GPU acceleration
- **Key Features**:
  • Real-time form analysis
  • Six-type error classification
  • Video processing pipeline
  • Performance optimization
- **Technologies**: PyTorch, BiLSTM, CUDA, Object-Oriented Programming
- **Reference**: [Link to Project](https://github.com/niharpalem/squateasy_DL)
""")