Alina Lozovskaya committed on
Commit a871e10 · 1 Parent(s): f27b243

Add architecture to README

README.md CHANGED
@@ -4,6 +4,14 @@ Conversational demo for the Reachy Mini robot combining OpenAI's realtime APIs,
 
 ![Reachy Mini Dance](src/reachy_mini_conversation_demo/images/reachy_mini_dance.gif)
 
+## Architecture
+
+The demo follows a layered architecture connecting the user, AI services, and robot hardware:
+
+<p align="center">
+<img src="src/reachy_mini_conversation_demo/images/conversation_demo_arch.svg" alt="Architecture Diagram" width="600"/>
+</p>
+
 ## Overview
 - Real-time audio conversation loop powered by the OpenAI realtime API and `fastrtc` for low-latency streaming.
 - Vision processing uses gpt-realtime by default (when camera tool is used), with optional local vision processing using SmolVLM2 model running on-device (CPU/GPU/MPS) via `--local-vision` flag.
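
The architecture section added above stays at the diagram level. As a rough illustration only, the Python sketch below mirrors that layering; every name in it (`UserAudioLayer`, `AIServiceLayer`, `RobotLayer`, `conversation_step`) is a hypothetical placeholder, not an identifier from this repository, and the real demo wires these layers through the OpenAI realtime API, `fastrtc`, and the Reachy Mini hardware rather than print statements.

```python
# Hypothetical sketch of the layered flow described above. All names here
# (UserAudioLayer, AIServiceLayer, RobotLayer, conversation_step) are
# illustrative placeholders, not identifiers from the demo's source code.


class UserAudioLayer:
    """User-facing layer: captures microphone audio and plays back replies."""

    def capture_chunk(self) -> bytes:
        # The real demo streams audio with low latency (e.g. via fastrtc);
        # returning silence keeps this sketch runnable on its own.
        return b"\x00" * 320

    def play_chunk(self, audio: bytes) -> None:
        print(f"[user] playing {len(audio)} bytes of reply audio")


class AIServiceLayer:
    """AI services layer: stands in for the realtime speech-to-speech model."""

    def respond(self, audio_in: bytes) -> tuple[bytes, str | None]:
        # A real implementation would stream audio to the realtime API and
        # might return a tool call (e.g. a gesture) alongside reply audio.
        return audio_in, "nod"


class RobotLayer:
    """Hardware layer: turns tool calls into Reachy Mini motions."""

    def execute(self, action: str) -> None:
        print(f"[robot] executing action: {action}")


def conversation_step(user: UserAudioLayer, ai: AIServiceLayer, robot: RobotLayer) -> None:
    """One pass through the layers: user -> AI services -> robot -> user."""
    chunk = user.capture_chunk()
    reply_audio, action = ai.respond(chunk)
    if action is not None:
        robot.execute(action)
    user.play_chunk(reply_audio)


if __name__ == "__main__":
    conversation_step(UserAudioLayer(), AIServiceLayer(), RobotLayer())
```

The only point of the sketch is the direction of data flow shown in the diagram: audio goes up to the AI services layer, tool calls come back down to the hardware layer, and reply audio returns to the user.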
src/reachy_mini_conversation_demo/images/conversation_demo_arch.svg ADDED

Git LFS Details

  • SHA256: 1267dbdf98b206599108accb20a6c38f724b6d96209d6ea72b9aa6c56cee9670
  • Pointer size: 131 Bytes
  • Size of remote file: 122 kB
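
The Overview bullet in the README diff above mentions optional on-device vision (CPU/GPU/MPS) behind a `--local-vision` flag. The snippet below is a minimal sketch of how such a flag and device choice could be handled, assuming PyTorch is installed; the argument parsing and function names are illustrative and are not taken from the demo's actual CLI.

```python
import argparse

import torch


def pick_device() -> str:
    """Best-effort device selection for on-device vision (CPU/GPU/MPS)."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"


def main() -> None:
    parser = argparse.ArgumentParser(description="Vision backend selection sketch")
    # Hypothetical flag mirroring the README's `--local-vision` option.
    parser.add_argument(
        "--local-vision",
        action="store_true",
        help="run vision locally instead of through gpt-realtime",
    )
    args = parser.parse_args()

    if args.local_vision:
        device = pick_device()
        print(f"Would load a local VLM (e.g. SmolVLM2) on: {device}")
    else:
        print("Would forward camera frames to the realtime API for vision.")


if __name__ == "__main__":
    main()
```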