
I’ve been working on a wearable project and finally got a rough prototype working. Code for the ESP32 firmware and the Android app is on GitHub: https://github.com/ob1ong/ESP32-LLM-internal-monolouge-BLE-
The setup:
- ESP32 camera on glasses (low-res JPEG capture)
- Sends images to an Android app
- The app calls OpenAI with a prompt
- The response is spoken back using TTS
- Runs in a loop every few seconds
- The app can remotely put the ESP32 into deep sleep
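For illustration, here's a minimal sketch of what that phone-side loop might look like. The hooks `captureJpeg`, `askModel`, and `speak` are placeholders for the camera fetch, the OpenAI call, and Android TTS; none of them are the repo's actual names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical sketch of the capture → AI → voice loop on the phone.
public class MonologueLoop {
    public static void run(long intervalMs, int iterations,
                           String prompt,
                           Supplier<byte[]> captureJpeg,
                           BiFunction<String, byte[], String> askModel,
                           Consumer<String> speak) {
        for (int i = 0; i < iterations; i++) {
            byte[] jpeg = captureJpeg.get();           // fetch a frame from the glasses
            if (jpeg != null && jpeg.length > 0) {
                speak.accept(askModel.apply(prompt, jpeg)); // image → AI → voice
            }
            try {
                Thread.sleep(intervalMs);              // configurable interval
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```

Keeping the camera, model, and TTS behind plain function interfaces like this also makes the loop trivial to test without hardware.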
Current features:
- Configurable prompt + interval
- BLE/Wi-Fi-based communication
- Live capture endpoint (/capture)
- Phone handles all AI + audio
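Since the phone handles the AI call, the request is just a chat-completions payload with the JPEG inlined as base64. A rough sketch of how that body could be built — the JSON shape follows OpenAI's public vision docs, but the class and model name are my own placeholders, and real code should JSON-escape the prompt or use a JSON library:

```java
import java.util.Base64;

// Hypothetical sketch of the request body POSTed to the OpenAI
// chat completions endpoint, with the camera frame inlined as a
// base64 data URL. Not the repo's actual payload builder.
public class VisionRequest {
    public static String build(String model, String prompt, byte[] jpeg) {
        String b64 = Base64.getEncoder().encodeToString(jpeg);
        return "{\"model\":\"" + model + "\","
             + "\"messages\":[{\"role\":\"user\",\"content\":["
             + "{\"type\":\"text\",\"text\":\"" + prompt + "\"},"
             + "{\"type\":\"image_url\",\"image_url\":{\"url\":"
             + "\"data:image/jpeg;base64," + b64 + "\"}}]}]}";
    }
}
```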
What works:
- End-to-end pipeline (image → AI → voice)
- Stable looping on the phone
- Power saving via the sleep trigger
What doesn’t:
- BLE image transfer is slow and a bit fragile
- Battery and wiring are still messy
- Not yet comfortable to wear as actual glasses
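One common way to make BLE image transfer less fragile is to split the JPEG into notification-sized chunks and tag each with a sequence number, so dropped notifications are detectable on reassembly. A sketch under assumed values — the 180-byte payload and 2-byte header are illustrative, not the repo's actual protocol:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical chunking scheme: [seq_hi, seq_lo, payload...] per chunk.
public class BleChunker {
    public static List<byte[]> chunk(byte[] jpeg, int payload) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0, seq = 0; off < jpeg.length; off += payload, seq++) {
            int len = Math.min(payload, jpeg.length - off);
            byte[] c = new byte[len + 2];
            c[0] = (byte) (seq >> 8);    // sequence number, high byte
            c[1] = (byte) (seq & 0xFF);  // sequence number, low byte
            System.arraycopy(jpeg, off, c, 2, len);
            chunks.add(c);
        }
        return chunks;
    }

    // Returns null if any sequence number is missing (lost notification).
    public static byte[] reassemble(List<byte[]> chunks) {
        List<byte[]> sorted = new ArrayList<>(chunks);
        sorted.sort(Comparator.comparingInt(c -> ((c[0] & 0xFF) << 8) | (c[1] & 0xFF)));
        int total = 0;
        for (int i = 0; i < sorted.size(); i++) {
            byte[] c = sorted.get(i);
            int seq = ((c[0] & 0xFF) << 8) | (c[1] & 0xFF);
            if (seq != i) return null;   // gap detected → ask for a resend
            total += c.length - 2;
        }
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : sorted) {
            System.arraycopy(c, 2, out, off, c.length - 2);
            off += c.length - 2;
        }
        return out;
    }
}
```

The payload size would really come from the negotiated MTU (MTU minus ATT overhead) rather than a constant.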
Next steps:
- Improve power management
- Shrink the hardware into the frame properly
- Make responses shorter / more “thought-like”
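For the shorter, more “thought-like” responses, one cheap option besides prompt tuning is to post-process the model output before handing it to TTS — for example, speak only the first sentence, capped in length. A hypothetical helper:

```java
// Hypothetical post-processing step: trim the model's reply to its
// first sentence and cap the character count before speaking it.
public class ThoughtTrim {
    public static String toThought(String response, int maxChars) {
        // Split on whitespace that follows sentence-ending punctuation.
        String first = response.trim().split("(?<=[.!?])\\s+")[0];
        return first.length() <= maxChars
             ? first
             : first.substring(0, maxChars - 1) + "…";
    }
}
```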
