Tidy: Offline semantic Text-to-Image and Image-to-Image search on Android, powered by a quantized, state-of-the-art pretrained CLIP vision-language model and the ONNX Runtime inference engine
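For anyone curious how the pieces fit together: the app ships a quantized CLIP encoder as an ONNX model and runs it on-device with ONNX Runtime's Android (Java/Kotlin) bindings. Below is a minimal Kotlin sketch of that inference step for the text encoder; the model path, the `input_ids` input name, and the output shape are assumptions for illustration, not details taken from Tidy's source.

```kotlin
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import java.nio.LongBuffer

// Hypothetical sketch: embed a tokenized text query with a CLIP text encoder
// exported to ONNX. Input name "input_ids" and output shape [1, dim] are
// assumptions about the exported model, not Tidy's actual implementation.
fun embedText(tokenIds: LongArray, modelPath: String): FloatArray {
    val env = OrtEnvironment.getEnvironment()
    return env.createSession(modelPath).use { session ->
        // CLIP tokenizers typically pad/truncate the query to a fixed context length.
        OnnxTensor.createTensor(
            env, LongBuffer.wrap(tokenIds), longArrayOf(1, tokenIds.size.toLong())
        ).use { input ->
            session.run(mapOf("input_ids" to input)).use { results ->
                @Suppress("UNCHECKED_CAST")
                val embeds = results[0].value as Array<FloatArray> // assumed [1, dim]
                embeds[0]
            }
        }
    }
}
```

The image encoder would be run the same way at indexing time, producing one embedding per photo that gets stored on-device.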
Features
- Text-to-Image search: Find photos using natural language descriptions.
- Image-to-Image search: Discover visually similar images (a sketch of the shared retrieval step follows this list).
- Automatic indexing: New photos are automatically added to the index.
- Fast and efficient: Get search results quickly.
- Privacy-focused: Your photos never leave your device.
- No internet required: Works perfectly offline.
- Powered by OpenAI’s CLIP model: Uses advanced AI for accurate results.
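Once text and image embeddings exist, both search modes above reduce to the same retrieval step: compare a query embedding (from the text encoder for text-to-image, or from the image encoder for image-to-image) against the precomputed embeddings of indexed photos by cosine similarity and return the top matches. A minimal sketch, assuming the index is simply a list of photo URIs with float-array embeddings (not Tidy's actual index format):

```kotlin
import kotlin.math.sqrt

// Hypothetical index entry: a photo URI plus its precomputed CLIP embedding.
data class IndexedPhoto(val uri: String, val embedding: FloatArray)

// Cosine similarity between two embeddings of equal length.
fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

// Rank all indexed photos against the query embedding and keep the best matches.
fun search(query: FloatArray, index: List<IndexedPhoto>, topK: Int = 20): List<IndexedPhoto> =
    index.sortedByDescending { cosine(query, it.embedding) }.take(topK)
```

For a few thousand photos a brute-force scan like this is fast enough on a phone; much larger libraries usually switch to an approximate nearest-neighbour index.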
Does anybody know which CLIP model it uses?