Wine Cellar Skill in Minutes
Asked OpenClaw for a local wine cellar skill. The agent requested a sample CSV export and a storage location, then built and tested the skill in minutes — 962 bottles in the example.
Tags: skills, local, csv, wine
Category: knowledge
Tips
- Export your collection from any existing app as CSV first — the cleaner the data, the better the skill
- Include columns for region, vintage, grape variety, and storage location to enable rich natural language queries
- Ask the agent to add a 'drinking window' calculator based on wine type and vintage for proactive recommendations
- Store the CSV in your OpenClaw workspace so the skill survives agent restarts and updates
- Extend the pattern to any personal collection: books, vinyl records, whisky, trading cards — anything with structured data
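The 'drinking window' tip above can be sketched as a small helper. This is a minimal illustration, not OpenClaw's actual output: the column names, wine types, and year ranges are all assumptions you would tune to your own cellar.

```python
import csv
import io

# Rough drinking windows by wine type, in years after vintage.
# These ranges are illustrative assumptions, not expert guidance.
DRINKING_WINDOWS = {
    "red": (3, 15),
    "white": (1, 5),
    "sparkling": (1, 4),
    "dessert": (5, 30),
}

def drinking_window(wine_type, vintage):
    """Return (start_year, end_year) for a bottle, or None if the type is unknown."""
    window = DRINKING_WINDOWS.get(wine_type.lower())
    if window is None:
        return None
    return (vintage + window[0], vintage + window[1])

# A tiny CSV with the columns suggested in the tips (hypothetical export).
SAMPLE = """name,region,vintage,grape,type,location
Chateau Margaux,Bordeaux,2015,Cabernet Sauvignon,red,Rack A3
Chablis Premier Cru,Burgundy,2021,Chardonnay,white,Rack B1
"""

for row in csv.DictReader(io.StringIO(SAMPLE)):
    start, end = drinking_window(row["type"], int(row["vintage"]))
    print(f"{row['name']}: drink {start}-{end}")
```

A skill built by the agent would wrap logic like this behind natural language ("what should I open this year?") rather than exposing the function directly.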
Community Feedback
Asked 'Robby' (@openclaw) for a local wine cellar skill and it said 30 mins, did it in 3! 962 bottles catalogued and searchable.
— Twitter/X
@prades_maxime has a wine cellar. A serious one. They fed Clawdbot a CSV file and now have 962 bottles catalogued, searchable, and managed through natural language.
— Generative AI Publication
Local wine cellar management from CSV exports. Track 962+ bottles, get pairing suggestions, and manage inventory through conversation.
— OpenClaw Directory
Frequently Asked Questions
Do I need a specific wine app to export the CSV?
No. Any app that exports to CSV works — Vivino, CellarTracker, Wine-Searcher, or even a manually maintained spreadsheet. The agent adapts to whatever columns your CSV contains.
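Adapting to arbitrary columns is straightforward because CSV headers name the fields. A minimal sketch of the idea, assuming nothing about which columns exist (function names here are hypothetical):

```python
import csv
import io

def load_cellar(csv_text):
    """Parse any cellar CSV into a list of dicts keyed by its own headers."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def search(cellar, **filters):
    """Case-insensitive substring match on whichever columns the CSV has."""
    return [
        row for row in cellar
        if all(value.lower() in row.get(column, "").lower()
               for column, value in filters.items())
    ]

# Works the same for a Vivino-style export or a hand-made spreadsheet.
cellar = load_cellar("name,region,vintage\nBarolo Riserva,Piedmont,2016\n")
print(search(cellar, region="piedmont"))
```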
How long does it actually take to create a skill like this?
In the original example, it took about 3 minutes. The time depends on CSV complexity and how many features you request, but simple collection skills typically take under 10 minutes.
Can the skill update when I add or remove bottles?
Yes. You can re-export your CSV periodically, or ask the agent to add update commands to the skill so you can add/remove bottles through conversation without touching the file directly.
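The add/remove commands the agent might generate could look roughly like this sketch, which edits the CSV in place. The file layout, a required `name` column, and the function names are assumptions for illustration:

```python
import csv
import os
import tempfile

def add_bottle(path, bottle):
    """Append a bottle (dict matching the CSV's headers) to the file."""
    with open(path, newline="") as f:
        fieldnames = csv.DictReader(f).fieldnames
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=fieldnames).writerow(bottle)

def remove_bottle(path, name):
    """Rewrite the file without rows whose 'name' column matches."""
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = [r for r in reader if r["name"] != name]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Demo against a throwaway file (hypothetical path).
path = os.path.join(tempfile.mkdtemp(), "cellar.csv")
with open(path, "w", newline="") as f:
    f.write("name,vintage\nBarolo,2016\n")
add_bottle(path, {"name": "Chablis", "vintage": "2021"})
remove_bottle(path, "Barolo")
```

In practice the agent would invoke these behind conversational commands ("add a 2021 Chablis"), so you never touch the file by hand.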
Does this work offline?
The data stays local, but you still need an LLM provider connection for the agent to process your queries. If using a local model via Ollama, then yes — fully offline.