Two years of daily Claude + ChatGPT. They've seen probably a million tokens of my writing. Every response still opens with "Certainly!" or "Great question!" and closes with "In conclusion…".
Nobody writes like that. The model has no idea who you are — you're just another session.
So I built chatlectify. Point it at your exported chat history (Claude / ChatGPT / Gemini JSON, or a folder of your own writing — blog posts, emails, notes). It outputs a SKILL.md + system_prompt.txt that makes the model write like you.
How it works
- Extracts ~20 stylometric features from your messages — sentence-length distribution, contraction rate, bullet usage, hedge words, typo rate, punctuation histograms, question-vs-imperative ratio, top sentence starters
- Picks a stratified sample of your messages across length buckets as exemplars
- One LLM call distills it all into a portable style file
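To make the first step concrete, here's a toy sketch of the kind of stylometric pass described above, covering a few of the ~20 signals (sentence-length distribution, contraction rate, question ratio, top sentence starters). This is not chatlectify's actual code; names and the contraction regex are illustrative.

```python
import re
from collections import Counter
from statistics import mean, pstdev

# Rough contraction matcher: "can't", "it's", "we'll", etc.
CONTRACTION = re.compile(r"\b\w+'(?:s|t|re|ve|ll|d|m)\b", re.IGNORECASE)

def style_features(messages):
    """Toy version of the feature pass: a handful of stylometric signals."""
    sentences = [s for m in messages
                 for s in re.split(r"(?<=[.!?])\s+", m) if s]
    words = [w for m in messages for w in m.split()]
    lengths = [len(s.split()) for s in sentences]
    starters = Counter(s.split()[0].lower() for s in sentences if s.split())
    return {
        "sentence_len_mean": mean(lengths),
        "sentence_len_std": pstdev(lengths),
        "contraction_rate": len(CONTRACTION.findall(" ".join(messages)))
                            / max(len(words), 1),
        "question_ratio": sum(s.endswith("?") for s in sentences)
                          / len(sentences),
        "top_starters": starters.most_common(3),
    }

msgs = ["Can't we just ship it? It's basically done.",
        "I think the parser needs a rewrite. Long term it'll save us time."]
print(style_features(msgs))
```

The point of aggregating like this is that the synth step gets hard numbers ("31% of your sentences use contractions") rather than the model eyeballing raw transcripts.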
Privacy
Runs locally. Exactly one outbound LLM call to your configured model — the synth step that writes the style file. That call includes your feature summary and ~40 exemplar messages (the stratified sample). Nothing else leaves your machine. No telemetry, no cloud backend, no account.
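The stratified sample mentioned above can be sketched like this. Bucket edges and the function name are illustrative, not the tool's actual implementation:

```python
import random

def stratified_sample(messages, per_bucket=10, seed=0):
    """Toy stratified sampler: bucket messages by word count, then sample
    evenly from each bucket so the exemplars cover short replies as well
    as long explanations. Bucket edges here are made up for illustration."""
    buckets = {"short": [], "medium": [], "long": [], "very_long": []}
    for m in messages:
        n = len(m.split())
        key = ("short" if n < 15 else
               "medium" if n < 50 else
               "long" if n < 150 else "very_long")
        buckets[key].append(m)
    rng = random.Random(seed)  # deterministic for reproducible style files
    sample = []
    for bucket in buckets.values():
        sample.extend(rng.sample(bucket, min(per_bucket, len(bucket))))
    return sample
```

Stratifying matters because people write differently at different lengths: a uniform random sample would be dominated by whichever length you produce most, and the style file would miss the rest.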
Usage
pip install chatlectify
chatlectify all ./conversations.json --out-dir ./my_skill
Drop the folder into ~/.claude/skills/ or paste system_prompt.txt into any model that takes one.
Repo: https://github.com/0x1Adi/chatlectify
Curious what people think. Also — which export formats should I add next? Slack, iMessage, email, Discord, Obsidian vault?