🔐 100% Local & Private
Everything in WorkspaceGPT runs locally on your system. No data is sent to third-party servers. Your code and your documents remain fully private and secure.
You don't need to worry about confidentiality — we don't share or transmit anything outside your machine.
🧠 Features
AI-Powered Workspace Q&A
Get context-aware answers from your local workspace using Retrieval-Augmented Generation (RAG).
Confluence Integration
Seamlessly connect to your Confluence space and chat with your documentation.
Smart Code Navigation
Understand and explore your codebase more efficiently (coming soon!).
Interactive Chat Interface
Ask questions and receive intelligent, project-specific responses.
Runs Locally
No remote APIs. Zero data leakage. Total privacy.
🚀 Getting Started
Prerequisites
- Ollama installed and running locally
- Node.js (v18 or higher)
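The prerequisites above can be verified from a terminal; a minimal sketch (the version-parsing logic is illustrative, not part of WorkspaceGPT):

```shell
# Check WorkspaceGPT prerequisites (illustrative sketch).

# Ollama must be installed and available on PATH.
command -v ollama >/dev/null 2>&1 || echo "Ollama is not installed"

# Node.js v18 or higher is required; extract the major version from `node --version`.
node_major=$(node --version 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
if [ "${node_major:-0}" -ge 18 ]; then
  echo "Node.js version OK"
else
  echo "Node.js v18+ required"
fi
```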
Default Model
By default, WorkspaceGPT uses a lightweight model: `llama3.2:1b`. If you're looking for more accurate and context-rich responses, you can switch to a more capable model that fits your system, such as `llama3.2:4b`, `gemma3:4b`, or `mistral`.
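Models are fetched with Ollama's standard CLI; for example, to download the default model before first use (these commands require a working Ollama install):

```shell
# Pull the default WorkspaceGPT model.
ollama pull llama3.2:1b

# Optional: pull a more capable model if your hardware allows it.
# ollama pull mistral

# List the models now available locally.
ollama list
```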
Installation
- Open Visual Studio Code
- Navigate to the Extensions view (`Ctrl+Shift+X`, or `Cmd+Shift+X` on macOS)
- Search for "WorkspaceGPT"
- Click Install
🛠 Setup Guide
- Make sure Ollama is running on your system
- Open the WorkspaceGPT sidebar in VSCode
- Go to `Settings > Confluence Integration`
- Enter your Confluence details
- Click "Check Connection" to verify access and fetch the total number of pages
- Click "Start Sync" to begin syncing your Confluence content (this may take time depending on the number of pages)
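For the "Enter your Confluence details" step, the values typically needed look like the following (the field names here are illustrative, not WorkspaceGPT's actual setting names; use the labels shown in the sidebar):

```shell
# Illustrative Confluence connection values:
#   Base URL:  https://your-company.atlassian.net/wiki   # your Confluence site
#   Space key: ENG                                       # the space to sync
#   Email:     you@company.com                           # Atlassian account email
#   API token: create one at https://id.atlassian.com (Account settings > Security)
```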
🔁 Reset WorkspaceGPT
If you ever need to reset WorkspaceGPT to its default state, go to `Settings > Reset VSCode State`.
🤝 Contributing
We welcome contributions! Here's how:
- Fork the repo
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to your branch (`git push origin feature/amazing-feature`)
- Open a Pull Request