Get Started
Everything you need to get SeaSense up and running on your vessel.
What You'll Need
Documentation
Your vessel's technical documentation in digital format: manuals, drawings, procedures, parts lists. We support PDF, Word, Excel, and images.
Hardware
A Docker-capable device on your vessel: NAS, PC, mini server, or similar. Minimum 8GB RAM, 50GB storage. We can recommend hardware if needed.
Connectivity
Internet access for initial setup and model deployment. After setup, SeaSense runs fully offline. Periodic connectivity enables Cloud AI features.
Vessel Details
Basic vessel information: IMO number, vessel type, propulsion system, key equipment. This helps us configure SeaSense optimally for your vessel.
Step-by-Step Guide
Contact Us
Get in touch to discuss your vessel and requirements. We'll guide you through licensing and setup. Book a demo or contact sales to begin.
- Book a demo or email our team
- Share vessel type and documentation scope
- We'll confirm plan and next steps
Get Your License
After signup, you'll receive a license and access to the cloud portal. Choose your plan (Starter, Professional, or Fleet) and complete onboarding.
- Receive license and portal access
- Enroll your vessel and upload documentation
- We train a vessel-specific model and provide deployment details
Download Docker
SeaSense runs as a Docker container. Pull the image from Docker Hub onto your vessel's Docker-capable hardware (NAS, PC, or server).
docker pull gmtsgroup/seasense
- Ensure Docker is installed (8GB+ RAM, 50GB+ storage recommended)
- Pull the SeaSense image
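The steps above amount to a quick check that Docker is working, followed by the pull. A minimal sketch (the image defaults to the latest tag unless your deployment details specify one):

```shell
# Verify Docker is installed and the daemon is running
docker --version
docker info

# Pull the SeaSense image from Docker Hub
docker pull gmtsgroup/seasense

# Confirm the image is available locally
docker image ls gmtsgroup/seasense
```

If `docker info` fails, start the Docker service on your device before pulling.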
Configure LLM Integration
Choose how SeaSense connects to the AI model: cloud (when you have connectivity) or a local model on your vessel.
- Cloud: Configure API keys or endpoint for cloud LLM (e.g. OpenAI, Anthropic) in the app settings when internet is available. Enables live search and advanced models.
- Local: Point SeaSense to your local model endpoint (e.g. Ollama, local inference server) so the app runs fully offline on your vessel.
Your deployment package or portal will include the exact environment variables and config steps for your chosen option.
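As a rough illustration of the two options, the configuration might look like the sketch below. The variable names here are placeholders, not SeaSense's actual config keys; use the exact names from your deployment package or portal.

```shell
# Placeholder variable names for illustration only.

# Cloud option: point SeaSense at a hosted LLM API when connectivity
# is available (enables live search and advanced models)
export LLM_MODE=cloud
export LLM_ENDPOINT=https://api.openai.com/v1
export LLM_API_KEY=your-provider-api-key

# Local option: point SeaSense at a model server running on the vessel,
# e.g. an Ollama instance, so the app works fully offline
export LLM_MODE=local
export LLM_ENDPOINT=http://localhost:11434
```

With the local option, the model server must be running on the vessel network before SeaSense starts.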
Run the Model
Start the container with your vessel-specific configuration. Access SeaSense from any browser on your vessel network and start asking questions.
- Run the container with your license and LLM config
- Open SeaSense in a browser on the vessel network
- Ask natural language questions and get referenced answers
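Putting the steps together, a typical launch looks roughly like this. The flag values, port, and environment variable names below are assumptions for illustration; the real ones come with your deployment package.

```shell
# Illustrative invocation only; substitute the flags, port, and
# variable names from your vessel's deployment details.
docker run -d \
  --name seasense \
  --restart unless-stopped \
  -p 8080:8080 \
  -e SEASENSE_LICENSE_KEY=your-license-key \
  -e LLM_MODE=local \
  -e LLM_ENDPOINT=http://localhost:11434 \
  -v seasense-data:/data \
  gmtsgroup/seasense
```

Once the container is up, open the mapped port from any browser on the vessel network; the named volume keeps your documentation index across container restarts.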
Get Your Vessel on SeaSense Today
Book a demo with our team and we'll walk you through the entire process, from enrollment to deployment.