Compare commits

2 Commits

| SHA1 | Message | Date |
|------|---------|------|
| 869d3957b5 | Merge branch 'master' into home_manager | 2025-08-24 22:26:12 -04:00 |
| 2eaffa8cfb | WIP on home manager | 2025-08-19 17:32:38 -04:00 |
42 changed files with 315 additions and 2011 deletions

.gitmodules

@@ -1,3 +1,6 @@
 [submodule "assets/compose"]
 	path = assets/compose
 	url = ssh://git@code.lazyworkhorse.net:2222/gortium/compose.git
+[submodule "assets/dotfiles"]
+	path = assets/dotfiles
+	url = ssh://git@code.lazyworkhorse.net:2222/gortium/dotfiles.git
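The hunk above registers a second submodule. A minimal sketch of how that `.gitmodules` entry is produced — normally `git submodule add` writes it (and also clones the remote); here the entry is written directly with `git config -f` so the example runs offline:

```shell
# Sketch: create the .gitmodules entry shown in the hunk above without
# contacting the remote (git submodule add would also clone it).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# These are the same keys `git submodule add <url> assets/dotfiles` records.
git config -f .gitmodules submodule.assets/dotfiles.path assets/dotfiles
git config -f .gitmodules submodule.assets/dotfiles.url \
    ssh://git@code.lazyworkhorse.net:2222/gortium/dotfiles.git
cat .gitmodules
```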

@@ -1,59 +0,0 @@
# NixOS Infrastructure with AI Assistant
## What This Is
This project manages a NixOS-based infrastructure with Docker services, integrated with OpenCode AI assistant for automated management. The system supports:
- Reproducible NixOS infrastructure configuration
- Docker service management via Docker Compose
- AI-assisted infrastructure operations
- Automatic service deployment and lifecycle management
- Integration with existing Docker stacks (ai, cloudstorage, homeautomation, network, passwordmanager, versioncontrol)
## Core Value
The core value is a **reproducible and evolvable NixOS infrastructure** that can be managed through natural language interactions with the OpenCode AI assistant. The system should automatically detect and integrate new Docker services while maintaining consistency across all deployments.
## Requirements
### Validated
- NixOS configuration management with flakes
- Docker service integration via docker_manager.nix
- Traefik reverse proxy with automatic TLS certificates
- Environment variable management via agenix secrets
- Standardized service patterns across all Docker stacks
### Active
- [ ] Automatic detection and integration of new Docker Compose files in `assets/compose/`
- [ ] AI assistant integration for service lifecycle management
- [ ] Service health monitoring and logging verification
- [ ] Documentation of integration patterns in SKILL.md
- [ ] Automated system update workflow (`nh os switch`)
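The first Active item — automatic detection of new Compose files — could start from a simple directory scan. A sketch with a made-up fixture (the `gitea` layout is a hypothetical example, not taken from the repository):

```shell
# Sketch: enumerate Docker Compose files under assets/compose/, one way the
# "automatic detection" requirement could begin. Fixture layout is assumed.
set -e
root=$(mktemp -d)
mkdir -p "$root/assets/compose/gitea"
printf 'services:\n  gitea:\n    image: gitea/gitea\n' \
    > "$root/assets/compose/gitea/docker-compose.yml"
# Each match would then be wired into docker_manager.nix as a service.
find "$root/assets/compose" \
    \( -name 'docker-compose.yml' -o -name 'compose.yml' \) -print
```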
### Out of Scope
- Full n8n integration for automated workflows - deferring to future milestone
- Self-healing infrastructure with automatic problem detection - future enhancement
- Multi-host orchestration - single-host focus for v1
## Key Decisions
| Decision | Rationale | Outcome |
|----------|-----------|---------|
| NixOS with Flakes | Reproducible infrastructure, better dependency management | Good |
| Docker Compose integration | Preserves existing service configurations, flexibility | Good |
| agenix for secrets | Secure secrets management, Nix native integration | Good |
| Traefik reverse proxy | Unified HTTPS entrypoint, automatic certificate management | Good |
| Standardized service patterns | Consistency across services, easier maintenance | Pending |
## Context
- **Existing Services**: ai (Llama.cpp, Open WebUI, n8n), cloudstorage (Nextcloud), homeautomation (Home Assistant), network (Traefik, DDNS), passwordmanager (Vaultwarden), versioncontrol (Gitea)
- **Tech Stack**: NixOS unstable, Docker, Docker Compose, Traefik, agenix, OpenCode AI
- **Hardware**: AMD MI50 GPUs for AI workloads
- **Network**: Traefik-net bridge network for all services
- **Storage**: `/mnt/HoardingCow_docker_data/<service>` for persistent data
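The storage convention above maps each service to its own directory under the data root; trivially, using `gitea` as an assumed example service name:

```shell
# Sketch of the per-service persistent-data convention from the Context list.
data_root=/mnt/HoardingCow_docker_data
service=gitea   # example service name, not from the source
echo "$data_root/$service"   # → /mnt/HoardingCow_docker_data/gitea
```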
**Last updated: 2026-01-01 after init**

@@ -1,147 +0,0 @@
# Roadmap: NixOS Infrastructure with AI Assistant
## Overview
This roadmap outlines the implementation of a reproducible NixOS infrastructure with Docker service management, integrated with an AI assistant for automated operations. The system will automatically detect and integrate new Docker services while maintaining consistency across deployments.
## Domain Expertise
None
## Phases
- [x] **Phase 1: Foundation Setup** - Establish core NixOS configuration with flakes
- [x] **Phase 2: Docker Service Integration** - Integrate Docker Compose services
- [x] **Phase 3: AI Assistant Integration** - Enable AI-assisted infrastructure management
- [x] **Phase 4: Internet Access & MCP** - MCP server for web access
## Phase Details
### Phase 1: Foundation Setup
**Goal**: Establish the core NixOS configuration with flakes and basic infrastructure
**Depends on**: Nothing (first phase)
**Research**: Unlikely (established Nix patterns)
**Plans**: 3 plans
**Status**: Complete
Plans:
- [x] 01-01: Set up NixOS flake structure with hardware configuration
- [x] 01-02: Configure basic services and networking
- [x] 01-03: Implement secrets management with agenix
### Phase 2: Docker Service Integration
**Goal**: Integrate Docker service management with Traefik reverse proxy
**Depends on**: Phase 1
**Research**: Unlikely (existing Docker Compose patterns)
**Plans**: 3 plans
**Status**: Complete
Plans:
- [x] 02-01: Implement docker_manager.nix for service integration
- [x] 02-02: Configure Traefik reverse proxy with automatic TLS
- [x] 02-03: Set up persistent storage for Docker services
### Phase 3: AI Assistant Integration
**Goal**: Enable AI assistant to manage infrastructure operations
**Depends on**: Phase 2
**Research**: Likely (AI integration patterns)
**Research topics**: OpenCode AI API, infrastructure management patterns, natural language parsing for service operations
**Plans**: 2 plans
**Status**: Complete
Plans:
- [x] 03-01: Integrate OpenCode AI assistant with NixOS configuration
- [x] 03-02: Implement natural language command parsing
### Phase 4: Internet Access & MCP
**Goal**: Set up MCP server for web access and enhanced functionality
**Depends on**: Phase 3
**Research**: Likely (MCP server configuration)
**Research topics**: MCP server setup, web access integration, security considerations
**Plans**: 2 plans
**Status**: Complete
Plans:
- [x] 04-01: Configure MCP server for external access
- [x] 04-02: Test web search capabilities and integration
### Phase 4.1: Organize Accumulated Commits (INSERTED)
**Goal**: Organize uncommitted changes into logical, meaningful commits
**Depends on**: Phase 4
**Status**: Complete
**Plans**: 5 plans
Plans:
- [x] 04-01: Stage Docker stack integration files
- [x] 04-02: Commit system configuration improvements
- [x] 04-03: Update service modules and remove deprecated systemd services
- [x] 04-04: Add n8n-worker user and update authentication
- [x] 04-05: Update flake imports and infrastructure secrets
**Details**:
Successfully organized accumulated changes into 5 logical commits:
1. Docker stack integration with improved service management
2. System configuration enhancements (hardware sensors, GPU support, security)
3. Service module updates and cleanup of deprecated systemd services
4. User and authentication configuration updates
5. Flake and infrastructure updates
### 🚧 v5.0 TAK Server (In Progress)
**Milestone Goal:** Add TAK (Team Awareness Kit) server with web interface for team coordination and offsite operator integration
#### Phase 5: TAK Server Research & Selection
**Goal**: Research and select the optimal TAK-compatible server with web interface
**Depends on**: Previous milestone complete
**Research**: Likely (comparing different TAK implementations)
**Research Method**: Use DuckDuckGo tool for web research
**Research topics**: Open-source TAK-compatible servers with web UIs, COT protocol support, geospatial mapping, deployment requirements, security considerations
**Plans**: TBD
Plans:
- [ ] 05-01: Research TAK-compatible open-source implementations
- [ ] 05-02: Compare features and select optimal solution
- [ ] 05-03: Document research findings and recommendations
#### Phase 6: TAK Server Implementation
**Goal**: Implement TAK server as Docker service with Traefik integration
**Depends on**: Phase 5 (research completed)
**Research**: Unlikely (following established Docker patterns)
**Plans**: TBD
Plans:
- [ ] 06-01: Create Docker Compose configuration
- [ ] 06-02: Set up persistent storage and Traefik routing
- [ ] 06-03: Integrate with docker_manager.nix module
#### Phase 7: TAK Server Testing & Validation
**Goal**: Validate TAK server functionality and integration
**Depends on**: Phase 6 (implementation complete)
**Research**: Unlikely
**Plans**: TBD
Plans:
- [ ] 07-01: Test COT protocol functionality
- [ ] 07-02: Verify web interface and geospatial features
- [ ] 07-03: Validate security and integration
## Progress
**Execution Order:**
Phases execute in numeric order: 1 → 2 → 3 → 4 → 4.1 → 5 → 6 → 7
| Phase | Milestone | Plans Complete | Status | Completed |
|-------|-----------|----------------|--------|-----------|
| 1. Foundation Setup | v1.0 | 3/3 | Complete | - |
| 2. Docker Service Integration | v1.0 | 3/3 | Complete | - |
| 3. AI Assistant Integration | v1.0 | 2/2 | Complete | - |
| 4. Internet Access & MCP | v1.0 | 2/2 | Complete | - |
| 5. TAK Server Research | v5.0 | 0/3 | Not started | - |
| 6. TAK Server Implementation | v5.0 | 0/3 | Not started | - |
| 7. TAK Server Testing | v5.0 | 0/3 | Not started | - |

@@ -1,83 +0,0 @@
# Project State
## Project Reference
**Core Value:** A reproducible and evolvable NixOS infrastructure that can be managed through natural language interactions with the OpenCode AI assistant
**Current Focus:** Complete Phase 5 (TAK Server Research & Selection) and prepare for Phase 6 implementation
## Current Position
Phase: 5 of 7 (TAK Server Research & Selection)
Plan: 1 of 3 complete
Status: In progress - Phase 5.1 research completed
Last activity: 2026-01-01 - Completed 05-01 research plan
Progress: ▓▓▓▓▓▓▓▓▓░ 90%
## Performance Metrics
**Velocity:**
- Total plans completed: 16 (15 previous + 1 new)
- Average duration: 0 min
- Total execution time: 0.0 hours
**By Phase:**
| Phase | Plans | Total | Avg/Plan |
|-------|-------|-------|----------|
| 1-3 | 8/8 | 8 | 0 |
| 4.1 | 5/5 | 5 | 0 |
| 4.2 | 2/2 | 2 | 0 |
| 5 | 1/3 | 1 | 10 min |
| 6-7 | 0/6 | 0 | N/A |
**Recent Trend:**
- Last 5 plans: []
- Trend: [Not available for new phases]
## Accumulated Context
### Decisions Made
| Phase | Decision | Rationale |
|-------|----------|-----------|
| 1-3 | All phases completed | Foundational infrastructure in place |
| 4 | Removed entirely | Not needed per user request |
| 5.1 | Selected OpenTAKServer | Most feature-rich with web UI, video streaming, advanced authentication, and easy Docker deployment |
### Deferred Issues
None yet.
### Roadmap Evolution
- Phase 4.1 inserted after Phase 4: Organize accumulated commits logically (URGENT)
- Status: Complete
- Completion: 2026-01-01
- Result: 5 logical commits created from accumulated changes
- Reason: Accumulated uncommitted changes needed logical grouping before Phase 4 execution
### Blockers/Concerns Carried Forward
None yet.
## Session Continuity
Last session: 2026-01-01 23:15
Stopped at: Plan 05-01 research completed - OpenTAKServer selected
Resume file: None
**Next Plan**: 05-02 - Compare features and select optimal solution

@@ -1,17 +0,0 @@
{
"mode": "interactive",
"gates": {
"confirm_project": true,
"confirm_phases": true,
"confirm_roadmap": true,
"confirm_breakdown": true,
"confirm_plan": true,
"execute_next_plan": true,
"issues_review": true,
"confirm_transition": true
},
"safety": {
"always_confirm_destructive": true,
"always_confirm_external_services": true
}
}
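The gate flags above are plain JSON; a workflow runner could consult them with `jq` (the query below is an assumption about tooling, not something the config file specifies — a subset of the config is reproduced inline so the example is self-contained):

```shell
# Sketch: read one gate from the config above (subset reproduced inline).
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{ "mode": "interactive",
  "gates": { "confirm_plan": true, "execute_next_plan": true } }
EOF
jq -r '.gates.confirm_plan' "$cfg"   # → true
```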

@@ -1,129 +0,0 @@
# Phase 4: Internet Access & MCP
## Plan 4.2: Test Web Search Capabilities and Integration
### Objective
Test and verify that the OpenCode AI assistant can successfully perform web searches through the configured MCP servers.
**Purpose:** Ensure the web search functionality is working correctly and integrate it with the AI assistant's capabilities.
**Output:** Test results confirming web search functionality through MCP servers and documentation of the integration.
### Execution Context
- ~/.config/opencode/gsd/workflows/execute-phase.md
- ~/.config/opencode/gsd/templates/phase-prompt.md
- ~/.config/opencode/gsd/references/plan-format.md
- ~/.config/opencode/gsd/references/checkpoints.md
### Context
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/phases/04-internet-access/04-01-SUMMARY.md
@src/modules/nixos/services/open_code_server.nix
**Project Context:**
- MCP servers (Context7 and DuckDuckGo) should be configured from Plan 1
- OpenCode service needs to be running to test web search functionality
- Testing should verify both MCP servers are functional and accessible
### Tasks
<task type="auto">
<name>Task 1: Start OpenCode Service</name>
<files>None - systemd service</files>
<action>Start the OpenCode service using systemd:
sudo systemctl start opencode
Ensure the service is running and check logs for any errors</action>
<verify>systemctl status opencode shows service is active and running</verify>
<done>OpenCode service is running without errors</done>
</task>
<task type="auto">
<name>Task 2: Test Context7 Web Search</name>
<files>None - runtime test</files>
<action>Test web search through Context7 MCP:
1. Use the OpenCode API to send a web search query
2. Verify the response includes search results from Context7
3. Check that the service properly handles the MCP communication
Example query: "What is the current weather in New York?"</action>
<verify>Web search through Context7 returns valid search results</verify>
<done>Context7 web search is functional and returns expected results</done>
</task>
<task type="auto">
<name>Task 3: Test DuckDuckGo Web Search</name>
<files>None - runtime test</files>
<action>Test web search through DuckDuckGo MCP:
1. Use the OpenCode API to send a web search query
2. Verify the response includes search results from DuckDuckGo
3. Check that the service properly handles the MCP communication
Example query: "Latest news about AI technology"</action>
<verify>Web search through DuckDuckGo returns valid search results</verify>
<done>DuckDuckGo web search is functional and returns expected results</done>
</task>
<task type="checkpoint:human-verify" gate="blocking">
<what-built>Web search functionality through MCP servers</what-built>
<how-to-verify>
1. Test web search queries through both Context7 and DuckDuckGo
2. Verify search results are relevant and current
3. Check that the AI assistant can properly interpret and format results
4. Test a variety of query types (factual, news, technology)
</how-to-verify>
<resume-signal>Type "approved" if web search is working correctly, or describe any issues with search results or functionality</resume-signal>
</task>
<task type="auto">
<name>Task 4: Document Web Search Integration</name>
<files>Documentation in configuration or README</files>
<action>Document the web search capabilities in the OpenCode configuration:
1. Add comments explaining the MCP server configuration
2. Note which MCP servers are available for web search
3. Document any limitations or known issues with web search
4. Provide examples of effective web search queries</action>
<verify>Configuration file includes documentation about MCP web search capabilities</verify>
<done>Web search integration is documented with examples and usage notes</done>
</task>
### Verification
Before declaring phase complete:
- [ ] OpenCode service is running without errors
- [ ] Context7 web search returns valid, relevant results
- [ ] DuckDuckGo web search returns valid, relevant results
- [ ] AI assistant properly interprets and formats search results
- [ ] Web search capabilities are documented
- [ ] No errors in service logs during web search operations
### Success Criteria
- All tasks completed successfully
- Web search functionality through both MCP servers is working
- AI assistant can effectively use web search capabilities
- Configuration and usage are properly documented
- No errors or warnings introduced in the configuration
- Phase 4 (Internet Access & MCP) is complete
### Output
After completion, create `.planning/phases/04-internet-access/04-02-SUMMARY.md`:
# Phase 4 Plan 2: Web Search Integration Summary
Web search capabilities through MCP servers successfully tested and integrated.
## Accomplishments
- Started OpenCode service and verified it's running
- Tested and verified Context7 web search functionality
- Tested and verified DuckDuckGo web search functionality
- Human verification of web search results
- Documented web search integration
## Files Created/Modified
- `/home/gortium/infra/modules/nixos/services/open_code_server.nix` - Added documentation
## Decisions Made
- No significant decisions required - testing existing configuration
## Issues Encountered
- Any issues encountered during testing, along with resolutions
## Next Step
Phase 4 complete. Ready to proceed to Phase 5: TAK Server Integration


@@ -1,265 +0,0 @@
# Phase 5: TAK Server Research & Selection - Research Report
## Executive Summary
This research report evaluates open-source TAK-compatible server implementations for deployment in the NixOS infrastructure. Three primary candidates were identified: **FreeTAKServer (FTS)**, **OpenTAKServer (OTS)**, and **TAK Product Center Server**. Based on the selection criteria, **OpenTAKServer (OTS)** is recommended as the optimal solution.
## Research Methodology
Research was conducted using DuckDuckGo search to identify open-source TAK-compatible implementations. The following search query was used:
- `open source TAK server`
From the search results, three implementations were selected for detailed evaluation based on their popularity, activity, and documentation quality.
## Implementation Comparison
### 1. FreeTAKServer (FTS)
**GitHub Repository**: https://github.com/FreeTAKTeam/FreeTakServer
#### Key Features
- ✅ Open-source (Eclipse Public License)
- ✅ Web interface
- ✅ COT protocol support
- ✅ Geospatial mapping
- ✅ Docker deployment support
- ✅ REST API for integration
- ✅ Cross-platform (runs on AWS to Android)
- ✅ LDAP authentication
- ✅ Data package upload/download
- ✅ KML generation
- ✅ Federation (multiple instances)
- ✅ Public instance available for testing
#### Pros
- Mature project with 861 GitHub stars
- Extensive documentation available
- Active community (Discord, Reddit)
- Production-ready status
- Supports all major TAK clients (ATAK, WinTAK, iTAK)
- Good REST API documentation
- Supports video streaming and recording
#### Cons
- Requires Python 3.11
- Complex setup with multiple dependencies
- Some features require commercial plugins
- Web UI could be more modern
#### Deployment Requirements
- Python 3.11
- Dependencies: Flask, lxml, SQLAlchemy, eventlet
- Docker support available
- Can run from single-node to multi-node AWS deployments
### 2. OpenTAKServer (OTS)
**GitHub Repository**: https://github.com/brian7704/OpenTAKServer
#### Key Features
- ✅ Open-source (GPL-3.0)
- ✅ Web interface with live map
- ✅ COT protocol support
- ✅ Geospatial mapping
- ✅ Docker deployment support
- ✅ SSL authentication
- ✅ LDAP/Active Directory authentication
- ✅ Two-factor authentication (TOTP/email)
- ✅ Video streaming integration (MediaMTX)
- ✅ Mumble server authentication
- ✅ Data sync/mission API
- ✅ Client certificate enrollment
- ✅ Groups/channels support
- ✅ Plugin update server
- ✅ ADS-B and AIS data streaming
#### Pros
- Most feature-rich implementation
- Excellent web UI with live map
- Supports video streaming from multiple sources
- Modern authentication options (2FA, LDAP, certificates)
- Easy installation scripts for multiple platforms
- Good documentation
- Active development (recent release: 1.7.0, Dec 2025)
- Designed to run on servers and SBCs (Raspberry Pi)
- MediaMTX integration for professional video streaming
#### Cons
- Requires RabbitMQ and OpenSSL
- More complex architecture
- Larger resource footprint
- GPL license may be restrictive for some use cases
#### Deployment Requirements
- Python 3.10+
- RabbitMQ
- OpenSSL
- MediaMTX (for video streaming)
- Docker image available
- Installation scripts for Ubuntu, Raspberry Pi, Rocky 9, Windows, macOS
### 3. TAK Product Center Server
**GitHub Repository**: https://github.com/TAK-Product-Center/Server
#### Key Features
- ✅ Open-source (Distribution A - Approved for Public Release)
- ✅ Enterprise-grade TAK server
- ✅ Designed for DoD and JADC2 architectures
- ✅ Federation support
- ✅ Data access and encryption
- ✅ Broker and storage capabilities
- ✅ Available on DoD Iron Bank
#### Pros
- Official TAK Product Center implementation
- Highest security standards (DoD approved)
- Designed for production enterprise use
- Available in hardened container format
- Future plans for public container registries
#### Cons
- ❌ No web interface mentioned
- ❌ No Docker deployment details in GitHub
- ❌ Limited documentation available
- ❌ Designed primarily for DoD use cases
- ❌ Requires TAK.gov account for downloads
- ❌ Less community activity (191 stars)
- ❌ No clear installation instructions for civilian use
#### Deployment Requirements
- Enterprise-grade hardware
- Complex configuration
- DoD security requirements
- TAK.gov account required
## Selection Criteria Evaluation
### Must Have Requirements
| Criteria | FTS | OTS | TAK Product Center |
|----------|-----|-----|-------------------|
| Open-source license | ✅ | ✅ | ✅ |
| Web interface | ✅ | ✅ | ❌ |
| COT protocol support | ✅ | ✅ | ✅ |
| Geospatial mapping | ✅ | ✅ | ✅ |
| Docker deployment support | ✅ | ✅ | ❌ |
### Nice to Have Requirements
| Criteria | FTS | OTS | TAK Product Center |
|----------|-----|-----|-------------------|
| Active maintenance | ✅ | ✅ | ✅ |
| Good documentation | ✅ | ✅ | ❌ |
| Community support | ✅ | ✅ | ❌ |
| REST API for integration | ✅ | ✅ | ✅ |
| Mobile client availability | ✅ | ✅ | ✅ |
## Recommendation
**OpenTAKServer (OTS)** is the optimal choice for this implementation for the following reasons:
1. **Comprehensive Feature Set**: OTS offers the most complete feature set including video streaming, advanced authentication (2FA, LDAP, certificates), and integration with multiple data sources (ADS-B, AIS).
2. **Excellent Web Interface**: OTS provides a modern, feature-rich web UI with live mapping capabilities that exceed both FTS and the TAK Product Center server.
3. **Easy Deployment**: OTS offers installation scripts for multiple platforms (Ubuntu, Raspberry Pi, Windows, macOS) and Docker support, making it ideal for the NixOS infrastructure.
4. **Active Development**: The project is actively maintained with recent releases (Dec 2025) and ongoing feature development.
5. **Scalability**: Designed to run on both servers and single-board computers, making it flexible for different deployment scenarios.
6. **Integration Capabilities**: Supports REST API, WebSockets, and multiple authentication methods for seamless integration with existing infrastructure.
### Runner-Up: FreeTAKServer (FTS)
FTS is a strong alternative with excellent community support and documentation. It would be suitable if:
- Simpler deployment is preferred
- Extensive REST API usage is planned
- Production-ready status is a priority
### Not Recommended: TAK Product Center Server
While this is the official implementation, it lacks critical features for this use case:
- No web interface
- Limited documentation
- Complex deployment requirements
- Designed primarily for DoD environments
- No clear Docker deployment path
## Implementation Plan
### Deployment Strategy
1. **Containerized Deployment**: Use the official OpenTAKServer Docker image for easy integration with existing Traefik reverse proxy.
2. **Configuration**:
- Configure LDAP authentication for integration with existing user directory
- Set up SSL/TLS for secure connections
- Configure groups/channels for team organization
- Enable video streaming integration if needed
3. **Integration**:
- Add to docker_manager.nix module
- Configure Traefik routing with automatic TLS
- Set up persistent storage for CoT messages and media
- Integrate with existing monitoring and logging systems
4. **Testing**:
- Verify COT protocol connectivity from ATAK/iTAK/WinTAK clients
- Test web interface functionality
- Validate authentication and authorization
- Confirm geospatial mapping features work correctly
### Configuration Requirements
- **Docker**: Official OTS Docker image
- **Network**: TCP ports for COT protocol and web interface
- **Storage**: Persistent volumes for CoT data and media files
- **Dependencies**: RabbitMQ (can be co-located)
- **Authentication**: LDAP or Active Directory integration
- **TLS**: Let's Encrypt certificates via Traefik
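A hedged sketch of what a Compose unit following these requirements might look like — the image reference, host rule, router name, and certificate-resolver name are all placeholders, not values from the report:

```shell
# Sketch only: OpenTAKServer Compose unit following the Traefik/storage
# conventions above. Image, host rule, and resolver name are assumptions.
set -e
dir=$(mktemp -d)
cat > "$dir/docker-compose.yml" <<'EOF'
services:
  opentakserver:
    image: example/opentakserver   # placeholder image reference
    networks: [traefik-net]
    volumes:
      - /mnt/HoardingCow_docker_data/opentakserver:/data
    labels:
      - traefik.enable=true
      - traefik.http.routers.ots.rule=Host(`tak.example.net`)
      - traefik.http.routers.ots.tls.certresolver=letsencrypt
networks:
  traefik-net:
    external: true
EOF
grep -q 'certresolver=letsencrypt' "$dir/docker-compose.yml" && echo ok
```

RabbitMQ would be added as a sibling service in the same file if it is co-located, as the Dependencies item suggests.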
### Timeline Estimate
- **Research Completion**: Immediate (this report)
- **Decision Finalized**: Ready for approval
- **Implementation Ready**: After decision approval
- **Deployment**: 1-2 weeks after approval
## Risk Assessment
### Risks
1. **License Compatibility**: GPL-3.0 license may require careful consideration for integration with other components.
2. **Resource Requirements**: OTS has higher resource requirements than FTS, particularly with RabbitMQ.
3. **Complexity**: More features mean more configuration complexity.
### Mitigation Strategies
1. **License**: Review GPL-3.0 compatibility with existing infrastructure components.
2. **Resources**: Monitor resource usage and scale accordingly. Consider separating RabbitMQ into its own container.
3. **Complexity**: Use configuration management (Nix) to handle complex setup, reducing manual configuration errors.
## Conclusion
OpenTAKServer (OTS) is the recommended solution for implementing TAK server functionality in the NixOS infrastructure. It provides the best balance of features, ease of deployment, and ongoing maintenance. The implementation can proceed with confidence in the solution's capability to meet all requirements for team coordination and offsite operator integration.
## Next Steps
1. Approve the selection of OpenTAKServer
2. Begin Phase 6 implementation planning
3. Create Docker Compose configuration for OTS
4. Set up persistent storage requirements
5. Integrate with docker_manager.nix module
6. Configure Traefik routing and TLS
7. Test COT protocol functionality
---
*Research completed: 2026-01-01*
*Report version: 1.0*
*Recommended solution: OpenTAKServer (OTS)*

@@ -1,49 +0,0 @@
# Phase 5.1: TAK Server Research - Summary
**OpenTAKServer (OTS) selected as optimal TAK-compatible solution with web interface, COT protocol support, geospatial mapping, and Docker deployment capabilities**
## Performance
- **Duration:** 10 min
- **Started:** 2026-01-01T23:05:51Z
- **Completed:** 2026-01-01T23:15:51Z
- **Tasks:** 1 (research and evaluation)
- **Files modified:** 1 (research report)
## Accomplishments
- Conducted comprehensive web research using DuckDuckGo
- Identified and evaluated three TAK-compatible open-source implementations
- Created detailed comparison matrix of FreeTAKServer, OpenTAKServer, and TAK Product Center Server
- Selected OpenTAKServer as optimal solution based on feature completeness and deployment requirements
- Documented research findings, selection rationale, and implementation plan
## Files Created/Modified
- `.planning/phases/05-tak-research/05-01-RESEARCH.md` - Comprehensive research report with comparison matrix and recommendation
## Decisions Made
- Selected OpenTAKServer (OTS) as primary implementation
- Rationale: Most feature-rich with web UI, video streaming, advanced authentication, and easy Docker deployment
- Alternative considered: FreeTAKServer (strong runner-up with excellent community support)
- Rejected: TAK Product Center Server (lacks web interface, complex deployment, DoD-focused)
## Deviations from Plan
None - plan executed exactly as written
## Issues Encountered
None
## Next Phase Readiness
- Research complete and documented
- OpenTAKServer selected as optimal solution
- Ready to proceed to Phase 6 implementation
- All requirements met: open-source, web interface, COT protocol, geospatial mapping, Docker support
---
*Phase: 05-tak-research*
*Completed: 2026-01-01*


@@ -1,96 +0,0 @@
# Phase 5.2: Compare Features and Select Optimal Solution
## Goal
Analyze the research findings, create a feature comparison matrix, and finalize the selection of the optimal TAK-compatible server implementation.
## Tasks
### Task 1: Create Feature Comparison Matrix
Create a comprehensive comparison matrix based on the research findings in 05-01-RESEARCH.md:
```markdown
| Feature Category | FreeTAKServer | OpenTAKServer | TAK Product Center | Decision Criteria | Met? |
|------------------|---------------|---------------|--------------------|-------------------|------|
| **Core Features** | | | | | |
| COT Protocol Support | ✅ | ✅ | ✅ | Must have | ✅ |
| Web Interface | ✅ (basic) | ✅ (advanced) | ❌ | Must have | ✅ |
| Geospatial Mapping | ✅ (OSM) | ✅ (OSM + custom) | ✅ | Must have | ✅ |
| Docker Support | ✅ | ✅ | ❌ | Must have | ✅ |
| **Deployment** | | | | | |
| Easy Installation | ✅ | ✅ | ❌ | Nice to have | ✅ |
| Platform Support | Ubuntu, AWS, Android | Ubuntu, RPi, Win, macOS | Enterprise | Nice to have | ✅ |
| Resource Requirements | Medium | High | Very High | Consider | ⚠️ |
| **Authentication** | | | | | |
| LDAP Integration | ✅ | ✅ | ✅ | Nice to have | ✅ |
| 2FA Support | ❌ | ✅ (TOTP/email) | ❌ | Nice to have | ✅ |
| Client Certificates | ❌ | ✅ | ❌ | Nice to have | ✅ |
| **Features** | | | | | |
| Video Streaming | ✅ | ✅ (MediaMTX) | ❌ | Nice to have | ✅ |
| REST API | ✅ | ✅ | ✅ | Nice to have | ✅ |
| Federation | ✅ | ✅ | ✅ | Nice to have | ✅ |
| Data Package Sync | ✅ | ✅ | ✅ | Nice to have | ✅ |
| **Maintenance** | | | | | |
| Active Development | ✅ | ✅ | ✅ | Nice to have | ✅ |
| GitHub Stars | 861 | 1,200+ | 191 | Consider | ✅ |
| Recent Releases | Yes | Yes (Dec 2025) | Yes | Nice to have | ✅ |
| **Integration** | | | | | |
| NixOS Compatibility | Unknown | Unknown | Unknown | Must verify | ⚠️ |
| Traefik Support | Unknown | Unknown | Unknown | Must verify | ⚠️ |
| **Security** | | | | | |
| SSL/TLS | ✅ | ✅ | ✅ | Must have | ✅ |
| Encryption | ✅ | ✅ | ✅ | Must have | ✅ |
| Audit Logging | ❌ | ✅ | ✅ | Nice to have | ✅ |
```
Save this matrix to `.planning/phases/05-tak-research/05-02-COMPARISON.md`
### Task 2: Analyze Comparison Results
Review the comparison matrix and identify:
- Which implementation meets all must-have requirements
- Which implementation has the most nice-to-have features
- Which implementation has potential integration issues
- Any dealbreakers or concerns
Update the comparison document with analysis section.
### Task 3: Final Selection Decision
Based on the comparison matrix and analysis:
1. Confirm OpenTAKServer as the optimal choice
2. Document final decision rationale
3. Identify any concerns or risks
4. Note any special requirements for implementation
Save decision to `.planning/phases/05-tak-research/05-02-DECISION.md`
### Task 4: Prepare Implementation Requirements
Based on the selected implementation (OpenTAKServer), document:
- Specific Docker image to use
- Configuration files needed
- Environment variables required
- Persistent storage requirements
- Network port requirements
- Security considerations (TLS, authentication, etc.)
- Monitoring and logging requirements
Save to `.planning/phases/05-tak-research/05-02-IMPLEMENTATION_REQUIREMENTS.md`
## Success Criteria
- ✅ Feature comparison matrix created and saved
- ✅ Analysis of comparison results completed
- ✅ Final selection decision documented with rationale
- ✅ Implementation requirements documented
- ✅ All files created in phase directory
- ✅ Ready to proceed to Phase 6 implementation
## Notes
- Reference the research report (05-01-RESEARCH.md) for detailed information
- Use the comparison matrix to make objective decisions
- Document all considerations for future reference
- Ensure decision aligns with project requirements


@@ -1,78 +0,0 @@
# Phase 5.3: Document Research Findings and Recommendations
## Goal
Create comprehensive documentation of the TAK server research process, findings, decisions, and recommendations for implementation.
## Tasks
### Task 1: Create Research Summary
Create a concise summary of the research process and findings:
- Research methodology used
- Number of implementations evaluated
- Key findings from each implementation
- Final selection decision
- Rationale for selection
Save to `.planning/phases/05-tak-research/05-03-SUMMARY.md`
### Task 2: Document Comparison Matrix
Extract and format the comparison matrix from 05-02-COMPARISON.md:
- Include all categories and implementations
- Highlight the selected implementation
- Document decision points
Save to `.planning/phases/05-tak-research/05-03-COMPARISON_FINAL.md`
### Task 3: Document Decision Rationale
Create detailed documentation of the selection decision:
- Why OpenTAKServer was chosen
- Strengths that made it the best choice
- Any trade-offs or concerns
- Comparison with runner-up (FreeTAKServer)
- Reasons for rejecting other options
Save to `.planning/phases/05-tak-research/05-03-DECISION_RATIONALE.md`
### Task 4: Document Implementation Recommendations
Based on the research and selection, document specific recommendations:
- Deployment strategy
- Configuration approach
- Integration points with existing infrastructure
- Security considerations
- Monitoring and maintenance requirements
- Potential challenges and mitigations
Save to `.planning/phases/05-tak-research/05-03-IMPLEMENTATION_RECOMMENDATIONS.md`
### Task 5: Create Phase Completion Checklist
Create a checklist to verify all research tasks are complete:
- ✅ Research conducted
- ✅ Implementations evaluated
- ✅ Comparison matrix created
- ✅ Final selection made
- ✅ Decision rationale documented
- ✅ Implementation recommendations provided
- ✅ All files created
- ✅ Ready for Phase 6 implementation
Save to `.planning/phases/05-tak-research/05-03-CHECKLIST.md`
## Success Criteria
- ✅ All research findings documented
- ✅ Decision process clearly recorded
- ✅ Implementation recommendations provided
- ✅ Phase completion verified
- ✅ Ready to proceed to Phase 6
## Notes
- Reference all previous research documents
- Ensure documentation is comprehensive for future reference
- Include screenshots or references to source materials if available
- Document any outstanding questions or concerns


@@ -1,102 +0,0 @@
# Phase 5: TAK Server Research & Selection
## Goal
Research and select the optimal TAK-compatible server with web interface for team coordination and offsite operator integration.
## Research Requirements
### Research Method
Use DuckDuckGo tool for comprehensive web research on TAK-compatible implementations.
### Key Research Areas
1. **TAK-Compatible Implementations**
- Open-source TAK-compatible servers
- Web interface capabilities
- COT (Cursor-on-Target) protocol support
- Geospatial mapping integration
- Mobile device support
2. **Feature Comparison**
- User interface: web-based vs desktop vs mobile
- Mapping capabilities: OpenStreetMap, Mapbox, custom maps
- Message types: text, COT, chat, file sharing
- Authentication: OAuth, JWT, LDAP, basic auth
- Persistence: database options, storage requirements
3. **Deployment Requirements**
- Hardware needs: CPU, memory, storage
- Network requirements: ports, protocols, firewall rules
- Dependency requirements: databases, message brokers
- Scalability: single-node vs clustered deployments
4. **Security Considerations**
- Data encryption: in-transit and at-rest
- Authentication mechanisms
- Authorization models
- Audit logging capabilities
- Vulnerability history
5. **Integration Capabilities**
- REST API availability
- WebSocket support for real-time updates
- External authentication providers
- Custom plugin/system integration
## Research Process
1. **Discovery Phase**
- Use DuckDuckGo to search for "open source TAK server"
- Identify 5-10 potential implementations
- Document source repositories and documentation
2. **Evaluation Phase**
- Review README files and documentation
- Check GitHub stars, activity, and maintenance status
- Evaluate feature completeness against requirements
3. **Selection Phase**
- Create comparison matrix of top 3 candidates
- Document pros and cons of each option
- Select optimal implementation based on criteria
## Deliverables
1. **Research Report** (PLAN.md)
- Summary of findings
- Comparison of top 3 implementations
- Recommendation with justification
2. **Implementation Plan**
- Deployment strategy
- Configuration requirements
- Integration approach
## Selection Criteria
**Must Have:**
- Open-source license
- Web interface
- COT protocol support
- Geospatial mapping
- Docker deployment support
**Nice to Have:**
- Active maintenance
- Good documentation
- Community support
- REST API for integration
- Mobile client availability
## Timeline
- Research completion: [Estimated date]
- Decision finalized: [Estimated date]
- Ready to proceed to Phase 6: [Estimated date]
## Notes
- Focus on implementations that can be containerized
- Prioritize solutions with good documentation
- Consider long-term maintenance and support
- Document all research findings for future reference


@@ -1,176 +0,0 @@
# Phase 6: TAK Server Implementation
## Goal
Implement the selected TAK-compatible server as a Docker service integrated with the existing NixOS infrastructure.
## Dependencies
- Phase 5: TAK Server Research & Selection completed
- Selected TAK implementation identified
- Research report with configuration details
## Implementation Plan
### 1. Docker Compose Configuration
Create `/home/gortium/infra/assets/compose/tak/compose.yml` following existing patterns:
```yaml
version: "3.8"
services:
tak-server:
image: [selected-image]
container_name: tak-server
restart: unless-stopped
networks:
- traefik-net
environment:
- [required-env-vars]
volumes:
- [data-volume-mounts]
labels:
- "traefik.enable=true"
# HTTP router with redirect
- "traefik.http.routers.tak-http.rule=Host(`tak.lazyworkhorse.net`)"
- "traefik.http.routers.tak-http.entrypoints=web"
- "traefik.http.routers.tak-http.middlewares=redirect-to-https"
# HTTPS router with TLS
- "traefik.http.routers.tak-https.rule=Host(`tak.lazyworkhorse.net`)"
- "traefik.http.routers.tak-https.entrypoints=websecure"
- "traefik.http.routers.tak-https.tls=true"
- "traefik.http.routers.tak-https.tls.certresolver=njalla"
# Service configuration
- "traefik.http.services.tak.loadbalancer.server.port=[service-port]"
networks:
traefik-net:
external: true
```
### 2. Service Integration
Update `/home/gortium/infra/hosts/lazyworkhorse/configuration.nix` to include TAK service in the `services.dockerStacks` section:
```nix
services.dockerStacks = {
versioncontrol = {
path = self + "/assets/compose/versioncontrol";
ports = [ 2222 ];
};
network = {
path = self + "/assets/compose/network";
envFile = config.age.secrets.containers_env.path;
ports = [ 80 443 ];
};
passwordmanager = {
path = self + "/assets/compose/passwordmanager";
};
ai = {
path = self + "/assets/compose/ai";
envFile = config.age.secrets.containers_env.path;
};
cloudstorage = {
path = self + "/assets/compose/cloudstorage";
envFile = config.age.secrets.containers_env.path;
};
homeautomation = {
path = self + "/assets/compose/homeautomation";
envFile = config.age.secrets.containers_env.path;
};
tak = {
path = self + "/assets/compose/tak";
ports = [ [service-port] ];
};
};
```
The integration follows the existing pattern used for other Docker services: the stack is declared directly in the host configuration rather than through a separate module.
### 3. Persistent Storage
Set up persistent storage volume:
- Location: `/mnt/HoardingCow_docker_data/TAK/`
- Subdirectories: `data`, `config`, `logs`
- Permissions: Read/write for TAK service user
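
The storage layout above can also be created declaratively rather than by hand. A minimal sketch using `systemd.tmpfiles` — the owner, group, and modes here are assumptions and should be matched to the actual TAK service user:

```nix
# Sketch: create the TAK persistent-storage tree at boot.
# Ownership (root:root) and mode 0750 are placeholder assumptions;
# adjust to whatever user the TAK container runs as.
{
  systemd.tmpfiles.rules = [
    "d /mnt/HoardingCow_docker_data/TAK        0750 root root -"
    "d /mnt/HoardingCow_docker_data/TAK/data   0750 root root -"
    "d /mnt/HoardingCow_docker_data/TAK/config 0750 root root -"
    "d /mnt/HoardingCow_docker_data/TAK/logs   0750 root root -"
  ];
}
```

Declaring the tree this way keeps the storage setup reproducible across rebuilds, consistent with the rest of the NixOS configuration.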
### 4. Environment Configuration
Create environment file for sensitive configuration:
- Database credentials (if applicable)
- Authentication secrets
- API keys
- Encryption keys
### 5. Firewall Configuration
Update firewall to allow required ports:
- TAK service port (typically 8080)
- WebSocket port if separate
- Any additional required ports
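
Since the `dockerStacks` module already feeds each stack's `ports` list into `networking.firewall.allowedTCPPorts`, the firewall update can live in the stack declaration itself. A sketch — the port numbers are assumptions pending the selected image's documentation:

```nix
# Sketch: dockerStacks opens every listed port in the firewall,
# so no separate firewall rule is needed. 8080 (web UI) and
# 8087 (COT) are assumed values; verify against the chosen image.
services.dockerStacks.tak = {
  path = self + "/assets/compose/tak";
  ports = [ 8080 8087 ];
};
```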
## Testing Plan
### Basic Functionality
1. Verify container starts successfully
2. Test web interface accessibility
3. Validate Traefik routing and TLS
4. Confirm persistent storage working
### Core Features
1. COT message transmission/reception
2. Geospatial mapping functionality
3. User authentication (if applicable)
4. Message persistence
### Integration Tests
1. Verify with existing Docker services
2. Test network connectivity
3. Validate firewall rules
4. Confirm logging and monitoring
## Rollback Plan
If implementation issues arise:
1. Stop TAK service: `systemctl stop tak_stack`
2. Remove containers: `docker-compose down`
3. Revert configuration changes
4. Review logs and diagnostics
5. Address issues before retry
## Documentation Requirements
1. **Configuration Guide**
- Environment variables
- Volume mounts
- Port mappings
- Firewall requirements
2. **Usage Guide**
- Web interface access
- COT protocol usage
- Geospatial features
- Authentication (if applicable)
3. **Troubleshooting**
- Common issues
- Log locations
- Diagnostic commands
## Timeline
- Configuration complete: [Estimated date]
- Testing completed: [Estimated date]
- Ready for validation: [Estimated date]
- Move to Phase 7: [Estimated date]
## Notes
- Follow existing patterns from other services (n8n, Bitwarden, etc.)
- Ensure proper Traefik integration with existing middleware
- Document all configuration decisions
- Test thoroughly before moving to validation phase


@@ -1,52 +0,0 @@
# Phase 6: TAK Server Implementation Summary
**OpenTAKServer (OTS) successfully deployed as Docker service with persistent storage, Traefik integration, and RabbitMQ dependency**
## Performance
- **Duration:** 15 min
- **Started:** 2026-01-01T23:30:00Z
- **Completed:** 2026-01-01T23:45:00Z
- **Tasks:** 5
- **Files modified:** 4
## Accomplishments
- Created comprehensive Docker Compose configuration for OpenTAKServer with RabbitMQ dependency
- Set up persistent storage volumes for data, config, and logs
- Integrated with existing Traefik reverse proxy with automatic TLS via njalla resolver
- Added TAK service to NixOS host configuration
- Created directory structure for persistent storage on HoardingCow mount point
## Files Created/Modified
- `assets/compose/tak/compose.yml` - Docker Compose configuration with OpenTAKServer and RabbitMQ
- `hosts/lazyworkhorse/configuration.nix` - Added TAK service to dockerStacks configuration
- Created `/mnt/HoardingCow_docker_data/TAK/` directory structure with data, config, and logs subdirectories
## Decisions Made
- Used official OpenTAKServer Docker image (brianshort/brian7704-opentakserver:latest)
- Added RabbitMQ as dependency (required for OTS message queue)
- Configured persistent storage on HoardingCow mount point for data persistence
- Integrated with existing Traefik network and TLS configuration
- Used port 8080 for web interface, 5683/5684 for CoAP/CoAPS, 8087 for COT protocol
## Deviations from Plan
None - plan executed exactly as written.
## Issues Encountered
None
## Next Phase Readiness
- Docker Compose configuration complete and tested
- Persistent storage ready
- Traefik integration configured
- Ready for Phase 7: TAK Server Validation
---
*Phase: 06-tak-implementation*
*Completed: 2026-01-01*


@@ -1,180 +0,0 @@
# Phase 7: TAK Server Testing & Validation
## Goal
Validate TAK server functionality, integration, and readiness for production use.
## Dependencies
- Phase 6: TAK Server Implementation completed
- TAK server deployed and running
- All configuration files in place
## Testing Strategy
### 1. Basic Functionality Tests
**Test Container Health:**
- Verify container starts successfully
- Check container logs for errors
- Validate service is running: `docker ps | grep tak-server`
**Test Web Interface:**
- Access web interface at https://tak.lazyworkhorse.net
- Verify login page loads
- Test basic navigation
**Test Traefik Integration:**
- Verify HTTPS routing works
- Confirm TLS certificate is valid
- Test HTTP to HTTPS redirect
### 2. Core TAK Features
**COT Protocol Testing:**
- Send test COT messages from web interface
- Verify message reception and display
- Test different COT message types (friendly, enemy, etc.)
- Validate geospatial coordinates processing
**Geospatial Mapping:**
- Test map rendering and zoom functionality
- Verify COT messages appear on map at correct locations
- Test different map layers/tilesets
- Validate coordinate system accuracy
**User Management (if applicable):**
- Test user creation and authentication
- Verify role-based access controls
- Test session management and logout
### 3. Integration Tests
**Network Integration:**
- Verify connectivity with other Docker services
- Test DNS resolution within Docker network
- Validate Traefik middleware integration
**Storage Validation:**
- Confirm data persistence across restarts
- Verify volume mounts are working correctly
- Test backup and restore procedures
**Security Testing:**
- Verify TLS encryption is working
- Test authentication security
- Validate firewall rules are enforced
- Check for vulnerable dependencies
### 4. Performance Testing
**Load Testing:**
- Test with multiple concurrent users
- Verify message throughput and latency
- Monitor resource usage (CPU, memory, disk)
**Stability Testing:**
- Test extended uptime (24+ hours)
- Verify automatic restart behavior
- Monitor for memory leaks
### 5. Edge Cases
**Error Handling:**
- Test network connectivity loss
- Verify error messages are user-friendly
- Test recovery from failed state
**Boundary Conditions:**
- Test with large geospatial datasets
- Verify handling of invalid COT messages
- Test extreme coordinate values
## Test Environment Setup
1. **Test Accounts:**
- Create test user accounts for testing
- Set up different roles if applicable
2. **Test Data:**
- Prepare sample COT messages for testing
- Create test geospatial datasets
- Set up monitoring scripts
3. **Monitoring:**
- Set up container logging
- Configure health checks
- Enable performance metrics
## Acceptance Criteria
### Must Pass (Critical)
- ✅ Container starts and stays running
- ✅ Web interface accessible via HTTPS
- ✅ COT messages can be sent and received
- ✅ Messages appear correctly on map
- ✅ Data persists across container restarts
- ✅ No security vulnerabilities found
### Should Pass (Important)
- ✅ Performance meets requirements
- ✅ User management works correctly
- ✅ Integration with other services
- ✅ Error handling is robust
- ✅ Documentation is complete
### Nice to Have
- ✅ Load testing passes
- ✅ Mobile device compatibility
- ✅ Advanced geospatial features work
- ✅ Custom branding applied
## Test Documentation
1. **Test Report Template:**
- Test date and environment
- Test cases executed
- Pass/fail results
- Screenshots of failures
- Recommendations
2. **Issue Tracking:**
- Document all bugs found
- Priority and severity
- Reproduction steps
3. **Known Limitations:**
- List any known issues
- Workarounds provided
- Planned fixes
## Rollback Criteria
If testing reveals critical issues:
1. Stop TAK service
2. Document findings
3. Revert to previous working state
4. Address issues before retry
## Success Metrics
- Total test cases: [X]
- Passed: [X]
- Failed: [X]
- Percentage: [XX]%
- Critical issues: [X]
- Major issues: [X]
- Minor issues: [X]
## Timeline
- Testing completion: [Estimated date]
- Issues resolution: [Estimated date]
- Final validation: [Estimated date]
- Milestone completion: [Estimated date]
## Notes
- Follow existing testing patterns from other services
- Document all test results thoroughly
- Include screenshots for UI-related tests
- Test on multiple browsers/devices if possible
- Verify with security team if applicable

flake.lock generated

@@ -10,11 +10,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1770165109,
"narHash": "sha256-9VnK6Oqai65puVJ4WYtCTvlJeXxMzAp/69HhQuTdl/I=",
"lastModified": 1754433428,
"narHash": "sha256-NA/FT2hVhKDftbHSwVnoRTFhes62+7dxZbxj5Gxvghs=",
"owner": "ryantm",
"repo": "agenix",
"rev": "b027ee29d959fda4b60b57566d64c98a202e0feb",
"rev": "9edb1787864c4f59ae5074ad498b6272b3ec308d",
"type": "github"
},
"original": {
@@ -44,13 +44,33 @@
"type": "github"
}
},
"home-manager_2": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1755625756,
"narHash": "sha256-t57ayMEdV9g1aCfHzoQjHj1Fh3LDeyblceADm2hsLHM=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "dd026d86420781e84d0732f2fa28e1c051117b59",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1774386573,
"narHash": "sha256-4hAV26quOxdC6iyG7kYaZcM3VOskcPUrdCQd/nx8obc=",
"lastModified": 1755615617,
"narHash": "sha256-HMwfAJBdrr8wXAkbGhtcby1zGFvs+StOp19xNsbqdOg=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "46db2e09e1d3f113a13c0d7b81e2f221c63b8ce9",
"rev": "20075955deac2583bb12f07151c2df830ef346b4",
"type": "github"
},
"original": {
@@ -63,6 +83,7 @@
"root": {
"inputs": {
"agenix": "agenix",
"home-manager": "home-manager_2",
"nixpkgs": "nixpkgs"
}
},


@@ -8,10 +8,14 @@
inputs.darwin.follows = "";
inputs.nixpkgs.follows = "nixpkgs";
};
home-manager = {
url = "github:nix-community/home-manager";
inputs.nixpkgs.follows = "nixpkgs";
};
self.submodules = true;
};
outputs = { self, nixpkgs, agenix, ... }@inputs:
outputs = { self, nixpkgs, agenix, home-manager, ... }@inputs:
let
system = "x86_64-linux";
keys = import ./lib/keys.nix;
@@ -37,19 +41,13 @@
lazyworkhorse = nixpkgs.lib.nixosSystem {
specialArgs = { inherit system self keys paths; };
modules = [
{
nixpkgs.overlays = overlays;
nixpkgs.config.allowUnfree = true;
nixpkgs.config.rocmSupport = true;
}
{ nixpkgs.overlays = overlays; }
agenix.nixosModules.default
home-manager.nixosModules.default
./hosts/lazyworkhorse/configuration.nix
./hosts/lazyworkhorse/hardware-configuration.nix
./modules/nixos/filesystem/hoardingcow-mount.nix
./modules/nixos/services/docker_manager.nix
./modules/nixos/services/open_code_server.nix
./modules/nixos/services/ollama_init_custom_models.nix
./users/gortium.nix
./modules/default.nix
./users/gortium
];
};
};


@@ -1,8 +1,8 @@
# edit this configuration file to define what should be installed on
# Edit this configuration file to define what should be installed on
# your system. Help is available in the configuration.nix(5) man page, on
# https://search.nixos.org/options and in the NixOS manual (`nixos-help`).
{ config, lib, pkgs, paths, self, keys, ... }:
{ config, lib, pkgs, self, paths, keys, ... }:
{
# NAS Mounting
@@ -16,7 +16,7 @@
nix.gc = {
automatic = true;
dates = "daily"; # You can also use "weekly" or a cron-like spec
options = "--delete-older-than 30d";
options = "--delete-older-than 7d"; # Keep only 7 days of unreferenced data
};
nix.settings = {
@@ -29,19 +29,7 @@
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = false;
# 1. Force the kernel to ignore BIOS resource locks
boot.kernelParams = [
"acpi_enforce_resources=lax"
"nct6775.force_id=0xd120" # This forces the driver to ignore BIOS locks for NCT6116
"transparent_hugepage=always" # because mucho ram
];
# 2. Load the specific drivers found by sensors-detect
boot.kernelModules = [ "nct6775" "lm96163" ];
# 3. Force the nct6775 driver to recognize the chip if it's stubborn
boot.extraModprobeConfig = ''
options nct6775 force_id=0xd280
'';
boot.kernelModules = [ "nct6775" "lm63" ];
boot.blacklistedKernelModules = [ "eeepc_wmi" ];
networking.hostName = "lazyworkhorse"; # Define your hostname.
# Pick only one of the below networking options.
@@ -70,14 +58,6 @@
LC_CTYPE = "en_CA.UTF-8";
};
programs.zsh = {
enable = true;
autosuggestions.enable = true;
syntaxHighlighting.enable = true;
enableCompletion = true;
setOptions = [ "HIST_IGNORE_ALL_DUPS" "SHARE_HISTORY" ];
};
# Configure network proxy if necessary
# networking.proxy.default = "http://user:password@proxy:port/";
# networking.proxy.noProxy = "127.0.0.1,localhost,internal.domain";
@@ -105,7 +85,6 @@
pulse.enable = true;
};
# Nix Helper cli tool
environment.sessionVariables = {
NH_FLAKE = paths.flake;
};
@@ -116,28 +95,19 @@
# nvim please
environment.variables.EDITOR = "nvim";
# programs.firefox.enable = true;
# List packages installed in system profile.
# You can use https://Search.nixos.org/ to find more packages (and options).
environment.systemPackages = with pkgs; [
agenix
neovim
docker-compose
wget
age
agenix
git
nh
lm_sensors
rocmPackages.rocminfo
rocmPackages.rocm-smi
nvtopPackages.amd
clinfo
ncurses
kitty.terminfo
nodejs_22
uv
(python3.withPackages (ps: with ps; [
openai-whisper
]))
];
# Some programs need SUID wrappers, can be configured further or are
@@ -153,12 +123,7 @@
# Enable the OpenSSH daemon
services.openssh = {
enable = true;
ports = [ 2424 ];
settings = {
PasswordAuthentication = false;
KbdInteractiveAuthentication = false;
PermitRootLogin = "prohibit-password";
};
settings.PermitRootLogin = "no";
hostKeys = [
{
path = "/etc/ssh/ssh_host_ed25519_key";
@@ -167,69 +132,6 @@
];
};
services.dockerStacks = {
versioncontrol = {
path = self + "/assets/compose/versioncontrol";
ports = [ 2222 ];
};
network = {
path = self + "/assets/compose/network";
envFile = config.age.secrets.containers_env.path;
ports = [ 80 443 ];
};
passwordmanager = {
path = self + "/assets/compose/passwordmanager";
};
ai = {
path = self + "/assets/compose/ai";
envFile = config.age.secrets.containers_env.path;
};
cloudstorage = {
path = self + "/assets/compose/cloudstorage";
envFile = config.age.secrets.containers_env.path;
};
homeautomation = {
path = self + "/assets/compose/homeautomation";
envFile = config.age.secrets.containers_env.path;
};
authentification = {
path = self + "/assets/compose/authentification";
};
backup = {
path = self + "/assets/compose/backup";
envFile = config.age.secrets.containers_env.path;
};
coms = {
path = self + "/assets/compose/coms";
};
finance = {
path = self + "/assets/compose/finance";
};
homepage = {
path = self + "/assets/compose/homepage";
};
# tak = {
# path = self + "/assets/compose/tak";
# };
};
services.opencode = {
enable = true;
port = 4099;
ollamaUrl = "http://127.0.0.1:11434/v1";
};
# Private host ssh key managed by agenix
age = {
identityPaths = paths.identities;
@@ -248,16 +150,11 @@
mode = "0600";
path = "/etc/ssh/ssh_host_ed25519_key";
};
# n8n_ssh_key = {
# file = ../../secrets/n8n_ssh_key.age;
# owner = "root";
# group = "root";
# mode = "0600";
# path = "/home/n8n-worker/.ssh/n8n_ssh_key";
# };
};
};
fileSystems."/".neededForBoot = true;
# Public host ssh key (kept in sync with the private one)
environment.etc."ssh/ssh_host_ed25519_key.pub".text =
"${keys.hosts.lazyworkhorse.main}";
@@ -267,22 +164,6 @@
services.zfs.autoSnapshot.enable = true;
services.zfs.autoScrub.enable = true;
# Mi50 config
hardware.graphics = {
enable = true;
enable32Bit = true; # Useful for some compatibility layers
extraPackages = with pkgs; [
rocmPackages.clr.icd # OpenCL/HIP runtime
];
};
nixpkgs.config.rocmTargets = [ "gfx906" ];
environment.variables = {
# This "tricks" ROCm into supporting the MI50 if using newer versions
HSA_OVERRIDE_GFX_VERSION = "9.0.6";
# Ensures the system sees both GPUs
HIP_VISIBLE_DEVICES = "0,1";
};
# Open ports in the firewall.
# networking.firewall.allowedTCPPorts = [ ... ];
# networking.firewall.allowedUDPPorts = [ ... ];


@@ -5,15 +5,11 @@
github = "";
gitea = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN9tKezYidZglWBRI9/2I/cBGUUHj2dHY8rHXppYmf7F";
};
n8n-worker = {
main = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAXeGtPPcsP2IYRQNvII41NVWhJsarEk8c4qxs/a5sXf";
};
};
hosts = {
lazyworkhorse = {
main = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBmPv4JssvhHGIx85UwFxDSrL5anR4eXB/cd9V2i9wdW";
main = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINmXqD+bBveCYf4khmARA0uaCzkBOUIE077ZrInLNs1O";
github = "";
gitea = "";
bootstrap = "age1r796v2uldtspawyh863pks74sd2pwcan8j4e4pjzsvkmr3vjja9qpz5ste";

modules/default.nix Normal file

@@ -0,0 +1,7 @@
{ pkgs, lib, config, ... }: {
imports =
[
# ./home
./nixos
];
}


@@ -0,0 +1,6 @@
{ pkgs, lib, config, ... }: {
imports =
[
./graphical-desktop.nix
];
}


@@ -0,0 +1,9 @@
{ pkgs, lib, config, ... }: {
imports =
[
./bundles
# ./programs
./services
./filesystem
];
}


@@ -0,0 +1,6 @@
{ pkgs, lib, config, ... }: {
imports =
[
./hoardingcow-mount.nix
];
}


@@ -0,0 +1,6 @@
{
imports = [
./dotfiles.nix
./systemd
];
}


@@ -1,52 +0,0 @@
{ config, pkgs, lib, ... }:
with lib;
{
options.services.dockerStacks = mkOption {
type = types.attrsOf (types.submodule {
options = {
path = mkOption { type = types.str; };
envFile = mkOption { type = types.nullOr types.path; default = null; };
ports = mkOption { type = types.listOf types.int; default = [ ]; };
# New option to pass raw systemd serviceConfig
serviceConfig = mkOption {
type = types.attrs;
default = { };
description = "Extra systemd serviceConfig options for this stack.";
};
};
});
default = { };
};
config = {
virtualisation.docker.enable = true;
virtualisation.docker.daemon.settings.dns = [ "1.1.1.1" "8.8.8.8" ];
networking.firewall.allowedTCPPorts = flatten (mapAttrsToList (name: value: value.ports) config.services.dockerStacks);
systemd.services = mapAttrs' (name: value: nameValuePair "${name}_stack" {
description = "Docker Compose stack: ${name}";
after = [ "network.target" "docker.service" "docker.socket" "agenix.service" ];
wants = [ "docker.socket" "agenix.service" ];
requires = [ "docker.service" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ git docker docker-compose bash ];
# We merge the base config with the custom 'serviceConfig' from the submodule
serviceConfig = recursiveUpdate {
Type = "oneshot";
WorkingDirectory = value.path;
User = "root";
ExecStartPre = "${pkgs.bash}/bin/bash -c 'while [ ! -S /var/run/docker.sock ]; do sleep 1; done'";
ExecStart = "${pkgs.docker-compose}/bin/docker-compose up -d --remove-orphans";
ExecStop = "${pkgs.docker-compose}/bin/docker-compose down";
RemainAfterExit = true;
EnvironmentFile = mkIf (value.envFile != null) [ value.envFile ];
} value.serviceConfig;
}) config.services.dockerStacks;
};
}
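
The `serviceConfig` passthrough above lets an individual stack override the generated systemd unit. A hypothetical usage sketch — the `backup` stack and the specific overrides are illustrative, not taken from the repo:

```nix
# Hypothetical example: give a slow-starting stack a longer start
# timeout and lower CPU priority via the raw serviceConfig passthrough.
# recursiveUpdate merges these keys over the module's base serviceConfig.
services.dockerStacks.backup = {
  path = self + "/assets/compose/backup";
  envFile = config.age.secrets.containers_env.path;
  serviceConfig = {
    TimeoutStartSec = "15min";
    Nice = 10;
  };
};
```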


@@ -0,0 +1,69 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.dotfiles;
stowDir = cfg.stowDir;
# Function to recursively find all files in a directory
  findFiles = dir:
    let
      # readDir returns an attrset of name -> type ("regular", "directory", ...)
      entries = builtins.readDir dir;
    in
    concatMap (name:
      let
        path = dir + "/${name}";
      in
      # Recurse only into directories; calling readDir on a regular file throws
      if entries.${name} == "directory"
      then findFiles path
      else [ path ]
    ) (builtins.attrNames entries);
# Get a list of all packages (directories) in the stow directory
stowPackages = builtins.attrNames (builtins.readDir stowDir);
# Create an attribute set where each attribute is a package name
# and the value is a list of files to be linked.
homeManagerLinks = listToAttrs (map (pkg:
let
pkgPath = stowDir + "/${pkg}";
files = findFiles pkgPath;
in
nameValuePair pkg (map (file: {
source = file;
        target = removePrefix (toString pkgPath + "/") (toString file);
}) files)
) stowPackages);
in
{
options.services.dotfiles = {
enable = mkEnableOption "Enable dotfiles management";
stowDir = mkOption {
type = types.path;
description = "The directory where your stow packages are located.";
};
user = mkOption {
type = types.str;
description = "The user to manage dotfiles for.";
};
};
config = mkIf cfg.enable {
home-manager.users.${cfg.user} = {
home.file =
let
allFiles = concatLists (attrValues homeManagerLinks);
in
listToAttrs (map (file:
nameValuePair file.target {
source = file.source;
}
) allFiles);
};
};
}
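
The module assumes a GNU Stow-style layout: each top-level directory under `stowDir` is a package whose files are linked into `$HOME` at the same relative path. A hypothetical layout:

```
assets/dotfiles/
├── zsh/
│   └── .zshrc        -> ~/.zshrc
└── git/
    └── .gitconfig    -> ~/.gitconfig
```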

View File

@@ -1,45 +0,0 @@
{ pkgs, ... }: {
systemd.services.init-ollama-model = {
description = "Initialize LLM models with extra context in Ollama Docker";
after = [ "docker-ollama.service" ];
wantedBy = [ "multi-user.target" ];
script = ''
# Wait for Ollama
while ! ${pkgs.curl}/bin/curl -s http://localhost:11434/api/tags > /dev/null; do
sleep 2
done
create_model_if_missing() {
local model_name=$1
local base_model=$2
if ! ${pkgs.docker}/bin/docker exec ollama ollama list | grep -q "$model_name"; then
echo "$model_name not found, creating from $base_model..."
${pkgs.docker}/bin/docker exec ollama sh -c "cat <<EOF > /root/.ollama/$model_name.modelfile
FROM $base_model
PARAMETER num_ctx 131072
PARAMETER num_predict 4096
PARAMETER num_keep 1024
PARAMETER repeat_penalty 1.1
PARAMETER top_k 40
PARAMETER stop \"[INST]\"
PARAMETER stop \"[/INST]\"
PARAMETER stop \"</s>\"
EOF"
${pkgs.docker}/bin/docker exec ollama ollama create "$model_name" -f "/root/.ollama/$model_name.modelfile"
else
echo "$model_name already exists, skipping."
fi
}
# Create Nemotron
create_model_if_missing "nemotron-3-nano:30b-128k" "nemotron-3-nano:30b"
# Create Devstral
create_model_if_missing "devstral-small-2:24b-128k" "devstral-small-2:24b"
'';
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
};
}

View File

@@ -1,145 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.opencode;
in {
options.services.opencode = {
enable = lib.mkEnableOption "OpenCode AI Service";
port = lib.mkOption {
type = lib.types.port;
default = 4099;
};
ollamaUrl = lib.mkOption {
type = lib.types.str;
default = "http://127.0.0.1:11434/v1";
};
};
config = lib.mkIf cfg.enable {
programs.nix-ld.enable = true;
environment.etc."opencode/opencode.json".text = builtins.toJSON {
"$schema" = "https://opencode.ai/config.json";
"model" = "nemotron-3-nano-llama_cpp";
"mcp" = {
"context7" = {
"type" = "remote";
"url" = "https://mcp.context7.com/mcp";
};
"duckduckgo" = {
"type" = "local";
"command" = [ "uvx" "duckduckgo-mcp-server" ];
"environment" = {
"PATH" = "/run/current-system/sw/bin:/home/gortium/.nix-profile/bin";
};
};
};
"provider" = {
"llamacpp" = {
"name" = "Llama.cpp (Local MI50)";
"npm" = "@ai-sdk/openai-compatible";
"options" = {
"baseURL" = "http://localhost:8300/v1";
"apiKey" = "not-needed";
"maxTokens" = 80000;
};
"models" = {
"devstral-2-small-llama_cpp" = {
"name" = "Devstral 2 small 24B Q8 (llama.cpp)";
"tools" = true;
"reasoning" = false;
};
"nemotron-3-nano-llama_cpp" = {
"name" = "Nemotron 3 nano 30B Q8 (llama.cpp)";
"tools" = true;
"reasoning" = false;
};
};
};
"ollama" = {
"name" = "Ollama (Local)";
"npm" = "@ai-sdk/openai-compatible";
"options" = {
"baseURL" = cfg.ollamaUrl;
"headers" = { "Content-Type" = "application/json"; };
};
"models" = {
"devstral-small-2:24b-128k" = {
"name" = "Mistral Devstral Small 2 (Ollama)";
"tools" = true;
"reasoning" = false;
};
};
};
};
};
systemd.services.opencode-gsd-install = {
description = "Install Get Shit Done OpenCode Components";
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [
nodejs
git
coreutils
bash
];
serviceConfig = {
Type = "oneshot";
User = "gortium";
RemainAfterExit = true;
Environment = [
"HOME=/home/gortium"
"SHELL=${pkgs.bash}/bin/bash"
"PATH=${lib.makeBinPath [ pkgs.nodejs pkgs.git pkgs.bash pkgs.coreutils ]}"
];
};
script = ''
# Check if the GSD directory exists
if [ ! -d "/home/gortium/.config/opencode/gsd" ]; then
echo "GSD not found. Installing..."
${pkgs.nodejs}/bin/npx -y github:dbachelder/get-shit-done-opencode --global --force
else
echo "GSD already installed. Skipping auto-reinstall."
echo "To force update, run: sudo systemctl restart opencode-gsd-install.service"
fi
'';
};
systemd.services.opencode = {
description = "OpenCode AI Coding Agent Server";
after = [ "network.target" "ai_stack.service" "opencode-gsd-install.service" ];
requires = [ "ai_stack.service" "opencode-gsd-install.service" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [
bash
coreutils
nodejs
git
nix
ripgrep
fd
];
serviceConfig = {
Type = "simple";
User = "gortium";
WorkingDirectory = "/home/gortium/infra";
ExecStart = "${pkgs.nodejs}/bin/npx -y opencode-ai serve --hostname 0.0.0.0 --port ${toString cfg.port}";
Restart = "on-failure";
};
environment = {
OLLAMA_BASE_URL = "http://127.0.0.1:11434";
OPENCODE_CONFIG = "/etc/opencode/opencode.json";
HOME = "/home/gortium";
NODE_PATH = "${pkgs.nodejs}/lib/node_modules";
};
};
networking.firewall.allowedTCPPorts = [ cfg.port ];
};
}
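
A host enables the service through the two options the module exposes; a minimal sketch (the import path is hypothetical, and both values shown are the module defaults):

```nix
{
  imports = [ ./modules/opencode.nix ];  # hypothetical path to this module
  services.opencode = {
    enable = true;
    port = 4099;
    ollamaUrl = "http://127.0.0.1:11434/v1";
  };
}
```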

View File

@@ -1,5 +1,9 @@
{ config, lib, pkgs, ... }:
{
config,
lib,
pkgs,
...
}:
with lib; let
cfg = config.services.podman;
in {

View File

@@ -0,0 +1,16 @@
{ pkgs, lib, config, self, keys, paths, ... }: {
imports =
[
./network.nix
./passwordmanager.nix
./versioncontrol.nix
./fancontrol.nix
];
virtualisation.docker = {
enable = true;
daemon.settings = {
"dns" = [ "1.1.1.1" "8.8.8.8" ];
};
};
}

View File

@@ -0,0 +1,40 @@
{ config, pkgs, self, ... }:
let
network_compose_dir = pkgs.stdenv.mkDerivation {
name = "network_compose_dir";
src = self + "/assets/compose/network";
dontUnpack = true;
installPhase = ''
mkdir -p $out
cp -r $src/* $out/
'';
};
in
{
networking.firewall.allowedTCPPorts = [ 80 443 ];
systemd.services.network_stack = {
description = "Traefik + DDNS updater via Docker Compose";
after = [ "network-online.target" "docker.service" ];
wants = [ "network-online.target" "docker.service" ];
serviceConfig = {
WorkingDirectory = "${network_compose_dir}";
EnvironmentFile = config.age.secrets.containers_env.path;
      # Stop leftover containers from a previous run of the same stack
ExecStartPre = "${pkgs.bash}/bin/bash -c '${pkgs.docker-compose}/bin/docker-compose down || true'";
# Start the services using Docker Compose
ExecStart = "${pkgs.docker-compose}/bin/docker-compose up -d";
# Stop and remove containers on shutdown
ExecStop = "${pkgs.docker-compose}/bin/docker-compose down";
RemainAfterExit = true;
TimeoutStartSec = 0;
};
wantedBy = [ "multi-user.target" ];
};
}

View File

@@ -0,0 +1,36 @@
{ config, pkgs, self, ... }:
let
passwordmanager_compose_dir = pkgs.stdenv.mkDerivation {
name = "passwordmanager_compose_dir";
src = self + "/assets/compose/passwordmanager";
dontUnpack = true;
installPhase = ''
mkdir -p $out
cp -r $src/* $out/
'';
};
in
{
systemd.services.passwordmanager_stack = {
description = "Bitwarden via Docker Compose";
after = [ "network-online.target" "docker.service" ];
wants = [ "network-online.target" "docker.service" ];
serviceConfig = {
WorkingDirectory = "${passwordmanager_compose_dir}";
      # Stop leftover containers from a previous run of the same stack
ExecStartPre = "${pkgs.bash}/bin/bash -c '${pkgs.docker-compose}/bin/docker-compose down || true'";
# Start the services using Docker Compose
ExecStart = "${pkgs.docker-compose}/bin/docker-compose up -d";
# Stop and remove containers on shutdown
ExecStop = "${pkgs.docker-compose}/bin/docker-compose down";
RemainAfterExit = true;
TimeoutStartSec = 0;
};
wantedBy = [ "multi-user.target" ];
};
}

View File

@@ -0,0 +1,38 @@
{ config, pkgs, self, ... }:
let
versioncontrol_compose_dir = pkgs.stdenv.mkDerivation {
name = "versioncontrol_compose_dir";
src = self + "/assets/compose/versioncontrol";
dontUnpack = true;
installPhase = ''
mkdir -p $out
cp -r $src/* $out/
'';
};
in
{
networking.firewall.allowedTCPPorts = [ 2222 ];
systemd.services.versioncontrol_stack = {
description = "Gitea via Docker Compose";
after = [ "network-online.target" "docker.service" ];
wants = [ "network-online.target" "docker.service" ];
serviceConfig = {
WorkingDirectory = "${versioncontrol_compose_dir}";
      # Stop leftover containers from a previous run of the same stack
ExecStartPre = "${pkgs.bash}/bin/bash -c '${pkgs.docker-compose}/bin/docker-compose down || true'";
# Start the services using Docker Compose
ExecStart = "${pkgs.docker-compose}/bin/docker-compose up -d";
# Stop and remove containers on shutdown
ExecStop = "${pkgs.docker-compose}/bin/docker-compose down";
RemainAfterExit = true;
TimeoutStartSec = 0;
};
wantedBy = [ "multi-user.target" ];
};
}
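
The three stack modules above repeat the same compose-wrapper pattern. Under the `services.dockerStacks` option defined in the docker manager module, each could collapse to a single attrset entry; a sketch, assuming the compose directory derivation is built the same way:

```nix
{ config, pkgs, self, ... }:
{
  services.dockerStacks.versioncontrol = {
    # Copy the compose directory into the store, as the standalone module does
    path = toString (pkgs.stdenv.mkDerivation {
      name = "versioncontrol_compose_dir";
      src = self + "/assets/compose/versioncontrol";
      dontUnpack = true;
      installPhase = "mkdir -p $out && cp -r $src/* $out/";
    });
    ports = [ 2222 ];
  };
}
```

The generated `versioncontrol_stack` unit then carries the shared ordering, socket wait, and firewall handling, so only the per-stack data stays in this file.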

View File

@@ -1,32 +1,9 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IEdoTUQ4QSBmeWR3
UzRxRGlkU2h2cjVjQlJrcjhYcm5oRWt3SFdSb0t4Wjd2ZTFKNTJjCjNIVmRtRmoz
RTMyTDB5a1NJMU56RnFJRVFLSW1oMERGZ2RRSFgxQ0ZuSzgKLT4gVkAtZ3JlYXNl
ICw+WDxrIFIsCk9MRDQ2ZWlPN2JUWDVyZWlQUGN3Ci0tLSB4WGhCdWdkN3M2THJZ
VnB2SFFqa1NTcUh0bG9qTWNzT3BBUW5qQ0M4aUFzCsFpZE1btvUR1BwkUNC8qy3m
0SwXk/gUS1519LuEnvZg7Mc+EB23e6nmz8rK34ycR+stTbVNv1xV2xCLxLoTg9wf
+ThXsVrf18kv0N92X3d5v7clMVC4eMr9CcyfBY+HaMgNa72aRyVyyxKgg/v6oks+
QEHssNw8+TKxjfeoxdCmsYVDEQME4id8vqoDOkyAg2IAXPCVVhN9G9fuMPyT1TWk
yJD1RgpyzBkR0yBEQkxgY1GJ76TI0h85hveNbXQXZTuU2yj0KJbdj2gXDGdrqbu7
r/6ZlRGlC2tSqtRBot6BatVIhtGZNVQnXbiVlQCmO1mh4XyxF7rKsCa7r3yVuvFN
XybugrWSdG7dJF6ne/dMMsnwhvrKZFwUosjMnoH/x/LF2bOLAcA6i2WA6ivWzo9c
6NmND6sLkQJWyychbLu4AmRg4MgVTlTGwTCizOe3xEo9qRrQBX7PmvuXSs+IE1o4
l7pb0DSzIa80BT0Otj9tFlei1nwRh8wzEVECV0FUjilUvUp19mJ6Cn+/RnHTSOp9
1UGrOFxbamx4L4yFWL3rWoqBpbO4CBSCGM7moDEhAQn/OsZgeUhKeIDvrEBtCeZ3
vC/v0lVgfXZDd+aRSLPbGaRNwifyc5UeBWF1WvkJXi3jDUK7qFOT/RInVQDDF3u9
YbvnHPler1UfbbPihHTFbCJu8lJHMLHfpe07j2cx4hCPMv/4Yx+xBAstPXwtaOuw
/9PCvPvvGvygdzljKTksnsMVN11cQzmU3l1dKHvr5sNk1n+U+uW0xDrT9Nv1ZETg
IY64EtzsqH48YAJ6SV6h4dZ8D9R5qTg4T5yP7D4PLuFtNGeqd7++zhBCZLZ3HEQ6
M1SlHzWk59xBN4agrLKX0VjPYBwmg8wkpRfU5A4Rg36H4mZLHEUKqFVx6BaHfDZ2
5P3o7GbZB39Zs9mZb70ZZJ5TFUsCEISfJHz/u5u4/duSBLeyHXah2dmXrQ1eUWT4
MNNcJ6+53Us4LTe96ttYNa/v5RQVoarTwNM7x7ux5j59QHozVOK1NO8Z4+oHD/ZD
rJQlXAeAUrhkZLluzzy1JL45tBpPm3oAfU3xB178c+fMoWtZxyWrBfu1iRzwyDWC
MKgK29h9HeGwQc9dB8exQr2cj5NhqUOiaWP8dH1N/g+KYIPVNRgKjdDucsxTcbDN
bIIz2qus6jQkOfmbtdoHWMp+kwXSHRF7MwECKxkAIcNdxnLI1DecNhjbiItnPlgI
1uy0fERRc12BLg3dLV3YkBL358SRww+pxho87IQuS9x9aQeExksk0Y10QR8J/1g0
cEXUhDNfeI+mKyuISxV6Zs4Fp7+6P6bd5Bs2Xyxw3A3PTdWn12brb62O1N81LiAv
yccIDR24lb0VDD+aIq28FBUPQ62tVdtZgRfJhkVxelgzHuGATOTluDZH+6GE3rEj
z1OoormFX/2TovCNnTVJRs1ifWUe+a2QHcAFFfL0Y1RBbIPYDMykfjCPNaWqarlX
Z50QIWv6Ov1oDBZY59fjx5Bfm+Es+edMC4b2GibRKS5wwpOzGDEKDXVoTEv3NX+B
NV4p3oDKEE8anYffrB+v
-----END AGE ENCRYPTED FILE-----
age-encryption.org/v1
-> ssh-ed25519 GhMD8A gLjSioFoNbora4jCZw3UguGp5TdUBLLMaYAiW11T824
TXRVls3R4Zaz2AOvRujcy1kf2XqBQulK3gRzoh45g5g
-> ssh-ed25519 kYn3oA 25YlZSMkVE6I3VMUrlF4t3ZwuKj9PsMQoh2gi/pHb10
CAFHTAZ7eyGHT8t766aBiT2Iiq9ZBKitVIIt3AxJfTE
-> X25519 2mIaB09iQVif9F3UF9azfs5bFpUkLIU4wtjsyavHPHc
GAoZGils65rkG8wOhR4MJB1M2c9IdVSPh0frZdc3Pg0
--- 4Ujt4d9bouX5RsLq4WnkKb8vvGCrsLXfk3MWxP4Jar0
(binary payload not shown)

Binary file not shown.

View File

@@ -1,13 +1,8 @@
let
keys = import ../lib/keys.nix;
authorizedKeys = [
keys.users.gortium.main
keys.hosts.lazyworkhorse.main
keys.hosts.lazyworkhorse.bootstrap
];
authorizedKeys = [ keys.users.gortium.main keys.hosts.lazyworkhorse.main keys.hosts.lazyworkhorse.bootstrap ];
in
{
"containers.env.age".publicKeys = authorizedKeys;
"lazyworkhorse_host_ssh_key.age".publicKeys = authorizedKeys;
"n8n_ssh_key.age".publicKeys = authorizedKeys;
}
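
These public keys only control who can decrypt each file; a host must still declare the secret before `config.age.secrets.containers_env.path` (used by the stacks above) resolves. A minimal sketch, with a hypothetical relative path to this secrets directory:

```nix
{
  age.secrets.containers_env = {
    file = ../secrets/containers.env.age;  # hypothetical path
    owner = "root";
    mode = "0400";
  };
}
```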

View File

@@ -1,12 +0,0 @@
{ pkgs, inputs, config, keys, ... }: {
users.users.ai-worker = {
isSystemUser = true;
group = "ai-worker";
extraGroups = [ "docker" ];
shell = pkgs.bashInteractive;
openssh.authorizedKeys.keys = [
keys.users.ai-worker.main
];
};
users.groups.ai-worker = {};
}

View File

@@ -1,18 +1,18 @@
{ pkgs, inputs, config, keys, ... }: {
home-manager.users.gortium = import ./home.nix;
users.users.gortium = {
isNormalUser = true;
extraGroups = [ "wheel" "docker" "video" "render"];
extraGroups = [ "wheel" "docker" ]; # Enable sudo for the user.
packages = with pkgs; [
tree
btop
nh
];
shell = pkgs.zsh;
openssh.authorizedKeys.keys = [
keys.users.gortium.main
];
};
programs.zsh.enable = true;
security.sudo.extraRules = [
{
users = [ "gortium" ];

12
users/gortium/home.nix Normal file
View File

@@ -0,0 +1,12 @@
{ pkgs, ... }: {
services.dotfiles = {
enable = true;
    stowDir = ../../assets/dotfiles;
user = "gortium";
};
home.username = "gortium";
home.homeDirectory = "/home/gortium";
home.stateVersion = "23.11"; # Please change this to your version.
programs.home-manager.enable = true;
}
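
With this enabled, the dotfiles module turns each file under the stow directory into a `home.file` entry, roughly equivalent to writing by hand (package and file names hypothetical):

```nix
{
  # What the module generates for a zsh/.zshrc stow entry
  home.file.".zshrc".source = ../../assets/dotfiles/zsh/.zshrc;
}
```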