first version of the knowledge base :)
70 - Tools/caddy/caddy.md (new file)
@@ -0,0 +1,63 @@
---
title: Caddy
description: Tool overview for Caddy as a web server and reverse proxy with automatic HTTPS
tags:
- caddy
- reverse-proxy
- web
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Caddy

## Summary

Caddy is a web server and reverse proxy known for automatic HTTPS and a simple configuration model. In self-hosted environments, it is often used as an easy-to-operate edge or internal reverse proxy for web applications.

## Why it matters

For many homelab and small infrastructure setups, Caddy offers a faster path to a secure reverse proxy than more manual alternatives. It is especially effective when a small team wants readable configuration and low TLS management overhead.

## Core concepts

- Caddyfile as the high-level configuration format
- Automatic HTTPS and certificate management
- `reverse_proxy` as the core upstream routing primitive
- Site blocks for host-based routing
- JSON configuration for advanced automation cases

## Practical usage

Caddy commonly fits into infrastructure as:

```text
Client -> Caddy -> upstream application
```

Typical uses:

- Terminating TLS for self-hosted apps
- Routing multiple hostnames to different backends
- Serving simple static sites alongside proxied services
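
These uses map to a very small Caddyfile. A minimal sketch, assuming hypothetical hostnames and an upstream app on port 8080 (names and paths are placeholders, not from this setup):

```text
app.example.com {
    reverse_proxy localhost:8080
}

static.example.com {
    root * /srv/static
    file_server
}
```

Each site block is host-based routing; Caddy obtains and renews certificates for both hostnames automatically, provided DNS points at the host and ports 80/443 are reachable.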

## Best practices

- Keep hostnames and upstream targets explicit
- Use Caddy as a shared ingress layer instead of publishing many app ports
- Back up Caddy configuration and persistent state if certificates or ACME state matter
- Keep external base URLs aligned with proxy behavior

## Pitfalls

- Assuming automatic HTTPS removes the need to understand DNS and port reachability
- Mixing public and private services without clear routing boundaries
- Forgetting that proxied apps may need forwarded header awareness
- Leaving Caddy state or config out of the backup plan

## References

- [Caddy documentation](https://caddyserver.com/docs/)
- [Caddy: `reverse_proxy` directive](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy)
- [Caddyfile concepts](https://caddyserver.com/docs/caddyfile/concepts)
70 - Tools/cloudflare/cloudflare-overview.md (new file)
@@ -0,0 +1,63 @@
---
title: Cloudflare
description: Tool overview for Cloudflare as a DNS, edge, and access platform in self-hosted environments
tags:
- cloudflare
- dns
- edge
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Cloudflare

## Summary

Cloudflare is an edge platform commonly used for DNS hosting, proxying, TLS, tunnels, and access control. In self-hosted environments, it is often the public-facing layer in front of privately managed infrastructure.

## Why it matters

Cloudflare can reduce operational burden for public DNS, certificates, and internet exposure. It becomes especially useful when services need a controlled edge while the underlying infrastructure remains private or partially private.

## Core concepts

- Authoritative DNS hosting
- Proxy mode for HTTP and selected proxied traffic
- Zero Trust and Access controls
- Tunnels for publishing services without opening inbound ports directly
- CDN and caching features for web workloads

## Practical usage

Cloudflare commonly fits into infrastructure like this:

```text
Client -> Cloudflare edge -> reverse proxy or tunnel -> application
```

Typical uses:

- Public DNS for domains and subdomains
- Cloudflare Tunnel for selected internal apps
- Access policies in front of sensitive web services
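
The tunnel path above is driven by a small ingress configuration. A minimal `cloudflared` config sketch, assuming a hypothetical tunnel ID, hostname, and an internal app on port 3000:

```text
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com
    service: http://localhost:3000
  - service: http_status:404
```

Ingress rules are matched top to bottom, and the final catch-all rule is required so unmatched hostnames fail closed rather than reaching an internal service.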

## Best practices

- Keep public DNS records documented and intentional
- Use tunnels or private access controls for admin-facing services when appropriate
- Understand which services are proxied and which are DNS-only
- Review TLS mode and origin certificate behavior carefully

## Pitfalls

- Assuming proxy mode works identically for every protocol
- Forgetting that Cloudflare becomes part of the trust and availability path
- Mixing internal admin services with public publishing defaults
- Losing track of which records are authoritative in Cloudflare versus internal DNS

## References

- [Cloudflare Docs](https://developers.cloudflare.com/)
- [Cloudflare Learning Center: What is DNS?](https://www.cloudflare.com/learning/dns/what-is-dns/)
- [Cloudflare Zero Trust documentation](https://developers.cloudflare.com/cloudflare-one/)
70 - Tools/docker/docker.md (new file)
@@ -0,0 +1,64 @@
---
title: Docker
description: Tool overview for Docker as a container runtime and packaging platform
tags:
- docker
- containers
- infrastructure
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Docker

## Summary

Docker is a container platform used to package and run applications with their dependencies in isolated environments. In self-hosted systems, it is often the default runtime for lightweight service deployment and reproducible application stacks.

## Why it matters

Docker reduces packaging inconsistency and makes service deployment more repeatable than hand-built application installs. It also provides a practical base for Compose-managed stacks in small to medium self-hosted environments.

## Core concepts

- Images and containers
- Registries as image distribution points
- Volumes for persistent data
- Networks for service connectivity
- Compose for multi-service application definitions

## Practical usage

Docker commonly fits into infrastructure as:

```text
Image registry -> Docker host -> containerized services -> reverse proxy or internal clients
```

Typical uses:

- Hosting web apps, dashboards, automation tools, and utility services
- Running small multi-container stacks with Compose
- Keeping application deployment separate from the base OS lifecycle
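
A minimal Compose sketch of such a stack, with a hypothetical service name and a pinned image tag (both placeholders):

```text
# docker-compose.yml
services:
  app:
    image: ghcr.io/example/app:1.4.2   # pinned tag, not :latest
    restart: unless-stopped
    volumes:
      - app-data:/var/lib/app          # named volume for persistent state
    networks:
      - internal

volumes:
  app-data:

networks:
  internal:
```

The named volume makes the stateful path explicit, and omitting `ports:` keeps the service reachable only through an attached reverse proxy or other containers on the `internal` network.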

## Best practices

- Pin images intentionally and update them through a reviewed process
- Use named volumes or clearly documented bind mounts for state
- Put multi-service stacks under Compose and version control
- Keep ingress and persistence boundaries explicit

## Pitfalls

- Treating containers as ephemeral while silently storing irreplaceable state inside them
- Publishing too many host ports directly
- Using `latest` everywhere without a maintenance workflow
- Running every unrelated workload inside one large Compose project

## References

- [Docker: Docker overview](https://docs.docker.com/get-started/docker-overview/)
- [Docker: Networking overview](https://docs.docker.com/engine/network/)
- [Docker: Volumes](https://docs.docker.com/engine/storage/volumes/)
- [Compose Specification](https://compose-spec.io/)
70 - Tools/gitea/gitea.md (new file)
@@ -0,0 +1,63 @@
---
title: Gitea
description: Tool overview for Gitea as a lightweight self-hosted Git forge
tags:
- gitea
- git
- self-hosting
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Gitea

## Summary

Gitea is a lightweight self-hosted Git forge that provides repositories, issues, pull requests, user and organization management, and optional automation features. It is commonly used as a self-hosted alternative to centralized Git hosting platforms.

## Why it matters

For self-hosted environments, Gitea offers source control and collaboration without the operational weight of larger enterprise platforms. It is often a good fit for homelabs, small teams, and private infrastructure repositories.

## Core concepts

- Repositories, organizations, and teams
- Authentication and user management
- Webhooks and integrations
- Actions or CI integrations depending on deployment model
- Persistent storage for repository data and attachments

## Practical usage

Gitea commonly fits into infrastructure as:

```text
Users and automation -> Gitea -> Git repositories -> CI or deployment systems
```

Typical uses:

- Hosting application and infrastructure repositories
- Managing issues and pull requests in a private environment
- Acting as a central source of truth for docs-as-code workflows
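
When Gitea sits behind a reverse proxy, the external base URL must match what the proxy presents. An `app.ini` sketch of the relevant `[server]` keys, with placeholder values:

```text
; app.ini excerpt
[server]
DOMAIN    = git.example.com
ROOT_URL  = https://git.example.com/
HTTP_PORT = 3000
```

If `ROOT_URL` drifts from the proxy's external hostname, clone URLs, redirects, and webhook payloads will point at the wrong address.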

## Best practices

- Back up repository data, configuration, and the database together
- Integrate with centralized identity when possible
- Put Gitea behind a reverse proxy with a stable external URL
- Protect administrator access with MFA or a private access layer

## Pitfalls

- Treating Git repository data as sufficient without backing up the database and config
- Allowing base URL and reverse proxy headers to drift out of sync
- Running a forge without monitoring, backup validation, or update planning
- Using one shared administrator account for normal operations

## References

- [Gitea Documentation](https://docs.gitea.com/)
- [Gitea administration docs](https://docs.gitea.com/administration)
- [Gitea installation docs](https://docs.gitea.com/installation)
70 - Tools/github/repository-labeling-strategies.md (new file)
@@ -0,0 +1,124 @@
---
title: Repository Labeling Strategies
description: A practical GitHub label taxonomy for issues and pull requests
tags:
- github
- devops
- workflow
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Repository Labeling Strategies

## Introduction

Labels make issue trackers easier to triage, search, automate, and report on. A good label system is small enough to stay consistent and expressive enough to support planning and maintenance.

## Purpose

This document provides a reusable label taxonomy for:

- Bugs and incidents
- Features and enhancements
- Operations and maintenance work
- Pull request triage

## Architecture Overview

A useful label strategy separates labels by function instead of creating one long undifferentiated list. A practical model uses these groups:

- Type: what kind of work item it is
- Priority: how urgent it is
- Status: where it is in the workflow
- Area: which subsystem it affects
- Effort: rough size or complexity

## Suggested Taxonomy

### Type labels

- `type:bug`
- `type:feature`
- `type:docs`
- `type:maintenance`
- `type:security`
- `type:question`

### Priority labels

- `priority:p0`
- `priority:p1`
- `priority:p2`
- `priority:p3`

### Status labels

- `status:needs-triage`
- `status:blocked`
- `status:in-progress`
- `status:ready-for-review`

### Area labels

- `area:networking`
- `area:containers`
- `area:security`
- `area:ci`
- `area:docs`

### Effort labels

- `size:small`
- `size:medium`
- `size:large`
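
The taxonomy above can be bootstrapped with the GitHub CLI. A command sketch, assuming `gh` is authenticated against the target repository (colors and descriptions are arbitrary examples):

```text
gh label create "type:bug" --color D73A4A --description "Something is broken"
gh label create "priority:p0" --color B60205 --description "Drop everything"
gh label create "status:needs-triage" --color FBCA04 --description "Awaiting triage"
```

Scripting label creation this way keeps the taxonomy reproducible across repositories instead of re-creating it by hand in each one.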

## Configuration Example

Example policy:

```text
Every new issue gets exactly one type label and one status label.
High-impact incidents also get one priority label.
Area labels are optional but recommended for owned systems.
```

Example automation targets:

- Auto-add `status:needs-triage` to new issues
- Route `type:security` to security reviewers
- Build dashboards using `priority:*` and `area:*`
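
The mandatory-label policy is simple enough to enforce in a CI check. A minimal Python sketch of that validation; the function name and label lists here are illustrative, not part of any GitHub API:

```python
# Validate that an issue's labels satisfy the policy:
# exactly one type label and exactly one status label.
TYPE_PREFIX = "type:"
STATUS_PREFIX = "status:"

def labels_satisfy_policy(labels: list[str]) -> bool:
    """Return True when the set has exactly one type: and one status: label."""
    type_count = sum(1 for label in labels if label.startswith(TYPE_PREFIX))
    status_count = sum(1 for label in labels if label.startswith(STATUS_PREFIX))
    return type_count == 1 and status_count == 1

print(labels_satisfy_policy(["type:bug", "status:needs-triage", "area:ci"]))  # True
print(labels_satisfy_policy(["type:bug", "type:docs", "status:blocked"]))     # False
```

A CI job would fetch the issue's labels from the API, run this check, and fail (or re-apply `status:needs-triage`) when the policy is violated.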

## Troubleshooting Tips

### Too many labels and nobody uses them

- Reduce the taxonomy to the labels that drive decisions
- Remove near-duplicate labels such as `bug` and `kind:bug`
- Standardize prefixes so labels sort clearly

### Labels stop reflecting reality

- Review automation rules and board filters
- Make status changes part of the pull request or issue workflow
- Archive labels that no longer map to current processes

### Teams interpret labels differently

- Document label meaning in the repository
- Reserve priority labels for response urgency, not personal preference
- Keep type and status labels mutually understandable

## Best Practices

- Use prefixes such as `type:` and `priority:` for readability and automation
- Keep the total label count manageable
- Apply a small mandatory label set and leave the rest optional
- Review labels quarterly as workflows change
- Match label taxonomy to how the team searches and reports on work

## References

- [GitHub Docs: Managing labels](https://docs.github.com/issues/using-labels-and-milestones-to-track-work/managing-labels)
- [GitHub Docs: Filtering and searching issues and pull requests](https://docs.github.com/issues/tracking-your-work-with-issues/using-issues/filtering-and-searching-issues-and-pull-requests)
70 - Tools/grafana/grafana.md (new file)
@@ -0,0 +1,63 @@
---
title: Grafana
description: Tool overview for Grafana as a dashboarding and observability interface
tags:
- grafana
- monitoring
- dashboards
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Grafana

## Summary

Grafana is a visualization and observability platform used to build dashboards, explore metrics, and manage alerting workflows across multiple data sources. In self-hosted environments, it is commonly paired with Prometheus to make infrastructure and service health easier to understand.

## Why it matters

Metrics data is more useful when operators can navigate it quickly during incidents and routine reviews. Grafana helps turn raw monitoring data into operational context that supports troubleshooting, reporting, and change validation.

## Core concepts

- Data sources such as Prometheus, Loki, or other backends
- Dashboards and panels for visualization
- Variables for reusable filtered views
- Alerting and notification integration
- Role-based access to shared observability data

## Practical usage

Grafana commonly fits into infrastructure as:

```text
Prometheus and other data sources -> Grafana dashboards and alerts -> operators
```

Typical uses:

- Infrastructure overview dashboards
- Service-specific health views
- Incident triage and post-change validation
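
Data sources can be provisioned from files so the setup is reproducible rather than click-through. A provisioning sketch for a Prometheus source, with a placeholder URL:

```text
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```

Keeping this file in version control pairs naturally with backing up dashboard definitions, one of the practices below.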

## Best practices

- Keep dashboards tied to operational questions
- Build separate views for platform health and service health
- Use variables and naming conventions consistently
- Protect Grafana access and treat it as part of the observability platform

## Pitfalls

- Creating dashboards that look impressive but answer no real question
- Treating dashboards as enough without proper alerts
- Allowing panel sprawl and inconsistent naming
- Failing to back up dashboard definitions and provisioning config

## References

- [Grafana documentation](https://grafana.com/docs/grafana/latest/)
- [Grafana dashboards](https://grafana.com/docs/grafana/latest/dashboards/)
- [Grafana alerting](https://grafana.com/docs/grafana/latest/alerting/)
70 - Tools/prometheus/prometheus.md (new file)
@@ -0,0 +1,63 @@
---
title: Prometheus
description: Tool overview for Prometheus as a metrics collection, query, and alerting platform
tags:
- prometheus
- monitoring
- observability
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Prometheus

## Summary

Prometheus is an open source monitoring system built around time-series metrics, pull-based scraping, alert evaluation, and queryable historical data. It is a standard choice for infrastructure and service monitoring in self-hosted environments.

## Why it matters

Prometheus gives operators a consistent way to collect metrics from hosts, applications, and infrastructure components. It is especially valuable because it pairs collection, storage, and alert evaluation in one practical operational model.

## Core concepts

- Scrape targets and exporters
- Time-series storage
- PromQL for querying and aggregation
- Alerting rules for actionable conditions
- Service discovery integrations for dynamic environments

## Practical usage

Prometheus commonly fits into infrastructure as:

```text
Targets and exporters -> Prometheus -> dashboards and alerts
```

Typical uses:

- Scraping node, container, and application metrics
- Evaluating alert rules for outages and resource pressure
- Providing metrics data to Grafana
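
A minimal `prometheus.yml` sketch for this flow, assuming a hypothetical node exporter target (hostname and interval are placeholders):

```text
# prometheus.yml
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["node1.internal:9100"]
```

Each entry under `scrape_configs` becomes a job whose targets Prometheus pulls on the configured interval; exporters like node_exporter expose their metrics on a well-known port for exactly this pattern.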

## Best practices

- Start with critical infrastructure and user-facing services
- Keep retention and scrape frequency aligned with actual operational needs
- Write alerts that map to a human response
- Protect Prometheus access because metrics can reveal sensitive system details

## Pitfalls

- Collecting too many high-cardinality metrics without a clear reason
- Treating every metric threshold as an alert
- Forgetting to monitor backup freshness, certificate expiry, or ingress paths
- Running Prometheus without a retention and storage plan

## References

- [Prometheus overview](https://prometheus.io/docs/introduction/overview/)
- [Prometheus concepts](https://prometheus.io/docs/concepts/)
- [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/)
70 - Tools/proxmox/proxmox-ve.md (new file)
@@ -0,0 +1,63 @@
---
title: Proxmox VE
description: Tool overview for Proxmox VE as a virtualization and clustering platform
tags:
- proxmox
- virtualization
- infrastructure
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Proxmox VE

## Summary

Proxmox VE is a virtualization platform for managing KVM virtual machines, Linux containers, storage, networking, and optional clustering. It is widely used in homelabs because it combines a web UI, CLI tooling, and strong documentation around core virtualization workflows.

## Why it matters

Proxmox provides a practical base layer for self-hosted environments that need flexible compute without managing every VM entirely by hand. It is especially useful when services need isolation that is stronger or more flexible than containers alone.

## Core concepts

- Nodes as individual hypervisor hosts
- Virtual machines and LXC containers as workload types
- Storage backends for disks, ISOs, backups, and templates
- Clustering and quorum for multi-node management
- Backup and restore tooling for guest protection

## Practical usage

Proxmox commonly fits into infrastructure as:

```text
Physical host or cluster -> Proxmox VE -> VMs and containers -> platform and application services
```

Typical uses:

- Hosting Docker VMs, DNS VMs, monitoring systems, and utility appliances
- Separating critical services into dedicated guests
- Running a small cluster for shared management and migration workflows
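
Guest protection in these setups usually runs through `vzdump`. A command sketch for backing up a single guest, assuming a hypothetical VM ID 101 and a storage named `backup-nfs`:

```text
vzdump 101 --storage backup-nfs --mode snapshot --compress zstd
```

Scheduled backup jobs in the web UI drive the same mechanism; the important part is that the target storage survives the host, since a backup kept on the hypervisor's own disk protects against very little.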

## Best practices

- Keep Proxmox management access on a trusted network segment
- Document which workloads are stateful and how they are backed up
- Use clustering only when the network and storage model support it
- Treat hypervisors as core infrastructure with tighter change control

## Pitfalls

- Assuming clustering alone provides shared storage or HA guarantees
- Mixing experimental and critical workloads on the same host without planning
- Ignoring quorum behavior in small clusters
- Treating snapshots as a complete backup strategy

## References

- [Proxmox VE documentation](https://pve.proxmox.com/pve-docs/)
- [Proxmox VE Administration Guide: Cluster Manager](https://pve.proxmox.com/pve-docs/chapter-pvecm.html)
- [Proxmox VE Backup and Restore](https://pve.proxmox.com/pve-docs/chapter-vzdump.html)
70 - Tools/tailscale/tailscale.md (new file)
@@ -0,0 +1,63 @@
---
title: Tailscale
description: Tool overview for Tailscale as a private networking and remote access layer
tags:
- tailscale
- vpn
- networking
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Tailscale

## Summary

Tailscale is a WireGuard-based mesh VPN that provides identity-aware connectivity between devices. It is frequently used to reach homelab services, private admin interfaces, and remote systems without exposing them directly to the public internet.

## Why it matters

Tailscale simplifies remote access and private service connectivity without requiring a traditional central VPN gateway for all traffic. It is especially useful for small environments where easy onboarding and policy-driven access matter more than complex appliance-based VPN design.

## Core concepts

- Tailnet as the private network boundary
- Identity-based access controls
- Peer-to-peer encrypted connectivity with DERP fallback
- MagicDNS for tailnet name resolution
- Subnet routers and exit nodes for advanced routing roles

## Practical usage

Tailscale commonly fits into infrastructure as:

```text
Admin or device -> tailnet -> private service or subnet router
```

Typical uses:

- Remote SSH access to servers
- Private access to dashboards and management services
- Routing selected LAN subnets into a private network overlay
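
Access between these roles is governed by the tailnet policy file. A sketch of a tag-based policy, assuming hypothetical tags `tag:admin` and `tag:server` (the tags and ports are illustrative):

```text
{
  "tagOwners": {
    "tag:server": ["autogroup:admin"]
  },
  "acls": [
    {"action": "accept", "src": ["tag:admin"], "dst": ["tag:server:22,443"]}
  ]
}
```

Because the policy is default-deny, anything not matched by an accept rule is unreachable, which is why tagging devices early beats retrofitting policy onto a flat tailnet.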

## Best practices

- Use tags and access controls early instead of keeping the tailnet flat
- Treat exit nodes and subnet routers as high-trust infrastructure roles
- Use MagicDNS or split DNS instead of memorized addresses
- Limit which services are intended for tailnet-only access

## Pitfalls

- Advertising broad routes without matching access policy
- Treating overlay connectivity as a substitute for local firewalling
- Leaving unused devices enrolled in the tailnet
- Using one large unrestricted trust domain for every user and service

## References

- [Tailscale: What is Tailscale?](https://tailscale.com/kb/1151/what-is-tailscale)
- [Tailscale: Access controls](https://tailscale.com/kb/1018/acls)
- [Tailscale: MagicDNS](https://tailscale.com/kb/1081/magicdns)
70 - Tools/traefik/traefik.md (new file)
@@ -0,0 +1,63 @@
---
title: Traefik
description: Tool overview for Traefik as a modern reverse proxy and dynamic ingress controller
tags:
- traefik
- reverse-proxy
- ingress
category: tools
created: 2026-03-14
updated: 2026-03-14
---

# Traefik

## Summary

Traefik is a reverse proxy and ingress tool designed for dynamic environments. It is especially popular in containerized setups because it can discover services from providers such as Docker and build routes from metadata.

## Why it matters

When services are created or moved frequently, static proxy configuration becomes a maintenance burden. Traefik reduces manual route management by linking service discovery with ingress configuration.

## Core concepts

- EntryPoints as listening ports or addresses
- Routers for request matching
- Services for upstream destinations
- Middlewares for auth, redirects, headers, and rate controls
- Providers such as Docker or file-based configuration

## Practical usage

Traefik commonly fits into infrastructure as:

```text
Client -> Traefik entrypoint -> router -> middleware -> service backend
```

Typical uses:

- Reverse proxying containerized services
- Automatic route generation from Docker labels
- Central TLS termination for a container platform
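
With the Docker provider, a route is declared entirely through container labels. A Compose label sketch, assuming a hypothetical router name `app`, hostname, and container port 8080:

```text
services:
  app:
    image: ghcr.io/example/app:1.0
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.app.entrypoints=websecure"
      - "traefik.http.services.app.loadbalancer.server.port=8080"
```

Setting `traefik.enable=true` explicitly (with `exposedByDefault` disabled on the provider) is the usual guard against accidentally publishing every container.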

## Best practices

- Keep provider metadata minimal and standardized
- Separate public and internal entrypoints where trust boundaries differ
- Review middleware behavior as part of security policy
- Monitor certificate and routing health

## Pitfalls

- Hiding important routing logic in inconsistent labels across stacks
- Exposing internal services accidentally through default provider behavior
- Letting Docker label sprawl become the only source of ingress documentation
- Assuming dynamic config removes the need for change review

## References

- [Traefik documentation](https://doc.traefik.io/traefik/)
- [Traefik: Routing overview](https://doc.traefik.io/traefik/routing/overview/)
- [Traefik Docker provider](https://doc.traefik.io/traefik/providers/docker/)