Mirror of https://github.com/nmasur/dotfiles (synced 2025-07-07 06:40:13 +00:00)

Comparing `darwin-ssl...2694e3288c` — 78 commits.
Commits in this comparison (newest first; author and date columns were empty in the mirror):

`2694e3288c`, `af31c65788`, `ef6c920c48`, `d97e3fda07`, `982566a92e`, `27e2a42e46`, `41d8e30990`, `6f67e31723`, `7bca2775d1`, `89a95445e1`, `084e832039`, `e1e27ca065`, `6e26b64f43`, `0f112ea16b`, `69a54b99c8`, `e2c351098b`, `5410afb45b`, `bc83c818db`, `8cce61f4a8`, `595eac9367`, `a17a048d9d`, `c2d0037bab`, `e2af159c26`, `01e71e5810`, `c4c75cd587`, `a5e186ee87`, `170f8c67de`, `b0aa82e7d0`, `7aacfe7887`, `d8b5d74dcb`, `129e4bba4b`, `e309889b0b`, `5872abcc33`, `bfa9e8fc4e`, `8dba2ef88b`, `e89db82e7f`, `4044721606`, `0637cc693b`, `a9ae0c8858`, `da01f3be9b`, `a7117fe4e9`, `c2b570b2af`, `84ecbf9974`, `f38f782b63`, `31f3cfe77c`, `a0089e28ae`, `92223a49cd`, `2434376963`, `f196f546b8`, `b4ba0706c0`, `90bc2ecd49`, `19de583433`, `8a97d9b2da`, `015c393274`, `db0645075f`, `034ff33e70`, `cd53060f02`, `f20b4ee31a`, `381e06519b`, `3ec1ef4394`, `d303924f02`, `485e8223cf`, `657bec0929`, `4e23d677e8`, `5ce4ebf522`, `e6b7938218`, `1addb7ec21`, `ba14638a8a`, `ddd517e0dd`, `0ac3aec208`, `ae90b1041d`, `9b0dcaba9f`, `0bf5fd5862`, `7a9f7dd760`, `f834cc20f4`, `720a3cc409`, `9e3345ff9b`, `50a538c78e`
**README.md** (23 lines changed)

````diff
@@ -1,6 +1,17 @@
 This repository contains configuration files for my NixOS, macOS, and WSL
 hosts.
+
+They are organized and managed by [Nix](https://nixos.org), so some of the
+configuration may be difficult to translate to a non-Nix system.
+
+However, some of the configurations are easier to lift directly:
+
+- [Neovim](https://github.com/nmasur/dotfiles/tree/master/modules/neovim/lua)
+- [Fish functions](https://github.com/nmasur/dotfiles/tree/master/modules/shell/fish/functions)
+- [More fish aliases](https://github.com/nmasur/dotfiles/blob/master/modules/shell/fish/default.nix)
+- [Git aliases](https://github.com/nmasur/dotfiles/blob/master/modules/shell/git.nix)
+- [Hammerspoon](https://github.com/nmasur/dotfiles/tree/master/modules/darwin/hammerspoon)

 ---

 # Installation
@@ -14,7 +25,7 @@ installer disk:

 ```bash
 lsblk # Choose the disk you want to wipe
-nix-shell -p nixFlakes
+nix-shell -p nixVersions.stable
 nix run github:nmasur/dotfiles#installer -- nvme0n1 desktop
 ```

@@ -24,7 +35,7 @@ If you're already running NixOS, you can switch to this configuration with the
 following command:

 ```bash
-nix-shell -p nixFlakes
+nix-shell -p nixVersions.stable
 sudo nixos-rebuild switch --flake github:nmasur/dotfiles#desktop
 ```

@@ -35,10 +46,16 @@ WSL](https://xeiaso.net/blog/nix-flakes-4-wsl-2022-05-01), you can switch to
 the WSL configuration:

 ```
-nix-shell -p nixFlakes
+nix-shell -p nixVersions.stable
 sudo nixos-rebuild switch --flake github:nmasur/dotfiles#wsl
 ```

+You should also download the
+[FiraCode](https://github.com/ryanoasis/nerd-fonts/releases/download/v2.2.2/FiraCode.zip)
+font and install it on Windows. Install [Alacritty](https://alacritty.org/) and
+move the `windows/alacritty.yml` file to
+`C:\Users\<user>\AppData\Roaming\alacritty`.
+
 ## macOS

 To get started on a bare macOS installation, first install Nix:
````
**apps/encrypt-secret.nix** (new file, 19 lines)

```nix
{ pkgs, ... }: {

  # nix run github:nmasur/dotfiles#encrypt-secret > private/mysecret.age

  type = "app";

  program = builtins.toString (pkgs.writeShellScript "encrypt-secret" ''
    printf "\nEnter the secret data to encrypt for all hosts...\n\n" 1>&2
    read -p "Secret: " secret
    printf "\nEncrypting...\n\n" 1>&2
    tmpfile=$(mktemp)
    echo "''${secret}" > ''${tmpfile}
    ${pkgs.age}/bin/age --encrypt --armor --recipients-file ${
      builtins.toString ../hosts/public-keys
    } $tmpfile
    rm $tmpfile
  '');

}
```
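As a standalone sketch (mine, not from the repo), the same flow looks like the script below, with one hardening tweak: the app deletes the plaintext temp file with a final `rm`, which leaks it if `age` fails partway, whereas an `EXIT` trap removes it on every exit path. The `age` invocation and the `hosts/public-keys` path are illustrative and commented out so the sketch runs without `age` installed.

```shell
#!/bin/sh
# Sketch of the encrypt-secret flow with trap-based temp file cleanup.
set -eu

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT    # runs on success, failure, or interrupt

printf '%s\n' "example secret" > "$tmpfile"
# age --encrypt --armor --recipients-file hosts/public-keys "$tmpfile"
echo "staged plaintext in $tmpfile (deleted on exit)"
```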
```diff
@@ -13,10 +13,12 @@
     PARTITION_PREFIX=""

     if [ -z "$DISK" ] || [ -z "$FLAKE" ]; then
-      echo "Missing required parameter."
-      echo "Usage: installer -- <disk> <host>"
-      echo "Example: installer -- nvme0n1 desktop"
-      echo "Flake example: nix run github:nmasur/dotfiles#installer -- nvme0n1 desktop"
+      ${pkgs.gum}/bin/gum style --width 50 --margin "1 2" --padding "2 4" \
+        --foreground "#fb4934" \
+        "Missing required parameter." \
+        "Usage: installer -- <disk> <host>" \
+        "Example: installer -- nvme0n1 desktop" \
+        "Flake example: nix run github:nmasur/dotfiles#installer -- nvme0n1 desktop"
       echo "(exiting)"
       exit 1
     fi
@@ -25,10 +27,14 @@
         PARTITION_PREFIX="p"
     esac

-    parted /dev/''${DISK} -- mklabel gpt
-    parted /dev/''${DISK} -- mkpart primary 512MiB 100%
-    parted /dev/''${DISK} -- mkpart ESP fat32 1MiB 512MiB
-    parted /dev/''${DISK} -- set 3 esp on
+    ${pkgs.gum}/bin/gum confirm \
+      "This will ERASE ALL DATA on the disk /dev/''${DISK}. Are you sure you want to continue?" \
+      --default=false
+
+    ${pkgs.parted}/bin/parted /dev/''${DISK} -- mklabel gpt
+    ${pkgs.parted}/bin/parted /dev/''${DISK} -- mkpart primary 512MiB 100%
+    ${pkgs.parted}/bin/parted /dev/''${DISK} -- mkpart ESP fat32 1MiB 512MiB
+    ${pkgs.parted}/bin/parted /dev/''${DISK} -- set 3 esp on
     mkfs.ext4 -L nixos /dev/''${DISK}''${PARTITION_PREFIX}1
     mkfs.fat -F 32 -n boot /dev/''${DISK}''${PARTITION_PREFIX}2

@@ -36,7 +42,7 @@
     mkdir --parents /mnt/boot
     mount /dev/disk/by-label/boot /mnt/boot

-    nixos-install --flake github:nmasur/dotfiles#''${FLAKE}
+    ${pkgs.nixos-install-tools}/bin/nixos-install --flake github:nmasur/dotfiles#''${FLAKE}
   '');

}
```
**apps/loadkey.nix** (new file, 12 lines)

```nix
{ pkgs, ... }: {

  type = "app";

  program = builtins.toString (pkgs.writeShellScript "loadkey" ''
    printf "\nEnter the seed phrase for your SSH key...\n"
    printf "\nThen press ^D when complete.\n\n"
    ${pkgs.melt}/bin/melt restore ~/.ssh/id_ed25519
    printf "\n\nContinuing activation.\n\n"
  '');

}
```
**apps/netdata-cloud.nix** (new file, 19 lines)

```nix
{ pkgs, ... }: {

  type = "app";

  program = builtins.toString (pkgs.writeShellScript "netdata-cloud" ''
    if [ "$EUID" -ne 0 ]; then
      echo "Please run as root"
      exit 1
    fi
    mkdir --parents --mode 0750 /var/lib/netdata/cloud.d
    printf "\nEnter the claim token for netdata cloud...\n\n"
    read -p "Token: " token
    echo "''${token}" > /var/lib/netdata/cloud.d/token
    chown -R netdata:netdata /var/lib/netdata
    ${pkgs.netdata}/bin/netdata-claim.sh -id=$(uuidgen)
    printf "\n\nNow restart netdata service.\n\n"
  '');

}
```
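The root check above relies on `$EUID`, a bash feature (safe here, since `writeShellScript` produces a bash script). A sketch of the same guard that is portable to any POSIX `sh`, using `id -u` instead:

```shell
#!/bin/sh
# Portable variant of the netdata-cloud root guard; $EUID is bash-only,
# while id -u is available in any POSIX shell.
if [ "$(id -u)" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi
echo "running as root"
```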
**apps/reencrypt-secrets.nix** (new file, 27 lines)

```nix
{ pkgs, ... }: {

  # nix run github:nmasur/dotfiles#reencrypt-secrets ./private

  type = "app";

  program = builtins.toString (pkgs.writeShellScript "reencrypt-secrets" ''
    if [ $# -eq 0 ]; then
      echo "Must provide directory to reencrypt."
      exit 1
    fi
    encrypted=$1
    for encryptedfile in ''${1}/*; do
      tmpfile=$(mktemp)
      echo "Decrypting ''${encryptedfile}..."
      ${pkgs.age}/bin/age --decrypt \
        --identity ~/.ssh/id_ed25519 $encryptedfile > $tmpfile
      echo "Encrypting ''${encryptedfile}..."
      ${pkgs.age}/bin/age --encrypt --armor --recipients-file ${
        builtins.toString ../hosts/public-keys
      } $tmpfile > $encryptedfile
      rm $tmpfile
    done
    echo "Finished."
  '');

}
```
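The per-file loop can be exercised without any keys by swapping the `age` calls for a stand-in transform. This sketch (mine, not from the repo) keeps the same decrypt-to-temp-file, re-encrypt-in-place flow, with `tr` substituting for `age`:

```shell
#!/bin/sh
# Stand-in for the reencrypt-secrets loop: tr replaces age --decrypt /
# age --encrypt so the control flow runs anywhere.
set -eu

dir=$(mktemp -d)
echo "HeLLo" > "$dir/demo.age"

for encryptedfile in "$dir"/*; do
  tmpfile=$(mktemp)
  echo "Decrypting ${encryptedfile}..."
  tr 'a-z' 'A-Z' < "$encryptedfile" > "$tmpfile"     # stand-in for decrypt
  echo "Encrypting ${encryptedfile}..."
  tr 'A-Z' 'a-z' < "$tmpfile" > "$encryptedfile"     # stand-in for re-encrypt
  rm "$tmpfile"
done

cat "$dir/demo.age"   # prints "hello"
echo "Finished."
```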
**flake.lock** (generated, 61 lines changed)

```diff
@@ -7,11 +7,11 @@
       ]
     },
     "locked": {
-      "lastModified": 1662478528,
-      "narHash": "sha256-Myjd0HPL5lXri3NXOcJ6gP7IKod2eMweQBKM4uxgEGw=",
+      "lastModified": 1664210064,
+      "narHash": "sha256-df6nKVZe/yAhmJ9csirTPahc0dldwm3HBhCVNA6qWr0=",
       "owner": "lnl7",
       "repo": "nix-darwin",
-      "rev": "3b69bf3cc26ae19de847bfe54d6ab22d7381a90a",
+      "rev": "02d2551c927b7d65ded1b3c7cd13da5cc7ae3fcf",
       "type": "github"
     },
     "original": {
@@ -60,11 +60,11 @@
       "utils": "utils"
     },
     "locked": {
-      "lastModified": 1663328500,
-      "narHash": "sha256-7n+J/exp8ky4dmk02y5a9R7CGmJvHpzrHMzfEkMtSWA=",
+      "lastModified": 1664273942,
+      "narHash": "sha256-PFQR1UJQs7a7eaH5YoCZky5dmxR5cjaKRK+MpPbR7YE=",
       "owner": "nix-community",
       "repo": "home-manager",
-      "rev": "5427f3d1f0ea4357cd4af0bffee7248d640c6ffc",
+      "rev": "1f5ef2bb419a327fae28a83b50fab50959132c24",
       "type": "github"
     },
     "original": {
@@ -74,13 +74,49 @@
       "type": "github"
     }
   },
+  "nixlib": {
+    "locked": {
+      "lastModified": 1636849918,
+      "narHash": "sha256-nzUK6dPcTmNVrgTAC1EOybSMsrcx+QrVPyqRdyKLkjA=",
+      "owner": "nix-community",
+      "repo": "nixpkgs.lib",
+      "rev": "28a5b0557f14124608db68d3ee1f77e9329e9dd5",
+      "type": "github"
+    },
+    "original": {
+      "owner": "nix-community",
+      "repo": "nixpkgs.lib",
+      "type": "github"
+    }
+  },
+  "nixos-generators": {
+    "inputs": {
+      "nixlib": "nixlib",
+      "nixpkgs": [
+        "nixpkgs"
+      ]
+    },
+    "locked": {
+      "lastModified": 1660727616,
+      "narHash": "sha256-zYTIvdPMYMx/EYqXODAwIIU30RiEHqNHdgarIHuEYZc=",
+      "owner": "nix-community",
+      "repo": "nixos-generators",
+      "rev": "adccd191a0e83039d537e021f19495b7bad546a1",
+      "type": "github"
+    },
+    "original": {
+      "owner": "nix-community",
+      "repo": "nixos-generators",
+      "type": "github"
+    }
+  },
   "nixpkgs": {
     "locked": {
-      "lastModified": 1663357389,
-      "narHash": "sha256-oYA2nVRSi6yhCBqS5Vz465Hw+3BQOVFEhfbfy//3vTs=",
+      "lastModified": 1664195620,
+      "narHash": "sha256-/0V1a1gAR+QbiQe4aCxBoivhkxss0xyt2mBD6yDrgjw=",
       "owner": "nixos",
       "repo": "nixpkgs",
-      "rev": "da6a05816e7fa5226c3f61e285ef8d9dfc868f3c",
+      "rev": "62228ccc672ed000f35b1e5c82e4183e46767e52",
       "type": "github"
     },
     "original": {
@@ -107,11 +143,11 @@
   },
   "nur": {
     "locked": {
-      "lastModified": 1663440270,
-      "narHash": "sha256-RkBoLyxamsBqRn9lB9RbFSDg7KHiGgHBsrpffEVXWCQ=",
+      "lastModified": 1664282944,
+      "narHash": "sha256-PrID+Tc90HWhkbO4b2kk3MFgjK+iBDWtDd534Y2D2Zs=",
       "owner": "nix-community",
       "repo": "nur",
-      "rev": "7511d58da488c67887745f40fd4846aa8c876d25",
+      "rev": "dcc2af3d2504af6726c5cf40eb5e1165d5700721",
       "type": "github"
     },
     "original": {
@@ -124,6 +160,7 @@
     "inputs": {
       "darwin": "darwin",
       "home-manager": "home-manager",
+      "nixos-generators": "nixos-generators",
       "nixpkgs": "nixpkgs",
       "nur": "nur",
       "wallpapers": "wallpapers",
```
**flake.nix** (38 lines changed)

```diff
@@ -32,9 +32,15 @@
     flake = false;
   };

+    # Used to generate NixOS images for other platforms
+    nixos-generators = {
+      url = "github:nix-community/nixos-generators";
+      inputs.nixpkgs.follows = "nixpkgs";
   };

-  outputs = { self, nixpkgs, darwin, wsl, home-manager, nur, wallpapers }:
+  };
+
+  outputs = { self, nixpkgs, ... }@inputs:

   let
@@ -45,7 +51,7 @@
     gitName = fullName;
     gitEmail = "7386960+nmasur@users.noreply.github.com";
     mailServer = "noahmasur.com";
-    dotfilesRepo = "https://github.com/nmasur/dotfiles";
+    dotfilesRepo = "git@github.com:nmasur/dotfiles";
   };

   # System types to support.
@@ -57,19 +63,29 @@

   in {

-    nixosConfigurations = {
+    nixosConfigurations = with inputs; {
       desktop = import ./hosts/desktop {
         inherit nixpkgs home-manager nur globals wallpapers;
       };
       wsl = import ./hosts/wsl { inherit nixpkgs wsl home-manager globals; };
+      oracle =
+        import ./hosts/oracle { inherit nixpkgs home-manager globals; };
     };

-    darwinConfigurations = {
+    darwinConfigurations = with inputs; {
       macbook = import ./hosts/macbook {
         inherit nixpkgs darwin home-manager nur globals;
       };
     };

+    # Package servers into images with a generator
+    packages.x86_64-linux = with inputs; {
+      aws = import ./hosts/aws {
+        inherit nixpkgs nixos-generators home-manager globals;
+        system = "x86_64-linux";
+      };
+    };
+
     apps = forAllSystems (system:
       let pkgs = import nixpkgs { inherit system; };
       in rec {
@@ -81,6 +97,19 @@
         # Display the readme for this repository
         readme = import ./apps/readme.nix { inherit pkgs; };

+        # Load the SSH key for this machine
+        loadkey = import ./apps/loadkey.nix { inherit pkgs; };
+
+        # Encrypt secret for all machines
+        encrypt-secret = import ./apps/encrypt-secret.nix { inherit pkgs; };
+
+        # Re-encrypt secrets for all machines
+        reencrypt-secrets =
+          import ./apps/reencrypt-secrets.nix { inherit pkgs; };
+
+        # Connect machine metrics to Netdata Cloud
+        netdata = import ./apps/netdata-cloud.nix { inherit pkgs; };
+
       });

     devShells = forAllSystems (system:
@@ -101,6 +130,7 @@
       vault
       awscli2
       google-cloud-sdk
+      ansible
       kubectl
       kubernetes-helm
       kustomize
```
**hosts/aws/default.nix** (new file, 30 lines)

```nix
{ nixpkgs, system, nixos-generators, home-manager, globals, ... }:

nixos-generators.nixosGenerate {
  inherit system;
  format = "amazon";
  modules = [
    home-manager.nixosModules.home-manager
    {
      user = globals.user;
      fullName = globals.fullName;
      dotfilesRepo = globals.dotfilesRepo;
      gitName = globals.gitName;
      gitEmail = globals.gitEmail;
      networking.hostName = "sheep";
      gui.enable = false;
      colorscheme = (import ../modules/colorscheme/gruvbox);
      passwordHash = null;
      publicKey =
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+AbmjGEwITk5CK9y7+Rg27Fokgj9QEjgc9wST6MA3s";
      # AWS settings require this
      permitRootLogin = "prohibit-password";
    }
    ../../hosts/common.nix
    ../../modules/nixos
    ../../modules/services/sshd.nix
  ] ++ [
    # Required to fix diskSize errors during build
    ({ ... }: { amazonImage.sizeMB = 16 * 1024; })
  ];
}
```
**hosts/aws/main.tf** (new file, 80 lines)

```terraform
locals {
  image_file = one(fileset(path.root, "result/nixos-amazon-image-*.vhd"))
}

# Upload to S3
resource "aws_s3_object" "image" {
  bucket = "your_bucket_name"
  key    = basename(local.image_file)
  source = local.image_file
  etag   = filemd5(local.image_file)
}

# Setup IAM access for the VM Importer
data "aws_iam_policy_document" "vmimport_trust_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["vmie.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "vmimport" {
  statement {
    actions = [
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::${aws_s3_object.image.bucket}",
      "arn:aws:s3:::${aws_s3_object.image.bucket}/*",
    ]
  }
  statement {
    actions = [
      "ec2:ModifySnapshotAttribute",
      "ec2:CopySnapshot",
      "ec2:RegisterImage",
      "ec2:Describe*",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role" "vmimport" {
  name               = "vmimport"
  assume_role_policy = data.aws_iam_policy_document.vmimport_trust_policy.json
  inline_policy {
    name   = "vmimport"
    policy = data.aws_iam_policy_document.vmimport.json
  }
}

# Import to EBS
resource "aws_ebs_snapshot_import" "image" {
  disk_container {
    format = "VHD"
    user_bucket {
      s3_bucket = aws_s3_object.image.bucket
      s3_key    = aws_s3_object.image.key
    }
  }

  role_name = aws_iam_role.vmimport.name
}

# Convert to AMI
resource "aws_ami" "image" {
  description         = "Created with NixOS."
  name                = replace(basename(local.image_file), "/\\.vhd$/", "")
  virtualization_type = "hvm"

  ebs_block_device {
    device_name = "/dev/xvda"
    snapshot_id = aws_ebs_snapshot_import.image.id
    volume_size = 8
  }
}
```
**hosts/aws/workflow.yml** (new file, 260 lines)

````yaml
name: 'Terraform'

env:

  AWS_ACCOUNT_NUMBER: ''
  AWS_PLAN_ROLE_NAME: github_actions_plan
  AWS_APPLY_ROLE_NAME: github_actions_admin

  # Always required. Used for authenticating to AWS, but can also act as your
  # default region if you don't want to specify in the provider configuration.
  AWS_REGION: us-east-1

  # You must change these to fit your project.
  TF_VAR_project: change-me
  TF_VAR_label: change-me
  TF_VAR_owner: Your Name Here

  # If storing Terraform in a subdirectory, specify it here.
  TERRAFORM_DIRECTORY: .

  # Pinned versions of tools to use.
  # Check for new releases:
  # - https://github.com/hashicorp/terraform/releases
  # - https://github.com/fugue/regula/releases
  # - https://github.com/terraform-linters/tflint/releases
  TERRAFORM_VERSION: 1.2.6
  REGULA_VERSION: 2.9.0
  TFLINT_VERSION: 0.39.1

  # Terraform configuration options
  TERRAFORM_PARALLELISM: 10

  # These variables are passed to Terraform based on GitHub information.
  TF_VAR_repo: ${{ github.repository }}

# This workflow is triggered in the following ways.
on:

  # Any push or merge to these branches.
  push:
    branches:
      - dev
      - prod

  # Any pull request targeting these branches (plan only).
  pull_request:
    branches:
      - dev
      - prod

  # Any manual trigger on these branches.
  workflow_dispatch:
    branches:
      - dev
      - prod

# -------------------------------------------------------------------
# The rest of this workflow can operate without adjustments. Edit the
# below content at your own risk!
# -------------------------------------------------------------------

# Used to connect to AWS IAM
permissions:
  id-token: write
  contents: read
  pull-requests: write

# Only run one workflow at a time for each Terraform state. This prevents
# lockfile conflicts, especially during PR vs push.
concurrency: terraform-${{ github.base_ref || github.ref }}

jobs:
  terraform:

    name: 'Terraform'

    # Change this if you need to run your deployment on-prem.
    runs-on: ubuntu-latest

    steps:

      # Downloads the current repo code to the runner.
      - name: Checkout Repo Code
        uses: actions/checkout@v2

      # Install Nix
      - name: Install Nix
        uses: cachix/install-nix-action@v17

      # Build the image
      - name: Build Image
        run: nix build .#aws

      # Login to AWS
      - name: AWS Assume Role
        uses: aws-actions/configure-aws-credentials@v1.6.1
        with:
          role-to-assume: ${{ env.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}

      # Exports all GitHub Secrets as environment variables prefixed by
      # "TF_VAR_", which exposes them to Terraform. The name of each GitHub
      # Secret must match its Terraform variable name exactly.
      - name: Export Secrets to Terraform Variables
        env:
          ALL_SECRETS: ${{ toJson(secrets) }}
        run: |
          echo "$ALL_SECRETS" \
            | jq "to_entries | .[] | \"TF_VAR_\" + ( .key | ascii_downcase ) + \"=\" + .value" \
            | tr -d \" >> $GITHUB_ENV

      # Installs the Terraform binary and some other accessory functions.
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: ${{ env.TERRAFORM_VERSION }}

      # Checks whether Terraform is formatted properly. If this fails, you
      # should install the pre-commit hook.
      - name: Check Formatting
        run: |
          terraform fmt -no-color -check -diff -recursive

      # Downloads a Terraform code lint test.
      - uses: terraform-linters/setup-tflint@v1
        name: Setup TFLint
        with:
          tflint_version: v${{ env.TFLINT_VERSION }}

      # Sets up linting with this codebase.
      - name: Init TFLint
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: tflint --init

      # Lints the current code.
      - name: Run TFLint
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: |
          tflint -f compact
          find ./modules/* -type d -maxdepth 0 | xargs -I __ tflint -f compact --disable-rule=terraform_required_providers --disable-rule=terraform_required_version __

      # Connects to remote state backend and download providers.
      - name: Terraform Init
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: |
          terraform init \
            -backend-config="role_arn=${{ env.AWS_STATE_ROLE_ARN }}" \
            -backend-config="region=us-east-1" \
            -backend-config="workspace_key_prefix=accounts/${{ env.AWS_ACCOUNT_NUMBER }}/${{ github.repository }}" \
            -backend-config="key=state.tfstate" \
            -backend-config="dynamodb_table=global-tf-state-lock"

      # Set the Terraform Workspace to the current branch name.
      - name: Set Terraform Workspace
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        shell: bash
        run: |
          export WORKSPACE=${{ github.base_ref || github.ref_name }}
          terraform workspace select ${WORKSPACE} || terraform workspace new $WORKSPACE
          echo "TF_WORKSPACE=$(echo ${WORKSPACE} | sed 's/\//_/g')" >> $GITHUB_ENV

      # Checks differences between current code and infrastructure state.
      - name: Terraform Plan
        id: plan
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: |
          terraform plan \
            -input=false \
            -no-color \
            -out=tfplan \
            -parallelism=${TERRAFORM_PARALLELISM} \
            -var-file=variables-${TF_WORKSPACE}.tfvars

      # Gets the results of the plan for pull requests.
      - name: Terraform Show Plan
        id: show
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: terraform show -no-color tfplan

      # Adds the results of the plan to the pull request.
      - name: Comment Plan
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        env:
          STDOUT: "```terraform\n${{ steps.show.outputs.stdout }}```"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            // 1. Retrieve existing bot comments for the PR
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            })
            const botComment = comments.find(comment => {
              return comment.user.type === 'Bot' && comment.body.includes('Terraform Format and Style')
            })

            // 2. Prepare format of the comment
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
            <details><summary>Validation Output</summary>

            \`\`\`\n
            ${{ steps.validate.outputs.stdout }}
            \`\`\`

            </details>

            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`

            <details><summary>Show Plan</summary>

            \`\`\`\n
            ${process.env.PLAN}
            \`\`\`

            </details>

            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;

            // 3. If we have a comment, update it, otherwise create a new one
            if (botComment) {
              github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: botComment.id,
                body: output
              })
            } else {
              github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body: output
              })
            }

      # Downloads Regula and checks whether the plan meets compliance requirements.
      - name: Regula Compliance Check
        shell: bash
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: |
          REGULA_URL="https://github.com/fugue/regula/releases/download/v${REGULA_VERSION}/regula_${REGULA_VERSION}_Linux_x86_64.tar.gz"
          curl -sL "$REGULA_URL" -o regula.tar.gz
          tar xzf regula.tar.gz
          terraform show -json tfplan | ./regula run

      # Deploys infrastructure or changes to infrastructure.
      - name: Terraform Apply
        if: github.event_name == 'push' || github.event_name == 'workflow_dispatch'
        working-directory: ${{ env.TERRAFORM_DIRECTORY }}
        run: |
          terraform apply \
            -auto-approve \
            -input=false \
            -parallelism=${TERRAFORM_PARALLELISM} \
            tfplan
````
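The "Set Terraform Workspace" step maps the branch name to a workspace identifier by replacing slashes with underscores, since `TF_WORKSPACE` is later used to pick a `variables-<workspace>.tfvars` file. A minimal reproduction of that transform (the branch name is an example, not from the repo):

```shell
#!/bin/sh
# Reproduces the sed transform from the workflow's workspace step.
WORKSPACE="feature/aws"                    # example branch name
TF_WORKSPACE=$(echo "${WORKSPACE}" | sed 's/\//_/g')
echo "variables-${TF_WORKSPACE}.tfvars"    # prints variables-feature_aws.tfvars
```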
@@ -21,6 +21,11 @@
       if pkgs.stdenv.isDarwin then "$HOME/Downloads" else "$HOME/downloads";
     };
   };
+  identityFile = lib.mkOption {
+    type = lib.types.str;
+    description = "Path to existing identity file.";
+    default = "/etc/ssh/ssh_host_ed25519_key";
+  };
   gui = {
     enable = mkEnableOption {
       description = "Enable graphics";
@@ -39,6 +44,7 @@
       else
         "/home/${config.user}");
   };
+
   dotfilesPath = mkOption {
     type = types.path;
     description = "Path of dotfiles repository.";
@@ -12,6 +12,7 @@ nixpkgs.lib.nixosSystem {
       nixpkgs.overlays = [ nur.overlay ];
       # Set registry to flake packages, used for nix X commands
       nix.registry.nixpkgs.flake = nixpkgs;
+      identityFile = "/home/${globals.user}/.ssh/id_ed25519";
       gaming.steam = true;
       gaming.leagueoflegends = true;
       gaming.legendary = true;
@@ -12,9 +12,11 @@ darwin.lib.darwinSystem {
     })
     home-manager.darwinModules.home-manager
     {
+      identityFile = "/home/${globals.user}/.ssh/id_ed25519";
       gui.enable = true;
       colorscheme = (import ../../modules/colorscheme/gruvbox);
       mailUser = globals.user;
+      networking.hostName = "noah-masur-mac";
       nixpkgs.overlays = [ nur.overlay ];
       # Set registry to flake packages, used for nix X commands
       nix.registry.nixpkgs.flake = nixpkgs;
hosts/oracle/default.nix (new file)
@@ -0,0 +1,89 @@
{ nixpkgs, home-manager, globals, ... }:

# System configuration for an Oracle free server

# How to install:
# https://blog.korfuri.fr/posts/2022/08/nixos-on-an-oracle-free-tier-ampere-machine/

nixpkgs.lib.nixosSystem {
  system = "aarch64-linux";
  specialArgs = { };
  modules = [
    (removeAttrs globals [ "mailServer" ])
    home-manager.nixosModules.home-manager
    {
      gui.enable = false;
      colorscheme = (import ../../modules/colorscheme/gruvbox);

      # FQDNs for various services
      networking.hostName = "oracle";
      bookServer = "books.masu.rs";
      streamServer = "stream.masu.rs";
      nextcloudServer = "cloud.masu.rs";
      transmissionServer = "download.masu.rs";
      metricsServer = "metrics.masu.rs";
      vaultwardenServer = "vault.masu.rs";
      giteaServer = "git.masu.rs";

      # Disable passwords, only use SSH key
      passwordHash = null;
      publicKey =
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+AbmjGEwITk5CK9y7+Rg27Fokgj9QEjgc9wST6MA3s";

      # Nextcloud backup config
      backupS3 = {
        endpoint = "s3.us-west-002.backblazeb2.com";
        bucket = "noahmasur-backup";
        accessKeyId = "0026b0e73b2e2c80000000005";
      };

      # Grant access to Jellyfin directories from Nextcloud
      users.users.nextcloud.extraGroups = [ "jellyfin" ];

      # Wireguard config for Transmission
      networking.wireguard.interfaces.wg0 = {

        # The local IPs for this machine within the Wireguard network
        # Any inbound traffic bound for these IPs should be kept on localhost
        ips = [ "10.66.13.200/32" "fc00:bbbb:bbbb:bb01::3:dc7/128" ];

        peers = [{

          # Identity of Wireguard target peer (VPN)
          publicKey = "bOOP5lIjqCdDx5t+mP/kEcSbHS4cZqE0rMlBI178lyY=";

          # The public internet address of the target peer
          endpoint = "86.106.143.132:51820";

          # Which outgoing IP ranges should be sent through Wireguard
          allowedIPs = [ "0.0.0.0/0" "::0/0" ];

          # Send heartbeat signal within the network
          persistentKeepalive = 25;

        }];

      };

      # VPN port forwarding
      services.transmission.settings.peer-port = 57599;

      # Grant access to Transmission directories from Jellyfin
      users.users.jellyfin.extraGroups = [ "transmission" ];
    }
    ./hardware-configuration.nix
    ../common.nix
    ../../modules/nixos
    ../../modules/hardware/server.nix
    ../../modules/services/sshd.nix
    ../../modules/services/calibre.nix
    ../../modules/services/jellyfin.nix
    ../../modules/services/nextcloud.nix
    ../../modules/services/cloudflare.nix
    ../../modules/services/transmission.nix
    ../../modules/services/prometheus.nix
    ../../modules/services/vaultwarden.nix
    ../../modules/services/gitea.nix
    ../../modules/gaming/minecraft-server.nix
  ];
}
hosts/oracle/hardware-configuration.nix (new file)
@@ -0,0 +1,34 @@
# Do not modify this file! It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports = [ (modulesPath + "/profiles/qemu-guest.nix") ];

  boot.initrd.availableKernelModules = [ "xhci_pci" "virtio_pci" "usbhid" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ ];
  boot.extraModulePackages = [ ];

  fileSystems."/" = {
    device = "/dev/disk/by-uuid/e1b6bd50-306d-429a-9f45-78f57bc597c3";
    fsType = "ext4";
  };

  fileSystems."/boot" = {
    device = "/dev/disk/by-uuid/D5CA-237A";
    fsType = "vfat";
  };

  swapDevices = [ ];

  # Enables DHCP on each ethernet and wireless interface. In case of scripted networking
  # (the default) this is the recommended approach. When using systemd-networkd it's
  # still possible to use this option, but it's recommended to use it in conjunction
  # with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
  networking.useDHCP = lib.mkDefault true;
  # networking.interfaces.eth0.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}
hosts/public-keys (new file)
@@ -0,0 +1,4 @@
# Scan hosts: ssh-keyscan -t ed25519 <hostnames>

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+AbmjGEwITk5CK9y7+Rg27Fokgj9QEjgc9wST6MA3s noah
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVknmPi7sG6ES0G0jcsvebzKGWWaMfJTYgvOue6EULI oracle.masu.rs
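The keys in `hosts/public-keys` can be consumed by standard NixOS options. This is a hedged sketch, not necessarily how this repository wires it up; the option paths (`programs.ssh.knownHosts`, `users.users.<name>.openssh.authorizedKeys.keys`) are real NixOS options, but their use here is illustrative:

```nix
{ ... }: {
  # Pre-trust the scanned server key so first SSH connections are non-interactive
  # (illustrative usage; the repo may consume hosts/public-keys differently)
  programs.ssh.knownHosts."oracle.masu.rs".publicKey =
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVknmPi7sG6ES0G0jcsvebzKGWWaMfJTYgvOue6EULI";

  # Authorize the personal key for logins to this machine
  users.users.noah.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+AbmjGEwITk5CK9y7+Rg27Fokgj9QEjgc9wST6MA3s"
  ];
}
```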
@@ -1,20 +0,0 @@
{ nixpkgs, home-manager, globals, ... }:

# System configuration for a generic server
nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  specialArgs = { };
  modules = [
    globals
    home-manager.nixosModules.home-manager
    {
      networking.hostName = "sheep";
      gui.enable = false;
      colorscheme = (import ../../modules/colorscheme/gruvbox);
      passwordHash =
        "$6$PZYiMGmJIIHAepTM$Wx5EqTQ5GApzXx58nvi8azh16pdxrN6Qrv1wunDlzveOgawitWzcIxuj76X9V868fsPi/NOIEO8yVXqwzS9UF.";
    }
    ../common.nix
    ../../modules/nixos
  ];
}
@@ -12,6 +12,7 @@ nixpkgs.lib.nixosSystem {
       networking.hostName = "wsl";
       # Set registry to flake packages, used for nix X commands
       nix.registry.nixpkgs.flake = nixpkgs;
+      identityFile = "/home/${globals.user}/.ssh/id_ed25519";
       gui.enable = false;
       colorscheme = (import ../../modules/colorscheme/gruvbox);
       passwordHash =
@@ -5,6 +5,7 @@
     ./fonts.nix
     ./hammerspoon.nix
     ./homebrew.nix
+    ./networking.nix
     ./nixpkgs.nix
     ./system.nix
     ./tmux.nix
@@ -29,17 +29,19 @@
   ];
   brews = [
     "trash" # Delete files and folders to trash instead of rm
+    "openjdk" # Required by Apache Directory Studio
   ];
   casks = [
-    "firefox" # Firefox packaging on Nix is broken for MacOS
+    "firefox" # Firefox packaging on Nix is broken for macOS
-    "1password" # 1Password packaging on Nix is broken for MacOS
+    "1password" # 1Password packaging on Nix is broken for macOS
     "scroll-reverser" # Different scroll style for mouse vs. trackpad
     "meetingbar" # Show meetings in menu bar
     "gitify" # Git notifications in menu bar
     "logitech-g-hub" # Mouse and keyboard management
     "mimestream" # Gmail client
-    "obsidian" # Obsidian packaging on Nix is not available for MacOS
+    "obsidian" # Obsidian packaging on Nix is not available for macOS
     "steam" # Not packaged for Nix
+    "apache-directory-studio" # Packaging on Nix is not available for macOS
   ];
 };

@@ -1,8 +1,9 @@
-{ ... }: {
+{ config, ... }: {

   networking = {
-    computerName = "MacBook"; # Host name
+    computerName = "${config.fullName}'\\''s Mac";
-    hostName = "MacBook";
+    # Adjust if necessary
+    # hostName = "";
   };

 }
@@ -2,6 +2,8 @@

   services.nix-daemon.enable = true;

+  security.pam.enableSudoTouchIdAuth = true;
+
   system = {

     keyboard = {
@@ -139,12 +141,12 @@

     echo "Define dock icon function"
     __dock_item() {
-      printf '%s%s%s%s%s' \
+      printf "%s%s%s%s%s" \
-        '<dict><key>tile-data</key><dict><key>file-data</key><dict>' \
+        "<dict><key>tile-data</key><dict><key>file-data</key><dict>" \
-        '<key>_CFURLString</key><string>' \
+        "<key>_CFURLString</key><string>" \
         "$1" \
-        '</string><key>_CFURLStringType</key><integer>0</integer>' \
+        "</string><key>_CFURLStringType</key><integer>0</integer>" \
-        '</dict></dict></dict>'
+        "</dict></dict></dict>"
     }

     echo "Choose and order dock icons"
@@ -161,16 +163,6 @@
       "$(__dock_item /Applications/Alacritty.app)" \
       "$(__dock_item /System/Applications/System\ Preferences.app)"

-    echo "Enable sudo Touch ID"
-    echo "# sudo: auth account password session" > /tmp/sudofile
-    echo "auth sufficient pam_smartcard.so" >> /tmp/sudofile
-    echo "auth sufficient pam_tid.so " >> /tmp/sudofile
-    echo "auth required pam_opendirectory.so" >> /tmp/sudofile
-    echo "account required pam_permit.so" >> /tmp/sudofile
-    echo "password required pam_deny.so" >> /tmp/sudofile
-    echo "session required pam_permit.so" >> /tmp/sudofile
-    sudo mv /tmp/sudofile /etc/pam.d/sudo
-
     echo "Allow apps from anywhere"
     SPCTL=$(spctl --status)
     if ! [ "$SPCTL" = "assessments disabled" ]; then
@@ -3,7 +3,7 @@
   home-manager.users.${config.user} = {

     home.packages = with pkgs; [
-      # visidata # CSV inspector
+      visidata # CSV inspector
       dos2unix # Convert Windows text files
       inetutils # Includes telnet
       youtube-dl # Convert web videos
@@ -11,10 +11,13 @@
       mpd # TUI slideshows
       awscli2
       awslogs
-      kubectl
+      google-cloud-sdk
-      k9s
+      ansible
+      vault
+      consul
       noti # Create notifications programmatically
-      ipcalc
+      ipcalc # Make IP network calculations
+      whois # Lookup IPs
       (pkgs.writeScriptBin "ocr"
         (builtins.readFile ../shell/bash/scripts/ocr.sh))
     ];
modules/gaming/minecraft-server.nix (new file)
@@ -0,0 +1,145 @@
{ pkgs, ... }:

let

  localPort = 25564;
  publicPort = 49732;
  rconPort = 25575;
  rconPassword = "thiscanbeanything";

in {

  unfreePackages = [ "minecraft-server" ];

  services.minecraft-server = {
    enable = true;
    eula = true;
    declarative = true;
    whitelist = { };
    openFirewall = false;
    serverProperties = {
      server-port = localPort;
      difficulty = "normal";
      gamemode = "survival";
      white-list = false;
      enforce-whitelist = false;
      level-name = "world";
      motd = "Welcome!";
      pvp = true;
      player-idle-timeout = 30;
      generate-structures = true;
      max-players = 20;
      snooper-enabled = false;
      spawn-npcs = true;
      spawn-animals = true;
      spawn-monsters = true;
      allow-nether = true;
      allow-flight = false;
      enable-rcon = true;
      "rcon.port" = rconPort;
      "rcon.password" = rconPassword;
    };
  };

  networking.firewall.allowedTCPPorts = [ publicPort ];

  ## Automatically start and stop Minecraft server based on player connections

  # Adapted shamelessly from:
  # https://dataswamp.org/~solene/2022-08-20-on-demand-minecraft-with-systemd.html

  # Prevent Minecraft from starting by default
  systemd.services.minecraft-server = { wantedBy = pkgs.lib.mkForce [ ]; };

  # Listen for connections on the public port, to trigger the actual
  # listen-minecraft service.
  systemd.sockets.listen-minecraft = {
    wantedBy = [ "sockets.target" ];
    requires = [ "network.target" ];
    listenStreams = [ "${toString publicPort}" ];
  };

  # Proxy traffic to local port, and trigger hook-minecraft
  systemd.services.listen-minecraft = {
    path = [ pkgs.systemd ];
    requires = [ "hook-minecraft.service" "listen-minecraft.socket" ];
    after = [ "hook-minecraft.service" "listen-minecraft.socket" ];
    serviceConfig.ExecStart =
      "${pkgs.systemd.out}/lib/systemd/systemd-socket-proxyd 127.0.0.1:${
        toString localPort
      }";
  };

  # Start Minecraft if required and wait for it to be available
  # Then unlock the listen-minecraft.service
  systemd.services.hook-minecraft = {
    path = with pkgs; [ systemd libressl busybox ];

    # Start Minecraft and the auto-shutdown timer
    script = ''
      systemctl start minecraft-server.service
      systemctl start stop-minecraft.timer
    '';

    # Keep checking until the service is available
    postStart = ''
      for i in $(seq 60); do
        if ${pkgs.libressl.nc}/bin/nc -z 127.0.0.1 ${
          toString localPort
        } > /dev/null ; then
          exit 0
        fi
        ${pkgs.busybox.out}/bin/sleep 1
      done
      exit 1
    '';
  };

  # Run a player check on a schedule for auto-shutdown
  systemd.timers.stop-minecraft = {
    timerConfig = {
      OnCalendar = "*-*-* *:*:0/20"; # Every 20 seconds
      Unit = "stop-minecraft.service";
    };
  };

  # If no players are connected, then stop services and prepare to resume again
  systemd.services.stop-minecraft = {
    serviceConfig.Type = "oneshot";
    script = ''
      # Check when service was launched
      servicestartsec=$(
        date -d \
          "$(systemctl show \
            --property=ActiveEnterTimestamp \
            minecraft-server.service \
            | cut -d= -f2)" \
          +%s)

      # Calculate elapsed time
      serviceelapsedsec=$(( $(date +%s) - servicestartsec))

      # Ignore if service just started
      if [ $serviceelapsedsec -lt 180 ]
      then
        echo "Server was just started"
        exit 0
      fi

      PLAYERS=$(
        printf "list\n" \
          | ${pkgs.rcon.out}/bin/rcon -m \
            -H 127.0.0.1 -p ${builtins.toString rconPort} -P ${rconPassword} \
      )

      if echo "$PLAYERS" | grep "are 0 of a"
      then
        echo "Stopping server"
        systemctl stop minecraft-server.service
        systemctl stop hook-minecraft.service
        systemctl stop stop-minecraft.timer
      fi
    '';
  };

}
modules/hardware/server.nix (new file)
@@ -0,0 +1,7 @@
{ config, ... }: {

  # Servers need a bootloader or they won't start
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

}
@@ -1,5 +1,8 @@
 { config, pkgs, lib, ... }: {

+  # Required to place identity file on machine
+  imports = [ ../shell/age.nix ];
+
   options = {
     mailUser = lib.mkOption {
       type = lib.types.str;
@@ -73,8 +76,8 @@
         mu.enable = false;
         notmuch.enable = false;
         passwordCommand =
-          "${pkgs.age}/bin/age --decrypt --identity ${config.homePath}/.ssh/id_ed25519 ${
+          "${pkgs.age}/bin/age --decrypt --identity ${config.identityFile} ${
-            builtins.toString ./mailpass.age
+            builtins.toString ../../private/mailpass.age
           }";
         smtp = {
           host = "smtp.purelymail.com";
@@ -1,5 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 MgHaOw 8h/ESNjn0gknNXoHM34UobHzPgmRunoP97H+KHOuGQM
qowH+6TlCRECGCscRgKx6kswY+PZezYUD6E+x9e+5pM
--- kFj1JzRdh/D13Uq9aNTzMJIFysEE+kzzthjewOIR2+o
(binary payload, not representable as text)
@@ -14,8 +14,7 @@ M.packer = function(use)
       return vim.fn.executable(program) == 1
     end

-    local capabilities =
-      require("cmp_nvim_lsp").update_capabilities(vim.lsp.protocol.make_client_capabilities())
+    local capabilities = require("cmp_nvim_lsp").default_capabilities()
     if on_path("lua-language-server") then
       require("lspconfig").sumneko_lua.setup({
         capabilities = capabilities,
@@ -16,7 +16,29 @@ M.packer = function(use)
       vim.keymap.set("v", "<Leader>gd", gitsigns.diffthis)
       vim.keymap.set("n", "<Leader>rgf", gitsigns.reset_buffer)
       vim.keymap.set("v", "<Leader>hs", gitsigns.stage_hunk)
-      vim.keymap.set("v", "<Leader>hs", gitsigns.reset_hunk)
+      vim.keymap.set("v", "<Leader>hr", gitsigns.reset_hunk)
+
+      -- Navigation
+      vim.keymap.set("n", "]g", function()
+        if vim.wo.diff then
+          return "]g"
+        end
+        vim.schedule(function()
+          gitsigns.next_hunk()
+        end)
+        return "<Ignore>"
+      end, { expr = true })
+
+      vim.keymap.set("n", "[g", function()
+        if vim.wo.diff then
+          return "[g"
+        end
+        vim.schedule(function()
+          gitsigns.prev_hunk()
+        end)
+        return "<Ignore>"
+      end, { expr = true })
     end,
   })

@@ -102,7 +124,6 @@ M.packer = function(use)
       },
       view = {
         width = 30,
-        height = 30,
         hide_root_folder = false,
         side = "left",
         mappings = {
@@ -3,8 +3,9 @@
  options = {

    passwordHash = lib.mkOption {
-      type = lib.types.str;
+      type = lib.types.nullOr lib.types.str;
      description = "Password created with mkpasswd -m sha-512";
+      # Test it by running: mkpasswd -m sha-512 --salt "PZYiMGmJIIHAepTM"
    };

  };
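Making `passwordHash` nullable lets key-only hosts drop password login entirely, as the Oracle host above does. A minimal sketch of the two resulting modes (values illustrative):

```nix
{
  # Key-only server: no password, authenticate with the SSH public key
  passwordHash = null;
  publicKey = "ssh-ed25519 AAAA..."; # placeholder key

  # Or an interactive machine: supply a hash from `mkpasswd -m sha-512`
  # passwordHash = "$6$<salt>$<hashed-password>";
}
```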
@@ -2,7 +2,13 @@

   home-manager.users.${config.user} = {

-    home.packages = with pkgs; [ kubectl k9s ];
+    home.packages = with pkgs; [
+      kubectl # Basic Kubernetes queries
+      k9s # Terminal Kubernetes UI
+      kubernetes-helm # Helm CLI
+      fluxcd # Bootstrap clusters with Flux
+      kustomize # Kustomize CLI (for Flux)
+    ];

     programs.fish.shellAbbrs = {
       k = "kubectl";
modules/services/backups.nix (new file)
@@ -0,0 +1,66 @@
{ config, pkgs, lib, ... }: {

  imports = [ ./secrets.nix ];

  options = {

    backupS3 = {
      endpoint = lib.mkOption {
        type = lib.types.str;
        description = "S3 endpoint for backups";
      };
      bucket = lib.mkOption {
        type = lib.types.str;
        description = "S3 bucket for backups";
      };
      accessKeyId = lib.mkOption {
        type = lib.types.str;
        description = "S3 access key ID for backups";
      };
    };

  };

  config = {

    users.groups.backup = { };

    secrets.backup = {
      source = ../../private/backup.age;
      dest = "${config.secretsDirectory}/backup";
      group = "backup";
      permissions = "0440";
    };

    users.users.litestream.extraGroups = [ "backup" ];

    services.litestream = {
      enable = true;
      environmentFile = config.secrets.backup.dest;
    };

    # Wait for secret to exist
    systemd.services.litestream = {
      after = [ "backup-secret.service" ];
      requires = [ "backup-secret.service" ];
      environment.AWS_ACCESS_KEY_ID = config.backupS3.accessKeyId;
    };

    # # Backup library to object storage
    # services.restic.backups.calibre = {
    #   user = "calibre-web";
    #   repository =
    #     "s3://${config.backupS3.endpoint}/${config.backupS3.bucket}/calibre";
    #   paths = [
    #     "/var/books"
    #     "/var/lib/calibre-web/app.db"
    #     "/var/lib/calibre-web/gdrive.db"
    #   ];
    #   initialize = true;
    #   timerConfig = { OnCalendar = "00:05:00"; };
    #   environmentFile = backupS3File;
    # };

  };

}
modules/services/caddy.nix (new file)
@@ -0,0 +1,35 @@
{ config, pkgs, lib, ... }: {

  options = {
    caddyRoutes = lib.mkOption {
      type = lib.types.listOf lib.types.attrs;
      description = "Caddy JSON routes for http servers";
    };
    caddyBlocks = lib.mkOption {
      type = lib.types.listOf lib.types.attrs;
      description = "Caddy JSON error blocks for http servers";
      default = [ ];
    };
  };

  config = {

    services.caddy = {
      enable = true;
      adapter = "''"; # Required to enable JSON
      configFile = pkgs.writeText "Caddyfile" (builtins.toJSON {
        apps.http.servers.main = {
          listen = [ ":443" ];
          routes = config.caddyRoutes;
          errors.routes = config.caddyBlocks;
        };
      });

    };

    networking.firewall.allowedTCPPorts = [ 80 443 ];
    networking.firewall.allowedUDPPorts = [ 443 ];

  };

}
modules/services/calibre.nix (new file)
@@ -0,0 +1,74 @@
{ config, pkgs, lib, ... }: {

  imports = [ ./caddy.nix ./backups.nix ];

  options = {
    bookServer = lib.mkOption {
      type = lib.types.str;
      description = "Hostname for Calibre library";
    };
  };

  config = {

    services.calibre-web = {
      enable = true;
      openFirewall = true;
      options = {
        reverseProxyAuth.enable = false;
        enableBookConversion = true;
        enableBookUploading = true;
      };
    };

    # Fix: https://github.com/janeczku/calibre-web/issues/2422
    nixpkgs.overlays = [
      (final: prev: {
        calibre-web = prev.calibre-web.overrideAttrs (old: {
          patches = (old.patches or [ ])
            ++ [ ../../patches/calibre-web-cloudflare.patch ];
        });
      })
    ];

    caddyRoutes = [{
      match = [{ host = [ config.bookServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:8083"; }];
        headers.request.add."X-Script-Name" = [ "/calibre-web" ];
      }];
    }];

    # Run a backup on a schedule
    systemd.timers.calibre-backup = {
      timerConfig = {
        OnCalendar = "*-*-* 00:00:00"; # Once per day
        Unit = "calibre-backup.service";
      };
      wantedBy = [ "timers.target" ];
    };

    # Backup Calibre data to object storage
    systemd.services.calibre-backup =
      let libraryPath = "/var/lib/calibre-web"; # Default location
      in {
        description = "Backup Calibre data";
        environment.AWS_ACCESS_KEY_ID = config.backupS3.accessKeyId;
        serviceConfig = {
          Type = "oneshot";
          User = "calibre-web";
          Group = "backup";
          EnvironmentFile = config.secrets.backup.dest;
        };
        script = ''
          ${pkgs.awscli2}/bin/aws s3 sync \
            ${libraryPath}/ \
            s3://${config.backupS3.bucket}/calibre/ \
            --endpoint-url=https://${config.backupS3.endpoint}
        '';
      };

  };

}
modules/services/cloudflare.nix (new file)
@@ -0,0 +1,56 @@
# This module is necessary for hosts that are serving through Cloudflare.

{ ... }:

let

  cloudflareIpRanges = [

    # Cloudflare IPv4: https://www.cloudflare.com/ips-v4
    "173.245.48.0/20"
    "103.21.244.0/22"
    "103.22.200.0/22"
    "103.31.4.0/22"
    "141.101.64.0/18"
    "108.162.192.0/18"
    "190.93.240.0/20"
    "188.114.96.0/20"
    "197.234.240.0/22"
    "198.41.128.0/17"
    "162.158.0.0/15"
    "104.16.0.0/13"
    "104.24.0.0/14"
    "172.64.0.0/13"
    "131.0.72.0/22"

    # Cloudflare IPv6: https://www.cloudflare.com/ips-v6
    "2400:cb00::/32"
    "2606:4700::/32"
    "2803:f800::/32"
    "2405:b500::/32"
    "2405:8100::/32"
    "2a06:98c0::/29"
    "2c0f:f248::/32"

  ];

in {

  imports = [ ./caddy.nix ];

  config = {

    # Forces Caddy to error if coming from a non-Cloudflare IP
    caddyBlocks = [{
      match = [{ not = [{ remote_ip.ranges = cloudflareIpRanges; }]; }];
      handle = [{
        handler = "static_response";
        abort = true;
      }];
    }];

    # Allows Nextcloud to trust Cloudflare IPs
    services.nextcloud.config.trustedProxies = cloudflareIpRanges;

  };
}
93 modules/services/gitea.nix Normal file
@@ -0,0 +1,93 @@
{ config, lib, ... }:

let giteaPath = "/var/lib/gitea"; # Default service directory

in {

  imports = [ ./caddy.nix ./backups.nix ];

  options = {

    giteaServer = lib.mkOption {
      description = "Hostname for Gitea.";
      type = lib.types.str;
    };

  };

  config = {
    services.gitea = {
      enable = true;
      httpPort = 3001;
      httpAddress = "127.0.0.1";
      rootUrl = "https://${config.giteaServer}/";
      database.type = "sqlite3";
      settings = {
        repository = {
          DEFAULT_PUSH_CREATE_PRIVATE = true;
          DISABLE_HTTP_GIT = false;
          ACCESS_CONTROL_ALLOW_ORIGIN = config.giteaServer;
          ENABLE_PUSH_CREATE_USER = true;
          ENABLE_PUSH_CREATE_ORG = true;
          DEFAULT_BRANCH = "main";
        };
        server = {
          SSH_PORT = 22;
          START_SSH_SERVER = false; # Use sshd instead
          DISABLE_SSH = false;
          # SSH_LISTEN_HOST = "0.0.0.0";
          # SSH_LISTEN_PORT = 122;
        };
        service.DISABLE_REGISTRATION = true;
        session.COOKIE_SECURE = true;
        ui.SHOW_USER_EMAIL = false;
      };
      extraConfig = null;
    };

    networking.firewall.allowedTCPPorts = [ 122 ];

    caddyRoutes = [{
      match = [{ host = [ config.giteaServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:3001"; }];
      }];
    }];

    ## Backup config

    # Open to groups, allowing for backups
    systemd.services.gitea.serviceConfig.StateDirectoryMode =
      lib.mkForce "0770";
    systemd.tmpfiles.rules = [
      "d ${giteaPath}/data 0775 gitea gitea"
      "f ${giteaPath}/data/gitea.db 0660 gitea gitea"
    ];

    # Allow litestream and gitea to share a sqlite database
    users.users.litestream.extraGroups = [ "gitea" ];
    users.users.gitea.extraGroups = [ "litestream" ];

    # Backup sqlite database with litestream
    services.litestream = {
      settings = {
        dbs = [{
          path = "${giteaPath}/data/gitea.db";
          replicas = [{
            url =
              "s3://${config.backupS3.bucket}.${config.backupS3.endpoint}/gitea";
          }];
        }];
      };
    };

    # Don't start litestream unless gitea is up
    systemd.services.litestream = {
      after = [ "gitea.service" ];
      requires = [ "gitea.service" ];
    };

  };

}
75 modules/services/honeypot.nix Normal file
@@ -0,0 +1,75 @@
{ lib, pkgs, ... }:

# Currently has some issues that don't make this viable.

# Taken from:
# https://dataswamp.org/~solene/2022-09-29-iblock-implemented-in-nixos.html

# You will need to flush all rules when removing:
# https://serverfault.com/questions/200635/best-way-to-clear-all-iptables-rules

let

  portsToBlock = [ 25545 25565 25570 ];
  portsString =
    builtins.concatStringsSep "," (builtins.map builtins.toString portsToBlock);

  # Block IPs for 20 days
  expire = 60 * 60 * 24 * 20;

  rules = table: [
    "INPUT -i eth0 -p tcp -m multiport --dports ${portsString} -m state --state NEW -m recent --set"
    "INPUT -i eth0 -p tcp -m multiport --dports ${portsString} -m state --state NEW -m recent --update --seconds 10 --hitcount 1 -j SET --add-set ${table} src"
    "INPUT -i eth0 -p tcp -m set --match-set ${table} src -j nixos-fw-refuse"
    "INPUT -i eth0 -p udp -m set --match-set ${table} src -j nixos-fw-refuse"
  ];

  create-rules = lib.concatStringsSep "\n"
    (builtins.map (rule: "iptables -C " + rule + " || iptables -A " + rule)
      (rules "blocked") ++ builtins.map
      (rule: "ip6tables -C " + rule + " || ip6tables -A " + rule)
      (rules "blocked6"));

  delete-rules = lib.concatStringsSep "\n"
    (builtins.map (rule: "iptables -C " + rule + " && iptables -D " + rule)
      (rules "blocked") ++ builtins.map
      (rule: "ip6tables -C " + rule + " && ip6tables -D " + rule)
      (rules "blocked6"));

in {

  networking.firewall = {

    extraPackages = [ pkgs.ipset ];
    # allowedTCPPorts = portsToBlock;

    # Restore ban list when starting up
    extraCommands = ''
      if test -f /var/lib/ipset.conf
      then
        ipset restore -! < /var/lib/ipset.conf
      else
        ipset -exist create blocked hash:ip ${
          if expire > 0 then "timeout ${toString expire}" else ""
        }
        ipset -exist create blocked6 hash:ip family inet6 ${
          if expire > 0 then "timeout ${toString expire}" else ""
        }
      fi
      ${create-rules}
    '';

    # Save list when shutting down
    extraStopCommands = ''
      ipset -exist create blocked hash:ip ${
        if expire > 0 then "timeout ${toString expire}" else ""
      }
      ipset -exist create blocked6 hash:ip family inet6 ${
        if expire > 0 then "timeout ${toString expire}" else ""
      }
      ipset save > /var/lib/ipset.conf
      ${delete-rules}
    '';
  };

}
28 modules/services/jellyfin.nix Normal file
@@ -0,0 +1,28 @@
{ config, lib, ... }: {

  options = {
    streamServer = lib.mkOption {
      type = lib.types.str;
      description = "Hostname for Jellyfin library";
    };
  };

  config = {

    services.jellyfin.enable = true;

    caddyRoutes = [{
      match = [{ host = [ config.streamServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:8096"; }];
      }];
    }];

    # Create videos directory, allow anyone in Jellyfin group to manage it
    systemd.tmpfiles.rules =
      [ "d /var/lib/jellyfin/library 0775 jellyfin jellyfin" ];

  };

}
14 modules/services/netdata.nix Normal file
@@ -0,0 +1,14 @@
{ config, pkgs, lib, ... }: {

  config = {

    services.netdata = {
      enable = true;

      # Disable local dashboard (unsecured)
      config = { web.mode = "none"; };
    };

  };

}
87 modules/services/nextcloud.nix Normal file
@@ -0,0 +1,87 @@
{ config, pkgs, lib, ... }: {

  imports = [ ./caddy.nix ./secrets.nix ./backups.nix ];

  options = {

    nextcloudServer = lib.mkOption {
      type = lib.types.str;
      description = "Hostname for Nextcloud";
    };

  };

  config = {

    services.nextcloud = {
      enable = true;
      package = pkgs.nextcloud24; # Required to specify
      https = true;
      hostName = "localhost";
      maxUploadSize = "50G";
      config = {
        adminpassFile = config.secrets.nextcloud.dest;
        extraTrustedDomains = [ config.nextcloudServer ];
      };
    };

    # Don't let Nginx use main ports (using Caddy instead)
    services.nginx.virtualHosts."localhost".listen = [{
      addr = "127.0.0.1";
      port = 8080;
    }];

    # Point Caddy to Nginx
    caddyRoutes = [{
      match = [{ host = [ config.nextcloudServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:8080"; }];
      }];
    }];

    # Create credentials file for nextcloud
    secrets.nextcloud = {
      source = ../../private/nextcloud.age;
      dest = "${config.secretsDirectory}/nextcloud";
      owner = "nextcloud";
      group = "nextcloud";
      permissions = "0440";
    };
    systemd.services.nextcloud-secret = {
      requiredBy = [ "nextcloud-setup.service" ];
      before = [ "nextcloud-setup.service" ];
    };

    ## Backup config

    # Open to groups, allowing for backups
    systemd.services.phpfpm-nextcloud.serviceConfig.StateDirectoryMode =
      lib.mkForce "0770";

    # Allow litestream and nextcloud to share a sqlite database
    users.users.litestream.extraGroups = [ "nextcloud" ];
    users.users.nextcloud.extraGroups = [ "litestream" ];

    # Backup sqlite database with litestream
    services.litestream = {
      settings = {
        dbs = [{
          path = "${config.services.nextcloud.datadir}/data/nextcloud.db";
          replicas = [{
            url =
              "s3://${config.backupS3.bucket}.${config.backupS3.endpoint}/nextcloud";
          }];
        }];
      };
    };

    # Don't start litestream unless nextcloud is up
    systemd.services.litestream = {
      after = [ "phpfpm-nextcloud.service" ];
      requires = [ "phpfpm-nextcloud.service" ];
    };

  };

}
30 modules/services/prometheus.nix Normal file
@@ -0,0 +1,30 @@
{ config, pkgs, lib, ... }: {

  options.metricsServer = lib.mkOption {
    type = lib.types.str;
    description = "Hostname of the Grafana server.";
  };

  config = {

    services.grafana.enable = true;
    services.prometheus = {
      enable = true;
      exporters.node.enable = true;
      scrapeConfigs = [{
        job_name = "local";
        static_configs = [{ targets = [ "127.0.0.1:9100" ]; }];
      }];
    };

    caddyRoutes = [{
      match = [{ host = [ config.metricsServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:3000"; }];
      }];
    }];

  };

}
91 modules/services/secrets.nix Normal file
@@ -0,0 +1,91 @@
# Secrets management method taken from here:
# https://xeiaso.net/blog/nixos-encrypted-secrets-2021-01-20

# In my case, I pre-encrypt my secrets and commit them to git.

{ config, pkgs, lib, ... }: {

  options = {

    secretsDirectory = lib.mkOption {
      type = lib.types.str;
      description = "Default path to place secrets.";
      default = "/var/private";
    };

    secrets = lib.mkOption {
      type = lib.types.attrsOf (lib.types.submodule {
        options = {
          source = lib.mkOption {
            type = lib.types.path;
            description = "Path to encrypted secret.";
          };
          dest = lib.mkOption {
            type = lib.types.str;
            description = "Resulting path for decrypted secret.";
          };
          owner = lib.mkOption {
            default = "root";
            type = lib.types.str;
            description = "User to own the secret.";
          };
          group = lib.mkOption {
            default = "root";
            type = lib.types.str;
            description = "Group to own the secret.";
          };
          permissions = lib.mkOption {
            default = "0400";
            type = lib.types.str;
            description = "Permissions expressed as octal.";
          };
        };
      });
      description = "Set of secrets to decrypt to disk.";
      default = { };
    };

  };

  config = {

    # Create a default directory to place secrets

    systemd.tmpfiles.rules = [ "d ${config.secretsDirectory} 0755 root wheel" ];

    # Declare oneshot service to decrypt secret using SSH host key
    # - Requires that the secret is already encrypted for the host
    # - Encrypt secrets: nix run github:nmasur/dotfiles#encrypt-secret

    systemd.services = lib.mapAttrs' (name: attrs: {
      name = "${name}-secret";
      value = {

        description = "Decrypt secret for ${name}";
        wantedBy = [ "multi-user.target" ];
        serviceConfig.Type = "oneshot";
        script = ''
          ${pkgs.age}/bin/age --decrypt \
            --identity ${config.identityFile} \
            --output ${attrs.dest} \
            ${attrs.source}

          chown '${attrs.owner}':'${attrs.group}' '${attrs.dest}'
          chmod '${attrs.permissions}' '${attrs.dest}'
        '';

      };
    }) config.secrets;

    # Example declaration
    # config.secrets.my-secret = {
    #   source = ../../private/my-secret.age;
    #   dest = "/var/lib/private/my-secret";
    #   owner = "my-app";
    #   group = "my-app";
    #   permissions = "0440";
    # };

  };

}
33 modules/services/sshd.nix Normal file
@@ -0,0 +1,33 @@
{ config, pkgs, lib, ... }: {

  options = {
    publicKey = lib.mkOption {
      type = lib.types.str;
      description = "Public SSH key authorized for this system.";
    };
    permitRootLogin = lib.mkOption {
      type = lib.types.str;
      description = "Root login settings.";
      default = "no";
    };
  };

  config = {
    services.openssh = {
      enable = true;
      ports = [ 22 ];
      passwordAuthentication = false;
      gatewayPorts = "no";
      forwardX11 = false;
      allowSFTP = true;
      permitRootLogin = config.permitRootLogin;
    };

    users.users.${config.user}.openssh.authorizedKeys.keys =
      [ config.publicKey ];

    # Implement a simple fail2ban service for sshd
    services.sshguard.enable = true;
  };

}
79 modules/services/transmission.nix Normal file
@@ -0,0 +1,79 @@
{ config, pkgs, lib, ... }: {

  imports = [ ./wireguard.nix ./secrets.nix ];

  options = {
    transmissionServer = lib.mkOption {
      type = lib.types.str;
      description = "Hostname for Transmission";
    };
  };

  config = let
    namespace = config.networking.wireguard.interfaces.wg0.interfaceNamespace;
    vpnIp = lib.strings.removeSuffix "/32"
      (builtins.head config.networking.wireguard.interfaces.wg0.ips);
  in {

    # Setup transmission
    services.transmission = {
      enable = true;
      settings = {
        port-forwarding-enabled = false;
        rpc-authentication-required = true;
        rpc-port = 9091;
        rpc-bind-address = "0.0.0.0";
        rpc-username = config.user;
        rpc-host-whitelist = config.transmissionServer;
        rpc-host-whitelist-enabled = true;
        rpc-whitelist = "127.0.0.1,${vpnIp}";
        rpc-whitelist-enabled = true;
      };
      credentialsFile = config.secrets.transmission.dest;
    };

    # Bind transmission to wireguard namespace
    systemd.services.transmission = {
      bindsTo = [ "netns@${namespace}.service" ];
      requires = [ "network-online.target" "transmission-secret.service" ];
      after = [ "wireguard-wg0.service" "transmission-secret.service" ];
      unitConfig.JoinsNamespaceOf = "netns@${namespace}.service";
      serviceConfig.NetworkNamespacePath = "/var/run/netns/${namespace}";
    };

    # Create reverse proxy for web UI
    caddyRoutes = [{
      match = [{ host = [ config.transmissionServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:9091"; }];
      }];
    }];

    # Allow inbound connections to reach namespace
    systemd.services.transmission-web-netns = {
      description = "Forward to transmission in wireguard namespace";
      requires = [ "transmission.service" ];
      after = [ "transmission.service" ];
      serviceConfig = {
        Restart = "on-failure";
        TimeoutStopSec = 300;
      };
      wantedBy = [ "multi-user.target" ];
      script = ''
        ${pkgs.iproute2}/bin/ip netns exec ${namespace} ${pkgs.iproute2}/bin/ip link set dev lo up
        ${pkgs.socat}/bin/socat tcp-listen:9091,fork,reuseaddr exec:'${pkgs.iproute2}/bin/ip netns exec ${namespace} ${pkgs.socat}/bin/socat STDIO "tcp-connect:${vpnIp}:9091"',nofork
      '';
    };

    # Create credentials file for transmission
    secrets.transmission = {
      source = ../../private/transmission.json.age;
      dest = "${config.secretsDirectory}/transmission.json";
      owner = "transmission";
      group = "transmission";
    };

  };

}
123 modules/services/vaultwarden.nix Normal file
@@ -0,0 +1,123 @@
{ config, pkgs, lib, ... }:

let vaultwardenPath = "/var/lib/bitwarden_rs"; # Default service directory

in {

  imports = [ ./caddy.nix ./secrets.nix ./backups.nix ];

  options = {

    vaultwardenServer = lib.mkOption {
      description = "Hostname for Vaultwarden.";
      type = lib.types.str;
    };

  };

  config = {
    services.vaultwarden = {
      enable = true;
      config = {
        DOMAIN = "https://${config.vaultwardenServer}";
        SIGNUPS_ALLOWED = false;
        SIGNUPS_VERIFY = true;
        INVITATIONS_ALLOWED = true;
        WEB_VAULT_ENABLED = true;
        ROCKET_ADDRESS = "127.0.0.1";
        ROCKET_PORT = 8222;
        WEBSOCKET_ENABLED = true;
        WEBSOCKET_ADDRESS = "0.0.0.0";
        WEBSOCKET_PORT = 3012;
        LOGIN_RATELIMIT_SECONDS = 60;
        LOGIN_RATELIMIT_MAX_BURST = 10;
        ADMIN_RATELIMIT_SECONDS = 300;
        ADMIN_RATELIMIT_MAX_BURST = 3;
      };
      environmentFile = config.secrets.vaultwarden.dest;
      dbBackend = "sqlite";
    };

    secrets.vaultwarden = {
      source = ../../private/vaultwarden.age;
      dest = "${config.secretsDirectory}/vaultwarden";
      owner = "vaultwarden";
      group = "vaultwarden";
    };

    networking.firewall.allowedTCPPorts = [ 3012 ];

    caddyRoutes = [{
      match = [{ host = [ config.vaultwardenServer ]; }];
      handle = [{
        handler = "reverse_proxy";
        upstreams = [{ dial = "localhost:8222"; }];
      }];
    }];

    ## Backup config

    # Open to groups, allowing for backups
    systemd.services.vaultwarden.serviceConfig.StateDirectoryMode =
      lib.mkForce "0770";
    systemd.tmpfiles.rules = [
      "f ${vaultwardenPath}/db.sqlite3 0660 vaultwarden vaultwarden"
      "f ${vaultwardenPath}/db.sqlite3-shm 0660 vaultwarden vaultwarden"
      "f ${vaultwardenPath}/db.sqlite3-wal 0660 vaultwarden vaultwarden"
    ];

    # Allow litestream and vaultwarden to share a sqlite database
    users.users.litestream.extraGroups = [ "vaultwarden" ];
    users.users.vaultwarden.extraGroups = [ "litestream" ];

    # Backup sqlite database with litestream
    services.litestream = {
      settings = {
        dbs = [{
          path = "${vaultwardenPath}/db.sqlite3";
          replicas = [{
            url =
              "s3://${config.backupS3.bucket}.${config.backupS3.endpoint}/vaultwarden";
          }];
        }];
      };
    };

    # Don't start litestream unless vaultwarden is up
    systemd.services.litestream = {
      after = [ "vaultwarden.service" ];
      requires = [ "vaultwarden.service" ];
    };

    # Run a separate file backup on a schedule
    systemd.timers.vaultwarden-backup = {
      timerConfig = {
        OnCalendar = "*-*-* 06:00:00"; # Once per day
        Unit = "vaultwarden-backup.service";
      };
      wantedBy = [ "timers.target" ];
    };

    # Backup other Vaultwarden data to object storage
    systemd.services.vaultwarden-backup = {
      description = "Backup Vaultwarden files";
      environment.AWS_ACCESS_KEY_ID = config.backupS3.accessKeyId;
      serviceConfig = {
        Type = "oneshot";
        User = "vaultwarden";
        Group = "backup";
        EnvironmentFile = config.secrets.backup.dest;
      };
      script = ''
        ${pkgs.awscli2}/bin/aws s3 sync \
          ${vaultwardenPath}/ \
          s3://${config.backupS3.bucket}/vaultwarden/ \
          --endpoint-url=https://${config.backupS3.endpoint} \
          --exclude "*db.sqlite3*" \
          --exclude ".db.sqlite3*"
      '';
    };

  };

}
@@ -1,18 +1,44 @@
-{ ... }: {
+{ config, pkgs, lib, ... }: {
+
+  imports = [ ./secrets.nix ];
+
+  config = {
+
     networking.wireguard = {
       enable = true;
       interfaces = {
         wg0 = {
-          ips = [ "10.66.127.235/32" "fc00:bbbb:bbbb:bb01::3:7fea/128" ];
-          generatePrivateKeyFile = true;
-          privateKeyFile = "/private/wireguard/wg0";
-          peers = [{
-            publicKey = "cVDIYPzNChIeANp+0jE12kWM5Ga1MbmNErT1Pmaf12A=";
-            allowedIPs = [ "0.0.0.0/0" "::0/0" ];
-            endpoint = "89.46.62.197:51820";
-            persistentKeepalive = 25;
-          }];
+
+          # Establishes identity of this machine
+          generatePrivateKeyFile = false;
+          privateKeyFile = config.secrets.wireguard.dest;
+
+          # Move to network namespace for isolating programs
+          interfaceNamespace = "wg";
+
         };
       };
     };
+
+    # Create namespace for Wireguard
+    # This allows us to isolate specific programs to Wireguard
+    systemd.services."netns@" = {
+      description = "%I network namespace";
+      before = [ "network.target" ];
+      serviceConfig = {
+        Type = "oneshot";
+        RemainAfterExit = true;
+        ExecStart = "${pkgs.iproute2}/bin/ip netns add %I";
+        ExecStop = "${pkgs.iproute2}/bin/ip netns del %I";
+      };
+    };
+
+    # Create private key file for wireguard
+    secrets.wireguard = {
+      source = ../../private/wireguard.age;
+      dest = "${config.secretsDirectory}/wireguard";
+    };
+
+  };
+
 }
@@ -1,5 +0,0 @@
-{ config, pkgs, ... }: {
-
-  home-manager.users.${config.user}.home.packages = with pkgs; [ age ];
-
-}
13 modules/shell/charm.nix Normal file
@@ -0,0 +1,13 @@
{ config, pkgs, ... }: {

  home-manager.users.${config.user} = {

    home.packages = with pkgs; [
      glow # Markdown previews
      skate # Key-value store
      charm # Manage account and filesystem
    ];

  };

}
@@ -1,6 +1,6 @@
 { ... }: {
   imports = [
-    ./age.nix
+    ./charm.nix
     ./direnv.nix
     ./fish
     ./fzf.nix
@@ -1,10 +1,11 @@
-{ config, pkgs, ... }: {
+{ config, pkgs, lib, ... }: {
   home-manager.users.${config.user} = {

     programs.fish = {
       shellAbbrs = {
         n = "nix";
-        ns = "nix-shell --run fish -p";
+        ns = "nix-shell -p";
+        nsf = "nix-shell --run fish -p";
         nsr = "nix-shell-run";
         nps = "nix repl '<nixpkgs>'";
         nixo = "man configuration.nix";
@@ -38,7 +39,7 @@
         set option "--option substitute false"
       end
       git -C ${config.dotfilesPath} add --intent-to-add --all
-      commandline -r "doas nixos-rebuild switch $option --flake ${config.dotfilesPath}"
+      commandline -r "doas nixos-rebuild switch $option --flake ${config.dotfilesPath}#${config.networking.hostName}"
       commandline --function execute
     '';
   };
@@ -9,6 +9,7 @@
         "$git_branch"
         "$git_commit"
         "$git_status"
+        "$hostname"
         "$cmd_duration"
         "$character"
       ];
@@ -47,6 +48,10 @@
         deleted = "✘";
         style = "red";
       };
+      hostname = {
+        ssh_only = true;
+        format = "on [$hostname](bold red) ";
+      };
       nix_shell = {
         format = "[$symbol $name]($style)";
         symbol = "❄️";
@@ -31,6 +31,8 @@ in {
       vimv-rs # Batch rename files
       dig # DNS lookup
       lf # File viewer
+      whois # Lookup IPs
+      age # Encryption
     ];

     programs.zoxide.enable = true; # Shortcut jump command
25 patches/calibre-web-cloudflare.patch Normal file
@@ -0,0 +1,25 @@
diff --git a/cps/__init__.py b/cps/__init__.py
index 0b912d23..ad5d1fa9 100644
--- a/cps/__init__.py
+++ b/cps/__init__.py
@@ -83,7 +83,6 @@ app.config.update(
 lm = MyLoginManager()
 lm.login_view = 'web.login'
 lm.anonymous_user = ub.Anonymous
-lm.session_protection = 'strong'

 if wtf_present:
     csrf = CSRFProtect()
diff --git a/cps/admin.py b/cps/admin.py
index 1004ee78..e295066e 100644
--- a/cps/admin.py
+++ b/cps/admin.py
@@ -98,8 +98,6 @@ def before_request():
     # make remember me function work
     if current_user.is_authenticated:
         confirm_login()
-        if not ub.check_user_session(current_user.id, flask_session.get('_id')) and 'opds' not in request.path:
-            logout_user()
     g.constants = constants
     g.user = current_user
     g.allow_registration = config.config_public_reg
10 private/backup.age Normal file
@@ -0,0 +1,10 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBmVEo2
bExsZERhYi9vVXMxVThRK2w3dFR4UlZVcGlsWUFPM3pReTQwaW5ZCjQ5Z3g3amZC
bWUwWkdKTStVbFpwMmdwK3pQQU5CeE5tMVNHbXI1UkdCTFUKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIE9sTG1lOHIyVGdLNWtJRTZtdGNWWEFsTTJ5bE1HS1V2MEdKeGNN
WFMyV28KVlRHdDg5SGFadVlJempKWkp6eEp6TkhINnl0R0xDL0J0WXByclpFWE5I
VQotLS0gVVhaUDZLTy8xS3hKOVliSlpuTEY2Q2xOQUEvblBtUG9Vb0I5ZE1oOUZ1
VQr18Jwx6XDa7bwq0QWT6NdIFzqNUHWhDyUvS9twncFsr0yEAUDQd2XLtE+Vc8T9
Z7y/C8Ct5+duqd6YaeqROJz5zVj0NnI0lshirBl89PQWF9ihp4V4Hw==
-----END AGE ENCRYPTED FILE-----
10 private/mailpass.age Normal file
@@ -0,0 +1,10 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBIRnEy
am1HTXptMmpSTjZQa2hQSUxNUU1rdXlod3U3bVZ0VGxQVlE2WldBClg0K3k5MDZH
NFlPdHI0VnZSZE9DTTNMeDdldUpFQ3V0V0k0RnRIZHFhdzAKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIFlxZFpqNU5kNVY2VUk0Um0zZ1d1M2FlRkYvV1BoTEFSNjZ2Vk9I
QTVHM0UKY2gvVU9wckVUNEFwdUwyVFJZUGwxOFFKYm12cUlFTEVrb3IvcXI3TnND
UQotLS0gMHdaajFjV2ozd0g5dWN5YkhiU2NBVWZVSU00aVIzY0VKYjJleVlQTUdX
QQo7rH6kOTRFP43U/qiBOCHx+hBGlaODFRS1CgzkuqfMOq8PM28RsIN+l3sbwjxE
W8chE/A0EChjIDtfYTMgsN3cYg==
-----END AGE ENCRYPTED FILE-----
10 private/nextcloud.age Normal file
@@ -0,0 +1,10 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBudkxn
ZzU1YVViYUZBWVVTYm1SeHpvanQ5M0YwVGo2YldlN2RwY0tscWpBCjd0ZmtLZ2th
dEMrQk5QV0EzT0RpVkg5bGo1cHdTNzVYVkZpVzE4aHR0azgKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIFlqaEI2QUNnMjR1T0FENXJIMEJWOUFJUXZ4SlJxbUFnQktWUW9w
UFlmUUkKL1RwaWxoNFM4SkpadWtyN3JnWHdjVTYrQmo0dU9JUnp0MjN5enVsUm9o
ZwotLS0gWUd2eTR2VGkyeTZ5cHNuanMrSlZKVmc4T1ZORExmUnhDSjN0NEJkNjkz
cwriuyYCgvJe7TRi3n/JwxIRKMsoh7+xj4B5Fdxuj3BOtKVi1geSjlDHVklRwu9Y
IMCTLqQtj08JnuLfDezRGHAYCM8=
-----END AGE ENCRYPTED FILE-----
10 private/transmission.json.age Normal file
@@ -0,0 +1,10 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBSYUU2
OWp1ZDRLVTJrR1k3SVdXZnRPN3RUNDY5RFM2WEZaTzRmdU1zSWdrCjV1VHpNMG81
VHA4LzdsN3FpOUNoTGNlWmlHS3E4dTVvWTVoZHJMSlNYTHMKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIDVjM1JmclgxQThKcU1XQWptWmN0MjlKU1NvMEpwMnYyd3Y4czBT
RTVkQ0UKc0pOYkRxZldsWnloQnBYMWk1eFU0M3R5SkZVTUYyaldIcENONE1PWVJv
NAotLS0gclZDQndaREZpZ2Z0R0d0alBPeW1tZFVOVHhSaHNlQTRXdTRoZmFDUFFK
SQqueOUzTFuhSryWW4Do+NAUcq2YdOtN8gmP5Zcp1oMe/9+JIs6Upjsc3eWn+dSA
7QwbGlTyd6D0+PLJxHA18Xfgpj5owGeTDtwykFPgdO1BjE8C3KlgzUfN
-----END AGE ENCRYPTED FILE-----
11 private/vaultwarden.age Normal file
@@ -0,0 +1,11 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBqNm0x
YVc0bXp6eldNdkp1QWk2cEI0WFBhVVd3cHhDODNwMS9UUTBPN25JCmxXZnRIcFZr
SFJrQnI3R1BTUk1BcVl3RjlUaXMzSXpqaGdTMi9reno1eHcKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIFlKWCtsWGtWdTI4L0ZFTVRHNFN5by9vTE95MXFoMVZGYlYrM1I2
alREaE0Kd251SGRDdE96VmZqblhEWXFkZDhvRUZsZ1pnZ3NqdEdJSlBvaXhoOHVB
WQotLS0gaGJNRm14SkdXcTFmYlJUell1WUZUeEllT3ZwMkNaejF3eWJ5U1ZSdno1
MAqQIT8vvUro+C+avm6lCPfrX9yigKzx/gtKfMB//1Ie7BUo1+o5iYoA+R0luMU8
/zVX1yGAzDPqas/HfYclIPg3bdjm2dnpz0ltOrOvjA4x3nEzzrmS96zo3Fy1d8oX
oAMw2l/p2QDHI60cyhvC
-----END AGE ENCRYPTED FILE-----
10 private/wireguard.age Normal file
@@ -0,0 +1,10 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IE1nSGFPdyBOOXNm
VG5EMHhEU2JLbkYyY1VXdXZJd2VxSEVXUjZaaURnU254QUVzUENzCnhnV21oRFNY
NGpMeXlqdDlYRmltN1cxTlJ3eWFTVElpK0ZBalA3QVFoL2MKLT4gc3NoLWVkMjU1
MTkgWXlTVU1RIDk3TVhDVVBjQU5XNjVTbkxKdUNEU25uZXREeEpHcTF4STg4VXR1
V2xzRTQKZTBXZUQrbjIwTDEwOEc3MktpQzBjTzhjS3lTNTJ0TEMyMVBOODQ0N0lt
OAotLS0gODA2L2FpSmxiWDAyM1IvM2Q4U2QrNmRkVjl1bFhURW5sNCtWZ2tiMnZU
YwoC0chavNt+a/AImm/7bNheZIPghrobp9g+ga+UpRWBtM2snpkyFZrBR0qAkw/f
3krp5Rrco7IOlEwWx96UzvAUpKlC7CdVI1MFa76ZUg==
-----END AGE ENCRYPTED FILE-----