Compare commits

No commits in common. "master" and "redis/connection-pool" have entirely different histories.

Comparing master...redis/connection-pool

.drone.yml (10 changed lines)
```diff
@@ -5,11 +5,15 @@ name: default
 steps:
   - name: Run Tests and Generate Coverage Report
     commands:
-      - nix flake check -L
+      - kill $(ps aux | grep '[r]edis-server 127.0.0.1:6389' | awk '{print $2}')
+      - redis-server --bind 127.0.0.1 --port 6389 >/dev/null &
+      - coverage run -m pytest -q
+      - coverage xml
       - sonar-scanner -Dsonar.projectKey=SelfPrivacy-REST-API -Dsonar.sources=. -Dsonar.host.url=http://analyzer.lan:9000 -Dsonar.login="$SONARQUBE_TOKEN"
     environment:
       SONARQUBE_TOKEN:
         from_secret: SONARQUBE_TOKEN
+      USE_REDIS_PORT: 6389

   - name: Run Bandit Checks
@@ -22,7 +26,3 @@ steps:

 node:
   server: builder

-trigger:
-  event:
-    - push
```
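The hunk above wires a fixed Redis port into the CI step through a `USE_REDIS_PORT` environment variable. A minimal sketch of how test code could consume that variable (the helper name `redis_port` is hypothetical; only the variable name and value come from the diff):

```python
import os

def redis_port() -> int:
    # Read the port exported by the CI step above; fall back to the
    # standard Redis port 6379 when the variable is absent.
    return int(os.environ.get("USE_REDIS_PORT", "6379"))

# Mimic the CI environment from the .drone.yml hunk:
os.environ["USE_REDIS_PORT"] = "6389"
print(redis_port())  # 6389
```

Keeping the port in one environment variable lets the same test suite run against the CI instance on 6389 or a local default instance without code changes.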
.flake8 (4 changed lines)

```diff
@@ -1,4 +0,0 @@
-[flake8]
-max-line-length = 80
-select = C,E,F,W,B,B950
-extend-ignore = E203, E501
```
```diff
@@ -147,7 +147,3 @@ cython_debug/
 # End of https://www.toptal.com/developers/gitignore/api/flask

 *.db
-*.rdb
-
-/result
-/.nixos-test-history
```
```diff
@@ -1,6 +1,3 @@
 [MASTER]
 init-hook="from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))"
 extension-pkg-whitelist=pydantic
-
-[FORMAT]
-max-line-length=88
```
README.md (92 changed lines)

```diff
@@ -1,92 +0,0 @@
-# SelfPrivacy GraphQL API which allows app to control your server
-
-![CI status](https://ci.selfprivacy.org/api/badges/SelfPrivacy/selfprivacy-rest-api/status.svg)
-
-## Build
-
-```console
-$ nix build
-```
-
-In case of successful build, you should get the `./result` symlink to a folder (in `/nix/store`) with build contents.
-
-## Develop
-
-```console
-$ nix develop
-[SP devshell:/dir/selfprivacy-rest-api]$ python
-Python 3.10.13 (main, Aug 24 2023, 12:59:26) [GCC 12.3.0] on linux
-Type "help", "copyright", "credits" or "license" for more information.
-(ins)>>>
-```
-
-If you don't have experimental flakes enabled, you can use the following command:
-
-```console
-$ nix --extra-experimental-features nix-command --extra-experimental-features flakes develop
-```
-
-## Testing
-
-Run the test suite by running coverage with pytest inside an ephemeral NixOS VM with redis service enabled:
-```console
-$ nix flake check -L
-```
-
-Run the same test suite, but additionally create `./result/coverage.xml` in the current directory:
-```console
-$ nix build .#checks.x86_64-linux.default -L
-```
-
-Alternatively, just print the path to `/nix/store/...coverage.xml` without creating any files in the current directory:
-```console
-$ nix build .#checks.x86_64-linux.default -L --print-out-paths --no-link
-```
-
-Run the same test suite with arbitrary pytest options:
-```console
-$ pytest-vm.sh # specify pytest options here, e.g. `--last-failed`
-```
-When running using the script, pytest cache is preserved between runs in `.pytest_cache` folder.
-NixOS VM state temporary resides in `${TMPDIR:=/tmp}/nixos-vm-tmp-dir/vm-state-machine` during the test.
-Git workdir directory is shared read-write with VM via `.nixos-vm-tmp-dir/shared-xchg` symlink. VM accesses workdir contents via `/tmp/shared` mount point and `/root/source` symlink.
-
-Launch VM and execute commands manually either in Linux console (user `root`) or using python NixOS tests driver API (refer to [NixOS documentation](https://nixos.org/manual/nixos/stable/#ssec-machine-objects)):
-```console
-$ nix run .#checks.x86_64-linux.default.driverInteractive
-```
-
-You can add `--keep-vm-state` in order to keep VM state between runs:
-```console
-$ TMPDIR=".nixos-vm-tmp-dir" nix run .#checks.x86_64-linux.default.driverInteractive --keep-vm-state
-```
-
-Option `-L`/`--print-build-logs` is optional for all nix commands. It tells nix to print each log line one after another instead of overwriting a single one.
-
-## Dependencies and Dependant Modules
-
-This flake depends on a single Nix flake input - nixpkgs repository. nixpkgs repository is used for all software packages used to build, run API service, tests, etc.
-
-In order to synchronize nixpkgs input with the same from selfprivacy-nixos-config repository, use this command:
-
-```console
-$ nix flake lock --override-input nixpkgs nixpkgs --inputs-from git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=BRANCH
-```
-
-Replace BRANCH with the branch name of selfprivacy-nixos-config repository you want to sync with. During development nixpkgs input update might be required in both selfprivacy-rest-api and selfprivacy-nixos-config repositories simultaneously. So, a new feature branch might be temporarily used until selfprivacy-nixos-config gets the feature branch merged.
-
-Show current flake inputs (e.g. nixpkgs):
-```console
-$ nix flake metadata
-```
-
-Show selfprivacy-nixos-config Nix flake inputs (including nixpkgs):
-```console
-$ nix flake metadata git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=BRANCH
-```
-
-Nix code for NixOS service module for API is located in NixOS configuration repository.
-
-## Troubleshooting
-
-Sometimes commands inside `nix develop` refuse to work properly if the calling shell lacks `LANG` environment variable. Try to set it before entering `nix develop`.
```
```diff
@@ -0,0 +1,64 @@
+{ lib, python39Packages }:
+with python39Packages;
+buildPythonApplication {
+  pname = "selfprivacy-api";
+  version = "2.0.0";
+
+  propagatedBuildInputs = [
+    setuptools
+    portalocker
+    pytz
+    pytest
+    pytest-mock
+    pytest-datadir
+    huey
+    gevent
+    mnemonic
+    pydantic
+    typing-extensions
+    psutil
+    fastapi
+    uvicorn
+    (buildPythonPackage rec {
+      pname = "strawberry-graphql";
+      version = "0.123.0";
+      format = "pyproject";
+      patches = [
+        ./strawberry-graphql.patch
+      ];
+      propagatedBuildInputs = [
+        typing-extensions
+        python-multipart
+        python-dateutil
+        # flask
+        pydantic
+        pygments
+        poetry
+        # flask-cors
+        (buildPythonPackage rec {
+          pname = "graphql-core";
+          version = "3.2.0";
+          format = "setuptools";
+          src = fetchPypi {
+            inherit pname version;
+            sha256 = "sha256-huKgvgCL/eGe94OI3opyWh2UKpGQykMcJKYIN5c4A84=";
+          };
+          checkInputs = [
+            pytest-asyncio
+            pytest-benchmark
+            pytestCheckHook
+          ];
+          pythonImportsCheck = [
+            "graphql"
+          ];
+        })
+      ];
+      src = fetchPypi {
+        inherit pname version;
+        sha256 = "KsmZ5Xv8tUg6yBxieAEtvoKoRG60VS+iVGV0X6oCExo=";
+      };
+    })
+  ];
+
+  src = ./.;
+}
```
default.nix (31 changed lines)

```diff
@@ -1,29 +1,2 @@
-{ pythonPackages, rev ? "local" }:
+{ pkgs ? import <nixpkgs> {} }:

+pkgs.callPackage ./api.nix {}
-pythonPackages.buildPythonPackage rec {
-  pname = "selfprivacy-graphql-api";
-  version = rev;
-  src = builtins.filterSource (p: t: p != ".git" && t != "symlink") ./.;
-  propagatedBuildInputs = with pythonPackages; [
-    fastapi
-    gevent
-    huey
-    mnemonic
-    portalocker
-    psutil
-    pydantic
-    pytz
-    redis
-    setuptools
-    strawberry-graphql
-    typing-extensions
-    uvicorn
-  ];
-  pythonImportsCheck = [ "selfprivacy_api" ];
-  doCheck = false;
-  meta = {
-    description = ''
-      SelfPrivacy Server Management API
-    '';
-  };
-}
```
flake.lock (26 changed lines)

```diff
@@ -1,26 +0,0 @@
-{
-  "nodes": {
-    "nixpkgs": {
-      "locked": {
-        "lastModified": 1709677081,
-        "narHash": "sha256-tix36Y7u0rkn6mTm0lA45b45oab2cFLqAzDbJxeXS+c=",
-        "owner": "nixos",
-        "repo": "nixpkgs",
-        "rev": "880992dcc006a5e00dd0591446fdf723e6a51a64",
-        "type": "github"
-      },
-      "original": {
-        "owner": "nixos",
-        "repo": "nixpkgs",
-        "type": "github"
-      }
-    },
-    "root": {
-      "inputs": {
-        "nixpkgs": "nixpkgs"
-      }
-    }
-  },
-  "root": "root",
-  "version": 7
-}
```
flake.nix (162 changed lines)

```diff
@@ -1,162 +0,0 @@
-{
-  description = "SelfPrivacy API flake";
-
-  inputs.nixpkgs.url = "github:nixos/nixpkgs";
-
-  outputs = { self, nixpkgs, ... }:
-    let
-      system = "x86_64-linux";
-      pkgs = nixpkgs.legacyPackages.${system};
-      selfprivacy-graphql-api = pkgs.callPackage ./default.nix {
-        pythonPackages = pkgs.python310Packages;
-        rev = self.shortRev or self.dirtyShortRev or "dirty";
-      };
-      python = self.packages.${system}.default.pythonModule;
-      python-env =
-        python.withPackages (ps:
-          self.packages.${system}.default.propagatedBuildInputs ++ (with ps; [
-            coverage
-            pytest
-            pytest-datadir
-            pytest-mock
-            pytest-subprocess
-            black
-            mypy
-            pylsp-mypy
-            python-lsp-black
-            python-lsp-server
-            pyflakes
-            typer # for strawberry
-            types-redis # for mypy
-          ] ++ strawberry-graphql.optional-dependencies.cli));
-
-      vmtest-src-dir = "/root/source";
-      shellMOTD = ''
-        Welcome to SP API development shell!
-
-        [formatters]
-
-          black
-          nixpkgs-fmt
-
-        [testing in NixOS VM]
-
-          nixos-test-driver - run an interactive NixOS VM with all dependencies included and 2 disk volumes
-          pytest-vm - run pytest in an ephemeral NixOS VM with Redis, accepting pytest arguments
-      '';
-    in
-    {
-      # see https://github.com/NixOS/nixpkgs/blob/66a9817cec77098cfdcbb9ad82dbb92651987a84/nixos/lib/test-driver/test_driver/machine.py#L359
-      packages.${system} = {
-        default = selfprivacy-graphql-api;
-        pytest-vm = pkgs.writeShellScriptBin "pytest-vm" ''
-          set -o errexit
-          set -o nounset
-          set -o xtrace
-
-          # see https://github.com/NixOS/nixpkgs/blob/66a9817cec77098cfdcbb9ad82dbb92651987a84/nixos/lib/test-driver/test_driver/machine.py#L359
-          export TMPDIR=''${TMPDIR:=/tmp}/nixos-vm-tmp-dir
-          readonly NIXOS_VM_SHARED_DIR_HOST="$TMPDIR/shared-xchg"
-          readonly NIXOS_VM_SHARED_DIR_GUEST="/tmp/shared"
-
-          mkdir -p "$TMPDIR"
-          ln -sfv "$PWD" -T "$NIXOS_VM_SHARED_DIR_HOST"
-
-          SCRIPT=$(cat <<EOF
-          start_all()
-          machine.succeed("ln -sf $NIXOS_VM_SHARED_DIR_GUEST -T ${vmtest-src-dir} >&2")
-          machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest -v $@ >&2")
-          machine.succeed("cd ${vmtest-src-dir} && coverage report >&2")
-          EOF
-          )
-
-          if [ -f "/etc/arch-release" ]; then
-            ${self.checks.${system}.default.driverInteractive}/bin/nixos-test-driver --no-interactive <(printf "%s" "$SCRIPT")
-          else
-            ${self.checks.${system}.default.driver}/bin/nixos-test-driver -- <(printf "%s" "$SCRIPT")
-          fi
-        '';
-      };
-      nixosModules.default =
-        import ./nixos/module.nix self.packages.${system}.default;
-      devShells.${system}.default = pkgs.mkShellNoCC {
-        name = "SP API dev shell";
-        packages = with pkgs; [
-          nixpkgs-fmt
-          rclone
-          redis
-          restic
-          self.packages.${system}.pytest-vm
-          # FIXME consider loading this explicitly only after ArchLinux issue is solved
-          self.checks.x86_64-linux.default.driverInteractive
-          # the target API application python environment
-          python-env
-        ];
-        shellHook = ''
-          # envs set with export and as attributes are treated differently.
-          # for example. printenv <Name> will not fetch the value of an attribute.
-          export TEST_MODE="true"
-
-          # more tips for bash-completion to work on non-NixOS:
-          # https://discourse.nixos.org/t/whats-the-nix-way-of-bash-completion-for-packages/20209/16?u=alexoundos
-          # Load installed profiles
-          for file in "/etc/profile.d/"*.sh; do
-            # If that folder doesn't exist, bash loves to return the whole glob
-            [[ -f "$file" ]] && source "$file"
-          done
-
-          printf "%s" "${shellMOTD}"
-        '';
-      };
-      checks.${system} = {
-        fmt-check = pkgs.runCommandLocal "sp-api-fmt-check"
-          { nativeBuildInputs = [ pkgs.black ]; }
-          "black --check ${self.outPath} > $out";
-        default =
-          pkgs.testers.runNixOSTest {
-            name = "default";
-            nodes.machine = { lib, pkgs, ... }: {
-              # 2 additional disks (1024 MiB and 200 MiB) with empty ext4 FS
-              virtualisation.emptyDiskImages = [ 1024 200 ];
-              virtualisation.fileSystems."/volumes/vdb" = {
-                autoFormat = true;
-                device = "/dev/vdb"; # this name is chosen by QEMU, not here
-                fsType = "ext4";
-                noCheck = true;
-              };
-              virtualisation.fileSystems."/volumes/vdc" = {
-                autoFormat = true;
-                device = "/dev/vdc"; # this name is chosen by QEMU, not here
-                fsType = "ext4";
-                noCheck = true;
-              };
-              boot.consoleLogLevel = lib.mkForce 3;
-              documentation.enable = false;
-              services.journald.extraConfig = lib.mkForce "";
-              services.redis.servers.sp-api = {
-                enable = true;
-                save = [ ];
-                settings.notify-keyspace-events = "KEA";
-              };
-              environment.systemPackages = with pkgs; [
-                python-env
-                # TODO: these can be passed via wrapper script around app
-                rclone
-                restic
-              ];
-              environment.variables.TEST_MODE = "true";
-              systemd.tmpfiles.settings.src.${vmtest-src-dir}.L.argument =
-                self.outPath;
-            };
-            testScript = ''
-              start_all()
-              machine.succeed("cd ${vmtest-src-dir} && coverage run --data-file=/tmp/.coverage -m pytest -p no:cacheprovider -v >&2")
-              machine.succeed("coverage xml --rcfile=${vmtest-src-dir}/.coveragerc --data-file=/tmp/.coverage >&2")
-              machine.copy_from_vm("coverage.xml", ".")
-              machine.succeed("coverage report >&2")
-            '';
-          };
-      };
-    };
-
-  nixConfig.bash-prompt = ''\n\[\e[1;32m\][\[\e[0m\]\[\e[1;34m\]SP devshell\[\e[0m\]\[\e[1;32m\]:\w]\$\[\[\e[0m\] '';
-}
```
```diff
@@ -1,22 +0,0 @@
-@startuml
-
-left to right direction
-
-title repositories and flake inputs relations diagram
-
-cloud nixpkgs as nixpkgs_transit
-control "<font:monospaced><size:15>nixos-rebuild" as nixos_rebuild
-component "SelfPrivacy\nAPI app" as selfprivacy_app
-component "SelfPrivacy\nNixOS configuration" as nixos_configuration
-
-note top of nixos_configuration : SelfPrivacy\nAPI service module
-
-nixos_configuration ).. nixpkgs_transit
-nixpkgs_transit ..> selfprivacy_app
-selfprivacy_app --> nixos_configuration
-[nixpkgs] --> nixos_configuration
-nixos_configuration -> nixos_rebuild
-
-footer %date("yyyy-MM-dd'T'HH:mmZ")
-
-@enduml
```
nixos/module.nix (166 changed lines)

```diff
@@ -1,166 +0,0 @@
-selfprivacy-graphql-api: { config, lib, pkgs, ... }:
-
-let
-  cfg = config.services.selfprivacy-api;
-  config-id = "default";
-  nixos-rebuild = "${config.system.build.nixos-rebuild}/bin/nixos-rebuild";
-  nix = "${config.nix.package.out}/bin/nix";
-in
-{
-  options.services.selfprivacy-api = {
-    enable = lib.mkOption {
-      default = true;
-      type = lib.types.bool;
-      description = ''
-        Enable SelfPrivacy API service
-      '';
-    };
-  };
-  config = lib.mkIf cfg.enable {
-    users.users."selfprivacy-api" = {
-      isNormalUser = false;
-      isSystemUser = true;
-      extraGroups = [ "opendkim" ];
-      group = "selfprivacy-api";
-    };
-    users.groups."selfprivacy-api".members = [ "selfprivacy-api" ];
-
-    systemd.services.selfprivacy-api = {
-      description = "API Server used to control system from the mobile application";
-      environment = config.nix.envVars // {
-        HOME = "/root";
-        PYTHONUNBUFFERED = "1";
-      } // config.networking.proxy.envVars;
-      path = [
-        "/var/"
-        "/var/dkim/"
-        pkgs.coreutils
-        pkgs.gnutar
-        pkgs.xz.bin
-        pkgs.gzip
-        pkgs.gitMinimal
-        config.nix.package.out
-        pkgs.restic
-        pkgs.mkpasswd
-        pkgs.util-linux
-        pkgs.e2fsprogs
-        pkgs.iproute2
-      ];
-      after = [ "network-online.target" ];
-      wantedBy = [ "network-online.target" ];
-      serviceConfig = {
-        User = "root";
-        ExecStart = "${selfprivacy-graphql-api}/bin/app.py";
-        Restart = "always";
-        RestartSec = "5";
-      };
-    };
-    systemd.services.selfprivacy-api-worker = {
-      description = "Task worker for SelfPrivacy API";
-      environment = config.nix.envVars // {
-        HOME = "/root";
-        PYTHONUNBUFFERED = "1";
-        PYTHONPATH =
-          pkgs.python310Packages.makePythonPath [ selfprivacy-graphql-api ];
-      } // config.networking.proxy.envVars;
-      path = [
-        "/var/"
-        "/var/dkim/"
-        pkgs.coreutils
-        pkgs.gnutar
-        pkgs.xz.bin
-        pkgs.gzip
-        pkgs.gitMinimal
-        config.nix.package.out
-        pkgs.restic
-        pkgs.mkpasswd
-        pkgs.util-linux
-        pkgs.e2fsprogs
-        pkgs.iproute2
-      ];
-      after = [ "network-online.target" ];
-      wantedBy = [ "network-online.target" ];
-      serviceConfig = {
-        User = "root";
-        ExecStart = "${pkgs.python310Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
-        Restart = "always";
-        RestartSec = "5";
-      };
-    };
-    # One shot systemd service to rebuild NixOS using nixos-rebuild
-    systemd.services.sp-nixos-rebuild = {
-      description = "nixos-rebuild switch";
-      environment = config.nix.envVars // {
-        HOME = "/root";
-      } // config.networking.proxy.envVars;
-      # TODO figure out how to get dependencies list reliably
-      path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
-      # TODO set proper timeout for reboot instead of service restart
-      serviceConfig = {
-        User = "root";
-        WorkingDirectory = "/etc/nixos";
-        # sync top-level flake with sp-modules sub-flake
-        # (https://github.com/NixOS/nix/issues/9339)
-        ExecStartPre = ''
-          ${nix} flake lock --override-input sp-modules path:./sp-modules
-        '';
-        ExecStart = ''
-          ${nixos-rebuild} switch --flake .#${config-id}
-        '';
-        KillMode = "none";
-        SendSIGKILL = "no";
-      };
-      restartIfChanged = false;
-      unitConfig.X-StopOnRemoval = false;
-    };
-    # One shot systemd service to upgrade NixOS using nixos-rebuild
-    systemd.services.sp-nixos-upgrade = {
-      # protection against simultaneous runs
-      after = [ "sp-nixos-rebuild.service" ];
-      description = "Upgrade NixOS and SP modules to latest versions";
-      environment = config.nix.envVars // {
-        HOME = "/root";
-      } // config.networking.proxy.envVars;
-      # TODO figure out how to get dependencies list reliably
-      path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
-      serviceConfig = {
-        User = "root";
-        WorkingDirectory = "/etc/nixos";
-        # TODO get URL from systemd template parameter?
-        ExecStartPre = ''
-          ${nix} flake update \
-            --override-input selfprivacy-nixos-config git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes
-        '';
-        ExecStart = ''
-          ${nixos-rebuild} switch --flake .#${config-id}
-        '';
-        KillMode = "none";
-        SendSIGKILL = "no";
-      };
-      restartIfChanged = false;
-      unitConfig.X-StopOnRemoval = false;
-    };
-    # One shot systemd service to rollback NixOS using nixos-rebuild
-    systemd.services.sp-nixos-rollback = {
-      # protection against simultaneous runs
-      after = [ "sp-nixos-rebuild.service" "sp-nixos-upgrade.service" ];
-      description = "Rollback NixOS using nixos-rebuild";
-      environment = config.nix.envVars // {
-        HOME = "/root";
-      } // config.networking.proxy.envVars;
-      # TODO figure out how to get dependencies list reliably
-      path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
-      serviceConfig = {
-        User = "root";
-        WorkingDirectory = "/etc/nixos";
-        ExecStart = ''
-          ${nixos-rebuild} switch --rollback --flake .#${config-id}
-        '';
-        KillMode = "none";
-        SendSIGKILL = "no";
-      };
-      restartIfChanged = false;
-      unitConfig.X-StopOnRemoval = false;
-    };
-  };
-}
```
```diff
@@ -7,7 +7,6 @@ from typing import Optional
 from pydantic import BaseModel
 from mnemonic import Mnemonic

-from selfprivacy_api.utils.timeutils import ensure_tz_aware, ensure_tz_aware_strict
 from selfprivacy_api.repositories.tokens.redis_tokens_repository import (
     RedisTokensRepository,
 )
```
```diff
@@ -95,22 +94,16 @@ class RecoveryTokenStatus(BaseModel):


 def get_api_recovery_token_status() -> RecoveryTokenStatus:
-    """Get the recovery token status, timezone-aware"""
+    """Get the recovery token status"""
     token = TOKEN_REPO.get_recovery_key()
     if token is None:
         return RecoveryTokenStatus(exists=False, valid=False)
     is_valid = TOKEN_REPO.is_recovery_key_valid()
-
-    # New tokens are tz-aware, but older ones might not be
-    expiry_date = token.expires_at
-    if expiry_date is not None:
-        expiry_date = ensure_tz_aware_strict(expiry_date)
-
     return RecoveryTokenStatus(
         exists=True,
         valid=is_valid,
-        date=ensure_tz_aware_strict(token.created_at),
-        expiration=expiry_date,
+        date=_naive(token.created_at),
+        expiration=_naive(token.expires_at),
         uses_left=token.uses_left,
     )
```
```diff
@@ -128,9 +121,8 @@ def get_new_api_recovery_key(
 ) -> str:
     """Get new recovery key"""
     if expiration_date is not None:
-        expiration_date = ensure_tz_aware(expiration_date)
-        current_time = datetime.now(timezone.utc)
-        if expiration_date < current_time:
+        current_time = datetime.now().timestamp()
+        if expiration_date.timestamp() < current_time:
             raise InvalidExpirationDate("Expiration date is in the past")
     if uses_left is not None:
         if uses_left <= 0:
```
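The master side of the hunks above normalizes datetimes to be timezone-aware before comparing them. A minimal stand-in for `ensure_tz_aware` (the real helper lives in `selfprivacy_api.utils.timeutils`; this sketch only assumes the behavior the diff implies, namely that naive datetimes are treated as UTC):

```python
from datetime import datetime, timezone

def ensure_tz_aware(dt: datetime) -> datetime:
    # Naive datetimes get UTC attached; aware ones pass through unchanged.
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

naive = datetime(2024, 1, 1, 12, 0)
aware = ensure_tz_aware(naive)
print(aware.tzinfo)  # UTC
# Two aware datetimes can be compared directly, which is what the
# master-side expiration check relies on:
print(aware < datetime.now(timezone.utc))  # True
```

Comparing aware objects directly avoids the `.timestamp()` round-trip on the right-hand side of the hunk, which silently interprets naive datetimes in the server's local timezone.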
```diff
@@ -1,34 +0,0 @@
-from selfprivacy_api.utils.block_devices import BlockDevices
-from selfprivacy_api.jobs import Jobs, Job
-
-from selfprivacy_api.services import get_service_by_id
-from selfprivacy_api.services.tasks import move_service as move_service_task
-
-
-class ServiceNotFoundError(Exception):
-    pass
-
-
-class VolumeNotFoundError(Exception):
-    pass
-
-
-def move_service(service_id: str, volume_name: str) -> Job:
-    service = get_service_by_id(service_id)
-    if service is None:
-        raise ServiceNotFoundError(f"No such service:{service_id}")
-
-    volume = BlockDevices().get_block_device(volume_name)
-    if volume is None:
-        raise VolumeNotFoundError(f"No such volume:{volume_name}")
-
-    service.assert_can_move(volume)
-
-    job = Jobs.add(
-        type_id=f"services.{service.get_id()}.move",
-        name=f"Move {service.get_display_name()}",
-        description=f"Moving {service.get_display_name()} data to {volume.name}",
-    )
-
-    move_service_task(service, volume, job)
-    return job
```
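The deleted module above follows a guard-clause pattern: resolve each dependency, raise a specific exception when it is missing, then proceed. A self-contained sketch of that pattern; the in-memory `SERVICES`/`VOLUMES` registries are hypothetical stand-ins for `get_service_by_id()` and `BlockDevices().get_block_device()`:

```python
class ServiceNotFoundError(Exception):
    pass

class VolumeNotFoundError(Exception):
    pass

# Hypothetical registries replacing the real service and block-device lookups.
SERVICES = {"nextcloud": "nextcloud-service"}
VOLUMES = {"sda1": "sda1-volume"}

def move_service(service_id: str, volume_name: str) -> tuple:
    # Resolve the service first, failing loudly with a specific error.
    service = SERVICES.get(service_id)
    if service is None:
        raise ServiceNotFoundError(f"No such service:{service_id}")
    # Then resolve the target volume the same way.
    volume = VOLUMES.get(volume_name)
    if volume is None:
        raise VolumeNotFoundError(f"No such volume:{volume_name}")
    return service, volume
```

Distinct exception types let the API layer map each failure to a precise error response instead of a generic 500.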
```diff
@@ -31,7 +31,7 @@ def get_ssh_settings() -> UserdataSshSettings:
     if "enable" not in data["ssh"]:
         data["ssh"]["enable"] = True
     if "passwordAuthentication" not in data["ssh"]:
-        data["ssh"]["passwordAuthentication"] = False
+        data["ssh"]["passwordAuthentication"] = True
     if "rootKeys" not in data["ssh"]:
         data["ssh"]["rootKeys"] = []
     return UserdataSshSettings(**data["ssh"])

@@ -49,6 +49,19 @@ def set_ssh_settings(
             data["ssh"]["passwordAuthentication"] = password_authentication


+def add_root_ssh_key(public_key: str):
+    with WriteUserData() as data:
+        if "ssh" not in data:
+            data["ssh"] = {}
+        if "rootKeys" not in data["ssh"]:
+            data["ssh"]["rootKeys"] = []
+        # Return 409 if key already in array
+        for key in data["ssh"]["rootKeys"]:
+            if key == public_key:
+                raise KeyAlreadyExists()
+        data["ssh"]["rootKeys"].append(public_key)
+
+
 class KeyAlreadyExists(Exception):
     """Key already exists"""
```
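The `add_root_ssh_key` logic added above can be exercised standalone by swapping the `WriteUserData` context manager for an explicit dict argument. A minimal sketch under that assumption:

```python
class KeyAlreadyExists(Exception):
    """Key already exists"""

def add_root_ssh_key(data: dict, public_key: str) -> None:
    # Pure-dict version of the helper in the hunk above: WriteUserData is
    # replaced by the caller-supplied dict so the logic runs standalone.
    ssh = data.setdefault("ssh", {})
    keys = ssh.setdefault("rootKeys", [])
    if public_key in keys:
        # The API layer maps this to a 409 Conflict, per the diff's comment.
        raise KeyAlreadyExists()
    keys.append(public_key)

user_data: dict = {}
add_root_ssh_key(user_data, "ssh-ed25519 AAAA example")
print(user_data["ssh"]["rootKeys"])  # ['ssh-ed25519 AAAA example']
```

`setdefault` collapses the two `if ... not in` checks from the diff into one call each while preserving the same behavior.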
```diff
@@ -2,10 +2,8 @@
 import os
 import subprocess
 import pytz
-from typing import Optional, List
+from typing import Optional
 from pydantic import BaseModel
-from selfprivacy_api.jobs import Job, JobStatus, Jobs
-from selfprivacy_api.jobs.upgrade_system import rebuild_system_task

 from selfprivacy_api.utils import WriteUserData, ReadUserData
```
```diff
@@ -15,7 +13,7 @@ def get_timezone() -> str:
     with ReadUserData() as user_data:
         if "timezone" in user_data:
             return user_data["timezone"]
-        return "Etc/UTC"
+        return "Europe/Uzhgorod"


 class InvalidTimezone(Exception):
```
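The `get_timezone` fallback changed above can be sketched with the user data passed in as a plain dict (the real function reads it through `ReadUserData`); master's `"Etc/UTC"` default is used here:

```python
def get_timezone(user_data: dict) -> str:
    # Dict-based version of get_timezone(): return the stored timezone
    # when present, otherwise fall back to the neutral "Etc/UTC" default
    # (the right-hand side of the hunk still returns the old hard-coded
    # "Europe/Uzhgorod" instead).
    if "timezone" in user_data:
        return user_data["timezone"]
    return "Etc/UTC"

print(get_timezone({}))  # Etc/UTC
print(get_timezone({"timezone": "Europe/Berlin"}))  # Europe/Berlin
```

A neutral UTC default avoids surprising users whose servers are nowhere near the previously hard-coded locale.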
```diff
@@ -60,68 +58,36 @@ def set_auto_upgrade_settings(
         user_data["autoUpgrade"]["allowReboot"] = allowReboot


-class ShellException(Exception):
-    """Something went wrong when calling another process"""
-
-    pass
-
-
-def run_blocking(cmd: List[str], new_session: bool = False) -> str:
-    """Run a process, block until done, return output, complain if failed"""
-    process_handle = subprocess.Popen(
-        cmd,
-        shell=False,
-        start_new_session=new_session,
-        stdout=subprocess.PIPE,
-        stderr=subprocess.PIPE,
-    )
-    stdout_raw, stderr_raw = process_handle.communicate()
-    stdout = stdout_raw.decode("utf-8")
-    if stderr_raw is not None:
-        stderr = stderr_raw.decode("utf-8")
-    else:
-        stderr = ""
-    output = stdout + "\n" + stderr
-    if process_handle.returncode != 0:
-        raise ShellException(
-            f"Shell command failed, command array: {cmd}, output: {output}"
-        )
-    return stdout
-
-
-def rebuild_system() -> Job:
+def rebuild_system() -> int:
     """Rebuild the system"""
-    job = Jobs.add(
-        type_id="system.nixos.rebuild",
-        name="Rebuild system",
-        description="Applying the new system configuration by building the new NixOS generation.",
-        status=JobStatus.CREATED,
+    rebuild_result = subprocess.Popen(
+        ["systemctl", "start", "sp-nixos-rebuild.service"], start_new_session=True
     )
-    rebuild_system_task(job)
-    return job
+    rebuild_result.communicate()[0]
+    return rebuild_result.returncode


 def rollback_system() -> int:
     """Rollback the system"""
-    run_blocking(["systemctl", "start", "sp-nixos-rollback.service"], new_session=True)
-    return 0
+    rollback_result = subprocess.Popen(
+        ["systemctl", "start", "sp-nixos-rollback.service"], start_new_session=True
+    )
+    rollback_result.communicate()[0]
+    return rollback_result.returncode


-def upgrade_system() -> Job:
+def upgrade_system() -> int:
     """Upgrade the system"""
-    job = Jobs.add(
-        type_id="system.nixos.upgrade",
-        name="Upgrade system",
-        description="Upgrading the system to the latest version.",
-        status=JobStatus.CREATED,
-    )
-    rebuild_system_task(job, upgrade=True)
-    return job
+    upgrade_result = subprocess.Popen(
+        ["systemctl", "start", "sp-nixos-upgrade.service"], start_new_session=True
+    )
+    upgrade_result.communicate()[0]
+    return upgrade_result.returncode


 def reboot_system() -> None:
     """Reboot the system"""
-    run_blocking(["reboot"], new_session=True)
```
|
subprocess.Popen(["reboot"], start_new_session=True)
|
||||||
|
|
||||||
|
|
||||||
def get_system_version() -> str:
|
def get_system_version() -> str:
|
||||||
|
|
|
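Both sides of the hunk above start a systemd unit with `subprocess.Popen` and wait for it; master wraps this in a `run_blocking` helper, the branch inlines `Popen` + `communicate()` + `returncode`. A minimal standalone sketch of that blocking pattern (`run_unit_blocking` and the stand-in command are illustrative, not code from the repo):

```python
import subprocess
import sys


def run_unit_blocking(cmd):
    # Start the command in its own session, as the diff does with
    # start_new_session=True, then block until it exits.
    handle = subprocess.Popen(
        cmd,
        shell=False,
        start_new_session=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    stdout, _ = handle.communicate()
    return handle.returncode, stdout.decode("utf-8")


# Stand-in command; the real callers pass
# ["systemctl", "start", "sp-nixos-rebuild.service"] and friends.
code, out = run_unit_blocking([sys.executable, "-c", "print('ok')"])
```

Because `communicate()` drains the pipes before returning, `returncode` is always populated by the time it is read, which is what makes the branch-side one-liner safe.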
@@ -58,7 +58,7 @@ def get_users(
             )
             for user in user_data["users"]
         ]
-        if not exclude_primary and "username" in user_data.keys():
+        if not exclude_primary:
             users.append(
                 UserDataUser(
                     username=user_data["username"],
@@ -107,12 +107,6 @@ class PasswordIsEmpty(Exception):
     pass
 
 
-class InvalidConfiguration(Exception):
-    """The userdata is broken"""
-
-    pass
-
-
 def create_user(username: str, password: str):
     if password == "":
         raise PasswordIsEmpty("Password is empty")
@@ -130,10 +124,6 @@ def create_user(username: str, password: str):
 
     with ReadUserData() as user_data:
         ensure_ssh_and_users_fields_exist(user_data)
-        if "username" not in user_data.keys():
-            raise InvalidConfiguration(
-                "Broken config: Admin name is not defined. Consider recovery or add it manually"
-            )
        if username == user_data["username"]:
            raise UserAlreadyExists("User already exists")
        if username in [user["username"] for user in user_data["users"]]:
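The guard removed in the hunk above fails loudly with `InvalidConfiguration` when the admin `username` key is absent, instead of letting the later `user_data["username"]` lookup raise a bare `KeyError`. A small self-contained sketch of that validation order (`check_new_username` and the shortened message are illustrative, not the repo's code):

```python
class InvalidConfiguration(Exception):
    """The userdata is broken"""


def check_new_username(user_data: dict, username: str) -> None:
    # Mirror the removed guard: verify the config is sane before
    # comparing the candidate name against the admin name.
    if "username" not in user_data:
        raise InvalidConfiguration("Admin name is not defined")
    if username == user_data["username"]:
        raise ValueError("User already exists")


# Valid config: no exception is raised.
check_new_username({"username": "admin", "users": []}, "alice")
```

With the guard gone, a broken config surfaces as a generic `KeyError` at the comparison site rather than a targeted configuration error.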
@@ -9,7 +9,14 @@ import uvicorn
 from selfprivacy_api.dependencies import get_api_version
 from selfprivacy_api.graphql.schema import schema
 from selfprivacy_api.migrations import run_migrations
+from selfprivacy_api.restic_controller.tasks import init_restic
+
+from selfprivacy_api.rest import (
+    system,
+    users,
+    api_auth,
+    services,
+)
 
 app = FastAPI()
 
@@ -26,6 +33,10 @@ app.add_middleware(
 )
 
 
+app.include_router(system.router)
+app.include_router(users.router)
+app.include_router(api_auth.router)
+app.include_router(services.router)
 app.include_router(graphql_app, prefix="/graphql")
 
 
@@ -38,6 +49,7 @@ async def get_version():
 @app.on_event("startup")
 async def startup():
     run_migrations()
+    init_restic()
 
 
 if __name__ == "__main__":
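The last hunk above adds `init_restic()` to the startup handler after `run_migrations()`, so migrations always run before the restic controller starts. A plain-Python stand-in for that ordering (the decorator registry below is illustrative and is not FastAPI's actual event API):

```python
# Minimal stand-in for the startup-hook ordering in this hunk:
# migrations run first, then the restic controller is initialized.
startup_hooks = []


def on_startup(func):
    startup_hooks.append(func)
    return func


calls = []


@on_startup
def run_migrations():
    calls.append("migrations")


@on_startup
def init_restic():
    calls.append("restic")


# FastAPI fires "startup" handlers in registration order.
for hook in startup_hooks:
    hook()
```

Registration order is the execution order, which is why `init_restic` is appended after the existing `run_migrations()` call rather than placed before it.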
@@ -1,741 +0,0 @@
-"""
-This module contains the controller class for backups.
-"""
-from datetime import datetime, timedelta, timezone
-import time
-import os
-from os import statvfs
-from typing import Callable, List, Optional
-
-from selfprivacy_api.services import (
-    get_service_by_id,
-    get_all_services,
-)
-from selfprivacy_api.services.service import (
-    Service,
-    ServiceStatus,
-    StoppedService,
-)
-
-from selfprivacy_api.jobs import Jobs, JobStatus, Job
-
-from selfprivacy_api.graphql.queries.providers import (
-    BackupProvider as BackupProviderEnum,
-)
-from selfprivacy_api.graphql.common_types.backup import (
-    RestoreStrategy,
-    BackupReason,
-    AutobackupQuotas,
-)
-
-
-from selfprivacy_api.models.backup.snapshot import Snapshot
-
-from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
-from selfprivacy_api.backup.providers import get_provider
-from selfprivacy_api.backup.storage import Storage
-from selfprivacy_api.backup.jobs import (
-    get_backup_job,
-    get_backup_fail,
-    add_backup_job,
-    get_restore_job,
-    add_restore_job,
-)
-
-
-BACKUP_PROVIDER_ENVS = {
-    "kind": "BACKUP_KIND",
-    "login": "BACKUP_LOGIN",
-    "key": "BACKUP_KEY",
-    "location": "BACKUP_LOCATION",
-}
-
-AUTOBACKUP_JOB_EXPIRATION_SECONDS = 60 * 60  # one hour
-
-
-class NotDeadError(AssertionError):
-    """
-    This error is raised when we try to back up a service that is not dead yet.
-    """
-
-    def __init__(self, service: Service):
-        self.service_name = service.get_id()
-        super().__init__()
-
-    def __str__(self):
-        return f"""
-        Service {self.service_name} should be either stopped or dead from
-        an error before we back up.
-        Normally, this error is unreachable because we do try ensure this.
-        Apparently, not this time.
-        """
-
-
-class RotationBucket:
-    """
-    Bucket object used for rotation.
-    Has the following mutable fields:
-    - the counter, int
-    - the lambda function which takes datetime and the int and returns the int
-    - the last, int
-    """
-
-    def __init__(self, counter: int, last: int, rotation_lambda):
-        self.counter: int = counter
-        self.last: int = last
-        self.rotation_lambda: Callable[[datetime, int], int] = rotation_lambda
-
-    def __str__(self) -> str:
-        return f"Bucket(counter={self.counter}, last={self.last})"
-
-
-class Backups:
-    """A stateless controller class for backups"""
-
-    # Providers
-
-    @staticmethod
-    def provider() -> AbstractBackupProvider:
-        """
-        Returns the current backup storage provider.
-        """
-        return Backups._lookup_provider()
-
-    @staticmethod
-    def set_provider(
-        kind: BackupProviderEnum,
-        login: str,
-        key: str,
-        location: str,
-        repo_id: str = "",
-    ) -> None:
-        """
-        Sets the new configuration of the backup storage provider.
-
-        In case of `BackupProviderEnum.BACKBLAZE`, the `login` is the key ID,
-        the `key` is the key itself, and the `location` is the bucket name and
-        the `repo_id` is the bucket ID.
-        """
-        provider: AbstractBackupProvider = Backups._construct_provider(
-            kind,
-            login,
-            key,
-            location,
-            repo_id,
-        )
-        Storage.store_provider(provider)
-
-    @staticmethod
-    def reset() -> None:
-        """
-        Deletes all the data about the backup storage provider.
-        """
-        Storage.reset()
-
-    @staticmethod
-    def _lookup_provider() -> AbstractBackupProvider:
-        redis_provider = Backups._load_provider_redis()
-        if redis_provider is not None:
-            return redis_provider
-
-        none_provider = Backups._construct_provider(
-            BackupProviderEnum.NONE, login="", key="", location=""
-        )
-        Storage.store_provider(none_provider)
-        return none_provider
-
-    @staticmethod
-    def set_provider_from_envs():
-        for env in BACKUP_PROVIDER_ENVS.values():
-            if env not in os.environ.keys():
-                raise ValueError(
-                    f"Cannot set backup provider from envs, there is no {env} set"
-                )
-
-        kind_str = os.environ[BACKUP_PROVIDER_ENVS["kind"]]
-        kind_enum = BackupProviderEnum[kind_str]
-        provider = Backups._construct_provider(
-            kind=kind_enum,
-            login=os.environ[BACKUP_PROVIDER_ENVS["login"]],
-            key=os.environ[BACKUP_PROVIDER_ENVS["key"]],
-            location=os.environ[BACKUP_PROVIDER_ENVS["location"]],
-        )
-        Storage.store_provider(provider)
-
-    @staticmethod
-    def _construct_provider(
-        kind: BackupProviderEnum,
-        login: str,
-        key: str,
-        location: str,
-        repo_id: str = "",
-    ) -> AbstractBackupProvider:
-        provider_class = get_provider(kind)
-
-        return provider_class(
-            login=login,
-            key=key,
-            location=location,
-            repo_id=repo_id,
-        )
-
-    @staticmethod
-    def _load_provider_redis() -> Optional[AbstractBackupProvider]:
-        provider_model = Storage.load_provider()
-        if provider_model is None:
-            return None
-        return Backups._construct_provider(
-            BackupProviderEnum[provider_model.kind],
-            provider_model.login,
-            provider_model.key,
-            provider_model.location,
-            provider_model.repo_id,
-        )
-
-    # Init
-
-    @staticmethod
-    def init_repo() -> None:
-        """
-        Initializes the backup repository. This is required once per repo.
-        """
-        Backups.provider().backupper.init()
-        Storage.mark_as_init()
-
-    @staticmethod
-    def erase_repo() -> None:
-        """
-        Completely empties the remote
-        """
-        Backups.provider().backupper.erase_repo()
-        Storage.mark_as_uninitted()
-
-    @staticmethod
-    def is_initted() -> bool:
-        """
-        Returns whether the backup repository is initialized or not.
-        If it is not initialized, we cannot back up and probably should
-        call `init_repo` first.
-        """
-        if Storage.has_init_mark():
-            return True
-
-        initted = Backups.provider().backupper.is_initted()
-        if initted:
-            Storage.mark_as_init()
-            return True
-
-        return False
-
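The deleted `set_provider_from_envs` validates that every expected environment variable exists before constructing a provider. A self-contained sketch of that check against an injected environment dict (`read_provider_envs` is illustrative; the deleted code reads `os.environ` and builds a provider object instead of returning a dict):

```python
# Same env-variable map as the deleted controller.
BACKUP_PROVIDER_ENVS = {
    "kind": "BACKUP_KIND",
    "login": "BACKUP_LOGIN",
    "key": "BACKUP_KEY",
    "location": "BACKUP_LOCATION",
}


def read_provider_envs(environ) -> dict:
    # Fail before touching any value, so a half-configured environment
    # never produces a half-configured provider.
    for env in BACKUP_PROVIDER_ENVS.values():
        if env not in environ:
            raise ValueError(
                f"Cannot set backup provider from envs, there is no {env} set"
            )
    return {field: environ[env] for field, env in BACKUP_PROVIDER_ENVS.items()}


cfg = read_provider_envs(
    {
        "BACKUP_KIND": "FILE",
        "BACKUP_LOGIN": "",
        "BACKUP_KEY": "",
        "BACKUP_LOCATION": "/tmp/repo",
    }
)
```

Validating the whole set up front is what lets the controller raise one descriptive `ValueError` naming the missing variable.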
-    # Backup
-
-    @staticmethod
-    def back_up(
-        service: Service, reason: BackupReason = BackupReason.EXPLICIT
-    ) -> Snapshot:
-        """The top-level function to back up a service
-        If it fails for any reason at all, it should both mark job as
-        errored and re-raise an error"""
-
-        job = get_backup_job(service)
-        if job is None:
-            job = add_backup_job(service)
-        Jobs.update(job, status=JobStatus.RUNNING)
-
-        try:
-            if service.can_be_backed_up() is False:
-                raise ValueError("cannot backup a non-backuppable service")
-            folders = service.get_folders()
-            service_name = service.get_id()
-            service.pre_backup()
-            snapshot = Backups.provider().backupper.start_backup(
-                folders,
-                service_name,
-                reason=reason,
-            )
-
-            Backups._on_new_snapshot_created(service_name, snapshot)
-            if reason == BackupReason.AUTO:
-                Backups._prune_auto_snaps(service)
-            service.post_restore()
-        except Exception as error:
-            Jobs.update(job, status=JobStatus.ERROR, error=str(error))
-            raise error
-
-        Jobs.update(job, status=JobStatus.FINISHED)
-        if reason in [BackupReason.AUTO, BackupReason.PRE_RESTORE]:
-            Jobs.set_expiration(job, AUTOBACKUP_JOB_EXPIRATION_SECONDS)
-        return Backups.sync_date_from_cache(snapshot)
-
-    @staticmethod
-    def sync_date_from_cache(snapshot: Snapshot) -> Snapshot:
-        """
-        Our snapshot creation dates are different from those on server by a tiny amount.
-        This is a convenience, maybe it is better to write a special comparison
-        function for snapshots
-        """
-        return Storage.get_cached_snapshot_by_id(snapshot.id)
-
-    @staticmethod
-    def _auto_snaps(service):
-        return [
-            snap
-            for snap in Backups.get_snapshots(service)
-            if snap.reason == BackupReason.AUTO
-        ]
-
-    @staticmethod
-    def _prune_snaps_with_quotas(snapshots: List[Snapshot]) -> List[Snapshot]:
-        # Function broken out for testability
-        # Sorting newest first
-        sorted_snaps = sorted(snapshots, key=lambda s: s.created_at, reverse=True)
-        quotas: AutobackupQuotas = Backups.autobackup_quotas()
-
-        buckets: list[RotationBucket] = [
-            RotationBucket(
-                quotas.last,  # type: ignore
-                -1,
-                lambda _, index: index,
-            ),
-            RotationBucket(
-                quotas.daily,  # type: ignore
-                -1,
-                lambda date, _: date.year * 10000 + date.month * 100 + date.day,
-            ),
-            RotationBucket(
-                quotas.weekly,  # type: ignore
-                -1,
-                lambda date, _: date.year * 100 + date.isocalendar()[1],
-            ),
-            RotationBucket(
-                quotas.monthly,  # type: ignore
-                -1,
-                lambda date, _: date.year * 100 + date.month,
-            ),
-            RotationBucket(
-                quotas.yearly,  # type: ignore
-                -1,
-                lambda date, _: date.year,
-            ),
-        ]
-
-        new_snaplist: List[Snapshot] = []
-        for i, snap in enumerate(sorted_snaps):
-            keep_snap = False
-            for bucket in buckets:
-                if (bucket.counter > 0) or (bucket.counter == -1):
-                    val = bucket.rotation_lambda(snap.created_at, i)
-                    if (val != bucket.last) or (i == len(sorted_snaps) - 1):
-                        bucket.last = val
-                        if bucket.counter > 0:
-                            bucket.counter -= 1
-                        if not keep_snap:
-                            new_snaplist.append(snap)
-                        keep_snap = True
-
-        return new_snaplist
-
-    @staticmethod
-    def _prune_auto_snaps(service) -> None:
-        # Not very testable by itself, so most testing is going on Backups._prune_snaps_with_quotas
-        # We can still test total limits and, say, daily limits
-
-        auto_snaps = Backups._auto_snaps(service)
-        new_snaplist = Backups._prune_snaps_with_quotas(auto_snaps)
-
-        deletable_snaps = [snap for snap in auto_snaps if snap not in new_snaplist]
-        Backups.forget_snapshots(deletable_snaps)
-
-    @staticmethod
-    def _standardize_quotas(i: int) -> int:
-        if i <= -1:
-            i = -1
-        return i
-
-    @staticmethod
-    def autobackup_quotas() -> AutobackupQuotas:
-        """0 means do not keep, -1 means unlimited"""
-
-        return Storage.autobackup_quotas()
-
-    @staticmethod
-    def set_autobackup_quotas(quotas: AutobackupQuotas) -> None:
-        """0 means do not keep, -1 means unlimited"""
-
-        Storage.set_autobackup_quotas(
-            AutobackupQuotas(
-                last=Backups._standardize_quotas(quotas.last),  # type: ignore
-                daily=Backups._standardize_quotas(quotas.daily),  # type: ignore
-                weekly=Backups._standardize_quotas(quotas.weekly),  # type: ignore
-                monthly=Backups._standardize_quotas(quotas.monthly),  # type: ignore
-                yearly=Backups._standardize_quotas(quotas.yearly),  # type: ignore
-            )
-        )
-        # do not prune all autosnaps right away, this will be done by an async task
-
-    @staticmethod
-    def prune_all_autosnaps() -> None:
-        for service in get_all_services():
-            Backups._prune_auto_snaps(service)
-
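The deleted `_prune_snaps_with_quotas` walks snapshots newest-first and lets each `RotationBucket` claim a snapshot whenever its key function (index, day, week, month, year) produces a new value, until the bucket's counter runs out. A simplified, self-contained re-run of that rotation idea with only a "last N" and a "per day" bucket (`prune` and its quotas are a sketch, not the deleted implementation; it also omits the last-element edge case):

```python
from datetime import datetime


def prune(timestamps, last_quota, daily_quota):
    # Quota semantics match _standardize_quotas: -1 means unlimited,
    # 0 means keep none.
    kept = []
    buckets = [
        # [remaining counter, last seen key, key function]
        [last_quota, None, lambda ts, i: i],
        [daily_quota, None, lambda ts, i: ts.date()],
    ]
    ordered = sorted(timestamps, reverse=True)  # newest first
    for i, ts in enumerate(ordered):
        for bucket in buckets:
            counter, last, key = bucket
            if counter > 0 or counter == -1:
                val = key(ts, i)
                if val != last:
                    bucket[1] = val
                    if counter > 0:
                        bucket[0] -= 1
                    if ts not in kept:
                        kept.append(ts)
    return kept


snaps = [
    datetime(2023, 7, 1, 10),
    datetime(2023, 7, 1, 12),
    datetime(2023, 7, 2, 12),
]
# Keep the single newest snapshot; the daily bucket's one slot is
# consumed by the same snapshot, so the July 1st pair is pruned.
kept = prune(snaps, last_quota=1, daily_quota=1)
```

Each snapshot is kept at most once even when several buckets claim it, which is the role of the `keep_snap` flag in the original.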
-    # Restoring
-
-    @staticmethod
-    def _ensure_queued_restore_job(service, snapshot) -> Job:
-        job = get_restore_job(service)
-        if job is None:
-            job = add_restore_job(snapshot)
-
-        Jobs.update(job, status=JobStatus.CREATED)
-        return job
-
-    @staticmethod
-    def _inplace_restore(
-        service: Service,
-        snapshot: Snapshot,
-        job: Job,
-    ) -> None:
-        Jobs.update(
-            job, status=JobStatus.CREATED, status_text="Waiting for pre-restore backup"
-        )
-        failsafe_snapshot = Backups.back_up(service, BackupReason.PRE_RESTORE)
-
-        Jobs.update(
-            job, status=JobStatus.RUNNING, status_text=f"Restoring from {snapshot.id}"
-        )
-        try:
-            Backups._restore_service_from_snapshot(
-                service,
-                snapshot.id,
-                verify=False,
-            )
-        except Exception as error:
-            Jobs.update(
-                job,
-                status=JobStatus.ERROR,
-                status_text=f"Restore failed with {str(error)}, reverting to {failsafe_snapshot.id}",
-            )
-            Backups._restore_service_from_snapshot(
-                service, failsafe_snapshot.id, verify=False
-            )
-            Jobs.update(
-                job,
-                status=JobStatus.ERROR,
-                status_text=f"Restore failed with {str(error)}, reverted to {failsafe_snapshot.id}",
-            )
-            raise error
-
-    @staticmethod
-    def restore_snapshot(
-        snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
-    ) -> None:
-        """Restores a snapshot to its original service using the given strategy"""
-        service = get_service_by_id(snapshot.service_name)
-        if service is None:
-            raise ValueError(
-                f"snapshot has a nonexistent service: {snapshot.service_name}"
-            )
-        job = Backups._ensure_queued_restore_job(service, snapshot)
-
-        try:
-            Backups._assert_restorable(snapshot)
-            Jobs.update(
-                job, status=JobStatus.RUNNING, status_text="Stopping the service"
-            )
-            with StoppedService(service):
-                Backups.assert_dead(service)
-                if strategy == RestoreStrategy.INPLACE:
-                    Backups._inplace_restore(service, snapshot, job)
-                else:  # verify_before_download is our default
-                    Jobs.update(
-                        job,
-                        status=JobStatus.RUNNING,
-                        status_text=f"Restoring from {snapshot.id}",
-                    )
-                    Backups._restore_service_from_snapshot(
-                        service, snapshot.id, verify=True
-                    )
-
-                service.post_restore()
-                Jobs.update(
-                    job,
-                    status=JobStatus.RUNNING,
-                    progress=90,
-                    status_text="Restarting the service",
-                )
-
-        except Exception as error:
-            Jobs.update(job, status=JobStatus.ERROR, status_text=str(error))
-            raise error
-
-        Jobs.update(job, status=JobStatus.FINISHED)
-
-    @staticmethod
-    def _assert_restorable(
-        snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
-    ) -> None:
-        service = get_service_by_id(snapshot.service_name)
-        if service is None:
-            raise ValueError(
-                f"snapshot has a nonexistent service: {snapshot.service_name}"
-            )
-
-        restored_snap_size = Backups.snapshot_restored_size(snapshot.id)
-
-        if strategy == RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE:
-            needed_space = restored_snap_size
-        elif strategy == RestoreStrategy.INPLACE:
-            needed_space = restored_snap_size - service.get_storage_usage()
-        else:
-            raise NotImplementedError(
-                """
-                We do not know if there is enough space for restoration because
-                there is some novel restore strategy used!
-                This is a developer's fault, open an issue please
-                """
-            )
-        available_space = Backups.space_usable_for_service(service)
-        if needed_space > available_space:
-            raise ValueError(
-                f"we only have {available_space} bytes "
-                f"but snapshot needs {needed_space}"
-            )
-
-    @staticmethod
-    def _restore_service_from_snapshot(
-        service: Service,
-        snapshot_id: str,
-        verify=True,
-    ) -> None:
-        folders = service.get_folders()
-
-        Backups.provider().backupper.restore_from_backup(
-            snapshot_id,
-            folders,
-            verify=verify,
-        )
-
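The deleted `_inplace_restore` takes a failsafe pre-restore snapshot first; if the real restore then fails, it restores the failsafe before re-raising. The control flow, stripped of the job bookkeeping, looks like this (the function names and fakes below are illustrative, not the repo's code):

```python
def inplace_restore(backup, restore, snapshot_id):
    # Take a failsafe snapshot first; if restoring snapshot_id blows
    # up, roll back to the failsafe and re-raise the original error.
    failsafe_id = backup()
    try:
        restore(snapshot_id)
    except Exception:
        restore(failsafe_id)
        raise


events = []


def fake_backup():
    events.append(("backup", "failsafe-1"))
    return "failsafe-1"


def fake_restore(snap_id):
    events.append(("restore", snap_id))
    if snap_id == "broken":
        raise RuntimeError("restore failed")


try:
    inplace_restore(fake_backup, fake_restore, "broken")
except RuntimeError:
    pass
```

Re-raising after the rollback is deliberate: the caller still learns the restore failed, even though the service data has been reverted to the failsafe state.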
-    # Snapshots
-
-    @staticmethod
-    def get_snapshots(service: Service) -> List[Snapshot]:
-        """Returns all snapshots for a given service"""
-        snapshots = Backups.get_all_snapshots()
-        service_id = service.get_id()
-        return list(
-            filter(
-                lambda snap: snap.service_name == service_id,
-                snapshots,
-            )
-        )
-
-    @staticmethod
-    def get_all_snapshots() -> List[Snapshot]:
-        """Returns all snapshots"""
-        # When we refresh our cache:
-        # 1. Manually
-        # 2. On timer
-        # 3. On new snapshot
-        # 4. On snapshot deletion
-
-        return Storage.get_cached_snapshots()
-
-    @staticmethod
-    def get_snapshot_by_id(snapshot_id: str) -> Optional[Snapshot]:
-        """Returns a backup snapshot by its id"""
-        snap = Storage.get_cached_snapshot_by_id(snapshot_id)
-        if snap is not None:
-            return snap
-
-        # Possibly our cache entry got invalidated, let's try one more time
-        Backups.force_snapshot_cache_reload()
-        snap = Storage.get_cached_snapshot_by_id(snapshot_id)
-
-        return snap
-
-    @staticmethod
-    def forget_snapshots(snapshots: List[Snapshot]) -> None:
-        """
-        Deletes a batch of snapshots from the repo and syncs cache
-        Optimized
-        """
-        ids = [snapshot.id for snapshot in snapshots]
-        Backups.provider().backupper.forget_snapshots(ids)
-
-        Backups.force_snapshot_cache_reload()
-
-    @staticmethod
-    def forget_snapshot(snapshot: Snapshot) -> None:
-        """Deletes a snapshot from the repo and from cache"""
-        Backups.forget_snapshots([snapshot])
-
-    @staticmethod
-    def forget_all_snapshots():
-        """
-        Mark all snapshots we have made for deletion and make them inaccessible
-        (this is done by cloud, we only issue a command)
-        """
-        Backups.forget_snapshots(Backups.get_all_snapshots())
-
-    @staticmethod
-    def force_snapshot_cache_reload() -> None:
-        """
-        Forces a reload of the snapshot cache.
-
-        This may be an expensive operation, so use it wisely.
-        User pays for the API calls.
-        """
-        upstream_snapshots = Backups.provider().backupper.get_snapshots()
-        Storage.invalidate_snapshot_storage()
-        for snapshot in upstream_snapshots:
-            Storage.cache_snapshot(snapshot)
-
-    @staticmethod
-    def snapshot_restored_size(snapshot_id: str) -> int:
-        """Returns the size of the snapshot"""
-        return Backups.provider().backupper.restored_size(
-            snapshot_id,
-        )
-
-    @staticmethod
-    def _on_new_snapshot_created(service_id: str, snapshot: Snapshot) -> None:
-        """What do we do with a snapshot that is just made?"""
-        # non-expiring timestamp of the last
-        Storage.store_last_timestamp(service_id, snapshot)
-        Backups.force_snapshot_cache_reload()
-
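The deleted `get_snapshot_by_id` tries the cache, forces exactly one upstream reload on a miss, then gives up. A minimal cache with the same retry shape (`SnapshotCache` is a sketch; the repo stores the cache in Redis via `Storage`):

```python
class SnapshotCache:
    def __init__(self, fetch_upstream):
        self.fetch_upstream = fetch_upstream
        self.snapshots = {}

    def reload(self):
        # Mirrors force_snapshot_cache_reload: drop everything and
        # refill from the (potentially expensive) upstream call.
        self.snapshots = dict(self.fetch_upstream())

    def get(self, snap_id):
        if snap_id in self.snapshots:
            return self.snapshots[snap_id]
        # Possibly our cache entry got invalidated, try one more time.
        self.reload()
        return self.snapshots.get(snap_id)


upstream = [("snap-1", {"service": "mail"})]
cache = SnapshotCache(lambda: upstream)
found = cache.get("snap-1")
missing = cache.get("snap-404")
```

Limiting the fallback to a single reload keeps a nonexistent id from triggering repeated upstream API calls, which the docstring warns the user pays for.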
-    # Autobackup
-
-    @staticmethod
-    def autobackup_period_minutes() -> Optional[int]:
-        """None means autobackup is disabled"""
-        return Storage.autobackup_period_minutes()
-
-    @staticmethod
-    def set_autobackup_period_minutes(minutes: int) -> None:
-        """
-        0 and negative numbers are equivalent to disable.
-        Setting to a positive number may result in a backup very soon
-        if some services are not backed up.
-        """
-        if minutes <= 0:
-            Backups.disable_all_autobackup()
-            return
-        Storage.store_autobackup_period_minutes(minutes)
-
-    @staticmethod
-    def disable_all_autobackup() -> None:
-        """
-        Disables all automatic backing up,
-        but does not change per-service settings
-        """
-        Storage.delete_backup_period()
-
-    @staticmethod
-    def is_time_to_backup(time: datetime) -> bool:
-        """
-        Intended as a time validator for huey cron scheduler
-        of automatic backups
-        """
-
-        return Backups.services_to_back_up(time) != []
-
-    @staticmethod
-    def services_to_back_up(time: datetime) -> List[Service]:
-        """Returns a list of services that should be backed up at a given time"""
-        return [
-            service
-            for service in get_all_services()
-            if Backups.is_time_to_backup_service(service, time)
-        ]
-
-    @staticmethod
-    def get_last_backed_up(service: Service) -> Optional[datetime]:
-        """Get a timezone-aware time of the last backup of a service"""
-        return Storage.get_last_backup_time(service.get_id())
-
-    @staticmethod
-    def get_last_backup_error_time(service: Service) -> Optional[datetime]:
-        """Get a timezone-aware time of the last backup of a service"""
-        job = get_backup_fail(service)
-        if job is not None:
-            datetime_created = job.created_at
-            if datetime_created.tzinfo is None:
-                # assume it is in localtime
-                offset = timedelta(seconds=time.localtime().tm_gmtoff)
-                datetime_created = datetime_created - offset
-                return datetime.combine(
-                    datetime_created.date(), datetime_created.time(), timezone.utc
-                )
-            return datetime_created
-        return None
-
-    @staticmethod
-    def is_time_to_backup_service(service: Service, time: datetime):
-        """Returns True if it is time to back up a service"""
-        period = Backups.autobackup_period_minutes()
-        if period is None:
-            return False
-
-        if not service.is_enabled():
-            return False
-        if not service.can_be_backed_up():
-            return False
-
-        last_error = Backups.get_last_backup_error_time(service)
-
-        if last_error is not None:
-            if time < last_error + timedelta(seconds=AUTOBACKUP_JOB_EXPIRATION_SECONDS):
-                return False
-
-        last_backup = Backups.get_last_backed_up(service)
-
-        # Queue a backup immediately if there are no previous backups
-        if last_backup is None:
-            return True
-
-        if time > last_backup + timedelta(minutes=period):
-            return True
-
-        return False
-
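The core of the deleted `is_time_to_backup_service`, once the enablement and error-backoff checks pass, is a single period comparison. A self-contained sketch of that arithmetic (`is_due` is illustrative and omits the error-backoff branch):

```python
from datetime import datetime, timedelta


def is_due(now, last_backup, period_minutes):
    if period_minutes is None:
        return False  # autobackup disabled
    if last_backup is None:
        return True  # queue a first backup immediately
    # Due once the configured period has elapsed since the last snapshot.
    return now > last_backup + timedelta(minutes=period_minutes)


now = datetime(2023, 7, 1, 12, 0)
due = is_due(now, datetime(2023, 7, 1, 10, 0), period_minutes=60)
```

Run every cron tick, this makes `is_time_to_backup` cheap: a service is due exactly when it has never been backed up or its period has elapsed.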
-    # Helpers
-
-    @staticmethod
-    def space_usable_for_service(service: Service) -> int:
-        """
-        Returns the amount of space available on the volume the given
-        service is located on.
-        """
-        folders = service.get_folders()
-        if folders == []:
-            raise ValueError("unallocated service", service.get_id())
-
-        # We assume all folders of one service live at the same volume
-        fs_info = statvfs(folders[0])
-        usable_bytes = fs_info.f_frsize * fs_info.f_bavail
-        return usable_bytes
-
-    @staticmethod
-    def set_localfile_repo(file_path: str):
-        """Used by tests to set a local folder as a backup repo"""
-        # pylint: disable-next=invalid-name
-        ProviderClass = get_provider(BackupProviderEnum.FILE)
-        provider = ProviderClass(
-            login="",
-            key="",
-            location=file_path,
-            repo_id="",
-        )
-        Storage.store_provider(provider)
-
-    @staticmethod
-    def assert_dead(service: Service):
-        """
-        Checks if a service is dead and can be safely restored from a snapshot.
-        """
-        if service.get_status() not in [
-            ServiceStatus.INACTIVE,
-            ServiceStatus.FAILED,
-        ]:
-            raise NotDeadError(service)
@@ -1,73 +0,0 @@
from abc import ABC, abstractmethod
from typing import List

from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.graphql.common_types.backup import BackupReason


class AbstractBackupper(ABC):
    """Abstract class for backuppers"""

    # flake8: noqa: B027
    def __init__(self) -> None:
        pass

    @abstractmethod
    def is_initted(self) -> bool:
        """Returns true if the repository is initted"""
        raise NotImplementedError

    @abstractmethod
    def set_creds(self, account: str, key: str, repo: str) -> None:
        """Set the credentials for the backupper"""
        raise NotImplementedError

    @abstractmethod
    def start_backup(
        self,
        folders: List[str],
        service_name: str,
        reason: BackupReason = BackupReason.EXPLICIT,
    ) -> Snapshot:
        """Start a backup of the given folders"""
        raise NotImplementedError

    @abstractmethod
    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        raise NotImplementedError

    @abstractmethod
    def init(self) -> None:
        """Initialize the repository"""
        raise NotImplementedError

    @abstractmethod
    def erase_repo(self) -> None:
        """Completely empties the remote"""
        raise NotImplementedError

    @abstractmethod
    def restore_from_backup(
        self,
        snapshot_id: str,
        folders: List[str],
        verify=True,
    ) -> None:
        """Restore a target folder using a snapshot"""
        raise NotImplementedError

    @abstractmethod
    def restored_size(self, snapshot_id: str) -> int:
        """Get the size of the restored snapshot"""
        raise NotImplementedError

    @abstractmethod
    def forget_snapshot(self, snapshot_id) -> None:
        """Forget a snapshot"""
        raise NotImplementedError

    @abstractmethod
    def forget_snapshots(self, snapshot_ids: List[str]) -> None:
        """Maybe optimized deletion of a batch of snapshots, just cycling if unsupported"""
        raise NotImplementedError
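For reference, the contract above relies on Python's `abc` machinery: a subclass can only be instantiated once every `@abstractmethod` is overridden. A minimal standalone sketch (toy class names, not the real `selfprivacy_api` types):

```python
from abc import ABC, abstractmethod
from typing import List


# Simplified stand-in for the AbstractBackupper contract
# (the real one lives in selfprivacy_api.backup.backuppers).
class Backupper(ABC):
    @abstractmethod
    def is_initted(self) -> bool:
        raise NotImplementedError

    @abstractmethod
    def get_snapshots(self) -> List[str]:
        raise NotImplementedError


class DummyBackupper(Backupper):
    """Overrides every abstract method, so it can be instantiated."""

    def is_initted(self) -> bool:
        return True

    def get_snapshots(self) -> List[str]:
        return []


class Incomplete(Backupper):
    """Misses get_snapshots, so instantiation raises TypeError."""

    def is_initted(self) -> bool:
        return False


backupper = DummyBackupper()
print(backupper.is_initted())  # True

try:
    Incomplete()
except TypeError as error:
    print("refused:", error)
```

This is why `NoneBackupper` below must stub out every method even when it only raises `NotImplementedError`.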
@@ -1,45 +0,0 @@
from typing import List

from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.graphql.common_types.backup import BackupReason


class NoneBackupper(AbstractBackupper):
    """A backupper that does nothing"""

    def is_initted(self, repo_name: str = "") -> bool:
        return False

    def set_creds(self, account: str, key: str, repo: str):
        pass

    def start_backup(
        self, folders: List[str], tag: str, reason: BackupReason = BackupReason.EXPLICIT
    ):
        raise NotImplementedError

    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        return []

    def init(self):
        raise NotImplementedError

    def erase_repo(self) -> None:
        """Completely empties the remote"""
        # this one is already empty
        pass

    def restore_from_backup(self, snapshot_id: str, folders: List[str], verify=True):
        """Restore a target folder using a snapshot"""
        raise NotImplementedError

    def restored_size(self, snapshot_id: str) -> int:
        raise NotImplementedError

    def forget_snapshot(self, snapshot_id):
        raise NotImplementedError("forget_snapshot")

    def forget_snapshots(self, snapshots):
        raise NotImplementedError("forget_snapshots")
@@ -1,554 +0,0 @@
from __future__ import annotations

import subprocess
import json
import datetime
import tempfile

from typing import List, Optional, TypeVar, Callable
from collections.abc import Iterable
from json.decoder import JSONDecodeError
from os.path import exists, join
from os import mkdir
from shutil import rmtree

from selfprivacy_api.graphql.common_types.backup import BackupReason
from selfprivacy_api.backup.util import output_yielder, sync
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.jobs import get_backup_job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.jobs import Jobs, JobStatus, Job

from selfprivacy_api.backup.local_secret import LocalBackupSecret

SHORT_ID_LEN = 8

T = TypeVar("T", bound=Callable)


def unlocked_repo(func: T) -> T:
    """unlock repo and retry if it appears to be locked"""

    def inner(self: ResticBackupper, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except Exception as error:
            if "unable to create lock" in str(error):
                self.unlock()
                return func(self, *args, **kwargs)
            else:
                raise error

    # Above, we manually guarantee that the type returned is compatible.
    return inner  # type: ignore


class ResticBackupper(AbstractBackupper):
    def __init__(self, login_flag: str, key_flag: str, storage_type: str) -> None:
        self.login_flag = login_flag
        self.key_flag = key_flag
        self.storage_type = storage_type
        self.account = ""
        self.key = ""
        self.repo = ""
        super().__init__()

    def set_creds(self, account: str, key: str, repo: str) -> None:
        self.account = account
        self.key = key
        self.repo = repo

    def restic_repo(self) -> str:
        # https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
        # https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
        return f"rclone:{self.rclone_repo()}"

    def rclone_repo(self) -> str:
        return f"{self.storage_type}{self.repo}"

    def rclone_args(self):
        return "rclone.args=serve restic --stdio " + " ".join(
            self.backend_rclone_args()
        )

    def backend_rclone_args(self) -> list[str]:
        args = []
        if self.account != "":
            acc_args = [self.login_flag, self.account]
            args.extend(acc_args)
        if self.key != "":
            key_args = [self.key_flag, self.key]
            args.extend(key_args)
        return args

    def _password_command(self):
        return f"echo {LocalBackupSecret.get()}"

    def restic_command(self, *args, tags: Optional[List[str]] = None) -> List[str]:
        """
        Construct a restic command against the currently configured repo
        Can support [nested] arrays as arguments, will flatten them into the final command
        """
        if tags is None:
            tags = []

        command = [
            "restic",
            "-o",
            self.rclone_args(),
            "-r",
            self.restic_repo(),
            "--password-command",
            self._password_command(),
        ]
        if tags != []:
            for tag in tags:
                command.extend(
                    [
                        "--tag",
                        tag,
                    ]
                )
        if args:
            command.extend(ResticBackupper.__flatten_list(args))
        return command

    def erase_repo(self) -> None:
        """Fully erases repo on remote, can be reinitted again"""
        command = [
            "rclone",
            "purge",
            self.rclone_repo(),
        ]
        backend_args = self.backend_rclone_args()
        if backend_args:
            command.extend(backend_args)

        with subprocess.Popen(command, stdout=subprocess.PIPE, shell=False) as handle:
            output = handle.communicate()[0].decode("utf-8")
            if handle.returncode != 0:
                raise ValueError(
                    "purge exited with errorcode",
                    handle.returncode,
                    ":",
                    output,
                )

    @staticmethod
    def __flatten_list(list_to_flatten):
        """string-aware list flattener"""
        result = []
        for item in list_to_flatten:
            if isinstance(item, Iterable) and not isinstance(item, str):
                result.extend(ResticBackupper.__flatten_list(item))
                continue
            result.append(item)
        return result

    @staticmethod
    def _run_backup_command(
        backup_command: List[str], job: Optional[Job]
    ) -> List[dict]:
        """Run the backup command and handle its output"""
        messages = []
        output = []
        restic_reported_error = False

        for raw_message in output_yielder(backup_command):
            if "ERROR:" in raw_message:
                restic_reported_error = True
            output.append(raw_message)

            if not restic_reported_error:
                message = ResticBackupper.parse_message(raw_message, job)
                messages.append(message)

        if restic_reported_error:
            raise ValueError(
                "Restic returned error(s): ",
                output,
            )

        return messages

    @staticmethod
    def _replace_in_array(array: List[str], target, replacement) -> None:
        if target == "":
            return

        for i, value in enumerate(array):
            if target in value:
                array[i] = array[i].replace(target, replacement)

    def _censor_command(self, command: List[str]) -> List[str]:
        result = command.copy()
        ResticBackupper._replace_in_array(result, self.key, "CENSORED")
        ResticBackupper._replace_in_array(result, LocalBackupSecret.get(), "CENSORED")
        return result

    @staticmethod
    def _get_backup_job(service_name: str) -> Optional[Job]:
        service = get_service_by_id(service_name)
        if service is None:
            raise ValueError("No service with id ", service_name)

        return get_backup_job(service)

    @unlocked_repo
    def start_backup(
        self,
        folders: List[str],
        service_name: str,
        reason: BackupReason = BackupReason.EXPLICIT,
    ) -> Snapshot:
        """
        Start backup with restic
        """
        assert len(folders) != 0

        job = ResticBackupper._get_backup_job(service_name)

        tags = [service_name, reason.value]
        backup_command = self.restic_command(
            "backup",
            "--json",
            folders,
            tags=tags,
        )

        try:
            messages = ResticBackupper._run_backup_command(backup_command, job)

            id = ResticBackupper._snapshot_id_from_backup_messages(messages)
            return Snapshot(
                created_at=datetime.datetime.now(datetime.timezone.utc),
                id=id,
                service_name=service_name,
                reason=reason,
            )

        except ValueError as error:
            raise ValueError(
                "Could not create a snapshot: ",
                str(error),
                "command: ",
                self._censor_command(backup_command),
            ) from error

    @staticmethod
    def _snapshot_id_from_backup_messages(messages) -> str:
        for message in messages:
            if message["message_type"] == "summary":
                # There is a discrepancy between versions of restic/rclone
                # Some report short_id in this field and some full
                return message["snapshot_id"][0:SHORT_ID_LEN]

        raise ValueError("no summary message in restic json output")

    @staticmethod
    def parse_message(raw_message_line: str, job: Optional[Job] = None) -> dict:
        message = ResticBackupper.parse_json_output(raw_message_line)
        if not isinstance(message, dict):
            raise ValueError("we have too many messages on one line?")
        if message["message_type"] == "status":
            if job is not None:  # only update status if we run under some job
                Jobs.update(
                    job,
                    JobStatus.RUNNING,
                    progress=int(message["percent_done"] * 100),
                )
        return message

    def init(self) -> None:
        init_command = self.restic_command(
            "init",
        )
        with subprocess.Popen(
            init_command,
            shell=False,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        ) as process_handle:
            output = process_handle.communicate()[0].decode("utf-8")
            if "created restic repository" not in output:
                raise ValueError("cannot init a repo: " + output)

    @unlocked_repo
    def is_initted(self) -> bool:
        command = self.restic_command(
            "check",
        )

        with subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            shell=False,
            stderr=subprocess.STDOUT,
        ) as handle:
            output = handle.communicate()[0].decode("utf-8")
            if handle.returncode != 0:
                if "unable to create lock" in output:
                    raise ValueError("Stale lock detected: ", output)
                return False
            return True

    def unlock(self) -> None:
        """Remove stale locks."""
        command = self.restic_command(
            "unlock",
        )

        with subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            shell=False,
            stderr=subprocess.STDOUT,
        ) as handle:
            # communication forces to complete and for returncode to get defined
            output = handle.communicate()[0].decode("utf-8")
            if handle.returncode != 0:
                raise ValueError("cannot unlock the backup repository: ", output)

    def lock(self) -> None:
        """
        Introduce a stale lock.
        Mainly for testing purposes.
        Double lock is supposed to fail
        """
        command = self.restic_command(
            "check",
        )

        # using temporary cache in /run/user/1000/restic-check-cache-817079729
        # repository 9639c714 opened (repository version 2) successfully, password is correct
        # created new cache in /run/user/1000/restic-check-cache-817079729
        # create exclusive lock for repository
        # load indexes
        # check all packs
        # check snapshots, trees and blobs
        # [0:00] 100.00%  1 / 1 snapshots
        # no errors were found

        try:
            for line in output_yielder(command):
                if "indexes" in line:
                    break
                if "unable" in line:
                    raise ValueError(line)
        except Exception as error:
            raise ValueError("could not lock repository") from error

    @unlocked_repo
    def restored_size(self, snapshot_id: str) -> int:
        """
        Size of a snapshot
        """
        command = self.restic_command(
            "stats",
            snapshot_id,
            "--json",
        )

        with subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            shell=False,
        ) as handle:
            output = handle.communicate()[0].decode("utf-8")
            try:
                parsed_output = ResticBackupper.parse_json_output(output)
                return parsed_output["total_size"]
            except ValueError as error:
                raise ValueError("cannot restore a snapshot: " + output) from error

    @unlocked_repo
    def restore_from_backup(
        self,
        snapshot_id,
        folders: List[str],
        verify=True,
    ) -> None:
        """
        Restore from backup with restic
        """
        if folders is None or folders == []:
            raise ValueError("cannot restore without knowing where to!")

        with tempfile.TemporaryDirectory() as temp_dir:
            if verify:
                self._raw_verified_restore(snapshot_id, target=temp_dir)
                snapshot_root = temp_dir
                for folder in folders:
                    src = join(snapshot_root, folder.strip("/"))
                    if not exists(src):
                        raise ValueError(
                            f"No such path: {src}. We tried to find {folder}"
                        )
                    dst = folder
                    sync(src, dst)

            else:  # attempting inplace restore
                for folder in folders:
                    rmtree(folder)
                    mkdir(folder)
                self._raw_verified_restore(snapshot_id, target="/")
                return

    def _raw_verified_restore(self, snapshot_id, target="/"):
        """barebones restic restore"""
        restore_command = self.restic_command(
            "restore", snapshot_id, "--target", target, "--verify"
        )

        with subprocess.Popen(
            restore_command,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            shell=False,
        ) as handle:
            # for some reason restore does not support
            # nice reporting of progress via json
            output = handle.communicate()[0].decode("utf-8")
            if "restoring" not in output:
                raise ValueError("cannot restore a snapshot: " + output)

            assert (
                handle.returncode is not None
            )  # none should be impossible after communicate
            if handle.returncode != 0:
                raise ValueError(
                    "restore exited with errorcode",
                    handle.returncode,
                    ":",
                    output,
                )

    def forget_snapshot(self, snapshot_id: str) -> None:
        self.forget_snapshots([snapshot_id])

    @unlocked_repo
    def forget_snapshots(self, snapshot_ids: List[str]) -> None:
        # in case the backupper program supports batching, otherwise implement it by cycling
        forget_command = self.restic_command(
            "forget",
            [snapshot_ids],
            # TODO: prune should be done in a separate process
            "--prune",
        )

        with subprocess.Popen(
            forget_command,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            shell=False,
        ) as handle:
            # for some reason restore does not support
            # nice reporting of progress via json
            output, err = [
                string.decode(
                    "utf-8",
                )
                for string in handle.communicate()
            ]

            if "no matching ID found" in err:
                raise ValueError(
                    "trying to delete, but no such snapshot(s): ", snapshot_ids
                )

            assert (
                handle.returncode is not None
            )  # none should be impossible after communicate
            if handle.returncode != 0:
                raise ValueError(
                    "forget exited with errorcode", handle.returncode, ":", output, err
                )

    def _load_snapshots(self) -> object:
        """
        Load list of snapshots from repository
        raises ValueError if repo does not exist
        """
        listing_command = self.restic_command(
            "snapshots",
            "--json",
        )

        with subprocess.Popen(
            listing_command,
            shell=False,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        ) as backup_listing_process_descriptor:
            output = backup_listing_process_descriptor.communicate()[0].decode("utf-8")

        if "Is there a repository at the following location?" in output:
            raise ValueError("No repository! : " + output)
        try:
            return ResticBackupper.parse_json_output(output)
        except ValueError as error:
            raise ValueError("Cannot load snapshots: ", output) from error

    @unlocked_repo
    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        snapshots = []

        for restic_snapshot in self._load_snapshots():
            # Compatibility with previous snaps:
            if len(restic_snapshot["tags"]) == 1:
                reason = BackupReason.EXPLICIT
            else:
                reason = restic_snapshot["tags"][1]

            snapshot = Snapshot(
                id=restic_snapshot["short_id"],
                created_at=restic_snapshot["time"],
                service_name=restic_snapshot["tags"][0],
                reason=reason,
            )

            snapshots.append(snapshot)
        return snapshots

    @staticmethod
    def parse_json_output(output: str) -> object:
        starting_index = ResticBackupper.json_start(output)

        if starting_index == -1:
            raise ValueError("There is no json in the restic output: " + output)

        truncated_output = output[starting_index:]
        json_messages = truncated_output.splitlines()
        if len(json_messages) == 1:
            try:
                return json.loads(truncated_output)
            except JSONDecodeError as error:
                raise ValueError(
                    "There is no json in the restic output : " + output
                ) from error

        result_array = []
        for message in json_messages:
            result_array.append(json.loads(message))
        return result_array

    @staticmethod
    def json_start(output: str) -> int:
        indices = [
            output.find("["),
            output.find("{"),
        ]
        indices = [x for x in indices if x != -1]

        if indices == []:
            return -1
        return min(indices)

    @staticmethod
    def has_json(output: str) -> bool:
        if ResticBackupper.json_start(output) == -1:
            return False
        return True
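The string-aware flattening used when building restic commands is worth exercising on its own: strings are themselves `Iterable`, so without the `isinstance(item, str)` guard they would be exploded into single characters. A standalone sketch of the same logic (folder paths are illustrative):

```python
from collections.abc import Iterable


def flatten(items):
    """String-aware recursive flattener, mirroring the
    ResticBackupper.__flatten_list logic in the diff above."""
    result = []
    for item in items:
        # Strings are Iterable too, so they must be special-cased
        # or "backup" would become ['b', 'a', 'c', 'k', 'u', 'p'].
        if isinstance(item, Iterable) and not isinstance(item, str):
            result.extend(flatten(item))
            continue
        result.append(item)
    return result


print(flatten(["backup", "--json", ["/var/lib/a", "/var/lib/b"]]))
# ['backup', '--json', '/var/lib/a', '/var/lib/b']
```

This is what lets `restic_command("backup", "--json", folders, tags=tags)` accept a nested folder list and still emit a flat argv.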
@@ -1,115 +0,0 @@
from typing import Optional, List

from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.jobs import Jobs, Job, JobStatus
from selfprivacy_api.services.service import Service
from selfprivacy_api.services import get_service_by_id


def job_type_prefix(service: Service) -> str:
    return f"services.{service.get_id()}"


def backup_job_type(service: Service) -> str:
    return f"{job_type_prefix(service)}.backup"


def autobackup_job_type() -> str:
    return "backups.autobackup"


def restore_job_type(service: Service) -> str:
    return f"{job_type_prefix(service)}.restore"


def get_jobs_by_service(service: Service) -> List[Job]:
    result = []
    for job in Jobs.get_jobs():
        if job.type_id.startswith(job_type_prefix(service)) and job.status in [
            JobStatus.CREATED,
            JobStatus.RUNNING,
        ]:
            result.append(job)
    return result


def is_something_running_for(service: Service) -> bool:
    running_jobs = [
        job for job in get_jobs_by_service(service) if job.status == JobStatus.RUNNING
    ]
    return len(running_jobs) != 0


def add_autobackup_job(services: List[Service]) -> Job:
    service_names = [s.get_display_name() for s in services]
    pretty_service_list: str = ", ".join(service_names)
    job = Jobs.add(
        type_id=autobackup_job_type(),
        name="Automatic backup",
        description=f"Scheduled backup for services: {pretty_service_list}",
    )
    return job


def add_backup_job(service: Service) -> Job:
    if is_something_running_for(service):
        message = (
            f"Cannot start a backup of {service.get_id()}, another operation is running: "
            + get_jobs_by_service(service)[0].type_id
        )
        raise ValueError(message)
    display_name = service.get_display_name()
    job = Jobs.add(
        type_id=backup_job_type(service),
        name=f"Backup {display_name}",
        description=f"Backing up {display_name}",
    )
    return job


def add_restore_job(snapshot: Snapshot) -> Job:
    service = get_service_by_id(snapshot.service_name)
    if service is None:
        raise ValueError(f"no such service: {snapshot.service_name}")
    if is_something_running_for(service):
        message = (
            f"Cannot start a restore of {service.get_id()}, another operation is running: "
            + get_jobs_by_service(service)[0].type_id
        )
        raise ValueError(message)
    display_name = service.get_display_name()
    job = Jobs.add(
        type_id=restore_job_type(service),
        name=f"Restore {display_name}",
        description=f"restoring {display_name} from {snapshot.id}",
    )
    return job


def get_job_by_type(type_id: str) -> Optional[Job]:
    for job in Jobs.get_jobs():
        if job.type_id == type_id and job.status in [
            JobStatus.CREATED,
            JobStatus.RUNNING,
        ]:
            return job
    return None


def get_failed_job_by_type(type_id: str) -> Optional[Job]:
    for job in Jobs.get_jobs():
        if job.type_id == type_id and job.status == JobStatus.ERROR:
            return job
    return None


def get_backup_job(service: Service) -> Optional[Job]:
    return get_job_by_type(backup_job_type(service))


def get_backup_fail(service: Service) -> Optional[Job]:
    return get_failed_job_by_type(backup_job_type(service))


def get_restore_job(service: Service) -> Optional[Job]:
    return get_job_by_type(restore_job_type(service))
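The job helpers above hinge on a dotted `type_id` naming scheme: per-service jobs live under `services.<id>.<operation>`, so prefix matching finds every pending job for one service. A standalone sketch of that scheme (the service ids and job list are illustrative):

```python
# Sketch of the type_id naming scheme from the jobs module above.
def job_type_prefix(service_id: str) -> str:
    return f"services.{service_id}"


def backup_job_type(service_id: str) -> str:
    return f"{job_type_prefix(service_id)}.backup"


# Hypothetical pending-job type_ids; the real ones come from Jobs.get_jobs().
pending = ["services.gitea.backup", "services.nextcloud.restore", "backups.autobackup"]

gitea_jobs = [t for t in pending if t.startswith(job_type_prefix("gitea"))]
print(gitea_jobs)  # ['services.gitea.backup']
```

Note the autobackup job deliberately lives outside the `services.` namespace, so it never blocks a per-service backup or restore.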
@@ -1,45 +0,0 @@
"""Handling of local secret used for encrypted backups.
Separated out for circular dependency reasons
"""

from __future__ import annotations
import secrets

from selfprivacy_api.utils.redis_pool import RedisPool


REDIS_KEY = "backup:local_secret"

redis = RedisPool().get_connection()


class LocalBackupSecret:
    @staticmethod
    def get() -> str:
        """A secret string which backblaze/other clouds do not know.
        Serves as encryption key.
        """
        if not LocalBackupSecret.exists():
            LocalBackupSecret.reset()
        return redis.get(REDIS_KEY)  # type: ignore

    @staticmethod
    def set(secret: str):
        redis.set(REDIS_KEY, secret)

    @staticmethod
    def reset():
        new_secret = LocalBackupSecret._generate()
        LocalBackupSecret.set(new_secret)

    @staticmethod
    def _full_reset():
        redis.delete(REDIS_KEY)

    @staticmethod
    def exists() -> bool:
        return redis.exists(REDIS_KEY) == 1

    @staticmethod
    def _generate() -> str:
        return secrets.token_urlsafe(256)
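`LocalBackupSecret` is a lazily-initialized secret: the first `get()` generates and persists it, every later call returns the same value, so the restic encryption key stays stable across restarts. A self-contained sketch of that pattern, with a plain dict standing in for the redis connection:

```python
import secrets

# In-memory stand-in for the redis connection used by LocalBackupSecret;
# the real class keeps the secret under the "backup:local_secret" key.
_store: dict = {}
KEY = "backup:local_secret"


def get_secret() -> str:
    """Lazy initialization as in LocalBackupSecret.get():
    generate the encryption secret on first access, then reuse it."""
    if KEY not in _store:
        _store[KEY] = secrets.token_urlsafe(256)
    return _store[KEY]


first = get_secret()
second = get_secret()
print(first == second)  # True: generated once, then persisted
```

With real redis the persistence survives process restarts, which is exactly what an encryption key for existing backups needs.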
@@ -1,31 +0,0 @@
from typing import Type

from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider

from selfprivacy_api.backup.providers.backblaze import Backblaze
from selfprivacy_api.backup.providers.memory import InMemoryBackup
from selfprivacy_api.backup.providers.local_file import LocalFileBackup
from selfprivacy_api.backup.providers.none import NoBackups

PROVIDER_MAPPING: dict[BackupProviderEnum, Type[AbstractBackupProvider]] = {
    BackupProviderEnum.BACKBLAZE: Backblaze,
    BackupProviderEnum.MEMORY: InMemoryBackup,
    BackupProviderEnum.FILE: LocalFileBackup,
    BackupProviderEnum.NONE: NoBackups,
}


def get_provider(
    provider_type: BackupProviderEnum,
) -> Type[AbstractBackupProvider]:
    if provider_type not in PROVIDER_MAPPING.keys():
        raise LookupError("could not look up provider", provider_type)
    return PROVIDER_MAPPING[provider_type]


def get_kind(provider: AbstractBackupProvider) -> str:
    """Get the kind of the provider in the form of a string"""
    return provider.name.value
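`get_provider` above is an enum-keyed class registry: the lookup returns a class, not an instance, and the caller instantiates it with credentials. A toy sketch of the same factory shape (class and enum names are illustrative, not the real `selfprivacy_api` types):

```python
from enum import Enum
from typing import Type


class Kind(Enum):
    MEMORY = "MEMORY"
    FILE = "FILE"


class MemoryBackup:
    pass


class FileBackup:
    pass


# Enum-keyed registry, like PROVIDER_MAPPING in the diff above.
MAPPING: dict[Kind, Type] = {Kind.MEMORY: MemoryBackup, Kind.FILE: FileBackup}


def get_provider(kind: Kind) -> Type:
    if kind not in MAPPING:
        raise LookupError("could not look up provider", kind)
    return MAPPING[kind]


ProviderClass = get_provider(Kind.MEMORY)
print(ProviderClass is MemoryBackup)  # True: the class itself is returned
```

Returning the class lets callers like `Backups.set_localfile_repo` construct the provider with their own `login`/`key`/`location` arguments.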
@@ -1,11 +0,0 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class Backblaze(AbstractBackupProvider):
    backupper = ResticBackupper("--b2-account", "--b2-key", ":b2:")

    name = BackupProviderEnum.BACKBLAZE
@@ -1,11 +0,0 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class LocalFileBackup(AbstractBackupProvider):
    backupper = ResticBackupper("", "", ":local:")

    name = BackupProviderEnum.FILE
@@ -1,11 +0,0 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class InMemoryBackup(AbstractBackupProvider):
    backupper = ResticBackupper("", "", ":memory:")

    name = BackupProviderEnum.MEMORY
@@ -1,11 +0,0 @@
-from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
-from selfprivacy_api.backup.backuppers.none_backupper import NoneBackupper
-from selfprivacy_api.graphql.queries.providers import (
-    BackupProvider as BackupProviderEnum,
-)
-
-
-class NoBackups(AbstractBackupProvider):
-    backupper = NoneBackupper()
-
-    name = BackupProviderEnum.NONE
@@ -1,25 +0,0 @@
-"""
-An abstract class for BackBlaze, S3 etc.
-It assumes that while some providers are supported via restic/rclone, others
-may require different backends
-"""
-from abc import ABC, abstractmethod
-from selfprivacy_api.backup.backuppers import AbstractBackupper
-from selfprivacy_api.graphql.queries.providers import (
-    BackupProvider as BackupProviderEnum,
-)
-
-
-class AbstractBackupProvider(ABC):
-    backupper: AbstractBackupper
-
-    name: BackupProviderEnum
-
-    def __init__(self, login="", key="", location="", repo_id=""):
-        self.backupper.set_creds(login, key, location)
-        self.login = login
-        self.key = key
-        self.location = location
-        # We do not need to do anything with this one
-        # Just remember in case the app forgets
-        self.repo_id = repo_id
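The deleted `AbstractBackupProvider` relies on each subclass supplying `backupper` as a *class* attribute, which `__init__` then reaches through `self` and configures with credentials. A minimal sketch of that pattern with stub names (`Backupper`, `Provider`, `Memory` are stand-ins, not the real classes):

```python
from abc import ABC


class Backupper:
    """Stand-in for AbstractBackupper: remembers the last credentials it was given."""

    def __init__(self) -> None:
        self.creds = ("", "", "")

    def set_creds(self, login: str, key: str, location: str) -> None:
        self.creds = (login, key, location)


class Provider(ABC):
    # Subclasses assign a Backupper instance at class level;
    # __init__ resolves self.backupper to that shared class attribute.
    backupper: Backupper

    def __init__(self, login: str = "", key: str = "", location: str = ""):
        self.backupper.set_creds(login, key, location)


class Memory(Provider):
    backupper = Backupper()
```

Because the backupper is a class attribute, every instance of a given provider subclass configures the same shared backupper object, which is the design choice the original code makes as well.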
@@ -1,198 +0,0 @@
-"""
-Module for storing backup related data in redis.
-"""
-from typing import List, Optional
-from datetime import datetime
-
-from selfprivacy_api.models.backup.snapshot import Snapshot
-from selfprivacy_api.models.backup.provider import BackupProviderModel
-from selfprivacy_api.graphql.common_types.backup import (
-    AutobackupQuotas,
-    _AutobackupQuotas,
-)
-
-from selfprivacy_api.utils.redis_pool import RedisPool
-from selfprivacy_api.utils.redis_model_storage import (
-    store_model_as_hash,
-    hash_as_model,
-)
-
-from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
-from selfprivacy_api.backup.providers import get_kind
-
-REDIS_SNAPSHOTS_PREFIX = "backups:snapshots:"
-REDIS_LAST_BACKUP_PREFIX = "backups:last-backed-up:"
-REDIS_INITTED_CACHE = "backups:repo_initted"
-
-REDIS_PROVIDER_KEY = "backups:provider"
-REDIS_AUTOBACKUP_PERIOD_KEY = "backups:autobackup_period"
-
-REDIS_AUTOBACKUP_QUOTAS_KEY = "backups:autobackup_quotas_key"
-
-redis = RedisPool().get_connection()
-
-
-class Storage:
-    """Static class for storing backup related data in redis"""
-
-    @staticmethod
-    def reset() -> None:
-        """Deletes all backup related data from redis"""
-        redis.delete(REDIS_PROVIDER_KEY)
-        redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
-        redis.delete(REDIS_INITTED_CACHE)
-        redis.delete(REDIS_AUTOBACKUP_QUOTAS_KEY)
-
-        prefixes_to_clean = [
-            REDIS_SNAPSHOTS_PREFIX,
-            REDIS_LAST_BACKUP_PREFIX,
-        ]
-
-        for prefix in prefixes_to_clean:
-            for key in redis.keys(prefix + "*"):
-                redis.delete(key)
-
-    @staticmethod
-    def invalidate_snapshot_storage() -> None:
-        """Deletes all cached snapshots from redis"""
-        for key in redis.keys(REDIS_SNAPSHOTS_PREFIX + "*"):
-            redis.delete(key)
-
-    @staticmethod
-    def __last_backup_key(service_id: str) -> str:
-        return REDIS_LAST_BACKUP_PREFIX + service_id
-
-    @staticmethod
-    def __snapshot_key(snapshot: Snapshot) -> str:
-        return REDIS_SNAPSHOTS_PREFIX + snapshot.id
-
-    @staticmethod
-    def get_last_backup_time(service_id: str) -> Optional[datetime]:
-        """Returns last backup time for a service or None if it was never backed up"""
-        key = Storage.__last_backup_key(service_id)
-        if not redis.exists(key):
-            return None
-
-        snapshot = hash_as_model(redis, key, Snapshot)
-        if not snapshot:
-            return None
-        return snapshot.created_at
-
-    @staticmethod
-    def store_last_timestamp(service_id: str, snapshot: Snapshot) -> None:
-        """Stores last backup time for a service"""
-        store_model_as_hash(
-            redis,
-            Storage.__last_backup_key(service_id),
-            snapshot,
-        )
-
-    @staticmethod
-    def cache_snapshot(snapshot: Snapshot) -> None:
-        """Stores snapshot metadata in redis for caching purposes"""
-        snapshot_key = Storage.__snapshot_key(snapshot)
-        store_model_as_hash(redis, snapshot_key, snapshot)
-
-    @staticmethod
-    def delete_cached_snapshot(snapshot: Snapshot) -> None:
-        """Deletes snapshot metadata from redis"""
-        snapshot_key = Storage.__snapshot_key(snapshot)
-        redis.delete(snapshot_key)
-
-    @staticmethod
-    def get_cached_snapshot_by_id(snapshot_id: str) -> Optional[Snapshot]:
-        """Returns cached snapshot by id or None if it doesn't exist"""
-        key = REDIS_SNAPSHOTS_PREFIX + snapshot_id
-        if not redis.exists(key):
-            return None
-        return hash_as_model(redis, key, Snapshot)
-
-    @staticmethod
-    def get_cached_snapshots() -> List[Snapshot]:
-        """Returns all cached snapshots stored in redis"""
-        keys: list[str] = redis.keys(REDIS_SNAPSHOTS_PREFIX + "*")  # type: ignore
-        result: list[Snapshot] = []
-
-        for key in keys:
-            snapshot = hash_as_model(redis, key, Snapshot)
-            if snapshot:
-                result.append(snapshot)
-        return result
-
-    @staticmethod
-    def autobackup_period_minutes() -> Optional[int]:
-        """None means autobackup is disabled"""
-        if not redis.exists(REDIS_AUTOBACKUP_PERIOD_KEY):
-            return None
-        return int(redis.get(REDIS_AUTOBACKUP_PERIOD_KEY))  # type: ignore
-
-    @staticmethod
-    def store_autobackup_period_minutes(minutes: int) -> None:
-        """Set the new autobackup period in minutes"""
-        redis.set(REDIS_AUTOBACKUP_PERIOD_KEY, minutes)
-
-    @staticmethod
-    def delete_backup_period() -> None:
-        """Set the autobackup period to none, effectively disabling autobackup"""
-        redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
-
-    @staticmethod
-    def store_provider(provider: AbstractBackupProvider) -> None:
-        """Stores backup provider auth data in redis"""
-        model = BackupProviderModel(
-            kind=get_kind(provider),
-            login=provider.login,
-            key=provider.key,
-            location=provider.location,
-            repo_id=provider.repo_id,
-        )
-        store_model_as_hash(redis, REDIS_PROVIDER_KEY, model)
-        if Storage.load_provider() != model:
-            raise IOError("could not store the provider model: ", model.dict)
-
-    @staticmethod
-    def load_provider() -> Optional[BackupProviderModel]:
-        """Loads backup storage provider auth data from redis"""
-        provider_model = hash_as_model(
-            redis,
-            REDIS_PROVIDER_KEY,
-            BackupProviderModel,
-        )
-        return provider_model
-
-    @staticmethod
-    def has_init_mark() -> bool:
-        """Returns True if the repository was initialized"""
-        if redis.exists(REDIS_INITTED_CACHE):
-            return True
-        return False
-
-    @staticmethod
-    def mark_as_init():
-        """Marks the repository as initialized"""
-        redis.set(REDIS_INITTED_CACHE, 1)
-
-    @staticmethod
-    def mark_as_uninitted():
-        """Marks the repository as not initialized"""
-        redis.delete(REDIS_INITTED_CACHE)
-
-    @staticmethod
-    def set_autobackup_quotas(quotas: AutobackupQuotas) -> None:
-        store_model_as_hash(redis, REDIS_AUTOBACKUP_QUOTAS_KEY, quotas.to_pydantic())
-
-    @staticmethod
-    def autobackup_quotas() -> AutobackupQuotas:
-        quotas_model = hash_as_model(
-            redis, REDIS_AUTOBACKUP_QUOTAS_KEY, _AutobackupQuotas
-        )
-        if quotas_model is None:
-            unlimited_quotas = AutobackupQuotas(
-                last=-1,
-                daily=-1,
-                weekly=-1,
-                monthly=-1,
-                yearly=-1,
-            )
-            return unlimited_quotas
-        return AutobackupQuotas.from_pydantic(quotas_model)  # pylint: disable=no-member
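`Storage.reset()` clears whole key families by scanning for a prefix and deleting each match. The same pattern, sketched against a plain dict standing in for the redis connection (the names `invalidate_prefix` and `cache` are illustrative):

```python
def invalidate_prefix(store: dict, prefix: str) -> None:
    """Delete every key starting with `prefix`.

    Materializing the matches first avoids mutating the dict while iterating it.
    """
    for key in [k for k in store if k.startswith(prefix)]:
        del store[key]


cache = {
    "backups:snapshots:a": 1,
    "backups:snapshots:b": 2,
    "backups:provider": 3,
}
invalidate_prefix(cache, "backups:snapshots:")
```

On a real redis server, `KEYS pattern` walks the entire keyspace and blocks the server while doing so; the incremental `SCAN` command is the usual production alternative for this kind of prefix cleanup.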
@@ -1,117 +0,0 @@
-"""
-The tasks module contains the worker tasks that are used to back up and restore
-"""
-from datetime import datetime, timezone
-
-from selfprivacy_api.graphql.common_types.backup import (
-    RestoreStrategy,
-    BackupReason,
-)
-
-from selfprivacy_api.models.backup.snapshot import Snapshot
-from selfprivacy_api.utils.huey import huey
-from huey import crontab
-
-from selfprivacy_api.services import get_service_by_id
-from selfprivacy_api.backup import Backups
-from selfprivacy_api.backup.jobs import add_autobackup_job
-from selfprivacy_api.jobs import Jobs, JobStatus, Job
-
-
-SNAPSHOT_CACHE_TTL_HOURS = 6
-
-
-def validate_datetime(dt: datetime) -> bool:
-    """
-    Validates that it is time to back up.
-    Also ensures that the timezone-aware time is used.
-    """
-    if dt.tzinfo is None:
-        return Backups.is_time_to_backup(dt.replace(tzinfo=timezone.utc))
-    return Backups.is_time_to_backup(dt)
-
-
-# huey tasks need to return something
-@huey.task()
-def start_backup(service_id: str, reason: BackupReason = BackupReason.EXPLICIT) -> bool:
-    """
-    The worker task that starts the backup process.
-    """
-    service = get_service_by_id(service_id)
-    if service is None:
-        raise ValueError(f"No such service: {service_id}")
-    Backups.back_up(service, reason)
-    return True
-
-
-@huey.task()
-def prune_autobackup_snapshots(job: Job) -> bool:
-    """
-    Remove all autobackup snapshots that do not fit into quotas set
-    """
-    Jobs.update(job, JobStatus.RUNNING)
-    try:
-        Backups.prune_all_autosnaps()
-    except Exception as e:
-        Jobs.update(job, JobStatus.ERROR, error=type(e).__name__ + ":" + str(e))
-        return False
-
-    Jobs.update(job, JobStatus.FINISHED)
-    return True
-
-
-@huey.task()
-def restore_snapshot(
-    snapshot: Snapshot,
-    strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
-) -> bool:
-    """
-    The worker task that starts the restore process.
-    """
-    Backups.restore_snapshot(snapshot, strategy)
-    return True
-
-
-def do_autobackup() -> None:
-    """
-    Body of autobackup task, broken out to test it
-    For some reason, we cannot launch periodic huey tasks
-    inside tests
-    """
-    time = datetime.utcnow().replace(tzinfo=timezone.utc)
-    services_to_back_up = Backups.services_to_back_up(time)
-    if not services_to_back_up:
-        return
-    job = add_autobackup_job(services_to_back_up)
-
-    progress_per_service = 100 // len(services_to_back_up)
-    progress = 0
-    Jobs.update(job, JobStatus.RUNNING, progress=progress)
-
-    for service in services_to_back_up:
-        try:
-            Backups.back_up(service, BackupReason.AUTO)
-        except Exception as error:
-            Jobs.update(
-                job,
-                status=JobStatus.ERROR,
-                error=type(error).__name__ + ": " + str(error),
-            )
-            return
-        progress = progress + progress_per_service
-        Jobs.update(job, JobStatus.RUNNING, progress=progress)
-
-    Jobs.update(job, JobStatus.FINISHED)
-
-
-@huey.periodic_task(validate_datetime=validate_datetime)
-def automatic_backup() -> None:
-    """
-    The worker periodic task that starts the automatic backup process.
-    """
-    do_autobackup()
-
-
-@huey.periodic_task(crontab(hour="*/" + str(SNAPSHOT_CACHE_TTL_HOURS)))
-def reload_snapshot_cache():
-    Backups.force_snapshot_cache_reload()
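`validate_datetime` normalizes naive datetimes to UTC before consulting `Backups.is_time_to_backup`. That normalization step in isolation, as a self-contained sketch (`ensure_aware` is an illustrative name, not part of the API):

```python
from datetime import datetime, timezone


def ensure_aware(dt: datetime) -> datetime:
    # Naive datetimes are assumed to be UTC, mirroring validate_datetime above;
    # already-aware datetimes pass through untouched.
    if dt.tzinfo is None:
        return dt.replace(tzinfo=timezone.utc)
    return dt
```

`replace(tzinfo=...)` attaches the zone without shifting the clock reading, which is the right choice here: the naive value is *interpreted* as UTC rather than converted.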
@@ -1,35 +0,0 @@
-import subprocess
-from os.path import exists
-from typing import Generator
-
-
-def output_yielder(command) -> Generator[str, None, None]:
-    """Note: If you break during iteration, it kills the process"""
-    with subprocess.Popen(
-        command,
-        shell=False,
-        stdout=subprocess.PIPE,
-        stderr=subprocess.STDOUT,
-        universal_newlines=True,
-    ) as handle:
-        if handle is None or handle.stdout is None:
-            raise ValueError("could not run command: ", command)
-
-        try:
-            for line in iter(handle.stdout.readline, ""):
-                if "NOTICE:" not in line:
-                    yield line
-        except GeneratorExit:
-            handle.kill()
-
-
-def sync(src_path: str, dest_path: str):
-    """a wrapper around rclone sync"""
-
-    if not exists(src_path):
-        raise ValueError("source dir for rclone sync must exist")
-
-    rclone_command = ["rclone", "sync", "-P", src_path, dest_path]
-    for raw_message in output_yielder(rclone_command):
-        if "ERROR" in raw_message:
-            raise ValueError(raw_message)
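`output_yielder` streams a child process's stdout line by line and kills the child if the consumer abandons the generator (the `GeneratorExit` branch). A self-contained sketch of the same pattern, exercised against a small Python child process instead of rclone (`stream_lines` is an illustrative name):

```python
import subprocess
import sys
from typing import Generator, List


def stream_lines(command: List[str]) -> Generator[str, None, None]:
    """Yield stdout lines as they appear; kill the child if iteration stops early."""
    with subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,        # merge stderr into the same stream
        universal_newlines=True,         # text mode, '\n'-normalized lines
    ) as handle:
        assert handle.stdout is not None
        try:
            # readline returns "" only at EOF, so this loop ends with the child
            for line in iter(handle.stdout.readline, ""):
                yield line
        except GeneratorExit:
            # Consumer broke out of its for-loop: do not leave an orphan process
            handle.kill()


lines = list(stream_lines([sys.executable, "-c", "print('one'); print('two')"]))
```

Reading line by line keeps memory flat for long-running commands like `rclone sync -P`, whose progress output would otherwise accumulate.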
@@ -27,4 +27,4 @@ async def get_token_header(
 
 def get_api_version() -> str:
     """Get API version"""
-    return "3.1.0"
+    return "2.1.2"
@@ -1,36 +0,0 @@
-"""Backup"""
-# pylint: disable=too-few-public-methods
-from enum import Enum
-import strawberry
-from pydantic import BaseModel
-
-
-@strawberry.enum
-class RestoreStrategy(Enum):
-    INPLACE = "INPLACE"
-    DOWNLOAD_VERIFY_OVERWRITE = "DOWNLOAD_VERIFY_OVERWRITE"
-
-
-@strawberry.enum
-class BackupReason(Enum):
-    EXPLICIT = "EXPLICIT"
-    AUTO = "AUTO"
-    PRE_RESTORE = "PRE_RESTORE"
-
-
-class _AutobackupQuotas(BaseModel):
-    last: int
-    daily: int
-    weekly: int
-    monthly: int
-    yearly: int
-
-
-@strawberry.experimental.pydantic.type(model=_AutobackupQuotas, all_fields=True)
-class AutobackupQuotas:
-    pass
-
-
-@strawberry.experimental.pydantic.input(model=_AutobackupQuotas, all_fields=True)
-class AutobackupQuotasInput:
-    pass
@@ -2,7 +2,6 @@ import typing
 import strawberry
 
 
-# TODO: use https://strawberry.rocks/docs/integrations/pydantic when it is stable
 @strawberry.type
 class DnsRecord:
     """DNS record"""
@@ -12,4 +11,3 @@ class DnsRecord:
     content: str
     ttl: int
     priority: typing.Optional[int]
-    display_name: str
@@ -12,7 +12,6 @@ class ApiJob:
     """Job type for GraphQL."""
 
     uid: str
-    type_id: str
     name: str
     description: str
     status: str
@@ -29,7 +28,6 @@ def job_to_api_job(job: Job) -> ApiJob:
     """Convert a Job from jobs controller to a GraphQL ApiJob."""
     return ApiJob(
         uid=str(job.uid),
-        type_id=job.type_id,
         name=job.name,
         description=job.description,
         status=job.status.name,
@@ -1,17 +1,11 @@
 from enum import Enum
-from typing import Optional, List
-import datetime
+import typing
 
 import strawberry
 
-from selfprivacy_api.graphql.common_types.backup import BackupReason
-
 from selfprivacy_api.graphql.common_types.dns import DnsRecord
 
 from selfprivacy_api.services import get_service_by_id, get_services_by_location
 from selfprivacy_api.services import Service as ServiceInterface
-from selfprivacy_api.services import ServiceDnsRecord
 
 from selfprivacy_api.utils.block_devices import BlockDevices
-from selfprivacy_api.utils.network import get_ip4, get_ip6
 
 
 def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
@@ -21,7 +15,7 @@ def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
             service=service_to_graphql_service(service),
             title=service.get_display_name(),
             used_space=str(service.get_storage_usage()),
-            volume=get_volume_by_id(service.get_drive()),
+            volume=get_volume_by_id(service.get_location()),
         )
         for service in get_services_by_location(root.name)
     ]
@@ -36,8 +30,8 @@ class StorageVolume:
     used_space: str
     root: bool
     name: str
-    model: Optional[str]
-    serial: Optional[str]
+    model: typing.Optional[str]
+    serial: typing.Optional[str]
     type: str
 
     @strawberry.field
@@ -49,7 +43,7 @@ class StorageVolume:
 @strawberry.interface
 class StorageUsageInterface:
     used_space: str
-    volume: Optional[StorageVolume]
+    volume: typing.Optional[StorageVolume]
     title: str
 
 
@@ -57,7 +51,7 @@ class StorageUsageInterface:
 class ServiceStorageUsage(StorageUsageInterface):
     """Storage usage for a service"""
 
-    service: Optional["Service"]
+    service: typing.Optional["Service"]
 
 
 @strawberry.enum
@@ -85,21 +79,7 @@ def get_storage_usage(root: "Service") -> ServiceStorageUsage:
         service=service_to_graphql_service(service),
         title=service.get_display_name(),
         used_space=str(service.get_storage_usage()),
-        volume=get_volume_by_id(service.get_drive()),
+        volume=get_volume_by_id(service.get_location()),
     )
-
-
-# TODO: This won't be needed when deriving DnsRecord via strawberry pydantic integration
-# https://strawberry.rocks/docs/integrations/pydantic
-# Remove when the link above says it got stable.
-def service_dns_to_graphql(record: ServiceDnsRecord) -> DnsRecord:
-    return DnsRecord(
-        record_type=record.type,
-        name=record.name,
-        content=record.content,
-        ttl=record.ttl,
-        priority=record.priority,
-        display_name=record.display_name,
-    )
 
 
@@ -112,39 +92,15 @@ class Service:
     is_movable: bool
     is_required: bool
     is_enabled: bool
-    can_be_backed_up: bool
-    backup_description: str
     status: ServiceStatusEnum
-    url: Optional[str]
+    url: typing.Optional[str]
+    dns_records: typing.Optional[typing.List[DnsRecord]]
 
-    @strawberry.field
-    def dns_records(self) -> Optional[List[DnsRecord]]:
-        service = get_service_by_id(self.id)
-        if service is None:
-            raise LookupError(f"no service {self.id}. Should be unreachable")
-
-        raw_records = service.get_dns_records(get_ip4(), get_ip6())
-        dns_records = [service_dns_to_graphql(record) for record in raw_records]
-        return dns_records
-
     @strawberry.field
     def storage_usage(self) -> ServiceStorageUsage:
         """Get storage usage for a service"""
         return get_storage_usage(self)
 
-    # TODO: fill this
-    @strawberry.field
-    def backup_snapshots(self) -> Optional[List["SnapshotInfo"]]:
-        return None
-
-
-@strawberry.type
-class SnapshotInfo:
-    id: str
-    service: Service
-    created_at: datetime.datetime
-    reason: BackupReason
-
-
 def service_to_graphql_service(service: ServiceInterface) -> Service:
     """Convert service to graphql service"""
@@ -156,14 +112,22 @@ def service_to_graphql_service(service: ServiceInterface) -> Service:
         is_movable=service.is_movable(),
         is_required=service.is_required(),
         is_enabled=service.is_enabled(),
-        can_be_backed_up=service.can_be_backed_up(),
-        backup_description=service.get_backup_description(),
         status=ServiceStatusEnum(service.get_status().value),
         url=service.get_url(),
+        dns_records=[
+            DnsRecord(
+                record_type=record.type,
+                name=record.name,
+                content=record.content,
+                ttl=record.ttl,
+                priority=record.priority,
+            )
+            for record in service.get_dns_records()
+        ],
     )
 
 
-def get_volume_by_id(volume_id: str) -> Optional[StorageVolume]:
+def get_volume_by_id(volume_id: str) -> typing.Optional[StorageVolume]:
     """Get volume by id"""
     volume = BlockDevices().get_block_device(volume_id)
     if volume is None:
@@ -17,6 +17,7 @@ class UserType(Enum):
 
 @strawberry.type
 class User:
 
     user_type: UserType
     username: str
     # userHomeFolderspace: UserHomeFolderUsage
@@ -31,6 +32,7 @@ class UserMutationReturn(MutationReturnInterface):
 
 
 def get_user_by_username(username: str) -> typing.Optional[User]:
 
     user = users_actions.get_user_by_username(username)
     if user is None:
         return None
@@ -1,241 +0,0 @@
-import typing
-import strawberry
-
-from selfprivacy_api.jobs import Jobs
-
-from selfprivacy_api.graphql import IsAuthenticated
-from selfprivacy_api.graphql.mutations.mutation_interface import (
-    GenericMutationReturn,
-    GenericJobMutationReturn,
-    MutationReturnInterface,
-)
-from selfprivacy_api.graphql.queries.backup import BackupConfiguration
-from selfprivacy_api.graphql.queries.backup import Backup
-from selfprivacy_api.graphql.queries.providers import BackupProvider
-from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
-from selfprivacy_api.graphql.common_types.backup import (
-    AutobackupQuotasInput,
-    RestoreStrategy,
-)
-
-from selfprivacy_api.backup import Backups
-from selfprivacy_api.services import get_service_by_id
-from selfprivacy_api.backup.tasks import (
-    start_backup,
-    restore_snapshot,
-    prune_autobackup_snapshots,
-)
-from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
-
-
-@strawberry.input
-class InitializeRepositoryInput:
-    """Initialize repository input"""
-
-    provider: BackupProvider
-    # The following field may become optional for other providers?
-    # Backblaze takes bucket id and name
-    location_id: str
-    location_name: str
-    # Key ID and key for Backblaze
-    login: str
-    password: str
-
-
-@strawberry.type
-class GenericBackupConfigReturn(MutationReturnInterface):
-    """Generic backup config return"""
-
-    configuration: typing.Optional[BackupConfiguration]
-
-
-@strawberry.type
-class BackupMutations:
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def initialize_repository(
-        self, repository: InitializeRepositoryInput
-    ) -> GenericBackupConfigReturn:
-        """Initialize a new repository"""
-        Backups.set_provider(
-            kind=repository.provider,
-            login=repository.login,
-            key=repository.password,
-            location=repository.location_name,
-            repo_id=repository.location_id,
-        )
-        Backups.init_repo()
-        return GenericBackupConfigReturn(
-            success=True,
-            message="",
-            code=200,
-            configuration=Backup().configuration(),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def remove_repository(self) -> GenericBackupConfigReturn:
-        """Remove repository"""
-        Backups.reset()
-        return GenericBackupConfigReturn(
-            success=True,
-            message="",
-            code=200,
-            configuration=Backup().configuration(),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def set_autobackup_period(
-        self, period: typing.Optional[int] = None
-    ) -> GenericBackupConfigReturn:
-        """Set autobackup period. None is to disable autobackup"""
-        if period is not None:
-            Backups.set_autobackup_period_minutes(period)
-        else:
-            Backups.set_autobackup_period_minutes(0)
-
-        return GenericBackupConfigReturn(
-            success=True,
-            message="",
-            code=200,
-            configuration=Backup().configuration(),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def set_autobackup_quotas(
-        self, quotas: AutobackupQuotasInput
-    ) -> GenericBackupConfigReturn:
-        """
-        Set autobackup quotas.
-        Values <=0 for any timeframe mean no limits for that timeframe.
-        To disable autobackup use autobackup period setting, not this mutation.
-        """
-
-        job = Jobs.add(
-            name="Trimming autobackup snapshots",
-            type_id="backups.autobackup_trimming",
-            description="Pruning the excessive snapshots after the new autobackup quotas are set",
-        )
-
-        try:
-            Backups.set_autobackup_quotas(quotas)
-            # this task is async and can fail with only a job to report the error
-            prune_autobackup_snapshots(job)
-            return GenericBackupConfigReturn(
-                success=True,
-                message="",
-                code=200,
-                configuration=Backup().configuration(),
-            )
-
-        except Exception as e:
-            return GenericBackupConfigReturn(
-                success=False,
-                message=type(e).__name__ + ":" + str(e),
-                code=400,
-                configuration=Backup().configuration(),
-            )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def start_backup(self, service_id: str) -> GenericJobMutationReturn:
-        """Start backup"""
-
-        service = get_service_by_id(service_id)
-        if service is None:
-            return GenericJobMutationReturn(
-                success=False,
-                code=300,
-                message=f"nonexistent service: {service_id}",
-                job=None,
-            )
-
-        job = add_backup_job(service)
-        start_backup(service_id)
-
-        return GenericJobMutationReturn(
-            success=True,
-            code=200,
-            message="Backup job queued",
-            job=job_to_api_job(job),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def restore_backup(
-        self,
-        snapshot_id: str,
-        strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
-    ) -> GenericJobMutationReturn:
-        """Restore backup"""
-        snap = Backups.get_snapshot_by_id(snapshot_id)
-        if snap is None:
-            return GenericJobMutationReturn(
-                success=False,
-                code=404,
-                message=f"No such snapshot: {snapshot_id}",
-                job=None,
-            )
-
-        service = get_service_by_id(snap.service_name)
-        if service is None:
-            return GenericJobMutationReturn(
-                success=False,
-                code=404,
-                message=f"nonexistent service: {snap.service_name}",
-                job=None,
-            )
-
-        try:
-            job = add_restore_job(snap)
-        except ValueError as error:
-            return GenericJobMutationReturn(
-                success=False,
-                code=400,
-                message=str(error),
-                job=None,
-            )
-
-        restore_snapshot(snap, strategy)
-
-        return GenericJobMutationReturn(
-            success=True,
-            code=200,
-            message="restore job created",
-            job=job_to_api_job(job),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def forget_snapshot(self, snapshot_id: str) -> GenericMutationReturn:
-        """Forget a snapshot.
-        Makes it inaccessible from the server.
-        After some time, the data (encrypted) will not be recoverable
-        from the backup server too, but not immediately"""
-
-        snap = Backups.get_snapshot_by_id(snapshot_id)
-        if snap is None:
-            return GenericMutationReturn(
-                success=False,
-                code=404,
-                message=f"snapshot {snapshot_id} not found",
-            )
-
-        try:
-            Backups.forget_snapshot(snap)
-            return GenericMutationReturn(
-                success=True,
-                code=200,
-                message="",
-            )
-        except Exception as error:
-            return GenericMutationReturn(
-                success=False,
-                code=400,
-                message=str(error),
-            )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def force_snapshots_reload(self) -> GenericMutationReturn:
-        """Force snapshots reload"""
-        Backups.force_snapshot_cache_reload()
-        return GenericMutationReturn(
|
|
||||||
success=True,
|
|
||||||
code=200,
|
|
||||||
message="",
|
|
||||||
)
|
|
|
@@ -1,216 +0,0 @@
-"""Deprecated mutations
-
-There was made a mistake, where mutations were not grouped, and were instead
-placed in the root of mutations schema. In this file, we import all the
-mutations from and provide them to the root for backwards compatibility.
-"""
-
-import strawberry
-from selfprivacy_api.graphql import IsAuthenticated
-from selfprivacy_api.graphql.common_types.user import UserMutationReturn
-from selfprivacy_api.graphql.mutations.api_mutations import (
-    ApiKeyMutationReturn,
-    ApiMutations,
-    DeviceApiTokenMutationReturn,
-)
-from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
-from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
-from selfprivacy_api.graphql.mutations.mutation_interface import (
-    GenericJobMutationReturn,
-    GenericMutationReturn,
-)
-from selfprivacy_api.graphql.mutations.services_mutations import (
-    ServiceJobMutationReturn,
-    ServiceMutationReturn,
-    ServicesMutations,
-)
-from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
-from selfprivacy_api.graphql.mutations.system_mutations import (
-    AutoUpgradeSettingsMutationReturn,
-    SystemMutations,
-    TimezoneMutationReturn,
-)
-from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
-from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
-
-
-def deprecated_mutation(func, group, auth=True):
-    return strawberry.mutation(
-        resolver=func,
-        permission_classes=[IsAuthenticated] if auth else [],
-        deprecation_reason=f"Use `{group}.{func.__name__}` instead",
-    )
-
-
-@strawberry.type
-class DeprecatedApiMutations:
-    get_new_recovery_api_key: ApiKeyMutationReturn = deprecated_mutation(
-        ApiMutations.get_new_recovery_api_key,
-        "api",
-    )
-
-    use_recovery_api_key: DeviceApiTokenMutationReturn = deprecated_mutation(
-        ApiMutations.use_recovery_api_key,
-        "api",
-        auth=False,
-    )
-
-    refresh_device_api_token: DeviceApiTokenMutationReturn = deprecated_mutation(
-        ApiMutations.refresh_device_api_token,
-        "api",
-    )
-
-    delete_device_api_token: GenericMutationReturn = deprecated_mutation(
-        ApiMutations.delete_device_api_token,
-        "api",
-    )
-
-    get_new_device_api_key: ApiKeyMutationReturn = deprecated_mutation(
-        ApiMutations.get_new_device_api_key,
-        "api",
-    )
-
-    invalidate_new_device_api_key: GenericMutationReturn = deprecated_mutation(
-        ApiMutations.invalidate_new_device_api_key,
-        "api",
-    )
-
-    authorize_with_new_device_api_key: DeviceApiTokenMutationReturn = (
-        deprecated_mutation(
-            ApiMutations.authorize_with_new_device_api_key,
-            "api",
-            auth=False,
-        )
-    )
-
-
-@strawberry.type
-class DeprecatedSystemMutations:
-    change_timezone: TimezoneMutationReturn = deprecated_mutation(
-        SystemMutations.change_timezone,
-        "system",
-    )
-
-    change_auto_upgrade_settings: AutoUpgradeSettingsMutationReturn = (
-        deprecated_mutation(
-            SystemMutations.change_auto_upgrade_settings,
-            "system",
-        )
-    )
-
-    run_system_rebuild: GenericMutationReturn = deprecated_mutation(
-        SystemMutations.run_system_rebuild,
-        "system",
-    )
-
-    run_system_rollback: GenericMutationReturn = deprecated_mutation(
-        SystemMutations.run_system_rollback,
-        "system",
-    )
-
-    run_system_upgrade: GenericMutationReturn = deprecated_mutation(
-        SystemMutations.run_system_upgrade,
-        "system",
-    )
-
-    reboot_system: GenericMutationReturn = deprecated_mutation(
-        SystemMutations.reboot_system,
-        "system",
-    )
-
-    pull_repository_changes: GenericMutationReturn = deprecated_mutation(
-        SystemMutations.pull_repository_changes,
-        "system",
-    )
-
-
-@strawberry.type
-class DeprecatedUsersMutations:
-    create_user: UserMutationReturn = deprecated_mutation(
-        UsersMutations.create_user,
-        "users",
-    )
-
-    delete_user: GenericMutationReturn = deprecated_mutation(
-        UsersMutations.delete_user,
-        "users",
-    )
-
-    update_user: UserMutationReturn = deprecated_mutation(
-        UsersMutations.update_user,
-        "users",
-    )
-
-    add_ssh_key: UserMutationReturn = deprecated_mutation(
-        UsersMutations.add_ssh_key,
-        "users",
-    )
-
-    remove_ssh_key: UserMutationReturn = deprecated_mutation(
-        UsersMutations.remove_ssh_key,
-        "users",
-    )
-
-
-@strawberry.type
-class DeprecatedStorageMutations:
-    resize_volume: GenericMutationReturn = deprecated_mutation(
-        StorageMutations.resize_volume,
-        "storage",
-    )
-
-    mount_volume: GenericMutationReturn = deprecated_mutation(
-        StorageMutations.mount_volume,
-        "storage",
-    )
-
-    unmount_volume: GenericMutationReturn = deprecated_mutation(
-        StorageMutations.unmount_volume,
-        "storage",
-    )
-
-    migrate_to_binds: GenericJobMutationReturn = deprecated_mutation(
-        StorageMutations.migrate_to_binds,
-        "storage",
-    )
-
-
-@strawberry.type
-class DeprecatedServicesMutations:
-    enable_service: ServiceMutationReturn = deprecated_mutation(
-        ServicesMutations.enable_service,
-        "services",
-    )
-
-    disable_service: ServiceMutationReturn = deprecated_mutation(
-        ServicesMutations.disable_service,
-        "services",
-    )
-
-    stop_service: ServiceMutationReturn = deprecated_mutation(
-        ServicesMutations.stop_service,
-        "services",
-    )
-
-    start_service: ServiceMutationReturn = deprecated_mutation(
-        ServicesMutations.start_service,
-        "services",
-    )
-
-    restart_service: ServiceMutationReturn = deprecated_mutation(
-        ServicesMutations.restart_service,
-        "services",
-    )
-
-    move_service: ServiceJobMutationReturn = deprecated_mutation(
-        ServicesMutations.move_service,
-        "services",
-    )
-
-
-@strawberry.type
-class DeprecatedJobMutations:
-    remove_job: GenericMutationReturn = deprecated_mutation(
-        JobMutations.remove_job,
-        "jobs",
-    )
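The deleted file above builds every legacy root field with the `deprecated_mutation` helper, which re-exposes an existing grouped resolver and attaches a `deprecation_reason` pointing at its new home. A minimal stdlib-only sketch of that wrapper pattern (no strawberry dependency; `remove_job` here is a stand-in resolver, not the real one):

```python
# Sketch of the deprecated_mutation pattern: wrap an existing resolver,
# record why callers should migrate, and keep the old entry point working.
from functools import wraps
import warnings


def deprecated_mutation(func, group):
    """Re-expose `func` at the root, noting that `group.func` is canonical."""
    reason = f"Use `{group}.{func.__name__}` instead"

    @wraps(func)
    def wrapper(*args, **kwargs):
        # Emit a runtime warning; strawberry instead surfaces the reason
        # in the GraphQL schema as `deprecationReason`.
        warnings.warn(reason, DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)

    wrapper.deprecation_reason = reason
    return wrapper


def remove_job(job_id: str) -> bool:
    return True  # stand-in for the real JobMutations.remove_job resolver


legacy_remove_job = deprecated_mutation(remove_job, "jobs")
```

Calling `legacy_remove_job` still reaches the real resolver, so old clients keep working while the schema advertises the replacement.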
@@ -17,5 +17,5 @@ class GenericMutationReturn(MutationReturnInterface):


 @strawberry.type
-class GenericJobMutationReturn(MutationReturnInterface):
+class GenericJobButationReturn(MutationReturnInterface):
     job: typing.Optional[ApiJob] = None
@@ -4,26 +4,18 @@ import typing
 import strawberry
 from selfprivacy_api.graphql import IsAuthenticated
 from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
-from selfprivacy_api.jobs import JobStatus
-
-from traceback import format_tb as format_traceback
-
-from selfprivacy_api.graphql.mutations.mutation_interface import (
-    GenericJobMutationReturn,
-    GenericMutationReturn,
-)
 from selfprivacy_api.graphql.common_types.service import (
     Service,
     service_to_graphql_service,
 )
-from selfprivacy_api.actions.services import (
-    move_service,
-    ServiceNotFoundError,
-    VolumeNotFoundError,
-)
+from selfprivacy_api.graphql.mutations.mutation_interface import (
+    GenericJobButationReturn,
+    GenericMutationReturn,
+)

 from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.utils.block_devices import BlockDevices


 @strawberry.type
@@ -42,7 +34,7 @@ class MoveServiceInput:


 @strawberry.type
-class ServiceJobMutationReturn(GenericJobMutationReturn):
+class ServiceJobMutationReturn(GenericJobButationReturn):
     """Service job mutation return type."""

     service: typing.Optional[Service] = None
@@ -55,22 +47,14 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def enable_service(self, service_id: str) -> ServiceMutationReturn:
         """Enable service."""
-        try:
-            service = get_service_by_id(service_id)
-            if service is None:
-                return ServiceMutationReturn(
-                    success=False,
-                    message="Service not found.",
-                    code=404,
-                )
-            service.enable()
-        except Exception as e:
+        service = get_service_by_id(service_id)
+        if service is None:
             return ServiceMutationReturn(
                 success=False,
-                message=pretty_error(e),
-                code=400,
+                message="Service not found.",
+                code=404,
             )
-
+        service.enable()
         return ServiceMutationReturn(
             success=True,
             message="Service enabled.",
@@ -81,21 +65,14 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def disable_service(self, service_id: str) -> ServiceMutationReturn:
         """Disable service."""
-        try:
-            service = get_service_by_id(service_id)
-            if service is None:
-                return ServiceMutationReturn(
-                    success=False,
-                    message="Service not found.",
-                    code=404,
-                )
-            service.disable()
-        except Exception as e:
+        service = get_service_by_id(service_id)
+        if service is None:
             return ServiceMutationReturn(
                 success=False,
-                message=pretty_error(e),
-                code=400,
+                message="Service not found.",
+                code=404,
             )
-
+        service.disable()
         return ServiceMutationReturn(
             success=True,
             message="Service disabled.",
@@ -160,58 +137,33 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
         """Move service."""
-        # We need a service instance for a reply later
         service = get_service_by_id(input.service_id)
         if service is None:
             return ServiceJobMutationReturn(
                 success=False,
-                message=f"Service does not exist: {input.service_id}",
+                message="Service not found.",
                 code=404,
             )
-
-        try:
-            job = move_service(input.service_id, input.location)
-
-        except (ServiceNotFoundError, VolumeNotFoundError) as e:
-            return ServiceJobMutationReturn(
-                success=False,
-                message=pretty_error(e),
-                code=404,
-            )
-        except Exception as e:
+        if not service.is_movable():
             return ServiceJobMutationReturn(
                 success=False,
-                message=pretty_error(e),
+                message="Service is not movable.",
                 code=400,
                 service=service_to_graphql_service(service),
             )
-
-        if job.status in [JobStatus.CREATED, JobStatus.RUNNING]:
-            return ServiceJobMutationReturn(
-                success=True,
-                message="Started moving the service.",
-                code=200,
-                service=service_to_graphql_service(service),
-                job=job_to_api_job(job),
-            )
-        elif job.status == JobStatus.FINISHED:
-            return ServiceJobMutationReturn(
-                success=True,
-                message="Service moved.",
-                code=200,
-                service=service_to_graphql_service(service),
-                job=job_to_api_job(job),
-            )
-        else:
+        volume = BlockDevices().get_block_device(input.location)
+        if volume is None:
             return ServiceJobMutationReturn(
                 success=False,
-                message=f"While moving service and performing the step '{job.status_text}', error occured: {job.error}",
-                code=400,
+                message="Volume not found.",
+                code=404,
                 service=service_to_graphql_service(service),
-                job=job_to_api_job(job),
             )
-
-
-def pretty_error(e: Exception) -> str:
-    traceback = "/r".join(format_traceback(e.__traceback__))
-    return type(e).__name__ + ": " + str(e) + ": " + traceback
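The master side of this hunk adds a module-level `pretty_error` helper that joins the exception's formatted traceback frames (note the repo joins with the literal string `"/r"`) and prefixes the exception type and message. A stdlib-only sketch of the same idea, joining with `"\n"` here purely for readability:

```python
# Sketch of the pretty_error helper: flatten an exception's traceback
# frames into one string, prefixed with its type and message.
from traceback import format_tb as format_traceback


def pretty_error(e: Exception) -> str:
    # format_tb returns one multi-line string per stack frame.
    traceback = "\n".join(format_traceback(e.__traceback__))
    return type(e).__name__ + ": " + str(e) + ": " + traceback


try:
    raise ValueError("volume is not usable")
except ValueError as e:
    text = pretty_error(e)
```

The resulting string is what `move_service` and friends put into the mutation's `message` field when an unexpected exception escapes.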
@@ -0,0 +1,102 @@
+#!/usr/bin/env python3
+"""Users management module"""
+# pylint: disable=too-few-public-methods
+
+import strawberry
+from selfprivacy_api.actions.users import UserNotFound
+
+from selfprivacy_api.graphql import IsAuthenticated
+from selfprivacy_api.actions.ssh import (
+    InvalidPublicKey,
+    KeyAlreadyExists,
+    KeyNotFound,
+    create_ssh_key,
+    remove_ssh_key,
+)
+from selfprivacy_api.graphql.common_types.user import (
+    UserMutationReturn,
+    get_user_by_username,
+)
+
+
+@strawberry.input
+class SshMutationInput:
+    """Input type for ssh mutation"""
+
+    username: str
+    ssh_key: str
+
+
+@strawberry.type
+class SshMutations:
+    """Mutations ssh"""
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
+        """Add a new ssh key"""
+
+        try:
+            create_ssh_key(ssh_input.username, ssh_input.ssh_key)
+        except KeyAlreadyExists:
+            return UserMutationReturn(
+                success=False,
+                message="Key already exists",
+                code=409,
+            )
+        except InvalidPublicKey:
+            return UserMutationReturn(
+                success=False,
+                message="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
+                code=400,
+            )
+        except UserNotFound:
+            return UserMutationReturn(
+                success=False,
+                message="User not found",
+                code=404,
+            )
+        except Exception as e:
+            return UserMutationReturn(
+                success=False,
+                message=str(e),
+                code=500,
+            )
+
+        return UserMutationReturn(
+            success=True,
+            message="New SSH key successfully written",
+            code=201,
+            user=get_user_by_username(ssh_input.username),
+        )
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
+        """Remove ssh key from user"""
+
+        try:
+            remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
+        except KeyNotFound:
+            return UserMutationReturn(
+                success=False,
+                message="Key not found",
+                code=404,
+            )
+        except UserNotFound:
+            return UserMutationReturn(
+                success=False,
+                message="User not found",
+                code=404,
+            )
+        except Exception as e:
+            return UserMutationReturn(
+                success=False,
+                message=str(e),
+                code=500,
+            )
+
+        return UserMutationReturn(
+            success=True,
+            message="SSH key successfully removed",
+            code=200,
+            user=get_user_by_username(ssh_input.username),
+        )
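The `add_ssh_key` resolver in this file follows one pattern throughout the API: each domain exception is translated into an HTTP-style status on the return object (409 for a duplicate key, 400 for a bad key type, 404 for a missing user, 500 as a catch-all) instead of propagating to the GraphQL layer. A self-contained sketch of that mapping, with simplified stand-in exception classes and a hypothetical `create_ssh_key` action:

```python
# Sketch of the exception-to-status-code mapping used by add_ssh_key:
# every domain error becomes a (success, message, code) return value.
from dataclasses import dataclass


class KeyAlreadyExists(Exception): ...
class InvalidPublicKey(Exception): ...
class UserNotFound(Exception): ...


@dataclass
class UserMutationReturn:
    success: bool
    message: str
    code: int


def create_ssh_key(username: str, key: str) -> None:
    # Stand-in for the real action: only validates the key type prefix.
    if not key.startswith(("ssh-ed25519", "ssh-rsa")):
        raise InvalidPublicKey()


def add_ssh_key(username: str, key: str) -> UserMutationReturn:
    try:
        create_ssh_key(username, key)
    except KeyAlreadyExists:
        return UserMutationReturn(False, "Key already exists", 409)
    except InvalidPublicKey:
        return UserMutationReturn(False, "Invalid key type", 400)
    except UserNotFound:
        return UserMutationReturn(False, "User not found", 404)
    except Exception as e:
        return UserMutationReturn(False, str(e), 500)
    return UserMutationReturn(True, "New SSH key successfully written", 201)
```

Keeping the mapping in the resolver means the GraphQL response always carries a machine-readable `code`, and clients never see a raw server exception.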
@@ -4,7 +4,7 @@ from selfprivacy_api.graphql import IsAuthenticated
 from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
 from selfprivacy_api.utils.block_devices import BlockDevices
 from selfprivacy_api.graphql.mutations.mutation_interface import (
-    GenericJobMutationReturn,
+    GenericJobButationReturn,
     GenericMutationReturn,
 )
 from selfprivacy_api.jobs.migrate_to_binds import (
@@ -79,10 +79,10 @@ class StorageMutations:
         )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobMutationReturn:
+    def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobButationReturn:
         """Migrate to binds"""
         if is_bind_migrated():
-            return GenericJobMutationReturn(
+            return GenericJobButationReturn(
                 success=False, code=409, message="Already migrated to binds"
             )
         job = start_bind_migration(
@@ -94,7 +94,7 @@ class StorageMutations:
                 pleroma_block_device=input.pleroma_block_device,
             )
         )
-        return GenericJobMutationReturn(
+        return GenericJobButationReturn(
             success=True,
             code=200,
             message="Migration to binds started, rebuild the system to apply changes",
@@ -3,18 +3,12 @@
 import typing
 import strawberry
 from selfprivacy_api.graphql import IsAuthenticated
-from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
 from selfprivacy_api.graphql.mutations.mutation_interface import (
-    GenericJobMutationReturn,
     GenericMutationReturn,
     MutationReturnInterface,
-    GenericJobMutationReturn,
 )

 import selfprivacy_api.actions.system as system_actions
-from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
-from selfprivacy_api.jobs.nix_collect_garbage import start_nix_collect_garbage
-import selfprivacy_api.actions.ssh as ssh_actions


 @strawberry.type
@@ -32,22 +26,6 @@ class AutoUpgradeSettingsMutationReturn(MutationReturnInterface):
     allowReboot: bool


-@strawberry.type
-class SSHSettingsMutationReturn(MutationReturnInterface):
-    """A return type for after changing SSH settings"""
-
-    enable: bool
-    password_authentication: bool
-
-
-@strawberry.input
-class SSHSettingsInput:
-    """Input type for SSH settings"""
-
-    enable: bool
-    password_authentication: bool
-
-
 @strawberry.input
 class AutoUpgradeSettingsInput:
     """Input type for auto upgrade settings"""
@@ -99,90 +77,40 @@ class SystemMutations:
         )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def change_ssh_settings(
-        self, settings: SSHSettingsInput
-    ) -> SSHSettingsMutationReturn:
-        """Change ssh settings of the server."""
-        ssh_actions.set_ssh_settings(
-            enable=settings.enable,
-            password_authentication=settings.password_authentication,
-        )
-
-        new_settings = ssh_actions.get_ssh_settings()
-
-        return SSHSettingsMutationReturn(
+    def run_system_rebuild(self) -> GenericMutationReturn:
+        system_actions.rebuild_system()
+        return GenericMutationReturn(
             success=True,
-            message="SSH settings changed",
+            message="Starting rebuild system",
             code=200,
-            enable=new_settings.enable,
-            password_authentication=new_settings.passwordAuthentication,
         )

-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def run_system_rebuild(self) -> GenericJobMutationReturn:
-        try:
-            job = system_actions.rebuild_system()
-            return GenericJobMutationReturn(
-                success=True,
-                message="Starting system rebuild",
-                code=200,
-                job=job_to_api_job(job),
-            )
-        except system_actions.ShellException as e:
-            return GenericJobMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
-
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def run_system_rollback(self) -> GenericMutationReturn:
         system_actions.rollback_system()
-        try:
-            return GenericMutationReturn(
-                success=True,
-                message="Starting system rollback",
-                code=200,
-            )
-        except system_actions.ShellException as e:
-            return GenericMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
+        return GenericMutationReturn(
+            success=True,
+            message="Starting rebuild system",
+            code=200,
+        )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def run_system_upgrade(self) -> GenericJobMutationReturn:
-        try:
-            job = system_actions.upgrade_system()
-            return GenericJobMutationReturn(
-                success=True,
-                message="Starting system upgrade",
-                code=200,
-                job=job_to_api_job(job),
-            )
-        except system_actions.ShellException as e:
-            return GenericJobMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
+    def run_system_upgrade(self) -> GenericMutationReturn:
+        system_actions.upgrade_system()
+        return GenericMutationReturn(
+            success=True,
+            message="Starting rebuild system",
+            code=200,
+        )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def reboot_system(self) -> GenericMutationReturn:
         system_actions.reboot_system()
-        try:
-            return GenericMutationReturn(
-                success=True,
-                message="System reboot has started",
-                code=200,
-            )
-        except system_actions.ShellException as e:
-            return GenericMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
+        return GenericMutationReturn(
+            success=True,
+            message="System reboot has started",
+            code=200,
+        )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def pull_repository_changes(self) -> GenericMutationReturn:
@@ -198,14 +126,3 @@ class SystemMutations:
             message=f"Failed to pull repository changes:\n{result.data}",
             code=500,
         )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def nix_collect_garbage(self) -> GenericJobMutationReturn:
-        job = start_nix_collect_garbage()
-
-        return GenericJobMutationReturn(
-            success=True,
-            code=200,
-            message="Garbage collector started...",
-            job=job_to_api_job(job),
-        )
@@ -3,18 +3,10 @@
 # pylint: disable=too-few-public-methods
 import strawberry
 from selfprivacy_api.graphql import IsAuthenticated
-from selfprivacy_api.actions.users import UserNotFound
 from selfprivacy_api.graphql.common_types.user import (
     UserMutationReturn,
     get_user_by_username,
 )
-from selfprivacy_api.actions.ssh import (
-    InvalidPublicKey,
-    KeyAlreadyExists,
-    KeyNotFound,
-    create_ssh_key,
-    remove_ssh_key,
-)
 from selfprivacy_api.graphql.mutations.mutation_interface import (
     GenericMutationReturn,
 )
@@ -29,16 +21,8 @@ class UserMutationInput:
     password: str


-@strawberry.input
-class SshMutationInput:
-    """Input type for ssh mutation"""
-
-    username: str
-    ssh_key: str
-
-
 @strawberry.type
-class UsersMutations:
+class UserMutations:
     """Mutations change user settings"""

     @strawberry.mutation(permission_classes=[IsAuthenticated])
@@ -69,12 +53,6 @@ class UsersMutations:
                 message=str(e),
                 code=400,
             )
-        except users_actions.InvalidConfiguration as e:
-            return UserMutationReturn(
-                success=False,
-                message=str(e),
-                code=400,
-            )
         except users_actions.UserAlreadyExists as e:
             return UserMutationReturn(
                 success=False,
@@ -137,73 +115,3 @@ class UsersMutations:
             code=200,
             user=get_user_by_username(user.username),
         )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
-        """Add a new ssh key"""
-
-        try:
-            create_ssh_key(ssh_input.username, ssh_input.ssh_key)
-        except KeyAlreadyExists:
-            return UserMutationReturn(
-                success=False,
-                message="Key already exists",
-                code=409,
-            )
-        except InvalidPublicKey:
-            return UserMutationReturn(
-                success=False,
-                message="Invalid key type. Only ssh-ed25519, ssh-rsa and ecdsa are supported",
-                code=400,
-            )
-        except UserNotFound:
-            return UserMutationReturn(
-                success=False,
-                message="User not found",
-                code=404,
-            )
-        except Exception as e:
-            return UserMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
-
-        return UserMutationReturn(
-            success=True,
-            message="New SSH key successfully written",
-            code=201,
-            user=get_user_by_username(ssh_input.username),
-        )
-
-    @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
-        """Remove ssh key from user"""
-
-        try:
-            remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
-        except KeyNotFound:
-            return UserMutationReturn(
-                success=False,
-                message="Key not found",
-                code=404,
-            )
-        except UserNotFound:
-            return UserMutationReturn(
-                success=False,
-                message="User not found",
-                code=404,
-            )
-        except Exception as e:
-            return UserMutationReturn(
-                success=False,
-                message=str(e),
-                code=500,
-            )
-
-        return UserMutationReturn(
|
|
||||||
success=True,
|
|
||||||
message="SSH key successfully removed",
|
|
||||||
code=200,
|
|
||||||
user=get_user_by_username(ssh_input.username),
|
|
||||||
)
|
|
||||||
|
|
|
@@ -38,7 +38,7 @@ class ApiRecoveryKeyStatus:
 
 
 def get_recovery_key_status() -> ApiRecoveryKeyStatus:
-    """Get recovery key status, times are timezone-aware"""
+    """Get recovery key status"""
     status = get_api_recovery_token_status()
     if status is None or not status.exists:
         return ApiRecoveryKeyStatus(
@@ -1,83 +0,0 @@
-"""Backup"""
-# pylint: disable=too-few-public-methods
-import typing
-import strawberry
-
-
-from selfprivacy_api.backup import Backups
-from selfprivacy_api.backup.local_secret import LocalBackupSecret
-from selfprivacy_api.graphql.queries.providers import BackupProvider
-from selfprivacy_api.graphql.common_types.service import (
-    Service,
-    ServiceStatusEnum,
-    SnapshotInfo,
-    service_to_graphql_service,
-)
-from selfprivacy_api.graphql.common_types.backup import AutobackupQuotas
-from selfprivacy_api.services import get_service_by_id
-
-
-@strawberry.type
-class BackupConfiguration:
-    provider: BackupProvider
-    # When server is lost, the app should have the key to decrypt backups
-    # on a new server
-    encryption_key: str
-    # False when repo is not initialized and not ready to be used
-    is_initialized: bool
-    # If none, autobackups are disabled
-    autobackup_period: typing.Optional[int]
-    # None is equal to all quotas being unlimited (-1). Optional for compatibility reasons.
-    autobackup_quotas: AutobackupQuotas
-    # Bucket name for Backblaze, path for some other providers
-    location_name: typing.Optional[str]
-    location_id: typing.Optional[str]
-
-
-@strawberry.type
-class Backup:
-    @strawberry.field
-    def configuration(self) -> BackupConfiguration:
-        return BackupConfiguration(
-            provider=Backups.provider().name,
-            encryption_key=LocalBackupSecret.get(),
-            is_initialized=Backups.is_initted(),
-            autobackup_period=Backups.autobackup_period_minutes(),
-            location_name=Backups.provider().location,
-            location_id=Backups.provider().repo_id,
-            autobackup_quotas=Backups.autobackup_quotas(),
-        )
-
-    @strawberry.field
-    def all_snapshots(self) -> typing.List[SnapshotInfo]:
-        if not Backups.is_initted():
-            return []
-        result = []
-        snapshots = Backups.get_all_snapshots()
-        for snap in snapshots:
-            service = get_service_by_id(snap.service_name)
-            if service is None:
-                service = Service(
-                    id=snap.service_name,
-                    display_name=f"{snap.service_name} (Orphaned)",
-                    description="",
-                    svg_icon="",
-                    is_movable=False,
-                    is_required=False,
-                    is_enabled=False,
-                    status=ServiceStatusEnum.OFF,
-                    url=None,
-                    dns_records=None,
-                    can_be_backed_up=False,
-                    backup_description="",
-                )
-            else:
-                service = service_to_graphql_service(service)
-            graphql_snap = SnapshotInfo(
-                id=snap.id,
-                service=service,
-                created_at=snap.created_at,
-                reason=snap.reason,
-            )
-            result.append(graphql_snap)
-        return result
@@ -15,6 +15,7 @@ from selfprivacy_api.jobs import Jobs
 class Job:
     @strawberry.field
     def get_jobs(self) -> typing.List[ApiJob]:
+
         Jobs.get_jobs()
 
         return [job_to_api_job(job) for job in Jobs.get_jobs()]
@@ -19,7 +19,3 @@ class ServerProvider(Enum):
 @strawberry.enum
 class BackupProvider(Enum):
     BACKBLAZE = "BACKBLAZE"
-    NONE = "NONE"
-    # for testing purposes, make sure not selectable in prod.
-    MEMORY = "MEMORY"
-    FILE = "FILE"
@@ -23,7 +23,7 @@ class Storage:
             else str(volume.size),
             free_space=str(volume.fsavail),
             used_space=str(volume.fsused),
-            root=volume.is_root(),
+            root=volume.name == "sda1",
             name=volume.name,
             model=volume.model,
             serial=volume.serial,
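The storage hunk above trades a generic `volume.is_root()` check for a hard-coded device name (`sda1`), which breaks on hosts where the root disk is named differently (`vda1`, `nvme0n1p1`, ...). A minimal sketch of a mount-point-based check, with a hypothetical stand-in for the project's volume object:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class BlockDevice:
    # Hypothetical stand-in for the API's block-device object.
    name: str
    mountpoints: List[str] = field(default_factory=list)

    def is_root(self) -> bool:
        # A volume is the root volume when it is mounted at "/",
        # regardless of what the kernel happened to name the device.
        return "/" in self.mountpoints


sda1 = BlockDevice("sda1", ["/"])
vdb = BlockDevice("vdb", ["/volumes/vdb"])
print(sda1.is_root(), vdb.is_root())  # True False
```

Keying the decision off the mount point rather than the device name is why the master side's `is_root()` survives disk renames that the branch's string comparison would not.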
@@ -33,7 +33,6 @@ class SystemDomainInfo:
             content=record.content,
             ttl=record.ttl,
             priority=record.priority,
-            display_name=record.display_name,
         )
         for record in get_all_required_dns_records()
     ]
@@ -5,30 +5,21 @@ import asyncio
 from typing import AsyncGenerator
 import strawberry
 from selfprivacy_api.graphql import IsAuthenticated
-from selfprivacy_api.graphql.mutations.deprecated_mutations import (
-    DeprecatedApiMutations,
-    DeprecatedJobMutations,
-    DeprecatedServicesMutations,
-    DeprecatedStorageMutations,
-    DeprecatedSystemMutations,
-    DeprecatedUsersMutations,
-)
 from selfprivacy_api.graphql.mutations.api_mutations import ApiMutations
 from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
 from selfprivacy_api.graphql.mutations.mutation_interface import GenericMutationReturn
 from selfprivacy_api.graphql.mutations.services_mutations import ServicesMutations
+from selfprivacy_api.graphql.mutations.ssh_mutations import SshMutations
 from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
 from selfprivacy_api.graphql.mutations.system_mutations import SystemMutations
-from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
 
 from selfprivacy_api.graphql.queries.api_queries import Api
-from selfprivacy_api.graphql.queries.backup import Backup
 from selfprivacy_api.graphql.queries.jobs import Job
 from selfprivacy_api.graphql.queries.services import Services
 from selfprivacy_api.graphql.queries.storage import Storage
 from selfprivacy_api.graphql.queries.system import System
 
-from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
+from selfprivacy_api.graphql.mutations.users_mutations import UserMutations
 from selfprivacy_api.graphql.queries.users import Users
 from selfprivacy_api.jobs.test import test_job
 
@@ -37,16 +28,16 @@ from selfprivacy_api.jobs.test import test_job
 class Query:
     """Root schema for queries"""
 
-    @strawberry.field
-    def api(self) -> Api:
-        """API access status"""
-        return Api()
-
     @strawberry.field(permission_classes=[IsAuthenticated])
     def system(self) -> System:
         """System queries"""
         return System()
 
+    @strawberry.field
+    def api(self) -> Api:
+        """API access status"""
+        return Api()
+
     @strawberry.field(permission_classes=[IsAuthenticated])
     def users(self) -> Users:
         """Users queries"""
@@ -67,58 +58,19 @@ class Query:
         """Services queries"""
         return Services()
 
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def backup(self) -> Backup:
-        """Backup queries"""
-        return Backup()
-
 
 @strawberry.type
 class Mutation(
-    DeprecatedApiMutations,
-    DeprecatedSystemMutations,
-    DeprecatedUsersMutations,
-    DeprecatedStorageMutations,
-    DeprecatedServicesMutations,
-    DeprecatedJobMutations,
+    ApiMutations,
+    SystemMutations,
+    UserMutations,
+    SshMutations,
+    StorageMutations,
+    ServicesMutations,
+    JobMutations,
 ):
     """Root schema for mutations"""
 
-    @strawberry.field
-    def api(self) -> ApiMutations:
-        """API mutations"""
-        return ApiMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def system(self) -> SystemMutations:
-        """System mutations"""
-        return SystemMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def users(self) -> UsersMutations:
-        """Users mutations"""
-        return UsersMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def storage(self) -> StorageMutations:
-        """Storage mutations"""
-        return StorageMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def services(self) -> ServicesMutations:
-        """Services mutations"""
-        return ServicesMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def jobs(self) -> JobMutations:
-        """Jobs mutations"""
-        return JobMutations()
-
-    @strawberry.field(permission_classes=[IsAuthenticated])
-    def backup(self) -> BackupMutations:
-        """Backup mutations"""
-        return BackupMutations()
-
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def test_mutation(self) -> GenericMutationReturn:
         """Test mutation"""
@@ -143,8 +95,4 @@ class Subscription:
         await asyncio.sleep(0.5)
 
 
-schema = strawberry.Schema(
-    query=Query,
-    mutation=Mutation,
-    subscription=Subscription,
-)
+schema = strawberry.Schema(query=Query, mutation=Mutation, subscription=Subscription)
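On the branch side of the schema hunks above, the `Mutation` root is assembled purely by multiple inheritance: each feature module contributes one mixin class, and the root type inherits from all of them, so every mutation lands flat on the root. A plain-Python sketch of that composition (the method names here are hypothetical placeholders, only the class names mirror the diff):

```python
# Each feature module exports one mixin; the schema root inherits them all.
class ApiMutations:
    def recovery_key(self) -> str:
        return "api"


class SystemMutations:
    def reboot(self) -> str:
        return "system"


class JobMutations:
    def remove_job(self) -> str:
        return "jobs"


class Mutation(ApiMutations, SystemMutations, JobMutations):
    """Root schema for mutations"""


m = Mutation()
print(m.recovery_key(), m.reboot(), m.remove_job())  # api system jobs
```

The master side instead keeps deprecated mixins and exposes each group behind a nested field (`def api(self) -> ApiMutations: ...`), which namespaces the mutations rather than flattening them onto the root.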
@@ -8,8 +8,8 @@ A job is a dictionary with the following keys:
 - name: name of the job
 - description: description of the job
 - status: status of the job
-- created_at: date of creation of the job, naive localtime
-- updated_at: date of last update of the job, naive localtime
+- created_at: date of creation of the job
+- updated_at: date of last update of the job
 - finished_at: date of finish of the job
 - error: error message if the job failed
 - result: result of the job
@@ -26,11 +26,8 @@ from selfprivacy_api.utils.redis_pool import RedisPool
 
 JOB_EXPIRATION_SECONDS = 10 * 24 * 60 * 60  # ten days
 
-STATUS_LOGS_PREFIX = "jobs_logs:status:"
-PROGRESS_LOGS_PREFIX = "jobs_logs:progress:"
-
 
-class JobStatus(str, Enum):
+class JobStatus(Enum):
     """
     Status of a job.
     """
@@ -73,7 +70,6 @@ class Jobs:
         jobs = Jobs.get_jobs()
         for job in jobs:
             Jobs.remove(job)
-        Jobs.reset_logs()
 
     @staticmethod
     def add(
@@ -124,60 +120,6 @@ class Jobs:
             return True
         return False
 
-    @staticmethod
-    def reset_logs() -> None:
-        redis = RedisPool().get_connection()
-        for key in redis.keys(STATUS_LOGS_PREFIX + "*"):
-            redis.delete(key)
-
-    @staticmethod
-    def log_status_update(job: Job, status: JobStatus) -> None:
-        redis = RedisPool().get_connection()
-        key = _status_log_key_from_uuid(job.uid)
-        redis.lpush(key, status.value)
-        redis.expire(key, 10)
-
-    @staticmethod
-    def log_progress_update(job: Job, progress: int) -> None:
-        redis = RedisPool().get_connection()
-        key = _progress_log_key_from_uuid(job.uid)
-        redis.lpush(key, progress)
-        redis.expire(key, 10)
-
-    @staticmethod
-    def status_updates(job: Job) -> list[JobStatus]:
-        result: list[JobStatus] = []
-
-        redis = RedisPool().get_connection()
-        key = _status_log_key_from_uuid(job.uid)
-        if not redis.exists(key):
-            return []
-
-        status_strings: list[str] = redis.lrange(key, 0, -1)  # type: ignore
-        for status in status_strings:
-            try:
-                result.append(JobStatus[status])
-            except KeyError as error:
-                raise ValueError("impossible job status: " + status) from error
-        return result
-
-    @staticmethod
-    def progress_updates(job: Job) -> list[int]:
-        result: list[int] = []
-
-        redis = RedisPool().get_connection()
-        key = _progress_log_key_from_uuid(job.uid)
-        if not redis.exists(key):
-            return []
-
-        progress_strings: list[str] = redis.lrange(key, 0, -1)  # type: ignore
-        for progress in progress_strings:
-            try:
-                result.append(int(progress))
-            except KeyError as error:
-                raise ValueError("impossible job progress: " + progress) from error
-        return result
-
     @staticmethod
     def update(
         job: Job,
@@ -198,17 +140,9 @@ class Jobs:
             job.description = description
         if status_text is not None:
             job.status_text = status_text
-
-        # if it is finished it is 100
-        # unless user says otherwise
-        if status == JobStatus.FINISHED and progress is None:
-            progress = 100
-        if progress is not None and job.progress != progress:
+        if progress is not None:
             job.progress = progress
-            Jobs.log_progress_update(job, progress)
-
         job.status = status
-        Jobs.log_status_update(job, status)
         job.updated_at = datetime.datetime.now()
         job.error = error
         job.result = result
@@ -224,14 +158,6 @@ class Jobs:
 
         return job
 
-    @staticmethod
-    def set_expiration(job: Job, expiration_seconds: int) -> Job:
-        redis = RedisPool().get_connection()
-        key = _redis_key_from_uuid(job.uid)
-        if redis.exists(key):
-            redis.expire(key, expiration_seconds)
-        return job
-
     @staticmethod
     def get_job(uid: str) -> typing.Optional[Job]:
         """
@@ -268,33 +194,11 @@ class Jobs:
         return False
 
 
-def report_progress(progress: int, job: Job, status_text: str) -> None:
-    """
-    A terse way to call a common operation, for readability
-    job.report_progress() would be even better
-    but it would go against how this file is written
-    """
-    Jobs.update(
-        job=job,
-        status=JobStatus.RUNNING,
-        status_text=status_text,
-        progress=progress,
-    )
-
-
-def _redis_key_from_uuid(uuid_string) -> str:
+def _redis_key_from_uuid(uuid_string):
    return "jobs:" + str(uuid_string)
 
 
-def _status_log_key_from_uuid(uuid_string) -> str:
-    return STATUS_LOGS_PREFIX + str(uuid_string)
-
-
-def _progress_log_key_from_uuid(uuid_string) -> str:
-    return PROGRESS_LOGS_PREFIX + str(uuid_string)
-
-
-def _store_job_as_hash(redis, redis_key, model) -> None:
+def _store_job_as_hash(redis, redis_key, model):
     for key, value in model.dict().items():
         if isinstance(value, uuid.UUID):
             value = str(value)
@@ -305,7 +209,7 @@ def _store_job_as_hash(redis, redis_key, model) -> None:
     redis.hset(redis_key, key, str(value))
 
 
-def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
+def _job_from_hash(redis, redis_key):
     if redis.exists(redis_key):
         job_dict = redis.hgetall(redis_key)
         for date in [
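The removed `log_status_update` / `status_updates` pair above pushes enum values onto a short-lived Redis list (`lpush` plus a 10-second `expire`) and decodes them back with `JobStatus[...]`, which is also why master declares `JobStatus(str, Enum)`: values must round-trip through strings. A minimal in-memory sketch of that round-trip, standing in for the Redis list:

```python
from enum import Enum


class JobStatus(str, Enum):
    # Subclassing str (as on the master side of the diff) lets the member
    # be used anywhere a plain string is expected, e.g. a Redis value.
    CREATED = "CREATED"
    RUNNING = "RUNNING"
    FINISHED = "FINISHED"


log: list = []  # stand-in for the Redis list behind lpush/lrange


def log_status_update(status: JobStatus) -> None:
    # lpush prepends, so the newest status sits at index 0.
    log.insert(0, status.value)


def status_updates() -> list:
    # Mirrors the removed Jobs.status_updates(): JobStatus[name] raises
    # KeyError for anything that is not a valid member name.
    return [JobStatus[s] for s in log]


log_status_update(JobStatus.RUNNING)
log_status_update(JobStatus.FINISHED)
print([s.value for s in status_updates()])  # ['FINISHED', 'RUNNING']
```

In the deleted code the list also carries a TTL (`redis.expire(key, 10)`), so these logs are only a short observation window for tests and subscribers, not a persistent history.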
@@ -67,8 +67,8 @@ def move_folder(
 
     try:
         data_path.mkdir(mode=0o750, parents=True, exist_ok=True)
-    except Exception as error:
-        print(f"Error creating data path: {error}")
+    except Exception as e:
+        print(f"Error creating data path: {e}")
         return
 
     try:
@@ -1,147 +0,0 @@
-import re
-import subprocess
-from typing import Tuple, Iterable
-
-from selfprivacy_api.utils.huey import huey
-
-from selfprivacy_api.jobs import JobStatus, Jobs, Job
-
-
-class ShellException(Exception):
-    """Shell-related errors"""
-
-
-COMPLETED_WITH_ERROR = "Error occurred, please report this to the support chat."
-RESULT_WAS_NOT_FOUND_ERROR = (
-    "We are sorry, garbage collection result was not found. "
-    "Something went wrong, please report this to the support chat."
-)
-CLEAR_COMPLETED = "Garbage collection completed."
-
-
-def delete_old_gens_and_return_dead_report() -> str:
-    subprocess.run(
-        ["nix-env", "-p", "/nix/var/nix/profiles/system", "--delete-generations old"],
-        check=False,
-    )
-
-    result = subprocess.check_output(["nix-store", "--gc", "--print-dead"]).decode(
-        "utf-8"
-    )
-
-    return " " if result is None else result
-
-
-def run_nix_collect_garbage() -> Iterable[bytes]:
-    process = subprocess.Popen(
-        ["nix-store", "--gc"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
-    )
-    return process.stdout if process.stdout else iter([])
-
-
-def parse_line(job: Job, line: str) -> Job:
-    """
-    We parse the string for the presence of a final line,
-    with the final amount of space cleared.
-    Simply put, we're just looking for a similar string:
-    "1537 store paths deleted, 339.84 MiB freed".
-    """
-    pattern = re.compile(r"[+-]?\d+\.\d+ \w+(?= freed)")
-    match = re.search(pattern, line)
-
-    if match is None:
-        raise ShellException("nix returned gibberish output")
-
-    else:
-        Jobs.update(
-            job=job,
-            status=JobStatus.FINISHED,
-            status_text=CLEAR_COMPLETED,
-            result=f"{match.group(0)} have been cleared",
-        )
-    return job
-
-
-def process_stream(job: Job, stream: Iterable[bytes], total_dead_packages: int) -> None:
-    completed_packages = 0
-    prev_progress = 0
-
-    for line in stream:
-        line = line.decode("utf-8")
-
-        if "deleting '/nix/store/" in line:
-            completed_packages += 1
-            percent = int((completed_packages / total_dead_packages) * 100)
-
-            if percent - prev_progress >= 5:
-                Jobs.update(
-                    job=job,
-                    status=JobStatus.RUNNING,
-                    progress=percent,
-                    status_text="Cleaning...",
-                )
-                prev_progress = percent
-
-        elif "store paths deleted," in line:
-            parse_line(job, line)
-
-
-def get_dead_packages(output) -> Tuple[int, float]:
-    dead = len(re.findall("/nix/store/", output))
-    percent = 0
-    if dead != 0:
-        percent = 100 / dead
-    return dead, percent
-
-
-@huey.task()
-def calculate_and_clear_dead_paths(job: Job):
-    Jobs.update(
-        job=job,
-        status=JobStatus.RUNNING,
-        progress=0,
-        status_text="Calculate the number of dead packages...",
-    )
-
-    dead_packages, package_equal_to_percent = get_dead_packages(
-        delete_old_gens_and_return_dead_report()
-    )
-
-    if dead_packages == 0:
-        Jobs.update(
-            job=job,
-            status=JobStatus.FINISHED,
-            status_text="Nothing to clear",
-            result="System is clear",
-        )
-        return True
-
-    Jobs.update(
-        job=job,
-        status=JobStatus.RUNNING,
-        progress=0,
-        status_text=f"Found {dead_packages} packages to remove!",
-    )
-
-    stream = run_nix_collect_garbage()
-    try:
-        process_stream(job, stream, dead_packages)
-    except ShellException as error:
-        Jobs.update(
-            job=job,
-            status=JobStatus.ERROR,
-            status_text=COMPLETED_WITH_ERROR,
-            error=RESULT_WAS_NOT_FOUND_ERROR,
-        )
-
-
-def start_nix_collect_garbage() -> Job:
-    job = Jobs.add(
-        type_id="maintenance.collect_nix_garbage",
-        name="Collect garbage",
-        description="Cleaning up unused packages",
-    )
-
-    calculate_and_clear_dead_paths(job=job)
-
-    return job
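The heart of the deleted garbage-collection module is `parse_line`, which extracts the freed-space figure from `nix-store --gc` output with a lookahead regex. The pattern can be exercised on its own against the summary line the docstring cites:

```python
import re

# Same pattern as the deleted parse_line(): capture "<float> <unit>"
# immediately followed by the word " freed" in nix-store's summary line.
PATTERN = re.compile(r"[+-]?\d+\.\d+ \w+(?= freed)")

line = "1537 store paths deleted, 339.84 MiB freed"
match = PATTERN.search(line)
print(match.group(0) if match else "no match")  # 339.84 MiB
```

Because the lookahead `(?= freed)` is zero-width, only the amount and unit are captured; a line without the trailing ` freed` yields no match, which is what made the deleted code raise `ShellException` on unexpected output.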
@@ -1,136 +0,0 @@
-"""
-A task to start the system upgrade or rebuild by starting a systemd unit.
-After starting, track the status of the systemd unit and update the Job
-status accordingly.
-"""
-import subprocess
-from selfprivacy_api.utils.huey import huey
-from selfprivacy_api.jobs import JobStatus, Jobs, Job
-from selfprivacy_api.utils.waitloop import wait_until_true
-from selfprivacy_api.utils.systemd import (
-    get_service_status,
-    get_last_log_lines,
-    ServiceStatus,
-)
-
-START_TIMEOUT = 60 * 5
-START_INTERVAL = 1
-RUN_TIMEOUT = 60 * 60
-RUN_INTERVAL = 5
-
-
-def check_if_started(unit_name: str):
-    """Check if the systemd unit has started"""
-    try:
-        status = get_service_status(unit_name)
-        if status == ServiceStatus.ACTIVE:
-            return True
-        return False
-    except subprocess.CalledProcessError:
-        return False
-
-
-def check_running_status(job: Job, unit_name: str):
-    """Check if the systemd unit is running"""
-    try:
-        status = get_service_status(unit_name)
-        if status == ServiceStatus.INACTIVE:
-            Jobs.update(
-                job=job,
-                status=JobStatus.FINISHED,
-                result="System rebuilt.",
-                progress=100,
-            )
-            return True
-        if status == ServiceStatus.FAILED:
-            log_lines = get_last_log_lines(unit_name, 10)
-            Jobs.update(
-                job=job,
-                status=JobStatus.ERROR,
-                error="System rebuild failed. Last log lines:\n" + "\n".join(log_lines),
-            )
-            return True
-        if status == ServiceStatus.ACTIVE:
-            log_lines = get_last_log_lines(unit_name, 1)
-            Jobs.update(
-                job=job,
-                status=JobStatus.RUNNING,
-                status_text=log_lines[0] if len(log_lines) > 0 else "",
-            )
-            return False
-        return False
-    except subprocess.CalledProcessError:
-        return False
-
-
-def rebuild_system(job: Job, upgrade: bool = False):
-    """
-    Broken out to allow calling it synchronously.
-    We cannot just block until task is done because it will require a second worker
-    Which we do not have
-    """
-
-    unit_name = "sp-nixos-upgrade.service" if upgrade else "sp-nixos-rebuild.service"
-    try:
-        command = ["systemctl", "start", unit_name]
-        subprocess.run(
-            command,
-            check=True,
-            start_new_session=True,
-            shell=False,
-        )
-        Jobs.update(
-            job=job,
-            status=JobStatus.RUNNING,
-            status_text="Starting the system rebuild...",
-        )
-        # Wait for the systemd unit to start
-        try:
-            wait_until_true(
-                lambda: check_if_started(unit_name),
-                timeout_sec=START_TIMEOUT,
-                interval=START_INTERVAL,
-            )
-        except TimeoutError:
-            log_lines = get_last_log_lines(unit_name, 10)
-            Jobs.update(
-                job=job,
-                status=JobStatus.ERROR,
-                error="System rebuild timed out. Last log lines:\n"
-                + "\n".join(log_lines),
-            )
-            return
-        Jobs.update(
-            job=job,
-            status=JobStatus.RUNNING,
-            status_text="Rebuilding the system...",
-        )
-        # Wait for the systemd unit to finish
-        try:
-            wait_until_true(
-                lambda: check_running_status(job, unit_name),
-                timeout_sec=RUN_TIMEOUT,
-                interval=RUN_INTERVAL,
-            )
-        except TimeoutError:
-            log_lines = get_last_log_lines(unit_name, 10)
-            Jobs.update(
-                job=job,
-                status=JobStatus.ERROR,
-                error="System rebuild timed out. Last log lines:\n"
-                + "\n".join(log_lines),
-            )
-            return
-
-    except subprocess.CalledProcessError as e:
-        Jobs.update(
-            job=job,
-            status=JobStatus.ERROR,
-            status_text=str(e),
-        )
-
-
-@huey.task()
-def rebuild_system_task(job: Job, upgrade: bool = False):
-    """Rebuild the system"""
-    rebuild_system(job, upgrade)
@@ -8,16 +8,31 @@ at api.skippedMigrations in userdata.json and populating it
 with IDs of the migrations to skip.
 Adding DISABLE_ALL to that array disables the migrations module entirely.
 """
-from selfprivacy_api.utils import ReadUserData, UserDataFiles
-from selfprivacy_api.migrations.write_token_to_redis import WriteTokenToRedis
-from selfprivacy_api.migrations.check_for_system_rebuild_jobs import (
-    CheckForSystemRebuildJobs,
+from selfprivacy_api.migrations.check_for_failed_binds_migration import (
+    CheckForFailedBindsMigration,
 )
+from selfprivacy_api.utils import ReadUserData
+from selfprivacy_api.migrations.fix_nixos_config_branch import FixNixosConfigBranch
+from selfprivacy_api.migrations.create_tokens_json import CreateTokensJson
+from selfprivacy_api.migrations.migrate_to_selfprivacy_channel import (
+    MigrateToSelfprivacyChannel,
+)
+from selfprivacy_api.migrations.mount_volume import MountVolume
+from selfprivacy_api.migrations.providers import CreateProviderFields
+from selfprivacy_api.migrations.prepare_for_nixos_2211 import (
+    MigrateToSelfprivacyChannelFrom2205,
+)
+from selfprivacy_api.migrations.redis_tokens import LoadTokensToRedis

 migrations = [
-    WriteTokenToRedis(),
-    CheckForSystemRebuildJobs(),
+    FixNixosConfigBranch(),
+    CreateTokensJson(),
+    MigrateToSelfprivacyChannel(),
+    MountVolume(),
+    CheckForFailedBindsMigration(),
+    CreateProviderFields(),
+    MigrateToSelfprivacyChannelFrom2205(),
+    LoadTokensToRedis(),
 ]


@@ -26,7 +41,7 @@ def run_migrations():
     Go over all migrations. If they are not skipped in userdata file, run them
     if the migration needed.
     """
-    with ReadUserData(UserDataFiles.SECRETS) as data:
+    with ReadUserData() as data:
         if "api" not in data:
             skipped_migrations = []
         elif "skippedMigrations" not in data["api"]:
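The skip logic that `run_migrations` applies on top of this `migrations` list can be sketched in isolation. This is a simplified standalone version, assuming migrations are identified by plain name strings and that `api.skippedMigrations` handling works as the module docstring describes:

```python
def select_migrations(all_migrations: list[str], userdata: dict) -> list[str]:
    """Return the migrations to run, honouring api.skippedMigrations.

    A "DISABLE_ALL" entry in the skip list disables the
    migrations module entirely.
    """
    skipped = userdata.get("api", {}).get("skippedMigrations", [])
    if "DISABLE_ALL" in skipped:
        return []
    return [name for name in all_migrations if name not in skipped]


print(select_migrations(["mount_volume", "redis_tokens"],
                        {"api": {"skippedMigrations": ["redis_tokens"]}}))
```

Each surviving migration is then asked `is_migration_needed()` before `migrate()` is actually invoked.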
@@ -0,0 +1,48 @@
from selfprivacy_api.jobs import JobStatus, Jobs

from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import WriteUserData


class CheckForFailedBindsMigration(Migration):
    """If the binds migration failed, reset it so it can be retried."""

    def get_migration_name(self):
        return "check_for_failed_binds_migration"

    def get_migration_description(self):
        return "If binds migration failed, try again."

    def is_migration_needed(self):
        try:
            jobs = Jobs.get_jobs()
            # If there is a job with type_id "migrations.migrate_to_binds" and status is not "FINISHED",
            # then migration is needed and job is deleted
            for job in jobs:
                if (
                    job.type_id == "migrations.migrate_to_binds"
                    and job.status != JobStatus.FINISHED
                ):
                    return True
            return False
        except Exception as e:
            print(e)
            return False

    def migrate(self):
        # Remove unfinished binds migration jobs
        # and reset the useBinds flag in userdata.json
        try:
            jobs = Jobs.get_jobs()
            for job in jobs:
                if (
                    job.type_id == "migrations.migrate_to_binds"
                    and job.status != JobStatus.FINISHED
                ):
                    Jobs.remove(job)
            with WriteUserData() as userdata:
                userdata["useBinds"] = False
            print("Done")
        except Exception as e:
            print(e)
            print("Error resetting binds migration")
@@ -1,47 +0,0 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.jobs import JobStatus, Jobs


class CheckForSystemRebuildJobs(Migration):
    """Check if there are unfinished system rebuild jobs and finish them"""

    def get_migration_name(self):
        return "check_for_system_rebuild_jobs"

    def get_migration_description(self):
        return "Check if there are unfinished system rebuild jobs and finish them"

    def is_migration_needed(self):
        # Check if there are any unfinished system rebuild jobs
        for job in Jobs.get_jobs():
            if (
                job.type_id
                in [
                    "system.nixos.rebuild",
                    "system.nixos.upgrade",
                ]
            ) and job.status in [
                JobStatus.CREATED,
                JobStatus.RUNNING,
            ]:
                return True

    def migrate(self):
        # As the API is restarted, we assume that the jobs are finished
        for job in Jobs.get_jobs():
            if (
                job.type_id
                in [
                    "system.nixos.rebuild",
                    "system.nixos.upgrade",
                ]
            ) and job.status in [
                JobStatus.CREATED,
                JobStatus.RUNNING,
            ]:
                Jobs.update(
                    job=job,
                    status=JobStatus.FINISHED,
                    result="System rebuilt.",
                    progress=100,
                )
@@ -0,0 +1,58 @@
from datetime import datetime
import os
import json
from pathlib import Path

from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import TOKENS_FILE, ReadUserData


class CreateTokensJson(Migration):
    def get_migration_name(self):
        return "create_tokens_json"

    def get_migration_description(self):
        return """Selfprivacy API used a single token in userdata.json for authentication.
        This migration creates a new tokens.json file with the old token in it.
        This migration runs if the tokens.json file does not exist.
        Old token is located at ["api"]["token"] in userdata.json.
        tokens.json path is declared in TOKENS_FILE imported from utils.py
        tokens.json must have the following format:
        {
            "tokens": [
                {
                    "token": "token_string",
                    "name": "Master Token",
                    "date": "current date from str(datetime.now())",
                }
            ]
        }
        tokens.json must have 0600 permissions.
        """

    def is_migration_needed(self):
        return not os.path.exists(TOKENS_FILE)

    def migrate(self):
        try:
            print(f"Creating tokens.json file at {TOKENS_FILE}")
            with ReadUserData() as userdata:
                token = userdata["api"]["token"]
            # Touch tokens.json with 0600 permissions
            Path(TOKENS_FILE).touch(mode=0o600)
            # Write token to tokens.json
            structure = {
                "tokens": [
                    {
                        "token": token,
                        "name": "primary_token",
                        "date": str(datetime.now()),
                    }
                ]
            }
            with open(TOKENS_FILE, "w", encoding="utf-8") as tokens:
                json.dump(structure, tokens, indent=4)
            print("Done")
        except Exception as e:
            print(e)
            print("Error creating tokens.json")
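The create-then-dump pattern in `migrate` above (touch the file with restrictive permissions before writing the secret into it) can be exercised in isolation. A sketch, assuming a temporary path in place of `TOKENS_FILE`:

```python
import json
import os
import tempfile
from pathlib import Path


def write_secret_json(path: str, structure: dict) -> None:
    """Create the file with 0600 permissions first, then write the JSON
    body, so the secret is never readable by group/other, even briefly."""
    Path(path).touch(mode=0o600)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(structure, f, indent=4)


tmp = os.path.join(tempfile.mkdtemp(), "tokens.json")
write_secret_json(tmp, {"tokens": []})
print(oct(os.stat(tmp).st_mode & 0o777))
```

Because `touch(mode=0o600)` grants no group/other bits to begin with, the umask can only narrow the mode further, never widen it.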
@@ -0,0 +1,57 @@
import os
import subprocess

from selfprivacy_api.migrations.migration import Migration


class FixNixosConfigBranch(Migration):
    def get_migration_name(self):
        return "fix_nixos_config_branch"

    def get_migration_description(self):
        return """Mobile SelfPrivacy app introduced a bug in version 0.4.0.
        New servers were initialized with a rolling-testing nixos config branch.
        This was fixed in app version 0.4.2, but existing servers were not updated.
        This migration fixes this by changing the nixos config branch to master.
        """

    def is_migration_needed(self):
        """Check the current branch of /etc/nixos and return True if it is rolling-testing"""
        current_working_directory = os.getcwd()
        try:
            os.chdir("/etc/nixos")
            nixos_config_branch = subprocess.check_output(
                ["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
            )
            os.chdir(current_working_directory)
            return nixos_config_branch.decode("utf-8").strip() == "rolling-testing"
        except subprocess.CalledProcessError:
            os.chdir(current_working_directory)
            return False

    def migrate(self):
        """Affected server pulled the config with the --single-branch flag.
        Git config remote.origin.fetch has to be changed, so all branches will be fetched.
        Then, fetch all branches, pull and switch to master branch.
        """
        print("Fixing Nixos config branch")
        current_working_directory = os.getcwd()
        try:
            os.chdir("/etc/nixos")

            subprocess.check_output(
                [
                    "git",
                    "config",
                    "remote.origin.fetch",
                    "+refs/heads/*:refs/remotes/origin/*",
                ]
            )
            subprocess.check_output(["git", "fetch", "--all"])
            subprocess.check_output(["git", "pull"])
            subprocess.check_output(["git", "checkout", "master"])
            os.chdir(current_working_directory)
            print("Done")
        except subprocess.CalledProcessError:
            os.chdir(current_working_directory)
            print("Error")
@@ -0,0 +1,49 @@
import os
import subprocess

from selfprivacy_api.migrations.migration import Migration


class MigrateToSelfprivacyChannel(Migration):
    """Migrate to selfprivacy Nix channel."""

    def get_migration_name(self):
        return "migrate_to_selfprivacy_channel"

    def get_migration_description(self):
        return "Migrate to selfprivacy Nix channel."

    def is_migration_needed(self):
        try:
            output = subprocess.check_output(
                ["nix-channel", "--list"], start_new_session=True
            )
            output = output.decode("utf-8")
            first_line = output.split("\n", maxsplit=1)[0]
            return first_line.startswith("nixos") and (
                first_line.endswith("nixos-21.11") or first_line.endswith("nixos-21.05")
            )
        except subprocess.CalledProcessError:
            return False

    def migrate(self):
        # Change the channel and update them.
        # Also, go to /etc/nixos directory and make a git pull
        current_working_directory = os.getcwd()
        try:
            print("Changing channel")
            os.chdir("/etc/nixos")
            subprocess.check_output(
                [
                    "nix-channel",
                    "--add",
                    "https://channel.selfprivacy.org/nixos-selfpricacy",
                    "nixos",
                ]
            )
            subprocess.check_output(["nix-channel", "--update"])
            subprocess.check_output(["git", "pull"])
            os.chdir(current_working_directory)
        except subprocess.CalledProcessError:
            os.chdir(current_working_directory)
            print("Error")
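The channel-detection predicate in `is_migration_needed` above is pure string logic over the first line of `nix-channel --list` output, so it can be checked without a NixOS host. A sketch, with the sample output lines assumed (the channel-list format is `name url`):

```python
def needs_channel_migration(channel_list_output: str) -> bool:
    """True when the first listed channel is a stock nixos 21.05/21.11 channel."""
    first_line = channel_list_output.split("\n", maxsplit=1)[0]
    return first_line.startswith("nixos") and (
        first_line.endswith("nixos-21.11") or first_line.endswith("nixos-21.05")
    )


print(needs_channel_migration("nixos https://nixos.org/channels/nixos-21.11\n"))
print(needs_channel_migration("nixos https://channel.selfprivacy.org/nixos-selfpricacy\n"))
```

Servers already on the selfprivacy channel fall through the `endswith` checks and are left alone.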
@@ -0,0 +1,51 @@
import os
import subprocess

from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevices


class MountVolume(Migration):
    """Mount volume."""

    def get_migration_name(self):
        return "mount_volume"

    def get_migration_description(self):
        return "Mount volume if it is not mounted."

    def is_migration_needed(self):
        try:
            with ReadUserData() as userdata:
                return "volumes" not in userdata
        except Exception as e:
            print(e)
            return False

    def migrate(self):
        # Get info about existing volumes
        # Write info about volumes to userdata.json
        try:
            volumes = BlockDevices().get_block_devices()
            # If there is an unmounted volume sdb,
            # Write it to userdata.json
            is_there_a_volume = False
            for volume in volumes:
                if volume.name == "sdb":
                    is_there_a_volume = True
                    break
            with WriteUserData() as userdata:
                userdata["volumes"] = []
                if is_there_a_volume:
                    userdata["volumes"].append(
                        {
                            "device": "/dev/sdb",
                            "mountPoint": "/volumes/sdb",
                            "fsType": "ext4",
                        }
                    )
            print("Done")
        except Exception as e:
            print(e)
            print("Error mounting volume")
@@ -0,0 +1,58 @@
import os
import subprocess

from selfprivacy_api.migrations.migration import Migration


class MigrateToSelfprivacyChannelFrom2205(Migration):
    """Migrate to selfprivacy Nix channel.
    For some reason NixOS 22.05 servers initialized with the nixos channel instead of selfprivacy.
    This stops us from upgrading to NixOS 22.11
    """

    def get_migration_name(self):
        return "migrate_to_selfprivacy_channel_from_2205"

    def get_migration_description(self):
        return "Migrate to selfprivacy Nix channel from NixOS 22.05."

    def is_migration_needed(self):
        try:
            output = subprocess.check_output(
                ["nix-channel", "--list"], start_new_session=True
            )
            output = output.decode("utf-8")
            first_line = output.split("\n", maxsplit=1)[0]
            return first_line.startswith("nixos") and (
                first_line.endswith("nixos-22.05")
            )
        except subprocess.CalledProcessError:
            return False

    def migrate(self):
        # Change the channel and update them.
        # Also, go to /etc/nixos directory and make a git pull
        current_working_directory = os.getcwd()
        try:
            print("Changing channel")
            os.chdir("/etc/nixos")
            subprocess.check_output(
                [
                    "nix-channel",
                    "--add",
                    "https://channel.selfprivacy.org/nixos-selfpricacy",
                    "nixos",
                ]
            )
            subprocess.check_output(["nix-channel", "--update"])
            nixos_config_branch = subprocess.check_output(
                ["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
            )
            if nixos_config_branch.decode("utf-8").strip() == "api-redis":
                print("Also changing nixos-config branch from api-redis to master")
                subprocess.check_output(["git", "checkout", "master"])
            subprocess.check_output(["git", "pull"])
            os.chdir(current_working_directory)
        except subprocess.CalledProcessError:
            os.chdir(current_working_directory)
            print("Error")
@@ -0,0 +1,43 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import ReadUserData, WriteUserData


class CreateProviderFields(Migration):
    """Unhardcode providers"""

    def get_migration_name(self):
        return "create_provider_fields"

    def get_migration_description(self):
        return "Add DNS, backup and server provider fields to enable user to choose between different clouds and to make the deployment adapt to these preferences."

    def is_migration_needed(self):
        try:
            with ReadUserData() as userdata:
                return "dns" not in userdata
        except Exception as e:
            print(e)
            return False

    def migrate(self):
        # Write info about providers to userdata.json
        try:
            with WriteUserData() as userdata:
                userdata["dns"] = {
                    "provider": "CLOUDFLARE",
                    "apiKey": userdata["cloudflare"]["apiKey"],
                }
                userdata["server"] = {
                    "provider": "HETZNER",
                }
                userdata["backup"] = {
                    "provider": "BACKBLAZE",
                    "accountId": userdata["backblaze"]["accountId"],
                    "accountKey": userdata["backblaze"]["accountKey"],
                    "bucket": userdata["backblaze"]["bucket"],
                }

            print("Done")
        except Exception as e:
            print(e)
            print("Error migrating provider fields")
@@ -0,0 +1,48 @@
from selfprivacy_api.migrations.migration import Migration

from selfprivacy_api.repositories.tokens.json_tokens_repository import (
    JsonTokensRepository,
)
from selfprivacy_api.repositories.tokens.redis_tokens_repository import (
    RedisTokensRepository,
)
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
    AbstractTokensRepository,
)


class LoadTokensToRedis(Migration):
    """Load Json tokens into Redis"""

    def get_migration_name(self):
        return "load_tokens_to_redis"

    def get_migration_description(self):
        return "Loads access tokens and recovery keys from legacy json file into redis token storage"

    def is_repo_empty(self, repo: AbstractTokensRepository) -> bool:
        if repo.get_tokens() != []:
            return False
        if repo.get_recovery_key() is not None:
            return False
        return True

    def is_migration_needed(self):
        try:
            if not self.is_repo_empty(JsonTokensRepository()) and self.is_repo_empty(
                RedisTokensRepository()
            ):
                return True
        except Exception as e:
            print(e)
            return False

    def migrate(self):
        # Copy tokens from the json repository into redis
        try:
            RedisTokensRepository().clone(JsonTokensRepository())

            print("Done")
        except Exception as e:
            print(e)
            print("Error migrating access tokens from json to redis")
@@ -1,63 +0,0 @@
from datetime import datetime
from typing import Optional
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.models.tokens.token import Token

from selfprivacy_api.repositories.tokens.redis_tokens_repository import (
    RedisTokensRepository,
)
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
    AbstractTokensRepository,
)
from selfprivacy_api.utils import ReadUserData, UserDataFiles


class WriteTokenToRedis(Migration):
    """Load Json tokens into Redis"""

    def get_migration_name(self):
        return "write_token_to_redis"

    def get_migration_description(self):
        return "Loads the initial token into redis token storage"

    def is_repo_empty(self, repo: AbstractTokensRepository) -> bool:
        if repo.get_tokens() != []:
            return False
        return True

    def get_token_from_json(self) -> Optional[Token]:
        try:
            with ReadUserData(UserDataFiles.SECRETS) as userdata:
                return Token(
                    token=userdata["api"]["token"],
                    device_name="Initial device",
                    created_at=datetime.now(),
                )
        except Exception as e:
            print(e)
            return None

    def is_migration_needed(self):
        try:
            if self.get_token_from_json() is not None and self.is_repo_empty(
                RedisTokensRepository()
            ):
                return True
        except Exception as e:
            print(e)
            return False

    def migrate(self):
        # Store the initial token in redis
        try:
            token = self.get_token_from_json()
            if token is None:
                print("No token found in secrets.json")
                return
            RedisTokensRepository()._store_token(token)

            print("Done")
        except Exception as e:
            print(e)
            print("Error migrating access tokens from json to redis")
@@ -1,11 +0,0 @@
from pydantic import BaseModel


"""for storage in Redis"""


class BackupProviderModel(BaseModel):
    kind: str
    login: str
    key: str
    location: str
    repo_id: str  # for app usage, not for us
@@ -1,11 +0,0 @@
import datetime
from pydantic import BaseModel

from selfprivacy_api.graphql.common_types.backup import BackupReason


class Snapshot(BaseModel):
    id: str
    service_name: str
    created_at: datetime.datetime
    reason: BackupReason = BackupReason.EXPLICIT
@@ -1,24 +0,0 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel


class ServiceStatus(Enum):
    """Enum for service status"""

    ACTIVE = "ACTIVE"
    RELOADING = "RELOADING"
    INACTIVE = "INACTIVE"
    FAILED = "FAILED"
    ACTIVATING = "ACTIVATING"
    DEACTIVATING = "DEACTIVATING"
    OFF = "OFF"


class ServiceDnsRecord(BaseModel):
    type: str
    name: str
    content: str
    ttl: int
    display_name: str
    priority: Optional[int] = None
@@ -22,7 +22,7 @@ class NewDeviceKey(BaseModel):

     def is_valid(self) -> bool:
         """
-        Check if key is valid.
+        Check if the recovery key is valid.
         """
         if is_past(self.expires_at):
             return False
@@ -30,7 +30,7 @@ class NewDeviceKey(BaseModel):

     def as_mnemonic(self) -> str:
         """
-        Get the key as a mnemonic.
+        Get the recovery key as a mnemonic.
         """
         return Mnemonic(language="english").to_mnemonic(bytes.fromhex(self.key))

@@ -47,7 +47,6 @@ class RecoveryKey(BaseModel):
     ) -> "RecoveryKey":
         """
         Factory to generate a random token.
-        If passed naive time as expiration, assumes utc
         """
         creation_date = datetime.now(timezone.utc)
         if expiration is not None:
@@ -0,0 +1,8 @@
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
    AbstractTokensRepository,
)
from selfprivacy_api.repositories.tokens.json_tokens_repository import (
    JsonTokensRepository,
)

repository = JsonTokensRepository()
@@ -0,0 +1,153 @@
"""
temporary legacy
"""
from typing import Optional
from datetime import datetime, timezone

from selfprivacy_api.utils import UserDataFiles, WriteUserData, ReadUserData
from selfprivacy_api.models.tokens.token import Token
from selfprivacy_api.models.tokens.recovery_key import RecoveryKey
from selfprivacy_api.models.tokens.new_device_key import NewDeviceKey
from selfprivacy_api.repositories.tokens.exceptions import (
    TokenNotFound,
)
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
    AbstractTokensRepository,
)


DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"


class JsonTokensRepository(AbstractTokensRepository):
    def get_tokens(self) -> list[Token]:
        """Get the tokens"""
        tokens_list = []

        with ReadUserData(UserDataFiles.TOKENS) as tokens_file:
            for userdata_token in tokens_file["tokens"]:
                tokens_list.append(
                    Token(
                        token=userdata_token["token"],
                        device_name=userdata_token["name"],
                        created_at=userdata_token["date"],
                    )
                )

        return tokens_list

    def _store_token(self, new_token: Token):
        """Store a token directly"""
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            tokens_file["tokens"].append(
                {
                    "token": new_token.token,
                    "name": new_token.device_name,
                    "date": new_token.created_at.strftime(DATETIME_FORMAT),
                }
            )

    def delete_token(self, input_token: Token) -> None:
        """Delete the token"""
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            for userdata_token in tokens_file["tokens"]:
                if userdata_token["token"] == input_token.token:
                    tokens_file["tokens"].remove(userdata_token)
                    return

            raise TokenNotFound("Token not found!")

    def __key_date_from_str(self, date_string: str) -> datetime:
        if date_string is None or date_string == "":
            return None
        # we assume that we store dates in json as naive utc
        utc_no_tz = datetime.fromisoformat(date_string)
        utc_with_tz = utc_no_tz.replace(tzinfo=timezone.utc)
        return utc_with_tz

    def __date_from_tokens_file(
        self, tokens_file: object, tokenfield: str, datefield: str
    ):
        date_string = tokens_file[tokenfield].get(datefield)
        return self.__key_date_from_str(date_string)

    def get_recovery_key(self) -> Optional[RecoveryKey]:
        """Get the recovery key"""
        with ReadUserData(UserDataFiles.TOKENS) as tokens_file:

            if (
                "recovery_token" not in tokens_file
                or tokens_file["recovery_token"] is None
            ):
                return

            recovery_key = RecoveryKey(
                key=tokens_file["recovery_token"].get("token"),
                created_at=self.__date_from_tokens_file(
                    tokens_file, "recovery_token", "date"
                ),
                expires_at=self.__date_from_tokens_file(
                    tokens_file, "recovery_token", "expiration"
                ),
                uses_left=tokens_file["recovery_token"].get("uses_left"),
            )

            return recovery_key

    def _store_recovery_key(self, recovery_key: RecoveryKey) -> None:
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            key_expiration: Optional[str] = None
            if recovery_key.expires_at is not None:
                key_expiration = recovery_key.expires_at.strftime(DATETIME_FORMAT)
            tokens_file["recovery_token"] = {
                "token": recovery_key.key,
                "date": recovery_key.created_at.strftime(DATETIME_FORMAT),
                "expiration": key_expiration,
                "uses_left": recovery_key.uses_left,
            }

    def _decrement_recovery_token(self):
        """Decrement recovery key use count by one"""
        if self.is_recovery_key_valid():
            with WriteUserData(UserDataFiles.TOKENS) as tokens:
                if tokens["recovery_token"]["uses_left"] is not None:
                    tokens["recovery_token"]["uses_left"] -= 1

    def _delete_recovery_key(self) -> None:
        """Delete the recovery key"""
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            if "recovery_token" in tokens_file:
                del tokens_file["recovery_token"]
            return

    def _store_new_device_key(self, new_device_key: NewDeviceKey) -> None:
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            tokens_file["new_device"] = {
                "token": new_device_key.key,
                "date": new_device_key.created_at.strftime(DATETIME_FORMAT),
                "expiration": new_device_key.expires_at.strftime(DATETIME_FORMAT),
            }

    def delete_new_device_key(self) -> None:
        """Delete the new device key"""
        with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
            if "new_device" in tokens_file:
                del tokens_file["new_device"]
            return

    def _get_stored_new_device_key(self) -> Optional[NewDeviceKey]:
        """Retrieves new device key that is already stored."""
        with ReadUserData(UserDataFiles.TOKENS) as tokens_file:
            if "new_device" not in tokens_file or tokens_file["new_device"] is None:
                return

            new_device_key = NewDeviceKey(
                key=tokens_file["new_device"]["token"],
                created_at=self.__date_from_tokens_file(
                    tokens_file, "new_device", "date"
                ),
                expires_at=self.__date_from_tokens_file(
                    tokens_file, "new_device", "expiration"
                ),
            )
            return new_device_key
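The date convention in `__key_date_from_str` above (store naive ISO strings in tokens.json, reattach UTC on read) round-trips like this. A standalone sketch of just that helper:

```python
from datetime import datetime, timezone

DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"


def key_date_from_str(date_string):
    """Parse a naive ISO date string from tokens.json and mark it as UTC."""
    if date_string is None or date_string == "":
        return None
    utc_no_tz = datetime.fromisoformat(date_string)
    return utc_no_tz.replace(tzinfo=timezone.utc)


stored = datetime(2023, 1, 2, 3, 4, 5, 6).strftime(DATETIME_FORMAT)
parsed = key_date_from_str(stored)
print(parsed.isoformat())
```

Using `replace(tzinfo=...)` rather than `astimezone(...)` is deliberate: the stored wall-clock value is already UTC, so no offset conversion should be applied.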
@ -1,10 +1,9 @@
|
||||||
"""
|
"""
|
||||||
Token repository using Redis as backend.
|
Token repository using Redis as backend.
|
||||||
"""
|
"""
|
||||||
from typing import Any, Optional
|
from typing import Optional
|
||||||
from datetime import datetime
|
from datetime import datetime, timezone
|
||||||
from hashlib import md5
|
from hashlib import md5
|
||||||
from datetime import timezone
|
|
||||||
|
|
||||||
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
|
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
|
||||||
AbstractTokensRepository,
|
AbstractTokensRepository,
|
||||||
|
@@ -30,15 +29,15 @@ class RedisTokensRepository(AbstractTokensRepository):
 
     @staticmethod
     def token_key_for_device(device_name: str):
-        md5_hash = md5(usedforsecurity=False)
-        md5_hash.update(bytes(device_name, "utf-8"))
-        digest = md5_hash.hexdigest()
+        hash = md5()
+        hash.update(bytes(device_name, "utf-8"))
+        digest = hash.hexdigest()
         return TOKENS_PREFIX + digest
 
     def get_tokens(self) -> list[Token]:
         """Get the tokens"""
         redis = self.connection
-        token_keys: list[str] = redis.keys(TOKENS_PREFIX + "*")  # type: ignore
+        token_keys = redis.keys(TOKENS_PREFIX + "*")
         tokens = []
         for key in token_keys:
             token = self._token_from_hash(key)
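Both sides of the hunk above derive the per-device Redis key the same way: an md5 hexdigest of the device name appended to a key prefix. A standalone sketch of that derivation (the `TOKENS_PREFIX` value here is a placeholder, not the project's actual constant):

```python
from hashlib import md5

TOKENS_PREFIX = "token_repo:tokens:"  # placeholder prefix, for illustration only


def token_key_for_device(device_name: str) -> str:
    # Hashing keeps the Redis key fixed-length and free of unsafe
    # characters, whatever the user named the device.
    digest = md5(bytes(device_name, "utf-8")).hexdigest()
    return TOKENS_PREFIX + digest
```

Because the digest is deterministic, looking a device up later only needs its name, not a stored key.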
@@ -46,15 +45,14 @@ class RedisTokensRepository(AbstractTokensRepository):
             tokens.append(token)
         return tokens
 
-    def _discover_token_key(self, input_token: Token) -> Optional[str]:
+    def _discover_token_key(self, input_token: Token) -> str:
         """brute-force searching for tokens, for robust deletion"""
         redis = self.connection
-        token_keys: list[str] = redis.keys(TOKENS_PREFIX + "*")  # type: ignore
+        token_keys = redis.keys(TOKENS_PREFIX + "*")
         for key in token_keys:
             token = self._token_from_hash(key)
             if token == input_token:
                 return key
-        return None
 
     def delete_token(self, input_token: Token) -> None:
         """Delete the token"""
@@ -113,28 +111,26 @@ class RedisTokensRepository(AbstractTokensRepository):
         return self._new_device_key_from_hash(NEW_DEVICE_KEY_REDIS_KEY)
 
     @staticmethod
-    def _is_date_key(key: str) -> bool:
+    def _is_date_key(key: str):
         return key in [
             "created_at",
             "expires_at",
         ]
 
     @staticmethod
-    def _prepare_model_dict(model_dict: dict[str, Any]) -> None:
-        date_keys = [
-            key for key in model_dict.keys() if RedisTokensRepository._is_date_key(key)
-        ]
+    def _prepare_model_dict(d: dict):
+        date_keys = [key for key in d.keys() if RedisTokensRepository._is_date_key(key)]
         for date in date_keys:
-            if model_dict[date] != "None":
-                model_dict[date] = datetime.fromisoformat(model_dict[date])
-        for key in model_dict.keys():
-            if model_dict[key] == "None":
-                model_dict[key] = None
+            if d[date] != "None":
+                d[date] = datetime.fromisoformat(d[date])
+        for key in d.keys():
+            if d[key] == "None":
+                d[key] = None
 
-    def _model_dict_from_hash(self, redis_key: str) -> Optional[dict[str, Any]]:
+    def _model_dict_from_hash(self, redis_key: str) -> Optional[dict]:
         redis = self.connection
         if redis.exists(redis_key):
-            token_dict: dict[str, Any] = redis.hgetall(redis_key)  # type: ignore
+            token_dict = redis.hgetall(redis_key)
             RedisTokensRepository._prepare_model_dict(token_dict)
             return token_dict
         return None
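`_prepare_model_dict` in the hunk above exists because Redis hashes store every field as a string: datetimes come back as ISO-8601 text, and empty fields come back as the literal string `"None"`. A minimal standalone version of that restoration step (same logic, simplified names):

```python
from datetime import datetime

DATE_KEYS = ("created_at", "expires_at")


def prepare_model_dict(model_dict: dict) -> None:
    # Mutates in place: ISO strings in date fields become datetimes,
    # and the "None" sentinel becomes a real None.
    for key, value in list(model_dict.items()):
        if value == "None":
            model_dict[key] = None
        elif key in DATE_KEYS:
            model_dict[key] = datetime.fromisoformat(value)
```

Writing goes the other way: the repository serializes every value with `str()` before `hset`, which is what produces the `"None"` sentinel in the first place.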
@@ -150,7 +146,6 @@ class RedisTokensRepository(AbstractTokensRepository):
         if token is not None:
             token.created_at = token.created_at.replace(tzinfo=None)
         return token
-        return None
 
     def _recovery_key_from_hash(self, redis_key: str) -> Optional[RecoveryKey]:
         return self._hash_as_model(redis_key, RecoveryKey)
@@ -0,0 +1,125 @@
+from datetime import datetime
+from typing import Optional
+from fastapi import APIRouter, Depends, HTTPException
+from pydantic import BaseModel
+from selfprivacy_api.actions.api_tokens import (
+    CannotDeleteCallerException,
+    InvalidExpirationDate,
+    InvalidUsesLeft,
+    NotFoundException,
+    delete_api_token,
+    refresh_api_token,
+    get_api_recovery_token_status,
+    get_api_tokens_with_caller_flag,
+    get_new_api_recovery_key,
+    use_mnemonic_recovery_token,
+    delete_new_device_auth_token,
+    get_new_device_auth_token,
+    use_new_device_auth_token,
+)
+
+from selfprivacy_api.dependencies import TokenHeader, get_token_header
+
+
+router = APIRouter(
+    prefix="/auth",
+    tags=["auth"],
+    responses={404: {"description": "Not found"}},
+)
+
+
+@router.get("/tokens")
+async def rest_get_tokens(auth_token: TokenHeader = Depends(get_token_header)):
+    """Get the tokens info"""
+    return get_api_tokens_with_caller_flag(auth_token.token)
+
+
+class DeleteTokenInput(BaseModel):
+    """Delete token input"""
+
+    token_name: str
+
+
+@router.delete("/tokens")
+async def rest_delete_tokens(
+    token: DeleteTokenInput, auth_token: TokenHeader = Depends(get_token_header)
+):
+    """Delete the tokens"""
+    try:
+        delete_api_token(auth_token.token, token.token_name)
+    except NotFoundException:
+        raise HTTPException(status_code=404, detail="Token not found")
+    except CannotDeleteCallerException:
+        raise HTTPException(status_code=400, detail="Cannot delete caller's token")
+    return {"message": "Token deleted"}
+
+
+@router.post("/tokens")
+async def rest_refresh_token(auth_token: TokenHeader = Depends(get_token_header)):
+    """Refresh the token"""
+    try:
+        new_token = refresh_api_token(auth_token.token)
+    except NotFoundException:
+        raise HTTPException(status_code=404, detail="Token not found")
+    return {"token": new_token}
+
+
+@router.get("/recovery_token")
+async def rest_get_recovery_token_status(
+    auth_token: TokenHeader = Depends(get_token_header),
+):
+    return get_api_recovery_token_status()
+
+
+class CreateRecoveryTokenInput(BaseModel):
+    expiration: Optional[datetime] = None
+    uses: Optional[int] = None
+
+
+@router.post("/recovery_token")
+async def rest_create_recovery_token(
+    limits: CreateRecoveryTokenInput = CreateRecoveryTokenInput(),
+    auth_token: TokenHeader = Depends(get_token_header),
+):
+    try:
+        token = get_new_api_recovery_key(limits.expiration, limits.uses)
+    except InvalidExpirationDate as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except InvalidUsesLeft as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    return {"token": token}
+
+
+class UseTokenInput(BaseModel):
+    token: str
+    device: str
+
+
+@router.post("/recovery_token/use")
+async def rest_use_recovery_token(input: UseTokenInput):
+    token = use_mnemonic_recovery_token(input.token, input.device)
+    if token is None:
+        raise HTTPException(status_code=404, detail="Token not found")
+    return {"token": token}
+
+
+@router.post("/new_device")
+async def rest_new_device(auth_token: TokenHeader = Depends(get_token_header)):
+    token = get_new_device_auth_token()
+    return {"token": token}
+
+
+@router.delete("/new_device")
+async def rest_delete_new_device_token(
+    auth_token: TokenHeader = Depends(get_token_header),
+):
+    delete_new_device_auth_token()
+    return {"token": None}
+
+
+@router.post("/new_device/authorize")
+async def rest_new_device_authorize(input: UseTokenInput):
+    token = use_new_device_auth_token(input.token, input.device)
+    if token is None:
+        raise HTTPException(status_code=404, detail="Token not found")
+    return {"message": "Device authorized", "token": token}
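The recovery-token endpoints above take optional `expiration` and `uses` limits and reject invalid values in the actions layer. The validity rule those limits imply can be sketched like this (an illustration, not the project's actual implementation):

```python
from datetime import datetime
from typing import Optional


def recovery_key_valid(
    expires_at: Optional[datetime], uses_left: Optional[int], now: datetime
) -> bool:
    # A key with a uses counter is spent once it hits zero;
    # a key with an expiration date dies once the date passes.
    # A key with neither limit stays valid indefinitely.
    if uses_left is not None and uses_left <= 0:
        return False
    if expires_at is not None and expires_at < now:
        return False
    return True
```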
@@ -0,0 +1,374 @@
+"""Basic services legacy api"""
+import base64
+from typing import Optional
+from fastapi import APIRouter, Depends, HTTPException
+from pydantic import BaseModel
+from selfprivacy_api.actions.ssh import (
+    InvalidPublicKey,
+    KeyAlreadyExists,
+    KeyNotFound,
+    create_ssh_key,
+    enable_ssh,
+    get_ssh_settings,
+    remove_ssh_key,
+    set_ssh_settings,
+)
+from selfprivacy_api.actions.users import UserNotFound, get_user_by_username
+
+from selfprivacy_api.dependencies import get_token_header
+from selfprivacy_api.restic_controller import ResticController, ResticStates
+from selfprivacy_api.restic_controller import tasks as restic_tasks
+from selfprivacy_api.services.bitwarden import Bitwarden
+from selfprivacy_api.services.gitea import Gitea
+from selfprivacy_api.services.mailserver import MailServer
+from selfprivacy_api.services.nextcloud import Nextcloud
+from selfprivacy_api.services.ocserv import Ocserv
+from selfprivacy_api.services.pleroma import Pleroma
+from selfprivacy_api.services.service import ServiceStatus
+from selfprivacy_api.utils import WriteUserData, get_dkim_key, get_domain
+
+router = APIRouter(
+    prefix="/services",
+    tags=["services"],
+    dependencies=[Depends(get_token_header)],
+    responses={404: {"description": "Not found"}},
+)
+
+
+def service_status_to_return_code(status: ServiceStatus):
+    """Converts service status object to return code for
+    compatibility with legacy api"""
+    if status == ServiceStatus.ACTIVE:
+        return 0
+    elif status == ServiceStatus.FAILED:
+        return 1
+    elif status == ServiceStatus.INACTIVE:
+        return 3
+    elif status == ServiceStatus.OFF:
+        return 4
+    else:
+        return 2
+
+
+@router.get("/status")
+async def get_status():
+    """Get the status of the services"""
+    mail_status = MailServer.get_status()
+    bitwarden_status = Bitwarden.get_status()
+    gitea_status = Gitea.get_status()
+    nextcloud_status = Nextcloud.get_status()
+    ocserv_status = Ocserv.get_status()
+    pleroma_status = Pleroma.get_status()
+
+    return {
+        "imap": service_status_to_return_code(mail_status),
+        "smtp": service_status_to_return_code(mail_status),
+        "http": 0,
+        "bitwarden": service_status_to_return_code(bitwarden_status),
+        "gitea": service_status_to_return_code(gitea_status),
+        "nextcloud": service_status_to_return_code(nextcloud_status),
+        "ocserv": service_status_to_return_code(ocserv_status),
+        "pleroma": service_status_to_return_code(pleroma_status),
+    }
+
+
+@router.post("/bitwarden/enable")
+async def enable_bitwarden():
+    """Enable Bitwarden"""
+    Bitwarden.enable()
+    return {
+        "status": 0,
+        "message": "Bitwarden enabled",
+    }
+
+
+@router.post("/bitwarden/disable")
+async def disable_bitwarden():
+    """Disable Bitwarden"""
+    Bitwarden.disable()
+    return {
+        "status": 0,
+        "message": "Bitwarden disabled",
+    }
+
+
+@router.post("/gitea/enable")
+async def enable_gitea():
+    """Enable Gitea"""
+    Gitea.enable()
+    return {
+        "status": 0,
+        "message": "Gitea enabled",
+    }
+
+
+@router.post("/gitea/disable")
+async def disable_gitea():
+    """Disable Gitea"""
+    Gitea.disable()
+    return {
+        "status": 0,
+        "message": "Gitea disabled",
+    }
+
+
+@router.get("/mailserver/dkim")
+async def get_mailserver_dkim():
+    """Get the DKIM record for the mailserver"""
+    domain = get_domain()
+
+    dkim = get_dkim_key(domain, parse=False)
+    if dkim is None:
+        raise HTTPException(status_code=404, detail="DKIM record not found")
+    dkim = base64.b64encode(dkim.encode("utf-8")).decode("utf-8")
+    return dkim
+
+
+@router.post("/nextcloud/enable")
+async def enable_nextcloud():
+    """Enable Nextcloud"""
+    Nextcloud.enable()
+    return {
+        "status": 0,
+        "message": "Nextcloud enabled",
+    }
+
+
+@router.post("/nextcloud/disable")
+async def disable_nextcloud():
+    """Disable Nextcloud"""
+    Nextcloud.disable()
+    return {
+        "status": 0,
+        "message": "Nextcloud disabled",
+    }
+
+
+@router.post("/ocserv/enable")
+async def enable_ocserv():
+    """Enable Ocserv"""
+    Ocserv.enable()
+    return {
+        "status": 0,
+        "message": "Ocserv enabled",
+    }
+
+
+@router.post("/ocserv/disable")
+async def disable_ocserv():
+    """Disable Ocserv"""
+    Ocserv.disable()
+    return {
+        "status": 0,
+        "message": "Ocserv disabled",
+    }
+
+
+@router.post("/pleroma/enable")
+async def enable_pleroma():
+    """Enable Pleroma"""
+    Pleroma.enable()
+    return {
+        "status": 0,
+        "message": "Pleroma enabled",
+    }
+
+
+@router.post("/pleroma/disable")
+async def disable_pleroma():
+    """Disable Pleroma"""
+    Pleroma.disable()
+    return {
+        "status": 0,
+        "message": "Pleroma disabled",
+    }
+
+
+@router.get("/restic/backup/list")
+async def get_restic_backup_list():
+    restic = ResticController()
+    return restic.snapshot_list
+
+
+@router.put("/restic/backup/create")
+async def create_restic_backup():
+    restic = ResticController()
+    if restic.state is ResticStates.NO_KEY:
+        raise HTTPException(status_code=400, detail="Backup key not provided")
+    if restic.state is ResticStates.INITIALIZING:
+        raise HTTPException(status_code=400, detail="Backup is initializing")
+    if restic.state is ResticStates.BACKING_UP:
+        raise HTTPException(status_code=409, detail="Backup is already running")
+    restic_tasks.start_backup()
+    return {
+        "status": 0,
+        "message": "Backup creation has started",
+    }
+
+
+@router.get("/restic/backup/status")
+async def get_restic_backup_status():
+    restic = ResticController()
+
+    return {
+        "status": restic.state.name,
+        "progress": restic.progress,
+        "error_message": restic.error_message,
+    }
+
+
+@router.get("/restic/backup/reload")
+async def reload_restic_backup():
+    restic_tasks.load_snapshots()
+    return {
+        "status": 0,
+        "message": "Snapshots reload started",
+    }
+
+
+class BackupRestoreInput(BaseModel):
+    backupId: str
+
+
+@router.put("/restic/backup/restore")
+async def restore_restic_backup(backup: BackupRestoreInput):
+    restic = ResticController()
+    if restic.state is ResticStates.NO_KEY:
+        raise HTTPException(status_code=400, detail="Backup key not provided")
+    if restic.state is ResticStates.NOT_INITIALIZED:
+        raise HTTPException(
+            status_code=400, detail="Backups repository is not initialized"
+        )
+    if restic.state is ResticStates.BACKING_UP:
+        raise HTTPException(status_code=409, detail="Backup is already running")
+    if restic.state is ResticStates.INITIALIZING:
+        raise HTTPException(status_code=400, detail="Repository is initializing")
+    if restic.state is ResticStates.RESTORING:
+        raise HTTPException(status_code=409, detail="Restore is already running")
+
+    for backup_item in restic.snapshot_list:
+        if backup_item["short_id"] == backup.backupId:
+            restic_tasks.restore_from_backup(backup.backupId)
+            return {
+                "status": 0,
+                "message": "Backup restoration procedure started",
+            }
+
+    raise HTTPException(status_code=404, detail="Backup not found")
+
+
+class BackupConfigInput(BaseModel):
+    accountId: str
+    accountKey: str
+    bucket: str
+
+
+@router.put("/restic/backblaze/config")
+async def set_backblaze_config(backup_config: BackupConfigInput):
+    with WriteUserData() as data:
+        if "backup" not in data:
+            data["backup"] = {}
+        data["backup"]["provider"] = "BACKBLAZE"
+        data["backup"]["accountId"] = backup_config.accountId
+        data["backup"]["accountKey"] = backup_config.accountKey
+        data["backup"]["bucket"] = backup_config.bucket
+
+    restic_tasks.update_keys_from_userdata()
+
+    return "New backup settings saved"
+
+
+@router.post("/ssh/enable")
+async def rest_enable_ssh():
+    """Enable SSH"""
+    enable_ssh()
+    return {
+        "status": 0,
+        "message": "SSH enabled",
+    }
+
+
+@router.get("/ssh")
+async def rest_get_ssh():
+    """Get the SSH configuration"""
+    settings = get_ssh_settings()
+    return {
+        "enable": settings.enable,
+        "passwordAuthentication": settings.passwordAuthentication,
+    }
+
+
+class SshConfigInput(BaseModel):
+    enable: Optional[bool] = None
+    passwordAuthentication: Optional[bool] = None
+
+
+@router.put("/ssh")
+async def rest_set_ssh(ssh_config: SshConfigInput):
+    """Set the SSH configuration"""
+    set_ssh_settings(ssh_config.enable, ssh_config.passwordAuthentication)
+
+    return "SSH settings changed"
+
+
+class SshKeyInput(BaseModel):
+    public_key: str
+
+
+@router.put("/ssh/key/send", status_code=201)
+async def rest_send_ssh_key(input: SshKeyInput):
+    """Send the SSH key"""
+    try:
+        create_ssh_key("root", input.public_key)
+    except KeyAlreadyExists as error:
+        raise HTTPException(status_code=409, detail="Key already exists") from error
+    except InvalidPublicKey as error:
+        raise HTTPException(
+            status_code=400,
+            detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
+        ) from error
+
+    return {
+        "status": 0,
+        "message": "SSH key sent",
+    }
+
+
+@router.get("/ssh/keys/{username}")
+async def rest_get_ssh_keys(username: str):
+    """Get the SSH keys for a user"""
+    user = get_user_by_username(username)
+    if user is None:
+        raise HTTPException(status_code=404, detail="User not found")
+
+    return user.ssh_keys
+
+
+@router.post("/ssh/keys/{username}", status_code=201)
+async def rest_add_ssh_key(username: str, input: SshKeyInput):
+    try:
+        create_ssh_key(username, input.public_key)
+    except KeyAlreadyExists as error:
+        raise HTTPException(status_code=409, detail="Key already exists") from error
+    except InvalidPublicKey as error:
+        raise HTTPException(
+            status_code=400,
+            detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
+        ) from error
+    except UserNotFound as error:
+        raise HTTPException(status_code=404, detail="User not found") from error
+
+    return {
+        "message": "New SSH key successfully written",
+    }
+
+
+@router.delete("/ssh/keys/{username}")
+async def rest_delete_ssh_key(username: str, input: SshKeyInput):
+    try:
+        remove_ssh_key(username, input.public_key)
+    except KeyNotFound as error:
+        raise HTTPException(status_code=404, detail="Key not found") from error
+    except UserNotFound as error:
+        raise HTTPException(status_code=404, detail="User not found") from error
+    return {"message": "SSH key deleted"}
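`service_status_to_return_code` above encodes the legacy API's numeric convention: 0 active, 1 failed, 3 inactive, 4 off, 2 for anything else. The same mapping written table-driven, with a stand-in enum since the `selfprivacy_api` classes aren't importable here (member names are illustrative):

```python
from enum import Enum


class ServiceStatus(Enum):
    # Stand-in for selfprivacy_api.services.service.ServiceStatus.
    ACTIVE = "ACTIVE"
    FAILED = "FAILED"
    INACTIVE = "INACTIVE"
    OFF = "OFF"
    RELOADING = "RELOADING"  # illustrative "other" state


# Same mapping as the if/elif chain in the router.
LEGACY_CODES = {
    ServiceStatus.ACTIVE: 0,
    ServiceStatus.FAILED: 1,
    ServiceStatus.INACTIVE: 3,
    ServiceStatus.OFF: 4,
}


def service_status_to_return_code(status: ServiceStatus) -> int:
    # Any state without an explicit code falls back to 2.
    return LEGACY_CODES.get(status, 2)
```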
@@ -0,0 +1,105 @@
+from typing import Optional
+from fastapi import APIRouter, Body, Depends, HTTPException
+from pydantic import BaseModel
+
+from selfprivacy_api.dependencies import get_token_header
+
+import selfprivacy_api.actions.system as system_actions
+
+router = APIRouter(
+    prefix="/system",
+    tags=["system"],
+    dependencies=[Depends(get_token_header)],
+    responses={404: {"description": "Not found"}},
+)
+
+
+@router.get("/configuration/timezone")
+async def get_timezone():
+    """Get the timezone of the server"""
+    return system_actions.get_timezone()
+
+
+class ChangeTimezoneRequestBody(BaseModel):
+    """Change the timezone of the server"""
+
+    timezone: str
+
+
+@router.put("/configuration/timezone")
+async def change_timezone(timezone: ChangeTimezoneRequestBody):
+    """Change the timezone of the server"""
+    try:
+        system_actions.change_timezone(timezone.timezone)
+    except system_actions.InvalidTimezone as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    return {"timezone": timezone.timezone}
+
+
+@router.get("/configuration/autoUpgrade")
+async def get_auto_upgrade_settings():
+    """Get the auto-upgrade settings"""
+    return system_actions.get_auto_upgrade_settings().dict()
+
+
+class AutoUpgradeSettings(BaseModel):
+    """Settings for auto-upgrading user data"""
+
+    enable: Optional[bool] = None
+    allowReboot: Optional[bool] = None
+
+
+@router.put("/configuration/autoUpgrade")
+async def set_auto_upgrade_settings(settings: AutoUpgradeSettings):
+    """Set the auto-upgrade settings"""
+    system_actions.set_auto_upgrade_settings(settings.enable, settings.allowReboot)
+    return "Auto-upgrade settings changed"
+
+
+@router.get("/configuration/apply")
+async def apply_configuration():
+    """Apply the configuration"""
+    return_code = system_actions.rebuild_system()
+    return return_code
+
+
+@router.get("/configuration/rollback")
+async def rollback_configuration():
+    """Rollback the configuration"""
+    return_code = system_actions.rollback_system()
+    return return_code
+
+
+@router.get("/configuration/upgrade")
+async def upgrade_configuration():
+    """Upgrade the configuration"""
+    return_code = system_actions.upgrade_system()
+    return return_code
+
+
+@router.get("/reboot")
+async def reboot_system():
+    """Reboot the system"""
+    system_actions.reboot_system()
+    return "System reboot has started"
+
+
+@router.get("/version")
+async def get_system_version():
+    """Get the system version"""
+    return {"system_version": system_actions.get_system_version()}
+
+
+@router.get("/pythonVersion")
+async def get_python_version():
+    """Get the Python version"""
+    return system_actions.get_python_version()
+
+
+@router.get("/configuration/pull")
+async def pull_configuration():
+    """Pull the configuration"""
+    action_result = system_actions.pull_repository_changes()
+    if action_result.status == 0:
+        return action_result.dict()
+    raise HTTPException(status_code=500, detail=action_result.dict())
@@ -0,0 +1,62 @@
+"""Users management module"""
+from typing import Optional
+from fastapi import APIRouter, Body, Depends, HTTPException
+from pydantic import BaseModel
+
+import selfprivacy_api.actions.users as users_actions
+
+from selfprivacy_api.dependencies import get_token_header
+
+router = APIRouter(
+    prefix="/users",
+    tags=["users"],
+    dependencies=[Depends(get_token_header)],
+    responses={404: {"description": "Not found"}},
+)
+
+
+@router.get("")
+async def get_users(withMainUser: bool = False):
+    """Get the list of users"""
+    users: list[users_actions.UserDataUser] = users_actions.get_users(
+        exclude_primary=not withMainUser, exclude_root=True
+    )
+
+    return [user.username for user in users]
+
+
+class UserInput(BaseModel):
+    """User input"""
+
+    username: str
+    password: str
+
+
+@router.post("", status_code=201)
+async def create_user(user: UserInput):
+    try:
+        users_actions.create_user(user.username, user.password)
+    except users_actions.PasswordIsEmpty as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except users_actions.UsernameForbidden as e:
+        raise HTTPException(status_code=409, detail=str(e))
+    except users_actions.UsernameNotAlphanumeric as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except users_actions.UsernameTooLong as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except users_actions.UserAlreadyExists as e:
+        raise HTTPException(status_code=409, detail=str(e))
+
+    return {"result": 0, "username": user.username}
+
+
+@router.delete("/{username}")
+async def delete_user(username: str):
+    try:
+        users_actions.delete_user(username)
+    except users_actions.UserNotFound as e:
+        raise HTTPException(status_code=404, detail=str(e))
+    except users_actions.UserIsProtected as e:
+        raise HTTPException(status_code=400, detail=str(e))
+
+    return {"result": 0, "username": username}
@@ -0,0 +1,244 @@
+"""Restic singleton controller."""
+from datetime import datetime
+import json
+import subprocess
+import os
+from threading import Lock
+from enum import Enum
+import portalocker
+from selfprivacy_api.utils import ReadUserData
+from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
+
+
+class ResticStates(Enum):
+    """Restic states enum."""
+
+    NO_KEY = 0
+    NOT_INITIALIZED = 1
+    INITIALIZED = 2
+    BACKING_UP = 3
+    RESTORING = 4
+    ERROR = 5
+    INITIALIZING = 6
+
+
+class ResticController(metaclass=SingletonMetaclass):
+    """
+    States in which the restic_controller may be:
+    - no backblaze key
+    - backblaze key is provided, but repository is not initialized
+    - backblaze key is provided, repository is initialized
+    - fetching list of snapshots
+    - creating snapshot, current progress can be retrieved
+    - recovering from snapshot
+
+    Any ongoing operation acquires the lock.
+    Current state can be fetched with get_state().
+    """
+
+    _initialized = False
+
+    def __init__(self):
+        if self._initialized:
+            return
+        self.state = ResticStates.NO_KEY
+        self.lock = False
+        self.progress = 0
+        self._backblaze_account = None
+        self._backblaze_key = None
+        self._repository_name = None
+        self.snapshot_list = []
+        self.error_message = None
+        self._initialized = True
+        self.load_configuration()
+        self.write_rclone_config()
+        self.load_snapshots()
+
+    def load_configuration(self):
+        """Load current configuration from user data to singleton."""
+        with ReadUserData() as user_data:
+            self._backblaze_account = user_data["backblaze"]["accountId"]
+            self._backblaze_key = user_data["backblaze"]["accountKey"]
+            self._repository_name = user_data["backblaze"]["bucket"]
+        if self._backblaze_account and self._backblaze_key and self._repository_name:
+            self.state = ResticStates.INITIALIZING
+        else:
+            self.state = ResticStates.NO_KEY
+
+    def write_rclone_config(self):
+        """
+        Open /root/.config/rclone/rclone.conf with portalocker
+        and write configuration in the following format:
+        [backblaze]
+        type = b2
+        account = {self.backblaze_account}
+        key = {self.backblaze_key}
+        """
+        with portalocker.Lock(
+            "/root/.config/rclone/rclone.conf", "w", timeout=None
+        ) as rclone_config:
+            rclone_config.write(
+                f"[backblaze]\n"
+                f"type = b2\n"
+                f"account = {self._backblaze_account}\n"
+                f"key = {self._backblaze_key}\n"
+            )
+
+    def load_snapshots(self):
+        """
+        Load list of snapshots from repository
+        """
+        backup_listing_command = [
+            "restic",
+            "-o",
+            "rclone.args=serve restic --stdio",
+            "-r",
+            f"rclone:backblaze:{self._repository_name}/sfbackup",
+            "snapshots",
+            "--json",
+        ]
+
+        if self.state in (ResticStates.BACKING_UP, ResticStates.RESTORING):
+            return
+        with subprocess.Popen(
+            backup_listing_command,
+            shell=False,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.STDOUT,
+        ) as backup_listing_process_descriptor:
+            snapshots_list = backup_listing_process_descriptor.communicate()[0].decode(
+                "utf-8"
+            )
+        try:
+            starting_index = snapshots_list.find("[")
+            json.loads(snapshots_list[starting_index:])
+            self.snapshot_list = json.loads(snapshots_list[starting_index:])
+            self.state = ResticStates.INITIALIZED
+            print(snapshots_list)
+        except ValueError:
+            if "Is there a repository at the following location?" in snapshots_list:
+                self.state = ResticStates.NOT_INITIALIZED
+                return
+            self.state = ResticStates.ERROR
+            self.error_message = snapshots_list
+            return
+
+    def initialize_repository(self):
+        """
+        Initialize repository with restic
+        """
+        initialize_repository_command = [
+            "restic",
+            "-o",
+            "rclone.args=serve restic --stdio",
+            "-r",
+            f"rclone:backblaze:{self._repository_name}/sfbackup",
+            "init",
+        ]
+        with subprocess.Popen(
+            initialize_repository_command,
+            shell=False,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.STDOUT,
+        ) as initialize_repository_process_descriptor:
+            msg = initialize_repository_process_descriptor.communicate()[0].decode(
+                "utf-8"
+            )
+        if initialize_repository_process_descriptor.returncode == 0:
+            self.state = ResticStates.INITIALIZED
+        else:
+            self.state = ResticStates.ERROR
+            self.error_message = msg
+
+    def start_backup(self):
+        """
+        Start backup with restic
+        """
+        backup_command = [
+            "restic",
+            "-o",
+            "rclone.args=serve restic --stdio",
|
||||||
|
"-r",
|
||||||
|
f"rclone:backblaze:{self._repository_name}/sfbackup",
|
||||||
|
"--verbose",
|
||||||
|
"--json",
|
||||||
|
"backup",
|
||||||
|
"/var",
|
||||||
|
]
|
||||||
|
with open("/var/backup.log", "w", encoding="utf-8") as log_file:
|
||||||
|
subprocess.Popen(
|
||||||
|
backup_command,
|
||||||
|
shell=False,
|
||||||
|
stdout=log_file,
|
||||||
|
stderr=subprocess.STDOUT,
|
||||||
|
)
|
||||||
|
|
||||||
|
self.state = ResticStates.BACKING_UP
|
||||||
|
self.progress = 0
|
||||||
|
|
||||||
|
def check_progress(self):
|
||||||
|
"""
|
||||||
|
Check progress of ongoing backup operation
|
||||||
|
"""
|
||||||
|
backup_status_check_command = ["tail", "-1", "/var/backup.log"]
|
||||||
|
|
||||||
|
if self.state in (ResticStates.NO_KEY, ResticStates.NOT_INITIALIZED):
|
||||||
|
return
|
||||||
|
|
||||||
|
# If the log file does not exists
|
||||||
|
if os.path.exists("/var/backup.log") is False:
|
||||||
|
self.state = ResticStates.INITIALIZED
|
||||||
|
|
||||||
|
with subprocess.Popen(
|
||||||
|
backup_status_check_command,
|
||||||
|
shell=False,
|
||||||
|
stdout=subprocess.PIPE,
|
||||||
|
stderr=subprocess.STDOUT,
|
||||||
|
) as backup_status_check_process_descriptor:
|
||||||
|
backup_process_status = (
|
||||||
|
backup_status_check_process_descriptor.communicate()[0].decode("utf-8")
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
status = json.loads(backup_process_status)
|
||||||
|
except ValueError:
|
||||||
|
print(backup_process_status)
|
||||||
|
self.error_message = backup_process_status
|
||||||
|
return
|
||||||
|
if status["message_type"] == "status":
|
||||||
|
self.progress = status["percent_done"]
|
||||||
|
self.state = ResticStates.BACKING_UP
|
||||||
|
elif status["message_type"] == "summary":
|
||||||
|
self.state = ResticStates.INITIALIZED
|
||||||
|
self.progress = 0
|
||||||
|
self.snapshot_list.append(
|
||||||
|
{
|
||||||
|
"short_id": status["snapshot_id"],
|
||||||
|
# Current time in format 2021-12-02T00:02:51.086452543+03:00
|
||||||
|
"time": datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f%z"),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
def restore_from_backup(self, snapshot_id):
|
||||||
|
"""
|
||||||
|
Restore from backup with restic
|
||||||
|
"""
|
||||||
|
backup_restoration_command = [
|
||||||
|
"restic",
|
||||||
|
"-o",
|
||||||
|
"rclone.args=serve restic --stdio",
|
||||||
|
"-r",
|
||||||
|
f"rclone:backblaze:{self._repository_name}/sfbackup",
|
||||||
|
"restore",
|
||||||
|
snapshot_id,
|
||||||
|
"--target",
|
||||||
|
"/",
|
||||||
|
]
|
||||||
|
|
||||||
|
self.state = ResticStates.RESTORING
|
||||||
|
|
||||||
|
subprocess.run(backup_restoration_command, shell=False)
|
||||||
|
|
||||||
|
self.state = ResticStates.INITIALIZED
|
|
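As a side note, `load_snapshots` above tolerates restic printing warnings before the JSON array by slicing from the first `[` before parsing. That step can be exercised standalone; the sample output below is fabricated for illustration:

```python
import json

# Fabricated restic output: a warning line followed by the JSON snapshot array
raw_output = (
    "warning: unable to open cache\n"
    '[{"short_id": "a1b2c3d4", "time": "2021-12-02T00:02:51Z", "paths": ["/var"]}]'
)

# Same trick as load_snapshots: skip everything before the first "["
starting_index = raw_output.find("[")
snapshots = json.loads(raw_output[starting_index:])
print(snapshots[0]["short_id"])  # a1b2c3d4
```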
@@ -0,0 +1,70 @@
"""Tasks for the restic controller."""
from huey import crontab

from selfprivacy_api.utils.huey import huey
from . import ResticController, ResticStates


@huey.task()
def init_restic():
    controller = ResticController()
    if controller.state == ResticStates.NOT_INITIALIZED:
        initialize_repository()


@huey.task()
def update_keys_from_userdata():
    controller = ResticController()
    controller.load_configuration()
    controller.write_rclone_config()
    initialize_repository()


# Check every morning at 5:00 AM
@huey.periodic_task(crontab(hour=5, minute=0))
def cron_load_snapshots():
    controller = ResticController()
    controller.load_snapshots()


@huey.task()
def load_snapshots():
    controller = ResticController()
    controller.load_snapshots()
    if controller.state == ResticStates.NOT_INITIALIZED:
        load_snapshots.schedule(delay=120)


@huey.task()
def initialize_repository():
    controller = ResticController()
    if controller.state is not ResticStates.NO_KEY:
        controller.initialize_repository()
        load_snapshots()


@huey.task()
def fetch_backup_status():
    controller = ResticController()
    if controller.state is ResticStates.BACKING_UP:
        controller.check_progress()
        if controller.state is ResticStates.BACKING_UP:
            fetch_backup_status.schedule(delay=2)
        else:
            load_snapshots.schedule(delay=240)


@huey.task()
def start_backup():
    controller = ResticController()
    if controller.state is ResticStates.NOT_INITIALIZED:
        resp = initialize_repository()
        resp.get()
    controller.start_backup()
    fetch_backup_status.schedule(delay=3)


@huey.task()
def restore_from_backup(snapshot):
    controller = ResticController()
    controller.restore_from_backup(snapshot)
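The tasks above chain through the controller's state: `fetch_backup_status` re-schedules itself every two seconds while a backup is running, then hands off to `load_snapshots`. A minimal huey-free sketch of that polling loop, with a stub standing in for `ResticController` (all names here are illustrative, not part of the codebase):

```python
from enum import Enum, auto


class State(Enum):
    BACKING_UP = auto()
    INITIALIZED = auto()


class StubController:
    """Pretends a backup finishes after three progress checks."""

    def __init__(self):
        self.state = State.BACKING_UP
        self.checks = 0

    def check_progress(self):
        self.checks += 1
        if self.checks >= 3:
            self.state = State.INITIALIZED


def fetch_backup_status(controller):
    """Mirror of the task: poll while backing up, stop once done."""
    polls = 0
    while controller.state is State.BACKING_UP:
        controller.check_progress()
        polls += 1  # the real task re-schedules itself here with delay=2
    return polls


print(fetch_backup_status(StubController()))  # 3
```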
@@ -3,7 +3,7 @@
 import typing
 from selfprivacy_api.services.bitwarden import Bitwarden
 from selfprivacy_api.services.gitea import Gitea
-from selfprivacy_api.services.jitsimeet import JitsiMeet
+from selfprivacy_api.services.jitsi import Jitsi
 from selfprivacy_api.services.mailserver import MailServer
 from selfprivacy_api.services.nextcloud import Nextcloud
 from selfprivacy_api.services.pleroma import Pleroma
@@ -18,7 +18,7 @@ services: list[Service] = [
     Nextcloud(),
     Pleroma(),
     Ocserv(),
-    JitsiMeet(),
+    Jitsi(),
 ]


@@ -42,7 +42,7 @@ def get_disabled_services() -> list[Service]:


 def get_services_by_location(location: str) -> list[Service]:
-    return [service for service in services if service.get_drive() == location]
+    return [service for service in services if service.get_location() == location]


 def get_all_required_dns_records() -> list[ServiceDnsRecord]:
@@ -54,20 +54,14 @@
             name="api",
             content=ip4,
             ttl=3600,
-            display_name="SelfPrivacy API",
-        ),
-    ]
-
-    if ip6 is not None:
-        dns_records.append(
-            ServiceDnsRecord(
-                type="AAAA",
-                name="api",
-                content=ip6,
-                ttl=3600,
-                display_name="SelfPrivacy API (IPv6)",
-            )
-        )
+        ),
+        ServiceDnsRecord(
+            type="AAAA",
+            name="api",
+            content=ip6,
+            ttl=3600,
+        ),
+    ]
     for service in get_enabled_services():
-        dns_records += service.get_dns_records(ip4, ip6)
+        dns_records += service.get_dns_records()
     return dns_records
@@ -1,12 +1,17 @@
 """Class representing Bitwarden service"""
 import base64
 import subprocess
-from typing import Optional, List
+import typing

-from selfprivacy_api.utils import get_domain
-from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.services.service import Service, ServiceStatus
+from selfprivacy_api.jobs import Job, JobStatus, Jobs
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import get_service_status
+from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.block_devices import BlockDevice
+from selfprivacy_api.utils.huey import huey
+import selfprivacy_api.utils.network as network_utils
 from selfprivacy_api.services.bitwarden.icon import BITWARDEN_ICON
@@ -34,19 +39,11 @@ class Bitwarden(Service):
         return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")

     @staticmethod
-    def get_user() -> str:
-        return "vaultwarden"
-
-    @staticmethod
-    def get_url() -> Optional[str]:
+    def get_url() -> typing.Optional[str]:
         """Return service url."""
         domain = get_domain()
         return f"https://password.{domain}"

     @staticmethod
-    def get_subdomain() -> Optional[str]:
-        return "password"
-
-    @staticmethod
     def is_movable() -> bool:
         return True
@@ -56,8 +53,9 @@ class Bitwarden(Service):
         return False

     @staticmethod
-    def get_backup_description() -> str:
-        return "Password database, encryption certificate and attachments."
+    def is_enabled() -> bool:
+        with ReadUserData() as user_data:
+            return user_data.get("bitwarden", {}).get("enable", False)

     @staticmethod
     def get_status() -> ServiceStatus:
@@ -72,6 +70,22 @@ class Bitwarden(Service):
         """
         return get_service_status("vaultwarden.service")

+    @staticmethod
+    def enable():
+        """Enable Bitwarden service."""
+        with WriteUserData() as user_data:
+            if "bitwarden" not in user_data:
+                user_data["bitwarden"] = {}
+            user_data["bitwarden"]["enable"] = True
+
+    @staticmethod
+    def disable():
+        """Disable Bitwarden service."""
+        with WriteUserData() as user_data:
+            if "bitwarden" not in user_data:
+                user_data["bitwarden"] = {}
+            user_data["bitwarden"]["enable"] = False
+
     @staticmethod
     def stop():
         subprocess.run(["systemctl", "stop", "vaultwarden.service"])
@@ -97,5 +111,64 @@ class Bitwarden(Service):
         return ""

     @staticmethod
-    def get_folders() -> List[str]:
-        return ["/var/lib/bitwarden", "/var/lib/bitwarden_rs"]
+    def get_storage_usage() -> int:
+        storage_usage = 0
+        storage_usage += get_storage_usage("/var/lib/bitwarden")
+        storage_usage += get_storage_usage("/var/lib/bitwarden_rs")
+        return storage_usage
+
+    @staticmethod
+    def get_location() -> str:
+        with ReadUserData() as user_data:
+            if user_data.get("useBinds", False):
+                return user_data.get("bitwarden", {}).get("location", "sda1")
+            else:
+                return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+        """Return list of DNS records for Bitwarden service."""
+        return [
+            ServiceDnsRecord(
+                type="A",
+                name="password",
+                content=network_utils.get_ip4(),
+                ttl=3600,
+            ),
+            ServiceDnsRecord(
+                type="AAAA",
+                name="password",
+                content=network_utils.get_ip6(),
+                ttl=3600,
+            ),
+        ]
+
+    def move_to_volume(self, volume: BlockDevice) -> Job:
+        job = Jobs.add(
+            type_id="services.bitwarden.move",
+            name="Move Bitwarden",
+            description=f"Moving Bitwarden data to {volume.name}",
+        )
+
+        move_service(
+            self,
+            volume,
+            job,
+            [
+                FolderMoveNames(
+                    name="bitwarden",
+                    bind_location="/var/lib/bitwarden",
+                    group="vaultwarden",
+                    owner="vaultwarden",
+                ),
+                FolderMoveNames(
+                    name="bitwarden_rs",
+                    bind_location="/var/lib/bitwarden_rs",
+                    group="vaultwarden",
+                    owner="vaultwarden",
+                ),
+            ],
+            "bitwarden",
+        )
+
+        return job
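The `enable()`/`disable()` pair added above toggles a nested flag in the userdata JSON, creating the service section on first use. The pattern in isolation, using a plain dict in place of `WriteUserData` (this helper is illustrative, not part of the codebase):

```python
def set_service_flag(user_data: dict, service: str, enabled: bool) -> dict:
    """Same shape as Bitwarden.enable()/disable(): create the
    section if it is missing, then set its "enable" flag."""
    if service not in user_data:
        user_data[service] = {}
    user_data[service]["enable"] = enabled
    return user_data


data = {}
set_service_flag(data, "bitwarden", True)
print(data)  # {'bitwarden': {'enable': True}}
```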
@@ -0,0 +1,236 @@
"""Generic handler for moving services"""

import subprocess
import time
import pathlib
import shutil

from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.service import Service, ServiceStatus


class FolderMoveNames(BaseModel):
    name: str
    bind_location: str
    owner: str
    group: str


@huey.task()
def move_service(
    service: Service,
    volume: BlockDevice,
    job: Job,
    folder_names: list[FolderMoveNames],
    userdata_location: str,
):
    """Move a service to another volume."""
    job = Jobs.update(
        job=job,
        status_text="Performing pre-move checks...",
        status=JobStatus.RUNNING,
    )
    service_name = service.get_display_name()
    with ReadUserData() as user_data:
        if not user_data.get("useBinds", False):
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Server is not using binds.",
            )
            return
    # Check if we are on the same volume
    old_volume = service.get_location()
    if old_volume == volume.name:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=f"{service_name} is already on this volume.",
        )
        return
    # Check if there is enough space on the new volume
    if int(volume.fsavail) < service.get_storage_usage():
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error="Not enough space on the new volume.",
        )
        return
    # Make sure the volume is mounted
    if volume.name != "sda1" and f"/volumes/{volume.name}" not in volume.mountpoints:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error="Volume is not mounted.",
        )
        return
    # Make sure the current directory exists and that its user and group are correct
    for folder in folder_names:
        if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").exists():
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} is not found.",
            )
            return
        if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").is_dir():
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} is not a directory.",
            )
            return
        if (
            not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").owner()
            == folder.owner
        ):
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} owner is not {folder.owner}.",
            )
            return

    # Stop service
    Jobs.update(
        job=job,
        status=JobStatus.RUNNING,
        status_text=f"Stopping {service_name}...",
        progress=5,
    )
    service.stop()
    # Wait for the service to stop, check every second
    # If it does not stop in 30 seconds, abort
    for _ in range(30):
        if service.get_status() not in (
            ServiceStatus.ACTIVATING,
            ServiceStatus.DEACTIVATING,
        ):
            break
        time.sleep(1)
    else:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=f"{service_name} did not stop in 30 seconds.",
        )
        return

    # Unmount old volume
    Jobs.update(
        job=job,
        status_text="Unmounting old folder...",
        status=JobStatus.RUNNING,
        progress=10,
    )
    for folder in folder_names:
        try:
            subprocess.run(
                ["umount", folder.bind_location],
                check=True,
            )
        except subprocess.CalledProcessError:
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Unable to unmount old volume.",
            )
            return
    # Move data to new volume and set correct permissions
    Jobs.update(
        job=job,
        status_text="Moving data to new volume...",
        status=JobStatus.RUNNING,
        progress=20,
    )
    current_progress = 20
    folder_percentage = 50 // len(folder_names)
    for folder in folder_names:
        shutil.move(
            f"/volumes/{old_volume}/{folder.name}",
            f"/volumes/{volume.name}/{folder.name}",
        )
        Jobs.update(
            job=job,
            status_text="Moving data to new volume...",
            status=JobStatus.RUNNING,
            progress=current_progress + folder_percentage,
        )

    Jobs.update(
        job=job,
        status_text=f"Making sure {service_name} owns its files...",
        status=JobStatus.RUNNING,
        progress=70,
    )
    for folder in folder_names:
        try:
            subprocess.run(
                [
                    "chown",
                    "-R",
                    f"{folder.owner}:{folder.group}",
                    f"/volumes/{volume.name}/{folder.name}",
                ],
                check=True,
            )
        except subprocess.CalledProcessError as error:
            print(error.output)
            Jobs.update(
                job=job,
                status=JobStatus.RUNNING,
                error=f"Unable to set ownership of new volume. {service_name} may not be able to access its files. Continuing anyway.",
            )

    # Mount new volume
    Jobs.update(
        job=job,
        status_text=f"Mounting {service_name} data...",
        status=JobStatus.RUNNING,
        progress=90,
    )

    for folder in folder_names:
        try:
            subprocess.run(
                [
                    "mount",
                    "--bind",
                    f"/volumes/{volume.name}/{folder.name}",
                    folder.bind_location,
                ],
                check=True,
            )
        except subprocess.CalledProcessError as error:
            print(error.output)
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Unable to mount new volume.",
            )
            return

    # Update userdata
    Jobs.update(
        job=job,
        status_text="Finishing move...",
        status=JobStatus.RUNNING,
        progress=95,
    )
    with WriteUserData() as user_data:
        if userdata_location not in user_data:
            user_data[userdata_location] = {}
        user_data[userdata_location]["location"] = volume.name
    # Start service
    service.start()
    Jobs.update(
        job=job,
        status=JobStatus.FINISHED,
        result=f"{service_name} moved successfully.",
        status_text=f"Starting {service_name}...",
        progress=100,
    )
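The move above allocates the 20–70% progress band across folders via `folder_percentage = 50 // len(folder_names)`. A quick standalone check of the values actually reported (note that `current_progress` is never advanced inside the loop in the source, so every folder reports the same value):

```python
def progress_steps(folder_count: int) -> list[int]:
    """Progress values reported after each folder move, mirroring move_service."""
    current_progress = 20
    folder_percentage = 50 // folder_count
    # current_progress stays fixed, exactly as in the source loop
    return [current_progress + folder_percentage for _ in range(folder_count)]


print(progress_steps(1))  # [70]
print(progress_steps(2))  # [45, 45]
```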
@@ -1,17 +1,16 @@
 """Generic service status fetcher using systemctl"""
 import subprocess
-from typing import List

-from selfprivacy_api.models.services import ServiceStatus
+from selfprivacy_api.services.service import ServiceStatus


-def get_service_status(unit: str) -> ServiceStatus:
+def get_service_status(service: str) -> ServiceStatus:
     """
     Return service status from systemd.
     Use systemctl show to get the status of a service.
     Get ActiveState from the output.
     """
-    service_status = subprocess.check_output(["systemctl", "show", unit])
+    service_status = subprocess.check_output(["systemctl", "show", service])
     if b"LoadState=not-found" in service_status:
         return ServiceStatus.OFF
     if b"ActiveState=active" in service_status:
@@ -59,24 +58,3 @@ def get_service_status_from_several_units(services: list[str]) -> ServiceStatus:
     if ServiceStatus.ACTIVE in service_statuses:
         return ServiceStatus.ACTIVE
     return ServiceStatus.OFF
-
-
-def get_last_log_lines(service: str, lines_count: int) -> List[str]:
-    if lines_count < 1:
-        raise ValueError("lines_count must be greater than 0")
-    try:
-        logs = subprocess.check_output(
-            [
-                "journalctl",
-                "-u",
-                service,
-                "-n",
-                str(lines_count),
-                "-o",
-                "cat",
-            ],
-            shell=False,
-        ).decode("utf-8")
-        return logs.splitlines()
-    except subprocess.CalledProcessError:
-        return []
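`get_service_status` keys off substrings of `systemctl show` output. A standalone sketch of that dispatch over a canned byte string (the sample output is fabricated, and only the branches visible in the hunk above are reproduced):

```python
from enum import Enum, auto


class ServiceStatus(Enum):
    ACTIVE = auto()
    OFF = auto()
    UNKNOWN = auto()


def parse_service_status(show_output: bytes) -> ServiceStatus:
    """Substring dispatch in the style of get_service_status."""
    if b"LoadState=not-found" in show_output:
        return ServiceStatus.OFF
    if b"ActiveState=active" in show_output:
        return ServiceStatus.ACTIVE
    return ServiceStatus.UNKNOWN


# Fabricated excerpts of `systemctl show <unit>` output
print(parse_service_status(b"LoadState=loaded\nActiveState=active\n").name)  # ACTIVE
print(parse_service_status(b"LoadState=not-found\n").name)  # OFF
```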
@@ -1,12 +1,17 @@
 """Class representing Gitea service"""
 import base64
 import subprocess
-from typing import Optional, List
+import typing

-from selfprivacy_api.utils import get_domain
-from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.services.service import Service, ServiceStatus
+from selfprivacy_api.jobs import Job, Jobs
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import get_service_status
+from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.block_devices import BlockDevice
+from selfprivacy_api.utils.huey import huey
+import selfprivacy_api.utils.network as network_utils
 from selfprivacy_api.services.gitea.icon import GITEA_ICON
@@ -34,15 +39,11 @@ class Gitea(Service):
         return base64.b64encode(GITEA_ICON.encode("utf-8")).decode("utf-8")

     @staticmethod
-    def get_url() -> Optional[str]:
+    def get_url() -> typing.Optional[str]:
         """Return service url."""
         domain = get_domain()
         return f"https://git.{domain}"

     @staticmethod
-    def get_subdomain() -> Optional[str]:
-        return "git"
-
-    @staticmethod
     def is_movable() -> bool:
         return True
@@ -52,8 +53,9 @@ class Gitea(Service):
         return False

     @staticmethod
-    def get_backup_description() -> str:
-        return "Git repositories, database and user data."
+    def is_enabled() -> bool:
+        with ReadUserData() as user_data:
+            return user_data.get("gitea", {}).get("enable", False)

     @staticmethod
     def get_status() -> ServiceStatus:
@@ -67,6 +69,22 @@ class Gitea(Service):
         """
         return get_service_status("gitea.service")

+    @staticmethod
+    def enable():
+        """Enable Gitea service."""
+        with WriteUserData() as user_data:
+            if "gitea" not in user_data:
+                user_data["gitea"] = {}
+            user_data["gitea"]["enable"] = True
+
+    @staticmethod
+    def disable():
+        """Disable Gitea service."""
+        with WriteUserData() as user_data:
+            if "gitea" not in user_data:
+                user_data["gitea"] = {}
+            user_data["gitea"]["enable"] = False
+
     @staticmethod
     def stop():
         subprocess.run(["systemctl", "stop", "gitea.service"])
@@ -92,5 +110,56 @@ class Gitea(Service):
         return ""

     @staticmethod
-    def get_folders() -> List[str]:
-        return ["/var/lib/gitea"]
+    def get_storage_usage() -> int:
+        storage_usage = 0
+        storage_usage += get_storage_usage("/var/lib/gitea")
+        return storage_usage
+
+    @staticmethod
+    def get_location() -> str:
+        with ReadUserData() as user_data:
+            if user_data.get("useBinds", False):
+                return user_data.get("gitea", {}).get("location", "sda1")
+            else:
+                return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+        return [
+            ServiceDnsRecord(
+                type="A",
+                name="git",
+                content=network_utils.get_ip4(),
+                ttl=3600,
+            ),
+            ServiceDnsRecord(
+                type="AAAA",
+                name="git",
+                content=network_utils.get_ip6(),
+                ttl=3600,
+            ),
+        ]
+
+    def move_to_volume(self, volume: BlockDevice) -> Job:
+        job = Jobs.add(
+            type_id="services.gitea.move",
+            name="Move Gitea",
+            description=f"Moving Gitea data to {volume.name}",
+        )
+
+        move_service(
+            self,
+            volume,
+            job,
+            [
+                FolderMoveNames(
+                    name="gitea",
+                    bind_location="/var/lib/gitea",
+                    group="gitea",
+                    owner="gitea",
+                ),
+            ],
+            "gitea",
+        )
+
+        return job
@@ -0,0 +1,142 @@
+"""Class representing Jitsi service"""
+import base64
+import subprocess
+import typing
+
+from selfprivacy_api.jobs import Job, Jobs
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import (
+    get_service_status,
+    get_service_status_from_several_units,
+)
+from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.block_devices import BlockDevice
+from selfprivacy_api.utils.huey import huey
+import selfprivacy_api.utils.network as network_utils
+from selfprivacy_api.services.jitsi.icon import JITSI_ICON
+
+
+class Jitsi(Service):
+    """Class representing Jitsi service"""
+
+    @staticmethod
+    def get_id() -> str:
+        """Return service id."""
+        return "jitsi"
+
+    @staticmethod
+    def get_display_name() -> str:
+        """Return service display name."""
+        return "Jitsi"
+
+    @staticmethod
+    def get_description() -> str:
+        """Return service description."""
+        return "Jitsi is a free and open-source video conferencing solution."
+
+    @staticmethod
+    def get_svg_icon() -> str:
+        """Read SVG icon from file and return it as base64 encoded string."""
+        return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")
+
+    @staticmethod
+    def get_url() -> typing.Optional[str]:
+        """Return service url."""
+        domain = get_domain()
+        return f"https://meet.{domain}"
+
+    @staticmethod
+    def is_movable() -> bool:
+        return False
+
+    @staticmethod
+    def is_required() -> bool:
+        return False
+
+    @staticmethod
+    def is_enabled() -> bool:
+        with ReadUserData() as user_data:
+            return user_data.get("jitsi", {}).get("enable", False)
+
+    @staticmethod
+    def get_status() -> ServiceStatus:
+        return get_service_status_from_several_units(
+            ["jitsi-videobridge.service", "jicofo.service"]
+        )
+
+    @staticmethod
+    def enable():
+        """Enable Jitsi service."""
+        with WriteUserData() as user_data:
+            if "jitsi" not in user_data:
+                user_data["jitsi"] = {}
+            user_data["jitsi"]["enable"] = True
+
+    @staticmethod
+    def disable():
+        """Disable Gitea service."""
+        with WriteUserData() as user_data:
+            if "jitsi" not in user_data:
+                user_data["jitsi"] = {}
+            user_data["jitsi"]["enable"] = False
+
+    @staticmethod
+    def stop():
+        subprocess.run(["systemctl", "stop", "jitsi-videobridge.service"])
+        subprocess.run(["systemctl", "stop", "jicofo.service"])
+
+    @staticmethod
+    def start():
+        subprocess.run(["systemctl", "start", "jitsi-videobridge.service"])
+        subprocess.run(["systemctl", "start", "jicofo.service"])
+
+    @staticmethod
+    def restart():
+        subprocess.run(["systemctl", "restart", "jitsi-videobridge.service"])
+        subprocess.run(["systemctl", "restart", "jicofo.service"])
+
+    @staticmethod
+    def get_configuration():
+        return {}
+
+    @staticmethod
+    def set_configuration(config_items):
+        return super().set_configuration(config_items)
+
+    @staticmethod
+    def get_logs():
+        return ""
+
+    @staticmethod
+    def get_storage_usage() -> int:
+        storage_usage = 0
+        storage_usage += get_storage_usage("/var/lib/jitsi-meet")
+        return storage_usage
+
+    @staticmethod
+    def get_location() -> str:
+        return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+        ip4 = network_utils.get_ip4()
+        ip6 = network_utils.get_ip6()
+        return [
+            ServiceDnsRecord(
+                type="A",
+                name="meet",
+                content=ip4,
+                ttl=3600,
+            ),
+            ServiceDnsRecord(
+                type="AAAA",
+                name="meet",
+                content=ip6,
+                ttl=3600,
+            ),
+        ]
+
+    def move_to_volume(self, volume: BlockDevice) -> Job:
+        raise NotImplementedError("jitsi service is not movable")
@@ -1,108 +0,0 @@
-"""Class representing Jitsi Meet service"""
-import base64
-import subprocess
-from typing import Optional, List
-
-from selfprivacy_api.jobs import Job
-from selfprivacy_api.utils.systemd import (
-    get_service_status_from_several_units,
-)
-from selfprivacy_api.services.service import Service, ServiceStatus
-from selfprivacy_api.utils import get_domain
-from selfprivacy_api.utils.block_devices import BlockDevice
-from selfprivacy_api.services.jitsimeet.icon import JITSI_ICON
-
-
-class JitsiMeet(Service):
-    """Class representing Jitsi service"""
-
-    @staticmethod
-    def get_id() -> str:
-        """Return service id."""
-        return "jitsi-meet"
-
-    @staticmethod
-    def get_display_name() -> str:
-        """Return service display name."""
-        return "JitsiMeet"
-
-    @staticmethod
-    def get_description() -> str:
-        """Return service description."""
-        return "Jitsi Meet is a free and open-source video conferencing solution."
-
-    @staticmethod
-    def get_svg_icon() -> str:
-        """Read SVG icon from file and return it as base64 encoded string."""
-        return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")
-
-    @staticmethod
-    def get_url() -> Optional[str]:
-        """Return service url."""
-        domain = get_domain()
-        return f"https://meet.{domain}"
-
-    @staticmethod
-    def get_subdomain() -> Optional[str]:
-        return "meet"
-
-    @staticmethod
-    def is_movable() -> bool:
-        return False
-
-    @staticmethod
-    def is_required() -> bool:
-        return False
-
-    @staticmethod
-    def get_backup_description() -> str:
-        return "Secrets that are used to encrypt the communication."
-
-    @staticmethod
-    def get_status() -> ServiceStatus:
-        return get_service_status_from_several_units(
-            ["jitsi-videobridge.service", "jicofo.service"]
-        )
-
-    @staticmethod
-    def stop():
-        subprocess.run(
-            ["systemctl", "stop", "jitsi-videobridge.service"],
-            check=False,
-        )
-        subprocess.run(["systemctl", "stop", "jicofo.service"], check=False)
-
-    @staticmethod
-    def start():
-        subprocess.run(
-            ["systemctl", "start", "jitsi-videobridge.service"],
-            check=False,
-        )
-        subprocess.run(["systemctl", "start", "jicofo.service"], check=False)
-
-    @staticmethod
-    def restart():
-        subprocess.run(
-            ["systemctl", "restart", "jitsi-videobridge.service"],
-            check=False,
-        )
-        subprocess.run(["systemctl", "restart", "jicofo.service"], check=False)
-
-    @staticmethod
-    def get_configuration():
-        return {}
-
-    @staticmethod
-    def set_configuration(config_items):
-        return super().set_configuration(config_items)
-
-    @staticmethod
-    def get_logs():
-        return ""
-
-    @staticmethod
-    def get_folders() -> List[str]:
-        return ["/var/lib/jitsi-meet"]
-
-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        raise NotImplementedError("jitsi-meet service is not movable")
@@ -2,13 +2,20 @@
 import base64
 import subprocess
-from typing import Optional, List
+import typing
 
-from selfprivacy_api.utils.systemd import (
+from selfprivacy_api.jobs import Job, JobStatus, Jobs
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import (
+    get_service_status,
     get_service_status_from_several_units,
 )
 from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
-from selfprivacy_api import utils
+import selfprivacy_api.utils as utils
+from selfprivacy_api.utils.block_devices import BlockDevice
+from selfprivacy_api.utils.huey import huey
+import selfprivacy_api.utils.network as network_utils
 from selfprivacy_api.services.mailserver.icon import MAILSERVER_ICON
@@ -17,7 +24,7 @@ class MailServer(Service):
 
     @staticmethod
     def get_id() -> str:
-        return "simple-nixos-mailserver"
+        return "mailserver"
 
     @staticmethod
     def get_display_name() -> str:
@@ -32,18 +39,10 @@ class MailServer(Service):
         return base64.b64encode(MAILSERVER_ICON.encode("utf-8")).decode("utf-8")
 
     @staticmethod
-    def get_user() -> str:
-        return "virtualMail"
-
-    @staticmethod
-    def get_url() -> Optional[str]:
+    def get_url() -> typing.Optional[str]:
         """Return service url."""
         return None
 
     @staticmethod
-    def get_subdomain() -> Optional[str]:
-        return None
-
-    @staticmethod
     def is_movable() -> bool:
         return True
@@ -52,10 +51,6 @@ class MailServer(Service):
     def is_required() -> bool:
         return True
 
     @staticmethod
-    def get_backup_description() -> str:
-        return "Mail boxes and filters."
-
-    @staticmethod
     def is_enabled() -> bool:
         return True
@@ -76,18 +71,18 @@ class MailServer(Service):
 
     @staticmethod
     def stop():
-        subprocess.run(["systemctl", "stop", "dovecot2.service"], check=False)
-        subprocess.run(["systemctl", "stop", "postfix.service"], check=False)
+        subprocess.run(["systemctl", "stop", "dovecot2.service"])
+        subprocess.run(["systemctl", "stop", "postfix.service"])
 
     @staticmethod
     def start():
-        subprocess.run(["systemctl", "start", "dovecot2.service"], check=False)
-        subprocess.run(["systemctl", "start", "postfix.service"], check=False)
+        subprocess.run(["systemctl", "start", "dovecot2.service"])
+        subprocess.run(["systemctl", "start", "postfix.service"])
 
     @staticmethod
     def restart():
-        subprocess.run(["systemctl", "restart", "dovecot2.service"], check=False)
-        subprocess.run(["systemctl", "restart", "postfix.service"], check=False)
+        subprocess.run(["systemctl", "restart", "dovecot2.service"])
+        subprocess.run(["systemctl", "restart", "postfix.service"])
 
     @staticmethod
     def get_configuration():
@@ -102,64 +97,83 @@ class MailServer(Service):
         return ""
 
     @staticmethod
-    def get_folders() -> List[str]:
-        return ["/var/vmail", "/var/sieve"]
+    def get_storage_usage() -> int:
+        return get_storage_usage("/var/vmail")
 
-    @classmethod
-    def get_dns_records(cls, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
+    @staticmethod
+    def get_location() -> str:
+        with utils.ReadUserData() as user_data:
+            if user_data.get("useBinds", False):
+                return user_data.get("mailserver", {}).get("location", "sda1")
+            else:
+                return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
         domain = utils.get_domain()
         dkim_record = utils.get_dkim_key(domain)
+        ip4 = network_utils.get_ip4()
+        ip6 = network_utils.get_ip6()
+
         if dkim_record is None:
             return []
 
-        dns_records = [
+        return [
             ServiceDnsRecord(
                 type="A",
                 name=domain,
                 content=ip4,
                 ttl=3600,
-                display_name="Root Domain",
             ),
             ServiceDnsRecord(
-                type="MX",
+                type="AAAA",
                 name=domain,
-                content=domain,
+                content=ip6,
                 ttl=3600,
-                priority=10,
-                display_name="Mail server record",
             ),
             ServiceDnsRecord(
-                type="TXT",
-                name="_dmarc",
-                content="v=DMARC1; p=none",
-                ttl=18000,
-                display_name="DMARC record",
+                type="MX", name=domain, content=domain, ttl=3600, priority=10
+            ),
+            ServiceDnsRecord(
+                type="TXT", name="_dmarc", content=f"v=DMARC1; p=none", ttl=18000
             ),
             ServiceDnsRecord(
                 type="TXT",
                 name=domain,
                 content=f"v=spf1 a mx ip4:{ip4} -all",
                 ttl=18000,
-                display_name="SPF record",
             ),
             ServiceDnsRecord(
-                type="TXT",
-                name="selector._domainkey",
-                content=dkim_record,
-                ttl=18000,
-                display_name="DKIM key",
+                type="TXT", name="selector._domainkey", content=dkim_record, ttl=18000
             ),
         ]
 
-        if ip6 is not None:
-            dns_records.append(
-                ServiceDnsRecord(
-                    type="AAAA",
-                    name=domain,
-                    content=ip6,
-                    ttl=3600,
-                    display_name="Root Domain (IPv6)",
-                ),
-            )
-        return dns_records
+    def move_to_volume(self, volume: BlockDevice) -> Job:
+        job = Jobs.add(
+            type_id="services.mailserver.move",
+            name="Move Mail Server",
+            description=f"Moving mailserver data to {volume.name}",
+        )
+
+        move_service(
+            self,
+            volume,
+            job,
+            [
+                FolderMoveNames(
+                    name="vmail",
+                    bind_location="/var/vmail",
+                    group="virtualMail",
+                    owner="virtualMail",
+                ),
+                FolderMoveNames(
+                    name="sieve",
+                    bind_location="/var/sieve",
+                    group="virtualMail",
+                    owner="virtualMail",
+                ),
+            ],
+            "mailserver",
+        )
+
+        return job
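As a quick sanity check on the TXT records assembled in the hunk above, the SPF content is a plain f-string over the server's IPv4 address. A minimal standalone sketch (the address used below is a documentation-range placeholder, not a value from this diff):

```python
def spf_record(ip4: str) -> str:
    # Same format string as the SPF TXT record in get_dns_records above.
    return f"v=spf1 a mx ip4:{ip4} -all"


# Example: spf_record("203.0.113.1") -> "v=spf1 a mx ip4:203.0.113.1 -all"
```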
@@ -1,72 +0,0 @@
-"""Generic handler for moving services"""
-
-from __future__ import annotations
-import shutil
-from typing import List
-
-from selfprivacy_api.jobs import Job, report_progress
-from selfprivacy_api.utils.block_devices import BlockDevice
-from selfprivacy_api.services.owned_path import Bind
-
-
-class MoveError(Exception):
-    """Move of the data has failed"""
-
-
-def check_volume(volume: BlockDevice, space_needed: int) -> None:
-    # Check if there is enough space on the new volume
-    if int(volume.fsavail) < space_needed:
-        raise MoveError("Not enough space on the new volume.")
-
-    # Make sure the volume is mounted
-    if not volume.is_root() and f"/volumes/{volume.name}" not in volume.mountpoints:
-        raise MoveError("Volume is not mounted.")
-
-
-def check_binds(volume_name: str, binds: List[Bind]) -> None:
-    # Make sure current actual directory exists and if its user and group are correct
-    for bind in binds:
-        bind.validate()
-
-
-def unbind_folders(owned_folders: List[Bind]) -> None:
-    for folder in owned_folders:
-        folder.unbind()
-
-
-# May be moved into Bind
-def move_data_to_volume(
-    binds: List[Bind],
-    new_volume: BlockDevice,
-    job: Job,
-) -> List[Bind]:
-    current_progress = job.progress
-    if current_progress is None:
-        current_progress = 0
-
-    progress_per_folder = 50 // len(binds)
-    for bind in binds:
-        old_location = bind.location_at_volume()
-        bind.drive = new_volume
-        new_location = bind.location_at_volume()
-
-        try:
-            shutil.move(old_location, new_location)
-        except Exception as error:
-            raise MoveError(
-                f"could not move {old_location} to {new_location} : {str(error)}"
-            ) from error
-
-        progress = current_progress + progress_per_folder
-        report_progress(progress, job, "Moving data to new volume...")
-    return binds
-
-
-def ensure_folder_ownership(folders: List[Bind]) -> None:
-    for folder in folders:
-        folder.ensure_ownership()
-
-
-def bind_folders(folders: List[Bind]):
-    for folder in folders:
-        folder.bind()
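The progress arithmetic in `move_data_to_volume` above reserves 50 points of the job's progress bar and splits them evenly across the folders being moved; note that `progress` is computed from the loop-invariant `current_progress`, so each folder reports the same value. A self-contained sketch of that calculation (the function name is illustrative, not part of the API):

```python
def reported_progress(start: int, folder_count: int) -> int:
    # Mirrors the loop above: 50 progress points are split evenly
    # (integer division) across the folders, and the reported value
    # is always start + one share, since `start` never changes.
    progress_per_folder = 50 // folder_count
    return start + progress_per_folder
```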
@@ -1,14 +1,15 @@
 """Class representing Nextcloud service."""
 import base64
 import subprocess
-from typing import Optional, List
+import typing
 
-from selfprivacy_api.utils import get_domain
 from selfprivacy_api.jobs import Job, Jobs
-from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.services.service import Service, ServiceStatus
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import get_service_status
+from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.block_devices import BlockDevice
+import selfprivacy_api.utils.network as network_utils
 from selfprivacy_api.services.nextcloud.icon import NEXTCLOUD_ICON
@@ -36,15 +37,11 @@ class Nextcloud(Service):
         return base64.b64encode(NEXTCLOUD_ICON.encode("utf-8")).decode("utf-8")
 
     @staticmethod
-    def get_url() -> Optional[str]:
+    def get_url() -> typing.Optional[str]:
         """Return service url."""
         domain = get_domain()
         return f"https://cloud.{domain}"
 
     @staticmethod
-    def get_subdomain() -> Optional[str]:
-        return "cloud"
-
-    @staticmethod
     def is_movable() -> bool:
         return True
@@ -54,8 +51,9 @@ class Nextcloud(Service):
         return False
 
     @staticmethod
-    def get_backup_description() -> str:
-        return "All the files and other data stored in Nextcloud."
+    def is_enabled() -> bool:
+        with ReadUserData() as user_data:
+            return user_data.get("nextcloud", {}).get("enable", False)
 
     @staticmethod
     def get_status() -> ServiceStatus:
@@ -70,6 +68,22 @@ class Nextcloud(Service):
         """
         return get_service_status("phpfpm-nextcloud.service")
 
+    @staticmethod
+    def enable():
+        """Enable Nextcloud service."""
+        with WriteUserData() as user_data:
+            if "nextcloud" not in user_data:
+                user_data["nextcloud"] = {}
+            user_data["nextcloud"]["enable"] = True
+
+    @staticmethod
+    def disable():
+        """Disable Nextcloud service."""
+        with WriteUserData() as user_data:
+            if "nextcloud" not in user_data:
+                user_data["nextcloud"] = {}
+            user_data["nextcloud"]["enable"] = False
+
     @staticmethod
     def stop():
         """Stop Nextcloud service."""
@@ -100,5 +114,58 @@ class Nextcloud(Service):
         return ""
 
     @staticmethod
-    def get_folders() -> List[str]:
-        return ["/var/lib/nextcloud"]
+    def get_storage_usage() -> int:
+        """
+        Calculate the real storage usage of /var/lib/nextcloud and all subdirectories.
+        Calculate using pathlib.
+        Do not follow symlinks.
+        """
+        return get_storage_usage("/var/lib/nextcloud")
+
+    @staticmethod
+    def get_location() -> str:
+        """Get the name of disk where Nextcloud is installed."""
+        with ReadUserData() as user_data:
+            if user_data.get("useBinds", False):
+                return user_data.get("nextcloud", {}).get("location", "sda1")
+            else:
+                return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+        return [
+            ServiceDnsRecord(
+                type="A",
+                name="cloud",
+                content=network_utils.get_ip4(),
+                ttl=3600,
+            ),
+            ServiceDnsRecord(
+                type="AAAA",
+                name="cloud",
+                content=network_utils.get_ip6(),
+                ttl=3600,
+            ),
+        ]
+
+    def move_to_volume(self, volume: BlockDevice) -> Job:
+        job = Jobs.add(
+            type_id="services.nextcloud.move",
+            name="Move Nextcloud",
+            description=f"Moving Nextcloud to volume {volume.name}",
+        )
+        move_service(
+            self,
+            volume,
+            job,
+            [
+                FolderMoveNames(
+                    name="nextcloud",
+                    bind_location="/var/lib/nextcloud",
+                    owner="nextcloud",
+                    group="nextcloud",
+                ),
+            ],
+            "nextcloud",
+        )
+        return job
@@ -2,11 +2,15 @@
 import base64
 import subprocess
 import typing
-from selfprivacy_api.jobs import Job
-from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.services.service import Service, ServiceStatus
+from selfprivacy_api.jobs import Job, Jobs
+from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
+from selfprivacy_api.services.generic_size_counter import get_storage_usage
+from selfprivacy_api.services.generic_status_getter import get_service_status
+from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils import ReadUserData, WriteUserData
 from selfprivacy_api.utils.block_devices import BlockDevice
 from selfprivacy_api.services.ocserv.icon import OCSERV_ICON
+import selfprivacy_api.utils.network as network_utils
 
 
 class Ocserv(Service):
@@ -33,10 +37,6 @@ class Ocserv(Service):
         """Return service url."""
         return None
 
-    @staticmethod
-    def get_subdomain() -> typing.Optional[str]:
-        return "vpn"
-
     @staticmethod
     def is_movable() -> bool:
         return False
@@ -46,28 +46,39 @@ class Ocserv(Service):
         return False
 
     @staticmethod
-    def can_be_backed_up() -> bool:
-        return False
-
-    @staticmethod
-    def get_backup_description() -> str:
-        return "Nothing to backup."
+    def is_enabled() -> bool:
+        with ReadUserData() as user_data:
+            return user_data.get("ocserv", {}).get("enable", False)
 
     @staticmethod
     def get_status() -> ServiceStatus:
         return get_service_status("ocserv.service")
 
+    @staticmethod
+    def enable():
+        with WriteUserData() as user_data:
+            if "ocserv" not in user_data:
+                user_data["ocserv"] = {}
+            user_data["ocserv"]["enable"] = True
+
+    @staticmethod
+    def disable():
+        with WriteUserData() as user_data:
+            if "ocserv" not in user_data:
+                user_data["ocserv"] = {}
+            user_data["ocserv"]["enable"] = False
+
     @staticmethod
     def stop():
-        subprocess.run(["systemctl", "stop", "ocserv.service"], check=False)
+        subprocess.run(["systemctl", "stop", "ocserv.service"])
 
     @staticmethod
     def start():
-        subprocess.run(["systemctl", "start", "ocserv.service"], check=False)
+        subprocess.run(["systemctl", "start", "ocserv.service"])
 
     @staticmethod
     def restart():
-        subprocess.run(["systemctl", "restart", "ocserv.service"], check=False)
+        subprocess.run(["systemctl", "restart", "ocserv.service"])
 
     @staticmethod
     def get_configuration():
@@ -82,8 +93,29 @@ class Ocserv(Service):
         return ""
 
     @staticmethod
-    def get_folders() -> typing.List[str]:
-        return []
+    def get_location() -> str:
+        return "sda1"
+
+    @staticmethod
+    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+        return [
+            ServiceDnsRecord(
+                type="A",
+                name="vpn",
+                content=network_utils.get_ip4(),
+                ttl=3600,
+            ),
+            ServiceDnsRecord(
+                type="AAAA",
+                name="vpn",
+                content=network_utils.get_ip6(),
+                ttl=3600,
+            ),
+        ]
+
+    @staticmethod
+    def get_storage_usage() -> int:
+        return 0
+
     def move_to_volume(self, volume: BlockDevice) -> Job:
         raise NotImplementedError("ocserv service is not movable")
@ -1,126 +0,0 @@
|
||||||
from __future__ import annotations
|
|
||||||
import subprocess
|
|
||||||
import pathlib
|
|
||||||
from pydantic import BaseModel
|
|
||||||
from os.path import exists
|
|
||||||
|
|
||||||
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices
|
|
||||||
|
|
||||||
# tests override it to a tmpdir
|
|
||||||
VOLUMES_PATH = "/volumes"
|
|
||||||
|
|
||||||
|
|
||||||
class BindError(Exception):
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class OwnedPath(BaseModel):
|
|
||||||
"""
|
|
||||||
A convenient interface for explicitly defining ownership of service folders.
|
|
||||||
One overrides Service.get_owned_paths() for this.
|
|
||||||
|
|
||||||
Why this exists?:
|
|
||||||
One could use Bind to define ownership but then one would need to handle drive which
|
|
||||||
is unnecessary and produces code duplication.
|
|
||||||
|
|
||||||
It is also somewhat semantically wrong to include Owned Path into Bind
|
|
||||||
instead of user and group. Because owner and group in Bind are applied to
|
|
||||||
the original folder on the drive, not to the binding path. But maybe it is
|
|
||||||
ok since they are technically both owned. Idk yet.
|
|
||||||
"""
|
|
||||||
|
|
||||||
path: str
|
|
||||||
owner: str
|
|
||||||
group: str
|
|
||||||
|
|
||||||
|
|
||||||
class Bind:
|
|
||||||
"""
|
|
||||||
A directory that resides on some volume but we mount it into fs where we need it.
|
|
||||||
Used for storing service data.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, binding_path: str, owner: str, group: str, drive: BlockDevice):
|
|
||||||
self.binding_path = binding_path
|
|
||||||
self.owner = owner
|
|
||||||
self.group = group
|
|
||||||
self.drive = drive
|
|
||||||
|
|
||||||
# TODO: delete owned path interface from Service
|
|
||||||
@staticmethod
|
|
||||||
def from_owned_path(path: OwnedPath, drive_name: str) -> Bind:
|
|
||||||
drive = BlockDevices().get_block_device(drive_name)
|
|
||||||
if drive is None:
|
|
||||||
raise BindError(f"No such drive: {drive_name}")
|
|
||||||
|
|
||||||
return Bind(
|
|
||||||
binding_path=path.path, owner=path.owner, group=path.group, drive=drive
|
|
||||||
)
|
|
||||||
|
|
||||||
def bind_foldername(self) -> str:
|
|
||||||
        return self.binding_path.split("/")[-1]

    def location_at_volume(self) -> str:
        return f"{VOLUMES_PATH}/{self.drive.name}/{self.bind_foldername()}"

    def validate(self) -> None:
        path = pathlib.Path(self.location_at_volume())

        if not path.exists():
            raise BindError(f"directory {path} is not found.")
        if not path.is_dir():
            raise BindError(f"{path} is not a directory.")
        if path.owner() != self.owner:
            raise BindError(f"{path} is not owned by {self.owner}.")

    def bind(self) -> None:
        if not exists(self.binding_path):
            raise BindError(f"cannot bind to a non-existing path: {self.binding_path}")

        source = self.location_at_volume()
        target = self.binding_path

        try:
            subprocess.run(
                ["mount", "--bind", source, target],
                stderr=subprocess.PIPE,
                check=True,
            )
        except subprocess.CalledProcessError as error:
            print(error.stderr)
            raise BindError(f"Unable to bind {source} to {target}: {error.stderr}")

    def unbind(self) -> None:
        if not exists(self.binding_path):
            raise BindError(f"cannot unbind a non-existing path: {self.binding_path}")

        try:
            subprocess.run(
                # umount -l ?
                ["umount", self.binding_path],
                check=True,
            )
        except subprocess.CalledProcessError:
            raise BindError(f"Unable to unmount folder {self.binding_path}.")

    def ensure_ownership(self) -> None:
        true_location = self.location_at_volume()
        try:
            subprocess.run(
                [
                    "chown",
                    "-R",
                    f"{self.owner}:{self.group}",
                    # Could we just chown the binded location instead?
                    true_location,
                ],
                check=True,
                stderr=subprocess.PIPE,
            )
        except subprocess.CalledProcessError as error:
            print(error.stderr)
            error_message = (
                f"Unable to set ownership of {true_location}: {error.stderr}"
            )
            raise BindError(error_message)
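The `bind`/`unbind`/`ensure_ownership` methods above all follow one pattern: run a privileged command with `subprocess.run(..., check=True)` and translate `CalledProcessError` into a domain-specific `BindError`. A minimal, root-free sketch of that error-translation pattern (the `BindError` class here is a stand-in and `run_or_bind_error` is a hypothetical helper, not the project's real mount call):

```python
import subprocess


class BindError(Exception):
    """Domain-specific error raised when a mount-related command fails."""


def run_or_bind_error(command: list, action: str) -> None:
    # check=True makes subprocess.run raise CalledProcessError on a
    # non-zero exit code; we re-raise it as BindError with context.
    try:
        subprocess.run(command, check=True, stderr=subprocess.PIPE)
    except subprocess.CalledProcessError as error:
        raise BindError(
            f"unable to {action}: exit code {error.returncode}"
        ) from error


# A command that succeeds passes through silently.
run_or_bind_error(["true"], "no-op")

# A command that fails is translated into BindError.
try:
    run_or_bind_error(["false"], "bind folder")
except BindError as error:
    print(error)  # unable to bind folder: exit code 1
```

Chaining with `from error` preserves the original `CalledProcessError` in the traceback, which the methods above achieve less formally by printing `error.stderr` first.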
@@ -1,14 +1,15 @@
"""Class representing Pleroma service."""
import base64
import subprocess
from typing import Optional, List

from selfprivacy_api.utils import get_domain

from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus

from selfprivacy_api.services.pleroma.icon import PLEROMA_ICON
@@ -32,15 +33,11 @@ class Pleroma(Service):
        return base64.b64encode(PLEROMA_ICON.encode("utf-8")).decode("utf-8")

    @staticmethod
    def get_url() -> Optional[str]:
        """Return service url."""
        domain = get_domain()
        return f"https://social.{domain}"

    @staticmethod
    def get_subdomain() -> Optional[str]:
        return "social"

    @staticmethod
    def is_movable() -> bool:
        return True
@@ -50,13 +47,28 @@ class Pleroma(Service):
        return False

    @staticmethod
    def get_backup_description() -> str:
        return "Your Pleroma accounts, posts and media."

    @staticmethod
    def get_status() -> ServiceStatus:
        return get_service_status("pleroma.service")

    @staticmethod
    def stop():
        subprocess.run(["systemctl", "stop", "pleroma.service"])
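`get_status` above delegates to `get_service_status("pleroma.service")`, which queries systemd. The diff also defines the `ServiceStatus` enum that such a query must map onto. One way that mapping could look, assuming systemd's lowercase `ActiveState` strings as input (`status_from_systemd_state` is a hypothetical helper, not the project's actual implementation):

```python
from enum import Enum


class ServiceStatus(Enum):
    """Enum for service status, as defined in this diff."""

    ACTIVE = "ACTIVE"
    RELOADING = "RELOADING"
    INACTIVE = "INACTIVE"
    FAILED = "FAILED"
    ACTIVATING = "ACTIVATING"
    DEACTIVATING = "DEACTIVATING"
    OFF = "OFF"


def status_from_systemd_state(active_state: str) -> ServiceStatus:
    # systemd reports ActiveState as lowercase strings such as "active";
    # map them onto the enum, treating unknown states as OFF.
    try:
        return ServiceStatus(active_state.upper())
    except ValueError:
        return ServiceStatus.OFF


assert status_from_systemd_state("active") == ServiceStatus.ACTIVE
assert status_from_systemd_state("unknown") == ServiceStatus.OFF
```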
@@ -85,20 +97,61 @@ class Pleroma(Service):
        return ""

    @staticmethod
    def get_owned_folders() -> List[OwnedPath]:
        """
        Get a list of occupied directories with ownership info.
        Pleroma has folders that are owned by different users.
        """
        return [
            OwnedPath(
                path="/var/lib/pleroma",
                owner="pleroma",
                group="pleroma",
            ),
            OwnedPath(
                path="/var/lib/postgresql",
                owner="postgres",
                group="postgres",
            ),
        ]
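Elsewhere in this diff, Pleroma is published at the `social` subdomain via A and AAAA DNS records with a 3600-second TTL, with the AAAA record only emitted when an IPv6 address is available. A stand-alone sketch of that record-building logic (`DnsRecord` is a simplified stand-in for the project's `ServiceDnsRecord` model; the IPs are documentation addresses):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DnsRecord:
    # Mirrors the core fields of ServiceDnsRecord in this diff.
    type: str
    name: str
    content: str
    ttl: int


def records_for(
    subdomain: Optional[str], ip4: str, ip6: Optional[str]
) -> List[DnsRecord]:
    """Build an A record, plus an AAAA record when IPv6 is available."""
    if subdomain is None:
        # A service without a subdomain needs no DNS records.
        return []
    records = [DnsRecord(type="A", name=subdomain, content=ip4, ttl=3600)]
    if ip6 is not None:
        records.append(
            DnsRecord(type="AAAA", name=subdomain, content=ip6, ttl=3600)
        )
    return records


print(len(records_for("social", "203.0.113.5", None)))           # 1
print(len(records_for("social", "203.0.113.5", "2001:db8::1")))  # 2
```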
@@ -1,30 +1,32 @@
"""Abstract class for a service running on a server"""
from abc import ABC, abstractmethod
from typing import List, Optional

from selfprivacy_api import utils
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.waitloop import wait_until_true
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices

from selfprivacy_api.jobs import Job, Jobs, JobStatus, report_progress
from selfprivacy_api.jobs.upgrade_system import rebuild_system

from selfprivacy_api.models.services import ServiceStatus, ServiceDnsRecord
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.owned_path import OwnedPath, Bind
from selfprivacy_api.services.moving import (
    check_binds,
    check_volume,
    unbind_folders,
    bind_folders,
    ensure_folder_ownership,
    MoveError,
    move_data_to_volume,
)


DEFAULT_START_STOP_TIMEOUT = 5 * 60


class Service(ABC):
@@ -36,147 +38,71 @@ class Service(ABC):
    @staticmethod
    @abstractmethod
    def get_id() -> str:
        """
        The unique id of the service.
        """
        pass

    @staticmethod
    @abstractmethod
    def get_display_name() -> str:
        """
        The name of the service that is shown to the user.
        """
        pass

    @staticmethod
    @abstractmethod
    def get_description() -> str:
        """
        The description of the service that is shown to the user.
        """
        pass

    @staticmethod
    @abstractmethod
    def get_svg_icon() -> str:
        """
        The monochrome svg icon of the service.
        """
        pass

    @staticmethod
    @abstractmethod
    def get_url() -> Optional[str]:
        """
        The url of the service if it is accessible from the internet browser.
        """
        pass

    @staticmethod
    @abstractmethod
    def get_subdomain() -> Optional[str]:
        """
        The assigned primary subdomain for this service.
        """
        pass

    @classmethod
    def get_user(cls) -> Optional[str]:
        """
        The user that owns the service's files.
        Defaults to the service's id.
        """
        return cls.get_id()

    @classmethod
    def get_group(cls) -> Optional[str]:
        """
        The group that owns the service's files.
        Defaults to the service's user.
        """
        return cls.get_user()

    @staticmethod
    @abstractmethod
    def is_movable() -> bool:
        """`True` if the service can be moved to the non-system volume."""
        pass

    @staticmethod
    @abstractmethod
    def is_required() -> bool:
        """`True` if the service is required for the server to function."""
        pass

    @staticmethod
    def can_be_backed_up() -> bool:
        """`True` if the service can be backed up."""
        return True

    @staticmethod
    @abstractmethod
    def get_backup_description() -> str:
        """
        The text shown to the user that explains what data will be
        backed up.
        """
        pass

    @classmethod
    def is_enabled(cls) -> bool:
        """
        `True` if the service is enabled.
        `False` if it is not enabled or not defined in file.
        If there is nothing in the file, this is equivalent to False
        because NixOS won't enable it then.
        """
        name = cls.get_id()
        with ReadUserData() as user_data:
            return user_data.get("modules", {}).get(name, {}).get("enable", False)

    @staticmethod
    @abstractmethod
    def get_status() -> ServiceStatus:
        """The status of the service, reported by systemd."""
        pass

    @classmethod
    def _set_enable(cls, enable: bool):
        name = cls.get_id()
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if name not in user_data["modules"]:
                user_data["modules"][name] = {}
            user_data["modules"][name]["enable"] = enable

    @classmethod
    def enable(cls):
        """Enable the service. Usually this means enabling systemd unit."""
        cls._set_enable(True)

    @classmethod
    def disable(cls):
        """Disable the service. Usually this means disabling systemd unit."""
        cls._set_enable(False)

    @staticmethod
    @abstractmethod
    def stop():
        """Stop the service. Usually this means stopping systemd unit."""
        pass

    @staticmethod
    @abstractmethod
    def start():
        """Start the service. Usually this means starting systemd unit."""
        pass

    @staticmethod
    @abstractmethod
    def restart():
        """Restart the service. Usually this means restarting systemd unit."""
        pass
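`is_enabled` and `_set_enable` above are mirror operations on the `modules` section of userdata: reads default missing keys to `False`, writes create the missing nesting levels first. The same logic, extracted as pure functions over a plain dict for illustration (the `ReadUserData`/`WriteUserData` context managers are replaced by direct dict access here):

```python
def set_module_enabled(user_data: dict, name: str, enable: bool) -> None:
    # Same nested-dict dance as Service._set_enable: create missing
    # levels, then set the "enable" flag on modules.<name>.
    user_data.setdefault("modules", {}).setdefault(name, {})["enable"] = enable


def is_module_enabled(user_data: dict, name: str) -> bool:
    # Missing keys default to False, matching the docstring above: a
    # module absent from the file is not enabled by NixOS either.
    return user_data.get("modules", {}).get(name, {}).get("enable", False)


data: dict = {}
assert is_module_enabled(data, "pleroma") is False
set_module_enabled(data, "pleroma", True)
assert is_module_enabled(data, "pleroma") is True
```

`dict.setdefault` collapses the two explicit `if ... not in` checks into one expression while behaving identically for already-present keys.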
@@ -194,276 +120,21 @@ class Service(ABC):
    @staticmethod
    def get_logs():
        pass

    @classmethod
    def get_storage_usage(cls) -> int:
        """
        Calculate the real storage usage of folders occupied by service.
        Calculate using pathlib.
        Do not follow symlinks.
        """
        storage_used = 0
        for folder in cls.get_folders():
            storage_used += get_storage_usage(folder)
        return storage_used

    @classmethod
    def get_dns_records(cls, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
        subdomain = cls.get_subdomain()
        display_name = cls.get_display_name()
        if subdomain is None:
            return []
        dns_records = [
            ServiceDnsRecord(
                type="A",
                name=subdomain,
                content=ip4,
                ttl=3600,
                display_name=display_name,
            )
        ]
        if ip6 is not None:
            dns_records.append(
                ServiceDnsRecord(
                    type="AAAA",
                    name=subdomain,
                    content=ip6,
                    ttl=3600,
                    display_name=f"{display_name} (IPv6)",
                )
            )
        return dns_records

    @classmethod
    def get_drive(cls) -> str:
        """
        Get the name of the drive/volume where the service is located.
        Example values are `sda1`, `vda`, `sdb`.
        """
        root_device: str = BlockDevices().get_root_block_device().name
        if not cls.is_movable():
            return root_device
        with utils.ReadUserData() as userdata:
            if userdata.get("useBinds", False):
                return (
                    userdata.get("modules", {})
                    .get(cls.get_id(), {})
                    .get(
                        "location",
                        root_device,
                    )
                )
            else:
                return root_device

    @classmethod
    def get_folders(cls) -> List[str]:
        """
        Get a plain list of occupied directories.
        Default extracts info from overridden get_owned_folders().
        """
        if cls.get_owned_folders == Service.get_owned_folders:
            raise NotImplementedError(
                "you need to implement at least one of get_folders() or get_owned_folders()"
            )
        return [owned_folder.path for owned_folder in cls.get_owned_folders()]

    @classmethod
    def get_owned_folders(cls) -> List[OwnedPath]:
        """
        Get a list of occupied directories with ownership info.
        Default extracts info from overridden get_folders().
        """
        if cls.get_folders == Service.get_folders:
            raise NotImplementedError(
                "you need to implement at least one of get_folders() or get_owned_folders()"
            )
        return [cls.owned_path(path) for path in cls.get_folders()]

    @staticmethod
    def get_foldername(path: str) -> str:
        return path.split("/")[-1]

    # TODO: with better json utils, it can be one line, and not a separate function
    @classmethod
    def set_location(cls, volume: BlockDevice):
        """
        Only changes userdata.
        """
        service_id = cls.get_id()
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if service_id not in user_data["modules"]:
                user_data["modules"][service_id] = {}
            user_data["modules"][service_id]["location"] = volume.name

    def binds(self) -> List[Bind]:
        owned_folders = self.get_owned_folders()

        return [
            Bind.from_owned_path(folder, self.get_drive()) for folder in owned_folders
        ]

    def assert_can_move(self, new_volume):
        """
        Checks if the service can be moved to new volume.
        Raises errors if it cannot.
        """
        service_name = self.get_display_name()
        if not self.is_movable():
            raise MoveError(f"{service_name} is not movable")

        with ReadUserData() as user_data:
            if not user_data.get("useBinds", False):
                raise MoveError("Server is not using binds.")

        current_volume_name = self.get_drive()
        if current_volume_name == new_volume.name:
            raise MoveError(f"{service_name} is already on volume {new_volume}")

        check_volume(new_volume, space_needed=self.get_storage_usage())

        binds = self.binds()
        if binds == []:
            raise MoveError("nothing to move")
        check_binds(current_volume_name, binds)

    def do_move_to_volume(
        self,
        new_volume: BlockDevice,
        job: Job,
    ):
        """
        Move a service to another volume.
        Note: It may be much simpler to write it per bind, but a bit less safe?
        """
        service_name = self.get_display_name()
        binds = self.binds()

        report_progress(10, job, "Unmounting folders from old volume...")
        unbind_folders(binds)

        report_progress(20, job, "Moving data to new volume...")
        binds = move_data_to_volume(binds, new_volume, job)

        report_progress(70, job, f"Making sure {service_name} owns its files...")
        try:
            ensure_folder_ownership(binds)
        except Exception as error:
            # We have logged it via print and we additionally log it here in the error field
            # We are continuing anyway but Job has no warning field
            Jobs.update(
                job,
                JobStatus.RUNNING,
                error=f"Service {service_name} will not be able to write files: "
                + str(error),
            )

        report_progress(90, job, f"Mounting {service_name} data...")
        bind_folders(binds)

        report_progress(95, job, f"Finishing moving {service_name}...")
        self.set_location(new_volume)

    def move_to_volume(self, volume: BlockDevice, job: Job) -> Job:
        service_name = self.get_display_name()

        report_progress(0, job, "Performing pre-move checks...")
        self.assert_can_move(volume)

        report_progress(5, job, f"Stopping {service_name}...")
        assert self is not None
        with StoppedService(self):
            report_progress(9, job, "Stopped service, starting the move...")
            self.do_move_to_volume(volume, job)

            report_progress(98, job, "Move complete, rebuilding...")
            rebuild_system(job, upgrade=False)

        Jobs.update(
            job=job,
            status=JobStatus.FINISHED,
            result=f"{service_name} moved successfully.",
            status_text=f"Starting {service_name}...",
            progress=100,
        )

        return job

    @classmethod
    def owned_path(cls, path: str):
        """Default folder ownership"""
        service_name = cls.get_display_name()

        try:
            owner = cls.get_user()
            if owner is None:
                # TODO: assume root?
                # (if we do not want to do assumptions, maybe not declare user optional?)
                raise LookupError(f"no user for service: {service_name}")
            group = cls.get_group()
            if group is None:
                raise LookupError(f"no group for service: {service_name}")
        except Exception as error:
            raise LookupError(
                f"when deciding a bind for folder {path} of service {service_name}, error: {str(error)}"
            )

        return OwnedPath(
            path=path,
            owner=owner,
            group=group,
        )

    def pre_backup(self):
        pass

    def post_restore(self):
        pass


class StoppedService:
    """
    A context manager that stops the service if needed and reactivates it
    after you are done, if it was active.

    Example:
    ```
    assert service.get_status() == ServiceStatus.ACTIVE
    with StoppedService(service) [as stopped_service]:
        assert service.get_status() == ServiceStatus.INACTIVE
    ```
    """

    def __init__(self, service: Service):
        self.service = service
        self.original_status = service.get_status()

    def __enter__(self) -> Service:
        self.original_status = self.service.get_status()
        if self.original_status not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]:
            try:
                self.service.stop()
                wait_until_true(
                    lambda: self.service.get_status() == ServiceStatus.INACTIVE,
                    timeout_sec=DEFAULT_START_STOP_TIMEOUT,
                )
            except TimeoutError as error:
                raise TimeoutError(
                    f"timed out waiting for {self.service.get_display_name()} to stop"
                ) from error
        return self.service

    def __exit__(self, type, value, traceback):
        if self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]:
            try:
                self.service.start()
                wait_until_true(
                    lambda: self.service.get_status() == ServiceStatus.ACTIVE,
                    timeout_sec=DEFAULT_START_STOP_TIMEOUT,
                )
            except TimeoutError as error:
                raise TimeoutError(
                    f"timed out waiting for {self.service.get_display_name()} to start"
                ) from error
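The `StoppedService` context manager guarantees that a service stopped for maintenance is restarted afterwards, but only if it was running on entry. The same shape, stripped of the systemd polling and timeouts, against a fake service (both classes here are illustrative stand-ins, not the project's code):

```python
class FakeService:
    """Minimal stand-in for a Service with start/stop and a boolean status."""

    def __init__(self):
        self.active = True

    def stop(self):
        self.active = False

    def start(self):
        self.active = True


class Stopped:
    # Same contract as StoppedService: stop on enter if running,
    # restore the original state on exit.
    def __init__(self, service: FakeService):
        self.service = service
        self.was_active = service.active

    def __enter__(self) -> FakeService:
        if self.service.active:
            self.service.stop()
        return self.service

    def __exit__(self, exc_type, exc, tb):
        if self.was_active:
            self.service.start()


svc = FakeService()
with Stopped(svc):
    assert svc.active is False  # stopped for the duration of the block
assert svc.active is True       # reactivated afterwards
```

Because `__exit__` runs even when the block raises, the service is restarted on failure too, which is exactly why `move_to_volume` wraps the whole move in this manager.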
@@ -1,22 +0,0 @@
from selfprivacy_api.services import Service
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import Job, Jobs, JobStatus


@huey.task()
def move_service(service: Service, new_volume: BlockDevice, job: Job) -> bool:
    """
    Move service's folders to new physical volume.
    Does not raise exceptions (we cannot handle exceptions from tasks).
    Reports all errors via job.
    """
    try:
        service.move_to_volume(new_volume, job)
    except Exception as e:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=type(e).__name__ + " " + str(e),
        )
    return True
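`move_service` illustrates a general rule for background tasks: exceptions cannot propagate to the original caller, so failures must be recorded on the job object instead. A self-contained sketch of that pattern with the job as a plain dict (the helper and status strings are illustrative; the real code uses `Jobs.update` and the `JobStatus` enum):

```python
def run_reporting_errors(job: dict, action) -> bool:
    """Run action; record failures on the job instead of raising,
    mirroring the try/except in move_service above."""
    try:
        action()
        job["status"] = "FINISHED"  # assumption: mark success here too
    except Exception as e:
        job["status"] = "ERROR"
        job["error"] = type(e).__name__ + " " + str(e)
    return True


def failing_move():
    raise ValueError("disk full")


job: dict = {}
run_reporting_errors(job, failing_move)
print(job["status"], job["error"])  # ERROR ValueError disk full
```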
Some files were not shown because too many files have changed in this diff.