Compare commits


82 Commits
master...l10n

Author SHA1 Message Date
Inex Code 47d1a0f4a4 refactor(i10l): Move services string localization to API layer 2023-06-01 22:42:27 +03:00
Houkime d9dab29fe8 test(backups): test 2-folder restoration 2023-04-29 15:28:21 +03:00
Houkime 9815281735 test(backups): actually back up 2 folders 2023-04-29 15:28:21 +03:00
Houkime d38872072d refactor(backups): set a list of folders for our dummy service 2023-04-29 15:28:21 +03:00
Houkime 47f2f857f6 refactor(backups): actually accept a list of folders 2023-04-29 15:28:21 +03:00
Houkime 406d255b2c refactor(backups): make api accept a list of folders 2023-04-29 15:28:21 +03:00
Houkime 69f63f04eb refactor(backups): make a dedicated get_folders() function 2023-04-29 15:28:21 +03:00
Houkime 170cf1923e refactor(services): rename get_location() to get_drive() 2023-04-29 15:28:21 +03:00
Houkime 5101f41437 test(backups): register dummy service 2023-04-29 15:28:21 +03:00
Inex Code a7feda02ec fix: Include the translation files in the project 2023-04-12 17:55:41 +03:00
Inex Code c7a65febe7 feat: Locale extension to parse the Accept-Language header 2023-04-12 16:59:23 +03:00
Inex Code e0ea004e80 feat: Test if getting headers works 2023-04-12 16:13:30 +03:00
Inex Code 9376fe151f feat(l10n): Add option for localizing the output of strings in Service classes 2023-04-12 14:55:34 +03:00
Houkime 3d4d05ff11 feature(backups): automatic backup 2023-04-10 16:35:35 +00:00
Houkime d7316f8e79 test(backups): test autobackup timing 2023-04-10 15:51:54 +00:00
Houkime a11627da7d refactor(backups): split out storage 2023-04-10 13:23:17 +00:00
Houkime 9d772ea2e2 test(backups): test that we do use cache 2023-04-07 18:12:05 +00:00
Houkime d68e9a4141 feature(backups): enable snapshot cache usage 2023-04-07 17:24:53 +00:00
Houkime 942c35b7e6 feature(backups): add snapshot cache sync functions 2023-04-07 15:41:02 +00:00
Houkime 644a0b96b8 test(backups): test last backup date retrieval 2023-04-07 15:18:54 +00:00
Houkime f6402f2394 feature(backups): add a datetime validator function for huey autobackups 2023-04-03 23:29:02 +00:00
Houkime aeec3ad0a2 test(backups): test setting autobackup period 2023-04-03 23:29:02 +00:00
Houkime 9edfe10128 test(backups): test setting services as enabled for autobackups 2023-04-03 23:29:02 +00:00
Houkime 7f99fd044e feature(backups): methods for autobackup period setting and getting 2023-04-03 23:29:02 +00:00
Houkime 3e93572648 fix(backups): remove self from static method 2023-04-03 23:29:02 +00:00
Houkime 58086909a4 feature(backups): check, set and unset service autobackup status 2023-04-03 23:29:02 +00:00
Houkime d9102eba37 feature(backups): cache snapshots and last backup timestamps 2023-04-03 23:29:02 +00:00
Houkime 3a65f0845a test(backups): test that we do return snapshot on backup 2023-04-03 23:29:02 +00:00
Houkime a82a986997 feature(backups): return snapshot info from backup function 2023-04-03 23:29:02 +00:00
Houkime daa40d1142 feature(backups): huey task to back up 2023-04-03 23:29:02 +00:00
Houkime baf3afb25b refactor(backups): make backups stateless 2023-04-03 23:29:02 +00:00
Inex Code 09598033e7 chore: Bump python to 3.10 and nixpkgs to 22.11 2023-04-03 23:29:02 +00:00
Houkime b4a3658c78 feature(backups): repo init tracking 2023-04-03 23:29:02 +00:00
Houkime e02c1a878b feature(backups): provider storage and retrieval 2023-04-03 23:29:02 +00:00
Houkime 36907aa9c2 refactor(backups): add a provider model for redis storage 2023-04-03 23:29:02 +00:00
Houkime f785e6724a refactor(backups): redis model storage utils 2023-04-03 23:29:02 +00:00
Houkime d8a0e05602 feature(backups): load from json 2023-04-03 23:29:02 +00:00
Houkime 719c81d2f4 feat(backups): local secret generation and storage 2023-04-03 23:29:02 +00:00
Houkime 5d5ceee1cf feat(backups): sizing up snapshots 2023-04-03 23:29:02 +00:00
Houkime 48f8f95d83 test(backups): test restoring a file 2023-04-03 23:29:02 +00:00
Houkime f2aab38085 feat(backups): add restore_snapshot and restore_service_from_snapshot 2023-04-03 23:29:02 +00:00
Houkime 39a97cf6d8 feat(backups): a better error on failed snapshot retrieval 2023-04-03 23:29:02 +00:00
Houkime cc09e933ed feat(backups): return proper snapshot structs when listing 2023-04-03 23:29:02 +00:00
Houkime de12685d3d test(backups): reenable snapshot testing 2023-04-03 23:29:02 +00:00
Houkime 86467788d3 feat(backups): throw an error on a failed backup 2023-04-03 23:29:02 +00:00
Houkime 8f019c99e3 fix(backups): singleton metaclass was screwing with tests 2023-04-03 23:29:02 +00:00
Houkime 090198c300 test(backups): localfile repo by default in tests 2023-04-03 23:29:02 +00:00
Houkime 7cb6ca9641 feature(backups): throw an error if repo init fails 2023-04-03 23:29:02 +00:00
Houkime d7f96a9adf test(backups): basic file backend init test 2023-04-03 23:29:02 +00:00
Houkime 0ce6624d5a feature(backups): register localfile backend 2023-04-03 23:29:02 +00:00
Houkime aeb66b9c72 feature(backups): localfile repo 2023-04-03 23:29:02 +00:00
Houkime 6821b245d2 test(backups): test repo init 2023-04-03 23:29:02 +00:00
Houkime ad1b1c4972 refactor(backups): repo init service method 2023-04-03 23:29:02 +00:00
Houkime d1dbcbae5e refactor(backups): add repo init 2023-04-03 23:29:02 +00:00
Houkime cbf917ad8a refactor(backups): snapshotlist and local secret groundwork 2023-04-03 23:29:02 +00:00
Houkime 37b747f87f test(backup): no snapshots 2023-04-03 23:29:02 +00:00
Houkime 6989dd0f7c refactor(backup): snapshot model 2023-04-03 23:29:02 +00:00
Houkime 6376503793 feature(backup): loading snapshots 2023-04-03 23:29:02 +00:00
Houkime e7062d72c6 feature(backup): add a restore function to restic backuper 2023-04-03 23:29:02 +00:00
Houkime 0381a9c671 feat(backup): hooks 2023-04-03 23:29:02 +00:00
Houkime de6b96f446 test(backup): use a backup service function 2023-04-03 23:29:02 +00:00
Houkime bf3b698b34 refactor(backup): add a backup function to Backups singleton class 2023-04-03 23:29:02 +00:00
Houkime 33df16f743 refactor(backup): add a placeholder Backups singleton class 2023-04-03 23:29:02 +00:00
Houkime 88b715464d test(backup): try to back up! 2023-04-03 23:29:02 +00:00
Houkime 4547f02e1b fix(backup): add memory backup class,forgot to add to git 2023-04-03 23:29:02 +00:00
Houkime 6a11cd67ac feat(backup): add backuping to restic backuper 2023-04-03 23:29:02 +00:00
Houkime d5e0a3894b test(backup): make a testfile to backup 2023-04-03 23:29:02 +00:00
Houkime b6650b52c3 test(backup): init an in-memory backup class 2023-04-03 23:29:02 +00:00
Houkime 83ed93b271 feat(backup): add in-memory backup 2023-04-03 23:29:02 +00:00
Houkime 5fd7b6c4ed feat(backup): allow no auth 2023-04-03 23:29:02 +00:00
Houkime 696cb406a8 test(backup): dummy service 2023-04-03 23:29:02 +00:00
Houkime 327ad8171f test(backup): provider class selection 2023-04-03 23:29:02 +00:00
Houkime 19ad9a5113 feature(backups): copy cli logic to new restic backuper 2023-04-03 23:29:02 +00:00
Houkime 96b6dfabbe feature(backups): placeholders for the backupers and backup providers 2023-04-03 23:29:02 +00:00
Houkime 61ff2724f3 feature(backups): placeholders for the modules of the new backup system 2023-04-03 23:29:02 +00:00
Houkime fbed185475 feature(backups): add backup structures and queries 2023-04-03 23:29:02 +00:00
Houkime cdcb4ec4c0 refactor(backup): do not use config file 2023-04-03 23:29:02 +00:00
Houkime 4e8050727a refactor(backup): pass key and account to exec 2023-04-03 23:29:02 +00:00
Houkime 7956829981 refactor(backup): extract restic repo 2023-04-03 23:29:02 +00:00
Houkime c7f304744c refactor(backup): extract rclone args 2023-04-03 23:29:02 +00:00
Houkime 2d761f7311 refactor(backup): delete unused import 2023-04-03 23:29:02 +00:00
Houkime 5450b92454 test(backup): add restic to the dev shell 2023-04-03 23:29:02 +00:00
235 changed files with 12329 additions and 12039 deletions


@ -5,11 +5,15 @@ name: default
steps:
- name: Run Tests and Generate Coverage Report
commands:
- nix flake check -L
- kill $(ps aux | grep '[r]edis-server 127.0.0.1:6389' | awk '{print $2}')
- redis-server --bind 127.0.0.1 --port 6389 >/dev/null &
- coverage run -m pytest -q
- coverage xml
- sonar-scanner -Dsonar.projectKey=SelfPrivacy-REST-API -Dsonar.sources=. -Dsonar.host.url=http://analyzer.lan:9000 -Dsonar.login="$SONARQUBE_TOKEN"
environment:
SONARQUBE_TOKEN:
from_secret: SONARQUBE_TOKEN
USE_REDIS_PORT: 6389
- name: Run Bandit Checks
@ -22,7 +26,3 @@ steps:
node:
server: builder
trigger:
event:
- push


@ -1,4 +0,0 @@
[flake8]
max-line-length = 80
select = C,E,F,W,B,B950
extend-ignore = E203, E501

4 .gitignore vendored Normal file → Executable file

@ -147,7 +147,3 @@ cython_debug/
# End of https://www.toptal.com/developers/gitignore/api/flask
*.db
*.rdb
/result
/.nixos-test-history


@ -1,2 +0,0 @@
[mypy]
plugins = pydantic.mypy


@ -1,6 +1,3 @@
[MASTER]
init-hook="from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))"
extension-pkg-whitelist=pydantic
[FORMAT]
max-line-length=88

1 MANIFEST.in Normal file

@ -0,0 +1 @@
recursive-include selfprivacy_api/locales *.json


@ -1,92 +0,0 @@
# SelfPrivacy GraphQL API which allows app to control your server
![CI status](https://ci.selfprivacy.org/api/badges/SelfPrivacy/selfprivacy-rest-api/status.svg)
## Build
```console
$ nix build
```
If the build succeeds, you should get a `./result` symlink to a folder (in `/nix/store`) with the build contents.
## Develop
```console
$ nix develop
[SP devshell:/dir/selfprivacy-rest-api]$ python
Python 3.10.13 (main, Aug 24 2023, 12:59:26) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
(ins)>>>
```
If you don't have experimental flakes enabled, you can use the following command:
```console
$ nix --extra-experimental-features nix-command --extra-experimental-features flakes develop
```
## Testing
Run the test suite by running coverage with pytest inside an ephemeral NixOS VM with redis service enabled:
```console
$ nix flake check -L
```
Run the same test suite, but additionally create `./result/coverage.xml` in the current directory:
```console
$ nix build .#checks.x86_64-linux.default -L
```
Alternatively, just print the path to `/nix/store/...coverage.xml` without creating any files in the current directory:
```console
$ nix build .#checks.x86_64-linux.default -L --print-out-paths --no-link
```
Run the same test suite with arbitrary pytest options:
```console
$ pytest-vm.sh # specify pytest options here, e.g. `--last-failed`
```
When running via the script, the pytest cache is preserved between runs in the `.pytest_cache` folder.
The NixOS VM state temporarily resides in `${TMPDIR:=/tmp}/nixos-vm-tmp-dir/vm-state-machine` during the test.
The Git working directory is shared read-write with the VM via the `.nixos-vm-tmp-dir/shared-xchg` symlink. The VM accesses the workdir contents via the `/tmp/shared` mount point and the `/root/source` symlink.
Launch the VM and execute commands manually either in the Linux console (user `root`) or using the Python NixOS test driver API (refer to the [NixOS documentation](https://nixos.org/manual/nixos/stable/#ssec-machine-objects)):
```console
$ nix run .#checks.x86_64-linux.default.driverInteractive
```
You can add `--keep-vm-state` in order to keep VM state between runs:
```console
$ TMPDIR=".nixos-vm-tmp-dir" nix run .#checks.x86_64-linux.default.driverInteractive --keep-vm-state
```
Option `-L`/`--print-build-logs` is optional for all nix commands. It tells nix to print each log line one after another instead of overwriting a single one.
## Dependencies and Dependent Modules
This flake depends on a single Nix flake input, the nixpkgs repository, which provides all the software packages used to build and run the API service, the tests, etc.
To synchronize the nixpkgs input with the one used by the selfprivacy-nixos-config repository, use this command:
```console
$ nix flake lock --override-input nixpkgs nixpkgs --inputs-from git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=BRANCH
```
Replace BRANCH with the branch name of the selfprivacy-nixos-config repository you want to sync with. During development, the nixpkgs input might need to be updated in both the selfprivacy-rest-api and selfprivacy-nixos-config repositories simultaneously, so a feature branch might be used temporarily until selfprivacy-nixos-config has that feature branch merged.
Show current flake inputs (e.g. nixpkgs):
```console
$ nix flake metadata
```
Show selfprivacy-nixos-config Nix flake inputs (including nixpkgs):
```console
$ nix flake metadata git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=BRANCH
```
The Nix code for the API's NixOS service module is located in the NixOS configuration repository.
## Troubleshooting
Sometimes commands inside `nix develop` do not work properly if the calling shell lacks the `LANG` environment variable. Set it before entering `nix develop`.
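For example, a minimal fix (not part of the original README; the exact locale to use depends on what is available on your system) is to export a UTF-8 locale before entering the shell:
```console
$ export LANG=C.UTF-8   # assumption: any UTF-8 locale generated on your system works here
$ nix develop
```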

64 api.nix Normal file

@ -0,0 +1,64 @@
{ lib, python39Packages }:
with python39Packages;
buildPythonApplication {
pname = "selfprivacy-api";
version = "2.0.0";
propagatedBuildInputs = [
setuptools
portalocker
pytz
pytest
pytest-mock
pytest-datadir
huey
gevent
mnemonic
pydantic
typing-extensions
psutil
fastapi
uvicorn
(buildPythonPackage rec {
pname = "strawberry-graphql";
version = "0.123.0";
format = "pyproject";
patches = [
./strawberry-graphql.patch
];
propagatedBuildInputs = [
typing-extensions
python-multipart
python-dateutil
# flask
pydantic
pygments
poetry
# flask-cors
(buildPythonPackage rec {
pname = "graphql-core";
version = "3.2.0";
format = "setuptools";
src = fetchPypi {
inherit pname version;
sha256 = "sha256-huKgvgCL/eGe94OI3opyWh2UKpGQykMcJKYIN5c4A84=";
};
checkInputs = [
pytest-asyncio
pytest-benchmark
pytestCheckHook
];
pythonImportsCheck = [
"graphql"
];
})
];
src = fetchPypi {
inherit pname version;
sha256 = "KsmZ5Xv8tUg6yBxieAEtvoKoRG60VS+iVGV0X6oCExo=";
};
})
];
src = ./.;
}


@ -1,29 +1,2 @@
{ pythonPackages, rev ? "local" }:
pythonPackages.buildPythonPackage rec {
pname = "selfprivacy-graphql-api";
version = rev;
src = builtins.filterSource (p: t: p != ".git" && t != "symlink") ./.;
propagatedBuildInputs = with pythonPackages; [
fastapi
gevent
huey
mnemonic
portalocker
psutil
pydantic
pytz
redis
setuptools
strawberry-graphql
typing-extensions
uvicorn
];
pythonImportsCheck = [ "selfprivacy_api" ];
doCheck = false;
meta = {
description = ''
SelfPrivacy Server Management API
'';
};
}
{ pkgs ? import <nixpkgs> {} }:
pkgs.callPackage ./api.nix {}


@ -1,26 +0,0 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1709677081,
"narHash": "sha256-tix36Y7u0rkn6mTm0lA45b45oab2cFLqAzDbJxeXS+c=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "880992dcc006a5e00dd0591446fdf723e6a51a64",
"type": "github"
},
"original": {
"owner": "nixos",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

162 flake.nix

@ -1,162 +0,0 @@
{
description = "SelfPrivacy API flake";
inputs.nixpkgs.url = "github:nixos/nixpkgs";
outputs = { self, nixpkgs, ... }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
selfprivacy-graphql-api = pkgs.callPackage ./default.nix {
pythonPackages = pkgs.python310Packages;
rev = self.shortRev or self.dirtyShortRev or "dirty";
};
python = self.packages.${system}.default.pythonModule;
python-env =
python.withPackages (ps:
self.packages.${system}.default.propagatedBuildInputs ++ (with ps; [
coverage
pytest
pytest-datadir
pytest-mock
pytest-subprocess
black
mypy
pylsp-mypy
python-lsp-black
python-lsp-server
pyflakes
typer # for strawberry
types-redis # for mypy
] ++ strawberry-graphql.optional-dependencies.cli));
vmtest-src-dir = "/root/source";
shellMOTD = ''
Welcome to SP API development shell!
[formatters]
black
nixpkgs-fmt
[testing in NixOS VM]
nixos-test-driver - run an interactive NixOS VM with all dependencies included and 2 disk volumes
pytest-vm - run pytest in an ephemeral NixOS VM with Redis, accepting pytest arguments
'';
in
{
# see https://github.com/NixOS/nixpkgs/blob/66a9817cec77098cfdcbb9ad82dbb92651987a84/nixos/lib/test-driver/test_driver/machine.py#L359
packages.${system} = {
default = selfprivacy-graphql-api;
pytest-vm = pkgs.writeShellScriptBin "pytest-vm" ''
set -o errexit
set -o nounset
set -o xtrace
# see https://github.com/NixOS/nixpkgs/blob/66a9817cec77098cfdcbb9ad82dbb92651987a84/nixos/lib/test-driver/test_driver/machine.py#L359
export TMPDIR=''${TMPDIR:=/tmp}/nixos-vm-tmp-dir
readonly NIXOS_VM_SHARED_DIR_HOST="$TMPDIR/shared-xchg"
readonly NIXOS_VM_SHARED_DIR_GUEST="/tmp/shared"
mkdir -p "$TMPDIR"
ln -sfv "$PWD" -T "$NIXOS_VM_SHARED_DIR_HOST"
SCRIPT=$(cat <<EOF
start_all()
machine.succeed("ln -sf $NIXOS_VM_SHARED_DIR_GUEST -T ${vmtest-src-dir} >&2")
machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest -v $@ >&2")
machine.succeed("cd ${vmtest-src-dir} && coverage report >&2")
EOF
)
if [ -f "/etc/arch-release" ]; then
${self.checks.${system}.default.driverInteractive}/bin/nixos-test-driver --no-interactive <(printf "%s" "$SCRIPT")
else
${self.checks.${system}.default.driver}/bin/nixos-test-driver -- <(printf "%s" "$SCRIPT")
fi
'';
};
nixosModules.default =
import ./nixos/module.nix self.packages.${system}.default;
devShells.${system}.default = pkgs.mkShellNoCC {
name = "SP API dev shell";
packages = with pkgs; [
nixpkgs-fmt
rclone
redis
restic
self.packages.${system}.pytest-vm
# FIXME consider loading this explicitly only after ArchLinux issue is solved
self.checks.x86_64-linux.default.driverInteractive
# the target API application python environment
python-env
];
shellHook = ''
# envs set with export and as attributes are treated differently.
# for example. printenv <Name> will not fetch the value of an attribute.
export TEST_MODE="true"
# more tips for bash-completion to work on non-NixOS:
# https://discourse.nixos.org/t/whats-the-nix-way-of-bash-completion-for-packages/20209/16?u=alexoundos
# Load installed profiles
for file in "/etc/profile.d/"*.sh; do
# If that folder doesn't exist, bash loves to return the whole glob
[[ -f "$file" ]] && source "$file"
done
printf "%s" "${shellMOTD}"
'';
};
checks.${system} = {
fmt-check = pkgs.runCommandLocal "sp-api-fmt-check"
{ nativeBuildInputs = [ pkgs.black ]; }
"black --check ${self.outPath} > $out";
default =
pkgs.testers.runNixOSTest {
name = "default";
nodes.machine = { lib, pkgs, ... }: {
# 2 additional disks (1024 MiB and 200 MiB) with empty ext4 FS
virtualisation.emptyDiskImages = [ 1024 200 ];
virtualisation.fileSystems."/volumes/vdb" = {
autoFormat = true;
device = "/dev/vdb"; # this name is chosen by QEMU, not here
fsType = "ext4";
noCheck = true;
};
virtualisation.fileSystems."/volumes/vdc" = {
autoFormat = true;
device = "/dev/vdc"; # this name is chosen by QEMU, not here
fsType = "ext4";
noCheck = true;
};
boot.consoleLogLevel = lib.mkForce 3;
documentation.enable = false;
services.journald.extraConfig = lib.mkForce "";
services.redis.servers.sp-api = {
enable = true;
save = [ ];
settings.notify-keyspace-events = "KEA";
};
environment.systemPackages = with pkgs; [
python-env
# TODO: these can be passed via wrapper script around app
rclone
restic
];
environment.variables.TEST_MODE = "true";
systemd.tmpfiles.settings.src.${vmtest-src-dir}.L.argument =
self.outPath;
};
testScript = ''
start_all()
machine.succeed("cd ${vmtest-src-dir} && coverage run --data-file=/tmp/.coverage -m pytest -p no:cacheprovider -v >&2")
machine.succeed("coverage xml --rcfile=${vmtest-src-dir}/.coveragerc --data-file=/tmp/.coverage >&2")
machine.copy_from_vm("coverage.xml", ".")
machine.succeed("coverage report >&2")
'';
};
};
};
nixConfig.bash-prompt = ''\n\[\e[1;32m\][\[\e[0m\]\[\e[1;34m\]SP devshell\[\e[0m\]\[\e[1;32m\]:\w]\$\[\[\e[0m\] '';
}


@ -1,22 +0,0 @@
@startuml
left to right direction
title repositories and flake inputs relations diagram
cloud nixpkgs as nixpkgs_transit
control "<font:monospaced><size:15>nixos-rebuild" as nixos_rebuild
component "SelfPrivacy\nAPI app" as selfprivacy_app
component "SelfPrivacy\nNixOS configuration" as nixos_configuration
note top of nixos_configuration : SelfPrivacy\nAPI service module
nixos_configuration ).. nixpkgs_transit
nixpkgs_transit ..> selfprivacy_app
selfprivacy_app --> nixos_configuration
[nixpkgs] --> nixos_configuration
nixos_configuration -> nixos_rebuild
footer %date("yyyy-MM-dd'T'HH:mmZ")
@enduml


@ -1,166 +0,0 @@
selfprivacy-graphql-api: { config, lib, pkgs, ... }:
let
cfg = config.services.selfprivacy-api;
config-id = "default";
nixos-rebuild = "${config.system.build.nixos-rebuild}/bin/nixos-rebuild";
nix = "${config.nix.package.out}/bin/nix";
in
{
options.services.selfprivacy-api = {
enable = lib.mkOption {
default = true;
type = lib.types.bool;
description = ''
Enable SelfPrivacy API service
'';
};
};
config = lib.mkIf cfg.enable {
users.users."selfprivacy-api" = {
isNormalUser = false;
isSystemUser = true;
extraGroups = [ "opendkim" ];
group = "selfprivacy-api";
};
users.groups."selfprivacy-api".members = [ "selfprivacy-api" ];
systemd.services.selfprivacy-api = {
description = "API Server used to control system from the mobile application";
environment = config.nix.envVars // {
HOME = "/root";
PYTHONUNBUFFERED = "1";
} // config.networking.proxy.envVars;
path = [
"/var/"
"/var/dkim/"
pkgs.coreutils
pkgs.gnutar
pkgs.xz.bin
pkgs.gzip
pkgs.gitMinimal
config.nix.package.out
pkgs.restic
pkgs.mkpasswd
pkgs.util-linux
pkgs.e2fsprogs
pkgs.iproute2
];
after = [ "network-online.target" ];
wantedBy = [ "network-online.target" ];
serviceConfig = {
User = "root";
ExecStart = "${selfprivacy-graphql-api}/bin/app.py";
Restart = "always";
RestartSec = "5";
};
};
systemd.services.selfprivacy-api-worker = {
description = "Task worker for SelfPrivacy API";
environment = config.nix.envVars // {
HOME = "/root";
PYTHONUNBUFFERED = "1";
PYTHONPATH =
pkgs.python310Packages.makePythonPath [ selfprivacy-graphql-api ];
} // config.networking.proxy.envVars;
path = [
"/var/"
"/var/dkim/"
pkgs.coreutils
pkgs.gnutar
pkgs.xz.bin
pkgs.gzip
pkgs.gitMinimal
config.nix.package.out
pkgs.restic
pkgs.mkpasswd
pkgs.util-linux
pkgs.e2fsprogs
pkgs.iproute2
];
after = [ "network-online.target" ];
wantedBy = [ "network-online.target" ];
serviceConfig = {
User = "root";
ExecStart = "${pkgs.python310Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
Restart = "always";
RestartSec = "5";
};
};
# One shot systemd service to rebuild NixOS using nixos-rebuild
systemd.services.sp-nixos-rebuild = {
description = "nixos-rebuild switch";
environment = config.nix.envVars // {
HOME = "/root";
} // config.networking.proxy.envVars;
# TODO figure out how to get dependencies list reliably
path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
# TODO set proper timeout for reboot instead of service restart
serviceConfig = {
User = "root";
WorkingDirectory = "/etc/nixos";
# sync top-level flake with sp-modules sub-flake
# (https://github.com/NixOS/nix/issues/9339)
ExecStartPre = ''
${nix} flake lock --override-input sp-modules path:./sp-modules
'';
ExecStart = ''
${nixos-rebuild} switch --flake .#${config-id}
'';
KillMode = "none";
SendSIGKILL = "no";
};
restartIfChanged = false;
unitConfig.X-StopOnRemoval = false;
};
# One shot systemd service to upgrade NixOS using nixos-rebuild
systemd.services.sp-nixos-upgrade = {
# protection against simultaneous runs
after = [ "sp-nixos-rebuild.service" ];
description = "Upgrade NixOS and SP modules to latest versions";
environment = config.nix.envVars // {
HOME = "/root";
} // config.networking.proxy.envVars;
# TODO figure out how to get dependencies list reliably
path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
serviceConfig = {
User = "root";
WorkingDirectory = "/etc/nixos";
# TODO get URL from systemd template parameter?
ExecStartPre = ''
${nix} flake update \
--override-input selfprivacy-nixos-config git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes
'';
ExecStart = ''
${nixos-rebuild} switch --flake .#${config-id}
'';
KillMode = "none";
SendSIGKILL = "no";
};
restartIfChanged = false;
unitConfig.X-StopOnRemoval = false;
};
# One shot systemd service to rollback NixOS using nixos-rebuild
systemd.services.sp-nixos-rollback = {
# protection against simultaneous runs
after = [ "sp-nixos-rebuild.service" "sp-nixos-upgrade.service" ];
description = "Rollback NixOS using nixos-rebuild";
environment = config.nix.envVars // {
HOME = "/root";
} // config.networking.proxy.envVars;
# TODO figure out how to get dependencies list reliably
path = [ pkgs.coreutils pkgs.gnutar pkgs.xz.bin pkgs.gzip pkgs.gitMinimal config.nix.package.out ];
serviceConfig = {
User = "root";
WorkingDirectory = "/etc/nixos";
ExecStart = ''
${nixos-rebuild} switch --rollback --flake .#${config-id}
'';
KillMode = "none";
SendSIGKILL = "no";
};
restartIfChanged = false;
unitConfig.X-StopOnRemoval = false;
};
};
}


@ -1,15 +1,11 @@
"""
App tokens actions.
The only actions on tokens that are accessible from APIs
"""
from datetime import datetime, timezone
"""App tokens actions"""
from datetime import datetime
from typing import Optional
from pydantic import BaseModel
from mnemonic import Mnemonic
from selfprivacy_api.utils.timeutils import ensure_tz_aware, ensure_tz_aware_strict
from selfprivacy_api.repositories.tokens.redis_tokens_repository import (
RedisTokensRepository,
from selfprivacy_api.repositories.tokens.json_tokens_repository import (
JsonTokensRepository,
)
from selfprivacy_api.repositories.tokens.exceptions import (
TokenNotFound,
@ -18,7 +14,7 @@ from selfprivacy_api.repositories.tokens.exceptions import (
NewDeviceKeyNotFound,
)
TOKEN_REPO = RedisTokensRepository()
TOKEN_REPO = JsonTokensRepository()
class TokenInfoWithIsCaller(BaseModel):
@ -29,14 +25,6 @@ class TokenInfoWithIsCaller(BaseModel):
is_caller: bool
def _naive(date_time: datetime) -> datetime:
if date_time is None:
return None
if date_time.tzinfo is not None:
date_time.astimezone(timezone.utc)
return date_time.replace(tzinfo=None)
def get_api_tokens_with_caller_flag(caller_token: str) -> list[TokenInfoWithIsCaller]:
"""Get the tokens info"""
caller_name = TOKEN_REPO.get_token_by_token_string(caller_token).device_name
@ -95,22 +83,16 @@ class RecoveryTokenStatus(BaseModel):
def get_api_recovery_token_status() -> RecoveryTokenStatus:
"""Get the recovery token status, timezone-aware"""
"""Get the recovery token status"""
token = TOKEN_REPO.get_recovery_key()
if token is None:
return RecoveryTokenStatus(exists=False, valid=False)
is_valid = TOKEN_REPO.is_recovery_key_valid()
# New tokens are tz-aware, but older ones might not be
expiry_date = token.expires_at
if expiry_date is not None:
expiry_date = ensure_tz_aware_strict(expiry_date)
return RecoveryTokenStatus(
exists=True,
valid=is_valid,
date=ensure_tz_aware_strict(token.created_at),
expiration=expiry_date,
date=token.created_at,
expiration=token.expires_at,
uses_left=token.uses_left,
)
@ -128,9 +110,8 @@ def get_new_api_recovery_key(
) -> str:
"""Get new recovery key"""
if expiration_date is not None:
expiration_date = ensure_tz_aware(expiration_date)
current_time = datetime.now(timezone.utc)
if expiration_date < current_time:
current_time = datetime.now().timestamp()
if expiration_date.timestamp() < current_time:
raise InvalidExpirationDate("Expiration date is in the past")
if uses_left is not None:
if uses_left <= 0:


@ -1,34 +0,0 @@
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.jobs import Jobs, Job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.tasks import move_service as move_service_task
class ServiceNotFoundError(Exception):
pass
class VolumeNotFoundError(Exception):
pass
def move_service(service_id: str, volume_name: str) -> Job:
service = get_service_by_id(service_id)
if service is None:
raise ServiceNotFoundError(f"No such service:{service_id}")
volume = BlockDevices().get_block_device(volume_name)
if volume is None:
raise VolumeNotFoundError(f"No such volume:{volume_name}")
service.assert_can_move(volume)
job = Jobs.add(
type_id=f"services.{service.get_id()}.move",
name=f"Move {service.get_display_name()}",
description=f"Moving {service.get_display_name()} data to {volume.name}",
)
move_service_task(service, volume, job)
return job


@ -31,7 +31,7 @@ def get_ssh_settings() -> UserdataSshSettings:
if "enable" not in data["ssh"]:
data["ssh"]["enable"] = True
if "passwordAuthentication" not in data["ssh"]:
data["ssh"]["passwordAuthentication"] = False
data["ssh"]["passwordAuthentication"] = True
if "rootKeys" not in data["ssh"]:
data["ssh"]["rootKeys"] = []
return UserdataSshSettings(**data["ssh"])
@ -49,6 +49,19 @@ def set_ssh_settings(
data["ssh"]["passwordAuthentication"] = password_authentication
def add_root_ssh_key(public_key: str):
with WriteUserData() as data:
if "ssh" not in data:
data["ssh"] = {}
if "rootKeys" not in data["ssh"]:
data["ssh"]["rootKeys"] = []
# Return 409 if key already in array
for key in data["ssh"]["rootKeys"]:
if key == public_key:
raise KeyAlreadyExists()
data["ssh"]["rootKeys"].append(public_key)
class KeyAlreadyExists(Exception):
"""Key already exists"""


@ -2,10 +2,8 @@
import os
import subprocess
import pytz
from typing import Optional, List
from typing import Optional
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.jobs.upgrade_system import rebuild_system_task
from selfprivacy_api.utils import WriteUserData, ReadUserData
@ -15,7 +13,7 @@ def get_timezone() -> str:
with ReadUserData() as user_data:
if "timezone" in user_data:
return user_data["timezone"]
return "Etc/UTC"
return "Europe/Uzhgorod"
class InvalidTimezone(Exception):
@ -60,68 +58,36 @@ def set_auto_upgrade_settings(
user_data["autoUpgrade"]["allowReboot"] = allowReboot
class ShellException(Exception):
"""Something went wrong when calling another process"""
pass
def run_blocking(cmd: List[str], new_session: bool = False) -> str:
"""Run a process, block until done, return output, complain if failed"""
process_handle = subprocess.Popen(
cmd,
shell=False,
start_new_session=new_session,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
stdout_raw, stderr_raw = process_handle.communicate()
stdout = stdout_raw.decode("utf-8")
if stderr_raw is not None:
stderr = stderr_raw.decode("utf-8")
else:
stderr = ""
output = stdout + "\n" + stderr
if process_handle.returncode != 0:
raise ShellException(
f"Shell command failed, command array: {cmd}, output: {output}"
)
return stdout
def rebuild_system() -> Job:
def rebuild_system() -> int:
"""Rebuild the system"""
job = Jobs.add(
type_id="system.nixos.rebuild",
name="Rebuild system",
description="Applying the new system configuration by building the new NixOS generation.",
status=JobStatus.CREATED,
rebuild_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-rebuild.service"], start_new_session=True
)
rebuild_system_task(job)
return job
rebuild_result.communicate()[0]
return rebuild_result.returncode
def rollback_system() -> int:
"""Rollback the system"""
run_blocking(["systemctl", "start", "sp-nixos-rollback.service"], new_session=True)
return 0
def upgrade_system() -> Job:
"""Upgrade the system"""
job = Jobs.add(
type_id="system.nixos.upgrade",
name="Upgrade system",
description="Upgrading the system to the latest version.",
status=JobStatus.CREATED,
rollback_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-rollback.service"], start_new_session=True
)
rebuild_system_task(job, upgrade=True)
return job
rollback_result.communicate()[0]
return rollback_result.returncode
def upgrade_system() -> int:
"""Upgrade the system"""
upgrade_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-upgrade.service"], start_new_session=True
)
upgrade_result.communicate()[0]
return upgrade_result.returncode
def reboot_system() -> None:
"""Reboot the system"""
run_blocking(["reboot"], new_session=True)
subprocess.Popen(["reboot"], start_new_session=True)
def get_system_version() -> str:


@ -58,7 +58,7 @@ def get_users(
)
for user in user_data["users"]
]
if not exclude_primary and "username" in user_data.keys():
if not exclude_primary:
users.append(
UserDataUser(
username=user_data["username"],
@ -107,12 +107,6 @@ class PasswordIsEmpty(Exception):
pass
class InvalidConfiguration(Exception):
"""The userdata is broken"""
pass
def create_user(username: str, password: str):
if password == "":
raise PasswordIsEmpty("Password is empty")
@ -130,10 +124,6 @@ def create_user(username: str, password: str):
with ReadUserData() as user_data:
ensure_ssh_and_users_fields_exist(user_data)
if "username" not in user_data.keys():
raise InvalidConfiguration(
"Broken config: Admin name is not defined. Consider recovery or add it manually"
)
if username == user_data["username"]:
raise UserAlreadyExists("User already exists")
if username in [user["username"] for user in user_data["users"]]:


@ -9,7 +9,14 @@ import uvicorn
from selfprivacy_api.dependencies import get_api_version
from selfprivacy_api.graphql.schema import schema
from selfprivacy_api.migrations import run_migrations
from selfprivacy_api.restic_controller.tasks import init_restic
from selfprivacy_api.rest import (
system,
users,
api_auth,
services,
)
app = FastAPI()
@ -26,6 +33,10 @@ app.add_middleware(
)
app.include_router(system.router)
app.include_router(users.router)
app.include_router(api_auth.router)
app.include_router(services.router)
app.include_router(graphql_app, prefix="/graphql")
@ -38,6 +49,7 @@ async def get_version():
@app.on_event("startup")
async def startup():
run_migrations()
init_restic()
if __name__ == "__main__":


@ -1,741 +1,268 @@
"""
This module contains the controller class for backups.
"""
from datetime import datetime, timedelta, timezone
import time
import os
from os import statvfs
from typing import Callable, List, Optional
from selfprivacy_api.services import (
get_service_by_id,
get_all_services,
)
from selfprivacy_api.services.service import (
Service,
ServiceStatus,
StoppedService,
)
from selfprivacy_api.jobs import Jobs, JobStatus, Job
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.graphql.common_types.backup import (
RestoreStrategy,
BackupReason,
AutobackupQuotas,
)
from typing import List, Optional
from datetime import datetime, timedelta
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils import ReadUserData
from selfprivacy_api.utils.redis_pool import RedisPool
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.service import Service
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_provider
from selfprivacy_api.backup.storage import Storage
from selfprivacy_api.backup.jobs import (
get_backup_job,
get_backup_fail,
add_backup_job,
get_restore_job,
add_restore_job,
)
BACKUP_PROVIDER_ENVS = {
"kind": "BACKUP_KIND",
"login": "BACKUP_LOGIN",
"key": "BACKUP_KEY",
"location": "BACKUP_LOCATION",
}
AUTOBACKUP_JOB_EXPIRATION_SECONDS = 60 * 60 # one hour
class NotDeadError(AssertionError):
"""
This error is raised when we try to back up a service that is not dead yet.
"""
def __init__(self, service: Service):
self.service_name = service.get_id()
super().__init__()
def __str__(self):
return f"""
Service {self.service_name} should be either stopped or dead from
an error before we back up.
Normally, this error is unreachable because we do try to ensure this.
Apparently, not this time.
"""
class RotationBucket:
"""
Bucket object used for rotation.
Has the following mutable fields:
- the counter, int
- the lambda function which takes datetime and the int and returns the int
- the last, int
"""
def __init__(self, counter: int, last: int, rotation_lambda):
self.counter: int = counter
self.last: int = last
self.rotation_lambda: Callable[[datetime, int], int] = rotation_lambda
def __str__(self) -> str:
return f"Bucket(counter={self.counter}, last={self.last})"
class Backups:
"""A stateless controller class for backups"""
"""A singleton controller for backups"""
# Providers
provider: AbstractBackupProvider
@staticmethod
def provider() -> AbstractBackupProvider:
"""
Returns the current backup storage provider.
"""
return Backups._lookup_provider()
@staticmethod
def set_provider(
kind: BackupProviderEnum,
login: str,
key: str,
location: str,
repo_id: str = "",
) -> None:
"""
Sets the new configuration of the backup storage provider.
In case of `BackupProviderEnum.BACKBLAZE`, the `login` is the key ID,
the `key` is the key itself, and the `location` is the bucket name and
the `repo_id` is the bucket ID.
"""
provider: AbstractBackupProvider = Backups._construct_provider(
kind,
login,
key,
location,
repo_id,
)
def set_localfile_repo(file_path: str):
ProviderClass = get_provider(BackupProvider.FILE)
provider = ProviderClass(file_path)
Storage.store_testrepo_path(file_path)
Storage.store_provider(provider)
@staticmethod
def reset() -> None:
"""
Deletes all the data about the backup storage provider.
"""
Storage.reset()
@staticmethod
def _lookup_provider() -> AbstractBackupProvider:
redis_provider = Backups._load_provider_redis()
if redis_provider is not None:
return redis_provider
none_provider = Backups._construct_provider(
BackupProviderEnum.NONE, login="", key="", location=""
)
Storage.store_provider(none_provider)
return none_provider
@staticmethod
def set_provider_from_envs():
for env in BACKUP_PROVIDER_ENVS.values():
if env not in os.environ.keys():
raise ValueError(
f"Cannot set backup provider from envs, there is no {env} set"
)
kind_str = os.environ[BACKUP_PROVIDER_ENVS["kind"]]
kind_enum = BackupProviderEnum[kind_str]
provider = Backups._construct_provider(
kind=kind_enum,
login=os.environ[BACKUP_PROVIDER_ENVS["login"]],
key=os.environ[BACKUP_PROVIDER_ENVS["key"]],
location=os.environ[BACKUP_PROVIDER_ENVS["location"]],
)
Storage.store_provider(provider)
@staticmethod
def _construct_provider(
kind: BackupProviderEnum,
login: str,
key: str,
location: str,
repo_id: str = "",
) -> AbstractBackupProvider:
provider_class = get_provider(kind)
return provider_class(
login=login,
key=key,
location=location,
repo_id=repo_id,
)
@staticmethod
def _load_provider_redis() -> Optional[AbstractBackupProvider]:
provider_model = Storage.load_provider()
if provider_model is None:
return None
return Backups._construct_provider(
BackupProviderEnum[provider_model.kind],
provider_model.login,
provider_model.key,
provider_model.location,
provider_model.repo_id,
)
# Init
@staticmethod
def init_repo() -> None:
"""
Initializes the backup repository. This is required once per repo.
"""
Backups.provider().backupper.init()
Storage.mark_as_init()
@staticmethod
def erase_repo() -> None:
"""
Completely empties the remote
"""
Backups.provider().backupper.erase_repo()
Storage.mark_as_uninitted()
@staticmethod
def is_initted() -> bool:
"""
Returns whether the backup repository is initialized or not.
If it is not initialized, we cannot back up and probably should
call `init_repo` first.
"""
if Storage.has_init_mark():
return True
initted = Backups.provider().backupper.is_initted()
if initted:
Storage.mark_as_init()
return True
return False
# Backup
@staticmethod
def back_up(
service: Service, reason: BackupReason = BackupReason.EXPLICIT
) -> Snapshot:
"""The top-level function to back up a service
If it fails for any reason at all, it should both mark job as
errored and re-raise an error"""
job = get_backup_job(service)
if job is None:
job = add_backup_job(service)
Jobs.update(job, status=JobStatus.RUNNING)
try:
if service.can_be_backed_up() is False:
raise ValueError("cannot backup a non-backuppable service")
folders = service.get_folders()
service_name = service.get_id()
service.pre_backup()
snapshot = Backups.provider().backupper.start_backup(
folders,
service_name,
reason=reason,
)
Backups._on_new_snapshot_created(service_name, snapshot)
if reason == BackupReason.AUTO:
Backups._prune_auto_snaps(service)
service.post_restore()
except Exception as error:
Jobs.update(job, status=JobStatus.ERROR, error=str(error))
raise error
Jobs.update(job, status=JobStatus.FINISHED)
if reason in [BackupReason.AUTO, BackupReason.PRE_RESTORE]:
Jobs.set_expiration(job, AUTOBACKUP_JOB_EXPIRATION_SECONDS)
return Backups.sync_date_from_cache(snapshot)
@staticmethod
def sync_date_from_cache(snapshot: Snapshot) -> Snapshot:
"""
Our snapshot creation dates are different from those on the server by a tiny amount.
This is a convenience, maybe it is better to write a special comparison
function for snapshots
"""
return Storage.get_cached_snapshot_by_id(snapshot.id)
@staticmethod
def _auto_snaps(service):
return [
snap
for snap in Backups.get_snapshots(service)
if snap.reason == BackupReason.AUTO
]
@staticmethod
def _prune_snaps_with_quotas(snapshots: List[Snapshot]) -> List[Snapshot]:
# Function broken out for testability
# Sorting newest first
sorted_snaps = sorted(snapshots, key=lambda s: s.created_at, reverse=True)
quotas: AutobackupQuotas = Backups.autobackup_quotas()
buckets: list[RotationBucket] = [
RotationBucket(
quotas.last, # type: ignore
-1,
lambda _, index: index,
),
RotationBucket(
quotas.daily, # type: ignore
-1,
lambda date, _: date.year * 10000 + date.month * 100 + date.day,
),
RotationBucket(
quotas.weekly, # type: ignore
-1,
lambda date, _: date.year * 100 + date.isocalendar()[1],
),
RotationBucket(
quotas.monthly, # type: ignore
-1,
lambda date, _: date.year * 100 + date.month,
),
RotationBucket(
quotas.yearly, # type: ignore
-1,
lambda date, _: date.year,
),
]
new_snaplist: List[Snapshot] = []
for i, snap in enumerate(sorted_snaps):
keep_snap = False
for bucket in buckets:
if (bucket.counter > 0) or (bucket.counter == -1):
val = bucket.rotation_lambda(snap.created_at, i)
if (val != bucket.last) or (i == len(sorted_snaps) - 1):
bucket.last = val
if bucket.counter > 0:
bucket.counter -= 1
if not keep_snap:
new_snaplist.append(snap)
keep_snap = True
return new_snaplist
@staticmethod
def _prune_auto_snaps(service) -> None:
# Not very testable by itself, so most testing is going on Backups._prune_snaps_with_quotas
# We can still test total limits and, say, daily limits
auto_snaps = Backups._auto_snaps(service)
new_snaplist = Backups._prune_snaps_with_quotas(auto_snaps)
deletable_snaps = [snap for snap in auto_snaps if snap not in new_snaplist]
Backups.forget_snapshots(deletable_snaps)
@staticmethod
def _standardize_quotas(i: int) -> int:
if i <= -1:
i = -1
return i
@staticmethod
def autobackup_quotas() -> AutobackupQuotas:
"""0 means do not keep, -1 means unlimited"""
return Storage.autobackup_quotas()
@staticmethod
def set_autobackup_quotas(quotas: AutobackupQuotas) -> None:
"""0 means do not keep, -1 means unlimited"""
Storage.set_autobackup_quotas(
AutobackupQuotas(
last=Backups._standardize_quotas(quotas.last), # type: ignore
daily=Backups._standardize_quotas(quotas.daily), # type: ignore
weekly=Backups._standardize_quotas(quotas.weekly), # type: ignore
monthly=Backups._standardize_quotas(quotas.monthly), # type: ignore
yearly=Backups._standardize_quotas(quotas.yearly), # type: ignore
)
)
# do not prune all autosnaps right away, this will be done by an async task
@staticmethod
def prune_all_autosnaps() -> None:
for service in get_all_services():
Backups._prune_auto_snaps(service)
# Restoring
@staticmethod
def _ensure_queued_restore_job(service, snapshot) -> Job:
job = get_restore_job(service)
if job is None:
job = add_restore_job(snapshot)
Jobs.update(job, status=JobStatus.CREATED)
return job
@staticmethod
def _inplace_restore(
service: Service,
snapshot: Snapshot,
job: Job,
) -> None:
Jobs.update(
job, status=JobStatus.CREATED, status_text="Waiting for pre-restore backup"
)
failsafe_snapshot = Backups.back_up(service, BackupReason.PRE_RESTORE)
Jobs.update(
job, status=JobStatus.RUNNING, status_text=f"Restoring from {snapshot.id}"
)
try:
Backups._restore_service_from_snapshot(
service,
snapshot.id,
verify=False,
)
except Exception as error:
Jobs.update(
job,
status=JobStatus.ERROR,
status_text=f"Restore failed with {str(error)}, reverting to {failsafe_snapshot.id}",
)
Backups._restore_service_from_snapshot(
service, failsafe_snapshot.id, verify=False
)
Jobs.update(
job,
status=JobStatus.ERROR,
status_text=f"Restore failed with {str(error)}, reverted to {failsafe_snapshot.id}",
)
raise error
@staticmethod
def restore_snapshot(
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None:
"""Restores a snapshot to its original service using the given strategy"""
service = get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}"
)
job = Backups._ensure_queued_restore_job(service, snapshot)
try:
Backups._assert_restorable(snapshot)
Jobs.update(
job, status=JobStatus.RUNNING, status_text="Stopping the service"
)
with StoppedService(service):
Backups.assert_dead(service)
if strategy == RestoreStrategy.INPLACE:
Backups._inplace_restore(service, snapshot, job)
else: # verify_before_download is our default
Jobs.update(
job,
status=JobStatus.RUNNING,
status_text=f"Restoring from {snapshot.id}",
)
Backups._restore_service_from_snapshot(
service, snapshot.id, verify=True
)
service.post_restore()
Jobs.update(
job,
status=JobStatus.RUNNING,
progress=90,
status_text="Restarting the service",
)
except Exception as error:
Jobs.update(job, status=JobStatus.ERROR, status_text=str(error))
raise error
Jobs.update(job, status=JobStatus.FINISHED)
@staticmethod
def _assert_restorable(
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None:
service = get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}"
)
restored_snap_size = Backups.snapshot_restored_size(snapshot.id)
if strategy == RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE:
needed_space = restored_snap_size
elif strategy == RestoreStrategy.INPLACE:
needed_space = restored_snap_size - service.get_storage_usage()
else:
raise NotImplementedError(
"""
We do not know if there is enough space for restoration because
there is some novel restore strategy used!
This is a developer's fault, open an issue please
"""
)
available_space = Backups.space_usable_for_service(service)
if needed_space > available_space:
raise ValueError(
f"we only have {available_space} bytes "
f"but snapshot needs {needed_space}"
)
@staticmethod
def _restore_service_from_snapshot(
service: Service,
snapshot_id: str,
verify=True,
) -> None:
folders = service.get_folders()
Backups.provider().backupper.restore_from_backup(
snapshot_id,
folders,
verify=verify,
)
# Snapshots
@staticmethod
def get_snapshots(service: Service) -> List[Snapshot]:
"""Returns all snapshots for a given service"""
snapshots = Backups.get_all_snapshots()
service_id = service.get_id()
return list(
filter(
lambda snap: snap.service_name == service_id,
snapshots,
)
)
@staticmethod
def get_all_snapshots() -> List[Snapshot]:
"""Returns all snapshots"""
# When we refresh our cache:
# 1. Manually
# 2. On timer
# 3. On new snapshot
# 4. On snapshot deletion
return Storage.get_cached_snapshots()
@staticmethod
def get_snapshot_by_id(snapshot_id: str) -> Optional[Snapshot]:
"""Returns a backup snapshot by its id"""
snap = Storage.get_cached_snapshot_by_id(snapshot_id)
if snap is not None:
return snap
# Possibly our cache entry got invalidated, let's try one more time
Backups.force_snapshot_cache_reload()
snap = Storage.get_cached_snapshot_by_id(snapshot_id)
return snap
@staticmethod
def forget_snapshots(snapshots: List[Snapshot]) -> None:
"""
Deletes a batch of snapshots from the repo and syncs cache
Optimized
"""
ids = [snapshot.id for snapshot in snapshots]
Backups.provider().backupper.forget_snapshots(ids)
Backups.force_snapshot_cache_reload()
@staticmethod
def forget_snapshot(snapshot: Snapshot) -> None:
"""Deletes a snapshot from the repo and from cache"""
Backups.forget_snapshots([snapshot])
@staticmethod
def forget_all_snapshots():
"""
Mark all snapshots we have made for deletion and make them inaccessible
(this is done by cloud, we only issue a command)
"""
Backups.forget_snapshots(Backups.get_all_snapshots())
@staticmethod
def force_snapshot_cache_reload() -> None:
"""
Forces a reload of the snapshot cache.
This may be an expensive operation, so use it wisely.
User pays for the API calls.
"""
upstream_snapshots = Backups.provider().backupper.get_snapshots()
Storage.invalidate_snapshot_storage()
for snapshot in upstream_snapshots:
Storage.cache_snapshot(snapshot)
@staticmethod
def snapshot_restored_size(snapshot_id: str) -> int:
"""Returns the size of the snapshot"""
return Backups.provider().backupper.restored_size(
snapshot_id,
)
@staticmethod
def _on_new_snapshot_created(service_id: str, snapshot: Snapshot) -> None:
"""What do we do with a snapshot that is just made?"""
# non-expiring timestamp of the last
Storage.store_last_timestamp(service_id, snapshot)
Backups.force_snapshot_cache_reload()
# Autobackup
@staticmethod
def autobackup_period_minutes() -> Optional[int]:
"""None means autobackup is disabled"""
return Storage.autobackup_period_minutes()
@staticmethod
def set_autobackup_period_minutes(minutes: int) -> None:
"""
0 and negative numbers are equivalent to disable.
Setting to a positive number may result in a backup very soon
if some services are not backed up.
"""
if minutes <= 0:
Backups.disable_all_autobackup()
return
Storage.store_autobackup_period_minutes(minutes)
@staticmethod
def disable_all_autobackup() -> None:
"""
Disables all automatic backing up,
but does not change per-service settings
"""
Storage.delete_backup_period()
@staticmethod
def is_time_to_backup(time: datetime) -> bool:
"""
Intended as a time validator for huey cron scheduler
of automatic backups
"""
return Backups.services_to_back_up(time) != []
@staticmethod
def services_to_back_up(time: datetime) -> List[Service]:
"""Returns a list of services that should be backed up at a given time"""
return [
service
for service in get_all_services()
if Backups.is_time_to_backup_service(service, time)
]
@staticmethod
def get_last_backed_up(service: Service) -> Optional[datetime]:
"""Get a timezone-aware time of the last backup of a service"""
return Storage.get_last_backup_time(service.get_id())
@staticmethod
def get_last_backup_error_time(service: Service) -> Optional[datetime]:
"""Get a timezone-aware time of the last backup of a service"""
job = get_backup_fail(service)
if job is not None:
datetime_created = job.created_at
if datetime_created.tzinfo is None:
# assume it is in localtime
offset = timedelta(seconds=time.localtime().tm_gmtoff)
datetime_created = datetime_created - offset
return datetime.combine(
datetime_created.date(), datetime_created.time(), timezone.utc
)
return datetime_created
return None
def get_cached_snapshots_service(service_id: str) -> List[Snapshot]:
snapshots = Storage.get_cached_snapshots()
return [snap for snap in snapshots if snap.service_name == service_id]
@staticmethod
def is_time_to_backup_service(service: Service, time: datetime):
"""Returns True if it is time to back up a service"""
def sync_service_snapshots(service_id: str, snapshots: List[Snapshot]):
for snapshot in snapshots:
if snapshot.service_name == service_id:
Storage.cache_snapshot(snapshot)
for snapshot in Backups.get_cached_snapshots_service(service_id):
if snapshot.id not in [snap.id for snap in snapshots]:
Storage.delete_cached_snapshot(snapshot)
@staticmethod
def enable_autobackup(service: Service):
Storage.set_autobackup(service)
@staticmethod
def _service_ids_to_back_up(time: datetime) -> List[str]:
services = Storage.services_with_autobackup()
return [id for id in services if Backups.is_time_to_backup_service(id, time)]
@staticmethod
def services_to_back_up(time: datetime) -> List[Service]:
result = []
for id in Backups._service_ids_to_back_up(time):
service = get_service_by_id(id)
if service is None:
raise ValueError("Cannot look up a service scheduled for backup!")
result.append(service)
return result
@staticmethod
def is_time_to_backup(time: datetime) -> bool:
"""
Intended as a time validator for huey cron scheduler of automatic backups
"""
return Backups._service_ids_to_back_up(time) != []
@staticmethod
def is_time_to_backup_service(service_id: str, time: datetime):
period = Backups.autobackup_period_minutes()
if period is None:
return False
if not service.is_enabled():
return False
if not service.can_be_backed_up():
if not Storage.is_autobackup_set(service_id):
return False
last_error = Backups.get_last_backup_error_time(service)
if last_error is not None:
if time < last_error + timedelta(seconds=AUTOBACKUP_JOB_EXPIRATION_SECONDS):
return False
last_backup = Backups.get_last_backed_up(service)
# Queue a backup immediately if there are no previous backups
last_backup = Storage.get_last_backup_time(service_id)
if last_backup is None:
return True
return True # queue a backup immediately if there are no previous backups
if time > last_backup + timedelta(minutes=period):
return True
return False
@staticmethod
def disable_autobackup(service: Service):
"""also see disable_all_autobackup()"""
Storage.unset_autobackup(service)
@staticmethod
def is_autobackup_enabled(service: Service) -> bool:
return Storage.is_autobackup_set(service.get_id())
@staticmethod
def autobackup_period_minutes() -> Optional[int]:
"""None means autobackup is disabled"""
return Storage.autobackup_period_minutes()
@staticmethod
def set_autobackup_period_minutes(minutes: int):
"""
0 and negative numbers are equivalent to disable.
Setting to a positive number may result in a backup very soon if some services are not backed up.
"""
if minutes <= 0:
Backups.disable_all_autobackup()
return
Storage.store_autobackup_period_minutes(minutes)
@staticmethod
def disable_all_autobackup():
"""disables all automatic backing up, but does not change per-service settings"""
Storage.delete_backup_period()
@staticmethod
def provider():
return Backups.lookup_provider()
@staticmethod
def set_provider(kind: str, login: str, key: str):
provider = Backups.construct_provider(kind, login, key)
Storage.store_provider(provider)
@staticmethod
def construct_provider(kind: str, login: str, key: str):
provider_class = get_provider(BackupProvider[kind])
if kind == "FILE":
path = Storage.get_testrepo_path()
return provider_class(path)
return provider_class(login=login, key=key)
@staticmethod
def reset():
Storage.reset()
@staticmethod
def lookup_provider() -> AbstractBackupProvider:
redis_provider = Backups.load_provider_redis()
if redis_provider is not None:
return redis_provider
json_provider = Backups.load_provider_json()
if json_provider is not None:
Storage.store_provider(json_provider)
return json_provider
memory_provider = Backups.construct_provider("MEMORY", login="", key="")
Storage.store_provider(memory_provider)
return memory_provider
@staticmethod
def load_provider_json() -> AbstractBackupProvider:
with ReadUserData() as user_data:
account = ""
key = ""
if "backup" not in user_data.keys():
if "backblaze" in user_data.keys():
account = user_data["backblaze"]["accountId"]
key = user_data["backblaze"]["accountKey"]
provider_string = "BACKBLAZE"
return Backups.construct_provider(
kind=provider_string, login=account, key=key
)
return None
account = user_data["backup"]["accountId"]
key = user_data["backup"]["accountKey"]
provider_string = user_data["backup"]["provider"]
return Backups.construct_provider(
kind=provider_string, login=account, key=key
)
@staticmethod
def load_provider_redis() -> AbstractBackupProvider:
provider_model = Storage.load_provider()
if provider_model is None:
return None
return Backups.construct_provider(
provider_model.kind, provider_model.login, provider_model.key
)
@staticmethod
def back_up(service: Service):
"""The top-level function to back up a service"""
folders = service.get_folders()
repo_name = service.get_id()
service.pre_backup()
snapshot = Backups.provider().backuper.start_backup(folders, repo_name)
Backups._store_last_snapshot(repo_name, snapshot)
service.post_restore()
@staticmethod
def init_repo(service: Service):
repo_name = service.get_id()
Backups.provider().backuper.init(repo_name)
Storage.mark_as_init(service)
@staticmethod
def is_initted(service: Service) -> bool:
repo_name = service.get_id()
if Storage.has_init_mark(service):
return True
initted = Backups.provider().backuper.is_initted(repo_name)
if initted:
Storage.mark_as_init(service)
return True
return False
# Helpers
@staticmethod
def get_snapshots(service: Service) -> List[Snapshot]:
service_id = service.get_id()
cached_snapshots = Backups.get_cached_snapshots_service(service_id)
if cached_snapshots != []:
return cached_snapshots
# TODO: the oldest snapshots will get expired faster than the new ones.
# How to detect that the end is missing?
upstream_snapshots = Backups.provider().backuper.get_snapshots(service_id)
Backups.sync_service_snapshots(service_id, upstream_snapshots)
return upstream_snapshots
@staticmethod
def space_usable_for_service(service: Service) -> int:
    """
    Returns the amount of space available on the volume the given
    service is located on.
    """
    folders = service.get_folders()
    if folders == []:
        raise ValueError("unallocated service", service.get_id())
    # We assume all folders of one service live at the same volume
    fs_info = statvfs(folders[0])
    usable_bytes = fs_info.f_frsize * fs_info.f_bavail
    return usable_bytes
@staticmethod
def restore_service_from_snapshot(service: Service, snapshot_id: str):
    repo_name = service.get_id()
    folders = service.get_folders()
    Backups.provider().backuper.restore_from_backup(repo_name, snapshot_id, folders)
@staticmethod
def set_localfile_repo(file_path: str):
    """Used by tests to set a local folder as a backup repo"""
    # pylint: disable-next=invalid-name
    ProviderClass = get_provider(BackupProviderEnum.FILE)
    provider = ProviderClass(
        login="",
        key="",
        location=file_path,
        repo_id="",
    )
    Storage.store_provider(provider)
@staticmethod
def restore_snapshot(snapshot: Snapshot):
    Backups.restore_service_from_snapshot(
        get_service_by_id(snapshot.service_name), snapshot.id
    )
@staticmethod
def assert_dead(service: Service):
"""
Checks if a service is dead and can be safely restored from a snapshot.
"""
if service.get_status() not in [
ServiceStatus.INACTIVE,
ServiceStatus.FAILED,
]:
raise NotDeadError(service)
@staticmethod
def service_snapshot_size(service: Service, snapshot_id: str) -> float:
repo_name = service.get_id()
return Backups.provider().backuper.restored_size(repo_name, snapshot_id)
@staticmethod
def snapshot_restored_size(snapshot: Snapshot) -> float:
return Backups.service_snapshot_size(
get_service_by_id(snapshot.service_name), snapshot.id
)
@staticmethod
def _store_last_snapshot(service_id: str, snapshot: Snapshot):
"""What do we do with a snapshot that is just made?"""
# non-expiring timestamp of the last
Storage.store_last_timestamp(service_id, snapshot)
# expiring cache entry
Storage.cache_snapshot(snapshot)
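For orientation, the calls above compose into roughly the flow the tests in this branch exercise. A minimal sketch follows; the service id and repo path are hypothetical, the storage module path is assumed, and a running redis plus restic/rclone are required.
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.storage import Storage
from selfprivacy_api.services import get_service_by_id

service = get_service_by_id("testservice")  # hypothetical registered dummy service
assert service is not None

# FILE providers read their path from redis, see construct_provider() above
Storage.store_testrepo_path("/tmp/test_backup_repo")
Backups.set_provider(kind="FILE", login="", key="")

Backups.init_repo(service)
Backups.back_up(service)
for snap in Backups.get_snapshots(service):
    print(snap.id, snap.created_at)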

View File

@ -0,0 +1,35 @@
from abc import ABC, abstractmethod
from typing import List
from selfprivacy_api.models.backup.snapshot import Snapshot
class AbstractBackuper(ABC):
def __init__(self):
pass
@abstractmethod
def is_initted(self, repo_name: str) -> bool:
raise NotImplementedError
@abstractmethod
def start_backup(self, folders: List[str], repo_name: str):
raise NotImplementedError
@abstractmethod
def get_snapshots(self, repo_name) -> List[Snapshot]:
"""Get all snapshots from the repo"""
raise NotImplementedError
@abstractmethod
def init(self, repo_name):
raise NotImplementedError
@abstractmethod
def restore_from_backup(self, repo_name: str, snapshot_id: str, folders: List[str]):
"""Restore a target folder using a snapshot"""
raise NotImplementedError
@abstractmethod
def restored_size(self, repo_name, snapshot_id) -> float:
raise NotImplementedError
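A hypothetical minimal subclass (not part of the project) that satisfies this interface could look as follows; it backs up nothing and is only meant to show the required method set.
from typing import List

from selfprivacy_api.backup.backuper import AbstractBackuper
from selfprivacy_api.models.backup.snapshot import Snapshot

class DummyBackuper(AbstractBackuper):
    """No-op backuper, purely to illustrate the interface."""

    def is_initted(self, repo_name: str) -> bool:
        return True

    def start_backup(self, folders: List[str], repo_name: str):
        pass  # a real implementation would back up and report a snapshot

    def get_snapshots(self, repo_name) -> List[Snapshot]:
        return []

    def init(self, repo_name):
        pass

    def restore_from_backup(self, repo_name: str, snapshot_id: str, folders: List[str]):
        pass

    def restored_size(self, repo_name, snapshot_id) -> float:
        return 0.0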

View File

@ -1,73 +0,0 @@
from abc import ABC, abstractmethod
from typing import List
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.graphql.common_types.backup import BackupReason
class AbstractBackupper(ABC):
"""Abstract class for backuppers"""
# flake8: noqa: B027
def __init__(self) -> None:
pass
@abstractmethod
def is_initted(self) -> bool:
"""Returns true if the repository is initted"""
raise NotImplementedError
@abstractmethod
def set_creds(self, account: str, key: str, repo: str) -> None:
"""Set the credentials for the backupper"""
raise NotImplementedError
@abstractmethod
def start_backup(
self,
folders: List[str],
service_name: str,
reason: BackupReason = BackupReason.EXPLICIT,
) -> Snapshot:
"""Start a backup of the given folders"""
raise NotImplementedError
@abstractmethod
def get_snapshots(self) -> List[Snapshot]:
"""Get all snapshots from the repo"""
raise NotImplementedError
@abstractmethod
def init(self) -> None:
"""Initialize the repository"""
raise NotImplementedError
@abstractmethod
def erase_repo(self) -> None:
"""Completely empties the remote"""
raise NotImplementedError
@abstractmethod
def restore_from_backup(
self,
snapshot_id: str,
folders: List[str],
verify=True,
) -> None:
"""Restore a target folder using a snapshot"""
raise NotImplementedError
@abstractmethod
def restored_size(self, snapshot_id: str) -> int:
"""Get the size of the restored snapshot"""
raise NotImplementedError
@abstractmethod
def forget_snapshot(self, snapshot_id) -> None:
"""Forget a snapshot"""
raise NotImplementedError
@abstractmethod
def forget_snapshots(self, snapshot_ids: List[str]) -> None:
"""Maybe optimized deletion of a batch of snapshots, just cycling if unsupported"""
raise NotImplementedError

View File

@ -1,45 +0,0 @@
from typing import List
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.graphql.common_types.backup import BackupReason
class NoneBackupper(AbstractBackupper):
"""A backupper that does nothing"""
def is_initted(self, repo_name: str = "") -> bool:
return False
def set_creds(self, account: str, key: str, repo: str):
pass
def start_backup(
self, folders: List[str], tag: str, reason: BackupReason = BackupReason.EXPLICIT
):
raise NotImplementedError
def get_snapshots(self) -> List[Snapshot]:
"""Get all snapshots from the repo"""
return []
def init(self):
raise NotImplementedError
def erase_repo(self) -> None:
"""Completely empties the remote"""
# this one is already empty
pass
def restore_from_backup(self, snapshot_id: str, folders: List[str], verify=True):
"""Restore a target folder using a snapshot"""
raise NotImplementedError
def restored_size(self, snapshot_id: str) -> int:
raise NotImplementedError
def forget_snapshot(self, snapshot_id):
raise NotImplementedError("forget_snapshot")
def forget_snapshots(self, snapshots):
raise NotImplementedError("forget_snapshots")

View File

@ -1,554 +0,0 @@
from __future__ import annotations
import subprocess
import json
import datetime
import tempfile
from typing import List, Optional, TypeVar, Callable
from collections.abc import Iterable
from json.decoder import JSONDecodeError
from os.path import exists, join
from os import mkdir
from shutil import rmtree
from selfprivacy_api.graphql.common_types.backup import BackupReason
from selfprivacy_api.backup.util import output_yielder, sync
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.jobs import get_backup_job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.jobs import Jobs, JobStatus, Job
from selfprivacy_api.backup.local_secret import LocalBackupSecret
SHORT_ID_LEN = 8
T = TypeVar("T", bound=Callable)
def unlocked_repo(func: T) -> T:
"""unlock repo and retry if it appears to be locked"""
def inner(self: ResticBackupper, *args, **kwargs):
try:
return func(self, *args, **kwargs)
except Exception as error:
if "unable to create lock" in str(error):
self.unlock()
return func(self, *args, **kwargs)
else:
raise error
# Above, we manually guarantee that the type returned is compatible.
return inner # type: ignore
class ResticBackupper(AbstractBackupper):
def __init__(self, login_flag: str, key_flag: str, storage_type: str) -> None:
self.login_flag = login_flag
self.key_flag = key_flag
self.storage_type = storage_type
self.account = ""
self.key = ""
self.repo = ""
super().__init__()
def set_creds(self, account: str, key: str, repo: str) -> None:
self.account = account
self.key = key
self.repo = repo
def restic_repo(self) -> str:
# https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
# https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
return f"rclone:{self.rclone_repo()}"
def rclone_repo(self) -> str:
return f"{self.storage_type}{self.repo}"
def rclone_args(self):
return "rclone.args=serve restic --stdio " + " ".join(
self.backend_rclone_args()
)
def backend_rclone_args(self) -> list[str]:
args = []
if self.account != "":
acc_args = [self.login_flag, self.account]
args.extend(acc_args)
if self.key != "":
key_args = [self.key_flag, self.key]
args.extend(key_args)
return args
def _password_command(self):
return f"echo {LocalBackupSecret.get()}"
def restic_command(self, *args, tags: Optional[List[str]] = None) -> List[str]:
"""
Construct a restic command against the currently configured repo
Can support [nested] arrays as arguments; they will be flattened into the final command
"""
if tags is None:
tags = []
command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(),
"--password-command",
self._password_command(),
]
if tags != []:
for tag in tags:
command.extend(
[
"--tag",
tag,
]
)
if args:
command.extend(ResticBackupper.__flatten_list(args))
return command
def erase_repo(self) -> None:
"""Fully erases repo on remote, can be reinitted again"""
command = [
"rclone",
"purge",
self.rclone_repo(),
]
backend_args = self.backend_rclone_args()
if backend_args:
command.extend(backend_args)
with subprocess.Popen(command, stdout=subprocess.PIPE, shell=False) as handle:
output = handle.communicate()[0].decode("utf-8")
if handle.returncode != 0:
raise ValueError(
"purge exited with errorcode",
handle.returncode,
":",
output,
)
@staticmethod
def __flatten_list(list_to_flatten):
"""string-aware list flattener"""
result = []
for item in list_to_flatten:
if isinstance(item, Iterable) and not isinstance(item, str):
result.extend(ResticBackupper.__flatten_list(item))
continue
result.append(item)
return result
@staticmethod
def _run_backup_command(
backup_command: List[str], job: Optional[Job]
) -> List[dict]:
"""And handle backup output"""
messages = []
output = []
restic_reported_error = False
for raw_message in output_yielder(backup_command):
if "ERROR:" in raw_message:
restic_reported_error = True
output.append(raw_message)
if not restic_reported_error:
message = ResticBackupper.parse_message(raw_message, job)
messages.append(message)
if restic_reported_error:
raise ValueError(
"Restic returned error(s): ",
output,
)
return messages
@staticmethod
def _replace_in_array(array: List[str], target, replacement) -> None:
if target == "":
return
for i, value in enumerate(array):
if target in value:
array[i] = array[i].replace(target, replacement)
def _censor_command(self, command: List[str]) -> List[str]:
result = command.copy()
ResticBackupper._replace_in_array(result, self.key, "CENSORED")
ResticBackupper._replace_in_array(result, LocalBackupSecret.get(), "CENSORED")
return result
@staticmethod
def _get_backup_job(service_name: str) -> Optional[Job]:
service = get_service_by_id(service_name)
if service is None:
raise ValueError("No service with id ", service_name)
return get_backup_job(service)
@unlocked_repo
def start_backup(
self,
folders: List[str],
service_name: str,
reason: BackupReason = BackupReason.EXPLICIT,
) -> Snapshot:
"""
Start backup with restic
"""
assert len(folders) != 0
job = ResticBackupper._get_backup_job(service_name)
tags = [service_name, reason.value]
backup_command = self.restic_command(
"backup",
"--json",
folders,
tags=tags,
)
try:
messages = ResticBackupper._run_backup_command(backup_command, job)
id = ResticBackupper._snapshot_id_from_backup_messages(messages)
return Snapshot(
created_at=datetime.datetime.now(datetime.timezone.utc),
id=id,
service_name=service_name,
reason=reason,
)
except ValueError as error:
raise ValueError(
"Could not create a snapshot: ",
str(error),
"command: ",
self._censor_command(backup_command),
) from error
@staticmethod
def _snapshot_id_from_backup_messages(messages) -> str:
for message in messages:
if message["message_type"] == "summary":
# There is a discrepancy between versions of restic/rclone
# Some report short_id in this field and some full
return message["snapshot_id"][0:SHORT_ID_LEN]
raise ValueError("no summary message in restic json output")
@staticmethod
def parse_message(raw_message_line: str, job: Optional[Job] = None) -> dict:
message = ResticBackupper.parse_json_output(raw_message_line)
if not isinstance(message, dict):
raise ValueError("we have too many messages on one line?")
if message["message_type"] == "status":
if job is not None: # only update status if we run under some job
Jobs.update(
job,
JobStatus.RUNNING,
progress=int(message["percent_done"] * 100),
)
return message
def init(self) -> None:
init_command = self.restic_command(
"init",
)
with subprocess.Popen(
init_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as process_handle:
output = process_handle.communicate()[0].decode("utf-8")
if "created restic repository" not in output:
raise ValueError("cannot init a repo: " + output)
@unlocked_repo
def is_initted(self) -> bool:
command = self.restic_command(
"check",
)
with subprocess.Popen(
command,
stdout=subprocess.PIPE,
shell=False,
stderr=subprocess.STDOUT,
) as handle:
output = handle.communicate()[0].decode("utf-8")
if handle.returncode != 0:
if "unable to create lock" in output:
raise ValueError("Stale lock detected: ", output)
return False
return True
def unlock(self) -> None:
"""Remove stale locks."""
command = self.restic_command(
"unlock",
)
with subprocess.Popen(
command,
stdout=subprocess.PIPE,
shell=False,
stderr=subprocess.STDOUT,
) as handle:
# communicate() forces the process to complete so that returncode gets defined
output = handle.communicate()[0].decode("utf-8")
if handle.returncode != 0:
raise ValueError("cannot unlock the backup repository: ", output)
def lock(self) -> None:
"""
Introduce a stale lock.
Mainly for testing purposes.
Double lock is supposed to fail
"""
command = self.restic_command(
"check",
)
# using temporary cache in /run/user/1000/restic-check-cache-817079729
# repository 9639c714 opened (repository version 2) successfully, password is correct
# created new cache in /run/user/1000/restic-check-cache-817079729
# create exclusive lock for repository
# load indexes
# check all packs
# check snapshots, trees and blobs
# [0:00] 100.00% 1 / 1 snapshots
# no errors were found
try:
for line in output_yielder(command):
if "indexes" in line:
break
if "unable" in line:
raise ValueError(line)
except Exception as error:
raise ValueError("could not lock repository") from error
@unlocked_repo
def restored_size(self, snapshot_id: str) -> int:
"""
Size of a snapshot
"""
command = self.restic_command(
"stats",
snapshot_id,
"--json",
)
with subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=False,
) as handle:
output = handle.communicate()[0].decode("utf-8")
try:
parsed_output = ResticBackupper.parse_json_output(output)
return parsed_output["total_size"]
except ValueError as error:
raise ValueError("cannot restore a snapshot: " + output) from error
@unlocked_repo
def restore_from_backup(
self,
snapshot_id,
folders: List[str],
verify=True,
) -> None:
"""
Restore from backup with restic
"""
if folders is None or folders == []:
raise ValueError("cannot restore without knowing where to!")
with tempfile.TemporaryDirectory() as temp_dir:
if verify:
self._raw_verified_restore(snapshot_id, target=temp_dir)
snapshot_root = temp_dir
for folder in folders:
src = join(snapshot_root, folder.strip("/"))
if not exists(src):
raise ValueError(
f"No such path: {src}. We tried to find {folder}"
)
dst = folder
sync(src, dst)
else: # attempting inplace restore
for folder in folders:
rmtree(folder)
mkdir(folder)
self._raw_verified_restore(snapshot_id, target="/")
return
def _raw_verified_restore(self, snapshot_id, target="/"):
"""barebones restic restore"""
restore_command = self.restic_command(
"restore", snapshot_id, "--target", target, "--verify"
)
with subprocess.Popen(
restore_command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=False,
) as handle:
# for some reason restore does not support
# nice reporting of progress via json
output = handle.communicate()[0].decode("utf-8")
if "restoring" not in output:
raise ValueError("cannot restore a snapshot: " + output)
assert (
handle.returncode is not None
) # none should be impossible after communicate
if handle.returncode != 0:
raise ValueError(
"restore exited with errorcode",
handle.returncode,
":",
output,
)
def forget_snapshot(self, snapshot_id: str) -> None:
self.forget_snapshots([snapshot_id])
@unlocked_repo
def forget_snapshots(self, snapshot_ids: List[str]) -> None:
# in case the backupper program supports batching, otherwise implement it by cycling
forget_command = self.restic_command(
"forget",
[snapshot_ids],
# TODO: prune should be done in a separate process
"--prune",
)
with subprocess.Popen(
forget_command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=False,
) as handle:
# for some reason restore does not support
# nice reporting of progress via json
output, err = [
string.decode(
"utf-8",
)
for string in handle.communicate()
]
if "no matching ID found" in err:
raise ValueError(
"trying to delete, but no such snapshot(s): ", snapshot_ids
)
assert (
handle.returncode is not None
) # none should be impossible after communicate
if handle.returncode != 0:
raise ValueError(
"forget exited with errorcode", handle.returncode, ":", output, err
)
def _load_snapshots(self) -> object:
"""
Load list of snapshots from repository
raises ValueError if the repo does not exist
"""
listing_command = self.restic_command(
"snapshots",
"--json",
)
with subprocess.Popen(
listing_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_listing_process_descriptor:
output = backup_listing_process_descriptor.communicate()[0].decode("utf-8")
if "Is there a repository at the following location?" in output:
raise ValueError("No repository! : " + output)
try:
return ResticBackupper.parse_json_output(output)
except ValueError as error:
raise ValueError("Cannot load snapshots: ", output) from error
@unlocked_repo
def get_snapshots(self) -> List[Snapshot]:
"""Get all snapshots from the repo"""
snapshots = []
for restic_snapshot in self._load_snapshots():
# Compatibility with previous snaps:
if len(restic_snapshot["tags"]) == 1:
reason = BackupReason.EXPLICIT
else:
reason = restic_snapshot["tags"][1]
snapshot = Snapshot(
id=restic_snapshot["short_id"],
created_at=restic_snapshot["time"],
service_name=restic_snapshot["tags"][0],
reason=reason,
)
snapshots.append(snapshot)
return snapshots
@staticmethod
def parse_json_output(output: str) -> object:
starting_index = ResticBackupper.json_start(output)
if starting_index == -1:
raise ValueError("There is no json in the restic output: " + output)
truncated_output = output[starting_index:]
json_messages = truncated_output.splitlines()
if len(json_messages) == 1:
try:
return json.loads(truncated_output)
except JSONDecodeError as error:
raise ValueError(
"There is no json in the restic output : " + output
) from error
result_array = []
for message in json_messages:
result_array.append(json.loads(message))
return result_array
@staticmethod
def json_start(output: str) -> int:
indices = [
output.find("["),
output.find("{"),
]
indices = [x for x in indices if x != -1]
if indices == []:
return -1
return min(indices)
@staticmethod
def has_json(output: str) -> bool:
if ResticBackupper.json_start(output) == -1:
return False
return True
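To see how the pieces above fit together, here is a small sketch of the command assembly and the JSON parsing; the Backblaze credentials and bucket name are placeholders.
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper

backupper = ResticBackupper("--b2-account", "--b2-key", ":b2:")
backupper.set_creds(account="ACCOUNT_ID", key="ACCOUNT_KEY", repo="bucket-name")

print(backupper.restic_repo())
# rclone::b2:bucket-name
print(backupper.rclone_args())
# rclone.args=serve restic --stdio --b2-account ACCOUNT_ID --b2-key ACCOUNT_KEY

# parse_json_output tolerates non-JSON chatter before the JSON payload
print(ResticBackupper.parse_json_output('chatter\n{"message_type": "summary", "snapshot_id": "abcd1234"}'))
# {'message_type': 'summary', 'snapshot_id': 'abcd1234'}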

View File

@ -1,115 +0,0 @@
from typing import Optional, List
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.jobs import Jobs, Job, JobStatus
from selfprivacy_api.services.service import Service
from selfprivacy_api.services import get_service_by_id
def job_type_prefix(service: Service) -> str:
return f"services.{service.get_id()}"
def backup_job_type(service: Service) -> str:
return f"{job_type_prefix(service)}.backup"
def autobackup_job_type() -> str:
return "backups.autobackup"
def restore_job_type(service: Service) -> str:
return f"{job_type_prefix(service)}.restore"
def get_jobs_by_service(service: Service) -> List[Job]:
result = []
for job in Jobs.get_jobs():
if job.type_id.startswith(job_type_prefix(service)) and job.status in [
JobStatus.CREATED,
JobStatus.RUNNING,
]:
result.append(job)
return result
def is_something_running_for(service: Service) -> bool:
running_jobs = [
job for job in get_jobs_by_service(service) if job.status == JobStatus.RUNNING
]
return len(running_jobs) != 0
def add_autobackup_job(services: List[Service]) -> Job:
service_names = [s.get_display_name() for s in services]
pretty_service_list: str = ", ".join(service_names)
job = Jobs.add(
type_id=autobackup_job_type(),
name="Automatic backup",
description=f"Scheduled backup for services: {pretty_service_list}",
)
return job
def add_backup_job(service: Service) -> Job:
if is_something_running_for(service):
message = (
f"Cannot start a backup of {service.get_id()}, another operation is running: "
+ get_jobs_by_service(service)[0].type_id
)
raise ValueError(message)
display_name = service.get_display_name()
job = Jobs.add(
type_id=backup_job_type(service),
name=f"Backup {display_name}",
description=f"Backing up {display_name}",
)
return job
def add_restore_job(snapshot: Snapshot) -> Job:
service = get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(f"no such service: {snapshot.service_name}")
if is_something_running_for(service):
message = (
f"Cannot start a restore of {service.get_id()}, another operation is running: "
+ get_jobs_by_service(service)[0].type_id
)
raise ValueError(message)
display_name = service.get_display_name()
job = Jobs.add(
type_id=restore_job_type(service),
name=f"Restore {display_name}",
description=f"restoring {display_name} from {snapshot.id}",
)
return job
def get_job_by_type(type_id: str) -> Optional[Job]:
for job in Jobs.get_jobs():
if job.type_id == type_id and job.status in [
JobStatus.CREATED,
JobStatus.RUNNING,
]:
return job
return None
def get_failed_job_by_type(type_id: str) -> Optional[Job]:
for job in Jobs.get_jobs():
if job.type_id == type_id and job.status == JobStatus.ERROR:
return job
return None
def get_backup_job(service: Service) -> Optional[Job]:
return get_job_by_type(backup_job_type(service))
def get_backup_fail(service: Service) -> Optional[Job]:
return get_failed_job_by_type(backup_job_type(service))
def get_restore_job(service: Service) -> Optional[Job]:
return get_job_by_type(restore_job_type(service))
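A short hypothetical walk-through of these helpers; the service id is made up and must resolve to a real registered service for the calls to succeed.
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.backup.jobs import (
    add_backup_job,
    get_backup_job,
    get_jobs_by_service,
)

service = get_service_by_id("mailserver")  # hypothetical service id
assert service is not None

job = add_backup_job(service)  # raises ValueError if a backup/restore is already running
assert get_backup_job(service).uid == job.uid  # visible while CREATED or RUNNING
print(get_jobs_by_service(service))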

View File

@ -15,31 +15,27 @@ redis = RedisPool().get_connection()
class LocalBackupSecret:
@staticmethod
def get() -> str:
def get():
"""A secret string which backblaze/other clouds do not know.
Serves as encryption key.
"""
if not LocalBackupSecret.exists():
LocalBackupSecret.reset()
return redis.get(REDIS_KEY) # type: ignore
@staticmethod
def set(secret: str):
redis.set(REDIS_KEY, secret)
return redis.get(REDIS_KEY)
@staticmethod
def reset():
new_secret = LocalBackupSecret._generate()
LocalBackupSecret.set(new_secret)
@staticmethod
def _full_reset():
redis.delete(REDIS_KEY)
LocalBackupSecret._store(new_secret)
@staticmethod
def exists() -> bool:
return redis.exists(REDIS_KEY) == 1
return redis.exists(REDIS_KEY)
@staticmethod
def _generate() -> str:
return secrets.token_urlsafe(256)
@staticmethod
def _store(secret: str):
redis.set(REDIS_KEY, secret)

View File

@ -1,31 +1,22 @@
from typing import Type
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers.backblaze import Backblaze
from selfprivacy_api.backup.providers.memory import InMemoryBackup
from selfprivacy_api.backup.providers.local_file import LocalFileBackup
from selfprivacy_api.backup.providers.none import NoBackups
PROVIDER_MAPPING: dict[BackupProviderEnum, Type[AbstractBackupProvider]] = {
BackupProviderEnum.BACKBLAZE: Backblaze,
BackupProviderEnum.MEMORY: InMemoryBackup,
BackupProviderEnum.FILE: LocalFileBackup,
BackupProviderEnum.NONE: NoBackups,
PROVIDER_MAPPING = {
BackupProvider.BACKBLAZE: Backblaze,
BackupProvider.MEMORY: InMemoryBackup,
BackupProvider.FILE: LocalFileBackup,
}
def get_provider(
provider_type: BackupProviderEnum,
) -> Type[AbstractBackupProvider]:
if provider_type not in PROVIDER_MAPPING.keys():
raise LookupError("could not look up provider", provider_type)
def get_provider(provider_type: BackupProvider) -> AbstractBackupProvider:
return PROVIDER_MAPPING[provider_type]
def get_kind(provider: AbstractBackupProvider) -> str:
"""Get the kind of the provider in the form of a string"""
return provider.name.value
for key, value in PROVIDER_MAPPING.items():
if isinstance(provider, value):
return key.value
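Either version of this module resolves providers the same way; a sketch, with placeholder credentials:
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.backup.providers import get_provider, get_kind

ProviderClass = get_provider(BackupProvider.BACKBLAZE)
provider = ProviderClass(login="ACCOUNT_ID", key="ACCOUNT_KEY")
assert get_kind(provider) == "BACKBLAZE"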

View File

@ -1,11 +1,6 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.backup.restic_backuper import ResticBackuper
class Backblaze(AbstractBackupProvider):
backupper = ResticBackupper("--b2-account", "--b2-key", ":b2:")
name = BackupProviderEnum.BACKBLAZE
backuper = ResticBackuper("--b2-account", "--b2-key", ":b2:")

View File

@ -1,11 +1,11 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.backup.restic_backuper import ResticBackuper
class LocalFileBackup(AbstractBackupProvider):
backupper = ResticBackupper("", "", ":local:")
backuper = ResticBackuper("", "", "memory")
name = BackupProviderEnum.FILE
# login and key args are for compatibility with generic provider methods. They are ignored.
def __init__(self, filename: str, login: str = "", key: str = ""):
super().__init__()
self.backuper = ResticBackuper("", "", f":local:{filename}/")

View File

@ -1,11 +1,6 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.backup.restic_backuper import ResticBackuper
class InMemoryBackup(AbstractBackupProvider):
backupper = ResticBackupper("", "", ":memory:")
name = BackupProviderEnum.MEMORY
backuper = ResticBackuper("", "", ":memory:")

View File

@ -1,11 +0,0 @@
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.none_backupper import NoneBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
class NoBackups(AbstractBackupProvider):
backupper = NoneBackupper()
name = BackupProviderEnum.NONE

View File

@ -1,25 +1,17 @@
"""
An abstract class for BackBlaze, S3 etc.
It assumes that while some providers are supported via restic/rclone, others
may require different backends
It assumes that while some providers are supported via restic/rclone, others may
require different backends
"""
from abc import ABC, abstractmethod
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,
)
from abc import ABC
from selfprivacy_api.backup.backuper import AbstractBackuper
class AbstractBackupProvider(ABC):
backupper: AbstractBackupper
@property
def backuper(self) -> AbstractBackuper:
raise NotImplementedError
name: BackupProviderEnum
def __init__(self, login="", key="", location="", repo_id=""):
self.backupper.set_creds(login, key, location)
def __init__(self, login="", key=""):
self.login = login
self.key = key
self.location = location
# We do not need to do anything with this one
# Just remember in case the app forgets
self.repo_id = repo_id

View File

@ -0,0 +1,256 @@
import subprocess
import json
import datetime
from typing import List
from collections.abc import Iterable
from selfprivacy_api.backup.backuper import AbstractBackuper
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.local_secret import LocalBackupSecret
class ResticBackuper(AbstractBackuper):
def __init__(self, login_flag: str, key_flag: str, type: str):
self.login_flag = login_flag
self.key_flag = key_flag
self.type = type
self.account = ""
self.key = ""
def set_creds(self, account: str, key: str):
self.account = account
self.key = key
def restic_repo(self, repository_name: str) -> str:
# https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
# https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
return f"rclone:{self.type}{repository_name}/sfbackup"
def rclone_args(self):
return "rclone.args=serve restic --stdio" + self.backend_rclone_args()
def backend_rclone_args(self) -> str:
acc_arg = ""
key_arg = ""
if self.account != "":
acc_arg = f"{self.login_flag} {self.account}"
if self.key != "":
key_arg = f"{self.key_flag} {self.key}"
return f"{acc_arg} {key_arg}"
def _password_command(self):
return f"echo {LocalBackupSecret.get()}"
def restic_command(self, repo_name: str, *args):
command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(repo_name),
"--password-command",
self._password_command(),
]
if args != []:
command.extend(ResticBackuper.__flatten_list(args))
return command
@staticmethod
def __flatten_list(list):
"""string-aware list flattener"""
result = []
for item in list:
if isinstance(item, Iterable) and not isinstance(item, str):
result.extend(ResticBackuper.__flatten_list(item))
continue
result.append(item)
return result
def start_backup(self, folders: List[str], repo_name: str):
"""
Start backup with restic
"""
# but maybe it is ok to accept a union of a string and an array of strings
assert not isinstance(folders, str)
backup_command = self.restic_command(
repo_name,
"backup",
"--json",
folders,
)
with subprocess.Popen(
backup_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as handle:
output = handle.communicate()[0].decode("utf-8")
try:
messages = self.parse_json_output(output)
return ResticBackuper._snapshot_from_backup_messages(
messages, repo_name
)
except ValueError as e:
raise ValueError("could not create a snapshot: ") from e
@staticmethod
def _snapshot_from_backup_messages(messages, repo_name) -> Snapshot:
for message in messages:
if message["message_type"] == "summary":
return ResticBackuper._snapshot_from_fresh_summary(message, repo_name)
raise ValueError("no summary message in restic json output")
@staticmethod
def _snapshot_from_fresh_summary(message: object, repo_name) -> Snapshot:
return Snapshot(
id=message["snapshot_id"],
created_at=datetime.datetime.now(datetime.timezone.utc),
service_name=repo_name,
)
def init(self, repo_name):
init_command = self.restic_command(
repo_name,
"init",
)
with subprocess.Popen(
init_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as process_handle:
output = process_handle.communicate()[0].decode("utf-8")
if not "created restic repository" in output:
raise ValueError("cannot init a repo: " + output)
def is_initted(self, repo_name: str) -> bool:
command = self.restic_command(
repo_name,
"check",
"--json",
)
with subprocess.Popen(command, stdout=subprocess.PIPE, shell=False) as handle:
output = handle.communicate()[0].decode("utf-8")
if not self.has_json(output):
return False
# raise NotImplementedError("error(big): " + output)
return True
def restored_size(self, repo_name, snapshot_id) -> float:
"""
Size of a snapshot
"""
command = self.restic_command(
repo_name,
"stats",
snapshot_id,
"--json",
)
with subprocess.Popen(command, stdout=subprocess.PIPE, shell=False) as handle:
output = handle.communicate()[0].decode("utf-8")
try:
parsed_output = self.parse_json_output(output)
return parsed_output["total_size"]
except ValueError as e:
raise ValueError("cannot restore a snapshot: " + output) from e
def restore_from_backup(self, repo_name, snapshot_id, folders):
"""
Restore from backup with restic
"""
# snapshots save the path of the folder in the file system
# I do not alter the signature yet because maybe this can be
# changed with flags
restore_command = self.restic_command(
repo_name,
"restore",
snapshot_id,
"--target",
"/",
)
with subprocess.Popen(
restore_command, stdout=subprocess.PIPE, shell=False
) as handle:
output = handle.communicate()[0].decode("utf-8")
if "restoring" not in output:
raise ValueError("cannot restore a snapshot: " + output)
def _load_snapshots(self, repo_name) -> object:
"""
Load list of snapshots from repository
raises ValueError if the repo does not exist
"""
listing_command = self.restic_command(
repo_name,
"snapshots",
"--json",
)
with subprocess.Popen(
listing_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_listing_process_descriptor:
output = backup_listing_process_descriptor.communicate()[0].decode("utf-8")
if "Is there a repository at the following location?" in output:
raise ValueError("No repository! : " + output)
try:
return self.parse_json_output(output)
except ValueError as e:
raise ValueError("Cannot load snapshots: ") from e
def get_snapshots(self, repo_name) -> List[Snapshot]:
"""Get all snapshots from the repo"""
snapshots = []
for restic_snapshot in self._load_snapshots(repo_name):
snapshot = Snapshot(
id=restic_snapshot["short_id"],
created_at=restic_snapshot["time"],
service_name=repo_name,
)
snapshots.append(snapshot)
return snapshots
def parse_json_output(self, output: str) -> object:
starting_index = self.json_start(output)
if starting_index == -1:
raise ValueError("There is no json in the restic output : " + output)
truncated_output = output[starting_index:]
json_messages = truncated_output.splitlines()
if len(json_messages) == 1:
return json.loads(truncated_output)
result_array = []
for message in json_messages:
result_array.append(json.loads(message))
return result_array
def json_start(self, output: str) -> int:
indices = [
output.find("["),
output.find("{"),
]
indices = [x for x in indices if x != -1]
if indices == []:
return -1
return min(indices)
def has_json(self, output: str) -> bool:
if self.json_start(output) == -1:
return False
return True

View File

@ -1,51 +1,46 @@
"""
Module for storing backup related data in redis.
"""
from typing import List, Optional
from datetime import datetime
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.models.backup.provider import BackupProviderModel
from selfprivacy_api.graphql.common_types.backup import (
AutobackupQuotas,
_AutobackupQuotas,
)
from selfprivacy_api.utils.redis_pool import RedisPool
from selfprivacy_api.utils.redis_model_storage import (
store_model_as_hash,
hash_as_model,
)
from selfprivacy_api.utils.redis_model_storage import store_model_as_hash, hash_as_model
from selfprivacy_api.services.service import Service
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_kind
# a hack to store file path.
REDIS_SNAPSHOT_CACHE_EXPIRE_SECONDS = 24 * 60 * 60 # one day
REDIS_AUTOBACKUP_ENABLED_PREFIX = "backup:autobackup:services:"
REDIS_SNAPSHOTS_PREFIX = "backups:snapshots:"
REDIS_LAST_BACKUP_PREFIX = "backups:last-backed-up:"
REDIS_INITTED_CACHE = "backups:repo_initted"
REDIS_INITTED_CACHE_PREFIX = "backups:initted_services:"
REDIS_REPO_PATH_KEY = "backups:test_repo_path"
REDIS_PROVIDER_KEY = "backups:provider"
REDIS_AUTOBACKUP_PERIOD_KEY = "backups:autobackup_period"
REDIS_AUTOBACKUP_QUOTAS_KEY = "backups:autobackup_quotas_key"
redis = RedisPool().get_connection()
class Storage:
"""Static class for storing backup related data in redis"""
@staticmethod
def reset() -> None:
"""Deletes all backup related data from redis"""
def reset():
redis.delete(REDIS_PROVIDER_KEY)
redis.delete(REDIS_REPO_PATH_KEY)
redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
redis.delete(REDIS_INITTED_CACHE)
redis.delete(REDIS_AUTOBACKUP_QUOTAS_KEY)
prefixes_to_clean = [
REDIS_INITTED_CACHE_PREFIX,
REDIS_SNAPSHOTS_PREFIX,
REDIS_LAST_BACKUP_PREFIX,
REDIS_AUTOBACKUP_ENABLED_PREFIX,
]
for prefix in prefixes_to_clean:
@ -53,146 +48,121 @@ class Storage:
redis.delete(key)
@staticmethod
def invalidate_snapshot_storage() -> None:
"""Deletes all cached snapshots from redis"""
for key in redis.keys(REDIS_SNAPSHOTS_PREFIX + "*"):
redis.delete(key)
def store_testrepo_path(path: str):
redis.set(REDIS_REPO_PATH_KEY, path)
@staticmethod
def __last_backup_key(service_id: str) -> str:
def get_testrepo_path() -> str:
if not redis.exists(REDIS_REPO_PATH_KEY):
raise ValueError(
"No test repository filepath is set, but we tried to access it"
)
return redis.get(REDIS_REPO_PATH_KEY)
@staticmethod
def services_with_autobackup() -> List[str]:
keys = redis.keys(REDIS_AUTOBACKUP_ENABLED_PREFIX + "*")
service_ids = [key.split(":")[-1] for key in keys]
return service_ids
@staticmethod
def __last_backup_key(service_id):
return REDIS_LAST_BACKUP_PREFIX + service_id
@staticmethod
def __snapshot_key(snapshot: Snapshot) -> str:
def __snapshot_key(snapshot: Snapshot):
return REDIS_SNAPSHOTS_PREFIX + snapshot.id
@staticmethod
def get_last_backup_time(service_id: str) -> Optional[datetime]:
"""Returns last backup time for a service or None if it was never backed up"""
key = Storage.__last_backup_key(service_id)
if not redis.exists(key):
return None
snapshot = hash_as_model(redis, key, Snapshot)
if not snapshot:
return None
return snapshot.created_at
@staticmethod
def store_last_timestamp(service_id: str, snapshot: Snapshot) -> None:
"""Stores last backup time for a service"""
store_model_as_hash(
redis,
Storage.__last_backup_key(service_id),
snapshot,
)
def store_last_timestamp(service_id: str, snapshot: Snapshot):
store_model_as_hash(redis, Storage.__last_backup_key(service_id), snapshot)
@staticmethod
def cache_snapshot(snapshot: Snapshot) -> None:
"""Stores snapshot metadata in redis for caching purposes"""
def cache_snapshot(snapshot: Snapshot):
snapshot_key = Storage.__snapshot_key(snapshot)
store_model_as_hash(redis, snapshot_key, snapshot)
redis.expire(snapshot_key, REDIS_SNAPSHOT_CACHE_EXPIRE_SECONDS)
@staticmethod
def delete_cached_snapshot(snapshot: Snapshot) -> None:
"""Deletes snapshot metadata from redis"""
def delete_cached_snapshot(snapshot: Snapshot):
snapshot_key = Storage.__snapshot_key(snapshot)
redis.delete(snapshot_key)
@staticmethod
def get_cached_snapshot_by_id(snapshot_id: str) -> Optional[Snapshot]:
"""Returns cached snapshot by id or None if it doesn't exist"""
key = REDIS_SNAPSHOTS_PREFIX + snapshot_id
if not redis.exists(key):
return None
return hash_as_model(redis, key, Snapshot)
@staticmethod
def get_cached_snapshots() -> List[Snapshot]:
"""Returns all cached snapshots stored in redis"""
keys: list[str] = redis.keys(REDIS_SNAPSHOTS_PREFIX + "*") # type: ignore
result: list[Snapshot] = []
keys = redis.keys(REDIS_SNAPSHOTS_PREFIX + "*")
result = []
for key in keys:
snapshot = hash_as_model(redis, key, Snapshot)
if snapshot:
result.append(snapshot)
result.append(snapshot)
return result
@staticmethod
def __autobackup_key(service_name: str) -> str:
return REDIS_AUTOBACKUP_ENABLED_PREFIX + service_name
@staticmethod
def set_autobackup(service: Service):
# shortcut this
redis.set(Storage.__autobackup_key(service.get_id()), 1)
@staticmethod
def unset_autobackup(service: Service):
"""also see disable_all_autobackup()"""
redis.delete(Storage.__autobackup_key(service.get_id()))
@staticmethod
def is_autobackup_set(service_name: str) -> bool:
return redis.exists(Storage.__autobackup_key(service_name))
@staticmethod
def autobackup_period_minutes() -> Optional[int]:
"""None means autobackup is disabled"""
if not redis.exists(REDIS_AUTOBACKUP_PERIOD_KEY):
return None
return int(redis.get(REDIS_AUTOBACKUP_PERIOD_KEY)) # type: ignore
return int(redis.get(REDIS_AUTOBACKUP_PERIOD_KEY))
@staticmethod
def store_autobackup_period_minutes(minutes: int) -> None:
"""Set the new autobackup period in minutes"""
def store_autobackup_period_minutes(minutes: int):
redis.set(REDIS_AUTOBACKUP_PERIOD_KEY, minutes)
@staticmethod
def delete_backup_period() -> None:
"""Set the autobackup period to none, effectively disabling autobackup"""
def delete_backup_period():
redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
@staticmethod
def store_provider(provider: AbstractBackupProvider) -> None:
"""Stores backup provider auth data in redis"""
model = BackupProviderModel(
kind=get_kind(provider),
login=provider.login,
key=provider.key,
location=provider.location,
repo_id=provider.repo_id,
)
store_model_as_hash(redis, REDIS_PROVIDER_KEY, model)
if Storage.load_provider() != model:
raise IOError("could not store the provider model: ", model.dict)
@staticmethod
def load_provider() -> Optional[BackupProviderModel]:
"""Loads backup storage provider auth data from redis"""
provider_model = hash_as_model(
def store_provider(provider: AbstractBackupProvider):
store_model_as_hash(
redis,
REDIS_PROVIDER_KEY,
BackupProviderModel,
BackupProviderModel(
kind=get_kind(provider), login=provider.login, key=provider.key
),
)
@staticmethod
def load_provider() -> BackupProviderModel:
provider_model = hash_as_model(redis, REDIS_PROVIDER_KEY, BackupProviderModel)
return provider_model
@staticmethod
def has_init_mark() -> bool:
"""Returns True if the repository was initialized"""
if redis.exists(REDIS_INITTED_CACHE):
def has_init_mark(service: Service) -> bool:
repo_name = service.get_id()
if redis.exists(REDIS_INITTED_CACHE_PREFIX + repo_name):
return True
return False
@staticmethod
def mark_as_init():
"""Marks the repository as initialized"""
redis.set(REDIS_INITTED_CACHE, 1)
@staticmethod
def mark_as_uninitted():
"""Marks the repository as initialized"""
redis.delete(REDIS_INITTED_CACHE)
@staticmethod
def set_autobackup_quotas(quotas: AutobackupQuotas) -> None:
store_model_as_hash(redis, REDIS_AUTOBACKUP_QUOTAS_KEY, quotas.to_pydantic())
@staticmethod
def autobackup_quotas() -> AutobackupQuotas:
quotas_model = hash_as_model(
redis, REDIS_AUTOBACKUP_QUOTAS_KEY, _AutobackupQuotas
)
if quotas_model is None:
unlimited_quotas = AutobackupQuotas(
last=-1,
daily=-1,
weekly=-1,
monthly=-1,
yearly=-1,
)
return unlimited_quotas
return AutobackupQuotas.from_pydantic(quotas_model) # pylint: disable=no-member
def mark_as_init(service: Service):
repo_name = service.get_id()
redis.set(REDIS_INITTED_CACHE_PREFIX + repo_name, 1)
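A sketch of the redis bookkeeping done by this class; the snapshot values are placeholders, the module path is assumed, and a running redis is required.
from datetime import datetime, timezone

from selfprivacy_api.backup.storage import Storage
from selfprivacy_api.models.backup.snapshot import Snapshot

snap = Snapshot(
    id="abcd1234",
    service_name="mailserver",
    created_at=datetime.now(timezone.utc),
)
Storage.cache_snapshot(snap)                      # backups:snapshots:abcd1234, expires in a day
Storage.store_last_timestamp("mailserver", snap)  # backups:last-backed-up:mailserver, non-expiring
print(Storage.get_last_backup_time("mailserver"))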

View File

@ -1,117 +1,31 @@
"""
The tasks module contains the worker tasks that are used to back up and restore
"""
from datetime import datetime, timezone
from datetime import datetime
from selfprivacy_api.graphql.common_types.backup import (
RestoreStrategy,
BackupReason,
)
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils.huey import huey
from huey import crontab
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.service import Service
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.jobs import add_autobackup_job
from selfprivacy_api.jobs import Jobs, JobStatus, Job
SNAPSHOT_CACHE_TTL_HOURS = 6
def validate_datetime(dt: datetime) -> bool:
"""
Validates that it is time to back up.
Also ensures that the timezone-aware time is used.
"""
if dt.tzinfo is None:
return Backups.is_time_to_backup(dt.replace(tzinfo=timezone.utc))
def validate_datetime(dt: datetime):
# dt = datetime.now(timezone.utc)
if dt.timetz is None:
raise ValueError(
"""
huey passed in the timezone-unaware time!
Post it in support chat or maybe try uncommenting a line above
"""
)
return Backups.is_time_to_backup(dt)
# huey tasks need to return something
@huey.task()
def start_backup(service_id: str, reason: BackupReason = BackupReason.EXPLICIT) -> bool:
"""
The worker task that starts the backup process.
"""
service = get_service_by_id(service_id)
if service is None:
raise ValueError(f"No such service: {service_id}")
Backups.back_up(service, reason)
def start_backup(service: Service) -> bool:
Backups.back_up(service)
return True
@huey.task()
def prune_autobackup_snapshots(job: Job) -> bool:
"""
Remove all autobackup snapshots that do not fit into quotas set
"""
Jobs.update(job, JobStatus.RUNNING)
try:
Backups.prune_all_autosnaps()
except Exception as e:
Jobs.update(job, JobStatus.ERROR, error=type(e).__name__ + ":" + str(e))
return False
Jobs.update(job, JobStatus.FINISHED)
return True
@huey.task()
def restore_snapshot(
snapshot: Snapshot,
strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
) -> bool:
"""
The worker task that starts the restore process.
"""
Backups.restore_snapshot(snapshot, strategy)
return True
def do_autobackup() -> None:
"""
Body of autobackup task, broken out to test it
For some reason, we cannot launch periodic huey tasks
inside tests
"""
time = datetime.utcnow().replace(tzinfo=timezone.utc)
services_to_back_up = Backups.services_to_back_up(time)
if not services_to_back_up:
return
job = add_autobackup_job(services_to_back_up)
progress_per_service = 100 // len(services_to_back_up)
progress = 0
Jobs.update(job, JobStatus.RUNNING, progress=progress)
for service in services_to_back_up:
try:
Backups.back_up(service, BackupReason.AUTO)
except Exception as error:
Jobs.update(
job,
status=JobStatus.ERROR,
error=type(error).__name__ + ": " + str(error),
)
return
progress = progress + progress_per_service
Jobs.update(job, JobStatus.RUNNING, progress=progress)
Jobs.update(job, JobStatus.FINISHED)
@huey.periodic_task(validate_datetime=validate_datetime)
def automatic_backup() -> None:
"""
The worker periodic task that starts the automatic backup process.
"""
do_autobackup()
@huey.periodic_task(crontab(hour="*/" + str(SNAPSHOT_CACHE_TTL_HOURS)))
def reload_snapshot_cache():
Backups.force_snapshot_cache_reload()
def automatic_backup():
time = datetime.now()
for service in Backups.services_to_back_up(time):
start_backup(service)
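Roughly, the scheduling works like this: huey evaluates validate_datetime about once a minute and only enqueues the periodic task when it returns True. A minimal sketch (requires redis):
from datetime import datetime, timezone

from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.tasks import validate_datetime

Backups.set_autobackup_period_minutes(60)
# True only when at least one enabled service has not been backed up
# within the configured period
print(validate_datetime(datetime.now(timezone.utc)))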

View File

@ -1,35 +0,0 @@
import subprocess
from os.path import exists
from typing import Generator
def output_yielder(command) -> Generator[str, None, None]:
"""Note: If you break during iteration, it kills the process"""
with subprocess.Popen(
command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
) as handle:
if handle is None or handle.stdout is None:
raise ValueError("could not run command: ", command)
try:
for line in iter(handle.stdout.readline, ""):
if "NOTICE:" not in line:
yield line
except GeneratorExit:
handle.kill()
def sync(src_path: str, dest_path: str):
"""a wrapper around rclone sync"""
if not exists(src_path):
raise ValueError("source dir for rclone sync must exist")
rclone_command = ["rclone", "sync", "-P", src_path, dest_path]
for raw_message in output_yielder(rclone_command):
if "ERROR" in raw_message:
raise ValueError(raw_message)
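Both helpers are thin wrappers around subprocess and rclone; a usage sketch (the restic and rclone binaries must be installed, and the paths are hypothetical):
from selfprivacy_api.backup.util import output_yielder, sync

for line in output_yielder(["restic", "version"]):
    print(line, end="")

# the source directory must exist, otherwise ValueError is raised
sync("/var/lib/example-src/", "/var/lib/example-dst/")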

View File

@ -27,4 +27,4 @@ async def get_token_header(
def get_api_version() -> str:
"""Get API version"""
return "3.1.0"
return "2.1.2"

View File

@ -1,10 +1,13 @@
"""GraphQL API for SelfPrivacy."""
# pylint: disable=too-few-public-methods
import typing
from strawberry.permission import BasePermission
from strawberry.types import Info
from strawberry.extensions import Extension
from selfprivacy_api.actions.api_tokens import is_token_valid
from selfprivacy_api.utils.localization import Localization
class IsAuthenticated(BasePermission):
@ -19,3 +22,14 @@ class IsAuthenticated(BasePermission):
if token is None:
return False
return is_token_valid(token.replace("Bearer ", ""))
class LocaleExtension(Extension):
"""Parse the Accept-Language header and set the locale in the context as one of the supported locales."""
def resolve(self, _next, root, info: Info, *args, **kwargs):
locale = Localization().get_locale(
info.context["request"].headers.get("Accept-Language")
)
info.context["locale"] = locale
return _next(root, info, *args, **kwargs)
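The extension is meant to be registered on the strawberry schema; a minimal sketch of the wiring (the Query type here is a stand-in, since the project's actual schema module is not shown in this diff):
import strawberry
from strawberry.types import Info

from selfprivacy_api.graphql import LocaleExtension

@strawberry.type
class Query:
    @strawberry.field
    def greeting(self, info: Info) -> str:
        # LocaleExtension has already put the negotiated locale into the context
        return f"locale: {info.context['locale']}"

schema = strawberry.Schema(query=Query, extensions=[LocaleExtension])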

View File

@ -1,36 +0,0 @@
"""Backup"""
# pylint: disable=too-few-public-methods
from enum import Enum
import strawberry
from pydantic import BaseModel
@strawberry.enum
class RestoreStrategy(Enum):
INPLACE = "INPLACE"
DOWNLOAD_VERIFY_OVERWRITE = "DOWNLOAD_VERIFY_OVERWRITE"
@strawberry.enum
class BackupReason(Enum):
EXPLICIT = "EXPLICIT"
AUTO = "AUTO"
PRE_RESTORE = "PRE_RESTORE"
class _AutobackupQuotas(BaseModel):
last: int
daily: int
weekly: int
monthly: int
yearly: int
@strawberry.experimental.pydantic.type(model=_AutobackupQuotas, all_fields=True)
class AutobackupQuotas:
pass
@strawberry.experimental.pydantic.input(model=_AutobackupQuotas, all_fields=True)
class AutobackupQuotasInput:
pass

View File

@ -0,0 +1,9 @@
import datetime
import strawberry
@strawberry.type
class SnapshotInfo:
id: str
service_name: str
created_at: datetime.datetime

View File

@ -2,7 +2,6 @@ import typing
import strawberry
# TODO: use https://strawberry.rocks/docs/integrations/pydantic when it is stable
@strawberry.type
class DnsRecord:
"""DNS record"""
@ -12,4 +11,3 @@ class DnsRecord:
content: str
ttl: int
priority: typing.Optional[int]
display_name: str

View File

@ -12,7 +12,6 @@ class ApiJob:
"""Job type for GraphQL."""
uid: str
type_id: str
name: str
description: str
status: str
@ -29,7 +28,6 @@ def job_to_api_job(job: Job) -> ApiJob:
"""Convert a Job from jobs controller to a GraphQL ApiJob."""
return ApiJob(
uid=str(job.uid),
type_id=job.type_id,
name=job.name,
description=job.description,
status=job.status.name,

View File

@ -1,24 +1,21 @@
from enum import Enum
from typing import Optional, List
import datetime
import typing
import strawberry
from selfprivacy_api.graphql.common_types.backup import BackupReason
from strawberry.types import Info
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.graphql.common_types.backup_snapshot import SnapshotInfo
from selfprivacy_api.services import get_service_by_id, get_services_by_location
from selfprivacy_api.services import Service as ServiceInterface
from selfprivacy_api.services import ServiceDnsRecord
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.utils.network import get_ip4, get_ip6
from selfprivacy_api.utils.localization import Localization as L10n
def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
def get_usages(root: "StorageVolume", locale: str) -> list["StorageUsageInterface"]:
"""Get usages of a volume"""
return [
ServiceStorageUsage(
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_drive()),
@ -36,20 +33,21 @@ class StorageVolume:
used_space: str
root: bool
name: str
model: Optional[str]
serial: Optional[str]
model: typing.Optional[str]
serial: typing.Optional[str]
type: str
@strawberry.field
def usages(self) -> list["StorageUsageInterface"]:
def usages(self, info: Info) -> list["StorageUsageInterface"]:
"""Get usages of a volume"""
return get_usages(self)
locale = info.context["locale"]
return get_usages(self, locale)
@strawberry.interface
class StorageUsageInterface:
used_space: str
volume: Optional[StorageVolume]
volume: typing.Optional[StorageVolume]
title: str
@ -57,7 +55,7 @@ class StorageUsageInterface:
class ServiceStorageUsage(StorageUsageInterface):
"""Storage usage for a service"""
service: Optional["Service"]
service: typing.Optional["Service"]
@strawberry.enum
@ -71,7 +69,7 @@ class ServiceStatusEnum(Enum):
OFF = "OFF"
def get_storage_usage(root: "Service") -> ServiceStorageUsage:
def get_storage_usage(root: "Service", locale: str) -> ServiceStorageUsage:
"""Get storage usage for a service"""
service = get_service_by_id(root.id)
if service is None:
@ -82,27 +80,13 @@ def get_storage_usage(root: "Service") -> ServiceStorageUsage:
volume=get_volume_by_id("sda1"),
)
return ServiceStorageUsage(
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_drive()),
)
# TODO: This won't be needed when deriving DnsRecord via strawberry pydantic integration
# https://strawberry.rocks/docs/integrations/pydantic
# Remove when the link above says it got stable.
def service_dns_to_graphql(record: ServiceDnsRecord) -> DnsRecord:
return DnsRecord(
record_type=record.type,
name=record.name,
content=record.content,
ttl=record.ttl,
priority=record.priority,
display_name=record.display_name,
)
@strawberry.type
class Service:
id: str
@ -112,58 +96,48 @@ class Service:
is_movable: bool
is_required: bool
is_enabled: bool
can_be_backed_up: bool
backup_description: str
status: ServiceStatusEnum
url: Optional[str]
url: typing.Optional[str]
dns_records: typing.Optional[typing.List[DnsRecord]]
@strawberry.field
def dns_records(self) -> Optional[List[DnsRecord]]:
service = get_service_by_id(self.id)
if service is None:
raise LookupError(f"no service {self.id}. Should be unreachable")
raw_records = service.get_dns_records(get_ip4(), get_ip6())
dns_records = [service_dns_to_graphql(record) for record in raw_records]
return dns_records
@strawberry.field
def storage_usage(self) -> ServiceStorageUsage:
def storage_usage(self, info: Info) -> ServiceStorageUsage:
"""Get storage usage for a service"""
return get_storage_usage(self)
locale = info.context["locale"]
return get_storage_usage(self, locale)
# TODO: fill this
@strawberry.field
def backup_snapshots(self) -> Optional[List["SnapshotInfo"]]:
def backup_snapshots(self) -> typing.Optional[typing.List[SnapshotInfo]]:
return None
@strawberry.type
class SnapshotInfo:
id: str
service: Service
created_at: datetime.datetime
reason: BackupReason
def service_to_graphql_service(service: ServiceInterface) -> Service:
def service_to_graphql_service(service: ServiceInterface, locale: str) -> Service:
"""Convert service to graphql service"""
l10n = L10n()
return Service(
id=service.get_id(),
display_name=service.get_display_name(),
description=service.get_description(),
display_name=l10n.get(service.get_display_name(), locale),
description=l10n.get(service.get_description(), locale),
svg_icon=service.get_svg_icon(),
is_movable=service.is_movable(),
is_required=service.is_required(),
is_enabled=service.is_enabled(),
can_be_backed_up=service.can_be_backed_up(),
backup_description=service.get_backup_description(),
status=ServiceStatusEnum(service.get_status().value),
url=service.get_url(),
dns_records=[
DnsRecord(
record_type=record.type,
name=record.name,
content=record.content,
ttl=record.ttl,
priority=record.priority,
)
for record in service.get_dns_records()
],
)
def get_volume_by_id(volume_id: str) -> Optional[StorageVolume]:
def get_volume_by_id(volume_id: str) -> typing.Optional[StorageVolume]:
"""Get volume by id"""
volume = BlockDevices().get_block_device(volume_id)
if volume is None:

View File

@ -17,6 +17,7 @@ class UserType(Enum):
@strawberry.type
class User:
user_type: UserType
username: str
# userHomeFolderspace: UserHomeFolderUsage
@ -31,6 +32,7 @@ class UserMutationReturn(MutationReturnInterface):
def get_user_by_username(username: str) -> typing.Optional[User]:
user = users_actions.get_user_by_username(username)
if user is None:
return None

View File

@ -1,241 +0,0 @@
import typing
import strawberry
from selfprivacy_api.jobs import Jobs
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
GenericJobMutationReturn,
MutationReturnInterface,
)
from selfprivacy_api.graphql.queries.backup import BackupConfiguration
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.graphql.common_types.backup import (
AutobackupQuotasInput,
RestoreStrategy,
)
from selfprivacy_api.backup import Backups
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.backup.tasks import (
start_backup,
restore_snapshot,
prune_autobackup_snapshots,
)
from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
@strawberry.input
class InitializeRepositoryInput:
"""Initialize repository input"""
provider: BackupProvider
# The following field may become optional for other providers?
# Backblaze takes bucket id and name
location_id: str
location_name: str
# Key ID and key for Backblaze
login: str
password: str
@strawberry.type
class GenericBackupConfigReturn(MutationReturnInterface):
"""Generic backup config return"""
configuration: typing.Optional[BackupConfiguration]
@strawberry.type
class BackupMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def initialize_repository(
self, repository: InitializeRepositoryInput
) -> GenericBackupConfigReturn:
"""Initialize a new repository"""
Backups.set_provider(
kind=repository.provider,
login=repository.login,
key=repository.password,
location=repository.location_name,
repo_id=repository.location_id,
)
Backups.init_repo()
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_repository(self) -> GenericBackupConfigReturn:
"""Remove repository"""
Backups.reset()
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def set_autobackup_period(
self, period: typing.Optional[int] = None
) -> GenericBackupConfigReturn:
"""Set autobackup period. None is to disable autobackup"""
if period is not None:
Backups.set_autobackup_period_minutes(period)
else:
Backups.set_autobackup_period_minutes(0)
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def set_autobackup_quotas(
self, quotas: AutobackupQuotasInput
) -> GenericBackupConfigReturn:
"""
Set autobackup quotas.
Values <=0 for any timeframe mean no limits for that timeframe.
To disable autobackup, use the autobackup period setting, not this mutation.
"""
job = Jobs.add(
name="Trimming autobackup snapshots",
type_id="backups.autobackup_trimming",
description="Pruning the excessive snapshots after the new autobackup quotas are set",
)
try:
Backups.set_autobackup_quotas(quotas)
# this task is async and can fail with only a job to report the error
prune_autobackup_snapshots(job)
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
except Exception as e:
return GenericBackupConfigReturn(
success=False,
message=type(e).__name__ + ":" + str(e),
code=400,
configuration=Backup().configuration(),
)
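As an aside, the "values <= 0 mean no limit" rule from the docstring above could be applied per timeframe roughly like this (a minimal sketch, not the actual pruning code; it assumes the snapshot list is already sorted newest-first):

def effective_limit(quota: int) -> int | None:
    """Interpret a configured quota: zero or negative means 'keep everything'."""
    return None if quota <= 0 else quota

def snapshots_to_keep(snapshots: list, quota: int) -> list:
    limit = effective_limit(quota)
    return snapshots if limit is None else snapshots[:limit]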
@strawberry.mutation(permission_classes=[IsAuthenticated])
def start_backup(self, service_id: str) -> GenericJobMutationReturn:
"""Start backup"""
service = get_service_by_id(service_id)
if service is None:
return GenericJobMutationReturn(
success=False,
code=300,
message=f"nonexistent service: {service_id}",
job=None,
)
job = add_backup_job(service)
start_backup(service_id)
return GenericJobMutationReturn(
success=True,
code=200,
message="Backup job queued",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restore_backup(
self,
snapshot_id: str,
strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
) -> GenericJobMutationReturn:
"""Restore backup"""
snap = Backups.get_snapshot_by_id(snapshot_id)
if snap is None:
return GenericJobMutationReturn(
success=False,
code=404,
message=f"No such snapshot: {snapshot_id}",
job=None,
)
service = get_service_by_id(snap.service_name)
if service is None:
return GenericJobMutationReturn(
success=False,
code=404,
message=f"nonexistent service: {snap.service_name}",
job=None,
)
try:
job = add_restore_job(snap)
except ValueError as error:
return GenericJobMutationReturn(
success=False,
code=400,
message=str(error),
job=None,
)
restore_snapshot(snap, strategy)
return GenericJobMutationReturn(
success=True,
code=200,
message="restore job created",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def forget_snapshot(self, snapshot_id: str) -> GenericMutationReturn:
"""Forget a snapshot.
Makes it inaccessible from the server.
The encrypted data will eventually become unrecoverable
from the backup server as well, but not immediately."""
snap = Backups.get_snapshot_by_id(snapshot_id)
if snap is None:
return GenericMutationReturn(
success=False,
code=404,
message=f"snapshot {snapshot_id} not found",
)
try:
Backups.forget_snapshot(snap)
return GenericMutationReturn(
success=True,
code=200,
message="",
)
except Exception as error:
return GenericMutationReturn(
success=False,
code=400,
message=str(error),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def force_snapshots_reload(self) -> GenericMutationReturn:
"""Force snapshots reload"""
Backups.force_snapshot_cache_reload()
return GenericMutationReturn(
success=True,
code=200,
message="",
)

View File

@ -1,216 +0,0 @@
"""Deprecated mutations
A mistake was made: mutations were not grouped, and were instead
placed at the root of the mutations schema. In this file, we import all the
grouped mutations and provide them at the root for backwards compatibility.
"""
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.user import UserMutationReturn
from selfprivacy_api.graphql.mutations.api_mutations import (
ApiKeyMutationReturn,
ApiMutations,
DeviceApiTokenMutationReturn,
)
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericMutationReturn,
)
from selfprivacy_api.graphql.mutations.services_mutations import (
ServiceJobMutationReturn,
ServiceMutationReturn,
ServicesMutations,
)
from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
from selfprivacy_api.graphql.mutations.system_mutations import (
AutoUpgradeSettingsMutationReturn,
SystemMutations,
TimezoneMutationReturn,
)
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
def deprecated_mutation(func, group, auth=True):
return strawberry.mutation(
resolver=func,
permission_classes=[IsAuthenticated] if auth else [],
deprecation_reason=f"Use `{group}.{func.__name__}` instead",
)
@strawberry.type
class DeprecatedApiMutations:
get_new_recovery_api_key: ApiKeyMutationReturn = deprecated_mutation(
ApiMutations.get_new_recovery_api_key,
"api",
)
use_recovery_api_key: DeviceApiTokenMutationReturn = deprecated_mutation(
ApiMutations.use_recovery_api_key,
"api",
auth=False,
)
refresh_device_api_token: DeviceApiTokenMutationReturn = deprecated_mutation(
ApiMutations.refresh_device_api_token,
"api",
)
delete_device_api_token: GenericMutationReturn = deprecated_mutation(
ApiMutations.delete_device_api_token,
"api",
)
get_new_device_api_key: ApiKeyMutationReturn = deprecated_mutation(
ApiMutations.get_new_device_api_key,
"api",
)
invalidate_new_device_api_key: GenericMutationReturn = deprecated_mutation(
ApiMutations.invalidate_new_device_api_key,
"api",
)
authorize_with_new_device_api_key: DeviceApiTokenMutationReturn = (
deprecated_mutation(
ApiMutations.authorize_with_new_device_api_key,
"api",
auth=False,
)
)
@strawberry.type
class DeprecatedSystemMutations:
change_timezone: TimezoneMutationReturn = deprecated_mutation(
SystemMutations.change_timezone,
"system",
)
change_auto_upgrade_settings: AutoUpgradeSettingsMutationReturn = (
deprecated_mutation(
SystemMutations.change_auto_upgrade_settings,
"system",
)
)
run_system_rebuild: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_rebuild,
"system",
)
run_system_rollback: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_rollback,
"system",
)
run_system_upgrade: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_upgrade,
"system",
)
reboot_system: GenericMutationReturn = deprecated_mutation(
SystemMutations.reboot_system,
"system",
)
pull_repository_changes: GenericMutationReturn = deprecated_mutation(
SystemMutations.pull_repository_changes,
"system",
)
@strawberry.type
class DeprecatedUsersMutations:
create_user: UserMutationReturn = deprecated_mutation(
UsersMutations.create_user,
"users",
)
delete_user: GenericMutationReturn = deprecated_mutation(
UsersMutations.delete_user,
"users",
)
update_user: UserMutationReturn = deprecated_mutation(
UsersMutations.update_user,
"users",
)
add_ssh_key: UserMutationReturn = deprecated_mutation(
UsersMutations.add_ssh_key,
"users",
)
remove_ssh_key: UserMutationReturn = deprecated_mutation(
UsersMutations.remove_ssh_key,
"users",
)
@strawberry.type
class DeprecatedStorageMutations:
resize_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.resize_volume,
"storage",
)
mount_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.mount_volume,
"storage",
)
unmount_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.unmount_volume,
"storage",
)
migrate_to_binds: GenericJobMutationReturn = deprecated_mutation(
StorageMutations.migrate_to_binds,
"storage",
)
@strawberry.type
class DeprecatedServicesMutations:
enable_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.enable_service,
"services",
)
disable_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.disable_service,
"services",
)
stop_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.stop_service,
"services",
)
start_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.start_service,
"services",
)
restart_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.restart_service,
"services",
)
move_service: ServiceJobMutationReturn = deprecated_mutation(
ServicesMutations.move_service,
"services",
)
@strawberry.type
class DeprecatedJobMutations:
remove_job: GenericMutationReturn = deprecated_mutation(
JobMutations.remove_job,
"jobs",
)

View File

@ -17,5 +17,5 @@ class GenericMutationReturn(MutationReturnInterface):
@strawberry.type
class GenericJobMutationReturn(MutationReturnInterface):
class GenericJobButationReturn(MutationReturnInterface):
job: typing.Optional[ApiJob] = None

View File

@ -1,29 +1,23 @@
"""Services mutations"""
# pylint: disable=too-few-public-methods
from threading import local
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.jobs import JobStatus
from traceback import format_tb as format_traceback
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericMutationReturn,
)
from selfprivacy_api.graphql.common_types.service import (
Service,
service_to_graphql_service,
)
from selfprivacy_api.actions.services import (
move_service,
ServiceNotFoundError,
VolumeNotFoundError,
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobButationReturn,
GenericMutationReturn,
)
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.utils.block_devices import BlockDevices
@strawberry.type
@ -42,7 +36,7 @@ class MoveServiceInput:
@strawberry.type
class ServiceJobMutationReturn(GenericJobMutationReturn):
class ServiceJobMutationReturn(GenericJobButationReturn):
"""Service job mutation return type."""
service: typing.Optional[Service] = None
@ -53,59 +47,47 @@ class ServicesMutations:
"""Services mutations."""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def enable_service(self, service_id: str) -> ServiceMutationReturn:
def enable_service(self, service_id: str, info: Info) -> ServiceMutationReturn:
"""Enable service."""
try:
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.enable()
except Exception as e:
locale = info.context["locale"]
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message=pretty_error(e),
code=400,
message="Service not found.",
code=404,
)
service.enable()
return ServiceMutationReturn(
success=True,
message="Service enabled.",
code=200,
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def disable_service(self, service_id: str) -> ServiceMutationReturn:
def disable_service(self, service_id: str, info: Info) -> ServiceMutationReturn:
"""Disable service."""
try:
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.disable()
except Exception as e:
locale = info.context["locale"]
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message=pretty_error(e),
code=400,
message="Service not found.",
code=404,
)
service.disable()
return ServiceMutationReturn(
success=True,
message="Service disabled.",
code=200,
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def stop_service(self, service_id: str) -> ServiceMutationReturn:
def stop_service(self, service_id: str, info: Info) -> ServiceMutationReturn:
"""Stop service."""
locale = info.context["locale"]
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
@ -118,12 +100,13 @@ class ServicesMutations:
success=True,
message="Service stopped.",
code=200,
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def start_service(self, service_id: str) -> ServiceMutationReturn:
def start_service(self, service_id: str, info: Info) -> ServiceMutationReturn:
"""Start service."""
locale = info.context["locale"]
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
@ -136,12 +119,13 @@ class ServicesMutations:
success=True,
message="Service started.",
code=200,
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restart_service(self, service_id: str) -> ServiceMutationReturn:
def restart_service(self, service_id: str, info: Info) -> ServiceMutationReturn:
"""Restart service."""
locale = info.context["locale"]
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
@ -154,64 +138,42 @@ class ServicesMutations:
success=True,
message="Service restarted.",
code=200,
service=service_to_graphql_service(service),
service=service_to_graphql_service(service, locale),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
def move_service(
self, input: MoveServiceInput, info: Info
) -> ServiceJobMutationReturn:
"""Move service."""
# We need a service instance for a reply later
locale = info.context["locale"]
service = get_service_by_id(input.service_id)
if service is None:
return ServiceJobMutationReturn(
success=False,
message=f"Service does not exist: {input.service_id}",
message="Service not found.",
code=404,
)
try:
job = move_service(input.service_id, input.location)
except (ServiceNotFoundError, VolumeNotFoundError) as e:
if not service.is_movable():
return ServiceJobMutationReturn(
success=False,
message=pretty_error(e),
message="Service is not movable.",
code=400,
service=service_to_graphql_service(service, locale),
)
volume = BlockDevices().get_block_device(input.location)
if volume is None:
return ServiceJobMutationReturn(
success=False,
message="Volume not found.",
code=404,
service=service_to_graphql_service(service, locale),
)
except Exception as e:
return ServiceJobMutationReturn(
success=False,
message=pretty_error(e),
code=400,
service=service_to_graphql_service(service),
)
if job.status in [JobStatus.CREATED, JobStatus.RUNNING]:
return ServiceJobMutationReturn(
success=True,
message="Started moving the service.",
code=200,
service=service_to_graphql_service(service),
job=job_to_api_job(job),
)
elif job.status == JobStatus.FINISHED:
return ServiceJobMutationReturn(
success=True,
message="Service moved.",
code=200,
service=service_to_graphql_service(service),
job=job_to_api_job(job),
)
else:
return ServiceJobMutationReturn(
success=False,
message=f"While moving service and performing the step '{job.status_text}', error occured: {job.error}",
code=400,
service=service_to_graphql_service(service),
job=job_to_api_job(job),
)
def pretty_error(e: Exception) -> str:
traceback = "/r".join(format_traceback(e.__traceback__))
return type(e).__name__ + ": " + str(e) + ": " + traceback
job = service.move_to_volume(volume)
return ServiceJobMutationReturn(
success=True,
message="Service moved.",
code=200,
service=service_to_graphql_service(service, locale),
job=job_to_api_job(job),
)
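The pretty_error helper that appears above condenses an exception into a single message used by several of these mutations; a quick illustration of its output, assuming it is defined exactly as shown:

try:
    raise ValueError("bad volume")
except ValueError as err:
    print(pretty_error(err))
    # -> "ValueError: bad volume: ..." followed by the formatted traceback lines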

View File

@ -0,0 +1,102 @@
#!/usr/bin/env python3
"""Users management module"""
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.actions.users import UserNotFound
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
remove_ssh_key,
)
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
@strawberry.input
class SshMutationInput:
"""Input type for ssh mutation"""
username: str
ssh_key: str
@strawberry.type
class SshMutations:
"""Mutations ssh"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Add a new ssh key"""
try:
create_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyAlreadyExists:
return UserMutationReturn(
success=False,
message="Key already exists",
code=409,
)
except InvalidPublicKey:
return UserMutationReturn(
success=False,
message="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
code=400,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="New SSH key successfully written",
code=201,
user=get_user_by_username(ssh_input.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Remove ssh key from user"""
try:
remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyNotFound:
return UserMutationReturn(
success=False,
message="Key not found",
code=404,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="SSH key successfully removed",
code=200,
user=get_user_by_username(ssh_input.username),
)

View File

@ -4,7 +4,7 @@ from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericJobButationReturn,
GenericMutationReturn,
)
from selfprivacy_api.jobs.migrate_to_binds import (
@ -79,10 +79,10 @@ class StorageMutations:
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobMutationReturn:
def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobButationReturn:
"""Migrate to binds"""
if is_bind_migrated():
return GenericJobMutationReturn(
return GenericJobButationReturn(
success=False, code=409, message="Already migrated to binds"
)
job = start_bind_migration(
@ -94,7 +94,7 @@ class StorageMutations:
pleroma_block_device=input.pleroma_block_device,
)
)
return GenericJobMutationReturn(
return GenericJobButationReturn(
success=True,
code=200,
message="Migration to binds started, rebuild the system to apply changes",

View File

@ -3,18 +3,12 @@
import typing
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericMutationReturn,
MutationReturnInterface,
GenericJobMutationReturn,
)
import selfprivacy_api.actions.system as system_actions
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.jobs.nix_collect_garbage import start_nix_collect_garbage
import selfprivacy_api.actions.ssh as ssh_actions
@strawberry.type
@ -32,22 +26,6 @@ class AutoUpgradeSettingsMutationReturn(MutationReturnInterface):
allowReboot: bool
@strawberry.type
class SSHSettingsMutationReturn(MutationReturnInterface):
"""A return type for after changing SSH settings"""
enable: bool
password_authentication: bool
@strawberry.input
class SSHSettingsInput:
"""Input type for SSH settings"""
enable: bool
password_authentication: bool
@strawberry.input
class AutoUpgradeSettingsInput:
"""Input type for auto upgrade settings"""
@ -99,90 +77,40 @@ class SystemMutations:
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def change_ssh_settings(
self, settings: SSHSettingsInput
) -> SSHSettingsMutationReturn:
"""Change ssh settings of the server."""
ssh_actions.set_ssh_settings(
enable=settings.enable,
password_authentication=settings.password_authentication,
)
new_settings = ssh_actions.get_ssh_settings()
return SSHSettingsMutationReturn(
def run_system_rebuild(self) -> GenericMutationReturn:
system_actions.rebuild_system()
return GenericMutationReturn(
success=True,
message="SSH settings changed",
message="Starting rebuild system",
code=200,
enable=new_settings.enable,
password_authentication=new_settings.passwordAuthentication,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_rebuild(self) -> GenericJobMutationReturn:
try:
job = system_actions.rebuild_system()
return GenericJobMutationReturn(
success=True,
message="Starting system rebuild",
code=200,
job=job_to_api_job(job),
)
except system_actions.ShellException as e:
return GenericJobMutationReturn(
success=False,
message=str(e),
code=500,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_rollback(self) -> GenericMutationReturn:
system_actions.rollback_system()
try:
return GenericMutationReturn(
success=True,
message="Starting system rollback",
code=200,
)
except system_actions.ShellException as e:
return GenericMutationReturn(
success=False,
message=str(e),
code=500,
)
return GenericMutationReturn(
success=True,
message="Starting rebuild system",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_upgrade(self) -> GenericJobMutationReturn:
try:
job = system_actions.upgrade_system()
return GenericJobMutationReturn(
success=True,
message="Starting system upgrade",
code=200,
job=job_to_api_job(job),
)
except system_actions.ShellException as e:
return GenericJobMutationReturn(
success=False,
message=str(e),
code=500,
)
def run_system_upgrade(self) -> GenericMutationReturn:
system_actions.upgrade_system()
return GenericMutationReturn(
success=True,
message="Starting rebuild system",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def reboot_system(self) -> GenericMutationReturn:
system_actions.reboot_system()
try:
return GenericMutationReturn(
success=True,
message="System reboot has started",
code=200,
)
except system_actions.ShellException as e:
return GenericMutationReturn(
success=False,
message=str(e),
code=500,
)
return GenericMutationReturn(
success=True,
message="System reboot has started",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def pull_repository_changes(self) -> GenericMutationReturn:
@ -198,14 +126,3 @@ class SystemMutations:
message=f"Failed to pull repository changes:\n{result.data}",
code=500,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def nix_collect_garbage(self) -> GenericJobMutationReturn:
job = start_nix_collect_garbage()
return GenericJobMutationReturn(
success=True,
code=200,
message="Garbage collector started...",
job=job_to_api_job(job),
)

View File

@ -3,18 +3,10 @@
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.actions.users import UserNotFound
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
remove_ssh_key,
)
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
)
@ -29,16 +21,8 @@ class UserMutationInput:
password: str
@strawberry.input
class SshMutationInput:
"""Input type for ssh mutation"""
username: str
ssh_key: str
@strawberry.type
class UsersMutations:
class UserMutations:
"""Mutations change user settings"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
@ -69,12 +53,6 @@ class UsersMutations:
message=str(e),
code=400,
)
except users_actions.InvalidConfiguration as e:
return UserMutationReturn(
success=False,
message=str(e),
code=400,
)
except users_actions.UserAlreadyExists as e:
return UserMutationReturn(
success=False,
@ -137,73 +115,3 @@ class UsersMutations:
code=200,
user=get_user_by_username(user.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Add a new ssh key"""
try:
create_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyAlreadyExists:
return UserMutationReturn(
success=False,
message="Key already exists",
code=409,
)
except InvalidPublicKey:
return UserMutationReturn(
success=False,
message="Invalid key type. Only ssh-ed25519, ssh-rsa and ecdsa are supported",
code=400,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="New SSH key successfully written",
code=201,
user=get_user_by_username(ssh_input.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Remove ssh key from user"""
try:
remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyNotFound:
return UserMutationReturn(
success=False,
message="Key not found",
code=404,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="SSH key successfully removed",
code=200,
user=get_user_by_username(ssh_input.username),
)

View File

@ -38,7 +38,7 @@ class ApiRecoveryKeyStatus:
def get_recovery_key_status() -> ApiRecoveryKeyStatus:
"""Get recovery key status, times are timezone-aware"""
"""Get recovery key status"""
status = get_api_recovery_token_status()
if status is None or not status.exists:
return ApiRecoveryKeyStatus(

View File

@ -2,82 +2,13 @@
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.local_secret import LocalBackupSecret
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.service import (
Service,
ServiceStatusEnum,
SnapshotInfo,
service_to_graphql_service,
)
from selfprivacy_api.graphql.common_types.backup import AutobackupQuotas
from selfprivacy_api.services import get_service_by_id
@strawberry.type
class BackupConfiguration:
provider: BackupProvider
# When server is lost, the app should have the key to decrypt backups
# on a new server
encryption_key: str
# False when repo is not initialized and not ready to be used
is_initialized: bool
# If none, autobackups are disabled
autobackup_period: typing.Optional[int]
# None is equal to all quotas being unlimited (-1). Optional for compatibility reasons.
autobackup_quotas: AutobackupQuotas
# Bucket name for Backblaze, path for some other providers
location_name: typing.Optional[str]
location_id: typing.Optional[str]
from selfprivacy_api.graphql.common_types.backup_snapshot import SnapshotInfo
@strawberry.type
class Backup:
@strawberry.field
def configuration(self) -> BackupConfiguration:
return BackupConfiguration(
provider=Backups.provider().name,
encryption_key=LocalBackupSecret.get(),
is_initialized=Backups.is_initted(),
autobackup_period=Backups.autobackup_period_minutes(),
location_name=Backups.provider().location,
location_id=Backups.provider().repo_id,
autobackup_quotas=Backups.autobackup_quotas(),
)
backend: str
@strawberry.field
def all_snapshots(self) -> typing.List[SnapshotInfo]:
if not Backups.is_initted():
return []
result = []
snapshots = Backups.get_all_snapshots()
for snap in snapshots:
service = get_service_by_id(snap.service_name)
if service is None:
service = Service(
id=snap.service_name,
display_name=f"{snap.service_name} (Orphaned)",
description="",
svg_icon="",
is_movable=False,
is_required=False,
is_enabled=False,
status=ServiceStatusEnum.OFF,
url=None,
dns_records=None,
can_be_backed_up=False,
backup_description="",
)
else:
service = service_to_graphql_service(service)
graphql_snap = SnapshotInfo(
id=snap.id,
service=service,
created_at=snap.created_at,
reason=snap.reason,
)
result.append(graphql_snap)
return result
def get_backups(self) -> typing.List[SnapshotInfo]:
return []

View File

@ -15,6 +15,7 @@ from selfprivacy_api.jobs import Jobs
class Job:
@strawberry.field
def get_jobs(self) -> typing.List[ApiJob]:
Jobs.get_jobs()
return [job_to_api_job(job) for job in Jobs.get_jobs()]

View File

@ -7,7 +7,6 @@ import strawberry
class DnsProvider(Enum):
CLOUDFLARE = "CLOUDFLARE"
DIGITALOCEAN = "DIGITALOCEAN"
DESEC = "DESEC"
@strawberry.enum
@ -19,7 +18,6 @@ class ServerProvider(Enum):
@strawberry.enum
class BackupProvider(Enum):
BACKBLAZE = "BACKBLAZE"
NONE = "NONE"
# for testing purposes, make sure not selectable in prod.
MEMORY = "MEMORY"
FILE = "FILE"

View File

@ -2,6 +2,7 @@
# pylint: disable=too-few-public-methods
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.graphql.common_types.service import (
Service,
@ -13,6 +14,7 @@ from selfprivacy_api.services import get_all_services
@strawberry.type
class Services:
@strawberry.field
def all_services(self) -> typing.List[Service]:
def all_services(self, info: Info) -> typing.List[Service]:
locale = info.context["locale"]
services = get_all_services()
return [service_to_graphql_service(service) for service in services]
return [service_to_graphql_service(service, locale) for service in services]

View File

@ -23,7 +23,7 @@ class Storage:
else str(volume.size),
free_space=str(volume.fsavail),
used_space=str(volume.fsused),
root=volume.is_root(),
root=volume.name == "sda1",
name=volume.name,
model=volume.model,
serial=volume.serial,
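The hunk above contrasts a hard-coded volume.name == "sda1" check with a volume.is_root() call; a minimal sketch of what such a check could look like, assuming the block device records its mount points (the attribute shape is an assumption):

def is_root(mountpoints: list[str]) -> bool:
    """A volume is the root drive if the root filesystem is mounted on it."""
    return "/" in mountpoints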

View File

@ -33,7 +33,6 @@ class SystemDomainInfo:
content=record.content,
ttl=record.ttl,
priority=record.priority,
display_name=record.display_name,
)
for record in get_all_required_dns_records()
]

View File

@ -4,31 +4,24 @@
import asyncio
from typing import AsyncGenerator
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.deprecated_mutations import (
DeprecatedApiMutations,
DeprecatedJobMutations,
DeprecatedServicesMutations,
DeprecatedStorageMutations,
DeprecatedSystemMutations,
DeprecatedUsersMutations,
)
from strawberry.types import Info
from selfprivacy_api.graphql import IsAuthenticated, LocaleExtension
from selfprivacy_api.graphql.mutations.api_mutations import ApiMutations
from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
from selfprivacy_api.graphql.mutations.mutation_interface import GenericMutationReturn
from selfprivacy_api.graphql.mutations.services_mutations import ServicesMutations
from selfprivacy_api.graphql.mutations.ssh_mutations import SshMutations
from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
from selfprivacy_api.graphql.mutations.system_mutations import SystemMutations
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.queries.api_queries import Api
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.queries.jobs import Job
from selfprivacy_api.graphql.queries.services import Services
from selfprivacy_api.graphql.queries.storage import Storage
from selfprivacy_api.graphql.queries.system import System
from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
from selfprivacy_api.graphql.mutations.users_mutations import UserMutations
from selfprivacy_api.graphql.queries.users import Users
from selfprivacy_api.jobs.test import test_job
@ -37,16 +30,16 @@ from selfprivacy_api.jobs.test import test_job
class Query:
"""Root schema for queries"""
@strawberry.field
def api(self) -> Api:
"""API access status"""
return Api()
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> System:
"""System queries"""
return System()
@strawberry.field
def api(self) -> Api:
"""API access status"""
return Api()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> Users:
"""Users queries"""
@ -67,58 +60,24 @@ class Query:
"""Services queries"""
return Services()
@strawberry.field(permission_classes=[IsAuthenticated])
def backup(self) -> Backup:
"""Backup queries"""
return Backup()
@strawberry.field()
def test(self, info: Info) -> str:
"""Test query"""
return info.context["locale"]
@strawberry.type
class Mutation(
DeprecatedApiMutations,
DeprecatedSystemMutations,
DeprecatedUsersMutations,
DeprecatedStorageMutations,
DeprecatedServicesMutations,
DeprecatedJobMutations,
ApiMutations,
SystemMutations,
UserMutations,
SshMutations,
StorageMutations,
ServicesMutations,
JobMutations,
):
"""Root schema for mutations"""
@strawberry.field
def api(self) -> ApiMutations:
"""API mutations"""
return ApiMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> SystemMutations:
"""System mutations"""
return SystemMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> UsersMutations:
"""Users mutations"""
return UsersMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def storage(self) -> StorageMutations:
"""Storage mutations"""
return StorageMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def services(self) -> ServicesMutations:
"""Services mutations"""
return ServicesMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def jobs(self) -> JobMutations:
"""Jobs mutations"""
return JobMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def backup(self) -> BackupMutations:
"""Backup mutations"""
return BackupMutations()
@strawberry.mutation(permission_classes=[IsAuthenticated])
def test_mutation(self) -> GenericMutationReturn:
"""Test mutation"""
@ -147,4 +106,5 @@ schema = strawberry.Schema(
query=Query,
mutation=Mutation,
subscription=Subscription,
extensions=[LocaleExtension],
)
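The LocaleExtension wired into the schema above is what fills info.context["locale"] for the resolvers earlier in this diff. Its implementation is not shown here; the sketch below only illustrates how an Accept-Language header could be parsed in a strawberry extension (hook names and context access differ between strawberry versions, so treat this as an assumption rather than the actual code):

from strawberry.extensions import SchemaExtension

SUPPORTED_LOCALES = ("en", "ru")

class LocaleExtension(SchemaExtension):
    def on_operation(self):
        request = self.execution_context.context["request"]
        header = request.headers.get("Accept-Language", "en")
        # "ru-RU,ru;q=0.9,en;q=0.8" -> "ru"
        primary = header.split(",")[0].split(";")[0].split("-")[0].strip().lower()
        self.execution_context.context["locale"] = (
            primary if primary in SUPPORTED_LOCALES else "en"
        )
        yield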

View File

@ -8,8 +8,8 @@ A job is a dictionary with the following keys:
- name: name of the job
- description: description of the job
- status: status of the job
- created_at: date of creation of the job, naive localtime
- updated_at: date of last update of the job, naive localtime
- created_at: date of creation of the job
- updated_at: date of last update of the job
- finished_at: date of finish of the job
- error: error message if the job failed
- result: result of the job
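For illustration, a job serialized with the keys listed above might look roughly like this (the values are made up, and the excerpt lists only part of the keys):

example_job = {
    "name": "Backup Nextcloud",
    "description": "Uploading a new snapshot",
    "status": "RUNNING",
    "created_at": "2023-04-10 16:35:35",
    "updated_at": "2023-04-10 16:36:10",
    "finished_at": None,
    "error": None,
    "result": None,
}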
@ -26,11 +26,8 @@ from selfprivacy_api.utils.redis_pool import RedisPool
JOB_EXPIRATION_SECONDS = 10 * 24 * 60 * 60 # ten days
STATUS_LOGS_PREFIX = "jobs_logs:status:"
PROGRESS_LOGS_PREFIX = "jobs_logs:progress:"
class JobStatus(str, Enum):
class JobStatus(Enum):
"""
Status of a job.
"""
@ -73,7 +70,6 @@ class Jobs:
jobs = Jobs.get_jobs()
for job in jobs:
Jobs.remove(job)
Jobs.reset_logs()
@staticmethod
def add(
@ -124,60 +120,6 @@ class Jobs:
return True
return False
@staticmethod
def reset_logs() -> None:
redis = RedisPool().get_connection()
for key in redis.keys(STATUS_LOGS_PREFIX + "*"):
redis.delete(key)
@staticmethod
def log_status_update(job: Job, status: JobStatus) -> None:
redis = RedisPool().get_connection()
key = _status_log_key_from_uuid(job.uid)
redis.lpush(key, status.value)
redis.expire(key, 10)
@staticmethod
def log_progress_update(job: Job, progress: int) -> None:
redis = RedisPool().get_connection()
key = _progress_log_key_from_uuid(job.uid)
redis.lpush(key, progress)
redis.expire(key, 10)
@staticmethod
def status_updates(job: Job) -> list[JobStatus]:
result: list[JobStatus] = []
redis = RedisPool().get_connection()
key = _status_log_key_from_uuid(job.uid)
if not redis.exists(key):
return []
status_strings: list[str] = redis.lrange(key, 0, -1) # type: ignore
for status in status_strings:
try:
result.append(JobStatus[status])
except KeyError as error:
raise ValueError("impossible job status: " + status) from error
return result
@staticmethod
def progress_updates(job: Job) -> list[int]:
result: list[int] = []
redis = RedisPool().get_connection()
key = _progress_log_key_from_uuid(job.uid)
if not redis.exists(key):
return []
progress_strings: list[str] = redis.lrange(key, 0, -1) # type: ignore
for progress in progress_strings:
try:
result.append(int(progress))
except KeyError as error:
raise ValueError("impossible job progress: " + progress) from error
return result
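Taken together, the Redis-backed logging helpers above can be exercised like this (illustrative only; it assumes a reachable Redis instance and an existing job object):

Jobs.log_status_update(job, JobStatus.RUNNING)
Jobs.log_progress_update(job, 42)
print(Jobs.status_updates(job))    # e.g. [JobStatus.RUNNING]
print(Jobs.progress_updates(job))  # e.g. [42]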
@staticmethod
def update(
job: Job,
@ -198,17 +140,9 @@ class Jobs:
job.description = description
if status_text is not None:
job.status_text = status_text
# if it is finished it is 100
# unless user says otherwise
if status == JobStatus.FINISHED and progress is None:
progress = 100
if progress is not None and job.progress != progress:
if progress is not None:
job.progress = progress
Jobs.log_progress_update(job, progress)
job.status = status
Jobs.log_status_update(job, status)
job.updated_at = datetime.datetime.now()
job.error = error
job.result = result
@ -224,14 +158,6 @@ class Jobs:
return job
@staticmethod
def set_expiration(job: Job, expiration_seconds: int) -> Job:
redis = RedisPool().get_connection()
key = _redis_key_from_uuid(job.uid)
if redis.exists(key):
redis.expire(key, expiration_seconds)
return job
@staticmethod
def get_job(uid: str) -> typing.Optional[Job]:
"""
@ -268,33 +194,11 @@ class Jobs:
return False
def report_progress(progress: int, job: Job, status_text: str) -> None:
"""
A terse way to call a common operation, for readability.
job.report_progress() would be even better,
but it would go against how this file is written.
"""
Jobs.update(
job=job,
status=JobStatus.RUNNING,
status_text=status_text,
progress=progress,
)
def _redis_key_from_uuid(uuid_string) -> str:
def _redis_key_from_uuid(uuid_string):
return "jobs:" + str(uuid_string)
def _status_log_key_from_uuid(uuid_string) -> str:
return STATUS_LOGS_PREFIX + str(uuid_string)
def _progress_log_key_from_uuid(uuid_string) -> str:
return PROGRESS_LOGS_PREFIX + str(uuid_string)
def _store_job_as_hash(redis, redis_key, model) -> None:
def _store_job_as_hash(redis, redis_key, model):
for key, value in model.dict().items():
if isinstance(value, uuid.UUID):
value = str(value)
@ -305,7 +209,7 @@ def _store_job_as_hash(redis, redis_key, model) -> None:
redis.hset(redis_key, key, str(value))
def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
def _job_from_hash(redis, redis_key):
if redis.exists(redis_key):
job_dict = redis.hgetall(redis_key)
for date in [

View File

@ -67,8 +67,8 @@ def move_folder(
try:
data_path.mkdir(mode=0o750, parents=True, exist_ok=True)
except Exception as error:
print(f"Error creating data path: {error}")
except Exception as e:
print(f"Error creating data path: {e}")
return
try:

View File

@ -1,147 +0,0 @@
import re
import subprocess
from typing import Tuple, Iterable
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs, Job
class ShellException(Exception):
"""Shell-related errors"""
COMPLETED_WITH_ERROR = "Error occurred, please report this to the support chat."
RESULT_WAS_NOT_FOUND_ERROR = (
"We are sorry, garbage collection result was not found. "
"Something went wrong, please report this to the support chat."
)
CLEAR_COMPLETED = "Garbage collection completed."
def delete_old_gens_and_return_dead_report() -> str:
subprocess.run(
["nix-env", "-p", "/nix/var/nix/profiles/system", "--delete-generations old"],
check=False,
)
result = subprocess.check_output(["nix-store", "--gc", "--print-dead"]).decode(
"utf-8"
)
return " " if result is None else result
def run_nix_collect_garbage() -> Iterable[bytes]:
process = subprocess.Popen(
["nix-store", "--gc"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
return process.stdout if process.stdout else iter([])
def parse_line(job: Job, line: str) -> Job:
"""
We check whether the line is the final summary line
that reports the total amount of space freed.
Simply put, we're just looking for a line like:
"1537 store paths deleted, 339.84 MiB freed".
"""
pattern = re.compile(r"[+-]?\d+\.\d+ \w+(?= freed)")
match = re.search(pattern, line)
if match is None:
raise ShellException("nix returned gibberish output")
else:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
status_text=CLEAR_COMPLETED,
result=f"{match.group(0)} have been cleared",
)
return job
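A quick check of the pattern above against the example line from the docstring (illustrative):

import re

line = "1537 store paths deleted, 339.84 MiB freed"
match = re.search(r"[+-]?\d+\.\d+ \w+(?= freed)", line)
print(match.group(0))  # "339.84 MiB"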
def process_stream(job: Job, stream: Iterable[bytes], total_dead_packages: int) -> None:
completed_packages = 0
prev_progress = 0
for line in stream:
line = line.decode("utf-8")
if "deleting '/nix/store/" in line:
completed_packages += 1
percent = int((completed_packages / total_dead_packages) * 100)
if percent - prev_progress >= 5:
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=percent,
status_text="Cleaning...",
)
prev_progress = percent
elif "store paths deleted," in line:
parse_line(job, line)
def get_dead_packages(output) -> Tuple[int, float]:
dead = len(re.findall("/nix/store/", output))
percent = 0
if dead != 0:
percent = 100 / dead
return dead, percent
@huey.task()
def calculate_and_clear_dead_paths(job: Job):
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text="Calculate the number of dead packages...",
)
dead_packages, package_equal_to_percent = get_dead_packages(
delete_old_gens_and_return_dead_report()
)
if dead_packages == 0:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
status_text="Nothing to clear",
result="System is clear",
)
return True
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text=f"Found {dead_packages} packages to remove!",
)
stream = run_nix_collect_garbage()
try:
process_stream(job, stream, dead_packages)
except ShellException as error:
Jobs.update(
job=job,
status=JobStatus.ERROR,
status_text=COMPLETED_WITH_ERROR,
error=RESULT_WAS_NOT_FOUND_ERROR,
)
def start_nix_collect_garbage() -> Job:
job = Jobs.add(
type_id="maintenance.collect_nix_garbage",
name="Collect garbage",
description="Cleaning up unused packages",
)
calculate_and_clear_dead_paths(job=job)
return job

View File

@ -1,136 +0,0 @@
"""
A task to start the system upgrade or rebuild by starting a systemd unit.
After starting, track the status of the systemd unit and update the Job
status accordingly.
"""
import subprocess
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs, Job
from selfprivacy_api.utils.waitloop import wait_until_true
from selfprivacy_api.utils.systemd import (
get_service_status,
get_last_log_lines,
ServiceStatus,
)
START_TIMEOUT = 60 * 5
START_INTERVAL = 1
RUN_TIMEOUT = 60 * 60
RUN_INTERVAL = 5
def check_if_started(unit_name: str):
"""Check if the systemd unit has started"""
try:
status = get_service_status(unit_name)
if status == ServiceStatus.ACTIVE:
return True
return False
except subprocess.CalledProcessError:
return False
def check_running_status(job: Job, unit_name: str):
"""Check if the systemd unit is running"""
try:
status = get_service_status(unit_name)
if status == ServiceStatus.INACTIVE:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
result="System rebuilt.",
progress=100,
)
return True
if status == ServiceStatus.FAILED:
log_lines = get_last_log_lines(unit_name, 10)
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="System rebuild failed. Last log lines:\n" + "\n".join(log_lines),
)
return True
if status == ServiceStatus.ACTIVE:
log_lines = get_last_log_lines(unit_name, 1)
Jobs.update(
job=job,
status=JobStatus.RUNNING,
status_text=log_lines[0] if len(log_lines) > 0 else "",
)
return False
return False
except subprocess.CalledProcessError:
return False
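check_if_started and check_running_status above are driven by wait_until_true from utils.waitloop, which is not included in this diff; a minimal sketch of such a polling loop, under the assumption that it raises TimeoutError on expiry as the callers expect:

import time
from typing import Callable

def wait_until_true(predicate: Callable[[], bool], timeout_sec: int, interval: int = 1) -> None:
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise TimeoutError("condition was not met in time")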
def rebuild_system(job: Job, upgrade: bool = False):
"""
Broken out to allow calling it synchronously.
We cannot simply block until the task is done, because that would require
a second worker, which we do not have.
"""
unit_name = "sp-nixos-upgrade.service" if upgrade else "sp-nixos-rebuild.service"
try:
command = ["systemctl", "start", unit_name]
subprocess.run(
command,
check=True,
start_new_session=True,
shell=False,
)
Jobs.update(
job=job,
status=JobStatus.RUNNING,
status_text="Starting the system rebuild...",
)
# Wait for the systemd unit to start
try:
wait_until_true(
lambda: check_if_started(unit_name),
timeout_sec=START_TIMEOUT,
interval=START_INTERVAL,
)
except TimeoutError:
log_lines = get_last_log_lines(unit_name, 10)
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="System rebuild timed out. Last log lines:\n"
+ "\n".join(log_lines),
)
return
Jobs.update(
job=job,
status=JobStatus.RUNNING,
status_text="Rebuilding the system...",
)
# Wait for the systemd unit to finish
try:
wait_until_true(
lambda: check_running_status(job, unit_name),
timeout_sec=RUN_TIMEOUT,
interval=RUN_INTERVAL,
)
except TimeoutError:
log_lines = get_last_log_lines(unit_name, 10)
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="System rebuild timed out. Last log lines:\n"
+ "\n".join(log_lines),
)
return
except subprocess.CalledProcessError as e:
Jobs.update(
job=job,
status=JobStatus.ERROR,
status_text=str(e),
)
@huey.task()
def rebuild_system_task(job: Job, upgrade: bool = False):
"""Rebuild the system"""
rebuild_system(job, upgrade)

View File

@ -0,0 +1,52 @@
{
"services": {
"bitwarden": {
"display_name": "Bitwarden",
"description": "Bitwarden is an open source password management solution you can run on your own server.",
"move_job": {
"name": "Move Bitwarden",
"description": "Moving Bitwarden data to {volume}"
}
},
"gitea": {
"display_name": "Gitea",
"description": "Gitea is a lightweight code hosting solution written in Go.",
"move_job": {
"name": "Move Gitea",
"description": "Moving Gitea data to {volume}"
}
},
"jitsi": {
"display_name": "Jitsi",
"description": "Jitsi is a free and open source video conferencing solution."
},
"mailserver": {
"display_name": "Email",
"description": "E-Mail for company and family.",
"move_job": {
"name": "Move Mail Server",
"description": "Moving mailserver data to {volume}"
}
},
"nextcloud": {
"display_name": "Nextcloud",
"description": "Nextcloud is a cloud storage service that offers a web interface and a desktop client.",
"move_job": {
"name": "Move Nextcloud",
"description": "Moving Nextcloud data to {volume}"
}
},
"ocserv": {
"display_name": "OpenConnect VPN",
"description": "OpenConnect VPN to connect your devices and access the internet."
},
"pleroma": {
"display_name": "Pleroma",
"description": "Pleroma is a free and open source microblogging server.",
"move_job": {
"name": "Move Pleroma",
"description": "Moving Pleroma data to {volume}"
}
}
}
}
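The {volume} placeholders in the move_job descriptions above are meant to be filled in at runtime, for example (illustrative only):

template = "Moving Bitwarden data to {volume}"
print(template.format(volume="sdb"))  # "Moving Bitwarden data to sdb"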

View File

@ -0,0 +1,12 @@
{
"services": {
"bitwarden": {
"display_name": "Bitwarden",
"description": "Bitwarden это менеджер паролей с открытым исходным кодом, который может работать на вашем сервере.",
"move_job": {
"name": "Переместить Bitwarden",
"description": "Перемещение данных Bitwarden на {volume}"
}
}
}
}

View File

@ -8,16 +8,29 @@ at api.skippedMigrations in userdata.json and populating it
with IDs of the migrations to skip.
Adding DISABLE_ALL to that array disables the migrations module entirely.
"""
from selfprivacy_api.utils import ReadUserData, UserDataFiles
from selfprivacy_api.migrations.write_token_to_redis import WriteTokenToRedis
from selfprivacy_api.migrations.check_for_system_rebuild_jobs import (
CheckForSystemRebuildJobs,
from selfprivacy_api.migrations.check_for_failed_binds_migration import (
CheckForFailedBindsMigration,
)
from selfprivacy_api.utils import ReadUserData
from selfprivacy_api.migrations.fix_nixos_config_branch import FixNixosConfigBranch
from selfprivacy_api.migrations.create_tokens_json import CreateTokensJson
from selfprivacy_api.migrations.migrate_to_selfprivacy_channel import (
MigrateToSelfprivacyChannel,
)
from selfprivacy_api.migrations.mount_volume import MountVolume
from selfprivacy_api.migrations.providers import CreateProviderFields
from selfprivacy_api.migrations.prepare_for_nixos_2211 import (
MigrateToSelfprivacyChannelFrom2205,
)
migrations = [
WriteTokenToRedis(),
CheckForSystemRebuildJobs(),
FixNixosConfigBranch(),
CreateTokensJson(),
MigrateToSelfprivacyChannel(),
MountVolume(),
CheckForFailedBindsMigration(),
CreateProviderFields(),
MigrateToSelfprivacyChannelFrom2205(),
]
@ -26,7 +39,7 @@ def run_migrations():
Go over all migrations. If they are not skipped in userdata file, run them
if the migration needed.
"""
with ReadUserData(UserDataFiles.SECRETS) as data:
with ReadUserData() as data:
if "api" not in data:
skipped_migrations = []
elif "skippedMigrations" not in data["api"]:

View File

@ -0,0 +1,48 @@
from selfprivacy_api.jobs import JobStatus, Jobs
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import WriteUserData
class CheckForFailedBindsMigration(Migration):
"""Mount volume."""
def get_migration_name(self):
return "check_for_failed_binds_migration"
def get_migration_description(self):
return "If binds migration failed, try again."
def is_migration_needed(self):
try:
jobs = Jobs.get_jobs()
# If there is a job with type_id "migrations.migrate_to_binds" and status is not "FINISHED",
# then migration is needed and job is deleted
for job in jobs:
if (
job.type_id == "migrations.migrate_to_binds"
and job.status != JobStatus.FINISHED
):
return True
return False
except Exception as e:
print(e)
return False
def migrate(self):
# Remove unfinished binds migration jobs
# and mark binds as not migrated in userdata.json so the migration is retried
try:
jobs = Jobs.get_jobs()
for job in jobs:
if (
job.type_id == "migrations.migrate_to_binds"
and job.status != JobStatus.FINISHED
):
Jobs.remove(job)
with WriteUserData() as userdata:
userdata["useBinds"] = False
print("Done")
except Exception as e:
print(e)
print("Error mounting volume")

View File

@ -1,47 +0,0 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.jobs import JobStatus, Jobs
class CheckForSystemRebuildJobs(Migration):
"""Check if there are unfinished system rebuild jobs and finish them"""
def get_migration_name(self):
return "check_for_system_rebuild_jobs"
def get_migration_description(self):
return "Check if there are unfinished system rebuild jobs and finish them"
def is_migration_needed(self):
# Check if there are any unfinished system rebuild jobs
for job in Jobs.get_jobs():
if (
job.type_id
in [
"system.nixos.rebuild",
"system.nixos.upgrade",
]
) and job.status in [
JobStatus.CREATED,
JobStatus.RUNNING,
]:
return True
def migrate(self):
# As the API is restarted, we assume that the jobs are finished
for job in Jobs.get_jobs():
if (
job.type_id
in [
"system.nixos.rebuild",
"system.nixos.upgrade",
]
) and job.status in [
JobStatus.CREATED,
JobStatus.RUNNING,
]:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
result="System rebuilt.",
progress=100,
)

View File

@ -0,0 +1,58 @@
from datetime import datetime
import os
import json
from pathlib import Path
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import TOKENS_FILE, ReadUserData
class CreateTokensJson(Migration):
def get_migration_name(self):
return "create_tokens_json"
def get_migration_description(self):
return """Selfprivacy API used a single token in userdata.json for authentication.
This migration creates a new tokens.json file with the old token in it.
This migration runs if the tokens.json file does not exist.
Old token is located at ["api"]["token"] in userdata.json.
tokens.json path is declared in TOKENS_FILE imported from utils.py
tokens.json must have the following format:
{
"tokens": [
{
"token": "token_string",
"name": "Master Token",
"date": "current date from str(datetime.now())",
}
]
}
tokens.json must have 0600 permissions.
"""
def is_migration_needed(self):
return not os.path.exists(TOKENS_FILE)
def migrate(self):
try:
print(f"Creating tokens.json file at {TOKENS_FILE}")
with ReadUserData() as userdata:
token = userdata["api"]["token"]
# Touch tokens.json with 0600 permissions
Path(TOKENS_FILE).touch(mode=0o600)
# Write token to tokens.json
structure = {
"tokens": [
{
"token": token,
"name": "primary_token",
"date": str(datetime.now()),
}
]
}
with open(TOKENS_FILE, "w", encoding="utf-8") as tokens:
json.dump(structure, tokens, indent=4)
print("Done")
except Exception as e:
print(e)
print("Error creating tokens.json")

View File

@ -0,0 +1,57 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class FixNixosConfigBranch(Migration):
def get_migration_name(self):
return "fix_nixos_config_branch"
def get_migration_description(self):
return """Mobile SelfPrivacy app introduced a bug in version 0.4.0.
New servers were initialized with a rolling-testing nixos config branch.
This was fixed in app version 0.4.2, but existing servers were not updated.
This migration fixes this by changing the nixos config branch to master.
"""
def is_migration_needed(self):
"""Check the current branch of /etc/nixos and return True if it is rolling-testing"""
current_working_directory = os.getcwd()
try:
os.chdir("/etc/nixos")
nixos_config_branch = subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
)
os.chdir(current_working_directory)
return nixos_config_branch.decode("utf-8").strip() == "rolling-testing"
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
return False
def migrate(self):
"""Affected server pulled the config with the --single-branch flag.
Git config remote.origin.fetch has to be changed, so all branches will be fetched.
Then, fetch all branches, pull and switch to master branch.
"""
print("Fixing Nixos config branch")
current_working_directory = os.getcwd()
try:
os.chdir("/etc/nixos")
subprocess.check_output(
[
"git",
"config",
"remote.origin.fetch",
"+refs/heads/*:refs/remotes/origin/*",
]
)
subprocess.check_output(["git", "fetch", "--all"])
subprocess.check_output(["git", "pull"])
subprocess.check_output(["git", "checkout", "master"])
os.chdir(current_working_directory)
print("Done")
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error")

View File

@ -0,0 +1,49 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class MigrateToSelfprivacyChannel(Migration):
"""Migrate to selfprivacy Nix channel."""
def get_migration_name(self):
return "migrate_to_selfprivacy_channel"
def get_migration_description(self):
return "Migrate to selfprivacy Nix channel."
def is_migration_needed(self):
try:
output = subprocess.check_output(
["nix-channel", "--list"], start_new_session=True
)
output = output.decode("utf-8")
first_line = output.split("\n", maxsplit=1)[0]
return first_line.startswith("nixos") and (
first_line.endswith("nixos-21.11") or first_line.endswith("nixos-21.05")
)
except subprocess.CalledProcessError:
return False
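For context, the check above inspects the first line of "nix-channel --list" output; an illustrative run (the sample line is made up):

sample_output = "nixos https://nixos.org/channels/nixos-21.11\n"
first_line = sample_output.split("\n", maxsplit=1)[0]
needs_migration = first_line.startswith("nixos") and (
    first_line.endswith("nixos-21.11") or first_line.endswith("nixos-21.05")
)
print(needs_migration)  # True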
def migrate(self):
# Change the channel and update them.
# Also, go to /etc/nixos directory and make a git pull
current_working_directory = os.getcwd()
try:
print("Changing channel")
os.chdir("/etc/nixos")
subprocess.check_output(
[
"nix-channel",
"--add",
"https://channel.selfprivacy.org/nixos-selfpricacy",
"nixos",
]
)
subprocess.check_output(["nix-channel", "--update"])
subprocess.check_output(["git", "pull"])
os.chdir(current_working_directory)
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error")

View File

@ -0,0 +1,51 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevices
class MountVolume(Migration):
"""Mount volume."""
def get_migration_name(self):
return "mount_volume"
def get_migration_description(self):
return "Mount volume if it is not mounted."
def is_migration_needed(self):
try:
with ReadUserData() as userdata:
return "volumes" not in userdata
except Exception as e:
print(e)
return False
def migrate(self):
# Get info about existing volumes
# Write info about volumes to userdata.json
try:
volumes = BlockDevices().get_block_devices()
# If there is an unmounted volume sdb,
# Write it to userdata.json
is_there_a_volume = False
for volume in volumes:
if volume.name == "sdb":
is_there_a_volume = True
break
with WriteUserData() as userdata:
userdata["volumes"] = []
if is_there_a_volume:
userdata["volumes"].append(
{
"device": "/dev/sdb",
"mountPoint": "/volumes/sdb",
"fsType": "ext4",
}
)
print("Done")
except Exception as e:
print(e)
print("Error mounting volume")

View File

@ -0,0 +1,58 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class MigrateToSelfprivacyChannelFrom2205(Migration):
"""Migrate to selfprivacy Nix channel.
For some reason, NixOS 22.05 servers were initialized with the nixos channel instead of the selfprivacy one.
This stops us from upgrading to NixOS 22.11.
"""
def get_migration_name(self):
return "migrate_to_selfprivacy_channel_from_2205"
def get_migration_description(self):
return "Migrate to selfprivacy Nix channel from NixOS 22.05."
def is_migration_needed(self):
try:
output = subprocess.check_output(
["nix-channel", "--list"], start_new_session=True
)
output = output.decode("utf-8")
first_line = output.split("\n", maxsplit=1)[0]
return first_line.startswith("nixos") and (
first_line.endswith("nixos-22.05")
)
except subprocess.CalledProcessError:
return False
def migrate(self):
# Change the nix channel and update it.
# Also, go to the /etc/nixos directory and run git pull.
current_working_directory = os.getcwd()
try:
print("Changing channel")
os.chdir("/etc/nixos")
subprocess.check_output(
[
"nix-channel",
"--add",
"https://channel.selfprivacy.org/nixos-selfpricacy",
"nixos",
]
)
subprocess.check_output(["nix-channel", "--update"])
nixos_config_branch = subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
)
if nixos_config_branch.decode("utf-8").strip() == "api-redis":
print("Also changing nixos-config branch from api-redis to master")
subprocess.check_output(["git", "checkout", "master"])
subprocess.check_output(["git", "pull"])
os.chdir(current_working_directory)
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error")

View File

@ -0,0 +1,43 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import ReadUserData, WriteUserData
class CreateProviderFields(Migration):
"""Unhardcode providers"""
def get_migration_name(self):
return "create_provider_fields"
def get_migration_description(self):
return "Add DNS, backup and server provider fields to enable user to choose between different clouds and to make the deployment adapt to these preferences."
def is_migration_needed(self):
try:
with ReadUserData() as userdata:
return "dns" not in userdata
except Exception as e:
print(e)
return False
def migrate(self):
# Write info about providers to userdata.json
try:
with WriteUserData() as userdata:
userdata["dns"] = {
"provider": "CLOUDFLARE",
"apiKey": userdata["cloudflare"]["apiKey"],
}
userdata["server"] = {
"provider": "HETZNER",
}
userdata["backup"] = {
"provider": "BACKBLAZE",
"accountId": userdata["backblaze"]["accountId"],
"accountKey": userdata["backblaze"]["accountKey"],
"bucket": userdata["backblaze"]["bucket"],
}
print("Done")
except Exception as e:
print(e)
print("Error migrating provider fields")

View File

@ -1,63 +0,0 @@
from datetime import datetime
from typing import Optional
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.models.tokens.token import Token
from selfprivacy_api.repositories.tokens.redis_tokens_repository import (
RedisTokensRepository,
)
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
AbstractTokensRepository,
)
from selfprivacy_api.utils import ReadUserData, UserDataFiles
class WriteTokenToRedis(Migration):
"""Load Json tokens into Redis"""
def get_migration_name(self):
return "write_token_to_redis"
def get_migration_description(self):
return "Loads the initial token into redis token storage"
def is_repo_empty(self, repo: AbstractTokensRepository) -> bool:
if repo.get_tokens() != []:
return False
return True
def get_token_from_json(self) -> Optional[Token]:
try:
with ReadUserData(UserDataFiles.SECRETS) as userdata:
return Token(
token=userdata["api"]["token"],
device_name="Initial device",
created_at=datetime.now(),
)
except Exception as e:
print(e)
return None
def is_migration_needed(self):
try:
if self.get_token_from_json() is not None and self.is_repo_empty(
RedisTokensRepository()
):
return True
except Exception as e:
print(e)
return False
def migrate(self):
# Load the initial token from secrets.json into the redis token storage
try:
token = self.get_token_from_json()
if token is None:
print("No token found in secrets.json")
return
RedisTokensRepository()._store_token(token)
print("Done")
except Exception as e:
print(e)
print("Error migrating access tokens from json to redis")

View File

@ -7,5 +7,3 @@ class BackupProviderModel(BaseModel):
kind: str
login: str
key: str
location: str
repo_id: str # for app usage, not for us

View File

@ -1,11 +1,8 @@
import datetime
from pydantic import BaseModel
from selfprivacy_api.graphql.common_types.backup import BackupReason
class Snapshot(BaseModel):
id: str
service_name: str
created_at: datetime.datetime
reason: BackupReason = BackupReason.EXPLICIT

View File

@ -1,24 +0,0 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel
class ServiceStatus(Enum):
"""Enum for service status"""
ACTIVE = "ACTIVE"
RELOADING = "RELOADING"
INACTIVE = "INACTIVE"
FAILED = "FAILED"
ACTIVATING = "ACTIVATING"
DEACTIVATING = "DEACTIVATING"
OFF = "OFF"
class ServiceDnsRecord(BaseModel):
type: str
name: str
content: str
ttl: int
display_name: str
priority: Optional[int] = None

View File

@ -1,13 +1,11 @@
"""
New device key used to obtain access token.
"""
from datetime import datetime, timedelta, timezone
from datetime import datetime, timedelta
import secrets
from pydantic import BaseModel
from mnemonic import Mnemonic
from selfprivacy_api.models.tokens.time import is_past
class NewDeviceKey(BaseModel):
"""
@ -22,15 +20,15 @@ class NewDeviceKey(BaseModel):
def is_valid(self) -> bool:
"""
Check if key is valid.
Check if the recovery key is valid.
"""
if is_past(self.expires_at):
if self.expires_at < datetime.now():
return False
return True
def as_mnemonic(self) -> str:
"""
Get the key as a mnemonic.
Get the recovery key as a mnemonic.
"""
return Mnemonic(language="english").to_mnemonic(bytes.fromhex(self.key))
@ -39,10 +37,10 @@ class NewDeviceKey(BaseModel):
"""
Factory to generate a random token.
"""
creation_date = datetime.now(timezone.utc)
creation_date = datetime.now()
key = secrets.token_bytes(16).hex()
return NewDeviceKey(
key=key,
created_at=creation_date,
expires_at=creation_date + timedelta(minutes=10),
expires_at=datetime.now() + timedelta(minutes=10),
)

View File

@ -3,14 +3,12 @@ Recovery key used to obtain access token.
Recovery key has a token string, date of creation, optional date of expiration and optional count of uses left.
"""
from datetime import datetime, timezone
from datetime import datetime
import secrets
from typing import Optional
from pydantic import BaseModel
from mnemonic import Mnemonic
from selfprivacy_api.models.tokens.time import is_past, ensure_timezone
class RecoveryKey(BaseModel):
"""
@ -28,7 +26,7 @@ class RecoveryKey(BaseModel):
"""
Check if the recovery key is valid.
"""
if self.expires_at is not None and is_past(self.expires_at):
if self.expires_at is not None and self.expires_at < datetime.now():
return False
if self.uses_left is not None and self.uses_left <= 0:
return False
@ -47,11 +45,8 @@ class RecoveryKey(BaseModel):
) -> "RecoveryKey":
"""
Factory to generate a random token.
If passed naive time as expiration, assumes utc
"""
creation_date = datetime.now(timezone.utc)
if expiration is not None:
expiration = ensure_timezone(expiration)
creation_date = datetime.now()
key = secrets.token_bytes(24).hex()
return RecoveryKey(
key=key,

View File

@ -1,14 +0,0 @@
from datetime import datetime, timezone
def is_past(dt: datetime) -> bool:
# we cannot compare a naive now()
# to dt which might be tz-aware or unaware
dt = ensure_timezone(dt)
return dt < datetime.now(timezone.utc)
def ensure_timezone(dt: datetime) -> datetime:
if dt.tzinfo is None or dt.tzinfo.utcoffset(None) is None:
dt = dt.replace(tzinfo=timezone.utc)
return dt
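
A minimal usage sketch of these helpers: is_past() accepts both naive and timezone-aware datetimes because ensure_timezone() pins naive values to UTC before the comparison, which is exactly why the token models above can mix the two.

from datetime import datetime, timedelta, timezone
from selfprivacy_api.models.tokens.time import is_past, ensure_timezone

# Naive datetimes are interpreted as UTC before comparing.
naive_deadline = datetime.utcnow() + timedelta(minutes=10)
print(is_past(naive_deadline))        # False: ten minutes in the future

# Aware datetimes are compared as-is.
aware_deadline = datetime.now(timezone.utc) - timedelta(minutes=10)
print(is_past(aware_deadline))        # True: ten minutes in the past

print(ensure_timezone(naive_deadline).tzinfo)  # timezone.utc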

View File

@ -0,0 +1,8 @@
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
AbstractTokensRepository,
)
from selfprivacy_api.repositories.tokens.json_tokens_repository import (
JsonTokensRepository,
)
repository = JsonTokensRepository()

View File

@ -1,5 +1,3 @@
from __future__ import annotations
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Optional
@ -88,15 +86,13 @@ class AbstractTokensRepository(ABC):
def get_recovery_key(self) -> Optional[RecoveryKey]:
"""Get the recovery key"""
@abstractmethod
def create_recovery_key(
self,
expiration: Optional[datetime],
uses_left: Optional[int],
) -> RecoveryKey:
"""Create the recovery key"""
recovery_key = RecoveryKey.generate(expiration, uses_left)
self._store_recovery_key(recovery_key)
return recovery_key
def use_mnemonic_recovery_key(
self, mnemonic_phrase: str, device_name: str
@ -127,14 +123,6 @@ class AbstractTokensRepository(ABC):
return False
return recovery_key.is_valid()
@abstractmethod
def _store_recovery_key(self, recovery_key: RecoveryKey) -> None:
"""Store recovery key directly"""
@abstractmethod
def _delete_recovery_key(self) -> None:
"""Delete the recovery key"""
def get_new_device_key(self) -> NewDeviceKey:
"""Creates and returns the new device key"""
new_device_key = NewDeviceKey.generate()
@ -168,26 +156,6 @@ class AbstractTokensRepository(ABC):
return new_token
def reset(self):
for token in self.get_tokens():
self.delete_token(token)
self.delete_new_device_key()
self._delete_recovery_key()
def clone(self, source: AbstractTokensRepository) -> None:
"""Clone the state of another repository to this one"""
self.reset()
for token in source.get_tokens():
self._store_token(token)
recovery_key = source.get_recovery_key()
if recovery_key is not None:
self._store_recovery_key(recovery_key)
new_device_key = source._get_stored_new_device_key()
if new_device_key is not None:
self._store_new_device_key(new_device_key)
@abstractmethod
def _store_token(self, new_token: Token):
"""Store a token directly"""

View File

@ -0,0 +1,133 @@
"""
temporary legacy
"""
from typing import Optional
from datetime import datetime
from selfprivacy_api.utils import UserDataFiles, WriteUserData, ReadUserData
from selfprivacy_api.models.tokens.token import Token
from selfprivacy_api.models.tokens.recovery_key import RecoveryKey
from selfprivacy_api.models.tokens.new_device_key import NewDeviceKey
from selfprivacy_api.repositories.tokens.exceptions import (
TokenNotFound,
)
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
AbstractTokensRepository,
)
DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"
class JsonTokensRepository(AbstractTokensRepository):
def get_tokens(self) -> list[Token]:
"""Get the tokens"""
tokens_list = []
with ReadUserData(UserDataFiles.TOKENS) as tokens_file:
for userdata_token in tokens_file["tokens"]:
tokens_list.append(
Token(
token=userdata_token["token"],
device_name=userdata_token["name"],
created_at=userdata_token["date"],
)
)
return tokens_list
def _store_token(self, new_token: Token):
"""Store a token directly"""
with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
tokens_file["tokens"].append(
{
"token": new_token.token,
"name": new_token.device_name,
"date": new_token.created_at.strftime(DATETIME_FORMAT),
}
)
def delete_token(self, input_token: Token) -> None:
"""Delete the token"""
with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
for userdata_token in tokens_file["tokens"]:
if userdata_token["token"] == input_token.token:
tokens_file["tokens"].remove(userdata_token)
return
raise TokenNotFound("Token not found!")
def get_recovery_key(self) -> Optional[RecoveryKey]:
"""Get the recovery key"""
with ReadUserData(UserDataFiles.TOKENS) as tokens_file:
if (
"recovery_token" not in tokens_file
or tokens_file["recovery_token"] is None
):
return
recovery_key = RecoveryKey(
key=tokens_file["recovery_token"].get("token"),
created_at=tokens_file["recovery_token"].get("date"),
expires_at=tokens_file["recovery_token"].get("expiration"),
uses_left=tokens_file["recovery_token"].get("uses_left"),
)
return recovery_key
def create_recovery_key(
self,
expiration: Optional[datetime],
uses_left: Optional[int],
) -> RecoveryKey:
"""Create the recovery key"""
recovery_key = RecoveryKey.generate(expiration, uses_left)
with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
key_expiration: Optional[str] = None
if recovery_key.expires_at is not None:
key_expiration = recovery_key.expires_at.strftime(DATETIME_FORMAT)
tokens_file["recovery_token"] = {
"token": recovery_key.key,
"date": recovery_key.created_at.strftime(DATETIME_FORMAT),
"expiration": key_expiration,
"uses_left": recovery_key.uses_left,
}
return recovery_key
def _decrement_recovery_token(self):
"""Decrement recovery key use count by one"""
if self.is_recovery_key_valid():
with WriteUserData(UserDataFiles.TOKENS) as tokens:
if tokens["recovery_token"]["uses_left"] is not None:
tokens["recovery_token"]["uses_left"] -= 1
def _store_new_device_key(self, new_device_key: NewDeviceKey) -> None:
with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
tokens_file["new_device"] = {
"token": new_device_key.key,
"date": new_device_key.created_at.strftime(DATETIME_FORMAT),
"expiration": new_device_key.expires_at.strftime(DATETIME_FORMAT),
}
def delete_new_device_key(self) -> None:
"""Delete the new device key"""
with WriteUserData(UserDataFiles.TOKENS) as tokens_file:
if "new_device" in tokens_file:
del tokens_file["new_device"]
return
def _get_stored_new_device_key(self) -> Optional[NewDeviceKey]:
"""Retrieves new device key that is already stored."""
with ReadUserData(UserDataFiles.TOKENS) as tokens_file:
if "new_device" not in tokens_file or tokens_file["new_device"] is None:
return
new_device_key = NewDeviceKey(
key=tokens_file["new_device"]["token"],
created_at=tokens_file["new_device"]["date"],
expires_at=tokens_file["new_device"]["expiration"],
)
return new_device_key
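
Putting the methods of this repository together, the parsed contents of the tokens file it manages look roughly like the dict below. Key names come straight from the code above; the values are placeholders.

tokens_file = {
    "tokens": [
        {"token": "<token string>", "name": "laptop", "date": "2023-04-03T23:29:02.000000"},
    ],
    "recovery_token": {
        "token": "<hex key>",
        "date": "2023-04-03T23:29:02.000000",
        "expiration": None,   # or a DATETIME_FORMAT string if an expiry was set
        "uses_left": 5,
    },
    "new_device": {
        "token": "<hex key>",
        "date": "2023-04-03T23:29:02.000000",
        "expiration": "2023-04-03T23:39:02.000000",
    },
}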

View File

@ -1,10 +1,8 @@
"""
Token repository using Redis as backend.
"""
from typing import Any, Optional
from typing import Optional
from datetime import datetime
from hashlib import md5
from datetime import timezone
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
AbstractTokensRepository,
@ -30,15 +28,12 @@ class RedisTokensRepository(AbstractTokensRepository):
@staticmethod
def token_key_for_device(device_name: str):
md5_hash = md5(usedforsecurity=False)
md5_hash.update(bytes(device_name, "utf-8"))
digest = md5_hash.hexdigest()
return TOKENS_PREFIX + digest
return TOKENS_PREFIX + str(hash(device_name))
def get_tokens(self) -> list[Token]:
"""Get the tokens"""
redis = self.connection
token_keys: list[str] = redis.keys(TOKENS_PREFIX + "*") # type: ignore
token_keys = redis.keys(TOKENS_PREFIX + "*")
tokens = []
for key in token_keys:
token = self._token_from_hash(key)
@ -46,24 +41,21 @@ class RedisTokensRepository(AbstractTokensRepository):
tokens.append(token)
return tokens
def _discover_token_key(self, input_token: Token) -> Optional[str]:
"""brute-force searching for tokens, for robust deletion"""
redis = self.connection
token_keys: list[str] = redis.keys(TOKENS_PREFIX + "*") # type: ignore
for key in token_keys:
token = self._token_from_hash(key)
if token == input_token:
return key
return None
def delete_token(self, input_token: Token) -> None:
"""Delete the token"""
redis = self.connection
key = self._discover_token_key(input_token)
if key is None:
key = RedisTokensRepository._token_redis_key(input_token)
if input_token not in self.get_tokens():
raise TokenNotFound
redis.delete(key)
def reset(self):
for token in self.get_tokens():
self.delete_token(token)
self.delete_new_device_key()
redis = self.connection
redis.delete(RECOVERY_KEY_REDIS_KEY)
def get_recovery_key(self) -> Optional[RecoveryKey]:
"""Get the recovery key"""
redis = self.connection
@ -71,13 +63,15 @@ class RedisTokensRepository(AbstractTokensRepository):
return self._recovery_key_from_hash(RECOVERY_KEY_REDIS_KEY)
return None
def _store_recovery_key(self, recovery_key: RecoveryKey) -> None:
def create_recovery_key(
self,
expiration: Optional[datetime],
uses_left: Optional[int],
) -> RecoveryKey:
"""Create the recovery key"""
recovery_key = RecoveryKey.generate(expiration=expiration, uses_left=uses_left)
self._store_model_as_hash(RECOVERY_KEY_REDIS_KEY, recovery_key)
def _delete_recovery_key(self) -> None:
"""Delete the recovery key"""
redis = self.connection
redis.delete(RECOVERY_KEY_REDIS_KEY)
return recovery_key
def _store_new_device_key(self, new_device_key: NewDeviceKey) -> None:
"""Store new device key directly"""
@ -113,28 +107,26 @@ class RedisTokensRepository(AbstractTokensRepository):
return self._new_device_key_from_hash(NEW_DEVICE_KEY_REDIS_KEY)
@staticmethod
def _is_date_key(key: str) -> bool:
def _is_date_key(key: str):
return key in [
"created_at",
"expires_at",
]
@staticmethod
def _prepare_model_dict(model_dict: dict[str, Any]) -> None:
date_keys = [
key for key in model_dict.keys() if RedisTokensRepository._is_date_key(key)
]
def _prepare_model_dict(d: dict):
date_keys = [key for key in d.keys() if RedisTokensRepository._is_date_key(key)]
for date in date_keys:
if model_dict[date] != "None":
model_dict[date] = datetime.fromisoformat(model_dict[date])
for key in model_dict.keys():
if model_dict[key] == "None":
model_dict[key] = None
if d[date] != "None":
d[date] = datetime.fromisoformat(d[date])
for key in d.keys():
if d[key] == "None":
d[key] = None
def _model_dict_from_hash(self, redis_key: str) -> Optional[dict[str, Any]]:
def _model_dict_from_hash(self, redis_key: str) -> Optional[dict]:
redis = self.connection
if redis.exists(redis_key):
token_dict: dict[str, Any] = redis.hgetall(redis_key) # type: ignore
token_dict = redis.hgetall(redis_key)
RedisTokensRepository._prepare_model_dict(token_dict)
return token_dict
return None
@ -146,11 +138,7 @@ class RedisTokensRepository(AbstractTokensRepository):
return None
def _token_from_hash(self, redis_key: str) -> Optional[Token]:
token = self._hash_as_model(redis_key, Token)
if token is not None:
token.created_at = token.created_at.replace(tzinfo=None)
return token
return None
return self._hash_as_model(redis_key, Token)
def _recovery_key_from_hash(self, redis_key: str) -> Optional[RecoveryKey]:
return self._hash_as_model(redis_key, RecoveryKey)
@ -162,7 +150,5 @@ class RedisTokensRepository(AbstractTokensRepository):
redis = self.connection
for key, value in model.dict().items():
if isinstance(value, datetime):
if value.tzinfo is None:
value = value.replace(tzinfo=timezone.utc)
value = value.isoformat()
redis.hset(redis_key, key, str(value))
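
For a concrete feel of the round trip above: a Token serialized by _store_model_as_hash() lands in Redis as a flat hash of strings, and _prepare_model_dict() undoes that stringification on the way back. A hedged sketch with illustrative values:

# What hset() ends up storing for one token (every value passed through str()):
stored_hash = {
    "token": "abc123",
    "device_name": "laptop",
    "created_at": "2023-04-03T23:29:00",  # datetime -> isoformat on write
}
# On read, hgetall() returns these same strings; _prepare_model_dict() then
# parses created_at/expires_at back into datetime objects and turns the
# literal string "None" back into None before the pydantic model is rebuilt.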

View File

@ -0,0 +1,125 @@
from datetime import datetime
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.actions.api_tokens import (
CannotDeleteCallerException,
InvalidExpirationDate,
InvalidUsesLeft,
NotFoundException,
delete_api_token,
refresh_api_token,
get_api_recovery_token_status,
get_api_tokens_with_caller_flag,
get_new_api_recovery_key,
use_mnemonic_recovery_token,
delete_new_device_auth_token,
get_new_device_auth_token,
use_new_device_auth_token,
)
from selfprivacy_api.dependencies import TokenHeader, get_token_header
router = APIRouter(
prefix="/auth",
tags=["auth"],
responses={404: {"description": "Not found"}},
)
@router.get("/tokens")
async def rest_get_tokens(auth_token: TokenHeader = Depends(get_token_header)):
"""Get the tokens info"""
return get_api_tokens_with_caller_flag(auth_token.token)
class DeleteTokenInput(BaseModel):
"""Delete token input"""
token_name: str
@router.delete("/tokens")
async def rest_delete_tokens(
token: DeleteTokenInput, auth_token: TokenHeader = Depends(get_token_header)
):
"""Delete the tokens"""
try:
delete_api_token(auth_token.token, token.token_name)
except NotFoundException:
raise HTTPException(status_code=404, detail="Token not found")
except CannotDeleteCallerException:
raise HTTPException(status_code=400, detail="Cannot delete caller's token")
return {"message": "Token deleted"}
@router.post("/tokens")
async def rest_refresh_token(auth_token: TokenHeader = Depends(get_token_header)):
"""Refresh the token"""
try:
new_token = refresh_api_token(auth_token.token)
except NotFoundException:
raise HTTPException(status_code=404, detail="Token not found")
return {"token": new_token}
@router.get("/recovery_token")
async def rest_get_recovery_token_status(
auth_token: TokenHeader = Depends(get_token_header),
):
return get_api_recovery_token_status()
class CreateRecoveryTokenInput(BaseModel):
expiration: Optional[datetime] = None
uses: Optional[int] = None
@router.post("/recovery_token")
async def rest_create_recovery_token(
limits: CreateRecoveryTokenInput = CreateRecoveryTokenInput(),
auth_token: TokenHeader = Depends(get_token_header),
):
try:
token = get_new_api_recovery_key(limits.expiration, limits.uses)
except InvalidExpirationDate as e:
raise HTTPException(status_code=400, detail=str(e))
except InvalidUsesLeft as e:
raise HTTPException(status_code=400, detail=str(e))
return {"token": token}
class UseTokenInput(BaseModel):
token: str
device: str
@router.post("/recovery_token/use")
async def rest_use_recovery_token(input: UseTokenInput):
token = use_mnemonic_recovery_token(input.token, input.device)
if token is None:
raise HTTPException(status_code=404, detail="Token not found")
return {"token": token}
@router.post("/new_device")
async def rest_new_device(auth_token: TokenHeader = Depends(get_token_header)):
token = get_new_device_auth_token()
return {"token": token}
@router.delete("/new_device")
async def rest_delete_new_device_token(
auth_token: TokenHeader = Depends(get_token_header),
):
delete_new_device_auth_token()
return {"token": None}
@router.post("/new_device/authorize")
async def rest_new_device_authorize(input: UseTokenInput):
token = use_new_device_auth_token(input.token, input.device)
if token is None:
raise HTTPException(status_code=404, detail="Token not found")
return {"message": "Device authorized", "token": token}

View File

@ -0,0 +1,374 @@
"""Basic services legacy api"""
import base64
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
enable_ssh,
get_ssh_settings,
remove_ssh_key,
set_ssh_settings,
)
from selfprivacy_api.actions.users import UserNotFound, get_user_by_username
from selfprivacy_api.dependencies import get_token_header
from selfprivacy_api.restic_controller import ResticController, ResticStates
from selfprivacy_api.restic_controller import tasks as restic_tasks
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.ocserv import Ocserv
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.services.service import ServiceStatus
from selfprivacy_api.utils import WriteUserData, get_dkim_key, get_domain
router = APIRouter(
prefix="/services",
tags=["services"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
def service_status_to_return_code(status: ServiceStatus):
"""Converts service status object to return code for
compatibility with legacy api"""
if status == ServiceStatus.ACTIVE:
return 0
elif status == ServiceStatus.FAILED:
return 1
elif status == ServiceStatus.INACTIVE:
return 3
elif status == ServiceStatus.OFF:
return 4
else:
return 2
@router.get("/status")
async def get_status():
"""Get the status of the services"""
mail_status = MailServer.get_status()
bitwarden_status = Bitwarden.get_status()
gitea_status = Gitea.get_status()
nextcloud_status = Nextcloud.get_status()
ocserv_status = Ocserv.get_status()
pleroma_status = Pleroma.get_status()
return {
"imap": service_status_to_return_code(mail_status),
"smtp": service_status_to_return_code(mail_status),
"http": 0,
"bitwarden": service_status_to_return_code(bitwarden_status),
"gitea": service_status_to_return_code(gitea_status),
"nextcloud": service_status_to_return_code(nextcloud_status),
"ocserv": service_status_to_return_code(ocserv_stauts),
"pleroma": service_status_to_return_code(pleroma_status),
}
@router.post("/bitwarden/enable")
async def enable_bitwarden():
"""Enable Bitwarden"""
Bitwarden.enable()
return {
"status": 0,
"message": "Bitwarden enabled",
}
@router.post("/bitwarden/disable")
async def disable_bitwarden():
"""Disable Bitwarden"""
Bitwarden.disable()
return {
"status": 0,
"message": "Bitwarden disabled",
}
@router.post("/gitea/enable")
async def enable_gitea():
"""Enable Gitea"""
Gitea.enable()
return {
"status": 0,
"message": "Gitea enabled",
}
@router.post("/gitea/disable")
async def disable_gitea():
"""Disable Gitea"""
Gitea.disable()
return {
"status": 0,
"message": "Gitea disabled",
}
@router.get("/mailserver/dkim")
async def get_mailserver_dkim():
"""Get the DKIM record for the mailserver"""
domain = get_domain()
dkim = get_dkim_key(domain, parse=False)
if dkim is None:
raise HTTPException(status_code=404, detail="DKIM record not found")
dkim = base64.b64encode(dkim.encode("utf-8")).decode("utf-8")
return dkim
@router.post("/nextcloud/enable")
async def enable_nextcloud():
"""Enable Nextcloud"""
Nextcloud.enable()
return {
"status": 0,
"message": "Nextcloud enabled",
}
@router.post("/nextcloud/disable")
async def disable_nextcloud():
"""Disable Nextcloud"""
Nextcloud.disable()
return {
"status": 0,
"message": "Nextcloud disabled",
}
@router.post("/ocserv/enable")
async def enable_ocserv():
"""Enable Ocserv"""
Ocserv.enable()
return {
"status": 0,
"message": "Ocserv enabled",
}
@router.post("/ocserv/disable")
async def disable_ocserv():
"""Disable Ocserv"""
Ocserv.disable()
return {
"status": 0,
"message": "Ocserv disabled",
}
@router.post("/pleroma/enable")
async def enable_pleroma():
"""Enable Pleroma"""
Pleroma.enable()
return {
"status": 0,
"message": "Pleroma enabled",
}
@router.post("/pleroma/disable")
async def disable_pleroma():
"""Disable Pleroma"""
Pleroma.disable()
return {
"status": 0,
"message": "Pleroma disabled",
}
@router.get("/restic/backup/list")
async def get_restic_backup_list():
restic = ResticController()
return restic.snapshot_list
@router.put("/restic/backup/create")
async def create_restic_backup():
restic = ResticController()
if restic.state is ResticStates.NO_KEY:
raise HTTPException(status_code=400, detail="Backup key not provided")
if restic.state is ResticStates.INITIALIZING:
raise HTTPException(status_code=400, detail="Backup is initializing")
if restic.state is ResticStates.BACKING_UP:
raise HTTPException(status_code=409, detail="Backup is already running")
restic_tasks.start_backup()
return {
"status": 0,
"message": "Backup creation has started",
}
@router.get("/restic/backup/status")
async def get_restic_backup_status():
restic = ResticController()
return {
"status": restic.state.name,
"progress": restic.progress,
"error_message": restic.error_message,
}
@router.get("/restic/backup/reload")
async def reload_restic_backup():
restic_tasks.load_snapshots()
return {
"status": 0,
"message": "Snapshots reload started",
}
class BackupRestoreInput(BaseModel):
backupId: str
@router.put("/restic/backup/restore")
async def restore_restic_backup(backup: BackupRestoreInput):
restic = ResticController()
if restic.state is ResticStates.NO_KEY:
raise HTTPException(status_code=400, detail="Backup key not provided")
if restic.state is ResticStates.NOT_INITIALIZED:
raise HTTPException(
status_code=400, detail="Backups repository is not initialized"
)
if restic.state is ResticStates.BACKING_UP:
raise HTTPException(status_code=409, detail="Backup is already running")
if restic.state is ResticStates.INITIALIZING:
raise HTTPException(status_code=400, detail="Repository is initializing")
if restic.state is ResticStates.RESTORING:
raise HTTPException(status_code=409, detail="Restore is already running")
for backup_item in restic.snapshot_list:
if backup_item["short_id"] == backup.backupId:
restic_tasks.restore_from_backup(backup.backupId)
return {
"status": 0,
"message": "Backup restoration procedure started",
}
raise HTTPException(status_code=404, detail="Backup not found")
class BackupConfigInput(BaseModel):
accountId: str
accountKey: str
bucket: str
@router.put("/restic/backblaze/config")
async def set_backblaze_config(backup_config: BackupConfigInput):
with WriteUserData() as data:
if "backup" not in data:
data["backup"] = {}
data["backup"]["provider"] = "BACKBLAZE"
data["backup"]["accountId"] = backup_config.accountId
data["backup"]["accountKey"] = backup_config.accountKey
data["backup"]["bucket"] = backup_config.bucket
restic_tasks.update_keys_from_userdata()
return "New backup settings saved"
@router.post("/ssh/enable")
async def rest_enable_ssh():
"""Enable SSH"""
enable_ssh()
return {
"status": 0,
"message": "SSH enabled",
}
@router.get("/ssh")
async def rest_get_ssh():
"""Get the SSH configuration"""
settings = get_ssh_settings()
return {
"enable": settings.enable,
"passwordAuthentication": settings.passwordAuthentication,
}
class SshConfigInput(BaseModel):
enable: Optional[bool] = None
passwordAuthentication: Optional[bool] = None
@router.put("/ssh")
async def rest_set_ssh(ssh_config: SshConfigInput):
"""Set the SSH configuration"""
set_ssh_settings(ssh_config.enable, ssh_config.passwordAuthentication)
return "SSH settings changed"
class SshKeyInput(BaseModel):
public_key: str
@router.put("/ssh/key/send", status_code=201)
async def rest_send_ssh_key(input: SshKeyInput):
"""Send the SSH key"""
try:
create_ssh_key("root", input.public_key)
except KeyAlreadyExists as error:
raise HTTPException(status_code=409, detail="Key already exists") from error
except InvalidPublicKey as error:
raise HTTPException(
status_code=400,
detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
) from error
return {
"status": 0,
"message": "SSH key sent",
}
@router.get("/ssh/keys/{username}")
async def rest_get_ssh_keys(username: str):
"""Get the SSH keys for a user"""
user = get_user_by_username(username)
if user is None:
raise HTTPException(status_code=404, detail="User not found")
return user.ssh_keys
@router.post("/ssh/keys/{username}", status_code=201)
async def rest_add_ssh_key(username: str, input: SshKeyInput):
try:
create_ssh_key(username, input.public_key)
except KeyAlreadyExists as error:
raise HTTPException(status_code=409, detail="Key already exists") from error
except InvalidPublicKey as error:
raise HTTPException(
status_code=400,
detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
) from error
except UserNotFound as error:
raise HTTPException(status_code=404, detail="User not found") from error
return {
"message": "New SSH key successfully written",
}
@router.delete("/ssh/keys/{username}")
async def rest_delete_ssh_key(username: str, input: SshKeyInput):
try:
remove_ssh_key(username, input.public_key)
except KeyNotFound as error:
raise HTTPException(status_code=404, detail="Key not found") from error
except UserNotFound as error:
raise HTTPException(status_code=404, detail="User not found") from error
return {"message": "SSH key deleted"}

View File

@ -0,0 +1,105 @@
from typing import Optional
from fastapi import APIRouter, Body, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.dependencies import get_token_header
import selfprivacy_api.actions.system as system_actions
router = APIRouter(
prefix="/system",
tags=["system"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
@router.get("/configuration/timezone")
async def get_timezone():
"""Get the timezone of the server"""
return system_actions.get_timezone()
class ChangeTimezoneRequestBody(BaseModel):
"""Change the timezone of the server"""
timezone: str
@router.put("/configuration/timezone")
async def change_timezone(timezone: ChangeTimezoneRequestBody):
"""Change the timezone of the server"""
try:
system_actions.change_timezone(timezone.timezone)
except system_actions.InvalidTimezone as e:
raise HTTPException(status_code=400, detail=str(e))
return {"timezone": timezone.timezone}
@router.get("/configuration/autoUpgrade")
async def get_auto_upgrade_settings():
"""Get the auto-upgrade settings"""
return system_actions.get_auto_upgrade_settings().dict()
class AutoUpgradeSettings(BaseModel):
"""Settings for auto-upgrading user data"""
enable: Optional[bool] = None
allowReboot: Optional[bool] = None
@router.put("/configuration/autoUpgrade")
async def set_auto_upgrade_settings(settings: AutoUpgradeSettings):
"""Set the auto-upgrade settings"""
system_actions.set_auto_upgrade_settings(settings.enable, settings.allowReboot)
return "Auto-upgrade settings changed"
@router.get("/configuration/apply")
async def apply_configuration():
"""Apply the configuration"""
return_code = system_actions.rebuild_system()
return return_code
@router.get("/configuration/rollback")
async def rollback_configuration():
"""Rollback the configuration"""
return_code = system_actions.rollback_system()
return return_code
@router.get("/configuration/upgrade")
async def upgrade_configuration():
"""Upgrade the configuration"""
return_code = system_actions.upgrade_system()
return return_code
@router.get("/reboot")
async def reboot_system():
"""Reboot the system"""
system_actions.reboot_system()
return "System reboot has started"
@router.get("/version")
async def get_system_version():
"""Get the system version"""
return {"system_version": system_actions.get_system_version()}
@router.get("/pythonVersion")
async def get_python_version():
"""Get the Python version"""
return system_actions.get_python_version()
@router.get("/configuration/pull")
async def pull_configuration():
"""Pull the configuration"""
action_result = system_actions.pull_repository_changes()
if action_result.status == 0:
return action_result.dict()
raise HTTPException(status_code=500, detail=action_result.dict())
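
The same pattern applies to the system router; a short sketch (placeholder URL, assumed header name) changing the timezone and the auto-upgrade settings:

import requests

API = "http://localhost:5050"                   # placeholder
HEADERS = {"Authorization": "Bearer <token>"}   # assumed header name

requests.put(f"{API}/system/configuration/timezone",
             json={"timezone": "Europe/Helsinki"}, headers=HEADERS)
requests.put(f"{API}/system/configuration/autoUpgrade",
             json={"enable": True, "allowReboot": False}, headers=HEADERS)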

View File

@ -0,0 +1,62 @@
"""Users management module"""
from typing import Optional
from fastapi import APIRouter, Body, Depends, HTTPException
from pydantic import BaseModel
import selfprivacy_api.actions.users as users_actions
from selfprivacy_api.dependencies import get_token_header
router = APIRouter(
prefix="/users",
tags=["users"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
@router.get("")
async def get_users(withMainUser: bool = False):
"""Get the list of users"""
users: list[users_actions.UserDataUser] = users_actions.get_users(
exclude_primary=not withMainUser, exclude_root=True
)
return [user.username for user in users]
class UserInput(BaseModel):
"""User input"""
username: str
password: str
@router.post("", status_code=201)
async def create_user(user: UserInput):
try:
users_actions.create_user(user.username, user.password)
except users_actions.PasswordIsEmpty as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UsernameForbidden as e:
raise HTTPException(status_code=409, detail=str(e))
except users_actions.UsernameNotAlphanumeric as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UsernameTooLong as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UserAlreadyExists as e:
raise HTTPException(status_code=409, detail=str(e))
return {"result": 0, "username": user.username}
@router.delete("/{username}")
async def delete_user(username: str):
try:
users_actions.delete_user(username)
except users_actions.UserNotFound as e:
raise HTTPException(status_code=404, detail=str(e))
except users_actions.UserIsProtected as e:
raise HTTPException(status_code=400, detail=str(e))
return {"result": 0, "username": username}

View File

@ -0,0 +1,233 @@
"""Restic singleton controller."""
from datetime import datetime
import json
import subprocess
import os
from enum import Enum
from selfprivacy_api.utils import ReadUserData
from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
class ResticStates(Enum):
"""Restic states enum."""
NO_KEY = 0
NOT_INITIALIZED = 1
INITIALIZED = 2
BACKING_UP = 3
RESTORING = 4
ERROR = 5
INITIALIZING = 6
class ResticController(metaclass=SingletonMetaclass):
"""
States in which the restic_controller may be
- no backblaze key
- backblaze key is provided, but repository is not initialized
- backblaze key is provided, repository is initialized
- fetching list of snapshots
- creating snapshot, current progress can be retrieved
- recovering from snapshot
Any ongoing operation acquires the lock
Current state is kept in the state attribute
"""
_initialized = False
def __init__(self):
if self._initialized:
return
self.state = ResticStates.NO_KEY
self.lock = False
self.progress = 0
self._backblaze_account = None
self._backblaze_key = None
self._repository_name = None
self.snapshot_list = []
self.error_message = None
self._initialized = True
self.load_configuration()
self.load_snapshots()
def load_configuration(self):
"""Load current configuration from user data to singleton."""
with ReadUserData() as user_data:
self._backblaze_account = user_data["backblaze"]["accountId"]
self._backblaze_key = user_data["backblaze"]["accountKey"]
self._repository_name = user_data["backblaze"]["bucket"]
if self._backblaze_account and self._backblaze_key and self._repository_name:
self.state = ResticStates.INITIALIZING
else:
self.state = ResticStates.NO_KEY
def load_snapshots(self):
"""
Load list of snapshots from repository
"""
backup_listing_command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(),
"snapshots",
"--json",
]
if self.state in (ResticStates.BACKING_UP, ResticStates.RESTORING):
return
with subprocess.Popen(
backup_listing_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_listing_process_descriptor:
snapshots_list = backup_listing_process_descriptor.communicate()[0].decode(
"utf-8"
)
try:
starting_index = snapshots_list.find("[")
json.loads(snapshots_list[starting_index:])
self.snapshot_list = json.loads(snapshots_list[starting_index:])
self.state = ResticStates.INITIALIZED
print(snapshots_list)
except ValueError:
if "Is there a repository at the following location?" in snapshots_list:
self.state = ResticStates.NOT_INITIALIZED
return
self.state = ResticStates.ERROR
self.error_message = snapshots_list
return
def restic_repo(self):
# https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
# https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
return f"rclone::b2:{self._repository_name}/sfbackup"
def rclone_args(self):
return "rclone.args=serve restic --stdio" + self.backend_rclone_args()
def backend_rclone_args(self):
return f"--b2-account {self._backblaze_account} --b2-key {self._backblaze_key}"
def initialize_repository(self):
"""
Initialize repository with restic
"""
initialize_repository_command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(),
"init",
]
with subprocess.Popen(
initialize_repository_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as initialize_repository_process_descriptor:
msg = initialize_repository_process_descriptor.communicate()[0].decode(
"utf-8"
)
if initialize_repository_process_descriptor.returncode == 0:
self.state = ResticStates.INITIALIZED
else:
self.state = ResticStates.ERROR
self.error_message = msg
self.state = ResticStates.INITIALIZED
def start_backup(self):
"""
Start backup with restic
"""
backup_command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(),
"--verbose",
"--json",
"backup",
"/var",
]
with open("/var/backup.log", "w", encoding="utf-8") as log_file:
subprocess.Popen(
backup_command,
shell=False,
stdout=log_file,
stderr=subprocess.STDOUT,
)
self.state = ResticStates.BACKING_UP
self.progress = 0
def check_progress(self):
"""
Check progress of ongoing backup operation
"""
backup_status_check_command = ["tail", "-1", "/var/backup.log"]
if self.state in (ResticStates.NO_KEY, ResticStates.NOT_INITIALIZED):
return
# If the log file does not exist
if os.path.exists("/var/backup.log") is False:
self.state = ResticStates.INITIALIZED
with subprocess.Popen(
backup_status_check_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_status_check_process_descriptor:
backup_process_status = (
backup_status_check_process_descriptor.communicate()[0].decode("utf-8")
)
try:
status = json.loads(backup_process_status)
except ValueError:
print(backup_process_status)
self.error_message = backup_process_status
return
if status["message_type"] == "status":
self.progress = status["percent_done"]
self.state = ResticStates.BACKING_UP
elif status["message_type"] == "summary":
self.state = ResticStates.INITIALIZED
self.progress = 0
self.snapshot_list.append(
{
"short_id": status["snapshot_id"],
# Current time in format 2021-12-02T00:02:51.086452543+03:00
"time": datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f%z"),
}
)
def restore_from_backup(self, snapshot_id):
"""
Restore from backup with restic
"""
backup_restoration_command = [
"restic",
"-o",
self.rclone_args(),
"-r",
self.restic_repo(),
"restore",
snapshot_id,
"--target",
"/",
]
self.state = ResticStates.RESTORING
subprocess.run(backup_restoration_command, shell=False)
self.state = ResticStates.INITIALIZED
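
To make the helper methods above concrete: with hypothetical Backblaze credentials in userdata.json, the snapshot-listing command assembled by load_snapshots() comes out as below. Note that rclone_args() concatenates backend_rclone_args() without a separating space, so the flags run together exactly as shown here.

# Assuming accountId="000abc", accountKey="K000secret", bucket="my-backups":
# restic_repo() -> "rclone::b2:my-backups/sfbackup"
# rclone_args() -> "rclone.args=serve restic --stdio--b2-account 000abc --b2-key K000secret"
backup_listing_command = [
    "restic",
    "-o", "rclone.args=serve restic --stdio--b2-account 000abc --b2-key K000secret",
    "-r", "rclone::b2:my-backups/sfbackup",
    "snapshots",
    "--json",
]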

View File

@ -0,0 +1,70 @@
"""Tasks for the restic controller."""
from huey import crontab
from selfprivacy_api.utils.huey import huey
from . import ResticController, ResticStates
@huey.task()
def init_restic():
controller = ResticController()
if controller.state == ResticStates.NOT_INITIALIZED:
initialize_repository()
@huey.task()
def update_keys_from_userdata():
controller = ResticController()
controller.load_configuration()
controller.write_rclone_config()
initialize_repository()
# Check every morning at 5:00 AM
@huey.task(crontab(hour=5, minute=0))
def cron_load_snapshots():
controller = ResticController()
controller.load_snapshots()
# Check every morning at 5:00 AM
@huey.task()
def load_snapshots():
controller = ResticController()
controller.load_snapshots()
if controller.state == ResticStates.NOT_INITIALIZED:
load_snapshots.schedule(delay=120)
@huey.task()
def initialize_repository():
controller = ResticController()
if controller.state is not ResticStates.NO_KEY:
controller.initialize_repository()
load_snapshots()
@huey.task()
def fetch_backup_status():
controller = ResticController()
if controller.state is ResticStates.BACKING_UP:
controller.check_progress()
if controller.state is ResticStates.BACKING_UP:
fetch_backup_status.schedule(delay=2)
else:
load_snapshots.schedule(delay=240)
@huey.task()
def start_backup():
controller = ResticController()
if controller.state is ResticStates.NOT_INITIALIZED:
resp = initialize_repository()
resp.get()
controller.start_backup()
fetch_backup_status.schedule(delay=3)
@huey.task()
def restore_from_backup(snapshot):
controller = ResticController()
controller.restore_from_backup(snapshot)

View File

@ -3,7 +3,7 @@
import typing
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.jitsimeet import JitsiMeet
from selfprivacy_api.services.jitsi import Jitsi
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.pleroma import Pleroma
@ -18,7 +18,7 @@ services: list[Service] = [
Nextcloud(),
Pleroma(),
Ocserv(),
JitsiMeet(),
Jitsi(),
]
@ -42,7 +42,7 @@ def get_disabled_services() -> list[Service]:
def get_services_by_location(location: str) -> list[Service]:
return [service for service in services if service.get_drive() == location]
return [service for service in services if service.get_location() == location]
def get_all_required_dns_records() -> list[ServiceDnsRecord]:
@ -54,20 +54,14 @@ def get_all_required_dns_records() -> list[ServiceDnsRecord]:
name="api",
content=ip4,
ttl=3600,
display_name="SelfPrivacy API",
),
ServiceDnsRecord(
type="AAAA",
name="api",
content=ip6,
ttl=3600,
),
]
if ip6 is not None:
dns_records.append(
ServiceDnsRecord(
type="AAAA",
name="api",
content=ip6,
ttl=3600,
display_name="SelfPrivacy API (IPv6)",
)
)
for service in get_enabled_services():
dns_records += service.get_dns_records(ip4, ip6)
dns_records += service.get_dns_records()
return dns_records

View File

@ -1,12 +1,18 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
from typing import Optional, List
import typing
from selfprivacy_api.utils import get_domain
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.localization import Localization as L10n
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.bitwarden.icon import BITWARDEN_ICON
@ -19,14 +25,14 @@ class Bitwarden(Service):
return "bitwarden"
@staticmethod
def get_display_name() -> str:
def get_display_name(locale: str = "en") -> str:
"""Return service display name."""
return "Bitwarden"
return "services.bitwarden.display_name"
@staticmethod
def get_description() -> str:
def get_description(locale: str = "en") -> str:
"""Return service description."""
return "Bitwarden is a password manager."
return "services.bitwarden.description"
@staticmethod
def get_svg_icon() -> str:
@ -34,19 +40,11 @@ class Bitwarden(Service):
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_user() -> str:
return "vaultwarden"
@staticmethod
def get_url() -> Optional[str]:
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://password.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "password"
@staticmethod
def is_movable() -> bool:
return True
@ -56,8 +54,9 @@ class Bitwarden(Service):
return False
@staticmethod
def get_backup_description() -> str:
return "Password database, encryption certificate and attachments."
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("bitwarden", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
@ -72,6 +71,22 @@ class Bitwarden(Service):
"""
return get_service_status("vaultwarden.service")
@staticmethod
def enable():
"""Enable Bitwarden service."""
with WriteUserData() as user_data:
if "bitwarden" not in user_data:
user_data["bitwarden"] = {}
user_data["bitwarden"]["enable"] = True
@staticmethod
def disable():
"""Disable Bitwarden service."""
with WriteUserData() as user_data:
if "bitwarden" not in user_data:
user_data["bitwarden"] = {}
user_data["bitwarden"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "vaultwarden.service"])
@ -97,5 +112,66 @@ class Bitwarden(Service):
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/bitwarden", "/var/lib/bitwarden_rs"]
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/bitwarden")
storage_usage += get_storage_usage("/var/lib/bitwarden_rs")
return storage_usage
@staticmethod
def get_drive() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("bitwarden", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
"""Return list of DNS records for Bitwarden service."""
return [
ServiceDnsRecord(
type="A",
name="password",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="password",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice, locale: str = "en") -> Job:
job = Jobs.add(
type_id="services.bitwarden.move",
name=L10n().get("services.bitwarden.move_job.name", locale),
description=L10n()
.get("services.bitwarden.move_job.description")
.format(volume=volume.name),
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="bitwarden",
bind_location="/var/lib/bitwarden",
group="vaultwarden",
owner="vaultwarden",
),
FolderMoveNames(
name="bitwarden_rs",
bind_location="/var/lib/bitwarden_rs",
group="vaultwarden",
owner="vaultwarden",
),
],
"bitwarden",
)
return job

View File

@ -0,0 +1,236 @@
"""Generic handler for moving services"""
import subprocess
import time
import pathlib
import shutil
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.service import Service, ServiceStatus
class FolderMoveNames(BaseModel):
name: str
bind_location: str
owner: str
group: str
@huey.task()
def move_service(
service: Service,
volume: BlockDevice,
job: Job,
folder_names: list[FolderMoveNames],
userdata_location: str,
):
"""Move a service to another volume."""
job = Jobs.update(
job=job,
status_text="Performing pre-move checks...",
status=JobStatus.RUNNING,
)
service_name = service.get_display_name()
with ReadUserData() as user_data:
if not user_data.get("useBinds", False):
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Server is not using binds.",
)
return
# Check if we are on the same volume
old_volume = service.get_drive()
if old_volume == volume.name:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is already on this volume.",
)
return
# Check if there is enough space on the new volume
if int(volume.fsavail) < service.get_storage_usage():
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Not enough space on the new volume.",
)
return
# Make sure the volume is mounted
if volume.name != "sda1" and f"/volumes/{volume.name}" not in volume.mountpoints:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Volume is not mounted.",
)
return
# Make sure the current directory exists and that its user and group are correct
for folder in folder_names:
if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").exists():
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is not found.",
)
return
if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").is_dir():
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is not a directory.",
)
return
if (
not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").owner()
== folder.owner
):
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} owner is not {folder.owner}.",
)
return
# Stop service
Jobs.update(
job=job,
status=JobStatus.RUNNING,
status_text=f"Stopping {service_name}...",
progress=5,
)
service.stop()
# Wait for the service to stop, check every second
# If it does not stop in 30 seconds, abort
for _ in range(30):
if service.get_status() not in (
ServiceStatus.ACTIVATING,
ServiceStatus.DEACTIVATING,
):
break
time.sleep(1)
else:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} did not stop in 30 seconds.",
)
return
# Unmount old volume
Jobs.update(
job=job,
status_text="Unmounting old folder...",
status=JobStatus.RUNNING,
progress=10,
)
for folder in folder_names:
try:
subprocess.run(
["umount", folder.bind_location],
check=True,
)
except subprocess.CalledProcessError:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Unable to unmount old volume.",
)
return
# Move data to new volume and set correct permissions
Jobs.update(
job=job,
status_text="Moving data to new volume...",
status=JobStatus.RUNNING,
progress=20,
)
current_progress = 20
folder_percentage = 50 // len(folder_names)
for folder in folder_names:
shutil.move(
f"/volumes/{old_volume}/{folder.name}",
f"/volumes/{volume.name}/{folder.name}",
)
Jobs.update(
job=job,
status_text="Moving data to new volume...",
status=JobStatus.RUNNING,
progress=current_progress + folder_percentage,
)
Jobs.update(
job=job,
status_text=f"Making sure {service_name} owns its files...",
status=JobStatus.RUNNING,
progress=70,
)
for folder in folder_names:
try:
subprocess.run(
[
"chown",
"-R",
f"{folder.owner}:{folder.group}",
f"/volumes/{volume.name}/{folder.name}",
],
check=True,
)
except subprocess.CalledProcessError as error:
print(error.output)
Jobs.update(
job=job,
status=JobStatus.RUNNING,
error=f"Unable to set ownership of new volume. {service_name} may not be able to access its files. Continuing anyway.",
)
# Mount new volume
Jobs.update(
job=job,
status_text=f"Mounting {service_name} data...",
status=JobStatus.RUNNING,
progress=90,
)
for folder in folder_names:
try:
subprocess.run(
[
"mount",
"--bind",
f"/volumes/{volume.name}/{folder.name}",
folder.bind_location,
],
check=True,
)
except subprocess.CalledProcessError as error:
print(error.output)
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Unable to mount new volume.",
)
return
# Update userdata
Jobs.update(
job=job,
status_text="Finishing move...",
status=JobStatus.RUNNING,
progress=95,
)
with WriteUserData() as user_data:
if userdata_location not in user_data:
user_data[userdata_location] = {}
user_data[userdata_location]["location"] = volume.name
# Start service
service.start()
Jobs.update(
job=job,
status=JobStatus.FINISHED,
result=f"{service_name} moved successfully.",
status_text=f"Starting {service_name}...",
progress=100,
)

View File

@ -1,17 +1,16 @@
"""Generic service status fetcher using systemctl"""
import subprocess
from typing import List
from selfprivacy_api.models.services import ServiceStatus
from selfprivacy_api.services.service import ServiceStatus
def get_service_status(unit: str) -> ServiceStatus:
def get_service_status(service: str) -> ServiceStatus:
"""
Return service status from systemd.
Use systemctl show to get the status of a service.
Get ActiveState from the output.
"""
service_status = subprocess.check_output(["systemctl", "show", unit])
service_status = subprocess.check_output(["systemctl", "show", service])
if b"LoadState=not-found" in service_status:
return ServiceStatus.OFF
if b"ActiveState=active" in service_status:
@ -59,24 +58,3 @@ def get_service_status_from_several_units(services: list[str]) -> ServiceStatus:
if ServiceStatus.ACTIVE in service_statuses:
return ServiceStatus.ACTIVE
return ServiceStatus.OFF
def get_last_log_lines(service: str, lines_count: int) -> List[str]:
if lines_count < 1:
raise ValueError("lines_count must be greater than 0")
try:
logs = subprocess.check_output(
[
"journalctl",
"-u",
service,
"-n",
str(lines_count),
"-o",
"cat",
],
shell=False,
).decode("utf-8")
return logs.splitlines()
except subprocess.CalledProcessError:
return []

View File

@ -1,12 +1,18 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
from typing import Optional, List
import typing
from selfprivacy_api.utils import get_domain
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.localization import Localization as L10n
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.gitea.icon import GITEA_ICON
@@ -19,14 +25,14 @@ class Gitea(Service):
return "gitea"
@staticmethod
def get_display_name() -> str:
def get_display_name(locale: str = "en") -> str:
"""Return service display name."""
return "Gitea"
return "services.gitea.display_name"
@staticmethod
def get_description() -> str:
def get_description(locale: str = "en") -> str:
"""Return service description."""
return "Gitea is a Git forge."
return "services.gitea.description"
@staticmethod
def get_svg_icon() -> str:
@@ -34,15 +40,11 @@ class Gitea(Service):
return base64.b64encode(GITEA_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://git.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "git"
@staticmethod
def is_movable() -> bool:
return True
@@ -52,8 +54,9 @@ class Gitea(Service):
return False
@staticmethod
def get_backup_description() -> str:
return "Git repositories, database and user data."
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("gitea", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
@@ -67,6 +70,22 @@ class Gitea(Service):
"""
return get_service_status("gitea.service")
@staticmethod
def enable():
"""Enable Gitea service."""
with WriteUserData() as user_data:
if "gitea" not in user_data:
user_data["gitea"] = {}
user_data["gitea"]["enable"] = True
@staticmethod
def disable():
"""Disable Gitea service."""
with WriteUserData() as user_data:
if "gitea" not in user_data:
user_data["gitea"] = {}
user_data["gitea"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "gitea.service"])
@@ -92,5 +111,58 @@ class Gitea(Service):
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/gitea"]
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/gitea")
return storage_usage
@staticmethod
def get_drive() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("gitea", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
return [
ServiceDnsRecord(
type="A",
name="git",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="git",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice, locale: str = "en") -> Job:
job = Jobs.add(
type_id="services.gitea.move",
name=L10n().get("services.gitea.move_job.name", locale),
description=L10n()
.get("services.gitea.move_job.description", locale)
.format(volume=volume.name),
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="gitea",
bind_location="/var/lib/gitea",
group="gitea",
owner="gitea",
),
],
"gitea",
)
return job
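
On this branch, get_display_name() and get_description() return translation keys such as "services.gitea.display_name" rather than literal strings, and the caller is expected to resolve them. A hedged usage sketch follows, reusing the L10n().get(key, locale) call pattern from move_to_volume above; the resolving function itself is hypothetical.

from selfprivacy_api.utils.localization import Localization as L10n

def localized_display_name(service, locale: str = "en") -> str:
    key = service.get_display_name(locale)  # e.g. "services.gitea.display_name"
    return L10n().get(key, locale)  # resolve the key for the requested locale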

View File

@@ -0,0 +1,140 @@
"""Class representing Jitsi service"""
import base64
import subprocess
import typing
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.jobs import Job
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import (
get_service_status_from_several_units,
)
from selfprivacy_api.services.jitsi.icon import JITSI_ICON
from selfprivacy_api.utils.localization import Localization as L10n
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
class Jitsi(Service):
"""Class representing Jitsi service"""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "jitsi"
@staticmethod
def get_display_name(locale: str = "en") -> str:
"""Return service display name."""
return "services.jitsi.display_name"
@staticmethod
def get_description(locale: str = "en") -> str:
"""Return service description."""
return "services.jitsi.description"
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://meet.{domain}"
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("jitsi", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status_from_several_units(
["jitsi-videobridge.service", "jicofo.service"]
)
@staticmethod
def enable():
"""Enable Jitsi service."""
with WriteUserData() as user_data:
if "jitsi" not in user_data:
user_data["jitsi"] = {}
user_data["jitsi"]["enable"] = True
@staticmethod
def disable():
"""Disable Gitea service."""
with WriteUserData() as user_data:
if "jitsi" not in user_data:
user_data["jitsi"] = {}
user_data["jitsi"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "stop", "jicofo.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "start", "jicofo.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "restart", "jicofo.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/jitsi-meet")
return storage_usage
@staticmethod
def get_drive() -> str:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
return [
ServiceDnsRecord(
type="A",
name="meet",
content=ip4,
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="meet",
content=ip6,
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice, locale: str = "en") -> Job:
raise NotImplementedError("jitsi service is not movable")

Some files were not shown because too many files have changed in this diff.