Compare commits


203 Commits

Author SHA1 Message Date
Inex Code 0a09a338b8 Register migration 2022-09-22 19:41:48 +03:00
Inex Code 7a1e8af8fe Catch error during binds migration and delete the job if it is stuck during restart 2022-09-22 19:38:59 +03:00
Inex Code e387e30983 Fix handling of FileNotFoundError during size calculation 2022-09-22 18:34:33 +03:00
Inex Code 582e38452d Fix Gitea moving 2022-09-19 03:50:43 +03:00
Inex Code 6bbceca917 Fix ownership issue 2022-09-19 03:04:57 +03:00
Inex Code 9a339729b7 Fix Generic service mover success output 2022-09-19 02:57:22 +03:00
Inex Code a7208c1a91 Fix generic service mover being unable to move 2022-09-19 02:43:06 +03:00
Inex Code 49571b6ef2 Fix generic service mover being unable to move to sda1 2022-09-19 02:32:29 +03:00
Alya Sirko a3260aadc3 Add SonarQube to Pipeline (#15)
Co-authored-by: Alya Sirko <alya@selfprivacy.org>
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#15
Co-authored-by: Alya Sirko <alya.sirko@tuta.io>
Co-committed-by: Alya Sirko <alya.sirko@tuta.io>
2022-09-16 10:31:51 +03:00
Inex Code 9489180363 Fix job deletion 2022-09-09 17:51:41 +03:00
Inex Code 97acae189f Bump API version 2022-09-09 17:46:58 +03:00
Inex Code d7cba49c4a Fix job uid generation 2022-09-09 17:42:40 +03:00
Inex Code 32278e9063 Bind to 127.0.0.1 instead of 0.0.0.0 2022-08-26 21:01:14 +04:00
Inex Code 4f2332f8a0 Add permission check for deleting job 2022-08-25 22:42:37 +04:00
Inex Code 0e68ef1386 Merge pull request 'SelfPrivacy API 2.0' (#14) from graphql into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#14
2022-08-25 20:25:03 +03:00
Inex Code 7935de0fe1 Migrate to FastAPI (#13)
Co-authored-by: inexcode <inex.code@selfprivacy.org>
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#13
2022-08-25 20:03:56 +03:00
def 206589d5ad add system nixos tasks 2022-08-01 21:32:20 +02:00
def 337cf29884 Add GraphQJ user and ssh management (#12)
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#12
Co-authored-by: def <dettlaff@riseup.net>
Co-committed-by: def <dettlaff@riseup.net>
2022-08-01 13:40:40 +03:00
Inex Code 5be240d357 Update Strawberry and backport graphql-core to Nixos 21.11 2022-08-01 13:28:12 +03:00
Inex Code bec99f29ec Add a jobs singleton 2022-07-30 18:24:21 +03:00
Inex Code 8ea6548710 Fix typing 2022-07-30 18:01:51 +03:00
Inex Code 67c8486c9b Add more fields to GraphQL storage query 2022-07-30 17:48:33 +03:00
Inex Code 1f64a76723 Fix typo 2022-07-26 15:52:28 +03:00
Inex Code e3245cd26a Add mount volume migration 2022-07-26 15:33:44 +03:00
Inex Code a6fe72608f Bytes from int to str 2022-07-25 17:17:57 +03:00
Inex Code 5532114668 Add volume management 2022-07-25 17:08:31 +03:00
Inex Code 26f9393d95 Implement change system settings
Co-authored-by: Detlaff <dettlaff@riseup.net>
2022-07-12 16:24:29 +03:00
Inex Code eb21b65bbc More system tests
Co-authored-by: Detlaff <dettlaff@riseup.net>
2022-07-11 16:42:51 +03:00
Inex Code e3354c73ef Change datetime formats, more tests 2022-07-08 18:28:08 +03:00
def 9bd2896db8 fix recovery tests 2022-07-07 15:53:19 +02:00
Inex Code 63f3b2f4d1 Update tests for detlaff 2022-07-07 14:49:04 +03:00
Inex Code e5405dfc6b linting 2022-07-05 15:54:21 +03:00
Inex Code 5711cf66b0 Api fixes 2022-07-05 15:11:41 +03:00
Inex Code 376bf1ef77 Add more tests 2022-07-05 08:14:37 +03:00
Inex Code 503a39f390 API keys graphql tests 2022-06-29 20:39:46 +03:00
Inex Code 45c3e3003d hhh 2022-06-24 21:18:21 +03:00
Inex Code 80e5550f7d add basic system getters 2022-06-24 21:14:20 +03:00
Inex Code c6a3588e33 add CORS 2022-06-24 20:25:49 +03:00
Inex Code 07e723dec8 more precise permission control 2022-06-24 20:12:32 +03:00
Inex Code 517a769e5b add auth check 2022-06-24 20:08:58 +03:00
Inex Code e2ac429975 parser 2022-06-24 19:50:30 +03:00
Inex Code 71c70592b2 fixes 2022-06-24 19:35:42 +03:00
Inex Code 6ca723867e once again 2022-06-24 19:28:58 +03:00
Inex Code 766edc657a resolve circular import 2022-06-24 19:24:10 +03:00
Inex Code 9b25bc0d53 Add api status resolvers 2022-06-24 19:17:03 +03:00
Inex Code 17b8334c6e typo 2022-06-24 18:23:09 +03:00
Inex Code 01dea50c1f tmp allow access to graphql without auth 2022-06-24 18:21:13 +03:00
Inex Code 28db251f1f rollback the rename 2022-06-24 18:13:54 +03:00
Inex Code a6ad9aaf90 rename folder 2022-06-24 18:02:40 +03:00
Inex Code fc971292c2 add __init__.py to resolvers 2022-06-24 17:49:52 +03:00
Inex Code c20b0c94f4 Update strawberry patch 2022-06-24 16:17:18 +03:00
Inex Code 992a7837d4 Update strawberry patch to remove backport 2022-06-24 16:12:56 +03:00
Inex Code 99beee40d6 Add integration with flask 2022-06-24 16:05:18 +03:00
Inex Code 75e3143c82 strawberry init 2022-06-24 15:26:51 +03:00
Inex Code c30e062210 Fix date formats 2022-05-31 11:46:58 +03:00
Inex Code 401dff23fb bitwarden_rs -> vaultwarden 2022-05-26 18:40:07 +03:00
Inex Code 1ac5f72433 Fix API to allow returning user list with the master user 2022-05-12 18:55:57 +03:00
Inex Code 36bf1a80bf Fix username length check 2022-05-02 14:48:55 +03:00
Inex Code c0c9c1e89e Merge pull request 'nix-channel-migration' (#11) from nix-channel-migration into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#11
2022-05-02 11:02:37 +03:00
Inex Code 3044557963 Fix formatting 2022-05-02 10:59:04 +03:00
Inex Code f31b1173a2 add migration to init 2022-04-29 14:27:13 +03:00
Inex Code ac220c6968 some logging 2022-04-29 14:24:25 +03:00
Inex Code f2c73853bc Add migration to selfprivacy nix channel 2022-04-28 16:22:13 +03:00
Inex Code 874acb1343 Merge pull request 'Fix pull system tests for non-nixos systems' (#10) from NaiJi/selfprivacy-rest-api:chdir-mock into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#10
2022-04-05 17:11:08 +03:00
NaiJi ✨ 035796ecc7 Fix pull system tests for non-nixos systems 2022-04-03 22:04:49 +03:00
Inex Code 6cd896f977 Delete rootKeys field instead of adding empty element 2022-03-20 19:48:00 +03:00
Inex Code 4cbb08cb6e Fix root key deletion to not make system rebuild unpossible 2022-03-20 19:32:31 +03:00
Inex Code 72a9b11541 Merge pull request 'API 1.2.0' (#9) from authorization_tokens into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#9
2022-02-16 16:13:37 +02:00
Inex Code 2ba3777713 Merge branch 'master' into authorization_tokens 2022-02-16 16:12:05 +02:00
Inex Code 250677f97d formatting 2022-02-16 17:10:45 +03:00
Inex Code 50a82a065e Cover the case when uses is less than 1 2022-02-16 17:10:29 +03:00
Inex Code c22fe9e8bd Linting 2022-02-16 16:03:38 +03:00
Inex Code f228db5b29 Bump version to 1.2.0 2022-02-16 15:50:12 +03:00
Inex Code 2235358827 Auth module coverage and bug fixes 2022-02-16 15:49:10 +03:00
Inex Code 98e60abe74 When returning the list of tokens, indicate which one is caller's 2022-02-16 09:40:31 +03:00
Inex Code 2ec9c8a441 Mark field as required in swagger docs 2022-02-02 14:51:48 +02:00
Inex Code 6fbfee5b1b App pylint to shell.nix 2022-01-27 14:13:00 +02:00
Inex Code fbb82c87e8 Add new device token deletion endpoint 2022-01-27 14:12:49 +02:00
Inex Code 4f30017132 Merge pull request 'Migration module' (#8) from migrations-branch-fix into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#8
2022-01-26 16:58:20 +02:00
Inex Code 40501401b4 More auth tests 2022-01-24 22:01:37 +02:00
Inex Code 08c7f62e93 Add nix shell 2022-01-24 21:56:48 +02:00
Inex Code 5140081cdb Some test and bupfixes 2022-01-18 17:20:47 +02:00
Inex Code 759e90f734 Formatting and fix swagger test 2022-01-17 13:29:54 +02:00
Inex Code ade7c77754 Fix bugs 2022-01-17 13:28:17 +02:00
Inex Code fe86382819 Fix migration to run 2022-01-14 09:24:33 +02:00
Inex Code d7fe7097e6 Remove redundant security strings from swagger 2022-01-14 10:08:41 +03:00
Inex Code ea696d0f0e Inital auth work, untested 2022-01-14 08:38:53 +03:00
Inex Code aa76f87828 Allow user to disable all migrations entirely 2022-01-11 08:57:14 +03:00
Inex Code 6c0af38e27 Move is_migration_needed call under try block too 2022-01-11 08:54:57 +03:00
Inex Code 355fc68232 Fix newline 2022-01-11 08:52:43 +03:00
Inex Code 07cf926e79 Fix migration crashes. 2022-01-11 08:52:15 +03:00
Inex Code 650032bdab Formatting 2022-01-11 08:41:25 +03:00
Inex Code 0c61c1abb5 Migrations mechanism and fix to the nixos-config branch 2022-01-11 08:36:11 +03:00
Inex Code 13cef67204 More unit tests and bugfixes (#7)
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#7
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Co-committed-by: Inex Code <inex.code@selfprivacy.org>
2022-01-10 22:35:00 +02:00
Illia Chub 030a655c39 Trigger CI/CD hook 2021-12-13 10:46:59 +02:00
Inex Code 0d92be8fb7 Merge pull request 'API 1.1.0 relaese' (#6) from system-configuration into master
Reviewed-on: SelfPrivacy/selfprivacy-rest-api#6
2021-12-09 15:24:25 +02:00
Inex Code f235819a1c Merge branch 'system-configuration' of git.selfprivacy.org:SelfPrivacy/selfprivacy-rest-api into system-configuration 2021-12-08 07:45:57 +03:00
Inex Code f24323606f Fix restic to hide files instead of deleting 2021-12-08 07:45:27 +03:00
Illia Chub 0e8e78de08 Fixed identation 2021-12-06 21:12:30 +02:00
Illia Chub 3cb6b769df Added CI/CD integration 2021-12-06 21:11:04 +02:00
Inex Code 710925f3ea Linting 2021-12-06 21:35:41 +03:00
Inex Code f4288dacd6 debugging 2021-12-06 12:07:03 +03:00
Inex Code 340b50bb0d Remove locks 2021-12-06 12:00:53 +03:00
Inex Code f68bd88a31 Restic controller 2021-12-06 09:48:29 +03:00
Inex Code dc4c9a89e1 Make list backups init backups 2021-12-02 23:08:25 +03:00
Inex Code 71fc93914a Update backup endpoints 2021-12-02 18:06:23 +03:00
Inex Code c53cab7b88 Move PullRepositoryChanges to system resource 2021-12-01 17:51:55 +03:00
Inex Code 077c886f40 Add endpoint to update b2 keys 2021-12-01 17:51:38 +03:00
Inex Code 69eb91a1ea Reformat code with Black 2021-12-01 00:53:39 +03:00
Inex Code 8e0a5f5ec0 Merge branch 'master' of git.selfprivacy.org:SelfPrivacy/selfprivacy-rest-api into system-configuration 2021-11-30 23:30:26 +03:00
Illia Chub 1385835a7f Removed useless imports from update module 2021-11-30 07:12:04 +02:00
Illia Chub d1fdaf186d Mitigated possible directory escape scenario 2021-11-30 07:10:00 +02:00
Illia Chub fb98fd1e60 Implemented update check endpoint 2021-11-30 07:04:06 +02:00
Inex Code 245964c998 First wave of unit tests, and bugfixes caused by them 2021-11-29 22:16:08 +03:00
Illia Chub 205908b46c Merge branch 'master' of git.selfprivacy.org:SelfPrivacy/selfprivacy-rest-api 2021-11-25 12:32:24 +02:00
Illia Chub 9a87fa43eb Added backup restoration endpoint 2021-11-25 12:31:18 +02:00
Illia Chub b201cd6ca2 Changed backup protocol to rclone 2021-11-24 07:59:55 +02:00
Inex Code b185724000 Move SSH key validation to utils 2021-11-23 20:32:51 +02:00
Inex Code ec7ff62d59 Add SSH and system settings endpoints 2021-11-22 18:50:50 +02:00
Inex Code eb4f25285d Merge branch 'master' of git.selfprivacy.org:SelfPrivacy/selfprivacy-rest-api 2021-11-18 20:55:14 +02:00
Inex Code e00aaf7118 Hotfix: user and ssh key creation when no were defined 2021-11-18 20:54:45 +02:00
Inex Code fb8066a2f5 Add a license 2021-11-18 10:05:04 +02:00
Inex Code 1432671cbe API version endpoint 2021-11-18 09:35:50 +02:00
Inex Code ec76484857 Add API version endpoint 2021-11-18 09:25:33 +02:00
Inex Code dc56b6f4ad Use systemd units for system rebuilds 2021-11-17 12:36:26 +02:00
Inex Code c910b761d1 Changed /system/upgrade to /system/configuration/upgrade 2021-11-17 11:36:39 +02:00
Inex Code 8700642260 Hotfix: ssh root key adding 2021-11-17 11:18:17 +02:00
Inex Code 82b7f97dce Merge pull request 'Input sanitization, added swagger' (#5) from more-RESTful-api into master
Reviewed-on: ilchub/selfprivacy-rest-api#5
2021-11-17 10:27:37 +02:00
Inex Code 6df4204ca1 Hotfix: imports in services module 2021-11-16 18:19:31 +02:00
Inex Code 447cc5ff55 Input sanitization, added swagger 2021-11-16 18:14:01 +02:00
Illia Chub 48da8c2228 Merge pull request 'Add basic API auth' (#4) from inex/selfprivacy-rest-api:json-manipulations into master
Reviewed-on: ilchub/selfprivacy-rest-api#4
2021-11-16 12:35:53 +02:00
Illia Chub af4907dda5 Fixed pipe path 2021-11-16 12:33:17 +02:00
Inex Code 6c3609f590 Add basic API auth 2021-11-16 12:32:10 +02:00
Illia Chub 59d251bb77 Optimized memory objects usage for status check endpoint 2021-11-16 12:06:36 +02:00
Illia Chub 5c4d871bcd Added backup process status check endpoint 2021-11-16 12:02:36 +02:00
Illia Chub a1e6c77cc1 Added backup creation message 2021-11-16 10:25:44 +02:00
Illia Chub a86f2fe2bb Merge pull request 'Move to JSON controlled server settings' (#3) from inex/selfprivacy-rest-api:json-manipulations into master
Reviewed-on: ilchub/selfprivacy-rest-api#3
2021-11-16 09:42:52 +02:00
Inex Code 767c504a1d Move to JSON controlled server settings 2021-11-15 15:49:06 +02:00
Illia Chub dbb4c10956 Merge pull request 'Decomposition' (#2) from inex/selfprivacy-rest-api:master into master
Reviewed-on: ilchub/selfprivacy-rest-api#2
2021-11-12 11:22:22 +02:00
Inex Code af059c2efd Remove wrong import 2021-11-11 20:50:00 +02:00
Inex Code 26b054760c Merge remote-tracking branch 'upstream/master' 2021-11-11 20:49:31 +02:00
Inex Code 2f03cb0756 Fix ssh key add 2021-11-11 20:45:57 +02:00
Inex Code 09f319d683 Decomposition 2021-11-11 20:31:28 +02:00
Illia Chub 5612ff5373 Made asyncronyous backup creation request 2021-11-11 11:49:12 +02:00
Illia Chub ba2d785ada Update 'main.py' 2021-10-27 13:30:24 +03:00
Illia Chub 1bf4a779af Update 'main.py' 2021-10-27 13:23:06 +03:00
Illia Chub f46e1ba134 Update 'main.py' 2021-10-27 12:51:34 +03:00
Illia Chub 37bb014994 Update 'main.py' 2021-10-27 12:49:29 +03:00
Illia Chub c1c9f18521 Fixed identation 2021-10-27 12:45:03 +03:00
Illia Chub 956336da4b Changed the way command passed to the Flask endpoint 2021-10-27 12:24:46 +03:00
Illia Chub 376e38942b Changed command shell routing parameters 2021-10-25 21:26:52 +03:00
Illia Chub 82821603f1 Fixed restic entrypoint command 2021-10-25 21:23:42 +03:00
Illia Chub 4b39d6e4f9 Changed command shell routing parameters 2021-10-25 21:21:55 +03:00
Illia Chub b7c8bade4c Fixed Popen communication 2021-10-25 21:18:52 +03:00
Illia Chub e6ef9be267 Added Restic-related functionality 2021-10-25 15:22:13 +03:00
Illia Chub 568add06c6 Removed odd spaces at the end of hash 2021-10-12 13:44:06 +03:00
Illia Chub 2f526c7bcd Fixed '{' parsing 2021-10-12 13:34:37 +03:00
Illia Chub 7c144332cf Fixed spacing 2021-10-12 13:31:37 +03:00
Illia Chub 16b6a15d0c Implemented server-side password hashing 2021-10-12 13:27:58 +03:00
Illia Chub 8580c047d3 Fixed class declaration 2021-10-12 13:00:45 +03:00
Illia Chub b63a4c3b39 Fixed syntax issues 2021-10-12 11:42:45 +03:00
Illia Chub 2ae6925d61 Fixed syntax issues 2021-10-12 11:33:14 +03:00
Illia Chub 4c63004954 Fixed syntax issues 2021-10-12 11:31:01 +03:00
Illia Chub ef79004f19 Fixed syntac issues 2021-10-12 11:27:03 +03:00
Illia Chub a1292232a7 Rewritten password hashing logic 2021-10-12 11:22:40 +03:00
Illia Chub 1bfa887e6b Disabled stream pipe into python variable 2021-10-12 10:53:12 +03:00
Illia Chub 1330e1c202 Fixed hashed password encoding 2021-10-11 13:57:21 +03:00
Illia Chub ffe89b6983 Fixed minor logical issues 2021-10-07 20:19:07 +03:00
Illia Chub 413305c290 Fixed minor logical issues 2021-10-07 19:39:39 +03:00
Illia Chub ea16f656ee Fixed minor logical issues 2021-10-07 19:36:01 +03:00
Illia Chub d06e8dc662 Fixed minor logical issues 2021-10-07 19:24:08 +03:00
Illia Chub a8ccab1901 Fixed minor logical issues 2021-10-07 19:21:28 +03:00
Illia Chub f8a3c94fdd Fixed minor logical issues 2021-10-07 19:17:24 +03:00
Illia Chub 5e8736fa5a Added mailuser creation fix 2021-10-07 18:53:17 +03:00
Illia Chub e4267aba18 Minor bugfixes 2021-09-14 17:29:56 +03:00
Illia Chub ebd60b9787 Added status report for all available services 2021-09-14 17:26:17 +03:00
Illia Chub 1aa0a46267 Fixed typos 2021-08-26 12:34:50 +03:00
Illia Chub 1386c24692 Fixed SSH configuration write issues 2021-08-26 12:30:22 +03:00
Illia Chub 4604bcb666 Fixed SSH configuration write issues 2021-08-26 12:18:27 +03:00
Illia Chub 8d05640a09 Fixed SSH configuration write issues 2021-08-26 12:15:55 +03:00
Illia Chub b8a2f20840 Fixed SSH configuration write issues 2021-08-26 12:11:21 +03:00
Illia Chub 9ec897f519 Minor bugfixes 2021-08-25 09:38:30 +03:00
Illia Chub e7edbe2669 Fixed logical mistakes in public key insertion mechanism 2021-08-25 08:07:02 +03:00
Illia Chub 18d8d78f0d Fixed wrong file path 2021-08-25 07:51:24 +03:00
Illia Chub 685baf4e04 Added method definitions for the service management endpoints 2021-08-25 07:44:39 +03:00
Illia Chub 678488866c Added SSH inclusion endpoint 2021-08-23 16:34:29 +03:00
Illia Chub b9093f041b Added service management 2021-08-20 17:59:12 +03:00
Illia Chub 0980039a67 Replaced LF endings with CRLF ones 2021-07-26 13:35:36 +03:00
Illia Chub 2795aa5fd3 Changed return status for apply endpoint 2021-07-07 19:01:32 +03:00
Illia Chub e62fc24644 Changed return status for apply endpoint 2021-07-07 18:57:48 +03:00
Illia Chub b96bfd755d Fixed infinite loop issue 2021-07-02 18:55:22 +03:00
Illia Chub 17c3e6f9e3 Removed sensitive data from logs 2021-07-02 18:45:57 +03:00
Illia Chub 1071bc459b Added gaps in the user configuration snippet template 2021-07-02 18:39:24 +03:00
Illia Chub a4987210ac Implemented logging for user creation endpoint 2021-07-02 18:25:56 +03:00
Illia Chub ff13321154 Shifted user configuration insertion position for one line back 2021-07-01 10:43:13 +03:00
Illia Chub fe6e3c6034 Fixed null pointing references at user addition endpoint 2021-06-30 18:19:37 +03:00
Illia Chub db726b82e2 Replaced relative users file path with absolute 2021-06-23 20:45:25 +03:00
Illia Chub 2885fe4356 Removed SocketIO object creation 2021-06-21 11:24:40 +03:00
Illia Chub a8a2fe06fb Removed unnecessary dependency 2021-06-21 11:22:02 +03:00
Illia Chub fe1773b7f5 Fixed dependency issues 2021-06-21 10:34:58 +03:00
Illia Chub d2716f5816 Added SSH disable option. Added user addition feature. Rewritten user deletion logic 2021-06-21 10:17:02 +03:00
Illia Chub 1ceda086f5 Fixed user addition 2021-05-29 22:35:40 +03:00
Illia Chub 858b8e4698 Added user management. Added SSH service management 2021-05-26 19:35:20 +03:00
182 changed files with 17870 additions and 91 deletions

.coveragerc (new file)

@@ -0,0 +1,2 @@
[run]
source = selfprivacy_api

.drone.yml (new file)

@@ -0,0 +1,24 @@
kind: pipeline
type: exec
name: default

steps:
  - name: Run Tests and Generate Coverage Report
    commands:
      - coverage run -m pytest -q
      - coverage xml
      - sonar-scanner -Dsonar.projectKey=SelfPrivacy-REST-API -Dsonar.sources=. -Dsonar.host.url=http://analyzer.lan:9000 -Dsonar.login="$SONARQUBE_TOKEN"
    environment:
      SONARQUBE_TOKEN:
        from_secret: SONARQUBE_TOKEN
  - name: Run Bandit Checks
    commands:
      - bandit -ll -r selfprivacy_api
  - name: Run Code Formatting Checks
    commands:
      - black --check .

node:
  server: builder
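The pipeline boils down to a handful of commands that can also be run on a developer machine before pushing. A minimal local sketch, assuming `coverage`, `bandit`, and `black` are installed (the SonarQube upload step needs the private `analyzer.lan` host and its token, so it is left out here):

```shell
#!/usr/bin/env sh
# Sketch: mirror the CI checks locally (assumes the tools are on PATH).
set -e

coverage run -m pytest -q        # run the test suite under coverage
coverage xml                     # emit coverage.xml (consumed by SonarQube in CI)
bandit -ll -r selfprivacy_api    # security lint, medium severity and above
black --check .                  # formatting check; non-zero exit on diffs
```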

.gitignore (new executable file, vendored)

@@ -0,0 +1,149 @@
users.nix
### Flask ###
instance/*
!instance/.gitignore
.webassets-cache
.env
### Flask.Python Stack ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# End of https://www.toptal.com/developers/gitignore/api/flask
*.db

.pylintrc (new file)

@@ -0,0 +1,3 @@
[MASTER]
init-hook="from pylint.config import find_pylintrc; import os, sys; sys.path.append(os.path.dirname(find_pylintrc()))"
extension-pkg-whitelist=pydantic
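The `init-hook` above packs a small program into one line: locate the `.pylintrc`, then append its directory to `sys.path` so pylint can resolve `selfprivacy_api` imports from the repository root. A rough stdlib-only sketch of the same idea (`find_pylintrc` itself comes from pylint and is not reproduced here; the function name below is hypothetical):

```python
import os
import sys


def add_config_dir_to_path(pylintrc_path: str) -> str:
    """Append the directory containing a .pylintrc to sys.path,
    mirroring what the init-hook does with find_pylintrc()."""
    config_dir = os.path.dirname(os.path.abspath(pylintrc_path))
    if config_dir not in sys.path:
        sys.path.append(config_dir)
    return config_dir
```

With the project root on `sys.path`, pylint can import the package under analysis instead of reporting spurious `import-error` messages.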

.vscode/launch.json (new file, vendored)

@@ -0,0 +1,19 @@
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: FastAPI",
            "type": "python",
            "request": "launch",
            "module": "uvicorn",
            "args": [
                "selfprivacy_api.app:app"
            ],
            "jinja": true,
            "justMyCode": false
        }
    ]
}
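This launch configuration is equivalent to starting the ASGI server by hand; a sketch of the command-line form, assuming the package is importable from the working directory:

```shell
# Same as the VS Code config: run uvicorn as a module against the app object.
python -m uvicorn selfprivacy_api.app:app
```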

.vscode/settings.json (new file, vendored)

@@ -0,0 +1,12 @@
{
    "python.formatting.provider": "black",
    "python.linting.pylintEnabled": true,
    "python.linting.enabled": true,
    "python.testing.pytestArgs": [
        "tests"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.languageServer": "Pylance",
    "python.analysis.typeCheckingMode": "basic"
}

LICENSE (new file)

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

api.nix Normal file

@ -0,0 +1,64 @@
{ lib, python39Packages }:
with python39Packages;
buildPythonApplication {
pname = "selfprivacy-api";
version = "2.0.0";
propagatedBuildInputs = [
setuptools
portalocker
pytz
pytest
pytest-mock
pytest-datadir
huey
gevent
mnemonic
pydantic
typing-extensions
psutil
fastapi
uvicorn
(buildPythonPackage rec {
pname = "strawberry-graphql";
version = "0.123.0";
format = "pyproject";
patches = [
./strawberry-graphql.patch
];
propagatedBuildInputs = [
typing-extensions
python-multipart
python-dateutil
# flask
pydantic
pygments
poetry
# flask-cors
(buildPythonPackage rec {
pname = "graphql-core";
version = "3.2.0";
format = "setuptools";
src = fetchPypi {
inherit pname version;
sha256 = "sha256-huKgvgCL/eGe94OI3opyWh2UKpGQykMcJKYIN5c4A84=";
};
checkInputs = [
pytest-asyncio
pytest-benchmark
pytestCheckHook
];
pythonImportsCheck = [
"graphql"
];
})
];
src = fetchPypi {
inherit pname version;
sha256 = "KsmZ5Xv8tUg6yBxieAEtvoKoRG60VS+iVGV0X6oCExo=";
};
})
];
src = ./.;
}

default.nix Normal file

@ -0,0 +1,2 @@
{ pkgs ? import <nixpkgs> {} }:
pkgs.callPackage ./api.nix {}

main.py

@ -1,81 +0,0 @@
#!/usr/bin/env python3
from flask import Flask, jsonify, request, json
from flask_restful import Resource, Api, reqparse
import base64
import pandas as pd
import ast
import subprocess
import os
app = Flask(__name__)
api = Api(app)
@app.route("/systemVersion", methods=["GET"])
def uname():
uname = subprocess.check_output(["uname", "-arm"])
return jsonify(uname)
@app.route("/getDKIM", methods=["GET"])
def getDkimKey():
with open("/var/domain") as domainFile:
domain = domainFile.readline()
domain = domain.rstrip("\n")
catProcess = subprocess.Popen(["cat", "/var/dkim/" + domain + ".selector.txt"], stdout=subprocess.PIPE)
dkim = catProcess.communicate()[0]
dkim = base64.b64encode(dkim)
dkim = str(dkim, 'utf-8')
print(dkim)
response = app.response_class(
response=json.dumps(dkim),
status=200,
mimetype='application/json'
)
return response
@app.route("/pythonVersion", methods=["GET"])
def getPythonVersion():
pythonVersion = subprocess.check_output(["python","--version"])
return jsonify(pythonVersion)
@app.route("/apply", methods=["GET"])
def rebuildSystem():
rebuildResult = subprocess.Popen(["nixos-rebuild","switch"])
rebuildResult.communicate()[0]
return jsonify(rebuildResult.returncode)
@app.route("/rollback", methods=["GET"])
def rollbackSystem():
rollbackResult = subprocess.Popen(["nixos-rebuild","switch","--rollback"])
rollbackResult.communicate()[0]
return jsonify(rollbackResult.returncode)
@app.route("/upgrade", methods=["GET"])
def upgradeSystem():
upgradeResult = subprocess.Popen(["nixos-rebuild","switch","--upgrade"])
upgradeResult.communicate()[0]
return jsonify(upgradeResult.returncode)
@app.route("/createUser", methods=["GET"])
def createUser():
user = subprocess.Popen(["useradd","-m",request.headers.get("X-User")])
user.communicate()[0]
return jsonify(user.returncode)
@app.route("/deleteUser", methods=["DELETE"])
def deleteUser():
user = subprocess.Popen(["userdel",request.headers.get("X-User")])
user.communicate()[0]
return jsonify(user.returncode)
@app.route("/serviceStatus", methods=["GET"])
def getServiceStatus():
imapService = subprocess.Popen(["systemctl", "status", "dovecot2.service"])
imapService.communicate()[0]
smtpService = subprocess.Popen(["systemctl", "status", "postfix.service"])
smtpService.communicate()[0]
httpService = subprocess.Popen(["systemctl", "status", "nginx.service"])
httpService.communicate()[0]
return jsonify(
imap=imapService.returncode,
smtp=smtpService.returncode,
http=httpService.returncode
)
@app.route("/decryptDisk", methods=["POST"])
def requestDiskDecryption():
decryptionService = subprocess.Popen(["echo", "-n", request.headers['X-Decryption-Key'], "|", "cryptsetup", "luksOpen", "/dev/sdb", "decryptedVar"], stdout=subprocess.PIPE, shell=False)
decryptionService.communicate()[0]
return jsonify(
status=decryptionService.returncode
)
if __name__ == '__main__':
app.run(port=5050, debug=False)
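One pitfall in the removed `/decryptDisk` handler above: with `shell=False`, the `"|"` and everything after it are passed to `echo` as literal arguments, so `cryptsetup` is never actually invoked and the key would be printed instead of piped. The usual fix is to feed the secret to the target command's stdin directly. A minimal sketch of that pattern (hypothetical helper name, not code from this repository):

```python
import subprocess

def run_with_stdin(cmd: list[str], secret: str) -> subprocess.CompletedProcess:
    # No shell and no fake pipe: the secret goes straight to the child's stdin,
    # never appearing on the command line or in the process list.
    return subprocess.run(cmd, input=secret.encode(), capture_output=True)

# The handler would then call, for example:
#   run_with_stdin(["cryptsetup", "luksOpen", "/dev/sdb", "decryptedVar"],
#                  request.headers["X-Decryption-Key"])
```

Running `run_with_stdin(["cat"], "key")` returns the secret on stdout, confirming it arrived via stdin rather than as an argument.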

pyproject.toml Normal file

@ -0,0 +1,3 @@
[build-system]
requires = ["setuptools", "wheel", "portalocker"]
build-backend = "setuptools.build_meta"


@ -1,6 +0,0 @@
flask
flask_restful
pandas
ast
subprocess
os


@ -0,0 +1,116 @@
"""App tokens actions"""
from datetime import datetime
from typing import Optional
from pydantic import BaseModel
from selfprivacy_api.utils.auth import (
delete_token,
generate_recovery_token,
get_recovery_token_status,
get_tokens_info,
is_recovery_token_exists,
is_recovery_token_valid,
is_token_name_exists,
is_token_name_pair_valid,
refresh_token,
get_token_name,
)
class TokenInfoWithIsCaller(BaseModel):
"""Token info"""
name: str
date: datetime
is_caller: bool
def get_api_tokens_with_caller_flag(caller_token: str) -> list[TokenInfoWithIsCaller]:
"""Get the tokens info"""
caller_name = get_token_name(caller_token)
tokens = get_tokens_info()
return [
TokenInfoWithIsCaller(
name=token.name,
date=token.date,
is_caller=token.name == caller_name,
)
for token in tokens
]
class NotFoundException(Exception):
"""Not found exception"""
class CannotDeleteCallerException(Exception):
"""Cannot delete caller exception"""
def delete_api_token(caller_token: str, token_name: str) -> None:
"""Delete the token"""
if is_token_name_pair_valid(token_name, caller_token):
raise CannotDeleteCallerException("Cannot delete caller's token")
if not is_token_name_exists(token_name):
raise NotFoundException("Token not found")
delete_token(token_name)
def refresh_api_token(caller_token: str) -> str:
"""Refresh the token"""
new_token = refresh_token(caller_token)
if new_token is None:
raise NotFoundException("Token not found")
return new_token
class RecoveryTokenStatus(BaseModel):
"""Recovery token status"""
exists: bool
valid: bool
date: Optional[datetime] = None
expiration: Optional[datetime] = None
uses_left: Optional[int] = None
def get_api_recovery_token_status() -> RecoveryTokenStatus:
"""Get the recovery token status"""
if not is_recovery_token_exists():
return RecoveryTokenStatus(exists=False, valid=False)
status = get_recovery_token_status()
if status is None:
return RecoveryTokenStatus(exists=False, valid=False)
is_valid = is_recovery_token_valid()
return RecoveryTokenStatus(
exists=True,
valid=is_valid,
date=status["date"],
expiration=status["expiration"],
uses_left=status["uses_left"],
)
class InvalidExpirationDate(Exception):
"""Invalid expiration date exception"""
class InvalidUsesLeft(Exception):
"""Invalid uses left exception"""
def get_new_api_recovery_key(
expiration_date: Optional[datetime] = None, uses_left: Optional[int] = None
) -> str:
"""Get new recovery key"""
if expiration_date is not None:
current_time = datetime.now().timestamp()
if expiration_date.timestamp() < current_time:
raise InvalidExpirationDate("Expiration date is in the past")
if uses_left is not None:
if uses_left <= 0:
raise InvalidUsesLeft("Uses must be greater than 0")
key = generate_recovery_token(expiration_date, uses_left)
return key
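The validation in `get_new_api_recovery_key` reduces to two guards: the expiration must lie in the future, and the use count must be positive. A standalone sketch of just those checks (hypothetical function name, `ValueError` standing in for the custom exceptions above):

```python
from datetime import datetime, timedelta
from typing import Optional

def validate_recovery_params(
    expiration: Optional[datetime], uses_left: Optional[int]
) -> None:
    # Mirrors InvalidExpirationDate: a key that is already expired is useless.
    if expiration is not None and expiration.timestamp() < datetime.now().timestamp():
        raise ValueError("Expiration date is in the past")
    # Mirrors InvalidUsesLeft: zero or negative uses would never authenticate.
    if uses_left is not None and uses_left <= 0:
        raise ValueError("Uses must be greater than 0")
```

Both parameters are optional, matching the action above: passing `None` for either skips that guard entirely.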


@ -0,0 +1,149 @@
"""Actions to manage the SSH."""
from typing import Optional
from pydantic import BaseModel
from selfprivacy_api.actions.users import (
UserNotFound,
ensure_ssh_and_users_fields_exist,
)
from selfprivacy_api.utils import WriteUserData, ReadUserData, validate_ssh_public_key
def enable_ssh():
with WriteUserData() as data:
if "ssh" not in data:
data["ssh"] = {}
data["ssh"]["enable"] = True
class UserdataSshSettings(BaseModel):
"""Settings for the SSH."""
enable: bool = True
passwordAuthentication: bool = True
rootKeys: list[str] = []
def get_ssh_settings() -> UserdataSshSettings:
with ReadUserData() as data:
if "ssh" not in data:
return UserdataSshSettings()
if "enable" not in data["ssh"]:
data["ssh"]["enable"] = True
if "passwordAuthentication" not in data["ssh"]:
data["ssh"]["passwordAuthentication"] = True
if "rootKeys" not in data["ssh"]:
data["ssh"]["rootKeys"] = []
return UserdataSshSettings(**data["ssh"])
def set_ssh_settings(
enable: Optional[bool] = None, password_authentication: Optional[bool] = None
) -> None:
with WriteUserData() as data:
if "ssh" not in data:
data["ssh"] = {}
if enable is not None:
data["ssh"]["enable"] = enable
if password_authentication is not None:
data["ssh"]["passwordAuthentication"] = password_authentication
def add_root_ssh_key(public_key: str):
with WriteUserData() as data:
if "ssh" not in data:
data["ssh"] = {}
if "rootKeys" not in data["ssh"]:
data["ssh"]["rootKeys"] = []
# Return 409 if key already in array
for key in data["ssh"]["rootKeys"]:
if key == public_key:
raise KeyAlreadyExists()
data["ssh"]["rootKeys"].append(public_key)
class KeyAlreadyExists(Exception):
"""Key already exists"""
pass
class InvalidPublicKey(Exception):
"""Invalid public key"""
pass
def create_ssh_key(username: str, ssh_key: str):
"""Create a new ssh key"""
if not validate_ssh_public_key(ssh_key):
raise InvalidPublicKey()
with WriteUserData() as data:
ensure_ssh_and_users_fields_exist(data)
if username == data["username"]:
if ssh_key in data["sshKeys"]:
raise KeyAlreadyExists()
data["sshKeys"].append(ssh_key)
return
if username == "root":
if ssh_key in data["ssh"]["rootKeys"]:
raise KeyAlreadyExists()
data["ssh"]["rootKeys"].append(ssh_key)
return
for user in data["users"]:
if user["username"] == username:
if "sshKeys" not in user:
user["sshKeys"] = []
if ssh_key in user["sshKeys"]:
raise KeyAlreadyExists()
user["sshKeys"].append(ssh_key)
return
raise UserNotFound()
class KeyNotFound(Exception):
"""Key not found"""
pass
def remove_ssh_key(username: str, ssh_key: str):
"""Delete an SSH key"""
with WriteUserData() as data:
ensure_ssh_and_users_fields_exist(data)
if username == "root":
if ssh_key in data["ssh"]["rootKeys"]:
data["ssh"]["rootKeys"].remove(ssh_key)
return
raise KeyNotFound()
if username == data["username"]:
if ssh_key in data["sshKeys"]:
data["sshKeys"].remove(ssh_key)
return
raise KeyNotFound()
for user in data["users"]:
if user["username"] == username:
if "sshKeys" not in user:
user["sshKeys"] = []
if ssh_key in user["sshKeys"]:
user["sshKeys"].remove(ssh_key)
return
raise KeyNotFound()
raise UserNotFound()

@@ -0,0 +1,139 @@
"""Actions to manage the system."""
import os
import subprocess
import pytz
from typing import Optional
from pydantic import BaseModel
from selfprivacy_api.utils import WriteUserData, ReadUserData
def get_timezone() -> str:
"""Get the timezone of the server"""
with ReadUserData() as user_data:
if "timezone" in user_data:
return user_data["timezone"]
return "Europe/Uzhgorod"
class InvalidTimezone(Exception):
"""Invalid timezone"""
pass
def change_timezone(timezone: str) -> None:
"""Change the timezone of the server"""
if timezone not in pytz.all_timezones:
raise InvalidTimezone(f"Invalid timezone: {timezone}")
with WriteUserData() as user_data:
user_data["timezone"] = timezone
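change_timezone validates the input against pytz.all_timezones; the same membership check can be sketched with the standard library's zoneinfo (Python 3.9+; the helper name is illustrative):

```python
# zoneinfo ships with Python 3.9+ and reads the system's IANA tz database,
# so this mirrors the pytz.all_timezones membership test above.
from zoneinfo import available_timezones

def is_valid_timezone(timezone: str) -> bool:
    # available_timezones() returns the set of IANA zone keys known to the system
    return timezone in available_timezones()
```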
class UserDataAutoUpgradeSettings(BaseModel):
"""Auto-upgrade settings stored in the user data"""
enable: bool = True
allowReboot: bool = False
def get_auto_upgrade_settings() -> UserDataAutoUpgradeSettings:
"""Get the auto-upgrade settings"""
with ReadUserData() as user_data:
if "autoUpgrade" in user_data:
return UserDataAutoUpgradeSettings(**user_data["autoUpgrade"])
return UserDataAutoUpgradeSettings()
def set_auto_upgrade_settings(
enable: Optional[bool] = None, allowReboot: Optional[bool] = None
) -> None:
"""Set the auto-upgrade settings"""
with WriteUserData() as user_data:
if "autoUpgrade" not in user_data:
user_data["autoUpgrade"] = {}
if enable is not None:
user_data["autoUpgrade"]["enable"] = enable
if allowReboot is not None:
user_data["autoUpgrade"]["allowReboot"] = allowReboot
def rebuild_system() -> int:
"""Rebuild the system"""
rebuild_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-rebuild.service"], start_new_session=True
)
rebuild_result.communicate()[0]
return rebuild_result.returncode
def rollback_system() -> int:
"""Rollback the system"""
rollback_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-rollback.service"], start_new_session=True
)
rollback_result.communicate()[0]
return rollback_result.returncode
def upgrade_system() -> int:
"""Upgrade the system"""
upgrade_result = subprocess.Popen(
["systemctl", "start", "sp-nixos-upgrade.service"], start_new_session=True
)
upgrade_result.communicate()[0]
return upgrade_result.returncode
def reboot_system() -> None:
"""Reboot the system"""
subprocess.Popen(["reboot"], start_new_session=True)
def get_system_version() -> str:
"""Get system version"""
return subprocess.check_output(["uname", "-a"]).decode("utf-8").strip()
def get_python_version() -> str:
"""Get Python version"""
return subprocess.check_output(["python", "-V"]).decode("utf-8").strip()
class SystemActionResult(BaseModel):
"""System action result"""
status: int
message: str
data: str
def pull_repository_changes() -> SystemActionResult:
"""Pull repository changes"""
git_pull_command = ["git", "pull"]
current_working_directory = os.getcwd()
os.chdir("/etc/nixos")
git_pull_process_descriptor = subprocess.Popen(
git_pull_command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
shell=False,
)
data = git_pull_process_descriptor.communicate()[0].decode("utf-8")
os.chdir(current_working_directory)
if git_pull_process_descriptor.returncode == 0:
return SystemActionResult(
status=0,
message="Pulled repository changes",
data=data,
)
return SystemActionResult(
status=git_pull_process_descriptor.returncode,
message="Failed to pull repository changes",
data=data,
)
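pull_repository_changes switches the working directory with os.chdir and restores it afterwards; subprocess also accepts a cwd argument, which achieves the same result without mutating process-wide state. A minimal sketch (the helper name is illustrative):

```python
import subprocess

def run_in_directory(command: list[str], directory: str) -> tuple[int, str]:
    """Run a command inside a given directory using Popen's cwd
    parameter instead of os.chdir, so the caller's cwd is untouched."""
    proc = subprocess.Popen(
        command,
        cwd=directory,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    output = proc.communicate()[0].decode("utf-8")
    return proc.returncode, output
```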

@@ -0,0 +1,219 @@
"""Actions to manage the users."""
import re
from typing import Optional
from pydantic import BaseModel
from enum import Enum
from selfprivacy_api.utils import (
ReadUserData,
WriteUserData,
hash_password,
is_username_forbidden,
)
class UserDataUserOrigin(Enum):
"""Origin of the user in the user data"""
NORMAL = "NORMAL"
PRIMARY = "PRIMARY"
ROOT = "ROOT"
class UserDataUser(BaseModel):
"""The user model from the userdata file"""
username: str
ssh_keys: list[str]
origin: UserDataUserOrigin
def ensure_ssh_and_users_fields_exist(data):
if "ssh" not in data:
data["ssh"] = {}
data["ssh"]["rootKeys"] = []
elif data["ssh"].get("rootKeys") is None:
data["ssh"]["rootKeys"] = []
if "sshKeys" not in data:
data["sshKeys"] = []
if "users" not in data:
data["users"] = []
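ensure_ssh_and_users_fields_exist normalizes the userdata dict in place; a standalone sketch of the same defaulting logic (illustrative name, operating on a plain dict rather than the userdata file):

```python
def normalize_userdata(data: dict) -> dict:
    """Fill in the ssh, sshKeys, and users fields with empty
    defaults when they are missing, as the action above does."""
    if "ssh" not in data:
        data["ssh"] = {}
    if data["ssh"].get("rootKeys") is None:
        data["ssh"]["rootKeys"] = []
    if "sshKeys" not in data:
        data["sshKeys"] = []
    if "users" not in data:
        data["users"] = []
    return data
```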
def get_users(
exclude_primary: bool = False,
exclude_root: bool = False,
) -> list[UserDataUser]:
"""Get the list of users"""
users = []
with ReadUserData() as user_data:
ensure_ssh_and_users_fields_exist(user_data)
users = [
UserDataUser(
username=user["username"],
ssh_keys=user.get("sshKeys", []),
origin=UserDataUserOrigin.NORMAL,
)
for user in user_data["users"]
]
if not exclude_primary:
users.append(
UserDataUser(
username=user_data["username"],
ssh_keys=user_data["sshKeys"],
origin=UserDataUserOrigin.PRIMARY,
)
)
if not exclude_root:
users.append(
UserDataUser(
username="root",
ssh_keys=user_data["ssh"]["rootKeys"],
origin=UserDataUserOrigin.ROOT,
)
)
return users
class UsernameForbidden(Exception):
"""Attempted to create a user with a forbidden username"""
pass
class UserAlreadyExists(Exception):
"""Attempted to create a user that already exists"""
pass
class UsernameNotAlphanumeric(Exception):
"""Attempted to create a user with a non-alphanumeric username"""
pass
class UsernameTooLong(Exception):
"""Attempted to create a user with a username that is too long. Usernames must be shorter than 32 characters"""
pass
class PasswordIsEmpty(Exception):
"""Attempted to create a user with an empty password"""
pass
def create_user(username: str, password: str):
if password == "":
raise PasswordIsEmpty("Password is empty")
if is_username_forbidden(username):
raise UsernameForbidden("Username is forbidden")
if not re.match(r"^[a-z_][a-z0-9_]+$", username):
raise UsernameNotAlphanumeric(
"Username must be alphanumeric and start with a letter"
)
if len(username) >= 32:
raise UsernameTooLong("Username must be less than 32 characters")
with ReadUserData() as user_data:
ensure_ssh_and_users_fields_exist(user_data)
if username == user_data["username"]:
raise UserAlreadyExists("User already exists")
if username in [user["username"] for user in user_data["users"]]:
raise UserAlreadyExists("User already exists")
hashed_password = hash_password(password)
with WriteUserData() as user_data:
ensure_ssh_and_users_fields_exist(user_data)
user_data["users"].append(
{"username": username, "sshKeys": [], "hashedPassword": hashed_password}
)
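The username rules enforced by create_user can be checked in isolation; this sketch mirrors the regex and length check above, with an illustrative helper name:

```python
import re

# Same pattern as create_user: a lowercase letter or underscore,
# followed by at least one lowercase letter, digit, or underscore.
USERNAME_RE = re.compile(r"^[a-z_][a-z0-9_]+$")

def is_valid_username(username: str) -> bool:
    # Usernames must also stay under 32 characters.
    return bool(USERNAME_RE.match(username)) and len(username) < 32
```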
class UserNotFound(Exception):
"""Attempted to get a user that does not exist"""
pass
class UserIsProtected(Exception):
"""Attempted to delete a user that is protected"""
pass
def delete_user(username: str):
with WriteUserData() as user_data:
ensure_ssh_and_users_fields_exist(user_data)
if username == user_data["username"] or username == "root":
raise UserIsProtected("Cannot delete main or root user")
for data_user in user_data["users"]:
if data_user["username"] == username:
user_data["users"].remove(data_user)
break
else:
raise UserNotFound("User did not exist")
def update_user(username: str, password: str):
if password == "":
raise PasswordIsEmpty("Password is empty")
hashed_password = hash_password(password)
with WriteUserData() as data:
ensure_ssh_and_users_fields_exist(data)
if username == data["username"]:
data["hashedMasterPassword"] = hashed_password
# Return 404 if user does not exist
else:
for data_user in data["users"]:
if data_user["username"] == username:
data_user["hashedPassword"] = hashed_password
break
else:
raise UserNotFound("User does not exist")
def get_user_by_username(username: str) -> Optional[UserDataUser]:
with ReadUserData() as data:
ensure_ssh_and_users_fields_exist(data)
if username == "root":
return UserDataUser(
origin=UserDataUserOrigin.ROOT,
username="root",
ssh_keys=data["ssh"]["rootKeys"],
)
if username == data["username"]:
return UserDataUser(
origin=UserDataUserOrigin.PRIMARY,
username=username,
ssh_keys=data["sshKeys"],
)
for user in data["users"]:
if user["username"] == username:
if "sshKeys" not in user:
user["sshKeys"] = []
return UserDataUser(
origin=UserDataUserOrigin.NORMAL,
username=username,
ssh_keys=user["sshKeys"],
)
return None

selfprivacy_api/app.py
@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""SelfPrivacy server management API"""
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from strawberry.fastapi import GraphQLRouter
import uvicorn
from selfprivacy_api.dependencies import get_api_version
from selfprivacy_api.graphql.schema import schema
from selfprivacy_api.migrations import run_migrations
from selfprivacy_api.restic_controller.tasks import init_restic
from selfprivacy_api.rest import (
system,
users,
api_auth,
services,
)
app = FastAPI()
graphql_app = GraphQLRouter(
schema,
)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(system.router)
app.include_router(users.router)
app.include_router(api_auth.router)
app.include_router(services.router)
app.include_router(graphql_app, prefix="/graphql")
@app.get("/api/version")
async def get_version():
"""Get the version of the server"""
return {"version": get_api_version()}
@app.on_event("startup")
async def startup():
run_migrations()
init_restic()
if __name__ == "__main__":
uvicorn.run(
"selfprivacy_api.app:app", host="127.0.0.1", port=5050, log_level="info"
)

@@ -0,0 +1,30 @@
from fastapi import Depends, HTTPException, status
from fastapi.security import APIKeyHeader
from pydantic import BaseModel
from selfprivacy_api.utils.auth import is_token_valid
class TokenHeader(BaseModel):
token: str
async def get_token_header(
token: str = Depends(APIKeyHeader(name="Authorization", auto_error=False))
) -> TokenHeader:
if token is None:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="Token not provided"
)
else:
token = token.replace("Bearer ", "")
if not is_token_valid(token):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token"
)
return TokenHeader(token=token)
def get_api_version() -> str:
"""Get API version"""
return "2.0.9"

@@ -0,0 +1,21 @@
"""GraphQL API for SelfPrivacy."""
# pylint: disable=too-few-public-methods
import typing
from strawberry.permission import BasePermission
from strawberry.types import Info
from selfprivacy_api.utils.auth import is_token_valid
class IsAuthenticated(BasePermission):
"""Is authenticated permission"""
message = "You must be authenticated to access this resource."
def has_permission(self, source: typing.Any, info: Info, **kwargs) -> bool:
token = info.context["request"].headers.get("Authorization")
if token is None:
token = info.context["request"].query_params.get("token")
if token is None:
return False
return is_token_valid(token.replace("Bearer ", ""))
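The permission check strips the scheme with token.replace("Bearer ", ""), which would also rewrite that substring if it appeared later in the header value; str.removeprefix (Python 3.9+) only touches a leading scheme. A sketch with an illustrative helper name:

```python
def extract_bearer_token(header_value: str) -> str:
    """Strip a leading 'Bearer ' scheme only; str.replace would also
    remove the substring anywhere inside the token itself."""
    return header_value.removeprefix("Bearer ")
```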

@@ -0,0 +1,13 @@
import typing
import strawberry
@strawberry.type
class DnsRecord:
"""DNS record"""
record_type: str
name: str
content: str
ttl: int
priority: typing.Optional[int]

@@ -0,0 +1,49 @@
"""Jobs status"""
# pylint: disable=too-few-public-methods
import datetime
import typing
import strawberry
from selfprivacy_api.jobs import Job, Jobs
@strawberry.type
class ApiJob:
"""Job type for GraphQL."""
uid: str
name: str
description: str
status: str
status_text: typing.Optional[str]
progress: typing.Optional[int]
created_at: datetime.datetime
updated_at: datetime.datetime
finished_at: typing.Optional[datetime.datetime]
error: typing.Optional[str]
result: typing.Optional[str]
def job_to_api_job(job: Job) -> ApiJob:
"""Convert a Job from jobs controller to a GraphQL ApiJob."""
return ApiJob(
uid=str(job.uid),
name=job.name,
description=job.description,
status=job.status.name,
status_text=job.status_text,
progress=job.progress,
created_at=job.created_at,
updated_at=job.updated_at,
finished_at=job.finished_at,
error=job.error,
result=job.result,
)
def get_api_job_by_id(job_id: str) -> typing.Optional[ApiJob]:
"""Get a job for GraphQL by its ID."""
job = Jobs.get_instance().get_job(job_id)
if job is None:
return None
return job_to_api_job(job)

@@ -0,0 +1,146 @@
from enum import Enum
import typing
import strawberry
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.services import get_service_by_id, get_services_by_location
from selfprivacy_api.services import Service as ServiceInterface
from selfprivacy_api.utils.block_devices import BlockDevices
def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
"""Get usages of a volume"""
return [
ServiceStorageUsage(
service=service_to_graphql_service(service),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_location()),
)
for service in get_services_by_location(root.name)
]
@strawberry.type
class StorageVolume:
"""Stats and basic info about a volume or a system disk."""
total_space: str
free_space: str
used_space: str
root: bool
name: str
model: typing.Optional[str]
serial: typing.Optional[str]
type: str
@strawberry.field
def usages(self) -> list["StorageUsageInterface"]:
"""Get usages of a volume"""
return get_usages(self)
@strawberry.interface
class StorageUsageInterface:
used_space: str
volume: typing.Optional[StorageVolume]
title: str
@strawberry.type
class ServiceStorageUsage(StorageUsageInterface):
"""Storage usage for a service"""
service: typing.Optional["Service"]
@strawberry.enum
class ServiceStatusEnum(Enum):
ACTIVE = "ACTIVE"
RELOADING = "RELOADING"
INACTIVE = "INACTIVE"
FAILED = "FAILED"
ACTIVATING = "ACTIVATING"
DEACTIVATING = "DEACTIVATING"
OFF = "OFF"
def get_storage_usage(root: "Service") -> ServiceStorageUsage:
"""Get storage usage for a service"""
service = get_service_by_id(root.id)
if service is None:
return ServiceStorageUsage(
service=service,
title="Not found",
used_space="0",
volume=get_volume_by_id("sda1"),
)
return ServiceStorageUsage(
service=service_to_graphql_service(service),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_location()),
)
@strawberry.type
class Service:
id: str
display_name: str
description: str
svg_icon: str
is_movable: bool
is_required: bool
is_enabled: bool
status: ServiceStatusEnum
url: typing.Optional[str]
dns_records: typing.Optional[typing.List[DnsRecord]]
@strawberry.field
def storage_usage(self) -> ServiceStorageUsage:
"""Get storage usage for a service"""
return get_storage_usage(self)
def service_to_graphql_service(service: ServiceInterface) -> Service:
"""Convert service to graphql service"""
return Service(
id=service.get_id(),
display_name=service.get_display_name(),
description=service.get_description(),
svg_icon=service.get_svg_icon(),
is_movable=service.is_movable(),
is_required=service.is_required(),
is_enabled=service.is_enabled(),
status=ServiceStatusEnum(service.get_status().value),
url=service.get_url(),
dns_records=[
DnsRecord(
record_type=record.type,
name=record.name,
content=record.content,
ttl=record.ttl,
priority=record.priority,
)
for record in service.get_dns_records()
],
)
def get_volume_by_id(volume_id: str) -> typing.Optional[StorageVolume]:
"""Get volume by id"""
volume = BlockDevices().get_block_device(volume_id)
if volume is None:
return None
return StorageVolume(
total_space=str(volume.fssize)
if volume.fssize is not None
else str(volume.size),
free_space=str(volume.fsavail),
used_space=str(volume.fsused),
root=volume.name == "sda1",
name=volume.name,
model=volume.model,
serial=volume.serial,
type=volume.type,
)

@@ -0,0 +1,57 @@
import typing
from enum import Enum
import strawberry
import selfprivacy_api.actions.users as users_actions
from selfprivacy_api.graphql.mutations.mutation_interface import (
MutationReturnInterface,
)
@strawberry.enum
class UserType(Enum):
NORMAL = "NORMAL"
PRIMARY = "PRIMARY"
ROOT = "ROOT"
@strawberry.type
class User:
user_type: UserType
username: str
# userHomeFolderspace: UserHomeFolderUsage
ssh_keys: typing.List[str] = strawberry.field(default_factory=list)
@strawberry.type
class UserMutationReturn(MutationReturnInterface):
"""Return type for user mutation"""
user: typing.Optional[User] = None
def get_user_by_username(username: str) -> typing.Optional[User]:
user = users_actions.get_user_by_username(username)
if user is None:
return None
return User(
user_type=UserType(user.origin.value),
username=user.username,
ssh_keys=user.ssh_keys,
)
def get_users() -> typing.List[User]:
"""Get users"""
users = users_actions.get_users(exclude_root=True)
return [
User(
user_type=UserType(user.origin.value),
username=user.username,
ssh_keys=user.ssh_keys,
)
for user in users
]

@@ -0,0 +1,219 @@
"""API access mutations"""
# pylint: disable=too-few-public-methods
import datetime
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.actions.api_tokens import (
CannotDeleteCallerException,
InvalidExpirationDate,
InvalidUsesLeft,
NotFoundException,
delete_api_token,
get_new_api_recovery_key,
)
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
MutationReturnInterface,
)
from selfprivacy_api.utils.auth import (
delete_new_device_auth_token,
get_new_device_auth_token,
refresh_token,
use_mnemonic_recoverery_token,
use_new_device_auth_token,
)
@strawberry.type
class ApiKeyMutationReturn(MutationReturnInterface):
key: typing.Optional[str]
@strawberry.type
class DeviceApiTokenMutationReturn(MutationReturnInterface):
token: typing.Optional[str]
@strawberry.input
class RecoveryKeyLimitsInput:
"""Recovery key limits input"""
expiration_date: typing.Optional[datetime.datetime] = None
uses: typing.Optional[int] = None
@strawberry.input
class UseRecoveryKeyInput:
"""Use recovery key input"""
key: str
deviceName: str
@strawberry.input
class UseNewDeviceKeyInput:
"""Use new device key input"""
key: str
deviceName: str
@strawberry.type
class ApiMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def get_new_recovery_api_key(
self, limits: typing.Optional[RecoveryKeyLimitsInput] = None
) -> ApiKeyMutationReturn:
"""Generate recovery key"""
if limits is None:
limits = RecoveryKeyLimitsInput()
try:
key = get_new_api_recovery_key(limits.expiration_date, limits.uses)
except InvalidExpirationDate:
return ApiKeyMutationReturn(
success=False,
message="Expiration date must be in the future",
code=400,
key=None,
)
except InvalidUsesLeft:
return ApiKeyMutationReturn(
success=False,
message="Uses must be greater than 0",
code=400,
key=None,
)
return ApiKeyMutationReturn(
success=True,
message="Recovery key generated",
code=200,
key=key,
)
@strawberry.mutation()
def use_recovery_api_key(
self, input: UseRecoveryKeyInput
) -> DeviceApiTokenMutationReturn:
"""Use recovery key"""
token = use_mnemonic_recoverery_token(input.key, input.deviceName)
if token is None:
return DeviceApiTokenMutationReturn(
success=False,
message="Recovery key not found",
code=404,
token=None,
)
return DeviceApiTokenMutationReturn(
success=True,
message="Recovery key used",
code=200,
token=token,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def refresh_device_api_token(self, info: Info) -> DeviceApiTokenMutationReturn:
"""Refresh device api token"""
token = (
info.context["request"]
.headers.get("Authorization", "")
.replace("Bearer ", "")
)
if token is None:
return DeviceApiTokenMutationReturn(
success=False,
message="Token not found",
code=404,
token=None,
)
new_token = refresh_token(token)
if new_token is None:
return DeviceApiTokenMutationReturn(
success=False,
message="Token not found",
code=404,
token=None,
)
return DeviceApiTokenMutationReturn(
success=True,
message="Token refreshed",
code=200,
token=new_token,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def delete_device_api_token(self, device: str, info: Info) -> GenericMutationReturn:
"""Delete device api token"""
self_token = (
info.context["request"]
.headers.get("Authorization", "")
.replace("Bearer ", "")
)
try:
delete_api_token(self_token, device)
except NotFoundException:
return GenericMutationReturn(
success=False,
message="Token not found",
code=404,
)
except CannotDeleteCallerException:
return GenericMutationReturn(
success=False,
message="Cannot delete caller token",
code=400,
)
except Exception as e:
return GenericMutationReturn(
success=False,
message=str(e),
code=500,
)
return GenericMutationReturn(
success=True,
message="Token deleted",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def get_new_device_api_key(self) -> ApiKeyMutationReturn:
"""Generate device api key"""
key = get_new_device_auth_token()
return ApiKeyMutationReturn(
success=True,
message="Device api key generated",
code=200,
key=key,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def invalidate_new_device_api_key(self) -> GenericMutationReturn:
"""Invalidate new device api key"""
delete_new_device_auth_token()
return GenericMutationReturn(
success=True,
message="New device key deleted",
code=200,
)
@strawberry.mutation()
def authorize_with_new_device_api_key(
self, input: UseNewDeviceKeyInput
) -> DeviceApiTokenMutationReturn:
"""Authorize with new device api key"""
token = use_new_device_auth_token(input.key, input.deviceName)
if token is None:
return DeviceApiTokenMutationReturn(
success=False,
message="Token not found",
code=404,
token=None,
)
return DeviceApiTokenMutationReturn(
success=True,
message="Token used",
code=200,
token=token,
)

@@ -0,0 +1,28 @@
"""Manipulate jobs"""
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.graphql.mutations.mutation_interface import GenericMutationReturn
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.jobs import Jobs
@strawberry.type
class JobMutations:
"""Mutations related to jobs"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_job(self, job_id: str) -> GenericMutationReturn:
"""Remove a job from the queue"""
result = Jobs.get_instance().remove_by_uid(job_id)
if result:
return GenericMutationReturn(
success=True,
code=200,
message="Job removed",
)
return GenericMutationReturn(
success=False,
code=404,
message="Job not found",
)

@@ -0,0 +1,21 @@
import strawberry
import typing
from selfprivacy_api.graphql.common_types.jobs import ApiJob
@strawberry.interface
class MutationReturnInterface:
success: bool
message: str
code: int
@strawberry.type
class GenericMutationReturn(MutationReturnInterface):
pass
@strawberry.type
class GenericJobButationReturn(MutationReturnInterface):
job: typing.Optional[ApiJob] = None

@@ -0,0 +1,169 @@
"""Services mutations"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.graphql.common_types.service import (
Service,
service_to_graphql_service,
)
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobButationReturn,
GenericMutationReturn,
)
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.utils.block_devices import BlockDevices
@strawberry.type
class ServiceMutationReturn(GenericMutationReturn):
"""Service mutation return type."""
service: typing.Optional[Service] = None
@strawberry.input
class MoveServiceInput:
"""Move service input type."""
service_id: str
location: str
@strawberry.type
class ServiceJobMutationReturn(GenericJobButationReturn):
"""Service job mutation return type."""
service: typing.Optional[Service] = None
@strawberry.type
class ServicesMutations:
"""Services mutations."""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def enable_service(self, service_id: str) -> ServiceMutationReturn:
"""Enable service."""
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.enable()
return ServiceMutationReturn(
success=True,
message="Service enabled.",
code=200,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def disable_service(self, service_id: str) -> ServiceMutationReturn:
"""Disable service."""
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.disable()
return ServiceMutationReturn(
success=True,
message="Service disabled.",
code=200,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def stop_service(self, service_id: str) -> ServiceMutationReturn:
"""Stop service."""
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.stop()
return ServiceMutationReturn(
success=True,
message="Service stopped.",
code=200,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def start_service(self, service_id: str) -> ServiceMutationReturn:
"""Start service."""
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.start()
return ServiceMutationReturn(
success=True,
message="Service started.",
code=200,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restart_service(self, service_id: str) -> ServiceMutationReturn:
"""Restart service."""
service = get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message="Service not found.",
code=404,
)
service.restart()
return ServiceMutationReturn(
success=True,
message="Service restarted.",
code=200,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
"""Move service."""
service = get_service_by_id(input.service_id)
if service is None:
return ServiceJobMutationReturn(
success=False,
message="Service not found.",
code=404,
)
if not service.is_movable():
return ServiceJobMutationReturn(
success=False,
message="Service is not movable.",
code=400,
service=service_to_graphql_service(service),
)
volume = BlockDevices().get_block_device(input.location)
if volume is None:
return ServiceJobMutationReturn(
success=False,
message="Volume not found.",
code=404,
service=service_to_graphql_service(service),
)
job = service.move_to_volume(volume)
return ServiceJobMutationReturn(
success=True,
message="Service moved.",
code=200,
service=service_to_graphql_service(service),
job=job_to_api_job(job),
)

@@ -0,0 +1,102 @@
#!/usr/bin/env python3
"""Users management module"""
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.actions.users import UserNotFound
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
remove_ssh_key,
)
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
@strawberry.input
class SshMutationInput:
"""Input type for ssh mutation"""
username: str
ssh_key: str
@strawberry.type
class SshMutations:
"""SSH key mutations"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Add a new ssh key"""
try:
create_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyAlreadyExists:
return UserMutationReturn(
success=False,
message="Key already exists",
code=409,
)
except InvalidPublicKey:
return UserMutationReturn(
success=False,
message="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
code=400,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="New SSH key successfully written",
code=201,
user=get_user_by_username(ssh_input.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Remove ssh key from user"""
try:
remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyNotFound:
return UserMutationReturn(
success=False,
message="Key not found",
code=404,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="SSH key successfully removed",
code=200,
user=get_user_by_username(ssh_input.username),
)

@@ -0,0 +1,102 @@
"""Storage devices mutations"""
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobButationReturn,
GenericMutationReturn,
)
from selfprivacy_api.jobs.migrate_to_binds import (
BindMigrationConfig,
is_bind_migrated,
start_bind_migration,
)
@strawberry.input
class MigrateToBindsInput:
"""Migrate to binds input"""
email_block_device: str
bitwarden_block_device: str
gitea_block_device: str
nextcloud_block_device: str
pleroma_block_device: str
@strawberry.type
class StorageMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def resize_volume(self, name: str) -> GenericMutationReturn:
"""Resize volume"""
volume = BlockDevices().get_block_device(name)
if volume is None:
return GenericMutationReturn(
success=False, code=404, message="Volume not found"
)
volume.resize()
return GenericMutationReturn(
success=True, code=200, message="Volume resize started"
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def mount_volume(self, name: str) -> GenericMutationReturn:
"""Mount volume"""
volume = BlockDevices().get_block_device(name)
if volume is None:
return GenericMutationReturn(
success=False, code=404, message="Volume not found"
)
is_success = volume.mount()
if is_success:
return GenericMutationReturn(
success=True,
code=200,
message="Volume mounted, rebuild the system to apply changes",
)
return GenericMutationReturn(
success=False, code=409, message="Volume not mounted (already mounted?)"
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def unmount_volume(self, name: str) -> GenericMutationReturn:
"""Unmount volume"""
volume = BlockDevices().get_block_device(name)
if volume is None:
return GenericMutationReturn(
success=False, code=404, message="Volume not found"
)
is_success = volume.unmount()
if is_success:
return GenericMutationReturn(
success=True,
code=200,
message="Volume unmounted, rebuild the system to apply changes",
)
return GenericMutationReturn(
success=False, code=409, message="Volume not unmounted (already unmounted?)"
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobButationReturn:
"""Migrate to binds"""
if is_bind_migrated():
return GenericJobButationReturn(
success=False, code=409, message="Already migrated to binds"
)
job = start_bind_migration(
BindMigrationConfig(
email_block_device=input.email_block_device,
bitwarden_block_device=input.bitwarden_block_device,
gitea_block_device=input.gitea_block_device,
nextcloud_block_device=input.nextcloud_block_device,
pleroma_block_device=input.pleroma_block_device,
)
)
return GenericJobButationReturn(
success=True,
code=200,
message="Migration to binds started, rebuild the system to apply changes",
job=job_to_api_job(job),
)

View File

@ -0,0 +1,128 @@
"""System management mutations"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
MutationReturnInterface,
)
import selfprivacy_api.actions.system as system_actions
@strawberry.type
class TimezoneMutationReturn(MutationReturnInterface):
"""Return type of the timezone mutation, contains timezone"""
timezone: typing.Optional[str]
@strawberry.type
class AutoUpgradeSettingsMutationReturn(MutationReturnInterface):
"""Return type autoUpgrade Settings"""
enableAutoUpgrade: bool
allowReboot: bool
@strawberry.input
class AutoUpgradeSettingsInput:
"""Input type for auto upgrade settings"""
enableAutoUpgrade: typing.Optional[bool] = None
allowReboot: typing.Optional[bool] = None
@strawberry.type
class SystemMutations:
"""Mutations related to system settings"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def change_timezone(self, timezone: str) -> TimezoneMutationReturn:
"""Change the timezone of the server. Timezone is a tzdatabase name."""
try:
system_actions.change_timezone(timezone)
except system_actions.InvalidTimezone as e:
return TimezoneMutationReturn(
success=False,
message=str(e),
code=400,
timezone=None,
)
return TimezoneMutationReturn(
success=True,
message="Timezone changed",
code=200,
timezone=timezone,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def change_auto_upgrade_settings(
self, settings: AutoUpgradeSettingsInput
) -> AutoUpgradeSettingsMutationReturn:
"""Change auto upgrade settings of the server."""
system_actions.set_auto_upgrade_settings(
settings.enableAutoUpgrade, settings.allowReboot
)
new_settings = system_actions.get_auto_upgrade_settings()
return AutoUpgradeSettingsMutationReturn(
success=True,
message="Auto-upgrade settings changed",
code=200,
enableAutoUpgrade=new_settings.enable,
allowReboot=new_settings.allowReboot,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_rebuild(self) -> GenericMutationReturn:
system_actions.rebuild_system()
return GenericMutationReturn(
success=True,
message="Starting rebuild system",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_rollback(self) -> GenericMutationReturn:
system_actions.rollback_system()
return GenericMutationReturn(
success=True,
message="Starting rebuild system",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def run_system_upgrade(self) -> GenericMutationReturn:
system_actions.upgrade_system()
return GenericMutationReturn(
success=True,
message="Starting rebuild system",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def reboot_system(self) -> GenericMutationReturn:
system_actions.reboot_system()
return GenericMutationReturn(
success=True,
message="System reboot has started",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def pull_repository_changes(self) -> GenericMutationReturn:
result = system_actions.pull_repository_changes()
if result.status == 0:
return GenericMutationReturn(
success=True,
message="Repository changes pulled",
code=200,
)
return GenericMutationReturn(
success=False,
message=f"Failed to pull repository changes:\n{result.data}",
code=500,
)
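All of these mutations follow one shape: call an action, translate known exceptions into an HTTP-like status code, and return a success/message/code triple. A minimal standalone sketch of that pattern follows; `MutationReturn`, `run_action`, and the toy `change_timezone` are hypothetical stand-ins, not the repo's actual classes.

```python
from dataclasses import dataclass
import typing


@dataclass
class MutationReturn:
    """Mirrors the success/message/code triple used by the mutation return types."""
    success: bool
    message: str
    code: int


def run_action(
    action: typing.Callable[[], None],
    ok_message: str,
    expected_errors: typing.Dict[type, int],
) -> MutationReturn:
    """Run an action; map known exception types to status codes, others to 500."""
    try:
        action()
    except Exception as error:
        code = expected_errors.get(type(error), 500)
        return MutationReturn(success=False, message=str(error), code=code)
    return MutationReturn(success=True, message=ok_message, code=200)


class InvalidTimezone(Exception):
    pass


def change_timezone(tz: str) -> None:
    # toy validation standing in for a real timezone check
    if "/" not in tz:
        raise InvalidTimezone(f"Invalid timezone: {tz}")


result = run_action(
    lambda: change_timezone("Europe/Kyiv"), "Timezone changed", {InvalidTimezone: 400}
)
bad = run_action(
    lambda: change_timezone("nowhere"), "Timezone changed", {InvalidTimezone: 400}
)
```

Centralizing the exception-to-code mapping like this would avoid repeating near-identical `try`/`except` blocks in every mutation.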

View File

@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""Users management module"""
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
)
import selfprivacy_api.actions.users as users_actions
@strawberry.input
class UserMutationInput:
"""Input type for user mutation"""
username: str
password: str
@strawberry.type
class UserMutations:
"""Mutations change user settings"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
    def create_user(self, user: UserMutationInput) -> UserMutationReturn:
        """Create a new user"""
try:
users_actions.create_user(user.username, user.password)
except users_actions.PasswordIsEmpty as e:
return UserMutationReturn(
success=False,
message=str(e),
code=400,
)
except users_actions.UsernameForbidden as e:
return UserMutationReturn(
success=False,
message=str(e),
code=409,
)
except users_actions.UsernameNotAlphanumeric as e:
return UserMutationReturn(
success=False,
message=str(e),
code=400,
)
except users_actions.UsernameTooLong as e:
return UserMutationReturn(
success=False,
message=str(e),
code=400,
)
except users_actions.UserAlreadyExists as e:
return UserMutationReturn(
success=False,
message=str(e),
code=409,
user=get_user_by_username(user.username),
)
return UserMutationReturn(
success=True,
message="User created",
code=201,
user=get_user_by_username(user.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def delete_user(self, username: str) -> GenericMutationReturn:
try:
users_actions.delete_user(username)
except users_actions.UserNotFound as e:
return GenericMutationReturn(
success=False,
message=str(e),
code=404,
)
except users_actions.UserIsProtected as e:
return GenericMutationReturn(
success=False,
message=str(e),
code=400,
)
return GenericMutationReturn(
success=True,
message="User deleted",
code=200,
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def update_user(self, user: UserMutationInput) -> UserMutationReturn:
"""Update user mutation"""
try:
users_actions.update_user(user.username, user.password)
except users_actions.PasswordIsEmpty as e:
return UserMutationReturn(
success=False,
message=str(e),
code=400,
)
except users_actions.UserNotFound as e:
return UserMutationReturn(
success=False,
message=str(e),
code=404,
)
return UserMutationReturn(
success=True,
message="User updated",
code=200,
user=get_user_by_username(user.username),
)

View File

@ -0,0 +1,97 @@
"""API access status"""
# pylint: disable=too-few-public-methods
import datetime
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.actions.api_tokens import get_api_tokens_with_caller_flag
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.utils import parse_date
from selfprivacy_api.dependencies import get_api_version as get_api_version_dependency
from selfprivacy_api.utils.auth import (
get_recovery_token_status,
is_recovery_token_exists,
is_recovery_token_valid,
)
def get_api_version() -> str:
"""Get API version"""
return get_api_version_dependency()
@strawberry.type
class ApiDevice:
"""A single device with SelfPrivacy app installed"""
name: str
creation_date: datetime.datetime
is_caller: bool
@strawberry.type
class ApiRecoveryKeyStatus:
"""Recovery key status"""
exists: bool
valid: bool
creation_date: typing.Optional[datetime.datetime]
expiration_date: typing.Optional[datetime.datetime]
uses_left: typing.Optional[int]
def get_recovery_key_status() -> ApiRecoveryKeyStatus:
"""Get recovery key status"""
if not is_recovery_token_exists():
return ApiRecoveryKeyStatus(
exists=False,
valid=False,
creation_date=None,
expiration_date=None,
uses_left=None,
)
status = get_recovery_token_status()
if status is None:
return ApiRecoveryKeyStatus(
exists=False,
valid=False,
creation_date=None,
expiration_date=None,
uses_left=None,
)
return ApiRecoveryKeyStatus(
exists=True,
valid=is_recovery_token_valid(),
creation_date=parse_date(status["date"]),
expiration_date=parse_date(status["expiration"])
if status["expiration"] is not None
else None,
uses_left=status["uses_left"] if status["uses_left"] is not None else None,
)
@strawberry.type
class Api:
"""API access status"""
version: str = strawberry.field(resolver=get_api_version)
@strawberry.field(permission_classes=[IsAuthenticated])
def devices(self, info: Info) -> typing.List[ApiDevice]:
return [
ApiDevice(
name=device.name,
creation_date=device.date,
is_caller=device.is_caller,
)
for device in get_api_tokens_with_caller_flag(
info.context["request"]
.headers.get("Authorization", "")
.replace("Bearer ", "")
)
]
recovery_key: ApiRecoveryKeyStatus = strawberry.field(
resolver=get_recovery_key_status, permission_classes=[IsAuthenticated]
)
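The recovery key resolver above collapses a raw token-status dict with optional fields into a typed status object, with a fully-empty object for the missing-token case. A standalone sketch of that mapping, assuming a status dict with `date`, `expiration`, and `uses_left` keys; `RecoveryKeyStatus`, `parse_date`, and `to_status` here are simplified stand-ins for the repo's helpers.

```python
import typing
from dataclasses import dataclass
from datetime import datetime


@dataclass
class RecoveryKeyStatus:
    exists: bool
    valid: bool
    creation_date: typing.Optional[datetime]
    expiration_date: typing.Optional[datetime]
    uses_left: typing.Optional[int]


def parse_date(raw: str) -> datetime:
    # simplified stand-in for the API's date parsing helper
    return datetime.fromisoformat(raw)


def to_status(status: typing.Optional[dict], valid: bool) -> RecoveryKeyStatus:
    """Map a raw token-status dict (or None) to the typed status object."""
    if status is None:
        return RecoveryKeyStatus(False, False, None, None, None)
    return RecoveryKeyStatus(
        exists=True,
        valid=valid,
        creation_date=parse_date(status["date"]),
        expiration_date=parse_date(status["expiration"])
        if status["expiration"] is not None
        else None,
        uses_left=status["uses_left"],
    )


present = to_status(
    {"date": "2022-09-22T19:41:48", "expiration": None, "uses_left": 3}, valid=True
)
absent = to_status(None, valid=False)
```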

View File

@ -0,0 +1,30 @@
"""Common types and enums used by different types of queries."""
from enum import Enum
import datetime
import typing
import strawberry
@strawberry.enum
class Severity(Enum):
"""
Severity of an alert.
"""
INFO = "INFO"
WARNING = "WARNING"
ERROR = "ERROR"
CRITICAL = "CRITICAL"
SUCCESS = "SUCCESS"
@strawberry.type
class Alert:
"""
Alert type.
"""
severity: Severity
title: str
message: str
timestamp: typing.Optional[datetime.datetime]

View File

@ -0,0 +1,25 @@
"""Jobs status"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql.common_types.jobs import (
ApiJob,
get_api_job_by_id,
job_to_api_job,
)
from selfprivacy_api.jobs import Jobs
@strawberry.type
class Job:
@strawberry.field
    def get_jobs(self) -> typing.List[ApiJob]:
        """Get all jobs"""
        return [job_to_api_job(job) for job in Jobs.get_instance().get_jobs()]
@strawberry.field
    def get_job(self, job_id: str) -> typing.Optional[ApiJob]:
        """Get a job by its ID"""
        return get_api_job_by_id(job_id)

View File

@ -0,0 +1,13 @@
"""Enums representing different service providers."""
from enum import Enum
import strawberry
@strawberry.enum
class DnsProvider(Enum):
CLOUDFLARE = "CLOUDFLARE"
@strawberry.enum
class ServerProvider(Enum):
HETZNER = "HETZNER"

View File

@ -0,0 +1,18 @@
"""Services status"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql.common_types.service import (
Service,
service_to_graphql_service,
)
from selfprivacy_api.services import get_all_services
@strawberry.type
class Services:
@strawberry.field
def all_services(self) -> typing.List[Service]:
services = get_all_services()
return [service_to_graphql_service(service) for service in services]

View File

@ -0,0 +1,33 @@
"""Storage queries."""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql.common_types.service import (
StorageVolume,
)
from selfprivacy_api.utils.block_devices import BlockDevices
@strawberry.type
class Storage:
"""GraphQL queries to get storage information."""
@strawberry.field
def volumes(self) -> typing.List[StorageVolume]:
"""Get list of volumes"""
return [
StorageVolume(
total_space=str(volume.fssize)
if volume.fssize is not None
else str(volume.size),
free_space=str(volume.fsavail),
used_space=str(volume.fsused),
root=volume.name == "sda1",
name=volume.name,
model=volume.model,
serial=volume.serial,
type=volume.type,
)
for volume in BlockDevices().get_block_devices()
]

View File

@ -0,0 +1,166 @@
"""Common system information and settings"""
# pylint: disable=too-few-public-methods
import os
import typing
import strawberry
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.graphql.queries.common import Alert, Severity
from selfprivacy_api.graphql.queries.providers import DnsProvider, ServerProvider
from selfprivacy_api.jobs import Jobs
from selfprivacy_api.jobs.migrate_to_binds import is_bind_migrated
from selfprivacy_api.services import get_all_required_dns_records
from selfprivacy_api.utils import ReadUserData
import selfprivacy_api.actions.system as system_actions
import selfprivacy_api.actions.ssh as ssh_actions
@strawberry.type
class SystemDomainInfo:
"""Information about the system domain"""
domain: str
hostname: str
provider: DnsProvider
@strawberry.field
def required_dns_records(self) -> typing.List[DnsRecord]:
"""Collect all required DNS records for all services"""
return [
DnsRecord(
record_type=record.type,
name=record.name,
content=record.content,
ttl=record.ttl,
priority=record.priority,
)
for record in get_all_required_dns_records()
]
def get_system_domain_info() -> SystemDomainInfo:
"""Get basic system domain info"""
with ReadUserData() as user_data:
return SystemDomainInfo(
domain=user_data["domain"],
hostname=user_data["hostname"],
provider=DnsProvider.CLOUDFLARE,
)
@strawberry.type
class AutoUpgradeOptions:
"""Automatic upgrade options"""
enable: bool
allow_reboot: bool
def get_auto_upgrade_options() -> AutoUpgradeOptions:
"""Get automatic upgrade options"""
settings = system_actions.get_auto_upgrade_settings()
return AutoUpgradeOptions(
enable=settings.enable,
allow_reboot=settings.allowReboot,
)
@strawberry.type
class SshSettings:
"""SSH settings and root SSH keys"""
enable: bool
password_authentication: bool
root_ssh_keys: typing.List[str]
def get_ssh_settings() -> SshSettings:
"""Get SSH settings"""
settings = ssh_actions.get_ssh_settings()
return SshSettings(
enable=settings.enable,
password_authentication=settings.passwordAuthentication,
root_ssh_keys=settings.rootKeys,
)
def get_system_timezone() -> str:
"""Get system timezone"""
return system_actions.get_timezone()
@strawberry.type
class SystemSettings:
"""Common system settings"""
auto_upgrade: AutoUpgradeOptions = strawberry.field(
resolver=get_auto_upgrade_options
)
ssh: SshSettings = strawberry.field(resolver=get_ssh_settings)
timezone: str = strawberry.field(resolver=get_system_timezone)
def get_system_version() -> str:
"""Get system version"""
return system_actions.get_system_version()
def get_python_version() -> str:
"""Get Python version"""
return system_actions.get_python_version()
@strawberry.type
class SystemInfo:
"""System components versions"""
system_version: str = strawberry.field(resolver=get_system_version)
python_version: str = strawberry.field(resolver=get_python_version)
@strawberry.field
def using_binds(self) -> bool:
"""Check if the system is using BINDs"""
return is_bind_migrated()
@strawberry.type
class SystemProviderInfo:
"""Information about the VPS/Dedicated server provider"""
provider: ServerProvider
id: str
def get_system_provider_info() -> SystemProviderInfo:
"""Get system provider info"""
return SystemProviderInfo(provider=ServerProvider.HETZNER, id="UNKNOWN")
@strawberry.type
class System:
"""
Base system type which represents common system status
"""
status: Alert = strawberry.field(
resolver=lambda: Alert(
severity=Severity.INFO,
title="Test message",
message="Test message",
timestamp=None,
)
)
domain_info: SystemDomainInfo = strawberry.field(resolver=get_system_domain_info)
settings: SystemSettings = SystemSettings()
info: SystemInfo = SystemInfo()
provider: SystemProviderInfo = strawberry.field(resolver=get_system_provider_info)
@strawberry.field
def busy(self) -> bool:
"""Check if the system is busy"""
return Jobs.is_busy()
@strawberry.field
def working_directory(self) -> str:
"""Get working directory"""
return os.getcwd()

View File

@ -0,0 +1,23 @@
"""Users"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql.common_types.user import (
User,
get_user_by_username,
get_users,
)
from selfprivacy_api.graphql import IsAuthenticated
@strawberry.type
class Users:
@strawberry.field(permission_classes=[IsAuthenticated])
def get_user(self, username: str) -> typing.Optional[User]:
"""Get users"""
return get_user_by_username(username)
all_users: typing.List[User] = strawberry.field(
permission_classes=[IsAuthenticated], resolver=get_users
)

View File

@ -0,0 +1,98 @@
"""GraphQL API for SelfPrivacy."""
# pylint: disable=too-few-public-methods
import asyncio
from typing import AsyncGenerator
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.api_mutations import ApiMutations
from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
from selfprivacy_api.graphql.mutations.mutation_interface import GenericMutationReturn
from selfprivacy_api.graphql.mutations.services_mutations import ServicesMutations
from selfprivacy_api.graphql.mutations.ssh_mutations import SshMutations
from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
from selfprivacy_api.graphql.mutations.system_mutations import SystemMutations
from selfprivacy_api.graphql.queries.api_queries import Api
from selfprivacy_api.graphql.queries.jobs import Job
from selfprivacy_api.graphql.queries.services import Services
from selfprivacy_api.graphql.queries.storage import Storage
from selfprivacy_api.graphql.queries.system import System
from selfprivacy_api.graphql.mutations.users_mutations import UserMutations
from selfprivacy_api.graphql.queries.users import Users
from selfprivacy_api.jobs.test import test_job
@strawberry.type
class Query:
"""Root schema for queries"""
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> System:
"""System queries"""
return System()
@strawberry.field
def api(self) -> Api:
"""API access status"""
return Api()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> Users:
"""Users queries"""
return Users()
@strawberry.field(permission_classes=[IsAuthenticated])
def storage(self) -> Storage:
"""Storage queries"""
return Storage()
@strawberry.field(permission_classes=[IsAuthenticated])
def jobs(self) -> Job:
"""Jobs queries"""
return Job()
@strawberry.field(permission_classes=[IsAuthenticated])
def services(self) -> Services:
"""Services queries"""
return Services()
@strawberry.type
class Mutation(
ApiMutations,
SystemMutations,
UserMutations,
SshMutations,
StorageMutations,
ServicesMutations,
JobMutations,
):
"""Root schema for mutations"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def test_mutation(self) -> GenericMutationReturn:
"""Test mutation"""
test_job()
return GenericMutationReturn(
success=True,
message="Test mutation",
code=200,
)
@strawberry.type
class Subscription:
"""Root schema for subscriptions"""
@strawberry.subscription(permission_classes=[IsAuthenticated])
async def count(self, target: int = 100) -> AsyncGenerator[int, None]:
for i in range(target):
yield i
await asyncio.sleep(0.5)
schema = strawberry.Schema(query=Query, mutation=Mutation, subscription=Subscription)
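The `count` subscription is just an async generator; the streaming pattern can be exercised without Strawberry at all. A minimal sketch, with the delay made configurable so it can run instantly (the `collect` helper is hypothetical, standing in for a GraphQL transport consuming the stream):

```python
import asyncio
from typing import AsyncGenerator, List


async def count(target: int = 100, delay: float = 0.5) -> AsyncGenerator[int, None]:
    """Yield 0..target-1, pausing between values like the API's subscription."""
    for i in range(target):
        yield i
        await asyncio.sleep(delay)


async def collect(n: int) -> List[int]:
    # consume the generator the way a subscription transport would stream it
    return [value async for value in count(n, delay=0)]


values = asyncio.run(collect(5))
```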

View File

@ -0,0 +1,228 @@
"""
Jobs controller. It handles the jobs that are created by the user.
This is a singleton class holding the jobs list.
Jobs can be added and removed.
A single job can be updated.
A job is a dictionary with the following keys:
- uid: unique identifier of the job
- type_id: string identifier of the job type
- name: name of the job
- description: description of the job
- status: status of the job
- status_text: textual description of the current state
- progress: progress of the job in percent
- created_at: date of creation of the job
- updated_at: date of last update of the job
- finished_at: date when the job finished
- error: error message if the job failed
- result: result of the job
"""
import typing
import datetime
from uuid import UUID
import asyncio
import json
import os
import time
import uuid
from enum import Enum
from pydantic import BaseModel
from selfprivacy_api.utils import ReadUserData, UserDataFiles, WriteUserData
class JobStatus(Enum):
"""
Status of a job.
"""
CREATED = "CREATED"
RUNNING = "RUNNING"
FINISHED = "FINISHED"
ERROR = "ERROR"
class Job(BaseModel):
"""
Job class.
"""
uid: UUID
type_id: str
name: str
description: str
status: JobStatus
status_text: typing.Optional[str]
progress: typing.Optional[int]
created_at: datetime.datetime
updated_at: datetime.datetime
finished_at: typing.Optional[datetime.datetime]
error: typing.Optional[str]
result: typing.Optional[str]
class Jobs:
"""
Jobs class.
"""
__instance = None
@staticmethod
def get_instance():
"""
Singleton method.
"""
        if Jobs.__instance is None:
            Jobs()
            if Jobs.__instance is None:
                raise Exception("Couldn't init Jobs singleton!")
        return Jobs.__instance
def __init__(self):
"""
Initialize the jobs list.
"""
if Jobs.__instance is not None:
raise Exception("This class is a singleton!")
else:
Jobs.__instance = self
@staticmethod
def reset() -> None:
"""
Reset the jobs list.
"""
with WriteUserData(UserDataFiles.JOBS) as user_data:
user_data["jobs"] = []
@staticmethod
def add(
name: str,
type_id: str,
description: str,
status: JobStatus = JobStatus.CREATED,
status_text: str = "",
progress: int = 0,
) -> Job:
"""
Add a job to the jobs list.
"""
job = Job(
uid=uuid.uuid4(),
name=name,
type_id=type_id,
description=description,
status=status,
status_text=status_text,
progress=progress,
created_at=datetime.datetime.now(),
updated_at=datetime.datetime.now(),
finished_at=None,
error=None,
result=None,
)
with WriteUserData(UserDataFiles.JOBS) as user_data:
try:
if "jobs" not in user_data:
user_data["jobs"] = []
user_data["jobs"].append(json.loads(job.json()))
except json.decoder.JSONDecodeError:
user_data["jobs"] = [json.loads(job.json())]
return job
def remove(self, job: Job) -> None:
"""
Remove a job from the jobs list.
"""
self.remove_by_uid(str(job.uid))
def remove_by_uid(self, job_uuid: str) -> bool:
"""
Remove a job from the jobs list.
"""
with WriteUserData(UserDataFiles.JOBS) as user_data:
if "jobs" not in user_data:
user_data["jobs"] = []
for i, j in enumerate(user_data["jobs"]):
if j["uid"] == job_uuid:
del user_data["jobs"][i]
return True
return False
@staticmethod
def update(
job: Job,
status: JobStatus,
status_text: typing.Optional[str] = None,
progress: typing.Optional[int] = None,
name: typing.Optional[str] = None,
description: typing.Optional[str] = None,
error: typing.Optional[str] = None,
result: typing.Optional[str] = None,
) -> Job:
"""
Update a job in the jobs list.
"""
if name is not None:
job.name = name
if description is not None:
job.description = description
if status_text is not None:
job.status_text = status_text
if progress is not None:
job.progress = progress
job.status = status
job.updated_at = datetime.datetime.now()
job.error = error
job.result = result
if status in (JobStatus.FINISHED, JobStatus.ERROR):
job.finished_at = datetime.datetime.now()
with WriteUserData(UserDataFiles.JOBS) as user_data:
if "jobs" not in user_data:
user_data["jobs"] = []
for i, j in enumerate(user_data["jobs"]):
if j["uid"] == str(job.uid):
user_data["jobs"][i] = json.loads(job.json())
break
return job
@staticmethod
def get_job(uid: str) -> typing.Optional[Job]:
"""
Get a job from the jobs list.
"""
with ReadUserData(UserDataFiles.JOBS) as user_data:
if "jobs" not in user_data:
user_data["jobs"] = []
for job in user_data["jobs"]:
if job["uid"] == uid:
return Job(**job)
return None
@staticmethod
def get_jobs() -> typing.List[Job]:
"""
Get the jobs list.
"""
with ReadUserData(UserDataFiles.JOBS) as user_data:
try:
if "jobs" not in user_data:
user_data["jobs"] = []
return [Job(**job) for job in user_data["jobs"]]
except json.decoder.JSONDecodeError:
return []
@staticmethod
def is_busy() -> bool:
"""
Check if there is a job running.
"""
with ReadUserData(UserDataFiles.JOBS) as user_data:
if "jobs" not in user_data:
user_data["jobs"] = []
for job in user_data["jobs"]:
if job["status"] == JobStatus.RUNNING.value:
return True
return False
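The core of `Jobs.update` is worth isolating: optional fields are applied only when given, `updated_at` is refreshed on every call, and `finished_at` is stamped only when the job reaches a terminal status. A standalone sketch using stdlib dataclasses instead of pydantic (`Job` and `update` here are simplified stand-ins, not the repo's classes):

```python
import datetime
import typing
import uuid
from dataclasses import dataclass, field
from enum import Enum


class JobStatus(Enum):
    CREATED = "CREATED"
    RUNNING = "RUNNING"
    FINISHED = "FINISHED"
    ERROR = "ERROR"


@dataclass
class Job:
    name: str
    uid: uuid.UUID = field(default_factory=uuid.uuid4)
    status: JobStatus = JobStatus.CREATED
    progress: int = 0
    finished_at: typing.Optional[datetime.datetime] = None


def update(job: Job, status: JobStatus, progress: typing.Optional[int] = None) -> Job:
    """Update a job in place; terminal statuses stamp finished_at."""
    if progress is not None:
        job.progress = progress
    job.status = status
    if status in (JobStatus.FINISHED, JobStatus.ERROR):
        job.finished_at = datetime.datetime.now()
    return job


job = Job(name="Test job")
update(job, JobStatus.RUNNING, progress=5)
running_finished = job.finished_at  # still None while running
update(job, JobStatus.FINISHED, progress=100)
```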

View File

@ -0,0 +1,329 @@
"""Function to perform migration of app data to binds."""
import subprocess
import pathlib
import shutil
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevices
class BindMigrationConfig(BaseModel):
"""Config for bind migration.
For each service provide block device name.
"""
email_block_device: str
bitwarden_block_device: str
gitea_block_device: str
nextcloud_block_device: str
pleroma_block_device: str
def is_bind_migrated() -> bool:
"""Check if bind migration was performed."""
with ReadUserData() as user_data:
return user_data.get("useBinds", False)
def activate_binds(config: BindMigrationConfig):
"""Activate binds."""
# Activate binds in userdata
with WriteUserData() as user_data:
if "email" not in user_data:
user_data["email"] = {}
user_data["email"]["location"] = config.email_block_device
if "bitwarden" not in user_data:
user_data["bitwarden"] = {}
user_data["bitwarden"]["location"] = config.bitwarden_block_device
if "gitea" not in user_data:
user_data["gitea"] = {}
user_data["gitea"]["location"] = config.gitea_block_device
if "nextcloud" not in user_data:
user_data["nextcloud"] = {}
user_data["nextcloud"]["location"] = config.nextcloud_block_device
if "pleroma" not in user_data:
user_data["pleroma"] = {}
user_data["pleroma"]["location"] = config.pleroma_block_device
user_data["useBinds"] = True
def move_folder(
data_path: pathlib.Path, bind_path: pathlib.Path, user: str, group: str
):
"""Move folder from data to bind."""
    if not data_path.exists():
        return
    shutil.move(str(data_path), str(bind_path))
try:
data_path.mkdir(mode=0o750, parents=True, exist_ok=True)
except Exception as e:
print(f"Error creating data path: {e}")
return
try:
shutil.chown(str(bind_path), user=user, group=group)
shutil.chown(str(data_path), user=user, group=group)
except LookupError:
pass
try:
subprocess.run(["mount", "--bind", str(bind_path), str(data_path)], check=True)
except subprocess.CalledProcessError as error:
print(error)
try:
subprocess.run(["chown", "-R", f"{user}:{group}", str(data_path)], check=True)
except subprocess.CalledProcessError as error:
print(error)
@huey.task()
def migrate_to_binds(config: BindMigrationConfig, job: Job):
"""Migrate app data to binds."""
# Exit if migration is already done
if is_bind_migrated():
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Migration already done.",
)
return
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text="Checking if all volumes are available.",
)
# Get block devices.
block_devices = BlockDevices().get_block_devices()
block_device_names = [device.name for device in block_devices]
# Get all unique required block devices
required_block_devices = []
for block_device_name in config.__dict__.values():
if block_device_name not in required_block_devices:
required_block_devices.append(block_device_name)
# Check if all block devices from config are present.
for block_device_name in required_block_devices:
if block_device_name not in block_device_names:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"Block device {block_device_name} not found.",
)
return
# Make sure all required block devices are mounted.
# sda1 is the root partition and is always mounted.
for block_device_name in required_block_devices:
if block_device_name == "sda1":
continue
block_device = BlockDevices().get_block_device(block_device_name)
if block_device is None:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"Block device {block_device_name} not found.",
)
return
if f"/volumes/{block_device_name}" not in block_device.mountpoints:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error=f"Block device {block_device_name} not mounted.",
)
return
# Make sure /volumes/sda1 exists.
pathlib.Path("/volumes/sda1").mkdir(parents=True, exist_ok=True)
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=5,
status_text="Activating binds in NixOS config.",
)
activate_binds(config)
# Perform migration of Nextcloud.
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=10,
status_text="Migrating Nextcloud.",
)
Nextcloud().stop()
# If /volumes/sda1/nextcloud or /volumes/sdb/nextcloud exists, skip it.
if not pathlib.Path("/volumes/sda1/nextcloud").exists():
if not pathlib.Path("/volumes/sdb/nextcloud").exists():
move_folder(
data_path=pathlib.Path("/var/lib/nextcloud"),
bind_path=pathlib.Path(
f"/volumes/{config.nextcloud_block_device}/nextcloud"
),
user="nextcloud",
group="nextcloud",
)
# Start Nextcloud
Nextcloud().start()
# Perform migration of Bitwarden
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=28,
status_text="Migrating Bitwarden.",
)
Bitwarden().stop()
if not pathlib.Path("/volumes/sda1/bitwarden").exists():
if not pathlib.Path("/volumes/sdb/bitwarden").exists():
move_folder(
data_path=pathlib.Path("/var/lib/bitwarden"),
bind_path=pathlib.Path(
f"/volumes/{config.bitwarden_block_device}/bitwarden"
),
user="vaultwarden",
group="vaultwarden",
)
if not pathlib.Path("/volumes/sda1/bitwarden_rs").exists():
if not pathlib.Path("/volumes/sdb/bitwarden_rs").exists():
move_folder(
data_path=pathlib.Path("/var/lib/bitwarden_rs"),
bind_path=pathlib.Path(
f"/volumes/{config.bitwarden_block_device}/bitwarden_rs"
),
user="vaultwarden",
group="vaultwarden",
)
# Start Bitwarden
Bitwarden().start()
# Perform migration of Gitea
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=46,
status_text="Migrating Gitea.",
)
Gitea().stop()
if not pathlib.Path("/volumes/sda1/gitea").exists():
if not pathlib.Path("/volumes/sdb/gitea").exists():
move_folder(
data_path=pathlib.Path("/var/lib/gitea"),
bind_path=pathlib.Path(f"/volumes/{config.gitea_block_device}/gitea"),
user="gitea",
group="gitea",
)
Gitea().start()
# Perform migration of Mail server
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=64,
status_text="Migrating Mail server.",
)
MailServer().stop()
if not pathlib.Path("/volumes/sda1/vmail").exists():
if not pathlib.Path("/volumes/sdb/vmail").exists():
move_folder(
data_path=pathlib.Path("/var/vmail"),
bind_path=pathlib.Path(f"/volumes/{config.email_block_device}/vmail"),
user="virtualMail",
group="virtualMail",
)
if not pathlib.Path("/volumes/sda1/sieve").exists():
if not pathlib.Path("/volumes/sdb/sieve").exists():
move_folder(
data_path=pathlib.Path("/var/sieve"),
bind_path=pathlib.Path(f"/volumes/{config.email_block_device}/sieve"),
user="virtualMail",
group="virtualMail",
)
MailServer().start()
# Perform migration of Pleroma
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=82,
status_text="Migrating Pleroma.",
)
Pleroma().stop()
if not pathlib.Path("/volumes/sda1/pleroma").exists():
if not pathlib.Path("/volumes/sdb/pleroma").exists():
move_folder(
data_path=pathlib.Path("/var/lib/pleroma"),
bind_path=pathlib.Path(
f"/volumes/{config.pleroma_block_device}/pleroma"
),
user="pleroma",
group="pleroma",
)
if not pathlib.Path("/volumes/sda1/postgresql").exists():
if not pathlib.Path("/volumes/sdb/postgresql").exists():
move_folder(
data_path=pathlib.Path("/var/lib/postgresql"),
bind_path=pathlib.Path(
f"/volumes/{config.pleroma_block_device}/postgresql"
),
user="postgres",
group="postgres",
)
Pleroma().start()
Jobs.update(
job=job,
status=JobStatus.FINISHED,
progress=100,
status_text="Migration finished.",
result="Migration finished.",
)
def start_bind_migration(config: BindMigrationConfig) -> Job:
"""Start migration."""
job = Jobs.add(
type_id="migrations.migrate_to_binds",
name="Migrate to binds",
description="Migration required to use the new disk space management.",
)
migrate_to_binds(config, job)
return job
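The `move_folder` helper is not shown in this diff; below is a minimal sketch of what such an idempotent move might look like. The name and signature mirror the call sites above, but the body (and the optional ownership arguments) are assumptions — the real helper likely also manages the bind mount itself:

```python
import pathlib
import shutil

def move_folder(
    data_path: pathlib.Path,
    bind_path: pathlib.Path,
    user=None,
    group=None,
) -> None:
    """Move a service's data folder onto the volume and fix ownership.

    Skips the move when the target already exists, which is what makes
    the migration above safe to re-run after a failure.
    """
    if bind_path.exists():
        return  # already migrated
    bind_path.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(data_path), str(bind_path))
    if user or group:
        # Recursively hand the tree to the service's system user.
        shutil.chown(bind_path, user=user, group=group)
        for child in bind_path.rglob("*"):
            shutil.chown(child, user=user, group=group)
```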

View File

@ -0,0 +1,57 @@
import time
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs
@huey.task()
def test_job():
job = Jobs.get_instance().add(
type_id="test",
name="Test job",
description="This is a test job.",
status=JobStatus.CREATED,
status_text="",
progress=0,
)
    for progress in range(5, 30, 5):
        time.sleep(5)
        Jobs.get_instance().update(
            job=job,
            status=JobStatus.RUNNING,
            status_text="Performing pre-move checks...",
            progress=progress,
        )
time.sleep(5)
Jobs.get_instance().update(
job=job,
status=JobStatus.FINISHED,
status_text="Job finished.",
progress=100,
)

View File

@ -0,0 +1,53 @@
"""Migrations module.
The migrations module was introduced in v1.1.1 and provides one-shot
migrations which cannot be performed through NixOS configuration file changes.
These migrations are checked and run before every start of the API.
You can disable certain migrations if needed by creating an array
at api.skippedMigrations in userdata.json and populating it
with IDs of the migrations to skip.
Adding DISABLE_ALL to that array disables the migrations module entirely.
"""
from selfprivacy_api.migrations.check_for_failed_binds_migration import CheckForFailedBindsMigration
from selfprivacy_api.utils import ReadUserData
from selfprivacy_api.migrations.fix_nixos_config_branch import FixNixosConfigBranch
from selfprivacy_api.migrations.create_tokens_json import CreateTokensJson
from selfprivacy_api.migrations.migrate_to_selfprivacy_channel import (
MigrateToSelfprivacyChannel,
)
from selfprivacy_api.migrations.mount_volume import MountVolume
migrations = [
FixNixosConfigBranch(),
CreateTokensJson(),
MigrateToSelfprivacyChannel(),
MountVolume(),
CheckForFailedBindsMigration(),
]
def run_migrations():
"""
    Go over all migrations. If a migration is not skipped in the userdata
    file and reports that it is needed, run it.
"""
    with ReadUserData() as data:
        skipped_migrations = data.get("api", {}).get("skippedMigrations", [])
if "DISABLE_ALL" in skipped_migrations:
return
for migration in migrations:
if migration.get_migration_name() not in skipped_migrations:
try:
if migration.is_migration_needed():
migration.migrate()
except Exception as err:
print(f"Error while migrating {migration.get_migration_name()}")
print(err)
print("Skipping this migration")
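As the module docstring notes, individual migrations can be skipped from userdata.json. An illustrative fragment (the `api.skippedMigrations` key is from the docstring above; surrounding fields are placeholders):

```json
{
    "api": {
        "token": "...",
        "skippedMigrations": ["mount_volume", "migrate_to_selfprivacy_channel"]
    }
}
```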

View File

@ -0,0 +1,48 @@
from selfprivacy_api.jobs import JobStatus, Jobs
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import WriteUserData
class CheckForFailedBindsMigration(Migration):
    """Retry the binds migration if a previous attempt failed."""
def get_migration_name(self):
return "check_for_failed_binds_migration"
def get_migration_description(self):
return "If binds migration failed, try again."
def is_migration_needed(self):
try:
jobs = Jobs.get_instance().get_jobs()
            # If there is a job with type_id "migrations.migrate_to_binds"
            # that never finished, a previous binds migration got stuck
            # and has to be retried (the stale job is removed in migrate()).
for job in jobs:
if (
job.type_id == "migrations.migrate_to_binds"
and job.status != JobStatus.FINISHED
):
return True
return False
except Exception as e:
print(e)
return False
def migrate(self):
        # Remove the stuck binds migration job and reset the useBinds flag
        # so that the migration is attempted again on the next start.
try:
jobs = Jobs.get_instance().get_jobs()
for job in jobs:
if (
job.type_id == "migrations.migrate_to_binds"
and job.status != JobStatus.FINISHED
):
Jobs.get_instance().remove(job)
with WriteUserData() as userdata:
userdata["useBinds"] = False
print("Done")
except Exception as e:
print(e)
            print("Error resetting binds migration")

View File

@ -0,0 +1,58 @@
from datetime import datetime
import os
import json
from pathlib import Path
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import TOKENS_FILE, ReadUserData
class CreateTokensJson(Migration):
def get_migration_name(self):
return "create_tokens_json"
def get_migration_description(self):
return """Selfprivacy API used a single token in userdata.json for authentication.
This migration creates a new tokens.json file with the old token in it.
This migration runs if the tokens.json file does not exist.
Old token is located at ["api"]["token"] in userdata.json.
tokens.json path is declared in TOKENS_FILE imported from utils.py
tokens.json must have the following format:
{
"tokens": [
{
"token": "token_string",
"name": "Master Token",
"date": "current date from str(datetime.now())",
}
]
}
tokens.json must have 0600 permissions.
"""
def is_migration_needed(self):
return not os.path.exists(TOKENS_FILE)
def migrate(self):
try:
print(f"Creating tokens.json file at {TOKENS_FILE}")
with ReadUserData() as userdata:
token = userdata["api"]["token"]
# Touch tokens.json with 0600 permissions
Path(TOKENS_FILE).touch(mode=0o600)
# Write token to tokens.json
structure = {
"tokens": [
{
"token": token,
"name": "primary_token",
"date": str(datetime.now()),
}
]
}
with open(TOKENS_FILE, "w", encoding="utf-8") as tokens:
json.dump(structure, tokens, indent=4)
print("Done")
except Exception as e:
print(e)
print("Error creating tokens.json")
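One subtlety in the migration above: `Path.touch(mode=0o600)` is masked by the process umask, so the resulting mode is only guaranteed to be *at most* 0600, not exactly 0600. A small sketch of making the mode deterministic with an explicit chmod (the helper name is ours, not the migration's):

```python
import os
from pathlib import Path

def create_private_file(path: str) -> None:
    """Create an empty file whose permissions are exactly 0600.

    Path.touch(mode=...) is subject to the umask, so the mode is
    enforced explicitly afterwards.
    """
    Path(path).touch(mode=0o600)
    os.chmod(path, 0o600)
```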

View File

@ -0,0 +1,57 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class FixNixosConfigBranch(Migration):
def get_migration_name(self):
return "fix_nixos_config_branch"
def get_migration_description(self):
return """Mobile SelfPrivacy app introduced a bug in version 0.4.0.
New servers were initialized with a rolling-testing nixos config branch.
This was fixed in app version 0.4.2, but existing servers were not updated.
This migration fixes this by changing the nixos config branch to master.
"""
def is_migration_needed(self):
"""Check the current branch of /etc/nixos and return True if it is rolling-testing"""
current_working_directory = os.getcwd()
try:
os.chdir("/etc/nixos")
nixos_config_branch = subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
)
os.chdir(current_working_directory)
return nixos_config_branch.decode("utf-8").strip() == "rolling-testing"
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
return False
def migrate(self):
        """Affected servers pulled the config with the --single-branch flag.
        Git config remote.origin.fetch has to be changed so that all branches
        are fetched. Then fetch all branches, pull, and switch to master.
        """
print("Fixing Nixos config branch")
current_working_directory = os.getcwd()
try:
os.chdir("/etc/nixos")
subprocess.check_output(
[
"git",
"config",
"remote.origin.fetch",
"+refs/heads/*:refs/remotes/origin/*",
]
)
subprocess.check_output(["git", "fetch", "--all"])
subprocess.check_output(["git", "pull"])
subprocess.check_output(["git", "checkout", "master"])
os.chdir(current_working_directory)
print("Done")
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error")
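The chdir/restore dance above must be repeated in every branch; `subprocess` can instead run the command inside the repository via `cwd=`, which cannot leave the process in the wrong directory on error. A sketch of that alternative (the helper names are ours):

```python
import subprocess

def git_current_branch(repo_path: str) -> bytes:
    """Run `git rev-parse --abbrev-ref HEAD` in repo_path without
    changing the process working directory."""
    return subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        cwd=repo_path,
        start_new_session=True,
    )

def is_rolling_testing(branch_output: bytes) -> bool:
    """Decode and compare, mirroring is_migration_needed above."""
    return branch_output.decode("utf-8").strip() == "rolling-testing"
```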

View File

@ -0,0 +1,49 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class MigrateToSelfprivacyChannel(Migration):
"""Migrate to selfprivacy Nix channel."""
def get_migration_name(self):
return "migrate_to_selfprivacy_channel"
def get_migration_description(self):
return "Migrate to selfprivacy Nix channel."
def is_migration_needed(self):
try:
output = subprocess.check_output(
["nix-channel", "--list"], start_new_session=True
)
output = output.decode("utf-8")
first_line = output.split("\n", maxsplit=1)[0]
return first_line.startswith("nixos") and (
first_line.endswith("nixos-21.11") or first_line.endswith("nixos-21.05")
)
except subprocess.CalledProcessError:
return False
def migrate(self):
# Change the channel and update them.
# Also, go to /etc/nixos directory and make a git pull
current_working_directory = os.getcwd()
try:
print("Changing channel")
os.chdir("/etc/nixos")
subprocess.check_output(
[
"nix-channel",
"--add",
"https://channel.selfprivacy.org/nixos-selfpricacy",
"nixos",
]
)
subprocess.check_output(["nix-channel", "--update"])
subprocess.check_output(["git", "pull"])
os.chdir(current_working_directory)
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error")

View File

@ -0,0 +1,28 @@
from abc import ABC, abstractmethod
class Migration(ABC):
"""
Abstract Migration class
This class is used to define the structure of a migration
Migration has a function is_migration_needed() that returns True or False
Migration has a function migrate() that does the migration
Migration has a function get_migration_name() that returns the migration name
Migration has a function get_migration_description() that returns the migration description
"""
@abstractmethod
def get_migration_name(self):
pass
@abstractmethod
def get_migration_description(self):
pass
@abstractmethod
def is_migration_needed(self):
pass
@abstractmethod
def migrate(self):
pass
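A minimal concrete subclass showing how the four abstract methods fit together (a made-up migration for illustration, not one from the codebase; the `Migration` base is repeated so the example is self-contained):

```python
from abc import ABC, abstractmethod

class Migration(ABC):
    """Self-contained copy of the abstract base defined above."""

    @abstractmethod
    def get_migration_name(self):
        pass

    @abstractmethod
    def get_migration_description(self):
        pass

    @abstractmethod
    def is_migration_needed(self):
        pass

    @abstractmethod
    def migrate(self):
        pass

class ExampleNoopMigration(Migration):
    """Illustrative only: never reports itself as needed."""

    def get_migration_name(self):
        return "example_noop"

    def get_migration_description(self):
        return "Demonstrates the Migration interface."

    def is_migration_needed(self):
        return False

    def migrate(self):
        pass
```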

View File

@ -0,0 +1,51 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevices
class MountVolume(Migration):
"""Mount volume."""
def get_migration_name(self):
return "mount_volume"
def get_migration_description(self):
return "Mount volume if it is not mounted."
def is_migration_needed(self):
try:
with ReadUserData() as userdata:
return "volumes" not in userdata
except Exception as e:
print(e)
return False
def migrate(self):
# Get info about existing volumes
# Write info about volumes to userdata.json
try:
volumes = BlockDevices().get_block_devices()
# If there is an unmounted volume sdb,
# Write it to userdata.json
is_there_a_volume = False
for volume in volumes:
if volume.name == "sdb":
is_there_a_volume = True
break
with WriteUserData() as userdata:
userdata["volumes"] = []
if is_there_a_volume:
userdata["volumes"].append(
{
"device": "/dev/sdb",
"mountPoint": "/volumes/sdb",
"fsType": "ext4",
}
)
print("Done")
except Exception as e:
print(e)
print("Error mounting volume")
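The userdata entry written above can be factored into a small pure function, which also makes the sdb special-casing easy to see (a sketch; the function name is ours, the field names and the ext4 assumption come from the migration):

```python
def build_volumes(device_names) -> list:
    """Mirror the migration: record /dev/sdb if it is among the
    detected block devices, assuming an ext4 filesystem."""
    if "sdb" in device_names:
        return [
            {
                "device": "/dev/sdb",
                "mountPoint": "/volumes/sdb",
                "fsType": "ext4",
            }
        ]
    return []
```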

View File

@ -0,0 +1,127 @@
from datetime import datetime
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.actions.api_tokens import (
CannotDeleteCallerException,
InvalidExpirationDate,
InvalidUsesLeft,
NotFoundException,
delete_api_token,
get_api_recovery_token_status,
get_api_tokens_with_caller_flag,
get_new_api_recovery_key,
refresh_api_token,
)
from selfprivacy_api.dependencies import TokenHeader, get_token_header
from selfprivacy_api.utils.auth import (
delete_new_device_auth_token,
get_new_device_auth_token,
use_mnemonic_recoverery_token,
use_new_device_auth_token,
)
router = APIRouter(
prefix="/auth",
tags=["auth"],
responses={404: {"description": "Not found"}},
)
@router.get("/tokens")
async def rest_get_tokens(auth_token: TokenHeader = Depends(get_token_header)):
"""Get the tokens info"""
return get_api_tokens_with_caller_flag(auth_token.token)
class DeleteTokenInput(BaseModel):
"""Delete token input"""
token_name: str
@router.delete("/tokens")
async def rest_delete_tokens(
token: DeleteTokenInput, auth_token: TokenHeader = Depends(get_token_header)
):
"""Delete the tokens"""
try:
delete_api_token(auth_token.token, token.token_name)
except NotFoundException:
raise HTTPException(status_code=404, detail="Token not found")
except CannotDeleteCallerException:
raise HTTPException(status_code=400, detail="Cannot delete caller's token")
return {"message": "Token deleted"}
@router.post("/tokens")
async def rest_refresh_token(auth_token: TokenHeader = Depends(get_token_header)):
"""Refresh the token"""
try:
new_token = refresh_api_token(auth_token.token)
except NotFoundException:
raise HTTPException(status_code=404, detail="Token not found")
return {"token": new_token}
@router.get("/recovery_token")
async def rest_get_recovery_token_status(
auth_token: TokenHeader = Depends(get_token_header),
):
return get_api_recovery_token_status()
class CreateRecoveryTokenInput(BaseModel):
expiration: Optional[datetime] = None
uses: Optional[int] = None
@router.post("/recovery_token")
async def rest_create_recovery_token(
limits: CreateRecoveryTokenInput = CreateRecoveryTokenInput(),
auth_token: TokenHeader = Depends(get_token_header),
):
try:
token = get_new_api_recovery_key(limits.expiration, limits.uses)
except InvalidExpirationDate as e:
raise HTTPException(status_code=400, detail=str(e))
except InvalidUsesLeft as e:
raise HTTPException(status_code=400, detail=str(e))
return {"token": token}
class UseTokenInput(BaseModel):
token: str
device: str
@router.post("/recovery_token/use")
async def rest_use_recovery_token(input: UseTokenInput):
token = use_mnemonic_recoverery_token(input.token, input.device)
if token is None:
raise HTTPException(status_code=404, detail="Token not found")
return {"token": token}
@router.post("/new_device")
async def rest_new_device(auth_token: TokenHeader = Depends(get_token_header)):
token = get_new_device_auth_token()
return {"token": token}
@router.delete("/new_device")
async def rest_delete_new_device_token(
auth_token: TokenHeader = Depends(get_token_header),
):
delete_new_device_auth_token()
return {"token": None}
@router.post("/new_device/authorize")
async def rest_new_device_authorize(input: UseTokenInput):
token = use_new_device_auth_token(input.token, input.device)
if token is None:
raise HTTPException(status_code=404, detail="Token not found")
return {"message": "Device authorized", "token": token}

View File

@ -0,0 +1,373 @@
"""Basic services legacy api"""
import base64
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
enable_ssh,
get_ssh_settings,
remove_ssh_key,
set_ssh_settings,
)
from selfprivacy_api.actions.users import UserNotFound, get_user_by_username
from selfprivacy_api.dependencies import get_token_header
from selfprivacy_api.restic_controller import ResticController, ResticStates
from selfprivacy_api.restic_controller import tasks as restic_tasks
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.ocserv import Ocserv
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.services.service import ServiceStatus
from selfprivacy_api.utils import WriteUserData, get_dkim_key, get_domain
router = APIRouter(
prefix="/services",
tags=["services"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
def service_status_to_return_code(status: ServiceStatus):
"""Converts service status object to return code for
compatibility with legacy api"""
if status == ServiceStatus.ACTIVE:
return 0
elif status == ServiceStatus.FAILED:
return 1
elif status == ServiceStatus.INACTIVE:
return 3
elif status == ServiceStatus.OFF:
return 4
else:
return 2
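The if/elif chain is a pure lookup table and can be written as one. A sketch with an inlined stand-in for the imported `ServiceStatus` enum (its members are taken from the branches above; `RELOADING` is an assumed example of a state that hits the fall-through):

```python
from enum import Enum, auto

class ServiceStatus(Enum):
    """Stand-in for selfprivacy_api.services.service.ServiceStatus."""
    ACTIVE = auto()
    INACTIVE = auto()
    FAILED = auto()
    OFF = auto()
    RELOADING = auto()  # assumed transitional state

_LEGACY_CODES = {
    ServiceStatus.ACTIVE: 0,
    ServiceStatus.FAILED: 1,
    ServiceStatus.INACTIVE: 3,
    ServiceStatus.OFF: 4,
}

def service_status_to_return_code(status: ServiceStatus) -> int:
    # Any status without a dedicated code maps to 2, as in the else branch.
    return _LEGACY_CODES.get(status, 2)
```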
@router.get("/status")
async def get_status():
"""Get the status of the services"""
mail_status = MailServer.get_status()
bitwarden_status = Bitwarden.get_status()
gitea_status = Gitea.get_status()
nextcloud_status = Nextcloud.get_status()
    ocserv_status = Ocserv.get_status()
    pleroma_status = Pleroma.get_status()
    return {
        "imap": service_status_to_return_code(mail_status),
        "smtp": service_status_to_return_code(mail_status),
        "http": 0,
        "bitwarden": service_status_to_return_code(bitwarden_status),
        "gitea": service_status_to_return_code(gitea_status),
        "nextcloud": service_status_to_return_code(nextcloud_status),
        "ocserv": service_status_to_return_code(ocserv_status),
"pleroma": service_status_to_return_code(pleroma_status),
}
@router.post("/bitwarden/enable")
async def enable_bitwarden():
"""Enable Bitwarden"""
Bitwarden.enable()
return {
"status": 0,
"message": "Bitwarden enabled",
}
@router.post("/bitwarden/disable")
async def disable_bitwarden():
"""Disable Bitwarden"""
Bitwarden.disable()
return {
"status": 0,
"message": "Bitwarden disabled",
}
@router.post("/gitea/enable")
async def enable_gitea():
"""Enable Gitea"""
Gitea.enable()
return {
"status": 0,
"message": "Gitea enabled",
}
@router.post("/gitea/disable")
async def disable_gitea():
"""Disable Gitea"""
Gitea.disable()
return {
"status": 0,
"message": "Gitea disabled",
}
@router.get("/mailserver/dkim")
async def get_mailserver_dkim():
"""Get the DKIM record for the mailserver"""
domain = get_domain()
dkim = get_dkim_key(domain)
if dkim is None:
raise HTTPException(status_code=404, detail="DKIM record not found")
dkim = base64.b64encode(dkim.encode("utf-8")).decode("utf-8")
return dkim
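The endpoint returns the DKIM TXT record base64-encoded, so clients must decode the response body before use. A round-trip sketch (helper names are ours):

```python
import base64

def encode_dkim(record: str) -> str:
    """Encode a DKIM record the way the endpoint above does."""
    return base64.b64encode(record.encode("utf-8")).decode("utf-8")

def decode_dkim(payload: str) -> str:
    """What an API client does with the response body."""
    return base64.b64decode(payload).decode("utf-8")
```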
@router.post("/nextcloud/enable")
async def enable_nextcloud():
"""Enable Nextcloud"""
Nextcloud.enable()
return {
"status": 0,
"message": "Nextcloud enabled",
}
@router.post("/nextcloud/disable")
async def disable_nextcloud():
"""Disable Nextcloud"""
Nextcloud.disable()
return {
"status": 0,
"message": "Nextcloud disabled",
}
@router.post("/ocserv/enable")
async def enable_ocserv():
"""Enable Ocserv"""
Ocserv.enable()
return {
"status": 0,
"message": "Ocserv enabled",
}
@router.post("/ocserv/disable")
async def disable_ocserv():
"""Disable Ocserv"""
Ocserv.disable()
return {
"status": 0,
"message": "Ocserv disabled",
}
@router.post("/pleroma/enable")
async def enable_pleroma():
"""Enable Pleroma"""
Pleroma.enable()
return {
"status": 0,
"message": "Pleroma enabled",
}
@router.post("/pleroma/disable")
async def disable_pleroma():
"""Disable Pleroma"""
Pleroma.disable()
return {
"status": 0,
"message": "Pleroma disabled",
}
@router.get("/restic/backup/list")
async def get_restic_backup_list():
restic = ResticController()
return restic.snapshot_list
@router.put("/restic/backup/create")
async def create_restic_backup():
restic = ResticController()
if restic.state is ResticStates.NO_KEY:
raise HTTPException(status_code=400, detail="Backup key not provided")
if restic.state is ResticStates.INITIALIZING:
raise HTTPException(status_code=400, detail="Backup is initializing")
if restic.state is ResticStates.BACKING_UP:
raise HTTPException(status_code=409, detail="Backup is already running")
restic_tasks.start_backup()
return {
"status": 0,
"message": "Backup creation has started",
}
@router.get("/restic/backup/status")
async def get_restic_backup_status():
restic = ResticController()
return {
"status": restic.state.name,
"progress": restic.progress,
"error_message": restic.error_message,
}
@router.get("/restic/backup/reload")
async def reload_restic_backup():
restic_tasks.load_snapshots()
return {
"status": 0,
"message": "Snapshots reload started",
}
class BackupRestoreInput(BaseModel):
backupId: str
@router.put("/restic/backup/restore")
async def restore_restic_backup(backup: BackupRestoreInput):
restic = ResticController()
if restic.state is ResticStates.NO_KEY:
raise HTTPException(status_code=400, detail="Backup key not provided")
if restic.state is ResticStates.NOT_INITIALIZED:
raise HTTPException(
status_code=400, detail="Backups repository is not initialized"
)
if restic.state is ResticStates.BACKING_UP:
raise HTTPException(status_code=409, detail="Backup is already running")
if restic.state is ResticStates.INITIALIZING:
raise HTTPException(status_code=400, detail="Repository is initializing")
if restic.state is ResticStates.RESTORING:
raise HTTPException(status_code=409, detail="Restore is already running")
for backup_item in restic.snapshot_list:
if backup_item["short_id"] == backup.backupId:
restic_tasks.restore_from_backup(backup.backupId)
return {
"status": 0,
"message": "Backup restoration procedure started",
}
raise HTTPException(status_code=404, detail="Backup not found")
class BackblazeConfigInput(BaseModel):
accountId: str
accountKey: str
bucket: str
@router.put("/restic/backblaze/config")
async def set_backblaze_config(backblaze_config: BackblazeConfigInput):
with WriteUserData() as data:
if "backblaze" not in data:
data["backblaze"] = {}
data["backblaze"]["accountId"] = backblaze_config.accountId
data["backblaze"]["accountKey"] = backblaze_config.accountKey
data["backblaze"]["bucket"] = backblaze_config.bucket
restic_tasks.update_keys_from_userdata()
return "New Backblaze settings saved"
@router.post("/ssh/enable")
async def rest_enable_ssh():
"""Enable SSH"""
enable_ssh()
return {
"status": 0,
"message": "SSH enabled",
}
@router.get("/ssh")
async def rest_get_ssh():
"""Get the SSH configuration"""
settings = get_ssh_settings()
return {
"enable": settings.enable,
"passwordAuthentication": settings.passwordAuthentication,
}
class SshConfigInput(BaseModel):
enable: Optional[bool] = None
passwordAuthentication: Optional[bool] = None
@router.put("/ssh")
async def rest_set_ssh(ssh_config: SshConfigInput):
"""Set the SSH configuration"""
set_ssh_settings(ssh_config.enable, ssh_config.passwordAuthentication)
return "SSH settings changed"
class SshKeyInput(BaseModel):
public_key: str
@router.put("/ssh/key/send", status_code=201)
async def rest_send_ssh_key(input: SshKeyInput):
"""Send the SSH key"""
try:
create_ssh_key("root", input.public_key)
except KeyAlreadyExists as error:
raise HTTPException(status_code=409, detail="Key already exists") from error
except InvalidPublicKey as error:
raise HTTPException(
status_code=400,
detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
) from error
return {
"status": 0,
"message": "SSH key sent",
}
@router.get("/ssh/keys/{username}")
async def rest_get_ssh_keys(username: str):
"""Get the SSH keys for a user"""
user = get_user_by_username(username)
if user is None:
raise HTTPException(status_code=404, detail="User not found")
return user.ssh_keys
@router.post("/ssh/keys/{username}", status_code=201)
async def rest_add_ssh_key(username: str, input: SshKeyInput):
try:
create_ssh_key(username, input.public_key)
except KeyAlreadyExists as error:
raise HTTPException(status_code=409, detail="Key already exists") from error
except InvalidPublicKey as error:
raise HTTPException(
status_code=400,
detail="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
) from error
except UserNotFound as error:
raise HTTPException(status_code=404, detail="User not found") from error
return {
"message": "New SSH key successfully written",
}
@router.delete("/ssh/keys/{username}")
async def rest_delete_ssh_key(username: str, input: SshKeyInput):
try:
remove_ssh_key(username, input.public_key)
except KeyNotFound as error:
raise HTTPException(status_code=404, detail="Key not found") from error
except UserNotFound as error:
raise HTTPException(status_code=404, detail="User not found") from error
return {"message": "SSH key deleted"}

View File

@ -0,0 +1,105 @@
from typing import Optional
from fastapi import APIRouter, Body, Depends, HTTPException
from pydantic import BaseModel
from selfprivacy_api.dependencies import get_token_header
import selfprivacy_api.actions.system as system_actions
router = APIRouter(
prefix="/system",
tags=["system"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
@router.get("/configuration/timezone")
async def get_timezone():
"""Get the timezone of the server"""
return system_actions.get_timezone()
class ChangeTimezoneRequestBody(BaseModel):
"""Change the timezone of the server"""
timezone: str
@router.put("/configuration/timezone")
async def change_timezone(timezone: ChangeTimezoneRequestBody):
"""Change the timezone of the server"""
try:
system_actions.change_timezone(timezone.timezone)
except system_actions.InvalidTimezone as e:
raise HTTPException(status_code=400, detail=str(e))
return {"timezone": timezone.timezone}
@router.get("/configuration/autoUpgrade")
async def get_auto_upgrade_settings():
"""Get the auto-upgrade settings"""
return system_actions.get_auto_upgrade_settings().dict()
class AutoUpgradeSettings(BaseModel):
"""Settings for auto-upgrading user data"""
enable: Optional[bool] = None
allowReboot: Optional[bool] = None
@router.put("/configuration/autoUpgrade")
async def set_auto_upgrade_settings(settings: AutoUpgradeSettings):
"""Set the auto-upgrade settings"""
system_actions.set_auto_upgrade_settings(settings.enable, settings.allowReboot)
return "Auto-upgrade settings changed"
@router.get("/configuration/apply")
async def apply_configuration():
"""Apply the configuration"""
return_code = system_actions.rebuild_system()
return return_code
@router.get("/configuration/rollback")
async def rollback_configuration():
"""Rollback the configuration"""
return_code = system_actions.rollback_system()
return return_code
@router.get("/configuration/upgrade")
async def upgrade_configuration():
"""Upgrade the configuration"""
return_code = system_actions.upgrade_system()
return return_code
@router.get("/reboot")
async def reboot_system():
"""Reboot the system"""
system_actions.reboot_system()
return "System reboot has started"
@router.get("/version")
async def get_system_version():
"""Get the system version"""
return {"system_version": system_actions.get_system_version()}
@router.get("/pythonVersion")
async def get_python_version():
"""Get the Python version"""
return system_actions.get_python_version()
@router.get("/configuration/pull")
async def pull_configuration():
"""Pull the configuration"""
action_result = system_actions.pull_repository_changes()
if action_result.status == 0:
return action_result.dict()
raise HTTPException(status_code=500, detail=action_result.dict())

View File

@ -0,0 +1,62 @@
"""Users management module"""
from typing import Optional
from fastapi import APIRouter, Body, Depends, HTTPException
from pydantic import BaseModel
import selfprivacy_api.actions.users as users_actions
from selfprivacy_api.dependencies import get_token_header
router = APIRouter(
prefix="/users",
tags=["users"],
dependencies=[Depends(get_token_header)],
responses={404: {"description": "Not found"}},
)
@router.get("")
async def get_users(withMainUser: bool = False):
"""Get the list of users"""
users: list[users_actions.UserDataUser] = users_actions.get_users(
exclude_primary=not withMainUser, exclude_root=True
)
return [user.username for user in users]
class UserInput(BaseModel):
"""User input"""
username: str
password: str
@router.post("", status_code=201)
async def create_user(user: UserInput):
try:
users_actions.create_user(user.username, user.password)
except users_actions.PasswordIsEmpty as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UsernameForbidden as e:
raise HTTPException(status_code=409, detail=str(e))
except users_actions.UsernameNotAlphanumeric as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UsernameTooLong as e:
raise HTTPException(status_code=400, detail=str(e))
except users_actions.UserAlreadyExists as e:
raise HTTPException(status_code=409, detail=str(e))
return {"result": 0, "username": user.username}
@router.delete("/{username}")
async def delete_user(username: str):
try:
users_actions.delete_user(username)
except users_actions.UserNotFound as e:
raise HTTPException(status_code=404, detail=str(e))
except users_actions.UserIsProtected as e:
raise HTTPException(status_code=400, detail=str(e))
return {"result": 0, "username": username}

View File

@ -0,0 +1,251 @@
"""Restic singleton controller."""
from datetime import datetime
import json
import subprocess
import os
from threading import Lock
from enum import Enum
import portalocker
from selfprivacy_api.utils import ReadUserData
class ResticStates(Enum):
"""Restic states enum."""
NO_KEY = 0
NOT_INITIALIZED = 1
INITIALIZED = 2
BACKING_UP = 3
RESTORING = 4
ERROR = 5
INITIALIZING = 6
class ResticController:
"""
    States in which the restic_controller may be:
- no backblaze key
- backblaze key is provided, but repository is not initialized
- backblaze key is provided, repository is initialized
- fetching list of snapshots
- creating snapshot, current progress can be retrieved
- recovering from snapshot
Any ongoing operation acquires the lock
Current state can be fetched with get_state()
"""
_instance = None
_lock = Lock()
_initialized = False
def __new__(cls):
if not cls._instance:
with cls._lock:
cls._instance = super(ResticController, cls).__new__(cls)
return cls._instance
def __init__(self):
if self._initialized:
return
self.state = ResticStates.NO_KEY
self.lock = False
self.progress = 0
self._backblaze_account = None
self._backblaze_key = None
self._repository_name = None
self.snapshot_list = []
self.error_message = None
self._initialized = True
self.load_configuration()
self.write_rclone_config()
self.load_snapshots()
def load_configuration(self):
"""Load current configuration from user data to singleton."""
with ReadUserData() as user_data:
self._backblaze_account = user_data["backblaze"]["accountId"]
self._backblaze_key = user_data["backblaze"]["accountKey"]
self._repository_name = user_data["backblaze"]["bucket"]
if self._backblaze_account and self._backblaze_key and self._repository_name:
self.state = ResticStates.INITIALIZING
else:
self.state = ResticStates.NO_KEY
def write_rclone_config(self):
"""
Open /root/.config/rclone/rclone.conf with portalocker
and write configuration in the following format:
[backblaze]
type = b2
account = {self.backblaze_account}
key = {self.backblaze_key}
"""
with portalocker.Lock(
"/root/.config/rclone/rclone.conf", "w", timeout=None
) as rclone_config:
rclone_config.write(
f"[backblaze]\n"
f"type = b2\n"
f"account = {self._backblaze_account}\n"
f"key = {self._backblaze_key}\n"
)
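The stanza written by `write_rclone_config` renders to a standard rclone INI remote; factored as a pure function for illustration (the function name is ours):

```python
def render_rclone_config(account: str, key: str) -> str:
    """Render the [backblaze] remote exactly as written above."""
    return (
        "[backblaze]\n"
        "type = b2\n"
        f"account = {account}\n"
        f"key = {key}\n"
    )
```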
def load_snapshots(self):
"""
Load list of snapshots from repository
"""
backup_listing_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
"snapshots",
"--json",
]
if self.state in (ResticStates.BACKING_UP, ResticStates.RESTORING):
return
with subprocess.Popen(
backup_listing_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_listing_process_descriptor:
snapshots_list = backup_listing_process_descriptor.communicate()[0].decode(
"utf-8"
)
            try:
                starting_index = snapshots_list.find("[")
                self.snapshot_list = json.loads(snapshots_list[starting_index:])
self.state = ResticStates.INITIALIZED
print(snapshots_list)
except ValueError:
if "Is there a repository at the following location?" in snapshots_list:
self.state = ResticStates.NOT_INITIALIZED
return
self.state = ResticStates.ERROR
self.error_message = snapshots_list
return
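restic run over the rclone stdio transport can print warnings before the JSON document, which is why `load_snapshots` scans for the first `[`. The same extraction as a pure, testable function (a sketch, not the controller's API):

```python
import json

def extract_snapshot_list(raw_output: str):
    """Return the parsed snapshot array, or None when no JSON array is
    found (e.g. when restic asks whether the repository exists)."""
    start = raw_output.find("[")
    if start == -1:
        return None
    try:
        return json.loads(raw_output[start:])
    except ValueError:
        return None
```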
def initialize_repository(self):
"""
Initialize repository with restic
"""
initialize_repository_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
"init",
]
with subprocess.Popen(
initialize_repository_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as initialize_repository_process_descriptor:
msg = initialize_repository_process_descriptor.communicate()[0].decode(
"utf-8"
)
if initialize_repository_process_descriptor.returncode == 0:
self.state = ResticStates.INITIALIZED
else:
self.state = ResticStates.ERROR
self.error_message = msg
def start_backup(self):
"""
Start backup with restic
"""
backup_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
"--verbose",
"--json",
"backup",
"/var",
]
with open("/var/backup.log", "w", encoding="utf-8") as log_file:
subprocess.Popen(
backup_command,
shell=False,
stdout=log_file,
stderr=subprocess.STDOUT,
)
self.state = ResticStates.BACKING_UP
self.progress = 0
def check_progress(self):
"""
Check progress of ongoing backup operation
"""
backup_status_check_command = ["tail", "-1", "/var/backup.log"]
if self.state in (ResticStates.NO_KEY, ResticStates.NOT_INITIALIZED):
return
# If the log file does not exist, no backup has started yet
if os.path.exists("/var/backup.log") is False:
self.state = ResticStates.INITIALIZED
return
with subprocess.Popen(
backup_status_check_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
) as backup_status_check_process_descriptor:
backup_process_status = (
backup_status_check_process_descriptor.communicate()[0].decode("utf-8")
)
try:
status = json.loads(backup_process_status)
except ValueError:
print(backup_process_status)
self.error_message = backup_process_status
return
if status["message_type"] == "status":
self.progress = status["percent_done"]
self.state = ResticStates.BACKING_UP
elif status["message_type"] == "summary":
self.state = ResticStates.INITIALIZED
self.progress = 0
self.snapshot_list.append(
{
"short_id": status["snapshot_id"],
# Local time in RFC 3339 format, e.g. 2021-12-02T00:02:51.086452+03:00
"time": datetime.now().astimezone().isoformat(),
}
)
def restore_from_backup(self, snapshot_id):
"""
Restore from backup with restic
"""
backup_restoration_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
"restore",
snapshot_id,
"--target",
"/",
]
self.state = ResticStates.RESTORING
subprocess.run(backup_restoration_command, shell=False)
self.state = ResticStates.INITIALIZED
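restic (run through rclone's `serve restic --stdio`) can print log lines before the JSON it emits, which is why `load_snapshots` scans for the first `[` before parsing. A self-contained sketch of that parsing step (`parse_snapshots` is our name for it, not part of the API):

```python
import json


def parse_snapshots(output: str) -> list:
    """Extract the JSON snapshot array from restic's mixed output.

    restic may print log lines before the JSON, so parsing starts
    at the first '[' (the same approach load_snapshots uses).
    """
    starting_index = output.find("[")
    if starting_index == -1:
        raise ValueError("no JSON array found in restic output")
    return json.loads(output[starting_index:])
```

If the repository does not exist, restic prints a plain-text error with no `[`, which is what the `ValueError` branch in `load_snapshots` catches.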

@@ -0,0 +1,70 @@
"""Tasks for the restic controller."""
from huey import crontab
from selfprivacy_api.utils.huey import huey
from . import ResticController, ResticStates
@huey.task()
def init_restic():
controller = ResticController()
if controller.state == ResticStates.NOT_INITIALIZED:
initialize_repository()
@huey.task()
def update_keys_from_userdata():
controller = ResticController()
controller.load_configuration()
controller.write_rclone_config()
initialize_repository()
# Check every morning at 5:00 AM
@huey.periodic_task(crontab(hour="5", minute="0"))
def cron_load_snapshots():
controller = ResticController()
controller.load_snapshots()
@huey.task()
def load_snapshots():
controller = ResticController()
controller.load_snapshots()
if controller.state == ResticStates.NOT_INITIALIZED:
load_snapshots.schedule(delay=120)
@huey.task()
def initialize_repository():
controller = ResticController()
if controller.state is not ResticStates.NO_KEY:
controller.initialize_repository()
load_snapshots()
@huey.task()
def fetch_backup_status():
controller = ResticController()
if controller.state is ResticStates.BACKING_UP:
controller.check_progress()
if controller.state is ResticStates.BACKING_UP:
fetch_backup_status.schedule(delay=2)
else:
load_snapshots.schedule(delay=240)
@huey.task()
def start_backup():
controller = ResticController()
if controller.state is ResticStates.NOT_INITIALIZED:
resp = initialize_repository()
resp.get()
controller.start_backup()
fetch_backup_status.schedule(delay=3)
@huey.task()
def restore_from_backup(snapshot):
controller = ResticController()
controller.restore_from_backup(snapshot)

@@ -0,0 +1,67 @@
"""Services module."""
import typing
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.jitsi import Jitsi
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.services.ocserv import Ocserv
from selfprivacy_api.services.service import Service, ServiceDnsRecord
import selfprivacy_api.utils.network as network_utils
services: list[Service] = [
Bitwarden(),
Gitea(),
MailServer(),
Nextcloud(),
Pleroma(),
Ocserv(),
Jitsi(),
]
def get_all_services() -> list[Service]:
return services
def get_service_by_id(service_id: str) -> typing.Optional[Service]:
for service in services:
if service.get_id() == service_id:
return service
return None
def get_enabled_services() -> list[Service]:
return [service for service in services if service.is_enabled()]
def get_disabled_services() -> list[Service]:
return [service for service in services if not service.is_enabled()]
def get_services_by_location(location: str) -> list[Service]:
return [service for service in services if service.get_location() == location]
def get_all_required_dns_records() -> list[ServiceDnsRecord]:
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
dns_records: list[ServiceDnsRecord] = [
ServiceDnsRecord(
type="A",
name="api",
content=ip4,
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="api",
content=ip6,
ttl=3600,
),
]
for service in get_enabled_services():
dns_records += service.get_dns_records()
return dns_records
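Every deployment gets A/AAAA records for the `api` subdomain before per-service records are appended. A minimal stand-alone sketch of the base-record construction (the dataclass is a stand-in for the API's pydantic `ServiceDnsRecord`):

```python
from dataclasses import dataclass


@dataclass
class ServiceDnsRecord:
    # Stand-in for the pydantic model used by the API
    type: str
    name: str
    content: str
    ttl: int


def base_api_records(ip4: str, ip6: str) -> list[ServiceDnsRecord]:
    """The two records every deployment needs for the API endpoint."""
    return [
        ServiceDnsRecord(type="A", name="api", content=ip4, ttl=3600),
        ServiceDnsRecord(type="AAAA", name="api", content=ip6, ttl=3600),
    ]
```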

@@ -0,0 +1,174 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.bitwarden.icon import BITWARDEN_ICON
class Bitwarden(Service):
"""Class representing Bitwarden service."""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "bitwarden"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Bitwarden"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Bitwarden is a password manager."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://password.{domain}"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("bitwarden", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Bitwarden status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means service is in error stat.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("vaultwarden.service")
@staticmethod
def enable():
"""Enable Bitwarden service."""
with WriteUserData() as user_data:
if "bitwarden" not in user_data:
user_data["bitwarden"] = {}
user_data["bitwarden"]["enable"] = True
@staticmethod
def disable():
"""Disable Bitwarden service."""
with WriteUserData() as user_data:
if "bitwarden" not in user_data:
user_data["bitwarden"] = {}
user_data["bitwarden"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "vaultwarden.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "vaultwarden.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "vaultwarden.service"])
@staticmethod
def get_configuration():
return {}
@classmethod
def set_configuration(cls, config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/bitwarden")
storage_usage += get_storage_usage("/var/lib/bitwarden_rs")
return storage_usage
@staticmethod
def get_location() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("bitwarden", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
"""Return list of DNS records for Bitwarden service."""
return [
ServiceDnsRecord(
type="A",
name="password",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="password",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
job = Jobs.get_instance().add(
type_id="services.bitwarden.move",
name="Move Bitwarden",
description=f"Moving Bitwarden data to {volume.name}",
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="bitwarden",
bind_location="/var/lib/bitwarden",
group="vaultwarden",
owner="vaultwarden",
),
FolderMoveNames(
name="bitwarden_rs",
bind_location="/var/lib/bitwarden_rs",
group="vaultwarden",
owner="vaultwarden",
),
],
"bitwarden",
)
return job

@@ -0,0 +1,3 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>

@@ -0,0 +1,5 @@
BITWARDEN_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>
"""

@@ -0,0 +1,236 @@
"""Generic handler for moving services"""
import subprocess
import time
import pathlib
import shutil
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.service import Service, ServiceStatus
class FolderMoveNames(BaseModel):
name: str
bind_location: str
owner: str
group: str
@huey.task()
def move_service(
service: Service,
volume: BlockDevice,
job: Job,
folder_names: list[FolderMoveNames],
userdata_location: str,
):
"""Move a service to another volume."""
job = Jobs.get_instance().update(
job=job,
status_text="Performing pre-move checks...",
status=JobStatus.RUNNING,
)
service_name = service.get_display_name()
with ReadUserData() as user_data:
if not user_data.get("useBinds", False):
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error="Server is not using binds.",
)
return
# Check if we are on the same volume
old_volume = service.get_location()
if old_volume == volume.name:
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is already on this volume.",
)
return
# Check if there is enough space on the new volume
if int(volume.fsavail) < service.get_storage_usage():
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error="Not enough space on the new volume.",
)
return
# Make sure the volume is mounted
if volume.name != "sda1" and f"/volumes/{volume.name}" not in volume.mountpoints:
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error="Volume is not mounted.",
)
return
# Make sure the source directories exist and have the correct owner
for folder in folder_names:
if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").exists():
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is not found.",
)
return
if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").is_dir():
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} is not a directory.",
)
return
if (
not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").owner()
== folder.owner
):
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} owner is not {folder.owner}.",
)
return
# Stop service
Jobs.get_instance().update(
job=job,
status=JobStatus.RUNNING,
status_text=f"Stopping {service_name}...",
progress=5,
)
service.stop()
# Wait for the service to stop, check every second
# If it does not stop in 30 seconds, abort
for _ in range(30):
if service.get_status() not in (
ServiceStatus.ACTIVATING,
ServiceStatus.DEACTIVATING,
):
break
time.sleep(1)
else:
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error=f"{service_name} did not stop in 30 seconds.",
)
return
# Unmount old volume
Jobs.get_instance().update(
job=job,
status_text="Unmounting old folder...",
status=JobStatus.RUNNING,
progress=10,
)
for folder in folder_names:
try:
subprocess.run(
["umount", folder.bind_location],
check=True,
)
except subprocess.CalledProcessError:
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error="Unable to unmount old volume.",
)
return
# Move data to new volume and set correct permissions
Jobs.get_instance().update(
job=job,
status_text="Moving data to new volume...",
status=JobStatus.RUNNING,
progress=20,
)
current_progress = 20
folder_percentage = 50 // len(folder_names)
for folder in folder_names:
shutil.move(
f"/volumes/{old_volume}/{folder.name}",
f"/volumes/{volume.name}/{folder.name}",
)
current_progress += folder_percentage
Jobs.get_instance().update(
job=job,
status_text="Moving data to new volume...",
status=JobStatus.RUNNING,
progress=current_progress,
)
Jobs.get_instance().update(
job=job,
status_text=f"Making sure {service_name} owns its files...",
status=JobStatus.RUNNING,
progress=70,
)
for folder in folder_names:
try:
subprocess.run(
[
"chown",
"-R",
f"{folder.owner}:{folder.group}",
f"/volumes/{volume.name}/{folder.name}",
],
check=True,
)
except subprocess.CalledProcessError as error:
print(error.output)
Jobs.get_instance().update(
job=job,
status=JobStatus.RUNNING,
error=f"Unable to set ownership of new volume. {service_name} may not be able to access its files. Continuing anyway.",
)
# Mount new volume
Jobs.get_instance().update(
job=job,
status_text=f"Mounting {service_name} data...",
status=JobStatus.RUNNING,
progress=90,
)
for folder in folder_names:
try:
subprocess.run(
[
"mount",
"--bind",
f"/volumes/{volume.name}/{folder.name}",
folder.bind_location,
],
check=True,
)
except subprocess.CalledProcessError as error:
print(error.output)
Jobs.get_instance().update(
job=job,
status=JobStatus.ERROR,
error="Unable to mount new volume.",
)
return
# Update userdata
Jobs.get_instance().update(
job=job,
status_text="Finishing move...",
status=JobStatus.RUNNING,
progress=95,
)
with WriteUserData() as user_data:
if userdata_location not in user_data:
user_data[userdata_location] = {}
user_data[userdata_location]["location"] = volume.name
# Start service
service.start()
Jobs.get_instance().update(
job=job,
status=JobStatus.FINISHED,
result=f"{service_name} moved successfully.",
status_text=f"Starting {service_name}...",
progress=100,
)
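The copy loop reserves the 20-70% band of the progress bar and splits it evenly across folders. The intended arithmetic as a tiny sketch (`progress_steps` is a hypothetical helper, not in the codebase):

```python
def progress_steps(start: int, span: int, n_folders: int) -> list[int]:
    """Progress values to report after each folder finishes moving."""
    step = span // n_folders  # integer split, as in move_service
    progress = start
    steps = []
    for _ in range(n_folders):
        progress += step
        steps.append(progress)
    return steps
```

With integer division the last step can fall slightly short of `start + span` (three folders end at 68%, not 70%), which is harmless because the next job update pins progress to 70.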

@@ -0,0 +1,21 @@
"""Generic size counter using pathlib"""
import pathlib
def get_storage_usage(path: str) -> int:
"""
Calculate the real storage usage of path and all subdirectories.
Calculate using pathlib.
Do not follow symlinks.
"""
storage_usage = 0
for iter_path in pathlib.Path(path).rglob("*"):
if iter_path.is_dir():
continue
try:
storage_usage += iter_path.stat().st_size
except FileNotFoundError:
pass
except Exception as error:
print(error)
return storage_usage
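A quick way to sanity-check the counter is against a throwaway directory tree; the function is reproduced below so the sketch stays self-contained:

```python
import pathlib
import tempfile


def get_storage_usage(path: str) -> int:
    """Sum the sizes of all regular files under path (mirrors the function above)."""
    storage_usage = 0
    for iter_path in pathlib.Path(path).rglob("*"):
        if iter_path.is_dir():
            continue
        try:
            storage_usage += iter_path.stat().st_size
        except FileNotFoundError:
            pass  # file vanished while we were walking; skip it
    return storage_usage


with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "sub").mkdir()
    (root / "sub" / "a.bin").write_bytes(b"x" * 1024)
    (root / "b.bin").write_bytes(b"x" * 512)
    assert get_storage_usage(tmp) == 1536  # 1024 + 512; directories add nothing
```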

@@ -0,0 +1,60 @@
"""Generic service status fetcher using systemctl"""
import subprocess
from selfprivacy_api.services.service import ServiceStatus
def get_service_status(service: str) -> ServiceStatus:
"""
Return service status from systemd.
Use systemctl show to get the status of a service.
Get ActiveState from the output.
"""
service_status = subprocess.check_output(["systemctl", "show", service])
if b"LoadState=not-found" in service_status:
return ServiceStatus.OFF
if b"ActiveState=active" in service_status:
return ServiceStatus.ACTIVE
if b"ActiveState=inactive" in service_status:
return ServiceStatus.INACTIVE
if b"ActiveState=activating" in service_status:
return ServiceStatus.ACTIVATING
if b"ActiveState=deactivating" in service_status:
return ServiceStatus.DEACTIVATING
if b"ActiveState=failed" in service_status:
return ServiceStatus.FAILED
if b"ActiveState=reloading" in service_status:
return ServiceStatus.RELOADING
return ServiceStatus.OFF
def get_service_status_from_several_units(services: list[str]) -> ServiceStatus:
"""
Fetch all service statuses for all services and return the worst status.
Statuses from worst to best:
- OFF
- FAILED
- RELOADING
- ACTIVATING
- DEACTIVATING
- INACTIVE
- ACTIVE
"""
service_statuses = []
for service in services:
service_statuses.append(get_service_status(service))
if ServiceStatus.OFF in service_statuses:
return ServiceStatus.OFF
if ServiceStatus.FAILED in service_statuses:
return ServiceStatus.FAILED
if ServiceStatus.RELOADING in service_statuses:
return ServiceStatus.RELOADING
if ServiceStatus.ACTIVATING in service_statuses:
return ServiceStatus.ACTIVATING
if ServiceStatus.DEACTIVATING in service_statuses:
return ServiceStatus.DEACTIVATING
if ServiceStatus.INACTIVE in service_statuses:
return ServiceStatus.INACTIVE
if ServiceStatus.ACTIVE in service_statuses:
return ServiceStatus.ACTIVE
return ServiceStatus.OFF
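The chain of `if` checks encodes a worst-first priority, so the same rule can be read off an ordered list. A self-contained sketch (the enum is redeclared locally with the names this file uses; the real one lives in `selfprivacy_api.services.service`):

```python
from enum import Enum


class ServiceStatus(Enum):
    # Redeclared for the sketch; mirrors the names used above
    OFF = "OFF"
    FAILED = "FAILED"
    RELOADING = "RELOADING"
    ACTIVATING = "ACTIVATING"
    DEACTIVATING = "DEACTIVATING"
    INACTIVE = "INACTIVE"
    ACTIVE = "ACTIVE"


# Worst to best, exactly as the docstring orders them
_PRIORITY = [
    ServiceStatus.OFF,
    ServiceStatus.FAILED,
    ServiceStatus.RELOADING,
    ServiceStatus.ACTIVATING,
    ServiceStatus.DEACTIVATING,
    ServiceStatus.INACTIVE,
    ServiceStatus.ACTIVE,
]


def worst_status(statuses: list[ServiceStatus]) -> ServiceStatus:
    """Return the worst status present; OFF for an empty list."""
    for status in _PRIORITY:
        if status in statuses:
            return status
    return ServiceStatus.OFF
```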

@@ -0,0 +1,165 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.gitea.icon import GITEA_ICON
class Gitea(Service):
"""Class representing Gitea service"""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "gitea"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Gitea"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Gitea is a Git forge."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(GITEA_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://git.{domain}"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("gitea", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Gitea status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means service is in error stat.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("gitea.service")
@staticmethod
def enable():
"""Enable Gitea service."""
with WriteUserData() as user_data:
if "gitea" not in user_data:
user_data["gitea"] = {}
user_data["gitea"]["enable"] = True
@staticmethod
def disable():
"""Disable Gitea service."""
with WriteUserData() as user_data:
if "gitea" not in user_data:
user_data["gitea"] = {}
user_data["gitea"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "gitea.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "gitea.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "gitea.service"])
@staticmethod
def get_configuration():
return {}
@classmethod
def set_configuration(cls, config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/gitea")
return storage_usage
@staticmethod
def get_location() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("gitea", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
return [
ServiceDnsRecord(
type="A",
name="git",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="git",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
job = Jobs.get_instance().add(
type_id="services.gitea.move",
name="Move Gitea",
description=f"Moving Gitea data to {volume.name}",
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="gitea",
bind_location="/var/lib/gitea",
group="gitea",
owner="gitea",
),
],
"gitea",
)
return job

@@ -0,0 +1,3 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.60007 10.5899L8.38007 4.79995L10.0701 6.49995C9.83007 7.34995 10.2201 8.27995 11.0001 8.72995V14.2699C10.4001 14.6099 10.0001 15.2599 10.0001 15.9999C10.0001 16.5304 10.2108 17.0391 10.5859 17.4142C10.9609 17.7892 11.4696 17.9999 12.0001 17.9999C12.5305 17.9999 13.0392 17.7892 13.4143 17.4142C13.7894 17.0391 14.0001 16.5304 14.0001 15.9999C14.0001 15.2599 13.6001 14.6099 13.0001 14.2699V9.40995L15.0701 11.4999C15.0001 11.6499 15.0001 11.8199 15.0001 11.9999C15.0001 12.5304 15.2108 13.0391 15.5859 13.4142C15.9609 13.7892 16.4696 13.9999 17.0001 13.9999C17.5305 13.9999 18.0392 13.7892 18.4143 13.4142C18.7894 13.0391 19.0001 12.5304 19.0001 11.9999C19.0001 11.4695 18.7894 10.9608 18.4143 10.5857C18.0392 10.2107 17.5305 9.99995 17.0001 9.99995C16.8201 9.99995 16.6501 9.99995 16.5001 10.0699L13.9301 7.49995C14.1901 6.56995 13.7101 5.54995 12.7801 5.15995C12.3501 4.99995 11.9001 4.95995 11.5001 5.06995L9.80007 3.37995L10.5901 2.59995C11.3701 1.80995 12.6301 1.80995 13.4101 2.59995L21.4001 10.5899C22.1901 11.3699 22.1901 12.6299 21.4001 13.4099L13.4101 21.3999C12.6301 22.1899 11.3701 22.1899 10.5901 21.3999L2.60007 13.4099C1.81007 12.6299 1.81007 11.3699 2.60007 10.5899Z" fill="black"/>
</svg>

@@ -0,0 +1,5 @@
GITEA_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.60007 10.5899L8.38007 4.79995L10.0701 6.49995C9.83007 7.34995 10.2201 8.27995 11.0001 8.72995V14.2699C10.4001 14.6099 10.0001 15.2599 10.0001 15.9999C10.0001 16.5304 10.2108 17.0391 10.5859 17.4142C10.9609 17.7892 11.4696 17.9999 12.0001 17.9999C12.5305 17.9999 13.0392 17.7892 13.4143 17.4142C13.7894 17.0391 14.0001 16.5304 14.0001 15.9999C14.0001 15.2599 13.6001 14.6099 13.0001 14.2699V9.40995L15.0701 11.4999C15.0001 11.6499 15.0001 11.8199 15.0001 11.9999C15.0001 12.5304 15.2108 13.0391 15.5859 13.4142C15.9609 13.7892 16.4696 13.9999 17.0001 13.9999C17.5305 13.9999 18.0392 13.7892 18.4143 13.4142C18.7894 13.0391 19.0001 12.5304 19.0001 11.9999C19.0001 11.4695 18.7894 10.9608 18.4143 10.5857C18.0392 10.2107 17.5305 9.99995 17.0001 9.99995C16.8201 9.99995 16.6501 9.99995 16.5001 10.0699L13.9301 7.49995C14.1901 6.56995 13.7101 5.54995 12.7801 5.15995C12.3501 4.99995 11.9001 4.95995 11.5001 5.06995L9.80007 3.37995L10.5901 2.59995C11.3701 1.80995 12.6301 1.80995 13.4101 2.59995L21.4001 10.5899C22.1901 11.3699 22.1901 12.6299 21.4001 13.4099L13.4101 21.3999C12.6301 22.1899 11.3701 22.1899 10.5901 21.3999L2.60007 13.4099C1.81007 12.6299 1.81007 11.3699 2.60007 10.5899Z" fill="black"/>
</svg>
"""

@@ -0,0 +1,142 @@
"""Class representing Jitsi service"""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import (
get_service_status,
get_service_status_from_several_units,
)
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.jitsi.icon import JITSI_ICON
class Jitsi(Service):
"""Class representing Jitsi service"""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "jitsi"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Jitsi"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Jitsi is a free and open-source video conferencing solution."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://meet.{domain}"
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("jitsi", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status_from_several_units(
["jitsi-videobridge.service", "jicofo.service"]
)
@staticmethod
def enable():
"""Enable Jitsi service."""
with WriteUserData() as user_data:
if "jitsi" not in user_data:
user_data["jitsi"] = {}
user_data["jitsi"]["enable"] = True
@staticmethod
def disable():
"""Disable Gitea service."""
with WriteUserData() as user_data:
if "jitsi" not in user_data:
user_data["jitsi"] = {}
user_data["jitsi"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "stop", "jicofo.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "start", "jicofo.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "jitsi-videobridge.service"])
subprocess.run(["systemctl", "restart", "jicofo.service"])
@staticmethod
def get_configuration():
return {}
@classmethod
def set_configuration(cls, config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/jitsi-meet")
return storage_usage
@staticmethod
def get_location() -> str:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
return [
ServiceDnsRecord(
type="A",
name="meet",
content=ip4,
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="meet",
content=ip6,
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
raise NotImplementedError("jitsi service is not movable")

@@ -0,0 +1,5 @@
JITSI_ICON = """
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M26.6665 2.66663H5.33317C3.8665 2.66663 2.67984 3.86663 2.67984 5.33329L2.6665 29.3333L7.99984 24H26.6665C28.1332 24 29.3332 22.8 29.3332 21.3333V5.33329C29.3332 3.86663 28.1332 2.66663 26.6665 2.66663ZM26.6665 21.3333H6.89317L5.33317 22.8933V5.33329H26.6665V21.3333ZM18.6665 14.1333L22.6665 17.3333V9.33329L18.6665 12.5333V9.33329H9.33317V17.3333H18.6665V14.1333Z" fill="black"/>
</svg>
"""

@@ -0,0 +1,179 @@
"""Class representing Dovecot and Postfix services"""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import (
get_service_status,
get_service_status_from_several_units,
)
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
import selfprivacy_api.utils as utils
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.mailserver.icon import MAILSERVER_ICON
class MailServer(Service):
"""Class representing mail service"""
@staticmethod
def get_id() -> str:
return "mailserver"
@staticmethod
def get_display_name() -> str:
return "Mail Server"
@staticmethod
def get_description() -> str:
return "E-Mail for company and family."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(MAILSERVER_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
return None
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return True
@staticmethod
def is_enabled() -> bool:
return True
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status_from_several_units(
["dovecot2.service", "postfix.service"]
)
@staticmethod
def enable():
raise NotImplementedError("enable is not implemented for MailServer")
@staticmethod
def disable():
raise NotImplementedError("disable is not implemented for MailServer")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "dovecot2.service"])
subprocess.run(["systemctl", "stop", "postfix.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "dovecot2.service"])
subprocess.run(["systemctl", "start", "postfix.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "dovecot2.service"])
subprocess.run(["systemctl", "restart", "postfix.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
# Zero-argument super() raises inside a @staticmethod; no configurable options yet.
pass
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
return get_storage_usage("/var/vmail")
@staticmethod
def get_location() -> str:
with utils.ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("mailserver", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
domain = utils.get_domain()
dkim_record = utils.get_dkim_key(domain)
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
if dkim_record is None:
return []
return [
ServiceDnsRecord(
type="A",
name=domain,
content=ip4,
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name=domain,
content=ip6,
ttl=3600,
),
ServiceDnsRecord(
type="MX", name=domain, content=domain, ttl=3600, priority=10
),
ServiceDnsRecord(
type="TXT", name="_dmarc", content="v=DMARC1; p=none", ttl=18000
),
ServiceDnsRecord(
type="TXT",
name=domain,
content=f"v=spf1 a mx ip4:{ip4} -all",
ttl=18000,
),
ServiceDnsRecord(
type="TXT", name="selector._domainkey", content=dkim_record, ttl=18000
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
job = Jobs.get_instance().add(
type_id="services.mailserver.move",
name="Move Mail Server",
description=f"Moving mailserver data to {volume.name}",
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="vmail",
bind_location="/var/vmail",
group="virtualMail",
owner="virtualMail",
),
FolderMoveNames(
name="sieve",
bind_location="/var/sieve",
group="virtualMail",
owner="virtualMail",
),
],
"mailserver",
)
return job
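The MX, DMARC, and SPF records above follow standard DNS text formats. A self-contained sketch of how those strings are assembled (`DnsRecord` and `build_mail_records` are illustrative stand-ins for `ServiceDnsRecord` and the method above, with the A/AAAA/DKIM records omitted):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DnsRecord:
    """Minimal stand-in for ServiceDnsRecord."""
    type: str
    name: str
    content: str
    ttl: int
    priority: Optional[int] = None


def build_mail_records(domain: str, ip4: str) -> List[DnsRecord]:
    """Assemble the textual mail records the service publishes."""
    return [
        # MX points mail for the domain at the server itself
        DnsRecord("MX", domain, domain, 3600, priority=10),
        # DMARC policy: monitor only, take no action
        DnsRecord("TXT", "_dmarc", "v=DMARC1; p=none", 18000),
        # SPF: allow the A/MX hosts and the server IP, reject everything else
        DnsRecord("TXT", domain, f"v=spf1 a mx ip4:{ip4} -all", 18000),
    ]


records = build_mail_records("example.org", "203.0.113.1")
```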


@@ -0,0 +1,5 @@
MAILSERVER_ICON = """
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M13.3333 2.66675H2.66665C1.93331 2.66675 1.33998 3.26675 1.33998 4.00008L1.33331 12.0001C1.33331 12.7334 1.93331 13.3334 2.66665 13.3334H13.3333C14.0666 13.3334 14.6666 12.7334 14.6666 12.0001V4.00008C14.6666 3.26675 14.0666 2.66675 13.3333 2.66675ZM13.3333 12.0001H2.66665V5.33341L7.99998 8.66675L13.3333 5.33341V12.0001ZM7.99998 7.33341L2.66665 4.00008H13.3333L7.99998 7.33341Z" fill="black"/>
</svg>
"""


@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M13.3333 2.66675H2.66665C1.93331 2.66675 1.33998 3.26675 1.33998 4.00008L1.33331 12.0001C1.33331 12.7334 1.93331 13.3334 2.66665 13.3334H13.3333C14.0666 13.3334 14.6666 12.7334 14.6666 12.0001V4.00008C14.6666 3.26675 14.0666 2.66675 13.3333 2.66675ZM13.3333 12.0001H2.66665V5.33341L7.99998 8.66675L13.3333 5.33341V12.0001ZM7.99998 7.33341L2.66665 4.00008H13.3333L7.99998 7.33341Z" fill="#201A19"/>
</svg>


@@ -0,0 +1,171 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.nextcloud.icon import NEXTCLOUD_ICON
class Nextcloud(Service):
"""Class representing Nextcloud service."""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "nextcloud"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Nextcloud"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Nextcloud is a cloud storage service that offers a web interface and a desktop client."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(NEXTCLOUD_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://cloud.{domain}"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("nextcloud", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Nextcloud status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means the service is in an error state.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("phpfpm-nextcloud.service")
@staticmethod
def enable():
"""Enable Nextcloud service."""
with WriteUserData() as user_data:
if "nextcloud" not in user_data:
user_data["nextcloud"] = {}
user_data["nextcloud"]["enable"] = True
@staticmethod
def disable():
"""Disable Nextcloud service."""
with WriteUserData() as user_data:
if "nextcloud" not in user_data:
user_data["nextcloud"] = {}
user_data["nextcloud"]["enable"] = False
@staticmethod
def stop():
"""Stop Nextcloud service."""
subprocess.Popen(["systemctl", "stop", "phpfpm-nextcloud.service"])
@staticmethod
def start():
"""Start Nextcloud service."""
subprocess.Popen(["systemctl", "start", "phpfpm-nextcloud.service"])
@staticmethod
def restart():
"""Restart Nextcloud service."""
subprocess.Popen(["systemctl", "restart", "phpfpm-nextcloud.service"])
@staticmethod
def get_configuration() -> dict:
"""Return Nextcloud configuration."""
return {}
@staticmethod
def set_configuration(config_items):
# Zero-argument super() raises inside a @staticmethod; no configurable options yet.
pass
@staticmethod
def get_logs():
"""Return Nextcloud logs."""
return ""
@staticmethod
def get_storage_usage() -> int:
"""
Calculate the real storage usage of /var/lib/nextcloud and all subdirectories.
Calculate using pathlib.
Do not follow symlinks.
"""
return get_storage_usage("/var/lib/nextcloud")
@staticmethod
def get_location() -> str:
"""Get the name of disk where Nextcloud is installed."""
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("nextcloud", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
return [
ServiceDnsRecord(
type="A",
name="cloud",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="cloud",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
job = Jobs.get_instance().add(
type_id="services.nextcloud.move",
name="Move Nextcloud",
description=f"Moving Nextcloud to volume {volume.name}",
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="nextcloud",
bind_location="/var/lib/nextcloud",
owner="nextcloud",
group="nextcloud",
),
],
"nextcloud",
)
return job


@@ -0,0 +1,12 @@
NEXTCLOUD_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4974)">
<path d="M12.018 6.53699C9.518 6.53699 7.418 8.24899 6.777 10.552C6.217 9.31999 4.984 8.44699 3.552 8.44699C2.61116 8.45146 1.71014 8.82726 1.04495 9.49264C0.379754 10.158 0.00420727 11.0591 0 12C0.00420727 12.9408 0.379754 13.842 1.04495 14.5073C1.71014 15.1727 2.61116 15.5485 3.552 15.553C4.984 15.553 6.216 14.679 6.776 13.447C7.417 15.751 9.518 17.463 12.018 17.463C14.505 17.463 16.594 15.77 17.249 13.486C17.818 14.696 19.032 15.553 20.447 15.553C21.3881 15.549 22.2895 15.1734 22.955 14.508C23.6205 13.8425 23.9961 12.9411 24 12C23.9958 11.059 23.6201 10.1577 22.9547 9.49229C22.2893 8.82688 21.388 8.4512 20.447 8.44699C19.031 8.44699 17.817 9.30499 17.248 10.514C16.594 8.22999 14.505 6.53599 12.018 6.53699ZM12.018 8.62199C13.896 8.62199 15.396 10.122 15.396 12C15.396 13.878 13.896 15.378 12.018 15.378C11.5739 15.38 11.1338 15.2939 10.7231 15.1249C10.3124 14.9558 9.93931 14.707 9.62532 14.393C9.31132 14.0789 9.06267 13.7057 8.89373 13.295C8.72478 12.8842 8.63888 12.4441 8.641 12C8.641 10.122 10.141 8.62199 12.018 8.62199ZM3.552 10.532C4.374 10.532 5.019 11.177 5.019 12C5.019 12.823 4.375 13.467 3.552 13.468C3.35871 13.47 3.16696 13.4334 2.988 13.3603C2.80905 13.2872 2.64648 13.1792 2.50984 13.0424C2.3732 12.9057 2.26524 12.7431 2.19229 12.5641C2.11934 12.3851 2.08286 12.1933 2.085 12C2.085 11.177 2.729 10.533 3.552 10.533V10.532ZM20.447 10.532C21.27 10.532 21.915 11.177 21.915 12C21.915 12.823 21.27 13.468 20.447 13.468C20.2537 13.47 20.062 13.4334 19.883 13.3603C19.704 13.2872 19.5415 13.1792 19.4048 13.0424C19.2682 12.9057 19.1602 12.7431 19.0873 12.5641C19.0143 12.3851 18.9779 12.1933 18.98 12C18.98 11.177 19.624 10.533 20.447 10.533V10.532Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4974">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>
"""


@@ -0,0 +1,10 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4974)">
<path d="M12.018 6.53699C9.518 6.53699 7.418 8.24899 6.777 10.552C6.217 9.31999 4.984 8.44699 3.552 8.44699C2.61116 8.45146 1.71014 8.82726 1.04495 9.49264C0.379754 10.158 0.00420727 11.0591 0 12C0.00420727 12.9408 0.379754 13.842 1.04495 14.5073C1.71014 15.1727 2.61116 15.5485 3.552 15.553C4.984 15.553 6.216 14.679 6.776 13.447C7.417 15.751 9.518 17.463 12.018 17.463C14.505 17.463 16.594 15.77 17.249 13.486C17.818 14.696 19.032 15.553 20.447 15.553C21.3881 15.549 22.2895 15.1734 22.955 14.508C23.6205 13.8425 23.9961 12.9411 24 12C23.9958 11.059 23.6201 10.1577 22.9547 9.49229C22.2893 8.82688 21.388 8.4512 20.447 8.44699C19.031 8.44699 17.817 9.30499 17.248 10.514C16.594 8.22999 14.505 6.53599 12.018 6.53699ZM12.018 8.62199C13.896 8.62199 15.396 10.122 15.396 12C15.396 13.878 13.896 15.378 12.018 15.378C11.5739 15.38 11.1338 15.2939 10.7231 15.1249C10.3124 14.9558 9.93931 14.707 9.62532 14.393C9.31132 14.0789 9.06267 13.7057 8.89373 13.295C8.72478 12.8842 8.63888 12.4441 8.641 12C8.641 10.122 10.141 8.62199 12.018 8.62199ZM3.552 10.532C4.374 10.532 5.019 11.177 5.019 12C5.019 12.823 4.375 13.467 3.552 13.468C3.35871 13.47 3.16696 13.4334 2.988 13.3603C2.80905 13.2872 2.64648 13.1792 2.50984 13.0424C2.3732 12.9057 2.26524 12.7431 2.19229 12.5641C2.11934 12.3851 2.08286 12.1933 2.085 12C2.085 11.177 2.729 10.533 3.552 10.533V10.532ZM20.447 10.532C21.27 10.532 21.915 11.177 21.915 12C21.915 12.823 21.27 13.468 20.447 13.468C20.2537 13.47 20.062 13.4334 19.883 13.3603C19.704 13.2872 19.5415 13.1792 19.4048 13.0424C19.2682 12.9057 19.1602 12.7431 19.0873 12.5641C19.0143 12.3851 18.9779 12.1933 18.98 12C18.98 11.177 19.624 10.533 20.447 10.533V10.532Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4974">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>


@@ -0,0 +1,121 @@
"""Class representing ocserv service."""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.ocserv.icon import OCSERV_ICON
import selfprivacy_api.utils.network as network_utils
class Ocserv(Service):
"""Class representing ocserv service."""
@staticmethod
def get_id() -> str:
return "ocserv"
@staticmethod
def get_display_name() -> str:
return "OpenConnect VPN"
@staticmethod
def get_description() -> str:
return "OpenConnect VPN to connect your devices and access the internet."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(OCSERV_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
return None
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("ocserv", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status("ocserv.service")
@staticmethod
def enable():
with WriteUserData() as user_data:
if "ocserv" not in user_data:
user_data["ocserv"] = {}
user_data["ocserv"]["enable"] = True
@staticmethod
def disable():
with WriteUserData() as user_data:
if "ocserv" not in user_data:
user_data["ocserv"] = {}
user_data["ocserv"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "ocserv.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "ocserv.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "ocserv.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
# Zero-argument super() raises inside a @staticmethod; no configurable options yet.
pass
@staticmethod
def get_logs():
return ""
@staticmethod
def get_location() -> str:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
return [
ServiceDnsRecord(
type="A",
name="vpn",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="vpn",
content=network_utils.get_ip6(),
ttl=3600,
),
]
@staticmethod
def get_storage_usage() -> int:
return 0
def move_to_volume(self, volume: BlockDevice) -> Job:
raise NotImplementedError("ocserv service is not movable")


@@ -0,0 +1,5 @@
OCSERV_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 1L3 5V11C3 16.55 6.84 21.74 12 23C17.16 21.74 21 16.55 21 11V5L12 1ZM12 11.99H19C18.47 16.11 15.72 19.78 12 20.93V12H5V6.3L12 3.19V11.99Z" fill="black"/>
</svg>
"""


@@ -0,0 +1,3 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 1L3 5V11C3 16.55 6.84 21.74 12 23C17.16 21.74 21 16.55 21 11V5L12 1ZM12 11.99H19C18.47 16.11 15.72 19.78 12 20.93V12H5V6.3L12 3.19V11.99Z" fill="black"/>
</svg>


@@ -0,0 +1,157 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.pleroma.icon import PLEROMA_ICON
class Pleroma(Service):
"""Class representing Pleroma service."""
@staticmethod
def get_id() -> str:
return "pleroma"
@staticmethod
def get_display_name() -> str:
return "Pleroma"
@staticmethod
def get_description() -> str:
return "Pleroma is a microblogging service that offers a web interface and a desktop client."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(PLEROMA_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://social.{domain}"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
return user_data.get("pleroma", {}).get("enable", False)
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status("pleroma.service")
@staticmethod
def enable():
with WriteUserData() as user_data:
if "pleroma" not in user_data:
user_data["pleroma"] = {}
user_data["pleroma"]["enable"] = True
@staticmethod
def disable():
with WriteUserData() as user_data:
if "pleroma" not in user_data:
user_data["pleroma"] = {}
user_data["pleroma"]["enable"] = False
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "pleroma.service"])
subprocess.run(["systemctl", "stop", "postgresql.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "pleroma.service"])
subprocess.run(["systemctl", "start", "postgresql.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "pleroma.service"])
subprocess.run(["systemctl", "restart", "postgresql.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
# Zero-argument super() raises inside a @staticmethod; no configurable options yet.
pass
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/pleroma")
storage_usage += get_storage_usage("/var/lib/postgresql")
return storage_usage
@staticmethod
def get_location() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("pleroma", {}).get("location", "sda1")
else:
return "sda1"
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
return [
ServiceDnsRecord(
type="A",
name="social",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="social",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
job = Jobs.get_instance().add(
type_id="services.pleroma.move",
name="Move Pleroma",
description=f"Moving Pleroma to volume {volume.name}",
)
move_service(
self,
volume,
job,
[
FolderMoveNames(
name="pleroma",
bind_location="/var/lib/pleroma",
owner="pleroma",
group="pleroma",
),
FolderMoveNames(
name="postgresql",
bind_location="/var/lib/postgresql",
owner="postgres",
group="postgres",
),
],
"pleroma",
)
return job


@@ -0,0 +1,12 @@
PLEROMA_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4998)">
<path d="M6.35999 1.07076e-06C6.11451 -0.000261753 5.87139 0.0478616 5.64452 0.14162C5.41766 0.235378 5.21149 0.372932 5.03782 0.546418C4.86415 0.719904 4.72638 0.925919 4.63237 1.15269C4.53837 1.37945 4.48999 1.62252 4.48999 1.868V24H10.454V1.07076e-06H6.35999ZM13.473 1.07076e-06V12H17.641C18.1364 12 18.6115 11.8032 18.9619 11.4529C19.3122 11.1026 19.509 10.6274 19.509 10.132V1.07076e-06H13.473ZM13.473 18.036V24H17.641C18.1364 24 18.6115 23.8032 18.9619 23.4529C19.3122 23.1026 19.509 22.6274 19.509 22.132V18.036H13.473Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4998">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>
"""


@@ -0,0 +1,10 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4998)">
<path d="M6.35999 1.07076e-06C6.11451 -0.000261753 5.87139 0.0478616 5.64452 0.14162C5.41766 0.235378 5.21149 0.372932 5.03782 0.546418C4.86415 0.719904 4.72638 0.925919 4.63237 1.15269C4.53837 1.37945 4.48999 1.62252 4.48999 1.868V24H10.454V1.07076e-06H6.35999ZM13.473 1.07076e-06V12H17.641C18.1364 12 18.6115 11.8032 18.9619 11.4529C19.3122 11.1026 19.509 10.6274 19.509 10.132V1.07076e-06H13.473ZM13.473 18.036V24H17.641C18.1364 24 18.6115 23.8032 18.9619 23.4529C19.3122 23.1026 19.509 22.6274 19.509 22.132V18.036H13.473Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4998">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>


@@ -0,0 +1,140 @@
"""Abstract class for a service running on a server"""
from abc import ABC, abstractmethod
from enum import Enum
import typing
from pydantic import BaseModel
from selfprivacy_api.jobs import Job
from selfprivacy_api.utils.block_devices import BlockDevice
class ServiceStatus(Enum):
"""Enum for service status"""
ACTIVE = "ACTIVE"
RELOADING = "RELOADING"
INACTIVE = "INACTIVE"
FAILED = "FAILED"
ACTIVATING = "ACTIVATING"
DEACTIVATING = "DEACTIVATING"
OFF = "OFF"
class ServiceDnsRecord(BaseModel):
type: str
name: str
content: str
ttl: int
priority: typing.Optional[int] = None
class Service(ABC):
"""
A Service is a piece of software hosted on the server that can be
installed, configured, and used by a user.
"""
@staticmethod
@abstractmethod
def get_id() -> str:
pass
@staticmethod
@abstractmethod
def get_display_name() -> str:
pass
@staticmethod
@abstractmethod
def get_description() -> str:
pass
@staticmethod
@abstractmethod
def get_svg_icon() -> str:
pass
@staticmethod
@abstractmethod
def get_url() -> typing.Optional[str]:
pass
@staticmethod
@abstractmethod
def is_movable() -> bool:
pass
@staticmethod
@abstractmethod
def is_required() -> bool:
pass
@staticmethod
@abstractmethod
def is_enabled() -> bool:
pass
@staticmethod
@abstractmethod
def get_status() -> ServiceStatus:
pass
@staticmethod
@abstractmethod
def enable():
pass
@staticmethod
@abstractmethod
def disable():
pass
@staticmethod
@abstractmethod
def stop():
pass
@staticmethod
@abstractmethod
def start():
pass
@staticmethod
@abstractmethod
def restart():
pass
@staticmethod
@abstractmethod
def get_configuration():
pass
@staticmethod
@abstractmethod
def set_configuration(config_items):
pass
@staticmethod
@abstractmethod
def get_logs():
pass
@staticmethod
@abstractmethod
def get_storage_usage() -> int:
pass
@staticmethod
@abstractmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
pass
@staticmethod
@abstractmethod
def get_location() -> str:
pass
@abstractmethod
def move_to_volume(self, volume: BlockDevice) -> Job:
pass
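A concrete subclass must implement every abstract method before it can be instantiated. A condensed sketch of the contract, with the interface trimmed to two methods for brevity (`MiniService` and `EchoService` are illustrative only, not part of the API):

```python
from abc import ABC, abstractmethod


class MiniService(ABC):
    """Two-method stand-in for the full Service ABC above."""

    @staticmethod
    @abstractmethod
    def get_id() -> str:
        ...

    @staticmethod
    @abstractmethod
    def is_movable() -> bool:
        ...


class EchoService(MiniService):
    """Hypothetical service used only to illustrate the contract."""

    @staticmethod
    def get_id() -> str:
        return "echo"

    @staticmethod
    def is_movable() -> bool:
        return False


# All abstract methods are implemented, so instantiation succeeds.
svc = EchoService()
```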


@@ -0,0 +1,4 @@
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs.test import test_job
from selfprivacy_api.restic_controller.tasks import *
from selfprivacy_api.services.generic_service_mover import move_service


@@ -0,0 +1,189 @@
#!/usr/bin/env python3
"""Various utility functions"""
import datetime
from enum import Enum
import json
import os
import subprocess
import portalocker
USERDATA_FILE = "/etc/nixos/userdata/userdata.json"
TOKENS_FILE = "/etc/nixos/userdata/tokens.json"
JOBS_FILE = "/etc/nixos/userdata/jobs.json"
DOMAIN_FILE = "/var/domain"
class UserDataFiles(Enum):
"""Enum for userdata files"""
USERDATA = 0
TOKENS = 1
JOBS = 2
def get_domain():
"""Get domain from /var/domain without trailing new line"""
with open(DOMAIN_FILE, "r", encoding="utf-8") as domain_file:
domain = domain_file.readline().rstrip()
return domain
class WriteUserData(object):
"""Write userdata.json with lock"""
def __init__(self, file_type=UserDataFiles.USERDATA):
if file_type == UserDataFiles.USERDATA:
self.userdata_file = open(USERDATA_FILE, "r+", encoding="utf-8")
elif file_type == UserDataFiles.TOKENS:
self.userdata_file = open(TOKENS_FILE, "r+", encoding="utf-8")
elif file_type == UserDataFiles.JOBS:
# Make sure file exists
if not os.path.exists(JOBS_FILE):
with open(JOBS_FILE, "w", encoding="utf-8") as jobs_file:
jobs_file.write("{}")
self.userdata_file = open(JOBS_FILE, "r+", encoding="utf-8")
else:
raise ValueError("Unknown file type")
portalocker.lock(self.userdata_file, portalocker.LOCK_EX)
self.data = json.load(self.userdata_file)
def __enter__(self):
return self.data
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is None:
self.userdata_file.seek(0)
json.dump(self.data, self.userdata_file, indent=4)
self.userdata_file.truncate()
portalocker.unlock(self.userdata_file)
self.userdata_file.close()
class ReadUserData(object):
"""Read userdata.json with lock"""
def __init__(self, file_type=UserDataFiles.USERDATA):
if file_type == UserDataFiles.USERDATA:
self.userdata_file = open(USERDATA_FILE, "r", encoding="utf-8")
elif file_type == UserDataFiles.TOKENS:
self.userdata_file = open(TOKENS_FILE, "r", encoding="utf-8")
elif file_type == UserDataFiles.JOBS:
# Make sure file exists
if not os.path.exists(JOBS_FILE):
with open(JOBS_FILE, "w", encoding="utf-8") as jobs_file:
jobs_file.write("{}")
self.userdata_file = open(JOBS_FILE, "r", encoding="utf-8")
else:
raise ValueError("Unknown file type")
portalocker.lock(self.userdata_file, portalocker.LOCK_SH)
self.data = json.load(self.userdata_file)
def __enter__(self) -> dict:
return self.data
def __exit__(self, *args):
portalocker.unlock(self.userdata_file)
self.userdata_file.close()
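Both context managers implement a read-modify-write pattern over a JSON file: parse on entry, hand out the dict, and serialize it back only on clean exit. A minimal, lock-free sketch of the same pattern (the portalocker locking is elided so the example stands alone; `JsonEdit` is a hypothetical name):

```python
import json
import tempfile
from pathlib import Path


class JsonEdit:
    """Open a JSON file, expose the parsed dict, write it back on clean exit."""

    def __init__(self, path):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text(encoding="utf-8"))

    def __enter__(self) -> dict:
        return self.data

    def __exit__(self, exc_type, exc_value, traceback):
        # Only persist changes if the with-block raised no exception.
        if exc_type is None:
            self.path.write_text(json.dumps(self.data, indent=4), encoding="utf-8")


with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "userdata.json"
    path.write_text('{"useBinds": false}', encoding="utf-8")
    with JsonEdit(path) as data:
        data["useBinds"] = True
    result = json.loads(path.read_text(encoding="utf-8"))
```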
def validate_ssh_public_key(key):
"""Validate SSH public key. It may be ssh-ed25519 or ssh-rsa."""
return key.startswith(("ssh-ed25519", "ssh-rsa"))
def is_username_forbidden(username):
forbidden_prefixes = ["systemd", "nixbld"]
forbidden_usernames = [
"root",
"messagebus",
"postfix",
"polkituser",
"dovecot2",
"dovenull",
"nginx",
"postgres",
"prosody",
"opendkim",
"rspamd",
"sshd",
"selfprivacy-api",
"restic",
"redis",
"pleroma",
"ocserv",
"nextcloud",
"memcached",
"knot-resolver",
"gitea",
"bitwarden_rs",
"vaultwarden",
"acme",
"virtualMail",
"nobody",
]
for prefix in forbidden_prefixes:
if username.startswith(prefix):
return True
for forbidden_username in forbidden_usernames:
if username == forbidden_username:
return True
return False
def parse_date(date_str: str) -> datetime.datetime:
"""Parse date string which can be in one of these formats:
- %Y-%m-%dT%H:%M:%S.%fZ
- %Y-%m-%dT%H:%M:%S.%f
- %Y-%m-%d %H:%M:%S.%fZ
- %Y-%m-%d %H:%M:%S.%f
"""
try:
return datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S.%fZ")
except ValueError:
pass
try:
return datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S.%f")
except ValueError:
pass
try:
return datetime.datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%S.%fZ")
except ValueError:
pass
try:
return datetime.datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%S.%f")
except ValueError:
pass
raise ValueError("Invalid date string")
def get_dkim_key(domain):
"""Get DKIM key from /var/dkim/<domain>.selector.txt"""
dkim_path = "/var/dkim/" + domain + ".selector.txt"
if os.path.exists(dkim_path):
with open(dkim_path, encoding="utf-8") as dkim_file:
return dkim_file.read()
return None
def hash_password(password):
hashing_command = ["mkpasswd", "-m", "sha-512", password]
password_hash_process_descriptor = subprocess.Popen(
hashing_command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
hashed_password = password_hash_process_descriptor.communicate()[0]
hashed_password = hashed_password.decode("ascii")
hashed_password = hashed_password.rstrip()
return hashed_password


@@ -0,0 +1,329 @@
#!/usr/bin/env python3
"""Token management utils"""
import secrets
from datetime import datetime, timedelta
import re
import typing
from pydantic import BaseModel
from mnemonic import Mnemonic
from . import ReadUserData, UserDataFiles, WriteUserData, parse_date
"""
Tokens are stored in the tokens.json file.
The file contains device tokens, the recovery token, and the new-device auth token.
File structure:
{
"tokens": [
{
"token": "device token",
"name": "device name",
"date": "date of creation",
}
],
"recovery_token": {
"token": "recovery token",
"date": "date of creation",
"expiration": "date of expiration",
"uses_left": "number of uses left"
},
"new_device": {
"token": "new device auth token",
"date": "date of creation",
"expiration": "date of expiration",
}
}
The recovery token may or may not have an expiration date and uses_left.
There may be no recovery token at all.
Device tokens must be unique.
"""
def _get_tokens():
"""Get all tokens as list of tokens of every device"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
return [token["token"] for token in tokens["tokens"]]
def _get_token_names():
"""Get all token names"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
return [t["name"] for t in tokens["tokens"]]
def _validate_token_name(name):
"""Token name must be an alphanumeric string and not empty.
Replace invalid characters with '_'
If token name exists, add a random number to the end of the name until it is unique.
"""
if not re.match("^[a-zA-Z0-9]*$", name):
name = re.sub("[^a-zA-Z0-9]", "_", name)
if name == "":
name = "Unknown device"
while name in _get_token_names():
name += str(secrets.randbelow(10))
return name
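The sanitisation rules can be exercised in isolation by passing the set of taken names explicitly instead of reading tokens.json (a sketch equivalent to `_validate_token_name`; `sanitize_token_name` is a hypothetical name):

```python
import re
import secrets


def sanitize_token_name(name: str, taken: set) -> str:
    # Replace every non-alphanumeric character with '_'.
    name = re.sub("[^a-zA-Z0-9]", "_", name)
    if name == "":
        name = "Unknown device"
    # Append random digits until the name no longer collides.
    while name in taken:
        name += str(secrets.randbelow(10))
    return name
```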
def is_token_valid(token):
"""Check if token is valid"""
if token in _get_tokens():
return True
return False
def is_token_name_exists(token_name):
"""Check if token name exists"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
return token_name in [t["name"] for t in tokens["tokens"]]
def is_token_name_pair_valid(token_name, token):
"""Check if token name and token pair exists"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
for t in tokens["tokens"]:
if t["name"] == token_name and t["token"] == token:
return True
return False
def get_token_name(token: str) -> typing.Optional[str]:
"""Return the name of the token provided"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
for t in tokens["tokens"]:
if t["token"] == token:
return t["name"]
return None
class BasicTokenInfo(BaseModel):
"""Token info"""
name: str
date: datetime
def get_tokens_info():
"""Get all tokens info without tokens themselves"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
return [
BasicTokenInfo(
name=t["name"],
date=parse_date(t["date"]),
)
for t in tokens["tokens"]
]
def _generate_token():
"""Generates new token and makes sure it is unique"""
token = secrets.token_urlsafe(32)
while token in _get_tokens():
token = secrets.token_urlsafe(32)
return token
def create_token(name):
"""Create new token"""
token = _generate_token()
name = _validate_token_name(name)
with WriteUserData(UserDataFiles.TOKENS) as tokens:
tokens["tokens"].append(
{
"token": token,
"name": name,
"date": str(datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f")),
}
)
return token
def delete_token(token_name):
"""Delete token"""
with WriteUserData(UserDataFiles.TOKENS) as tokens:
tokens["tokens"] = [t for t in tokens["tokens"] if t["name"] != token_name]
def refresh_token(token: str) -> typing.Optional[str]:
"""Change the token field of the existing token"""
new_token = _generate_token()
with WriteUserData(UserDataFiles.TOKENS) as tokens:
for t in tokens["tokens"]:
if t["token"] == token:
t["token"] = new_token
return new_token
return None
def is_recovery_token_exists():
"""Check if recovery token exists"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
return "recovery_token" in tokens
def is_recovery_token_valid():
"""Check if recovery token is valid"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
if "recovery_token" not in tokens:
return False
recovery_token = tokens["recovery_token"]
if "uses_left" in recovery_token and recovery_token["uses_left"] is not None:
if recovery_token["uses_left"] <= 0:
return False
if "expiration" not in recovery_token or recovery_token["expiration"] is None:
return True
return datetime.now() < parse_date(recovery_token["expiration"])
def get_recovery_token_status():
"""Get recovery token date of creation, expiration and uses left"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
if "recovery_token" not in tokens:
return None
recovery_token = tokens["recovery_token"]
return {
"date": recovery_token["date"],
"expiration": recovery_token["expiration"]
if "expiration" in recovery_token
else None,
"uses_left": recovery_token["uses_left"]
if "uses_left" in recovery_token
else None,
}
def _get_recovery_token():
"""Get recovery token"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
if "recovery_token" not in tokens:
return None
return tokens["recovery_token"]["token"]
def generate_recovery_token(
expiration: typing.Optional[datetime], uses_left: typing.Optional[int]
) -> str:
"""Generate a 24-byte recovery token and return a mnemonic word list.
Write a string representation of the recovery token to the tokens.json file.
"""
# expires must be a date or None
# uses_left must be an integer or None
if expiration is not None:
if not isinstance(expiration, datetime):
raise TypeError("expires must be a datetime object")
if uses_left is not None:
if not isinstance(uses_left, int):
raise TypeError("uses_left must be an integer")
if uses_left <= 0:
raise ValueError("uses_left must be greater than 0")
recovery_token = secrets.token_bytes(24)
recovery_token_str = recovery_token.hex()
with WriteUserData(UserDataFiles.TOKENS) as tokens:
tokens["recovery_token"] = {
"token": recovery_token_str,
"date": str(datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%f")),
"expiration": expiration.strftime("%Y-%m-%dT%H:%M:%S.%f")
if expiration is not None
else None,
"uses_left": uses_left if uses_left is not None else None,
}
return Mnemonic(language="english").to_mnemonic(recovery_token)
def use_mnemonic_recoverery_token(mnemonic_phrase, name):
"""Use the recovery token by converting the mnemonic word list to a byte array.
If the recovery token itself is invalid, return None.
If the binary representation of the phrase does not match
the byte array of the recovery token, return None.
If the mnemonic phrase is valid, generate a device token and return it.
Subtract 1 from uses_left if it exists.
mnemonic_phrase is a string representation of the mnemonic word list.
"""
if not is_recovery_token_valid():
return None
recovery_token_str = _get_recovery_token()
if recovery_token_str is None:
return None
recovery_token = bytes.fromhex(recovery_token_str)
if not Mnemonic(language="english").check(mnemonic_phrase):
return None
phrase_bytes = Mnemonic(language="english").to_entropy(mnemonic_phrase)
if phrase_bytes != recovery_token:
return None
token = _generate_token()
name = _validate_token_name(name)
with WriteUserData(UserDataFiles.TOKENS) as tokens:
tokens["tokens"].append(
{
"token": token,
"name": name,
"date": str(datetime.now()),
}
)
if "recovery_token" in tokens:
if (
"uses_left" in tokens["recovery_token"]
and tokens["recovery_token"]["uses_left"] is not None
):
tokens["recovery_token"]["uses_left"] -= 1
return token
def get_new_device_auth_token() -> str:
"""Generate a new device auth token which is valid for 10 minutes
and return a mnemonic phrase representation
Write token to the new_device of the tokens.json file.
"""
token = secrets.token_bytes(16)
token_str = token.hex()
with WriteUserData(UserDataFiles.TOKENS) as tokens:
tokens["new_device"] = {
"token": token_str,
"date": str(datetime.now()),
"expiration": str(datetime.now() + timedelta(minutes=10)),
}
return Mnemonic(language="english").to_mnemonic(token)
def _get_new_device_auth_token():
"""Get new device auth token. If it is expired, return None"""
with ReadUserData(UserDataFiles.TOKENS) as tokens:
if "new_device" not in tokens:
return None
new_device = tokens["new_device"]
if "expiration" not in new_device:
return None
expiration = parse_date(new_device["expiration"])
if datetime.now() > expiration:
return None
return new_device["token"]
def delete_new_device_auth_token():
"""Delete new device auth token"""
with WriteUserData(UserDataFiles.TOKENS) as tokens:
if "new_device" in tokens:
del tokens["new_device"]
def use_new_device_auth_token(mnemonic_phrase, name):
"""Use the new device auth token by converting the mnemonic string to a byte array.
If the mnemonic phrase is valid then generate a device token and return it.
New device auth token must be deleted.
"""
token_str = _get_new_device_auth_token()
if token_str is None:
return None
token = bytes.fromhex(token_str)
if not Mnemonic(language="english").check(mnemonic_phrase):
return None
phrase_bytes = Mnemonic(language="english").to_entropy(mnemonic_phrase)
if phrase_bytes != token:
return None
token = create_token(name)
with WriteUserData(UserDataFiles.TOKENS) as tokens:
if "new_device" in tokens:
del tokens["new_device"]
return token
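The recovery and new-device flows above both hinge on one invariant: raw token bytes are stored in tokens.json as a hex string, and a submitted mnemonic is verified by converting both sides back to bytes and comparing. A minimal stdlib-only sketch of that round trip (the actual module also goes through the `mnemonic` library, omitted here):

```python
import secrets

# Sketch of the storage round trip used by the recovery and new-device tokens:
# raw bytes -> hex string (written to tokens.json) -> raw bytes (for comparison).
token = secrets.token_bytes(16)
token_str = token.hex()          # what gets persisted
recovered = bytes.fromhex(token_str)

assert recovered == token
print(len(token_str))  # 32 hex characters for 16 bytes
```

The comparison is done on the byte level rather than on strings, which sidesteps any case or formatting differences in the hex representation.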


@@ -0,0 +1,226 @@
"""Wrapper for block device functions."""
import subprocess
import json
import typing
from selfprivacy_api.utils import WriteUserData
def get_block_device(device_name):
"""
Return a block device by name.
"""
lsblk_output = subprocess.check_output(
[
"lsblk",
"-J",
"-b",
"-o",
"NAME,PATH,FSAVAIL,FSSIZE,FSTYPE,FSUSED,MOUNTPOINTS,LABEL,UUID,SIZE,MODEL,SERIAL,TYPE",
f"/dev/{device_name}",
]
)
lsblk_output = lsblk_output.decode("utf-8")
lsblk_output = json.loads(lsblk_output)
return lsblk_output["blockdevices"][0]
def resize_block_device(block_device) -> bool:
"""
Resize a block device. Return True if successful.
"""
resize_command = ["resize2fs", block_device]
try:
subprocess.check_output(resize_command, shell=False)
except subprocess.CalledProcessError:
return False
return True
class BlockDevice:
"""
A block device.
"""
def __init__(self, block_device):
self.name = block_device["name"]
self.path = block_device["path"]
self.fsavail = str(block_device["fsavail"])
self.fssize = str(block_device["fssize"])
self.fstype = block_device["fstype"]
self.fsused = str(block_device["fsused"])
self.mountpoints = block_device["mountpoints"]
self.label = block_device["label"]
self.uuid = block_device["uuid"]
self.size = str(block_device["size"])
self.model = block_device["model"]
self.serial = block_device["serial"]
self.type = block_device["type"]
self.locked = False
def __str__(self):
return self.name
def __repr__(self):
return f"<BlockDevice {self.name} of size {self.size} mounted at {self.mountpoints}>"
def __eq__(self, other):
return self.name == other.name
def __hash__(self):
return hash(self.name)
def stats(self) -> typing.Dict[str, typing.Any]:
"""
Update current data and return a dictionary of stats.
"""
device = get_block_device(self.name)
self.fsavail = str(device["fsavail"])
self.fssize = str(device["fssize"])
self.fstype = device["fstype"]
self.fsused = str(device["fsused"])
self.mountpoints = device["mountpoints"]
self.label = device["label"]
self.uuid = device["uuid"]
self.size = str(device["size"])
self.model = device["model"]
self.serial = device["serial"]
self.type = device["type"]
return {
"name": self.name,
"path": self.path,
"fsavail": self.fsavail,
"fssize": self.fssize,
"fstype": self.fstype,
"fsused": self.fsused,
"mountpoints": self.mountpoints,
"label": self.label,
"uuid": self.uuid,
"size": self.size,
"model": self.model,
"serial": self.serial,
"type": self.type,
}
def resize(self):
"""
Resize the block device.
"""
if not self.locked:
self.locked = True
resize_block_device(self.path)
self.locked = False
def mount(self) -> bool:
"""
Mount the block device.
"""
with WriteUserData() as user_data:
if "volumes" not in user_data:
user_data["volumes"] = []
# Check if the volume is already mounted
for volume in user_data["volumes"]:
if volume["device"] == self.path:
return False
user_data["volumes"].append(
{
"device": self.path,
"mountPoint": f"/volumes/{self.name}",
"fsType": self.fstype,
}
)
return True
def unmount(self) -> bool:
"""
Unmount the block device.
"""
with WriteUserData() as user_data:
if "volumes" not in user_data:
user_data["volumes"] = []
# Check if the volume is already mounted
for volume in user_data["volumes"]:
if volume["device"] == self.path:
user_data["volumes"].remove(volume)
return True
return False
class BlockDevices:
"""Singleton holding all Block devices"""
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self):
self.block_devices = []
self.update()
def update(self) -> None:
"""
Update the list of block devices.
"""
devices = []
lsblk_output = subprocess.check_output(
[
"lsblk",
"-J",
"-b",
"-o",
"NAME,PATH,FSAVAIL,FSSIZE,FSTYPE,FSUSED,MOUNTPOINTS,LABEL,UUID,SIZE,MODEL,SERIAL,TYPE",
]
)
lsblk_output = lsblk_output.decode("utf-8")
lsblk_output = json.loads(lsblk_output)
for device in lsblk_output["blockdevices"]:
# Ignore devices with type "rom"
if device["type"] == "rom":
continue
if device["fstype"] is None:
if "children" in device:
for child in device["children"]:
if child["fstype"] == "ext4":
device = child
break
devices.append(device)
# Add new devices and delete non-existent devices
for device in devices:
if device["name"] not in [
block_device.name for block_device in self.block_devices
]:
self.block_devices.append(BlockDevice(device))
for block_device in self.block_devices:
if block_device.name not in [device["name"] for device in devices]:
self.block_devices.remove(block_device)
def get_block_device(self, name: str) -> typing.Optional[BlockDevice]:
"""
Return a block device by name.
"""
for block_device in self.block_devices:
if block_device.name == name:
return block_device
return None
def get_block_devices(self) -> typing.List[BlockDevice]:
"""
Return a list of block devices.
"""
return self.block_devices
def get_block_devices_by_mountpoint(
self, mountpoint: str
) -> typing.List[BlockDevice]:
"""
Return a list of block devices with a given mountpoint.
"""
block_devices = []
for block_device in self.block_devices:
if mountpoint in block_device.mountpoints:
block_devices.append(block_device)
return block_devices
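`BlockDevices.update()` above filters the parsed `lsblk -J -b` output: ROM drives are skipped, and a disk with no filesystem is replaced by its first ext4 child partition. A self-contained sketch of that filtering logic against a canned (fabricated) lsblk JSON document, with no subprocess call:

```python
import json

# Hypothetical, abbreviated `lsblk -J` output for illustration only.
LSBLK_JSON = """
{"blockdevices": [
  {"name": "sda", "type": "disk", "fstype": null,
   "children": [{"name": "sda1", "type": "part", "fstype": "ext4"}]},
  {"name": "sr0", "type": "rom", "fstype": null}
]}
"""

def usable_devices(lsblk_json):
    """Mirror the update() filtering: skip "rom" devices and, when a
    device has no filesystem, descend to its first ext4 child."""
    devices = []
    for device in json.loads(lsblk_json)["blockdevices"]:
        if device["type"] == "rom":
            continue
        if device["fstype"] is None:
            for child in device.get("children", []):
                if child["fstype"] == "ext4":
                    device = child
                    break
        devices.append(device)
    return devices

print([d["name"] for d in usable_devices(LSBLK_JSON)])  # ['sda1']
```

Note that a filesystem-less disk with no ext4 child is still appended as-is; only the name-based add/remove reconciliation afterwards decides which `BlockDevice` objects survive.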


@@ -0,0 +1,14 @@
"""MiniHuey singleton."""
import os
from huey import SqliteHuey
HUEY_DATABASE = "/etc/nixos/userdata/tasks.db"
# Singleton instance containing the huey database.
test_mode = os.environ.get("TEST_MODE")
huey = SqliteHuey(
HUEY_DATABASE,
immediate=test_mode == "true",
)


@@ -0,0 +1,29 @@
#!/usr/bin/env python3
"""Network utils"""
import subprocess
import re
from typing import Optional
def get_ip4() -> str:
"""Get IPv4 address"""
try:
ip4 = subprocess.check_output(["ip", "addr", "show", "dev", "eth0"]).decode(
"utf-8"
)
ip4 = re.search(r"inet (\d+\.\d+\.\d+\.\d+)\/\d+", ip4)
except subprocess.CalledProcessError:
ip4 = None
return ip4.group(1) if ip4 else ""
def get_ip6() -> str:
"""Get IPv6 address"""
try:
ip6 = subprocess.check_output(["ip", "addr", "show", "dev", "eth0"]).decode(
"utf-8"
)
ip6 = re.search(r"inet6 (\S+)\/\d+", ip6)
except subprocess.CalledProcessError:
ip6 = None
return ip6.group(1) if ip6 else ""
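The two helpers above extract addresses from `ip addr show dev eth0` with regular expressions. A sketch of those same patterns run against a fabricated sample of `ip addr` output (the sample text and addresses are illustrative, not from the source):

```python
import re

# Fabricated `ip addr show dev eth0` output for illustration.
SAMPLE = """2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 203.0.113.7/24 brd 203.0.113.255 scope global eth0
    inet6 2001:db8::7/64 scope global
"""

# Same patterns as get_ip4 / get_ip6 above.
match4 = re.search(r"inet (\d+\.\d+\.\d+\.\d+)\/\d+", SAMPLE)
match6 = re.search(r"inet6 (\S+)\/\d+", SAMPLE)

print(match4.group(1) if match4 else "")  # 203.0.113.7
print(match6.group(1) if match6 else "")  # 2001:db8::7
```

Both functions fall back to an empty string when the command fails or nothing matches, so callers never see an exception from a missing interface.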

setup.py Normal file → Executable file

@@ -1,7 +1,10 @@
from distutils.core import setup
from setuptools import setup, find_packages
setup(
name='selfprivacy-api',
version='1.0.1',
scripts=['main.py',],
name="selfprivacy_api",
version="2.0.0",
packages=find_packages(),
scripts=[
"selfprivacy_api/app.py",
],
)

shell.nix Normal file

@@ -0,0 +1,71 @@
{ pkgs ? import <nixpkgs> { } }:
let
sp-python = pkgs.python39.withPackages (p: with p; [
setuptools
portalocker
pytz
pytest
pytest-mock
pytest-datadir
huey
gevent
mnemonic
coverage
pylint
pydantic
typing-extensions
psutil
black
fastapi
uvicorn
(buildPythonPackage rec {
pname = "strawberry-graphql";
version = "0.123.0";
format = "pyproject";
patches = [
./strawberry-graphql.patch
];
propagatedBuildInputs = [
typing-extensions
python-multipart
python-dateutil
# flask
pydantic
pygments
poetry
# flask-cors
(buildPythonPackage rec {
pname = "graphql-core";
version = "3.2.0";
format = "setuptools";
src = fetchPypi {
inherit pname version;
sha256 = "sha256-huKgvgCL/eGe94OI3opyWh2UKpGQykMcJKYIN5c4A84=";
};
checkInputs = [
pytest-asyncio
pytest-benchmark
pytestCheckHook
];
pythonImportsCheck = [
"graphql"
];
})
];
src = fetchPypi {
inherit pname version;
sha256 = "KsmZ5Xv8tUg6yBxieAEtvoKoRG60VS+iVGV0X6oCExo=";
};
})
]);
in
pkgs.mkShell {
buildInputs = [
sp-python
pkgs.black
];
shellHook = ''
PYTHONPATH=${sp-python}/${sp-python.sitePackages}
# maybe set more env-vars
'';
}

strawberry-graphql.patch Normal file

@@ -0,0 +1,96 @@
diff --git a/pyproject.toml b/pyproject.toml
index 0cbf2ef..7736e92 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -51,7 +51,6 @@ python-multipart = "^0.0.5"
sanic = {version = ">=20.12.2,<22.0.0", optional = true}
aiohttp = {version = "^3.7.4.post0", optional = true}
fastapi = {version = ">=0.65.2", optional = true}
-"backports.cached-property" = "^1.0.1"
[tool.poetry.dev-dependencies]
pytest = "^7.1"
diff --git a/strawberry/directive.py b/strawberry/directive.py
index 491e390..26ba345 100644
--- a/strawberry/directive.py
+++ b/strawberry/directive.py
@@ -1,10 +1,10 @@
from __future__ import annotations
import dataclasses
+from functools import cached_property
import inspect
from typing import Any, Callable, List, Optional, TypeVar
-from backports.cached_property import cached_property
from typing_extensions import Annotated
from graphql import DirectiveLocation
diff --git a/strawberry/extensions/tracing/datadog.py b/strawberry/extensions/tracing/datadog.py
index 01fba20..7c06950 100644
--- a/strawberry/extensions/tracing/datadog.py
+++ b/strawberry/extensions/tracing/datadog.py
@@ -1,8 +1,8 @@
import hashlib
+from functools import cached_property
from inspect import isawaitable
from typing import Optional
-from backports.cached_property import cached_property
from ddtrace import tracer
from strawberry.extensions import Extension
diff --git a/strawberry/field.py b/strawberry/field.py
index 80ed12a..f1bf2e9 100644
--- a/strawberry/field.py
+++ b/strawberry/field.py
@@ -1,5 +1,6 @@
import builtins
import dataclasses
+from functools import cached_property
import inspect
import sys
from typing import (
@@ -18,7 +19,6 @@ from typing import (
overload,
)
-from backports.cached_property import cached_property
from typing_extensions import Literal
from strawberry.annotation import StrawberryAnnotation
diff --git a/strawberry/types/fields/resolver.py b/strawberry/types/fields/resolver.py
index c5b3edd..f4112ce 100644
--- a/strawberry/types/fields/resolver.py
+++ b/strawberry/types/fields/resolver.py
@@ -1,6 +1,7 @@
from __future__ import annotations as _
import builtins
+from functools import cached_property
import inspect
import sys
import warnings
@@ -22,7 +23,6 @@ from typing import ( # type: ignore[attr-defined]
_eval_type,
)
-from backports.cached_property import cached_property
from typing_extensions import Annotated, Protocol, get_args, get_origin
from strawberry.annotation import StrawberryAnnotation
diff --git a/strawberry/types/info.py b/strawberry/types/info.py
index a172c04..475a3ee 100644
--- a/strawberry/types/info.py
+++ b/strawberry/types/info.py
@@ -1,9 +1,8 @@
import dataclasses
+from functools import cached_property
import warnings
from typing import TYPE_CHECKING, Any, Dict, Generic, List, Optional, TypeVar, Union
-from backports.cached_property import cached_property
-
from graphql import GraphQLResolveInfo, OperationDefinitionNode
from graphql.language import FieldNode
from graphql.pyutils.path import Path

tests/__init__.py Normal file

tests/common.py Normal file

@@ -0,0 +1,28 @@
import json
from mnemonic import Mnemonic
def read_json(file_path):
with open(file_path, "r", encoding="utf-8") as file:
return json.load(file)
def write_json(file_path, data):
with open(file_path, "w", encoding="utf-8") as file:
json.dump(data, file, indent=4)
def generate_api_query(query_array):
return "query TestApi {\n api {" + "\n".join(query_array) + "}\n}"
def generate_system_query(query_array):
return "query TestSystem {\n system {" + "\n".join(query_array) + "}\n}"
def generate_users_query(query_array):
return "query TestUsers {\n users {" + "\n".join(query_array) + "}\n}"
def mnemonic_to_hex(mnemonic):
return Mnemonic(language="english").to_entropy(mnemonic).hex()
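The `generate_*_query` helpers above just splice field selections into a fixed GraphQL wrapper. Reproducing `generate_api_query` here so the snippet is self-contained, the assembled query looks like this:

```python
# Same helper as in tests/common.py, reproduced for a runnable example.
def generate_api_query(query_array):
    return "query TestApi {\n api {" + "\n".join(query_array) + "}\n}"

query = generate_api_query(["version", "devices { name }"])
print(query)
# query TestApi {
#  api {version
# devices { name }}
# }
```

The resulting string is loosely formatted but syntactically valid GraphQL, which is all the test client needs.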

tests/conftest.py Normal file

@@ -0,0 +1,62 @@
"""Tests configuration."""
# pylint: disable=redefined-outer-name
# pylint: disable=unused-argument
import os
import pytest
from fastapi.testclient import TestClient
def pytest_generate_tests(metafunc):
os.environ["TEST_MODE"] = "true"
@pytest.fixture
def tokens_file(mocker, shared_datadir):
"""Mock tokens file."""
mock = mocker.patch(
"selfprivacy_api.utils.TOKENS_FILE", shared_datadir / "tokens.json"
)
return mock
@pytest.fixture
def jobs_file(mocker, shared_datadir):
"""Mock jobs file."""
mock = mocker.patch("selfprivacy_api.utils.JOBS_FILE", shared_datadir / "jobs.json")
return mock
@pytest.fixture
def huey_database(mocker, shared_datadir):
"""Mock huey database."""
mock = mocker.patch(
"selfprivacy_api.utils.huey.HUEY_DATABASE", shared_datadir / "huey.db"
)
return mock
@pytest.fixture
def client(tokens_file, huey_database, jobs_file):
from selfprivacy_api.app import app
return TestClient(app)
@pytest.fixture
def authorized_client(tokens_file, huey_database, jobs_file):
"""Authorized test client fixture."""
from selfprivacy_api.app import app
client = TestClient(app)
client.headers.update({"Authorization": "Bearer TEST_TOKEN"})
return client
@pytest.fixture
def wrong_auth_client(tokens_file, huey_database, jobs_file):
"""Wrong token test client fixture."""
from selfprivacy_api.app import app
client = TestClient(app)
client.headers.update({"Authorization": "Bearer WRONG_TOKEN"})
return client

tests/data/jobs.json Normal file

@@ -0,0 +1 @@
{}

Some files were not shown because too many files have changed in this diff.