Compare commits

...

230 Commits

Author SHA1 Message Date
Houkime 45011450c5 feature(backup): calculate needed space for inplace restoration
2023-07-17 16:05:00 +03:00
Houkime f711275a5e test(backup): test moving preventing backups 2023-07-17 16:05:00 +03:00
Houkime 097cf50b37 fix(servers): hopefully fix moving 2023-07-17 16:05:00 +03:00
Houkime c53f35c947 feature(servers): set default timeout of server operations to 10 min 2023-07-17 16:05:00 +03:00
Houkime b001e198bf feature(backups): stop services before restores 2023-07-17 16:05:00 +03:00
Houkime 40ad1b5ce4 feature(backups): stop services before backups 2023-07-17 16:05:00 +03:00
Houkime a7427f3cb5 test(backups): do not store the status file in backupped folders 2023-07-17 16:05:00 +03:00
Houkime 86c2ae2c1f refactor(backups): make a StoppedService context manager 2023-07-17 16:05:00 +03:00
Houkime ea4e53f826 test(backups): make delay settable per dummyservice 2023-07-17 16:05:00 +03:00
Houkime e2b906b219 test(backups): test async service start n stop simulation 2023-07-17 16:05:00 +03:00
Houkime d33e9d6335 test(backups): simulating async service start n stop 2023-07-17 16:05:00 +03:00
Houkime 8e29634d02 feature(utils): a hopefully reusable waitloop 2023-07-17 16:05:00 +03:00
Houkime be95b84d52 feature(backups): expose restore strategies to the API 2023-07-17 16:05:00 +03:00
Houkime cacbf8335d fix(backups): actually mount if asked for an inplace restore 2023-07-17 16:05:00 +03:00
Houkime 65ce86f0f9 test(backups): test out that pre-restore backup plays nice with jobs 2023-07-17 16:05:00 +03:00
Houkime 95e4296d0b feature(backups): implement inplace restore strategy 2023-07-17 16:05:00 +03:00
Houkime 59fe386463 feature(backups): restore strategies enum 2023-07-17 16:05:00 +03:00
Houkime 02e3c9bd5e feature(backups): forgetting snapshots 2023-07-17 16:05:00 +03:00
Houkime f361f44ded feature(backups): check restore exit code 2023-07-17 16:05:00 +03:00
Houkime 4423db7458 refactor(backups): download a copy before replacing original 2023-07-17 16:05:00 +03:00
Houkime 9137536294 feature(backups): mounting a repo 2023-07-17 16:05:00 +03:00
Houkime 5467a62906 test(backups): remove the 100mb file after test 2023-07-17 16:05:00 +03:00
Houkime 9a28c0ebcb refactor(backups): move syncing (non-restic) into backup utils 2023-07-17 16:05:00 +03:00
Houkime 7ad5f91be1 refactor(backups): move output yielding into backup utils 2023-07-17 16:05:00 +03:00
Houkime ae708e446b test(backups): actually list folders 2023-07-17 16:05:00 +03:00
Houkime 1c28984475 feature(backups): a wrapper for rclone sync 2023-07-17 16:05:00 +03:00
Inex Code 2df930b9ba feat(backups): Add backup descriptions for UI 2023-07-17 16:05:00 +03:00
Inex Code 2c21bd2a14 feat(backups): expose if the service can be backed up 2023-07-17 16:05:00 +03:00
Inex Code 21c5f6814c style: fix styling 2023-07-17 16:05:00 +03:00
Houkime 559de63221 fix(jobs): make finishing the job set progress to 100 2023-07-17 16:05:00 +03:00
Houkime 5ff89c21d5 test(backup): make large testfile larger 2023-07-17 16:05:00 +03:00
Inex Code ba9270755a feat(jobs): return type_id of the job in graphql api 2023-07-17 16:05:00 +03:00
Houkime 0e13e61b73 fix(services): proper backup progress reporting 2023-07-17 16:05:00 +03:00
Houkime 1fb5e3af97 fix(services): cleanup a stray get_location 2023-07-17 16:05:00 +03:00
Houkime 2dd9da9a96 fix(backups): register the correct tasks 2023-07-17 16:05:00 +03:00
Inex Code a7d0f6226f fix(backups): missing space in rclone args 2023-07-17 16:05:00 +03:00
Houkime e8f1f39b18 refactor(backups): rename service_snapshot_size to snapshot_restored_size 2023-07-17 16:05:00 +03:00
Houkime f804c88fa6 refactor(backups): remove the by-service getting of cached snapshots 2023-07-17 16:05:00 +03:00
Houkime 6004977845 refactor(backups): rename force_snapshot_reload to force_snapshot_cache_reload 2023-07-17 16:05:00 +03:00
Houkime 3551813b34 refactor(backups): merge sync_all_snapshots with force_snapshot_reload 2023-07-17 16:05:00 +03:00
Houkime ce55416b26 refactor(backups): straighten get_all_snapshots 2023-07-17 16:05:00 +03:00
Houkime 16a96fe0fa refactor(backups): delete sync_service_snapshots 2023-07-17 16:05:00 +03:00
Houkime f2161f0532 refactor(backups): privatize assert_restorable and restore_snapshot_from_id 2023-07-17 16:05:00 +03:00
Houkime cb2273323f refactor(backups): group operations together 2023-07-17 16:05:00 +03:00
Houkime 6369042420 refactor(backups): move reset() to top because toplevel interface 2023-07-17 16:05:00 +03:00
Houkime 3edb38262f refactor(backups): make redis and json provider related lowlevels private 2023-07-17 16:05:00 +03:00
Houkime 3684345c2d refactor(backups): make construct_provider not public 2023-07-17 16:05:00 +03:00
Houkime 6b0c55a786 refactor(backups): make lookup_provider not public 2023-07-17 16:05:00 +03:00
Houkime dbac010303 refactor(backups): reorder imports 2023-07-17 16:05:00 +03:00
Houkime c09f2f393b refactor(backups): api readability reorg 2023-07-17 16:05:00 +03:00
Houkime ce9b24b579 feature(dev_qol): mypy type checking and rope refactoring support 2023-07-17 16:05:00 +03:00
Houkime 4b1594ca22 refactoring(backups): backuper -> backupper 2023-07-17 16:05:00 +03:00
Houkime c94b4d07bf fix(tokens-repo): persistent hashing 2023-07-17 16:05:00 +03:00
Inex Code 4225772573 fix(backups): Providers were not initialized correctly 2023-07-17 16:05:00 +03:00
Houkime 2040272879 fix(redis): Do not shut down redis on ctrl c
see https://github.com/NixOS/nix/issues/2141
2023-07-17 16:05:00 +03:00
Inex Code f3dd18a830 ci: only run on push event 2023-07-17 16:05:00 +03:00
Inex Code 0d622d431f ci: ignore the failure when trying to kill redis 2023-07-17 16:05:00 +03:00
Inex Code f27a3df807 refactor(backups): fix typing errors 2023-07-17 16:05:00 +03:00
Inex Code 1e840f8cff ci: fix killing redis-server 2023-07-17 16:05:00 +03:00
Inex Code b78ee5fcca refactor(api): Group mutations
I've learned that there is no problem in grouping mutations like we do with queries.
This was a big mistake on my side; now we have legacy, not-so-conveniently placed endpoints.
I've grouped all mutations and left copies of the old ones flattened in the root for backwards compatibility.
We will migrate to mutation groups on the client side; backups already use only the grouped mutations.
Tests are updated.
2023-07-17 16:05:00 +03:00
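For illustration, the grouping pattern described in this commit looks roughly like the following in strawberry. This is a minimal sketch with hypothetical type and resolver names, not the actual selfprivacy_api schema:

import strawberry

def _start_backup_resolver(service_id: str) -> bool:
    return True  # placeholder for the real resolver

@strawberry.type
class BackupMutations:
    # grouped endpoint: mutation { backup { startBackup(serviceId: "...") } }
    start_backup = strawberry.mutation(resolver=_start_backup_resolver)

@strawberry.type
class Mutation:
    @strawberry.field
    def backup(self) -> BackupMutations:
        return BackupMutations()

    # flattened legacy copy kept at the root for backwards compatibility
    start_backup = strawberry.mutation(resolver=_start_backup_resolver)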
Houkime 53dfb38284 test(backups): ensure asking to reload snaps does not explode the server 2023-07-17 16:05:00 +03:00
Houkime ecf72948b1 test(backups): setting autobackup period 2023-07-17 16:05:00 +03:00
Houkime f829a34dc7 refactor(backups): delete legacy provider setting 2023-07-17 16:05:00 +03:00
Houkime 9f096ed2c0 feature(backups): actually dealing with situation when the provider is not configured 2023-07-17 16:05:00 +03:00
Houkime cd32aa83b7 refactor(backups): NoneBackupper class for those cases when we do not know 2023-07-17 16:05:00 +03:00
Houkime a56461fb96 refactor(backups): make a dir for backuppers 2023-07-17 16:05:00 +03:00
Houkime b346a283a4 test(backups): add a backend json reset test 2023-07-17 16:05:00 +03:00
Houkime 806fb3c84b feature(backups): resetting json config too 2023-07-17 16:05:00 +03:00
Houkime 1fd5db9ff3 fix(backups): fix output API return types for configuration 2023-07-17 16:05:00 +03:00
Houkime 5d95c1b44e test(backups): preliminary test of repo reset 2023-07-17 16:05:00 +03:00
Houkime 1c96743c5d test(backups): test reinitting repository 2023-07-17 16:05:00 +03:00
Houkime 38de01da8b refactor(backups): cleanup localfile-specific logic 2023-07-17 16:05:00 +03:00
Houkime 8475ae3375 refactor(backups): make localfile repos normal 2023-07-17 16:05:00 +03:00
Houkime a48856c9ad fix(backups): non-nullable service when backing up 2023-07-17 16:05:00 +03:00
Houkime a8f72201a7 test(backups): test restore 2023-07-17 16:05:00 +03:00
Houkime cf2dc6795a test(backups): use get_data 2023-07-17 16:05:00 +03:00
Houkime a486825a4f test(backups): check snapshots getting created 2023-07-17 16:05:00 +03:00
Houkime eac561c57c test(backups): test dummy service compliance 2023-07-17 16:05:00 +03:00
Houkime 53638b7e06 test(backups): make dummy service more compliant 2023-07-17 16:05:00 +03:00
Houkime de1cbcb1ca test(backups): display errors from api 2023-07-17 16:05:00 +03:00
Houkime cfda6b0810 fix(backups): shorten snapshot query signature 2023-07-17 16:05:00 +03:00
Houkime 09c79b3477 test(backups): snapshot query 2023-07-17 16:05:00 +03:00
Inex Code 93b98cd4fd fix(backups): Handle orphaned snapshots 2023-07-17 16:05:00 +03:00
Inex Code 421c92d12e fix(backups): return type of encryption key 2023-07-17 16:05:00 +03:00
Inex Code c603394449 fix(backups): try to actually get backup configuration 2023-07-17 16:05:00 +03:00
Houkime f77556b60e test(backups): actual testfile 2023-07-17 16:05:00 +03:00
Houkime b04dfc6c4e fix(backups): register queries 2023-07-17 16:05:00 +03:00
Houkime 42a5b6f70a test(backups): test backup API - backing up 2023-07-17 16:05:00 +03:00
Inex Code 32a242b560 feat(backups): register backups in GraphQL schema 2023-07-17 16:05:00 +03:00
Inex Code a4b0e6f208 fix: BackupConfiguration argument order 2023-07-17 16:05:00 +03:00
Houkime ad130e392c feature(backups): check available space before restoring 2023-07-17 16:05:00 +03:00
Houkime 780c12df6c refactor(backups): expect one more error of restic json output parsing 2023-07-17 16:05:00 +03:00
Houkime 6da0791b47 feature(backups): integration between restore and jobs 2023-07-17 16:05:00 +03:00
Houkime 792dcd459d fix(backups): return one job, not an array of one 2023-07-17 16:05:00 +03:00
Houkime 5100f1a497 fix(backups): return 400, not 300 2023-07-17 16:05:00 +03:00
Houkime 44e45a5124 BREAKING CHANGE(backups): support only individual service backup requests (combinable) 2023-07-17 16:05:00 +03:00
Houkime 0b8f77e6f7 feature(backups): set autobackup period from gql 2023-07-17 16:05:00 +03:00
Houkime e3545d4541 feature(backups): get all snapshots if requested by api 2023-07-17 16:05:00 +03:00
Houkime 550f7fa620 refactor(backups): introduce get_all_snapshots() 2023-07-17 16:05:00 +03:00
Houkime cc073155db feature(backups): return a snapshot from start_backup 2023-07-17 16:05:00 +03:00
Houkime 891993e4cd feature(backups): a graphql call to invalidate cache 2023-07-17 16:05:00 +03:00
Houkime 7e022e0cfe feature(backups): graphql mutation for restore 2023-07-17 16:05:00 +03:00
Houkime 44ddd27e84 fix(backups): return correct snapshots per service 2023-07-17 16:05:00 +03:00
Houkime 761b6be4e5 refactor(backups): global snapshots 2023-07-17 16:05:00 +03:00
Houkime a76b4ac134 feature(backups): start backup graphql API 2023-07-17 16:05:00 +03:00
Houkime ac9fbbff3e feature(backups): drop repository call 2023-07-17 16:05:00 +03:00
Houkime bdae6cfb75 feature(backups): global init instead of per-service 2023-07-17 16:05:00 +03:00
Houkime e7683352cd feature(backups): a graphql query to get provider info 2023-07-17 16:05:00 +03:00
Houkime d0b27da641 feature(backups): init repo mutation 2023-07-17 16:05:00 +03:00
Houkime d10bf99927 fix(backups): make sure location and credentials get properly passed around 2023-07-17 16:05:00 +03:00
Houkime c5c41b3ced refactor(backups): remove extraneous asserts from jobs 2023-07-17 16:05:00 +03:00
Houkime c8512eacdc refactor(backups): refactor realtime updating 2023-07-17 16:05:00 +03:00
Houkime d38b8180cb feature(backups): realtime progress updates of backups 2023-07-17 16:05:00 +03:00
Houkime 1faaed992e test(backups): break out obtaining finished jobs 2023-07-17 16:05:00 +03:00
Houkime 135fb0c42d feature(backups): job progress logs 2023-07-17 16:05:00 +03:00
Houkime ca036b294a refactor(backups): break out job logs status prefix 2023-07-17 16:05:00 +03:00
Houkime afdbf01cfc refactor(backups): use single repo and multiplex by tags 2023-07-17 16:05:00 +03:00
Houkime ecf44e5169 feature(backups): deny adding a backup job if another one is already queued 2023-07-17 16:05:00 +03:00
Houkime ebff2b308a test(backups): test that the job has run 2023-07-17 16:05:00 +03:00
Houkime 2a87eb80f9 refactor(backups): quick-expiration logs of jobs status updates 2023-07-17 16:05:00 +03:00
Houkime f116ce1bdb feature(backups): set job status to error if backup fails 2023-07-17 16:05:00 +03:00
Houkime 05f2cc3f14 refactor(backups): cleanup unused imports in tasks 2023-07-17 16:05:00 +03:00
Houkime f622d617cf test(backups): test jobs starting and finishing when from Backups 2023-07-17 16:05:00 +03:00
Houkime 312fceeb9c test(backups): break out a finished job checker 2023-07-17 16:05:00 +03:00
Houkime ac6d25c4c1 refactor(backups): make a backup job running when the backup code itself is executed 2023-07-17 16:05:00 +03:00
Houkime 026d72b551 refactor(backups): delete unused redis import from backups class 2023-07-17 16:05:00 +03:00
Houkime 029cb47db6 feature(backups): also create a job if not called from a task 2023-07-17 16:05:00 +03:00
Houkime b32ca3b11a test(backups): assure that jobs are created and not duplicated 2023-07-17 16:05:00 +03:00
Houkime fa86c45bd0 feature(backups): simplest jobs integration in tasks: created and finished 2023-07-17 16:05:00 +03:00
Houkime 4572c00640 feature(backups): restore task 2023-07-17 16:05:00 +03:00
Houkime d3f9ce7bf5 test(backups): test local secrets 2023-07-17 16:05:00 +03:00
Houkime ebeb76149b refactor(services): make local secret setting public 2023-07-17 16:05:00 +03:00
Houkime 592eb1a1f8 refactor(services): use fully generic foldermoves 2023-07-17 16:05:00 +03:00
Houkime f09d21a031 test(services): test derivation of foldermoves 2023-07-17 16:05:00 +03:00
Houkime 7a5af6af99 test(services): test that we indeed return correct folders and owned folders from real services 2023-07-17 16:05:00 +03:00
Houkime aca05f26ea fix(services): folder methods typing 2023-07-17 16:05:00 +03:00
Houkime 92be699031 refactor(services): make a foldermove from owned path 2023-07-17 16:05:00 +03:00
Houkime 71b987da57 refactor(services): add folder owner derivation 2023-07-17 16:05:00 +03:00
Houkime 9f2dbaa98d refactor(services): add overridable get owner and get group 2023-07-17 16:05:00 +03:00
Houkime 6057e350ef refactor(services): add OwnedPath struct 2023-07-17 16:05:00 +03:00
Houkime df5b318fff refactor(services): remove special storage counting from pleroma 2023-07-17 16:05:00 +03:00
Houkime f0d6ac624d refactor(services): remove special storage counting from ocserv 2023-07-17 16:05:00 +03:00
Houkime ae7f53d1ec refactor(services): remove special storage counting from nextcloud 2023-07-17 16:05:00 +03:00
Houkime 34854b5118 documentation(services): move the storage count docstring to parent service class 2023-07-17 16:05:00 +03:00
Houkime f5de4974e7 refactor(services): remove special storage counting from mail 2023-07-17 16:05:00 +03:00
Houkime 208e256c0f refactor(services): remove special storage counting from jitsi 2023-07-17 16:05:00 +03:00
Houkime 44041662c2 refactor(services): remove special storage counting from gitea 2023-07-17 16:05:00 +03:00
Houkime 3b8168c25d refactor(services): remove special storage counting from bitwarden 2023-07-17 16:05:00 +03:00
Houkime c2cd972805 refactor(services): add a generic storage counter 2023-07-17 16:05:00 +03:00
Houkime 0a9848be47 refactor(services): make get_folders() a mandatory part of Service interface 2023-07-17 16:05:00 +03:00
Houkime ac04425221 refactor(services): add get_folders() to the rest of the services 2023-07-17 16:05:00 +03:00
Houkime 1019031b5b fix(services): use get_foldername() for moving around 2023-07-17 16:05:00 +03:00
Houkime 95b88ea2e4 test(backups): implement get_folders() for gitea 2023-07-17 16:05:00 +03:00
Houkime 498208f083 test(backups): implement get_folders() for bitwarden 2023-07-17 16:05:00 +03:00
Houkime 840572f82c test(backups): test 2-folder restoration 2023-07-17 16:05:00 +03:00
Houkime f3bfa2293c test(backups): actually back up 2 folders 2023-07-17 16:05:00 +03:00
Houkime b21d63be63 refactor(backups): set a list of folders for our dummy service 2023-07-17 16:05:00 +03:00
Houkime 3aefbaaf0b refactor(backups): actually accept a list of folders 2023-07-17 16:05:00 +03:00
Houkime f0aabec947 refactor(backups): make api accept a list of folders 2023-07-17 16:05:00 +03:00
Houkime d1e1039519 refactor(backups): make a dedicated get_folders() function 2023-07-17 16:05:00 +03:00
Houkime 507cdb3bbd refactor(services): rename get_location() to get_drive() 2023-07-17 16:05:00 +03:00
Houkime 6132f1bb4c test(backups): register dummy service 2023-07-17 16:05:00 +03:00
Houkime 1940b29161 feature(backups): automatic backup 2023-07-17 16:05:00 +03:00
Houkime 5e9c651c65 test(backups): test autobackup timing 2023-07-17 16:05:00 +03:00
Houkime b305c19559 refactor(backups): split out storage 2023-07-17 16:05:00 +03:00
Houkime ef57e25a26 test(backups): test that we do use cache 2023-07-17 16:05:00 +03:00
Houkime f9eaaab929 feature(backups): enable snapshot cache usage 2023-07-17 16:05:00 +03:00
Houkime 2c510ae884 feature(backups): add snapshot cache sync functions 2023-07-17 16:05:00 +03:00
Houkime ed0861aacc test(backups): test last backup date retrieval 2023-07-17 16:05:00 +03:00
Houkime 054b07baa3 feature(backups): add a datetime validator function for huey autobackups 2023-07-17 16:05:00 +03:00
Houkime 343fda0630 test(backups): test setting autobackup period 2023-07-17 16:05:00 +03:00
Houkime 0a4338596b test(backups): test setting services as enabled for autobackups 2023-07-17 16:05:00 +03:00
Houkime 79b9bb352a feature(backups): methods for autobackup period setting and getting 2023-07-17 16:05:00 +03:00
Houkime 951bb8d5ec fix(backups): remove self from static method 2023-07-17 16:05:00 +03:00
Houkime d354f4ac0b feature(backups): check, set and unset service autobackup status 2023-07-17 16:05:00 +03:00
Houkime 43b6ebd04d feature(backups): cache snapshots and last backup timestamps 2023-07-17 16:05:00 +03:00
Houkime d57dc3f7d2 test(backups): test that we do return snapshot on backup 2023-07-17 16:05:00 +03:00
Houkime 35a4fec9d4 feature(backups): return snapshot info from backup function 2023-07-17 16:05:00 +03:00
Houkime a134009165 feature(backups): huey task to back up 2023-07-17 16:05:00 +03:00
Houkime d972fdc3cc refactor(backups): make backups stateless 2023-07-17 16:05:00 +03:00
Houkime 6f8f5cbb9e feature(backups): repo init tracking 2023-07-17 16:05:00 +03:00
Houkime 02deae217d feature(backups): provider storage and retrieval 2023-07-17 16:05:00 +03:00
Houkime 48dc63a590 refactor(backups): add a provider model for redis storage 2023-07-17 16:05:00 +03:00
Houkime 873bc8282e refactor(backups): redis model storage utils 2023-07-17 16:05:00 +03:00
Houkime c928263fce feature(backups): load from json 2023-07-17 16:05:00 +03:00
Houkime 0847e16089 feat(backups): local secret generation and storage 2023-07-17 16:05:00 +03:00
Houkime 60dcde458c feat(backups): sizing up snapshots 2023-07-17 16:05:00 +03:00
Houkime 1d403b0e94 test(backups): test restoring a file 2023-07-17 16:05:00 +03:00
Houkime c8a8d45110 feat(backups): add restore_snapshot and restore_service_from_snapshot 2023-07-17 16:05:00 +03:00
Houkime ff6bc2a142 feat(backups): a better error on failed snapshot retrieval 2023-07-17 16:05:00 +03:00
Houkime e56907f2cd feat(backups): return proper snapshot structs when listing 2023-07-17 16:05:00 +03:00
Houkime a0a32a7f37 test(backups): reenable snapshot testing 2023-07-17 16:05:00 +03:00
Houkime 228eab44bb feat(backups): throw an error on a failed backup 2023-07-17 16:05:00 +03:00
Houkime 348ece8b9c fix(backups): singleton metaclass was screwing with tests 2023-07-17 16:05:00 +03:00
Houkime a280e5c999 test(backups): localfile repo by default in tests 2023-07-17 16:05:00 +03:00
Houkime add4e21f39 feature(backups): throw an error if repo init fails 2023-07-17 16:05:00 +03:00
Houkime b27f19b201 test(backups): basic file backend init test 2023-07-17 16:05:00 +03:00
Houkime 5efb351159 feature(backups): register localfile backend 2023-07-17 16:05:00 +03:00
Houkime 529608d52e feature(backups): localfile repo 2023-07-17 16:05:00 +03:00
Houkime 29c4b74a86 test(backups): test repo init 2023-07-17 16:05:00 +03:00
Houkime 3f30469532 refactor(backups): repo init service method 2023-07-17 16:05:00 +03:00
Houkime a405eddbcf refactor(backups): add repo init 2023-07-17 16:05:00 +03:00
Houkime 5371c7feef refactor(backups): snapshotlist and local secret groundwork 2023-07-17 16:05:00 +03:00
Houkime e156e9cd58 test(backup): no snapshots 2023-07-17 16:05:00 +03:00
Houkime 83b24f5fcd refactor(backup): snapshot model 2023-07-17 16:05:00 +03:00
Houkime 4ca2e62b5c feature(backup): loading snapshots 2023-07-17 16:05:00 +03:00
Houkime a42294b706 feature(backup): add a restore function to restic backuper 2023-07-17 16:05:00 +03:00
Houkime a0a0e1fb3b feat(backup): hooks 2023-07-17 16:05:00 +03:00
Houkime 95e2032c63 test(backup): use a backup service function 2023-07-17 16:05:00 +03:00
Houkime 178c456593 refactor(backup): add a backup function to Backups singleton class 2023-07-17 16:05:00 +03:00
Houkime ff72d4124e refactor(backup): add a placeholder Backups singleton class 2023-07-17 16:05:00 +03:00
Houkime 54103973bc test(backup): try to back up! 2023-07-17 16:05:00 +03:00
Houkime a9cd8dda37 fix(backup): add memory backup class, forgot to add to git 2023-07-17 16:05:00 +03:00
Houkime 86c99c0be8 feat(backup): add backing up to restic backuper 2023-07-17 16:05:00 +03:00
Houkime 3f2c1e0593 test(backup): make a testfile to backup 2023-07-17 16:05:00 +03:00
Houkime fc7483a6f2 test(backup): init an in-memory backup class 2023-07-17 16:05:00 +03:00
Houkime 37c18ead99 feat(backup): add in-memory backup 2023-07-17 16:05:00 +03:00
Houkime e5a965ea29 feat(backup): allow no auth 2023-07-17 16:05:00 +03:00
Houkime 45ab9423b9 test(backup): dummy service 2023-07-17 16:05:00 +03:00
Houkime 9097ba02d7 test(backup): provider class selection 2023-07-17 16:05:00 +03:00
Houkime 7d76b74dbc feature(backups): copy cli logic to new restic backuper 2023-07-17 16:05:00 +03:00
Houkime 1e5fb67374 feature(backups): placeholders for the backupers and backup providers 2023-07-17 16:05:00 +03:00
Houkime a3d58be0d5 feature(backups): placeholders for the modules of the new backup system 2023-07-17 16:05:00 +03:00
Houkime a1071fd2c9 feature(backups): add backup structures and queries 2023-07-17 16:05:00 +03:00
Houkime 7b7f782185 refactor(backup): do not use config file 2023-07-17 16:05:00 +03:00
Houkime f65c0522b0 refactor(backup): pass key and account to exec 2023-07-17 16:05:00 +03:00
Houkime 6bf5ee4b64 refactor(backup): extract restic repo 2023-07-17 16:05:00 +03:00
Houkime 8eab26d552 refactor(backup): extract rclone args 2023-07-17 16:05:00 +03:00
Houkime 70cf0306a9 refactor(backup): delete unused import 2023-07-17 16:05:00 +03:00
Inex Code b3a37e8b1f fix: Migrate to SP channel from 22.11 installations
2023-06-14 19:27:11 +03:00
75 changed files with 4692 additions and 744 deletions

View File

@@ -5,7 +5,7 @@ name: default
steps:
- name: Run Tests and Generate Coverage Report
  commands:
  - kill $(ps aux | grep '[r]edis-server 127.0.0.1:6389' | awk '{print $2}')
  - kill $(ps aux | grep 'redis-server 127.0.0.1:6389' | awk '{print $2}') || true
  - redis-server --bind 127.0.0.1 --port 6389 >/dev/null &
  - coverage run -m pytest -q
  - coverage xml
@@ -26,3 +26,7 @@ steps:
node:
  server: builder
trigger:
  event:
  - push

.gitignore
View File

@@ -147,3 +147,4 @@ cython_debug/
# End of https://www.toptal.com/developers/gitignore/api/flask
*.db
*.rdb

api.nix
View File

@@ -1,64 +0,0 @@
{ lib, python39Packages }:
with python39Packages;
buildPythonApplication {
  pname = "selfprivacy-api";
  version = "2.0.0";

  propagatedBuildInputs = [
    setuptools
    portalocker
    pytz
    pytest
    pytest-mock
    pytest-datadir
    huey
    gevent
    mnemonic
    pydantic
    typing-extensions
    psutil
    fastapi
    uvicorn
    (buildPythonPackage rec {
      pname = "strawberry-graphql";
      version = "0.123.0";
      format = "pyproject";
      patches = [
        ./strawberry-graphql.patch
      ];
      propagatedBuildInputs = [
        typing-extensions
        python-multipart
        python-dateutil
        # flask
        pydantic
        pygments
        poetry
        # flask-cors
        (buildPythonPackage rec {
          pname = "graphql-core";
          version = "3.2.0";
          format = "setuptools";
          src = fetchPypi {
            inherit pname version;
            sha256 = "sha256-huKgvgCL/eGe94OI3opyWh2UKpGQykMcJKYIN5c4A84=";
          };
          checkInputs = [
            pytest-asyncio
            pytest-benchmark
            pytestCheckHook
          ];
          pythonImportsCheck = [
            "graphql"
          ];
        })
      ];
      src = fetchPypi {
        inherit pname version;
        sha256 = "KsmZ5Xv8tUg6yBxieAEtvoKoRG60VS+iVGV0X6oCExo=";
      };
    })
  ];

  src = ./.;
}

View File

@@ -1,2 +0,0 @@
{ pkgs ? import <nixpkgs> {} }:
pkgs.callPackage ./api.nix {}

View File

@@ -0,0 +1,505 @@
from datetime import datetime, timedelta
from operator import add
from os import statvfs, path, walk
from typing import List, Optional

from selfprivacy_api.utils import ReadUserData, WriteUserData

from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.service import Service, ServiceStatus, StoppedService

from selfprivacy_api.jobs import Jobs, JobStatus, Job

from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)
from selfprivacy_api.graphql.common_types.backup import RestoreStrategy

from selfprivacy_api.models.backup.snapshot import Snapshot

from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_provider
from selfprivacy_api.backup.storage import Storage
from selfprivacy_api.backup.jobs import (
    get_backup_job,
    add_backup_job,
    get_restore_job,
    add_restore_job,
)

DEFAULT_JSON_PROVIDER = {
    "provider": "BACKBLAZE",
    "accountId": "",
    "accountKey": "",
    "bucket": "",
}


class NotDeadError(AssertionError):
    def __init__(self, service: Service):
        self.service_name = service.get_id()

    def __str__(self):
        return f"""
        Service {self.service_name} should be either stopped or dead from an error before we back up.
        Normally, this error is unreachable because we do try to ensure this.
        Apparently, not this time.
        """


class Backups:
    """A stateless controller class for backups"""

    ### Providers

    @staticmethod
    def provider():
        return Backups._lookup_provider()

    @staticmethod
    def set_provider(
        kind: BackupProviderEnum,
        login: str,
        key: str,
        location: str,
        repo_id: str = "",
    ):
        provider = Backups._construct_provider(
            kind,
            login,
            key,
            location,
            repo_id,
        )
        Storage.store_provider(provider)

    @staticmethod
    def reset(reset_json=True):
        Storage.reset()
        if reset_json:
            try:
                Backups._reset_provider_json()
            except FileNotFoundError:
                # if there is no userdata file, we do not need to reset it
                pass

    @staticmethod
    def _lookup_provider() -> AbstractBackupProvider:
        redis_provider = Backups._load_provider_redis()
        if redis_provider is not None:
            return redis_provider

        try:
            json_provider = Backups._load_provider_json()
        except FileNotFoundError:
            json_provider = None

        if json_provider is not None:
            Storage.store_provider(json_provider)
            return json_provider

        none_provider = Backups._construct_provider(
            BackupProviderEnum.NONE, login="", key="", location=""
        )
        Storage.store_provider(none_provider)
        return none_provider

    @staticmethod
    def _construct_provider(
        kind: BackupProviderEnum,
        login: str,
        key: str,
        location: str,
        repo_id: str = "",
    ) -> AbstractBackupProvider:
        provider_class = get_provider(kind)
        return provider_class(
            login=login,
            key=key,
            location=location,
            repo_id=repo_id,
        )

    @staticmethod
    def _load_provider_redis() -> Optional[AbstractBackupProvider]:
        provider_model = Storage.load_provider()
        if provider_model is None:
            return None
        return Backups._construct_provider(
            BackupProviderEnum[provider_model.kind],
            provider_model.login,
            provider_model.key,
            provider_model.location,
            provider_model.repo_id,
        )

    @staticmethod
    def _load_provider_json() -> Optional[AbstractBackupProvider]:
        with ReadUserData() as user_data:
            provider_dict = {
                "provider": "",
                "accountId": "",
                "accountKey": "",
                "bucket": "",
            }

            if "backup" not in user_data.keys():
                if "backblaze" in user_data.keys():
                    provider_dict.update(user_data["backblaze"])
                    provider_dict["provider"] = "BACKBLAZE"
                return None
            else:
                provider_dict.update(user_data["backup"])

            if provider_dict == DEFAULT_JSON_PROVIDER:
                return None
            try:
                return Backups._construct_provider(
                    kind=BackupProviderEnum[provider_dict["provider"]],
                    login=provider_dict["accountId"],
                    key=provider_dict["accountKey"],
                    location=provider_dict["bucket"],
                )
            except KeyError:
                return None

    @staticmethod
    def _reset_provider_json() -> None:
        with WriteUserData() as user_data:
            if "backblaze" in user_data.keys():
                del user_data["backblaze"]
            user_data["backup"] = DEFAULT_JSON_PROVIDER

    ### Init

    @staticmethod
    def init_repo():
        Backups.provider().backupper.init()
        Storage.mark_as_init()

    @staticmethod
    def is_initted() -> bool:
        if Storage.has_init_mark():
            return True

        initted = Backups.provider().backupper.is_initted()
        if initted:
            Storage.mark_as_init()
            return True

        return False

    ### Backup

    @staticmethod
    def back_up(service: Service):
        """The top-level function to back up a service"""
        folders = service.get_folders()
        tag = service.get_id()

        job = get_backup_job(service)
        if job is None:
            job = add_backup_job(service)
        Jobs.update(job, status=JobStatus.RUNNING)

        try:
            with StoppedService(service):
                Backups.assert_dead(service)  # to be extra sure
                service.pre_backup()
                snapshot = Backups.provider().backupper.start_backup(
                    folders,
                    tag,
                )
                Backups._store_last_snapshot(tag, snapshot)
                service.post_restore()
        except Exception as e:
            Jobs.update(job, status=JobStatus.ERROR)
            raise e

        Jobs.update(job, status=JobStatus.FINISHED)
        return snapshot

    ### Restoring

    @staticmethod
    def _ensure_queued_restore_job(service, snapshot) -> Job:
        job = get_restore_job(service)
        if job is None:
            job = add_restore_job(snapshot)

        Jobs.update(job, status=JobStatus.CREATED)
        return job

    @staticmethod
    def _inplace_restore(service: Service, snapshot: Snapshot, job: Job):
        failsafe_snapshot = Backups.back_up(service)

        Jobs.update(job, status=JobStatus.RUNNING)
        try:
            Backups._restore_service_from_snapshot(service, snapshot.id, verify=False)
        except Exception as e:
            Backups._restore_service_from_snapshot(
                service, failsafe_snapshot.id, verify=False
            )
            raise e
        Backups.forget_snapshot(failsafe_snapshot)

    @staticmethod
    def restore_snapshot(
        snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
    ):
        service = get_service_by_id(snapshot.service_name)
        if service is None:
            raise ValueError(
                f"snapshot has a nonexistent service: {snapshot.service_name}"
            )
        job = Backups._ensure_queued_restore_job(service, snapshot)

        try:
            Backups._assert_restorable(snapshot)
            with StoppedService(service):
                Backups.assert_dead(service)
                if strategy == RestoreStrategy.INPLACE:
                    Backups._inplace_restore(service, snapshot, job)
                else:  # verify_before_download is our default
                    Jobs.update(job, status=JobStatus.RUNNING)
                    Backups._restore_service_from_snapshot(
                        service, snapshot.id, verify=True
                    )
                service.post_restore()
        except Exception as e:
            Jobs.update(job, status=JobStatus.ERROR)
            raise e

        Jobs.update(job, status=JobStatus.FINISHED)

    @staticmethod
    def _assert_restorable(
        snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
    ):
        service = get_service_by_id(snapshot.service_name)
        if service is None:
            raise ValueError(
                f"snapshot has a nonexistent service: {snapshot.service_name}"
            )

        restored_snap_size = Backups.snapshot_restored_size(snapshot.id)

        if strategy == RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE:
            needed_space = restored_snap_size
        elif strategy == RestoreStrategy.INPLACE:
            needed_space = restored_snap_size - service.get_storage_usage()
        else:
            raise NotImplementedError(
                """
                We do not know if there is enough space for restoration
                because some novel restore strategy is used!
                This is a developer's fault; please open an issue.
                """
            )

        available_space = Backups.space_usable_for_service(service)
        if needed_space > available_space:
            raise ValueError(
                f"we only have {available_space} bytes "
                f"but snapshot needs {needed_space}"
            )
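    # Example of the space check above: a 5 GiB snapshot restored INPLACE over
    # a service currently occupying 4 GiB only needs about 1 GiB of free space,
    # while the default DOWNLOAD_VERIFY_OVERWRITE strategy downloads a full
    # copy first and therefore needs the whole 5 GiB.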
    @staticmethod
    def _restore_service_from_snapshot(service: Service, snapshot_id: str, verify=True):
        folders = service.get_folders()

        Backups.provider().backupper.restore_from_backup(
            snapshot_id,
            folders,
        )

    ### Snapshots

    @staticmethod
    def get_snapshots(service: Service) -> List[Snapshot]:
        snapshots = Backups.get_all_snapshots()
        service_id = service.get_id()
        return list(
            filter(
                lambda snap: snap.service_name == service_id,
                snapshots,
            )
        )

    @staticmethod
    def get_all_snapshots() -> List[Snapshot]:
        cached_snapshots = Storage.get_cached_snapshots()
        if cached_snapshots != []:
            return cached_snapshots
        # TODO: the oldest snapshots will get expired faster than the new ones.
        # How to detect that the end is missing?
        Backups.force_snapshot_cache_reload()
        return Storage.get_cached_snapshots()

    @staticmethod
    def get_snapshot_by_id(id: str) -> Optional[Snapshot]:
        snap = Storage.get_cached_snapshot_by_id(id)
        if snap is not None:
            return snap

        # Possibly our cache entry got invalidated, let's try one more time
        Backups.force_snapshot_cache_reload()
        snap = Storage.get_cached_snapshot_by_id(id)
        return snap

    @staticmethod
    def forget_snapshot(snapshot: Snapshot):
        Backups.provider().backupper.forget_snapshot(snapshot.id)
        Storage.delete_cached_snapshot(snapshot)

    @staticmethod
    def force_snapshot_cache_reload():
        upstream_snapshots = Backups.provider().backupper.get_snapshots()
        Storage.invalidate_snapshot_storage()
        for snapshot in upstream_snapshots:
            Storage.cache_snapshot(snapshot)

    @staticmethod
    def snapshot_restored_size(snapshot_id: str) -> int:
        return Backups.provider().backupper.restored_size(
            snapshot_id,
        )

    @staticmethod
    def _store_last_snapshot(service_id: str, snapshot: Snapshot):
        """What do we do with a snapshot that is just made?"""
        # non-expiring timestamp of the last backup
        Storage.store_last_timestamp(service_id, snapshot)
        # expiring cache entry
        Storage.cache_snapshot(snapshot)

    ### Autobackup

    @staticmethod
    def is_autobackup_enabled(service: Service) -> bool:
        return Storage.is_autobackup_set(service.get_id())

    @staticmethod
    def enable_autobackup(service: Service):
        Storage.set_autobackup(service)

    @staticmethod
    def disable_autobackup(service: Service):
        """also see disable_all_autobackup()"""
        Storage.unset_autobackup(service)

    @staticmethod
    def disable_all_autobackup():
        """
        Disables all automatic backing up,
        but does not change per-service settings
        """
        Storage.delete_backup_period()

    @staticmethod
    def autobackup_period_minutes() -> Optional[int]:
        """None means autobackup is disabled"""
        return Storage.autobackup_period_minutes()

    @staticmethod
    def set_autobackup_period_minutes(minutes: int):
        """
        0 and negative numbers are equivalent to disable.
        Setting to a positive number may result in a backup very soon
        if some services are not backed up.
        """
        if minutes <= 0:
            Backups.disable_all_autobackup()
            return
        Storage.store_autobackup_period_minutes(minutes)

    @staticmethod
    def is_time_to_backup(time: datetime) -> bool:
        """
        Intended as a time validator for huey cron scheduler
        of automatic backups
        """
        return Backups._service_ids_to_back_up(time) != []

    @staticmethod
    def services_to_back_up(time: datetime) -> List[Service]:
        result = []
        for id in Backups._service_ids_to_back_up(time):
            service = get_service_by_id(id)
            if service is None:
                raise ValueError(
                    "Cannot look up a service scheduled for backup!",
                )
            result.append(service)
        return result

    @staticmethod
    def get_last_backed_up(service: Service) -> Optional[datetime]:
        """Get a timezone-aware time of the last backup of a service"""
        return Storage.get_last_backup_time(service.get_id())

    @staticmethod
    def is_time_to_backup_service(service_id: str, time: datetime):
        period = Backups.autobackup_period_minutes()
        if period is None:
            return False
        if not Storage.is_autobackup_set(service_id):
            return False

        last_backup = Storage.get_last_backup_time(service_id)
        if last_backup is None:
            # queue a backup immediately if there are no previous backups
            return True

        if time > last_backup + timedelta(minutes=period):
            return True
        return False
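    # Example of the timing rule above: with an autobackup period of 60 minutes
    # and a last backup at 13:00 UTC, is_time_to_backup_service() returns False
    # up to and including 14:00 and True afterwards (assuming autobackup is
    # enabled for that service).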
    @staticmethod
    def _service_ids_to_back_up(time: datetime) -> List[str]:
        services = Storage.services_with_autobackup()
        return [
            id
            for id in services
            if Backups.is_time_to_backup_service(
                id,
                time,
            )
        ]

    ### Helpers

    @staticmethod
    def space_usable_for_service(service: Service) -> int:
        folders = service.get_folders()
        if folders == []:
            raise ValueError("unallocated service", service.get_id())

        # We assume all folders of one service live at the same volume
        fs_info = statvfs(folders[0])
        usable_bytes = fs_info.f_frsize * fs_info.f_bavail
        return usable_bytes
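    # Example of the statvfs arithmetic: f_frsize == 4096 bytes and
    # f_bavail == 2_621_440 blocks available to unprivileged processes
    # gives 4096 * 2_621_440 bytes, i.e. exactly 10 GiB usable.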
    @staticmethod
    def set_localfile_repo(file_path: str):
        ProviderClass = get_provider(BackupProviderEnum.FILE)
        provider = ProviderClass(
            login="",
            key="",
            location=file_path,
            repo_id="",
        )
        Storage.store_provider(provider)

    @staticmethod
    def assert_dead(service: Service):
        # if we back up a service that is failing, in order to restore it to a
        # previous snapshot, its status can be FAILED.
        # And obviously restoring a failed service is the main route.
        if service.get_status() not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]:
            raise NotDeadError(service)
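The StoppedService context manager used by back_up() and restore_snapshot() above comes from selfprivacy_api.services.service and is not part of this diff. A minimal sketch of the idea, assuming the Service interface has stop(), start() and get_status(), that ServiceStatus has an ACTIVE member, and that the "hopefully reusable waitloop" from the commit list exposes a helper named wait_until_true (an assumed name):

from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.utils.waitloop import wait_until_true  # assumed helper name

class StoppedService:
    """Stop a service on entry, bring it back up on exit, even after an error."""

    def __init__(self, service: Service):
        self.service = service

    def __enter__(self) -> Service:
        self.service.stop()
        wait_until_true(
            lambda: self.service.get_status() == ServiceStatus.INACTIVE,
            timeout_sec=600,  # "set default timeout of server operations to 10 min"
        )
        return self.service

    def __exit__(self, exc_type, exc_value, traceback):
        self.service.start()
        wait_until_true(
            lambda: self.service.get_status() == ServiceStatus.ACTIVE,
            timeout_sec=600,
        )
        return False  # never swallow exceptions from the body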

View File

@@ -0,0 +1,43 @@
from abc import ABC, abstractmethod
from typing import List

from selfprivacy_api.models.backup.snapshot import Snapshot


class AbstractBackupper(ABC):
    def __init__(self):
        pass

    @abstractmethod
    def is_initted(self) -> bool:
        raise NotImplementedError

    @abstractmethod
    def set_creds(self, account: str, key: str, repo: str):
        raise NotImplementedError

    @abstractmethod
    def start_backup(self, folders: List[str], repo_name: str):
        raise NotImplementedError

    @abstractmethod
    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        raise NotImplementedError

    @abstractmethod
    def init(self):
        raise NotImplementedError

    @abstractmethod
    def restore_from_backup(self, snapshot_id: str, folders: List[str], verify=True):
        """Restore a target folder using a snapshot"""
        raise NotImplementedError

    @abstractmethod
    def restored_size(self, snapshot_id: str) -> int:
        raise NotImplementedError

    @abstractmethod
    def forget_snapshot(self, snapshot_id):
        raise NotImplementedError

View File

@@ -0,0 +1,32 @@
from typing import List

from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.backuppers import AbstractBackupper


class NoneBackupper(AbstractBackupper):
    def is_initted(self, repo_name: str = "") -> bool:
        return False

    def set_creds(self, account: str, key: str, repo: str):
        pass

    def start_backup(self, folders: List[str], repo_name: str):
        raise NotImplementedError

    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        return []

    def init(self):
        raise NotImplementedError

    def restore_from_backup(self, snapshot_id: str, folders: List[str]):
        """Restore a target folder using a snapshot"""
        raise NotImplementedError

    def restored_size(self, snapshot_id: str) -> int:
        raise NotImplementedError

    def forget_snapshot(self, snapshot_id):
        raise NotImplementedError

View File

@@ -0,0 +1,369 @@
import subprocess
import json
import datetime
import tempfile

from typing import List
from collections.abc import Iterable
from json.decoder import JSONDecodeError
from os.path import exists, join
from os import listdir
from time import sleep

from selfprivacy_api.backup.util import output_yielder, sync
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.jobs import get_backup_job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.jobs import Jobs, JobStatus

from selfprivacy_api.backup.local_secret import LocalBackupSecret


class ResticBackupper(AbstractBackupper):
    def __init__(self, login_flag: str, key_flag: str, type: str):
        self.login_flag = login_flag
        self.key_flag = key_flag
        self.type = type
        self.account = ""
        self.key = ""
        self.repo = ""

    def set_creds(self, account: str, key: str, repo: str):
        self.account = account
        self.key = key
        self.repo = repo

    def restic_repo(self) -> str:
        # https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
        # https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
        return f"rclone:{self.type}{self.repo}"

    def rclone_args(self):
        return "rclone.args=serve restic --stdio " + self.backend_rclone_args()

    def backend_rclone_args(self) -> str:
        acc_arg = ""
        key_arg = ""
        if self.account != "":
            acc_arg = f"{self.login_flag} {self.account}"
        if self.key != "":
            key_arg = f"{self.key_flag} {self.key}"

        return f"{acc_arg} {key_arg}"

    def _password_command(self):
        return f"echo {LocalBackupSecret.get()}"

    def restic_command(self, *args, tag: str = "") -> List[str]:
        command = [
            "restic",
            "-o",
            self.rclone_args(),
            "-r",
            self.restic_repo(),
            "--password-command",
            self._password_command(),
        ]
        if tag != "":
            command.extend(
                [
                    "--tag",
                    tag,
                ]
            )
        if args:
            command.extend(ResticBackupper.__flatten_list(args))
        return command

    def mount_repo(self, dir):
        mount_command = self.restic_command("mount", dir)
        mount_command.insert(0, "nohup")
        handle = subprocess.Popen(mount_command, stdout=subprocess.DEVNULL, shell=False)
        sleep(2)
        if "ids" not in listdir(dir):
            raise IOError("failed to mount dir ", dir)
        return handle

    def unmount_repo(self, dir):
        mount_command = ["umount", "-l", dir]
        with subprocess.Popen(
            mount_command, stdout=subprocess.PIPE, shell=False
        ) as handle:
            output = handle.communicate()[0].decode("utf-8")
            # TODO: check for exit code?
            if "error" in output.lower():
                raise IOError("failed to unmount dir ", dir, ": ", output)
            if listdir(dir) != []:
                raise IOError("failed to unmount dir ", dir)

    @staticmethod
    def __flatten_list(list):
        """string-aware list flattener"""
        result = []
        for item in list:
            if isinstance(item, Iterable) and not isinstance(item, str):
                result.extend(ResticBackupper.__flatten_list(item))
                continue
            result.append(item)
        return result

    def start_backup(self, folders: List[str], tag: str):
        """
        Start backup with restic
        """

        # but maybe it is ok to accept a union of a string and an array of strings
        assert not isinstance(folders, str)

        backup_command = self.restic_command(
            "backup",
            "--json",
            folders,
            tag=tag,
        )

        messages = []
        job = get_backup_job(get_service_by_id(tag))
        try:
            for raw_message in output_yielder(backup_command):
                message = self.parse_message(raw_message, job)
                messages.append(message)
            return ResticBackupper._snapshot_from_backup_messages(messages, tag)
        except ValueError as e:
            raise ValueError("could not create a snapshot: ", messages) from e

    @staticmethod
    def _snapshot_from_backup_messages(messages, repo_name) -> Snapshot:
        for message in messages:
            if message["message_type"] == "summary":
                return ResticBackupper._snapshot_from_fresh_summary(message, repo_name)
        raise ValueError("no summary message in restic json output")

    def parse_message(self, raw_message_line: str, job=None) -> dict:
        message = ResticBackupper.parse_json_output(raw_message_line)
        if not isinstance(message, dict):
            raise ValueError("we have too many messages on one line?")
        if message["message_type"] == "status":
            if job is not None:  # only update status if we run under some job
                Jobs.update(
                    job,
                    JobStatus.RUNNING,
                    progress=int(message["percent_done"] * 100),
                )
        return message

    @staticmethod
    def _snapshot_from_fresh_summary(message: dict, repo_name) -> Snapshot:
        return Snapshot(
            id=message["snapshot_id"],
            created_at=datetime.datetime.now(datetime.timezone.utc),
            service_name=repo_name,
        )

    def init(self):
        init_command = self.restic_command(
            "init",
        )
        with subprocess.Popen(
            init_command,
            shell=False,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        ) as process_handle:
            output = process_handle.communicate()[0].decode("utf-8")
            if "created restic repository" not in output:
                raise ValueError("cannot init a repo: " + output)

    def is_initted(self) -> bool:
        command = self.restic_command(
            "check",
            "--json",
        )
        with subprocess.Popen(command, stdout=subprocess.PIPE, shell=False) as handle:
            output = handle.communicate()[0].decode("utf-8")
            if not ResticBackupper.has_json(output):
                return False
            # raise NotImplementedError("error(big): " + output)
            return True

    def restored_size(self, snapshot_id: str) -> int:
        """
        Size of a snapshot
        """
        command = self.restic_command(
            "stats",
            snapshot_id,
            "--json",
        )

        with subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            shell=False,
        ) as handle:
            output = handle.communicate()[0].decode("utf-8")
            try:
                parsed_output = ResticBackupper.parse_json_output(output)
                return parsed_output["total_size"]
            except ValueError as e:
                raise ValueError("cannot restore a snapshot: " + output) from e

    def restore_from_backup(self, snapshot_id, folders: List[str], verify=True):
        """
        Restore from backup with restic
        """
        if folders is None or folders == []:
            raise ValueError("cannot restore without knowing where to!")

        with tempfile.TemporaryDirectory() as dir:
            if verify:
                self.do_restore(snapshot_id, target=dir, verify=verify)
                snapshot_root = dir
            else:  # attempting inplace restore via mount + sync
                self.mount_repo(dir)
                snapshot_root = join(dir, "ids", snapshot_id)

            assert snapshot_root is not None
            for folder in folders:
                src = join(snapshot_root, folder.strip("/"))
                if not exists(src):
                    raise ValueError(
                        f"there is no such path: {src}. We tried to find {folder}"
                    )
                dst = folder
                sync(src, dst)

    def do_restore(self, snapshot_id, target="/", verify=False):
        """barebones restic restore"""
        restore_command = self.restic_command(
            "restore",
            snapshot_id,
            "--target",
            target,
        )
        if verify:
            restore_command.append("--verify")

        with subprocess.Popen(
            restore_command, stdout=subprocess.PIPE, shell=False
        ) as handle:
            # for some reason restore does not support nice reporting of progress via json
            output = handle.communicate()[0].decode("utf-8")
            if "restoring" not in output:
                raise ValueError("cannot restore a snapshot: " + output)

            assert (
                handle.returncode is not None
            )  # none should be impossible after communicate
            if handle.returncode != 0:
                raise ValueError(
                    "restore exited with errorcode", handle.returncode, ":", output
                )

    def forget_snapshot(self, snapshot_id):
        """either removes snapshot or marks it for deletion later depending on server settings"""
        forget_command = self.restic_command(
            "forget",
            snapshot_id,
        )

        with subprocess.Popen(
            forget_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False
        ) as handle:
            # restic forget does not support nice reporting of progress via json either
            output, err = [string.decode("utf-8") for string in handle.communicate()]
            if "no matching ID found" in err:
                raise ValueError(
                    "trying to delete, but no such snapshot: ", snapshot_id
                )
            assert (
                handle.returncode is not None
            )  # none should be impossible after communicate
            if handle.returncode != 0:
                raise ValueError(
                    "forget exited with errorcode", handle.returncode, ":", output
                )

    def _load_snapshots(self) -> object:
        """
        Load list of snapshots from repository
        raises ValueError if repo does not exist
        """
        listing_command = self.restic_command(
            "snapshots",
            "--json",
        )

        with subprocess.Popen(
            listing_command,
            shell=False,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        ) as backup_listing_process_descriptor:
            output = backup_listing_process_descriptor.communicate()[0].decode("utf-8")

        if "Is there a repository at the following location?" in output:
            raise ValueError("No repository! : " + output)
        try:
            return ResticBackupper.parse_json_output(output)
        except ValueError as e:
            raise ValueError("Cannot load snapshots: ") from e

    def get_snapshots(self) -> List[Snapshot]:
        """Get all snapshots from the repo"""
        snapshots = []
        for restic_snapshot in self._load_snapshots():
            snapshot = Snapshot(
                id=restic_snapshot["short_id"],
                created_at=restic_snapshot["time"],
                service_name=restic_snapshot["tags"][0],
            )
            snapshots.append(snapshot)
        return snapshots

    @staticmethod
    def parse_json_output(output: str) -> object:
        starting_index = ResticBackupper.json_start(output)
        if starting_index == -1:
            raise ValueError("There is no json in the restic output : " + output)

        truncated_output = output[starting_index:]
        json_messages = truncated_output.splitlines()
        if len(json_messages) == 1:
            try:
                return json.loads(truncated_output)
            except JSONDecodeError as e:
                raise ValueError(
                    "There is no json in the restic output : " + output
                ) from e

        result_array = []
        for message in json_messages:
            result_array.append(json.loads(message))
        return result_array
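    # Example: `restic backup --json` streams one JSON object per line (many
    # "status" messages followed by a single "summary"), so parse_json_output
    # returns a list for it, while `restic stats --json` prints a single
    # object and a dict is returned.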
    @staticmethod
    def json_start(output: str) -> int:
        indices = [
            output.find("["),
            output.find("{"),
        ]
        indices = [x for x in indices if x != -1]

        if indices == []:
            return -1
        return min(indices)

    @staticmethod
    def has_json(output: str) -> bool:
        if ResticBackupper.json_start(output) == -1:
            return False
        return True
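To make the command construction above concrete: with the Backblaze flags registered later in this diff (ResticBackupper("--b2-account", "--b2-key", ":b2:")) and placeholder credentials, the helpers assemble roughly the following argv, where the secret comes from LocalBackupSecret:

backupper = ResticBackupper("--b2-account", "--b2-key", ":b2:")
backupper.set_creds("ACCOUNT_ID", "ACCOUNT_KEY", "bucket-name")

backupper.restic_repo()
# 'rclone::b2:bucket-name'

backupper.restic_command("snapshots", "--json")
# ['restic',
#  '-o', 'rclone.args=serve restic --stdio --b2-account ACCOUNT_ID --b2-key ACCOUNT_KEY',
#  '-r', 'rclone::b2:bucket-name',
#  '--password-command', 'echo <local secret>',
#  'snapshots', '--json']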

View File

@@ -0,0 +1,88 @@
from typing import Optional, List

from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.jobs import Jobs, Job, JobStatus
from selfprivacy_api.services.service import Service
from selfprivacy_api.services import get_service_by_id


def job_type_prefix(service: Service) -> str:
    return f"services.{service.get_id()}"


def backup_job_type(service: Service) -> str:
    return f"{job_type_prefix(service)}.backup"


def restore_job_type(service: Service) -> str:
    return f"{job_type_prefix(service)}.restore"


def get_jobs_by_service(service: Service) -> List[Job]:
    result = []
    for job in Jobs.get_jobs():
        if job.type_id.startswith(job_type_prefix(service)) and job.status in [
            JobStatus.CREATED,
            JobStatus.RUNNING,
        ]:
            result.append(job)
    return result


def is_something_running_for(service: Service) -> bool:
    running_jobs = [
        job for job in get_jobs_by_service(service) if job.status == JobStatus.RUNNING
    ]
    return len(running_jobs) != 0


def add_backup_job(service: Service) -> Job:
    if is_something_running_for(service):
        message = (
            f"Cannot start a backup of {service.get_id()}, another operation is running: "
            + get_jobs_by_service(service)[0].type_id
        )
        raise ValueError(message)
    display_name = service.get_display_name()
    job = Jobs.add(
        type_id=backup_job_type(service),
        name=f"Backup {display_name}",
        description=f"Backing up {display_name}",
    )
    return job


def add_restore_job(snapshot: Snapshot) -> Job:
    service = get_service_by_id(snapshot.service_name)
    if service is None:
        raise ValueError(f"no such service: {snapshot.service_name}")
    if is_something_running_for(service):
        message = (
            f"Cannot start a restore of {service.get_id()}, another operation is running: "
            + get_jobs_by_service(service)[0].type_id
        )
        raise ValueError(message)
    display_name = service.get_display_name()
    job = Jobs.add(
        type_id=restore_job_type(service),
        name=f"Restore {display_name}",
        description=f"restoring {display_name} from {snapshot.id}",
    )
    return job


def get_job_by_type(type_id: str) -> Optional[Job]:
    for job in Jobs.get_jobs():
        if job.type_id == type_id and job.status in [
            JobStatus.CREATED,
            JobStatus.RUNNING,
        ]:
            return job


def get_backup_job(service: Service) -> Optional[Job]:
    return get_job_by_type(backup_job_type(service))


def get_restore_job(service: Service) -> Optional[Job]:
    return get_job_by_type(restore_job_type(service))
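These type_id strings are what the jobs GraphQL API exposes (see "feat(jobs): return type_id of the job" in the commit list). For a service whose get_id() returns "nextcloud", for example, the helpers above yield:

job_type_prefix(nextcloud_service)   # 'services.nextcloud'
backup_job_type(nextcloud_service)   # 'services.nextcloud.backup'
restore_job_type(nextcloud_service)  # 'services.nextcloud.restore'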

View File

@@ -0,0 +1,45 @@
"""Handling of local secret used for encrypted backups.
Separated out for circular dependency reasons
"""
from __future__ import annotations

import secrets

from selfprivacy_api.utils.redis_pool import RedisPool

REDIS_KEY = "backup:local_secret"

redis = RedisPool().get_connection()


class LocalBackupSecret:
    @staticmethod
    def get() -> str:
        """A secret string which backblaze/other clouds do not know.
        Serves as encryption key.
        """
        if not LocalBackupSecret.exists():
            LocalBackupSecret.reset()
        return redis.get(REDIS_KEY)  # type: ignore

    @staticmethod
    def set(secret: str):
        redis.set(REDIS_KEY, secret)

    @staticmethod
    def reset():
        new_secret = LocalBackupSecret._generate()
        LocalBackupSecret.set(new_secret)

    @staticmethod
    def _full_reset():
        redis.delete(REDIS_KEY)

    @staticmethod
    def exists() -> bool:
        return redis.exists(REDIS_KEY) == 1

    @staticmethod
    def _generate() -> str:
        return secrets.token_urlsafe(256)

View File

@@ -0,0 +1,29 @@
from typing import Type

from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)

from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers.backblaze import Backblaze
from selfprivacy_api.backup.providers.memory import InMemoryBackup
from selfprivacy_api.backup.providers.local_file import LocalFileBackup
from selfprivacy_api.backup.providers.none import NoBackups

PROVIDER_MAPPING: dict[BackupProviderEnum, Type[AbstractBackupProvider]] = {
    BackupProviderEnum.BACKBLAZE: Backblaze,
    BackupProviderEnum.MEMORY: InMemoryBackup,
    BackupProviderEnum.FILE: LocalFileBackup,
    BackupProviderEnum.NONE: NoBackups,
}


def get_provider(
    provider_type: BackupProviderEnum,
) -> Type[AbstractBackupProvider]:
    return PROVIDER_MAPPING[provider_type]


def get_kind(provider: AbstractBackupProvider) -> str:
    """Get the kind of the provider in the form of a string"""
    return provider.name.value
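Putting the mapping to use, mirroring what Backups._construct_provider() does earlier in this diff (the credentials are placeholders):

provider_class = get_provider(BackupProviderEnum.BACKBLAZE)
provider = provider_class(login="ACCOUNT_ID", key="ACCOUNT_KEY", location="bucket-name")
get_kind(provider)  # 'BACKBLAZE', assuming the enum's values match its member names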

View File

@@ -0,0 +1,11 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class Backblaze(AbstractBackupProvider):
    backupper = ResticBackupper("--b2-account", "--b2-key", ":b2:")

    name = BackupProviderEnum.BACKBLAZE

View File

@@ -0,0 +1,11 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class LocalFileBackup(AbstractBackupProvider):
    backupper = ResticBackupper("", "", ":local:")

    name = BackupProviderEnum.FILE

View File

@@ -0,0 +1,11 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class InMemoryBackup(AbstractBackupProvider):
    backupper = ResticBackupper("", "", ":memory:")

    name = BackupProviderEnum.MEMORY

View File

@@ -0,0 +1,11 @@
from .provider import AbstractBackupProvider
from selfprivacy_api.backup.backuppers.none_backupper import NoneBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class NoBackups(AbstractBackupProvider):
    backupper = NoneBackupper()

    name = BackupProviderEnum.NONE

View File

@@ -0,0 +1,25 @@
"""
An abstract class for BackBlaze, S3 etc.
It assumes that while some providers are supported via restic/rclone, others
may require different backends
"""
from abc import ABC, abstractmethod

from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.graphql.queries.providers import (
    BackupProvider as BackupProviderEnum,
)


class AbstractBackupProvider(ABC):
    backupper: AbstractBackupper

    name: BackupProviderEnum

    def __init__(self, login="", key="", location="", repo_id=""):
        self.backupper.set_creds(login, key, location)
        self.login = login
        self.key = key
        self.location = location
        # We do not need to do anything with this one
        # Just remember in case the app forgets
        self.repo_id = repo_id

View File

@ -0,0 +1,175 @@
from typing import List, Optional
from datetime import datetime
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.models.backup.provider import BackupProviderModel
from selfprivacy_api.utils.redis_pool import RedisPool
from selfprivacy_api.utils.redis_model_storage import (
store_model_as_hash,
hash_as_model,
)
from selfprivacy_api.services.service import Service
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_kind
# a hack to store file path.
REDIS_SNAPSHOT_CACHE_EXPIRE_SECONDS = 24 * 60 * 60 # one day
REDIS_AUTOBACKUP_ENABLED_PREFIX = "backup:autobackup:services:"
REDIS_SNAPSHOTS_PREFIX = "backups:snapshots:"
REDIS_LAST_BACKUP_PREFIX = "backups:last-backed-up:"
REDIS_INITTED_CACHE_PREFIX = "backups:initted_services:"
REDIS_PROVIDER_KEY = "backups:provider"
REDIS_AUTOBACKUP_PERIOD_KEY = "backups:autobackup_period"
redis = RedisPool().get_connection()
class Storage:
@staticmethod
def reset():
redis.delete(REDIS_PROVIDER_KEY)
redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
prefixes_to_clean = [
REDIS_INITTED_CACHE_PREFIX,
REDIS_SNAPSHOTS_PREFIX,
REDIS_LAST_BACKUP_PREFIX,
REDIS_AUTOBACKUP_ENABLED_PREFIX,
]
for prefix in prefixes_to_clean:
for key in redis.keys(prefix + "*"):
redis.delete(key)
@staticmethod
def invalidate_snapshot_storage():
for key in redis.keys(REDIS_SNAPSHOTS_PREFIX + "*"):
redis.delete(key)
@staticmethod
def services_with_autobackup() -> List[str]:
keys = redis.keys(REDIS_AUTOBACKUP_ENABLED_PREFIX + "*")
service_ids = [key.split(":")[-1] for key in keys]
return service_ids
@staticmethod
def __last_backup_key(service_id):
return REDIS_LAST_BACKUP_PREFIX + service_id
@staticmethod
def __snapshot_key(snapshot: Snapshot):
return REDIS_SNAPSHOTS_PREFIX + snapshot.id
@staticmethod
def get_last_backup_time(service_id: str) -> Optional[datetime]:
key = Storage.__last_backup_key(service_id)
if not redis.exists(key):
return None
snapshot = hash_as_model(redis, key, Snapshot)
return snapshot.created_at
@staticmethod
def store_last_timestamp(service_id: str, snapshot: Snapshot):
store_model_as_hash(redis, Storage.__last_backup_key(service_id), snapshot)
@staticmethod
def cache_snapshot(snapshot: Snapshot):
snapshot_key = Storage.__snapshot_key(snapshot)
store_model_as_hash(redis, snapshot_key, snapshot)
redis.expire(snapshot_key, REDIS_SNAPSHOT_CACHE_EXPIRE_SECONDS)
@staticmethod
def delete_cached_snapshot(snapshot: Snapshot):
snapshot_key = Storage.__snapshot_key(snapshot)
redis.delete(snapshot_key)
@staticmethod
def get_cached_snapshot_by_id(snapshot_id: str) -> Optional[Snapshot]:
key = REDIS_SNAPSHOTS_PREFIX + snapshot_id
if not redis.exists(key):
return None
return hash_as_model(redis, key, Snapshot)
@staticmethod
def get_cached_snapshots() -> List[Snapshot]:
keys = redis.keys(REDIS_SNAPSHOTS_PREFIX + "*")
result = []
for key in keys:
snapshot = hash_as_model(redis, key, Snapshot)
result.append(snapshot)
return result
@staticmethod
def __autobackup_key(service_name: str) -> str:
return REDIS_AUTOBACKUP_ENABLED_PREFIX + service_name
@staticmethod
def set_autobackup(service: Service):
# the value is unused: key presence alone marks autobackup as enabled
redis.set(Storage.__autobackup_key(service.get_id()), 1)
@staticmethod
def unset_autobackup(service: Service):
"""also see disable_all_autobackup()"""
redis.delete(Storage.__autobackup_key(service.get_id()))
@staticmethod
def is_autobackup_set(service_name: str) -> bool:
return redis.exists(Storage.__autobackup_key(service_name))
@staticmethod
def autobackup_period_minutes() -> Optional[int]:
"""None means autobackup is disabled"""
if not redis.exists(REDIS_AUTOBACKUP_PERIOD_KEY):
return None
return int(redis.get(REDIS_AUTOBACKUP_PERIOD_KEY))
@staticmethod
def store_autobackup_period_minutes(minutes: int):
redis.set(REDIS_AUTOBACKUP_PERIOD_KEY, minutes)
@staticmethod
def delete_backup_period():
redis.delete(REDIS_AUTOBACKUP_PERIOD_KEY)
@staticmethod
def store_provider(provider: AbstractBackupProvider):
store_model_as_hash(
redis,
REDIS_PROVIDER_KEY,
BackupProviderModel(
kind=get_kind(provider),
login=provider.login,
key=provider.key,
location=provider.location,
repo_id=provider.repo_id,
),
)
@staticmethod
def load_provider() -> Optional[BackupProviderModel]:
provider_model = hash_as_model(
redis,
REDIS_PROVIDER_KEY,
BackupProviderModel,
)
return provider_model
@staticmethod
def has_init_mark() -> bool:
if redis.exists(REDIS_INITTED_CACHE_PREFIX):
return True
return False
@staticmethod
def mark_as_init():
redis.set(REDIS_INITTED_CACHE_PREFIX, 1)

View File
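A usage sketch of the snapshot cache above, not taken from the diff; it assumes a reachable Redis and uses a made-up snapshot id.

```
from datetime import datetime
from selfprivacy_api.models.backup.snapshot import Snapshot

snap = Snapshot(id="deadbeef", service_name="testservice", created_at=datetime.now())
Storage.cache_snapshot(snap)           # stored as a hash with a one-day TTL
assert Storage.get_cached_snapshot_by_id("deadbeef") == snap
Storage.delete_cached_snapshot(snap)   # drop it again
```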

@ -0,0 +1,45 @@
from datetime import datetime
from selfprivacy_api.graphql.common_types.backup import RestoreStrategy
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.service import Service
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
def validate_datetime(dt: datetime):
# dt = datetime.now(timezone.utc)
if dt.tzinfo is None:  # dt.timetz is a method and never None; tzinfo detects naive datetimes
raise ValueError(
"""
huey passed in a timezone-unaware time!
Post it in the support chat, or try uncommenting the line above
"""
)
return Backups.is_time_to_backup(dt)
# huey tasks need to return something
@huey.task()
def start_backup(service: Service) -> bool:
Backups.back_up(service)
return True
@huey.task()
def restore_snapshot(
snapshot: Snapshot,
strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
) -> bool:
Backups.restore_snapshot(snapshot, strategy)
return True
@huey.periodic_task(validate_datetime=validate_datetime)
def automatic_backup():
time = datetime.now()
for service in Backups.services_to_back_up(time):
start_backup(service)

View File
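To make the validate_datetime contract concrete, a small sketch under no assumptions beyond the code above: huey hands the current time to the validator, and a naive datetime is treated as a bug.

```
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)
validate_datetime(aware)    # defers to Backups.is_time_to_backup(aware)

naive = datetime.now()
# validate_datetime(naive)  # would raise ValueError: timezone-unaware time
```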

@ -0,0 +1,27 @@
import subprocess
from os.path import exists
def output_yielder(command):
with subprocess.Popen(
command,
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
) as handle:
for line in iter(handle.stdout.readline, ""):
if "NOTICE:" not in line:
yield line
def sync(src_path: str, dest_path: str):
"""a wrapper around rclone sync"""
if not exists(src_path):
raise ValueError("source dir for rclone sync must exist")
rclone_command = ["rclone", "sync", "-P", src_path, dest_path]
for raw_message in output_yielder(rclone_command):
if "ERROR" in raw_message:
raise ValueError(raw_message)

View File
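A minimal usage sketch of the wrapper; the paths are hypothetical. rclone errors surface as ValueError, and NOTICE lines are filtered out of the stream.

```
try:
    # both paths are made up for illustration
    sync("/var/lib/bitwarden", "/volumes/sdb/bitwarden")
except ValueError as error:
    print("rclone reported a problem:", error)
```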

@ -27,4 +27,4 @@ async def get_token_header(
def get_api_version() -> str:
"""Get API version"""
return "2.1.2"
return "2.1.3"

View File

@ -0,0 +1,10 @@
"""Backup"""
# pylint: disable=too-few-public-methods
import strawberry
from enum import Enum
@strawberry.enum
class RestoreStrategy(Enum):
INPLACE = "INPLACE"
DOWNLOAD_VERIFY_OVERWRITE = "DOWNLOAD_VERIFY_OVERWRITE"

View File

@ -12,6 +12,7 @@ class ApiJob:
"""Job type for GraphQL."""
uid: str
type_id: str
name: str
description: str
status: str
@ -28,6 +29,7 @@ def job_to_api_job(job: Job) -> ApiJob:
"""Convert a Job from jobs controller to a GraphQL ApiJob."""
return ApiJob(
uid=str(job.uid),
type_id=job.type_id,
name=job.name,
description=job.description,
status=job.status.name,

View File

@ -1,6 +1,7 @@
from enum import Enum
import typing
import strawberry
import datetime
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.services import get_service_by_id, get_services_by_location
@ -15,7 +16,7 @@ def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
service=service_to_graphql_service(service),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_location()),
volume=get_volume_by_id(service.get_drive()),
)
for service in get_services_by_location(root.name)
]
@ -79,7 +80,7 @@ def get_storage_usage(root: "Service") -> ServiceStorageUsage:
service=service_to_graphql_service(service),
title=service.get_display_name(),
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_location()),
volume=get_volume_by_id(service.get_drive()),
)
@ -92,6 +93,8 @@ class Service:
is_movable: bool
is_required: bool
is_enabled: bool
can_be_backed_up: bool
backup_description: str
status: ServiceStatusEnum
url: typing.Optional[str]
dns_records: typing.Optional[typing.List[DnsRecord]]
@ -101,6 +104,17 @@ class Service:
"""Get storage usage for a service"""
return get_storage_usage(self)
@strawberry.field
def backup_snapshots(self) -> typing.Optional[typing.List["SnapshotInfo"]]:
return None
@strawberry.type
class SnapshotInfo:
id: str
service: Service
created_at: datetime.datetime
def service_to_graphql_service(service: ServiceInterface) -> Service:
"""Convert service to graphql service"""
@ -112,6 +126,8 @@ def service_to_graphql_service(service: ServiceInterface) -> Service:
is_movable=service.is_movable(),
is_required=service.is_required(),
is_enabled=service.is_enabled(),
can_be_backed_up=service.can_be_backed_up(),
backup_description=service.get_backup_description(),
status=ServiceStatusEnum(service.get_status().value),
url=service.get_url(),
dns_records=[

View File

@ -0,0 +1,170 @@
import datetime
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
GenericJobMutationReturn,
MutationReturnInterface,
)
from selfprivacy_api.graphql.queries.backup import BackupConfiguration
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.graphql.common_types.backup import RestoreStrategy
from selfprivacy_api.backup import Backups
from selfprivacy_api.services import get_all_services, get_service_by_id
from selfprivacy_api.backup.tasks import start_backup, restore_snapshot
from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
@strawberry.input
class InitializeRepositoryInput:
"""Initialize repository input"""
provider: BackupProvider
# The following fields may become optional for other providers
# Backblaze takes bucket id and name
location_id: str
location_name: str
# Key ID and key for Backblaze
login: str
password: str
@strawberry.type
class GenericBackupConfigReturn(MutationReturnInterface):
"""Generic backup config return"""
configuration: typing.Optional[BackupConfiguration]
@strawberry.type
class BackupMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def initialize_repository(
self, repository: InitializeRepositoryInput
) -> GenericBackupConfigReturn:
"""Initialize a new repository"""
Backups.set_provider(
kind=repository.provider,
login=repository.login,
key=repository.password,
location=repository.location_name,
repo_id=repository.location_id,
)
Backups.init_repo()
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_repository(self) -> GenericBackupConfigReturn:
"""Remove repository"""
Backups.reset()
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def set_autobackup_period(
self, period: typing.Optional[int] = None
) -> GenericBackupConfigReturn:
"""Set autobackup period. None is to disable autobackup"""
if period is not None:
Backups.set_autobackup_period_minutes(period)
else:
Backups.set_autobackup_period_minutes(0)
return GenericBackupConfigReturn(
success=True,
message="",
code=200,
configuration=Backup().configuration(),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def start_backup(self, service_id: str) -> GenericJobMutationReturn:
"""Start backup"""
service = get_service_by_id(service_id)
if service is None:
return GenericJobMutationReturn(
success=False,
code=404,
message=f"nonexistent service: {service_id}",
job=None,
)
job = add_backup_job(service)
start_backup(service)
return GenericJobMutationReturn(
success=True,
code=200,
message="Backup job queued",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restore_backup(
self,
snapshot_id: str,
strategy: RestoreStrategy = RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE,
) -> GenericJobMutationReturn:
"""Restore backup"""
snap = Backups.get_snapshot_by_id(snapshot_id)
if snap is None:
return GenericJobMutationReturn(
success=False,
code=404,
message=f"No such snapshot: {snapshot_id}",
job=None,
)
service = get_service_by_id(snap.service_name)
if service is None:
return GenericJobMutationReturn(
success=False,
code=404,
message=f"nonexistent service: {snap.service_name}",
job=None,
)
try:
job = add_restore_job(snap)
except ValueError as e:
return GenericJobMutationReturn(
success=False,
code=400,
message=str(e),
job=None,
)
restore_snapshot(snap, strategy)
return GenericJobMutationReturn(
success=True,
code=200,
message="Restore job created",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def force_snapshots_reload(self) -> GenericMutationReturn:
"""Force snapshots reload"""
Backups.force_snapshot_cache_reload()
return GenericMutationReturn(
success=True,
code=200,
message="",
)

View File
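For orientation, a hedged example of how a client reaches start_backup once these mutations are mounted under the backup field of the root Mutation (added later in this comparison); strawberry's default camelCasing of field names is assumed.

```
# hypothetical GraphQL document exercising BackupMutations.start_backup
START_BACKUP = """
mutation {
    backup {
        startBackup(serviceId: "testservice") {
            success
            code
            message
            job { uid typeId name }
        }
    }
}
"""
```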

@ -0,0 +1,215 @@
"""Deprecated mutations
A mistake was made: mutations were not grouped, and were instead
placed at the root of the mutations schema. In this file, we import all the
grouped mutations and provide them to the root for backwards compatibility.
"""
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.user import UserMutationReturn
from selfprivacy_api.graphql.mutations.api_mutations import (
ApiKeyMutationReturn,
ApiMutations,
DeviceApiTokenMutationReturn,
)
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericMutationReturn,
)
from selfprivacy_api.graphql.mutations.services_mutations import (
ServiceMutationReturn,
ServicesMutations,
)
from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
from selfprivacy_api.graphql.mutations.system_mutations import (
AutoUpgradeSettingsMutationReturn,
SystemMutations,
TimezoneMutationReturn,
)
from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
def deprecated_mutation(func, group, auth=True):
return strawberry.mutation(
resolver=func,
permission_classes=[IsAuthenticated] if auth else [],
deprecation_reason=f"Use `{group}.{func.__name__}` instead",
)
@strawberry.type
class DeprecatedApiMutations:
get_new_recovery_api_key: ApiKeyMutationReturn = deprecated_mutation(
ApiMutations.get_new_recovery_api_key,
"api",
)
use_recovery_api_key: DeviceApiTokenMutationReturn = deprecated_mutation(
ApiMutations.use_recovery_api_key,
"api",
auth=False,
)
refresh_device_api_token: DeviceApiTokenMutationReturn = deprecated_mutation(
ApiMutations.refresh_device_api_token,
"api",
)
delete_device_api_token: GenericMutationReturn = deprecated_mutation(
ApiMutations.delete_device_api_token,
"api",
)
get_new_device_api_key: ApiKeyMutationReturn = deprecated_mutation(
ApiMutations.get_new_device_api_key,
"api",
)
invalidate_new_device_api_key: GenericMutationReturn = deprecated_mutation(
ApiMutations.invalidate_new_device_api_key,
"api",
)
authorize_with_new_device_api_key: DeviceApiTokenMutationReturn = (
deprecated_mutation(
ApiMutations.authorize_with_new_device_api_key,
"api",
auth=False,
)
)
@strawberry.type
class DeprecatedSystemMutations:
change_timezone: TimezoneMutationReturn = deprecated_mutation(
SystemMutations.change_timezone,
"system",
)
change_auto_upgrade_settings: AutoUpgradeSettingsMutationReturn = (
deprecated_mutation(
SystemMutations.change_auto_upgrade_settings,
"system",
)
)
run_system_rebuild: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_rebuild,
"system",
)
run_system_rollback: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_rollback,
"system",
)
run_system_upgrade: GenericMutationReturn = deprecated_mutation(
SystemMutations.run_system_upgrade,
"system",
)
reboot_system: GenericMutationReturn = deprecated_mutation(
SystemMutations.reboot_system,
"system",
)
pull_repository_changes: GenericMutationReturn = deprecated_mutation(
SystemMutations.pull_repository_changes,
"system",
)
@strawberry.type
class DeprecatedUsersMutations:
create_user: UserMutationReturn = deprecated_mutation(
UsersMutations.create_user,
"users",
)
delete_user: GenericMutationReturn = deprecated_mutation(
UsersMutations.delete_user,
"users",
)
update_user: UserMutationReturn = deprecated_mutation(
UsersMutations.update_user,
"users",
)
add_ssh_key: UserMutationReturn = deprecated_mutation(
UsersMutations.add_ssh_key,
"users",
)
remove_ssh_key: UserMutationReturn = deprecated_mutation(
UsersMutations.remove_ssh_key,
"users",
)
@strawberry.type
class DeprecatedStorageMutations:
resize_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.resize_volume,
"storage",
)
mount_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.mount_volume,
"storage",
)
unmount_volume: GenericMutationReturn = deprecated_mutation(
StorageMutations.unmount_volume,
"storage",
)
migrate_to_binds: GenericJobMutationReturn = deprecated_mutation(
StorageMutations.migrate_to_binds,
"storage",
)
@strawberry.type
class DeprecatedServicesMutations:
enable_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.enable_service,
"services",
)
disable_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.disable_service,
"services",
)
stop_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.stop_service,
"services",
)
start_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.start_service,
"services",
)
restart_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.restart_service,
"services",
)
move_service: ServiceMutationReturn = deprecated_mutation(
ServicesMutations.move_service,
"services",
)
@strawberry.type
class DeprecatedJobMutations:
remove_job: GenericMutationReturn = deprecated_mutation(
JobMutations.remove_job,
"jobs",
)

View File

@ -17,5 +17,5 @@ class GenericMutationReturn(MutationReturnInterface):
@strawberry.type
class GenericJobButationReturn(MutationReturnInterface):
class GenericJobMutationReturn(MutationReturnInterface):
job: typing.Optional[ApiJob] = None

View File

@ -10,7 +10,7 @@ from selfprivacy_api.graphql.common_types.service import (
service_to_graphql_service,
)
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobButationReturn,
GenericJobMutationReturn,
GenericMutationReturn,
)
@ -34,7 +34,7 @@ class MoveServiceInput:
@strawberry.type
class ServiceJobMutationReturn(GenericJobButationReturn):
class ServiceJobMutationReturn(GenericJobMutationReturn):
"""Service job mutation return type."""
service: typing.Optional[Service] = None

View File

@ -1,102 +0,0 @@
#!/usr/bin/env python3
"""Users management module"""
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.actions.users import UserNotFound
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
remove_ssh_key,
)
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
@strawberry.input
class SshMutationInput:
"""Input type for ssh mutation"""
username: str
ssh_key: str
@strawberry.type
class SshMutations:
"""Mutations ssh"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Add a new ssh key"""
try:
create_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyAlreadyExists:
return UserMutationReturn(
success=False,
message="Key already exists",
code=409,
)
except InvalidPublicKey:
return UserMutationReturn(
success=False,
message="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
code=400,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="New SSH key successfully written",
code=201,
user=get_user_by_username(ssh_input.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Remove ssh key from user"""
try:
remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyNotFound:
return UserMutationReturn(
success=False,
message="Key not found",
code=404,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="SSH key successfully removed",
code=200,
user=get_user_by_username(ssh_input.username),
)

View File

@ -4,7 +4,7 @@ from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobButationReturn,
GenericJobMutationReturn,
GenericMutationReturn,
)
from selfprivacy_api.jobs.migrate_to_binds import (
@ -79,10 +79,10 @@ class StorageMutations:
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobButationReturn:
def migrate_to_binds(self, input: MigrateToBindsInput) -> GenericJobMutationReturn:
"""Migrate to binds"""
if is_bind_migrated():
return GenericJobButationReturn(
return GenericJobMutationReturn(
success=False, code=409, message="Already migrated to binds"
)
job = start_bind_migration(
@ -94,7 +94,7 @@ class StorageMutations:
pleroma_block_device=input.pleroma_block_device,
)
)
return GenericJobButationReturn(
return GenericJobMutationReturn(
success=True,
code=200,
message="Migration to binds started, rebuild the system to apply changes",

View File

@ -3,10 +3,18 @@
# pylint: disable=too-few-public-methods
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.actions.users import UserNotFound
from selfprivacy_api.graphql.common_types.user import (
UserMutationReturn,
get_user_by_username,
)
from selfprivacy_api.actions.ssh import (
InvalidPublicKey,
KeyAlreadyExists,
KeyNotFound,
create_ssh_key,
remove_ssh_key,
)
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericMutationReturn,
)
@ -21,8 +29,16 @@ class UserMutationInput:
password: str
@strawberry.input
class SshMutationInput:
"""Input type for ssh mutation"""
username: str
ssh_key: str
@strawberry.type
class UserMutations:
class UsersMutations:
"""Mutations change user settings"""
@strawberry.mutation(permission_classes=[IsAuthenticated])
@ -115,3 +131,73 @@ class UserMutations:
code=200,
user=get_user_by_username(user.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def add_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Add a new ssh key"""
try:
create_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyAlreadyExists:
return UserMutationReturn(
success=False,
message="Key already exists",
code=409,
)
except InvalidPublicKey:
return UserMutationReturn(
success=False,
message="Invalid key type. Only ssh-ed25519 and ssh-rsa are supported",
code=400,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="New SSH key successfully written",
code=201,
user=get_user_by_username(ssh_input.username),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def remove_ssh_key(self, ssh_input: SshMutationInput) -> UserMutationReturn:
"""Remove ssh key from user"""
try:
remove_ssh_key(ssh_input.username, ssh_input.ssh_key)
except KeyNotFound:
return UserMutationReturn(
success=False,
message="Key not found",
code=404,
)
except UserNotFound:
return UserMutationReturn(
success=False,
message="User not found",
code=404,
)
except Exception as e:
return UserMutationReturn(
success=False,
message=str(e),
code=500,
)
return UserMutationReturn(
success=True,
message="SSH key successfully removed",
code=200,
user=get_user_by_username(ssh_input.username),
)

View File

@ -0,0 +1,76 @@
"""Backup"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.local_secret import LocalBackupSecret
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.service import (
Service,
ServiceStatusEnum,
SnapshotInfo,
service_to_graphql_service,
)
from selfprivacy_api.services import get_service_by_id
@strawberry.type
class BackupConfiguration:
provider: BackupProvider
# When server is lost, the app should have the key to decrypt backups
# on a new server
encryption_key: str
# False when repo is not initialized and not ready to be used
is_initialized: bool
# If none, autobackups are disabled
autobackup_period: typing.Optional[int]
# Bucket name for Backblaze, path for some other providers
location_name: typing.Optional[str]
location_id: typing.Optional[str]
@strawberry.type
class Backup:
@strawberry.field
def configuration(self) -> BackupConfiguration:
return BackupConfiguration(
provider=Backups.provider().name,
encryption_key=LocalBackupSecret.get(),
is_initialized=Backups.is_initted(),
autobackup_period=Backups.autobackup_period_minutes(),
location_name=Backups.provider().location,
location_id=Backups.provider().repo_id,
)
@strawberry.field
def all_snapshots(self) -> typing.List[SnapshotInfo]:
if not Backups.is_initted():
return []
result = []
snapshots = Backups.get_all_snapshots()
for snap in snapshots:
service = get_service_by_id(snap.service_name)
if service is None:
service = Service(
id=snap.service_name,
display_name=f"{snap.service_name} (Orphaned)",
description="",
svg_icon="",
is_movable=False,
is_required=False,
is_enabled=False,
status=ServiceStatusEnum.OFF,
url=None,
dns_records=None,
)
else:
service = service_to_graphql_service(service)
graphql_snap = SnapshotInfo(
id=snap.id,
service=service,
created_at=snap.created_at,
)
result.append(graphql_snap)
return result

View File

@ -19,3 +19,7 @@ class ServerProvider(Enum):
@strawberry.enum
class BackupProvider(Enum):
BACKBLAZE = "BACKBLAZE"
NONE = "NONE"
# for testing purposes; make sure these are not selectable in prod
MEMORY = "MEMORY"
FILE = "FILE"

View File

@ -6,20 +6,31 @@ from typing import AsyncGenerator
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.api_mutations import ApiMutations
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.mutations.deprecated_mutations import (
DeprecatedApiMutations,
DeprecatedJobMutations,
DeprecatedServicesMutations,
DeprecatedStorageMutations,
DeprecatedSystemMutations,
DeprecatedUsersMutations,
)
from selfprivacy_api.graphql.mutations.job_mutations import JobMutations
from selfprivacy_api.graphql.mutations.mutation_interface import GenericMutationReturn
from selfprivacy_api.graphql.mutations.services_mutations import ServicesMutations
from selfprivacy_api.graphql.mutations.ssh_mutations import SshMutations
from selfprivacy_api.graphql.mutations.storage_mutations import StorageMutations
from selfprivacy_api.graphql.mutations.system_mutations import SystemMutations
from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.queries.api_queries import Api
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.queries.jobs import Job
from selfprivacy_api.graphql.queries.services import Services
from selfprivacy_api.graphql.queries.storage import Storage
from selfprivacy_api.graphql.queries.system import System
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.mutations.users_mutations import UserMutations
from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
from selfprivacy_api.graphql.queries.users import Users
from selfprivacy_api.jobs.test import test_job
@ -28,16 +39,16 @@ from selfprivacy_api.jobs.test import test_job
class Query:
"""Root schema for queries"""
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> System:
"""System queries"""
return System()
@strawberry.field
def api(self) -> Api:
"""API access status"""
return Api()
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> System:
"""System queries"""
return System()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> Users:
"""Users queries"""
@ -58,19 +69,58 @@ class Query:
"""Services queries"""
return Services()
@strawberry.field(permission_classes=[IsAuthenticated])
def backup(self) -> Backup:
"""Backup queries"""
return Backup()
@strawberry.type
class Mutation(
ApiMutations,
SystemMutations,
UserMutations,
SshMutations,
StorageMutations,
ServicesMutations,
JobMutations,
DeprecatedApiMutations,
DeprecatedSystemMutations,
DeprecatedUsersMutations,
DeprecatedStorageMutations,
DeprecatedServicesMutations,
DeprecatedJobMutations,
):
"""Root schema for mutations"""
@strawberry.field
def api(self) -> ApiMutations:
"""API mutations"""
return ApiMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def system(self) -> SystemMutations:
"""System mutations"""
return SystemMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> UsersMutations:
"""Users mutations"""
return UsersMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def storage(self) -> StorageMutations:
"""Storage mutations"""
return StorageMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def services(self) -> ServicesMutations:
"""Services mutations"""
return ServicesMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def jobs(self) -> JobMutations:
"""Jobs mutations"""
return JobMutations()
@strawberry.field(permission_classes=[IsAuthenticated])
def backup(self) -> BackupMutations:
"""Backup mutations"""
return BackupMutations()
@strawberry.mutation(permission_classes=[IsAuthenticated])
def test_mutation(self) -> GenericMutationReturn:
"""Test mutation"""
@ -95,4 +145,8 @@ class Subscription:
await asyncio.sleep(0.5)
schema = strawberry.Schema(query=Query, mutation=Mutation, subscription=Subscription)
schema = strawberry.Schema(
query=Query,
mutation=Mutation,
subscription=Subscription,
)

View File

@ -26,8 +26,11 @@ from selfprivacy_api.utils.redis_pool import RedisPool
JOB_EXPIRATION_SECONDS = 10 * 24 * 60 * 60 # ten days
STATUS_LOGS_PREFIX = "jobs_logs:status:"
PROGRESS_LOGS_PREFIX = "jobs_logs:progress:"
class JobStatus(Enum):
class JobStatus(str, Enum):
"""
Status of a job.
"""
@ -70,6 +73,7 @@ class Jobs:
jobs = Jobs.get_jobs()
for job in jobs:
Jobs.remove(job)
Jobs.reset_logs()
@staticmethod
def add(
@ -120,6 +124,60 @@ class Jobs:
return True
return False
@staticmethod
def reset_logs():
redis = RedisPool().get_connection()
for key in redis.keys(STATUS_LOGS_PREFIX + "*"):
redis.delete(key)
@staticmethod
def log_status_update(job: Job, status: JobStatus):
redis = RedisPool().get_connection()
key = _status_log_key_from_uuid(job.uid)
redis.lpush(key, status.value)
redis.expire(key, 10)
@staticmethod
def log_progress_update(job: Job, progress: int):
redis = RedisPool().get_connection()
key = _progress_log_key_from_uuid(job.uid)
redis.lpush(key, progress)
redis.expire(key, 10)
@staticmethod
def status_updates(job: Job) -> typing.List[JobStatus]:
result = []
redis = RedisPool().get_connection()
key = _status_log_key_from_uuid(job.uid)
if not redis.exists(key):
return []
status_strings = redis.lrange(key, 0, -1)
for status in status_strings:
try:
result.append(JobStatus[status])
except KeyError as e:
raise ValueError("impossible job status: " + status) from e
return result
@staticmethod
def progress_updates(job: Job) -> typing.List[int]:
result = []
redis = RedisPool().get_connection()
key = _progress_log_key_from_uuid(job.uid)
if not redis.exists(key):
return []
progress_strings = redis.lrange(key, 0, -1)
for progress in progress_strings:
try:
result.append(int(progress))
except ValueError as e:  # int() raises ValueError, not KeyError
raise ValueError("impossible job progress: " + progress) from e
return result
@staticmethod
def update(
job: Job,
@ -140,9 +198,14 @@ class Jobs:
job.description = description
if status_text is not None:
job.status_text = status_text
if status == JobStatus.FINISHED:
job.progress = 100
if progress is not None:
# explicitly provided progress has priority
job.progress = progress
Jobs.log_progress_update(job, progress)
job.status = status
Jobs.log_status_update(job, status)
job.updated_at = datetime.datetime.now()
job.error = error
job.result = result
@ -198,6 +261,14 @@ def _redis_key_from_uuid(uuid_string):
return "jobs:" + str(uuid_string)
def _status_log_key_from_uuid(uuid_string):
return STATUS_LOGS_PREFIX + str(uuid_string)
def _progress_log_key_from_uuid(uuid_string):
return PROGRESS_LOGS_PREFIX + str(uuid_string)
def _store_job_as_hash(redis, redis_key, model):
for key, value in model.dict().items():
if isinstance(value, uuid.UUID):

View File
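A sketch of the new job semantics; the job value is assumed to come from Jobs.add(), whose full signature is not shown in this diff, and a reachable Redis is assumed.

```
# `job` is assumed to come from Jobs.add()
Jobs.update(job, status=JobStatus.FINISHED)
assert job.progress == 100      # FINISHED now implies 100% progress
# the status change was also logged; lpush prepends, so newest entries come first
assert Jobs.status_updates(job)[0] == JobStatus.FINISHED
```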

@ -22,6 +22,9 @@ from selfprivacy_api.migrations.providers import CreateProviderFields
from selfprivacy_api.migrations.prepare_for_nixos_2211 import (
MigrateToSelfprivacyChannelFrom2205,
)
from selfprivacy_api.migrations.prepare_for_nixos_2305 import (
MigrateToSelfprivacyChannelFrom2211,
)
migrations = [
FixNixosConfigBranch(),
@ -31,6 +34,7 @@ migrations = [
CheckForFailedBindsMigration(),
CreateProviderFields(),
MigrateToSelfprivacyChannelFrom2205(),
MigrateToSelfprivacyChannelFrom2211(),
]

View File

@ -0,0 +1,58 @@
import os
import subprocess
from selfprivacy_api.migrations.migration import Migration
class MigrateToSelfprivacyChannelFrom2211(Migration):
"""Migrate to selfprivacy Nix channel.
For some reason NixOS 22.11 servers were initialized with the nixos channel instead of selfprivacy.
This stops us from upgrading to NixOS 23.05.
"""
def get_migration_name(self):
return "migrate_to_selfprivacy_channel_from_2211"
def get_migration_description(self):
return "Migrate to selfprivacy Nix channel from NixOS 22.11."
def is_migration_needed(self):
try:
output = subprocess.check_output(
["nix-channel", "--list"], start_new_session=True
)
output = output.decode("utf-8")
first_line = output.split("\n", maxsplit=1)[0]
return first_line.startswith("nixos") and (
first_line.endswith("nixos-22.11")
)
except subprocess.CalledProcessError:
return False
def migrate(self):
# Change the channel and update them.
# Also, go to /etc/nixos directory and make a git pull
current_working_directory = os.getcwd()
try:
print("Changing channel")
os.chdir("/etc/nixos")
subprocess.check_output(
[
"nix-channel",
"--add",
"https://channel.selfprivacy.org/nixos-selfpricacy",
"nixos",
]
)
subprocess.check_output(["nix-channel", "--update"])
nixos_config_branch = subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "HEAD"], start_new_session=True
)
if nixos_config_branch.decode("utf-8").strip() == "api-redis":
print("Also changing nixos-config branch from api-redis to master")
subprocess.check_output(["git", "checkout", "master"])
subprocess.check_output(["git", "pull"])
os.chdir(current_working_directory)
except subprocess.CalledProcessError:
os.chdir(current_working_directory)
print("Error while migrating to the selfprivacy channel")

View File

@ -0,0 +1,11 @@
"""for storage in Redis"""
from pydantic import BaseModel
class BackupProviderModel(BaseModel):
kind: str
login: str
key: str
location: str
repo_id: str # for app usage, not for us

View File

@ -0,0 +1,8 @@
import datetime
from pydantic import BaseModel
class Snapshot(BaseModel):
id: str
service_name: str
created_at: datetime.datetime

View File

@ -3,6 +3,7 @@ Token repository using Redis as backend.
"""
from typing import Optional
from datetime import datetime
from hashlib import md5
from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
AbstractTokensRepository,
@ -28,7 +29,10 @@ class RedisTokensRepository(AbstractTokensRepository):
@staticmethod
def token_key_for_device(device_name: str):
return TOKENS_PREFIX + str(hash(device_name))
hash = md5()
hash.update(bytes(device_name, "utf-8"))
digest = hash.hexdigest()
return TOKENS_PREFIX + digest
def get_tokens(self) -> list[Token]:
"""Get the tokens"""
@ -41,11 +45,20 @@ class RedisTokensRepository(AbstractTokensRepository):
tokens.append(token)
return tokens
def _discover_token_key(self, input_token: Token) -> Optional[str]:
"""brute-force searching for tokens, for robust deletion"""
redis = self.connection
token_keys = redis.keys(TOKENS_PREFIX + "*")
for key in token_keys:
token = self._token_from_hash(key)
if token == input_token:
return key
def delete_token(self, input_token: Token) -> None:
"""Delete the token"""
redis = self.connection
key = RedisTokensRepository._token_redis_key(input_token)
if input_token not in self.get_tokens():
key = self._discover_token_key(input_token)
if key is None:
raise TokenNotFound
redis.delete(key)
@ -138,7 +151,10 @@ class RedisTokensRepository(AbstractTokensRepository):
return None
def _token_from_hash(self, redis_key: str) -> Optional[Token]:
return self._hash_as_model(redis_key, Token)
token = self._hash_as_model(redis_key, Token)
if token is not None:
token.created_at = token.created_at.replace(tzinfo=None)
return token
def _recovery_key_from_hash(self, redis_key: str) -> Optional[RecoveryKey]:
return self._hash_as_model(redis_key, RecoveryKey)

View File
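A note on the motivation for the md5 change: Python's builtin hash() for strings is salted per process, so token keys derived from it would not survive an interpreter restart. A quick illustration:

```
from hashlib import md5

# the same device name always maps to the same digest, across interpreter runs
assert md5(b"laptop").hexdigest() == md5(b"laptop").hexdigest()
# the builtin hash(), by contrast, is salted per process (PYTHONHASHSEED),
# so keys derived from it do not survive a restart
```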

@ -3,9 +3,7 @@ from datetime import datetime
import json
import subprocess
import os
from threading import Lock
from enum import Enum
import portalocker
from selfprivacy_api.utils import ReadUserData
from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
@ -51,7 +49,6 @@ class ResticController(metaclass=SingletonMetaclass):
self.error_message = None
self._initialized = True
self.load_configuration()
self.write_rclone_config()
self.load_snapshots()
def load_configuration(self):
@ -65,25 +62,6 @@ class ResticController(metaclass=SingletonMetaclass):
else:
self.state = ResticStates.NO_KEY
def write_rclone_config(self):
"""
Open /root/.config/rclone/rclone.conf with portalocker
and write configuration in the following format:
[backblaze]
type = b2
account = {self.backblaze_account}
key = {self.backblaze_key}
"""
with portalocker.Lock(
"/root/.config/rclone/rclone.conf", "w", timeout=None
) as rclone_config:
rclone_config.write(
f"[backblaze]\n"
f"type = b2\n"
f"account = {self._backblaze_account}\n"
f"key = {self._backblaze_key}\n"
)
def load_snapshots(self):
"""
Load list of snapshots from repository
@ -91,9 +69,9 @@ class ResticController(metaclass=SingletonMetaclass):
backup_listing_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
self.rclone_args(),
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
self.restic_repo(),
"snapshots",
"--json",
]
@ -123,6 +101,17 @@ class ResticController(metaclass=SingletonMetaclass):
self.error_message = snapshots_list
return
def restic_repo(self):
# https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#other-services-via-rclone
# https://forum.rclone.org/t/can-rclone-be-run-solely-with-command-line-options-no-config-no-env-vars/6314/5
return f"rclone::b2:{self._repository_name}/sfbackup"
def rclone_args(self):
return "rclone.args=serve restic --stdio " + self.backend_rclone_args()  # note the separating space
def backend_rclone_args(self):
return f"--b2-account {self._backblaze_account} --b2-key {self._backblaze_key}"
def initialize_repository(self):
"""
Initialize repository with restic
@ -130,9 +119,9 @@ class ResticController(metaclass=SingletonMetaclass):
initialize_repository_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
self.rclone_args(),
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
self.restic_repo(),
"init",
]
with subprocess.Popen(
@ -159,9 +148,9 @@ class ResticController(metaclass=SingletonMetaclass):
backup_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
self.rclone_args(),
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
self.restic_repo(),
"--verbose",
"--json",
"backup",
@ -228,9 +217,9 @@ class ResticController(metaclass=SingletonMetaclass):
backup_restoration_command = [
"restic",
"-o",
"rclone.args=serve restic --stdio",
self.rclone_args(),
"-r",
f"rclone:backblaze:{self._repository_name}/sfbackup",
self.restic_repo(),
"restore",
snapshot_id,
"--target",

View File
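Put together, the refactor assembles an invocation like the following; this is an illustration with placeholder values, not output captured from the code.

```
# what load_snapshots() effectively runs after the refactor
# (angle-bracket values are placeholders, not real credentials)
backup_listing_command = [
    "restic",
    "-o", "rclone.args=serve restic --stdio --b2-account <account> --b2-key <key>",
    "-r", "rclone::b2:<bucket>/sfbackup",
    "snapshots",
    "--json",
]
```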

@ -42,7 +42,7 @@ def get_disabled_services() -> list[Service]:
def get_services_by_location(location: str) -> list[Service]:
return [service for service in services if service.get_location() == location]
return [service for service in services if service.get_drive() == location]
def get_all_required_dns_records() -> list[ServiceDnsRecord]:

View File

@ -5,7 +5,6 @@ import typing
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
@ -38,6 +37,10 @@ class Bitwarden(Service):
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_user() -> str:
return "vaultwarden"
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
@ -52,6 +55,10 @@ class Bitwarden(Service):
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Password database, encryption certificate and attachments."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -111,14 +118,11 @@ class Bitwarden(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/bitwarden")
storage_usage += get_storage_usage("/var/lib/bitwarden_rs")
return storage_usage
def get_folders() -> typing.List[str]:
return ["/var/lib/bitwarden", "/var/lib/bitwarden_rs"]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("bitwarden", {}).get("location", "sda1")
@ -154,20 +158,7 @@ class Bitwarden(Service):
self,
volume,
job,
[
FolderMoveNames(
name="bitwarden",
bind_location="/var/lib/bitwarden",
group="vaultwarden",
owner="vaultwarden",
),
FolderMoveNames(
name="bitwarden_rs",
bind_location="/var/lib/bitwarden_rs",
group="vaultwarden",
owner="vaultwarden",
),
],
FolderMoveNames.default_foldermoves(self),
"bitwarden",
)

View File

@ -1,5 +1,6 @@
"""Generic handler for moving services"""
from __future__ import annotations
import subprocess
import time
import pathlib
@ -11,6 +12,7 @@ from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.owned_path import OwnedPath
class FolderMoveNames(BaseModel):
@ -19,6 +21,26 @@ class FolderMoveNames(BaseModel):
owner: str
group: str
@staticmethod
def from_owned_path(path: OwnedPath) -> FolderMoveNames:
return FolderMoveNames(
name=FolderMoveNames.get_foldername(path.path),
bind_location=path.path,
owner=path.owner,
group=path.group,
)
@staticmethod
def get_foldername(path: str) -> str:
return path.split("/")[-1]
@staticmethod
def default_foldermoves(service: Service) -> list[FolderMoveNames]:
return [
FolderMoveNames.from_owned_path(folder)
for folder in service.get_owned_folders()
]
@huey.task()
def move_service(
@ -44,7 +66,7 @@ def move_service(
)
return
# Check if we are on the same volume
old_volume = service.get_location()
old_volume = service.get_drive()
if old_volume == volume.name:
Jobs.update(
job=job,

View File
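A small sketch of the new derivation path, grounded only in the code above:

```
from selfprivacy_api.services.owned_path import OwnedPath

owned = OwnedPath(path="/var/lib/gitea", owner="gitea", group="gitea")
move = FolderMoveNames.from_owned_path(owned)
assert move.name == "gitea"                      # last path component
assert move.bind_location == "/var/lib/gitea"
assert (move.owner, move.group) == ("gitea", "gitea")
```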

@ -5,7 +5,6 @@ import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
@ -52,6 +51,10 @@ class Gitea(Service):
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Git repositories, database and user data."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -110,13 +113,11 @@ class Gitea(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/gitea")
return storage_usage
def get_folders() -> typing.List[str]:
return ["/var/lib/gitea"]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("gitea", {}).get("location", "sda1")
@ -151,14 +152,7 @@ class Gitea(Service):
self,
volume,
job,
[
FolderMoveNames(
name="gitea",
bind_location="/var/lib/gitea",
group="gitea",
owner="gitea",
),
],
FolderMoveNames.default_foldermoves(self),
"gitea",
)

View File

@ -5,7 +5,6 @@ import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import (
get_service_status,
get_service_status_from_several_units,
@ -55,6 +54,10 @@ class Jitsi(Service):
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Secrets that are used to encrypt the communication."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -110,13 +113,11 @@ class Jitsi(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/jitsi-meet")
return storage_usage
def get_folders() -> typing.List[str]:
return ["/var/lib/jitsi-meet"]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
return "sda1"
@staticmethod

View File

@ -6,7 +6,6 @@ import typing
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import (
get_service_status,
get_service_status_from_several_units,
@ -38,6 +37,10 @@ class MailServer(Service):
def get_svg_icon() -> str:
return base64.b64encode(MAILSERVER_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_user() -> str:
return "virtualMail"
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
@ -51,6 +54,10 @@ class MailServer(Service):
def is_required() -> bool:
return True
@staticmethod
def get_backup_description() -> str:
return "Mail boxes and filters."
@staticmethod
def is_enabled() -> bool:
return True
@ -97,11 +104,11 @@ class MailServer(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
return get_storage_usage("/var/vmail")
def get_folders() -> typing.List[str]:
return ["/var/vmail", "/var/sieve"]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
with utils.ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("mailserver", {}).get("location", "sda1")
@ -159,20 +166,7 @@ class MailServer(Service):
self,
volume,
job,
[
FolderMoveNames(
name="vmail",
bind_location="/var/vmail",
group="virtualMail",
owner="virtualMail",
),
FolderMoveNames(
name="sieve",
bind_location="/var/sieve",
group="virtualMail",
owner="virtualMail",
),
],
FolderMoveNames.default_foldermoves(self),
"mailserver",
)

View File

@ -4,7 +4,6 @@ import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
@ -50,6 +49,10 @@ class Nextcloud(Service):
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "All the files and other data stored in Nextcloud."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -114,16 +117,11 @@ class Nextcloud(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
"""
Calculate the real storage usage of /var/lib/nextcloud and all subdirectories.
Calculate using pathlib.
Do not follow symlinks.
"""
return get_storage_usage("/var/lib/nextcloud")
def get_folders() -> typing.List[str]:
return ["/var/lib/nextcloud"]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
"""Get the name of disk where Nextcloud is installed."""
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
@ -158,14 +156,7 @@ class Nextcloud(Service):
self,
volume,
job,
[
FolderMoveNames(
name="nextcloud",
bind_location="/var/lib/nextcloud",
owner="nextcloud",
group="nextcloud",
),
],
FolderMoveNames.default_foldermoves(self),
"nextcloud",
)
return job

View File

@ -4,7 +4,6 @@ import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData
@ -45,6 +44,14 @@ class Ocserv(Service):
def is_required() -> bool:
return False
@staticmethod
def can_be_backed_up() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Nothing to back up."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -93,7 +100,7 @@ class Ocserv(Service):
return ""
@staticmethod
def get_location() -> str:
def get_drive() -> str:
return "sda1"
@staticmethod
@ -114,8 +121,8 @@ class Ocserv(Service):
]
@staticmethod
def get_storage_usage() -> int:
return 0
def get_folders() -> typing.List[str]:
return []
def move_to_volume(self, volume: BlockDevice) -> Job:
raise NotImplementedError("ocserv service is not movable")

View File

@ -0,0 +1,7 @@
from pydantic import BaseModel
class OwnedPath(BaseModel):
path: str
owner: str
group: str

View File

@ -4,9 +4,9 @@ import subprocess
import typing
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
import selfprivacy_api.utils.network as network_utils
@ -46,6 +46,10 @@ class Pleroma(Service):
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Your Pleroma accounts, posts and media."
@staticmethod
def is_enabled() -> bool:
with ReadUserData() as user_data:
@ -97,14 +101,26 @@ class Pleroma(Service):
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
storage_usage += get_storage_usage("/var/lib/pleroma")
storage_usage += get_storage_usage("/var/lib/postgresql")
return storage_usage
def get_owned_folders() -> typing.List[OwnedPath]:
"""
Get a list of occupied directories with ownership info.
Pleroma has folders that are owned by different users.
"""
return [
OwnedPath(
path="/var/lib/pleroma",
owner="pleroma",
group="pleroma",
),
OwnedPath(
path="/var/lib/postgresql",
owner="postgres",
group="postgres",
),
]
@staticmethod
def get_location() -> str:
def get_drive() -> str:
with ReadUserData() as user_data:
if user_data.get("useBinds", False):
return user_data.get("pleroma", {}).get("location", "sda1")
@ -138,20 +154,7 @@ class Pleroma(Service):
self,
volume,
job,
[
FolderMoveNames(
name="pleroma",
bind_location="/var/lib/pleroma",
owner="pleroma",
group="pleroma",
),
FolderMoveNames(
name="postgresql",
bind_location="/var/lib/postgresql",
owner="postgres",
group="postgres",
),
],
FolderMoveNames.default_foldermoves(self),
"pleroma",
)
return job

View File

@ -8,6 +8,12 @@ from selfprivacy_api.jobs import Job
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils.waitloop import wait_until_true
DEFAULT_START_STOP_TIMEOUT = 10 * 60
class ServiceStatus(Enum):
"""Enum for service status"""
@ -38,71 +44,125 @@ class Service(ABC):
@staticmethod
@abstractmethod
def get_id() -> str:
"""
The unique id of the service.
"""
pass
@staticmethod
@abstractmethod
def get_display_name() -> str:
"""
The name of the service that is shown to the user.
"""
pass
@staticmethod
@abstractmethod
def get_description() -> str:
"""
The description of the service that is shown to the user.
"""
pass
@staticmethod
@abstractmethod
def get_svg_icon() -> str:
"""
The monochrome svg icon of the service.
"""
pass
@staticmethod
@abstractmethod
def get_url() -> typing.Optional[str]:
"""
The url of the service if it is accessible from the internet browser.
"""
pass
@classmethod
def get_user(cls) -> typing.Optional[str]:
"""
The user that owns the service's files.
Defaults to the service's id.
"""
return cls.get_id()
@classmethod
def get_group(cls) -> typing.Optional[str]:
"""
The group that owns the service's files.
Defaults to the service's user.
"""
return cls.get_user()
@staticmethod
@abstractmethod
def is_movable() -> bool:
"""`True` if the service can be moved to the non-system volume."""
pass
@staticmethod
@abstractmethod
def is_required() -> bool:
"""`True` if the service is required for the server to function."""
pass
@staticmethod
def can_be_backed_up() -> bool:
"""`True` if the service can be backed up."""
return True
@staticmethod
@abstractmethod
def get_backup_description() -> str:
"""
The text shown to the user that explains what data will be
backed up.
"""
pass
@staticmethod
@abstractmethod
def is_enabled() -> bool:
"""`True` if the service is enabled."""
pass
@staticmethod
@abstractmethod
def get_status() -> ServiceStatus:
"""The status of the service, reported by systemd."""
pass
@staticmethod
@abstractmethod
def enable():
"""Enable the service. Usually this means enabling systemd unit."""
pass
@staticmethod
@abstractmethod
def disable():
"""Disable the service. Usually this means disabling systemd unit."""
pass
@staticmethod
@abstractmethod
def stop():
"""Stop the service. Usually this means stopping systemd unit."""
pass
@staticmethod
@abstractmethod
def start():
"""Start the service. Usually this means starting systemd unit."""
pass
@staticmethod
@abstractmethod
def restart():
"""Restart the service. Usually this means restarting systemd unit."""
pass
@staticmethod
@ -120,10 +180,17 @@ class Service(ABC):
def get_logs():
pass
@staticmethod
@abstractmethod
def get_storage_usage() -> int:
pass
@classmethod
def get_storage_usage(cls) -> int:
"""
Calculate the real storage usage of folders occupied by service
Calculate using pathlib.
Do not follow symlinks.
"""
storage_used = 0
for folder in cls.get_folders():
storage_used += get_storage_usage(folder)
return storage_used
@staticmethod
@abstractmethod
@ -132,9 +199,88 @@ class Service(ABC):
@staticmethod
@abstractmethod
def get_location() -> str:
def get_drive() -> str:
pass
@classmethod
def get_folders(cls) -> typing.List[str]:
"""
Get a plain list of occupied directories.
By default, this extracts info from the overridden get_owned_folders().
"""
if cls.get_owned_folders == Service.get_owned_folders:
raise NotImplementedError(
"you need to implement at least one of get_folders() or get_owned_folders()"
)
return [owned_folder.path for owned_folder in cls.get_owned_folders()]
@classmethod
def get_owned_folders(cls) -> typing.List[OwnedPath]:
"""
Get a list of occupied directories with ownership info.
By default, this extracts info from the overridden get_folders().
"""
if cls.get_folders == Service.get_folders:
raise NotImplementedError(
"you need to implement at least one of get_folders() or get_owned_folders()"
)
return [cls.owned_path(path) for path in cls.get_folders()]
@staticmethod
def get_foldername(path: str) -> str:
return path.split("/")[-1]
@abstractmethod
def move_to_volume(self, volume: BlockDevice) -> Job:
pass
@classmethod
def owned_path(cls, path: str):
"""A default guess on folder ownership"""
return OwnedPath(
path=path,
owner=cls.get_user(),
group=cls.get_group(),
)
def pre_backup(self):
pass
def post_restore(self):
pass
class StoppedService:
"""
A context manager that stops the service if needed and reactivates it
after you are done if it was active
Example:
```
assert service.get_status() == ServiceStatus.ACTIVE
with StoppedService(service) [as stopped_service]:
assert service.get_status() == ServiceStatus.INACTIVE
```
"""
def __init__(self, service: Service):
self.service = service
self.original_status = service.get_status()
def __enter__(self) -> Service:
self.original_status = self.service.get_status()
if self.original_status != ServiceStatus.INACTIVE:
self.service.stop()
wait_until_true(
lambda: self.service.get_status() == ServiceStatus.INACTIVE,
timeout_sec=DEFAULT_START_STOP_TIMEOUT,
)
return self.service
def __exit__(self, exc_type, exc_value, traceback):
if self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]:
self.service.start()
wait_until_true(
lambda: self.service.get_status() == ServiceStatus.ACTIVE,
timeout_sec=DEFAULT_START_STOP_TIMEOUT,
)
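A sketch of how backup code might lean on the context manager; do_backup is a made-up stand-in, and the import assumes StoppedService lives next to Service, as in this file.

```python
from selfprivacy_api.services.service import Service, StoppedService

def backup_with_service_stopped(service: Service, do_backup) -> None:
    # the service is stopped (if it was running) for the duration of
    # the block; on exit it is restarted only if it was active before
    with StoppedService(service):
        do_backup(service.get_folders())
```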

@ -0,0 +1,184 @@
"""Class representing Bitwarden service"""
import base64
import typing
import subprocess
from typing import List
from os import path
from selfprivacy_api.jobs import Job
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.test_service.icon import BITWARDEN_ICON
DEFAULT_DELAY = 0
class DummyService(Service):
"""A test service"""
folders: List[str] = []
startstop_delay = 0
def __init_subclass__(cls, folders: List[str]):
cls.folders = folders
def __init__(self):
super().__init__()
status_file = self.status_file()
with open(status_file, "w") as file:
file.write(ServiceStatus.ACTIVE.value)
@staticmethod
def get_id() -> str:
"""Return service id."""
return "testservice"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Test Service"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "A small service used for test purposes. Does nothing."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
# return ""
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = "test.com"
return f"https://password.{domain}"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "How did we get here?"
@staticmethod
def is_enabled() -> bool:
return True
@classmethod
def status_file(cls) -> str:
folder = cls.folders[0]
# we do not REALLY want to store our state in our declared folders,
# so the status file lives one level above them
return path.join(folder, "..", "service_status")
@classmethod
def set_status(cls, status: ServiceStatus):
with open(cls.status_file(), "w") as file:
file.write(status.value)
@classmethod
def get_status(cls) -> ServiceStatus:
with open(cls.status_file(), "r") as file:
status_string = file.read().strip()
return ServiceStatus[status_string]
@classmethod
def change_status_with_async_delay(
cls, new_status: ServiceStatus, delay_sec: float
):
"""simulating a delay on systemd side"""
status_file = cls.status_file()
command = [
"bash",
"-c",
f" sleep {delay_sec} && echo {new_status.value} > {status_file}",
]
handle = subprocess.Popen(command)
if delay_sec == 0:
handle.communicate()
@classmethod
def enable(cls):
pass
@classmethod
def disable(cls):
pass
@classmethod
def set_delay(cls, new_delay):
cls.startstop_delay = new_delay
@classmethod
def stop(cls):
cls.set_status(ServiceStatus.DEACTIVATING)
cls.change_status_with_async_delay(ServiceStatus.INACTIVE, cls.startstop_delay)
@classmethod
def start(cls):
cls.set_status(ServiceStatus.ACTIVATING)
cls.change_status_with_async_delay(ServiceStatus.ACTIVE, cls.startstop_delay)
@classmethod
def restart(cls):
cls.set_status(ServiceStatus.RELOADING)  # is this the correct status?
cls.change_status_with_async_delay(ServiceStatus.ACTIVE, cls.startstop_delay)
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
# a no-op for the test service; zero-argument super() is unavailable in a staticmethod
pass
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0
return storage_usage
@staticmethod
def get_drive() -> str:
return "sda1"
@classmethod
def get_folders(cls) -> List[str]:
return cls.folders
@staticmethod
def get_dns_records() -> typing.List[ServiceDnsRecord]:
"""Return list of DNS records for Bitwarden service."""
return [
ServiceDnsRecord(
type="A",
name="password",
content=network_utils.get_ip4(),
ttl=3600,
),
ServiceDnsRecord(
type="AAAA",
name="password",
content=network_utils.get_ip6(),
ttl=3600,
),
]
def move_to_volume(self, volume: BlockDevice) -> Job:
pass
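Folders are bound per subclass via __init_subclass__, and start/stop latency is faked by a detached bash sleep that rewrites the status file. A usage sketch with made-up paths:

```python
from os import makedirs

from selfprivacy_api.services.test_service import DummyService

makedirs("/tmp/dummy_folder", exist_ok=True)

# each test subclass carries its own folder list
class MyDummy(DummyService, folders=["/tmp/dummy_folder"]):
    pass

service = MyDummy()      # __init__ writes ACTIVE into the status file
MyDummy.set_delay(0.5)   # pretend systemd needs half a second to stop
service.stop()           # DEACTIVATING now, INACTIVE after ~0.5 s
```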

@ -0,0 +1,3 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>

@ -0,0 +1,5 @@
BITWARDEN_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>
"""

@ -1,4 +1,4 @@
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs.test import test_job
from selfprivacy_api.restic_controller.tasks import *
from selfprivacy_api.backup.tasks import *
from selfprivacy_api.services.generic_service_mover import move_service

@ -0,0 +1,30 @@
from datetime import datetime
from typing import Optional
def store_model_as_hash(redis, redis_key, model):
for key, value in model.dict().items():
if isinstance(value, datetime):
value = value.isoformat()
redis.hset(redis_key, key, str(value))
def hash_as_model(redis, redis_key: str, model_class):
token_dict = _model_dict_from_hash(redis, redis_key)
if token_dict is not None:
return model_class(**token_dict)
return None
def _prepare_model_dict(d: dict):
for key in d.keys():
if d[key] == "None":
d[key] = None
def _model_dict_from_hash(redis, redis_key: str) -> Optional[dict]:
if redis.exists(redis_key):
token_dict = redis.hgetall(redis_key)
_prepare_model_dict(token_dict)
return token_dict
return None
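A round-trip sketch with a made-up pydantic model. The new file's path is hidden in this view, so the import is an assumption; note that datetimes are flattened to ISO strings on write and the string "None" is mapped back to None on read.

```python
from datetime import datetime
from typing import Optional

from pydantic import BaseModel
from redis import Redis

# assumed import path; the diff view hides the new file's name
from selfprivacy_api.utils.redis_model_storage import (
    store_model_as_hash,
    hash_as_model,
)

class ExampleModel(BaseModel):
    id: str
    note: Optional[str]
    created_at: datetime

redis = Redis(decode_responses=True)  # the helpers expect decoded strings
model = ExampleModel(id="abc", note=None, created_at=datetime.now())

store_model_as_hash(redis, "example:abc", model)  # one hset call per field
restored = hash_as_model(redis, "example:abc", ExampleModel)
assert restored == model  # "None" came back as None, the ISO string as datetime
```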

@ -0,0 +1,20 @@
from time import sleep
from typing import Callable
from typing import Optional
def wait_until_true(
readiness_checker: Callable[[], bool],
*,
interval: float = 0.1,
timeout_sec: Optional[float] = None
):
elapsed = 0.0
if timeout_sec is None:
timeout_sec = float("inf")
while (not readiness_checker()) and elapsed < timeout_sec:
sleep(interval)
elapsed += interval
if not readiness_checker():
raise TimeoutError()
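Usage sketch; the condition here is a trivial timer, whereas in the diff the loop polls service status, and the import path is assumed.

```python
from time import monotonic

# assumed import path for the new helper
from selfprivacy_api.utils.waitloop import wait_until_true

start = monotonic()
# polls every 0.1 s and raises TimeoutError if not true within 5 s
wait_until_true(lambda: monotonic() - start > 1.0, timeout_sec=5)
```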

@ -2,7 +2,7 @@ from setuptools import setup, find_packages
setup(
name="selfprivacy_api",
version="2.1.2",
version="2.1.3",
packages=find_packages(),
scripts=[
"selfprivacy_api/app.py",

@ -12,6 +12,9 @@ let
mnemonic
coverage
pylint
rope
mypy
pylsp-mypy
pydantic
typing-extensions
psutil
@ -35,7 +38,8 @@ pkgs.mkShell {
# for example. printenv <Name> will not fetch the value of an attribute.
export USE_REDIS_PORT=6379
pkill redis-server
redis-server --bind 127.0.0.1 --port $USE_REDIS_PORT >/dev/null &
sleep 2
setsid redis-server --bind 127.0.0.1 --port $USE_REDIS_PORT >/dev/null 2>/dev/null &
# maybe set more env-vars
'';
}

@ -24,5 +24,9 @@ def generate_users_query(query_array):
return "query TestUsers {\n users {" + "\n".join(query_array) + "}\n}"
def generate_backup_query(query_array):
return "query TestBackup {\n backup {" + "\n".join(query_array) + "}\n}"
def mnemonic_to_hex(mnemonic):
return Mnemonic(language="english").to_entropy(mnemonic).hex()

@ -3,6 +3,8 @@
# pylint: disable=unused-argument
import os
import pytest
from os import path
from fastapi.testclient import TestClient
@ -10,6 +12,10 @@ def pytest_generate_tests(metafunc):
os.environ["TEST_MODE"] = "true"
def global_data_dir():
return path.join(path.dirname(__file__), "data")
@pytest.fixture
def tokens_file(mocker, shared_datadir):
"""Mock tokens file."""
@ -26,6 +32,20 @@ def jobs_file(mocker, shared_datadir):
return mock
@pytest.fixture
def generic_userdata(mocker, tmpdir):
filename = "turned_on.json"
source_path = path.join(global_data_dir(), filename)
userdata_path = path.join(tmpdir, filename)
with open(userdata_path, "w") as file:
with open(source_path, "r") as source:
file.write(source.read())
mock = mocker.patch("selfprivacy_api.utils.USERDATA_FILE", new=userdata_path)
return mock
@pytest.fixture
def huey_database(mocker, shared_datadir):
"""Mock huey database."""

tests/data/turned_on.json
@ -0,0 +1,60 @@
{
"api": {
"token": "TEST_TOKEN",
"enableSwagger": false
},
"bitwarden": {
"enable": true
},
"databasePassword": "PASSWORD",
"domain": "test.tld",
"hashedMasterPassword": "HASHED_PASSWORD",
"hostname": "test-instance",
"nextcloud": {
"adminPassword": "ADMIN",
"databasePassword": "ADMIN",
"enable": true
},
"resticPassword": "PASS",
"ssh": {
"enable": true,
"passwordAuthentication": true,
"rootKeys": [
"ssh-ed25519 KEY test@pc"
]
},
"username": "tester",
"gitea": {
"enable": true
},
"ocserv": {
"enable": true
},
"pleroma": {
"enable": true
},
"jitsi": {
"enable": true
},
"autoUpgrade": {
"enable": true,
"allowReboot": true
},
"timezone": "Europe/Moscow",
"sshKeys": [
"ssh-rsa KEY test@pc"
],
"dns": {
"provider": "CLOUDFLARE",
"apiKey": "TOKEN"
},
"server": {
"provider": "HETZNER"
},
"backup": {
"provider": "BACKBLAZE",
"accountId": "ID",
"accountKey": "KEY",
"bucket": "selfprivacy"
}
}

@ -0,0 +1,372 @@
from os import path
from tests.test_graphql.test_backup import dummy_service, backups, raw_dummy_service
from tests.common import generate_backup_query
from selfprivacy_api.graphql.common_types.service import service_to_graphql_service
from selfprivacy_api.jobs import Jobs, JobStatus
API_RELOAD_SNAPSHOTS = """
mutation TestSnapshotsReload {
backup {
forceSnapshotsReload {
success
message
code
}
}
}
"""
API_SET_AUTOBACKUP_PERIOD_MUTATION = """
mutation TestAutobackupPeriod($period: Int) {
backup {
setAutobackupPeriod(period: $period) {
success
message
code
configuration {
provider
encryptionKey
isInitialized
autobackupPeriod
locationName
locationId
}
}
}
}
"""
API_REMOVE_REPOSITORY_MUTATION = """
mutation TestRemoveRepo {
backup {
removeRepository {
success
message
code
configuration {
provider
encryptionKey
isInitialized
autobackupPeriod
locationName
locationId
}
}
}
}
"""
API_INIT_MUTATION = """
mutation TestInitRepo($input: InitializeRepositoryInput!) {
backup {
initializeRepository(repository: $input) {
success
message
code
configuration {
provider
encryptionKey
isInitialized
autobackupPeriod
locationName
locationId
}
}
}
}
"""
API_RESTORE_MUTATION = """
mutation TestRestoreService($snapshot_id: String!) {
backup {
restoreBackup(snapshotId: $snapshot_id) {
success
message
code
job {
uid
status
}
}
}
}
"""
API_SNAPSHOTS_QUERY = """
allSnapshots {
id
service {
id
}
createdAt
}
"""
API_BACK_UP_MUTATION = """
mutation TestBackupService($service_id: String!) {
backup {
startBackup(serviceId: $service_id) {
success
message
code
job {
uid
status
}
}
}
}
"""
def api_restore(authorized_client, snapshot_id):
response = authorized_client.post(
"/graphql",
json={
"query": API_RESTORE_MUTATION,
"variables": {"snapshot_id": snapshot_id},
},
)
return response
def api_backup(authorized_client, service):
response = authorized_client.post(
"/graphql",
json={
"query": API_BACK_UP_MUTATION,
"variables": {"service_id": service.get_id()},
},
)
return response
def api_set_period(authorized_client, period):
response = authorized_client.post(
"/graphql",
json={
"query": API_SET_AUTOBACKUP_PERIOD_MUTATION,
"variables": {"period": period},
},
)
return response
def api_remove(authorized_client):
response = authorized_client.post(
"/graphql",
json={
"query": API_REMOVE_REPOSITORY_MUTATION,
"variables": {},
},
)
return response
def api_reload_snapshots(authorized_client):
response = authorized_client.post(
"/graphql",
json={
"query": API_RELOAD_SNAPSHOTS,
"variables": {},
},
)
return response
def api_init_without_key(
authorized_client, kind, login, password, location_name, location_id
):
response = authorized_client.post(
"/graphql",
json={
"query": API_INIT_MUTATION,
"variables": {
"input": {
"provider": kind,
"locationId": location_id,
"locationName": location_name,
"login": login,
"password": password,
}
},
},
)
return response
def assert_ok(data):
assert data["code"] == 200
assert data["success"] is True
def get_data(response):
assert response.status_code == 200
response = response.json()
if (
"errors" in response.keys()
): # convenience for debugging: this will display the errors
assert response["errors"] == []
assert response["data"] is not None
data = response["data"]
return data
def api_snapshots(authorized_client):
response = authorized_client.post(
"/graphql",
json={"query": generate_backup_query([API_SNAPSHOTS_QUERY])},
)
data = get_data(response)
result = data["backup"]["allSnapshots"]
assert result is not None
return result
def test_dummy_service_convertible_to_gql(dummy_service):
gql_service = service_to_graphql_service(dummy_service)
assert gql_service is not None
def test_snapshots_empty(authorized_client, dummy_service):
snaps = api_snapshots(authorized_client)
assert snaps == []
def test_start_backup(authorized_client, dummy_service):
response = api_backup(authorized_client, dummy_service)
data = get_data(response)["backup"]["startBackup"]
assert data["success"] is True
job = data["job"]
assert Jobs.get_job(job["uid"]).status == JobStatus.FINISHED
snaps = api_snapshots(authorized_client)
assert len(snaps) == 1
snap = snaps[0]
assert snap["id"] is not None
assert snap["id"] != ""
assert snap["service"]["id"] == "testservice"
def test_restore(authorized_client, dummy_service):
api_backup(authorized_client, dummy_service)
snap = api_snapshots(authorized_client)[0]
assert snap["id"] is not None
response = api_restore(authorized_client, snap["id"])
data = get_data(response)["backup"]["restoreBackup"]
assert data["success"] is True
job = data["job"]
assert Jobs.get_job(job["uid"]).status == JobStatus.FINISHED
def test_reinit(authorized_client, dummy_service, tmpdir):
test_repo_path = path.join(tmpdir, "not_at_all_sus")
response = api_init_without_key(
authorized_client, "FILE", "", "", test_repo_path, ""
)
data = get_data(response)["backup"]["initializeRepository"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["provider"] == "FILE"
assert configuration["locationId"] == ""
assert configuration["locationName"] == test_repo_path
assert len(configuration["encryptionKey"]) > 1
assert configuration["isInitialized"] is True
response = api_backup(authorized_client, dummy_service)
data = get_data(response)["backup"]["startBackup"]
assert data["success"] is True
job = data["job"]
assert Jobs.get_job(job["uid"]).status == JobStatus.FINISHED
def test_remove(authorized_client, generic_userdata):
response = api_remove(authorized_client)
data = get_data(response)["backup"]["removeRepository"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["provider"] == "NONE"
assert configuration["locationId"] == ""
assert configuration["locationName"] == ""
# still generated every time it is missing
assert len(configuration["encryptionKey"]) > 1
assert configuration["isInitialized"] is False
def test_autobackup_period_nonzero(authorized_client):
new_period = 11
response = api_set_period(authorized_client, new_period)
data = get_data(response)["backup"]["setAutobackupPeriod"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["autobackupPeriod"] == new_period
def test_autobackup_period_zero(authorized_client):
new_period = 0
# since it is None by default, we first set it to something non-negative
response = api_set_period(authorized_client, 11)
# and now we nullify it
response = api_set_period(authorized_client, new_period)
data = get_data(response)["backup"]["setAutobackupPeriod"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["autobackupPeriod"] is None
def test_autobackup_period_none(authorized_client):
# since it is None by default, we first set it to something non-negative
response = api_set_period(authorized_client, 11)
# and now we nullify it
response = api_set_period(authorized_client, None)
data = get_data(response)["backup"]["setAutobackupPeriod"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["autobackupPeriod"] is None
def test_autobackup_period_negative(authorized_client):
# since it is None by default, we first set it to something non-negative
response = api_set_period(authorized_client, 11)
# and now we pass a negative value, which should also nullify it
response = api_set_period(authorized_client, -12)
data = get_data(response)["backup"]["setAutobackupPeriod"]
assert_ok(data)
configuration = data["configuration"]
assert configuration["autobackupPeriod"] is None
# We cannot really check the effect at this level, we leave it to backend tests
# But we still make it run in both empty and full scenarios and ask for snaps afterwards
def test_reload_snapshots_bare_bare_bare(authorized_client, dummy_service):
api_remove(authorized_client)
response = api_reload_snapshots(authorized_client)
data = get_data(response)["backup"]["forceSnapshotsReload"]
assert_ok(data)
snaps = api_snapshots(authorized_client)
assert snaps == []
def test_reload_snapshots(authorized_client, dummy_service):
response = api_backup(authorized_client, dummy_service)
data = get_data(response)["backup"]["startBackup"]
response = api_reload_snapshots(authorized_client)
data = get_data(response)["backup"]["forceSnapshotsReload"]
assert_ok(data)
snaps = api_snapshots(authorized_client)
assert len(snaps) == 1

@ -75,10 +75,12 @@ def test_graphql_tokens_info_unauthorized(client, tokens_file):
DELETE_TOKEN_MUTATION = """
mutation DeleteToken($device: String!) {
deleteDeviceApiToken(device: $device) {
success
message
code
api {
deleteDeviceApiToken(device: $device) {
success
message
code
}
}
}
"""
@ -110,9 +112,9 @@ def test_graphql_delete_token(authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteDeviceApiToken"]["success"] is True
assert response.json()["data"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["deleteDeviceApiToken"]["code"] == 200
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["success"] is True
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["code"] == 200
assert read_json(tokens_file) == {
"tokens": [
{
@ -136,13 +138,16 @@ def test_graphql_delete_self_token(authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteDeviceApiToken"]["success"] is False
assert response.json()["data"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["deleteDeviceApiToken"]["code"] == 400
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["success"] is False
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["code"] == 400
assert read_json(tokens_file) == TOKENS_FILE_CONTETS
def test_graphql_delete_nonexistent_token(authorized_client, tokens_file):
def test_graphql_delete_nonexistent_token(
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
json={
@ -154,19 +159,21 @@ def test_graphql_delete_nonexistent_token(authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteDeviceApiToken"]["success"] is False
assert response.json()["data"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["deleteDeviceApiToken"]["code"] == 404
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["success"] is False
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["message"] is not None
assert response.json()["data"]["api"]["deleteDeviceApiToken"]["code"] == 404
assert read_json(tokens_file) == TOKENS_FILE_CONTETS
REFRESH_TOKEN_MUTATION = """
mutation RefreshToken {
refreshDeviceApiToken {
success
message
code
token
api {
refreshDeviceApiToken {
success
message
code
token
}
}
}
"""
@ -181,19 +188,25 @@ def test_graphql_refresh_token_unauthorized(client, tokens_file):
assert response.json()["data"] is None
def test_graphql_refresh_token(authorized_client, tokens_file, token_repo):
def test_graphql_refresh_token(
authorized_client,
tokens_file,
token_repo,
):
response = authorized_client.post(
"/graphql",
json={"query": REFRESH_TOKEN_MUTATION},
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["refreshDeviceApiToken"]["success"] is True
assert response.json()["data"]["refreshDeviceApiToken"]["message"] is not None
assert response.json()["data"]["refreshDeviceApiToken"]["code"] == 200
assert response.json()["data"]["api"]["refreshDeviceApiToken"]["success"] is True
assert (
response.json()["data"]["api"]["refreshDeviceApiToken"]["message"] is not None
)
assert response.json()["data"]["api"]["refreshDeviceApiToken"]["code"] == 200
token = token_repo.get_token_by_name("test_token")
assert token == Token(
token=response.json()["data"]["refreshDeviceApiToken"]["token"],
token=response.json()["data"]["api"]["refreshDeviceApiToken"]["token"],
device_name="test_token",
created_at=datetime.datetime(2022, 1, 14, 8, 31, 10, 789314),
)
@ -201,17 +214,22 @@ def test_graphql_refresh_token(authorized_client, tokens_file, token_repo):
NEW_DEVICE_KEY_MUTATION = """
mutation NewDeviceKey {
getNewDeviceApiKey {
success
message
code
key
api {
getNewDeviceApiKey {
success
message
code
key
}
}
}
"""
def test_graphql_get_new_device_auth_key_unauthorized(client, tokens_file):
def test_graphql_get_new_device_auth_key_unauthorized(
client,
tokens_file,
):
response = client.post(
"/graphql",
json={"query": NEW_DEVICE_KEY_MUTATION},
@ -220,22 +238,26 @@ def test_graphql_get_new_device_auth_key_unauthorized(client, tokens_file):
assert response.json()["data"] is None
def test_graphql_get_new_device_auth_key(authorized_client, tokens_file):
def test_graphql_get_new_device_auth_key(
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
json={"query": NEW_DEVICE_KEY_MUTATION},
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["getNewDeviceApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["code"] == 200
assert (
response.json()["data"]["getNewDeviceApiKey"]["key"].split(" ").__len__() == 12
response.json()["data"]["api"]["getNewDeviceApiKey"]["key"].split(" ").__len__()
== 12
)
token = (
Mnemonic(language="english")
.to_entropy(response.json()["data"]["getNewDeviceApiKey"]["key"])
.to_entropy(response.json()["data"]["api"]["getNewDeviceApiKey"]["key"])
.hex()
)
assert read_json(tokens_file)["new_device"]["token"] == token
@ -243,20 +265,25 @@ def test_graphql_get_new_device_auth_key(authorized_client, tokens_file):
INVALIDATE_NEW_DEVICE_KEY_MUTATION = """
mutation InvalidateNewDeviceKey {
invalidateNewDeviceApiKey {
success
message
code
api {
invalidateNewDeviceApiKey {
success
message
code
}
}
}
"""
def test_graphql_invalidate_new_device_token_unauthorized(client, tokens_file):
def test_graphql_invalidate_new_device_token_unauthorized(
client,
tokens_file,
):
response = client.post(
"/graphql",
json={
"query": DELETE_TOKEN_MUTATION,
"query": INVALIDATE_NEW_DEVICE_KEY_MUTATION,
"variables": {
"device": "test_token",
},
@ -266,22 +293,26 @@ def test_graphql_invalidate_new_device_token_unauthorized(client, tokens_file):
assert response.json()["data"] is None
def test_graphql_get_and_delete_new_device_key(authorized_client, tokens_file):
def test_graphql_get_and_delete_new_device_key(
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
json={"query": NEW_DEVICE_KEY_MUTATION},
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["getNewDeviceApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["code"] == 200
assert (
response.json()["data"]["getNewDeviceApiKey"]["key"].split(" ").__len__() == 12
response.json()["data"]["api"]["getNewDeviceApiKey"]["key"].split(" ").__len__()
== 12
)
token = (
Mnemonic(language="english")
.to_entropy(response.json()["data"]["getNewDeviceApiKey"]["key"])
.to_entropy(response.json()["data"]["api"]["getNewDeviceApiKey"]["key"])
.hex()
)
assert read_json(tokens_file)["new_device"]["token"] == token
@ -291,35 +322,46 @@ def test_graphql_get_and_delete_new_device_key(authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["invalidateNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["invalidateNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["invalidateNewDeviceApiKey"]["code"] == 200
assert (
response.json()["data"]["api"]["invalidateNewDeviceApiKey"]["success"] is True
)
assert (
response.json()["data"]["api"]["invalidateNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["invalidateNewDeviceApiKey"]["code"] == 200
assert read_json(tokens_file) == TOKENS_FILE_CONTETS
AUTHORIZE_WITH_NEW_DEVICE_KEY_MUTATION = """
mutation AuthorizeWithNewDeviceKey($input: UseNewDeviceKeyInput!) {
authorizeWithNewDeviceApiKey(input: $input) {
success
message
code
token
api {
authorizeWithNewDeviceApiKey(input: $input) {
success
message
code
token
}
}
}
"""
def test_graphql_get_and_authorize_new_device(client, authorized_client, tokens_file):
def test_graphql_get_and_authorize_new_device(
client,
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
json={"query": NEW_DEVICE_KEY_MUTATION},
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["getNewDeviceApiKey"]["code"] == 200
mnemonic_key = response.json()["data"]["getNewDeviceApiKey"]["key"]
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["code"] == 200
mnemonic_key = response.json()["data"]["api"]["getNewDeviceApiKey"]["key"]
assert mnemonic_key.split(" ").__len__() == 12
key = Mnemonic(language="english").to_entropy(mnemonic_key).hex()
assert read_json(tokens_file)["new_device"]["token"] == key
@ -337,17 +379,24 @@ def test_graphql_get_and_authorize_new_device(client, authorized_client, tokens_
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["success"] is True
assert (
response.json()["data"]["authorizeWithNewDeviceApiKey"]["message"] is not None
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["success"]
is True
)
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["code"] == 200
token = response.json()["data"]["authorizeWithNewDeviceApiKey"]["token"]
assert (
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["code"] == 200
token = response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["token"]
assert read_json(tokens_file)["tokens"][2]["token"] == token
assert read_json(tokens_file)["tokens"][2]["name"] == "new_device"
def test_graphql_authorize_new_device_with_invalid_key(client, tokens_file):
def test_graphql_authorize_new_device_with_invalid_key(
client,
tokens_file,
):
response = client.post(
"/graphql",
json={
@ -362,25 +411,33 @@ def test_graphql_authorize_new_device_with_invalid_key(client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["success"] is False
assert (
response.json()["data"]["authorizeWithNewDeviceApiKey"]["message"] is not None
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["success"]
is False
)
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["code"] == 404
assert (
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["code"] == 404
assert read_json(tokens_file) == TOKENS_FILE_CONTETS
def test_graphql_get_and_authorize_used_key(client, authorized_client, tokens_file):
def test_graphql_get_and_authorize_used_key(
client,
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
json={"query": NEW_DEVICE_KEY_MUTATION},
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["getNewDeviceApiKey"]["code"] == 200
mnemonic_key = response.json()["data"]["getNewDeviceApiKey"]["key"]
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["code"] == 200
mnemonic_key = response.json()["data"]["api"]["getNewDeviceApiKey"]["key"]
assert mnemonic_key.split(" ").__len__() == 12
key = Mnemonic(language="english").to_entropy(mnemonic_key).hex()
assert read_json(tokens_file)["new_device"]["token"] == key
@ -398,14 +455,18 @@ def test_graphql_get_and_authorize_used_key(client, authorized_client, tokens_fi
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["success"] is True
assert (
response.json()["data"]["authorizeWithNewDeviceApiKey"]["message"] is not None
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["success"]
is True
)
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["code"] == 200
assert (
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["code"] == 200
assert (
read_json(tokens_file)["tokens"][2]["token"]
== response.json()["data"]["authorizeWithNewDeviceApiKey"]["token"]
== response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["token"]
)
assert read_json(tokens_file)["tokens"][2]["name"] == "new_token"
@ -415,7 +476,7 @@ def test_graphql_get_and_authorize_used_key(client, authorized_client, tokens_fi
"query": AUTHORIZE_WITH_NEW_DEVICE_KEY_MUTATION,
"variables": {
"input": {
"key": mnemonic_key,
"key": NEW_DEVICE_KEY_MUTATION,
"deviceName": "test_token2",
}
},
@ -423,16 +484,22 @@ def test_graphql_get_and_authorize_used_key(client, authorized_client, tokens_fi
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["success"] is False
assert (
response.json()["data"]["authorizeWithNewDeviceApiKey"]["message"] is not None
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["success"]
is False
)
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["code"] == 404
assert (
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["code"] == 404
assert read_json(tokens_file)["tokens"].__len__() == 3
def test_graphql_get_and_authorize_key_after_12_minutes(
client, authorized_client, tokens_file
client,
authorized_client,
tokens_file,
):
response = authorized_client.post(
"/graphql",
@ -440,15 +507,16 @@ def test_graphql_get_and_authorize_key_after_12_minutes(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["getNewDeviceApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewDeviceApiKey"]["code"] == 200
assert (
response.json()["data"]["getNewDeviceApiKey"]["key"].split(" ").__len__() == 12
response.json()["data"]["api"]["getNewDeviceApiKey"]["key"].split(" ").__len__()
== 12
)
key = (
Mnemonic(language="english")
.to_entropy(response.json()["data"]["getNewDeviceApiKey"]["key"])
.to_entropy(response.json()["data"]["api"]["getNewDeviceApiKey"]["key"])
.hex()
)
assert read_json(tokens_file)["new_device"]["token"] == key
@ -473,14 +541,21 @@ def test_graphql_get_and_authorize_key_after_12_minutes(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["success"] is False
assert (
response.json()["data"]["authorizeWithNewDeviceApiKey"]["message"] is not None
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["success"]
is False
)
assert response.json()["data"]["authorizeWithNewDeviceApiKey"]["code"] == 404
assert (
response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["message"]
is not None
)
assert response.json()["data"]["api"]["authorizeWithNewDeviceApiKey"]["code"] == 404
def test_graphql_authorize_without_token(client, tokens_file):
def test_graphql_authorize_without_token(
client,
tokens_file,
):
response = client.post(
"/graphql",
json={

@ -57,22 +57,26 @@ def test_graphql_recovery_key_status_when_none_exists(authorized_client, tokens_
API_RECOVERY_KEY_GENERATE_MUTATION = """
mutation TestGenerateRecoveryKey($limits: RecoveryKeyLimitsInput) {
getNewRecoveryApiKey(limits: $limits) {
success
message
code
key
api {
getNewRecoveryApiKey(limits: $limits) {
success
message
code
key
}
}
}
"""
API_RECOVERY_KEY_USE_MUTATION = """
mutation TestUseRecoveryKey($input: UseRecoveryKeyInput!) {
useRecoveryApiKey(input: $input) {
success
message
code
token
api {
useRecoveryApiKey(input: $input) {
success
message
code
token
}
}
}
"""
@ -87,18 +91,20 @@ def test_graphql_generate_recovery_key(client, authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is not None
assert (
response.json()["data"]["getNewRecoveryApiKey"]["key"].split(" ").__len__()
response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"]
.split(" ")
.__len__()
== 18
)
assert read_json(tokens_file)["recovery_token"] is not None
time_generated = read_json(tokens_file)["recovery_token"]["date"]
assert time_generated is not None
key = response.json()["data"]["getNewRecoveryApiKey"]["key"]
key = response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"]
assert (
datetime.datetime.strptime(time_generated, "%Y-%m-%dT%H:%M:%S.%f")
- datetime.timedelta(seconds=5)
@ -136,12 +142,12 @@ def test_graphql_generate_recovery_key(client, authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
assert (
response.json()["data"]["useRecoveryApiKey"]["token"]
response.json()["data"]["api"]["useRecoveryApiKey"]["token"]
== read_json(tokens_file)["tokens"][2]["token"]
)
assert read_json(tokens_file)["tokens"][2]["name"] == "new_test_token"
@ -161,12 +167,12 @@ def test_graphql_generate_recovery_key(client, authorized_client, tokens_file):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
assert (
response.json()["data"]["useRecoveryApiKey"]["token"]
response.json()["data"]["api"]["useRecoveryApiKey"]["token"]
== read_json(tokens_file)["tokens"][3]["token"]
)
assert read_json(tokens_file)["tokens"][3]["name"] == "new_test_token2"
@ -190,17 +196,19 @@ def test_graphql_generate_recovery_key_with_expiration_date(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is not None
assert (
response.json()["data"]["getNewRecoveryApiKey"]["key"].split(" ").__len__()
response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"]
.split(" ")
.__len__()
== 18
)
assert read_json(tokens_file)["recovery_token"] is not None
key = response.json()["data"]["getNewRecoveryApiKey"]["key"]
key = response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"]
assert read_json(tokens_file)["recovery_token"]["expiration"] == expiration_date_str
assert read_json(tokens_file)["recovery_token"]["token"] == mnemonic_to_hex(key)
@ -246,12 +254,12 @@ def test_graphql_generate_recovery_key_with_expiration_date(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
assert (
response.json()["data"]["useRecoveryApiKey"]["token"]
response.json()["data"]["api"]["useRecoveryApiKey"]["token"]
== read_json(tokens_file)["tokens"][2]["token"]
)
@ -270,12 +278,12 @@ def test_graphql_generate_recovery_key_with_expiration_date(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
assert (
response.json()["data"]["useRecoveryApiKey"]["token"]
response.json()["data"]["api"]["useRecoveryApiKey"]["token"]
== read_json(tokens_file)["tokens"][3]["token"]
)
@ -299,10 +307,10 @@ def test_graphql_generate_recovery_key_with_expiration_date(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is False
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 404
assert response.json()["data"]["useRecoveryApiKey"]["token"] is None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is False
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 404
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is None
assert read_json(tokens_file)["tokens"] == new_data["tokens"]
@ -345,10 +353,10 @@ def test_graphql_generate_recovery_key_with_expiration_in_the_past(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is None
assert "recovery_token" not in read_json(tokens_file)
@ -393,12 +401,12 @@ def test_graphql_generate_recovery_key_with_limited_uses(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is not None
mnemonic_key = response.json()["data"]["getNewRecoveryApiKey"]["key"]
mnemonic_key = response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"]
key = mnemonic_to_hex(mnemonic_key)
assert read_json(tokens_file)["recovery_token"]["token"] == key
@ -433,10 +441,10 @@ def test_graphql_generate_recovery_key_with_limited_uses(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
# Try to get token status
response = authorized_client.post(
@ -467,10 +475,10 @@ def test_graphql_generate_recovery_key_with_limited_uses(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["useRecoveryApiKey"]["token"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is True
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 200
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is not None
# Try to get token status
response = authorized_client.post(
@ -501,10 +509,10 @@ def test_graphql_generate_recovery_key_with_limited_uses(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["useRecoveryApiKey"]["success"] is False
assert response.json()["data"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["useRecoveryApiKey"]["code"] == 404
assert response.json()["data"]["useRecoveryApiKey"]["token"] is None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["success"] is False
assert response.json()["data"]["api"]["useRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["useRecoveryApiKey"]["code"] == 404
assert response.json()["data"]["api"]["useRecoveryApiKey"]["token"] is None
def test_graphql_generate_recovery_key_with_negative_uses(
@ -524,10 +532,10 @@ def test_graphql_generate_recovery_key_with_negative_uses(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is None
def test_graphql_generate_recovery_key_with_zero_uses(authorized_client, tokens_file):
@ -545,7 +553,7 @@ def test_graphql_generate_recovery_key_with_zero_uses(authorized_client, tokens_
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["getNewRecoveryApiKey"]["key"] is None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["success"] is False
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["message"] is not None
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["code"] == 400
assert response.json()["data"]["api"]["getNewRecoveryApiKey"]["key"] is None

@ -0,0 +1,647 @@
import pytest
import os.path as path
from os import makedirs
from os import remove
from os import listdir
from os import urandom
from datetime import datetime, timedelta, timezone
from subprocess import Popen
import selfprivacy_api.services as services
from selfprivacy_api.services import Service
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services.test_service import DummyService
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.backup import RestoreStrategy
from selfprivacy_api.jobs import Jobs, JobStatus
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup import Backups
import selfprivacy_api.backup.providers as providers
from selfprivacy_api.backup.providers import AbstractBackupProvider
from selfprivacy_api.backup.providers.backblaze import Backblaze
from selfprivacy_api.backup.util import sync
from selfprivacy_api.backup.backuppers.restic_backupper import ResticBackupper
from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
from selfprivacy_api.backup.tasks import start_backup, restore_snapshot
from selfprivacy_api.backup.storage import Storage
from selfprivacy_api.backup.jobs import get_backup_job
TESTFILE_BODY = "testytest!"
TESTFILE_2_BODY = "testissimo!"
REPO_NAME = "test_backup"
@pytest.fixture(scope="function")
def backups(tmpdir):
Backups.reset()
test_repo_path = path.join(tmpdir, "totallyunrelated")
Backups.set_localfile_repo(test_repo_path)
Jobs.reset()
@pytest.fixture()
def backups_backblaze(generic_userdata):
Backups.reset(reset_json=False)
@pytest.fixture()
def raw_dummy_service(tmpdir):
dirnames = ["test_service", "also_test_service"]
service_dirs = []
for d in dirnames:
service_dir = path.join(tmpdir, d)
makedirs(service_dir)
service_dirs.append(service_dir)
testfile_path_1 = path.join(service_dirs[0], "testfile.txt")
with open(testfile_path_1, "w") as file:
file.write(TESTFILE_BODY)
testfile_path_2 = path.join(service_dirs[1], "testfile2.txt")
with open(testfile_path_2, "w") as file:
file.write(TESTFILE_2_BODY)
# we subclass here so that we do not have to change get_folders() much
class TestDummyService(DummyService, folders=service_dirs):
pass
service = TestDummyService()
return service
@pytest.fixture()
def dummy_service(tmpdir, backups, raw_dummy_service) -> Service:
service = raw_dummy_service
repo_path = path.join(tmpdir, "test_repo")
assert not path.exists(repo_path)
Backups.init_repo()
# register our service
services.services.append(service)
assert get_service_by_id(service.get_id()) is not None
yield service
# cleanup, because apparently it matters with respect to tasks
services.services.remove(service)
@pytest.fixture()
def memory_backup() -> AbstractBackupProvider:
ProviderClass = providers.get_provider(BackupProvider.MEMORY)
assert ProviderClass is not None
memory_provider = ProviderClass(login="", key="")
assert memory_provider is not None
return memory_provider
@pytest.fixture()
def file_backup(tmpdir) -> AbstractBackupProvider:
test_repo_path = path.join(tmpdir, "test_repo")
ProviderClass = providers.get_provider(BackupProvider.FILE)
assert ProviderClass is not None
provider = ProviderClass(location=test_repo_path)
assert provider is not None
return provider
def test_config_load(generic_userdata):
Backups.reset(reset_json=False)
provider = Backups.provider()
assert provider is not None
assert isinstance(provider, Backblaze)
assert provider.login == "ID"
assert provider.key == "KEY"
assert provider.location == "selfprivacy"
assert provider.backupper.account == "ID"
assert provider.backupper.key == "KEY"
def test_json_reset(generic_userdata):
Backups.reset(reset_json=False)
provider = Backups.provider()
assert provider is not None
assert isinstance(provider, Backblaze)
assert provider.login == "ID"
assert provider.key == "KEY"
assert provider.location == "selfprivacy"
Backups.reset()
provider = Backups.provider()
assert provider is not None
assert isinstance(provider, AbstractBackupProvider)
assert provider.login == ""
assert provider.key == ""
assert provider.location == ""
assert provider.repo_id == ""
def test_select_backend():
provider = providers.get_provider(BackupProvider.BACKBLAZE)
assert provider is not None
assert provider == Backblaze
def test_file_backend_init(file_backup):
file_backup.backupper.init()
def test_backup_simple_file(raw_dummy_service, file_backup):
# temporarily incomplete
service = raw_dummy_service
assert service is not None
assert file_backup is not None
name = service.get_id()
file_backup.backupper.init()
def test_backup_service(dummy_service, backups):
service_id = dummy_service.get_id()
assert_job_finished(f"services.{service_id}.backup", count=0)
assert Backups.get_last_backed_up(dummy_service) is None
Backups.back_up(dummy_service)
now = datetime.now(timezone.utc)
date = Backups.get_last_backed_up(dummy_service)
assert date is not None
assert now > date
assert now - date < timedelta(minutes=1)
assert_job_finished(f"services.{service_id}.backup", count=1)
def test_no_repo(memory_backup):
with pytest.raises(ValueError):
assert memory_backup.backupper.get_snapshots() == []
def test_one_snapshot(backups, dummy_service):
Backups.back_up(dummy_service)
snaps = Backups.get_snapshots(dummy_service)
assert len(snaps) == 1
snap = snaps[0]
assert snap.service_name == dummy_service.get_id()
def test_backup_returns_snapshot(backups, dummy_service):
service_folders = dummy_service.get_folders()
provider = Backups.provider()
name = dummy_service.get_id()
snapshot = provider.backupper.start_backup(service_folders, name)
assert snapshot.id is not None
assert snapshot.service_name == name
assert snapshot.created_at is not None
def folder_files(folder):
# listdir() never yields None, so no extra filtering is needed
return [path.join(folder, filename) for filename in listdir(folder)]
def service_files(service):
result = []
for service_folder in service.get_folders():
result.extend(folder_files(service_folder))
return result
def test_restore(backups, dummy_service):
paths_to_nuke = service_files(dummy_service)
contents = []
for service_file in paths_to_nuke:
with open(service_file, "r") as file:
contents.append(file.read())
Backups.back_up(dummy_service)
snap = Backups.get_snapshots(dummy_service)[0]
assert snap is not None
for p in paths_to_nuke:
assert path.exists(p)
remove(p)
assert not path.exists(p)
Backups._restore_service_from_snapshot(dummy_service, snap.id)
for p, content in zip(paths_to_nuke, contents):
assert path.exists(p)
with open(p, "r") as file:
assert file.read() == content
def test_sizing(backups, dummy_service):
Backups.back_up(dummy_service)
snap = Backups.get_snapshots(dummy_service)[0]
size = Backups.snapshot_restored_size(snap.id)
assert size is not None
assert size > 0
def test_init_tracking(backups, raw_dummy_service):
assert Backups.is_initted() is False
Backups.init_repo()
assert Backups.is_initted() is True
def finished_jobs():
return [job for job in Jobs.get_jobs() if job.status is JobStatus.FINISHED]
def assert_job_finished(job_type, count):
finished_types = [job.type_id for job in finished_jobs()]
assert finished_types.count(job_type) == count
def assert_job_has_run(job_type):
job = [job for job in finished_jobs() if job.type_id == job_type][0]
assert JobStatus.RUNNING in Jobs.status_updates(job)
def job_progress_updates(job_type):
job = [job for job in finished_jobs() if job.type_id == job_type][0]
return Jobs.progress_updates(job)
def assert_job_had_progress(job_type):
assert len(job_progress_updates(job_type)) > 0
def make_large_file(filepath: str, num_bytes: int):
with open(filepath, "wb") as file:
file.write(urandom(num_bytes))
def test_snapshots_by_id(backups, dummy_service):
snap1 = Backups.back_up(dummy_service)
snap2 = Backups.back_up(dummy_service)
snap3 = Backups.back_up(dummy_service)
assert snap2.id is not None
assert snap2.id != ""
assert len(Backups.get_snapshots(dummy_service)) == 3
assert Backups.get_snapshot_by_id(snap2.id).id == snap2.id
@pytest.fixture(params=["instant_server_stop", "delayed_server_stop"])
def simulated_service_stopping_delay(request) -> float:
if request.param == "instant_server_stop":
return 0.0
else:
return 0.3
def test_backup_service_task(backups, dummy_service, simulated_service_stopping_delay):
dummy_service.set_delay(simulated_service_stopping_delay)
handle = start_backup(dummy_service)
handle(blocking=True)
snaps = Backups.get_snapshots(dummy_service)
assert len(snaps) == 1
service_id = dummy_service.get_id()
job_type_id = f"services.{service_id}.backup"
assert_job_finished(job_type_id, count=1)
assert_job_has_run(job_type_id)
assert_job_had_progress(job_type_id)
def test_forget_snapshot(backups, dummy_service):
snap1 = Backups.back_up(dummy_service)
snap2 = Backups.back_up(dummy_service)
assert len(Backups.get_snapshots(dummy_service)) == 2
Backups.forget_snapshot(snap2)
assert len(Backups.get_snapshots(dummy_service)) == 1
Backups.force_snapshot_cache_reload()
assert len(Backups.get_snapshots(dummy_service)) == 1
assert Backups.get_snapshots(dummy_service)[0].id == snap1.id
Backups.forget_snapshot(snap1)
assert len(Backups.get_snapshots(dummy_service)) == 0
def test_forget_nonexistent_snapshot(backups, dummy_service):
bogus = Snapshot(
id="gibberjibber", service_name="nohoho", created_at=datetime.now(timezone.utc)
)
with pytest.raises(ValueError):
Backups.forget_snapshot(bogus)
def test_backup_larger_file(backups, dummy_service):
largefile_path = path.join(dummy_service.get_folders()[0], "LARGEFILE")
mega = 2**20
make_large_file(largefile_path, 100 * mega)
handle = start_backup(dummy_service)
handle(blocking=True)
# results will differ slightly between machines; if this test misbehaves on yours, consider dropping it
service_id = dummy_service.get_id()
job_type_id = f"services.{service_id}.backup"
assert_job_finished(job_type_id, count=1)
assert_job_has_run(job_type_id)
updates = job_progress_updates(job_type_id)
assert len(updates) > 3
assert updates[int((len(updates) - 1) / 2.0)] > 10
# clean up a bit
remove(largefile_path)
@pytest.fixture(params=["verify", "inplace"])
def restore_strategy(request) -> RestoreStrategy:
if request.param == "verify":
return RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
else:
return RestoreStrategy.INPLACE
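# The enum names suggest the semantics: DOWNLOAD_VERIFY_OVERWRITE presumably
# restores into a temporary copy and verifies it before replacing the live
# folders, while INPLACE writes straight over them, trading safety for the
# extra disk space the download would need.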
def test_restore_snapshot_task(
backups, dummy_service, restore_strategy, simulated_service_stopping_delay
):
dummy_service.set_delay(simulated_service_stopping_delay)
Backups.back_up(dummy_service)
snaps = Backups.get_snapshots(dummy_service)
assert len(snaps) == 1
paths_to_nuke = service_files(dummy_service)
contents = []
for service_file in paths_to_nuke:
with open(service_file, "r") as file:
contents.append(file.read())
for p in paths_to_nuke:
remove(p)
handle = restore_snapshot(snaps[0], restore_strategy)
handle(blocking=True)
for p, content in zip(paths_to_nuke, contents):
assert path.exists(p)
with open(p, "r") as file:
assert file.read() == content
snaps = Backups.get_snapshots(dummy_service)
assert len(snaps) == 1
def test_autobackup_enable_service(backups, dummy_service):
assert not Backups.is_autobackup_enabled(dummy_service)
Backups.enable_autobackup(dummy_service)
assert Backups.is_autobackup_enabled(dummy_service)
Backups.disable_autobackup(dummy_service)
assert not Backups.is_autobackup_enabled(dummy_service)
def test_autobackup_enable_service_storage(backups, dummy_service):
assert len(Storage.services_with_autobackup()) == 0
Backups.enable_autobackup(dummy_service)
assert len(Storage.services_with_autobackup()) == 1
assert Storage.services_with_autobackup()[0] == dummy_service.get_id()
Backups.disable_autobackup(dummy_service)
assert len(Storage.services_with_autobackup()) == 0
def test_set_autobackup_period(backups):
assert Backups.autobackup_period_minutes() is None
Backups.set_autobackup_period_minutes(2)
assert Backups.autobackup_period_minutes() == 2
Backups.disable_all_autobackup()
assert Backups.autobackup_period_minutes() is None
Backups.set_autobackup_period_minutes(3)
assert Backups.autobackup_period_minutes() == 3
Backups.set_autobackup_period_minutes(0)
assert Backups.autobackup_period_minutes() is None
Backups.set_autobackup_period_minutes(3)
assert Backups.autobackup_period_minutes() == 3
Backups.set_autobackup_period_minutes(-1)
assert Backups.autobackup_period_minutes() is None
def test_no_default_autobackup(backups, dummy_service):
now = datetime.now(timezone.utc)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
def test_autobackup_timer_periods(backups, dummy_service):
now = datetime.now(timezone.utc)
backup_period = 13 # minutes
Backups.enable_autobackup(dummy_service)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
Backups.set_autobackup_period_minutes(backup_period)
assert Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert Backups.is_time_to_backup(now)
Backups.set_autobackup_period_minutes(0)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
def test_autobackup_timer_enabling(backups, dummy_service):
now = datetime.now(timezone.utc)
backup_period = 13 # minutes
Backups.set_autobackup_period_minutes(backup_period)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
Backups.enable_autobackup(dummy_service)
assert Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert Backups.is_time_to_backup(now)
Backups.disable_autobackup(dummy_service)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
def test_autobackup_timing(backups, dummy_service):
backup_period = 13 # minutes
now = datetime.now(timezone.utc)
Backups.enable_autobackup(dummy_service)
Backups.set_autobackup_period_minutes(backup_period)
assert Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert Backups.is_time_to_backup(now)
Backups.back_up(dummy_service)
now = datetime.now(timezone.utc)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), now)
assert not Backups.is_time_to_backup(now)
past = datetime.now(timezone.utc) - timedelta(minutes=1)
assert not Backups.is_time_to_backup_service(dummy_service.get_id(), past)
assert not Backups.is_time_to_backup(past)
future = datetime.now(timezone.utc) + timedelta(minutes=backup_period + 2)
assert Backups.is_time_to_backup_service(dummy_service.get_id(), future)
assert Backups.is_time_to_backup(future)
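# A minimal, self-contained sketch of the period rule the timer tests above
# pin down; is_backup_due() is an illustrative stand-in, not the real
# Backups internals.
from datetime import datetime, timedelta
from typing import Optional

def is_backup_due(
    last_backup: Optional[datetime],
    period_minutes: Optional[int],
    now: datetime,
) -> bool:
    if period_minutes is None or period_minutes <= 0:
        return False  # disabled, as after set_autobackup_period_minutes(0)
    if last_backup is None:
        return True  # never backed up yet: due immediately
    return now >= last_backup + timedelta(minutes=period_minutes)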
# Storage
def test_snapshots_caching(backups, dummy_service):
Backups.back_up(dummy_service)
# indirectly check that repeated reads hit the redis cache instead of shelling out
start = datetime.now()
for i in range(10):
snapshots = Backups.get_snapshots(dummy_service)
assert len(snapshots) == 1
assert datetime.now() - start < timedelta(seconds=0.5)
cached_snapshots = Storage.get_cached_snapshots()
assert len(cached_snapshots) == 1
Storage.delete_cached_snapshot(cached_snapshots[0])
cached_snapshots = Storage.get_cached_snapshots()
assert len(cached_snapshots) == 0
snapshots = Backups.get_snapshots(dummy_service)
assert len(snapshots) == 1
cached_snapshots = Storage.get_cached_snapshots()
assert len(cached_snapshots) == 1
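# The read path behind the wall-clock assertion above, sketched with an
# in-memory attribute in place of redis (names invented for illustration):
class SnapshotCacheSketch:
    def __init__(self, slow_backend_call):
        self.slow_backend_call = slow_backend_call  # stands in for restic
        self.cache = None

    def get_snapshots(self):
        if self.cache is None:
            self.cache = list(self.slow_backend_call())  # miss: repopulate
        return self.cache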
# Storage
def test_init_tracking_caching(backups, raw_dummy_service):
assert Storage.has_init_mark() is False
Storage.mark_as_init()
assert Storage.has_init_mark() is True
assert Backups.is_initted() is True
# Storage
def test_init_tracking_caching2(backups, raw_dummy_service):
assert Storage.has_init_mark() is False
Backups.init_repo()
assert Storage.has_init_mark() is True
# Storage
def test_provider_storage(backups_backblaze):
provider = Backups.provider()
assert provider is not None
assert isinstance(provider, Backblaze)
assert provider.login == "ID"
assert provider.key == "KEY"
Storage.store_provider(provider)
restored_provider = Backups._load_provider_redis()
assert isinstance(restored_provider, Backblaze)
assert restored_provider.login == "ID"
assert restored_provider.key == "KEY"
def test_services_to_back_up(backups, dummy_service):
backup_period = 13 # minutes
now = datetime.now(timezone.utc)
Backups.enable_autobackup(dummy_service)
Backups.set_autobackup_period_minutes(backup_period)
services_to_back_up = Backups.services_to_back_up(now)
assert len(services_to_back_up) == 1
assert services_to_back_up[0].get_id() == dummy_service.get_id()
def test_sync(dummy_service):
src = dummy_service.get_folders()[0]
dst = dummy_service.get_folders()[1]
old_files_src = set(listdir(src))
old_files_dst = set(listdir(dst))
assert old_files_src != old_files_dst
sync(src, dst)
new_files_src = set(listdir(src))
new_files_dst = set(listdir(dst))
assert new_files_src == old_files_src
assert new_files_dst == new_files_src
def test_sync_nonexistent_src(dummy_service):
src = "/var/lib/nonexistentFluffyBunniesOfUnix"
dst = dummy_service.get_folders()[1]
with pytest.raises(ValueError):
sync(src, dst)
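# A plausible shape for the sync() wrapper under test, assuming it validates
# the source and then shells out to `rclone sync`; the real implementation in
# selfprivacy_api may differ in flags and error handling.
import subprocess
from os import path

def sync_sketch(src: str, dst: str) -> None:
    if not path.exists(src):
        raise ValueError(f"source directory does not exist: {src}")
    subprocess.run(["rclone", "sync", src, dst], check=True)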
# Restic lowlevel
def test_mount_umount(backups, dummy_service, tmpdir):
Backups.back_up(dummy_service)
backupper = Backups.provider().backupper
assert isinstance(backupper, ResticBackupper)
mountpoint = tmpdir / "mount"
makedirs(mountpoint)
assert path.exists(mountpoint)
assert len(listdir(mountpoint)) == 0
handle = backupper.mount_repo(mountpoint)
assert len(listdir(mountpoint)) != 0
backupper.unmount_repo(mountpoint)
assert len(listdir(mountpoint)) == 0
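# What mount_repo/unmount_repo most likely wrap: `restic mount` stays in the
# foreground, so it has to run as a child process, and the mountpoint is
# released with plain `umount`. Repo and password plumbing here is
# illustrative only.
import os
import subprocess

def mount_repo_sketch(repo: str, password: str, mountpoint: str) -> subprocess.Popen:
    env = {**os.environ, "RESTIC_PASSWORD": password}
    return subprocess.Popen(["restic", "-r", repo, "mount", mountpoint], env=env)

def unmount_repo_sketch(mountpoint: str) -> None:
    subprocess.run(["umount", mountpoint], check=True)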
def test_move_blocks_backups(backups, dummy_service, restore_strategy):
snap = Backups.back_up(dummy_service)
job = Jobs.add(
type_id=f"services.{dummy_service.get_id()}.move",
name="Move Dummy",
description="Moving Dummy data to the Rainbow Land",
status=JobStatus.RUNNING,
)
with pytest.raises(ValueError):
Backups.back_up(dummy_service)
with pytest.raises(ValueError):
Backups.restore_snapshot(snap, restore_strategy)

View File

@ -0,0 +1,38 @@
from selfprivacy_api.backup.local_secret import LocalBackupSecret
from pytest import fixture
@fixture()
def localsecret():
LocalBackupSecret._full_reset()
return LocalBackupSecret
def test_local_secret_firstget(localsecret):
assert not LocalBackupSecret.exists()
secret = LocalBackupSecret.get()
assert LocalBackupSecret.exists()
assert secret is not None
# making sure it does not reset again
secret2 = LocalBackupSecret.get()
assert LocalBackupSecret.exists()
assert secret2 == secret
def test_local_secret_reset(localsecret):
secret1 = LocalBackupSecret.get()
LocalBackupSecret.reset()
secret2 = LocalBackupSecret.get()
assert secret2 is not None
assert secret2 != secret1
def test_local_secret_set(localsecret):
newsecret = "great and totally safe secret"
oldsecret = LocalBackupSecret.get()
assert oldsecret != newsecret
LocalBackupSecret.set(newsecret)
assert LocalBackupSecret.get() == newsecret
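# An in-memory sketch of the contract these tests pin down (lazy generation on
# first get, regeneration on reset, explicit set). The real LocalBackupSecret
# persists the secret somewhere; this stand-in only mirrors the behaviour.
import secrets
from typing import Optional

class LocalSecretSketch:
    _secret: Optional[str] = None

    @classmethod
    def exists(cls) -> bool:
        return cls._secret is not None

    @classmethod
    def get(cls) -> str:
        if cls._secret is None:
            cls._secret = secrets.token_urlsafe(32)
        return cls._secret

    @classmethod
    def set(cls, new_secret: str) -> None:
        cls._secret = new_secret

    @classmethod
    def reset(cls) -> None:
        cls._secret = secrets.token_urlsafe(32)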

View File

@ -44,13 +44,15 @@ def some_users(mocker, datadir):
API_CREATE_SSH_KEY_MUTATION = """
mutation addSshKey($sshInput: SshMutationInput!) {
addSshKey(sshInput: $sshInput) {
success
message
code
user {
username
sshKeys
users {
addSshKey(sshInput: $sshInput) {
success
message
code
user {
username
sshKeys
}
}
}
}
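# The recurring change in this file (and in the system tests below) is the
# same: every mutation moves under a `users` or `system` namespace, so each
# response path gains one level. A stand-alone illustration with made-up data:
old_shape = {"data": {"addSshKey": {"code": 201}}}
new_shape = {"data": {"users": {"addSshKey": {"code": 201}}}}
assert new_shape["data"]["users"]["addSshKey"] == old_shape["data"]["addSshKey"]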
@ -90,12 +92,12 @@ def test_graphql_add_ssh_key(authorized_client, some_users, mock_subprocess_pope
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["addSshKey"]["code"] == 201
assert response.json()["data"]["addSshKey"]["message"] is not None
assert response.json()["data"]["addSshKey"]["success"] is True
assert response.json()["data"]["users"]["addSshKey"]["code"] == 201
assert response.json()["data"]["users"]["addSshKey"]["message"] is not None
assert response.json()["data"]["users"]["addSshKey"]["success"] is True
assert response.json()["data"]["addSshKey"]["user"]["username"] == "user1"
assert response.json()["data"]["addSshKey"]["user"]["sshKeys"] == [
assert response.json()["data"]["users"]["addSshKey"]["user"]["username"] == "user1"
assert response.json()["data"]["users"]["addSshKey"]["user"]["sshKeys"] == [
"ssh-rsa KEY user1@pc",
"ssh-rsa KEY test_key@pc",
]
@ -117,12 +119,12 @@ def test_graphql_add_root_ssh_key(authorized_client, some_users, mock_subprocess
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["addSshKey"]["code"] == 201
assert response.json()["data"]["addSshKey"]["message"] is not None
assert response.json()["data"]["addSshKey"]["success"] is True
assert response.json()["data"]["users"]["addSshKey"]["code"] == 201
assert response.json()["data"]["users"]["addSshKey"]["message"] is not None
assert response.json()["data"]["users"]["addSshKey"]["success"] is True
assert response.json()["data"]["addSshKey"]["user"]["username"] == "root"
assert response.json()["data"]["addSshKey"]["user"]["sshKeys"] == [
assert response.json()["data"]["users"]["addSshKey"]["user"]["username"] == "root"
assert response.json()["data"]["users"]["addSshKey"]["user"]["sshKeys"] == [
"ssh-ed25519 KEY test@pc",
"ssh-rsa KEY test_key@pc",
]
@ -144,12 +146,12 @@ def test_graphql_add_main_ssh_key(authorized_client, some_users, mock_subprocess
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["addSshKey"]["code"] == 201
assert response.json()["data"]["addSshKey"]["message"] is not None
assert response.json()["data"]["addSshKey"]["success"] is True
assert response.json()["data"]["users"]["addSshKey"]["code"] == 201
assert response.json()["data"]["users"]["addSshKey"]["message"] is not None
assert response.json()["data"]["users"]["addSshKey"]["success"] is True
assert response.json()["data"]["addSshKey"]["user"]["username"] == "tester"
assert response.json()["data"]["addSshKey"]["user"]["sshKeys"] == [
assert response.json()["data"]["users"]["addSshKey"]["user"]["username"] == "tester"
assert response.json()["data"]["users"]["addSshKey"]["user"]["sshKeys"] == [
"ssh-rsa KEY test@pc",
"ssh-rsa KEY test_key@pc",
]
@ -171,9 +173,9 @@ def test_graphql_add_bad_ssh_key(authorized_client, some_users, mock_subprocess_
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["addSshKey"]["code"] == 400
assert response.json()["data"]["addSshKey"]["message"] is not None
assert response.json()["data"]["addSshKey"]["success"] is False
assert response.json()["data"]["users"]["addSshKey"]["code"] == 400
assert response.json()["data"]["users"]["addSshKey"]["message"] is not None
assert response.json()["data"]["users"]["addSshKey"]["success"] is False
def test_graphql_add_ssh_key_nonexistent_user(
@ -194,20 +196,22 @@ def test_graphql_add_ssh_key_nonexistent_user(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["addSshKey"]["code"] == 404
assert response.json()["data"]["addSshKey"]["message"] is not None
assert response.json()["data"]["addSshKey"]["success"] is False
assert response.json()["data"]["users"]["addSshKey"]["code"] == 404
assert response.json()["data"]["users"]["addSshKey"]["message"] is not None
assert response.json()["data"]["users"]["addSshKey"]["success"] is False
API_REMOVE_SSH_KEY_MUTATION = """
mutation removeSshKey($sshInput: SshMutationInput!) {
removeSshKey(sshInput: $sshInput) {
success
message
code
user {
username
sshKeys
users {
removeSshKey(sshInput: $sshInput) {
success
message
code
user {
username
sshKeys
}
}
}
}
@ -247,12 +251,14 @@ def test_graphql_remove_ssh_key(authorized_client, some_users, mock_subprocess_p
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["removeSshKey"]["success"] is True
assert response.json()["data"]["users"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["users"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["users"]["removeSshKey"]["success"] is True
assert response.json()["data"]["removeSshKey"]["user"]["username"] == "user1"
assert response.json()["data"]["removeSshKey"]["user"]["sshKeys"] == []
assert (
response.json()["data"]["users"]["removeSshKey"]["user"]["username"] == "user1"
)
assert response.json()["data"]["users"]["removeSshKey"]["user"]["sshKeys"] == []
def test_graphql_remove_root_ssh_key(
@ -273,12 +279,14 @@ def test_graphql_remove_root_ssh_key(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["removeSshKey"]["success"] is True
assert response.json()["data"]["users"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["users"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["users"]["removeSshKey"]["success"] is True
assert response.json()["data"]["removeSshKey"]["user"]["username"] == "root"
assert response.json()["data"]["removeSshKey"]["user"]["sshKeys"] == []
assert (
response.json()["data"]["users"]["removeSshKey"]["user"]["username"] == "root"
)
assert response.json()["data"]["users"]["removeSshKey"]["user"]["sshKeys"] == []
def test_graphql_remove_main_ssh_key(
@ -299,12 +307,14 @@ def test_graphql_remove_main_ssh_key(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["removeSshKey"]["success"] is True
assert response.json()["data"]["users"]["removeSshKey"]["code"] == 200
assert response.json()["data"]["users"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["users"]["removeSshKey"]["success"] is True
assert response.json()["data"]["removeSshKey"]["user"]["username"] == "tester"
assert response.json()["data"]["removeSshKey"]["user"]["sshKeys"] == []
assert (
response.json()["data"]["users"]["removeSshKey"]["user"]["username"] == "tester"
)
assert response.json()["data"]["users"]["removeSshKey"]["user"]["sshKeys"] == []
def test_graphql_remove_nonexistent_ssh_key(
@ -325,9 +335,9 @@ def test_graphql_remove_nonexistent_ssh_key(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["removeSshKey"]["code"] == 404
assert response.json()["data"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["removeSshKey"]["success"] is False
assert response.json()["data"]["users"]["removeSshKey"]["code"] == 404
assert response.json()["data"]["users"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["users"]["removeSshKey"]["success"] is False
def test_graphql_remove_ssh_key_nonexistent_user(
@ -348,6 +358,6 @@ def test_graphql_remove_ssh_key_nonexistent_user(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["removeSshKey"]["code"] == 404
assert response.json()["data"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["removeSshKey"]["success"] is False
assert response.json()["data"]["users"]["removeSshKey"]["code"] == 404
assert response.json()["data"]["users"]["removeSshKey"]["message"] is not None
assert response.json()["data"]["users"]["removeSshKey"]["success"] is False

View File

@ -382,11 +382,13 @@ def test_graphql_get_timezone_on_undefined(authorized_client, undefined_config):
API_CHANGE_TIMEZONE_MUTATION = """
mutation changeTimezone($timezone: String!) {
changeTimezone(timezone: $timezone) {
success
message
code
timezone
system {
changeTimezone(timezone: $timezone) {
success
message
code
timezone
}
}
}
"""
@ -420,10 +422,13 @@ def test_graphql_change_timezone(authorized_client, turned_on):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeTimezone"]["success"] is True
assert response.json()["data"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["changeTimezone"]["code"] == 200
assert response.json()["data"]["changeTimezone"]["timezone"] == "Europe/Helsinki"
assert response.json()["data"]["system"]["changeTimezone"]["success"] is True
assert response.json()["data"]["system"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["system"]["changeTimezone"]["code"] == 200
assert (
response.json()["data"]["system"]["changeTimezone"]["timezone"]
== "Europe/Helsinki"
)
assert read_json(turned_on / "turned_on.json")["timezone"] == "Europe/Helsinki"
@ -440,10 +445,13 @@ def test_graphql_change_timezone_on_undefined(authorized_client, undefined_confi
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeTimezone"]["success"] is True
assert response.json()["data"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["changeTimezone"]["code"] == 200
assert response.json()["data"]["changeTimezone"]["timezone"] == "Europe/Helsinki"
assert response.json()["data"]["system"]["changeTimezone"]["success"] is True
assert response.json()["data"]["system"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["system"]["changeTimezone"]["code"] == 200
assert (
response.json()["data"]["system"]["changeTimezone"]["timezone"]
== "Europe/Helsinki"
)
assert (
read_json(undefined_config / "undefined.json")["timezone"] == "Europe/Helsinki"
)
@ -462,10 +470,10 @@ def test_graphql_change_timezone_without_timezone(authorized_client, turned_on):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeTimezone"]["success"] is False
assert response.json()["data"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["changeTimezone"]["code"] == 400
assert response.json()["data"]["changeTimezone"]["timezone"] is None
assert response.json()["data"]["system"]["changeTimezone"]["success"] is False
assert response.json()["data"]["system"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["system"]["changeTimezone"]["code"] == 400
assert response.json()["data"]["system"]["changeTimezone"]["timezone"] is None
assert read_json(turned_on / "turned_on.json")["timezone"] == "Europe/Moscow"
@ -482,10 +490,10 @@ def test_graphql_change_timezone_with_invalid_timezone(authorized_client, turned
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeTimezone"]["success"] is False
assert response.json()["data"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["changeTimezone"]["code"] == 400
assert response.json()["data"]["changeTimezone"]["timezone"] is None
assert response.json()["data"]["system"]["changeTimezone"]["success"] is False
assert response.json()["data"]["system"]["changeTimezone"]["message"] is not None
assert response.json()["data"]["system"]["changeTimezone"]["code"] == 400
assert response.json()["data"]["system"]["changeTimezone"]["timezone"] is None
assert read_json(turned_on / "turned_on.json")["timezone"] == "Europe/Moscow"
@ -589,12 +597,14 @@ def test_graphql_get_auto_upgrade_turned_off(authorized_client, turned_off):
API_CHANGE_AUTO_UPGRADE_SETTINGS = """
mutation changeServerSettings($settings: AutoUpgradeSettingsInput!) {
changeAutoUpgradeSettings(settings: $settings) {
success
message
code
enableAutoUpgrade
allowReboot
system {
changeAutoUpgradeSettings(settings: $settings) {
success
message
code
enableAutoUpgrade
allowReboot
}
}
}
"""
@ -634,14 +644,25 @@ def test_graphql_change_auto_upgrade(authorized_client, turned_on):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is False
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is True
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is True
)
assert read_json(turned_on / "turned_on.json")["autoUpgrade"]["enable"] is False
assert read_json(turned_on / "turned_on.json")["autoUpgrade"]["allowReboot"] is True
@ -662,14 +683,25 @@ def test_graphql_change_auto_upgrade_on_undefined(authorized_client, undefined_c
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is False
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is True
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is True
)
assert (
read_json(undefined_config / "undefined.json")["autoUpgrade"]["enable"] is False
)
@ -695,14 +727,25 @@ def test_graphql_change_auto_upgrade_without_vlaues(authorized_client, no_values
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is True
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is True
assert read_json(no_values / "no_values.json")["autoUpgrade"]["enable"] is True
assert read_json(no_values / "no_values.json")["autoUpgrade"]["allowReboot"] is True
@ -723,14 +766,25 @@ def test_graphql_change_auto_upgrade_turned_off(authorized_client, turned_off):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is True
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is True
assert read_json(turned_off / "turned_off.json")["autoUpgrade"]["enable"] is True
assert (
read_json(turned_off / "turned_off.json")["autoUpgrade"]["allowReboot"] is True
@ -752,14 +806,25 @@ def test_grphql_change_auto_upgrade_without_enable(authorized_client, turned_off
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is False
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is True
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is True
)
assert read_json(turned_off / "turned_off.json")["autoUpgrade"]["enable"] is False
assert (
read_json(turned_off / "turned_off.json")["autoUpgrade"]["allowReboot"] is True
@ -783,14 +848,25 @@ def test_graphql_change_auto_upgrade_without_allow_reboot(
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is False
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is False
)
assert read_json(turned_off / "turned_off.json")["autoUpgrade"]["enable"] is True
assert (
read_json(turned_off / "turned_off.json")["autoUpgrade"]["allowReboot"] is False
@ -810,14 +886,25 @@ def test_graphql_change_auto_upgrade_with_empty_input(authorized_client, turned_
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["success"] is True
assert response.json()["data"]["changeAutoUpgradeSettings"]["message"] is not None
assert response.json()["data"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["changeAutoUpgradeSettings"]["enableAutoUpgrade"]
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["success"]
is True
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["message"]
is not None
)
assert response.json()["data"]["system"]["changeAutoUpgradeSettings"]["code"] == 200
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"][
"enableAutoUpgrade"
]
is False
)
assert (
response.json()["data"]["system"]["changeAutoUpgradeSettings"]["allowReboot"]
is False
)
assert response.json()["data"]["changeAutoUpgradeSettings"]["allowReboot"] is False
assert read_json(turned_off / "turned_off.json")["autoUpgrade"]["enable"] is False
assert (
read_json(turned_off / "turned_off.json")["autoUpgrade"]["allowReboot"] is False
@ -826,10 +913,12 @@ def test_graphql_change_auto_upgrade_with_empty_input(authorized_client, turned_
API_PULL_SYSTEM_CONFIGURATION_MUTATION = """
mutation testPullSystemConfiguration {
pullRepositoryChanges {
success
message
code
system {
pullRepositoryChanges {
success
message
code
}
}
}
"""
@ -861,9 +950,12 @@ def test_graphql_pull_system_configuration(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["pullRepositoryChanges"]["success"] is True
assert response.json()["data"]["pullRepositoryChanges"]["message"] is not None
assert response.json()["data"]["pullRepositoryChanges"]["code"] == 200
assert response.json()["data"]["system"]["pullRepositoryChanges"]["success"] is True
assert (
response.json()["data"]["system"]["pullRepositoryChanges"]["message"]
is not None
)
assert response.json()["data"]["system"]["pullRepositoryChanges"]["code"] == 200
assert mock_subprocess_popen.call_count == 1
assert mock_subprocess_popen.call_args[0][0] == ["git", "pull"]
@ -886,9 +978,14 @@ def test_graphql_pull_system_broken_repo(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["pullRepositoryChanges"]["success"] is False
assert response.json()["data"]["pullRepositoryChanges"]["message"] is not None
assert response.json()["data"]["pullRepositoryChanges"]["code"] == 500
assert (
response.json()["data"]["system"]["pullRepositoryChanges"]["success"] is False
)
assert (
response.json()["data"]["system"]["pullRepositoryChanges"]["message"]
is not None
)
assert response.json()["data"]["system"]["pullRepositoryChanges"]["code"] == 500
assert mock_broken_service.call_count == 1
assert mock_os_chdir.call_count == 2

View File

@ -54,10 +54,12 @@ def mock_subprocess_check_output(mocker):
API_REBUILD_SYSTEM_MUTATION = """
mutation rebuildSystem {
runSystemRebuild {
success
message
code
system {
runSystemRebuild {
success
message
code
}
}
}
"""
@ -86,9 +88,9 @@ def test_graphql_system_rebuild(authorized_client, mock_subprocess_popen):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["runSystemRebuild"]["success"] is True
assert response.json()["data"]["runSystemRebuild"]["message"] is not None
assert response.json()["data"]["runSystemRebuild"]["code"] == 200
assert response.json()["data"]["system"]["runSystemRebuild"]["success"] is True
assert response.json()["data"]["system"]["runSystemRebuild"]["message"] is not None
assert response.json()["data"]["system"]["runSystemRebuild"]["code"] == 200
assert mock_subprocess_popen.call_count == 1
assert mock_subprocess_popen.call_args[0][0] == [
"systemctl",
@ -99,10 +101,12 @@ def test_graphql_system_rebuild(authorized_client, mock_subprocess_popen):
API_UPGRADE_SYSTEM_MUTATION = """
mutation upgradeSystem {
runSystemUpgrade {
success
message
code
system {
runSystemUpgrade {
success
message
code
}
}
}
"""
@ -131,9 +135,9 @@ def test_graphql_system_upgrade(authorized_client, mock_subprocess_popen):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["runSystemUpgrade"]["success"] is True
assert response.json()["data"]["runSystemUpgrade"]["message"] is not None
assert response.json()["data"]["runSystemUpgrade"]["code"] == 200
assert response.json()["data"]["system"]["runSystemUpgrade"]["success"] is True
assert response.json()["data"]["system"]["runSystemUpgrade"]["message"] is not None
assert response.json()["data"]["system"]["runSystemUpgrade"]["code"] == 200
assert mock_subprocess_popen.call_count == 1
assert mock_subprocess_popen.call_args[0][0] == [
"systemctl",
@ -144,10 +148,12 @@ def test_graphql_system_upgrade(authorized_client, mock_subprocess_popen):
API_ROLLBACK_SYSTEM_MUTATION = """
mutation rollbackSystem {
runSystemRollback {
success
message
code
system {
runSystemRollback {
success
message
code
}
}
}
"""
@ -176,9 +182,9 @@ def test_graphql_system_rollback(authorized_client, mock_subprocess_popen):
)
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["runSystemRollback"]["success"] is True
assert response.json()["data"]["runSystemRollback"]["message"] is not None
assert response.json()["data"]["runSystemRollback"]["code"] == 200
assert response.json()["data"]["system"]["runSystemRollback"]["success"] is True
assert response.json()["data"]["system"]["runSystemRollback"]["message"] is not None
assert response.json()["data"]["system"]["runSystemRollback"]["code"] == 200
assert mock_subprocess_popen.call_count == 1
assert mock_subprocess_popen.call_args[0][0] == [
"systemctl",
@ -189,10 +195,12 @@ def test_graphql_system_rollback(authorized_client, mock_subprocess_popen):
API_REBOOT_SYSTEM_MUTATION = """
mutation system {
rebootSystem {
success
message
code
system {
rebootSystem {
success
message
code
}
}
}
"""
@ -223,9 +231,9 @@ def test_graphql_reboot_system(authorized_client, mock_subprocess_popen):
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["rebootSystem"]["success"] is True
assert response.json()["data"]["rebootSystem"]["message"] is not None
assert response.json()["data"]["rebootSystem"]["code"] == 200
assert response.json()["data"]["system"]["rebootSystem"]["success"] is True
assert response.json()["data"]["system"]["rebootSystem"]["message"] is not None
assert response.json()["data"]["system"]["rebootSystem"]["code"] == 200
assert mock_subprocess_popen.call_count == 1
assert mock_subprocess_popen.call_args[0][0] == ["reboot"]

View File

@ -295,13 +295,15 @@ def test_graphql_get_nonexistent_user(
API_CREATE_USERS_MUTATION = """
mutation createUser($user: UserMutationInput!) {
createUser(user: $user) {
success
message
code
user {
username
sshKeys
users {
createUser(user: $user) {
success
message
code
user {
username
sshKeys
}
}
}
}
@ -341,12 +343,12 @@ def test_graphql_add_user(authorized_client, one_user, mock_subprocess_popen):
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 201
assert response.json()["data"]["createUser"]["success"] is True
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 201
assert response.json()["data"]["users"]["createUser"]["success"] is True
assert response.json()["data"]["createUser"]["user"]["username"] == "user2"
assert response.json()["data"]["createUser"]["user"]["sshKeys"] == []
assert response.json()["data"]["users"]["createUser"]["user"]["username"] == "user2"
assert response.json()["data"]["users"]["createUser"]["user"]["sshKeys"] == []
def test_graphql_add_undefined_settings(
@ -367,12 +369,12 @@ def test_graphql_add_undefined_settings(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 201
assert response.json()["data"]["createUser"]["success"] is True
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 201
assert response.json()["data"]["users"]["createUser"]["success"] is True
assert response.json()["data"]["createUser"]["user"]["username"] == "user2"
assert response.json()["data"]["createUser"]["user"]["sshKeys"] == []
assert response.json()["data"]["users"]["createUser"]["user"]["username"] == "user2"
assert response.json()["data"]["users"]["createUser"]["user"]["sshKeys"] == []
def test_graphql_add_without_password(
@ -393,11 +395,11 @@ def test_graphql_add_without_password(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 400
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 400
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"] is None
assert response.json()["data"]["users"]["createUser"]["user"] is None
def test_graphql_add_without_both(authorized_client, one_user, mock_subprocess_popen):
@ -416,11 +418,11 @@ def test_graphql_add_without_both(authorized_client, one_user, mock_subprocess_p
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 400
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 400
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"] is None
assert response.json()["data"]["users"]["createUser"]["user"] is None
@pytest.mark.parametrize("username", invalid_usernames)
@ -442,11 +444,11 @@ def test_graphql_add_system_username(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 409
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 409
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"] is None
assert response.json()["data"]["users"]["createUser"]["user"] is None
def test_graphql_add_existing_user(authorized_client, one_user, mock_subprocess_popen):
@ -465,13 +467,13 @@ def test_graphql_add_existing_user(authorized_client, one_user, mock_subprocess_
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 409
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 409
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"]["username"] == "user1"
assert response.json()["data"]["users"]["createUser"]["user"]["username"] == "user1"
assert (
response.json()["data"]["createUser"]["user"]["sshKeys"][0]
response.json()["data"]["users"]["createUser"]["user"]["sshKeys"][0]
== "ssh-rsa KEY user1@pc"
)
@ -492,13 +494,15 @@ def test_graphql_add_main_user(authorized_client, one_user, mock_subprocess_pope
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 409
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 409
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"]["username"] == "tester"
assert (
response.json()["data"]["createUser"]["user"]["sshKeys"][0]
response.json()["data"]["users"]["createUser"]["user"]["username"] == "tester"
)
assert (
response.json()["data"]["users"]["createUser"]["user"]["sshKeys"][0]
== "ssh-rsa KEY test@pc"
)
@ -518,11 +522,11 @@ def test_graphql_add_long_username(authorized_client, one_user, mock_subprocess_
)
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 400
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 400
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"] is None
assert response.json()["data"]["users"]["createUser"]["user"] is None
@pytest.mark.parametrize("username", ["", "1", "фыр", "user1@", "^-^"])
@ -544,19 +548,21 @@ def test_graphql_add_invalid_username(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["createUser"]["message"] is not None
assert response.json()["data"]["createUser"]["code"] == 400
assert response.json()["data"]["createUser"]["success"] is False
assert response.json()["data"]["users"]["createUser"]["message"] is not None
assert response.json()["data"]["users"]["createUser"]["code"] == 400
assert response.json()["data"]["users"]["createUser"]["success"] is False
assert response.json()["data"]["createUser"]["user"] is None
assert response.json()["data"]["users"]["createUser"]["user"] is None
API_DELETE_USER_MUTATION = """
mutation deleteUser($username: String!) {
deleteUser(username: $username) {
success
message
code
users {
deleteUser(username: $username) {
success
message
code
}
}
}
"""
@ -585,9 +591,9 @@ def test_graphql_delete_user(authorized_client, some_users, mock_subprocess_pope
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteUser"]["code"] == 200
assert response.json()["data"]["deleteUser"]["message"] is not None
assert response.json()["data"]["deleteUser"]["success"] is True
assert response.json()["data"]["users"]["deleteUser"]["code"] == 200
assert response.json()["data"]["users"]["deleteUser"]["message"] is not None
assert response.json()["data"]["users"]["deleteUser"]["success"] is True
@pytest.mark.parametrize("username", ["", "def"])
@ -604,9 +610,9 @@ def test_graphql_delete_nonexistent_users(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteUser"]["code"] == 404
assert response.json()["data"]["deleteUser"]["message"] is not None
assert response.json()["data"]["deleteUser"]["success"] is False
assert response.json()["data"]["users"]["deleteUser"]["code"] == 404
assert response.json()["data"]["users"]["deleteUser"]["message"] is not None
assert response.json()["data"]["users"]["deleteUser"]["success"] is False
@pytest.mark.parametrize("username", invalid_usernames)
@ -624,11 +630,11 @@ def test_graphql_delete_system_users(
assert response.json().get("data") is not None
assert (
response.json()["data"]["deleteUser"]["code"] == 404
or response.json()["data"]["deleteUser"]["code"] == 400
response.json()["data"]["users"]["deleteUser"]["code"] == 404
or response.json()["data"]["users"]["deleteUser"]["code"] == 400
)
assert response.json()["data"]["deleteUser"]["message"] is not None
assert response.json()["data"]["deleteUser"]["success"] is False
assert response.json()["data"]["users"]["deleteUser"]["message"] is not None
assert response.json()["data"]["users"]["deleteUser"]["success"] is False
def test_graphql_delete_main_user(authorized_client, some_users, mock_subprocess_popen):
@ -642,20 +648,22 @@ def test_graphql_delete_main_user(authorized_client, some_users, mock_subprocess
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["deleteUser"]["code"] == 400
assert response.json()["data"]["deleteUser"]["message"] is not None
assert response.json()["data"]["deleteUser"]["success"] is False
assert response.json()["data"]["users"]["deleteUser"]["code"] == 400
assert response.json()["data"]["users"]["deleteUser"]["message"] is not None
assert response.json()["data"]["users"]["deleteUser"]["success"] is False
API_UPDATE_USER_MUTATION = """
mutation updateUser($user: UserMutationInput!) {
updateUser(user: $user) {
success
message
code
user {
username
sshKeys
users {
updateUser(user: $user) {
success
message
code
user {
username
sshKeys
}
}
}
}
@ -695,12 +703,12 @@ def test_graphql_update_user(authorized_client, some_users, mock_subprocess_pope
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["updateUser"]["code"] == 200
assert response.json()["data"]["updateUser"]["message"] is not None
assert response.json()["data"]["updateUser"]["success"] is True
assert response.json()["data"]["users"]["updateUser"]["code"] == 200
assert response.json()["data"]["users"]["updateUser"]["message"] is not None
assert response.json()["data"]["users"]["updateUser"]["success"] is True
assert response.json()["data"]["updateUser"]["user"]["username"] == "user1"
assert response.json()["data"]["updateUser"]["user"]["sshKeys"] == [
assert response.json()["data"]["users"]["updateUser"]["user"]["username"] == "user1"
assert response.json()["data"]["users"]["updateUser"]["user"]["sshKeys"] == [
"ssh-rsa KEY user1@pc"
]
assert mock_subprocess_popen.call_count == 1
@ -724,9 +732,9 @@ def test_graphql_update_nonexistent_user(
assert response.status_code == 200
assert response.json().get("data") is not None
assert response.json()["data"]["updateUser"]["code"] == 404
assert response.json()["data"]["updateUser"]["message"] is not None
assert response.json()["data"]["updateUser"]["success"] is False
assert response.json()["data"]["users"]["updateUser"]["code"] == 404
assert response.json()["data"]["users"]["updateUser"]["message"] is not None
assert response.json()["data"]["users"]["updateUser"]["success"] is False
assert response.json()["data"]["updateUser"]["user"] is None
assert response.json()["data"]["users"]["updateUser"]["user"] is None
assert mock_subprocess_popen.call_count == 1

View File

@ -80,6 +80,29 @@ def test_jobs(jobs_with_one_job):
jobsmodule.JOB_EXPIRATION_SECONDS = backup
def test_finishing_equals_100(jobs_with_one_job):
jobs = jobs_with_one_job
test_job = jobs.get_jobs()[0]
assert not jobs.is_busy()
assert test_job.progress != 100
jobs.update(job=test_job, status=JobStatus.FINISHED)
assert test_job.progress == 100
def test_finishing_equals_100_unless_stated_otherwise(jobs_with_one_job):
jobs = jobs_with_one_job
test_job = jobs.get_jobs()[0]
assert not jobs.is_busy()
assert test_job.progress != 100
assert test_job.progress != 23
jobs.update(job=test_job, status=JobStatus.FINISHED, progress=23)
assert test_job.progress == 23
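# The rule the two tests above encode, in one function; this assumes
# Jobs.update leaves an explicitly supplied progress untouched:
from typing import Optional

def effective_progress(finished: bool, progress: Optional[int]) -> Optional[int]:
    if finished and progress is None:
        return 100  # finishing implies completion unless stated otherwise
    return progress

assert effective_progress(True, None) == 100
assert effective_progress(True, 23) == 23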
@pytest.fixture
def jobs():
j = Jobs()

View File

@ -0,0 +1,33 @@
import pytest
from pydantic import BaseModel
from datetime import datetime
from typing import Optional
from selfprivacy_api.utils.redis_model_storage import store_model_as_hash, hash_as_model
from selfprivacy_api.utils.redis_pool import RedisPool
TEST_KEY = "model_storage"
redis = RedisPool().get_connection()
@pytest.fixture()
def clean_redis():
redis.delete(TEST_KEY)
class DummyModel(BaseModel):
name: str
date: Optional[datetime]
def test_store_retrieve():
model = DummyModel(name="test", date=datetime.now())
store_model_as_hash(redis, TEST_KEY, model)
assert hash_as_model(redis, TEST_KEY, DummyModel) == model
def test_store_retrieve_none():
model = DummyModel(name="test", date=None)
store_model_as_hash(redis, TEST_KEY, model)
assert hash_as_model(redis, TEST_KEY, DummyModel) == model
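# A self-contained sketch of what store_model_as_hash/hash_as_model plausibly
# do: flatten the model into a string-valued mapping and let pydantic coerce
# the strings back. A plain dict stands in for the redis hash.
flat = {"name": "test", "date": datetime.now().isoformat()}
roundtripped = DummyModel.parse_obj(flat)  # pydantic coerces the ISO string
assert roundtripped.date is not None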

tests/test_services.py Normal file
View File

@ -0,0 +1,89 @@
"""
Tests for generic service methods
"""
from pytest import raises
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.services.generic_service_mover import FolderMoveNames
from selfprivacy_api.services.test_service import DummyService
from selfprivacy_api.services.service import Service, ServiceStatus, StoppedService
from selfprivacy_api.utils.waitloop import wait_until_true
from tests.test_graphql.test_backup import raw_dummy_service
def test_unimplemented_folders_raises():
with raises(NotImplementedError):
Service.get_folders()
with raises(NotImplementedError):
Service.get_owned_folders()
class OurDummy(DummyService, folders=["testydir", "dirtessimo"]):
pass
owned_folders = OurDummy.get_owned_folders()
assert owned_folders is not None
def test_service_stopper(raw_dummy_service):
dummy: Service = raw_dummy_service
dummy.set_delay(0.3)
assert dummy.get_status() == ServiceStatus.ACTIVE
with StoppedService(dummy) as stopped_dummy:
assert stopped_dummy.get_status() == ServiceStatus.INACTIVE
assert dummy.get_status() == ServiceStatus.INACTIVE
assert dummy.get_status() == ServiceStatus.ACTIVE
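# The context-manager contract in miniature: stop on entry, start again on
# exit, even when the body raises. A duck-typed sketch, not the real class;
# judging by the delayed test above, the real one also waits out the
# DEACTIVATING/ACTIVATING transitions.
class StoppedServiceSketch:
    def __init__(self, service):
        self.service = service

    def __enter__(self):
        self.service.stop()
        return self.service

    def __exit__(self, exc_type, exc_value, traceback):
        self.service.start()
        return False  # never swallow exceptions from the with-body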
def test_delayed_start_stop(raw_dummy_service):
dummy = raw_dummy_service
dummy.set_delay(0.3)
dummy.stop()
assert dummy.get_status() == ServiceStatus.DEACTIVATING
wait_until_true(lambda: dummy.get_status() == ServiceStatus.INACTIVE)
assert dummy.get_status() == ServiceStatus.INACTIVE
dummy.start()
assert dummy.get_status() == ServiceStatus.ACTIVATING
wait_until_true(lambda: dummy.get_status() == ServiceStatus.ACTIVE)
assert dummy.get_status() == ServiceStatus.ACTIVE
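# A plausible shape for the waitloop helper used above; the real signature in
# selfprivacy_api.utils.waitloop may differ.
import time
from typing import Callable

def wait_until_true_sketch(
    predicate: Callable[[], bool], interval: float = 0.1, timeout_sec: float = 10.0
) -> None:
    deadline = time.monotonic() + timeout_sec
    while not predicate():
        if time.monotonic() > deadline:
            raise TimeoutError("condition not met in time")
        time.sleep(interval)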
def test_owned_folders_from_not_owned():
assert Bitwarden.get_owned_folders() == [
OwnedPath(
path=folder,
group="vaultwarden",
owner="vaultwarden",
)
for folder in Bitwarden.get_folders()
]
def test_paths_from_owned_paths():
assert len(Pleroma.get_folders()) == 2
assert Pleroma.get_folders() == [
ownedpath.path for ownedpath in Pleroma.get_owned_folders()
]
def test_foldermoves_from_ownedpaths():
owned = OwnedPath(
path="var/lib/bitwarden",
group="vaultwarden",
owner="vaultwarden",
)
assert FolderMoveNames.from_owned_path(owned) == FolderMoveNames(
name="bitwarden",
bind_location="var/lib/bitwarden",
group="vaultwarden",
owner="vaultwarden",
)
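# Judging by the assertion above, from_owned_path most plausibly derives the
# short name from the last path component; a dict-based sketch:
from os import path

def folder_move_names_sketch(owned_path: str, owner: str, group: str) -> dict:
    return {
        "name": path.basename(owned_path),
        "bind_location": owned_path,
        "owner": owner,
        "group": group,
    }

assert folder_move_names_sketch("var/lib/bitwarden", "vaultwarden", "vaultwarden")["name"] == "bitwarden"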