cubicweb / cubes / s3storage · Issue #6

Closed
Created Feb 01, 2021 by Katia Saurfelt (@ksaurfelt), Maintainer

Make it easier to override DeleteFileOp.postcommit_event and AddFileOp.postcommit_event

On FranceArchives we need to override DeleteFileOp.postcommit_event and AddFileOp.postcommit_event for each processed file. Would it be possible to provide a way to call custom code on each data item, for example:


from cubicweb.server.hook import DataOperationMixIn, LateOperation


class S3DeleteFileOp(DataOperationMixIn, LateOperation):
    containercls = list

    def new_func_process_data(self, storage, key, eid, attr):
        # overridable hook, called once per deleted file
        pass

    def postcommit_event(self):
        for storage, key, eid, attr in self.get_data():
            self.info('Deleting object %s.%s (%s/%s) from S3',
                      eid, attr, storage.bucket, key)
            resp = storage.s3cnx.delete_object(Bucket=storage.bucket, Key=key)
            # default to 0 so a missing status code is not compared to None
            if resp.get('ResponseMetadata', {}).get('HTTPStatusCode', 0) >= 300:
                self.error('S3 object deletion FAILED: %s', resp)
            else:
                self.debug('S3 object deletion OK: %s', resp)
            # let subclasses post-process each deleted file
            self.new_func_process_data(storage, key, eid, attr)

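For illustration, a downstream cube could then plug its per-file logic into the hook by subclassing; the class below is hypothetical, and the same pattern would apply to an AddFileOp counterpart:

class FranceArchivesDeleteFileOp(S3DeleteFileOp):
    """Hypothetical subclass for a downstream cube such as FranceArchives."""

    def new_func_process_data(self, storage, key, eid, attr):
        # called once per deleted file, after the S3 deletion was attempted;
        # e.g. record the deletion or invalidate a cache entry here
        self.info('post-processing deleted file %s.%s (S3 key %s)',
                  eid, attr, key)
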
Edited Feb 01, 2021 by Katia Saurfelt