1. 30 Jan, 2019 5 commits
    • Use frozen task id in run_all_tasks() · 09962b2ccb79
      Philippe Pepiot authored
      This code is only used in tests, where we check for the existence of task_id
      in the database. Before running the Task, there is no guarantee that the value
      of Task.id will be used as the task_id; this is the purpose of freeze().
      In this particular case Task.id == Task.freeze().id for celery 3.1.25, but this
      is no longer true in celery 4.
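      The point of the fix can be sketched with a stand-in signature class (the real objects are celery signatures; `FakeSignature`, and the id-generation details, are illustrative assumptions):

```python
import uuid

class FakeSignature:
    """Minimal stand-in for a celery task signature (illustration only)."""

    def __init__(self):
        self.id = None

    def freeze(self):
        # freeze() pins the task id before the task is sent, so the same id
        # can be recorded in the database and later matched against the id
        # celery actually uses at execution time.
        if self.id is None:
            self.id = str(uuid.uuid4())
        return self

    def apply_async(self):
        # In celery 4, sending a signature whose id was never frozen may use
        # a freshly generated id, so a pre-send .id read is only safe after
        # freeze() has run.
        if self.id is None:
            self.id = str(uuid.uuid4())
        return self.id

task = FakeSignature()
frozen_id = task.freeze().id
assert task.apply_async() == frozen_id  # the frozen id is the one used
```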
    • Drop get_task_id() · 98bb9bad617b
      Philippe Pepiot authored
      Only Task objects pass through get_task_id(), never AsyncResult, so get_task_id()
      is now useless: it is equivalent to task.freeze().id.
      I think an old version of attach_task() was recursively called with either a Task
      or an AsyncResult, hence the handling of both cases, but I cannot find the
      relevant changeset (maybe this code was never used?).
      I'm fairly confident in this change because all known use cases are covered by tests.
    • Detect celery workflow meta tasks with task.type instead of hasattr · c19d0234c332
      Philippe Pepiot authored
      This makes the code more readable and less error prone.
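      The idea can be sketched as follows; the classes and the exact set of type names are assumptions for illustration, not the cube's real code:

```python
# Hedged sketch: dispatch on an explicit `type` attribute instead of probing
# with hasattr(), which can silently match unrelated objects.
class ChordMeta:
    type = "chord"

class PlainTask:
    type = "task"

def is_meta_task(task):
    # Explicit whitelist of workflow meta task types (names assumed here).
    return getattr(task, "type", None) in ("chord", "group", "chain")

assert is_meta_task(ChordMeta())
assert not is_meta_task(PlainTask())
```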
    • Use as_tuple() instead of serializable() · 1535d4229beb
      Philippe Pepiot authored
      serializable() and from_serializable() are deprecated in favor of as_tuple()
      and result_from_tuple().
      This is required to migrate to celery 4.
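      The shape of the round-trip can be sketched with plain data structures; the functions below mimic the nested-tuple form used by the celery 4 API (`AsyncResult.as_tuple()` / `result_from_tuple()`), but they are stand-ins, not the real implementations:

```python
# Hedged sketch: as_tuple() flattens a result into nested tuples of ids so
# it can be JSON-serialized; result_from_tuple() rebuilds the chain.
# Results are modeled as dicts {"id": ..., "parent": ...} for illustration.
def as_tuple(result):
    parent = as_tuple(result["parent"]) if result.get("parent") else None
    return ((result["id"], parent), None)

def result_from_tuple(t):
    (task_id, parent), _ = t
    return {"id": task_id,
            "parent": result_from_tuple(parent) if parent else None}

r = {"id": "abc", "parent": {"id": "root", "parent": None}}
assert result_from_tuple(as_tuple(r)) == r  # lossless round-trip
```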
    • flake8 fix over-indented line · 434cc13f80df
      Philippe Pepiot authored
  2. 29 Jan, 2019 1 commit
  3. 26 Jun, 2018 1 commit
  4. 22 Jun, 2018 1 commit
    • Migrate task logs from redis and database to logs files · e501a370ed29
      Philippe Pepiot authored
      Having logs stored in redis and then in the database used too much memory in
      redis and storage in the database.
      Using files is far simpler, but it requires a shared file system (e.g. NFS)
      when the worker and the cubicweb instance (the reader) are not on the same
      host.
      Use the new cw_celerytask_helpers filelogger instead of redislogger.
      Logs are stored in the celerytask-log-dir directory as gzip files with a
      predictable filename based on task_id (which is unique).
      Drop the task_logs attribute from CeleryTask and update tests accordingly.
      celery-monitor no longer copies logs from redis to the database when the task
      is finished.
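      A minimal sketch of the file-based scheme: gzip files under a log directory, named predictably from the task id. The helper names and filename pattern are assumptions for illustration, not the actual cw_celerytask_helpers API:

```python
import gzip
import os
import tempfile

def log_path(log_dir, task_id):
    # Predictable filename derived from the (unique) task id.
    return os.path.join(log_dir, "%s.log.gz" % task_id)

def write_logs(log_dir, task_id, text):
    # Worker side: append-style writing omitted for brevity.
    with gzip.open(log_path(log_dir, task_id), "wt") as f:
        f.write(text)

def read_logs(log_dir, task_id):
    # Reader side (the cubicweb instance): needs the same file system.
    with gzip.open(log_path(log_dir, task_id), "rt") as f:
        return f.read()

log_dir = tempfile.mkdtemp()
write_logs(log_dir, "3f2a", "task started\ntask done\n")
assert read_logs(log_dir, "3f2a") == "task started\ntask done\n"
```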
  5. 17 Nov, 2017 2 commits
  6. 15 Nov, 2017 1 commit
    • Fix testing tasks creating other tasks · 1ef783692e79
      Philippe Pepiot authored
      When a task creates a new task (by calling start_async_task), _TEST_TASK was
      reset during the loop over it, leading to a KeyError (in the case of multiple
      tasks) or to sub-tasks not being started.
      Fix this by not overriding _TEST_TASK for each new cubicweb connection and by
      consuming _TEST_TASK until no tasks are left.
  7. 23 Jun, 2017 2 commits
  8. 30 May, 2017 1 commit
    • test: run celery monitor after the tasks finish in non EAGER mode · f736030ffc25
      Philippe Pepiot authored
      When running tests in non-EAGER mode, task workflow synchronization was done
      in run_all_tasks(); this seemed to work only because task execution happened
      to finish asynchronously before the call to celery-monitor.
      Fix this by explicitly running celery-monitor in wait_async_task, which waits
      until the task is actually finished.
  9. 19 May, 2017 2 commits
  10. 23 May, 2017 3 commits
  11. 14 Mar, 2017 1 commit
  12. 16 Feb, 2017 1 commit
  13. 17 Jan, 2017 2 commits
    • [entities] force convention for subtask created in a task · b27994d95bc7
      Philippe Pepiot authored
      The convention is to return a dict with the key "celerytask_subtasks",
      so we can remove an "except Exception" that could hide errors or future bugs.
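      The convention can be sketched as follows; the key name comes from the commit, while the task body and the ids are illustrative assumptions:

```python
# Hedged sketch: a task that spawns subtasks returns a dict under the
# agreed key, so the caller can look it up directly instead of probing
# arbitrary return values inside a broad `except Exception`.
def parent_task():
    subtask_ids = ["id-1", "id-2"]  # ids of tasks it started (illustrative)
    return {"celerytask_subtasks": subtask_ids}

result = parent_task()
assert result["celerytask_subtasks"] == ["id-1", "id-2"]
```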
    • [ccplugin] celery-monitor: retry failed items · be044d7109a3
      Philippe Pepiot authored
      When multiple instances of celery-monitor are running, integrity errors can
      be raised if two instances work on the same task_id, and there can be
      temporary (network, host) failures. In these cases we want to retry handling
      the task_id later.
      Put task_ids being processed in a "pending queue"; every minute, if the
      monitor queue is empty, requeue the pending items.
      This change requires handling the "timeout" parameter of loop (only used in
      tests) differently, to ensure we do not block forever in redis "brpoplpush".
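      The retry scheme can be sketched with plain deques standing in for the two redis lists (in redis, the atomic main-to-pending move is BRPOPLPUSH; the function names here are assumptions):

```python
import collections

queue = collections.deque()    # main monitor queue (a redis list in practice)
pending = collections.deque()  # items taken for processing, awaiting ack

def pop_next():
    # Mirrors redis BRPOPLPUSH: atomically move an item from the main
    # queue onto the pending queue so a crash cannot lose it.
    if not queue:
        return None
    item = queue.pop()
    pending.appendleft(item)
    return item

def ack(item):
    # Handled successfully: drop it from the pending queue.
    pending.remove(item)

def requeue_pending():
    # Run periodically (each minute): if the main queue is empty, push
    # pending items back so failed or orphaned ones are retried.
    if not queue:
        while pending:
            queue.appendleft(pending.pop())

queue.append("t1")
item = pop_next()
# Simulate a failure while handling `item`: no ack() is issued.
requeue_pending()
assert item in queue  # the failed item is back, ready for a retry
```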
  14. 16 Jan, 2017 4 commits
    • [entities] use try/except to get or create CeleryTask · 98c0d45fd271
      Philippe Pepiot authored
      IMHO this reads better than "if rset".
      Also log the eid of the created entity.
    • [entities] don't log "cannot deserialize task" · ae5462a6f276
      Philippe Pepiot authored
      This use case (sending a serialized task signature as the result of a task)
      is exceptional, so avoid flooding the logs with such messages.
    • monitor: can work with multiple instances · f094aff48e8d
      Philippe Pepiot authored
      Previously we monitored celery tasks by listening to the celery event bus
      (celery.events.EventReceiver), which was not persistent. A dedicated routine
      (on_monitor_start) synchronized unfinished tasks, but this did not work for
      untracked tasks (e.g. not started with start_async_task). It was also a
      single point of failure, since it could not run in multiple instances
      without concurrency issues (events are sent to all listeners).
      Now we use a redis queue: workers put the task_id and task_name to be
      synchronized, and celery-monitor uses brpop (https://redis.io/commands/brpop)
      to process the queue.
      We no longer require CELERY_SEND_EVENTS to be enabled (-E or --events in the
      worker options).
      'cw_celerytask_helpers.helpers' must now be added to CELERY_IMPORTS.
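      The queue protocol can be sketched with a deque standing in for the redis list (workers LPUSH, the monitor BRPOPs; the message layout and function names are assumptions for illustration):

```python
import collections
import json

# Stand-in for the redis list shared by workers and the monitor.
queue = collections.deque()

def publish(task_id, task_name):
    # Worker side: enqueue exactly what the monitor needs to synchronize
    # the task's state in the database.
    queue.appendleft(json.dumps({"task_id": task_id, "task_name": task_name}))

def consume():
    # Monitor side: a blocking pop (BRPOP) in practice; plain pop here.
    return json.loads(queue.pop()) if queue else None

publish("abc123", "mytask")
msg = consume()
assert msg == {"task_id": "abc123", "task_name": "mytask"}
```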
    • CeleryTask: move sync task logic in sync_task_state · c97f0e74bb5f
      Philippe Pepiot authored
      - strict policy on SQL transactions (commit all or nothing for each task)
      - make it work with on_monitor_start (used to synchronize task states when
        the monitor starts, in case of missed events)
      - use serializable() in the "spawn" task so celery is forced to use the json
        task serializer instead of pickle
      - don't always update task_name when creating subtasks outside of celerytask
        (e.g. by using start_async_task); use the fixed identifier "<unknown>" as
        the task name instead and only update those
  15. 16 Dec, 2016 2 commits
  16. 06 Dec, 2016 1 commit
  17. 15 Nov, 2016 1 commit
  18. 12 Dec, 2016 1 commit
    • [entities] improve workflow management of CeleryTasks (by celery-monitor) robustness (related to #16640842) · 51e3f4e41879
      David Douard authored
      The transition may fail (if, for some race-like reason, it has already been
      fired).
      Also ensure some CeleryTask changes are committed (celery-monitor), since the
      transaction may be rolled back during the WF management part of the
      on_event() method.
  19. 04 Nov, 2016 4 commits
  20. 28 Oct, 2016 2 commits
  21. 12 Dec, 2016 1 commit
  22. 04 Nov, 2016 1 commit