Geopoiesis tasks are a unique feature allowing arbitrary commands to be performed for a given scope. Before your command is executed (wrapped in
sh -c, thus allowing you to write clever shell one-liners), a few things will happen:
- Geopoiesis will pull the code for the current HEAD of the tracked branch;
- Geopoiesis will run before_plan lifecycle hooks, if any;
- the Terraform workspace will be set up using terraform init.
Both the lifecycle hooks and terraform init run with the scope environment, so they can make use of variables you've set up for this scope. In fact, under the hood, tasks are very similar to regular runs, though the state diagram is somewhat different.
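To make the execution model concrete, here is a rough sketch of how a task command runs. The variable TF_VAR_team_name is a hypothetical scope variable used purely for illustration; in Geopoiesis you would type only the inner one-liner into the Command field, and the scope environment would already be set for you.

```shell
# Hypothetical scope variable; in reality the scope environment sets this.
export TF_VAR_team_name=core

# The worker wraps your command in `sh -c`, so shell features (pipes,
# chaining, variable expansion) work, and scope variables are visible:
sh -c 'echo "operating on team: $TF_VAR_team_name"'
```

Running the wrapped command above prints `operating on team: core`, showing that the command sees the same environment as a regular run.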
Tasks are particularly useful for extraordinary operations that would normally need to be handled from an operator's machine, like state manipulation or resource tainting, as in the example below. Unlike commands you run from your local machine, tasks in Geopoiesis are tracked and logged.
In order to run a task, navigate to the Tasks section in your Geopoiesis UI. If you have write permissions on the scope, you will see a Command field waiting for your input. When you start typing your command, the Perform button will become active. Clicking it will create a new task, and take you to its page.
In the example below, we used the task functionality to taint an individual GitHub team. The
terraform taint command manually marks a managed resource as tainted in the state, forcing it to be destroyed and recreated on the next apply.
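For illustration, the task command might look like the following. The resource address github_team.core is a hypothetical example; substitute the address of the resource you actually want to recreate.

```shell
# Hypothetical resource address; entered as-is in the task Command field.
terraform taint github_team.core
```

Since the command runs against the scope's Terraform state, it needs no extra setup: the workspace has already been initialized before the task executes.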
Your new task should now also be visible in the Tasks section in your Geopoiesis UI:
You can now manually trigger a run to make sure that the resource will be recreated. In our case, it is:
The above workflow represents a happy path, but there are always plenty of things that can go wrong. Below is a detailed diagram explaining all possible state transitions for a task:
When a new task is created, it starts in the waiting state. If some other run or task currently holds a lock on its scope, the task is marked as blocked for as long as the lock is held. Tasks need to hold the lock on the scope because a task command could modify the state, and there is no way to know that for certain in advance.
At any point before actual work starts, the user can cancel the task, transitioning it to the canceled state. When the task is not blocked and a worker is available, work begins and the task enters the initializing, and then the performing state.
The performing phase can have one of three possible outcomes. The unhappy ones are the worker crashing, which results in the performing crashed state, and one of the commands exiting with a non-zero status, which transitions the task to the performing failed state. For a more nuanced explanation of failing vs. crashing, please see the relevant section in the Runs article.
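The failure transition above is driven by the exit status of the wrapped command. A minimal sketch, using a deliberately failing command (the status value 7 is arbitrary):

```shell
# The worker runs your command via `sh -c` and inspects its exit status;
# any non-zero status would move the task to the failed outcome.
sh -c 'exit 7'
echo "task command exited with status: $?"
```

This prints `task command exited with status: 7`; a status of 0 would instead count as the happy outcome.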