Usage
Getting started
Create a file called Taskfile.yml in the root of your project. The cmds attribute should contain the commands of a task. The example below allows compiling a Go app and uses esbuild to concatenate and minify multiple CSS files into a single one.
version: '3'

tasks:
  build:
    cmds:
      - go build -v main.go

  assets:
    cmds:
      - esbuild --bundle --minify css/index.css > public/bundle.css
Running the tasks is as simple as running:
task assets build
Task uses mvdan.cc/sh, a native Go sh interpreter, so you can write sh/bash commands and they will work even on Windows, where sh or bash are usually not available. Just remember that any executable called must be available on the system or in PATH.
If you omit a task name, "default" will be assumed.
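As a minimal sketch of that, a Taskfile with a task named default lets a bare task invocation run the build:

```yaml
version: '3'

tasks:
  # running `task` with no arguments runs this task
  default:
    cmds:
      - go build -v main.go
```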
Supported file names
Task will look for the following file names, in order of priority:
- Taskfile.yml
- taskfile.yml
- Taskfile.yaml
- taskfile.yaml
- Taskfile.dist.yml
- taskfile.dist.yml
- Taskfile.dist.yaml
- taskfile.dist.yaml
The intention of having the .dist variants is to allow projects to have one committed version (.dist) while still allowing individual users to override the Taskfile by adding an additional Taskfile.yml (which would be on .gitignore).
Running a Taskfile from a subdirectory
If a Taskfile cannot be found in the current working directory, it will walk up the file tree until it finds one (similar to how git works). When running Task from a subdirectory like this, it will behave as if you ran it from the directory containing the Taskfile.
You can use this functionality along with the special {{.USER_WORKING_DIR}} variable to create some very useful reusable tasks. For example, if you have a monorepo with directories for each microservice, you can cd into a microservice directory and run a task command to bring it up without having to create multiple tasks or Taskfiles with identical content:
version: '3'

tasks:
  up:
    dir: '{{.USER_WORKING_DIR}}'
    preconditions:
      - test -f docker-compose.yml
    cmds:
      - docker-compose up -d
In this example, we can run cd <service> and task up, and as long as the <service> directory contains a docker-compose.yml, the Docker composition will be brought up.
Running a global Taskfile
If you call Task with the --global (alias -g) flag, it will look for your Taskfile in your home directory instead of your working directory. In short, Task will look for a Taskfile that matches $HOME/{T,t}askfile.{yml,yaml}.
This is useful for automation that you can run from anywhere on your system!
When running your global Taskfile with -g, tasks will run on $HOME by default, and not on your working directory!
As mentioned in the previous section, the {{.USER_WORKING_DIR}} special variable can be very handy here to run stuff on the directory you're calling task -g from.
version: '3'

tasks:
  from-home:
    cmds:
      - pwd

  from-working-directory:
    dir: '{{.USER_WORKING_DIR}}'
    cmds:
      - pwd
Reading a Taskfile from stdin
Taskfile also supports reading from stdin. This is useful if you are generating Taskfiles dynamically and don't want to write them to disk. To tell Task to read from stdin, you must specify the -t/--taskfile flag with the special - value. You may then pipe into Task as you would any other program:
task -t - < ./Taskfile.yml
# OR
cat ./Taskfile.yml | task -t -
Environment variables
Task
You can use env to set custom environment variables for a specific task:
version: '3'

tasks:
  greet:
    cmds:
      - echo $GREETING
    env:
      GREETING: Hey, there!
Additionally, you can set global environment variables that will be available to all tasks:
version: '3'

env:
  GREETING: Hey, there!

tasks:
  greet:
    cmds:
      - echo $GREETING
env supports expansion and retrieving output from a shell command just like variables, as you can see in the Variables section.
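As a sketch of that, an env value can be computed from a shell command using the sh key, the same way dynamic variables are defined (the git command here is only illustrative):

```yaml
version: '3'

tasks:
  print-commit:
    env:
      # value is taken from the command's standard output
      COMMIT:
        sh: git log -n 1 --format=%h
    cmds:
      - echo "Building commit $COMMIT"
```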
.env files
You can also ask Task to include .env-like files by using the dotenv: setting:
KEYNAME=VALUE
ENDPOINT=testing.com
version: '3'

env:
  ENV: testing

dotenv: ['.env', '{{.ENV}}/.env.', '{{.HOME}}/.env']

tasks:
  greet:
    cmds:
      - echo "Using $KEYNAME and endpoint $ENDPOINT"
Dotenv files can also be specified at the task level:
version: '3'

env:
  ENV: testing

tasks:
  greet:
    dotenv: ['.env', '{{.ENV}}/.env.', '{{.HOME}}/.env']
    cmds:
      - echo "Using $KEYNAME and endpoint $ENDPOINT"
Environment variables specified explicitly at the task level will override variables defined in dotenv files:
version: '3'

env:
  ENV: testing

tasks:
  greet:
    dotenv: ['.env', '{{.ENV}}/.env.', '{{.HOME}}/.env']
    env:
      KEYNAME: DIFFERENT_VALUE
    cmds:
      - echo "Using $KEYNAME and endpoint $ENDPOINT"
Please note that you are not currently able to use the dotenv key inside included Taskfiles.
Including other Taskfiles
If you want to share tasks between different projects (Taskfiles), you can use the importing mechanism to include other Taskfiles using the includes keyword:
version: '3'

includes:
  docs: ./documentation # will look for ./documentation/Taskfile.yml
  docker: ./DockerTasks.yml
The tasks described in the given Taskfiles will be available under the given namespace. So, you'd call task docs:serve to run the serve task from documentation/Taskfile.yml, or task docker:build to run the build task from the DockerTasks.yml file.
Relative paths are resolved relative to the directory containing the including Taskfile.
OS-specific Taskfiles
With version: '2', Task automatically included any Taskfile_{{OS}}.yml if it existed (for example: Taskfile_windows.yml, Taskfile_linux.yml or Taskfile_darwin.yml). Since this behavior was a bit too implicit, it was removed in version 3, but you can still get similar behavior by explicitly importing these files:
version: '3'

includes:
  build: ./Taskfile_{{OS}}.yml
Directory of included Taskfile
By default, an included Taskfile's tasks run in the current directory, even if the Taskfile is in another directory. You can force its tasks to run in another directory by using this alternative syntax:
version: '3'

includes:
  docs:
    taskfile: ./docs/Taskfile.yml
    dir: ./docs
The included Taskfiles must use the same schema version as the main Taskfile.
Optional includes
Includes marked as optional will allow Task to continue execution as normal if the included file is missing.
version: '3'

includes:
  tests:
    taskfile: ./tests/Taskfile.yml
    optional: true

tasks:
  greet:
    cmds:
      - echo "This command can still be successfully executed if ./tests/Taskfile.yml does not exist"
Internal includes
Includes marked as internal will set all the tasks of the included file to be internal as well (see the Internal tasks section below). This is useful when including utility tasks that are not intended to be used directly by the user.
version: '3'

includes:
  tests:
    taskfile: ./taskfiles/Utils.yml
    internal: true
Vars of included Taskfiles
You can also specify variables when including a Taskfile. This may be useful for having a reusable Taskfile that can be tweaked or even included more than once:
version: '3'

includes:
  backend:
    taskfile: ./taskfiles/Docker.yml
    vars:
      DOCKER_IMAGE: backend_image

  frontend:
    taskfile: ./taskfiles/Docker.yml
    vars:
      DOCKER_IMAGE: frontend_image
Namespace aliases
When including a Taskfile, you can give the namespace a list of aliases. This works in the same way as task aliases and can be used together to create shorter and easier-to-type commands.
version: '3'

includes:
  generate:
    taskfile: ./taskfiles/Generate.yml
    aliases: [gen]
Vars declared in the included Taskfile have preference over the variables in the including Taskfile! If you want a variable in an included Taskfile to be overridable, use the default function: MY_VAR: '{{.MY_VAR | default "my-default-value"}}'.
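As an illustration (the file path and image name below are hypothetical), a shared Taskfile could declare its variable with default so that the including Taskfile's vars: can override it:

```yaml
# ./taskfiles/Docker.yml (hypothetical shared Taskfile)
version: '3'

vars:
  # overridable: the including Taskfile's value wins if set
  DOCKER_IMAGE: '{{.DOCKER_IMAGE | default "my-default-image"}}'

tasks:
  build:
    cmds:
      - docker build -t {{.DOCKER_IMAGE}} .
```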
Internal tasks
Internal tasks are tasks that cannot be called directly by the user. They will not appear in the output when running task --list|--list-all. Other tasks may call internal tasks in the usual way. This is useful for creating reusable, function-like tasks that have no useful purpose on the command line.
version: '3'

tasks:
  build-image-1:
    cmds:
      - task: build-image
        vars:
          DOCKER_IMAGE: image-1

  build-image:
    internal: true
    cmds:
      - docker build -t {{.DOCKER_IMAGE}} .
Task directory
By default, tasks will be executed in the directory where the Taskfile is located. But you can easily make the task run in another folder by setting dir:
version: '3'

tasks:
  serve:
    dir: public/www
    cmds:
      # run http server
      - caddy
If the directory does not exist, task creates it.
Task dependencies
Dependencies run in parallel, so dependencies of a task should not depend on one another. If you want to force tasks to run serially, take a look at the Calling Another Task section below.
You may have tasks that depend on others. Just listing them in deps will make them run automatically before running the parent task:
version: '3'

tasks:
  build:
    deps: [assets]
    cmds:
      - go build -v main.go

  assets:
    cmds:
      - esbuild --bundle --minify css/index.css > public/bundle.css
In the above example, assets will always run right before build if you run task build.
A task can have only dependencies and no commands to group tasks together:
version: '3'

tasks:
  assets:
    deps: [js, css]

  js:
    cmds:
      - esbuild --bundle --minify js/index.js > public/bundle.js

  css:
    cmds:
      - esbuild --bundle --minify css/index.css > public/bundle.css
If there is more than one dependency, they always run in parallel for better performance.
You can also make the tasks given on the command line run in parallel by using the --parallel flag (alias -p). Example: task --parallel js css.
If you want to pass information to dependencies, you can do that in the same manner as you would when calling another task:
version: '3'

tasks:
  default:
    deps:
      - task: echo_sth
        vars: { TEXT: 'before 1' }
      - task: echo_sth
        vars: { TEXT: 'before 2' }
        silent: true
    cmds:
      - echo "after"

  echo_sth:
    cmds:
      - echo {{.TEXT}}
Platform specific tasks and commands
If you want to restrict the running of tasks to explicit platforms, this can be achieved using the platforms: key. Tasks can be restricted to a specific OS, architecture or a combination of both. On a mismatch, the task or command will be skipped, and no error will be thrown.
The values allowed as OS or Arch are valid GOOS and GOARCH values, as defined by the Go language here.
The build-windows task below will run only on Windows, and on any architecture:
version: '3'

tasks:
  build-windows:
    platforms: [windows]
    cmds:
      - echo 'Running command on Windows'
This can be restricted to a specific architecture as follows:
version: '3'

tasks:
  build-windows-amd64:
    platforms: [windows/amd64]
    cmds:
      - echo 'Running command on Windows (amd64)'
It is also possible to restrict the task to specific architectures:
version: '3'

tasks:
  build-amd64:
    platforms: [amd64]
    cmds:
      - echo 'Running command on amd64'
Multiple platforms can be specified as follows:
version: '3'

tasks:
  build:
    platforms: [windows/amd64, darwin]
    cmds:
      - echo 'Running command on Windows (amd64) and macOS'
Individual commands can also be restricted to specific platforms:
version: '3'

tasks:
  build:
    cmds:
      - cmd: echo 'Running command on Windows (amd64) and macOS'
        platforms: [windows/amd64, darwin]
      - cmd: echo 'Running on all platforms'
Calling another task
When a task has many dependencies, they are executed concurrently. This will often result in a faster build pipeline. However, in some situations, you may need to call other tasks serially. In this case, use the following syntax:
version: '3'

tasks:
  main-task:
    cmds:
      - task: task-to-be-called
      - task: another-task
      - echo "Both done"

  task-to-be-called:
    cmds:
      - echo "Task to be called"

  another-task:
    cmds:
      - echo "Another task"
Using the vars and silent attributes you can choose to pass variables and toggle silent mode on a call-by-call basis:
version: '3'

tasks:
  greet:
    vars:
      RECIPIENT: '{{default "World" .RECIPIENT}}'
    cmds:
      - echo "Hello, {{.RECIPIENT}}!"

  greet-pessimistically:
    cmds:
      - task: greet
        vars: { RECIPIENT: 'Cruel World' }
        silent: true
The above syntax is also supported in deps.
NOTE: If you want to call a task declared in the root Taskfile from within an included Taskfile, add a leading : like this: task: :task-name.
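As a sketch (the file path and task names here are hypothetical), an included Taskfile could call a clean task defined in the root Taskfile like this:

```yaml
# ./taskfiles/Build.yml (hypothetical included Taskfile)
version: '3'

tasks:
  build:
    cmds:
      # the leading ":" refers to the root Taskfile's clean task
      - task: :clean
      - go build ./...
```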
Prevent unnecessary work
By fingerprinting locally generated files and their sources
If a task generates something, you can inform Task of the source and generated files, so Task can avoid running the task when it is not necessary.
version: '3'

tasks:
  build:
    deps: [js, css]
    cmds:
      - go build -v main.go

  js:
    cmds:
      - esbuild --bundle --minify js/index.js > public/bundle.js
    sources:
      - src/js/**/*.js
    generates:
      - public/bundle.js

  css:
    cmds:
      - esbuild --bundle --minify css/index.css > public/bundle.css
    sources:
      - src/css/**/*.css
    generates:
      - public/bundle.css
sources and generates can be files or glob patterns. When given, Task will compare the checksum of the source files to determine if it's necessary to run the task. If not, it will just print a message like Task "js" is up to date.
exclude: can also be used to exclude files from fingerprinting. Sources are evaluated in order, so exclude: must come after the positive glob it is negating.
version: '3'

tasks:
  css:
    sources:
      - mysources/**/*.css
      - exclude: mysources/ignoreme.css
    generates:
      - public/bundle.css
If you prefer these checks to be made based on the modification timestamp of the files instead of their checksum (content), just set the method property to timestamp.
version: '3'

tasks:
  build:
    cmds:
      - go build .
    sources:
      - ./*.go
    generates:
      - app{{exeExt}}
    method: timestamp
In situations where you need more flexibility, the status keyword can be used. You can even combine the two. See the documentation for status for an example.
By default, task stores checksums in a local .task directory in the project's directory. Most of the time, you'll want to have this directory on .gitignore (or equivalent) so it isn't committed. (If you have a task for code generation that is committed, it may make sense to commit the checksum of that task as well, though.)
If you want these files to be stored in another directory, you can set a TASK_TEMP_DIR environment variable on your machine. It can contain a relative path like tmp/task that will be interpreted as relative to the project directory, or an absolute or home path like /tmp/.task or ~/.task (subdirectories will be created for each project).
export TASK_TEMP_DIR='~/.task'
Each task has only one checksum stored for its sources. If you want to distinguish a task by any of its input variables, you can add those variables as part of the task's label, and it will be considered a different task.
This is useful if you want to run a task once for each distinct set of inputs until the sources actually change. For example, if the sources depend on the value of a variable, or if you want the task to rerun if some arguments change even if the sources have not.
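A sketch of that, assuming a task parameterized by a hypothetical ARCH variable: including the variable in label gives each architecture its own checksum, so switching architectures triggers a rerun even when the sources are unchanged:

```yaml
version: '3'

tasks:
  build:
    # distinct label per ARCH value => distinct stored checksum
    label: 'build-{{.ARCH}}'
    sources:
      - ./*.go
    generates:
      - dist/app-{{.ARCH}}
    cmds:
      - GOARCH={{.ARCH}} go build -o dist/app-{{.ARCH}} .
```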
The method none skips any validation and always runs the task.
For the checksum (default) or timestamp method to work, it is only necessary to inform the source files. When the timestamp method is used, the last time the task ran is considered the generated artifact.
Using programmatic checks to indicate a task is up to date
Alternatively, you can inform a sequence of tests as status. If no error is returned (exit status 0), the task is considered up-to-date:
version: '3'

tasks:
  generate-files:
    cmds:
      - mkdir directory
      - touch directory/file1.txt
      - touch directory/file2.txt
    # test existence of files
    status:
      - test -d directory
      - test -f directory/file1.txt
      - test -f directory/file2.txt
Normally, you would use sources in combination with generates, but for tasks that generate remote artifacts (Docker images, deploys, CD releases) the checksum source and timestamps require either access to the artifact or an out-of-band refresh of the .checksum fingerprint file.
Two special variables {{.CHECKSUM}} and {{.TIMESTAMP}} are available for interpolation within status commands, depending on the method assigned to fingerprint the sources. Only source globs are fingerprinted.
Note that the {{.TIMESTAMP}} variable is a "live" Go time.Time struct, and can be formatted using any of the methods that time.Time responds to. See the Go Time documentation for more information.
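As an illustrative sketch (the comparison itself is arbitrary, and the GNU stat syntax shown differs on macOS), {{.TIMESTAMP}} can be used with time.Time methods such as Unix inside a status command:

```yaml
version: '3'

tasks:
  build:
    sources:
      - ./*.go
    method: timestamp
    status:
      # up to date only if main.go is not newer than the last run;
      # .Unix is a standard time.Time method
      - test $(stat -c '%Y' main.go) -le {{.TIMESTAMP.Unix}}
```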
You can use --force or -f if you want to force a task to run even when up-to-date.
Also, task --status [tasks]... will exit with a non-zero exit code if any of the tasks are not up-to-date.
status can be combined with the fingerprinting to have a task run if either the source/generated artifacts change, or the programmatic check fails:
version: '3'

tasks:
  build:prod:
    desc: Build for production usage.
    cmds:
      - composer install
    # Run this task if source files change.
    sources:
      - composer.json
      - composer.lock
    generates:
      - ./vendor/composer/installed.json
      - ./vendor/autoload.php
    # But also run the task if the last build was not a production build.
    status:
      - grep -q '"dev": false' ./vendor/composer/installed.json
Using programmatic checks to cancel the execution of a task and its dependencies
In addition to status checks, preconditions checks are the logical inverse of status checks. That is, if you need a certain set of conditions to be true, you can use the preconditions stanza. preconditions are similar to status lines, except they support sh expansion, and they SHOULD all return 0.
version: '3'

tasks:
  generate-files:
    cmds:
      - mkdir directory
      - touch directory/file1.txt
      - touch directory/file2.txt
    # test existence of files
    preconditions:
      - test -f .env
      - sh: '[ 1 = 0 ]'
        msg: "One doesn't equal Zero, Halting"
Preconditions can set specific failure messages that can tell a user what steps to take using the msg field.
If a task has a dependency on a sub-task with a precondition, and that precondition is not met, the calling task will fail. Note that a task executed with a failing precondition will not run unless --force is given.
Unlike status, which will skip a task if it is up to date and continue executing tasks that depend on it, a precondition will fail a task, along with any other tasks that depend on it.
version: '3'

tasks:
  task-will-fail:
    preconditions:
      - sh: 'exit 1'

  task-will-also-fail:
    deps:
      - task-will-fail

  task-will-still-fail:
    cmds:
      - task: task-will-fail
      - echo "I will not run"