Best Practices for Testing Your Julia Packages


This post was written by Steven Whitaker.
The Julia programming language is a high-level language that is known, at least in part, for its excellent package manager and outstanding composability. (See another blog post that illustrates this composability.)
Julia makes it super easy for anybody to create their own package. Julia's package manager enables easy development and testing of packages. The ease of package development encourages developers to split reusable chunks of code into individual packages, further enhancing Julia's composability.
In our previous post, we discussed how to create and register your own package. However, to encourage people to actually use your package, it helps to have an assurance that the package works. This is why testing is important. (Plus, you also want to know your package works, right?)
In this post, we will learn about some of the tools Julia provides for testing packages. We will also learn how to use GitHub Actions to run package tests against commits and/or pull requests to check whether code changes break package functionality.
This post assumes you are comfortable navigating the Julia REPL. If you need a refresher, check out our post on the Julia REPL.
Example Package
We will use a custom package called Averages.jl to illustrate how to implement testing in Julia.
The `Project.toml` looks like:

```toml
name = "Averages"
uuid = "1fc6e63b-fe0f-463a-8652-42f2a29b8cc6"
version = "0.1.0"

[deps]
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Test"]
```
Note that this `Project.toml` has two more sections besides `[deps]`:

- `[extras]` is used to indicate additional packages that are not direct dependencies of the package. In this example, Test is not used in Averages.jl itself; Test is used only when running tests.
- `[targets]` is used to specify what packages are used where. In this example, `test = ["Test"]` indicates that the Test package should be used when testing Averages.jl.
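For reference, the `[deps]` entry is what the package manager writes when you `add` a dependency while the package's environment is active; the `[extras]` and `[targets]` entries are typically added by hand. For example, the Statistics dependency above would have been added with:

```
(Averages) pkg> add Statistics
```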
The actual package code in `src/Averages.jl` looks like:

```julia
module Averages

using Statistics

export compute_average

compute_average(x) = (check_real(x); mean(x))

function compute_average(a, b...)
    check_real(a)
    N = length(a)
    for (i, x) in enumerate(b)
        check_real(x)
        check_length(i + 1, x, N)
    end
    T = float(promote_type(eltype(a), eltype.(b)...))
    average = Vector{T}(undef, N)
    average .= a
    for x in b
        average .+= x
    end
    average ./= length(b) + 1
    return a isa Real ? average[1] : average
end

function check_real(x)
    T = eltype(x)
    T <: Real || throw(ArgumentError("only real numbers are supported; unsupported type $T"))
end

function check_length(i, x, expected)
    N = length(x)
    N == expected || throw(DimensionMismatch("the length of input $i does not match the length of the first input: $N != $expected"))
end

end
```
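To make the two methods concrete, here is a quick REPL session (the outputs shown are what these definitions should produce):

```julia
julia> using Averages

julia> compute_average([1, 2, 3])  # one input: just the mean
2.0

julia> compute_average([1, 2, 3], [4.0, 5.0, 6.0])  # multiple inputs: elementwise average
3-element Vector{Float64}:
 2.5
 3.5
 4.5
```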
Adding Tests
Tests for a package live in `test/runtests.jl`. (The file name is important!) Inside this file there are two main testing utilities that are used: `@testset` and `@test`. Additionally, `@test_throws` can also be useful for testing. The Test standard library package provides all of these macros.

- `@testset` is used to organize tests into cohesive blocks.
- `@test` is used to actually test package functionality.
- `@test_throws` is used to ensure the package throws the errors it should.
Here is how `test/runtests.jl` might look for Averages.jl:

```julia
using Averages
using Test

@testset "Averages.jl" begin
    a = [1, 2, 3]
    b = [4.0, 5.0, 6.0]
    c = (BigInt(7), 8f0, Int32(9))
    d = 10
    e = 11.0
    bad = ["hi", "hello", "hey"]

    @testset "`compute_average(x)`" begin
        @test compute_average(a) == 2
        @test compute_average(a) isa Float64
        @test compute_average(c) == 8
        @test compute_average(c) isa BigFloat
        @test compute_average(d) == 10
    end

    @testset "`compute_average(a, b...)`" begin
        @test compute_average(a, a) == a
        @test compute_average(a, b) == [2.5, 3.5, 4.5]
        @test compute_average(a, b, c) == b
        @test compute_average(a, b, c) isa Vector{Float64}
        @test compute_average(b, b, b) == b
        @test compute_average(d, e) == 10.5
    end

    @testset "Error Handling" begin
        @test_throws ArgumentError compute_average(im)
        @test_throws ArgumentError compute_average(a, bad)
        @test_throws ArgumentError compute_average(bad, c)
        @test_throws DimensionMismatch compute_average(a, b[1:2])
        @test_throws DimensionMismatch compute_average(a[1:2], b)
    end
end
```
Now let's look more closely at the macros used:

- `@testset` can be given a label to help organize the reporting Julia does at the end of testing. Besides that, `@testset` wraps around a set of tests (including other `@testset`s).
- `@test` is given an expression that evaluates to a boolean. If the boolean is `true`, the test passes; otherwise it fails.
- `@test_throws` takes two inputs: an error type and then an expression. The test passes if the expression throws an error of the given type.
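As a minimal standalone illustration of all three macros (independent of Averages.jl):

```julia
using Test

@testset "macro demo" begin
    @test 1 + 1 == 2                    # passes: the expression is `true`
    @test occursin("lia", "Julia")      # any boolean expression works
    @test_throws DivideError div(1, 0)  # passes: integer division by zero throws `DivideError`
end
```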
Testing Against Other Packages
In some cases, you might want to ensure your package is compatible with a type defined in another package. For our example, let's test against StaticArrays.jl. Our package does not depend on StaticArrays.jl, so we need to add it as a test-only dependency by editing the `[extras]` and `[targets]` sections in the `Project.toml`:
```toml
[extras]
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["StaticArrays", "Test"]
```
(Note that I grabbed the UUID for StaticArrays.jl from its `Project.toml` on GitHub.)

Then we can add some tests to make sure `compute_average` is generic enough to work with `StaticArray`s:
```julia
using Averages
using Test
using StaticArrays

@testset "Averages.jl" begin
    ⋮
    @testset "StaticArrays.jl" begin
        s = SA[12, 13, 14]
        @test compute_average(s) == 13
        @test compute_average(s, s) == [12, 13, 14]
        @test compute_average(a, b, s) == [17/3, 20/3, 23/3]
        @test compute_average(s, a, c) == [20/3, 23/3, 26/3]
    end
end
```
Running Tests Locally
Now Averages.jl is ready for testing. To run package tests on your own computer, start Julia, activate the package environment, and then run `test` from the package prompt:

```
(@v1.X) pkg> activate /path/to/Averages

(Averages) pkg> test
```
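Equivalently, tests can be run non-interactively via the Pkg API, which is handy for scripts (the path below is a placeholder):

```julia
using Pkg

Pkg.activate("/path/to/Averages")  # activate the package environment
Pkg.test()                         # run test/runtests.jl in a temporary environment
```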
The first thing `test` does is set up a temporary package environment for testing that includes the packages defined in the `test` target in the `Project.toml`. Then it runs the tests and displays the result:
```
     Testing Running tests...
Test Summary: | Pass  Total  Time
Averages.jl   |   20     20  0.7s
     Testing Averages tests passed
```
If a test fails, the result looks like this:
```
     Testing Running tests...
`compute_average(a, b...)`: Test Failed at /path/to/Averages/test/runtests.jl:27
  Expression: compute_average(a, b) == [2.0, 3.5, 4.5]
   Evaluated: [2.5, 3.5, 4.5] == [2.0, 3.5, 4.5]

Stacktrace:
 [1] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:672 [inlined]
 [2] macro expansion
   @ /path/to/Averages/test/runtests.jl:27 [inlined]
 [3] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [4] macro expansion
   @ /path/to/Averages/test/runtests.jl:26 [inlined]
 [5] macro expansion
   @ /path/to/julia-1.X.Y/share/julia/stdlib/v1.X/Test/src/Test.jl:1577 [inlined]
 [6] top-level scope
   @ /path/to/Averages/test/runtests.jl:7

Test Summary:                | Pass  Fail  Total  Time
Averages.jl                  |   19     1     20  0.9s
  `compute_average(x)`       |    5            5  0.1s
  `compute_average(a, b...)` |    5     1      6  0.6s
  Error Handling             |    5            5  0.0s
  StaticArrays.jl            |    4            4  0.2s
ERROR: LoadError: Some tests did not pass: 19 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /path/to/Averages/test/runtests.jl:5
ERROR: Package Averages errored during testing
```
Some things to note:
- When all tests in a test set pass, the test summary does not report the individual results of nested test sets. When a test fails, results of nested test sets are reported individually to pinpoint where the failure occurred.
- When a test fails, the file and line number of the failing test are reported, along with the expression that failed. This information is displayed for all failures that occur.
- The test summary reports how many tests passed and how many failed in each test set, in addition to how long each test set took.
- Tests in a test set continue to run after a test fails. To have a test set stop on failure, use the `failfast` option (available only in Julia 1.9 and later):

  ```julia
  @testset failfast = true "Averages.jl" begin
  ```
Now, when developing Averages.jl, we can run the tests locally to ensure we don't break any functionality!
Running Tests with GitHub Actions
Besides running tests locally, one can use GitHub Actions to run tests on one of GitHub's servers. One advantage is that it enables automated testing on various machines/operating systems and across various Julia versions. Automating tests in this way is an essential part of continuous integration (CI), so much so that "running CI" is often used to mean "running tests via GitHub Actions", even though CI technically involves more than just testing.
To enable testing via GitHub Actions, we just need to add an appropriate `.yml` file in the `.github/workflows` directory of our package. As mentioned in our previous post, PkgTemplates.jl can automatically generate the necessary `.yml` file. This is the default CI workflow generated by PkgTemplates.jl:
```yaml
name: CI
on:
  push:
    branches:
      - main
    tags: ['*']
  pull_request:
  workflow_dispatch:
concurrency:
  # Skip intermediate builds: always.
  # Cancel intermediate builds: only if it is a pull request build.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ startsWith(github.ref, 'refs/pull/') }}
jobs:
  test:
    name: Julia ${{ matrix.version }} - ${{ matrix.os }} - ${{ matrix.arch }} - ${{ github.event_name }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 60
    permissions: # needed to allow julia-actions/cache to proactively delete old caches that it has created
      actions: write
      contents: read
    strategy:
      fail-fast: false
      matrix:
        version:
          - '1.10'
          - '1.6'
          - 'pre'
        os:
          - ubuntu-latest
        arch:
          - x64
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
      - uses: julia-actions/cache@v2
      - uses: julia-actions/julia-buildpkg@v1
      - uses: julia-actions/julia-runtest@v1
```
For most users, the most relevant fields to customize are `version` and `os` (under `jobs: test: strategy: matrix`). Under `os`, specify the operating systems to run tests on (e.g., `ubuntu-latest`, `windows-latest`, `macOS-latest`). Under `version`, specify the versions of Julia to use when testing:

- `'1.X'` means run on Julia 1.X.Y, where Y is the largest patch of Julia 1.X that has been released. For example, `'1.9'` means run on Julia 1.9.4.
- `'1'` means run on the latest stable version of Julia.
- `'pre'` means run on the latest pre-release version of Julia.
- `'lts'` means run on Julia's long-term support (LTS) version.
Usually, it makes sense just to test `'1'` and `'pre'` to ensure compatibility with the current and upcoming Julia versions.
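With that choice, the matrix section of the workflow above might look like the following excerpt (testing across all three operating systems is an optional extra shown here for illustration):

```yaml
    strategy:
      fail-fast: false
      matrix:
        version:
          - '1'
          - 'pre'
        os:
          - ubuntu-latest
          - windows-latest
          - macOS-latest
        arch:
          - x64
```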
One can also fine-tune the `version` and `os` fields, as well as other fields, when generating a package with PkgTemplates.jl.
For example, to generate the `.yml` file to run tests only on Windows with Julia 1.8 and the latest pre-release version of Julia:

```julia
using PkgTemplates

gha = GitHubActions(; linux = false, windows = true, extra_versions = ["1.8", "pre"])
t = Template(; dir = ".", plugins = [gha])
t("MyPackage")
```
Note that the generated `.yml` file will also include testing on Julia 1.6. The `Template` constructor has a keyword argument `julia` that sets the minimum version of Julia you want your package to support, and this version is included in testing. As of this writing, the default minimum version is Julia 1.6.
See the PkgTemplates.jl docs on `Template` and `GitHubActions` for more details on customizing the `.yml` file. See also the GitHub Actions docs, and in particular the workflow syntax docs, for more details on what makes up the `.yml` file. (Be warned: these docs are quite lengthy and probably are not the most practical starting point for getting a CI workflow up and running. For a more approachable overview of the `.yml` file, consider looking at this tutorial for building and testing Python.)
Once we push `.github/workflows/CI.yml` to GitHub, our package's tests will run whenever branch `main` is pushed to or a pull request (PR) is opened or pushed to. This is the essence of CI: continuously making sure changes we make to our code integrate well with the code base (i.e., don't break anything). By running tests against PRs, we can be sure changes don't break existing functionality.
One neat thing about GitHub Actions is that GitHub provides a status badge/icon that you can display in your package's README. This badge lets people know
- that your package is regularly tested, and
- whether the current state of your package passes those tests.
In other words, this badge is a good way to boost confidence that your package is suitable for use. You can add this badge to your package's README by adding something like the following markdown:
[](https://github.com/username/Averages.jl/actions/workflows/CI.yml)
And it will display as follows:
Summary
In this post, we learned how to add tests to our own Julia package. We also learned how to enable CI with GitHub Actions to run our tests against code changes to ensure our package remains in working order.
How difficult was it for you to set up CI for the first time? Do you have any tips for beginners? Let us know in the comments below!
Additional Links
- Julia Testing Docs: Official Julia documentation on testing.
- PkgTemplates.jl Docs: Documentation for PkgTemplates.jl, including potential customizations to the generated CI workflow.