
GNU Makefile Conditional Assignment

organization | general topics

building software | generating documents | data workflows

large projects | setup targets | portable projects | shell scripts | references

A project is a directory of source files under version control with a build file at the root. The build file is used to generate target files which are not under version control.

A build file is an automation tool. As a side effect it documents how to use a project.

One has a choice of build tools and build file formats. Developers are well-served by being proficient with the make build tool, and in particular GNU Make.

Make is an appropriate build tool when:

  • The artifacts to be built and their prerequisites are local files.
  • The tools which create target files can be invoked from the command line.
  • The build system does not need to be highly portable.


I organize my Makefiles into 4 sections, separated by empty lines: includes, prologue, environment variables, and body.


In a small project includes are not needed and the includes section is absent.

Putting includes before the prologue makes the prologue a file local declaration. Make even prevents the --warn-undefined-variables flag from being added multiple times to MAKEFLAGS. This is a property of the MAKEFLAGS special variable and not the += operator.

An include directive to bring in automatically generated header dependencies should be put in the body because it depends on the variable which contains the list of sources.


I put the following boilerplate at the top of my makefiles:
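The boilerplate itself did not survive extraction; a sketch consistent with the practices described in the rest of this section (bash as the shell, pipefail, warnings for undefined variables) would be:

```make
MAKEFLAGS += --warn-undefined-variables
SHELL := bash
.SHELLFLAGS := -eu -o pipefail -c
.DEFAULT_GOAL := all
.DELETE_ON_ERROR:
.SUFFIXES:
```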

I ask make to warn me when I use an undefined variable so I catch misspelled variables. If I need to use an environment variable or a variable in an included makefile that might not be defined, I set it to empty with the conditional assignment operator ?=.

The .SHELLFLAGS variable was introduced with GNU Make 3.82. It has no effect on the version of make installed on Mac OS X, which is GNU Make 3.81.

I set the shell to bash so I can use the pipefail option. I set the pipefail option so if any of the commands in a pipeline fail, the entire pipeline fails. Otherwise the return value of the pipeline is the return value of the last command. Without this precaution, make would think the following pipeline succeeded:
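For example (nonexistent_file stands in for any file that does not exist):

```shell
# Under the default shell the pipeline "succeeds": its exit status is
# that of wc, the last command, even though cat failed.
sh -c 'cat nonexistent_file 2>/dev/null | wc -l' >/dev/null \
  && echo "default shell: pipeline reported success"

# With pipefail, bash propagates the failure of cat.
bash -c 'set -o pipefail; cat nonexistent_file 2>/dev/null | wc -l' >/dev/null \
  || echo "bash with pipefail: pipeline reported failure"
```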

The -e flag causes bash, with qualifications, to exit immediately if a command it executes fails. It is not strictly speaking necessary, since make executes a separate shell for each line of a recipe and stops if any of them fail. I use it to be consistent with our shell script prologue. The shell scripts section describes our shell script prologue and the circumstances under which bash exits.

The -u flag causes bash to exit with an error message if a variable is accessed without being defined.

The -c flag is in the default value of .SHELLFLAGS and we must preserve it, because it is how make passes the script to be executed to bash.

The default target can be declared by assigning it to the .DEFAULT_GOAL variable. Otherwise the default target is the first target in the makefile that doesn't start with a period. I prefer .DEFAULT_GOAL so targets can be listed in the makefile in the order they execute. Using all as the default target is a GNU convention. I always use the same name for our default target so the prologue section is always the same.

I set .DELETE_ON_ERROR so that a target is removed if its recipe fails. This prevents me from re-running make and using an incomplete or invalid target. When debugging it may be necessary to comment this line out so the incomplete or invalid target can be inspected. Also be aware that make will not delete a directory if the target that creates it fails.

I set .SUFFIXES to nothing because I prefer to define rules explicitly.

environment variables

Variables inherited from the environment should be all-caps. This style is used in the GNU Make Manual. There are some variables special to make which are also in all-caps.

I don't put empty lines in between variable declarations.

Environment variables should be declared with the conditional assignment operator ?=. The value after the operator is the value that is used if the environment variable is not set. If the default value is the empty string, the declaration could be omitted, but I still declare the environment variable so all environment variables used by the makefile are documented.

If an environment variable is required, the makefile should throw an error when it is undefined:
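A sketch using ifndef and the error function (DEPLOY_ENV is a hypothetical variable name):

```make
ifndef DEPLOY_ENV
$(error DEPLOY_ENV is not set)
endif
```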

The above code requires that the environment variable be defined to run any of the targets in the makefile. If only one target uses the environment variable, the check can be performed in the recipe:
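A sketch of the recipe-level check; deploy, deploy.sh, and DEPLOY_ENV are hypothetical names:

```make
deploy:
	: "$${DEPLOY_ENV:?DEPLOY_ENV must be set}"
	./deploy.sh
```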


The last, and usually longest, section of a makefile contains target and rule declarations, as well as any variable declarations not in the special and environment variable section.

internal variables | rules and targets | phony targets | intermediate targets | declaration order | target names

internal variables

Variables which are not special to make or inherited from the environment should be in lowercase.

I declare variables with the immediate assignment operator := instead of the delayed evaluation assignment operator =. The only use I have found for the delayed evaluation assignment operator is with the wildcard function to pick up file names that are created during execution of the makefile. However, I don't use variables defined using delayed evaluation in target and prerequisite lists, since make must evaluate these before any recipes are executed to build the dependency graph.

I don't put empty lines in between variable declarations.

By default whitespace is trimmed from the right and left side of a literal value when it is assigned to a variable. Here is how to prevent it:
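The technique, which appears in the GNU Make manual, is to end the line with a comment:

```make
nullstring :=
space := $(nullstring) # the comment preserves the trailing space
```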

Commas can cause a problem in variable functions. Here is how to use a comma in a function argument:
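The workaround, also from the GNU Make manual, is to hide the comma in a variable:

```make
comma := ,
empty :=
space := $(empty) $(empty)
foo := a b c
bar := $(subst $(space),$(comma),$(foo))
# bar is now "a,b,c"
```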

rules and targets

A rule or target declaration should be set off by empty lines from any declarations before or after it. The exception is when a target is being declared phony or intermediate, as described below.

Make issues a warning if a target recipe is redefined.

Prerequisites for a target can be declared in multiple places, however, and make will use the union. This is useful with the special targets .PHONY and .INTERMEDIATE. It can be used to prevent long lines when a target has lots of prerequisites. It can be used with the include directive; for example, each included makefile could add its own test target as a prerequisite of the check target in the main makefile.

If more than one pattern rule can apply to a goal, then the recipe of the first pattern that matches will be executed. If the pattern is exactly the same, however, the later rule overwrites the previous one. Try invoking make on this makefile:
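The original example is gone; a minimal reconstruction of the same behavior:

```make
# The second rule replaces the first because its pattern is
# identical; "make foo.demo" runs the second recipe.
%.demo:
	@echo first recipe: $*
%.demo:
	@echo second recipe: $*
```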

I like patterns in rules to be anchored on the right side, e.g. %.html, more than on the left side, e.g. index.%. This makes rules specific to a file type when files have appropriate suffixes. I like rules where the percent sign is set off from the rest of the pattern with periods.

A goal is a target which the user specifies on the command line: e.g. make all. The user can specify more than one goal: e.g. make clean all. Or the user can specify no goals at all, in which case make builds the targets specified in the .DEFAULT_GOAL variable. If .DEFAULT_GOAL is not defined, make executes the first target in the makefile.

I set .DEFAULT_GOAL to all and I declare all to be a phony target. This is a GNU convention.

The all target normally has all the files used by the end user as direct or indirect prerequisites. However, if a full build takes a long time, consider having the all target echo the available goals.

phony targets

Targets which don't create a file with the same name as the target are called phony targets. Other targets are sometimes called file targets. Whether a target is a phony target is a property of the recipe. Make is not able to infer this property, but there is a way to explicitly declare it:
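The declaration uses the special .PHONY target; clean is the classic example:

```make
.PHONY: clean
clean:
	rm -f *.o
```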

When a phony target is declared, make will execute the recipe regardless of whether a file with the same name exists.

The following rules are recommended:

  • All phony targets should be declared by making them prerequisites of .PHONY.
  • Add each phony target as a prerequisite of .PHONY immediately before the target declaration, rather than listing all the phony targets in a single place.
  • No file targets should be prerequisites of .PHONY.
  • Phony targets should not be prerequisites of file targets.

The last rule is advised because a file target with a phony prerequisite will always be rebuilt.

See the section on phony targets with an argument for how to handle them.

intermediate targets

Declaring a file target as intermediate tells make that it can be removed when it is no longer needed. This is done by making the file a prerequisite of the .INTERMEDIATE target. I do this immediately before the target rather than declaring all intermediate files in a single place:
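A sketch; preprocess and render are hypothetical commands:

```make
.INTERMEDIATE: report.tmp
report.tmp: report.src
	preprocess $< > $@

report.html: report.tmp
	render $< > $@
```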

It is good style to declare all file targets which are not used by the end user as intermediate so there is less clutter in the directory. On the other hand, one might not want to declare a file target as intermediate when the recipe takes a long time to execute, especially during development.

There is no need to declare targets which are generated by pattern rules as intermediate, since make will remove them unless they are a command line target. If you would prefer that some of these files were kept, make them prerequisites of the .SECONDARY target. If the .SECONDARY target is present but has no prerequisites, all files created by pattern rules are kept.

declaration order

I put the declaration for a prerequisite of a target before the declaration of the target. This way the makefile tends to read in the order that things happen. If a set of targets are prerequisites of only one other target, put them immediately before that target.

I put variables and prerequisites before the rules and targets that use them. Variables or prerequisites which are used only once should be declared immediately before the rule or target which uses them. On the other hand, variables which are used by more than one target or rule might profitably be collected at the top of the body in a common variables subsection.

If I have a setup target I put it first. It is usually run once as sudo and is not part of the dependency graph. I put the clean and install targets last. Here is a suggested order:

  • common variables
  • setup targets
  • build targets
  • test and lint targets
  • documentation targets
  • clean and install targets

When a makefile is large, it might be desirable to divide the body into sections in which all of the variables and rules share a common prefix.

Note how variable declarations work: make gives no warning if a variable is redefined (unlike target recipes). Maybe this is for includes, or maybe this is so the makefile writer can redefine predefined makefile variables or variables inherited from the environment. It can cause bugs in a large makefile.

target names

Unfortunately, make does not provide a command line option for listing the available targets the way build tools such as rake or ant do. One must read the makefile to discover the tasks. As a consequence, one should strive to keep the makefile as easy to understand as possible. Another alternative is to see to it that all useful work is performed either directly or indirectly by a single target and to make that target the default.

A third alternative is to use standard task names. The closest thing I have found to a de facto standard are the GNU Standard Targets. Most of the GNU standard targets are only relevant to projects consisting of source code distributed in the GNU manner, however. The most generic target names are

  • all: the name of the default target
  • check: runs tests, linters, and style enforcers
  • clean: removes files created by make
  • install: installs the software
  • uninstall: undoes what install did

Use prefixes to group targets; use suffixes to classify file types.

Use of periods, commas, underscores, and hyphens.


automatic variables | whitespace | breaking long lines | comments | directory layout | making directories | recipes with multiple output files | phony targets with an argument | debugging | cleanup tasks | shell scripts

automatic variables

The automatic variables $@, $<, $^, and $* should be used whenever possible. Their use helps ensure that prerequisites are declared, which in turn ensures that the dependency graph isn't missing edges. They also aid maintainability; without them file names would appear in the prerequisites and be repeated one or more times in the recipe.

I list a file as a prerequisite and use $< to refer to it even when the file is under version control and not a target. However, I only do this when the file is an input, not an executable.

When a target has one prerequisite, I use $< in preference to $^.

When targets share a recipe, $@ refers to the target being built, not the entire list.

$* refers to the "stem" in a pattern rule; i.e. what was matched by %.

The word function can be used to refer to the 2nd, 3rd, and so on prerequisites in isolation:
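A sketch with hypothetical file names:

```make
# $(word 2,$^) is body.txt, the second prerequisite.
merged.txt: header.txt body.txt footer.txt
	@echo "second prerequisite: $(word 2,$^)"
```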

The lastword function can be used to refer to the last prerequisite:
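Again with hypothetical file names:

```make
# $(lastword $^) is manifest.txt, the last prerequisite.
archive.tar: a.txt b.txt manifest.txt
	@echo "last prerequisite: $(lastword $^)"
```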

The other automatic variables are less useful and should probably be regarded as cryptic.

$^ is a de-duplicated list of the prerequisites, but the original list is available in $+.

Neither $^ nor $+ contain the order-only prerequisites, which are available in $|.

$? refers to the prerequisites which are newer than the target. This can be used to write a recipe which only adds components which have changed to a library.


whitespace

I separate the makefile sections with empty lines. I also separate the rules and targets that do not start with a period with empty lines.

I do not use spaces after commas in variable function invocations which use commas as argument separators because any whitespace gets included in the argument.

breaking long lines

In recipes long lines can be broken with a backslash. The continuation line should start with a tab.

I prefer to put the break after a space, but be aware that the backslash can be put in the middle of a shell word:

Break up the right side of a long variable declaration by using +=:
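A sketch with hypothetical file names:

```make
sources := main.c lexer.c parser.c
sources += emitter.c optimizer.c
```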

The += operator will insert a space in between the two parts.

If a target has a lot of prerequisites, they can be split over multiple lines like this:
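For example, with a backslash continuation:

```make
all: alpha.txt beta.txt gamma.txt \
	delta.txt epsilon.txt
```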

Or like this:
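Repeating the target takes advantage of make unioning the prerequisite lists:

```make
all: alpha.txt beta.txt gamma.txt
all: delta.txt epsilon.txt
```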


comments

I use comments sparingly. I don't use them to mark off sections of the makefile. Although it is tempting to document a project in the makefile, I prefer to put this documentation in a separate file.

directory layout

I prefer to put files which are created by the makefile at the root of the project directory. This is Rule 3 from Paul's Rules of Makefiles.

To remove generated files I use this task:
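The original recipe is lost; one sketch, assuming generated files live at the project root and everything under version control lives in subdirectories, is to delete every regular file at the root except the Makefile:

```make
.PHONY: clean
clean:
	find . -maxdepth 1 -type f -not -name Makefile -delete
```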

is this a bit dangerous?

As a consequence, we put our source code and non-generated prerequisites in subdirectories. We put subdirectories containing non-generated prerequisites in VPATH to keep filenames short.

VPATH uses a colon delimited list of paths:
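For example, with hypothetical directory names:

```make
VPATH = src:include:data
```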

making directories

If a subdirectory must be created, it should be an "order-only" prerequisite. This is achieved by listing it after a pipe symbol in the prerequisites:
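A sketch, using hypothetical file names:

```make
out/report.txt: report.txt | out
	cp $< $@

out:
	mkdir $@
```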

The reason is that the last modification time of a directory is the last time a file was added, removed, or renamed. Usually we don't want to rebuild everything in the directory when this happens.

Directory targets can share a recipe:
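For example:

```make
out tmp cache:
	mkdir $@
```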

Is this necessary to avoid mkdir -p?

recipes with multiple output files

Targets can share a recipe:
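A sketch; generate.sh is a hypothetical script:

```make
# The recipe runs once for each target that is out of date,
# with $@ bound to that target.
foo.csv bar.csv: generate.sh
	./generate.sh $@
```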

This is distinct from a recipe which generates multiple files. The following is incorrect:
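A sketch using the classic yacc example, which emits both a C file and a header:

```make
# INCORRECT: make reads this as two separate rules which happen to
# share a recipe, so the recipe may run once for parse.c and again
# for parse.h.
parse.c parse.h: grammar.y
	yacc -d $<
	mv y.tab.c parse.c
	mv y.tab.h parse.h
```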

In a serial build the above recipe will be called twice needlessly. In a parallel build the recipe can be called twice at the same time, corrupting the output.

There are two correct ways to do this. One uses a dummy file:
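Continuing the yacc sketch, the dummy file becomes the real target:

```make
parse.c parse.h: yacc.dummy
yacc.dummy: grammar.y
	yacc -d $<
	mv y.tab.c parse.c
	mv y.tab.h parse.h
	touch $@
```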

When the output filenames share a common stem, a pattern rule can be used instead of a dummy file:
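The same sketch as a pattern rule:

```make
# A pattern rule with multiple targets runs its recipe once per stem.
%.c %.h: %.y
	yacc -d $<
	mv y.tab.c $*.c
	mv y.tab.h $*.h
```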

phony targets with an argument

Targets with arguments are a way to make a makefile more general and hence reusable. Pattern rules and $* can be used to implement targets with arguments:
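A minimal sketch:

```make
# "make echo.hello" prints hello; the argument is the stem $*.
echo.%:
	@echo $*
```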

We recommend using a period to set off the argument from the rest of the target.

Because the arguments that might be used are not known in advance, it is not possible to make these phony targets prerequisites of .PHONY. Here is a mechanism for achieving the same effect:
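One common idiom, presented here as an assumption about the lost original, is a FORCE target:

```make
# Depending on FORCE makes any matching target rebuild
# unconditionally, which is the practical effect of being phony.
echo.%: FORCE
	@echo $*

FORCE:
```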


debugging

Because make echoes commands before it runs them, debugging recipes is usually trivial.

A common source of errors is macro substitution. Two common mistakes are not double escaping dollar signs in the underlying shell script and not using parens to access variables with names longer than a single character.

Otherwise debugging recipes is the same as debugging shell scripts.

Another class of problems are problems in the dependency graph. A job that should run doesn't, or a job runs when it doesn't need to.

A possible cause of dependency graph problems is that there are variables in the targets or the prerequisites, and those variables don't contain the expected values. There are opportunities for error when populating variables with values using functions such as wildcard, shell, or subst. Here is a generic task which can be used to inspect any variable:
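The standard idiom is a print pattern rule:

```make
# "make print-sources" displays the value of the variable sources.
print-%:
	@echo '$*=$($*)'
```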

We have encountered two situations which cause recipes to run when they don't need to. One is having directories as prerequisites and not declaring them as order only prerequisites. The other is misusing the shared recipe syntax for a recipe which creates multiple files.

cleanup tasks

It is standard to have a clean task.

We often see two levels of clean task. One reason is that some resources might take a long time to download or to compile.

In GNU projects which use configure, the distclean target will remove files created by configure but the clean target will not.

Another convention is for clobber to remove everything that is not under version control, and for clean to remove a subset of the files removed by clobber, per the judgement of the makefile author. Files which might not be removed by clean are the final output of the workflow or files which are expensive to generate.


repositories | testing | installing | header files


why they must be a separate setup target and not part of the regular build DAG


The GNU standard stipulates check as the standard target for running tests. We often have a separate test target for running tests, and use check as a target which runs test as well as targets for linters and style enforcers.

We often separate unit test and integration test targets. The former runs traditional xUnit style tests; the latter might require that services are running and might take longer to run. The check target usually does not run the integration tests.


What about the install command? Don't require root to install.

Makefiles which install software are often generated using the configure command. This takes an optional --prefix flag which the user can use to change the installation location from /usr/local to another location on the file system. A simple implementation of --prefix would set a variable in the makefile. If install is a target but there is no configure script, use PREFIX as an environment variable for setting the installation location.

Filesystem Hierarchy Standard.

header files

When building a language like C which has header files, don't manage the dependencies between headers and source files manually in the makefile.

In a small project, make all source files dependent on all headers. Whenever a header file changes, the entire project is recompiled. This is acceptable as long as the total build time is not very long.

In a large project, use gcc -MM to compute a dependency file for each source file, and use sed to convert it to a format that can be included into the makefile:
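A sketch of the technique; sources is a hypothetical variable:

```make
sources := main.c util.c

# gcc -MM emits "main.o: main.c util.h"; sed makes the .d file
# depend on the same prerequisites as the .o file.
%.d: %.c
	gcc -MM $< | sed 's/\(.*\)\.o:/\1.o \1.d:/' > $@

-include $(sources:.c=.d)
```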

Putting a hyphen in front of the include directive quiesces warning messages when the files don't exist. This technique exploits a feature of GNU Make in which the argument of an include directive gets built if it is missing, provided make can find a target to build it.


generating documents

Sometimes human readable documents are prepared by editing a plain text file in some form of markup and then running a tool on the markup file to create the final format. For example, the source format might be XML or Markdown.

Common target formats are HTML, PDF, or EPUB. It might be desirable to support multiple target formats.


data workflows

A data workflow makefile implements data processing. Data is kept in files, often in a relational format, and executables are invoked on the files, transforming them in stages to the desired output.

Data workflows are different from source code builds in that the tools they use are often newly written and hence buggy. Also data workflows are more likely to benefit from parallelization.

source files

The source files of a data workflow are data, not source code. Especially if the data is large, it may be undesirable to keep them under version control. Instead an approach is to define targets with no prerequisites which download the data files.


In order for parallelization to work, dependencies must be declared. Jobs which are not dependent on each other must be isolated. There should not be any resources accessed by jobs which make is not aware of.

It is best if tools which are run by make accept the names of all files which they read from or write to as arguments, or if the tools read from standard input and write to standard output. The file system is then completely managed by make and documented by the makefile.

Tools which read from or write to a hard-coded file name are maintenance problems when invoked by make because the path must also be hard-coded in the makefile. This violates the DRY principle.

When tools need temporary files, they should use a library which returns an unused file name. Tools with a hard-coded path for a temporary file can't be invoked in parallel.

Tools which access databases might not be parallelizable. Make is a poor tool for implementing a database workflow because make expects each target to be a file with a last modified timestamp. We are not aware of a good tool for managing a database workflow. Makefiles should restrict themselves to reading from databases at the start of the workflow and writing to a database at the end.

We put the onus on the user of specifying the number of simultaneous jobs when invoking make with the -j flag.

The alternative would be to hardcode a value in the MAKEFLAGS variable, but choosing a portable value is difficult and the user might not want to use all the cores.

If a makefile can run jobs in parallel, it should be documented in the README.

If -j is used without an argument, there is no limit on the number of jobs make will run in parallel.

The -l flag can be used to put an upper bound on the load average, as reported by uptime, that make will put on the system, but we have not experimented with it. Note that for a box with 16 cores, a load average of 16 does not suggest contention, but it does suggest contention on a box with 4 cores.

splitting large files

It is often desirable to split a large file so that the parts can be processed in parallel.

Ideally we would use split to split the file and the wildcard function to read the parts into a variable. Doing it this way prevents make from building the entire graph of dependencies at invocation, however. The result is that the user will have to invoke make two or more times to run the entire workflow.

The alternative is to calculate the names of the files that will be created by split. Here is an example:
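A sketch of the approach; the GNU split flags, the chunk naming, and the process command are all assumptions:

```make
nparts := 4
parts := $(addprefix chunk.,00 01 02 03)

# split runs once; the dummy file records that the parts exist.
split.dummy: big.input
	split -d -n l/$(nparts) $< chunk.
	touch $@

$(parts): split.dummy

out.%: chunk.%
	./process < $< > $@

combined.out: $(parts:chunk.%=out.%)
	cat $^ > $@
```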

The approach has at least 3 disadvantages: (1) it is error-prone to compute the file names that will be created by split, (2) it requires the creation of an empty dummy file, and (3) we are using flags which are not available in all implementations of split. They are available on Ubuntu 12.04 but not Ubuntu 11.10. As of August 2013, they are not available on any version of Mac OS X.

We think the importance of implementing a workflow with a single target outweighs the disadvantages.

file names

Well chosen file names make a project easier to understand. The benefit is experienced both by a user navigating the file system and a user reading the makefile.

The best choice of names is often not apparent until the end of development, so refactoring is necessary. Using automatic variables in recipes makes renaming files easier. Furthermore variables can be defined for files which appear in multiple target declarations. As previously noted, we prefer file names to be specified in the makefile and passed as arguments to executables invoked by the makefile.

File suffixes should be used to declare the format of the data in a file. Consistent use of file suffixes makes it possible to define pattern rules. We use a period to separate a suffix from the root.

The most obvious convention for file names is they should describe what is in the file. However, in a workflow with a long chain of dependencies, this naming convention can result in long file names. An alternative convention is for files to be named after the executable that produced them.

We prefer file names which match this regular expression: .

Spaces are discouraged because makefile programming is shell programming. We use underscores where spaces would occur in natural language.

We use hyphens where hyphens would occur in natural language.

We use periods when we intend to parse the name in a pattern rule. Unfortunately it is sometimes also desirable to insert periods into file names—say to encode version numbers or floating point numbers.

file formats

Debugging is easiest if each file has a well-defined format, and each tool fails with an informative error message if any of its input was not in the expected format. This approach makes it easy to find the component which is at fault.

  • utf-8: validate with iconv -f utf-8 -t utf-8
  • csv: RFC 4180
  • json: often one JSON object per line. Note that whitespace that does not occur inside strings is optional. Validate with python -mjson.tool
  • tab: we use this suffix for tab-delimited data with no header. We prefer a header for the documentation it provides, but headers are inconvenient when sorting or joining files.
  • tsv: a header should always be present. Tab and EOL delimited, with no method of escaping or quoting those characters. Every row must have the same number of fields. See the IANA specification.
  • xml: validate with xmllint FILE

A way to test whether all the rows in a tab-delimited file have the same number of fields:
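A sketch using awk; data.tab is a hypothetical file name:

```shell
# Prints the distinct field counts; a well-formed file produces
# exactly one line of output.
awk -F'\t' '{ print NF }' data.tab | sort -u
```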

TODO: trimming whitespace in a tsv

non-local prerequisites and targets

Make can be used to generate artifacts which are not local files, but this is not ideal. Consider defining a target to create a database table and insert data into it. Because there is no last modified timestamp associated with the database table that make is aware of, it does not know to update the database table when the prerequisites are newer. Furthermore, if the database table were a prerequisite of other targets, make would not know to update the targets when the database table is newer.


large projects

multiple makefiles in a directory | partitioned make | shared make | recursive make | inclusive make

multiple makefiles in a directory

One way to deal with the variable name and target name collision problem is to have multiple makefiles in the directory, and to make the user choose a makefile with the -f flag each time make is invoked. This makes make unpleasant to use, however.

An alternative is to introduce subdirectories, each with its own makefile.

Another option is to keep everything in a large makefile with the variables and targets of the body grouped into sections. The variables and targets in each section share a common prefix.

partitioned make

We don't think there is much value in using the include directive just to split a large makefile, even one thousands of lines long, into multiple files. The include directive performs simple text substitution like the C preprocessor #include directive; hence it does not solve the variable name or target name collision problem. Note that make gives a warning if a target recipe is redefined, but not if a variable is redefined.

shared make

An application of the include directive is for makefiles to share common variable and target definitions. A project with subdirectories that contain makefiles is a good application of this. We do not put the prologue section in an included makefile.

recursive make

We avoid recursive make so the complete dependency graph is available to a single invocation of make.

describe how to do it

describe the drawbacks

Perhaps it is okay when subdirectories are loosely related. For example when a small project with a pre-existing Makefile is incorporated into a project.

inclusive make

as described by Peter Miller in "Recursive Make Considered Harmful"

Root makefile includes information from subdirectories.

What about ARG_MAX.


setup targets

Setup targets perform actions such as:

  • installing host packages: e.g. apt-get, yum, port, brew, …
  • installing language packages: e.g. pip, gem

Two guiding principles here are (1) we don't want the makefile to contain file targets which are outside of the project directory, and (2) we don't want to write make recipes which prompt the end user for information such as a password.

Ideally, we install packages inside the project without using elevated permissions. This way the project is insulated from other projects on the machine. Other build targets can have the setup target as a prerequisite. The setup target can be the directory inside the project in which the packages were installed. Alternatively, we can touch a dummy file at the end of the setup recipe.

If elevated permissions are required, the setup task should be a phony target. The recipe should not invoke sudo. Instead the task should be invoked by the end user with the correct permissions, i.e.:
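For example:

```
$ sudo make setup
```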

One advantage of this approach is that it gives the end user some flexibility. The user can use make setup and install packages as a regular user, or use sudo make setup and install packages as root.

A disadvantage of a phony setup task is it cannot be a prerequisite of other build tasks. The end user must invoke the setup task separately. If ease-of-use is critical, test for the presence of necessary packages:
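A parse-time sketch; curl stands in for whatever command the build actually needs:

```make
ifeq ($(shell command -v curl),)
$(error curl is required; please install it)
endif
```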

The end user might want to install system packages as root and language packages as an unprivileged user. There should be separate targets for each.


portable projects

The autoconf manual is about twice as long as the make manual, which is perhaps evidence that make alone is not a good solution for portable builds. Perhaps portable builds are always difficult. Making definitive statements involves evaluating other build systems, and is out of scope of this document.

Even if it is decided that a project is only going to target one architecture and doesn't need to be highly portable, it is still worthwhile to write makefiles in a portable way. Machines and user environments are rarely configured identically. There have been cases where only one developer was able to build the project.

Here are some things to think about.

  • make version
  • shell version
  • external commands
  • environment variables.
  • paths outside the project directory

make version

I always use GNU Make. The choice of make is made by the person invoking it and not the makefile author, however. The makefile author can inspect the MAKE_VERSION variable if the version of GNU Make is critical. This only gives a major and minor version number. Here is code which tests for GNU Make:
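A sketch; note that a non-GNU make may fail to parse the conditional at all, which also stops the build:

```make
# MAKE_VERSION is defined by GNU Make.
ifeq ($(origin MAKE_VERSION), undefined)
$(error This makefile requires GNU Make)
endif
```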

shell version

The shell I use is Bash. In particular I assume that the first executable named bash in the PATH is the Bash shell. The version of Bash that is distributed with Mac OS X is quite old: version 3.2 circa 2006. Personally I install a current version of Bash, but most Mac users probably don't.

The default Make shell is /bin/sh. For maximum portability, one should use this as the shell and not use any features that are not listed in the POSIX standard. FreeBSD does not come with Bash installed by default, and FreeBSD systems which have Bash do not install it at /bin/bash. To verify you are not using Bash specific features, run the script or recipe with dash.

external commands

Shell scripts can fail because external commands are missing. Even when run on the same system, the script may fail because it was run by a user with a different PATH.

The GNU Coding Standards prohibit using any external commands except for the following: cat, cmp, cp, diff, echo, egrep, expr, false, grep, install-info, ln, ls, mkdir, mv, printf, pwd, rm, rmdir, sed, sleep, sort, tar, test, touch, tr, true.

Even if the external command is present, the option might not be. The following options are available on recent versions of Linux but not Mac OS X:

  • sort -R: randomly shuffles input
  • grep -P: grep using a Perl-style regular expression
  • split -n: splits input into N files
  • du --max-depth: show disk usage, summarized to a given depth

Here are the POSIX mandated options.

External commands which are not reliably present should be installed in the bin subdirectory of the project and invoked from there in the Makefile. Such external commands should be implemented as shell scripts, or in a widely available scripting language such as Perl or Python.

What about defining all external utilities (i.e. outside of the repository) in one place in shell variables and invoking them via the variables? This makes an audit of the script easy.
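A minimal sketch of that convention; the commands here are ordinary POSIX utilities and the variable names are my own:

```shell
# Every external dependency declared once, up front, for easy auditing.
readonly SORT=sort
readonly HEAD=head

printf 'b\na\n' | "$SORT" | "$HEAD" -n 1
```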

environment variables

  • HOME
  • LANG
  • PATH
  • PWD
  • TERM

bash doesn't read any startup files, does it?

Shell scripts can't parse any of the commonly used configuration file formats. One could write a utility which parses a configuration file and emits a shell script to be sourced, or invent a "shell only" configuration format consisting of variable assignments or exports intended to be sourced. Really, though, the natural way to configure shell scripts is with environment variables.

paths outside the project directory

  • installing files
  • /tmp directories
  • getting the containing directory of a makefile or a shell script (MAKEFILE_LIST)


Makefile recipes are in effect shell scripts. At times it may be beneficial to move code out of a recipe and into a separate shell script.

Extracting a shell script from a recipe is easier when the recipe has few Makefile variables.

The shell script should not know about file names. Ideally, the shell script should read from standard input and write to standard output. The dependency DAG is in the makefile. Yes, this means sometimes passing lots of arguments to the script. If you need to move or rename files in the directory, you can do it all in the Makefile.
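A sketch of the division of labor, assuming a hypothetical filter script bin/clean-data in the project:

```make
# The makefile owns file names and the dependency DAG;
# the script only reads standard input and writes standard output.
out/report.txt: data/raw.txt bin/clean-data
	mkdir -p out
	bin/clean-data < $< > $@
```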

tmp files.

Extracted shell scripts should be kept in the project directory under source control. If a subdirectory is desired for extracted shell scripts, bin is a good choice.

There is a tool called shellcheck which finds a lot of errors and risky idioms. It is available in package managers such as Homebrew and Apt.

The shell script should have a prologue to make its behavior agree with the SHELL and .SHELLFLAGS variables in the prologue of the Makefile:
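A minimal sketch of such a prologue; the exact flags should mirror whatever the Makefile's .SHELLFLAGS sets:

```shell
#!/usr/bin/env bash
set -e            # exit when a simple command fails
set -u            # expanding an unset variable is an error
set -o pipefail   # a pipeline fails if any command in it fails
echo "prologue active"
```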

Here is an excerpt from the bash man page describing the -e flag:

Exit immediately if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of a && or || list, or if the command's return value is being inverted via !. A trap on ERR, if set, is executed before the shell exits.

Some commands exit with a non-zero status in conditions which should not always be treated as errors. For example, grep when no lines match, and diff when the files being compared are different. The || true idiom handles this:
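For example, assuming set -e is in effect; logfile.txt is created here just for the demonstration:

```shell
set -e
printf 'ok\nok\n' > logfile.txt
# grep -c exits 1 on "no match"; "|| true" keeps that from aborting the script.
matches=$(grep -c '^ERROR' logfile.txt || true)
echo "matches: $matches"
```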

The -u flag causes bash to treat unset variables as an error when encountered in parameter expansion. To source a file which references unset variables:
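One approach is to relax -u just around the source; the file config.sh and the variable UNSET_NAME are made up for the example:

```shell
set -u
cat > config.sh <<'EOF'
greeting="hello ${UNSET_NAME}"
EOF
set +u            # the config may reference variables we never set
. ./config.sh
set -u
echo "$greeting"
```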

Use "$@" to pass the command line or function parameters to a command when you want the command to get the same number of parameters. If you want to combine the command line or function parameters into a single parameter, use "$*". Unquoted, $@ and $* will expand to at least the number of parameters that the shell or function received; if any of the parameters contained whitespace, the command will receive more parameters than the shell or function did.

Filenames which contain spaces or which start with hyphens present hazards when shell scripting. In a makefile project they are avoided by renaming any files with external provenance as soon as they are acquired. When iterating over files with for, do not use command substitution to generate the list of files. Use the built-in shell globbing operator instead. When passing a list of files generated by a fileglob, the double hyphen will prevent any files with hyphens at the start from being interpreted as flags:
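A sketch; the awkward files are created here to make the hazard concrete:

```shell
mkdir -p incoming
touch -- 'incoming/-dashed.txt' 'incoming/two words.txt'
# Glob with the shell, never $(ls ...); quote "$f"; "--" ends option parsing
# so a leading hyphen in a file name cannot be mistaken for a flag.
for f in incoming/*.txt; do
    mv -- "$f" "$f.bak"
done
```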

The above will work with file names containing spaces. Some further guidelines:

  • use $( ) instead of ` `
  • double quote all variables in [ ]; or use [[ ]] instead of [ ]
  • use readonly and local

Bash scripts should not depend on the working directory of the invoker. If the bash script calls other executables in the same directory, here is a reliable way to get that directory:
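The usual idiom, bash-specific because of BASH_SOURCE:

```shell
#!/usr/bin/env bash
# Directory containing this script, independent of the caller's cwd.
# ${BASH_SOURCE[0]:-$0} falls back to $0 when the script is not run from a file.
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)
echo "$script_dir"
```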

An example of how to write an error message:
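For example, a small die helper (the name is my own convention): the message goes to standard error, prefixed with the program name, and the script exits non-zero:

```shell
die() {
    echo "${0##*/}: error: $*" >&2
    exit 1
}

# usage (in a real script this terminates it):
# die "input file not found"
```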

Here is an example of how to perform cleanup after an error condition:
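A sketch of the pattern, using a temporary work directory as the resource to clean up (bash-specific because of the ERR condition):

```shell
#!/usr/bin/env bash
set -eu -o pipefail

workdir=$(mktemp -d)

cleanup() {
    rm -rf "$workdir"       # release scratch space even on failure
}
trap cleanup ERR            # runs when a command failure would end the script

# ... work in "$workdir" ...
cleanup                     # normal-path cleanup
```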

trap is used to register a signal handler. The ERR condition is a pseudo-signal which fires when a command fails. It fires in the same situations as when a command failure would cause the shell to exit when running under the -e option.

The shellcheck script looks for errors in shell scripts. A package of the same name exists in both Homebrew and Apt:

Comments can be used to prevent certain checks on certain code:
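For example, a directive comment on the line above the code it applies to; SC2086 is shellcheck's "double quote to prevent word splitting" check, suppressed here because the splitting is deliberate:

```shell
count_args() { echo "$#"; }

flags="-a -b"
# shellcheck disable=SC2086   # word splitting of $flags is intentional here
count_args $flags
```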



How to Use Variables

A variable is a name defined in a makefile to represent a string of text, called the variable's value. These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of make, variables are called macros.)

Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using =, and the bodies of variable definitions using the define directive.

Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine.

A variable name may be any sequence of characters not containing :, #, =, or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-make (see section Communicating Variables to a Sub-make).

Variable names are case-sensitive. The names foo, FOO, and Foo all refer to different variables.

It is traditional to use upper case letters in variable names, but we recommend using lower case letters for variable names that serve internal purposes in the makefile, and reserving upper case for parameters that control implicit rules or for parameters that the user should override with command options (see section Overriding Variables).

A few variables have names that are a single punctuation character or just a few characters. These are the automatic variables, and they have particular specialized uses. See section Automatic Variables.

Basics of Variable References

To substitute a variable's value, write a dollar sign followed by the name of the variable in parentheses or braces: either $(foo) or ${foo} is a valid reference to the variable foo. This special significance of $ is why you must write $$ to have the effect of a single dollar sign in a file name or command.

Variable references can be used in any context: targets, prerequisites, commands, most directives, and new variable values. Here is an example of a common case, where a variable holds the names of all the object files in a program:

objects = program.o foo.o utils.o
program : $(objects)
        cc -o program $(objects)
$(objects) : defs.h

Variable references work by strict textual substitution. Thus, the rule

foo = c
prog.o : prog.$(foo)
        $(foo)$(foo) -$(foo) prog.$(foo)

could be used to compile a C program prog.c. Since spaces before the variable value are ignored in variable assignments, the value of foo is precisely c. (Don't actually write your makefiles this way!)

A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable x with $x. However, this practice is strongly discouraged, except in the case of the automatic variables (see section Automatic Variables).

The Two Flavors of Variables

There are two ways that a variable in GNU make can have a value; we call them the two flavors of variables. The two flavors are distinguished in how they are defined and in what they do when expanded.

The first flavor of variable is a recursively expanded variable. Variables of this sort are defined by lines using = (see section Setting Variables) or by the define directive (see section Defining Variables Verbatim). The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called recursive expansion.

For example,

foo = $(bar)
bar = $(ugh)
ugh = Huh?

all:;echo $(foo)

will echo Huh?: $(foo) expands to $(bar) which expands to $(ugh) which finally expands to Huh?.

This flavor of variable is the only sort supported by other versions of make. It has its advantages and its disadvantages. An advantage (most would say) is that:

CFLAGS = $(include_dirs) -O
include_dirs = -Ifoo -Ibar

will do what was intended: when CFLAGS is expanded in a command, it will expand to -Ifoo -Ibar -O. A major disadvantage is that you cannot append something on the end of a variable, as in

CFLAGS = $(CFLAGS) -O
because it will cause an infinite loop in the variable expansion. (Actually make detects the infinite loop and reports an error.)

Another disadvantage is that any functions (see section Functions for Transforming Text) referenced in the definition will be executed every time the variable is expanded. This makes make run slower; worse, it causes the wildcard and shell functions to give unpredictable results because you cannot easily control when they are called, or even how many times.

To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables.

Simply expanded variables are defined by lines using := (see section Setting Variables). The value of a simply expanded variable is scanned once and for all, expanding any references to other variables and functions, when the variable is defined. The actual value of the simply expanded variable is the result of expanding the text that you write. It does not contain any references to other variables; it contains their values as of the time this variable was defined. Therefore,

x := foo
y := $(x) bar
x := later

is equivalent to

y := foo bar
x := later

When a simply expanded variable is referenced, its value is substituted verbatim.

Here is a somewhat more complicated example, illustrating the use of := in conjunction with the shell function. (See section The shell Function.) This example also shows use of the variable MAKELEVEL, which is changed when it is passed down from level to level. (See section Communicating Variables to a Sub-make, for information about MAKELEVEL.)

ifeq (0,${MAKELEVEL})
cur-dir   := $(shell pwd)
whoami    := $(shell whoami)
host-type := $(shell arch)
MAKE := ${MAKE} host-type=${host-type} whoami=${whoami}
endif

An advantage of this use of := is that a typical `descend into a directory' command then looks like this:

${subdirs}:
        ${MAKE} cur-dir=${cur-dir}/$@ -C $@ all

Simply expanded variables generally make complicated makefile programming more predictable because they work like variables in most programming languages. They allow you to redefine a variable using its own value (or its value processed in some way by one of the expansion functions) and to use the expansion functions much more efficiently (see section Functions for Transforming Text).

You can also use them to introduce controlled leading whitespace into variable values. Leading whitespace characters are discarded from your input before substitution of variable references and function calls; this means you can include leading spaces in a variable value by protecting them with variable references, like this:

nullstring :=
space := $(nullstring) # end of the line

Here the value of the variable space is precisely one space. The comment is included here just for clarity. Since trailing space characters are not stripped from variable values, just a space at the end of the line would have the same effect (but be rather hard to read). If you put whitespace at the end of a variable value, it is a good idea to put a comment like that at the end of the line to make your intent clear. Conversely, if you do not want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this:

dir := /foo/bar    # directory to put the frobs in

Here the value of the variable dir is /foo/bar    (with four trailing spaces), which was probably not the intention. (Imagine something like $(dir)/file with this definition!)

There is another assignment operator for variables, ?=. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement:

FOO ?= bar

is exactly equivalent to this (see section The origin Function):

ifeq ($(origin FOO), undefined)
FOO = bar
endif

Note that a variable set to an empty value is still defined, so ?= will not set that variable.

Advanced Features for Reference to Variables

This section describes some advanced features you can use to reference variables in more flexible ways.

Substitution References

A substitution reference substitutes the value of a variable with alterations that you specify. It has the form $(var:a=b) (or ${var:a=b}) and its meaning is to take the value of the variable var, replace every a at the end of a word with b in that value, and substitute the resulting string.

When we say "at the end of a word", we mean that a must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of a in the value are unaltered. For example:

foo := a.o b.o c.o
bar := $(foo:.o=.c)

sets bar to a.c b.c c.c. See section Setting Variables.

A substitution reference is actually an abbreviation for use of the patsubst expansion function (see section Functions for String Substitution and Analysis). We provide substitution references as well as patsubst for compatibility with other implementations of make.

Another type of substitution reference lets you use the full power of the patsubst function. It has the same form $(var:a=b) described above, except that now a must contain a single % character. This case is equivalent to $(patsubst a,b,$(var)). See section Functions for String Substitution and Analysis, for a description of the patsubst function.

For example:

foo := a.o b.o c.o
bar := $(foo:%.o=%.c)

sets bar to a.c b.c c.c.

Computed Variable Names

Computed variable names are a complicated concept needed only for sophisticated makefile programming. For most purposes you need not consider them, except to know that making a variable with a dollar sign in its name might have strange results. However, if you are the type that wants to understand everything, or you are actually interested in what they do, read on.

Variables may be referenced inside the name of a variable. This is called a computed variable name or a nested variable reference. For example,

x = y
y = z
a := $($(x))

defines a as z: the $(x) inside $($(x)) expands to y, so $($(x)) expands to $(y) which in turn expands to z. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of $(x). The reference $(x) here is nested within the outer variable reference.

The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels:

x = y
y = z
z = u
a := $($($(x)))

Here the innermost $(x) expands to y, so $($(x)) expands to $(y) which in turn expands to z; now we have $(z), which becomes u.

References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example:

x = $(y)
y = z
z = Hello
a := $($(x))

defines a as Hello: $($(x)) becomes $($(y)) which becomes $(z) which becomes Hello.

Nested variable references can also contain modified references and function invocations (see section Functions for Transforming Text), just like any other reference. For example, using the subst function (see section Functions for String Substitution and Analysis):

x = variable1
variable2 := Hello
y = $(subst 1,2,$(x))
z = y
a := $($($(z)))

eventually defines a as Hello. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: $($($(z))) expands to $($(y)) which becomes $($(subst 1,2,$(x))). This gets the value variable1 from x and changes it by substitution to variable2, so that the entire string becomes $(variable2), a simple variable reference whose value is Hello.

A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example,

a_dirs := dira dirb
1_dirs := dir1 dir2

a_files := filea fileb
1_files := file1 file2

ifeq "$(use_a)" "yes"
a1 := a
else
a1 := 1
endif

ifeq "$(use_dirs)" "yes"
df := dirs
else
df := files
endif

dirs := $($(a1)_$(df))

will give dirs the same value as a_dirs, 1_dirs, a_files or 1_files depending on the settings of use_a and use_dirs.

Computed variable names can also be used in substitution references:

a_objects := a.o b.o c.o
1_objects := 1.o 2.o 3.o

sources := $($(a1)_objects:.o=.c)

defines sources as either a.c b.c c.c or 1.c 2.c 3.c, depending on the value of a1.

The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. For example,

ifdef do_sort
func := sort
else
func := strip
endif

bar := a d b g q c

foo := $($(func) $(bar))

attempts to give foo the value of the variable sort a d b g q c or strip a d b g q c, rather than giving a d b g q c as the argument to either the sort or the strip function. This restriction could be removed in the future if that change is shown to be a good idea.

You can also use computed variable names in the left-hand side of a variable assignment, or in a define directive, as in:

dir = foo
$(dir)_sources := $(wildcard $(dir)/*.c)
define $(dir)_print
lpr $($(dir)_sources)
endef

This example defines the variables dir, foo_sources, and foo_print.

Note that nested variable references are quite different from recursively expanded variables (see section The Two Flavors of Variables), though both are used together in complex ways when doing makefile programming.

How Variables Get Their Values

Variables can get values in several different ways:

Setting Variables

To set a variable from the makefile, write a line starting with the variable name followed by = or :=. Whatever follows the = or := on the line becomes the value. For example,

objects = main.o foo.o bar.o utils.o

defines a variable named objects. Whitespace around the variable name and immediately after the = is ignored.

Variables defined with = are recursively expanded variables. Variables defined with := are simply expanded variables; these definitions can contain variable references which will be expanded before the definition is made. See section The Two Flavors of Variables.

The variable name may contain function and variable references, which are expanded when the line is read to find the actual variable name to use.

There is no limit on the length of the value of a variable except the amount of swapping space on the computer. When a variable definition is long, it is a good idea to break it into several lines by inserting backslash-newline at convenient places in the definition. This will not affect the functioning of make, but it will make the makefile easier to read.

Most variable names are considered to have the empty string as a value if you have never set them. Several variables have built-in initial values that are not empty, but you can set them in the usual ways (see section Variables Used by Implicit Rules). Several special variables are set automatically to a new value for each rule; these are called the automatic variables (see section Automatic Variables).

If you'd like a variable to be set to a value only if it's not already set, then you can use the shorthand operator ?= instead of =. These two settings of the variable FOO are identical (see section The origin Function):

FOO ?= bar


ifeq ($(origin FOO), undefined)
FOO = bar
endif

Appending More Text to Variables

Often it is useful to add more text to the value of a variable already defined. You do this with a line containing +=, like this:

objects += another.o

This takes the value of the variable objects, and adds the text another.o to it (preceded by a single space). Thus:

objects = main.o foo.o bar.o utils.o
objects += another.o

sets objects to main.o foo.o bar.o utils.o another.o.

Using += is similar to:

objects = main.o foo.o bar.o utils.o
objects := $(objects) another.o

but differs in ways that become important when you use more complex values.

When the variable in question has not been defined before, += acts just like normal =: it defines a recursively-expanded variable. However, when there is a previous definition, exactly what += does depends on what flavor of variable you defined originally. See section The Two Flavors of Variables, for an explanation of the two flavors of variables.

When you add to a variable's value with +=, make acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with :=, making it a simply-expanded variable, += adds to that simply-expanded definition, and expands the new text before appending it to the old value just as := does (see section Setting Variables, for a full explanation of :=). In fact,

variable := value
variable += more

is exactly equivalent to:

variable := value
variable := $(variable) more

On the other hand, when you use += with a variable that you defined first to be recursively-expanded using plain =, make does something a bit different. Recall that when you define a recursively-expanded variable, make does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (see section The Two Flavors of Variables). When you use += on a recursively-expanded variable, it is this unexpanded text to which make appends the new text you specify.

variable = value
variable += more

is roughly equivalent to:

temp = value
variable = $(temp) more

except that of course it never defines a variable called temp. The importance of this comes when the variable's old value contains variable references. Take this common example:

CFLAGS = $(includes) -O
...
CFLAGS += -pg # enable profiling

The first line defines the CFLAGS variable with a reference to another variable, includes. (CFLAGS is used by the rules for C compilation; see section Catalogue of Implicit Rules.) Using = for the definition makes CFLAGS a recursively-expanded variable, meaning $(includes) is not expanded when make processes the definition of CFLAGS. Thus, includes need not be defined yet for its value to take effect. It only has to be defined before any reference to CFLAGS. If we tried to append to the value of CFLAGS without using +=, we might do it like this:

CFLAGS := $(CFLAGS) -pg # enable profiling

This is pretty close, but not quite what we want. Using := redefines CFLAGS as a simply-expanded variable; this means make expands the text $(CFLAGS) -pg before setting the variable. If includes is not yet defined, we get -O -pg, and a later definition of includes will have no effect. Conversely, by using += we set CFLAGS to the unexpanded value $(includes) -O -pg. Thus we preserve the reference to includes, so if that variable gets defined at any later point, a reference like $(CFLAGS) still uses its value.

The override Directive

If a variable has been set with a command argument (see section Overriding Variables), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an override directive, which is a line that looks like this:

override variable = value


override variable := value

To append more text to a variable defined on the command line, use:

override variable += more text

See section Appending More Text to Variables.

The override directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments.

For example, suppose you always want the -g switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this override directive:

override CFLAGS += -g

You can also use override directives with define directives. This is done as you might expect:

override define foo
bar
endef

See the next section for information about define.

Defining Variables Verbatim

Another way to set the value of a variable is to use the define directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining canned sequences of commands (see section Defining Canned Command Sequences).

The define directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word endef. Aside from this difference in syntax, define works just like =: it creates a recursively-expanded variable (see section The Two Flavors of Variables). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use.

define two-lines
echo foo
echo $(bar)
endef

The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a define become part of the variable's value (except for the final newline which precedes the endef and is not considered part of the value).

The previous example is functionally equivalent to this:

two-lines = echo foo; echo $(bar)

since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means make will invoke the shell twice, running an independent subshell for each line. See section Command Execution.

If you want variable definitions made with define to take precedence over command-line variable definitions, you can use the override directive together with define:

override define two-lines
foo
$(bar)
endef

See section The override Directive.

Variables from the Environment

Variables in make can come from the environment in which make is run. Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. But an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the -e flag is specified, then values from the environment override assignments in the makefile. See section Summary of Options. But this is not recommended practice.)

Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things. (But this is not totally reliable; some makefiles set CFLAGS explicitly and therefore are not affected by the value in the environment.)

When make is invoked recursively, variables defined in the outer invocation can be passed to inner invocations through the environment (see section Recursive Use of make). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the export directive to pass other variables. See section Communicating Variables to a Sub-make, for full details.

Other use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.

Such problems would be especially likely with the variable SHELL, which is normally present in the environment to specify the user's choice of interactive shell. It would be very undesirable for this choice to affect make. So make ignores the environment value of SHELL (except on MS-DOS and MS-Windows, where SHELL is usually not set. See section Command Execution.)

Target-specific Variable Values

Variable values in make are usually global; that is, they are the same regardless of where they are evaluated (unless they're reset, of course). One exception to that is automatic variables (see section Automatic Variables).

The other exception is target-specific variable values. This feature allows you to define different values for the same variable, based on the target that is currently building. As with automatic variables, these values are only available within the context of a target's command script (and in other target-specific assignments).

Set a target-specific variable value like this:

target ... : variable-assignment

or like this:

target ... : override variable-assignment

Multiple target values create a target-specific variable value for each member of the target list individually.

The variable-assignment can be any valid form of assignment: recursive (=), static (:=), appending (+=), or conditional (?=). All variables that appear within the variable-assignment are evaluated within the context of the target: thus, any previously-defined target-specific variable values will be in effect. Note that this variable is actually distinct from any "global" value: the two variables do not have to have the same flavor (recursive vs. static).

Target-specific variables have the same priority as any other makefile variable. Variables provided on the command-line (and in the environment if the -e option is in force) will take precedence. Specifying the override directive will allow the target-specific variable value to be preferred.

There is one more special feature of target-specific variables: when you define a target-specific variable, that variable value is also in effect for all prerequisites of this target (unless those prerequisites override it with their own target-specific variable value). So, for example, a statement like this:

prog : CFLAGS = -g
prog : prog.o foo.o bar.o

will set CFLAGS to -g in the command script for prog, but it will also set CFLAGS to -g in the command scripts that create prog.o, foo.o, and bar.o, and any command scripts which create their prerequisites.

Pattern-specific Variable Values

In addition to target-specific variable values (see section Target-specific Variable Values), GNU make supports pattern-specific variable values. In this form, a variable is defined for any target that matches the pattern specified. Variables defined in this way are searched after any target-specific variables defined explicitly for that target, and before target-specific variables defined for the parent target.

Set a pattern-specific variable value like this:

pattern ... : variable-assignment

or like this:

pattern ... : override variable-assignment

where pattern is a %-pattern. As with target-specific variable values, multiple pattern values create a pattern-specific variable value for each pattern individually. The variable-assignment can be any valid form of assignment. Any command-line variable setting will take precedence, unless override is specified.

For example:

%.o : CFLAGS = -O

will assign CFLAGS the value of -O for all targets matching the pattern %.o.

