An indented block is also used to show examples of job configuration:

This latter approach is clearer in the context where a comment is also specified using the hash character.

Note that some examples make use of sudo(8) to show that the command should be run as root: the example above could thus be written:

Indented lines starting with a hash (or "pound") character (' # ') are used to denote the shell prompt (followed by optional commands) for the root user. Command output is shown by indented lines not preceded by the hash character:

Indented lines starting with a dollar character (' $ ') are used to denote the shell prompt (followed by optional commands) for a non-privileged user. Command output is shown by indented lines not preceded by the dollar character:

An indented block will be used to denote user input and command output.

Throughout this document a fixed-width font such as this will be used to denote commands, brief command output and configuration stanzas.

The landscape has changed and Upstart is fully able to accommodate such changes since its design is clean, elegant and abstract. Crucially, Upstart is not tied to the rigid runlevel system. Indeed, Upstart has no knowledge of runlevels internally, but it supports them trivially with events. And since events are so abstract, they are highly flexible building blocks for higher-level constructs. Added to which, since Upstart's events are dynamic, the system can be configured for a myriad of possible system behaviours and failure modes, and will react to each accordingly.

It's a fact that systems and software are getting more complex. In the old days of Unix, runlevels encompassed every major mode of operation you might want your system to handle. However, expectations have changed. Nowadays, we expect systems to react to problems (and maybe even "self-heal" the simple ones).

Consider also the case for Cloud deployments, which of course run on servers. Here, boot speed is very important as it affects the time taken to deploy a new server instance. The faster you can deploy new services to handle an increasing workload the better the experience for your customers.

Some say that boot performance is not important on servers, possibly since the time taken to bring RAID arrays on-line is significantly longer than the time it takes to boot the operating system. However, nobody seriously wants their system to take longer than necessary to boot.

Upstart is used by Ubuntu for the Ubuntu Desktop and for Ubuntu Server (and as a result of this, it is also used in the Ubuntu Cloud ). Why is Upstart also compelling in a server environment?

Upstart was designed with performance in mind. It makes heavy use of the NIH Utility Library which is optimised for efficient early boot environments. Additionally, Upstart's design is lightweight, efficient and elegant. At its heart it is an event-based messaging system that has the ability to control and monitor processes. Upstart is designed to manage services running in parallel. It will only start services when the conditions they have specified are met.

In essence, Upstart is an event engine: it creates events, handles the consequences of those events being emitted and starts and stops processes as required. Like the best Unix software, it does this job very well. It is efficient, fast, flexible and reliable. It makes use of "helper" daemons (such as the upstart-udev-bridge and the upstart-socket-bridge ) to inject new types of events into the system and react to these events. This design is sensible and clean: the init system itself must not be compromised since if it fails, the kernel panics. Therefore, any functionality which is not considered "core" functionality is farmed out to other daemons.

Further, Upstart is being guided by the ultimate arbiter of hardware devices: the kernel.

Upstart emits "events" which services can register an interest in. When an event -- or combination of events -- is emitted that satisfies some service's requirements, Upstart will automatically start or stop that service. If multiple jobs have the same "start on" condition, Upstart will start those jobs ''in parallel''. To be explicit: Upstart handles starting the "dependent" services itself - this is not handled by the service file itself as it is with dependency-based systems.

Upstart is revolutionary in that it recognises, and was designed specifically for, a dynamic system. It handles asynchronicity by emitting events; this too is revolutionary.

It was necessary to outline the limitations of the SysV and dependency-based init systems to appreciate why Upstart is special...

The other problem with dependency-based init systems is that they require a dependency-solver which is often complex and not always optimal.

This summary is worth considering carefully as the distinction between the two types of system is subtle but important.

Note that the init system itself is not doing the heavy-lifting: that is left up to each service itself (!)

Each service generally does this using a brute-force approach of forcing all the dependencies to start.

The service (job configuration file) only needs to specify the conditions that allow the service to run, and the executable to run the service itself.

This can be summarised as:

What you really want is a system that detects such asynchronous events and when the conditions are right for a service to run, the service is started.

Have a daemon that hangs around polling for new hardware being plugged in.

Corresponds to an inability to handle this scenario.

However, consider how such a system would approach the problem of dealing with a user who plugs in an external monitor. Maybe we'd like our system to display some sort of configuration dialogue so the user can choose how they want to use their new monitor in combination with their existing laptop display. This can only be "hacked" with a dependency-based init system since you do not know when the new screen will be plugged in. So, your choices are either:

For example, if a dependency-based init system wished to start, say, MySQL , it would first start all the dependent services that MySQL needed. This sounds perfectly reasonable.

The main problem with dependency-based init systems is that they approach the problem from the "wrong direction". Again, this is due to their not recognising the dynamic nature of modern Linux systems.

The recognition that services often need to make use of other services is an important improvement over SystemV init systems. It places a bigger responsibility on the init system itself and reduces the complexity and work that needs to be performed by individual service files.

The most difficult and time-consuming operation these services perform is that of handling dependent daemons. The LSB specifies helper utilities that these services can make use of, but arguably each service shouldn't need to be handling this activity themselves: the init system itself should do it on behalf of the services it manages.

Most service files are fairly formulaic. For example, they might:

Modern Linux systems can deal with new hardware devices being added and removed dynamically ("hot-plug"). The traditional SysV init system itself is incapable of handling such a dynamically changing system.

However, the world has now moved on. From an Ubuntu perspective, a significant proportion of users run the desktop edition on portable devices where they may reboot multiple times a day.

In the days of colossal Unix systems with hundreds of concurrent users, where reboots were rare, the traditional SysV approach was perfect. If hardware needed replacing, a system shutdown was scheduled, the shutdown performed, the new hardware was installed and the system was brought back on-line.

A common "hack" used by Administrators is to circumvent the serialisation by running their service in the background, such that some degree of parallelism is possible. The fact that this hack is required and is common on such systems demonstrates clearly the flaw in that system.

It was designed to be simple and efficient for Administrators to manage. However, this model does not make full use of modern system resources, particularly once it is recognised that multiple services can often be run simultaneously.

The traditional sequential boot system was appropriate for the time it was invented, but by modern standards it is "slow" in the sense that it makes no use of parallelism.

This is achieved by init running the scripts pointed to by the symbolic links in sequence. The relative order in which init invokes these scripts is determined by a numeric element in the name: lower numbered services run before higher numbered services.

Creating service files is easy with SystemV init since they are simply shell scripts. To enable/disable a service in a particular runlevel, you only need to create/remove a symbolic link in a particular directory or set of directories.

To understand why Upstart was written and why its revolutionary design was chosen, it is necessary to consider these two classes of init system.

Upstart was created due to fundamental limitations in existing systems. Those systems can be categorized into two types:

To help ensure reliability and avoid regressions, Upstart and the NIH Utility Library both come with comprehensive test suites. See Unit Tests for further information.

Upstart is written using the NIH Utility Library (" libnih "). This is a very small, efficient and safe library of generic routines. It is designed for applications that run early in the boot sequence ("plumbing"). Reliability and safety is critically important for an init daemon since:

The " init " or "system initialisation" process on Unix and Linux systems has process ID (PID) " 1 ". That is to say, it is the first process to start when the system boots (ignoring the initrd/initramfs). As the quote shows, Upstart is an " init " replacement for the traditional Unix "System V" " init " system. Upstart provides the same facilities as the traditional " init " system, but surpasses it in many ways.

Traditionally, the default runlevel was encoded in file /etc/inittab . However, with Upstart, this file is no longer used (it is supported by Upstart, but its use is deprecated).

If you want to change the default runlevel for a single boot, rather than making the change permanent by modifying the rc-sysinit.conf file, simply append the variable to the kernel command line:
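For example, to boot once into single-user mode you might edit the boot entry in the Grub menu and append the variable to the kernel line (the kernel image and root device shown here are purely illustrative):

```
linux /boot/vmlinuz root=/dev/sda1 ro DEFAULT_RUNLEVEL=1
```

The change applies only to that boot; the value in rc-sysinit.conf is used again on the next boot.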

To change the default runlevel the system will boot into, modify the variable DEFAULT_RUNLEVEL in file /etc/init/rc-sysinit.conf . For example, to make the system boot by default to single user mode, set:
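As a sketch, assuming single-user mode corresponds to runlevel " 1 " on your system, the stanza in /etc/init/rc-sysinit.conf would become:

```
env DEFAULT_RUNLEVEL=1
```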

To change runlevel immediately, use one of the commands below:
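For example, to move immediately to runlevel " 1 " (single-user mode), either of the following equivalent commands could be used as root:

```
# telinit 1
# init 1
```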

To display your current and previous runlevels separated by a space character, run the /sbin/runlevel command. Note that if this command is unable to determine the system runlevel, it may display simply " unknown ":
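For example, on a system currently in runlevel " 2 " that has not previously been in another runlevel, typical output would be (" N " here denotes "no previous runlevel"):

```
$ /sbin/runlevel
N 2
```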

There are also a few pseudo-runlevels:

A runlevel is a single-character name for a particular system configuration. Runlevels for Debian and Ubuntu systems are generally as follows :

Assuming that job " A " is already running, if the " foo " event is emitted, Upstart will always stop job " A " before starting job " B ".

Consider two jobs like this:
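A minimal sketch of two such jobs, showing only the conditions (any other stanzas are omitted for brevity):

```
# /etc/init/A.conf
stop on foo

# /etc/init/B.conf
start on foo
```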

Upstart guarantees that jobs which stop on a particular event are processed before jobs that start on the same event.

Here is an example from the system log showing what happened in more detail. First the entries relating to starting the job:

And here is an example from the system log (with annotations) showing what happened:

Now, we can watch the state transitions by viewing the system log.

We can see more clearly what happens when we run this job by increasing the log priority to debug (see Change the log-priority ):

Would Upstart be happy with this? Actually, yes it would! Upstart always handles stop on stanzas before handling start on stanzas. This means that this strange job would first be stopped (if it's currently running), then it would be started.

Answer: It is not possible to say, and indeed you should not make any assumptions about the order in which jobs with the same conditions run.

Question: If event event-A is emitted, which job will run first?

Assume you have three jobs like this:
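As an illustration (the job names and commands here are hypothetical), the three jobs might look like:

```
# /etc/init/job1.conf
start on event-A
exec /usr/bin/prog1

# /etc/init/job2.conf
start on event-A
exec /usr/bin/prog2

# /etc/init/job3.conf
start on event-A
exec /usr/bin/prog3
```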

For example on Ubuntu , these are documented in the upstart-events(7) man page, which is included within this document for convenience in appendix Ubuntu Well-Known Events (ubuntu-specific) .

That said, most systems which use Upstart provide a number of "well-known" events which you can rely upon.

As a general rule, you cannot rely upon the order in which events will be emitted. Your system is dynamic and Upstart responds to changes as-and-when they occur (for example hot-plug events).

Note: this information is also available in upstart-events(7) .

Any jobs whose start on (or stop on ) condition would be satisfied by this job being stopped are started (or stopped respectively).

When this event completes, the job is fully stopped.

The state is changed from post-stop to waiting .

If the post-stop stanza exists, the post-stop process is spawned.

The state is changed from killed to post-stop .

If the process is still running after the timeout, a SIGKILL signal is sent to the process which cannot be ignored and will forcibly stop the processes in the process group.

Upstart waits for up to kill timeout seconds (default 5 seconds) for the process to end.

The signal specified by the kill signal stanza is sent to the process group of the main process (such that all processes belonging to the job's main process are killed). By default this signal is SIGTERM .

Any jobs whose start on (or stop on ) condition would be satisfied by this job stopping are started (or stopped respectively).

If neither variable is set, the process in question failed to spawn (for example, because the specified command to run was not found).

Either EXIT_STATUS or EXIT_SIGNAL will be set, depending on whether the job exited itself ( EXIT_STATUS ) or was stopped as a result of a signal ( EXIT_SIGNAL ).

The name of the script section that resulted in the failure. This variable is not set if RESULT=ok . If set, the variable will have one of the following values:

This variable will have the value " ok " if the job exited normally or " failed " if the job exited due to failure. Note that Upstart's view of success and failure can be modified using the normal exit stanza.

The name of the instance of the job this event refers to. This will be empty for single-instance jobs (those jobs that have not specified the instance stanza).

The name of the job this event refers to.

The stopping event has a number of associated environment variables:

The state is changed from pre-stop to stopping .

If the pre-stop stanza exists, the pre-stop process is spawned.

The state is changed from running to pre-stop .

The goal is changed from start to stop indicating the job is attempting to stop.

Assuming the job is fully running, it will have a goal of start and a state of running (shown as start/running by the initctl list and initctl status commands).

Any jobs whose start on (or stop on ) condition would be satisfied by this job being started are started (or stopped respectively).

For services , when this event completes the main process will now be fully running. If the job refers to a task , it will now have completed (successfully or otherwise).

The state is changed from post-start to running .

If the post-start stanza exists, the post-start process is spawned.

The state is changed from spawned to post-start .

Upstart then ascertains the final PID for the job which may be a descendant of the immediate child process if expect fork or expect daemon has been specified.

The state is changed from pre-start to spawned .

Assuming the pre-start did not fail and did not call " stop ", the main process is spawned.

If the pre-start process fails, the goal is changed from start to stop , and the stopping(7) and stopped(7) events are emitted with appropriate variables set denoting the error.

If the pre-start stanza exists, the pre-start process is spawned.

The state is changed from starting to pre-start .

Any jobs whose start on (or stop on ) condition would be satisfied by this job starting are started (or stopped respectively).

The starting(7) event is emitted denoting the job is "about to start".

The state is changed from waiting to starting .

The goal is changed from stop to start indicating the job is attempting to start.

Initially the job is "at rest" with a goal of stop and a state of waiting (shown as stop/waiting by the initctl list and initctl status commands).

Although Upstart does use states internally (and these are exposed via the list and status commands in initctl(8) ), events are the way that job configuration files specify the desired behaviour of jobs: starting(7) , started(7) , stopping(7) , stopped(7) are events, not states. These events are emitted "just prior" to the particular transition occurring. For example, the starting(7) event is emitted just before the job associated with this event is actually queued for start by Upstart.

The canonical examples of Hooks are the two job events starting(7) and stopping(7) , emitted by Upstart to indicate that a job is about to start and about to stop respectively.

Hooks are therefore used to flag to all interested parties that something is about to happen.

You could start the myapp job and check if the "method" worked as follows:
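A sketch of such a check, assuming myapp is a task (so that the start command blocks until it has completed) and reporting success via the command's exit status:

```
# start myapp && echo "method succeeded"
```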

Assuming we have a job configuration file /etc/init/myapp.conf like this:
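A minimal hypothetical sketch of such a file, defining a task that runs when the " mymethod " event is emitted (the command shown is illustrative only):

```
# /etc/init/myapp.conf
task
start on mymethod
exec /usr/bin/do-some-work
```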

This is exactly like a Signal Event , except the event is being emitted synchronously such that the emitter has to wait until the initctl command completes. Once the initctl command has completed, there are two possible outcomes for the task that starts on Event mymethod :

A Method Event is a blocking (or synchronous) event which is usually coupled with a task . It acts like a method or function call in programming languages in that the caller is requesting that some work be done. The caller waits for the work to be done, and if problems were encountered, it expects to be informed of this fact.

The non-blocking behaviour directly affects the emitter by allowing it to continue processing without having to wait for any jobs which make use of the event. Jobs which make use of the event (via start on or stop on ) are also affected, as they're unable to stop, delay, or in any other way "hold up" the operation of the emitter.

Signal Events are created using the --no-wait option to the initctl emit command like this:
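For example, to emit a hypothetical event called " myevent " without waiting for any jobs that use it:

```
# initctl emit --no-wait myevent
```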

A Signal Event is a non-blocking (or asynchronous) event. Emitting an event of this type returns immediately, allowing the caller to continue. Quoting from :

Upstart provides three different types of Events.

To help reinforce the difference, consider how Upstart itself starts: See the Startup Process .

These events are as follows:

Jobs are often started or stopped as a result of other jobs starting or stopping. Upstart has a special set of events that it emits to announce these job state transitions. You'll probably notice that these events have the same names as some of the job states described in Job States , however it's important to appreciate that these are not describing the same thing. Job states are not events, and events are not job states. See Events, not States for details.

Note also that an event name with the same name as a job is allowed.

Note that some events are "special". See the upstart-events(7) manual page for a list.

Events can be created by an administrator at any time using:
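For example, to emit a hypothetical event named " myevent ", optionally passing environment variables for interested jobs:

```
# initctl emit myevent MYVAR=myvalue
```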

If there are no jobs which have registered an interest in an event in either their start on or stop on conditions, the event has no effect on the system.

Events are emitted (created and then broadcast) to the entire Upstart system. Note that it is not possible to stop any other job or event from seeing an event when it is emitted.

A notification is sent by Upstart to all interested parties (either jobs or other events). Events can generally be thought of as " signals ", " methods ", or " hooks " , depending on how they are emitted and/or consumed.

The initctl list command above will now list jobs in the user's session specified by the $UPSTART_SESSION environment variable.

If you have multiple sessions running for a user, or have started a Session Init from a System Job as shown in the example above, it is possible to "join" the appropriate session by simply setting the $UPSTART_SESSION environment variable.

Note that this behaviour is Session Init-specific: without --user , the system Upstart would read Job Configuration Files from the /etc/bob/ directory only.

Now, the Session Init will only read Job Configuration Files from /etc/james/ and /etc/bob/ .

Note that it is possible to specify that only certain Job Configuration File directories are read for a Session Init by specifying the --confdir option multiple times. For example:
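For example, to have a Session Init read jobs only from /etc/james/ and /etc/bob/ , it might be invoked as:

```
$ init --user --confdir /etc/james --confdir /etc/bob
```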

The session-init-setup job will start when the system is in a suitable state (disks mounted writeable and networking up). That job will start the session-init instance job which will start the actual Session Init (which will read Job Configuration Files from the usual locations for a Session Init ).

Create two System Jobs similar to the following...

However, what if you want to use a Session Init on a server? This is not fully supported right now, but can be achieved as follows.

As of Ubuntu Saucy Salamander (13.10), a Session Init is used to manage the default graphical user session.

The advent of Session Inits removes all need for User Jobs . These continue to be supported since the Session Init still reads the job configuration files from the User Job directory, but that directory is deprecated. See Session Job for further details.

To run a Session Init, simply arrange for the first process that starts a session to be run as " init --user ". As when running as PID 1, the Session Init will emit the " startup " event that jobs can react to. All jobs that are managed by a Session Init have their parent set to the Session Init, not the system init. This is because a Session Init process is a true "sub-init". Jobs are loaded from potentially multiple directories. See Session Job for details.

But why would you want to run another instance of Upstart? Well, due to its elegant design which assumes a dynamic system, it is perfectly suited to managing a user's session. Traditionally, this job has been handled by applications such as " gnome-session ", but by moving to an Upstart-based design a lot of benefits come "for free":

As of Upstart v1.7, Upstart has the ability to run as a non-PID 1 process (see upstart-user-sessions-spec for full details).

As shown, these are all examples of Abstract Job configuration files.

Therefore, some examples of minimal job configuration files are:
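For instance, a file containing nothing but a single condition stanza is valid; such a job is abstract since it specifies no process to run (the filename and condition here are illustrative):

```
# /etc/init/minimal.conf
start on startup
```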

What is the minimum content of a job configuration file? Interestingly enough, to be valid a job configuration file:

If any job instances are running at system shutdown time, Upstart will stop them.

If a job has no start on stanza, it can only be started manually by an Administrator running either of:
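For a hypothetical job named " myjob ", either of the following is sufficient ( start(8) is simply a convenient alias for initctl start ):

```
# start myjob
# initctl start myjob
```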

However, if such a job is not stopped, it may be stopped either by another job, or some other facility . Worst case, if nothing else stops it, all processes will obviously be killed when the system is powered off.

A job does not necessarily need a stop on stanza. If it lacks one, any running instances can still be stopped by an Administrator running either of:
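Again for a hypothetical job named " myjob ", either of:

```
# stop myjob
# initctl stop myjob
```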

Upstart will first read $HOME/.init/foo.conf , and then apply any changes in $HOME/.config/upstart/foo.override .

Only the first, $HOME/.init/foo.conf will be used. Whereas if the following files exist:

Upstart resolves any name collisions by simply accepting the first valid job (or override file ) that it finds. For example, if the following two files exist:

The name of each job is taken to be the basename when any of the directory names above have been removed. For example, if a job configuration file exists as $HOME/.config/upstart/hello/world.conf , its name will be " hello/world " whereas if a job configuration file exists as /usr/share/upstart/sessions/foo/bar.conf , its name will be " foo/bar ".

Unlike when Upstart runs as PID 1, a Session Init can read its Job Configuration files from multiple directories. The list of directories jobs are read from is as follows (in order):

Session Jobs are analogous to the old User Jobs . Unlike the old User Jobs, Session Jobs are not managed by Upstart running as PID 1 - they are managed by the user's own Session Init .

The Upstream Upstart 1.3 distribution already includes a " Upstart.conf " file containing the required changes.

To enable user jobs, the administrator must modify the D-Bus configuration file " Upstart.conf " to allow non-root users access to all the Upstart D-Bus methods and properties. On an Ubuntu system the file to modify is:

User jobs cannot currently take advantage of job logging. If a user job does specify console log , it is considered to have specified console none . Logging of user jobs is planned for the next release of Upstart.

Stanzas which manipulate resource limits (such as limit , nice , and oom ) may cause a job to fail to start should the value provided to such a stanza attempt to exceed the maximum value the user's privilege level allows.

Controlling user jobs is the same as for system jobs: use initctl , start , stop , et cetera.

Currently, a user job cannot be created with the same name as a system job: the system job will take precedence.

The syntax for such jobs is identical to that of system jobs.

This feature is not currently enabled in Ubuntu (up to and including 11.10 ("Oneiric Ocelot")).

Upstart 1.3 introduced user jobs, allowing non-privileged users to create jobs by placing job configuration files in the following directory:

This directory can be overridden by specifying the --confdir=<directory> option to the init daemon, however this is a specialist option which users should not need to use.

All system jobs by default live in the following directory:

Note that it is common to refer to a Job configuration file as a "job", although technically a job is a running instance of a Job configuration file.

Job configuration files can exist in two types of location, depending on whether they are a System Job or a User Job .

Where " <name> " should reflect the application being run or the service being provided.

A Job is defined in a Job Configuration File (or more simply a conf file ) which is a plain text file containing one or more stanzas . Job configuration files are named:

Session Jobs are different. They too can use env and export , but they already inherit the environment of the Session Init that is supervising them. However, further to that, Session Jobs can also influence the environment of the processes that comprise both a single job and all subsequent jobs. See the "env" commands in the initctl Commands Summary for details.

If your system job needs further variables to be set, you can use the env and export stanzas.
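A sketch showing both stanzas (the variable name and value are hypothetical): env defines an additional variable in the job's environment, while export makes a job variable visible in the events emitted on behalf of the job:

```
env MYAPP_CONF=/etc/myapp.conf
export MYAPP_CONF
```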

Upstart itself will also potentially set some special variables the job can use. See Standard Environment Variables for further details.

When Upstart runs a job, it provides it with a very restrictive environment which contains just two system variables:

State transitions diagram for versions of Upstart up to and including version 1.12.1 (green lines represent goal=start , red lines represent goal=stop ):

Note that jobs may change state so quickly that you may not be able to observe all the values above in the initctl output. However, you will see the transitions if you raise the log-priority to debug or info . See initctl log-priority for details.

For example, if the job is currently in state starting , and its goal is start , it will then move to the pre-start state.

The table below shows all possible Job States and the legal transitions between them. States are exposed to users via the status field in the output of the initctl status command.

There is one other type of job which has no script sections or exec stanzas. Such abstract jobs can still be started and stopped, but will have no corresponding child process (PID). In fact, starting such a job will result in it "running" perpetually if not stopped by an Administrator. Abstract jobs exist only within Upstart itself but can be very useful. See for example:

Examples of Service Jobs are entities such as databases, webservers or ftp servers.

A Service Job is a long-running (or daemon(3) ) process. It is the opposite of a Task Job since a Service Job might never end of its own accord.

In this book Task Jobs are often referred to as tasks .

For example, deleting a file could be a Task Job since the command starts, deletes the file in question (which might take some time if the file is huge) and then the delete command ends.

A Task Job is one which runs a short-running process, that is, a program which might still take a long time to run, but which has a definite lifetime and end state.

A " unit of work " - generally either a " Task " or a " Service ". Jobs are defined in a Job configuration file .

The main concepts in Upstart are "events" and "jobs". Understanding the difference between the two is crucial.

This is a new phase introduced in Ubuntu 11.10 that borrows an idea from Google's Chrome OS. A new job called failsafe has been introduced that checks to ensure the system has reached a particular state. If the expected state is not attained, the job reboots the system automatically.

Ubuntu provides a recovery mode in case your system experiences problems. This is handled by the friendly-recovery package. If you select a " recovery mode " option on the Grub menu, the initramfs passes a flag to Upstart which ensures that the /etc/init/friendly-recovery.conf Upstart job is the first job run after Upstart starts. As a result, this job has full control over the system and provides a friendly menu that allows users to check disks with fsck(8) , repair the package database and so on.

When booting direct into single-user mode, the runlevel command will show:

This script will kill any remaining processes not already stopped (including Upstart processes).

One of the scripts run is /etc/init.d/sendsigs .

The SystemV system will then invoke the necessary scripts in /etc/rc6.d/ to stop SystemV services.

This job calls /etc/init.d/rc passing it the new runlevel (" 6 ").

Assuming the current runlevel is " 2 ", whichever command is run above will cause Upstart to emit the runlevel(7) event like this:

The following steps will now be taken:

Run the shutdown(8) command specifying the " -r " option, for example:
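For example:

```
$ sudo shutdown -r now
```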

Click "Restart..." (or equivalent) in your graphical environment (for example Gnome)

To initiate a reboot, perform one of the following actions:

This script will kill any remaining processes not already stopped (including Upstart processes).

One of the scripts run is /etc/init.d/sendsigs .

The SystemV system will then invoke the necessary scripts in /etc/rc0.d/ to stop SystemV services.

This job calls /etc/init.d/rc passing it the new runlevel (" 0 ").

Assuming the current runlevel is " 2 ", either of the actions above will cause Upstart to emit the runlevel(7) event like this:

The following steps will now be taken:

Click "Shut Down..." (or equivalent) in your graphical environment (for example Gnome)

To initiate a shutdown, perform one of the following actions:

Ubuntu currently employs a hybrid system where core services are handled by Upstart, but additional services can be run in the legacy SystemV mode. This may seem odd, but consider that there are thousands of packages available in Ubuntu via the Universe and Multiverse repositories and hundreds of services. To avoid having to change every package to work with Upstart, Upstart allows packages to utilize their existing SystemV (and thus Debian-compatible) scripts.

Upstart never stops a job with no stop on condition.

Upstart will "die" when the system is powered off, but if it ever exits, that is a bug.

There are some important points related to system shutdown:

The runlevel(7) event causes many other Upstart jobs to start, including /etc/init/rc.conf which starts the legacy SystemV init system.

See Runlevels for further information on runlevels.

Note that this is all the telinit command does – it runs no commands itself to change runlevel!

The rc-sysinit job calls the telinit command, passing it the runlevel to move to:

Since the start on condition for the rc-sysinit job is:
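On Ubuntu this condition reads approximately as follows (the exact events vary between releases):

```
start on (filesystem and net-device-up IFACE=lo)
```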

After the last filesystem is mounted, mountall(8) will emit the filesystem event.

The upstart-udev-bridge job will at some point emit the " net-device-up IFACE=lo " event signifying the loopback network interface (for example, 127.0.0.1 for IPv4) is available.

The udev job causes the upstart-udev-bridge job to start.

The virtual-filesystems(7) event causes the udev job to start.

These include local-filesystems(7) , virtual-filesystems(7) and all-swaps(7) . See upstart-events(7) for further details.

The most notable of these is the mountall job which mounts your disks and filesystems.

init(8) runs a small number of jobs which specify the startup(7) event in their start on condition.

This event triggers the rest of the system to initialize .

Note that in this section we assume the default runlevel is " 2 ". See Changing the Default Runlevel for further details.

At boot, after the initramfs system has been run (for setting up RAID, unlocking encrypted file system volumes, et cetera), Upstart will be given control. The initramfs environment will exec(3) /sbin/init (this is the main Upstart binary) and cause it to run as PID 1.

To obtain a better understanding of how jobs and events relate at startup and shutdown time, see Visualising Jobs and Events .

The information in this section relates to an Ubuntu system.

This stanza may contain version information about the job, such as revision control or package version number. It is not used or interpreted by init(8) in any way.

If a job specifies the usage stanza, attempting to start the job without specifying the correct variables will display the usage statement. Additionally, the usage can be queried using initctl usage .

Brief message explaining how to start the job in question. Most useful for instance jobs which require environment variable parameters to be specified before they can be started.

Set the file mode creation mask for the process. <value> should be an octal value for the mask. See umask(2) for more details.
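For example, to have files created by the job's processes default to mode 0644 (and directories to 0755):

```
umask 022
```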

If we did not use " task " in the above example, queue-worker would be allowed to start as soon as we executed /path/to/pre-warm-memcache , which means it might potentially start before the cache was warmed.

We could also accomplish this without mentioning the pre-warm in the queue-worker job by doing this:
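A sketch of this approach: the pre-warm becomes a task started by the queue worker's own starting(7) event, which blocks queue-worker from starting until the task has finished (job and path names as used in this section):

```
# /etc/init/pre-warm-memcache.conf
start on starting queue-worker
task
exec /path/to/pre-warm-memcache
```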

The key concept demonstrated above is that we " start on stopped pre-warm-memcache ". This means that we don't start until the task has completed. If we were to use started instead of stopped , we would start our queue worker as soon as /path/to/pre-warm-memcache had started running.

So you can have another job that starts your background queue worker once the local memcached is pre-warmed:
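A sketch of such a job (the job name and paths follow those used in this section):

```
# /etc/init/queue-worker.conf
start on stopped pre-warm-memcache
exec /path/to/queue-worker
```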

Typically, task is for something that you just want to run and finish completely when a certain event happens.

With task, the events that lead to this job starting will be blocked until the job has completely transitioned back to stopped. This means that the job has run up to the previously mentioned started(7) event, and has also completed its post-stop , and emitted its stopped(7) event.

Without the 'task' keyword, the events that cause the job to start will be unblocked as soon as the job is started. This means the job has emitted a starting(7) event, run its pre-start , begun its script/exec, and post-start , and emitted its started(7) event.

In concept, a task is just a short lived job. In practice, this is accomplished by changing how the transition from a goal of "stop" to "start" is handled.

Note that this also will stop when other-service is restarted, so you will generally want to couple this with the start on condition:
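For example, assuming a hypothetical dependency on a job called " other-service ":

```
start on started other-service
stop on stopping other-service
```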

Or if a generic job is available such as network-services

See start on for further syntax details.

Like the start on stanza, stop on expects a token to follow on the same line:
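For example, a common idiom for services that should stop when leaving the normal runlevels:

```
stop on runlevel [016]
```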

This stanza defines the set of Events that will cause the Job to be automatically stopped if it is already running.

Example: your web app needs memcached to be started before apache :
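One way to express this is a fragment like the following in the dependent (apache) job's configuration file:

```
# fragment of the apache job configuration
start on started memcached
```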

We use the started(7) event so that anything that must be started before all network services can do " start on starting network-services ".

The network-services job is a generic job that most network services should follow in releases where it is available. This allows the system administrator and/or the distribution maintainers to change the general startup of services that don't need any special case start on criteria.

In addition, services may be aggregated around an abstract job, such as network-services :
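A service aggregating around such an abstract job would then use, for example:

```
start on started network-services
stop on stopping network-services
```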

However if your service requires that a non-loopback interface is configured for some reason (i.e., it will not start without broadcasting capabilities), then explicitly saying "once a non loopback device has come up" can help.

The difference in whether to use the more generic 'runlevel' or the more explicit local-filesystems(7) and net-device-up events should be guided by your job's behaviour. If your service will come up without a valid network interface (for instance, it binds to 0.0.0.0 , or uses setsockopt(2) SO_FREEBIND ), then the runlevel event is preferable, as your service will start a bit earlier and start in parallel with other services.

If you are just writing an upstart job that needs to start the service after the basic facilities are up, either of these will work:
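For example:

```
start on runlevel [2345]
```

or the common Ubuntu idiom for "basic facilities are up":

```
start on (local-filesystems and net-device-up IFACE!=lo)
```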

See Really understanding start on and stop on for further details.

If no environment variables are specified via KEY to restrict the match, the condition will match all instances of the specified event.

Note that the start on stanza expects a token to follow on the same line. Thus:
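For example, the following is invalid since no token follows " start on " on the same line (the event name " foo " is hypothetical):

```
# INVALID: no token follows "start on" on the same line
start on
started foo
```

whereas the following is valid:

```
start on started foo
```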

Note that if the job is already running and is not an instance job, no further action will be taken when the start on condition becomes true again.

Negation is permitted by using " != " between the KEY and VALUE .

VALUE may contain wildcard matches and globs as permitted by fnmatch(3) and may expand the value of any variable defined with the env stanza.

You may also match on the environment variables contained within the event by specifying the KEY and expected VALUE . If you know the order in which the variables are given to the event you may omit the KEY .

Each event EVENT is given by its name. Multiple events are permitted using the operators " and " and " or " and complex expressions may be performed with parentheses (within which line breaks are permitted).

This stanza defines the set of Events that will cause the Job to be automatically started.

Although the username is not logged, it is clear there is a problem with the setuid stanza for the specified foo job.

For example, if job foo specifies an invalid setuid username:

Note that if you specify an invalid username in the setuid stanza, Upstart will log an error if it is in Debug Mode .

Note that System jobs using the setuid stanza are still system jobs, and can not be controlled by an unprivileged user, even if the setuid stanza specifies that user.

If this stanza is unspecified, the job will run as root in the case of system jobs, and as the user in the case of User Jobs.

Note that all processes ( pre-start , post-stop , et cetera) will be run as the user specified.

Changes to the user <username> before running the job's process.

If this stanza is unspecified, the primary group of the user specified in the setuid block is used. If both stanzas are unspecified, the job will run with its group ID set to 0 in the case of system jobs, and as the primary group of the user in the case of User Jobs.

Note that all processes ( pre-start , post-stop , et cetera) will be run with the group specified.

Changes to the group <groupname> before running the job's process.

Allows the specification of a multi-line block of shell code to be executed. Block is terminated by end script .
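For example (script sections are run by " /bin/sh -e ", so any failing command terminates the block):

```
script
    # one or more lines of shell commands
    echo "hello from job $UPSTART_JOB"
end script
```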

If the job has been respawned up to its respawn limit, the variable " $PROCESS " will be set to " respawn " to denote that the respawn limit was reached. See stopped(7) for further details.

Note that respawn only applies to automatic respawns and not the restart(8) command.

Specifying either COUNT or INTERVAL as 0 (zero) implies unlimited .

To have the job respawn indefinitely, specify an argument of " unlimited ". However, care should be taken using this option: does your service really stop that frequently? Should it?
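For example:

```
respawn
respawn limit unlimited
```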

Respawning is subject to a limit. If the job is respawned more than COUNT times in INTERVAL seconds, it will be considered to have deeper problems and will be stopped. Default COUNT is 10 . Default INTERVAL is 5 seconds.

Yes, this is different to a plain respawn : specifying respawn limit does not imply respawn .

Further note that if the job does not specify the respawn limit stanza as well as the respawn stanza, the job will have the default respawn limit applied (see respawn limit ).

Note that if a job is respawned, the variable " $PROCESS " will be set to the name of the job process that failed (for example " pre-start " or " main "). See stopped(7) for further details.

However, the appropriate way to handle that situation is a pre-stop which runs this shutdown command. Since the job's goal will already be 'stop' when a pre-stop is run, you can shutdown the process through any means, and the process won't be re-spawned (even with the respawn stanza).

One situation where it may seem like respawn should be avoided is when a daemon does not respond well to SIGTERM for stopping it. You may believe that you need to send the service its shutdown command without Upstart being involved, and therefore you don't want to use respawn because Upstart will keep trying to start your service back up when you told it to shut down.

Likewise, for tasks, (see below), respawning means that you want that task to be retried until it exits with zero (0) as its exit code.

There are a number of reasons why you may or may not want to use this. For most traditional network services this makes good sense. If the tracked process exits for some reason that wasn't the administrator's intent, you probably want to start it back up again.

With this stanza, whenever the main script/exec exits, without the goal of the job having been changed to stop , the job will be started again. This includes running pre-start , post-start and post-stop . Note that pre-stop will not be run.

Without this stanza, a job that exits quietly transitions into the stop/waiting state, no matter how it exited.

If you are creating a new Job Configuration File, do not specify the respawn stanza until you are fully satisfied you have specified the expect stanza correctly. If you do, you will find the behaviour potentially very confusing.

The signal should be specified as a full name (for example SIGHUP ) or a partial name (for example HUP ). Note that it is possible to specify the signal as a number (for example 1 ) although this should be avoided if at all possible since signal numbers may differ between systems.

Specifies the signal that Upstart will send to the job's main process when the job needs to be reloaded (the default is SIGHUP ).

You can also use this stanza to cancel the stop, in a similar fashion to the way one can cancel the start in the pre-start .

Stopping a job involves sending SIGTERM to it. If there is anything that needs to be done before SIGTERM , do it here. Arguably, services should handle SIGTERM very gracefully, so this shouldn't be necessary. However, if the service takes more than kill timeout seconds (default, 5 seconds) then it will be sent SIGKILL , so if there is anything critical, like a flush to disk, and raising kill timeout is not an option, pre-stop is not a bad place to do it.

The pre-stop stanza will be executed before the job's stopping(7) event is emitted and before the main process is killed .

See Example of console output for another example where you can display an error message if the job detects it should not be started.

Or something like this:

Note that the example above assumes your application's configuration file is shell-compatible (in other words it contains name="value" entries). If this is not the case, just use grep(1) or similar:

This is safe since the job will not start (technically it won't progress beyond the pre-start stage) if:

On Ubuntu , the common pre-start idiom is to use /etc/default/myapp , so the example would become:
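A sketch, assuming the defaults file sets a hypothetical " ENABLED " variable; note the bare " stop " command:

```
pre-start script
    [ -r /etc/default/myapp ] && . /etc/default/myapp
    [ "$ENABLED" = "yes" ] || { stop; exit 0; }
end script
```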

Note that the " stop " command did not receive any arguments. This is a shortcut available to jobs where the " stop " command will look at the current environment and determine that you mean to stop the current job.

Another possibility is to cancel the start of the job for some reason. One good reason is that it's clear from the system configuration that a service is not needed:
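A sketch of such a check (the flag file is hypothetical); the bare " stop " command cancels the start:

```
pre-start script
    if [ -f /etc/myapp/disabled ]; then
        stop
        exit 0
    fi
end script
```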

Use this stanza to prepare the environment for the job. Clearing out cache/tmp dirs is a good idea, but any heavy logic is discouraged, as Upstart job files should read like configuration files, not so much like complicated software.

Because this job is marked respawn , an exit status of 0 is "ok" and will not force a respawn (only exiting with a non- 0 status, or being killed by an unexpected signal, causes a respawn). This script stanza is used to start the optional daemon rpc.statd based on the defaults file. If NEED_STATD=no is set in /etc/default/nfs-common , the job runs this snippet of script and then exits with 0 as its return code; Upstart will not respawn it, but will simply see that it has stopped on its own and return it to the stopped state. If, however, rpc.statd had been run, the job would stay in the start/running state and be tracked normally.

If you need to do some scripting before starting the daemon, script works fine here. Here is one example of using a script stanza that may be non-obvious:

If it is possible, you'll want to run your daemon with a simple exec line. Something like this:
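Something along these lines, assuming a hypothetical daemon that supports a foreground mode:

```
exec /usr/sbin/mydaemon --foreground
```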

There are times where the cleanup done in pre-start is not enough. Ultimately, the cleanup should be done in both pre-start and post-stop , to ensure the service starts with a consistent environment and does not leave behind anything that it shouldn't.

Use this stanza when a delay (or some arbitrary condition) must be satisfied before an executed job is considered "started". An example is MySQL . After executing it, it may need to perform recovery operations before accepting network traffic. Rather than start dependent services, you can have a post-start like this:
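A sketch for the MySQL case, polling with mysqladmin(1) until the server answers:

```
post-start script
    while ! mysqladmin ping >/dev/null 2>&1; do
        sleep 1
    done
end script
```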

Script or process to run after the main process has been spawned, but before the started(7) event has been emitted.

The "adjustment" value provided to this stanza may be an integer value from -999 (very unlikely to be killed by the OOM killer) up to 1000 (very likely to be killed by the OOM killer). It may also be the special value never to have the job ignored by the OOM killer entirely (potentially dangerous unless you really trust the application in all possible system scenarios).

Normally the OOM killer regards all processes equally; this stanza advises the kernel to treat this job differently.

Linux has an "Out of Memory" (OOM) killer facility. This is a feature of the kernel that triggers when the system runs critically low on memory: the kernel automatically takes action, killing the process it considers most expendable, to avoid the problem impacting the system adversely.

For example, to consider exit codes 0 and 13 as success and also to consider the program to have completed successfully if it exits on signal SIGUSR1 and SIGWINCH , specify:
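```
normal exit 0 13 SIGUSR1 SIGWINCH
```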

You can even specify signals. A signal can be specified either as a full name (for example SIGTERM ) or a partial name (for example TERM ).

Used to change Upstart's idea of what a "normal" exit status is. Conventionally, processes exit with status 0 (zero) to denote success and non-zero to denote failure. If your application can exit with exit status 13 and you want Upstart to consider this as a normal (successful) exit, then you can specify:
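```
normal exit 13
```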

Change the job's scheduling priority from the default. See nice(1) .

This stanza will tell Upstart to ignore the start on / stop on stanzas. It is useful for keeping the logic and capability of a job on the system while not having it automatically start at boot-up.

For further details on the available limits see init(5) and getrlimit(2) .

If a user job specifies this stanza, it may fail to start should it specify a value greater than the user's privilege level allows.

For example, to allow a job to open any number of files, specify:
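```
limit nofile unlimited unlimited
```

The two values are the soft and hard limits respectively.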

Provides the ability to specify resource limits for a job.

The number of seconds Upstart will wait before killing a process. The default is 5 seconds.
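For example, to give a slow-stopping service 30 seconds before SIGKILL is sent:

```
kill timeout 30
```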

Note that if you are running an older version of Upstart without this feature, and you have an application which breaks with the normal conventions for shutdown signal, you can simulate it to some degree by using start-stop-daemon(8) with the --signal option:

The signal should be specified as a full name (for example SIGTERM ) or a partial name (for example TERM ). Note that it is possible to specify the signal as a number (for example 15 ) although this should be avoided if at all possible since signal numbers may differ between systems.

Specifies the stopping signal, SIGTERM by default, a job's main process will receive when stopping the running job.

However, since the job sets a null default value for this variable, when an Administrator starts the job, UPSTART_EVENTS will be set to a null value. This empty value is enough to make that instance unique (since there are no other instances with a null instance value!)

This bit of trickery relies upon the fact that Upstart will set the $UPSTART_EVENTS environment variable before starting this job as a result of its start on condition becoming true. In this case, Upstart would therefore set UPSTART_EVENTS='foo' .

And this will work even if there is already a running instance of the trickery job (assuming the existing instance was started automatically).

Now, an Administrator can start this job as follows:

Note that if you have a job which makes use of instance but which may need to be run manually by an administrator, it is possible to "cheat" and allow them to start the job without specifying an explicit instance value:
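A sketch of the trick (the start on event " foo " and the daemon path are illustrative):

```
# /etc/init/trickery.conf
env UPSTART_EVENTS=
instance $UPSTART_EVENTS
start on foo
exec /path/to/daemon
```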

Note further that if any worker fails to start or stop, this will fail the overall " workers " job. If you don't want this behaviour, use the " || true " trick:

Note that " workers.conf " has no main exec or script section - this "master" job will run (without a pid) for the duration that the slave or children (individual " worker ") job instances run:

Note that to obtain correct restart behaviour, you would need to do something like the following:

Finally, let's see the current state of our two job instances:

Attempting to start it without specifying a value for foo will fail:

If you attempt to start a job with the instance stanza, but forget to provide the required variables, you will get an error since Upstart cannot then guarantee uniqueness. For example, if you have a job configuration file foo.conf such as this:

You must include at least one variable and it must have a leading dollar sign ( $ ):

See Multiple Running Job Instances Without PID for another crazy real-life example.

The stanza isn't restricted to a single value. You can do silly things like the following if you wish:
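For example, combining several variables into one instance name (variable names hypothetical):

```
instance $TYPE-$NUMBER-$COLOUR
```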

In this way, Upstart will keep them all running with the specified arguments, and stop them if memcached is ever stopped.

Let's say that once memcached is up and running, we want to start a queue worker for each directory in /var/lib/queues :
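One way to sketch this with an instance job (job name and paths illustrative); each instance can then be started with, for example, " start queue-worker QUEUE=/var/lib/queues/mail ":

```
# /etc/init/queue-worker.conf
instance $QUEUE
start on started memcached
stop on stopping memcached
exec /path/to/queue-worker "$QUEUE"
```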

All unique instances of the foo job are now stopped.

That fails as Upstart needs to know which instance to stop and we didn't specify an instance value for the BAR instance variable. Rather than stopping each instance in turn, let's script it so that we can stop them all in one go:

We will start one more instance:

Good - Upstart is running two instances as expected. Notice the instance name in brackets after the job name in the initctl output above.

Okay. We should now have two instances running, but let us confirm that:

Oops! We tried to run another instance with the same instance name (well, technically, the same value of the BAR variable). Let's try again:

So, we now have one instance running. Let's start another:

Oops! We forgot to specify the particular value for the BAR variable which makes each instance unique. Let's try again:

So, let's start an instance of this job:

The job first sources an instance-specific configuration file (" myapp-${BAR} ") then displays a message. Note again that we're now using that instance variable $BAR .

Note that the entire job is the instance job: providing the instance stanza allows Upstart to make each running version of this job unique.

The example above defines an instance job by specifying the instance stanza followed by the name of a variable (note that you MUST specify the dollar sign (' $ ')).

Let us start with a simple example which we will call " foo.conf ":
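A sketch reconstructed from the description in this section (the configuration file path is illustrative):

```
# /etc/init/foo.conf
instance $BAR
script
    . "/etc/myapp/myapp-${BAR}"
    echo "myapp instance ${BAR} running"
end script
```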

Sometimes you want to run the same job, but with different arguments. The variable that defines the unique instance of this job is defined with instance .

Note that no leading dollar sign ( $ ) is specified.

Export variables previously set with env to all events that result from this job. See for example Job Lifecycle .

Find your job's PID using ps(1) . (If you're struggling to find it, remember that the parent PID will always be " 1 ").

Run the initctl status command for your job. You will see something like:

If you have misspecified the expect stanza by telling Upstart to expect fewer fork(2) calls than your application actually makes, Upstart will be unable to manage it since it will be looking at the wrong PID. The start command will start your job, but it will show unexpected output (the goal and state will be shown as stop/waiting ).

Re-run the initctl status command for your job. You will see something like:

You'll notice that the PID shown is actually correct since Upstart has tracked the initial PID.

Run the initctl status command for your job. You will see something like:

Interrupt the start command by using " CONTROL+c " (or sending the process the SIGINT signal).

The start command will "hang" if you have misspecified the expect stanza by telling Upstart to expect more fork(2) calls than your application actually makes.

The table below summarizes the behaviour resulting for every combination of expect stanza and number of fork(2) calls:

If the application you are attempting to create a Job Configuration File for does not document how many times it forks, you can run it with a tool such as strace(1) which will allow you to count the number of forks. For example:
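A sketch (the daemon name is illustrative; note that on Linux, fork(2) is typically implemented via clone):

```
$ strace -o /tmp/daemon.log -f -ff /usr/sbin/mydaemon
$ grep -c -e fork -e clone /tmp/daemon.log*
```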

Only then will Upstart consider the job to be running.

Specifies that the job's main process will raise the SIGSTOP signal to indicate that it is ready. init(8) will wait for this signal and then:

Upstart will expect the process executed to call fork(2) exactly twice.

Some daemons fork a new copy of themselves on SIGHUP , which means when the Upstart reload command is used, Upstart will lose track of this daemon. In this case, expect fork cannot be used. See Daemon Behaviour .

Upstart will expect the process executed to call fork(2) exactly once.

It's important to note that the " expect " stanza is thus being used for two different but complementary tasks:

A final point: the expect stanza only applies to exec and script stanzas: it has no effect on pre-start and post-start .

If your daemon has a "don't daemonize" or "run in the foreground" mode, then it's much simpler to use that and not run with fork following. One issue with that, though, is that Upstart will emit the started JOB=yourjob event as soon as it has executed your daemon, which may be before it has had time to listen for incoming connections or fully initialize.

The syntax is simple, but you do need to know how many times your service forks.

To allow Upstart to determine the final process ID for a job, it needs to know how many times that process will call fork(2) . Upstart itself cannot know the answer to this question since once a daemon is running, it could then fork a number of "worker" processes which could themselves fork any number of times. Upstart cannot be expected to know which PID is the "master" in this case, considering it does not know if worker processes will be created at all, let alone how many times, or how many times the process will fork initially. As such, it is necessary to tell Upstart which PID is the "master" or parent PID. This is achieved using the expect stanza.

In this case, Upstart must have a way to track it, so you can use expect fork , or expect daemon which allows Upstart to use ptrace(2) to "count forks".

If you do not specify the expect stanza, Upstart will track the life cycle of the first PID that it executes in the exec or script stanzas. However, most Unix services will "daemonize", meaning that they will create a new process (using fork(2) ) which is a child of the initial process. Often services will "double fork" to ensure they have no association whatsoever with the initial process. (Note that no services will fork more than twice initially since there is no additional benefit in doing so).

Upstart will keep track of the process ID that it thinks belongs to a job. If a job has specified the instance stanza, Upstart will track the PIDs for each unique instance of that job.

Stanza that allows the specification of a single-line command to run. Note that if this command-line contains any shell meta-characters, it will be passed through a shell prior to being executed. This ensures that shell redirection and variable expansion occur as expected.

Allows an environment variable to be set which is accessible in all script sections.

For example, upstart-udev-bridge can emit a large number of events. Rather than having to specify every possible event, since the form of the event names is consistent, a single emits stanza can be specified to cover all possible events:

Specifies the events the job configuration file generates (directly or indirectly via a child process). This stanza can be specified multiple times for each event emitted. This stanza can also use the following shell wildcard meta-characters to simplify the specification:

One line quoted description of Job Configuration File . For example:
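```
description "hypothetical myapp daemon"
```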

Note that the specified directory must have all the necessary system libraries for the process to be run, often including /bin/sh .

Runs the job's processes in a chroot(8) environment underneath the specified directory.

Runs the job's processes with a working directory in the specified directory instead of the root of the filesystem.

Identical to console output except that additionally it makes the job the owner of the console device. This means it will receive certain signals from the kernel when special key combinations such as Control-C are pressed.

Connects the job's standard input, standard output and standard error file descriptors to the console device.

If a User Job running in a pre-Upstart 1.7 environment specifies this stanza, Upstart will treat the job as if it had specified console none .

The log directory can be changed by specifying the --logdir <directory> command-line option.

Connects standard input to /dev/null . Standard output and standard error are connected to one end of a pseudo-terminal such that any job output is automatically logged to a file in directory /var/log/upstart/ for System Jobs and $XDG_CACHE_HOME/upstart/ (or $HOME/.cache/upstart/ if $XDG_CACHE_HOME is not set) for Session Jobs .

For all versions of Upstart prior to v1.4, the default value for console was console none . As of Upstart 1.4, the default value is console log . If you are using Upstart 1.4 or later and wish to retain the old default, boot with the --no-log command-line option. An alternative is to boot using the --default-console <value> option which allows the default console value for jobs to be specified. Using this option it is possible to set the default to none but still honour jobs that explicitly specify console log .

The job will only start once the manager is up and running. It will have a 50MB memory limit, be restricted to CPU ids 0 and 1, and have a 1MB/s write limit to the block device 8:16. The job will fail to start if the system has less than 50MB of RAM or fewer than 2 CPUs:

It is not an error if NAME already exists.

The stanza may be specified multiple times. The last occurrence will be used, except in the scenario where each occurrence specifies a different KEY, in which case all the keys and values will be applied.

If a KEY is specified, a VALUE must also be specified (even if it is simply an empty string).

If any argument contains space characters, it must be quoted.

The NAME argument may contain any valid variable and can also contain forward slashes to run the job processes in a sub-cgroup.

If the CONTROLLER is invalid, or the NAME cannot be created, or the KEY or VALUE are invalid, the job will fail to start.

No validation is performed on the specified values until the job is due to be started.

Note that this special variable cannot be specified with enclosing braces around the name.

If NAME is not specified or does not contain " $UPSTART_CGROUP ", the job processes will not be placed in an upstart-specific group.

This default cgroup for the job may be specified explicitly within a NAME using the special variable " $UPSTART_CGROUP ". This variable is not an environment variable and is only valid within the context of the cgroup stanza.

Any forward slashes in $UPSTART_JOB and $UPSTART_INSTANCE will be replaced with underscore (" _ ") characters.

... or if the job specifies the instance stanza the group will be the expanded value of:

If only the cgroup controller (such as memory , cpuset , blkio ) is specified, a job-specific cgroup will be created and the job processes placed in it. The form of this cgroup is:

This allows the job to specify the control group all job processes will run in and optionally specify a setting for the particular cgroup.

A new " cgroup " stanza is introduced that allows job processes to be run within the specified cgroup.

Upstart 1.13 supports cgroups with the aid of cgmanager (see cgmanager(8) ).

Quoted name (and maybe contact details) of author of this Job Configuration File .
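```
author "Jane Doe <jane.doe@example.com>"
```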

Load specified AppArmor Mandatory Access Control system profile into the kernel prior to starting the job. The main job process (as specified by exec or script ) will be confined to this profile.

This section lists a number of job configuration file stanzas, giving example usage for each. The reference for your specific version of Upstart will be available in the init(5) man page.

Under normal conditions, you should not need to specify any command-line options to Upstart. A number of these options were added specifically for testing Upstart itself and if used without due care can stop your system from booting (for example specifying --no-startup-event ). Therefore you should be extremely careful specifying any command-line options to Upstart unless you understand the implications of doing so.

The table below lists the command-line options accepted by the Upstart init daemon.

For the example above, the output would be:

For start on , stop on and emits stanzas, you can confirm Upstart's decision, you can use the initctl show-config command like this:

...since that is the last start on condition specified.

This job will have a start on condition of:

The way in which Upstart parses the job configuration files means that "the last entry wins". That is to say, every job configuration file must be syntactically correct, but if you had a file such as:

and does not specify the instance stanza, when job " foo " starts, the environment of the " bar " job will contain:

Note carefully the distinction between JOB and UPSTART_JOB . If a job " bar.conf " specifies a start on condition of:

Notes that some variables (those marked with ' * ' and ' † ') are only set when the job fails:

The following table lists the variables from the table above which are set when job events are emitted, and which are thus available from within a jobs environment.

The table below shows all variables set by Upstart itself. Note that variables prefixed by " UPSTART_ " are variables set within a jobs environment, whereas the remainder are set within an events environment (see the following table).

Similarly, the following will not work:

This will start the job in question when the " $FOO " event is emitted, not when the event " bar " is emitted:

Environment variables do not expand in start on or stop on conditions:

As shown, every script section receives the value of $var as bar , but if any script section changes the value, it only affects that particular script sections copy of the variable. To summarize:

This will generate output in your system log as follows (the timestamp and hostname have been removed, and the output formatted to make it clearer):

As another example of environment variables, consider this job configuration file :

However, using the technique above, it is possible to inject a variable from a user's environment into a job indirectly:

Note that a Job Configuration File does not have access to a user's environment variables, not even the superuser. This is not possible since all job processes created are children of init which does not have a user's environment.

Note that a variables value can always be overridden by specifying a new value on the command-line. For example:

If we now run the following command, both jobs A and B will run, causing B to write " value of foo is 'bar' " to the system log:

Further, we can pass environment variables defined in events to jobs using the env stanza and the export stanza. Assume we have two job configuration files, A.conf and B.conf :

Upstart allows you to set environment variables which will be accessible to the jobs whose job configuration files they are defined in. Environment variables are set using the env keyword.

Since this job has specified the runlevel event, it automatically gets access to the variables set by this event ( RUNLEVEL and PREVLEVEL ). However, note that these two variables are also exported. The reason for this is to allow other jobs which start on or stop on the rc job to make use of these variables (which were set by the runlevel event).

However, note that when the system moves to a new runlevel, Upstart will then immediately re-run the job at the new runlevel since the start on condition specifies that this job should be started in every runlevel.

This is just a safety measure. What it is saying is:

So, if the runlevel is currently " 2 " (full graphical multi-user under Ubuntu ), the RUNLEVEL variable will be set to RUNLEVEL=2 . The condition will thus evaluate to:

This admittedly does initially appear nonsensical. The way to read the statement above though is:

Thus, the stop on condition is saying:

The previous system runlevel (which may be set to an empty value).

The new "goal" runlevel the system is changing to.

This requires some explanation. The manual page for runlevel(7) explains that the runlevel event specifies two variables in the following order:

The rc job configuration file is well worth considering:

If you look at this job configuration file, you will see, as deduced:

But where does the RUNLEVEL environment variable come from? Well, variables are exported in a job configuration file to related jobs. Thus, the answer is The rc Job .

If we again add in the implicit variable it becomes clearer:

Looking at a slightly more complex real-life example:

This example shows that the fictitious job above would only be started when the mydb database server brings the foobar database on-line. Correspondingly, file /etc/init/mydb.conf would need to specify " export DBNAME " and be started like this:

Where <vars_to_match_event_on> is optional, but if specified comprises one or more variables.

Remember that started(7) is an event which Upstart emits automatically when the mysql job has started to run. The whole start on stanza can be summarized as:

The syntax above is actually a short-hand way of writing:

The start on stanza needs careful contemplation. Consider this example:

(Note: This section focuses on start on , but the information also applies to stop on unless explicitly specified).

See Run a Job When a User Logs in for an example.

As of D-Bus version 1.4.1-0ubuntu2 (in Ubuntu), you can have Upstart start a D-Bus service rather than D-Bus . This is useful because it is then possible to create Upstart jobs that start or stop when D-Bus services start.

11.1 List All Jobs

To list all jobs on the system along with their states, run:

    $ initctl list

See initctl.
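Building on the list output, a small shell filter can narrow it to running jobs. This is a hypothetical helper, not part of Upstart; it assumes the usual goal/state output format such as "myjob start/running, process 1234":

```shell
# Hypothetical helper: filter 'initctl list'-style output (read from
# stdin) down to the names of jobs whose current state is "running".
running_jobs() {
    awk -F'[ /,]' '$3 == "running" { print $1 }'
}
```

Usage would be "initctl list | running_jobs".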

11.2 List All Jobs With No stop on Condition

    # list all jobs (stopped and running instances), and compact down
    # to actual job names.
    initctl list | awk '{print $1}' | sort -u | while read job
    do
      # identify jobs with no "stop on"
      initctl show-config -e $job | grep -q "^ stop on" || echo "$job"
    done

11.3 List All Events That Jobs Are Interested In On Your System

Here is another example of how initctl show-config can be useful:

    initctl show-config -e | egrep -i "(start|stop) on" | awk '{print $3}' | sort -u

11.4 Create an Event

To create, or "emit", an event, use initctl(8) specifying the emit command. For example, to emit the hello event, you would run:

    # initctl emit hello

This event will be "broadcast" to all Upstart jobs. If you are creating a job configuration file for a new application, you probably do not need to do this though, since Upstart emits events on behalf of a job whenever the job changes state. A simple configuration file like that shown below may suffice for your application:

    # /etc/init/myapp.conf
    description "run my app under Upstart"
    task
    exec /path/to/myapp

11.5 Create an Event Alias

Say you have an event, but want to create a different name for it. You can simulate a new name by creating a new job which:

- has a start on that matches the event you want to "rename"
- is a task
- emits the new name for the event

For example, if you wanted to create an alias called "shutdown" for a particular flavour of the runlevel event, which would be emitted when the system was shut down, you could create a job configuration file called /etc/init/shutdown.conf containing:

    start on runlevel RUNLEVEL=0
    task
    exec initctl emit shutdown

Note that this isn't a true alias since:

- there are now two events which will be generated when the system is shutting down: "runlevel RUNLEVEL=0" and "shutdown"
- the two events will be delivered by Upstart at slightly different times (shutdown will be emitted just fractionally before runlevel RUNLEVEL=0).

However, the overall result might suffice for your purposes, such that you could create a job configuration file like the following, which will run (and complete) just before your system changes to runlevel 0 (in other words halts):

    start on shutdown
    task
    exec backup_my_machine.sh

11.5.1 Change the Type of an Event

Note that along with creating a new name for an event, you could make your alias be a different type of event. See Event Types for further details.

11.6 Synchronisation

Upstart is very careful to ensure that when a condition becomes true it starts all relevant jobs in sequence (see Order in Which Jobs Which start on the Same Event are Run). However, although Upstart has started them one after another, they might still be running at the same time. For example, assume the following:

    # /etc/init/X.conf
    start on event-A
    script
      echo "`date`: $UPSTART_JOB started" >> /tmp/test.log
      sleep 2
      echo "`date`: $UPSTART_JOB stopped" >> /tmp/test.log
    end script

    # /etc/init/Y.conf
    start on event-A
    script
      echo "`date`: $UPSTART_JOB started" >> /tmp/test.log
      sleep 2
      echo "`date`: $UPSTART_JOB stopped" >> /tmp/test.log
    end script

    # /etc/init/Z.conf
    start on event-A
    script
      echo "`date`: $UPSTART_JOB started" >> /tmp/test.log
      sleep 2
      echo "`date`: $UPSTART_JOB stopped" >> /tmp/test.log
    end script

Running the following will cause all the jobs above to run in some order:

    # initctl emit event-A

Here is sample output of /tmp/test.log:

    Thu Mar 31 10:20:44 BST 2011: Y started
    Thu Mar 31 10:20:44 BST 2011: X started
    Thu Mar 31 10:20:44 BST 2011: Z started
    Thu Mar 31 10:20:46 BST 2011: Y stopped
    Thu Mar 31 10:20:46 BST 2011: Z stopped
    Thu Mar 31 10:20:46 BST 2011: X stopped

There are a few points to note about this output:

- All jobs start "around the same time" but are started sequentially.
- The order in which the jobs are initiated by Upstart cannot be predicted.
- All three jobs are running concurrently.

It is possible, with a bit of thought, to create a simple framework for synchronisation. Take the following job configuration file /etc/init/synchronise.conf:

    manual

This one-line Abstract Job configuration file is extremely interesting in that:

- Since it includes the manual keyword, a job created from it can only be started manually.
- Only a single instance of a job created from this configuration can exist (since no instance stanza has been specified).

What this means is that we can use a job based on this configuration as a simple synchronisation device. The astute reader may observe that synchronise has similar semantics to a POSIX pthread condition variable.

Now we have our synchronisation primitive, how do we use it? Here is an example, which we'll call /etc/init/test_synchronise.conf:

    start on stopped synchronise

    # allow multiple instances
    instance $N

    # this is not a service
    task

    pre-start script
      # "lock"
      start synchronise || true
    end script

    script
      # do something here, knowing that you have exclusive access
      # to some resource that you are using the "synchronise"
      # job to protect.
      echo "`date`: $UPSTART_JOB ($N) started" >> /tmp/test.log
      sleep 2
      echo "`date`: $UPSTART_JOB ($N) stopped" >> /tmp/test.log
    end script

    post-stop script
      # "unlock"
      stop synchronise || true
    end script

For example, to run 3 instances of this job, run:

    for n in $(seq 3)
    do
      start test_synchronise N=$n
    done

Here is sample output of /tmp/test.log:

    Thu Mar 31 10:32:20 BST 2011: test_synchronise (1) started
    Thu Mar 31 10:32:22 BST 2011: test_synchronise (1) stopped
    Thu Mar 31 10:32:22 BST 2011: test_synchronise (2) started
    Thu Mar 31 10:32:24 BST 2011: test_synchronise (2) stopped
    Thu Mar 31 10:32:25 BST 2011: test_synchronise (3) started
    Thu Mar 31 10:32:27 BST 2011: test_synchronise (3) stopped

The main observation here: each instance of the job started and stopped before any other instance ran. Like condition variables, this technique requires collaboration from all parties. Note that you cannot know the order in which each instance of the test_synchronise job will run. Note too that it is not necessary to use instances here: all that is required is that your chosen set of jobs all collaborate in their handling of the "lock". Instances make this simple since you can spawn any number of jobs from a single "template" job configuration file.

11.7 Determine if Job was Started by an Event or by "start"

A job that specifies a start on condition can be started in two ways:

- by Upstart itself, when the start on condition becomes true.
- by running "start <job>".

Interestingly, it is possible for a job to establish how it was started by considering the UPSTART_EVENTS variable:

- If the UPSTART_EVENTS variable is set in the job environment, the job was started by an event.
- If the UPSTART_EVENTS variable is not set in the job environment, the job was started by the start command.

Note that this technique does not allow you to determine definitively whether the job was started manually by an Administrator: if the UPSTART_EVENTS variable is not set, it is possible that the job was started by another job calling start inside a script section.
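The check above can be sketched as a small shell function, suitable for use inside a job's script section (the function name and messages are illustrative, not part of Upstart):

```shell
# Hypothetical sketch: decide how the current job was started by
# testing whether Upstart set $UPSTART_EVENTS in the job environment.
started_by() {
    if [ -n "${UPSTART_EVENTS:-}" ]; then
        echo "event: $UPSTART_EVENTS"
    else
        echo "manual"
    fi
}
```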

11.8 Stop a Job from Running if a pre-start Condition Fails

If you wish a job not to be run if a pre-start condition fails:

    pre-start script
      # main process will not be run if /some/file does not exist
      test -f /some/file || { stop; exit 0; }
    end script

    script
      # main process is run here
    end script

11.9 Run a Job Only When an Event Variable Matches Some Value

By default, Upstart will run your job if the start on condition matches the events listed:

    start on event-A

But if event-A provides a number of environment variables, you can restrict your job to starting only when one or more of these variables matches some value. For example:

    start on event-A FOO=hello BAR=wibble

Now, Upstart will only run your job if all of the following are true:

- event-A is emitted
- the value of the $FOO variable in event-A's environment is "hello"
- the value of the $BAR variable in event-A's environment is "wibble"
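The matching rule described above (every KEY=VALUE pair in the condition must be satisfied by the event's environment) can be illustrated with a small POSIX-shell function. This is a hypothetical illustration of the semantics only, not Upstart's implementation, and it does not model the "!=" negation form:

```shell
# Illustration of Upstart's KEY=VALUE event matching: every pair in
# the condition must appear verbatim in the event environment.
# Usage: matches "FOO=hello BAR=wibble" "FOO=hello BAR=wibble BAZ=1"
matches() {
    condition=$1
    event_env=$2
    for pair in $condition; do
        case " $event_env " in
            *" $pair "*) ;;     # this pair is satisfied
            *) return 1 ;;      # a single mismatch rejects the event
        esac
    done
    return 0                    # all pairs matched
}
```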

11.10 Run a Job when an Event Variable Does Not Match Some Value

Upstart supports negation of environment variable values such that you can say:

    start on event-A FOO=hello BAR!=wibble

Now, Upstart will only run your job if all of the following are true:

- event-A is emitted
- the value of the $FOO variable in event-A's environment is "hello"
- the value of the $BAR variable in event-A's environment is not "wibble"

11.11 Run a Job as Soon as Possible After Boot

(Note: we ignore the initramfs in this section.)

To start a job as early as possible, simply "start on" the startup event. This is the first event Upstart emits, and all other events and jobs follow from it:

    start on startup
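For example, a minimal sketch of such a job (the job name and command here are illustrative, not from the original):

```
# /etc/init/early-task.conf (hypothetical)
start on startup
task
exec logger "running as early as possible after boot"
```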

11.12 Run a Job When a User Logs in Graphically

Assuming a graphical login, this can be achieved using a start on condition of:

    start on desktop-session-start

This requires the display manager to emit the event in question. See the upstart-events(7) man page on an Ubuntu system for the 2 events a Display Manager is expected to emit. If your Display Manager does not emit these events, check its documentation to see if it allows scripts to be called at appropriate points; you can then easily conform to the reference implementation's behaviour:

    # A user has logged in
    /sbin/initctl -q emit desktop-session-start \
      DISPLAY_MANAGER=some_name USER=$USER

    # Display Manager has initialized and displayed a login screen
    # (if appropriate)
    /sbin/initctl -q emit login-session-start \
      DISPLAY_MANAGER=some_name

11.14 Run a Job For All of a Number of Conditions

If you have a job configuration file like this:

    start on (event-A or (event-B or event-C))
    script
      echo "`date`: ran in environment: `env`" >> /tmp/myjob.log
    end script

Upstart will run this job when any of the following events is emitted:

- event-A
- event-B
- event-C

You cannot know the order in which the events will arrive, but the specified start on condition has told Upstart that any of them will suffice for your purposes. So, if event-B is emitted first, Upstart will run the job and only consider re-running the job if and when the job has finished running. If event-B is emitted, the job is running, and then (before the job finishes running) event-A is emitted, the job will not be re-run.

However, what if you wanted to run the script for all the events? If you know that all of these events will be emitted at some point, you could change the start on to be:

    start on (event-A and (event-B and event-C))

Here, the job will only run when the last of the three events is received. Is it possible to run this job for each event as soon as each event arrives? Yes it is:

    start on (event-A or (event-B or event-C))
    instance $UPSTART_EVENTS
    script
      echo "`date`: ran in environment: `env`" >> /tmp/myjob.log
    end script

By adding the instance keyword, you ensure that whenever any of the events listed in your start on condition is emitted, an instance of the job will be run. Therefore, if all three events are emitted very close together in time, three job instances will now be run. See the Instance section for further details.

11.15 Run a Job Before Another Job

If you wish to run a particular job before some other job, simply make your job's start on condition specify the starting(7) event. Since the starting(7) event is emitted just before the job in question starts, this provides the behaviour you want: your job will be run first. For example, assuming your job is called job-B and you want it to start before job-A, in /etc/init/job-B.conf you would specify:

    start on starting job-A

11.16 Run a Job After Another Job

If you have a job you wish to run after job "job-A", your start on condition would need to make use of the stopped(7) event like this:

    start on stopped job-A

11.17 Run a Job Once After Some Other Job Ends

Imagine a job configuration file myjob.conf such as the following, which might result in a job that is restarted a number of times:

    start on event-A
    script
      # do something
    end script

Is it possible to run a job only once after job myjob ends? Yes, if you create a job configuration file myjob-sync.conf such as:

    start on stopped myjob and event-B
    script
      # do something
    end script

Now, when event-A is emitted, job myjob will start; if and when job myjob finishes and event event-B is emitted, job myjob-sync will be run. However, crucially, even if job myjob is restarted, the myjob-sync job will not be restarted.

11.18 Run a Job Before Another Job and Stop it After that Job Stops

If you have a job you wish to be running before job "job-A" starts, but which you want to stop as soon as job-A stops:

    start on starting job-A
    stop on stopped job-A

11.19 Run a Job Only If Another Job Succeeds

To have a job start only when job-A succeeds, use the $RESULT variable from the stopped(7) event like this:

    start on stopped job-A RESULT=ok

11.20 Run a Job Only If Another Job Fails

To have a job start only when job-A fails, use the $RESULT variable from the stopped(7) event like this:

    start on stopped job-A RESULT=failed

Note that you could also specify this condition as:

    start on stopped job-A RESULT!=ok

11.21 Run a Job Only If One Job Succeeds and Another Fails

This would be a strange scenario to want, but it is quite easy to specify. Assuming we want a job to start only if job-A succeeds and job-B fails:

    start on stopped job-A RESULT=ok and stopped job-B RESULT=failed

11.22 Run a Job If Another Job Exits with a Particular Exit Code

Imagine you have a database server process that exits with a particular exit code (say 7) to denote that it needs some sort of cleanup process to be run before it can be restarted. To handle this you could create /etc/init/mydb-cleanup.conf with a start on condition like this:

    start on stopped mydb EXIT_STATUS=7

    script
      # handle cleanup...

      # assuming the cleanup was successful, restart the server
      start mydb
    end script

11.23 Detect if Any Job Fails

To "monitor" all jobs for failures, you could create a job that checks specifically for a single job failure (see Run a Job If Another Job Exits with a Particular Exit Code), but you could just as easily detect if any job has failed as follows:

    start on stopped RESULT=failed

Since this start on condition does not specify the job to match against, it will match all jobs. You can then perform condition processing:

    script
      if [ -n "$EXIT_STATUS" ]; then
        str="with exit status $EXIT_STATUS"
      else
        str="due to signal $EXIT_SIGNAL"
      fi

      logger "Upstart Job $JOB (instance '$INSTANCE', process $PROCESS) failed $str"

      case "$JOB" in
        myjob1) ;;
        myjob2) ;;
        etc) ;;
      esac
    end script

Note that $PROCESS above is not the PID; it is the name of the job process type (such as main or pre-start). See stopped(7) for further details.

11.24 Use Details of a Failed Job from Another Job

Although you cannot see the exact environment another job ran in, you can access some details. For example, your job /etc/init/job-B.conf might specify:

    start on stopped job-A RESULT=failed

    script
      exec 1>>/tmp/log.file
      echo "Environment of job $JOB was:"
      env
      echo
    end script

The file /tmp/log.file might then contain something like this:

    UPSTART_INSTANCE=
    EXIT_STATUS=7
    INSTANCE=
    UPSTART_JOB=B
    TERM=linux
    PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
    PROCESS=main
    UPSTART_EVENTS=stopped
    PWD=/
    RESULT=failed
    JOB=A

Here, job-B can see that:

- job-A exited in its "main" process. This is a special name for the script section; all other script sections are named as expected. For example, if the pre-start section had failed, the PROCESS variable would have been set to pre-start, and if post-stop had failed, it would have been set to post-stop.
- job-A exited with exit code 7.
- job-A only had 1 instance (since the INSTANCE variable is set to the null value).
- job-A ran in the root ("/") directory.
- UPSTART_JOB is the name of the job running the script (ie job-B).
- JOB is the name of the job that we are starting on (here job-A).
- UPSTART_EVENTS is a list of the events that caused UPSTART_JOB (ie job-B) to start. Here, the event is stopped(7), showing that job-B started as a result of job-A emitting the stopped(7) event.

11.25 Stop a Job when Another Job Starts

If we wish job-A to stop when job-B starts, specify the following in /etc/init/job-A.conf:

    stop on starting job-B

11.25.1 Simple Mutual Exclusion

It is possible to create two jobs which will be "toggled" such that when job-A is running, job-B will be stopped, and vice versa. This provides a simple mutually exclusive environment. Here is the job configuration file for job-A:

    # /etc/init/job-A.conf
    start on stopped job-B
    script
      # do something when job-B is stopped
    end script

And job-B:

    # /etc/init/job-B.conf
    start on stopped job-A
    script
      # do something when job-A is stopped
    end script

Finally, start one of the jobs:

    # start job-A

Now:

- when job-A is running, job-B will be stopped.
- when job-B is running, job-A will be stopped.

Note though that attempting to have more than two jobs using such a scheme will not work. However, you can use the technique described in the Synchronisation section to achieve the same goal.

11.26 Run a Job Periodically

This cannot currently be handled by Upstart directly, although the "Temporal Events" feature being worked on will address it. Until Temporal Events are available, you should either use cron(8) or something like:

    # /etc/init/timer.conf
    instance $JOB_TO_RUN

    script
      for var in SLEEP JOB_TO_RUN
      do
        eval val=\${$var}
        if [ -z "$val" ]
        then
          logger -t $0 "ERROR: variable $var not specified"
          exit 1
        fi
      done

      eval _sleep=\${SLEEP}
      eval _job=\${JOB_TO_RUN}

      while [ 1 ]
      do
        stop  $_job || true
        sleep $_sleep
        start $_job || true
      done
    end script

Note well the contents of the while loop: we ensure that the commands that might fail are converted into expressions guaranteed to pass. If we did not do this, timer.conf itself would fail, which would be undesirable. Note too the use of instance to allow more than one instance of the timer job to be running at any one time.
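The timer recipe above might then be invoked along these lines (the job name and interval are illustrative; the instance stanza keys each timer instance off $JOB_TO_RUN):

```
start timer JOB_TO_RUN=myjob SLEEP=60
```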

11.27 Restart a Job on a Particular Event

To restart a job when a particular event is emitted requires two jobs. First the main job:

    start on something
    exec /sbin/some-command

Then a helper job to perform the restart:

    start on my-special-event
    exec restart main-job

Now, when the my-special-event event is emitted, the main job will be restarted.

11.28 Migration from System V Initialization Scripts

With SysV init scripts, the Administrator decides the order that jobs are started in by assigning numeric values to each service. Such a system is simple, but non-optimal, since:

- The SysV init system runs each job sequentially. This disallows running jobs in parallel to make full use of system resources. Due to this limitation, many SysV services put work that takes a long time into the background to give the illusion that the boot is progressing quickly. However, this makes it difficult for Administrators to know whether a required service is running by the time a later service starts.
- The Administrator cannot know the best order to run jobs in. Since the only meta-information encoded for services is a numeric value used purely for ordering jobs, the system cannot optimize the services, as it knows nothing about the requirements of each job.

In summary, the SysV init system is designed to be easy for the Administrator to use, not easy for the system to optimize.

In order to migrate a service from SysV to Upstart, it is necessary to change your mindset somewhat. Rather than trying to decide which two services to "slot" your service between, you need to consider the conditions your service needs satisfied before it can legitimately be started. So, if you wished to add a new service that traditionally started before cron(8) or atd(8), you do not need to change the configuration files cron.conf or atd.conf. You can "insert" your new service by specifying simply:

    # /etc/init/my-service.conf
    start on (starting cron or starting atd)

In English, this says: "start the my-service service just before either the cron or the atd service starts". Whether cron or atd actually starts first is not a concern for my-service: Upstart ensures that the my-service service will be started before either of them. Even if cron normally starts before atd but for some reason one day atd starts first, Upstart will still ensure that my-service is started before atd. Note therefore that introducing a new service should not generally require existing job configuration files to be updated.
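As a concrete starting point for a migration, a traditional runlevel-based service can often be expressed with a minimal job configuration file along these lines. The job name, daemon path, and flags below are illustrative assumptions, not from the original; in particular, check how your daemon forks before adding an expect stanza (see init(5)):

```
# /etc/init/foo-daemon.conf -- hypothetical migrated SysV service
description "foo daemon (migrated from /etc/init.d/foo)"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

# add 'expect fork' or 'expect daemon' only if the daemon
# backgrounds itself; best is to run it in the foreground.
exec /usr/sbin/foo-daemon --no-daemonize
```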

11.30 Guarantee that a Job will Only Run Once

If you have a job which must only be run once, but which depends on multiple conditions, the naive approach won't necessarily work:

    task
    start on (A or B)

If event 'A' is emitted, the task will run. But assuming the task has completed and event 'B' is then emitted, the task will run again.

11.30.1 Method 1

A better approach is as follows. Create separate job configuration files for each condition you want your job to start on:

    # /etc/init/got-A.conf
    # job that will "run forever" when event A is emitted
    start on A

    # /etc/init/got-B.conf
    # job that will "run forever" when event B is emitted
    start on B

Then create a job which starts on either of the got-A or got-B jobs starting:

    # /etc/init/only-run-once.conf
    start on (starting got-A or starting got-B)

Now, job "only-run-once" will start only once, since jobs "got-A" and "got-B" can themselves only be started once:

- they do not specify the instance stanza to allow multiple instances of the jobs.
- if either job starts, that job will run forever.
- neither job has a stop on stanza.

11.30.2 Method 2

Change your start on condition to include the startup event:

    task
    start on startup and (A or B)

11.31 Stop a Job That is About to Start

Upstart will start a job when its "start on" condition becomes true. Although somewhat unusual, it is quite possible to stop a job from starting when Upstart tries to start it:

    start on starting job-A
    script
      stop $JOB
    end script

11.32 Stop a Job That is About to Start From Within That Job

You can in fact stop a job that Upstart has decided it needs to start from within that job:

    pre-start script
      stop
    end script

This is actually just an alias for:

    pre-start script
      stop $UPSTART_JOB
    end script

Of course, you could set the pre-start using the Override Files facility.

11.34 Stop a Job When Some Other Job is About to Start

Here, we create /etc/init/job-C.conf, which will stop job-B when job-A is about to start:

    start on starting job-A
    script
      stop job-B
    end script

11.35 Start a Job when a Particular Filesystem is About to be Mounted

Here, we start a job when the /apps mountpoint is mounted read-only as an NFS-v4 filesystem:

    start on mounting TYPE=nfs4 MOUNTPOINT=/apps OPTION=ro

Here's another example:

    start on mounted MOUNTPOINT=/var/run TYPE=tmpfs

Another example, where a job would be started when any non-virtual filesystem is mounted:

    start on mounted DEVICE=[/UL]*

The use of the $DEVICE variable is interesting. It is used here to specify succinctly any device that:

- is a real device (starts with "/", to denote a normal "/dev/..." mount).
- is a device specified by its filesystem label (starts with "L", to denote a "LABEL=" mount) or UUID (starts with "U", to denote a "UUID=" mount).

Another example, where a job is started when a non-root filesystem is mounted:

    start on mounting MOUNTPOINT!=/ TYPE!=swap

11.37 Stopping a Job if it Runs for Too Long

To stop a running job after a certain period of time, create a generic job configuration file like this:

    # /etc/init/timeout.conf
    stop on stopping JOB=$JOB_TO_WAIT_FOR

    kill timeout 1
    manual

    export JOB_TO_WAIT_FOR
    export TIMEOUT

    script
      sleep $TIMEOUT
      initctl stop $JOB_TO_WAIT_FOR
    end script

Now, you can control a job using a timeout:

    start myjob
    start timeout JOB_TO_WAIT_FOR=myjob TIMEOUT=5

This will start job myjob running and then wait for 5 seconds. If job "myjob" is still running after this period of time, it will be stopped using the initctl(8) command. Note the stop on stanza, which causes the timeout job not to run if the job being waited for has already started to stop.

11.38 Run a Job When a File or Directory is Created/Deleted

As of Upstart 1.8, you can use the upstart-file-bridge. If you are using an older version of Upstart, read on...

If you need to start a job only when a certain file is created, you could create a generic job configuration file such as the following:

    # /etc/init/wait_for_file.conf
    instance $FILE_PATH
    export FILE_PATH

    script
      while [ ! -e "$FILE_PATH" ]
      do
        sleep 1
      done
      initctl emit file FILE_PATH="$FILE_PATH"
    end script

Having done this, you can now make use of it. To have another job start if, say, file /var/run/foo.dat gets created, you first need to create a job configuration file stating this:

    # /etc/init/myapp.conf
    start on file FILE_PATH=/var/run/foo.dat
    script
      # ...
    end script

Lastly, kick off the process by starting an instance of wait_for_file:

    start wait_for_file FILE_PATH=/var/run/foo.dat

Now, when file /var/run/foo.dat is created, the following will happen:

- The wait_for_file job will emit the file event, passing the path of the file you specified in that event's environment.
- Upstart will see that the start on condition for the myapp job configuration file is satisfied.
- Upstart will create a myapp job and start it.

You can modify this strategy slightly to run a job when a file is:

- modified
- deleted
- contains certain content
- et cetera

See test(1), or your shell's documentation, for available file tests. Note that this approach is very simplistic; a better approach would be to use inotify(7).
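For instance, the polling loop in wait_for_file.conf could be replaced with an inotify-based wait. This sketch assumes the inotify-tools package (which provides inotifywait) is installed; it is an illustration, not a drop-in replacement:

```
# hypothetical inotify-based variant of wait_for_file.conf
instance $FILE_PATH
export FILE_PATH

script
    dir=$(dirname "$FILE_PATH")
    # block until the file appears in its parent directory;
    # the timeout guards against a race with pre-existing files
    while [ ! -e "$FILE_PATH" ]
    do
        inotifywait -qq -t 5 -e create -e moved_to "$dir" || true
    done
    initctl emit file FILE_PATH="$FILE_PATH"
end script
```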

11.39 Run a Job Each Time a Condition is True

This is the default way Upstart works when you have defined a task:

    # /etc/init/myjob.conf
    task
    exec /some/program
    start on (A or B)

Job "myjob" will run every time either event 'A' or event 'B' is emitted. However, there is a corner case: if event 'A' has been emitted and the task is currently running when event 'B' is emitted, job "myjob" will not be run again. To avoid this situation, use instances:

    # /etc/init/myjob2.conf
    task
    instance $SOME_VARIABLE
    exec /some/program
    start on (A or B)

Now, as long as variable $SOME_VARIABLE is defined with a unique value each time either event 'A' or 'B' is emitted, Upstart will run job "myjob2" multiple times.

11.40 Run a Job When a Particular Runlevel is Entered and Left

To run a job when a particular runlevel is entered and also run it when that same runlevel is left, you could specify:

    start on runlevel RUNLEVEL=5 or runlevel PREVLEVEL=5

See runlevel(7) and the Runlevels section for more details.
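As a sketch of how this condition might be used (the job name and log path are hypothetical), note that the runlevel event sets both $RUNLEVEL and $PREVLEVEL, so the job's script can distinguish the two cases:

    # /etc/init/log-runlevel5.conf
    task
    start on runlevel RUNLEVEL=5 or runlevel PREVLEVEL=5

    script
      if [ "$RUNLEVEL" = "5" ]
      then
        echo "entered runlevel 5" >> /var/log/runlevel5.log
      else
        echo "left runlevel 5" >> /var/log/runlevel5.log
      fi
    end script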

11.41 Pass State Between Job Processes

Assume you have a job configuration file like this:

    pre-start script
      # ...
    end script

    exec /bin/some-program $ARG

How can you get the pre-start script section to set $ARG and have the "main" section use that value in the "exec" stanza? This isn't as easy as you might imagine, for the simple reason that Upstart runs each script and exec section in a new process. As such, by the time Upstart gets to the exec stanza, the process spawned to handle the pre-start script section has already ended. This implies they cannot communicate directly. However, there are ways to send information from one section to another...

One method to achieve the required goal is as follows:

    # set a variable which is the name of a file this job will use
    # to pass information between script sections.
    env ARG_FILE="/var/myapp/myapp.dat"

    # make the variable accessible to all script sections (ie sub-shells)
    export ARG_FILE

    pre-start script
      # decide upon arguments and write them to
      # $ARG_FILE, which is available in this sub-shell.
    end script

    script
      # read back the contents of the arguments file
      # and pass the values to the program to run.
      ARGS="$(cat $ARG_FILE)"

      # clean up
      rm -f $ARG_FILE || true

      exec /bin/some-program $ARGS
    end script

However, as of Upstart 1.7, this is also possible (for Session Jobs only!) by using the initctl set-env command. For example:

    pre-start script
      # modify the running job's environment table
      # such that when the 'exec' stanza is executed, Upstart will apply
      # all variables in this table to that job process.
      initctl set-env ARG=foo
    end script

    exec /bin/some-program $ARG
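To see why the naive approach fails, the following standalone shell snippet (illustrative only; not an Upstart job) mimics the separate script sections with sub-shells: an assignment made in one process is invisible to the others, while a file persists between them:

```shell
#!/bin/sh
# Each Upstart 'script' section runs in its own process; the sub-shells
# below mimic that. Variable and file names are illustrative.
ARG="unset"

# "pre-start" runs in its own process: the assignment dies with it.
( ARG="computed-value" )
echo "ARG is: $ARG"   # still "unset" - the change did not propagate

# Passing state through a file works, as in the job above.
tmpfile=$(mktemp)
( echo "computed-value" > "$tmpfile" )   # "pre-start" writes the file
ARG="$(cat "$tmpfile")"                  # "main" section reads it back
rm -f "$tmpfile"
echo "ARG is: $ARG"   # now "computed-value"
```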

11.42 Pass State From Job Configuration File to a Script Section

To pass a value from a job configuration file to one of its script sections, simply use the env stanza:

    env CONF_FILE=/etc/myapp/myapp.cfg

    script
      exec /bin/myapp -c $CONF_FILE
    end script

This example is a little pointless, but the following slightly modified example is much more useful:

    start on an-event
    export CONF_FILE

    script
      exec /bin/myapp -c $CONF_FILE
    end script

By dropping the use of the env stanza, we can now pass the value in via an event:

    # initctl emit an-event CONF_FILE=/etc/myapp/myapp.cfg

This is potentially much more useful since the value passed into myapp.conf can be varied without having to modify the job configuration file.
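The two approaches can also be combined (a sketch; the event and file names are illustrative): the env stanza supplies a default value, which a variable carried by the starting event overrides:

    # /etc/init/myapp.conf
    start on an-event
    env CONF_FILE=/etc/myapp/default.cfg

    script
      exec /bin/myapp -c $CONF_FILE
    end script

Emitting "initctl emit an-event" starts myapp with the default configuration file, whereas "initctl emit an-event CONF_FILE=/etc/myapp/other.cfg" overrides the default for that run only.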

11.44 Disabling a Job from Automatically Starting

With Upstart 0.6.7, to stop Upstart automatically starting a job, you can either:

- Rename the job configuration file such that it does not end with ".conf".
- Edit the job configuration file and comment out the "start on" stanza using a leading '#'.

To re-enable the job, just undo the change.

11.44.1 Override Files

With Upstart 1.3, you can make use of an "override file" and the manual stanza to achieve the same result in a simpler manner:

    # echo "manual" >> /etc/init/myjob.override

Note that you could achieve the