Version 2.2.0

Ammonite lets you use the Scala language for scripting purposes: in the REPL, as scripts, as a library to use in existing projects, or as a standalone systems shell.

Ammonite-REPL: A Modernized Scala REPL. With syntax highlighting, multi-line editing, the ability to load maven artifacts directly in the REPL, and many other quality-of-life improvements missing in the default Scala REPL.

Scala Scripts: Lightweight Programming in Scala. Create scripts that you can run easily from the command line, without the overhead of setting up a "project" or waiting for SBT's slow startup times.

Ammonite-Ops: A Rock-solid Filesystem Library for Scala. Deal with the filesystem easily from your existing Scala projects or applications, as easily as you would from a Bash or Python script.

Ammonite-Shell: A modern replacement for the Bash system shell. Provides a systems shell in the high-level Scala language, letting you seamlessly mix system operations with real code without the hassle or the frustration of trying to write complex code in Bash.

If you use Ammonite, you will probably find the following book by the author helpful in using Ammonite to the fullest:

Ammonite is a project by Li Haoyi. If you use Ammonite and enjoyed it, please chip in to support our development at:

Any amount will help us develop Ammonite into the best possible REPL and script runner for the Scala community!

The goal of Ammonite is to liberate your Scala code from heavyweight "projects", using the lightweight Ammonite runtime: if you want to run some Scala, open the Ammonite-REPL and run it, interactively! If you want to run it later, save it into some Scala Scripts and run those later.

For a video overview of the project and its motivation, check out this talk:

If you are already working in Scala, you no longer have to drop down to Python or Bash for your scripting needs: you can use Scala Scripts for your scripting needs, and avoid the overhead of working in multiple languages.

Each of the above projects is usable standalone; click on the links to jump straight to their docs, or scroll around and browse through the navigation bar on the left. If you're wondering what you can do with Ammonite, there is an Ammonite Cookbook, which contains a bunch of fun things that you can do, whether in the interactive Ammonite-REPL or in some Scala Scripts. You can also take a look at how people are using Ammonite in the wild to see what people are doing with it. And there are more talks available below:

The bulk of this page describes the latest stable release of Ammonite, 2.2.0. If you're willing to live on the edge, we also publish Unstable Versions from any commits that get pushed or pull-requests that land in the master branch:

Ammonite-REPL

A Modernized Scala REPL

The Ammonite-REPL is an improved Scala REPL, re-implemented from first principles. It is much more featureful than the default REPL and comes with a lot of ergonomic improvements and configurability that may be familiar to people coming from IDEs or other REPLs such as IPython or Zsh.

It can be combined with Ammonite-Ops to replace Bash as your systems shell, but also can be used alone as a superior version of the default Scala REPL, or as a debugging tool, or for many other fun and interesting things!

If you want to use Ammonite as a plain Scala shell, download the standalone Ammonite 2.2.0 executable for Scala 2.13 (also available for Older Scala Versions):

$ sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.2.0/2.13-2.2.0) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm

Or to try out the latest features in our Unstable Release 2.2.0-4-4bd225e:

$ sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.2.0/2.13-2.2.0-4-4bd225e) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm

Alternatively, on macOS you can install Ammonite via brew:

$ brew install ammonite-repl

You can also download a bootstrap script and commit it to version control as a file `amm`:

curl -L https://github.com/lihaoyi/ammonite/releases/download/2.2.0/2.13-2.2.0-bootstrap > amm && chmod +x amm

You can then run Ammonite via the `./amm` command.

This will give you access to the Ammonite-REPL:

With Pretty Printing, Syntax Highlighting for input and output, Artifact Loading in-REPL, and all the other nice Features!

If you want to use Ammonite as a filesystem shell, take a look at Ammonite-Shell. If you're not sure what to do with Ammonite, check out the Ammonite Cookbook for some fun ideas!

If you want some initialization code available to the REPL, you can add it to your ~/.ammonite/predef.sc.
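For example, a minimal predef might set up some always-available imports and tweak the prompt. This is just a sketch; the exact contents are up to you, and the imports and prompt value below are purely illustrative:

```scala
// ~/.ammonite/predef.sc -- runs before every REPL session
import scala.concurrent.duration._  // imports you want in scope in every session

// `repl.prompt` is a Ref[String]; assign to it to change the REPL prompt
repl.prompt() = "scala> "
```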

If you have any questions, come hang out on the mailing list or gitter channel and get help!

You can also try out Ammonite 2.2.0 in an existing SBT project. To do so, add the following to your build.sbt

libraryDependencies += {
  val version = scalaBinaryVersion.value match {
    case "2.10" => "1.0.3"
    case _ => "2.2.0"
  }
  "com.lihaoyi" % "ammonite" % version % "test" cross CrossVersion.full
}

sourceGenerators in Test += Def.task {
  val file = (sourceManaged in Test).value / "amm.scala"
  IO.write(file, """object amm extends App { ammonite.Main.main(args) }""")
  Seq(file)
}.taskValue

// Optional, required for the `source` command to work
(fullClasspath in Test) ++= {
  (updateClassifiers in Test).value
    .configurations
    .find(_.configuration.name == Test.name)
    .get
    .modules
    .flatMap(_.artifacts)
    .collect{case (a, f) if a.classifier == Some("sources") => f}
}

Or to try out the latest features in our Unstable Release 2.2.0-4-4bd225e:

libraryDependencies += "com.lihaoyi" % "ammonite" % "2.2.0-4-4bd225e" % "test" cross CrossVersion.full

After that, simply hit

sbt projectName/test:run

or if there are other main methods in the Test scope

sbt projectName/test:runMain amm

To activate the Ammonite REPL

You can also pass a string to the Main call containing any commands or imports you want executed at the start of every run, along with other configuration. If you want Ammonite to be available in all projects, simply add the above snippet to a new file ~/.sbt/0.13/global.sbt .
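For instance, the generated amm.scala object could pass such initialization code through ammonite.Main. This is a sketch based on Ammonite's Main API; the predefCode parameter name is an assumption and may differ between Ammonite versions:

```scala
object amm extends App {
  // Hypothetical example: start every REPL session with some imports pre-loaded
  ammonite.Main(
    predefCode = "import scala.concurrent.duration._"
  ).run()
}
```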

Note: Ammonite-REPL does not support Windows, even though Ammonite-Ops does. See #119 if you are interested in details or want to try your hand at making it work.

Features

Ammonite-REPL supports many more features than the default REPL, including:

Pretty-printed output

@ Seq.fill(10)(Seq.fill(3)("Foo"))
res0: Seq[Seq[String]] = List(
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo"),
  List("Foo", "Foo", "Foo")
)

@ case class Foo(i: Int, s0: String, s1: Seq[String])
defined class Foo

@ Foo(1, "", Nil)
res2: Foo = Foo(1, "", List())

@ Foo(
@   1234567,
@   "I am a cow, hear me moo",
@   Seq("I weigh twice as much as you", "and I look good on the barbecue")
@ )
res3: Foo = Foo(
  1234567,
  "I am a cow, hear me moo",
  List("I weigh twice as much as you", "and I look good on the barbecue")
)

Ammonite-REPL uses PPrint to display its output by default. That means that everything is nicely formatted to fit within the width of the terminal, and is copy-paste-able!

By default, Ammonite truncates the pretty-printed output to avoid flooding your terminal. If you want to disable truncation, call show(...) on your expression to pretty-print its full output. You can also pass in an optional height = ... parameter to control how much you want to show before truncation.

Configurable Truncation

@ Seq.fill(20)(100)
res0: Seq[Int] = List(
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
...

@ show(Seq.fill(20)(100))
res1: ammonite.pprint.Show[Seq[Int]] = List(
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100,
  100
)

@ show(Seq.fill(20)(100), height = 3)
res2: ammonite.pprint.Show[Seq[Int]] = List(
  100,
  100,
...

@ repl.pprinter() = repl.pprinter().copy(defaultHeight = 5)

@ Seq.fill(20)(100)
res4: Seq[Int] = List(
  100,
  100,
  100,
  100,
...

Ammonite-REPL intelligently truncates your output when it's beyond a certain size. You can request for the full output to be printed on-demand, print a certain number of lines, or even change the implicit pprintConfig so subsequent lines all use your new configuration.

Editing

Ammonite by default ships with a custom implementation of readline, which provides...

Syntax Highlighting

Ammonite syntax highlights both the code you're entering as well as any output being echoed in response. This should make it much easier to work with larger snippets of input.

All colors are configurable, and you can easily turn off colors entirely via the Configuration.
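For example, you can switch off colors entirely by assigning a new color scheme to the interp.colors reference. This is a sketch; it assumes ammonite.util.Colors.BlackWhite is the built-in no-color scheme:

```scala
// In the REPL: replace the default color scheme with the colorless one
interp.colors() = ammonite.util.Colors.BlackWhite
```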

Stack traces are similarly highlighted, for easier reading:

Multi-line editing

You can use the Up and Down arrows to navigate between lines within your snippet. Enter only executes the code when you're on the last line of a multi-line snippet, meaning you can take your time, space out your code nicely, and fix any mistakes anywhere in your snippet. History is multi-line too, meaning re-running a multi-line snippet is trivial, even with tweaks.

Long gone are the days where you're desperately trying to cram everything in a single line, or curse quietly when you notice a mistake in an earlier line you are no longer able to fix. No more painstakingly crafting a multi-line snippet, and then having to painstakingly fish it line by individual line out of the history so you can run it again!

Desktop key-bindings

You can use Alt-Left/Right to move forward/backwards by one word at a time or hold down Shift to select text to delete. These compose as you'd be used to: e.g. Shift-Up selects all the text between your current cursor and the same column one row up.

Tab and Shift-Tab now work to block-indent and -dedent sections of code, as you'd expect in any desktop editor like Sublime Text or IntelliJ. This further enhances the multi-line editing experience, letting you nicely lay out your more complex REPL commands the same way you'd format code in any other editor.

Console key-bindings

All the readline-style navigation hotkeys like Ctrl-W to delete a word or Esc-Left/Right to navigate one word left/right still work. If you're comfortable with consoles like Bash, Python, IPython or even the default Scala console, you should have no trouble, as all the exact same hotkeys work in Ammonite.

History Search

Apart from browsing your command-line history with UP, you can also perform a history search by entering some search term and then pressing UP. That will pull up the most recent history line with that term in it, underlined. You can continue to press UP or DOWN to cycle through the matches, or Backspace or continue typing characters to refine your search to what you want.

You can press TAB, or any other command character (LEFT, RIGHT, ...) to end the search and let you continue working with the currently-displayed command. Pressing ENTER will end the search and immediately submit the command to be run.

You can also kick off a history search using Ctrl-R , and use Ctrl-R to cycle through the matches.

Block Input

To enter block input (many independent lines all at once) into the Ammonite-REPL, simply wrap the multiple lines in curly braces { ... }, and Ammonite will wait until you close it before evaluating the contents:

@ {
@   val x = 1
@   val y = 2
@   x + y
@ }
x: Int = 1
y: Int = 2
res0_2: Int = 3

As you can see, the contents of the { ... } block are unwrapped and evaluated as top-level statements. You can use this to e.g. declare mutually recursive functions or classes & companion-objects without being forced to squeeze everything onto a single line.

If you don't want this un-wrapping behavior, simply add another set of curlies and the block will be evaluated as a normal block, to a single expression:

@ {{
@   val x = 1
@   val y = 2
@   x + y
@ }}
res0: Int = 3

Undo & Redo

The Ammonite command-line editor allows you to undo and re-do portions of your edits:

Ctrl -: Undo last change

Alt/Esc -: Redo last change

Each block of typing, deletes, or navigation counts as one undo. This should make it much more convenient to recover from botched copy-pastes or bulk-deletions.

Magic Imports

Ammonite provides a set of magic imports that let you load additional code into a REPL session: these are imports which start with a $, and are *top-level* inside the REPL command or your Scala Scripts.

import $file

This lets you load Scala Scripts into the REPL. For example given a small script defining one value we want

// MyScript.sc
val elite = 31337

We can load it into our REPL session using:

@ import $file.MyScript

@ MyScript.elite
res1: Int = 31337

If the script is in a sub-folder, simply use:

@ import $file.myfolder.MyScript

Or if the script is in an outer folder,

@ import $file.^.MyScript

Or if we want to import the contents of the script in one go:

@ import $file.MyScript, MyScript._
import $file.$

@ elite
res1: Int = 31337

While this is a trivial example, your MyScript.sc file can contain anything you want, not just `val`s: function `def`s, `class`es, `object`s or `trait`s, or imports from other scripts. For more documentation on how these scripts work, check out the Scala Scripts section.

There are some subtleties when dealing with $file imports that are worth remembering

Note you can also perform file imports from your predef file, which are resolved relative to that file's path. This is useful if your predef is large and you want to break it up into multiple files.

Imported Scripts are Re-used

Note that script files imported multiple times are re-used; even if the same script is imported multiple times, it will only be compiled once, and its top-level definitions or statements will only be evaluated once. If you want to run code over and over, def a function in the script you are importing and you can call it repeatedly.
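For example, a script like the following (MyTasks.sc is a hypothetical name) keeps its one-time setup at the top level and wraps the repeatable work in a def:

```scala
// MyTasks.sc -- top-level statements run only once, on first import
println("initializing MyTasks")

val conf = Map("retries" -> 3)

// Repeatable work goes in a def, so callers can invoke it as often as they like
def greet(name: String): String = s"Hello, $name!"
```

After import $file.MyTasks, calling MyTasks.greet(...) re-runs the function body every time, while the println and val above execute only on the first import.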

If you want to re-load a script, you should use Ammonite's Save/Load Session functionality to sess.save() the session before importing the script, and sess.load() to reset the session before re-importing it.
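A sketch of that workflow in the REPL, assuming a script MyScript.sc as above (sess is available as repl.sess):

```scala
@ repl.sess.save("before-script")    // checkpoint the session

@ import $file.MyScript

@ // ...edit MyScript.sc on disk...

@ repl.sess.load("before-script")    // roll back, discarding the compiled script

@ import $file.MyScript              // re-compiles and re-runs the new version
```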

Cannot directly import from inside a Script

You cannot import things from "inside" that script in one chain:

@ import $file.MyScript._

Rather, you must always import the script-object first, and then import things from the script object after:

@ import $file.MyScript, MyScript._

Renamed-scripts and multiple-scripts

You can re-name scripts on-import if you find their names are colliding:

@ import $file.{MyScript => FooBarScript}, FooBarScript._

Or import multiple scripts at once

@ import $file.{MyScript, MyOtherScript}

These behave as you would expect imports to work. Note that when importing multiple scripts, you have to name them explicitly and cannot use wildcard `._` imports:

@ import $file._ // doesn't work

import $exec

This is similar to import $file , except that it dumps the definitions and imports from the script into your REPL session. This is useful if you are using a script to hold a set of common imports: using import $file to import a script doesn't propagate imports from that script into your REPL.

Alternatively, this is also useful if you want to split up your ~/.ammonite/predef.sc file into multiple scripts: e.g. if you want to break up your predef.sc into two scripts ~/.ammonite/predef.sc and ~/.ammonite/helper.sc . While you could use import $file to import $file.helper within your predef.sc file, it will only bring the helper object into scope within predef.sc or within your REPL. import $exec.helper will properly "dump" all the definitions from helper.sc into your local scope, which is often what you want when dealing with predef files.
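Concretely, the predef-splitting setup might look like this (file contents are purely illustrative):

```scala
// ~/.ammonite/helper.sc
import scala.concurrent.duration._
def banner(s: String) = println(s"=== $s ===")

// ~/.ammonite/predef.sc
import $exec.helper
// `banner` and the duration imports are now in scope in every REPL session
```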

See the docs for Scala Scripts for more on how script files work in general.

import $ivy

Lets you import Ivy dependencies from Maven Central, or anywhere else. For example, here is loading Scalaz and using it in the Ammonite-REPL:

@ import $ivy.`org.scalaz::scalaz-core:7.2.27`, scalaz._, Scalaz._

@ (Option(1) |@| Option(2))(_ + _)
res1: Option[Int] = Some(3)

Note that the different portions of the $ivy import are in an org::library:version format; the :: is used to represent Scala dependencies, similar to %% in SBT's dependency syntax. If you want Java dependencies, you can load them using the org:library:version format, e.g. here we load the Google Guava Java library:

@ import $ivy.`com.google.guava:guava:18.0`, com.google.common.collect._

@ val bimap = ImmutableBiMap.of(1, "one", 2, "two", 3, "three")

@ bimap.get(1)
res2: String = "one"

@ bimap.inverse.get("two")
res3: Int = 2

As well as the org:::library:version syntax for loading Scala libraries cross-published against the full Scala version (e.g. 2.12.2 rather than just 2.12 ):

@ import org.scalamacros.paradise.Settings._
error: object scalamacros is not a member of package org

@ import $ivy.`org.scalamacros:::paradise:2.1.1`, org.scalamacros.paradise.Settings._

@ boolSetting("key").value
res1: Boolean = false

If you want to load a compiler plugin, you can do so using import $plugin.$ivy :

@ // Compiler plugins imported without `$plugin` are not loaded
@ import $ivy.`org.spire-math::kind-projector:0.6.3`

@ trait TC0[F[_]]
defined trait TC0

@ type TC0EitherStr = TC0[Either[String, ?]]
error: not found: type ?

@ // You need to use `import $plugin.$ivy`
@ import $plugin.$ivy.`org.spire-math::kind-projector:0.6.3`

@ trait TC[F[_]]
defined trait TC

@ type TCEitherStr = TC[Either[String, ?]]
defined type TCEitherStr

@ // Importing plugins doesn't affect the run-time classpath
@ import $plugin.$ivy.`com.lihaoyi::scalatags:0.7.0`

@ import scalatags.Text
error: not found: value scalatags

This makes Ammonite ideal for trying out new libraries or tools. You can pull down projects like Scalaz or Shapeless and immediately start working with them in the REPL:

@ import $ivy.`com.chuusai::shapeless:2.3.3`, shapeless._

@ (1 :: "lol" :: List(1, 2, 3) :: HNil)
res1: Int :: String :: List[Int] :: HNil = 1 :: "lol" :: List(1, 2, 3) :: HNil

@ res1(1)
res2: String = "lol"

@ import shapeless.syntax.singleton._

@ 2.narrow
res4: 2 = 2

Even non-trivial web frameworks like Finagle or Akka-HTTP can be simply pulled down and run in the REPL!

@ import $ivy.`com.twitter::finagle-httpx:6.26.0`

@ import com.twitter.finagle._, com.twitter.util._

@ var serverCount = 0

@ var clientResponse = 0

@ val service = new Service[httpx.Request, httpx.Response] {
@   def apply(req: httpx.Request): Future[httpx.Response] = {
@     serverCount += 1
@     Future.value(
@       httpx.Response(req.version, httpx.Status.Ok)
@     )
@   }
@ }

@ val server = Httpx.serve(":8080", service)

@ val client: Service[httpx.Request, httpx.Response] = Httpx.newService(":8080")

@ val request = httpx.Request(httpx.Method.Get, "/")

@ request.host = "www.scala-lang.org"

@ val response: Future[httpx.Response] = client(request)

@ response.onSuccess { resp: httpx.Response =>
@   clientResponse = resp.getStatusCode
@ }

@ Await.ready(response)

@ serverCount
res12: Int = 1

@ clientResponse
res13: Int = 200

@ server.close()

Ammonite-REPL is configured with a set of default resolvers but you can add your own using Coursier's Repository APIs

@ import $ivy.`com.ambiata::mundane:1.2.1-20141230225616-50fc792`
error: Failed to resolve ivy dependencies

@ interp.repositories() ++= Seq(coursierapi.IvyRepository.of(
@   "https://ambiata-oss.s3-ap-southeast-2.amazonaws.com/[defaultPattern]"
@ ))

@ import $ivy.`com.ambiata::mundane:1.2.1-20141230225616-50fc792`

@ import com.ambiata.mundane._

Including repositories which need authentication:

import coursierapi.{Credentials, MavenRepository}

interp.repositories() ++= Seq(
  MavenRepository
    .of("https://nexus.corp.com/content/repositories/releases")
    .withCredentials(Credentials.of("user", "pass"))
)

If you need more detailed control over what you are importing, e.g. with attributes, classifiers or exclusions, you can fall back to using the interp.load.ivy(deps: coursierapi.Dependency*) function and configure each Dependency to your heart's content:

@ import os._

@ interp.load.module($printedScriptPath/"loadIvyAdvanced.sc")

@ serializer

import $repo

Lets you statically add Maven or Ivy repositories that will be used during artifact resolution. This mechanism does not (yet) allow for credentials registration.

@ import $repo.`https://jitpack.io`

@ import $ivy.`com.github.jupyter:jvm-repr:0.4.0`

@ import jupyter._

NB: Prefixing the url pattern with "ivy:", such as import $repo.`ivy:https://dl.bintray.com/typesafe/ivy-releases/[defaultPattern]`, is required for ivy repositories. Maven repositories do not require any prefix.

Builtins

The Ammonite REPL contains a bunch of built-in imports and definitions by default. This includes:

repl: the object representing the Repl API. This gives you access to REPL functionality (e.g. repl.history), and you can use autocomplete or typeOf on the repl object to see what is available.

Utilities: tools such as time, grep or browse that are independent of the REPL, but are extremely useful to have in it.

All of these are imported by default into any Ammonite REPL, in order to provide a rich and consistent REPL experience. If you want to disable these imports and run the REPL with a clean namespace (with only the core implicits needed for result pretty-printing/type-printing to work) pass in defaultPredef = false to the REPL's Main API or --no-default-predef to the REPL from the command-line.

Repl API

Ammonite contains a range of useful built-ins implemented as normal functions on the repl and interp objects, e.g. repl.history , repl.width , repl.frontEnd() to change the front-end terminal implementation, etc.:

trait ReplAPI {
  /**
   * Read/writable prompt for the shell. Use this to change the
   * REPL prompt at any time!
   */
  val prompt: Ref[String]

  /**
   * The front-end REPL used to take user input. Modifiable!
   */
  val frontEnd: Ref[FrontEnd]

  /**
   * Display help text if you don't know how to use the REPL
   */
  def help: String

  /**
   * The last exception that was thrown in the REPL; `null` if nothing has
   * yet been thrown. Useful if you want additional information from the
   * thrown exception than the printed stack trace (e.g. many exceptions have
   * additional metadata attached) or if you want to show the stack trace
   * on an exception that doesn't normally print it (e.g. seeing the stack
   * when a Ctrl-C interrupt happened) via `lastException.printStackTrace`.
   */
  def lastException: Throwable

  /**
   * History of commands that have been entered into the shell, including
   * previous sessions
   */
  def fullHistory: History

  /**
   * History of commands that have been entered into the shell during the
   * current session
   */
  def history: History

  /**
   * Get the `Type` object of [[T]]. Useful for finding
   * what its methods are and what you can do with it
   */
  def typeOf[T: WeakTypeTag]: Type

  /**
   * Get the `Type` object representing the type of `t`. Useful
   * for finding what its methods are and what you can do with it
   */
  def typeOf[T: WeakTypeTag](t: => T): Type

  /**
   * Throw away the current scala.tools.nsc.Global and get a new one
   */
  def newCompiler(): Unit

  /**
   * Access the compiler to do crazy things if you really want to!
   */
  def compiler: scala.tools.nsc.Global

  /**
   * Access the presentation compiler to do even crazier things if you really want to!
   */
  def interactiveCompiler: scala.tools.nsc.interactive.Global

  /**
   * Shows all imports added that bring values into scope for the commands a
   * user runs; *includes* imports from the built-in predef and user predef files
   */
  def fullImports: Imports

  /**
   * Shows the imports added to scope by the commands a user has entered so far;
   * *excludes* imports from the built-in predef and user predef files
   */
  def imports: Imports

  /**
   * If class wrapping is enabled, this lists the names of the previous commands
   * that the current commands actually references (as told by the scalac).
   *
   * E.g. in a session like
   * ```
   * @ val n = 2
   * n: Int = 2
   *
   * @ val p = 1
   * p: Int = 1
   *
   * @ n + p
   * res2: Int = 3
   * ```
   * this would have returned an empty list if called from the same line as `val n = 2`
   * or `val p = 1`. This would have returned `Seq("cmd0", "cmd1")` if called
   * from the same line as `n + p`, as both `cmd0`, that defines `n`, and `cmd1`, that
   * defines `p`, are referenced from this line.
   */
  def usedEarlierDefinitions: Seq[String]

  /**
   * Controls how things are pretty-printed in the REPL. Feel free
   * to shadow this with your own definition to change how things look
   */
  implicit def tprintColorsImplicit: pprint.TPrintColors

  implicit def codeColorsImplicit: CodeColors

  val pprinter: Ref[pprint.PPrinter]

  implicit def pprinterImplicit = pprinter()

  /**
   * Current width of the terminal
   */
  def width: Int

  /**
   * Current height of the terminal
   */
  def height: Int

  def show(t: Any): Unit

  /**
   * Lets you configure the pretty-printing of a value. By default, it simply
   * disables truncation and prints the entire thing, but you can set other
   * parameters as well if you want.
   */
  def show(t: Any,
           width: Integer = null,
           height: Integer = null,
           indent: Integer = null): Unit

  /**
   * Functions that can be used to manipulate the current REPL session:
   * check-pointing progress, reverting to earlier checkpoints, or deleting
   * checkpoints by name.
   *
   * Frames get pushed on a stack; by default, a saved frame is
   * accessible simply by calling `load`. If you provide a name
   * when `save`ing a checkpoint, it can later be `load`ed directly
   * by providing the same name to `load`
   *
   * Un-named checkpoints are garbage collected, together with their
   * classloader and associated data, when they are no longer accessible
   * due to `restore`. Named checkpoints are kept forever; call `delete`
   * on them if you really want them to go away.
   */
  def sess: Session

  def load: ReplLoad

  def clipboard: Clipboard
}

trait ReplLoad {
  /**
   * Loads a command into the REPL and
   * evaluates them one after another
   */
  def apply(line: String): Unit

  /**
   * Loads and executes the scriptfile on the specified path.
   * Compilation units separated by `@` are evaluated sequentially.
   * If an error happens it prints an error message to the console.
   */
  def exec(path: os.Path): Unit
}

trait Session {
  /**
   * The current stack of frames
   */
  def frames: List[Frame]

  /**
   * Checkpoints your current work, placing all future work into its own
   * frames. If a name is provided, it can be used to quickly recover
   * that checkpoint later.
   */
  def save(name: String = ""): Unit

  /**
   * Discards the last frames, effectively reverting your session to
   * the last `save`-ed checkpoint. If a name is provided, it instead reverts
   * your session to the checkpoint with that name.
   */
  def load(name: String = ""): SessionChanged

  /**
   * Resets you to the last save point. If you pass in `num`, it resets
   * you to that many savepoints since the last one.
   */
  def pop(num: Int = 1): SessionChanged

  /**
   * Deletes a named checkpoint, allowing it to be garbage collected if it
   * is no longer accessible.
   */
  def delete(name: String): Unit
}

trait Clipboard {
  /**
   * Reads contents from the system clipboard.
   * @return System clipboard contents if they are readable as `String`,
   *         empty string otherwise.
   */
  def read: String

  /**
   * Sets the contents of the system clipboard.
   *
   * @param data New contents for the clipboard.
   */
  def write(data: Internals.Writable): Unit
}

All of these are available as part of the repl object, which is imported in scope by default. Additional functionality is available under the interp object, which is also available in scripts:

trait InterpAPI {
  /**
   * When running a script in `--watch` mode, re-run the main script if this
   * file changes. By default, this happens for all script files, but you can
   * call this to watch arbitrary files your script may depend on
   */
  def watch(p: os.Path): Unit

  /**
   * A generalization of [[watch]], allows watching arbitrary values and not
   * just the contents of file paths.
   */
  def watchValue[T](v: => T): T

  /**
   * The colors that will be used to render the Ammonite REPL in the terminal,
   * or for rendering miscellaneous info messages when running scripts.
   */
  val colors: Ref[Colors]

  /**
   * Tools related to loading external scripts and code into the REPL
   */
  def load: InterpLoad

  /**
   * resolvers to use when loading jars
   */
  def repositories: Ref[List[Repository]]

  /**
   * Functions that will be chained and called on the coursier
   * Fetch object right before they are run
   */
  val resolutionHooks: mutable.Buffer[Fetch => Fetch]

  /**
   * Exit the Ammonite REPL. You can also use Ctrl-D to exit
   */
  def exit = throw AmmoniteExit(())

  /**
   * Exit the Ammonite REPL. You can also use Ctrl-D to exit
   */
  def exit(value: Any) = throw AmmoniteExit(value)

  /**
   * Functions that will be chained and called on the
   * exitValue before the repl exits
   */
  val beforeExitHooks: mutable.Buffer[Any => Any]

  /**
   * Configures the current compiler, or if the compiler hasn't been initialized
   * yet, registers the configuration callback and applies it to the compiler
   * when it ends up being initialized later
   */
  def configureCompiler(c: scala.tools.nsc.Global => Unit): Unit

  /**
   * Pre-configures the next compiler. Useful for tuning options that are
   * used during parsing such as -Yrangepos
   */
  def preConfigureCompiler(c: scala.tools.nsc.Settings => Unit): Unit
}

trait LoadJar {
  /**
   * Load a `.jar` file or directory into your JVM classpath
   */
  def cp(jar: os.Path): Unit

  /**
   * Load a `.jar` from a URL into your JVM classpath
   */
  def cp(jar: java.net.URL): Unit

  /**
   * Load one or more `.jar` files or directories into your JVM classpath
   */
  def cp(jars: Seq[os.Path]): Unit

  /**
   * Load a library from its maven/ivy coordinates
   */
  def ivy(coordinates: Dependency*): Unit
}

trait InterpLoad extends LoadJar {
  def module(path: os.Path): Unit

  def plugin: LoadJar
}

Utilities

Apart from the core Builtins of the REPL, the Ammonite REPL also includes many helpers that are not strictly necessary but are very useful in almost all REPL sessions. Here are a few of them:

The REPL also imports the pipe-operators from Ammonite-Ops by default to make it easy for you to use tools like grep interactively, and imports all the Builtins from the repl.

source

Ammonite provides the src built-in, which lets you easily peek at the source code of various functions or classes. You can use this to read their doc-comments or inspect their implementation, to help you figure out how to use them.

src accepts two kinds of inputs:

A method call foo.bar(...), in which case it will try to bring you to the concrete implementation of that method bar. You can also leave the method arguments empty using `_`.

An arbitrary expression foo, in which case it will try to bring you to the implementation of foo's runtime class.

src works on both Scala and Java APIs, both the standard library as well as third-party dependencies. src opens source files using the less pager by default; if you wish to change this, you can pass in a replacement command as a second argument e.g. src(..., "vim") or e.g. src(..., Seq("vim", "--flag"))
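For example (a sketch; the exact source location src jumps to depends on your Scala version):

```scala
@ src(List(1, 2, 3).max)       // jump to the concrete implementation of `max` in `less`

@ src(List(1, 2, 3), "vim")    // open the source of List's runtime class in vim instead
```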

When used within an SBT project, src requires the following SBT setting in order to make the source code of third-party dependencies available:

// Optional, required for the `source` command to work
(fullClasspath in Test) ++= {
  (updateClassifiers in Test).value
    .configurations
    .find(_.configuration.name == Test.name)
    .get
    .modules
    .flatMap(_.artifacts)
    .collect{case (a, f) if a.classifier == Some("sources") => f}
}

Ammonite also automatically downloads the source jars of any libraries you import via import $ivy , and makes them browsable via src .

src is experimental: it may not always be able to find the source code of a particular method or class, and the source location it brings you to may be a few lines away from the source you really want. Furthermore, src also does not find sources that are within your own local scripts or SBT project: you likely already have access to those via your text editor anyway.

Nevertheless, it should work in the bulk of cases, so try it out and come by the Gitter Channel if you face any issues!

time

bash$ time ls -a
.  ..  amm  amm-template.sh  appveyor.yml  build.sbt  build.sc  ci  deploy_key
.git  .github  .gitignore  integration  internals-docs  LICENSE  mill  ops
out  project  readme  readme.md  shell  sshd  terminal  .travis.yml

real 0m0.002s
user 0m0.000s
sys  0m0.000s

@ time{ls!}
res0: (LsSeq, concurrent.duration.FiniteDuration) = (
  ".git"  "amm-template.sh"  'integration  'readme
  ".github"  "appveyor.yml"  "internals-docs"  "readme.md"
  ".gitignore"  "build.sbt"  'mill  'shell
  ".travis.yml"  "build.sc"  'ops  'sshd
  'LICENSE  'ci  'out  'target
  'amm  'deploy_key  'project  'terminal,
  14344364 nanoseconds
)

Just as bash provides a time command that you can use to see how long a command took to run, Ammonite-Shell provides a time function which serves the same purpose.

While the bash version spits out the time in an ad-hoc table format, stuck together with the output of the command, Ammonite-Shell's time instead returns a tuple containing the expression that was evaluated, and the time taken to evaluate it.
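Since time hands back the value together with the duration, you can destructure the tuple and reuse both halves. A sketch, assuming Ammonite-Shell's predef (which provides ls! and time) is loaded:

```scala
@ val (listing, elapsed) = time{ls!}
@ println(s"ls took ${elapsed.toMillis} ms and found ${listing.length} entries")
```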

grep

bash$ ls -a . | grep re
.gitignore
readme
readme.md

/home/travis/build/lihaoyi/Ammonite/.gitignore,
/home/travis/build/lihaoyi/Ammonite/readme,
/home/travis/build/lihaoyi/Ammonite/readme.md

Ammonite provides its own grep command, which lets you easily perform ad-hoc searches within a list.

As shown above, Ammonite's grep can be used via || ( flatMap ) or |? ( filter ). In the case of || , it displays the matches found, highlighted, with some additional context before and after the match. When used with |? , it simply returns the relevant items. In general, || is useful for manual exploration, while |? is useful in scripts where you want to deal with the list of matched results later.

By default, Ammonite's grep matches a string as a literal. If you want to match via a regex, append a .r to the string literal to turn it into a regex:

bash$ ls -a . | grep -G "re[a-z]\+"
readme
readme.md

/home/travis/build/lihaoyi/Ammonite/readme,
/home/travis/build/lihaoyi/Ammonite/readme.md
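The same literal-vs-regex distinction can be sketched in plain Scala, independent of Ammonite's pipes:

```scala
val names = Seq(".gitignore", "readme", "readme.md", "build.sbt")

// Literal match: any name containing the substring "re"
names.filter(_.contains("re"))  // .gitignore, readme, readme.md

// Regex match: "re" followed by at least one more lowercase letter
val pattern = "re[a-z]+".r
names.filter(s => pattern.findFirstIn(s).isDefined)  // readme, readme.md
```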

Ammonite's grep isn't limited to "filesystem"-y things; any collection of objects can be piped through grep ! For example, here's grep being used to quickly search through the JVM system properties:

@ sys.props || grep! "os|OS".r
cmd0.sc:1: value || is not a member of scala.sys.SystemProperties
val res0 = sys.props || grep! "os|OS".r
^
Compilation Failed

You can even use Ammonite's grep to dig through the methods of an object, even large messy objects with far-too-many methods to go over by hand hunting for what you want:

@ // I forgot what I want but I think it has Raw in the name

@ typeOf(repl.compiler).members || grep! "Raw"
cmd1.sc:1: value || is not a member of reflect.runtime.universe.MemberScope
val res1 = typeOf(repl.compiler).members || grep! "Raw"
^
Compilation Failed

In general, Ammonite's grep serves the same purpose of grep in the Bash shell: a quick and dirty way to explore large amounts of semi-structured data. You probably don't want to build your enterprise business logic on top of grep 's string matching. While you're working, though, grep can be a quick way to find items of interest in collections of things (anything!) too large to sift through by hand, when you're not yet sure exactly what you want.

browse

browse is a utility that lets you open up far-too-large data structures in the less pager, letting you page through large quantities of text, navigating around it and searching through it, without needing to spam your terminal output with its contents and losing your earlier work to the output-spam. Simply call browse on whatever value you want, e.g. the 50-thousand-line ls.rec result shown above.

If you're dealing with large blobs of data that you want to dig through manually, you might normally format it nicely, write it to a file, and open it in vim or less or an editor such as Sublime Text. browse makes that process quick and convenient.

You can customize the browse call like you would a show call or pprint.pprintln call, e.g. setting an optional width , colors or indent . You can also choose a viewer program in case you don't want to use less : e.g. here's a command that would open it up in vim :

@ browse(res0, viewer="vim", colors = pprint.Colors.BlackWhite)

Apart from using viewer="vim" , we also set the colors to black and white because Vim by default doesn't display ANSI colors nicely. You can also pass in a Seq of strings to viewer if you want to pass additional flags to your editor, and of course use any other editor you would like such as "emacs" or "nano" or "subl"

desugar

desugar allows you to easily see what the compiler is doing with your code before it gets run. For example, in typical calls to desugar, you can see:

List(...) being converted to List.apply(...)

true -> false being converted to Predef.ArrayAssoc(true).$minus$greater(false)

default.write$default, default.SeqishW, etc. being injected as implicits

for-comprehensions with if filters being converted into the relevant withFilter and map calls
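For instance, a sketch of such a call (the output shown in the comment is abbreviated and varies between Scala versions):

```scala
@ desugar{for (i <- List(1, 2, 3) if i % 2 == 1) yield i * 2}
// prints roughly:
// List(1, 2, 3).withFilter(i => i % 2 == 1).map(i => i * 2)
```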

In general, if you are having trouble understanding the combination of implicit parameters, implicit conversions, macros, and other odd Scala features, desugar can help you see what is left after all the magic happens.

desugar only works in Scala 2.11.x and above, not in 2.10.x

Save/Load Session

Ammonite allows you to save your work half way through, letting you discard any later changes and return to the state of the world you saved.

Defined some memory-hogging variable you didn't need? Loaded the wrong version of some third-party library? Reluctant to reload the REPL because reloading is slow? Fear not! With Ammonite, you can save your important work, do whatever you want later, and simply discard all the jars you loaded and variables you defined, returning to your last saved state.

@ val veryImportant = 1
veryImportant: Int = 1

@ repl.sess.save()

@ val oopsDontWantThis = 2
oopsDontWantThis: Int = 2

@ // Let's try this cool new library

@ import $ivy.`com.lihaoyi::scalatags:0.7.0`

@ veryImportant
res4: Int = 1

@ oopsDontWantThis
res5: Int = 2

@ import scalatags.Text.all._

@ div("Hello").render
res7: String = "<div>Hello</div>"

@ // Oh no, maybe we don't want scalatags!

@ repl.sess.load()

@ veryImportant
res9: Int = 1

@ oopsDontWantThis
error: not found: value oopsDontWantThis

@ import scalatags.Text.all._
error: not found: value scalatags

Apart from plain saves and loads, which simply discard everything after the most recent save, you can also provide a name to these functions. That lets you stop working on a branch, go do something else for a while, and be able to come back later to continue where you left off:

@ val (x, y) = (1, 2)
x: Int = 1
y: Int = 2

@ repl.sess.save("xy initialized")

@ val z = x + y
z: Int = 3

@ repl.sess.save("first z")

@ repl.sess.load("xy initialized")

@ val z = x - y
z: Int = -1

@ repl.sess.save("second z")

@ z
res7: Int = -1

@ repl.sess.load("first z")

@ z
res9: Int = 3

@ repl.sess.load("second z")

@ z
res11: Int = -1

Lastly, you have the repl.sess.pop() function. Without any arguments, it behaves the same as repl.sess.load(), resetting you to your last save-point. However, you can pass in the number of session frames which you'd like to pop, allowing you to reset your session to even earlier save-points. repl.sess.pop(2) would put you two save-points ago, repl.sess.pop(3) would put you three save-points ago, letting you reach earlier save-points even if you did not give them names. Passing in a large number like repl.sess.pop(999) would reset your session all the way to the start.
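A sketch of how the frames stack up:

```scala
@ repl.sess.save("one")   // save-point 1
@ repl.sess.save("two")   // save-point 2
@ repl.sess.pop(2)        // reset two save-points back, i.e. to "one"
```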

Ammonite's save and load functionality is implemented via Java class-loaders.

Superior Autocomplete

The original Scala REPL provides no autocomplete except for the most basic scenarios of value.<complete> . In the Ammonite-REPL, you get the same autocomplete-anywhere support that you get in a modern IDE.

@ Seq(1, 2, 3).map(x => x.)
getClass  ##  asInstanceOf  isInstanceOf  toString  hashCode  equals
!=  ==  %  /  *  -  +  ^  &  |  >=  >  <=  <  >>  >>>  <<
unary_-  unary_+  unary_~  toDouble  toFloat  toLong  toInt  toChar
toShort  toByte  compareTo  doubleValue
...

@ Futu
scala.collection.parallel.FutureThreadPoolTasks
scala.collection.parallel.FutureTasks
scala.concurrent.impl.Future$PromiseCompletingRunnable
scala.concurrent.impl.Future
scala.concurrent.Future
scala.concurrent.FutureTaskRunner
scala.concurrent.Future$InternalCallbackExecutor
scala.concurrent.Future$class
java.util.concurrent.Future
java.util.concurrent.FutureTask$WaitNode
java.util.concurrent.FutureTask
com.sun.corba.se.impl.orbutil.closure.Future

Ammonite also allows you to autocomplete third party dependencies pulled in via import $ivy, to make it easy for you to navigate the public package repositories and find the name and version of a dependency that you want:

None of these examples work in the standard Scala REPL.

Interrupting run-away execution with Ctrl-C

@ while(true) ()
... hangs ...
^Ctrl-C
Interrupted!

@

The traditional Scala REPL doesn't handle runaway code, and gives you no option but to kill the process, losing all your work. Ammonite-REPL lets you interrupt the thread, stop the runaway command and keep going.

Compiler-crash Robustness

@ val x = 1
x: Int = 1

@ /* trigger compiler crash */ trait Bar { super[Object].hashCode }
error: java.lang.AssertionError: assertion failed

@ 1 + x
res1: Int = 2

The default Scala REPL throws away all your work if the compiler crashes. This doesn't make any sense, because the compiler is just a dumb String => Array[Byte] pipe. In Ammonite, we simply swap out the broken compiler for a new one and let you continue your work.

Other Fixes

Apart from the above features, the Ammonite REPL fixes a large number of bugs in the default Scala REPL, including but not limited to:

Configuration

Ammonite is configured via Scala code, which can live in the ~/.ammonite/predef.sc file, be passed in through SBT's initialCommands, or be passed to the command-line executable as --predef='...'.

Anything that you put in predef.sc will be executed when you load the Ammonite REPL. This is a handy place to put common imports, setup code, or even call import $ivy to load third-party jars. The compilation of the predef is cached, so after the first run it should not noticeably slow down the initialization of your REPL.

Some examples of things you can configure:

@ // Set the shell prompt to be something else
@ repl.prompt() = ">"

@ // Change the terminal front end; the default is
@ // Ammonite on Linux/OSX and JLineWindows on Windows
@ repl.frontEnd() = frontEnd("unix")
@ repl.frontEnd() = frontEnd("windows")
@ repl.frontEnd() = frontEnd("ammonite")

@ // Changing the colors used by Ammonite; all at once:
@ interp.colors() = ammonite.util.Colors.BlackWhite
@ interp.colors() = ammonite.util.Colors.Default

@ // or one at a time:
@ interp.colors().prompt() = fansi.Color.Red
@ interp.colors().ident() = fansi.Color.Green
@ interp.colors().`type`() = fansi.Color.Yellow
@ interp.colors().literal() = fansi.Color.Magenta
@ interp.colors().prefix() = fansi.Color.Cyan
@ interp.colors().comment() = fansi.Color.Red
@ interp.colors().keyword() = fansi.Bold.On
@ interp.colors().selected() = fansi.Underlined.On
@ interp.colors().error() = fansi.Color.Yellow

Refs

By default, all the values you're seeing here with the () after them are Refs, defined as:

trait StableRef[T]{
  /**
   * Get the current value of this [[StableRef]] at this instant in time
   */
  def apply(): T

  /**
   * Set the value of this [[StableRef]] to always be the value `t`
   */
  def update(t: T): Unit
}

trait Ref[T] extends StableRef[T]{
  /**
   * Return a function that can be used to get the value of this [[Ref]]
   * at any point in time
   */
  def live(): () => T

  /**
   * Set the value of this [[Ref]] to always be the value of the by-name
   * argument `t`, at any point in time
   */
  def bind(t: => T): Unit
}

As you can see from the signature, you can interact with Refs in two ways: either getting or setting their value directly, or binding it to an expression that is evaluated every time the Ref's value is needed.

As an example of the latter, you can use bind to set your prompt to always include your current working directory:

repl.prompt.bind(wd.toString + "@ ")

This is common practice in other shells. Further modifications, to make it include e.g. your current branch in Git (which you can fetch through Ammonite's subprocess API) or the current timestamp/user, are similarly possible.
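For instance, a sketch of a prompt that also shows the current time (assuming the Ammonite-Ops wd is in scope, as it is in Ammonite-Shell):

```scala
repl.prompt.bind {
  // The by-name argument is re-evaluated every time the prompt is drawn
  val time = java.time.LocalTime.now().withNano(0)
  s"[$time] $wd@ "
}
```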

Compiler Flags

Apart from configuration of the rest of the shell through Refs, configuration of the Scala compiler takes place separately, through the compiler's own configuration mechanism. You have access to the compiler's settings via interp.configureCompiler, and can modify them as you see fit. Here's an example of this in action:

@ // Disabling default Scala imports
@ List(1, 2, 3) + "lol"
res0: String = "List(1, 2, 3)lol"

@ interp.configureCompiler(_.settings.noimports.value = true)

@ List(1, 2, 3) + "lol" // predef imports disappear
error: not found: value List

@ interp.configureCompiler(_.settings.noimports.value = false)

@ List(1, 2, 3) + "lol"
res3: String = "List(1, 2, 3)lol"

@ // Disabling Scala language-import enforcement
@ object X extends Dynamic
error: extension of type scala.Dynamic needs to be enabled

@ interp.configureCompiler(_.settings.language.tryToSet(List("dynamics")))

@ object X extends Dynamic
defined object X

@ 1 + 1 // other things still work

@ // Enabling warnings (which are disabled by default)
@ List(1) match { case _: List[Double] => 2 }
res7: Int = 2

@ interp.configureCompiler(_.settings.nowarnings.value = false)

@ List(1) match { case _: List[Double] => 2 }
warning: $fruitlessTypeTestWarningMessageBlahBlahBlah

@ // Note you can refer to `repl.compiler` when interactive in the REPL
@ // But you should use `interp.configureCompiler` in your scripts/predef
@ // because `repl.compiler` may be `null` if the script is cached.
@ repl.compiler.settings.nowarnings.value
res10: Boolean = false

If you want these changes to always be present, place them in your ~/.ammonite/predef.sc .

JVM Flags

Ammonite also supports the JAVA_OPTS environment variable for passing arguments to the JVM that it runs inside, e.g. you can pass in a custom memory limit via

bash$ JAVA_OPTS="-Xmx1024m" amm

to start the REPL while letting it use only up to 1024 megabytes of memory.

Embedding

The Ammonite REPL is just a plain-old-Scala-object, just like any other Scala object, and can be easily used within an existing Scala program. This is useful for things like interactive Debugging or hosting a Remote REPL to interact with a long-lived Scala process, or Instantiating Ammonite inside an existing program to serve as a powerful interactive console.

Instantiating Ammonite

To use Ammonite inside an existing Scala program, you need to first add it to your dependencies:

libraryDependencies += "com.lihaoyi" % "ammonite" % "2.2.0" cross CrossVersion.full

Then instantiate it with this code anywhere within your program:

package ammonite.integration

object TestMain {
  def main(args: Array[String]): Unit = {
    val hello = "Hello"
    // Break into debug REPL with
    ammonite.Main(
      predefCode = "println(\"Starting Debugging!\")"
    ).run(
      "hello" -> hello,
      "fooValue" -> foo()
    )
  }
  def foo() = 1
}

You can configure the instantiated REPL by passing in arguments to the Main() call, e.g. to redirect the input/output streams or to run a predef to configure it further.

Debugging

Ammonite can be used as a tool to debug any other Scala program, by conveniently opening a REPL at any point within your program with which you can interact with live program data, similar to pdb/ipdb in Python. To do so, first add Ammonite to your classpath, e.g. through this SBT snippet:

libraryDependencies += "com.lihaoyi" % "ammonite" % "2.2.0" cross CrossVersion.full

Note that unlike the snippet given above, we leave out the % "test" because we may want Ammonite to be available within the "main" project, and not just in the unit tests. Then, anywhere within your program, you can place a breakpoint via:

package ammonite.integration

object TestMain {
  def main(args: Array[String]): Unit = {
    val hello = "Hello"
    // Break into debug REPL with
    ammonite.Main(
      predefCode = "println(\"Starting Debugging!\")"
    ).run(
      "hello" -> hello,
      "fooValue" -> foo()
    )
  }
  def foo() = 1
}

And when your program reaches that point, it will pause and open up an Ammonite REPL with the values you provided it bound to the names you gave it. From there, you can interact with those values as normal Scala values within the REPL. Use Ctrl-D or exit to exit the REPL and continue normal program execution. Note that the names given must be plain Scala identifiers.

Here's an example of it being used to debug changes to the WootJS webserver:


In this case, we added the debug() call within the websocket frame handler, so we can inspect the values that are taking part in the client-server data exchange. You can also put the debug() call inside a conditional, to make it break only when certain interesting situations (e.g. bugs) occur.

As you can see, you can bind the values you're interested in to names inside the debug REPL, and once in the REPL are free to explore them interactively.

The debug() call returns Any; by default this is (): Unit, but you can also return custom values by passing in an argument to exit(...) when you exit the REPL. That value will then be returned from debug(), and can be used in the rest of your Scala application.

Remote REPL

Ammonite can also be used to remotely connect to your running application and interact with it in real-time, similar to Erlang's erl -remsh command.

This is useful if e.g. you have multiple Scala/Java processes running but aren't sure when/if you'd want to inspect them for debugging, and if so which ones. With Ammonite, you can leave an ssh server running in each process. You can then connect to and disconnect from each one at your leisure, working with the in-process Scala/Java objects and methods and classes interactively, without having to change code and restart the process to add breakpoints or instrumentation.

To do this, add ammonite-sshd to your classpath, for example with SBT:

libraryDependencies += "com.lihaoyi" % "ammonite-sshd" % "2.2.0" cross CrossVersion.full

Now add the REPL server to your application:

import ammonite.sshd._

val replServer = new SshdRepl(
  SshServerConfig(
    address = "localhost", // or "0.0.0.0" for public-facing shells
    port = 22222, // Any available port
    passwordAuthenticator = Some(pwdAuth) // or publicKeyAuthenticator
  )
)
replServer.start()

And start your application. You will be able to connect to it using ssh like this: ssh repl@localhost -p22222 and interact with your running app. Invoke the stop() method whenever you want to shut down the Ammonite sshd server. Here, for example, the sshd REPL server is embedded in the Akka HTTP microservice example:

Here we can interact with code live, inspecting values or calling methods on the running system. We can try different things, see which work and which don't, and then put our final bits in application code. In this example the app is located on the local machine, but you are free to connect to any remote node running your code.

Security notes: It is probably unsafe to run this server publicly (on host "0.0.0.0" ) in a production, public-facing application. If you insist on doing so, you probably want key-based authentication, available by supplying publicKeyAuthenticator in the SshServerConfig .

Despite this, it is perfectly possible to run these on production infrastructure: simply leave the host set to "localhost" , and rely on the machine's own SSH access to keep out unwanted users: you would first ssh onto the machine itself, and then ssh into the Ammonite REPL running on localhost .

Typically most organizations already have bastions, firewalls, and other necessary infrastructure to allow trusted parties SSH access to the relevant machines. Running on localhost lets you leverage that and gain all the same security properties without having to re-implement them in Scala.

Scala Scripts

Lightweight Programming in Scala

Scala scripts are lightweight files containing Scala code that can be directly run from the command line. Unlike normal Scala projects, Scala scripts let you save and run code without setting up a "build-file" or "project". Scala Scripts are useful for writing small pieces of code, and are much quicker to write and deploy than a full-fledged SBT project.

Creating an Ammonite Script is just a matter of creating a MyScript.sc with some Scala code in it, and running it from your terminal. Deploying the script is a matter of copying the script file to where-ever you want to run it, and running it. No project/ folder, no worrying about .jar files or uber-jars. No worrying about compiling your code: scripts are automatically compiled the first time they are run, and subsequently start quickly with minimal overhead. Writing and running Scala code doesn't get much easier than that!

As an example, Ammonite's own Continuous Integration Scripts are written as .sc Scala Scripts, as are Haoyi's blog and resume. These are all examples of using Scala Scripts to do some simple (or not so simple!) tasks in just a few files, without the hassle of setting up a heavyweight SBT project.

To begin with, download the Ammonite 2.2.0 script-runner for Scala 2.13 (also available for Older Scala Versions):

$ sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.2.0/2.13-2.2.0) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm

Or to try out the latest features in our Unstable Release 2.2.0-4-4bd225e:

$ sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.2.0/2.13-2.2.0-4-4bd225e) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm

And read on to learn about how Scala scripts work.

Script Files

Ammonite defines a format that allows you to load external scripts into the REPL; this can be used to save common functionality so it can be used at a later date. In the simplest case, a script file is simply a sequence of Scala statements, e.g.

// MyScript.sc

// print banner
println("Hello World!!")

// common imports
import sys.process._
import collection.mutable

// common initialization code
val x = 123
println("x is " + 123)
...

You can write any Scala code you want in an Ammonite script, including top-level statements and definitions (e.g. the println and val x = 123 above) that are not valid in "normal" Scala projects. You do not need to wrap these sorts of top-level statements or expressions in boilerplate object Foo{...} wrappers: this is all done automatically for you by Ammonite.

After that, it's a matter of running the script From the REPL or From Bash, e.g.

bash$ amm MyScript.sc

Hello World!!
x is 123
...

Script Imports

No code stands alone; scripts depend on other scripts. Often they depend on third party libraries, as there's so much code out there already written it doesn't make sense to re-invent everything yourself.

Ammonite Scripts allow you to import Other Scripts, just like any Bash or Python scripts do. Furthermore, they let you cleanly depend on third party libraries: since Ammonite runs on the JVM, this means Ivy Dependencies. Ammonite will ensure that the relevant dependencies are always downloaded and used, and you never need to worry about remembering to "install" things before running your scripts!

Other Scripts

Like other scripting languages, Ammonite Scripts allow you to break your script into multiple files, and import them from each other in order to use what is in each file. Unlike "Normal" Scala projects, there is no need to set up a src/main/scala folder, and create a build file, and all these other things: simply split your script into two files, and import one from the other using import $file:

// Basic.sc
val basicValue = 31337

// FileImport.sc
import $file.Basic
val fileImportVal = Basic.basicValue + 1

Here, we are defining a simple val in Basic.sc , and then importing it from FileImport.sc . And of course, we can use what we defined in FileImport.sc and import it in another file

// IndirectFileImport.sc
import $file.FileImport
val indirectFileImportVal = FileImport.fileImportVal + 1

And so on, importing as many files, as deeply nested, as you want. You can use ^ segments at the start of your import $file to import things from outside the current script's enclosing folder, e.g. import $file.^.^.foo will import the script file ../../foo.sc and make it available for you to use.
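For example, given this hypothetical layout, Child.sc can reach one folder up to a script in the project root:

```scala
// Parent.sc (in the project root)
val shared = 42

// scripts/Child.sc
import $file.^.Parent
val fromParent = Parent.shared + 1
```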

$file imports inside Scala Scripts behave the same as $file imports within the Ammonite-REPL, and have the same characteristics:

Ivy Dependencies

You can easily make use of external Ivy artifacts right in your scripts, without needing to set up a separate build file. Simply use an import $ivy, just as you would in the Ammonite-REPL, and it will be available in the script for you to use, e.g. here we make use of the Scalatags library:

import $ivy.`com.lihaoyi::scalatags:0.7.0`, scalatags.Text.all._

val rendered = div("Moo").render

If you need more detailed control over what you are importing, e.g. with attributes, classifiers or exclusions, you can fall back to using the interp.load.ivy(deps: coursierapi.Dependency*) function.

@ import os._

@ interp.load.module($printedScriptPath/"loadIvyAdvanced.sc")

@ serializer

Note that to use this function, your script needs to be a multi-stage script as described below, and the interp.load.ivy call needs to be in an earlier block.
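A sketch of that fallback call. Note that coursierapi.Dependency takes the full module name, including the Scala binary-version suffix; the exact coordinates here are illustrative:

```scala
interp.load.ivy(
  coursierapi.Dependency.of("com.lihaoyi", "scalatags_2.13", "0.7.0")
)
```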

Multi-stage Scripts

By default, everything in a script is compiled and executed as a single block. While you can use Magic Imports to load other scripts or Ivy artifacts before your script runs, those can only load "hardcoded" scripts or artifacts, and cannot e.g. load different scripts depending on some runtime variables.

If you want to load different scripts or ivy artifacts depending on runtime values, you can use the runtime-equivalent of magic imports:

import $cp becomes interp.load.cp

import $file becomes interp.load.module

import $ivy becomes interp.load.ivy

These are plain-old-Scala-functions that let you pass in a Path to a script to load, or load different Ivy artifacts depending on runtime values. Additionally, there is an overloaded version of interp.load.cp which takes a Seq[Path] of classpath entries. This variant is much more efficient for adding multiple classpath entries at once.
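For example, a sketch of the Seq-based overload, assuming a lib/ folder of jars sitting next to the script:

```scala
// Gather every jar in ./lib and add them to the classpath in one call
val jars = os.list(os.pwd / "lib").filter(_.ext == "jar")
interp.load.cp(jars)
```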

Since these functions get run *after* the current compilation block is compiled, you need to split your script into two compilation blocks, and can only use the results of the loaded code in subsequent blocks:

// print banner
println("Welcome to the XYZ custom REPL!!")

val scalazVersion = "7.2.7"
interp.load.ivy("org.scalaz" %% "scalaz-core" % scalazVersion)

// This @ is necessary for Ammonite to process the `interp.load.ivy`
// before continuing
@

// common imports
import scalaz._
import Scalaz._

// use Scalaz!
...

In general, this should not be necessary very often: usually you should be able to load what you want using Magic Imports. Nevertheless, sometimes you may find yourself needing to get "under the hood" and use these load ing functions directly. When that happens, using Multi-stage Scripts is the way to go.

Script Arguments

Often when calling a script from the external command-line (e.g. Bash), you need to pass arguments to configure its behavior. With Ammonite, this is done by defining a @main method, e.g.

// Args.sc
val x = 1

@main
def main(i: Int, s: String, path: os.Path = os.pwd) = {
  s"Hello! ${s * i} ${path.last}."
}

When the script is run from the command line:

$ amm Args.sc 3 Moo
"Hello! MooMooMoo Ammonite."

The top-level definitions execute first (e.g. setting x), and then the @main method is called with the arguments you passed in. Note that the return-value of the script is pretty-printed by default, which quotes strings and may nicely format/indent lists or other data-structures. If you want to avoid this default pretty-printing behavior, annotate your @main method as returning Unit and add your own printlns:

// Args2.sc
val x = 1

@main
def main(i: Int, s: String, path: os.Path = os.pwd): Unit = {
  println(s"Hello! ${s * i} ${path.last}.")
}

$ amm Args2.sc 3 Moo
Hello! MooMooMoo Ammonite

You can also pass in named arguments using -- to demarcate them:

$ amm Args.sc --i 3 --s Moo
"Hello! MooMooMoo Ammonite."

Default arguments behave as you would expect (i.e. they allow you to omit the argument when calling) and arguments are parsed using the scopt.Read typeclass, which provides parsers for primitives like Int, Double or String, as well as basic data-structures like Seqs (taken as comma-separated lists) and common types like Paths.
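For example, a hypothetical Sum.sc exercising a Seq argument and a default:

```scala
// Sum.sc (illustrative; relies on the comma-separated Seq parsing above)
@main
def sum(nums: Seq[Int], scale: Int = 1) = {
  println(nums.sum * scale)
}
```

Running amm Sum.sc --nums 1,2,3 --scale 2 should then print 12, and --scale can be omitted thanks to its default value.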

If you pass in the wrong number of arguments, or if an argument fails to deserialize, the script will fail with an error message.

The main method does not get automatically called when you load.module or load.exec a script from within the Ammonite REPL. It gets imported into scope like any other method or value defined in the script, and you can just call it normally.

vararg* arguments work as you would expect as well, allowing one or more arguments to be passed from the command-line and aggregated into a Seq for your function to use. This also allows you to use a custom argument-parsing library (e.g. Eugene Yokota's excellent scopt) by defining your function as taking String*:

@main
def entrypoint(args: String*) = {
  ...
}

In which case Ammonite will take all arguments and forward them to your main method un-checked and un-validated, from which point you can deal with the raw Seq[String] however you wish. Note that vararg* arguments cannot be passed by-name, e.g. via --args foo

Ammonite Arguments in Scripts

Ammonite passes any arguments that come before the script file to Ammonite itself, while arguments that come after the script file are given to the script:

$ amm --predef-code 'println("welcome!")' Args.sc 3 Moo
welcome!
"Hello! MooMooMoo Ammonite."

Here, "Ammonite Arguments" go on the left of the Args.sc , while Script Arguments go on the right of the Args.sc . The script arguments on the right can also be empty if you don't want to pass any arguments to the script.

If you want to define a script with a Shebang line that runs Ammonite with particular arguments, you can use

#!/bin/bash
exec amm --predef 'println("welcome!")' "$0" "$@"
!#

This will pass in the --predef flag to Ammonite while running the script via ./Args.sc. If you want to then pass in a different set of arguments, you can run the script using amm, e.g. amm --predef 'println("Hello!")' Args.sc 3 Moo as before. (Note that while a single-line #!/usr/bin/env amm --predef '...' shebang may work on some systems such as OS-X, it is not portable and would not work on Linux)

Multiple Main Methods

If you have only a single @main method, any arguments that you pass to the script get used as arguments to that @main . But if you have multiple @main methods, the first argument to the script is used to select which @main method to call. e.g. given:

// MultiMain.sc
val x = 1

@main
def mainA() = {
  println("Hello! " + x)
}

@main
def functionB(i: Int, s: String, path: os.Path = os.pwd) = {
  println(s"Hello! ${s * i} ${path.relativeTo(os.pwd)}.")
}

You can call it via

amm MultiMain.sc mainA

Or

amm MultiMain.sc functionB 2 "Hello"

Script Usage Docs

You can document your scripts with the @doc annotation. By default, a script such as

// MultiMain.sc
val x = 1

@main
def mainA() = {
  println("Hello! " + x)
}

@main
def functionB(i: Int, s: String, path: os.Path = os.pwd) = {
  println(s"Hello! ${s * i} ${path.relativeTo(os.pwd)}.")
}

Will result in a usage message:

Need to specify a subcommand to call when running MultiMain.sc

Available subcommands:
  mainA
  functionB
    --i     Int
    --s     String
    --path  os.Path (default ${os.pwd})

You can add docs via

// MultiMainDoc.sc
val x = 1

@main
def mainA() = {
  println("Hello! " + x)
}

@doc("This explains what the function does")
@main
def functionB(i: Int @doc(
                "how many times to repeat the string to make " +
                "it very very long, more than it originally was"
              ),
              s: String @doc("the string to repeat"),
              path: os.Path = os.pwd) = {
  println(s"Hello! ${s * i} ${path.relativeTo(os.pwd)}.")
}

Which will be shown as part of the usage message

Need to specify a subcommand to call when running MultiMainDoc.sc

Available subcommands:
  mainA
  functionB
  This explains what the function does
    --i     Int: how many times to repeat the string to make it very
                 very long, more than it originally was
    --s     String: the string to repeat
    --path  os.Path (default ${os.pwd})

Bundled Libraries

While Ammonite allows you to load any Java or Scala library for use via the import $ivy syntax, it also comes bundled with some basic libraries, e.g. Requests-Scala for making HTTP calls, or the uPickle library with its JSON API for dealing with the common JSON format.

For example, here's a tiny script that offers two main methods that talk to the JSONPlaceholder test API using Requests-Scala and uJson: one that creates a post, and one that pulls out a list of comment names for a given post:

#!/usr/bin/env amm
// HttpApi.sc
import $ivy.{
  `com.lihaoyi::requests:0.2.0`,
  `com.lihaoyi::ujson:0.7.5`
}

lazy val jsonPlaceHolderBase = Option(System.getenv("JSONPLACEHOLDER")).getOrElse {
  "http://jsonplaceholder.typicode.com"
}

@main
def addPost(title: String, body: String) = {
  ujson.read(
    requests.get(
      s"$jsonPlaceHolderBase/posts",
      data = Seq(
        "title" -> title,
        "body" -> body,
        "userId" -> "1"
      )
    ).text()
  ).obj.get("id").map(_.num.toInt).getOrElse(0)
}

@main
def comments(postId: Int) = {
  val json = ujson.read(
    requests
      .get(s"$jsonPlaceHolderBase/comments?postId=$postId")
      .text()
  )
  val names = for{
    item <- json.arr
    name <- item.obj.get("name")
  } yield name.str
  names.mkString(",")
}

You can run amm on the script to see what it can do

> amm HttpApi.sc
Need to specify a main method to call when running HttpApi.sc
Available main methods:
  def addPost(title: String, body: String)
  def comments(postId: Int)

And you can run the two functions (after using chmod +x to make the file executable) via

> ./HttpApi.sc addPost "My Title" "My Body"

Or

> ./HttpApi.sc comments 1

Script Builtins

Apart from bundling some third-party libraries for convenience, Ammonite also provides some builtins you can use from scripts to inspect and manipulate the interpreter itself. Note that this is a much smaller set of functionality than the set of Builtins available to the REPL: it won't include repl.prompt , repl.history , and other values that only really make sense in the interactive REPL.

trait InterpAPI {
  /**
   * When running a script in `--watch` mode, re-run the main script if this
   * file changes. By default, this happens for all script files, but you can
   * call this to watch arbitrary files your script may depend on
   */
  def watch(p: os.Path): Unit

  /**
   * A generalization of [[watch]], allows watching arbitrary values and not
   * just the contents of file paths.
   */
  def watchValue[T](v: => T): T

  /**
   * The colors that will be used to render the Ammonite REPL in the terminal,
   * or for rendering miscellaneous info messages when running scripts.
   */
  val colors: Ref[Colors]

  /**
   * Tools related to loading external scripts and code into the REPL
   */
  def load: InterpLoad

  /**
   * resolvers to use when loading jars
   */
  def repositories: Ref[List[Repository]]

  /**
   * Functions that will be chained and called on the coursier
   * Fetch object right before they are run
   */
  val resolutionHooks: mutable.Buffer[Fetch => Fetch]

  /**
   * Exit the Ammonite REPL. You can also use Ctrl-D to exit
   */
  def exit = throw AmmoniteExit(())

  /**
   * Exit the Ammonite REPL. You can also use Ctrl-D to exit
   */
  def exit(value: Any) = throw AmmoniteExit(value)

  /**
   * Functions that will be chained and called on the
   * exitValue before the repl exits
   */
  val beforeExitHooks: mutable.Buffer[Any => Any]

  /**
   * Configures the current compiler, or if the compiler hasn't been initialized
   * yet, registers the configuration callback and applies it to the compiler
   * when it ends up being initialized later
   */
  def configureCompiler(c: scala.tools.nsc.Global => Unit): Unit

  /**
   * Pre-configures the next compiler. Useful for tuning options that are
   * used during parsing such as -Yrangepos
   */
  def preConfigureCompiler(c: scala.tools.nsc.Settings => Unit): Unit
}

trait LoadJar {
  /**
   * Load a `.jar` file or directory into your JVM classpath
   */
  def cp(jar: os.Path): Unit

  /**
   * Load a `.jar` from a URL into your JVM classpath
   */
  def cp(jar: java.net.URL): Unit

  /**
   * Load one or more `.jar` files or directories into your JVM classpath
   */
  def cp(jars: Seq[os.Path]): Unit

  /**
   * Load a library from its maven/ivy coordinates
   */
  def ivy(coordinates: Dependency*): Unit
}

trait InterpLoad extends LoadJar {
  def module(path: os.Path): Unit

  def plugin: LoadJar
}

Script Predef

If you want code to be loaded before you run any script, you can place it in ~/.ammonite/predefScript.sc . This is distinct from the REPL pre-defined code which lives in ~/.ammonite/predef.sc . If you want code that is pre-initialized in both the REPL and in scripts, you can place it in ~/.ammonite/predefShared.sc .
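For instance, a shared predef might define small helpers you want available everywhere; the helper below is purely illustrative, not something Ammonite ships with:

```scala
// ~/.ammonite/predefShared.sc — a hypothetical shared predef,
// loaded before both REPL sessions and scripts

// A tiny timing helper, usable anywhere without an import
def timed[T](label: String)(body: => T): T = {
  val start = System.nanoTime()
  val result = body
  println(s"$label took ${(System.nanoTime() - start) / 1000000} ms")
  result
}
```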

Running Scripts

There are two main ways to run Ammonite scripts: from the REPL and from Bash.

From Bash

Apart from loading scripts within the Ammonite-REPL, you can also run scripts using the Ammonite executable from an external shell (e.g. bash):

$ sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.2.0/2.13-2.2.0) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm

$ amm path/to/script.sc

All types, values and imports defined in scripts are available to commands entered in the REPL after loading the script.

You can also make an Ammonite script self-executable by using a shebang #! . Here is an example script named hello . There is no need to add the .sc extension. The amm command needs to be on the PATH :

#!/usr/bin/env amm
println("hello world")

Make it executable and run it from an external shell (e.g. bash):

$ chmod +x /path/to/script
$ /path/to/script

Ammonite also supports the JAVA_OPTS environment variable for passing arguments to the JVM that it runs inside, e.g. you can pass in a custom memory limit via

bash$ JAVA_OPTS="-Xmx1024m" amm path/to/script.sc

to let it use only up to 1024 megabytes of memory.

Watch and Reload

Ammonite provides the -w / --watch flag, which tells it to not exit when a script completes, but instead watch the files that were run, and re-run them when any of them change. You can use this flag via

$ amm -w foo.sc

Within your scripts, you can also flag other files you want Ammonite to watch, via the interp.watch(p: Path) function. This is useful if you are iterating on a script together with some external data files the script depends on, and you want the script to re-run whenever those data files change.
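A minimal sketch of registering an extra file with interp.watch (the script and data-file names are illustrative):

```scala
// summarize.sc — a hypothetical script run via `amm -w summarize.sc`
val data = os.pwd / "data.csv"

// Also re-run under --watch when data.csv changes,
// not just when summarize.sc itself changes
interp.watch(data)

println(s"${os.read.lines(data).size} rows in ${data.last}")
```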

Script Debug REPL

When a script is not working as intended, it is useful to be able to poke around in a REPL after the script has run, in order to see what values are stored in which variables or what methods are available via autocomplete. To do so, you can run the script using the --predef / -p flag.

$ amm --predef foo.sc

This will run the script as normal, but on completion open up a REPL which has all the values defined in that script imported and ready to use. You can then poke around within the REPL as you wish.

Using --predef / -p to run a script and then open an interactive REPL can be combined with the --watch / -w flag:

$ amm --watch --predef foo.sc

This will open up a REPL after the script runs, and when you exit the REPL it will watch the script file and the files that the script depends on, re-running the script and REPL if any of them change.

--predef / -p can be used to include a script file as a predef before running any script or REPL, which is useful for a range of things apart from serving as a debug REPL on any script.

From the REPL

You can load any script into the Ammonite REPL using the import $file syntax, for example here we import the above MyScript.sc file to access its x value:

@ x // doesn't work yet
Compilation Failed
cmd0.sc:1: not found: value x
val res0 = x // doesn't work yet
           ^

@ import $file.MyScript
Welcome to the XYZ custom REPL!!

@ MyScript.x // You can refer to definitions from that module
res1: Int = 123

@ import MyScript._

@ x // works
res2: Int = 123

You can also import the module, and any associated definitions you want, in the same import:

@ x // doesn't work yet
Compilation Failed
cmd0.sc:1: not found: value x
val res0 = x // doesn't work yet
           ^

@ import $file.MyScript, MyScript._
Welcome to the XYZ custom REPL!!

@ x
res1: Int = 123

Note that by default, scripts imported via $file are encapsulated, so any imports inside that MyScript performs are not available outside of MyScript.sc :

@ import $file.MyScript, MyScript._
Welcome to the XYZ custom REPL!!
import $file.$

@ mutable.Buffer(x)
cmd1.sc:1: not found: value mutable
val res1 = mutable.Buffer(x)
           ^
Compilation Failed

As you can see, even though collection.mutable was imported inside MyScript.sc , it is not available outside after importing the script.

If you want to make everything (both imports and definitions) available by default after importing, you can use an $exec import instead of $file :

@ import $exec.MyScript
Welcome to the XYZ custom REPL!!
import $exec.$

@ mutable.Buffer(x)
res1: mutable.Buffer[Int] = ArrayBuffer(123)

As you can see, now mutable is available, and so is x even though we did not directly import it.

While $file imports are useful for defining re-usable modules with common functions and definitions, $exec imports are useful as aliases for common setup to get your REPL environment just the way you want it. Of course, any files you import via import $file or import $exec can themselves import other Scala scripts in the same way, and the same rules apply.

Execution Model

Ammonite's Scala Scripts run as normal Scala code, though with some simple source-to-source transformations applied first to turn the script syntax (which allows top-level expressions, def s, val s, etc.) into valid Scala syntax (which doesn't). What happens is roughly:

A script file is read off disk

If the script file has been compiled/executed before, the previously compiled bytecode and metadata are loaded from the ~/.ammonite/cache . If not, the script is parsed but not yet compiled.

Multi-stage Scripts are split into multiple individual scripts, to be handled separately/sequentially.

Any Magic Imports are resolved: any Ivy dependencies are downloaded, and any imported scripts are themselves run. Imported scripts are handled in the same way, as are any scripts they import, etc.

If the script has already been previously compiled and cached, the cached bytecode that was read off of disk earlier is executed.

Otherwise, the source code for this script is wrapped in a package / object wrapper, corresponding to the path to the script from the current script's enclosing folder. For example, a script at path foo/bar/baz/Qux.sc will be wrapped in:

package foo.bar.baz
object Qux{
  // script code
}

The script is then compiled and executed.

In general, due to Scala's slow compiler, Scala Scripts rely heavily on caching to achieve reasonable performance. While the first run of a modified script has a few seconds of overhead due to the Scala compiler, subsequent runs of the same script should be fast-ish, with only a few hundred milliseconds of overhead for spinning up a JVM.

Although this is much slower than other scripting languages like Bash (which starts up in ~4ms) or Python (~30ms), in practice it is acceptable for many use cases. You probably do not want to run find . | xargs amm Foo.sc on large numbers of files, where the 100ms overhead will add up, but for individual scripts it should be fine.

Furthermore, Ammonite makes it really easy to include that sort of recursive/iterative logic inside a single script: you can use ls! or ls.rec! from Ammonite-Ops to traverse the filesystem and work on multiple files all within the same process, which avoids duplicating the startup overhead on all the files you are manipulating.

SBT Integration

If you have an existing SBT project and you'd like to access those classes from an Ammonite script, you can achieve this by running your script through SBT itself. This requires adding Ammonite to your SBT project and creating a "bridge" class to pass arguments from SBT into an Ammonite Main class.

Add the ammonite 2.2.0 dependency to build.sbt :

libraryDependencies += "com.lihaoyi" % "ammonite" % "2.2.0" % "test" cross CrossVersion.full

In your test directory, create a class like:

package ammonite.integration

object AmmoniteBridge {
  def main(args: Array[String]): Unit = {
    ammonite.Main.main(args)
  }
}

Run your script using sbt "test:run /path/to/script.sc arg1 arg2 arg3" .

If you have already started an SBT repl, then you can run the above without the quotes: test:run /path/to/script.sc arg1 arg2 arg3

Running a script from your SBT project can be achieved with ammonite.Main.main(Array("--predef-code", """println("welcome!")""", "file.sc"))

Ammonite-Ops

A Rock-solid Filesystem Library for Scala

Ammonite-Ops is a library to make common filesystem operations in Scala as concise and easy-to-use as from the Bash shell, while being robust enough to use in large applications without getting messy. It lives in the same repo as the Ammonite REPL, but can easily be used stand-alone in a normal SBT/maven project.

To get started with Ammonite-Ops, add this to your build.sbt :

libraryDependencies += "com.lihaoyi" %% "ammonite-ops" % "2.2.0"

And you're all set! Here's an example of some common operations you can do with Ammonite-Ops

import ammonite.ops._

// Pick the directory you want to work with,
// relative to the process working dir
val wd = pwd/'ops/'target/"scala-2.11"/"test-classes"/'example2

// Delete a file or folder, if it exists
rm! wd

// Make a folder named "folder"
mkdir! wd/'folder

// Copy a file or folder to a particular path
cp(wd/'folder, wd/'folder1)

// Copy a file or folder *into* another folder at a particular path
// There's also `cp.over` which copies it to a path and stomps over
// anything that was there before.
cp.into(wd/'folder, wd/'folder1)

// List the current directory
val listed = ls! wd

// Write to a file without pain! Necessary
// enclosing directories are created automatically
write(wd/'dir2/"file1.scala", "package example\nclass Foo{}\n", createFolders = true)
write(wd/'dir2/"file2.scala", "package example\nclass Bar{}\n", createFolders = true)

// Rename all .scala files inside the folder d into .java files
ls! wd/'dir2 | mv{case r"$x.scala" => s"$x.java"}

// List files in a folder
val renamed = ls! wd/'dir2

// Line-count of all .java files recursively in wd
val lineCount = ls.rec! wd |? (_.ext == "java") | read.lines | (_.size) sum

// Find and concatenate all .java files directly in the working directory
mkdir(wd / 'target)
ls! wd/'dir2 |? (_.ext == "java") | read |> (write(wd/'target/"bundled.java", _))

These examples make heavy use of Ammonite-Ops' Paths, Operations and Extensions to achieve their minimal, concise syntax.

As you can see, Ammonite-Ops replaces the common mess of boilerplate:

def removeAll(path: String) = {
  def getRecursively(f: java.io.File): Seq[java.io.File] = {
    f.listFiles.filter(_.isDirectory).flatMap(getRecursively) ++ f.listFiles
  }
  getRecursively(new java.io.File(path)).foreach{ f =>
    println(f)
    if (!f.delete()) {
      throw new RuntimeException("Failed to delete " + f.getAbsolutePath)
    }
  }
  new java.io.File(path).delete
}
removeAll("target/folder/thing")

With a single, sleek expression:

rm! pwd/'target/'folder/'thing

That handles the common case for you: recursively deleting folders, not-failing if the file doesn't exist, etc.

Note: Ammonite-Ops supports Windows experimentally, even if Ammonite-REPL does not. That means you can use these convenient filesystem operations and commands in your Scala programs that run on Windows. Try it out and let me know if there are problems.

Paths

Ammonite uses strongly-typed data-structures to represent filesystem paths. The two basic versions are:

Path : an absolute path, starting from the root

RelPath : a relative path, not rooted anywhere

Generally, almost all commands take absolute Path s. These have a number of useful operations that can be performed on them. Absolute paths can be created in a few ways:

// Get the process' Current Working Directory. As a convention
// the directory that "this" code cares about (which may differ
// from the pwd) is called `wd`
val wd = pwd

// A path nested inside `wd`
wd/'folder/'file

// A path starting from the root
root/'folder/'file

// A path with spaces or other special characters
wd/"My Folder"/"My File.txt"

// Up one level from the wd
wd/up

// Up two levels from the wd
wd/up/up

Note that there are no in-built operations to change the pwd . In general you should not need to: simply defining a new path, e.g.

val target = pwd/'target

Should be sufficient for most needs.

Above, we made use of the pwd built-in path. There are a number of Path s built into Ammonite:

pwd : The current working directory of the process. This can't be changed in Java, so if you need another path to work with the convention is to define a wd variable.

root : The root of the filesystem.

home : The home directory of the current user.

tmp() / tmp.dir() : Creates a temporary file/folder and returns the path.

RelPaths

RelPath s represent relative paths. These are basically the same data structure as Path s, except that they can represent a number of up s before the relative path is applied. They can be created in the following ways:

// The path "folder/file"
val rel1 = 'folder/'file
val rel2 = 'folder/'file

// The path "file"; will get converted to a RelPath by an implicit
val rel3 = 'file

// The relative difference between two paths
val target = pwd/'target/'file
assert((target relativeTo pwd) == 'target/'file)

// `up`s get resolved automatically
val minus = pwd relativeTo target
val ups = up/up
assert(minus == ups)

In general, very few APIs take relative paths. Their main purpose is to be combined with absolute paths in order to create new absolute paths. e.g.

val target = pwd/'target/'file
val rel = target relativeTo pwd
val newBase = root/'code/'server
assert(newBase/rel == root/'code/'server/'target/'file)

up is a relative path that comes in-built:

val target = root/'target/'file
assert(target/up == root/'target)

Note that all paths, both relative and absolute, are always expressed in a canonical manner:

assert((root/'folder/'file/up).toString == "/folder")
// not "/folder/file/.."

assert(('folder/'file/up).toString == "folder")
// not "folder/file/.."

So you don't need to worry about canonicalizing your paths before comparing them for equality or otherwise manipulating them.

Path Operations

Ammonite's paths are transparent data-structures, and you can always access the segments and ups directly. Nevertheless, Ammonite defines a number of useful operations that handle the common cases of dealing with these paths.

In this definition, ThisType represents the same type as the current path; e.g. a Path 's / returns a Path while a RelPath 's / returns a RelPath . Similarly, you can only compare or subtract paths of the same type.
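A small sketch of what this means in practice, using only operations described on this page:

```scala
import ammonite.ops._

// `/` preserves the path type: a Path stays a Path,
// a RelPath stays a RelPath
val abs: Path = root/'usr/'local
val rel: RelPath = 'foo/'bar

// You can only relate paths of the same type: relating two
// absolute paths yields the RelPath between them
assert((root/'usr/'local/'bin relativeTo root/'usr) == 'local/'bin)
```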

Apart from RelPath s themselves, a number of other data structures are convertible into RelPath s when spliced into a path using / :

String s

Symbol s

Array[T] s where T is convertible into a RelPath

Seq[T] s where T is convertible into a RelPath

Constructing Paths

Apart from built-ins like pwd or root or home , you can also construct Ammonite's Path s from String s, java.io.File s or java.nio.file.Path s:

val relStr = "hello/cow/world/.."
val absStr = "/hello/world"
assert(
  RelPath(relStr) == 'hello/'cow,
  // Path(...) also allows paths starting with ~,
  // which is expanded to become your home directory
  Path(absStr) == root/'hello/'world
)

// You can also pass in java.io.File and java.nio.file.Path
// objects instead of Strings when constructing paths
val relIoFile = new java.io.File(relStr)
val absNioFile = java.nio.file.Paths.get(absStr)
assert(
  RelPath(relIoFile) == 'hello/'cow,
  Path(absNioFile) == root/'hello/'world,
  Path(relIoFile, root/'base) == root/'base/'hello/'cow
)

Trying to construct invalid paths fails with exceptions:

val relStr = "hello/.."
intercept[java.lang.IllegalArgumentException]{
  Path(relStr)
}

val absStr = "/hello"
intercept[java.lang.IllegalArgumentException]{
  RelPath(absStr)
}

val tooManyUpsStr = "/hello/../.."
intercept[PathError.AbsolutePathOutsideRoot.type]{
  Path(tooManyUpsStr)
}

As you can see, attempting to parse a relative path with Path or an absolute path with RelPath throws an exception. If you're uncertain about what kind of path you are getting, you can use FilePath to parse it:

val relStr = "hello/cow/world/.."
val absStr = "/hello/world"
assert(
  FilePath(relStr) == 'hello/'cow,
  FilePath(absStr) == root/'hello/'world
)

This converts it into a BasePath , which is either a Path or RelPath . It's then up to you to pattern-match on the types and decide what you want to do in each case.
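A hedged sketch of that pattern-match, wrapping it in an illustrative helper (the resolve name is not part of the library):

```scala
import ammonite.ops._

// Decide what to do based on the kind of path the user gave us
def resolve(input: String): Path = FilePath(input) match {
  case p: Path    => p      // already absolute, use as-is
  case p: RelPath => pwd/p  // relative: resolve against the pwd
}
```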

You can also pass in a second argument to Path(..., base) . If the path being parsed is a relative path, this base will be used to coerce it into an absolute path:

val relStr = "hello/cow/world/.."
val absStr = "/hello/world"
val basePath: FilePath = FilePath(relStr)
assert(
  Path(relStr, root/'base) == root/'base/'hello/'cow,
  Path(absStr, root/'base) == root/'hello/'world,
  Path(basePath, root/'base) == root/'base/'hello/'cow,
  Path(".", pwd).last != ""
)

For example, if you want the common behavior of converting relative paths to absolute based on your current working directory, you can pass in pwd as the second argument to Path(...) . Apart from passing in String s, java.io.File s or java.nio.file.Path s, you can also pass in BasePath s you parsed earlier as a convenient way of converting them to an absolute path, if they aren't already one.

In general, Ammonite is very picky about the distinction between relative and absolute paths, and doesn't allow "automatic" conversion between them based on current-working-directory the same way many other filesystem APIs (Bash, Java, Python, ...) do. Even in cases where it's uncertain, e.g. you're taking user input as a String , you have to either handle both possibilities with BasePath or explicitly choose to convert relative paths to absolute using some base.

While this adds some boilerplate, it should overall result in more robust filesystem code that doesn't contain bugs like this one.

Operations

Paths aren't interesting on their own, but serve as a base for performing filesystem operations in a concise and easy-to-use way. Here is a quick tour of the core capabilities that Ammonite-Ops provides:

import ammonite.ops._

// Let's pick our working directory
val wd: Path = pwd/'ops/'target/"scala-2.11"/"test-classes"/'example3

// And make sure it's empty
rm! wd
mkdir! wd

// Reading and writing to files is done through `read!` and `write`.
// You can write `Strings`, `Traversable[String]`s or `Array[Byte]`s
write(wd/"file1.txt", "I am cow")
write(wd/"file2.txt", Seq("I am cow\n", "hear me moo"))
write(wd/'file3, "I weigh twice as much as you".getBytes)

// When reading, you can either `read!` a `String`, `read.lines!` to
// get a `Vector[String]` or `read.bytes` to get an `Array[Byte]`
read! wd/"file1.txt"        ==> "I am cow"
read! wd/"file2.txt"        ==> "I am cow\nhear me moo"
read.lines! wd/"file2.txt"  ==> Vector("I am cow", "hear me moo")
read.bytes! wd/"file3"      ==> "I weigh twice as much as you".getBytes

// These operations are mirrored in `read.resource`,
// `read.resource.lines` and `read.resource.bytes` to conveniently read
// files from your classpath:
val resourcePath = resource/'test/'ammonite/'ops/'folder/"file.txt"
read(resourcePath).length        ==> 18
read.bytes(resourcePath).length  ==> 18
read.lines(resourcePath).length  ==> 1

// You can read resources relative to any particular class, including
// the "current" class by passing in `getClass`
val relResourcePath = resource(getClass)/'folder/"file.txt"
read(relResourcePath).length        ==> 18
read.bytes(relResourcePath).length  ==> 18
read.lines(relResourcePath).length  ==> 1

// By default, `write` fails if there is already a file in place. Use
// `write.append` or `write.over` if you want to append-to/overwrite
// any existing files
write.append(wd/"file1.txt", "\nI eat grass")
write.over(wd/"file2.txt", "I am cow\nHere I stand")

read! wd/"file1.txt"  ==> "I am cow\nI eat grass"
read! wd/"file2.txt"  ==> "I am cow\nHere I stand"

// You can create folders through `mkdir!`. This behaves the same as
// `mkdir -p` in Bash, and creates any necessary parents
val deep = wd/'this/'is/'very/'deep
mkdir! deep

// Writing to a file also creates necessary parents
write(deep/'deeeep/"file.txt", "I am cow", createFolders = true)

// `ls` provides a listing of every direct child of the given folder.
// Both files and folders are included
ls! wd  ==> Seq(wd/"file1.txt", wd/"file2.txt", wd/'file3, wd/'this)

// `ls.rec` does the same thing recursively
ls.rec! deep  ==> Seq(deep/'deeeep, deep/'deeeep/"file.txt")

// You can move files or folders with `mv` and remove them with `rm!`
ls! deep  ==> Seq(deep/'deeeep)
mv(deep/'deeeep, deep/'renamed_deeeep)
ls! deep  ==> Seq(deep/'renamed_deeeep)

// `mv.into` lets you move a file into a
// particular folder, rather than to a particular path
mv.into(deep/'renamed_deeeep/"file.txt", deep)
ls! deep/'renamed_deeeep  ==> Seq()
ls! deep  ==> Seq(deep/"file.txt", deep/'renamed_deeeep)

// `mv.over` lets you move a file to a particular path, but
// if something was there before it stomps over it
mv.over(deep/"file.txt", deep/'renamed_deeeep)
ls! deep  ==> Seq(deep/'renamed_deeeep)
read! deep/'renamed_deeeep  ==> "I am cow" // contents from file.txt

// `rm!` behaves the same as `rm -rf` in Bash, and deletes anything:
// file, folder, even a folder filled with contents
rm! deep/'renamed_deeeep
rm! deep/"file.txt"
ls! deep  ==> Seq()

// You can stat paths to find out information about any file or
// folder that exists there
val info = stat! wd/"file1.txt"
info.isDir   ==> false
info.isFile  ==> true
info.size    ==> 20

// Ammonite provides an implicit conversion from `Path` to
// `stat`, so you can use these attributes directly
(wd/"file1.txt").size ==> 20
()

In these definitions, Op1 and Op2 are isomorphic to Function1 and Function2 . The main difference is that ops can be called in two ways:

rm(filepath)
rm! filepath

The latter syntax allows you to use it more easily from the command line, where remembering to close all your parentheses is a hassle. Indentation signifies nesting, e.g. in addition to write! you also have write.append! and write.over!

Operator Reference

All of these operations are pre-defined and strongly typed, so feel free to jump to their implementations to look at what they do or what else is available. Here's a shortlist of the ones that may interest you:

ls! path [doc] returning Vector[Path] , and ls.iter! path returning an Iterator[Path]

ls.rec! path [doc] and ls.rec.iter!

read! path [doc] returning a String , and read.lines! path and read.bytes! path returning Seq[String] and Array[Byte] . You can also use the various read! commands for Reading Resources or reading java.io.InputStream s

write(path, contents) [doc], which lets you write String s, Array[Byte] s, and Seq s of those

rm! path [doc], roughly Bash's rm -rf

mv(src, dest) [doc]

cp(src, dest) [doc], roughly Bash's cp -r

exists! path [doc]

stat! path [doc]

stat.posix! path [doc]

ln(src, dest) [doc]

kill(9)! processId [doc]

In general, each operator has sensible/safe defaults:

rm and cp are recursive

rm ignores the file if it doesn't exist

all operations that create a file or folder ( mkdir , write , mv ) automatically create any necessary parent directories

write also does not stomp over existing files by default. You need to use write.over

In general, this should make these operations much easier to use; the defaults should cover the 99% use case without needing any special flags or fiddling.
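A short sketch of those defaults in action, using only operations described on this page:

```scala
import ammonite.ops._

val wd = tmp.dir()

// Parent directories are created automatically
write(wd/'a/'b/"greeting.txt", "hello", createFolders = true)

// `write` will not stomp an existing file; use `write.over` to replace it
write.over(wd/'a/'b/"greeting.txt", "goodbye")

// `rm!` is recursive, and succeeds even when run a second time
// on a path that no longer exists
rm! wd/'a
rm! wd/'a
```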

Extensions

Ammonite-Ops contains a set of extension methods on common types, which serve no purpose other than to make things more concise. These turn Scala from a "relatively-concise" language into one as tight as Bash scripts, while still maintaining the high level of type-safety and maintainability that comes with Scala code.

Traversable

These extensions apply to any Traversable : Seq s, List s, Array s, and others.

things | f is an alias for things map f

things || f is an alias for things flatMap f

things |? f is an alias for things filter f

things |& f is an alias for things reduce f

things |! f is an alias for things foreach f

These should behave exactly the same as the methods they alias; their sole purpose is to make things more concise at the command line.
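A quick sketch of the aliases on ordinary collections (no expected-output claims beyond the standard map/filter/reduce semantics they alias):

```scala
import ammonite.ops._

// |? is filter, | is map, |& is reduce, || is flatMap
val doubledEvens = Seq(1, 2, 3, 4) |? (_ % 2 == 0) | (_ * 2)  // Seq(4, 8)
val summed       = Seq(1, 2, 3) |& (_ + _)                    // 6
val letters      = Seq("ab", "cd") || (_.toSeq)               // all chars
```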

Pipeable

thing |> f is an alias for f(thing)

This lets you flip around the function and argument, and fits nicely into Ammonite's | pipelines.

Callable

f! thing is an alias for f(thing)

This is another syntax-saving extension, that makes it easy to call functions without having to constantly be opening and closing brackets. It does nothing else.
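Taken together with |> , a hedged sketch (the double function is illustrative; both forms simply apply the function, as the aliases above state):

```scala
import ammonite.ops._

val double = (n: Int) => n * 2

// All three of these apply `double` to 3
double(3)
double! 3
3 |> double
```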

Chaining

The real value of Ammonite is the fact that you can pipe things together as easily as you could in Bash. No longer do you need to write reams of boilerplate to accomplish simple tasks. Some of these chains are listed at the top of this readme, here are a few more fun examples:

// Move all files inside the "py" folder out of it
ls! wd/"py" | mv.all*{case d/"py"/x => d/x }

// Find all dot-files in the current folder
val dots = ls! wd |? (_.last(0) == '.')

// Find the names of the 10 largest files in the current working directory
ls.rec! wd | (x => x.size -> x) sortBy (-_._1) take 10

// Sorted list of the most common words in your .scala source files
def txt = ls.rec! wd |? (_.ext == "scala") | read
def freq(s: Seq[String]) = s groupBy (x => x) mapValues (_.length) toSeq
val map = txt || (_.split("[^a-zA-Z0-9_]")) |> freq sortBy (-_._2)

As you can see, you can often compose elaborate operations entirely naturally using the available pipes, without needing to remember any special flags or techniques.

Here's another example:

```scala
// Ensure that we don't have any Scala files in the current working directory
// which have lines more than 100 characters long, excluding generated sources
// in `src_managed` folders.
def longLines(p: Path) =
  (p, read.lines(p).zipWithIndex |? (_._1.length > 100) | (_._2))

val filesWithTooLongLines = (
  %%("git", "ls-files")(ammonite.ops.pwd).out.lines
  | (Path(_, ammonite.ops.pwd))
  |? (_.ext == "scala")
  | longLines
  |? (_._2.length > 0)
  |? (!_._1.segments.contains("src_managed"))
)

assert(filesWithTooLongLines.length == 0)
```

Reading Resources

In addition to manipulating paths on the filesystem, you can also manipulate resource paths in order to read resources from the Java classpath. By default, the path used to load resources is absolute, using Thread.currentThread().getContextClassLoader. You can also pass a classloader explicitly to the resource call:

```scala
val contents = read(resource/'test/'ammonite/'ops/'folder/"file.txt")
assert(contents.contains("file contents lols"))

val cl = getClass.getClassLoader
val contents2 = read(resource(cl)/'test/'ammonite/'ops/'folder/"file.txt")
assert(contents2.contains("file contents lols"))
```

If you want to load resources relative to a particular class, pass in a class for the resource to be resolved relative to, or use getClass to load resources relative to the current class.

```scala
val cls = classOf[_root_.test.ammonite.ops.Testing]
val contents = read! resource(cls)/'folder/"file.txt"
assert(contents.contains("file contents lols"))

val contents2 = read! resource(getClass)/'folder/"file.txt"
assert(contents2.contains("file contents lols"))
```

In both cases, reading resources is performed as if you did not pass a leading slash into the getResource("foo/bar") call. In the case of ClassLoader#getResource , passing in a leading slash is never valid, and in the case of Class#getResource , passing in a leading slash is equivalent to calling getResource on the ClassLoader .

Ammonite-Ops ensures you only use the two valid cases in the API, without a leading slash, and not the two leading-slash cases, which are either redundant (Class#getResource with a leading slash is equivalent to ClassLoader#getResource) or invalid (ClassLoader#getResource never accepts a leading slash).
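The underlying JDK behavior can be demonstrated with plain getResource calls. This sketch uses a .class resource, which is visible on any JVM (class files are exempt from module encapsulation on JDK 9+ and live in rt.jar on JDK 8):

```scala
// Demonstrating the JDK's leading-slash rules with a resource that
// exists on every JVM: java/lang/String.class.
val cl = ClassLoader.getSystemClassLoader

// ClassLoader#getResource: only the form without a leading slash is valid.
assert(cl.getResource("java/lang/String.class") != null)
assert(cl.getResource("/java/lang/String.class") == null)

// Class#getResource: relative to the class's package, unless a leading
// slash makes it absolute (equivalent to the ClassLoader call above).
assert(classOf[String].getResource("String.class") != null)
assert(classOf[String].getResource("/java/lang/String.class") != null)
```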

Note that you can only read! from resource paths; you can't write to them or perform any other filesystem operations on them, since they're not really files.

Note also that resources belong to classloaders, and you may have multiple classloaders in your application e.g. if you are running in a servlet or REPL. Make sure you use the correct classloader (or a class belonging to the correct classloader) to load the resources you want, or else it might not find them.

Spawning Subprocesses

Ammonite-Ops provides easy syntax for anyone who wants to spawn sub-processes, e.g. commands like ls or git commit -am "wip" . This is provided through the % and %% operators, which are used as follows:

```scala
@ import ammonite.ops._

@ import ammonite.ops.ImplicitWd._

@ %ls
build.sbt  log      ops      readme     repl    terminal
echo       modules  project  readme.md  target  shell
res2: Int = 0

@ %%('ls)
res3: CommandResult =
build.sbt
echo
log
modules
ops
project
readme
readme.md
repl
target
terminal
...
```
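Under the hood, spawning a subprocess and capturing its output boils down to the JDK's ProcessBuilder. This hedged sketch shows roughly what a %%-style call has to do (run a command in a working directory, collect stdout and the exit code); it is not Ammonite's actual implementation:

```scala
import java.nio.file.Paths
import scala.io.Source

// Sketch of a %%-style helper (NOT Ammonite's real implementation):
// run a command in a working directory, returning (exit code, stdout lines).
def run(cwd: String, cmd: String*): (Int, Seq[String]) = {
  val proc = new ProcessBuilder(cmd: _*)
    .directory(Paths.get(cwd).toFile)   // set the working directory
    .redirectErrorStream(true)          // merge stderr into stdout
    .start()
  val lines = Source.fromInputStream(proc.getInputStream).getLines().toList
  val exit = proc.waitFor()
  (exit, lines)
}

val (exit, lines) = run(".", "echo", "hello")
assert(exit == 0)
assert(lines == List("hello"))
```

Ammonite layers conveniences on top of this: implicit working directories, symbol arguments, and a CommandResult that throws on non-zero exit codes.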

In short, % lets you run a command as you